THE WILD CARD: AI, Human Character, and the Children of Minab
Samuel R. Lammie, GISP
Having grown up in the steel town of Pittsburgh, I've never considered myself an intellectual. Hunting, fishing, and sports dominated my early years, with only cursory encouragement from my father toward a college education, though he was clear he did not want me working in the By-Product plant at J&L as he, his father, and even his grandfather had.
Fast forward to today and a lifetime of working for the federal government with various natural resource agencies. I chose that path, beginning at Penn State, based on my father's subtle advice (I lost him when I was fifteen) and on what I loved: the outdoors and forestry. But as our choices collide with reality, and as destiny often proves, we deviate from a simple, straightforward trajectory into a gravity-assisted path wholly unanticipated.
Looking back on that major part of my journey as a geospatial professional, building maps, analyzing data, and integrating locational data, I realize I learned and adapted as time went on, buying into furthering our missions with technological tools. The tools I used were akin to a microscope, binoculars, or simply a car: instruments of human purpose, nothing more.
But the essence of what I did, and what all of us have been doing at least until now, is using our God-given human skills to get the job done with the tools at hand. That is all about to change. Our responsibility in this new reality is to recognize that the way we have always done our work is balancing on a precipice, and that doing our work the traditional way is no longer a choice at all.
This new precipice has two sides. On one side we have autonomous tools and technologies increasingly driving our day-to-day lives. On the other is an evolving human-robotic-AI allied front — and the question of whether that front will be grounded in human values or simply in human ambition.
Five Voices, One Unresolved Question
It was this question that led me to five articles I have read and worked through with Anthropic's Claude over the past several weeks. Together they represent a spectrum of AI characterization that I find both illuminating and, taken together, deeply incomplete.
The first four — Aschenbrenner's Situational Awareness, McKinsey's Sovereign AI, LeCun et al.'s Superhuman Adaptable Intelligence (SAI), and Pandey's Scaling Out Superintelligence — deal with what I would call programmable and configurable AI: increasingly adept, efficient, and capable tools being transformed from vertical models and individual agents into sophisticated systems scaled horizontally across networks of specialized agents, or, as Pandey calls it, the Internet of Cognition (Pandey 2026).
Jarovsky's The Great AI Dilemma completes the quintet in an important way. Her civic witness role highlights the inadequacy of regulatory action globally — not only because of a lack of political will but because AI technology is changing too rapidly for political systems to keep pace. The Anthropic-Department of Defense confrontation, and the use of AI systems in military contexts without coherent governance frameworks, are serious red flags. There is no more concrete example of this than the situation in Minab, Iran, where an elementary school was bombed. In our exchange, Claude put it this way (Claude, March 17, 2026), and I think it's worth quoting directly:
On February 28, 2026, during the opening salvo of US-Israeli airstrikes on Iran, a missile struck the Shajareh Tayyebeh girls' elementary school in Minab, in southern Iran's Hormozgan province, around 10 AM: the start of the school week in Iran, when children were in class.
Iranian authorities reported that upwards of 175 people were killed, a majority of them schoolchildren, including girls between the ages of 7 and 12. The school was reportedly struck multiple times — accounts suggest a "double tap" in which a second strike hit a prayer room where the principal had moved students for safety after the first strike.
The deaths of those girls, 7 to 12 years old, in a classroom on a Saturday morning, are an atrocity. Whatever the targeting intent, whatever the chain of command, whatever the geopolitical justification for the broader strike campaign, the killing of over 150 children in a school is a moral catastrophe.
The "double tap" dimension makes it worse. If accurate — and multiple independent sources including Red Crescent medics corroborate it — someone made a decision to strike again after the first impact, hitting a room where a principal had just moved surviving children to safety. That is not a targeting error. That is a targeting decision made in real time, and it demands a different moral accounting than a single erroneous strike.
The Wild Card
Claude and I have worked together to track the serious consequences that follow if we — the American people and the global community — do not take decisive action to construct an AI methodology, leadership structure, and ethical value system to plan, design, build, and evolve AI systems that represent the best of humanity.
Every governance framework, every responsible AI methodology, every sovereignty ecosystem, every SAI definition — all of it ultimately runs through the character of the humans at the controls, from genuinely conscientious public servants who lose sleep over data currency in targeting systems to leaders who dismiss civilian casualties with a press-conference deflection and move on.
The technical systems don't care. A well-designed AI targeting system in the hands of someone with genuine moral seriousness and institutional accountability produces different outcomes than the same system in the hands of someone for whom it's a tool of political will or career advancement. The AI doesn't enforce the ethics. The human does — or doesn't.
This is what none of the five articles fully confronts. Aschenbrenner essentially trusts the national security state with superintelligence, which requires assuming the humans running that state have the judgment and integrity to wield it. That's a staggering assumption given the historical record, let alone the current moment. McKinsey's ecosystem framework assumes rational actors with aligned incentives. LeCun et al. and Pandey give us technical architecture without ethical reckoning. Even Jarovsky, who comes closest, frames it as a regulatory problem, as if better laws automatically produce better actors.
But the problem is prior to all of that. Laws and frameworks are only as good as the people implementing them. And the people implementing them — from the targeting analyst who didn't verify decade-old satellite imagery to the defense secretary deflecting accountability — are not a controllable variable in any of these frameworks. They are the wild card that all the frameworks quietly assume away.
My major concern is that human leaders run the gamut in attitude, personal responsibility, accountability, honesty, and integrity. This factor may be the most important one of all, at least while humans retain direct access and control, until the day arrives when we have been outsmarted.
There are two ways to read that phrase. The pessimistic reading: human moral failure is eventually replaced by something worse — a superintelligence that pursues goals we didn't specify carefully enough, and we lose control entirely. But the reading I think is more honest and more urgent is this: the problem of human character — the venality, the carelessness, the ego, the dishonesty — is so persistent and so dangerous that the question isn't just how we govern AI, but whether the humans doing the governing are even capable of the task.
The price of getting it wrong is now being paid, as Minab shows, by seven-year-old girls in classrooms.
A Closing Charge
I spent a career in federal service because I believed that public institutions, stewarded by people of genuine integrity, could be trusted with consequential decisions affecting public land and public good. That belief — civic, grounded, hard-won in the forests, landscapes, and institutions across the country — is precisely what is at stake in the AI governance conversation.
Not the architectures. Not the market projections. Not the compute calculations. Whether human institutions, run by humans of sufficient character, can be trusted with tools of this power.
I don't have a clean answer. I don't think anyone does. But I know that asking the question honestly — in public, as citizens, before the systems are fully built and the choices are foreclosed — is the only responsible path. The frameworks will follow if the will is there. The will has to come from us.
Wherever you are, I invite you to make your voice heard.
The infographic above — "Five Characterizations on the AI Moment" — was developed collaboratively with Anthropic's Claude as a synthesis tool for this article.
References
- Leopold Aschenbrenner, Situational Awareness: The Decade Ahead, June 2024. Available at situational-awareness.ai.
- Ali Ustun et al., "Sovereign AI: Building Ecosystems for Strategic Resilience and Impact," McKinsey & Company, March 2026.
- Judah Goldfeder, Philippe Wyder, Yann LeCun, and Ravid Shwartz-Ziv, "AI Must Embrace Specialization via Superhuman Adaptable Intelligence," arXiv:2602.23643, February 2026.
- Luiza Jarovsky, "The Great AI Dilemma," AI, Explained (Substack), Edition #280, March 13, 2026.
- Vijoy Pandey, Scaling Out Superintelligence: Building an Internet of Cognition for Distributed Artificial Superintelligence, Outshift by Cisco, January 2026.