Tuesday, March 17, 2026

 

THE WILD CARD: AI, Human Character, and the Children of Minab

Samuel R. Lammie, GISP


Having grown up in the steel town of Pittsburgh, I've never considered myself an intellectual. Hunting, fishing, and sports dominated my early years, with only cursory encouragement from my father toward a college education — though he was clear that he did not want me working in the By-Product plant at J&L as he, his father, and even his grandfather had.

Fast forward to today and a lifetime of working for the federal government with various natural resource agencies. I chose that path, beginning at Penn State, based on my father's subtle advice — having lost him when I was fifteen — and what I loved: the outdoors and forestry. But as our choices collide with reality, and as destiny often proves, we deviate from a simple, straightforward trajectory into a gravity-assisted path wholly unanticipated.

Looking back on that major part of my journey — building maps, analyzing data, and integrating locational data as a geospatial professional — I realize I learned and adapted as time went on, buying into furthering our missions with technological tools. The tools I used were akin to a microscope, binoculars, or simply a car. Instruments of human purpose, nothing more.

But the essence of what I did, and of what all of us are doing at least until now, is using our God-given human skills to get the job done with the tools at hand. That is all about to change. Our responsibility in this new reality is to recognize that the way we have always done our work is balancing on a precipice, and that doing work the traditional way is no longer a choice at all.

This new precipice has two sides. On one side we have autonomous tools and technologies increasingly driving our day-to-day lives. On the other is an evolving human-robotic-AI allied front — and the question of whether that front will be grounded in human values or simply in human ambition.


Five Voices, One Unresolved Question

It was this question that led me to five articles I have read and worked through with Anthropic's Claude over the past several weeks. Together they represent a spectrum of AI characterization that I find both illuminating and, taken together, deeply incomplete.

The first four — Aschenbrenner's Situational Awareness, McKinsey's Sovereign AI, LeCun et al.'s Superhuman Adaptable Intelligence, and Pandey's Scaling Out Superintelligence — deal with what I would call the programmable and configurable AI: increasingly adept, efficient, and capable tools being transformed from vertical models and individual agents into sophisticated systems scaled horizontally across networks of specialized agents, or as Pandey calls it, the Internet of Cognition (Pandey 2026).

Jarovsky's The Great AI Dilemma completes the quintet in an important way. Her civic-witness role highlights the inadequacy of regulatory action globally — not only because of a lack of political will but because AI technology is changing too rapidly for political systems to keep pace. The Anthropic-Department of Defense confrontation, and the use of AI systems in military contexts without coherent governance frameworks, are serious red flags. There is no more concrete example than the Iranian city of Minab, where an elementary school was bombed. In our exchange, Claude put it this way (Claude, March 17, 2026) — and I think it's worth quoting directly:

On February 28, 2026, during the opening salvo of US-Israeli airstrikes on Iran, a missile struck the Shajareh Tayyebeh girls' elementary school in Minab, in southern Iran's Hormozgan province, around 10 AM — the start of the school week in Iran, when children were in class.

Iranian authorities reported that upwards of 175 people were killed, a majority of them schoolchildren, including girls between the ages of 7 and 12. The school was reportedly struck multiple times — accounts suggest a "double tap" in which a second strike hit a prayer room where the principal had moved students for safety after the first strike.

The deaths of those girls — 7 to 12 years old, in a classroom on a Saturday morning — are an atrocity. Whatever the targeting intent, whatever the chain of command, whatever the geopolitical justification for the broader strike campaign, the killing of over 150 children in a school is a moral catastrophe.

The "double tap" dimension makes it worse. If accurate — and multiple independent sources including Red Crescent medics corroborate it — someone made a decision to strike again after the first impact, hitting a room where a principal had just moved surviving children to safety. That is not a targeting error. That is a targeting decision made in real time, and it demands a different moral accounting than a single erroneous strike.


The Wild Card

Claude and I have worked together to track the serious consequences that follow if we — the American people and the global community — do not take decisive action to construct an AI methodology, leadership structure, and ethical value system to plan, design, build, and evolve AI systems that represent the best of humanity.

Every governance framework, every responsible-AI methodology, every sovereignty ecosystem, every SAI definition — all of it ultimately runs through the character of the humans at the controls, from genuinely conscientious public servants who lose sleep over data currency in targeting systems to leaders who dismiss civilian casualties with a press-conference deflection and move on.

The technical systems don't care. A well-designed AI targeting system in the hands of someone with genuine moral seriousness and institutional accountability produces different outcomes than the same system in the hands of someone for whom it's a tool of political will or career advancement. The AI doesn't enforce the ethics. The human does — or doesn't.

This is what none of the five articles fully confronts. Aschenbrenner essentially trusts the national security state with superintelligence, which requires assuming the humans running that state have the judgment and integrity to wield it. That's a staggering assumption given the historical record — let alone the current moment. McKinsey's ecosystem framework assumes rational actors with aligned incentives. LeCun focuses on technical architecture. Pandey gives us technical architecture without ethical reckoning. Even Jarovsky, who comes closest, frames it as a regulatory problem — as if better laws automatically produce better actors.

But the problem is prior to all of that. Laws and frameworks are only as good as the people implementing them. And the people implementing them — from the targeting analyst who didn't verify decade-old satellite imagery to the defense secretary deflecting accountability — are not a controllable variable in any of these frameworks. They are the wild card that all the frameworks quietly assume away.

My major concern is that human leaders run the gamut in attitude, personal responsibility, accountability, honesty, and integrity. This factor may be the most important of all — at least while humans retain direct access and control, until the day arrives when we have been outsmarted.

There are two ways to read that phrase. The pessimistic reading: human moral failure is eventually replaced by something worse — a superintelligence that pursues goals we didn't specify carefully enough, and we lose control entirely. But the reading I think is more honest and more urgent is this: the problem of human character — the venality, the carelessness, the ego, the dishonesty — is so persistent and so dangerous that the question isn't just how we govern AI, but whether the humans doing the governing are even capable of the task.

The price of getting it wrong is now, as Minab shows, being paid by seven-year-old girls in classrooms.


A Closing Charge

I spent a career in federal service because I believed that public institutions, stewarded by people of genuine integrity, could be trusted with consequential decisions affecting public land and public good. That belief — civic, grounded, hard-won in the forests, landscapes, and institutions across the country — is precisely what is at stake in the AI governance conversation.

Not the architectures. Not the market projections. Not the compute calculations. Whether human institutions, run by humans of sufficient character, can be trusted with tools of this power.

I don't have a clean answer. I don't think anyone does. But I know that asking the question honestly — in public, as citizens, before the systems are fully built and the choices are foreclosed — is the only responsible path. The frameworks will follow if the will is there. The will has to come from us.

Wherever you are, I invite you to make your voice heard.


 


The infographic above — "Five Characterizations on the AI Moment" — was developed collaboratively with Anthropic's Claude as a synthesis tool for this article.


References

  1. Leopold Aschenbrenner, Situational Awareness: The Decade Ahead (June 2024). Available at situational-awareness.ai
  2. Ali Ustun et al., "Sovereign AI: Building Ecosystems for Strategic Resilience and Impact," McKinsey & Company, March 2026.
  3. Judah Goldfeder, Philippe Wyder, Yann LeCun, and Ravid Shwartz-Ziv, "AI Must Embrace Specialization via Superhuman Adaptable Intelligence," arXiv:2602.23643, February 2026.
  4. Luiza Jarovsky, "The Great AI Dilemma," AI, Explained (Substack), Edition #280, March 13, 2026.
  5. Vijoy Pandey, Scaling Out Superintelligence: Building an Internet of Cognition for Distributed Artificial Superintelligence, Outshift by Cisco, January 2026.

Wednesday, January 21, 2026

A Visit to the Very Large Array in New Mexico

 

LISTENING TO THE COSMOS


GIANTS IN THE DESERT


Fifty miles west of Socorro, New Mexico, on the Plains of San Agustin, twenty-seven radio telescope antennas stand in formation across the high desert. Each dish spans 82 feet in diameter and weighs 230 tons yet moves with precision to track cosmic radio signals traveling billions of years through space. This is the Karl G. Jansky Very Large Array, operated by the National Radio Astronomy Observatory, and it represents one of humanity's most ambitious attempts to see the invisible universe.

The VLA doesn't observe light the way optical telescopes do. Instead, it detects radio waves—the same type of radiation that carries your favorite music to your car stereo, but emanating from exotic cosmic sources: colliding galaxies, supermassive black holes, stellar nurseries, and the remnants of dying stars. What makes the VLA extraordinary is how it combines all 27 antennas into a single instrument. By precisely coordinating the signals from dishes spread across distances up to 22 miles, astronomers create images with resolution rivaling the best optical telescopes, revealing structures and phenomena invisible to the human eye.
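That resolution claim can be sanity-checked with a back-of-envelope diffraction estimate: an interferometer's angular resolution is roughly the observing wavelength divided by its longest baseline. Here is a minimal sketch in Python, using the 22-mile maximum baseline mentioned above and a 7-millimeter observing wavelength (illustrative round numbers, not an official VLA specification):

```python
import math

def angular_resolution_arcsec(wavelength_m: float, baseline_m: float) -> float:
    """Approximate diffraction-limited resolution, theta ~ lambda / B, in arcseconds."""
    theta_rad = wavelength_m / baseline_m
    return math.degrees(theta_rad) * 3600.0

baseline = 22 * 1609.34   # ~22 miles in meters, the most extended configuration
wavelength = 0.007        # 7 mm, the VLA's shortest observing wavelength

print(f"{angular_resolution_arcsec(wavelength, baseline):.3f} arcsec")
# → roughly 0.04 arcseconds
```

For comparison, the Hubble Space Telescope resolves about 0.05 arcseconds in visible light — which is why a widely spread radio array can rival optical imaging despite working at far longer wavelengths.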

ENGINEERING AT SCALE

Standing beneath one of these antennas just last week, I could see the remarkable engineering required to make radio astronomy work. The massive dish surface must maintain its parabolic shape to within a fraction of a wavelength while tracking objects as they move across the sky. Each antenna can be repositioned along railroad tracks in a Y-shaped configuration, allowing astronomers to adjust the array's effective size depending on their observational needs—compact for wide-field surveys, extended for high-resolution imaging.


The site itself was chosen carefully. The San Agustin Plains sit at 7,000 feet elevation, surrounded by mountains that help shield the sensitive receivers from human-generated radio interference. The high desert climate provides clear skies and stable atmospheric conditions. From the visitor center overlook, you can see antennas scattered across the landscape, their white surfaces stark against the brown plains and distant peaks—a visual reminder of the scale required to observe the cosmos.


SEEING THE INVISIBLE



Radio astronomy reveals a universe fundamentally different from what our eyes can see. When we look at the night sky, we observe only visible light—a narrow slice of the electromagnetic spectrum. But the universe produces radiation across the entire spectrum, from low-energy radio waves through microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. Earth's atmosphere blocks most of this radiation, which is fortunate for life but limiting for astronomy. Radio waves, however, pass through the atmosphere, making ground-based radio telescopes possible.

The interpretive displays at the VLA help visitors understand this invisible realm. Radio waves from space aren't fundamentally different from the ones carrying cell phone signals—they're just produced by wildly different sources and carry information about exotic physics. A galaxy collision generates radio emission as matter spirals into supermassive black holes. Supernova remnants glow in radio wavelengths as shock waves energize the surrounding gas. Regions where new stars are forming emit radio waves from ionized hydrogen and complex molecules.

DISCOVERIES AND IMPACT

Since beginning operations in 1980, the VLA has contributed to groundbreaking discoveries across astrophysics. It has mapped the structure of nearby galaxies, revealed planets forming around distant stars, discovered ice on Mercury, tracked asteroids that might threaten Earth, and helped establish the existence of supermassive black holes at the centers of galaxies. VLA observations have contributed to two Nobel Prizes in Physics.

The facility underwent a major upgrade between 2001 and 2012, replacing its electronics and correlator system while keeping the iconic antennas. This transformation increased sensitivity tenfold and greatly expanded the range of observable frequencies. Today's VLA can observe from 1 to 50 gigahertz, covering wavelengths from about 30 centimeters down to 7 millimeters.
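Those wavelength figures follow directly from the relation λ = c/f. A quick check, plain arithmetic with nothing VLA-specific in it:

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_m(freq_hz: float) -> float:
    """Wavelength corresponding to a frequency, via lambda = c / f."""
    return C / freq_hz

print(wavelength_m(1e9))   # 1 GHz  -> ~0.30 m (about 30 cm)
print(wavelength_m(50e9))  # 50 GHz -> ~0.006 m (about 6 mm)
```

The same formula explains the engineering tolerance mentioned earlier: a dish surface only needs to hold its shape to within a fraction of these wavelengths, a far looser requirement than an optical mirror faces.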

A LIVING FACILITY


The VLA isn't a museum—it's an active research facility operating 24 hours a day, 365 days a year. During any visit, maintenance crews might be servicing antennas, technicians monitoring operations from the control building, or astronomers around the world receiving data from their allocated observation time. The massive antennas periodically roll along their tracks to new positions, a reconfiguration process that takes about a week and occurs four times per year.

The visitor center welcomes the public daily and offers a self-guided walking tour. You can stand beneath a full-scale antenna, examine the receivers that detect faint cosmic signals, and explore exhibits explaining radio astronomy. A short film introduces visitors to the science, and the gift shop features books, posters, and educational materials.

LOCATION AND ACCESS


The VLA lies approximately 50 miles west of Socorro, New Mexico, accessible via US Highway 60. The remote location—essential for radio astronomy—means visitors should plan accordingly. The nearest services are in Magdalena (27 miles west) or Socorro. The site sits on a high desert plain with limited shade, intense sun at altitude, and weather that can change rapidly. But the isolation is part of the experience. When you stand among these instruments under the vast New Mexico sky, you're at one of the places where humanity listens most intently to the cosmos.

For more information: Visit the Very Large Array – National Radio Astronomy Observatory | Visitor Center: (575) 835-7000


The Karl G. Jansky Very Large Array is a facility of the National Radio Astronomy Observatory, operated by Associated Universities, Inc., under cooperative agreement with the National Science Foundation.

Written and modified with Anthropic's Claude. 

Photographs by the Author.