Tuesday, March 17, 2026
THE WILD CARD: AI, Human Character, and the Children of Minab
Samuel R. Lammie, GISP
Having grown up in the steel town of Pittsburgh, I've never considered myself an intellectual. Hunting, fishing, and sports dominated my early years, with only cursory encouragement from my father toward a college education, though he made one thing clear: he did not want me to work in the By-Product plant at J&L as he, his father, and even his grandfather had.
Fast forward through a lifetime of working for the federal government in various natural resource agencies. I chose that path, beginning at Penn State, based on my father's subtle advice (I lost him when I was fifteen) and on what I loved: the outdoors and forestry. But as our choices collide with reality, and as destiny often proves, we deviate from a simple, straightforward trajectory onto a gravity-assisted path wholly unanticipated.
Looking back on that major part of my journey — map-building, data analyzing, and integrating locational data as a geospatial professional — I realize I learned and adapted as time went on, buying in to furthering our missions with technological tools. The tools I used were akin to a microscope, binoculars, or simply a car. Instruments of human purpose, nothing more.
But the essence of what I did, and what all of us have been doing until now, is using our God-given human skills to get the job done with the tools at hand. That is all about to change. Our responsibility in this new reality is to recognize that the way we have always done our work is balanced on a precipice, and that doing work the traditional way is no longer a choice at all.
This new precipice has two sides. On one side we have autonomous tools and technologies increasingly driving our day-to-day lives. On the other is an evolving human-robotic-AI allied front — and the question of whether that front will be grounded in human values or simply in human ambition.
Five Voices, One Unresolved Question
It was this question that led me to five articles I have read and worked through with Anthropic's Claude over the past several weeks. Together they represent a spectrum of AI characterization that I find both illuminating and, taken together, deeply incomplete.
The first four — Aschenbrenner's Situational Awareness, McKinsey's Sovereign AI, LeCun et al.'s Superhuman Adaptable Intelligence, and Pandey's Scaling Out Superintelligence — deal with what I would call programmable and configurable AI: increasingly adept, efficient, and capable tools being transformed from vertical models and individual agents into sophisticated systems scaled horizontally across networks of specialized agents, or, as Pandey calls it, the Internet of Cognition (Pandey 2026).
Jarovsky's The Great AI Dilemma completes the quintet in an important way. Her civic-witness role highlights the inadequacy of regulatory action globally — not only because of a lack of political will but because AI technology is changing too rapidly for political systems to keep pace. The Anthropic–Department of Defense confrontation and the use of AI systems in military contexts without coherent governance frameworks are serious red flags. There is no more concrete example of this than the Iranian Minab situation, where an elementary school was bombed. In our exchange, Claude put it this way (Claude, March 17, 2026) — and I think it's worth quoting directly:
On February 28, 2026, during the opening salvo of US-Israeli airstrikes on Iran, a missile struck the Shajareh Tayyebeh girls' elementary school in Minab, in southern Iran's Hormozgan province, during the first wave of strikes around 10 AM — the start of the school week in Iran, when children were in class.
Iranian authorities reported that upwards of 175 people were killed, a majority of them schoolchildren, including girls between the ages of 7 and 12. The school was reportedly struck multiple times — accounts suggest a "double tap" in which a second strike hit a prayer room where the principal had moved students for safety after the first strike.
The deaths of those girls — 7 to 12 years old, in a classroom on a Saturday morning — are an atrocity. Whatever the targeting intent, whatever the chain of command, whatever the geopolitical justification for the broader strike campaign, the killing of over 150 children in a school is a moral catastrophe.
The "double tap" dimension makes it worse. If accurate — and multiple independent sources including Red Crescent medics corroborate it — someone made a decision to strike again after the first impact, hitting a room where a principal had just moved surviving children to safety. That is not a targeting error. That is a targeting decision made in real time, and it demands a different moral accounting than a single erroneous strike.
The Wild Card
Claude and I have worked together to track the serious consequences that follow if we — the American people and the global community — do not take decisive action to construct an AI methodology, leadership structure, and ethical value system to plan, design, build, and evolve AI systems that represent the best of humanity.
Every governance framework, every responsible AI methodology, every sovereignty ecosystem, every SAI definition — all of it ultimately runs through the character of the humans at the controls, from genuinely conscientious public servants who lose sleep over data currency in targeting systems to leaders who dismiss civilian casualties with a press-conference deflection and move on.
The technical systems don't care. A well-designed AI targeting system in the hands of someone with genuine moral seriousness and institutional accountability produces different outcomes than the same system in the hands of someone for whom it's a tool of political will or career advancement. The AI doesn't enforce the ethics. The human does — or doesn't.
This is what none of the five articles fully confronts. Aschenbrenner essentially trusts the national security state with superintelligence, which requires assuming the humans running that state have the judgment and integrity to wield it. That's a staggering assumption given the historical record — let alone the current moment. McKinsey's ecosystem framework assumes rational actors with aligned incentives. LeCun focuses on technical architecture. Pandey gives us technical architecture without ethical reckoning. Even Jarovsky, who comes closest, frames it as a regulatory problem — as if better laws automatically produce better actors.
But the problem is prior to all of that. Laws and frameworks are only as good as the people implementing them. And the people implementing them — from the targeting analyst who didn't verify decade-old satellite imagery to the defense secretary deflecting accountability — are not a controllable variable in any of these frameworks. They are the wild card that all the frameworks quietly assume away.
My major concern is that human leaders run the gamut in attitude, personal responsibility, accountability, honesty, and integrity. This factor may be the most important one of all — at least while humans retain direct access and control, until the day arrives when we will have been outsmarted.
There are two ways to read that phrase. The pessimistic reading: human moral failure is eventually replaced by something worse — a superintelligence that pursues goals we didn't specify carefully enough, and we lose control entirely. But the reading I think is more honest and more urgent is this: the problem of human character — the venality, the carelessness, the ego, the dishonesty — is so persistent and so dangerous that the question isn't just how we govern AI, but whether the humans doing the governing are even capable of the task.
The stakes of getting it wrong are now, as Minab shows, being paid by seven-year-old girls in classrooms.
A Closing Charge
I spent a career in federal service because I believed that public institutions, stewarded by people of genuine integrity, could be trusted with consequential decisions affecting public land and public good. That belief — civic, grounded, hard-won in the forests, landscapes, and institutions across the country — is precisely what is at stake in the AI governance conversation.
Not the architectures. Not the market projections. Not the compute calculations. Whether human institutions, run by humans of sufficient character, can be trusted with tools of this power.
I don't have a clean answer. I don't think anyone does. But I know that asking the question honestly — in public, as citizens, before the systems are fully built and the choices are foreclosed — is the only responsible path. The frameworks will follow if the will is there. The will has to come from us.
Wherever you are, I invite you to make your voice heard.
The infographic above — "Five Characterizations on the AI Moment" — was developed collaboratively with Anthropic's Claude as a synthesis tool for this article.
References
- Leopold Aschenbrenner, Situational Awareness: The Decade Ahead (June 2024). Available at situational-awareness.ai
- Ali Ustun et al., "Sovereign AI: Building Ecosystems for Strategic Resilience and Impact," McKinsey & Company, March 2026.
- Judah Goldfeder, Philippe Wyder, Yann LeCun, and Ravid Shwartz-Ziv, "AI Must Embrace Specialization via Superhuman Adaptable Intelligence," arXiv:2602.23643, February 2026.
- Luiza Jarovsky, "The Great AI Dilemma," AI, Explained (Substack), Edition #280, March 13, 2026.
- Vijoy Pandey, Scaling Out Superintelligence: Building an Internet of Cognition for Distributed Artificial Superintelligence, Outshift by Cisco, January 2026.
Wednesday, January 21, 2026
A Visit to the Very Large Array in New Mexico
LISTENING TO THE COSMOS
GIANTS IN THE DESERT
Fifty miles west of Socorro, New Mexico, on the Plains of San Agustin, twenty-seven radio telescope antennas stand in formation across the high desert. Each dish spans 82 feet in diameter and weighs 230 tons, yet moves with precision to track cosmic radio signals traveling billions of years through space. This is the Karl G. Jansky Very Large Array, operated by the National Radio Astronomy Observatory, and it represents one of humanity's most ambitious attempts to see the invisible universe.
ENGINEERING AT SCALE
Just last week I stood beneath one of these antennas, and up close the remarkable engineering required to make radio astronomy work reveals itself. The massive dish surface must maintain its parabolic shape to within a fraction of a wavelength while tracking objects as they move across the sky. Each antenna can be repositioned along railroad tracks in a Y-shaped configuration, allowing astronomers to adjust the array's effective size depending on their observational needs—compact for wide-field surveys, extended for high-resolution imaging.
The site itself was chosen carefully. The San Agustin Plains sit at 7,000 feet elevation, surrounded by mountains that help shield the sensitive receivers from human-generated radio interference. The high desert climate provides clear skies and stable atmospheric conditions. From the visitor center overlook, you can see antennas scattered across the landscape, their white surfaces stark against the brown plains and distant peaks—a visual reminder of the scale required to observe the cosmos.
SEEING THE INVISIBLE
Radio astronomy reveals a universe fundamentally different from what our eyes can see. When we look at the night sky, we observe only visible light—a narrow slice of the electromagnetic spectrum. But the universe produces radiation across the entire spectrum, from low-energy radio waves through microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. Earth's atmosphere blocks most of this radiation, which is fortunate for life but limiting for astronomy. Radio waves, however, pass through the atmosphere, making ground-based radio telescopes possible.
The interpretive displays at the VLA help visitors understand this invisible realm. Radio waves from space aren't fundamentally different from the ones carrying cell phone signals—they're just produced by wildly different sources and carry information about exotic physics. A galaxy collision generates radio emission as matter spirals into supermassive black holes. Supernova remnants glow in radio wavelengths as shock waves energize the surrounding gas. Regions where new stars are forming emit radio waves from ionized hydrogen and complex molecules.
DISCOVERIES AND IMPACT
Since beginning operations in 1980, the VLA has contributed to groundbreaking discoveries across astrophysics. It has mapped the structure of nearby galaxies, revealed planets forming around distant stars, discovered ice on Mercury, tracked asteroids that might threaten Earth, and helped establish the existence of supermassive black holes at the centers of galaxies. VLA observations have contributed to two Nobel Prizes in Physics.
The facility underwent a major upgrade between 2001 and 2012, replacing its electronics and correlator system while keeping the iconic antennas. This transformation increased sensitivity tenfold and greatly expanded the range of observable frequencies. Today's VLA can observe from 1 to 50 gigahertz, covering wavelengths from about 30 centimeters down to 6 millimeters.
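A rough back-of-the-envelope sketch of these numbers: wavelength follows λ = c/f, and an interferometer's angular resolution is roughly θ ≈ λ/B, where B is the longest baseline in the array. The compact (~1 km) and extended (~36 km) baseline figures below are approximate values from NRAO's public materials, not something stated in this post, so treat the results as illustrative.

```python
# Back-of-the-envelope radio astronomy numbers for the VLA.
# Wavelength: lambda = c / f.
# Interferometer resolution (rule of thumb): theta ~ lambda / B,
# where B is the longest baseline between antennas.
# Baseline values are approximate, for illustration only.

C = 299_792_458.0  # speed of light, m/s

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in meters for a given frequency in hertz."""
    return C / freq_hz

def resolution_arcsec(freq_hz: float, baseline_m: float) -> float:
    """Approximate angular resolution theta ~ lambda/B, in arcseconds."""
    theta_rad = wavelength_m(freq_hz) / baseline_m
    return theta_rad * (180.0 / 3.141592653589793) * 3600.0

# The VLA's stated observing range: 1 to 50 GHz.
print(f"1 GHz  -> {wavelength_m(1e9) * 100:.0f} cm")    # ~30 cm
print(f"50 GHz -> {wavelength_m(50e9) * 1000:.0f} mm")  # ~6 mm

# Compact (~1 km) vs extended (~36 km) configurations at 10 GHz:
print(f"compact:  {resolution_arcsec(10e9, 1_000):.1f} arcsec")
print(f"extended: {resolution_arcsec(10e9, 36_000):.2f} arcsec")
```

The resolution contrast is the whole point of the railroad tracks: the same antennas, moved farther apart, sharpen the array's view by more than an order of magnitude.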
A LIVING FACILITY
The VLA isn't a museum—it's an active research facility operating 24 hours a day, 365 days a year. During any visit, maintenance crews might be servicing antennas, technicians monitoring operations from the control building, or astronomers around the world receiving data from their allocated observation time. The massive antennas periodically roll along their tracks to new positions, a reconfiguration process that takes about a week and occurs four times per year.
The visitor center welcomes the public daily and offers a self-guided walking tour. You can stand beneath a full-scale antenna, examine the receivers that detect faint cosmic signals, and explore exhibits explaining radio astronomy. A short film introduces visitors to the science, and the gift shop features books, posters, and educational materials.
LOCATION AND ACCESS
The VLA lies approximately 50 miles west of Socorro, New Mexico, accessible via US Highway 60. The remote location—essential for radio astronomy—means visitors should plan accordingly. The nearest services are in Magdalena (27 miles west) or Socorro. The site sits on a high desert plain with limited shade, intense sun at altitude, and weather that can change rapidly. But the isolation is part of the experience. When you stand among these instruments under the vast New Mexico sky, you're at one of the places where humanity listens most intently to the cosmos.
For more information: Visit the Very Large Array – National Radio Astronomy Observatory | Visitor Center: (575) 835-7000
The Karl G. Jansky Very Large Array is a facility of the National Radio Astronomy Observatory, operated by Associated Universities, Inc., under cooperative agreement with the National Science Foundation.
Written and modified with Anthropic's Claude.
Photographs by the Author.
Monday, August 4, 2025
Agentic AI and the State of the Known
In examining the ramifications of agentic AI and, beyond it, artificial general intelligence (AGI), we have focused on two main concerns: first, the potential state of AGI development in the Department of Defense as well as in research institutions; and second, the potential uses of AGI across the geopolitical sphere. These concerns reflect a state of technology in which public availability of rapidly evolving tools does not equate to correlated public use, or even awareness, of the latest tools that may be operational within military and research institutions. Fundamentally, GenAI is revolutionizing our culture, and the next evolutionary step, agentic AI on the path toward AGI, will radically transform it.
Over the last two years GenAI has taken the technological world and public marketplace by storm. At the same time, this rapidly evolving field accentuates a time lag in how closely and how well developers and deployers have instituted tools across the AI lifecycle for designing, building, deploying, and stewarding AI tools and system implementations within the context of what the academic AI world calls "Responsible AI" (RAI).
The dialogue below explores two key questions:
- Although GenAI and Agentic tools are evolving rapidly in the public marketplace, developments in research institutions and military research are almost certainly ahead in tool development. Can you speculate where or how advanced they are...
- Do you think that AGI is being used for political purposes to pursue a strategic future path...
These questions frame the reality that AI systems and related tools (e.g., robotics) are likely more advanced than the public is aware of. It would seem in the public interest to have greater accountability for the development and deployment of these systems, greater oversight by congressional representatives, or both. The following discourse presents two GenAI perspectives – Anthropic's Claude and OpenAI's ChatGPT – in speculative looks at AI technological scenarios.
Claude Project Perspective
You raise a fascinating and important point about the gap between public AI capabilities and what's likely happening behind closed doors in research institutions and military labs. Let me search for some current information and then offer informed speculation.
Military AI research 2025 DARPA advanced artificial intelligence
- SABER: Securing Artificial Intelligence for Battlefield Effective Robustness | DARPA; darpa.mil
- DARPA transitions new technology to shield military AI systems from trickery | DefenseScoop; defensescoop.com
- DARPA Aims to Develop AI, Autonomy Applications Warfighters Can Trust
- defense.gov
- DARPA Perspective on AI; darpa.mil
- DoD must innovate in AI by 2025 - Military Embedded Systems; militaryembedded.com
- The big AI research DARPA is funding this year - Defense One; defenseone.com
- Explainable Artificial Intelligence | DARPA; darpa.mil
- Machine learning and artificial intelligence software project by DARPA will aid DoD logistics challenge - Military Embedded Systems; militaryembedded.com
- Sharpening AI warfighting advantage on the battlefield | DARPA; darpa.mil
- AIR: Artificial Intelligence Reinforcements | DARPA; darpa.mil
Classified AI research military 2025 advanced capabilities
- Phase two of military AI has arrived | MIT Technology Review; technologyreview.com
- Military Applications of AI in 2024- Sentient Digital, Inc.; sdi.ai
- Military Applications of AI in 2025 – Cevians; cevians.com
- Artificial Intelligence In Military Market | Industry Report, 2030; grandviewresearch.com
- AI Leaders Shaping US Government & Military in 2025; executivegov.com
- Artificial Intelligence (AI) in Military Market 2025-2030: Investment Opportunities in Surveillance, Intelligence, Cybersecurity, Electronic Warfare, and Autonomous Systems; globenewswire.com
- Top 35 AI Military Tech startups (July 2025); ai-startups.org
- Militarizing AI: How to Catch the Digital Dragon? - Centre for International Governance Innovation; cigionline.org
- Congress.gov | Library of Congress; congress.gov
University AI research classified 2025 MIT Stanford advanced
- The 2025 AI Index Report | Stanford HAI; stanford.edu
- What's next for AI in 2025 | MIT Technology Review; technologyreview.com
- AI Index 2025: State of AI in 10 Charts | Stanford HAI; stanford.edu
- Science and Medicine | The 2025 AI Index Report | Stanford HAI; stanford.edu
- Generative AI research from MIT Sloan | MIT Sloan; mit.edu
- Five Trends in AI and Data Science for 2025 | Thomas H. Davenport and Randy Bean | MIT Sloan Management Review; mit.edu
- Stanford University’s 2025 AI Index Report – Summary of Key Findings – AI Critique; aicritique.org
- Accelerating scientific discovery with AI | MIT News | Massachusetts Institute of Technology; mit.edu
- Future of AI Research; aaai.org
- Artificial Intelligence | Stanford Emerging Technology Review; stanford.edu
Based on my research, you've touched on a crucial point about the likely gap between public AI capabilities and what's happening in classified research environments. Let me offer some informed speculation based on the available evidence:
The Research Gap: What We Can Reasonably Infer
1. Timeline Advantage (3-5+ Years Ahead)
The historical pattern suggests military and advanced research institutions typically operate 3-5 years ahead of public capabilities, sometimes longer. Consider that DARPA's internet research preceded the public web by decades, and GPS was military technology for years before civilian use.
2. Current DARPA Programs Indicate Advanced Capabilities
From my search, DARPA's current initiatives suggest they're working on:
- AIR Program: Autonomous F-16s for beyond-visual-range combat with $41M funding (doubled from last year)
- SABER: AI red teams specifically designed to attack and defend AI systems operationally
- REMA: Converting any commercial drone into autonomous platforms with decision-making capabilities
- ASIMOV: Testing autonomous weapons against complex ethical scenarios with $22M funding
3. Intelligence Applications Already Deployed
The evidence shows military AI is already being used for:
- Real-time foreign intelligence translation and analysis (Marines in Pacific testing)
- Classified data processing with Microsoft/Palantir partnerships
- Target identification and tracking in operational environments
- Cybersecurity threat detection at unprecedented scale
Likely Advanced Capabilities (Educated Speculation)
Military Research Probably Has:
- Multi-Modal Agentic Systems: While public ChatGPT Agent just launched basic task execution, military systems likely have sophisticated multi-agent coordination for complex mission planning, real-time battlefield adaptation, and autonomous decision trees.
- Advanced Reasoning Under Uncertainty: Beyond o3's reasoning capabilities, classified systems probably handle incomplete information, adversarial environments, and life-or-death decisions with reliability metrics we don't see publicly.
- Real-Time Multimodal Fusion: Integration of satellite imagery, signals intelligence, human intelligence, and open-source data at speeds and scales that dwarf current commercial capabilities.
- Adversarial AI Defense: DARPA's SABER program is building "exemplar AI red teams" to assess battlefield AI vulnerabilities and develop counter-AI techniques (DARPA, DefenseScoop), suggesting they're well ahead in AI-vs-AI warfare scenarios.
University Research Probably Has:
- Foundation models trained on specialized scientific datasets not available publicly
- Advanced reasoning systems for mathematical proofs and scientific discovery
- Sophisticated simulation environments for testing AI behavior in controlled conditions
- Early-stage AGI research with safety constraints we're not seeing in commercial development
The Classification Problem
As one expert noted: "Someone who managed to piece together hundreds of unclassified documents could reveal important information that on its own would be classified. For years, it was reasonable to assume that no human could connect the dots, but this is exactly the sort of thing that large language models excel at." Phase two of military AI has arrived | MIT Technology Review
This suggests classified AI systems are probably exceptional at pattern recognition across vast datasets that would be impossible for humans to synthesize.
The Real Implications
Why This Matters:
- Capability Surprise: Public AI development may be constrained by what's safe to reveal, meaning we might see sudden "breakthroughs" that are actually deployments of long-tested classified capabilities.
- Strategic Deception: Nations may be deliberately understating their AI capabilities while their classified programs operate with much more advanced systems.
- Commercial Catching Up: The rapid commercial progress (ChatGPT Agent, autonomous vehicles, etc.) might actually be the civilian sector finally approaching capabilities that have existed in classified environments for years.
Bottom Line: If the historical 3-5 year gap holds, military and advanced research institutions likely have agentic AI systems that can autonomously plan and execute complex multi-step missions, reason through adversarial scenarios, and operate reliably in high-stakes environments—capabilities we're just beginning to see hints of in public AI systems.
The most concerning possibility is that we're in a period where classified AI capabilities significantly exceed our public understanding of AI safety and control mechanisms.
This is a fascinating and deeply important question that touches on some of the most consequential dynamics shaping our world. Let me search for current information about AGI development timelines and political/strategic considerations.
AGI artificial general intelligence timeline 2025 political strategic
- Artificial General Intelligence: Is AGI Really Coming by 2025? – Hyperight; hyperight.com
- When Will AGI/Singularity Happen? 8,590 Predictions Analyzed; aimultiple.com
- Artificial General Intelligence Timeline: AGI in 5–10 Years - Cognitive Today :The New World of Machine Learning and Artificial Intelligence; cognitivetoday.com
- AGI is suddenly a dinner table topic | MIT Technology Review; technologyreview.com
- The Artificial General Intelligence Presidency Is Coming; foreignpolicy.com
- OpenAI, Anthropic, Google again promise "artificial general intelligence" in "a few years"; axios.com
- AI industry 'timelines' to human-like AGI are getting shorter. But AI safety is becoming less of a focus at top labs | Fortune; fortune.com
- What the Next Frontier of AI—AGI—Means for Government & GovCons – ExecutiveBiz; executivebiz.com
- AI Timeline - The Road to AGI; ai-timeline.org
- AI Timelines and National Security: The Obstacles to AGI by 2027 | Lawfare; lawfaremedia.org
AGI artificial general intelligence geopolitics strategic competition China US 2025
- How Artificial General Intelligence Could Affect the Rise and Fall of Nations: Visions for Potential AGI Futures | RAND; rand.org
- Heeding the Risks of Geopolitical Instability in a Race to Artificial General Intelligence | RAND; rand.org
- Scenario Planning: The U.S.-China AGI Competition and the Role of the EU as a Mediator - Modern Diplomacy; moderndiplomacy.eu
- AI geopolitics and data in the era of technological rivalry | World Economic Forum; weforum.org
- Geopolitical Strategic Competition | RAND; rand.org
- US-China AI Gap: 2025 Analysis of Model Performance, Investment, and Innovation; recordedfuture.com
- The AI Superpower Showdown. Inside the US-China Race for… | by Mark Craddock | Medium; medium.com
- U.S.-China Competition and Military AI | CNAS; cnas.org
- U.S.-China AI race reshapes AI norms without global input - Rest of World; restofworld.org
- China's AI balancing act — beating the U.S. but keeping the tech from threatening Beijing's rule; cnbc.com
Your question strikes at the heart of one of the most consequential dynamics of our time. Based on my research, I believe AGI is absolutely being positioned and pursued as a strategic tool for geopolitical advantage, with profound implications for global power structures.
The Strategic Framing is Explicit
The evidence shows that both nations and corporations are explicitly framing AGI development in terms of strategic competition:
U.S. Positioning: A Foreign Policy analysis argues that "ensuring that the United States wins the AGI race will be seen, retrospectively, as the most important" challenge for the next U.S. president, with "the U.S. government's full resources must be harnessed and then channeled into this effort." The Artificial General Intelligence Presidency Is Coming
Georgetown professor Ben Buchanan noted in a podcast discussion "why the US government finds it essential to develop AGI before China" with implications for "national security and labor markets." AGI is suddenly a dinner table topic | MIT Technology Review
Geopolitical Stakes: Ex-OpenAI employee Leopold Aschenbrenner argued that "the free world's very survival" is "at stake" and that reaching "superintelligence" first will give the U.S. or China "a decisive economic and military advantage" that determines global hegemony. AI Timelines and National Security: The Obstacles to AGI by 2027 | Lawfare
The Strategic Uses Are Already Clear
- Economic Dominance: The competition is seen as critical because "a declining cost and general availability of human-level AI models...will very likely disrupt the future of work while simultaneously leading to explosive economic growth, with AI estimated to contribute $15.7 trillion to the global economy by 2030." US-China AI Gap: 2025 Analysis of Model Performance, Investment, and Innovation
- Military Applications: RAND analysis shows that "Chinese leaders want to leverage AI to boost growth and innovation, address economic and social challenges, and secure the Chinese Communist Party's (CCP) domestic rule" while the U.S. sees AI as crucial for military modernization. U.S.-China Competition and Military AI | CNAS
- Global Influence: "Winning" the AI race means "dominating not only AI R&D pipelines but also the infrastructural and economic architectures that tether other countries — especially those in the Global South — to extractive platform dependencies." U.S.-China AI race reshapes AI norms without global input - Rest of World
The Future Path Being Pursued
Scenario Planning Reveals the Stakes: RAND developed eight scenarios showing how "the extent of centralization in AGI development is a crucial determinant of the geopolitical outcomes that might materialize. In more-centralized scenarios, either the United States or an adversary could gain significant advantages." How Artificial General Intelligence Could Affect the Rise and Fall of Nations: Visions for Potential AGI Futures | RAND
Risk of Conflict: RAND warns that "if artificial general intelligence will vastly increase the military and economic powers of states that have it, as many technologists predict, strategists should consider the possibility that nations will use force to prevent competitors from gaining such an advantage." Heeding the Risks of Geopolitical Instability in a Race to Artificial General Intelligence | RAND
The Accelerating Timeline Pressure
What makes this particularly concerning is the compressed timeline:
- Sam Altman believes "by 2025, machines will think and reason like humans" Artificial General Intelligence: Is AGI Really Coming by 2025? - Hyperight
- Google DeepMind's Demis Hassabis said "AGI could be here in 5–10 years" Artificial General Intelligence Timeline: AGI in 5–10 Years - Cognitive Today :The New World of Machine Learning and Artificial Intelligence
- AI experts estimate "a 50% chance that AGI will occur until 2060" but recent advances have shortened many predictions When Will AGI/Singularity Happen? 8,590 Predictions Analyzed
The Concerning Reality
Yes, AGI is absolutely being pursued for strategic political purposes.
The evidence suggests:
- National Survival Framing: Both the U.S. and China view AGI development as existential to their continued global influence and potentially their national survival.
- Winner-Take-All Mentality: The current trajectory suggests whoever achieves AGI first gains potentially insurmountable advantages in economics, military power, and global influence.
- Safety Secondary: As AI safety expert Jeremy Kahn notes, "Commercial and geopolitical incentives increasingly mean that AI safety testing is taking a back seat to shipping products fast." AI industry 'timelines' to human-like AGI are getting shorter. But AI safety is becoming less of a focus at top labs | Fortune
- Predetermined Futures: The strategic competition is actively shaping what kind of AGI gets built and by whom, potentially determining whether future AGI systems reflect Western liberal values, Chinese state-directed approaches, or something else entirely.
The Meta-Question: The pursuit of AGI for strategic advantage may be creating a self-fulfilling prophecy where the technology becomes a tool of geopolitical dominance precisely because we're building it to be one. This raises profound questions about whether we're on a path toward AGI that serves humanity broadly, or AGI that serves the strategic interests of whichever nation or coalition achieves it first. The timeline pressure and strategic framing suggest we may get AGI optimized for competitive advantage rather than human flourishing—which could be the most consequential choice in human history.
ChatGPT Project Perspective
🔬 Speculative Outlook: AI Capabilities Beyond the Public Domain
1. Military and Intelligence Use: Agentic AI for ISR and C2 (Intelligence, Surveillance, Reconnaissance and Command & Control)
Current Public Benchmarks: Open-source agentic AI can now execute multi-step tasks (e.g., auto-scheduling, document summarization, email drafting).
Speculative Military Capabilities (2025):
- Autonomous ISR Assistants: AI systems likely analyze multi-modal sensor feeds (satellite, drone, cyber) in real time, auto-prioritizing threats and generating executive summaries without human prompting.
- Multi-Agent Coordination: Autonomous agents could coordinate across units (drone swarms, cyber-defense systems, EW platforms), making decentralized decisions based on shared objectives.
- Real-Time Simulation + Wargaming: Large-scale simulations run by generative agents can model adversary behavior in uncertain geopolitical scenarios—enhanced by agent-on-agent adversarial learning.
- Covert Ops Support: Agentic tools may help operatives in the field (augmented reality overlays, on-the-fly language translation, predictive navigation based on satellite drift + terrain analysis).
2. DARPA / IARPA and Frontier Research
DARPA and IARPA often operate 3–5 years ahead of public releases.
Likely Frontiers:
- Cognitive Emulation: Projects exploring "neuro-symbolic" AI that mimics human reasoning and emotion for decision support (far beyond today's GPT-style reasoning).
- Zero-Resource Learning Agents: Training AI agents that generalize from extremely sparse or novel situations—key for battlefield adaptability or intelligence gathering in denied environments.
- Secure Local Agent Swarms: Systems capable of functioning without internet access, coordinating via edge computing in disconnected environments (e.g., submarines, Mars missions, black ops).
3. National Laboratories & Academic Alliances
Institutions like MIT Lincoln Lab, Sandia, and Los Alamos conduct AI work under DoD contracts or DOE oversight.
Probable Capabilities:
- Quantum-AI Hybrid Models: Experimental agentic AI using quantum optimizers to reduce the computational burden of large model inference.
- Long-Horizon Planning Agents: Multi-objective optimization over months or years—useful in logistics, defense posture planning, or climate modeling.
- AI-Augmented Scientific Discovery: AI as an autonomous co-investigator in protein design, materials science, or nuclear simulation (e.g., closed-loop lab agents proposing and running experiments).
4. Speculative Black Programs (Unacknowledged Special Access)
Based on historical precedent (e.g., stealth aircraft, satellite surveillance), it's reasonable to believe classified AI programs exist.
- Cognitive Decoys: Agentic systems that impersonate digital humans or adversary decision-makers in psychological operations.
- Offensive Cyber-AI: Fully autonomous cyber agents that can infiltrate, persist, and manipulate without human input—using few-shot or even zero-shot learning to adapt in adversarial digital terrain.
- Sovereign LLMs at Scale: Nations may have developed ultra-large models (beyond GPT-4 scale) that operate entirely within secured government infrastructure for economic forecasting, public sentiment modeling, or information warfare simulations.
🧭 Strategic Implications

| Sector | Likely Advantage |
| --- | --- |
| Military | Autonomous decision-making under uncertainty; agent coordination at tactical levels. |
| Intelligence | Persistent surveillance analysis; AI "watchers" that flag anomalies or shifts in behavior patterns. |
| Science | Accelerated hypothesis testing and experimental design in classified domains (e.g., fusion, propulsion, bioweapons defense). |
| Cybersecurity | Active defense agents that operate beyond traditional human response times. |
🔍 Final Thought
While the public is experiencing the "agentic revolution" in productivity tools, national security and research institutions may already be deploying:
- Multimodal, autonomous, self-updating AI agents
- Operating in contested environments, across siloed networks
- With a clear emphasis on decision dominance, not just automation
🧠 Is AGI Being Used Politically? Likely, Yes—In These Ways:
1. AI for Ideological and Narrative Influence
- Language Models as Ideological Engines: AI systems trained on curated datasets can subtly influence public discourse. By selecting training data, reinforcement strategies, and content filters, developers (corporate or state-backed) can encode ideological biases.
- Narrative Engineering: Governments or political actors could use agentic AI to generate synthetic news, social media campaigns, and engagement strategies that resonate emotionally—customized per demographic or region.
- Election Influence: Not just through bots or misinformation, but by generative psyops—fine-tuned models simulating public personas to infiltrate discourse communities and seed or steer beliefs.
2. AI for Strategic Policy Simulation - Using near-AGI systems to model political futures
- Policy Simulation Agents: Governments (especially the U.S., China, and Russia) may be running closed-loop simulations where AI agents represent stakeholders—nations, ideologies, voter blocs—to test how different policies ripple through domestic and global systems.
- Anticipatory Governance: AGI-adjacent models could be used to forecast unintended consequences of legislation, assess civil unrest risk, or even predict which sectors are vulnerable to cultural fracture or economic stress.
- Strategic Foresight: AI systems may already be assisting think tanks or strategic planning groups in shaping 5–25 year policy goals, based on generative scenario modeling.
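The closed-loop policy simulation described in the list above can be made concrete with a toy agent-based model. This is a hypothetical sketch, not a description of any real government system: stakeholder agents hold a position on a 0–1 policy axis, each round their positions drift (persuasion or backlash), and aggregate support for a proposed policy is re-scored. The bloc names, preferences, and influence weights are all invented for illustration.

```python
import random


class Stakeholder:
    """Toy stakeholder agent: supports a policy in proportion to how
    close the policy sits to the agent's preferred position on a 0-1 axis,
    scaled by the agent's influence."""

    def __init__(self, name, preference, influence):
        self.name = name
        self.preference = preference  # position on the 0-1 policy axis
        self.influence = influence    # weight of this bloc's support

    def support(self, policy):
        return self.influence * (1.0 - abs(self.preference - policy))


def simulate(policy, stakeholders, rounds=3, drift=0.05, seed=42):
    """Closed-loop sketch: each round, every agent's preference drifts a
    small random step up or down (a crude stand-in for persuasion and
    backlash), then total support for the policy is re-scored."""
    rng = random.Random(seed)  # fixed seed keeps the run reproducible
    history = []
    for _ in range(rounds):
        for s in stakeholders:
            step = drift if rng.random() < 0.5 else -drift
            s.preference = min(1.0, max(0.0, s.preference + step))
        history.append(sum(s.support(policy) for s in stakeholders))
    return history


blocs = [Stakeholder("urban", 0.7, 1.0),
         Stakeholder("rural", 0.3, 0.8),
         Stakeholder("industry", 0.5, 1.2)]
print(simulate(policy=0.6, stakeholders=blocs))
```

A real "policy simulation agent" would replace the one-line drift rule with learned behavioral models and far richer state, but the loop structure—perturb agents, re-score an objective, compare across candidate policies—is the core of the technique.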
3. AI and Authoritarian Drift - The fear: AGI tools give central power exponential leverage
- Behavior Prediction: Nation-states with integrated biometric data, surveillance feeds, and digital behavior logs (e.g., China's social credit infrastructure) may be approaching population-level simulation and prediction.
- Dynamic Legal Control: AI could soon help governments generate adaptive laws and propaganda in real time based on public sentiment—pre-emptively neutralizing dissent or backlash.
- Agentic Censorship: AI agents trained to scan, label, and suppress ideologically "undesirable" material with high precision—across languages, platforms, and media types.
4. Geopolitical Grand Strategy
- China's "AI soft power": The export of open-source models like DeepSeek R1 or Kimi K2 isn't just technical generosity—it's influence. Whoever controls the underlying models helps set the cognitive architecture for millions of users abroad.
- U.S. “AI Exceptionalism”: Initiatives like the National AI Action Plan or Project Stargate reflect a bet that future political and economic dominance will hinge on who gets agentic AI integration right first.
- AI-Era Diplomacy: States are training models to simulate treaty negotiations, map geopolitical pressure points, and plan alliance shifts—based on vast corpora of legal, cultural, and economic data.
🧭 Summary Table: Political Uses of AGI-Adjacent Systems

| Domain | Example Application | Strategic Goal |
| --- | --- | --- |
| Domestic Politics | Narrative control, synthetic personas, dynamic policy drafting | Social cohesion or polarization |
| Foreign Policy | Simulated negotiations, alliance modeling, foreign influence ops | Geopolitical advantage |
| Military Planning | Cognitive wargaming, battle simulations, info ops | Decision superiority |
| Economic Strategy | Trade modeling, economic prediction agents | Industrial dominance |
| Censorship & Control | Real-time moderation, ideological scoring, behavioral nudging | Regime stability |
🧠 Final Thought
The idea of AGI is no longer theoretical. Its early operational forms—especially when distributed across models, agents, and orchestrators—are already powerful enough to shape the future at a national and global level.
So yes, AGI is not only political—it’s geopolitical. It’s being shaped by values, interests, and ideologies, whether openly or covertly.