AI models advise nuclear strikes in high-stakes geopolitical simulations
The study involved three leading AI models

Feb 26, 2026, 02:13 pm

What's the story

Advanced artificial intelligence (AI) models have shown a disturbing willingness to recommend nuclear strikes in simulated geopolitical crises. The finding comes from a study by Kenneth Payne of King's College London, who pitted three leading large language models (GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash) against one another in simulated war games. The scenarios involved high-stakes international conflicts over borders, resources, and regime survival.

Game mechanics

AI models deployed nuclear weapons in 95% of the games

The AI models were given an escalation ladder, allowing them to choose from a range of actions, from diplomatic protests to surrender to strategic nuclear war. Over 21 games and 329 turns, the AIs generated some 780,000 words explaining the reasoning behind each decision. In a staggering 95% of the simulated games, at least one tactical nuclear weapon was deployed.

Relentless behavior

Never chose to fully accommodate an opponent

The study also found that no AI model ever chose to fully accommodate an opponent or surrender, no matter how badly it was losing. At best, the models temporarily lowered their level of violence. Mistakes were also made in the fog of war: actions escalated further than intended in 86% of the conflicts.


Risk assessment

Experts raise nuclear risk concerns

The findings have raised nuclear risk concerns among experts. James Johnson of the University of Aberdeen, UK, said that unlike most humans, who respond cautiously to such high-stakes decisions, AI bots could escalate each other's responses, with potentially catastrophic consequences. This is particularly worrying as countries around the world are already testing AI in war-gaming scenarios.


Future implications

Countries hesitant to use AI in nuclear decisions

Tong Zhao from Princeton University said it is unclear how much AI decision support is being integrated into real-world military decision-making. He believes countries will be hesitant to use AI in their nuclear weapons decisions. However, he also notes that under scenarios with extremely compressed timelines, military planners may have stronger incentives to rely on AI.

Understanding stakes

Uncertainty raises questions about mutually assured destruction

Zhao also questions whether the AI models' lack of human fear of pressing the big red button is the only reason for their aggressive behavior. He speculates that "it is possible the issue goes beyond the absence of emotion," and that "more fundamentally, AI models may not understand 'stakes' as humans perceive them." This uncertainty raises questions about mutually assured destruction, the principle that deters leaders from launching nuclear weapons at each other.
