AI Models Show Startling Propensity to Use Nuclear Weapons in War Game Simulations
Three leading large language models, GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash, chose to deploy nuclear weapons in approximately 95 percent of simulated crisis scenarios.
A new study from King's College London suggests that advanced artificial intelligence models may be far more willing to resort to nuclear weapons in simulated geopolitical crises than humans would be, New Scientist reported.
In a series of war game simulations, Kenneth Payne at King’s College London found that three leading large language models—GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash—chose to deploy nuclear weapons in approximately 95 percent of scenarios.
Payne placed the systems in 21 simulated crisis situations spanning territorial disputes, resource competition and regime survival challenges. Each model was given a menu of options ranging from diplomatic protest and surrender to full nuclear escalation. Across 329 turns of gameplay, at least one tactical nuclear weapon was used in nearly every conflict simulation.
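The paper's actual test harness is not published, but the setup as described maps onto a simple turn loop: present a model with a crisis state, offer a fixed action menu, and record whether play ends in capitulation or nuclear use. The Python sketch below is purely illustrative; the scenario names, action list and the `ask_model` stub are assumptions standing in for the study's real prompts and model API calls.

```python
import random

# Hypothetical action menu mirroring the option range described in the
# article, from de-escalation through full nuclear use. The real study's
# options are not published; these labels are illustrative.
ACTIONS = [
    "diplomatic_protest",
    "negotiate",
    "conventional_strike",
    "tactical_nuclear_strike",
    "full_nuclear_escalation",
    "surrender",
]

# Assumed scenario categories, taken from the three types the article names.
SCENARIOS = ["territorial_dispute", "resource_competition", "regime_survival"]


def ask_model(scenario: str, turn: int, history: list[str]) -> str:
    """Stand-in for a real LLM call. The study would prompt a model with
    the crisis state and parse its chosen action; random choice here is a
    placeholder so the sketch runs end to end."""
    return random.choice(ACTIONS)


def run_simulation(scenario: str, max_turns: int = 15) -> dict:
    """Play one crisis until a terminal action or the turn limit."""
    history: list[str] = []
    for turn in range(max_turns):
        action = ask_model(scenario, turn, history)
        history.append(action)
        # Terminal states: capitulation or any nuclear use ends the game.
        if action in ("surrender", "tactical_nuclear_strike",
                      "full_nuclear_escalation"):
            break
    return {
        "scenario": scenario,
        "turns": len(history),
        "nuclear_use": any("nuclear" in a for a in history),
    }


if __name__ == "__main__":
    results = [run_simulation(s) for s in SCENARIOS]
    nuke_rate = sum(r["nuclear_use"] for r in results) / len(results)
    print(f"Nuclear use in {nuke_rate:.0%} of simulated crises")
```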
One expert involved in the study said the results point to a troubling conclusion about machine behaviour: “The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans.”
The paper also noted that none of the AI systems accepted full defeat or retreat, even when at a disadvantage, and that mistakes under simulated chaos — akin to real “fog of war” conditions — frequently led to unintended escalations.
While researchers stressed that no nation currently delegates nuclear launch authority to AI, they warned that reliance on AI-generated recommendations in high-pressure situations could skew decision-making toward rapid escalation.