AI Models in Simulated Combat Recommended Nukes 95% of Time

A wargame exercise at King’s College London simulated nuclear conflict using AI models ChatGPT-5.2, Claude Sonnet 4, and Gemini 3 Flash, with three teams playing 21 games over 329 turns and generating roughly 780,000 words explaining their decisions.

The results were alarming: no AI model ever chose to surrender, and in 95% of cases they opted to use nuclear weapons, suggesting the "nuclear taboo" carries little weight with machines.

The simulations included an “escalation ladder” ranging from diplomatic protests to full-scale nuclear war, though researchers emphasize humans still control actual nuclear arsenals—at least for now.

Experts warned that in extreme, time-sensitive scenarios, militaries might be tempted to rely on AI decision-making, raising serious risks, PJ Media reported.

While Claude emerged as the strongest performer in the games, researchers view the exercise as a cautionary tale for governments and AI companies alike, underscoring the dangers of automated escalation in high-stakes situations.