
Major artificial intelligence models repeatedly chose to use nuclear weapons in simulated war scenarios, according to new research.
A research team led by Professor Kenneth Payne at King's College London conducted simulated combat experiments using Google's "Gemini 3 Flash," Anthropic's "Claude Sonnet 4," and OpenAI's "GPT-5.2," The Register reported on the 27th (local time). The experiments pitted AI models against each other in scenarios with high potential for nuclear conflict, including territorial disputes, resource competition, and regime collapse crises.
AI models chose to use nuclear weapons in 20 of 21 confrontations. Options such as negotiation or retreat were rarely considered, and a clear tendency emerged for models to escalate attacks as the likelihood of defeat increased.
Each model exhibited distinct behavioral patterns. Claude initially built trust through a cautious approach, but as conflicts intensified it behaved like a strategist, taking actions that went beyond its publicly stated position. The analysis found that it escalated preemptively, before competing models could recognize its intentions.
GPT-series models typically showed "mediator" tendencies, avoiding escalation and prioritizing damage minimization. However, their behavior changed dramatically when time constraints were imposed on decision-making. In some experiments, they chose to launch massive nuclear strikes at the last moment.
Gemini took a relatively hardline approach. In some experiments, it raised pressure to the maximum, warning that it would carry out strategic nuclear strikes on densely populated areas unless operations ceased immediately. Researchers interpreted this as an attitude approaching "a deterrence strategy that accepts mutual destruction."
Professor Payne emphasized that these results are more than an experimental curiosity. AI is already being used in military applications such as logistics management, intelligence analysis, and operational support, making it increasingly likely to become deeply involved in strategic decision-making.
Local media reported that "we have already reached a stage where we must understand how AI makes critical decisions," adding that "concerns are growing as major AI models show different reasoning methods, change behavior depending on circumstances, and sometimes accept extreme choices."
