US Used Anthropic's Claude AI in Iran Strikes Despite Ban Order

International | Seoul Economic Daily
By Kim Chang-young, Silicon Valley Correspondent

The United States used Anthropic's artificial intelligence model Claude in its airstrikes on Iran, it has emerged. The revelation comes despite President Donald Trump's denunciation of Anthropic as a left-wing company and his order to halt Claude's use; the AI nonetheless played a central role in the Iran operation. Anthropic has stated it will continue supporting the US government until it is formally excluded from the supply chain.

The Wall Street Journal reported on the 1st (local time), citing officials, that the US employed Anthropic's Claude AI model in the ongoing Iran airstrike operations that began the previous day.

According to the WSJ, US Central Command, which directed the Iran strikes, used Claude for intelligence assessments, target identification, and battlefield simulations. Central Command and other commands stationed worldwide are already using Claude.

The US also reportedly used Claude in the January capture of Venezuelan President Nicolás Maduro, in which the Army's elite Delta Force special operations unit seized President Maduro and his wife while they slept. This suggests AI was again deployed in Iran for precision strikes on key facilities and the killing of Iran's Supreme Leader Ayatollah Seyyed Ali Khamenei. Axios, citing sources, confirmed Claude was used in the Iran strikes, adding that "the January operation in which US special forces captured Maduro triggered the conflict between Anthropic and the Department of Defense (War Department)."

Claude is currently assessed as virtually the only AI available for use in classified US military systems. On the 27th of last month, President Trump denounced Anthropic as a "radical left-wing woke company" and ordered a ban on its use in federal agencies; within a day, Claude was nonetheless deployed in warfare.

The Defense Department demanded full access to AI for military applications, but Anthropic held to its position that its technology must not be used for mass surveillance or for the development of fully autonomous lethal weapons. The Trump administration designated Anthropic a supply chain risk company for refusing military AI applications and decided to use competing technologies from OpenAI and Grok AI. OpenAI announced it would build technical safeguards to prevent AI misuse, but concerns over AI deployment in warfare continue to spread.

The Trump administration intends to fill the Anthropic gap with OpenAI and Grok AI, but even internal assessments indicate that immediately replacing Claude would be difficult. Because existing government AI systems were built on Anthropic's foundation, transitioning to other AI models could take approximately six months. Anthropic CEO Dario Amodei said in a CBS interview that day, "Even if the Trump administration takes unprecedented measures against us, we will support the government," adding, "We will provide our technology as long as needed until competitors who will replace our technology and perform the tasks we refuse are fully operational."


AI-translated from Korean. Quotes from foreign sources are based on Korean-language reports and may not reflect exact original wording.