
The U.S. military used Anthropic's artificial intelligence model Claude in its airstrikes on Iran, the Wall Street Journal reported, raising concerns about AI's role in warfare.
The revelation comes despite President Donald Trump's denunciation of Anthropic as a left-wing company and his order to halt use of Claude across federal agencies. Even so, Claude played a central role in the Iran strikes, according to the report.
Citing government officials, the WSJ reported on June 1 (local time) that the U.S. employed Claude in its Iran airstrike operations. The report came just hours after Trump directed all federal agencies to stop using technology from Anthropic, Claude's developer.
Officials told the WSJ that military commands worldwide, including U.S. Central Command, are using Claude. Central Command employs the AI for intelligence assessments, target identification, and battlefield simulations.
Claude is considered virtually the only AI available for use in classified U.S. military systems. The U.S. also reportedly used Claude in the January operation to capture Venezuelan President Nicolás Maduro.
Despite Trump's ban, analysts say no AI currently exists that can replace Claude in U.S. military operations. Observers say this assessment explains why Trump allowed a six-month phase-out period rather than an immediate cutoff.
The Pentagon had requested unrestricted access to AI for military applications. Anthropic, however, maintained that its technology should not be used for mass surveillance or for the development of fully autonomous lethal weapons. That stance ultimately led to the administration's decision to discontinue use of Claude.
On May 27, Trump condemned Anthropic as a "radical left-wing woke company," stating that "their selfishness has endangered the lives of the American people and jeopardized our military and national security."
