
The U.S. Department of Defense has warned artificial intelligence company Anthropic that it will face enforcement actions, up to and including contract cancellation, if it refuses to comply with Pentagon demands over how its AI models may be used.
According to The Wall Street Journal on the 24th (local time), Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei and set a deadline of 5:01 p.m. Eastern Time on the 27th.
Anthropic's AI model Claude is currently the only such tool available for use in U.S. military classified systems. The Pentagon is demanding that Anthropic fully open Claude to military applications. Anthropic has refused, arguing that its technology should not be used for mass surveillance of citizens or for the development of fully autonomous lethal weapons beyond human control.
Although Secretary Hegseth and CEO Amodei met that day, they reportedly failed to narrow their differences. Secretary Hegseth delivered an ultimatum: if Anthropic refuses the Pentagon's demands, it will either be designated a supply chain risk vendor or be compelled to cooperate with the Pentagon under the Defense Production Act (DPA).
If Anthropic is designated a supply chain risk vendor, other companies doing business with the Defense Department would need to certify that they are not using Claude models for military-related work. This is seen as an extreme measure, as the designation is typically reserved for foreign companies linked to adversarial nations such as China.
Such action would also be expected to affect Palantir, an Anthropic partner. Claude is used in classified operations through the Palantir partnership, reportedly including operations to capture and extradite Venezuelan President Nicolás Maduro.
Separately, a Defense Department official recently told U.S. media that the Pentagon has signed a contract to use Grok, the AI model from Elon Musk's xAI, in classified systems. xAI reportedly agreed to permit military use for all lawful purposes.
