

Anthropic, the developer of AI model "Claude" known for emphasizing AI responsibility, has rejected the U.S. Department of Defense's demand for unrestricted access to its AI technology. The conflict has reached a critical point as the Pentagon has threatened to terminate all contracts and exclude the company from its supply chain if no agreement is reached.
On the 26th (local time), Anthropic CEO Dario Amodei posted an official statement on the company's website saying "Our position remains unchanged despite the Defense Department's threats," again refusing the demand for unlimited use of the company's AI model.
Anthropic reaffirmed its opposition to large-scale civilian surveillance through AI and the use of fully autonomous weapons—weapons that kill targets without direct human intervention. Defense Secretary Pete Hegseth had previously issued an ultimatum demanding Anthropic agree to provide Claude for all lawful purposes by 5:01 p.m. on the 27th. Anthropic dismissed the demand, stating it "cannot in good conscience comply with their request."
Analysts attribute this stance to the responsibility and transparency regarding AI that Anthropic has emphasized since its founding. CEO Amodei has been more outspoken than his counterparts at other AI companies in advocating for strong regulations to curb the potential risks posed by AI.
In his statement, CEO Amodei emphasized: "In a small number of cases, AI could undermine rather than protect democratic values," adding that "cutting-edge AI systems are not reliable enough to power fully autonomous weapons."
In response, Department of Defense Chief Technology Officer Emil Michael strongly criticized Anthropic and blamed the company for the breakdown in negotiations. In an interview with CBS, he explained that the Pentagon had made sufficient concessions during negotiations, including proposing to explicitly include provisions for compliance with federal laws restricting surveillance of Americans and inviting Anthropic to join an AI ethics committee. However, Anthropic countered that "there was no substantive progress."
The U.S. government has indicated it will terminate its federal contracts with Anthropic. The Financial Times, citing a senior administration official, reported that "if no agreement is reached with Anthropic, all contracts signed last August to provide Claude to three government departments will be terminated." Contracts awarded through the General Services Administration (GSA) are also expected to be terminated.
The Pentagon may also designate Anthropic as a "supply chain risk entity." Previously, the Defense Department has designated companies from adversarial nations, such as China's Huawei, as supply chain risks and excluded them from the defense ecosystem. If Anthropic receives this designation, it would not only lose its $200 million contract signed with the Pentagon last year but also forfeit its advantage as the only AI model provider authorized for use in classified defense systems. Secretary Hegseth has also mentioned the possibility of using the Defense Production Act (DPA) to compel access to Anthropic's technology.
However, Anthropic is expected to pursue legal action against such decisions. Alan Rozenshtein, associate professor at the University of Minnesota Law School, told the FT that the Pentagon's decision is "an interpretation divorced from the law," predicting that "Anthropic would have strong legal defenses if designated as a supply chain risk entity."
Despite the pressure on Anthropic, voices of support are emerging from within the AI industry. According to Reuters, approximately 200 employees from Google and OpenAI have publicly expressed support for Anthropic's position. CEO Amodei stated: "If the Defense Department decides to terminate the contract, we will do our best to ensure there is no disruption to military operations until a successor company takes over."
