US Defense Moves to Designate Anthropic 'Supply Chain Risk' Over AI Military Use Dispute

International | By Kyunghwan Yoon, New York Correspondent
Trump Stalker: US Military 'Leftist AI' Weapons Could Harm Democracy - Seoul Economic Daily International News from South Korea

The US military's expanded use of artificial intelligence in overseas operations has ignited debate over ethical concerns, with AI developer Anthropic at the center of a growing standoff with the Pentagon.

The controversy intensified after Anthropic, whose AI model Claude is the only one used on classified military networks, signaled reluctance to cooperate with the Department of Defense. CEO Dario Amodei has argued that the government could eventually use Claude for domestic surveillance, potentially threatening democracy.

Amodei and his sister Daniela Amodei, the company's president, worked at OpenAI before founding Anthropic in 2021 over disagreements about that company's commercial direction. President Donald Trump has since ordered all federal agencies to stop using Anthropic technology, casting doubt on the company's planned initial public offering, expected to be among this year's largest.

Claude: The Only AI on Classified Military Networks

On May 13, Axios reported that the US military used Claude during "Operation Firm Resolve," the April 3 operation that captured Venezuelan President Nicolás Maduro. Sources said Claude rapidly processed real-time data during the operation, in which dozens of Cuban security personnel and Venezuelan soldiers died while US forces suffered no casualties.

Claude, developed by the Google- and Amazon-backed startup, is considered industry-leading in contextual understanding and large-scale text processing, with exceptional reasoning and coding capabilities. The April 12 release of "Claude Cowork" enables users without programming knowledge to create automated applications for document summarization, data analysis, and contract review through conversation alone—a development that triggered sharp declines in software stocks on the New York Stock Exchange.

Last July, Anthropic joined OpenAI, Google's Gemini, and xAI's Grok in signing a $200 million software contract with the Pentagon. Only Anthropic's model is used on classified networks handling sensitive tasks such as operational planning and weapons targeting. Other models are limited to unclassified administrative networks.

Defense Secretary Pete Hegseth has stated he "will not adopt AI models that do not permit warfighting," emphasizing plans to integrate AI into military networks to maintain superiority over China.

Anthropic Opposes Weapons Targeting, Domestic Surveillance

The conflict escalated following the Maduro operation. On May 11, Pentagon Chief Technology Officer Emil Michael met with executives from OpenAI and Anthropic at the White House, requesting legal authorization to use AI tools on classified networks.

OpenAI removed several usage restrictions while retaining some safeguards, enabling approximately 3 million Defense Department employees to use ChatGPT on unclassified networks. Google reportedly reached a similar arrangement. Elon Musk's xAI recently signed a contract allowing Grok's use on classified systems for all military purposes.

Anthropic alone has resisted. Company executives told military officials that they oppose automated weapons targeting and domestic surveillance applications, and the company continues to restrict Pentagon use of its services under its usage policy.

In response, the Pentagon began reviewing termination of the contract, saying it could not negotiate usage restrictions case by case. Officials nonetheless acknowledged that no other AI model matches Claude's capabilities.

On May 18, CNBC and The Wall Street Journal reported the Pentagon is considering designating Anthropic a "supply chain risk entity"—a measure typically reserved for adversarial nations like China. Such designation would bar all defense contractors from using Claude in military collaboration.

Anthropic's hiring of former Biden administration officials, including ex-AI adviser Ben Buchanan and former NSC senior aide Tarun Chhabra, has further strained relations with the Trump administration. David Sacks, the White House science and technology adviser known as the "AI czar," has criticized Anthropic for pursuing "excessive progressivism" favoring regulation.

The Journal also reported that Anthropic approached 1789 Capital, a venture firm where Donald Trump Jr. is a partner, for investment during a recent $30 billion fundraising round. The firm declined, citing concerns over executives' criticism of President Trump, Biden-era hires, and support for AI regulation.

Amodei: "Cannot Accept in Good Conscience"

On May 24, Secretary Hegseth summoned Amodei and demanded compliance by 5:01 p.m. on May 27, threatening designation as a supply chain risk entity or invocation of the Defense Production Act—measures typically reserved for national emergencies in essential sectors like energy and healthcare.

Amodei reportedly argued that adherence to company guidelines would not impair military operations.

On the same day, Anthropic published version 3.0 of its "Responsible Scaling Policy" on its website and...

AI-translated from Korean. Quotes from foreign sources are based on Korean-language reports and may not reflect exact original wording.