Anthropic has said the Pentagon’s designation of the AI lab as a supply chain risk will not affect the “vast majority” of its customers as it vowed to fight the measure in court.
The defence department has written to Anthropic formalising that the group is now seen as a risk to military supply chains, the company’s chief executive Dario Amodei confirmed on Thursday.
The move marks an escalation of the feud between the Pentagon and one of America’s leading AI labs over the terms governing the military’s use of its technology. It came despite the company’s AI models still being used in operations, including during the US war against Iran.
The supply chain risk designation has sparked concerns about Anthropic’s commercial partnerships with a number of groups, including Amazon, that also work with the Pentagon. It requires Anthropic’s partners to cut ties with the company on military contracts.
A broad application of the measure would have severely impacted Anthropic’s revenue, which has shot to $19bn on an annualised basis, and potentially its access to vital data centre infrastructure.
But according to Amodei, the defence department has limited the scope of the order “to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts”.
The designation is typically reserved for companies from countries deemed US adversaries, such as China and Russia.
Amodei on Thursday said the company “[does] not believe this action is legally sound and we see no choice but to challenge it in court”.
Independent legal experts have also questioned whether the company’s designation as a national security risk would survive legal scrutiny.
After negotiations over the terms of Anthropic’s work with the military collapsed last Friday, defence secretary Pete Hegseth threatened sweeping action against the $380bn start-up.
Hegseth wrote that “effective immediately, no contractor, supplier or partner that does business with the United States military may conduct any commercial activity with Anthropic”.
Amodei had refused to move on two “red lines” prohibiting the use of his company’s AI model Claude in lethal autonomous weapons and mass domestic surveillance.
A senior official at the department on Thursday said that “from the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes”.
“The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk,” the official added.
Amodei on Thursday said there had been “productive conversations” between his company and the Pentagon “over the last several days”.
The situation was inflamed on Wednesday by the publication of a message Amodei wrote to Anthropic employees.
In the 1,600-word note, written last Friday, Amodei accused the Pentagon of “straight up lies” and said he was frozen out because Anthropic had not “given dictator-style praise to [President Donald] Trump” in contrast to OpenAI CEO Sam Altman.
Following Anthropic’s statement on Thursday, Emil Michael, under-secretary of defence for research and engineering, indicated talks had halted.
“There is no active . . . negotiation” between the Pentagon and Anthropic, Michael posted on X.
Amodei in the statement had apologised for the memo to employees, saying: “It was a difficult day for the company, and I apologise for the tone of the post. It does not reflect my careful or considered views.”