Anthropic is suing the U.S. government after the War Department labeled the artificial intelligence company a national security “supply-chain risk” and ordered agencies to stop using its technology.
The complaint, filed March 9 in the U.S. District Court for the Northern District of California, comes after Anthropic refused to remove two safety restrictions on its Claude AI models: a ban on using its technology for fully autonomous lethal warfare and for mass surveillance of Americans.
According to the lawsuit, Secretary of War Pete Hegseth demanded that Anthropic allow the military to use its models for “all lawful uses.” When the company declined to lift the two restrictions, President Trump directed federal agencies to immediately stop using Anthropic technology.
The War Department then designated the company a supply-chain risk and ordered contractors doing business with the military not to have any commercial relationships with Anthropic.
“Designating Anthropic as a supply chain risk would be an unprecedented action – one historically reserved for U.S. adversaries, never before publicly applied to an American company,” Anthropic CEO Dario Amodei wrote in a blog post.
According to the lawsuit, after Anthropic attempted to negotiate, administration officials publicly labeled Amodei “ideological” and a “liar” with a “God-complex” who is “ok with putting our nation’s safety at risk.”
Anthropic argues the government’s actions are unlawful retaliation for the company’s speech about AI safety. The complaint alleges violations of the First Amendment, the Fifth Amendment’s due process protections and the Administrative Procedure Act.
The company says the designation has already triggered cancellations of federal contracts and threatens hundreds of millions of dollars in potential business, while also damaging its reputation in the broader technology market.
The War Department did not immediately respond to a request for comment.