
Anthropic, Pentagon Feud Deepens

The Pentagon has formally designated Anthropic as a “supply-chain risk,” escalating a high-profile dispute over how the U.S. military can use advanced AI systems.

In a notice, the Defense Department informed Anthropic that the company and its products pose risks to national security and could be restricted from work involving military contractors, according to officials familiar with the decision, The Wall Street Journal reported.

The move could force companies that do business with the Pentagon to stop using Anthropic’s AI model, Claude.

The conflict centers on Anthropic’s refusal to remove safeguards that prevent its AI from being used for mass domestic surveillance or fully autonomous weapons. The Pentagon argues it must retain the ability to use AI systems for any lawful purpose in military operations.

The designation is unusual because the supply-chain risk label is typically applied to companies linked to foreign adversaries. Analysts say using it against a U.S. technology firm could chill private-sector cooperation with the government.

Anthropic, founded by former OpenAI researchers and previously approved for use on classified government networks, said it plans to challenge the designation in court.

The Information earlier reported that Anthropic CEO Dario Amodei told staff in a memo the company is being targeted because it has not “given dictator-style praise to Trump” and fought his AI agenda.

The dispute comes amid intensifying competition among AI firms for defense contracts. OpenAI recently secured a deal to provide AI systems for classified Pentagon environments after Anthropic’s relationship with the military deteriorated.

