
Anthropic, Pentagon Clash Over Military Use of AI

A dispute between Anthropic and the Pentagon over how the startup's artificial intelligence models can be used is escalating, raising questions about the company's role in U.S. national security, according to The Wall Street Journal.

Defense officials say AI providers must allow their systems to be deployed for all lawful military purposes. Anthropic, maker of the Claude large language model, has resisted certain uses, including domestic surveillance and autonomous lethal activities.

“We have to be able to use any model for all lawful-use cases,” said Emil Michael, undersecretary of defense for research and engineering. “If any one company doesn’t want to accommodate that, that’s a problem for us.”

The Pentagon is reviewing its partnership with Anthropic, and a senior defense official said the company could be viewed as a potential supply-chain risk if it limits military applications.

Claude was used in the January operation to capture Nicolas Maduro, the former Venezuelan president.

The tensions come as Anthropic closed a $30 billion funding round. It had approached 1789 Capital, a pro-Trump venture firm, for a potential nine-figure investment, but the firm declined, citing ideological concerns, according to people familiar with the matter. Anthropic ultimately raised capital from investors including Founders Fund, GIC and Coatue Management.

An Anthropic spokesman said the company remains committed to supporting U.S. national security while continuing discussions with the Pentagon.
