Anthropic’s Claude AI chatbot is now the most downloaded app in the U.S. Apple App Store, dethroning OpenAI’s ChatGPT.
Claude climbed to the top spot after Anthropic refused to remove certain guardrails the Pentagon had demanded. Trump reportedly ordered federal agencies to stop using Claude.
In a blog post, Anthropic CEO Dario Amodei said the company will continue supplying its Claude AI models to the U.S. Department of War but will not remove safeguards that block mass domestic surveillance and use in fully autonomous weapons. The company was awarded a $200 million contract by the Pentagon in July.
Amodei pointed out that Anthropic was the first frontier AI firm to deploy models on classified U.S. government networks and at National Laboratories. Claude is now used across defense and intelligence agencies for intelligence analysis, operational planning, cyber operations and modeling.
“However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Amodei wrote.
Meanwhile, OpenAI reached a deal with the War Department soon after the department dropped Anthropic. OpenAI said it likewise does not allow use for mass domestic surveillance or in autonomous weapons, and added a third ban on high-stakes automated decisions. OpenAI said it can enforce these “red lines” through policy, deployment design, technical safeguards, contract terms and human monitoring.
In a blog post, OpenAI wrote: “We don’t know why Anthropic could not reach this deal, and we hope that they and more labs will consider it.”
Safety-focused AI
Anthropic, founded in 2021 by former OpenAI researchers, has positioned itself as a safety-focused rival to OpenAI and Google. The company has secured multibillion-dollar backing from Amazon and Google and recently expanded government work as AI adoption accelerates in defense.
Amodei wrote that the Pentagon has required contractors to support “any lawful use” of AI systems and remove certain safeguards. He said the department has threatened to label Anthropic a “supply chain risk” or invoke the Defense Production Act if it refuses.
“These latter two threats are inherently contradictory: One labels us a security risk; the other labels Claude as essential to national security,” Amodei wrote.
Anthropic supports lawful foreign intelligence uses of AI, Amodei said, but opposes AI-driven mass domestic surveillance and maintains that current frontier models are not reliable enough for fully autonomous weapons.
The company said it is prepared to continue working with the War Department under those conditions, or to transition its services if it is removed.