Pentagon Working on Alternatives After Fallout with Anthropic
The U.S. Department of Defense is developing alternatives to Anthropic’s AI systems following a breakdown in its $200 million contract with the company, according to reports citing Pentagon officials.
Cameron Stanley, the Pentagon’s chief digital and AI officer, said the department is actively working on integrating multiple large language models into government-controlled environments.
“The Department is actively pursuing multiple LLMs into the appropriate government-owned environments,” he said. “Engineering work has begun on these LLMs, and we expect to have them available for operational use very soon.”
The split comes after disagreements over how the military could use Anthropic’s AI. The company reportedly sought contractual restrictions preventing its technology from being used for mass surveillance of U.S. citizens or autonomous weapons systems without human oversight, terms the Pentagon declined to accept.
The Pentagon subsequently blacklisted Anthropic as a “supply-chain risk,” and Anthropic has sued the government in response. Meanwhile, the Defense Department is moving to replace the company’s technology with models from OpenAI and other vendors.
Following the fallout, OpenAI secured its own agreement with the Defense Department. The Pentagon has also partnered with xAI to deploy its Grok model in classified systems.
The developments suggest the Pentagon is pursuing a broader, multi-vendor AI strategy while phasing out its reliance on Anthropic’s technology.