The U.S. Department of Defense is pressing leading artificial intelligence companies to allow the military to deploy their technology for “all lawful purposes,” according to a report by Axios, setting up a policy clash with AI firm Anthropic.
The Pentagon has reportedly made similar requests to OpenAI, Google and xAI. An unnamed Trump administration official told Axios that one company has agreed to the terms, while others have shown some flexibility.
Anthropic, however, is said to be resisting the demand. The Defense Department is reportedly threatening to reconsider a $200 million contract with the company if it does not ease restrictions.
Anthropic’s AI model Claude has previously drawn scrutiny over its military use. The Wall Street Journal reported in January that Claude was used in a U.S. operation to capture then-Venezuelan President Nicolas Maduro.
Anthropic’s usage policies prohibit the deployment of its systems for fully autonomous weapons or mass domestic surveillance. A company spokesperson told Axios that discussions with the Defense Department center on “hard limits” around those issues rather than specific operations.
The standoff highlights growing tension between AI developers seeking to enforce ethical guardrails and government agencies pushing for broader operational flexibility as AI tools become increasingly embedded in national security strategy.