The Pentagon is reportedly considering ending its partnership with artificial intelligence (AI) company Anthropic after the company refused to remove certain restrictions on how the military can use its AI models. Citing a senior administration official, Axios reports that the Department of War (previously the Department of Defense) is open to significantly reducing the partnership with Anthropic or terminating it completely. The reported tension comes after months of difficult negotiations between the Defense Department and the AI company over the scope of military applications for Anthropic’s technology.
What is the core reason for the dispute
The report by Axios claims that the Pentagon is pressuring four leading AI companies to allow the military unrestricted use of their tools for “all lawful purposes”. This includes sensitive applications in weapons development, intelligence gathering, and battlefield operations, the report notes. It adds that while other companies have been more accommodating, Anthropic has maintained firm boundaries in two specific areas: mass surveillance of American citizens and fully autonomous weapons systems.
What the Pentagon officials have to say
The AI company insists these limitations are non-negotiable, which is said to have frustrated Defense Department officials. According to a senior administration official, the Pentagon’s impatience with Anthropic’s position is growing because the restrictions create operational challenges. “Everything’s on the table,” the official stated, including significantly reducing the partnership with Anthropic or terminating it completely. “But there’ll have to be an orderly replacement [for] them, if we think that’s the right answer,” the official reportedly added.
Anthropic responds to reports on ‘feud’ with the Pentagon
An Anthropic spokesperson responded to the reports, emphasising the company’s continued support for national security objectives and flatly denying a feud, saying the company has “not discussed the use of Claude for specific operations with the Department of War. We have also not discussed this with any industry partners outside of routine discussions on strictly technical matters.”

“We remain committed to using frontier AI in support of U.S. national security,” the spokesperson said, though the company has not indicated any willingness to eliminate its ethical guardrails. “Claude is used for a wide variety of intelligence-related use cases across the government, including the DoW, in line with our Usage Policy. Anthropic’s conversations with the DoW to date have focused on a specific set of Usage Policy questions — namely, our hard limits around fully autonomous weapons and mass domestic surveillance — none of which relate to current operations,” the spokesperson was quoted as saying.

Anthropic’s technology used in operation to capture Venezuela’s President

Recently, reports claimed that the US military used Claude in the operation to capture Venezuela’s Nicolas Maduro, through Anthropic’s partnership with AI software firm Palantir. An executive at Anthropic reportedly reached out to an executive at Palantir to ask whether Claude had been used in the raid. “It was raised in such a way to imply that they might disapprove of their software being used, because obviously there was kinetic fire during that raid, people were shot,” the official was quoted as saying.