The US military used Anthropic’s AI chatbot Claude during the operation to capture former Venezuelan President Nicolás Maduro, just days before the two sides hit a wall over how the technology should and shouldn’t be deployed in combat.

Claude was accessed through Anthropic’s partnership with Palantir Technologies, whose tools are already embedded across Pentagon and federal law enforcement operations, the WSJ reported. The Maduro mission, which included bombing several sites in Caracas last month, raises pointed questions about whether Claude’s use stayed within Anthropic’s own guidelines, which explicitly prohibit facilitating violence, weapons development, and surveillance.
Anthropic’s $200 million Pentagon contract now hangs in the balance
An Anthropic spokesperson told the WSJ the company couldn’t confirm whether Claude was used in any specific operation, classified or otherwise. But the spokesperson added that “any use of Claude—whether in the private sector or across government—is required to comply with our Usage Policies.” The Defense Department declined to comment.

The revelation lands at a tense moment. Reuters reported last week that Anthropic and the Pentagon had reached a standstill over a contract worth up to $200 million. At the heart of the dispute: Anthropic wants guardrails preventing Claude from being used for autonomous weapons targeting and domestic surveillance. The Pentagon, backed by a January 9 department memo, argues it should be free to deploy commercial AI tools however it sees fit, as long as US law isn’t broken.
CEO Dario Amodei drew a clear line on military use of AI
In a lengthy blog post published this week, Anthropic CEO Dario Amodei wrote that AI should support national defense “in all ways except those which would make us more like our autocratic adversaries.” He specifically flagged autonomous weapons and mass surveillance as bright red lines that democracies shouldn’t cross.

Defense Secretary Pete Hegseth has taken a different view. At a January event announcing the Pentagon’s deal with Elon Musk’s xAI, Hegseth said the agency wouldn’t “employ AI models that won’t allow you to fight wars,” a remark widely understood as a shot at Anthropic.
The Pentagon wants AI companies on classified networks with fewer restrictions
Reuters also reported this week that the Pentagon is pushing AI companies, including Anthropic, OpenAI, and Google, to deploy their models on classified military networks with fewer of the safety restrictions typically applied to civilian users. Anthropic remains the only AI developer whose models are currently available in classified settings, though they are still bound by the company’s usage policies.

OpenAI, meanwhile, has already agreed to loosen several of its standard guardrails for Pentagon use on an unclassified network rolled out to more than 3 million Defense Department employees.