Just hours after Trump banned Anthropic, US govt used company’s tech in Iran strikes


Just hours after Donald Trump banned Anthropic, the US government used the company's tech in Iran strikes; analysts are not surprised

The US military reportedly used artificial intelligence (AI) tools developed by Anthropic in a major strike on Iran just hours after President Donald Trump ordered federal agencies to stop using the technology, the Wall Street Journal reported. According to the report, commands around the world, including US Central Command in the Middle East, use Anthropic's Claude AI tool. Citing people familiar with the matter, the report said the command uses the tool for intelligence assessments, target identification and battle-scenario simulation, even as tensions between the AI firm and the US government have grown over how the technology should be used in warfare.


Anthropic labelled a national security threat

On Friday, February 27, President Trump directed all federal agencies to immediately cease using Anthropic's AI tools and declared that the company posed a national security risk, following a dispute over military access to the technology. The move followed weeks of disagreement between Anthropic and the Pentagon, with officials pushing for broader use of Claude in military applications and the company resisting certain terms.

Despite the directive, US forces carried out an air operation against targets in Iran using systems supported by Claude, highlighting how deeply the AI had already been integrated into military planning and execution, the WSJ reported. Military officials did not disclose the extent of Claude's involvement.

Broader feud over AI use

The dispute stems from Anthropic's refusal to allow unrestricted military use of its AI models, especially for controversial purposes such as fully autonomous weapons or mass surveillance. The Pentagon had given the company a deadline to agree to its terms or face consequences, including removal from defense contracts.

Anthropic has said it plans to challenge the government's designation of the company as a supply chain risk, arguing that its ethical safeguards are necessary. As the Pentagon moves to transition away from Claude over the coming months, officials are also turning to other AI providers, including OpenAI, to fill military needs.

The situation highlights growing tensions over how artificial intelligence should be used in national security and military operations, and raises questions about the future role of AI in defense planning.
