(Representative image only.) Soldiers analyzing data. Photo: US Air National Guard

A rare rupture between the Pentagon and a major US AI provider is raising broader questions about control, risk, and the limits of military use of artificial intelligence tools.

The Defense Department has ordered all military units to remove AI products developed by Anthropic within 180 days, following the company’s controversial designation as a “supply chain risk,” according to an internal memo obtained by CBS News.

Signed by Chief Information Officer Kirsten Davies, the directive applies across all US military systems, including highly sensitive areas such as nuclear command, missile defense, and cyber operations.

It also requires defense contractors working with the Pentagon to stop using Anthropic’s products in any military-related activity within the same timeframe.

Officials warned that potential vulnerabilities in the company’s technology could be exploited by adversaries, posing serious risks to military operations.

Anthropic is currently the only AI provider deployed on the Pentagon’s classified systems, with its Claude model reportedly used by the US military in Iran-related operations.

The company’s AI models are designed to process large volumes of intelligence data, helping analysts identify patterns and prioritize targets to support faster decision-making.

President Donald Trump at the 193rd Special Operations Wing. Photo: Staff Sgt. Tony Harp/US Air National Guard

AI Policy Standoff

According to CBS News, designating Anthropic as a supply chain risk is unprecedented, marking the first time a US tech firm has received such a classification from the Pentagon.

The company has sued the US government, arguing the designation amounts to unlawful retaliation after negotiations collapsed over proposed limits on how its AI models could be used in military settings.

Proposed restrictions included prohibiting mass surveillance of US citizens and preventing the use of fully autonomous weapons without human oversight.

The Pentagon rejected these conditions, arguing they would constrain lawful military applications.

Its position, outlined in a January memo signed by Defense Secretary Pete Hegseth, stressed the need for AI systems free from external usage restrictions.

Meanwhile, the broader AI race has not slowed. As the Anthropic dispute unfolded, OpenAI chief executive Sam Altman announced that his company had reached an agreement with the US Department of Defense.
