A US federal judge has raised concerns that the Pentagon’s decision to blacklist Anthropic may be punitive, after the AI firm refused to relax safeguards tied to the use of its models in military surveillance and autonomous weapons.
During a court hearing in California, US District Judge Rita Lin said Anthropic’s designation as a “national security supply-chain risk” appeared less like a security measure and more like an effort to penalize the company, according to Reuters.
“It looks like DOW [Department of War] is punishing Anthropic for trying to bring public scrutiny to this contract dispute,” Lin said.
The designation, announced in February, restricts the company’s access to defense contracts and limits its ability to participate in Pentagon-backed programs.
Last week, a Pentagon directive set a 180-day deadline for all units to remove Anthropic’s AI products from military systems.
Legal experts also questioned whether the Pentagon has the authority to impose such sweeping restrictions, arguing that existing procurement laws may not support a blanket ban that extends beyond federal contracts.
The ongoing dispute reflects a growing fault line over how military AI should be governed: through strict, enforceable safeguards, or through lighter constraints that preserve operational flexibility.
As AI moves closer to deployment at the edge, the debate is shifting from capability to control, raising a central question: who ultimately decides how these systems are used?