The so-called Department of War and Anthropic are locked in a very public fight over a $200 million AI contract.
The Trump administration has worked hard to make it a culture war story and recently escalated by ordering federal agencies to cease using Anthropic’s technology.
David Sacks, the president’s AI czar, calls Anthropic’s position “woke AI.”
Defense Secretary Pete Hegseth has reportedly embraced the dispute as a chance to send a message — not just to Anthropic, but to every AI company now negotiating with the Pentagon.
That political framing obscures a far more uncomfortable question: Why can’t the Pentagon commit to two of the most foundational constitutional principles in American life?
Red Lines
Anthropic has attempted to enforce its standard usage policy, which draws two red lines.
Its AI model, Claude, will not be used for mass surveillance of Americans, and it will not be used to power fully autonomous weapons: systems that target, fire, or kill without a human in the decision loop.
A January memo signed by Hegseth makes the Pentagon’s position explicit: The department needs AI models “free from usage policy constraints that may limit lawful military applications.”
That is not a negotiating posture. It is a doctrine — one that demands AI companies hand over their technology with no conditions attached, and simply trust that the government will use it wisely.
Anthropic’s competitors have largely obliged, at least in public.
OpenAI, Google, and xAI have each agreed to lift their standard safeguards for the military’s unclassified systems.
While OpenAI and Google have signaled that extending those terms to classified systems may require new agreements, Elon Musk's xAI has agreed to the Pentagon's terms across all classification levels without apparent reservation. It was also the only frontier AI company to bid on the Pentagon's autonomous drone software contest.
Whether that reflects principle or proximity to power is a fair question.
Constitutional Stakes
Let’s be precise about what the no-conditions standard actually demands: the capability to surveil Americans at scale using AI, and to deploy weapons that operate without human authorization.
These aren’t edge cases conjured by a squeamish tech company. They are constitutional questions that the American people have never formally authorized the Pentagon to resolve.
This should concern Americans across the political spectrum.
The Founders’ suspicion of standing armies and unchecked executive power wasn’t abstract; it was a direct response to British general warrants that allowed authorities to search and surveil colonists without cause.
That grievance is the direct ancestor of the Fourth Amendment.

Public Debate
Today, the Pentagon already has sweeping authority to collect data on Americans, from social media activity to concealed carry permits.
AI doesn’t merely expand that authority; AI transforms it.
Surveillance that once required enormous institutional resources can now happen automatically, continuously, and at scales the Founders could not have imagined.
The autonomous weapons question is equally grounded in law that already exists.
Anthropic isn’t inventing a new standard; it is asking the Pentagon to honor one it is already required to follow.
Department of Defense Directive 3000.09 has long required that weapon systems allow commanders and operators to exercise "appropriate levels of human judgment over the use of force."
If the military cannot commit to keeping a human in the loop on lethal decisions, that is not a vendor problem. It is a policy choice that demands public debate, not a procurement workaround.
Practical Stakes
The practical stakes reinforce the constitutional ones.
Claude has been widely used inside Pentagon systems, including classified networks, though the Trump administration has ordered its phased removal from federal systems.
Pentagon officials have privately acknowledged that competing models are “just behind” for specialized government applications, and that disentangling from Anthropic would be “massively disruptive.”
A “supply chain risk” designation — a label normally reserved for foreign adversaries — would force every defense contractor to certify it has no connection to Anthropic, whose technology is already embedded across eight of the ten largest American companies.
Dean Ball, a former Trump AI adviser who helped shape the administration’s AI Action Plan, put it plainly: It would be “hard to think of a more strategically unwise move for the US military to make.”
Warning Signal
The “woke AI” framing is an attempt to make constitutional concerns sound like cultural grievances. It shouldn’t work.
No mass surveillance of Americans. No autonomous killing machines. These are not radical positions. They are values embedded in the Bill of Rights and in longstanding military doctrine.
The fact that the Pentagon cannot commit to them, and is threatening to punish a company that insists on them, should alarm every American regardless of political party.
Even if you distrust Anthropic’s motives entirely, the underlying question stands: Does the US military need, or deserve, AI tools it can use without any limits at all?
Congress should treat this dispute not as a contractor squabble, but as a warning signal.
If the government wants unfettered access to frontier AI, it needs to build the legal infrastructure to justify that trust: updated surveillance statutes for the AI era, codified human-control requirements for autonomous weapons, and independent oversight of high-stakes military deployments.
Without that infrastructure, the boundaries of American liberty are being set in procurement negotiations. That is not a stable arrangement, and it will become more unstable as the technology grows more powerful.
Whatever the immediate outcomes, the question this dispute has exposed won't go away: Should the US military have access to AI tools it can use without any limits at all?
The answer to that question belongs to the American people and their elected representatives — not to defense contractors, Silicon Valley executives, or the Secretary of Defense.
Congress should act like it.

Riki Parikh is the Policy Director at The Alliance for Secure AI, a nonprofit organization that educates Americans about the potential risks of AI.
The views and opinions expressed here are those of the author and do not necessarily reflect the editorial position of Military AI.