A dispute over AI safety restrictions on commercial models is exposing a growing tension between private-sector guardrails and the US military’s demand for unrestricted battlefield capability.
Pentagon officials say some provider-imposed limits on how AI can be used could hamstring mission planning and operational execution, or even cause models to halt mid-operation if usage violates contract terms.
The issue came into focus after Emil Michael, the US Defense Department’s top technology official, reviewed contracts governing AI systems already deployed within sensitive military commands, Reuters reported.
He said those agreements, signed under the Biden administration, include limits that could restrict the military's ability to use AI for planning and executing kinetic operations, such as strikes or other actions involving explosives.
“I had a ‘holy, holy cow’ moment,” Michael said. “You couldn’t plan an operation … if it would potentially lead to kinetics.”
The concerns arise amid an ongoing dispute between the Pentagon and Anthropic over safeguards embedded in its Claude model, which was reportedly used in planning operations, including strikes on Iran and the raid that captured Venezuelan leader Nicolás Maduro.
“What we’re not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed,” Michael said.
The clash highlights a broader challenge: as the Pentagon integrates AI into operational systems, it remains dependent on commercial providers and the usage policies attached to their models.
Meanwhile, the military is experimenting with internally controlled alternatives, such as the Maven Smart System, which runs within secure defense networks.