The Pentagon is pressing AI developers to let advanced models run on classified military networks with fewer built-in restrictions.
The push comes as the US military ramps up efforts to integrate artificial intelligence into operational decision-making, particularly in classified environments where AI tools could assist with mission planning, intelligence fusion, and targeting analysis.
For defense planners, the challenge is balancing speed and autonomy against risks like model errors, hallucinations, or loss of human control.
An official familiar with the matter told Reuters that the Pentagon is now “moving to deploy frontier AI capabilities across all classification levels,” underscoring the demand for tools that can operate in highly classified environments.
Frontline AI Tensions
The move aligns with the Department of Defense’s 2026 AI Strategy, which calls for rapid deployment of commercial AI at “wartime speed,” cutting non-statutory barriers, and expanding AI use across classified warfighting, intelligence, and enterprise operations.
The Pentagon’s push highlights a split among AI developers over how far they are willing to adapt models for military use.
OpenAI has taken a more cooperative stance, agreeing to make tools like ChatGPT available for military use with modified safeguards.
Anthropic, by contrast, has resisted loosening restrictions, arguing that strong safeguards are necessary to prevent misuse and manage risks in military and cyber contexts.