US forces used Anthropic’s AI model Claude in the classified operations that led to the capture of Venezuelan leader Nicolás Maduro, according to The Wall Street Journal.
The model was deployed via a collaboration with Palantir, people familiar with the matter told the newspaper, reflecting how commercial AI systems are integrated into operational defense environments.
Anthropic’s usage policies explicitly prohibit its models from being used to support violence, weapons development, or surveillance.
While precise details about Claude’s role in the Maduro operation have not been made public, military planners increasingly view large language models as tools for synthesizing complex data, analyzing intelligence, and supporting time-sensitive decision-making.
This priority is underscored by Defense Secretary Pete Hegseth, who said that “the future of American warfare is here, and it’s spelled AI.”
“As technologies advance, so do our adversaries,” he added. “But here at the War Department, we are not sitting idly by.”
Axios reported that the Pentagon has clashed with Anthropic over the restrictions and safeguards on its AI models, with officials considering designating the company a “supply chain risk.”
The use of AI during the operation reflects a broader Pentagon push to embed generative AI across military workflows, including through GenAI.mil, a platform intended to support planning, analysis, and logistics at scale.