The Pentagon’s rapid adoption of artificial intelligence tools is raising concerns that growing reliance on large language models (LLMs) could weaken critical thinking skills among military personnel.
A study published in Trends in Cognitive Sciences suggests LLMs may standardize how users think and reason, potentially reducing cognitive diversity.
Additional research from Wharton and Princeton highlights a tendency for users to over-rely on AI-generated outputs even when they are incorrect, with “sycophantic” interactions reinforcing bias and unwarranted confidence in flawed results.
“The more you use AI, the more you will use your brain in a different way,” said Pierre Vandier, NATO’s Supreme Allied Commander Transformation, underscoring the need for oversight and critical evaluation.
As reported by Defense One, military commanders are already taking note of concerns that increased reliance on AI tools could affect how personnel process information, evaluate outputs, and distinguish accurate data from errors.
Rising Policy Pressure
The Pentagon is simultaneously pushing to scale AI across operations, with Defense Secretary Pete Hegseth outlining plans to build an “AI-first” military and expand adoption across domains.
However, Defense One reported that there is little indication the Pentagon is systematically monitoring its effects or putting safeguards in place to preserve critical thinking.
Broader concerns about military AI applications are also beginning to surface in policy discussions, with US lawmakers moving to impose limits on how AI can be deployed in defense operations.