U.S. Air Force Senior Airman Martin Gamez Corral, 28th Health Care Operations Squadron ambulance services medical technician, operates a virtual reality simulator at Ellsworth Air Force Base, S.D., Oct. 30, 2025. The VR system allows Airmen to train for specific simulated healthcare scenarios. (U.S. Air Force Photo by Airman 1st Class Addison Bolt)

Military planners are moving beyond whether AI can support battlefield medicine to a more critical question: when will humans trust it to decide?

UK and US defense teams have tested how far soldiers are willing to delegate life-or-death triage decisions to artificial intelligence under pressure.

Run by the Defence Science and Technology Laboratory (Dstl) with DARPA, the trials explored whether AI aligned with human priorities — such as prioritizing comrades, treating the most survivable, or even assisting attackers — affects willingness to hand over control.

Participants first mapped their own decision-making styles through virtual and desktop scenarios. 

An AI system then acted as lead medic, either matching or conflicting with those preferences.

Crucially, soldiers were asked whether they would delegate decisions without being told, until after the exercise, that they had been interacting with an AI.

The evaluations were held last October at Royal Air Force Bases Colchester and Brize Norton and conducted under DARPA’s broader “In the Moment” program, which explores how and when troops consent to AI-driven decision-making in high-risk environments.

Promoting AI Reliance

According to Dstl, the trials showed how confidence in AI could help medics treat more casualties faster, without losing the judgment of an experienced practitioner.

Parts of those assessments, along with the associated post-trial analysis, will now shape the lab’s ongoing research into the impact of AI systems, particularly on decision-making processes and human-machine teaming.

“In the future we’re expecting a lot more information to be coming into the warfighter,” Dstl Human Factors Specialist Suzy Broadbent stated.

“We’re really interested in how the warfighter makes decisions based on some of this information and how potentially AI systems can help with that.”
