Military planners are moving beyond whether AI can support battlefield medicine to a more critical question: when will humans trust it to decide?
UK and US defense teams have tested how far soldiers are willing to delegate life-or-death triage decisions to artificial intelligence under pressure.
Run by the Defence Science and Technology Laboratory (Dstl) with DARPA, the trials explored whether an AI's alignment with human priorities, such as prioritizing comrades, treating the most survivable casualties, or even assisting attackers, affected soldiers' willingness to hand over control.
Participants first mapped their own decision-making styles through virtual and desktop scenarios.
An AI system then acted as lead medic, either matching or conflicting with those preferences.
Crucially, soldiers were not told they were interacting with an AI until after the exercise, so their willingness to delegate decisions was gauged without that knowledge.
The evaluations, held last October at Royal Air Force bases Colchester and Brize Norton, were conducted under DARPA's broader "In the Moment" program, which explores how and when troops consent to AI-driven decision-making in high-risk environments.
Promoting AI Reliance
According to Dstl, the trials showed how confidence in AI could help medics treat more casualties faster, without losing the judgment of an experienced practitioner.
Findings from those assessments and the associated post-trial analysis will now help shape the lab's ongoing research into the impact of AI, particularly in decision-making processes and human-machine teaming.
“In the future we’re expecting a lot more information to be coming into the warfighter,” Dstl Human Factors Specialist Suzy Broadbent said.
“We’re really interested in how the warfighter makes decisions based on some of this information and how potentially AI systems can help with that.”