As artificial intelligence plays a growing role in modern conflicts, concerns are mounting that its use in targeting and decision-making could cause civilian harm.
Professor Toby Walsh, an AI researcher at the University of New South Wales, warned that AI tools may fail to reliably distinguish civilians from combatants, and that their speed and opacity can further limit meaningful human oversight.
Against this backdrop, Australia has introduced a new policy to govern the military use of AI.
The policy sets three core requirements for the Department of Defence: compliance with Australian and international law, human accountability in decision-making, and proportionate risk controls.
It also requires systems to be explainable, reliable, and secure, with safeguards designed to reduce bias and unintended harm.
Any AI-enabled capability used in weapons systems must also undergo legal review to verify compliance with applicable law.
Risk controls should be applied across the full lifecycle of AI systems, from testing and evaluation through to deployment and ongoing monitoring.
US lawmakers are pushing to impose similar limits on military AI.
Senator Elissa Slotkin has introduced legislation that would ban autonomous lethal decision-making, restrict AI-driven surveillance, and prohibit the use of AI in nuclear weapons launches.