Japan’s Ministry of Defense has drawn a red line on artificial intelligence in the military: AI can help with defense operations, but humans stay in charge when it comes to lethal force.
The policy sets up a three-step system for managing risk.
First, projects are labeled high- or low-risk depending on how much AI shapes a weapon’s destructive power. High-risk systems must clear legal reviews proving they follow international law and keep humans in control. Fail the test, and the program is shut down.
Those that pass face seven more checks, from safety and transparency to bias prevention and operator control. The rules also flat-out ban “killer robots” — weapons that can select and attack targets on their own.
Instead, Japan sees AI playing support roles, such as surveillance, logistics, cybersecurity, and command assistance. Ongoing projects include remote monitoring systems, automated warehouses, and tools to predict supply needs.
The Global Stakes
AI is already reshaping the battlefield. Russia has tested AI drones in Ukraine, while autonomous systems are showing up in Middle East conflicts.
China, Russia, and North Korea are also ramping up AI for hacking, propaganda, and strike planning.
Japan is trying to chart a different path. Its framework builds on a “human-first” stance laid out in a 2024 UN paper rejecting killer robots. It also ties into a new national AI law, passed in May, that promotes innovation while putting firm guardrails in place.
Tokyo is pushing this approach abroad as well. The country has joined US- and G7-led initiatives to set military AI standards, and it backs a UN treaty banning killer robots by 2026 — a move facing fierce opposition from Beijing, Moscow, and Pyongyang.