UN Secretary-General António Guterres, in a video message broadcast on the first day of the consultations, stressed that “machines with the power and freedom to take lives without human control are politically unacceptable and morally repugnant.” He reiterated his call to finalize a binding legal instrument by 2026 that keeps AI combat drones under meaningful human oversight, and emphasized the need for a global legal framework specifically addressing Lethal Autonomous Weapons Systems (LAWS).
Advocacy groups such as Human Rights Watch and Stop Killer Robots warned that, without strong international pressure, defense industries tend to prioritize technological gains over humanitarian consequences. Laura Nolan, campaign director at Stop Killer Robots, stated there is currently “no guarantee that the defense industry will adequately self-regulate,” underscoring the urgent need to ban fully autonomous systems that operate without human intervention. Advocates argue that current safeguards are too weak to prevent human rights abuses arising from erroneous target identification.
Academics underscored the importance of FATE principles—Fairness, Accountability, Transparency, and Ethics—as ethical foundations for developing AI combat drones. A May 2025 report added traceability and governability to ensure every attack is auditable and final control remains with humans. UN consultations noted that without robust audit mechanisms, fatal errors on the battlefield risk increasing civilian casualties.
Countries including the United States, United Kingdom, and France have developed national guidelines for phased testing of autonomous systems but remain reluctant to commit to binding international rules. Meanwhile, Russia, China, and India favor voluntary guidelines over formal treaties. A Reuters report highlighted that even as the technology matures, global agreement remains stalled by conflicting national strategic interests.
In response, several UN member states proposed additional protocols to the Geneva Conventions mandating registration of autonomous weapon systems, AI testing standards for combat environments, and a “human in the loop” requirement for all lethal actions. These draft protocols are expected to be presented at the next official UN meeting in September 2025. If approved, they would mark the international community's first binding legal step toward regulating AI combat drones.
AI-driven drone use has been documented in recent conflicts, including those in Ukraine and Gaza. Reports indicate intensive use by Russia and Ukraine of semi-autonomous drones for reconnaissance and small-scale missile strikes. Israel employs loitering munitions capable of automatic radar target recognition. Human Rights Watch cautions that algorithmic misidentification in conflict zones can lead to human rights violations without clear accountability.
The key challenge lies in the asymmetry between rapid AI technological innovation and slow legislative processes. Field practices have outpaced proposed legal and ethical standards. Experts warn that without a global oversight framework, the AI arms race will escalate, complicating international control over AI combat drone use. This informal UN session aims to generate stronger political momentum to draft initial protocols before the 2026 deadline.
Sources
- https://www.un.org/sg/en/content/sg/statement/2025-05-12/secretary-generals-video-message-the-informal-consultations-lethal-autonomous-weapons-systems
- https://www.reuters.com/sustainability/society-equity/nations-meet-un-killer-robot-talks-regulation-lags-2025-05-12
- https://www.hrw.org/news/2025/05/21/un-start-talks-treaty-ban-killer-robots
- https://www.reachingcriticalwill.org/disarmament-fora/others/informal-consultations-laws
- https://meetings.unoda.org/ccw/convention-on-certain-conventional-weapons-group-of-governmental-experts-on-lethal-autonomous-weapons-systems-2025
- https://globaleducationnews.org/regulating-autonomous-weapons-un-efforts-and-global-challenges-in-2025
- https://digitallibrary.un.org/record/4071100