A Phalanx close-in weapons system (CIWS), which Human Rights Watch calls a prototype for fully autonomous weapons—or “killer robots”—fires at sea. (Photo: U.S. Navy/flickr/cc)
Fully autonomous weapons, or “killer robots,” present a legal and ethical quagmire and must be banned before they can be further developed, a new human rights report published Thursday urges ahead of next week’s United Nations meeting on lethal weapons.
The report, titled Mind the Gap: The Lack of Accountability for Killer Robots, was jointly published by Human Rights Watch and Harvard Law School’s International Human Rights Clinic and outlines the “serious moral and legal concerns” presented by the weapons, which would “possess the ability to select and engage their targets without meaningful human control.”
Although fully autonomous weapons do not yet exist, their “precursors” are already in use, such as the Iron Dome in Israel and the Phalanx CIWS in the U.S., the report states.
Under current law, the makers and users of killer robots could escape accountability for unlawful deaths and injuries if development of the weapons is allowed to proceed. Letting weapons that operate without human control make decisions about the use of lethal force could lead to violations of international law and make it difficult to hold anyone accountable for those crimes. Moreover, civil liability would be "virtually impossible, at least in the United States," the report found.
“No accountability means no deterrence of future crimes, no retribution for victims, no social condemnation of the responsible party,” lead author and HRW Arms Division researcher Bonnie Docherty said in a press release on Thursday. “The many obstacles to justice for potential victims show why we urgently need to ban fully autonomous weapons.”
The UN will discuss killer robots alongside more conventional arms at its upcoming meeting on inhumane weapons in Geneva, Switzerland, from April 13-17. In the past, the UN has used the gathering to preemptively ban military tools such as blinding lasers.