Autonomous Weapons and Operational Risk
This report examines the operational risks posed by future autonomous weapons that would select their own targets, including the potential for catastrophic accidents.
Automation is increasing in a wide range of military systems, including future weapons. In response, a growing number of voices are calling for urgent discussion on the appropriate role of human and machine decision-making in the use of lethal force. The Center for a New American Security’s Ethical Autonomy Project examines the legal, moral, ethical, and policy issues associated with autonomous weapons—weapons that would select and engage targets on their own.
This paper is aimed at helping defense professionals think clearly and objectively about the possible risks associated with autonomous weapons. Autonomous weapons largely do not yet exist, and their military costs and benefits, while open to speculation, are not yet clearly known. What is clear, however, is that they raise novel questions of risk. The essence of autonomy is delegating to a machine a task previously done by a person. This raises the important question of how to retain effective human control over the machine’s behavior, and of the risks, in both probability and consequence, associated with a loss of control.
Consideration of the key elements required for meaningful human control should provide a starting point for any assessment of developing technologies in the context of autonomous weapons systems.