
Human rights, automation, and the dehumanization of lethal decision-making

This article considers the recent literature concerned with establishing an international prohibition on autonomous weapon systems.

This paper addresses concerns raised by some scholars that a ban on fully autonomous weapons would be problematic. It argues for a theoretical foundation for such a ban grounded in human rights and humanitarian principles that are not only moral but also legal.

In particular, there is a duty upon individuals and states in peacetime, as well as upon combatants, military organizations, and states in situations of armed conflict, not to delegate to a machine or automated process the authority or capability to initiate the use of lethal force independently of human determinations of its moral and legal legitimacy in each and every case. This paper argues that it would be beneficial to establish this duty as an international norm, and to express it in a treaty, before a broad range of automated and autonomous weapon systems begins to appear that are likely to pose grave threats to the basic rights of individuals.

The original article can be found here.

Peter Asaro, International Review of the Red Cross

