We are working to have states adopt an international legal treaty that ensures meaningful human control over the use of force and rejects the automation of killing. It should do this through:
- ‘Prohibitions’ – banning autonomous weapon systems that do not allow for meaningful human control, and banning all systems that use sensors to target humans
- ‘Regulations’ – additional rules to ensure that all other autonomous weapon systems are used with meaningful human control in practice.
The scope of technologies
A new treaty should apply to the range of weapons systems that detect and apply force to a target based on sensor inputs, rather than an immediate human command. In these systems, after activation by a person, there is a period of time where the weapon system can apply force to a target without additional human approval. This means the specific object to be attacked, and the exact time and place of the attack, are determined by sensor processing, not by humans. This range of systems is what we call ‘autonomous weapons’.
The shape of an international legal treaty
Within that broad scope of technologies, a new legal treaty should combine prohibitions and regulations.
A new treaty should prohibit:
- Autonomous weapon systems that do not allow for meaningful human control. Examples include autonomous weapon systems where the location or duration of their functioning cannot be limited, or weapon systems whose mission parameters (time and space of operation, type of target, etc.) could change during an operation without human approval. We sometimes call these systems ‘fully autonomous weapons.’
- Autonomous weapon systems that would target humans, even when used with human control. By processing people through sensors, such systems reduce people to objects, and so are dehumanizing to civilian and military victims alike. They also pose other moral, legal and practical problems. We reject the automated killing of people.
Of course, systems that are not prohibited must still be used with meaningful human control. To ensure this, the treaty should include what are called ‘positive obligations’ – rules on the design, development and use of other autonomous weapon systems. Together, these rules should require that users understand both the way a weapon system works and the specific context in which it might be used. These understandings are vital for a user to predict the effects the system will create, and so to make real moral and legal judgements.