2024 Nobel laureate in Physics raises concerns about killer robots
Geoffrey Hinton, a “Godfather of Artificial Intelligence”, has endorsed the call for new international law to prohibit and regulate autonomous weapons systems.
The Stop Killer Robots campaign takes note of the awarding of the 2024 Nobel Prize in Physics to Professor Geoffrey E. Hinton, together with John J. Hopfield, in recognition of their foundational work on machine learning with artificial neural networks.
Professor Hinton was one of the first AI experts to endorse the call for new international law to prohibit and regulate autonomous weapons systems. In October 2013, he signed an open letter from more than 270 engineers, computer scientists and AI experts calling for a ban on weapons systems that make the decision to apply violent force autonomously, without any human control. In July 2015, he joined nearly 35,000 AI and tech experts in an open letter that called a military AI arms race “a bad idea” and urged a preventative ban on autonomous weapons operating beyond meaningful human control. In November 2017, he joined other AI experts in urging Canadian Prime Minister Justin Trudeau to take a stand against autonomous weapons, arguing that their development and use crossed a “clear moral line.”
The concerns about autonomy in weapons raised by AI experts, including Geoffrey E. Hinton, must be heeded so that the technical and ethical risks they identify can be addressed in time. As it becomes increasingly clear that artificial intelligence and automated decision-making will play a key role in how we live as individuals, as societies, and as a global community, it is crucial to establish rules for how these technologies apply to the use of force. Our campaign has worked for over a decade to ensure meaningful human control over the use of force, calling for the creation of new international law to regulate autonomy in weapons systems. We believe that setting legal standards through a treaty is critical to ensuring peace and global stability.
We have seen clear and building momentum toward achieving our goal of new international law on autonomous weapons systems. In 2023, our coalition of over 250 civil society organisations in 70 countries successfully lobbied states to adopt the first-ever UN resolution on autonomous weapons. This resolution was adopted by an overwhelming majority of states, demonstrating that there is political will to address the risks posed by autonomous weapons. Further, in the last two years, cross-regional support has manifested in a number of declarations on autonomous weapons that underscore their legal, ethical, and humanitarian impacts and the need for new international law.
Over the last twelve months, there have been increasing reports of the use of military AI and weapons with autonomous capabilities in ongoing armed conflicts, including in Gaza and Ukraine. This has profound humanitarian implications. Accepting the AI-enabled dehumanisation, targeting, and killing of people in military contexts is untenable, and will have significant consequences for policing, border control, and wider society. The 79th UN General Assembly is currently underway, and states have the opportunity to respond to the calls of AI experts and close the gap between regulation and technological development. By supporting the resolution on autonomous weapons, states can make meaningful progress toward an international treaty. It is time for the global community to act with urgency and show continued political leadership on this issue to ensure that life-or-death decisions cannot be delegated to machines.