Humans as Data: Migrants, Killer Robots, and You
Ousman Noor is the Government Relations Manager for the Campaign to Stop Killer Robots. Prior to joining the Campaign, Ousman worked as a human rights barrister in London specialising in migrant and refugee justice.
Photo: Greg Bulla.
I’ve spent my career working for migrant and refugee justice, including 10 years as a lawyer in London and a year in Geneva working for migrant rights across Europe. I now work for the Campaign to Stop Killer Robots. Killer robots are weapons that use sensors to select a target and then automatically engage it with force. By design, they remove human control over how, when and against whom or what force is applied. These are not weapons to worry about in some distant future; advances in artificial intelligence and information technology are revolutionising military strategies now.
Killer robots are now being developed by military powers and tested for use against migrants. As tech and human rights organisations have pointed out, borders serve as testing grounds for technologies that are then deployed elsewhere. The only way to stop killer robots from targeting any of us is to ensure they cannot target humans.
After six years of discussions at the United Nations within the framework of the Convention on Certain Conventional Weapons, no agreement has been reached on how to regulate these weapons, giving arms manufacturers practical free rein to develop and test them and leaving policymakers playing catch-up from an increasingly disadvantaged position.
Technologies with autonomous capabilities are already being deployed against migrants, with brutal consequences. In Europe, the European Border and Coast Guard Agency has awarded contracts worth €100m to Airbus and Elbit Systems, an Israeli arms company, to operate unmanned drones for surveillance of migrants and refugees. The drones will fly for up to 36 hours at up to 30,000 feet to detect and report on individual migrants. For an individual crossing the Mediterranean to seek safety from persecution, being targeted by a drone may mean interception and forced return to a detention camp in Libya, where human rights organisations have documented systematic and widespread torture.
New machine-learning algorithms process signals and images at rapid speed, allowing faces to be scanned and compared against stored biometric government records. These technologies are already in use at migrant entry checkpoints by the US and Chinese governments, and can be mounted on drones to enable near-instantaneous identification from the air. Along the US border with Mexico, 55 Integrated Fixed Towers have been constructed with an array of day and night sensors and radars capable of surveilling vast areas of land and processing the collected images without human input. These towers serve to detect and identify individual migrants, without their knowledge, so that force can be applied to prevent their movement across the border.
In Israel, sentry gun systems have been set up along the border with Gaza, designed to create 1,500-metre-deep “automated kill zones”. These systems carry direct-fire machine guns and precision-guided missiles protected by a bulletproof canopy and are capable of operating in all weather conditions, day and night. To detect civilians approaching the border from Gaza, the systems carry an array of cameras, infrared sensors, thermal imagers and laser range sensors. Once an individual is identified, the information is transferred to a commander operating a screen and joystick, who can then trigger the weapon and engage the target, applying lethal force from a distance. Similar systems have been set up by South Korea along the Demilitarised Zone to target individuals approaching from North Korea.
In all these scenarios, automated technologies are replacing the traditional role of trained officers in selecting targets and applying force to human beings. The process dehumanises the target. Instead of a human being, perhaps a young person with a family, a faith and hopes for a better future, attempting to reunite with a loved one across the border, these technologies perceive only data, collected by sensors and automatically processed to determine what force to apply. The human experience is lost and individuals become statistics.
In the context of border control, the potential targets are migrants, but the status of being a migrant is dynamic and includes individuals and families from every country and of diverse identities. In a fast-evolving and unpredictable world, none of us is safe from the possibility of becoming a migrant and a victim of these technologies. Moreover, as these technologies are refined and optimised for border control, nothing prevents them from being deployed in other contexts, such as policing and protest control, or to help an oppressive regime stay in power. The threat of being targeted by these technologies applies to us all.
Killer robots are coming after migrants, and everyone else may be next. The only way to stop their dehumanising effects is to prohibit their use against human beings. The Campaign to Stop Killer Robots is working for an international prohibition on fully autonomous weapons to ensure that meaningful human control is retained over the use of force. We demand that a red line be drawn so that these technologies cannot be used against humans. Without a ban, all of humanity is exposed to the risk of automated killing by devices that perceive humans as data. We must unite to compel our governments to be proactive in building societies that value human life, to work now to achieve a ban, and to maintain control over our future.
To find out more about killer robots and what you can do, visit: www.stopkillerrobots.org
Originally published on Medium.com.