Hero image: graphic shows the countries with the strongest opposition to killer robots.

RightsCon: Preventing digital dehumanisation

On 10 June, the Campaign to Stop Killer Robots held its panel session “Preventing Digital Dehumanisation” as part of Access Now’s annual RightsCon event. Celebrating its 10th anniversary, RightsCon ran from 7 to 11 June 2021 and took place virtually for the second year. The conference gathered individuals, stakeholders, and activists from around the world to discuss key issues at the intersection of technology and human rights. This marks the second time the Campaign has participated in RightsCon.

The hour-long session was live-tweeted and focussed on how emerging technologies, such as machines with automated or autonomous decision-making capabilities, can reproduce or exacerbate social harms, inequalities, and oppression. The event featured Lucy Suchman, Professor Emerita of the Anthropology of Science and Technology at Lancaster University; Dr Sarah Shoker, Postdoctoral Fellow at the University of Waterloo and Founder of Glassbox; Laura Nolan, software engineer and member of the International Committee for Robot Arms Control (ICRAC); and Mutale Nkonde, Executive Director of AI for the People. The Campaign’s Outreach Manager, Isabelle Jones, moderated the session, which drew over 85 participants.

The session opened with a discussion of what digital dehumanisation means, as outlined previously in a blog post by Nolan. Digital dehumanisation was described as a phenomenon in which reliance on software, machines, and systems to analyse, evaluate, and make decisions about humans reduces a person to “something to be acted upon [as] an object, not a person with human rights and agency.” Nkonde highlighted racial discrimination as one aspect of digital dehumanisation: the use of biased training datasets lends itself to the same structures of “old racism using new machines” that continue to feed increased violence against Black bodies and drive further inequality.

Such systems – from predictive policing to social scoring – may be well-intentioned, but they can cause real harm as they perpetuate cycles of oppression and violence. Speakers pointed to the robot dog deployed in New York City and the Bugsplat programme as examples of how these technologies are already being used. Suchman went on to discuss the “learning” that goes into developing what are essentially targeting systems. Training machines to differentiate between datasets of apples and oranges, or dogs and cats, is not the same as training a system to distinguish between civilians and combatants, who hold varying human identities and relations. Instead, what we get is a “very crude stereotyping…then claiming that that enables more precise and accurate targeting.”

Shoker also highlighted that when “using any kind of machine learning software, essentially you are auditing human life, selecting certain attributes for the purpose of human intervention”. Given the opacity of machine learning, this raises serious concerns that the identities ascribed to us ultimately reduce human agency and infringe upon rights. Shoker drew on the example of patriarchal and gendered assumptions used in targeting to identify military-aged males, which often place boys as young as 15 in the category of combatants.

The session closed with reflections from each panellist on what we can do to take action and drive policy change to prevent future harms. These included calls for improved regulation of data through audits and human rights impact assessments, guidance for compliance, re-imagining existing narratives for policy change, de-mystifying technology, and collective action.

Isabelle

Join us

Keep up with the latest developments in the movement to Stop Killer Robots.
