
Stop Killer Robots raises alarm over Google owner’s reversal of its policy not to use AI for weapons

Tech giant Alphabet’s decision to renege on its pledge not to use Artificial Intelligence (AI) for weapons should push political leaders to establish international legal frameworks regulating AI in weapons.

On 5 February 2025, Alphabet, which owns Google, reneged on its pledge not to use AI for weapons. Despite the company’s stated approach to AI, which it claims is “grounded in understanding and accounting for its broad implications for people”, this policy decision increases the risks of automated harm in military, law enforcement, border control and surveillance contexts.

Incorporating artificial intelligence into military applications carries significant risks. These include the potential for unintended escalation of conflicts, a lack of accountability in decision-making, and incompatibility with international humanitarian law (IHL). Further, the possibility of AI systems being hacked or misused by adversaries, along with the absence or erosion of meaningful human control in critical situations, could lead to increased civilian harm.

The use of automation in warfare increased in 2024. Notable examples include the Lavender and Habsora decision support systems employed by the Israeli military in Gaza, with devastating humanitarian results. Also in 2024, other AI developers, including Meta, Anthropic and OpenAI, rolled back similar policies restricting the use of AI for military applications, allowing US intelligence and defence agencies to use their AI systems. Alphabet’s decision, alongside theirs, increases the likelihood that similar systems will be developed and deployed.

Despite the gravity of this development, tech workers have a history of successfully opposing unethical uses of technology, and we have a track record of supporting them. In 2018, Stop Killer Robots allied with Google workers whose protests helped end the company’s involvement in Project Maven, a US military contract that raised ethical and other concerns over the appropriate use of artificial intelligence and machine learning. In 2019, then Campaign Coordinator Mary Wareham wrote to the heads of Google and Alphabet to recommend “a proactive public policy” committing the companies to never engage in work aimed at the development or acquisition of fully autonomous weapons systems. In the same year, Stop Killer Robots visited the offices of Microsoft, Amazon, Clarifai and Palantir to call on them not to build autonomous weapons. Stop Killer Robots would support similar efforts by tech workers seeking to resist Alphabet’s policy change on the use of AI for weapons.

For over a decade, Stop Killer Robots has urged government and industry stakeholders to maintain meaningful human control over the use of force. We have highlighted the serious risks of delegating decision-making to machines, which, aside from digital dehumanisation and algorithmic bias, include a lack of accountability for the systems’ actions. Further, without adequate accountability or human control over the use of force, there is an increased potential for a destabilising arms race and a lowered threshold for going to war, which is especially concerning in the current political climate and in the absence of regulation. This development from Alphabet makes it clear that we need regulation now to safeguard humanity from these threats.

Stop Killer Robots
