Google’s AI Policy Shift: No More Ban On Weapons And Surveillance Tech
In a significant departure from its earlier commitment not to apply artificial intelligence to weapons or surveillance, Google has updated the ethical guidelines governing its use of the technology.
The company’s original 2018 AI principles explicitly prohibited AI applications in four areas: weapons, surveillance, technologies that could cause overall harm, and uses violating international law and human rights.
Now, in a blog post, Demis Hassabis, head of AI at Google, and James Manyika, senior vice president for technology and society, explained the change. They pointed to AI's growing presence and the need for companies based in democratic nations to work with governments on national security.
“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape,” Mr Hassabis and Mr Manyika wrote. “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.”
The updated principles emphasise human oversight and feedback to ensure AI complies with international law and human rights standards. Google also pledges to test its AI systems to mitigate unintended harmful effects.
The update marks a major shift from Google's earlier position, which drew attention in 2018 when the company faced internal protests over its Pentagon contract. Known as Project Maven, the contract involved using Google's AI to analyse drone footage. Thousands of employees signed an open letter urging Google to stay out of military projects, saying, "We believe that Google should not be in the business of war." As a result, Google chose not to renew the contract.
Since OpenAI launched ChatGPT in 2022, AI has advanced rapidly, but regulation has struggled to keep pace. Against this backdrop, Google has eased its self-imposed restrictions. Mr Hassabis and Mr Manyika noted that AI frameworks published by democratic nations have helped shape Google's understanding of AI's risks and potential.