Google removes policy against the use of AI for weapons or surveillance
Google has quietly deleted its commitment not to use AI for weapons or surveillance, a promise that had been in place since 2018.
First spotted by Bloomberg, Google has updated its AI Principles to remove an entire section on artificial intelligence applications it pledged not to pursue. Notably, Google's policy had previously stated that it would not design or deploy AI for use in weapons or in surveillance technology that violates "internationally accepted norms."
It now seems that such use cases may not be entirely off the table.
"There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape," reads Google's Tuesday blog post. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."
Though Google's post concerned the update to its AI Principles, it did not explicitly mention the removal of its ban on AI weapons or surveillance.
When contacted for comment, a Google spokesperson referred Mashable to the blog post.
"[We're] updating the principles for a number of reasons, including the massive changes in AI technology over the years and the ubiquity of the technology, the development of AI principles and frameworks by global governing bodies, and the evolving geopolitical landscape," said the spokesperson.

Google's AI Principles listing "applications we will not pursue," as of January 30.
Credit: screenshot: Mashable / Google
Google first published its AI Principles in 2018, following significant employee protests against its work with the U.S. Department of Defense on Project Maven. (The company had already infamously removed "don't be evil" from its code of conduct that same year.) Project Maven aimed to use AI to improve weapons targeting systems, interpreting video footage to improve the precision of military drones.
In an open letter that April, thousands of employees expressed their belief that "Google should not be in the business of war" and asked that the company "draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."
The company's AI Principles were the result, with Google ultimately declining to renew its Pentagon contract in 2019. Now, however, it appears the tech giant's stance on AI weapons technology may be changing.
Google's new attitude toward AI weapons could be an effort to keep pace with competitors. Last January, OpenAI changed its own policy to remove a ban on "activity that has high risk of physical harm," which specifically included "weapons development" and "military and warfare." In a statement to Mashable at the time, an OpenAI spokesperson said the change was meant to provide clarity concerning "national security use cases."
"It was not clear whether these beneficial use cases would have been allowed under 'military' in our previous policies," said the spokesperson.
Opening up the possibility of military AI isn't the only change Google has made to its AI Principles. As of January 30, Google's policy listed seven core objectives for AI applications: "Be socially beneficial," "Avoid creating or reinforcing unfair bias," "Be built and tested for safety," "Be accountable to people," "Incorporate privacy design principles," "Uphold high standards of scientific excellence," and "Be made available for uses that accord with these principles."
Now, Google's revised policy has consolidated this list into just three principles, simply stating that its approach to AI is grounded in "bold innovation," "responsible development and deployment," and "collaborative progress, together." The company specifies that this includes adherence to "widely accepted principles of international law and human rights." However, any mention of weapons or surveillance is now absent.
Topics: Artificial Intelligence, Google