Google CEO Sundar Pichai on Thursday announced that Google will not develop Artificial Intelligence (AI) software for use in weapons or for applications that could harm others.
The company has set strict standards for the ethical and safe development of AI.
“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai said in a blog post. “As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.”
The objectives Google has laid out include that AI should be socially beneficial, should avoid creating or reinforcing bias, should be built and tested for safety, should be accountable to people, and should uphold privacy principles.
The company, however, will not pursue AI applications in areas such as technologies likely to cause harm to people, weapons, or technologies that violate human rights and privacy norms.
“Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints,” the post read.
While the company will not create weapons, it said it will continue to work with the military and government in other areas.
"These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe," Pichai said.
The decision comes after the resignation of several employees and public criticism over Google’s contract with the Defense Department, known as Project Maven, under which the company helped develop AI to analyze drone video.