On June 8, 2018, Google CEO Sundar Pichai published a blog post detailing the search giant’s principles around artificial intelligence (AI).
The post makes clear that Google’s AI technology will not be used to build weapons or mass surveillance systems. It comes as Google faces criticism from within its own ranks over its participation in the controversial Project Maven with the US Defense Department. Google will not allow its AI to be used in ways that contravene principles of international law and human rights. According to the company’s AI principles, its AI should be socially beneficial and should avoid creating or reinforcing unfair bias.
Pichai said, “AI is computer programming that learns and adapts,” and that it has profound potential to improve people’s lives. But he also admitted that AI cannot solve every problem and that its use will raise “equally powerful questions about its use.” He added that Google feels a “deep responsibility to get this right.” Google has laid out a total of seven principles to guide its AI work and research. These principles will not just be concepts but, according to Pichai, “are concrete standards, which will govern our research and product development and will impact our business decisions.”
The blog post also stated that Google will not build AI technologies that cause overall harm, including weapons and other technologies that “cause or directly facilitate injury to people.” Nor will the company allow its AI technology to be used for surveillance that violates “internationally accepted norms.”
According to the post, Google will continue to work with governments and the military in many other areas, including cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.
Pichai added, “These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.”