Artificial intelligence technology will surely play a big part in our future, and the CEO of Google and parent company Alphabet, Sundar Pichai, is calling for new regulations over AI technologies.
In an editorial for The Financial Times, Pichai argues that new rules are needed to govern the use of certain technologies, like self-driving cars, while for AI-based products in industries such as healthcare, existing regulations can be modified or extended to cover them.
Referring to Alphabet’s own focus on the development of artificial intelligence, the CEO asserts that the importance of artificial intelligence means that “market forces” can’t be the determining factor in how the technology is used.
“[T]here is no question in my mind that artificial intelligence needs to be regulated. It is too important not to,” he writes.
Among the examples of “potential harms” Pichai cites are “nefarious” uses of facial recognition, as well as deepfakes. However, he argues that regulation must weigh the “potential harms … with social opportunities”.
At the moment, rules to govern the use of artificial intelligence are being considered by authorities in the U.S., the European Union, and Australia, among others. However, authorities in the U.S. appear to be calling for a softer approach with the aim of encouraging innovation, while their European counterparts are taking a firmer stance. In fact, Reuters reports that authorities in the EU are considering a five-year ban on all facial recognition technology in public areas.
With that in mind, Google also has internal policies that prohibit using its technology “to support mass surveillance or violate human rights”—which is why Google doesn’t sell its own facial recognition technology.
But ultimately, lawmakers need to get on the same page, rather than allowing technology companies to self-regulate. Pichai says as much:
“International alignment will be critical to making global standards work. To get there, we need agreement on core values.”
Ultimately, the CEO acknowledges that “principles on paper are meaningless”—which is certainly true. In a nutshell, the talk of regulation and oversight of artificial intelligence technologies needs to start solidifying into actual, real-world action.