Google CEO warns against rush to deploy AI without oversight

Alphabet Inc and Google chief executive officer Sundar Pichai said in an interview broadcast Sunday that the push to adopt artificial intelligence technology must be well regulated to avoid potential harmful effects.

Asked in a 60 Minutes interview about what keeps him up at night with regard to AI, Pichai said “the urgency to work and deploy it in a beneficial way, but at the same time it can be very harmful if deployed wrongly.”

Mountain View, California-based Google has been among the leaders in developing and implementing AI across its services. Products like Google Lens and Google Photos rely on the company’s image-recognition systems, while Google Assistant benefits from the natural language processing research the company has been doing for years.

Still, its pace of deploying the technology has been deliberately measured and circumspect, whereas OpenAI’s ChatGPT has opened up a race to move forward with AI tools at a much faster clip.

“We don’t have all the answers there yet, and the technology is moving fast,” Pichai said. “So does that keep me up at night? Absolutely.”

Google is now playing catch-up in looking to infuse its products with generative AI – software that can create text, images, music or even video based on user prompts. ChatGPT and another OpenAI product, Dall-E, showed the technology’s potential, and companies from Silicon Valley to China’s internet leaders are now racing to present their own offerings.

Former Google CEO Eric Schmidt urged global tech companies to come together and develop standards and appropriate guardrails, warning that any slowdown in development would “simply benefit China.”

Despite the sense of urgency in the industry, Pichai cautioned companies against being swept up in the competitive dynamics, and he drew lessons from OpenAI’s more direct approach in debuting ChatGPT.

“One of the points they have made is, you don’t want to put out a tech like this when it’s very, very powerful because it gives society no time to adapt,” Pichai said. “I think that’s a reasonable perspective. I think there are responsible people there trying to figure out how to approach this technology, and so are we.”

Among the risks of generative AI that Pichai highlighted are so-called deepfake videos, in which individuals are depicted saying things they never actually said. Such pitfalls illustrate the need for regulation, Pichai said.

“There have to be consequences for creating deepfake videos which cause harm to society,” he said. “Anybody who has worked with AI for a while, you know, you realise this is something so different and so deep that we would need societal regulations to think about how to adapt.” – Bloomberg
