
Google raises 5 safety concerns for the future of artificial intelligence

While artificial intelligence was once sci-fi subject matter, the field is advancing at such a rate that we’ll likely see it become a part of everyday life before too long. As a result, Google wants to make sure that an AI can be trusted to carry out a task as instructed, and do so without putting humans at risk.

That was the focus of a study carried out by Google in association with Stanford University; University of California, Berkeley; and OpenAI, the research company co-founded by Elon Musk. The project outlined five problems that need to be addressed so that the field can flourish, according to a report from Recode.

The researchers note that these five points are “research questions” intended to start a discussion rather than offer up solutions. They are minor concerns right now, but Google’s blog post suggests they will become increasingly important in the long term.

The first problem asks how we’ll avoid negative side effects, giving the example of a cleaning AI cutting corners and knocking over a vase because that’s the fastest way to complete its janitorial duties. The second refers to “reward hacking,” where a robot might try to take shortcuts that satisfy its objective without actually completing the task at hand.
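
To make those first two concerns concrete, here is a rough Python sketch (not taken from the study itself; the reward formulas and numbers are invented for illustration) of how a reward that ignores side effects can make the vase-toppling shortcut look like the better plan, and how an explicit penalty changes that.

```python
# Toy example only: a hand-written reward for a cleaning robot.
# All values and weights below are invented for illustration.

def naive_reward(dirt_cleaned, time_taken):
    """Rewards cleaning quickly; says nothing about collateral damage."""
    return dirt_cleaned - 0.1 * time_taken

def safer_reward(dirt_cleaned, time_taken, objects_broken):
    """Same objective, plus an explicit penalty for side effects."""
    return dirt_cleaned - 0.1 * time_taken - 5.0 * objects_broken

# A careful plan cleans 10 units of dirt in 20 minutes with nothing broken;
# a reckless plan saves time by knocking the vase out of the way.
print(naive_reward(10, 20))     # 8.0 (careful)
print(naive_reward(10, 12))     # 8.8 (reckless) <- the shortcut scores higher
print(safer_reward(10, 20, 0))  # 8.0 (careful)
print(safer_reward(10, 12, 1))  # 3.8 (reckless) <- the penalty flips the preference
```

The broader point is that someone has to anticipate and specify penalties like this, because an agent optimizing only the naive objective has no reason to care about the vase.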

The third problem concerns oversight: making sure that robots don’t require too much feedback from human operators. The fourth raises the issue of the robot’s safety while exploring; this is illustrated by a mopping robot experimenting with new techniques, but knowing better than to mop an electrical outlet (for obvious reasons).
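
One simple way to picture safe exploration is a whitelist of surfaces the robot is allowed to experiment on. The sketch below is a loose illustration with invented surface names, not anything proposed in the study.

```python
import random

# Surfaces the robot is allowed to experiment on (an invented whitelist).
SAFE_SURFACES = {"tile", "hardwood", "linoleum"}

def pick_surface_to_try(candidates):
    """Explore only among candidates on the whitelist; skip everything else."""
    safe_choices = [c for c in candidates if c in SAFE_SURFACES]
    return random.choice(safe_choices) if safe_choices else None

candidates = ["tile", "electrical outlet", "hardwood"]
print(pick_surface_to_try(candidates))  # "tile" or "hardwood", never the outlet
```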

The final problem looks at the differences between the environment a robot trains in and the one where it actually works. There are bound to be major discrepancies, and the AI needs to be able to get the job done regardless.
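
In practice, one cautious response to that mismatch is for the robot to recognize when a situation looks nothing like its training data and fall back to conservative behavior. The snippet below is a deliberately simplified sketch; the floor-friction feature and thresholds are invented for illustration.

```python
# Invented example: friction values the robot encountered during training.
TRAINING_FRICTION_RANGE = (0.4, 0.9)

def choose_behavior(measured_friction):
    """Use the learned policy on familiar floors; be cautious otherwise."""
    low, high = TRAINING_FRICTION_RANGE
    if low <= measured_friction <= high:
        return "use learned mopping policy"
    return "slow down and ask for human guidance"  # out-of-distribution input

print(choose_behavior(0.7))   # familiar floor -> learned policy
print(choose_behavior(0.05))  # unusually slippery floor -> cautious fallback
```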

It’s really just a matter of time before we see AI being used to carry out menial tasks, but research like this demonstrates the issues that need to be tackled ahead of a wide rollout. User safety and the quality of the service will of course be paramount, so it’s vital that these questions are asked well ahead of time.
