While artificial intelligence was once sci-fi subject matter, the field is advancing at such a rate that we’ll likely see it become a part of everyday life before too long. As a result, Google wants to make sure that an AI can be trusted to carry out a task as instructed, and do so without putting humans at risk.
That was the focus of a study carried out by Google in association with Stanford University; University of California, Berkeley; and OpenAI, the research company co-founded by Elon Musk. The project outlined five problems that need to be addressed so that the field can flourish, according to a report from Recode.
Google notes that these five points are “research questions” intended to start a discussion rather than offer a finished solution. The issues are minor concerns right now, but the company’s blog post suggests they will become increasingly important in the long term.
The first problem asks how we’ll avoid negative side effects, giving the example of a cleaning AI cutting corners and knocking over a vase because that’s the fastest way to complete its janitorial duties. The second refers to “reward hacking,” where a robot might try to take shortcuts to fulfill its objective without actually completing the task at hand.
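To make the vase example concrete, here is a toy sketch (not from the study itself; the functions and numbers are hypothetical) of why a reward that only measures speed invites destructive shortcuts, and how explicitly penalizing side effects changes the incentive:

```python
# Toy illustration (hypothetical, not from the Google/OpenAI paper):
# a cleaning agent's reward function.

def naive_reward(time_taken: float) -> float:
    """Reward only for finishing quickly; side effects are invisible,
    so knocking over a vase to save time looks like a good plan."""
    return 100.0 - time_taken

def safer_reward(time_taken: float, objects_broken: int) -> float:
    """Same speed reward, minus a penalty for unintended damage."""
    return 100.0 - time_taken - 50.0 * objects_broken

# The fast-but-destructive route wins under the naive reward...
assert naive_reward(10.0) > naive_reward(15.0)
# ...but loses once side effects carry an explicit cost.
assert safer_reward(10.0, objects_broken=1) < safer_reward(15.0, objects_broken=0)
```

The open research question, of course, is how to specify such a penalty for every side effect in advance, which is exactly why the study frames this as an unsolved problem.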
The third problem is related to oversight, and making sure that robots don’t require too much feedback from human operators. The fourth raises the issue of the robot’s safety while exploring; this is illustrated by a mopping robot experimenting with new techniques, but knowing not to mop an electrical outlet (for obvious reasons).
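One simple hedge against unsafe exploration, sketched below with hypothetical names, is to hard-code a set of forbidden states that the robot filters out before it experiments at random:

```python
import random

# States the robot must never experiment on (hypothetical example set).
FORBIDDEN = {"electrical_outlet"}

def safe_explore(candidate_spots, rng=random):
    """Pick a random spot to try a new mopping technique on,
    filtering out known-dangerous states before exploring."""
    allowed = [s for s in candidate_spots if s not in FORBIDDEN]
    return rng.choice(allowed)

spots = ["tile_floor", "electrical_outlet", "hardwood"]
assert safe_explore(spots) != "electrical_outlet"
```

A hand-written blocklist only works when the dangers are known in advance; the study's point is that robots also need ways to explore safely in situations nobody anticipated.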
The final problem looks at the differences between the environment a robot trains in and the one it will actually work in. There are bound to be major discrepancies, and the AI needs to be able to get the job done regardless.
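A minimal sketch of coping with that training/deployment gap (hypothetical numbers and function names, not from the study) is to flag inputs that fall outside the range seen during training, so the robot can ask for help instead of acting confidently in an unfamiliar environment:

```python
# Toy sketch (hypothetical): detect sensor readings that fall outside
# the range covered by training data.

def fit_range(training_readings):
    """Record the span of readings seen during training."""
    return min(training_readings), max(training_readings)

def is_familiar(reading, low, high, margin=0.1):
    """True if the reading is close to what training covered;
    False means the robot should escalate to a human operator."""
    span = high - low
    return (low - margin * span) <= reading <= (high + margin * span)

low, high = fit_range([0.2, 0.4, 0.6, 0.8])
assert is_familiar(0.5, low, high)       # typical, training-like input
assert not is_familiar(5.0, low, high)   # far outside training: escalate
```

Real systems face much subtler distribution shifts than a single out-of-range number, which is why the study treats robustness to new environments as an open problem rather than a solved one.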
It’s really just a matter of time before we see AI being used to carry out menial tasks, but research like this demonstrates the issues that need to be tackled ahead of a wide rollout. User safety and the quality of the service will of course be paramount, so it’s vital that these questions are asked well ahead of time.