AI may beat humans at everything in 45 years, experts predict

Predicting the future of AI isn’t easy. In every decade since artificial intelligence was established as its own discipline in 1956, someone has predicted that artificial general intelligence (AGI) is just a few years away. So far, we can safely say that most of those predictions have been wide of the mark.

A new survey, conducted by the University of Oxford and Yale University, draws on the expertise of 352 leading AI researchers. It suggests there’s a 50 percent chance that machines will be outperforming us at every task by the year 2062. Plenty of milestones will be hit before then, however: machines better than us at translating foreign languages by 2024, better at writing high school essays by 2026, better at driving trucks by 2027, better at working retail jobs by 2031, capable of penning a best-selling book by 2049, and better at carrying out surgery by 2053. Asian respondents predicted these events would happen much sooner than North American researchers did.

All human jobs, the researchers predict, will be automated within the next 120 years.

“We are interested in when powerful AI will be developed, and what will happen as a result,” Katja Grace at the Machine Intelligence Research Institute in Berkeley, California — one of the lead authors of the survey — told Digital Trends. “So we asked directly about when ‘human-level’ AI might be developed in a few different ways, and also about things that indirectly bear on when we expect powerful AI, but which we think researchers may know more about, like acceleration in their own field. We also asked them about what they thought would happen, both in terms of how good or bad it might be overall, and in terms of things like whether it will affect technological progress in a big way. We found a wide diversity of views, but with enough opinion finding relatively near dates and relatively high levels of risk as plausible that we think this warrants more attention.”

As with any work that predicts the future of technology beyond just the next few years, there’s absolutely no guarantee this is close to accurate — but it certainly makes for some interesting reading.

Perhaps the most valuable aspect of the study is to make us think about things that could be put into place now to prepare for such an eventuality. According to Grace, this includes supporting AI safety research, choosing how funding is allocated now in accordance with the future we want to create, and rethinking the jobs market.

You can read the study, titled “When Will AI Exceed Human Performance? Evidence from AI Experts,” here.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…