A team of researchers at Cambridge University is making plans to open the Centre for the Study of Existential Risk, which will assess threats to human civilization posed by the likes of artificial intelligence, climate change, nuclear war, and rogue biotechnology.
According to a BBC report, the team behind the Project for Existential Risk – comprising Cambridge philosophy professor Huw Price, cosmology and astrophysics professor Martin Rees, and Skype co-founder Jaan Tallinn – claims it would be “dangerous” to scoff at talk of a potential robot uprising.
Speaking to the BBC about the project, Tallinn explained that the purpose of the center is to put more thinking into what it calls existential risks. “Existential risks are potential dangers that we might face as a species, things that might kill us as a species, or at least permanently curtail our potential,” he said.
So it’s not just robots that might finish us off.
“Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole,” the team explains on its website. “Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake.”
Getting back to AI, Tallinn talked about the Frankenstein scenario of a planet overrun by technology with a mind of its own. “If you’re creating something that is potentially smarter than you, you might have the problem of control – like how do you control something that is smarter than you and potentially, for example, able to design its own technology,” he said.
He went on to say he believes the chance of the human race being wiped out by something of our own making is higher than people generally acknowledge, although it is hard to assess properly because “the bar of uncertainty is very high.” He added, “We really should be careful about these things and be prepared.”
The center, which plans to open in 2013, will draw on the intellectual resources of the prestigious university to study the potential threats, and in doing so help “make it a little more certain that we humans will be around to celebrate the University’s own millennium” in 2209, the team says on its website.