
Playing God: Meet the man who built the most lifelike android ever

Dr. David Hanson: Android designer

Next month, leaders in the world of robotics, neuroscience, and artificial intelligence will converge on New York City for the second annual Global Future 2045 Congress, an event devoted entirely to the quest toward “neohumanism” – the next evolution of humankind. GF2045 is the brainchild of Russian billionaire Dmitry Itskov, who’s made it his life’s goal to transpose human consciousness into a machine, thus giving us the power of immortality. (Really.)

Among those presenting during the two-day GF2045 conference is renowned roboticist Dr. David Hanson, who will unveil the world’s most lifelike humanoid android, designed in the likeness of Itskov. Founder of Hanson Robotics, Hanson is a true Renaissance Man, with a background ranging from poetry to sculpting for Disney to the creation of humanlike androids that are said to possess the inklings of human intelligence and even emotion. As we edge closer to GF2045, which takes place June 15 and 16, we chatted with Dr. Hanson over Google+ Hangouts to get his insight on mankind’s march toward the future.

Ed. note: This interview has been edited for brevity.

Digital Trends: You will be unveiling the world’s most realistic humanlike android at GF2045. What can we expect to see?

Dr. Hanson: What we’re doing with Dmitry is making a telepresence robot. It is basically a remote-controlled version of Dmitry. So we’re bringing together the technologies. There’s some risk associated with this because the time schedule is fantastically short for this particular project. I mean, we received this commission a little less than two months ago. So we’re talking about a very, very short time frame for redesigning the human-scale technology to improve it, customizing it to be Dmitry, and then bringing together the best of our available technologies to achieve a remote-controlled version of Dmitry.

“Well, I think there’s good reason to be afraid. We’re creating alien minds, one way or another.”

That said, things are looking pretty good. We think we’re going to have a very nice remote-controlled face that will be under Dmitry’s command and will say what Dmitry says. It will look around under Dmitry’s control – so Dmitry can see through its eyes – and control its expression, so it’s able to express his intentions and emotions. So it becomes a very high-resolution representative. With enough sensory information going back and forth, it basically becomes like one of these sci-fi scenarios where you have a hologram or a whole presentation of a person – like the movies Surrogates or Avatar – where you have a robot identity that makes it look like you’re really there. Somewhere between a cellphone and a Star Trek-style teleportation device.

What’s the user interface like for that?

There will be a screen, and on the screen, the remote user will see what the eye of the robot is seeing. And then there will also be a wide-angle presentation of the whole scene, so the user can see what’s outside the robot’s direct field of view, giving the user an impression of the robot’s peripheral vision. The user will then have the ability to control where the robot looks. The user will speak in a natural manner, and the robot will reproduce what the user is saying with lip motions, so you’ll have the lip sync.
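As a purely illustrative sketch – none of this is Hanson Robotics’ actual software – the control loop described above might look something like this. `RobotState`, `telepresence_step`, and the amplitude-based lip sync are all hypothetical stand-ins for the real gaze-mirroring and lip-sync machinery:

```python
# Hypothetical sketch of one frame of a telepresence control loop:
# the operator steers the robot's gaze, and the robot's mouth is
# driven by the amplitude of the operator's speech (naive lip sync).
from dataclasses import dataclass


@dataclass
class RobotState:
    gaze: tuple = (0.0, 0.0)   # pan/tilt target, radians
    mouth_open: float = 0.0    # 0.0 closed .. 1.0 fully open


def telepresence_step(state, operator_gaze, operator_audio_level):
    """Mirror the operator's gaze target and map speech amplitude
    to jaw opening, clamped to the valid range."""
    state.gaze = operator_gaze
    state.mouth_open = max(0.0, min(1.0, operator_audio_level))
    return state


robot = RobotState()
telepresence_step(robot, operator_gaze=(0.2, -0.1), operator_audio_level=0.7)
```

A real system would run this per video frame, with the wide-angle “peripheral” feed handled by a second camera stream on the operator’s screen.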

Will it be the actual person’s voice?

It will be Dmitry’s voice actually coming through. That’s the difference. Previously, when we’ve created portraits of people, we created fully automated portraits. They weren’t remote controlled. There was face tracking and face detection software, and speech recognition software, and then some artificial intelligence that would generate a reply. And then the robot would speak that reply and generate expressions that would be approximately appropriate, based on a very, very simple emotional model.
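A hedged sketch of that fully automated pipeline – face detection, speech recognition, a generated reply, and an expression chosen by a very simple emotional model. Every function here is a hypothetical stub, not Hanson Robotics’ code; a real system would plug in actual vision, speech-recognition, and dialogue components:

```python
# Hypothetical stubs for the automated-portrait pipeline described above.
def detect_face(frame):
    # Stand-in for face tracking/detection software.
    return frame.get("face_present", False)


def recognize_speech(audio):
    # Stand-in for a speech recognition engine.
    return audio.get("transcript", "")


def generate_reply(utterance):
    # Placeholder "AI": a canned response instead of a dialogue model.
    return "Tell me more." if utterance else "Hello there."


def choose_expression(utterance):
    # Toy emotional model: smile at greetings, neutral otherwise.
    return "smile" if "hello" in utterance.lower() else "neutral"


def converse(frame, audio):
    """Run one turn: only respond when a face is present, then pair
    the generated reply with an approximately appropriate expression."""
    if not detect_face(frame):
        return None
    heard = recognize_speech(audio)
    return {"reply": generate_reply(heard),
            "expression": choose_expression(heard)}


converse({"face_present": True}, {"transcript": "hello robot"})
```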

Hanson’s Philip K. Dick android

So in that way, you were talking with a ‘techno-ghost’ of the person. In this case, the ghost is still inside the human body. And you are simply remotely connecting that ghost through these information infrastructures.

How do you see people using these kinds of robots in everyday life?

… These kinds of telepresence robots could be applied in real-world scenarios, like tele-tourism. You could explore the markets of some exotic faraway destination. … You’d basically have this teleportation kind of experience. Maybe it’s better for a board meeting, where you’re controlling the android, and it’s really reflecting the full 3D – or 4D, if you include time – nuance of your face and its expressions, as though you’re there. You can really look into somebody’s eyes much more effectively than you could with any other kind of 3D display technology.

You said you believe robots with human emotions could transform civilization. Is it even possible to put human emotions in a machine? And, if so, how do you see that being transformative?

I would like to point out that some of what we are talking about is speculative. We’re talking about the stuff of dreams. And so some of these propositions come under heavy fire from critics because they say that it’s ridiculous, and that it can distort people’s expectations. And there’s no proof that you can do mind uploading, or have true science of mind, or achieve artificial general intelligence. And I would just like to say that I am a dreamer in these areas. There’s not proof that it is not possible, right? But that doesn’t mean that it’s proven that it is possible. And yet, we need dreamers. We need to dream big. Because all major surges in development, all major discoveries and acts of creativity come from this node of uncertainty that is best investigated with some hard practicality combined with far dreaming practice.

“Artists hack the mind. They hack perception. They create this shortcut to perceptual phenomena that are not understood by neuroscientists yet.”

That said, okay, I believe that if we achieve these things – and there’s an ‘if’ there – if we achieve self-redesigning machines with human-level or greater-than-human-level intelligence, that they will spiral towards unimaginable levels of super-intelligence, what we might call transcendental intelligence; intelligence that just gets smarter, and we just can’t predict what it’s going to do or be capable of. That will then solve problems, and identify problems and opportunities that we can’t really perceive. And it will open up opportunities for us as people that are unimaginable. And that will be absolutely transformative.

Do you see Itskov’s goals of mind uploading having similar effects, if achieved?

Mind uploading would be transformative in a separate way – what Dmitry is proposing – because it would ‘cure’ death. In Global Future 2045, the objective of achieving immortality for all of humanity by 2045 would radically transform what it means to be human, because you could live in this virtual domain, or you could occupy a robot body. If computing continues [to advance] exponentially – if Moore’s Law carries on, whether it’s optical computing or nanotube computing, or something – well, if it does continue, then it will be more efficient. You’ll be able to pack the whole human mind into this kind of computing space that would potentially be much less impactful on the natural environment, so you’d be able to re-stabilize the natural ecosystems of the world. These would be potential consequences.

How do you respond to critics who are afraid and pessimistic about AI and transhumanism? Some people are afraid of what this technology could unleash.

Well, I think there’s good reason to be afraid. We’re creating alien minds, one way or another. And most of the research doesn’t focus on social AI, or the capacity for compassion, or for getting along well with people. By some estimates, the majority of funded AI research coming from research institutions is for military applications. And there’s not anything inherently wrong with that; it’s just that you could imagine – in the short term, anyway – that a conscience would get in the way of the efficacy of these kinds of devices. In the long term, a conscience would be really essential. The ability to see and understand potential outcomes, their consequences for what motivates people and what’s good for society in the long term – all of that would be really great in those kinds of machines. But if you look at the dollars going toward that kind of research, they are negligible. Social robots, on the other hand, with a theory of mind, can lead toward machines with consciousness and conscience at the same time.

Is public opinion changing about robots?

The public’s expectations about robots are shifting. When we acculturate to robots, when we get used to them, then we open up to them. And then we expect them. Our expectations keep ratcheting higher. It’s like the way automobile companies roll out automation in self-driving cars piecewise, feature by feature. Now your car can parallel park. Next year, maybe it will pull around to the front of the house and be waiting for you. Maybe in 10 or 15 years, you’ll get into a living room-like space, and it’ll just drive you to work while you google the whole way.

But if you introduced a self-driving vehicle today – completely self-driving – then people’s wonder and fear would result in too much disruption. You’ve kind of got to ease into these things. So the human reaction can’t be fully predicted. And it’s something I believe developers and marketing teams evaluate as you inch along down this path.

Your background includes sculpture, painting, drawing, and poetry. How do artists fit into the equation of transhumanism and advanced robotics?

Being an artist, you can introduce something that is more startling and disruptive. You don’t have to worry about those incremental steps. You can introduce something that really stirs things up and see what happens. By putting the technology together in this form that may be startling, the technology itself is really incrementally advancing. … With the robots, we put together these dialog systems with today’s AI, but we do it in an artistic way that can seem like there’s somebody in there. And arguably, it’s just these ghost-like shreds of who that person is. There’s not really a mind in these machines, like a human mind. But you can convey an amazing impression there.

The technology itself, there are some advances. But we have not unlocked the Holy Grail of artificial intelligence with these humanlike robots yet. What we have done is put this burning idea in people’s minds. When the robots work well, people start to say, ‘Wow, we could do that. Should we do that? What could it be good for? Wow, it could be good for all kinds of things! How could it be dangerous?’ People start to think about these questions – and it inspires developers to think about these questions as well, as we go forward.

Dr. David Hanson’s Einstein robot

In this way, I think of artists as a sort of advance guard, a sort of reconnaissance team for the world of robots. The ‘canary in the coal mine’ is how Kurt Vonnegut characterized the artist. So, I believe that artistry is undervalued in technology development and robotics. Robotics, to me, seems like the greatest artistic medium. It is the marble of our age. And it’s a little surprising that you don’t have more artists leaping in and trying to transform robotics in these spectacular and disruptive ways.

I mean, I use the marble comparison quite specifically, because robotics as a figurative medium is so underdeveloped at this point. There’s so much opportunity for exploring it. And in the process, what you’re doing is injecting humanity into the technology. You’re getting the technology to do things that are beyond the understanding of science and engineering, because artists hack the mind. They hack perception. They create this shortcut to perceptual phenomena that are not understood by neuroscientists yet.

What do you recommend for people who might want to get into robotics and carry on what you will do in your lifetime?

I would recommend that people get foundation skills. And that means, learn how to draw. Learn your math. And learn how to play. These are kind of the fundamental skills. If you play with robots, and tinker, and just get into tinkering with things, then see where it leads you, then you will learn all kinds of other things. You will have the incentive to learn all these other disciplines. If you just pick up a textbook, well, you know, that can be interesting in its own way. But if you are picking up a textbook because you want to build something cool, or you want to discover something – you’re on the trail of something fascinating – then that playful spirit gives you something to hang all this knowledge on. It provides a skeleton for the flesh of all those skills.

Andrew Couts
Former Digital Trends Contributor
Features Editor for Digital Trends, Andrew Couts covers a wide swath of consumer technology topics, with particular focus on…