This story is part of our continuing coverage of CES 2020, including tech and gadgets from the showroom floor.
“It’s a preview of a wonderful technology we have, and a wonderful future we can create together,” Neon’s CEO Pranav Mistry said at the start of his keynote presentation.
So what is it? It’s not hyperbole, for a start. Neon is a step toward living with a digital creation that not only understands and emotes with us in a meaningful and relatable way, but is also able to create valuable memories with us and truly share our lives.
Four months of work
Explaining exactly what Neon is, how it works, and the incredible depth of technology underlying it is a considerable challenge and one that Neon itself isn’t quite sure how to tackle. To help introduce Neon, Mistry started out by saying he wants to change the way we interact with machines, and no longer say just, “Stop,” “Next song,” or even, “Hey Google, Bixby, or Siri,” because it’s not how we talk to humans.
Mistry said he wants to “push the boundaries so machines understand more about us. Whether we are tired or happy, our expressions, and our emotions.”
In turn, the more machines understand us, the more we will be able to connect with them on a deeper, human level. He believes the path to this means machines need to look and act more like us, and this is where Neon’s journey really began.
The CES demonstration came just four months after the project started. Mistry and the team began by creating a digital version of a friend, which closely emulated his facial movements during conversation. This evolved into larger, grander tests until eventually, the digital version began to do things on its own. It would make expressions the real person had not. It had “learned,” and become something individual.
The Neon booth in Central Hall at CES is covered in large screens showing people on them, all moving, smiling, laughing, or silently mouthing words to the audience. Except these aren’t videos. These are Neons. They are digital creations born from real people, and although they visually represent the model on which they’re based, the movements, expressions, and “emotions” are entirely automatically generated.
Once you understood this, it was surreal walking around the booth, looking at the Neons who in turn looked back at you, knowing that the movements they made were of their own doing, not a repeating video or animation. So what was powering the Neons, and what did Mistry have in mind for their future?
The Neons are generated by the company’s own reality engine, called Core R3. The R3 name refers to the principles on which the system is based — reality, real time, and responsiveness — and it’s the combination of all three that brings a Neon to life. It’s not an intelligent system, says Mistry, because it does not have the ability to learn or remember. Instead, it’s equal parts behavioral neural network and computational reality that independently generates the Neon’s “personality” by training it to emulate human behavior on a visual level — how your head moves when you’re happy, or what your mouth does when you’re surprised, for example.
Core R3 does not continually run a Neon after creating it. It generates the Neon initially; from then on, the Neon relies on its own information to react to its interactions with the real world. However, it doesn’t know you or remember you. It uses a combination of the Core R3-generated Neon, cameras, and other sensors to interact with us in the moment — but once that moment is over, everything is forgotten. In the near future, the company has big plans to change that.
Coming to life
Despite only four months of work, the team gave a live demonstration of what a Neon can do now. There are currently two “states” for Neons: an auto mode, where it does what it wants — thinking, responding, idling, or greeting you — plus a “live” mode, where the Neon can be controlled remotely.
The Neon has multiple ways to respond and can choose how to do so, even when instructed to perform a particular action. Tell it to smile and be happy, and it does so, but it chooses how it will look when it does. The level of granular control is impressive, right down to eyebrow movement and the closing of eyes, along with head movements and both visual and verbal responses. This all happens with a response time of 20 milliseconds (the real-time aspect of R3), which further removes the barrier between human and machine during any interaction. Neon does not produce speech itself at the moment; in the demo, the voice was pulled from third-party APIs, the same kind of technology that gives voice to artificially intelligent assistants and chatbots everywhere.
The Neon is “domain independent,” Mistry said. A Neon could teach you yoga or it could help bridge language gaps around the world, for example. Potential uses for a Neon in business are obvious, such as in hotels, at the airport, or in public spaces. The Neon is an evolution of the clunky robots or lifeless video screens seen in these places around the world at the moment. But that’s not really very exciting, and certainly not the part of the Neon that’s truly groundbreaking.
Right now, a Neon cannot know who you are or remember you. Once your interaction is over, your relationship with it is lost to the digital ether. However, over the next year, the Neon team will work on the next version of Core R3, along with a project called Spectra that will add these important traits to Neon, and arguably bring it to life.
“Spectra will provide memory and learning,” Mistry told us, revealing the true direction of Neon.
By adding memory and the ability to learn, along with the advanced human-like visuals, a Neon has the potential to become a true digital companion. When we spoke to Mistry after the presentation, his eyes lit up as he talked about the characters he loved as a child, and how the connection he had with them was not affected by the fact they were not “real.” A fully fledged Neon could bring similar joy to people, in a stronger and even more personal way.
What Neon showed at CES 2020 is very much the beginning, but there’s clearly a massive amount of investment, belief, and talent involved. Not many companies would have the guts to come to Las Vegas and show off a four-month-old demo after a few weeks of hyping it up. Mistry has worked with Microsoft on the Xbox, and with Samsung on the Gear VR in the past. He’s soft-spoken and charismatic, and everyone we spoke to at Neon had a similarly strong belief in what the company is doing.
It was contagious, especially if you’ve had sci-fi dreams about artificial humans and digital companions all your life.
A long way to go
However, there is a lot to consider before you’re picking out a name for your first Neon pal. How will the Neon come to life for you and me? Mistry, in true visionary fashion, was not concerned by such things. In his presentation, when talking about the importance of thinking big to do something amazing, he said:
“We don’t understand what’s the business model of something, or how we will bring something to market, let’s figure that out later.”
A Neon team member talked to us about how the company intends to “create” Neons in the future. They won’t use real people as models, and instead generate their own looks for Neons. Think about that for a moment: An entirely artificial digital human, with its own unique looks, and the ability to speak, emote, learn, and remember. It gives me a shiver, it’s so exciting.
Given the pace with which Core R3 has evolved already, it’s no surprise to hear Mistry intends to show the first beta version of a Neon, as well as a preview of Spectra, sometime in the next 12 months at an as-yet-undefined event called Neon 2020. What Neon showed at CES is a huge leap forward in avoiding the uncanny valley, changing the way we should think about digital humans. It’s a major step toward giving life to something that naturally does not have any. There’s a long, long way to go before the Neon reaches its potential, but the very fact the journey has started at all is thrilling.
Follow our live blog for more CES news and announcements.