
In the future, touchscreens will be obsolete. This lab designs what’s next


Chris Harrison is thinking about the future. His. Yours. Ours. Everyone’s. More specifically, he’s thinking about how the world will be using computers, and what those computers might look like, a quarter-century from now. Since Harrison is 35 years old today, that’s right around the time that he may be contemplating retirement.

It’s Harrison’s job to think about these things. He is director of the Future Interfaces Group at Carnegie Mellon University’s Human-Computer Interaction Institute. Located in a solar-powered, century-old building on the western side of Carnegie Mellon’s Pittsburgh campus, FIGLAB, as it is affectionately called, boasts three studios loaded to the gills with everything from high-tech sensors to CNC milling machines and laser cutters.

Its humble raison d’être is to give us Muggles a tantalizing glimpse into, well, the future.

“I’m definitely a nerd at heart,” Harrison told Digital Trends. “I enjoy thinking about speculative futures and what could be. That’s very much what our research does. I think in some respects we are working in the science fiction domain; we’re trying to think about possibilities that don’t yet exist. Then once we have the idea, we go to work saying, ‘Can we cobble together these future technologies out of the Legos of today, meaning the technology pieces that we have [available to us right now]?’”


The resulting FIGLAB creations veer between the truly inspired and the utterly madcap. Sometimes, like some Schrödinger’s interface, they’re both at once. Conductive paint that turns regular, boring walls into enormous touch-sensitive panels at a cost of $1 per square foot? Of course! A smartwatch that uses laser projection to extend its touchscreen all the way up your arm? No problem! A device for simulating touch in virtual reality by turning humans into living marionettes? You’ve come to the right place!

And these are just a handful of the past couple of years’ worth of creations at FIGLAB, and just the stuff that gets published. There’s a whole lot more where that came from.

The bridge to the perfect interface


It’s easy to look at computer interfaces and think that they are just gimmicks to sell new devices or products. Bad ones are. But a good interface fundamentally changes the way we use technology. The graphical user interface, or GUI (pronounced “gooey”), with its real-world-inspired metaphors of desktops and files, made computing visual. Multitouch, with pinch-to-zoom and other hand gestures, made it tactile. Already we have the primordial ooze of gaze-based and emotion-sensing interfaces, from which more sophisticated UIs will doubtless one day crawl.

But there’s no map to follow when it comes to creating user interfaces. It’s a discipline stuck halfway between what the British scientist and novelist C.P. Snow called, in 1959, the two cultures: Science and engineering on the one hand, arts and the humanities on the other.

“Engineering works great when you have a problem like ‘Here’s a bridge; the river is 300 feet wide; build a bridge that spans the gap,’” Harrison said. “It’s easy to build solutions when the problem is well defined. Most of our work is actually trying to find the problems … We have to have that eye, that lens, that looks beyond. Like, what could be even better about [a particular] experience? You have to decouple yourself from reality a little bit. [FIGLAB appeals to] people that are open and creative thinkers, [who are] able to have those kinds of insights.”


Some of this can, Harrison said, be taught. A typical Ph.D. at Carnegie Mellon takes around six or seven years to complete. That’s plenty of time for students to get to grips with the lab’s philosophy and approach to technology. FIGLAB has access to the latest components, often long before they’re accessible to most people. But its approach to these can be dazzlingly subversive: Sure, you created this expensive component to do X, but we’re going to make it do Y instead.

“It often happens where we’re playing with things and we find entirely new ways to leverage them,” Harrison said. “We might get some crazy new sensor that might be for sensing, you know, temperature inside of a steel furnace. We’re like, ‘well, what happens if you flip it upside down and put it in a smartwatch?’ Well, oh my gosh, now you can do authentication based on blood vessels.”

The long nose of innovation


It should go without saying that none of this is straightforward. Harrison freely acknowledges that 90% of the prototypes the lab builds (and it nearly always prototypes its ideas) will ultimately end in failure. The technology may not yet be ready. The idea might turn out to be less cool in reality than it was in theory. Or it could just be that the public doesn’t take to an idea. After all, it’s not easy to see into the future.

The future, in some ways, is like fog. Short distances can be seen relatively clearly. Medium distances are fuzzier, but still visible. Try to look much beyond that, though, and you won’t see anything at all. That’s because fog attenuates light exponentially: each unit of distance swallows a fixed fraction of the light that remains.
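That exponential falloff is easy to see with a few lines of arithmetic. This is a minimal sketch of the analogy only; the 20% loss-per-unit figure is an illustrative assumption, not a value from the article.

```python
def light_remaining(distance, loss_per_unit=0.2):
    """Fraction of light surviving after `distance` units of fog,
    assuming each unit absorbs a fixed fraction of what remains."""
    return (1 - loss_per_unit) ** distance

# Nearby objects stay mostly visible, medium distances go fuzzy,
# and far distances vanish almost entirely.
print(light_remaining(1))   # short distance: most light survives
print(light_remaining(5))   # medium distance: noticeably dimmer
print(light_remaining(25))  # far distance: essentially nothing left
```

The same compounding applies to forecasting: each step further out multiplies the uncertainty, which is why near-term predictions are far more reliable than distant ones.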

The team at FIGLAB isn’t trying to predict the future, however, although there is a bit of guesswork in figuring out what future problems might be. Instead, it’s trying to Terminator the future: to screw around in the present in the hope that some of it pays off years from now.


In 2008, Bill Buxton, a senior researcher at Microsoft, put forward a theory he called the long nose of innovation. The idea, in essence, is that it takes a long time for a product to make its way from the first research lab demonstrations to widespread use. How long? Roughly 25 years. For instance, Doug Engelbart’s lab at the Stanford Research Institute came up with the initial concept for the computer mouse in the 1960s. The concept was refined at Xerox PARC during the 1970s, but it wasn’t until the Apple Macintosh in the 1980s that it became a mass-market product. Multitouch has been around since the 1980s, complete with gestures like pinching. (A young Steve Jobs actually visited Carnegie Mellon in 1985 for an early demo.) Still, it wasn’t until the iPhone in the 2000s that gestural touchscreens went mass-market.

As Buxton pointed out, the long nose says that any technology that is going to have a significant impact in the next decade is already at least a decade old. Any technology that is going to have a significant impact in the next five years is already at least 15 years old.

What Harrison’s lab is doing, therefore, is putting down the rough starting points of interfaces that, a quarter-century from now, might be commonplace. You probably couldn’t take many of its current projects and roll them out today with mass-market success. But give it a decade or two, and you very well might be able to. As Harrison said, “[Right now people] should be going back to papers from the early 2000s to find out what the next billion-dollar unicorn company is going to be in 2030.”

The right environment


Harrison’s media-savvy approach to user interfaces means that every finished project FIGLAB creates gets its own showcase demo video. These, he said, are often storyboarded long before a single line of code gets written. It’s how the team works out what the compelling use cases are going to be. It’s also how it garners a whole lot of attention, including from some heavy hitters.

“Often [tech companies will] see it online, or it’ll get passed around the office on some sort of internal social media, and people will get excited and someone will reach out and say, ‘Hey, can we build a demo of that on our platform?’ or ‘Can we come see a demo in person?’”

Companies that have sponsored FIGLAB include Google, Qualcomm, Intel, and others. A recent project, Listen Learner, made it possible for smart speaker owners to ask “What’s that noise?” and have a variety of household sounds positively identified. FIGLAB’s collaborator for that one? The ever-secretive Apple. To Harrison, part of the appeal for these companies is working with a lab so dedicated to experimentation.

“The wonderful and terrible thing about academia is that we have that intellectual freedom,” he said. “That means that very few of our products ship. Probably nine out of 10 of our projects will just disappear into the ether. Never even make a dent. You can’t run an industry lab like that. You have to have more successes to earn your bread. By [our] being decoupled from that reality and being able to cultivate those really eccentric skills and creativity, it’s the right environment to be able to produce these kinds of ideas.”

And, of course, nine out of every 10 ideas ending up junked doesn’t mean a thing if the 10th turns out to be the next computer mouse or smartphone.

If Harrison’s lab pulls off one of those interface game-changers, any number of short-term flops won’t make a jot of difference. And Chris Harrison will never have to worry about his future again.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…