IBM wants your next computer to think (and sense) like you

IBM 5 in 5

For the last seven years, computing giant IBM has released The Five in Five, its forecast of technologies and innovations that its researchers believe will come to pass in five years’ time. This year, IBM has done something unusual. Instead of listing five disparate ideas, it has put the weight of its five predictions behind a single notion: In five years’ time, computers will be able to see, hear, touch, taste, and smell — albeit in their own ways.

It’s easy to say computers already do things like this. After all, computers and websites can “see” by culling through images for recognizable faces, apps can identify songs by “hearing” snippets, and the Curiosity rover is (in a sense) “tasting” and “smelling” rocks and soil samples to better understand the Martian environment. But IBM is looking beyond these specialized applications to computers that can analyze and interpret the real world in real time, then proactively reprogram themselves to improve at particular sensory tasks, the same way a musician trains her ear or a gourmand hones his palate.

If IBM is right, we could be at the beginning of a new age of computing, where devices move on from being simple calculators and bit-pushers to things that can understand their world — and ours.

Cognitive computing

IBM Watson (Jeopardy)

IBM’s prediction has less to do with better sensors and more to do with better ways to interpret what comes from them, a field known as cognitive computing.

Today’s processors basically consider one command at a time, perform that function, and blindly move on to the next function. These sequences of instructions can be dizzyingly complex, but the processors are just automatons that can only do what they’re told. Generally, these kinds of computers are dubbed Von Neumann machines, after Princeton mathematician John Von Neumann who laid out the idea in 1945. They’re tremendously powerful and flexible tools responsible for many technological breakthroughs of the last six decades, from digital data storage to personal computers, the Internet and mobile technologies. But while these processors have become ever smaller and more complex, they basically only do what people tell them to do: nothing more, and nothing less.
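The one-command-at-a-time model described above can be reduced to a toy sketch (an illustration only, not how any real chip is implemented): a loop that fetches one instruction, carries it out, and blindly moves on to the next.

```python
# A toy fetch-execute loop: the machine mechanically runs one
# instruction at a time and never deviates from its program.
def run(program, memory):
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":          # store a constant in a memory cell
            memory[args[0]] = args[1]
        elif op == "add":        # add one cell into another
            memory[args[0]] += memory[args[1]]
        elif op == "jmp":        # jump to another instruction
            pc = args[0]
            continue
        pc += 1
    return memory

# Compute 2 + 3 the only way this machine can: by being told every step.
result = run([("set", "a", 2), ("set", "b", 3), ("add", "a", "b")], {})
print(result["a"])  # 5
```

However elaborate the program, the machine here does nothing it wasn’t explicitly told to do, which is the limitation cognitive computing aims to move past.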

Cognitive computing applies concepts from neurobiology to computing, including the ways our senses process information and the way our brains develop skills and capabilities. Although cognitive computing develops in part from work in artificial intelligence, the idea is not to create machine intelligence or thinking machines such as the fictional AIs that turn up as villains in so many stories. Instead, the idea is to create devices and services that function in a similar way to human senses — only perhaps faster and with a great deal more precision — to help us with everyday tasks. In essence, cognitive computing is about creating tools that can see, hear, perceive, and draw conclusions about things in very human-like ways. It’s meant to extend our senses and capabilities to new levels, much as we’ve done with tools like microscopes, telescopes, and space probes.

IBM TrueNorth team

IBM is one of the few companies on the planet tackling cognitive computing. Over the decades it has amassed a tremendous intellectual property portfolio, and it continues to invest heavily in difficult, long-term projects that push the limits of computational power and real-time systems. One recent example is Watson, the supercomputing system that roundly defeated all-time champions on the television quiz show Jeopardy last year. (Watson’s technology is now being put to work in health care.) Another example is TrueNorth, which IBM is calling its first cognitive-computing chip. Although it’s based on the same fundamental technologies as traditional Von Neumann processors, TrueNorth is designed to mimic some of the structure of an organic brain using a massively parallel architecture. It simulates axons, neurons, dendrites, and synapses across a network of processing cores, and uses a parallel compiler that actually maps the long-distance neural pathways of a macaque monkey. TrueNorth is being developed with DARPA (the same folks who brought us the Internet back in the 1960s). Eventually, the partners aspire to create a cognitive-computing architecture that closely approximates the human brain. IBM already reached that scale in simulation earlier this year on the Lawrence Livermore National Lab’s Sequoia supercomputer — although the simulation ran more than 1,500 times slower than real time.

The key to cognitive computing is that these systems can modify their behavior over time, based both on new input (including sensory data like images and sound) and on feedback from humans that they’re on the right track. In a sense, cognitive computing systems will be trained to do things it normally takes humans to do, like recognize pictures, understand and act upon speech, or connect seemingly disparate pieces of information to draw an expert conclusion. Even better, they will be able to constantly improve their performance without being reprogrammed or having to wait for new versions.
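That train-by-feedback loop can be sketched with a classic online learner such as a perceptron (a minimal illustration of the general idea, not IBM’s actual technology), where human corrections nudge internal weights instead of anyone rewriting code.

```python
# Minimal online perceptron: behavior improves from human yes/no feedback
# rather than from new program code.
def predict(weights, features):
    score = sum(w * f for w, f in zip(weights, features))
    return 1 if score > 0 else -1

def train_step(weights, features, human_label, rate=0.1):
    # If the human says the guess was wrong, nudge weights toward the truth.
    if predict(weights, features) != human_label:
        weights = [w + rate * human_label * f
                   for w, f in zip(weights, features)]
    return weights

weights = [0.0, 0.0]
# Each example: (features, the label a human supplies as feedback).
feedback = [([1.0, 0.0], 1), ([0.0, 1.0], -1),
            ([1.0, 0.2], 1), ([0.1, 1.0], -1)]
for features, label in feedback * 5:   # several rounds of feedback
    weights = train_step(weights, features, label)

print(predict(weights, [0.9, 0.1]))   # 1
```

The program’s instructions never change; only the weights do, which is the sense in which such a system improves “without being reprogrammed.”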

So how does IBM believe cognitive computing will enable computers to augment our senses in the next five years?


Sight

IBM Five in Five 2012 Vision

A computer as simple as a point-and-shoot camera can already recognize faces, but cognitive computing will allow computers to recognize different elements of photos or videos in real time, much the way a human would. For instance, vision systems could be trained to pick out items in scenes based on things like color values, angles, and edge information, so that they could easily distinguish (say) a forest from a cityscape, or a desert from the inside of a store. When applied to video, a computer could monitor security camera footage for prowlers, or issue a real-time alert when a basement floods. Online, cognitive computing systems could look at photos uploaded to social networks and alert authorities about possible emergencies or security problems. The technology could also be applied to high-resolution medical scans, enabling doctors and diagnosticians to review data more comprehensively and perhaps catch some conditions long before they exhibit symptoms.
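As a toy illustration of training on color values (the prototype colors and pixels below are made-up values, and real systems would use far richer angle and edge features), a classifier could compare a frame’s average color against labeled prototypes:

```python
# Sketch: tell "forest" from "desert" frames by average color alone,
# a stand-in for the richer color/angle/edge features described above.
def mean_color(pixels):
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def classify(pixels, prototypes):
    avg = mean_color(pixels)
    # Pick the nearest prototype by squared distance in RGB space.
    return min(prototypes, key=lambda name: sum(
        (a - b) ** 2 for a, b in zip(avg, prototypes[name])))

# Hand-picked prototype colors (illustrative, not trained values).
prototypes = {"forest": (34, 85, 34), "desert": (210, 180, 140)}

frame = [(40, 90, 30), (30, 80, 38), (36, 88, 29)]  # mostly green pixels
print(classify(frame, prototypes))  # forest
```

Training, in this picture, just means refining the prototypes from labeled examples and human corrections rather than hand-picking them.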

Of course, the technology has all sorts of commercial applications. Images of every product you buy and every image you upload to social networks could be analyzed to pick up on your interests. Take a lot of pictures of sports cars? Ads for Porsches might start appearing on your smartphone. If you take a picture of some awesome ankle boots you see on the subway, a coupon for something like Fluevog Shoes might mysteriously be delivered a few minutes later. Similarly, taking cell phone video of that fender-bender so you have evidence you weren’t at fault might make car insurance offers roll in.


Hearing

As with vision, computers can already recognize and process speech, but it’s hard work for a traditional machine. Systems like Apple’s Siri and Google Voice Search have to offload the heavy lifting to cloud-based systems because it’s too much for a phone to handle – that’s why they don’t work offline. 

But IBM imagines many other uses besides virtual assistants. For instance, a computer could understand and interpret an infant’s sounds, then send messages to parents or caretakers. A project called Deep Thunder is already using audio data (among other things) to make quick, hyper-local weather forecasts in flood- and slide-prone areas of Brazil. Smartphones could understand when you’re talking to something (or someone) besides the phone and automatically mute their microphone. Analyzing ultrasonics in real time could allow us to listen to bats or dolphins, and medical devices that restore human hearing, like cochlear implants, could be dramatically improved.


Touch

IBM Five in Five 2012 Touch

Rather than suggesting that computers will be able to better understand touch in the near future, IBM believes they’ll be able to reproduce it for us in ways never before possible. In five years, IBM says, “you will be able to touch through your phone.” The same haptic technology that provides a subtle vibration when you touch a button on your phone could be improved to provide much more advanced feedback that simulates textures – the coarseness of pumice, or the slightly soft feel of a ripe pear.

There’s reason to be dubious about this particular technology. Many aspects of touch, like mass, specific heat, density, and size, aren’t related to texture. Nonetheless, haptic technology can be much more precise than what’s used to make phones and game controllers vibrate, and things like 3D printers have already paved the way for high-resolution, portable data formats for textures. IBM seems focused on retail applications, like enabling users to feel simulated clothing fabric before they decide to buy: If the technology works out, it could have lots of other applications, including gaming — imagine having to find your way through levels or puzzles using nothing but touch. One advantage of this technology is that it doesn’t seem dependent at all on the heavy lifting of cognitive computing: All the pieces seem to exist right now, which may make it the most feasible of IBM’s forecasts.

Smell and taste

IBM Five in Five 2012 Smell

Cognitive computers that understand smell and taste could essentially play the role of perfumer or flavoring manufacturer. By analyzing how different chemical compounds in food react with each other – and how humans sense them – a computer could concoct new flavor combinations and recipes that do everything from making school lunches more appealing to improving nutrition in under-developed regions. In haute cuisine, a computer might dream up a flavor combination that even seasoned chefs would never have considered (figs, beets, and pulque, anyone?) but that still delights our palates. Hits would quickly “trickle down” to ordinary fare.

A computer with a sense of smell could analyze chemical signatures (whether in the air or on surfaces, objects, or people) and apply highly specific knowledge to interpret that information. One day smartphones might have the sophisticated nose of a wine connoisseur, or be able to detect that a person is getting sick (or at least needs a mint) just by analyzing their breath when they speak on the phone. Phones might also be able to identify flowers (or perfumes) just by scent. Since instruments can be so much more sensitive than the human nose, the technology also has major applications in health care, emergency services, and industry: Imagine hospital equipment that can tell whether or not it’s sterile, smartphones and other equipment that can help locate trapped survivors (or ruptured gas lines) in a disaster, or even smartphones that can tell you how fresh a loaf of bread (or some deli salad) might be.

Is any of this practical?

IBM Blue Gene/P

The resource-intensive nature of IBM’s cognitive computing ambitions probably means that, even if some of these technologies can be demonstrated in five years, they’re certainly not going to be mainstream.

With a few possible exceptions (like being able to “touch” textures through a smartphone or interpret baby noises), many of IBM’s cognitive-computing applications will require major real-time horsepower. IBM’s TrueNorth simulation was running on a Blue Gene/Q supercomputer capable of 16.32 petaflops — back in June of this year, it was the fastest supercomputer in the world. Computer hardware is always advancing rapidly, but that’s not processing power that’s going to make it into smartphones or traditional PCs in the next five years. The best hope is that computer-intensive sensory applications might become available as cloud-based services.

While IBM’s moxie in taking on massive computing projects is certainly to be admired, it’s not necessarily the only way to engineer systems that give human-like results. Companies like Google, for instance, face gargantuan computing problems with things like their core Web search, which not only has to keep a constantly updated index of essentially the entire Internet, but also has to present relevant search results nearly instantly. Google doesn’t do this with cognitive computing and hardware on the scale of the human brain. Instead, it relies on actual humans: By analyzing the way millions of its users interact with its services, Google is essentially crowd-sourcing real, live human intelligence to make its systems deliver what people want. It’s not cheap, but for now it’s more practical than throwing supercomputers at these problems. After all, there are billions of humans on the Internet, and only two or three computers on the planet right now potentially capable of doing things like the TrueNorth simulation.
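The crowd-sourcing idea can be sketched as a simple click-feedback loop (a hypothetical illustration, not Google’s actual ranking system): results that real users click rise in the ranking, with no machine “understanding” required.

```python
# Sketch of crowd-sourced ranking: results users actually click float
# upward over time, standing in for human judgment the machine lacks.
from collections import defaultdict

clicks = defaultdict(int)    # result id -> number of user clicks
shows = defaultdict(int)     # result id -> number of times shown

def record(result_id, clicked):
    shows[result_id] += 1
    if clicked:
        clicks[result_id] += 1

def rank(results):
    # Order by observed click-through rate (with crude +1/+2 smoothing
    # so never-shown results aren't divided by zero).
    return sorted(results, key=lambda r: (clicks[r] + 1) / (shows[r] + 2),
                  reverse=True)

# Millions of users stand in for a cognitive system's judgment:
for _ in range(90):
    record("page-a", clicked=True)
for _ in range(10):
    record("page-a", clicked=False)
for _ in range(100):
    record("page-b", clicked=False)

print(rank(["page-b", "page-a"]))  # ['page-a', 'page-b']
```

The system never interprets the pages themselves; it simply aggregates what humans preferred, which is why the approach scales with users rather than with supercomputers.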

Fortunately, these two approaches are not incompatible, and it’s possible some of IBM’s forecast sensory technologies might come to pass in the semi-near future through clever combinations of human input and trainable computing resources. One day, asking our phones about the funny noise the car is making or whether the milk is starting to go sour might be as everyday as sending a text message or sharing a photo.

