IBM wants your next computer to think (and sense) like you

[Image: IBM 5 in 5]

For the last seven years, computing giant IBM has released the Five in Five, its forecast of technologies and innovations that its researchers believe will come to pass in five years’ time. This year, IBM has done something unusual. Instead of listing five disparate ideas, it has put the weight of its five predictions behind a single notion: In five years’ time, computers will be able to see, hear, touch, taste, and smell, albeit in their own ways.

It’s easy to say computers already do things like this. After all, computers and websites can “see” by culling through images for recognizable faces, apps can identify songs by “hearing” snippets, and the Curiosity rover is (in a sense) “tasting” and “smelling” rocks and soil samples to better understand the Martian environment. But IBM is looking beyond these specialized applications to computers that can analyze and interpret the real world in real time, then proactively reprogram themselves to improve at particular sensory tasks, the same way a musician trains her ear or a gourmand hones his palate.

If IBM is right, we could be at the beginning of a new age of computing, where devices move on from being simple calculators and bit-pushers to things that can understand their world — and ours.

Cognitive computing

[Image: IBM Watson (Jeopardy)]

IBM’s prediction has less to do with better sensors and more to do with better ways to interpret what comes from them, a field known as cognitive computing.

Today’s processors basically consider one command at a time, perform that function, and blindly move on to the next. These sequences of instructions can be dizzyingly complex, but the processors are just automatons that can only do what they’re told. Generally, these kinds of computers are dubbed von Neumann machines, after Princeton mathematician John von Neumann, who laid out the idea in 1945. They’re tremendously powerful and flexible tools responsible for many technological breakthroughs of the last six decades, from digital data storage to personal computers, the Internet, and mobile technologies. But while these processors have become ever smaller and more complex, they still only do what people tell them to do: nothing more, and nothing less.
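To make that contrast concrete, here’s a toy sketch in Python of the fetch-and-execute cycle at the heart of a von Neumann machine. The opcodes and the three-instruction program are invented for illustration; a real processor does the same thing in silicon, billions of times per second.

```python
# A toy von Neumann machine: fetch one instruction, execute it, and
# blindly move on. The instruction set here is invented for illustration.

def run(program):
    acc, pc = 0, 0                 # accumulator and program counter
    while pc < len(program):
        op, arg = program[pc]      # fetch the next instruction
        if op == "LOAD":           # put a constant in the accumulator
            acc = arg
        elif op == "ADD":          # add a constant to the accumulator
            acc += arg
        elif op == "JUMP_IF_NEG":  # conditional branch to another line
            if acc < 0:
                pc = arg
                continue
        elif op == "PRINT":
            print(acc)
        pc += 1                    # move on to the next instruction

run([("LOAD", 2), ("ADD", 3), ("PRINT", None)])  # prints 5
```

The machine never decides anything on its own: every behavior, however complex, traces back to an explicit instruction someone wrote.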

Cognitive computing applies concepts from neurobiology to computing, including the ways our senses process information and the way our brains develop skills and capabilities. Although cognitive computing develops in part from work in artificial intelligence, the idea is not to create machine intelligence or thinking machines such as the fictional AIs that turn up as villains in so many stories. Instead, the idea is to create devices and services that function in a similar way to human senses — only perhaps faster and with a great deal more precision — to help us with everyday tasks. In essence, cognitive computing is about creating tools that can see, hear, perceive, and draw conclusions about things in very human-like ways. It’s meant to extend our senses and capabilities to new levels, much as we’ve done with tools like microscopes, telescopes, and space probes.

[Image: IBM TrueNorth team]

IBM is one of the few companies on the planet tackling cognitive computing. Over the decades it has amassed a tremendous intellectual property portfolio, and it continues to invest heavily in difficult, long-term projects that push the limits of computational power and real-time systems. One recent example is Watson, the supercomputing system that roundly defeated all-time champions on the television quiz show Jeopardy last year. (Watson’s technology is now being put to work in health care.) Another example is TrueNorth, which IBM is calling its first cognitive-computing chip. Although it’s based on the same fundamental technologies as traditional von Neumann processors, TrueNorth is designed to mimic some of the structure of an organic brain using a massively parallel design. It simulates axons, neurons, dendrites, and synapses across a network of processing cores, and uses a parallel compiler that actually maps the long-distance neural pathways of a macaque monkey. TrueNorth is being developed with DARPA (the same folks who brought us the Internet back in the 1960s). Eventually, IBM and DARPA aspire to create a cognitive-computing architecture that closely approximates the human brain. IBM approached that scale earlier this year with simulations on the Lawrence Livermore National Lab Sequoia supercomputer, although the simulation ran more than 1,500 times slower than real time.

The key to cognitive computing is that systems can modify their behavior over time, based both on new input (including sensory data like images and sound) and on feedback from humans that they’re on the right track. In a sense, cognitive computing systems will be trained to do things it normally takes humans to do, like recognizing pictures, understanding and acting upon speech, or connecting seemingly disparate pieces of information to draw an expert conclusion. Even better, they will be able to constantly improve their performance without being reprogrammed or having to wait for new software versions.
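As a rough illustration of that training loop, here is a minimal sketch of an online learner: a simple perceptron that nudges its weights whenever human feedback flags a wrong answer. The feature vectors and labels are invented placeholders, and real cognitive systems are vastly more sophisticated.

```python
# A minimal "train, correct, improve" loop: the model adjusts itself in
# response to human feedback, with no reprogramming or new release.

def train_step(weights, features, human_label, lr=0.1):
    prediction = 1 if sum(w * x for w, x in zip(weights, features)) > 0 else -1
    if prediction != human_label:  # the human says we're on the wrong track
        weights = [w + lr * human_label * x for w, x in zip(weights, features)]
    return weights

weights = [0.0, 0.0, 0.0]
feedback_stream = [([1.0, 0.5, -0.2], 1), ([0.3, -1.0, 0.8], -1)]
for features, label in feedback_stream:
    weights = train_step(weights, features, label)
print(weights)  # the "program" is now the learned weights, not fixed code
```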

So how does IBM believe cognitive computing will enable computers to augment our senses in the next five years?

Vision

[Image: IBM Five in Five 2012 Vision]

A computer as simple as a point-and-shoot camera can already recognize faces, but cognitive computing will allow computers to recognize different elements of photos or videos in real time, much the way a human would. For instance, vision systems could be trained to pick out items in scenes based on things like color values, angles, and edge information, so that they could easily distinguish (say) a forest from a cityscape, or a desert from the inside of a store. When applied to video, a computer could monitor security camera footage for prowlers, or issue a real-time alert when a basement floods. Online, cognitive computing systems could look at photos uploaded to social networks and alert authorities about possible emergencies or security problems. The technology could also be applied to high-resolution medical scans, enabling doctors and diagnosticians to review data more comprehensively and perhaps catch some conditions long before they exhibit symptoms.
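As a hedged sketch of the kind of hand-built cues described above, the toy classifier below scores a scene by its green color balance and a crude edge-density measure. The thresholds and the forest-versus-cityscape rule are invented; trained systems would learn far richer features.

```python
import numpy as np

def scene_features(rgb):  # rgb: H x W x 3 array of values in [0, 1]
    green_ratio = rgb[..., 1].mean() / (rgb.mean() + 1e-9)
    gray = rgb.mean(axis=2)
    edges = np.abs(np.diff(gray, axis=0)).mean() + np.abs(np.diff(gray, axis=1)).mean()
    return green_ratio, edges

def guess_scene(rgb):
    green_ratio, edges = scene_features(rgb)
    # Toy heuristic: forests skew green with soft edges; cities are grayer
    # with hard, regular edges. Both thresholds are placeholders.
    return "forest" if green_ratio > 1.1 and edges < 0.1 else "cityscape"

print(guess_scene(np.random.rand(64, 64, 3)))  # random noise, just a smoke test
```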

Of course, the technology has all sorts of commercial applications. Images of every product you buy and every image you upload to social networks could be analyzed to pick up on your interests. Take a lot of pictures of sports cars? Ads for Porsches might start appearing on your smartphone. Snap a photo of some awesome ankle boots you see on the subway? A coupon for something like Fluevog Shoes might mysteriously arrive a few minutes later. Similarly, taking cell phone video of that fender-bender so you have evidence you weren’t at fault might make car insurance offers roll in.

Hearing

As with vision, computers can already recognize and process speech, but it’s hard work for a traditional machine. Systems like Apple’s Siri and Google Voice Search have to offload the heavy lifting to cloud-based systems because it’s too much for a phone to handle – that’s why they don’t work offline. 

But IBM imagines many other uses besides virtual assistants. For instance, a computer could understand and interpret an infant’s sounds, then send messages to parents or caretakers. A project called Deep Thunder is already using audio data (among other things) to make quick, hyper-local weather forecasts in flood- and slide-prone areas of Brazil. Smartphones could understand when you’re talking to something (or someone) besides the phone and automatically mute their microphone. Analyzing ultrasonics in real time could allow us to listen to bats or dolphins, and medical devices that restore human hearing, like cochlear implants, could be dramatically improved.
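To give a flavor of the ultrasonics idea, here is a minimal sketch that checks how much of a recording’s energy sits above the roughly 20 kHz ceiling of human hearing. The synthetic 30 kHz “chirp” and the 50-percent threshold are invented for illustration.

```python
import numpy as np

RATE = 96_000                            # samples/sec; must exceed twice the band
t = np.arange(RATE) / RATE               # one second of time stamps
signal = np.sin(2 * np.pi * 30_000 * t)  # stand-in for a 30 kHz bat call

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), d=1 / RATE)
ultrasonic = spectrum[freqs > 20_000].sum() / spectrum.sum()

if ultrasonic > 0.5:
    print(f"ultrasonic source detected ({ultrasonic:.0%} of energy above 20 kHz)")
```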

Touch

[Image: IBM Five in Five 2012 Touch]

Rather than suggesting that computers will be able to better understand touch in the near future, IBM believes they’ll be able to reproduce it for us in ways never before possible. In five years, IBM says, “you will be able to touch through your phone.” The same haptic technology that provides a subtle vibration when you touch a button on your phone could be improved to provide much more advanced feedback that simulates textures: the coarseness of pumice, or the slightly soft feel of a ripe pear.

There’s reason to be dubious about this particular technology. Many aspects of touch, like mass, specific heat, density, and size, aren’t related to texture. Nonetheless, haptic technology can be much more precise than what’s used to make phones and game controllers vibrate, and things like 3D printers have already paved the way for high-resolution, portable data formats for textures. IBM seems focused on retail applications, like enabling users to feel simulated clothing fabric before they decide to buy: If the technology works out, it could have lots of other applications, including gaming — imagine having to find your way through levels or puzzles using nothing but touch. One advantage of this technology is that it doesn’t seem dependent at all on the heavy lifting of cognitive computing: All the pieces seem to exist right now, which may make it the most feasible of IBM’s forecasts.
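For a sense of how texture data might drive an actuator, here is a speculative sketch that slides a virtual fingertip across a one-dimensional roughness profile and emits a vibration amplitude proportional to the local height changes. The profile, speed, and scaling are all invented, and real haptic formats will certainly differ.

```python
import numpy as np

# Convert a stored "texture" into a drive signal for a vibration motor.
roughness = np.array([0.0, 0.1, 0.9, 0.8, 0.1, 0.0, 0.5])  # pumice-like bumps
velocity = 2.0                                             # fingertip speed (a.u.)

# Bigger height changes under a faster-moving finger mean stronger buzzes.
amplitude = np.clip(np.abs(np.diff(roughness)) * velocity, 0.0, 1.0)
print(amplitude)  # one vibration sample per step across the surface
```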

Smell and taste

[Image: IBM Five in Five 2012 Smell]

Cognitive computers that understand smell and taste could essentially play the role of perfumer or flavoring manufacturer. By analyzing how different chemical compounds in food react with each other (and how humans sense them), a computer could concoct new flavor combinations and recipes that do everything from making school lunches more appealing to improving nutrition in under-developed regions. In haute cuisine, a computer might dream up a flavor combination that even seasoned chefs would never have considered (figs, beets, and pulque, anyone?) but that still delights our palates. Hits would quickly “trickle down” to ordinary fare.
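One simple way a computer might “concoct” pairings is to score ingredients by the aroma compounds they share, a heuristic borrowed from food science. The sketch below does exactly that, though the compound lists are invented placeholders rather than real chemistry data.

```python
# Score ingredient pairs by shared aroma compounds (Jaccard overlap).
compounds = {
    "fig":    {"furaneol", "linalool", "benzaldehyde"},
    "beet":   {"geosmin", "furaneol"},
    "pulque": {"linalool", "ethyl_acetate"},
}

def pairing_score(a, b):
    shared = compounds[a] & compounds[b]
    return len(shared) / len(compounds[a] | compounds[b])

pairs = [("fig", "beet"), ("fig", "pulque"), ("beet", "pulque")]
for a, b in sorted(pairs, key=lambda p: -pairing_score(*p)):
    print(f"{a} + {b}: {pairing_score(a, b):.2f}")
```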

A computer with a sense of smell could analyze chemical signatures (whether in the air or on surfaces, objects, or people) and apply highly specific knowledge to interpret that information. One day smartphones might have the sophisticated nose of a wine connoisseur, or be able to detect that a person is getting sick (or at least needs a mint) just by analyzing their breath when they speak on the phone. Phones might also be able to identify flowers (or perfumes) just by scent. Since instruments can be so much more sensitive than the human nose, the technology also has major applications in health care, emergency services, and industry: Imagine hospital equipment that can tell whether or not it’s sterile, smartphones and other equipment that can help locate trapped survivors (or ruptured gas lines) in a disaster, or even smartphones that can tell you how fresh a loaf of bread (or some deli salad) might be.
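A bare-bones version of such an “electronic nose” might compare a vector of chemical-sensor readings against stored signatures, as in the sketch below; the sensor channels and reference values are invented for illustration.

```python
import numpy as np

# Known smell signatures, one value per (hypothetical) sensor channel.
signatures = {
    "fresh bread": np.array([0.9, 0.1, 0.3, 0.0]),
    "stale bread": np.array([0.2, 0.6, 0.1, 0.4]),
    "natural gas": np.array([0.0, 0.1, 0.0, 0.9]),
}

def identify(reading):
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(signatures, key=lambda name: cosine(reading, signatures[name]))

print(identify(np.array([0.8, 0.2, 0.2, 0.1])))  # -> "fresh bread"
```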

Is any of this practical?

[Image: IBM Blue Gene/P]

The resource-intensive nature of IBM’s cognitive computing ambitions probably means that, even if some of these technologies can be demonstrated in five years, they certainly won’t be mainstream.

With a few possible exceptions (like being able to “touch” textures through a smartphone or interpret baby noises), many of IBM’s cognitive-computing applications will require major real-time horsepower. IBM’s TrueNorth simulation was running on a Blue Gene/Q supercomputer capable of 16.32 petaflops — back in June of this year, it was the fastest supercomputer in the world. Computer hardware is always advancing rapidly, but that’s not processing power that’s going to make it into smartphones or traditional PCs in the next five years. The best hope is that computer-intensive sensory applications might become available as cloud-based services.

While IBM’s moxie in taking on massive computing projects is certainly to be admired, it’s not necessarily the only way to engineer systems that give human-like results. Companies like Google, for instance, face gargantuan computing problems with things like their core Web search, which not only has to keep a constantly updated index of essentially the entire Internet, but also present relevant search results nearly instantly. Google doesn’t do this with cognitive computing and hardware on the scale of the human brain. Instead, it relies on actual humans: By analyzing the way millions of its users interact with its services, Google is essentially crowd-sourcing real, live, human intelligence to make its systems deliver what people want. It’s not cheap, but for now it’s more practical than throwing supercomputers at these problems. After all, there are billions of humans on the Internet, and only two or three computers on the planet right now potentially capable of doing things like the TrueNorth simulation.
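Here is a minimal sketch of that crowd-sourcing approach: re-ranking results by smoothed aggregate click feedback rather than any deep machine “understanding.” The counts and smoothing constants are invented, and real ranking systems are vastly more elaborate.

```python
# Re-rank results by smoothed click-through rate: the crowd's behavior,
# not the machine's comprehension, decides what floats to the top.
clicks = {"result_a": 940, "result_b": 120, "result_c": 310}     # user clicks
shown  = {"result_a": 1000, "result_b": 1000, "result_c": 1000}  # impressions

def crowd_score(result, prior=0.1, strength=50):
    # Smoothing shrinks noisy counts toward the prior click rate.
    return (clicks[result] + prior * strength) / (shown[result] + strength)

for result in sorted(clicks, key=crowd_score, reverse=True):
    print(result, f"{crowd_score(result):.2f}")
```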

Fortunately, these two approaches are not incompatible, and it’s possible some of the sensory technologies IBM forecasts might come to pass in the semi-near future through clever combinations of human input and trainable computing resources. One day, asking our phones about the funny noise the car is making, or whether the milk is starting to go sour, might be as everyday as sending a text message or sharing a photo.
