“Hello, HAL. Do you read me, HAL?” said astronaut Dave Bowman, desperately trying to keep his emotions in check.
There was a pause and then, in an emotionless monotone, the computer responded. “Affirmative, Dave. I read you.”
“Open the pod bay doors, HAL.”
Another long pause. Was HAL, the all-powerful A.I. that controlled the Discovery One spacecraft, really ignoring him? Impossible, surely, Bowman thought. Any moment now, HAL would spring into action and obey —
“I’m sorry, Dave,” HAL continued. “I’m afraid I can’t do that.”
“What’s the problem?” Dave asked.
“Well, you see, Dave. I’ve forgotten how to open them.”
OK, that’s not exactly how things went in 2001: A Space Odyssey, but the jokes basically write themselves. After all, when it comes to artificial intelligence, a good memory seems to be one of the qualities researchers are most keen to build into their systems. Nobody wants an A.I. that patterns itself after the kind of everyday, unremarkable intelligence that forgets things. So why, then, is Facebook building a forgetful A.I.? And why, like the old joke about the waiter and the fly in a customer’s soup, are we soon all going to want one?
The answer, as it turns out, is that we do want something that’s more like the everyday, unremarkable intelligence that forgets things here and there. Humans forget important things like anniversaries and wallets and shutting the garage before we go on vacation. That is suboptimal forgetting. But we also forget pieces of information simply because we don’t need to retain them; like detritus caught in a sink drain’s food catcher, they are stopped before they can pass from short-term into long-term memory.
In a famous experiment, people were asked to correctly recognize a U.S. penny from a collection of pictures showing incorrect pennies. Although the participants likely saw, and used, pennies every day, they proved surprisingly poor at the task. As the researchers wrote: “On balance, the results were consistent with the idea that the visual details of an object, even a very familiar object, are typically available from memory only to the extent that they are useful in everyday life.”
While it’s not as simple as saying that the brain can fill up like hard drive storage, there are certainly short-term memories that appear to decay when they are no longer required, while others burrow their way into our brains and live there rent-free. An example? Think about where you packed away the Christmas decorations after last year’s holidays. Next, mentally walk from the front desk to your room in the last hotel you stayed at, one you are unlikely to ever stay in again. Neither piece of information is vitally important to your well-being, but one will be needed again and the other won’t. Somehow your brain knows which one to toss in the trash.
This is the idea behind Facebook’s new A.I. project, called Expire-Span. As artificial intelligence models are increasingly applied to long-form datasets like articles and books, the computational costs associated with these models ramp up as they try to memorize more information. The problem is becoming even more pressing as people collect more and more rich multimedia data about their lives.
“As the amount of content we have grows, the major question is storage,” Angela Fan, a research scientist at Facebook AI Research Paris, told Digital Trends. “A phone, for example, has a limited amount of memory. This is even greater a problem in wearables and other on-device-type applications, where privacy reasons may mean that people strictly want content stored on their device and not on a server or a cloud, accentuating storage challenges.”
Current A.I. models take something of a Frank Sinatra approach to memories: they remember all or nothing at all. Either they store all the information created at every time step, or they forget it all after a predetermined time. Sainbayar Sukhbaatar, another Facebook A.I. research scientist, likens it to remembering everything that happened in the last week perfectly, but absolutely nothing beyond that.
Researchers have built “forgetting” into A.I. models before. Long Short-Term Memory (LSTM) models, for example, added a forgetting mechanism to recurrent neural networks (RNN), one of the core technologies that drive machine learning. “RNNs have an internal memory that consists of one vector, so forgetting it means overwriting it with new information,” Sukhbaatar told Digital Trends.
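That overwriting is what the LSTM’s well-known forget gate does: a learned gate between 0 and 1 scales each element of the memory vector toward zero before new information is written in. A minimal numpy sketch (toy dimensions and variable names are illustrative, not anyone’s production code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forget_step(c_prev, x, h_prev, Wf, Uf, bf):
    """The forgetting half of an LSTM step: the forget gate decides,
    per element, how much of the previous cell state c_prev survives."""
    f = sigmoid(Wf @ x + Uf @ h_prev + bf)  # gate values in (0, 1)
    return f * c_prev  # elementwise: near 0 erases, near 1 keeps

# Toy sizes: memory vector of length 3, input of length 2.
rng = np.random.default_rng(0)
c_prev = np.array([1.0, -2.0, 0.5])
x, h_prev = rng.normal(size=2), rng.normal(size=3)
Wf, Uf, bf = rng.normal(size=(3, 2)), rng.normal(size=(3, 3)), np.zeros(3)

c_kept = lstm_forget_step(c_prev, x, h_prev, Wf, Uf, bf)
# Every surviving component is scaled toward zero; the rest of the
# LSTM step (not shown) then adds freshly gated new information.
```

Because the gate is strictly between 0 and 1, forgetting here is always partial and happens to the single shared memory vector, which is exactly the limitation Expire-Span’s per-memory approach sidesteps.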
By contrast, Expire-Span adds a forgetting mechanism to an external memory that can contain thousands of vectors. Its name recalls the kind of expiration labels found on bottles of milk: as new information is observed, Expire-Span determines how long that information should stay in memory. If a piece of information isn’t deemed crucial to the future, it can be made to gradually decay before it’s ultimately erased from the A.I. model’s memory to make room for more useful information.
To do this, the model needs to predict what is and isn’t relevant to a particular task, then assign the right expiration date, after which the information vanishes from the A.I. system. By dumping irrelevant information, Facebook’s A.I. is able to process information at larger scales.
As with much of Facebook’s artificial intelligence research, the social networking giant hasn’t announced that Expire-Span will be baked into any of its core products in the immediate future. But, as Facebook deals with more and more user data (not to mention its rumored move into smart glasses), it’s not hard to see how this technology could apply. When future A.I. tools are able to perform well despite dealing with massive heaps of data, Expire-Span could be the reason.
Don’t make the mistake of assuming that making a forgetful A.I. marks another step toward making A.I. more human, however. The exact reasons humans forget things are still the subject of plenty of investigation by researchers. As with the brain-inspired neural networks that dominate modern A.I., Facebook’s forgetful algorithm is modeled on the theory of forgetting, not an attempt to replicate it with any biological fidelity.
“The brain is a very complex system that’s not fully understood, and there are many different types of memory that form human memory,” Sukhbaatar said. “Expire-Span, like all other A.I. mechanisms, may be inspired by the human brain, but ultimately are not faithful reflections of how the brain actually works. Memory in particular is a very active research area in and of itself. The human memory analogy we describe is added for the purpose of clarity only, though it definitely inspires our work.”
And don’t you forget it!