
Dreamcatcher is an A.I. that could help analyze the world’s dreams

Edyta Bogucka

Google search queries and social media posts provide a means of peering into the ideas, concerns, and expectations of millions of people around the world. Using the right web-scraping bots and big data analytics, everyone from marketers to social scientists can analyze this information and use it to draw conclusions about what’s on the mind of massive populations of users.


Could A.I. analysis of our dreams help do the same thing? That’s a bold, albeit intriguing, concept — and it’s one that researchers from Nokia Bell Labs in Cambridge, U.K., have been busy exploring. They’ve created a tool called “Dreamcatcher” that can, so they claim, use the latest Natural Language Processing (NLP) algorithms to identify themes from thousands of written dream reports.

Dreamcatcher is based on an approach to dream analysis referred to as the continuity hypothesis. This hypothesis, which is supported by strong evidence from decades of research into dreams, suggests that our dreams are reflections of the everyday concerns and ideas of dreamers.

That might sound like common sense. But it’s a very different way of thinking about dreams than the more complex interpretations put forward by theorists like Freud and Jung, who viewed dreams as windows into hidden libidinal desires and other usually obscured thought processes.

The automatic dream analyzer

The A.I. tool — which Luca Aiello, a senior research scientist at Nokia Bell Labs, told Digital Trends is an “automatic dream analyzer” — parses written descriptions of dreams and then scores them according to an established dream analysis inventory called the Hall-Van De Castle scale.

“This inventory consists of a set of scores that measure by how much different elements featured in the dream are more or less frequent than some normative values established by previous research on dreams,” Aiello said. “These elements include, for example, positive or negative emotions, aggressive interactions between characters, presence of imaginary characters, et cetera. The scale, per se, does not provide an interpretation of the dream, but it helps quantify interesting or anomalous aspects in them.”
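The scoring idea Aiello describes — measuring how much more or less frequent a dream element is compared to normative values — can be sketched in a few lines. Everything below is illustrative: the normative rates, keyword lists, and function names are invented stand-ins, not the published Hall-Van De Castle norms or the team's actual NLP pipeline.

```python
# Toy sketch of Hall-Van De Castle-style scoring: how often an element
# (e.g. aggression, imaginary characters) appears in a set of dream
# reports, compared against a normative baseline rate.

NORMS = {  # hypothetical normative frequencies (fraction of dreams)
    "aggression": 0.45,
    "negative_emotion": 0.60,
    "imaginary_character": 0.10,
}

KEYWORDS = {  # hypothetical keyword lists standing in for real NLP
    "aggression": {"attack", "fight", "chase"},
    "negative_emotion": {"afraid", "scared", "angry", "sad"},
    "imaginary_character": {"unicorn", "dragon", "monster"},
}

def element_rate(reports, element):
    """Fraction of reports mentioning at least one keyword for element."""
    hits = sum(
        any(word in report.lower() for word in KEYWORDS[element])
        for report in reports
    )
    return hits / len(reports)

def score_against_norm(reports, element):
    """Positive score = element is more frequent than the norm."""
    return element_rate(reports, element) - NORMS[element]

reports = [
    "A monster kept trying to attack me.",
    "I was scared and ran from a dragon.",
    "I had lunch with a friend.",
]
# Imaginary characters appear in 2 of 3 reports, far above the 0.10 norm.
print(round(score_against_norm(reports, "imaginary_character"), 2))
```

As Aiello notes, a score like this quantifies anomalies rather than interpreting them: it flags that imaginary characters are unusually frequent, but says nothing about why.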


The written dream reports came from an archive of 24,000 such records, taken from DreamBank, the largest public collection of English-language dream reports yet available. The team’s algorithm is capable of pulling these reports apart and reassembling them in a way that makes sense to the system — for instance, by sorting references into categories like “imaginary beings,” “friends,” “male characters,” “female characters,” and so on. It can then subdivide these further by filtering them into groups like “aggressive,” “friendly,” and “sexual” to indicate different types of interaction.

By taking note of the person recording the dream and its content, the researchers can discover some interesting links. A written record might be something like: “I was at a house. Ezra and a friend were on the computer. This unicorn thing kept running towards me when I opened a door. There were other strange creatures there and ones like chickens. They kept trying to attack me.” The Dreamcatcher tool can start with this description and automatically extract various insights, ultimately filing it under “Teenage concerns and activities.” (The dream was, in fact, recorded by Izzy, an “adolescent schoolgirl.”)
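The extraction step above can be mimicked with a minimal sketch run on that sample report. The category vocabularies here are invented for illustration; the actual system relies on NLP models rather than keyword lookup.

```python
# Minimal sketch: map words in a dream report to character categories
# and interaction types, as described above. Vocabularies are toy
# examples, not the researchers' actual category definitions.

CHARACTER_CATEGORIES = {
    "imaginary beings": {"unicorn", "creature", "creatures"},
    "friends": {"friend", "friends"},
    "animals": {"chicken", "chickens"},
}
INTERACTION_TYPES = {
    "aggressive": {"attack", "attacked", "chase"},
    "friendly": {"helped", "hugged"},
}

def extract(report):
    """Return the character categories and interaction types found."""
    words = set(report.lower().replace(".", " ").replace(",", " ").split())
    return {
        "characters": {cat for cat, vocab in CHARACTER_CATEGORIES.items()
                       if words & vocab},
        "interactions": {itype for itype, vocab in INTERACTION_TYPES.items()
                         if words & vocab},
    }

report = ("I was at a house. Ezra and a friend were on the computer. "
          "This unicorn thing kept running towards me when I opened a door. "
          "There were other strange creatures there and ones like chickens. "
          "They kept trying to attack me.")
result = extract(report)
# Flags imaginary beings, friends, and animals, plus an aggressive
# interaction — the raw material for filing the dream under a theme.
print(result)
```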


Aiello said that some of these insights are expected, while others reveal surprising lines of possible future inquiry. “For example, an adolescent’s dreams were characterized by increasing frequency of sexual interactions as she approached her adult life,” Aiello said. “More surprisingly, we found that blind people’s dreams feature more imaginary characters than the norm, which suggests that our senses influence the way we dream.”

This kind of analysis is something that psychologists looking at this data could also do — although nowhere near as quickly as an A.I. tool. “It is exciting to witness the growing ability of NLP to capture increasingly complex and intangible aspects of language,” Aiello said. “However, it is even more exciting to think that thanks to these techniques we gained the ability to perform dream analysis on a very large scale, something that would be impossible through the time-consuming process of manual dream annotation.”

Sweet dreams are made of these

When the Dreamcatcher system’s scores were compared to those calculated by psychologists, the A.I. algorithm matched 76% of the time. That suggests that further improvements could be made. Nonetheless, it’s a valuable start. Aiello — along with fellow researchers Alessandro Fogli and Daniele Quercia — believes the finished product could have profound applications.


One might be something like a mood-tracking app that asks users to record their dreams, then pulls out recurrent imagery over a certain duration. Aiello said such a tool could make daily dream reporting a habit for people, rewarding them with on-the-fly dream analysis.

However, the more intriguing concept is the one described at the start of this article: a kind of large-scale dream-tracking project that could map the world’s dreams onto real events to see how one informs the other. As with so many other forms of big data analysis, this would become more useful — and captivating — the more it was combined and cross-referenced with other real-world data.

“As more people volunteer to share their dreams, we envision the possibility of analyzing the dreams of a whole population — even of a whole country — to monitor its psychological well-being over time,” Aiello said. “Clearly, this would be only possible with the use of automated tools like ours that make dream analysis feasible on a large scale. This opportunity would be particularly compelling in the wake of global challenges that have an impact on everyone’s psyche. Today it’s COVID, next year it will likely be the economic crisis, and in three or four years it could be global warming.”

Luke Dormehl
Former Digital Trends Contributor
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…