
Teaching machines to see illusions may help computer vision get smarter

Do you remember the optical illusions you probably first saw as a kid, the ones that use some combination of color, light, and patterns to create images that deceive the brain? It turns out that such illusions — where perception doesn’t match up with reality — may, in fact, be a feature of the brain rather than a bug. And teaching a machine to recognize the same kinds of illusions may result in smarter image recognition.

This is what computer vision experts from Brown University have been busy working on. They are teaching computers to perceive context-dependent optical illusions, in the hope of creating smarter, more brain-like artificial vision algorithms that prove more robust in the real world.


“Computer vision has become ubiquitous, from self-driving cars parsing a stop sign to medical software looking for tumors in an ultrasound,” David Mely, one of the cognitive science researchers who worked on the project, now at artificial intelligence company Vicarious, told Digital Trends. “However, those systems have weaknesses stemming from the fact that they are modeled after an outdated blueprint of how our brains work. Integrating newly understood mechanisms from neuroscience like those featured in our work may help make those computer vision systems safer. Much of the brain remains poorly understood, and further research at the confluence of brains and machines may help unlock further fundamental advances in computer vision.”

In their work, the team used a computational model to explore and replicate the ways that neurons interact with one another when viewing an illusion. They built a model of the feedback connections between neurons, mirroring those in the human brain, that responds differently depending on context. The hope is that this will help with tasks like color differentiation — for example, helping a robot designed to pick red berries to identify those berries even when the scene is bathed in red light, as might happen at sunset.
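The berry-picking scenario is, at its core, a color-constancy problem: discounting a global color cast so that object colors stay identifiable. The sketch below is not the Brown team's neural model; it is a minimal gray-world correction in NumPy, a classical stand-in that illustrates the kind of context-dependent normalization the article describes. The scene values and function name are illustrative assumptions.

```python
import numpy as np

def gray_world_correct(image):
    """Undo a global color cast by assuming the average scene color is neutral.

    Each channel is divided by its mean (a simple divisive normalization),
    then the result is rescaled back into [0, 1].
    """
    means = image.reshape(-1, 3).mean(axis=0)  # per-channel averages (R, G, B)
    corrected = image / means                  # divide out the cast
    return corrected / corrected.max()         # rescale to [0, 1]

# Toy scene: a red berry next to a green leaf, both bathed in red sunset
# light, which inflates every pixel's red channel.
scene = np.array([[[0.9, 0.2, 0.1],    # berry under red light
                   [0.7, 0.6, 0.2]]])  # leaf under red light
balanced = gray_world_correct(scene)
# After correction, red still dominates the berry pixel while green
# dominates the leaf pixel, so a simple color threshold becomes reliable
# despite the lighting.
```

The brain (and the Brown model) achieves this with recurrent contextual circuitry rather than a single global statistic, but the example shows why some form of surround-dependent normalization is needed before raw pixel color can be trusted.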

“A lot of intricate brain circuitry exists to support such forms of contextual integration, and our study proposes a theory of how this circuitry works across receptive field types, and how its presence is revealed in phenomena called optical illusions,” Mely continued. “Studies like ours, that use computer models to explain how the brain sees, are necessary to enhance existing computer vision systems: many of them, like most deep neural networks, still lack the most basic forms of contextual integration.”

While the project is still in its relative infancy, the team has already translated the neural circuit into a modern machine learning module. When tested on contour detection and contour tracing tasks, the circuit vastly outperformed modern computer vision models.

Luke Dormehl
Former Digital Trends Contributor