
Intel is using A.I. to build smell-o-vision chips

While smell-o-vision may be a long way from being ready for your PC, Intel is partnering with Cornell University to bring it closer to reality. Intel’s Loihi neuromorphic research chip can act as a powerful electronic nose with a wide range of applications, recognizing dangerous chemicals in the air.

“In the future, portable electronic nose systems with neuromorphic chips could be used by doctors to diagnose diseases, by airport security to detect weapons and explosives, by police and border control to more easily find and seize narcotics, and even to create more effective at home smoke and carbon monoxide detectors,” Intel said in a press statement.


With machine learning, Loihi can recognize hazardous chemicals “in the presence of significant noise and occlusion,” Intel said, suggesting the chip can be used in the real world, where smells — such as perfumes, food, and other odors — are often found in the same area as a harmful chemical. Loihi learned to identify each hazardous odor from just a single sample, and learning a new smell didn’t disrupt previously learned scents.
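The one-shot, no-forgetting behavior described above can be sketched with a toy nearest-prototype classifier. This is purely an illustration under assumed names and made-up sensor values — Loihi’s real approach is a spiking neural network, not this:

```python
import math

class OneShotScentClassifier:
    """Toy stand-in for one-shot odor learning (an assumption for
    illustration; Loihi actually uses neuromorphic spiking circuits)."""

    def __init__(self):
        self.prototypes = {}  # odor name -> stored sensor vector

    def learn(self, name, sample):
        # One-shot learning: a single sample becomes the class prototype.
        # Adding a new odor never modifies previously stored prototypes,
        # so earlier smells are not disrupted.
        self.prototypes[name] = list(sample)

    def identify(self, sample):
        # Return the odor whose prototype is closest in Euclidean distance.
        return min(
            self.prototypes,
            key=lambda name: math.dist(self.prototypes[name], sample),
        )

clf = OneShotScentClassifier()
clf.learn("ammonia", [0.9, 0.1, 0.4])
clf.learn("acetone", [0.2, 0.8, 0.5])
print(clf.identify([0.85, 0.15, 0.35]))  # prints "ammonia"
```

Because each class keeps its own independent prototype, learning “methane” later leaves the stored “ammonia” vector untouched — a simple way to get the no-catastrophic-forgetting property the article mentions.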


Intel Labs senior research scientist Nabil Imam, who worked on the Loihi development team, modeled the chip’s scent analysis on the same computational principles biological brains use in humans and animals. The company worked with Cornell to analyze the brain’s electrical activity as animals smell odors, and Intel Labs scientists derived a set of algorithms and configured them for neuromorphic silicon.

There’s still plenty of work to be done on electronic noses. As with image recognition in machine learning, olfactory learning struggles to tell similar inputs apart. Fruits with similar odors, for example, can be difficult for neuromorphic systems like Loihi to distinguish.

“Imam and team took a dataset consisting of the activity of 72 chemical sensors in response to 10 gaseous substances (odors) circulating within a wind tunnel,” Intel detailed. “The sensor responses to the individual scents were transmitted to Loihi, where silicon circuits mimicked the circuitry of the brain underlying the sense of smell. The chip rapidly learned neural representations of each of the 10 smells, including acetone, ammonia and methane, and identified them even in the presence of strong background interferents.”
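The setup Intel describes — 72-sensor response vectors for 10 odors, identified despite background interference — can be mimicked with a minimal sketch. The data here is randomly generated, not the real wind-tunnel dataset, and nearest-prototype matching is only a crude analogue of Loihi recalling a learned neural representation:

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for the wind-tunnel data: 10 odors, each
# described by a 72-sensor response vector (values are made up).
ODORS = {f"odor_{i}": [random.random() for _ in range(72)] for i in range(10)}

def identify(reading, prototypes):
    """Match a sensor reading to the nearest stored odor signature."""
    return min(prototypes, key=lambda n: math.dist(prototypes[n], reading))

# Simulate "strong background interferents" as additive noise on a
# clean odor signature, then check the match still succeeds.
clean = ODORS["odor_3"]
noisy = [x + random.gauss(0, 0.1) for x in clean]
print(identify(noisy, ODORS))  # expected to recover "odor_3"
```

In 72 dimensions the noise barely moves the reading relative to the gap between different odor signatures, which is a rough intuition for why recognition can survive significant interference.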

Intel claims Loihi can learn 10 different odors right now. In the future, robots equipped with electronic noses might be used to monitor the environment, and doctors could use these computerized olfactory systems for medical diagnosis in instances where diseases emit particular odors.

Chuong Nguyen