
Like a wearable guide dog, this backpack helps blind people navigate

Visual Assistance System for the Visually Impaired

In “Secondhand Spoke,” the 15th episode of the 12th season of Family Guy, teenage son Chris Griffin is being bullied. With Chris unable to come up with responses to the verbal gibes of his classmates, his smarter baby brother, Stewie, hops in a backpack so that Chris can surreptitiously carry him around. Prompted by Stewie, Chris not only manages to get back at the bullies, but even winds up getting nominated for class president for his troubles.


That Family Guy B-plot bears only a passing resemblance to a new project carried out by Intel and the University of Georgia. Nonetheless, the project is an intriguing one: A smart backpack that helps its wearer better navigate their environment, all through the power of speech.

What researcher Jagadish Mahendran and his team have developed is an A.I.-powered, voice-activated backpack designed to help its wearer perceive the surrounding world. The backpack, which could be particularly useful as an alternative to guide dogs for visually impaired users, combines a camera worn in a vest jacket, a fanny pack containing a battery pack, and a computing unit, allowing it to respond to voice commands by audibly describing the world around the wearer.

That means detecting visual information about traffic signs, traffic conditions, changes in elevation, and crosswalks, alongside location information, and then turning it all into useful spoken descriptions delivered via Bluetooth earphones.
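To give a flavor of how a system like this might stitch detections into speech, here is a minimal, hypothetical Python sketch. The `Detection` structure, its fields, and the example labels are illustrative assumptions rather than Mahendran's actual code; a text-to-speech engine such as the open-source pyttsx3 package could then voice the resulting string.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "crosswalk", "stop sign" (illustrative labels)
    bearing: str       # "left", "ahead", or "right", from the object's image position
    distance_m: float  # metric distance, e.g. read off a depth map

def describe(detections: list[Detection]) -> str:
    """Compose the short sentence the wearer would hear through earphones."""
    if not detections:
        return "Path is clear."
    parts = [f"{d.label} {d.bearing}, {d.distance_m:.0f} meters" for d in detections]
    return ". ".join(parts) + "."

print(describe([Detection("crosswalk", "ahead", 6.0),
                Detection("bicycle", "left", 3.0)]))
# Output: "crosswalk ahead, 6 meters. bicycle left, 3 meters."
# A TTS engine (e.g. pyttsx3) would then speak this string aloud.
```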

A useful assistive tool

“The idea of developing an A.I.-based visual-assistance system occurred to me eight years ago in 2013 during my master’s,” Mahendran told Digital Trends. “But I could not make much progress back then for [a] few reasons: I was new to the field and deep learning was not mainstream in computer vision. However, the real inspiration happened to me last year when I met my visually impaired friend. As she was explaining her daily challenges, I was struck by this irony: As a perception and A.I. engineer I have been teaching robots how to see for years, while there are people who cannot see. This motivated me to use my expertise, and build a perception system that can help.”

The A.I. navigation backpack setup. Image: Jagadish Mahendran

The system contains some impressive technology, including a Luxonis OAK-D spatial A.I. camera: The OpenCV Artificial Intelligence Kit with Depth, which is powered by Intel. It is capable of running advanced deep learning neural networks while also providing high-level computer vision functionality, complete with a real-time depth map, color information, and more.
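For readers curious what programming the OAK-D looks like, below is a minimal sketch using Luxonis' publicly documented DepthAI Python API (the `depthai` package). This is not the project's actual code: The model blob path is a placeholder, and the team's real pipeline runs many more networks than this single detector.

```python
import depthai as dai

# Build a pipeline: RGB camera plus stereo depth feeding a spatial detection network.
pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)   # match the detection network's input size
cam.setInterleaved(False)

mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

# The network runs on the camera's Movidius VPU, not on the host computer.
nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
nn.setBlobPath("model.blob")   # placeholder: an OpenVINO-compiled model file
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)
stereo.depth.link(nn.inputDepth)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

# Stream detections, each tagged with real-world XYZ coordinates in millimeters.
with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="detections", maxSize=4, blocking=False)
    while True:
        for det in queue.get().detections:
            distance_m = det.spatialCoordinates.z / 1000.0
            print(f"label {det.label}, confidence {det.confidence:.2f}, "
                  f"{distance_m:.1f} m away")
```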

“The success of the project is that we are able to run many complex A.I. models on a setup that has a simple and small form factor and is cost-effective, thanks to [the] OAK-D camera kit that is powered by Intel’s Movidius VPU, an A.I. chip, along with Intel OpenVINO software,” Mahendran said. “Apart from A.I., I have used multiple technologies such as GPS, point cloud processing, and voice recognition.”
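The article doesn't specify which voice stack the project uses, but a hedged sketch of the voice-command side, using the common open-source SpeechRecognition package as a stand-in, might look like the following. The command words are invented for illustration.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

def listen_for_command() -> str:
    """Capture one utterance from the microphone and return it as lowercase text."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # compensate for street noise
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        return ""  # speech was unintelligible; wait for the next command

# Hypothetical command words, mapped to the system's spoken responses.
command = listen_for_command()
if "describe" in command:
    print("Run scene description and speak the result...")
elif "locate" in command:
    print("Read out the current GPS position...")
```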

Currently in testing phase

As with any wearable device, a big challenge involves making it something that people would actually want to wear. Nobody wants to look like a science-fiction cyborg outside of Comic-Con.

Fortunately, Mahendran’s A.I. vest does well by these measures. It conforms to what the late Xerox PARC computer scientist Mark Weiser said was necessary for ubiquitous computing: Receding into the background without attracting attention to itself. The components are all hidden from view, with even the camera (which, by design, must be visible in order to record the necessary images) looking out at the world through three tiny holes in the vest.


“The system is simple, wearable, and unobtrusive so that the user doesn’t get unnecessary attention from other pedestrians,” Mahendran said.

Currently, the project is in the testing phase. “I did the initial [tests myself] in downtown Monrovia, California,” Mahendran said. “The system is robust, and can run in real time.”

Mahendran noted that, in addition to detecting outdoor obstacles ranging from bikes to overhanging tree branches, the system can also be useful indoors, for tasks such as detecting open kitchen cabinet doors and the like. In the future, he hopes that members of the public who need such a tool will be able to try it out for themselves.

“We have already formed a team called Mira, which is a group of volunteers from various backgrounds, including people who are visually impaired,” Mahendran said. “We are growing the project further with a mission to provide an open-source, A.I.-based visual assistance system for free. We are currently in the process of raising funds for our initial phase of testing.”

Luke Dormehl
Former Digital Trends Contributor