
A.I. headphones could warn distracted pedestrians when there’s traffic around

Headphones can seal us in our own isolated sound bubbles, putting an invisible wall around wearers even in public spaces. At least, it can feel that way. In reality, the world doesn’t actually disappear when you put on your fancy AirPods Pro, as walking across a busy street without paying attention would quickly remind you.

Could machine intelligence help where human intelligence fails us?

That’s certainly what researchers from Columbia University hope. They have developed a Pedestrian Audio Warning System (PAWS) that alerts headphone wearers to the threat posed by passing vehicles. The smart headphone technology uses machine learning algorithms to interpret vehicle sounds from up to 60 meters away, and can then provide information about the location of those vehicles. The result could be a major boon to pedestrian safety at a time when, tragically, more pedestrians than ever are being killed on roads in the United States.

Electrical Engineering and Data Science Institute/Columbia University

The headphones used for the prototype system feature an array of low-cost microphones, located in different parts of the headset. The relevant sound features of possible cars are extracted by an onboard custom integrated circuit, which then sends them to a paired smartphone app. The smartphone uses machine learning algorithms to determine what is and is not a vehicle sound. The neural network it relies on was trained using audio from a wide range of both vehicles and environmental conditions.
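The pipeline described above — frame the incoming audio, boil each frame down to a compact set of sound features, and hand those features to a classifier — can be sketched in a few lines. This is purely an illustrative stand-in: the real PAWS system does this stage on a custom integrated circuit, and its actual feature set and model are not public. Here, coarse spectral band energies serve as the example features.

```python
import numpy as np

def extract_band_energies(signal, frame_len=1024, n_bands=8):
    """Split audio into frames and compute coarse spectral band energies.

    Illustrative stand-in for the on-headset feature extraction stage;
    the features would then be sent on to a classifier (in PAWS's case,
    a neural network running on a paired smartphone).
    """
    n_frames = len(signal) // frame_len
    features = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        spectrum = np.abs(np.fft.rfft(frame))
        # Group the FFT bins into a handful of coarse frequency bands
        bands = np.array_split(spectrum, n_bands)
        features.append([float(np.sum(b ** 2)) for b in bands])
    return np.array(features)

# Synthetic 1-second "recording" at 16 kHz: low-frequency engine-like
# rumble (90 Hz) plus background noise.
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000
signal = np.sin(2 * np.pi * 90 * t) + 0.1 * rng.standard_normal(16000)

feats = extract_band_energies(signal)
print(feats.shape)  # (15, 8): 15 frames, 8 band energies per frame
```

With a low-frequency rumble in the input, the energy lands in the lowest band — exactly the kind of compact signature a trained model can use to separate engine noise from, say, speech or wind.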

The system is still far from complete. For one thing, it can identify only the approximate position of vehicles, not their trajectory. Being able to determine trajectory would be far more useful than assuming a static road state for vehicles that are, in reality, anything but static. Secondly, the researchers are still working out the best way to signal this information to wearers. One possibility would be to play warning beeps on different sides of stereo headphones to make clear exactly where a sound is emanating from.
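One simple way to realize that stereo-cue idea is constant-power panning: render the beep once, then weight the left and right channels according to the vehicle's estimated direction. To be clear, the PAWS team has not said which cue design they will use; this is just a minimal sketch of the concept.

```python
import numpy as np

def directional_beep(azimuth_deg, duration=0.2, sample_rate=16000, freq=880.0):
    """Render a warning beep panned toward the vehicle's direction.

    azimuth_deg ranges from -90 (hard left) to +90 (hard right).
    Constant-power panning keeps perceived loudness roughly steady
    while shifting the beep between the two headphone channels.
    """
    t = np.arange(int(duration * sample_rate)) / sample_rate
    beep = np.sin(2 * np.pi * freq * t)
    # Map azimuth to a pan angle between 0 and pi/2
    pan = (azimuth_deg + 90) / 180 * (np.pi / 2)
    left_gain, right_gain = np.cos(pan), np.sin(pan)
    return np.stack([left_gain * beep, right_gain * beep])  # shape: (2, n_samples)

stereo = directional_beep(-90)  # car on the left: all energy in the left channel
```

A real implementation would also need interaural time differences and distance-based loudness to sound convincing, but even this crude cue conveys left-versus-right at a glance.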

The PAWS project has already received a grant of $1.2 million from the National Science Foundation. According to IEEE Spectrum, the team is hoping to pass a “more refined” version of the tech over to a company that could bring it to market.
