“We were curious about the privacy threats that these devices can expose users to,” Shyam Gollakota, senior author of the study and UW associate professor of computer science and engineering, told Digital Trends. “So we asked the question, ‘How about a simple task of playing music on these devices? Can we use music to achieve surveillance on people?’”
Gollakota and his team used software called CovertBand, which allows a smart device to double as a remote-controlled sonar system by taking advantage of its built-in speaker and microphone. The researchers manipulated the devices to play modified music, and the software could then analyze the reflected sounds to track body movements and positions.
“The way this works is that we embed a chirp signal in the music and hide it using the beats of the music,” Gollakota explained. “These signals get reflected off the human body and can be observed by the microphones in these devices. We can analyze these reflections and can figure a whole host of things about the person.”
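The core idea behind this kind of acoustic sonar can be sketched in a few lines: emit a known chirp, record the room, and cross-correlate the recording against the chirp to find the echo delay, which converts to distance via the speed of sound. This is an illustrative simplification, not the CovertBand implementation (which hides the chirp under music beats and tracks 2D motion); all function names here are hypothetical.

```python
import numpy as np

FS = 48_000              # sample rate (Hz), common for phone audio hardware
SPEED_OF_SOUND = 343.0   # meters per second at room temperature

def linear_chirp(duration=0.01, f0=18_000, f1=20_000, fs=FS):
    """Generate a short linear frequency sweep (the 'chirp')."""
    t = np.arange(int(duration * fs)) / fs
    # instantaneous phase of a sweep from f0 to f1 over `duration`
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * duration))
    return np.sin(phase)

def estimate_distance(chirp, recording, fs=FS):
    """Locate the echo via cross-correlation; convert delay to distance."""
    corr = np.correlate(recording, chirp, mode="valid")
    delay_samples = int(np.argmax(np.abs(corr)))
    round_trip_s = delay_samples / fs
    return round_trip_s * SPEED_OF_SOUND / 2  # halve for one-way distance

# Simulate an attenuated echo from a reflector about 2 meters away.
chirp = linear_chirp()
delay = int(round(2 * 2.0 / SPEED_OF_SOUND * FS))
recording = np.zeros(delay + chirp.size)
recording[delay:] += 0.3 * chirp
recording += 0.01 * np.random.default_rng(0).normal(size=recording.size)

print(round(estimate_distance(chirp, recording), 2))  # ≈ 2.0
```

Repeating the chirp at the music's beat rate and watching how the echo delay changes over time is what turns a single range estimate into motion tracking.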
The UW team hid the subtle chirp in songs by artists like 2Pac and Michael Jackson, which you can hear here. The chirps are slight but not imperceptible: in the study, listeners could identify the edited songs 58 percent of the time.
Using CovertBand, the researchers were able to detect multiple individuals in the same room as the device, and even behind barriers such as thin walls. Without barriers, they could detect a walking individual from about 20 feet away with an error of around seven inches. Through a thin wall, that range dropped by about half.
Though the idea is unsettling, this isn’t the most secretive surveillance technique — an attacker has to literally play music for it to work. If someone tried this on your home smart TV, you’d surely notice. It nonetheless demonstrates the potential for such devices to be exploited in private or public spaces.
“Be careful about what kind of audio can be played on your device,” Gollakota advised. “Strictly control what kind of apps can use both your speaker and microphones and ensure that only the most trusted apps can do so.”
The researchers will present their report next month at the Ubicomp 2017 conference.