
Eavesdropping tech reverse-engineers speech based on light bulb vibrations

Lamphone: Real-Time Passive Sound Recovery from Light Bulb Vibrations

The next James Bond movie may not be out until later this year, but thanks to researchers from Ben Gurion University and the Weizmann Institute of Science in Israel, you can get your high-tech spy fix today by checking out their new proof-of-concept eavesdropping demonstration. In a project called Lamphone, they have shown how it’s possible to listen to what is being said in a room without physically accessing the space or using any traditional recording equipment. How? By observing the minute vibrations that speech in the immediate vicinity induces in a light bulb.

“We [demonstrated] that speech can be recovered from a hanging bulb in real-time by passively analyzing its vibrations via electro-optical sensor,” Ben Nassi, one of the researchers on the Lamphone project, told Digital Trends.

In their demonstration, the researchers set out to record the audio in a third-floor office, using a single 12-watt LED bulb hanging from the ceiling. The eavesdropper was positioned on a pedestrian bridge 25 meters (82 feet) from the target. The system requires an electro-optical sensor, a telescope, and a computer running audio-processing software. The researchers developed a special algorithm that reverse-engineers audio by monitoring the way a hanging light bulb (currently the bulb must be hanging for the attack to work) moves as sound waves from speech bounce around a room.
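The core signal-processing idea can be sketched in a few lines: the bulb's steady brightness dominates the electro-optical sensor's output, while speech appears as a tiny, fast-varying modulation on top of it, so recovery amounts to stripping away the DC level and slow drift and normalizing what remains. The following is a minimal illustrative sketch, not the researchers' actual algorithm; the function name, cutoff value, and moving-average filter are all assumptions made for the example.

```python
import numpy as np

def recover_audio(optical_signal, sample_rate, cutoff_hz=80.0):
    """Hypothetical sketch: recover an audio estimate from an
    electro-optical sensor trace of a vibrating light bulb.

    The bulb's vibration modulates the measured light intensity, so we
    remove the DC component (steady brightness) and slow drift, then
    normalize the residual to audio range.
    """
    # Remove the steady brightness level (DC offset).
    centered = optical_signal - np.mean(optical_signal)

    # Crude high-pass filter: subtract a moving average that tracks
    # variation slower than roughly cutoff_hz.
    window = max(1, int(sample_rate / cutoff_hz))
    kernel = np.ones(window) / window
    drift = np.convolve(centered, kernel, mode="same")
    audio = centered - drift

    # Normalize to the [-1, 1] range expected by audio tools.
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio

# Toy demo: a 440 Hz "speech" tone riding on a bright, steady lamp signal.
rate = 16_000
t = np.arange(rate) / rate
sensor = 5.0 + 0.001 * np.sin(2 * np.pi * 440 * t)  # tiny vibration on a large DC level
recovered = recover_audio(sensor, rate)
```

In the real attack, the sensor trace would come from a telescope-mounted photodiode rather than a synthetic sine wave, and the researchers' pipeline additionally has to compensate for the bulb's physical response, which this sketch ignores.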

While the audio fidelity of the re-created sound isn’t perfect, it’s certainly good enough that it could clue an eavesdropper in on what is happening. “We were able to recover speech that was accurately transcribed by Google’s Speech to Text API,” the researchers write on an accompanying project webpage. “We were also able to recover singing that was recognized by Shazam.”

The researchers claim that the range at which sound can be recovered could be extended with the right equipment, such as a larger telescope. In the future, they plan to investigate whether sound can be recovered from additional light sources, such as decorative LED flowers.

A paper describing the work, titled “Lamphone: Real-Time Passive Sound Recovery from Light Bulb Vibrations,” is available to read online.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…