
These amazing audio deepfakes showcase progress of A.I. speech synthesis

Visual deepfakes, in which one person’s face is spliced onto another person’s body, are so 2019. Here in 2020, deepfake technology trends have shifted a bit, and now the cool kids are using the technology to create impressive “soundalike” audio tracks.

While these have plenty of scary potential when it comes to fake news and the like, for now, it seems that creators are perfectly happy to use them for more irreverent purposes, such as getting famous figures to perform songs they never had any real involvement with.

Here are five of the weirdest and best — including one made specifically for Digital Trends that you won’t find anywhere else.

Jay-Z raps ‘We Didn’t Start the Fire’

Jay-Z covers "We Didn't Start the Fire" by Billy Joel (Speech Synthesis)

No, this audio deepfake of Jay-Z rapping Billy Joel’s We Didn’t Start the Fire didn’t start any fires as a showcase for this vocal synthesis tech. But by triggering one of the first legal complaints over this kind of deepfake (from Jay-Z’s record label), YouTube deepfake audio creator Vocal Synthesis helped raise awareness of these tools among a lot of people.

The reproduction of Jay-Z’s voice isn’t perfect in this unofficial cover of Joel’s 1989 smash hit. But the breathy, staccato delivery associated with Jay masks some of the more awkward vocal glitches pretty well. This is a great showcase of deepfake audio in action: its strengths, its weaknesses, and its eerie ability to take a piece of text we immediately associate with one person and turn it into something that sounds convincingly like it came out of someone else’s mouth.

The Queen recites the Sex Pistols

Queen Elizabeth II reads "God Save the Queen" by Sex Pistols (Speech Synthesis)

Another Vocal Synthesis creation, Queen Elizabeth II (that’s the current queen) reading the Sex Pistols’ 1977 single God Save the Queen is the kind of brilliant meta-parody the internet does so well. The song’s title is, of course, taken from the national anthem of the same name, repurposed to fit lyrics resentful of the English class system and the idea of a monarchy. The original song was famously banned from broadcast by both the BBC and the United Kingdom’s Independent Broadcasting Authority.

The Queen Elizabeth voice synthesis on this particular creation wavers in and out, sounding more like a stitched-together tapestry of different samples than one cohesive reading. But is there anything more punk in its conception than a DIY creation that literally turns the voice of authority against itself? Brilliant stuff.

Bill Clinton ponders if ‘Baby Got Back’

Bill Clinton reads "Baby Got Back" by Sir Mix-A-Lot (Speech Synthesis)

He likes big butts and he cannot lie. There’s something of a subgenre among deepfake audio makers of getting former U.S. presidents to lend their instantly recognizable voices to an array of musical numbers.

Bill Clinton playing Sir Mix-a-Lot doesn’t do it for you? How about George W. Bush performing 50 Cent’s In Da Club? Or maybe you’d just settle for a medley of former POTUSes spitting NWA’s F*ck Tha Police? (At least the last two of these are NSFW, although in the age of working from home, such things may no longer apply!)

Frank Sinatra and Ella Fitzgerald get their ‘La La Land’ on

Jukebox AI regenerates "city of stars" using Frank Sinatra's voices and music style.

So far, all of these have concentrated on synthesizing vocals only. That’s a good start, but an artist’s voice is just one part of their repertoire. What if you could use deepfake audio technology to not just reproduce a person’s voice, but also to learn their other musical stylings and use this to dream up a whole new piece of music?

This is the basis of OpenAI’s Jukebox, a neural network that generates music — including, in OpenAI’s own words, “rudimentary singing … in a variety of genres and artist styles.” Unsurprisingly, this powerful tool is already being put to work, as evidenced by the above collaboration, in which Frank Sinatra and Ella Fitzgerald sing City of Stars from 2016’s Oscar-winning movie La La Land. The results aren’t perfect, but they definitely give a taste of where all of this is going.

Nirvana interprets ‘Clint Eastwood’

Top 4 Music Deep Fakes in the Style of Nirvana (sorta) sing Clint Eastwood by Gorillaz

In a piece created especially for Digital Trends, the folks at generative A.I. group Dadabots, CJ Carr and Zack Zukowski, whipped up an audio deepfake of legendary grunge band Nirvana riffing on Clint Eastwood, the 2001 single from the British virtual band Gorillaz.

“We used the pretrained, 5 billion-parameter Jukebox model,” Carr told Digital Trends. “It’s been trained on 7,000-plus bands, including Nirvana’s discography. We ran models on multiple Linux servers, set them to grunge and Nirvana, with the hook from Clint Eastwood as lyrics, then generated 27 different 90-second clips on our V100s, and picked our favorite top four.”
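
For the technically curious, here is a rough idea of what kicking off a run like that looks like with the open-source Jukebox release. This is a sketch rather than Dadabots’ actual pipeline: the flag names follow the sampling example in OpenAI’s jukebox repository, the values and output name below are placeholders, and the artist, genre, and lyric conditioning (grunge, Nirvana, the Clint Eastwood hook) is supplied as metadata in the sampling code itself rather than shown here.

import subprocess

# A sketch, not Dadabots' exact setup: launch conditioned sampling with
# OpenAI's open-source Jukebox release. Flag names follow the sampling
# example in the openai/jukebox README; the values below are placeholders.
cmd = [
    "python", "jukebox/sample.py",
    "--model=5b_lyrics",               # the pretrained 5-billion-parameter, lyrics-conditioned model
    "--name=nirvana_clint_eastwood",   # placeholder name for the output directory
    "--levels=3",                      # sample at the coarse top level, then upsample twice
    "--sample_length_in_seconds=90",   # roughly the clip length Carr describes
    "--total_sample_length_in_seconds=180",
    "--sr=44100",                      # output sample rate in Hz
    "--n_samples=3",                   # clips per run; repeat runs to build a pool of candidates
    "--hop_fraction=0.5,0.5,0.125",    # per-level hop fractions used for windowed sampling
]
# Requires the openai/jukebox code, its downloaded model weights, and a large GPU (e.g., a V100).
subprocess.run(cmd, check=True)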

As Carr notes, there is still a degree of human creativity involved because they need to select the best pieces. A lot of the time, Carr said, the music clips sound less like one specific band and more like a generic group in that genre. Nonetheless, it’s pretty fascinating stuff.

“Sometimes it invents its own lyrics, [such as] ‘I got sunshine in my head,’” Carr said. “Sometimes the band goes into a breakdown. It kinda has a mind of its own. The realism and room for its own creativity is astonishing. I feel like we’re just scratching the surface on how to manipulate it.”
