We’re not in the business of writing regularly about “fake” news, but it’s hard not to be concerned about the kind of mimicry technology is making possible. First, researchers developed deep learning-based artificial intelligence (A.I.) that can superimpose one person’s face onto another person’s body. Now, researchers at Chinese search giant Baidu have created an A.I. they claim can learn to accurately mimic your voice — based on less than a minute’s worth of listening to it.
“From a technical perspective, this is an important breakthrough showing that a complicated generative modeling problem, namely speech synthesis, can be adapted to new cases by efficiently learning only from a few examples,” Leo Zou, a member of Baidu’s communications team, told Digital Trends. “Previously, it would take numerous examples for a model to learn. Now, it takes a fraction of what it used to.”
Baidu Research isn’t the first to try to create voice-replicating A.I. Last year, we covered a project called Lyrebird, which used neural networks to replicate voices — including those of President Donald Trump and former President Barack Obama — from a relatively small number of samples. Like Lyrebird’s work, Baidu’s speech synthesis technology doesn’t sound completely convincing, but it’s an impressive step forward — and far ahead of the robotic A.I. voice assistants that existed just a few years ago.
The work is based on Baidu’s text-to-speech synthesis system Deep Voice, which was trained on upwards of 800 hours of audio from a total of 2,400 speakers. It sounds its best when given 100 five-second samples of a new voice as training data, but a version adapted from just 10 five-second samples was still able to trick a voice-recognition system more than 95 percent of the time.
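The few-shot idea behind this kind of adaptation is that the large pretrained model stays frozen, and only a small per-speaker embedding is fitted to the handful of new samples. The sketch below is a deliberately toy illustration of that principle — a frozen linear "synthesizer" and a learned speaker vector — not Baidu's actual Deep Voice architecture; all names and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "base model": a fixed linear map from text features
# plus a speaker embedding to acoustic features. In the real system this
# would be a large pretrained neural network.
W_text = rng.normal(size=(8, 16))  # frozen text-feature weights
W_spk = rng.normal(size=(4, 16))   # frozen speaker-embedding projection

def synthesize(text_feats, spk_emb):
    """Toy synthesizer: combine text features with a speaker embedding."""
    return text_feats @ W_text + spk_emb @ W_spk

def adapt_speaker(samples, lr=0.01, steps=2000):
    """Fit ONLY a small speaker embedding to a few (text, audio) pairs via
    gradient descent on squared error, keeping the base model frozen."""
    emb = np.zeros(4)
    for _ in range(steps):
        grad = np.zeros(4)
        for text_feats, target in samples:
            err = synthesize(text_feats, emb) - target
            grad += 2 * W_spk @ err / len(samples)
        emb -= lr * grad
    return emb

# Simulate a target speaker and ten short clips of few-shot training data.
true_emb = rng.normal(size=4)
samples = [(t, synthesize(t, true_emb))
           for t in (rng.normal(size=8) for _ in range(10))]

# Adaptation recovers the speaker's embedding from the few samples alone.
emb = adapt_speaker(samples)
```

Because only the tiny embedding is trained, ten short samples are enough to pin it down — which is the intuition behind adapting a huge speech model to a new voice from under a minute of audio.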
“We see many great use cases or applications for this technology,” Zou said. “For example, voice cloning could help patients who lost their voices. This is also an important breakthrough in the direction of personalized human-machine interfaces. For example, a mom can easily configure an audiobook reader with her own voice. The method [additionally] allows creation of original digital content. Hundreds of characters in a video game would be able to have unique voices because of this technology. Another interesting application is speech-to-speech language translation, as the synthesizer can learn to mimic the speaker identity in another language.”