Stanford computer science professor James Landay said the study began as a “coffee shop conversation” between himself and Stanford adjunct professor Andrew Ng, currently chief scientist at Baidu. “Andrew said that Baidu’s speech recognition tools were getting really great, but that they didn’t know the right experiment to quantify it,” Landay told Digital Trends.
Baidu’s cloud-based Deep Speech 2 speech recognition software is built on a deep neural network — a machine learning model that trains itself by analyzing enormous datasets of real speech.
“Previously, we didn’t have the data and computational ability to build these models, so that a computer could understand different accents and patterns of speech,” Landay continued.
In the end, the casual conversation between Landay and Ng turned into a full-blown experiment involving 32 participants speaking either Chinese or English. All participants had grown up text messaging, and all used the standard keyboards that come with the iPhone.
For the English speakers this meant the regular iOS QWERTY keyboard, while the Mandarin speakers used Apple’s Pinyin keyboard. In both cases, speech recognition was around three times faster than users were able to type — while the error rate was 20.4 percent lower for the English speech recognition, and 63.4 percent lower for the Mandarin equivalent.
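Those "percent lower" figures describe relative reductions in error rate, not absolute percentage-point differences. A quick sketch makes the distinction concrete; the absolute error rates below are hypothetical, chosen only so the relative figure matches the 20.4 percent reported for English:

```python
def relative_reduction(keyboard_error, speech_error):
    """Percent by which the speech error rate undercuts the keyboard error rate."""
    return (keyboard_error - speech_error) / keyboard_error * 100

# Hypothetical illustration: if typing produced a 5.0% error rate and
# speech recognition a 3.98% rate, speech would be 20.4% lower in
# relative terms, even though the gap is only ~1 percentage point.
print(f"{relative_reduction(5.0, 3.98):.1f}% lower")
```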
“My expectation was that speech would be faster than text,” Landay said. “We know this, because you can talk faster than you can type. The problem in the past was that you got a lot of errors with speech recognition, and this slowed you down. I thought speech would prove faster. What I didn’t expect was that it would wind up being three times faster. I figured maybe we would get 50 percent faster. Instead it was much more than that.”
The test isn’t 100 percent comprehensive, of course. Currently the world’s fastest mobile keyboard (at least in English) is the third-party Fleksy keyboard. In a 2014 Guinness World Record for fastest texting, a user was able to type a 126-letter sentence in just 18.44 seconds. However, Landay noted that this study chose a regular iPhone keyboard because it gives a good indication of the typical typist. “Most people don’t take the time to learn alternative keyboards,” he said.
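For perspective on that record, a back-of-the-envelope calculation converts it into more familiar units. The five-characters-per-word convention used below is a standard typing-speed assumption, not something stated in the article:

```python
# The 2014 Guinness record cited above: a 126-letter sentence in 18.44 seconds.
letters = 126
seconds = 18.44

chars_per_second = letters / seconds          # roughly 6.8 characters per second
# Standard convention: one "word" = 5 characters, so
# WPM = chars/sec * 60 sec/min / 5 chars/word
words_per_minute = chars_per_second * 60 / 5  # roughly 82 WPM

print(f"{chars_per_second:.1f} chars/sec, {words_per_minute:.0f} WPM")
```

Even that record pace is well below ordinary conversational speaking rates, which is consistent with Landay's expectation that speech would come out ahead.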
As to what the study means, Landay suggests it represents an important benchmark for speech recognition. “There’s still room to improve, but we think some kind of inflection point has been passed,” he said. “Further improvements will come in recognizing names, performing better in noisy environments, etc.”
This, he said, frees developers to think more seriously about incorporating speech recognition into their systems with confidence. “What will increasingly make sense is relying on speech,” he said. “For example, multimodal interfaces combining speech with other elements to help people navigate. The biggest challenge, though, is going to be understanding the meaning of words and sentences. That part still has a way to go.”