Translate One2One is an earpiece that can translate languages in seconds

Imagine if understanding a foreign language you had never studied were as easy as simply listening. Australian startup Lingmo International has leveraged IBM Watson technology for its Translate One2One, an earpiece that can translate languages in near real time.

Using IBM Watson's natural language processing and language translation APIs alongside Lingmo's own machine learning applications, the device can translate between English, Japanese, French, Italian, Spanish, Brazilian Portuguese, German, and Chinese. A built-in microphone picks up spoken words, letting the earpiece translate speech in seconds. A companion iOS app offers speech-to-text and text-to-speech translation for even more languages.
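
Lingmo hasn't published its integration code, but the Watson APIs it builds on are publicly available. The sketch below shows roughly what a speech-to-speech translation pipeline looks like using IBM's current ibm-watson Python SDK; the API key, audio format, translation model, and voice are illustrative placeholders, not Lingmo's actual configuration.

```python
# Rough sketch of a Watson-powered speech-to-speech translation pipeline.
# Assumes the ibm-watson Python SDK (pip install ibm-watson); the API key,
# model ID, and voice below are placeholders, not Lingmo's real setup.
from ibm_watson import SpeechToTextV1, LanguageTranslatorV3, TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_IBM_CLOUD_API_KEY")  # placeholder credential
stt = SpeechToTextV1(authenticator=authenticator)
translator = LanguageTranslatorV3(version="2018-05-01", authenticator=authenticator)
tts = TextToSpeechV1(authenticator=authenticator)
# Instances outside the default region also need service.set_service_url(...).

def translate_speech(audio_path: str, model_id: str = "en-es") -> bytes:
    """Turn spoken English into spoken Spanish via Watson's cloud APIs."""
    # 1. Speech-to-text: transcribe the recorded utterance.
    with open(audio_path, "rb") as audio_file:
        recognized = stt.recognize(
            audio=audio_file, content_type="audio/wav"
        ).get_result()
    text = " ".join(
        r["alternatives"][0]["transcript"] for r in recognized["results"]
    )

    # 2. Translation: Watson's Language Translator (e.g., English -> Spanish).
    translated = translator.translate(text=text, model_id=model_id).get_result()
    spanish = translated["translations"][0]["translation"]

    # 3. Text-to-speech: synthesize the translated sentence as WAV audio.
    return tts.synthesize(
        spanish, voice="es-ES_LauraV3Voice", accept="audio/wav"
    ).get_result().content
```

The hard part Lingmo claims to solve is doing all of this without a network round trip; that on-device capability is where its own machine learning layer, rather than the cloud calls sketched above, comes in.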

If you find yourself in foreign lands beyond the reach of Bluetooth and Wi-Fi, Translate One2One still works just as well. “As the first device on the market for language translation using AI that does not rely on connectivity to operate, it offers significant potential for its unique application across airlines, foreign government relations and even not-for-profits working in remote areas,” said Danny May, Lingmo’s founder, in a press release.

Translate One2One is not the first to the party. Last year, tech startup Waverly Labs released its Pilot earpiece, which also translates multiple languages on the go. Like Translate One2One, Pilot can translate without an internet connection by downloading language packs through a companion app. Unlike Translate One2One, however, Pilot must be paired with a smartphone to work, even offline.

The Translate One2One earpiece was unveiled earlier this month at the United Nations Artificial Intelligence (AI) for Good Summit in Geneva, Switzerland. It is available to purchase today for $179 and will ship by July; orders can be placed on Lingmo’s official website. A travel pack with two earpieces is also available for $229, so you and someone who does not fully understand your language can have a fairly seamless conversation without having to yell “What do you mean?” at each other.

Keith Nelson Jr.
Former Staff Writer, Entertainment