Google’s AI just got ears

AI chatbots are already capable of “seeing” the world through images and video. Now, Google has announced speech-to-text functionality as part of its latest update to Gemini Pro. With Gemini 1.5 Pro, the chatbot can “hear” audio files uploaded into its system and extract text information from them.

The company has made this version of the LLM available as a public preview on its Vertex AI development platform, opening the feature to a wider, enterprise-focused audience. When the model was first announced in February, it was offered only to a limited group of developers and enterprise customers.

1. Breaking down + understanding a long video

I uploaded the entire NBA dunk contest from last night and asked which dunk had the highest score.

Gemini 1.5 was incredibly able to find the specific perfect 50 dunk and details from just its long context video understanding! pic.twitter.com/01iUfqfiAO

— Rowan Cheung (@rowancheung) February 18, 2024

Google shared details of the update at its Cloud Next conference, which is currently taking place in Las Vegas. Having previously called Gemini Ultra, the LLM behind its Gemini Advanced chatbot, the most powerful model in the Gemini family, Google now describes Gemini 1.5 Pro as its most capable generative model. The company added that this version is better at learning new tasks without additional tuning of the model.

Gemini 1.5 Pro is multimodal, meaning it can turn different kinds of audio into text, including TV shows, movies, radio broadcasts, and conference call recordings. It is also multilingual, able to process audio in several different languages. The LLM may be able to generate transcripts from videos as well, though their quality can be unreliable, as TechCrunch noted.
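For developers trying the public preview, a request of this kind might look something like the sketch below. It is a minimal example assuming the Vertex AI Python SDK's generative models interface; the project ID, Cloud Storage path, and exact preview model name are placeholders, not values confirmed by Google.

```python
# Minimal sketch: asking Gemini 1.5 Pro on Vertex AI to transcribe an audio file.
# Assumes the google-cloud-aiplatform package; the project ID, bucket path, and
# model name below are placeholders, not values confirmed by the article.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel("gemini-1.5-pro-preview-0409")  # preview model name may differ

# The audio is referenced from a Cloud Storage bucket rather than uploaded inline.
audio = Part.from_uri("gs://your-bucket/conference-call.mp3", mime_type="audio/mpeg")

response = model.generate_content(
    [audio, "Transcribe this recording and summarize the key points."]
)
print(response.text)
```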

When the model was first announced, Google explained that Gemini 1.5 Pro uses a token system to process raw data. A million tokens equate to approximately 700,000 words or 30,000 lines of code; in media terms, that is about an hour of video or around 11 hours of audio.
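As a rough back-of-the-envelope illustration of those figures, the short sketch below converts an audio length into an estimated token count; the ratios are the approximations quoted above, not an official tokenizer.

```python
# Rough token estimate based on the approximate ratios cited above:
# 1,000,000 tokens ~ 700,000 words ~ 30,000 lines of code ~ 1 hour of video ~ 11 hours of audio.
TOKENS_PER_HOUR_OF_AUDIO = 1_000_000 / 11  # roughly 91,000 tokens per hour

def estimate_audio_tokens(hours: float) -> int:
    """Estimate how many tokens an audio recording of the given length would consume."""
    return round(hours * TOKENS_PER_HOUR_OF_AUDIO)

# A two-hour conference call comes out to roughly 182,000 tokens,
# comfortably inside a one-million-token context window.
print(estimate_audio_tokens(2))
```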

Private preview demos of Gemini 1.5 Pro have already shown how the LLM can find specific moments in a long video. For example, AI enthusiast Rowan Cheung got early access and detailed how his demo found the exact highest-scoring dunk in a sports contest and summarized the event, as seen in the tweet embedded above.

However, Google noted that other early adopters, including United Wholesale Mortgage, TBS, and Replit, are opting for more enterprise-focused use cases, such as mortgage underwriting, automating metadata tagging, and generating, explaining, and updating code.
