Google brings second-gen AI models to the Gemini mobile app

AI Model selector option in the Gemini mobile app for iPhones.
Nadeem Sarwar / Digital Trends

Earlier today, Google made a few notable AI announcements, arriving as the tech industry dissects China’s DeepSeek AI and the search giant faces antitrust scrutiny in China. The latest from Google is an experimental version of Gemini 2.0 Pro, which the company bills as its most capable model yet.

“It has the strongest coding performance and ability to handle complex prompts, with better understanding and reasoning of world knowledge, than any model we’ve released so far,” says the company. The model also raises the context window to 2 million tokens, allowing it to ingest and comprehend massive inputs with ease.
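To give a rough sense of what a 2-million-token window means in practice, here is a sketch using the google-generativeai Python SDK to measure an input before sending it. This is not an official example from Google; the model ID "gemini-2.0-pro-exp" and the file name are illustrative assumptions.

```python
# A rough sketch, assuming API access to the experimental Pro model.
# The model ID "gemini-2.0-pro-exp" and the file name are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder API key

model = genai.GenerativeModel("gemini-2.0-pro-exp")

with open("large_codebase_dump.txt", "r", encoding="utf-8") as f:
    document = f.read()

# count_tokens lets you check whether the input fits in the claimed 2-million-token window.
token_count = model.count_tokens(document).total_tokens
print(f"Input size: {token_count:,} tokens (claimed limit: 2,000,000)")

if token_count <= 2_000_000:
    response = model.generate_content(
        ["Summarize the key modules described in this document:", document]
    )
    print(response.text)
```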


On the more affordable side of things, Google is pushing the new 2.0 Flash-Lite model as a public preview. Focused on lower costs and snappier performance, it is now available in Google AI Studio and Vertex AI.
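For developers curious about the preview, a minimal sketch of calling it with an API key from Google AI Studio might look like the following; the model ID "gemini-2.0-flash-lite-preview" is an assumed placeholder rather than an identifier confirmed in this article.

```python
# A minimal sketch of a request to the Flash-Lite preview via the Python SDK.
# The model ID "gemini-2.0-flash-lite-preview" is an assumed placeholder.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder API key

flash_lite = genai.GenerativeModel("gemini-2.0-flash-lite-preview")

# A short, low-cost request suited to a lightweight model.
response = flash_lite.generate_content(
    "In one sentence, what is a context window in a large language model?"
)
print(response.text)
```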

What’s new in the Gemini mobile app?

AI models available in the Gemini mobile app for Android phones.
Nadeem Sarwar / Digital Trends

For smartphone users, the Gemini app is now getting access to these AI upgrades. Starting today, the mobile application will let users pick between the new Gemini 2.0 Flash Thinking Experimental and Gemini 2.0 Pro Experimental models.

Currently ranked No. 1 on the Chatbot Arena LLM leaderboard, ahead of OpenAI’s GPT-4o and DeepSeek R1, the Gemini 2.0 Flash Thinking Experimental model is a massive leap forward for a couple of reasons.

First, it can work with data pulled from apps such as YouTube, Google Maps, and Search. Based on your queries, this Gemini model can cross-check information from within those platforms and offer relevant answers.


Second, it comes with thinking and reasoning capabilities. Put simply, you can watch in real time as the model breaks down your prompt and assembles the information into a cohesive response.

The result, as Google puts it, is improved explainability, speed, and performance. The model accepts text and image input, supports a context window of up to one million tokens, and has a knowledge cutoff of June 2024.
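For readers who want to try the text-and-image input described above outside the app, here is a minimal sketch using Google's developer SDK in Python. The model ID "gemini-2.0-flash-thinking-exp" and the image file are illustrative assumptions, not details from this announcement.

```python
# A minimal sketch of a text-plus-image request via the Python SDK.
# The model ID "gemini-2.0-flash-thinking-exp" and the image path are assumptions.
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder API key

model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

photo = PIL.Image.open("trail_map.jpg")  # any local image
response = model.generate_content(
    [photo, "Describe this map and suggest a half-day hiking route."]
)
print(response.text)
```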

Next, we have the Gemini 2.0 Pro Experimental model, which is now available to those who pay for a Gemini Advanced subscription. Google says this one is “exceptional at complex tasks,” particularly math problem-solving and coding.

This multimodal AI model can also pull relevant data from Google Search and combine it with a stronger grasp of world knowledge to handle more demanding tasks. You can access the new Gemini 2.0 series models in the mobile app as well as the web dashboard.
