
Google Gemini can now tap into your search history

Google Gemini app on Android.
Nadeem Sarwar / Digital Trends

Google has announced a wide range of upgrades for its Gemini assistant today. To start, the new Gemini 2.0 Flash Thinking Experimental model now accepts file uploads as input and has received a speed boost.

The more notable update, however, is a new opt-in feature called Personalization. In a nutshell, when you ask Gemini a question, it takes a peek at your Google Search history and offers a tailored response.


Down the road, Personalization will expand beyond Search. Google says Gemini will also tap into other ecosystem apps such as Photos and YouTube to offer more personalized responses. It’s somewhat like Apple’s delayed AI features for Siri, which even prompted the company to pull its ads.

Search history drives Gemini’s answers

Gemini personalization feature.
Google

Starting with the Google Search integration, if you ask the AI assistant for nearby cafe recommendations, it will check whether you have previously searched for something similar. If so, Gemini will try to include those details (and the names you came across) in its response.

“This will enable Gemini to provide more personalized insights, drawing from a broader understanding of your activities and preferences to deliver responses that truly resonate with you,” says Google in a blog post.

Giving Search history access to Gemini.
Google

The new Personalization feature is tied to the Gemini 2.0 Flash Thinking Experimental model and will be available both to free users and to paid Gemini Advanced subscribers. Rollout begins today, starting with the web version, and will soon reach the mobile client, too.

Google says Personalization currently supports more than 40 languages and will be expanded to users across the globe. The feature may sound like a privacy concern, but it's an opt-in facility with the following guardrails:

Warning banner in Gemini.
Google
  1. It will only work when users have connected Gemini with their Search history, enabled Personalization, and activated the Web & App Activity system.
  2. When Personalization is active in Gemini, a banner in the chat window will let users quickly disconnect their Search history.
  3. It will explicitly disclose which user data (such as saved info, previous chats, or Search history) Gemini is currently using.

To make the responses even more relevant, users can tell Gemini to reference their past chats, as well. This feature has been exclusive to Advanced subscribers so far, but it will be extended to free users worldwide in the coming weeks.

Integrating Gemini within more apps

Apps that work across Gemini.
Nadeem Sarwar / Digital Trends

Gemini has the ability to interact with other applications — Google’s as well as third-party — using an “apps” system, previously known as extensions. It’s a neat convenience, as it allows users to get work done across different apps without even launching them.

Google is now bringing access to these apps within the Gemini 2.0 Flash Thinking Experimental model. Moreover, the pool of apps is being expanded to include Google Photos and Notes. Gemini already has access to YouTube, Maps, Google Flights, Google Hotels, Keep, Drive, Docs, Calendar, and Gmail.

Users can also enable the apps system for third-party services such as WhatsApp and Spotify by linking them with their Google account. Aside from pulling information and getting tasks done across different apps, it also lets users execute multi-step workflows.

For example, with a single voice command, users can ask Gemini to look up a recipe on YouTube, add the ingredients to their notes, and find a nearby grocery store. In a few weeks, Google Photos will also be added to the list of apps that Gemini can access.

Multi-app workflow in Gemini.
Screenshot: Google

“With this thinking model, Gemini can better tackle complex requests like prompts that involve multiple apps, because the new model can better reason over the overall request, break it down into distinct steps, and assess its own progress as it goes,” explains Google.

Moreover, Google is also expanding the context window limit to 1 million tokens for the Gemini 2.0 Flash Thinking Experimental model. AI tools such as Gemini break down words into tokens, with an average English language word translating to roughly 1.3 tokens.

The larger the context window, the more input the model can accept at once. With the increased context window, Gemini 2.0 Flash Thinking Experimental can now process much bigger chunks of information and solve more complex problems.
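To put that figure in perspective, here is a minimal back-of-the-envelope sketch of the arithmetic described above. It assumes the rough average of 1.3 tokens per English word mentioned earlier; real tokenizers vary by text and model, so treat these numbers as estimates only.

```python
def estimate_tokens(text: str, tokens_per_word: float = 1.3) -> int:
    """Rough token estimate: an average English word maps to ~1.3 tokens."""
    return round(len(text.split()) * tokens_per_word)

# Roughly how many English words a 1,000,000-token context window can hold:
WINDOW_TOKENS = 1_000_000
words_in_window = round(WINDOW_TOKENS / 1.3)  # roughly 769,000 words
```

By this estimate, a 1-million-token window corresponds to several full-length novels' worth of text in a single prompt.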

Nadeem Sarwar