Google’s new Gemini 2.0 AI model is about to be everywhere

Less than a year after debuting Gemini 1.5, Google’s DeepMind division was back Wednesday to reveal its next-generation AI model, Gemini 2.0. The new model offers native image and audio output, and “will enable us to build new AI agents that bring us closer to our vision of a universal assistant,” the company wrote in its announcement blog post.

As of Wednesday, Gemini 2.0 is available at all subscription tiers, including free. As Google’s new flagship AI model, it will begin powering AI features across the company’s ecosystem in the coming months. As with OpenAI’s o1 model, the initial release of Gemini 2.0 is not the company’s full-fledged version, but rather a smaller, less capable “experimental preview” iteration that will be upgraded in Google Gemini over time.

“Effectively,” Google DeepMind CEO Demis Hassabis told The Verge, “it’s as good as the current Pro model is. So you can think of it as one whole tier better, for the same cost efficiency and performance efficiency and speed. We’re really happy with that.”

Google is also releasing a lightweight version of the model, dubbed Gemini 2.0 Flash, for developers.

With the release of a more capable Gemini model, Google advances its AI agent agenda, which would see smaller, purpose-built models taking autonomous action on the user’s behalf. Gemini 2.0 is expected to significantly boost Google’s efforts to roll out Project Astra, which combines Gemini Live’s conversational abilities with real-time video and image analysis to give users information about their surrounding environment through a smart glasses interface.

Google also announced on Wednesday the release of Project Mariner, the company’s answer to Anthropic’s Computer Use feature. This Chrome extension is capable of commanding a desktop computer, including keystrokes and mouse clicks, in the same way human users do. The company is also rolling out an AI coding assistant called Jules that can help developers find and improve clunky code, as well as a “Deep Research” feature that can generate detailed reports on the subjects you have it search the internet for.

Deep Research, which appears to serve the same function as Perplexity AI and ChatGPT Search, is currently available to English-language Gemini Advanced subscribers. The system works by first generating a “multi-step research plan,” which it submits to the user for approval before implementing.

Once you sign off on the plan, the research agent will conduct a search on the given subject and then hop down any relevant rabbit holes it finds. Once it’s done searching, the AI will produce a report on what it found, including key findings and citation links to where it found its information. You can select Deep Research from the chatbot’s drop-down model selection menu at the top of the Gemini home page.

Andrew Tarantola
Google Gemini arrives on iPhone as a native app
Google announced Thursday that it has released a new native Gemini app for iOS that will give iPhone users free, direct access to the chatbot without the need for a mobile web browser.

The Gemini mobile app has been available for Android since February, when the platform transitioned from the older Bard branding. However, iOS users could only access the AI on their phones through either the mobile Google app or via a web browser. This new app provides a more streamlined means of chatting with the bot as well as a host of new (to iOS) features.

Google expands AI Overviews to over 100 more countries
Google's AI Overview is coming to a search results page near you, whether you want it to or not. The company announced on Monday that it is expanding the AI feature to more than 100 countries around the world.

Google debuted AI Overview, which uses generative AI to summarize the key points of your search topic and display that information at the top of the results page, to mixed reviews in May before subsequently expanding the program in August. Monday's rollout makes the feature available in seven languages (English, Hindi, Indonesian, Japanese, Korean, Portuguese, and Spanish) to users in more than 100 nations (you can find a full list of covered countries here).

Google’s AI detection tool is now available for anyone to try
Google announced via a post on X (formerly Twitter) on Wednesday that SynthID is now available to anybody who wants to try it. The authentication system for AI-generated content embeds imperceptible watermarks into generated images, video, and text, enabling users to verify whether a piece of content was made by humans or machines.

“We’re open-sourcing our SynthID Text watermarking tool,” the company wrote. “Available freely to developers and businesses, it will help them identify their AI-generated content.”
