AI Teammates are coming to your workplace

A screenshot from Google I/O showing AI Teammates next to the presenter.
Google

At Google I/O 2024, Google announced a new AI feature within its Google Workspace ecosystem called AI Teammate. The idea is simple: create an AI agent and give it a job within your organization. This AI Teammate, powered by Gemini, can act within your virtual office just like any other teammate would; it can be given a name and asked questions.

As shown in the demo, you can grant it full access to a range of Google apps, Spaces, meetings, chats, and documents within your workplace, and then give it a job. In this case, the AI Teammate was given a description along with a variety of jobs and instructions, including monitoring and tracking specific projects, analyzing data, and facilitating team collaboration.

Google described it as a “collective memory of work together,” letting you easily recall things, ask it questions, or get a summary of the current state of a project. In one example, the AI Teammate even responded to a chat, pointing out conflicting decisions made in previous meetings.

A screenshot from Google I/O showing an AI Teammate side by side with the presenter.
Google

It feels like the early days here, and Google said it even plans to open up AI Teammates to third-party companies that can build these agents for specific purposes within a company or organization. It’s not hard to imagine companies doing much more with this in the future, creating more capable AI agents that can do work on their behalf.

The idea of an AI Teammate sounds innocuous enough, but in a world of growing anxiety about AI replacing jobs, having one sitting there as a virtual teammate only underscores those fears.

AI Teammate is part of the broader expansion of Gemini across Google Workspace, which also includes new features in the Gmail mobile app.

Google did not state when AI Teammates would be rolling out, only saying “stay tuned.”

Luke Larsen
Luke Larsen is the Senior editor of computing, managing all content covering laptops, monitors, PC hardware, Macs, and more.