
Google tells workers to be wary of AI chatbots

Alphabet has told its employees not to enter confidential information into Bard, the generative AI chatbot created and operated by Google, which Alphabet owns.

The company’s warning also extends to other chatbots, such as ChatGPT from Microsoft-backed OpenAI, Reuters reported on Thursday.

The AI-powered chatbots have generated huge interest in recent months due to their impressive ability to converse in a human-like way, write essays and reports, and even succeed in academic tests.

But Alphabet has concerns about its workers inadvertently leaking internal data via the tools.

As part of ongoing work to refine and improve the AI technology, human reviewers may read the conversations that users have with the chatbots. That poses a risk to personal privacy as well as the potential exposure of trade secrets, the latter of which appears to be Alphabet’s particular concern.

In addition, the chatbots are partly trained on users’ text exchanges, so with certain prompts, a tool could potentially repeat to members of the public confidential information that it received in those conversations.

Like ChatGPT, Bard is now freely available for anyone to try. On its webpage, it warns users: “Please do not include information that can be used to identify you or others in your Bard conversations.”

It adds that Google collects “Bard conversations, related product usage information, info about your location, and your feedback,” and uses the data to improve Google products and services that include Bard.

Google says it stores Bard activity for up to 18 months by default, though a user can change this to 3 or 36 months in their Google account.

It adds that as a privacy measure, Bard conversations are disconnected from a Google account before a human reviewer sees them.

Reuters said that while Alphabet’s warning has been in place for a while, it recently expanded it, telling its workers to avoid using precise computer code generated by chatbots. The company told the news outlet that Bard can sometimes make “undesired code suggestions,” though the current iteration of the tool is still considered to be a viable programming aid.

Alphabet isn’t the only company to warn its employees about the privacy and security risks linked to using the chatbots. Samsung recently issued a similar instruction to its workers after a number of them fed sensitive semiconductor-related data into ChatGPT, and Apple and Amazon, among others, have reportedly also enacted a similar internal policy.
