Google tells workers to be wary of AI chatbots

Alphabet has told its employees not to enter confidential information into Bard, the generative AI chatbot created and operated by Google, which Alphabet owns.

The company’s warning also extends to other chatbots, such as Microsoft-backed ChatGPT from OpenAI, Reuters reported on Thursday.

The AI-powered chatbots have generated huge interest in recent months due to their impressive ability to converse in a human-like way, write essays and reports, and even succeed in academic tests.

But Alphabet has concerns about its workers inadvertently leaking internal data via the tools.

As part of ongoing work to refine and improve the technology, human reviewers may read the conversations that users have with the chatbots. This poses a risk both to personal privacy and to the exposure of trade secrets, with the latter appearing to be Alphabet's particular concern.

In addition, the chatbots are partly trained on users' text exchanges, so with certain prompts a tool could potentially repeat confidential information from those conversations to members of the public.

Like ChatGPT, Bard is now freely available for anyone to try. On its webpage, it warns users: “Please do not include information that can be used to identify you or others in your Bard conversations.”

It adds that Google collects “Bard conversations, related product usage information, info about your location, and your feedback,” and uses the data to improve Google products and services that include Bard.

Google says it stores Bard activity for up to 18 months by default, though users can change this to three months or 36 months in their Google account settings.

It adds that as a privacy measure, Bard conversations are disconnected from a Google account before a human reviewer sees them.

Reuters said that while Alphabet’s warning has been in place for a while, it recently expanded it, telling its workers to avoid using precise computer code generated by chatbots. The company told the news outlet that Bard can sometimes make “undesired code suggestions,” though the current iteration of the tool is still considered to be a viable programming aid.

Alphabet isn’t the only company to warn its employees about the privacy and security risks linked to using the chatbots. Samsung recently issued a similar instruction to its workers after a number of them fed sensitive semiconductor-related data into ChatGPT, and Apple and Amazon, among others, have reportedly also enacted a similar internal policy.
