
Facebook might get chatbots — and that could be a problem

Facebook owner Meta is planning to introduce chatbots with distinct personalities to its social media app. The launch could come as soon as this September and would pose a challenge to rivals like ChatGPT, but there are concerns that it could have serious implications for users’ privacy.

The report comes from the Financial Times, which says the move is an attempt to boost engagement among Facebook users. The new tool could do this by providing fresh search capabilities or recommending content, all through humanlike discussions.

The Facebook app icon on an iPhone home screen, with other app icons surrounding it.
Brett Johnson / Unsplash

According to sources cited by the Financial Times, the chatbots will take on different personas, including “one that emulates Abraham Lincoln and another that advises on travel options in the style of a surfer.”


This wouldn’t be the first time we’ve seen chatbots take on their own personalities or converse in the style of famous people. The Character.ai chatbot, for example, can adopt dozens of different personalities, including those of celebrities and historical figures.

Privacy concerns


Despite the promise Meta’s chatbots could show, fears have also been raised over the amount of data they will likely collect, especially considering Facebook’s abysmal record of protecting user privacy.

Ravit Dotan, an AI ethics adviser and researcher, was quoted by the Financial Times as saying, “Once users interact with a chatbot, it really exposes much more of their data to the company, so that the company can do anything they want with that data.”

This not only raises the prospect of far-reaching privacy breaches but allows for the possibility of “manipulation and nudging” of users, Dotan added.

A big risk

A Meta Connect 2022 screenshot showing Mark Zuckerberg’s avatar.
Meta

Other chatbots like ChatGPT and Bing Chat have a history of “hallucinations,” or moments where they share incorrect information, or even misinformation. The potential damage caused by misinformation and bias could be much higher on Facebook, which has roughly three billion users, far more than any rival chatbot reaches.

Meta’s past attempts at chatbots have fared poorly, with the company’s BlenderBot 2 and BlenderBot 3 both quickly devolving into spreading misleading content and inflammatory hate speech. That might not give users much hope for Meta’s latest effort.

With September fast approaching, we might not have long to see whether Facebook is able to surmount these hurdles, or if we will have another hallucination-riddled launch akin to those suffered elsewhere in the industry. Whatever happens, it’ll be interesting to watch.

Alex Blake
Chatbots are going to Washington with ChatGPT Gov

In an X post Monday commenting on DeepSeek's sudden success, OpenAI CEO Sam Altman promised to "pull up some releases" and it appears he has done so. OpenAI unveiled its newest product on Tuesday, a "tailored version of ChatGPT designed to provide U.S. government agencies with an additional way to access OpenAI’s frontier models," per the announcement post. ChatGPT Gov will reportedly offer even tighter data security measures than ChatGPT Enterprise, but how will it handle the hallucinations that plague the company's other models?

According to OpenAI, more than 90,000 federal, state, and local government employees across 3,500 agencies have queried ChatGPT more than 18 million times since the start of 2024. The new platform will let government agencies enter “non-public, sensitive information” into ChatGPT while it runs within their own secure hosting environments, specifically the Microsoft Azure commercial cloud or Azure Government community cloud, under cybersecurity frameworks like IL5 or CJIS. This enables each agency to “manage their own security, privacy and compliance requirements,” Felipe Millon, Government Sales lead at OpenAI, told reporters on a press call Tuesday.

OpenAI’s big, new Operator AI already has problems

OpenAI announced its AI agent tool, called Operator, as a research preview on Thursday, but the launch isn’t without its minor hiccups.

The artificial intelligence brand showcased features of the new tool in an online demo, explaining that Operator is a Computer-Using Agent (CUA) based on the GPT-4o model, which enables multimodal functions such as searching the web and reasoning over the search results.

Get ready: Google Search may bring a pure ‘AI mode’ to counter ChatGPT

It is match point for Google as the tech giant prepares to introduce a new “AI Mode” for its search engine, which will allow users to switch to an interface that resembles the Gemini AI chatbot.

According to a report from The Information, Google will add an AI Mode tab to the link options in its search results, where the “All,” “Images,” “Videos,” and “Shopping” options reside. The AI Mode would make Google search more accessible and intuitive for users, allowing them to “ask follow-up” questions pertaining to the links in the results via a chatbot text bar, the publication added.
