
Microsoft may have known about Bing Chat’s unhinged responses months ago

Microsoft’s Bing Chat AI has been off to a rocky start, but it seems Microsoft may have known about the issues well before its public debut. A support post on Microsoft’s website references “rude” responses from the “Sidney” chatbot, a story we’ve been hearing for the past week. Here’s the problem: the post was made on November 23, 2022.

The revelation comes from Ben Schmidt, vice president of information design at Nomic, who shared the post with Gary Marcus, an author covering AI and founder of Geometric Intelligence. The story goes that Microsoft tested Bing Chat — called Sidney, according to the post — in India and Indonesia some time between November and January before it made the official announcement.

A community post regarding Bing Chat.

I asked Microsoft if that was the case, and it shared the following statement:

“Sydney is an old code name for a chat feature based on earlier models that we began testing more than a year ago. The insights we gathered as part of that have helped to inform our work with the new Bing preview. We continue to tune our techniques and are working on more advanced models to incorporate the learnings and feedback so that we can deliver the best user experience possible. We’ll continue to share updates on progress through our blog.”

The initial post shows the AI bot arguing with the user and settling into the same sentence forms we saw when Bing Chat said it wanted “to be human.” Further down the thread, other users chimed in with their own experiences, reposting the now-infamous smiling emoji Bing Chat follows most of its responses with.

To make matters worse, the initial poster said they asked to provide feedback and report the chatbot, lending some credence to the idea that Microsoft was aware of the types of responses its AI was capable of.

That runs counter to what Microsoft said in the days following the chatbot’s blowout in the media. In an announcement covering upcoming changes to Bing Chat, Microsoft said that “social entertainment,” presumably a reference to the ways users have tried to trick Bing Chat into provocative responses, was a “new use case for chat.”

Microsoft has made several changes to the AI since launch, including vastly reducing conversation lengths. This is an effort to curb the types of responses we saw circulating a few days after Microsoft first announced Bing Chat. Microsoft says it’s currently working on increasing chat limits.

Although the story behind Microsoft’s testing of Bing Chat remains up in the air, it’s clear the AI had been in the works for a while. Earlier this year, Microsoft made a multibillion-dollar investment in OpenAI following the success of ChatGPT, and Bing Chat itself is built on a modified version of the company’s GPT model. In addition, Microsoft posted a blog about “responsible AI” just days before announcing Bing Chat to the world.

There are several ethical questions surrounding AI and its use in a search engine like Bing, as well as the possibility that Microsoft rushed out Bing Chat before it was ready, knowing what it was capable of. The support post in question was last updated on February 21, 2023, but the history for the initial question and the replies shows that they haven’t been revised since their original posting date.

It’s possible Microsoft decided to push ahead anyway, feeling the pressure from the upcoming Google Bard and the momentous rise in popularity of ChatGPT.

Jacob Roach
Senior Staff Writer, Computing