
Microsoft may have known about Bing Chat’s unhinged responses months ago

Microsoft’s Bing Chat AI has been off to a rocky start, but it seems Microsoft may have known about the issues well before its public debut. A support post on Microsoft’s website references “rude” responses from the “Sidney” chat bot, which is a story we’ve been hearing for the past week. Here’s the problem — the post was made on November 23, 2022.

The revelation comes from Ben Schmidt, vice president of information design at Nomic, who shared the post with Gary Marcus, an author covering AI and founder of Geometric Intelligence. The story goes that Microsoft tested Bing Chat — called Sidney, according to the post — in India and Indonesia sometime between November and January before it made the official announcement.

A community post regarding Bing Chat.

I asked Microsoft if that was the case, and it shared the following statement:

“Sydney is an old code name for a chat feature based on earlier models that we began testing more than a year ago. The insights we gathered as part of that have helped to inform our work with the new Bing preview. We continue to tune our techniques and are working on more advanced models to incorporate the learnings and feedback so that we can deliver the best user experience possible. We’ll continue to share updates on progress through our blog.”

The initial post shows the AI bot arguing with the user and settling into the same sentence forms we saw when Bing Chat said it wanted “to be human.” Further down the thread, other users chimed in with their own experiences, reposting the now-infamous smiling emoji Bing Chat follows most of its responses with.

To make matters worse, the initial poster said they asked how to provide feedback and report the chatbot, lending some credence to the idea that Microsoft was aware of the types of responses its AI was capable of.

That runs counter to what Microsoft said in the days following the chatbot’s blowout in the media. In an announcement covering upcoming changes to Bing Chat, Microsoft said that “social entertainment,” presumably a reference to the ways users have tried to trick Bing Chat into provocative responses, was a “new use case for chat.”

Microsoft has made several changes to the AI since launch, including vastly reducing conversation lengths. This is an effort to curb the types of responses we saw circulating a few days after Microsoft first announced Bing Chat. Microsoft says it’s currently working on increasing chat limits.

Although the story behind Microsoft’s testing of Bing Chat remains up in the air, it’s clear the AI had been in the works for a while. Earlier this year, Microsoft made a multibillion-dollar investment in OpenAI following the success of ChatGPT, and Bing Chat itself is built on a modified version of the company’s GPT model. In addition, Microsoft posted a blog about “responsible AI” just days before announcing Bing Chat to the world.

There are several ethics questions surrounding AI and its use in a search engine like Bing, as well as the possibility that Microsoft rushed out Bing Chat before it was ready, despite knowing what it was capable of. The support post in question was last updated on February 21, 2023, but the history for the initial question and the replies shows that they haven’t been revised since their original posting date.

It’s possible Microsoft decided to push ahead anyway, feeling the pressure from the upcoming Google Bard and the momentous rise in popularity of ChatGPT.

Jacob Roach
Lead Reporter, PC Hardware