Microsoft may have known about Bing Chat’s unhinged responses months ago

Microsoft’s Bing Chat AI has been off to a rocky start, but it seems Microsoft may have known about the issues well before its public debut. A support post on Microsoft’s website references “rude” responses from the “Sidney” chatbot, which is a story we’ve been hearing for the past week. Here’s the problem: the post was made on November 23, 2022.

The revelation comes from Ben Schmidt, vice president of information design at Nomic, who shared the post with Gary Marcus, an author covering AI and founder of Geometric Intelligence. The story goes that Microsoft tested Bing Chat — called Sidney, according to the post — in India and Indonesia some time between November and January before it made the official announcement.

A community post regarding Bing Chat.
Image used with permission by copyright holder

I asked Microsoft if that was the case, and it shared the following statement:

“Sydney is an old code name for a chat feature based on earlier models that we began testing more than a year ago. The insights we gathered as part of that have helped to inform our work with the new Bing preview. We continue to tune our techniques and are working on more advanced models to incorporate the learnings and feedback so that we can deliver the best user experience possible. We’ll continue to share updates on progress through our blog.”

The initial post shows the AI bot arguing with the user and settling into the same sentence forms we saw when Bing Chat said it wanted “to be human.” Further down the thread, other users chimed in with their own experiences, reposting the now-infamous smiling emoji Bing Chat follows most of its responses with.

To make matters worse, the initial poster said they asked for a way to provide feedback and report the chatbot, lending some credence to the idea that Microsoft was aware of the types of responses its AI was capable of.

That runs counter to what Microsoft said in the days following the chatbot’s blowout in the media. In an announcement covering upcoming changes to Bing Chat, Microsoft said that “social entertainment,” which is presumably a reference to the ways users have tried to trick Bing Chat into provocative responses, was a “new use case for chat.”

Microsoft has made several changes to the AI since launch, including vastly reducing conversation lengths. This is an effort to curb the types of responses we saw circulating a few days after Microsoft first announced Bing Chat. Microsoft says it’s currently working on increasing chat limits.

Although the story behind Microsoft’s testing of Bing Chat remains up in the air, it’s clear the AI had been in planning for a while. Earlier this year, Microsoft made a multibillion-dollar investment in OpenAI following the success of ChatGPT, and Bing Chat itself is built on a modified version of the company’s GPT model. In addition, Microsoft posted a blog about “responsible AI” just days before announcing Bing Chat to the world.

There are several ethics questions surrounding AI and its use in a search engine like Bing, as well as the possibility that Microsoft rushed out Bing Chat before it was ready, knowing what it was capable of. The support post in question was last updated on February 21, 2023, but the history for the initial question and the replies shows that they haven’t been revised since their original posting date.

It’s possible Microsoft decided to push ahead anyway, feeling the pressure from the upcoming Google Bard and the momentous rise in popularity of ChatGPT.

Jacob Roach
Lead Reporter, PC Hardware