
Microsoft may have ignored warnings about Bing Chat’s unhinged responses

Microsoft’s Bing Chat is in a much better place than it was at its February launch, but it’s hard to overlook the problems the GPT-4-powered chatbot had in those early days. It told us it wanted to be human, after all, and often broke down into unhinged responses. And according to a new report, Microsoft was warned about these types of responses and decided to release Bing Chat anyway.

According to the Wall Street Journal, OpenAI, the company behind ChatGPT and the GPT-4 model powering Bing Chat, warned Microsoft about integrating its early AI model into Bing Chat. Specifically, OpenAI flagged the risk of “inaccurate or bizarre” responses, a warning Microsoft seems to have ignored.

Image: Bing Chat saying it wants to be human. Jacob Roach / Digital Trends

The report describes a unique tension between OpenAI and Microsoft, two companies that have forged an increasingly close partnership over the last few years. OpenAI’s models are built on Microsoft hardware (including thousands of Nvidia GPUs), and Microsoft leverages the company’s tech across Bing, Microsoft Office, and Windows itself. In early 2023, Microsoft even invested $10 billion in OpenAI, stopping just short of purchasing the company outright.

Despite this, the report alleges that Microsoft employees have chafed at restricted access to OpenAI’s models, and that they were worried about ChatGPT overshadowing the AI-powered Bing Chat. To make matters worse, the Wall Street Journal reports that OpenAI and Microsoft both sell access to OpenAI’s technology, leading to situations where prospective customers are dealing with contacts at both companies.

The biggest issue, according to the report, is that Microsoft and OpenAI are trying to make money with similar products. With Microsoft backing, but not controlling, OpenAI, the ChatGPT developer is free to form partnerships with other companies, some of which can compete directly with Microsoft’s products.

Based on what we’ve seen, OpenAI’s reported warnings held water. Shortly after releasing Bing Chat, Microsoft limited the number of responses users could receive in a single session, and it has since slowly lifted that restriction as the GPT-4 model in Bing Chat has been refined. Reports suggest some Microsoft employees still reference “Sydney,” poking fun at the early days of Bing Chat (code-named Sydney) and its unhinged responses.

Editors' Recommendations

Jacob Roach
Lead Reporter, PC Hardware
Jacob Roach is the lead reporter for PC hardware at Digital Trends. In addition to covering the latest PC components, from…
GPTZero: how to use the ChatGPT detection tool
Image: A Midjourney rendering of a student and his robot friend in front of a blackboard.

In the short time it’s been available, ChatGPT has made a massive impact on the way people think about writing and coding. Being able to plug in a prompt and get back a stream of almost-good-enough text is a tempting proposition for anyone who isn’t confident in their writing skills or is looking to save time. But this ability comes with a significant downside, particularly in education, where students are tempted to use ChatGPT for their own papers or exams. That prevents them from learning as much as they could, and it has given teachers a whole new headache: detecting AI use.

Teachers and other users are now looking for ways to detect the use of ChatGPT in students' work, and many are turning to tools like GPTZero, a ChatGPT detection tool built by Princeton University student Edward Tian. The software is available to everyone, so if you want to try it out and see the chances that a particular piece of text was written using ChatGPT, here's how you can do that.
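
If you’d rather script the check than paste text into GPTZero’s web interface, the service also offers an API. Below is a minimal Python sketch, not an official example: the endpoint URL, the x-api-key header, and the response field names are assumptions drawn from GPTZero’s public documentation, so double-check them against the current docs before relying on this.

```python
# Minimal sketch: asking a ChatGPT-detection service (GPTZero) to score text.
# The endpoint, auth header, and response schema are assumptions -- verify
# against GPTZero's current API documentation before relying on them.
import requests

API_URL = "https://api.gptzero.me/v2/predict/text"  # assumed endpoint
API_KEY = "your-gptzero-api-key"  # hypothetical placeholder


def ai_probability(text: str) -> float:
    """Return the service's estimated probability that `text` was AI-generated."""
    response = requests.post(
        API_URL,
        headers={"x-api-key": API_KEY},
        json={"document": text},
        timeout=30,
    )
    response.raise_for_status()
    data = response.json()
    # The field names below are assumptions about the response schema.
    return data["documents"][0]["completely_generated_prob"]


if __name__ == "__main__":
    sample = "The quick brown fox jumps over the lazy dog."
    print(f"Estimated probability of AI generation: {ai_probability(sample):.2f}")
```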
What is GPTZero?

Read more
Is ChatGPT safe? Here are the risks to consider before using it
Image: A response from ChatGPT on an Android phone.

If you’ve seen ChatGPT in action, you know just how amazing this generative AI tool can be. And if you haven’t seen ChatGPT do its thing, prepare to have your mind blown!

There’s no doubting the power and performance of OpenAI’s famous chatbot, but is ChatGPT actually safe to use? While tech leaders the world over are concerned about the rapid development of AI, those global concerns don’t necessarily translate to an individual user’s experience. With that in mind, let’s take a closer look at ChatGPT to help you home in on your comfort level.
Privacy and financial leaks
In at least one instance, chat histories got mixed up between users. On March 20, 2023, ChatGPT creator OpenAI discovered a problem, and ChatGPT was down for several hours. Around that time, a few ChatGPT users saw other people’s conversation histories instead of their own. Possibly more concerning was the news that payment-related information from ChatGPT Plus subscribers might have leaked as well.

Read more
What is ChatGPT Plus? Here’s what to know before you subscribe
Image: A close-up of the ChatGPT and OpenAI logos.

ChatGPT is completely free to use, but that doesn't mean OpenAI isn't also interested in making some money.

ChatGPT Plus is a subscription that gives you access to a more capable service based on the GPT-4 model, along with faster speeds, more reliability, and first access to new features. Beyond that, it also opens up the ability to use ChatGPT plug-ins, create custom chatbots, generate images with DALL-E 3, and much more.
What is ChatGPT Plus?
Like the standard version of ChatGPT, ChatGPT Plus is an AI chatbot; it offers a highly accurate machine-learning assistant that’s able to carry out natural-language “chats.” It’s the latest version of the chatbot currently available.
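
For readers curious how developers tap these models, note that ChatGPT Plus itself is a consumer subscription to the chat app; programmatic access to GPT-4-class models goes through OpenAI’s separately billed API. Here is a minimal sketch using OpenAI’s official Python SDK; the model identifier is an assumption, so substitute whichever GPT-4-tier model your account can access.

```python
# Minimal sketch: chatting with a GPT-4-class model via OpenAI's Python SDK.
# ChatGPT Plus is the consumer app subscription; this API access is billed
# separately. The model name is an assumption -- use one your account offers.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In one sentence, what does ChatGPT Plus offer?"},
    ],
)

print(response.choices[0].message.content)
```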

Read more