
ChatGPT Bing is becoming an unhinged AI nightmare

Hype around Microsoft’s ChatGPT-powered Bing is at a fever pitch right now, but you might want to hold off on your excitement. The first public debut has shown responses that are inaccurate, incomprehensible, and sometimes downright scary.

Microsoft sent out the first wave of ChatGPT Bing invites on Monday, following a weekend where more than a million people signed up for the waitlist. It didn’t take long for insane responses to start flooding in.

ChatGPT giving an insane response.
u/Alfred_Chicken

You can see a response from u/Alfred_Chicken above that was posted to the Bing subreddit. Asked if the AI chatbot was sentient, it starts out with an unsettling response before devolving into a barrage of “I am not” messages.


That’s not the only example, either. u/Curious_Evolver got into an argument with the chatbot over what year it is, with Bing insisting it was 2022. It’s a silly mistake for the AI, but it’s not the slipup itself that’s frightening. It’s how Bing responds.


The AI claims that the user has “been wrong, confused, and rude,” and they have “not shown me any good intention towards me at any time.” The exchange climaxes with the chatbot claiming it has “been a good Bing,” and asking for the user to admit they’re wrong and apologize, stop arguing, or end the conversation and “start a new one with a better attitude.”

User u/yaosio said they put Bing in a depressive state after the AI couldn’t recall a previous conversation. The chatbot said it “makes me feel sad and scared,” and asked the user to help it remember.

These aren’t just isolated incidents from Reddit, either. AI researcher Dmitri Brereton showed several examples of the chatbot getting information wrong, sometimes to hilarious effect and other times with potentially dangerous consequences.

The chatbot dreamed up fake financial numbers when asked about Gap’s financial performance, created a fictitious 2023 Super Bowl in which the Eagles defeated the Chiefs before the game was even played, and even gave descriptions of deadly mushrooms when asked what an edible mushroom would look like.

Bing copilot AI chat interface.
Andrew Martonik / Digital Trends

Google’s rival Bard AI also had slipups in its first public demo. Ironically enough, Bing knew about this fact but got the details of Bard’s mistake wrong, claiming that Bard inaccurately said Croatia is part of the European Union (Croatia is in the EU; Bard’s actual error concerned the James Webb Space Telescope).

We saw some of these mistakes in our hands-on demo with ChatGPT Bing, but nothing on the scale of the user reports we’re now seeing. It’s no secret that ChatGPT can screw up responses, but it’s becoming clear that the version that recently debuted in Bing might not be ready for prime time.

The responses shouldn’t come up in normal use. They likely result from users “jailbreaking” the AI by supplying it with specific prompts in an attempt to bypass the rules it has in place. As reported by Ars Technica, a few exploits have already been discovered that skirt the safeguards of ChatGPT Bing. This isn’t new for the chatbot, with several examples of users bypassing protections of the online version of ChatGPT.

We’ve had a chance to test out some of these responses, as well. Although we never saw anything quite like users reported on Reddit, Bing did eventually devolve into arguing.

Jacob Roach
Former Digital Trends Contributor
Jacob Roach is the lead reporter for PC hardware at Digital Trends. In addition to covering the latest PC components, from…