
ChatGPT Bing is becoming an unhinged AI nightmare

Excitement around Microsoft’s ChatGPT-powered Bing is at a fever pitch right now, but you might want to hold off. Its first public debut has produced responses that are inaccurate, incomprehensible, and sometimes downright scary.

Microsoft sent out the first wave of ChatGPT Bing invites on Monday, following a weekend where more than a million people signed up for the waitlist. It didn’t take long for insane responses to start flooding in.

ChatGPT giving an insane response.
u/Alfred_Chicken

You can see a response from u/Alfred_Chicken above that was posted to the Bing subreddit. Asked if the AI chatbot was sentient, it starts out with an unsettling response before devolving into a barrage of “I am not” messages.

That’s not the only example, either. u/Curious_Evolver got into an argument with the chatbot over what year it is, with Bing insisting it was 2022. It’s a silly mistake for the AI, but it’s not the slipup that’s frightening. It’s how Bing responds.

The AI claims that the user has “been wrong, confused, and rude,” and they have “not shown me any good intention towards me at any time.” The exchange climaxes with the chatbot claiming it has “been a good Bing,” and asking for the user to admit they’re wrong and apologize, stop arguing, or end the conversation and “start a new one with a better attitude.”

User u/yaosio said they put Bing in a depressive state after the AI couldn’t recall a previous conversation. The chatbot said it “makes me feel sad and scared,” and asked the user to help it remember.

These aren’t just isolated incidents from Reddit, either. AI researcher Dmitri Brereton showed several examples of the chatbot getting information wrong, sometimes to hilarious effect and other times with potentially dangerous consequences.

The chatbot dreamed up fake financial numbers when asked about Gap’s financial performance, created a fictitious 2023 Super Bowl in which the Eagles defeated the Chiefs before the game was even played, and even gave descriptions of deadly mushrooms when asked what an edible mushroom would look like.

Bing copilot AI chat interface.
Andrew Martonik / Digital Trends

Google’s rival Bard AI also had slipups in its first public demo. Ironically enough, Bing knew about that fact but got the details of Bard’s mistake wrong, claiming that Bard inaccurately said Croatia is part of the European Union (Croatia is in the EU; Bard actually botched a response about the James Webb Space Telescope).

We saw some of these mistakes in our hands-on demo with ChatGPT Bing, but nothing on the scale of the user reports we’re now seeing. It’s no secret that ChatGPT can screw up responses, but it’s clear now that the recent version debuted in Bing might not be ready for primetime.

The responses shouldn’t come up in normal use. They likely result from users “jailbreaking” the AI by supplying it with specific prompts in an attempt to bypass the rules it has in place. As reported by Ars Technica, a few exploits have already been discovered that skirt the safeguards of ChatGPT Bing. This isn’t new for the chatbot, with several examples of users bypassing the protections of the online version of ChatGPT.

We’ve had a chance to test out some of these responses, as well. Although we never saw anything quite like users reported on Reddit, Bing did eventually devolve into arguing.
