81% think ChatGPT is a security risk, survey finds

ChatGPT has been a polarizing invention, with responses to the artificial intelligence (AI) chatbot swinging between excitement and fear. Now, a new survey shows that disillusionment with ChatGPT could be hitting new highs.

According to a survey from security firm Malwarebytes, 81% of its respondents are worried about the security and safety risks posed by ChatGPT. It’s a remarkable finding, and it suggests that people are growing increasingly concerned about the nefarious acts OpenAI’s chatbot can apparently be used to pull off.

Malwarebytes asked its newsletter subscribers to respond to the phrase “I am concerned about the possible security and/or safety risks posed by ChatGPT,” a sentiment with which 81% agreed. What’s more, 51% disagreed with the statement “ChatGPT and other AI tools will improve Internet safety” while just 7% agreed, suggesting there is widespread concern over the impact ChatGPT will have on online security.

The discontent with AI chatbots was not limited to security issues. Only 12% of respondents agreed with the statement “The information produced by ChatGPT is accurate,” while 55% disagreed. As many as 63% did not trust ChatGPT’s responses, with a mere 10% finding them reliable.

Generating malware

This kind of response is not entirely surprising, given the spate of high-profile bad acts ChatGPT has been used for in recent months. We’ve seen instances of it being deployed for all manner of questionable deeds, from writing malware to presenting users with free Windows 11 keys.

In May 2023, we spoke to various security experts about the threats posed by ChatGPT. According to Martin Zugec, the Technical Solutions Director at Bitdefender, “the quality of malware code produced by chatbots tends to be low, making it a less attractive option for experienced malware writers who can find better examples in public code repositories.”

Still, that hasn’t stemmed public anxiety about what ChatGPT could be used to do. It’s clear that people are worried that even novice malware writers could task AI chatbots with dreaming up a devastating virus or unbreakable piece of ransomware, even if some security experts feel that’s unlikely.

Pause on development

So, what can be done? When Malwarebytes asked its readers what they thought about the statement “Work on ChatGPT and other AI tools should be paused until regulations can catch up,” 52% agreed, while a little under 24% disagreed.

This call from the public joins several open letters from prominent tech leaders urging a pause on AI chatbot development due to its “large-scale risks.” Perhaps it’s time decision-makers started to take heed.
