ChatGPT has been a polarizing invention, with responses to the artificial intelligence (AI) chatbot swinging between excitement and fear. Now, a new survey shows that disillusionment with ChatGPT could be hitting new highs.
According to a survey from security firm Malwarebytes, 81% of its respondents are worried about the security and safety risks posed by ChatGPT. It’s a remarkable finding, and it suggests that people are becoming increasingly concerned about the nefarious acts OpenAI’s chatbot is apparently capable of pulling off.
Malwarebytes asked its newsletter subscribers to respond to the statement “I am concerned about the possible security and/or safety risks posed by ChatGPT,” a sentiment with which 81% agreed. What’s more, 51% disagreed with the statement “ChatGPT and other AI tools will improve Internet safety,” while just 7% agreed, suggesting there is widespread concern over the impact ChatGPT will have on online security.
The discontent with AI chatbots was not limited to security issues. Only 12% of respondents agreed with the statement “The information produced by ChatGPT is accurate,” while 55% disagreed. As many as 63% of people did not trust ChatGPT’s responses, with a mere 10% finding them trustworthy.
This kind of response is not entirely surprising, given the spate of high-profile misuses ChatGPT has been implicated in over recent months. We’ve seen instances of it being deployed for all manner of questionable deeds, from writing malware to presenting users with free Windows 11 keys.
In May 2023, we spoke to various security experts about the threats posed by ChatGPT. According to Martin Zugec, the Technical Solutions Director at Bitdefender, “the quality of malware code produced by chatbots tends to be low, making it a less attractive option for experienced malware writers who can find better examples in public code repositories.”
Still, that hasn’t stemmed public anxiety about what ChatGPT could be used to do. It’s clear that people are worried that even novice malware writers could task AI chatbots with dreaming up a devastating virus or unbreakable piece of ransomware, even if some security experts feel that’s unlikely.
So, what can be done? When Malwarebytes asked its readers what they thought about the statement “Work on ChatGPT and other AI tools should be paused until regulations can catch up,” 52% agreed, while a little under 24% disagreed.
This public sentiment echoes several open letters from prominent tech leaders calling for a pause in AI chatbot development due to its “large-scale risks.” Perhaps it’s time decision-makers started to take heed.