Experts fear ChatGPT will soon be used in devastating cyberattacks

ChatGPT has taken the world by storm in recent months, but just as it has amazed people with its technical capabilities, concerns have also been raised over its potential misuse. Now, it seems some IT leaders are worried it will soon be used in major cyberattacks, with the potential to cause devastation in the future.

In a survey of 1,500 IT and cybersecurity professionals conducted by BlackBerry, 51% of respondents believed that ChatGPT will be responsible for a successful cyberattack within the next 12 months. A full 78% expect such an attack within two years, while a handful think it could happen within the next few months.


And it’s not just rogue hackers or malware gangs that the survey respondents believe will be responsible. As many as 71% feel that nation-states could already be putting ChatGPT to work on malicious tasks.

When it comes to how exactly ChatGPT will be used to help spur cyberattacks, 53% of people said it would help hackers create more believable phishing emails, while 49% pointed to its ability to help hackers improve their coding abilities.

As well as that, 49% also believed ChatGPT will be used to spread misinformation and disinformation, and 48% think it could be used to craft entirely new strains of malware. A shade below that, 46% of respondents said ChatGPT could help improve existing attacks.

We’ve already seen a large range of impressive uses for AI tools like this, from writing novels to composing music. Yet those same skills that help ChatGPT fashion believable sentences could also be used to weave malicious code. As BlackBerry’s survey indicates, that’s a concern for a lot of people.

Changing the malware landscape


How will these potential threats be kept in check? As much as 95% of survey respondents argued that governments have an obligation to regulate ChatGPT-like technology, with 85% saying the level of responsibility should be “moderate” or “significant.”

It’s not just going to be governments fighting off ChatGPT-driven malware, though — 82% of the IT professionals surveyed are already planning to defend against this type of attack, with the same number saying they would use AI tools to do so.

Despite the dire outlook, ChatGPT (and tools like it) have a lot of potential to do good, and three-quarters of the survey takers agreed that it will mainly be used to benefit people. But when it comes to malware, tools like ChatGPT could completely change the landscape. Whether it tips the scales in favor of the attackers or defenders remains to be seen. If it’s the former, even the best antivirus apps might struggle to keep up.

Alex Blake