
Experts fear ChatGPT will soon be used in devastating cyberattacks

ChatGPT has taken the world by storm in recent months, but just as it has amazed people with its technical capabilities, concerns have also been raised over its potential misuse. Now, it seems many IT leaders are worried it will soon be used in major, potentially devastating cyberattacks.

In a survey of 1,500 IT and cybersecurity professionals conducted by BlackBerry, 51% of respondents believed that ChatGPT will be responsible for a successful cyberattack within the next 12 months. As many as 78% feel such an attack will happen within two years, while a handful think it could happen within the next few months.


And it’s not just rogue hackers or malware gangs that the survey respondents believe will be responsible. Up to 71% feel that nation-states could already be putting ChatGPT to work on malicious tasks.

When it comes to how exactly ChatGPT will be used to help spur cyberattacks, 53% of people said it would help hackers create more believable phishing emails, while 49% pointed to its ability to help hackers improve their coding abilities.

As well as that, 49% also believed ChatGPT will be used to spread misinformation and disinformation, and 48% think it could be used to craft entirely new strains of malware. A shade below that, 46% of respondents said ChatGPT could help improve existing attacks.

We’ve already seen a large range of impressive uses for AI tools like this, from writing novels to composing music. Yet those same skills that help ChatGPT fashion believable sentences could also be used to weave malicious code. As BlackBerry’s survey indicates, that’s a concern for a lot of people.

Changing the malware landscape


How will these potential threats be kept in check? As many as 95% of survey respondents argued that governments have an obligation to regulate ChatGPT-like technology, with 85% saying the level of responsibility should be “moderate” or “significant.”

It’s not just going to be governments fighting off ChatGPT-driven malware, though. Some 82% of the IT professionals surveyed are already planning to defend against this type of attack, with the same number saying they’d use AI tools to do so.

Despite the dire outlook, ChatGPT and tools like it have a lot of potential to do good, and three-quarters of the survey takers agreed that it will mainly be used to benefit people. But when it comes to malware, tools like ChatGPT could completely change the landscape. Whether it tips the scales in favor of the attackers or the defenders remains to be seen. If it’s the former, even the best antivirus apps might struggle to keep up.
