ChatGPT has taken the world by storm in recent months, but just as it has amazed people with its technical capabilities, concerns have been raised over its potential misuse. Now, it seems some IT leaders are worried it will soon be used in major cyberattacks, with the potential to cause real devastation.
In a survey of 1,500 IT and cybersecurity professionals conducted by BlackBerry, 51% of respondents believe that ChatGPT will be responsible for a successful cyberattack within the next 12 months. A full 78% feel that attack will happen within two years, while a handful think it could happen within the next few months.
And it’s not just rogue hackers or malware gangs that the survey respondents believe will be responsible. Some 71% feel that nation-states could already be putting ChatGPT to work on malicious tasks.
When it comes to how exactly ChatGPT will be used to help spur cyberattacks, 53% of people said it would help hackers create more believable phishing emails, while 49% pointed to its ability to help hackers improve their coding abilities.
As well as that, 49% also believed ChatGPT will be used to spread misinformation and disinformation, and 48% think it could be used to craft entirely new strains of malware. A shade below that, 46% of respondents said ChatGPT could help improve existing attacks.
We’ve already seen a large range of impressive uses for AI tools like this, from writing novels to composing music. Yet those same skills that help ChatGPT fashion believable sentences could also be used to weave malicious code. As BlackBerry’s survey indicates, that’s a concern for a lot of people.
How will these potential threats be kept in check? A full 95% of survey respondents argued that governments have an obligation to regulate ChatGPT-like technology, with 85% saying the level of regulation should be “moderate” or “significant.”
It’s not just going to be governments fighting off ChatGPT-driven malware, though — 82% of the IT professionals surveyed are already planning to defend against this type of attack, with the same number saying they would use AI tools to do so.
Despite the dire outlook, ChatGPT (and tools like it) have a lot of potential to do good, and three-quarters of the survey takers agreed that it will mainly be used to benefit people. But when it comes to malware, tools like ChatGPT could completely change the landscape. Whether it tips the scales in favor of the attackers or the defenders remains to be seen. If it’s the former, even the best antivirus apps might struggle to keep up.