Hackers are using AI to create vicious malware, says FBI

The FBI has warned that hackers are running wild with generative artificial intelligence (AI) tools like ChatGPT, quickly creating malicious code and launching cybercrime sprees that would have taken far more effort in the past.

The FBI detailed its concerns on a call with journalists, explaining that AI chatbots have fueled all kinds of illicit activity, from scammers and fraudsters perfecting their techniques to terrorists consulting the tools on how to launch more damaging chemical attacks.


According to a senior FBI official (via Tom’s Hardware), “We expect over time as adoption and democratization of AI models continues, these trends will increase.” Bad actors are using AI to supplement their regular criminal activities, the official continued, including using AI voice generators to impersonate trusted people in order to defraud loved ones or the elderly.

It’s not the first time we’ve seen hackers take tools like ChatGPT and twist them to create dangerous malware. In February 2023, researchers from security firm Check Point discovered that malicious actors had been able to alter a chatbot’s API, enabling it to generate malware code and putting virus creation at the fingertips of almost any would-be hacker.

Is ChatGPT a security threat?


The FBI takes a very different stance from some of the cyber experts we spoke to in May 2023. They told us that the threat from AI chatbots has been largely overblown, with most hackers finding better code exploits through more traditional data leaks and open-source research.

For instance, Martin Zugec, Technical Solutions Director at Bitdefender, explained that “The majority of novice malware writers are not likely to possess the skills required” to bypass chatbots’ anti-malware guardrails. As well as that, Zugec explained, “the quality of malware code produced by chatbots tends to be low.”

That offers a counterpoint to the FBI’s claims, and we’ll have to see which side proves correct. But with ChatGPT maker OpenAI discontinuing its own tool designed to detect chatbot-generated text, the news has not been encouraging lately. If the FBI is right, there could be tough times ahead in the battle against hackers and their attempts at chatbot-fueled malware.

Alex Blake