
Hackers are using AI to create vicious malware, says FBI

The FBI has warned that hackers are running wild with generative artificial intelligence (AI) tools like ChatGPT, quickly creating malicious code and launching cybercrime sprees that would have taken far more effort in the past.

The FBI detailed its concerns on a call with journalists, explaining that AI chatbots have fueled all kinds of illicit activity, from scammers and fraudsters perfecting their techniques to terrorists consulting the tools on how to launch more damaging chemical attacks.

A hacker typing on an Apple MacBook laptop while holding a phone. Both devices show code on their screens.
Sora Shimazaki / Pexels

According to a senior FBI official (via Tom’s Hardware), “We expect over time as adoption and democratization of AI models continues, these trends will increase.” Bad actors are using AI to supplement their regular criminal activities, they continued, including using AI voice generators to impersonate trusted people in order to defraud loved ones or the elderly.


It’s not the first time we’ve seen hackers take tools like ChatGPT and twist them to create dangerous malware. In February 2023, researchers at security firm Check Point discovered that malicious actors had been able to alter a chatbot’s API, enabling it to generate malware code and putting virus creation at the fingertips of almost any would-be hacker.


Is ChatGPT a security threat?

A MacBook Pro on a desk with ChatGPT's website showing on its display.
Hatice Baran / Unsplash

The FBI’s stance differs sharply from that of some of the cyber experts we spoke to in May 2023. They told us that the threat from AI chatbots had been largely overblown, with most hackers finding better code exploits in traditional data leaks and open-source research.

For instance, Martin Zugec, Technical Solutions Director at Bitdefender, explained that “The majority of novice malware writers are not likely to possess the skills required” to bypass chatbots’ anti-malware guardrails. As well as that, Zugec explained, “the quality of malware code produced by chatbots tends to be low.”

That offers a counterpoint to the FBI’s claims, and we’ll have to see which side proves to be correct. But with ChatGPT maker OpenAI discontinuing its own tool designed to detect chatbot-generated plagiarism, the news has not been encouraging lately. If the FBI is right, there could be tough times ahead in the battle against hackers and their attempts at chatbot-fueled malware.

Alex Blake
Alex Blake has been working with Digital Trends since 2019, where he spends most of his time writing about Mac computers…
OpenAI cracks down on ChatGPT scammers

OpenAI has made it clear that its flagship AI service, ChatGPT, is not intended for malicious use.

The company has released a report detailing the trends it has observed among bad actors using its platform as it grows in popularity. OpenAI said it has removed dozens of accounts suspected of using ChatGPT in unauthorized ways, ranging from "debugging code" to "generating content for publication on various distribution platforms."

With 400 million users, OpenAI maintains lead in competitive AI landscape

Competition in the AI industry remains tough, and OpenAI has proven that it is not taking any coming challenges lightly. The generative AI brand announced Thursday that it serves 400 million weekly active users as of February, a 33% increase in less than three months.

OpenAI chief operating officer Brad Lightcap confirmed the latest user statistics to CNBC, indicating that the figures had not previously been reported. The numbers have risen quickly from the 300 million weekly users confirmed in December.

xAI’s Grok-3 is impressive, but it needs to do a lot more to convince me

Elon Musk-led xAI has announced its latest AI model, Grok-3, via a livestream. From the get-go, it was evident that the company wants to quickly fill the practical gaps that could make its chatbot more approachable to the average user, rather than just selling rhetoric about wokeness and understanding the universe.

The company will release two versions of its latest AI model: Grok-3 and Grok-3 mini. The latter is trained for low-compute scenarios, while the former offers the full set of Grok-3 perks, such as DeepSearch, Think, and Big Brain.
What’s all the fuss about
