‘Grim outlook’ as criminals commandeer AI chatbots, Europol says

Europol this week issued a stark warning highlighting the risks posed by criminals as they get to grips with the new wave of advanced AI chatbots.

In a post shared online this week, Europe’s law enforcement agency described how tools such as OpenAI’s ChatGPT and GPT-4, and Google’s Bard, will be increasingly used by criminals looking for new ways to con members of the public.

It identified three specific areas of greatest concern.

First up is fraud and social engineering, where emails are sent to targets in the hope of getting them to download a malware-infected file or click on a link that takes them to an equally dangerous website.

Phishing emails, as they’re known, are usually full of grammatical errors and spelling mistakes and end up in the junk folder. Even those that do make it to the inbox are so appallingly written that the recipient is able to quickly discard them without a second thought.

But AI chatbots can produce well-written messages free of such sloppy errors, allowing criminals to send out convincing emails that recipients will need to scrutinize far more carefully.

Europol said the advanced chatbots can “reproduce language patterns that can be used to impersonate the style of speech of specific individuals or groups,” adding that such a capability can be “abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.”

Second, a more convincing form of disinformation is set to proliferate, with the new wave of chatbots excelling at creating authentic-sounding text at speed and scale, Europol said, adding: “This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort.”

Third, Europol cited coding as a new area being seized upon by cybercriminals to create malicious software. “In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages,” the agency pointed out. “For a potential criminal with little technical knowledge, this is an invaluable resource to produce malicious code.”

It said the situation “provides a grim outlook” for those on the right side of the law as nefarious activity online becomes harder to detect.

The AI chatbot craze took off in November 2022 when Microsoft-backed OpenAI released its impressive ChatGPT tool. An improved version, GPT-4, was released just recently, while Google has also unveiled its own similar tool, called Bard. All three are noted for their impressive ability to create natural-sounding text with just a few prompts, with the technology likely to assist or even replace a slew of different jobs in the coming years.

Other similar AI-based technology lets you create original images, videos, and audio with just a few text prompts, highlighting how no form of media will escape AI’s impact as the technology continues to improve.

Some leading voices have understandable concerns about its rapid rise, with a recent open letter signed by Elon Musk, Apple co-founder Steve Wozniak, and various experts claiming AI systems with human-competitive intelligence can pose “profound risks to society and humanity.” The letter called for a six-month pause to allow for the creation and implementation of safety protocols for the advanced tools, adding that if handled in the right way, “humanity can enjoy a flourishing future with AI.”

Trevor Mogg
Contributing Editor