Great, hackers are now using ChatGPT to create malware

A new threat has surfaced in the ChatGPT saga, with cybercriminals finding a way to sidestep the AI chatbot's restrictions and use it to churn out malicious content on demand.

The research firm Check Point has discovered that hackers have designed bots that tap into OpenAI’s GPT-3 API and work around its content restrictions so that it will generate malicious content, such as text for phishing emails and malware scripts.


The bots work through the messaging app Telegram. Bad actors use the bots to set up a restriction-free, dark version of ChatGPT, according to Ars Technica.


ChatGPT has thumbs-up and thumbs-down buttons you can press as part of its feedback system if it generates content that could be considered offensive or inappropriate. Normally, requests such as generating malicious code or phishing emails are off limits, with ChatGPT refusing to give a response.

This nefarious chatbot alternative has a price tag of $6 for every 100 queries, with the hackers behind it also offering tips and examples of the bad content you can generate with this version. The hackers have also made a script available on GitHub. The OpenAI API-based script lets users impersonate a business or person, in addition to generating phishing emails through text-generation commands. The bots can also suggest the ideal placement for the phishing link in the email, according to PC Gamer.

It is difficult to know how much of a threat this development will pose to AI text generators moving forward, especially with major companies already committed to working with this increasingly popular technology. Microsoft, for example, is set to add ChatGPT-powered features to Bing and its Edge browser as part of its ongoing collaboration with OpenAI.

While ChatGPT remains free for the foreseeable future, aside from the paid ChatGPT Plus subscription, this isn’t the first time the AI text generator has been targeted by scammers. In January, news broke that thousands of people had been duped into paying for iOS and Android mobile app versions of the chatbot, which is currently a browser-based service.

The Apple App Store version was especially popular, despite its $8 weekly subscription price after a three-day trial. Users could also pay a $50 monthly subscription, which notably cost even more than paying week to week. The app was eventually removed from the App Store after it received media attention.

ChatGPT is certainly the main target for scammers as its popularity surges, but it remains to be seen whether bad actors will eventually turn their attention to one of the many ChatGPT alternatives now circulating.
