
Don’t fall for it — ChatGPT scams are running rampant across social media

ChatGPT-related malware and scams continue to become more prevalent as interest in OpenAI's chatbot expands.

There have been a number of instances of bad actors taking advantage of the popularity of ChatGPT since its introduction in November 2022. Many have been using false ChatGPT interfaces to scam unsuspecting mobile users out of money or infect devices with malware. The most recent threat is a mix of both, with hackers targeting Windows and Android users through phishing pages and aiming to steal their private data, which could include credit card and other banking information, according to Bleeping Computer.

Chat GPT PC Online Redline redirect.

I redirected it to closed.

/ #cybersecurity #infosec

— Dominic Alvieri (@AlvieriD) February 12, 2023

Security researcher Dominic Alvieri first observed suspicious activity from a domain hosting an info-stealing malware called Redline, which posed as a download of ChatGPT for Windows desktop. The website, which featured ChatGPT branding, was advertised on a Facebook page as a legitimate OpenAI link to lure people into visiting the malicious site.

Alvieri found there were also fake ChatGPT apps on Google Play and various other third-party Android app stores, which could send malware to devices if downloaded.

Other researchers have corroborated the initial findings, uncovering additional malware used in different malicious campaigns. Researchers at Cyble discovered a domain distributing malware that "steals clipboard contents," including the Aurora stealer. Another domain, called chat-gpt-pc[.]online, sends out malware called Lumma stealer, while yet another, openai-pc-pro[.]online, spreads malware that has not yet been identified.

Cyble has also connected one of the domains to a credit card-stealing page that poses as a payment page for ChatGPT Plus.

Meanwhile, Cyble said it has uncovered over 50 dubious mobile applications posing as ChatGPT, either by using its branding or a name that could easily confuse users. The research team determined all of them to be fake and harmful to devices. One is an app called chatGPT1, an SMS-billing fraud app that likely steals credit card information similar to what is described above. Another is AI Photo, which hosts Spynote malware that is able to access and "steal call logs, contact lists, SMS, and files" from a device.

The influx of malware and paid scams began when OpenAI started throttling speeds and access to ChatGPT due to its booming popularity. The first fake paid mobile apps hit the Apple App Store and Google Play in December 2022 but didn't get media attention until nearly a month later, in mid-January. The first known major ChatGPT hack soon followed in mid-February: bad actors used the OpenAI GPT-3 API to create a dark version of ChatGPT that can generate phishing emails and malware scripts. The bots work through the messaging app Telegram.

Now, it seems to be open season for fakes and alternatives since OpenAI introduced its paid ChatGPT Plus tier for $20 per month on February 10. However, users should be aware that the chatbot remains a browser-based tool that can be accessed only through OpenAI's official website. There are no official mobile or desktop apps currently available for ChatGPT on any system.

Editors' Recommendations

Fionna Agomuoh
Fionna Agomuoh is a technology journalist with over a decade of experience writing about various consumer electronics topics…
Is ChatGPT safe? Here are the risks to consider before using it

For those who have seen ChatGPT in action, you know just how amazing this generative AI tool can be. And if you haven’t seen ChatGPT do its thing, prepare to have your mind blown! 

There’s no doubting the power and performance of OpenAI’s famous chatbot, but is ChatGPT actually safe to use? While tech leaders the world over are concerned about the rapid development of AI, those global concerns don’t necessarily translate to an individual user experience. With that being said, let’s take a closer look at ChatGPT to help you home in on your comfort level.
Privacy and financial leaks
In at least one instance, chat history between users was mixed up. On March 20, 2023, ChatGPT creator OpenAI discovered a problem, and ChatGPT was down for several hours. Around that time, a few ChatGPT users saw the conversation history of other people instead of their own. Possibly more concerning was the news that payment-related information from ChatGPT Plus subscribers might have leaked as well.

What is ChatGPT Plus? Here’s what to know before you subscribe

ChatGPT is completely free to use, but that doesn't mean OpenAI isn't also interested in making some money.

ChatGPT Plus is a subscription model that gives you access to a completely different service based on the GPT-4 model, along with faster speeds, more reliability, and first access to new features. Beyond that, it also opens up the ability to use ChatGPT plug-ins, create custom chatbots, use DALL-E 3 image generation, and much more.
What is ChatGPT Plus?
Like the standard version of ChatGPT, ChatGPT Plus is an AI chatbot, and it offers a highly accurate machine learning assistant that's able to carry out natural language "chats." This is the latest version of the chatbot that's currently available.

ChatGPT briefly devolved into an AI mess

I've seen my fair share of unhinged AI responses -- not the least of which was when Bing Chat told me it wanted to be human last year -- but ChatGPT has stayed mostly sane since it was first introduced. That's changing, as users are flooding social media with unhinged, nonsensical responses coming from the chatbot.

In a lot of reports, ChatGPT simply spits out gibberish. For example, u/Bullroarer_Took took to the ChatGPT subreddit to showcase a response in which a mix of jargon and proper sentence structure gives the appearance of a coherent answer, but a close read shows the AI spitting out nonsense.
