ChatGPT just created malware, and that’s seriously scary

A self-professed novice has reportedly created a powerful data-mining malware using just ChatGPT prompts, all within a span of a few hours.

Aaron Mulgrew, a Forcepoint security researcher, recently shared how he created zero-day malware exclusively on OpenAI’s generative chatbot. While OpenAI has protections against anyone attempting to ask ChatGPT to write malicious code, Mulgrew found a loophole by prompting the chatbot to create separate lines of the malicious code, function by function.


After compiling the individual functions, Mulgrew found himself with a nigh-undetectable data-stealing executable on his hands. And this was not your garden-variety malware, either: it was as sophisticated as any nation-state attack, able to evade every detection-based vendor.

Just as crucially, Mulgrew's malware differs from "regular" nation-state iterations in that it doesn't require a team of hackers (or anywhere near the usual time and resources) to build. Mulgrew, who didn't write any of the code himself, had the executable ready in just hours, as opposed to the weeks usually needed.

The Mulgrew malware (it has a nice ring to it, doesn't it?) disguises itself as a screensaver app (SCR extension), which auto-launches on Windows. The software then sifts through files (such as images, Word docs, and PDFs) for data to steal. The impressive part is that the malware uses steganography to break the stolen data into smaller pieces and hide them within images on the computer. Those images are then uploaded to a Google Drive folder, a step that avoids detection.
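To see why steganographic exfiltration is hard to detect, it helps to know how little the carrier data changes. The minimal sketch below (an educational illustration of least-significant-bit embedding, not Mulgrew's code, which was never released) hides a payload in the low bits of stand-in "pixel" values; each carrier byte changes by at most 1, so the image looks untouched.

```python
# Minimal LSB steganography sketch. A bytearray stands in for image
# pixel data; real tooling would operate on actual image files.

def embed(pixels: bytearray, payload: bytes) -> bytearray:
    """Store each payload bit in the least-significant bit of one pixel."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("payload too large for carrier")
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite only the low bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Recover `length` bytes from the pixels' least-significant bits."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte |= (pixels[i * 8 + j] & 1) << j
        data.append(byte)
    return bytes(data)

carrier = bytearray(range(256))     # stand-in for image pixel data
stego = embed(carrier, b"hello")
print(extract(stego, 5))            # b'hello'
```

Because the visible content of the carrier image is essentially unchanged, and the upload destination is a trusted service like Google Drive, neither the file nor the network traffic looks suspicious on its own.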

Equally impressive is that Mulgrew was able to refine and harden his code against detection using simple ChatGPT prompts, which really raises the question of how safe ChatGPT is to use. In early VirusTotal tests, the malware was flagged by just five of 69 detection products; a later version of the code was detected by none of them.
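Scores like "five of 69" come from VirusTotal's per-engine analysis stats. The sketch below (a hedged illustration of computing that ratio from a v3 file report; the hash and report contents are placeholders, since Mulgrew's sample was never published) shows how such a detection ratio is derived.

```python
# Hedged sketch: summarizing a VirusTotal v3 file report as a
# "flagged / total engines" ratio. API key and hash are placeholders.
import urllib.request

API_URL = "https://www.virustotal.com/api/v3/files/"

def build_request(sha256: str, api_key: str) -> urllib.request.Request:
    """Prepare (but do not send) a v3 file-report request."""
    return urllib.request.Request(API_URL + sha256,
                                  headers={"x-apikey": api_key})

def detection_ratio(report: dict) -> str:
    """Sum malicious + suspicious verdicts over all engines that ran."""
    stats = report["data"]["attributes"]["last_analysis_stats"]
    flagged = stats.get("malicious", 0) + stats.get("suspicious", 0)
    return f"{flagged}/{sum(stats.values())}"

# Abridged report shape matching the article's early test result:
sample = {"data": {"attributes": {"last_analysis_stats":
          {"malicious": 5, "suspicious": 0, "undetected": 64}}}}
print(detection_ratio(sample))  # 5/69
```

A "later version detected by none" would simply report zero in the malicious and suspicious buckets, which is what made the result so alarming.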

Note that the malware Mulgrew created was a test and is not publicly available. Nonetheless, his research has shown how easily users with little to no coding experience can bypass ChatGPT's weak protections and create dangerous malware without writing a single line of code themselves.

But here's the scary part of all this: malware of this kind usually takes a large team weeks to build. We wouldn't be surprised if nefarious hackers were already developing similar malware through ChatGPT as we speak.

Aaron Leong
Former Computing Writer