
In the age of ChatGPT, Macs are under malware assault

It’s common knowledge — Macs are less prone to malware than their Windows counterparts. That still holds true today, but the rise of ChatGPT and other AI tools is challenging the status quo, with even the FBI warning of its far-reaching implications for cybersecurity.

That may be why software developer MacPaw launched its own cybersecurity division — dubbed Moonlock — specifically to fight Mac malware. We spoke to Oleg Stukalenko, Lead Product Manager at Moonlock, to find out whether Mac malware is on the rise, and if ChatGPT could give hackers a massive advantage over everyday users.

State-sponsored attacks

[Image: A person using a laptop with a set of code seen on the display. Sora Shimazaki / Pexels]

Apple silicon has rejuvenated Apple’s computers, with a spike in global sales ever since the chips debuted in 2020, according to Statista. All those extra Macs could make the platform a juicy target for malware writers enticed by a widening pool of potential paydays.

As Stukalenko puts it, “Because of a growing quantity of Mac computers, macOS has become an attractive target for cyberattacks … Even the notable case of North Korea’s Lazarus Group, which became one of the first state-sponsored groups to target Macs last year, keeps us on high alert.”

And while Stukalenko acknowledges that “In theory, a newer processor architecture [like Apple silicon] may be considered a safer one,” that doesn’t make it immune to threats. In fact, of all the malware samples analyzed by Moonlock, “almost all work on both Intel and ARM architectures” like the one that forms the basis of Apple silicon chips.

The ChatGPT threat

[Image: A MacBook Pro on a desk with ChatGPT’s website showing on its display. Hatice Baran / Unsplash]

Ransomware often makes a big splash in the news, but it’s not the fastest-rising Mac malware threat, according to Moonlock — instead, that dubious accolade goes to various types of stealers. This malware usually takes the form of a trojan that gathers information from a victim’s system, Stukalenko says, such as usernames and passwords, credit card information, or login details. This category also includes keyloggers, which keep track of everything you type in the hopes of picking up sensitive info.

Another rising threat for Mac users? ChatGPT. While the chatbot itself is not malware, it has the potential to be misused by bad actors who, with some clever prompt engineering, can task it with writing malicious code for them. What do the engineers at Moonlock think about ChatGPT’s capacity as a hacker’s helper?

“ChatGPT can be used for quick prototyping of malware by generating multiple code snippets,” Stukalenko says, giving hackers an extra weapon in their arsenal against their targets. As well as that, the chatbot can be used “to quickly generate a similar new code based on the initial code,” resulting in “polymorphic” malware. This is able to “change its appearance continuously and rapidly morph its code” in order to evade antivirus detection. While not hugely popular right now, it could become a serious problem in the near future.
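To see why that shape-shifting is such a headache for traditional antivirus scanners, consider the harmless sketch below (our own illustration, not real malware or anything from Moonlock’s research): two pieces of code that do exactly the same thing can have completely different fingerprints, so a detection signature keyed to one version simply won’t match the next.

# A benign demonstration: two snippets with identical behavior but different bytes,
# so a signature keyed to the first one's hash will never flag the second.
import hashlib

snippet_a = b"total = 0\nfor n in range(10):\n    total += n\nprint(total)\n"
snippet_b = b"print(sum(range(10)))\n"  # same result (45), entirely different bytes

for name, code in (("snippet_a", snippet_a), ("snippet_b", snippet_b)):
    print(name, hashlib.sha256(code).hexdigest())

Both snippets print 45 when run, yet their SHA-256 digests have nothing in common, and that is exactly the property a rapidly morphing piece of malware exploits against signature-based scanners.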

[Image: A person sits in front of a laptop. On the laptop screen is the home page for OpenAI’s ChatGPT artificial intelligence chatbot. Viralyft / Unsplash]

Despite OpenAI adding guardrails to ChatGPT that are meant to protect against malicious code generation, these defenses can be easily overcome, Stukalenko says. For instance, the Moonlock team was able to use ChatGPT to generate working encryption code that could be used in ransomware, working their way around the guardrails in a relatively straightforward fashion.

There’s some good news though. Even though ChatGPT can spin up functioning malware code, it is also prone to providing users with faulty outputs that behave weirdly, Stukalenko says, much like how some image generators create images of people with seven fingers. That’s similar to what cybersecurity experts told us when we quizzed them on the same topic in May 2023.

And Stukalenko notes that “ChatGPT brings higher risks for the whole cybersecurity ecosystem, but Mac users specifically are in no way under a more significant risk than users of any other [operating system].” In other words, this is a platform-agnostic problem, not a macOS problem.

How to stay safe

[Image: The MacBook Pro on a wooden table. Digital Trends]

So, are Mac users right to feel safer than their Windows counterparts? Not entirely without reason, it turns out. “Apple prioritizes security, and the widely held belief that macOS is more protected than Windows has weight behind it,” Stukalenko says. “Over the years, Apple has been consistently adding more security features to macOS … Moreover, the review process of the App Store considerably reduces the risk of installing malware.”

But as we’ve seen, no system is totally beyond the clutches of viruses, trojans, and the like. As Stukalenko explains, “the robust security safeguards and the perceived system’s invulnerability have built a myth that malware doesn’t exist on macOS.”

“According to our own research,” they continue, “57% of Mac users either agree or hesitate to disagree with the statement that ‘Malware does not exist on macOS.’ This persistent misconception makes users vulnerable to potential cyberattacks.”

What can you do to stay safe on your Mac? According to Moonlock, you should prioritize downloading apps from the official App Store, as everything there has to be reviewed and checked by Apple before it goes live. If the app you want isn’t available there, avoid grabbing it through Google search results or banner ads, as these can hide malware.
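If you do download something from outside the App Store and want to double-check it yourself, macOS ships with a built-in Gatekeeper assessment tool called spctl that you can drive from a script. The snippet below is a minimal sketch of that idea (our example, not a Moonlock tool), and the app path is just a placeholder to swap for whatever you have downloaded.

# Ask Gatekeeper whether a downloaded app passes macOS's own checks.
# Hypothetical path; replace it with the app you actually want to vet.
import subprocess

app_path = "/Applications/Example.app"

result = subprocess.run(
    ["spctl", "--assess", "--type", "execute", "--verbose", app_path],
    capture_output=True,
    text=True,
)

# spctl reports "accepted" (along with the source, such as a notarized
# Developer ID) for apps Gatekeeper trusts, and "rejected" for everything else.
print(result.stdout + result.stderr)
print("Gatekeeper verdict:", "accepted" if result.returncode == 0 else "rejected")

It’s no substitute for common sense or a proper antivirus app, but it’s a quick way to confirm that something you’ve pulled from the wider web is at least signed and trusted by macOS.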

Elsewhere, Stukalenko says you should avoid torrents at all costs and install an antivirus app from a trusted developer. Put these tips into practice and you’ll go a long way toward keeping your Mac safe — even from malware built with the automated assistance of ChatGPT.

Editors' Recommendations

Alex Blake
In ancient times, people like Alex would have been shunned for their nerdy ways and strange opinions on cheese. Today, he…