
Hackers are using AI to spread dangerous malware on YouTube

YouTube is the latest frontier where AI-generated content is being used to dupe users into downloading malware that can steal their personal information.

As AI generation becomes increasingly popular across platforms, so does the desire to profit from it in malicious ways. The research firm CloudSEK has observed a 200% to 300% increase since November 2022 in YouTube videos whose descriptions contain links to stealer malware such as Vidar, RedLine, and Raccoon.

The videos are set up as tutorials for downloading cracked versions of software that typically requires a paid license, such as Photoshop, Premiere Pro, Autodesk 3ds Max, and AutoCAD.

Bad actors use AI video platforms such as Synthesia and D-ID to generate videos featuring human presenters with universally familiar and trustworthy features. Such AI-generated presenters are a popular trend on social media and have long been used in recruitment, educational, and promotional material, CloudSEK noted.

The combination of these methods means users can easily be tricked into clicking malicious links and downloading infostealer malware. Once installed, it has access to the user’s private data, including “passwords, credit card information, bank account numbers, and other confidential data,” which can then be uploaded to the bad actor’s command-and-control server.

Other private information at risk from infostealer malware includes browser data, crypto wallet data, Telegram data, files such as .txt documents, and system information such as IP addresses.

While many antivirus and endpoint-detection systems can catch this new wave of AI-assisted malware campaigns, there are also many information-stealer developers working to keep the ecosystem alive and well. Though CloudSEK noted that these bad actors sprang up alongside the AI boom in November 2022, the first media attention on hackers using ChatGPT-generated code to create malware didn’t surface until early February.

Information-stealer developers also recruit and collaborate with traffers, other actors who find and share information on potential victims through underground marketplaces, forums, and Telegram channels. Traffers are typically the ones who provide the fake websites, phishing emails, YouTube tutorials, or social media posts to which information-stealer developers can attach their malware. A similar scam has seen bad actors hosting fake ads on social media and websites for the paid version of ChatGPT.

On YouTube, however, bad actors hijack existing accounts and upload several videos at once to reach the original creator’s followers. They take over both popular accounts and infrequently updated ones, each for a different purpose.

Taking over an account with over 100,000 subscribers and uploading five or six malware-laced videos is bound to get some clicks before the owner regains control of the account. Viewers might identify a video as nefarious and report it to YouTube, which will ultimately remove it. On a less popular account, infected videos might stay live for some time before the owner even notices.

Adding fake comments and shortened bit.ly and cutt.ly links to the videos also makes them appear more legitimate.
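Because shortened links hide their true destination, one simple defensive heuristic is to flag any video description that contains a known URL-shortener domain before clicking through. Here is a minimal sketch in Python; the domain list and function name are illustrative assumptions, not part of CloudSEK’s report:

```python
import re

# Shortener domains mentioned in these campaigns, plus a couple of
# common ones (illustrative list, not exhaustive)
SHORTENER_DOMAINS = {"bit.ly", "cutt.ly", "tinyurl.com", "t.co"}

URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

def find_shortened_links(description: str) -> list[str]:
    """Return any URLs in a video description that use a known shortener."""
    flagged = []
    for url in URL_PATTERN.findall(description):
        # "https://bit.ly/abc" splits into ['https:', '', 'bit.ly', 'abc']
        host = url.split("/")[2].lower()
        # Strip a leading "www." so "www.bit.ly" still matches
        if host.removeprefix("www.") in SHORTENER_DOMAINS:
            flagged.append(url)
    return flagged

desc = "Free Photoshop crack! Download here: https://bit.ly/abc123"
print(find_shortened_links(desc))  # ['https://bit.ly/abc123']
```

A real filter would also expand the link (for example, via an HTTP HEAD request) and check the final destination against a reputation service, since attackers can register fresh shortener-like domains at will.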

Fionna Agomuoh
Fionna Agomuoh is a technology journalist with over a decade of experience writing about various consumer electronics topics…