
Hackers are using AI to spread dangerous malware on YouTube

YouTube is the latest frontier where AI-generated content is being used to dupe users into downloading malware that can steal their personal information.

As AI generation becomes increasingly popular on several platforms, so does the desire to profit from it in malicious ways. The research firm CloudSEK has observed a 200% to 300% increase in the number of videos on YouTube that include links to popular malware sources such as Vidar, RedLine, and Raccoon directly in the descriptions since November 2022.


The videos are set up as tutorials for downloading cracked versions of software that typically requires a paid license, such as Photoshop, Premiere Pro, Autodesk 3ds Max, and AutoCAD.

Bad actors create these AI-generated videos on platforms such as Synthesia and D-ID, featuring human presenters with familiar, trustworthy appearances. The format has become a popular trend on social media and has long been used in recruitment, educational, and promotional material, CloudSEK noted.

The combination of these methods makes it easy to trick users into clicking malicious links and downloading infostealer malware. Once installed, the malware has access to the user’s private data, including “passwords, credit card information, bank account numbers, and other confidential data,” which can then be uploaded to the bad actor’s command-and-control server.

Other private information at risk from infostealer malware includes browser data, crypto wallet data, Telegram data, files such as .txt documents, and system information such as IP addresses.

While many antivirus and endpoint detection systems can catch this new wave of AI-assisted malware campaigns, there are also plenty of information stealer developers working to keep the ecosystem alive and well. Though CloudSEK noted that these bad actors sprung up alongside the AI boom in November 2022, the first media attention on hackers using ChatGPT-generated code to create malware didn’t surface until early February 2023.

Information stealer developers also recruit and collaborate with traffers, other actors who find and share information on potential victims through underground marketplaces, forums, and Telegram channels. Traffers are typically the ones who provide the fake websites, phishing emails, YouTube tutorials, or social media posts to which information stealer developers can attach their malware. There has also been a similar scam in which bad actors host fake ads on social media and websites for the paid version of ChatGPT.

On YouTube, however, the attackers take over existing accounts and upload several videos at once to reach the original creator’s followers. They target both popular accounts and infrequently updated accounts, for different purposes.

Taking over an account with over 100,000 subscribers and uploading five or six malware-laced videos is bound to get some clicks before the owner regains control of the account. Viewers might identify the videos as malicious and report them to YouTube, which will ultimately remove them. On a less popular account, infected videos might stay live for some time without the owner being aware.

Adding fake comments and shortened bit.ly and cutt.ly links to the videos also makes them appear more legitimate.
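Because shortened links hide their real destination, one simple defensive habit is to check whether a link uses a known URL shortener before clicking it. The sketch below is a minimal illustration of that idea, not a security tool; the list of shortener domains is an assumption for the example and is far from exhaustive.

```python
from urllib.parse import urlparse

# Assumed, non-exhaustive list of URL-shortener domains for illustration.
KNOWN_SHORTENERS = {"bit.ly", "cutt.ly", "tinyurl.com", "t.co"}

def is_shortened(url: str) -> bool:
    """Return True if the URL's host matches a known shortener domain."""
    host = urlparse(url).netloc.lower()
    # Strip an optional "www." prefix before comparing (Python 3.9+).
    return host.removeprefix("www.") in KNOWN_SHORTENERS
```

A link that matches such a list isn't necessarily malicious, but it warrants expanding (for example, with a link-preview service) before visiting, especially when it appears in a video description promising cracked software.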

Fionna Agomuoh
Fionna Agomuoh is a Computing Writer at Digital Trends. She covers a range of topics in the computing space, including…