
Hackers are using AI to spread dangerous malware on YouTube

YouTube is the latest frontier where AI-generated content is being used to dupe users into downloading malware that can steal their personal information.

As AI-generated content becomes increasingly popular across platforms, so does the desire to profit from it in malicious ways. The research firm CloudSEK has observed a 200% to 300% increase since November 2022 in the number of YouTube videos that include links to infostealing malware such as Vidar, RedLine, and Raccoon directly in their descriptions.

The videos are set up as tutorials for downloading cracked versions of software that typically requires a paid license, such as Photoshop, Premiere Pro, Autodesk 3ds Max, and AutoCAD.

Bad actors create AI-generated videos on platforms such as Synthesia and D-ID, featuring human presenters with familiar, trustworthy faces. CloudSEK noted that this popular trend has spread across social media and has long been used in recruitment, educational, and promotional material.

Combined, these methods make it easy to trick users into clicking malicious links and downloading infostealer malware. Once installed, the infostealer has access to the user’s private data, including “passwords, credit card information, bank account numbers, and other confidential data,” which can then be uploaded to the bad actor’s command-and-control server.

Other private information at risk from infostealer malware includes browser data, crypto wallet data, Telegram data, files such as .txt documents, and system information such as IP addresses.

While plenty of antivirus and endpoint detection systems are working to catch this new wave of AI-assisted malware campaigns, there are also plenty of infostealer developers around to ensure the ecosystem remains alive and well. Though CloudSEK noted that these bad actors sprang up alongside the AI boom in November 2022, some of the first media attention on hackers using ChatGPT-generated code to create malware didn’t surface until early February.

Infostealer developers also recruit and collaborate with traffers: other actors who find and share information on potential victims through underground marketplaces, forums, and Telegram channels. Traffers are typically the ones who provide the fake websites, phishing emails, YouTube tutorials, or social media posts to which infostealer developers can attach their malware. A similar scam has seen bad actors host fake ads on social media and websites for the paid version of ChatGPT.

On YouTube, however, bad actors are taking over accounts and uploading several videos at once to reach the original creator’s followers. They target both popular accounts and infrequently updated ones, for different purposes.

Taking over an account with more than 100,000 subscribers and uploading five or six malware-laced videos is bound to get some clicks before the owner regains control of the account. Viewers might identify the videos as malicious and report them to YouTube, which will ultimately remove them. A less popular account, however, might host infected videos for some time without the owner noticing.

Adding fake comments and shortened bit.ly and cutt.ly links to the videos also makes them appear more legitimate.
