YouTube has recently seen a surge in videos whose descriptions contain malicious links to information stealers, and many of these videos use AI-generated personas to trick viewers into trusting them.
Cyber intelligence company CloudSEK reports that since November 2022 there has been a massive 200-300% increase in YouTube content that tricks viewers into installing well-known malware such as Vidar, RedLine, and Raccoon.
The videos pose as tutorials on how to download cracked copies of popular paid design software, such as Adobe Photoshop, Premiere Pro, Autodesk 3ds Max, and AutoCAD, for free.
These how-to videos have grown increasingly sophisticated, moving from simple screen recordings with audio narration to AI-generated footage of a realistic person guiding the viewer through the process, all to appear more trustworthy.
CloudSEK notes an overall increase in AI-generated videos used for legitimate educational, recruitment, and promotional purposes, but they are now being turned to nefarious ends as well.
Information stealers, as the name suggests, infiltrate the victim's system and harvest valuable personal information such as passwords and payment details, which is then sent to the threat actor's server. They are distributed via malicious downloads and links, such as those found in the video descriptions in this case.
CloudSEK points out that with 2.5 billion monthly users, YouTube is a prime target for cybercriminals, who work to game the platform's automated content review process in various ways.
These include using region-specific tags, adding fake comments to make videos appear legitimate, and simply flooding the platform with multiple uploads to offset any videos that are removed or banned. CloudSEK found that 5 to 10 such malicious videos are uploaded every hour.
To game search rankings, the uploaders also stuff in hidden links and random keywords in multiple languages so that YouTube's algorithm eventually recommends the videos.
In addition, link-shortening services such as bit.ly are used to disguise the malicious nature of links, as are links to file-hosting services such as MediaFire.
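A minimal defensive heuristic against this tactic is to flag links whose host belongs to a known shortening service before anyone clicks them. The sketch below assumes a small, hand-picked (and deliberately incomplete) set of shortener domains; a real deployment would use a maintained list and also expand the link server-side.

```python
from urllib.parse import urlparse

# Partial, illustrative list of shortener domains (not exhaustive)
KNOWN_SHORTENERS = {"bit.ly", "t.co", "tinyurl.com", "goo.gl"}

def is_shortened(url: str) -> bool:
    """Flag links whose host is a known URL-shortening service."""
    host = (urlparse(url).hostname or "").lower()
    return host in KNOWN_SHORTENERS

print(is_shortened("https://bit.ly/3abcdef"))        # True
print(is_shortened("https://www.mediafire.com/x"))   # False
```

Note that a file-host link such as the MediaFire one above passes this check, which is exactly why attackers mix both techniques: the shortener hides the destination, and the file host looks benign on its own.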
“The threat posed by information stealers is rapidly evolving and becoming more sophisticated,” said CloudSEK researcher Pavan Karthick. “In a worrying trend, cybercriminals are now using AI-generated videos to expand their reach, and YouTube has become a convenient platform for their distribution.”
CloudSEK suggests that “traditional string-based rules will prove ineffective against malware that generates strings dynamically and/or uses encrypted strings.”
Instead, it advises companies to take a more hands-on approach, closely monitoring cybercriminals' tactics and techniques to identify threats correctly.
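The point about string-based rules can be illustrated with a toy example. The indicator string and sample bytes below are entirely hypothetical; the sketch just shows that a static scan of a file misses a string that is stored encoded (here, base64) and only reconstructed in memory at run time.

```python
import base64

# Hypothetical indicator a string-based detection rule might match on
INDICATOR = b"stealer-c2.example/upload"

# A sample that ships its C2 address base64-encoded, decoding it only at run time
encoded = base64.b64encode(INDICATOR)
sample_on_disk = b"\x4d\x5a header bytes " + encoded

static_hit = INDICATOR in sample_on_disk    # False: plain string never appears on disk
runtime_string = base64.b64decode(encoded)  # the string only exists after run-time decoding
runtime_hit = INDICATOR in runtime_string   # True

print(static_hit, runtime_hit)  # False True
```

The same evasion works with XOR loops, per-build string encryption, or strings assembled character by character, which is why behavior- and tactic-focused monitoring catches families that signature rules miss.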
In addition, CloudSEK recommends running awareness campaigns that share simple advice, such as not clicking on unknown links and securing accounts with multi-factor authentication, preferably via an authenticator app.
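Authenticator apps matter here because the codes they generate are derived locally from a shared secret and the current time, so a stolen password alone is not enough to log in. The apps implement TOTP (RFC 6238); a minimal sketch of the algorithm, verified against the RFC's published test vector, looks like this:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time step, dynamically truncated."""
    counter = struct.pack(">Q", timestamp // step)          # 8-byte big-endian time step
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59 seconds
print(totp(b"12345678901234567890", 59))  # 287082
```

Because each code is valid for only one 30-second window, credentials harvested by an information stealer go stale almost immediately, which is why an authenticator app is preferred over a static second factor.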