Fake AI platforms have become an increasingly common lure used by cybercriminals, deceiving inexperienced users with promises of cutting-edge, AI-powered video and image generation. To trick users into downloading a malicious payload known as Noodlophile Stealer, threat actors promote so-called “generative AI” tools through well-known Facebook groups and viral social media campaigns. Some of these posts have drawn over 62,000 views.
Once installed, this infostealer covertly harvests cryptocurrency wallet data, login credentials, and other sensitive information, all while victims believe they are simply converting their images into videos.
Rather than relying on traditional phishing techniques or pirated-software downloads, attackers create realistic AI-themed platforms, complete with sophisticated UIs and fake testimonials, to lend their masquerade legitimacy.
The complexity of these fraudulent schemes is increasing along with the general interest in AI content creation. Morphisec security researchers disclosed the scheme on May 8, highlighting platforms that claim to generate logos, websites, and other content.
The fact that every upload actually infects the user’s device shows how easily the hype surrounding artificial intelligence can be weaponised, even against tech-savvy users.
From Facebook Scams to Noodlophile Stealer
The campaign starts with enticing Facebook ads that promise AI-generated graphics, logos, and videos. These advertisements impersonate popular tools such as Luma Dream Machine and CapCut, but they redirect viewers to fake websites.
Visitors are urged to upload a personal image or video “for processing” by the site’s AI engine. The site then prompts the user to download a ZIP file, VideoDreamAI.zip, which poses as the finished creative asset but actually springs the trap.
VideoDreamAI.zip contains Video Dream MachineAI.mp4.exe, an executable masquerading as a video file. A forged signature makes this 32-bit C++ program appear to be an official CapCut release. After launching the legitimate CapCut.exe program as cover, it runs a .NET loader named CapCutLoader, which in turn fetches a Python-based payload. The malware maintains a convincing disguise throughout the process.
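The double extension at the heart of this step is worth spelling out: because Windows Explorer hides known file extensions by default, Video Dream MachineAI.mp4.exe is displayed as an innocuous .mp4. A minimal Python sketch of a defender-side check for this trick (the extension lists are illustrative, not exhaustive) might look like this:

```python
from pathlib import Path

# Extensions Windows will execute; the lure relies on Explorer hiding
# the final ".exe" so the file appears to be a harmless video.
EXECUTABLE_EXTS = {".exe", ".scr", ".bat", ".cmd", ".com", ".pif"}
DECOY_EXTS = {".mp4", ".avi", ".mov", ".jpg", ".png", ".pdf"}

def is_masquerading(filename: str) -> bool:
    """Flag names such as 'Video Dream MachineAI.mp4.exe' where a
    media-style extension is immediately followed by an executable one."""
    suffixes = [s.lower() for s in Path(filename).suffixes]
    return (
        len(suffixes) >= 2
        and suffixes[-1] in EXECUTABLE_EXTS
        and suffixes[-2] in DECOY_EXTS
    )

print(is_masquerading("Video Dream MachineAI.mp4.exe"))  # True
print(is_masquerading("holiday_clip.mp4"))               # False
```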
Once activated, that Python component installs Noodlophile Stealer, a flexible data-theft tool. The malware frequently installs XWorm as well, a remote access trojan that gives cybercriminals persistent control over the compromised system. All of this happens without alerting the user to the intrusion.
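RATs like XWorm commonly achieve persistence through Windows registry Run keys (a general technique offered here for illustration, not a claim about this specific sample). A short defender-side sketch that enumerates those autorun entries for review:

```python
import winreg  # Windows-only; part of the standard library

# Autorun locations frequently abused by RATs for persistence.
RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def list_autoruns():
    """Yield (name, command) pairs from the Run keys so an analyst can
    spot unfamiliar entries, e.g. loaders in %TEMP% or %APPDATA%."""
    for hive, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:  # no more values under this key
                break
            yield name, value
            index += 1

for name, command in list_autoruns():
    print(f"{name}: {command}")
```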
What sets this operation apart is its use of AI as a social engineering lure. It targets small enterprises and creators eager to explore new automation tools. To evade detection, the malware’s creators use powerful obfuscation techniques, including Base64 encoding, password-protected archives, and in-memory execution. Attackers even sell the tooling as malware-as-a-service on underground forums.
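To see why Base64 encoding frustrates naive keyword scanning, consider this illustrative Python sketch: the encoded token reveals nothing until decoded, which is why triage tooling often decodes Base64-looking strings before running signature checks (the sample string here is hypothetical):

```python
import base64
import re

# A hypothetical obfuscated string: keyword scanners see only noise
# until the token is decoded back to plaintext.
sample = base64.b64encode(b"powershell -enc <payload>").decode()

# Triage step: find Base64-looking tokens and decode them so keyword
# or signature checks can run against the plaintext instead.
B64_TOKEN = re.compile(r"[A-Za-z0-9+/]{16,}={0,2}")

def decode_candidates(text: str):
    for token in B64_TOKEN.findall(text):
        try:
            decoded = base64.b64decode(token, validate=True)
        except Exception:  # not valid Base64 after all
            continue
        yield decoded.decode("utf-8", errors="replace")

for plaintext in decode_candidates(sample):
    print(plaintext)  # -> powershell -enc <payload>
```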
Building Multi‑Layered Defenses Against AI‑Driven Malware
Cybercriminals use the public’s interest in new technologies as a tool to spread malware. In early 2023, Meta removed more than 1,000 malicious URLs tied to roughly ten malware families that used ChatGPT as bait. This campaign highlights how cybercriminals exploit trust in well-known platforms to reach a broader, less sceptical audience.
Organisations and individuals alike need to implement multi-layered protection. Before submitting any sensitive information to an AI website, verify its domain and SSL certificate. Avoid opening .zip or .rar attachments from senders you do not recognise, since these archives often hide multi-stage malware. Install trustworthy endpoint security tools that can identify new threats and stop them before they cause harm to your system.
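As a concrete example of the certificate check, the following Python sketch (the hostname is a placeholder, not a real service) retrieves a site’s TLS certificate so its subject and validity window can be reviewed; a days-old certificate on an unfamiliar “AI platform” is a red flag:

```python
import socket
import ssl

def inspect_certificate(hostname: str, port: int = 443) -> dict:
    """Fetch the server's TLS certificate so its subject, issuer, and
    validity dates can be reviewed before any data is submitted."""
    context = ssl.create_default_context()  # verifies chain and hostname
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

# "example-ai-tool.com" is a placeholder domain for illustration.
cert = inspect_certificate("example-ai-tool.com")
print(cert.get("subject"))
print("Issued:", cert.get("notBefore"))
print("Expires:", cert.get("notAfter"))
```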
Morphisec’s CTO, Michael Gorelik, advises keeping personal and business activities separate. Use different devices and accounts for work-related tasks to limit exposure if one environment becomes compromised.
Regular staff training helps teams detect phishing attempts, scrutinise odd downloads, and report suspicious URLs immediately. By verifying every platform, avoiding untrusted files, and embracing proactive technologies such as Automated Moving Target Defence, organisations can stay one step ahead of stealthy AI-powered malware.
To ensure your security systems are prepared to meet the evolving threats posed by AI misuse and hallucinations, consider Phishing Tackle’s cybersecurity awareness training and real-world simulated phishing tests. Our comprehensive solutions equip you with the tools and strategies needed to identify and remediate vulnerabilities before they can be exploited. Book a demo today to see how it can work for you.