Threat actor TA547 has adopted a novel approach in its latest campaign. In March, the group launched an email campaign delivering a PowerShell script that appears to have been written with the help of AI systems such as Microsoft’s Copilot or OpenAI’s ChatGPT.
TA547’s goal is to spread the Rhadamanthys information stealer among German organisations. Proofpoint researchers, who discovered the malicious activity, noted its broad reach across German companies in many sectors.
TA547 has been a highly active, financially motivated threat actor since November 2017. The group uses phishing emails to spread a variety of Windows and Android malware, including Gootkit, ZLoader, Ursnif, and even Adhubllka ransomware.
TA547’s Evolution: From Conventional Malware to AI-Enhanced Strategies
Over the last several years, the group has evolved into an initial access broker (IAB) for ransomware attacks. It has also used geofencing techniques to restrict payloads to specific regions.
The threat actor has started using the Rhadamanthys stealer, whose developers continuously add features to collect more data from sources such as the clipboard, browser, and cookies. According to Proofpoint, which has tracked TA547 since 2017, this operation marked the group’s first use of Rhadamanthys.
Rhadamanthys has been sold to cybercriminals under a Malware-as-a-Service (MaaS) model since September 2022, meaning access to the malware is shared among multiple criminal groups.
A recent TA547 campaign used emails posing as invoices from the well-known German cash-and-carry company Metro. These fake invoices targeted hundreds of German businesses across many industries, luring recipients into downloading the malware.
The email contains a password-protected ZIP archive holding an LNK file. When the victim opens the LNK file, it triggers PowerShell, a legitimate Windows utility, to download and run a remote script.
This script decodes a Base64-encoded Rhadamanthys executable stored in a variable, loads it into memory as a .NET assembly, and executes it, without writing the payload to disk.
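The decode-in-memory pattern described above can be sketched harmlessly in Python. This is an illustration of the general technique, not the actual TA547 script; the variable and function names are invented, and the "payload" is a placeholder string rather than a real executable.

```python
import base64

# The payload is embedded as a Base64 string in a variable, just as the
# article describes. Here it is a harmless placeholder; a genuine Windows
# executable would begin with the "MZ" magic bytes.
encoded_payload = base64.b64encode(b"MZ...placeholder executable bytes...").decode()

def decode_payload(b64_text: str) -> bytes:
    """Decode the Base64 blob back into raw bytes, entirely in memory."""
    return base64.b64decode(b64_text)

raw = decode_payload(encoded_payload)

# In the real PowerShell script, the equivalent of `raw` is loaded as a
# .NET assembly (e.g. via reflection) and executed without ever touching
# disk, which is why file-based antivirus scans can miss it.
print(raw[:2])
```

Because the decoded bytes never reach the file system, detection typically depends on script inspection or in-memory scanning rather than on traditional file signatures.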
Researchers analysing the PowerShell script that loads Rhadamanthys noticed that nearly every component was preceded by a grammatically polished, descriptive comment (introduced with the hash sign, #), a style rarely seen in human-written malware code.
According to the researchers, these features match code produced by AI systems such as Copilot, Gemini, and ChatGPT. Alternatively, TA547 could have copied the script from another source that used generative AI.
Although the researchers cannot conclusively prove that the PowerShell code was generated by a large language model (LLM), the script’s content suggests that TA547 may employ generative AI for scripting or editing.
According to Proofpoint’s senior manager of threat research, Daniel Blackford:
While developers are great at writing code, their comments are usually cryptic, or at least unclear, and full of grammatical errors. The PowerShell script suspected of being LLM-generated is meticulously commented with impeccable grammar. Nearly every line of code has some associated comment.
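The contrast Blackford describes can be illustrated with a harmless snippet. Both functions below are invented for illustration (they simply check whether a path exists); only the commenting style differs.

```python
from pathlib import Path

# Typical human-written style: terse name, sparse and cryptic comments.
def chk(p):
    return p.exists()  # exists?

# LLM-assisted style, as observed in the TA547 script: nearly every line
# carries a grammatically complete, descriptive comment.
def check_target_path(path_text: str) -> bool:
    # Convert the incoming text into a Path object for safe handling.
    target = Path(path_text)
    # Return True when the path exists on the local file system.
    return target.exists()
```

It is this uniform density of polished comments, rather than anything about the code logic itself, that analysts flagged as a likely sign of generative-AI involvement.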
Phishing attacks using AI-generated text for social engineering have already been observed in the wild, and there are broader indications that threat actors use LLM tools for a variety of goals. In February, Microsoft researchers disclosed that five nation-state threat actors, from Russia, North Korea, Iran, and China, had employed ChatGPT for scripting, target reconnaissance, translation, and other activities.
OpenAI took action against state-sponsored hacking groups, namely Charcoal Typhoon and Salmon Typhoon (China), Crimson Sandstorm (Iran), Emerald Sleet (North Korea), and Forest Blizzard (Russia), for misusing ChatGPT. This action aligns with multiple language-model providers placing restrictions on outputs that could aid malware development. As a result, some threat actors have turned to their own AI chat platforms to commit cybercrime.
Artificial intelligence lets cybercriminals craft convincing, realistic scams that are challenging to detect. Employees should receive training to identify potential fraud indicators, such as phishing emails and other suspicious messages, and should be educated about the growing risks posed by AI and machine learning (ML) techniques in scams.
Phishing Tackle offers a free 14-day trial to help train your users to avoid these types of attacks and test their knowledge with simulated attacks using various attack vectors. By focusing on training your users to spot these types of attacks, rather than relying solely on technology, you can ensure that your organisation is better prepared to defend against cyber threats and minimise the impact of any successful attacks.