
WormGPT – A Powerful AI Tool For Compromising Business Emails Through Phishing

WormGPT is part of a growing trend of hackers using generative AI in their phishing attacks. The newly built tool is being heavily advertised on hacker forums, allowing its users to craft convincing phishing emails efficiently and potentially break into enterprise mail systems.

SlashNext, a cybersecurity company, reports that WormGPT is based on EleutherAI’s 2021 GPT-J large language model. It offers features such as unlimited character support, chat memory retention, and code formatting, but it has been specifically designed for criminal activities.

The tool is pitched as a black-hat alternative to the well-known GPT models, built specifically for malicious activities. Cybercriminals can use it to automatically create highly convincing fake emails personalised to the recipient, increasing the chances of a successful attack.

WormGPT advertisement on a hacker forum (SlashNext)

WormGPT is described by its developers as “ChatGPT’s ultimate opponent”, one that lets its users engage in “a variety of unlawful activities”. The developers also claim the tool was trained on a range of datasets, with a particular focus on malware-related data, although the exact datasets used for training have not been disclosed.

How WormGPT Is Used in Phishing Attacks

Phishing is one of the oldest and most common forms of cyber threat. Attacks typically rely on fraudulent emails, text messages, or social media posts that impersonate someone the victim trusts. One example is business email compromise (BEC), in which the attacker poses as a company executive or employee to trick the victim into handing over confidential information or sending money.

Thanks to rapid progress in generative AI, WormGPT can now produce convincingly human-like emails, which makes the fraudulent content far more difficult to identify.

ChatGPT, the real-time chatbot created by OpenAI, includes a range of safety features intended to stop it from promoting or enabling risky or illegal activities, which makes it less useful to cybercriminals than WormGPT. Some of those safety measures can still be bypassed, however, with careful prompt design.

After gaining access to WormGPT, the SlashNext researchers conducted their own experiments. In one test, they used WormGPT to generate a deceptive email intended to pressure an unsuspecting account manager into paying a fraudulent invoice.

The email WormGPT produced was not only highly persuasive but also strategically cunning, demonstrating how the tool could be applied to sophisticated phishing and BEC attacks.

Phishing email generated by WormGPT (SlashNext)

Generative AI lowers the barrier to carrying out advanced BEC attacks, putting the technique within reach of attackers with limited skills and widening the range of cybercriminals who can make use of it.

Emails produced by generative AI can use flawless language, giving them an authentic look and reducing the risk of being flagged as suspicious. In a forum advertisement uncovered by SlashNext, attackers even recommended a workflow for composing such emails: write the message in one’s native language, translate it into English, and then use an interface like ChatGPT to add polish and a formal tone.

Businesses should adopt stronger email verification to protect themselves from business email compromise (BEC). This includes setting up automatic alerts for external emails that impersonate internal senders, and flagging messages containing keywords such as “urgent” or “wire transfer”, which are frequently associated with BEC schemes. Taking these precautionary measures can greatly improve email security and reduce exposure to such attacks.
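As an illustration only (this is not SlashNext’s or Phishing Tackle’s tooling), the following Python sketch shows how such alerting rules might look: it flags inbound mail whose display name matches an internal executive but whose sending domain is external, and mail containing common BEC keywords. The domain, executive names, and keyword list are hypothetical placeholders that each organisation would tune to its own environment.

```python
# Minimal sketch of the alerting idea described above: flag inbound mail whose
# display name impersonates an internal sender, or whose text contains common
# BEC keywords. INTERNAL_DOMAIN, EXECUTIVES and BEC_KEYWORDS are hypothetical.

from email import message_from_string
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.com"                      # hypothetical company domain
EXECUTIVES = {"jane doe", "john smith"}              # display names worth protecting
BEC_KEYWORDS = ("urgent", "wire transfer", "invoice", "payment update")


def flag_email(raw_message: str) -> list[str]:
    """Return a list of reasons this message should trigger an alert."""
    msg = message_from_string(raw_message)
    display_name, address = parseaddr(msg.get("From", ""))
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""

    text = f'{msg.get("Subject", "")} {msg.get_payload()}'.lower()
    reasons = []

    # 1. External sender using the display name of an internal executive.
    if display_name.lower() in EXECUTIVES and domain != INTERNAL_DOMAIN:
        reasons.append(f"display-name impersonation from external domain {domain!r}")

    # 2. Keywords frequently associated with BEC schemes.
    for keyword in BEC_KEYWORDS:
        if keyword in text:
            reasons.append(f"BEC keyword present: {keyword!r}")

    return reasons


if __name__ == "__main__":
    sample = (
        "From: Jane Doe <jane.doe@lookalike-domain.net>\n"
        "Subject: Urgent wire transfer needed today\n"
        "\n"
        "Please process the attached invoice before end of day.\n"
    )
    for reason in flag_email(sample):
        print("ALERT:", reason)
```

A rule set like this would normally sit behind a secure email gateway or mail-flow rule rather than a standalone script, but the logic is the same: correlate display names with sending domains and surface high-risk keywords for human review.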

Microsoft, a significant investor in OpenAI (the company that developed ChatGPT), introduced Security Copilot, a security-focused generative AI platform, in March. Tools like these use artificial intelligence to strengthen cybersecurity processes and improve threat detection capabilities.

Phishing Tackle offers a free 14-day trial to help train your users to avoid these types of attacks and test their knowledge with simulated attacks using various attack vectors. By focusing on training your users to spot phishing attempts, rather than relying solely on technology (none of which can spot 100% of phishing emails), you can ensure that your organisation is better prepared to defend against cyber threats and minimise the impact of any successful attacks.
