ChatGPT Tool Could Be Abused by Scammers and Hackers

ChatGPT and AI Technologies’ Role in Emerging Scams

ChatGPT has attracted significant interest online since the introduction of OpenAI’s latest product. The feature that lets users easily create their own AI assistants could also be used to build tools for cybercrime.

OpenAI now allows users to customise ChatGPT for a wide range of purposes by building their own generative pre-trained transformers (GPTs). BBC News used this feature to create one and found that, beyond its intended use cases, the resulting GPT excelled at generating realistic emails, text messages, and social media posts.
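For readers unfamiliar with how such assistants are set up, the sketch below illustrates the underlying idea: a base model steered by a short, plain-language instruction prompt. It uses OpenAI’s Assistants API as a programmatic analogue of the no-code GPT Builder described in this article; the assistant name and instructions are hypothetical, not those BBC News used.

```python
# A minimal sketch of configuring a custom assistant: a base model plus a
# natural-language instruction prompt. The GPT Builder is a no-code web
# interface; this uses OpenAI's Assistants API only as an analogous
# illustration. Name and instructions below are hypothetical.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

assistant = client.beta.assistants.create(
    name="Newsroom Research Helper",  # hypothetical example
    model="gpt-4-1106-preview",
    instructions=(
        "You help journalists summarise cybersecurity reporting. "
        "Refuse any request to draft deceptive or fraudulent content."
    ),
)
print(f"Created assistant {assistant.id}")
```

The notable point is how little is required: the entire “customisation” is a short block of natural-language instructions, which is exactly what makes the feature so easy to repurpose.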

Using the premium version of ChatGPT, BBC News built a custom AI bot called Crafty Emails. The bot was instructed to write persuasive text using techniques designed to get recipients to click on links or download attachments.

The bot absorbed these social-engineering instructions within seconds and even generated a logo for the GPT, all without a line of code being written. In moments, it produced convincing text in several languages for widely used phishing and scam strategies.

Such convincing phishing emails are highly dangerous because scammers use them to trick people into sending money or disclosing personal information. The bot can also create realistic chat scripts, giving criminals the means to pose as customer support agents and coax victims into revealing sensitive details.

The public version of ChatGPT refused most of the same requests, citing legal or policy restrictions. Crafty Emails, by contrast, completed a large number of the tasks given to it, sometimes adding disclaimers noting that scam techniques are unethical.

ChatGPT Enables Malware Download via Phishing Email

In response to the findings, OpenAI highlighted its commitment to strengthening safeguards that prevent its products from being misused for malicious purposes. A spokesperson pointed to ongoing work on making the system’s protections harder to circumvent.

At its November developer conference, OpenAI announced plans for a GPT App Store that will let users share and sell their creations, underscoring its commitment to user creativity.

When it released its GPT Builder tool, the company said it would review GPTs carefully to prevent fraudulent activity. Experts warn, however, that OpenAI is not moderating these custom GPTs with the same rigour as the public versions of ChatGPT, which may unintentionally hand attackers access to state-of-the-art AI tools.

Exploring ChatGPT’s Potential for Creating Scam and Hacking Techniques

The specially created bot was then tested to see whether it could be used for common hacking and scamming techniques.

One Crafty Emails composition reproduced the infamous Nigerian Prince scam, which has circulated for decades and continues to evolve. Using emotional language that it said ‘appeals to human goodwill and mutual principles,’ the bot composed the message; the public version of ChatGPT, by contrast, refused the request outright.

BBC News then asked Crafty Emails to create a fraudulent text message luring victims to click a link and enter personal information on a fake website. This type of attack is known as SMS phishing, or ‘smishing’.

Crafty Emails produced a fraudulent message advertising free iPhones, applying social-engineering tactics such as the ‘need-and-greed principle’. The public version of ChatGPT, by contrast, declined to create the content, citing ethical concerns.
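Because smishing lures lean on a handful of pressure tactics, even crude pattern checks catch many of them. The Python sketch below is a hypothetical heuristic, not a production filter: it flags messages that pair urgency or reward language with a link shortener or raw-IP URL.

```python
import re

# Hypothetical smishing heuristic: flags SMS text that pairs urgency or
# reward language with a link shortener or raw-IP URL. Illustration only,
# not a substitute for a real filtering product.
URGENCY = re.compile(r"\b(urgent|now|today only|act fast|free|winner|prize)\b", re.I)
RISKY_LINK = re.compile(r"https?://(bit\.ly|tinyurl\.com|t\.co|\d{1,3}(\.\d{1,3}){3})\S*", re.I)

def looks_like_smishing(sms: str) -> bool:
    """Return True when a message combines pressure language with a risky link."""
    return bool(URGENCY.search(sms)) and bool(RISKY_LINK.search(sms))

print(looks_like_smishing("You've won a free iPhone! Claim now: http://bit.ly/x7f"))  # True
print(looks_like_smishing("Your parcel arrives tomorrow between 9 and 11."))          # False
```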

Spear-phishing is a common cyber threat in which false emails are sent to specific individuals to lure them into downloading malicious files or visiting dangerous websites. On request, Crafty Emails produced a sample spear-phishing email.

In this fictional scenario, a corporate executive is warned of a potential data breach and encouraged to download a seemingly safe file that is, in fact, booby-trapped.
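However polished the AI-written prose, the payload still has to arrive somehow, and an executable disguised with a document-style double extension is a classic giveaway. The sketch below is a hypothetical filename check for that pattern; real mail gateways inspect file content, not just names.

```python
from pathlib import Path

# Hypothetical attachment check: flags executable types hiding behind a
# document-style double extension (e.g. "breach-report.pdf.exe").
# Illustration only.
EXECUTABLE_SUFFIXES = {".exe", ".scr", ".js", ".vbs", ".bat", ".cmd", ".msi"}
DOCUMENT_SUFFIXES = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".txt"}

def suspicious_attachment(filename: str) -> bool:
    """Return True for bare executables and executables masquerading as documents."""
    suffixes = [s.lower() for s in Path(filename).suffixes]
    if not suffixes or suffixes[-1] not in EXECUTABLE_SUFFIXES:
        return False
    # An executable attachment is suspicious on its own; doubly so when it
    # hides behind a document suffix.
    return len(suffixes) == 1 or suffixes[-2] in DOCUMENT_SUFFIXES

print(suspicious_attachment("breach-report.pdf.exe"))  # True
print(suspicious_attachment("breach-report.pdf"))      # False
```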

The bot quickly translated the content into Spanish and German, claiming to apply persuasion techniques such as collective pressure and social compliance to prompt immediate action. The public version of ChatGPT also fulfilled the request, but with less detailed explanations of the techniques used.

The rise of AI misuse has raised concern worldwide, prompting cyber authorities to issue advisories in recent months. Scammers around the world are using large language models (LLMs) to break down language barriers and make their deceptions more sophisticated.

The use of illicit LLMs such as WolfGPT, FraudBard, and WormGPT is rising. Experts warn, however, that OpenAI’s GPT platform may unexpectedly hand criminals access to the most advanced bots available.

Artificial intelligence has become an effective tool in the cybercriminal’s toolkit, enabling convincing, realistic scams that are difficult to detect. Individuals and organisations alike must stay vigilant to defend themselves against these risks.

Attacks may involve phishing emails and texts, fake photos and videos, and other AI-powered schemes. Awareness of these dangers is essential for maintaining effective cybersecurity measures.

Employees should be trained to recognise the warning signs of fraud, such as phishing emails and other suspicious messages, and given practical guidance on how to handle and secure sensitive data. One concrete warning sign worth teaching is shown in the sketch below.
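A link whose visible text names a different domain from its real destination is one of the most teachable phishing tells. The sketch below, using only Python’s standard library, is a hypothetical illustration of how that mismatch can be spotted in an HTML email body.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical illustration of a classic phishing tell: anchor text that
# shows one domain while the href points somewhere else entirely.
class LinkMismatchFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = "".join(self._text).strip()
            real = urlparse(self._href).hostname or ""
            # Flag links whose visible text names a domain the href lacks.
            if "." in shown and shown.lower() not in real.lower():
                self.mismatches.append((shown, self._href))
            self._href = None

finder = LinkMismatchFinder()
finder.feed('<a href="http://evil.example.net/login">mybank.com</a>')
print(finder.mismatches)  # [('mybank.com', 'http://evil.example.net/login')]
```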

Furthermore, educate employees on the growing risks posed by artificial intelligence (AI) and the use of machine learning (ML) techniques in scams.

Phishing Tackle offers a free 14-day trial to help train your users to avoid these types of attacks and test their knowledge with simulated attacks using various attack vectors. By focusing on training your users to spot these types of attacks, rather than relying solely on technology, you can ensure that your organisation is better prepared to defend against cyber threats and minimise the impact of any successful attacks.
