New ChatGPT phishing campaigns targeting Windows and Android users have recently caught the attention of several security researchers. These campaigns impersonate the ChatGPT brand to trick users into unknowingly downloading malicious software and disclosing their credit card information.
Since its introduction in November 2022, ChatGPT has seen incredible growth, gaining more than 100 million users by January 2023 and making it the fastest-growing consumer application in modern times.
As a result of the tool’s extreme popularity and rapid growth, a $20/month ChatGPT Plus subscription was introduced for users who want to use the chatbot without availability limitations.
The website chat-gpt-pc[dot online] was one of the first instances discovered by security researcher Dominic Alvieri. Under the guise of offering a ChatGPT Windows desktop client, it infected users with the Redline information-stealing malware.
According to cyber security company Cyble, a fake social media page pretending to be ChatGPT creator OpenAI is promoting multiple phishing websites. Some of the malicious ChatGPT replicas have made it as far as the official Google Play Store and other third-party app stores.
These applications reportedly use the same OpenAI and ChatGPT logos, making it easier for attackers to deceive users. The “Chat GPT AI” page in the shared screenshot has a few posts and at least 3.5K followers.
Threat actors have posted various content to make the page appear genuine, including material on other OpenAI projects and videos such as Jukebox. However, all of these posts are intended to trick readers into clicking links that redirect them to phishing websites and persuade them to download harmful software onto their devices.
These links use typosquatted domains to lead victims to believe they are being redirected to the official ChatGPT website to download an application. In reality, they lead to a phishing page that imitates the official OpenAI website and includes a “Download for Windows” button.
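To illustrate why typosquatted domains are effective, a minimal sketch of a lookalike check is shown below: it flags domains whose edit distance to the official ChatGPT domain is small but non-zero. The flagged domains here are hypothetical illustrations, not domains from the campaigns described above, and the threshold is an arbitrary choice for the example.

```python
# Minimal sketch: flag domains that closely resemble the official
# ChatGPT domain using Levenshtein (edit) distance.
# The candidate domains below are illustrative only.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,                  # deletion
                curr[j - 1] + 1,              # insertion
                prev[j - 1] + (ca != cb),     # substitution (0 if equal)
            ))
        prev = curr
    return prev[-1]

OFFICIAL = "chat.openai.com"

def looks_typosquatted(domain: str, threshold: int = 3) -> bool:
    """True if the domain is close to, but not identical to, the official one."""
    d = edit_distance(domain.lower(), OFFICIAL)
    return 0 < d <= threshold

for candidate in ["chat.openai.com", "chatt.openai.com", "chat.opena1.com"]:
    print(candidate, looks_typosquatted(candidate))
```

Real-world detection tooling uses far richer signals (homoglyphs, keyword stuffing, certificate transparency logs), but even this simple distance check shows how one extra character or a swapped letter produces a domain that is trivially confusable with the original.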
Moreover, Cyble found a credit card phishing page at “pay.chatgptftw[dot com],” which presents users with what appears to be a payment page for ChatGPT Plus. Cyble also claims to have found over 50 fraudulent apps using ChatGPT’s logo and a similar name, all of which seek to harm users’ devices.
According to Cyble:
By posing as ChatGPT, these threat actors seek to deceive users into thinking that they are interacting with a legitimate and trustworthy source when in reality, they are being exposed to harmful and malicious content. Users who fall victim to these malicious campaigns could suffer financial losses or even compromise their personal information, causing significant harm.
Researchers from Cyble describe one case in which a fake ChatGPT app for Android subscribes users to premium SMS services without their knowledge.
There are currently no official ChatGPT desktop or mobile apps for any operating system; it is an entirely web-based tool that can only be accessed through “chat.openai.com”.
Security experts have already warned that cybercriminals may exploit the AI technology to produce convincing phishing attacks at scale, or even use it as bait to lure victims.
Phishing Tackle offers a free 14-day trial to help train your users to avoid these types of attacks and test their knowledge with simulated attacks using various attack vectors. By focusing on training your users to spot these types of attacks, rather than relying solely on technology (none of which can spot 100% of phishing emails), you can ensure that your organisation is better prepared to defend against cyber threats and minimise the impact of any successful attacks.