OpenAI has recently suspended accounts linked to users in China and North Korea who were suspected of misusing its technology for surveillance and political manipulation.
The company found evidence that a Chinese security operation had used ChatGPT to help develop an AI-driven tool for monitoring and analysing real-time social media activity in Western countries.
OpenAI researchers identified a new campaign, which they have dubbed “Peer Review”. The name was chosen after a member of the operation used OpenAI’s technology to debug parts of the tool’s underlying code.
The operators used ChatGPT to modify and debug the code behind a system that collected real-time data from platforms such as X, Facebook, Telegram, Instagram, YouTube, and Reddit. According to reports, this code powers the Qianyue Overseas Public Opinion AI Assistant surveillance software.
The tool analysed posts and comments using one of Meta’s Llama models and shared the results with Chinese officials. Ben Nimmo, a key investigator at OpenAI, said this was the first time such an AI-powered monitoring system had been discovered.
In a statement, OpenAI said:
According to the descriptions, one purpose of this tooling was to identify social media conversations related to Chinese political and social topics – especially any online calls to attend demonstrations about human rights in China – and to feed the resulting insights to Chinese authorities.
ChatGPT’s Unexpected Role in Espionage and Misinformation Campaigns
Chinese threat actors not only used OpenAI’s models as a research tool to gather publicly available information about US think tanks and government officials, but they also used ChatGPT to read, interpret, and analyse screenshots of English-language documents.
The attackers also asked ChatGPT for coding assistance, seeking help with open-source remote administration tools (RATs) and with debugging, researching, and developing open-source security tooling. Such tools and code could be used in Remote Desktop Protocol (RDP) brute-force attacks.
OpenAI threat analysts discovered that North Korean actors had revealed staging URLs for malicious binaries previously unknown to security vendors. The actors exposed these URLs while using the models to research auto-start extensibility point (ASEP) locations and macOS attack techniques.
The staging URLs and the associated executables were submitted to an online scanning service and shared with the wider security community. As a result, several vendors can now reliably detect these binaries, helping to protect users from attack.
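Indicator sharing of this kind is valuable because it lets defenders spot the same binaries with very simple checks. The sketch below is a minimal, hypothetical example of hash-based detection in Python: it computes a file’s SHA-256 digest and compares it against a locally held set of known-bad hashes. The blocklist entry and file path are placeholders invented for this illustration, not real indicators.

```python
# Minimal sketch of hash-based detection: compare a file's SHA-256 digest
# against a set of known-malicious hashes shared by the security community.
# The example hash and file path below are placeholders, not real indicators.

import hashlib
from pathlib import Path

# Placeholder blocklist; in practice this would be populated from a threat
# intelligence feed or an online scanning service.
KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large binaries are not loaded into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_malicious(path: Path) -> bool:
    return sha256_of(path) in KNOWN_BAD_SHA256

if __name__ == "__main__":
    sample = Path("downloaded_installer.bin")  # placeholder path
    if sample.exists():
        verdict = "KNOWN BAD" if is_known_malicious(sample) else "not in blocklist"
        print(f"{sample}: {verdict}")
```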
In a detailed report on the use of AI for malicious and deceptive purposes, OpenAI also uncovered a separate Chinese campaign, known as Sponsored Discontent. This group used OpenAI’s technologies to generate English-language posts criticising Chinese dissidents.
Threat actors used this technology to translate content that criticised American politics and society into Spanish before spreading these posts around Latin America.
Threat actors also used ChatGPT to create descriptions and sales pitches for the surveillance tools described earlier. Although those tools appear to rely on artificial intelligence, OpenAI confirmed that its services do not power them; ChatGPT was used only for debugging and for generating promotional content.
OpenAI’s report also details a North Korea-linked fraudulent employment scheme, in which operatives obtain remote jobs at Western companies under false pretences. The scheme appears to generate revenue for the Pyongyang regime by deceiving those companies into hiring North Koreans.
OpenAI also mentioned:
After appearing to gain employment they used our models to perform job-related tasks like writing code, troubleshooting and messaging with coworkers. They also used our models to devise cover stories to explain unusual behaviors such as avoiding video calls, accessing corporate systems from unauthorized countries or working irregular hours.
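Behaviours like these, such as sign-ins from unauthorised countries or activity at odd hours, are the kind of signals security teams can flag automatically. The sketch below is purely illustrative: it checks a sign-in event against a per-employee baseline of expected countries and working hours. The field names, baseline values, and sample event are assumptions made for this example, not a reference to any particular product.

```python
# Hypothetical sketch: flag sign-ins that fall outside an employee's expected
# countries or working hours. Field names, countries, and hours are invented
# for illustration only.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class SignIn:
    user: str
    country: str          # ISO country code reported by the identity provider
    timestamp: datetime   # local time of the sign-in

# Per-user baseline: allowed countries and expected working hours (24h clock).
BASELINE = {
    "j.smith": {"countries": {"GB"}, "hours": range(7, 20)},
}

def flags_for(event: SignIn) -> list[str]:
    """Return a list of human-readable anomaly flags for one sign-in event."""
    profile = BASELINE.get(event.user)
    if profile is None:
        return ["sign-in by unknown user"]
    flags = []
    if event.country not in profile["countries"]:
        flags.append(f"sign-in from unexpected country: {event.country}")
    if event.timestamp.hour not in profile["hours"]:
        flags.append(f"sign-in at irregular hour: {event.timestamp:%H:%M}")
    return flags

if __name__ == "__main__":
    event = SignIn("j.smith", "FR", datetime(2025, 2, 25, 3, 14))
    for flag in flags_for(event):
        print(flag)
```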
According to the Google Threat Intelligence Group (GTIG), the Gemini AI chatbot has been abused by more than 57 different threat actors. These groups, with links to North Korea, China, Iran, and Russia, used the chatbot throughout the attack cycle, as well as for researching current events, content creation, translation, and localisation.
In an October report, OpenAI said it had disrupted more than twenty campaigns since early 2024, which Iranian and Chinese state-sponsored hackers had used for cyber operations and covert influence efforts.
As IT systems become more difficult to compromise, cybercriminals are increasingly using social engineering tactics to target employees, an organisation’s first line of defence. To mitigate breaches, employees must be trained to recognise fake emails, texts, and media.
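One simple red flag that awareness training often highlights is a link whose visible text shows one domain while the underlying destination points somewhere else. Purely as an illustration, the sketch below scans an email’s HTML for that mismatch; the sample HTML and helper names are invented for this example.

```python
# Illustrative check for one classic phishing red flag: link text that shows
# one domain while the href points to a different one. The sample HTML below
# is invented for this example.

from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    """Collect (href, visible text) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def domain(value: str) -> str:
    """Extract a bare hostname from a URL or domain-like string."""
    parsed = urlparse(value if "//" in value else "//" + value)
    return (parsed.hostname or "").lower().removeprefix("www.")

def looks_like_domain(text: str) -> bool:
    """Very rough check that the visible link text resembles a URL or domain."""
    return ("." in text) and (" " not in text)

def mismatched_links(html: str) -> list[tuple[str, str]]:
    """Return (href, text) pairs where the displayed domain differs from the target."""
    parser = LinkExtractor()
    parser.feed(html)
    return [(href, text) for href, text in parser.links
            if looks_like_domain(text) and domain(text) != domain(href)]

if __name__ == "__main__":
    sample = '<p>Reset your password at <a href="http://example-attacker.test/login">mybank.com</a></p>'
    for href, text in mismatched_links(sample):
        print(f"Suspicious link: text says {text!r} but points to {href}")
```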
To ensure your security defences are prepared to address the challenges posed by AI-generated content, consider Phishing Tackle’s cybersecurity awareness training and simulated phishing resilience testing.
Our comprehensive solutions provide you with all the tools and strategies needed to identify and address vulnerabilities before they can be exploited. Book a demo today to see how it can work for you.