
ChatGPT Sparks AI Weaponization Concerns For Microsoft and OpenAI

Microsoft’s Threat Intelligence Center (MSTIC) has revealed that nation-state threat actors backed by the governments of China, Iran, North Korea, and Russia are using large language models (LLMs) such as OpenAI’s ChatGPT. Notably, despite this activity, MSTIC found no evidence of attackers using these LLMs in significant cyberattacks.

Although Microsoft indicates that attackers’ growing use of GenAI does not yet pose an immediate threat to businesses, the company stresses the importance of strengthening security processes.

The conclusions come from a study jointly released by Microsoft and OpenAI. Both companies disclosed that they had thwarted the efforts of five state-affiliated actors who used their AI services for malicious cyber activities, terminating the actors’ accounts and assets.

The research shows that nation-state actors use LLMs to research specific technologies and vulnerabilities on a global scale. They also use these tools to gain insights into regional geopolitics and prominent individuals.

In a report, Microsoft said:

“Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets’ jobs, professional networks, and other relationships.”

The National Cyber Security Centre (NCSC) in the United Kingdom underlined the rising risk earlier this year, warning that artificial intelligence (AI) is expected to increase both the volume and impact of cyberattacks over the next two years.

According to MSTIC, threat actors now closely follow new developments in technology, much as defenders have done for some time. Like defenders, they are exploring how AI can increase their productivity, and they probe platforms such as ChatGPT for possible exploitation.

Threat actors with varying goals and levels of expertise use LLMs for activities such as malware development, coding, reconnaissance, and learning English. English proficiency in particular is becoming an important skill for threat actors, assisting them in social engineering and target manipulation.

Nation-State APTs Using AI to Power Cyber Threats

Microsoft has outlined specific cases in which state-affiliated entities have used AI for malicious cyber activity. Examples include North Korea’s spear-phishing attacks and Iran’s deployment of social engineering techniques, showcasing the growing use of AI in cyber warfare.

These examples illustrate a broad spectrum of offensive tactics, all intended to break into networks and manipulate online exchanges of information.

Microsoft Threat Intelligence tracks five of these threat actors: Forest Blizzard, Emerald Sleet, Charcoal Typhoon, Crimson Sandstorm, and Salmon Typhoon. Forest Blizzard, also known as Fancy Bear or APT28, is an advanced persistent threat (APT) actor operated by Russian military intelligence services.

Microsoft revealed in December that Forest Blizzard, which targets the energy, defence, and government sectors, continued to exploit an unpatched Exchange vulnerability. Even after the March updates, unpatched instances remained open to attack.

Emerald Sleet, a nation-state threat actor from North Korea, has been tracked using LLMs to research think tanks and experts focused on North Korea. The group also used LLMs to generate spear-phishing content and perform basic scripting tasks.

Some vendors found that LLMs did not make phishing emails more effective, while other tests of GenAI-generated phishing content produced unexpected results.

Charcoal Typhoon, a threat actor with links to China, used LLMs for vulnerability analysis and technical research. Microsoft highlighted the group’s use of GenAI tools to refine scripting techniques, potentially simplifying and automating complex cyber activities. The group also used these tools to support complex operational commands.

Microsoft and OpenAI recently spotted Crimson Sandstorm, an Iranian threat organisation associated with the Islamic Revolutionary Guard Corps, using LLMs for social engineering, error debugging, learning about .NET programming, and researching ways to evade detection on compromised machines.

Salmon Typhoon, a Chinese-backed group, tested LLMs in exploratory interactions throughout 2023, evaluating their usefulness for gathering information on sensitive topics, high-profile individuals, regional geopolitics, US influence, and domestic affairs.

Microsoft’s Recommendations for Enhanced AI Security Measures

A noteworthy feature of Microsoft’s reporting is how rarely it mentions ChatGPT or Copilot by name. These are the main generative AI technologies that Microsoft and OpenAI have created, and they are the obvious targets for nation-state attackers to test. Since Copilot is built on the same OpenAI models that power ChatGPT, it is likely that these attackers have been using ChatGPT in their operations.

Microsoft highlights the importance of using AI to improve identity proofing, viewing it as an essential tactic in the continuous fight against social engineering and fraud. The research also cautions businesses to exercise care when offering free trials or special pricing for products or services, since attackers can abuse such offers.

Nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercrime syndicates are all using AI technologies and APIs for harmful purposes. Microsoft is now working on a set of guidelines to address these risks, with the goal of erecting strong barriers and safety measures around its models for increased security.

Phishing Tackle provides training course videos covering the different types of artificial intelligence threats, designed to counter the ever-changing risks posed by AI. We offer a free 14-day trial to help train your users to avoid these types of attacks and to test their knowledge with simulated phishing attacks.
