
Nation-State Threat Actors Exploit Gemini AI For Cyberattacks

Nation-state threat actors are experimenting with Google’s AI-powered Gemini assistant to boost their productivity, research potential attack infrastructure, and conduct reconnaissance on targets.

According to Google’s Threat Intelligence Group (GTIG), government-affiliated advanced persistent threat (APT) groups have been using Gemini mostly to improve their operational efficiency, rather than to develop novel AI-driven attacks capable of bypassing traditional defences.

The use of tools like Gemini by nation-state and information operations actors highlights the fundamental dual-use risk of sophisticated generative AI: the same technology that offers tremendous gains in efficiency and innovation can just as readily be put to malicious use.

In an updated report, GTIG said:

Threat actors are experimenting with Gemini to enable their operations, finding productivity gains but not yet developing novel capabilities. At present, they primarily use AI for research, troubleshooting code, and creating and localizing content.

Unmasking Nation-State Cyberattacks: How Threat Actors Exploit Gemini AI

Threat actors are increasingly using AI tools to expedite their attack operations. Google uncovered Gemini-related activity tied to APT groups in more than 20 countries, most notably Iran and China.

These groups frequently employ AI for coding and developing malicious tools and scripts. They also use it to research publicly disclosed vulnerabilities and to gain a deeper understanding of emerging technologies through explanations and translations.

Google also notes that APT groups from China, North Korea, Russia, and Iran have experimented with Gemini, exploring ways to use it for discovering security vulnerabilities, evading detection, and planning post-compromise activity.

  • Iranian groups have a history of breaking into networks and cloud environments using advanced social engineering tactics. In May 2024, Mandiant disclosed that these threat actors targeted NGOs, media organisations, academia, legal services, and activists in both Western and Middle Eastern regions by posing as journalists and event organisers.
  • Chinese nation-state actors, on the other hand, typically use the platform for tasks such as scripting lateral movement inside networks and conducting in-depth reconnaissance of U.S. military entities.
  • North Korean groups are using Gemini across multiple attack stages, including researching free hosting services, developing malware, and creating fake job applications to infiltrate Western companies.
  • Russian threat actors have used Gemini far less often, mostly for scripting assistance, translation, and reworking malicious code, which may indicate a preference for home-grown AI tools or an avoidance of Western platforms over operational security concerns.

Google also observed threat actors attempting to bypass Gemini’s safety controls by rewording prompts and reusing publicly available jailbreaks, but these attempts were unsuccessful. OpenAI, creator of the popular AI chatbot ChatGPT, made a similar disclosure in February 2024.

The tech giant also uncovered underground forum posts advertising unrestricted versions of large language models (LLMs) that generate responses without safety or ethical limits.

Tools such as WormGPT, WolfGPT, EscapeGPT, FraudGPT, and GhostGPT are marketed for building fake websites, generating templates for business email compromise (BEC) attacks, and crafting convincing phishing emails.

A growing number of AI models with weak safeguards against misuse are reaching the market, while mainstream AI products continue to suffer from security vulnerabilities and jailbreaks.

Some of these tools are gaining popularity despite safety restrictions that are easy to bypass. DeepSeek, a Chinese AI model, has also seen a recent surge in popularity, highlighting how quickly AI technologies are multiplying and evolving in today’s market.

AI-generated scams are becoming increasingly realistic. Treat unusual emails, texts, or phone calls with caution, even when they appear to come from a trusted source, and always contact the organisation directly to verify any request for your personal information.

Limit the personal information you share online, as hackers use AI for reconnaissance. Regularly review your social media privacy settings and avoid revealing too much.

Cybercriminal tactics evolve constantly. Stay ahead of emerging risks by following cybersecurity news, setting up alerts, and learning about the latest AI-related threats.

To ensure your security defences are equipped to handle the challenges posed by AI-enabled attacks, consider Phishing Tackle’s cybersecurity awareness training and real-world simulated phishing resilience testing. Our comprehensive solutions provide you with the tools and strategies needed to identify and address vulnerabilities before they can be exploited. Book a demo today to see how it can work for you.
