
AI-Powered Phishing Attack Targets 2.5 Billion Gmail Users

AI-powered phishing attacks are on the rise, targeting Gmail users with unprecedented sophistication. Cybercriminals are utilising artificial intelligence (AI) to execute highly convincing account takeover scams.

Google has built advanced security measures to protect Gmail accounts, but hackers are adapting in turn, employing AI-powered attacks. Gmail has over 2.5 billion users globally, making it a prime target for hackers and fraudsters.

People who have encountered the AI-powered scam first-hand say it is tough to spot because of its sophistication. Sam Mitrovic, a Microsoft solutions expert, issued a warning after nearly falling victim to a “super realistic AI scam call” capable of tricking even experienced users. He shared that his experience began with a notification prompting him to approve a Gmail account recovery attempt.

Sam continued:

Recently, I received a notification to approve a Gmail account recovery attempt. The request originated from the United States. I denied the request and, about 40 minutes later, received a missed call. The missed call showed the caller ID as Google Sydney.

AI-Powered Phishing Email: Deceptive and Hard to Detect (Sam Mitrovic)

Sam ignored the missed call at first, but a week later, the pattern repeated. Another Gmail account recovery notification appeared, this time followed by a call.

He answered the call:

It’s an American voice, very polite and professional. The number is Australian. He introduces himself and says that there is suspicious activity on my account. He asks if I’m travelling (sic). When I said no, he asks if I logged in from Germany to which I reply no. He says that someone has had access to my account for a week and that they have downloaded the account data. (I then get a flashback of the recovery notification a week before).

Sam looked up the phone number straight away and found it listed in Google’s official records. Still not convinced, he asked the caller to confirm their identity by sending an email. Since the email appeared to come from a Google domain, it seemed authentic when it first arrived.

Sam spotted a warning sign, though: the address GoogleMail@InternalCaseTracking.com appeared in the “To” field, and that domain is not associated with Google. After further investigation, he realised that the person on the other end was not a human, but an AI.

This is a common phishing technique, often used to confirm account recovery or password resets. It becomes more risky when paired with email spoofing and AI calls. Social engineering played a critical role, exploiting trust and human psychology.
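Mail servers record the outcome of spoofing checks (SPF, DKIM, DMARC) in the Authentication-Results header, which is one place a mismatch like this shows up. The sketch below, using Python’s standard library, illustrates the idea on a hypothetical raw message; the header values and server name are invented for illustration, and real verification should rely on the receiving server’s actual SPF/DKIM/DMARC verdicts rather than the From header alone:

```python
from email import message_from_string

# Hypothetical raw message for illustration only: the display name claims
# Google, but the authentication results show the underlying domain failed SPF.
raw = """\
From: Google Support <support@google.com>
To: GoogleMail@InternalCaseTracking.com
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=internalcasetracking.com; dkim=none

Suspicious activity was detected on your account.
"""

msg = message_from_string(raw)
auth = msg["Authentication-Results"] or ""

# Treat a failed SPF check or a missing DKIM signature as a spoofing signal
spoof_suspected = "spf=fail" in auth or "dkim=none" in auth
print(spoof_suspected)  # True
```

Most webmail clients expose these headers via a “show original” or “view source” option, so a suspicious message can be checked without any code at all.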

Using an AI-generated voice that sounded polite and professional, the fraudsters impersonated genuine Google support staff. This made the conversation appear more credible and urgent, making it harder for Sam to recognise the scam.

The attackers attempted to deceive him into disclosing private information by posing as a trusted source, demonstrating how social engineering and artificial intelligence are being combined to create increasingly complex account takeover attempts.

Google’s New AI-Powered Shield Against Phishing Scams

Google stated on October 9 that it is working with the DNS Research Federation (DNS RF) and the Global Anti-Scam Alliance (GASA) to tackle online scams. The collaboration led to the development of the Global Signal Exchange, an intelligence-sharing network that provides real-time information on scams, fraud, and cybercrime.

Google’s Senior Director of Trust and Safety, Amanda Storey, said:

The partnership leverages the capabilities of DNS RF’s data platform, which handles more than 40 million signals, and GASA’s vast network. This partnership will improve the exchange of information about scams, helping to identify and disrupt fraudulent activities faster across various sectors and platforms.

Google Cloud powers the Global Signal Exchange, enabling participants to access and share signals with one another. Amanda says that by utilising Google Cloud’s AI capabilities, the platform can intelligently recognise patterns and match signals.

Hackers often use phishing schemes to gain access to financial and personal data. These scams are harder to identify than malware or other harmful programs because they don’t involve installing any software; instead, hackers trick users into opening malicious files or clicking on links.

Checking email addresses closely is a significant safety measure. In this case, the scam email used a recipient address unconnected to any Google domain. Furthermore, there were no other active sessions on the victim’s Google account, which contradicted the caller’s claim that someone had been accessing it.
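The core of that check, comparing an address’s domain against the domains a company actually sends from, can be sketched in a few lines of Python. The list of trusted domains below is a simplified assumption for illustration; a real allow-list would be maintained carefully and combined with the authentication checks described above, since the visible address alone can be spoofed:

```python
from email.utils import parseaddr

# Assumed allow-list of legitimate Google sending domains (illustrative only)
TRUSTED_DOMAINS = {"google.com", "accounts.google.com", "googlemail.com"}

def sender_domain(address: str) -> str:
    """Extract the lower-cased domain part of an email address."""
    _, addr = parseaddr(address)
    return addr.rpartition("@")[2].lower()

def looks_legitimate(address: str) -> bool:
    """Return True only if the address's domain is on the allow-list."""
    return sender_domain(address) in TRUSTED_DOMAINS

print(looks_legitimate("support@google.com"))                  # True
print(looks_legitimate("GoogleMail@InternalCaseTracking.com")) # False
```

Note how the scam address in Sam’s case fails immediately: however official “GoogleMail” looks before the @, the domain after it is what matters.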

Enabling two-factor authentication (2FA) provides a significant layer of security, making it far more difficult for scammers to gain access even if they know the victim’s password. To avoid falling victim to AI-driven phishing attacks, Gmail users should stay informed and trust their instincts. Reputable businesses, such as Google, will never ask for passwords or verification codes over the phone.
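The one-time codes behind authenticator-app 2FA come from the standard TOTP algorithm (RFC 6238, built on HOTP from RFC 4226): a shared secret and the current 30-second time window are fed through HMAC-SHA1 to produce a short code. A minimal sketch using only Python’s standard library, verified against the RFC test secret:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over the counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # low nibble picks the offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP keyed by the current 30-second time window."""
    if timestamp is None:
        timestamp = int(time.time())
    return hotp(secret, timestamp // step, digits)

# RFC 6238 test secret at Unix time 59 yields the documented code 287082
print(totp(b"12345678901234567890", timestamp=59))  # 287082
```

Because the code changes every 30 seconds and is derived from a secret the attacker never sees, a stolen password alone is not enough, which is exactly why scammers instead try to talk victims into reading codes out over the phone.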

Phishing Tackle provides training course videos covering the different types of AI threats, designed to counter the ever-changing risks posed by artificial intelligence. We offer a free 14-day trial to help train your users to avoid these types of attacks and to test their knowledge with simulated phishing attacks.
