The Rise Of AI-Powered Fake News In Geopolitical Conflicts

AI-powered technologies are transforming businesses globally, but they also pose significant risks to individuals. An alarming example is the rise of AI-driven fake news campaigns, which are influencing major geopolitical events such as the crisis in Ukraine and democratic elections worldwide.

The Social Design Agency (SDA) is behind a covert influence operation, dubbed Operation Undercut by Recorded Future’s Insikt Group, targeting audiences in Ukraine, Europe, and the United States. The campaign uses advanced algorithms to manipulate public opinion, deploying AI-enhanced videos and fake websites that spoof trustworthy news sources.
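One common spoofing technique is registering lookalike domains that differ from a trusted outlet’s domain by only a character or two. As an illustration only (not a description of any specific campaign’s tooling), the sketch below flags domains that sit within a small edit distance of a hypothetical watchlist of trusted news domains; real detection pipelines also handle homoglyphs, subdomain tricks, and certificate data.

```python
from typing import Optional

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical watchlist of trusted outlets (illustrative, not exhaustive).
TRUSTED = ["washingtonpost.com", "spiegel.de", "lemonde.fr"]

def flag_spoof(domain: str, max_distance: int = 2) -> Optional[str]:
    """Return the trusted domain this one appears to imitate, if any.

    A distance of 0 means an exact match with a trusted domain, which
    is not a spoof, so only small non-zero distances are flagged.
    """
    for trusted in TRUSTED:
        d = edit_distance(domain, trusted)
        if 0 < d <= max_distance:
            return trusted
    return None
```

For example, `flag_spoof("washingtonpost.pm")` would flag the domain as imitating `washingtonpost.com`, while an unrelated domain returns `None`.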

AI-generated fake news is a powerful tool for societal disruption and election interference, as it leverages biases, emotions, and fears to spread faster than genuine reporting.

These AI-driven misinformation campaigns have focused heavily on the ongoing crisis in Ukraine. Both state-sponsored organisations and cybercriminal networks use misinformation to influence public opinion and reshape conflict narratives. In Ukraine, these campaigns go beyond being a mere digital nuisance; they undermine peace efforts, destabilise international relations, and threaten global stability.

The Growing Risk of AI-Powered Election Manipulation

Several countries are closely monitoring their 2024 elections for signs of AI-driven manipulation. Analysts warn that cyber attackers are refining their tactics, making misinformation operations harder to detect. Because these operations leave little traceable digital evidence, they could significantly shape voter perceptions and perhaps even change election results.

These campaigns spread deceptive social media posts, deepfake videos, and false news reports that push incorrect information about peace negotiations, humanitarian crises, and military operations.

AI-generated deepfake videos depicting political figures making controversial statements are another striking example. Widely shared on social media, these videos polarise communities, sow uncertainty, and undermine trust in authority.

The Social Design Agency has been linked to Doppelganger, an influence operation that manipulates public opinion through a network of fake news websites and social media profiles. The agency, its owners, and another Russian organisation named Structura were sanctioned by the United States in March of this year.

Additionally, Operation Overload (also known as Matryoshka and Storm-1679) and Doppelganger share infrastructure with Operation Undercut. These Russian-affiliated campaigns have deployed fake news websites, counterfeit verification tools, and AI-generated audio content to target the 2024 French elections, the Paris Olympics, and the U.S. presidential election.

Framework of suspected influence operations

Similarly, AI-generated deepfakes have been used to create deceptive robocalls and altered visuals in U.S. elections, complicating efforts to distinguish fact from fiction.

The most recent campaign highlights a troubling trend: exploiting public trust in reputable media outlets. To appear more credible, it mimics these sources using AI-generated images and videos.

The operation employs popular hashtags in specific languages and regions to disseminate content related to CopyCop (also known as Storm-1516) and expand its reach.

Interestingly, in February 2022, a US-based company was breached by the Russia-linked threat actor APT28, also known as GruesomeLarch. The attack used a novel “nearest neighbour” approach, infiltrating the target by first compromising a neighbouring organisation within Wi-Fi range.

In Slovakia’s 2023 national election, an AI-generated audio recording falsely depicted a candidate endorsing election manipulation. The recording went viral right before the vote and may have contributed to the candidate’s defeat.

Mitigations and Recommendations

A comprehensive, cooperative strategy connecting governments, technology companies, and civil society is needed to tackle AI-generated fake news. Advances in AI-based detection systems will enable fake content to be identified and taken down quickly, before it spreads.

Social media platforms are updating their policies to combat misinformation. Measures such as stricter content moderation, greater transparency in algorithmic decision-making, and warning labels on questionable content are being introduced to curb the spread of fake news. However, sustained collective efforts and investments in education and technology are essential for long-term success.

Simultaneously, public awareness campaigns focus on digital literacy, teaching people how to critically analyse news sources, verify claims, and use fact-checking tools. Staying informed about the latest tactics used by threat actors also helps people identify and counter disinformation effectively.

Phishing Tackle provides training course videos covering the different types of artificial intelligence threats, designed to counter the ever-changing risks posed by AI. We offer a free 14-day trial to help train your users to avoid these types of attacks and to test their knowledge with simulated phishing attacks.