Slovakia became the target of a particularly serious case of AI-driven political interference. Shortly before the election, the internet was flooded with deepfake recordings of compromising conversations between influential politicians. Michal Šimečka, leader of the Progressive Slovakia party, was among those targeted.
In the deepfake recordings, Michal Šimečka and journalist Monika Tódová appeared to be discussing how to rig the election, including the controversial tactic of buying votes from the country’s marginalised Roma minority.
The videos were shared on social media sites such as Meta Platforms’ Facebook and Instagram, as well as messaging services like Telegram. In an update, Reset, a research group that evaluates the impact of technology on democratic processes, found that the clips contained audio mimicking political opponents.
Šimečka and Denník N immediately denounced the recording as fake. According to the AFP news agency’s fact-checking team, the audio showed signs of AI manipulation. However, the recording was released during a 48-hour pre-election silence period, in which media organisations and politicians are required to refrain from making public statements.
Slovakia’s election rules therefore made it difficult to debunk the post widely. And because the content was audio, it also exploited a loophole in Meta’s manipulated-media policy, which only covers faked videos in which people are made to say things they never said.
Slovakia’s latest election was a close contest between two major rivals, each with its own vision for the country. Just two weeks ago, SMER defeated the pro-NATO Progressive Slovakia party. The main point of tension between the two parties was SMER’s pledge to withdraw military support from neighbouring Ukraine.
In Slovakia, fact-checkers face a daunting challenge as they work to counter disinformation on social media. Their on-the-ground experience highlights the alarming truth that AI technology has already advanced to the point where it can disrupt elections.
These committed fact-checkers are further hampered by a lack of effective tools for tackling this growing threat.
Hincová Frankovská’s team at the fact-checking organisation Demagog worked hard throughout the election, fact-checking statements made during TV debates and monitoring social media. Demagog partners with Meta to counter misinformation, but AI poses new challenges.
Three days before the election, Meta alerted Demagog to a viral audio clip in which Šimečka purportedly promised to raise beer prices if elected. Šimečka denied making the statement, underscoring that fact-checkers must verify claims independently rather than rely only on what politicians say.
Verifying the audio manipulation was difficult. Hincová Frankovská’s team, new to fact-checking AI-generated content, traced the clip to an anonymous Instagram account.
They then turned to Eleven Labs, a US voice-AI company whose AI speech classifier assesses whether audio is machine-generated, for an expert opinion on the recording’s authenticity. Within hours, they found evidence that the recording had been altered. Their Facebook label, written in Slovak, states that independent fact-checkers found the content had been edited in a way that could mislead. Users can then choose whether or not to view the clip.
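For teams facing a similar situation, the core workflow is to submit the suspect recording to a speech-classification service and act on its score. The sketch below illustrates that pattern in Python; the endpoint URL, API-key header, response field, and review threshold are all hypothetical placeholders rather than Eleven Labs’ actual API.

```python
# Minimal sketch: upload a suspect recording to an AI speech classifier
# and flag it for human review if the service says it is likely synthetic.
# The URL, header, and "probability_ai" field are hypothetical placeholders.
import requests

CLASSIFIER_URL = "https://example.com/v1/speech-classifier"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential


def classify_audio(path: str) -> float:
    """Upload an audio file and return the estimated probability it is AI-generated."""
    with open(path, "rb") as audio_file:
        response = requests.post(
            CLASSIFIER_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": audio_file},
            timeout=60,
        )
    response.raise_for_status()
    # Assumed response shape: {"probability_ai": <float between 0 and 1>}
    return response.json()["probability_ai"]


if __name__ == "__main__":
    score = classify_audio("suspect_recording.mp3")
    print(f"Estimated probability the audio is AI-generated: {score:.0%}")
    if score > 0.8:  # arbitrary example threshold
        print("Flag for human review and potential platform labelling.")
```

In practice, a classifier score is only one signal: as in the Slovak case, teams still combine it with source tracing and human review before a label is applied.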
The recent incident in Slovakia is far from unique. With elections approaching in the US, Poland, the UK, India, the EU, and other nations, concern is growing about the combined effects of artificial intelligence and misinformation.
Poland is already grappling with rumours of AI manipulation and faces similar challenges. Unfortunately, fact-checking organisations like the Pravda Association face an uphill struggle in the absence of reliable methods for detecting and countering deepfakes.
Jakub Śliż, the head of a Polish fact-checking organisation, says it is usually straightforward to fact-check content by comparing data from several sources. However, he worries that if AI-generated audio recordings, like those heard in Slovakia, begin to circulate in Poland only hours before the polls, the situation could become far more difficult.
“As a fact-checking organisation, we’re not sure how to handle this situation,” he said. If something similar were to happen, responding would be a difficult and unpredictable task.
The UK’s Electoral Commission made a worrying disclosure, saying it had discovered a breach that may have exposed the personal information of as many as 40 million voters. In the US, Special Counsel Robert Mueller’s investigation into interference in the 2016 election found that Russian hackers had deliberately targeted voter data in several states during that election year.
Mr. Fredheim, who formerly tracked misinformation for NATO’s StratCom, has drawn attention to a worrying trend. He notes that because a powerful computer is no longer necessary, creating deepfake material has become remarkably easy for the average person. User-friendly tools such as HeyGen, which can turn text into convincing deepfake videos using only brief speech samples of the target person, are one example of this development.
Martin Spano, a computer scientist from Slovakia, shared an AI-generated video that appeared to show a number of well-known politicians speaking to the camera. He did so as a warning, to highlight the importance of carefully scrutinising online information in the age of AI, especially in the run-up to upcoming elections.
Help your colleagues keep a security-first mindset and boost your human firewall by starting your Phishing Tackle security awareness training today with our two-week free trial. We offer training videos to help employees understand what deepfakes are and how to detect them.