
Get ready for deepfake phishing attacks over Zoom

Deepfake phishing is an emerging attack vector that should worry CISOs, driven by increasingly sophisticated AI audio and video technology and by the wealth of personal data users share on social media.

According to Matthew Canham, a research professor and cybersecurity consultant at the University of Central Florida, deepfake phishing will advance in complexity and become more challenging to identify.

Canham added that deepfake attacks exploiting biometric data, such as fingerprints or facial recognition, may soon become commonplace, along with phishing attacks that deploy deepfakes on Zoom or other videoconferencing platforms, which he has nicknamed "zishing."

Until now, deepfakes have mostly been used for political and entertainment purposes, both benign and malicious. Experts warn, however, that deepfake technology also poses a number of IT risks for businesses.

Attackers use deepfake content to dupe users into sending money they shouldn’t or giving out private information that cybercriminals may use against them.

Deepfake Scams

According to The Wall Street Journal, in one notable 2019 case, attackers used deepfake phishing to convince the CEO of a U.K.-based energy firm to transfer them €220,000. Using AI-based voice-cloning software, they impersonated the chief executive of the firm's parent company, persuading the U.K. CEO that he was speaking with his boss.

Only after this had happened two or three times did the company become suspicious and ask the "boss" to prove his identity.

Canham refers to these attacks, in which the deception blends genuine and fabricated material, as "synthetic media" attacks. He has developed a framework for categorising them along five factors; a rough sketch follows the list:

  • Familiarity (how well the target "knows" the impostor)
  • Control (whether the fake is operated by a human, a machine, or both)
  • Target (whether a specific person or anyone is targeted)
  • Interactivity (how quickly communications happen)
  • Medium (voice, video, text, or a mix of them)
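
As a rough, non-authoritative illustration, Canham's five factors could be represented as a simple data structure. The sketch below is hypothetical: the field names mirror the list above, while the enumerated values and the example mapping of the 2019 CEO voice fraud are assumptions made for illustration only.

    from dataclasses import dataclass
    from enum import Enum, auto

    # Hypothetical sketch of Canham's five-factor framework as a data structure.
    # The factor names come from the list above; the enumerated values and the
    # example at the bottom are illustrative assumptions.

    class Control(Enum):
        HUMAN = auto()
        MACHINE = auto()
        HYBRID = auto()

    class Medium(Enum):
        VOICE = auto()
        VIDEO = auto()
        TEXT = auto()
        MIXED = auto()

    @dataclass
    class SyntheticMediaAttack:
        familiarity: str      # how well the target "knows" the impostor
        control: Control      # operated by a human, a machine, or both
        targeted: bool        # aimed at a specific person rather than anyone
        interactive: bool     # real-time exchange rather than a one-way message
        medium: Medium        # voice, video, text, or a mix

    # Example: the 2019 voice-cloning CEO fraud described above might map to:
    ceo_fraud = SyntheticMediaAttack(
        familiarity="high (the impostor posed as the target's boss)",
        control=Control.HYBRID,
        targeted=True,
        interactive=True,
        medium=Medium.VOICE,
    )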

Indiana saw a spate of "virtual kidnapping" scams. People answered calls that appeared to come from family members, only to find themselves talking to a fraudster who claimed to have kidnapped their loved one and demanded money.

One man received such a call about his daughter at the same moment his own son received a ransom demand from a fake parent. The only "evidence" in each case was that the call appeared to come from a loved one, and spoofing a phone number is not difficult.

How risky are deepfake phishing scams?

Canham points to the widely seen deepfake video made by comedian and filmmaker Jordan Peele, in which former President Barack Obama appears to comment on the film Black Panther and insult then-President Donald Trump.

Peele supplied the voice himself through vocal mimicry, then used a deepfake tool to alter an existing video of Obama so that the lip movements matched the words.

Canham considers the viral "I'm not a cat" Zoom video from 2021, in which a Texas attorney appeared at a court hearing stuck behind a kitten avatar, more disturbing, even if the reason is not immediately obvious.

The kitten avatar tracked the attorney's real lip and eye movements in real time. With comparable overlays and avatars, videoconferencing users may soon be able to appear convincingly as someone else entirely.

According to Canham:

In a few years, Zoom-based phishing attacks should become commonplace. Consider the lawyer kitten video, except instead of a cat, imagine a different lawyer.

Biometric-based phishing attacks are the next frontier, and they may also involve elaborate physical props à la "Mission: Impossible."

Purely digital techniques could play a role as well. A few years ago, German researchers demonstrated that an iris scanner could be fooled by a high-resolution photograph of Chancellor Angela Merkel's eyes, and that a convincing fake fingerprint could be reconstructed from photographs of another German politician's hand.

Recommendations to stop deepfake phishing

Security teams need to train end users continuously on this and other emerging attack vectors. Some surprisingly low-tech measures can also stop a deepfake attack before it does damage.

Security awareness training carries a real risk of fatigue, but making it enjoyable, rewarding, and competitive helps keep the material fresh in employees' minds.

Large money transfers could be restricted to authorised individuals who must supply pre-shared codes, or could require approval from multiple people.
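
As a hedged illustration, not something the article prescribes, the sketch below shows what such an out-of-band control might look like in code: a large transfer is released only when a pre-shared code matches and at least two distinct authorised approvers have signed off. All names, codes, and thresholds are hypothetical.

    import hmac
    import hashlib
    from dataclasses import dataclass, field

    # Hypothetical out-of-band control for large transfers: the payment is
    # released only if the requester supplies the correct pre-shared code AND
    # at least two distinct authorised people approve it. All names, codes,
    # and thresholds here are made up for illustration.

    PRE_SHARED_CODE_HASH = hashlib.sha256(b"rotate-this-code-offline").hexdigest()
    LARGE_TRANSFER_THRESHOLD = 10_000   # e.g. euros
    REQUIRED_APPROVALS = 2
    AUTHORISED_APPROVERS = {"cfo", "controller", "treasury_lead"}

    @dataclass
    class TransferRequest:
        amount: float
        beneficiary: str
        supplied_code: str
        approvals: set = field(default_factory=set)

    def code_matches(supplied: str) -> bool:
        # Compare the supplied code against the stored hash in constant time.
        supplied_hash = hashlib.sha256(supplied.encode()).hexdigest()
        return hmac.compare_digest(supplied_hash, PRE_SHARED_CODE_HASH)

    def may_release(req: TransferRequest) -> bool:
        # Small transfers pass; large ones need the code plus enough approvers.
        if req.amount < LARGE_TRANSFER_THRESHOLD:
            return True
        valid_approvers = req.approvals & AUTHORISED_APPROVERS
        return code_matches(req.supplied_code) and len(valid_approvers) >= REQUIRED_APPROVALS

    # A deepfaked "boss" on a single call cannot satisfy this check alone:
    request = TransferRequest(amount=220_000, beneficiary="unknown-supplier",
                              supplied_code="wrong-code", approvals={"cfo"})
    print(may_release(request))  # False

The key property is that release depends on information and people an attacker on one spoofed call cannot control, which is exactly the gap exploited in the CEO voice-fraud case above.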

Deepfake phishing awareness training is likely to be entertaining, intriguing, and instructive for employees. Try sharing realistic deepfake videos and asking viewers to spot red flags such as unblinking eyes, odd lighting, and unnatural facial movements.

