Cyberview #7: Rise of the deepfakes

Éanna Motherway

March 21, 2024

Deepfakes are back in the news. With widespread social and geopolitical instability and a pivotal US election looming, these AI-powered hoax videos are injecting extra chaos into the mix. Blurring the lines between fact, fiction, politics, technology, and showbiz, deepfakes are an unprecedented wildcard to keep an eye on this year. Cyberview dives in.

What are deepfakes?

Definitions first: Deepfakes are highly realistic synthetic video or audio created with AI models. These deep learning (hence “deepfake”) models are trained on huge quantities of data to mimic a person's facial expressions, lip movements, and vocal patterns.

Deepfakes are typically created with Generative Adversarial Networks (GANs), in which two models work together (or more accurately, against each other) for optimum results. One model, the generator, creates the fake content, while its partner model, the discriminator, acts as a judge. Low-quality fakes are rejected, convincing material is accepted, and the discriminator constantly pushes the generator to improve across iterations. The result? Convincing videos of people saying or doing things they never did.
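
The back-and-forth between the two models is easiest to see in code. Here's a minimal sketch of a GAN training loop in PyTorch; the toy layer sizes, the random noise standing in for real training data, and the hyperparameters are all illustrative assumptions, not a real deepfake pipeline.

```python
# Minimal GAN training loop (illustrative sketch, not a deepfake pipeline).
# "Real" data is random noise here purely to keep the example self-contained.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed toy dimensions

# Generator: turns random noise into fake samples.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores samples as real (1) or fake (0).
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)        # stand-in for real training data
    fake = G(torch.randn(32, latent_dim))   # the generator's attempt

    # 1) Train the discriminator: accept real samples, reject fakes.
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator: fool the discriminator into scoring fakes as real.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

In a real deepfake system the generator is a far larger network trained on footage or recordings of the target person, but the adversarial loop is the same: the discriminator's rejections are precisely what teach the generator to produce convincing fakes.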


The cybersecurity challenge

Deepfakes, unsurprisingly, pose significant cybersecurity risks. Identity theft, fraud, and authentication exploits are all made easier with this technology. A Hong Kong finance worker was fooled into transferring $25 million to fraudsters after joining a video call in which the company's CFO and other colleagues were deepfake impersonations.

In another high-tech heist, a company director's voice was cloned, and the cyber conmen got away with $35 million. Vishing (voice phishing) and other social engineering techniques have just received a supercharged power-up in the form of deepfakes.

In the political sphere, deepfakes are being used to influence public opinion. Thousands of New Hampshire residents received robocalls that appeared to use AI to impersonate President Joe Biden's voice, urging them to skip voting in the January Democratic primary election. The calls were traced back to a Texas company with suspicious motives and funding.

Soon after this event, fake videos of megastar Taylor Swift announcing her support for Donald Trump circulated online. In an election year balanced on a knife edge, further devious use of deepfakes could do a lot of damage.


Battling against the fakes

Efforts are underway to combat the deepfake threat. Social media and content platforms like TikTok, YouTube, Meta, and Twitter are implementing policies and features to detect, label, or remove misleading AI-generated content.

Images generated by OpenAI's DALL-E now include digital watermarks in their metadata. Google has gone a step further with SynthID, which embeds a watermark directly into the pixels of the image. But none of these methods are infallible. A recent study by researchers at the University of Maryland found that “our attacks are able to break every existing watermark that we have encountered.”
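
To see why metadata watermarks are the weaker of the two approaches, here's a hedged sketch using the Pillow imaging library. The file names are placeholders, and Pillow doesn't parse C2PA's binary provenance boxes; it only surfaces ordinary EXIF and text metadata, which is enough to illustrate the failure mode.

```python
# Sketch: metadata-based provenance rides alongside the pixels, so a plain
# re-encode (or a screenshot) silently discards it. File names below are
# placeholders, not real generated images.
from PIL import Image, ExifTags

def list_metadata(path: str) -> dict:
    """Collect the textual metadata a file carries (EXIF tags, text chunks)."""
    img = Image.open(path)
    meta = dict(img.info)  # PNG text chunks, JPEG comments, etc.
    meta.update({ExifTags.TAGS.get(tag, tag): value
                 for tag, value in img.getexif().items()})
    return meta

print("Before re-encode:", list_metadata("generated.png"))

# Copying the pixels into a fresh file keeps the image but drops the metadata.
Image.open("generated.png").save("laundered.png")
print("After re-encode: ", list_metadata("laundered.png"))
```

A pixel-embedded watermark like SynthID survives that round trip, which is presumably why Google took that route; the Maryland result suggests even pixel-level watermarks can be attacked, just with more effort.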

On the legislative and regulatory side, the FCC promptly ruled AI-generated voices in robocalls illegal following the election interference calls. Currently only about ten states target deepfake content, and these have generally prioritized non-consensual pornographic material. There’s no overarching federal legislation yet, but the No AI FRAUD Act, if enacted, would “provide individual property rights in likeness and voice.” The EU’s AI Act, which will demand transparency from creators of synthetic content, is currently being finalized.

How you can detect deepfakes

Here are a few tips to spot deepfakes:

  • Unnatural movements, poor lip syncing

  • Shadows in the wrong places

  • Vocal inconsistencies, unusual tone/inflection

A good rule of thumb is to verify information against multiple sources before believing it (or sharing it further). And some timeless advice: be skeptical of anything that seems too good (or bad) to be true, especially online.

Check out the new Cyberview episode on your favorite platform: