media update’s Jenna Cook uncovers the capabilities of deepfakes.

So far, deepfakes have been limited to amateur hobbyists and Hollywood producers making politicians say funny things and putting Nicolas Cage’s face in places where it just shouldn’t be. But manipulated media is quickly becoming the new frontier in the age of #FakeNews.

We live in a world where you need ‘pics or it didn’t happen’, but what happens when someone can swap out faces, insert images and adjust audio – and still have it look real? Enter deepfakes.

What exactly are deepfakes?
Deepfakes – put simply – are fake video or audio recordings that are made to look and sound just like the real thing.

They combine recorded footage with machine learning to create false ‘evidence’. And while the manipulation of digital files is nothing new, these fakes are becoming increasingly believable.

Artificial intelligence systems train algorithms on real video and audio recordings, learning the patterns of a person’s face, voice and mannerisms – and then use those patterns to generate convincing imitations.
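To make that concrete, here is a minimal sketch – in Python, using the PyTorch library – of the shared-encoder, twin-decoder set-up behind classic face-swapping deepfakes. Everything in it (the network sizes, the `swap_face` helper, the 64x64 face crops) is an illustrative assumption, not the code of any particular tool:

```python
# A minimal sketch of the shared-encoder / twin-decoder idea behind
# classic face-swap deepfakes. All names and sizes are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent vector; one per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder learns face structure common to both people;
# each decoder learns to render one specific person.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

def training_step(faces_a, faces_b):
    # Each decoder is trained only to reconstruct its own person.
    loss_fn = nn.L1Loss()
    loss_a = loss_fn(decoder_a(encoder(faces_a)), faces_a)
    loss_b = loss_fn(decoder_b(encoder(faces_b)), faces_b)
    return loss_a + loss_b

def swap_face(face_a):
    # The trick: encode person A's expression, then decode it with
    # person B's decoder, producing B's face wearing A's expression.
    with torch.no_grad():
        return decoder_b(encoder(face_a))
```

The shared encoder is the whole trick: because both people pass through the same encoder, the latent vector ends up capturing expression and pose rather than identity, so decoding it with the other person’s decoder re-renders the same expression on a different face.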

This technology can be downloaded freely, which means it can be used by anyone with an Internet connection. And in the hands of the masses, there’s no telling what Internet trolls – or even terrorist organisations – will do with it.

In April, film director Jordan Peele created a deepfake in an attempt to show just how easy it has become to fake a video.

A minute-long clip was released depicting former President of the United States Barack Obama speaking directly to the viewer. He is shown discussing everything from his successor to things he would never say in a White House address.

The video then cuts to reveal that Peele has, in fact, been impersonating Obama the entire time. A clip like this clearly illustrates how simple it has become to fabricate video and audio of someone saying or doing things they never actually said or did.

How are deepfakes the same as #FakeNews?
Deepfakes and #FakeNews have one major thing in common – they both rely on passing disinformation off as fact.

The truth is, more often than not, we humans believe what we see. This is what makes deepfakes so dangerous – it is becoming almost impossible to tell what’s real from what’s not.

Deepfakes become even more believable when the AI algorithms are fed more content. In the Obama deepfake example, Peele used over 56 hours’ worth of sample recordings to create the video. The video went viral when it was first released and has since been viewed over 4.8 million times.

Deepfakes and #FakeNews both intend to mislead the viewer with deliberately biased or controversial content that is easily created and that spreads rapidly.

The difference is that deepfakes take the form of video and audio content, while #FakeNews can be anything from clickbait articles to dedicated propaganda websites.

Why do we need to combat deepfakes?
It’s becoming harder and harder to trust our ears and eyes – so what happens when we can’t rely on everything we see or hear?

We are going to have to become sceptical – not only of everything we read, but also of everything we see with our own eyes and hear with our own ears. And that means it is simply not enough to rely on a single source of information.

Where can the technology behind deepfakes go?
The technology used to create deepfakes is not what’s dangerous – it’s how it’s used.

Instead of instigating political unrest, these algorithms could clone the voices of people who have lost their own through illness or injury. And instead of spreading false information, they could be used to recreate moments in history that were never caught on camera.

An information silo occurs when data is not shared between parties. Find out how to combat these silos in our article, Three ways to disrupt information silos in journalism.
*Image courtesy of Vecteezy