In just a year, deepfakes have gone from a disturbing underground trend to a global concern, with several high-profile examples featuring celebrities and politicians circulating on social media. Even Mark Zuckerberg has fallen victim to deepfaking in the name of art. But perhaps it’s worth learning a little more about the when, why and how of deepfakes before we collectively throw our hands up in dismay.
The name is new, the idea is not
The story goes that in 2017, a Reddit user called ‘Deepfakes’ started a ‘subreddit’ (a smaller online community within Reddit) to share explicit videos they had made using a deep learning algorithm, with the subject’s face swapped for that of a celebrity or public figure. The idea quickly caught on, and within a matter of weeks news broke of a desktop tool that could do the same thing – taking the practice from a troubling handful of examples in the hands of one creator to a deeply concerning platform open for all to use.
That said, we must remember that image editing has been around for decades. There is a perception that Artificial Intelligence has somehow made manipulation and misinformation easier and far more widespread, yet history shows us that well before there were digital graphics tools, photos were already spliced, diced and manipulated – with good and bad intentions. From the removal and addition of political figures in group shots to the famous fake of John Lennon jamming with Che Guevara, doctored images have shaped public opinion for almost as long as film and photography have existed.
It’s not as easy as it might seem
Reports might have us believe that there’s a deepfaker around every corner and dozens of apps that can knock one up in minutes, but a true deepfake is actually quite a complicated affair. For example, a journalist who tried the viral deepfake app ‘Zao’ quickly realised that its algorithm had only been trained on Chinese facial data, meaning it wasn’t effective on Western faces. And serious tools like DeepFaceLab take a real level of commitment – and a powerful computer – to learn and use.
Right now, they’re good. But they’re not that good
They’re pretty realistic. In fact, some can be convincing (and it must be said that they’re getting better), but most of the time a deepfake just doesn’t quite sit as it should. The positioning of the head may not quite match. Or there’s blurring or artificially smooth skin. As the technology behind deepfakes develops, other technologies develop in parallel with the aim of identifying them. A global, multi-organisation initiative called the Deepfake Detection Challenge has launched to “produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer.”
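To make the “artificially smooth skin” tell a little more concrete, here is a toy sketch of one cue automated detectors can draw on: smoothed image regions contain less high-frequency detail, which shows up as a lower variance of the image’s Laplacian. This is a simplified illustration only – real detection systems are far more sophisticated – and all function names and numbers here are our own.

```python
import numpy as np

def laplacian_variance(img):
    """Variance of the discrete Laplacian: a rough 'sharpness' score.
    Blurred or artificially smoothed regions lose fine detail, so the
    score drops."""
    lap = (4 * img[1:-1, 1:-1]
           - img[:-2, 1:-1] - img[2:, 1:-1]
           - img[1:-1, :-2] - img[1:-1, 2:])
    return lap.var()

def box_blur(img):
    """3x3 box blur, standing in for the smoothing a face-swap
    can leave behind."""
    h, w = img.shape[0] - 2, img.shape[1] - 2
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += img[dy:dy + h, dx:dx + w]
    return out / 9.0

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))   # noisy 'natural' texture
smooth = box_blur(sharp)       # artificially smoothed version

print(laplacian_variance(sharp) > laplacian_variance(smooth))  # True
```

A real detector would compare such scores across regions of the same frame – a suspiciously smooth face on an otherwise sharp image is the kind of inconsistency that gives a swap away.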
Positive deepfakes? Surely not
Newton’s third law states that for every action there is an equal and opposite reaction, and the same can be said for deepfakes. While there are very understandable concerns around misinformation and malicious use, there are plenty of organisations working with the core technology behind deepfakes – Generative Adversarial Networks, or GANs, in which two algorithms learn by constantly trying to outsmart each other – and using the concept of deepfakery for positive ends. Others are making great use of the concept in education, campaigning and more:
David Beckham allowed himself to be ‘deepfaked’ in order to address world leaders in nine different languages on behalf of the charity Malaria Must Die, in a film produced using AI.
Samsung AI are working on a project to bring historical figures to life using just photographs. Archive shots of Fyodor Dostoevsky, Salvador Dali, Albert Einstein, Marilyn Monroe and the Mona Lisa have been animated using a technology called ‘Few Shot Adversarial Learning’. And the Dali Museum in Florida gives visitors the opportunity to meet the late, great artist in a concept that has futuristic implications for classrooms, museums and galleries worldwide.
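The adversarial tug-of-war at the heart of a GAN can be sketched numerically. In this deliberately tiny illustration (not a real image-generating GAN – the distributions, learning rates and update rules are all our own simplifications), the ‘discriminator’ is a logistic classifier learning to tell real numbers from generated ones, and the ‘generator’ is a single number that shifts wherever the discriminator currently scores as “real”:

```python
import numpy as np

rng = np.random.default_rng(1)
REAL_MEAN = 4.0  # 'real data' comes from N(4, 0.5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Discriminator: D(x) = sigmoid(w*x + b), trained to output
# 1 for real samples and 0 for generated ones.
w, b = 0.1, 0.0
# Generator: produces mu_g + noise, trained to push D(fake) towards 1.
mu_g = 0.0

lr, batch = 0.05, 64
for step in range(2000):
    real = rng.normal(REAL_MEAN, 0.5, batch)
    fake = mu_g + rng.normal(0.0, 0.5, batch)

    # Discriminator step: gradient ascent on the log-likelihood
    # of labelling real=1, fake=0.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: gradient ascent on log D(fake) with respect
    # to mu_g, i.e. it chases whatever the discriminator calls 'real'.
    mu_g += lr * np.mean((1 - sigmoid(w * fake + b)) * w)

# By now mu_g should have drifted from 0 towards REAL_MEAN.
print(f"generator mean after training: {mu_g:.2f}")
```

The point of the sketch is the dynamic: neither player is told what ‘real’ looks like directly – the generator only ever learns by trying to fool the discriminator, which is exactly why the fakes keep improving.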
Can I help you?
Begone, chatbots! The application of Generative Adversarial Networks in deepfaking (as described above) is ideal for customer service, as continual learning means that the service can actually improve with every new interaction. This means that in the initial stages of a query, you’ll be dealt with quickly and efficiently, but with a level of ‘human-like’ interaction that’s a world away from the automated pop-up box we’re used to.
Humans are unpredictable, and the film industry is always looking for faster, better and more efficient ways to get its movies off the set and into theatres. We’re all well used to special effects, but deepfake technology can bring real benefits to post-production in ways that don’t affect the viewing experience at all. As Quentyn Taylor, Canon EMEA’s Director of Information Security, explains, “they [deepfakes] will revolutionise film and video making and will allow streamlining and speed of production that was never possible before when having to depend on ‘real’ actors, humans, to get their lines and delivery correct.”
A word of caution: ‘With great power comes great responsibility’
Trust and transparency are everything. For organisations using deepfakes, it’s vitally important to ensure that those who come into contact with them are aware of it and happy to engage. This also means that anyone responsible for publishing content should be able to verify their sources – and this isn’t as problematic as it sounds: media outlets can and should request the original, unedited footage where needed, and be prepared to do so as a matter of course. In these respects, deepfakes are no different to any other type of communication.