
Around the end of 2017, a term began bouncing around the internet, raising eyebrows and red flags wherever it happened to land: deepfake.

Since then it has bounced into enough reputable news headlines to spark policy discussion, political worry, and even commercial interest. But what exactly is a deepfake, and should we believe the hysteria surrounding them?


The term itself, a portmanteau of the computer science concept ‘deep learning’ and ‘fake’, broadly describes video or audio that has been doctored by artificial intelligence to produce a convincing fake. The major distinction between deepfakes and more traditional forms of video editing or CGI is that deepfakes are produced solely by computer algorithms rather than by human hand.

Above: A deepfake image of a person who, in reality, does not exist.


Here’s a simplification of how it works: you feed the deepfake algorithm training data as input, for example a collection of videos of a politician. The algorithm ‘learns’ the training data by recognising significant features shared across it, for instance the politician’s face shape, mouth movements, and expressions. It then attempts to reproduce those features by creating an output, in this case another video of the politician.


That output is tested by a second algorithm that tries to tell the real from the fake. The process is repeated until the accuracy of the guessing algorithm falls to a 50% success rate, no better than random guessing, meaning the features of the generated video (the output) are indistinguishable to the computer from those of the real videos (the input). In other words, with deepfake technology you can automatically alter a video to make somebody say or do something that never actually happened, and do it to a convincing enough degree that not even another computer could tell the difference.
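The adversarial loop described above, known in the literature as a generative adversarial network (GAN), can be sketched in miniature. This is a toy illustration, not production deepfake code: it swaps videos for a simple one-dimensional distribution, uses a linear generator and a logistic-regression discriminator, and all the names and hyperparameters below are illustrative choices. The two-step rhythm is the point: the discriminator learns to tell real from fake, then the generator learns to fool it, repeated until the fakes drift toward the real data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: a 1-D stand-in for the training videos in the text.
REAL_MEAN, REAL_STD = 4.0, 0.5

def sample_real(n):
    return rng.normal(REAL_MEAN, REAL_STD, n)

# Generator: maps random noise z to a sample, x = w*z + b.
w, b = 1.0, 0.0
# Discriminator (the "guessing algorithm"): D(x) = sigmoid(a*x + c),
# its output is the estimated probability that x came from the real data.
a, c = 0.0, 0.0

d_lr, g_lr, batch = 0.2, 0.02, 64
initial_mean = b  # generator output mean before any training

for step in range(2000):
    # Discriminator phase: push D(real) toward 1 and D(fake) toward 0.
    for _ in range(5):
        real = sample_real(batch)
        fake = w * rng.normal(0, 1, batch) + b
        d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
        a -= d_lr * (np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake))
        c -= d_lr * (np.mean(-(1 - d_real)) + np.mean(d_fake))

    # Generator phase: push D(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(0, 1, batch)
    fake = w * z + b
    d_fake = sigmoid(a * fake + c)
    grad_x = -(1 - d_fake) * a       # gradient of -log D(x) w.r.t. each sample
    w -= g_lr * np.mean(grad_x * z)  # chain rule: dx/dw = z
    b -= g_lr * np.mean(grad_x)      # chain rule: dx/db = 1

trained_mean = float(np.mean(w * rng.normal(0, 1, 10_000) + b))
print(f"real mean: {REAL_MEAN}, generator mean after training: {trained_mean:.2f}")
```

After training, the generator’s samples should sit far closer to the real distribution than they did at the start; scale the same contest up to millions of parameters and pixels instead of one number, and you have the mechanism behind a deepfake.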

The technology has been quickly and quietly developing for some time now, but it’s finally getting to the stage where anything created through it is almost indistinguishable from reality, as you can see for yourself in the image above.

The ethical issues that follow are as obvious as they are disturbing: if it’s possible to forge realistic video, then the whole notion of truth as we know it goes out the window. There’s a rapidly growing mountain of journalism speculating about the very real issues that deepfakes will cause for world politics in the not-too-distant future, especially given that it’s an election year in the US. It’s easy to see why: given that we’re living in what some call a post-truth era, where fake news is just as influential, if not more so, in shaping public opinion than fact-verified reporting, what sort of havoc will lifelike yet fraudulent video footage wreak on society?


It’s a great question and, like all great questions, many have already had a fairly good stab at answering it. Instead we’ll ask another question: what interest do deepfakes hold for the corporate and advertising spheres? Back in April, ESPN and State Farm released a game-changing commercial [below] during the screening of NBA documentary The Last Dance. It featured 1998 footage of sports anchor Kenny Mayne reporting on the result of the finals that year, before unexpectedly using the word “lit” and adding that the clip would be used in future to promote the documentary in a State Farm ad.

It was so seamlessly blended with the documentary that viewers weren’t even sure they’d seen an ad. They had, of course, and a smart one too. Mr. Mayne’s apparent clairvoyance was actually the result of careful video editing, which layered new footage of his mouth over the original 1998 segment.

Above: ESPN and State Farm's deepfake commercial that aired during an episode of The Last Dance.

But besides being genuinely creative advertising, the ad showcased the viability of deepfakes for commercial use. And given that it came at a time when agencies were scratching their heads over how to make new ads with lockdown restrictions in place, the hype was well-deserved. The answer, as it turned out, had been right under their noses.


In the wake of the ad’s success, “executives at several major advertising agencies said they had discussed making similar commercials with their clients in recent weeks,” according to one New York Times article. If that proves to be true, we can expect a lot more deepfake ads before the year is through. Some will be mind-blowing, others just shameless knock-offs, but all will follow the current implicit law of using deepfakes in advertising: always let the audience know that what they’re seeing is not real.

This is largely because the technology is still new and unnerving. The current extent of computer wizardry isn’t common knowledge to the general public, and not being able to distinguish fact from falsehood often leads to distrust. As tech ethicist David Polgar explained in an interview with The Drum, “[it] doesn’t mean people don’t want synthetic media, it just means that blurring the line between real and synthetic without transparency is disrespectful. The advertising industry can and should take a stance and clearly distinguish between the two.” It was precisely for this reason that the ESPN and State Farm commercial went down so well: the creatives behind it chose a funny yet clear way of telling the audience that what they were seeing wasn’t real, while still letting them connect the dots for themselves.

Above: A video using deepfake technology replaced Jack Nicholson with Jim Carrey in The Shining.


From making David Beckham speak nine languages [below] to recasting Jim Carrey in The Shining [above], there are plenty of creative ways to use deepfake technology, and this trend will only accelerate as the technology grows out of infancy. But perhaps one of the most interesting avenues for deepfake advertising development is to simply give the technology to content creators and see what they make of it. One prediction is that after the first wave of TVCs and other ‘traditional’ ad forms using deepfakes, we might begin to see more ‘template’ advertising, in which brands construct a digital experience that users are invited to participate in with their own likenesses.


Imagine: rather than creating deepfake ads for an audience, the audience becomes part of the ad through the software. How might that look? An example might be a trailer for an upcoming miniseries in which the audience plays the protagonist by having their face captured with their smartphone camera and deepfaked into the footage. Combine that with a choice-based story like Black Mirror’s Bandersnatch and the level of immersion could be astounding. A Chinese-developed app called Zao is already dipping its toes into a similar premise, allowing users to deepfake themselves into selected scenes from their favourite movies.

Another possibility is that deepfake technology is put to use in the weird world of Instagram’s virtual influencers, adding a literal extra dimension to the meticulously constructed and brand-sponsored drama that plays itself out in the newsfeeds of their followers. It’s not far-fetched to think that virtual influencers will take another step towards the lifelike with the ability that deepfake tech offers for vocal and facial synthesis.

Above: David Beckham looks to speak nine different languages in an anti-malaria spot. 

What we’re witnessing with deepfakes now is the same process that happens every time an unprecedented piece of technology appears. It’s a phenomenon that’s almost always accompanied by a wave of hysteria about how the technology will undermine the world as we know it, but eventually the technology becomes mainstream and democratised. In the process everybody learns to live with it, even to enjoy it. Before Photoshop existed there was a similar moral panic; now people just use it to edit together horse-bird chimaeras [below].


Jokes aside, deepfakes really are a tool just like any other. Tools don’t decide how they will be used, nor whether that use will be positive or negative. Just as one can use a spade either to dig a flower bed or to club somebody to death, so there will be both benign and malign uses of deepfakes. Commercial deepfakes in particular will more often than not fall on the benign side, if only because their purpose is to sell a product rather than disrupt a political system.

Deepfakes are here to stay, and it’s the responsibility of the advertising industry to use them in a constructive and transparent way. That responsibility is, however, a worthy trade-off for the new and exciting creative possibilities that deepfake technology affords. Advertising is, after all, an early adopter and core driver of new technologies, so let’s collectively try to stay on the right trajectory with this one. It’s more important than usual.

The Moon Unit is a creative services company with a globally networked, handpicked crew of specialist writers, visual researchers/designers, storyboard artists and moodfilm editors in nine timezones around the world.
