Here’s something unsettling.

https://media.giphy.com/media/l4pTnYGTJLCjB6Mz6/giphy.gif
Now, it’s not entirely convincing, but it’s pretty good, right?
Photoshop and CGI have been around for a long time. From creating science-fiction or fantasy backdrops to inserting Tom Hanks into a meeting with John F. Kennedy, people have been using special effects and editing techniques to fabricate scenes, people, and images since film was first created.

Even Charlie Chaplin used optical illusions to set a scene and tell a story in some of his films.
Credit: https://gfycat.com/obviouseuphorichadrosaurus-chaplin-movies
For the longest time, CGI and special effects required time, money, and skilled people, so they remained safely within the realm of mass media and Hollywood.
However, here's the thing about humans and technology: we're getting really good at making things smaller and more accessible. We don't seem to have flying cars and teleporters just yet, but humanity continues to advance technology and expand the realm of what we thought possible.
Rooms full of the computing power that was used to send people to the moon can now be outdone by a graphing calculator or even the most basic smartphone. And what was once available only to the elite, big-budget studios of the film industry can now be reproduced on a single desktop computer.
All of this brings us to the idea of Deep Fakes: the process of using machine learning and neural networks to create false videos and imagery.
So what is a Deep Fake?
What essentially happens is that a neural network, an advanced computer system, collects and aggregates hundreds of images and video frames of a person or object and, using that data, creates a new image that approximates the source material without copying it.
A computer with hundreds of photos of a famous politician, for example, can study all of that politician's expressions and features and produce an "average" image from that database. The more photos and video it has to work with, the more realistic the result can appear.
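To make that "average image" idea a little more concrete, here's a toy Python sketch. It simply computes the pixel-wise average of a folder of face photos; real Deep Fakes rely on trained autoencoders or GANs rather than a plain average, and the folder name and image size below are just placeholders.

```python
# Toy sketch of the "average image" idea: a plain pixel-wise mean of a
# folder of face photos. Real Deep Fakes use trained neural networks;
# the folder name and image size here are placeholders.
import glob

import numpy as np
from PIL import Image


def average_face(folder="politician_photos", size=(256, 256)):
    """Return the pixel-wise average of every .jpg face photo in `folder`."""
    paths = glob.glob(f"{folder}/*.jpg")
    if not paths:
        raise ValueError(f"No .jpg images found in {folder}")

    # Accumulate as floats so hundreds of images don't overflow 8-bit pixels.
    total = np.zeros((size[1], size[0], 3), dtype=np.float64)
    for path in paths:
        face = Image.open(path).convert("RGB").resize(size)
        total += np.asarray(face, dtype=np.float64)

    mean = (total / len(paths)).astype(np.uint8)
    return Image.fromarray(mean)


if __name__ == "__main__":
    average_face().save("average_face.jpg")
```

In an actual Deep Fake pipeline, that averaging step is replaced by a neural network: an encoder learns a compact representation of each face, and a decoder trained on the target person reconstructs that person's features with the source's expressions.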
Here's an example of how Jordan Peele used a neural network to give former President Obama some words of his own.
A Deep Fake can also be used to create fake audio, as illustrated by the "A Stable Genius" YouTube channel, which uses an algorithm to create humorous "explanations" from U.S. President Donald Trump.
So most of the examples I've shown you are either humorous or entertaining, and for the most part, that's what people have been using the technology for. If nothing else, the generation of new memes counts as a kind of social progress.
The Ugly Side of Fake
It's easy to imagine the problems that fake audio and video could pose for Public Relations and the truth-telling community as a whole. And while many people associate Deep Fakes with memes and funny videos, Deep Fakes actually got their name from a Reddit user called "deepfakes", who began using a neural network to superimpose celebrities' faces onto hardcore pornography.
Revenge porn, copyright infringement, defamation, evidence tampering, political propaganda, misinformation in general: the list of things Deep Fakes could be used for is limited only by the imagination, and it is now open to anyone with a powerful enough computer.
This means it will only become harder for Public Relations professionals and journalists to sort out what is real and what is not. PR professionals now have to worry not only about what is coming out of their own organization, but also about what other individuals or organizations could put out about them, made to look as though it came from the organization itself.
While some of these examples are more obvious, how hard will it be to pick out a digitally altered face in a crowd, or a fabricated video of a foreign politician's speech? What if someone uses Deep Fakes to create damaging video or audio of an organization they dislike and leaks it as if it were real? And how often has simply denying something worked out well for PR agencies?
For the longest time, the common phrase "pics or it didn't happen" has been a guiding mentality of modern culture: back up what you're saying with photos, video, and other media content, and people are more likely to listen to and believe you.
But PR is now going to require a new approach, and in my opinion a more aggressive one, if professionals are to protect the name and credibility of whatever organization they represent. It will no longer be enough to sit on the sidelines and deflect when things happen. PR professionals need to be less reactive and more proactive to ensure that Deep Fakes and similar technology don't work against them.
Deep Fakes are still a work in progress and aren’t as easy as just taking a photo and adding a filter, but they’re getting better and better. It’s important for PR professionals to be able to look at these types of developing trends and anticipate and guard against them.
But for now, here's one more Deep Fake: Steve Buscemi's face on Jennifer Lawrence.
Want to know more about Deep Fakes?
Check out this “Quick-Take” from Bloomberg below.
Or check out the article below from the Guardian.
You thought fake news was bad? Deep fakes are where truth goes to die
https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth
