Could DeepFake abuse be the most imminent AI technology threat? There are plenty of very smart people, such as the late Stephen Hawking and the (still alive) Elon Musk, who are very worried about the dangers of AI. They warn against a future where superhuman machine intelligence will wipe out humanity as if we're a colony of ants and it's a big old boot.
While that might come true in the far future (or tomorrow, no one knows), there are AI-based threats we should be worrying about today already. For example, AI-based facial recognition is already causing a ruckus in civil society as its potential for abuse becomes clear.
Another is the advent of DeepFakes. Although they aren’t getting as much coverage as they should, DeepFake videos may soon be a very serious problem, and we should all be worried about them. You may have heard the term before, but might not know exactly what it refers to. Rest assured, by the end of this article you'll be as worried as we are about how very wrong this technology can go.
In case you clicked on this article and have never heard the word "DeepFake" before, here's a basic explanation. Using the power of Deep Learning neural net simulation, it's possible to take the face of one person and swap it out for the face of another. Once the neural net has learned what it needs to about the source and target faces, it can create an almost-flawless synthetic video. So now you can, for example, watch a video where every person's face has been replaced with that of Nicolas Cage.
Here's an excellent example created by BuzzFeed that shows exactly what we are talking about.
Scary yet impressive, right? Let's try to make sense of what you just witnessed.
The scary part is that you don't need a supercomputer to create a DeepFake. Anyone can download the software, follow a tutorial, and make a DeepFake using nothing more than an average computer with a reasonably beefy GPU. The results are not 100% convincing, and tech-savvy people usually have no issue picking up the tell-tale signs that a video is a DeepFake.
The problem is that the technology is advancing at a rapid pace. As the technique improves, proving that a video is a DeepFake will become harder and harder. We have to assume that at some point DeepFakes will become so accurate that even experts will be at a loss.
I'm not going to go into deep technical detail on the DeepFake process here. There are plenty of tutorials out on the web where you can learn how to make them yourself, step by step. Instead, let's just go over how it works in broad strokes.
The DeepFake method uses a piece of software built on "deep learning" technology. In other words, it uses a simulated neural network that learns to solve a particular problem from examples.
In this case, the problem is how to take one person's face and then map it onto another. So the neural network has to learn the dynamics of each face and then perform the swap.
To do this, the deep learning neural net has to ingest hundreds of pictures of both the source and target faces. This is one of the reasons celebrities and public figures are the usual victims of DeepFakes: there's plenty of material out there to train the system on.
After training on both faces, the system can take each frame of the target video, detect the target face, and then re-draw it using what it learned about the features of the source face. The final face you see is a completely computer-generated synthetic face that combines the way the source face looks with detailed knowledge of how the target face moves. The end result is shockingly convincing.
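If you're curious what those broad strokes look like in practice, here's a minimal sketch of the shared-encoder, two-decoder autoencoder design that early DeepFake tools popularized. It's a toy written in PyTorch purely for illustration; the layer sizes, the tiny training loop, and the random tensors standing in for real face crops are all placeholder assumptions, not a working tool.

```python
# Toy sketch of the classic DeepFake architecture: ONE encoder shared by
# both people, plus a SEPARATE decoder per person. The shared encoder is
# forced to learn pose/expression features common to both faces, while
# each decoder learns to paint one specific identity.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_src = Decoder()  # learns to draw the source person (the face pasted in)
decoder_tgt = Decoder()  # learns to draw the target person (from the video)

params = (list(encoder.parameters()) + list(decoder_src.parameters())
          + list(decoder_tgt.parameters()))
opt = torch.optim.Adam(params, lr=5e-5)
loss_fn = nn.L1Loss()

# Stand-ins for the "hundreds of pictures": batches of 64x64 face crops.
faces_src = torch.rand(8, 3, 64, 64)
faces_tgt = torch.rand(8, 3, 64, 64)

for step in range(100):
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = (loss_fn(decoder_src(encoder(faces_src)), faces_src)
            + loss_fn(decoder_tgt(encoder(faces_tgt)), faces_tgt))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The actual swap: encode the target's face, but decode it with the SOURCE
# decoder. After real training, this yields the source person's looks
# animated with the target's pose and expression, frame by frame.
with torch.no_grad():
    swapped = decoder_src(encoder(faces_tgt))
```

The key trick is that both faces pass through the same encoder, so "who the person is" ends up living in the decoders while "what the face is doing" lives in the shared code, and that's what makes the swap possible.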
That's at the high technical end. There are now even services that let you do simple DeepFakes that replace a person's mouth and make it appear as if they are saying something they never did.
DeepFakes don't have to be 100% convincing in order to be hurtful. After all, one of the first things people used the technology for was the creation of pornography. The DeepFake creator would take an existing pornographic film and then replace the face of the actress with that of a famous person. So, although the person targeted has never been in a pornographic film, there are now realistic depictions of them appearing in one.
The application to crimes such as revenge porn is an obvious concern here as well. It's not really possible to do this with an average person yet, because of how much training data is needed. Then again, thanks to social media, regular non-famous people are putting plenty of material out there. Governments are already using social media photos to train facial recognition AIs, so it's conceivable the same material could be used here too.
Since current DeepFakes are still fairly obvious, no one actually believes these videos are real. However, it's easy to see why they would be upsetting. There are several main ways in which they can be abused and pose a danger. Let's unpack each one a little more.
The real issues begin when DeepFakes become impossible to detect. There will probably be a phase where humans can no longer spot them, but other machine learning algorithms can. Then DeepFakes will become indistinguishable from real raw video footage. No one knows how long that will take, but more computing power and more refined Deep Learning software is always just around the corner. So the smart money says this is something that will happen sooner rather than later.
When DeepFakes do finally become indistinguishable from true raw video, we'll need a new way of verifying that a video is real. Perhaps official government videos will contain special watermarks. Maybe 3D spatial video will become the norm, or some other second factor will be used to prevent fraud. No one really knows yet, but if DeepFake technology keeps advancing, we'll need something sooner rather than later.
As I said above, right now you need quite a lot of source material to train the deep learning network on. It is, however, conceivable that a future version of this technology won't need as much training material to do the same job. The less training material needed, the more potential targets open up to abusers. Right now, only people famous enough to have tons of image data out there can fall prey to DeepFake abuse; remove that limitation and it's open season. There's no indication when or if this will ever be the case, but it's an issue worth keeping an eye on.
Right now, a relatively powerful GPU can create a convincing DeepFake after several days of processing. As computing power improves and the algorithm itself potentially becomes more efficient, it may one day be possible to do the DeepFake process in real-time. It's not a practical concern today, but imagine if you could use DeepFake technology in a video call. Hackers would certainly find social engineering way easier if they could dynamically fake the image of a person their target knows and trusts!
While being unwillingly spliced into an adult film is upsetting enough on its own, flawless DeepFakes have the potential to cause all sorts of chaos.
Imagine someone made a 100% convincing video of the US President declaring war. Some DeepFake systems use a technique known as generative adversarial networks (GANs), and the same adversarial approach can be applied not only to video but also to audio. Which means that at some point we are going to have to assume that any video of anyone is potentially a DeepFake. Other means of verification will have to be used to assure people that an official video is real, and no one has come up with a solution yet. As you can imagine, this is one of the most dangerous potential forms of DeepFake abuse.
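To give a feel for what "adversarial" means here, below is a bare-bones GAN training loop in PyTorch. It's a toy on made-up one-dimensional data, purely to show the tug of war between the two networks; real face and voice synthesis systems are enormously more complex than this sketch.

```python
# Bare-bones GAN: a generator invents samples, a discriminator judges
# real vs. fake, and each network's failures become the other's training
# signal. That arms race is what "adversarial" refers to.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # toy "real" data: Gaussian near 3
    fake = generator(torch.randn(64, 16))

    # Discriminator: learn to label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: learn to make the discriminator output 1 on fakes.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The unsettling implication is that any detector you build can, in principle, be recruited as the discriminator to train an even better forger.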
The internet has already made it very hard for the average person to figure out what’s true and what isn’t. Simple editing of videos has been enough to fool people into thinking something fake is indeed real. That's not a population of people who are ready for convincing face synthesis.
DeepFakes are already being used to abuse public figures by putting them into compromising films. Simply seeing yourself in an objectionable video can be enough to cause psychological distress, and as DeepFakes become easier to make, people without the resources of the rich and famous will have to deal with this as well.
Trolls, cyberbullies and other online offal could use this technology to devastating effect, and thanks to the anonymity the net provides, there may be no way of stopping them.
The biggest reason we should be worried about DeepFakes has nothing to do with the tech itself and everything to do with our collective lack of critical thinking. People believe all sorts of things that are incredibly unlikely to be true: that the moon landing was faked, that alien lizard people live among us, and let's not forget Pizzagate.
The vast majority of people on the net are not going to take the time to research whether something a fake Obama says in a video is real or not. There's already plenty of credulous sharing of fake content on social media sites like Facebook. Most people aren't even aware that this technology exists or what it's capable of. So there's a reasonable expectation that even existing DeepFake technology is going to fool plenty of web users.
If you've seen enough DeepFake videos, you'll start developing an eye for the telltale signs. Which means one of the best ways to learn how to spot one is by watching videos that you know are DeepFakes from the get-go.
The uncanny valley can also help you here. That is, almost-real computer animation of human faces can give you a bit of an uncomfortable feeling. Many DeepFakes are subtly imperfect and can trigger this sense of unease. Learn to trust that feeling: if you watch a video and think there's something weird about the person's face, take it seriously.
It's also often possible to see where the seams are. If the subject's face is blurry in a way that doesn't blend with the rest of the video, that's one possible sign. Skin-color changes between the neck and face can also be a tell.
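For the technically inclined, you can even turn that blur tell into a rough script. Here's a toy heuristic using OpenCV that compares how sharp the detected face region is against the whole frame; the file name, the 0.5 threshold, and the premise that a suspiciously soft face warrants a closer look are all illustrative assumptions, and this is nowhere near a reliable detector.

```python
# Rough illustration of the "blurry face" tell: if the face region is
# much softer than the rest of the frame, the video deserves a closer look.
# A crude heuristic for building intuition, NOT a real DeepFake detector.
import cv2

def sharpness(gray_region):
    # Variance of the Laplacian is a common crude sharpness measure:
    # blurrier regions have fewer strong edges, hence lower variance.
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

frame = cv2.imread("frame.jpg")  # a single frame grabbed from the video
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# OpenCV ships with a basic Haar-cascade face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

whole_frame = sharpness(gray)
for (x, y, w, h) in faces:
    face = sharpness(gray[y:y + h, x:x + w])
    print(f"face sharpness: {face:.1f}, whole frame: {whole_frame:.1f}")
    if face < 0.5 * whole_frame:  # threshold is an arbitrary assumption
        print("Face is much softer than its surroundings; look closer.")
```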
The best way to detect a DeepFake has nothing to do with technical methods, however. It's all about critical thinking. Ask yourself whether the person in the video would really say the things they are saying. Does the video as a whole make sense? Are there third-party sources or other recordings that back up what this video depicts? The best strategy is to treat shocking, out-of-character and otherwise weird videos as potential fakes until you can prove otherwise.
Some countries are reacting to the emergence of DeepFakes with new legal protections against them. Although if you ask the experts (in this case, the Electronic Frontier Foundation), most developed nations already have laws that address DeepFakes.
As the EFF explains it, DeepFake technology is no different from BitTorrent or Tor in the sense that the technology itself is morally neutral. So the aim should not be to ban the technology as a whole, but to address the crimes that are committed using it.
Which means that if you use DeepFakes to extort or defame someone, existing extortion and defamation laws already apply. The same goes for impersonation or fraud.
DeepFakes are just one emerging technology that shows us how wild this century is shaping up to be. On the whole, the advancement of technology has been a positive thing. In fact, that's a massive understatement. Life without modern technology sucks. It's as simple as that.
At the same time, technology can create utter chaos in the wrong hands. For example, there are fears that home genetic engineering tools could lead to someone making the next global killer disease in their garage. How realistic these fears are is up for debate, but you can never fully dismiss the potential for powerful tools to be misused.
DeepFake technology has already caught the interest of those who see its potential for abuse, and as with every other openly available tech toy, I have no doubt that they'll come up with nasty applications of the software we couldn't even imagine. All we can do is understand the tools and stay ever-vigilant.
Are you worried about DeepFakes too? Let us know down below in the comments. Lastly, we’d like to ask you to share this article online. And don’t forget that you can follow TechNadu on Facebook and Twitter. Thanks!