Fake news is one of the scourges of our increasingly connected world. With billions having instant access to social media sites and the web, misinformation can spread like wildfire.
To separate the bogus from the real, we typically rely on concrete proof, such as a video, that something was actually said or done.
But thanks to the unstoppable march of technology, even videos are now at risk of being faked convincingly. With emerging and terrifyingly advanced face tracking and video manipulation techniques, a new era of disinformation is looming.
From a politician saying words that weren’t spoken to a celebrity doing things that weren’t done, the threat of these ultra-realistic fake videos, now collectively known as Deepfakes, is something that can no longer be denied. If we’re not careful, the next fabricated video scandal that can threaten our national security or sway public opinion is just waiting around the corner.
What is Deepfake technology?
Deepfake technology is an emerging technique that uses facial mapping, artificial intelligence and deep learning to create ultra-realistic fake videos of people saying and doing things they never actually said or did.
And the scary part? The technology is improving at such a rapid pace that it’s getting increasingly difficult to tell what’s fake.
Now, with deep learning, all it takes is a computer scan of multiple images and videos of a target person. Deepfake software processes this information to mimic the target’s voice, facial expressions and even individual mannerisms. In time, without the proper tools, these Deepfake videos will become indistinguishable from the real deal.
Don’t trust what you see
The mass accessibility of Deepfake software has worrying implications that are hard to ignore. Now even your average Joe can create realistic fake videos of anyone, saying anything.
With this technology in everyone’s hands, it will be increasingly confusing to filter out the truth from the lies.
And it’s not just misinformation that we need to worry about. Realistic Deepfake videos can also be used in blackmail attempts, phishing links and extortion scams.
“Deepfake videos provide even the most unsophisticated criminals with the tools to create (and with minimal effort) realistic, hard to detect (at least without deep forensic analysis) video recordings that can impersonate and fool anyone, including law enforcement,” cybersecurity lawyer Steven Teppler warns.
“They could be used in extortion, implicate innocent people in crimes, and in civil court proceedings, these fraudulent videos can be used to carry out all kinds of fraudulent claims or defenses,” Teppler added.
So far, cruder versions of Deepfake technology have mostly been used in fake celebrity porn and comedy gags, but given the technology’s rapid improvement, it’s only a matter of time before we start seeing videos with more serious consequences.
Deepfakes are not perfect yet
Fortunately, Deepfake technology is not yet perfect. There are still tell-tale signs, like lifeless, unblinking eyes and jerky facial movements. In short, the videos still look unnatural and, like most CGI, remain stuck in “uncanny valley” territory.
Unnatural blinking, in particular, is a good indicator of a Deepfake video. The reason behind this apparent flaw is interesting.
Speaking to Wired, computer scientist Siwei Lyu said that Deepfake technology “doesn’t get blinking” yet, and that Deepfake programs also tend to miss other physiological signals intrinsic to human beings.
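To make the blinking cue concrete, here is a minimal sketch of how a detector might count blinks from a video. It assumes per-frame eye landmarks (six points per eye, the format produced by common face-landmark detectors) are already available; the numbers below are synthetic stand-ins, not real landmark data. A suspiciously low blink count over a long clip would then flag the video for closer inspection.

```python
import math

def eye_aspect_ratio(eye):
    """Ratio of vertical to horizontal eye openness.
    `eye` is six (x, y) landmark points around one eye;
    the ratio drops sharply when the eyelid closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks: runs of `min_frames` or more consecutive
    frames where the eye-aspect ratio falls below `threshold`."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# Synthetic 30-frame clip: eyes open (~0.3) with one 3-frame blink (~0.1).
ears = [0.3] * 10 + [0.1] * 3 + [0.3] * 17
print(count_blinks(ears))  # → 1
```

A real system would compare the measured blink rate against the human norm of roughly 15 to 20 blinks per minute; the threshold and minimum run length here are illustrative defaults, not published values.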
Have you ever gotten that uncanny valley feeling while watching computer-generated actors (Carrie Fisher and Peter Cushing in Rogue One: A Star Wars Story, for example)? You can’t put your finger on it, but something is just a bit off.
This says a lot about the technology in its current form. Do a Google Image search for an individual and you’ll rarely, if ever, see them with closed eyes. If you’ve ever taken a portrait or a selfie, you know that a shot of the subject with their eyes closed is a big no-no, usually marked for deletion. With so few closed-eye images to learn from, the software struggles to reproduce natural blinking.
Another source of Deepfake rendering flaws is human psychology itself. As with other animation software, you can’t simply cobble together a large number of snapshots and expect artificial intelligence to perfectly mimic the personality and idiosyncrasies of a human being.
A talented animator may be able to pull it off but it will take an enormous amount of skill and effort to evade detection.
In a Deepfake video future, what’s being done to spot them?
As we enter this new era of video misinformation, perhaps our biggest weapon against Deepfakes is awareness.
Once we recognize that powerful video manipulation is now widely accessible and easy for anyone to use, we can be more critical and mindful of the video content we encounter every day.
Fortunately, the U.S. government is already knee-deep in developing technologies that can detect Deepfake videos. For example, the U.S. Defense Advanced Research Projects Agency is already two years into its four-year program to find methods to combat fake videos and images.
DARPA’s MediFor program is also developing an automated system that can detect fake videos and give them an “integrity score” based on three levels of analysis: digital fingerprints, physical elements (such as lighting) and semantics (comparing the content to known facts). DARPA hopes that by the end of the MediFor program, it will have prototype fake-video detection systems ready to test at scale.
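To illustrate the idea of an “integrity score,” here is a purely hypothetical sketch of how three per-level scores might be folded into one number. How DARPA actually combines the digital, physical and semantic levels is not public; the weighted average and the weights below are illustrative assumptions, nothing more.

```python
def integrity_score(digital, physical, semantic,
                    weights=(0.4, 0.3, 0.3)):
    """Combine three per-level scores in [0, 1] into one overall
    integrity score; higher means more trustworthy. The weights
    are arbitrary placeholders, not DARPA's actual scheme."""
    levels = (digital, physical, semantic)
    for s in levels:
        if not 0.0 <= s <= 1.0:
            raise ValueError("each level score must be in [0, 1]")
    return sum(w * s for w, s in zip(weights, levels))

# A clip with clean metadata but inconsistent lighting and shaky content:
print(round(integrity_score(0.9, 0.4, 0.5), 2))  # → 0.63
```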
Another program, under development at Los Alamos National Laboratory, tackles the issue at the pixel level. Focusing on a video’s “compressibility,” this approach looks for image data that repeats over the course of the video, indicating that pixels are being recycled by a Deepfake program.
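The compressibility intuition can be demonstrated in a few lines: recycled pixel data compresses far better than genuinely new frames. This is a toy sketch, not the Los Alamos method; the “frames” are synthetic byte strings standing in for decoded video frames, and a real pipeline would of course operate on actual pixel buffers.

```python
import random
import zlib

def compression_ratio(frames):
    """Compressed size divided by raw size for the concatenated
    frames. Repeated (recycled) content drives this ratio down."""
    raw = b"".join(frames)
    return len(zlib.compress(raw)) / len(raw)

random.seed(0)
# "Natural" clip: every frame is fresh pseudo-random data.
natural = [bytes(random.randrange(256) for _ in range(1024))
           for _ in range(20)]
# "Recycled" clip: one frame repeated, as when pixels are reused.
recycled = [natural[0]] * 20

# The recycled clip compresses dramatically better.
print(compression_ratio(natural) > compression_ratio(recycled))  # → True
```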
Hopefully, these fake-video-busting techniques will develop fast enough to keep up with the rapid advancement of Deepfake technology itself.