We’ve been hearing a lot about the spreading of fake news lately — especially on social media sites during election years like this one.
That’s why many of us look for verification when someone is said to have done something outrageous. We turn to proof like video or audio recordings. The problem is that technology has advanced to the point where even videos can be convincingly faked.
These spoofed videos are known as “deepfakes” and they’re super convincing. Tap or click here to see an example. So, is there anything social media sites can do to help combat the problem? Facebook says that it’s making a change for the better, but you can’t really believe what it says.
What is deepfake video technology?
Deepfake technology is a technique that uses facial mapping, artificial intelligence (AI) and deep machine learning to create realistic fake videos of people saying and doing things they haven’t actually said or done.
Now, with the use of deep learning, all it takes is a computer scan of multiple images and videos of a certain person. Deepfake software will then process this information and mimic the target’s voice, facial expressions and even individual mannerisms.
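To get a feel for how these tools pull off the swap, here's a toy sketch of the shared-encoder, two-decoder autoencoder design that many open-source deepfake programs are based on (the dimensions and random, untrained weights below are made up purely for illustration; a real system trains these networks on thousands of aligned face crops):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes -- a real system works on much larger, aligned face images.
FACE_DIM = 64 * 64      # flattened 64x64 grayscale face crop
LATENT_DIM = 32         # compressed code for expression, pose, lighting

# One SHARED encoder learns features common to both people...
W_enc = rng.normal(0, 0.01, (LATENT_DIM, FACE_DIM))

# ...while each person gets their OWN decoder that rebuilds
# their specific appearance from that shared code.
W_dec_a = rng.normal(0, 0.01, (FACE_DIM, LATENT_DIM))  # person A's decoder
W_dec_b = rng.normal(0, 0.01, (FACE_DIM, LATENT_DIM))  # person B's decoder

def encode(face):
    """Compress a face into the shared latent code."""
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    """Reconstruct a face from a latent code with one person's decoder."""
    return W_dec @ latent

# The swap trick: encode a frame of person A, then decode it with
# person B's decoder. B's face comes out wearing A's expression and pose.
frame_of_a = rng.normal(size=FACE_DIM)
latent = encode(frame_of_a)
fake_frame = decode(latent, W_dec_b)
```

Run frame by frame over a whole video, that one trick is what makes a target appear to say words someone else actually spoke.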
A recent example was a video of House Speaker Nancy Pelosi slurring her words, making it look like she was intoxicated. The video went viral on Facebook last year and was viewed more than 3 million times.
The thing is, the video wasn’t real. It was a doctored version of a real speech that was created to make Pelosi look bad. Making matters worse was the fact that Facebook refused to remove the video from its site, even though the company knew it was manipulated.
Facebook bans deepfake videos
Facebook announced recently that it’s banning deepfake videos from its site. But you can’t really trust what the company says. Whenever Facebook claims it’s making changes for the better, it’s usually a PR stunt.
For example, last year Mark Zuckerberg said he thought Facebook should be more privacy-driven. Tap or click here to find out how that worked out.
Earlier this week in a blog post, the company said its approach “has several components, from investigating AI-generated content and deceptive behaviors like fake accounts, to partnering with academia, government, and industry to exposing people behind these efforts.”
It went on to say Facebook is strengthening its policy toward misleading manipulated videos that have been identified as deepfakes and will remove this type of content if it meets the following criteria:
- It has been edited or synthesized — beyond adjustments for clarity or quality — in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
Parody and satire videos are not part of the new ban. Those will still be allowed on the site.
Videos that don’t meet Facebook’s standard for removal can still be reviewed by independent third-party fact-checkers. If a video is found to be false, it will supposedly be flagged so anyone wanting to share or view it will know. The company said this is a better approach than just removing all questionable videos.
This might seem like a step in the right direction for Facebook, but really it’s not. The ban still allows “shallow fakes” — videos doctored with simple editing tools rather than AI, like the Pelosi clip — and the site remains ground zero for the spread of fake news.
You’ll still need to fact-check news articles that are shared on Facebook, since the company, citing free speech, doesn’t remove that kind of misleading information. Tap or click here to see the top 10 fake news articles circulated on Facebook last year.