New face-swapping app is going viral but there’s a gotcha
A new face-swapping video app that went viral is now facing serious backlash. What seemed like harmless fun at first is raising privacy concerns, especially because millions of people's images are being shared around the world.
The app allows people to use a photo of themselves to take the place of an actor in a clip of a TV show or movie. And not surprisingly, the app was an instant hit.
But just as quickly as it became popular, users turned against it. We’ll tell you more about this new face-swapping app and why it has people worried.
Privacy fears dog app
The face-swap video app Zao became an immediate hit recently in China. But just as it went viral, so did the privacy fears around it.
The app, only available in China, allows users to upload a photo of themselves or create a series of photos which they can then drop into popular scenes from hundreds of movies or TV shows. Imagine replacing Julia Roberts in “Pretty Woman” with yourself.
In case you haven’t heard, #ZAO is a Chinese app which completely blew up since Friday. Best application of ‘Deepfake’-style AI facial replacement I’ve ever seen.
Here’s an example of me as DiCaprio (generated in under 8 secs from that one photo in the thumbnail) pic.twitter.com/1RpnJJ3wgT
— Allan Xia (@AllanXia) September 1, 2019
That potential for fun had Zao racing to the top of China’s iOS App Store over the weekend. Almost immediately, however, fears of how Zao’s developer could use the uploaded photos and clips put the brakes on the app’s popularity.
In a country already blanketed by state-run facial recognition technology, Zao’s fine print about how users’ images could be used sparked outrage. The initial version of Zao’s user agreement gave the developer “free, irrevocable, permanent, transferable, and relicense-able” rights to all users’ images.
Almost as quickly as it rose to the top of the download charts, Zao saw its iOS App Store rating fall to 1.9 stars after about 4,000 negative reviews were registered. The backlash prompted Zao to revise its user agreement.
The company now says it will use photos or mini videos uploaded by users only to improve the app. Zao also claims that if users delete content they uploaded, the app will erase it from its servers.
Concerns around the world
While the handling of facial data fanned privacy concerns, perhaps more disconcerting is how easy the app makes it for a user to create a deepfake video. The implications of such deepfake videos have already raised worries among lawmakers here in the U.S.
Deepfake video technology uses facial mapping, artificial intelligence (AI) and deep learning to create ultra-realistic fake videos. The videos can make it appear as if a person has said or done things they haven’t.
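For readers curious what that looks like under the hood, here is a rough, simplified sketch of the shared-encoder, per-identity-decoder design that many open-source face-swap tools are built on. The class names, layer sizes and the random stand-in image below are illustrative placeholders of ours, not code from Zao or any real app, and real tools add face alignment, masking and blending on top.

```python
# Illustrative sketch of the classic "deepfake" face-swap architecture:
# one shared encoder learns a common face representation, and a separate
# decoder is trained for each identity. Swapping faces means encoding a
# frame of person A and decoding it with person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # shared face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training reconstructs each person with their own decoder (omitted here);
# at swap time, a frame of person A is pushed through person B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)        # stand-in for an aligned face crop
swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)                         # torch.Size([1, 3, 64, 64])
```

The key trick is that both identities share one encoder, so the facial expression and pose captured from person A can be rendered in person B’s likeness.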
U.S. politicians are particularly worried about the technology and the upcoming election cycle. House Intelligence Committee Chairman Adam Schiff has said deepfake videos could cause “nightmarish scenarios” for the 2020 presidential election.
Even apps that only alter still photographs are cause for concern. More specifically, the globally popular FaceApp has U.S. politicians and security analysts deeply worried.
With FaceApp, users can flip genders, change ethnicities and even age people in images. Why does that concern Capitol Hill? FaceApp’s servers and its developers are all based in St. Petersburg, Russia.
Add that to FaceApp’s privacy policy, which states that images uploaded by the user are the property of FaceApp. The policy also states that while users’ names and images are allegedly deleted from the company’s servers after a few days, FaceApp may hold on to data to comply with “certain legal obligations.”
FaceApp’s Russian origins and the vast trove of names and facial data it holds have some American politicians spooked. Senate Minority Leader Chuck Schumer has called for an investigation into the app and its practices.
AI can also generate images of people who don’t exist. The site whichfaceisreal.com uses a machine learning technique known as a generative adversarial network (GAN).
A GAN pairs two neural networks: a generator that goes through tons of portraits over and over, learning patterns so it can produce convincing faces of its own, and a discriminator that tests that work by trying to tell a real person from a fake.
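As a rough illustration of that generator-versus-discriminator tug-of-war, here is a toy training loop. The tiny network sizes and the random "real_faces" tensor are placeholders of ours, not the model behind whichfaceisreal.com, whose images come from far larger face generators.

```python
# Minimal sketch of a generative adversarial network (GAN): a generator
# invents images from random noise, a discriminator judges real vs. fake,
# and the two are trained against each other.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 64 * 64 * 3   # flattened 64x64 RGB image

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(100):                              # real models train far longer
    real_faces = torch.rand(32, image_dim) * 2 - 1   # placeholder "portraits"
    noise = torch.randn(32, latent_dim)
    fake_faces = generator(noise)

    # Discriminator: score real portraits as 1 and generated ones as 0.
    d_loss = (loss(discriminator(real_faces), torch.ones(32, 1))
              + loss(discriminator(fake_faces.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: produce images the discriminator mistakes for real.
    g_loss = loss(discriminator(fake_faces), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Over many rounds the generator only "wins" by producing faces the discriminator can no longer distinguish from the real portraits it was trained on, which is why the results can fool people too.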
It remains to be seen whether China’s Zao app ever goes global. Given the backlash in its home country, if Zao does launch outside of China, it can expect to be closely scrutinized.