Facial surveillance is coming whether we like it or not. The technology is already here, built into many of our phones and laptops, and it's only a matter of time before we see it widely adopted by law enforcement and government agencies.
As scary as that sounds, this kind of technology is quickly becoming the norm. In countries like China, facial scanning technology was used to enforce quarantines for the novel coronavirus. Tap or click here to see how to track the spread of the disease.
But what happens when this government and police-level technology falls into the hands of a hacker? You get sensitive data breaches like no other. And now, a major facial recognition AI company has fallen victim to exactly that kind of attack.
Face on, mask off
Anyone remember Clearview AI? That’s the creepy facial surveillance company with alleged links to intelligence, government and law enforcement agencies.
In a previous article, we talked about how Clearview scraped data from millions of social media users to build one of the biggest databases of faces ever assembled. Tap or click here to learn more about Clearview AI.
But now, Clearview is in a bit of a tight spot. According to reports from The Daily Beast, an unknown intruder gained access to the company's entire client list during a data breach. The intruder was never found, but Clearview says it verified that none of the company's more sensitive data (read: faces) was accessed.
Shortly after, reporters from BuzzFeed News obtained a document from an anonymous leaker that appears to show a detailed list of Clearview AI clients.
These include agencies we already knew about like the FBI and local police departments, as well as new entries like ICE, Customs and Border Protection, the Department of Justice, the Secret Service and the Drug Enforcement Administration.
But that’s not all. BuzzFeed also found numerous retail outlets made use of the software. While the complete list was not shared with the public, major names like Best Buy, Kohl's, Walmart, Albertsons, Rite Aid and Macy’s were all on record as having run searches using the software.
This means they accessed Clearview AI’s face matching algorithm at some point and ran a scan to detect matches. It makes sense that some law enforcement and government agencies would look up faces, but what are retail outlets doing with this technology?
What happens if any of these companies suffer a breach and that facial data gets loose on the web? We don't yet know why these retailers were running facial searches in the first place, but it's definitely cause for concern.
How can I protect myself from face-scanning technology?
Clearview AI’s strength lies in the fact that all of its data came from social media platforms like Facebook and Instagram. This gave Clearview as wide a net as possible from which to pull data. Worse, this information was given up willingly by users who didn’t bother to read the terms of service agreements.
To protect yourself from being scooped up by this data-hungry powerhouse, your best bet is to go dark on social media. If you don’t feel comfortable having your face used for any kind of “research purpose,” setting your profile to private or removing any photos of your face is a good place to start.
If you choose to dive a bit deeper, you could work to remove your digital footprint altogether. That way, nobody will be able to effectively market to or track you without your knowledge. Tap or click here to learn how to delete yourself from the web.
As bad as Clearview AI is, a portion of the blame is absolutely deserved by Facebook and other social networks. These companies built the business model behind our modern-day data economy, and they continue to reap massive profits as a result.
If you’re tired of dealing with these kinds of privacy violations, maybe it’s time to reconsider your relationship with these platforms for good. Tap or click here to see why Kim thinks it’s time to break up with Facebook.