Audio deepfakes: How hackers are stealing your voice
By now, you’ve probably seen a deepfake video or two come across your social media feed (hey, that deepfake Tom Cruise is pretty convincing). Did you know that deepfake audio is even easier to create?
To show how flawed voice authentication can be, computer scientists figured out a way to fool the technology in just six tries. Keep reading to learn more about how they did it and how to safeguard yourself.
Voice authentication 101
Voice authentication technology is primarily used by companies that must verify their customers’ identities. Verification with a customer’s unique “voiceprint” is standard practice in banking, call centers, and other institutions where keeping your info private is a major concern.
When you first enroll in voice authentication, you’re typically asked to repeat a specific phrase in your own voice. The company’s system then generates a custom vocal signature, or voiceprint, from whichever phrase you provided. Your voiceprint is then stored on a secure server.
Once your voiceprint is saved, it’s used in the future when you contact the company. You’re usually asked to repeat a different phrase than the one you initially gave, which is then digitally compared to your saved voiceprint in the system. If everything matches up, you’ll pass the test and gain access to your information.
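Real systems convert your voice into a numerical "speaker embedding" and compare it to the stored one. As a rough illustration only (the vectors, threshold, and function names below are made up for this sketch, not any vendor's actual API), the comparison step often boils down to a similarity score checked against a cutoff:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def matches_voiceprint(enrolled, sample, threshold=0.85):
    """Accept the caller if the new sample is close enough to the stored voiceprint.
    The 0.85 threshold is an arbitrary example value, not a real system's setting."""
    return cosine_similarity(enrolled, sample) >= threshold

# Toy vectors standing in for real speaker embeddings
enrolled = [0.9, 0.1, 0.4, 0.7]
same_speaker = [0.88, 0.12, 0.41, 0.69]
impostor = [0.1, 0.9, 0.2, 0.1]

print(matches_voiceprint(enrolled, same_speaker))  # close match: accepted
print(matches_voiceprint(enrolled, impostor))      # poor match: rejected
```

This also shows why deepfakes are dangerous here: a synthetic clip that lands close enough to your stored voiceprint clears the threshold just like the real you.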
Of course, hackers weren’t born yesterday. They got to work as soon as companies began implementing voiceprint technology on a large scale. Using machine-learning “deepfake” software, the bad guys figured out how to copy voiceprints and skate through security measures.
To stop the deepfakes, voice authentication developers put “spoofing countermeasures” in place. Although they’re designed to tell a human voice from a synthetic one, the protection often falls short.
Whose voice is it anyway?
Researchers at the University of Waterloo decided to play hacker for a day and attempted to crack voice authentication themselves. First, they pinpointed the characteristics of deepfake audio that reveal it as computer-generated. They then wrote a program that removes these giveaway features, making the audio virtually indistinguishable from an authentic human recording.
The hacker-like tech they developed was so good that it could fool most voice authentication systems. Systems with less sophisticated technology were fooled 99% of the time within just six attempts.
The researchers also tested it against Amazon Connect’s voice authentication system. They achieved a 10% success rate within four seconds. The success rate jumped to 40% in attempts of 30 seconds or less.