Things are rarely what they seem on the internet. On Twitter, bots are used for influence campaigns to push propaganda, and on Amazon, fake reviews run rampant to confuse potential buyers.
Unfortunately, one of the biggest hotspots for fakery is on dating websites and apps. People looking for love sign up in search of another human being, only to find scammers and bots — and it’s getting harder to tell them apart from real people.
Now, the bots on dating platforms are taking things a step further. By engaging users in conversation, they’re leading people to other websites that contain dangerous malware, porn or worse. If you’re looking for love in all the wrong places, here’s what you need to know.
‘Am I bot or not?’
According to new reports from CBS News, a large percentage of the traffic on dating sites like Match.com, OKCupid and Plentyoffish might not be human.
Users across the web have reported multiple instances of talking to what appears to be an ordinary user, flirting, then being invited off the dating site. When they follow the link, they find themselves dealing with scams, marketing ploys and sometimes even pornographic webcam streams.
These bots exist to make money. Studies performed by cybersecurity firm Imperva revealed that 28.9% of all web traffic can be attributed to "bad bots": automated accounts spamming misinformation, ads and scams.
Since this study was performed in 2016, the numbers have only continued to rise — and now, many of these bots have found their way onto platforms beyond social media.
But why dating platforms?
A good question with a logical answer: dating platforms are filled with people looking for companionship — which means they’re often desperate to talk and are ripe for exploitation. A lonely person is more likely to engage with what they believe is an attractive person, and dish out cash when bots ask them to pay to “see something naughty.”
But the problem is now severe enough that big-name sites like Match.com are in court against the FTC for "unfairly exposing customers to fraud." Currently, California is the only state in the country with a law requiring human-seeming bots to disclose upfront that they aren't real people, but most experts agree this is unenforceable.
How to spot the bots
So if there isn’t much that can be done in the legal sphere to curb the bot onslaught, how can you identify them so you don’t waste your time? Well, the easiest way is to know the red flags that reveal the account isn’t run by a human.
The biggest one is going to be the profile image. Usually, bots grab their profile images from random social media users and masquerade as them — sometimes with an entirely different name. One of the easiest ways you can check if the profile picture is stolen is to perform a reverse image search.
If you're in Google Chrome, all you need to do is right-click the image and select "Search Google for image." If the picture belongs to someone with a different name than the profile, you can bet it's a bot!
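If the picture is already hosted somewhere (say, you have its direct URL), you can also build a reverse-image-search link yourself instead of going through the right-click menu. Here's a minimal sketch in Python using Google Lens's upload-by-URL endpoint; the endpoint and the example image URL are assumptions, and Google may change how this works at any time.

```python
from urllib.parse import urlencode

def reverse_image_search_url(image_url: str) -> str:
    # Build a Google Lens reverse-image-search link for a publicly
    # hosted image URL. Endpoint assumed current; subject to change.
    return "https://lens.google.com/uploadbyurl?" + urlencode({"url": image_url})

# Hypothetical example: paste the suspicious profile photo's direct URL.
link = reverse_image_search_url("https://example.com/profile.jpg")
print(link)
```

Opening the printed link in a browser runs the same search as the right-click menu, which is handy if you're checking a photo from your phone or from a screenshot you've uploaded to an image host.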
Other than profile images, pay attention to the account's grasp of the English language. Most automated bots are run from outside the United States and often send messages riddled with grammar and spelling errors. If the responses also seem a bit off and don't address what you're actually saying, that's another red flag.
In other words, it's up to us to keep our eyes open for bots. Until these platforms start moderating themselves, we might as well accept that we'll be sharing these spaces with bots for the near future.