Raise your hand if you’ve ever seen mentions of a “miracle cure” in your Facebook news feed. Nearly everyone has to deal with spam content and misinformation on Facebook, but the speed at which hoaxes can spread on social media is no laughing matter. Between scammers tricking money out of people and nation-state actors spamming propaganda and fake news, it’s no wonder that people are fed up with Facebook’s lack of moderation. But is there an end in sight to the deluge of phoniness?
Well, in a surprising twist, Facebook is turning its eye towards some of the most dangerous spam and misinformation on its platform: medical hoaxes. We’ve already seen the dangerous role social media can play in promoting quack cures, and as the largest platform of all, Facebook has an especially tough job ahead of it to weed out the garbage and keep users safe.
With a new policy in effect against health-related misinformation, will Facebook’s new attitude about fake news be enough to protect its users? One thing is for certain: it’s got a lot of work to do if it wants its platform to be a safer place to share news with friends.
Why is fake medical information so prevalent on Facebook?
A major offensive is underway at Facebook against what the company is calling “sensational health claims.” This refers to advertisements, promotions, and viral posts that contain false or unverifiable medical information.
You might already be familiar with a good deal of this content, with ads for “miracle cures,” stories about cancer-curing “superfoods,” and claims about “crystal healing” dominating all corners of the platform.
The pervasiveness of this fake content is both logical and unfortunate. People genuinely care about their health, and when times are desperate, they are often willing to believe any number of dubious solutions that could potentially help them or a loved one. The blame, however, lies with the predatory content creators who peddle lies and deceit for money.
Facebook’s actions are notable because they cut against the company’s traditional business model. Highly engaging content — like viral medical posts — is financially beneficial for the company, since it tends to draw more eyeballs (and therefore more ad revenue).
By tamping down on this kind of content, Facebook is denying itself potential ad dollars in exchange for making its community safer. This is unusual for a company that has regularly prioritized ads and data harvesting over the safety of its users.
What is Facebook doing to fight sensational health claims on its platform?
According to a press release from Facebook, the company plans on manually lowering the ranking of sensational health-related content as it appears on users’ timelines. Rather than outright deleting the posts, this pushes them to a lower position — which keeps them from attracting too much attention and spreading further.
This approach helps Facebook sidestep accusations of censorship, and it gives product makers a chance to clarify the legitimate uses of their wares before running Facebook ads in the future.
This isn’t the first time Facebook has employed this kind of tactic to fight misinformation. Just as in the company’s fight against clickbait news, the site’s algorithm searches for commonly used phrases that “exaggerate or mislead” users. This culls the distribution of low-quality content, making the experience better for everyone on the platform.
Thankfully, there isn’t much you, as a Facebook user, need to do to enable this feature — Facebook is already cleaning up the mess behind the scenes. This effort, while late in the game, is a step in the right direction for a company that’s had so many mishaps with user data and safety.
Is this a sign that Facebook is changing for the better? Only time will tell, but for now, I’d still keep your profile data private if I were you. Even if the company removes all the bad actors, Mark Zuckerberg’s still got to make money somehow.