Once a shining star in the social media world, Facebook has hit a rough patch that it is still trying to recover from. Just think of the headlines the company has made over the last couple of years, and then try to pick out one that was positive.
Whether it is the privacy concerns spawned by the Cambridge Analytica issue or the fake news that has run rampant, if the story has been about Facebook, it has been negative. Oh, and there was Mark Zuckerberg's testimony in front of Congress, too.
All of that is in the past, and no doubt Facebook is not done answering for its sins. However, the company is clearly in rebound mode, running commercials explaining how hard it is trying and rolling out new features meant to improve security, safety and enjoyment on the site.
What was that weird post?
On Tuesday, people who logged into Facebook may have noticed something different about the site. Sure, there were the normal posts, photos, news links and all, but there was something else.
Starting around 11 a.m. Eastern time and ending shortly thereafter, there was a separate box below every post asking if what was posted was hate speech. It looked like this:
This box appeared under every single post, which made for a different-looking news feed. Regardless, Facebook's intention here is clear.
Not long ago, Facebook made public its policies on what is, and is not, allowed on the site, but in doing so noted that it will need our help to police it. An option like this would give us all some say in the matter.
Now, the boxes were not on Facebook for very long, which suggests they were not meant to be visible to us just yet. But if you did mark a post as hate speech, this box opened up:
From there you would select which of the options the specific post fit into. The fact that Facebook removed the feature shortly after it was available is evidence it was not quite ready. But if you need more proof, having what appeared to be test options should seal the deal.
But while this feature is apparently not quite ready to go, we can probably expect to see some version of it soon.
But will it help?
It's tough to imagine anyone would be against the idea of removing hate speech from Facebook, though this concept does not seem to be without its flaws.
For instance, can we really trust people to only report actual hate speech and not just posts they do not like? Furthermore, what happens once posts are flagged as hate? Are they automatically removed or will they be sent to a Facebook employee for further review? If it's the latter option, how many times does a post need to be flagged before it gets a closer look? Facebook still has a lot of questions to answer.
Facebook responds to the glitch
Shortly after the hate speech question was spotted, one of Facebook's vice presidents posted this comment:
They better be more careful with their "tests"!
Facebook is not the only social media site in trouble
It turns out that apart from Facebook, Cambridge Analytica reportedly had access to at least one other social media network's public data. Click here to see which one.