Facebook has been accused in the past of not acting quickly enough to remove fake accounts peddling fake news or offensive posts from its site. To show that the social media giant is taking accusations seriously, Facebook says it removed a record 2.2 billion fake accounts in the first quarter of this year.
The number is part of a transparency report released by Facebook on Thursday. The report also contains statistics on how Facebook is attempting to eliminate not just fake accounts, but also posts that violate its community standards. In the first quarter, Facebook took down 1.5 million posts that promoted or attempted drug and firearm sales.
The report also looked at challenges the company faces when trying to track down fake or offensive posts on its mammoth site.
Facebook outlines how it is working to monitor its site
In a blog post that accompanied the transparency report, Facebook said it is bringing in independent third parties to examine the company’s methodology for removing posts and accounts.
“We are also opening up even more fully to third-parties, including on our fake account numbers, via the Data Transparency Advisory Group (DTAG),” wrote Alex Schultz, vice president of analytics at Facebook. “We know it’s important to have independent verification of our methodology and our work.”
Schultz wrote that Facebook’s enforcement measures against fake accounts fall into three categories: blocking accounts from being created, removing accounts as they sign up and removing accounts already on Facebook.
Facebook says the vast majority of the 2.2 billion fake accounts it removed were taken down within minutes of being created.
However, it’s the third category of accounts, those already on Facebook, that can cause the most mayhem.
In the report, Facebook said it measured nine categories of violations:
- Adult nudity and sexual activity
- Bullying and harassment
- Child nudity and sexual exploitation of children
- Fake accounts
- Hate speech
- Regulated goods: drugs and firearms
- Spam
- Terrorist propaganda (ISIS, al-Qaeda and affiliates)
- Violence and graphic content
Facebook added the drugs and firearms category this year.
“We use a combination of technology, reviews by our teams and reports from our community to identify content that might violate our standards,” Facebook’s report stated. “While not always perfect, this combination helps us find and flag potentially violating content at scale before many people see or report it.”
Are Facebook’s monitoring efforts working?
There have long been reports of users flagging accounts or posts before Facebook took them down — if it took them down at all. In addition, Facebook only started its transparency reports in late 2017, so there is roughly a year and a half of data to examine. Within that window, Facebook’s statistics show it outpaces user complaints in taking down offensive posts or accounts in seven of its nine violation categories.
It’s in the categories of bullying and hate speech that Facebook faces difficulties.
In the last quarter of 2017, users flagged 75% of the hate speech content Facebook removed. By the first quarter of 2019, that share had dropped to about 36% — an improvement, though hate speech still trails most other categories, where Facebook’s own systems catch violations first.
In the bullying category, the numbers have actually regressed. In the last quarter of 2018, Facebook proactively flagged 21% of the bullying content it removed; in the first quarter of this year, that figure dropped to about 14%.
Facebook relies on users, its own employees and artificial intelligence (AI) to spot bad actors. While AI may do well in more clear-cut violation categories, it stumbles on bullying and hate speech.
“While instrumental in our efforts, technology has limitations. We’re still a long way off from it being effective for all types of violations. Our software is built with machine learning to recognize patterns based on the violation type and local language,” according to the Facebook report.
“In some cases, our software hasn’t been sufficiently trained to automatically detect violations at scale. Other violation types, such as bullying and harassment, require us to understand context when we review reports and therefore require review by our trained teams.”
Before it pats itself on the back too hard over its first quarter results, Facebook would do well to remember it still has some glaring blind spots. It was during this reporting period that a shooter live-streamed on Facebook his murderous rampage through two mosques in New Zealand.
Facebook admitted that fewer than 200 people saw the massacre while it was being live-streamed. However, the video was viewed 4,000 times before Facebook took it down.