Facebook Deletes 583 Million Fake Accounts In Effort To Clean Up Network

In Facebook's first quarterly Community Standards Enforcement Report, the company said most of its moderation activity targeted fake accounts and spam posts, with 837 million spam posts and 583 million fake accounts acted upon.

Facebook believes its policing system is better at scrubbing graphic violence, gratuitous nudity and terrorist propaganda from its social network than it is at removing racist, sexist and other hateful remarks polluting its influential service.

"We took down or applied warning labels to about three and a half million pieces of violent content in Q1 2018, 86 per cent of which was identified by our technology before it was reported to Facebook".

"We're sharing these because we think we need to be accountable", vice president of product management Guy Rosen said during a press briefing on the new report.

The company removed, or placed behind a warning screen, 3.4 million pieces of graphically violent content in the first quarter, almost triple the 1.2 million a quarter earlier, according to the report.

The findings, its first public look at internal moderation figures, illustrate the gargantuan task Facebook faces in cleaning up the world's largest social network, where artificial-intelligence systems and thousands of human moderators are fighting back a wave of offensive content and abuse.

But how many content violations actually happen on Facebook?

Facebook attributed the increase in graphic-violence takedowns to expanded use of photo-detection technology.

Using new artificial-intelligence-based technology, Facebook can find and moderate content more rapidly and effectively than human reviewers alone, at least when it comes to detecting fake accounts and spam.

[Image: courtesy of Facebook]

That's not to say, of course, that such content never shows up; rather, at scale, Facebook is able to remove most of it, often before its 2.2 billion users ever see it.

Facebook "took action" on 3.4 million pieces of content that contained graphic violence.

Facebook plans to continue publishing enforcement reports, and will refine its methodology for measuring how much bad content circulates on the platform. But the report also indicates Facebook is having trouble detecting hate speech, and becomes aware of the majority of it only when users report the problem.

The company estimates that between 7 and 9 of every 10,000 content views on the platform were of material that violated its adult nudity and pornography standards, or roughly 0.07 to 0.09 per cent of views.

"Hate speech content often requires detailed scrutiny by our trained reviewers to understand context", the report explains, "and decide whether the material violates standards, so we tend to find and flag less of it".

[Image: courtesy of Facebook]"We aim to reduce violations to the point that our community doesn't regularly experience them", Rosen and vice president of data analytics Alex Schultz write in the report. It says it found and flagged almost 100% of spam content in both Q1 and Q4. However, it's important to note that the company says it deleted even more fake accounts (694 million) in Q4 of 2017. In that case, Facebook claims it used A.I.to locate 98.5 percent of the fake accounts it recently closed, and "nearly 100 percent" of the spam it found.

Vanessa Coleman
