Facebook Claims It Took Action Against Millions of Bad-Acting Posts


What Facebook really needs is some good P.R. Its reputation, along with that of CEO Mark Zuckerberg, has taken a beating over the past five years, and as a result the company has lost the trust of many users.

Lately, Facebook has been making a marked attempt to change those perceptions, which may be why it revealed this week how much published material it removed for violating its rules against hate speech, harassment, and child sexual exploitation.


Facebook’s recently released transparency report details its ongoing efforts to use artificial intelligence to flag harmful content before users see it and to outsmart those who try to bypass its filters.

During the past two quarters, Facebook claims to have removed or labeled more than 54 million pieces of content that it judged violent and graphic. In all, 18.5 million items were determined to be child nudity or sexual exploitation, 11.4 million posts broke the rules prohibiting hate speech, and 5.7 million uploads violated the bullying and harassment policies.

The report also covered Instagram for the first time: more than 1.2 million photos or videos involved child nudity or sexual exploitation, and 3 million posts violated the rules prohibiting illegal drug sales.

Between April 1 and September 30, Facebook took more action against content than it did in the six months prior. In that earlier period, 53 million pieces of content were removed for excessive violence, 13 million for child exploitation, 7.5 million for hate speech, and 5.1 million for bullying. Facebook does not attribute the increase to a rise in bad-acting posts; rather, it credits the tightening of its rules and its efforts to seek out offensive content.

Speaking to reporters, Zuckerberg warned against concluding that “because we’re reporting big numbers, that must mean there’s so much more harmful content happening on our service than others.” He added, “what it says is we’re working harder to identify this and take action for it.”


But Facebook is being asked to do even more. After a string of issues over the past year, U.S. lawmakers have threatened to pass laws that would hold technology companies responsible for failing to police their sites and services.

One of those issues was the video of the gunman attacking two mosques in Christchurch, New Zealand, which spread rapidly on Facebook and other social media. The video bypassed the systems meant to prevent such content from being published.

New data about that attack was released as well. The social network removed 4.5 million pieces of content related to the attack between the day it occurred, March 15, and September 30. The problem is that much of this content had already been seen by the time it was removed.

Facebook claims its A.I. has improved as well: 80 percent of the hate speech it removed was detected before users saw it. While that rate is lower than in other areas, it is still an improvement.

Do Others Measure Up?

Zuckerberg was critical of other tech companies for not publishing the same amount of data on the content they remove. He did not name which of Facebook’s competitors he was referring to.

“As a society, we don’t know how much of this harmful content is out there and which companies are making progress,” he said.

Do the results pushed by Facebook change your mind about it? Or have Zuckerberg and his social network permanently lost your support? Let us know your thoughts and concerns in the comments below.

Laura Tucker

Laura has spent nearly 20 years writing news, reviews, and op-eds, with more than 10 of those years as an editor as well. She has exclusively used Apple products for the past three decades. In addition to writing and editing at MTE, she also runs the site's sponsored review program.


  1. “Facebook Claims It Took Action Against Millions of Bad-Acting Posts”
    The key word here is “claims”. I would take any announcement from Facebook with a pound of salt.

    In the past week, there was an announcement in the media that Facebook removed 3.5 BILLION bad-acting accounts. That is more accounts than the total number of unique accounts that Facebook ever claimed it had. IIRC, the last publicly announced number of FB accounts was somewhere around 2.5 billion. Either the media is engaging in its usual hyperbole in news reporting, the source FB announcement was exaggerating a bit, or someone, somewhere changed an “m” to a “b”.

    1. You’re right; those numbers don’t seem to make sense. I did a little research and couldn’t find the exact numbers you mentioned recently, but did find an older article that kind of speaks to the same thing with slightly different numbers. https://www.engadget.com/2019/05/23/facebook-2-billion-fake-accounts-disabled/

      1. Based on these numbers, one can make an argument that most of Facebook’s accounts are fake. :-)

Comments are closed.