What Facebook really needs is some good P.R. Its reputation, along with that of CEO Mark Zuckerberg, has taken a beating over the past five years, and as a result the company has lost the trust of users.
Lately, Facebook has been making a marked attempt to change perceptions, and perhaps for that reason it revealed this week how much material it removed for violating rules that prohibit hate speech, harassment, and child sexual exploitation.
Facebook’s newly released transparency report shows its ongoing efforts to use artificial intelligence to flag harmful content before users see it and to outsmart those who try to bypass its filters.
During the past two quarters, Facebook claims to have removed or labeled more than 54 million pieces of content it judged violent and graphic. In all, 18.5 million items were determined to be child nudity or sexual exploitation, 11.4 million posts broke the rule that prohibits hate speech, and 5.7 million uploads violated the bullying and harassment policies.
The report also detailed, for the first time, similar efforts on Instagram. More than 1.2 million photos or videos involved child nudity or sexual exploitation, and 3 million posts violated the rules that prohibit illegal drug sales.
Between April 1 and September 30, Facebook took more action against content than it did in the six months prior. In that earlier period, 53 million pieces of content were removed for excessive violence, 13 million for child exploitation, 7.5 million for hate speech, and 5.1 million for bullying. Facebook does not attribute the increase to a rise in rule-breaking posts; instead, it credits tighter rules and stepped-up efforts to seek out offensive content.
Speaking to reporters, Zuckerberg warned against concluding that “because we’re reporting big numbers, that must mean there’s so much more harmful content happening on our service than others.” He added, “what it says is we’re working harder to identify this and take action for it.”
But Facebook is being asked to do even more. After a string of incidents over the past year, U.S. lawmakers have threatened to pass laws that would hold technology companies responsible for failing to police their sites and services.
One of those incidents was the video of the gunman attacking two mosques in Christchurch, New Zealand, which spread rapidly on Facebook and other social media. The video bypassed systems meant to prevent such content from being published.
New data about that attack was released as well. The social network removed 4.5 million pieces of content related to the attack between March 15, the day it occurred, and September 30. The problem is that people had already seen much of this content by the time it was removed.
Facebook claims its A.I. has improved as well: 80 percent of the hate speech it removed was detected before users saw it. While this rate is lower than in other categories, it is still an improvement.
Do Others Measure Up?
Zuckerberg criticized other tech companies for not publishing comparable data on the content they remove, though he did not name which of Facebook’s competitors he was referring to.
“As a society, we don’t know how much of this harmful content is out there and which companies are making progress,” he said.
Do the results published by Facebook change your mind about it? Or have Zuckerberg and his social network permanently lost your support? Let us know your thoughts and concerns in the comments below.