Facebook Updates Its Moderation Policy, But It Remains Confusing and Messy

Last updated July 12, 2021
Written by:
Bill Toulas
Infosec Writer

Facebook is actively working on a hate speech rule-book meant to help it handle offending content more effectively than it has in the recent past. Yet, as recent leaks indicate, the amended 1,400-page document that describes Facebook’s content moderation guidelines remains partly inadequate and confusing.

Keeping up with everything that happens around the globe is undoubtedly hard, and the sections that cover critical hate and propaganda content require constant attention and timely revision. For example, the rapidly changing political scenes in Bosnia and Sri Lanka call for continuous monitoring and adjustment, while the speech restriction laws in countries like India and Myanmar need to be thoroughly evaluated by specialists who can convey the key points to those who develop the moderation guidelines.

Last year, ProPublica tested Facebook’s enforcement of its hate speech rules and concluded that it was widely inadequate. Of 49 blatantly hateful posts, 22 slipped past the social network’s content reviewers, even though they were extremely offensive to specific groups of people. At the time, Facebook employed about 7,500 reviewers, who examined every post that other users flagged as offensive. These reviewers decide whether a post violates the moderation guidelines, and when the rule-book is confusing and outdated, they have no choice but to follow its precepts.

This failure to handle almost half of the hate speech posts correctly forced Facebook to apologize, yet a targeted, effective overhaul of the rule-book is still far from complete. To meet the growing demand for reviewing flagged posts, the company has increased the number of reviewers to 20,000 this year, and together they delete tens of thousands of posts every day. Still, Facebook appears more occupied with protecting its own reputation than with protecting the user groups victimized by abusive and offensive posts.

The social media giant maintains an in-house definition and evaluation system that decides whether a group is promoting hate speech, based on a set of signals and elements that can easily lead to wrongful deductions. This approach has been proven ineffective in many cases, and although Facebook has admitted as much, the company insists on sticking with it.
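Facebook has not disclosed what those signals are, so the following is a purely hypothetical sketch of how a weighted, threshold-based signal scorer might work. The signal names, weights, and threshold below are illustrative assumptions, not Facebook's actual system; the point is only to show why such a scheme is prone to wrongful deductions.

```python
# Hypothetical sketch of a signal-based group evaluation heuristic.
# All signal names, weights, and the threshold are assumptions for
# illustration; Facebook has not disclosed its real signals or scoring.

SIGNAL_WEIGHTS = {
    "flagged_posts_ratio": 0.5,   # share of the group's posts flagged by users
    "banned_words_hits": 0.3,     # keyword matches against a slur/term list
    "reports_per_member": 0.2,    # user reports normalized by group size
}

HATE_GROUP_THRESHOLD = 0.6  # hypothetical cut-off score


def evaluate_group(signals: dict[str, float]) -> bool:
    """Return True if the weighted signal score crosses the threshold.

    A scheme like this inherits every bias of its inputs: coordinated
    mass-reporting inflates report counts, while keyword lists miss
    coded language entirely, so it can err in both directions.
    """
    score = sum(
        SIGNAL_WEIGHTS[name] * min(max(value, 0.0), 1.0)
        for name, value in signals.items()
        if name in SIGNAL_WEIGHTS
    )
    return score >= HATE_GROUP_THRESHOLD


# Example: a satire group hit by a mass-reporting campaign gets misclassified.
satire_group = {
    "flagged_posts_ratio": 0.7,  # inflated by brigading
    "banned_words_hits": 0.4,    # quotes slurs in order to mock them
    "reports_per_member": 0.8,
}
print(evaluate_group(satire_group))  # True -- a wrongful deduction
```

As the toy example suggests, a fixed-weight score cannot distinguish a group that quotes hateful language to condemn it from one that produces it, which is exactly the kind of failure the leaked guidelines appear unable to prevent.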

Are you happy with how Facebook has handled your post reports in recent years? Let us know in the comments below, and don’t forget to share your thoughts on our social channels, Facebook and Twitter.


