Facebook Prepares a New Strategy to Face the Challenges of the 2020 Elections
Last updated July 12, 2021
Over the last couple of months, Facebook has been very vocal about battling misinformation on the Web. After implementing a fact-checking system supported by community reviewers, the social network has announced another measure to fight fake news: notifications that let users know when they’re about to share news articles more than 90 days old.
It’s worth noting that this latest measure has its roots in a feature released in 2018, when Facebook took its first prominent step toward checking the credibility of articles posted in the News Feed. More precisely, that set of measures added context about the source of each article, information from external experts and third-party organizations (such as Wikipedia), related articles from the same publisher, and more. All of this happens on Facebook’s side, without requiring any input from users. However, those measures apparently weren’t powerful enough to stop the spread of fake news and misinformation, which is why Facebook has decided on a more aggressive approach.
As noted by John Hegeman, Facebook’s VP of Feed and Stories, the company will now automatically check each article’s age before it is shared on Facebook. More precisely, if a news article is older than three months, a notification will appear informing you of its age. That means you’ll need to make a conscious decision about whether the material you’re about to post is genuinely relevant and up-to-date: you can either go ahead and post it, or dismiss the notification and return to your News Feed.
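For illustration only, here is a minimal sketch of the kind of age check described above. Facebook has not published how the feature is implemented, so the function names, the exact 90-day threshold, and the confirmation callback below are all assumptions, not the company’s actual code.

```python
from datetime import datetime, timezone, timedelta
from typing import Callable, Optional

# Hypothetical cutoff matching the "older than three months / 90 days"
# behavior described in the article; the real threshold is an assumption.
OLD_ARTICLE_THRESHOLD = timedelta(days=90)


def needs_age_notice(published_at: datetime, now: Optional[datetime] = None) -> bool:
    """Return True if the article is old enough to trigger a sharing notice."""
    now = now or datetime.now(timezone.utc)
    return now - published_at > OLD_ARTICLE_THRESHOLD


def share_article(published_at: datetime, confirm_anyway: Callable[[], bool]) -> bool:
    """Sketch of the share flow: warn about old articles and let the user decide."""
    if needs_age_notice(published_at):
        # The user may post anyway or dismiss the notice and return to the feed.
        return confirm_anyway()
    return True  # Recent articles are shared without an extra prompt.


# Example: an article published in early 2020 would trigger the notice today.
old_post = datetime(2020, 1, 15, tzinfo=timezone.utc)
print(share_article(old_post, confirm_anyway=lambda: False))  # -> False (dismissed)
```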
We should also add that Facebook’s approach to flagging old articles follows a trend initiated by credible news publishers, many of which now prominently label older content or automatically redirect readers to the latest available information. Facebook’s competitors among social media networks are also experimenting with the way users post content. Last year, Instagram introduced a mechanism that informs users when they’re about to post photos or videos that others might find offensive. And earlier this month, Twitter started testing a prompt on Android that appears when users are about to retweet an article they haven’t opened on Twitter.
Facebook’s latest measure to prevent misinformation will be rolling out to a small group of users first. However, it’s expected to be available globally in the coming weeks.