How COVID-19 Broke the Internet
Find out why Facebook was marking your posts as spam, and why you should expect more of it
If you posted a link on Facebook yesterday, you may have noticed that the social network marked it as spam or said it violated the community guidelines. While many people assumed it was because they were posting about the coronavirus or Trump, it soon became clear that posts of all kinds were being flagged.
While conspiracy theories spread around the internet claiming that Facebook was censoring anti-Trump posts, the reality was that an anti-spam filter went haywire. According to Guy Rosen, Facebook’s Vice President of Integrity, there was “an issue with an automated system that removes links to abusive websites, but incorrectly removed a lot of other posts too. We’ve restored all the posts that were incorrectly removed, which included posts on all topics - not just those related to COVID-19.”
There was also speculation that this may have occurred because Facebook recently sent its human content moderators home to protect them from the coronavirus. According to Facebook’s former security chief Alex Stamos, most of the work that content moderators do cannot be done in a work-from-home setting due to privacy concerns. While Rosen denies this was the cause, it may explain why it took the company so long to fix the problem.
With shelter-in-place orders in effect for the Bay Area, many in Silicon Valley will be working from home over the next few weeks, which could lead to more tech problems like this. Facebook already knows that its automated systems don’t work well enough on their own to determine what violates its community guidelines; that is why it employs human content moderators. Other social media sites are also asking their users to bear with them through similar challenges with automated systems.
In a blog post on Monday, YouTube told its creators that the site will be using machine learning and automated systems for “some of the work normally done by reviewers.” They warned that this could result in the site taking down some content that doesn’t violate YouTube’s policies.
“We won’t issue strikes on this content except in cases where we have high confidence that it’s violative. If creators think that their content was removed in error, they can appeal the decision and our teams will take a look. However, note that our workforce precautions will also result in delayed appeal reviews. We’ll also be more cautious about what content gets promoted, including live streams. In some cases, unreviewed content may not be available via search, on the homepage, or in recommendations.”
Twitter issued a similar warning in a blog post: “We want to be clear: while we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes.”
So while it may be a little rough over the next few weeks, know that your content is not being blocked because of a political agenda, but because automated moderation technology still needs humans to work properly.