This article was jointly written by guest authors Solana Larsen, editor of the Mozilla Internet Health Report, and Leil Zahra, Mozilla Fellow at WITNESS.
As people around the world shelter from the COVID-19 pandemic, the internet has become a more vital resource than ever before. Millions of us now connect, commiserate, collaborate, work, and play exclusively online.
In some ways, the pandemic has revealed the internet's potential as a global public resource. But it is also revealing what is unhealthy about the internet: people without access are now at an even greater disadvantage, and the privacy and security flaws in consumer technology are leaving more people more vulnerable than ever.
One thorny issue the pandemic highlights is the internet's broken content moderation ecosystem: the deeply flawed ways that big tech platforms handle online hate speech, misinformation, and illegal content. Despite many advances, Facebook, YouTube, and other platforms still tend to moderate content in ways that are capricious or outright dangerous: harmful content is left up, while acceptable content is unfairly removed. These decisions often disproportionately affect people in the Global South who speak languages that are underrepresented online.
We are at a decisive moment. Will the flaws in content moderation loom even larger? Or could this crisis be a springboard for positive, lasting change in how we communicate online?
One of the trickiest problems in content moderation is deciding who, or what, does the moderating. With pandemic-related misinformation flooding platforms and many human content moderators unable to work, platforms are leaning ever more heavily on artificial intelligence. The limits of this approach are clear. Automated filtering has failed to stop COVID-19 misinformation from spreading like wildfire and endangering public health. Platforms are awash in posts suggesting that drinking bleach can cure the virus, or that 5G technology is somehow spreading the disease.
In addition to missing harmful misinformation, automated moderation can also mistakenly censor quality content. Internet users who ask honest questions, or who use local vocabularies or contexts, can end up incorrectly flagged as problematic. In the midst of the pandemic, automated moderation on Facebook resulted in the erroneous removal of a large amount of content, including links to news articles about COVID-19. In March, Facebook maintained that a technical “bug” was to blame and that the deleted content had been reinstated. But the episode raises serious questions about the accuracy of these systems, and it also casts doubt on Facebook's transparency. When YouTube announced in March that it would rely more heavily on automated filtering due to COVID-19, it warned: “users and creators may see increased video removals, including some videos that may not violate policies.” Likewise, Twitter explained in March that its automated moderation “can sometimes lack the context that our teams bring, and this may result in us making mistakes.”
It doesn't help that when content is wrongfully removed or an account is suspended, the appeal process is often opaque. In many cases, users are never told the reason behind a removal or suspension. And even before the pandemic, context was a big part of the debate around content moderation: for example, whether a US-centric view of what counts as acceptable speech should be applied internationally.
Faulty technology is one problem with content moderation; unequal enforcement of policies is another. In the United States and some European countries, big tech platforms can be quite vigilant about following local news and honoring their own policy commitments. Elsewhere, that is often not the case. During the 2019 Nigerian elections, Global Voices researcher and contributor Rosemary Ajayi and a group of colleagues catalogued hundreds of tweets spreading disinformation, and they were appalled by the slow and unpredictable responses to their reports of this activity. “If you report something serious on election day and they answer you a week later, what's the point?” Ajayi said. The same idea is terrifying in the current context: if a platform removes COVID-19 misinformation only after millions of people have already seen it, the damage has already been done.
These are just two of the persistent problems in the field of content moderation. In Mozilla's recent survey of the social media space, we examined several more. We spoke with SIN, a Polish drug harm reduction group whose Facebook account was suspended and which was unable to appeal the decision. And we spoke with the human rights research group Syrian Archive, which says platforms frequently erase documentation of human rights abuses in wartime. It is not hard to see how such cases could be especially grave during a pandemic. What if critical health information, or evidence of lockdown-related human rights abuses, is mistakenly deleted?
There is no panacea for these problems. But greater transparency about what is removed, when, and why, as well as about how much of that content is appealed and reinstated, would help researchers and affected communities and better inform platforms and policymakers. Transparency reports from major platforms have indeed become more detailed over the years, thanks in part to pressure from civil society, including the signatories of the Santa Clara Principles. This community initiative to define guidelines for transparency and accountability in content moderation launched in 2018 and has been endorsed by several platforms. In March, noting that the principles could benefit from an update, the Electronic Frontier Foundation (EFF) launched a global call for proposals (deadline June 30) on how best to meet the needs of the marginalized voices who are most heavily impacted.
So much is unknown about moderation and appeal patterns in different global contexts that even anecdotal evidence from affected users is a valuable resource. Silenced.online is a new grassroots tool for crowdsourcing and analyzing experiences of unfair takedowns around the world. Its goal is to create a network of organizations and individuals who are working on, or want to start working on, content takedowns and moderation.
Other groups agree that it is crucial for civil society and researchers to engage with questions of content moderation and platform regulation. Scandals and crises tend to trigger new rules and regulations, or calls for further automation, that are not necessarily grounded in independent analysis of what works. New approaches to accountability and appeal mechanisms, such as Facebook's new Oversight Board, demand scrutiny from global audiences.
As noted above, the COVID-19 pandemic is highlighting flaws in content moderation, but the problems are long-standing, particularly in matters of health and disinformation. They call for changes in how platforms operate day to day and in how they are held accountable. Heightened attention to the problem has the potential to catalyze something good: more transparency, more humane technology, better laws, and a healthier internet.