Disinformation – intentionally false news – is a chronic problem worldwide, but the COVID-19 pandemic has made the information overload worse. From fake coronavirus cures to misleading claims about mandatory vaccines that have spread fear far and wide, it is increasingly difficult to discern the truth.
All it takes is a click on “share” or “forward” for disinformation to become misinformation that spreads like wildfire through personal networks on apps and platforms such as WhatsApp and Facebook.
In Africa, where internet penetration remains relatively low at about 40 percent on average, many users are coming online for the first time. And around the world, many internet users, experienced or not, lack the digital literacy skills needed to distinguish reliable news from fake news.
How can internet users become more discerning online?
That is the idea behind “Choose Your Own Fake News,” an online game that explores how disinformation spreads across East Africa. The game is the brainchild of Neema Iyer, founder and director of Pollicy, a Ugandan organization that supports civic technology on the continent.
Iyer explained the motivation behind her game in a Mozilla Foundation press release:
Online misinformation has real implications offline. It can threaten people’s lives, freedom of expression, and prosperity. This is especially true in parts of East Africa, where people are coming online for the first time and don’t yet have the proper context to distinguish what’s trustworthy from what’s not.
‘Did you see that video on WhatsApp?’
“Choose Your Own Fake News” teaches new internet users to be more discerning about the information they receive and encounter in digital spaces.
Players choose one of three characters in East Africa: Flora, a student looking for a job; Jo, a trader; or Aida, a 62-year-old retired grandmother. Players review news headlines, videos, and social media posts through the eyes of each character.
“The players' decisions make the difference between correctly debunking disinformation – or falling victim to fraud, hospitalizing a loved one, and even accidentally inciting a mob,” explains the Mozilla press release.
As players follow their characters' decisions, the game offers detailed information on how misinformation and disinformation work, and highlights the role people play in intercepting false or unverified information before it spreads.
For example, Aida receives a message forwarded by her cousin with a video of a boy crying after receiving his measles vaccine. Should Aida share the video? Measles can be prevented with a vaccine, yet cases continue to rise due to false information.
“Platforms like YouTube and Facebook recommend and amplify content that makes internet users keep clicking, even if it is radical or totally wrong,” said the Mozilla Foundation.
In the third episode of the second season of “Terms and Conditions,” a new podcast exploring digital rights in Africa, Neema Iyer talks with digital rights activist Berhane Taye about the history of online disinformation in Africa and how it intersects with bots, trolls, and more.
Iyer and Taye discussed the potentially dangerous consequences of a seemingly simple forward or share.
The internet is full of bots – software applications that run automated tasks. Iyer estimates that up to half of online activity is driven by bots designed to influence and shape online opinion. Trolls – real people – deliberately disrupt, attack, and offend. Deepfakes – digitally altered videos – can make fiction seem real.
This combination of online agitators fuels misinformation that ultimately sows chaos and discord and polarizes communities, Iyer said.
To complicate matters, many internet users are “unwitting agents” who inadvertently amplify false information, writes Kate Starbird in Nature.
Mobile phones and SMS text messaging have long been used as tools to organize mob justice and destabilize communities, but when WhatsApp and other platforms emerged, false information could spread quickly and exponentially with the click of a button, Iyer continued.
Iyer cites lynchings in India sparked by WhatsApp rumors about child abduction, and sectarian violence in Nigeria that erupted after images circulated on WhatsApp appearing to show Fulani Muslims committing acts of violence against Christians.
In April 2020, in the midst of the pandemic, WhatsApp finally took action to stem the spread of fake news, limiting the forwarding of frequently forwarded messages from five chats at a time to one. “The measure is designed to slow the speed at which information moves through WhatsApp, putting truth and fiction on a more equal footing,” according to The Verge.
To penalize or not to penalize?
People often turn to social media to fill the gaps left by mainstream media. But with the democratization of social media, anyone can generate content, with few guidelines in place to monitor, vet, or fact-check it.
In East Africa, governments have created various policies and laws designed to control “fake news” and hate speech, but these often end up being used to penalize opposition or dissenting voices.
In March 2020, the South African government criminalized spreading COVID-19 information with the “intention to mislead” about the virus or the government's response to the pandemic. Under the 2002 Disaster Management Act, violators may face fines, imprisonment, or both, according to the Committee to Protect Journalists.
The Committee to Protect Journalists warned that “passing laws that emphasize criminalizing disinformation rather than educating the public and promoting fact-checking is a dead end.”
In Nigeria, misinformation has bred mistrust in institutions that “should be guides during a pandemic,” said ‘Gbenga Sesan, executive director of Paradigm Initiative in Nigeria, who joined Iyer and Taye on “Terms and Conditions.”
“You have a lot of information that shouldn't get into the hands of vulnerable people,” Sesan said, referring to the barrage of videos, messages, and memes spread to promote false cures for the coronavirus.
But Nigeria's draft Protection from Internet Falsehood and Manipulation Bill – known as the “social media bill” – is blunt and dangerous, and too imprecise to make a dent in the problem.
Make the truth go viral
Research shows that it is very difficult to change a person's mind once an idea has taken root – and, compounding the problem, the typical internet user often reads no further than the headline.
Artificial intelligence technology can try to intercept fake news or hate speech, but this method is often imprecise and fails to capture the nuances of language and cultural context, Iyer explained.
For example, Facebook's 2020 transparency report said the platform removed 9.6 million hateful or apparently hateful posts in the first four months of 2020, Iyer said. But she warned of the likelihood of false positives.
Content moderators have immense power to remove what is deemed false or hateful, but Facebook struggles to adequately handle multiple languages and cultural contexts. Moreover, many users are unaware that they can report content.
Fact-checkers lack the reach to change minds once fake news takes root; in the United States alone, they are outmatched one hundred to one by disinformation campaigns. Fact-checking also varies greatly depending on a country's laws on transparency, data, and freedom of information. In Tanzania, for example, the government has banned independent fact-checking and insists that official statistics are the absolute truth.
How do we discourage the spread of misinformation? Iyer urges users to intercept fake news before sharing it – and instead, to make the truth go viral.