Last week, my 6-year-old daughter would not go to school. A friend had told her that “the virus” was coming and that everyone would get sick. I sat down with her and, before the school bus arrived, we looked at interactive maps showing how far the new coronavirus outbreak was from Portugal and how it was predicted to spread, and we went over some World Health Organization recommendations that she already proudly follows: wash your hands often, sneeze into your elbow, and let me know if you are feeling sick. We found all of this information online in less than 10 minutes.


For the past 10 years, I have been working with large datasets that might be relevant to public health and epidemiology, from Google searches and social-media sentiment to weather conditions. More recently, I started to study a different type of contagion: how online platforms can exploit our cognitive biases to make us even more susceptible to ‘misinformation viruses’. The current outbreak brings these two strands of research together.

There is a staggering amount of misinformation propagating online. During the first four weeks of 2020, there were more than 15 million Twitter posts about the outbreak and, to date, the most concerning conspiracy theory circulating online is the fabricated claim that the virus was engineered in China for political or economic gain.

As is often the case with misinformation, ‘science’ is used to support conspiracy theories. What looks like scientific evidence is even used to support the idea that scientists cannot be trusted. In this particular case, a preprint posted on the bioRxiv platform reported “uncanny” similarities between the new coronavirus and the human immunodeficiency virus. The preprint has since been withdrawn, but the suggestion that the virus was lab-made is much harder to retract.

In parallel, a photo of a bottle of bleach has been widely shared. Its label lists the bacteria and viruses it is effective against, including coronaviruses. This was taken as further evidence of the conspiracy: how did this bleach brand know that there would be an outbreak of a new virus called corona?

It is likely that the people sharing this preprint online had never heard of the BLAST algorithm or P values before. They may not have understood that the bioRxiv preprint had not yet been vetted by the wider scientific community. They may not have known that ‘coronavirus’ names a long-established family of viruses rather than this novel strain (SARS-CoV-2), which causes a disease whose official name is now COVID-19: ‘CO’ for ‘corona’ (Latin for ‘crown’, because viruses in this family carry crown-like spikes on their surface, not because the disease is transmitted via a popular Mexican beer), ‘VI’ for ‘virus’, ‘D’ for ‘disease’, and ‘19’ for the year in which it was identified. The name deliberately avoids any geographical location or mention of the likely animal host, to minimize stigmatization. Even so, it is unlikely to stop the false videos and posts blaming Chinese people and their eating habits for the outbreak.

Overall, it is possible that people sharing such misinformation overestimate their ability to understand very complex problems and are experiencing a form of the Dunning-Kruger effect, whereby those who know the least about a topic tend to be the most confident in their judgments about it. This may be exacerbated by a lack of trust in institutions, be they governments, the pharmaceutical industry, or the traditional media.

For decades, scientists, medical doctors, science communicators, and journalists have been trying to promote the democratization of knowledge, the participation of citizens, and a more critical society. Social networks could be expected to facilitate and even amplify these efforts. It now seems that we might have gotten more than we asked for: a society that is over-critical and over-informed but, unfortunately, not very knowledgeable.

Fortunately, social networks are taking steps to confront misinformation actively and publicly. Facebook, which has historically been very resistant to permanently deleting posts, now seems to be doing just that. People who search for information about coronaviruses or vaccines are now directed to credible sources, and users seem more aware of false information and more active in flagging it. One account long known for spreading dubious information was recently deleted from Twitter, but only after it posted conspiracy theories related to COVID-19.

However, publicly deleting misinformation might create problems of its own. It could reinforce conspiracy theories, and the decision is arguably much harder when it involves high-profile figures, such as presidents. As an alternative, social-media platforms could implement simple nudges: asking people whether they are sure they want to share something could activate their better judgment and reduce over-confidence, and introducing a time delay on the publication of dubious information, while it is being checked, could slow its spread and, if it proves false, prevent its publication altogether.

As I spoke, my daughter eventually looked up from the screen and said, “Maybe my friend’s parents did not teach him that we cannot believe everything we see on the internet.” Learning how to navigate this new world is indeed important, but given our very human limitations, I would add that the platforms could do a better job of reminding us that they should not be blindly trusted.