# Web of lies: a tool for determining the limits of verification in preventing the spread of false information on networks

## Abstract

The spread of false information on social networks has garnered substantial scientific and popular attention. To counteract this spread, verification of the truthfulness of information has been proposed as a key intervention. Using a novel behavioral experiment with over 2000 participants, we analyze participants’ willingness to spread false information in a network. All participants in the network have aligned incentives, making lying attractive and countering the explicit norm of truth-telling that we impose. We investigate how verifying the truth, endogenously or exogenously, affects the choice to lie or to adhere to the norm of truth-telling, and how this compares to the spread of information in a setting in which such verification is not possible. There are three key take-aways: (1) verification is only moderately effective in reducing the spread of lies; (2) its effectiveness is contingent on the agency of people in seeking the truth; and (3) its effectiveness depends on the exposure of liars, not only on the exposure of the lies being told. These results suggest that verification is not a blanket solution. To enhance its effectiveness, verification should be combined with efforts to foster a culture of truth-seeking and with information on who is spreading lies.

## Introduction

The spread of false information on social networks has received a great deal of attention from both academic research and popular news media1,2,3. This recent interest has been sparked by the alarming potential impact that false information may have had on election outcomes4,5. However, the concern is broader, extending to whether false information can influence support for specific policies6, whether one’s children should be vaccinated7 and whether one should get a flu shot8. Against this backdrop, the study of interventions that can counteract the spread of false information on social networks is timely9. A widely proposed intervention to counteract the ills that false information may cause is promoting the verification of the information shared in networks1.

Verification can occur in two main ways: exogenously or endogenously. Exogenous verification occurs when an external, impartial source labels the veracity of information. For example, algorithms have been proposed to rank content by its credibility10,11. In this vein, Google has led an effort to rank search results by a trustworthiness score12. In other words, exogenous verification is a top-down solution. Endogenous verification occurs when those exchanging information take measures themselves to investigate the truth. For instance, Facebook spearheaded a controversial effort to crowd-source verification (https://www.facebook.com/zuck/videos/10106612617413491/). As such, endogenous verification is a bottom-up solution and depends heavily on the willingness of people to put effort towards truth-seeking.
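
To make the distinction concrete, the following sketch contrasts the two regimes for a single piece of information. It is a minimal illustration only: the Message type, its fields and the labeling rules are assumptions made for the example, not features of any deployed system or of our experiment.

```python
# Illustrative sketch contrasting the two verification regimes.
# The Message type and labeling rules are hypothetical; only the
# top-down vs. bottom-up distinction comes from the text.
from dataclasses import dataclass

@dataclass
class Message:
    content: int          # the information being passed along
    truth: int            # the ground truth, hidden from receivers
    label: str = "unlabeled"

def verify(msg: Message) -> Message:
    """Attach a veracity label by comparing the content to the ground truth."""
    msg.label = "true" if msg.content == msg.truth else "false"
    return msg

def exogenous(msg: Message) -> Message:
    # Top-down: an external, impartial source labels every message.
    return verify(msg)

def endogenous(msg: Message, receiver_checks: bool) -> Message:
    # Bottom-up: a label appears only if the receiver invests effort
    # in truth-seeking; otherwise the message circulates unlabeled.
    return verify(msg) if receiver_checks else msg
```

The contrast highlights why a bottom-up regime hinges on receivers’ willingness to check: absent that effort, false content simply circulates unlabeled.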

The underlying motivation behind verification is the presence of a widely held norm of truth-telling13. This means that when false information is identified, people can be expected to make that falsity known and to refrain from spreading lies, even when doing so goes against their self-interest. Despite this normative expectation, the effectiveness of verification may be compromised because people do not act in a vacuum; rather, they act in naturally occurring social networks in which those connected to one another have similar dispositions, interests and incentives14,15. Social networks have been shown to become more polarized over time16,17,18,19, and this polarization may lead people to prioritize fitting in and supporting the views shared by other group members, views that benefit the group by reinforcing its identity19,20,21, rather than incorporating contradictory information22,23 and telling the truth13,24.

The tension between aligned interests and a widely held social norm of truth-telling motivates our investigation of how well verification works and how its effectiveness can be enhanced. Arguably, answering this question in a field setting is fraught with challenges, for at least three reasons. First, it is nearly impossible to identify who verified the information shared, impeding any evaluation of how verification affected the choice to spread false information. Second, even if tracking verification were possible, those who verify information may have different preferences for honesty than those who do not; as a consequence, it may be impossible to know whether verification, if imposed, would be effective on those who do not typically verify information. Third, social connections are not random: they may depend on preferences for honesty as well as on the act of verification itself. In short, people are not randomized into their social positions, nor are they randomly exposed to verification regimes and their use.

To address these obstacles, we design and conduct an online experiment that provides us with a controlled environment where a verification regime can be randomly assigned and tracked. This allows us to observe which people know the truth about the information they spread and what type of verification is used to find the truth. Moreover, we have control over the social positions that people take, i.e., participants do not choose their interaction partners. Most importantly, our experimental design emphasizes the tension between aligned interests in one’s network and an explicitly imposed social norm of truth-telling. People in a network can be dishonest without being held fully accountable for their lies. In spreading false information, people can “hide” behind the lies of others so that the recipient of a lie cannot be sure about who is responsible for that lie. Furthermore, people embedded in networks can contribute to spreading false information without necessarily lying about the information they receive. The experiment that we discuss in the next section captures these aspects of a naturally occurring social network.

Adding to these benefits, the experiment we design also helps us to better understand the mechanisms that may drive the effectiveness of verification, such as the psychological cost that individuals experience when telling lies or the reputational cost they perceive when identified as liars13. We interrogate these mechanisms through experimental manipulations that change the presence and type of verification, testing which of these channels may enhance the effect of verification. Our findings can help inform the design of interventions and policies to prevent the spread and amplification of lies on social networks in real-world settings, where people are surrounded by others who are similar to them when sharing information25,26,27.

### Experimental design

We design a one-shot sequential game, which we call the web of lies game (see Fig. 1), where three players are assigned to different positions in a linear communication network: first, F, intermediate, I, and last, L. At the beginning of the game, player F chooses a card from a $$12 \times 12$$ grid, which reveals an integer, $$x$$, between 1 and 30 written on the card. The number $$x$$ is observed only by F and is referred to as the hidden number. Player F then sends a number, $$x_F$$, also between 1 and 30, to player I, reporting on $$x$$. Player I observes $$x_F$$, but not $$x$$, and reports a number, $$x_I$$, under the same conditions to player L. Finally, player L observes $$x_I$$, but not $$x$$ or $$x_F$$, and reports the final number, $$x_L$$, this time to the experimenter.
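
A few lines of code summarize this reporting chain. The sketch below is a structural illustration only: the message order (F to I to L to the experimenter) and the 1–30 range of admissible reports follow the design described above, while the lying policies, the uniform composition of the grid and the uniform choice of a false report are assumptions made for the example.

```python
# Minimal simulation of the reporting chain in the web of lies game.
# Only the message-passing structure and the 1-30 range come from the
# design; the lying policies and uniform draws are assumptions.
import random

LOW, HIGH = 1, 30  # admissible numbers, as in the experiment

def draw_hidden_number() -> int:
    """F picks one card from the 12 x 12 grid; each card shows an integer in 1-30."""
    grid = [random.randint(LOW, HIGH) for _ in range(12 * 12)]
    return random.choice(grid)

def report(observed: int, lie: bool) -> int:
    """Relay the observed number truthfully, or (assumed) lie uniformly at random."""
    if not lie:
        return observed
    return random.choice([n for n in range(LOW, HIGH + 1) if n != observed])

def play_round(lie_F: bool = False, lie_I: bool = False, lie_L: bool = False):
    x = draw_hidden_number()   # observed only by F
    x_F = report(x, lie_F)     # F reports to I; I never sees x
    x_I = report(x_F, lie_I)   # I reports to L; L sees neither x nor x_F
    x_L = report(x_I, lie_L)   # L reports to the experimenter
    return x, x_F, x_I, x_L

# Example: only I lies, yet the experimenter receives a false final report.
x, x_F, x_I, x_L = play_round(lie_I=True)
print(f"hidden={x}  F sent {x_F}  I sent {x_I}  L reported {x_L}")
```

Running the example with lie_I=True shows how a player can spread false information without personally lying: player L’s final report is false even though L faithfully relayed the number received, which is exactly the kind of “hiding” behind others’ lies described above.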

## References

1. Vosoughi, S., Roy, D. & Aral, S. The spread of true and false news online. Science 359, 1146–1151 (2018).

2. Lazer, D. M. et al. The science of fake news: addressing fake news requires a multidisciplinary effort. Science 359, 1094–1096 (2018).

3. Ha, L., Perez, L. A. & Ray, R. Mapping recent development in scholarship on fake news and misinformation, 2008 to 2017: disciplinary contribution, topics, and impact. Am. Behav. Sci. 65, 290–315 (2021).

4. Mocanu, D., Rossi, L., Zhang, Q., Karsai, M. & Quattrociocchi, W. Collective attention in the age of (mis)information. Comput. Hum. Behav. 51, Part B, 1198–1204 (2015).

5. Persily, N. The 2016 U.S. election: Can democracy survive the internet? J. Democr. 28, 63–76 (2017).

6. Ding, D., Maibach, E. W., Zhao, X., Roser-Renouf, C. & Leiserowitz, A. Support for climate policy and societal action are linked to perceptions about scientific agreement. Nat. Clim. Change 1, 462–466 (2011).

7. Schmitt, H.-J. et al. Child vaccination policies in Europe: a report from the summits of independent European vaccination experts. Lancet Infect. Dis. 3, 103–108 (2003).

8. Nyhan, B. & Reifler, J. Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information. Vaccine 33, 459–464 (2015).

9. Iyengar, S. & Massey, D. S. Scientific communication in a post-truth society. Proc. Natl. Acad. Sci. 116, 7656–7661 (2019).

10. Ratkiewicz, J., Conover, M., Gonçalves, B., Flammini, A. & Menczer, F. Detecting and tracking political abuse in social media. In Proceedings of the 5th AAAI International Conference on Weblogs and Social Media (ICWSM’11) (2011).

11. Gupta, A., Kumaraguru, P., Castillo, C. & Meier, P. TweetCred: Real-time credibility assessment of content on Twitter. In Social Informatics (eds Aiello, L. M. & McFarland, D.) 228–243 (Springer, Berlin, 2014).

12. Dong, X. L. et al. Knowledge-based trust: estimating the trustworthiness of web sources. Proc. VLDB Endow. 8, 938–949 (2015).

13. Abeler, J., Nosenzo, D. & Raymond, C. Preferences for truth-telling. Econometrica 87, 1115–1153 (2019).

14. Yang, S.-H. et al. Like like alike: joint friendship and interest propagation in social networks. In Proceedings of the 20th International Conference on World Wide Web 537–554 (2011).

15. Colleoni, E., Rozza, A. & Arvidsson, A. Echo chamber or public sphere? Predicting political orientation and measuring political homophily in Twitter using big data. J. Commun. 64, 317–332 (2014).

16. Lelkes, Y. Mass polarization: manifestations and measurements. Public Opin. Q. 80, 392–410 (2016).

17. Boxell, L., Gentzkow, M. & Shapiro, J. M. Greater internet use is not associated with faster growth in political polarization among US demographic groups. Proc. Natl. Acad. Sci. 114, 10612–10617 (2017).

18. Boutyline, A. & Willer, R. The social structure of political echo chambers: variation in ideological homophily in online networks. Polit. Psychol. 38, 551–569 (2017).

19. Steglich, C. Why echo chambers form and network interventions fail: selection outpaces influence in dynamic networks. Preprint at https://arxiv.org/abs/1810.00211 (2018).

20. Cowan, S. K. Secrets and misperceptions: the creation of self-fulfilling illusions. Sociol. Sci. 1, 466–492 (2014).

21. Cowan, S. K. & Baldassarri, D. It could turn ugly: selective disclosure of attitudes in political discussion networks. Soc. Netw. 52, 1–17 (2018).

22. Garrett, R. K., Carnahan, D. & Lynch, E. K. A turn toward avoidance? Selective exposure to online political information, 2004–2008. Polit. Behav. 35, 113–134 (2013).

23. Becker, J., Porter, E. & Centola, D. The wisdom of partisan crowds. Proc. Natl. Acad. Sci. 116, 10717–10722 (2019).

24. Gneezy, U., Kajackaite, A. & Sobel, J. Lying aversion and the size of the lie. Am. Econ. Rev. 108, 419–453 (2018).

25. Weisel, O. & Shalvi, S. The collaborative roots of corruption. Proc. Natl. Acad. Sci. 112, 10651–10656 (2015).

26. Barr, A. & Michailidou, G. Complicity without connection or communication. J. Econ. Behav. Organ. 142, 1–10 (2017).

27. Pennycook, G., Bear, A., Collins, E. & Rand, D. G. The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Manag. Sci. (2019).

28. Chen, D. L., Schonger, M. & Wickens, C. oTree—An open-source platform for laboratory, online, and field experiments. J. Behav. Exp. Finance 9, 88–97 (2016).

29. Buhrmester, M., Kwang, T. & Gosling, S. Amazon’s Mechanical Turk—A new source of inexpensive, yet high-quality, data? Perspect. Psychol. Sci. 6, 3–5 (2011).

30. Sprouse, J. A validation of Amazon Mechanical Turk for the collection of acceptability judgments in linguistic theory. Behav. Res. Methods 43, 155–167 (2011).

31. Cohen, J. Statistical Power Analysis for the Behavioral Sciences 2nd edn. (Routledge, New York, 1988).

32. Bland, J. & Nikiforakis, N. Coordination with third-party externalities. Eur. Econ. Rev. 80, 1–15 (2015).

33. Amir, A., Kogut, T. & Bereby-Meyer, Y. Careful cheating: people cheat groups rather than individuals. Front. Psychol. 7, 371 (2016).

34. Conrads, J., Irlenbusch, B., Rilke, R. M. & Walkowitz, G. Lying and team incentives. J. Econ. Psychol. 34, 1–7 (2013).

35. van de Ven, J. & Villeval, M. C. Dishonesty under scrutiny. J. Econ. Sci. Assoc. 1, 86–99 (2015).

36. Scheufele, D. A. & Krause, N. M. Science audiences, misinformation, and fake news. Proc. Natl. Acad. Sci. 116, 7662–7669 (2019).

37. Jun, Y., Meng, R. & Johar, G. V. Perceived social presence reduces fact-checking. Proc. Natl. Acad. Sci. 114, 5976–5981 (2017).

38. Fischbacher, U. & Föllmi-Heusi, F. Lies in disguise: an experimental study on cheating. J. Eur. Econ. Assoc. 11, 525–547 (2013).

## Acknowledgements

We are grateful for the constructive comments received from Andrzej Baranski, Sanjeev Goyal, Agne Kajackaite, Byungkyu Lee, Georgia Michailidou, Rebecca Morton, Nikos Nikiforakis, Daniele Nosenzo, Wojtek Przepiorka, Ernesto Reuben, and Marie Claire Villeval as well as the participants of the 2019 Winter Experimental Social Sciences Institute at NYU Abu Dhabi, the International Meeting on Experimental and Behavioral Social Sciences at the University of Utrecht and the participants of the Networks and Time speaker series at Columbia University. We also thank the anonymous reviewer for generous comments that helped improve the paper. This work was supported by the NYUAD Center for Interacting Urban Networks (CITIES), funded by Tamkeen under the NYUAD Research Institute Award CG001 and by the Swiss Re Institute under the Quantum Cities™ initiative.

## Author information

### Contributions

K.M. and M.M. designed the research; K.M. and M.M. performed the research; K.M. and M.M. analyzed the data; and K.M. and M.M. wrote the paper. K.M. and M.M. contributed equally to this work. Authors’ last names are written in alphabetical order.

### Corresponding author

Correspondence to Kinga Makovi.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

## Additional information

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Makovi, K., Muñoz-Herrera, M. Web of lies: a tool for determining the limits of verification in preventing the spread of false information on networks. Sci Rep 11, 3845 (2021). https://doi.org/10.1038/s41598-021-82844-7
