Perspective

Lessons for artificial intelligence from the study of natural stupidity

Abstract

Artificial intelligence and machine learning systems are increasingly replacing human decision makers in commercial, healthcare, educational and government contexts. But rather than eliminating human errors and biases, these algorithms have in some cases been found to reproduce or amplify them. We argue that to better understand how and why these biases develop, and when they can be prevented, machine learning researchers should look to the decades-long literature on biases in human learning and decision-making. We examine three broad causes of bias: small and incomplete datasets, learning from the results of one's own decisions, and biased inference and evaluation processes. For each, findings from the psychology literature are introduced along with connections to the machine learning literature. We argue that rather than viewing machine systems as universal improvements over human decision makers, policymakers and the public should acknowledge that these systems share many of the same limitations that frequently inhibit human judgement, and for many of the same reasons.
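One concrete consequence of the first cause, small and incomplete datasets, is that rare but important events often never appear in a learner's sample at all. The sketch below (illustrative parameters, not drawn from the article) checks the simulated omission rate against the closed-form value (1 − p)^n:

```python
import random

def rare_event_omission(p=0.01, n=20, trials=100_000, seed=0):
    """Estimate how often an event with probability p never
    appears in a sample of n independent observations."""
    rng = random.Random(seed)
    misses = sum(
        all(rng.random() >= p for _ in range(n))
        for _ in range(trials)
    )
    return misses / trials

# Analytically, the omission probability is (1 - p) ** n.
print(rare_event_omission())   # close to (1 - 0.01) ** 20 ≈ 0.818
```

A 1-in-100 event is absent from roughly four out of five samples of size 20, so a learner relying on such samples behaves as if the event cannot happen.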




Author information

Competing interests

A.S.R. is employed by Flatiron Health, an independent subsidiary of Roche.

Correspondence to Alexander S. Rich.

Fig. 1: In illusory correlations, an agent mistakenly comes to believe that there is a correlation between a variable of interest and membership in a larger (or more data-rich) group or individual.
Fig. 2: An agent’s beliefs about whether an option is mostly good or mostly bad evolve as the agent experiences a series of positive and negative outcomes, potentially causing the hot stove effect.
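The dynamic in this figure, often called the hot stove effect, can be reproduced with a minimal greedy learner (illustrative parameters, not the article's model): a bad early experience stops further sampling of the risky option, so negative estimation errors are never corrected and the final estimate is biased downward even though the risky option's true mean equals the safe one's.

```python
import random

def hot_stove(alpha=0.3, n_trials=50, n_agents=5000, seed=2):
    """Greedy agent chooses between a safe option (always 0) and a
    risky option (mean 0, s.d. 1); returns the mean final estimate
    of the risky option across agents."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_agents):
        q_risky = 0.0  # estimate of the risky option; safe is fixed at 0
        for _ in range(n_trials):
            # Greedy choice, with random tie-breaking at equality.
            if q_risky > 0 or (q_risky == 0 and rng.random() < 0.5):
                reward = rng.gauss(0, 1)
                q_risky += alpha * (reward - q_risky)
            # else: safe option chosen; no new data about the risky one.
        finals.append(q_risky)
    return sum(finals) / len(finals)

# The risky option's true mean is 0, but the average final estimate
# is clearly negative: bad draws end sampling, good draws do not.
print(hot_stove())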
Fig. 3: An ‘attentional learning trap’ can emerge with choice-contingent feedback in some environments.
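A simple way to see how such a trap forms (the environment and parameters below are hypothetical, chosen only for illustration): stimuli have two binary features and only the (1, 1) combination is harmful, but a learner that attends to one feature lumps the safe (1, 0) stimulus together with the dangerous one. Because avoiding yields no feedback, the mistaken avoidance is never corrected.

```python
import random

def learning_trap(n_trials=500, attend_both=True, seed=3):
    """Approach/avoid learner with choice-contingent feedback.
    Only stimulus (1, 1) is dangerous; avoiding gives 0 and no data."""
    rng = random.Random(seed)
    # Internal state: both features, or only the first under narrow attention.
    key = (lambda f: f) if attend_both else (lambda f: f[0])
    q = {}
    for _ in range(n_trials):
        f = (rng.randint(0, 1), rng.randint(0, 1))
        k = key(f)
        est = q.get(k, 0.5)          # optimistic start encourages approach
        if est > 0:                  # greedy approach/avoid decision
            reward = -3 if f == (1, 1) else 1
            q[k] = est + 0.2 * (reward - est)
    # Fraction of the three safe stimuli the final policy approaches.
    safe = [(0, 0), (0, 1), (1, 0)]
    return sum(q.get(key(f), 0.5) > 0 for f in safe) / len(safe)

# With full attention the learner approaches all three safe stimuli;
# with attention restricted to one feature it avoids (1, 0) as well,
# and choice-contingent feedback prevents it from ever finding out.
print(learning_trap(attend_both=True), learning_trap(attend_both=False))
```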
Fig. 4: Reference-dependent risk preferences can be produced by Bayesian prediction.