
Rethinking fast and slow based on a critique of reaction-time reverse inference

Abstract

Do people intuitively favour certain actions over others? In some dual-process research, reaction-time (RT) data have been used to infer that certain choices are intuitive. However, the use of behavioural or biological measures to infer mental function, popularly known as ‘reverse inference’, is problematic because it does not take into account other sources of variability in the data, such as discriminability of the choice options. Here we use two example data sets obtained from value-based choice experiments to demonstrate that, after controlling for discriminability (that is, strength-of-preference), there is no evidence that one type of choice is systematically faster than the other. Moreover, using specific variations of a prominent value-based choice experiment, we are able to predictably replicate, eliminate or reverse previously reported correlations between RT and selfishness. Thus, our findings shed crucial light on the use of RT in inferring mental processes and strongly caution against using RT differences as evidence favouring dual-process accounts.

Introduction

Understanding the processes behind decision-making is a fundamental goal in the social, behavioural and cognitive sciences. Across these literatures, one central question is whether behaviour is the result of a slow and deliberative process that carefully weighs the available options, or a more automatic process that is quick but prone to certain biases. A prominent view, often referred to as dual-process theory, is that both types of processes contribute to human behaviour. In the context of value-based choice, these two processes might favour different alternatives and compete to determine the decision maker’s final choice. Thus, certain decisions may come to be thought of as ‘intuitive/automatic’ (Type I), while others may be labelled as ‘deliberative’ (Type II)1,2. The distinction is important because deliberative processes should consider features of the choice problem, while intuitive processes should be insensitive to choice details. For example, giving money to a homeless person may be seen as an automatic response to help others or as a calculated action taken only when another is truly in need. Which explanation is correct has major implications for understanding human nature, and from a practical point of view for designing institutions to encourage or discourage certain behaviours. It is, therefore, crucial to identify ways to determine whether choices are intuitive or deliberative.

It has been proposed that one way to distinguish between intuitive and deliberative choices is to examine relative reaction times (RTs), the logic being that a key feature of intuitive processes is that they can be executed more quickly than deliberative processes. Decisions produced by an intuitive process should thus tend to have shorter RTs than those from a deliberative process1,3. In recent years, several researchers have used this relationship to reason backwards from RTs to infer that fast decisions are intuitive4,5,6,7,8,9,10,11,12,13,14,15,16. However, there are well-known pitfalls associated with making reverse inferences in other domains17, and a similar argument applies to RT durations. In short, there is a key distinction between the prediction that an automatic process will occur faster than more deliberative computations, and the classification of a choice as intuitive or automatic because it happens more quickly. It is well-established that various cognitive processes contribute to RT and thus any inference based on RT must account for these processes.

However, as noted above, claims relying on RT reverse inferences are all too common in the decision science literature. Again, the problem with claims based on RT correlations is that there are multiple factors that can contribute to RT. Most prominently, there is an extensive literature documenting the relationship between discriminability and RT, ranging from memory and perception18,19,20,21,22,23,24,25 to value-based/economic choice26,27,28,29,30,31,32,33,34,35,36,37,38,39. For example, in the 1990s, a now-famous debate arose over the use of RT to infer serial versus parallel visual search processes40. During this debate several authors demonstrated that discriminability was a key determinant of the observed RT effects, thus undermining evidence for the alternative dual-process accounts41,42. In the realm of value-based choice, decision problems involving similar options tend to take a large amount of time, while choices between dissimilar options generally take less time26,29,30,33,35. Therefore, it is critically important to consider the possibility that there may just be a single deliberative process governing choices, and that variations in RT are due to the perceived similarity of the choice options and not competing processes (for related points in additional domains, see41,43,44,45,46). In fact, if discriminability is not properly accounted for in the experimental design and/or analyses, RT asymmetries are almost guaranteed in any data set.

Here we illustrate this point in depth, using social-preference and intertemporal choice paradigms, both contexts in which others have inferred dual processes. Initially, we identify RT asymmetries between one type of choice and the other, which some might take as evidence for dual processes. However, after controlling for the strength-of-preference between choice options in these paradigms, we find no evidence that one type of response is any faster than the other. Based on these findings, we argue that modifying the choice options appropriately can produce any desired RT result (for example, fast or slow selfishness). We demonstrate this experimentally by running a replication of a public-goods experiment from a recent influential study by Rand, Greene and Nowak5 (henceforth, RGN), but with two additional choice problems that vary the personal cost of the pro-social act. The three different cost levels in this data set replicate, eliminate and reverse the originally observed RT asymmetries in RGN5. These results clearly demonstrate that RT differences or correlations should not be used as evidence for dual-process theories.

Results

The RT reverse-inference problem

We know that RT in a choice task depends critically on how different the decision maker finds the options that she is considering47. This is true for both perceptual and value-based decision-making. Here we will focus on value-based decision-making. Consider an abstract choice between two options A and B. If you were to plot the expected RT as a function of the difference in subjective value (preference) between A and B, you would find that the curve peaks at a value difference of 0, and falls off steadily as the strength of the preference increases in either direction (Fig. 1a).
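The inverted-U relationship between RT and value difference can be illustrated with a minimal drift-diffusion simulation. This is a sketch only: the drift rate stands in for the subjective value difference between the options, and all parameter values are hypothetical, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm_rt(drift, n_trials=2000, threshold=1.0, dt=0.01, noise=1.0):
    """Mean first-passage time of a simple drift-diffusion process.

    Illustrative sketch: `drift` stands in for the subjective value
    difference between options A and B (positive favours A).
    """
    x = np.zeros(n_trials)                 # accumulated evidence per trial
    t = np.zeros(n_trials)                 # elapsed time per trial
    active = np.ones(n_trials, dtype=bool)
    while active.any():
        n_active = active.sum()
        x[active] += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n_active)
        t[active] += dt
        active = np.abs(x) < threshold     # trials stop once a bound is hit
    return t.mean()

# Mean RT peaks at a value difference of zero and falls off symmetrically
# as the strength of preference grows in either direction (cf. Fig. 1a).
for vd in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"value difference {vd:+.1f}: mean RT {simulate_ddm_rt(vd):.2f}")
```

Any sequential-sampling model with this basic structure produces the same qualitative curve, which is all the argument below requires.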

Now imagine an experiment where subjects make multiple decisions, each time between an option from group A and an option from group B. As a concrete example, one could imagine an experiment designed to test whether people intuitively favour Ale or Bourbon. In this experiment, subjects would make a series of choices, each time between a different Ale–Bourbon pairing. The experimenter has to make a decision about which Ales and Bourbons to include in her experiment. Depending on which items she selects, she may find that the Ales she selected are generally preferred to the Bourbons. Another experimenter, running an otherwise identical experiment on the same population, may select a different set of Ales and Bourbons and find that the Bourbons he selected are generally preferred to the Ales.

The problem arises when these two experimenters compare their RT results. Experimenter 1, having a majority of trials where Ale is preferred to Bourbon, will likely have many instances where there is a strong preference for Ale and relatively fewer instances where there is a strong preference for Bourbon. This would lead to generally faster Ale choices (Fig. 1b). On the other hand, Experimenter 2 will likely have many trials where Bourbon is strongly preferred and relatively fewer trials where Ale is strongly preferred. This would lead to generally faster Bourbon choices (Fig. 1c).

Based on their results, these two experimenters, having run seemingly identical studies, would reach opposite conclusions about whether people intuitively favour Ale or Bourbon in a fast, automatic way.

One can apply the same logic to any choice task. For instance, in cooperation-game studies, we can replace A and B with Selfish and Pro-Social. The same prediction would hold. If the experiment is set up in such a way that the pro-social options are subjectively better than the selfish options, then pro-social choices will tend to be faster. But if the experiment is slightly different, then selfish choices may be more appealing and will tend to be faster.

There are two sources of variability in the relative attractiveness of a choice category A relative to an alternative category B. The first is due to idiosyncratic individual variability in preferences. Some subjects may generally prefer A, while others generally prefer B. If two experiments have different proportions of these subjects, they may exhibit opposite RT effects. The second source of variability is due to the choice problems selected by the experimenter as outlined above. Thus for the same set of subjects, one choice problem may strongly favour A, while another choice problem strongly favours B. Returning to our earlier example, one decision may be between a renowned craft Ale and a bottom-shelf Bourbon, while another decision may be between a generic, discount Ale and a top-shelf Bourbon.

In the following two experiments, one on social preferences and one on time preferences, we demonstrate how variability in individual preferences is systematically related to RT differences. In the third experiment, an extension of the RGN public-goods study, we demonstrate how variability in the choice problems also affects RT differences.

Dictator game

In the Dictator Game experiment, subjects (n=25) in the role of the dictator made 70 binary decisions between two allocations of money, each one specifying an amount for the dictator and an amount for the receiver. For each choice, there was a selfish option and a pro-social option. Compared with the pro-social option, the selfish option gave more money to the dictator and less money to the receiver.

We start by looking at RT purely as a function of choice type (pro-social versus selfish). We find that subjects were faster when choosing the selfish option (mean of subject medians: 2,822 ms) than when choosing the pro-social option (mean of subject medians: 3,100 ms; t(24)=2.16, P=0.04) (Fig. 2a). On this basis we might conclude that selfish decisions are fast and intuitive, while pro-social choices are more deliberative.

It is important to note that each trial has a different tradeoff between what the dictator has to personally give up and how much he benefits the receiver by choosing the pro-social option. In some trials the dictator has to give up very little in exchange for a big gain for the receiver, but in other trials the dictator has to give up a lot in exchange for a small gain for the receiver, while a third type of trial falls somewhere in between. We refer to the first type of trials as ‘high-benefit’ trials, and to the second type of trials as ‘low-benefit’ trials.

So far we have only discussed objective tradeoffs, that is, dollar cost to the dictator versus dollar benefit to the receiver. For any given trial, these tradeoffs are identical across subjects. But of course, subjects may differentially value money for themselves compared with money for others. ‘Selfish’ subjects place relatively low value on money for others, while ‘pro-social’ subjects place relatively high value on money for others. For any given trial, a selfish subject will thus be more likely to choose the selfish option than his pro-social counterpart.

Based on the arguments laid out above, we would expect that in a given experiment a selfish subject (that is, one who primarily chooses the selfish option in this experiment) would tend to make faster selfish decisions and slower pro-social decisions. The more selfish the subject is, the harder it will be for him to be pro-social. Thus, more extreme selfishness will result in a larger difference between pro-social and selfish RTs. Similarly, a pro-social subject (again defined by his decisions in the experimental choice set) would tend to make slower selfish decisions and faster pro-social decisions. That is, the more extreme the pro-sociality, the bigger the gap between selfish and pro-social RTs.

Turning to the data, we indeed find a strong correlation between a subject’s probability of choosing the selfish option (that is, his degree of selfishness) and the difference between his median RT for pro-social and selfish choices (r=0.6, t(23)=3.56, P=0.002) (Fig. 2b). Furthermore, note that at indifference (P(choose selfish)=0.5) the difference between median RTs is 0. In other words, if our experiment is perfectly designed to make our subject choose each option half of the time (and with equal vigour), then we should find no difference in RTs. However, such experiments are rare due to the fact that they must be tailored to each individual.
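The per-subject statistic behind this correlation can be made concrete on synthetic data. The sketch below is not the authors' analysis: the trial records are invented and deliberately built so that choices of a subject's less-preferred option are slower; its point is only to show how P(choose selfish) and the median RT difference are computed and correlated across subjects.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical trial-level records: (subject id, chose_selfish, rt_ms),
# generated so that less-preferred choices take longer (an assumption).
records = []
for subj in range(25):
    p_selfish = rng.uniform(0.1, 0.9)        # the subject's selfishness
    for _ in range(70):
        chose_selfish = rng.random() < p_selfish
        slowness = (1 - p_selfish) if chose_selfish else p_selfish
        rt = 2500 + 1500 * slowness + rng.normal(0, 200)
        records.append((subj, chose_selfish, rt))

# Per subject: P(choose selfish) and median RT difference (pro-social - selfish).
xs, ys = [], []
for subj in range(25):
    rows = [rec for rec in records if rec[0] == subj]
    selfish_rts = [rec[2] for rec in rows if rec[1]]
    prosocial_rts = [rec[2] for rec in rows if not rec[1]]
    if selfish_rts and prosocial_rts:
        xs.append(len(selfish_rts) / len(rows))
        ys.append(np.median(prosocial_rts) - np.median(selfish_rts))

r = np.corrcoef(xs, ys)[0, 1]  # positive, echoing the reported pattern
```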

In our experiment, the bias was clearly towards low-benefit trials because subjects chose the selfish option 64% of the time, yielding selfish choices that were faster than the pro-social choices. Thus, the RT difference observed in these experiments is likely a result of the fact that subjects chose the selfish option more than half of the time.

To control for choice difficulty when examining RT, we applied a well-established model of social preferences, developed by Fehr and Schmidt and later extended by Charness and Rabin, to estimate each subject’s preference for pro-social acts48,49. This utility function allows us to convert each two-dimensional choice option (dictator payoff and receiver payoff) into a single subjective value. Specifically, we use the following utility function:

U(xi, xj) = xi − βr(xi − xj) − αs(xj − xi),

where xi is the payoff to the dictator, xj is the payoff to the receiver, r and s are dummy variables for whether the dictator’s payoff is higher or lower than the receiver’s payoff, respectively, and β and α are individually fitted preference parameters for each of those contingencies, respectively. We can then use the difference in utility between the chosen and unchosen options as an index of the strength-of-preference.
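The utility described above can be written directly in code using the standard Fehr–Schmidt inequity-aversion form, which matches the description of the dummies and parameters. The payoffs and parameter values below are hypothetical, chosen only for illustration.

```python
def dictator_utility(x_i, x_j, alpha, beta):
    """Fehr-Schmidt-style utility for the dictator (illustrative sketch).

    x_i: payoff to the dictator; x_j: payoff to the receiver.
    beta scales the disutility of being ahead (x_i > x_j);
    alpha scales the disutility of being behind (x_i < x_j).
    """
    r = 1 if x_i > x_j else 0  # dummy: dictator's payoff is higher
    s = 1 if x_i < x_j else 0  # dummy: dictator's payoff is lower
    return x_i - beta * r * (x_i - x_j) - alpha * s * (x_j - x_i)

# Strength-of-preference for one hypothetical trial: utility of the
# selfish option minus utility of the pro-social option.
u_selfish = dictator_utility(10, 2, alpha=0.5, beta=0.3)   # 10 - 0.3*8 = 7.6
u_prosocial = dictator_utility(6, 6, alpha=0.5, beta=0.3)  # no inequality: 6.0
strength_of_preference = u_selfish - u_prosocial
```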

Next, we analysed RTs as a function of both choice type (selfish versus pro-social) and the difference in utility between the two choice options. Figure 2c displays two features of the data. First, mean RT decreases as the utility difference increases, that is, as one option becomes increasingly better than the other. Second, at each level of utility difference, there is no difference in RT between selfish and pro-social choices. What drives the overall RT difference in the data set is that there are simply more trials where there was a high utility-difference advantage for the selfish option.

To test these observations more rigorously, we conducted a mixed-effects regression with log(RT) as the dependent variable, explained by independent variables for utility difference and a dummy for pro-social choices. This regression revealed a significant effect of utility difference (t(1,740)=7.37, P<0.001) but no effect of the dummy for pro-social choice (t(1,740)=0.87, P=0.38) on RT. In other words, the potential conclusion that selfish decisions are fast and intuitive is merely an artefact of the parameters of the experiment. Once we correct for the strength-of-preference in each trial, using utility differences, there is no evidence that selfish or pro-social choices take different amounts of time.
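The logic of this regression can be demonstrated on synthetic data. For simplicity the sketch below uses ordinary least squares rather than the mixed-effects model reported above, and every data-generating number is invented; it shows only that when log(RT) depends on utility difference alone, the choice-type dummy correctly comes out near zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Synthetic trials built to mirror the reported pattern: log(RT) falls as
# the utility difference grows, and choice type has no independent effect.
utility_diff = rng.uniform(0.0, 3.0, n)
prosocial_dummy = rng.integers(0, 2, n).astype(float)
log_rt = 8.0 - 0.3 * utility_diff + rng.normal(0.0, 0.2, n)

# OLS via the normal equations: columns are intercept, utility difference,
# and the pro-social dummy.
X = np.column_stack([np.ones(n), utility_diff, prosocial_dummy])
coef, *_ = np.linalg.lstsq(X, log_rt, rcond=None)
intercept, b_udiff, b_prosocial = coef
# b_udiff recovers the negative utility-difference effect;
# b_prosocial is statistically indistinguishable from zero.
```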

Intertemporal choice

So far we have demonstrated that after taking choice difficulty into account, our data show no difference in RT for self-centred versus other-regarding choices. However, claims of competing dual processes in decision-making are not limited to the domain of social preferences. For example, some have argued that different processes govern choices between immediate and delayed rewards50,51,52 (but see the studies by Kable and Glimcher53,54). If this were the case, one might expect to see RT differences between such choices.

To investigate this possibility, we analysed a temporal-discounting data set where 41 subjects made 216 binary choices between $25 now and some larger amount x, t days in the future55. In brain-imaging studies like this one, it is often important to have a balanced design. In this study, subjects chose the immediate $25 option on 53% of trials, which was not significantly different from 50% (t(40)=0.88, P=0.39). On that basis, we should expect no significant difference in RT for immediate versus delayed choices. Indeed, we find no difference in RT for immediate choices (1,169 ms) compared with delayed choices (1,229 ms, t(40)=1.46, P=0.15), though the difference goes in the expected direction, with the slightly less-favoured delayed choices being slightly slower.

For the sake of argument, let us suppose that the authors had not been so careful with their design, or had, for instance, wanted more trials where the immediate option would be chosen. Thus, let us examine a subset of the full experiment, focusing on trials where the immediate option is quite attractive compared with the delayed option and so we would have predicted more choices of the immediate option, based on median preferences reported in previous experiments53 (see Methods).

In this reduced data set, we do find an overall difference in RT for immediate choices (1,152 ms) compared with delayed choices (1,257 ms, t(40)=2.22, P=0.03) (Fig. 3a). On this basis we might conclude, consistent with a large fraction of the literature, that choices of the immediate payoff are fast and intuitive, while choosing to wait for a bigger payoff is slow and deliberative.

However, just as in the social decisions, there is a strong correlation between a subject’s probability of choosing the delayed option and the difference in the median RT between choices for the immediate and delayed options (r=0.61, t(39)=4.79, P=10⁻⁵) (Fig. 3b). In other words, impulsive subjects take more time when they choose the delayed option, while patient subjects take more time when they choose the immediate option. The overall RT effect in the reduced data set is simply due to the fact that there are many more trials where choosing the immediate option is easy for everyone (compared with easy-delayed trials), and so choices of the immediate option are more common (68%) and faster.

As in the Dictator Game analysis, we next applied a model of temporal discounting to control for strength-of-preference when analysing RTs. To do so, we converted the delayed reward into its present discounted value, using the standard hyperbolic-discounting function:

U = x / (1 + kt),

where x is the delayed amount, t is the delay in days and k is an individually fitted discount parameter.

Reduced data set

To produce the reduced data set, we utilized the same hyperbolic-discounting function described above. To independently select a subset of the trials with which to demonstrate our point concerning RT inferences, we took the median k value (k=0.01 per day) from a similar study on temporal discounting53. We then removed all trials in which the utility of the delayed option was more than $25. This reduced the data set from 216 trials to 140 trials per subject.
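This trial-selection step can be sketched as follows. The (amount, delay) pairs are hypothetical stand-ins for the real stimuli, and the cut-off is applied in the direction that leaves the immediate option attractive, matching the stated aim of the reduced data set.

```python
def present_value(amount, delay_days, k=0.01):
    """Hyperbolic present value, V = x / (1 + k*t), at the median
    discount rate k = 0.01 per day taken from the discounting literature."""
    return amount / (1.0 + k * delay_days)

# Hypothetical (delayed amount, delay in days) pairs. Trials where the
# delayed option's present value beats the immediate $25 are dropped,
# leaving trials where the immediate option is predicted to be chosen.
trials = [(30, 7), (40, 180), (60, 30), (27, 90)]
reduced = [(x, t) for (x, t) in trials if present_value(x, t) < 25]
# reduced -> [(40, 180), (27, 90)]
```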

Public-goods experiment

We recruited 204 non-economics students who had no experience with public-goods games (PGG) from the regular subject pool of the decision laboratory of the Department of Economics at the University of Zurich, where the sessions took place in May and June 2013. The sample size was chosen to roughly match that of RGN. All subjects gave written informed consent. The study was approved by the ethics committee of the Canton of Zurich. The experiment was computerized with the software z-Tree67. Subjects first received general details about the game on the first computer screen. On the subsequent decision screens, subjects were told the respective group benefit from the public good, and how much they would earn if the others in their group contributed everything and if they themselves contributed either everything or nothing (as in RGN). They could then type in an integer contribution from 0 to 40 points (2 points=1 CHF) and press an ‘OK’ button on the screen. RT was calculated as the time from the onset of the decision screen to the time that subjects clicked the ‘OK’ button.

Each subject made three such decisions, one for each group-benefit level (0.3, 0.5 and 0.9 monetary units (MU) per subject), in counterbalanced order. After all three decisions, subjects answered two incentivized comprehension questions asking for the efficient contribution and the contribution that maximizes own earnings (as in RGN). A total of 175 subjects answered both questions correctly; the rest were excluded from subsequent analysis.

At the end of the study, subjects were randomly assigned to groups of four and paid according to their choices from one of the three randomly selected games. This was the only time that subjects received any feedback about others’ choices. The overall payment to the participants consisted of a fixed show-up fee (10 CHF) plus the payment from the randomly selected PGG. On average, participants earned 27.55 CHF (ranging from 12.40 to 44.60 CHF). Sessions lasted for a little less than 1 h, including the payment of the subjects.

Simulations

We simulated 20 ‘subjects’ for each experiment (Fig. 1b,c). For each subject, we randomly selected a logit temperature parameter from a uniform distribution between 0.1 and 10, and we also randomly selected 20 choice trials from a uniform distribution over the net preference values given in Fig. 1b,c, depending on the experiment. We then determined the average probability of choosing A for that simulated subject across its 20 choice trials. For each subject, we determined the average RT for A and B choices by looking at each trial and multiplying the probability of making either choice by the appropriate RT depicted in Fig. 1a–c. We then computed the normalized sum of these weighted RTs across the 20 choice trials, once for the A choices and once for the B choices.
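The procedure above can be sketched in code. The net-preference range and the RT curve below are hypothetical stand-ins for the values depicted in Fig. 1, which are not reproduced here; only the structure of the simulation follows the description.

```python
import numpy as np

rng = np.random.default_rng(2)

def expected_rt(net_pref):
    """Hypothetical stand-in for the RT curve of Fig. 1a: slowest at
    indifference, faster as the net preference for A over B grows."""
    return 3.0 - 1.5 * np.tanh(np.abs(net_pref))

def simulate_subject(pref_lo, pref_hi, n_trials=20):
    """One simulated subject: a random logit temperature and net-preference
    values drawn uniformly, as in the simulation described above."""
    temperature = rng.uniform(0.1, 10.0)
    prefs = rng.uniform(pref_lo, pref_hi, n_trials)
    p_a = 1.0 / (1.0 + np.exp(-prefs / temperature))  # logit choice rule
    rts = expected_rt(prefs)
    # Probability-weighted (normalized) RTs for A choices and for B choices.
    rt_a = np.sum(p_a * rts) / np.sum(p_a)
    rt_b = np.sum((1.0 - p_a) * rts) / np.sum(1.0 - p_a)
    return rt_a, rt_b

# 'Experimenter 1': net preferences skewed towards A, so strong-preference
# trials mostly favour A and A choices come out faster on average.
results = [simulate_subject(-1.0, 3.0) for _ in range(20)]
mean_rt_a = float(np.mean([r[0] for r in results]))
mean_rt_b = float(np.mean([r[1] for r in results]))
```

Skewing the preference range the other way (for example, drawing from [-3, 1]) reverses the asymmetry, which is the point of the Ale–Bourbon example.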

How to cite this article: Krajbich, I. et al. Rethinking fast and slow based on a critique of reaction-time reverse inference. Nat. Commun. 6:7455 doi: 10.1038/ncomms8455 (2015).

References

1. Kahneman, D. Thinking, Fast and Slow. Macmillan (2011).

2. Stanovich, K. E. Who Is Rational?: Studies of Individual Differences in Reasoning. Psychology Press (1999).

3. Evans, J. S. B. T. & Stanovich, K. E. Dual-process theories of higher cognition: advancing the debate. Perspect. Psychol. Sci. 8, 223–241 (2013).

4. Rubinstein, A. Instinctive and cognitive reasoning: a study of response times. Econ. J. 117, 1243–1259 (2007).

5. Rand, D. G., Greene, J. D. & Nowak, M. A. Spontaneous giving and calculated greed. Nature 489, 427–430 (2012).

6. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M. & Cohen, J. D. An fMRI investigation of emotional engagement in moral judgment. Science 293, 2105–2108 (2001).

7. Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M. & Cohen, J. D. The neural bases of cognitive conflict and control in moral judgment. Neuron 44, 389–400 (2004).

8. Rand, D. G. et al. Social heuristics shape intuitive cooperation. Nat. Commun. 5, 3677 (2014).

9. Stupple, E. J. N., Ball, L. J., Evans, J. S. B. T. & Kamal-Smith, E. When logic and belief collide: individual differences in reasoning times support a selective processing model. J. Cogn. Psychol. 23, 931–941 (2011).

10. De Neys, W. & Glumicic, T. Conflict monitoring in dual process theories of thinking. Cognition 106, 1248–1299 (2008).

11. Lotito, G., Migheli, M. & Ortona, G. Is cooperation instinctive? Evidence from the response times in a public goods game. J. Bioeconomics 15, 123–133 (2013).

12. Nielsen, U. H., Tyran, J.-R. & Wengström, E. Second thoughts on free riding. Econ. Lett. 122, 136–139 (2014).

13. Piovesan, M. & Wengström, E. Fast or fair? A study of response times. Econ. Lett. 105, 193–196 (2009).

14. Zaki, J. & Mitchell, J. P. Intuitive prosociality. Curr. Dir. Psychol. Sci. 22, 466–470 (2013).

15. Bargh, J. A. & Chartrand, T. L. in Handbook of Research Methods in Social Psychology (eds Reis, H. & Judd, C.) 253–285. Cambridge Univ. Press (2000).

16. Achtziger, A. & Alós-Ferrer, C. Fast or rational? A response-times study of Bayesian updating. Manag. Sci. 60, 923–938 (2013).

17. Poldrack, R. A. Can cognitive processes be inferred from neuroimaging data? Trends Cogn. Sci. 10, 59–63 (2006).

18. Ratcliff, R. A theory of memory retrieval. Psychol. Rev. 85, 59–108 (1978).

19. Gold, J. I. & Shadlen, M. N. Neural computations that underlie decisions about sensory stimuli. Trends Cogn. Sci. 5, 10–16 (2001).

20. Bogacz, R., Wagenmakers, E.-J., Forstmann, B. U. & Nieuwenhuis, S. The neural basis of the speed–accuracy tradeoff. Trends Neurosci. 33, 10–16 (2009).

21. Ratcliff, R. A diffusion model account of response time and accuracy in a brightness discrimination task: fitting real data and failing to fit fake but plausible data. Psychon. Bull. Rev. 9, 278–291 (2002).

22. Usher, M. & McClelland, J. The time course of perceptual choice: the leaky, competing accumulator model. Psychol. Rev. 108, 550–592 (2001).

23. Ratcliff, R. & McKoon, G. The diffusion model: theory and data for two-choice decision tasks. Neural Comput. 20, 873–922 (2008).

24. Wenzlaff, H., Bauer, M., Maess, B. & Heekeren, H. R. Neural characterization of the speed–accuracy tradeoff in a perceptual decision-making task. J. Neurosci. 31, 1254–1266 (2011).

25. Mulder, M. J., Wagenmakers, E.-J., Ratcliff, R., Boekel, W. & Forstmann, B. U. Bias in the brain: a diffusion model analysis of prior probability and potential payoff. J. Neurosci. 32, 2335–2343 (2012).

26. Krajbich, I., Armel, K. C. & Rangel, A. Visual fixations and the computation and comparison of value in simple choice. Nat. Neurosci. 13, 1292–1298 (2010).

27. Krajbich, I. & Rangel, A. Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions. Proc. Natl Acad. Sci. USA 108, 13852–13857 (2011).

28. Krajbich, I., Oud, B. & Fehr, E. Benefits of neuroeconomic modeling: new policy interventions and predictors of preference. Am. Econ. Rev. 104, 501–506 (2014).

29. Polania, R., Krajbich, I., Grueschow, M. & Ruff, C. C. Neural oscillations and synchronization differentially support evidence accumulation in perceptual and value-based decision making. Neuron 82, 709–720 (2014).

30. Milosavljevic, M., Malmaud, J., Huth, A., Koch, C. & Rangel, A. The drift diffusion model can account for the accuracy and reaction time of value-based choices under high and low time pressure. Judgm. Decis. Mak. 5, 437–449 (2010).

31. Philiastides, M. G. & Ratcliff, R. Influence of branding on preference-based decision making. Psychol. Sci. 24, 1208–1215 (2013).

32. Busemeyer, J. R. & Townsend, J. T. Decision field theory: a dynamic-cognitive approach to decision making in an uncertain environment. Psychol. Rev. 100, 432–459 (1993).

33. Krajbich, I., Lu, D., Camerer, C. & Rangel, A. The attentional drift-diffusion model extends to simple purchasing decisions. Front. Psychol. 3, 193 (2012).

34. Basten, U., Biele, G., Heekeren, H. & Fiebach, C. J. How the brain integrates costs and benefits during decision making. Proc. Natl Acad. Sci. USA 107, 21767–21772 (2010).

35. Hunt, L. T. et al. Mechanisms underlying cortical activity during value-guided choice. Nat. Neurosci. 15, 470–476 (2012).

36. Hare, T., Schultz, W., Camerer, C., O’Doherty, J. P. & Rangel, A. Transformation of stimulus value signals into motor commands during simple choice. Proc. Natl Acad. Sci. USA 108, 18120–18125 (2011).

37. De Martino, B., Fleming, S. M., Garrett, N. & Dolan, R. J. Confidence in value-based choice. Nat. Neurosci. 16, 105–110 (2013).

38. Cavanagh, J. F., Wiecki, T. V., Kochar, A. & Frank, M. J. Eye tracking and pupillometry are indicators of dissociable latent decision processes. J. Exp. Psychol. Gen. 143, 1476–1488 (2014).

39. Gluth, S., Rieskamp, J. & Buechel, C. Deciding when to decide: time-variant sequential sampling models explain the emergence of value-based decisions in the human brain. J. Neurosci. 32, 10686–10698 (2012).

40. Treisman, A. in Attention: Selection, Awareness, and Control: A Tribute to Donald Broadbent (eds Baddeley, A. D. & Weiskrantz, L.) 5–35. Clarendon Press/Oxford Univ. Press (1993).

41. Palmer, J. in Visual Attention (ed. Wright, R. D.) 8–348. Oxford Univ. Press (1998).

42. McElree, B. & Carrasco, M. The temporal dynamics of visual search: evidence for parallel processing in feature and conjunction searches. J. Exp. Psychol. Hum. Percept. Perform. 25, 1517–1539 (1999).

43. Baron, J., Guercay, B., Moore, A. B. & Starcke, K. Use of a Rasch model to predict response times to utilitarian moral dilemmas. Synthese 189, 107–117 (2012).

44. Evans, J. S. B. T. in The Oxford Handbook of Thinking and Reasoning (eds Holyoak, K. J. & Morrison, R. G.) 115–133. Oxford Univ. Press (2012).

45. Phelps, E. A., Lempert, K. M. & Sokol-Hessner, P. Emotion and decision making: multiple modulatory neural circuits. Annu. Rev. Neurosci. 37, 263–287 (2014).

46. Kahane, G. On the wrong track: process and content in moral psychology. Mind Lang. 27, 519–545 (2012).

47. Henmon, V. A. C. The Time of Perception as a Measure of Differences in Sensations. Science Press (1906).

48. Fehr, E. & Schmidt, K. M. A theory of fairness, competition, and cooperation. Q. J. Econ. 114, 817–868 (1999).

49. Charness, G. & Rabin, M. Understanding social preferences with simple tests. Q. J. Econ. 117, 817–869 (2002).

50. McClure, S. M., Laibson, D. I., Loewenstein, G. & Cohen, J. D. Separate neural systems value immediate and delayed monetary rewards. Science 306, 503–507 (2004).

51. McClure, S. M., Ericson, K. M., Laibson, D. I., Loewenstein, G. & Cohen, J. D. Time discounting for primary rewards. J. Neurosci. 27, 5796–5804 (2007).

52. Metcalfe, J. & Mischel, W. A hot/cool-system analysis of delay of gratification: dynamics of willpower. Psychol. Rev. 106, 3–19 (1999).

53. Kable, J. W. & Glimcher, P. W. The neural correlates of subjective value during intertemporal choice. Nat. Neurosci. 10, 1625–1633 (2007).

54. Kable, J. W. & Glimcher, P. W. An ‘as soon as possible’ effect in human intertemporal decision making: behavioral evidence and neural mechanisms. J. Neurophysiol. 103, 2513–2531 (2010).

55. Hare, T. A., Hakimi, S. & Rangel, A. Activity in dlPFC and its effective connectivity to vmPFC are associated with temporal discounting. Front. Neurosci. 8, 50 (2014).

56. Ledyard, J. O. in The Handbook of Experimental Economics (eds Kagel, J. H. & Roth, A. E.) 111–194. Princeton Univ. Press (1995).

57. Tinghög, G. et al. Intuition and cooperation reconsidered. Nature 498, E1–E2 (2013).

58. Verkoeijen, P. P. J. L. & Bouwmeester, S. Does intuition cause cooperation? PLoS ONE 9, e96654 (2014).

59. Rand, D. G., Newman, G. E. & Wurzbacher, O. M. Social context and the dynamics of cooperative choice. J. Behav. Decis. Mak. 28, 159–166 (2015).

60. Cone, J. & Rand, D. G. Time pressure increases cooperation in competitively framed social dilemmas. PLoS ONE 9, e115756 (2014).

61. Cornelissen, G., Dewitte, S. & Warlop, L. Are social value orientations expressed automatically? Decision making in the dictator game. Pers. Soc. Psychol. Bull. 37, 1080–1090 (2011).

62. Kovarik, J. Giving it now or later: altruism and discounting. Econ. Lett. 102, 152–154 (2009).

63. Rand, D. G. & Kraft-Todd, G. T. Reflection does not undermine self-interested prosociality. Front. Behav. Neurosci. 8, 300 (2014).

64. Roch, S. G., Lane, J. A. S., Samuelson, C. D., Allison, S. T. & Dent, J. L. Cognitive load and the equality heuristic: a two-stage model of resource overconsumption in small groups. Organ. Behav. Hum. Decis. Process. 83, 185–212 (2000).

65. Ruff, C. C., Ugazio, G. & Fehr, E. Changing social norm compliance with noninvasive brain stimulation. Science 342, 482–484 (2013).

66. Schulz, J. F., Fischbacher, U., Thöni, C. & Utikal, V. Affect and fairness: dictator games under cognitive load. J. Econ. Psychol. 41, 77–87 (2014).

67. Fischbacher, U. z-Tree: Zurich toolbox for ready-made economic experiments. Exp. Econ. 10, 171–178 (2007).

Acknowledgements

We thank Shabnam Hakimi, Antonio Rangel and Yosuke Morishima for sharing their data with us. We also thank Daniel Kahneman, Gustav Tinghög and Daniel Burghart for their helpful comments. I.K. gratefully acknowledges support from FINRISK. T.H. gratefully acknowledges support from the US National Science Foundation grant number 0851408. E.F. gratefully acknowledges support from the European Research Council (grant number 295642, ‘Foundations of Economic Preferences’) and the Swiss National Science Foundation (grant number 100018_140734/1, ‘The distribution and determinants of social preferences’).

Author information

Contributions

I.K. proposed the research question. B.B., E.F., I.K. and T.H. designed the experiments. B.B., I.K. and T.H. collected the data. I.K. analysed the data with inputs from E.F. and T.H. E.F., I.K. and T.H. wrote the paper with inputs from B.B.

Corresponding author

Correspondence to Ian Krajbich.

Ethics declarations

Competing interests

The authors declare no competing financial interests.


Krajbich, I., Bartling, B., Hare, T. et al. Rethinking fast and slow based on a critique of reaction-time reverse inference. Nat Commun 6, 7455 (2015). https://doi.org/10.1038/ncomms8455
