
Explicit representation of confidence informs future value-based decisions

  • Nature Human Behaviour 1, Article number: 0002 (2016)
  • doi:10.1038/s41562-016-0002

Abstract

Humans can reflect on decisions and report variable levels of confidence. But why maintain an explicit representation of confidence for choices that have already been made and therefore cannot be undone? Here we show that an explicit representation of confidence is harnessed for subsequent changes of mind. Specifically, when confidence is low, participants are more likely to change their minds when the same choice is presented again, an effect that is most pronounced in participants with greater fidelity in their confidence reports. Furthermore, we show that choices reported with high confidence follow a more consistent pattern (fewer transitivity violations). Finally, by tracking participants’ eye movements, we demonstrate that lower-level gaze dynamics can track uncertainty but do not directly impact changes of mind. These results suggest that an explicit and accurate representation of confidence has a positive impact on the quality of future value-based decisions.

As we navigate through life, we are constantly faced with choices that require us to assign and compare the values of different options or actions. Some of these value-based choices seem relatively straightforward (‘what should I eat for lunch?’) and others less so (‘which job offer should I take?’). No matter how simple or complex these choices are, they are often accompanied by a sense of confidence in having made the right choice. Recent work has shown that it is possible to behaviourally and computationally dissociate a value estimate (‘how much do I like something?’) from internal fluctuations in confidence (‘how sure am I?’). For example, at a behavioural level it has been shown that confidence shares only a limited amount of variance with value and instead reflects an assessment of choice accuracy1. This relation between value and confidence is neatly accounted for computationally by assuming that confidence emerges from the dynamics of noisy accumulators in an evidence-accumulation framework1–4. More recently, Lebreton and colleagues5 showed that confidence may be an inherent property of value estimation, sharing a quadratic relationship with a linear rating of value (see also Barron and colleagues6). But what is the function of confidence? Why maintain an explicit representation of confidence when a choice has already been made and therefore cannot be undone?

According to one view, confidence can be thought of as a by-product of a stochastic accumulation process that is implemented in the ventromedial prefrontal cortex during value comparison. Research indicates the brain constructs an explicit representation of confidence that underpins verbal reports7,8. Studies suggest that the rostrolateral prefrontal cortex represents confidence in both value-based and perceptual decisions1,9–11. Explicit representations of confidence allow individuals to communicate the strength of their beliefs to others, facilitating group decisions12,13, but may play little role in one’s own decision process.

An alternative view is that explicit representations of confidence are critical for guiding one’s own future behaviour14. Work in perceptual decision-making has revealed commonalities between mechanisms supporting confidence construction and error monitoring15,16, suggesting changes of mind may be informed by confidence4. However, whether confidence is harnessed over a longer timescale to guide future choices is unknown. We aim to test the hypothesis that an explicit (and well-tuned) representation of confidence in a recent choice can guide a decision maker’s choice when faced with the same (or a similar) decision again. To test this hypothesis we presented participants with the same set of choices more than once during the course of two experiments and tested which factors were associated with a change of mind. We then investigated how confidence related to the degree of internal consistency in their patterns of choice. Choice consistency can be quantified by measuring the degree of transitivity across choices. Here we introduced a novel method for tagging choices as conforming to or violating transitivity. Using this method we were able to show that explicit representations of confidence are associated with more consistent patterns of choice as a consequence of changes of mind. Finally, we directly contrasted the effect of explicit confidence reports with lower-level markers of uncertainty that we gathered using eye tracking, revealing that changes of mind were specifically associated with explicit reports of confidence.

Results

We collected data in two experiments in which hungry participants made choices between food items (which they could consume later) while their eye movements were monitored. In the first experiment, the 28 participants were shown high-definition pictures of two snacks and were asked to choose their preferred one (Fig. 1a). In a second experiment, 24 participants chose their preferred snack among three snacks available in each trial (Fig. 1d). After making each choice, participants reported their degree of confidence in having made the ‘correct’ choice, which in this design equates to choosing the higher-valued item. The value for individual items was elicited using a standard incentive-compatible Becker–DeGroot–Marschak (BDM) method17. The experimental procedure we used was adapted from a task we developed previously1 (see Methods for more details).

Figure 1: Relation between confidence and choice.

a, In experiment 1, participants were presented with two snack items and were required to choose one item to consume at the end of the experiment (snacks are shown here unwrapped for copyright reasons; in the actual experiment they were shown in their wrappers). d, In experiment 2, participants chose between three options, and the presentation of the stimuli was contingent on which box participants looked at. In both experiments, participants indicated their confidence that they had made a correct decision on a visual analogue scale after each choice that they made. b, Probability of choosing the item on the right as a function of the difference in value between the two available options. e, Probability of choosing the reference item (see Methods), as a function of the value difference (DV) between the reference item and the mean value of the alternatives. The black lines indicate high-confidence trials and the grey lines low-confidence trials (as determined by a median split). Each graph shows the z-scored data pooled across participants: points represent quartiles of difference in value and the error bars show standard errors. c,f, Fixed-effects coefficients from hierarchical logistic regression models that predict choice. The graph for experiment 1 shows the coefficients that predict the probability of choosing the right-hand option (c); the graph for experiment 2 shows the coefficients that predict the probability of choosing the reference option (f) (see Methods). Error bars show the 95% confidence intervals. The sample size for experiment 1 was 28 participants (each completing 240 trials); the sample size for experiment 2 was 24 participants (each completing 144 trials). DV × confidence, interaction of difference in value and confidence; DV × SV, interaction of difference in value and summed value (SV). ***P < 0.001; **P < 0.01; *P < 0.05 (two-sided t-test). Eye-tracking variables are reported in blue.

Relation between confidence and choice

In line with a wealth of research18–22, we found that the difference in value between the two items (constructed from values elicited through a BDM bidding procedure) was a reliable predictor of participants’ choices in both experiments (hierarchical logistic regression; experiment 1: z = 11.48, P < 0.0001, Fig. 1c,f; experiment 2: z = 6.66, P < 0.0001, Fig. 1b,e). Note that in the three-choice design (experiment 2) value difference (DV) was calculated as the difference between the value of the reference item and the average of the two other available options (following Krajbich and Rangel23). In the Supplementary Information we report the results of a multinomial logistic regression model in which the value of each option was entered independently and therefore does not require an a priori specification of DV. This analysis yielded the same pattern of results. In both studies we also identified a significant negative interaction between the summed value of all options (SV) and DV (experiment 1: z = −3.08, P < 0.005; experiment 2: z = −2.84, P < 0.005), indicating that DV had a stronger influence on choice when item values were low than when they were high (Fig. 1c,f). To our knowledge this effect has not been reported before, but it is consistent with the Weber–Fechner law of sensory perception, in which the resolution of percepts diminishes for stimuli of greater magnitude. The effect is also compatible with the notion of normalization24–26. Confidence, unlike DV, was not in itself a predictor of choice (right or left item) but instead correlated with choice accuracy, with a steeper slope relating DV to choice when confidence was high, as found previously1 (Fig. 1b,e; experiment 1: z = 7.43, P < 0.0001; experiment 2: z = 5.82, P < 0.0001).

We used eye tracking to measure the dynamics of eye movements between items during decision-making: both the total amount of time participants spent looking at each item and how frequently gaze shifted back and forth between items (see Supplementary Information). Replicating previous studies23,27, we found that the difference in dwell time (DDT) was a robust predictor of choice in both the two-option and three-option experiments (experiment 1: z = 4.95, P < 0.0001; experiment 2: z = 9.81, P < 0.0001; Fig. 1c,f).
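
To make the structure of the choice model concrete, it can be specified with lme4 roughly as sketched below. This is an illustrative reconstruction rather than the authors' code (the actual analysis scripts are available on the BDM Lab GitHub page): the data frame d and its column names (choose_right, dv, sv, confidence, ddt, gsf, subject) are hypothetical, and the exact predictor set should be checked against the Supplementary Information.

    library(lme4)

    # z-score each predictor within participant, as described in the Methods
    z_within <- function(x) as.numeric(scale(x))
    d <- transform(d,
                   dv         = ave(dv, subject, FUN = z_within),
                   sv         = ave(sv, subject, FUN = z_within),
                   confidence = ave(confidence, subject, FUN = z_within),
                   ddt        = ave(ddt, subject, FUN = z_within),
                   gsf        = ave(gsf, subject, FUN = z_within))

    # Hierarchical logistic regression predicting choice of the right-hand item,
    # with random slopes for all predictors at the participant level
    choice_model <- glmer(
      choose_right ~ dv * confidence + dv * sv + ddt + gsf +
        (1 + dv * confidence + dv * sv + ddt + gsf | subject),
      data = d, family = binomial)
    summary(choice_model)  # fixed-effect z and P values analogous to Fig. 1c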

For a full list of fitted models and their respective Bayesian information criterion (BIC) scores see the Supplementary Information.

Factors that contribute to confidence

We next investigated which variables contributed to subjective confidence during value-based choices. Our previous work showed an interrelationship between the absolute difference in value (|DV|), response time (RT) and confidence (that is, participants are more confident when |DV| is high and their choices are made more quickly)1. These findings are in line with the conceptual relation between confidence, strength of evidence (indexed by |DV| in the value-based framework) and decision time3,28. We observed this same relation in the current study. In both experiments we found that |DV| was a significant predictor of confidence (experiment 1: t = 13.43, P < 0.0001; experiment 2: t = 7.46, P < 0.0001). We also found that RT was a negative predictor of confidence (experiment 1: t = −10.01, P < 0.0001; experiment 2: t = −7.53, P < 0.0001). Additionally, we found that the summed value positively predicted confidence, meaning that participants tended to be more confident when the options were all high in value (experiment 1: t = 3.50, P < 0.005; experiment 2: t = 4.80, P < 0.0001). This finding indicates that overall value might boost confidence, despite paradoxically making choices less accurate. More broadly, these findings highlight how evidence and confidence, although related, play partially independent roles in the decision-making process. Note that all of the predictors analysed in this section were entered into the same hierarchical linear regression; therefore all of the effects hold when controlling for the other variables reported.

We also hypothesized that lower-level features of information sampling may reflect an individual’s explicit confidence reports. To test this idea, we constructed a novel measure that captured uncertainty in information-sampling behaviour. This new measure, which we termed gaze-shift frequency (GSF), indexes how frequently gaze shifted back and forth among the options presented on the screen. This measure is independent of DDT (experiment 1: Pearson’s correlation coefficient (r) = −0.02; experiment 2: r = 0.04): for a constant allocation of time between the options (for example, 3 s for the left-hand option and 5 s for the right-hand option), one may shift fixation only once (switching from left to right after 3 s have elapsed, for example; low GSF) or shift many times between the two options (high GSF). We found that GSF was a robust negative predictor of confidence in both experiments (experiment 1: t = −3.67, P < 0.005; experiment 2: t = −8.94, P < 0.0001; see Fig. 2a,b). In other words, in trials in which participants shifted their gaze more often between the available options, their confidence was lower, even after accounting for changes in |DV| and RT. The four-way relationship between |DV|, RT, GSF and confidence is plotted in Fig. 2c,d. Correlation tables can be found in the Supplementary Information.
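
A minimal sketch of the corresponding confidence model, again assuming the hypothetical data frame and column names used above (abs_dv for |DV|, rt for response time); in the published analysis, P values for the linear models were obtained with the Kenward–Roger approximation (see Methods):

    # Hierarchical linear regression predicting confidence from |DV|, RT, summed value and GSF,
    # with random slopes at the participant level
    conf_model <- lmer(
      confidence ~ abs_dv + rt + sv + gsf +
        (1 + abs_dv + rt + sv + gsf | subject),
      data = d)
    summary(conf_model)  # a negative GSF coefficient corresponds to Fig. 2a,b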

Figure 2: Factors that contribute to confidence.

a,b, Fixed-effect coefficients in hierarchical regression models that predict confidence for experiments 1 (a) and 2 (b). Error bars show the 95% confidence intervals. c,d, 4D heat maps for experiments 1 (c) and 2 (d) showing the mean z-scored confidence as a function of subject-specific quantiles of response time, absolute difference in value and GSF. The sample size for experiment 1 was 28 participants (each completing 240 trials); the sample size for experiment 2 was 24 participants (each completing 144 trials). ***P < 0.001; **P < 0.01; *P < 0.05 (two-sided t-test). Eye-tracking variables are reported in blue.

Confidence predicts change of mind

In both experiments, participants saw exactly the same choice sets on more than one occasion. In experiment 1 each pair was presented twice; in experiment 2 each triad was presented three times (counterbalancing for different spatial locations). This design allowed us to determine the factors that affect a change of mind when the same choice is encountered again. Note that the way we define change of mind in this study differs from how it is often defined in perceptual decision-making, where it refers to a reversal of an ongoing motor plan due to further processing of sensory information4,15,29,30. The hypothesis we sought to test was that an explicit report of confidence in an initial choice at time t would influence behaviour when the same decision was presented again at a future time (t_future). In a hierarchical logistic regression, lower confidence at t was indeed associated with increased changes of mind at t_future in both experiments (experiment 1: z = −6.70, P < 0.0001; experiment 2: z = −5.71, P < 0.0001). The effect of confidence in predicting change of mind remained robust after controlling for several other factors that might correlate with the stability of a choice, such as |DV| and RT. Because |DV| correlated positively with confidence (see the previous section and the Supplementary Information), we checked the covariance matrices and variance inflation factors (VIFs) to ensure that these correlations did not influence the interpretation of our findings. Both the covariances and the VIFs were below standard thresholds, allowing a straightforward interpretation of the coefficients (see the Supplementary Information). Furthermore, to rule out the possibility that the effect we observed was driven by the presence of fast motor errors that were later corrected by the participant, we reanalysed the data excluding all trials that were faster than each participant’s mean response time. This analysis produced comparable results (see the Supplementary Information). Notably, GSF (itself a correlate of confidence) did not predict a change of mind when included in the regression analysis (Fig. 3a,b, coefficients in blue), even when reported confidence was excluded from the regression analysis (see the Supplementary Information, section 7a). Together, these results suggest that a low-level (and possibly implicit) representation of uncertainty indexed by GSF is insufficient to trigger a future change of mind. Instead, individuals may use an explicit representation of uncertainty (expressed through confidence) to reverse their initial decision when the same (or a similar) choice is presented again.
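
The change-of-mind analysis can be sketched in the same framework. The code below assumes a hypothetical data frame, repeats, with one row per repeated presentation, linking the outcome at t_future to the measurements taken at the first presentation t (all column names are illustrative, not from the authors' scripts):

    # Hierarchical logistic regression: does low confidence at t predict a
    # change of mind when the same choice set reappears at t_future?
    com_model <- glmer(
      changed_mind ~ confidence_t + abs_dv_t + rt_t + gsf_t + abs_ddt_t +
        (1 + confidence_t + abs_dv_t + rt_t + gsf_t + abs_ddt_t | subject),
      data = repeats, family = binomial)
    summary(com_model)  # a negative coefficient for confidence_t corresponds to Fig. 3a,b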

Figure 3: Confidence predicts change of mind

a,b, Fixed-effect coefficients from hierarchical logistic regression models that predict future changes of mind for experiments 1 (a) and 2 (b). Error bars show the 95% confidence intervals. c, Correlation between metacognitive accuracy and the coefficients for confidence ratings that predict future changes of mind (highlighted in pale green in a,b). Participants with greater metacognitive accuracy are more likely to change their mind following a low-confidence judgment; note that the correlation is negative because the relationship between confidence and changes of mind is itself negative (lower confidence increases the probability of subsequent changes of mind). Both axes (x and y) are z-scored for each experiment separately. The sample size for experiment 1 was 28 participants (each completing 240 trials); the sample size for experiment 2 was 24 participants (each completing 144 trials). ***P < 0.001; **P < 0.01; *P < 0.05 (two-sided). Eye-tracking variables are reported in blue.

We next harnessed individual differences in metacognition to provide a more stringent test of this hypothesis. We reasoned that the impact of confidence on changes of mind would be more prominent in participants with enhanced metacognitive skills, that is, those whose explicit confidence ratings more accurately track the level of uncertainty underlying their decision process. To test this hypothesis, we calculated an individual index of metacognitive sensitivity by computing the difference in slope between psychometric functions fitted to high- and low-confidence trials1,31,32. We then ran a logistic regression to predict changes of mind at t_future using confidence measured at t. In line with our initial hypothesis, we were able to show that the impact of confidence on changes of mind (here the negative coefficient of confidence predicting a change of mind) was stronger in participants with greater metacognitive accuracy (r = −0.35, P = 0.01) (Fig. 3c).
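
A sketch of this individual-difference analysis, under the same hypothetical data frames and column names as above: metacognitive sensitivity is approximated as the difference in psychometric slope between high- and low-confidence trials (median split), and is then correlated with each participant's (unpooled) confidence coefficient from the change-of-mind regression.

    # Per-participant metacognitive sensitivity: difference in psychometric slope
    # between high- and low-confidence trials (median split)
    meta_sensitivity <- function(d_subj) {
      hi <- subset(d_subj, confidence >  median(confidence))
      lo <- subset(d_subj, confidence <= median(confidence))
      slope_hi <- coef(glm(choose_right ~ dv, data = hi, family = binomial))["dv"]
      slope_lo <- coef(glm(choose_right ~ dv, data = lo, family = binomial))["dv"]
      unname(slope_hi - slope_lo)
    }
    meta <- sapply(split(d, d$subject), meta_sensitivity)

    # Per-participant (unpooled) coefficient of confidence at t predicting a change of mind
    com_coef <- sapply(split(repeats, repeats$subject), function(s)
      coef(glm(changed_mind ~ confidence_t + abs_dv_t + rt_t,
               data = s, family = binomial))["confidence_t"])

    # Between-participant correlation (both variables z-scored), expected negative as in Fig. 3c
    cor.test(as.numeric(scale(meta)), as.numeric(scale(com_coef)))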

Link between confidence and choice transitivity

In the analyses presented above, we established a link between an explicit representation of confidence and future changes of mind. However, these analyses are agnostic to the quality of the decisions that emerge as a consequence of changes of mind. Not all choices are born equal; some are more consistent than others, which is formally captured by the notion of transitivity. A transitive ranking is characterized by the following structure: if option A is preferred over option B and option B is preferred over option C, then it follows that A should be preferred over C (that is, if A ≻ B and B ≻ C, then A ≻ C). Transitivity is a normative prescription in utility theory33; however, failures of transitivity are commonly observed in human choices and represent a prominent violation of economic rationality and, more generally, of logical consistency34,35. To test the relation between confidence and transitivity, we found the (idiosyncratic) preference ranking of items that led to the lowest number of transitivity violations for each participant. Finding an optimal ranking of choice sets with more than a handful of items is extremely complex; however, a number of efficient algorithms that approximate a numerical solution have been developed for pairwise comparisons. In our study, we used the minimum violations ranking (MVR) algorithm36, which minimizes the number of inconsistencies in the ranking of items conditional on each participant’s choices. This method is conceptually similar to other methods that are based on revealed preferences, such as Afriat’s efficiency index37,38. The MVR algorithm provided an optimal ranking of items for each participant so that we could tag choices that violate this ranking, hereafter labelled transitivity violations (TVs). Because most of these methods are not suited to ternary choice, the analyses presented in this section were performed only on the data collected for the experiment that used binary choice (experiment 1). An alternative way to assess choice quality is to compute the choice ranking using the BDM method and test whether participants chose the item with the highest ranking. This method gives qualitatively similar results to those reported below (see the Supplementary Information).
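
The logic of tagging TVs can be illustrated with the toy sketch below. This is not the published MVR algorithm36 used in the analyses reported here; it is a simple approximation that starts from a win-frequency ranking and greedily swaps adjacent items while doing so reduces the number of violations. The data frame choices, with item indices in the columns chosen and unchosen, is hypothetical.

    # Count how many pairwise choices go against a candidate ranking (best-to-worst item order)
    count_violations <- function(ranking, choices) {
      pos <- match(seq_along(ranking), ranking)   # rank position of each item
      sum(pos[choices$chosen] > pos[choices$unchosen])
    }

    # Start from the proportion of times each item was chosen when it was available
    n_items <- 16
    wins <- tabulate(choices$chosen, nbins = n_items)
    seen <- tabulate(c(choices$chosen, choices$unchosen), nbins = n_items)
    ranking <- order(wins / seen, decreasing = TRUE)

    # Greedy local search: swap adjacent items while the swap reduces violations
    repeat {
      improved <- FALSE
      for (i in seq_len(n_items - 1)) {
        cand <- ranking
        cand[c(i, i + 1)] <- cand[c(i + 1, i)]
        if (count_violations(cand, choices) < count_violations(ranking, choices)) {
          ranking <- cand
          improved <- TRUE
        }
      }
      if (!improved) break
    }

    # Tag individual trials as transitivity violations under the final ranking
    pos <- match(seq_len(n_items), ranking)
    choices$tv <- pos[choices$chosen] > pos[choices$unchosen]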

After the participants’ choices were ordered according to the MVR algorithm, 4.5% of all of the decisions were classified as TVs. We then split the dataset into trials in which participants reported high confidence and trials in which they reported low confidence (median split). A dramatic reduction in TVs was observed in high-confidence trials (16% of TVs) in comparison to low-confidence trials (84% of TVs) (Fig. 4a). Although these results are consistent with previous evidence provided here and elsewhere1, note that we did not rely on BDM value estimates (collected post-choice); instead, we relied only on participants’ choices to generate the optimal ranking. In other words, the link between confidence and the quality of a value-based decision is robust to the method used to elicit preference. To statistically quantify the relation between confidence and TVs on a trial-by-trial basis (while accounting for other factors that may result in violations of transitivity), we constructed a set of hierarchical logistic regression models. We found that |DV| was a robust negative predictor of TVs (z = −6.59, P < 0.0001; Fig. 4b), such that participants were more likely to violate transitivity when items were closer in value. Critically, this same model showed that even when |DV| was accounted for, confidence was a negative predictor of TVs (z = −6.75, P < 0.0001). In other words, participants were less confident during those trials in which they went against their best-fitting preference order. Finally, both response time (z = 2.55, P = 0.01) and summed value (z = 2.55, P = 0.01) positively predicted TVs, such that participants’ choices were more likely to result in TVs in trials in which the value of both options was higher and/or their responses were slower. Similar to the change-of-mind analysis, eye-tracking variables did not reliably predict TVs (GSF: z = −1.74, P = 0.08; |DDT|: z = −0.47, P = 0.64) (Fig. 4b). Note that this was still true when reported confidence was excluded from the regression analysis (see the Supplementary Information, section 7b).

Figure 4: Link between confidence and transitivity.

a, Heat maps showing the number of transitivity violations (TVs) for the full sample and for high- and low-confidence trials (median split). The middle diagonal line is empty because no item was ever paired with itself. Note that most TVs took place in low-confidence trials. b, Fixed-effect coefficients from a hierarchical logistic regression model that predicts TVs. Error bars show the 95% confidence intervals. c, Decreases in TVs between the first and second presentation for each participant as a function of metacognitive accuracy. The graph shows that participants who are more metacognitively accurate tend to become more transitive over time. The sample size for experiment 1 was 28 participants (each completing 240 trials). ***P < 0.001; **P < 0.01; *P < 0.05 (two-sided t-test). Eye-tracking variables are reported in blue.

Finally we examined whether intersubject variability in metacognitive ability affected TVs. We reasoned that if a well-calibrated explicit representation of uncertainty plays a role in guiding future decisions, participants with greater metacognitive ability would show a decrease in the number of TVs when the same option was presented a second time. In line with this hypothesis, we observed that greater metacognitive ability was associated with a marked reduction in TVs between the first and second presentation of the same choice (standardized coefficient β = 0.85, s.e.m. = 0.42, z(26) = 2.03, P < 0.05; Fig. 4c). We also confirmed that this effect was not due to a relationship between metacognition and choice instability: the total number of TVs was unrelated to metacognitive accuracy (β = −1.83, s.e.m. = 1.61, z(26) = −1.14, P = 0.25). Together, these analyses show that a more accurate explicit representation of confidence is associated with more optimal choices when participants are given the opportunity to change their minds.

Discussion

What is the advantage of explicitly representing one’s confidence in value-based decision-making? Most experimental set-ups elicit confidence after a decision has been made and cannot be changed. Our hypothesis was that an explicit representation of confidence might serve an important role in decision-making by signalling the need to explore different alternatives when the same (or a similar) choice is presented again.

Value-based decisions are often perceptually unambiguous (for example, a banana is noticeably different from an apple), and most of the uncertainty is contingent on a number of internal processes such as memories or homeostatic states that are often difficult to manipulate experimentally. For example, a choice between two food items might be affected both by uncertainty about the tastes of the items and by uncertainty about one’s own level of hunger. To take advantage of this information, a decision-maker should be able to correctly monitor uncertainty that arises from the different constitutive computations. A wealth of work has shown that humans can introspect on their choice process and report their level of confidence, an ability that has been associated with the psychological concept of metacognition. However, the functions of these explicit representations of confidence (as opposed to implicit markers of uncertainty such as decision time) have remained unclear. Furthermore, individuals show wide variations in how accurately they can track and report fluctuations in uncertainty (that is, metacognitive accuracy).

In two independent experiments we showed that confidence reports (elicited after a value-based decision) reliably predicted a change of mind when the same choice was presented again. This effect is robust after controlling for other factors associated with the difficulty of a decision, such as difference in value and reaction time. Furthermore, intersubject variability in metacognitive accuracy modulated the degree to which confidence predicted change of mind: confidence was a stronger predictor of change of mind in participants with better metacognitive abilities. Critically, and in contrast to our findings on explicit confidence reports, a lower-level marker of uncertainty (GSF) did not predict subsequent changes of mind, suggesting that an explicit representation of uncertainty expressed through confidence is important for guiding future choices. Instead, we suggest that GSF can be considered an ingredient that agents use to construct a subjective sense of certainty, together with decision time and strength of evidence (Fig. 2c,d). An alternative interpretation of our results is that GSF does not contribute directly to subjective confidence but reflects an agent’s attempt to gather more information to adaptively reduce uncertainty (a situation in which confidence would be low and reaction time slow). Future work is required to distinguish between these two hypotheses. A further methodological appeal of GSF as a trial-by-trial measure of uncertainty is that it can be easily gathered in animals. Recent years have seen a resurgence of interest in studying uncertainty and confidence using animal models39. This promising line of work relies heavily on the development of experimental paradigms (such as opt-out or post-decision wagering) to measure the fluctuation in uncertainty during a decision process. GSF (which can be measured in rodents by tracking head movements) may prove a useful tool to monitor, on a trial-by-trial basis, internal fluctuations in uncertainty and their relation to the neural encoding of decision time and strength of evidence.

Tracking the level of decision uncertainty is helpful in guiding behaviour in a number of contexts; for example, in guiding learning40, in deciding whether to explore a new alternative or stick with the current one41,42 or in evaluating an alternative course of action18. At the neural level, the rostrolateral prefrontal cortex and frontopolar cortex have been shown to play key roles in tracking trial-by-trial evolution of uncertainty43–45 and modulating uncertainty-driven behaviours18,41,42,46–48. At the same time, the rostrolateral prefrontal cortex and frontal pole have also been shown (using a number of different methods) to play a key role in enabling metacognitive abilities1,10,11,14,32. It is therefore possible that these two processes are linked anatomically and computationally: individuals whose prefrontal cortex more closely tracks the trial-by-trial evolution of uncertainty might also have more accurate explicit representations of confidence. In turn, superior metacognitive abilities might confer the advantage of knowing how uncertain one’s choice was and therefore guide future behavioural strategies, such as uncertainty-driven exploration42 or changes of mind. As we did not collect neural measures in this study, we cannot test this hypothesis directly, but our findings provide a foundation for future studies of the neurobiology of changes of mind.

Another question we sought to address was whether changes of mind are associated with more optimal decisions. In value-based decisions the difference between a correct decision and an incorrect one is often murky because value is a subjective construct. However, when people make a series of value-based choices across a set of options, their pattern of decisions is characterized by a variable degree of internal consistency. In experiment 1 we used a recently developed algorithm to find an optimal ranking of items that produced the lowest number of TVs for each individual. In this way we identified when participants’ decisions were inconsistent with their overall (idiosyncratic) pattern of decisions. TVs are a paradigmatic example of irrationality in economic choice as they are easy to exploit. For example, when individual preferences are not transitive, it is possible to construct a choice set in which each decision appears fair on its own but when combined guarantees a loss (a phenomenon known as a Dutch book or arbitrage in finance)49. We showed that choices made with high confidence are overall more transitive and therefore more optimal according to the normative prescriptions of utility theory. Notably, this effect is robust after controlling for the absolute difference in value and reaction time. This finding suggests that individuals can monitor and report that a given decision was noisier and therefore more likely to result in a choice that is inconsistent with their overall preference patterns, establishing confidence as a correlate of choice accuracy without relying on the BDM procedure to derive independent estimates of subjective utility. This result also resonates with the well-established finding in perceptual decision-making that people are able to detect and signal errors as soon as they respond16,50 and with the proposal that confidence can facilitate cognitive control51. We suggest a similar process might operate in value-based decisions, in which errors can be thought of as choices that are at odds with one’s overall preferences. Consistent with this proposal, we found that individuals who have a more accurate representation of confidence (greater metacognitive ability) were more likely to move towards a more internally consistent decision-making pattern over time.

Our work sheds light on the reasons for an explicit representation of confidence in human decision-making. It explores value-based choices (also known as economic choices) by borrowing methods and concepts from perceptual decision-making52. Similar to perceptual decision-making, we found that the same ‘strength of evidence’ in value (that is, |DV|) is accompanied by a variable level of uncertainty that is represented explicitly as confidence. We suggest these representations play a functional role not only in allowing confidence to be shared with others but also in guiding our own future choices. Taken together, our results show that an explicit and accurate representation of confidence can have a positive impact on the quality of future value-based decisions.

Methods

Experimental procedures

Experiment 1

Participants were required to make binary choices between 16 common snack items. Participants were asked to choose between each combination of the items (N = 120) twice, counterbalanced across the left–right spatial configurations (total number of choices = 240). After each choice, participants indicated their confidence in their decision on a continuous rating scale. Neither choices nor confidence ratings were time-constrained. The trial order was randomized with the only constraint being that the same pair was never repeated in subsequent trials. Participants’ eye movements were recorded throughout this task.

At the end of the experiment, one choice from this phase was played out and the participant had the opportunity to buy the chosen item by means of an auction administered according to the BDM procedure: the experimenter randomly drew a price from a uniform distribution (£0–3), the ‘market price’ of that item. If the participant’s bidding price (willingness to pay) was below the market price, no transaction occurred. If the participant’s bidding price was above the market price, the participant bought the snack item at the market price17. The computer-generated market price was drawn to a precision greater than two decimals to avoid the possibility of a tie but was rounded to pennies in the event of a transaction. At the end of the experiment, participants had to remain in the laboratory for an additional hour. During this hour, the only food they were allowed to eat was the item purchased in the auction, if any. At the end of the waiting period, participants were debriefed and thanked for their participation. Participants were paid £25 for their time, less the cost of the food item, if they bought one. Both tasks were programmed using MATLAB 8.0 (MathWorks) running the Psychophysics toolbox (http://psychtoolbox.org) as well as the Eyelink toolbox extensions53,54. The procedure of this experiment was approved by the UCL Research Ethics Committee (project ID: 3736/004).
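
The payout rule of the BDM auction can be summarised in a few lines. This is a sketch with illustrative numbers and variable names, not the task code:

    # One randomly selected choice trial is played out at the end of experiment 1
    bid          <- 1.20                 # participant's willingness to pay for the chosen item (GBP)
    market_price <- runif(1, 0, 3)       # random 'market price' drawn from a uniform distribution
    if (bid > market_price) {
      cost <- round(market_price, 2)     # item is bought at the market price, rounded to pennies
    } else {
      cost <- 0                          # bid below the market price: no transaction
    }
    payment <- 25 - cost                 # experiment 1 payment of GBP 25 less the cost of the item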

Experiment 2

Participants gave their willingness to pay for 72 common snack food items on a scale ranging from £0 to £3 in a BDM procedure17 that is similar to that in experiment 1. They then completed a choice task: in each trial, they had to pick their favourite item out of three options. The triplets presented in the choice task were tailored for each participant from their willingness-to-pay ratings. The items were divided into high-value and low-value sets by a median split. The 36 high-value items were randomly combined into 12 high-value triplets; this procedure was mirrored to generate 12 low-value triplets. The high-value and low-value items were then mixed to generate medium-value triplets, with 12 triplets consisting of two high-value items and one low-value item and 12 triplets with the reverse ratio. This resulted in 48 unique triplets, with counterbalanced spatial configurations (total trials = 144) split into three blocks. Each triplet was shown once in each block; the presentation order inside blocks was randomized with the constraint that the triplet that ended one block was never shown first in the next block.
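
The triplet construction for experiment 2 can be sketched as follows. The code is illustrative: item identities, the random seed and the exact sampling scheme are placeholders, and the spatial counterbalancing across the three repetitions is omitted.

    set.seed(1)
    wtp  <- runif(72, 0, 3)                        # willingness-to-pay for the 72 items
    ord  <- order(wtp, decreasing = TRUE)
    high <- ord[1:36]                              # median split into high- and low-value sets
    low  <- ord[37:72]

    high_triplets <- matrix(sample(high), ncol = 3)                               # 12 all-high triplets
    low_triplets  <- matrix(sample(low),  ncol = 3)                               # 12 all-low triplets
    mixed_hi      <- cbind(matrix(sample(high, 24), ncol = 2), sample(low, 12))   # two high + one low
    mixed_lo      <- cbind(matrix(sample(low, 24),  ncol = 2), sample(high, 12))  # two low + one high

    triplets <- rbind(high_triplets, low_triplets, mixed_hi, mixed_lo)            # 48 unique triplets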

In the subsequent choice task, the triplets were presented inside three squares in an equidistant 2 × 2 grid (one randomly determined position on the grid was left empty). We used a gaze-contingent paradigm in which the items were only visible when the participant fixated inside one of the squares, so that the participant could only see one item at a time. They had unlimited time to make up their mind and could make as many fixations as they wished. After each choice, participants indicated their confidence in their decision on a visual analogue rating scale without any time constraints. Participants’ eye movements were recorded throughout the choice task. Both the choice task and the willingness-to-pay procedure were programmed in Experiment Builder version 1.10.1640 (SR-Research).

Following the choice task, an auction based on the BDM ratings was held (see experiment 1). After the auction, participants had to remain in the laboratory for an additional hour, as in experiment 1. At the end of the waiting period, participants were debriefed and thanked for their participation. Participants were paid £15 for their time, less the cost of the food item, if they bought any. The procedure of this experiment was approved by the University of Cambridge Psychology Research Ethics Committee (application no. Pre2014.113).

Exclusion criteria

Because the aim of the experiment was to explore the relationship between confidence and value, it was essential that we had enough measurement sensitivity in both the confidence scale and the value scale (the BDM ratings) and that participants’ choices reflected their stated preferences. We therefore excluded participants if any of the following criteria were met (a code sketch of the scale-based checks follows the list):

  1. Participants used less than 25% of the BDM scale.

  2. Participants gave exactly the same BDM rating for more than 25% of the items.

  3. Participants used less than 25% of the confidence scale.

  4. Participants gave exactly the same confidence rating for more than 25% of their choices.

  5. Participant choices did not correspond to their BDM ratings (when predicting choices from differences in value, the DV coefficient deviated by more than 2 standard deviations (s.d.) from the experiment mean).
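
As a sketch, criteria 1–4 translate into code along the following lines. The vectors bdm and conf holding one participant's ratings are hypothetical, the BDM scale runs from £0 to £3, and the confidence scale is assumed here to be rescaled to 0–1.

    # Proportion of the scale covered by a participant's ratings
    range_used <- function(x, scale_min, scale_max) (max(x) - min(x)) / (scale_max - scale_min)
    # Proportion of ratings that share the single most common value
    mode_share <- function(x) max(table(x)) / length(x)

    exclude <- range_used(bdm,  0, 3) < 0.25 | mode_share(bdm)  > 0.25 |   # criteria 1 and 2
               range_used(conf, 0, 1) < 0.25 | mode_share(conf) > 0.25     # criteria 3 and 4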

Participants

Experiment 1

A total of 30 participants took part in the study. One participant did not complete the task and one participant was excluded because the BDM estimates were poor predictors of his choices (failed criterion 5). Thus 28 participants were included in the analysis (13 female, aged 19–73). All participants were required to fast for 4 hours before taking part in the experiment. Blood glucose levels were taken to test their adherence to this criterion (mean glucose level = 83.57 mg dl−1, s.d. = 10.90 mg dl−1; by comparison, the mean fasting blood glucose level for adults is 86.4 mg dl−1)55. All participants gave informed consent before participating in this experiment.

Experiment 2

Of the 30 participants who completed the study, three were excluded due to a limited range in their BDM ratings (failed criterion 2). An additional three participants were excluded for a limited range in their use of the confidence scale (failed criterion 4). In total, 24 participants were included in the main analyses (17 female, aged 21–38). All participants were required to fast for 4 hours before doing the experiment. All participants gave informed consent before participating in this experiment.

Sample size was determined a priori. A power estimation was based on previously published work that used a similar experimental set-up26. We implemented a fixed-sample stopping rule set a priori (N = 30). Statistical inferences were conducted only after all of the data were collected. A participant who fulfilled one of the exclusion criteria (decided before data collection) would have been excluded from the analysis without replacement.

Eye trackers

For experiment 1, eye gaze was sampled at 250 Hz with a head-mounted SR Research Eyelink II eye-tracker (SR-Research). For experiment 2, eye movements were recorded at 1,000 Hz with an EyeLink 1000 Plus eye-tracker (SR-Research).

Preparation of the eye-tracking data

Experiment 1

Areas of interest (AIs) were defined by splitting the screen in half to create two equal-sized areas. Fixations in the left AI were assumed to be directed towards the left snack item and vice versa. We constructed two variables from the eye-tracking data: the DDT between the two AIs and GSF. DDT was calculated by subtracting the total dwell time on the left side from the total dwell time on the right side. GSF was calculated as the number of times that participants shifted their gaze from one AI to the other during each trial.
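
For a single trial, DDT and GSF can be derived from the ordered fixation sequence as sketched below (a hypothetical data structure with one row per fixation, its area of interest and its duration; the values are illustrative):

    fix <- data.frame(aoi      = c("left", "left", "right", "left", "right"),
                      duration = c(312, 254, 401, 198, 366))   # fixation durations in ms

    dwell <- tapply(fix$duration, fix$aoi, sum)          # total dwell time per area of interest
    ddt   <- dwell["right"] - dwell["left"]              # difference in dwell time (right minus left)
    gsf   <- sum(fix$aoi[-1] != fix$aoi[-nrow(fix)])     # number of gaze shifts between the two AIs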

Experiment 2

AIs were pre-defined by the three squares that participants had to fixate on to view the items (given the gaze-contingent design). We derived two variables from the eye-tracking data: the total dwell time in each AI for a given trial and GSF. As in experiment 1, GSF measured the number of fixations in one AI immediately followed by a fixation in another AI. To ensure that participants paid attention, we excluded trials in which participants had not fixated on every option available at least once. Of the 3,457 trials, 13 were excluded from the analysis for this reason.

Hierarchical models

All of the hierarchical analyses reported in the Results section were conducted using the lme4 package56 (version 1.1-7) in R. For the linear models, degrees of freedom and P values were obtained using the Kenward–Roger approximation, as implemented in the pbkrtest package57. For the choice models (Fig. 1c,f), we ran two hierarchical logistic regressions: in experiment 1 we predicted the log odds ratio of picking the right-hand option on a given trial; for experiment 2 we predicted the log odds ratio of picking the reference item. The reference item was determined as the first item encountered according to reading order in Latin languages (that is, the upper-left item for the trials when an item was presented in that position and the upper-right item for the remaining trials). Fixed-effect confidence intervals were estimated by multiplying the standard errors by 1.96 (ref. 58). Because these confidence intervals are estimates that do not take the covariance between parameters into account59, they should not be interpreted too strictly, but rather serve to give the reader a sense of the precision of the fixed-effect coefficients. Note that all predictors reported are z-scored at the participant level and that all models allow for random slopes at the participant level. For completeness, we report coefficients from the full model, while noting that this model is not always the most parsimonious. For a comprehensive list of models tested and a formal model comparison using BIC scores see the Supplementary Information, section 3.
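
As a sketch, and reusing the hypothetical model objects from the earlier examples, the approximate fixed-effect confidence intervals and a Kenward–Roger test for a single predictor can be obtained as follows:

    # Approximate 95% confidence intervals: fixed-effect estimate +/- 1.96 * standard error
    fe <- fixef(choice_model)
    se <- sqrt(diag(as.matrix(vcov(choice_model))))
    ci <- cbind(lower = fe - 1.96 * se, upper = fe + 1.96 * se)

    # Kenward-Roger approximation for a P value in a linear mixed model (e.g. the GSF effect)
    library(pbkrtest)
    KRmodcomp(conf_model, update(conf_model, . ~ . - gsf))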

Note that the regression models for confidence in experiment 1 had issues converging. We addressed these issues by square-root-transforming the |DV| predictor. Notably, for the individual-difference analyses that investigated change of mind and transitivity, we did not implement hierarchical models but instead used unpooled (individual-level) models. The rationale behind this choice was that for both analyses we were interested in studying between-subject variation (Fig. 3c and Fig. 4c) that could potentially be affected by the shrinkage of parameters towards the group mean that is characteristic of hierarchical models60.

Code availability

The code for the analyses presented in this article can be found at the BDM Lab GitHub page: https://github.com/BDMLab.

Data availability

The data presented in this article can be found at the BDM Lab GitHub page: https://github.com/BDMLab, and on figshare: https://dx.doi.org/10.6084/m9.figshare.3756144.v2 (ref. 61).

Additional information

How to cite this article: Folke, T., Jacobsen, C., Fleming, S. M. & De Martino, B. Explicit representation of confidence informs future value-based decisions. Nat. Hum. Behav. 1, 0002 (2016).

References

1. Confidence in value-based choice. Nat. Neurosci. 16, 105–110 (2013).
2. Neural correlates, computation and behavioural impact of decision confidence. Nature 455, 227–231 (2008).
3. Choice certainty is informed by both evidence and decision time. Neuron 84, 1329–1342 (2014).
4. A common mechanism underlies changes of mind about decisions and confidence. eLife 5, e12192 (2016).
5. Automatic integration of confidence in the brain valuation signal. Nat. Neurosci. 18, 1159–1167 (2015).
6. Reassessing VMPFC: full of confidence? Nat. Neurosci. 18, 1064–1066 (2015).
7. Confidence as Bayesian probability: from neural origins to behavior. Neuron 88, 78–92 (2015).
8. Confidence and certainty: distinct probabilistic quantities for different goals. Nat. Neurosci. 19, 366–374 (2016).
9. Prefrontal contributions to metacognition in perceptual decision making. J. Neurosci. 32, 6117–6125 (2012).
10. Relating introspective accuracy to individual differences in brain structure. Science 329, 1541–1543 (2010).
11. Theta-burst transcranial magnetic stimulation to the prefrontal cortex impairs metacognitive visual awareness. Cogn. Neurosci. 1, 165–175 (2010).
12. What failure in collective decision-making tells us about metacognition. Phil. Trans. R. Soc. B 367, 1350–1365 (2012).
13. Does interaction matter? Testing whether a confidence heuristic can replace interaction in collective decision-making. Conscious. Cogn. 26, 13–23 (2014).
14. Empirical support for higher-order theories of conscious awareness. Trends Cogn. Sci. 15, 365–373 (2011).
15. Changes of mind in decision-making. Nature 461, 263–266 (2009).
16. Metacognition in human decision-making: confidence and error monitoring. Phil. Trans. R. Soc. B 367, 1310–1321 (2012).
17. Measuring utility by a single-response sequential method. Behav. Sci. 9, 226–232 (1964).
18. How green is the grass on the other side? Frontopolar cortex and the evidence in favor of alternative courses of action. Neuron 62, 733–743 (2009).
19. The role of human orbitofrontal cortex in value comparison for incommensurable objects. J. Neurosci. 29, 8388–8395 (2009).
20. An automatic valuation system in the human brain: evidence from functional neuroimaging. Neuron 64, 431–439 (2009).
21. Choice from non-choice: predicting consumer preferences from blood oxygenation level-dependent signals obtained during passive viewing. J. Neurosci. 31, 118–125 (2011).
22. Appetitive and aversive goal values are encoded in the medial orbitofrontal cortex at the time of decision making. J. Neurosci. 30, 10799–10808 (2010).
23. Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions. Proc. Natl Acad. Sci. USA 108, 13852–13857 (2011).
24. Normalization as a canonical neural computation. Nat. Rev. Neurosci. 13, 51–62 (2011).
25. Normalization is a general neural mechanism for context-dependent decision making. Proc. Natl Acad. Sci. USA 110, 6139–6144 (2013).
26. A range-normalization model of context-dependent choice: a new model and evidence. PLoS Comput. Biol. 8, e1002607 (2012).
27. Visual fixations and the computation and comparison of value in simple choice. Nat. Neurosci. 13, 1292–1298 (2010).
28. Representation of confidence associated with a decision by neurons in the parietal cortex. Science 324, 759–764 (2009).
29. Decisions reduce sensitivity to subsequent information. Proc. Biol. Sci. 282, 20150228 (2015).
30. Post choice information integration as a causal determinant of confidence: novel data and a computational account. Cogn. Psychol. 78, 99–147 (2015).
31. Does confidence use a common currency across two visual tasks? Psychol. Sci. 25, 1286–1288 (2014).
32. How to measure metacognition. Front. Hum. Neurosci. 8, 1–9 (2014).
33. Theory of Games and Economic Behavior (Princeton Univ. Press, 2007).
34. Violations of the betweenness axiom and nonlinearity in probability. J. Risk Uncertain. 8, 167–196 (1994).
35. Observing violations of transitivity by experimental methods. Econometrica 59, 425–439 (1991).
36. A minimum violations ranking method. Optim. Eng. 13, 349–370 (2011).
37. Efficiency estimation of production functions. Int. Econ. Rev. 13, 568–598 (1972).
38. Goodness-of-fit in optimizing models. J. Econ. 46, 125–140 (1990).
39. A computational framework for the study of confidence in humans and animals. Phil. Trans. R. Soc. B 367, 1322–1337 (2012).
40. The sense of confidence during probabilistic learning: a normative account. PLoS Comput. Biol. 11, e1004305 (2015).
41. Cortical substrates for exploratory decisions in humans. Nature 441, 876–879 (2006).
42. Rostrolateral prefrontal cortex and individual differences in uncertainty-driven exploration. Neuron 73, 595–607 (2012).
43. Anterior prefrontal function and the limits of human decision-making. Science 318, 594–598 (2007).
44. Choice, uncertainty and value in prefrontal and cingulate cortex. Nat. Neurosci. 11, 389–397 (2008).
45. Resolution of uncertainty in prefrontal cortex. Neuron 50, 781–789 (2006).
46. Foundations of human reasoning in the prefrontal cortex. Science 344, 1481–1486 (2014).
47. Neural computations underlying arbitration between model-based and model-free learning. Neuron 81, 687–699 (2014).
48. The neural representation of unexpected uncertainty during value-based decision making. Neuron 79, 191–201 (2013).
49. The Oxford Handbook of Rational and Social Choice (eds Anand, P., Pattanaik, P. K. & Puppe, C.) 173–195 (Oxford Univ. Press, 2008).
50. Shared neural markers of decision confidence and error detection. J. Neurosci. 35, 3478–3484 (2015).
51. Does perceptual confidence facilitate cognitive control? Atten. Percept. Psychophys. 77, 1295–1306 (2015).
52. Building bridges between perceptual and economic decision-making: neural and computational mechanisms. Front. Neurosci. 6, 70 (2012).
53. The Psychophysics Toolbox. Spat. Vis. 10, 433–436 (1997).
54. The Eyelink Toolbox: eye tracking with MATLAB and the Psychophysics Toolbox. Behav. Res. Methods Instrum. Comput. 34, 613–617 (2002).
55. Fasting blood glucose and cancer risk in a cohort of more than 140,000 adults in Austria. Diabetologia 49, 945–952 (2006).
56. Fitting linear mixed-effects models using lme4. J. Stat. Softw. 67, 1–48 (2015).
57. A Kenward–Roger approximation and parametric bootstrap methods for tests in linear mixed models: the R package pbkrtest. J. Stat. Softw. 59, 1–30 (2014).
58. Data Analysis Using Regression and Multilevel/Hierarchical Models (Cambridge Univ. Press, 2006).
59. How trustworthy are the confidence intervals for lmer objects through the effects package? Stack Exchange (accessed 10 December 2015).
60. Bayesian measures of explained variance and pooling in multilevel (hierarchical) models. Technometrics 48, 241–251 (2006).
61. Explicit representations of confidence informs future value-based decisions. figshare (2016).


Acknowledgements

This work was supported by the Wellcome Trust and Royal Society (Henry Dale Fellowship no. 102612/Z/13/Z to B.D.M.) and the Economics and Social Research Council (PhD scholarship for T.F.). The funders had no role in the study design, the data collection and analysis, the decision to publish, or the preparation of the manuscript. We would like to thank Y. Yamamoto for sharing the methods he developed to rank choice in experiment 1 and C. Street and S. Bobadilla Suarez for help in collecting the data and pre-processing the eye-tracking raw data used in experiment 1. We also thank C. Ruff for suggesting an appropriate name for the GSF variable.

Author information

Affiliations

  1. Department of Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, UK

    • Tomas Folke
  2. Department of Food and Resource Economics, University of Copenhagen, Rolighedsvej 25, DK-1958 Frederiksberg C, Copenhagen, Denmark

    • Catrine Jacobsen
  3. University College London, Wellcome Trust Centre for Neuroimaging, 12 Queen Square, London WC1N 3BG, UK

    • Stephen M. Fleming
  4. Institute of Cognitive Neuroscience, 17–19 Queen Square, London WC1N 3AR, UK

    • Benedetto De Martino


Contributions

B.D.M., C.J. and S.M.F. designed the first experiment reported in this paper. The data for the first experiment were collected by C.J. The second experiment was designed by T.F. and B.D.M. The data for the second experiment were collected by T.F., and the data from both experiments were analysed by T.F. The article was written by B.D.M. and T.F. All authors revised the manuscript.

Competing interests

The authors declare no competing interests.

Corresponding author

Correspondence to Benedetto De Martino.

Supplementary information

Supplementary Figures 1–7, Supplementary Tables 1–16, Supplementary Methods and Supplementary Results (PDF).