Nonlinear neural network dynamics accounts for human confidence in a sequence of perceptual decisions

Abstract

Electrophysiological recordings during perceptual decision tasks in monkeys suggest that the degree of confidence in a decision is based on a simple neural signal produced by the neural decision process. Attractor neural networks provide an appropriate biophysical modeling framework, and account for the experimental results very well. However, it remains unclear whether attractor neural networks can account for confidence reports in humans. We present the results from an experiment in which participants are asked to perform an orientation discrimination task, followed by a confidence judgment. Here we show that an attractor neural network model quantitatively reproduces, for each participant, the relations between accuracy, response times and confidence. We show that the attractor neural network also accounts for confidence-specific sequential effects observed in the experiment (participants are faster on trials following high confidence trials). Remarkably, this is obtained as an inevitable outcome of the network dynamics, without any feedback specific to the previous decision (that would result in, e.g., a change in the model parameters before the onset of the next trial). Our results thus suggest that a metacognitive process such as confidence in one’s decision is linked to the intrinsically nonlinear dynamics of the decision-making neural network.

Introduction

A general understanding of the notion of confidence is that it quantifies the degree of belief in a decision1,2. The simplest context for studying confidence is that of perceptual decision making. In psychology and neuroscience, the most commonly used experimental protocols are two-alternative forced choice (2AFC) and stimulus discrimination tasks in which, in a sequence of trials, the participant is presented with stimuli and has to make a binary choice, associating each stimulus with one of two categories (e.g., deciding whether a visual stimulus is the image of a cat or a dog). Many studies have tackled the issue of confidence measurement in perceptual decision tasks, either by directly asking participants to provide an estimate of their confidence3,4,5, or by using postdecision wagering (subjects can choose a safe option, with a low reward regardless of the correct choice)6,7,8. Postdecision wagering has been used in behaving animals in order to study the neural basis of confidence9,10,11,12.

In order to model the neural mechanisms underlying the decision-making process, two main routes are followed. The most frequently used considers (linear) drift-diffusion models (DDM)13,14 or independent race models (IRM)15,16,17, in which choice-specific cells accumulate evidence in favor of one or the other alternative to which they are tuned. A more biophysical approach considers attractor neural networks18, with competing pools of cells, leading to a nonlinear dynamics with choice-specific attractors. Within one or the other framework, researchers have tried to relate confidence to the decision-making process, making different hypotheses on the origin of confidence. Within the Bayesian and signal detection theories, researchers model confidence from the probability of having made the correct choice7,8,19,20,21. When considering the neural dynamics, researchers assume that confidence is based on the integration of evidence over time9,22,23. Finally, researchers have modeled confidence as based on a consensus reached by a pool of independent decision-making networks24,25.

Several experimental studies1,8 suggest that choice and confidence can be read out from the same neural representation. In an experiment by Kiani and Shadlen10, monkeys perform a discrimination task (each correct choice leading to a reward), but on half of the trials the monkey is given the option to abort the task in favor of a certain but small reward. The probability of choosing this 'sure target' reflects the monkey's degree of choice uncertainty, assuming that risk aversion strongly correlates with this uncertainty. In order to account for the experimental findings in this uncertain option task, authors23,26 have proposed biophysical attractor neural network models. They show that these models capture both the behavioral observations and the associated physiological recordings from neurons in the Lateral Intraparietal (LIP) cortex, an area where the firing rates of individual neurons strongly correlate with the decision that is being made, and in the pulvinar, which might be the locus of the readout of confidence from the LIP activity.

In the present work, we address the ability of attractor networks to quantitatively account for confidence reports in humans. To this end, we first experimentally investigate confidence formation and its impact on sequential effects in human experiments. Participants perform an orientation discrimination task on Gabor patches that deviate clockwise or counter-clockwise with respect to the vertical. In some blocks, after reporting their decisions, participants perform a confidence judgment on a visual scale. We then fit an attractor neural network model27,28 to the behavioral data. More precisely, for each participant, we calibrate a network specifically on his/her behavioral data, the fit being based only on mean response times and accuracy. With the model thus calibrated for each participant, and with simulations that replicate the experimental protocol, we quantitatively confront, for the first time, the behavior of an attractor neural network with human behavior during full sequences of perceptual decisions. Following Wei and Wang23, we assume that confidence is an increasing function of the difference, measured at the time the decision is made, between the mean spike rates of the two neural pools specific to one or the other of the two possible choices. We show that, in this way, behavioral effects of confidence can be accurately estimated for each participant. We find that the attractor neural network accurately reproduces an effect of confidence on serial dependence observed in the experiment: participants are faster (respectively slower) on trials following high (resp. low) confidence trials. Since drift-diffusion models cannot account for such effects without ad-hoc changes of parameters from trial to trial, we argue that these sequential effects reveal the intrinsically non-linear nature of the underlying neural network dynamics.

Results

Experiment and neural model

Participants completed a visual discrimination task between clockwise and counter-clockwise oriented stimuli, followed, or not, by a task in which they were asked to assess their confidence in their decision. We used three kinds of blocks, comprising either sequences of pure decision trials (pure blocks), trials with feedback (feedback blocks) or trials with confidence judgments (confidence blocks). In feedback blocks, on each trial, participants received auditory feedback on the correctness of their choice. In confidence blocks, after each trial, they were asked to report their confidence on a discrete scale of ten levels, from 0 to 9. In feedback blocks, participants were not asked to report their confidence, and in confidence blocks they did not receive any feedback. We illustrate the experimental protocol in Fig. 1, panels a–d.

Figure 1
figure1

Experimental protocol and model architecture. Procedure of the discrimination task, for the three blocks. (a) Structure of a trial: Following a fixation period, the circular grating (Gabor patch, oriented clockwise, C, or counterclockwise, AC) appears and participants make the decision (C or AC). In confidence blocks, after a delay, participants report their confidence with respect to their choice, on a discrete scale with 10 levels. (b) Time course of a pure block trial. (c) Time course of a confidence block trial. (d) Time course of a feedback block trial. (e) Decision-making network structure. The network consists of two neural pools (specific to clockwise (C) and anti-clockwise (AC) stimuli), endowed with self-excitation and mutual inhibition. After a decision is made (threshold crossed), a non specific inhibitory input (corollary discharge) is sent onto both units. (f) Time course of the neural activities of both pools during two consecutive trials.

For the modelling of the neural correlates, we consider a decision-making recurrent neural network governed by local excitation and feedback inhibition, based on a biophysical model of spiking neurons18,29. We work with a reduced version27 allowing for large-scale numerical simulations and for better analytic analysis. More precisely, we consider a model variant28 allowing the network to engage in a sequence of perceptual decisions, as explained below.

The model (see Fig. 1e) consists of two competing units, each one representing an excitatory neuronal pool, selective to one of the two available response options, here C (clockwise) or AC (anti-clockwise). Each population receives a task-related input signaling the perceived evidence for each option. The difference between these inputs varies inversely with the difficulty of the task, thus it varies with the absolute value of the Gabor orientation. The decision, ‘C’ or ‘AC’, is made when one of the two units reaches a threshold z. Once a decision is made (threshold is reached), a non specific inhibitory current (the corollary discharge) is injected into the two neural pools, causing a relaxation of the network activity towards a neutral low activity state, before the onset of the next stimulus. This allows the network to deal with consecutive sequences of trials, as illustrated in Fig. 1f. For a biologically relevant range of parameters, relaxation is not complete at the onset of the next stimulus, hence the decision made in this new trial will depend on the one at the previous trial. In a previous work, we showed28 that the model accounts for the main sequential and post-error effects observed in perceptual decision making experiments in human and monkeys.

Full details about the experiment and the model can be found in the Methods Section.
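To make the dynamics concrete, the reduced two-pool model can be sketched in a few lines. This is a minimal sketch in the spirit of the reduced model27; all parameter values are illustrative defaults from the reduced-model literature, not the per-participant fitted values used in this paper, and the function and variable names are ours.

```python
import numpy as np

def phi(x, a=270.0, b=108.0, d=0.154):
    """Effective input-output function of a neural pool (rate in Hz)."""
    return (a * x - b) / (1.0 - np.exp(-d * (a * x - b)))

def run_trial(coherence=0.5, z=20.0, dt=5e-4, t_max=3.0, seed=0):
    """One decision trial of the reduced two-pool attractor network."""
    rng = np.random.default_rng(seed)
    j_self, j_cross = 0.2609, 0.0497   # self-excitation / mutual inhibition (nA)
    i0, i_stim, sigma = 0.3255, 0.0156, 0.02
    tau_s, gamma = 0.1, 0.641
    s = np.array([0.1, 0.1])           # synaptic gating variables (C, AC)
    for step in range(1, int(t_max / dt) + 1):
        inputs = (j_self * s - j_cross * s[::-1] + i0
                  + i_stim * np.array([1 + coherence, 1 - coherence])
                  + sigma * rng.standard_normal(2))
        r = phi(inputs)                # firing rates of the two pools
        s += dt * (-s / tau_s + (1.0 - s) * gamma * r)
        if r.max() >= z:               # decision threshold crossed
            choice = "C" if r[0] > r[1] else "AC"
            return choice, step * dt, abs(r[0] - r[1])
    return None, t_max, 0.0            # no decision within t_max
```

After a decision, the corollary discharge described above would be implemented by transiently lowering the non-specific input `i0` to both pools, so that the next call to `run_trial` starts from the partially relaxed state rather than from a fixed `s`; we omit that step here for brevity.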

Calibration of the model onto the behavioral results

In Fig. 2, we show for each participant response times and accuracies with respect to stimulus orientation (absolute value of the orientation angle). All subjects exhibit improved accuracies and shorter response times for less difficult (larger orientation) stimuli, as classically reported in the literature.

Figure 2
figure2

Mean response times (a,c) and accuracies (b,d) as a function of the absolute value of stimulus orientation, in the pure (a,b) and confidence (c,d) blocks. For each subject we represent the behavioral data (red dots) and the associated fitted model (blue line). Error bars are 95% confidence interval using the bootstrap method.

We fit the model to these behavioral data. As detailed in the Methods Section, we perform model calibration in order to reproduce both the mean response times and the accuracy (success rates). For each participant, this is done separately for the three types of blocks.

First, we note that the model correctly reproduces the behavioral results of the different participants, as can be seen in Fig. 2. Second, we compare the values of the parameters obtained for the pure and confidence blocks. We find that, in confidence blocks, participants have a higher decision threshold (Signed Rank test30, p = 0.03, 6 participants), a higher stimulus strength level per angle (Signed Rank test, p = 0.031, 6 participants) and higher mean non-decision times (Signed Rank test, p = 0.03, 6 participants). Two of the authors of the present paper (J-R. Martin, J. Sackur, personal communication, April 2018) have obtained analogous results when analyzing similar data within the Drift Diffusion framework: non-decision time, drift rate and decision threshold are modified by the confidence context in the experimental setup.

Non-decision times

Our fitting procedure allows estimating the non-decision times. In Fig. 3, we represent the histogram of the response times across participants for the pure and confidence blocks. The red curve shows the distribution of non-decision times in the model, and the black curve the response times distribution. We note that, with a fit based only on the mean response times and accuracies, the model also accurately accounts for the distributions of response times. We find that the minimum value of the non-decision time is 75 ms for the pure block and 100 ms for the confidence block, and the average non-decision times are of the order of magnitude of saccadic latency31. Finally, we observe that the non-decision time distributions clearly show a right skew for several participants, in agreement with previous studies32. This justifies modelling the non-decision times with an exponentially modified Gaussian (ex-Gaussian) distribution, instead of simply adding a constant non-decision time to every decision time.
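The ex-Gaussian assumption is easy to illustrate: a non-decision time is the sum of a Gaussian and an independent exponential component, and the exponential part alone produces the right skew. The parameter values below are illustrative, not the fitted ones.

```python
import numpy as np

# Ex-Gaussian non-decision times: Gaussian (mu, sigma) plus an independent
# exponential component of mean tau. Values are illustrative, in seconds.
rng = np.random.default_rng(0)
mu, sigma, tau = 0.15, 0.02, 0.05
ndt = rng.normal(mu, sigma, 10_000) + rng.exponential(tau, 10_000)

mean = ndt.mean()                                  # expected: mu + tau = 0.20 s
skew = np.mean(((ndt - mean) / ndt.std()) ** 3)    # > 0: right-skewed
```

A constant non-decision time would instead shift the whole response-time distribution rigidly, which cannot produce the participant-specific right tails visible in Fig. 3.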

Figure 3
figure3

Distributions of RTs for each subject (a) Pure block data, (b) confidence block data. For both panels: In blue, participants’ histograms of the response times; Black curve: density of response times of the simulated network model; Red curve: the associated non decision response times distribution.

Confidence modeling

Recent studies have reported a choice-independent representation of a confidence signal in monkeys33 and in rats9, as well as evidence for a close link between decision variable and confidence – in monkeys from LIP recordings10 and in humans from fMRI experiments34. In an experiment with monkeys, Kiani and Shadlen10 introduced a 'sure target' associated with a low reward, which can be chosen instead of the categorical targets. The probability of not choosing the sure target is then a proxy for the confidence level. Wei and Wang23 model the neural correlates of confidence within the framework of attractor neural networks. They assume that the confidence level (as given by the probability of not choosing the sure target) is a sigmoidal function of the difference, at the time of decision, between the activities of the winning and losing neural pools. This hypothesis is in line with similar hypotheses in the framework of DDMs and other decision-making models2,6. They then show that the empirical dependencies of response times and accuracies on the confidence level are qualitatively reproduced in the simulations of the neural model.

We make here the hypothesis that the confidence in a decision is based on the difference Δr between the neural activities of the winning and losing neural pools23, measured at the time of the decision: the larger the difference, the greater the confidence. In our experiment, the measure of confidence is the one reported by the subjects on a discrete scale, and it is this reported confidence level that we want to model. Within our framework, we quantitatively link this empirical confidence to the neural difference Δr by matching the distribution of the neural evidence balance with the empirical histogram of the confidence levels. In Fig. 4, we show, for each participant, the matching between the histogram of confidence levels, as reported by the participant, and the distribution of Δr, as obtained in the model calibrated on the participant's performance. We note that the main difference between the participants' histograms lies in the percentage of trials (and level on the confidence scale) for which a participant reports the highest confidence level. This last point is highly participant-dependent, and can occur at a very low value of Δr (see e.g. Participant 2 in Fig. 4b).
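The matching procedure can be sketched as follows: the cumulative proportions of the reported confidence levels define cut-points on the distribution of Δr, so that, by construction, the model reproduces the participant's confidence histogram. The Δr samples and confidence counts below are synthetic stand-ins, not data from the paper.

```python
import numpy as np

# Non-parametric matching of the model's decision-time evidence balance
# (delta_r) to a participant's reported confidence histogram.
rng = np.random.default_rng(2)
delta_r = rng.gamma(shape=2.0, scale=5.0, size=5_000)   # simulated Δr samples
counts = np.array([50, 80, 120, 150, 180, 160, 120, 80, 40, 20])  # levels 0..9

# Cumulative proportion of trials up to each confidence level...
cum = np.cumsum(counts) / counts.sum()
# ...defines the Δr cut-points: a trial whose Δr falls below the k-th
# cut-point is assigned confidence level <= k, so the two histograms
# match by construction.
cuts = np.quantile(delta_r, cum[:-1])

def confidence(dr):
    """Discrete confidence level (0-9) assigned to an evidence balance dr."""
    return int(np.searchsorted(cuts, dr, side="right"))
```

The nine cut-points plotted against the cumulative distribution of Δr are exactly the points shown for each participant in Fig. 4b.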

Figure 4
figure4

Matching network confidence measure to empirical behavioral confidence. (a) Confidence histograms. The x-axis gives the value of the confidence on a discrete scale from 0 to 9. Each sub-panel corresponds to a different participant with, in blue, the histogram of the reported confidence, and in orange, the one from the model. For clarity we plot the blue and orange bars side by side, but the bins of the histograms are, by construction, identical. (b) Transfer function F for each participant. The x-axis denotes the difference in neural pools activities Δr at the time of the decision, and the y-axis the cumulative distribution of Δr. Each point represents the levels of Δr delimiting the level of confidence (from left to right, confidence level 0 to confidence level 9). The dashed colored curve is the cumulative distribution function (CDF) and the light blue dashed curve is the fit of the CDF by a sigmoid.

In our analysis, the shape of the mapping is not chosen a priori but non-parametrically inferred from the experimental data. This is in contrast with previous studies in which the sigmoidal shape is imposed8,9,35. In a related neural attractor model, Wei and Wang23 exhibit a link between Δr and a probabilistic measure of confidence, and show that it is well fitted by a sigmoid function. Here we find that, for each participant, the non-parametric mapping is also very well approximated by a sigmoidal function of the type 1/(1 + exp(−βΔr − κ)), with participant-specific parameters κ and β. It is noticeable that both the link between Δr and a probabilistic measure of confidence observed in an attractor network model, and the mapping obtained here between Δr and the empirical confidence, can be approximated by a sigmoid function. This suggests that the human reported confidence can be understood as a discretization of a probabilistic function.
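Checking that the non-parametric mapping is well approximated by 1/(1 + exp(−βΔr − κ)) reduces, under the logit transform, to a linear least-squares fit. The CDF points below are synthetic; with the real data one would use the empirical CDF values at the inferred Δr cut-points of each participant.

```python
import numpy as np

# If F(dr) = 1/(1 + exp(-(beta*dr + kappa))), then logit(F) = beta*dr + kappa,
# so (beta, kappa) can be recovered by a degree-1 least-squares fit.
beta_true, kappa_true = 0.8, -4.0          # "participant-specific" parameters
dr = np.linspace(1.0, 9.0, 9)              # cut-point locations (synthetic)
F = 1.0 / (1.0 + np.exp(-(beta_true * dr + kappa_true)))

logit = np.log(F / (1.0 - F))              # linearize the sigmoid
beta_hat, kappa_hat = np.polyfit(dr, logit, 1)
```

With noisy empirical CDF values the same fit applies; only the recovered parameters then carry estimation error.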

Response times and accuracies vs. confidence

Studies have shown that confidence ratings are closely linked to response times36,37 and choice accuracy36,38,39. The behavioral confidence in our model is assumed to be based on a simple neural quantity measured at the time of the decision. In what follows, we study whether this hypothesis on the neural correlates of confidence can account for the links between the behavioral data: response times, accuracy and confidence. In Fig. 5, we represent the response times (Fig. 5a) and choice accuracy (Fig. 5b) with respect to the reported confidence level for each participant. The data points show the experimental results (with error bars giving the bootstrapped 95% confidence interval), and the colored line the result of the simulation (with the light colored area the bootstrapped 95% confidence interval). Response times decrease36,37 and accuracies increase with confidence37,38,40,41. We find a monotonic dependency between response times and confidence, and between accuracy and confidence, but with participant-specific shapes. Note that some values of confidence are only observed on a few trials, resulting in large error bars, especially for accuracy since we take the mean of a binary variable. For the numerical simulations, the relatively large size of the confidence interval is due to the limited number of trials, since we restrict ourselves to the same protocol as the experimental one.

Figure 5
figure5

Response times and Accuracy as a function of confidence. (a) Response times, (b) Accuracy. For both panels: each sub-panel represents a different participant. Dots are experimental data with 95% bootstrapped confidence interval as error bars. Lines are averages over 20 simulations of the attractor neural network model (calibrated as explained in the Methods Section). The shaded area represents the 95% bootstrapped confidence interval on the mean.

For comparison, we also fit to our experimental data another non-linear model with mutual inhibition, the Usher-McClelland model42 (see Supplementary Information Section S1). This model fits the response times with respect to confidence, but only at intermediate levels of confidence. For some participants, we observe a strong divergence at high confidence (Participants 1, 4 and 5). Although accuracy is an increasing function of confidence (except for Participant 5), the experimental data do not fall within the bootstrapped confidence interval of the simulations (Supplementary Information Fig. S1). In contrast, our more biophysical model correctly reproduces the psychometric and chronometric functions with respect to confidence for each participant, despite the large differences in response times between participants.

Previous studies found that, during a perceptual task, reported confidence increases with stimulus strength for correct trials, but decreases for error trials9,37,38. This effect of confidence has been correlated to patterns of firing rates in experiments with rats9 and to the human feeling of confidence38. We observe the same type of variations of confidence with respect to stimulus strength (Supplementary Information Fig. S2), both in the experimental results and in the model simulations. This effect is in accordance with a prediction of statistical confidence, defined as the Bayesian posterior probability that the decision-maker is correct38,43,44. We thus see that the attractor network model reproduces a key feature of statistical confidence.

Impact of confidence on history biases

Statistical analysis of sequential effects

Perceptual decisions made by humans in behavioral experiments depend not only on the current sensory input, but also on the choices made at previous trials. Various sequential effects have been reported45,46,47, and researchers have proposed different models to account for them28,48,49,50 – in a previous work on post error effects28 we give a more general discussion of sequential effects. When the subject does not receive any feedback, confidence in his/her decision might be important for controlling future behaviors1,20. Recently, the effects of confidence on the history biases have been experimentally investigated51,52. One main finding is that decisions with high confidence confer stronger biases upon the following trials. Here, we investigate the influence of confidence upon the next trial in the empirical data, and we show that the results are well reproduced by the behavior of the dynamical neural model.

First, we perform a statistical analysis of the effect of history biases on response times in the experimental data.

For this, for each participant, we classify each trial into low and high confidence: a trial is considered as low confidence (resp. high confidence) if the reported confidence is below (resp. above) the participant’s median. We analyze the history biases making use of linear mixed effects models (LMM)53. The LMM we consider assumes that the logarithm of the response time at trial n, RTn, is a linear combination of factors as follows:

$$\ln(RT_{n}) = a_{0,p} + a_{1,p}\,|\theta| + a_{2}\,x_{\mathrm{repetition}} + a_{3,p}\,\ln(RT_{n-1}) + a_{4}\,\mathrm{Conf}_{n-1}$$
(1)

with xrepetition a binary variable taking the value 1 if the correct choice for the current trial is a repetition of the previous choice (and 0 otherwise), θ the orientation of the Gabor (in degrees), RTn−1 the response time of the previous trial, and Confn−1 the confidence of the previous trial, coded as 0 for low and 1 for high. The subscript p in a coefficient (e.g. a0,p) indicates that for this parameter we allow for a random slope per participant. We show the results in Table 1.
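A fixed-effects-only version of Eq. (1) can be sketched with ordinary least squares (the actual analysis uses an LMM with by-participant random slopes, fitted with a dedicated statistics package). All trial data and coefficient values below are synthetic stand-ins.

```python
import numpy as np

# OLS sketch of the regression of Eq. (1), dropping the by-participant
# random slopes. Synthetic trials with known "true" coefficients.
rng = np.random.default_rng(4)
n = 2_000
theta = rng.choice([1.0, 2.0, 4.0, 8.0], size=n)      # |orientation| (degrees)
repeat = rng.integers(0, 2, size=n).astype(float)     # x_repetition (0/1)
ln_rt_prev = rng.normal(-0.5, 0.2, size=n)            # ln RT of previous trial
conf_prev = rng.integers(0, 2, size=n).astype(float)  # previous confidence

a_true = np.array([-0.3, -0.02, -0.03, 0.2, -0.05])   # a_0 .. a_4 (synthetic)
X = np.column_stack([np.ones(n), theta, repeat, ln_rt_prev, conf_prev])
ln_rt = X @ a_true + rng.normal(0.0, 0.1, size=n)     # simulated ln(RT_n)

coef, *_ = np.linalg.lstsq(X, ln_rt, rcond=None)
# A negative coef[4] corresponds to the finding that high previous
# confidence speeds up the following trial.
```

The random slopes of the LMM would additionally let a0, a1 and a3 vary across participants around these population-level values.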

Table 1 Results of the application of the LMM on the experimental data (first row) and on the data from the neural network simulations (second row).

We find that larger orientations lead to faster response times, and that response times show a repetition bias48. In line with previous work37, high confidence has the effect of speeding up the following trial. Finally, we find that the previous response time has an effect on the subsequent one, meaning that participants tend to show sequences of fast (or slow) response times.

Next, following a numerical protocol replicating the experimental one, for each participant we make numerical simulations with the model specifically calibrated on the participant’s data. Recall that the fit has been done on mean response times and accuracy, hence without taking into account serial dependencies. We then look at correlations between decisions made in successive trials by the neural attractor network, performing the same type of statistical analysis as done on the experimental data (see Methods, Table 1).

We note that the attractor neural network captures the variation of response times with respect to angle orientation27. We find that the dependency in the choice history (through the repetition of responses), as observed in the experimental data, is correctly reproduced by the model, in agreement with a previous study of these effects28. Quite remarkably, we observe an effect of confidence on response times in the network, with the same sense of variation (negative slopes) as in the experiment.

Analysis of the underlying neural dynamics

To understand how the neural dynamics leads to these confidence-specific sequential effects, we analyze the dynamics along the lines of Berlemont and Nadal28, where post-error effects were studied in the same neural model. We illustrate this analysis in Fig. 6. On each panel, we compare the mean neural dynamics for post-low and post-high confidence trials (red and blue lines, respectively). Without loss of generality, we assume that the previous decision was a C grating. We first note that the relaxation dynamics between two consecutive trials differ, resulting in different starting points for the next trial after low and after high confidence trials. Panel (a) corresponds to the case where the new stimulus is also C oriented ("repeated" case), at low strength level. The ending points of the relaxations fall into the correct basin of attraction. Because the post-high confidence relaxation lies deeper in the basin of attraction than that of post-low trials, the subsequent dynamics is faster for post-high confidence trials in this case. In panel (b) we represent the case, still at low stimulus strength, where the orientation of the new stimulus is the opposite ("alternated" case) of the one corresponding to the previous decision (hence an AC grating). Both dynamics lie close to the boundary between the basins of the two attractors, so the dynamics are slow and there is no significant difference between post-low and post-high confidence trials. In panels (c) and (d) we represent the same situations as in panels (a) and (b), respectively, but for high strength levels (easy trials). The ending points of the relaxations are far from the boundary of the basins of attraction, whatever the grating presented. The response times for post-high and post-low confidence trials are thus similar. This analysis shows that the non-linearity of the network dynamics is responsible for the considered sequential effect. More precisely, it is the very existence of basin boundaries, and the fact that the network state is more or less close to the basin boundary at the onset of a new stimulus, which lead to the sequential effects.

Figure 6
figure6

Non linear dynamics in post-low and post-high confidence trials. Phase-plane trajectories (in log-log plot, for ease of viewing) of the post-low and post-high confidence trials. We assume that the previous decision was decision C. The axes represent the losing neural pool SL and the winning neural pool SW at the previous trial. Blue codes for post-high confidence trials, and red for post-low confidence trials. Panels (a) and (b): repeated and alternated cases for low orientation stimuli; panels (c) and (d): repeated and alternated cases for high orientation stimuli. In order to compare the decision times, the dynamics starting at the onset of the next stimulus is followed for 200 ms, as if there were no decision threshold. The actual decision occurs at the crossing of the dashed gray line, indicating the threshold.

We now qualitatively confront the outcomes of the above analysis with the experimental data. To do so, we group the response times according to the same cases as previously: high and low stimulus strength, repeated or alternated trials. We compare post-high and post-low confidence trials in each case, using a t-test54. We find that the mean response times of post-low and post-high confidence trials differ in the low orientation stimuli and repeated case (t-test, p = 0.044, df = 1322.6), but not in the three other cases – low orientation and alternated, high orientation and repeated, high orientation and alternated (p = 0.90, df = 778.7; p = 0.70, df = 610; p = 0.23, df = 617.4). This is in accordance with the outcomes of the above analysis based on the non-linear dynamics.
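The non-integer degrees of freedom above indicate a Welch (unequal-variances) t-test, which is simple to spell out; the response-time samples below are synthetic stand-ins for the post-high and post-low confidence trials of one case.

```python
import numpy as np

# Welch two-sample t-test (unequal variances), as used to compare post-low
# and post-high confidence response times within each case.
def welch_t(x, y):
    nx, ny = len(x), len(y)
    vx, vy = x.var(ddof=1) / nx, y.var(ddof=1) / ny
    t = (x.mean() - y.mean()) / np.sqrt(vx + vy)
    # Welch-Satterthwaite approximation for the degrees of freedom:
    df = (vx + vy) ** 2 / (vx**2 / (nx - 1) + vy**2 / (ny - 1))
    return t, df

rng = np.random.default_rng(5)
rt_post_high = rng.normal(0.55, 0.10, 600)   # faster after high confidence
rt_post_low = rng.normal(0.58, 0.12, 700)
t, df = welch_t(rt_post_low, rt_post_high)
```

The p-value then follows from the Student t distribution with `df` degrees of freedom, typically via a statistics library.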

The model reproduces sequential effects correlated with repetition and confidence, and we have shown that these effects result from the intrinsic nonlinear network dynamics. However, the model does not induce correlations between the response times of two successive trials (for more details see Supplementary Information S3). This suggests that the corresponding correlations observed in the experimental data cannot be explained by the intrinsic dynamics of the attractor network, but may come from higher-order processing. Within a DDM approach, one may account for such effects, but only with a change of parameters from trial to trial (e.g. by changing the decision threshold depending on the previous reaction time). However, such ad hoc changes of parameter values are not supported by models or experimental data that would provide clues about the neurodynamical mechanisms underlying these changes.

Comparison with diffusion models

Next we investigate if linear models of decision-making of the DDM family are able to reproduce the sequential effects. More precisely, we consider the independent race model (IRM15,16,17). During the accumulation of evidence the equations of evolution of the activities xi, i = {C, AC}, are:

$$dx_{i} = I_{0}(1 \pm c)\,dt + \sigma\,\nu_{i}(t)$$
(2)

where νi(t), i = {C, AC}, are white noise processes. The first race that reaches a threshold z (or −z) is the winning race. The confidence in the decision is modelled as a monotonic function of the balance of evidence |z − xlosing|2,6,55.

We extend the IRM in order to deal with sequences of trials. To do so, we allow for a relaxation dynamics between trials, in a way analogous to the relaxation dynamics in the attractor network model. Hence, after a decision is made, both units receive a non specific inhibitory input leading to a relaxation until the next stimulus is presented (see Fig. 7). Within this extended IRM framework, we study how, with a fixed set of parameter values, the sequential effects would be correlated with confidence.
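A minimal sketch of this extended race model (Eq. 2 plus a relaxation period) is given below. The parameter values, the linear relaxation rule, and all names are illustrative choices of ours, not the calibrated model.

```python
import numpy as np

# Independent race model with a between-trial relaxation period: after the
# first race crosses the bound z, a non-specific inhibitory input drives
# both races back toward baseline until the next stimulus onset.
def irm_trial(x0, c=0.2, i0=1.0, sigma=0.5, z=1.0, dt=1e-3,
              relax=0.3, inhib=2.0, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    drift = i0 * np.array([1 + c, 1 - c])
    t = 0.0
    while x.max() < z:                       # evidence accumulation (Eq. 2)
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
        t += dt
    winner = int(np.argmax(x))
    balance = z - x[1 - winner]              # confidence proxy |z - x_losing|
    for _ in range(int(relax / dt)):         # relaxation toward baseline
        x -= inhib * np.sign(x) * dt
    return winner, t, balance, x             # x: starting point of next trial
```

Feeding the returned state `x` back into `irm_trial` as the next trial's starting point reproduces the behavior discussed below: since the races do not interact, the relaxed state after a high confidence trial is, if anything, closer to baseline, which cannot produce the experimentally observed speed-up.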

Figure 7
figure7

Schematic dynamics of a race model with a relaxation mechanism. The upper and bottom dash lines correspond to the two opposite decision thresholds. The blue trajectory is a typical winning race. The black rectangle on the x-axis denotes the onset of the next stimulus, hence the end of the relaxation period. The green and orange trajectories are the loosing races in two trials with different confidence outcomes. The green and orange dashed lines represent the mean dynamics of these two races during the presentation of the next stimulus.

Since in the IRM there is no interaction between the two races, the relaxation of the winning race is the same in both low and high confidence trials. However, the ending point of the relaxation following a decision is closer to the baseline (0 line) after a high confidence trial than after a low confidence trial (Fig. 7). For the next trial, if the winning race is the same as previously, then the mean response times are identical in the low and high confidence cases. However, if the opposite decision is made, the response time in the post-low confidence case is faster than in the post-high confidence case, as can be seen from the mean races shown in Fig. 7. This behavior is in contradiction with the experimental data, for which we observe the opposite effect (see Supplementary Information Table S3). This conclusion applies more generally to any race-type model without interactions between units. It applies as well to IRMs with a non-linearity in the form of collapsing bounds (see Supplementary Information S6). Other researchers have studied different kinds of sequential effects, such as post-error slowing, with collapsing-bound models56. However, these models do not have a relaxation dynamics between trials, and the model parameters are allowed to change between trials. Here we show that the confidence-specific sequential effects result from a type of non-linearity in the dynamics which impacts the interaction between neural units.

Previous authors have proposed a set of non-linear diffusion equations to approximate the decision-making process in attractor neural networks57. These equations result from a description of the dynamics in the vicinity of the bifurcation point, the particular state at which any small input induces the appearance of the two attractors. This framework is not well suited to modeling sequences of decisions, since one would need to reset the system to the vicinity of the bifurcation point before the onset of each new trial. Even if this were done, the system would not account for the experimentally observed sequential effects, for the very same reasons as those presented above for linear diffusion models. As shown by our analysis of the attractor network dynamics, the key non-linearity underpinning the sequential effects is the one resulting from the existence of boundaries between basins of attraction.

Discussion

Dynamical models of decision making implement in different ways the same qualitative idea: a decision between two categories is based on a competition between units collecting evidence in favor of one or the other category (or on a single unit whose activity represents the difference between the categorical evidence). Most authors propose that behavioral confidence can be modeled as a function of the balance of evidence4,6,9,22,58. Very few works propose other mechanisms25,59,60. Among these exceptions, the consensus model25 assumes that several attractor neural networks run in parallel, and a decision is reached when there is a consensus (more than half of the networks have chosen the same alternative). Within this framework, confidence is defined as the fraction of networks that have chosen the winning alternative. This model can account for the relation between confidence and reaction times25. However, it is not clear how it could be modified to perform sequences of decisions. In particular, since the different networks reach the decision threshold at different times, it is not obvious when relaxation should start for each network, nor what kind of sequential effects should be expected for any given relaxation rule.

Since the large majority of models are based on the DDM framework, in view of our findings we now contrast the DDM and attractor neural network approaches. We do so on three aspects: the modelling of confidence, the analysis of sequential effects, and the issue of non-decision times.

Bayesian inference models compute confidence using drift-diffusion model (DDM) extensions based on the balance of decision variables4,6,9,58, possibly with additional mechanisms: the balance of decision variables combined with response times61, or post-decisional deliberation22 (the dynamics continues after the decision, thus updating the balance of evidence). Similar studies have been conducted with independent race models (IRM)15,16,17. These DDM or IRM models successfully account for various psychometric and chronometric specificities of human confidence. In DDMs, confidence based on the balance of decision variables predicts that confidence should decrease deterministically as a function of response time10. However, the response time distributions strongly overlap across confidence levels62. This property can be recovered by means of additional processes, such as in the two-stage drift-diffusion model22. Yet other effects remain unexplained within the DDM framework: the early influence of sensory evidence on confidence4, and the fact that confidence is mainly influenced by evidence in favor of the selected choice4.

Within the framework of attractor neural networks, early sensory evidence influences decision accuracy and reaction time63. The model discussed in the present paper is appropriate for going further and studying the effect of early sensory evidence on confidence. Given our findings, we expect that the model will reproduce the above-mentioned effects that are not well accounted for by DDMs.

As discussed in the Results section, various serial dependence effects are observed in perceptual decision-making. A recent finding is that the magnitude of history biases increases when previous trials were fast and correct. Given the known correlations between confidence, response time and accuracy, this effect has been interpreted as an impact of confidence on the next decision51. By directly measuring the subjective confidence of the participants, recent studies confirm that history biases are correlated with confidence37,52,64. In our experiment, we observe that high confidence trials lead to faster subsequent choices, in agreement with the above-mentioned experimental studies. On the theoretical side, this impact of confidence on the response times of subsequent trials has been investigated within the framework of DDMs37. The usual analysis consists in dividing trials into two categories, those following low and those following high confidence trials, and then fitting a DDM separately on each type of trial. The main finding is that the parameters (threshold and drift) differ depending on the confidence level of the previous trial. In the absence of such parameter changes, even with the addition of a relaxation between trials as discussed in the Results section, DDM or IRM models cannot account for the observed sequential effects: as we have seen, the predicted sequential effect would be the opposite of the observed one. In contrast, as discussed in the Results section, with a unique choice of parameter values for each participant, the attractor network model not only accounts for the relationship between confidence, response times and accuracy, but also reproduces the influence of confidence on serial dependence.

Finally, the question of non-decision times arises from our modelling work. Human studies commonly report right-skewed response time distributions31,65. Such long right tails are well captured by drift-diffusion models42,65, and this is generally considered strong evidence in favor of an evidence-accumulation mechanism. However, with trained subjects the right skew is less pronounced, and a Gaussian distribution fits the response time distribution well66. In contrast to human studies, experiments in monkeys do not show such long right tails in the response time histograms67. When assuming a constant value for the non-decision time, attractor neural network models do not produce right-skewed distributions, but accurately reproduce the shape of the distributions observed in monkey experiments68. In accordance with these results, we have shown that, for the range of parameters considered here, the decision time distribution generated by the neural network can be approximated by a Gaussian distribution. Within the neural attractor framework, the experimentally observed long right tails can thus be understood as originating solely from the non-decision times. We have proposed an estimation of the distribution of these non-decision times that allows us to fit the empirical response time distributions. One should note that, even when experimental data are analyzed within the DDM framework, the estimated non-decision times are not necessarily given by a constant value, but may show a distribution with a strong right skew32. These findings, combined with ours, suggest that the question of the origin of the long right tails in human response times should be reconsidered.

To conclude, in this work we designed an experiment in order to study confidence with human participants. We fitted a neural attractor network model specifically to each participant in order to describe their behavioral results in continuous sequences of perceptual decisions: response times, accuracy and confidence. Quite remarkably, we found that the impact of confidence on sequential effects is well described by the nonlinear nature of the attractor dynamics.

Methods

Experiment

Participants

Nine participants (7 females, mean age = 27.3 years, SD = 5.14) were recruited from the Laboratoire de Psychologie cognitive et de Psycholinguistique’s database (LSCP, DEC, ENS-EHESS-CNRS, PSL, Paris, France). Every subject had normal or corrected-to-normal vision. The participants performed three sessions on three distinct days in the same week, for a total duration of about 2 h 15 min. Three participants were excluded. Two of the excluded participants did not complete the experiment correctly, and one exhibited substantially asymmetric performance (98% correct responses at an orientation of 0.2 degrees, but 18% at −0.2 degrees). As a result, we analyzed data from 6 participants.

Ethics statement

The experiment followed the ethics requirements of the Declaration of Helsinki (2008) and was approved by the local Ethics Committee (Comité de Protection des Personnes, Ile-de-France VI, Paris, France). We obtained written informed consent from every participant, who received a compensation of 15 euros for their participation.

Stimuli and tasks

The stimuli were generated using Matlab along with the Psychophysics Toolbox69. They were displayed on a monitor at 57.3 cm from the participants’ head. The participants performed the experiment in a quiet and darkened experimental room, their heads stabilized by a chin rest. Trials began with the presentation of a black fixation point (duration = 200 ms). Then the stimulus for the primary decision task was presented, consisting of a circular grating (diameter = 4°, Tukey window, 2 cycles per degree, Michelson contrast = 89%, duration = 100 ms, phase randomly selected on each trial). The grating had eight possible orientations with respect to the vertical meridian, and participants were asked to categorize it as clockwise or anti-clockwise with respect to the vertical meridian by pressing the right or left arrow key. Participants were instructed as follows: “You have to respond quickly, but not at the expense of precision. After 1.5 s the message ‘Please answer’ will appear on the screen. It would be ideal if you answered before this message appears.”

Trials were of three types, grouped in pure blocks, feedback blocks and confidence blocks (see below). Participants performed three sessions on three distinct days. Each session (45 min) consisted of three runs, each run being composed of one exemplar of each of the three types of block, in a random order. Before starting the experiment, participants performed a short training block of each type, with easier orientations than in the main experiment.

Pure block

In this block, participants waited 300 ms after each decision before the black fixation point appeared. The stimulus appeared 200 ms after this fixation point. The eight possible orientations for the circular grating were [−1.6, −0.8, −0.5, −0.2, 0.2, 0.5, 0.8, 1.6] and a stimulus was chosen randomly among them with the following weights: [0.05, 0.1, 0.15, 0.2, 0.2, 0.15, 0.1, 0.05].
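The weighted stimulus sampling above can be sketched as follows (Python for illustration; the original experiment was implemented in Matlab):

```python
import random

# Sketch of the stimulus sampling in the pure block: orientations (degrees)
# drawn with the stated weights; random.choices performs weighted sampling.
orientations = [-1.6, -0.8, -0.5, -0.2, 0.2, 0.5, 0.8, 1.6]
weights = [0.05, 0.1, 0.15, 0.2, 0.2, 0.15, 0.1, 0.05]

rng = random.Random(0)
sequence = rng.choices(orientations, weights=weights, k=1000)  # one run's stimuli
```
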

Feedback block

In this block, 200 ms after the decision, participants received an auditory feedback (lasting 200 ms) about the correctness of the decision they had just made. The black fixation dot appeared 100 ms after this feedback and a new trial began. The orientations of the circular gratings were chosen randomly from [−1.6, −0.8, −0.2, 0.2, 0.8, 1.6] with the following weights: [0.12, 0.18, 0.2, 0.2, 0.18, 0.12].

Confidence block

In the confidence block, participants had to evaluate their confidence in the orientation decision 200 ms after making it. To do so, they moved a slider on a 10-point scale, with one extreme labeled “pure guess” and the other “absolutely certain”. Importantly, the initial position of the slider was chosen randomly on each trial. Participants moved the slider to the left by pressing the “q” key, and to the right with the “e” key. They confirmed their confidence rating by pressing the space bar. Participants could also indicate that they had made a “motor mistake” during the orientation task, by pressing a key marked with a red sticker instead of responding on the confidence scale. After reporting their confidence, participants waited 300 ms before the black fixation dot appeared, and the stimulus appeared 200 ms later. The orientations of the circular gratings were the same as in the feedback block.

Accuracy is higher and response times are slower in confidence blocks than in pure blocks. To test this effect on accuracy we ran a binomial regression of responses with fixed factors of orientation and block type (pure or confidence), the interaction between these factors, and a random participant intercept. The orientation coefficient was 2.15 (SD = 0.17, z = 12.44, p < 10−16); there was no main effect of block type (p = 0.385), but we found a significant orientation by block type interaction (value of 0.55, SD = 0.08, z = 6.97, p = 3 · 10−12), indicating that participants were more accurate in confidence blocks than in pure blocks. Similarly, we tested the effect on response times using a mixed-effects regression with the same factors and intercept as for accuracy (but with the absolute value of the orientation). We found that the orientation coefficient (value of −0.08, SD = 0.013, p = 0.0006) and the block type coefficient (value of 0.095, SD = 0.028, p = 0.011) were significant, meaning that participants are slower in confidence blocks. Moreover, the block type by orientation interaction was also significant (value of −0.028, SD = 0.010, p = 0.031), meaning that the difference between the two types of blocks is larger at low orientations. Surprisingly, we found that performance and response times across participants are identical in the feedback and pure blocks (no statistically significant difference). The participants were highly trained in the orientation discrimination task.

Statistical analyses

We used RStudio with the lme4 package70 to perform a linear mixed-effects analysis53 of the history biases in reaction times, on both the experimental data and the numerical simulations.

To perform the comparison between the experimental data and the model results illustrated in Fig. 6, we first transform each participant’s response times using the z-score71. This normalization allows us to analyze all participants together.
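The per-participant z-scoring can be sketched as follows (Python for illustration; the response time values are synthetic placeholders for two hypothetical participants):

```python
import statistics

# Sketch of the per-participant z-scoring of response times used before
# pooling participants (synthetic values, not experimental data).
rts = {
    "participant_1": [0.81, 0.95, 1.10, 0.88, 1.30],
    "participant_2": [1.40, 1.62, 1.55, 1.71, 1.95],
}
zscored = {}
for participant, xs in rts.items():
    mu = statistics.fmean(xs)      # participant's mean RT
    sd = statistics.pstdev(xs)     # participant's RT standard deviation
    zscored[participant] = [(x - mu) / sd for x in xs]
# Each participant's z-scored RTs now have mean 0 and unit variance,
# which allows analyzing all participants together.
```
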

We compared the LMM described in the main text, Eq. (1), to alternatives that do not include all the terms, using the anova function (from the lme4 package70), which performs model comparison based on the Akaike and Bayesian Information Criteria (AIC and BIC). We find that the LMM of Eq. (1) is preferable in all cases (see Supplementary Information Table S4).

Attractor neural network model

Neural dynamics

We consider a decision-making recurrent network model governed by local excitation and feedback inhibition27,28. Within a mean-field approach, Wong and Wang27 derived a reduced firing-rate version of a detailed biophysical model of spiking neurons29. This reduced model is composed of two interacting neural pools and faithfully reproduces not only the behavioral performance of the full model, but also the dynamics of the neural firing rates and of the output synaptic gating variables. The model variant28 that we consider here takes into account a corollary discharge72,73, in the form of a nonspecific inhibitory current injected into the neural pools just after a decision is made. This makes the neural activities relax towards a low-activity, neutral state, thereby allowing the network to deal with consecutive sequences of decision-making trials28. For completeness, we recall here the equations and parameters, with notation adapted to the present study.

The model consists of two competing units, each one representing an excitatory neuronal pool, selective to one of the two categories, C or AC. The dynamics is described by a set of coupled equations for the synaptic activities SC and SAC of the two units C and AC:

$$i\in \{C,AC\},\,\frac{{\rm{d}}{S}_{i}}{{\rm{d}}t}=-\frac{{S}_{i}}{{\tau }_{S}}+(1-{S}_{i})\gamma f({I}_{i,tot})$$
(3)

The synaptic drive Si for pool i ∈ {C, AC} corresponds to the fraction of activated NMDA conductance, and Ii,tot is the total synaptic input current to unit i. The function f is the effective single-cell input-output relation74, giving the firing rate as a function of the input current:

$${r}_{i}=f({I}_{i,tot})=\frac{a{I}_{i,tot}-b}{1-\exp [-d(a{I}_{i,tot}-b)]}$$
(4)

where a, b and d are parameters whose values are obtained through numerical fit. The total synaptic input currents, taking into account the inhibition between populations, the self-excitation, the background current and the stimulus-selective current, can be written as:

$${I}_{C,tot}={J}_{C,C}{S}_{C}-{J}_{C,AC}{S}_{AC}+{I}_{stim,C}+{I}_{noise,C}+{I}_{CD}(t)$$
(5)
$${I}_{AC,tot}={J}_{AC,AC}{S}_{AC}-{J}_{AC,C}{S}_{C}+{I}_{stim,AC}+{I}_{noise,AC}+{I}_{CD}(t)$$
(6)

with Ji,j the synaptic couplings. The minus signs in the equations make explicit the fact that the inter-units connections are inhibitory (the synaptic parameters Ji,j being thus positive or null). The term Istim,i is the stimulus-selective external input. The form of this stimulus-selective current is:

$${I}_{stim,i}={J}_{ext}(1\pm {c}_{\theta })$$
(7)

with i = C, AC. The sign ± is positive when the stimulus favors population C, and negative otherwise. Here the parameter Jext combines a synaptic coupling constant and the global strength of the signal (which are parametrized separately in the original model27,28). The quantity cθ, between 0 and 1, characterizes the stimulus strength in favor of the actual category; here it is an increasing function of the absolute value of the stimulus orientation angle θ.

In addition to the stimulus-selective part, each unit individually receives an extra noisy input, fluctuating around the mean effective external input I0:

$${\tau }_{noise}\frac{{\rm{d}}{I}_{noise,i}}{{\rm{d}}t}=-({I}_{noise,i}(t)-{I}_{0})+{\eta }_{i}(t)\sqrt{{\tau }_{noise}}{\sigma }_{noise}$$
(8)

with τnoise a synaptic time constant which filters the white noise ηi(t), and σnoise the noise amplitude.

Upon presentation of a stimulus, the system evolves toward one of the two attractor states, corresponding to the decision state. We consider that the decision is made when the firing rate of one of the two units crosses a threshold z for the first time.

After each decision, a corollary discharge in the form of an inhibitory input is sent to both units until the next stimulus is presented:

$${I}_{CD}(t)=\{\begin{array}{cc}0 & {\rm{during}}\,{\rm{stimulus}}\,{\rm{presentation}}\\ -{I}_{CD,\max }\exp (-(t-{t}_{D})/{\tau }_{CD}) & {\rm{after}}\,{\rm{the}}\,{\rm{decision}}\,{\rm{time}},{t}_{D}\end{array}$$
(9)

This inhibitory input, delivered between the time of the decision and the presentation of the next stimulus, allows the network to escape from the current attractor and thus to engage in a new decision task28.
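A minimal single-trial integration of Eqs. (3)-(8) can be sketched as follows (Python for illustration; the paper's own simulations use Julia). Parameter values follow Wong and Wang27 where available; Jext, cθ, the threshold z and the integration settings are illustrative assumptions, not the fitted values of this study. The corollary discharge of Eq. (9) acts between trials and is omitted since the sketch stops at the decision:

```python
import math
import random

# Sketch of the reduced two-unit attractor dynamics, Eqs. (3)-(8).
a, b, d = 270.0, 108.0, 0.154        # f-I curve parameters of Eq. (4)
gamma, tau_S = 0.641, 0.100          # gating kinetics of Eq. (3) (tau_S in s)
J_self, J_cross = 0.2609, 0.0497     # couplings J_ii, J_ij (nA), Eqs. (5)-(6)
I0, sigma_noise, tau_noise = 0.3255, 0.02, 0.002  # background input, Eq. (8)
J_ext, c_theta = 0.0156, 0.1         # stimulus current of Eq. (7), assumed
z = 20.0                             # decision threshold on the rates (Hz), assumed
dt = 0.0005                          # Euler-Maruyama time step (s)

def f(I_tot):
    """Effective single-cell input-output relation, Eq. (4)."""
    x = a * I_tot - b
    if abs(x) < 1e-9:
        return 1.0 / d               # limit of x / (1 - exp(-d x)) as x -> 0
    return x / (1.0 - math.exp(-d * x))

def run_trial(seed=0, t_max=3.0):
    """Integrate the dynamics until one firing rate crosses the threshold z."""
    rng = random.Random(seed)
    S = [0.1, 0.1]                   # synaptic drives S_C, S_AC
    I_noise = [I0, I0]               # noisy background inputs of Eq. (8)
    t = 0.0
    while t < t_max:
        I_tot = [
            J_self * S[0] - J_cross * S[1] + J_ext * (1 + c_theta) + I_noise[0],
            J_self * S[1] - J_cross * S[0] + J_ext * (1 - c_theta) + I_noise[1],
        ]
        r = [f(I) for I in I_tot]
        if max(r) > z:               # first threshold crossing = decision
            return (0 if r[0] > r[1] else 1), t
        for i in range(2):
            # Eq. (3): gating variable dynamics
            S[i] += dt * (-S[i] / tau_S + (1.0 - S[i]) * gamma * r[i])
            # Eq. (8): Ornstein-Uhlenbeck background noise
            I_noise[i] += (dt / tau_noise) * (I0 - I_noise[i]) \
                + sigma_noise * math.sqrt(dt / tau_noise) * rng.gauss(0.0, 1.0)
        t += dt
    return None, t_max               # no decision reached within t_max

choice, decision_time = run_trial(seed=1)
```

In a sequence of trials, the inhibitory current of Eq. (9) would be added to both total inputs after each decision, resetting the network before the next stimulus.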

Confidence modeling

Within the various decision-making modelling frameworks, similar proposals have been made to model the neural correlate of the behavioral confidence level. In race models15, which have as many accumulation variables as stimulus categories, as in attractor network models, the balance of evidence at the time of the perceptual decision has been used to model the neural correlate of behavioral confidence23,41,75. This balance of evidence is given by the absolute difference between the activities of the category-specific units at the time of the decision. Here, we consider that confidence is obtained as a function of the difference in neural pool activities23, Δr = |rC − rAC|.

In our experiment, subjects expressed their confidence level as a number on a scale from 0 to 9. In order to match the neural balance of evidence to the confidence reported by the subject, we map the histogram of the balance of evidence onto the histogram of behavioral confidence, a procedure called histogram matching76. Note that the mapping here is from a continuous variable to a discrete one (taking integer values from 0 to 9).
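The histogram-matching step can be sketched as follows (Python for illustration; both samples below are synthetic placeholders, not experimental data):

```python
import random
from bisect import bisect_right

# Sketch of histogram matching: map the continuous balance of evidence
# Delta_r = |r_C - r_AC| onto the discrete confidence scale (0-9).
rng = random.Random(0)
delta_r = [abs(rng.gauss(8.0, 4.0)) for _ in range(1000)]   # model Delta_r (Hz), synthetic
reports = [min(9, max(0, round(rng.gauss(6.0, 2.0)))) for _ in range(1000)]  # synthetic reports

# Empirical cumulative distribution of the reported confidence levels.
counts = [reports.count(k) for k in range(10)]
cdf, acc = [], 0.0
for c in counts:
    acc += c / len(reports)
    cdf.append(acc)

# Quantiles of Delta_r carving its distribution into the same proportions.
sorted_dr = sorted(delta_r)
cuts = [sorted_dr[min(len(sorted_dr) - 1, int(p * len(sorted_dr)))]
        for p in cdf[:-1]]

def confidence(dr):
    """Discrete confidence level (0-9) assigned to a balance of evidence dr."""
    return bisect_right(cuts, dr)

levels = [confidence(dr) for dr in delta_r]
```

By construction the mapping is monotonic, so a larger balance of evidence never yields a lower confidence level, and the marginal distribution of the mapped levels matches the behavioral histogram.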

Fitting procedure

For each participant, we calibrate the model by fitting both the mean response times and the accuracies for each orientation, separately for each block. Note that we only fit the means, which in particular implies that the fits do not take serial dependencies into account. Consequently, any sequential effect that arises in the model results from the intrinsic dynamics of the network, and not from a fitting procedure targeting these effects.

For most model parameters we take the values used in a previous study28, as reproduced in Table 2. For the model calibration we consider ICD,max, τCD, cθ and z as free parameters. We require the two parameters ICD,max and τCD to be common to all participants (joint optimization). We optimize the parameters cθ (one for each orientation value) and the decision threshold z for each subject and each block.

Table 2 Numerical values of the model parameters.

The rationale for this choice of free parameters is as follows. To avoid overfitting, one has to restrict the number of free parameters as much as possible. We rely on the model calibrations done in previous works18,27,28, which suggest keeping as far as possible the parameter values resulting from the initial work of Wong and Wang. In particular, the original parameter values were chosen so as to reproduce empirical data with the mean-field model. Since the empirical data here are only behavioral, it is difficult to calibrate the synaptic weights: a significant change of these parameters would be required to change the behavioral outcomes. Importantly, we tried to restrict the calibration to a small set of reasonably independent parameters. For instance, a change in the weight values may be compensated by a change in the decision threshold (so that the cost function may be flat over a large domain of parameter space). With the weights fixed, we can optimize the fit with respect to the decision threshold more safely. An important quantity is the signal-to-noise ratio: by keeping the internal noise constant during the fitting procedure, we explore the whole range of this ratio. We also note that our choice of free parameters parallels the one made in the DDM framework (drift, threshold and noise level), which facilitates the comparison with the DDM approach. Finally, imposing some of the free parameters to be common to all participants allows us to further reduce the number of free parameters, at the price of a more complex optimization (a partially joint calibration of all the participant-specific networks).

The observed response time is the sum of a decision time and a non-decision time. Assuming no correlation between these two times, the mean non-decision time is independent of the orientation. To compare data with model simulations (which only give a decision time) at any given orientation θ, we first subtract from the mean response time at that orientation the mean response time averaged over all orientations (for both data and simulations). We calibrate the model parameters so as to fit these centered mean response times. This provides a fit of the mean response times (at each angle) up to a global constant, which is the mean non-decision time (the modeling of the non-decision time distribution is presented in the next section).

For each participant, and each block, we thus consider the cost function:

$$\begin{array}{rcl}{\rm{Cost}}\,{\rm{function}} & = & \lambda \frac{1}{m}\sum _{\theta }{([{\langle RT\rangle }_{network}(\theta )-{\langle RT\rangle }_{network}]-[{\langle RT\rangle }_{data}(\theta )-{\langle RT\rangle }_{data}])}^{2}\\ & & +\frac{1}{n}\sum _{\theta }{({\langle accuracy\rangle }_{network}(\theta )-{\langle accuracy\rangle }_{data}(\theta ))}^{2}\end{array}$$
(10)

where the sums are over the orientation values θ = {0.2, 0.5, 0.8, 1.6}, the brackets 〈…〉 denote averages (as detailed below), and the normalization factors m (for the response times) and n (for the accuracies) are given by

$$m=\mathop{{\rm{\max }}}\limits_{\theta }{([{\langle RT\rangle }_{network}(\theta )-{\langle RT\rangle }_{network}]-[{\langle RT\rangle }_{data}(\theta )-{\langle RT\rangle }_{data}])}^{2}$$
$$n=\mathop{{\rm{\max }}}\limits_{\theta }{[{\langle accuracy\rangle }_{network}(\theta )-{\langle accuracy\rangle }_{data}(\theta )]}^{2}$$

In these expressions, 〈RT〉data(θ) denotes the mean experimental response time obtained by averaging over all trials at orientations ±θ, and 〈RT〉data is the average over all orientations; 〈RT〉network(θ) and 〈RT〉network are the corresponding averages obtained from the model simulations. The coefficient λ sets the relative weight given to the response time and accuracy cost terms. We present the results obtained with λ = 2, but the choice of this parameter does not drastically impact the fitted parameters.
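The cost function of Eq. (10) can be sketched as follows (Python for illustration; the response times and accuracies below are synthetic placeholders, whereas in the actual fit they come from the data and from the network simulations):

```python
# Sketch of the cost function of Eq. (10), with lambda = 2 as in the paper.
thetas = [0.2, 0.5, 0.8, 1.6]
rt_net = {0.2: 0.95, 0.5: 0.90, 0.8: 0.85, 1.6: 0.78}    # mean decision times (s), synthetic
rt_data = {0.2: 1.25, 0.5: 1.19, 0.8: 1.13, 1.6: 1.05}   # mean response times (s), synthetic
acc_net = {0.2: 0.68, 0.5: 0.80, 0.8: 0.90, 1.6: 0.98}   # synthetic accuracies
acc_data = {0.2: 0.65, 0.5: 0.82, 0.8: 0.91, 1.6: 0.97}

def cost(lam=2.0):
    # Centering the mean response times removes the global constant
    # corresponding to the (orientation-independent) mean non-decision time.
    net_mean = sum(rt_net.values()) / len(thetas)
    data_mean = sum(rt_data.values()) / len(thetas)
    rt_err = [(rt_net[t] - net_mean) - (rt_data[t] - data_mean) for t in thetas]
    acc_err = [acc_net[t] - acc_data[t] for t in thetas]
    m = max(e * e for e in rt_err)    # normalization of the response-time term
    n = max(e * e for e in acc_err)   # normalization of the accuracy term
    return lam * sum(e * e for e in rt_err) / m \
        + sum(e * e for e in acc_err) / n
```

With these normalizations, each of the two terms is bounded by the number of orientation values, so neither term can dominate the cost purely through its units.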

For each subject, we minimize this cost function with respect to the choice of cθ and z, making use of a Markov Chain Monte Carlo fitting procedure coupled with a subplex procedure77. This method is particularly well adapted to simulation-based models with stochastic dynamics. Finally, ICD,max and τCD are fitted using a grid search algorithm, as they have less influence on the cost function. In the model, the parameter cθ represents the stimulus strength, which we expect to be a monotonic function of the amplitude of the angle θ. When the cθ are allowed to take independent values for each orientation, θ = {0.2, 0.5, 0.8, 1.6}, we find that they can be approximated by a linear or quadratic function of θ, depending on the participant. We performed an AIC test78 between the linear and quadratic fits in order to choose which function to use for each participant. These approximations reduce the number of free parameters.

In order to obtain confidence intervals for the different parameters, we used likelihood-based estimation of confidence intervals for Markov Chain Monte Carlo methods. The confidence interval on the parameters is the 70% confidence interval, assuming a Gaussian distribution of the cost function. This provides an approximation of the reliability of the parameter values found. To assess the reliability of this method, we checked that the threshold z and the stimulus strength parameters cθ have an almost uncorrelated influence on the cost function.

The results of the calibrating procedure are summarized in Supplementary Information Tables S5 and S6, with ICD,max = 0.033 nA and τCD = 150 ms.

Estimating the non-decision time

The above fitting procedure calibrates the mean response times up to a global constant, corresponding to the mean non-decision time. As explained in the main text, we can go further and model the non-decision time distribution itself.

The non-decision time is considered to be due to stimulus encoding and motor execution31. Most model-based analyses of response time distributions assume a constant non-decision time27,42,65. However, fitting data originating from a skewed distribution under the assumption of a non-skewed non-decision time distribution biases the parameter estimates if the model for the non-decision time is not correct79. Recently, authors have proposed a mathematical method to fit a non-parametric non-decision time distribution32. Analyzing various experimental data sets with this method within the framework of drift-diffusion models, they find that strongly right-skewed non-decision time distributions are common.

In this paper we make the hypothesis that the non-decision time distributions are ex-Gaussian distributions, whose parameters are inferred from the data using the deconvolution method32, as detailed in Supplementary Information Section S5. We present in Fig. 3 the fits of the response time distributions and the inferred non-decision time distributions.
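The resulting decomposition of the response time can be sketched as follows (Python for illustration; all parameter values are illustrative assumptions, not the inferred ones): an approximately Gaussian decision time, as produced by the attractor network, plus an ex-Gaussian non-decision time.

```python
import random
import statistics

# Sketch of the response-time decomposition: Gaussian decision time plus
# ex-Gaussian non-decision time (all parameter values are illustrative).
rng = random.Random(1)

def ex_gaussian(mu, sigma, tau, rng):
    """Sample from an ex-Gaussian: Normal(mu, sigma) + Exponential(mean tau)."""
    return rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau)

n = 20000
decision = [rng.gauss(0.45, 0.10) for _ in range(n)]          # decision times (s)
ndt = [ex_gaussian(0.30, 0.03, 0.12, rng) for _ in range(n)]  # non-decision times (s)
rt = [dec + nd for dec, nd in zip(decision, ndt)]

# The exponential component of the non-decision time generates the long
# right tail (positive skewness) of the response-time distribution.
mean_rt = statistics.fmean(rt)
sd_rt = statistics.pstdev(rt)
skew_rt = statistics.fmean(((x - mean_rt) / sd_rt) ** 3 for x in rt)
```

Since the Gaussian components are symmetric, the skewness of the summed response times comes entirely from the exponential tail of the non-decision time, consistent with the interpretation proposed above.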

Data availability

Experimental data as well as the statistical analysis scripts are available at https://osf.io/eh2xb/. Simulations of the model were carried out using the Julia programming language80. The scripts corresponding to these numerical simulations are also available at https://osf.io/eh2xb/.

References

  1. 1.

    Meyniel, F., Sigman, M. & Mainen, Z. F. Confidence as bayesian probability: from neural origins to behavior. Neuron 88, 78–92 (2015).

    CAS  Article  PubMed  Google Scholar 

  2. 2.

    Mamassian, P. Visual confidence. Annual Review of Vision Science 2, 1–23, https://doi.org/10.1146/annurev-vision-111815-114630 (2015).

    Article  Google Scholar 

  3. 3.

    Peirce, C. S. & Jastrow, J. On small differences in sensation. Memoirs of the National Academy of Sciences (1884).

  4. 4.

    Zylberberg, A., Barttfeld, P. & Sigman, M. The construction of confidence in a perceptual decision. Frontiers in integrative neuroscience 6, 79 (2012).

    Article  PubMed  PubMed Central  Google Scholar 

  5. 5.

    Adler, W. T. & Ma, W. J. Comparing bayesian and non-bayesian accounts of human confidence reports. PLoS computational biology 14, e1006572 (2018).

    ADS  Article  CAS  PubMed  PubMed Central  Google Scholar 

  6. 6.

    Vickers, D. Decision processes in visual perception (Academic Press, 1979 (reeditited in 2014)).

  7. 7.

    Fleming, S. M., Weil, R. S., Nagy, Z., Dolan, R. J. & Rees, G. Relating introspective accuracy to individual differences in brain structure. Science 329, 1541–1543 (2010).

    ADS  CAS  Article  PubMed  PubMed Central  Google Scholar 

  8. 8.

    Kepecs, A. & Mainen, Z. F. A computational framework for the study of confidence in humans and animals. Philosophical Transactions of the Royal Society B Biological Sciences 367, 1322–1337 (2012).

    Article  PubMed Central  Google Scholar 

  9. 9.

    Kepecs, A., Uchida, N., Zariwala, H. A. & Mainen, Z. F. Neural correlates, computation and behavioural impact of decision confidence. Nature 455, 227 (2008).

    ADS  CAS  Article  PubMed  Google Scholar 

  10. Kiani, R. & Shadlen, M. N. Representation of confidence associated with a decision by neurons in the parietal cortex. Science 324, 759–764 (2009).

  11. Komura, Y., Nikkuni, A., Hirashima, N., Uetake, T. & Miyamoto, A. Responses of pulvinar neurons reflect a subject’s confidence in visual categorization. Nature Neuroscience 16, 749 (2013).

  12. Lak, A. et al. Orbitofrontal cortex is required for optimal waiting based on decision confidence. Neuron 84, 190–201 (2014).

  13. Bogacz, R., Brown, E., Moehlis, J., Holmes, P. & Cohen, J. D. The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks. Psychological Review 113, 700 (2006).

  14. Ratcliff, R. A theory of memory retrieval. Psychological Review 85, 59 (1978).

  15. Raab, D. H. Division of psychology: Statistical facilitation of simple reaction times. Transactions of the New York Academy of Sciences 24, 574–590 (1962).

  16. Vickers, D. Evidence for an accumulator model of psychophysical discrimination. Ergonomics 13, 37–58 (1970).

  17. Merkle, E. C. & Van Zandt, T. An application of the Poisson race model to confidence calibration. Journal of Experimental Psychology: General 135, 391 (2006).

  18. Wang, X.-J. Probabilistic decision making by slow reverberation in cortical circuits. Neuron 36, 955–968 (2002).

  19. Clarke, F. R., Birdsall, T. G. & Tanner, W. P. Jr Two types of ROC curves and definitions of parameters. The Journal of the Acoustical Society of America 31, 629–630 (1959).

  20. Yeung, N. & Summerfield, C. Metacognition in human decision-making: confidence and error monitoring. Phil. Trans. R. Soc. B 367, 1310–1321 (2012).

  21. Meyniel, F., Schlunegger, D. & Dehaene, S. The sense of confidence during probabilistic learning: a normative account. PLoS Computational Biology 11, e1004305 (2015).

  22. Pleskac, T. J. & Busemeyer, J. R. Two-stage dynamic signal detection: a theory of choice, decision time, and confidence. Psychological Review 117, 864 (2010).

  23. Wei, Z. & Wang, X.-J. Confidence estimation as a stochastic process in a neurodynamical system of decision making. Journal of Neurophysiology 114, 99–113 (2015).

  24. Koriat, A. The self-consistency model of subjective confidence. Psychological Review 119, 80 (2012).

  25. Paz, L., Insabato, A., Zylberberg, A., Deco, G. & Sigman, M. Confidence through consensus: a neural mechanism for uncertainty monitoring. Scientific Reports 6, 21830 (2016).

  26. Jaramillo, J., Mejias, J. F. & Wang, X.-J. Engagement of pulvino-cortical feedforward and feedback pathways in cognitive computations. Neuron 101, 321–336 (2019).

  27. Wong, K.-F. & Wang, X.-J. A recurrent network mechanism of time integration in perceptual decisions. Journal of Neuroscience 26, 1314–1328 (2006).

  28. Berlemont, K. & Nadal, J.-P. Perceptual decision-making: biases in post-error reaction times explained by attractor network dynamics. Journal of Neuroscience 39, 833–853, https://doi.org/10.1523/JNEUROSCI.1015-18.2018 (2019).

  29. Compte, A., Brunel, N., Goldman-Rakic, P. S. & Wang, X.-J. Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cerebral Cortex 10, 910–923 (2000).

  30. Wilcoxon, F. Individual comparisons by ranking methods. Biometrics Bulletin 1, 80 (1945).

  31. Luce, R. D. Response times: their role in inferring elementary mental organization (Oxford University Press, 1986).

  32. Verdonck, S. & Tuerlinckx, F. Factoring out nondecision time in choice reaction time data: theory and implications. Psychological Review 123, 208 (2016).

  33. Ding, L. & Gold, J. I. Neural correlates of perceptual decision making before, during, and after decision commitment in monkey frontal eye field. Cerebral Cortex 22, 1052–1067 (2011).

  34. Hebart, M. N., Schriever, Y., Donner, T. H. & Haynes, J.-D. The relationship between perceptual decision variables and confidence in the human brain. Cerebral Cortex 26, 118–130 (2014).

  35. Beck, J. M. et al. Probabilistic population codes for Bayesian decision making. Neuron 60, 1142–1152, https://doi.org/10.1016/j.neuron.2008.09.021 (2008).

  36. Baranski, J. V. & Petrusic, W. M. The calibration and resolution of confidence in perceptual judgments. Perception & Psychophysics 55, 412–428 (1994).

  37. Desender, K., Boldt, A., Verguts, T. & Donner, T. H. Post-decisional sense of confidence shapes speed-accuracy tradeoff for subsequent choices. bioRxiv 466730 (2018).

  38. Sanders, J. I., Hangya, B. & Kepecs, A. Signatures of a statistical computation in the human sense of confidence. Neuron 90, 499–506 (2016).

  39. Urai, A. E., Braun, A. & Donner, T. H. Pupil-linked arousal is driven by decision uncertainty and alters serial choice bias. Nature Communications 8, 14637 (2017).

  40. Geller, E. S. & Whitman, C. P. Confidence in stimulus predictions and choice reaction time. Memory & Cognition 1, 361–368 (1973).

  41. Vickers, D. & Packer, J. Effects of alternating set for speed or accuracy on response time, accuracy and confidence in a unidimensional discrimination task. Acta Psychologica 50, 179–197 (1982).

  42. Usher, M. & McClelland, J. L. The time course of perceptual choice: the leaky, competing accumulator model. Psychological Review 108, 550 (2001).

  43. Griffin, D. & Tversky, A. The weighing of evidence and the determinants of confidence. Cognitive Psychology 24, 411–435 (1992).

  44. Ernst, M. O. & Banks, M. S. Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415, 429 (2002).

  45. Laming, D. Choice reaction performance following an error. Acta Psychologica 43, 199–224 (1979).

  46. Leopold, D. A., Wilke, M., Maier, A. & Logothetis, N. K. Stable perception of visually ambiguous patterns. Nature Neuroscience 5, 605 (2002).

  47. Gold, J. I., Law, C.-T., Connolly, P. & Bennur, S. The relative influences of priors and sensory evidence on an oculomotor decision variable during perceptual learning. Journal of Neurophysiology 100, 2653–2668 (2008).

  48. Cho, R. Y. et al. Mechanisms underlying dependencies of performance on stimulus history in a two-alternative forced-choice task. Cognitive, Affective, & Behavioral Neuroscience 2, 283–299 (2002).

  49. Glaze, C. M., Kable, J. W. & Gold, J. I. Normative evidence accumulation in unpredictable environments. eLife 4, e08825 (2015).

  50. Bonaiuto, J. J., de Berker, A. & Bestmann, S. Response repetition biases in human perceptual decisions are explained by activity decay in competitive attractor models. eLife 5, e20047 (2016).

  51. Braun, A., Urai, A. E. & Donner, T. H. Adaptive history biases result from confidence-weighted accumulation of past choices. Journal of Neuroscience 2189–17 (2018).

  52. Samaha, J., Switzky, M. & Postle, B. R. Confidence boosts serial dependence in orientation estimation. bioRxiv 369140 (2018).

  53. Gelman, A. & Hill, J. Data analysis using regression and multilevel/hierarchical models (Cambridge University Press, 2007).

  54. Fay, M. P. & Proschan, M. A. Wilcoxon–Mann–Whitney or t-test? On assumptions for hypothesis tests and multiple interpretations of decision rules. Statistics Surveys 4, 1 (2010).

  55. Drugowitsch, J., Moreno-Bote, R. & Pouget, A. Relation between belief and performance in perceptual decision making. PLoS ONE 9, e96511 (2014).

  56. Purcell, B. A. & Kiani, R. Neural mechanisms of post-error adjustments of decision policy in parietal cortex. Neuron 89, 658–671 (2016).

  57. Roxin, A. & Ledberg, A. Neurobiological models of two-choice decision making can be reduced to a one-dimensional nonlinear diffusion equation. PLoS Computational Biology 4, e1000046 (2008).

  58. Moreno-Bote, R. Decision confidence and uncertainty in diffusion models with partially correlated neuronal integrators. Neural Computation 22, 1786–1811 (2010).

  59. Rolls, E. T., Grabenhorst, F. & Deco, G. Choice, difficulty, and confidence in the brain. NeuroImage 53, 694–706 (2010).

  60. Rolls, E. T., Grabenhorst, F. & Deco, G. Decision-making, errors, and confidence in the brain. Journal of Neurophysiology 104, 2359–2374 (2010).

  61. Kiani, R., Corthell, L. & Shadlen, M. N. Choice certainty is informed by both evidence and decision time. Neuron 84, 1329–1342 (2014).

  62. Ratcliff, R. & Starns, J. J. Modeling confidence and response time in recognition memory. Psychological Review 116, 59 (2009).

  63. Wong, K.-F., Huk, A. C., Shadlen, M. N. & Wang, X.-J. Neural circuit dynamics underlying accumulation of time-varying evidence during perceptual decision making. Frontiers in Computational Neuroscience 1, 6 (2007).

  64. Desender, K., Murphy, P. R., Boldt, A., Verguts, T. & Yeung, N. A post-decisional neural marker of confidence predicts information-seeking. bioRxiv 433276 (2018).

  65. Ratcliff, R. & Rouder, J. N. Modeling response times for two-choice decisions. Psychological Science 9, 347–356 (1998).

  66. Peirce, C. S. On the theory of errors of observation. Report of the Superintendent of the United States Coast Survey Showing the Progress of the Survey During the Year 1870, 220–224 (1873).

  67. Ditterich, J. Evidence for time-variant decision making. European Journal of Neuroscience 24, 3628–3641 (2006).

  68. Wang, X.-J. Decision making in recurrent neuronal circuits. Neuron 60, 215–234 (2008).

  69. Kleiner, M. et al. What’s new in Psychtoolbox-3. Perception 36, 1 (2007).

  70. Bates, D., Mächler, M., Bolker, B. & Walker, S. Fitting linear mixed-effects models using lme4. Journal of Statistical Software 67, 1–48 (2015).

  71. Kreyszig, E. Advanced engineering mathematics, 4th edn (Wiley, 1979).

  72. Sommer, M. A. & Wurtz, R. H. Visual perception and corollary discharge. Perception 37, 408–418 (2008).

  73. Crapse, T. B. & Sommer, M. A. Frontal eye field neurons with spatial representations predicted by their subcortical input. Journal of Neuroscience 29, 5308–5318 (2009).

  74. Abbott, L. & Chance, F. S. Drivers and modulators from push-pull and balanced synaptic input. Progress in Brain Research 149, 147–155 (2005).

  75. Smith, P. L. & Vickers, D. The accumulator model of two-choice discrimination. Journal of Mathematical Psychology 32, 135–168 (1988).

  76. Gonzalez, R. C. et al. Digital image processing (2002).

  77. Rowan, T. The subplex method for unconstrained optimization. Ph.D. thesis, Department of Computer Sciences, Univ. of Texas (1990).

  78. Akaike, H. Information theory and an extension of the maximum likelihood principle. In Breakthroughs in Statistics, 610–624 (Springer, 1992).

  79. Ratcliff, R. Parameter variability and distributional assumptions in the diffusion model. Psychological Review 120, 281 (2013).

  80. Bezanson, J., Edelman, A., Karpinski, S. & Shah, V. B. Julia: a fresh approach to numerical computing. SIAM Review 59, 65–98, https://doi.org/10.1137/141000671 (2017).

Acknowledgements

We are grateful to Laurent Bonnasse-Gahot for useful discussions and suggestions. We thank Pascal Mamassian, Vincent de Gardelle and Xiao-Jing Wang for stimulating discussions. We thank Isabelle Brunet for her help in recruiting the participants and organizing the experimental sessions. We thank the anonymous referees for useful remarks. K.B. acknowledges a fellowship from the ENS Paris-Saclay.

Author information

Contributions

All authors contributed to the research plan. K.B., J.-R.M. and J.S. conceived the experiment; K.B. and J.-R.M. conducted the experiment; K.B. and J.S. performed the statistical analyses; K.B. and J.-P.N. performed the mathematical modeling; K.B. performed the numerical simulations; K.B. and J.-P.N. analyzed the results with inputs from the other authors. K.B. and J.-P.N. wrote the paper with input from the other authors. All authors reviewed the manuscript.

Corresponding author

Correspondence to Kevin Berlemont.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Berlemont, K., Martin, J., Sackur, J. et al. Nonlinear neural network dynamics accounts for human confidence in a sequence of perceptual decisions. Sci Rep 10, 7940 (2020). https://doi.org/10.1038/s41598-020-63582-8
