Abstract
Electrophysiological recordings during perceptual decision tasks in monkeys suggest that the degree of confidence in a decision is based on a simple neural signal produced by the neural decision process. Attractor neural networks provide an appropriate biophysical modeling framework, and account for the experimental results very well. However, it remains unclear whether attractor neural networks can account for confidence reports in humans. We present the results from an experiment in which participants are asked to perform an orientation discrimination task, followed by a confidence judgment. Here we show that an attractor neural network model quantitatively reproduces, for each participant, the relations between accuracy, response times and confidence. We show that the attractor neural network also accounts for confidence-specific sequential effects observed in the experiment (participants are faster on trials following high confidence trials). Remarkably, this is obtained as an inevitable outcome of the network dynamics, without any feedback specific to the previous decision (that would result in, e.g., a change in the model parameters before the onset of the next trial). Our results thus suggest that a metacognitive process such as confidence in one’s decision is linked to the intrinsically nonlinear dynamics of the decision-making neural network.
Introduction
A general understanding of the notion of confidence is that it quantifies the degree of belief in a decision^{1,2}. The simplest context for studying confidence is that of perceptual decision making. In psychology and neuroscience, the most commonly used experimental protocols are two-alternative forced choice (2AFC) and stimulus discrimination tasks in which, in a sequence of trials, the participant is presented with stimuli and has to make a binary choice, associating each stimulus with one of two categories (e.g. decide if a visual stimulus is the image of a cat or a dog). Many studies have tackled the issue of confidence measurement in perceptual decision tasks, either by directly requiring participants to provide an estimation of their confidence^{3,4,5}, or by using post-decision wagering (subjects can choose a safe option, with a low reward regardless of the correct choice)^{6,7,8}. Post-decision wagering has been used in behaving animals in order to study the neural basis of confidence^{9,10,11,12}.
In order to model the neural mechanisms underlying the decision-making process, two main routes are followed. The most frequently used considers (linear) drift-diffusion models (DDM)^{13,14} or independent race models (IRM)^{15,16,17}, in which choice-specific cells accumulate evidence in favor of one or the other alternative to which they are tuned. A more biophysical approach considers attractor neural networks^{18}, with competing pools of cells, leading to nonlinear dynamics with choice-specific attractors. Within one or the other framework, researchers have tried to relate confidence to the decision-making process, making different hypotheses on the origin of confidence. Within Bayesian and signal detection theories, researchers model confidence from the probability of having made the correct choice^{7,8,19,20,21}. When considering the neural dynamics, researchers assume that confidence is based on the integration of evidence over time^{9,22,23}. Finally, researchers have modeled confidence as based on a consensus reached by a pool of independent decision-making networks^{24,25}.
Several experimental studies^{1,8} suggest that choice and confidence can be read out from the same neural representation. In an experiment by Kiani and Shadlen^{10}, monkeys perform a discrimination task (each correct choice leading to a reward), but on half of the trials, the monkey is given the option to abort the task in favor of a certain but small reward. The probability of choosing this ‘sure target’ reflects the monkey’s degree of choice uncertainty, assuming that risk aversion strongly correlates with this uncertainty. In order to account for the experimental findings in this uncertain option task, authors^{23,26} have proposed biophysical attractor neural network models. They show that these models capture both the behavioral observations and the associated physiological recordings from neurons in the Lateral Intraparietal (LIP) cortex, an area where the firing rates of individual neurons strongly correlate with the decision that is being made, and in the pulvinar, which might be the locus of the readout of confidence from the LIP activity.
In the present work, we address the ability of attractor networks to quantitatively account for confidence reports in humans. For this, we first experimentally investigate confidence formation and its impact on sequential effects in human experiments. Participants perform an orientation discrimination task on Gabor patches that deviate clockwise or counterclockwise with respect to the vertical. In some blocks, after reporting their decisions, participants perform a confidence judgment on a visual scale. Then, we fit an attractor neural network model^{27,28} on the behavioral data. More precisely, for each participant, we calibrate a network specifically on his/her behavioral data, the fit being based only on mean response times and accuracy. With the model so calibrated for each participant, and making simulations that replicate the experimental protocol, we here for the first time quantitatively confront the behavior of an attractor neural network with human behavior during full sequences of perceptual decisions. Following Wei and Wang^{23}, we assume that confidence is an increasing function of the difference, measured at the time the decision is made, between the mean spike rates of the two neural pools specific to one or the other of the two possible choices. We show that in this way, the behavioral effects of confidence can be accurately estimated for each participant. We find that the attractor neural network accurately reproduces an effect of confidence on serial dependence observed in the experiment: participants are faster (respectively slower) on trials following high (resp. low) confidence trials. Since drift-diffusion models cannot account for such effects without ad hoc changes of parameters from trial to trial, we argue that these sequential effects reveal the intrinsically nonlinear nature of the underlying neural network dynamics.
Results
Experiment and neural model
Participants completed a visual discrimination task between clockwise and counterclockwise oriented stimuli, followed, or not, by a task in which they were asked to assess the confidence in their decision. We used three kinds of blocks, comprising either sequences of pure decision trials (pure blocks), trials with feedback (feedback blocks) or trials with confidence judgments (confidence blocks). In feedback blocks, on each trial, participants received auditory feedback on the correctness of their choice. In confidence blocks, after each trial, they were asked to report their confidence on a discrete scale of ten levels, from 0 to 9. In feedback blocks, participants were not asked to report their confidence, and in confidence blocks they did not receive any feedback. We illustrate the experimental protocol in Fig. 1, panels a–d.
For the modelling of the neural correlates, we consider a decision-making recurrent neural network governed by local excitation and feedback inhibition, based on a biophysical model of spiking neurons^{18,29}. We work with a reduced version^{27} allowing for large-scale numerical simulations and analytical treatment. More precisely, we consider a model variant^{28} allowing the network to engage in a sequence of perceptual decisions, as explained below.
The model (see Fig. 1e) consists of two competing units, each one representing an excitatory neuronal pool, selective to one of the two available response options, here C (clockwise) or AC (anticlockwise). Each population receives a task-related input signaling the perceived evidence for each option. The difference between these inputs varies inversely with the difficulty of the task, and thus varies with the absolute value of the Gabor orientation. The decision, ‘C’ or ‘AC’, is made when one of the two units reaches a threshold z. Once a decision is made (the threshold is reached), a nonspecific inhibitory current (the corollary discharge) is injected into the two neural pools, causing a relaxation of the network activity towards a neutral low-activity state before the onset of the next stimulus. This allows the network to deal with consecutive sequences of trials, as illustrated in Fig. 1f. For a biologically relevant range of parameters, relaxation is not complete at the onset of the next stimulus, hence the decision made in this new trial will depend on the one made at the previous trial. In a previous work, we showed^{28} that the model accounts for the main sequential and post-error effects observed in perceptual decision-making experiments in humans and monkeys.
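The trial-then-relaxation scheme described above can be sketched in a few lines of code. This is a minimal rate-model caricature, not the calibrated biophysical model of refs. 27,28: the transfer function phi, the weights w_self and w_inh, the corollary-discharge current i_cd, the noise level and the threshold z are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    """Simple sigmoidal f-I curve (stand-in for the biophysical transfer function)."""
    return 1.0 / (1.0 + np.exp(-x))

def run_trial(evidence, r0=(0.1, 0.1), z=0.6, dt=1e-3, tau=0.02,
              w_self=2.0, w_inh=2.5, noise=0.3, t_max=2.0):
    """Simulate one decision: two rate units (C, AC) with self-excitation and
    cross-inhibition race until one crosses the threshold z.
    Returns (choice, decision_time, rates_at_decision)."""
    r = np.array(r0, float)
    inputs = np.array([evidence, -evidence])      # signed evidence for C vs AC
    for step in range(int(t_max / dt)):
        drive = w_self * r - w_inh * r[::-1] + inputs
        r += dt / tau * (-r + phi(drive)) + noise * np.sqrt(dt / tau) * rng.standard_normal(2)
        r = np.clip(r, 0.0, None)
        if r.max() >= z:                          # decision threshold reached
            return int(np.argmax(r)), (step + 1) * dt, r.copy()
    return int(np.argmax(r)), t_max, r.copy()     # no crossing: forced choice

def relax(r, i_cd=1.5, t_relax=0.5, dt=1e-3, tau=0.02, w_self=2.0, w_inh=2.5):
    """Corollary discharge: a nonspecific inhibitory current drives both pools
    towards a low-activity state; with a shorter t_relax the relaxation is
    incomplete, carrying a bias into the next trial."""
    for _ in range(int(t_relax / dt)):
        drive = w_self * r - w_inh * r[::-1] - i_cd
        r += dt / tau * (-r + phi(drive))
        r = np.clip(r, 0.0, None)
    return r
```

Chaining `run_trial` and `relax`, feeding the relaxed state back as the next trial's initial condition, reproduces the sequence structure of Fig. 1f.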
Full details about the experiment and the model can be found in the Methods Section.
Calibration of the model onto the behavioral results
In Fig. 2, we show for each participant response times and accuracies with respect to stimulus orientation (absolute value of the orientation angle). All subjects exhibit improved accuracies and shorter response times for less difficult (larger orientation) stimuli, as classically reported in the literature.
We fit the model to these behavioral data. As detailed in the Methods Section, we perform model calibration in order to reproduce both the mean response times and the accuracy (success rates). For each participant, this is done separately for the three types of blocks.
First, we note that the model correctly reproduces the behavioral results of the different participants, as can be seen in Fig. 2. Second, we compare the values of the parameters obtained for the pure and confidence blocks. We find that in confidence blocks participants have higher decision thresholds (Signed Rank test^{30}, p = 0.03, 6 participants), higher stimulus strength levels per angle (Signed Rank test, p = 0.031, 6 participants) and higher mean non-decision times (Signed Rank test, p = 0.03, 6 participants). Two of the authors of the present paper (J.-R. Martin, J. Sackur, personal communication, April 2018) have obtained analogous results when analyzing similar data within the drift-diffusion framework: non-decision time, drift rate and decision threshold are modified by the confidence context in the experimental setup.
Non-decision times
Our fitting procedure allows estimating the non-decision times. In Fig. 3, we represent the histogram of the response times across participants for the pure and confidence blocks. The red curve shows the distribution of non-decision times in the model, and the black curve the response times distribution. We note that, with a fit based only on the mean response times and accuracies, the model also accurately accounts for the distributions of response times. We find that the minimum value of the non-decision time is 75 ms for the pure blocks and 100 ms for the confidence blocks, and the average non-decision times are within the order of magnitude of saccadic latency^{31}. Finally, we observe that the non-decision times distributions clearly show a right skew for several participants, in agreement with previous studies^{32}. This justifies modelling the non-decision times with an exponentially modified Gaussian (ex-Gaussian) distribution, instead of simply adding a constant non-decision time to every decision time.
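The ex-Gaussian parameterization can be sketched as below. The parameter values (mu, sigma, tau, floor) are illustrative stand-ins, not the fitted per-participant values; the floor mimics the minimum non-decision time reported above.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_nondecision(mu=0.15, sigma=0.02, tau=0.05, size=10000, floor=0.075):
    """Ex-Gaussian non-decision times (in seconds): a Gaussian(mu, sigma)
    component plus an Exponential(tau) component, clipped from below at a
    physiological floor (e.g. ~75 ms in the pure blocks)."""
    samples = rng.normal(mu, sigma, size) + rng.exponential(tau, size)
    return np.clip(samples, floor, None)

ndt = sample_nondecision()
# The exponential component produces the right skew: the mean exceeds the median.
skewed_right = ndt.mean() > np.median(ndt)
```

A full response time is then a Gaussian-like decision time from the network plus one such ex-Gaussian sample, which is what gives the model distributions their long right tails.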
Confidence modeling
Recent studies have reported a choice-independent representation of the confidence signal in monkeys^{33} and in rats^{9}, as well as evidence for a close link between the decision variable and confidence – in monkeys from LIP recordings^{10} and in humans from fMRI experiments^{34}. In an experiment with monkeys, Kiani and Shadlen^{10} introduce a ‘sure target’ associated with a low reward, which can be chosen instead of the categorical targets. The probability of not choosing the sure target is then a proxy for the confidence level. Wei and Wang^{23} model the neural correlates of confidence within the framework of attractor neural networks. They assume that the confidence level (as given by the probability of not choosing the sure target) is a sigmoidal function of the difference, at the time of decision, between the activities of the winning and losing neural pools. This hypothesis is in line with similar hypotheses in the framework of DDMs and other decision-making models^{2,6}. They then show that the empirical dependencies of response times and accuracies on the confidence level are qualitatively reproduced in simulations of the neural model.
We make here the hypothesis that the confidence in a decision is based on the difference Δr between the neural activities of the winning and losing neural pools^{23}, measured at the time of the decision: the larger the difference, the greater the confidence. In our experiment, the measure of confidence is the one reported by the subjects on a discrete scale, and it is this reported confidence level that we want to model. Within our framework, we quantitatively link this empirical confidence to the neural difference Δr by matching the distribution of the neural evidence balance with the empirical histogram of the confidence levels. In Fig. 4, we show, for each participant, the matching between the histogram of confidence levels, as reported by the participant, and the distribution of Δr, as obtained in the model calibrated on the participant’s performance. We note that the main difference between the participants’ histograms lies in the fraction of trials (and level on the confidence scale) at which a participant reports the highest confidence level. This last point is highly participant-dependent, and can occur at a very low value of Δr (see e.g. Participant 2 in Fig. 4b).
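The histogram-matching step can be sketched as a quantile alignment: each discrete confidence level receives the band of Δr values whose probability mass equals the level's empirical frequency. The Δr samples and confidence counts below are hypothetical stand-ins for a calibrated network's output and a participant's reports.

```python
import numpy as np

def quantile_mapping(delta_r, conf_counts):
    """Map simulated Delta-r values to discrete confidence levels by matching
    cumulative frequencies: level k gets the Delta-r quantile band whose mass
    equals the empirical frequency of level k (nonparametric histogram match).
    Returns (levels, thresholds), where thresholds are the Delta-r values
    separating consecutive confidence levels."""
    freqs = np.asarray(conf_counts, float)
    freqs /= freqs.sum()
    cum = np.cumsum(freqs)[:-1]                 # interior cumulative frequencies
    thresholds = np.quantile(delta_r, cum)      # Delta-r cut points
    levels = np.searchsorted(thresholds, delta_r, side="right")
    return levels, thresholds

rng = np.random.default_rng(2)
delta_r = rng.gamma(2.0, 5.0, 5000)             # stand-in for simulated Delta-r
counts = [50, 80, 120, 200, 300, 400, 500, 600, 700, 250]  # hypothetical reports, levels 0-9
levels, thr = quantile_mapping(delta_r, counts)
```

By construction, the fraction of simulated trials assigned to each level reproduces the participant's empirical confidence histogram.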
In our analysis, the shape of the mapping is not chosen a priori but nonparametrically inferred from the experimental data. This is in contrast with previous studies in which the sigmoidal shape is imposed^{8,9,35}. In a related neural attractor model, Wei and Wang^{23} exhibit a link between Δr and a probabilistic measure of confidence, and show that it is well fitted by a sigmoid function. Here we find that, for each participant, the nonparametric mapping is also very well approximated by a sigmoidal function of the type 1/(1 + exp(−β(Δr − κ))), with participantspecific parameters κ and β. It is noticeable that both the link between Δr and a probabilistic measure of confidence observed in an attractor network model, and the mapping obtained here between Δr and the empirical confidence, can be approximated by a sigmoid function. This suggests that the human reported confidence can be understood as a discretization of a probabilistic function.
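Checking the sigmoidal approximation amounts to a standard nonlinear least-squares fit of the stated form 1/(1 + exp(−β(Δr − κ))). The (Δr, confidence) points below are synthetic, generated from known (β, κ) values chosen for illustration, to show that the fit recovers them.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(dr, beta, kappa):
    """Sigmoidal link between neural evidence balance and (normalized) confidence."""
    return 1.0 / (1.0 + np.exp(-beta * (dr - kappa)))

# Synthetic mapping points: true beta = 0.3, kappa = 15, plus small noise
dr = np.linspace(0.0, 40.0, 50)
conf = sigmoid(dr, beta=0.3, kappa=15.0) + 0.02 * np.random.default_rng(3).standard_normal(50)

(beta_hat, kappa_hat), _ = curve_fit(sigmoid, dr, conf, p0=[0.1, 10.0])
```

In practice one would feed in the nonparametric mapping obtained from the histogram match, one participant at a time, yielding the participant-specific (β, κ).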
Response times and accuracies vs. confidence
Studies have shown that confidence ratings are closely linked to response times^{36,37} and choice accuracy^{36,38,39}. The behavioral confidence in our model is assumed to be based on a simple neural quantity measured at the time of the decision. In what follows, we study whether this hypothesis on the neural correlates of confidence can account for the links between the behavioral data: response times, accuracy and confidence. In Fig. 5, we represent the response times (Fig. 5a) and choice accuracy (Fig. 5b) with respect to the reported confidence level for each participant. The data points show the experimental results (with the error bars as the bootstrapped 95% confidence interval), and the colored line the result of the simulation (with the light colored area the bootstrapped 95% confidence interval). Response times decrease^{36,37} and accuracies increase with confidence^{37,38,40,41}. We find a monotonic dependency between response times and confidence, and between accuracy and confidence, but with specific shapes for each participant. Note that some values of confidence are observed on only a few trials, resulting in large error bars, especially for accuracy since we take the mean of a binary variable. For the numerical simulations, the relatively large size of the confidence interval is due to the limited number of trials, since we restrict ourselves to the same protocol as the experimental one.
For comparison, we also fit to our experimental data another nonlinear model with mutual inhibition, the Usher-McClelland model^{42} (see Supplementary Information Section S1). This model fits the response times with respect to confidence, but only at intermediate levels of confidence. For some participants, we observe a strong divergence at high confidence (Participants 1, 4 and 5). Although accuracy is an increasing function of confidence (except for Participant 5), the experimental data do not fall within the bootstrapped confidence interval of the simulations (Supplementary Information Fig. S1). In contrast, we see that our more biophysical model correctly reproduces the psychometric and chronometric functions with respect to confidence for each participant, despite the important differences in response times between participants.
Previous studies found that, during a perceptual task, reported confidence increases with stimulus strength for correct trials, but decreases for error trials^{9,37,38}. This effect of confidence has been correlated to patterns of firing rates in experiments with rats^{9} and to the human feeling of confidence^{38}. We observe the same type of variations of confidence with respect to stimulus strength (Supplementary Information Fig. S2), both in the experimental results and in the model simulations. This effect is in accordance with a prediction of statistical confidence, defined as the Bayesian posterior probability that the decision-maker is correct^{38,43,44}. We thus see that the attractor network model reproduces a key feature of statistical confidence.
Impact of confidence on history biases
Statistical analysis of sequential effects
Perceptual decisions made by humans in behavioral experiments depend not only on the current sensory input, but also on the choices made at previous trials. Various sequential effects have been reported^{45,46,47}, and researchers have proposed different models to account for them^{28,48,49,50} – in a previous work on post-error effects^{28} we give a more general discussion of sequential effects. When the subject does not receive any feedback, confidence in his/her decision might be important for controlling future behaviors^{1,20}. Recently, the effects of confidence on history biases have been experimentally investigated^{51,52}. One main finding is that decisions with high confidence confer stronger biases upon the following trials. Here, we investigate the influence of confidence upon the next trial in the empirical data, and we show that the results are well reproduced by the behavior of the dynamical neural model.
First, we perform a statistical analysis of the effect of history biases on response times in the experimental data.
For this, for each participant, we classify each trial into low and high confidence: a trial is considered as low confidence (resp. high confidence) if the reported confidence is below (resp. above) the participant’s median. We analyze the history biases making use of linear mixed effects models (LMM)^{53}. The LMM we consider assumes that the logarithm of the response time at trial n, RT_{n}, is a linear combination of factors as follows:

log(RT_{n}) = a_{0,p} + a_{1,p} x_{repetition} + a_{2,p} θ + a_{3,p} RT_{n−1} + a_{4,p} Conf_{n−1} + ε_{n}
with x_{repetition} a binary variable taking the value 1 if the correct choice for the current trial is a repetition of the previous choice (and 0 otherwise), θ the orientation of the Gabor (in degrees), RT_{n−1} the response time of the previous trial, and Conf_{n−1} the confidence of the previous trial, coded as 0 for low and 1 for high. The subscript p in a coefficient (e.g. a_{0,p}) indicates that for this parameter we allow for a random slope per participant. We show the results in Table 1.
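A minimal version of this regression can be set up with the `mixedlm` interface of statsmodels. The data frame below is entirely synthetic (effect sizes chosen by hand to mimic the qualitative findings), and, for simplicity of illustration, only the previous-confidence predictor is given a participant-level random slope.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 1200

# Synthetic stand-in data: 6 participants, orientation in degrees,
# previous-trial log-RT and binarized previous-trial confidence.
df = pd.DataFrame({
    "participant": rng.integers(0, 6, n).astype(str),
    "repetition": rng.integers(0, 2, n),
    "theta": rng.choice([1.0, 2.0, 4.0, 8.0], n),
    "log_rt_prev": rng.normal(-0.5, 0.3, n),
    "conf_prev": rng.integers(0, 2, n),
})
# Generative model: faster RTs for larger orientations and after high confidence,
# plus autocorrelation with the previous response time.
df["log_rt"] = (-0.3 - 0.05 * df["theta"] - 0.04 * df["conf_prev"]
                + 0.3 * df["log_rt_prev"] + 0.1 * rng.standard_normal(n))

model = smf.mixedlm("log_rt ~ repetition + theta + log_rt_prev + conf_prev",
                    df, groups=df["participant"],
                    re_formula="~conf_prev")   # random slope per participant
result = model.fit(method="lbfgs")
```

With data of this form, the fitted fixed effects recover the negative slopes for orientation and previous confidence, and the positive slope for the previous response time.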
We find that larger orientations lead to faster response times, and we observe repetition biases on response times^{48}. In line with previous works^{37}, high confidence speeds up the following trial. Finally, we find that the previous response time has an effect on the subsequent one, meaning that participants tend to produce sequences of fast (or slow) response times.
Next, following a numerical protocol replicating the experimental one, for each participant we make numerical simulations with the model specifically calibrated on the participant’s data. Recall that the fit has been done on mean response times and accuracy, hence without taking into account serial dependencies. We then look at correlations between decisions made in successive trials by the neural attractor network, performing the same type of statistical analysis as done on the experimental data (see Methods, Table 1).
We note that the attractor neural network captures the variation of response times with respect to angle orientation^{27}. We find that the dependency on choice history (through the repetition of responses), as observed in the experimental data, is correctly reproduced by the model, in agreement with a previous study of these effects^{28}. Quite remarkably, we observe an effect of confidence on response times in the network, with the same direction of variation (negative slopes) as in the experiment.
Analysis of the underlying neural dynamics
To understand how the neural dynamics leads to these confidencespecific sequential effects, we make an analysis of the dynamics similar to the one done in Berlemont and Nadal^{28} for the analysis of posterror effects in the same neural model. We illustrate this analysis in Fig. 6. On each panel, we compare the mean neural dynamics for postlow and posthigh confidence trials (respectively red and blue lines). Without loss of generality, we assume that the previous decision was a C grating. We first note that the relaxation dynamics between two consecutive trials are different, resulting in different starting points for the next trial, from postlow and posthigh confidence trials. Panel (A) corresponds to the case where the new stimulus is also C oriented (“repeated” case), at low strength level. The ending points of the relaxations fall into the correct basin of attraction. Because the posthigh confidence relaxation lies deeper into the basin of attraction than the one of postlow trials, the subsequent dynamics will be faster for posthigh confidence trials in this case. In panel (B) we represent the case, still at low stimulus strength, where the stimulus orientation of the new stimulus is the opposite (“alternated” case) to the one corresponding to the previous decision (hence an AC grating). Both dynamics lie close to the basin boundary of the two attractors, thus the dynamics are slow and there is no significant difference between postlow and posthigh confidence trials. In panels (C) and (D) we represent the same situations as panels (A) and (B), respectively, but for high strength levels (easy trials). The ending points of the relaxations are far from the boundary of the basins of attraction, whatever the grating presented. The response times for posthigh and postlow confidence trials are thus similar. This analysis shows that the nonlinearity of the network dynamics is responsible for the considered sequential effect. 
More precisely, it is the very existence of basin boundaries, and the fact that the network state is more or less close to the basin boundary at the onset of a new stimulus, which lead to the sequential effects.
We now qualitatively confront the outcomes of the above analysis with the experimental data. To do so, we group the response times according to the same cases as previously: high and low stimulus strength, repeated or alternated trials. We compare post-high and post-low confidence trials in each case, using a t-test^{54}. We find that mean response times differ between post-low and post-high confidence trials in the low orientation, repeated case (t-test, p = 0.044, df = 1322.6), but are statistically indistinguishable in the three other cases (high orientation repeated or alternated, and low orientation alternated; p = 0.90, df = 778.7; p = 0.70, df = 610; p = 0.23, df = 617.4). This is in accordance with the outcomes of the above analysis based on the nonlinear dynamics.
The model reproduces sequential effects correlated with repetition and confidence, and we have shown that these effects result from the intrinsic nonlinear network dynamics. However, the model does not induce correlations between the response times of two successive trials (for more details see Supplementary Information S3). This suggests that these correlations observed in the experimental data cannot be explained by the intrinsic dynamics of the attractor network, but may come from higher-order processing. Within a DDM approach, one may account for such an effect, but only with a change of parameters from trial to trial (e.g. by changing the decision threshold depending on the previous reaction time). However, such ad hoc changes of parameter values are not supported by models or experimental data that would provide clues about the neurodynamical mechanisms underlying these changes.
Comparison with diffusion models
Next we investigate whether linear decision-making models of the DDM family are able to reproduce the sequential effects. More precisely, we consider the independent race model (IRM^{15,16,17}). During the accumulation of evidence, the activities x_{i}, i = {C, AC}, evolve as:

dx_{i}/dt = μ_{i} + v_{i}(t),

where μ_{i} is the stimulus-dependent drift and v_{i}(t), i = {C, AC}, are white noise processes. The first race that reaches a threshold z (or −z) is the winning race. The confidence in the decision is modelled as a monotonic function of the balance of evidence z − x_{losing}^{2,6,55}.
We extend the IRM in order to deal with sequences of trials. To do so, we allow for a relaxation dynamics between trials, in a way analogous to the relaxation dynamics in the attractor network model. Hence, after a decision is made, both units receive a nonspecific inhibitory input leading to a relaxation until the next stimulus is presented (see Fig. 7). Within this extended IRM framework, we study how, with a fixed set of parameter values, the sequential effects would be correlated with confidence.
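The extended IRM can be sketched as below. The drift values, noise level and relaxation rate are illustrative assumptions; confidence is read out as the balance of evidence z − x_losing at decision time, and the relaxation acts on each accumulator independently, since the races do not interact.

```python
import numpy as np

rng = np.random.default_rng(5)

def irm_trial(drifts, x0=(0.0, 0.0), z=1.0, dt=1e-3, sigma=0.5, t_max=3.0):
    """One trial of a two-unit independent race model: each accumulator
    integrates its own drift plus independent white noise; the first to
    reach the threshold z wins. Returns (winner, rt, x_at_decision)."""
    x = np.array(x0, float)
    for step in range(int(t_max / dt)):
        x += np.array(drifts) * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
        if x.max() >= z:
            return int(np.argmax(x)), (step + 1) * dt, x.copy()
    return int(np.argmax(x)), t_max, x.copy()

def irm_relax(x, decay=5.0, t_relax=0.5, dt=1e-3):
    """Inter-trial relaxation: both accumulators decay towards baseline under
    a nonspecific inhibitory input. Because the races do not interact, the
    losing race's end point (hence the next trial's starting bias) is set by
    the balance of evidence z - x_losing at the previous decision."""
    for _ in range(int(t_relax / dt)):
        x += -decay * x * dt
    return x
```

Running sequences with these two functions, one can check the sign of the predicted post-confidence effect discussed next.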
Since in the IRM there is no interaction between the two races, the relaxation of the winning race is the same in both low and high confidence trials. However, the ending point of the relaxation following a decision is closer to the baseline (0 line) for a high confidence trial than for a low confidence trial (Fig. 7). For the next trial, if the winning race is the same as previously, then the mean response times are identical in the low and high confidence cases. However, if the opposite decision is made, the response time in the post-low confidence case is faster than the one in the post-high confidence case, as we can observe from the mean race shown in Fig. 7. This behavior is in contradiction with the experimental data, for which we observe the opposite effect (see Supplementary Information Table S3). This conclusion applies more generally to any race-type model without interactions between units. It applies as well to IRMs with a nonlinearity in the form of collapsing bounds (see Supplementary Information S6). Other researchers have studied different kinds of sequential effects, such as post-error slowing, with collapsing-bounds models^{56}. However, these models do not have a relaxation dynamics between trials, and the model parameters are allowed to change between trials. Here we show that the confidence-specific sequential effects result from a type of nonlinearity in the dynamics which impacts the interaction between neural units.
Authors have proposed a set of nonlinear diffusion equations to approximate the process of decision-making in attractor neural networks^{57}. These equations result from a description of the dynamics in the vicinity of the bifurcation point, the particular state at which any small input will induce the appearance of the two attractors. This framework is not well adapted to modelling sequences of decisions, since one would need to reset the system to the vicinity of the bifurcation point before the onset of a new trial. Even if this were done, the system would not account for the experimentally observed sequential effects, for the very same reasons as those presented above for linear diffusion models. As shown by our analysis of the attractor network dynamics, the key nonlinearity underpinning the sequential effects is the one resulting from the existence of boundaries between basins of attraction.
Discussion
Dynamical models of decision making implement in different ways the same qualitative idea: a decision between two categories is based on a competition between units collecting evidence in favor of one or the other category (or on a single unit whose activity represents the difference between the categorical evidences). Most authors propose that behavioral confidence can be modeled as a function of the balance of evidence^{4,6,9,22,58}. Very few works propose other mechanisms^{25,59,60}. Among these exceptions, the consensus model^{25} assumes that several attractor neural networks run in parallel, and a decision is reached when there is a consensus (more than half of the networks have chosen the same alternative). Within this framework, confidence is defined as the fraction of networks that have chosen the winning decision. This model can account for the relation between confidence and reaction times^{25}. However, it is not clear how it could be modified in order to perform sequences of decisions. In particular, since the different networks reach the decision threshold at different times, it is not obvious how to decide when relaxation should start for each network, and it is not clear what kind of sequential effects should be expected for any given relaxation rule.
Considering the large majority of models based on the DDM framework, in view of our findings we now contrast the DDM and attractor neural network approaches. We do so on three aspects: the modelling of confidence, the analysis of sequential effects, and the issue of nondecision times.
Bayesian inference models compute confidence using drift-diffusion model (DDM) extensions based on the decision variable balance^{4,6,9,58}, possibly with additional mechanisms: the decision variable balance combined with response times^{61}, or post-decisional deliberation^{22} (the dynamics continues after the decision, thus updating the balance of evidence). Similar studies have been made with independent race models (IRM)^{15,16,17}. These DDM or IRM models successfully account for various psychometric and chronometric specificities of human confidence. In DDMs, confidence based on the decision variable balance predicts that confidence should deterministically decrease as a function of response time^{10}. However, the response times distributions strongly overlap across confidence levels^{62}. This property can be recovered by making use of additional processes, such as a two-stage drift-diffusion model^{22}. Yet, other effects remain unexplained within the framework of DDMs. This is the case for the early influence of sensory evidence on confidence^{4}, as well as for the fact that confidence is mainly influenced by evidence in favor of the selected choice^{4}.
Within the framework of attractor neural networks, early sensory evidence influences decision accuracy and reaction time^{63}. The model discussed in the present paper is appropriate for going beyond this and studying the effect of early sensory evidence on confidence. Given our findings, we expect that the model will reproduce the above-mentioned effects not well accounted for by DDMs.
As discussed in the Results section, various serial dependence effects are observed in perceptual decision-making. A recent finding is that the magnitude of history biases increases when previous trials were faster and correct. Given the known correlations between confidence, response time and accuracy, this effect has been interpreted as an impact of confidence on the next decision^{51}. By directly measuring the subjective confidence of the participants, recent studies confirm that history biases are correlated with confidence^{37,52,64}. In our experiment, we observe that high-confidence trials lead to faster subsequent choices, in agreement with the above-mentioned experimental studies. On the theoretical side, this impact of confidence on the response times of subsequent trials has been investigated within the framework of DDMs^{37}. The usual analysis consists in dividing trials into two categories, those following low- and those following high-confidence trials, and then fitting a DDM separately to each type of trial. The main finding is that the parameters (threshold and drift) differ depending on the confidence level at the previous trial. In the absence of such parameter changes, even with the addition of a relaxation between trials as discussed in the Results section, the DDM and IRM models cannot account for the observed sequential effects: as we have seen, the predicted sequential effect would be the opposite of the observed one. In contrast, as discussed in the Results section, with a unique choice of parameter values for each participant, the attractor network model not only accounts for the relationship between confidence, response times and accuracy, but also reproduces the influence of confidence on serial dependence.
Finally, the question of non-decision times arises from our modeling work. Human studies commonly report right-skewed response-time distributions^{31,65}. Such long right tails are well captured by drift-diffusion models^{42,65}, and this is generally considered strong evidence in favor of an evidence-accumulation mechanism. However, with trained subjects the right skew is less pronounced, and a Gaussian distribution fits the response-time distribution well^{66}. In contrast to human studies, experiments in monkeys do not show such long right tails in the response-time histograms^{67}. When assuming a constant value for the non-decision time, attractor neural network models do not produce right-skewed distributions, but accurately reproduce the shape of the distributions in monkey experiments^{68}. In accordance with these results, we have shown in this work that, for the range of parameters we considered, the decision-time distribution generated by the neural network can be approximated by a Gaussian distribution. Within the neural attractor framework, the experimentally observed long right tails can thus be understood as originating only from the non-decision times. Here we have proposed an estimation of the distribution of these non-decision times that allows us to fit the empirical response-time distributions. One should note that, even when analyzing experimental data within the DDM framework, the estimated non-decision times are not necessarily given by a constant value, but may show a distribution with a strong right skew^{32}. These findings, combined with ours, suggest that the question of the origin of the long right tails in human response times has to be reconsidered.
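This decomposition can be illustrated with a short numerical sketch: a symmetric (Gaussian) decision time plus a right-skewed (ex-Gaussian) non-decision time yields a response-time distribution with the long right tail typical of human data. The parameter values below are illustrative assumptions, not the fitted values of this study.

```python
import random
import statistics

random.seed(0)

def ex_gaussian(mu, sigma, tau):
    """Sample an ex-Gaussian: Gaussian(mu, sigma) plus Exponential(mean tau)."""
    return random.gauss(mu, sigma) + random.expovariate(1.0 / tau)

# Illustrative values (seconds): Gaussian decision times, as produced by the
# attractor network, and right-skewed non-decision times.
decision_times = [random.gauss(0.45, 0.08) for _ in range(20000)]
nondecision_times = [ex_gaussian(0.30, 0.03, 0.12) for _ in range(20000)]
response_times = [d + n for d, n in zip(decision_times, nondecision_times)]

def skewness(xs):
    """Sample skewness (third standardized moment)."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean([((x - m) / s) ** 3 for x in xs])

# Decision times alone are symmetric (skewness near 0); adding the skewed
# non-decision component produces a clearly right-skewed RT distribution.
skew_dt = skewness(decision_times)
skew_rt = skewness(response_times)
```

With these values the summed response times have a skewness around 1 while the decision times alone are symmetric, reproducing the qualitative point made above.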
To conclude, in this work we designed an experiment in order to study confidence with human participants. We fitted a neural attractor network model specifically to each participant in order to describe their behavioral results in continuous sequences of perceptual decisions: response times, accuracy and confidence. Quite remarkably, we found that the impact of confidence on sequential effects is well described by the nonlinear nature of the attractor dynamics.
Methods
Experiment
Participants
Nine participants (7 females, mean age = 27.3, SD = 5.14) were recruited from the Laboratoire de Psychologie cognitive et de Psycholinguistique's database (LSCP, DEC, ENS-EHESS-CNRS, PSL, Paris, France). Every subject had normal or corrected-to-normal vision. The participants performed three sessions on three distinct days of the same week, for a total duration of about 2 h 15 min. Three participants were excluded: two did not correctly complete the experiment, and one exhibited substantially asymmetric performance (98% correct responses at an orientation of 0.2°, but only 18% at −0.2°). As a result, we analyzed data from 6 participants.
Ethics statement
The experiment followed the ethics requirements of the Declaration of Helsinki (2008) and was approved by the local ethics committee (Comité de Protection des Personnes, Île-de-France VI, Paris, France). We obtained written informed consent from every participant, who received a compensation of 15 euros for their participation.
Stimuli and tasks
The stimuli were generated using Matlab along with the Psychophysics Toolbox^{69}. They were displayed on a monitor at 57.3 cm from the participants' head. The participants performed the experiment in a quiet and darkened experimental room, their head stabilized by a chinrest. Trials began with the presentation of a black fixation point (duration = 200 ms). Then the stimulus for the primary decision task was presented, consisting of a circular grating (diameter = 4°, Tukey window, 2 cycles per degree, Michelson contrast = 89%, duration = 100 ms, phase randomly selected on each trial). The grating had eight possible orientations with respect to the vertical meridian, and participants were asked to categorize it as clockwise or anticlockwise with respect to the vertical meridian by pressing the right-arrow or left-arrow key. Participants were instructed as follows: "You have to respond quickly, but not at the expense of precision. After 1.5 s the message 'Please answer' will appear on the screen. It would be ideal if you answered before this message appears."
Trials were of three types, grouped into pure blocks, feedback blocks and confidence blocks (see below). Participants performed three sessions on three distinct days. Each session (45 min) consisted of three runs, each run being composed of one exemplar of each of the three types of block, in random order. Before starting the experiment, participants performed a short training block of each type, with easier orientations than in the main experiment.
Pure block
In this block, participants waited 300 ms after each decision before the black fixation point appeared. The stimulus appeared 200 ms after this fixation point. The eight possible orientations of the circular grating were [−1.6, −0.8, −0.5, −0.2, 0.2, 0.5, 0.8, 1.6] degrees, and a stimulus was chosen randomly among them with the following weights: [0.05, 0.1, 0.15, 0.2, 0.2, 0.15, 0.1, 0.05].
Feedback block
In this block, 200 ms after the decision, the participants received an auditory feedback (lasting 200 ms) about the correctness of the decision they had just made. The black fixation dot appeared 100 ms after this feedback and a new trial began. The orientations of the circular gratings were chosen randomly from [−1.6, −0.8, −0.2, 0.2, 0.8, 1.6] degrees with the following weights: [0.12, 0.18, 0.2, 0.2, 0.18, 0.12].
Confidence block
In the confidence block, participants had to evaluate their confidence in the orientation decision, starting 200 ms after the decision. To perform this task they moved a slider along a 10-point scale, from pure guessing to certainty of being correct: participants were told that one extreme of the scale means "pure guess" and the other "absolutely certain". Importantly, the initial position of the slider was chosen randomly on each trial. Participants moved the slider to the left by pressing the "q" key, and to the right with the "e" key. They confirmed the chosen confidence value by pressing the space bar. Participants could also indicate that they had made a "motor mistake" during the orientation task, by pressing a key with a red sticker instead of responding on the confidence scale. After the choice of confidence, the participants waited 300 ms before the black fixation dot appeared; the stimulus appeared 200 ms after the fixation dot. The orientations of the circular gratings were the same as in the feedback block.
Accuracy is higher and response times are longer in confidence blocks than in pure blocks. To test the effect on accuracy we ran a binomial regression of responses with fixed factors of orientation and block type (pure or confidence), the interaction between these factors, and a random participant intercept. The orientation coefficient was 2.15 (SD = 0.17, z = 12.44, p < 10^{−16}); there was no main effect of block type (p = 0.385), but we found a significant orientation by block type interaction (value 0.55, SD = 0.08, z = 6.97, p = 3 · 10^{−12}), indicating that participants were more accurate in confidence blocks than in pure blocks. Similarly, we tested the effect on response times with a mixed-effects regression with the same factors and intercept (using the absolute value of the orientation). We found that the orientation coefficient (value −0.08, SD = 0.013, p = 0.0006) and the block type coefficient (value 0.095, SD = 0.028, p = 0.011) were significant, meaning that participants are slower in confidence blocks. Moreover, the block type by orientation interaction was also significant (value −0.028, SD = 0.010, p = 0.031), meaning that the difference between the two types of blocks is larger at small orientations. Surprisingly, we found no statistically significant difference in performance or response times between the feedback and pure blocks, presumably because the participants were highly trained in the orientation discrimination task.
Statistical analyses
We used RStudio with the package lme4^{70} to perform a linear mixed-effects analysis^{53} of the history biases of the reaction times, on both the experimental data and the numerical simulations.
To perform the comparison between the experimental data and the model results illustrated in Fig. 6, we first transformed the response times of each participant into z-scores^{71}. This normalization allows us to analyze all participants together.
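The per-participant normalization amounts to subtracting each participant's mean response time and dividing by their standard deviation. A minimal sketch (the response times below are hypothetical):

```python
import statistics

def zscore(xs):
    """Per-participant normalization: subtract the mean, divide by the SD."""
    m = statistics.fmean(xs)
    s = statistics.stdev(xs)
    return [(x - m) / s for x in xs]

# Hypothetical response times (in seconds) for one participant.
rts = [0.61, 0.74, 0.68, 0.82, 0.71, 0.66]
z = zscore(rts)
# The transformed values have mean 0 and unit standard deviation,
# making participants directly comparable.
```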
We compared the LMM described in the main text, Eq. (1), to reduced models that do not include all the terms, using the ANOVA function of the lme4 package^{70}, which performs model comparison based on the Akaike and Bayesian information criteria (AIC and BIC)^{70}. We find that the LMM of Eq. (1) is preferable in all cases (see Supplementary Information Table S4).
Attractor neural network model
Neural dynamics
We consider a decision-making recurrent network model governed by local excitation and feedback inhibition^{27,28}. Within a mean-field approach, Wong and Wang^{27} derived a reduced firing-rate model from a detailed biophysical model of spiking neurons^{29}. This reduced model is composed of two interacting neural pools and faithfully reproduces not only the behavioral performance of the full model, but also the dynamics of the neural firing rates and of the output synaptic gating variables. The model variant^{28} that we consider here takes into account a corollary discharge^{72,73}: a nonspecific inhibitory current is injected into the neural pools just after a decision is made, making the neural activities relax towards a low-activity, neutral state, thereby allowing the network to deal with consecutive sequences of decision-making trials^{28}. For completeness, we recall here the equations and parameters, with notation adapted to the present study.
The model consists of two competing units, each one representing an excitatory neuronal pool, selective to one of the two categories, C or AC. The dynamics is described by a set of coupled equations for the synaptic activities S_{C} and S_{AC} of the two units C and AC:
The synaptic drive S_{i} for pool i ∈ {C, AC} corresponds to the fraction of activated NMDA conductance, and I_{i,tot} is the total synaptic input current to unit i. The function f is the effective single-cell input-output relation^{74}, giving the firing rate as a function of the input current:
where a, b, c are parameters whose values are obtained through numerical fit. The total synaptic input currents, taking into account the inhibition between populations, the self-excitation, the background current and the stimulus-selective current, can be written as:
with J_{i,j} the synaptic couplings. The minus signs in the equations make explicit the fact that the inter-unit connections are inhibitory (the synaptic parameters J_{i,j} are thus positive or null). The term I_{stim,i} is the stimulus-selective external input, whose form is:
with i = C, AC. The sign, ±, is positive when the stimulus favors population C, and negative otherwise. Here the parameter J_{ext} combines a synaptic coupling variable and the global strength of the signal (which are parametrized separately in the original model^{27,28}). The quantity c_{θ}, between 0 and 1, characterizes the stimulus strength in favor of the actual category; here it is an increasing function of the absolute value of the stimulus orientation angle, θ.
In addition to the stimulus-selective part, each unit individually receives an extra noisy input, fluctuating around the mean effective external input J_{0}:
with τ_{noise} a synaptic time constant which filters the white noise.
On presentation of a stimulus, the system evolves toward one of the two attractor states, corresponding to the two possible decisions. We consider that the decision is made when the firing rate of one of the two units first crosses a threshold z.
After each decision, a corollary discharge in the form of an inhibitory input is sent to both units until the next stimulus is presented:
This inhibitory input, delivered between the time of decision and the presentation of the next stimulus, allows the network to escape from the current attractor and thus engage in a new decision task^{28}.
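The within-trial dynamics described above can be sketched with a simple Euler scheme. The sketch below uses the standard parameter values of the Wong-Wang reduced model and an assumed decision threshold of 20 Hz, not the per-participant values fitted in this study; the noise discretization is one common choice, and the paper's own simulations were written in Julia.

```python
import math
import random

random.seed(7)

# Standard parameter values of the reduced Wong-Wang model (Wong & Wang, 2006);
# the values fitted per participant in this study differ.
a, b, d = 270.0, 108.0, 0.154      # input-output function f (Hz/nA, Hz, s)
gamma, tau_s = 0.641, 0.1          # NMDA gating kinetics (dimensionless, s)
j_self, j_cross = 0.2609, 0.0497   # self-excitation and cross-inhibition (nA)
j_ext_mu = 0.0156                  # stimulus-current scale (nA)
i0 = 0.3255                        # mean background input (nA)
sigma_noise, tau_noise = 0.02, 0.002  # noise amplitude (nA) and filter (s)
z = 20.0                           # assumed decision threshold (Hz)
dt = 0.0005                        # Euler time step (s)

def f(i_tot):
    """Effective single-cell input-output relation: firing rate vs current."""
    x = a * i_tot - b
    if abs(x) < 1e-9:
        return 1.0 / d             # limit of x / (1 - exp(-d x)) at x = 0
    return x / (1.0 - math.exp(-d * x))

def run_trial(c_theta, max_t=3.0):
    """Simulate one trial; the stimulus favors unit C (index 0).
    Returns (winning unit index, decision time) or (None, None)."""
    s = [0.1, 0.1]                 # synaptic gating variables S_C, S_AC
    i_noise = [0.0, 0.0]           # Ornstein-Uhlenbeck noise around the mean
    i_stim = [j_ext_mu * (1.0 + c_theta), j_ext_mu * (1.0 - c_theta)]
    t = 0.0
    while t < max_t:
        rates = []
        for i in (0, 1):
            i_tot = (j_self * s[i] - j_cross * s[1 - i]
                     + i0 + i_noise[i] + i_stim[i])
            r = f(i_tot)
            rates.append(r)
            s[i] += dt * (-s[i] / tau_s + (1.0 - s[i]) * gamma * r)
            i_noise[i] += (dt / tau_noise) * (-i_noise[i]) \
                + sigma_noise * math.sqrt(2.0 * dt / tau_noise) * random.gauss(0, 1)
        t += dt
        if max(rates) > z:         # first threshold crossing = decision
            return rates.index(max(rates)), t
    return None, None

# With a strong stimulus, unit C should win on most trials.
results = [run_trial(0.5) for _ in range(20)]
decided = [r for r in results if r[0] is not None]
wins_c = sum(1 for w, _ in decided if w == 0)
```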
Confidence modeling
Within the various decision-making modeling frameworks, similar proposals have been made to model the neural correlate of the behavioral confidence level. In race models^{15}, which, like attractor network models, have as many accumulation variables as stimulus categories, the balance of evidence at the time of the perceptual decision has been used to model the neural correlate of behavioral confidence^{23,41,75}. This balance of evidence is given by the absolute difference between the activities of the category-specific units at the time of decision. Here, we consider that confidence is obtained as a function f of the difference in neural pool activities^{23}, Δr = r_{C} − r_{AC}.
In our experiment, the subjects expressed their confidence level as a number on a scale from 0 to 9. In order to match the neural balance of evidence with the confidence reported by the subject, we map the balance-of-evidence histogram onto the behavioral confidence histogram, a procedure called histogram matching^{76}. Note that the mapping here is from a continuous variable to a discrete one (taking integer values from 0 to 9).
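Histogram matching from a continuous variable to discrete levels can be implemented by cutting the balance-of-evidence distribution at the cumulative proportions of the reported-confidence histogram, so that each level receives the same fraction of trials as in the behavioral data. A minimal sketch with hypothetical data (not the paper's values):

```python
import bisect
import random

random.seed(1)

def histogram_matching(balance_samples, conf_counts):
    """Return a map from a continuous balance-of-evidence value to a discrete
    confidence level whose histogram matches conf_counts."""
    total = sum(conf_counts)
    sorted_b = sorted(balance_samples)
    cuts, cum = [], 0
    for c in conf_counts[:-1]:          # one cut below each level but the last
        cum += c
        # quantile of the balance distribution at the cumulative proportion
        idx = min(cum * len(sorted_b) // total, len(sorted_b) - 1)
        cuts.append(sorted_b[idx])
    return lambda x: bisect.bisect_right(cuts, x)

# Hypothetical data: simulated balance of evidence at decision time, and a
# reported-confidence histogram (counts for levels 0 to 9).
balance = [abs(random.gauss(8.0, 5.0)) for _ in range(5000)]
reported = [50, 100, 200, 400, 700, 900, 1000, 800, 550, 300]
to_confidence = histogram_matching(balance, reported)

mapped = [to_confidence(x) for x in balance]
# The histogram of `mapped` now matches the reported-confidence histogram.
```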
Fitting procedure
For each participant we calibrate the model by fitting both the mean response times and the accuracies for each orientation, separately for each block. Note that we only fit the means, which in particular implies that the fits do not take the serial dependencies into account. Any sequential effect that arises in the model thus results from the intrinsic dynamics of the network, and not from a fitting procedure targeting these effects.
For most model parameters we take the values used in a previous study^{28}, as reproduced in Table 2. For the model calibration we consider I_{CD,max}, τ_{CD}, c_{θ} and z as free parameters. We impose the two parameters I_{CD,max} and τ_{CD} to be common to all participants (joint optimization). We optimize the parameters c_{θ} (one for each orientation value) and the decision threshold z across subjects and blocks.
The rationale for this choice of free parameters is as follows. To avoid overfitting, one has to restrict the number of free parameters as much as possible. We rely on the model calibrations done in previous works^{18,27,28}, which suggest keeping, as far as possible, the parameter values resulting from the initial work of Wong and Wang. In particular, the original parameter values were chosen so as to reproduce empirical data with the mean-field model. Since the empirical data here are only behavioral, it is difficult to calibrate the synaptic weights: a significant change of these parameters would be required to change the behavioral outcomes. Importantly, we tried to restrict the calibration to a small set of reasonably independent parameters. For instance, a change in the weight values may be compensated by a change in the decision threshold (so that the cost function may be flat over a large domain of parameter space). With the weights fixed, we can optimize the fit with respect to the decision threshold in a safer way. An important quantity is the signal-to-noise ratio: by keeping the internal noise constant during the fitting procedure, we explore the whole range of this ratio. We also note that our choice of free parameters parallels the one made in the DDM framework (drift, threshold and noise level), which facilitates the comparison with the DDM approach. Finally, imposing some of the free parameters to be common to all participants allows us to further reduce the number of free parameters, at the price of a more complex optimization (a partially joint calibration of all the participant-specific networks).
The observed response time is the sum of a decision time and a non-decision time. Assuming no correlation between these two times, the mean non-decision time is independent of the orientation. To compare the data with the model simulations (which only give a decision time) at any given orientation θ, we first subtract from the mean response time the mean response time averaged over all orientations (for both data and simulations). We calibrate the model parameters so as to fit these centered mean response times. This provides a fit of the mean response times (at each angle) up to a global constant, the mean non-decision time (the modeling of the non-decision time distribution is presented in the next section).
For each participant, and each block, we thus consider the cost function:
where the sums are over the orientation values, θ = {0.2, 0.5, 0.8, 1.6}, the brackets 〈…〉 denote averages (as detailed below), and the normalization factors n (for the response times) and m (for the accuracy) are given by
In these expressions, 〈RT〉_{data}(θ) denotes the mean experimental response time obtained by averaging over all trials at orientations ±θ, and 〈RT〉_{data} is the average over all orientations; 〈RT〉_{network}(θ) and 〈RT〉_{network} are the corresponding averages obtained from the model simulations. The coefficient λ denotes the relative weight given to the response-time and accuracy cost terms. We present the results obtained when taking λ = 2, but the choice of this parameter does not drastically impact the fitted parameters.
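The structure of this cost function can be sketched as follows. The normalization factors used here (sums of squares of the data terms) are one plausible choice and stand in for the paper's exact definitions of n and m; the response times and accuracies are hypothetical values.

```python
def fit_cost(rt_data, rt_net, acc_data, acc_net, lam=2.0):
    """Cost comparing centered mean response times and accuracies across
    orientations. The normalization factors n and m below are one plausible
    choice; the paper defines its own."""
    thetas = sorted(rt_data)
    rt_d_mean = sum(rt_data.values()) / len(rt_data)
    rt_n_mean = sum(rt_net.values()) / len(rt_net)
    # Centering removes the unknown global constant (mean non-decision time).
    rt_term = sum(((rt_data[t] - rt_d_mean) - (rt_net[t] - rt_n_mean)) ** 2
                  for t in thetas)
    acc_term = sum((acc_data[t] - acc_net[t]) ** 2 for t in thetas)
    n = sum((rt_data[t] - rt_d_mean) ** 2 for t in thetas)
    m = sum(acc_data[t] ** 2 for t in thetas)
    return rt_term / n + lam * acc_term / m

# Hypothetical data. A model whose mean RTs differ from the data only by a
# constant offset (the mean non-decision time) and whose accuracies match
# gives zero cost, illustrating the centering step.
rt_data = {0.2: 0.82, 0.5: 0.74, 0.8: 0.68, 1.6: 0.61}
acc_data = {0.2: 0.64, 0.5: 0.78, 0.8: 0.89, 1.6: 0.98}
rt_net = {t: v - 0.33 for t, v in rt_data.items()}
cost = fit_cost(rt_data, rt_net, acc_data, dict(acc_data))
```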
For each subject, we minimize this cost function with respect to the choice of c_{θ} and z, making use of a Markov chain Monte Carlo fitting procedure coupled to a subplex procedure^{77}. This method is particularly well suited to simulation-based models with stochastic dynamics. Finally, I_{CD,max} and τ_{CD} are fitted using a grid-search algorithm, as they have less influence on the cost function. In the model, the parameter c_{θ} characterizes the stimulus strength, which we expect here to be a monotonic function of the amplitude of the angle, θ. When allowed to take independent values for each orientation, θ = {0.2, 0.5, 0.8, 1.6}, we find that the c_{θ} values can be approximated by a linear or quadratic function of θ, depending on the participant. We performed an AIC test^{78} between the linear and quadratic fits in order to choose which function to use for each participant. These approximations reduce the number of free parameters.
In order to obtain confidence intervals for the different parameters, we used likelihood-based confidence-interval estimation for Markov chain Monte Carlo methods. The confidence interval on the parameters is the 70% confidence interval, assuming a Gaussian distribution of the cost function. This provides an approximation of the reliability of the parameter values found. To assess the reliability of this method, we checked that the threshold z and the stimulus-strength parameters c_{θ} have almost uncorrelated influences on the cost function.
The results of the calibration procedure are summarized in Supplementary Information Tables S5 and S6, with I_{CD,max} = 0.033 nA and τ_{CD} = 150 ms.
Estimating the nondecision time
The above fitting procedure calibrates the mean response times up to a global constant, corresponding to the mean non-decision time. As explained in the main text, we can go further and actually model the non-decision time distribution.
The non-decision time is considered to be due to encoding and motor execution^{31}. Most model-based analyses of response-time distributions assume a constant non-decision time^{27,42,65}. However, fitting data originating from a skewed distribution under the assumption of a non-skewed non-decision time distribution biases the parameter estimates if the model of the non-decision time is not correct^{79}. Recently, a mathematical method has been proposed to fit a non-parametric non-decision time distribution^{32}. Analyzing various experimental data with this method within the framework of drift-diffusion models, the authors find that strongly right-skewed non-decision time distributions are common.
In this paper we make the hypothesis that the non-decision time distributions are ex-Gaussian distributions, whose parameters are inferred from the data using the deconvolution method^{32}, as detailed in Supplementary Information Section S5. We present in Fig. 3 the fits of the response-time distributions and the inferred non-decision time distributions.
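For reference, an ex-Gaussian variable is the sum of a Gaussian (μ, σ) and an exponential (mean τ), so its mean is μ + τ, its variance σ² + τ², and its third central moment 2τ³. These moment identities give a simple method-of-moments check, a much cruder alternative to the deconvolution method of ref. 32; the parameter values below are illustrative assumptions.

```python
import random
import statistics

random.seed(3)

# Assumed (illustrative) ex-Gaussian parameters, in seconds.
mu, sigma, tau = 0.30, 0.03, 0.12
samples = [random.gauss(mu, sigma) + random.expovariate(1.0 / tau)
           for _ in range(50000)]

# Method-of-moments recovery: mean = mu + tau, variance = sigma^2 + tau^2,
# third central moment = 2 * tau^3.
m1 = statistics.fmean(samples)
var = statistics.pvariance(samples)
m3 = statistics.fmean([(x - m1) ** 3 for x in samples])

tau_hat = (m3 / 2.0) ** (1.0 / 3.0)
mu_hat = m1 - tau_hat
sigma_hat = max(var - tau_hat ** 2, 0.0) ** 0.5
```

The exponential component τ controls the right skew; recovering it from the third central moment illustrates why a constant (zero-variance, zero-skew) non-decision time cannot reproduce the long right tails discussed above.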
Data availability
Experimental Data as well as the statistical analyses scripts are available at https://osf.io/eh2xb/. Simulations of the model were carried out using the Julia programming language^{80}. Scripts corresponding to these numerical simulations are also available at https://osf.io/eh2xb/.
References
Meyniel, F., Sigman, M. & Mainen, Z. F. Confidence as Bayesian probability: from neural origins to behavior. Neuron 88, 78–92 (2015).
Mamassian, P. Visual confidence. Annual Review of Vision Science 2, 1–23, https://doi.org/10.1146/annurev-vision-111815-114630 (2015).
Peirce, C. S. & Jastrow, J. On small differences in sensation. Memoirs of the National Academy of Sciences (1884).
Zylberberg, A., Barttfeld, P. & Sigman, M. The construction of confidence in a perceptual decision. Frontiers in integrative neuroscience 6, 79 (2012).
Adler, W. T. & Ma, W. J. Comparing Bayesian and non-Bayesian accounts of human confidence reports. PLoS Computational Biology 14, e1006572 (2018).
Vickers, D. Decision Processes in Visual Perception (Academic Press, 1979; re-edited 2014).
Fleming, S. M., Weil, R. S., Nagy, Z., Dolan, R. J. & Rees, G. Relating introspective accuracy to individual differences in brain structure. Science 329, 1541–1543 (2010).
Kepecs, A. & Mainen, Z. F. A computational framework for the study of confidence in humans and animals. Philosophical Transactions of the Royal Society B Biological Sciences 367, 1322–1337 (2012).
Kepecs, A., Uchida, N., Zariwala, H. A. & Mainen, Z. F. Neural correlates, computation and behavioural impact of decision confidence. Nature 455, 227 (2008).
Kiani, R. & Shadlen, M. N. Representation of confidence associated with a decision by neurons in the parietal cortex. Science 324, 759–764 (2009).
Komura, Y., Nikkuni, A., Hirashima, N., Uetake, T. & Miyamoto, A. Responses of pulvinar neurons reflect a subject’s confidence in visual categorization. Nature neuroscience 16, 749 (2013).
Lak, A. et al. Orbitofrontal cortex is required for optimal waiting based on decision confidence. Neuron 84, 190–201 (2014).
Bogacz, R., Brown, E., Moehlis, J., Holmes, P. & Cohen, J. D. The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks. Psychological Review 113, 700 (2006).
Ratcliff, R. A theory of memory retrieval. Psychological review 85, 59 (1978).
Raab, D. H. Division of psychology: Statistical facilitation of simple reaction times. Transactions of the New York Academy of Sciences 24, 574–590 (1962).
Vickers, D. Evidence for an accumulator model of psychophysical discrimination. Ergonomics 13, 37–58 (1970).
Merkle, E. C. & Van Zandt, T. An application of the poisson race model to confidence calibration. Journal of Experimental Psychology: General 135, 391 (2006).
Wang, X.J. Probabilistic decision making by slow reverberation in cortical circuits. Neuron 36, 955–968 (2002).
Clarke, F. R., Birdsall, T. G. & Tanner, W. P. Jr Two types of ROC curves and definitions of parameters. The Journal of the Acoustical Society of America 31, 629–630 (1959).
Yeung, N. & Summerfield, C. Metacognition in human decisionmaking: confidence and error monitoring. Phil. Trans. R. Soc. B 367, 1310–1321 (2012).
Meyniel, F., Schlunegger, D. & Dehaene, S. The sense of confidence during probabilistic learning: A normative account. PLoS computational biology 11, e1004305 (2015).
Pleskac, T. J. & Busemeyer, J. R. Twostage dynamic signal detection: a theory of choice, decision time, and confidence. Psychological review 117, 864 (2010).
Wei, Z. & Wang, X.J. Confidence estimation as a stochastic process in a neurodynamical system of decision making. Journal of neurophysiology 114, 99–113 (2015).
Koriat, A. The selfconsistency model of subjective confidence. Psychological review 119, 80 (2012).
Paz, L., Insabato, A., Zylberberg, A., Deco, G. & Sigman, M. Confidence through consensus: a neural mechanism for uncertainty monitoring. Scientific reports 6, 21830 (2016).
Jaramillo, J., Mejias, J. F. & Wang, X.J. Engagement of pulvinocortical feedforward and feedback pathways in cognitive computations. Neuron 101, 321–336 (2019).
Wong, K.F. & Wang, X.J. A recurrent network mechanism of time integration in perceptual decisions. Journal of Neuroscience 26, 1314–1328 (2006).
Berlemont, K. & Nadal, J.-P. Perceptual decision-making: Biases in post-error reaction times explained by attractor network dynamics. Journal of Neuroscience 39, 833–853, https://doi.org/10.1523/JNEUROSCI.1015-18.2018 (2019).
Compte, A., Brunel, N., GoldmanRakic, P. S. & Wang, X.J. Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cerebral Cortex 10, 910–923 (2000).
Wilcoxon, F. Individual comparisons by ranking methods. Biometrics Bulletin 1, 80 (1945).
Luce, R. D. et al. Response times: Their role in inferring elementary mental organization. 8 (Oxford University Press on Demand, 1986).
Verdonck, S. & Tuerlinckx, F. Factoring out nondecision time in choice reaction time data: Theory and implications. Psychological review 123, 208 (2016).
Ding, L. & Gold, J. I. Neural correlates of perceptual decision making before, during, and after decision commitment in monkey frontal eye field. Cerebral Cortex 22, 1052–1067 (2011).
Hebart, M. N., Schriever, Y., Donner, T. H. & Haynes, J.D. The relationship between perceptual decision variables and confidence in the human brain. Cerebral Cortex 26, 118–130 (2014).
Beck, J. M. et al. Probabilistic population codes for Bayesian decision making. Neuron 60, 1142–1152, https://doi.org/10.1016/j.neuron.2008.09.021 (2008).
Baranski, J. V. & Petrusic, W. M. The calibration and resolution of confidence in perceptual judgments. Perception & psychophysics 55, 412–428 (1994).
Desender, K., Boldt, A., Verguts, T. & Donner, T. H. Postdecisional sense of confidence shapes speedaccuracy tradeoff for subsequent choices. bioRxiv 466730 (2018).
Sanders, J. I., Hangya, B. & Kepecs, A. Signatures of a statistical computation in the human sense of confidence. Neuron 90, 499–506 (2016).
Urai, A. E., Braun, A. & Donner, T. H. Pupillinked arousal is driven by decision uncertainty and alters serial choice bias. Nature communications 8, 14637 (2017).
Geller, E. S. & Whitman, C. P. Confidence in stimulus predictions and choice reaction time. Memory & Cognition 1, 361–368 (1973).
Vickers, D. & Packer, J. Effects of alternating set for speed or accuracy on response time, accuracy and confidence in a unidimensional discrimination task. Acta psychologica 50, 179–197 (1982).
Usher, M. & McClelland, J. L. The time course of perceptual choice: the leaky, competing accumulator model. Psychological review 108, 550 (2001).
Griffin, D. & Tversky, A. The weighing of evidence and the determinants of confidence. Cognitive psychology 24, 411–435 (1992).
Ernst, M. O. & Banks, M. S. Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415, 429 (2002).
Laming, D. Choice reaction performance following an error. Acta Psychologica 43, 199–224 (1979).
Leopold, D. A., Wilke, M., Maier, A. & Logothetis, N. K. Stable perception of visually ambiguous patterns. Nature neuroscience 5, 605 (2002).
Gold, J. I., Law, C.T., Connolly, P. & Bennur, S. The relative influences of priors and sensory evidence on an oculomotor decision variable during perceptual learning. Journal of neurophysiology 100, 2653–2668 (2008).
Cho, R. Y. et al. Mechanisms underlying dependencies of performance on stimulus history in a two-alternative forced-choice task. Cognitive, Affective, & Behavioral Neuroscience 2, 283–299 (2002).
Glaze, C. M., Kable, J. W. & Gold, J. I. Normative evidence accumulation in unpredictable environments. Elife 4, e08825 (2015).
Bonaiuto, J. J., de Berker, A. & Bestmann, S. Response repetition biases in human perceptual decisions are explained by activity decay in competitive attractor models. eLife 5, e20047 (2016).
Braun, A., Urai, A. E. & Donner, T. H. Adaptive history biases result from confidenceweighted accumulation of past choices. Journal of Neuroscience 2189–17 (2018).
Samaha, J., Switzky, M. & Postle, B. R. Confidence boosts serial dependence in orientation estimation. bioRxiv 369140 (2018).
Gelman, A. & Hill, J. Data analysis using regression and hierarchical/multilevel models. New York, NY: Cambridge (2007).
Fay, M. P. & Proschan, M. A. Wilcoxon-Mann-Whitney or t-test? On assumptions for hypothesis tests and multiple interpretations of decision rules. Statistics Surveys 4, 1 (2010).
Drugowitsch, J., MorenoBote, R. & Pouget, A. Relation between belief and performance in perceptual decision making. PloS one 9, e96511 (2014).
Purcell, B. A. & Kiani, R. Neural mechanisms of posterror adjustments of decision policy in parietal cortex. Neuron 89, 658–671 (2016).
Roxin, A. & Ledberg, A. Neurobiological models of twochoice decision making can be reduced to a onedimensional nonlinear diffusion equation. PLoS Computational Biology 4, e1000046 (2008).
MorenoBote, R. Decision confidence and uncertainty in diffusion models with partially correlated neuronal integrators. Neural computation 22, 1786–1811 (2010).
Rolls, E. T., Grabenhorst, F. & Deco, G. Choice, difficulty, and confidence in the brain. Neuroimage 53, 694–706 (2010).
Rolls, E. T., Grabenhorst, F. & Deco, G. Decision-making, errors, and confidence in the brain. Journal of neurophysiology 104, 2359–2374 (2010).
Kiani, R., Corthell, L. & Shadlen, M. N. Choice certainty is informed by both evidence and decision time. Neuron 84, 1329–1342 (2014).
Ratcliff, R. & Starns, J. J. Modeling confidence and response time in recognition memory. Psychological review 116, 59 (2009).
Wong, K.-F., Huk, A. C., Shadlen, M. N. & Wang, X.-J. Neural circuit dynamics underlying accumulation of time-varying evidence during perceptual decision making. Frontiers in Computational Neuroscience 1, 6 (2007).
Desender, K., Murphy, P. R., Boldt, A., Verguts, T. & Yeung, N. A post-decisional neural marker of confidence predicts information-seeking. bioRxiv 433276 (2018).
Ratcliff, R. & Rouder, J. N. Modeling response times for two-choice decisions. Psychological Science 9, 347–356 (1998).
Peirce, C. S. On the theory of errors of observation. Report of the Superintendent of the United States Coast Survey Showing the Progress of the Survey During the Year 1870, 220–224 (1873).
Ditterich, J. Evidence for time-variant decision making. European Journal of Neuroscience 24, 3628–3641 (2006).
Wang, X.-J. Decision Making in Recurrent Neuronal Circuits. Neuron 60, 215–234 (2008).
Kleiner, M. et al. What's new in Psychtoolbox-3. Perception 36, 1 (2007).
Bates, D., Mächler, M., Bolker, B. & Walker, S. Fitting linear mixed-effects models using lme4. Journal of Statistical Software 67, 1–48 (2015).
Kreyszig, E. Advanced engineering mathematics, fourth edn (1979).
Sommer, M. A. & Wurtz, R. H. Visual perception and corollary discharge. Perception 37, 408–418 (2008).
Crapse, T. B. & Sommer, M. A. Frontal eye field neurons with spatial representations predicted by their subcortical input. Journal of Neuroscience 29, 5308–5318 (2009).
Abbott, L. & Chance, F. S. Drivers and modulators from push-pull and balanced synaptic input. Progress in brain research 149, 147–155 (2005).
Smith, P. L. & Vickers, D. The accumulator model of two-choice discrimination. Journal of Mathematical Psychology 32, 135–168 (1988).
Gonzalez, R. C., et al. Digital image processing (2002).
Rowan, T. The subplex method for unconstrained optimization. Ph.D. thesis, Department of Computer Sciences, Univ. of Texas (1990).
Akaike, H. Information theory and an extension of the maximum likelihood principle. In Breakthroughs in statistics, 610–624 (Springer, 1992).
Ratcliff, R. Parameter variability and distributional assumptions in the diffusion model. Psychological review 120, 281 (2013).
Bezanson, J., Edelman, A., Karpinski, S. & Shah, V. B. Julia: A fresh approach to numerical computing. SIAM review 59, 65–98, https://doi.org/10.1137/141000671 (2017).
Acknowledgements
We are grateful to Laurent Bonnasse-Gahot for useful discussions and suggestions. We thank Pascal Mamassian, Vincent de Gardelle and Xiao-Jing Wang for stimulating discussions. We thank Isabelle Brunet for her help in recruiting the participants and organizing the experimental sessions. We thank the anonymous referees for useful remarks. K.B. acknowledges a fellowship from the ENS Paris-Saclay.
Author information
Authors and Affiliations
Contributions
All authors contributed to the research plan. K.B., J.R.M. and J.S. conceived the experiment; K.B. and J.R.M. conducted the experiment; K.B. and J.S. performed the statistical analyses; K.B. and J.P.N. performed the mathematical modeling; K.B. performed the numerical simulations; K.B. and J.P.N. analyzed the results with inputs from the other authors. K.B. and J.P.N. wrote the paper with input from the other authors. All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Berlemont, K., Martin, J.-R., Sackur, J. et al. Nonlinear neural network dynamics accounts for human confidence in a sequence of perceptual decisions. Sci Rep 10, 7940 (2020). https://doi.org/10.1038/s41598-020-63582-8