INTRODUCTION

A critical aspect of executive control is the ability to detect and correct errors in ongoing cognitive performance. The processing of errors serves an adaptive function, signalling to an individual that task performance was incorrect or insufficient, and that the intervention of other attention or control processes would potentially be advantageous. Although the study of error detection, and performance monitoring more generally, is of interest because of its relevance to a broad range of other cognitive processes, it has to an extent been driven by neurophysiological and behavioral evidence of error-processing dysfunction in a range of clinical conditions.

A recurring finding has been atypical neural activity in the anterior cingulate cortex (ACC) during cognitive errors. Persons with schizophrenia, ADHD, Alzheimer's disease, and various drug addictions (eg, cocaine, heroin, and alcohol) show diminished ACC responses to errors (Bates et al, 2002; Forman et al, 2004; Kaufman et al, 2003; Mathalon et al, 2003; Ridderinkhof et al, 2002), whereas patients with obsessive-compulsive disorder show elevated responses in this region (Gehring et al, 2000). Current evidence suggests that the neural response to errors involves a network of regions (Ridderinkhof et al, 2004), which consistently includes the ACC (Gehring and Fencsik, 2001). There remains much debate about the role these cortical regions play in error processing (for a review see Ridderinkhof et al, 2004), in particular whether their responsiveness is error specific, or whether they respond to certain properties of the stimuli used in cognitive paradigms, such as the amount of response conflict engendered by the stimulus (owing to competition between potential responses) (Carter et al, 1998).

One mechanism by which the neural response to errors has been hypothesized to contribute to ongoing cognitive control is by engaging the top–down control of attention (MacDonald et al, 2000; Ullsperger and von Cramon, 2001). Three recent studies have provided brain–behavior relationships to support this hypothesis (Garavan et al, 2002; Gehring et al, 1993; Kerns et al, 2004), indicating that strategic adjustment of response speed on the trials immediately following an error correlated with greater activity in the ACC region during the error trial. Implementation of the increased cognitive control was related to greater activity in the dorsolateral prefrontal cortex (dPFC). Typically, these results have been explained using the conflict monitoring hypothesis, which suggests that ACC activation, irrespective of success or failure, is related to the extent of response conflict: conditions in which multiple responses compete for the control of action.

The aim of the present study was to examine the cognitive implications of diminished error-related activity in cocaine users. Previous studies have indicated that cocaine users have a hypoactive error-related neural response, particularly in the ACC region (Kaufman et al, 2003; Kubler et al, 2005). It remains unclear how this hypoactivity might contribute to the more general cognitive control problems identified in this population (Bolla et al, 2004; Di Sclafani et al, 2002; Fillmore and Rush, 2002; Goldstein et al, 2001), which suggest a broader dysfunction in fronto-parietal top–down control networks. Two error-related processes were chosen to examine the potential influence of a hypoactive error-related neural response: error awareness and post-error cognitive control. Previous research suggests that the ACC is active during errors irrespective of awareness, whereas activity in prefrontal and parietal regions appears to differentiate aware from unaware errors (Hester et al, 2005; Nieuwenhuis et al, 2001). Post-error cognitive control, which has typically been studied by examining post-error slowing during cognitive control tasks (eg, the Stroop, Go/NoGo, and Flanker tasks), has been characterized as a reciprocal relationship between the anterior cingulate and dorsolateral prefrontal cortices, in which the ACC detects the requirement for, and the dPFC implements, greater cognitive control.

Given the previous findings of error-related ACC hypoactivity in cocaine users, and the evidence from control subjects that the level of ACC activity is positively related to post-error increases in cognitive control, we predicted that our cocaine user sample would have greater difficulty modulating cognitive control. Our predictions for error awareness in cocaine users were less straightforward. Evidence from past studies suggests that while the ACC is active during both aware and unaware errors, its level of activity does not predict awareness of an error. Awareness appears to be related more to fronto-parietal activation, regions that have also been shown to be hypoactive in cocaine users (Hester and Garavan, 2004), although specific examinations of users' error-related processing have not indicated fronto-parietal hypoactivity (Kaufman et al, 2003). We therefore predicted that conscious awareness of errors would not be significantly different in cocaine users when compared to matched healthy controls.

MATERIALS AND METHODS

Subjects

Twenty-two non-drug-using participants (six female, mean age=39.9 years, range=26–51) and 21 active cocaine-using participants (six female, mean age=40.3 years, range=22–48) were included in the current study. Educational attainment for the two groups was not significantly different (controls: 12.7 years, users: 11.3 years, F(1,41)=3.80, p>0.05). The numerical difference may reflect the curtailed schooling that often accompanies drug use, which can also lead to underestimation of intellectual capacity (Chatterji, 2006). Participants were fully informed of the nature of the research and provided written consent for their involvement in accordance with the Institutional Review Board of the Medical College of Wisconsin (MCW). Participants were recruited via the General Clinical Research Center (GCRC) at MCW, where staff psychiatrists screened individuals who responded to advertisements seeking active users of cocaine to volunteer for a range of studies conducted at MCW examining the effects of cocaine use. The Structured Clinical Interview for DSM-IV was conducted to ensure that participants had no current or past history of neurological or psychiatric disorders and no dependence on any psychoactive substance other than nicotine or, for user participants, cocaine; all participants were right handed. Urine samples were obtained from all participants at least 1 h before testing, with all non-drug participants returning negative tests for all 96 CNS-reactive drug substances tested for, and active cocaine users returning positive tests for cocaine or its metabolites, indicating that they had used cocaine within the past 72 h. Self-reported time since last use was 45 h (range 12–60 h). User participants who tested positive for any drug (on the urine screen) other than cocaine, nicotine, or marijuana were excluded. Thirteen of the cocaine users reported occasional use of cannabis, with an average of 17 days since last use; none had used cannabis in the 24 h before cognitive testing. The acute effects of cannabis intoxication on the cognitive performance of occasional users are short lived, peaking at 2 h post-consumption and lasting up to 8 h, but are not present after 24 h (Curran et al, 2002; Fant et al, 1998), and ‘light’ (once per week) use of cannabis has not been associated with decrements in cognitive test performance (Pope et al, 2001). Twenty-two participants (17 cocaine users and five controls) reported regular use of tobacco (M=11.1 cigarettes per day), and all participants from both groups reported regular use of alcohol. Participants were excluded for present or past alcohol dependence, and all recorded zero blood alcohol levels before testing.

Behavioral Tasks

EAT

To examine conscious recognition of errors we administered the Error Awareness Task (EAT; see Figure 1) (Hester et al, 2005), a motor Go/NoGo response inhibition task in which subjects make errors of commission of which they are either aware (aware errors) or unaware (unaware errors). The EAT presents a serial stream of single color words in congruent fonts, with each word presented for 900 ms followed by a 600 ms inter-stimulus interval. Subjects were trained to respond to each of the words with a single ‘Go trial’ button press, and to withhold this response in either of two circumstances: when the same word was presented on two consecutive trials (Repeat NoGo), or when the word and its font color did not match (Stroop NoGo). By having competing types of response inhibition rules we aimed to vary the strength of stimulus–response relationships, whereby representations of rules competitively suppress one another such that the more prepotent rule suppresses the weaker rule, producing a significant number of errors, a small proportion of which may go unnoticed owing to focusing primarily on the prepotent rule. In particular, we aimed to capitalize on the overlearned human behavior of reading the word rather than the color of the letters (the Stroop effect), and so predispose subjects to monitor for the Repeat, rather than the Stroop, NoGos. To indicate ‘error awareness’, subjects were trained to press the Go trial button twice on the trial following any commission error. Four blocks of 225 trials were administered, with 50 Repeat NoGo events and 50 Stroop NoGo events distributed pseudo-randomly throughout the serial presentation of 800 Go trials, mixing frequent responses with infrequent response inhibitions to maintain response prepotency.
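
For concreteness, the following is a minimal sketch of how a pseudo-random trial sequence with these proportions could be generated; the color set, function name, and spacing rules are illustrative assumptions rather than the original stimulus-generation code.

```python
import random

# Parameters from the task description: 4 blocks x 225 trials = 900 trials,
# comprising 800 Go trials, 50 Repeat NoGos, and 50 Stroop NoGos; each word
# is shown for 900 ms followed by a 600 ms inter-stimulus interval.
COLORS = ["RED", "GREEN", "BLUE", "YELLOW"]   # assumed color-word set
N_GO, N_REPEAT, N_STROOP = 800, 50, 50

def build_eat_sequence(seed=0):
    """Return a list of (word, font_color, trial_type) tuples for the whole task."""
    rng = random.Random(seed)
    types = ["go"] * N_GO + ["repeat_nogo"] * N_REPEAT + ["stroop_nogo"] * N_STROOP
    rng.shuffle(types)          # pseudo-random ordering across the task
    trials, prev_word = [], None
    for t in types:
        if t == "repeat_nogo" and prev_word is not None:
            word, color = prev_word, prev_word              # same word twice, congruent font
        elif t == "stroop_nogo":
            word = rng.choice([c for c in COLORS if c != prev_word])
            color = rng.choice([c for c in COLORS if c != word])  # incongruent font
        else:                                               # ordinary Go trial
            word = rng.choice([c for c in COLORS if c != prev_word])
            color = word                                    # congruent font
        trials.append((word, color, t))
        prev_word = word
    return trials   # split into 4 blocks of 225 for presentation; first-trial edge cases ignored
```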

Figure 1

The EAT. A serial stream of single color words is presented in congruent fonts (900 ms word, 600 ms inter-stimulus interval). Subjects respond to each word with a single ‘Go trial’ button press and withhold this response when the same word is presented on two consecutive trials (Repeat NoGo) or when the word and its font color do not match (Stroop NoGo); error awareness is indicated by pressing the Go trial button twice on the trial following any commission error.

BAT

Subjects completed a second motor inhibition Go/NoGo task, the Behavior Adaptation Task (BAT; see Figure 2), in which the letters X and Y were presented serially in an alternating pattern at 1 Hz. Subjects were asked to make a button response to each letter in the sequence but to withhold their response when the alternation order was interrupted. Stimulus duration was 800 ms, followed by a 200-ms fixation point. Stimuli were presented in two different conditions: in the Single Lure (SL) condition the 80 NoGo stimuli (lures) were always followed by a Go stimulus; in the Double Lure (DL) condition there were 44 single lures and 18 double lures, with the double lures requiring two successive response inhibitions (eg, the seventh and eighth stimuli in the sequence shown in Figure 2). The aim of this design was to examine post-error behavior for single lures in both the SL and the DL conditions. In the DL condition only, NoGo lures were occasionally followed by another NoGo lure, which we reasoned should induce post-error (and post-lure) behavioral changes such as response slowing; the advantage of such post-error slowing is reduced in the SL condition, in which a NoGo lure was always followed by a Go trial.

Figure 2

The BAT. A motor inhibition Go/NoGo task in which the letters X and Y are presented serially in an alternating pattern at 1 Hz (800 ms stimulus, 200 ms fixation). Subjects respond to each letter and withhold the response when the alternation order is interrupted. In the SL condition the 80 NoGo lures are always followed by a Go stimulus; in the DL condition there are 44 single lures and 18 double lures, the latter requiring two successive response inhibitions (eg, the seventh and eighth stimuli in the depicted sequence).

The experiment was conducted in six different blocks of 314 trials each, three blocks of the SL condition and three of the DL condition. Each block had a duration of 5 min and 30 s. The order of the conditions was randomized across subjects.
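
As an illustration of this block structure, a minimal sketch for generating one block's letter sequence is given below; the inter-lure spacing, function name, and seed handling are assumptions for illustration, not the original stimulus code.

```python
import random

def build_bat_block(n_single, n_double, seed=0):
    """Sketch of one BAT block: X and Y alternate (800 ms letter + 200 ms
    fixation, ie 1 Hz); a 'lure' repeats the previous letter, breaking the
    alternation, and requires the response to be withheld. Single-lure blocks
    use n_double=0; double-lure blocks add events with two lures in a row."""
    rng = random.Random(seed)
    events = ["single"] * n_single + ["double"] * n_double
    rng.shuffle(events)
    seq, last = [], "Y"
    for event in events:
        # a short run of ordinary alternating Go trials before each lure event
        for _ in range(rng.randint(2, 5)):                 # assumed spacing
            last = "X" if last == "Y" else "Y"
            seq.append((last, "go"))
        seq.append((last, "nogo"))                         # first lure repeats the letter
        if event == "double":
            seq.append((last, "nogo"))                     # second consecutive lure
    return seq

# The lure counts reported in the text (eg, 44 single and 18 double lures for
# the DL condition) would be passed as n_single and n_double.
```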

RESULTS

EAT

Performance indices for both control and user subjects are presented in Table 1. Scores on a number of measures were not normally distributed, and non-parametric statistical tests were used where appropriate. Control subjects' inhibitory control, as measured by NoGo accuracy, was better than that of cocaine users, but this difference was not significant, F(1,42)=2.0, p=0.164. Comparing Stroop and Repeat NoGo performance separately showed significant group differences for the latter, Mann–Whitney U (z=−2.38, p=0.017), but not the former, F(1,42)=0.314, p=0.579. Although Stroop NoGo errors were more common than Repeat NoGo errors across both groups (repeated measures t-test, t(42)=9.25, p<0.0001), group differences in the awareness of errors were evident for Repeat NoGo errors, U (z=−2.31, p=0.021), but not Stroop errors, F(1,42)=0.809, p=0.374. Control participants were aware of over 85% of Repeat NoGo errors, in comparison to 71% for cocaine users (see Figure 3).
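
For illustration, the test-selection logic described here (a parametric comparison where distributions are acceptable, Mann–Whitney U otherwise) might be sketched as follows; the Shapiro–Wilk criterion and alpha level are assumptions, not a statement of the original analysis pipeline.

```python
import numpy as np
from scipy import stats

def compare_groups(controls, users, alpha=0.05):
    """Between-group comparison on one performance measure, falling back to
    the non-parametric Mann-Whitney U test when either group departs from
    normality (illustrative criterion)."""
    controls = np.asarray(controls, dtype=float)
    users = np.asarray(users, dtype=float)
    _, p_controls = stats.shapiro(controls)
    _, p_users = stats.shapiro(users)
    if p_controls > alpha and p_users > alpha:
        f, p = stats.f_oneway(controls, users)   # one-way ANOVA for two groups
        return "F", f, p
    u, p = stats.mannwhitneyu(controls, users, alternative="two-sided")
    return "Mann-Whitney U", u, p
```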

Table 1 Mean Accuracy, Reaction Time, and Standard Deviation Scores for Cocaine Users (n=21) and Matched Controls (n=22) on the Error Awareness Task
Figure 3

The mean number of aware and unaware errors for Repeat and Stroop NoGo lure trials, by both control and cocaine user participants during the EAT.

Control participants responded significantly faster to Go trials than cocaine users, F(1,42)=5.88, p=0.020, although the faster reaction times of controls during NoGo errors were not significant, F(1,42)=2.96, p=0.092. Go RTs were significantly faster than error RTs for both groups; however, this comparison was confounded by the response pattern of many participants who, after making the initial erroneous button press during the NoGo trial, responded again during that trial. Owing to technical limitations we were not able to obtain separate reaction times for each response, and while removal of the double responses does not alter the result, it remains unclear how such responses might have influenced error RTs.

A measure of post-aware-error slowing was calculated by subtracting the RT of the Go trial that immediately preceded an error from the RT of the second Go trial that followed the error. The second trial after an error was used because of the technical difficulty of calculating reaction times from double responses. Mean post-aware-error slowing scores indicated no significant change in RT following an aware error for controls, whereas cocaine users showed significant slowing (33 ms, t(20)=−2.23, p=0.037). We have previously observed faster RTs following errors with similar tasks, which we hypothesized was an adaptation to the task's stimulus presentation ratio as subjects learn that NoGo events are widely spaced. Further support for this hypothesis comes from significantly faster RTs following STOPs (successful inhibitions) for both groups in the current study (post-STOP trial RT minus pre-STOP trial RT: cocaine users=−73 ms, t(20)=7.95, p<0.001; controls=−91 ms, t(21)=8.13, p<0.001). In contrast, both groups showed a trend towards slowing after unaware errors (post-unaware-error Go trial RT minus pre-unaware-error Go trial RT: cocaine users=19 ms, t(20)=−1.1, p=0.284; controls=22 ms, t(21)=−2.04, p=0.05), a result observed previously with this task (Hester et al, 2005). Neither the post-STOP nor the post-unaware-error effects differed between users and controls.
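
A minimal sketch of this difference score, assuming per-trial records with an RT and a trial-type label (the field names and data layout are illustrative):

```python
import numpy as np

def post_aware_error_slowing(trials):
    """Post-aware-error slowing as described above: RT of the second Go trial
    after an aware error minus RT of the Go trial immediately preceding it.
    The second post-error trial is used because the first carries the
    double-press awareness response. `trials` is a list of dicts with keys
    'rt' (ms, or None for withheld responses) and 'type'
    ('go', 'aware_error', 'unaware_error', 'stop')."""
    diffs = []
    for i, trial in enumerate(trials):
        if trial["type"] != "aware_error":
            continue
        if i < 1 or i + 2 >= len(trials):
            continue                                    # skip edge-of-block errors
        pre, post = trials[i - 1], trials[i + 2]
        if (pre["type"] == "go" and post["type"] == "go"
                and pre["rt"] is not None and post["rt"] is not None):
            diffs.append(post["rt"] - pre["rt"])        # positive = slowing
    return float(np.mean(diffs)) if diffs else np.nan
```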

In summary, cocaine users displayed poorer inhibitory control (more commission errors for Repeat NoGos) and poorer error awareness (lower awareness of Repeat NoGo errors). However, slowing of response speed following errors (aware or unaware) or STOPs did not appear to be compromised (indeed, users showed significant post-error slowing following aware errors, whereas controls did not).

BAT

Performance indices for both control and user subjects are presented in Table 2. Given the significant correlations among NoGo performance for single lures in the SL and DL conditions and for the first lure of a double lure in the DL condition (r=0.66–0.82), an average BAT inhibition score was created. Control subjects' BAT inhibition performance was better than that of cocaine users, F(1,42)=4.31, p=0.04. No group difference was identified for the second lure of a double lure, U (z=−0.71, p=0.43). A ‘post-error task adaptation’ coefficient was derived for double lures, specifically examining those occasions on which participants failed to inhibit their response to the first lure, by calculating the proportion of second lures on which participants successfully inhibited, or ‘adapted’, their performance. Control participants adapted their performance following an error on 78% of occasions, significantly more than cocaine users, who adapted post-error performance on only 66% of trials, U (z=−2.096, p=0.036).
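
A minimal sketch of this adaptation coefficient, assuming each double lure is recorded as a pair of booleans (first lure inhibited, second lure inhibited); the data structure is an illustrative assumption:

```python
def post_error_adaptation(double_lure_outcomes):
    """Proportion of double lures on which the second lure was successfully
    inhibited, counting only those double lures on which the first lure was
    NOT inhibited (ie, a commission error occurred on the first lure)."""
    second_after_error = [second for first, second in double_lure_outcomes if not first]
    if not second_after_error:
        return float("nan")                     # no first-lure errors to adapt from
    return sum(second_after_error) / len(second_after_error)

# Example: controls adapted on ~78% of such occasions, users on ~66%.
```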

Table 2 Mean Accuracy, Reaction Time, and Standard Deviation Scores for Cocaine Users (n=21) and Matched Controls (n=22) on the Behavior Adaptation Task

Two separate 2 condition (single lure, double lure) × 2 group (controls, cocaine users) repeated measures ANOVAs examined Go RT and Error of Commission (EOC) RT. Go RT did not significantly differ across groups, F(1,41)=0.029, p>0.05, or conditions, F(1,41)=0.571, p>0.05, and the interaction between condition and group was also not significant, F(1,41)=1.01, p>0.05. The same pattern of nonsignificant results was observed for EOC RT.

Two RT difference scores, post-EOC RT and post-STOP RT, were also calculated. Post-EOC RT was calculated by subtracting the RT for the trial following an EOC from the RT for the trial immediately before an EOC; only single lures in the SL and DL conditions were used to calculate this difference score. To test whether post-error slowing occurred, one-sample t-tests against zero were conducted for each group in each condition. Both controls (SL: 60 ms, t(20)=−4.37, p<0.01; DL: 97 ms, t(20)=−7.97, p<0.01) and cocaine users (SL: 56 ms, t(21)=−5.62, p<0.01; DL: 86 ms, t(21)=−5.57, p<0.01) demonstrated significant post-error slowing. A repeated measures ANOVA indicated that post-error slowing differed significantly across conditions, F(1,41)=12.13, p<0.01, with slowing in the DL condition significantly greater than in the SL condition, but showed no significant effect of group (p>0.05) or interaction, F(1,41)=0.622, p>0.05.
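
The one-sample tests against zero could be computed as follows, given one difference score per subject for a group and condition; the function and variable names are illustrative.

```python
import numpy as np
from scipy import stats

def post_error_slowing_test(diff_scores_ms):
    """One-sample t-test of per-subject post-EOC RT difference scores against
    zero; a mean reliably different from zero indicates post-error slowing."""
    diffs = np.asarray(diff_scores_ms, dtype=float)
    t, p = stats.ttest_1samp(diffs, popmean=0.0)
    return diffs.mean(), t, p

# Usage with hypothetical per-subject scores (ms):
# mean_ms, t, p = post_error_slowing_test(control_dl_scores)
```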

To examine post-STOP behavior, one-sample t-tests against zero were conducted for each group in each condition. During the SL condition, both controls (71 ms, t(21)=4.52, p<0.01) and cocaine users (48 ms, t(20)=2.86, p<0.01) demonstrated significant post-STOP decreases in RT. The DL condition showed a different pattern, with cocaine users' post-STOP RT showing significant slowing (39 ms, t(21)=−2.19, p<0.05) and control participants' not differing significantly from zero (−2 ms, t(20)=−0.14, p>0.05). A repeated measures ANOVA indicated that post-STOP behavior differed significantly across conditions, F(1,41)=55.21, p<0.01, with slowing in the DL condition and speeding in the SL condition, but showed no significant effect of group, F(1,41)=2.42, p>0.05, or interaction, F(1,41)=0.48, p>0.05.

In summary, no group differences in any of the RT measures were observed, nor were there group differences in response time adaptation (ie, post-error or post-STOP slowing). However, both the composite inhibitory control score and the performance adaptation score (the likelihood of successfully inhibiting the second lure of a double lure after failing to inhibit the first) were lower in users than in controls. Although this last finding might suggest impaired performance adaptation in users, the number of double lure errors may not be a pure measure of adaptation, as successful performance may also depend on the ability to withhold a prepotent response (a process demonstrated above to be impaired for Repeat trials in users). Double condition ‘adaptation’ was not significantly related to either measure of inhibition accuracy from the EAT (r=0.13–0.27) or to single condition accuracy (r=0.27, p=0.08), but was related to double condition single lure accuracy (r=0.41, p<0.01) and to double lure 1 accuracy (r=0.36, p=0.01). No relationship was observed between double condition adaptation and the awareness measures from the EAT (r=0.12–0.14). Similarly, no relationship was observed with single condition post-error slowing (r=0.09); however, a significant correlation was seen between the magnitude of post-error slowing in the double lure condition and double condition adaptation (r=0.34, p=0.02). On the whole, this pattern of correlations suggests that the performance adaptation measure reflects inhibitory abilities in addition to behavioral adaptation abilities, and that the poorer scores of users are not necessarily indicative of compromised post-error adaptation.

Relationship between Cocaine Use Behavior and Cognitive Task Performance

The EAT and BAT provide numerous measures of cognitive control, including response inhibition (the percentage of successful inhibitions for Stroop and Repeat NoGos from the EAT, and the composite NoGo score from the BAT), error awareness (the sum of Stroop and Repeat error awareness scores in the EAT), post-error slowing (following unaware errors in the EAT and single lure trials in the DL condition of the BAT), and inhibition adaptation (DL condition). NoGo accuracy and error awareness indices from the EAT generally showed negative correlations with use behavior, but these effects were not significant. Significant relationships were identified between use behavior and BAT performance, with NoGo accuracy (r=−0.39) and post-error adaptation (r=−0.49) significantly correlated with years of cocaine use. Weekly spending on cocaine also showed a negative correlation with post-error adaptation (r=−0.38, p=0.08) that did not reach the significance threshold. Post-error slowing did not relate significantly to any of the self-report measures of cocaine use.
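
As an illustration, the reported use-performance correlations could be computed as below; the measure names and data layout are assumptions for the sketch, not the study's variable labels.

```python
from scipy import stats

def correlate_use_with_performance(task_scores, use_measures):
    """Pearson correlations between each cognitive-control score and each
    self-reported use measure; inputs are dicts of equal-length per-subject
    sequences (cocaine users only)."""
    results = {}
    for score_name, scores in task_scores.items():          # eg 'bat_nogo_accuracy'
        for use_name, use_values in use_measures.items():   # eg 'years_of_use'
            r, p = stats.pearsonr(scores, use_values)
            results[(score_name, use_name)] = (r, p)
    return results
```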

DISCUSSION

The results of the present study suggest that while cocaine users show a range of deficits in comparison to matched controls, a number of dissociations exist between impaired and retained cognitive performance. Cocaine users consistently demonstrated poorer inhibitory control, a deficit that was accompanied by reduced awareness of their errors when compared to matched controls. Although we had predicted post-error adaptation deficits on the basis of hypoactive ACC activity in the cocaine-using population, we found no evidence of deficits in post-error reaction times, a behavior that has previously been shown to correlate with error-related ACC activity. Interestingly, a different measure of post-error adaptation, exerting inhibitory control on the trial immediately after failing to inhibit, was significantly poorer in the cocaine-using sample, suggesting a dissociation between these forms of behavior.

Inhibitory Control

Results for the EAT and BAT measures of inhibitory control demonstrated poorer performance in cocaine users; with the exception of Stroop NoGo performance in the EAT, cocaine user participants' performance was worse than that of control participants on all indices of inhibitory control.

The general pattern of poorer inhibitory control in cocaine users is consistent with the extant literature (Bolla et al, 1999; Fillmore and Rush, 2002; Hester and Garavan, 2004), although intact performance has also been reported (Bolla et al, 2004; Goldstein et al, 2001; Hoff et al, 1996). It is possible that some of this inconsistency lies with statistical power, primarily the number of observations, as both of the current tasks indicate a significant cocaine-related deficit when inhibitory performance is averaged across within-task conditions.

Previous findings of inhibitory deficits in cocaine users have been associated with significantly faster response speeds (Fillmore and Rush, 2002; Kaufman et al, 2003), a relationship that has also been observed in control samples (Bellgrove et al, 2004). The faster response speeds are thought to reflect diminished attention to the task, or an impulsive response style, both of which would be consistent with the deficits observed in cocaine users. Interestingly, however, results from both of the current tasks indicate that inhibitory control deficits persisted in the absence of response speed differences.

Error Awareness

Cocaine users explicitly recognized fewer of their inhibitory errors than control participants when performing the EAT. The pattern of performance again differed between experimental conditions: awareness of Repeat errors, but not Stroop errors, showed significant group differences, an effect driven by the poorer awareness of cocaine users on Repeat errors. Repeat lures require sustained attention to the trial sequence, whereas Stroop lures require phasic detection of the incongruency between word and font color. The pattern of results observed here may indicate that performance monitoring in cocaine users is particularly poor for tasks that require sustained attention, or that place demands on the updating process typically associated with working memory (Jansma et al, 2000). We have previously shown that cocaine users find inhibitory control under increased working memory demands particularly difficult (Hester and Garavan, 2004), partly owing to an inability to modulate ACC activity in response to these demands. Although the working memory demands of the EAT are minimal, certainly in comparison to the n-back tasks that have also demonstrated cognitive impairment in cocaine users (Verdejo-Garcia et al, 2006), they may have been sufficient to interfere with the process of explicit error awareness. For example, even minimal increases in working memory demands have been shown to deleteriously influence other executive processes such as selective attention (de Fockert et al, 2001) and inhibitory control (Bunge et al, 2001), potentially owing to demands on shared neural resources (Klingberg, 1998).

Although this relationship requires further study, links between working memory and error awareness are of particular pertinence to drug abuse. Research suggests that cue-related cocaine craving involves the activation of a network of cortical regions involved in the engagement of attention, and that the subsequent ruminations also involve the fronto-parietal network seen in WM rehearsal (Childress et al, 1999; Garavan et al, 2000; Grant et al, 1996; Kilts et al, 2001; Maas et al, 1998). Although speculative, a link between WM demands, cocaine craving, and poor error awareness may help explain why cocaine users' self-monitoring, or insight into their own behavior, is particularly poor during craving for the drug (Miller and Gold, 1994). The reduced awareness of errors by cocaine users has not previously been reported, though animal research has alluded to this type of deficit with tasks examining post-error responses during cognitive task performance (Gendle et al, 2003, 2004; Morgan et al, 2002).

Post-Error Adaptation

Contrary to expectation, cocaine users showed intact post-error slowing of response times in both the single and double lure conditions of the BAT; the magnitude and pattern of post-error slowing did not differ significantly between the groups on any measure derived from the task. Post-error slowing is a common phenomenon typically seen in experimental paradigms that require fast responses from a multiple-choice set of alternatives, although RT slowing does not necessarily confer a direct benefit to post-error performance, particularly for tasks that emphasize speed over accuracy. The benefit to performance of post-error slowing was, however, specifically manipulated within the BAT by introducing a double lure condition in which 50% of NoGo lure trials were immediately followed by a second consecutive NoGo lure. This manipulation proved successful, with both groups showing significantly greater post-error slowing in the double lure condition than in the single lure condition.

In contrast to the post-error slowing result, cocaine users did show significantly less post-error adaptation of inhibitory control performance. During the double lure condition, a significantly smaller proportion of first lure errors were followed by successful inhibition on the subsequent second lure of double lures. This result was surprising given that cocaine users showed significant levels of post-error slowing, particularly during the double lure condition, which suggested an ability to detect errors and modify behavior accordingly. The intercorrelations between measures on the BAT and EAT indicated that our derived measure of post-error adaptation was related to both inhibitory control performance and error awareness, but showed no relationship to the post-error slowing measures. Consequently, the dissociation between deficient post-error adaptation of inhibitory control and intact post-error slowing may have resulted from cocaine users' impairment in inhibitory control, whereby the magnitude of slowing following errors (which was equivalent to that of control participants) was not sufficient to overcome their significantly poorer inhibitory control.

We had predicted, on the basis of previous studies demonstrating a relationship between diminished error-related ACC responses and decreased post-error slowing (de Bruijn et al, 2004; Kerns et al, 2005; Ridderinkhof et al, 2002), that cocaine users, who have previously shown this diminished error-related ACC response (Kaufman et al, 2003), would have impaired post-error slowing. Given the intact post-error slowing of the current cocaine-using sample, we must consider whether any significant methodological or sample differences between the current study and those that formed the hypothesis might account for the unexpected finding. One obvious explanation is that the current sample of cocaine users may not have a significantly diminished ACC response to errors. In the absence of neuroimaging results we cannot rule out this explanation; however, given the similarities between the current sample and that of Kaufman et al (2003), in terms of both lifetime and recent cocaine use as well as demographic characteristics, there appears to be no direct evidence to support this hypothesis. In addition, ACC hypoactivity appears to be a general feature of drug abusers, with findings in opiate (Forman et al, 2004), cannabis (Gruber and Yurgelun-Todd, 2005), and methamphetamine (London et al, 2005) samples. Furthermore, this sample does show impairment in other ACC-related cognitive functions, such as inhibitory control and error awareness. Although we do not have neuroimaging data from the BAT, it is based on the Go/NoGo format, which has previously shown a relationship between post-error slowing and error-related ACC activity (Garavan et al, 2002), replicating a relationship demonstrated with other cognitive tasks such as the flanker (Gehring et al, 1993) and Stroop tasks (Kerns et al, 2004, 2005). Despite this evidence, some studies have failed to demonstrate a relationship between error-related ACC activity and post-error slowing (Gehring and Fencsik, 2001), or have associated different neural signatures with post-error slowing, such as the ERP error positivity (Pe) waveform (Hajcak et al, 2003). Studies examining the diminished error-related ACC response of older adults have also failed to show that it bears any relationship to post-error slowing (Themanson et al, 2005; West and Moore, 2005), and studies examining drug-related increases (Riba et al, 2005a; Tieges et al, 2004) or decreases (Easdon et al, 2005; Riba et al, 2005b) in the ACC response to errors have likewise failed to show a relationship to post-error slowing. It is intriguing that studies examining within-subject relations between ACC activity and post-error slowing have found significant correlations (eg, Kerns et al, 2005; Gehring et al, 1993), whereas the above-mentioned studies using between-group comparisons have not. Between-group comparisons may be influenced by other independent variables that distinguish the groups, for example anatomical variability, which, while not directly related to performance monitoring, could potentially mask the ACC/post-error slowing relationship seen at the single-subject level.

The present result of intact post-error slowing and deficient post-error adaptation is remarkably consistent with two previous reports from the neuropsychological literature. Gehring and Knight (2000) examined a group of six brain-injured patients with focal lesions of the lateral prefrontal cortex, finding that response correction, but not post-error slowing, differed significantly from that of age-matched controls. ERP data from the brain-injured sample indicated a ‘normal’ error-related negativity (ERN) waveform; however, an ERN-like waveform of similar magnitude was also present on correct trials. Gehring and Knight (2000) suggested that the inability of the ERN signal to differentiate errors from correct trials argued against models of executive control in which the ACC signals the PFC, or the reverse, to exert greater cognitive control. They argued instead for a more complex model, in which the ACC detects properties of the stimuli that require information held in the prefrontal cortices for interpretation, and potentially also for the implementation of corrective behavior. Swick and Turken (2002) presented a single case, RN, who had an extensive left hemisphere ACC lesion that was associated with impaired error correction but intact post-error slowing when performing the Stroop task. ERP data from this patient indicated a result similar to that of Gehring and Knight's patients, with the ERN waveform indistinguishable from the equivalent negative ERP waveform during correct trials. In comparison to a control sample, the ERN signals for both corrected errors and overall errors were diminished, suggesting that the detection, and/or correction, of errors could not be directly related to the magnitude of the ERN.

Theoretical and computational models of error processing also offer some predictions as to the relationship between error-related neural processes and post-error behavior. The reinforcement learning model of Holroyd and Coles (2002; Holroyd et al, 2005; Nieuwenhuis et al, 2002, 2004) argues that errors are detected primarily by the basal ganglia, which compare known stimulus–response relationships with the stimuli perceived and responses made. Although the full model is beyond the scope of this discussion, it suggests that the ERN results from the basal ganglia's influence on the mesencephalic dopamine system, which in turn disinhibits motor neurons in the ACC. The ACC is thought to use this information to improve ongoing performance. The model predicted, as has been shown experimentally, that within individuals, larger ERN responses are associated with greater post-error slowing. Holroyd has argued that the size of the ERN, and indirectly the magnitude of post-error slowing, are linked to the frequency of targets. Frequent targets are typically answered correctly, hence the stimulus attains a large positive value; consequently, errors on such frequent targets result in a large value change, from very good to very bad. A Go/NoGo task is not specifically considered by this model, though it appears reasonable to predict that NoGo errors would represent errors on frequent stimuli.
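
As a purely illustrative gloss on this account (not the Holroyd–Coles implementation), the frequent-target argument can be expressed as a prediction error: stimuli that are usually answered correctly acquire a high learned value, so an error on them produces a large negative value change. The numerical values below are arbitrary assumptions.

```python
def prediction_error(learned_value, outcome_value):
    """Outcome minus expectation; a large negative value corresponds to the
    large 'worse than expected' signal the model ties to the ERN."""
    return outcome_value - learned_value

frequent_stimulus_value = 0.9    # assumed value learned from many correct responses
infrequent_stimulus_value = 0.3  # assumed value for a rarely rewarded stimulus
error_outcome = 0.0              # failed trial

print(prediction_error(frequent_stimulus_value, error_outcome))    # -0.9: large ERN-like signal
print(prediction_error(infrequent_stimulus_value, error_outcome))  # -0.3: smaller signal
```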

Unfortunately, neither this model nor others addressing error-related neural and behavioral processes (Botvinick et al, 2001; Braver et al, 2002) have as yet attempted to explain the specific results obtained by either Gehring and Knight (2000) or Swick and Turken (2002). The deficient error correction observed in those studies (and the lower level of performance adaptation in the current study) appears consistent with the modelling of Holroyd and co-workers, as presumably diminished ACC activity is indicative of dysfunction in the cortical ‘interpreter’ of mesencephalic dopamine signals from the basal ganglia. However, the intact post-error slowing suggests that this behavioral change is not as closely linked to the ERN as the Holroyd model proposes. It is possible that post-error slowing is linked indirectly to the ERN, or more directly to other error-related neural activity such as the Pe waveform. Although the Pe waveform has been localized to the rostral ACC region (Herrmann et al, 2004; Mathewson et al, 2005), other studies, including those that attempted to manipulate the awareness (or unawareness) of errors, have identified both prefrontal and parietal regions contributing to the Pe or error awareness (Brazdil et al, 2002; Nieuwenhuis et al, 2001). It appears increasingly likely that whereas certain regions are consistently linked with error-related neural activity, a network of regions contributes to post-error processes such as response slowing and performance adaptation. Neural dysfunction in part(s) of the network may therefore not disable post-error processes, but render them somewhat less effective, or produce subtle dissociations of the type seen in the present study. Alternatively, ACC activity may have a threshold-type relationship to post-error processes (Yeung et al, 2004), whereby a certain level of activity is sufficient to begin the cascade of error detection, post-error slowing, and/or post-error adaptation, but the overall level of activity is not tightly coupled to these processes.

The proposed role of dopamine and the mesencephalic dopamine system in error processing may also have implications for cocaine users. Cocaine is believed to exert its reinforcing effects by blocking the re-uptake of dopamine and increasing its concentration in dopamine receptor-rich regions such as the ventral striatum and ACC (Koob and Bloom, 1988; Kuhar et al, 1991). Repeated exposure to this hyper-dopaminergic state has been suggested to account for decreased dopamine receptor levels in chronic users and, consequently, decreased metabolism in the ACC region, which has a dense concentration of these receptors (Volkow et al, 1999, 1991). Holroyd and co-workers argue that it is the phasic increases (during success) or decreases (during failure) in dopaminergic activity that lead to modulation of neuronal firing in the ACC, detection of errors, and the post-error changes to cognitive behavior. Although speculative, this hypothesis appears consistent with both the results of the present study, which indicate deficits in error detection and (some) post-error adaptive changes of behavior following chronic cocaine use, and previous work demonstrating increased error-related ACC activity in drug-naïve participants following the acute administration of amphetamines (de Bruijn et al, 2004). What is particularly intriguing is that both studies demonstrate a relationship between apparent dopaminergic levels, ACC activity, and self-reported assessments of performance, but not objective measures such as post-error slowing. De Bruijn et al (2004) found that participants administered amphetamines rated their performance as significantly better than during a placebo condition; however, no difference was evident in post-error slowing or other measures of behavior (ie, inhibition and conflict adaptation). Although these results appear to suggest a role for dopamine in the subjective awareness of cognitive performance, further research is clearly required to test the relationships between dopamine, cingulate activity, error-related brain activity, and behavior.

The error awareness and adaptation deficits detected in cocaine users are of direct consequence to overall cognitive functioning, and potentially have wider implications for the addiction process. Deficits in experimental measures of error awareness have been shown to relate to failures in remediating everyday lapses of cognitive performance (Giovannetti et al, 2002; Hart et al, 1998). Similarly, dysfunctional neural responses to errors in other clinical groups have also been shown to relate to their general symptom profiles. For example, patients with schizophrenia have diminished levels of ACC activity in response to errors (Kerns et al, 2005; Mathalon et al, 2002), with the level of dysfunction relating to the severity of disorganization symptoms (ie, formal thought disorder, inappropriate affect, and bizarre behavior) (Bates et al, 2002; Berman et al, 1997; Frith and Done, 1989; Liddle et al, 1992). Alzheimer's disease has also been associated with a progressive deterioration in error awareness (Cahn et al, 1997; Neils-Strunjas et al, 1998) and in the neural response to errors (Mathalon et al, 2003).

Although a relationship between error-related neural and behavioral processes and the maintenance of drug abuse has not been established (Garavan and Stout, 2005), these deficits are indicative of dysfunction in the system responsible for executive or cognitive control (Posner and Rothbart, 1998; Ridderinkhof et al, 2004). Dysfunctional cognitive control has been highlighted as critical to the maintenance of drug addiction (Lyvers, 2000), particularly in relation to impulse control and attentional biases to drug stimuli. For example, higher levels of attentional bias towards drug-related stimuli have been shown to relate to both diminished cognitive control and poorer treatment outcomes for both cocaine (Carpenter et al, 2005) and alcohol abusers (Cox et al, 2002). The present results may help to specify the particular cognitive control deficits of current cocaine users.