Review Article | Open Access

Behavioral and neurobiological mechanisms of punishment: implications for psychiatric disorders

Neuropsychopharmacology, volume 43, pages 1639–1650 (2018)



Punishment involves learning about the relationship between behavior and its adverse consequences. Punishment is fundamental to reinforcement learning, decision-making and choice, and is disrupted in psychiatric disorders such as addiction, depression, and psychopathy. However, little is known about the brain mechanisms of punishment and much of what is known is derived from study of superficially similar, but fundamentally distinct, forms of aversive learning such as fear conditioning and avoidance learning. Here we outline the unique conditions that support punishment, the contents of its learning, and its behavioral consequences. We consider evidence implicating GABA and monoamine neurotransmitter systems, as well as corticostriatal, amygdala, and dopamine circuits in punishment. We show how maladaptive punishment processes are implicated in addictions, impulse control disorders, psychopathy, anxiety, and depression and argue that a better understanding of the cellular, circuit, and cognitive mechanisms of punishment will make important contributions to next generation therapeutic approaches.

Punishment involves learning about the relationship between behavior and its adverse consequences. It is used in different ways in the contemporary literature. In addiction neuroscience, punishment serves as a tool for assessing persistent drug-seeking in the face of adverse consequences and as a qualitative marker of a compulsive behavioral phenotype underlying individual differences in development of compulsive seeking. In the decision neurosciences, punishment serves as a tool for assessing the influences of risk on decision-making and as a tool for identifying the brain mechanisms of value and choice. In the clinical literature, sensitivity to punishment is assessed across a variety of disorders, including addiction, depression, psychopathy as well as eating disorders, enabling insights into the etiology, maintenance, and treatment of these conditions. It is unsurprising, then, that there is considerable diversity in how punishment experiments are conducted and interpreted. In this article, we consider key theoretical and methodological complexities of punishment, the design choices available, and the implications of these choices for interpretation. We then review some of the brain bases of punishment and psychiatric disorders with perturbations in punishment processing.

Different forms of aversive learning

Learning about and responding to aversive events is fundamental to survival. The learning and behavior that occurs in response to aversive events depend on the relationships between the aversive event, environmental stimuli and the animal’s behavior (Fig. 1a). In general, we can be passive recipients of aversive events while at other times our actions determine the events we experience. This latter category of response-dependent aversive events can be studied in the laboratory via punishment.

Fig. 1

Determinants of aversive associative learning. a Even in carefully designed studies (e.g., light → shock Pavlovian fear, or press → shock punishment protocols [solid lines]), aversive events are inevitably embedded within multi-layered contingencies. A shock could be attributed (dashed lines) to behavioral antecedents (e.g., lever-pressing), environmental antecedents (e.g., light), or both. The relative validity of these antecedents determines whether a Pavlovian light → shock, instrumental press → shock, or instrumental discriminative (blue lines; light = [press → shock]) association is formed, in turn determining what behavior is being examined. b Contingency space describing relationships between an aversive outcome (O) and behaviors (R). Upper left corner of contingency space: O is likely to occur only if the response is not made (response reinforced by the contingency). Bottom left corner of contingency space: O is likely only if the response is made (response punished by the contingency). If O is independent of responding (dashed line), only Pavlovian learning is likely to occur. DS = discriminative stimulus; O = aversive outcome; R = specified response; p(O|R) = probability of the aversive outcome given the response was made; p(O|no R) = probability of the aversive outcome given the response was not made
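The contingency logic in Fig. 1b can be sketched numerically: the sign of p(O|R) − p(O|no R) determines whether a contingency punishes or reinforces the specified response, and a near-zero difference leaves only Pavlovian learning. A minimal Python illustration (the function name and the `eps` threshold are our own assumptions, not from the figure):

```python
def classify_contingency(p_o_given_r, p_o_given_no_r, eps=0.05):
    """Classify a response-outcome relationship by its position in a
    contingency space like Fig. 1b.  The threshold `eps` is illustrative.

    p_o_given_r    : p(O|R), probability of the aversive outcome given the response
    p_o_given_no_r : p(O|no R), probability of the outcome when withholding it
    """
    delta_p = p_o_given_r - p_o_given_no_r
    if abs(delta_p) < eps:
        # Outcome independent of responding: only Pavlovian (S-O) learning likely
        return "Pavlovian"
    if delta_p > 0:
        # Responding makes the aversive outcome more likely: the contingency
        # punishes the response
        return "punishment"
    # Withholding the response makes the outcome more likely: the contingency
    # reinforces the response (active avoidance / negative reinforcement)
    return "avoidance (negative reinforcement)"

print(classify_contingency(1.0, 0.0))  # → punishment
print(classify_contingency(0.0, 1.0))  # → avoidance (negative reinforcement)
print(classify_contingency(0.5, 0.5))  # → Pavlovian
```

The three calls correspond to the bottom-left corner, upper-left corner, and diagonal of the contingency space, respectively.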

Punishment is instrumental aversive learning. It refers to the suppressive effects of undesirable outcomes on the behaviors that cause them (Table 1). This effect of response-dependent aversive events is symmetrical to the response-promoting effects of reinforcement (instrumental reward learning). Like reinforcement, the instrumental contingency between a response and undesirable outcome causes formation of a response–outcome (R–O) association, i.e., a response–punisher association, that disincentivizes punished responding.

Table 1 Characteristics of different aversive learning paradigms

Punishment is closely related to, but distinct from, other forms of aversive instrumental learning—active avoidance and escape learning (Fig. 1b; Table 1). In these, an aversive event is prevented or halted by the performance of an action. They are examples of negative reinforcement, as the specified behavior is increased (reinforced) by the contingency between the response and consequence (“negative”, as the consequence of making the response is omission or removal of the stimulus). Active avoidance and punishment are frequently confused, but the key learning processes underpinning them are distinct [1,2,3]. A helpful way to distinguish them is to note that active avoidance involves generation of specific behaviors that avoid or terminate the aversive event (e.g., lever-pressing), while behaviors that allow the aversive event to occur are unspecified and diffuse (grooming, exploring, inactivity, etc.). This results in an R–[no O] association (and possibly a stimulus–response association) that supports responding [4]. Conversely, in punishment the behaviors that avoid the aversive event are unspecified and diffuse, while the behavior (e.g., lever-press) that causes the aversive event is specific.

Punishment is also frequently confused with Pavlovian fear conditioning (Table 1). Indeed, many experiments purporting to study punishment actually involve response-independent delivery of an aversive event regardless of the actions performed. This response-independent contingency is not punishment. Rather, it is the stimulus–outcome (S–O) contingency of Pavlovian fear conditioning. In Pavlovian fear conditioning, a conditioned stimulus (CS) is paired with an aversive stimulus (e.g., an aversive unconditioned stimulus [US]), imbuing the CS with aversive motivational properties. This CS elicits conditioned responses, which can include defensive responses such as freezing but also non-specific suppression of reward behavior (termed conditioned suppression). So, punishment and fear can achieve the same behavioral outcome—suppression of ongoing behavior—via different mechanisms. However, punishment and fear have salient distinguishing characteristics. Punishment suppression is specific to the punished response, whereas Pavlovian fear is not [5, 6]. Punishment causes greater response suppression than fear [7,8,9]. However, punishment can be more transient than fear; punished behaviors can reappear spontaneously or due to changes in context, and often return rapidly once the punishment contingency is suspended.

What kinds of events serve as punishers?

A variety of noxious or aversive events serve as punishers when their delivery is made contingent on behavior. The most frequently used punishers are brief footshock, airpuff, or contamination of a palatable solution with quinine (particularly in rodent and non-human primate studies). In humans, the range of punishers is broader: it can include the same aversive events (airpuff, loud noises, electric shock) but also social exclusion, negative feedback on task performance, and monetary loss. When delivered in a manner contingent on a behavior, these are examples of primary punishers. However, Pavlovian fear CSs can also serve as effective punishers. For example, presentations of a fear CS contingent upon lever-pressing will instrumentally suppress lever-pressing (a procedure known as secondary or conditioned punishment). In these examples, behavior is modified because it causes an adverse event to occur, so these are examples of positive punishment. Still other classes of events can serve as punishers. A reduction or removal of reward, or a stimulus that signals such reductions (e.g., monetary loss, absence of a palatable food), can serve as a punisher when made contingent on a behavior. These kinds of events are frequently used in human neuroimaging and primate single unit recording experiments. These are examples of negative punishment, whereby behavior causes a pleasant stimulus to be removed. It is important to note that reward omission is not always negative punishment: loss of responding during reward omission can be due to extinction as opposed to response suppression due to punishment. In negative punishment, the rewarding outcome is omitted only if a response is made; in extinction, the response has no bearing on delivery of reward.

One coherent way to treat these diverse events is to suppose that they share a common affective quality: recruitment of an aversive motivational system. This aversive motivational system is able to suppress ongoing reward behavior because it inhibits appetitive motivation. This motivational opponency has proved popular because it provides a short-hand explanation for the common behavioral consequences of otherwise diverse events. There is evidence for common psychological and neurobiological coding of aversive motivational quality [10,11,12,13]. However, this equivalence remains poorly understood and there are differences between positive and negative punishment as well as between primary and secondary punishers [14, 15].

An important consideration is the intensity of the aversive event [16]. Low intensity aversive events can be detected but do not cause suppression; because they do not suppress responding, they cannot be termed punishers. Such low intensity events can still control behavior, but they do so via a different mechanism (see Section 3). Mild to moderate punishers partially suppress behavior, and this suppression sometimes lessens over time (likely due to habituation), whereas severe punishers result in complete and permanent suppression of responding. A common design choice is to increment punisher intensity across sessions. This allows greater control over rates of suppression, but it also permits greater habituation to the punisher, reducing overall suppression [17].

Punisher intensity is also important because it relates to individual differences in punishment. Marchant et al. [18] reported pronounced individual differences in punishment sensitivity across a large population of rats when a constant intensity punisher was used. This work revealed a bimodal distribution of punishment-sensitive and punishment-insensitive rats. Interestingly, when punisher intensity was increased across trials, individual differences were less pronounced. Differences in punisher sensitivity are also important when considering sex differences in punishment. There has been little published work systematically examining sex differences in punishment. However, female rats are more sensitive to footshock than males, and initially show greater response suppression following footshock (see ref. [19]). Interestingly, these sex differences appear to reverse when response suppression is assessed after an interval, such that female rats exhibit poorer retention of response suppression. While these differences have important implications for our understanding of punishment processing, they have not been adequately studied using explicit punishment designs. Moreover, they may not extend to other species (e.g., see ref. [20]). The causes of these differences in sensitivity to punishment, between and within individuals, warrant further investigation.

Learning processes involved in punishment experiments

Typically, the experimenter is interested in the suppressive effects of punishment on instrumental responding for reward—that is, the formation and use of an R (lever-press)–O (shock) association. Such associations are formed during punishment [21, 22]. However, other associations are also formed and influence behavior in different ways. Understanding the origin and effects of these other associations is essential.

Consider the simplest punishment experiment. Here a mouse or rat is initially rewarded for lever-pressing (e.g., with a food pellet or an intravenous infusion of cocaine). Once responding is established, lever-presses now also cause delivery of footshock. The animal will reduce or even cease lever-pressing. This reduction in lever-pressing is the key behavioral dependent variable. However, it is not sufficient evidence for punishment or for use of R–O aversive knowledge; alone, it tells us little about the underlying cause of the behavior change.

In any punishment experiment, subjects are influenced by both instrumental and Pavlovian contingencies, even if these are not intended by the experimenter. This is because, in practice, aversive events have both environmental and behavioral antecedents. Moreover, these environmental (Pavlovian) and behavioral (instrumental) antecedents both suppress ongoing behaviors, including lever-pressing for reward. Sometimes these antecedents are well-specified by the experimenter and are known, but in other cases these antecedents are embedded in other features of the experiment and unknown. Whether punishment or fear is controlling behavior in any given punishment experiment is, at least initially, always ambiguous. This is problematic when attempting to attribute effects to one of these processes and not the other and poses significant problems when attempting to understand the effects of brain manipulations. Is lever-pressing reduced because of an instrumental R (lever-press)—O (shock) association? Or does the lever and its spatial location act as a fear CS due to the contingency between the lever and shock, causing Pavlovian fear (S–O) and conditioned suppression?

Whether, when, and how instrumental or Pavlovian associations control behavior in punishment designs has been the subject of significant empirical and theoretical attention [1, 23,24,25]. Both kinds of association contribute to and cause behavioral suppression in most punishment experiments, and various design choices favor one over the other. The contribution of Pavlovian vs. instrumental aversive learning depends on the relative validity of each association. Whether environmental stimuli or the response is the better predictor of shock determines the strength of the associations formed; interposing a stimulus (e.g., a tone) between a response and response-contingent shock can retard the acquisition of instrumental suppression because the validity of the S–O (tone–shock) contingency interferes with (overshadows) the formation of the R–O (press–shock) contingency [26, 27].

Pavlovian fear tends to be greatest early in punishment training, and, when using an appropriate experimental design, is reduced across extended training. Fear emerges early due to S–O contingencies between various cues in the apparatus (the chamber, the location of the lever, the sound of lever insertion) and the delivery of the punisher. This fear is weakened across punishment training due to the inevitable extinction of these Pavlovian contingencies if response suppression results in no aversive outcome. Regardless, behavioral suppression early in punishment training often reflects a greater contribution of Pavlovian fear associations than later in training [21, 28].

A similar problem occurs in stimulus control over punishment. Both fear and punishment can be brought under stimulus control. In punishment, such stimulus control is achieved via a discriminative stimulus (DS; traditionally SD) that can be used to signal that a punishment contingency is in effect. In fear conditioning a discrete CS is used to signal an impending aversive US. CSs and DSs are superficially similar but not equivalent [29, 30]; punishment DSs and fear CSs differ in terms of what is learned about them and how they control behavior.

One demonstration of these differences is through the use of the blocking procedure [31]. In this procedure, subjects are first trained on one association (e.g., CSA—shock). They then receive compound training of CSA+CSB−shock. When tested for fear to CSB, the subjects show little evidence of having learned fear. CSA is said to have blocked fear learning to CSB. Such blocking is quite general and robust. It shows that what is learned about one CS signaling a shock and a second CS signaling shock is the same because learning about one prevents learning to the other. Blocking is also observed between DSs. For example, what is learned about one DS that signals a period of instrumental punishment is the same as what is learned about a second DS signaling a period of instrumental punishment because they also block learning to each other. However, CSs and DSs do not block learning to each other [29]. The contents of these two different associations are determined by whether the aversive outcome is response-dependent (instrumental) or independent (e.g., Pavlovian).
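Blocking of this kind is commonly explained by error-correction models such as Rescorla–Wagner, in which all stimuli present on a trial share a common prediction error. A minimal simulation (the learning rate, asymptote, and trial counts are arbitrary illustrative values, not parameters from any cited study) shows why pretraining CSA leaves almost no error for CSB to learn from:

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Simulate Rescorla-Wagner learning over reinforced trials.

    trials : list of sets; each set names the stimuli present on that trial
    alpha  : learning rate (illustrative value)
    lam    : asymptote of learning supported by the US (illustrative value)
    Returns a dict of associative strengths V per stimulus.
    """
    V = {}
    for stimuli in trials:
        v_total = sum(V.get(s, 0.0) for s in stimuli)  # summed prediction
        error = lam - v_total                          # common prediction error
        for s in stimuli:
            V[s] = V.get(s, 0.0) + alpha * error       # shared error update
    return V

# Phase 1: CSA -> shock alone; Phase 2: CSA+CSB -> shock in compound
V = rescorla_wagner([{"A"}] * 20 + [{"A", "B"}] * 20)
# CSA already predicts the shock, so the prediction error during compound
# training is near zero and CSB acquires almost no associative strength.
print(round(V["A"], 2), round(V["B"], 2))  # → 1.0 0.0
```

The same logic applies whether the pretrained and added cues are both CSs or both DSs; the empirical point in the text is that a CS and a DS do not block each other, indicating that the two associations have different contents.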

There are still other associations at work in punishment designs. One relates to direct interactions between the punisher and reinforcer. There are two issues here. The first is motivational or affective interactions between the punisher and reward. These can arise when delivery of the punisher occurs in close temporal proximity to delivery of the reward that sustains responding. This can occur if the same schedule is used for reward and punishment. Under these conditions, there is an unintended contingent relationship between the punisher and the reward. For example, in a typical punishment design using rodents, a shock punisher is delivered immediately via the grid floor, whereas a contingently delivered reward is consumed only afterwards, because the animal must enter the magazine to consume it, or because the effects of an intravenous drug infusion persist beyond the shock. This unintended signaling relationship, shock → reward, enables a form of learning—counterconditioning—that reduces the aversive value of the shock [32, 33]. The degree of counterconditioning depends on the experimental parameters, in particular the exact temporal relationships. When the order of events is reversed, so that delivery of reward signals punishment (reward → shock), the effects of punishment are different [23, 34]. The second issue concerns the signaling properties of the punisher itself. Shocks can serve as a DS, signaling whether or not a response will be rewarded [35]. This means that a punisher can suppress instrumental responding for reward not because the punisher is aversive or noxious and the animal has learned that responding causes shock (i.e., R–O_aversive), but rather because shock signals that behavior will not be rewarded (i.e., S_shock: [R–no O_reward]). Moreover, the reverse is also possible: the presence of the punisher can signal that behavior will be rewarded, thereby increasing instrumental responding for the reward (i.e., S_shock: [R–O_reward]) [36].

Finally, punishment can be context-specific. When rats are trained to lever-press for reward in one context (context A) then punished for that responding in a second context (context B), responding returns when placed in the original training context (ABA renewal) or in a third context (ABC renewal). Renewal of punished responding has been observed for responding based on food [37] and alcohol [38]. Moreover, opioid self-administration after punishment can be reinstated by priming injections of opioids or benzodiazepines [39, 40]. These effects are reminiscent of the effects of extinction, leading to suggestions that punishment and extinction involve similar contextual learning processes. However, there is only partial overlap between the brain mechanisms of contextual control of punishment and extinction [41, 42] and there are other important behavioral [43] and neurobiological [44] distinctions between them.

Methodological considerations when studying punishment

Given the complexity of associations formed in punishment experiments, and the fact that many of these associations are not often of primary interest, the literature offers some methodological recommendations. First, it is worth observing and measuring the animal’s behavior in the task. The presence of species-typical defense behavior, such as freezing for rodents, provides one measure of Pavlovian fear that is helpful for interpreting results. Response-punisher associations result in little autonomic disturbance (freezing, piloerection), and are more typically associated with abortive responses [45].

Second, including a different, unpunished behavior in the same task is very useful. For example, in rodent studies this could involve rewarding two responses (e.g., on two different levers) and punishing responses on only one. Inclusion of an unpunished behavior serves two purposes. First, it dissociates fear from punishment: suppression of responding on the unpunished lever correlates strongly with expression of defensive behaviors such as freezing and reflects Pavlovian fear [5, 6, 28], whereas specific suppression of the punished response is indicative of punishment learning. Different responses on the same manipulandum (lifting vs. pressing a lever; pushing left vs. right on a bar) can be punished vs. unpunished [21]; this approach controls for any fear of the spatial location of the lever. Effects selective to one response reflect use of contingent R–O aversive knowledge, whereas effects common to both responses reflect Pavlovian fear or S–O aversive knowledge. Second, the rewarded alternative response supports more stable suppression of the punished response.

Third, a strong instrumental contingency between a response and punisher supports stronger R–O aversive learning [21]. Ratio schedules produce strong R–O associations [46], so if relatively quick isolation of punishment is desired, ratio schedules are preferable. Even so, ratio schedules still support Pavlovian fear learning at first, thereby affecting interpretation of data from these early sessions. This fear extinguishes in a reasonably continuous fashion while instrumental suppression increases across sessions [21, 28].

Fourth, careful consideration of punishment and reinforcement schedules can avoid many of the interpretative issues associated with direct interactions between the punisher and reward, and with use of outcomes as a DS (see above). For example, these direct interactions are more likely to occur when the same schedule of reinforcement is used to deliver the punisher and reward. One way of avoiding such interactions is to deliver rewards on a variable interval schedule and the punisher on a fixed ratio schedule [21]. The variable interval schedule for reward encourages relatively stable rates of lever-pressing against which to measure punishment suppression, and the fixed ratio schedule for punishment encourages strong R–O encoding of punishment. However, many other schedules are possible, such as separate variable interval schedules (VI60 vs. VI90 for reward and punishment; [37]). The important point is to reduce inadvertent signaling relationships between the reward and the punisher.
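The VI-reward/FR-punishment arrangement described above can be sketched as follows, assuming, purely for illustration, regularly spaced presses and arbitrary schedule parameters (a VI60 reward schedule and an FR10 punishment schedule; none of these values come from a specific study):

```python
import random

def make_session(n_presses, vi_mean=60, fr=10, press_interval=5, seed=0):
    """Sketch of one punishment session: reward on a variable-interval (VI)
    schedule (time-based), shock on a fixed-ratio (FR) schedule (count-based).
    Presses are assumed to occur every `press_interval` seconds; all
    parameter values are illustrative only.
    Returns a list of (press_number, [outcomes]) tuples.
    """
    rng = random.Random(seed)
    # VI: the first press after a randomly drawn interval earns the reward
    next_reward_time = rng.expovariate(1.0 / vi_mean)
    events, t = [], 0.0
    for press in range(1, n_presses + 1):
        t += press_interval
        outcome = []
        if t >= next_reward_time:          # VI reward: depends on elapsed time
            outcome.append("reward")
            next_reward_time = t + rng.expovariate(1.0 / vi_mean)
        if press % fr == 0:                # FR punishment: depends on press count
            outcome.append("shock")
        events.append((press, outcome))
    return events

session = make_session(30)
shocks = [p for p, o in session if "shock" in o]
print(shocks)  # every 10th press is shocked: [10, 20, 30]
```

Because reward depends on elapsed time while shock depends strictly on the press count, the two outcomes fall on different presses on most occasions, reducing the inadvertent shock → reward signaling relationship discussed above.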

Fifth, direct comparison of response-contingent vs. non-contingent aversive events in the same experiment can be helpful. This control could be applied in several ways, but one method is via the use of yoking. In between-subjects yoking, stimuli are response-dependent for one subject while yoked (concurrently presented) to another in a response-independent manner. This allows direct comparison of USs/punishers, which are matched in presentation (both number and distribution) but are embedded within differing contingencies. Yoking has inferential limitations [47, 48] but its utility is enhanced in combination with the other suggestions discussed here.
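The between-subjects yoking arrangement can be sketched minimally as below; the session encoding (a list of timestamped events) is our own illustrative assumption:

```python
def yoke(master_session):
    """Between-subjects yoking sketch.  `master_session` is a list of
    (time_s, event) tuples recorded from the master subject, whose shocks
    were response-contingent.  The yoked partner receives shocks at these
    identical times, but response-independently, so both animals get
    matched aversive events (same number, same distribution) embedded in
    different contingencies.  Encoding is illustrative only."""
    return [t for t, event in master_session if event == "shock"]

# Hypothetical master record: presses at 12.0, 30.5, 55.2 s; two of the
# presses produced a response-contingent shock shortly after.
master = [(12.0, "press"), (12.1, "shock"),
          (30.5, "press"),
          (55.2, "press"), (55.3, "shock")]
print(yoke(master))  # yoked subject is shocked at 12.1 s and 55.3 s
```

Any difference in suppression between the master and its yoked partner can then be attributed to the response-dependency of the shocks rather than to shock exposure per se, subject to the inferential limitations of yoking noted above.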

Brain mechanisms of punishment

Early anxiolytics, particularly barbiturates and benzodiazepines, had such profound “anti-punishment” effects, especially within conflict protocols, that anti-punishment effects were used as a behavioral screen for anxiolytics. The anti-punishment effects of barbiturates and benzodiazepines are well-documented in multiple species at doses that do not affect unpunished behavior (see [49]). They are also observed in conditioned punishment; midazolam abolishes conditioned punishment without affecting unpunished responding or the arousing effects of a footshock [50]. Interestingly, benzodiazepines enhance the acquisition of active avoidance [51, 52], suggesting that benzodiazepine effects on punishment are due to direct actions on instrumental suppression rather than aversive motivation. GABA and benzodiazepine antagonists block the anti-punishment effects of these drugs [53]. Ethanol also has specific anti-punishment effects [54,55,56] that appear similarly mediated by its action on GABA and the benzodiazepine-binding site [57, 58].

Serotonin (5-HT) is strongly implicated in punishment, with proposed roles in behavioral inhibition [59, 60] and aversive processing [61]. It has been suggested that 5-HT inhibits the reward-coding dopamine system, with 5-HT and dopamine conceived of as oppositional systems promoting aversive and appetitive functions, respectively [62, 63]. Lesions of serotonin-containing terminals, systemic injections of 5-HT antagonists, or 5-HT synthesis inhibitors each have anti-punishment effects (see [64]). Acute tryptophan depletion (ATD), a dietary manipulation that putatively impairs 5-HT transmission, also appears to reduce aversively-motivated behavioral suppression [65].

Drugs that partially agonize 5-HT1A receptors and/or antagonize 5-HT2 receptors have strong anti-punishment effects in pigeons [66,67,68], although these effects are more variable in mammals [69,70,71]. 5-HT2C agonists and SSRIs reverse punishment-resistant cocaine-seeking in rats, whereas serotonin depletion and 5-HT2C antagonism increase punishment-resistant cocaine-seeking [72].

Norepinephrine and norepinephrine agonists also have strong anti-punishment effects [73, 74], and anti-punishment anxiolytics tend to increase norepinephrine activity and release [75, 76]. Concurrently increasing dopamine and norepinephrine transmission, which hypothetically boosts reinforcement and inhibits punishment signals respectively, leads to an increase in disadvantageous choices involving timeout punishments [77]. Endogenous dopamine and norepinephrine may promote reinforcement-sensitivity and punishment-insensitivity, driving the return of responding following omission of an anticipated aversive outcome.

Forebrain circuits implicated in punishment

fMRI studies using a variety of punishment approaches (monetary loss, loss feedback, etc.) implicate human amygdala [78] and its interactions with hippocampus [79] and ventral striatum [80] in punishment (Fig. 2). Anxiolytic, 5-HT, and norepinephrine effects on punishment are linked to amygdala [81,82,83]. This role is dissociable from Pavlovian fear. In rodents, basolateral amygdala (BLA) lesions and inactivations (particularly caudal portions) attenuate punishment suppression independently from any contributions of Pavlovian fear [28, 50, 84]. The role of central nucleus of the amygdala (CeA) is less clear. Some studies show that CeA mediates Pavlovian but not punishment suppression [85], whereas others suggest a role in punishment of cocaine-seeking [44, 86].

Fig. 2

Forebrain regions implicated in punishment. Regions are shaded according to the typical effects of inactivations or lesions of that region on response suppression within punishment protocols. Arrows show regional connectivity implicated in punishment. AcbSh nucleus accumbens shell (ventral striatum), AI anterior insular cortex, BLA basolateral amygdala, CeA central amygdala, Cg cingulate cortex, Hipp hippocampus, IL infralimbic cortex, OFC orbitofrontal cortex, PrL prelimbic cortex, vHipp ventral hippocampus

The prefrontal cortex (PFC) has long been implicated in aversion, decision-making and behavioral control [87,88,89], and has been argued to mediate punishment behavior [90, 91]. We consider the major subdivisions of the rodent PFC, although homologous structures within the human PFC have been similarly implicated in punishment. Within the medial PFC (mPFC), cingulate (Cg) activity is correlated with the magnitude of unpleasantness experienced in response to noxious stimuli [92, 93]. This has been linked to aversion learning [94,95,96]. Although electrical stimulation of the Cg inhibits behavior [97], lesions of Cg in cats only impair active avoidance, leaving passive avoidance intact [98], suggesting Cg mediates negative reinforcement but not punishment.

The prelimbic cortex (PrL) has been implicated in aversive Pavlovian associations [99], learning appetitive R–O associations, and top-down control of behavior [88, 100]. PrL hypoactivity is associated with impaired sensitivity to punishment in cocaine-seeking [101]. Infralimbic (IL) activity is correlated with, sufficient for, and necessary for the extinction of instrumental responding [102,103,104,105]. This role may extend to suppression of responding due to punishment, fitting with a proposed role for IL in behavioral inhibition [106, 107]. Activity of shock-responsive mPFC neurons is correlated with avoidance of a punished response, while stimulation of these shock-responsive neurons results in response suppression [108]. However, PrL or IL inactivations or lesions do not affect primary punishment [109, 110]. Thus, although mPFC activity is correlated with and sufficient for general behavioral suppression, evidence that mPFC is necessary for punishment remains elusive. This remains an important area for further investigation. However, given the key role of medial PFC in Pavlovian fear and the fact that punishment experiments always entail the confounding influence of Pavlovian fear (see above), it will be critical to ensure that appropriate controls for Pavlovian learning are in place (e.g., inclusion of a second unpunished response, direct assessment of Pavlovian fear, stimulus yoking) to enable correct attribution of a manipulation's effects to punishment as opposed to fear.

In both rodents and primates, the orbitofrontal cortex (OFC) has been thought to encode value [111, 112], as well as mediate response inhibition and choice [113,114,115,116]. The primate OFC contains reward and aversion-coding neurons activated by appetitive and aversive stimuli, respectively [117]. This extends to instrumental tasks and may be topographically partitioned with human medial OFC linked to reinforcement and lateral OFC to punishment [115, 118]. Humans with bilateral OFC lesions are impaired in avoiding disadvantageous options compared to healthy controls in the Iowa Gambling Task [119, 120]. However, in non-human animal studies, OFC inactivation has inconsistent and conflicting effects on punishment, impairing [110], enhancing [121, 122], or having no effect [109] on punishment. These disparities await satisfactory resolution.

The anterior insular cortex (AI) has been strongly implicated in aversion. It is activated in response to, and in anticipation of, aversive stimuli [123,124,125,126,127]. This has been linked to pain modulation [128], as well as to cognitive and behavioral processes in response to aversive stimuli [91, 129]. AI activity has also been implicated in inhibitory control of behavior [130, 131]. However, AI lesions or inactivations fail to affect punishment suppression [109, 110], although they do affect punishment-influenced, subjectively-motivated choice [110, 132].

The ventral striatum, particularly the accumbens shell (AcbSh), is also implicated in punishment. Like BLA, AcbSh inactivation increases punished responding [84]. Both AcbSh and BLA may determine overall levels of responding following punishment, decreasing punished responses while increasing safe responses. Kim et al. [108] reported that activity of shock-responsive Acb-projecting mPFC neurons predicted suppression of punished reward-seeking. Stimulation of shock-activated mPFC→Acb neurons also suppressed reward-seeking, suggesting that the mPFC→Acb pathway may mediate punishment suppression, but there is currently no evidence that this projection is necessary for punishment.

Finally, Gray [133, 134] hypothesized that mutually inhibitory behavioral systems compete to guide behavior: the behavioral inhibition (BIS) and behavioral activation (BAS) systems. Punishment activates the BIS, in turn suppressing BAS-driven responding for reward. BIS function was attributed to the septohippocampal system and its monoaminergic afferents from the brainstem [51, 134, 135]. Electrical stimulation of the anterior septum produces somatomotor inhibition [136], while septohippocampal lesions impair punishment suppression and passive avoidance, independently of spatial learning [137]. Interestingly, septohippocampal lesions often enhanced active avoidance and had less clear effects on Pavlovian fear. Thus, septohippocampal manipulations specifically affect behavioral suppression during punishment, and not aversion generally. These effects mirror those of systemically administered anxiolytics [51, 134, 138], leading to the view that the anti-punishment effects of anxiolytics are driven by disruptions of BIS function [135, 138]. In rats, this role of hippocampus in behavioral suppression appears to be mediated by the ventral, not dorsal, hippocampus [139].

Scales to measure individual differences in human BIS (trait sensitivity to punishment) and BAS (trait sensitivity to reward) have been developed [140]. Higher BIS scores are associated with greater suppression of behaviors in response to negative outcomes [141]. BIS scores are also associated with increased amygdala and hippocampal gray matter volumes [142]. fMRI studies have detected a correlation between BIS scores and punishment-induced amygdala-hippocampus co-activation [79]. Humans with bilateral hippocampus lesions readily switch away from a deck following monetary loss in the Iowa Gambling Task, showing normal physiological responses to punishment (unlike those with amygdala damage), but do not show a preference for advantageous decks (see [143]). This suggests intact aversion processing but impaired use of experienced contingencies to determine choice.

Midbrain dopamine circuits

Activity of midbrain dopamine (DA) neurons is necessary and sufficient for reinforcement of behavior. Symmetrically, negative outcomes cause pauses in VTA DA firing [11, 12, 144, 145]. It follows that punishment could be due to inhibition of VTA DA neurons. Certainly, inhibiting VTA DA causes conditioned place aversion [146,147,148] and D1 or D2 dopamine receptor manipulations in the ventral striatum, a major target of reward-coding VTA DA neurons, can result in place aversion [148, 149]. However, the role of VTA DA in punishment remains unclear (Fig. 3).
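This symmetry hypothesis — reinforcement signaled by DA bursts (positive prediction errors), punishment potentially signaled by DA pauses (negative prediction errors) — can be illustrated with a minimal prediction-error update. This is a deliberately simplified sketch for intuition only, not a model taken from the cited studies; the function name and parameters are our own.

```python
# Minimal illustration of the prediction-error symmetry hypothesis:
# a single action value V is updated by delta = r - V, where a
# positive delta stands in for a DA burst and a negative delta
# for a DA pause.

def update(value, reward, alpha=0.2):
    """One prediction-error step; delta > 0 ~ DA burst, delta < 0 ~ DA pause."""
    delta = reward - value
    return value + alpha * delta, delta

# Reinforcement: repeated reward (+1) drives the action value up,
# favoring continued responding.
v = 0.0
for _ in range(20):
    v, _ = update(v, +1.0)
assert v > 0.9

# Punishment: the same rule with an aversive outcome (-1) drives the
# value down, favoring suppression of the punished response.
v = 0.0
for _ in range(20):
    v, _ = update(v, -1.0)
assert v < -0.9
```

The sketch makes the symmetry explicit: nothing in the update rule distinguishes reward from punishment except the sign of the outcome, which is precisely the assumption the subsequent evidence complicates.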

Fig. 3

Midbrain dopamine circuits implicated in punishment. Mesolimbic and nigrostriatal projections are generally reward-coding; punishment could be encoded via pauses in DA firing and decreased DA release at projection targets. Mesocortical neurons burst fire to aversive stimuli, ostensibly causing increased DA release within the PFC. Aversion-coding LHb neurons can exert relevant control over each of these pathways, although its role in punishment has been disputed. The role of each of these circuits in punishment remains unclear. Acb nucleus accumbens (ventral striatum), D1r D1 receptor, D2r D2 receptor, DA dopamine, dStr dorsal striatum, fr fasciculus retroflexus, LHb lateral habenula, PFC prefrontal cortex, RMTg rostromedial tegmental nucleus

VTA DA neurons are inhibited by the lateral habenula (LHb) via the rostromedial tegmental nucleus (RMTg) [150, 151]. LHb neurons burst fire to unexpected aversive events and reward omissions [11, 144], and were speculated to mediate punishment learning and behavior [152, 153]. fMRI studies observe habenula BOLD responses to aversive shocks, negative feedback, and omission of anticipated positive feedback [154, 155]. Aversion-coding within LHb may stem from aversion-coding inputs from the globus pallidus (GP; entopeduncular nucleus [EP] in rodents) [156,157,158] and lateral hypothalamus [159], alongside motivationally-pertinent VTA and 5-HT inputs [160,161,162,163,164]. Optogenetic stimulation of the LHb → RMTg pathway negatively reinforces nose-poking and suppresses acquisition of a reinforced response [152]. However, these studies did not isolate punishment, and a variety of approaches to preventing LHb activity do not affect punishment suppression in rats [165]. That said, the LHb appears to play important roles in active avoidance [159] and risky decision-making [166].

Aversive outcomes cause pauses in nigrostriatal DA neuron firing [144, 167]. These neurons receive similar inhibitory inputs from the RMTg [168]. Stimulation of the nigrostriatal pathway is rewarding, while inhibition of this pathway can be aversive in real-time place-aversion assays [169]. Increases and decreases in SNc DA firing, and corresponding increases and decreases in striatal DA release, may differentially recruit direct and indirect pathways of the basal ganglia [170,171,172]. Optogenetic stimulation of D1R-expressing medium spiny neurons (MSNs) is reinforcing, while such stimulation of D2R-expressing MSNs is punishing [173]. High concentrations of DA preferentially activate D1R-expressing direct pathway neurons, while low levels of DA preferentially activate D2R-expressing indirect pathway neurons. Thus, Kravitz and Kreitzer [174] suggest that LTP of the direct pathway and LTD of the indirect pathway mediates reinforcement, whereas LTP of the indirect pathway and LTD of the direct pathway mediates punishment, in response to bursts and pauses in SNc firing, respectively.
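The Kravitz and Kreitzer proposal can be caricatured as an opponent-pathway learning rule: DA bursts potentiate the direct pathway and depress the indirect pathway, DA pauses do the reverse, and net drive to respond tracks their difference. The following is our illustrative sketch of that idea, not the authors' implementation; the function and parameter names are hypothetical.

```python
# Caricature of the opponent direct/indirect pathway plasticity rule:
# DA bursts (reinforcement) -> LTP of direct, LTD of indirect;
# DA pauses (punishment)    -> LTD of direct, LTP of indirect.
# Net drive to respond is direct - indirect.

def plasticity_step(direct, indirect, da_signal, lr=0.1):
    """da_signal = +1 for a DA burst, -1 for a DA pause."""
    direct = max(0.0, direct + lr * da_signal)      # D1R-expressing direct pathway
    indirect = max(0.0, indirect - lr * da_signal)  # D2R-expressing indirect pathway
    return direct, indirect

d, i = 0.5, 0.5
for _ in range(5):             # five reinforced trials (DA bursts)
    d, i = plasticity_step(d, i, +1)
assert d - i > 0               # net drive favors responding

for _ in range(10):            # ten punished trials (DA pauses)
    d, i = plasticity_step(d, i, -1)
assert d - i < 0               # net drive favors suppression
```

The point of the caricature is that response suppression emerges from strengthening the indirect pathway, not merely from weakening the reinforced association — the feature that distinguishes this account from a simple reversal of reinforcement.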

Although this model describes a striatal plasticity mechanism that may underpin punishment learning, it is worth noting that the role of DA in punishment is complex. For example, punishment and reinforcement share common dopaminergic features. Systemic administration of the indirect DA agonist amphetamine increases, whereas the DA-receptor antagonist α-flupenthixol decreases, the punishing effect of an aversive CS [85]. This finding is important because it shows that appetitively and aversively motivated conditioned stimuli share common dopaminergic substrates for their influence on instrumental performance. Moreover, some midbrain DA neurons are phasically excited by aversive stimuli [175, 176]. This has been linked to aversion-coding within mesocortical neurons [177,178,179,180]. Stimulation of the mesocortical pathway, or of its direct excitatory inputs from the LHb, is aversive [181], although the role of this circuit in punishment is undetermined.

Punishment and neuropsychiatric disorders

The study of punishment has great potential to provide insights into decision-making and motivational deficits in neuropsychiatric disorders. To date, most of this research has focused on characterizing the nature of any changes in punishment sensitivity and describing some of the underlying neural correlates. The roles of these alterations in punishment processing, their status as predictors of disorder severity, duration, treatment, and relapse are all poorly understood. Nonetheless, the potential remains to address these deficits and restore normal decision-making to help address the burdens of these disorders.

Risky drug use and an insensitivity to the adverse consequences of drug taking is a diagnostic criterion for substance use disorders [182]. Behavioral addictions, such as gambling disorder, and impulse-control disorders [182] are also characterized by persistent behaviors despite adverse consequences, along with an apparent inability to appropriately suppress that behavior (i.e., impulsivity). Similar deficits have been noted for Obsessive Compulsive Disorder (OCD; [183, 184]) and Attention Deficit Hyperactivity Disorder (ADHD; [185]). These diverse conditions appear to share significant overlap in psychological and neurobiological underpinnings [183, 186]. Thus, punishment and conflict tasks provide important opportunities to probe loss of behavioral control in these disorders. To date, these tasks have been exploited to assess both the motivation to seek drug rewards and individual differences in the development of addiction-like behavior, with particular focus on when and how drug-seeking behavior becomes less sensitive to punishment [187,188,189,190,191,192]. The findings from these tasks have been reviewed in detail elsewhere [193, 194]. One important point to note about these models is that the adverse consequence of drug seeking or taking (i.e., punishment) tends to be immediate (footshock), whereas in human drug users these adverse consequences (e.g., ill health, incarceration) can be delayed. Whether the processes involved in immediate vs. delayed punishment are the same or different remains an important unanswered question.

The application of punishment to understanding core deficits in neuropsychiatric disorders extends further still. Patients with antisocial personality disorder, conduct disorder, and oppositional defiant disorder are each characterized by alterations in punishment sensitivity [185, 195, 196]. Individuals with psychopathy or psychopathic traits choose punished options more often than matched controls and do not learn to suppress punished responses across trials [196,197,198]. This impaired instrumental suppression in psychopathy likely has complex causes but has been attributed, in part, to disrupted prefrontal and amygdala function. Interestingly, both heightened and blunted amygdala activity have been reported in response to aversive stimuli among these populations [199,200,201], and smaller amygdala volumes have been reported compared to controls [202, 203]. Moul and colleagues [204] suggest that these differences in amygdala responses are linked to different amygdala subregions, with overactivation linked to alterations in central amygdala valence-coding and underactivation to alterations in BLA encoding of outcome value.

In contrast, depression is associated with increased sensitivity to punishment [205]. Depressed individuals show heightened sensitivity to negative feedback and errors. They also reduce risk-taking more than matched controls following punishment [206]. Depressed individuals can perform comparably or disadvantageously relative to healthy controls on the Iowa Gambling Task (IGT), depending on the task variant used. This profile, possibly reflecting depressed patients’ hypersensitivity to punishment and/or hyposensitivity to reward [207,208,209], is linked to alterations in frontostriatal systems. A key deficit is a failure to shift responding when contingencies change [210], which could be due to a failure to adaptively extinguish avoidance. This is consistent with the observation that depression is associated with increased behavioral inhibition and lower behavioral activation, with lower behavioral activation (which would drive punishment extinction) being the better predictor of continuing depression symptoms [211]. Computational models have linked these deficits to serotonergic dysfunction [212].


Punishment offers a rich experimental preparation for answering fundamental questions about learning, motivation and decision-making. It also provides unique opportunities to help understand core dysfunctions in complex, neuropsychiatric disorders. We hope it is clear from this primer that many interesting learning processes are involved in even the simplest punishment design, and that determining which of these learning processes are controlling behavior is an important consideration. Remarkably, we are far from a coherent understanding of punishment. We know far less about the brain mechanisms of punishment than those for reinforcement or for other forms of aversive learning. Although some key brain regions have been identified, the precise nature of these contributions to learning and motivational processes, relevant connectivity, and cell types, remain poorly understood and await detailed investigation.



  1. 1.

    Mackintosh NJ. Conditioning and associative learning. Oxford, United Kingdom: Oxford University Press; 1983.

  2. 2.

    Dunham PJ. Some effects of punishment upon unpunished responding. J Exp Anal Behav. 1972;17:443–50.

  3. 3.

    Estes WK. Outline of a theory of punishment. In: Campbell BA, Church RM, (Eds.). Punishment and aversive behavior. New York, NY: Appleton-Century-Crofts; 1969. p. 57–82.

  4. 4.

    Declercq M, De Houwer J. On the role of US expectancies in avoidance behavior. Psychon Bull Rev. 2008;15:99–102.

  5. 5.

    Hoffman HS, Fleshler M. Stimulus aspects of aversive controls: the effects of response contingent shock. J Exp Anal Behav. 1965;8:89–96.

  6. 6.

    Hunt HF, Brady JV. Some effects of punishment and intercurrent “anxiety” on a simple operant. J Comp Physiol Psychol. 1955;48:305–10.

  7. 7.

    Annau Z, Kamin LJ. The conditioned emotional response as a function of intensity of the US. J Comp Physiol Psychol. 1961;54:428–32.

  8. 8.

    Church RM. Response suppression. In: Campbell BA, Church RM, (Eds.). Punishment and aversive behavior. New York, NY: Appleton-Century-Crofts; 1969. p. 111–56.

  9. 9.

    Azrin NH. Some effects of two intermittent schedules of immediate and non-immediate punishment. J Psychol. 1956;42:3–21.

  10. 10.

    Konorski J. Integrative activity of the brain: an interdisciplinary approach.. Chicago: University of Chicago Press; 1967.

  11. 11.

    Matsumoto M, Hikosaka O. Representation of negative motivational value in the primate lateral habenula. Nat Neurosci. 2009a;12:77–84.

  12. 12.

    Schultz W. Behavioral dopamine signals. Trends Neurosci. 2007;30:203–10.

  13. 13.

    Dickinson A, Dearing MF. Appetitive-aversive interactions and inhibitory processes. In: Dickinson A, Boakes RA, (Eds.). Mechanisms of learning and motivation: A memorial volume to Jerzy Konorski. Hillsdale, NJ: Erlbaum; 1979. p. 203–31.

  14. 14.

    Wrase J, Kahnt T, Schlagenhauf F, Beck A, Cohen MX, Knutson B, Heinz A. Different neural systems adjust motor behavior in response to reward and punishment. Neuroimage. 2007;36:1253–62.

  15. 15.

    Delgado, MR, Jou, RL, & Phelps, EA. Neural systems underlying aversive conditioning in humans with primary and secondary reinforcers. Front Neurosci. 2011;5:71.

  16. 16.

    Church RM. The varied effects of punishment on behavior. Psychol Rev. 1963;70:369–402.

  17. 17.

    Miller NE. Learning resistance to pain and fear: Effects of overlearning, exposure, and rewarded exposure in context. J Exp Psychol. 1960;60:137–45.

  18. 18.

    Marchant NJ, Campbell EJ & Kaganovsky K. (2017). Punishment of alcohol-reinforced responding in alcohol preferring P rats reveals a bimodal population: implications for models of compulsive drug seeking. Prog Neuro-Psychopharmacol Biol Psychiatry.

  19. 19.

    Archer J. Rodent sex differences in emotional and related behavior. Behav Biol. 1975;14:451–79.

  20. 20.

    Beatty WW, Fessler RG. Sex differences in sensitivity to electric shock in rats and hamsters. Bull Psychon Soc. 1977;10:189–90.

  21. 21.

    Bolles RC, Holtz R, Dunn T, Hill W. Comparisons of stimulus learning and response learning in punishment. Learn Motiv. 1980;11:78–96.

  22. 22.

    Goodall G. Learning due to the response-shock contingency in signalled punishment. Q J Exp Psychol Sect B: Comp Physiol Psychol. 1984;36:259–79.

  23. 23.

    Solomon RL. Punishment. Am Psychol. 1964;19:239–53.

  24. 24.

    Boe EE, Church RM. Punishment: Issues and Experimentation. New York, NY: Appleton Century Crofts; 1968.

  25. 25.

    Bolles RC. Theory of Motivation. New York, NY: Harper & Row; 1967.

  26. 26.

    Pearce JM, Hall G. Overshadowing the instrumental conditioning of a lever-press response by a more valid predictor of the reinforcer. J Exp Psychol: Anim Behav Process. 1978;4:356–67.

  27. 27.

    St. Claire-Smith R. The overshadowing and blocking of punishment. Q J Exp Psychol. 1979;31:51–61.

  28. 28.

    Jean-Richard-dit-Bressel P, McNally GP. The role of the basolateral amygdala in punishment. Learn Mem. 2015;22:128–37.

  29. 29.

    Goodall G, Mackintosh NJ. Analysis of the Pavlovian properties of signals for punishment. Q J Exp Psychol Sect B: Comp Physiol Psychol. 1987;39:1–23.

  30. 30.

    Holman JG, Mackintosh NJ. The control of appetitive instrumental responding does not depend on classical conditioning to the discriminative stimulus. Q J Exp Psychol. 1981;33B:21–31.

  31. 31.

    Kamin LJ. ‘Attention-like’ processes in classical conditioning. In: Jones MR, (Ed.). Miami symposium on the prediction of behavior: Aversive stimulation. Coral Gables, FL: University of Miami Press; 1968. p. 9–33.

  32. 32.

    Nasser HM, McNally GP. Appetitive–aversive interactions in Pavlovian fear conditioning. Behav Neurosci. 2012;126:404–22.

  33. 33.

    Dickinson A, Pearce JM. Inhibitory interactions between appetitive and aversive stimuli. Psychol Bull. 1977;84:690–711.

  34. 34.

    Masserman JH, Pechtel C. Neuroses in monkeys: a preliminary report of experimental observations. Ann New Y Acad Sci. 1953;56:253–65.

  35. 35.

    Logan FA, Wagner AR. Reward and Punishment. Boston, MA: Allyn & Bacon; 1965.

  36. 36.

    Holz WC, Azrin NH. Discriminative properties of punishment. J Exp Anal Behav. 1961;4:225–32.

  37. 37.

    Bouton ME, Schepers ST. Renewal after the punishment of free operant behavior. J Exp Psychol: Anim Learn Cogn. 2015;41:81–90.

  38. 38.

    Marchant NJ, Khuc TN, Pickens CL, Bonci A, Shaham Y. Context-induced relapse to alcohol seeking after punishment in a rat model. Biol Psychiatry. 2013;73:256–62.

  39. 39.

    Panlilio LV, Thorndike EB, Schindler CW. Lorazepam reinstates punishment-suppressed remifentanil self-administration in rats. Psychopharmacology. 2004;179:374–82.

  40. 40.

    Panlilio LV, Thorndike EB, Schindler CW. Reinstatement of punishment-suppressed opioid self-administration in rats: an alternative model of relapse to drug abuse. Psychopharmacology. 2002;168:229–35.

  41. 41.

    Marchant NJ, Campbell EJ, Whitaker LR, Harvey BK, Kaganovsky K, Adhikary S, Hope BT, Heins RC, Prisinzano TE, Vardy E, Bonci A, Bossert JM, Shaham Y. Role of ventral subiculum in context-induced relapse to alcohol seeking after punishment-imposed abstinence. J Neurosci. 2016;36:3281–94.

  42. 42.

    Marchant NJ, Rabei R, Kaganovsky K, Caprioli D, Bossert JM, Bonci A, Shaham Y. A critical role of lateral hypothalamus in context-induced relapse to alcohol seeking after punishment-imposed abstinence. J Neurosci. 2014;34:7447–57.

  43. 43.

    Estes, WK. An experimental study of punishment. Psychological Monographs, 1944;57 (3, Whole No. 263).

  44. 44.

    Pelloux Y, Minier-Toribio A, Hoots JK, Bossert JM, Shaham Y. Opposite effects of basolateral amygdala inactivation on context-induced relapse to cocaine seeking after extinction versus punishment. J Neurosci. 2018;38:51–59.

  45. 45.

    Hunt HF, Brady JV. Some effects of electro-convulsive shock on a conditioned emotional response (“anxiety”). J Comp Physiol Psychol. 1951;44:88–98.

  46. 46.

    Balleine BW, Dickinson A. Goal-directed instrumental action: contingency and incentive learning and their cortical substrates. Neuropharmacology. 1998;37:407–19. doi:10.1016/S0028-3908(98)00033-1

  47. 47.

    Kimmel HD, Terrant FR. Bias due to individual differences in yoked control designs. Behav Res Methods Instrum. 1968;1:11–14.

  48. 48.

    Church RM. Systematic effect of random error in the yoked control design. Psychol Bull. 1964;62:122–31.

  49. 49.

    Pollard GT, Howard JL. Effects of drugs on punished behavior: pre-clinical test for anxiolytics. Pharmacol & Ther. 1990;45:403–24.

  50. 50.

    Killcross AS, Everitt BJ, Robbins TW. Symmetrical effects of amphetamine and alpha-flupenthixol on conditioned punishment and conditioned reinforcement: contrasts with midazolam. Psychopharmacology. 1997;129:141–52.

  51. 51.

    McNaughton N, Gray JA. Anxiolytic action on the behavioural inhibition system implies multiple types of arousal contribute to anxiety. J Affect Disord. 2000;61:161–76.

  52. 52.

    Escorihuela RM, Fernandez-Teruel A, Zapata A, Nüñez JF, Tobeńa A. Flumazenil prevents the anxiolytic effects of diazepam, alprazolam and adinazolam on the early acquisition of two-way active avoidance. Pharmacol Res. 1993;28:53–58.

  53. 53.

    Pellon R, Ruíz A, Lamas E, Rodríguez C. Pharmacological analysis of the effects of benzodiazepines on punished schedule-induced polydipsia in rats. Behav Pharmacol. 2007;18:81–87.

  54. 54.

    Barrett JE, Brady LS, Witkin JM. Behavioral studies with anxiolytic drugs. I. Interactions of the benzodiazepine antagonist Ro 15-1788 with chlordiazepoxide, pentobarbital and ethanol. J Pharmacol Exp Ther. 1985;233:554–9.

  55. 55.

    Rasmussen EB, Newland MC. Quantification of ethanol’s antipunishment effect in humans using the generalized matching equation. J Exp Anal Behav. 2009;92:161–80.

  56. 56.

    Glowa JR, Barrett JE. Effects of alcohol on punished and unpunished responding of squirrel monkeys. Pharmacol Biochem Behav. 1976;4:169–73.

  57. 57.

    Koob GF, Mendelson WB, Schafer J, Wall TL, Britton KT, Bloom FE. Picrotoxinin receptor ligand blocks anti-punishment effects of alcohol. Alcohol. 1989;5:437–43.

  58. 58.

    Glowa JR, Crawley J, Suzdak PD, Paul SM. Ethanol and the GABA receptor complex: Studies with the partial inverse benzodiazepine receptor agonist Ro 15-4513. Pharmacol Biochem Behav. 1988;31:767–72.

  59. 59.

    Miyazaki K, Miyazaki KW, Doya K. The role of serotonin in the regulation of patience and impulsivity. Mol Neurobiol. 2012;45:213–24.

  60. 60.

    Soubrie P. Reconciling the role of central serotonin neurons in human and animal behavior. Behav Brain Sci. 1986;9:319–35.

  61. 61.

    Deakin JW, Graeff FG. 5-HT and mechanisms of defence. J Psychopharmacol. 1991;5:305–15.

  62. 62.

    Cools R, Nakamura K, Daw ND. Serotonin and dopamine: unifying affective, activational, and decision functions. Neuropsychopharmacology. 2011;36:98–113.

  63. 63.

    Daw ND, Kakade S, Dayan P. Opponent interactions between serotonin and dopamine. Neural Netw. 2002;15:603–16.

  64. 64.

    Stein L, Wise CD, Belluzzi JD. Neuropharmacology of reward and punishment. In: Iversen LL, Iversen SD, Snyder SH, (Eds.). Drugs, Neurotransmitters, and Behavior. New York, NY: Springer; 1977. p. 25–49.

  65. 65.

    Faulkner P, Deakin JW. The role of serotonin in reward, punishment and behavioural inhibition in humans: Insights from studies with acute tryptophan depletion. Neurosci Biobehav Rev. 2014;46:365–78.

  66. 66.

    Barrett JE. Studies on the effects of 5‐HT1A drugs in the pigeon. Drug Dev Res. 1992;26:299–317.

  67. 67.

    Brocco MJ, Koek W, Degryse AD, Colpaert FC. Comparative studies on the anti-punishment effects of chlordiazepoxide, buspirone and ritanserin in the pigeon, Geller-Seifter and Vogel conflict procedures. Behav Pharmacol. 1990;1:403–18.

  68. 68.

    Gleeson S, Ahlers ST, Mansbach RS, Foust JM, Barrett JE. Behavioral studies with anxiolytic drugs. VI. Effects on punished responding of drugs interacting with serotonin receptor subtypes. J Pharmacol Exp Ther. 1989;250:809–17.

  69. 69.

    Sanger DJ. Effects of buspirone and related compounds on suppressed operant responding in rats. J Pharmacol Exp Ther. 1990;254:420–6.

  70. 70.

    Sanger DJ. Increased rates of punished responding produced by buspirone-like compounds in rats. J Pharmacol Exp Ther. 1992;261:513–7.

  71. 71.

    Howard JL, Pollard GT. Effects of buspirone in the geller‐seifter conflict test with incremental shock. Drug Dev Res. 1990;19:37–49.

  72. 72.

    Pelloux Y, Dilleen R, Economidou D, Theobald D, Everitt BJ. Reduced forebrain serotonin transmission is causally involved in the development of compulsive cocaine seeking in rats. Neuropsychopharmacology. 2012;37:2505–14.

  73. 73.

    Kruse H, Dunn RW, Theurer KL, Novick WJ, Shearman GT. Attenuation of conflict‐induced suppression by clonidine: Indication of anxiolytic activity. Drug Dev Res. 1981;1:137–43.

  74. 74.

    Margules DL. Localization of anti-punishment actions of norepinephrine and atropine in amygdala and entopeduncular nucleus of rats. Brain Res. 1971;35:177–84.

  75. 75.

    Barnes NM, Sharp T. A review of central 5-HT receptors and their function. Neuropharmacology. 1999;38:1083–152.

  76. 76.

    Hajós‐Korcsok É, Sharp T. Effect of 5‐HT1A receptor ligands on Fos‐like immunoreactivity in rat brain: Evidence for activation of noradrenergic transmission. Synapse. 1999;34:145–53.

  77. 77.

    Baarendse PJ, Winstanley CA, Vanderschuren LJ. Simultaneous blockade of dopamine and noradrenaline reuptake promotes disadvantageous decision making in a rat gambling task. Psychopharmacology. 2013;225:719–31.

  78. 78.

    Zalla T, Koechlin E, Pietrini P, Basso G, Aquino P, Sirigu A, Grafman J. Differential amygdala responses to winning and losing: A functional magnetic resonance imaging study in humans. Eur J Neurosci. 2000;12:1764–70.

  79. 79.

    Hahn T, Dresler T, Plichta MM, Ehlis AC, Ernst LH, Markulin F, et al. Functional amygdala-hippocampus connectivity during anticipation of aversive events is associated with Gray’s trait “sensitivity to punishment”. Biol Psychiatry. 2010;68:459–64.

  80. 80.

    Camara E, Rodriguez-Fornells A, Münte TF. Functional connectivity of reward processing in the brain. Front Human Neurosci. 2009;2:19.

  81. 81.

    Liu M, Glowa JR. Regulation of benzodiazepine receptor binding and GABA A subunit mRNA expression by punishment and acute alprazolam administration. Brain Res. 2000;887:23–33.

  82. 82.

    Margules DL. Noradrenergic basis of inhibition between reward and punishment in amygdala. J Comp Physiol Psychol. 1968;66:329–34.

  83. 83.

    Sommer W, Moller C, Wiklund L, Thorsell A, Rimondini R, Nissbrandt H, Heilig M. Local 5, 7-dihydroxytryptamine lesions of rat amygdala: Release of punished drinking, unaffected plus-maze behavior and ethanol consumption. Neuropsychopharmacology. 2001;24:430–40.

  84. 84.

    Piantadosi PT, Yeates DC, Wilkins M, Floresco SB. Contributions of basolateral amygdala and nucleus accumbens subregions to mediating motivational conflict during punished reward-seeking. Neurobiol Learn Mem. 2017;140:92–105.

  85. 85.

    Killcross S, Robbins TW, Everitt BJ. Different types of fear-conditioned behaviour mediated by separate nuclei within amygdala. Nature. 1997;388:377–80.

  86. 86.

    Xue Y, Steketee JD, Sun W. Inactivation of the central nucleus of the amygdala reduces the effect of punishment on cocaine self‐administration in rats. Eur J Neurosci. 2012;35:775–83.

  87. 87.

    Onge JRS, Stopper CM, Zahm DS, Floresco SB. Separate prefrontal-subcortical circuits mediate different components of risk-based decision making. J Neurosci. 2012;32:2886–99.

  88. 88.

    Ragozzino ME. The contribution of the medial prefrontal cortex, orbitofrontal cortex, and dorsomedial striatum to behavioral flexibility. Ann New Y Acad Sci. 2007;1121:355–75.

  89. 89.

    Szczepanski SM, Knight RT. Insights into human behavior from lesions to the prefrontal cortex. Neuron. 2014;83:1002–18.

  90. 90.

    Kobayashi S. Organization of neural systems for aversive information processing: pain, error, and punishment. Front Neurosci. 2012;6:136.

  91. 91.

    Wiech K, Tracey I. Pain, decisions, and actions: a motivational perspective. Front Neurosci. 2013;7:46.

  92. 92.

    Rainville P, Duncan GH, Price DD, Carrier B, Bushnell MC. Pain affect encoded in human anterior cingulate but not somatosensory cortex. Science. 1997;277:968–71.

  93. 93.

    Sikes RW, Vogt BA. Nociceptive neurons in area 24 of rabbit cingulate cortex. J Neurophysiol. 1992;68:1720–32.

  94. 94.

    Carter CS, Braver TS, Barch DM, Botvinick MM, Noll D, Cohen JD. Anterior cingulate cortex, error detection, and the online monitoring of performance. Science. 1998;280:747–9.

  95. 95.

    Furlong TM, Cole S, Hamlin AS, McNally GP. The role of prefrontal cortex in predictive fear learning. Behav Neurosci. 2010;124:574.

  96. 96.

    Johansen JP, Fields HL. Glutamatergic activation of anterior cingulate cortex produces an aversive teaching signal. Nat Neurosci. 2004;7:398–403.

  97. 97.

    Kaada BR. Somato-motor, autonomic and electrocorticographic responses to electrical stimulation of rhinencephalic and other structures in primates, cat, and dog; a study of responses from the limbic, subcallosal, orbito-insular, piriform and temporal cortex, hippocampus-fornix and amygdala. Acta Physiol Scand Suppl. 1951;24:1–262.

  98. 98.

    McCleary RA. Response specificity in the behavioral effects of limbic system lesions in the cat. J Comp Physiol Psychol. 1961;54:605–13.

  99. 99.

    Corcoran KA, Quirk GJ. Activity in prelimbic cortex is necessary for the expression of learned, but not innate, fears. J Neurosci. 2007;27:840–4.

  100. 100.

    Marquis JP, Killcross S, Haddon JE. Inactivation of the prelimbic, but not infralimbic, prefrontal cortex impairs the contextual control of response conflict in rats. Eur J Neurosci. 2007;25:559–66.

  101. 101.

    Chen BT, Yau HJ, Hatch C, Kusumoto-Yoshida I, Cho SL, Hopf FW, Bonci A. Rescuing cocaine-induced prefrontal cortex hypoactivity prevents compulsive cocaine seeking. Nature. 2013;496:359–62.

  102. 102.

    Marchant NJ, Furlong TM, McNally GP. Medial dorsal hypothalamus mediates the inhibition of reward seeking after extinction. J Neurosci. 2010;30:14102–15.

  103. 103.

    Peters J, De Vries TJ. D-cycloserine administered directly to infralimbic medial prefrontal cortex enhances extinction memory in sucrose-seeking animals. Neuroscience. 2013;230:24–30.

  104. 104.

    Peters J, LaLumiere RT, Kalivas PW. Infralimbic prefrontal cortex is responsible for inhibiting cocaine seeking in extinguished rats. J Neurosci. 2008;28:6046–53.

  105. 105.

    Peters J, Kalivas PW, Quirk GJ. Extinction circuits for fear and addiction overlap in prefrontal cortex. Learn Mem. 2009;16:279–88.

  106. 106.

    Killcross S, Coutureau E. Coordination of actions and habits in the medial prefrontal cortex of rats. Cereb Cortex. 2003;13:400–8.

  107. 107.

    Arnsten AF, Li BM. Neurobiology of executive functions: Catecholamine influences on prefrontal cortical functions. Biol Psychiatry. 2005;57:1377–84.

  108. 108.

    Kim CK, Ye L, Jennings JH, Pichamoorthy N, Tang DD, Yoo ACW, et al. Molecular and circuit-dynamical identification of top-down neural mechanisms for restraint of reward seeking. Cell. 2017;170:1013–27.

  109. 109.

    Pelloux Y, Murray JE, Everitt BJ. Differential roles of the prefrontal cortical subregions and basolateral amygdala in compulsive cocaine seeking and relapse after voluntary abstinence in rats. Eur J Neurosci. 2013;38:3018–26.

  110. 110.

    Jean-Richard-dit-Bressel P, McNally GP. Lateral, not medial, prefrontal cortex contributes to punishment and aversive instrumental learning. Learn Mem. 2016;23:607–17.

  111. 111.

    Schoenbaum G, Roesch M. Orbitofrontal cortex, associative learning, and expectancies. Neuron. 2005;47:633–6.

  112.

    Cardinal RN, Parkinson JA, Hall J, Everitt BJ. The contribution of the amygdala, nucleus accumbens, and prefrontal cortex to emotion and motivated behaviour. Int Congr Ser. 2003;1250:347–70.

  113.

    O’Doherty JP, Dayan P, Friston K, Critchley H, Dolan RJ. Temporal difference models and reward-related learning in the human brain. Neuron. 2003;38:329–37.

  114.

    Schoenbaum G, Roesch MR, Stalnaker TA, Takahashi YK. A new perspective on the role of the orbitofrontal cortex in adaptive behaviour. Nat Rev Neurosci. 2009;10:885–92.

  115.

    Arana FS, Parkinson JA, Hinton E, Holland AJ, Owen AM, Roberts AC. Dissociable contributions of the human amygdala and orbitofrontal cortex to incentive motivation and goal selection. J Neurosci. 2003;23:9632–8.

  116.

    Hodgson TL, Mort D, Chamberlain MM, Hutton SB, O’Neill KS, Kennard C. Orbitofrontal cortex mediates inhibition of return. Neuropsychologia. 2002;40:1891–901.

  117.

    Morrison SE, Salzman CD. Representations of appetitive and aversive information in the primate orbitofrontal cortex. Ann N Y Acad Sci. 2011;1239:59–70.

  118.

    O’Doherty J, Kringelbach ML, Rolls ET, Hornak J, Andrews C. Abstract reward and punishment representations in the human orbitofrontal cortex. Nat Neurosci. 2001;4:95–102.

  119.

    Bechara A, Damasio H, Damasio AR, Lee GP. Different contributions of the human amygdala and ventromedial prefrontal cortex to decision-making. J Neurosci. 1999;19:5473–81.

  120.

    Bechara A, Damasio H, Damasio AR. Emotion, decision making and the orbitofrontal cortex. Cereb Cortex. 2000;10:295–307.

  121.

    Orsini CA, Trotta RT, Bizon JL, Setlow B. Dissociable roles for the basolateral amygdala and orbitofrontal cortex in decision-making under risk of punishment. J Neurosci. 2015;35:1368–79.

  122.

    Clarke HF, Horst NK, Roberts AC. Regional inactivations of primate ventral prefrontal cortex reveal two distinct mechanisms underlying negative bias in decision making. Proc Natl Acad Sci. 2015;112:4176–81.

  123.

    Simmons A, Matthews SC, Stein MB, Paulus MP. Anticipation of emotionally aversive visual stimuli activates right insula. NeuroReport. 2004;15:2261–5.

  124.

    Simmons A, Strigo I, Matthews SC, Paulus MP, Stein MB. Anticipation of aversive visual stimuli is associated with increased insula activation in anxiety-prone subjects. Biol Psychiatry. 2006;60:402–9.

  125.

    Coghill RC, Talbot JD, Evans AC, Meyer E, Gjedde A, Bushnell MC, Duncan GH. Distributed processing of pain and vibration by the human brain. J Neurosci. 1994;14:4095–108.

  126.

    Franciotti R, Ciancetta L, Della Penna S, Belardinelli P, Pizzella V, Romani GL. Modulation of alpha oscillations in insular cortex reflects the threat of painful stimuli. Neuroimage. 2009;46:1082–90.

  127.

    Hayes DJ, Northoff G. Identifying a network of brain regions involved in aversion-related processing: A cross-species translational investigation. Front Integr Neurosci. 2011;5:49.

  128.

    Jasmin L, Rabkin SD, Granato A, Boudah A, Ohara PT. Analgesia and hyperalgesia from GABA-mediated modulation of the cerebral cortex. Nature. 2003;424:316–20.

  129.

    Menon V, Uddin LQ. Saliency, switching, attention and control: a network model of insula function. Brain Struct Funct. 2010;214:655–67.

  130.

    Cai W, Ryali S, Chen T, Li CSR, Menon V. Dissociable roles of right inferior frontal cortex and anterior insula in inhibitory control: Evidence from intrinsic and task-related functional parcellation, connectivity, and response profile analyses across multiple datasets. J Neurosci. 2014;34:14652–67.

  131.

    Ghahremani A, Rastogi A, Lam S. The role of right anterior insula and salience processing in inhibitory control. J Neurosci. 2015;35:3291–2.

  132.

    Daniel ML, Cocker PJ, Lacoste J, Mar AC, Houeto JL, Belin‐Rauscent A, Belin D. The anterior insula bidirectionally modulates cost‐benefit decision‐making on a rodent gambling task. Eur J Neurosci. 2017;46:2620–8.

  133.

    Gray JA. A critique of Eysenck’s theory of personality. In: Eysenck HJ (ed.). A Model for Personality. Berlin, Germany: Springer-Verlag; 1981. p. 246–76.

  134.

    Gray JA. The neuropsychology of anxiety: Inquiry into the septohippocampal system. Oxford, United Kingdom: Clarendon Press; 1982.

  135.

    Eison AS, Temple DL. Buspirone: Review of its pharmacology and current perspectives on its mechanism of action. Am J Med. 1986;80:1–9.

  136.

    Kaada BR, Jansen J, Andersen P. Stimulation of the hippocampus and medial cortical areas in unanesthetized cats. Neurology. 1953;3:844.

  137.

    Gray JA, McNaughton N. Comparison between the behavioural effects of septal and hippocampal lesions: a review. Neurosci Biobehav Rev. 1983;7:119–88.

  138.

    Gray JA. Drug effects on fear and frustration: Possible limbic site of action of minor tranquilizers. In: Iversen LL, Iversen SD, Snyder SH, (eds.). Drugs, Neurotransmitters, and Behavior. New York, NY: Springer; 1977. p. 433–529.

  139.

    Trivedi MA, Coover GD. Lesions of the ventral hippocampus, but not the dorsal hippocampus, impair conditioned fear expression and inhibitory avoidance on the elevated T-maze. Neurobiol Learn Mem. 2004;81:172–84.

  140.

    Torrubia R, Avila C, Moltó J, Caseras X. The Sensitivity to Punishment and Sensitivity to Reward Questionnaire (SPSRQ) as a measure of Gray’s anxiety and impulsivity dimensions. Personal Individ Differ. 2001;31:837–62.

  141.

    Avila C, Torrubia R. Personality differences in suppression of behavior as a function of the probability of punishment. Personal Individ Differ. 2006;41:249–60.

  142.

    Barros-Loscertales A, Meseguer V, Sanjuan A, Belloch V, Parcet MA, Torrubia R, Avila C. Behavioral inhibition system activity is associated with increased amygdala and hippocampal gray matter volume: a voxel-based morphometry study. Neuroimage. 2006;33:1011–5.

  143.

    Gupta R, Koscik TR, Bechara A, Tranel D. The amygdala and decision-making. Neuropsychologia. 2011;49:760–6.

  144.

    Matsumoto M, Hikosaka O. Lateral habenula as a source of negative reward signals in dopamine neurons. Nature. 2007;447:1111–5.

  145.

    Mirenowicz J, Schultz W. Preferential activation of midbrain dopamine neurons by appetitive rather than aversive stimuli. Nature. 1996;379:449–51.

  146.

    Liu ZH, Shin R, Ikemoto S. Dual role of medial A10 dopamine neurons in affective encoding. Neuropsychopharmacology. 2008;33:3010–20.

  147.

    Tan KR, Yvon C, Turiault M, Mirzabekov JJ, Doehner J, Labouèbe G, et al. GABA neurons of the VTA drive conditioned place aversion. Neuron. 2012;73:1173–83.

  148.

    Danjo T, Yoshimi K, Funabiki K, Yawata S, Nakanishi S. Aversive behavior induced by optogenetic inactivation of ventral tegmental area dopamine neurons is mediated by dopamine D2 receptors in the nucleus accumbens. Proc Natl Acad Sci. 2014;111:6455–60.

  149.

    Shippenberg TS, Bals-Kubik R, Huber A, Herz A. Neuroanatomical substrates mediating the aversive effects of D-1 dopamine receptor antagonists. Psychopharmacology. 1991;103:209–14.

  150.

    Hong S, Jhou TC, Smith M, Saleem KS, Hikosaka O. Negative reward signals from the lateral habenula to dopamine neurons are mediated by rostromedial tegmental nucleus in primates. J Neurosci. 2011;31:11457–71.

  151.

    Balcita-Pedicino JJ, Omelchenko N, Bell R, Sesack SR. The inhibitory influence of the lateral habenula on midbrain dopamine cells: Ultrastructural evidence for indirect mediation via the rostromedial mesopontine tegmental nucleus. J Comp Neurol. 2011;519:1143–64.

  152.

    Stamatakis AM, Stuber GD. Activation of lateral habenula inputs to the ventral midbrain promotes behavioral avoidance. Nat Neurosci. 2012;15:1105–7.

  153.

    Hikosaka O, Sesack SR, Lecourtier L, Shepard PD. Habenula: crossroad between the basal ganglia and the limbic system. J Neurosci. 2008;28:11825–9.

  154.

    Lawson RP, Seymour B, Loh E, Lutti A, Dolan RJ, Dayan P, et al. The habenula encodes negative motivational value associated with primary punishment in humans. Proc Natl Acad Sci. 2014;111:11858–63.

  155.

    Ullsperger M, von Cramon DY. Error monitoring using external feedback: Specific roles of the habenular complex, the reward system, and the cingulate motor area revealed by functional magnetic resonance imaging. J Neurosci. 2003;23:4308–14.

  156.

    Shabel SJ, Proulx CD, Trias A, Murphy RT, Malinow R. Input to the lateral habenula from the basal ganglia is excitatory, aversive, and suppressed by serotonin. Neuron. 2012;74:475–81.

  157.

    Wickens J. Toward an anatomy of disappointment: Reward-related signals from the globus pallidus. Neuron. 2008;60:530–1.

  158.

    Hong S, Hikosaka O. The globus pallidus sends reward-related signals to the lateral habenula. Neuron. 2008;60:720–9.

  159.

    Lecca S, Meye FJ, Trusel M, Tchenio A, Harris J, Schwarz MK, et al. Aversive stimuli drive hypothalamus-to-habenula excitation to promote escape behavior. eLife. 2017;6:e30697.

  160.

    Root DH, Mejias-Aponte CA, Qi J, Morales M. Role of glutamatergic projections from ventral tegmental area to lateral habenula in aversive conditioning. J Neurosci. 2014;34:13906–10.

  161.

    Shabel SJ, Proulx CD, Piriz J, Malinow R. GABA/glutamate co-release controls habenula output and is modified by antidepressant treatment. Science. 2014;345:1494–8.

  162.

    Stamatakis AM, Jennings JH, Ung RL, Blair GA, Weinberg RJ, Neve RL, et al. A unique population of ventral tegmental area neurons inhibits the lateral habenula to promote reward. Neuron. 2013;80:1039–53.

  163.

    Good CH, Wang H, Chen YH, Mejias-Aponte CA, Hoffman AF, Lupica CR. Dopamine D4 receptor excitation of lateral habenula neurons via multiple cellular mechanisms. J Neurosci. 2013;33:16853–64.

  164.

    Hwang EK, Chung JM. 5HT1B receptor-mediated pre-synaptic depression of excitatory inputs to the rat lateral habenula. Neuropharmacology. 2014;81:153–65.

  165.

    Jean-Richard-dit-Bressel P, McNally GP. The role of the lateral habenula in punishment. PLoS ONE. 2014;9:e111699.

  166.

    Stopper CM, Floresco SB. What’s better for me? Fundamental role for lateral habenula in promoting subjective decision biases. Nat Neurosci. 2014;17:33–35.

  167.

    Christoph GR, Leonzio RJ, Wilcox KS. Stimulation of the lateral habenula inhibits dopamine-containing neurons in the substantia nigra and ventral tegmental area of the rat. J Neurosci. 1986;6:613–9.

  168.

    Jhou TC, Geisler S, Marinelli M, Degarmo BA, Zahm DS. The mesopontine rostromedial tegmental nucleus: A structure targeted by the lateral habenula that projects to the ventral tegmental area of tsai and the substantia nigra compacta. J Comp Neurol. 2009;513:566–96.

  169.

    Ilango A, Kesner AJ, Keller KL, Stuber GD, Bonci A, Ikemoto S. Similar roles of substantia nigra and ventral tegmental dopamine neurons in reward and aversion. J Neurosci. 2014;34:817–22.

  170.

    Kreitzer AC, Malenka RC. Striatal plasticity and basal ganglia circuit function. Neuron. 2008;60:543–54.

  171.

    Bromberg-Martin ES, Matsumoto M, Hikosaka O. Dopamine in motivational control: rewarding, aversive, and alerting. Neuron. 2010;68:815–34.

  172.

    Hikosaka O. Basal ganglia mechanisms of reward-oriented eye movement. Ann N Y Acad Sci. 2007;1104:229–49.

  173.

    Kravitz AV, Tye LD, Kreitzer AC. Distinct roles for direct and indirect pathway striatal neurons in reinforcement. Nat Neurosci. 2012;15:816–8.

  174.

    Kravitz AV, Kreitzer AC. Striatal mechanisms underlying movement, reinforcement, and punishment. Physiology. 2012;27:167–77.

  175.

    Cohen JY, Haesler S, Vong L, Lowell BB, Uchida N. Neuron-type-specific signals for reward and punishment in the ventral tegmental area. Nature. 2012;482:85–88.

  176.

    Matsumoto M, Hikosaka O. Two types of dopamine neuron distinctly convey positive and negative motivational signals. Nature. 2009;459:837–41.

  177.

    Kim CK, Yang SJ, Pichamoorthy N, Young NP, Kauvar I, Jennings JH, et al. Simultaneous fast measurement of circuit dynamics at multiple sites across the mammalian brain. Nat Methods. 2016;13:325–8.

  178.

    Lammel S, Ion DI, Roeper J, Malenka RC. Projection-specific modulation of dopamine neuron synapses by aversive and rewarding stimuli. Neuron. 2011;70:855–62.

  179.

    Lammel S, Lim BK, Malenka RC. Reward and aversion in a heterogeneous midbrain dopamine system. Neuropharmacology. 2014;76:351–9.

  180.

    Mantz J, Thierry AM, Glowinski J. Effect of noxious tail pinch on the discharge rate of mesocortical and mesolimbic dopamine neurons: selective activation of the mesocortical system. Brain Res. 1989;476:377–81.

  181.

    Lammel S, Lim BK, Ran C, Huang KW, Betley MJ, Tye KM, et al. Input-specific control of reward and aversion in the ventral tegmental area. Nature. 2012;491:212–7.

  182.

    American Psychiatric Association. Diagnostic and statistical manual of mental disorders. 5th ed. Washington, DC: Author; 2013.

  183.

    Figee M, Pattij T, Willuhn I, Luigjes J, van den Brink W, Goudriaan A, et al. Compulsivity in obsessive–compulsive disorder and addictions. Eur Neuropsychopharmacol. 2016;26:856–68.

  184.

    Morein-Zamir S, Robbins TW. Fronto-striatal circuits in response-inhibition: relevance to addiction. Brain Res. 2015;1628:117–29.

  185.

    Humphreys KL, Lee SS. Risk taking and sensitivity to punishment in children with ADHD, ODD, ADHD+ODD, and controls. J Psychopathol Behav Assess. 2011;33:299–307.

  186.

    Petry NM. Substance abuse, pathological gambling, and impulsiveness. Drug Alcohol Depend. 2001;63:29–38.

  187.

    Kasanetz F, Deroche-Gamonet V, Berson N, Balado E, Lafourcade M, Manzoni O, Piazza PV. Transition to addiction is associated with a persistent impairment in synaptic plasticity. Science. 2010;328:1709–12.

  188.

    Kasanetz F, Lafourcade M, Deroche-Gamonet V, Revest J-M, Berson N, Balado E, Fiancette J-F, Renault P, Piazza P-V, Manzoni OJ. Prefrontal synaptic markers of cocaine addiction-like behavior in rats. Mol Psychiatry. 2012;18:729–37.

  189.

    Radke AK, Jury NJ, Kocharian A, Marcinkiewcz CA, Lowery-Gionta EG, Pleil KE, McElligott ZA, McKlveen JM, Kash TL, Holmes A. Chronic EtOH effects on putative measures of compulsive behavior in mice. Addict Biol. 2017;22:423–34.

  190.

    Belin D, Mar AC, Dalley JW, Robbins TW, Everitt BJ. High impulsivity predicts the switch to compulsive cocaine-taking. Science. 2008;320:1352–5.

  191.

    Vanderschuren LJMJ, Everitt BJ. Drug seeking becomes compulsive after prolonged cocaine self-administration. Science. 2004;305:1017–9.

  192.

    Deroche-Gamonet V, Belin D, Piazza PV. Evidence for addiction-like behavior in the rat. Science. 2004;305:1014–7.

  193.

    Smith RJ, Laiks LS. Behavioral and neural mechanisms underlying habitual and compulsive drug seeking. Prog Neuropsychopharmacol Biol Psychiatry. 2017.

  194.

    Vanderschuren LJ, Minnaard AM, Smeets JA, Lesscher HM. Punishment models of addictive behavior. Curr Opin Behav Sci. 2017;13:77–84.

  195.

    Matthys W, Van Goozen SH, Snoek H, Van Engeland H. Response perseveration and sensitivity to reward and punishment in boys with oppositional defiant disorder. Eur Child Adolesc Psychiatry. 2004;13:362–4.

  196.

    Blair RJR, Mitchell DGV, Leonard A, Budhani S, Peschardt KS, Newman C. Passive avoidance learning in individuals with psychopathy: modulation by reward but not by punishment. Personal Individ Differ. 2004;37:1179–92.

  197.

    Newman JP, Kosson DS. Passive avoidance learning in psychopathic and nonpsychopathic offenders. J Abnorm Psychol. 1986;95:252–6.

  198.

    Newman JP, Patterson CM, Kosson DS. Response perseveration in psychopaths. J Abnorm Psychol. 1987;96:145–8.

  199.

    Müller JL, Sommer M, Wagner V, Lange K, Taschler H, Röder CH, et al. Abnormalities in emotion processing within cortical and subcortical regions in criminal psychopaths: evidence from a functional magnetic resonance imaging study using pictures with emotional content. Biol Psychiatry. 2003;54:152–62.

  200.

    Schneider F, Habel U, Kessler C, Posse S, Grodd W, Müller-Gärtner HW. Functional imaging of conditioned aversive emotional responses in antisocial personality disorder. Neuropsychobiology. 2000;42:192–201.

  201.

    Birbaumer N, Veit R, Lotze M, Erb M, Hermann C, Grodd W, Flor H. Deficient fear conditioning in psychopathy: a functional magnetic resonance imaging study. Arch Gen Psychiatry. 2005;62:799–805.

  202.

    Weber S, Habel U, Amunts K, Schneider F. Structural brain abnormalities in psychopaths—A review. Behav Sci Law. 2008;26:7–28.

  203.

    Yang Y, Raine A, Narr KL, Colletti P, Toga AW. Localization of deformations within the amygdala in individuals with psychopathy. Arch Gen Psychiatry. 2009;66:986–94.

  204.

    Moul C, Killcross S, Dadds MR. A model of differential amygdala activation in psychopathy. Psychol Rev. 2012;119:789–806.

  205.

    Eshel N, Roiser JP. Reward and punishment processing in depression. Biol Psychiatry. 2010;68:118–24.

  206.

    Hevey D, Thomas K, Laureano-Schelten S, Looney K, Booth R. Clinical depression and punishment sensitivity on the BART. Front Psychol. 2017;8:670.

  207.

    Must A, Szabó Z, Bódi N, Szász A, Janka Z, Kéri S. Sensitivity to reward and punishment and the prefrontal cortex in major depression. J Affect Disord. 2006;90:209–15.

  208.

    Whitmer AJ, Frank MJ, Gotlib IH. Sensitivity to reward and punishment in major depressive disorder: Effects of rumination and of single versus multiple experiences. Cogn Emot. 2012;26:1475–85.

  209.

    Elliott R, Sahakian BJ, McKay AP, Herrod JJ, Robbins TW, Paykel ES. Neuropsychological impairments in unipolar depression: the influence of perceived failure on subsequent performance. Psychol Med. 1996;26:975–89.

  210.

    Cella M, Dymond S, Cooper A. Impaired flexible decision-making in major depressive disorder. J Affect Disord. 2010;124:207–10.

  211.

    Kasch KL, Rottenberg J, Arnow BA, Gotlib IH. Behavioral activation and inhibition systems and the severity and course of depression. J Abnorm Psychol. 2002;111:589–97.

  212.

    Dayan P, Huys QJ. Serotonin in affective control. Annu Rev Neurosci. 2009;32:95–126.



Acknowledgements

Preparation of this manuscript was supported by the Australian Research Council and the National Health and Medical Research Council. We have no other sources of funding or commercial interests to disclose.

Author information


  1. UNSW Sydney, Sydney, NSW, 2052, Australia

    • Philip Jean-Richard-Dit-Bressel
    • , Simon Killcross
    •  & Gavan P. McNally



Competing interests

The authors declare no competing interests.

Corresponding author

Correspondence to Gavan P. McNally.
