INTRODUCTION

Behavioral inhibition (BI) refers to a well-studied temperament style identified reliably in infancy and early childhood. Young children with BI display heightened sensitivity to novel auditory and visual stimuli, and avoid unfamiliar situations and people (Fox et al, 2005; Kagan et al, 1984). Moreover, as a group, children with a history of early BI show a higher prevalence of anxiety disorders including social anxiety disorder (SAD) at later ages. However, the developmental trajectories of individual children with a history of BI are highly variable such that many BI children do not develop clinical or even subclinical levels of anxiety in later childhood. Thus, the study of differential risk and resilience among children with this early temperament, including the cognitive and neural processes underlying these varying trajectories, is of great theoretical and practical importance. The current paper uses a dual-processing model of information processing, embedded within a systems neuroscience perspective, to provide a framework for studying and understanding such processes.

For children with BI, a dual-processing model describes two information-processing strategies that interact to shape social and emotional outcomes. First, beginning in infancy, BI is characterized by an exaggerated tendency to automatically orient toward novel stimuli, responding to such stimuli as if they are threats. For young children with BI, novel stimuli quickly and markedly engage neural systems supporting salience detection, rapid information processing, and defensive responding. This tendency to quickly and automatically process potential threats is clearly adaptive throughout the lifespan when confronted with truly threatening situations. Nevertheless, for some children, possibly those with particularly stable forms of BI, this highly reactive and automatic style of information processing persists and develops into an over-generalized and biased style of information processing. By the preschool years, a second processing strategy involving more controlled, goal-directed activity emerges and continues to mature throughout childhood and adolescence. Together, these automatic and controlled processes influence the way children process, interpret, and interact with nonsocial and social stimuli in their environments. Importantly, based on a dual-processing model, variable social and emotional outcomes for children with BI may reflect the end result of the interaction between these two processing strategies. In this review, we use existing theory and research in cognitive development, temperament and personality, psychopathology, and affective neuroscience to propose three developmental models that may account for the joint influence of automatic and controlled processing on the variable social and emotional outcomes of BI children. These models (Top-Down Model of Control; Risk Potentiation Model of Control; and Overgeneralized Control Model) are similar in their emphasis on the combined influence of automatic and controlled information-processing strategies but differ in (1) their assumptions regarding the degree of inter-dependence in these two processing systems during development and (2) the direction of the hypothesized effect of each on the social and emotional adjustment of BI children. The review begins with a detailed summary of empirical studies examining BI, automatic and controlled processing, and differential risk for anxiety. Next, we return to these hypothetical models and discuss their relative merits and the unique directions for future research associated with each. Finally, we conclude with a discussion of the clinical implications of this framework.

BI AND DEVELOPMENTAL RISK

BI refers to ‘the child’s early initial behavioral reactions to unfamiliar people, objects, and contexts, or challenging situations’ (Kagan et al, 1985, p. 53). Individual differences in motor and affective reactions to novel stimuli can be reliably identified as early as 4 months of age. Although such reactive tendencies, in and of themselves, are not the same as BI, these tendencies are predictive of BI in the toddler years (Fox et al, 2001; Fox et al, submitted; Kagan and Snidman, 1991). Toddlers are said to exhibit BI when they are slow to approach unfamiliar stimuli and instead remain in close proximity to caregivers when confronted with novel objects or unfamiliar people. These behaviors are thought to arise from a lowered threshold to engage neural and physiological systems associated with novelty detection, orienting, and defensive responding. Functionally, the behaviors of toddlers with BI allow an immediate, albeit relatively inflexible, source of regulation by maintaining physical distance between the child and unfamiliar stimuli.

Research on the course of BI quantifies the stability of the phenotype and its relations to social and emotional outcomes. Longitudinal studies demonstrate a good deal of stability in the expression of BI over early and middle childhood (eg, Fox et al, 2001; Kagan et al, 1987; Kerr et al, 1994; Sanson et al, 1996) and from childhood to early adulthood (Gest, 1997). Over development, the continued expression of BI is thought to limit both the quantity and quality of children’s experiences, particularly in novel contexts and/or with unfamiliar others. For example, early BI predicts elevated social wariness and the use of passive and relatively ineffective social problem solving strategies with unfamiliar peers in the toddler and preschool years (Degnan et al, in press; Walker et al, 2013). This leads children with BI to miss potentially formative experiences that facilitate adaptive social development. Consistent with this view, numerous studies document concurrent and prospective associations between BI and various forms of socioemotional maladjustment into middle and later childhood (Crozier, 1999; Rubin et al, 2009).

Importantly, there is some specificity in the adverse outcomes suffered by children with BI. BI is a significant risk factor for a range of anxiety disorders, but the association with SAD appears particularly robust (Biederman et al, 1995; Lonigan and Phillips, 2001). For example, children rated by their mothers as consistently high in BI between 14 months and 7 years of age had approximately fourfold increased odds of a lifetime diagnosis of SAD when assessed in adolescence (Chronis-Tuscano et al, 2009). In a recent meta-analysis of seven studies assessing the relation between BI and SAD, Clauss and Blackford (2012) reported a sevenfold increase in the risk of later SAD diagnosis for children with a history of BI, leading them to conclude that BI is one of the strongest single risk factors for the development of SAD. It is important to note, however, that many cases of SAD arise in the absence of BI. Moreover, some children with BI develop other anxiety disorders besides SAD. Therefore, the BI-to-SAD link represents one specific developmental pathway for children with BI. This pathway carries a particularly elevated risk for SAD, and we hypothesize that a child's trajectory along this pathway is fueled at least in part by relations among BI and automatic and controlled processing strategies.

Variable trajectories among equally inhibited toddlers are particularly interesting from a developmental perspective. Although children with a history of BI rarely become socially exuberant, many of them are indistinguishable from their peers in social contexts at later ages (Degnan et al, in press; Fox et al, 2001). For example, using latent profile analysis with repeated assessments of young children's wariness with an unfamiliar peer between the ages of 2 and 5 years, Degnan et al reported that toddlers high in BI were significantly more likely than toddlers low in BI to display social wariness at 2 years of age. However, fewer than 30% of the children displaying high levels of social wariness at age 2 remained wary through to the 5-year assessment. That is, many children who initially displayed high levels of BI and social wariness were significantly less extreme in their behavior by the end of the preschool years. Importantly, those children who remained consistently high in BI and wariness over childhood are the ones at particular risk for adolescent anxiety diagnoses (Chronis-Tuscano et al, 2009). To date, no single behavior or reaction pattern that characterizes the early BI temperament has been shown to predict differential outcome. Rather, individual differences in experiences and a wide range of cognitive processes that develop beyond infancy and toddlerhood are thought to shape outcomes over development. For example, caregiving or peer experiences that occur from the toddler and preschool years through adolescence modulate outcome (see Degnan et al, 2010). Our focus here is specifically on within-child factors related to automatic vs controlled information-processing strategies and their associated neural systems. We believe these processing strategies together shape children's representations and expectations regarding their current and future social environments and therefore provide a specific mechanism linking BI and SAD.

DUAL PROCESS MODELS OF INFORMATION PROCESSING

In the field of psychology, dual-process models are used to explain the dynamics and development of broad domains of functioning. These domains include attention, cognition, emotion, and social behavior (eg, Barrett et al, 2004; Eisenberg et al, 1994; MacDonald, 2008; Norman and Shallice, 1986; Rothbart and Bates, 2006; Rothbart and Derryberry, 1981; Strack and Deutsch, 2004). The overarching theme of these models is that human information processing involves at least two complementary strategies. The first strategy involves the processing of information in an automatic, stimulus-driven, and reflexive way. The second involves more controlled, goal-directed, and contemplative approaches. These systems are engaged by different stimulus properties and demands, have unique neural underpinnings, support different forms of learning, and provide potentially competing response pathways (Corbetta and Shulman, 2002). Engagement of these systems occurs on a relative rather than absolute scale, such that few behaviors are completely dominated by one or the other mode of processing. Rather, differences in behavior are explained by the relative balance of these two modes of processing in any given context. Several disparate lines of research suggest that individual differences in health and adaptation reflect the way in which these dual modes functionally integrate in the service of adaptation (eg, Carver et al, 2009; Derryberry and Rothbart, 1997).

For BI children, the deployment of automatic and controlled modes of processing in motivationally and emotionally significant contexts appears particularly relevant. Such contexts contain signals of reward and punishment, stimuli for which organisms will expend effort to approach or avoid. In such contexts, motivationally significant cues engage automatic modes of processing and trigger reflexive and rapidly deployed responses. As such, automatic information-processing modes are central to evolutionary theories emphasizing the adaptive function of rapid approach- and avoidance-related strategies. When children with a history of BI enter novel contexts, they tend to remain on the periphery, carefully watching but not engaging with novel objects or people. In these situations, a state of hypervigilance supports detailed processing of stimulus features but limits the more flexible and integrative processing of the broader context, which is necessary for fluid, reciprocal social interactions. From a neural perspective, automatic modes of processing engage a network of brain regions centered on subcortical, medial temporal structures, particularly the amygdala and anterior hippocampus, as well as components of the ventral prefrontal cortex (PFC) that are most heavily connected to these structures (Braver et al, 2007; Posner, 2012). These subcortical structures are evolutionarily old and relatively conserved across mammals, reflecting the adaptive advantage of this automatic, rapid mode of responding.

Whereas the automatic mode narrows attention to remain responsive to immediately present threats and rewards, the controlled mode is recruited when behavior is goal directed and dependent on the active maintenance of task-related goals, even if these goals are far removed from the immediate context. This control mode is described as reflective, endogenous, strategic, logical, and effortful. The control mode incorporates information beyond that which is immediately present, supporting more planful, reasoned and goal-directed behavior in comparison with behaviors regulated by the automatic mode. For example, engagement of controlled processing in novel contexts may allow BI children to more flexibly attend to and process novel situations and to access and implement previously learned social scripts. Moreover, controlled processing maintains a prolonged influence on behavior relative to the quick and short-acting influence of the automatic mode of processing. Controlled processes place extensive cognitive demands on the organism including working memory and self-monitoring and are therefore more resource demanding, less efficient and more slowly engaged than automatic modes of processing. Consistent with such a demanding, complex nature, this processing mode shows a later, more prolonged developmental time course, relative to automatic, reflexive modes of processing that guide behaviors from birth.

Controlled processing is further distinguished from automatic processing based on underlying neural systems. Controlled processes engage a network centered on the dorsolateral PFC (DLPFC). The DLPFC in turn draws on other regions that have a role in both controlled and automatic processing. These include the dorsal anterior cingulate gyrus, the anterior insula extending onto the ventro-lateral PFC, and the basal ganglia. Of note, this DLPFC-centered network encompasses regions, particularly the so-called 'granular' components of the PFC, that evolved relatively late compared with the brain regions that support automatic modes of processing. Considerable debate remains on the precise adaptive function conferred by these evolutionary changes in brain anatomy. Nevertheless, many compelling theories emphasize the role of this network in the flexible maintenance of goal-directed behaviors in contexts where stimulus contingencies change rapidly. Thus, for humans, the complex and rapidly changing nature of social interactions could represent one instance where flexible maintenance of goals in changing contexts confers a particularly important adaptive advantage.

DEVELOPMENTAL PERSPECTIVES

Cognitive and affective developmental theories rely heavily on dual process models (Eisenberg and Fabes, 1992; Evans, 2011; Kopp, 1982). As Rothbart’s model of temperament extends this focus to the ontogeny of individual differences (Rothbart and Derryberry, 1981), it is particularly relevant to BI and a developmentally focused, dual-processing model. In Rothbart’s model, temperament is conceptualized as constitutionally based individual differences in reactive and self-regulatory processes that influence children’s interactions with their environments (Rothbart and Bates, 2006). In Rothbart’s model, reactive tendencies reflect initial, immediate responses to changes in the internal or external environment, instances that elicit automatic orienting responses. Self-regulation in Rothbart’s model relates more closely to control processes and involves executive attention and effortful control (EC) in the service of goal-directed behavior. Given the differences in stimulus salience, time course, and automaticity, Rothbart’s model of reactive vs self-regulatory temperamental processes directly parallels the distinction between automatic and controlled processing detailed above.

Rothbart and colleagues further differentiate the automatic and controlled processes underlying temperament based on their associations with attention, information processing, and neural substrates (see Fan et al, 2002; Rothbart et al, 2007). For automatic processes, neural systems involved in attention orienting toward salient or cued locations feature prominently. The orienting system is stimulus driven and engaged on an 'as-needed' basis, influencing perceptual inputs and behavioral/affective outputs to quickly and efficiently promote physiological, neural, and behavioral responses supporting basic and immediate needs (Derryberry and Rothbart, 1997). The orienting functions are tied to a posterior network distributed across the superior colliculus, pulvinar, and parietal lobe (Posner and Raichle, 1994; Posner and Rothbart, 1992). Although not emphasized by Rothbart, these regions are known to interact with a more anterior brain network, encompassing the amygdala and ventro-lateral components of the PFC, as discussed by Corbetta and Shulman (2002).

In Rothbart’s model, control processes are tied to the executive attention system, which functions to monitor and resolve conflict between other brain networks in the service of goal-directed behavior (Botvinick et al, 2001; Rothbart et al, 2007). Similar to descriptions of the control mode of processing above, the executive attention system operates at a relatively slow pace and in an anticipatory manner, in a way that demands sustained effort and consumes energy. Rothbart and colleagues relate the executive attention system to an anterior network including the anterior cingulate gyrus, anterior insula, basal ganglia, and the DLPFC (Rueda et al, 2004).

Finally, Rothbart further differentiates automatic and control aspects of temperament based on the developmental course of their underlying neural systems. Orienting responses and individual differences in reaction tendencies are present from birth (DiPietro et al, 2008). The early-maturing nature of the automatic mode corresponds with the early maturation of the underlying brain networks linked to its functioning. These stimulus-driven reaction tendencies provide the dominant mode of information processing and attention regulation in early infancy (Rothbart et al, 2011). Over the toddler and preschool years, control processes gradually develop (eg, Jones et al, 2003), reflecting maturation of neural structures within the executive attention network. As the executive and orienting neural networks become more connected and integrated over childhood (Gao et al, 2009; Fair et al, 2009), control processes are thought to become an increasingly influential source of regulation over attention, emotion, and behavior.

With development, control processes continue to mature and increase in their complexity. For example, across childhood there are rapid improvements in inhibitory control reflected by faster and more accurate performance on tasks involving conflicting response options, such as the Go/No-Go or Eriksen Flanker task (McDermott et al, 2007; Wiebe et al, 2012). Development within the anterior attention network, centered around the DLPFC, supports improved behavioral performance through the implementation of more planful and proactive control strategies. Developmental research with a variant of the AX-continuous performance task (AX-CPT) demonstrates such changes in strategy and resulting performance over development. These improvements are thought to be supported by the DLPFC and surrounding regions as depicted in Figure 1.

Figure 1. The AX-CPT task and the presumed neural bases for reactive vs proactive responding.

In the AX-CPT task shown in Figure 1, children are presented with a series of cue-target sequences in which a cue ('A' or 'B') is followed by a target ('X' or 'Y'), and different sequences require different responses. Participants are instructed to make one response when the 'A' cue is followed by the 'X' target (eg, pressing '1' on a keypad). For all other cue-target sequences ('A-Y', 'B-X', and 'B-Y'), the child must provide an alternate response (eg, pressing '2' on the keypad). Optimal performance on the task requires both proactive and reactive control. Proactive control is more strongly cued by an 'A' cue than a 'B' cue because the identity of the target stimulus (ie, 'X' or 'Y') immediately following an 'A' determines the correct response, whereas the identity of the target following a 'B' is irrelevant because all B trials require the same response. Thus, as illustrated in Figure 1, 'A' cues are thought to engage a neural circuit encompassing the DLPFC, which initiates proactive control by maintaining task-related goals and associated target responses through the presentation of the critical 'X' or 'Y' event. As the 'A-X' sequence is more frequent than the 'A-Y' sequence, the presence of the 'A' event creates a bias toward the target response. This bias supports rapid, accurate, and efficient goal-directed behavior when the 'X' event follows, presumably through effective maintenance of task-related goal representations in a DLPFC-based circuit (Miller and Cohen, 2001). However, when a 'Y' target follows an 'A' cue, the child must override the dominant, planned behavioral response through engagement of a different circuit centered on the anterior cingulate cortex (ACC). Behaviorally, this shift is reflected in longer reaction times (RTs) to A-Y events relative to A-X events.
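
To make the scoring of this behavioral signature concrete, the sketch below computes the A-Y versus A-X reaction-time cost from a handful of hypothetical AX-CPT trials. The trial values, the tuple format, and the use of this cost as a simple proactive-control index are illustrative assumptions for exposition; they are not drawn from the task or analysis code of the studies reviewed here.

from statistics import mean

# Hypothetical trial records: (cue, target, reaction_time_ms, correct)
trials = [
    ("A", "X", 412, True), ("A", "X", 398, True), ("A", "Y", 566, True),
    ("B", "X", 430, True), ("B", "Y", 425, True), ("A", "X", 405, True),
    ("A", "Y", 590, True), ("B", "X", 441, True),
]

def mean_rt(records, cue, target):
    """Mean RT (ms) over correct trials of a given cue-target sequence."""
    rts = [rt for c, t, rt, ok in records if ok and c == cue and t == target]
    return mean(rts) if rts else float("nan")

ax_rt = mean_rt(trials, "A", "X")  # frequent sequence; response biased by the 'A' cue
ay_rt = mean_rt(trials, "A", "Y")  # rare sequence; the biased response must be overridden

# A larger A-Y cost suggests stronger goal maintenance (proactive control) from the cue.
proactive_index = ay_rt - ax_rt
print(f"A-X RT: {ax_rt:.0f} ms, A-Y RT: {ay_rt:.0f} ms, A-Y cost: {proactive_index:.0f} ms")

In this scheme, proactive responders show a large positive A-Y cost, whereas purely reactive responders show little or none.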

Using this task with children, Munakata and colleagues (see Morton and Munakata, 2002; Munakata et al, 2012) have charted developmental changes in the ability to engage proactive control strategies. In a cross-sectional study, Chatham et al (2009) reported that 8-year-olds, like adults, effectively used proactive control strategies on the AX-CPT task, as evidenced by significantly longer RTs to A-Y vs A-X sequences. In contrast, 3-year-old children's performance was characterized by a more reactive response style driven by the identity of the second stimulus, reflecting a failure to maintain the goal representations cued by the preceding 'A' stimulus. Thus, although basic control processes first emerge in the toddler and preschool years, they continue to develop over the course of childhood, coming to guide behavior in increasingly planful and efficient ways.

Of note, the trajectory of risk in children with BI unfolds as these two systems mature. Brain imaging studies suggest that adolescence provides an inflection point for the interacting maturation of these two information-processing systems (eg, Casey et al, 2011; Crone and Dahl, 2012; Somerville et al, 2011). As such, adolescence is also expected to be a key period when children with a history of BI may overcome their risk for anxiety in general or SAD in particular. Moreover, as patterns of brain function coalesce in adolescence, anxiety problems may likewise become more entrenched. Indeed, longitudinal data provide some signs of greater persistence for anxiety that is expressed in adolescence relative to earlier developmental stages.

USING DUAL-PROCESS THEORY TO INFORM MODELS OF RISK IN BI

In largely independent lines of research, Rothbart and Munakata both suggest that more mature control processes do not replace automatic modes of responding during development. Rather, development is marked by an increasing integration of these processes and their underlying neural systems. Optimal task performance, and effective self-regulation more broadly defined, depend on the flexible implementation of both modes of processing based on specific contextual and task demands. Sensitivity to these specific demands is shaped at least partly by individual differences in temperamental reaction tendencies (eg, Paelecke et al, 2012; Wolfe and Bell, 2007). We present these ideas schematically in Figure 2, which illustrates the normative developmental shift from primarily automatic processing to more controlled processing (moving from left to right in the figure). In panel 'a', a relatively adaptive developmental pathway is illustrated in which the automatic and controlled processing systems become integrated and can be flexibly recruited based on specific task demands. In panel 'b', a less adaptive pathway is illustrated in which the processes and their underlying networks are less integrated and one remains more dominant than the other. Based on these ideas, we outline three potential models that generate predictions about the joint influence of individual differences in automatic and controlled processing on differential developmental risk for children with a history of early BI.

Figure 2. A heuristic developmental model illustrating the shift from primarily automatic to both automatic and controlled processing over development (ie, moving left to right in the figure). Superimposed on these normative changes are individual differences in the integration and relative balance of the neural systems underlying automatic and controlled processing. In panel a, the two systems are both accessible and function in a complementary manner, indexing a more adaptive profile. In panel b, one system maintains more dominance and/or there is less integration between the systems, indexing a less adaptive profile.

Top-Down Model of Control

Derryberry and Rothbart (1997) described several ways in which the development of control or regulatory processes could interact with reactive or automatic modes of processing to shape risk for anxiety in individuals prone to social withdrawal, including young children with BI. In one scenario, control processes regulate automatic response tendencies in a top-down manner, such that developing control strategies allow individuals with BI to shift their attention away from threat, minimizing hypervigilance and withdrawal. Based on this model, children with a history of BI who are high in EC would be at reduced risk for social and emotional maladjustment. Similar 'top-down' models have been proposed in the study of child internalizing disorders in which control processes are hypothesized to moderate risk for disorder in children with high levels of temperamental negative affect (Lonigan and Phillips, 2001). In such top-down models, automatic and controlled processes, and their underlying neural systems, are viewed as developing relatively independently of one another and therefore as having simple additive effects on later outcomes. Under such a model, a history of BI would be hypothesized to be orthogonal to (ie, unrelated to) performance on standard cognitive control tasks, and an enduring tendency to process information in a reflexive and automatic way could be overcome by the proficient use of controlled processing strategies, reducing risk for maladaptive social and emotional outcomes.

Risk Potentiation Model of Control

A second plausible model describing the joint influence of automatic and controlled processing strategies on outcome for BI children is that the reactive style of responding to potential threats and signals of novelty becomes a BI child’s default mode of responding. In an effort to regulate associated affective states, the controlled processing network is recruited frequently, but the cognitive states associated with control (ie, holding rules and goals in working memory, closely monitoring behavior relative to these goals) potentiate, rather than regulate, underlying feelings of fear and apprehension. In this model, automatic and control processes create a positive feedback loop where one initiates and/or maintains the other. Under such a model, it would be hypothesized that children with a history of BI and those at risk for later anxiety would not display deficits on controlled processing tasks. However, in contrast to the top-down model, in this model cognitive control processes potentiate risk by supporting extended processing and elaboration of fear-based cognitions, increasing risk for later social and emotional maladaptation (see Derryberry and Rothbart, 1997). This extended processing could in turn adversely influence children’s developing cognitive representations and affect regulation (Derryberry and Reed, 1996) and set the stage for maladaptive cognitive processes including rumination and anxious apprehension.

Overgeneralized Control Model

A third hypothetical model postulates that the early automatic response biases of BI children support associative learning across a variety of contexts and lead to overgeneralized orienting reactions and the implementation of control strategies in contexts that do not require them. As such, these overgeneralized responses limit the flexibility with which both automatic and controlled processes are implemented by reducing the specificity with which responses are tied to particular environmental cues. Similar to the Risk Potentiation Model, there is an assumption of a developmental dependence between the two systems, whereby a failure to map automatic or reactive processes to situational cues affects a child's ability to adaptively titrate the implementation of later-developing control processes. This failure to accurately map processing strategies onto internal and external contextual cues results in inefficient 'toggling' between different processes. Unlike the two previous models, the focus is not on the mean levels of automatic or controlled processing but rather on the contextual specificity with which each is implemented. This overgeneralization limits the adaptive function of either processing strategy. Moreover, it results in increased risk of social and emotional maladaptation for children with a history of BI.

In the following sections, we review data on BI in relation to automatic and controlled information processing. First, we consider data on the relations among BI, automatic modes of information processing involved in attention orienting, and risk for SAD. Then, we review studies examining the relations among BI, control processes, and risk. Following the reviews, results are integrated and interpreted with reference to the three hypothetical models described above.

AUTOMATIC MODES OF PROCESSING AND BI

Attention Orienting in BI

Automatic modes of processing are assessed using a variety of attention orienting tasks. Processing biases are expressed based on the duration and latency of orienting responses toward different types of cues. Concurrent and longitudinal studies link BI to enhanced orienting toward a variety of motivationally significant cues (see Table 1a). In addition, several longitudinal studies suggest that this tendency to rapidly engage automatic orienting responses heightens risk for anxiety in children with a history of BI (see Table 1b).

Table 1A Summary of Studies Relating BI to Automatic Information Processing
Table 1B Summary of Studies Demonstrating Moderating Effect of Automatic Information Processing Biases on Anxiety-Related Outcomes

Kagan and colleagues linked the behavioral and physiological profile of children with BI to neural models of fear potentiation and conditioning (eg, Davis, 1986; LeDoux et al, 1988). They noted that the freezing and hypervigilant responses of children high in BI paralleled those observed across species in response to fear-eliciting stimuli. Drawing on animal models, Kagan hypothesized that the behavioral and physiological profile of BI children arose from a biased tendency to engage the amygdala and associated circuitry supporting fear conditioning (LeDoux, 2000). This emphasis on automatic modes of responding and associated amygdala reactivity stimulated many studies focused on BI in relation to activity in downstream physiological systems that are influenced by automatic response circuits. This includes motor response patterns, markers of activity in the autonomic nervous system (ie, heart rate and vagal tone; Kagan et al, 1987), cortical activity patterns (Calkins et al, 1996; McManis et al, 2002), and neuroendocrine profiles (ie, cortisol; Kagan et al, 1988; Schmidt et al, 1997). Recent advances in neuroscience have allowed this original amygdala-based model to be expanded to include broader perturbations in a distributed neural circuit, encompassing components of the PFC and striatum (Bar-Haim et al, 2009; Bishop et al, 2004; Guyer et al, 2006; Hardee et al, 2013; Helfinstein et al, 2012). Common across these diverse models is an emphasis on the relations between BI and hypersensitivity in neural circuitry rapidly engaged by automatic modes of processing and a resulting behavioral sensitivity to motivationally salient cues.

In his early work, Kagan highlighted the links between BI and automatic modes of processing with an emphasis on attention orienting. This emerged from observations of toddlers’ attention deployment under varying stimulus conditions. Garcia Coll et al (1984) reported that 21-month olds with signs of BI had significantly higher and more stable heart rates than toddlers without signs of BI in response to visual and auditory stimuli. Importantly, these differences occurred specifically when children attended to stimuli that were discrepant and unfamiliar, such as scrambled pictures or novel sounds. These differences were thought to reflect initial orienting responses, leading attention in children with BI to become quickly and automatically ‘captured’ by unfamiliar stimuli.

Prospective longitudinal studies suggest that heightened automatic processing may be a core feature of the BI phenotype and precede the behavioral expression of BI in toddlerhood. Specifically, 4-month-old infants who express high levels of motor agitation and distress to presentations of increasingly complex unfamiliar sounds, sights, and smells display significantly higher levels of BI at 14 months of age relative to less reactive infants (Calkins et al, 1996; Kagan and Snidman, 1991; Moehler et al, 2008). Such highly reactive infants also display enhanced sensitivity to novelty at 9 months of age compared with less reactive infants, as indexed by exaggerated startle reactions in the presence of an unfamiliar adult, reflecting heightened reactivity of a brainstem-mediated defensive reflex (Schmidt and Fox, 1998). These findings in infants resemble findings in older children, adolescents, and adults, where potentiated startle has been linked to BI and anxiety disorders (Barker et al, 2014; Reeb-Sutherland et al, 2009a, 2009b; Lissek et al, 2005; Mineka and Zinbarg, 2006).

Research using electrophysiology to index automatic modes of processing has identified differences as a function of temperament. Marshall et al (2009) examined the neural correlates of novelty processing using a three-stimulus auditory oddball event-related potential (ERP) paradigm with 9-month-old infants. The stimuli were frequent standard tones, infrequent deviant tones, and a set of complex novel sounds. Relative to low-reactive infants, infants with high reactive tendencies at 4 months exhibited larger amplitude positive slow wave ERP responses to the deviant tones across frontal, central, and parietal midline recording sites. This enhanced positive slow wave is interpreted as reflecting the heightened sensitivity of the orienting network in highly reactive infants.

Studies of sensory and motor responses provide important data on automatic orienting reactions to novelty as they manifest early in development. However, such research is poorly suited for mapping the particular neural structures that relate to such automatic modes of processing. The spatial resolution of functional magnetic resonance imaging (fMRI) is better suited for mapping these neural systems. Several studies demonstrate that early BI predicts structural and functional properties in subcortical and cortical networks related to salience detection, threat sensitivity, and attention orienting. For example, Schwartz et al (2003) reported that young adults (mean age 21.8 years), who were identified as high in BI during the second year of life, relative to those who had expressed no such tendencies, showed enhanced bilateral amygdala responses when passively viewing novel vs familiar faces. For males in this sample, those classified as high in negative reactivity as 4-month olds showed particularly strong right amygdala responses (Schwartz et al, 2012).

Other fMRI research employs tasks previously used with patients with anxiety disorders to generate insights into the neural correlates of automatic processes in BI. For example, Perez-Edgar et al (2007) examined the relations between early BI and amygdala responses using a face-viewing paradigm employed extensively in research on anxiety disorders. In this study, adolescents with a history of childhood BI, relative to adolescents with no such history, showed exaggerated amygdala responses when they rated their own fear in response to the faces, a response pattern comparable to that displayed by adolescents with anxiety disorders in prior studies (McClure et al, 2007). However, in other respects, adolescents with a history of BI responded differently from adolescents with anxiety disorders. Although clinically anxious children's exaggerated amygdala responses were specific to negatively valenced stimuli (McClure et al, 2007), BI related to exaggerated amygdala responses to all expressions, regardless of valence, suggesting an overgeneralized reaction tendency.

A comparable set of behavioral and fMRI studies examined automatic modes of processing in BI and in relation to anxiety disorders using attention orienting tasks. These studies rely on the dot-probe paradigm, in which biased attention orienting toward threat is reflected in patterns of responding to neutral probes (eg, Krain Roy et al, 2008). In this paradigm, participants are presented with two side-by-side faces differing in emotional expression. One face is neutral, whereas the other displays a positive or negative emotional expression (ie, happy or angry). The faces are followed by a neutral 'probe' target that requires a particular motor response. On congruent trials, the probe appears in the location previously occupied by the affective face, and on incongruent trials the probe appears in the location previously occupied by the neutral face. Bias scores are computed, separately for the angry and happy conditions, by subtracting the mean RT on congruent trials (probe replacing the emotion face) from the mean RT on incongruent trials (probe replacing the neutral face). As such, positive values index vigilance toward the emotion faces and negative values index attentional avoidance of the emotion faces.
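
To make this scoring rule concrete, the following sketch computes dot-probe bias scores from a small set of hypothetical trials. The trial format, the reaction-time values, and the helper function are illustrative assumptions for exposition; they do not reproduce the stimulus parameters or analysis code of the studies reviewed here.

from statistics import mean

# Hypothetical trial records: (emotion of the non-neutral face, probe location, RT in ms).
# The probe location is "emotion" (congruent trial) or "neutral" (incongruent trial).
trials = [
    ("angry", "emotion", 480), ("angry", "neutral", 515), ("angry", "emotion", 470),
    ("happy", "emotion", 500), ("happy", "neutral", 495), ("angry", "neutral", 520),
]

def bias_score(records, emotion):
    """Mean incongruent RT minus mean congruent RT for one emotion condition."""
    congruent = [rt for emo, loc, rt in records if emo == emotion and loc == "emotion"]
    incongruent = [rt for emo, loc, rt in records if emo == emotion and loc == "neutral"]
    return mean(incongruent) - mean(congruent)

print("Angry bias:", bias_score(trials, "angry"))  # positive value: vigilance toward angry faces
print("Happy bias:", bias_score(trials, "happy"))  # negative value: avoidance of happy faces

In this toy data set the angry bias is positive and the happy bias is slightly negative, loosely mirroring the pattern described below for adolescents with a childhood history of BI.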

Behavioral and fMRI findings from the dot-probe task link BI to specific patterns of attention orienting. Both behaviorally and neurally, BI individuals demonstrate automatic processing biases for threat-relevant stimuli. Perez-Edgar et al (2010a) reported that adolescents with a childhood history of BI showed a significant bias for angry, but not happy, faces. In contrast, adolescents with no history of BI showed a significant bias for happy, but not angry, faces. This interaction of BI and emotion condition held only under relatively short stimulus presentation conditions, suggesting that BI is specifically associated with early, automatic, and reactive attention biases. Similarly, in a subsample of these participants assessed in early adulthood, Hardee et al (2013) used the dot-probe task to link BI to fronto-amygdala connectivity. In this study, fronto-amygdala connectivity was more variable in BI than in non-BI participants when contrasted across the threat and no-threat events of the dot-probe task. These differences in connectivity occurred in a circuit connecting the amygdala and ventral PFC, thereby implicating circuits involved in automatic modes of processing.

Reward Processing in BI

Automatic modes of processing are deployed to salient stimuli of both positive and negative valence. Studies using the dot-probe task link BI to enhanced automatic orienting to negatively valenced stimuli. However, four other studies link BI to enhanced reactivity to positively valenced stimuli as well. For such stimuli, the striatum represents a key subcortical structure that interacts with the PFC to support more automatic modes of processing. Guyer et al (2006) found that adolescents with a history of childhood BI showed greater striatal sensitivity to incentives, relative to adolescents without a history of childhood BI. Perez-Edgar et al (2014) suggested that this enhanced sensitivity specifically occurs in the subset of children with BI who possess a particular dopamine-related genotype. Bar-Haim et al (2009) reported similar patterns, specifically when reward outcomes were contingent on a correct behavioral response by the participant. These first three findings link BI to enhanced striatal responding during reward anticipation. In a fourth study, Helfinstein et al (2011) extended these findings by showing relations between BI and striatal sensitivity to reward feedback.

Guyer et al (2014) extended these studies by using a paradigm with high ecological validity to examine striatal sensitivity to social feedback in adolescents with and without a childhood history of BI. In this study, adolescents rated a series of photographs of other adolescents in terms of whether they would or would not like to interact with them in an online chatroom. In a later visit, participants were presented with fictitious feedback from these peers about whether they were interested (or not) in interacting with the participant. Adolescents with a history of BI showed greater putamen activation when anticipating feedback from peers they had expressed an interest in interacting with versus those they had not selected. Interestingly, this pattern of hyperactivation was specific to anticipating, but not actually receiving, feedback.

In summary, BI relates to automatic modes of processing in response to motivationally salient stimuli across a variety of contexts. This includes observations of infant behavior as well as quantifications of physiological responses and ERPs to novel stimuli. Early in development, this sensitivity appears specific to novel and threat-relevant stimuli, but at older ages there is evidence that a history of BI is associated with more generalized sensitivity to both negatively and positively valenced events. Finally, this enhanced reactivity is expressed in fMRI studies in two key subcortical nodes, the amygdala and the striatum, as well as in their extensions to the PFC. The fact that these automatic processing biases are expressed behaviorally and neurally, even when assessed many years after the original assessment of BI, suggests that early BI reflects a stable response style that persists through adolescence and is manifest, in part, in the biased development of these underlying neural networks.

Risk Moderation

Several studies suggest that heightened automatic processes moderate the relations between early BI and risk for later social maladjustment or anxiety (see Table 1b). From the first year of life, individual differences in attention relate to developmental trajectories of BI children. Specifically, Perez-Edgar et al (2010b) examined the relations between automatic attention orienting at 9 months of age and BI assessed repeatedly between 14 months and 7 years of age. An interrupted stimulus attention paradigm was used to quantify the degree to which attention was captured by neutral, suddenly appearing, task-irrelevant stimuli. The study found that 9-month-old infants who quickly and consistently had their attention diverted from a central visual cue toward the peripheral task-irrelevant cues went on to show a pattern of increasing BI over childhood. Further, initial levels of BI predicted observed social discomfort in a laboratory peer dyad assessment in adolescence (mean age 14.02 years), but only for participants who displayed relatively low levels of sustained attention at 9 months. It is interesting to note that 9-month orienting was not associated with mean levels of BI at 14 months but rather with a pattern of increasing BI over time. This finding suggests that global attention orienting may be an associated characteristic of BI (vs a core feature) and that reciprocal associations arise among attention orienting, BI, and developmental risk.

The idea of reciprocal influences between attention orienting, BI, and developmental risk is further supported by findings from five studies that used either novel or negatively valenced stimuli within the context of an attention assessment. First, Reeb-Sutherland et al (2009b) examined the modulatory role of sensitivity to novelty in adolescence, based on the P3 ERP component to infrequent, complex novel sounds in an auditory oddball task. This study found that hypersensitivity to novelty significantly increased risk for anxiety diagnoses, but only among adolescents with a childhood history of BI. Similar findings emerged in a second report from Reeb-Sutherland et al (2009a), where risk moderation was indexed by an enhanced startle response, not to threat, but to safety cues. Finally, three studies used the dot-probe task to demonstrate similar patterns of risk moderation by automatic modes of processing. In one study, Perez-Edgar et al (2011) found that BI in early childhood was unrelated to threat biases at 5 years, but that the relation between early BI and 5-year anxiety was specific to those children who did display threat biases. In a second study, Perez-Edgar et al (2010a) found a similar association in which early BI predicted social withdrawal in adolescence, but only for those who also showed an attention bias to threat in adolescence. In the third study, Hardee et al (2013) used fMRI with the dot-probe task in the same cohort studied by Perez-Edgar et al (2010a), assessed as young adults. This latter study found that patterns of amygdala–prefrontal connectivity moderated the relation between childhood BI and internalizing symptoms in young adulthood.

In summary, automatic modes of information processing characterize all infants' early interactions with their social and non-social environments. These quick, stimulus-based reactions support associative learning and short-term adaptation. However, as demonstrated by the BI literature, when exaggerated or over-generalized, such strong reaction tendencies may adversely affect social and emotional development by biasing the type of information that is attended to. The available data on automatic modes of processing support Kagan's original proposition regarding the relations between temperament and enhanced orienting to novelty. Moreover, strong and enduring automatic processing biases moderate the impact of BI on developmental risk. These risk moderation studies suggest that the correspondence between BI and automatic processing biases is not one-to-one, but rather that the two influence each other reciprocally. Rather than being a core feature of the BI phenotype, automatic processing biases may be better conceptualized as a mechanism that sustains consistently high levels of BI over childhood and thereby heightens risk for social maladjustment and anxiety. In the next section, we review the literature on the relations between BI and later-developing control processes, and their implementation in the service of self-regulation. The outcome for children with BI, in terms of their risk for anxiety, is thought to reflect interactions between these earlier- and later-developing processes.

CONTROL PROCESSES AND BI

Control Processes and Risk

In Rothbart’s model of temperament, EC is a broad-band factor encompassing various control processes. These processes include inhibitory control, attention shifting, conflict monitoring, and response monitoring—processes that are attributed to the functioning of the executive attention network involving the DLPFC (Rothbart et al, 2007). Unlike more automatic modes of responding, these control processes appear more gradually over the course of early childhood and show a prolonged period of development well into adolescence (Rothbart and Rueda, 2005). Control processes place significant demands on the brain’s ability to represent and maintain goals as well as to monitor the progress and success of these goal-oriented behaviors. The benefits of these control processes are widely noted. For example, in a large birth cohort, Moffitt et al (2011) reported that a childhood history of relatively strong inhibitory control predicted greater physical and mental health well into adulthood, at age 32. Importantly, these associations held after controlling for financial background factors and intelligence, suggesting a unique causal role for control processes in promoting healthy development.

More nuanced relations emerge in other work on the association between controlled processing and social or emotional functioning. Consistent with Moffitt et al (2011) and top-down models of control, across a series of studies, low levels of control consistently appear to confer risk. However, other studies reveal that, rather than being protective, high levels of control may also confer risk. For example, Carlson and Wang (2007) examined the association between observational measures of inhibitory control and emotion regulation in 5-year-old children. Overall, they found the expected positive correlation between inhibitory control and effective emotion regulation. In addition, there was a quadratic effect, reflecting the fact that emotion regulation skills were highest for children with average levels of inhibitory control. Consistent with the temperament model of Eisenberg and Fabes (1992), these findings suggest that both over- and under-control carry risks, at least for some temperaments (Fox, 2013).

Although BI is consistently associated with enhanced automatic responses, data on the relations among control processes, BI, and risk are less consistent. Some reports find that internalizing problems are associated with particularly high levels of control (eg, Oosterlaan et al, 1998); other reports find no association (for a review, see Oosterlaan, 2001). Such inconsistencies may reflect the fact that high levels of control only carry risk in a subgroup of children, possibly those who also possess exaggerated automatic response tendencies. In particular, as BI is consistently linked with strong automatic processing biases, control processes may moderate risk in children with BI. Several studies have evaluated this possibility using behavioral, electrophysiological, and neuroimaging paradigms (see Table 2).

Table 2 Summary of Studies Relating BI, Controlled Attention Processes, and Anxiety-Related Outcomes

In a longitudinal study, Thorell et al (2004) examined the relations between BI and inhibitory control at age 5 and their association with mental health problems 3 years later. At age 5, BI was weakly but significantly correlated (r=0.21) with better inhibitory control on a go/no-go task. Longitudinally, later-developing social anxiety was predicted by a non-linear association between BI and inhibitory control such that children high in both BI and inhibitory control faced the highest risk for social anxiety. As noted by Thorell et al (2004), these data demonstrate that cognitive control processes may potentiate risk in the context of heightened automatic response strategies, as reflected in early BI.

Other data speak to the relations among BI, specific control processes, and risk. For example, White et al (2011) examined the relations between BI assessed at 24 months and two separate control processes, attention shifting and inhibitory control, assessed in the laboratory at age 4. The authors further examined how BI and these different control processes related to anxiety symptoms at 4 and 5 years of age. Across the full sample, BI was unrelated to either attention shifting or inhibitory control and only modestly related to later anxiety problems (r=0.22). However, relations between early BI and later anxiety were moderated, although in different ways, by the different control processes. Specifically, BI predicted later parent-reported anxiety only among children who were relatively low in attention shifting or relatively high in inhibitory control. This finding illustrates the importance of considering the specific functions of different control processes in relation to the regulatory challenges faced by BI children. Consistent with the Thorell et al (2004) findings, inhibitory control appears to potentiate risk for BI children who already have heightened levels of automatic control over attention and information processing. In contrast, more consistent with top-down models of control, attention shifting may be protective because it directly facilitates more flexible information processing for BI children and minimizes the potential for developing biases toward selectively attending, processing, and elaborating on threat-relevant stimuli and experiences.

Finally, as noted above, adolescence is thought to represent a key developmental period when these interactions unfold and influence risk. Such an unfolding presumably involves both the unique genetic and environmental influences of this age period. However, existing longitudinal studies examine children with BI at relatively widely spaced intervals from late childhood through adolescence. Thus, it is difficult to precisely delineate the timing and nature of processes that lead the two information-processing systems to interact in a way that creates risk for SAD or other forms of anxiety. More research is needed that charts such processes, specifically extending available data on neural predictors of outcome in BI.

Neural Correlates of Control

Extending behavioral studies of controlled processes, three reports examined the neural correlates of different control processes using two ERPs, the N2 and the error-related negativity (ERN). The N2 is elicited by incompatible stimulus displays (ie, no-go trials on a Go/No-Go task; incompatible trials on a Flanker task) and is thought to signal the need for inhibitory control to allow subdominant responses to be performed. Thus, the N2 triggers behavioral changes that support successful performance on the trial at hand. In contrast, the ERN is elicited after an incorrect response on speeded reaction time tasks and is thought to signal the need for behavioral change (ie, slowing down) to support improved future performance.

Henderson (2010) examined self-reported shyness in middle childhood as it relates to the N2 ERP, using a modification of the Eriksen Flanker task. Although shyness was unrelated to behavioral or physiological indices of inhibitory control, the N2 amplitude moderated the association between shyness and both social anxiety and negative attribution biases. For children with relatively large N2 responses, shyness was associated with heightened social anxiety and a more negative attribution style, a pattern consistent with the potentiating effects of inhibitory control on risk for BI children reported above. Similar findings arose in two studies on BI. In one of these, Lahat et al (2014b) found that a history of BI in toddlerhood predicted a larger amplitude N2 response on a Go/No Go task at age 7. Moreover, the combination of high BI and larger amplitude N2 responses predicted less competent social behavior with an unfamiliar peer. In the other, Lamm et al (in press) found a similar relation between early BI and the N2 amplitude on a Go/No-Go task, also at age 7. Moreover, using LORETA to model source space activation, Lamm et al (in press) localized the source of the BI-related differences on incompatible trials to higher estimated dorsal ACC and DLPFC activation. Finally, as in both the Henderson (2010) and Lahat et al (2014b) studies, Lamm et al (in press) found that the N2 amplitude moderated the effects of early BI on later social functioning, with BI predicting a composite of observed social reticence with an unfamiliar peer and parent-reported fear and shyness, but only among children with relatively large N2 amplitudes.

Together these findings link early BI to functioning in brain regions previously implicated in control processes. These regions include the ACC and DLPFC, as indexed by enhanced amplitude N2 responses. It is also important to note that in both the Lahat et al (2014b) and Henderson (2010) papers, the moderating effect of the N2 amplitude was apparent whether incompatible (high conflict) or compatible (low conflict) trials were examined, suggesting the possibility that over-generalized neural sensitivity and engagement of conflict monitoring could be a specific mechanism linking BI to anxiety.

Performance monitoring represents a second aspect of control implicated in BI and anxiety. The ERN is an ERP component elicited following the enactment of a behavioral response, thereby indexing performance monitoring. Thus, the ERN captures the extent to which one notices, and reacts to, discrepancies between one's intended and actual behaviors during goal-directed activities. Several research groups have reported that children and adults with elevated state anxiety or anxiety diagnoses show exaggerated ERN responses (eg, Gehring et al, 2000; Hajcak et al, 2003; Ladouceur et al, 2006; McDermott et al, 2009; Pailing and Segalowitz, 2004). The ERN is attributed to activity in the mPFC and is thought to reflect heightened attention to errors in performance (Yeung et al, 2004). In a recent meta-analysis, Moser et al (2013) reported that the relation between anxiety and the ERN is due primarily to the strong relation between the ERN and the apprehension/worry, as opposed to arousal, aspects of anxiety. Again, consistent with models of the potentiating effect of control processes, and with the N2 results reported above, the Moser et al (2013) meta-analysis suggests that engaging monitoring processes can create a positive feedback loop for anxiety-prone individuals, leading to excessive self-focus and worry, which are core aspects of anxious apprehension.

Two recent studies demonstrate that response monitoring and the ERN may be a particularly important mechanism linking BI to later risk for clinically significant anxiety. McDermott et al (2009) reported that adolescents with a childhood history of BI (14 months to 7 years) showed enhanced ERN responses on a modified Flanker task relative to adolescents without a history of BI. Further, the relation between earlier BI and the probability of a lifetime diagnosis of clinically significant anxiety, as assessed by semistructured interviews with adolescents and their parents, was moderated by the ERN amplitude. For children with a history of BI, smaller ERN amplitudes tended to reduce the probability of a lifetime diagnosis (OR=0.82, p=0.06), whereas the ERN was unrelated to the probability of diagnosis for children without a history of BI. Importantly, as was the case in the N2/conflict detection studies by Lahat et al (2014b) and Henderson (2010), a history of BI was unrelated to behavioral performance (error rates or reaction times) on the Flanker task, suggesting that the exaggerated cortical responses did not support superior task performance. Examining the association between the ERN and clinically significant anxiety longitudinally, Lahat et al (2014a), in a separate study, reported that BI in toddlerhood was associated with larger ERN amplitudes at 7 years of age. Further, the magnitude of the difference between ERPs to error vs correct responses moderated the association between early BI and symptoms of social phobia at 9 years. Specifically, early BI predicted maternal- and child-reported social phobia, but only among children who had strong neural responses to their own errors at 7 years of age.
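
The odds ratio described above is the kind of quantity produced by a moderated logistic regression: lifetime diagnosis (0/1) is regressed on BI history, ERN amplitude, and their interaction, and exponentiated coefficients are interpreted as odds ratios. The sketch below mirrors only that general analytic form, with simulated data and hypothetical variable names; it is not the McDermott et al (2009) analysis.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 200
    # Simulated, hypothetical data: BI group (0/1), ERN amplitude, and a lifetime
    # anxiety diagnosis generated so that the ERN matters mainly within the BI group
    df = pd.DataFrame({"bi": rng.integers(0, 2, n), "ern": rng.normal(size=n)})
    logit_p = -1.0 + 0.8 * df.bi + 0.6 * df.bi * df.ern
    p = 1.0 / (1.0 + np.exp(-logit_p))
    df["diagnosis"] = rng.binomial(1, p.to_numpy())

    model = smf.logit("diagnosis ~ bi * ern", data=df).fit(disp=False)
    print(np.exp(model.params))  # odds ratios per unit change in each predictor

    # Within the BI group, the ERN effect combines the main effect and the interaction
    print("OR for ERN within BI group:", np.exp(model.params["ern"] + model.params["bi:ern"]))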

It is important to note that all of the studies of BI and the neural correlates of control reviewed above used paradigms with non-affective stimuli. To further understand the specific regulatory demands faced by children with a history of childhood BI, Jarcho et al (2013) conducted an fMRI study in which they administered an emotion conflict task (Etkin et al, 2006) to young adults with and without a history of childhood BI. In this task, pictures of fearful and happy facial expressions are displayed with the word ‘Happy’ or ‘Fear’ overlaid on the picture. On compatible trials, the expression and the word are the same; on incompatible trials they differ, eliciting conflict. Behavioral and neural responses on incompatible vs compatible trials serve as an index of emotional conflict detection. When an incompatible trial follows a compatible trial, RTs tend to be longer, indexing conflict detection. In contrast, this interference effect does not occur when an incompatible trial follows another incompatible trial, a phenomenon referred to as conflict adaptation, which indexes the ability to retain the demands of the first incompatible trial in working memory in order to facilitate performance on the next incompatible trial. Interestingly, adults with generalized anxiety disorder fail to demonstrate this adaptation and instead continue to experience interference on incompatible trials, as though each trial is processed in isolation from the others. The ability to maintain cognitive representations online to facilitate future behavior is attributed to the functioning of an amygdala–mPFC circuit that supports healthy emotion regulation (Etkin and Schatzberg, 2011). Jarcho et al (2013) reported that adults with and without a history of BI did not differ in their behavioral performance on the task (with both groups showing comparable levels of conflict detection and neither showing evidence of conflict adaptation). However, adults with a history of childhood BI exhibited greater dmPFC activity during conflict trials than adults without a history of BI. In addition, on conflict adaptation trials (ie, incompatible trials that directly follow another incompatible trial), adults with a history of BI showed enhanced bilateral putamen responses, whereas adults without a history of BI showed the opposite pattern of increased striatal reactivity on incompatible trials directly following compatible trials. Once again, the general finding was that, in the absence of any performance benefits, a childhood history of BI had an enduring influence on the extent of neural reactivity to the eliciting conditions.
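
Operationally, conflict detection and conflict adaptation are contrasts over trial sequences: detection compares incompatible with compatible trials, whereas adaptation compares the interference observed after compatible trials with the interference observed after incompatible trials. The sketch below computes both effects from a hypothetical trial-level data frame with simulated reaction times; it illustrates the logic of the task analysis only and is not drawn from the Jarcho et al (2013) data.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)
    n_trials = 200
    # Hypothetical trial-level data: trial type ('C' compatible, 'I' incompatible)
    # and reaction time in ms, simulated to be slower on incompatible trials
    trial_type = rng.choice(["C", "I"], size=n_trials)
    rt = rng.normal(600, 80, size=n_trials) + np.where(trial_type == "I", 40, 0)
    df = pd.DataFrame({"trial_type": trial_type, "rt": rt})
    df["prev_type"] = df.trial_type.shift(1)
    df = df.dropna()

    # Conflict detection: incompatible minus compatible mean RT
    detection = (df.loc[df.trial_type == "I", "rt"].mean()
                 - df.loc[df.trial_type == "C", "rt"].mean())

    # Conflict adaptation: interference after compatible trials (cI - cC) minus
    # interference after incompatible trials (iI - iC); positive values indicate
    # reduced interference when the previous trial also involved conflict
    seq = df.groupby(["prev_type", "trial_type"])["rt"].mean()
    adaptation = (seq["C", "I"] - seq["C", "C"]) - (seq["I", "I"] - seq["I", "C"])

    print(f"conflict detection effect:  {detection:.1f} ms")
    print(f"conflict adaptation effect: {adaptation:.1f} ms")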

REVISITING DUAL-PROCESS MODELS OF RISK IN BI

As the review above demonstrates, there is a growing literature linking early BI to behavioral and neural indices of both automatic and controlled information processing. Below we revisit the three hypothetical models linking BI, automatic and controlled processing, and developmental risk in light of the reviewed studies.

Top-Down Model of Control

This model postulates that the development of controlled information-processing strategies provides a consistent source of regulation over the early automatic processing biases exhibited by children with BI. This perspective is consistent with population-based studies demonstrating positive impacts of self-control, broadly construed, on a variety of developmental outcomes (Moffitt et al, 2011), as well as with models of risk moderation in relation to childhood internalizing problems (eg, Lonigan and Phillips, 2001). Such models hypothesize that domain-general control processes universally ‘dampen’ or regulate the strong reactions elicited in affectively or motivationally significant contexts in the service of successful goal-directed behavior. A further assumption of such direct-effect models is that automatic and controlled processes are developmentally independent of each other, such that early automatic processing biases have little impact on the developmental course of more controlled processes. The review above suggests that the top-down model is too general to account for differential risk in children with a history of BI. Specifically, the influence of behavioral indices of control on risk for children with a history of BI depends on the particular control process being assessed. For example, White et al (2011) found that performance on an attention shifting task was indeed protective against the development of anxiety in children with early BI. Importantly, however, in the same sample of children, performance on an inhibitory control task not only failed to protect against later anxiety but actually conferred increased risk. This study in particular highlights the need for future research to describe with greater specificity the neural, attentional, and cognitive demands of specific control tasks. As well, the findings reviewed suggest that rather than studying the ‘main effects’ of controlled processing, with the assumption that they are universally beneficial, more nuanced relations will be uncovered by focusing on the interplay of children's automatic or reactive biases with these control processes. By doing so, research will move beyond the study of global regulatory skills to more specific questions about which control functions best regulate which automatic or reactive tendencies.
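
Analytically, this argument amounts to a contrast between a main-effects specification, in which each control measure is assumed to benefit all children equally, and a moderation specification, in which the effect of each control measure (eg, attention shifting vs inhibitory control) is allowed to depend on early BI. A minimal sketch of that comparison, with simulated data and hypothetical variable names, is shown below.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n = 150
    # Hypothetical, simulated measures: early BI, attention shifting, inhibitory control
    df = pd.DataFrame({
        "bi": rng.normal(size=n),
        "shifting": rng.normal(size=n),
        "inhibition": rng.normal(size=n),
    })
    # Later anxiety constructed so that control measures matter differently by BI
    df["anxiety"] = (0.3 * df.bi - 0.3 * df.bi * df.shifting
                     + 0.3 * df.bi * df.inhibition + rng.normal(size=n))

    # Main-effects ("top-down") specification vs moderation specification
    main_effects = smf.ols("anxiety ~ bi + shifting + inhibition", data=df).fit()
    moderation = smf.ols("anxiety ~ bi * shifting + bi * inhibition", data=df).fit()

    # Likelihood-ratio test: do the BI x control interactions improve model fit?
    lr_stat, p_value, df_diff = moderation.compare_lr_test(main_effects)
    print(f"LR = {lr_stat:.2f}, p = {p_value:.4f}")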

Interestingly, in several of the ERP and fMRI studies reviewed, behavioral performance on control tasks was unrelated to BI or to later risk. Rather, BI predicted the extent of neural activation during the execution of the tasks, and these patterns of activation predicted differential levels of risk. This suggests that it may not be mean levels of performance per se, but rather the strategies and resources (both neural and attentional) required to achieve that performance, that confer differential levels of risk.

Risk Potentiation Model of Control

Given that sensory sensitivity and orienting biases are evident from very early in infancy for BI children, this model postulates that this style of responding may become a BI child's ‘default mode’ of interacting with his/her environment, particularly under emotionally or motivationally salient conditions. This reactive mode of regulating attention means that attention is easily drawn away from a goal-directed task by novel or discrepant, but task-irrelevant, cues in the environment. Thus, controlled processing is implemented as a way to maintain goal-directed behavior. The behavioral results reviewed showed that BI was infrequently associated with behavioral performance on controlled processing tasks, and when there was an association it tended to be positive, although low in magnitude. This model further postulates that automatic and controlled processing strategies create a positive feedback loop: cognitive states associated with controlled processing, such as response monitoring, planfulness, and holding rules in working memory, function to maintain, prolong, and amplify initial automatic biases. Findings demonstrating increased risk for anxiety among BI children with high levels of inhibitory control (eg, White et al, 2011) are consistent with this model and with findings in the literature relating heightened performance monitoring to anxious apprehension/worry (see Moser et al, 2013). The idea of risk potentiation and a positive feedback loop is also supported by the repeated finding that, despite comparable behavioral performance, children with a history of BI show exaggerated neural responses on performance monitoring tasks (McDermott et al, 2009; Lahat et al, 2014a) and on conflict detection tasks under affectively neutral (Lahat et al, 2014b; Lamm et al, in press) and emotionally salient (Jarcho et al, 2013, 2014) conditions. These exaggerated neural responses appear to be one mechanism through which BI increases risk for anxiety, by supporting the development of cognitive biases characteristic of anxiety. For example, high levels of monitoring (ie, of conflict or of one's own performance) may transform general reactive states into more elaborative feelings and emotions (eg, Strack and Deutsch, 2004) or promote unnecessary deliberation (Moser et al, 2013). A clear future research direction associated with this model is empirical examination of the nature of the cognitive biases (eg, attribution style and rumination) that mediate the relation between control processes and anxiety. As well, these links will be better understood by experimentally manipulating biases and examining the resulting influences on patterns of neural activation during the execution of controlled processing tasks.
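
As a purely conceptual illustration of the posited positive feedback loop, and not a model proposed or fitted in any of the studies reviewed, the sketch below writes the loop as a toy pair of update rules in which automatic reactivity recruits monitoring and monitoring, in turn, sustains and amplifies reactivity; all parameter values are arbitrary and serve only to show how a high-gain loop fails to settle back to baseline.

    import numpy as np

    def feedback_loop(reactivity0=1.0, gain=0.9, decay=0.5, steps=20):
        """Toy simulation of a reactivity <-> monitoring feedback loop.

        reactivity0 : initial automatic reactivity to a novel or salient event
        gain        : how strongly monitoring re-amplifies reactivity (arbitrary)
        decay       : how quickly reactivity fades on its own (arbitrary)
        """
        reactivity = reactivity0
        trace = []
        for _ in range(steps):
            monitoring = gain * reactivity                       # reactivity recruits monitoring
            reactivity = (1 - decay) * reactivity + monitoring   # monitoring feeds back
            trace.append(reactivity)
        return np.array(trace)

    # A low-gain loop returns toward baseline; a high-gain loop stays elevated and grows
    print(feedback_loop(gain=0.3).round(2))
    print(feedback_loop(gain=0.9).round(2))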

Overgeneralized Control Model

In this model, the early automatic response biases of BI children support associative learning and the overgeneralized pairing of non-threatening cues with states of potential harm and fear. In a parallel manner, control strategies may also come to be overgeneralized and implemented in contexts that do not require them. This lack of specificity in the implementation of both automatic and control processes limits the flexibility and efficiency of information processing. This model is consistent with findings that children with a history of early BI show exaggerated startle responses to safety cues (Barker et al, 2014) and that adolescents with a history of childhood BI and a lifetime diagnosis of anxiety show the same pattern of startle reactivity to safety cues (Reeb-Sutherland et al, 2009a). It is also consistent with the Perez-Edgar et al (2007) finding that adolescents with a history of childhood BI showed exaggerated amygdala responses to all emotion faces (not just threatening ones) when asked to rate their subjective experiences. In addition, it is supported by findings that enhanced N2 amplitudes on both incompatible (high-conflict) and compatible (low-conflict) trials linked BI to indices of social and emotional maladaptation (Henderson, 2010; Lahat et al, 2014b) and that a childhood history of BI is associated with a failure of vmPFC activation to discriminate between positive and negative feedback during a monetary incentive delay task in adolescence (Helfinstein et al, 2011).

CLINICAL IMPLICATIONS

A dual-processing model, embedded within a systems neuroscience perspective, provides an organizing framework around which to integrate the growing literature relating BI, unique patterns of automatic and controlled attention, and relative risk for SAD. Given the early and relatively stable expression of BI, longitudinal studies provide the opportunity to track individual differences across key periods of normative emotional, cognitive, and neural development. The unique patterns of automatic and controlled processing displayed by BI children, and the processes that differentiate those at greatest risk for anxiety from those who go on to follow a normative developmental course, provide insights into potential targets of prevention and intervention for children with a history of stable BI.

From early in infancy, BI is associated with heightened orienting toward unfamiliar social and nonsocial stimuli, as evidenced both behaviorally and physiologically. Later in childhood and adolescence, this reactivity is evident in biased attention toward threat and in neural sensitivity to a variety of incentive conditions. Importantly, the extent to which these biases in automatic attention orienting persist across childhood and adolescence is predictive of risk for anxiety. This finding raises the possibility that attention training paradigms could be used as an intervention, or a preventive measure, for young, stably inhibited children. Attention bias modification training (Hakamata et al, 2010) shapes attention by having participants complete many trials in which task parameters implicitly bias the allocation of attention. For example, a paradigm designed to train attention away from threat would repeatedly present a neutral target in the location opposite the threat stimulus. Recent work shows some success in using such an approach to train clinically anxious children to develop an attention bias toward positive stimuli (Waters et al, 2013). However, there is a good deal of variability in the direction and extent of children's initial biases, as well as in their responses to training, suggesting that more research is needed to understand the mechanisms of change and the factors influencing susceptibility to change.
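
In dot-probe-based attention bias modification of the kind reviewed by Hakamata et al (2010), the key manipulation is the contingency between cue and probe locations: during training away from threat the probe (almost) always replaces the neutral cue, whereas in assessment blocks it appears at each location equally often. The sketch below generates such trial lists for a simplified two-location display; the parameter names and trial counts are illustrative assumptions, not those of any specific published protocol.

    import random

    def make_abm_trials(n_trials=96, train_prob=1.0, seed=0):
        """Generate simplified dot-probe trials for attention bias modification.

        train_prob : probability that the probe replaces the NEUTRAL cue
                     (1.0 = training away from threat; 0.5 = assessment, no contingency)
        """
        rng = random.Random(seed)
        trials = []
        for _ in range(n_trials):
            threat_side = rng.choice(["left", "right"])
            neutral_side = "right" if threat_side == "left" else "left"
            probe_side = neutral_side if rng.random() < train_prob else threat_side
            trials.append({
                "threat_side": threat_side,    # eg, location of an angry face
                "neutral_side": neutral_side,  # eg, location of a neutral face
                "probe_side": probe_side,      # where the to-be-detected probe appears
            })
        return trials

    training_block = make_abm_trials(train_prob=1.0)    # probe always at the neutral location
    assessment_block = make_abm_trials(train_prob=0.5)  # probe equally often at both locations
    print(training_block[0])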

The relations between BI and the later development of cognitive control processes are less clear. Several studies suggest that enhanced inhibitory control and performance monitoring, assessed both behaviorally and physiologically, increase the risk for anxiety among children with a history of BI. Although these cognitive control processes are widely viewed as critical for the development of self-regulation, the findings reviewed above suggest that the beneficial effects of cognitive control depend upon a child's temperamental profile. For many children with a history of BI, attention and behavior continue to be driven in large part by automatic, reflexive biases. For these children, imposing additional reflective and intentional control may result in a state of rigid over-control. Cognitive control processes show remarkable normative change over the life course (Davidson et al, 2006; Roberts et al, 2006) and can be trained using relatively simple paradigms (Rueda et al, 2012), with changes noted both behaviorally and neurally (Berkman et al, 2014). It has therefore been suggested that self-control training should become a universal (vs targeted) component of early childhood education (Greenberg, 2006). Again, however, the findings reviewed here suggest that children with BI would show little benefit from training in inhibitory control. The data do suggest, though, that children with a history of BI may benefit from training in other specific components of cognitive control, including attention flexibility and shifting.

Finally, one of the biggest challenges in understanding the combined influence of automatic and controlled information-processing strategies for children with BI is to develop paradigms that best mimic the day-to-day challenges faced by children with heightened BI (eg, Guyer et al, 2014). That is, when attention is drawn to potential signs of threat, how do BI children quickly and efficiently encode, process, and respond to the complexities of their social environments? Such questions will be best addressed in the future by combining the precision and parametric manipulations of standard cognitive tasks used in the field of developmental neuroscience with the ecological validity of the observational tasks standard in the study of temperament and social development.

FUNDING AND DISCLOSURE

The authors declare that there are no personal financial holdings that could be perceived as constituting a potential conflict of interest. Dr Pine has received compensation for activities related to teaching, editing, and clinical care that pose no conflict of interest. Drs Fox and Henderson have received compensation for activities related to teaching and professional service that pose no conflict of interest.