While it has long been accepted that neurons in the developing and adult brain die, it has only recently been widely recognized that new cells are also born in adulthood that develop phenotypes and connectivity characteristic of mature neurons (Markakis and Gage, 1999; Gould et al, 2000). Adult neurogenesis occurs across mammalian species including mice, rats, monkeys, and humans, appearing most robustly in certain brain regions, particularly the dentate gyrus (DG) of the hippocampus and the olfactory system (Eriksson et al, 1998; Kornack and Rakic, 2001). The idea that new neurons should positively impact cognition has generated significant interest, especially in regard to the hippocampus, since this structure is prominently identified with short- and long-term memory (McClelland et al, 1995), local mechanisms of learning and memory such as long-term potentiation (LTP) and its variants (Wang et al, 1997), brain adaptations to sex and stress hormones influencing pubertal neurodevelopment and stress responses (Garcia-Segura et al, 1994; McEwen, 2000), and various forms of mental illness including major depression, post-traumatic stress disorder, trauma-related personality disorders, and schizophrenia (Bremner et al, 1997; Duman et al, 2000; Eisch, 2002). Increased dentate granule cell proliferation or survival is associated with increased cognitive performance, social interaction, environmental novelty, sex steroids, antidepressant medications, and electroconvulsive treatment (Duman et al, 1999; Gould et al, 2000; Kempermann and Gage, 2002), while decreases are associated with cognitive deficits, social isolation, physical or psychological stress, and exposure to stress hormones (Duman et al, 2000; Gould et al, 2000; Shors et al, 2001).

Despite accumulating data, the functional significance of neuronal birth and death in the adult brain remains incompletely understood. Given data indicating that neuroadaptations among permanent neuronal populations serve as a basis for cognitive and emotional learning, memory, and behavior (Kandel et al, 1991), the added role or functional necessity of neurogenic events remains unclear. Moreover, since molecular, neurochemical, pharmacological, and environmental factors identified with the control of apoptotic and/or neurogenic events also impact traditionally recognized forms of neuroplasticity among permanent neural populations (Schwartz, 1992; Duman et al, 2000; Gould et al, 2000), the design of biological experiments capable of defining or proving a unique role for neurogenic events in already plastic systems is daunting. Another uncertainty involves the possibility of functional associations between apoptotic and neurogenic events (Biebl et al, 2000). Few studies have addressed whether the cognitive significance of neurogenesis can be fully understood without regard to a larger paradigm of neuronal turnover in which both apoptotic and neurogenic events play important roles (Dong et al, 2003; Nottebohm, 2002).

By allowing direct investigation of the learning and memory characteristics of neural systems, neural network simulations are especially useful in understanding and contrasting the functional significance of alternate forms of plasticity in a manner not easily observable with biological methods (Aakerlund and Hemmingsen, 1998). Network simulations may also represent an important complementary approach to direct biological investigations in examining the influence of apoptosis and neurogenesis on cognitive functions of neural systems. Simulations of the olfactory system by Cecchi et al (2001) revealed that neurogenic events paired with apoptosis by competitive elimination can operate alone as an effective form of neuroplasticity allowing efficient learning of new information. Building on these findings, we aimed to utilize elementary three-layer neural network simulations that are already capable of robust learning via incremental neuronal connection plasticity, and to superimpose on these systems simulated apoptotic and neurogenic events in the middle layer. This approach allows comparisons of network learning with and without various regimens of neuronal turnover. In choosing from among a large diversity of network architectures and functional attributes, we selected standard multilayer feedforward pattern recognition networks because of their relative simplicity and wide usage, combined with a powerful and intuitively approachable learning capability (Widrow and Lehr, 1998). The simplicity of these networks, while sacrificing the added complexity needed to simulate more biologically accurate networks, offered the possibility of uncovering fundamental or generic properties that may be generalized to other more complex systems.

We studied three-layer feedforward networks that learn to produce specific firing patterns in the output layer upon the introduction of topographically specific firing patterns in the input layer (ie alphabetic letters). Learning of whole data sets (alphabets) occurs via a progression of incremental connection strength changes between neurons. With the storage of information, the ‘maturation’ of individual neurons is observable as the quantifiable growth of axodendritic connection strengths from initial low random values.

Network simulations were used to test the hypothesis that neuronal turnover in information-bearing (‘adult’) networks produces greater performance in learning new information than can be achieved by ongoing connection plasticity alone. Additionally, we tested whether alternate proportions or patterns of simulated apoptotic/neurogenic events may determine the extent of these informatic effects. In modeling neuronal turnover in the ‘adult’ condition, we simulated various regimens of apoptosis–neurogenesis in networks after they had first accurately learned an initial data set, the Roman alphabet. Networks were then tested on their performance in learning a new data set, the Greek alphabet. Under all learning conditions, the incremental weight change learning algorithm was unaltered to simulate a consistent plastic potential among all neurons, whether mature or information-naïve. Neuronal turnover was modeled as the elimination of information-laden individual neurons from the middle of the three-layer networks, and their replacement with new information-naïve neurons.


Network Architecture

Networks were simulated utilizing MATLAB 6.0® mathematics software with MATLAB Neural Network Toolbox® performed on a PC-compatible Hewlett-Packard Pavilion N5445 (1 MHz) using Windows XP. Network attributes including numbers of input, middle, and output layer neurons, neuronal input–output computational functions, learning algorithm, initial connection weight values, etc were set prior to these experiments as recommended by accompanying software documentation (The MathWorks, Inc.) for character recognition learning.

Networks were organized as three-layer feedforward systems in which incremental connection plasticity occurs between the layers, but simulated apoptotic/neurogenic events occur only in the middle layer. This architecture shares aspects with hippocampal organization where a non-neurogenic input layer (Entorhinal Cortex) projects to the neurogenic middle layer (DG) via modifiable perforant path axodendritic connections, and DG projects to non-neurogenic CA3 neurons via modifiable mossy fiber axodendritic connections. The simulated input layer comprised 35 neurons fully connected to the middle layer via an array of 35 × 10 axodendritic connections labeled collectively as I–M (input to middle). In turn, 10 middle layer neurons are fully connected to 35 output layer neurons via the M–O (middle to output) axodendritic system. With this architecture (Figure 1), network learning allows topographically arranged neuronal firing rates of 1 or 0 forming alphabetic letter representations in the input layer (eg ‘B’ or ‘Z’) to produce desired output firing patterns in the output layer (eg fire neuron #2 only or #26 only).
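The layered connectivity just described can be sketched as follows. This is an illustrative Python/NumPy rendering, not the authors' MATLAB code; names such as `I_M`, `M_O`, and `target_B` are chosen here for exposition.

```python
import numpy as np

# Illustrative sketch of the 35-10-35 topology described above: 35 input
# neurons fully connected to 10 middle-layer neurons via the I-M array,
# which in turn are fully connected to 35 output neurons via the M-O array.
N_IN, N_MID, N_OUT = 35, 10, 35

I_M = np.zeros((N_IN, N_MID))    # 35 x 10 = 350 modifiable I-M connections
M_O = np.zeros((N_MID, N_OUT))   # 10 x 35 = 350 modifiable M-O connections

# Desired output for the letter 'B' (2nd letter): only output neuron #2 fires.
target_B = np.zeros(N_OUT)
target_B[1] = 1.0
```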

Figure 1

Multilayer feedforward network architecture designed for learning pattern recognition of alphabetic characters is used to study informatic effects of neuronal turnover in the middle layer. Desired input–output associations of the letters ‘B’ and ‘Z’ from the Roman alphabet are shown. Darkened circles represent neuronal firing rates of 1, while clear circles represent neurons firing at or near zero.

Network Functions

Analogous to biological neurons, firing of simulated middle and output layer neurons is determined first by a summation of input signals from neurons in the previous layer. Each of these input signals is multiplied by an axodendritic connection weight (w), serving as the computational equivalent of synaptic strength and the critical variable of change during incremental learning (Figure 2). Summed input information plus a variable bias weight value is then processed by the neuronal input/output transformation function (MATLAB code: ‘logsig’). This continuous sigmoidal function serves as the computational equivalent of somatodendritic activation of action potentials whereby neuronal firing ranges between 0 (no firing) and 1 (maximal firing).
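The neuron model above amounts to a weighted sum of afferent firing rates plus a bias weight, passed through a logistic sigmoid. A minimal sketch (a Python stand-in for the MATLAB ‘logsig’ computation; the function names are illustrative):

```python
import numpy as np

# Minimal model of one middle- or output-layer neuron as described:
# the dendrito-somatic input signal is the weighted sum of afferent firing
# rates plus a bias weight, transformed by the logistic sigmoid ('logsig')
# into a firing rate between 0 (no firing) and 1 (maximal firing).
def logsig(x):
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(inputs, weights, bias):
    """Firing rate of a single simulated neuron."""
    return float(logsig(np.dot(inputs, weights) + bias))

# With zero net input the neuron fires at the sigmoid midpoint, 0.5.
rate_mid = neuron_output(np.zeros(35), np.zeros(35), 0.0)
```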

Figure 2

Models of individual neurons used in simulated networks. Network input takes the form of values of one or zero presented to individual input layer (IL) neurons and relayed to each of 10 middle layer (ML) neurons via I–M connections. Each middle layer neuron receives an input signal from all 35 input neurons, each of which is multiplied by a connection weight (w) value. A dendrito-somatic input signal is calculated as the sum of these products plus a bias weight (wb), and is acted upon by an input/output transformation function to produce a middle layer neuronal output signal. Output layer neurons follow a similar design to produce the final network output. Incremental changes in connection and bias weights associated with middle and output layer neurons mediate network learning.

Before learning, the axodendritic connection weight (w) values of networks are set as low random values ranging between −1 and 1 in the case of I–M connections (MATLAB code: ‘rands’) and between −0.01 and 0.01 for the M–O connections. These values have previously been established as optimal for learning character recognition in three-layer systems. The much smaller initial absolute magnitudes of the M–O compared to I–M connections may be considered to reflect connection states particular to immature, information-naïve neurons, whose efferent axonal projections are considerably more underdeveloped than their afferent axodendritic connections from the previous layer.
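The information-naïve initialization can be sketched as follows (a Python stand-in for the MATLAB ‘rands’-based setup; the function name and seed are illustrative):

```python
import numpy as np

# Sketch of the 'information-naive' weight configuration described above:
# I-M weights drawn uniformly from [-1, 1], M-O weights from the much
# smaller range [-0.01, 0.01], mirroring underdeveloped efferent projections.
rng = np.random.default_rng(42)

def init_naive(n_in=35, n_mid=10, n_out=35):
    I_M = rng.uniform(-1.0, 1.0, (n_in, n_mid))
    M_O = rng.uniform(-0.01, 0.01, (n_mid, n_out))
    return I_M, M_O

I_M, M_O = init_naive()
```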

Network Learning

Axodendritic connection weight changes during learning in all experiments were determined by gradient descent backpropagation with variable learning rate (MATLAB code: trainFcn=‘traingda’). In this algorithm, specified (to be learned) input patterns generate an actual output pattern that is mathematically compared with the desired (to be learned) output patterns, producing an error quantity. As described elsewhere (Widrow and Lehr, 1998), differential calculus methods are used to determine how small changes in I–M and M–O connection weights can be made to minimize the error quantity (error gradient descent). One epoch of training consists of one exposure to a set of input patterns (ie the Roman alphabet), with corresponding incremental changes in connection weights. Use of the variable learning rate option allows the size of the incremental weight changes to vary slightly between epochs depending on the slope of the error gradient, enhancing learning efficiency.
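The training procedure can be sketched as follows. This is a simplified Python stand-in for gradient-descent backpropagation with a variable learning rate; the particular adaptation rule (grow the rate after an improving epoch, shrink it otherwise), the random stand-in data, and the omission of bias weights are assumptions for illustration, not the ‘traingda’ implementation.

```python
import numpy as np

# Simplified backpropagation with an adaptive (variable) learning rate.
rng = np.random.default_rng(7)
logsig = lambda x: 1.0 / (1.0 + np.exp(-x))

X = rng.integers(0, 2, (26, 35)).astype(float)  # stand-in 'alphabet' inputs
T = np.eye(26, 35)                              # one target output neuron per letter
I_M = rng.uniform(-1.0, 1.0, (35, 10))          # naive input-to-middle weights
M_O = rng.uniform(-0.01, 0.01, (10, 35))        # naive middle-to-output weights
lr, prev_sse, sse_history = 0.2, np.inf, []

for epoch in range(200):            # one epoch = one pass over the whole set
    mid = logsig(X @ I_M)
    out = logsig(mid @ M_O)
    err = T - out
    sse_history.append(float((err ** 2).sum()))
    # Error gradient descent: chain rule through the logsig derivatives.
    d_out = err * out * (1.0 - out)
    d_mid = (d_out @ M_O.T) * mid * (1.0 - mid)
    M_O += lr * (mid.T @ d_out)
    I_M += lr * (X.T @ d_mid)
    # Variable learning rate: step size adapts to the error trend.
    lr = min(lr * 1.05, 1.0) if sse_history[-1] < prev_sse else lr * 0.7
    prev_sse = sse_history[-1]
```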

Data Sets

The chosen data sets were designed according to developmental theories that new learning involves elaboration of old information (Yates, 1996). The initial training set comprised the input and output representations of the 26 letters of the Roman alphabet. Figure 3 shows the topographical configurations of firing patterns of input neurons and the desired output neuron to fire, corresponding to each letter of the alphabetic data set. The second training set (24 letters of the Greek alphabet) was chosen to be of similar size and complexity to the first, but with some noticeable differences (eg some new characters, fewer total characters, new input–output configurations).
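Assuming the 35 input neurons form a 7 × 5 topographic grid (as in MATLAB's character-recognition example), one letter's input representation can be encoded as follows. The bitmap shown for ‘B’ is a plausible stand-in for illustration, not the paper's exact pattern.

```python
import numpy as np

# '#' marks an input neuron firing at 1, '.' a neuron firing at 0.
GRID_B = [
    "####.",
    "#...#",
    "#...#",
    "####.",
    "#...#",
    "#...#",
    "####.",
]

def encode(grid):
    """Flatten a 7x5 character grid into a 35-element firing-rate vector."""
    return np.array([1.0 if c == "#" else 0.0 for row in grid for c in row])

pattern_B = encode(GRID_B)
```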

Figure 3

Input–output patterns of the alphabetic data sets. Networks are taught to associate input representations of Roman alphabetic characters presented to the input layer with output patterns represented in the output layer. Character inputs are topographical arrangements of 1s (black) and 0s (white) at each of 35 neurons in the input layer. Output numbers represent the specified output neuron(s) that activate(s) (fires close to a value of 1) while all others fire at rates close to 0, upon presentation of a given character.

Simulating Apoptosis–Neurogenesis

Neuronal turnover was modeled as the removal of a variable number of middle layer neurons and their replacement with information-naïve neurons, maintaining a constant number of 10 middle neurons. Nascent middle layer neurons were architecturally identical to the neurons they replaced in terms of complete connectivity with input and output layers, except that their connection weight (w) values were rerandomized to low values (between −1 and 1 for I–M connections, between −0.01 and 0.01 for M–O connections). Thus neurogenic middle layer neurons were defined by immature connectivity states, as was the case for all network neurons before initial learning of the Roman alphabet. Information-bearing connections of permanent output layer neurons and of mature middle layer neurons not selected for turnover were left unaltered.
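The turnover operation can be sketched as follows (illustrative Python; the function name is an assumption): each selected middle-layer neuron keeps its pattern of full connectivity, but its afferent I–M column and efferent M–O row are rerandomized to the naïve ranges, while all other weights are preserved.

```python
import numpy as np

rng = np.random.default_rng(3)

def turnover(I_M, M_O, neurons):
    """Replace the listed middle-layer neurons with information-naive ones."""
    I_M, M_O = I_M.copy(), M_O.copy()
    for j in neurons:
        I_M[:, j] = rng.uniform(-1.0, 1.0, I_M.shape[0])    # afferent weights
        M_O[j, :] = rng.uniform(-0.01, 0.01, M_O.shape[1])  # efferent weights
    return I_M, M_O

# Stand-in 'mature' weights, deliberately outside the naive ranges.
I_M_mature = np.full((35, 10), 5.0)
M_O_mature = np.full((10, 35), 5.0)
I_M_new, M_O_new = turnover(I_M_mature, M_O_mature, neurons=[0, 4])
```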

Measures of Network Learning and Memory Performance

Network memory performance was measured as a quantitative comparison of information recall after learning (actual firing rates of output neurons, given presentation of alphabetic letters to the input layer) with the information on which the net was trained (desired firing rates of output neurons, specified by alphabetic letters), summed over all the letters of a given alphabet. This quantity, the sum of squared errors (SSE), is defined as

SSE = Σ(letters) Σ(output neurons) (desired firing rate − actual firing rate)²
Lower SSE values indicate improved memory performance. Learning performance plots SSE with respect to the number of epochs of training and improves when networks require fewer epochs of training to achieve lower SSEs.

Network Performance Analysis

Computations governing input–output activity and the learning procedures were fully deterministic and did not represent a source of variation. Other than experimental effects, some variation in performance was introduced by the low randomized initialization values assigned to neuronal connection weights in information-naïve networks or newly generated neurons. Assignment of these low, random values optimizes the learning potential of information-naïve networks and is analogous to genetic sources of variation in biological studies, since they are set prior to environmental exposure (information learning). To control for this source of genetic-like variation, studies were conducted on groups composed of eight networks first individualized by virtue of their separately randomized initial weight configurations, and then trained on the Roman alphabet (Figure 4). Experimental groups were then created according to the particular condition of neuronal turnover imposed on the original group of eight networks. Two-tailed Student's t-tests or ANOVA procedures were used to compare the effects of two or more conditions of neuronal turnover on memory performance. Repeated measures ANOVA was used for group comparisons of learning performance, followed by the post hoc Tukey procedure where applicable. Results were considered significant at the p=0.05 level.

Figure 4

General experimental design. Eight individualized immature networks, defined by low random connection weight values modeling nascent, information-naïve neurons, undergo the standard incremental training algorithm to learn the Roman alphabet. These networks are then replicated multiple times and sorted into identical groups exposed to unique conditions of neuronal turnover and new learning of the Greek alphabet.


Experiment 1: Effects of Increasing the Proportion of Neurons Undergoing Apoptosis–Neurogenesis

Eight individualized ‘immature’ networks were trained on the Roman alphabet to a recall performance level of SSE=0.01, requiring a group mean of 517±24.9 epochs of training. Each of the eight nets was then replicated and sorted into five identical groups, each treated under one of five conditions of apoptosis–neurogenesis in which 1, 2, 5, 8, or 10 (all) middle layer neurons underwent turnover, followed by training on the Greek alphabet. As demonstrated in Figure 5, initial training on Roman was associated with distributed patterns of connection weight growth. Upon subsequent neuronal turnover and training on the second alphabet (Greek), connection weights of neurogenic middle layer neurons showed similar connection growth, while permanently intact neurons showed various degrees of connection weight revision.

Figure 5

Serial Hinton graphs of a typical network reveal axodendritic weight changes in middle to output layer connections during learning. Individual connection weight values are proportional to the block sizes plotted according to their neuron of origin in the middle layer (x-axis) and connection to neuron in the output layer (y-axis). Colors indicate valence of connection weights (red=negative, green=positive). Connection weights of information-naïve networks, initially too small for graphical representation here, undergo growth that allows storage of the entire Roman alphabet with high accuracy by 461 epochs. After learning Roman and challenged with new learning of the Greek alphabet, connections undergo further adaptation. Without ‘apoptosis–neurogenesis’, connection changes are subtle by 100 epochs and become more pronounced by 2800 epochs, when the network achieves mediocre recall accuracy of Greek (SSE=6.0). Alternatively, with apoptosis–neurogenesis of middle layer neurons 1–5, implementation of the same incremental learning algorithm drives the axodendritic growth of new neurons into mature configurations, allowing for improved recall of the new alphabet by 2800 epochs (SSE=3.0).

The results of increasing turnover on recall performance of old information (Roman) before learning the new information (Greek), and learning performance upon training on Greek, are shown in Figure 6. Increasing the number of neurons undergoing turnover generally increased degradation of memory of the Roman alphabet (Figure 6a): with no neurons, SSE=0.1±0.0; 2 neurons, SSE=17.77±1.60; 5 neurons, SSE=48.32±3.23; 8 neurons, SSE=71.01±5.03; and all 10 neurons, SSE=47.85±1.83 (group differences, F4,35=94.07, p<0.001; post hoc differences of p<0.001 between all groups, except that the 5 and 10 neuron turnover conditions did not differ from each other). In the early phase of training on Greek, epochs 1–400 (Figure 6b), there was significant overall improvement in learning performance among all groups across epochs (F3,105=548.80, p<0.001), but groups experiencing higher proportions of turnover showed greater new learning performance (main group effect F4,35=43.52, p<0.001; epochs × group interaction F12,105=30.84, p<0.001), with the groups with 8 and 10 (all) neurons undergoing turnover performing significantly differently from each other and all other groups by post hoc testing (p<0.01). Learning performance over the late phase of training (epochs 400–2800: Figure 6c) was measured only for groups with 0, 2, and 5 neurons undergoing turnover. Networks with 8 and 10 neurons affected were not included in this analysis since 2 and 6 networks from each group, respectively, reached the performance criterion of SSE=0.01 before 2800 epochs. As in the early phase, all groups showed continuous learning improvement (main effect of epochs F6,126=81.32, p<0.001), but the group with the highest number of newly generated neurons (5) showed significantly greater learning performance (main group effect F2,21=8.95, p<0.01; epochs × group interaction F12,126=1.92, p<0.05), differing from the 0 and 2 neuron groups by p<0.01 in post hoc testing.

Figure 6

Experiment 1: effects of increasing the proportion of dentate neurons undergoing apoptosis–neurogenesis. Performance measures on the y-axis are SSE values, which become smaller with better performance. (a) After learning the Roman alphabet and undergoing turnover of increasing proportions of middle layer neurons, networks show increasing degradation of accurate recall of the Roman alphabet (ANOVA, p<0.001). Small letters a, b, c, d indicate group differences by post hoc of p<0.001. (b) Repeated measures analysis over learning epochs 100–400 shows significantly better learning performance by group (p<0.001) and group × epochs (p<0.001). Letters a, b, c indicate group differences by post hoc of p<0.01. (c) Repeated measures analysis over learning epochs 400–2800 shows significantly better learning performance by group (p<0.01) and group × epochs (p<0.05), with better learning in the 5 neuron group as compared to the 2 and 0 neuron groups by post hoc testing (p<0.01). (d) Recall of the Roman alphabet after learning Greek was disrupted, comparable to that occurring immediately after turnover of two neurons, but did not significantly differ across the conditions.

After learning Greek, networks were again compared for their capacity to recall the Roman alphabet (Figure 6d). The mean SSEs of the five groups on Roman recall ranged between 23.8 and 27.8 and were not significantly different. Recall of the Roman alphabet after training on Greek worsened for networks with 0 and 2 neurons turned over (compare Figures 6a and d), reflecting a predominant effect of catastrophic interference (due to informatic differences between the two data sets) in those conditions. However, the relative improvement in recall of Roman after learning Greek for networks with 5, 8, and 10 neurons turned over reflected the informatic similarity of the Roman and Greek alphabets.

Experiment 2: Effects of Apoptosis–Neurogenesis of Neurons with Large vs Small Axodendritic Connection Weight Changes after Learning Roman

The effects of neuronal turnover based on the characteristics of targeted neurons rather than their quantity were assessed. Middle layer neurons were individually characterized in networks based on their axodendritic growth of I–M or M–O connections during learning of the Roman alphabet (represented graphically in the upper three panels of Figure 5). This was accomplished by examining the connection weight vector magnitudes of I–M or M–O connection weights of each middle layer neuron in each network both before and after learning Roman. Connection weight vector magnitude is a measure of a neuron's total connection strength with all neurons of an adjacent layer, and is calculated as the square root of the sum of squares of each I–M or M–O connection weight. Thus for middle layer neuron #1 of a given network:

I–M vector magnitude (ML1) = √[w(IL1–ML1)² + w(IL2–ML1)² + ⋯ + w(IL35–ML1)²],

where w(ILn–ML1) is the connection weight value between input layer neuron n and middle layer neuron 1.

As shown in Table 1 for a typical network, training on the Roman alphabet produced an increase (Δ weight) in the connection weight vector magnitudes of both the I–M and M–O connection systems of all 10 middle layer neurons. Middle layer neurons of each network were then ranked according to the relative size of their Δ weight increases, and classified as having either the largest (top five rank) or smallest (bottom five rank) I–M or M–O connection changes. Each of the eight Roman networks was then subjected to four new conditions of apoptosis–neurogenesis targeting the five middle layer neurons with either: (1) smallest I–M Δ weights; (2) largest I–M Δ weights; (3) smallest M–O Δ weights; or (4) largest M–O Δ weights. Statistical comparisons of learning and memory effects defined by these conditions were conducted for the I–M and M–O systems separately. On recall of Roman before learning Greek, turnover of the 5 neurons with largest I–M Δ weights gave SSE=45.71±3.35, while that of the 5 smallest yielded SSE=53.64±3.37; this difference was not significant. Turnover of the 5 neurons with largest M–O Δ weights gave SSE=45.07±1.44, while that of the 5 smallest yielded SSE=53.44±3.69; this difference approached significance (t14=−2.116, p=0.053).
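The characterization and ranking step can be sketched as follows (illustrative Python; the toy ‘after’ matrix simply scales each neuron's afferent weights so that growth differs by neuron, which is an assumption for demonstration):

```python
import numpy as np

# Connection weight vector magnitude: Euclidean norm of each middle-layer
# neuron's I-M column (one value per neuron).
def vector_magnitudes(I_M):
    return np.linalg.norm(I_M, axis=0)

# Rank middle-layer neurons by delta weight: growth in vector magnitude
# from before to after learning, ordered smallest to largest.
def rank_by_growth(before, after):
    delta = vector_magnitudes(after) - vector_magnitudes(before)
    return np.argsort(delta)

rng = np.random.default_rng(5)
before = rng.uniform(-1.0, 1.0, (35, 10))
after = before * np.linspace(1.0, 3.0, 10)  # neuron 0 grows not at all
order = rank_by_growth(before, after)
```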

Table 1 Representative Connection Weight Vector Magnitude Changes During Training on Roman Alphabet

Over the late phase of learning Greek (epochs 400–2800), there were no significant main effects of group or group × epoch interactions when neurons were chosen for apoptosis–neurogenesis by magnitude of I–M connection weight changes (Figure 7a). However, choosing neurons with the largest as opposed to smallest M–O connection weight growth after learning Roman (Figure 7b) significantly enhanced learning performance on the Greek alphabet (main effect of group F1,14=6.74, p<0.05; epochs × group interaction F6,84=1.55, p>0.1).

Figure 7

Experiment 2: Effects of apoptosis–neurogenesis of neurons with large vs small connection weight changes after learning Roman. (a) Turnover of 50% of middle layer neurons with largest vs smallest I–M connection weight changes did not significantly improve learning performance. (b) However, choosing 50% of middle layer neurons with the largest as opposed to smallest M–O connection weight changes significantly improved learning performance (*p<0.05).

Experiment 3: Effects of Intermittent Focal vs Distributed Apoptosis–Neurogenesis During Learning

Previous experiments tested the learning effects of neuronal turnover imposed prior to new learning. Here, the effects of intermittent neuronal turnover during learning of the Greek alphabet were assessed. Two middle layer neurons were chosen for turnover once every 200 epochs of learning, for a total of 1200 epochs of training. However, in the ‘focal’ condition the same two neurons were repeatedly chosen for elimination and replacement while in the ‘distributed’ condition a different, previously unaffected pair of neurons was chosen every 200 epochs (Figure 8). Repeated measures comparison of groups undergoing focal vs distributed apoptosis–neurogenesis, along with group data from experiment 1 in which networks did not undergo any turnover, revealed significant main effects of group (F2,21=5.27, p<0.05) and group × epochs interaction (F10,105=22.75, p<0.001), with the distributed condition showing significantly better performance on learning Greek compared to focal condition by post hoc testing (p<0.05).
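The two schedules can be sketched as follows (illustrative Python; the specific neuron pairs and the wrap-around after the five disjoint pairs are exhausted are assumptions, not specified by the experiment):

```python
# Every 200 epochs, two of the 10 middle-layer neurons undergo turnover,
# over 1200 epochs of training on Greek.
EPOCHS_PER_ROUND, TOTAL_EPOCHS = 200, 1200
ROUNDS = TOTAL_EPOCHS // EPOCHS_PER_ROUND  # 6 rounds of turnover

def focal_schedule(rounds=ROUNDS):
    """'Focal': the same pair is repeatedly eliminated and replaced."""
    return [(0, 1)] * rounds

def distributed_schedule(rounds=ROUNDS, n_mid=10):
    """'Distributed': a previously unaffected pair is chosen each round."""
    return [((2 * r) % n_mid, (2 * r + 1) % n_mid) for r in range(rounds)]

focal = focal_schedule()
distributed = distributed_schedule()
```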

Figure 8

Experiment 3: Effects of intermittent focal vs distributed apoptosis–neurogenesis during learning. Two identical groups of eight networks trained on Roman were subjected to conditions of focal vs alternating distributed turnover where 20% of middle layer neurons were targeted every 200 epochs during learning of the Greek alphabet. In the focal condition, the same two neurons were repeatedly eliminated and replaced, while in the distributed condition, a new pair was chosen from among the 10 middle layer neurons every 200 epochs of learning. Apoptosis–neurogenesis among alternating distributed vs focal subpopulations of neurons vs no turnover during learning of Greek produced significant differences in learning performance by group (p<0.05) and group × epochs (p<0.001). The alternating distributed group differed from the no turnover group at p<0.05, while the focal group (ab) did not differ significantly from either by post hoc analysis.


General Informatic Effects of Simulated Turnover

Results demonstrate that in elementary multilayer neural networks capable of learning patterns via incremental synaptic strength changes, simulated apoptotic and neurogenic events in the middle layer operate in concert to regulate network learning and memory properties. These effects were observed during the acquisition of new information by networks already trained on old information stored as connectivity patterns among ‘mature’ neural elements. As the algorithms governing plastic changes within the networks were the same upon initial learning and under the experimental conditions during new learning, these effects were specifically attributable to the impact of various rates or patterns of simulated neuronal turnover on ongoing mechanisms of interneuronal plasticity. Implications of these findings for guiding future biological investigations and understanding psychiatric disorders are described below.

Experiment 1 demonstrated that elimination and replacement of neurons significantly enhanced the speed and accuracy of learning new information, but at the cost of lost recall accuracy of old information (Figure 6). Without any imposed neuronal turnover, networks showed the weakest performance on learning new information, while better retaining old information. Analogous findings have been reported from animal studies. In adult songbirds, targeted neuronal death in the high vocal center induces replacement by progenitor cells of the same type, and leads to altered song production that later recovers (Scharff et al, 2000). Conditional presenilin-1 gene knockout mice have deficient neurogenesis and reduced extinction of contextual fear-conditioned memory, suggesting an association of hippocampal neurogenesis with the clearance of old memories (Feng et al, 2001). In rats, reduction of DG neuronal proliferation by the DNA methylating agent methylazoxymethanol acetate (MAM) causes impaired hippocampal-dependent trace conditioning (Shors et al, 2001). Whether neurogenesis, as simulated here, occurs in functional association with apoptosis as part of a larger context of neuronal turnover remains incompletely understood. In rats, upper estimates suggest that neurogenic events occur at rates equaling up to 6% of the total dentate granular neuronal population per month (Cameron and McKay, 2001), while estimates of total population growth show increases of up to 43% in the first year of adulthood (Bayer et al, 1982). However, apoptotic markers have been found in neurogenic zones and the DG, although it remains unclear to what extent these markers represent the remains of previously functional adult neurons vs premature death of new neurons (Biebl et al, 2000; Cooper-Kuhn and Kuhn, 2002).

Anatomical and/or biological limitations may preclude neurogenesis unchecked by apoptosis, and several lines of evidence suggest that the replacement of older information-bearing neurons with newly generated neurons could convey neuroinformatic advantage (Nottebohm, 2002). Adult neuronal cultures, compared to those taken from developing brain tissue, are phenotypically distinct. Developing neurons display different patterns of neuroplasticity-related gene expression, molecular changes and morphological responses to neurotrophic proteins and neurotransmitters, and increased survivability in nonphysiologic environments (Schwartz, 1992; Penn, 2001; Vaccarino et al, 2001; Nottebohm, 2002). In hippocampal preparations, newly generated DG neurons undergo LTP more readily, and are insensitive to GABAergic inhibition of LTP compared to putative mature DG neurons (Snyder et al, 2001). Together, these data suggest that adult neurons are not amenable to the same repertoire of plastic mechanisms as young neurons, possibly rendering them less capable of large-scale plastic revision. In this case, brain regions showing robust neurogenesis may represent areas in which the complexity or volume of new information flow requires a degree of neuroplastic change such that it is more biologically cost-effective to destroy neurons and replace them with new ones, rather than remodel ‘eternal’ populations. Conversely, the apparent relative absence of adult neurogenesis and/or apoptosis in other brain regions (eg cortex) may signify the necessity of those regions to avoid the neuroinformatic cost of neuronal turnover; namely the forgetting of prior data as seen in these simulations. 
These implications are relevant to the ‘stability–flexibility dilemma’ described by Liljenstrom (2003), whereby neural systems must operate in a dynamic tension between a state of stability for the effective performance of ongoing mental functions based on prior experience vs a state of adaptability necessary for incorporating new information. Our findings suggest that the absence or presence of neuronal turnover in specific brain regions could reflect the necessity for biological specialization towards either stability (neocortex) or flexibility (paleocortex: olfactory system and hippocampus), thus endowing the whole brain with a capacity for effective performance in both domains. As for the hippocampus, its putative involvement in the storage and consolidation of memory within the more permanent neuronal populations of the neocortex (McClelland et al, 1995; Nadel and Bohbot, 2001) could represent a mechanism whereby the neuroinformatic cost of forgetting with neuronal turnover in the hippocampus is circumvented. These implications suggest the need for biological investigations linking the mechanisms and phenomenology of cortical–hippocampal consolidation with neurogenesis.

Effects of Increasing Proportions of Neurons Undergoing Turnover

Experiment 1 demonstrated that greater proportions of neurons undergoing turnover produced increases in both the enhancement of new learning (Greek) and the degradation of old information (Roman). The possibility that more new neurons could improve learning has been supported by animal studies. In rats, increased DG neurogenesis occurs following hippocampal-dependent associative learning (Gould et al, 1999), especially in relatively demanding learning tasks (Shors et al, 2001) or after exposure to novel contexts (Lemaire et al, 1999). Among mouse strains with different baseline rates of hippocampal neurogenesis, higher-rate strains showed steeper spatial learning curves (Kempermann and Gage, 2002). In adult songbirds, seasonal changes in rates of neurogenesis may reflect its adaptive control as necessitated by seasonal changes in social contexts and patterns of communication (Nottebohm, 2002). Our simulation results demonstrate a fundamental neural systems mechanism by which regulation of rates of neuronal apoptosis and/or neurogenesis within individuals might serve to balance the associated informatic costs and benefits against the degree of new informational demand (Kempermann, 2002). In mammals, the hypothalamic–pituitary–adrenal (HPA) system may play an important role in such regulation, given evidence associating corticosteroid responses with significant environmental change, cognitive function, and neuronal atrophy and death in the hippocampus (McEwen, 2000). Corticosteroid stress responses decrease neurogenesis (Gould et al, 1997), and promote neuronal atrophy and death in the DG (Souza et al, 2000; Haynes et al, 2001) as well as in CA3 neurons (McEwen, 1999; Sapolsky, 1996). Conversely, adrenalectomy-induced corticosteroid withdrawal promotes neurogenesis and preferentially protects young DG neurons from apoptosis (Cameron and Gould, 1996).
In extreme stress responses, large or sustained increases in corticosteroids might act similarly to excitotoxic or anoxic lesions, which produce neural degeneration and apoptosis followed by a compensatory increase in neurogenesis (Cameron et al, 1995; Gould and Tanapat, 1997; Liu et al, 1998; Dash et al, 2001; Dong et al, 2003).

Patterns of Neuronal Turnover

Experiment 2 (Figure 7) showed that while holding the proportion of neurons undergoing turnover constant at 50%, selection of neurons with the largest as opposed to smallest efferent axodendritic connection growth after learning the initial data set produced greater capacity for learning new information. This effect was more pronounced for neurons chosen for turnover by virtue of their M–O connection strengths rather than their I–M connections. These findings suggest that not only the quantitative but also the qualitative characteristics of neurons undergoing turnover can mediate neuroinformatic effects, particularly with regard to attributes of the axonal projections of replaced neurons. Interestingly, stimulation of mossy fiber efferents from the DG has been shown to influence regulation of neurogenesis in the DG (Derrick et al, 2000). The middle layer neurons acquiring larger M–O connection vector magnitudes after initial learning in our simulations could be considered analogous to subpopulations of biological DG neurons with the largest and/or most complex axonal projections into the CA3. These anatomical and functional characteristics may be associated with heightened functional energy demands, which are associated with greater vulnerability to glutamate and corticosteroid-mediated neurotoxicity (Sapolsky, 1996). Whether these or related mechanisms underlie optimal adaptive clearance of particular hippocampal neurons for maximizing neurogenesis-facilitated learning requires direct investigation.
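The Experiment 2 selection rule can be expressed in simplified form as ranking middle-layer neurons by the magnitude of their efferent (M–O) weight vectors and replacing the top-ranked fraction; the array names and sizes below are illustrative assumptions, not the original simulation code:

```python
import numpy as np

def select_for_turnover(W_mo, fraction=0.5, largest=True):
    """Rank middle-layer neurons by the L2 norm of their efferent
    (middle -> output) weight vectors and return the indices of the
    fraction with the largest (or smallest) norms, i.e. the neurons
    selected for apoptosis and replacement."""
    norms = np.linalg.norm(W_mo, axis=0)   # one norm per middle neuron
    order = np.argsort(norms)              # ascending by norm
    k = int(fraction * W_mo.shape[1])
    return order[-k:] if largest else order[:k]

# Illustrative M-O weight matrix: 20 output neurons x 30 middle neurons
rng = np.random.default_rng(1)
W_mo = rng.normal(0, 1, (20, 30))

big = select_for_turnover(W_mo, largest=True)    # greater new-learning gain
small = select_for_turnover(W_mo, largest=False) # weaker gain in Experiment 2
```

The same function applied to the I–M matrix (transposed so that columns index middle-layer neurons) would give the afferent-based variant, which produced the less pronounced effect.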

Experiment 3 modeled neuronal turnover imposed during new learning (Figure 8). Results demonstrated that simulated apoptosis–neurogenesis occurring in a spatially and temporally distributed manner offers superior learning compared to that occurring repeatedly within an isolated subpopulation, a condition that did not functionally differ from the no-turnover condition. As in Experiment 2, these findings suggest that not only the rate of turnover but also its pattern is important for understanding neuroinformatic effects. Greater neuroinformatic advantage occurred when a distribution of neuronal maturation levels was dispersed among a significant proportion of the total middle-layer population. These results may in part address an ongoing debate as to whether the DG is composed of two sizable, distinct neuronal populations, one permanent and another constantly undergoing regeneration, vs a large majority undergoing regeneration, albeit at different ages (Cameron and McKay, 2001; Snyder et al, 2001). While our results suggest that the latter circumstance may confer greater neuroinformatic advantage, there remains a need for further research examining the influence of regional- or neuron-specific apoptotic and/or neurogenic events on cognitive function.
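The contrast between the two regimes of Experiment 3 can be sketched as two turnover schedules over successive learning intervals; the parameter names and values below are illustrative assumptions rather than the original protocol:

```python
import numpy as np

def turnover_schedule(n_neurons, n_intervals, per_interval, rng,
                      isolated_pool=None):
    """Return, for each learning interval, the indices of neurons
    replaced in that interval.

    Distributed regime (isolated_pool=None): draw from the whole middle
    layer, so replacements spread across the population and maturation
    levels become staggered over time.

    Isolated regime: draw repeatedly from one small fixed subpopulation,
    leaving the remainder of the layer untouched."""
    pool = (np.arange(n_neurons) if isolated_pool is None
            else np.asarray(isolated_pool))
    return [rng.choice(pool, size=per_interval, replace=False)
            for _ in range(n_intervals)]

rng = np.random.default_rng(2)
# 40 middle-layer neurons, 10 intervals, 4 replacements per interval
distributed = turnover_schedule(40, 10, 4, rng)
isolated = turnover_schedule(40, 10, 4, rng, isolated_pool=range(4))
```

Under the distributed schedule, most of the layer is eventually touched and neurons of many different "ages" coexist; under the isolated schedule, the same four neurons are churned repeatedly, which in the simulations conferred no advantage over no turnover at all.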

Implications for Psychiatric Illness

As a tool of translational investigation, neural network modeling offers a unique capacity to study themes relating the dynamics of neural ensembles to macro-level (ie cognitive and behavioral) phenomena (Hoffman et al, 2001). On the clinical level, large new informational demands accompany drastic changes in close personal relationships, occupation, or living circumstances. These changing contexts rank among the most stressful psychological circumstances for healthy individuals, and are leading risk factors for the onset of depression and other major psychiatric disorders (Dimsdale et al, 2000). Successful adaptation to these socio-environmental changes may require appropriate increases in rates or changing patterns of hippocampal apoptosis and/or neurogenesis, while suboptimal rate changes may contribute to mental illness (Jacobs et al, 2000; Eisch, 2002). Findings of the current study suggest that neuronal apoptotic events need not be pathological, but may confer an adaptive advantage as a component of neuronal turnover in association with neurogenesis. Corticosteroid-mediated stress responses associated with significant contextual or psychosocial changes may thus serve as a normal mechanism for instigating increased neuronal turnover. The HPA system and the neurogenic hippocampus may biologically substantiate the process of dissolution, which ‘prepares for new learning by self-organization, whereby the pre-existing life history of an individual is transiently weakened, even melted down, so that new structure can grow that is not logically consistent with all that has come before’ (Freeman, 2003). However, a variety of affective, anxiety, and stress-related psychiatric and/or personality disorders may result from genetic differences and/or environmental exposures that impact the parameters controlling neuronal turnover.
Distinct clinical phenotypes could relate to specific pathologies that produce inappropriately small or large neuronal turnover responses to psychosocial stress, mismatching of rates of neurogenesis and/or apoptosis, or suboptimal selection of specific subpopulations of neurons for turnover.

Future Directions in Network Modeling

To our knowledge, this article is the first report comparing alternate simulated conditions of coordinated apoptotic/neurogenic events against a background network capacity for learning via a consistent algorithm of connection plasticity. Notably, all network parameters were determined prior to these experiments and were not further altered to refine the results of the imposed regimens of neuronal turnover. These methods allowed the characterization of neuronal turnover as a means by which the informatic properties of already plastic networks may be tuned according to a trade-off between new learning requirements and the retention of old information. While sacrificing the complexity required for greater biological realism, these elementary neural simulations provide a relatively transparent view of what may be the essential elements and fundamental informatic implications of apoptosis and neurogenesis, which could be obscured by increasing levels of sophistication. Cecchi et al (2001) have described a two-layer model of the olfactory system in which plasticity occurs solely via the ongoing installation of new inhibitory granular interneurons, concomitant with apoptotic events in subsets of these neurons based on competitive elimination. While these simulations did not assess the informatics of neuronal turnover in the context of ongoing incremental synaptic change, they generally agree with our own findings in suggesting that neurogenesis and neuronal elimination together facilitate new learning and the forgetting of old information. These models provide a basis for future studies of more sophisticated and biologically accurate neural simulations designed to test the generalizability or specificity of these findings. Such studies could focus on specific brain structures and incorporate experimentally derived data regarding the anatomical and performance characteristics of the constituent neurons.

For example, studies focusing on the hippocampus might incorporate greater numbers of neurons (approaching estimates of 1–2.4 million in the rat DG) interconnected by recurrent collaterals within and between layers of the hippocampus in anatomically confirmed proportions, on the order of 10³ synapses per neuron (Rolls, 1996). Models could also incorporate multiple forms of neurotransmission, such as the GABAergic and cholinergic systems, which control modes of oscillatory firing in the hippocampus during learning and memory consolidation (Buzsaki, 2001; Hasselmo et al, 2002). Simulations measuring the effects of independently variable rates of apoptosis vs neurogenesis should also be considered in the context of models accounting for cortical–hippocampal cooperation in the retention and retrieval of long-term memory (McClelland et al, 1995; Eichenbaum, 2000; Nadel et al, 2000), spatial memory (McNaughton et al, 1996; Burgess et al, 1998), temporal sequencing (Agster et al, 2002), classical conditioning, and novelty encoding (Gluck and Myers, 1996; Schmajuk et al, 2000).