In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. The prediction error theory was proposed to account for the blocking phenomenon, in which prior pairing of a stimulus X with an unconditioned stimulus (US) blocks subsequent association of a second stimulus Y with the US when the two stimuli are paired in compound with the same US. Evidence for this theory, however, has been imperfect, since blocking can also be accounted for by competitive theories. We recently reported blocking in classical conditioning of an odor with water reward in crickets. We also reported an "auto-blocking" phenomenon in appetitive learning, which supported the prediction error theory and rejected alternative theories. The presence of auto-blocking also suggested that octopamine neurons mediate reward prediction error signals. Here we show that blocking and auto-blocking occur in aversive learning to associate an odor with salt water (US) in crickets, and our results suggest that dopamine neurons mediate aversive prediction error signals. We conclude that the prediction error theory is applicable to both appetitive and aversive learning in insects.
Associative learning allows animals to adapt to various environments by acquiring knowledge of events in their environments. Based on this knowledge, animals find suitable food, avoid toxic food and escape from predators. Thus, both appetitive learning and aversive learning are essential for animals' survival. Many efforts have been made to elucidate the learning rules governing associative learning in mammals1,2, but whether appetitive learning and aversive learning are governed by the same general principles remains unclear.
In associative learning in mammals, it is widely accepted that the discrepancy, or error, between the actual unconditioned stimulus (US) and the predicted US determines whether learning occurs when a stimulus is paired with the US1,2. This theory stems from the discovery of a "blocking" phenomenon by Kamin3. He observed in rats that a stimulus X that had previously been paired with a US could block subsequent association of a second stimulus Y with the US when the two stimuli were paired in compound with the same US (XY+ training, see Table 1). Kamin3 argued that no learning of stimulus Y occurs because the US is fully predicted by stimulus X, and that surprise is needed for learning. This proposition was formulated into the prediction error theory by Rescorla and Wagner4, and subsequent electrophysiological studies suggested that dopamine (DA) neurons in the midbrain convey reward prediction error signals1.
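The Rescorla-Wagner formulation can be illustrated with a short simulation showing how blocking follows directly from the prediction error term (a minimal sketch; the learning rate, US asymptote and trial numbers are illustrative assumptions, not parameters from any study cited here):

```python
# Rescorla-Wagner rule: all stimuli present on a trial share one
# prediction error, (lambda - total prediction), scaled by alpha*beta.

def rw_update(V, present, lam, alpha_beta=0.3):
    """One conditioning trial: update associative strengths of all
    present stimuli by the shared US prediction error."""
    error = lam - sum(V[s] for s in present)   # US prediction error
    for s in present:
        V[s] += alpha_beta * error
    return V

V = {"X": 0.0, "Y": 0.0}
for _ in range(20):                 # X+ training: X alone comes to predict the US
    rw_update(V, ["X"], lam=1.0)
for _ in range(5):                  # XY+ training: compound paired with the same US
    rw_update(V, ["X", "Y"], lam=1.0)

# Y acquires almost no strength: the US is already fully predicted by X,
# so the prediction error during compound training is near zero.
print(round(V["X"], 3), round(V["Y"], 3))
```

Omitting the X+ phase (so X starts at zero strength) lets Y acquire substantial strength during compound training, reproducing the control condition.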
Evidence for the prediction error theory, however, has been imperfect, since blocking can also be accounted for by other theories, such as attentional theory and retrieval theory5,6,7, which explain blocking by competition between the X and Y stimuli, and evidence that convincingly refutes these alternative theories has been lacking8,9,10.
We previously reported blocking in appetitive associative learning in crickets11. Moreover, we obtained evidence that octopamine (OA) neurons play critical roles in appetitive learning in crickets12,13,14,15,16,17,18,19, and we demonstrated that when a stimulus X was paired with water (appetitive US) under administration of an OA receptor antagonist, a condition in which no learning of X occurs, subsequent training to associate X with the US, given after recovery from the effect of the antagonist, produced no learning of X11. This "auto-blocking" can be accounted for by the prediction error theory: if blockade of OA-ergic transmission impairs learning but not formation of the prediction of the US by stimulus X, no learning of stimulus X should occur in subsequent training. This auto-blocking phenomenon cannot be accounted for by any of the competitive theories of blocking, since it occurs without stimulus competition. Therefore, demonstration of blocking and auto-blocking phenomena in the same learning paradigm in the same species provided rigorous evidence for the prediction error theory in appetitive learning. In addition, the results of the auto-blocking experiment suggested that OA neurons mediate reward prediction error signals in crickets. However, rigorous evidence for the applicability of the prediction error theory to aversive learning has still been lacking.
In the present study, we investigated whether blocking and auto-blocking occur in aversive learning in crickets. We have previously shown that DA neurons play critical roles in aversive learning in crickets12,13,14,15,16,17,18,19, as has been reported for other invertebrates20,21,22,23. We obtained evidence of blocking in conditioning to associate an odor or a pattern with NaCl solution (aversive US) in crickets. We also found that "auto-blocking" occurs in aversive learning; that is, no learning of an odor X occurs in training to associate X with the aversive US when the training is preceded by the same training under administration of a DA receptor antagonist. This blockade of learning is accounted for by the prediction error theory but not by alternative theories of blocking, since no cue competition is involved.
Effects of compound conditioning
Since a blocking experiment requires conditioning of two stimuli presented at the same time, we first investigated whether crickets exhibit such compound conditioning. We used odor-pattern compound conditioning (OP+ conditioning), in which a compound stimulus consisting of an odor (O) and a visual pattern (P) is paired with a 20% NaCl solution (aversive US) (+), and we investigated whether OP+ training leads to learning of the odor or the visual pattern (Fig. 1 and Table 1). One group of animals (compound group) was subjected to 2-trial OP+ training and another group (control group) was subjected to 2-trial olfactory conditioning (O+ conditioning). Relative preference for the odor used in training compared to the control odor was tested before and at 20 min after training in both groups. The results are shown in Fig. 2a. We used a generalized linear mixed model (GLMM) to evaluate the data (see Methods). Both the compound group and the control group exhibited significantly decreased preference for the conditioned odor after training compared to that before training (test term, p = 8.68 × 10−10, z = −6.132, see Supplemental Table S1). The preference for the conditioned odor after training in the compound group did not significantly differ from that in the control group (test × training term, p = 0.141, z = 1.471). The results showed that learning was achieved in the compound group as in the control group. We thus conclude that odor-pattern compound conditioning leads to conditioning of the odor.
Next, we investigated whether OP+ training leads to learning of the visual pattern. One group of animals (compound group) was subjected to 8-trial OP+ training and another group (control group) was subjected to 8-trial visual conditioning (P+ conditioning). Relative preference for the pattern used in training compared to the control pattern was tested before and at 20 min after training in both groups. The results are shown in Fig. 2b. Both the compound group and the control group exhibited significantly decreased preference for the conditioned pattern after training compared to that before training (test term, p = 7.97 × 10−15, z = −7.768). The results showed that learning was achieved in both the compound group and the control group. In addition, we observed that the preference for the conditioned pattern after training in the compound group was significantly less than that in the control group (test × training term, p = 1.35 × 10−4, z = 3.818). This was unexpected, since we did not observe such an effect in appetitive visual learning, as is discussed in a later section.
Demonstration of blocking
We next studied whether blocking occurs in aversive learning in crickets. First, we investigated whether blocking of olfactory learning occurs. One group of crickets (blocking group) was subjected to 8-trial P+ training and then 2-trial OP+ training (Table 1). Another group (control group) was subjected to unpaired presentations of a visual pattern and the aversive US (P/+ training) 8 times each and then 2-trial OP+ training. The results are shown in Fig. 3a. The preference for the trained odor after training in the control group was significantly less than that before training in the same group and was also significantly less than that before or after training in the blocking group (test × training term, p = 0.00148, z = −3.179). The results showed that conditioning was achieved in the control group but not in the blocking group, indicating that blocking occurs in olfactory learning.
We next studied whether blocking of visual pattern learning occurs. One group of crickets (blocking group) was subjected to 2-trial O+ training and then 8-trial OP+ training. Another group (control group) was subjected to unpaired presentations of an odor and the aversive US (O/+ training) 2 times each and then 8-trial OP+ training. The results are shown in Fig. 3b. The preference for the trained pattern after training in the control group was significantly less than that before training in the same group and was also significantly less than that before or after training in the blocking group (test × training term, p = 3.6 × 10−8, z = −5.509). The results showed that conditioning was achieved in the control group but not in the blocking group. The results indicate that blocking occurs in visual learning.
A neural circuit model of classical conditioning that matches the prediction error theory
We previously proposed a neural circuit model for appetitive learning that matches the prediction error theory11. The model was designed to represent neural circuits in the lobes of the mushroom body (MB), which is known to play critical roles in learning20,21, and was based on our findings that OA neurons play critical roles in appetitive learning in crickets12,13,14,15,16,17,19. Here we propose a model of aversive learning that matches the prediction error theory (Fig. 4), in which we focus on the roles of DA neurons in aversive learning12,13,14,15,16,17,18,19. For a complete description of our model, see Supplementary Figure S2.
In the model shown in Fig. 4a, "DA" neurons (assumed to be DA neurons projecting to the lobes of the MB) receive inhibitory synapses from "CS" neurons (assumed to be Kenyon cells of the MB), the efficacy of which is strengthened by conditioning. In pairing of an olfactory CS with a sodium chloride US, "DA" neurons receive excitatory input representing the actual US and inhibitory input representing the US predicted by the CS, and their responses thus represent US prediction error signals. Hence, US prediction error signals govern the enhancement of synaptic transmission that underlies conditioning. How the model accounts for blocking is shown in Fig. 4b (for an explanation, see the legend). To further illustrate the model, the information coded by "DA" neurons before and after training is shown in Supplemental Table S2.
Demonstration of auto-blocking
Our model predicts that blockade of synaptic transmission from DA neurons by a DA receptor antagonist (flupentixol24) during Y+ training impairs learning of Y but not formation of the aversive US prediction by Y, assuming that the antagonist impairs enhancement of "CS-CR" synapses but not that of "CS-DA" synapses (see Fig. 4a). Subsequent Y+ training given after recovery from the effect of the antagonist should therefore produce no learning. We term this effect "auto-blocking", because learning of Y is blocked by the US prediction by Y itself, not by X as in the case of blocking. We previously reported such an auto-blocking phenomenon in appetitive learning in crickets using an OA receptor antagonist (epinastine)11.
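This auto-blocking prediction can be sketched as a toy simulation of the two plastic synapses in the circuit model (purely illustrative: the learning rate, weight values and trial numbers are our assumptions, and the control here simply omits the drugged phase rather than reproducing the unpaired protocol used in the experiment):

```python
def training_trial(w_pred, w_cr, antagonist=False, rate=0.5):
    """One CS-US pairing in a simplified version of the Fig. 4a circuit.
    "DA" activity is the actual US (1.0) minus the CS-driven prediction."""
    da = max(1.0 - w_pred, 0.0)   # aversive US prediction error
    w_pred += rate * da           # "CS-DA" synapse: strengthened even under antagonist
    if not antagonist:
        w_cr += rate * da         # "CS-CR" synapse: requires DA transmission
    return w_pred, w_cr

# Auto-blocking group: 6 trials under the antagonist, then 2 drug-free trials.
w_pred, w_cr = 0.0, 0.0
for _ in range(6):
    w_pred, w_cr = training_trial(w_pred, w_cr, antagonist=True)
for _ in range(2):
    w_pred, w_cr = training_trial(w_pred, w_cr)

# Control group: only the 2 drug-free trials.
c_pred, c_cr = 0.0, 0.0
for _ in range(2):
    c_pred, c_cr = training_trial(c_pred, c_cr)

print(round(w_cr, 3), round(c_cr, 3))  # → 0.012 0.75
```

Because the prediction weight saturates during the drugged trials while the CR weight stays at zero, the prediction error, and hence CR learning, is almost absent in the later drug-free trials.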
We tested whether auto-blocking occurs in aversive learning in crickets. One group of animals (auto-blocking group) was injected with flupentixol into the head hemolymph, and 30 min later the group was subjected to 6-trial O+ training. The dose of flupentixol was determined based on our previous studies15,16,17. On the next day, the group was subjected to 2-trial O+ training. Another group (control group) was subjected to unpaired presentations of the odor and the aversive US (O/+ training) 6 times each under administration of flupentixol and then was subjected to 2-trial O+ training the next day. The results are shown in Fig. 5. The preference for the trained odor after training in the control group was significantly less than that before training in the same group and was significantly less than that before or after training in the auto-blocking group (test × training term, p = 0.00144, z = −3.186). The results showed that learning was achieved in the control group but not in the auto-blocking group, indicating that auto-blocking occurs in aversive learning in crickets.
We previously showed that an octopamine receptor antagonist (epinastine) does not impair aversive learning13,14,15,16,17, and here we performed an experiment to confirm that epinastine does not lead to auto-blocking in aversive learning. One group of animals was injected with epinastine into the head hemolymph, and 30 min later the group was subjected to 6-trial O+ training. On the next day, the group was subjected to 2-trial O+ training. The results are shown in Supplemental Figure S1. The preference for the trained odor after training was significantly less than that before training (test term, p = 7.23 × 10−4, z = −3.381), indicating that learning was successful. We conclude that a DA receptor antagonist but not an OA receptor antagonist leads to auto-blocking of aversive learning.
We obtained convincing evidence for the prediction error theory in aversive learning. We demonstrated, first, that a blocking phenomenon occurs in aversive learning in crickets, i.e., no learning of Y occurred in XY+ training when the training was preceded by X+ training, with X and Y being either visual or olfactory stimuli. We then proposed a neural circuitry model of aversive learning, in which our previous model of aversive learning15 was modified to match the prediction error theory. Our aversive learning model (Fig. 4) is a counterpart of the appetitive learning model we proposed previously11 and predicted an "auto-blocking" phenomenon, in which no learning of X occurs in X+ training when the training is preceded by the same training under administration of a DA receptor antagonist, and we indeed observed this phenomenon in olfactory learning. The results of the auto-blocking experiment demonstrated the validity of the prediction error theory: to our knowledge, all theories proposed to account for blocking other than the prediction error theory, including attentional theories5,6 and retrieval theories7 (or the comparator hypothesis), assume cue competition between X and Y, and these theories thus fail to account for auto-blocking. Demonstration of blocking and auto-blocking phenomena in aversive learning (this study) and in appetitive learning11 in the same species provides rigorous evidence for the prediction error theory in both appetitive and aversive forms of olfactory learning in crickets. Demonstration of auto-blocking of visual learning remains a subject for future study.
Previous reports on blocking in aversive learning in animals
Blocking has been reported in various systems of aversive learning in vertebrates and invertebrates. A blocking phenomenon was first demonstrated in classical conditioning of tone and light compound stimuli with an electric shock US in rats3. Evaluation of this learning paradigm led to the proposals of the prediction error theory4, attentional theory5,6 and retrieval theory7. Blocking in aversive learning has also been reported in mollusks, in which an odor, light or tactile stimulus was paired with bitter taste, electric shock or another aversive US25,26,27. Some researchers have attempted to discriminate the prediction error theory from alternative theories of blocking in aversive conditioning, but convincing evidence to discriminate among the different theories has not been reported8,9,10. The auto-blocking experiment described here may help to discriminate among different learning theories in these animals.
We observed that the effect of compound conditioning of a visual pattern and an odor was significantly greater than that of conditioning of a visual pattern alone (Fig. 2b), indicating that simultaneous presentation of an olfactory cue facilitated conditioning of a visual cue. This was an unexpected observation, since we did not find such an effect in appetitive visual conditioning11. Whether this effect is specific to aversive visual learning remains to be clarified.
Roles of dopamine neurons in mediating aversive prediction error signals
DA neurons are thought to convey reinforcement signals in many systems of associative learning in insects and mammals. In the fruit fly Drosophila, it has been suggested that different classes of DA neurons projecting to the lobes of the MB mediate reinforcement signals in aversive learning and appetitive learning20,21. In honey bees, as in crickets, it has been suggested that DA neurons convey reinforcement signals in aversive learning22, whereas OA neurons convey reinforcement signals in appetitive learning28,29. However, the exact nature of the signals that DA or OA neurons convey in learning has not been characterized in any insect. Future electrophysiological studies on the activities of DA neurons during conditioning are needed to clarify this issue.
In mammals, there is evidence that midbrain DA neurons mediate prediction error signals in appetitive learning1,2,30,31, but the roles of DA neurons in aversive learning remain controversial. Some researchers have suggested that midbrain DA neurons participate in aversive learning32,33 and convey aversive prediction error signals34, but others have argued that midbrain neurons mediating aversive signals may not be DAergic31,35,36. To what extent the roles of DA neurons in associative learning are conserved between insects and mammals remains a fascinating subject for research.
Are there interactions between neurons mediating prediction errors about reward and aversiveness?
We suggest that OA and DA neurons convey prediction error signals in appetitive learning and aversive learning, respectively, in crickets. An important future subject is to investigate whether OA and DA neurons process reward and aversive prediction error signals independently, or whether they interact tightly to integrate the two and form a unified system that mediates value prediction error signals in insects. We previously observed that interference with DA-ergic transmission, by DA receptor antagonists or by knockdown or knockout of genes coding for a type of DA receptor using RNAi or the CRISPR/Cas9 system, impaired aversive learning but did not affect appetitive learning, whereas interference with OA-ergic transmission impaired appetitive learning but not aversive learning12,13,14,15,16,17,18,19. In this study, we showed that a DA receptor antagonist but not an OA receptor antagonist leads to auto-blocking of aversive learning. These results indicate that the OA reward system and the DA aversion system can act independently when appetitive learning and aversive learning occur independently. Those studies, however, do not exclude the possibility that DA and OA neurons interact in situations in which a stimulus is associated with both appetitive and aversive stimuli. A similar issue has been discussed in mammals: some researchers have suggested that separate classes of midbrain neurons mediate prediction error signals about reward and aversiveness31,35,36, whereas others have proposed that a single class of DA neurons integrates reward and aversive signals to encode value prediction error signals34. Further investigations in insects may help to clarify this issue.
We conclude that insects predict future biologically significant events by appetitive and aversive associative learning and that DA neurons mediate prediction error signals in aversive learning. The neural circuitry mechanisms for computation of the prediction error remain unknown in any animal, and insects should serve as pertinent models in which to elucidate this important subject.
Adult male crickets, Gryllus bimaculatus, at 1 week after the imaginal molt were used. Before the experiment, animals were placed individually in beakers and deprived of drinking water for 4 days to enhance their motivation to search for water.
Olfactory and Visual Conditioning Procedures
We used classical conditioning and operant testing procedures described previously11,37 (Fig. 1). In olfactory conditioning, maple or vanilla odor (conditioned stimulus, CS) was paired with 20% NaCl solution (aversive US). In visual conditioning, a white-center and black-surround pattern (white-center pattern) was paired with 20% NaCl solution. The outer diameter of the pattern was 4 cm and that of the white center was 3 cm. In compound conditioning, an odor and a white-center pattern were presented simultaneously (compound CSs) and paired with NaCl solution. A syringe was used to present the CS and US to each cricket. The syringe contained NaCl solution as the US; at its needle, a filter paper soaked with odor essence was attached as the olfactory CS, and/or a white-center pattern was attached as the visual CS (Fig. 1). In a conditioning trial, the odor was brought close to the antennae (within 1–2 cm) or the visual pattern was brought close to the head (within 2–3 cm) and held for 3 sec, and then a drop of NaCl solution was applied to the mouth. In an unpaired trial, the odor or the visual pattern was brought close to the antennae or the head and held for 3 sec, and 2.5 min later a drop of NaCl solution was applied to the mouth with another syringe. In all pairing experiments, the inter-trial intervals (ITIs) were 5 min. After olfactory or compound conditioning trials, the air in the beaker was ventilated.
Odor preference tests were carried out as described previously11,37. All groups were tested for relative preference between the maple and vanilla odors before conditioning and at 20 min or 1 day after conditioning. The test apparatus consisted of waiting chambers and a test chamber. The floor of the test chamber had two holes that connected the chamber with two cylindrical containers, each containing a filter paper soaked with either maple or vanilla essence and covered with a fine gauze net (Fig. 1). Three containers were mounted on a rotatable holder, and two of the three containers could be located simultaneously beneath the holes of the test chamber. Before testing, a cricket was transferred to the waiting chamber and left for about 4 min to become accustomed to the surroundings. The cricket was then allowed to enter the test chamber and the test started. Two minutes after the test had started, the relative positions of the odor sources were exchanged by rotating the container holder. The preference test lasted for 4 min. A cricket was considered to have visited an odor source when it probed the top net with its mouth or palpi. The time the cricket spent visiting each odor source was recorded cumulatively, second by second. If the total visiting time of a cricket at the odor sources was less than 10 sec, the animal was considered insufficiently motivated, possibly due to poor physical condition, and its data were rejected. In the present experiments, about 15% of animals were rejected in each test.
Crickets were injected with 3 μl of saline containing 100 μM flupentixol or 2 μM epinastine (Sigma-Aldrich, Tokyo) into the head hemolymph 30 min before training. The estimated final concentrations after circulation were 350 nM for flupentixol and 7.0 nM for epinastine11,12.
Relative preference for the conditioned odor compared with the control odor was determined as the proportion of time spent visiting the conditioned odor in the total time spent visiting the two odors. Search time was measured to the nearest second. In our previous studies, we used non-parametric statistical tests to evaluate relative preference. Since the use of a generalized linear mixed model (GLMM) has been proposed to be advantageous for evaluating such biological data38, we used a GLMM with a binomial distribution of the relative preference, determined from the search-time data sampled each second, and a logit link function. We included the test condition (test before or after training), the training procedure and their interaction (test * training) as fixed effects in the GLMM, with the training and test terms as categorical variables. Individual crickets were treated as a random effect with random intercepts. We used R (ver. 3.3.1) and the lme4 (ver. 1.1.12) package for statistical analysis. Differences were considered significant when p < 0.05 in the Wald test of the GLMM.
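The preference index and the motivation-based exclusion rule described above can be expressed as short helpers (a sketch in Python rather than the R/lme4 pipeline used for the actual analysis; the function names and the `min_total` parameter name are our own):

```python
import math

def preference_index(t_conditioned, t_control, min_total=10):
    """Proportion of total search time (in seconds) spent at the
    conditioned odor. Returns None when total visiting time falls
    below the 10-sec motivation criterion (such animals were excluded)."""
    total = t_conditioned + t_control
    if total < min_total:
        return None
    return t_conditioned / total

def logit(p):
    """Logit link function used by the binomial GLMM."""
    return math.log(p / (1.0 - p))

print(preference_index(30, 90))   # → 0.25
print(round(logit(0.25), 3))      # log(1/3) ≈ -1.099
```

On the logit scale a preference of 0.5 (no preference) maps to 0, so the fixed-effect coefficients of the GLMM describe shifts away from indifference.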
Schultz, W. Behavioral theories and the neurophysiology of reward. Annu. Rev. Psychol. 57, 87–115 (2006).
Schultz, W. Updating dopamine reward signals. Curr. Opin. Neurobiol. 23, 229–238 (2013).
Kamin, L. [Predictability, surprise, attention and conditioning] Punishment and aversive behavior [Campbell, B. A. & Church, R. M. (eds.)] [279–298] (Appleton-Century-Crofts, New York, 1969).
Rescorla, R. A. & Wagner, A. R. [A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement] Classical Conditioning II [Black, A. & Prokasy, W. R. (eds)] [64–99] (Academic Press, New York, 1972).
Mackintosh, N. J. A theory of attention: Variations in the associability of stimuli with reinforcement. Psychol. Rev. 82, 276–298 (1975).
Pearce, J. M. & Hall, G. A model for Pavlovian learning: Variations in the effectiveness of conditioned but not of unconditioned stimuli. Psychol. Rev. 87, 532–552 (1980).
Miller, R. R. & Matzel, L. D. The comparator hypothesis: a response rule for the expression of associations. Psychol. Learn. Motiv. 22, 51–92 (1988).
Pearce, J. M. [Associative learning] Animal Learning & Cognition [35–91] (Psychology press, New York, 2008).
Miller, R. R., Barnet, R. C. & Grahame, N. J. Assessment of the Rescorla-Wagner model. Psychol. Bull. 117, 363–386 (1995).
Mazur, J. E. [Chapter 4: Theories and research on classical conditioning] Learning and behavior [75–100] (Pearson education, Boston, 2013).
Terao, K., Matsumoto, Y. & Mizunami, M. Critical evidence for the prediction error theory in associative learning. Sci. Rep. 5, 8929 (2015).
Unoki, S., Matsumoto, Y. & Mizunami, M. Participation of octopaminergic reward system and dopaminergic punishment system in insect olfactory learning revealed by pharmacological study. Eur. J. Neurosci. 22, 1409–1416 (2005).
Unoki, S., Matsumoto, Y. & Mizunami, M. Roles of octopaminergic and dopaminergic neurons in mediating reward and punishment signals in insect visual learning. Eur. J. Neurosci. 24, 2031–2038 (2006).
Nakatani, Y. et al. Why the carrot is more effective than the stick: Different dynamics of punishment memory and reward memory and its possible biological basis. Neurobiol. Learn. Mem. 92, 370–380 (2009).
Mizunami, M. et al. Roles of octopaminergic and dopaminergic neurons in appetitive and aversive memory recall in an insect. BMC Biol. 7, 46 (2009).
Matsumoto, Y., Hirashima, D. & Mizunami, M. Analysis and modeling of neural processes underlying sensory preconditioning. Neurobiol. Learn. Mem. 101, 103–113 (2013).
Matsumoto, Y., Matsumoto, C. S., Wakuda, R., Ichihara, S. & Mizunami, M. Roles of octopamine and dopamine in appetitive and aversive memory acquisition studied in olfactory conditioning of maxillary palpi extension response in crickets. Front. Behav. Neurosci. 9, 230 (2015).
Awata, H. et al. Knockout crickets for the study of learning and memory: Dopamine receptor Dop1 mediates aversive but not appetitive reinforcement in crickets. Sci. Rep. 5, 15885 (2015).
Awata, H. et al. Roles of OA1 octopamine receptor and Dop1 dopamine receptor in mediating appetitive and aversive reinforcement revealed by RNAi studies. Sci. Rep. 6, 29696 (2016).
Kim, Y. C., Lee, H. G. & Han, K. A. D1 dopamine receptor dDA1 is required in the mushroom body neurons for aversive and appetitive learning in Drosophila. J. Neurosci. 27, 7640–7647 (2007).
Aso, Y. et al. Three dopamine pathways induce aversive odor memories with different stability. PLoS Genet. 8 (2012).
Vergoz, V., Roussel, E., Sandoz, J. C. & Giurfa, M. Aversive learning in honeybees revealed by the olfactory conditioning of the sting extension reflex. PLoS One 2, e288 (2007).
Klappenbach, M., Maldonado, H., Locatelli, F. & Kaczer, L. Opposite actions of dopamine on aversive and appetitive memories in the crab. Learn. Mem. 19, 73–83 (2012).
Mustard, J. A. et al. Analysis of two D1-like dopamine receptors from the honey bee Apis mellifera reveals agonist-independent activity. Mol. Brain Res. 113, 67–77 (2003).
Sahley, C., Rudy, J. W. & Gelperin, A. An analysis of associative learning in a terrestrial mollusc. J. Comp. Physiol. A. 144, 1–8 (1981).
Rogers, R. F. & Matzel, L. D. Higher-order associative processing in Hermissenda suggests multiple sites of neuronal modulation. Learn. Mem. 2, 279–298 (1996).
Prados, J. et al. Blocking in rats, humans and snails using a within-subjects design. Behav. Process. 100, 23–31 (2013).
Hammer, M. & Menzel, R. Multiple sites of associative odor learning as revealed by local brain microinjections of octopamine in honeybees. Learn. Mem. 5, 146–156 (1998).
Farooqui, T., Robinson, K., Vaessin, H. & Smith, B. H. Modulation of early olfactory processing by an octopaminergic reinforcement pathway in the honeybee. J. Neurosci. 23, 5370–5380 (2003).
Steinberg, E. E. et al. A causal link between prediction errors, dopamine neurons and learning. Nat. Neurosci. 16, 966–973 (2013).
Schultz, W. Neuronal reward and decision signals: From theories to data. Physiol. Rev. 95, 853–951 (2015).
Li, S. S. Y. & McNally, G. P. The conditions that promote fear learning: Prediction error and Pavlovian fear conditioning. Neurobiol. Learn. Mem. 108, 14–21 (2014).
Wenzel, J. M., Rauscher, N. A., Cheer, J. F. & Oleson, E. B. A role for phasic dopamine release within the nucleus accumbens in encoding aversion: A review of the neurochemical literature. ACS Chem. Neurosci. 6, 16–26 (2015).
Matsumoto, H., Tian, J., Uchida, N. & Watabe-Uchida, M. Midbrain dopamine neurons signal aversion in a reward-context-dependent manner. Elife 5, 1–24 (2016).
Fiorillo, C. D. Two Dimensions of Value: Dopamine neurons represent reward but not aversiveness. Science 341, 546–549 (2013).
Stauffer, W. R., Lak, A., Kobayashi, S. & Schultz, W. Components and characteristics of the dopamine reward utility signal. J. Comp. Neurol. 524, 1699–1711 (2016).
Matsumoto, Y. & Mizunami, M. Temporal determinants of long-term retention of olfactory memory in the cricket Gryllus bimaculatus. J. Exp. Biol. 205, 1429–1437 (2002).
Warton, D. I. & Hui, F. K. C. The arcsine is asinine: the analysis of proportions in ecology. Ecology 92, 3–10 (2011).
We thank Dr. Nao Ota and Mr. Yusaku Okubo for helpful comments on the statistical analysis. This study was supported by Grants-in-Aid for Scientific Research from the Ministry of Education, Science, Culture, Sports and Technology of Japan (16H0481406 and 16K1477406 to MM) and a Grant-in-Aid for JSPS Fellows (No. 15J01414 to KT).
The authors declare that they have no competing interests.
Terao, K., Mizunami, M. Roles of dopamine neurons in mediating the prediction error in aversive learning in insects. Sci Rep 7, 14694 (2017). https://doi.org/10.1038/s41598-017-14473-y