Article | Open Access
Timing along the cardiac cycle modulates neural signals of reward-based learning
Previous work has shown that natural cardiac rhythms modulate the perception of and the reaction to sensory cues through changes in associated neural signals. Here, the authors show that sensitivity to prediction errors during reward learning is related to the phase of the cardiac cycle.
Elsa F. Fouragnan, Billy Hosking & Alejandra Sel
Article | Open Access
An actor-model framework for visual sensory encoding
Encoding and downsampling images is key for visual prostheses. Here, the authors show that an actor-model framework using the inherent computation of the retinal network yields better performance in downsampling images compared to learning-free methods.
Franklin Leong, Babak Rahmani & Diego Ghezzi
Article | Open Access
Prefrontal signals precede striatal signals for biased credit assignment in motivational learning biases
People are more likely to take action when they expect a reward but hold back when expecting punishment. Here, the authors show that such motivational biases may stem from biased action-outcome learning in cortico-striatal circuits.
Johannes Algermissen, Jennifer C. Swart & Hanneke E. M. den Ouden
Article | Open Access
Neural and computational underpinnings of biased confidence in human reinforcement learning
The mechanism of confidence formation in learning remains poorly understood. Here, the authors show that both dorsal and ventral prefrontal networks encode confidence, but only the ventral network incorporates the valence-induced bias.
Chih-Chung Ting, Nahuel Salem-Garcia & Maël Lebreton
Article | Open Access
A GPU-based computational framework that bridges neuron simulation and artificial intelligence
High computational cost severely limits the application of biophysically detailed multi-compartment models. Here, the authors present DeepDendrite, a GPU-optimized tool that drastically accelerates detailed neuron simulations for neuroscience and AI, enabling exploration of intricate neuronal processes and dendritic learning mechanisms in these fields.
Yichen Zhang, Gan He & Tiejun Huang
Article | Open Access
Action initiation and punishment learning differ from childhood to adolescence while reward learning remains stable
Adolescence is often associated with heightened reward learning and impulsivity. Here the authors show in 742 people aged 9-18 that reward learning in fact remains stable with age, whilst punishment learning increases and action initiation decreases.
Ruth Pauli, Inti A. Brazil & Patricia L. Lockwood
Article | Open Access
Prior information differentially affects discrimination decisions and subjective confidence reports
Both decisions and the confidence accompanying them are influenced not only by incoming information, but also prior expectations. Here, the authors show that confidence in decisions is affected by prior information more than the decisions themselves.
Marika Constant, Michael Pereira & Elisa Filevich
Article | Open Access
Sequence anticipation and spike-timing-dependent plasticity emerge from a predictive learning rule
Prediction of future inputs is a key computational task for the brain. Here, the authors propose a predictive learning rule in neurons that leads to anticipation and recall of inputs, and that reproduces experimentally observed STDP phenomena.
Matteo Saponati & Martin Vinck
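For context, the experimentally observed STDP phenomena this summary refers to follow a characteristic exponential timing window. Below is a minimal sketch of that classic window, not the paper's predictive rule; the amplitudes and time constants are illustrative values only:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Classic STDP weight change for a spike-time difference.

    dt_ms = t_post - t_pre: a presynaptic spike shortly before a
    postsynaptic spike (dt_ms > 0) potentiates the synapse; the reverse
    order depresses it. The effect decays exponentially with |dt_ms|.
    (All parameter values here are illustrative, not from the paper.)
    """
    if dt_ms >= 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    return -a_minus * math.exp(dt_ms / tau_minus)
```

With these illustrative parameters, pre-before-post pairings at short delays produce the largest potentiation, and the slightly larger depression amplitude (a_minus > a_plus) mirrors the asymmetry often reported experimentally.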
Article | Open Access
Trait anxiety is associated with hidden state inference during aversive reversal learning
Here, the authors show that anxiety-related alterations of aversive learning can be understood in terms of a computational model in which anxious humans mentally represent more hidden states as causes of different levels of threats.
Ondrej Zika, Katja Wiech & Nicolas W. Schuck
Article | Open Access
Blocking D2/D3 dopamine receptors in male participants increases volatility of beliefs when learning to trust others
Inferring other people’s intentions from their actions is essential for successful social engagement. Here, the authors show that in social contexts, dopamine D2 receptors are important in regulating uncertainty-driven belief updating.
Nace Mikus, Christoph Eisenegger & Michael Naef
Article | Open Access
Reinforcement learning establishes a minimal metacognitive process to monitor and control motor learning performance
Metacognition is fundamental for regulating learning speeds and memory retention. Here, the authors demonstrate that reinforcement learning mediates this process in implicit motor learning, maximizing rewards and minimizing punishments.
Taisei Sugiyama, Nicolas Schweighofer & Jun Izawa
Article | Open Access
Meta-learning biologically plausible plasticity rules with random feedback pathways
The biological plausibility of backpropagation and its relationship with synaptic plasticity remain open questions. The authors propose a meta-learning approach to discover interpretable plasticity rules to train neural networks under biological constraints. The meta-learned rules boost the learning efficiency via bio-inspired synaptic plasticity.
Navid Shervani-Tabar & Robert Rosenbaum
Article | Open Access
Neuro-computational mechanisms and individual biases in action-outcome learning under moral conflict
How we juggle morally conflicting outcomes during learning remains unknown. Here, by comparing variants of reinforcement learning models, the authors show that participants differ substantially in their preference, with some choosing actions that benefit themselves while others choose actions that prevent harm.
Laura Fornari, Kalliopi Ioumpa & Valeria Gazzola
Article | Open Access
Abstract representations emerge naturally in neural networks trained to perform multiple tasks
How animals learn to generalize from one context to another remains unresolved. Here, the authors show that the abstract representations that are thought to underlie this form of generalization emerge naturally in neural networks trained to perform multiple tasks.
W. Jeffrey Johnston & Stefano Fusi
Article | Open Access
Cerebro-cerebellar networks facilitate learning through feedback decoupling
Behavioral feedback is critical for learning, but it is often not available. Here, the authors introduce a deep learning model in which the cerebellum provides the cerebrum with feedback predictions, thereby facilitating learning, reducing dysmetria, and making several experimental predictions.
Ellen Boven, Joseph Pemberton & Rui Ponte Costa
Article | Open Access
Computational and neural mechanisms of statistical pain learning
Pain fluctuates over time in ways that are non-random. Here, the authors show that the human brain can learn to predict these changes in a manner consistent with optimal Bayesian inference by engaging sensorimotor, parietal, and premotor regions.
Flavia Mancini, Suyi Zhang & Ben Seymour
Article | Open Access
Towards artificial general intelligence via a multimodal foundation model
Artificial intelligence approaches inspired by human cognitive function usually have a single learned ability. The authors propose a multimodal foundation model that demonstrates cross-domain learning and adaptation for a broad range of downstream cognitive tasks.
Nanyi Fei, Zhiwu Lu & Ji-Rong Wen
Article | Open Access
Leveraging omic features with F3UTER enables identification of unannotated 3’UTRs for synaptic genes
3’ untranslated regions (3’UTRs) play a crucial role in regulating gene expression, but our 3’UTR catalogue is incomplete. Here, the authors develop a machine learning-based framework to predict previously unannotated 3’UTRs in 39 human tissues.
Siddharth Sethi, David Zhang & Juan A. Botia
Article | Open Access
The neural coding framework for learning generative models
Brain-inspired neural generative models can be designed to learn complex probability distributions from data. Here the authors propose a neural generative computational framework, inspired by the theory of predictive processing in the brain, that facilitates parallel computing for complex tasks.
Alexander Ororbia & Daniel Kifer
Article | Open Access
A robust and interpretable machine learning approach using multimodal biological data to predict future pathological tau accumulation
The authors present a machine learning approach that combines baseline multimodal data to accurately predict individualised trajectories of future pathological tau accumulation at asymptomatic and mildly impaired stages of Alzheimer’s disease.
Joseph Giorgio, William J. Jagust & Zoe Kourtzi
Article | Open Access
Introducing principles of synaptic integration in the optimization of deep neural networks
Tasks involving continual learning and adaptation to real-time scenarios remain challenging for artificial neural networks, in contrast to the real brain. The authors propose a brain-inspired optimizer based on mechanisms of synaptic integration and strength regulation that improves the performance of both artificial and spiking neural networks.
Giorgia Dellaferrera, Stanisław Woźniak & Evangelos Eleftheriou
Article | Open Access
Neuronal activity in sensory cortex predicts the specificity of learning in mice
The neural mechanisms underpinning the specificity of fear memories remain poorly understood. Here, the authors highlight how neural activity prior to fear learning impacts fear memory specificity.
Katherine C. Wood, Christopher F. Angeloni & Maria N. Geffen
Article | Open Access
A self-supervised domain-general learning framework for human ventral stream representation
It is unknown whether object category learning can be formed purely through domain general learning of natural image structure. Here the authors show that human visual brain responses to objects are well-captured by self-supervised deep neural network models trained without labels, supporting a domain-general account.
Talia Konkle & George A. Alvarez
Article | Open Access
Chronic nicotine increases midbrain dopamine neuron activity and biases individual strategies towards reduced exploration in mice
Chronic nicotine exposure impacts various components of decision-making processes, such as exploratory behaviors. Here, the authors identify the cellular mechanism and show that chronic nicotine exposure increases the tonic activity of VTA dopaminergic neurons and reduces exploration in mice.
Malou Dongelmans, Romain Durand-de Cuttoli & Philippe Faure
Article | Open Access
A model for learning based on the joint estimation of stochasticity and volatility
Human learning depends on opposing effects of two noise factors: volatility and stochasticity. Here the authors present a model of learning that shows how and why joint estimation of these factors is important for understanding healthy and pathological learning.
Payam Piray & Nathaniel D. Daw
Article | Open Access
Neural heterogeneity promotes robust learning
The authors show that heterogeneity in spiking neural networks improves the accuracy and robustness of prediction in complex information-processing tasks, yields optimal parameter distributions similar to experimental data, and is metabolically efficient for learning tasks at varying timescales.
Nicolas Perez-Nieves, Vincent C. H. Leung & Dan F. M. Goodman
Article | Open Access
Linear reinforcement learning in planning, grid fields, and cognitive control
Models of decision making have so far been unable to account for how humans’ choices can be flexible yet efficient. Here the authors present a linear reinforcement learning model which explains both flexibility, and rare limitations such as habits, as arising from efficient approximate computation.
Payam Piray & Nathaniel D. Daw
Article | Open Access
Learning with reinforcement prediction errors in a model of the Drosophila mushroom body
Dopamine neurons in the mushroom body help Drosophila learn to approach rewards and avoid punishments. Here, the authors propose a model in which dopaminergic learning signals encode reinforcement prediction errors by utilising feedback reinforcement predictions from mushroom body output neurons.
James E. M. Bennett, Andrew Philippides & Thomas Nowotny
Article | Open Access
Synaptic metaplasticity in binarized neural networks
Deep neural networks usually rapidly forget previously learned tasks while training on new ones. Laborieux et al. propose a method for training binarized neural networks, inspired by neuronal metaplasticity, that avoids catastrophic forgetting and is relevant for neuromorphic applications.
Axel Laborieux, Maxence Ernoult & Damien Querlioz
Article | Open Access
Clone-structured graph representations enable flexible learning and vicarious evaluation of cognitive maps
Higher-order sequence learning using a structured graph representation, clone-structured cognitive graphs (CSCG), can explain how the hippocampus learns cognitive maps. CSCG provides novel explanations for transferable schemas and transitive inference in the hippocampus, and for how place cells, splitter cells, lap cells and a variety of phenomena emerge from the same set of fundamental principles.
Dileep George, Rajeev V. Rikhye & Miguel Lázaro-Gredilla
Article | Open Access
Identifying multiple sclerosis subtypes using unsupervised machine learning and MRI data
Multiple sclerosis is a heterogeneous progressive disease. Here, the authors use an unsupervised machine learning algorithm to determine multiple sclerosis subtypes, progression, and response to potential therapeutic treatments based on neuroimaging data.
Arman Eshaghi, Alexandra L. Young & Olga Ciccarelli
Article | Open Access
Predictive learning as a network mechanism for extracting low-dimensional latent space representations
Neural networks trained using predictive models generate representations that recover the underlying low-dimensional latent structure in the data. Here, the authors demonstrate that a network trained on a spatial navigation task generates place-related neural activations similar to those observed in the hippocampus and show that these are related to the latent structure.
Stefano Recanatesi, Matthew Farrell & Eric Shea-Brown
Article | Open Access
Deep learning encodes robust discriminative neuroimaging representations to outperform standard machine learning
Recent critical commentaries unfavorably compare deep learning (DL) with standard machine learning (SML) for brain imaging data analysis. Here, the authors show that if trained following prevalent DL practices, DL methods substantially improve compared to SML methods by encoding robust discriminative brain representations.
Anees Abrol, Zening Fu & Vince Calhoun
Article | Open Access
Remembrance of things practiced with fast and slow learning in cortical and subcortical pathways
Surprisingly, motor cortex becomes less involved in performing skilled motor behaviors as they are practiced. This is addressed by a model of two descending pathways featuring different types of learning: fast learning in a cortical pathway to maximize rewards and slow learning in a subcortical pathway to reinforce behaviors through repetition.
James M. Murray & G. Sean Escola
Article | Open Access
Place cell maps slowly develop via competitive learning and conjunctive coding in the dentate gyrus
Place cells in the hippocampus fire action potentials at spatially selective firing fields that collectively map the environments. Here, the authors show how these activity patterns develop with experience in mice and determine the importance of competitive learning in this process.
Soyoun Kim, Dajung Jung & Sébastien Royer
Article | Open Access
Unconscious reinforcement learning of hidden brain states supported by confidence
Humans can unconsciously learn to gamble on rewarding options, but can they do so when it comes to their own mental states? Here, the authors show that participants can learn to use unconscious representations in their own brains to earn rewards, and that metacognition correlates with their learning processes.
Aurelio Cortese, Hakwan Lau & Mitsuo Kawato
Article | Open Access
A solution to the learning dilemma for recurrent networks of spiking neurons
Bellec et al. present a mathematically founded approximation for gradient descent training of recurrent neural networks without backwards propagation in time. This enables biologically plausible training of spike-based neural network models with working memory and supports on-chip training of neuromorphic hardware.
Guillaume Bellec, Franz Scherr & Wolfgang Maass
Article | Open Access
Spatial planning with long visual range benefits escape from visual predators in complex naturalistic environments
Habitat complexity influences the sensory ecology of predator-prey interactions. Here, the authors show that habitat complexity also affects the use of different decision-making paradigms, namely habit- and plan-based action selection. Simulations across habitat types show that only savanna-like terrestrial habitats favor planning during visually-guided predator evasion, while aquatic and simple terrestrial habitats do not.
Ugurcan Mugan & Malcolm A. MacIver
Article | Open Access
A mechanistic account of serotonin’s impact on mood
The cognitive computational mechanisms underlying the antidepressant treatment response of SSRIs are not well understood. Here the authors show that a week of SSRI treatment in healthy subjects manifests as an amplification of the perception of positive outcomes when learning occurs in a positive mood setting.
Jochen Michely, Eran Eldar & Raymond J. Dolan
Article | Open Access
Mouse tracking reveals structure knowledge in the absence of model-based choice
Mouse tracking can reveal people’s subjective beliefs and whether they understand the structure of a task. These data demonstrate that people often do not use this information to make good choices.
Arkady Konovalov & Ian Krajbich
Article | Open Access
Response outcomes gate the impact of expectations on perceptual decisions
The authors use a combination of perceptual decision making in rats and computational modeling to explore the interplay of priors and sensory cues. They find that rats can learn to either alternate or repeat their actions based on reward likelihood, and that the influence of bias on their actions disappears after making an error.
Ainhoa Hermoso-Mendizabal, Alexandre Hyafil & Jaime de la Rocha
Article | Open Access
Separability and geometry of object manifolds in deep neural networks
Neural activity space or manifold that represents object information changes across the layers of a deep neural network. Here the authors present a theoretical account of the relationship between the geometry of the manifolds and the classification capacity of the neural networks.
Uri Cohen, SueYeon Chung & Haim Sompolinsky
Article | Open Access
Dopamine transients do not act as model-free prediction errors during associative learning
Dopamine neurons are proposed to signal the reward prediction error in model-free reinforcement learning algorithms. Here, the authors show that when given during an associative learning task, optogenetic activation of dopamine neurons causes associative, rather than value, learning.
Melissa J. Sharpe, Hannah M. Batchelor & Geoffrey Schoenbaum
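The model-free prediction error being tested here is, in standard reinforcement-learning terms, the temporal-difference (TD) error delta = r + gamma * V(s') - V(s). A minimal sketch of that quantity (the discount and learning-rate values are illustrative, not the paper's):

```python
def td_error(reward, v_current, v_next, gamma=0.95):
    """Model-free reward prediction error: delta = r + gamma * V(s') - V(s).

    Positive delta means the outcome was better than predicted,
    negative delta means it was worse.
    """
    return reward + gamma * v_next - v_current

def td_update(v_current, delta, alpha=0.1):
    """Move the current state's value a step of size alpha along the TD error."""
    return v_current + alpha * delta
```

For example, an unexpected reward of 1.0 in a terminal state (v_next = 0.0, v_current = 0.0) yields delta = 1.0, and a single update moves V(s) from 0.0 to 0.1.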
Article | Open Access
Controllability governs the balance between Pavlovian and instrumental action selection
Pavlovian and instrumentally driven actions often conflict when determining the best outcome. Here, the authors present an arbitration theory supported by human behavioral data where Pavlovian predictors drive action selection in an uncontrollable environment, while more flexible instrumental prediction dominates under conditions of high controllability.
Hayley M. Dorfman & Samuel J. Gershman
Article | Open Access
Optimizing agent behavior over long time scales by transporting value
People are able to mentally time travel to distant memories and reflect on the consequences of those past events. Here, the authors show how a mechanism that connects learning from delayed rewards with memory retrieval can enable AI agents to discover links between past events to help decide better courses of action in the future.
Chia-Chun Hung, Timothy Lillicrap & Greg Wayne
Article | Open Access
Inhibitory microcircuits for top-down plasticity of sensory representations
Rewards can improve stimulus processing in early sensory areas but the underlying neural circuit mechanisms are unknown. Here, the authors build a computational model of layer 2/3 primary visual cortex and suggest that plastic inhibitory circuits change first and then increase excitatory representations beyond the presence of rewards.
Katharina Anna Wilmes & Claudia Clopath
Article | Open Access
The Eighty Five Percent Rule for optimal learning
Is there an optimum difficulty level for training? In this paper, the authors show that for the widely-used class of stochastic gradient-descent based learning algorithms, learning is fastest when the accuracy during training is 85%.
Robert C. Wilson, Amitai Shenhav & Jonathan D. Cohen
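In this paper's setup, accuracy follows a sigmoidal psychometric function of stimulus strength, so holding a learner at a fixed training accuracy amounts to inverting that function. A minimal sketch, assuming a Gaussian-CDF psychometric curve P(correct) = Phi(beta * x); the precision value beta below is illustrative, not from the paper:

```python
from statistics import NormalDist

def stimulus_for_target_accuracy(beta, target_acc):
    """Stimulus strength x at which a learner with precision beta, whose
    probability of a correct response is Phi(beta * x), performs at
    target_acc. Solving Phi(beta * x) = p gives x = Phi^-1(p) / beta.
    (Illustrative psychometric model; beta is a hypothetical parameter.)
    """
    return NormalDist().inv_cdf(target_acc) / beta

# Holding a learner of precision beta = 2.0 at 85% training accuracy:
x85 = stimulus_for_target_accuracy(2.0, 0.85)
```

As the learner's precision beta grows, the required stimulus strength shrinks, i.e. stimuli must become harder to keep accuracy pinned at the target, which is how such a fixed-accuracy training schedule is maintained.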
Article | Open Access
Stable memory with unstable synapses
How are stable memories maintained in the brain despite significant ongoing fluctuations in synaptic strengths? Here, the authors show that a model consistent with fluctuations, homeostasis and biologically plausible learning rules naturally leads to memories implemented as dynamic attractors.
Lee Susman, Naama Brenner & Omri Barak
Article | Open Access
Challenging the point neuron dogma: FS basket cells as 2-stage nonlinear integrators
Recent experimental work has revealed non-linear dendritic integration in interneurons. Here, the authors show, through detailed biophysical modeling, that fast-spiking interneurons are better described by a 2-stage artificial neural network model, calling into question the use of point neuron models.
Alexandra Tzilivaki, George Kastellakis & Panayiota Poirazi