A deep network is best understood in terms of components used to design it—objective functions, architecture and learning rules—rather than unit-by-unit computation. Richards et al. argue that this inspires fruitful approaches to systems neuroscience.
Many studies focus on neural associations, yet understanding the brain will ultimately depend on discovering the causal interactions underlying its function. Moving from association to causation will thus be essential for advancing neuroscience.
When crossing the street, you can ignore the color of oncoming cars, but when hailing a taxi, color matters. How do we learn what to represent neurally for each task? Here, Niv summarizes a decade of work on representation learning in the brain.
This paper first reviews the work on brain-machine interfaces (BMIs) for restoring lost motor function and then provides a perspective on how BMIs could extend to the new frontier of restoring lost emotional function in neuropsychiatric disorders.
The authors review the most recent measurement and manipulation approaches that enable links between synaptic plasticity and learning to be examined, and they propose future approaches for pursuing this endeavor.