Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
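As a rough illustration of the procedure summarized above, the sketch below trains a tiny two-layer network by repeatedly adjusting its weights down the gradient of a squared-error measure. It is not the authors' original simulation code: the logistic units, the XOR task, the network size, the learning rate, and the number of iterations are all assumptions chosen to keep the example short and runnable.

```python
# Minimal back-propagation sketch (illustrative only, not the authors' code):
# a two-layer network of logistic units trained by gradient descent on the
# squared difference between the actual and desired output vectors.
import numpy as np

rng = np.random.default_rng(0)

# Assumed task: XOR, which a single-layer perceptron cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input vectors
D = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small random initial weights and zero biases for hidden and output layers.
W1, b1 = rng.normal(0.0, 1.0, (2, 3)), np.zeros(3)  # 2 inputs -> 3 hidden units
W2, b2 = rng.normal(0.0, 1.0, (3, 1)), np.zeros(1)  # 3 hidden -> 1 output unit
lr = 0.5  # learning rate (an assumed value)

for _ in range(20000):
    # Forward pass: compute hidden and output activations.
    H = sigmoid(X @ W1 + b1)           # hidden unit states
    Y = sigmoid(H @ W2 + b2)           # actual output vector
    E = 0.5 * np.sum((Y - D) ** 2)     # error measure being minimized

    # Backward pass: propagate error derivatives from the output units back
    # to the hidden units.
    dY = (Y - D) * Y * (1.0 - Y)       # dE/d(net input) at the output units
    dH = (dY @ W2.T) * H * (1.0 - H)   # dE/d(net input) at the hidden units

    # Adjust each weight in proportion to the negative error gradient.
    W2 -= lr * (H.T @ dY); b2 -= lr * dY.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

print(np.round(Y, 2))  # with suitable initial weights, close to the desired [0, 1, 1, 0]
```

After training, the hidden units develop internal codes that make the task linearly separable at the output, which is the sense in which the abstract says hidden units come to represent important features of the task domain.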
References
1. Rosenblatt, F. Principles of Neurodynamics (Spartan, Washington, DC, 1961).
2. Minsky, M. L. & Papert, S. Perceptrons (MIT, Cambridge, 1969).
3. Le Cun, Y. Proc. Cognitiva 85, 599–604 (1985).
4. Rumelhart, D. E., Hinton, G. E. & Williams, R. J. in Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1: Foundations (eds Rumelhart, D. E. & McClelland, J. L.) 318–362 (MIT, Cambridge, 1986).
Cite this article
Rumelhart, D., Hinton, G. & Williams, R. Learning representations by back-propagating errors. Nature 323, 533–536 (1986). https://doi.org/10.1038/323533a0