Abstract
The standard form of back-propagation learning1 is implausible as a model of perceptual learning because it requires an external teacher to specify the desired output of the network. We show how the external teacher can be replaced by internally derived teaching signals. These signals are generated by using the assumption that different parts of the perceptual input have common causes in the external world. Small modules that look at separate but related parts of the perceptual input discover these common causes by striving to produce outputs that agree with each other (Fig. 1a). The modules may look at different modalities (such as vision and touch), or the same modality at different times (for example, the consecutive two-dimensional views of a rotating three-dimensional object), or even spatially adjacent parts of the same image. Our simulations show that when our learning procedure is applied to adjacent patches of two-dimensional images, it allows a neural network that has no prior knowledge of the third dimension to discover depth in random dot stereograms of curved surfaces.
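The core idea — two modules trained so their outputs agree across examples that share a common cause — can be illustrated with a minimal sketch. The toy data below (a shared scalar cause plus independent noise in two patches) and the gradient-ascent-on-correlation objective are simplifying assumptions for illustration; they are not the paper's stereogram task or its exact mutual-information objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: each example has a shared scalar "common
# cause" plus independent noise in two adjacent input patches.
n, d = 2000, 8
cause = rng.normal(size=(n, 1))
patch_a = cause + 0.5 * rng.normal(size=(n, d))
patch_b = cause + 0.5 * rng.normal(size=(n, d))

# One linear module per patch; each learns weights so that the two
# modules' outputs agree (are maximally correlated) across examples.
wa = rng.normal(scale=0.1, size=d)
wb = rng.normal(scale=0.1, size=d)

# Centre the patches once so output correlation is easy to compute.
A = patch_a - patch_a.mean(axis=0)
B = patch_b - patch_b.mean(axis=0)

lr = 0.1
for _ in range(500):
    a, b = A @ wa, B @ wb
    saa, sbb, sab = a @ a, b @ b, a @ b
    r = sab / np.sqrt(saa * sbb)
    # Analytic gradient of the output correlation r with respect to
    # each module's weights; ascend it to make the outputs agree.
    wa += lr * (A.T @ b / np.sqrt(saa * sbb) - r * (A.T @ a) / saa)
    wb += lr * (B.T @ a / np.sqrt(saa * sbb) - r * (B.T @ b) / sbb)

a, b = A @ wa, B @ wb
final_r = (a @ b) / np.sqrt((a @ a) * (b @ b))
# The learned outputs end up highly correlated: each module has
# recovered (a projection of) the shared cause from its own patch.
print(float(final_r))
```

Because the common cause is the only statistical link between the two patches, maximizing agreement forces both modules to extract it — the same principle that lets the stereogram network discover depth without a teacher.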
References
Rumelhart, D. E., Hinton, G. E. & Williams, R. J. Nature 323, 533–536 (1986).
Hastie, T. J. & Tibshirani, R. J. Generalized Additive Models (Chapman and Hall, London, 1990).
Lehky, S. R. & Sejnowski, T. J. J. Neurosci. 10, 2281–2299 (1990).
Zemel, R. S. & Hinton, G. E. in Advances in Neural Information Processing Systems Vol. 3 (eds Lippman, R. P., Moody, J. E. & Touretzky, D. S.) 299–305 (Morgan Kaufmann, San Mateo, CA, 1991).
Becker, S., Hinton, G. Self-organizing neural network that discovers surfaces in random-dot stereograms. Nature 355, 161–163 (1992). https://doi.org/10.1038/355161a0