## Abstract

The brain attenuates its responses to self-produced exteroceptions (e.g., we cannot tickle ourselves). Is this phenomenon, known as sensory attenuation, enabled innately, or acquired through learning? Here, our simulation study using a multimodal hierarchical recurrent neural network model, based on variational free-energy minimization, shows that a mechanism for sensory attenuation can develop through learning of two distinct types of sensorimotor experience, involving self-produced or externally produced exteroceptions. For each sensorimotor context, a particular free-energy state emerged through interaction between top-down prediction with precision and bottom-up sensory prediction error from each sensory area. The executive area in the network served as an information hub. Consequently, shifts between the two sensorimotor contexts triggered transitions from one free-energy state to another in the network via executive control, which caused shifts between attenuating and amplifying prediction-error-induced responses in the sensory areas. This study situates the emergence of sensory attenuation (or self-other distinction) in the development of distinct free-energy states in a dynamic hierarchical neural system.

## Introduction

The brain couples its structure with the outside world via sensorimotor experiences^{1}. A posteriori development of neural processing gradually shapes our perceptual phenomena, building on an a priori nature determined by genes. It yields well-defined self-experience and helps to confidently situate the self in relation to others. In general, we face two primary types of sensorimotor experience, self-movements and sensory events in the outside world, which may or may not be correlated. Recognition of this difference is thought to underlie self-other distinction, the sense of self, and the subsequent attribution of agency^{2,3}, but the difference cannot be known a priori. The brain may acquire the capacity to modulate neural responses via sensorimotor learning, depending on the context.

A phenomenon called sensory attenuation is recognized as one of the bases of the sense of self, especially the sense of agency^{2,4}. Sensory attenuation refers to an experience in which an exteroception produced by a self-movement is less salient than one produced externally^{5}. A classic example is the difficulty of tickling ourselves. In addition, the ability to ignore visual changes during eye or head movements is thought to help maintain stability of the visual scene. At the neural level, sensory attenuation is observed as reduced brain responses, such as blood oxygen level-dependent (BOLD) responses, especially in sensory areas^{6,7,8}. A line of research shows that sensory attenuation is associated with predicted sensorimotor correlation and is diminished by temporal or spatial mismatch between a movement and the resultant exteroception (e.g., a temporal delay or a trajectory rotation)^{9,10,11}. There are some suggestions of underlying neural functions, such as an efference copy of the motor command^{9,12} and neuromodulation (e.g., dopaminergic transmission)^{13}. However, despite decades of intensive work, a fundamental question remains: Is sensory attenuation enabled innately, or is it acquired through learning?

Here, we provide a computational explanation, suggesting that a mechanism for sensory attenuation can be self-organized through learning. We focus on the following factors. If a self-movement and a sensory event in the outside world are not correlated, the resultant proprioception and exteroception occur separately. The brain may efficiently use individual sensory areas to represent these individual sensations. On the other hand, if proprioception and exteroception are correlated, they can be best explained as being generated by the same latent cause, i.e., self, rather than being generated by two independent latent states, i.e., self and other. In that case, if the brain can develop a predictive model about the spatio-temporal sensorimotor correlation, it is reasonable to represent the sensorimotor coupling using a multimodal association area, and neural responses in individual sensory areas can be attenuated. That is, through sensorimotor learning, the brain may develop the capacity to predictively modulate sensory-information flow and processing inside hierarchical neural networks, depending on the context, i.e., self generated versus other generated. To test this hypothesis, we conducted a robotic simulation experiment using a variational recurrent neural network (RNN) model based on Bayesian brain theory or free-energy minimization^{1,14,15,16,17}.

## Results

### Computational model

Our daily sensorimotor behavior generally has some sort of regularity, e.g., a set-point of body posture and dynamic movement patterns, and randomness, e.g., free movement with fluctuations^{18}. Therefore, we considered how a robot agent controlled by an RNN develops sensory attenuation through such spontaneous sensorimotor experiences (Fig. 1). A robot repeatedly generated a random behavior (imprecise movement) and then returned to the set-posture (precise movement) within five seconds (20 time steps for the robot), where the robot’s motion and a movement of an external red object were correlated (self-produced context) or uncorrelated (externally produced context). The received sensations were 3-dimensional proprioception for joint angles and 2-dimensional exteroception for the position of the object. Previous brain imaging studies show that sensory attenuation involves hierarchical interactions among multiple brain regions, including sensory areas, an association area (parietal area), and a higher cognitive area (prefrontal or supplementary motor area)^{19,20,21}. The RNN inside the robot has a corresponding hierarchical structure, referred to as sensory (exteroceptive and proprioceptive), association, and executive areas (Fig. 1a). The hierarchical RNN is a generative model that predicts sensations as well as inferring hidden causes of sensations via variational free-energy minimization^{16,17}. Each area has individual latent variables \(\varvec{z}_{\varvec{t}}\) representing each level of belief about the hidden cause of sensation in the form of Gaussian distributions. Each latent variable has prior \(\varvec{z}^{\varvec{p}}_{\varvec{t}}\) and posterior \(\varvec{z}^{\varvec{q}}_{\varvec{t}}\) distributions that correspond to estimated hidden causes before and after observing sensations, respectively. 
The priors and posteriors of latent variables are time-varying in sensory and association areas, while the executive area maintains a constant posterior state during sequential prediction generation within a time window (Supplementary Fig. S1–S2). We assumed that the executive area controls (and switches) sequential patterns of lower-level network behavior, like the prefrontal or supplementary motor area in a biological brain^{22,23}. At each time step, the RNN generates predictions \(\varvec{\hat{x}}_{\varvec{t}}\) about the next exteroceptive and proprioceptive sensations \(\varvec{x}_{\varvec{t}}\) from latent variables via recurrent units \(\varvec{d}_{\varvec{t}}\), representing dynamics of the environment. Variational free-energy \(F_{t}\) can be calculated as the sum of prediction error and Kullback-Leibler divergence between the posterior and prior at each *l*th network level.
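Under the Gaussian assumptions above, this decomposition can be sketched as follows (a schematic form; the precise weighting terms in the model's Eq. (1) may differ):

\[
F_{t} \;\approx\; \underbrace{\big\lVert \varvec{x}_{\varvec{t}} - \varvec{\hat{x}}_{\varvec{t}} \big\rVert^{2}}_{\text{prediction error}} \;+\; \sum_{l} \underbrace{D_{\mathrm{KL}}\!\left[\, q\big(\varvec{z}^{(l)}_{\varvec{t}}\big) \,\big\Vert\, p\big(\varvec{z}^{(l)}_{\varvec{t}}\big) \right]}_{\text{posterior-prior divergence at level } l}
\]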

This describes integration of sensory information, i.e., prediction error \(\varvec{e}_{\varvec{t}}\), and prior knowledge, i.e., prior belief \(\varvec{z}^{\varvec{p}}_{\varvec{t}}\), upon a posterior \(\varvec{z}^{\varvec{q}}_{\varvec{t}}\) update, although synaptic weights are also updated during the learning process. The decomposition of variational free-energy in Eq. (1) can also be thought of in terms of accuracy (prediction error) and complexity (the divergence between posterior and prior beliefs). A posterior is only adjusted to minimize prediction errors under the influence of the prior. A weak prior (with low precision) leads to a sensitive posterior update in response to a prediction error, while a strong prior (with high precision) leads to discounting of the prediction error. In other words, prediction errors have a greater effect on posterior beliefs in the presence of a weak, i.e., imprecise, prior. The strength of the prior in each area is parameterized by the standard deviation of a Gaussian prior that is dynamically modulated to minimize the path integral of free-energy. In other words, we equip the model with the capacity to infer differences in prior precision in a way that lends context sensitivity to the influence of prior beliefs. Physiological evidence suggests that this modulation may engage local GABAergic mechanisms or neuromodulators, e.g., acetylcholine or dopamine^{14,24,25}. In addition, we introduce a hyper-parameter called the meta-prior \(W^{(l)}\), which controls the meta-level balance between the prediction error and the divergence between the posterior and the prior at each network level. Previous studies have shown that a large meta-prior causes an intrinsically strong prior, whereas a small meta-prior causes an intrinsically weak prior^{16,17}. The inclusion of meta-priors may seem superfluous, given that the precision, i.e., inverse variance, of various beliefs is inferred adaptively.
However, meta-priors offer an opportunity to model a failure to infer context-sensitive changes in precision. This particular failure of sensory attenuation may be an important explanation for several neuropsychiatric conditions (see below). We assumed that the meta-prior represents an innately equipped characteristic of the biological brain. In the baseline model, we set the meta-prior to the same value at all network levels (\(W^{(1)}=W^{(2)}=W^{(3)}=0.005\)) to avoid bias in the prediction-error flow.
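To make the interplay of prediction error, the posterior-prior divergence, and the meta-prior \(W^{(l)}\) concrete, here is a minimal NumPy sketch of the per-step objective (an illustration under the Gaussian assumptions above, not the authors' implementation; function names are ours):

```python
import numpy as np

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL divergence between diagonal Gaussians N(mu_q, sigma_q^2) and N(mu_p, sigma_p^2)."""
    return np.sum(
        np.log(sigma_p / sigma_q)
        + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p ** 2)
        - 0.5
    )

def free_energy(x, x_hat, layers, meta_priors):
    """Per-time-step free-energy: squared prediction error plus the
    meta-prior-weighted KL term at each network level.
    `layers` is a list of (mu_q, sigma_q, mu_p, sigma_p) tuples, one per level."""
    prediction_error = np.sum((x - x_hat) ** 2)
    complexity = sum(
        w * gaussian_kl(mu_q, sg_q, mu_p, sg_p)
        for w, (mu_q, sg_q, mu_p, sg_p) in zip(meta_priors, layers)
    )
    return prediction_error + complexity

# A strong (precise) prior penalizes a given posterior shift far more than a weak
# one, so prediction errors move the posterior less when the prior is strong:
shifted_mu, post_sigma = np.array([1.0]), np.array([0.1])
kl_strong_prior = gaussian_kl(shifted_mu, post_sigma, np.array([0.0]), np.array([0.1]))
kl_weak_prior = gaussian_kl(shifted_mu, post_sigma, np.array([0.0]), np.array([1.0]))
assert kl_strong_prior > kl_weak_prior
```

Scaling the KL term by \(W^{(l)}\) reproduces the meta-level balance described above: a large \(W^{(l)}\) makes deviation from the prior costly at that level, yielding an intrinsically strong prior.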

### Emergence of sensory attenuation

This experiment comprises learning and test phases. In the learning phase, the robot learned to reproduce the two types of sensorimotor patterns (Fig. 1b). Target sensorimotor sequences were prepared in advance from human spontaneous behaviors through manual manipulations of a physical robot. We prepared 24 training sets for each set of self-produced and externally produced contexts. We ensured that each of the 24 movement patterns of the robot was the same for both contexts (see Fig. 1b and “Learning” section in the “Methods”). The RNN updated posteriors and synaptic weights to minimize the path integral of free-energy (Supplementary Fig. S1).

In the test phase, the trained robot was required to generate actions by itself and to recognize the environmental context by updating only posteriors to minimize the free-energy, with fixed synaptic weights (Fig. 2a; Supplementary Fig. S2). Action generation was performed by proportional-integral-derivative (PID) control. The PID controller receives proprioceptive predictions as target joint angles and changes the joint angles (proprioception) to minimize the error between the current state and the target. In this regard, the PID controller implements active inference under the free-energy principle by realizing posterior predictions about movement trajectories^{15}. Figure 2b–e shows an example of a test trial (Supplementary Video S1). The test trial consists of a self-produced context during time steps 0–100 and an externally produced context during time steps 100–200.
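This control loop can be sketched as a generic textbook PID (the gains, time step, and first-order joint dynamics below are illustrative assumptions, not the authors' settings):

```python
class PID:
    """Minimal discrete PID controller. In the test phase described above, the
    target would be the proprioceptive prediction (desired joint angle)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, current):
        error = target - current
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a single joint toward a predicted target angle of 0.5 rad:
pid = PID(kp=2.0, ki=0.1, kd=0.05, dt=0.25)
angle = 0.0
for _ in range(100):
    # crude joint dynamics for illustration: angular velocity = control command
    angle += pid.step(0.5, angle) * pid.dt
```

In active-inference terms, the controller's error signal plays the role of a proprioceptive prediction error that is minimized by acting, rather than by updating beliefs.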

In the self-produced context, the robot moved the external object by itself, and the object position matched the robot’s hand position (Fig. 2e). The trained RNN successfully reproduced the stochastic property of learned sensorimotor sequences by modulating the association-level latent mean \(\varvec{\mu }^{\varvec{(2)}}\), such that the estimated sigma \(\varvec{\sigma }^{\varvec{(2)}}\) was high during random behavior generation, but low when returning to the set-posture (Fig. 2c). Here, \(\varvec{\mu }\) and \(\varvec{\sigma }\) correspond to the mean and standard deviation of a Gaussian posterior or prior. In other words, they play the role of sufficient statistics of approximate posterior beliefs and prior beliefs that are optimized during learning. Modulation of the latent mean in exteroceptive and proprioceptive areas \(\varvec{\mu }^{\varvec{(1)}}\) was small, with low estimated sigma \(\varvec{\sigma }^{\varvec{(1)}}\) (Fig. 2d), suggesting reduced prediction-error flow into sensory areas with high precision in its prior. These results indicate that the RNN minimized prediction errors produced by self-movements mainly by adjusting the posterior in the association area, since the posterior in sensory areas cannot be easily adjusted because of the strong prior.

Then, at time step 100, the environment shifted to the externally produced context, in which the object position was given from test data not used in the learning phase. The object position and hand position became uncorrelated (Fig. 2e). The environmental change caused a stepwise change in the executive-level latent state \(\varvec{\mu }^{\varvec{(3)}}\) (Fig. 2b). At the same time, in sensory areas, modulation in the latent mean \(\varvec{\mu }^{\varvec{(1)}}\) was amplified and sigma \(\varvec{\sigma }^{\varvec{(1)}}\) increased, showing periodic modulation (Fig. 2d), like the association-level latent state. This shows that the RNN minimized prediction errors from externally produced sensations by adjusting the posterior in sensory areas, as well as in the association area.

Collectively, the hierarchical RNN attenuated neural responses in sensory areas in the self-produced context and amplified them in the externally produced context by proactively controlling precision of the prior at each network level, with the posterior in the executive area working as the information hub for switching the lower-level precision structure and the prediction-error flow (Fig. 3). Furthermore, free-energy converged into different states for distinct sensorimotor contexts (Fig. 3). This suggests that a particular free-energy minimum developed for each sensorimotor context, and that the transition of the free-energy state in the network, induced by the abrupt context shift, underlay the qualitative change in free-energy minimization.

For quantitative analysis, we prepared 10 trained networks with different initial synaptic weights and conducted 8 test trials for each trained network. Figure 4a shows the change in the sensory-level posterior mean \(\varvec{\mu }^{\varvec{(1),q}}\) per time step for the two contexts, referred to here as the sensory-level posterior response. A paired t-test showed that the posterior response was significantly smaller in the self-produced context than in the externally produced context (\(t(9)=-3.38, p=0.0082\)). In addition, a paired t-test showed that the sensory-level prior sigma \(\varvec{\sigma }^{\varvec{(1),p}}\) in the self-produced context was significantly lower than that in the externally produced context (\(t(9)=-3.32, p=0.0089\)) (Fig. 4b).
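The statistic used above can be reproduced in a few lines of NumPy; note that the per-network values below are fabricated placeholders for illustration only, not the study's measurements:

```python
import numpy as np

def paired_t(a, b):
    """Paired t-test: returns the t statistic and degrees of freedom
    for two matched samples."""
    d = np.asarray(a, float) - np.asarray(b, float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Illustrative (fabricated) per-network posterior responses, n = 10 networks:
self_ctx = [0.11, 0.09, 0.10, 0.12, 0.08, 0.10, 0.09, 0.11, 0.10, 0.09]
ext_ctx  = [0.18, 0.14, 0.16, 0.20, 0.13, 0.17, 0.15, 0.19, 0.16, 0.15]
t, dof = paired_t(self_ctx, ext_ctx)  # t < 0: smaller response in the self context
```

For n = 10 networks, dof = 9, matching the t(9) reported above; a two-sided p-value would come from the Student t distribution (e.g., `scipy.stats.ttest_rel`).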

Furthermore, we analyzed how attenuation of neural responses in sensory areas developed during the learning process. The RNN first increased sensory-level posterior responses to reconstruct target sensorimotor sequences (Fig. 4c). Then, sensory-level posterior responses in the self-produced context were gradually attenuated in both exteroceptive and proprioceptive areas. The sensory-level prior sigma diminished more in the self-produced context than in the externally produced context through the learning process (Fig. 4d). We confirmed that posterior responses in the association area were similar in self-produced and externally produced contexts (Supplementary Fig. S3a,b), indicating reduced total neural response in the self-produced context. This sensory attenuation was accompanied by recognition of both contexts in the executive area (Supplementary Fig. S3c,d). These results demonstrate emergence of sensory attenuation through a learning process via free-energy minimization. Additional analyses indicated that development of sensory attenuation was diminished by removing neurons in the association or executive area (Supplementary Fig. S4), suggesting the importance of a higher-level representation of sensorimotor correlation.

In addition, we investigated the effects of modulating the meta-prior at each network level (Supplementary Fig. S5–S7). In particular, a small meta-prior in sensory areas or a large meta-prior in the association area led to a deficit in attenuation of the sensory-level posterior response, as well as of the sensory-level prior sigma, in the self-produced context. This suggests that innately decreased prediction-error flow into the association area relative to sensory areas disrupts development of sensory attenuation.

## Discussion

The current model study, using a hierarchically organized variational RNN, illustrated the possibility that a sensory attenuation mechanism can develop through learning. In the learning task, the robot alternately repeated imprecise movement (random behavior) and precise movement (returning to the set-posture) in both self-produced and externally produced contexts. The RNN developed a hierarchical generative model of how proprioceptive and exteroceptive inputs are generated from latent causes, and also represented their stochastic property by dynamically modulating precisions of hierarchical latent variables, individually allocated to each area of the RNN. We found that, to deal with the two distinct sensorimotor contexts, namely self-produced and externally produced contexts, the network developed two distinct free-energy states (minima) through learning, wherein each free-energy state corresponds to one sensorimotor context (Figs. 3, 4c,d). In the developed network, the top-down and bottom-up pathways functioned as follows. In the top-down pathway, the posterior in the executive area predicted the prior precision of the association area and sensory areas. In the bottom-up pathway, prediction error determined the posteriors of the sensory, association, and executive areas under the constraint of the prior precision expected at each area by the top-down pathway. Thus, the top-down and bottom-up pathways created a closed circuit in which the posterior in the executive area served as the information hub. In this closed circuit, a particular free-energy state corresponded to a characteristic top-down precision control and bottom-up prediction-error flow inside the hierarchical RNN.

In the self-produced context, sensory attenuation was achieved by minimizing the free-energy to the corresponding free-energy state. In this context, the posterior in the executive area developed to a particular value and induced high prior precision in sensory areas and low prior precision in the association area, which resulted in less adaptation of the posterior in sensory areas and more adaptation in the association area using the bottom-up error signal. Less adaptation of the posterior in sensory areas corresponds to sensory attenuation. The detailed mechanism is as follows. The hierarchical RNN recognized that proprioceptive and exteroceptive inputs were generated from the same latent cause, i.e., self, and the latent cause, including its stochastic property, was represented in the higher-level areas (association and executive areas). In this case, sensory areas did not need to represent the latent cause at all. Indeed, Fig. 2 shows that the RNN can reconstruct sensory inputs based on dynamic modulation of latent variables in the association area alone. High precision (nearly 0 sigma) of latent variables in the sensory areas is thus thought to have developed to minimize variational free-energy by reducing prediction error due to random sampling and reducing the KL divergence between the posterior and the prior.

In the externally produced context, the process was reversed: the posterior in sensory areas adapted strongly because of the low precision of its prior regulated by the executive area. This corresponds to sensory amplification. In this context, proprioceptive and exteroceptive inputs were generated from individual causes, i.e., self and other. The hierarchical RNN needed to represent not only the higher-level context, i.e., the externally produced context, using the association and executive areas, but also unresolved lower-level information due to individual causes using individual sensory areas. That is why precisions of latent variables in the sensory areas, as well as the association area, were modulated dynamically.

Furthermore, the error induced by the change of sensorimotor context flowed bottom-up to the executive area and determined the posterior in the executive area, with its prior set to a neutral value. This triggered the transition from one free-energy state to another, i.e., from sensory attenuation to sensory amplification and vice versa. In short, precision structures for sensory attenuation in self-produced contexts and sensory amplification in externally produced contexts were self-organized in one hierarchical RNN and were switched via executive control. This suggests that the hierarchical RNN developed a functionality for switching between quasi-deterministic and highly stochastic dynamics in each local area, in which sensory attenuation was characterized by quasi-deterministic processing (nearly 0 sigma) in the sensory areas and highly stochastic processing in the association area, while sensory amplification was characterized by highly stochastic processing in both the sensory areas and the association area. This sort of development and transition of distinct free-energy states provides insights into how perceptual phenomena emerge from dynamic brain-body-environment interaction in the face of uncertainty.

Sensory attenuation observed in our model is an emergent property based on variational free-energy minimization, rather than a consequence guaranteed by the equations used in the proposed model. Indeed, a small meta-prior in sensory areas led to a deficit in development of sensory attenuation, in which prediction error and free-energy for training data were smaller than those of the baseline model (see Supplementary Fig. S5 and S8). This shows that sensory attenuation did not necessarily develop even when proprioceptive and exteroceptive sensory inputs were correlated and the model precisely reconstructed the sensory inputs via variational free-energy minimization. Instead, development of sensory attenuation depends on a generative model of the world self-organized by the hierarchical RNN. In particular, the following findings suggest that development of sensory attenuation required a sort of abstract-level recognition that proprioceptive and exteroceptive inputs are generated from the same latent cause, i.e., sensorimotor coupling, which was achieved by a balanced interaction between the top-down prediction with precision and the bottom-up prediction error. First, reduced numbers of latent variables in the association or executive area disrupted development of sensory attenuation (Supplementary Fig. S4). Second, a large meta-prior in the association area, as well as a small meta-prior in the sensory areas, i.e., reduced prediction-error flow into the association area, led to a deficit in development of sensory attenuation (Supplementary Fig. S5–S6). These results show the importance of higher-level structure learning for developing sensory attenuation. Furthermore, the difference in the dimensions of proprioception (3-dimensional joint angles) and exteroception (2-dimensional object position) required an abstract-level representation for recognizing the sensorimotor coupling.
Such an abstraction of sensorimotor information may be a reasonable consequence of efficient coding, but it is not an obvious outcome of self-organization by the hierarchical RNN itself through learning. This kind of emergent phenomenon, self-organized through learning, has not been thoroughly investigated in previous studies based on the free-energy principle. We therefore think that this complex property underscores the significance of our neurorobotic approach.

There have been prior model proposals to account for sensory attenuation. One proposal^{13} postulates that sensory attenuation is caused by reducing the precision of the prediction error passed bottom-up to the sensory area during movement, following the free-energy principle. This model, however, does not explain the involvement of the higher executive area evidenced empirically^{19,20}. We confirmed that removal of the executive area diminished the development of sensory attenuation (Supplementary Fig. S4b), emphasizing the contribution of the frontal function. This is consistent with biological studies suggesting that signals from the frontal area, such as the supplementary motor area, predictively control the relative precision or intensity of sensations, and that their disruption diminishes sensory attenuation^{19,26}. Furthermore, the previous model intermixes two phenomena: sensory attenuation and sensory gating. Sensory attenuation concerns the relative intensity of self-produced versus externally produced sensations, i.e., the distinction between self and other. Sensory gating, on the other hand, refers to a suppression process in which exteroceptions feel weaker during movement than at rest^{27}. In our learning experiment, movements of the robot were the same in self-produced and externally produced contexts, avoiding confusion with sensory gating. In this sense, our model considered only sensory attenuation (self-other distinction).

According to another leading hypothesis, top-down signals are thought to convey an efference copy of the motor command^{19,26}. In contrast, our model suggests that signals from the executive (frontal) area may represent predictive signals for controlling prediction-error flow inside the hierarchical network, rather than a copy of the motor command. We showed that this functionality, in which top-down signals hierarchically control the bottom-up prediction-error flow (which in turn modulates top-down signals), can be self-organized through learning.

The perspective that the sensory attenuation mechanism is a consequence of learning rather than an innate function may be indirectly supported by a recent study suggesting that the target stimulus of sensory attenuation can be changed adaptively through rapid learning^{28}. In addition, our result (Fig. 4c) may explain the increase in sensory attenuation with age in adults^{20}. Furthermore, our model suggests that proprioception, as well as exteroception, can be attenuated when a self-movement produces exteroception. In fact, a neuroimaging study observed less cerebellar activity when a movement produced a tactile stimulus than when it did not^{6}.

There have been some recent advances in our understanding of movement-related suppression processes, including the finding of a difference between sensory attenuation and sensory gating. One of the discussions concerns a difference between “physiological sensory attenuation” and “perceptual sensory attenuation”. A recent study suggests that the two have different neurophysiological correlates^{29}. In that study, “physiological sensory attenuation” was measured as a decrease in the amplitude of primary and secondary components of the somatosensory evoked potential (SEP) between movement and rest; thus, it may be related to sensory gating. On the other hand, “perceptual sensory attenuation” was measured in a force-matching paradigm and was suggested to be related to a decrease in prediction-error-related neural activity, such as gamma-oscillatory activity. Importantly, “perceptual sensory attenuation” correlated negatively with scores of delusional ideation (a measure of schizotypy), while no significant correlations were found between attenuation of SEP components and delusional ideation. In our experiment, we focused on attenuation of the prediction-error-related response of the posterior; thus, our model may explain “perceptual sensory attenuation” and its neurocomputational mechanism. In addition, our model of sensory attenuation suggests that a decrease in prediction-error-related activity and “perceptual sensory attenuation” may represent self-other distinction, which has implications for mechanisms underlying delusional ideation.

In addition, a recent study demonstrated an enhancement, not suppression, of the intensity of predicted action outcomes by investigating effects of action on the intensity of tactile stimuli reported by participants in a force-judgment paradigm^{30}. First, the authors replicated the typical finding that self-produced tactile stimuli are rated as less intense than externally produced stimuli when active contact with a button generates a tactile stimulus. However, this effect reversed when there was no active finger contact with a button. In additional experiments, they controlled the predictability of tactile action outcomes and found that expected events were perceived more, not less, intensely than unexpected events. Note that this additional experiment compared more predictable with less predictable outcomes produced by one's own action. Since it did not compare stimuli produced by one's own action with those produced by another's action, this result does not conflict with our claim. These results^{30} may show that when sensory attenuation does not appear, an action can enhance expected touch. This enhancement effect is consistent with the basic Bayesian idea that a predictive brain focuses on precise (predictable) stimuli to ignore uninformative (uncertain) noise. In this sense, the previous study suggests that a theory of sensory attenuation requires additional considerations beyond the basic Bayesian or active inference framework. Here, our model provides a particular mechanism for sensory attenuation beyond basic Bayesian theory. Specifically, our results suggest that sensory attenuation requires an abstract-level recognition that proprioceptive and exteroceptive inputs are generated from the same latent cause, i.e., self, which is developed through context-sensitive optimization of hierarchical latent variables. This emphasizes the importance of developmental aspects of hierarchical predictive processing.

Alterations in development of sensory attenuation were induced by modulation of the meta-prior, a hyper-parameter determining the intrinsic strength of the prior relative to the prediction error at each network level. In the basic Bayesian model, the practitioner must manually set the prior, including its precision, to compute the posterior. In our model, however, the prior in each network area is epigenetically self-organized through learning under the influence of the higher meta-level parameter, i.e., the meta-prior. We assumed that the meta-prior is an innate characteristic determining developmental features of the prior and prediction-error flow in the biological brain, although its neural substrate remains unspecified. In our experiment, a large meta-prior in the association area, i.e., an intrinsically strong prior in the association area, led to reduced sensory attenuation, i.e., a weak prior in sensory areas, while a small meta-prior in the association area, i.e., an intrinsically weak prior there, did not impair development of sensory attenuation (Supplementary Fig. S6). These results may provide insights into relationships between strong-prior hypotheses and reduced sensory attenuation in schizophrenia^{9,13,31,32}, and between weak-prior hypotheses and normal sensory attenuation in autism spectrum disorder^{33,34,35,36,37}. It will be important to seek corresponding neurological evidence of the proposed computational mechanism for development of sensory attenuation and to explore its relevance to psychiatric disorders in future studies.

In summary, we have shown that sensory attenuation is an emergent property of free-energy minimization in active inference. Sensory attenuation is a ubiquitous phenomenon in psychology and psychophysics. Furthermore, it is receiving increasing attention in computational psychiatry, in which a failure of sensory attenuation may explain several neuropsychiatric conditions, ranging from Parkinson’s disease to schizophrenia and autism^{13,34}. Technically, we have simulated behavior in terms of active inference, which can be thought of as a generalization of control and planning as inference. Crucially, we trained our robots to perform active inference by optimizing connection weights (\(\varvec{w}\)) and adaptive variables (\(\varvec{a}\)) in hierarchically organized RNNs, which played the role of a hierarchical generative model. The implicit learning of connection strengths can be thought of as “amortization” or “learning to infer”. In other words, by minimizing (the path or time integral of) variational free-energy, robots were able to learn the mapping from sensory inputs to posterior beliefs about the causes of those inputs. These posterior beliefs recognize the context in which the robots are operating and generate proprioceptive predictions that are realized by a PID controller. Because certain beliefs concern the precision of inferred sensorimotor trajectories, this enables context-sensitive optimization of the precisions that underwrite sensory attenuation. Put simply, when robots recognized that proprioceptive and exteroceptive inputs were best explained by externally generated movement, they reduced the precision afforded to representations of self-generated sensations. Conversely, when proprioceptive and exteroceptive sensory inputs were best explained by self-generated sensations, the precision of these explanations increased, thereby reducing the influence of sensory prediction errors. This is the basis of sensory attenuation.
Note that this context-sensitive optimization of precision, i.e., encoding of uncertainty, is an emergent property of minimizing the path integral of free-energy. In other words, sensory attenuation emerges from a generative model that provides the best explanation for proprioceptive and exteroceptive inputs, when accumulating free-energy or model evidence (a.k.a., marginal likelihood) over time.

## Methods

### Neural network model

To simulate development of sensory attenuation, we utilized a predictive-coding-inspired variational recurrent neural network (PV-RNN), which represents a generative process of sensation from a hidden cause in the environment, based on the free-energy principle (FEP)^{16,17}. It consists of sensory (exteroceptive and proprioceptive), association, and executive areas in which there are deterministic neurons and latent (stochastic) neurons. Latent neurons represent belief about the cause of sensation as Gaussian distributions (for simplicity). Each latent state has prior and posterior probability distributions that correspond to estimated hidden causes before and after observing sensations, respectively. Based on the latent states, the PV-RNN generates predictions about next sensations in a top-down way. Here, deterministic neurons transform latent states into sensory predictions via synaptic connections that represent relationships between sensations and their causes. The PV-RNN uses a multiple timescale RNN (MTRNN)^{38} as the transformation function. An RNN represents temporal processing of the brain in that neural activity is determined by the past history of neural states. Owing to their capacity to learn to reproduce complex dynamic behaviors, RNNs have been used in computational modeling and developmental neurorobotic studies to understand cortical processing and cognitive functions, including psychiatric symptoms^{38,39,40,41,42,43}. In addition, MTRNNs have a multiple timescale property in neural activation, which enables them to represent a temporal hierarchy in the environment, as observed in the biological brain^{44}. By using a PV-RNN, we can learn complex sensorimotor behaviors that have both temporal regularity and stochasticity^{16,17}. 
To perform spontaneous behaviors like daily human behaviors, the robot controlled by the neural network was required to process both short-term random movements and a long-term periodic pattern, a setting in which the multiple-timescale property is thought to facilitate sensorimotor learning. In the following sections, we describe the mathematical details of top-down prediction generation and bottom-up parameter updates in the PV-RNN (Supplementary Figs. S1–S2).

#### Prediction generation

Prediction generation is performed in a top-down way through the network hierarchy. The internal state \(h_{t,i}^{(s)}\) and output \(d_{t,i}^{(s)}\) of the *i*th deterministic neuron of the *s*th target sequence at time step *t* \(\left( t \ge 1\right)\) are calculated as
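The update equation itself did not survive here; in the standard MTRNN leaky-integrator form used by PV-RNNs^{16,38}, it can be reconstructed as (a sketch; the exact inter-area connectivity may differ from this form):

$$
h_{t,i}^{(s)} = \left( 1 - \frac{1}{\tau _{i}} \right) h_{t-1,i}^{(s)} + \frac{1}{\tau _{i}} \left( \sum _{j} w_{ij} d_{t-1,j}^{(s)} + \sum _{j} w_{ij} z_{t,j}^{(s)} + b_{i} \right) , \qquad d_{t,i}^{(s)} = \tanh \left( h_{t,i}^{(s)} \right) ,
$$

where the first sum runs over connected deterministic neurons and the second over connected latent (posterior) neurons.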

Here, \(I_{\mathrm{Ed}}\), \(I_{\mathrm{Pd}}\), and \(I_{\mathrm{Ad}}\) are index sets of deterministic neurons in the exteroceptive area, proprioceptive area, and association area, respectively. \(I_{\mathrm{Ez}}\), \(I_\mathrm{Pz}\), \(I_{\mathrm{Az}}\), and \(I_{\mathrm{Cz}}\) are index sets of latent neurons in the exteroceptive, proprioceptive, association, and executive areas, respectively. \(w_{ij}\) is the weight of the synaptic connection from the *j*th neuron to the *i*th neuron; \(z_{t,j}^{(s)}\) is the output of *j*th latent (posterior) neuron at time step *t*; \(\tau\) is the time constant of the neuron; and \(b_{i}\) is the bias of the *i*th neuron. A deterministic neuron with a small time constant \(\tau\) has a tendency to change its activity rapidly, while that with a large time constant has a tendency to change its activity slowly. We set the initial internal states of the deterministic neurons \(h_{0,i}^{(s)} \left( i \in I_\mathrm{Ed},I_{\mathrm{Pd}},I_{\mathrm{Ad}}\right)\) to 0 (\(d_{0,i}^{(s)}\) is also 0).

The latent variable \(\varvec{z}\) in each area is assumed to follow a multivariate Gaussian distribution with a diagonal covariance matrix, meaning \(z_{t,i}^{(s)}\) and \(z_{t,j}^{(s)}\) are independent (\(i,j \in I_{\mathrm{Ez}},I_{\mathrm{Pz}},I_{\mathrm{Az}} \wedge i \ne j\)). Here, the mean \(\mu _{t,i}^{(s),p}\) and sigma (standard deviation) \(\sigma _{t,i}^{(s),p}\) of the prior distribution \(p(z_{t,i}^{(s)})\) in the exteroceptive, proprioceptive, and association areas are calculated from the previous deterministic state (prior experience) of the same area.
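The omitted mapping presumably follows the usual PV-RNN parameterization^{16}, in which the prior mean is squashed by \(\tanh\) and the prior sigma is kept positive by an exponential (a reconstruction; bias terms are omitted for brevity):

$$
\mu _{t,i}^{(s),p} = \tanh \left( \sum _{j} w_{ij} d_{t-1,j}^{(s)} \right) , \qquad \sigma _{t,i}^{(s),p} = \exp \left( \sum _{j} w_{ij} d_{t-1,j}^{(s)} \right) .
$$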

Here, \(\left( i \in I_{\mathrm{Ez}} \wedge j \in I_{\mathrm{Ed}}\right) \vee \left( i \in I_{\mathrm{Pz}} \wedge j \in I_{\mathrm{Pd}}\right) \vee \left( i \in I_{\mathrm{Az}} \wedge j \in I_{\mathrm{Ad}}\right)\). Note that by optimizing the weights \(\varvec{w}\) with respect to the path integral of free-energy, we effectively optimize prior beliefs about sensorimotor contingencies and contexts. This can be regarded as a form of structure learning through experience. The executive area has a prior distribution \(\mathcal {N}(0,1)\) only at the initial step (\(t=1\)), because it maintains a constant posterior state during sequential prediction generation, with the objective of assigning a specific executive-level posterior to each target sequence.

The posterior distribution in each area is calculated as,

Here, \(T^{(s)}\) represents the length of the *s*th target sequence. \(\varvec{a}\) is the adaptive internal state of neurons representing posterior distributions and it is updated at each time step and for each target sequence during the learning process (or each time step through online inference). Note that Eqs. (8) and (9) mean that the adaptive variables implicitly encode both posterior expectations and precision, such that optimizing the adaptive variables with respect to variational free-energy implicitly optimizes posterior expectations and the posterior confidence or precision afforded those expectations. Adaptive variables \(\varvec{a}_{\varvec{t}}\) are determined by prediction error signals \(\varvec{e}_{\varvec{t:T}}\) propagated by a back-propagation-through-time (BPTT) algorithm, meaning that the posterior of the latent state can be considered a prediction-error-related neural state. Adaptive variables \(\varvec{a}\) are initialized by the corresponding initial internal states of the neurons representing prior distributions before the learning or inference process. Based on the posterior calculation, the latent state \(z_{t,i}^{(s)}\) is obtained by sampling \(\epsilon\) from \(\mathcal {N}(0,1)\).
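As a concrete illustration of this sampling step, the reparameterized draw of a posterior latent state can be sketched in plain Python (the \(\tanh\)/\(\exp\) parameterization of the posterior mean and sigma follows the usual PV-RNN convention^{16}; variable and function names are ours):

```python
import math
import random

def sample_posterior(a_mu, a_sigma, eps=None):
    """Sample a latent state z from the posterior encoded by adaptive variables.

    The adaptive internal states (a_mu, a_sigma) implicitly encode the
    posterior mean and standard deviation; sampling uses the
    reparameterization trick z = mu + sigma * eps with eps ~ N(0, 1).
    """
    mu = math.tanh(a_mu)       # posterior mean, squashed to (-1, 1)
    sigma = math.exp(a_sigma)  # posterior standard deviation, kept positive
    if eps is None:
        eps = random.gauss(0.0, 1.0)
    return mu + sigma * eps

# With a_mu = a_sigma = 0 and eps = 0, z equals the (zero) posterior mean.
z = sample_posterior(0.0, 0.0, eps=0.0)
```

During learning and online inference, `a_mu` and `a_sigma` are the quantities updated by the back-propagated prediction error.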

Finally, predictions about exteroceptive and proprioceptive sensations are individually generated from exteroceptive and proprioceptive areas, respectively.

Here, \(I_{\mathrm{Eo}}\) and \(I_{\mathrm{Po}}\) are index sets of output neurons for exteroceptive and proprioceptive predictions, respectively.

#### Parameter updates via free-energy minimization

The concept of FEP^{14} derives from the fundamental fact that self-organizing biological agents must maintain a limited repertoire of sensory states to remain alive, e.g., a human stays on the ground, not in the sea. Based on information theory, this notion can be formulated as suppression of the surprise (or the negative log-evidence) for sensations \(\varvec{x}\) over time. In the PV-RNN, the surprise over all time steps and target sequences can be written as:

Here, \(I_{\mathrm{S}}\) denotes the index set of target sequences. However, the surprise cannot be directly evaluated by the agent, because evaluating it requires knowing all hidden states \(\varvec{z}\) of the environment that cause sensations, as described on the right side of Eq. (12). Here, FEP introduces a tractable quantity, the free-energy, that bounds the surprise, and minimization of the surprise is replaced by minimization of the free-energy. The bound on the surprise in the PV-RNN can be derived using Jensen’s inequality for a concave function *f*: \(f(E[x])\ge E[f(x)]\). For clarity, summation over target sequences is temporarily omitted in the following equations. Then, Eq. (13) can be rewritten as follows by introducing a dummy distribution over \(\varvec{z}_{\varvec{1}}\), \(q(\varvec{z}_{\varvec{1}})\).

In Eq. (15), we use Jensen’s inequality for a logarithmic function. The same procedure is done for \(t=2:T\).

The first term in expression (17) is the expected negative log-likelihood under *q* given that \(\varvec{d_{t}}\) depends on \(\varvec{z_{1:t}}\).

In addition, the second term can be rearranged into the form of a Kullback–Leibler divergence (KLD) between \(q(\varvec{z_{t}})\) and \(p(\varvec{z_{t}}|\varvec{d_{t-1}})\), taking care to ensure that \(\varvec{d_{0}}\) is independent of \(\varvec{z_{1:T}}\).

A dummy distribution \(q(\varvec{z_{t}})\) can be replaced by the posterior distribution determined by the back-propagated prediction error \(q(\varvec{z_{t}}|\varvec{e_{t:T}})\). Thus, using Eqs. (19) and (22), the bound of the surprise can be written as,
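A plausible reconstruction of this bound, consistent with the derivation above (a negative accuracy term plus a complexity term), is:

$$
-\log p(\varvec{x_{1:T}}) \le \sum _{t=1}^{T} \left( - E_{q} \left[ \log p(\varvec{x_{t}}|\varvec{d_{t}}) \right] + \mathrm{KLD} \left[ q(\varvec{z_{t}}|\varvec{e_{t:T}}) \parallel p(\varvec{z_{t}}|\varvec{d_{t-1}}) \right] \right) .
$$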

In the experiment, we approximate the expectations of the negative log-likelihood and the KLD under *q* using a single sample, to reduce computational cost.

Eventually, by introducing the meta-prior and considering the summation over different target sequences, the free-energy *F* (the bound of the surprise) in the PV-RNN is formulated as:

Here, \(W^{(l)}\) denotes the meta-prior at *l*th network level. The first term in expression (26) (negative accuracy term) is the negative log-likelihood. For simplicity, we assume that each sensory state follows a Gaussian distribution with unit variance. Then, the first term becomes just the prediction error between the real \(\varvec{x_{t}^{(s)}}\) and predicted \(\varvec{\hat{x}_{t}^{(s)}}\) sensations (plus a constant term, omitted here).
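Under this unit-variance Gaussian assumption, the omitted expression presumably reads:

$$
- \log p(\varvec{x_{t}^{(s)}}|\varvec{d_{t}^{(s)}}) = \frac{1}{2} \left\| \varvec{x_{t}^{(s)}} - \varvec{\hat{x}_{t}^{(s)}} \right\| ^{2} + \mathrm{const.}
$$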

This assumption sets the precision of the prediction error to a constant value and makes it easy to consider the relative precision of prior beliefs compared to prediction errors. In the experiment, we divided the accuracy term by the dimension of each exteroceptive and proprioceptive sensation.

On the other hand, the second term (complexity term) is the KLD between the posterior and prior distributions of the latent variables. Note that variables of the posterior are updated through both the accuracy and complexity terms, but those of the prior are updated only through the complexity term. Therefore, the complexity term represents only the influence of prior beliefs, which are controlled by the meta-prior *W*. Under the assumption that the prior and posterior distributions follow a multivariate Gaussian distribution with a diagonal covariance matrix, as described above, the KLD is computed analytically as^{16,17},

Here, \(i \in I_{\mathrm{Ez}} \vee I_{\mathrm{Pz}}\) (if \(l=1\)), \(i \in I_\mathrm{Az}\) (if \(l=2\)), and \(i \in I_{\mathrm{Cz}}\) (if \(l=3\)). In the experiment, we divided the complexity term by the dimension of latent variables for each area. Note that if \(l=3\) (executive area), the complexity term exists only at \(t=1\), although we write the equation in a general form for simplicity.
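The analytic KLD between two univariate Gaussians, as used in the complexity term, can be checked with a short Python function (a sketch; the per-area averaging and meta-prior weighting described above are applied outside this function):

```python
import math

def gaussian_kld(mu_q, sigma_q, mu_p, sigma_p):
    """KL divergence KL[ N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ],
    i.e., of the posterior q from the prior p, in closed form."""
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p ** 2)
            - 0.5)

# The divergence vanishes when posterior and prior coincide.
print(gaussian_kld(0.0, 1.0, 0.0, 1.0))  # → 0.0
```

Because each latent dimension is independent (diagonal covariance), the total complexity term is just the sum of such one-dimensional divergences.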

In the learning phase (Supplementary Fig. S1), synaptic weights \(\varvec{w}\) and adaptive variables \(\varvec{a}\) are updated to minimize the free-energy over all time steps and target sequences as,
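The update rule itself is omitted above; a plausible reconstruction, in plain gradient-descent form with learning rate \(\eta\) (Adam’s moment corrections left implicit), is:

$$
\varvec{w} \leftarrow \varvec{w} - \eta \frac{\partial F}{\partial \varvec{w}} , \qquad \varvec{a} \leftarrow \varvec{a} - \eta \frac{\partial F}{\partial \varvec{a}} .
$$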

In the test phase after learning (Fig. S2), only adaptive variables \(\varvec{a}\) are updated, while synaptic weights are fixed. In this phase, the free-energy within a short time window *H* is summed as

Using the summed free-energy, adaptive variables \(\varvec{a_{t-H+1:t}}\) within the time window are updated in all areas, and the time window slides as the network time step *t* is incremented.

In both learning and test phases, we used the Adam optimizer^{45} for parameter updates, where the partial derivative of the free-energy with respect to each parameter is calculated by the BPTT algorithm.

### Experimental environment

We set a 3-axis robotic arm in a simulated square space \([-1,1]\times [-1,1]\). The lengths of the robot’s links are 0.1, 0.3, and 0.5. Each joint angle was limited to range from 0 to \(\pi\) [rad] and normalized to range from \(-0.8\) to 0.8 to match the range of the neural network output. In addition, the PV-RNN receives the 2-dimensional position of a red object as an exteroceptive sensation. During task execution, the robot receives 5-dimensional sensations every 250 ms.
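For reference, the hand position of the 3-link arm can be computed by planar forward kinematics as follows (a sketch assuming the base sits at the origin and joint angles accumulate along the chain; the paper's exact conventions, including the base position within the \([-1,1]\times [-1,1]\) workspace, are not specified here):

```python
import math

LINK_LENGTHS = [0.1, 0.3, 0.5]  # link lengths from the paper

def forward_kinematics(joint_angles, base=(0.0, 0.0)):
    """Return the 2-D hand position of a planar 3-link arm.

    Joint angles are accumulated, so link k points in the direction
    theta_1 + ... + theta_k (a common planar-arm convention, assumed here).
    """
    x, y = base
    angle = 0.0
    for length, theta in zip(LINK_LENGTHS, joint_angles):
        angle += theta
        x += length * math.cos(angle)
        y += length * math.sin(angle)
    return x, y

# Fully extended arm: hand lies 0.1 + 0.3 + 0.5 = 0.9 from the base.
hand = forward_kinematics([0.0, 0.0, 0.0])
```

In the self-produced context, this mapping is what links the 3-dimensional proprioception to the 2-dimensional exteroception (object position).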

### Learning

The PV-RNN learned to reproduce target sensorimotor sequences prepared in advance. First, we recorded 24 sequences of joint angles while the experimenter manually manipulated the left arm of a physical robot (Rakuda, developed by Robotis). For each sequence, the experimenter performed 10 repetitions generating a random movement and returning to the set posture within five seconds (20 time steps). Therefore, the length of each sequence is 200 time steps. We used joint angles from left shoulder pitch, left shoulder roll, and the left elbow of Rakuda as 3-dimensional proprioception data of the simulated robot arm. Next, we prepared exteroception data for a self-produced context by setting the object position as the hand position of the simulated arm robot that was calculated by forward kinematics using the recorded joint angle data. In this fashion, we obtained 24 target sequences for self-produced contexts in which exteroceptive and proprioceptive sensations are correlated. Finally, we prepared target sequences for externally produced contexts by shuffling the combination of exteroceptive and proprioceptive sequences. By doing this, we obtained 24 target sequences for externally produced contexts in which exteroceptive and proprioceptive sensations are not correlated. This shuffling procedure ensures that total numbers of changes in sensations are the same for self-produced and externally produced contexts in the learning phase. The PV-RNN learned to reproduce the prepared 48 training datasets via free-energy minimization.
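The pairing procedure above can be illustrated with a short sketch (sequence contents are placeholders for the recorded data, and a cyclic shift stands in for the paper's unspecified shuffle):

```python
# Illustrative pairing of proprioceptive and exteroceptive sequences.
# Contents are placeholders; the real data are recorded joint angles and
# hand positions computed from them by forward kinematics.
proprio = [f"joint_seq_{k}" for k in range(24)]
extero = [f"hand_seq_{k}" for k in range(24)]  # derived from proprio via FK

# Self-produced context: exteroception is the hand position of the same
# movement, so the two modalities are correlated.
self_produced = list(zip(proprio, extero))

# Externally produced context: break the correlation by re-pairing each
# proprioceptive sequence with a different exteroceptive sequence
# (a cyclic shift here; the paper's shuffle is unspecified).
externally_produced = list(zip(proprio, extero[1:] + extero[:1]))

# No sequence remains paired with its own hand trajectory.
assert all(p.split("_")[-1] != e.split("_")[-1] for p, e in externally_produced)
print(len(self_produced) + len(externally_produced))  # → 48 target sequences
```

Because only the pairing changes, the total number of sensory changes is matched across the two contexts, as noted above.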

### Online inference

We prepared an additional 8 exteroceptive sequences as test data that were used in externally produced contexts in the test phase. Before a test trial, initial states of adaptive variables \(\varvec{a_{1}}\) in all areas were set to the median values obtained for 24 training datasets of the self-produced context developed through learning. Based on the initial posterior states corresponding to self-produced contexts, the robot first moved the object by itself during time steps 0–100. The robot controlled its joint angles via active inference using the PID controller, for which proprioceptive predictions were used as target joint angles. Then, during time steps 100–200, the environmental context was shifted to the externally produced context, although the robot kept generating spontaneous behaviors by itself via active inference. In the externally produced context, the object position was set from test data. The goal of the robot was to flexibly recognize the environmental change by updating adaptive variables via free-energy minimization. The online inference process was performed based on an interaction between top-down prediction generation and bottom-up posterior updates. In the top-down prediction generation process, the PV-RNN generated sensory predictions \(\varvec{\hat{x}_{t-H+1:t}}\) corresponding to time steps from \(t-H+1\) to *t*, based on the posterior of latent states \(\varvec{z_{t-H+1:t}}\). In the bottom-up modulation process, the free-energy at each time step in time window *H* was calculated using prediction errors for exteroception and proprioception, for which target sensations were the real object position and joint angles from \(t-H+1\) to *t*. Adaptive variables \(\varvec{a_{t-H+1:t}}\) in the time window were updated to minimize the free-energy summed over time steps, and sensory predictions within the time window were re-generated using the updated posterior states. 
By repeating top-down prediction generation and bottom-up posterior updates for a certain duration, the PV-RNN generated the prior of latent states for time step \(t+1\). The generated prior was used to initialize the posterior for time step \(t+1\), and predictions about sensations for time step \(t+1\) were generated from the posterior. Using proprioceptive predictions for time step \(t+1\) as the target joint angles, the robot moved the joint angles using the PID controller. At the same time, the robot received exteroceptive sensations at time step \(t+1\). After that, the robot’s time step was incremented and the online inference process was performed for the newly received sensations. This inference process, in which recognition and prediction in the past are reconstructed from current sensations, corresponds to a “postdiction” process.

### Parameter settings

The dimension of latent variables \(\varvec{z}\) in the exteroceptive and proprioceptive areas was 1, and that in the association area was 3. Therefore, the total dimension of latent neurons in sensory and association areas was the same as the sensory dimension. The dimension of latent variables in the executive area was 1. A preliminary experiment showed that a smaller number of latent neurons in the association area led to a lower level of sensory attenuation, supporting the idea that sensory attenuation is a consequence of representing sensorimotor correlation in the association area (Supplementary Fig. S4a). In addition, we confirmed that removing the executive-level latent state greatly decreased the level of sensory attenuation, suggesting an important role of the executive-level latent state (Supplementary Fig. S4b). The numbers of deterministic neurons in the exteroceptive, proprioceptive, and association areas were all 15. In a preliminary experiment, we evaluated development of sensory attenuation for settings of 10, 15, or 20 deterministic neurons and confirmed that the setting of 15 neurons showed the largest sensory attenuation (Supplementary Fig. S9a). We set the time constant \(\tau\) of half of the deterministic neurons (8 neurons) to 2 and that of the other neurons (7 neurons) to 4, as the simplest multiple-timescale setting. We set the same multiple-timescale property for both the sensory and association areas, because we expected the PV-RNN to control which levels of the network hierarchy should be used to represent sensations, depending on the context. Indeed, the experimental results show that the PV-RNN mainly used the association area in the self-produced context and both the sensory and association areas in the externally produced context, even though the timescale property of the sensations was the same in the two contexts.
In a preliminary experiment, we confirmed that the multiple-timescale setting led to greater sensory attenuation than a single-timescale setting with \(\tau =2\) for all deterministic neurons (Supplementary Fig. S9b). Synaptic weights were initialized with random values using the default method implemented by Pytorch. Biases of deterministic neurons were initialized to random values drawn from a Gaussian distribution *N*(0, 10) and then held fixed; this bias variance is close to the firing-threshold variability found in biological neurons^{46}, as well as to the optimal value in a spiking neural network model^{47} and a recurrent neural network model^{41}. We trained 10 networks with different initial synaptic weights for each hyper-parameter setting for quantitative evaluations. In the learning phase, parameters including synaptic weights \(\varvec{w}\) and adaptive variables \(\varvec{a}\) were updated 200,000 times with the Adam optimizer. We used the same parameter setting of Adam as in the original paper: \(\alpha = 0.001\) (learning rate), \(\beta _{1}=0.9\), and \(\beta _{2}=0.999\). In the test phase, adaptive variables \(\varvec{a}\) were updated 50 times at each time step for a time window of length \(H = 10\) with \(\alpha = 0.09\). We chose the optimal parameter setting in the test phase from the combinations of \(\alpha =\{0.001, 0.005, 0.01, 0.03, 0.05, 0.07, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5\} \times H=\{10, 15\}\) by evaluating levels of prediction error in the baseline meta-prior setting (\(W^{(1)}=W^{(2)}=W^{(3)}=0.005\)).

### Statistical analysis

We used paired t-tests for statistical analyses of network behaviors, such as the posterior response and the level of the prior sigma. All statistical tests were two-tailed, and the significance level was set at \(p<0.05\). Because the current study conducted an original computational simulation experiment without precedent, it was difficult to estimate the effect size, and no statistical methods were used to pre-determine sample size. Considering the high reproducibility of computational simulation, we chose the minimum sample size that seemed statistically testable (10 samples). Indeed, even with 10 samples, paired t-tests revealed clear differences between the self-produced and externally produced contexts (e.g., \(t(9)=-3.38; p=0.0082\) for the posterior response and \(t(9)=-3.32; p=0.0089\) for the prior sigma). Therefore, we concluded that a larger sample size would not have substantially changed our main result. Data analyses were conducted using R software (version 3.3.2).
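The paired t-statistic itself is straightforward to reproduce; a plain-Python sketch using only the standard library (the numbers below are made up for illustration, not the study's data):

```python
import math
import statistics

def paired_t(x, y):
    """Paired t-test statistic (df = n - 1): mean difference over its
    standard error, computed from the per-pair differences."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Illustrative paired samples (n = 5, df = 4); values are made up.
t = paired_t([2.0, 3.5, 4.0, 5.5, 7.0], [1.0, 1.5, 1.0, 2.5, 4.0])
print(round(t, 3))  # → 6.0
```

The corresponding p-value is then read from the t-distribution with \(n-1\) degrees of freedom, as R's `t.test(..., paired = TRUE)` does.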

## Data availability

All data is available in the manuscript and the supplementary information.

## Code availability

Computer code for the neural network model was written using Pytorch (a library for deep learning) and is available online (https://github.com/h-idei/pvrnn_sa.git).

## References

1. Clark, A. Whatever next? Predictive brains, situated agents, and the future of cognitive science. *Behav. Brain Sci.* **36**, 181–204. https://doi.org/10.1017/S0140525X12000477 (2013).
2. Braun, N. *et al.* The senses of agency and ownership: A review. *Front. Psychol.* **9**, 535. https://doi.org/10.3389/fpsyg.2018.00535 (2018).
3. Legaspi, R. & Toyoizumi, T. A Bayesian psychophysics model of sense of agency. *Nat. Commun.* **10**, 4250. https://doi.org/10.1038/s41467-019-12170-0 (2019).
4. Dewey, J. A. & Knoblich, G. Do implicit and explicit measures of the sense of agency measure the same thing? *PLoS One* **9**, e110118. https://doi.org/10.1371/journal.pone.0110118 (2014).
5. Weiskrantz, L., Elliot, J. & Darlington, C. Preliminary observations on tickling oneself. *Nature* **230**, 598–599. https://doi.org/10.1038/230598a0 (1971).
6. Blakemore, S.-J., Wolpert, D. M. & Frith, C. D. Central cancellation of self-produced tickle sensation. *Nat. Neurosci.* **1**, 635–640. https://doi.org/10.1038/2870 (1998).
7. Bäß, P., Jacobsen, T. & Schröger, E. Suppression of the auditory N1 event-related potential component with unpredictable self-initiated tones: Evidence for internal forward models with dynamic stimulation. *Int. J. Psychophysiol.* **70**, 137–143. https://doi.org/10.1016/j.ijpsycho.2008.06.005 (2008).
8. Arikan, B. E. *et al.* Perceiving your hand moving: BOLD suppression in sensory cortices and the role of the cerebellum in the detection of feedback delays. *J. Vis.* **19**, 14. https://doi.org/10.1167/19.14.4 (2019).
9. Blakemore, S.-J., Wolpert, D. & Frith, C. Why can’t you tickle yourself? *NeuroReport* **11**, R11–R16 (2000).
10. Bays, P. M., Wolpert, D. M. & Flanagan, J. R. Perception of the consequences of self-action is temporally tuned and event driven. *Curr. Biol.* **15**, 1125–1128. https://doi.org/10.1016/j.cub.2005.05.023 (2005).
11. Kilteni, K., Andersson, B. J., Houborg, C. & Ehrsson, H. H. Motor imagery involves predicting the sensory consequences of the imagined movement. *Nat. Commun.* **9**, 1–9. https://doi.org/10.1038/s41467-018-03989-0 (2018).
12. Wolpert, D. M., Ghahramani, Z. & Jordan, M. I. An internal model for sensorimotor integration. *Science* **269**, 1880–1882. https://doi.org/10.1126/science.7569931 (1995).
13. Brown, H., Adams, R. A., Parees, I., Edwards, M. & Friston, K. Active inference, sensory attenuation and illusions. *Cogn. Process.* **14**, 411–427. https://doi.org/10.1007/s10339-013-0571-3 (2013).
14. Friston, K. The free-energy principle: A unified brain theory? *Nat. Rev. Neurosci.* **11**, 127–138. https://doi.org/10.1038/nrn2787 (2010).
15. Adams, R. A., Shipp, S. & Friston, K. J. Predictions not commands: Active inference in the motor system. *Brain Struct. Funct.* **218**, 611–643. https://doi.org/10.1007/s00429-012-0475-5 (2013).
16. Ahmadi, A. & Tani, J. A novel predictive-coding-inspired variational RNN model for online prediction and recognition. *Neural Comput.* **31**, 2025–2074. https://doi.org/10.1162/neco_a_01228 (2019).
17. Ohata, W. & Tani, J. Investigation of the sense of agency in social cognition, based on frameworks of predictive coding and active inference: A simulation study on multimodal imitative interaction. *Front. Neurorobot.* **14**, 61. https://doi.org/10.3389/fnbot.2020.00061 (2020).
18. Inoue, K., Nakajima, K. & Kuniyoshi, Y. Designing spontaneous behavioral switching via chaotic itinerancy. *Sci. Adv.* **6**, eabb3989. https://doi.org/10.1126/sciadv.abb3989 (2020).
19. Haggard, P. & Whitford, B. Supplementary motor area provides an efferent signal for sensory suppression. *Cogn. Brain Res.* **19**, 52–58. https://doi.org/10.1016/j.cogbrainres.2003.10.018 (2004).
20. Wolpe, N. *et al.* Ageing increases reliance on sensorimotor prediction through structural and functional differences in frontostriatal circuits. *Nat. Commun.* **7**, 13034. https://doi.org/10.1038/ncomms13034 (2016).
21. Boehme, R., Hauser, S., Gerling, G. J., Heilig, M. & Olausson, H. Distinction of self-produced touch and social touch at cortical and spinal cord levels. *Proc. Natl. Acad. Sci.* **116**, 2290. https://doi.org/10.1073/pnas.1816278116 (2019).
22. Eagleman, D. M. The where and when of intention. *Science* **303**, 1144–1146. https://doi.org/10.1126/science.1095331 (2004).
23. Leek, E. C. & Johnston, S. J. Functional specialization in the supplementary motor complex. *Nat. Rev. Neurosci.* **10**, 78. https://doi.org/10.1038/nrn2478-c1 (2009).
24. Yu, A. J. & Dayan, P. Uncertainty, neuromodulation, and attention. *Neuron* **46**, 681–692. https://doi.org/10.1016/j.neuron.2005.04.026 (2005).
25. Corlett, P., Taylor, J., Wang, X.-J., Fletcher, P. & Krystal, J. Toward a neurobiology of delusions. *Prog. Neurobiol.* **92**, 345–369. https://doi.org/10.1016/j.pneurobio.2010.06.007 (2010).
26. Pynn, L. K. & DeSouza, J. F. The function of efference copy signals: Implications for symptoms of schizophrenia. *Vision Res.* **76**, 124–133. https://doi.org/10.1016/j.visres.2012.10.019 (2013).
27. Kilteni, K. & Ehrsson, H. H. Predictive attenuation of touch and tactile gating are distinct perceptual phenomena. *iScience* **25**, 104077. https://doi.org/10.1016/j.isci.2022.104077 (2022).
28. Kilteni, K., Houborg, C. & Ehrsson, H. H. Rapid learning and unlearning of predicted sensory delays in self-generated touch. *eLife* **8**, e42888. https://doi.org/10.7554/eLife.42888 (2019).
29. Palmer, C. E., Davare, M. & Kilner, J. M. Physiological and perceptual sensory attenuation have different underlying neurophysiological correlates. *J. Neurosci.* **36**, 10803–10812. https://doi.org/10.1523/JNEUROSCI.1694-16.2016 (2016).
30. Thomas, E. R., Yon, D., de Lange, F. P. & Press, C. Action enhances predicted touch. *Psychol. Sci.* **33**, 48–59. https://doi.org/10.1177/09567976211017505 (2022).
31. Powers, A. R., Mathys, C. & Corlett, P. R. Pavlovian conditioning-induced hallucinations result from overweighting of perceptual priors. *Science* **357**, 596–600. https://doi.org/10.1126/science.aan3458 (2017).
32. Corlett, P. R. *et al.* Hallucinations and strong priors. *Trends Cogn. Sci.* **23**, 114–127. https://doi.org/10.1016/j.tics.2018.12.001 (2019).
33. Blakemore, S.-J. *et al.* Tactile sensitivity in Asperger syndrome. *Brain Cogn.* **61**, 5–13. https://doi.org/10.1016/j.bandc.2005.12.013 (2006).
34. Lawson, R. P., Rees, G. & Friston, K. J. An aberrant precision account of autism. *Front. Hum. Neurosci.* **8**, 302. https://doi.org/10.3389/fnhum.2014.00302 (2014).
35. Haker, H., Schneebeli, M. & Stephan, K. E. Can Bayesian theories of autism spectrum disorder help improve clinical practice? *Front. Psychiatry* **7**, 25 (2016).
36. Palmer, C. J., Lawson, R. P. & Hohwy, J. Bayesian approaches to autism: Towards volatility, action, and behavior. *Psychol. Bull.* **143**, 521–542. https://doi.org/10.1037/bul0000097 (2017).
37. Finnemann, J. J., Plaisted-Grant, K., Moore, J., Teufel, C. & Fletcher, P. C. Low-level, prediction-based sensory and motor processes are unimpaired in autism. *Neuropsychologia* **156**, 107835. https://doi.org/10.1016/j.neuropsychologia.2021.107835 (2021).
38. Yamashita, Y. & Tani, J. Emergence of functional hierarchy in a multiple timescale neural network model: A humanoid robot experiment. *PLoS Comput. Biol.* **4**, e1000220 (2008).
39. Yamashita, Y. & Tani, J. Spontaneous prediction error generation in schizophrenia. *PLoS One* **7**, e37843. https://doi.org/10.1371/journal.pone.0037843 (2012).
40. Idei, H. *et al.* A neurorobotics simulation of autistic behavior induced by unusual sensory precision. *Comput. Psychiatry* **2**, 164–182. https://doi.org/10.1162/CPSY_a_00019 (2018).
41. Idei, H., Murata, S., Yamashita, Y. & Ogata, T. Homogeneous intrinsic neuronal excitability induces overfitting to sensory noise: A robot model of neurodevelopmental disorder. *Front. Psych.* **11**, 762. https://doi.org/10.3389/fpsyt.2020.00762 (2020).
42. Idei, H., Murata, S., Yamashita, Y. & Ogata, T. Paradoxical sensory reactivity induced by functional disconnection in a robot model of neurodevelopmental disorder. *Neural Netw.* **138**, 150–163. https://doi.org/10.1016/j.neunet.2021.01.033 (2021).
43. Finkelstein, A. *et al.* Attractor dynamics gate cortical information flow during decision-making. *Nat. Neurosci.* **24**, 843–850. https://doi.org/10.1038/s41593-021-00840-6 (2021).
44. Newell, K., Liu, Y. & Mayer-Kress, G. Time scales in motor learning and development. *Psychol. Rev.* **108**(1), 57–82 (2001).
45. Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization. arXiv:1412.6980 (2017).
46. Azouz, R. & Gray, C. M. Dynamic spike threshold reveals a mechanism for synaptic coincidence detection in cortical neurons in vivo. *Proc. Natl. Acad. Sci.* **97**, 8110–8115. https://doi.org/10.1073/pnas.130200797 (2000).
47. Mejias, J. F. & Longtin, A. Optimal heterogeneity for coding in spiking neural networks. *Phys. Rev. Lett.* **108**, 228102. https://doi.org/10.1103/PhysRevLett.108.228102 (2012).

## Acknowledgements

This work was partially supported by an unrestricted gift from Google and supported by a JSPS Grant-in-Aid (Nos. JP19J20281, JP22J01708), JST Moonshot R&D (No. JPMJMS2031), and JST CREST Grants (Nos. JPMJCR16E2, JPMJCR21P4).

## Author information

### Contributions

Conceptualization: H.I., J.T. Methodology: H.I., Y.Y., J.T. Investigation: H.I., W.O. Visualization: H.I., Y.Y., J.T. Funding acquisition: H.I., Y.Y., T.O., J.T. Project administration: J.T. Supervision: J.T. Writing—original draft: H.I. Writing—review and editing: H.I., W.O., Y.Y., T.O., J.T.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

## Additional information

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Supplementary Information

Supplementary Information 1.

## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Idei, H., Ohata, W., Yamashita, Y. *et al.* Emergence of sensory attenuation based upon the free-energy principle.
*Sci Rep* **12**, 14542 (2022). https://doi.org/10.1038/s41598-022-18207-7

