## Abstract

Isolated spikes and bursts of spikes are thought to provide the two major modes of information coding by neurons. Bursts are known to be crucial for fundamental processes between neuron pairs, such as neuronal communication and synaptic plasticity. Neuronal bursting also has implications in neurodegenerative diseases and mental disorders. Despite these findings on the roles of bursts, whether and how bursts have an advantage over isolated spikes in network-level computation remains elusive. Here, we demonstrate in a computational model that intrinsic bursts, but not isolated spikes, can greatly facilitate learning of Lévy flight random walk trajectories by synchronizing burst onsets across a neural population. Lévy flight is a hallmark of optimal search strategies and appears in cognitive behaviors such as saccadic eye movements and memory retrieval. Our results suggest that bursting is crucial for sequence learning by recurrent neural networks when the sequences comprise long-tailed distributed discrete jumps.

## Introduction

Neurons in the brain display a variety of temporal discharge patterns, among which bursting refers to the generation of multiple spikes with brief inter-spike intervals (typically several milliseconds) within a short period of time (typically several tens to hundreds of milliseconds). Bursting neurons are found ubiquitously in the brain and are thought to play active roles in transferring and routing information^{1,2,3,4,5}, inducing synaptic plasticity^{6,7}, and supporting and/or altering cognitive functions^{2,7,8,9,10,11,12,13,14}. Altered neuronal bursting has been implicated in neurodegenerative disorders^{15} and depression^{16}. While our understanding of the roles of bursting has advanced, the computational advantages of spike bursts over isolated spikes remain elusive.

Here, we show the benefits of bursting activity in learning sequences generated by a special class of random walks observed in various animal behaviors. We investigate whether and how bursting neurons improve the ability of neural network models to learn the dynamical trajectories of Lévy flight, a random walk whose step sizes obey a heavy-tailed distribution^{17,18,19}. Consequently, Lévy flight consists of many short steps and rare long-distance jumps. A well-known characteristic of Lévy flight is that it makes search more efficient than Brownian walks, which consist only of relatively short steps^{20,21}. Many processes observed in biology^{22,23,24} and physics^{25,26} can be described as Lévy flight. In neuroscience, an interesting example of Lévy flight is the stochastic trajectory of saccadic eye movements^{27}, on which the visual exploration of objects of interest significantly relies. Several cortical and subcortical regions, including the frontal eye field, superior colliculus, and cerebellar cortex, participate in controlling and executing saccades^{28}, and neurons in these regions show spike bursts^{8,9,29,30}. Propagation of gamma-frequency (\(\sim\) 40 Hz) bursts of the local field potential also obeys Lévy flight in the middle temporal cortex of marmosets, a region engaged in visual motion processing^{31}. Other examples of Lévy flight are found in the memory processing of animals. During spatial exploration, rodents spend the majority of their time exploring small local areas but occasionally travel to distant places at greater speeds^{32}. Hippocampal^{10} and subicular^{33} neurons can learn spatial receptive fields and are known to exhibit burst firing. In human subjects, memory recall can be viewed as a foraging behavior obeying Lévy flight^{34,35,36}.
The appearance of Lévy flight in various types of foraging behavior and the participation of bursting neurons in the relevant brain regions motivate us to explore what benefits neuronal bursting brings to the learning and execution of such behavior.

For this purpose, we employ reservoir computing (RC), which uses a recurrent network model together with FORCE learning of information-readout neurons for efficient learning of time-varying external signals (i.e., teaching signals)^{37}. Originally, RC and FORCE learning were formulated for rate-coding neurons, for which FORCE learning of continuous dynamical trajectories is generally fast. The RC framework has also been quite successful in modeling neural activities recorded from various cortical areas^{38,39,40,41}. Later, RC was extended to networks of spiking neurons^{42,43}, and variants of FORCE learning^{45,46,47}, as well as other learning methods^{44}, have been proposed for spiking neurons. Results of these previous studies indicate that isolated spikes are sufficient for learning smooth trajectories. However, whether and how isolated spikes and bursts contribute differently to learning a more general class of sequences has not been explored. In this study, we clarify this question by using a spiking-neuron version of FORCE learning to train an RC system of bursting neurons.

## Results

Our model follows the conventional framework of reservoir computing except that neurons constituting a recurrent network called reservoir have regular-spiking (RS) and bursting modes. Neurons in the reservoir project to two readout neurons to describe the two-dimensional coordinates \((x_1, x_2)\) of Lévy flight, and the outputs of these neurons are fed back to all neurons in the reservoir (Fig. 1a). The weights of readout connections are modified based on the FORCE learning extended to spiking neurons^{47}. In the RS mode, the neurons tend to generate isolated spikes (Fig. 1b) whereas they are strongly bursty in the bursting mode (Fig. 1c). See “Methods” for the details of the network model and construction of Lévy flight.

During learning, the model was repeatedly exposed to a periodic target signal representing the repetition of a finite portion of a Lévy flight trajectory. The model can learn these trajectories in either the RS or the bursting mode. Stochastic jumps in the trajectory are expected to be difficult for the model to learn accurately. As we will show later, the accuracy and speed of learning depend significantly on the firing mode. Figure 1d displays an example of the time-varying output of the two readout neurons after the model learned a target signal in the bursting mode. As expected, the output of the model tends to deviate largely from the target trajectory at relatively large jumps. Nonetheless, the model replicates the target trajectory well in the bursting mode even after the learning process is turned off. The agreement between the target trajectory and the model’s output is more clearly visible in the time evolution of the variables \(x_1\) and \(x_2\) (Fig. 1e).

We quantitatively compare the learning performance of the model between the bursting and RS modes. The strength of synaptic connections that gives optimal performance may differ between the two modes. To make a fair comparison, we first search for an optimal coupling strength that minimizes the error in each mode. We calculate the average errors between a target trajectory and the actual output in the bursting and RS modes as a function of the connection strength *G*. Figure 2a,b show the average errors of the trajectories obtained in trial 25 and trial 50 of learning, respectively, when the target length is 400 ms. For each value of *G*, the errors during trial 25 or trial 50 were first averaged over time, and then the mean and standard deviation of these temporal averages were calculated over 20 simulation runs with different initial settings of neural states and synaptic weights. As we can see from the figures, the error is minimized for relatively weak connections (\(G \sim\) 50) in the bursting mode. In contrast, the model achieves the least error at much stronger connections (\(G\sim\) 170) in the RS mode. The minimum average error is slightly smaller in the bursting mode than in the RS mode, although the error sizes are not greatly different between the two modes after 50 cycles of training (Fig. 2b). Lyapunov exponents indicate that the initial network state was weakly non-chaotic in the RS mode and weakly chaotic in the (optimal) bursting mode (Supplementary Fig. 1). Similar differences in learning behavior are observed between the RS and bursting modes for another choice of the parameters of Lévy flight (Supplementary Fig. 2). Given these results, one might conclude that spike bursts have little advantage over isolated spikes in the present sequence learning task.

However, the results presented in Fig. 2a,b reveal two intriguing differences in learning between the RS mode and the bursting mode. First, while the two modes yield approximately the same minimum values of the average error (see “Methods”), the bursting mode yields a much smaller variance at the minimum error than the RS mode. In particular, Fig. 2a demonstrates that the variance almost vanishes for the optimal range of *G* values after 25 training cycles in the bursting mode. This is not the case for the optimal range of *G* values in the RS mode. Second, and more importantly, the average error decreases much faster during learning in the bursting mode than in the RS mode, showing markedly different learning speeds between the two modes (Fig. 2c). Generally, FORCE learning enables rapid learning of a smooth target trajectory even if the trajectory is chaotic^{37}. However, our results show that FORCE learning with isolated spikes requires several tens of trials to learn a target trajectory representing the random walks of Lévy flight. In stark contrast, spike bursts enable the same learning rule to learn such a target trajectory at a similar accuracy within only ten trials. The merits of bursting are also suggested by the common observation that individual neurons tend to generate spike bursts after learning at the corresponding optimal coupling strength, irrespective of the mode (Supplementary Fig. 3).

As the length of target trajectories is increased, performance in sequence learning degrades in both modes. However, the superiority of the bursting mode over the RS mode in rapid sequence learning remains (Fig. 2d,e). We note that the absolute values of the error are not meaningful in themselves; they become smaller as more neurons are included in the reservoir (Fig. 2f).

Now, we investigate why and how spike bursts improve the performance of the network model in learning trajectories of Lévy flight. We show that synchronized bursting of neurons plays an active role in the present sequence learning. Figure 3a shows the time evolution of a portion of the learned trajectory \(x_1(t)\) and \(x_2(t)\) with vertical dashed lines indicating the times of long-distance flights. Here, a long-distance flight refers to a step \((\Delta x_1, \Delta x_2)\) whose length \(\sqrt{\Delta x_1(t)^2+ \Delta x_2(t)^2}\) is greater than 0.16, which approximately corresponds to the top 5% of long-distance jumps. In Fig. 3b,c, we show spike rasters of 100 bursting neurons chosen randomly from the reservoir during the corresponding period of time before and after learning, respectively. While many neurons rarely fire, some neurons intermittently generate brief (\(\sim\) 30 ms) to prolonged (\(\sim\) 150 ms) high-frequency bursts. Individual neurons change their firing patterns during learning, but the population-level distributions of inter-spike intervals remain almost unchanged (Supplementary Fig. 4a,b).

However, visual inspection of the spike rasters suggests that after learning many neurons start or stop generating spike bursts around the times of large flights, and that this tendency is weak before learning. Therefore, assuming that spikes with inter-spike intervals shorter than 6 ms belonged to a burst, we identified the onset and end times of bursts of individual neurons and calculated the distributions of the onset/end times of bursts relative to the times of the nearest large jumps (i.e., the times of burst onsets/ends minus the times of the nearest-neighbor large flights) before (Fig. 3d) and after learning (Fig. 3e). The threshold of 6 ms was determined from a gap in the inter-spike interval distribution (Supplementary Fig. 3b2). Intriguingly, the post-learning distributions exhibit sharp peaks around the origin of the relative-time axis. The relative times of burst onsets show a particularly prominent peak. These results reveal that the RC system operating in the bursting mode learns the target trajectory of Lévy flight by shifting the times of bursts close to the occurrence times of large jumps. In other words, the RC system synchronizes the bursting of individual neurons around the times of large jumps. This synchronization of bursts likely gives recurrent networks of bursting neurons an advantage in learning sequences that involve abrupt changes in their trajectories.
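The burst segmentation described above (grouping consecutive spikes whose inter-spike intervals fall below the 6-ms threshold) can be sketched as follows. Function names are ours, and note that this simple rule also counts an isolated spike as a one-spike "burst", which a real analysis might exclude:

```python
import numpy as np

def burst_onsets_ends(spike_times, max_isi=6.0):
    """Segment a spike train (times in ms) into bursts: consecutive spikes
    whose inter-spike intervals are below max_isi belong to the same burst."""
    spike_times = np.asarray(spike_times)
    if spike_times.size == 0:
        return [], []
    gap = np.diff(spike_times) > max_isi      # True where a new burst starts
    onsets = spike_times[np.r_[True, gap]]    # first spike of each burst
    ends = spike_times[np.r_[gap, True]]      # last spike of each burst
    return onsets.tolist(), ends.tolist()

def relative_times(events, jump_times):
    """Event time minus the time of the nearest large jump (cf. Fig. 3d,e)."""
    jump_times = np.asarray(jump_times)
    return [e - jump_times[np.argmin(np.abs(jump_times - e))] for e in events]

on, off = burst_onsets_ends([10, 12, 14, 50, 52, 120])
# on -> [10, 50, 120], off -> [14, 52, 120]
```

Histogramming the output of `relative_times` for burst onsets and ends would then reproduce the kind of distributions shown in Fig. 3d,e.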

The above results shown for the bursting mode, together with the bursting of many neurons after learning (Supplementary Fig. 3), suggest that bursting also plays a similar role in learning in the RS mode. We examined this possibility by investigating learning performance in the RS mode for different coupling strengths: \(G=\) 50, 100 and 150. Before learning, the majority of neurons showed regular spiking for \(G=50\), whereas a larger portion of neurons showed bursting patterns for \(G=100\) and 150 (Fig. 4a, top). In all three cases, the number of bursting neurons increased and the error decreased after learning (Fig. 4a, bottom). Interestingly, learning with a larger value of *G* reduced the error more efficiently (Fig. 4b). In addition, the number of synchronous bursts near the onsets of large jumps increased more prominently as the value of *G* was increased (Fig. 4c). These results show that a network set in the RS mode also develops bursting states to improve the accuracy of learning.

We further examined how synchronous neuronal bursting evolves during the progress of learning in the optimal model for the RS mode (\(G=170\)). As shown previously, this model has a tendency toward bursting even before learning (Supplementary Fig. 4a). Synchronous bursting was not prominent at trial 10 of learning but became prominent between trials 10 and 20 (Fig. 5). Intriguingly, during this period the error decreased to a magnitude similar to the minimum error of the optimal model for the bursting mode (cf. Fig. 2c). These results indicate the pivotal role of burst synchronization in the present sequence learning task.

## Discussion

We have trained an RC system of spiking neurons on a difficult sequence learning task in which the target sequence represents random walks. FORCE learning can quickly project the neural population activity of the reservoir onto a target trajectory for a wide range of continuous trajectories, including chaotic ones. This fast convergence is a merit of RC, making it useful for various practical applications. However, when a target trajectory consists of abrupt steps including long-distance jumps, as is the case in Lévy flight, FORCE learning with isolated spikes requires a large number of trials to minimize the error signal. In contrast, the same learning rule can rapidly minimize the error by aligning the onsets as well as the end times of bursts with the neighborhoods of the times of long-distance jumps. This implies that the system synchronizes the bursts of individual neurons around these times. Such time-locked synchronization also emerges in the RS mode during learning. Moreover, the growth of synchronous bursting improves the performance of the trained model. This result suggests that the initial absence of synchronous bursts is the primary cause of slow learning in the RS mode. Since the optimal model for the RS mode has a tendency toward bursting before learning, a transition from the RS type to a bursting type is unlikely to be the primary cause. Thus, the RC system can learn Lévy flight trajectories more efficiently with bursts than with isolated spikes. Our model suggests that bursts contribute crucially to learning foraging-like cognitive behaviors.

Our results show an interesting qualitative agreement with some experimental observations. It is known that the onsets of bursts in saccade-related burst neurons in the superior colliculus are tightly linked to saccade onsets^{8,9}. These neurons tend to discharge prior to a saccade if the movement is in their preferred direction, whereas their discharges follow rather than precede saccades for movements deviating from their preferred directions. Although our model is far simpler than the actual neural circuits that control saccadic eye movements^{48}, the sharp peak of burst onsets around the times of long-distance steps in Fig. 3e seems consistent with the characteristic behavioral correlates of saccade-related burst neurons in the superior colliculus.

During spatial navigation, hippocampal place cells exhibit both bursts and isolated spikes^{3}, and these different discharging patterns are thought to play distinct functional roles in hippocampal memory processing^{3,11,49}. The hippocampal area CA3, which has prominent recurrent excitatory connections, resembles the reservoir in this model. Furthermore, an abstract model of the entorhinal-hippocampal memory system accounted for different statistical structures of hippocampal sequence generation, such as diffusive vs. Lévy flight-like random walks^{32}. Therefore, the hippocampal circuits are of potential relevance to this study. However, the relationships between spatial information coding and the cells’ discharging patterns are not simple, and they depend on specific cell types and brain regions^{33,49}. To our knowledge, whether the CA3 neural population synchronizes its burst discharges around the times of long-distance runs of animals remains unknown. On the other hand, it is known that bursts of CA3 neurons mostly occur during inbound travel toward their receptive field centers^{10}. Clarifying the distinct computational contributions of isolated spikes and bursts to hippocampal memory processing is an intriguing open question.

In summary, this study showed the advantages of bursting neuronal activity in the rapid learning of dynamical trajectories obeying Lévy flight. Bursting is found ubiquitously in various regions of the brain, and previous studies suggest active roles of bursts in robust spike propagation and the induction of synaptic plasticity. Our results provide further insight into the unique role of bursts in network-level learning and computation.

## Methods

### Neuron model

We describe neurons in the reservoir with the Izhikevich model, which is able to mimic the temporal discharging patterns of various neurons^{50}:

$$\frac{dv_i}{dt} = 0.04 v_i^2 + 5 v_i + 140 - u_i + I_i, \qquad \frac{du_i}{dt} = a \left( b v_i - u_i \right),$$

where \(a=0.02\) and \(b=0.2\), *i* is a neuron index, and the number of neurons \(N=1000\). The values of \(v_i\) and \(u_i\) are reset to *c* and \(u_i+d\) when \(v_i\) reaches the threshold of 30 mV. We set \(c=-65\) mV and \(d=8\) in the RS mode and \(c=-50\) mV and \(d=2\) in the bursting mode. We use this model without taking refractory periods into account for simplicity of numerical simulations though some neurons may exhibit unrealistically high frequency bursting.
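As a concrete illustration of the two firing modes, the model can be reproduced with a minimal Euler simulation of the Izhikevich equations; the function name and integration step below are our own choices, not taken from the paper's code.

```python
import numpy as np

def izhikevich_spikes(c, d, I=10.0, T=500.0, dt=0.1, a=0.02, b=0.2):
    """Euler simulation of one Izhikevich neuron; returns spike times in ms."""
    v, u = -65.0, b * -65.0
    spikes = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v ** 2 + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # threshold crossing: spike and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

rs_spikes = izhikevich_spikes(c=-65.0, d=8.0)     # regular-spiking mode
burst_spikes = izhikevich_spikes(c=-50.0, d=2.0)  # bursting mode
```

With the bursting-mode reset parameters, the neuron re-fires within a few milliseconds after each spike, producing the short intra-burst inter-spike intervals described in the text.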

Synaptic current is given as \(I_i=s_i(t)+I_b\), where \(I_b\) is a constant bias and recurrent synaptic inputs are

$$s_i(t) = \sum_{j=1}^{N} w_{ij}\, r_j(t), \qquad w_{ij} = G\, w^0_{ij} + Q \sum_{k=1}^{2} \eta_i^{(k)} \phi_j^{(k)},$$

in terms of the instantaneous firing rate \(r_i(t)\) of neuron *i* at time *t*. Throughout this study, we set \(I_b=10\). The synaptic weight matrix \(w_{ij}\) has non-modifiable components \(w^0_{ij}\) and modifiable components \(\phi _j^{(k)}\), with *G* and *Q* being constant parameters. The non-modifiable components have the connection probability \(p=0.1\) and their values are drawn from a normal distribution with mean 0 and variance \(1/\sqrt{Np^2}\). While \(Q=100\) throughout this paper, the value of *G* is mode-dependent, as shown later. The encoding parameter \(\eta _i^{(k)}\) (\(k=1,\ 2)\) is randomly drawn from the uniform distribution \([-1,+1]\). The linear decoder \(\phi ^{(k)}_i(t)\) determines activities of the readout units \(x^{(k)}(t)\):

$$x^{(k)}(t) = \sum_{j=1}^{N} \phi_j^{(k)}(t)\, r_j(t) \qquad (k = 1, 2),$$

which should approximate a given target trajectory.
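To make the architecture concrete, the following sketch assembles the fixed reservoir weights, encoders, and decoders described above. Variable names are ours, and we read the stated weight variance as giving a standard deviation of \(1/(p\sqrt{N})\), which is an assumption on our part.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, G, Q, K = 1000, 0.1, 50.0, 100.0, 2  # reservoir size, sparsity, gains, outputs

# Non-modifiable recurrent weights: sparse (connection probability p) with
# Gaussian entries; std = 1/(p*sqrt(N)) is our reading of the stated variance.
mask = rng.random((N, N)) < p
w0 = np.where(mask, rng.normal(0.0, 1.0 / (p * np.sqrt(N)), (N, N)), 0.0)

eta = rng.uniform(-1.0, 1.0, (N, K))   # fixed feedback encoders eta_i^(k)
phi = np.zeros((N, K))                 # modifiable decoders phi_j^(k), start at 0

def readout(r):
    """x^(k)(t) = sum_j phi_j^(k) r_j(t)."""
    return phi.T @ r

def recurrent_input(r, x):
    """s_i(t) = G * (w0 @ r)_i + Q * sum_k eta_i^(k) x^(k)(t)."""
    return G * (w0 @ r) + Q * (eta @ x)
```

Because the decoders start at zero, the readout is initially silent and the reservoir is driven only by its fixed recurrent weights and the bias current.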

### FORCE learning

We used a straightforward extension of FORCE learning to spiking neurons^{47}. A double exponential filter was used to low-pass filter the individual spikes of the *i*-th neuron in the reservoir:

$$\dot{r}_i = -\frac{r_i}{\tau_d} + h_i, \qquad \dot{h}_i = -\frac{h_i}{\tau_r} + \frac{1}{\tau_r \tau_d} \sum_{t_i^{(m)} < t} \delta\!\left( t - t_i^{(m)} \right),$$

where \(t_i^{(m)}\) denotes the *m*-th spike time of neuron *i*, and \(\tau _r\) and \(\tau _d\) are the synaptic rise time and synaptic decay time, respectively. Values of these parameters were set as \(\tau _r=2\) ms and \(\tau _d= 20\) ms.
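A minimal discrete-time implementation of a double-exponential filter of this kind might look as follows; the function name and time step are our own, and spikes are given as a binary train with at most one spike per bin.

```python
import numpy as np

def filter_spikes(spike_train, dt=0.05, tau_r=2.0, tau_d=20.0):
    """Double-exponential filtering of a binary spike train (one bin = dt ms).
    The output rises on the scale of tau_r and decays on the scale of tau_d."""
    r, h = 0.0, 0.0
    out = np.empty(len(spike_train))
    for t, s in enumerate(spike_train):
        r += dt * (-r / tau_d + h)                    # slow decay stage
        h += dt * (-h / tau_r) + s / (tau_r * tau_d)  # fast stage driven by spikes
        out[t] = r
    return out
```

A single spike thus produces a smooth bump that peaks after a few milliseconds and decays with the 20-ms synaptic time constant.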

Using the error signals \(e^{(k)}(t)=f^{(k)}(t)-x^{(k)}(t)\), we update the decoders as follows:

$$\phi^{(k)}(t) = \phi^{(k)}(t - \Delta t) + e^{(k)}(t)\, \mathbf{P}(t)\, \mathbf{r}(t),$$

$$\mathbf{P}(t) = \mathbf{P}(t - \Delta t) - \frac{\mathbf{P}(t - \Delta t)\, \mathbf{r}(t)\, \mathbf{r}(t)^{\top} \mathbf{P}(t - \Delta t)}{1 + \mathbf{r}(t)^{\top} \mathbf{P}(t - \Delta t)\, \mathbf{r}(t)}.$$

The initial conditions are given as \(\phi _j^{(k)}(0)=0\) and \(\mathbf{P}(0)=\mathbf{I}_N/ \lambda\), where \(\mathbf{I}_N\) is an *N*-dimensional identity matrix and \(\lambda =10\) for both regular and bursting modes. The performance of the model is evaluated by the average squared error between a target trajectory and the corresponding network output:

$$E = \left\langle \sum_{k=1}^{2} \left( f^{(k)}(t) - x^{(k)}(t) \right)^2 \right\rangle_t,$$

where \(<\cdot >_t\) means averaging over time within the corresponding learning step.
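The update rule is the standard recursive least squares (RLS) step at the core of FORCE learning; a compact sketch, with names of our choosing, could be:

```python
import numpy as np

def force_step(phi, P, r, f):
    """One recursive least-squares (RLS) update of the linear decoders.

    phi: (N, K) decoders, P: (N, N) running estimate of the inverse
    correlation matrix, r: (N,) filtered firing rates, f: (K,) target."""
    x = phi.T @ r                  # current readout x^(k)(t)
    e = f - x                      # error signal e^(k)(t)
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)        # RLS gain vector
    P = P - np.outer(k, Pr)        # rank-1 downdate of P
    phi = phi + np.outer(k, e)     # error-proportional decoder update
    return phi, P, x
```

Iterating this step while the reservoir runs drives the readout toward the target; the shrinking gain encoded in `P` is what makes the convergence fast.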

### Lévy flight

Trajectories obeying Lévy flight were generated using the function scipy.stats.levy_stable.rvs() from the SciPy library for scientific computing in Python. This function generates a series of random numbers that obey a stable (Lévy) distribution^{17,18}. In short, a stable distribution has a characteristic function of the form

$$\varphi(t; \alpha, \beta, c, \mu) = \exp\!\left[\, i t \mu - |c t|^{\alpha} \left( 1 - i \beta\, \mathrm{sgn}(t)\, \Phi \right) \right],$$
where \(\alpha\), \(\beta\), *c*, and \(\mu\) are the characteristic exponent, skewness parameter, scale parameter, and location parameter, respectively, and

$$\Phi = \begin{cases} \tan\left( \pi \alpha / 2 \right), & \alpha \neq 1 \\ -\dfrac{2}{\pi} \log |t|, & \alpha = 1. \end{cases}$$

The probability density function for a stable distribution is given as

$$f(x; \alpha, \beta, c, \mu) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \varphi(t; \alpha, \beta, c, \mu)\, e^{-i x t}\, dt,$$

where \(-\infty<x<\infty\). If we set \(c=1\) and \(\mu =0\), we obtain a class of long-tailed distributions, among which the Lévy distribution in the narrow sense is obtained for \(\alpha =0.5\) and \(\beta =1\). However, to obtain sufficiently significant long tails, we used the values \(\alpha =1.5\) and \(\beta =0\) throughout this study unless otherwise stated. The distribution of jump distances for this choice of parameter values, together with another example also used in this study, is shown in Supplementary Fig. 2a.

Now, step sizes of a two-dimensional Lévy flight can be written as

$$\Delta x_1(t) = R(t) \cos \theta(t), \qquad \Delta x_2(t) = R(t) \sin \theta(t),$$

where the angle of each step \(\theta (t)\) is drawn randomly from the uniform distribution \(0 \le \theta \le 2 \pi\), and the step amplitude *R*(*t*) was determined as \(R=F^{-1}(r; \alpha ,\beta ,c,\mu )\), where

$$F(x; \alpha, \beta, c, \mu) = \int_{-\infty}^{x} f(x'; \alpha, \beta, c, \mu)\, dx'$$

and \(0 < r \le 1\) is a uniform random number.
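Putting the sampling steps together, a self-contained sketch of a 2D Lévy flight generator might look as follows. For a dependency-free example we replace scipy.stats.levy_stable.rvs() with the Chambers-Mallows-Stuck sampler for the symmetric case \(\beta = 0\), and take the absolute value of the sampled variable as the step amplitude (an assumption on our part; the paper uses the inverse CDF directly).

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for a symmetric alpha-stable variate
    (beta = 0, c = 1, mu = 0); stands in for scipy.stats.levy_stable.rvs()."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform phase
    W = rng.exponential(1.0, size)                 # unit exponential
    return (np.sin(alpha * U) / np.cos(U) ** (1.0 / alpha)
            * (np.cos(U - alpha * U) / W) ** ((1.0 - alpha) / alpha))

def levy_flight_2d(n_steps, alpha=1.5, seed=0):
    """2D Levy flight: heavy-tailed step amplitudes with uniform directions."""
    rng = np.random.default_rng(seed)
    R = np.abs(symmetric_stable(alpha, n_steps, rng))  # step amplitudes
    theta = rng.uniform(0.0, 2.0 * np.pi, n_steps)     # step directions
    dx1, dx2 = R * np.cos(theta), R * np.sin(theta)
    return np.cumsum(dx1), np.cumsum(dx2)              # coordinates x1(t), x2(t)

x1, x2 = levy_flight_2d(1000)
```

The resulting trajectory shows the characteristic mixture of many short steps and rare long jumps described in the Introduction.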

We limited the target trajectories within a square area \(|x_1| \le 2\), \(|x_2| \le 2\) by normalizing the coordinates of Lévy flight as

$$\Delta x_k \rightarrow \frac{4 \left( \Delta x_k - \Delta x_{k, \mathsf{min}} \right)}{\Delta x_{k, \mathsf{max}} - \Delta x_{k, \mathsf{min}}} - 2 \qquad (k = 1, 2),$$
where \(\Delta x_{k, \mathsf {min}}\) and \(\Delta x_{k,\mathsf {max}}\) (\(k=1,2\)) stand for the minimum and maximum values of the step sizes, respectively, constituting the target trajectory.

## References

1. Lisman, J. E. Bursts as a unit of neural information: Making unreliable synapses reliable. *Trends Neurosci.* **20**, 38–43 (1997).
2. Fanselow, E. E., Sameshima, K., Baccala, L. A. & Nicolelis, M. A. Thalamic bursting in rats during different awake behavioral states. *Proc. Natl. Acad. Sci. U. S. A.* **98**, 15330–15335 (2001).
3. Harris, K. D., Hirase, H., Leinekugel, X., Henze, D. A. & Buzsáki, G. Temporal interaction between single spikes and complex spike bursts in hippocampal pyramidal cells. *Neuron* **32**, 141–149 (2001).
4. Izhikevich, E. M., Desai, N. S., Walcott, E. C. & Hoppensteadt, F. C. Bursts as a unit of neural information: Selective communication via resonance. *Trends Neurosci.* **26**, 161–167 (2003).
5. Naud, R. & Sprekeler, H. Sparse bursts optimize information transmission in a multiplexed neural code. *Proc. Natl. Acad. Sci. U. S. A.* **115**, E6329–E6338 (2018).
6. Larson, J. & Lynch, G. Induction of synaptic potentiation in hippocampus by patterned stimulation involves two events. *Science* **232**, 985–988 (1986).
7. Yin, L. *et al.* Autapses enhance bursting and coincidence detection in neocortical pyramidal cells. *Nat. Commun.* **9**, 4890 (2018).
8. Goossens, H. H. L. M. & van Opstal, A. J. Optimal control of saccades by spatial-temporal activity patterns in the monkey superior colliculus. *PLoS Comput. Biol.* **8**, e1002508 (2012).
9. Sparks, D. L. & Mays, L. E. Movement fields of saccade-related burst neurons in the monkey superior colliculus. *Brain Res.* **190**, 39–50 (1980).
10. Mizuseki, K., Royer, S., Diba, K. & Buzsaki, G. Activity dynamics and behavioral correlates of CA3 and CA1 hippocampal pyramidal neurons. *Hippocampus* **22**, 1659–1680 (2012).
11. Xu, W. *et al.* Distinct neuronal coding schemes in memory revealed by selective erasure of fast synchronous synaptic transmission. *Neuron* **73**, 990–1001 (2012).
12. Payeur, A., Guerguiev, J., Zenke, F., Richards, B. A. & Naud, R. Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits. *Nat. Neurosci.* **24**, 1010–1019 (2021).
13. Wang, M. *et al.* Single-neuron representation of learned complex sounds in the auditory cortex. *Nat. Commun.* **11**, 4361 (2020).
14. Fujita, K., Kashimori, Y. & Kambara, T. Spatiotemporal burst coding for extracting features of spatiotemporally varying stimuli. *Biol. Cybern.* **97**, 293–305 (2007).
15. Miller, B. R., Walker, A. G., Barton, S. J. & Rebec, G. V. Dysregulated neuronal activity patterns implicate corticostriatal circuit dysfunction in multiple rodent models of Huntington’s disease. *Front. Syst. Neurosci.* **5**, 26 (2011).
16. Yang, Y. *et al.* Ketamine blocks bursting in the lateral habenula to rapidly relieve depression. *Nature* **554**, 317–322 (2018).
17. Lévy, P. *Theorie de L’Addition des Variables Aleatoires* (Gauthier-Villars, 1954).
18. Mandelbrot, B. *The Fractal Geometry of Nature* (Freeman, 1977).
19. Abe, M. S. Functional advantages of Lévy walks emerging near a critical point. *Proc. Natl. Acad. Sci. U. S. A.* **117**, 24336–24344 (2020).
20. Viswanathan, G. M. *et al.* Optimizing the success of random searches. *Nature* **401**, 911–914 (1999).
21. Bartumeus, F., Da Luz, M. G. E., Viswanathan, G. M. & Catalan, J. Animal search strategies: A quantitative random-walk analysis. *Ecology* **86**, 3078–3087 (2005).
22. Ott, A., Bouchaud, J., Langevin, D. & Urbach, W. Anomalous diffusion in living polymers: A genuine Levy flight?. *Phys. Rev. Lett.* **65**, 2201–2204 (1990).
23. Brockmann, D., Hufnagel, L. & Geisel, T. The scaling laws of human travel. *Nature* **439**, 462–465 (2006).
24. Huda, S. *et al.* Lévy-like movement patterns of metastatic cancer cells revealed in microfabricated systems and implicated in vivo. *Nat. Commun.* **9**, 4539 (2018).
25. Corral, A. Universal earthquake-occurrence jumps, correlations with time, and anomalous diffusion. *Phys. Rev. Lett.* **97**, 178501 (2006).
26. Barthelemy, P., Bertolotti, J. & Wiersma, D. A Lévy flight for light. *Nature* **453**, 495–498 (2008).
27. Boccignone, G. & Ferraro, M. Modelling gaze shift as a constrained random walk. *Phys. A* **331**, 207–218 (2004).
28. Sparks, D. L. & Barton, E. J. Neural control of saccadic eye movements. *Curr. Opin. Neurobiol.* **3**, 966–972 (1993).
29. Kojima, Y. A neuronal process for adaptive control of primate saccadic system. *Prog. Brain Res.* **249**, 169–181 (2019).
30. Quinet, J., Schultz, K., May, P. J. & Gamlin, P. D. Neural control of rapid binocular eye movements: Saccade-vergence burst neurons. *Proc. Natl. Acad. Sci. U. S. A.* **117**, 29123–29132 (2020).
31. Liu, Y., Long, X., Martin, P. R., Solomon, S. G. & Gong, P. Lévy walk dynamics explain gamma burst patterns in primate cerebral cortex. *Commun. Biol.* **4**, 739 (2021).
32. McNamee, D. C., Stachenfeld, K. L., Botvinick, M. M. & Gershman, S. J. Flexible modulation of sequence generation in the entorhinal-hippocampal system. *Nat. Neurosci.* **24**, 851–862 (2021).
33. Simonnet, J. & Brecht, M. Burst firing and spatial coding in subicular principal cells. *J. Neurosci.* **39**, 3651–3662 (2019).
34. Rhodes, T. & Turvey, M. T. Human memory retrieval as Lévy foraging. *Phys. A* **385**, 255–260 (2007).
35. Costa, T., Boccignone, G., Cauda, F. & Ferraro, M. The foraging brain: Evidence of Lévy dynamics in brain networks. *PLoS One* **11**, e0161702 (2016).
36. Patten, K. J., Greer, K., Likens, A. D., Amazeen, E. L. & Amazeen, P. G. The trajectory of thought: Heavy-tailed distributions in memory foraging promote efficiency. *Mem. Cogn.* **48**, 772–787 (2020).
37. Sussillo, D. & Abbott, L. F. Generating coherent patterns of activity from chaotic neural networks. *Neuron* **63**, 544–557 (2009).
38. Sussillo, D., Churchland, M. M., Kaufman, M. T. & Shenoy, K. V. A neural network that finds a naturalistic solution for the production of muscle activity. *Nat. Neurosci.* **18**, 1025–1033 (2015).
39. Rajan, K., Harvey, C. D. & Tank, D. W. Recurrent network models of sequence generation and memory. *Neuron* **90**, 128–142 (2016).
40. Enel, P., Procyk, E., Quilodran, R. & Dominey, P. F. Reservoir computing properties of neural dynamics in prefrontal cortex. *PLoS Comput. Biol.* **12**, e1004967 (2016).
41. Martín-Vázquez, G., Asabuki, T., Isomura, Y. & Fukai, T. Learning task-related activities from independent local-field-potential components across motor cortex layers. *Front. Neurosci.* **12**, 429 (2018).
42. Denève, S. & Machens, C. K. Efficient codes and balanced networks. *Nat. Neurosci.* **19**, 375–382 (2016).
43. Abbott, L., DePasquale, B. & Memmesheimer, R. M. Building functional networks of spiking model neurons. *Nat. Neurosci.* **19**, 350–355 (2016).
44. Bellec, G. *et al.* A solution to the learning dilemma for recurrent networks of spiking neurons. *Nat. Commun.* **11**, 3625 (2020).
45. Thalmeier, D., Uhlmann, M., Kappen, H. J. & Memmesheimer, R.-M. Learning universal computations with spikes. *PLoS Comput. Biol.* **12**, e1004895 (2016).
46. Kim, C. M. & Chow, C. C. Learning recurrent dynamics in spiking networks. *eLife* **7**, e37124 (2018).
47. Nicola, W. & Clopath, C. Supervised learning in spiking neural networks with FORCE training. *Nat. Commun.* **8**, 2208 (2017).
48. Optican, L. M. & Pretegiani, E. What stops a saccade?. *Philos. Trans. R. Soc. B* **372**, 20160194 (2017).
49. Epsztein, J., Brecht, M. & Lee, A. K. Intracellular determinants of hippocampal CA1 place and silent cell activity in a novel environment. *Neuron* **70**, 109–120 (2011).
50. Izhikevich, E. M. Simple model of spiking neurons. *IEEE Trans. Neural Netw.* **14**, 1569–1572 (2003).

## Acknowledgements

This work was partly supported by Grants-in-Aid for Specially Promoted Research (JSPS KAKENHI) no. 18H05213.

## Author information

### Contributions

T.F. and T.A. designed the study. M.O. and T.A. performed numerical simulations. T.F., T.A., and M.O. wrote the manuscript.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

## Additional information

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Supplementary Information

## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Ohta, M., Asabuki, T. & Fukai, T. Intrinsic bursts facilitate learning of Lévy flight movements in recurrent neural network models.
*Sci Rep* **12**, 4951 (2022). https://doi.org/10.1038/s41598-022-08953-z

