Adaptive stochastic resonance for unknown and variable input signals


All sensors have a threshold, defined by the smallest signal amplitude that can be detected. The detection of sub-threshold signals, however, is possible by using the principle of stochastic resonance, where noise is added to the input signal so that it randomly exceeds the sensor threshold. The choice of an optimal noise level that maximizes the mutual information between sensor input and output, however, requires knowledge of the input signal, which is not available in most practical applications. Here we demonstrate that the autocorrelation of the sensor output alone is sufficient to find this optimal noise level. We demonstrate numerically and analytically the equivalence of the traditional mutual information approach and our autocorrelation approach for a range of model systems. We further show how the level of added noise can be continuously adapted, even to highly variable, unknown input signals, via a feedback loop. Finally, we present evidence that adaptive stochastic resonance based on the autocorrelation of the sensor output may be a fundamental principle in neuronal systems.


Biological organisms, as well as technical systems, rely on sensors that transmit environmental signals into the system for subsequent information processing. Sensors, in general, have a limited sensitivity, so that input signals with amplitudes below a certain threshold cannot normally be detected. Stochastic resonance (SR), a phenomenon first described by Benzi et al. in 19811, enables non-linear systems to detect even sub-threshold signals by means of added noise2,3,4, which brings the weak input signal at random time points to above-threshold levels. This mechanism, however, requires a tuning of the added noise level to work properly. In the case of input signals with known properties, such as periodic signals with known frequency, the optimum noise level can be found by maximizing the signal-to-noise ratio of the sensor output4, 5. For arbitrary, non-periodic input signals, however, it is not possible to separate signal and noise in the detector output by frequency filtering, so that the quality of the signal transmission must be assessed in a different way.

Regardless of the detailed mechanism, SR has been identified in a wide range of biological systems, including the mating behavior of Nezara viridula, which is able to detect subthreshold acoustic calling signals when mixed with acoustic Gaussian noise of suitable intensity6; rat cutaneous mechanoreceptors, when stimulated with subthreshold aperiodic stimuli plus noise2; and the paddlefish, which relies on electrical signals, amplified by stochastic resonance, to hunt edible plankton7. SR has recently received increasing attention, especially in the context of experimental and computational neuroscience, where it helps to explain how neuronal systems operate in noisy environments8,9,10.

It can be demonstrated that an optimal level for the added noise exists that maximizes the information transmission from the sub-threshold input to the sensor output11, 12. In self-adaptive non-linear signal detection systems based on SR, the optimum noise level can be continuously adjusted (increased or reduced until an optimum is found) via a feedback loop, so that the system response in terms of information throughput remains optimal even if the properties of the input signal change (Fig. 1). For this processing principle, the term adaptive SR has been coined13,14,15.

Figure 1

Principle of adaptive SR. Stochastic resonance describes how a sub-threshold signal with added noise of sufficient amplitude can result in a non-zero sensor output. If the input signal amplitude changes over time, the optimal noise level can be estimated and adjusted in a feedback loop from a resonance curve. If the probability density of the input signal levels is known, the resonance curve can be obtained from a variety of objective functions. For unknown input signals, however, the only applicable objective function is the autocorrelation of the sensor output.

A widely used measure (in the following referred to as ‘objective function’) for quantifying the information throughput is the mutual information (MI, cf. Methods) between the input signal and the sensor output3, 12, 14, a statistical quantity that measures the mutual dependence of these two variables16. It has been shown previously that the MI has a maximum at a well-defined, optimal intensity of the added noise12.

To calculate the MI (or other objective functions such as the signal-to-noise ratio, or the cross-correlation between input and output signal), the input signal must be known2,3,4, 6, 10,11,12,13, 17, but this is often not the case. Here, we consider the practically important case of unknown and not necessarily periodic input signals. Although finding the optimal level of noise becomes less important with arrays of transducers17, it still remains an unsolved problem for single detector systems.

In this letter we show that this fundamental limitation of adaptive SR can be overcome by another objective function, namely the autocorrelation (AC, cf. Methods) of the detector output. Maximizing the output AC leads to similar or identical estimates of optimal noise intensities for SR as estimates based on established objective functions, yet with the decisive advantage that no knowledge of the input signal is required (Fig. 1).


To demonstrate the equivalence of the mutual information (cf. Methods) between input and output and the autocorrelation of the output (cf. Methods) for finding the optimal noise level, we consider a so-called discrete-symmetric model with a symmetric threshold |θ| = 1.1 and a discrete output y_t ∈ {−1.0, 0.0, 1.0}, i.e. the sensor output is zero for a sensor input −θ < s_t < θ, −1.0 for s_t < −θ, and 1.0 for s_t > θ. As a discrete input signal we generate a correlated bipolar string s_t ∈ {−1.0, 1.0} with the probability of successive values being identical, p(s_t = s_{t+1}) = 0.7 (Fig. 2 top). Without added noise, the input signal alone never passes the thresholds.
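This setup can be sketched in a few lines of Python (our own illustrative code, not from the paper; NumPy's default random generator and the example noise level of 0.5 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def bipolar_string(n, p_same=0.7):
    """Correlated bipolar string s_t in {-1, +1} with P(s_t = s_{t+1}) = p_same."""
    s = np.empty(n)
    s[0] = rng.choice([-1.0, 1.0])
    for t in range(1, n):
        s[t] = s[t - 1] if rng.random() < p_same else -s[t - 1]
    return s

def discrete_symmetric_sensor(x, theta=1.1):
    """Output -1, 0, or +1 depending on which threshold the input crosses."""
    return np.where(x > theta, 1.0, np.where(x < -theta, -1.0, 0.0))

s = bipolar_string(10_000)
# Without noise, the sub-threshold signal (|s_t| = 1.0 < theta) never crosses:
y_silent = discrete_symmetric_sensor(s)
# With added Gaussian noise, it occasionally does:
y_noisy = discrete_symmetric_sensor(s + rng.normal(0.0, 0.5, size=s.shape))
```

With zero noise the output stays identically zero, while the noisy version produces non-zero outputs that partially track the input sign.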

Figure 2

Results of analytical model and numerical simulations. Top: Example of a discrete signal consisting of a correlated bipolar string with p(s_t = s_{t+1}) = 0.7. Bottom: Comparison of different objective functions versus noise intensity for correlated bipolar strings with symmetric threshold and a signal-to-threshold distance of 0.1. Analytical and numerical solutions coincide, and all objective functions show a maximum at the same noise level. MI = mutual information, AC = autocorrelation of the output signal.

Furthermore, we introduce the concept of the success probability (cf. Supplements): the conditional probability that, given a certain input, the correct corresponding output is produced. For a discrete-symmetric sensor with correlated bipolar input, we prove analytically (cf. Supplements) that the success probability as a function of noise intensity has a well-defined peak, indicating the existence of an optimal level of noise for SR. We also prove that the mutual information and the autocorrelation of the output can be expressed as strictly monotonic functions of the success probability (cf. Supplements). Hence, both the mutual information and the autocorrelation of the output exhibit their maximum at the same level of noise. We numerically tested our approach and confirmed that both the mutual information and the autocorrelation of the output yield similar optimal noise levels (Fig. 2 bottom). In addition, we applied our analytical model to the important case of soft thresholds18 and non-Gaussian noise and show that this only changes the success probability Q, whereas the MI and AC remain monotonic functions of Q and, hence, peak at the same level of noise (cf. Supplements).
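The numerical test can be reproduced in outline as follows (an illustrative Python sketch, not the authors' simulation code; the signal length, the noise grid, and the use of only the lag-1 output autocorrelation are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta = 50_000, 1.1

# Correlated bipolar string with P(s_t = s_{t+1}) = 0.7: a sign flip
# occurs at each step with probability 0.3.
flips = rng.random(n) >= 0.7
s = np.cumprod(np.where(flips, -1.0, 1.0)) * rng.choice([-1.0, 1.0])

def sensor(x):
    """Discrete-symmetric sensor with thresholds at +/- theta."""
    return np.where(x > theta, 1.0, np.where(x < -theta, -1.0, 0.0))

def lag1_autocorr(y):
    """Lag-1 autocorrelation coefficient of the output."""
    y = y - y.mean()
    denom = np.mean(y * y)
    return 0.0 if denom == 0 else float(np.mean(y[1:] * y[:-1]) / denom)

# Sweep the noise intensity and locate the maximum of the output AC.
sigmas = np.linspace(0.05, 3.0, 30)
ac = [lag1_autocorr(sensor(s + rng.normal(0.0, sig, n))) for sig in sigmas]
sigma_opt = float(sigmas[int(np.argmax(ac))])
```

The resulting curve has an interior maximum: too little noise produces almost no output, too much noise decorrelates it, and an intermediate level maximizes the output autocorrelation.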

To generalize our finding and demonstrate that both approaches for quantifying the information throughput give similar optimal noise intensities, we simulated a variety of models comprising different discrete and continuous input signals, different types of detector models and different signal-to-threshold distances (the difference between the temporally averaged sub-threshold signal and the threshold).

As detectors, we tested the discrete-symmetric model described above (Fig. 3d), a discrete-asymmetric (Fig. 3c), a continuous-asymmetric (Fig. 3a) and a continuous-symmetric (Fig. 3b) detector model (cf. Methods). These detectors are all memoryless, i.e. their output at a certain time depends only on their input at that time, but neither on their internal state nor on past inputs. We also tested a model with memory, implementing the biologically important leaky integrate-and-fire neuron model (cf. Methods).

Figure 3

Detector models. Sketch demonstrating the response of different sensor types to a sinusoidal input signal (red line). We compare the four cases of continuous (a,b) or discrete (c,d) output (blue lines or dots, respectively), with asymmetric (a,c) or symmetric (b,d) thresholds (green lines).

As discrete aperiodic input signals, we used the correlated bipolar string described above (Fig. 2 top). In addition, we tested different time series of continuous aperiodic signals, namely the time series x(t) of the 3-dimensional Roessler attractor (Fig. 4a top, cf. Methods), an Ornstein-Uhlenbeck process (Fig. 4b top, cf. Methods), and wave files of recorded speech (Fig. 4c top). Furthermore, a sine wave with constant frequency and amplitude was used as a continuous periodic input (Fig. 4d top).

Figure 4

Performance of the output autocorrelation method. Comparison of resonance curves using the mutual information (red) and the output autocorrelation (blue) for different continuous signals. (a) Projection of the 3-dimensional Roessler attractor in one dimension, asymmetric threshold. (b) Ornstein-Uhlenbeck process, symmetric threshold. (c) Wave files of recorded speech, music and natural sounds, symmetric threshold. The biologically important case of the leaky integrate-and-fire model is shown in (d) with sine wave with constant frequency and amplitude as input. The signal-to-threshold distance was 0.1 for all cases.

Four sample resonance curves (i.e. the mutual information and the autocorrelation of the output as functions of noise intensity) are shown in Fig. 4. In all cases the mean signal-to-threshold distance in arbitrary units was set to 0.1. Although the resonance curves from the mutual information (red) and the autocorrelation of the output (blue) are different, they peak at identical noise intensities.

We then tested all combinations of the five different detector models (continuous-symmetric, continuous-asymmetric, discrete-symmetric, discrete-asymmetric, leaky integrate-and-fire) and the three input signal types (correlated bipolar string, Roessler attractor, sine wave) with twenty different signal-to-threshold distances (from 0.0 to 1.0 in steps of 0.05, arbitrary units) (Fig. 5). For each combination, ten trials were performed with different seeds for the random number generator, resulting in a total of 3000 simulations. For each simulation, the optimal noise intensities according to the mutual information (x-axes) and the autocorrelation of the output (y-axes) were evaluated and averaged over the ten trials per combination. Almost all tested combinations yielded nearly identical estimates of the optimal noise intensity. The total correlation coefficient between the optimal noise intensities based on mutual information versus autocorrelation was r = 0.96, with a mean squared error of e = 0.026. Except for the combination of a correlated bipolar string with the leaky integrate-and-fire neuron, all combinations yielded correlation coefficients from 0.925 to 0.999 with mean squared errors from 0.010 to 0.042, further demonstrating the equivalence of the two measures.

Figure 5

Correlation between optimal noise levels from mutual information versus output autocorrelation. Numerical data from 5 sensor models, 3 different input signals, and different signal-to-threshold distances. The optimal noise intensities found by maximizing the mutual information (x-axes) are plotted versus the optimal noise intensities found by maximizing the output autocorrelation (y-axes). Except for correlated bipolar strings in combination with a leaky integrate-and-fire sensor, the optimal noise levels from both methods are highly correlated, with an overall correlation coefficient of r = 0.960 and a mean squared error of e = 0.027.


We have shown that the autocorrelation of the output signal can serve as a universal objective function to estimate the optimal level of noise in SR-based sensory systems. This new approach allows, for the first time, the technical implementation of adaptive SR in situations where the information content of the input signal is unknown or highly variable. Once the signal is optimally transmitted by the sensor, the (noisy) output signal can be further improved by several techniques, such as kernel density estimation19.
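A minimal sketch of how such an adaptive feedback loop might be implemented follows (our own hypothetical hill-climbing update rule, not the authors' implementation; the block length, multiplicative step factor and initial noise level are arbitrary example values):

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 1.1

def sensor(x):
    """Discrete-symmetric sensor with thresholds at +/- theta."""
    return np.where(x > theta, 1.0, np.where(x < -theta, -1.0, 0.0))

def lag1_autocorr(y):
    """Lag-1 autocorrelation coefficient of the sensor output."""
    y = y - y.mean()
    v = np.mean(y * y)
    return 0.0 if v == 0 else float(np.mean(y[1:] * y[:-1]) / v)

def adapt_noise(signal_blocks, sigma=0.2, step=1.1):
    """Hill-climb the noise level sigma, block by block: keep moving sigma in
    the current direction while the output autocorrelation improves, and
    reverse direction when it gets worse."""
    last_ac, direction = -np.inf, 1.0
    for block in signal_blocks:
        y = sensor(block + rng.normal(0.0, sigma, block.shape))
        ac = lag1_autocorr(y)
        if ac < last_ac:               # objective got worse: reverse direction
            direction = -direction
        last_ac = ac
        sigma = max(1e-3, sigma * step ** direction)
    return sigma
```

Because only the sensor output enters the update, this loop needs no knowledge of the input signal and keeps tracking the optimum if the input statistics drift.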

Several interesting phenomena are closely related to stochastic resonance. A well-known technical system to reduce the effect of noise during analog-to-digital conversion is the Schmitt trigger20, in which two different analog thresholds are defined for the rising and falling edges of the discretized signal. For the case of a periodic input signal and colored Gaussian noise, the occurrence of SR in this system has been demonstrated analytically21.

In dithering22, noise is added to audio or image signals in order to reduce the perception of quantization errors and the appearance of spurious large-scale patterns that can result from analog-to-digital conversion. Such spurious large scale patterns are associated with positive autocorrelations in the digitized signal. Therefore, although the aim in dithering is not to increase information transmission, the subjective perception of the signal by a human recipient may be improved by adding noise that in this case reduces such positive autocorrelations.

Related to this is the phenomenon of ghost stochastic resonance, which emerges when nonlinear systems are driven by periodic signals with more than one frequency in the presence of noise. An example of this effect is the auditory perception of ‘ghost’ frequencies that are not present in the acoustic pressure wave. The uncovering of this phenomenon helped to understand the perception of pitch in complex sound signals and the so-called ‘missing fundamental illusion’23, 24. Interestingly, when noise of increasing intensity is added to sound, the emergence of a ghost resonance coincides with a maximum of the output autocorrelation23, 24. This, in turn, is closely linked to the effect of coherence resonance25, whereby a non-linear excitable system is driven by noise, resulting in a maximally autocorrelated system output at a certain optimum noise intensity.

Although SR can explain a number of adaptive processes in neural systems, in particular in auditory and visual signal processing26, how exactly it may be implemented is a matter of debate. Even for known input signals, the use of the mutual information or related approaches seems daunting for a biological system, since calculating them requires mathematical operations that are hard to implement in neuronal networks. By contrast, the autocorrelation function can be easily implemented with neuronal networks using delay lines and coincidence detectors, as proposed in ref. 27 and verified, for example, in the nucleus laminaris of birds (barn owls), where they serve to code interaural time differences28. More recent work has shown that a surrogate of the autocorrelation function can even be computed by a single noisy neuron23, 24. Therefore, adaptive SR based on output autocorrelation may be a major processing principle in neural sensory systems and perception, and may also be responsible for pathological conditions including neuropathic pain or tinnitus26.


Mutual Information

The mutual information I(S; Y) quantifies the mutual dependence of two random variables S and Y16. It measures how different the joint distribution p(s, y) is from the product of the marginal distributions p(s)p(y):

$$I(S;Y)=\sum _{s}\sum _{y}\,p(s,y)\,{\mathrm{log}}_{2}(\frac{p(s,y)}{p(s)p(y)}).$$

For continuous random variables, the summation is replaced by a double integral:

$$I(S;Y)={\int }_{S}{\int }_{Y}p(s,y){\mathrm{log}}_{2}(\frac{p(s,y)}{p(s)\,p(y)})\,dy\,ds,$$

where p(s, y) is now the joint probability density function of S and Y, and p(s) and p(y) are the marginal probability density functions. The natural unit of I(S; Y) is bits; in some cases, however, it is more convenient to divide the total mutual information by the time, or by the number of spikes within the observed spike train, and thus obtain mutual information rates R(S; Y) measured in bits s−1 or bits spike−1. The choice of the mutual information as an objective function for adaptive stochastic resonance is natural, because the fundamental purpose of any sensor is to transmit information into a subsequent information processing system. Indeed, it has been shown by several authors3, 12, 14 that, within the context of stochastic resonance, I(S; Y) as a function of the variance σ² of the added noise has a maximum that indicates the optimal level of noise.
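For discrete-valued samples, I(S; Y) can be estimated directly from the empirical joint histogram, for example as follows (illustrative Python, not the authors' code; the plug-in histogram estimator is a standard but slightly biased choice):

```python
import numpy as np

def mutual_information(s, y):
    """Plug-in estimate of I(S;Y) in bits from paired discrete-valued samples."""
    s_vals, s_idx = np.unique(s, return_inverse=True)
    y_vals, y_idx = np.unique(y, return_inverse=True)
    # Empirical joint distribution p(s, y) from co-occurrence counts.
    joint = np.zeros((len(s_vals), len(y_vals)))
    np.add.at(joint, (s_idx, y_idx), 1.0)
    p_sy = joint / joint.sum()
    # Marginals p(s) and p(y).
    p_s = p_sy.sum(axis=1, keepdims=True)
    p_y = p_sy.sum(axis=0, keepdims=True)
    # Sum p(s,y) * log2(p(s,y) / (p(s) p(y))) over cells with p(s,y) > 0.
    nz = p_sy > 0
    return float(np.sum(p_sy[nz] * np.log2(p_sy[nz] / (p_s @ p_y)[nz])))
```

For a binary variable paired with itself this returns approximately 1 bit, while two independent variables give a value near zero.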

Output autocorrelation

The output autocorrelation function of the time lag τ (more precisely, the autocorrelation coefficient29) is defined as

$${C}_{yy}(\tau )=\frac{{\langle ({y}_{t+\tau }-\bar{y})({y}_{t}-\bar{y})\rangle }_{t}}{{\langle {({y}_{t}-\bar{y})}^{2}\rangle }_{t}},$$

where \(\bar{y}\) is the mean and \({\langle \cdot \rangle }_{t}\) indicates averaging over time. We note that for most applications (and discrete time steps) it is sufficient to consider only one time lag, i.e. τ = 1. However, for more complex signals, e.g. streams of n-bit words, it may be beneficial to calculate C_yy(τ) for a number of different time lags. In order to derive a single objective value in the case of multiple time lags τ, the root mean square (RMS29) of the autocorrelation function

$$\,{\rm{R}}{\rm{M}}{\rm{S}}\,({C}_{yy})=\sqrt{\frac{1}{{N}_{\tau }}\sum _{\tau }{({C}_{yy}(\tau ))}^{2}}$$

is calculated, where N_τ is the total number of different time lags.
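Both quantities are straightforward to compute; a possible NumPy implementation (ours, for illustration) is:

```python
import numpy as np

def autocorr_coeff(y, tau):
    """Autocorrelation coefficient C_yy(tau) of the output signal y."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    denom = np.mean(y * y)
    if denom == 0.0:
        return 0.0          # constant signal: define C_yy as 0
    if tau == 0:
        return 1.0          # zero lag is normalized to 1 by definition
    return float(np.mean(y[tau:] * y[:-tau]) / denom)

def rms_autocorr(y, taus):
    """Root mean square of C_yy over the given time lags."""
    c = np.array([autocorr_coeff(y, t) for t in taus])
    return float(np.sqrt(np.mean(c ** 2)))
```

For an alternating sequence +1, −1, +1, −1, … this yields C_yy(1) = −1 and C_yy(2) = +1, so the RMS over both lags is 1.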

Input signals for numerical simulations

Both synthetically generated and natural signals were used in the numerical simulations. As an example of a discrete input signal, a correlated bipolar string s_t ∈ {−1, +1} was generated, in which the probability of successive values being identical was p(s_t = s_{t−1}) = 0.7. As examples of continuous signals we used: first, a sine wave with constant frequency and amplitude; second, an aperiodic time series derived from the variable x(t) of the Roessler attractor30

$$\begin{array}{rcl}\dot{x} & = & -(y+z)\\ \dot{y} & = & x+ay\\ \dot{z} & = & b+(x-c)z,\end{array}$$

with parameters a = 0.15, b = 0.2 and c = 7.1; third, the aperiodic random Ornstein-Uhlenbeck process31 \(\dot{x}=-\,\frac{1}{\tau }x+\epsilon \,\xi (t)\), where ξ is an independent normally distributed random variable, τ is the correlation time and \(\epsilon \) is the noise amplitude; fourth, wave files of speech, music and natural sounds. All synthetically generated signals were computed by numerically integrating the corresponding differential equations with a fourth-order Runge-Kutta method.
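A minimal sketch of such an integration for the Roessler system (illustrative Python; the step size and initial condition are arbitrary choices, and the parameter values are those given above):

```python
import numpy as np

def roessler_deriv(v, a=0.15, b=0.2, c=7.1):
    """Right-hand side of the Roessler system for state v = (x, y, z)."""
    x, y, z = v
    return np.array([-(y + z), x + a * y, b + (x - c) * z])

def rk4_step(f, v, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(v)
    k2 = f(v + 0.5 * dt * k1)
    k3 = f(v + 0.5 * dt * k2)
    k4 = f(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def roessler_x(n, dt=0.05, v0=(1.0, 1.0, 1.0)):
    """Time series of the x-coordinate, used as an aperiodic input signal."""
    v = np.array(v0, dtype=float)
    xs = np.empty(n)
    for i in range(n):
        xs[i] = v[0]
        v = rk4_step(roessler_deriv, v, dt)
    return xs
```

The resulting x(t) trace oscillates aperiodically and, after rescaling to a sub-threshold amplitude, serves as a continuous aperiodic test input.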

Sensor models for numerical simulations

Four different memoryless sensor models were implemented by combining symmetric and asymmetric thresholds with discrete and continuous sensor output functions (Fig. 3). In the symmetric models, there exist two thresholds +θ and −θ. Without added noise, the sensor output is zero for |s_t| < θ, s_t − θ for s_t > θ, and s_t + θ for s_t < −θ. In the asymmetric models, there exists only a single positive threshold θ > 0. Here, without added noise, the sensor output is zero for s_t < θ and s_t − θ for s_t ≥ θ. We note that the analytical model described above belongs to the class of discrete-symmetric models. As an example of a detector with memory, where the output depends not only on the momentary input and added noise but also on earlier internal states of the detector itself, we chose the leaky integrate-and-fire neuron model32, 33 with

$$\dot{x}=-\,\frac{1}{{\tau }_{m}}x+{s}_{t}$$

where x is the membrane potential, τ_m the membrane time constant and s_t the input signal. If x crosses the threshold θ from below, an output spike is generated and the membrane potential is set to the resting potential x_r, which is chosen to be zero for simplicity.
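A discrete-time sketch of this model (illustrative Python using a simple Euler step; the threshold, time constant, and example input are arbitrary values chosen so that the drive alone stays sub-threshold):

```python
import numpy as np

def lif_output(s, theta=1.0, tau_m=10.0, dt=1.0):
    """Leaky integrate-and-fire: integrate the input with a leak term; emit a
    spike (output 1) and reset to the resting potential x_r = 0 whenever the
    membrane potential crosses the threshold theta from below."""
    x, spikes = 0.0, np.zeros(len(s))
    for t, inp in enumerate(s):
        x += dt * (-x / tau_m + inp)   # Euler step of dx/dt = -x/tau_m + s_t
        if x >= theta:
            spikes[t] = 1.0
            x = 0.0                    # reset to resting potential x_r = 0
    return spikes

rng = np.random.default_rng(3)
t = np.arange(5000)
sub = 0.04 * (1.0 + np.sin(2 * np.pi * t / 200))         # sub-threshold drive
quiet = lif_output(sub)                                   # no spikes without noise
noisy = lif_output(sub + rng.normal(0.0, 0.05, t.size))   # noise evokes spikes
```

Without noise the membrane potential saturates below threshold and the output is silent; with added noise the neuron fires preferentially near the peaks of the sub-threshold drive, which is the SR effect exploited in Fig. 4d.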


References

1. Benzi, R., Sutera, A. & Vulpiani, A. The mechanism of stochastic resonance. Journal of Physics A: Mathematical and General 14, L453 (1981).

2. Collins, J., Imhoff, T. & Grigg, P. Noise-enhanced information transmission in rat SA1 cutaneous mechanoreceptors via aperiodic stochastic resonance. Journal of Neurophysiology 76, 642–645 (1996).

3. Levin, J. E. & Miller, J. P. Broadband neural encoding in the cricket cercal sensory system enhanced by stochastic resonance. Nature 380, 165–168 (1996).

4. Gammaitoni, L., Hänggi, P., Jung, P. & Marchesoni, F. Stochastic resonance. Reviews of Modern Physics 70, 223 (1998).

5. Wiesenfeld, K. & Jaramillo, F. Minireview of stochastic resonance. Chaos: An Interdisciplinary Journal of Nonlinear Science 8, 539–548 (1998).

6. Spezia, S. et al. Evidence of stochastic resonance in the mating behavior of Nezara viridula (L.). The European Physical Journal B 65, 453–458 (2008).

7. Hänggi, P. Stochastic resonance in biology: how noise can enhance detection of weak signals and help improve biological information processing. ChemPhysChem 3, 285–290 (2002).

8. Faisal, A. A., Selen, L. P. & Wolpert, D. M. Noise in the nervous system. Nature Reviews Neuroscience 9, 292–303 (2008).

9. Mino, H. The effects of spontaneous random activity on information transmission in an auditory brain stem neuron model. Entropy 16, 6654–6666 (2014).

10. Douglass, J. K., Wilkens, L., Pantazelou, E. & Moss, F. Noise enhancement of information transfer in crayfish mechanoreceptors by stochastic resonance. Nature 365, 337–340 (1993).

11. Wiesenfeld, K. & Moss, F. Stochastic resonance and the benefits of noise: from ice ages to crayfish and SQUIDs. Nature 373, 33–36 (1995).

12. Moss, F., Ward, L. M. & Sannita, W. G. Stochastic resonance and sensory information processing: a tutorial and review of application. Clinical Neurophysiology 115, 267–281 (2004).

13. Mitaim, S. & Kosko, B. Adaptive stochastic resonance. Proceedings of the IEEE 86, 2152–2183 (1998).

14. Mitaim, S. & Kosko, B. Adaptive stochastic resonance in noisy neurons based on mutual information. IEEE Transactions on Neural Networks 15, 1526–1540 (2004).

15. Wenning, G. & Obermayer, K. Activity driven adaptive stochastic resonance. Physical Review Letters 90, 120602 (2003).

16. MacKay, D. J. Information Theory, Inference and Learning Algorithms (Cambridge University Press, 2003).

17. Collins, J., Chow, C. C. & Imhoff, T. T. Stochastic resonance without tuning. Nature 376, 236–238 (1995).

18. Greenwood, P. E., Müller, U. U. & Ward, L. M. Soft threshold stochastic resonance. Physical Review E 70, 051110 (2004).

19. Müller, U. U. & Ward, L. M. Stochastic resonance in a statistical model of a time-integrating detector. Physical Review E 61, 4286 (2000).

20. Schmitt, O. H. A thermionic trigger. Journal of Scientific Instruments 15, 24 (1938).

21. Melnikov, V. I. Schmitt trigger: a solvable model of stochastic resonance. Physical Review E 48, 2481–2489 (1993).

22. Pohlmann, K. C. Principles of Digital Audio (2005).

23. Chialvo, D. R. How we hear what is not there: a neural mechanism for the missing fundamental illusion. Chaos: An Interdisciplinary Journal of Nonlinear Science 13, 1226–1230 (2003).

24. Balenzuela, P., Braun, H. & Chialvo, D. R. The ghost of stochastic resonance: an introductory review. Contemporary Physics 53, 17–38 (2012).

25. Pikovsky, A. S. & Kurths, J. Coherence resonance in a noise-driven excitable system. Physical Review Letters 78, 775 (1997).

26. Krauss, P. et al. Stochastic resonance controlled upregulation of internal noise after hearing loss as a putative cause of tinnitus-related neuronal hyperactivity. Frontiers in Neuroscience 10 (2016).

27. Licklider, J. A duplex theory of pitch perception. The Journal of the Acoustical Society of America 23, 147 (1951).

28. Sullivan, W. & Konishi, M. Neural map of interaural phase difference in the owl's brainstem. Proceedings of the National Academy of Sciences 83, 8400–8404 (1986).

29. Dunn, P. F. Measurement and Data Analysis for Engineering and Science (2005).

30. Rössler, O. E. An equation for continuous chaos. Physics Letters A 57, 397–398 (1976).

31. Uhlenbeck, G. E. & Ornstein, L. S. On the theory of the Brownian motion. Physical Review 36, 823–841 (1930).

32. Lapicque, L. Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation. J Physiol Pathol Gen 9, 620–635 (1907).

33. Burkitt, A. N. A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biological Cybernetics 95, 1–19 (2006).



Acknowledgements

We acknowledge support by Deutsche Forschungsgemeinschaft and Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) within the funding programme Open Access Publishing.

Author information




P.K. and C.M. performed the calculations and computer simulations. P.K., C.M., K.T. and H.S. discussed the theoretical background. P.K., A.S. and C.S. prepared the figures. P.K., K.T., C.M., B.F. and H.S. wrote the manuscript.

Corresponding author

Correspondence to Holger Schulze.

Ethics declarations

Competing Interests

The authors declare that they have no competing interests.

Additional information

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit


About this article


Cite this article

Krauss, P., Metzner, C., Schilling, A. et al. Adaptive stochastic resonance for unknown and variable input signals. Sci Rep 7, 2450 (2017).
