Abstract
Physical reservoir computing utilizes a physical system as a computational resource. This nontraditional computing technique can be computationally powerful, without the need of costly training. Here, a Hopf oscillator is implemented as a reservoir computer by using a nodebased architecture; however, this implementation does not use delayed feedback lines. This reservoir computer is still powerful, but it is considerably simpler and cheaper to implement as a physical Hopf oscillator. A nonperiodic stochastic masking procedure is applied for this reservoir computer following the time multiplexing method. Due to the presence of noise, the Euler–Maruyama method is used to simulate the resulting stochastic differential equations that represent this reservoir computer. An analog electrical circuit is built to implement this Hopf oscillator reservoir computer experimentally. The information processing capability was tested numerically and experimentally by performing logical tasks, emulation tasks, and time series prediction tasks. This reservoir computer has several attractive features, including a simple design that is easy to implement, noise robustness, and a high computational ability for many different benchmark tasks. Since limit cycle oscillators model many physical systems, this architecture could be relatively easily applied in many contexts.
Introduction
Reservoir computing (RC) is a bio-inspired, supervised machine-learning computational framework based on artificial recurrent neural networks (RNNs), which utilizes naturally emergent dynamics of a physical resource^{1,2,3,4,5,6}. Conventional machine learning schemes use backpropagation through time^{7} to train an entire recurrent neural network. This method is computationally expensive, since all the weights of the network need to be updated to mimic a target function. Echo state networks^{8} and liquid state machines^{9} are two concepts that addressed this issue in the early 2000s. Reservoir computing merges these concepts. In reservoir computing, the neural network is formed from a set of coupled nonlinear nodes, where the network is divided into three parts: an input layer, the reservoir, and the readout layer. Unlike conventional RNNs, only the readout layer requires training by a simpler training algorithm, such as linear or ridge regression^{10}. Thus, the RC architecture is much faster and more stable than conventional RNN methods, which is one of the key advantages of this information processing framework.
There are many real-world applications of reservoir computing, including bitwise logical operations^{11,12,13}, speech recognition^{6}, handwritten digit recognition^{14}, wireless communications^{1}, complex and chaotic time series predictions^{1,6,15,16,17,18}, image recognition^{19}, emulation of nonlinear time series^{4,10}, and morphological computation^{20,21}. The echo state architecture of a reservoir allows the use of physical systems as reservoir computers, also known as physical reservoir computers (PRCs). Many physical systems have been shown to perform as PRCs, including an array of nonlinear mechanical oscillators^{11,22,23}, soft robotic bodies^{20,24,25,26}, tensegrity structures^{21,27}, and origami structures^{28,29}.
Importantly, quantum systems can be used as PRCs. The natural disordered quantum dynamics of an ensemble system was utilized to emulate nonlinear time series, including a chaotic system^{30}. A Kerr nonlinear oscillator was used in sine wave phase estimation using its complex amplitudes as computational nodes^{31}. A nuclear-magnetic-resonance spin-ensemble system was used for a nonlinear dynamics emulation task by implementing a spatial multiplexing approach to increase computational power^{32}. Dissipative quantum dynamics was used to build a quantum reservoir computer (QRC) for nonlinear temporal tasks^{33}.
Physical reservoir computers were initially constructed from only the coupled, real dynamic nodes. Later, a virtual node-based reservoir computing method was proposed by implementing a time multiplexing approach in which a delayed feedback was used as a single nonlinear dynamic node to perform computation^{6}. This method simplifies the complexity of a reservoir built from an array of physical nonlinear nodes. This approach has been popularly used to construct physical reservoir computers for different tasks, such as an optoelectronic oscillator for optical information processing^{34}, a photonics-based passive linear fiber reservoir for signal processing^{35}, an FPGA implementation using a single autonomous Boolean logic element for pattern recognition^{5}, time-delay reservoirs for forecasting of stochastic nonlinear time series^{36}, a delayed Duffing silicon beam for parity tasks^{12}, and a semiconductor laser with delayed optical feedback for nonlinear time series prediction^{37}. These reservoirs used a delay line to create the necessary nodes for computation. A simpler approach can be taken by creating the nodes without the presence of any delay or feedback line^{38}. This approach is studied less, though it makes the reservoir architecture much simpler.
Here, a Hopf oscillator is used as a physical reservoir. The Hopf oscillator can also be used as the building block for adaptive oscillators^{39,40}, which can natively learn information without any training. The Hopf oscillator can exhibit limit cycle motion, which provides a source of memory by storing information in its dynamic states. Although a binary periodic masking function is popularly used for a time-multiplexed reservoir^{6,10,12}, noise can also be used as a mask^{41}. In this paper, a Hopf oscillator PRC is constructed that uses a nonperiodic stochastic mask. A Hopf oscillator physical reservoir computer is fabricated as an analog circuit, which is compared with Euler–Maruyama simulations^{40,42,43}. This Hopf PRC can successfully complete benchmark machine learning tasks, including parity tasks, fundamental logic gate tasks^{12}, nonlinear dynamic emulation tasks^{4}, and various time series prediction tasks^{44}. The information rate is used as the performance metric for logical tasks^{11}, and the normalized mean square error (NMSE) is used for the emulation and time series tasks^{4}.
The rest of the article is organized as follows. In “System equations for Hopf physical reservoir computer” section, the equations of motion for the stochastic Hopf oscillator PRC are presented. In “Mapping methodology” section, the methodology of mapping the oscillator’s dynamics to an information processing scheme is discussed for an example task by using the Euler–Maruyama simulation. The effects of the pseudoperiod and the noise on computational ability are discussed in “Pseudoperiod and noise” section. In “Analog circuit experiment” section, the analog circuit experiment is described. In “Benchmark tasks for Hopf PRC” section, different benchmark tasks are performed with the numerical and experimental Hopf PRC, which includes logic tasks, emulation tasks of time series, and prediction tasks. The concluding remarks are stated in “Concluding remarks” section.
System equations for Hopf physical reservoir computer
The equations of motion for the Hopf oscillator are^{45}:
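In the standard forced Hopf normal form (a reconstruction; the paper's exact parameterization may differ slightly), these equations read:

```latex
\begin{aligned}
\dot{x} &= \left(\mu - (x^{2} + y^{2})\right)x - \omega_{0}\, y + A \sin(\Omega t + \phi),\\
\dot{y} &= \left(\mu - (x^{2} + y^{2})\right)y + \omega_{0}\, x.
\end{aligned}
```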
For this Hopf oscillator, x and y are the first and second states, respectively, and the sinusoidal forcing is given by \(A \sin (\Omega t+\phi )\). A list of the parameters is given in Table 1. The information is first encoded as an input, u(t), which will depend on the benchmark task being performed. The mask is defined by white Gaussian noise as:
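Given the quantities defined below, the mask presumably takes the linear form:

```latex
m(t) = \sigma\, \dot{W}(t) + \beta .
```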
Here, \(\sigma \) is the noise amplitude, \({\dot{W}}\) is white Gaussian noise, and \(\beta \) is a positive bias. It should be noted that \({\dot{W}}\) does not exist, but its differential form, dW, does^{46}.
To send information to the PRC to be processed, an external forcing function that contains the information signal, u(t), and the stochastic mask, m(t), is constructed as:
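A natural construction, assuming the usual multiplicative time-multiplexing scheme, is the product of the mask and the input:

```latex
F(t) = m(t)\, u(t).
```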
This external forcing function is injected into both the amplitude of the sinusoidal forcing, A, and the parameter affecting the limit cycle radius, \(\mu \). Including this force, the equations for the Hopf PRC are written as:
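Assuming the forcing \(F(t) = m(t)u(t)\) enters additively into both \(\mu \) and \(A\), a plausible form of Eq. (4) is:

```latex
\begin{aligned}
\dot{x} &= \left(\mu + F(t) - (x^{2} + y^{2})\right)x - \omega_{0}\, y + \left(A + F(t)\right)\sin(\Omega t + \phi),\\
\dot{y} &= \left(\mu + F(t) - (x^{2} + y^{2})\right)y + \omega_{0}\, x.
\end{aligned}
```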
Mapping methodology
To use the dynamics of the Hopf oscillator as a physical reservoir computer, the dynamics must first be mapped. To describe this mapping, an exclusive OR (XOR) logical task is used as an example. In this section, the Hopf PRC is simulated using an Euler–Maruyama scheme, since the mask is stochastic^{42}. Shannon’s information metric is used to quantify the performance of the reservoir when performing logical tasks, such as the XOR operation^{11,43}.
For this task, the binary “false” and “true” values are encoded as discrete negative ones and positive ones, respectively, in a discrete signal, r(z). r(z) is defined such that \(z \in {\mathbf {Z}}^+\) and \(r(z) \in \{-1,+1\}\), which is depicted in Fig. 1a. To input this into a continuous dynamical system, these values are first mapped to a continuous input function, u(t), as follows:
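One plausible form of this mapping holds each bit for one pseudoperiod:

```latex
u(t) = r(z) \quad \text{for} \quad (z-1)\,T_p \le t < z\, T_p .
```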
This function is depicted in Fig. 1b. \(T_p\) is a constant pseudoperiod, in which u(t) does not change its value. Thus, for the XOR logical task, the input function, \(u(t) \in \{-1,+1\}\), is a random square wave with a pseudoperiod, \(T_p\). This implies that each of the “true” (e.g., +1) or “false” (e.g., −1) values affects the system for an amount of time, \(T_p\). The mask function, m(t), is depicted in Fig. 1c.
The Hopf PRC system described in Eq. (4) is numerically integrated using the Euler–Maruyama (EM) method, since the PRC is stochastic^{42,47}. For these simulations, the integration time step, \(dt=10^{-5}\) seconds, the total simulation time in this case was \(3000 T_p=300\) seconds, and \(T_p=0.1\) seconds. This example simulation is shown in Fig. 1.
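As a sketch of this integration scheme, the following Python routine applies the Euler–Maruyama method to the stochastically masked Hopf equations. The injection of the masked input into both \(\mu \) and \(A\), and the default parameter values, are assumptions based on the description above:

```python
import numpy as np

def simulate_hopf_prc(u_bits, T_p=0.1, dt=1e-4, mu=1.0, A=1.0,
                      omega0=2*np.pi/0.1, Omega=2*np.pi/0.1, phi=0.0,
                      sigma=0.1, beta=1.0, seed=0):
    """Euler-Maruyama integration of a stochastically masked Hopf PRC.

    u_bits : one input value per pseudoperiod (e.g., +/-1 bits).
    Returns the time history of the x state.
    """
    rng = np.random.default_rng(seed)
    steps_per_tp = int(round(T_p / dt))
    n_steps = steps_per_tp * len(u_bits)
    x, y = 1.0, 0.0
    xs = np.empty(n_steps)
    sqrt_dt = np.sqrt(dt)
    for k in range(n_steps):
        t = k * dt
        uk = u_bits[k // steps_per_tp]
        # masked forcing increment: F(t) dt = u(t) * (sigma dW + beta dt)
        dW = sqrt_dt * rng.standard_normal()
        F_dt = uk * (sigma * dW + beta * dt)
        r2 = x * x + y * y
        sin_f = np.sin(Omega * t + phi)
        # deterministic Hopf drift plus F injected into mu and A
        dx = ((mu - r2) * x - omega0 * y + A * sin_f) * dt + (x + sin_f) * F_dt
        dy = ((mu - r2) * y + omega0 * x) * dt + y * F_dt
        x, y = x + dx, y + dy
        xs[k] = x
    return xs
```

A coarser time step is used in this sketch than in the paper's simulations to keep it quick to run; the scheme is otherwise the standard strong order 0.5 Euler–Maruyama update.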
The time history of the x state obtained from the simulation is depicted in Fig. 1d. Next, x(t) is rescaled by subtracting the mean, \(\mu _x\), and dividing by the standard deviation, \(\sigma _x\), using Eq. (6):
In this equation, the inverse hyperbolic tangent function is used as a nonlinear activation function. Only the real part of \(\tanh ^{-1}(\frac{x-\mu _x}{\sigma _x})\) is used for the subsequent steps. The time history of the X state is depicted in Fig. 1e.
Next, equidistant nodes are created by dividing each pseudoperiod, \(T_p\), equally into \(N(=20)\) nodes, as shown in Fig. 1f. Over each pseudoperiod, \(T_p\), the N node values are referred to as the nodal state, which is depicted in Fig. 1g.
The node matrix, S, is an \(N \times K\) matrix; for this example, \(N=20\) is the number of nodes over a pseudoperiod, and \(K=3000\) is the total number of pseudoperiods. Reserving the final 20% of this S matrix (\(600T_p\)) for validation, a new matrix, L (\(2400T_p\)), is formed from the remainder, which will be used in the training process. Next, the reservoir computer is trained using ridge regression, as in Eq. (7); this training procedure is used throughout this paper:
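A common form of this ridge-regression solution (the exact convention in the paper may differ) is:

```latex
\mathbf{w} = \left( \mathbf{L} \mathbf{L}^{\mathsf{T}} + \lambda \mathbf{I} \right)^{-1} \mathbf{L}\, \mathbf{M}^{\mathsf{T}}, \qquad o(k) = \mathbf{w}^{\mathsf{T}} \mathbf{s}(k),
```

where \(\mathbf{s}(k)\) is the kth column of nodal states.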
A target signal (the M vector) is created from the encoded input based on a benchmark task, which in this case is the XOR task. For each pseudoperiod, there will be one target value that is found by performing the XOR operation between the inputs, r(z) and \(r(z-1)\). In this way, the target vector, M, is found for the XOR task. Ridge regression based training is then applied to the nodal state matrix, L, to map it to the desired output using Eq. (7). In Eq. (7), w is the weight vector found after training, I is the identity matrix, \(\lambda =10^{-1}\) is the regularization parameter used to avoid overfitting, and o(k) is the prediction of the reservoir computer at the kth pseudoperiod. The discrete input, r(z), and the continuous input, u(t), are given in Fig. 2a and b, respectively. Figure 2c shows this prediction along with the corresponding target signal. In the final step, the prediction is binarized, since XOR is a binary task, which is depicted in Fig. 2d. It should be noted that a nonlinear dynamic emulation task would not require this final step of discretization.
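The node-matrix construction and training steps can be sketched in Python as follows; the function and variable names are illustrative, not taken from the paper's code:

```python
import numpy as np

def train_readout(X, T_p, dt, N, targets, lam=0.1, train_frac=0.8):
    """Form the N x K node matrix from the rescaled state X(t) and
    train a ridge-regression readout against per-pseudoperiod targets."""
    steps_per_tp = int(round(T_p / dt))
    K = len(X) // steps_per_tp
    # sample N equidistant virtual nodes within each pseudoperiod
    idx = np.linspace(0, steps_per_tp - 1, N).astype(int)
    S = np.stack([X[k * steps_per_tp + idx] for k in range(K)], axis=1)  # N x K
    K_train = int(train_frac * K)
    L, M = S[:, :K_train], np.asarray(targets[:K_train], dtype=float)
    # ridge regression: w = (L L^T + lam I)^(-1) L M^T
    w = np.linalg.solve(L @ L.T + lam * np.eye(N), L @ M)
    o = w @ S  # readout prediction for every pseudoperiod
    return w, o
```

The final columns of S (beyond `K_train`) are never seen during training, so `o` over that range serves as the validation prediction.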
For a logical task, the efficacy of the reservoir computer is quantified using Shannon’s information rate^{48}. The information rate, R, can be defined as follows:
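Following Shannon, the rate is the source entropy less the equivocation:

```latex
R = H(x) - H_{y}(x).
```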
Here H(x) is the Shannon entropy, which denotes how much information is encoded in a signal. This can be defined as follows:
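In bits per symbol, the entropy is:

```latex
H(x) = - \sum_{i} p_{i} \log_{2} p_{i}.
```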
In this equation, \(p_i\) is the probability of getting a particular bit, i. \(H_y(x)\) is the conditional entropy, which denotes the probability of getting an incorrect bit in the target signal:
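With the conditional probabilities defined below, the equivocation is presumably:

```latex
H_{y}(x) = - \sum_{i} \sum_{j} p(i,j) \log_{2} p_{i}(j).
```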
Here \(p_i(j)=p(j|i)=\frac{p(i,j)}{\sum _j p(i,j)}\) and p(i, j) is the joint probability distribution of the two variables, i and j, each of which can take a value of “1” or “− 1” for a logical task. i is a bit from the target, and j is a bit from the prediction. The information rate, R, for this case was calculated to be 0.98 based on the prediction from the validation portion (not included in the training process). Due to the nature of this binary target signal, the Shannon entropy is 1.0, which marks the maximum value of the information rate for this task. It should be noted that the lower limit of R is zero, which would be achieved if every prediction was incorrect, while the upper limit of R depends on the task. For the parity tasks considered here, the upper limit of R is equal to one.
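A minimal estimator of this quantity for binary ±1 sequences, written here as the mutual information between target and prediction (one common way to evaluate \(H(x) - H_y(x)\); the paper's exact estimator may differ), might look like:

```python
import numpy as np

def information_rate(target, prediction):
    """Mutual-information rate in bits per symbol for binary (+/-1)
    sequences: R = H(i) + H(j) - H(i, j) = H(i) - H(i|j)."""
    target = np.asarray(target)
    prediction = np.asarray(prediction)
    vals = (-1, 1)
    # empirical joint distribution p(i, j)
    p = np.array([[np.mean((target == i) & (prediction == j)) for j in vals]
                  for i in vals])

    def H(dist):
        dist = dist[dist > 0]  # drop zero-probability cells
        return -np.sum(dist * np.log2(dist))

    return H(p.sum(axis=1)) + H(p.sum(axis=0)) - H(p.ravel())
```

For a perfect prediction of a balanced binary target this returns 1.0 bit per symbol, and for a statistically independent prediction it returns 0.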
Pseudoperiod and noise
In this section, the effects of pseudoperiod and noise on the computational ability of the reservoir are explored. For this discussion, several parity tasks (defined in Eq. (12)) are used to understand the effects of the pseudoperiod and noise on the computational ability of the reservoir.
The relationship between the pseudoperiod, \(T_p\), and the natural frequency of the oscillator, \(\omega _0\), is explored in Fig. 3 by using the 2nd and 4th order parity tasks. In Fig. 3a–c, the reservoir computer’s performance is measured for three different values of \(T_p\) while varying the natural period, \(\frac{2 \pi }{\omega _0}\). It is found that the reservoir has better performance when the pseudoperiod is an integer multiple of the natural period of the oscillator. The reservoir’s performance is studied using both resonance (\(\omega _0 = \Omega \)) and nonresonance (\(\omega _0\ne \Omega \)) conditions. It is found that both cases can result in strong or weak computational ability depending on the fractional relationship between the natural period and the pseudoperiod. However, maintaining this design can still fail to make a robust reservoir computer when \(T_p\) is very low (e.g., \(T_p=0.05\) seconds). For the remainder of the paper, combinations of \(T_p\) and \(\omega _0\) are chosen such that the pseudoperiod is an integer multiple of the natural period of the oscillator.
Noise is ubiquitous in physical systems. For this reason, noise is introduced into this system using a stochastic masking function. Figure 4 shows the relationship between the computational ability, as measured with R, the noise amplitude, \(\sigma \), and the noise bias, \(\beta \). The simulations presented in Fig. 4 are performed for the 4th order parity task (left) and 6th order parity task (right). The reservoir is found to be robust against a certain level of noise intensity, which demonstrates its potential to be implemented under the influence of environmental noise. However, increasing noise intensity does decrease the computational ability of the reservoir. This effect is more pronounced for a higher order task, which requires a longer memory (e.g., the 6th order parity task of Fig. 4). When \(\beta =0\), the computational ability was the lowest. Since increasing the intensity of the nonperiodic noise mask deteriorates the computational ability, it should be noted that the Hopf reservoir computer can also be built by excluding the noise mask (\(\sigma =0\)).
Analog circuit experiment
To build a physical reservoir computer (PRC), an analog circuit implementation of Eq. (4) was designed, fabricated, and tested. The circuit’s equations are given in Eq. (11):
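Assuming the integrator network implements a time-scaled copy of Eq. (4) with time constant \(R_1 C\) and the masked input formed as the product \(V_u V_m\), Eq. (11) plausibly takes the form:

```latex
\begin{aligned}
R_{1}C\, \dot{V}_{x} &= \left(V_{\mu} + V_{u}V_{m} - (V_{x}^{2} + V_{y}^{2})\right)V_{x} - V_{\omega_{0}} V_{y} + \left(A + V_{u}V_{m}\right)\sin(\Omega t + \phi),\\
R_{1}C\, \dot{V}_{y} &= \left(V_{\mu} + V_{u}V_{m} - (V_{x}^{2} + V_{y}^{2})\right)V_{y} + V_{\omega_{0}} V_{x}.
\end{aligned}
```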
Here, \(V_u\) is the input voltage, \(V_m\) is the stochastic masking voltage, \(V_\mu \) is the limit cycle radius voltage, and \(V_{\omega _0}\) is the resonance constant voltage. \(V_x\) and \(V_y\) are the states, which correspond to states x and y in Eq. (4). The circuit implementation used TL082 operational amplifiers and AD633 multipliers in standard integrator network configurations. The error tolerance is \(1\%\) for the resistors and \(2\%\) for the capacitors. The continuous input function, \(V_u\), the stochastic masking function, \(V_m\), and the sinusoidal forcing, \(\sin (\Omega t+\phi )\), were created in MATLAB and sent to the circuit via a National Instruments (NI) cDAQ-9174. This cDAQ-9174 also collected the \(V_x\) and \(V_y\) states. A sampling frequency of \(10^5\) samples/s was used to collect data for all the experiments. The resistor values were chosen such that \(R_1=10\) k\(\Omega \) and \(R_2=100\) k\(\Omega \), and the capacitor values were chosen such that \(C=0.1\,\mu \)F. A simplified schematic is shown in Fig. 5.
The \(V_x\) state will be treated in the same manner that the x state was treated in “Mapping methodology” section. That is, the \(V_x\) state will be rescaled using Eq. (6), and then the rescaled state will be used to form the nodal state matrix, L. The target signal vector, M, will be created following the same process discussed in “Mapping methodology” section. Finally, Eq. (7) will be used to train the PRC to map input data to the desired output values. As an example, the analog circuit Hopf PRC was used to solve the XOR task as in the previous section, which is depicted in Fig. 6. The information rate, R, for this case was calculated to be 1.0 based on the prediction from the validation portion (not included in the training process).
Benchmark tasks for Hopf PRC
The Hopf PRC is numerically and experimentally tested with three benchmark tasks: (1) logic tasks, (2) emulation tasks of time series, and (3) prediction tasks. Logic tasks include the fundamental logic gate tasks and parity tasks of different orders. Emulation tasks of time series will test the PRC’s ability to reproduce nonlinear autoregressive moving average (NARMA) tasks of different orders. Prediction tasks include the Santa Fe time series and sunspot prediction tasks.
Logic benchmark tasks
Parity tasks
The computing efficacy of the reservoir is first evaluated with parity benchmark tasks. Since it is a logical task, the input function, u(t), is generated with a random binary signal, r(z), as discussed in “Mapping methodology” section. The nth order parity function, \(P_n\), is defined by the following equation:
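With the ±1 encoding of the bits, the nth order parity function is presumably the product of the n most recent inputs:

```latex
P_{n}(z) = \prod_{i=0}^{n-1} r(z-i).
```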
As n increases, this task will require more memory and nonlinearity from the reservoir. As given in “Mapping methodology” section, Shannon’s information metric is used to measure the performance of the PRC for logic tasks. For \(n=1\), the first-order task does not require any memory from the input of the previous pseudoperiod, so the task is linear. For \(n>1\), the task is nonlinear, which demands that the reservoir computer must also possess memory and nonlinear separation ability. Figure 7 shows the ability of the Hopf PRC to follow parity tasks of the 2nd to 5th order, both experimentally and in simulations. The initial \(4000T_p=400\) seconds are used for training, and the final \(1000T_p=100\) seconds are used for testing. The performance difference between the PRC experiment and the simulation could be due to the presence of nonlinear circuit components in the analog circuit, which are not represented in Eq. (11). For instance, \(V_u\) must jump between \(-1\) and \(+1\), but this instantaneous change takes a finite amount of time in the circuit.
Fundamental logic gate tasks
The computing performance of the reservoir is also assessed with fundamental logic gates: NOT (\(\lnot \)), AND (\(\wedge \)), and OR (\(\vee \)). The input function, u(t), is generated with a random binary signal as discussed in “Mapping methodology” section, and Shannon’s information metric is again used to measure the performance of this PRC. Figure 8 depicts the response of the Hopf PRC acting as fundamental logic gates, both experimentally and in simulations. In all cases, the Hopf PRC achieved an information rate that was maximal.
Emulation tasks
The reservoir is also evaluated with emulation tasks. The nonlinear autoregressive moving average (NARMA) time series is used to test whether the reservoir possesses adequate nonlinearity and long time lags^{4,24,26,28}. These tasks show the multitasking capability of the reservoir. NARMA tasks from the 2nd to 20th orders are used to test the reservoir. A NARMA task of order n is given in Eq. (13), where the initial target values are set to 0.19:
In Eq. (13), \(M_n\) is the target of the system, n is the order of the NARMA task, \((f_1,f_2,f_3) = (\frac{2.11}{500}, \frac{3.73}{500}, \frac{4.33}{500})\), and \((\alpha ,\zeta ,\gamma ,\delta )=(0.3, 0.05, 1.5, 0.1)\)^{26,28}. u(t) is the continuous input that is used to force the Hopf PRC, which is a function of three sinusoidal functions. It should be noted that this formulation of the NARMA emulation task is nonstandard. The u(t) given in Eq. (13) was used for other dynamic systems in which inertia played a large role^{4,26}. Similarly, this nonstandard NARMA task is used here to evaluate this analog circuit reservoir. The reservoir emulates this nonlinear function, but it should be noted that the correlation present in Eq. (13) does not allow a definitive evaluation of the long-term memory characteristics of this reservoir.
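A sketch of the target generation, assuming the standard order-n NARMA recurrence and a product-of-sines input built from the three frequencies above (the 0.2 prefactor on the input is an assumption taken from related soft-body reservoir studies, not from this paper), is:

```python
import numpy as np

def narma_series(n, K, scale=0.2,
                 freqs=(2.11 / 500, 3.73 / 500, 4.33 / 500),
                 coeffs=(0.3, 0.05, 1.5, 0.1)):
    """Generate the sinusoidal-product input u(k) and the order-n NARMA
    target M_n(k); initial target values are set to 0.19 as in the text."""
    alpha, zeta, gamma, delta = coeffs
    f1, f2, f3 = freqs
    k = np.arange(K)
    u = scale * (np.sin(2 * np.pi * f1 * k)
                 * np.sin(2 * np.pi * f2 * k)
                 * np.sin(2 * np.pi * f3 * k))
    M = np.full(K, 0.19)
    for t in range(n - 1, K - 1):
        # standard NARMA recurrence with an n-step moving-average term
        M[t + 1] = (alpha * M[t]
                    + zeta * M[t] * np.sum(M[t - n + 1:t + 1])
                    + gamma * u[t - n + 1] * u[t]
                    + delta)
    return u, M
```

One target value is produced per pseudoperiod, so `K` here plays the role of the number of pseudoperiods in the emulation experiments.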
In the simulations and experiments, \(\Delta t=0.1\) seconds, and the sampling rate was \(10^5\) samples/second. Figure 9 shows several NARMA tasks. Instead of the information rate, the normalized mean square error (NMSE) is used to evaluate the performance of the reservoir computer for the NARMA tasks:
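A common definition of this error measure, consistent with the symbols below, is:

```latex
\text{NMSE} = \frac{\sum_{j=j_{0}}^{j_{f}} \left( M_{n}(j) - o_{n}(j) \right)^{2}}{\sum_{j=j_{0}}^{j_{f}} M_{n}(j)^{2}}.
```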
The final 20% of the target signal (16,000–20,000 pseudoperiods) is used for the validation. \(M_n\) is the target, and \(o_n\) is the prediction from the reservoir computer. In Eq. (14), \(j_0\) is the starting time step, and \(j_f\) is the ending time step from the test section. From the plots in Figs. 9 and 10, the numerical simulations of the Hopf PRC show superior performance as compared to the experiment. However, both have an acceptable performance up to the 20th order task. The PRC can perform much higher order NARMA tasks than the order of the parity tasks.
Prediction tasks
Santa Fe task
Time series forecasting is an important benchmark for a reservoir. The Santa Fe time series was first used in a time series forecasting competition as a benchmark test. The Santa Fe time series data set A is a univariate time series found from the recorded intensity of a chaotic far-infrared laser^{49}. The target signal is generated to predict the value at the next time step based on the values of the current and previous time steps. Figure 11a shows the Hopf PRC’s performance on this laser time series, for both the experiment and the numerical simulations. NMSE is used as the performance metric.
Santa Fe time series data set B is a multivariate time series found from the sleep laboratory of the Beth Israel Hospital (current name: Beth Israel Deaconess Medical Center) in Boston, Massachusetts^{50,51}. This data set was taken from the MIT-BIH Polysomnographic Database record (slp60) and submitted to the Santa Fe Time Series Competition in 1991^{52}. The heart rate, chest volume (respiration force), and blood oxygen concentration comprise the target.
For each of these time series, the target signal is again generated to predict the next step based on the values of the current and previous time steps. In each case, the original time series is normalized to use as the input. Figure 11b–d shows the reservoir computer’s performance in predicting subsequent values of the heart rate, respiratory force, and blood oxygen concentration, respectively, through both experiments and numerical simulations. The NMSE is calculated in each case to evaluate the performance of the reservoir.
Sunspot prediction task
The prediction of the total number of sunspots (\(S_n\)) is also a one-step time series prediction task similar to the Santa Fe time series^{10}. Daily and monthly total sunspot numbers were used for one-step forecasting by the reservoir computer. The necessary data set is taken from WDC-SILSO, Royal Observatory of Belgium, Brussels^{53}. Again, for each of the time series, the target signal is generated to predict the next value based on the value of the current and previous time steps, and the original time series is normalized to use as the input to the oscillator. Figure 12 (top) shows the reservoir’s performance in predicting the next steps of the daily total counted sunspots, and Fig. 12 (bottom) shows the performance in predicting monthly counted sunspots. Again, the NMSE is used to evaluate the reservoir’s efficacy for this task.
Concluding remarks
In this paper, the Hopf oscillator is explored as a physical reservoir computer by employing a time-multiplexed, node-based architecture with a stochastic masking function. By discarding the regularly used delay lines, this Hopf PRC provides a simple and cheap method for creating a physical reservoir computer. Since quantum systems are capable of limit cycle motion^{54}, this Hopf PRC formulation might be applicable for quantum PRCs. The Euler–Maruyama method was used for the numerical simulations of this Hopf PRC. An analog circuit of this Hopf PRC was developed, fabricated, and tested. The Hopf PRC was found to possess multitasking capability, since it was shown to perform logic operations, emulation tasks, and time series prediction tasks. Taking inspiration from adaptive oscillators, the input signal was injected into multiple locations, including the parameter that affects the limit cycle radius and the amplitude of the sinusoidal forcing. Additionally, the masking function used in this PRC is stochastic. Since this PRC architecture is tested with noise, it also suggests that this reservoir computer should be robust to environmental noise in practical implementations.
References
Jaeger, H. & Haas, H. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science 304, 78–80 (2004).
Lukoševičius, M. & Jaeger, H. Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 3, 127–149 (2009).
Nakajima, K. Physical reservoir computing: An introductory perspective. Jpn. J. Appl. Phys. 59, 060501 (2020).
Nakajima, K., Hauser, H., Li, T. & Pfeifer, R. Information processing via physical soft body. Sci. Rep. 5, 10487 (2015).
Haynes, N. D., Soriano, M. C., Rosin, D. P., Fischer, I. & Gauthier, D. J. Reservoir computing with a single timedelay autonomous boolean node. Phys. Rev. E 91, 020801 (2015).
Appeltant, L. et al. Information processing using a single dynamical node as complex system. Nat. Commun. 2, 1–6 (2011).
Werbos, P. J. Backpropagation through time: What it does and how to do it. Proc. IEEE 78, 1550–1560 (1990).
Jaeger, H. The echo state approach to analysing and training recurrent neural networks - with an erratum note. Bonn Ger. German Natl. Res. Center Inf. Technol. GMD Tech. Rep. 148, 13 (2001).
Natschläger, T., Maass, W. & Markram, H. The liquid computer: A novel strategy for realtime computing on time series. Spec. Issue Found. Inf. Process. TELEMATIK 8, 39–43 (2002).
Appeltant, L. et al. Reservoir computing based on delay-dynamical systems (Thèse de Doctorat, Vrije Universiteit Brussel/Universitat de les Illes Balears, 2012).
Shougat, M. R. E. U., Li, X., Mollik, T. & Perkins, E. An information theoretic study of a duffing oscillator array reservoir computer. J. Comput. Nonlinear Dyn. 16, 081004 (2021).
Dion, G., Mejaouri, S. & Sylvestre, J. Reservoir computing with a single delay-coupled nonlinear mechanical oscillator. J. Appl. Phys. 124, 152132 (2018).
Laporte, F., Dambre, J. & Bienstman, P. Simulating selflearning in photorefractive optical reservoir computers. Sci. Rep. 11, 1–10 (2021).
Du, C. et al. Reservoir computing using dynamic memristors for temporal information processing. Nat. Commun. 8, 1–10 (2017).
Inubushi, M. & Yoshimura, K. Reservoir computing beyond memory-nonlinearity trade-off. Sci. Rep. 7, 1–10 (2017).
Wang, R., Kalnay, E. & Balachandran, B. Neural machinebased forecasting of chaotic dynamics. Nonlinear Dyn. 98, 2903–2917 (2019).
Pathak, J., Hunt, B., Girvan, M., Lu, Z. & Ott, E. Modelfree prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach. Phys. Rev. Lett. 120, 024102 (2018).
Rafayelyan, M., Dong, J., Tan, Y., Krzakala, F. & Gigan, S. Largescale optical reservoir computing for spatiotemporal chaotic systems prediction. Phys. Rev. X 10, 041037 (2020).
Borlenghi, S., Boman, M. & Delin, A. Modeling reservoir computing with the discrete nonlinear Schrödinger equation. Phys. Rev. E 98, 052101 (2018).
Urbain, G., Degrave, J., Carette, B., Dambre, J. & Wyffels, F. Morphological properties of massspring networks for optimal locomotion learning. Front. Neurorobot. 11, 16 (2017).
Caluwaerts, K., D'Haene, M., Verstraeten, D. & Schrauwen, B. Locomotion without a brain: Physical reservoir computing in tensegrity structures. Artif. Life 19, 35–66 (2013).
Coulombe, J. C., York, M. C. & Sylvestre, J. Computing with networks of nonlinear mechanical oscillators. PLoS ONE 12, e0178663 (2017).
Zheng, T. et al. Parameters optimization method for the timedelayed reservoir computing with a nonlinear duffing mechanical oscillator. Sci. Rep. 11, 1–11 (2021).
Hauser, H., Ijspeert, A. J., Füchslin, R. M., Pfeifer, R. & Maass, W. Towards a theoretical foundation for morphological computation with compliant bodies. Biol. Cybern. 105, 355–370 (2011).
Nakajima, K. et al. Computing with a muscularhydrostat system. In 2013 IEEE International Conference on Robotics and Automation, 1504–1511 (IEEE, 2013).
Nakajima, K. et al. A soft body as a reservoir: Case studies in a dynamic model of octopusinspired soft robotic arm. Front. Comput. Neurosci. 7, 91 (2013).
Caluwaerts, K. et al. Design and control of compliant tensegrity robots through simulation and hardware validation. J. R. Soc. Interface 11, 20140520 (2014).
Bhovad, P. & Li, S. Physical reservoir computing with origami and its application to robotic crawling. Sci. Rep. 11, 1–18 (2021).
Bhovad, P. & Li, S. Physical reservoir computing with origami: a feasibility study. In Behavior and Mechanics of Multifunctional Materials XV, vol. 11589, 1158903 (International Society for Optics and Photonics, 2021).
Fujii, K. & Nakajima, K. Harnessing disorderedensemble quantum dynamics for machine learning. Phys. Rev. Appl. 8, 024030 (2017).
Govia, L., Ribeill, G., Rowlands, G., Krovi, H. & Ohki, T. Quantum reservoir computing with a single nonlinear oscillator. Phys. Rev. Res. 3, 013077 (2021).
Nakajima, K., Fujii, K., Negoro, M., Mitarai, K. & Kitagawa, M. Boosting computational power through spatial multiplexing in quantum reservoir computing. Phys. Rev. Appl. 11, 034021 (2019).
Chen, J., Nurdin, H. I. & Yamamoto, N. Temporal information processing on noisy quantum computers. Phys. Rev. Appl. 14, 024065 (2020).
Larger, L. et al. Photonic information processing beyond turing: An optoelectronic implementation of reservoir computing. Opt. Express 20, 3241–3249 (2012).
Vinckier, Q. et al. Highperformance photonic reservoir computer based on a coherently driven passive cavity. Optica 2, 438–446 (2015).
Grigoryeva, L., Henriques, J., Larger, L. & Ortega, J.P. Stochastic nonlinear time series forecasting using timedelay reservoir computers: Performance and universality. Neural Netw. 55, 59–71 (2014).
Argyris, A., Schwind, J. & Fischer, I. Fast physical repetitive patterns generation for masking in timedelay reservoir computing. Sci. Rep. 11, 1–12 (2021).
Marković, D. et al. Reservoir computing with the frequency, phase, and amplitude of spin-torque nano-oscillators. Appl. Phys. Lett. 114, 012409 (2019).
Li, X. et al. A fourstate adaptive Hopf oscillator. PLoS ONE 16, e0249131 (2021).
Li, X. et al. Stochastic effects on a Hopf adaptive frequency oscillator. J. Appl. Phys. 129, 224901 (2021).
Nakayama, J., Kanno, K. & Uchida, A. Laser dynamical reservoir computing with consistency: An approach of a chaos mask signal. Opt. Express 24, 8679–8692 (2016).
Higham, D. J. An algorithmic introduction to numerical simulation of stochastic differential equations. SIAM Rev. 43, 525–546 (2001).
Perkins, E. & Balachandran, B. Effects of phase lag on the information rate of a bistable Duffing oscillator. Phys. Lett. A 379, 308–313 (2015).
Nguimdo, R. M., Verschaffelt, G., Danckaert, J. & Van der Sande, G. Simultaneous computation of two independent tasks using reservoir computing based on a single photonic nonlinear node with optical feedback. IEEE Trans. Neural Netw. Learn. Syst. 26, 3301–3307. https://doi.org/10.1109/TNNLS.2015.2404346 (2015).
Nayfeh, A. H. & Balachandran, B. Applied Nonlinear Dynamics: Analytical, Computational, and Experimental Methods (John Wiley & Sons, 2008).
Chorin, A. J. & Hald, O. H. Brownian motion. In Stochastic Tools in Mathematics and Science, 47–81 (Springer, 2009).
Perkins, E. Effects of noise on the frequency response of the monostable Duffing oscillator. Phys. Lett. A 381, 1009–1013 (2017).
Shannon, C. E. A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423 (1948).
Gershenfeld, N. A. & Weigend, A. S. The Future of Time Series: Learning and Understanding (CRC Press, 2018).
Rigney, D. R. Multichannel physiological data description and analysis. Time Ser. Predict. (1994).
Goldberger, A. L. et al. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 101, e215–e220 (2000).
Ichimaru, Y. & Moody, G. Development of the polysomnographic database on CD-ROM. Psychiatry Clin. Neurosci. 53, 175–177 (1999).
SILSO World Data Center. The international sunspot number. International Sunspot Number Monthly Bulletin and online catalogue (1818–2018).
Arosh, L. B., Cross, M. & Lifshitz, R. Quantum limit cycles and the Rayleigh and van der Pol oscillators. Phys. Rev. Res. 3, 013130 (2021).
Acknowledgements
Partial support for this project from DARPA’s Young Faculty Award is greatly appreciated. Research was sponsored by the Army Research Office and was accomplished under Grant No. W911NF-20-1-0336. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
Author information
Authors and Affiliations
Contributions
E.P. conceived the numerical and physical experiment(s) and guided the process; M.S. and X.L. built the circuit; M.S., X.L., and T.M. wrote programs and simulated the experiment(s); E.P. and M.S. analyzed the results. All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Shougat, M.R.E.U., Li, X., Mollik, T. et al. A Hopf physical reservoir computer. Sci Rep 11, 19465 (2021). https://doi.org/10.1038/s41598-021-98982-x