Abstract
Hardware implementation of resource-efficient reservoir computing is of great interest for neuromorphic engineering. Recently, various devices have been explored to implement hardware-based reservoirs. However, most studies have focused mainly on the reservoir layer, whereas an end-to-end reservoir architecture has yet to be developed. Here, we propose a versatile method for implementing cyclic reservoirs using rotating elements integrated with signal-driven dynamic neurons, whose equivalence to the standard cyclic reservoir algorithm is mathematically proven. Simulations show that the rotating neuron reservoir achieves record-low errors on a nonlinear system approximation benchmark. Furthermore, a hardware prototype was developed for near-sensor computing, chaotic time-series prediction, and handwriting classification. By integrating a memristor array as a fully connected output layer, the all-analog reservoir computing system achieves 94.0% accuracy, while simulation shows >1000× lower system-level power than prior works. Therefore, our work demonstrates an elegant rotation-based architecture that exploits hardware physics as a computational resource for high-performance reservoir computing.
Introduction
Reservoir computing is a bio-inspired machine learning paradigm introduced in the early 21st century^{1,2,3}. The randomly and recurrently connected nonlinear nodes in the reservoir layer provide efficient implementation platforms for recurrent neural networks with low training costs (Fig. 1a). In principle, the complex dynamics generated by the reservoir nonlinearly map the input data to spatiotemporal state patterns in a high-dimensional feature space, where the state vectors of different classes can be linearly separated^{1,4}. Furthermore, reservoir computing is a powerful approach for processing temporal signals because the recurrent connections create dependencies between current and past neuron states, a property known as short-term memory or fading memory^{2,5}. In particular, reservoir computing has demonstrated excellent performance in complex time-series prediction and classification tasks^{4,6}.
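The pipeline described above, recurrent nonlinear state generation followed by a trained linear readout, can be sketched in a few lines. The following is an illustrative echo-state-style example, not the circuit implementation reported in this work; all hyperparameters (leak rate, spectral radius, ridge regularization) are assumed for demonstration only:

```python
import numpy as np

def run_reservoir(u_seq, W_in, W_res, leak=0.3):
    """Collect reservoir states with the standard leaky update:
    s(k) = (1 - leak) * s(k-1) + leak * tanh(W_in u(k) + W_res s(k-1))."""
    s = np.zeros(W_res.shape[0])
    states = []
    for u in u_seq:
        s = (1 - leak) * s + leak * np.tanh(W_in @ np.atleast_1d(u) + W_res @ s)
        states.append(s.copy())
    return np.array(states)

rng = np.random.default_rng(0)
N = 50
W_in = rng.uniform(-1, 1, (N, 1))
W_res = rng.normal(0, 1, (N, N))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius 0.9

u = rng.uniform(-0.5, 0.5, 300)
S = run_reservoir(u, W_in, W_res)  # 300 x 50 state matrix

# Only the linear readout is trained (ridge regression), here to recall u(k-1)
target = np.roll(u, 1)
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ target)
```

Only `w_out` is learned; the random input and recurrent weights stay fixed, which is what keeps the training cost low.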
Given the potential of reservoir computing, exploiting physical dynamics as computational resources of reservoirs for highly efficient information processing has received considerable research attention in recent years. In 2011, a pioneering study^{7} introduced a delay-based reservoir and the concept of virtual nodes into a physical implementation of a cyclic reservoir (CR) (Fig. 1b), a simplified reservoir that incurs no performance degradation^{5}. This compelling finding provided an effective method for performing hardware-based reservoir computing, making it an attractive candidate in the field of neuromorphic computing. In follow-up studies, various emerging devices and systems were investigated as physical reservoirs^{8}, including spintronic devices^{9}, photonic devices^{10,11,12,13,14}, quantum devices^{15}, memristive devices^{16,17,18}, nanowire networks^{19}, and even soft robotic arms^{20}. However, the main drawbacks associated with the use of delayed feedback and time multiplexing are as follows: (i) delayed feedback is costly to implement in hardware, requiring additional digital components^{7,21}, such as analog-to-digital converters (ADCs) and random-access memory, in conventional complementary metal–oxide–semiconductor (CMOS) technology, or bulky optical fibers in optical approaches^{10,11,22,23}; (ii) without a delayed feedback line, a reservoir computing system cannot simultaneously maintain an appropriate memory capacity (MC) and satisfactory state richness; for example, previous research revealed that shortening the step size in time multiplexing could improve the MC but only at the cost of reduced state richness, and vice versa^{16}; (iii) the serial operations in time multiplexing increase system complexity and latency for both input and readout, whereas parallel computing, which enhances throughput, is more desirable in neuromorphic computing^{24}.
These obstacles hinder further reductions in power and size when the cost of an entire reservoir computer, from signal input to computing output, is considered; thus, a gap remains between laboratory demonstrations and massive deployment in practical applications. There is an urgent need for a new architecture for hardware-based reservoir computers of miniature size, with low power consumption and high suitability for large-scale integration^{8,25}.
In this work, we propose a rotating-neuron-based architecture that physically performs reservoir computing in a more intuitive way, namely the rotating neurons reservoir (RNR), whose rotation behavior matches the neuron update in a CR, as rigorously proven through mathematical derivation. Compared with existing reservoir computing implementations^{17,19,20,21,23}, the RNR is hardware-friendly, resource-efficient, fully parallel, and explainable by the standard CR. To verify the feasibility and potential of the RNR, an electrical RNR (eRNR) design based on CMOS circuits is introduced together with a simulator. Furthermore, a prototype eRNR composed of eight parallel reservoir circuits is built to perform analog near-sensor computing, and real-time Mackey–Glass time-series prediction and real-time handwriting recognition are successfully performed in hardware experiments. To realize an all-analog reservoir computing system, the eRNR is further integrated with an analog memristor array that implements the fully connected output layer. Through the proposed noise-aware training method, the conductance variation of the memristor array is accommodated, and a high classification accuracy of 94.0% is achieved for a handwritten vowel recognition task. Finally, a CMOS circuit simulation based on standard 65 nm technology indicates that the eRNR system is projected to consume as little as 32.7 μW of system power in the handwriting recognition task, more than three orders of magnitude lower than literature-reported reservoir systems. These results highlight the tremendous potential of the proposed RNR, offering a promising paradigm for resource-efficient reservoir computers.
Results
Physical CR with rotating neurons
The rotation couples the physical RNR to the software CR. The mathematical derivation of the RNR proves that a rotating neuron array is equivalent to a CR model (Fig. 1b), as detailed in the Methods section. Figure 1c illustrates the operating principle of the rotation-based reservoir: if the neuron array is fixed, the pre- and post-neuron rotors rotate in the same direction to periodically shift the connections, which is equivalent to rotating the neurons while fixing the pre- and post-neuron rotors. Figure 1d shows an example of a three-neuron RNR. The rotors shift the connections before and after the neurons. The channels on the right side output the analog computing results, which are equivalent to the neuron states in a CR model given the same input. We note that the RNR principle is widely applicable: various rotating components, not limited to CMOS implementations, can be developed into reservoirs by embedding dynamic neurons.
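For reference, the CR update that the rotating neuron array reproduces can be written as a one-line state recursion in which the ring connectivity is a cyclic shift of the previous state. This is a minimal sketch with assumed parameter values (γ, β, leak rate) and a leaky-ReLU neuron for illustration:

```python
import numpy as np

def cr_step(s_prev, u, W_in, gamma=0.1, beta=0.9, leak=0.3):
    """One cyclic-reservoir update: the recurrent matrix is a ring, so each
    neuron receives the previous state of its ring neighbour (a cyclic
    shift) plus the scaled input; the neuron is a leaky ReLU integrator."""
    ring = np.roll(s_prev, 1)                 # ring (cyclic) connectivity
    pre = gamma * (W_in @ u) + beta * ring    # pre-activation
    return (1 - leak) * s_prev + leak * np.maximum(pre, 0.0)

N = 8
rng = np.random.default_rng(0)
W_in = rng.choice([-1.0, 1.0], size=(N, 1))   # binary input weights
s = np.zeros(N)
for k in range(100):
    s = cr_step(s, np.array([np.sin(0.2 * k)]), W_in)
```

The `np.roll` call is the software counterpart of one rotor step: it shifts every neuron's state to its ring neighbour once per time step.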
Thus, the main challenge in implementing a hardware RNR is the construction of the physical rotors and dynamic neurons based on the above approaches. Figure 2a illustrates a schematic of an N-neuron eRNR designed using CMOS circuits. Implementing the input layer with binary weights is important because it allows the system to directly interface with analog sensory signals. W_{in} is taken to be a matrix of randomly generated 1 and −1 values drawn from a uniform distribution, which have been proven to be as effective as multilevel weights^{26}. Assuming that the signal source is u(t), the driving signal for each neuron should be γu(t) or −γu(t) during one time step, where γ is the input scaling factor. W_{in} can be configured by changing the switches (S_{1} to S_{N}). Note that W_{in} should remain unchanged while the RNR is operating, so the switches can be replaced with fixed connections.
Next, the pre-neuron rotor is implemented using N N-channel multiplexers composed of transmission gates. All multiplexers share a common address line from a log_{2}N-bit counter (for N = 2, 4, 8, 16 …) but use different channel sequences for neuron connections, as illustrated in Fig. 2a. A driving clock with a period of τ_{r} sequentially increments the counter address from 0 to N − 1 and then resets it to 0. This address controls the activated channels of all the multiplexers. Because the sequences of neuron connections differ, every multiplexer is connected to a different neuron during each τ_{r}. Such a configuration ensures that every input channel transmitting γu(t) or −γu(t) polls every neuron during every rotation cycle τ_{r} × N, which corresponds to the transformation γ(R^{k−1})^{T}W_{in}u(k) described in the Methods section, where R^{k−1} denotes (k − 1)-step cyclic shifting. Upon receiving the neuron input γ(R^{k−1})^{T}W_{in}u(k) and adding it to its current value, each neuron produces an output a(k) represented by the voltage level measured at the right side of the neuron circuit. The final step is to employ a post-neuron rotor at the output to convert a(k) to a state vector s(k). The post-neuron rotor performs an operation that mirrors that of the input multiplexer array to obtain the forward rotation R.
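The rotor action can be modeled in software as a cyclic permutation applied to the fixed binary weight vector. A minimal sketch (with an assumed scaling factor γ and N = 4) shows the period-N behavior of the counter-driven multiplexers:

```python
import numpy as np

def rotate(vec, k):
    """Cyclic shift by k positions: the software analogue of advancing the
    multiplexer counter by k steps."""
    return np.roll(vec, k)

N = 4
rng = np.random.default_rng(1)
w_in = rng.choice([-1.0, 1.0], size=N)   # fixed binary input weights
gamma = 0.5                              # assumed input scaling factor

def neuron_drive(u, k):
    """Drive seen by the N neurons at time step k: the weight vector shifted
    by (k - 1) positions, i.e. the analogue of gamma (R^(k-1))^T W_in u(k)."""
    return gamma * rotate(w_in, k - 1) * u

drives = [neuron_drive(0.8, k) for k in range(1, N + 1)]
```

Because `rotate(w_in, N)` returns the original vector, a full rotation cycle of N counter steps restores the initial wiring, so every channel has polled every neuron exactly once.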
In addition to the rotors, dynamic neurons are crucial elements for nonlinear computing. Based on the fundamental RNR characteristics described in the Methods section, a neuron in the RNR should possess three important properties: nonlinearity, integration ability, and leakage ability (Fig. 2b). Figure 2c illustrates a dynamic neuron designed specifically for the eRNR. Figure 2d and e plot the nonlinearity (a rectified linear unit (ReLU) that can be implemented with a diode) and the integration characteristics (with a neuron time constant τ_{n} = R_{int} × C_{int}), respectively. In the absence of the diode, the activation function becomes linear. The design and modeling of the dynamic neuron used in the eRNR are detailed in the Methods section. As discussed in Fig. 2b and the Methods section, most recently reported devices and materials for physical reservoir computing could also serve as the neuron in the RNR architecture^{9,16,17}. Finally, an eRNR can be built by combining rotors and neurons. Multiple parallel RNRs can simultaneously connect to a common input signal but use different W_{in} configurations to increase the state richness. Figure 2f illustrates a complete eRNR computing architecture that includes M parallel N-neuron eRNRs. The output weights are obtained through training and mapped onto a memristor array to calculate the final results.
Moreover, a noise-free simulator was developed to evaluate the performance of the eRNR under different configurations and demonstrate its equivalence to a CR (as proven analytically in the Methods section). The first simulation was designed to confirm the consistency between the RNR and the CR and to emphasize the role of rotation in the RNR. The key network characteristics under different parameters, nonlinearities, and rotation directions were investigated. Before comparing the network characteristics of the software CR and the hardware RNR, a numerical method was developed to calculate the software CR parameters, such as the input scaling factor α and recurrent strength β, from the RNR behavior, thereby identifying the CR counterpart of a hardware RNR (see Methods). The prime task-independent network characteristic of a reservoir is the MC, which indicates its capability to retain the fading memory of previous inputs^{8,27} and plays a critical role in the reservoir’s performance in temporal signal processing. The standard MC measurement is introduced in Supplementary Note 1. Figure 3a plots the MC as a function of reservoir size N in different scenarios. We observed excellent agreement in the MC between the eRNR and its CR counterpart for both ReLU and linear activation functions. The ReLU neurons yielded a lower MC because the nonlinearity suppressed the fading information of previous inputs, as also observed in earlier studies^{27,28}. For the RNR, we investigated the effect of the rotation direction to validate the design of the two rotors. The four lines at the bottom of Fig. 3a show the MC when the two rotors were stopped or rotated counter-directionally. The near-zero MC indicates that without rotation or with counter-directional rotation, the RNR failed to implement reservoir computing functionality, since there was no MC for processing the temporal signal.
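The MC metric referenced above can be approximated numerically: for each delay, fit a linear readout that reconstructs the delayed input from the states and accumulate the squared correlations. This is a sketch of the standard procedure; `max_delay` and `washout` are assumed values, not the settings used in the paper:

```python
import numpy as np

def memory_capacity(states, inputs, max_delay=20, washout=50):
    """Estimate linear memory capacity: for each delay d, fit a linear
    readout that reconstructs inputs[k - d] from states[k], and sum the
    squared correlations between reconstruction and target."""
    mc = 0.0
    X = states[washout:]
    for d in range(1, max_delay + 1):
        y = inputs[washout - d:len(inputs) - d]
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = np.corrcoef(X @ w, y)[0, 1]
        mc += r ** 2
    return mc
```

A reservoir with N neurons has an MC upper bound of N, which is why Fig. 3a reports MC as a function of reservoir size.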
In addition to the MC, three other important network characteristics are the computing ability (CA), kernel quality (KQ), and generalization rank (GR)^{29} (see Supplementary Note 1). These factors were analyzed by varying the neuron time constant τ_{n}, which also changed the parameter-matching result for the CR counterpart. As shown in Fig. 3b, the network characteristics of the physical eRNR again matched those of its CR counterpart. Here, the minor differences may be attributed to the imperfect diode characteristics as a ReLU function. The results presented in Fig. 3a, b corroborate the finding that a properly configured RNR (rotation in a common direction) is equivalent to a software-based CR and hence can be used to implement physical reservoir computing.
The performance benchmark for the eRNR
As an implementation of reservoir computing, the eRNR should be able to approximate a nonlinear system, and the nonlinear autoregressive moving average (NARMA) system is a widely recognized benchmark for testing reservoir computing performance. A standard tenth-order NARMA system can be expressed by the following formula:

y(k + 1) = 0.3y(k) + 0.05y(k)Σ_{i=0}^{9}y(k − i) + 1.5x(k − 9)x(k) + 0.1 (1)
where x(k) is randomly generated white-noise input in the range [0, 0.5] and y(k + 1) is the target value. As can be observed in Eq. (1), the recursive configuration demands both nonlinear fitting ability and MC from the prediction model. In this task, an eRNR model received the x(k) input and predicted the y(k + 1) output after training. In total, 4000 data samples (x(k) and y(k)) for NARMA10 were generated to train (3000 samples) and test (1000 samples) the eRNR model. Given the same x(k), the normalized root mean square error (NRMSE) of the predicted result y’(k) versus the y(k) calculated with the NARMA10 model in Eq. (1) was used to quantify modeling performance. In the first trial, two key parameters of the eRNR, the input scaling factor γ and the time constant of the dynamic neurons τ_{n}, were varied while the other parameters were fixed to obtain the optimal NRMSE for a single 400-neuron eRNR. The input scaling factor changes the effective range of the nonlinearity, and the time constant affects the decay factor d. The noise-free simulation result is plotted in Fig. 3c, where the optimal value (NRMSE = 0.078) was found at γ = 0.061 and τ_{n} = 1.1 s. It is worth mentioning that in a neuromorphic computing system, the electronic devices directly interacting with the environment and natural signals can exhibit much longer time constants (e.g., beyond the millisecond scale) than those typical of digital systems^{30}. Too fast a time constant could result in an insufficient MC for retaining historical information. Such biologically realistic time constants (τ_{n} and τ_{r}, from the millisecond to second scale) were used throughout the hardware implementations and simulations. The performance can be further improved by increasing the number of parallel reservoirs M with different input weights W_{in}, as illustrated in Fig. 2f. As shown in Fig. 3d, the resulting NRMSE clearly decreases with increasing M or N.
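For reproducibility, the benchmark data and the error metric can be generated as follows; the recursion uses the widely cited NARMA10 coefficients, which are assumed to match Eq. (1):

```python
import numpy as np

def narma10(x):
    """Tenth-order NARMA series driven by input x (standard coefficients)."""
    y = np.zeros(len(x))
    for k in range(9, len(x) - 1):
        y[k + 1] = (0.3 * y[k]
                    + 0.05 * y[k] * np.sum(y[k - 9:k + 1])
                    + 1.5 * x[k - 9] * x[k]
                    + 0.1)
    return y

def nrmse(y_true, y_pred):
    """Root mean square error normalized by the target standard deviation."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2) / np.var(y_true))

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 0.5, 4000)   # white-noise input in [0, 0.5]
y = narma10(x)
```

The product term x(k − 9)x(k) and the ten-step sum are what make the task demand both nonlinearity and at least ten steps of fading memory from the reservoir.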
The minimum NRMSE achieved in this experiment was 0.055 at N = 388 and M = 50. Figure 3e shows an instance of the predicted value y’(t) in comparison with the ground truth y(t) at NRMSE = 0.055. To the best of our knowledge, the NRMSE values for both the single eRNR (0.078) and the parallel eRNRs (0.055) are lower than those reported in previous reservoir computing studies^{7,31}. Notably, the exponential nonlinearity in the transition region of the diode (unlike the ideal ON/OFF form of the software ReLU function) enhances the state representation of the NARMA10 system. This result demonstrates the tremendous potential of the eRNR for high-order nonlinear system approximation owing to the rich physical dynamics of electronic devices.
Physical eRNR implementation: real-time chaotic signal prediction
The eRNR design can be implemented using commercial off-the-shelf components. Here, we developed a proof-of-concept prototype with τ_{n} = 1 s, N = 8, and M = 8, as shown in Fig. 4a. The eight parallel eRNRs shared a common power supply, counter, and positive and negative inputs. The input weight W_{in} varied for every eRNR to create diverse neuron dynamics and increase the state richness. More details about the prototype can be found in Supplementary Note 2. To evaluate the state generation performance, the first experiment with the 8 × 8 eRNR system was multi-step-ahead prediction of the Mackey–Glass chaotic system, which has been used as a benchmark task in various reservoir computing studies^{1,17,32}. The Mackey–Glass system is defined by

dy/dt = βy(t − τ) / (1 + y(t − τ)^{n}) − γy(t) (2)
where the system parameters γ, β, and n were set to the widely used values of 0.1, 0.2, and 10, respectively. The system is chaotic when τ > 16.8, and predictions become correspondingly more difficult. In this experiment, we set τ = 17 and the initial value y(0) = 1.2, following previous works. The samples generated from the Mackey–Glass system were input into the 8 × 8 eRNR system at a sampling rate of 8 Hz. This sampling rate must equal the driving frequency of the counter to ensure that every sample point is captured; that is, τ_{r} = 0.125 s. Based on this configuration, the 64 parallel output channels produce state values as measured voltages for postprocessing. With our customized demonstration platform (described in Supplementary Note 2), the Mackey–Glass chaotic signal y(k) was continuously fed into the eRNR system. The training state matrix s(k), of length 64, based on y(k), was used to train the output weights W_{out} through linear regression, with the target being the Mackey–Glass series shifted by i steps (y(k + i)). Here, the number of shifted steps i determined how many steps ahead of y(k) the system could predict. The system continuously received y(k) without any preprocessing and produced 64 state outputs, which were multiplied by W_{out} to predict the value y’(k + i). This process was performed in real time on the demonstration platform, and all the data, including y(k), y’(k + i), and s(k), were visualized (see Supplementary Movie 1).
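A simple way to generate Mackey–Glass samples like those used as input is forward-Euler integration of the delay differential equation; the step size `dt` and the constant pre-history are assumptions of this sketch, not details of the experimental setup:

```python
import numpy as np

def mackey_glass(n_samples, tau=17.0, beta=0.2, gamma=0.1, n=10,
                 dt=0.1, y0=1.2):
    """Forward-Euler integration of
    dy/dt = beta * y(t - tau) / (1 + y(t - tau)^n) - gamma * y(t),
    holding the pre-history at y0 (an assumption of this sketch)."""
    delay = int(round(tau / dt))
    y = np.full(n_samples + delay, y0)
    for t in range(delay, n_samples + delay - 1):
        y_tau = y[t - delay]
        y[t + 1] = y[t] + dt * (beta * y_tau / (1 + y_tau ** n) - gamma * y[t])
    return y[delay:]

series = mackey_glass(5000)   # chaotic regime for tau > 16.8
```

The delay term is what makes the system infinite-dimensional: the reservoir must summarize roughly τ time units of history to predict the next value.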
To better understand how the number of parallel RNRs (i.e., M) affected the prediction performance of the system, the states within 360 s (2880 × 64 samples, half for training and half for testing) were collected with the platform. Again, the NRMSE was used to quantify the difference between the actual values y(k + i) and the predicted values y’(k + i). The result is shown in Fig. 4b. As i increased, the time series became increasingly difficult to predict, resulting in a higher NRMSE; however, this increase can be alleviated by using additional parallel reservoirs to enhance computational performance. Two examples of one-step-ahead prediction using one reservoir (NRMSE = 0.17) and eight parallel reservoirs (NRMSE = 0.03) are plotted in Fig. 4c, d, respectively. The traces of y(k + i) and y’(k + i) in the phase space were also examined (Fig. 4e, f). The traces of the eight eRNRs exhibited excellent consistency with the true values compared with the traces of the one-reservoir system. These experimental results suggest that the 8 × 8 eRNR prototype can accurately predict variables of the Mackey–Glass chaotic system after training. Even with the inevitable noise introduced by the analog circuits, the eRNR successfully emulated the chaotic system, with a low NRMSE of 0.03. Moreover, our experiments revealed that the eRNR prototype can properly perform one-step-ahead prediction for more chaotic signals (τ > 17) (Supplementary Fig. 1a–f). In comparison, the system performance could degrade as τ increases in multi-step-ahead prediction (Supplementary Fig. 1g).
Demonstration of near-sensor computing: handwriting recognition
In the literature, some previously reported reservoir computing demonstrations achieved relatively low power consumption for certain parts of their systems using emerging devices and materials^{9,16,17}. However, the operation of the entire system is usually overlooked. An interface between a sensory signal and the reservoir input is usually necessary, and assistive techniques, such as conversion between digital and analog data, memory buffering, preprocessing, and feature extraction, are also often required^{7,9,17}. These sophisticated operations increase system complexity and power consumption but are necessary in conventional physical reservoir computing, and they remain a key challenge for practical deployment^{8}. In this work, a prime advantage of our eRNR prototype is that it can directly receive analog sensory signals and produce parallel state outputs without any digital memory or preprocessing, which could considerably reduce the power consumption of the overall system. This strength is highly attractive for emerging applications in analog near-sensor computing, in which the processor acts as a direct interface to sensory signals for cognitive computing purposes^{33}.
To demonstrate analog near-sensor computing, a resistive touch screen was employed to provide analog sensory signals for a handwritten vowel recognition task. In the experimental setup, a front-end circuit converted the resistive variations into two continuous signals representing the X and Y coordinates of the activated pixel on the screen. The 8 × 8 eRNR system used in the Mackey–Glass task was divided into two 4 × 8 eRNR subsystems (i.e., N = 8 and M = 4) to process the X and Y temporal signals, and the total length of the state channel remained 64. In this case, the two subsystems still shared a common power supply and counter but received different positive and negative inputs from the X and Y axes. A photograph of the hardware is shown in Fig. 5a. This experiment demonstrates that five different handwritten vowels (A, E, I, O, and U) can be distinguished after high-dimensional nonlinear mapping in the eRNR. Additionally, one important advantage of reservoir computing systems is that their short-term memory allows the network to retain fading information about previous inputs in the state matrix at each time step. Thus, the state matrix obtained at the end of a handwriting event contains the information for the entire handwritten trace. After training, the eRNR system can perform point-by-point analog reservoir state generation without accessing digital memory. Consequently, the memory unit used in conventional machine learning approaches to store a certain length of data, such as the data in a sliding window or a segmented signal, can be eliminated by making full use of the MC. A further advancement of this system stores the analog output weights in a memristor crossbar array to realize all-analog signal processing^{34,35}, whereby the power consumption can be further reduced by exploiting the computing-in-memory capability of memristors.
Thus, from the sensory signal to the classification result, the entire system can perform near-sensor computing in the analog domain, as shown in Fig. 5b.
In our experiment, handwritten vowel data from eight participants were collected (see Methods), and typical handwriting samples are displayed in Fig. 5c. For different handwritten vowels, Fig. 5d shows the X and Y signals input into the eRNRs, and Fig. 5e shows the resulting state outputs of the 64 channels. Using the labeling, training, and testing procedure introduced in the Methods section, 683 handwritten vowels (of a total of 703 in the test set) were correctly recognized, yielding a high accuracy of 97.1%. Examples of the point-by-point outputs for the five classes are illustrated in Fig. 5f, and the confusion matrix is shown in Fig. 5g. The errors mainly occurred when predicting ‘O’, which was misclassified as ‘U’ in some cases since these two classes are associated with similar writing traces. Here, the software-trained W_{out} was deployed on the demonstration platform to perform real-time near-sensor handwriting recognition (see Supplementary Movie 2).
The next experiment further integrated the eRNR system with a memristor crossbar array that served as the output layer. In this experiment, a differential pair of two memristors represented one synaptic weight, so 640 memristors were used to represent all the weights in the above W_{out} (see Methods and Supplementary Fig. 2). Note that the analog weights in a memristor array usually suffer from conductance variation (e.g., read noise) due to nonideal device characteristics, leading to performance degradation relative to floating-point digital weights in software^{35}. The next simulation evaluated the effect of memristor conductance noise on the classification performance of the system to establish a proper training scheme. Figure 5h shows the result of directly mapping W_{out} without noise-aware training; notably, the accuracy decreased significantly as the noise level increased. In our experiment, the intrinsic noise of the memristors was the dominant noise source in the all-analog system. To achieve high accuracy, we adopted a noise-aware training method to obtain a W_{out} that is robust to memristor conductance variation^{36,37}. In the noise-aware training scheme, Gaussian white noise with a standard deviation of 0.03 was added to the normalized training state data before regression, and the resulting accuracy is plotted in Fig. 5h. Comparisons between the digital W_{out}, the target analog W_{out}, and the average values of the measured W_{out} after mapping are visualized in Supplementary Fig. 3. Most of the weight values were successfully mapped to the memristor array with acceptable device variation, and the standard deviation of the mapping error (target conductance minus measured conductance) is approximately 0.368 μS. Finally, the confusion matrix obtained using the analog W_{out} measured from the memristor array is shown in Fig. 5i.
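The noise-aware training scheme amounts to corrupting the (normalized) state matrix with Gaussian noise before solving the ridge regression. The following is a minimal sketch with an assumed regularization strength and synthetic toy data, not the experimental state matrix:

```python
import numpy as np

def noise_aware_ridge(S, Y, noise_std=0.03, reg=1e-3, seed=0):
    """Ridge-regression readout trained on states corrupted with Gaussian
    noise, so the learned W_out tolerates read noise on the analog weights.
    S: (samples x channels) state matrix, Y: (samples x classes) one-hot."""
    rng = np.random.default_rng(seed)
    S_noisy = S + rng.normal(0.0, noise_std, S.shape)
    # Closed-form ridge solution: W = (S^T S + reg * I)^-1 S^T Y
    A = S_noisy.T @ S_noisy + reg * np.eye(S.shape[1])
    return np.linalg.solve(A, S_noisy.T @ Y)

# Toy demonstration on synthetic, linearly separable "states"
rng = np.random.default_rng(1)
labels = rng.integers(0, 5, 200)
means = rng.normal(0.0, 1.0, (5, 64))
S = means[labels] * 3 + rng.normal(0.0, 1.0, (200, 64))
Y = np.eye(5)[labels]
W_out = noise_aware_ridge(S, Y)
accuracy = np.mean(np.argmax(S @ W_out, axis=1) == labels)
```

Injecting noise during training acts as a regularizer, so the resulting readout weights remain accurate when the states are later multiplied by noisy memristor conductances.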
Using the noise-aware training method and the measured analog W_{out}, the classification accuracy was improved from 29.2 ± 0.9% (without noise-aware training) to 94.0 ± 0.8% (with noise-aware training). The recognition result for each participant is summarized in Supplementary Fig. 4.
Systemlevel power estimation and benchmark testing
The power consumption of the whole eRNR-based reservoir computing system can be divided into two parts: eRNR circuit consumption and memristor array consumption. For the eRNR circuit, an 8-neuron eRNR was designed and simulated in a standard 65 nm CMOS process using the parameters of the handwriting recognition task. The power estimation process and simulation are described in the Methods; the power of the eRNR was estimated by simulating the CMOS circuit with the foundry-provided library. The result indicates that the eRNR method can reduce the system power consumption for the handwriting task and chaotic signal prediction to 32.7 μW. The simulation also suggests that static power, mainly associated with the dynamic neurons and the leakage current of transistors, plays a dominant role when the processing rate (1/τ_{r}) is lower than 100 kHz (at which the power consumption was estimated to be 79.1 μW). This striking advantage stems from the unique all-analog computing capability of our eRNR-implemented reservoir computing system, which saves the energy otherwise spent on frequent data conversion between the digital and analog domains. It should also be highlighted that our all-analog eRNR provides more than three orders of magnitude lower system-level power consumption than previous cutting-edge reservoir computing systems, whose power consumption ranges from 83 mW to 150 W across different implementation methods (see Supplementary Table 1)^{10,38,39,40}.
In contrast to conventional digital systems, the intrinsic dynamics of the electronics are fully exploited as computational resources in the all-analog eRNR architecture. A complete rotation-based reservoir computing system can be implemented by designing pre- and post-neuron rotors and dynamic neurons; this approach uses highly simplified hardware and is supported by CR theory. Additional discussion and comparison of the power efficiency of the eRNR can be found in Supplementary Note 3.
Discussion
In summary, we developed a hardware-friendly RNR architecture for all-analog neuromorphic computing; the resulting structure represents a fundamentally different reservoir architecture from those used in conventional hardware implementations. The proposed RNR has been validated through theory, simulation, and experiment. The theoretical analysis rigorously mapped the CR algorithm onto the physical rotation of a dynamic neuron array, providing a solid foundation for hardware implementation. Such an RNR can be embedded into the natural rotating components of various electronic or mechanical systems, or even nanorobotics, and empower them with computing capability. In simulations using the eRNR model, the NARMA10 prediction task was performed to benchmark the system with varying hyperparameters, and record-low NRMSE values of 0.078 for a single eRNR and 0.055 for parallel eRNRs were achieved. The additional nonlinearity provided by the hardware-based dynamic neurons enhanced system performance in approximating the NARMA10 system, highlighting the computing potential of the proposed RNR. Furthermore, an 8 × 8 eRNR prototype was developed based on RNR theory for near-sensor analog computing. The prototype successfully demonstrated multi-step-ahead prediction of chaotic time series, and eight parallel reservoirs reduced the prediction NRMSE from 0.17 to 0.03 for the studied Mackey–Glass chaotic system. This experimental result further validates the computing capability of our eRNR prototype under different experimental configurations. By further integrating the eRNR with an analog memristor array as the fully connected output layer, an all-analog reservoir computing system was realized to perform handwriting recognition tasks. A noise-aware training method was used to accommodate the conductance variation of the memristor array and improved the classification accuracy to 94.0%.
In the simulation of the eRNR circuit, the overall system power consumption was estimated to be as low as 32.7 μW for the handwriting task operating at 10 Hz (τ_{r} = 0.1 s), an advantage of more than three orders of magnitude compared with the consumption reported for reservoir computing systems in the literature. Additionally, further power analysis suggested that static power, mainly dissipated by the dynamic neurons, dominates the system at processing rates below 100 kHz, while the overall system power remains low even at high processing rates (>100 kHz) (see Supplementary Table 1). This result can be explained by the fact that most computations occur in the analog domain and thus contribute only to static power, which is a general advantage of analog neuromorphic computing. Dynamic power, mainly attributed to logic switching and the memristor arrays, starts to dominate the system at processing rates above 100 kHz (see Supplementary Table 2). Further discussion of the low-power advantage of eRNRs can be found in Supplementary Note 3.
To further enhance the eRNR system's capability on complex tasks, a useful approach is to increase the number of neurons (N) or the number of parallel eRNRs (M) to expand the network size. Furthermore, a deep eRNR, consisting of multiple eRNR cells in series, could enhance the classification performance for inputs of different classes. From a hardware perspective, the dynamic neurons could be replaced by recently reported emerging devices (e.g., dynamic memristors^{16,17} and spintronic devices^{9}) to further reduce the system size and power consumption. Different neuron configurations could enhance state richness and improve system performance. In addition, the eRNR design can be miniaturized and monolithically integrated onto chips to reduce power requirements and enable ultrafast computing. It is also worth mentioning that various rotational hardware could be explored for constructing efficient pre- and post-neuron rotors, which are the key to implementing the RNR. Our work demonstrates that the RNR is well-suited for large-scale and high-speed neuromorphic computing systems and has tremendous potential for applications such as the Internet of Things and edge computing.
Methods
Fundamentals of the RNR
For a typical reservoir computing system with an m-dimensional input, an n-dimensional output, and N neurons (Fig. 1a), the input coefficients W_{in} (m × N) and reservoir weights W_{res} (N × N) are randomly generated^{1}. The complex dynamics stemming from the massive and random connections in the reservoir layer nonlinearly map the m-dimensional input to the N-dimensional feature space, where different input classes can be linearly separated. For n output classes, only the output weights W_{out} (N × n) need to be trained, by linear regression, which is efficient compared with other recurrent neural network training methods^{1,2,41}. Note that linear ridge regression is used for training throughout this work. The neuron dynamics in the reservoir layer play an important role in signal mapping based on the following equation:
\[s(k+1)=f\left(\alpha \,{W}_{res}\,s(k)+\beta \,{W}_{in}\,u(k+1)\right)\qquad (3)\]
where s(k) denotes the neuron state vector of length N at the k-th time step, u(k) is the m-dimensional input, α and β are the scaling factors for the recurrent and input weights, respectively, and f(x) is a nonlinear transform function. In reservoir computing, the reservoir layer W_{res} can be designed in a deterministic manner rather than being based on random connections^{5}. In this case, W_{res} becomes a shifted identity matrix R
\[R=\begin{pmatrix}0&0&\cdots &0&1\\ 1&0&\cdots &0&0\\ 0&1&\cdots &0&0\\ \vdots & &\ddots & &\vdots \\ 0&0&\cdots &1&0\end{pmatrix}\qquad (4)\]
As a result, W_{res} is significantly simplified, and the network topology becomes a CR, as shown in Fig. 1b. Previous research concluded that a CR can achieve results comparable to those of conventional reservoir computing^{5}. The matrix R corresponds to a one-step shift in a ring structure, and R^{k} indicates a k-step cyclic shift, analogous to physically rotating an object. As illustrated in Fig. 1d, it is assumed that (i) the post- and pre-neuron rotors are described by R and its transpose R^{T}, respectively; (ii) a(k) is the dynamic neuron output at the k-th step; and (iii) s_{r}(k) is the state vector of the RNR at the k-th step, measured at the end of each rotor's channel (before the output weights). Considering the rotation of the neuron output, the update formula for the state s_{r}(k) can be written as
\[{s}_{r}(k)={R}^{k-1}\,a(k)\qquad (5)\]
which indicates that, at the k-th step, the state vector s_{r}(k) is obtained by rotating the neuron output a(k) (k − 1) times. Furthermore, the output of the dynamic neurons is determined by both the shifted input and the previous states
\[a(k+1)={f}_{r}\left(\gamma \,{({R}^{T})}^{k}\,{W}_{in}\,u(k+1)+d\,a(k)\right)\qquad (6)\]
where d denotes the decay factor resulting from the dynamic property of the neuron (see the next subsection of the Methods), γ is the scaling factor for the input, and f_{r}(x) is the nonlinear transform implemented by the dynamic neurons. Equation (6) describes the signal flow through the neurons. Given an input u(k + 1), it is first multiplied by the input weights W_{in}. After k reverse rotations of the input connections, the signal is fed into the dynamic nonlinear neurons, which output a(k + 1). If both sides of Eq. (6) are multiplied by R^{k} (noting that the permutation R^{k} commutes with the element-wise function f_{r}), we obtain
\[{R}^{k}\,a(k+1)={f}_{r}\left(\gamma \,{W}_{in}\,u(k+1)+d\,{R}^{k}\,a(k)\right)\qquad (7)\]
Using Eq. (5), Eq. (7) can be simplified to
\[{s}_{r}(k+1)={f}_{r}\left(\gamma \,{W}_{in}\,u(k+1)+d\,R\,{s}_{r}(k)\right)\qquad (8)\]
Here, the consistency between Eq. (3) and Eq. (8) reveals that the proposed physical RNR architecture (Fig. 1c) is equivalent to a software CR. Thus, a rotating object with dynamic neurons can act as a reservoir computer without extra control units, ADCs, or memory, which remarkably reduces system complexity and power consumption compared with conventional hardware implementations (see Supplementary Note 3).
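The equivalence between the rotating-neuron update (Eqs. (5) and (6)) and the cyclic-reservoir update (Eq. (8)) can also be checked numerically. The following sketch is our own illustration in NumPy, not the authors' code; the decay d, input scaling γ, and ReLU-type nonlinearity are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, T = 8, 1, 50            # neurons, input dimension, time steps
d, gamma = 0.88, 0.5          # decay and input scaling (illustrative values)
W_in = rng.uniform(-1, 1, (N, m))
u = rng.uniform(-1, 1, (m, T))
f = lambda x: np.maximum(x, 0.0)     # element-wise ReLU-type nonlinearity

# R: shifted identity matrix; R @ x cyclically shifts x by one position
R = np.roll(np.eye(N), 1, axis=0)

# Rotating-neuron reservoir: neurons receive k-times reverse-rotated inputs
# (Eq. (6)); the states are read out after k rotations (Eq. (5))
a, s_rnr = np.zeros(N), []
for k in range(T):
    Rk = np.linalg.matrix_power(R, k)
    a = f(gamma * Rk.T @ W_in @ u[:, k] + d * a)
    s_rnr.append(Rk @ a)

# Equivalent software cyclic reservoir (Eq. (8))
s, s_cr = np.zeros(N), []
for k in range(T):
    s = f(gamma * W_in @ u[:, k] + d * R @ s)
    s_cr.append(s)

print(np.allclose(s_rnr, s_cr))   # True: the two trajectories coincide
```

Because R is a permutation matrix, it commutes with any element-wise nonlinearity, which is why the two trajectories coincide exactly.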
Design and modeling of dynamic neurons
By observing Eq. (6), a dynamic neuron for the proposed RNR should satisfy three important characteristics, as shown in Fig. 2b: it must provide a nonlinear activation function f(x); support integration, for the summation of the current input and the previous state a(k − 1); and support leakage, related to the decay factor d, to avoid saturation caused by the integration process. Any passive element that exhibits these three characteristics could in principle be used as a dynamic neuron in the RNR architecture by fine-tuning the time constants of the neurons and rotors. A dynamic node operating in a physical reservoir may suffer from device variation, which impacts system performance. Previous studies have revealed that a certain degree of device variation may benefit system performance by enhancing state richness^{16,17}, but precisely controlling device variability warrants future exploration.
In an implementation using standard electronics (Fig. 2c), a ReLU-type nonlinear transform can be provided by a diode, and the resistor R_{int} and capacitor C_{int} act as an integrator. Leakage is introduced by connecting the node to ground via a large resistance R_{leakage}. In the simulation, this neuron can be modeled as
\[\frac{d{V}_{o}(t)}{dt}=\frac{{V}_{i}(t)-{V}_{o}(t)}{{R}_{int}{C}_{int}}-\frac{{V}_{o}(t)}{{R}_{leakage}{C}_{int}}-\frac{1}{{C}_{int}}{I}_{s}\left({e}^{\frac{{V}_{o}(t)}{{V}_{T}}}-1\right)\qquad (9)\]
where V_{i}(t) and V_{o}(t) denote the input and output voltages, respectively. The saturation current I_{s} and thermal voltage V_{T} stem from the Shockley diode equation \(I={I}_{s}({e}^{{V}_{D}/{V}_{T}}-1)\). Typical germanium-diode values of I_{s} = 25 × 10^{−9} A and V_{T} = 0.026 V were used in the simulation. In the case of linear neurons, the last term \(\frac{1}{{C}_{int}}{I}_{s}({e}^{{V}_{o}(t)/{V}_{T}}-1)\) is removed from Eq. (9).
In our simulation, Eq. (9) was solved in MATLAB/Simulink. The discrete neuron output in Eq. (6) becomes a(k) = V_{o}(kτ_{r}). The pre- and post-neuron rotors are modeled by continuously shifting W_{in}u(k) and the neuron output a(k). Since R_{leakage} is a large resistance, the time constant of this neuron is mainly determined by the integrator, τ_{n} = R_{int}C_{int}. For the rotation period τ_{r}, we normally use an empirical value of τ_{r} = τ_{n}/8.
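As a cross-check of the Simulink model, Eq. (9) can also be integrated with a simple forward-Euler scheme. The sketch below is our own illustration: it uses the germanium-diode constants from the text, while the component values for R_{int}, C_{int}, and R_{leakage} are assumed choices that give τ_{n} = 1 s:

```python
import numpy as np

# Assumed component values giving tau_n = R_int * C_int = 1 s
R_int, C_int, R_leak = 1.0e6, 1.0e-6, 1.0e8
I_s, V_T = 25e-9, 0.026          # germanium-diode constants from the text
tau_n = R_int * C_int
tau_r = tau_n / 8                # empirical rotation period, tau_n / 8
dt = 1e-4                        # Euler time step (s)

def neuron_response(V_i, t_end):
    """Integrate the dynamic-neuron ODE (Eq. (9)) for a constant input V_i."""
    V_o, t = 0.0, 0.0
    while t < t_end:
        dV = ((V_i - V_o) / (R_int * C_int)           # RC integration
              - V_o / (R_leak * C_int)                # leakage to ground
              - I_s * (np.exp(V_o / V_T) - 1.0) / C_int)  # diode nonlinearity
        V_o += dt * dV
        t += dt
    return V_o

# Neuron state after one rotation period; the diode clamps the output
# well below the 0.5 V input
print(neuron_response(0.5, tau_r))
```

Dropping the diode term recovers the linear leaky-integrator neuron mentioned in the text.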
Parameter matching
It has been analytically proven that a physical RNR can perform the same functionality as a CR (Eq. (8)). Therefore, given a properly configured RNR, its CR counterpart should exist and exhibit similar network characteristics. Parameter matching provides a numerical method for determining this CR counterpart. The main difference between a hardware RNR and a software CR is associated with the nonideal dynamic neurons, which result in different amplitude ranges for integration and different nonlinearities. Therefore, the objective is to find the appropriate scaling coefficients for the software activation function that approximate the hardware neuron output under the same input W_{in}u(k). An arbitrary u(k) was generated as input to the RNR, and the neuron output a(k) was obtained. Assuming that this a(k) is generated by a software CR, a comparative neuron update vector can be defined as
\[{a}_{p}(k+1)={f}_{ReLU}\left(\alpha \,a(k)+\beta \,{W}_{in}\,u(k+1)\right),\quad {f}_{ReLU}(x)={{\max }}(x,{V}_{c})\qquad (10)\]
where a_{p} is the candidate neuron output sequence for recurrent scaling factor α, input scaling factor β, and ReLU cutoff value V_{c}. For certain values of α, β, and V_{c}, the CR matches the RNR if the resulting a_{p}(k) is close to a(k) for all k. Hence, the CR counterpart of an RNR can be found by matching the three parameters. First, V_{c} reflects the threshold voltage of the diode, unlike the ideal ON/OFF case of the software ReLU function (V_{c} = 0); its value can be obtained from the minimum of the a(k) sequence. Second, α and β are determined by searching over candidate values for those that minimize the NRMSE between a(k) and a_{p}(k), which can be written as \({{\min }}_{\alpha ,\beta }\,NRMSE\left(a(k),{a}_{p}(k+1;\alpha ,\beta ,{V}_{c})\right)\). For example, for an RNR with τ_{n} = 1 s, τ_{r} = 0.125 s, and γ = 0.5, the matched CR parameters are α = 0.87, β = 0.12, and V_{c} = −0.18, and the corresponding MC values are compared in Fig. 3a.
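The parameter search described above can be sketched as a plain grid search. The code below is our illustration, with a scalar neuron sequence and hypothetical helper names (`nrmse`, `match_parameters`); the α-recurrent/β-input roles follow the matched example:

```python
import numpy as np

def nrmse(y, y_hat):
    """Normalized root-mean-square error between two sequences."""
    return np.sqrt(np.mean((y - y_hat) ** 2) / np.var(y))

def match_parameters(a, Wu, V_c, alphas, betas):
    """Grid search for the CR scaling factors (alpha, beta) whose one-step
    update (Eq. (10)) best reproduces the measured neuron sequence a(k)."""
    best = (np.inf, None, None)
    for alpha in alphas:
        for beta in betas:
            a_p = np.maximum(alpha * a[:-1] + beta * Wu[1:], V_c)
            err = nrmse(a[1:], a_p)
            if err < best[0]:
                best = (err, alpha, beta)
    return best

# Demo: recover known parameters from a synthetically generated sequence
rng = np.random.default_rng(1)
Wu = rng.uniform(-1, 1, 200)          # stands in for the sequence W_in u(k)
a = np.zeros(200)
for k in range(199):
    a[k + 1] = max(0.8 * a[k] + 0.2 * Wu[k + 1], -0.1)

err, alpha, beta = match_parameters(a, Wu, V_c=-0.1,
                                    alphas=np.linspace(0.5, 0.95, 10),
                                    betas=np.linspace(0.05, 0.5, 10))
print(alpha, beta)   # recovers 0.8 and 0.2
```

In practice, V_{c} would first be fixed from the minimum of the measured a(k), after which only α and β are searched, as in the text.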
Handwritten vowel recognition using an eRNR
The parameters of the eRNR used in the handwritten vowel recognition task are τ_{n} = 1 s, τ_{r} = 0.1 s, N = 8, and M = 4 (for each of the X and Y channels). All data were collected with our customized platform. In total, 66 data streams, comprising the two-axis input signals and 64 reservoir state channels, were collected at each time step. During data collection, eight participants were asked to write the five vowels on a resistive touch screen, repeating each vowel at least 20 times. Data for 1103 handwritten vowels (2802 s) were successfully collected. The location and class of each handwritten vowel were labeled at the final rising/falling edge of the X and Y raw data; that is, we labeled the end of each handwritten vowel (the blue square in Fig. 5d), where the state matrix at this time step contains the information of the handwritten trace because of the MC. Specifically, the 64 × 1 state vector collected at the time denoted by the green dot can be considered a feature vector for the corresponding handwritten trace.
After data collection and labeling, the database was divided into a training set (400 handwritten vowels; 1025.8 s) and a testing set (703 handwritten vowels; 1776.2 s). According to the point-by-point computation introduced above, the training label matrix Y_{train} for the five classes is a five-dimensional data stream in which only the labeled end-point locations are set to 1 and all other points are set to 0. To train W_{out} (64 × 5), ridge regression was used with target = Y_{train} (five-dimensional label stream over 1025.8 s) and variables = S_{train} (64-dimensional state vectors over 1025.8 s). Next, W_{out} was multiplied by the test state matrix (Y_{test}' = S_{test} × W_{out}) to obtain a five-dimensional output representing the likelihood of the five classes at each time step, corresponding to the graphs in Fig. 5f. To quantify the classification accuracy, the predicted output for the testing set, Y_{test}', was compared with the manually labeled locations Y_{test}. For every labeled location in a handwritten event, for example k = k_{x}, the predicted output was searched for its maximum value in the range y_{test}'(k_{x} − 7) to y_{test}'(k_{x} + 3). The channel holding the maximum value was taken as the predicted class.
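The readout training and windowed scoring described above can be sketched as follows (our own illustration; the array shapes follow the text, and the function names are hypothetical):

```python
import numpy as np

def train_readout(S_train, Y_train, ridge=1e-4):
    """Ridge regression for the output weights W_out.
    S_train: (T, 64) reservoir states; Y_train: (T, 5) one-hot label stream."""
    A = S_train.T @ S_train + ridge * np.eye(S_train.shape[1])
    return np.linalg.solve(A, S_train.T @ Y_train)      # W_out: (64, 5)

def classify_event(Y_pred, k_x, lo=7, hi=3):
    """Predicted class at a labeled end point k_x: the channel holding the
    maximum output inside the window [k_x - lo, k_x + hi]."""
    window = Y_pred[max(k_x - lo, 0): k_x + hi + 1]
    return int(np.unravel_index(np.argmax(window), window.shape)[1])

# Synthetic check: recover a known readout from noiseless states
rng = np.random.default_rng(3)
S = rng.normal(size=(200, 8))
W_true = rng.normal(size=(8, 3))
W_hat = train_readout(S, S @ W_true, ridge=1e-9)
print(np.allclose(W_hat, W_true, atol=1e-4))   # True
```

The ridge term stabilizes the regression against correlated state channels; the window sizes (7 steps back, 3 forward) are taken directly from the scoring rule in the text.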
Memristor-based output layer
Memristor-based analog computing has displayed excellent potential in neuromorphic computing. While the input and reservoir layers are established based on the eRNR design, the output layer, which performs standard vector-matrix multiplication, can be effectively implemented by a memristor array for end-to-end all-analog computing^{42,43}. The memristor array has a one-transistor-one-resistor (1T1R) unit cell. Each 1T1R cell consists of a resistive switching memristor with a material stack of TiN/HfO_{x}/TaO_{y}/TiN connected to a Si transistor fabricated in a standard 130 nm Si CMOS process^{44,45}. A description of the memristor array can be found in Supplementary Fig. 2. As described in the main text, we used 640 memristors in total to represent the 320 weights of the output layer. The computation principle of memristor-based analog computing can be expressed as I = V × G = V × (G_{p} − G_{n}), where the conductance G represents the weight matrix W, and G_{p} and G_{n} are the positive and negative conductance matrices, respectively. Furthermore, we use a standard write-with-verify scheme to map the weight matrix W_{out} to the conductances of the memristor array^{34}.
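The differential conductance mapping I = V(G_{p} − G_{n}) can be illustrated with a short sketch. This is our own idealized version, with an assumed programmable conductance range; the real array uses write-with-verify programming rather than this direct mapping:

```python
import numpy as np

def to_conductance_pair(W, g_min=1e-6, g_max=1e-4):
    """Map a signed weight matrix onto differential conductances (G_p, G_n),
    each within an assumed programmable range [g_min, g_max] (siemens)."""
    scale = (g_max - g_min) / np.abs(W).max()
    G_p = np.where(W > 0, g_min + W * scale, g_min)   # positive weights
    G_n = np.where(W < 0, g_min - W * scale, g_min)   # negative weights
    return G_p, G_n, scale

def vmm(V, G_p, G_n):
    """Analog vector-matrix multiply: output currents I = V (G_p - G_n)."""
    return V @ (G_p - G_n)

rng = np.random.default_rng(2)
W = rng.normal(size=(64, 5))                 # e.g., the trained W_out
G_p, G_n, scale = to_conductance_pair(W)     # 2 devices per weight: 640 cells
V = rng.uniform(0, 0.2, 64)                  # state voltages applied to rows
print(np.allclose(vmm(V, G_p, G_n), (V @ W) * scale))   # True
```

Since G_{p} − G_{n} = W × scale exactly in this idealization, the analog output currents are a scaled copy of the digital matrix product; device variation is what the noise-aware training in the main text compensates for.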
Power estimation
As shown in Fig. 2a, the neurons, as passive components, are driven directly by the negative and positive sensory signals, which thus serve as the power source P_{s}. The energy consumed by the counter and transmission gates depends not only on their static power but also on the rotation period τ_{r}. The total power consumption P of a system consisting of M 8-neuron eRNRs (the number of neurons N is fixed at 8) can be expressed as
\[P={P}_{c}+\frac{{E}_{c}^{dyn}}{{\tau }_{r}}+M\left({P}_{s}+{P}_{t}+\frac{{E}_{t}^{dyn}}{{\tau }_{r}}\right)+\frac{{E}_{m}^{dyn}}{{\tau }_{r}}\qquad (11)\]
where P_{c} and P_{t} represent the static power of the counter and transmission gates, respectively, and \({E}_{c}^{{dyn}}\) and \({E}_{t}^{{dyn}}\) represent the dynamic energy dissipated in the transition region, driven at the rotation rate 1/τ_{r}. \({E}_{m}^{{dyn}}\) is the energy consumed in the output layer (memristor array) for one inference. The M parallel eRNRs can share one counter, but the power of the other components scales with the number of parallel eRNRs M. For our application involving real-time handwritten signals, the operation period τ_{r} is relatively slow (0.1 s) to match the time scale of human writing.
The simulation shows that P_{s} = 3.27 μW, P_{c} = 0.93 μW, and P_{t} = 0.70 μW, regardless of how fast the rotors operate. The energies related to the rotation rate are \({E}_{c}^{{dyn}}\) = 0.31 pJ and \({E}_{t}^{{dyn}}\) = 0.07 pJ. For the memristor-based output layer, the power dissipated by the voltage buffer driving the memristor array and by the array itself is 144 μW and 0.8 μW, respectively. During every τ_{r}, only a single inference is needed, since all state channels increase or decrease monotonically. The memristor array takes ~50 ns to respond to the state voltage. Therefore, the dynamic energy of the memristor array for every inference step is \({E}_{m}^{{dyn}}\) = (144 μW + 0.8 μW) × 50 ns × 64 = 463.36 pJ per classification. The total power consumption of an 8 × 8 eRNR can then be calculated using Eq. (11). The simulated power breakdown at different frequencies is shown in Supplementary Table 2. Notably, this result also reveals that the power does not increase considerably at rotation rates (1/τ_{r}) below 100 kHz, since static power dissipation dominates the system.
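Plugging the reported component values into the Eq. (11) power budget reproduces the quoted system power. This is a quick arithmetic check, not the authors' circuit simulation; the grouping of terms follows the counter-sharing described in the text:

```python
# Power budget of Eq. (11), using the component values reported in the text
M = 8                     # parallel 8-neuron eRNRs (the 8 x 8 configuration)
tau_r = 0.1               # rotation period in seconds (10 Hz operation)
P_s, P_c, P_t = 3.27e-6, 0.93e-6, 0.70e-6        # static powers (W)
E_c, E_t, E_m = 0.31e-12, 0.07e-12, 463.36e-12   # dynamic energies (J)

# One shared counter; neuron and transmission-gate terms scale with M
P = P_c + E_c / tau_r + M * (P_s + P_t + E_t / tau_r) + E_m / tau_r
print(round(P * 1e6, 1))  # total power in microwatts: 32.7
```

At τ_{r} = 0.1 s the dynamic terms contribute only nanowatts, consistent with the statement that static power dominates below 100 kHz.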
Data availability
The source data for Figs. 2–5 are provided in separate Source Data files. Other data that support the findings of this study are available from the corresponding authors upon reasonable request. Source data are provided with this paper.
Code availability
The code for the eRNR simulator and NARMA10 task is available at https://github.com/TsinghuaLEMONLab/Rotatingneuronsreservoir/ (https://doi.org/10.5281/zenodo.5909080). Other code that supports the findings of this study is available from the corresponding authors upon reasonable request.
References
Jaeger, H. The "echo state" approach to analysing and training recurrent neural networks - with an erratum note. GMD Technical Report 148, German National Research Center for Information Technology, Bonn, Germany (2001).
Jaeger, H. & Haas, H. Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science 304, 78–80 (2004).
Maass, W., Natschläger, T. & Markram, H. Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput. 14, 2531–2560 (2002).
Lukoševičius, M. & Jaeger, H. Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 3, 127–149 (2009).
Rodan, A. & Tino, P. Minimum complexity echo state network. IEEE Trans. Neural Netw. 22, 131–144 (2011).
Pathak, J., Hunt, B., Girvan, M., Lu, Z. & Ott, E. Model-free prediction of large spatiotemporally chaotic systems from data: a reservoir computing approach. Phys. Rev. Lett. 120, 024102 (2018).
Appeltant, L. et al. Information processing using a single dynamical node as complex system. Nat. Commun. 2, 468 (2011).
Tanaka, G. et al. Recent advances in physical reservoir computing: a review. Neural Netw. 115, 100–123 (2019).
Torrejon, J. et al. Neuromorphic computing with nanoscale spintronic oscillators. Nature 547, 428 (2017).
Brunner, D., Soriano, M. C., Mirasso, C. R. & Fischer, I. Parallel photonic information processing at gigabyte per second data rates using transient states. Nat. Commun. 4, 1–7 (2013).
Larger, L. et al. High-speed photonic reservoir computing using a time-delay-based architecture: million words per second classification. Phys. Rev. X 7, 011015 (2017).
Paquot, Y. et al. Optoelectronic reservoir computing. Sci. Rep. 2, 287 (2012).
Sun, L. et al. In-sensor reservoir computing for language learning via two-dimensional memristors. Sci. Adv. 7, eabg1455 (2021).
Antonik, P., Marsal, N., Brunner, D. & Rontani, D. Human action recognition with a large-scale brain-inspired photonic computer. Nat. Mach. Intell. 1, 530–537 (2019).
Nakajima, K., Fujii, K., Negoro, M., Mitarai, K. & Kitagawa, M. Boosting computational power through spatial multiplexing in quantum reservoir computing. Phys. Rev. Appl. 11, 034021 (2019).
Zhong, Y. et al. Dynamic memristor-based reservoir computing for high-efficiency temporal signal processing. Nat. Commun. 12, 408 (2021).
Moon, J. et al. Temporal data classification and forecasting using a memristor-based reservoir computing system. Nat. Electron. 2, 480–487 (2019).
Du, C. et al. Reservoir computing using dynamic memristors for temporal information processing. Nat. Commun. 8, 2204 (2017).
Lilak, S. et al. Spoken digit classification by in-materio reservoir computing with neuromorphic atomic switch networks. Front. Nanotechnol. 3, 38 (2021).
Nakajima, K. et al. A soft body as a reservoir: case studies in a dynamic model of octopus-inspired soft robotic arm. Front. Comput. Neurosci. 7, 1–19 (2013).
Soriano, M. C. et al. Delay-based reservoir computing: noise effects in a combined analog and digital implementation. IEEE Trans. Neural Netw. Learn. Syst. 26, 388–393 (2015).
Duport, F., Schneider, B., Smerieri, A., Haelterman, M. & Massar, S. All-optical reservoir computing. Opt. Express 20, 22783–22795 (2012).
Duport, F., Smerieri, A., Akrout, A., Haelterman, M. & Massar, S. Fully analogue photonic reservoir computer. Sci. Rep. 6, 22381 (2016).
Kendall, J. D. & Kumar, S. The building blocks of a brain-inspired computer. Appl. Phys. Rev. 7, 011305 (2020).
Covi, E. et al. Adaptive extreme edge computing for wearable devices. Front. Neurosci. 15, 429 (2021).
Kuriki, Y., Nakayama, J., Takano, K. & Uchida, A. Impact of input mask signals on delay-based photonic reservoir computing with semiconductor lasers. Opt. Express 26, 5777–5788 (2018).
Ortín, S. et al. A unified framework for reservoir computing and extreme learning machines based on a single time-delayed neuron. Sci. Rep. 5, 14945 (2015).
Inubushi, M. & Yoshimura, K. Reservoir computing beyond memory-nonlinearity trade-off. Sci. Rep. 7, 10199 (2017).
Appeltant, L. Reservoir Computing Based on Delay-Dynamical Systems. Doctoral thesis (2012).
Indiveri, G. & Liu, S. Memory and information processing in neuromorphic systems. Proc. IEEE 103, 1379–1397 (2015).
Jaeger, H. Adaptive nonlinear system identification with echo state networks. Adv. Neural Inf. Process. Syst. 15, 609–616 (2002).
Zhu, R. et al. Harnessing adaptive dynamics in neuro-memristive nanowire networks for transfer learning. In 2020 International Conference on Rebooting Computing (ICRC) 102–106 (IEEE, 2020).
Zhou, F. & Chai, Y. Near-sensor and in-sensor computing. Nat. Electron. 3, 664–671 (2020).
Yao, P. et al. Fully hardware-implemented memristor convolutional neural network. Nature 577, 641–646 (2020).
Liu, Z. et al. Neural signal analysis with memristor arrays towards high-efficiency brain–machine interfaces. Nat. Commun. 11, 4234 (2020).
Joshi, V. et al. Accurate deep neural network inference using computational phase-change memory. Nat. Commun. 11, 2473 (2020).
Kariyappa, S. et al. Noise-resilient DNN: tolerating noise in PCM-based AI accelerators via noise-aware training. IEEE Trans. Electron Devices 68, 4356–4362 (2021).
Alomar, M. L. et al. Efficient parallel implementation of reservoir computing systems. Neural Comput. Appl. 32, 2299–2313 (2020).
Kleyko, D., Frady, E. P., Kheffache, M. & Osipov, E. Integer echo state networks: efficient reservoir computing for digital hardware. IEEE Trans. Neural Netw. Learn. Syst. 1–14 (2020).
Alomar, M. L. et al. Digital implementation of a single dynamical node reservoir computer. IEEE Trans. Circuits Syst. II Express Briefs 62, 977–981 (2015).
Wang, W., Liang, X., Assaad, M. & Heidari, H. Wearable wrist-worn gesture recognition using echo state network. In 2019 IEEE International Conference on Electronics, Circuits and Systems 875–878 (IEEE, 2019).
Yu, J. et al. Energy efficient and robust reservoir computing system using ultrathin (3.5 nm) ferroelectric tunneling junctions for temporal data learning. in 2021 Symposium on VLSI Technology. 1–2 (IEEE, 2021).
Milano, G. et al. In materia reservoir computing with a fully memristive architecture based on self-organizing nanowire networks. Nat. Mater. (published online, 2021).
Wu, W. et al. A methodology to improve linearity of analog RRAM for neuromorphic computing. in 2018 IEEE Symposium on VLSI Technology. 103–104 (IEEE, 2018).
Liu, Z. et al. Multichannel parallel processing of neural signals in memristor arrays. Sci. Adv. 6, eabc4797 (2020).
Acknowledgements
This work was supported in part by China's Key Research and Development Program (2021ZD0201205, H.W.), the Natural Science Foundation of China (91964104, J.T.; 61974081, J.T.; 62025111, H.W.; 62104126, Y.Z.; 92064001, B.G.), the XPLORER Prize (H.W.), and the UK EPSRC under grant EP/W522168/1 (H.H.). We thank Beijing IECUBE Technology Co., Ltd. for their generous support of the testing system.
Author information
Contributions
X.L., Y.Z., and J.T. conceived and designed the experiments. X.L. set up the simulation and hardware prototype and conducted the experiments. Z.L., X.L., P.Y., and K.S. contributed to the memristor array measurement and power analysis. X.L., Q.Z., B.G., and H.Q. contributed to the data analysis. X.L. and J.T. wrote the paper with input from H.H. All authors discussed the results and commented on the paper. J.T. and H.W. supervised the project.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Communications thanks Zdenka Kuncic, Serge Massar, and the other anonymous reviewer(s) for their contribution to the peer review of this work. Peer review reports are available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Liang, X., Zhong, Y., Tang, J. et al. Rotating neurons for all-analog implementation of cyclic reservoir computing. Nat. Commun. 13, 1549 (2022). https://doi.org/10.1038/s41467-022-29260-1
DOI: https://doi.org/10.1038/s41467-022-29260-1