Abstract
Chaos, or exponential sensitivity to small perturbations, appears everywhere in nature. Moreover, chaos is predicted to play diverse functional roles in living systems. A method for detecting chaos from empirical measurements should therefore be a key component of the biologist’s toolkit. But classic chaos-detection tools are highly sensitive to measurement noise and break down for common edge cases, making it difficult to detect chaos in domains, like biology, where measurements are noisy. However, newer tools promise to overcome these limitations. Here, we combine several such tools into an automated processing pipeline, and show that our pipeline can detect the presence (or absence) of chaos in noisy recordings, even for difficult edge cases. As a first-pass application of our pipeline, we show that heart rate variability is not chaotic as some have proposed, and instead reflects a stochastic process in both health and disease. Our tool is easy-to-use and freely available.
Introduction
A remarkable diversity of natural phenomena are thought to be chaotic. Formally, a system is chaotic if it is bounded (meaning that, like a planet circling a star, its dynamics stay inside an orbit rather than escaping off to infinity), and if it is deterministic (meaning that, with the exact same initial conditions, it will always evolve over time in the same way), and if tiny perturbations to the system get exponentially amplified (Glossary (Supplementary Information), Supplementary Figs. 1, 2). The meteorologist Edward Lorenz famously described this phenomenon as the butterfly effect: in a chaotic system, something as small as the flapping of a butterfly’s wings can cause an effect as big as a tornado. This conceptually simple phenomenon—i.e., extreme sensitivity to small perturbations—is thought to appear everywhere in nature, from cosmic inflation^{1}, to the orbit of Hyperion^{2}, to the Belousov–Zhabotinskii chemical reaction^{3}, to the electrodynamics of the stimulated squid giant axon^{4}. These are only a few examples of the many places in nature where chaos has been found.
It is relatively simple to determine if a simulated system is chaotic: just run the simulation a few times, with very slightly different initial conditions, and see how quickly the simulations diverge (Supplementary Fig. 1). But, if all that is available are measurements of how a real, non-simulated system evolves over time—e.g., how a neuron’s membrane potential changes over time, or how the brightness of a star changes over time—how can it be determined if those observations come from a chaotic system? Or if they are just noise? Or if the system is in fact periodic (Glossary, Supplementary Figs. 1, 2), meaning that, like a clock, small perturbations do not appreciably influence its dynamics?
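This divergence test is easy to illustrate. The following minimal Python sketch (illustrative only; the parameter values are our own choices) iterates the logistic map, a textbook chaotic system at r = 4, from two initial conditions that differ by one part in ten billion, and tracks their separation:

```python
import numpy as np

def logistic_trajectory(x0, r=4.0, n=50):
    """Iterate the logistic map x -> r*x*(1-x), which is chaotic at r = 4."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

# Two trajectories whose initial conditions differ by one part in 10^10
a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)
separation = np.abs(a - b)
# For a chaotic system, the separation grows roughly exponentially
# (at a rate set by the largest Lyapunov exponent) until it saturates
# at the size of the attractor.
```

After a few dozen iterations, the initially microscopic difference has grown to the scale of the attractor itself; for a periodic system, by contrast, the separation would stay small.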
While a reliable method for detecting chaos using empirical recordings should be an essential part of any scientist’s toolbox, such a tool might be especially helpful to biologists, as chaos is predicted to play an important functional role in a wide variety of biological processes (that said, we note that real biological systems cannot be purely chaotic in the strict mathematical sense, since they certainly contain some level of dynamic noise—see Glossary—but that researchers have long speculated that many biological processes are still predominantly deterministic, but chaotic^{5}). For example, following early speculations about the presence of chaos in the electrodynamics of both cardiac^{6} and neural^{7} tissue, the science writer Robert Pool posited in 1989 that “chaos may provide a healthy flexibility for the heart, brain, and other parts of the body.”^{8} Though this point has been intensely debated since the 1980s^{5,9}, a range of more specific possible biological functions for chaos have since been proposed, including potentially maximizing the information processing capacity of both neural systems^{10} and gene regulatory networks^{11}, enabling multistable perception^{12}, allowing neural systems to flexibly transition between different activity patterns^{13}, and boosting cellular survival rates through the promotion of heterogeneous gene expression^{14}. And there is good reason to expect chaos to exist in biological systems, as a large range of simulations of biological processes^{15}, and in particular of neural systems^{9}, show clear evidence of chaos. Moreover, unambiguous evidence of biological chaos has been found in a very small number of real cases that were amenable to comparison to good theoretical models; these include periodically stimulated squid giant axons^{4} and cardiac cells^{16}, as well as the discharges of the Onchidium pacemaker neuron^{17} and the Nitella flexillis internodal cell^{18}.
But, beyond simulations and these select empirical cases, most attempts to test the presence or predicted functions of chaos in biology have fallen short due to high levels of measurement noise (Glossary) in biological recordings. For this reason, it has long been recognized that biologists need a noise-robust tool for detecting the presence (or absence) of chaos in their noisy empirical data^{9,15}.
Researchers also need a tool that can detect varying degrees of chaos (Glossary) in noisy recordings. In strongly chaotic systems, initially similar system states diverge faster than they do in weakly chaotic systems. And such varying degrees of chaos are predicted to occur in biology, with functional consequences. For example, a model of white blood cell concentrations in chronic granulocytic leukemia can display varying levels of chaos, and knowing how chaotic those concentrations are in actual leukemia patients could have important implications for health outcomes^{19}. As another example, models of the human cortex predict that macroscale cortical electrodynamics should be weakly chaotic during waking states and strongly chaotic under propofol anesthesia^{20}; if this prediction is true, then detecting changing levels of chaos in large-scale brain activity could be useful for monitoring depth of anesthesia and for basic anesthesia research. Thus, it is imperative to develop tools that can not only determine whether an experimental system is chaotic, but also assess changing levels of chaos in that system.
Although classic tools for detecting the presence and degree of chaos in data are slow, require large amounts of data, are highly sensitive to measurement noise, and break down for common edge cases, more recent mathematical research has provided new, more robust tools for detecting chaos or a lack thereof in noisy time-series recordings. Here, for the first time (to our knowledge), we combine several key mathematical tools into a single, fully automated Matlab processing pipeline, which we call the Chaos Decision Tree Algorithm^{21} (Fig. 1). The Chaos Decision Tree Algorithm takes a single time-series of any type—be it recordings of neural spikes, time-varying transcription levels of a particular gene, fluctuating oil prices, or recordings of stellar flux—and classifies those recorded data as coming from a system that is predominantly (or “operationally”^{22}) stochastic, periodic, or chaotic. The algorithm requires no input from the user other than a time-series recording, though we have structured our code such that users can also select from among a number of alternative subroutines (see Methods section, Fig. 1).
In this paper, we show that the Chaos Decision Tree Algorithm performs with very high accuracy across a wide variety of both real and simulated systems, even in the presence of relatively high levels of measurement noise. Moreover, our pipeline can accurately track changing degrees of chaos (e.g., transitions from weak to strong chaos). With an eye toward applications to biology, the simulated systems we tested included a high-dimensional mean-field model of cortical electrodynamics, a model of a spiking neuron, a model of white blood cell concentrations in chronic granulocytic leukemia, and a model of the transcription of the NF-κB protein complex. We also tested the algorithm on a wide variety of nonbiological simulations, including several difficult edge cases; these included strange nonchaotic systems, quasiperiodic systems, colored noise, and nonlinear stochastic systems (see Glossary for definitions of these terms), which are all classically difficult to distinguish from chaotic systems^{23,24,25,26}. We also tested the algorithm on a hyperchaotic system (Glossary), which can be difficult to distinguish from noise^{25}, as well as on several nonstationary processes (Glossary) in order to test the robustness of the algorithm against nonstationarity. Finally, we tested the Chaos Decision Tree Algorithm on several empirical (i.e. non-simulated) datasets for which the ground-truth presence or absence of chaos has been previously established by other studies. These included an electronic circuit in periodic, strange nonchaotic, and chaotic states^{27}, a chaotic laser^{28}, the stellar flux of a strange nonchaotic star^{29}, the linear/stochastic North Atlantic Oscillation index^{30}, and nonlinear/stochastic Parkinson’s and essential tremors^{26}. Overall, our pipeline performed with near-perfect accuracy in classifying these data as stochastic, periodic, or chaotic, as well as in tracking changing degrees of chaos in both real and simulated systems.
Finally, we applied our algorithm to electrocardiogram recordings from healthy subjects, patients with congestive heart failure, and patients with atrial fibrillation^{31}, and provide evidence that heart rate variability reflects a predominantly stochastic, rather than chaotic process.
We have made our Matlab code freely and publicly available at https://figshare.com/s/80891dfb34c6ee9c8b34.
Results
The Chaos Decision Tree Algorithm^{21} is depicted graphically in Fig. 1. The pipeline consists of four steps: (1) Determine if the data are stochastic using permutation entropy^{32} and a combination of Amplitude Adjusted Fourier transform surrogates^{33,34} and Cyclic Phase Permutation surrogates^{34,35} (Glossary), (2) Denoise the data using the Schreiber denoising algorithm^{36} (Glossary), (3) Correct for possible signal oversampling, and (4) Test for chaos using a modified 0–1 test for chaos^{23,37,38,39,40} (Glossary). For each step of the processing pipeline, we compared the performance of different available tools (i.e., different surrogate-based tests for stochasticity, different denoising methods, different downsampling methods, and different chaos-detection methods), and chose the tools with the highest classification performance (Supplementary Tables 1–14). Note that with user input, the Chaos Decision Tree Algorithm can use any of the alternative tools tested here, and that with no user input other than a time-series recording, the algorithm will automatically use the tools we found maximized its performance. All results reported in the main body of this paper are for this automated set of high-performing tools. See Supplementary Fig. 3 for example time traces illustrating each step of the algorithm.
We tested the performance of the (automated) Chaos Decision Tree Algorithm in detecting the presence and degree of chaos in a wide range of simulated and empirical systems for which the ground-truth presence of chaos, periodicity, or stochasticity has already been established. Details about each dataset and how the ground-truth presence or absence of chaos in those systems was previously determined are included in the Methods. Note that some systems are labeled “SNA,” which is an abbreviation for “strange nonchaotic attractor” (Glossary). These are systems whose attractors in phase space (Glossary) are fractal (like chaotic systems), but which are periodic (i.e., nonchaotic). Among these, we included the only known non-artificial strange nonchaotic system, the stellar flux of the so-called golden star KIC 5520878, as recorded by the Kepler space telescope^{29}. All simulated datasets consisted of 10,000 timepoints, and all initial conditions were randomized. For systems with more than one variable, we here report results for linear combinations of those variables (see Methods section), under the assumption that in most real-life cases, empirical recordings will contain features of multiple components of the system of study; that said, we also confirmed that the Chaos Decision Tree Algorithm has very high performance for individual system variables (Supplementary Table 15).
Results for simulations of biological systems are reported in Table 1, and results for nonbiological simulations are reported in Table 2. Note that no measurement noise was added to the colored noise signals in Table 2, as doing so would flatten their power spectra. Because the datasets in Tables 1 and 2 were used to choose between alternative methods for detecting stochasticity (Supplementary Tables 1–4), denoising (Supplementary Table 5), downsampling (Supplementary Table 6), and alternative tests of chaos (Supplementary Tables 8–13), as well as to optimize the 0–1 test for chaos (Supplementary Figs. 4–6), we further tested the Chaos Decision Tree Algorithm on held-out datasets, which were not used to adjudicate between alternative tools. These held-out datasets included both simulated systems (Table 3) and recordings from real (non-simulated) systems (Table 4). Several of these held-out datasets were of direct biological relevance: the periodically stimulated Poincaré oscillator in Table 3 is thought to be a good model of cardiac cell electrodynamics^{41}, which, like the Poincaré oscillator, are chaotic when periodically stimulated with certain delays between stimulation pulses^{16}; the integrated circuit in Table 4 is a physical implementation of equations that are based on the Hodgkin–Huxley neuron model^{42}; and the tremor signals in Table 4 are direct recordings from patients. The Chaos Decision Tree Algorithm classified the systems in Tables 1–4 as stochastic, periodic, or chaotic with near-perfect accuracy even at high levels of measurement noise, with the exception of the noise-driven sine map (Table 2)—see Discussion section.
Finally, we tested the performance of the Chaos Decision Tree Algorithm on subsamples of all systems in Tables 1–3, and confirmed that it is still highly accurate for data with just 1000 timepoints (Supplementary Table 16) or 5000 timepoints (Supplementary Table 17), though we note that performance for some systems did go down with less data, which is to be expected^{40}.
Table 5 reports the accuracy of the Chaos Decision Tree Algorithm in detecting degree of chaos. Formally, a system’s degree of chaos is quantified by the magnitude of its largest Lyapunov exponent (Glossary). Unfortunately, largest Lyapunov exponents are very difficult to estimate from finite, noisy time-series recordings. But, directly estimating largest Lyapunov exponents may not be necessary for tracking changing degrees of chaos in real systems: following prior observations of a strong correlation between a quick-to-compute and noise-robust measure called permutation entropy (Glossary) and the largest Lyapunov exponents of several systems^{32,43}, the Chaos Decision Tree Algorithm approximates degree of chaos by calculating the permutation entropy of the inputted signal, after it has been denoised and corrected for possible oversampling. In agreement with prior findings, we found that permutation entropy tracked degree of chaos in the logistic map, the Hénon map, the Lorenz system, a high-dimensional mean-field model of the cortex, and an electronic circuit. See Methods for details on the parameters that were used to generate dynamics with different degrees of chaos in these systems, and for details on how ground-truth largest Lyapunov exponents were calculated. Note that without downsampling, the correlation between the largest Lyapunov exponents and permutation entropy breaks down in continuous systems (Supplementary Table 14), which is to be expected, as permutation entropy has only been analytically proven to track the degree of chaos in discrete-time systems^{32,44} (see Glossary).
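When a system's governing equations are known, as for the ground-truth maps referenced above, the largest Lyapunov exponent can be computed directly as the trajectory average of the log of the local stretching factor. A small illustrative Python sketch for the logistic map (where the exact exponent at r = 4 is ln 2 ≈ 0.693; parameter values here are our own choices):

```python
import math

def logistic_lyapunov(r, x0=0.3, n=100_000, burn=1_000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r*x*(1-x) as the trajectory average of ln|f'(x)| = ln|r*(1-2x)|."""
    x = x0
    for _ in range(burn):          # discard the initial transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

# At r = 4 the estimate converges to ln(2) ~ 0.693 (chaotic);
# at r = 3.2 the orbit is a stable period-2 cycle and the exponent is negative.
```

No comparably direct estimator exists for a noisy empirical recording, which is why a robust proxy such as permutation entropy is attractive.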
Finally, as a first-pass implementation of our method, we applied the Chaos Decision Tree Algorithm to recordings of human heart rate variability, made available by Physionet^{31}. There has been considerable debate over whether or not the irregularities of heart rate signals (in either health or disease) reflect a predominantly chaotic process. While many classic chaos-detection methods have identified heart rate variability as chaotic (see Glass^{5} for a review), other studies have argued that this is an erroneous classification, suggesting that heart rate variability is, in fact, a nonlinear stochastic process^{45,46}, and that prior classifications of heart rate signals as chaotic simply reflect the shortcomings of classic chaos-detection methods. In agreement with this view, we here show that the Chaos Decision Tree Algorithm classified heart rate signals from healthy subjects, congestive heart failure patients, and atrial fibrillation patients as stochastic, rather than chaotic, with the exception of two congestive heart failure patients (Table 6).
Discussion
In this paper, we have introduced a processing pipeline, called the Chaos Decision Tree Algorithm^{21}, that can accurately detect whether a time-series signal is generated by a predominantly stochastic, periodic, or chaotic system, and can also accurately track changing levels of chaos within a system using permutation entropy. The pipeline makes no assumptions about the input data. The Chaos Decision Tree Algorithm consists of four broad steps: (1) testing for stochasticity using surrogate data methods, (2) denoising, (3) downsampling if data are oversampled, and (4) testing for chaos using the modified 0–1 test. We tested the performance of several different surrogate data generation algorithms, denoising algorithms, downsampling algorithms, and parameters for the modified 0–1 test. Each alternative algorithm and parameter choice has its relative strengths and weaknesses, and we have structured our code such that a user can specify which algorithms and parameters to use for each step of the pipeline. If a user only inputs a time-series recording without specifying any subalgorithms or parameters, then our pipeline will automatically use the methods and parameters we found yielded the most accurate results across a large and diverse set of data. All analyses reported in the main body of this paper are for this automated set of subroutines.
We tested the (automated) Chaos Decision Tree Algorithm on a diverse range of simulations of biological systems, nonbiological simulations, and empirical (non-simulated) data recordings. Empirical data were recorded from an integrated circuit in a periodic, strange nonchaotic, and chaotic state, a chaotic laser, the stellar flux of a strange nonchaotic star, the North Atlantic Oscillation index, a Parkinson’s tremor, an essential tremor, and heart rate variability from congestive heart failure patients, atrial fibrillation patients, and healthy controls. In the cases for which the ground-truth was known (i.e., all datasets other than heart rate variability), the Chaos Decision Tree Algorithm performed at very high accuracy even at relatively high levels of measurement noise. For heart rate variability, our results support the hypothesis that cardiac rhythm variability is stochastic^{45,46}. Overall, these findings make us confident that the Chaos Decision Tree Algorithm can be fruitfully applied to biological and nonbiological signals contaminated by measurement noise.
We note a few limitations of our algorithm. First, the 0–1 test used in our pipeline might classify some very weakly chaotic systems (i.e., systems whose largest Lyapunov exponent is positive but very near zero) as periodic if the length of the time-series provided is short; but, with longer time-series, the test is guaranteed to provide accurate results^{40}. We also note that the algorithm performed poorly for the noise-driven sine map, which was consistently misclassified as chaotic (Table 2). It is possible that this system was not classified as stochastic because its level of intrinsic noise was very low; in support of this, we found that the Chaos Decision Tree Algorithm classified nonlinear dynamical systems with very low levels of intrinsic noise as deterministic, and that classifications of stochasticity became more frequent as the level of intrinsic noise was increased (Supplementary Table 18). It is also possible that this system is, in fact, an example of noise-induced chaos^{47}. Finally, although the choice of system observables did not appreciably affect the performance of our method (Supplementary Table 15), we agree with Letellier and colleagues^{48} that some system observables are better representations of a system’s dynamics than others, and that this can have important consequences for the accuracy of nonlinear time-series analysis methods such as this one. In light of these potential limitations, it bears re-emphasizing that the absence, presence, and degree of chaos can only be determined with absolute certainty in a computer model that is free of measurement noise, by running multiple simulations and seeing how quickly initially similar states diverge. Thus, although the Chaos Decision Tree Algorithm pipeline performs at very high accuracy, it should, when possible, be used in conjunction with analyses of a computer model of the system at hand.
We hope that the Chaos Decision Tree Algorithm will help advance the decades-old effort to bring the insights of chaos theory to biology. While a diverse range of biological simulations and a small number of real biological cases have been shown to be chaotic, detecting the presence and degree of chaos in biological recordings has been difficult. The Chaos Decision Tree Algorithm overcomes the difficulties of prior tests, by being fast, highly robust to measurement noise, and, unless the user specifies specific alternative subroutines, fully automated. We welcome any efforts to identify edge cases for which our pipeline systematically breaks down; given that our pipeline is a modular decision tree, new subroutines can be added to accommodate such edge cases. We hope that our pipeline (and perhaps future iterations of it) will be useful to any of the domains of science—and in particular of biology—in which chaos has been invoked, but not tested.
Methods
The Chaos Decision Tree Algorithm
To understand the logic of the Chaos Decision Tree Algorithm^{21}, we begin with the final test in the decision tree. The crux of the Chaos Decision Tree Algorithm is the 0–1 test for chaos, originally developed by Gottwald and Melbourne^{37}, who later offered a slightly modified version of the test that can cope with moderate amounts of measurement noise^{38}. Several years later, Dawes and Freeland further modified the test, such that it could suppress correlations induced by quasiperiodic dynamics, and thus more effectively distinguish between chaotic and strange nonchaotic dynamics, which are difficult to tell apart given only a time-series recording^{23}. The modified 0–1 test involves taking a one-dimensional time-series of interest \({\mathbf{\phi }}\), and using it to drive the following two-dimensional system:
$$p_{c,n+1} = p_{c,n} + \phi_n \cos(cn), \qquad q_{c,n+1} = q_{c,n} + \phi_n \sin(cn), \quad (1)$$

where \(c\) is a random value bounded between 0 and \(2\pi\). For a given \(c\), the solution to Eq. (1) yields:

$$p_{c,n} = \sum_{j=1}^{n} \phi_j \cos(jc), \qquad q_{c,n} = \sum_{j=1}^{n} \phi_j \sin(jc). \quad (2)$$
Gottwald and Melbourne show that if the inputted time-series \({\mathbf{\phi }}\) is regular, the motion of p and q is bounded, while p and q display asymptotic Brownian motion if \({\mathbf{\phi }}\) is chaotic. The time-averaged mean square displacement of p and q, plus the noise term proposed by Dawes and Freeland^{23}, is

$$D_c(n) = \lim_{N\to\infty} \frac{1}{N} \sum_{j=1}^{N} \left[ \left(p_{c,j+n} - p_{c,j}\right)^2 + \left(q_{c,j+n} - q_{c,j}\right)^2 \right] + \sigma \eta_n, \quad (3)$$
where \({\eta }_{n}\) is a uniformly distributed random variable between \([-\frac{1}{2},\frac{1}{2}]\) and \(\sigma\) is the noise level. Finally, the outputted \(K\)-statistic of the 0–1 test uses a correlation coefficient to measure the growth rate of the mean squared displacement of the two-dimensional system in Eq. (1):

$$K_c = \operatorname{corr}\left(\boldsymbol{\xi}, \boldsymbol{\Delta}\right), \qquad \boldsymbol{\xi} = (1, 2, \ldots, n_{\mathrm{cut}}), \quad \boldsymbol{\Delta} = \left(D_c(1), D_c(2), \ldots, D_c(n_{\mathrm{cut}})\right), \quad (4)$$

where \(n_{\mathrm{cut}} \ll N\) is the largest lag at which the mean squared displacement is evaluated.
\(K\) is computed for 100 different values of \(c\), randomly sampled between 0 and \(2\pi\), and the final output of the test is the median \(K\) across different values of \(c\). For chaotic systems, this median \(K\) value will approach 1, and for periodic systems, \(K\) will approach 0^{23,37,38,39,40}.
There are two parameters in this modified 0–1 test: the parameter \(\sigma\) that controls the level of added noise in Eq. (3), and the cutoff for what \(K\)-statistic values are classified as indicating chaos or periodicity in a finite time-series. We performed ROC-curve analyses for different values of \(\sigma\) and found that \(\sigma =0.5\) maximized classification performance across systems and noise levels (Supplementary Fig. 4), and so our pipeline automatically sets \(\sigma\) to 0.5 if \(\sigma\) is not specified by the user. Note that for nonzero values of \(\sigma\), \(K\) approaches zero as the standard deviation of a test signal approaches zero (Supplementary Fig. 5), and so the Chaos Decision Tree Algorithm multiplies a test signal by a constant to fix its standard deviation at 0.5 before applying the 0–1 test. A cutoff for K can also be inputted to our Matlab script, such that data that yield a K value greater than that cutoff are classified as chaotic and data that yield a K value less than or equal to that cutoff are classified as periodic. If no cutoff is provided, a cutoff is chosen based on an analysis of optimal cutoffs as a function of time-series length (Supplementary Fig. 6). If the automatically selected cutoff is greater than 0.99, the cutoff is set to \(K=0.99\), as \(K\) is upper-bounded by 1. We have confirmed that this automated cutoff selection yields highly accurate results for subsamples of both test and held-out datasets (Supplementary Tables 16, 17).
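The core of this procedure can be sketched compactly. The following Python sketch (our released pipeline is Matlab) implements the driven (p, q) system of Eq. (1), the noise-augmented mean squared displacement of Eq. (3), and the median-K summary, with \(\sigma = 0.5\) and the signal rescaled to standard deviation 0.5 as described above; details such as evaluating displacements only up to a lag of N/10 are illustrative assumptions, and the automated cutoff selection of the full pipeline is omitted:

```python
import numpy as np

def modified_01_test(phi, sigma=0.5, n_c=100, seed=0):
    """Sketch of the modified 0-1 test for chaos: returns the median
    K-statistic over n_c random values of c. K near 1 suggests chaos;
    K near 0 suggests regular (periodic or quasiperiodic) dynamics."""
    rng = np.random.default_rng(seed)
    phi = np.asarray(phi, dtype=float)
    phi = 0.5 * (phi - phi.mean()) / phi.std()   # fix the std at 0.5
    N = len(phi)
    n_cut = N // 10                # evaluate displacements up to N/10 lags
    lags = np.arange(1, n_cut + 1)
    j = np.arange(1, N + 1)
    ks = []
    for c in rng.uniform(0.0, 2.0 * np.pi, n_c):
        # drive the two-dimensional (p, q) system with the time series phi
        p = np.cumsum(phi * np.cos(j * c))
        q = np.cumsum(phi * np.sin(j * c))
        # time-averaged mean squared displacement at each lag,
        # plus the Dawes-Freeland noise term sigma * eta_n
        D = np.array([np.mean((p[n:] - p[:-n]) ** 2 + (q[n:] - q[:-n]) ** 2)
                      for n in lags])
        D += sigma * rng.uniform(-0.5, 0.5, n_cut)
        # K: correlation between lag and displacement (growth rate)
        ks.append(np.corrcoef(lags, D)[0, 1])
    return float(np.median(ks))
```

For a chaotic input such as the fully chaotic logistic map, p and q random-walk, the displacement grows linearly with lag, and the median K approaches 1; for a periodic input the displacement stays bounded and K stays near 0.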
The 0–1 test described above only yields accurate results for data that are deterministic^{24,40,62,63}. A system is considered deterministic if, given the exact same initial conditions, it always evolves over time the same way, whereas a system is considered stochastic if there is appreciable randomness built into its evolution over time (Glossary, Supplementary Figs. 1, 2). Not only are all chaotic systems (predominantly) deterministic—and thus the possibility of chaos can be automatically rejected if a system is found to be stochastic (though we note that a mathematically rigorous definition of chaos has recently been extended to the domain of stochastic systems, under the framework of the Supersymmetric Theory of Stochastics^{47})—but the 0–1 test is also known to incorrectly classify stochastic dynamics as chaotic^{24,62,63}. Thus, the Chaos Decision Tree Algorithm first rules out the possibility that data are predominantly stochastic before applying the modified 0–1 test. To do so, it uses a noise-robust method recently developed by Zunino and Kulp^{64}, which tests for determinism using surrogate statistics^{33}, with permutation entropy^{32} as the test statistic. The calculation of permutation entropy relies on two parameters: permutation order and time-lag. We follow the recommendation from Bandt and Pompe^{32} and set the time-lag to 1, and found that a permutation order of 8 maximized stochasticity detection accuracy (Supplementary Tables 2, 3). Moreover, we use a combination of amplitude adjusted Fourier transform surrogates^{33} and Cyclic Phase Permutation surrogates^{35}, unlike Zunino and Kulp, who used iterative amplitude adjusted Fourier transform^{33} surrogates, because we found that this combination led to far higher classification accuracy (Supplementary Tables 2, 3).
The Chaos Decision Tree Algorithm classifies data as stochastic (and thus does not proceed to subsequent steps) if the permutation entropy of the original data falls within either surrogate distribution. The algorithm uses the Toolboxes for Complex Systems implementation of the permutation entropy algorithm, written by Andreas Müller^{65}. Surrogates are generated using the Matlab toolbox recently released by Lancaster and colleagues^{34}. Note that because Fourier-based surrogates are strictly stationary, surrogate-based tests that use only Fourier-based algorithms are only valid if the test time-series is also stationary^{34,57}; that said, we found that nonstationarity did not affect the accuracy of a stochasticity test that uses a combination of amplitude adjusted Fourier transform and Cyclic Phase Permutation surrogates (Supplementary Tables 1–4). We also did not find that a normality transformation of the data improved the performance of our surrogate-based stochasticity test (Supplementary Table 2), counter to what has been suggested elsewhere^{22}.
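The logic of this surrogate-based determinism test can be illustrated with a simplified Python sketch. For brevity it uses plain phase-randomized Fourier surrogates and a permutation order of 5 (the actual pipeline uses amplitude adjusted Fourier transform plus Cyclic Phase Permutation surrogates, and order 8): a signal whose permutation entropy falls within the surrogate distribution is classified as stochastic.

```python
import math
import numpy as np

def permutation_entropy(x, order=5, delay=1):
    """Normalized permutation entropy: 0 means a single ordinal pattern
    dominates; 1 means all order! patterns are equally likely."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return float(-np.sum(p * np.log2(p)) / np.log2(math.factorial(order)))

def looks_stochastic(x, n_surrogates=40, order=5, seed=0):
    """Classify x as stochastic if its permutation entropy lies within the
    range spanned by phase-randomized surrogates, which preserve the power
    spectrum but destroy any deterministic temporal structure."""
    rng = np.random.default_rng(seed)
    pe = permutation_entropy(x, order)
    X = np.fft.rfft(x)
    surrogate_pes = []
    for _ in range(n_surrogates):
        phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
        phases[0] = 0.0                        # keep the mean intact
        s = np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))
        surrogate_pes.append(permutation_entropy(s, order))
    return min(surrogate_pes) <= pe <= max(surrogate_pes)
```

A deterministic chaotic signal such as the logistic map has forbidden ordinal patterns, so its permutation entropy sits well below that of its surrogates and it is classified as deterministic, whereas white noise does not exhibit such a deficit.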
If data “pass” the stochasticity test described above and are deemed operationally deterministic, then the Chaos Decision Tree Algorithm automatically denoises the inputted signal. We compared three denoising algorithms: a moving average filter (using Matlab’s smooth.m function), the Matlab Chaotic Systems Toolbox’s^{66} implementation of Schreiber’s noise-reduction algorithm^{36} (Glossary), and wavelet denoising using an empirical Bayesian method with a Cauchy prior (using Matlab’s wdenoise.m function). Although it is considerably slower to run, Schreiber denoising markedly outperforms the other two approaches in recovering the deterministic component of signals contaminated by measurement noise (Supplementary Table 5), and markedly improves the performance of the modified 0–1 test (Supplementary Table 6, Supplementary Fig. 4). Thus, the Chaos Decision Tree Algorithm automatically uses Schreiber denoising before testing for chaos, unless the user specifies one of the other two denoising algorithms tested here to be used instead.
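The core idea behind Schreiber's scheme is local averaging in a delay-embedding space. The following is a deliberately simplified, single-pass Python sketch of that idea (the pipeline itself uses the Chaotic Systems Toolbox implementation; the embedding dimension and neighborhood radius here are illustrative choices, and Schreiber's full algorithm iterates with a shrinking radius):

```python
import numpy as np

def local_average_denoise(x, dim=5, radius=0.3):
    """Simplified sketch of Schreiber-style nonlinear noise reduction:
    embed the series in dim dimensions, then replace each point's central
    coordinate with the mean central coordinate of all embedding-space
    neighbors within the given radius (max-norm)."""
    x = np.asarray(x, dtype=float)
    emb = np.lib.stride_tricks.sliding_window_view(x, dim)  # (N-dim+1, dim)
    mid = dim // 2
    y = x.copy()
    for i in range(len(emb)):
        # neighbors under the max-norm; always includes the point itself
        dists = np.max(np.abs(emb - emb[i]), axis=1)
        neighbors = emb[dists < radius]
        y[i + mid] = neighbors[:, mid].mean()
    return y
```

Because measurement noise scatters points off the deterministic attractor roughly isotropically, averaging over embedding-space neighbors pulls each point back toward the attractor, so the output is typically closer to the clean signal than the raw measurement is.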
The final step of the Chaos Decision Tree Algorithm before applying the 0–1 test is to check if data are oversampled and to downsample them if they are. Gottwald and Melbourne have shown^{39} that the 0–1 test can give inaccurate results for continuous (i.e., non-discrete-time) systems sampled at a very high frequency, but that it can accurately differentiate between periodic dynamics and chaotic dynamics in continuous deterministic systems when data are properly downsampled. In light of this, the Chaos Decision Tree Algorithm utilizes the (crude) test for oversampling used by Matthews^{67}, by calculating a measure \(\eta\), which is the difference between the global maximum and global minimum of the data divided by the mean absolute difference between consecutive timepoints in the data. If \(\eta > 10\), then the data are deemed to be oversampled, and the Chaos Decision Tree Algorithm iteratively downsamples the data by a factor of 2 until \(\eta \le 10\) or until there are fewer than 100 timepoints left in the signal. We compared this approach both to no downsampling and to an alternative method, suggested by Eyébé Fouda and colleagues^{68} to improve 0–1 test performance, which downsamples by taking just the local minima and maxima of oversampled signals. We found that downsampling after denoising yields more accurate results than either alternative approach when oversampled signals are contaminated by measurement noise (Supplementary Table 6). We also note that recorded experimental data may be unlikely to be oversampled (Supplementary Table 7), and that this problem may be more likely to arise in simulated continuous systems. If the data are not oversampled, or if they have been downsampled, the Chaos Decision Tree Algorithm then applies the modified 0–1 test to the data, as described above.
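This oversampling check is simple to state in code. A Python sketch of the \(\eta\) measure and the iterative halving (the threshold of 10 and minimum length of 100 follow the values above; the stopping rule for short signals is our reading of the description):

```python
import numpy as np

def eta(x):
    """Crude oversampling measure: signal range divided by the mean
    absolute difference between consecutive time points."""
    x = np.asarray(x, dtype=float)
    return (x.max() - x.min()) / np.mean(np.abs(np.diff(x)))

def downsample_if_oversampled(x, threshold=10.0, min_length=100):
    """Halve the sampling rate until eta <= threshold or halving
    would leave fewer than min_length time points."""
    x = np.asarray(x, dtype=float)
    while eta(x) > threshold and len(x) // 2 >= min_length:
        x = x[::2]
    return x
```

A finely sampled smooth oscillation has tiny consecutive differences relative to its range, so \(\eta\) is large and the signal is halved repeatedly; white noise, whose consecutive differences are on the order of its range, is left untouched.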
Finally, the algorithm uses the permutation entropy of the inputted signal as a proxy for the degree of chaos in the system. Though the algorithm uses permutation entropy to establish whether or not a signal is predominantly deterministic (see above), permutation entropy has also been shown to tightly track the largest Lyapunov exponent (and therefore the degree of chaos) of the logistic map^{32}, the tent map^{69}, and the Duffing oscillator^{43}. We should in general expect a close correspondence between permutation entropy and Lyapunov exponents, in light of the equivalence in discrete-time systems between permutation entropy and Kolmogorov–Sinai entropy^{44,70,71,72}, which is upper-bounded by the sum of a system’s positive Lyapunov exponents—a relationship known as the “Pesin identity”^{73}. When calculating permutation entropy to track degree of chaos (rather than for determinism testing as above), we follow Bandt and Pompe’s^{32} recommendation and simply set the time-lag to 1 and the permutation order to 5, which we showed tracks degree of chaos in all systems tested (Table 5). Because this equivalence is only known to hold for discrete-time systems^{44}, permutation entropy is only calculated after the inputted signal has been denoised and, if oversampled, downsampled; this considerably improves its ability to track degree of chaos in continuous systems (Table 5, Supplementary Table 14).
The full decision tree of our algorithm is depicted graphically in Fig. 1.
Data
Biological simulations
The following describes the simulations of biological systems analyzed in this paper. We picked only biological simulations for which the presence or absence of chaos has been established in prior work. Initial conditions were randomized in all simulations. We also tested the effect of measurement noise on the accuracy of the Chaos Decision Tree Algorithm in classifying systems by adding white noise to our simulated data, with amplitude up to 40% of the standard deviation of the original data. For each simulated system and level of measurement noise, we created 100 datasets with 10,000 time points.
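Under one common reading of this condition, in which the noise amplitude is taken as the noise standard deviation expressed as a fraction of the clean signal’s standard deviation (an assumption of ours), the corruption step looks like:

```python
import numpy as np

def add_measurement_noise(x, noise_level, rng=None):
    """Add white Gaussian measurement noise whose standard deviation is
    `noise_level` times the standard deviation of the clean signal
    (e.g. noise_level=0.4 for the 40% condition)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    return x + noise_level * np.std(x) * rng.standard_normal(len(x))

clean = np.sin(np.linspace(0, 20 * np.pi, 10000))
noisy = add_measurement_noise(clean, 0.4, np.random.default_rng(1))
```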
Chaotic mean-field cortical model. Steyn-Ross, Steyn-Ross, and Sleigh^{20} describe a mean-field model of the cortex based on the equations first introduced by Liley and colleagues^{74,75}, which includes electrical gap-junction synapses in addition to the standard chemical synapses used in the earlier models. The model contains both inhibitory and excitatory neural populations communicating locally through gap junctions and chemical synapses and communicating over long ranges via myelinated axons. The dynamics of each neural population in the model are determined by two first-order and six second-order partial differential equations, which is equivalent to 14 first-order differential equations. The primary output of the model is the mean excitatory firing rates of 120 neural populations, which approximates the large-scale cortical signals that might be measured through electrocorticography, magnetoencephalography, or electroencephalography. Steyn-Ross, Steyn-Ross, and Sleigh^{20} show that just by varying the inhibitory gap-junction diffusive-coupling strength parameter in their model, they can produce dynamics ranging from periodicity to strong chaos. In their simulation of “waking” cortical dynamics, Turing (spatial) and Hopf (temporal) instabilities interact to produce chaotic, low-frequency spatiotemporal oscillations. For chaotic dynamics, we simulated 2,000,000 timepoints of their “wake” simulation, with the inhibitory gap-junction diffusive-coupling strength parameter set to 0.4, and then downsampled the data to 10,000 timepoints. We only applied our algorithm to the mean excitatory firing rate of one neural population, i.e., to just one out of 14 variables describing the dynamics of just one out of 120 interacting such 14-dimensional systems (though the variable is biologically well-defined). The Matlab code for the simulations is available in the Supplementary Material of Steyn-Ross, Steyn-Ross, and Sleigh^{20}.
Periodic mean-field cortical model. Steyn-Ross, Steyn-Ross, and Sleigh show that their cortical mean-field model enters a periodic, seizure-like state dominated by a Hopf instability when the inhibitory gap-junction diffusive-coupling strength parameter is set to 0.1. Just as in the chaotic case, we simulated 2,000,000 timepoints and then downsampled to 10,000 timepoints. Note that Steyn-Ross, Steyn-Ross, and Sleigh estimate the largest Lyapunov exponent of their model to be around zero when the inhibitory gap-junction diffusive-coupling strength parameter is 0.1, whereas our own estimate (using an automated version of their same method; see below) placed the largest Lyapunov exponent more clearly in the periodic regime, at −2.1.
Chaotic spiking neuron. Izhikevich^{49,76} describes a simple neuron model that can display both spiking and bursting behavior. The model consists of a neuron’s membrane potential \(v\), a membrane recovery variable \(u\), an input current \(I\), and parameters \(a\), \(b\), \(c\), and \(d\):
with the auxiliary after-spike resetting:
When \(a=0.2\), \(b=2\), \(c=-56\), \(d=-16\), and \(I=-99\), the neuron’s membrane potential \(v\) (which is the variable we analyze) displays chaotic spikes^{49,76}. We simulated the Izhikevich neuron using a first-order Euler method, with an integration step of 0.25 ms. We generated 50,000 time points, and downsampled by a factor of 5 (to avoid oversampling).
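A minimal Euler sketch of this simulation, assuming the standard form of Izhikevich’s model, \(v' = 0.04v^2 + 5v + 140 - u + I\) and \(u' = a(bv - u)\), with the after-spike reset \(v \leftarrow c\), \(u \leftarrow u + d\) at \(v \ge 30\) mV (note that \(c\), \(d\), and \(I\) are negative in the chaotic regime; the initial state below is our arbitrary choice):

```python
import numpy as np

def izhikevich(a=0.2, b=2.0, c=-56.0, d=-16.0, I=-99.0,
               dt=0.25, n_steps=50000):
    """First-order Euler simulation of Izhikevich's spiking neuron model,
    returning the membrane potential downsampled by a factor of 5."""
    v, u = -65.0, -65.0 * b      # arbitrary initial state
    vs = np.empty(n_steps)
    for i in range(n_steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30:              # spike: record a clipped peak, then reset
            vs[i] = 30
            v, u = c, u + d
        else:
            vs[i] = v
    return vs[::5]               # downsample by 5 to avoid oversampling

v_trace = izhikevich()
```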
Periodic white blood cell concentration. Inspired by the finding that chronic granulocytic leukemia involves apparently aperiodic oscillations in the concentration of circulating white blood cells^{77}, Mackey and Glass^{19} study mathematical models of oscillating physiological control systems. They describe a simple mathematical model of the concentration of circulating white blood cells:
where \(a=0.2\), \(b=0.1\), and \(c=10\). The parameter \(\tau\) represents the delay between white blood cell production in bone marrow and the release of those cells into the bloodstream. Since this cellular generation delay time is increased in some patients with chronic granulocytic leukemia, Mackey and Glass studied the behavior of this system as a function of the delay time \(\tau\). They found that as \(\tau\) increases, the oscillations produced by this equation become aperiodic. Through a formal analysis of the system’s Lyapunov exponents, Farmer^{78} later confirmed that for \(\tau =10\), the oscillations are periodic. We simulated 100,000 timepoints of the periodic Mackey-Glass system using a first-order Euler method with an integration step of 1, and then downsampled by a factor of 10 (to avoid oversampling).
Chaotic white blood cell concentration. For \(\tau =30\), Farmer confirmed^{78} that the Mackey-Glass equation (Eq. (7)) for the concentration of circulating white blood cells yields a chaotic oscillation. We simulated 100,000 timepoints of the chaotic Mackey-Glass system using a first-order Euler method with an integration step of 1, and then downsampled by a factor of 10 (to avoid oversampling).
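Both Mackey-Glass conditions can be reproduced with a short Euler loop, assuming the standard form \(dx/dt = a\,x(t-\tau)/(1 + x(t-\tau)^c) - b\,x(t)\) and a constant initial history (our simplification; the paper randomized initial conditions):

```python
import numpy as np

def mackey_glass(tau=30, a=0.2, b=0.1, c=10.0, n=100000, x0=1.2):
    """First-order Euler simulation (dt = 1) of the Mackey-Glass delay
    equation; with dt = 1 the delay tau is simply tau timesteps back."""
    x = np.empty(n + tau)
    x[:tau + 1] = x0                       # constant initial history
    for t in range(tau, n + tau - 1):
        x_tau = x[t - tau]                 # delayed state x(t - tau)
        x[t + 1] = x[t] + a * x_tau / (1 + x_tau ** c) - b * x[t]
    return x[tau::10]                      # drop history, downsample by 10

x_chaotic = mackey_glass(tau=30)    # chaotic for tau = 30 (Farmer)
x_periodic = mackey_glass(tau=10)   # periodic for tau = 10
```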
Periodic NFκB transcription. Heltberg and colleagues^{14} recently described a five-dimensional mathematical model of oscillating concentrations of the transcription factor NFκB, which regulates several genes involved in immune responses and is widely studied in immunity and cancer research. In their model, the dynamics of NFκB concentration are coupled to varying levels of the cytokine tumor necrosis factor (TNF). They show that when TNF oscillations have a low amplitude, NFκB oscillations are periodic. We simulated periodic NFκB oscillations using Heltberg and colleagues’ Matlab code, available at https://github.com/Mathiasheltberg/ChaoticDynamicsInTranscriptionFactors.
Chaotic NFκB transcription. Heltberg and colleagues^{14} show that by increasing the amplitude of the TNF signal, the oscillating number of NFκB molecules in their model becomes chaotic. We simulated chaotic NFκB oscillations using Heltberg and colleagues’ Matlab code, available at https://github.com/Mathiasheltberg/ChaoticDynamicsInTranscriptionFactors.
Nonbiological simulations
Because there are only a limited number of biological simulations for which the presence of chaos has already been established, we also applied the Chaos Decision Tree Algorithm to a wide range of mathematical systems previously studied in the chaos theory and nonlinear timeseries analysis literatures:
Chaotic cubic map. Venkatesan and Lakshmanan^{50} describe a quasiperiodically forced cubic map, which can exhibit a large diversity of periodic, chaotic, and strange nonchaotic dynamics. In particular, the map exhibits many different routes to chaos. Their system is described by the following:
where \(\omega =\frac{\sqrt{5}-1}{2}\) (the golden ratio). We set \(f=0.8\), \(Q=0\), and \(A=1.5\), which Venkatesan and Lakshmanan have shown lead to chaotic dynamics^{50}. For the results reported in Table 2, we followed Dawes and Freeland^{23} in taking a linear combination of x and \({\mathbf{\theta }}\): \({\phi }_{i}={x}_{i}/6+{\theta }_{i}/10\). Results for x individually are reported in Supplementary Table 15 (results for \({\mathbf{\theta }}\) on its own are not informative, as \({\mathbf{\theta }}\) is an independent, quasiperiodic process).
Periodic cubic map. Venkatesan and Lakshmanan^{50} show that the system in Eq. (8) exhibits periodic (one-frequency torus) dynamics when \(f=0\), \(Q=0\), and \(A=1\). We picked these parameters for periodic dynamics. To get a time-series \({\mathbf{\phi }}\) from the cubic map, we again took a linear combination of x and \({\mathbf{\theta }}\): \({\phi }_{i}={x}_{i}/6+{\theta }_{i}/10\). Results for x and \({\mathbf{\theta }}\) individually are reported in Supplementary Table 15.
Strange nonchaotic cubic map (HH). We set \(f=0.7\), \(Q=0\), and \(A=1.88697\) for one type of strange nonchaotic dynamics, which Venkatesan and Lakshmanan^{50} have shown bring the forced cubic map into a strange nonchaotic regime via the Heagy-Hammel route (i.e., collision of a period-doubled quasiperiodic torus with its unstable parent). Results for x individually are reported in Supplementary Table 15.
Strange nonchaotic cubic map (S3). We set \(f=0.35\), \(Q=0\), and \(A=0.35\) for a second type of strange nonchaotic dynamics, which Venkatesan and Lakshmanan^{50} have shown push the forced cubic map into a strange nonchaotic regime via Type-3 intermittency (i.e., inverse period-doubling bifurcation). Results for x individually are reported in Supplementary Table 15.
Period-doubled cubic map. Venkatesan and Lakshmanan^{50} show that the system in Eq. (8) exhibits period-doubled dynamics when \(f=0.18\), \(Q=0\), and \(A=1.1\). We picked these parameters for period-doubled dynamics. To get a time-series \({\mathbf{\phi }}\) from the cubic map, we again took a linear combination of x and \({\mathbf{\theta }}\): \({\phi }_{i}={x}_{i}/6+{\theta }_{i}/10\). Results for x individually are reported in Supplementary Table 15.
Strange nonchaotic GOPY model. The first known strange nonchaotic system, commonly referred to as the GOPY model, was described by Grebogi, Ott, Pelikan, and Yorke^{51}. The GOPY model is described by the following:
where \(\sigma =1.5\), \(\theta =0.5\), and \(\omega =\frac{\sqrt{5}-1}{2}\) (the golden ratio). To get a time-series \({\mathbf{\phi }}\) from the GOPY model, we followed Dawes and Freeland^{23} in taking a linear combination of x and \({\mathbf{\theta }}\): \({\phi }_{i}={x}_{i}/6+{\theta }_{i}/10\). Results for x individually are reported in Supplementary Table 15 (as is the case for the cubic map, results for \({\mathbf{\theta }}\) on its own are not informative, as \({\mathbf{\theta }}\) is an independent, quasiperiodic process).
Chaotic logistic map. The logistic map is one of the simplest known systems that can exhibit both periodic and chaotic behavior. It was originally introduced by biologist Robert May^{52} as a discretetime model of population growth. It is described by the following equation:
where \({x}_{i}\) represents the ratio of the population size at time \(i\) to the maximum possible population size. For chaotic dynamics^{52}, we set \(r=4\).
Periodic logistic map. For periodic dynamics^{52} in the logistic map, we set \(r=3.5\) in Eq. (10).
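Both logistic-map conditions are a one-line iteration of \(x_{i+1} = r x_i (1 - x_i)\); in this sketch we discard a transient and fix the initial condition (the paper randomized initial conditions):

```python
import numpy as np

def logistic_map(r, x0=0.3, n=10000, discard=1000):
    """Iterate the logistic map x_{i+1} = r*x_i*(1 - x_i),
    discarding an initial transient of `discard` iterations."""
    x = x0
    out = np.empty(n)
    for i in range(n + discard):
        x = r * x * (1 - x)
        if i >= discard:
            out[i - discard] = x
    return out

chaotic = logistic_map(4.0)    # r = 4: chaotic
periodic = logistic_map(3.5)   # r = 3.5: stable period-4 cycle
```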
Chaotic Lorenz system. Perhaps the most famous of all chaotic systems, the Lorenz model of atmospheric convection is described by the following system of equations^{53}:
where x is the rate of convection, y is the horizontal temperature variation, and z is the vertical temperature variation. Though the equations were initially meant to model atmospheric convection, identical equations have been found in models of a wide variety of physical systems, including lasers^{79} and chemical reactions^{80}. We set \(\sigma =10\), \(\rho =30\), and \(\beta =\frac{8}{3}\), for which the Lorenz system exhibits chaos (determined by calculating the largest Lyapunov exponent of the system with these parameters, using Ramasubramanian’s algorithm^{81}). We integrated the equations for the Lorenz system using the fourth-order Runge-Kutta method with an integration step of 0.01. To get a single time-series \({\mathbf{\phi }}\) from the Lorenz model, we took a linear combination of x and y: \({\mathbf{\phi }}\)=x + y. Results for x, y, and z individually are reported in Supplementary Table 15.
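A fourth-order Runge-Kutta integration of the standard Lorenz equations, \(\dot x = \sigma(y - x)\), \(\dot y = x(\rho - z) - y\), \(\dot z = xy - \beta z\), with the \(x + y\) observable, can be sketched as follows (the initial condition is our arbitrary choice):

```python
import numpy as np

def lorenz_rk4(sigma=10.0, rho=30.0, beta=8.0 / 3.0,
               dt=0.01, n=10000, state=(1.0, 1.0, 1.0)):
    """Fourth-order Runge-Kutta integration of the Lorenz system,
    returning an (n, 3) trajectory of (x, y, z)."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x),
                         x * (rho - z) - y,
                         x * y - beta * z])

    s = np.array(state, dtype=float)
    traj = np.empty((n, 3))
    for i in range(n):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = s
    return traj

traj = lorenz_rk4()
phi = traj[:, 0] + traj[:, 1]   # the single analyzed time series, x + y
```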
Hyperchaotic generalized Hénon map. Data from hyperchaotic systems, which contain more than one positive Lyapunov exponent, can be difficult to distinguish from noise^{25}. As such, hyperchaotic systems present a challenge to tests of determinism from time-series data, which might mistake hyperchaos for stochasticity. To demonstrate the robustness of the Chaos Decision Tree Algorithm’s stochasticity test, we analyzed the generalized Hénon map, which is described by the following equation:
We set \(a=1.76\) and \(b=0.1\), for which the generalized Hénon map produces hyperchaos^{54}.
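As a sketch, we assume the common three-dimensional (Baier-Klein) form of the generalized Hénon map, \(x_{n+1} = a - y_n^2 - b z_n\), \(y_{n+1} = x_n\), \(z_{n+1} = y_n\), which is consistent with the cited parameters; the initial state and transient below are our choices:

```python
import numpy as np

def generalized_henon(a=1.76, b=0.1, n=10000, discard=500):
    """Three-dimensional generalized Henon map (Baier-Klein form):
    x_{n+1} = a - y_n^2 - b*z_n,  y_{n+1} = x_n,  z_{n+1} = y_n."""
    x, y, z = 0.1, 0.1, 0.1        # small arbitrary initial state
    out = np.empty(n)
    for i in range(n + discard):
        x, y, z = a - y * y - b * z, x, y
        if i >= discard:
            out[i - discard] = x
    return out

x_hyper = generalized_henon()
```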
Noise-driven sine map. Freitas and colleagues^{55} describe a nonchaotic, randomly driven system:
where \(\mu =2.4\), \({Y}_{i}\) is a random Bernoulli process that equals 1 with probability 0.01 and 0 with probability 0.99, and \({\eta }_{i}\) is a random variable uniformly distributed between −2 and 2. Freitas and colleagues show that a chaosdetection technique called “noise titration”^{82} incorrectly classifies this system as chaotic.
Freitas map. Freitas and colleagues^{55} also describe a nonlinear correlated noise process, which we here call the “Freitas map.” The Freitas map contains dynamic noise added to a nonlinear moving average filter:
where \({v}_{n}\) is a uniform random variable distributed between 0 and 1. Freitas and colleagues show that the noise titration technique also incorrectly classifies this system as chaotic.
Bounded random walk. Nicolau^{56} describes a bounded random walk (BRW), which is a globally stationary process with local unit roots (i.e. local nonstationarities):
where \(\tau\), \(k\), \({\alpha }_{1}\), \({\alpha }_{2}\), and \(\sigma\) are parameters, and \({\epsilon }_{t}\) is a white noise error term. Note that the bounded random walk can be decomposed into a random walk, \({X}_{t}={X}_{t-1}+\sigma {\epsilon }_{t}\), plus an adjustment function \({e}^{-k}({e}^{-{\alpha }_{1}({X}_{t-1}-\tau )}-{e}^{{\alpha }_{2}({X}_{t-1}-\tau )})\). The adjustment function serves to pull the random walk back toward \(\tau\) if the process deviates too far from \(\tau\). Though it is a stationary process (albeit with local nonstationarities), the bounded random walk is often misclassified as nonstationary by stationarity tests^{83}. Following Nicolau^{56} and Patterson^{83}, we set \(\tau =100\), \(k=15\), \({\alpha }_{1}=3\), \({\alpha }_{2}=3\), and \(\sigma =0.4\), which generates a random walk that remains roughly within the interval of 100\(\pm\)5.
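The bounded random walk is straightforward to simulate directly. In this sketch, note the \(e^{-k}\) scaling of the adjustment term: with \(k=15\) the pull is negligible until the walk strays several units from \(\tau\), at which point the exponentials engage sharply:

```python
import numpy as np

def bounded_random_walk(n=10000, tau=100.0, k=15.0, a1=3.0, a2=3.0,
                        sigma=0.4, seed=0):
    """Nicolau's bounded random walk:
    X_t = X_{t-1} + exp(-k)*(exp(-a1*(X_{t-1}-tau)) - exp(a2*(X_{t-1}-tau)))
          + sigma * eps_t,  with eps_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = tau
    for t in range(1, n):
        d = x[t - 1] - tau
        adjust = np.exp(-k) * (np.exp(-a1 * d) - np.exp(a2 * d))
        x[t] = x[t - 1] + adjust + sigma * rng.standard_normal()
    return x

brw = bounded_random_walk()
```

Locally the increments look like a unit-root random walk, which is why stationarity tests are fooled, yet the series never escapes the neighborhood of \(\tau\).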
Cyclostationary process. A cyclostationary autoregressive process is essentially a combination of a noise-driven linear damped oscillator and linear relaxators. Cyclostationary systems are nonstationary because their probability distributions vary cyclically with time. Following Timmer^{57}, we simulate a cyclostationary process described by the following:
where \({\epsilon }_{t}\) is a white noise error term and
\(\tau\) is the relaxation time and \(T\) is the oscillation period. We set \(\tau =50\) and \(T=10\), which Timmer has shown leads to incorrect classification of this system as nonlinear or deterministic by surrogate tests that only use Fourier-based surrogates, which are strictly stationary.
ARMA process. A general autoregressive moving-average (ARMA) process is described by the following:
where \(c\) is a constant, \({\phi }_{1},...,{\phi }_{p}\) and \({\theta }_{1},...,{\theta }_{q}\) are parameters, and \({\epsilon }_{t},{\epsilon }_{t-1},...\) are white noise error terms. An ARMA process with a lag of 1, or an ARMA(1) process, is:
When \(|\phi |\, <\, 1\), the ARMA process is (weakly) stationary. When \(\phi\) is close to but less than 1 and \(\theta\, \ne\, 0\), ARMA processes, though stationary, are often misclassified as nonstationary by stationarity tests^{84}. All ARMA processes simulated in this paper were lag 1, and we set \(c=0\) and \(\phi =0.99\). For the analyses in Supplementary Tables 1–4 and in Table 2, we drew \({\mathbf{\theta }}\) from a random normal distribution with mean \(\mu =0\) and standard deviation \(\sigma =1\) for each simulation. We also tested ARMA(1) processes for \({\mathbf{\theta }}\) values fixed at −0.5, 0, 0.5, and 0.9 (Supplementary Table 19).
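An ARMA(1) process of the form \(X_t = c + \phi X_{t-1} + \epsilon_t + \theta \epsilon_{t-1}\) can be generated directly; in this sketch \(\theta\) is fixed at 0.5 for illustration rather than drawn at random:

```python
import numpy as np

def arma11(n=10000, c=0.0, phi=0.99, theta=0.5, seed=0):
    """ARMA(1) process: X_t = c + phi*X_{t-1} + eps_t + theta*eps_{t-1},
    with eps_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = c + phi * x[t - 1] + eps[t] + theta * eps[t - 1]
    return x

x = arma11()
```

With \(\phi = 0.99\) the series is extremely persistent (lag-1 autocorrelation near 1), which is exactly what trips up unit-root and stationarity tests despite the process being stationary.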
Random walk. A random walk is modeled by an autoregressive process with a unit root:
where \(\epsilon\) is a white noise error term with mean \(\mu =0\) and standard deviation \(\sigma =1\). Random walks are nonstationary.
Trended random walk. A trended random walk introduces a secondary nonstationarity, namely, a linear trend, to the random walk:
where \(\epsilon\) is a white noise error term with mean \(\mu =0\) and standard deviation \(\sigma =1\) and \(b\) is the slope of the linear trend. For all trended random walks simulated in this paper, \(b\) was randomly drawn from a Gaussian distribution with mean \(\mu =0\) and standard deviation \(\sigma =0.01\).
Colored noise. Colored noise refers to a stationary, stochastic process with a nonuniform power spectrum. White noise has a uniform power spectrum (meaning equal power at all frequencies); pink noise has a power spectral density proportional to \(\frac{1}{f}\), where \(f\) is frequency; red noise has a power spectral density proportional to \(\frac{1}{{f}^{2}}\); blue noise has a power spectral density proportional to \(f\); and violet noise has a power spectral density proportional to \({f}^{2}\). All colored noise signals were simulated using Zhivomirov’s algorithm^{58}, available at https://www.mathworks.com/matlabcentral/fileexchange/42919-pink-red-blue-and-violet-noise-generation-with-matlab.
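Colored noise can also be generated generically by shaping the Fourier amplitudes of white noise; this spectral-shaping sketch is not Zhivomirov’s Matlab implementation, whose details differ, but it produces the same power-law spectra:

```python
import numpy as np

def colored_noise(n, exponent, seed=0):
    """Generate noise with power spectral density ~ 1/f**exponent by
    shaping the Fourier amplitudes of white Gaussian noise
    (exponent 0: white, 1: pink, 2: red, -1: blue, -2: violet)."""
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                       # avoid division by zero at DC
    spec *= f ** (-exponent / 2.0)    # amplitude ~ f^(-exponent/2)
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std()   # standardize to zero mean, unit sd

pink = colored_noise(10000, 1)
violet = colored_noise(10000, -2)
```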
Rössler system. The Rössler system is described by the following system of differential equations^{59}:
We set \(a=0.2\), \(b=0.2\), and \(c=5.7\), for which the Rössler system exhibits chaos^{59}. \(w\) controls the frequency of the system’s oscillations, and was set to 1. We integrated the equations for the Rössler system using the fourth-order Runge-Kutta method with an integration step of 0.01. We generated 5,000,000 timepoints, and then downsampled to 10,000 datapoints. To get a single time-series \({\mathbf{\phi }}\) from the Rössler model, we took a linear combination of x and y: \({\mathbf{\phi }}\) = x + y. Results for x, y, and z individually are reported in Supplementary Table 15.
Ikeda map. Ikeda and colleagues described a chaotic model of light passing through a nonlinear optical resonator^{85}. The model can be simplified into a twodimensional map^{86}:
where \(u\) is a parameter and
We set \(u=0.9\), for which the Ikeda map exhibits chaos^{86}. Table 3 reports results for a linear combination of the two variables, \({\mathbf{\phi }}\) = x + y. Results for x and y individually are reported in Supplementary Table 15.
Hénon map. The Hénon map^{61} is a two-dimensional system of equations:
We set \(a=1.25\) and \(b=0.3\), for which the Hénon map is periodic^{87}. Table 3 reports results for a linear combination of the two variables, \({\mathbf{\phi }}\) = x + y. Results for x and y individually are reported in Supplementary Table 15.
Periodic Poincaré oscillator. The Poincaré oscillator has been widely studied as a model of biological oscillations, particularly as a model of the effect of periodic stimulation on the dynamics of biological oscillators^{88}. The oscillator is described by the following equations:
where \(k\) is a positive value that controls the oscillator’s relaxation rate. The phase of this system is described by its angular coordinate \(\phi\) in a unit cycle. Periodic stimulation of the system is modeled as a perturbation of magnitude \(b\) away from the unit cycle, which leads to an instantaneous resetting of the phase of the oscillator, as determined by the following phase resetting curve:
The period of the stimulation is determined by a parameter \(\tau\). For periodic dynamics, we analyze the timevarying phase of the Poincaré oscillator with \(b=1.13\) and \(\tau =0.69\), which Guevara and Glass show leads to phase locking between the oscillator and the periodic perturbations^{41}.
Quasiperiodic Poincaré oscillator. For quasiperiodic dynamics^{41} in the Poincaré oscillator, we set \(b=0.95\) in Eq. (20), with an interstimulus delay \(\tau =0.75\).
Chaotic Poincaré oscillator. For chaotic dynamics^{41} in the Poincaré oscillator, we set \(b=1.13\) and \(\tau =0.65\).
Stochastic Lorenz system. To study the effect of dynamic noise on our algorithm’s classification of stochastic chaotic systems, we took the Lorenz system described in Eq. (11), and added intrinsic/dynamic Gaussian noise to the x component of the system (we found that the system was far less sensitive to noise being injected into the y variable):
where \({\eta }_{i}\) is a normally distributed random variable with mean 0 and standard deviation 1, and \(A\) is a parameter that controls the amplitude of the dynamic/intrinsic noise. As for the deterministic case, we set \(\sigma =10\), \(\rho =30\), and \(\beta =\frac{8}{3}\). The stochastic Lorenz system was simulated using the Fourth Order RungeKutta method with an integration step of 0.01. Supplementary Table 18 reports results for different values of \(A\), both for all system variables individually and for the linear combination x + y.
Stochastic Rössler system. We also took the Rössler system described in Eq. (22), and added dynamical Gaussian noise to the x component of the system:
where \({\eta }_{i}\) is a normally distributed random variable with mean 0 and standard deviation 1, and \(A\) is a parameter that controls the amplitude of the dynamic/intrinsic noise. As for the deterministic case, we set \(a=0.2\), \(b=0.2\), and \(c=5.7\), and simulated the model using the Fourth Order RungeKutta method with an integration step of 0.01. We generated 5,000,000 timepoints, and then downsampled to 10,000 datapoints. Supplementary Table 18 reports results for different values of \(A\), both for all system variables individually and for the linear combination of x + y.
Multivariate AR model. We generated random multivariate autoregressive (AR) models using the Multivariate Granger Causality (MVGC) toolbox^{89}. To create random regression matrices, we created random 5-node dense positive definite matrices using Matlab’s sprandsym.m function, with a graph density of 1. To ensure stationary dynamics, we used the MVGC toolbox’s var_specrad.m function to decay the coefficients of the random dense positive definite matrices so that their spectral radii were 0.8. To ensure uncorrelated noise in the resulting AR model, we created error matrices with diagonal elements set to 1 and off-diagonal elements set to 0. We then inputted these regression and error matrices into the MVGC toolbox’s var_to_tsdata.m function to create multivariate time-series with 5 nodes and 10,000 timepoints. We then applied the Chaos Decision Tree Algorithm to just the univariate activity of the first node of the resulting multivariate signal.
Empirical data
We here describe the realworld data analyzed in this paper, and how these data were previously classified as stochastic, periodic, or chaotic:
A chaotic neuron integrated circuit. Data recorded from an integrated circuit were kindly sent to us by Seiji Uenohara and colleagues. The circuit is a physical implementation of a chaotic neuron model that is based on the Hodgkin-Huxley equations^{90}. The equations governing the circuit’s behavior can be reduced to the following two-dimensional map:
where \(f(\cdot )\) is a monotonically decreasing nonlinear output function, \(b\) controls the amplitude of the quasiperiodic forcing, and \(\omega =\frac{\sqrt{5}-1}{2}\) (the golden ratio). The quasiperiodic external forcing was inputted to the circuit using an analog board (PXI-6289), which was also used to record the circuit’s output. Varying the parameter \(b\) can bring the circuit into periodic, strange nonchaotic, and chaotic states, which Uenohara and colleagues were able to classify by analyzing the consistency of the circuit’s response to an external input^{27}. There are 10 datasets recorded from the circuit’s chaotic state.
A strange nonchaotic neuron integrated circuit. There are 10 datasets recorded from the strange nonchaotic state of Uenohara and colleagues’ circuit.
A periodic neuron integrated circuit. There are 10 datasets recorded from the periodic state of Uenohara and colleagues’ circuit.
Chaotic laser. Hübner and colleagues^{28} used phase portrait, correlation integral, and autocorrelation function analyses to detect chaos in the intensity pulsing of a unidirectional far-infrared NH3 ring laser. Laser data were downloaded from https://www.pdx.edu/biomedicalsignalprocessinglab/chaotictimeseries.
Stellar flux of a strange nonchaotic star. We analyzed stellar flux from KIC 5520878, the only known nonartificial strange nonchaotic system^{29}. Data were sent to us by John F. Lindner, who, together with colleagues, determined the status of KIC 5520878 as a strange nonchaotic system using a series of spectral scaling analyses^{29}. Data were originally obtained from the Mikulski Archive for Space Telescopes. Because there are large shifts in the data due to the stellar flux being recorded in different pixels of the Kepler Space Telescope, we visually inspected the data to find a relatively stable period (i.e., a period in between large shifts) and then detrended the data. We thus exclusively analyzed time points 11,620 to 14,003 from the dataset analyzed in Lindner and colleagues’ paper^{29}.
North Atlantic Oscillation Index. We analyzed the monthly mean North Atlantic Oscillation (NAO) Index from January 1950 to December 2018. The NAO Index is the difference in atmospheric pressure at sea level between the Azores high and the Icelandic low, and has been shown by several groups of researchers, employing a range of techniques, to be stochastic^{25,91,92,93,94,95,96}. Data were downloaded from the Climate Prediction Center website (http://www.cpc.ncep.noaa.gov/).
Parkinson’s tremor. We analyzed recordings of a Parkinson’s patient’s hand acceleration, measured for 30 s at a sampling rate of 1000 Hz by piezoresistive accelerometers. Through analyses of correlation integrals, Poincaré and return maps, Lyapunov exponents, and the \(\delta\)-\(\epsilon\) method, Timmer and colleagues^{57} showed that this Parkinson’s tremor was a nonlinear stochastic oscillator. Data were downloaded from http://jeti.uni-freiburg.de/path_tremor/readme.
Essential tremor. We analyzed recordings of hand acceleration from a patient with an essential tremor, also measured for 30 s at a sampling rate of 1000 Hz by piezoresistive accelerometers. As they did with the Parkinson’s tremor, Timmer and colleagues used correlation integrals, Poincaré and return maps, Lyapunov exponents, and the \(\delta\)-\(\epsilon\) method to show that this essential tremor was a nonlinear stochastic oscillator. Data were downloaded from http://jeti.uni-freiburg.de/path_tremor/readme.
Heart rate (healthy subjects). Five heartbeat (RR-interval) time-series recordings from healthy subjects were downloaded from Physionet^{31}: https://www.physionet.org/challenge/chaos/. The signals were recorded using continuous ambulatory (Holter) electrocardiograms, and are in sinus rhythm. Outliers were filtered out of the data using Physionet’s WFDB software package. Though a full 24 h of data were available for each subject, we only took the first 2.78 h of data, corresponding to 10,000 timepoints. This was both to save on computation time and to be consistent with the length of other time-series analyzed in this paper.
Heart rate (congestive heart failure patients). Five heartbeat (RR-interval) time-series recordings from patients with congestive heart failure were downloaded from Physionet^{31}. Like the healthy heart rate signals, these data were recorded using continuous ambulatory (Holter) electrocardiograms, are in sinus rhythm, and were filtered for outliers. Though a full 24 h of data were available for each subject, we only took the first 2.78 h of data.
Heart rate (atrial fibrillation). Five heartbeat (RR-interval) time-series recordings from patients with atrial fibrillation were downloaded from Physionet^{31}. Like the healthy heart rate signals, these data were recorded using continuous ambulatory (Holter) electrocardiograms and were filtered for outliers, but are not in sinus rhythm. We only took the first 2.78 h of data, corresponding to 10,000 timepoints.
Parameters and largest Lyapunov exponents for data in Table 5
We here describe the methods used to generate data with different degrees of chaos for the analyses reported in Table 5, as well as the methods used to calculate largest Lyapunov exponents in these systems.
Logistic map. The logistic map has only a single parameter, \(r\) (see above). Following Bandt and Pompe^{32}, we varied \(r\) between 3.5 and 4, in intervals of 0.001, to generate 501 10,000-timepoint signals with different levels of chaos. Ground-truth largest Lyapunov exponents were calculated using the derivative method, which does not involve generating time-series data.
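For the logistic map, the derivative method is a short computation: the largest Lyapunov exponent is the orbit average of \(\ln |f'(x)| = \ln |r(1 - 2x)|\). A sketch (initial condition and transient length are our choices):

```python
import math

def logistic_lle(r, x0=0.3, n=100000, discard=1000):
    """Largest Lyapunov exponent of the logistic map by the derivative
    method: average ln|f'(x)| = ln|r*(1 - 2x)| along the orbit."""
    x = x0
    for _ in range(discard):          # discard a transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n

lle_chaotic = logistic_lle(4.0)    # theory: ln 2, about 0.693
lle_periodic = logistic_lle(3.5)   # negative (stable period-4 cycle)
```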
Hénon map. To generate different degrees of chaos in the Hénon map, we varied its \(a\) parameter (see above) between 1 and 1.4, in intervals of 0.001, to generate 401 10,000-timepoint signals with different degrees of chaos. Ground-truth largest Lyapunov exponents were calculated using code provided in Dynamical Systems with Applications using Matlab^{97}, available at https://github.com/springermath/DynamicalSystemswithApplicationsusingMATLAB/.
Lorenz system. For the Lorenz system, we varied its \(\sigma\) parameter between 5.75 and 15, in intervals of 0.05, and generated 10,000 timepoints per simulation. Within this parameter range, the Lorenz system is chaotic, but displays varying degrees of chaos. To calculate largest Lyapunov exponents for each parameter, we used the algorithm provided by Ramasubramanian^{81}, which, like the algorithms used for the logistic and Hénon maps, does not involve generating timeseries data.
Mean-field cortical model. Following Steyn-Ross, Steyn-Ross, and Sleigh^{20}, different levels of chaos in their mean-field cortical model were generated by varying two parameters: postsynaptic inhibitory response and inhibitory diffusion. The postsynaptic inhibitory response parameter (\({\lambda }_{i}\) in their model) was varied between 0.98 and 1.018 in intervals of 0.001, and the inhibitory diffusion parameter (\({D}_{2}\) in their model) was varied between 0.1 and 0.8 in intervals of 0.05, producing a total of 585 parameter configurations. We simulated 25,000 timepoints in the model, with no downsampling, so we could again test the effect of the Chaos Decision Tree Algorithm’s automated downsampling on permutation entropy’s ability to track level of chaos in continuous systems. Unfortunately, there are no tools for analytically estimating the largest Lyapunov exponent of the mean-field cortical model, and so largest Lyapunov exponents were approximated by running two noise-free simulations of the model for each parameter configuration, with very slightly different initial conditions, and fitting a line to the rate of divergence between the two simulations, from the beginning of the simulations to the point when their divergence rate saturates and flattens out. The slope of the fitted line is taken as the estimate of the largest Lyapunov exponent^{10}. While Steyn-Ross, Steyn-Ross, and Sleigh’s code for estimating largest Lyapunov exponents using this method requires subjective evaluation of where to fit the line (i.e., finding the non-saturated part of the divergence rate plot), we automated this process by fitting a line from the beginning of the divergence rate plot to the point where its mean abruptly changes, reflecting saturation; this point was determined using Matlab’s findchangepts.m function.
Simulations with approximated largest Lyapunov exponents less than −5 or greater than 5 were excluded from the analysis, as these are likely poor estimates; visual inspection confirmed that for these cases, there was often no clear point of saturation for the divergence rate between simulations, and so lines were often automatically fitted to particularly steep, short subsegments of the plot. Visual inspection further confirmed that in most other cases, there was a clear, linear rate of divergence between simulations followed by saturation, and that the automatically fitted line was a good fit.
Neuron integrated circuit. The integrated circuit data analyzed in Table 5 are the same as those analyzed in Table 4. Because the circuit is a physical implementation of a known and simple two-dimensional system of equations, Uenohara and colleagues used those equations to calculate the ground-truth largest Lyapunov exponents of the circuit in its three different states (periodic, strange nonchaotic, and chaotic), and report those largest Lyapunov exponents in their paper^{27}.
Statistics and reproducibility
In reporting the performance of the Chaos Decision Tree Algorithm, we only made recourse to statistical tests in Table 5 and Supplementary Table 14, where we report the (two-tailed) p-values of Spearman correlations between largest Lyapunov exponents and permutation entropies. Because these correlations were calculated against the same set of ground-truth largest Lyapunov exponents for each system, p-values were Bonferroni-corrected for multiple comparisons. Elsewhere, we only report the fraction of all datasets that were correctly classified as stochastic, periodic, or chaotic (with exact sample sizes provided in all tables). Within the Chaos Decision Tree Algorithm itself, statistical tests appear in two locations. First, if the user chooses to test for stationarity, then the pipeline will only proceed to test for stochasticity if the test signal passes a (two-tailed) stationarity test with \(\alpha\) = 0.05. Second, the pipeline uses surrogate statistics to test for determinism: if the permutation entropy of the input signal lies outside the distribution of permutation entropies of 1000 surrogates (which is equivalent to a two-tailed statistical test with \(\alpha\) = 9.99e−4), then the data are classified as being generated by a predominantly deterministic system. No sample size calculation was performed before analyzing the data presented, as our results simply report classification accuracy for a large number of datasets (and no statistical analyses were performed beyond reporting classification accuracy). No data were excluded from the analysis. Data in Tables 1, 2 were used to optimize our algorithm, which was then retested on datasets in Tables 3, 4. Finally, note that no p-value is associated with the \(K\)-statistic outputted by the 0–1 test for chaos. To facilitate the reproducibility of our analyses, we have included code alongside our pipeline, as well as links to code in our Methods, for generating all simulated datasets tested in this paper.
Links to most empirical datasets analyzed here have been provided in the Methods.
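The surrogate-based determinism test described above can be sketched in Python. This is an illustrative simplification rather than the pipeline's Matlab implementation: it uses plain shuffle surrogates in place of the surrogate algorithms cited in the Methods, and a one-sided criterion (permutation entropy below every surrogate's) in place of the two-tailed test; the function names are illustrative.

```python
import math
import random
from collections import Counter

def permutation_entropy(x, m=3):
    """Normalized permutation entropy (Bandt & Pompe), order m, delay 1."""
    patterns = Counter(
        tuple(sorted(range(m), key=lambda k: x[i + k]))
        for i in range(len(x) - m + 1)
    )
    n = sum(patterns.values())
    h = -sum((c / n) * math.log(c / n) for c in patterns.values())
    return h / math.log(math.factorial(m))  # 0 = fully ordered, 1 = fully random

def deterministic_by_surrogates(x, m=3, n_surr=1000, seed=0):
    """Classify x as predominantly deterministic if its permutation entropy
    falls below that of every surrogate (shuffle surrogates here; a one-sided
    simplification of the two-tailed criterion described in the text)."""
    rng = random.Random(seed)
    pe_x = permutation_entropy(x, m)
    surr_pes = []
    for _ in range(n_surr):
        s = list(x)
        rng.shuffle(s)
        surr_pes.append(permutation_entropy(s, m))
    return pe_x < min(surr_pes)
```

With order m = 3 and delay 1, a noise-free periodic or chaotic signal concentrates its ordinal patterns and yields low permutation entropy, while shuffling destroys that temporal structure and pushes the surrogate entropies toward 1.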
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
Links to Matlab scripts for simulating the mean-field cortical model and NF-κB transcription are provided in the Methods. Matlab scripts for simulating all other systems described in this paper are provided along with the code for the Chaos Decision Tree Algorithm at https://figshare.com/s/80891dfb34c6ee9c8b34 (DOI: 10.6084/m9.figshare.7476362.v7). All empirical datasets except for the integrated circuit recordings are freely available online, and URLs to each data source are provided in the Methods.
Code availability
The code for the Chaos Decision Tree Algorithm is provided at https://figshare.com/s/80891dfb34c6ee9c8b34 (DOI: 10.6084/m9.figshare.7476362.v7).
References
 1.
Linde, A. D. Eternally existing self-reproducing chaotic inflationary universe. Phys. Lett. B 175, 395–400 (1986).
 2.
Wisdom, J., Peale, S. J. & Mignard, F. The chaotic rotation of Hyperion. Icarus 58, 137–152 (1984).
 3.
Hudson, J. & Mankin, J. Chaos in the Belousov–Zhabotinskii reaction. J. Chem. Phys. 74, 6171–6177 (1981).
 4.
Kaplan, D. T. et al. Subthreshold dynamics in periodically stimulated squid giant axons. Phys. Rev. Lett. 76, 4074 (1996).
 5.
Glass, L. Introduction to controversial topics in nonlinear science: Is the normal heart rate chaotic? Chaos 19, 028501 (2009).
 6.
Goldberger, A. L. & West, B. J. Applications of nonlinear dynamics to clinical cardiology. Ann. N. Y. Acad. Sci. 504, 195–213 (1987).
 7.
Skarda, C. A. & Freeman, W. J. How brains make chaos in order to make sense of the world. Behav. Brain Sci. 10, 161–173 (1987).
 8.
Pool, R. Is it healthy to be chaotic? Science 243, 604–607 (1989).
 9.
Korn, H. & Faure, P. Is there chaos in the brain? II. Experimental evidence and related models. C. R. Biol. 326, 787–840 (2003).
 10.
Destexhe, A. Oscillations, complex spatiotemporal behavior, and information transport in networks of excitatory and inhibitory neurons. Phys. Rev. E 50, 1594 (1994).
 11.
Lizier, J.T., Prokopenko, M. & Zomaya, A.Y. The information dynamics of phase transitions in random boolean networks. In ALIFE, 374–381 (2008).
 12.
Nagao, N., Nishimura, H. & Matsui, N. A neural chaos model of multistable perception. Neural Process. Lett. 12, 267–276 (2000).
 13.
Rabinovich, M. & Abarbanel, H. The role of chaos in neural systems. Neuroscience 87, 5–14 (1998).
 14.
Heltberg, M. L., Krishna, S. & Jensen, M. H. On chaotic dynamics in transcription factors and the associated effects in differential gene regulation. Nat. Commun. 10, 71 (2019).
 15.
Glass, L. & Mackey, M.C. From clocks to chaos: The rhythms of life (Princeton University Press, 1988).
 16.
Guevara, M. R., Glass, L. & Shrier, A. Phase locking, period-doubling bifurcations, and irregular dynamics in periodically stimulated cardiac cells. Science 214, 1350–1353 (1981).
 17.
Hayashi, H. & Ishizuka, S. Chaotic nature of bursting discharges in the onchidium pacemaker neuron. J. Theor. Biol. 156, 269–291 (1992).
 18.
Hayashi, H., Nakao, M. & Hirakawa, K. Chaos in the selfsustained oscillation of an excitable biological membrane under sinusoidal stimulation. Phys. Lett. A 88, 265–266 (1982).
 19.
Mackey, M. C. & Glass, L. Oscillation and chaos in physiological control systems. Science 197, 287–289 (1977).
 20.
Steyn-Ross, M. L., Steyn-Ross, D. A. & Sleigh, J. W. Interacting Turing-Hopf instabilities drive symmetry-breaking transitions in a mean-field model of the cortex: a mechanism for the slow oscillation. Phys. Rev. X 3, 021005 (2013).
 21.
Toker, D. The chaos decision tree algorithm. https://doi.org/10.6084/m9.figshare.7476362.v7 (2019).
 22.
Chan, K.S. & Tong, H. Chaos: a statistical perspective (Springer Science & Business Media, 2013).
 23.
Dawes, J. H. P. & Freeland, M. C. The ‘0–1 test for chaos’ and strange nonchaotic attractors. preprint https://people.bath.ac.uk/jhpd20/publications/sna.pdf (2008).
 24.
Hu, J., Tung, W.-w., Gao, J. & Cao, Y. Reliability of the 0–1 test for chaos. Phys. Rev. E 72, 056207 (2005).
 25.
Kulp, C. & Zunino, L. Discriminating chaotic and stochastic dynamics through the permutation spectrum test. Chaos 24, 033116 (2014).
 26.
Timmer, J., Häußler, S., Lauk, M. & Lücking, C.-H. Pathological tremors: deterministic chaos or nonlinear stochastic oscillators? Chaos 10, 278–288 (2000).
 27.
Uenohara, S. et al. Experimental distinction between chaotic and strange nonchaotic attractors on the basis of consistency. Chaos 23, 023110 (2013).
 28.
Huebner, U., Abraham, N. & Weiss, C. Dimensions and entropies of chaotic intensity pulsations in a single-mode far-infrared NH3 laser. Phys. Rev. A 40, 6354 (1989).
 29.
Lindner, J. F. et al. Strange nonchaotic stars. Phys. Rev. Lett. 114, 054101 (2015).
 30.
Hurrell, J. W., Kushnir, Y., Ottersen, G. & Visbeck, M. An overview of the North Atlantic Oscillation. The North Atlantic Oscillation: Climatic Significance and Environmental Impact 134, 1–35 (2003).
 31.
Goldberger, A. L. et al. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation 101, e215–e220 (2000).
 32.
Bandt, C. & Pompe, B. Permutation entropy: a natural complexity measure for time series. Phys. Rev. Lett. 88, 174102 (2002).
 33.
Theiler, J., Eubank, S., Longtin, A., Galdrikian, B. & Farmer, J. D. Testing for nonlinearity in time series: the method of surrogate data. Phys. D: Nonlinear Phenom. 58, 77–94 (1992).
 34.
Lancaster, G., Iatsenko, D., Pidde, A., Ticcinelli, V. & Stefanovska, A. Surrogate data for hypothesis testing of physical systems. Phys. Rep. 748, 1–60 (2018).
 35.
Jamšek, J., Paluš, M. & Stefanovska, A. Detecting couplings between interacting oscillators with timevarying basic frequencies: Instantaneous wavelet bispectrum and information theoretic approach. Phys. Rev. E 81, 036207 (2010).
 36.
Schreiber, T. Extremely simple nonlinear noisereduction method. Phys. Rev. E 47, 2401 (1993).
 37.
Gottwald, G. A. & Melbourne, I. A new test for chaos in deterministic systems. Proc. R. Soc. Lond. A 460, 603–611 (2004).
 38.
Gottwald, G. A. & Melbourne, I. Testing for chaos in deterministic systems with noise. Phys. D: Nonlinear Phenom. 212, 100–110 (2005).
 39.
Gottwald, G. A. & Melbourne, I. On the implementation of the 0–1 test for chaos. SIAM J. Appl. Dynamical Syst. 8, 129–145 (2009).
 40.
Gottwald, G. A. & Melbourne, I. Comment on “Reliability of the 0–1 test for chaos”. Phys. Rev. E 77, 028201 (2008).
 41.
Guevara, M. R. & Glass, L. Phase locking, period doubling bifurcations and chaos in a mathematical model of a periodically driven oscillator: A theory for the entrainment of biological oscillators and the generation of cardiac dysrhythmias. J. Math. Biol. 14, 1–23 (1982).
 42.
Horio, Y., Aihara, K. & Yamamoto, O. Neuron-synapse IC chip-set for large-scale chaotic neural networks. IEEE Trans. Neural Netw. 14, 1393–1404 (2003).
 43.
Trostel, M. L., Misplon, M. Z., Aragoneses, A. & Pattanayak, A. K. Characterizing complex dynamics in the classical and semiclassical Duffing oscillator using ordinal patterns analysis. Entropy 20, 40 (2018).
 44.
Amigó, J. M. The equality of Kolmogorov-Sinai entropy and metric permutation entropy generalized. Phys. D: Nonlinear Phenom. 241, 789–793 (2012).
 45.
Baillie, R. T., Cecen, A. A. & Erkal, C. Normal heartbeat series are nonchaotic, nonlinear, and multifractal: new evidence from semiparametric and parametric tests. Chaos 19, 028503 (2009).
 46.
Zhang, J., Holden, A., Monfredi, O., Boyett, M. R. & Zhang, H. Stochastic vagal modulation of cardiac pacemaking may lead to erroneous identification of cardiac “chaos”. Chaos 19, 028509 (2009).
 47.
Ovchinnikov, I. Introduction to supersymmetric theory of stochastics. Entropy 18, 108 (2016).
 48.
Letellier, C. & Aguirre, L. A. Investigating nonlinear dynamics from time series: The influence of symmetries and the choice of observables. Chaos 12, 549–558 (2002).
 49.
Izhikevich, E. M. Simple model of spiking neurons. IEEE Trans. Neural Netw. 14, 1569–1572 (2003).
 50.
Venkatesan, A. & Lakshmanan, M. Interruption of torus doubling bifurcation and genesis of strange nonchaotic attractors in a quasiperiodically forced map: Mechanisms and their characterizations. Phys. Rev. E 63, 026219 (2001).
 51.
Grebogi, C., Ott, E., Pelikan, S. & Yorke, J. A. Strange attractors that are not chaotic. Phys. D: Nonlinear Phenom. 13, 261–268 (1984).
 52.
May, R. M. Simple mathematical models with very complicated dynamics. Nature 261, 459 (1976).
 53.
Lorenz, E. N. Deterministic nonperiodic flow. J. Atmos. Sci. 20, 130–141 (1963).
 54.
Richter, H. The generalized Hénon maps: examples for higher-dimensional chaos. Int. J. Bifurc. Chaos 12, 1371–1384 (2002).
 55.
Freitas, U. S., Letellier, C. & Aguirre, L. A. Failure in distinguishing colored noise from chaos using the “noise titration” technique. Phys. Rev. E 79, 035201 (2009).
 56.
Nicolau, J. Stationary processes that look like random walks: the bounded random walk process in discrete and continuous time. Econom. Theory 18, 99–118 (2002).
 57.
Timmer, J. Power of surrogate data testing with respect to nonstationarity. Phys. Rev. E 58, 5153 (1998).
 58.
Zhivomirov, H. A method for colored noise generation. Rom. J. Acoust. Vib. 15, 14–19 (2018).
 59.
Rössler, O. E. An equation for continuous chaos. Phys. Lett. A 57, 397–398 (1976).
 60.
Hammel, S., Jones, C. & Moloney, J. Global dynamical behavior of the optical field in a ring cavity. JOSA B 2, 552–564 (1985).
 61.
Hénon, M. in The Theory of Chaotic Attractors, 94–102 (Springer, 1976).
 62.
Kulp, C. W. & Smith, S. Characterization of noisy symbolic time series. Phys. Rev. E 83, 026201 (2011).
 63.
Bernardini, D. & Litak, G. An overview of 0–1 test for chaos. J. Braz. Soc. Mech. Sci. Eng. 38, 1433–1450 (2016).
 64.
Zunino, L. & Kulp, C. W. Detecting nonlinearity in short and noisy time series using the permutation entropy. Phys. Lett. A 381, 3627–3635 (2017).
 65.
Riedl, M., Müller, A. & Wessel, N. Practical considerations of permutation entropy. Eur. Phys. J. Spec. Top. 222, 249–262 (2013).
 66.
Leontitsis, A. Chaotic systems toolbox. https://www.mathworks.com/matlabcentral/fileexchange/1597-chaotic-systems-toolbox (2004).
 67.
Matthews, P. 0–1 test for chaos. https://www.mathworks.com/matlabcentral/fileexchange/25050-0-1-test-for-chaos (2009).
 68.
Eyébé Fouda, J. S. A., Bodo, B., Sabat, S. L. & Effa, J. Y. A modified 0–1 test for chaos detection in oversampled time series observations. Int. J. Bifurc. Chaos 24, 1450063 (2014).
 69.
Hawkins, R. Permutation entropies and chaos. https://pdfs.semanticscholar.org/e5d9/65c915e62694a3a8a99863477911cc837992.pdf (2015).
 70.
Politi, A. Quantifying the dynamical complexity of chaotic time series. Phys. Rev. Lett. 118, 144101 (2017).
 71.
Bandt, C., Keller, G. & Pompe, B. Entropy of interval maps via permutations. Nonlinearity 15, 1595 (2002).
 72.
Keller, K., Unakafov, A. M. & Unakafova, V. A. On the relation of KS entropy and permutation entropy. Phys. D: Nonlinear Phenom. 241, 1477–1481 (2012).
 73.
Pesin, Y. B. Characteristic Lyapunov exponents and smooth ergodic theory. Russian Math. Surv. 32, 55–114 (1977).
 74.
Liley, D. T., Cadusch, P. J. & Wright, J. J. A continuum theory of electrocortical activity. Neurocomputing 26, 795–800 (1999).
 75.
Liley, D. T., Cadusch, P. J. & Dafilis, M. P. A spatially continuous mean field theory of electrocortical activity. Netw.: Comput. Neural Syst. 13, 67–113 (2002).
 76.
Izhikevich, E. M. Which model to use for cortical spiking neurons? IEEE Trans. Neural Netw. 15, 1063–1070 (2004).
 77.
Gatti, A. et al. Cyclically changing phenomenon of cancer cells in an untreated hemophilia patient. Blood 41, 771–773 (1973).
 78.
Farmer, J. D. Chaotic attractors of an infinitedimensional dynamical system. Phys. D: Nonlinear Phenom. 4, 366–393 (1982).
 79.
Haken, H. Analogy between higher instabilities in fluids and lasers. Phys. Lett. A 53, 77–78 (1975).
 80.
Poland, D. Cooperative catalysis and chemical chaos: a chemical model for the Lorenz equations. Phys. D: Nonlinear Phenom. 65, 86–99 (1993).
 81.
Ramasubramanian, K. & Sriram, M. A comparative study of computation of Lyapunov spectra with different algorithms. Phys. D: Nonlinear Phenom. 139, 72–86 (2000).
 82.
Poon, C.-S. & Barahona, M. Titration of chaos with added noise. Proc. Natl Acad. Sci. USA 98, 7107–7112 (2001).
 83.
Patterson, K. Unit Root Tests in Time Series Volume 2: Extensions and Developments, vol. 2 (Palgrave Macmillan, 2012).
 84.
Patterson, K. Unit Root Tests in Time Series Volume 1: Key Concepts and Problems (Springer, 2011).
 85.
Ikeda, K., Daido, H. & Akimoto, O. Optical turbulence: chaotic behavior of transmitted light from a ring cavity. Phys. Rev. Lett. 45, 709 (1980).
 86.
Galias, Z. Rigorous investigation of the Ikeda map by means of interval arithmetic. Nonlinearity 15, 1759 (2002).
 87.
Ivancevic, V. G. & Ivancevic, T. T. Computational mind: a complex dynamics perspective, vol. 60 (Springer, 2007).
 88.
Glass, L. in Nonlinear dynamics in physiology and medicine, 123–148 (Springer, 2003).
 89.
Barnett, L. & Seth, A. K. The MVGC multivariate Granger causality toolbox: a new approach to Granger-causal inference. J. Neurosci. Methods 223, 50–68 (2014).
 90.
Aihara, K., Takabe, T. & Toyoda, M. Chaotic neural networks. Phys. Lett. A 144, 333–340 (1990).
 91.
Fernández, I., Hernández, C. N. & Pacheco, J. M. Is the North Atlantic Oscillation just a pink noise? Phys. A: Stat. Mech. its Appl. 323, 705–714 (2003).
 92.
Stephenson, D. B., Pavan, V. & Bojariu, R. Is the North Atlantic Oscillation a random walk? Int. J. Climatol. 20, 1–18 (2000).
 93.
Collette, C. & Ausloos, M. Scaling analysis and evolution equation of the North Atlantic Oscillation index fluctuations. Int. J. Mod. Phys. C. 15, 1353–1366 (2004).
 94.
Lind, P. G., Mora, A., Haase, M. & Gallas, J. A. Minimizing stochasticity in the NAO index. Int. J. Bifurc. Chaos 17, 3461–3466 (2007).
 95.
Martínez Santafé, M. D., Lana Pons, F. J., Burgueño, A. & Serra de Larrocha, C. Predictability of the monthly North Atlantic Oscillation index based on fractal analyses and dynamic system theory. Nonlinear Proc. Geophys. 17, 93–101 (2010).
 96.
Fernández, I., Pacheco, J. M. & Quintana, M. P. Pinkness of the North Atlantic Oscillation signal revisited. Phys. A: Stat. Mech. its Appl. 389, 5801–5807 (2010).
 97.
Lynch, S. Dynamical systems with applications using Matlab® (Springer, 2014).
Acknowledgements
We thank Seiji Uenohara and colleagues for kindly sending us data recorded from their chaotic neuron integrated circuit, and we thank John F. Lindner for sending us data on the stellar flux of KIC 5520878. We also thank Michael Jansson for his feedback on stationarity testing.
Author information
Contributions
D.T. conceived of the project, wrote the code for the Chaos Decision Tree Algorithm, analyzed results, and wrote the manuscript. F.T.S. and M.D. contributed to discussing, editing, and revising the paper.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Toker, D., Sommer, F.T. & D’Esposito, M. A simple method for detecting chaos in nature. Commun Biol 3, 11 (2020). https://doi.org/10.1038/s42003-019-0715-9