Discrete Structure of the Brain Rhythms

Neuronal activity in the brain generates synchronous oscillations of the Local Field Potential (LFP). Traditional analyses of LFPs are based on decomposing the signal into simpler components, such as sinusoidal harmonics. However, a common drawback of such methods is that the decomposition primitives are presumed from the outset, which may bias our understanding of the signal's structure. Here, we introduce an alternative approach that allows an impartial, high-resolution, hands-off decomposition of the brain waves into a small number of discrete, frequency-modulated oscillatory processes, which we call oscillons. In particular, we demonstrate that the mouse hippocampal LFP contains a single oscillon that occupies the θ-frequency band and a couple of γ-oscillons that correspond, respectively, to the slow and fast γ-waves. Since the oscillons were identified empirically, they may represent the actual, physical structure of synchronous oscillations in neuronal ensembles, whereas the Fourier-defined "brain waves" are nothing but poorly resolved oscillons.

where z = x + iy is a complex variable (i.e., the series expansion (2) is an extension of (1) into the entire complex plane), and of its Padé approximant, a ratio of two polynomials P_{N−1}(z) and Q_N(z) that approximates G(z) to the 2N-th order of z [2].
Oscillatory component. In the analyses of oscillatory signals, the N roots z_p, p = 1, ..., N, of the polynomial Q_N(z), i.e., the poles of the Padé approximant, play the role of the discrete Fourier harmonics z_l in the DFT: they capture the spectral structure of the signal [3][4][5]. Indeed, consider a signal r(t) obtained as a superposition of N_P damped oscillators,
r(t) = Σ_p A_p e^{−α_p t} cos(ω_p t + ϕ_p),   (4)
where A_p is the amplitude of the p-th oscillator, α_p its damping exponent, ω_p its frequency, and ϕ_p its phase. If the signal is sampled at a frequency S, then it generates a discrete time series,
r_k = Σ_{p=1}^{N_P} ( c_p e^{iω^{(+)}_p k/S} + c*_p e^{iω^{(−)}_p k/S} ),   k ∈ Z,
where ω^{(±)}_p = iα_p ± ω_p and c_p = A_p e^{iϕ_p}/2. The generating function of this series is a rational fraction of degree (2N_P − 1)/2N_P, with poles z_p = e^{−iω^{(±)}_p/S}. The phase of each pole z_p defines the frequency ω_p, and its magnitude defines the damping constant α_p; the residue at z_p defines the amplitude A_p and the phase ϕ_p of the corresponding oscillator.
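This correspondence between oscillator parameters and pole positions can be checked directly. The sketch below is a minimal illustration with hypothetical parameter values: it samples a single damped oscillator, computes the denominator of the [1/2] Padé approximant of its generating function from the first four samples, and compares the resulting poles with e^{(α ∓ iω)/S}.

```python
import numpy as np

# Parameters of a single damped oscillator (illustrative values, not from real data)
A, alpha, omega, phi = 1.0, 3.0, 2*np.pi*8.0, 0.4   # amplitude, damping (1/s), frequency (rad/s), phase
S = 1000.0                                           # sampling rate (Hz)

k = np.arange(4)
r = A*np.exp(-alpha*k/S)*np.cos(omega*k/S + phi)     # sampled series = Taylor coefficients of G(z)

# Denominator of the [1/2] Pade approximant: solve sum_j q_j c_{m-j} = 0 for m = 2, 3 (q_0 = 1)
M = np.array([[r[1], r[0]],
              [r[2], r[1]]])
q1, q2 = np.linalg.solve(M, -r[2:4])
poles = np.roots([q2, q1, 1.0])                      # roots of Q(z) = 1 + q_1 z + q_2 z^2

# The poles should sit at z = e^{(alpha -+ i omega)/S}, just outside the unit circle
expected = np.exp((alpha + 1j*omega*np.array([1, -1]))/S)
print(np.sort_complex(poles), np.sort_complex(expected))
```

Since a single damped cosine generates an exactly rational generating function of degree 1/2, the [1/2] approximant recovers the poles exactly (up to round-off); their magnitude e^{α/S} > 1 encodes the damping and their phase the frequency.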
Notice that the poles come in complex conjugate pairs and have to lie either outside of the unit circle, if the signal is damped (Im ω_p = α_p > 0), or on the circle, if the signal has no damping.
Noise component. If a signal is perturbed by an additive noise ξ(t), then the generating function (3) of the resulting "noisy" time series s_n = r_n + ξ_n is the sum of a "regular" and a "noisy" part, G(z) = R(z) + Ξ(z). A remarkable theorem proven by H. Steinhaus [8] establishes that the poles of Ξ(z) concentrate, with probability 1, at the unit circle (SFig. 1B). In other words, the generating function of a random data series is an analytic function inside the unit disk, possessing a dense set of poles as |z| approaches 1. Thus, the total generating function of the full signal, G(z) = R(z) + Ξ(z), has a finite number of poles contributed by R(z) and an infinite number of poles contributed by Ξ(z).
[Fig. 1 caption fragment.] B. The Padé poles of a noisy signal also concentrate in a close vicinity of the unit circle, in accordance with Steinhaus' theorem [8]. For illustration purposes, the number of Padé poles shown on panel B is much larger than the number of harmonics shown on panel A. These poles are not constrained to any a priori selected locations; in fact, their positions in the complex plane are dictated solely by the signal's structure, which ultimately leads to the super-resolution property [5][6][7]. C. According to Froissart's theory [3,9], the zeros and poles that represent the noise component of the signal form close pairs, the Froissart doublets. A zoom-in into a small segment of the unit circle shows many Froissart doublets (zeros shown as blue dots) and two isolated poles that represent the regular, oscillatory part of the signal.
A key property of the Padé approximant to Ξ(z) is that its poles occur in close vicinities of its zeros, forming the so-called Froissart doublets [9][10][11], which can be easily detected numerically (Fig. 1C). In our analyses, the typical distance within a pole-zero pair is smaller than 10^{−6}-10^{−7} in the standard Euclidean metric on the complex plane. We hence identified as Froissart doublets the pole-zero pairs separated by less than a critical distance δ = 10^{−5}. These results are stable: injecting small amounts of white and colored noise into the signal (about 10^{−3} of the signal's mean amplitude, i.e., at least ten times more than the signal's natural noise level) does not alter the reconstructed positions of the regular poles; hence the parameters of the spectral waves remain the same once the "perturbed" Froissart doublets are removed.
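The doublet-detection criterion itself reduces to a nearest-neighbour test in the complex plane. Below is a minimal sketch, with synthetic pole and zero positions standing in for an actual Padé computation; the function name and the numerical values are illustrative.

```python
import numpy as np

def filter_froissart(poles, zeros, delta=1e-5):
    """Split poles into 'regular' and 'Froissart': a pole whose nearest zero
    lies within delta is treated as one half of a noise doublet."""
    d = np.abs(poles[:, None] - zeros[None, :]).min(axis=1)   # distance to nearest zero
    return poles[d >= delta], poles[d < delta]

rng = np.random.default_rng(0)
# noise poles scattered near the unit circle, each shadowed by a zero ~1e-7 away
theta = rng.uniform(0, 2*np.pi, 50)
noise_poles = (1 + 1e-4)*np.exp(1j*theta)
zeros = noise_poles + 1e-7*np.exp(1j*rng.uniform(0, 2*np.pi, 50))
# two 'regular' poles, well separated from every zero
regular = np.array([1.003*np.exp(0.3j), 1.003*np.exp(-0.3j)])

poles = np.concatenate([regular, noise_poles])
kept, discarded = filter_froissart(poles, zeros)
print(len(kept), len(discarded))
```

All fifty shadowed poles fall below the δ threshold and are discarded, while the two isolated poles survive, mirroring the pole-zero pairing shown in Fig. 1C.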
J-matrix formalism. In order to obtain a Padé approximation to G(z) in the entire complex plane, Ξ(z) has to be analytically extended through its natural boundary, which remains an open problem of complex analysis. However, the "J-matrix approach" developed in a recent series of publications [3][4][5] allows addressing this problem in practical terms. The generating function G(z) can be associated with a tridiagonal Hilbert space operator J that has G(z) as its resolvent matrix element, G(z) = ⟨e_0|(J − z1)^{−1}|e_0⟩, e_0 = (1, 0, ...) [2]. In accordance with Steinhaus' theorem, the spectrum of J consists of two parts: an essential spectrum with support on the unit circle, which represents the noise component, and a discrete spectrum, containing a finite number of poles outside the unit circle, which represents the regular component of the signal (a finite number of damped oscillators). In the spectrum of finite-order truncations J_N of the J-operator, the poles of the Froissart doublets take the place of the essential spectrum. Moreover, these finite matrices can be explicitly constructed as follows. Consider the set of subdiagonal Padé approximations to the generating function of a given time series defined by (3). The polynomials Q_N(z) satisfy a three-term recurrence relation, which can be written in matrix form, J_N V = zV, where J_N is the (tridiagonal) finite-order matrix approximation to the J-operator of order N + 1. The column vector V is defined by the polynomials Q_N, V^T = [Q_0(z), Q_1(z), ..., Q_N(z)]. The zeros of Q_{N+1}(z) define the eigenvalues (z_0, z_1, ..., z_N) of J_N and therefore the poles of R_N. The same procedure applied to P_N (with a slightly modified matrix) gives the zeros of G_N, thus completely characterizing R_N itself.
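The link between the truncated operator J_N and the polynomials Q_n can be illustrated on a generic real tridiagonal matrix (the entries below are random placeholders, not coefficients derived from an actual LFP series): running the three-term recurrence encoded in the rows of J_N and taking the roots of the last polynomial reproduces the eigenvalues of J_N.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
a = rng.normal(size=N+1)        # diagonal entries a_0 ... a_N (placeholder values)
b = rng.uniform(0.5, 1.5, N)    # off-diagonal entries b_1 ... b_N

J = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

# Three-term recurrence read off the rows of J_N V = z V with V = [Q_0, ..., Q_N]:
#   b_{n+1} Q_{n+1}(z) = (z - a_n) Q_n(z) - b_n Q_{n-1}(z),  Q_0 = 1
Q = [np.poly1d([1.0]), np.poly1d([1.0, -a[0]])/b[0]]
for n in range(1, N):
    Q.append((np.poly1d([1.0, -a[n]])*Q[n] - b[n-1]*Q[n-1])/b[n])
# The last row imposes (z - a_N) Q_N - b_N Q_{N-1} = 0: its roots are the eigenvalues
Q_last = np.poly1d([1.0, -a[N]])*Q[N] - b[N-1]*Q[N-1]

eigs = np.sort(np.linalg.eigvalsh(J))
roots = np.sort(Q_last.roots.real)
print(np.allclose(eigs, roots, atol=1e-6))
```

This is only the generic Jacobi-matrix identity behind the construction; in the actual method the recurrence coefficients are obtained from the time series via the subdiagonal Padé approximations.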
The Short-Time Padé Transform (STPT) is analogous to the standard Short-Time Fourier Transform (STFT) method [12]. Starting with a segment s_1, s_2, ..., s_N centered at t_1, we compute the Padé approximants, identify and discard the Froissart doublets, and then evaluate the frequencies ω_q(t_1), the amplitudes A_q(t_1), and the phases ϕ_q(t_1) associated with the stable poles z_1, z_2, ..., z_{p_1}, q = 1, ..., p_1. The window is then shifted by ∆T to the position centered at t_2, and the same analysis is applied to the next segment of the time series, yielding the frequencies ω_i(t_2), the amplitudes A_i(t_2), and the phases ϕ_i(t_2), i = 1, ..., p_2, and so on.
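The sliding-window procedure can be sketched as follows. For simplicity, the sketch assumes a single undamped oscillon per window, so each window contributes one independent estimate of the instantaneous frequency; the signal parameters and window settings are illustrative, not those of the actual LFP data.

```python
import numpy as np

S = 1000.0
t = np.arange(0, 2, 1/S)
f_inst = 10 + 2*np.sin(2*np.pi*1.0*t)        # a "spectral wave": 10 Hz carrier, +-2 Hz at 1 Hz
phase = 2*np.pi*np.cumsum(f_inst)/S          # numerical phase integral
s = np.cos(phase)

def pade_freq(c, S):
    """Frequency from the [1/2] Pade denominator of 4 consecutive samples."""
    M = np.array([[c[1], c[0]], [c[2], c[1]]])
    q1, q2 = np.linalg.solve(M, -c[2:4])
    poles = np.roots([q2, q1, 1.0])
    return abs(np.angle(poles[0]))*S/(2*np.pi)   # pole phase -> frequency (Hz)

step = 10                                     # shift the window by 10 ms
centers, est = [], []
for i in range(0, len(s) - 4, step):
    centers.append(t[i:i+4].mean())
    est.append(pade_freq(s[i:i+4], S))

err = np.abs(np.array(est) - (10 + 2*np.sin(2*np.pi*1.0*np.array(centers))))
print(err.max())
```

Each window is processed independently, and the sequence of per-window pole phases traces out the modulated frequency, which is exactly the "spectral wave" pattern discussed below.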
Separating the noise from the oscillations. The Froissart doublets and the regular poles of G(z) exhibit qualitatively different behaviors in response to changes of the DPT algorithm's parameters. If the size of the time window T_W in the STPT is altered, if the window is shifted from one segment of the time series to another, or if the order of the Padé approximant is changed, the Froissart-paired poles move significantly and irregularly around the unit circle, as one would expect from a structure that represents noise. In contrast, the poles associated with the regular part of the signal remain stable and isolated. These differences can be easily detected numerically, producing the computational DPT method [3,4].
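The stability of the regular poles under changes of the approximation order can be probed on a synthetic example (the signal parameters, noise level, and orders below are illustrative): a root close to the true pole position persists when the Padé order is changed, while the remaining, noise-induced roots scatter along the unit circle.

```python
import numpy as np

rng = np.random.default_rng(3)
S = 1000.0
alpha, f0 = 5.0, 40.0
k = np.arange(40)
# damped 40 Hz oscillation plus weak additive white noise
c = np.exp(-alpha*k/S)*np.cos(2*np.pi*f0*k/S) + 1e-3*rng.normal(size=k.size)
z_true = np.exp((alpha - 2j*np.pi*f0)/S)     # one of the two true ("regular") poles

def pade_poles(c, N):
    """Poles of the [N-1/N] Pade approximant built from the first 2N coefficients."""
    M = np.array([[c[m - j] for j in range(1, N + 1)] for m in range(N, 2*N)])
    q = np.linalg.lstsq(M, -c[N:2*N], rcond=None)[0]
    return np.roots(np.concatenate([q[::-1], [1.0]]))   # roots of 1 + q_1 z + ... + q_N z^N

dists = {}
for N in (6, 8):
    dists[N] = np.abs(pade_poles(c, N) - z_true).min()  # distance of nearest root to the true pole
print(dists)
```

At both orders a root stays close to the true pole, while the surplus roots (the Froissart halves) change position with N; comparing the pole sets across orders is one practical way to implement the stability test described above.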
Note that every data point produced by the STPT is obtained independently: the evaluation of the N frequencies at each time step, the identification of the "noisy" vs. the "regular" frequencies, etc., do not affect the values obtained at the other time steps; hence the pattern formed by the data points is completely empirical.
Computing the parameters of the spectral waves. Since the instantaneous parameters of the oscillons are computed independently, based on a finite number of data points, the reconstructed spectral waves contain gaps and other irregularities. We therefore construct the smoothed spectral waves by interpolating the "raw" traces of the regular frequencies over uniformly spaced time points, and compute the mean parameters ω_{q,0}, ω_{q,i}, Ω_{q,i}, and ϕ_{q,i} in the expansion
ω_q(t) ≡ ∂_t φ_q = ω_{q,0} + ω_{q,1} sin(Ω_{q,1} t + ϕ_{q,1}) + ω_{q,2} sin(Ω_{q,2} t + ϕ_{q,2}) + ...
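A minimal sketch of this step, with invented spectral-wave parameters: irregular samples of a regular-frequency trace are interpolated onto a uniform grid, and the leading modulation parameters are read off a DFT of the interpolated trace.

```python
import numpy as np

rng = np.random.default_rng(1)
# "raw" trace: irregularly spaced samples of one oscillon's instantaneous frequency
t_raw = np.sort(rng.uniform(0, 8, 600))
w_raw = 10 + 2*np.sin(2*np.pi*1.5*t_raw + 0.7)   # assumed wave: 2 Hz amplitude, Omega = 1.5 Hz

# interpolate onto a uniform grid and Fourier-analyze the modulation
dt = 0.01
t_u = np.arange(0, 8, dt)
w_u = np.interp(t_u, t_raw, w_raw)
w0 = w_u.mean()                                   # mean frequency, omega_{q,0}
spec = np.abs(np.fft.rfft(w_u - w0))*2/len(w_u)   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(w_u), dt)
Omega = freqs[spec.argmax()]                      # dominant modulating frequency, Omega_{q,1}
amp = spec.max()                                  # modulation amplitude, omega_{q,1}
print(round(w0, 2), Omega, round(amp, 2))
```

The recovered triple (mean frequency, modulating frequency, modulation amplitude) matches the parameters that were put in; higher harmonics of the expansion would be read off the remaining spectral peaks in the same way.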
using the standard DFT methods. SFig. 2 illustrates how the inherent conflict between the time and frequency resolutions obscures the spectral waves in the Fourier spectrogram. The horizontal stripes on each Fourier spectrogram indicate the magnitude of the spectral resolution, ∆f. The spectral resolution of the Fourier spectrogram becomes comparable to the frequency scale of the spectral waves only for T_W = 100 msec (third panel from the top), but at this value the temporal resolution exceeds the characteristic period of the spectral waves. Increasing the frequency resolution requires broadening the window beyond the spectral waves' period (bottom panel) and, vice versa, increasing the temporal resolution destroys the frequency resolution (top two panels). As a result, the spectral waves remain unresolved by the DFT, which can only detect a band of increased amplitudes, but not the detailed pattern of the oscillating frequencies.
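The effect can be reproduced in a few lines (the carrier and modulation parameters are arbitrary illustrations): a short window lumps a frequency-modulated tone into a single wide bin, while a long window replaces the oscillating frequency trace by static Bessel sidebands; neither reveals the wave itself.

```python
import numpy as np

S = 1000
t = np.arange(0, 4, 1/S)
# tone whose instantaneous frequency oscillates between 8 and 12 Hz, once per second
s = np.cos(2*np.pi*10*t - 2*np.cos(2*np.pi*1.0*t))

def dft_amps(x, S):
    """Single-sided DFT amplitude spectrum of a window."""
    return np.fft.rfftfreq(len(x), 1/S), np.abs(np.fft.rfft(x))*2/len(x)

bin_at = lambda f, freqs: np.abs(freqs - f).argmin()

# short window: T_W = 0.1 s gives df = 10 Hz; the whole wave collapses into one bin
f1, a1 = dft_amps(s[:100], S)
# long window: T_W = 4 s gives df = 0.25 Hz, but the modulation now shows up only
# as stationary sidebands at 10 +- m*1 Hz (Bessel-function amplitudes), not as a wave
f2, a2 = dft_amps(s, S)
print(f1[a1.argmax()], a2[bin_at(9, f2)], a2[bin_at(10, f2)], a2[bin_at(11, f2)])
```

With the long window the first sidebands (|J_1(2)| ≈ 0.58) exceed the carrier line (|J_0(2)| ≈ 0.22), i.e., the DFT reports a static band of elevated amplitudes around 10 Hz in place of the oscillating frequency trace.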
The wavelet spectrograms (scalograms) of the same signal, computed using three different wavelets, are shown on SFig. 3. To illustrate the effectiveness of the proposed method, we simulated a superposition of five artificial oscillons with the amplitudes A_1 = 0.5, A_2 = 0.3, A_3 = 0.15, A_4 = 0.1, and A_5 = 0.05, the mean frequencies ω_{1,0} = 5 Hz, ω_{2,0} ≈ 20 Hz, ω_{3,0} ≈ 30 Hz, ω_{4,0} ≈ 40 Hz, and ω_{5,0} ≈ 50 Hz, and five modulating frequencies ω_{1,1} = 2 Hz, ω_{2,1} = 2.5 Hz, ω_{3,1} = 4 Hz, ω_{4,1} = 5 Hz, and ω_{5,1} = 6 Hz, respectively. The amplitudes of the frequency modulations are approximately π Hz in all cases. The resulting "spectral waves" are shown as black sinusoids in the background of the four panels of SFig. 4. Each panel corresponds to a particular window width, T_W = 0.10 sec, T_W = 0.15 sec, T_W = 0.2 sec, or T_W = 0.25 sec, at the sampling rate of S = 1000 Hz. In the first case, the DPT is therefore based on N_1 = 100 data points per window, i.e., 50 sample frequencies occupying the range between 0 and 500 Hz, or about one frequency per 10 Hz interval. As shown on SFig. 4A, this (or a smaller) value is insufficient for resolving oscillons with modulation magnitude ±π Hz: the undulatory pattern is not captured. In the second case, each window contains N_2 = 150 points, or one frequency per ≈ 6.6 Hz, and the spectral waves become apparent (SFig. 4B). If the window becomes bigger, N_3 = 200 points (SFig. 4C) or N_4 = 250 points (SFig. 4D), the temporal resolution suffers: the patterns of all spectral waves become averaged over the window width. As a result, the upper spectral waves produce sidebands and the lowest spectral waves flatten out. We emphasize, however, that the original signal can be reconstructed with high precision in all cases; the issue is only whether the spectral waves can or cannot be resolved.
For comparison, the corresponding Fourier spectrograms and the wavelet scalogram computed using Daubechies' wavelet of level 24 are shown on SFig. 5.