The significance of neural inter-frequency power correlations

It is of great interest in neuroscience to determine which frequency bands in the brain have covarying power, as this would help us robustly identify the frequency signatures of neural processes. However, to date, to the best of the author's knowledge, no comprehensive statistical approach to this question has accounted for intra-frequency autocorrelation, frequency-domain oversampling, and multiple testing under dependency. As such, this work presents a novel statistical significance test for correlated power across frequency bands for a broad class of non-stationary time series. The test is validated on synthetic data, and is then used to test all of the inter-frequency power correlations between 0.2 and 8500 Hz in continuous intracortical extracellular neural recordings in macaque M1, using a very large, publicly available dataset. The recordings were made with a Utah array and were Current Source Density referenced. The results support previous findings in the literature that neural processes in M1 have power signatures across a very broad range of frequency bands. In particular, the power in LFP frequency bands as low as 20 Hz was almost always statistically significantly correlated with the power in kHz frequency ranges. It is proposed that this test can also be used to discover the superimposed frequency-domain signatures of all the neural processes in a neural signal, allowing every neural frequency band of interest to be identified.


Testing the Normality of the Phase-Randomisation Null Distributions
The vast majority of the null distributions produced by FT phase-randomisation were normal, and so were compatible with the multiple testing procedure from [1]. Normality was confirmed visually for each analysed recording using indicator maps marking which null distributions were statistically normal. As an example, Fig. 1 shows the indicator maps of normal distributions for a small number of randomly selected recordings.
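As an illustration of the procedure above, the following Python sketch (assuming NumPy and SciPy are available; the original analysis used MATLAB, and the toy statistic here is illustrative, not the statistic used in this work) generates FT phase-randomised surrogates, builds a null distribution of a statistic from them, and applies a normality test:

```python
import numpy as np
from scipy.stats import shapiro

def phase_randomise(x, rng):
    """Return an FT phase-randomised surrogate of the real signal x.

    The magnitude spectrum is preserved exactly; the phases are drawn
    uniformly at random (DC and Nyquist bins are kept real-valued)."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, X.size)
    phases[0] = 0.0                      # keep the DC bin real
    if x.size % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist bin real
    surrogate = np.abs(X) * np.exp(1j * phases)
    return np.fft.irfft(surrogate, n=x.size)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)            # placeholder for a recording

# Build a null distribution of a toy statistic (lag-1 autocorrelation)
null = []
for _ in range(500):
    s = phase_randomise(x, rng)
    null.append(np.corrcoef(s[:-1], s[1:])[0, 1])
null = np.asarray(null)

# Shapiro-Wilk p-value; a large p is consistent with normality
p = shapiro(null).pvalue
```

In practice one would compute such a p-value (or a visual indicator of normality) per frequency pair, yielding the indicator maps described above.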

Spectral Interpolation Implementation
The original paper on Spectral Interpolation (SI) outlines the general method [4]. In this work, to remove line noise, we transformed the signal to the Fourier domain using the FFT and identified the peak line noise frequency at ∼60 Hz. We then spectrally interpolated it and all of its harmonics up to the Nyquist frequency. The interpolated segments had a width of 4 Hz. To perform the SI, we averaged the power in the 3 Hz bands on either side of the interpolated segment to calculate its edge values, fitted a 1st-order polynomial between the two edge values, set the interpolated segment's frequency magnitudes equal to the fitted line, and randomised the phases. This process is shown in Fig. 2.
We also spectrally interpolated across either one or two sharp components that were often present in the Fourier transforms of the recordings. These components varied in location between ∼2.4 and 2.8 kHz, and did not correspond to line noise harmonics. These interpolated segments were generally wider than 4 Hz. We also spectrally interpolated the frequency domain between 55 and 65 Hz, due to abnormally large components in this range. We performed no other SI or artifact removal.
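The SI step described above can be sketched as follows (a minimal Python illustration, not the original MATLAB implementation; the function name, the synthetic 1 kHz test signal, and the edge-power-to-magnitude conversion are assumptions for the example):

```python
import numpy as np

def spectral_interpolate(x, fs, f0, width=4.0, edge=3.0, rng=None):
    """Replace a width-Hz band around f0 with a straight-line magnitude
    fit between edge values computed from the mean power in the edge-Hz
    bands on either side, with randomised phases (sketch of SI [4])."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = (freqs >= f0 - width / 2) & (freqs <= f0 + width / 2)
    lo = (freqs >= f0 - width / 2 - edge) & (freqs < f0 - width / 2)
    hi = (freqs > f0 + width / 2) & (freqs <= f0 + width / 2 + edge)
    # Edge values: average the power in the 3 Hz bands on either side
    m_lo = np.sqrt(np.mean(np.abs(X[lo]) ** 2))
    m_hi = np.sqrt(np.mean(np.abs(X[hi]) ** 2))
    # 1st-order polynomial (straight line) between the two edge values
    mags = np.linspace(m_lo, m_hi, band.sum())
    phases = rng.uniform(0.0, 2.0 * np.pi, band.sum())
    X[band] = mags * np.exp(1j * phases)
    return np.fft.irfft(X, n=x.size)

# Example: remove a synthetic 60 Hz line and its harmonics up to Nyquist
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
x = rng.standard_normal(t.size) + 5.0 * np.sin(2 * np.pi * 60 * t)
y = x.copy()
for h in np.arange(60.0, fs / 2, 60.0):
    y = spectral_interpolate(y, fs, h, rng=rng)
```

After the loop, the 60 Hz peak and its harmonics are replaced by noise-level magnitudes interpolated from the neighbouring bands.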

Current Source Density Referencing Implementation
For a review of Current Source Density (CSD) referencing, see [5]. In a Utah array, CSD referencing involves subtracting the average of the four electrodes surrounding the referenced electrode from its recording. In this work, the procedure involved lowpass filtering the referencing signals at 500 Hz, using the MATLAB function lowpass. We then averaged the lowpassed data from the 4 electrodes that formed a square around the referenced electrode to produce the reference signal, which was then subtracted from the referenced signal. Some electrodes lay on the edge or corner of the Utah array; in these cases, we used for the reference only the 3 (edge) or 2 (corner) of the 4 would-be neighbouring electrodes that existed. Examples of the different electrode geometries are shown in Fig. 3 (a). An example of CSD referencing of a neural signal is shown in Fig. 3 (b), and the CSD referencing is further shown in Fig. 3 (c-d).
The neural medium may be non-capacitive [6]. As such, using only sub-500 Hz components in the reference signal was a somewhat arbitrary choice. However, it had the benefit of preventing high amplitude spikes from affecting the referencing.
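The CSD referencing procedure above can be sketched as follows (a Python illustration using a SciPy Butterworth lowpass in place of MATLAB's lowpass; the 10×10 grid shape, filter order, and sampling rate are assumptions for the example):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def csd_reference(data, fs, cutoff=500.0):
    """CSD-reference each electrode of a (rows, cols, samples) grid by
    subtracting the mean of its lowpassed 4-neighbours (3 on edges,
    2 at corners), as described in the text."""
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    low = filtfilt(b, a, data, axis=-1)      # lowpassed referencing signals
    rows, cols, _ = data.shape
    out = np.empty_like(data)
    for r in range(rows):
        for c in range(cols):
            nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            nbrs = [(i, j) for i, j in nbrs
                    if 0 <= i < rows and 0 <= j < cols]
            ref = np.mean([low[i, j] for i, j in nbrs], axis=0)
            out[r, c] = data[r, c] - ref
    return out

fs = 24414.0625                  # assumed sampling rate, for illustration
rng = np.random.default_rng(2)
grid = rng.standard_normal((10, 10, 2048))   # placeholder Utah-array data
csd = csd_reference(grid, fs)
```

The neighbour filtering in the list comprehension automatically handles the edge and corner geometries shown in Fig. 3 (a).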

Clipping Implementation
For the final pre-processing step, the CSD-referenced data was clipped. First, we removed samples x_{n−k}, ..., x_n such that k was minimised while ensuring that x_{n−k−1} was within 1% of x_1. On average, this removed around 50 time steps from the end of the signal, and so had a negligible effect on the signal lengths. The signal was then zero-meaned. Clipping the end of x⃗ and zero-meaning helped the assumption of periodicity in the Fourier transform of each scale to hold. The main consequence of this was that the power at every scale was closer to zero at the beginning and end of the zero-centered clipped sequence, which somewhat mitigated the effects of cyclic discontinuities during phase-randomisation of each scale. This mitigation is not strictly necessary: cyclic discontinuities that genuinely belong to the intra-scale signal are part of that signal, and it is not obvious that they, or their downstream effects on ϕ⃗_{a,b} and thus the null distribution, should be suppressed. Nevertheless, in this work they were somewhat mitigated, and for this purpose clipping was preferable to using a classical window, which would have distorted the normalisation of the ISPC: windowing produces a smaller standard deviation at the edges of the signal, as can be observed in Fig. 4 (d).
The effects of clipping, pre-whitening, and applying a Hamming window prior to pre-whitening on the time series and Fourier transform of a neural signal are shown in Fig. 4. In Fig. 4, pre-whitening is used to demonstrate the effects of cyclic discontinuities. E.g., Fig. 4 (b) shows that cyclic discontinuities can introduce strong artificial high-frequency components at the beginning and end of the signal. In phase-randomisation, these artificial components are spread throughout the signal. Since this is unwanted, we need some way to suppress cyclic discontinuities, e.g. clipping or windowing.
It is not necessary for the first index of the clipped segment to be the first value of the original signal; it can be any arbitrary index. Generally, the clipped segment needs to be of sufficient length, between indices k_1 and k_2 such that x_{k_1} ≈ x_{k_2} and 1 ≤ k_1 ≪ k_2 ≤ n. However, in this work the first value of the whole signal was used as the first value of the clipped segment, i.e. k_1 = 1.
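The clipping step (with k_1 = 1) can be sketched as follows. Note one assumption in this illustration: the text does not specify the reference scale for "within 1%", so the sketch interprets it relative to the signal's peak-to-peak range; the random-walk test signal is also purely illustrative:

```python
import numpy as np

def clip_and_zero_mean(x, tol=0.01):
    """Drop the minimal number of trailing samples so that the new last
    sample is within tol of the first sample, then remove the mean.
    'Within 1%' is interpreted relative to the peak-to-peak range
    (an assumption; the original text leaves the scale unspecified)."""
    scale = np.ptp(x)
    # Keep as much of the signal as possible: find the last index whose
    # value is close enough to x[0], and clip everything after it
    close = np.abs(x - x[0]) <= tol * scale
    last = np.flatnonzero(close)[-1]       # index of the new final sample
    clipped = x[: last + 1]
    return clipped - clipped.mean()        # zero-mean the clipped signal

rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(100_000))   # random walk, for illustration
y = clip_and_zero_mean(x)
```

Because the endpoints of the clipped, zero-meaned signal nearly match, the cyclic discontinuity seen by the Fourier transform is reduced, as discussed above.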

Pseudo-Random Number Generation for Monte Carlo White Noise Processes
A subtle point worth mentioning is that the MATLAB rand function uses the Mersenne Twister to produce pseudo-random numbers, and the use of Mersenne Twisters in MC simulations has been criticised [7]. As such, the authors used the PCG family of RNGs [8] to produce pseudo-random numbers for the white noise processes. The state seed was set as 1 for all cases, and the sequence seed was set as n + v. We used an MC sequence length of either n = F_s × 350 s = 8544922, n = F_s × 400 s = 9765625, or n = F_s × 500 s = 12207031 samples. The C code from [8] was adapted to create the 32-bit white noise processes and save them to '.txt' files, which were then read by the appropriate MATLAB functions. In MATLAB, the white noise processes were then processed as described in Section 4.4.1 and Fig. 10 in the main text.
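For readers who do not wish to adapt the C code from [8], a related PCG-family generator is available directly in NumPy. The sketch below is an illustrative alternative, not the authors' pipeline: it draws Gaussian doubles rather than the 32-bit outputs described above, and PCG64's single seed does not map one-to-one onto the separate state and sequence seeds used in this work. The sampling rate is inferred from n = F_s × 400 s = 9765625 (F_s = 24414.0625 Hz):

```python
import numpy as np

def pcg_white_noise(n, seed):
    """Generate n samples of Gaussian white noise using NumPy's PCG64
    bit generator (a member of the PCG family [8]), rather than the
    Mersenne Twister behind MATLAB's rand or NumPy's legacy RandomState."""
    rng = np.random.Generator(np.random.PCG64(seed))
    return rng.standard_normal(n)

# One of the MC sequence lengths from the text: n = F_s x 350 s
fs = 24414.0625               # sampling rate inferred from the text
n = round(fs * 350)           # = 8544922 samples
noise = pcg_white_noise(n, seed=1)
```

With a fixed seed the sequence is fully reproducible, which is what allows the same white noise processes to be regenerated across runs and shared with the MATLAB side of the pipeline.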