## Introduction

Working memory is the foundation for goal-directed behaviours in which short-term storage and manipulation of sensory information are crucial for successful task execution. In understanding working memory as a core cognitive process, it is essential to investigate the selection, initiation, and termination of information processing, in other words, the stages of the memory process. In our study, we utilised one of the most widely used experimental methods for memory investigation, the Deese–Roediger–McDermott (DRM) paradigm1,2, in which the temporal subprocesses of working memory, such as encoding and retrieval, are separated. This paradigm is a popular research tool because of its methodological simplicity and controllability. Originally used to study long-term memory, it was later adapted to working memory and the functional magnetic resonance imaging (fMRI) environment3,4, allowing measurement of neural activity while a person memorises and retrieves information. In the classical DRM paradigm, participants are presented with a list of semantically related words at the encoding stage, but the paradigm has also been successfully used with other stimuli, which allowed us to employ modified versions of the DRM paradigm in four experimental visual tasks: two verbal and two nonverbal. Studies on the neural correlates of working memory have revealed mainly activations of prefrontal or visual regions5. In recent years, functional activity in general has been intensely analysed with a variety of methods6. However, blood oxygen-level-dependent (BOLD) signals have a nontrivially associated autocorrelation and cross-correlation structure7 and remain notoriously challenging to analyse due to their very low temporal resolution. Motivated by prior work, we applied the fractal methodology to test for regional differences in BOLD scaling properties between the tasks and experimental phases of a working memory experiment.

Many natural systems representing diverse scientific disciplines such as physics, chemistry, biology, economics, and social sciences can be considered complex systems8. These systems consist of numerous nonlinearly interacting elements and reveal extremely complex dynamical behaviour that is challenging to characterise with standard analytic tools. Despite their complexity, such systems reveal some universal features, with fractality being one of the most intriguing effects among them. It is a consequence of scale-free, multiscale, data/system organisation and, thus, the system’s complexity can be quantitatively measured by fractal dimensions. Moreover, recognising the fractal structure can reveal the system’s underlying process, like self-organised criticality or cascade processes9,10. A clear example is the human brain, for which fractal organisation has been identified in morphological and physiological properties11,12. Moreover, in recent years this methodology, very effective in detecting long-range temporal correlations, has also been developed for the cross-correlation case and used to quantify the nonlinear coupling between time series in economic systems13.

However, the proper quantification of the temporal organisation of the data is a subtle task, especially when the data are nonstationary14. For fractals, the correlation structure of the data obeys a power law, which denotes long-range temporal or spatial correlations. Thus, the system (or signal) properties are evaluated on different scales to detect such correlations. To do this, one can use the autocorrelation function with correlations estimated at different time lags. However, determining the correlation in this way is problematic due to the noise and trends superimposed on the collected data15,16. As a result, data nonstationarity can lead to inaccurate estimation of the scaling exponents. Thus, one is interested in a methodology that eliminates trends without knowledge of their origin and shape. In this respect, two approaches are commonly used15: one related to the investigation of the signal in the frequency domain and the other directly in the time domain. In the former case, analysis of the signal’s wavelet transform reveals the correlation structure of the data at different levels of temporal resolution. One can also use Fourier transform methods to assess the scaling exponents as a complementary approach. However, in this case, nonstationarity can affect the calculations. In the latter approach, by estimating the detrended variance on different time scales, one can evaluate the Hurst exponents and quantify the level of data persistency.

Fractal and spectral methods can also help capture the information hidden in physiological signals. These methods have been used successfully in the study of EEG signals17,18,19, identifying the signature of scale-free dynamics. The temporal characteristics of fMRI signals have also been demonstrated to follow a power-law relationship expressed in the frequency domain f in the form $$P(f) \propto 1/f^\beta$$ with $$\beta >0$$20, and the spatiotemporal functional correlations have been related to the dynamics of a critical system21. Recently, fractal methodology was also applied to analyse fMRI data22. Moreover, several studies have shown that the fractal properties can vary between brain regions as well as during task activation23,24,25.
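As an illustration of how such a spectral scaling exponent can be obtained, the sketch below (our own minimal example, not the pipeline used in the study) fits $$\beta$$ from a linear regression on the log-log periodogram; the `fmax` cut-off is an assumption used to restrict the fit to the scaling range:

```python
import numpy as np

def spectral_exponent(x, fs=1.0, fmax=None):
    """Estimate beta in P(f) ~ 1/f^beta via a linear fit to the
    log-log periodogram (minimal sketch: no windowing or averaging).
    fmax restricts the fit to the low-frequency scaling range."""
    x = np.asarray(x, dtype=float)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)[1:]      # drop f = 0
    power = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2   # raw periodogram
    if fmax is not None:
        keep = freqs <= fmax
        freqs, power = freqs[keep], power[keep]
    slope = np.polyfit(np.log(freqs), np.log(power), 1)[0]
    return -slope                                        # beta = -slope
```

For white noise this yields $$\beta \approx 0$$ and for a random walk $$\beta \approx 2$$, consistent with the theoretical relation between $$\beta$$ and H referred to in “Results”.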

Our study focused on the temporal organisation of data related to involvement in various cognitive tasks. To this end, we applied the fractal methodology to investigate the correlation structure of the time series related to four different experimental visual tasks, nonverbal (abstract objects requiring global or local information processing) and verbal (words or pseudowords), as well as the phases (information encoding and retrieval) and times of day of a working memory experiment. Two well-established methods of estimating the self-affinity parameter representing power-law autocorrelation signal properties, namely detrended fluctuation analysis and power spectrum density, have been applied in this context. Moreover, the nonlinear cross-correlation coefficient has been used to quantify the strength of the coupling between the time series under investigation. Our results confirm that both the fractal characteristics of the time series organisation and the cross-correlation between brain areas are strongly associated with the brain activity related to engagement in a given task.

The organisation of the paper is as follows: in “Data” we provide a summary of data specification and preprocessing. Section “Methods” presents the methods of fractal, spectral, and cross-correlation analysis; a brief summary of the consecutive steps of data preprocessing and analysis is depicted in Fig. 1. Results of the analyses are presented in “Results”. Finally, in “Discussion” the findings are discussed and conclusions are presented.

## Results

### Analysis of the fractal organisation of the time series

At the beginning, we estimated the scaling properties of the time series extracted from the specific brain regions (ROIs; regions of interest) in four experimental tasks (visual-verbal: semantic, SEM, and phonological, PHO, and visual-nonverbal: local information processing, LOC, and global information processing, GLO) by applying both the DFA and PSD methods presented in “Methods”. Examples of the analysed data are depicted in “Data”. For each session, we estimated the fluctuation function F(s) and the power spectral density S(f) of the time series in each ROI in a given task. The corresponding quantities were averaged over all sessions, yielding the averages $$\langle F(s)\rangle$$ and $$\langle S(f) \rangle$$ for each ROI and task separately. Based on the power-law dependence they clearly follow, as shown in detail in “Detrended fluctuation analysis”, the Hurst and $$\beta$$ exponents were estimated for both the tasks and the resting state and are depicted in Fig. 2. The difference in these exponents between the resting state and the tasks is clear. The extreme values of the scaling exponents were obtained for the resting state, with $$\beta \in [0.1, 1.4]$$ and $$H\in [0.9, 1.2]$$. In most cases, the theoretical relation between $$\beta$$ and H (7) was fulfilled. Still, we note that in some ROIs the $$\beta$$ exponent dropped and assumed low values, which was not observed for the Hurst exponent. This is due to the nonstationarity of the signal, which is removed in the DFA-based methods but can strongly influence the $$\beta$$ values.

The estimated exponents indicate strongly correlated signals similar to the 1/f process, which is a ubiquitous phenomenon in physical and biological processes. However, the signals corresponding to task performance were much less autocorrelated, characterised by exponents $$\beta$$ and H within the ranges [0.2, 0.8] and [0.49, 0.9], respectively. Moreover, the variation of the exponents across ROIs was similar for all kinds of tasks. In figures such as Fig. 2, we group ROIs belonging to the same resting-state networks (RSN) for greater interpretability. The Hurst exponent assumed the highest values, suggesting strong persistence of the signals, for the ROIs belonging to the Auditory RSN. On the other hand, the smallest H values characterise the Visual II RSN.

To investigate possible differences in the fractal signal properties recorded in subsequent experimental phases, we extracted the time series separately from the memory encoding and retrieval phases, as described in Fig. 1 (see also examples in “Data preprocessing”). The PSD and DFA characteristics were then estimated and compared with respect to the time of day when the experiment was performed (morning, evening), the tasks and the phases of each task.

In Figs. 3 and 4, the estimated exponents $$\beta$$ and H in the encoding and retrieval phases are presented. There are clear differences from the results obtained for the entire time series. The H values in the segmented tasks are higher than in the entire ones. On the other hand, the resting-state H values obtained from similarly preprocessed data (cf. “Data preprocessing”) reveal a slightly weaker correlation level than for the original data. However, the most exciting observations come from comparing the scaling properties of different experimental phases. In general, all tasks decreased the H exponents with respect to the resting state (by 0.049 on average, PHO and SEM more than GLO and LOC), and even more so in the encoding phase (by another 0.020, especially for PHO and SEM, then GLO, but without an interaction for LOC). For the encoding phase, the variation of the Hurst exponent is greater than that estimated for the retrieval phase. The variance of H due to ROIs is more than twice the residual variance of the mixed model, indicating that finding interactions between tasks and individual ROIs should be the next step.

For the AAL regions belonging to the Visual and Sensorymotor RSNs, the values of H drop significantly. This effect is especially strong for the verbal tasks (SEM and PHO), for which $$H \approx 0.5$$ (Visual II) and $$H \approx 0.7$$ (Motor), characteristic of uncorrelated and moderately positively correlated data, respectively. It is not as conspicuous in the case of the visuospatial tasks (GLO and LOC), where a significant difference between resting state and task is mainly visible for the Visual RSN. The highest values of the Hurst exponent, indicating a 1/f process, were identified for the Auditory RSN, similarly to the resting-state data. The H values in the Visual RSN are higher in the memory retrieval phase than in encoding; however, there is still a noticeable decrease with respect to the resting state. Moreover, in retrieval, there is hardly any difference between verbal and visuospatial tasks. We found no significant effect of time of day (TOD, morning and evening sessions), but there was one significant positive interaction of TOD with the PHO task. The code for and full output of the statistical model are provided in Supplementary Listing 1 and Table C.1.

To illustrate the difference in the hierarchical structure of the analysed signals, we applied the wavelet transform. This method is capable of revealing the self-similar properties of the signals, which can be presented as a heat map on a space-scale half-plane called a scalogram. The scalogram in Fig. 5 represents sample signals (ROI no. 54, Visual II RSN, TOD evening) for which the most evident differences between the scaling properties of the two tasks, GLO and SEM, appear with respect to the experimental phase. The $$W_{\psi }$$ amplitude is coded by 256 colours. It is easy to notice the hierarchical structure underlying the fluctuations. The wavelet transform produces a kind of branching structure with large regions of high amplitude on large scales and a much denser distribution of singularities on the smaller ones. Moreover, for the semantic task in the encoding phase, a concentration of high amplitude on small scales is also noticeable. It indicates the high volatility of the signal and the strong singularities quantified by small values of the scaling exponent H.
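For readers wishing to reproduce a scalogram of this kind, the sketch below (our own illustration, using a Ricker/Mexican-hat wavelet as $$\psi$$, which need not match the wavelet used for Fig. 5) computes a coefficient matrix $$W_{\psi }$$ with one row per scale, ready to be colour-coded as a heat map:

```python
import numpy as np

def ricker(n, a):
    """Ricker (Mexican-hat) wavelet of width a sampled at n points."""
    t = np.arange(n) - (n - 1) / 2.0
    norm = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return norm * (1.0 - (t / a) ** 2) * np.exp(-t ** 2 / (2.0 * a ** 2))

def scalogram(x, widths):
    """Continuous wavelet transform W_psi(scale, time): one row per scale."""
    x = np.asarray(x, dtype=float)
    rows = []
    for a in widths:
        n = min(len(x), int(10 * a))   # truncate the wavelet support
        rows.append(np.convolve(x, ricker(n, a), mode="same"))
    return np.array(rows)
```

Plotting `np.abs(scalogram(x, widths))` with an image routine then yields the space-scale heat map described above.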

### Cross-correlations study

From the perspective of cross-correlation analysis, we can identify which ROIs are most correlated with the others. To measure this, we averaged $$\rho (q=1,s=10)$$ over all subjects; next, we calculated the absolute difference of each element of the correlation matrix between the four tasks and the resting state; finally, for a given ROI, we averaged the difference over all the other ROIs, $$\langle \rho (q=1,s=10)\rangle$$. The results are presented in Fig. 6.
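The averaging procedure above can be written compactly; in the sketch below (hypothetical helper names), the input arrays stand for the per-subject $$\rho (q=1,s=10)$$ matrices, whose computation follows “Methods”:

```python
import numpy as np

def roi_difference_profile(task_mats, rest_mats):
    """Per-ROI mean absolute difference between task and resting-state
    correlation matrices.

    task_mats, rest_mats: arrays of shape (n_subjects, n_roi, n_roi)
    holding the per-subject rho(q=1, s=10) matrices.
    """
    task_avg = np.asarray(task_mats, dtype=float).mean(axis=0)  # subject average
    rest_avg = np.asarray(rest_mats, dtype=float).mean(axis=0)
    diff = np.abs(task_avg - rest_avg)      # element-wise absolute difference
    np.fill_diagonal(diff, 0.0)             # exclude self-correlations
    n_roi = diff.shape[0]
    return diff.sum(axis=1) / (n_roi - 1)   # average over the other ROIs
```

The returned vector has one entry per ROI; its largest entries mark the regions whose coupling pattern departs most from the resting state.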

In the case of visuospatial tasks (GLO and LOC), the largest difference from the resting state is reached for Visual I and Visual II RSNs. For the verbal tasks (PHO and SEM), again the largest difference is for the Visual II region, but also for the sensory-motor and part of the executive areas, especially in the encoding phase of the experiment.

We also checked whether the largest eigenvalues calculated for the correlation matrix of 116 ROIs in the task were significantly different from the resting state. As expected, naturally ordered eigenvalues, $$\lambda _i$$, showed much weaker differences, since they were associated with noisier eigenvectors than the clustered eigenvalues, as visible in Fig. 7. Henceforth, we focus only on the clustered eigenvalues, $$\tilde{\lambda }_i$$. The eigenvalues of Pearson correlations showed fewer differences, which appeared dispersed among several eigenvalues and had smaller magnitudes (for instance, when testing for all pairwise differences between GLO, LOC, PHO, SEM and resting state, $$\tilde{\lambda }_i$$ with $$i=5, 11, 13$$ showed 2–4 significant differences each and their magnitudes lay between 0.9 and 2.5). From the perspective of the $$\rho (q=1,s=10)$$ coefficient, which also takes nonlinear correlations into account, more differences appeared; they were localised in larger eigenvalues and had larger magnitudes (in the same pairwise comparisons as above, $$\tilde{\lambda }_1$$ contained six differences of magnitude between 4.0 and 8.8, $$\tilde{\lambda }_2$$ four differences ranging between 3.4 and 4.6, and there were many smaller ones for $$i=3,4,5,7,9,10,15$$). There were significant differences between morning and evening; however, they appeared only in three eigenvalues for Pearson and two for $$\rho (q=1,s=10)$$ correlations. For details, see Supplementary Tables B.2–B.7.
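The eigenvalue comparison amounts to diagonalising each condition's ROI correlation matrix and comparing the resulting spectra between conditions. A minimal sketch (our own illustration, using Pearson correlations as a linear stand-in for the $$\rho (q,s)$$ matrices and omitting the eigenvalue-clustering step):

```python
import numpy as np

def sorted_eigensystem(corr):
    """Eigenvalues and eigenvectors of a symmetric correlation matrix,
    ordered by descending eigenvalue."""
    vals, vecs = np.linalg.eigh(corr)
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

def largest_eigenvalue(ts_matrix):
    """Largest eigenvalue of the Pearson correlation matrix of a
    (n_timepoints, n_roi) time-series matrix."""
    corr = np.corrcoef(ts_matrix, rowvar=False)
    return sorted_eigensystem(corr)[0][0]
```

Computing `largest_eigenvalue` per subject and condition provides the samples on which between-condition tests of $$\tilde{\lambda }_1$$ can then be run.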

In particular, Fig. 7 (bottom) shows that the largest clustered eigenvalue indicates differences not only between the resting state and the tasks but also between the encoding and retrieval phases. It distinguishes the resting state from GLO, LOC, PHO, and SEM (4.5***, 4.0**, 6.6*** and 8.8***, respectively), as well as SEM from both GLO and LOC (−4.3**, −4.8***). Such differences (either the tasks vs resting state or GLO vs PHO and SEM, and LOC vs PHO) repeat also in other eigenvalues, although they are fewer per eigenvalue.

In Fig. 7 we show an example of an eigenvector associated with the largest eigenvalue of the $$\rho$$ correlation matrix. Clustering of the eigenvectors was effective, as reflected by the smaller error bars compared to the unclustered principal eigenvectors. This eigenvector encodes, for instance, opposite activations of such RSNs as Executive Function and DMN versus Sensorymotor and Visual II, with some other RSNs visibly divided, e.g., Visual III or Basal Ganglia. The principal eigenvector also shows less variance in the resting state than in the encoding and retrieval phases. Particularly more active in tasks are the Executive Function, DMN, Sensorymotor, Visual II and III RSNs. Even though the pattern between encoding and retrieval is almost identical, the minima and maxima have a systematically greater spread in encoding, reflecting, among other things, their greater consistency across subjects and tasks. Naturally, the eigenvectors corresponding to the other results in Supplementary Tables B.2–B.5 encode subtler effects. They show different patterns of ROI contributions, and each will need a separate interpretation and validation.

## Discussion

Analysis of BOLD signal scaling properties reveals significant differences between task performance and resting state. Resting-state signals are more regular than those related to task phases (see Fig. 2). This observation is quantitatively described by the Hurst exponents $$\langle H \rangle \approx 1.1$$ for the resting state and $$\langle H \rangle \approx 0.7$$ for the tasks. The most distinct BOLD signals are related to the Auditory RSN (highly correlated), AAL ROI 83-84 (left and right superior temporal poles) and 87-88 (left and right middle temporal poles), and to Visual II (minimally correlated), ROI 53-54 (left and right inferior occipital gyri). The temporal pole is a heterogeneous structure, and many studies have demonstrated its role in various cognitive functions29. It is engaged in both visual memory processing30 and semantic functions31,32. Previous research confirmed the dense connections between the dorsolateral temporal pole and the medial temporal cortex and proved the involvement of the temporal pole in memory33. The inferior occipital gyrus, belonging to the Visual II network, participates in primary visual processing, as well as in word identification as part of the visual word form area34. These results are also consistent with the study by Churchill et al. (2016)25, who showed that cognitive effort decreases fractal scaling throughout the brain. It should be noted that the value of H in the resting state indicates a 1/f process commonly identified in nature.

However, the most intriguing results come from the analysis of the encoding and retrieval phases. The Hurst exponents estimated especially for the Visual II network are low compared to the other ROIs, whereby the differences are most significant for the encoding phase. Visual cortex activity is associated with the processing of stimuli in working memory tasks and has been shown to distinguish correct and incorrect stimuli at both encoding and retrieval5. In retrieval, parts of the early visual processing areas have been found to be preferentially linked to correct recognitions when compared to false ones, and these differences were linked to a sensory reactivation of memorised stimuli3,35; namely, true recognitions were associated with more sensory details than false ones. Similarly, visual cortex activity has also been linked to imagery retrieval of sensory details when recognising information from memory36. Taking these findings into consideration, the differences in H values between our tasks (the lowest H in visual-verbal tasks and particularly high differences during encoding) could be linked to a larger amount of sensory details (to be encoded and then retrieved) in visuospatial stimuli than in verbal material. Based on the results obtained, it is easy to distinguish between the experimental phases (encoding/retrieval) and the task types (visual-verbal/nonverbal). A recent EEG study37 provides evidence for the dynamics of encoding and maintenance processes of working memory. Its results show a temporally stable coding scheme with possible dynamics during working memory maintenance. The results of our study using fMRI and fractal and spectral analysis indicate differences in dynamics within the encoding and retrieval phases. Both studies confirm a dynamic-processing model of working memory and demonstrate the necessity of applying new methodological and computational approaches.

According to the detrended cross-correlation analysis, $$\rho (q)$$, the most significant differences in the level of correlation as compared to the resting state are in the areas of Visual I and Visual II for the visuospatial tasks and in the Visual II and Sensorymotor networks for the verbal tasks. The above differences are more visible when correlations are measured with the coefficient $$\rho (q,s)$$, which, in addition to linear correlations, can identify nonlinear relationships. From this perspective, the most outstanding networks are the DMN (default mode network), Sensorymotor, Cerebellum, and Visual II (ROI 53, 54: inferior occipital gyri). The DMN consists of the posterior cingulate cortex, angular gyrus, and precuneus and is traditionally thought to be active while mind-wandering at wakeful rest. However, more recent studies have shown its involvement during thoughts about task performance38. The precuneus has been found to be involved in the encoding and retrieval processes of episodic memory, executive functions, and visuospatial imagery (for a review, see: Cavanna and Trimble, 200639). Rao and colleagues (2003)40 showed that both the angular gyrus and the precuneus are activated during spatial location information processing. Furthermore, some studies have confirmed the involvement of the precuneus in attention41. The cerebellum, which is similar in structure to the cerebral cortex, has its own functional architecture, in which individual lobules are involved in various motor and cognitive functions (for a review, see: Stoodley and Schmahmann, 200942). Among others, lobule VI and the vermis have been reported to be activated during spatial, attentional, and working memory tasks43,44. Our results are in line with the studies mentioned above, showing differences in correlations in lobule VI and the vermis of the cerebellum.
The Sensorymotor network (ROI 1, 2: left and right precentral gyri; 57, 58: left and right postcentral gyri; 69, 70: left and right paracentral lobules) is activated primarily during the processing of sensory and motor information. However, there is evidence that the precentral gyrus is involved in the retrieval of false memories45. Both the precentral and postcentral gyri were also previously reported, in a similar visuospatial working memory task, to show correctness-related differences in activity when younger adults processed a lure46. One limitation of the presented analysis is that it does not allow distinguishing between positive responses and false memories. Another technique, based on extracting short significant fMRI events, can be used for this purpose47.

## Conclusions

To our knowledge, this is the first time that spectral and fractal analysis have been used to investigate the encoding and retrieval processes of working memory. The presented results clearly show that the temporal organisation of the fMRI time series contains distinctive information about brain activity related to different types of tasks and phases of working memory. Analysis of the scaling properties of the signals revealed that those corresponding to task performance are significantly different from those in the resting state. The differences in signals between the encoding and retrieval phases showed that the methods of analysis used are sensitive to dynamic changes in these processes. The results of the fractal and spectral analysis presented in our study, consistent with previous research related to visuospatial and verbal information processing, working memory (encoding and retrieval), and executive functions, shed new light on these processes.

## Data

Below, in “Participants” and “Tasks” we provide a summary of the data from the experiment described at length by Lewandowska et al. (2018)48. Details on fMRI data acquisition and preprocessing can be found in Fafrowicz et al. (2019)49. In “Data preprocessing” we provide processing details specific to the current paper.

### Participants

The volunteers first underwent an online sleep-wake assessment that included night sleep quality (Pittsburgh Sleep Quality Index, PSQI50), daytime sleepiness (Epworth Sleepiness Scale, ESS51), and diurnal preference as measured by the Chronotype Questionnaire52. Exclusion criteria were: age below 19 or above 35, left-handedness, psychiatric or neurological disorders, dependence on drugs, alcohol or nicotine, shift work or travel involving moving between more than two time zones within the past two months, and an ESS score above 10 points. Next, the volunteers were qualified for the study with genetic testing for the polymorphism of the clock gene PER3, which was considered a hallmark of extreme diurnal preferences53. Finally, 54 subjects were selected (32 females, mean age: 24.17±3.56 y.o.), divided into 26 morning-oriented and 28 evening-oriented types based on the Chronotype Questionnaire and PER3 genotyping.

The volunteers signed an informed consent form and were remunerated for their participation in the experiment. The study was carried out in accordance with the Declaration of Helsinki and approved by the Research Ethics Committee at the Institute of Applied Psychology at the Jagiellonian University.

Data were collected at two times of the day (TOD; morning and evening), four experimental visual tasks (verbal: semantic and phonological, ‘SEM’ and ‘PHO’, and nonverbal: processing of global or local characteristics of abstract objects, ‘GLO’ and ‘LOC’), two phases of each task (encoding, ‘ENC’, and retrieval, ‘RET’) and resting state (‘rest’). The four tasks were based on the Deese-Roediger-McDermott paradigm1,2 and specifically the version dedicated to studying short-term/working memory distortions4. In the semantic task, the stimuli were Polish words matched by semantic similarity. In the phonological task, meaningless pseudowords characterised by phonological similarity were used. On the other hand, in the global processing task, the stimuli were abstract figures requiring holistic processing (differing in the number of overlapping similarities). In the local processing task, the stimuli differed in one specific detail and thus required local processing.

In each task, participants were asked to memorise sets of stimuli (encoding phase, 1.2–1.8 s), which were followed by either simple distractors (in verbal tasks, 2.2 s) or masks (in visuospatial tasks, 1.2 s). Then (after about 6.1 s), another single stimulus, called the probe, appeared on the screen (2 s). The probe stimulus came in one of three possible types: positive (it had appeared in encoding), negative (it was new and dissimilar to the encoding stimuli) or lure (it was new but associated with or similar to one of the encoding stimuli). At this point (retrieval phase), participants were asked to answer whether the stimulus had appeared in the preceding set (with the right-hand key for “yes” and the left-hand key for “no” responses). After an 8.4-s intertrial interval, a fixation point (450 ms) and a blank screen (100 ms) were presented, followed by a new set.

There were 60 sets of stimuli followed by 25 positive probes, 25 lures, and 10 negative probes in each task. Sample experimental tasks are depicted in Supplementary Fig. A.1. The behavioural data on the response accuracy and reaction times for each task and probe type are presented in Supplementary Table B.1.

Each participant performed the tasks twice, during morning and evening sessions; two versions of each task were created and presented in a random way, and the order of the presented stimuli was randomised as well. The resting state with eyes open was recorded for 10 minutes. One day before the study was conducted, participants were familiarised with the experimental procedure and underwent a training session. Further details of the experimental paradigm can be found in48.

### Data preprocessing

BOLD signals were collected at repetition time TR $$= 1.8\text { s}$$ with a 3T Siemens Skyra MR System and underwent the standard preprocessing described in49. The signals were then averaged within 116 brain regions (Regions of Interest, ROIs) from the Automated Anatomical Labeling (AAL1) brain atlas54. For a given experimental session, one can form the data in a matrix with 116 columns corresponding to the ROIs and the number of rows corresponding to the length of the time series. Examples of the time series are shown in Fig. 8 (left panel).

To highlight possible differences between the encoding and retrieval phases, we additionally preprocessed the time series as follows. For each experimental session, from the ROI time series matrix, all data segments related to the encoding (retrieval) phase were extracted and concatenated, in order of appearance, into a new matrix. Each segment started at the stimulus onset and was $$\sim 10\text { s}$$ long (6–7 TRs), which approximately corresponds to the time width of the peak of the haemodynamic response function. In the case of encoding, the segments included the presentation of the distractor, and in the case of retrieval, they included the intertrial interval. For a given session, concatenation of segments from all 60 stimuli resulted in a 400-TR-long time series. To evaluate the statistical significance of the results, we also analysed the resting-state time series with eyes open, preprocessed similarly to the task data. In this case, the time series of 400 time points was cut into nonoverlapping segments of $$\sim 10\text { s}$$ (the same length as in the task), which were then randomly shuffled, see Fig. 8 (right panel). Thus, we considered the time series corresponding to the encoding, retrieval, and resting-state phases separately.
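The segmentation described above can be sketched as follows (hypothetical helper names; `onsets` denotes stimulus-onset indices and `seg_len` the segment length, both in TR units, and are assumptions for illustration):

```python
import numpy as np

def concatenate_phase_segments(ts, onsets, seg_len):
    """Extract seg_len-TR segments starting at each stimulus onset and
    concatenate them in order of appearance."""
    return np.concatenate([ts[o:o + seg_len] for o in onsets], axis=0)

def shuffled_rest_segments(ts, seg_len, rng):
    """Cut a resting-state series into nonoverlapping seg_len-TR segments
    and concatenate them in random order (resting-state surrogate)."""
    n_seg = len(ts) // seg_len
    order = rng.permutation(n_seg)
    return np.concatenate([ts[i * seg_len:(i + 1) * seg_len] for i in order])
```

Applying the first helper per phase and session, and the second to the resting-state run, yields time series of matching length and segment structure.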

## Methods

Analysis of the time series’ correlation structure is one of the primary methods for identifying patterns or dependencies in the data and inferring the organisation of the system responsible for generating the signal. We decided to apply two techniques of autocorrelation time series analysis, namely, detrended fluctuation analysis (DFA)55 and power spectral density (PSD)56. These two complementary methods can quantitatively describe the temporal multiscale organisation of the data and, hence, its fractal properties. Moreover, to qualitatively present the hierarchical structure of the time series, we applied wavelet-based multiresolution analysis. To characterise cross-correlations between the time series, matrices of the q-dependent detrended correlation coefficients (DCC) were analysed57. The diagram representing the consecutive steps of the data preprocessing and analysis is depicted in Fig. 1.

### Fractal analysis of the time series

#### Detrended fluctuation analysis

One of the most reliable and commonly used methods to detect the self-affinity of a signal is detrended fluctuation analysis, proposed by Peng et al.55 to analyse DNA sequences. The algorithm consists of the following steps. For a time series $$x_i$$ of length N ($$i=1,2,\ldots ,N$$), first, calculate the profile

\begin{aligned} X\left( j\right) =\sum _{i=1}^j[x_{i}-\langle x \rangle ], \end{aligned}
(1)

where $$\langle . \rangle$$ denotes the average of $$x_i$$ over the entire time series. Next, the profile is divided into $$2N_s$$ nonoverlapping segments of length s (where $$N_s=\lfloor N/s \rfloor$$ and $$\lfloor . \rfloor$$ denotes the floor function), numbered by an index $$\nu$$, starting from the beginning and the end of the time series. The local trend is estimated in each part $$\nu$$ by fitting a polynomial $$P^{(m)}_{\nu }$$ of the order m to the data and then is subtracted. The polynomial order m is a parameter of the method. According to the study14 a reasonable choice is $$m=2$$, which we used in our analysis. In each segment, the detrended variance is calculated

\begin{aligned} F^2(\nu ,s)=\frac{1}{s}\sum _{k=1}^{s}\left( X\left( (\nu -1)s+k\right) -P^{(m)}_{\nu }(k)\right) ^2. \end{aligned}
(2)

Finally, the scale-dependent fluctuation function is estimated

\begin{aligned} F(s)=\sqrt{\frac{1}{2N_s} \sum _{\nu =1}^{2N_s}F^{2}(\nu ,s)}. \end{aligned}
(3)

For a fractal time series, one observes the power-law behaviour

\begin{aligned} F(s)\sim s^H, \end{aligned}
(4)

where H denotes the Hurst exponent. If the time series is uncorrelated or only short-range correlated, the Hurst exponent assumes a value of $$\sim$$ 0.5. In the case of long-range fractal-correlated time series, a deviation of H from 0.5 is observed, respectively,

• $$0.5<H<1$$ for positively autocorrelated data (persistent signal),

• $$0<H<0.5$$ for negatively autocorrelated data (antipersistent signal).
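As an illustration, the DFA procedure of Eqs. (1)–(4) can be sketched in Python. This is a minimal, unoptimised implementation assuming NumPy; the function names are ours, not from the original analysis pipeline.

```python
import numpy as np

def dfa(x, scales, m=2):
    """Detrended fluctuation analysis: F(s) of Eqs. (1)-(3) for each scale s."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.cumsum(x - x.mean())  # profile, Eq. (1)
    F = []
    for s in scales:
        Ns = N // s
        # 2*N_s segments, counted from the beginning and from the end
        segments = [X[v * s:(v + 1) * s] for v in range(Ns)]
        segments += [X[N - (v + 1) * s:N - v * s] for v in range(Ns)]
        k = np.arange(s)
        f2 = []
        for seg in segments:
            trend = np.polyval(np.polyfit(k, seg, m), k)  # local trend P^(m)
            f2.append(np.mean((seg - trend) ** 2))        # Eq. (2)
        F.append(np.sqrt(np.mean(f2)))                    # Eq. (3)
    return np.array(F)

def hurst(x, scales, m=2):
    """Hurst exponent H: slope of log F(s) versus log s, Eq. (4)."""
    return np.polyfit(np.log(scales), np.log(dfa(x, scales, m)), 1)[0]
```

For white noise this estimate should fall close to $$H=0.5$$, while its cumulative sum (a random walk) should yield a value around 1.5, reflecting the nonstationary, strongly persistent character of the integrated signal.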

The results for the GLO and SEM tasks (visuospatial and verbal, respectively) and resting-state are presented in Fig. 9. Since the present study focuses on short-term dependencies in the signals, the plots are restricted to a range of small scales. The presented characteristics clearly follow power laws within the considered scale range s. Furthermore, the scaling properties depend on the ROI, as is evident from the different slopes of $$\langle F(s)\rangle$$ and $$\langle S(f) \rangle$$ in Fig. 9 (left panels). These quantities were recomputed after dividing the tasks into the experimental phases (encoding and retrieval) and averaging over all participants, as shown in Fig. 9 (right panels). Even though the scaling range after this additional processing is shorter than for the full-length data, the scaling properties can still be identified.

#### Power spectrum and wavelet transform analysis

One of the main mathematical tools used to identify quasiperiodicity in time series is power spectrum analysis. It characterises the linear correlations of a time series via the distribution of its variance (energy) over specific frequencies. The spectrum of the time series $$x_j$$ of length N ($$j=0,1,\ldots ,N-1$$) is estimated via the squared modulus of the Fourier transform:

\begin{aligned} S(f)=\left| \sum _{j=0}^{N-1}x_je^{-2\pi i f j/N}\right| ^2, \end{aligned}
(5)

where $$f=0,1,\ldots ,N-1$$. In the case of fractal signals, the scaling relation is observed

\begin{aligned} S(f)\thicksim 1/f^\beta , \end{aligned}
(6)

where $$\beta$$ is a spectral scaling exponent related to the Hurst exponent via the relation

\begin{aligned} \beta = 2H-1. \end{aligned}
(7)

Thus, DFA and PSD can be considered complementary methods of quantifying linear long-range correlations in fractal time series.
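As a sketch, the spectral exponent $$\beta$$ of Eq. (6) can be estimated from the periodogram slope in log–log coordinates. This assumes NumPy; the function name and the default fitting range are our illustrative choices.

```python
import numpy as np

def spectral_exponent(x, fmin=1, fmax=None):
    """Estimate beta from S(f) ~ 1/f^beta, Eqs. (5)-(6), via the periodogram."""
    x = np.asarray(x, dtype=float)
    S = np.abs(np.fft.rfft(x - x.mean())) ** 2   # periodogram, Eq. (5)
    f = np.arange(len(S))                        # integer frequency indices
    if fmax is None:
        fmax = len(x) // 4                       # illustrative upper cut-off
    sel = (f >= fmin) & (f <= fmax)
    # the slope of log S versus log f equals -beta
    return -np.polyfit(np.log(f[sel]), np.log(S[sel]), 1)[0]
```

The estimate can then be cross-checked against the DFA result through Eq. (7): white noise with $$H\approx 0.5$$ should give $$\beta \approx 0$$.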

To emphasise differences in the hierarchical organisation of the analysed time series, we applied wavelet analysis58. With it, the time series is decomposed into different periodic components while retaining the time of their occurrence. Thus, the wavelet transform can be considered a mathematical microscope that provides insight into the intrinsic organisation of the time series. The wavelet transform for discrete signals is defined as

\begin{aligned} W_{\psi }\left[ s,k\right] =\frac{1}{\sqrt{s}}\sum _{j=1}^{N}x_j\psi \left( \frac{j-k}{s}\right) , \end{aligned}
(8)

where $$\psi$$ is the wavelet well localised in the time and frequency domain, s denotes the scale parameter, and k is the wavelet’s position in time. In this work, we used the Mexican hat (the second derivative of the Gaussian) as a wavelet.
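A direct, computationally naive transcription of Eq. (8) with the Mexican hat wavelet might look as follows (assuming NumPy; in practice an FFT-based convolution would be used for efficiency):

```python
import numpy as np

def mexican_hat(t):
    """Mexican hat wavelet, proportional to the negated second derivative of a Gaussian."""
    return (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

def cwt(x, scales):
    """Direct evaluation of Eq. (8): W[s, k] for every scale s and position k."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    j = np.arange(1, N + 1)
    W = np.empty((len(scales), N))
    for a, s in enumerate(scales):
        for k in range(1, N + 1):
            W[a, k - 1] = np.sum(x * mexican_hat((j - k) / s)) / np.sqrt(s)
    return W
```

Because the Mexican hat has zero mean, the transform of a constant signal is (up to edge effects) zero at interior positions, which makes the decomposition sensitive only to fluctuations around the local trend.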

### Cross-correlation analysis

To describe the nonlinear cross-correlations, in addition to the standard Pearson correlation coefficient59, we used q-dependent DCC, $$\rho (q,s)$$57. It allows one to quantify the cross-correlations between two time series $$x_i$$ and $$y_i$$ both at particular time scales s and with respect to the amplitude of fluctuations filtered by q. The procedure of estimating $$\rho (q,s)$$ extends the formulas (2) and (3) to the case of the covariance function. Given the profiles $$X\left( j\right)$$ and $$Y\left( j\right)$$ of the time series [cf. (1)] the covariance is estimated as

\begin{aligned} {\begin{matrix} F_{xy}^{2}(\nu ,s)=\frac{1}{s}\sum _{k=1}^{s} \left( X((\nu -1)s+k)-P^{(m)}_{X,\nu }(k)\right) \times \left( Y((\nu -1)s+k)-P^{(m)}_{Y,\nu }(k)\right) , \end{matrix}} \end{aligned}
(9)

for segments $$\nu =1,\ldots ,N_s$$ and

\begin{aligned} {\begin{matrix} F_{xy}^{2}(\nu ,s)=\frac{1}{s}\sum _{k=1}^{s} \left( X(N-(\nu -N_s)s+k)-P^{(m)}_{X,\nu }(k)\right) \times \left( Y(N-(\nu -N_s)s+k)-P^{(m)}_{Y,\nu }(k)\right) , \end{matrix}} \end{aligned}
(10)

for $$\nu =N_s+1,\ldots ,2N_s$$, where $$P^{(m)}_{X,\nu }$$ and $$P^{(m)}_{Y,\nu }$$ denote polynomials fitted to the profiles X(j) and Y(j), respectively. Then, the q-th order covariance function13 is

\begin{aligned} F_{xy}^{q}(s)=\frac{1}{2N_s}\sum _{\nu =1}^{2N_s} \mathrm{sign}\left( F_{xy}^{2}(\nu ,s)\right) \left| F_{xy}^{2}(\nu ,s)\right| ^{q/2}, \end{aligned}
(11)

where $$\mathrm{sign}(F_{xy}^{2}(\nu ,s))$$ denotes the sign of $$F_{xy}^{2}(\nu ,s)$$. The parameter q plays the role of a signal amplitude filter. Finally, the q-dependent DCC, $$\rho (q,s)$$, is estimated as follows

\begin{aligned} \rho (q,s) = {F_{xy}^q(s) \over \sqrt{ F_{xx}^q(s) F_{yy}^q(s)}}, \end{aligned}
(12)

where $$F_{xx}^{q}$$, $$F_{yy}^{q}$$ are calculated by means of (9) and (10) when only X(j) or Y(j) is separately considered. Furthermore, in our calculations we used $$q=1$$, which ensures that the coefficient $$\rho (q,s)$$ assumes values within the range $$[-1,1]$$, making its interpretation analogous to the Pearson correlation coefficient57.
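The construction of $$\rho (q,s)$$ in Eqs. (9)–(12) can be sketched as follows (NumPy assumed; the helper names are ours):

```python
import numpy as np

def _fq(X, Y, s, m=2, q=1.0):
    """q-th order detrended covariance F_xy^q(s) of two profiles, Eqs. (9)-(11)."""
    N = len(X)
    Ns = N // s
    k = np.arange(s)
    # 2*N_s segments: from the beginning, Eq. (9), and from the end, Eq. (10)
    windows = [slice(v * s, (v + 1) * s) for v in range(Ns)]
    windows += [slice(N - (v + 1) * s, N - v * s) for v in range(Ns)]
    f2 = []
    for w in windows:
        xd = X[w] - np.polyval(np.polyfit(k, X[w], m), k)
        yd = Y[w] - np.polyval(np.polyfit(k, Y[w], m), k)
        f2.append(np.mean(xd * yd))
    f2 = np.array(f2)
    return np.mean(np.sign(f2) * np.abs(f2) ** (q / 2.0))  # Eq. (11)

def rho_q(x, y, s, m=2, q=1.0):
    """q-dependent detrended cross-correlation coefficient, Eq. (12)."""
    X = np.cumsum(np.asarray(x, float) - np.mean(x))  # profiles, Eq. (1)
    Y = np.cumsum(np.asarray(y, float) - np.mean(y))
    return _fq(X, Y, s, m, q) / np.sqrt(_fq(X, X, s, m, q) * _fq(Y, Y, s, m, q))
```

By construction, $$\rho (q,s)$$ of a series with itself equals 1, while for two independent series it fluctuates around 0, analogously to the Pearson coefficient.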

### Eigenvector alignment

Having computed the correlation matrix and its eigendecomposition for each configuration of the independent variables (subject × TOD × task × phase), one would like to statistically compare the distributions of eigenvalues between these configurations. Naïvely, the groups of all the largest eigenvalues could be compared, then the groups of all the second-largest eigenvalues, and so on. However, the order of the eigenvalues corresponding to a given correlation pattern may be contingent on interindividual variability. If the effect of, say, TOD were a shift in the location of a given eigenvalue’s distribution, then we might as well expect this shift to change the order of neighbouring eigenvalues. In such a case, the naïve approach would compare the eigenvalue distributions of different correlation patterns.

For these reasons, the eigenvalues were first grouped into sets corresponding to the same correlation pattern. The grouping was obtained by agglomerative hierarchical clustering of the eigenvectors. We used Ward’s method with the squared Euclidean distance d(u,v) modified to allow reflection, $$\tilde{d}(u,v)=\min (d(u,v),d(-u,v))$$, since the opposites of eigenvectors remain eigenvectors. The clustering was performed on the eigenvectors from all possible (subject × TOD × task × phase) configurations associated with large eigenvalues $$\lambda > 2$$. This arbitrary cut-off lies above the standard upper fence for outliers ($$Q_3+1.5\,\mathrm{IQR}=1.5$$ on average) and results in $$13.5\pm 1.1$$ valid eigenvalues per configuration. Consequently, the clustering was set to produce 15 clusters. Additionally, when more than one eigenvector belonging to a given subject in a given cluster had the same (TOD × task × phase) configuration, only the one associated with the largest eigenvalue was kept, and the others were excluded from further analysis ($$24\%$$ in total). The clusters were then ordered according to the average of the member eigenvalues.
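This clustering step can be sketched as follows, assuming NumPy and SciPy; the function name is ours, and note that for unit-norm eigenvectors the reflection-modified squared Euclidean distance reduces to $$2-2|u\cdot v|$$:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_eigenvectors(V, n_clusters=15):
    """Cluster unit-norm eigenvectors (rows of V) into correlation patterns.

    Uses Ward's method with the reflection-invariant squared Euclidean
    distance d~(u, v) = min(|u - v|^2, |u + v|^2) = 2 - 2|u . v|,
    since the opposite of an eigenvector is also an eigenvector.
    """
    D = np.clip(2.0 - 2.0 * np.abs(V @ V.T), 0.0, None)
    np.fill_diagonal(D, 0.0)
    Z = linkage(squareform(D, checks=False), method='ward')
    return fcluster(Z, t=n_clusters, criterion='maxclust')
```

With this distance, eigenvectors differing only in sign end up in the same cluster, as intended.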

### Statistical tests

To assess the statistical reliability of the results, we performed an analysis of randomly shuffled data. This procedure destroys all temporal correlations in a time series but preserves its statistical distribution, so the surrogates are expected to exhibit a Hurst exponent of $$H = 0.5$$. We also tested a mixed linear model for the global effects of TOD, task, task phase, and their interactions, with ROIs as random effects. We performed a stepwise model selection by dropping single terms based on $$\chi ^2$$ tests.
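As a toy illustration of the shuffling procedure (assuming NumPy; the AR(1) process here is merely a stand-in for a temporally correlated signal, not the actual BOLD data):

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation coefficient."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.sum(x[:-1] * x[1:]) / np.sum(x * x)

rng = np.random.default_rng(0)
# A persistent AR(1) process as a stand-in for a correlated time series
n, phi = 4096, 0.8
x = np.empty(n)
x[0] = rng.standard_normal()
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.standard_normal()

# Shuffling destroys the temporal order but keeps the value distribution
shuffled = rng.permutation(x)
```

The shuffled surrogate retains exactly the same set of values as the original series, while its temporal correlations vanish.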

We report tests comparing the distributions of correlation matrix eigenvalues in natural order and of clustered eigenvalues. Individual data points corresponded to single subjects in a given condition. We used linear models with the eigenvalues as the dependent variable and the independent variables TOD (morning and evening), task (GLO, LOC, PHO, SEM, and resting-state), and task phase (encoding and retrieval; for resting-state we used three independent random samples), together with all pairwise interactions. Next, we performed a stepwise model selection by dropping single terms based on F-tests. To assess the differences between levels within the remaining factors, given the exploratory nature of the study, we used the more conservative Scheffé method60 for post-hoc comparisons. These results are provided below the legends in Fig. 7, right panels. We use the following significance codes: p < 0.001 ‘***’, p < 0.01 ‘**’, p < 0.05 ‘*’. The R code and the data needed to reproduce all statistical tests’ results are provided in the Supplementary Information S2–S6.