Time-varying quantum channel models for superconducting qubits

The decoherence effects experienced by the qubits of a quantum processor are generally characterized using the amplitude damping time (T1) and the dephasing time (T2). Quantum channel models existing at the time of writing assume that these parameters are fixed and time invariant. However, recent experimental studies have shown that they exhibit a time-varying (TV) behavior. These time-dependent fluctuations of T1 and T2, which are even more pronounced for superconducting qubits, imply that conventional static quantum channel models do not capture the noise dynamics experienced by realistic qubits with sufficient precision. In this article, we study how the fluctuations of T1 and T2 can be included in quantum channel models. We propose the idea of time-varying quantum channel (TVQC) models, and we show how they provide a more realistic portrayal of decoherence effects than static models in some instances. We also quantify the divergence between TVQCs and their static counterparts by means of a metric known as the diamond norm. In many circumstances this divergence is significant, which indicates that the time-dependent nature of decoherence must be considered in order to construct models that capture the real behavior of quantum devices.


INTRODUCTION
Quantum error correction (QEC) is considered to be an essential building block in the endeavor of the scientific community to construct fully operational devices capable of conveying the advantages that quantum technology heralds. The value of these error correction strategies, commonly known as quantum error correction codes (QECCs), lies in the fact that they protect quantum information from decoherence, a phenomenon to which quantum information is so sensitive that quantum computation is essentially unfeasible in the absence of error correction schemes. In consequence, the quantum information community has gone to extraordinary lengths to construct QECCs that are capable of efficiently reversing the deleterious effects that quantum information experiences due to the interaction between qubits and their surrounding environment. This has led to the design of several promising families of QECCs, such as quantum Reed-Muller codes 1 , quantum low-density parity-check (QLDPC) codes 2 , quantum low-density generator matrix codes 3-5 , quantum convolutional codes 6 , quantum turbo codes (QTCs) [7][8][9][10][11] , and quantum topological codes 12,13 . It is important to point out that the aforementioned QECCs are utilized in the context of conventional qubit-based quantum computation, in which the physical elements are realized by discrete two-level systems. Other promising routes toward universal quantum computation and error correction exist, such as bosonic codes 14,15 . In this paper, we focus on error correction based on two-level physical constructions, and all the discussions refer to this specific approach to QECC.
Quantum error models that describe the decoherence processes that corrupt quantum information become a necessity when seeking to construct any of the aforementioned error correction methods. The amplitude damping channel, N_AD, and the combined amplitude and phase damping channel, N_APD, are a pair of widely used quantum channels that provide a mathematical abstraction of the decoherence phenomenon. However, such channels cannot be efficiently simulated on a classical computer for qubit counts that exceed a small limit. For this reason, a quantum information theory technique known as twirling has been used to approximate N_AD and N_APD by the classically tractable family of Pauli channels, N_P 16 . The dynamics of the N_APD channel depend on the qubit relaxation time, T_1, and the dephasing time, T_2, while the N_AD channel depends solely on T_1. These dependencies are also displayed by the Pauli channel families obtained by twirling the original channels. All these models consider T_1 and T_2 to be fixed parameters (i.e., they do not fluctuate over time). This implies that the noise dynamics experienced by the qubits in a quantum device are identical for each quantum information processing task, independently of when the task is performed.
The assumption that T_1 and T_2 are time invariant has been disproven in recent experimental studies on quantum processors [17][18][19][20][21] . The data presented in these studies show that these parameters experience time variations of up to 50% of their mean value, which strongly suggests that the dynamics of the decoherence effects change drastically as a function of time. These fluctuations imply that qubit-based QECCs implemented in superconducting circuits will not perform, on average, as predicted when the quantum channels are considered to be static through all the error correction rounds. A possible way to deal with these fluctuations when designing QECCs is to assume a worst-case parameter scenario, which ensures that the QECCs operate reliably for any realization. However, the resource consumption of QECCs designed in this manner would be far from optimal, since the lower rate or longer blocklength needed to correct the worst-case scenario requires a higher number of physical qubits. Therefore, it is relevant to include the time variation of the decoherence parameters in the quantum channel model, and to study the implications of such variations on QEC.
In this paper, we amalgamate the findings of the aforementioned studies with the existing models for quantum noise, proposing time-varying quantum channels (TVQCs) for superconducting qubits, N(ρ, ω, t). The proposed model is characterized by the fact that the Kraus operators describing its dynamics are random processes whose behavior is governed by the haphazard nature of T_1 and T_2. We use the diamond norm 22,23 distance between quantum channels, ||N_1 − N_2||_⋄, to show that neglecting the fluctuations of T_1 and T_2 may result in an unrealistic model for quantum noise. By analyzing the statistical properties of the decoherence parameters, we assess when the TV noise channel model substantially deviates from the static channel assumption. Finally, we provide a qualitative analysis of the implications of this decoherence model for QECCs by simulating Kitaev toric codes and QTCs.

Time-varying quantum channels
We define a TVQC, N(ρ, ω, t), as

N(ρ, ω, t) = Σ_k E_k(ω, t) ρ E_k(ω, t)†,

where the E_k(ω, t) linear operators are the so-called Kraus operators of the operator-sum representation of a quantum channel. Note that the matrices {E_k(ω, t)} are continuous-time random processes that determine the time variations of the TVQC. Consequently, the channel dynamics will experience temporal fluctuations determined by the random processes {E_k(ω, t)}. Decoherence arises from a wide range of physical processes involved in the interaction of the qubits with their environment. Nonetheless, a fairly complete mathematical model of these harmful noise effects can be obtained by combining the time-varying amplitude damping channel (TVAD) and the time-varying phase damping or dephasing channel (TVPD) 16 . The processes {T_1(ω, t)} and {T_2(ω, t)} are modeled as wide-sense stationary random processes with means μ_T1 and μ_T2, respectively, and with an experimentally measured power spectral density (PSD, the Fourier transform of the covariance function) K_j(τ) = E[(T_j(ω, t + τ) − μ_Tj)(T_j(ω, t) − μ_Tj)], j = 1, 2, where E denotes the expected value.
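To make the operator-sum action concrete, the following sketch builds the Kraus operators of one realization of the TVAD channel, assuming the standard amplitude damping parametrization γ(ω, t) = 1 − e^(−t/T_1(ω,t)); the numerical values are hypothetical:

```python
import numpy as np

def tvad_kraus(t, T1):
    """Kraus operators {E_k} of one TVAD channel realization, assuming the
    standard amplitude damping form with gamma = 1 - exp(-t / T1)."""
    gamma = 1.0 - np.exp(-t / T1)
    E0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    E1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    return [E0, E1]

def apply_channel(kraus, rho):
    """Operator-sum representation: rho -> sum_k E_k rho E_k^dagger."""
    return sum(E @ rho @ E.conj().T for E in kraus)

# One hypothetical realization: T1 = 80 us, algorithm time t = 10 us.
kraus = tvad_kraus(t=10e-6, T1=80e-6)
# Completeness of the Kraus set: sum_k E_k^dagger E_k = I.
assert np.allclose(sum(E.conj().T @ E for E in kraus), np.eye(2))
```

Each realization of T_1(ω) yields a different set {E_k(ω, t)}, which is precisely the sense in which the Kraus operators of a TVQC are random processes.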

Twirled approximations of time-varying quantum channels
The twirled approximations provide us with approximated quantum channel models that incorporate the time-dependent fluctuations of T_1 and T_2, and at the same time can be efficiently implemented in classical computers 16 . The time-varying (TV) Pauli twirl approximation, N_PTA(ρ, ω, t), is the Pauli channel 16,24 obtained by twirling a TVQC by the n-fold Pauli group P_n.
Twirling the TVAD channel leads to the Pauli channel (TVADPTA) described by the probabilities that each of the Pauli matrices has of taking place. Note that in this context these probabilities are realizations of the random processes 16,24 :

p_I(ω, t) = 1 − p_x(ω, t) − p_y(ω, t) − p_z(ω, t),
p_x(ω, t) = p_y(ω, t) = (1/4)(1 − e^(−t/T_1(ω,t))),
p_z(ω, t) = (1/4)(1 + e^(−t/T_1(ω,t)) − 2e^(−t/(2T_1(ω,t)))).

For the TVAPD channel, the TVAPDPTA approximation is described by the realizations of the following stochastic processes for each of the Pauli matrices 16,24 :

p_I(ω, t) = 1 − p_x(ω, t) − p_y(ω, t) − p_z(ω, t),
p_x(ω, t) = p_y(ω, t) = (1/4)(1 − e^(−t/T_1(ω,t))),
p_z(ω, t) = (1/4)(1 + e^(−t/T_1(ω,t)) − 2e^(−t/T_2(ω,t))),

where, once again, T_1(ω, t) and T_2(ω, t) are stochastic processes. Another twirled channel of interest is the TV Clifford twirl approximation, N_CTA(ρ, ω, t) 16 , which for the TVAD channel is a depolarizing channel with depolarizing parameter 16

p(ω, t) = (1/4)(3 − e^(−t/T_1(ω,t)) − 2e^(−t/(2T_1(ω,t)))),

where, once more, T_1(ω, t) is a stochastic process.
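The twirled-channel probabilities above are straightforward to evaluate for a given realization of the decoherence times; a minimal sketch (function names and numerical values are illustrative):

```python
import numpy as np

def tvapdpta_probs(t, T1, T2):
    """Pauli-twirl probabilities (p_I, p_x, p_y, p_z) for one realization
    of the TVAPD channel, following the expressions above."""
    px = py = 0.25 * (1.0 - np.exp(-t / T1))
    pz = 0.25 * (1.0 + np.exp(-t / T1) - 2.0 * np.exp(-t / T2))
    return 1.0 - px - py - pz, px, py, pz

def tvadcta_depol_prob(t, T1):
    """Depolarizing parameter of the Clifford-twirled TVAD channel,
    i.e., p_x + p_y + p_z of the TVADPTA with T2 = 2 * T1."""
    return 0.25 * (3.0 - np.exp(-t / T1) - 2.0 * np.exp(-t / (2.0 * T1)))

# Hypothetical realization: t = 5 us, T1 = 80 us, T2 = 60 us.
pi, px, py, pz = tvapdpta_probs(5e-6, 80e-6, 60e-6)
assert abs(pi + px + py + pz - 1.0) < 1e-12  # a valid probability vector
```

Re-evaluating these functions with fresh draws of T_1(ω) and T_2(ω) produces the round-to-round channel realizations used later in the numerical results.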
Modeling of T_1 and T_2
For superconducting qubits, the authors of refs. [17][18][19][20]26 carried out experimental analyses to characterize the time-dependent nature of T_1 and T_2. In particular, refs. 17,18 mainly focus on the fluctuations of T_1, whereas in refs. 19,26 both T_1 and T_2 were studied, with an in-depth analysis of the dephasing. Let us now discuss how the random processes {T_j(ω, t)}, j = 1, 2, can be modeled. In ref. 17 , the authors model {T_1(ω, t) − μ_T1} as the sum of three independent zero-mean stationary Gaussian random processes, two of which have a Lorentzian PSD and the remaining one having a flat PSD of level h_0 (i.e., white noise). That is,

T_1(ω, t) − μ_T1 = Σ_{j=1,2} Lor_j(ω, t) + N(ω, t), (11)

where N(ω, t) denotes the white Gaussian process and Lor_j(ω, t), j = 1, 2, represent the two Gaussian random processes with Lorentzian PSD. Table 1 shows the values obtained for the parameters of the noise processes in Eq. (11) from experimental data measured for the superconducting qubits constructed in ref. 17 (refer to "Methods" for details regarding Lorentzian noise processes and their PSD). Note from this table that the ratio between the PSD level at f = 0 of the Lorentzian processes and the level of the white noise, 4(A_j)²τ_j^0/h_0, is of the order of 10^8. Therefore, the contribution of the white noise is negligible when compared to the contribution of the Lor_j(ω, t) processes. This means that the PSD bandwidth of the process T_1 is approximately given by BW = max_{j=1,2}{1/τ_j^0}. This also implies that its covariance function K_1(Δt) is approximately constant for |Δt| ≪ T_c = 1/BW, with Δt in the order of microseconds and T_c in the order of minutes (as is conventional in the context of fading channels 27 , we refer to T_c as coherence time).
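The order-of-magnitude argument above can be reproduced numerically. The parameter values below are illustrative placeholders (not the measured values of Table 1), chosen only so that the Lorentzian-to-white-noise PSD ratio 4A²τ⁰/h₀ lands near the reported order of 10^8:

```python
import numpy as np

# Illustrative placeholder parameters for the two Lorentzian components of
# Eq. (11): amplitudes A_j, correlation times tau0_j (s), white-noise level h0.
# These are NOT the measured values of Table 1.
A = np.array([2.0e-6, 1.0e-6])    # hypothetical amplitudes
tau0 = np.array([60.0, 300.0])    # hypothetical correlation times (seconds)
h0 = 1.0e-17                      # hypothetical white-noise PSD level

# A Lorentzian PSD has level 4 A_j^2 tau0_j at f = 0; compare it against h0.
ratio = 4.0 * A**2 * tau0 / h0
# Bandwidth of the T1 process, set by the fastest Lorentzian, and the
# resulting coherence time T_c = 1 / BW.
BW = np.max(1.0 / tau0)
Tc = 1.0 / BW

print(ratio)  # both entries >> 1: the white-noise contribution is negligible
print(Tc)     # 60 s, i.e., on the order of minutes
```

With any parameters of these magnitudes, T_c comes out on the order of the shortest Lorentzian correlation time, i.e., minutes, which is the comparison against t_algo used in the next paragraph.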
Taking into account that the processing time of current state-of-the-art quantum algorithms, t_algo, is around a few microseconds, it is reasonable to assume that the realization of the stochastic process T_1 remains constant during the execution of a quantum algorithm. In other words, T_1(ω, t) can be modeled as a random variable T_1(ω) = T_1(ω, t)|_{t=0}, where t = 0 has been chosen without any loss of generality since the random process T_1 is stationary. Each realization of this random variable will remain constant for t ∈ [0, t_algo], t_algo ≪ T_c. Specifically, T_1(ω) can be considered to be a truncated Gaussian random variable (the truncation is necessary because negative T_1 values do not make physical sense) with probability density function T_1(ω) ~ GN_[0,∞)(μ_T1, σ²_T1), where the variance is obtained, based on the parameters of Table 1, by integrating the PSD of Eq. (11) in the frequency band [−BW, BW]. We use GN_[a,b](μ, σ²) to denote a truncated normal random variable with mean μ and variance σ², truncated to the region [a, b].
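Sampling realizations of T_1(ω) from the truncated Gaussian GN_[0,∞)(μ_T1, σ²_T1) can be sketched with scipy; the mean and standard deviation below are hypothetical:

```python
import numpy as np
from scipy.stats import truncnorm

def sample_T1(mu, sigma, size, rng=None):
    """Draw realizations of T1(omega) ~ GN_[0, inf)(mu, sigma^2).
    scipy's truncnorm takes the bounds in standard-deviation units."""
    a = (0.0 - mu) / sigma  # lower truncation at 0
    return truncnorm.rvs(a, np.inf, loc=mu, scale=sigma,
                         size=size, random_state=rng)

# Hypothetical scenario: mu_T1 = 80 us with a 25% coefficient of variation.
samples = sample_T1(mu=80.0, sigma=20.0, size=100_000,
                    rng=np.random.default_rng(0))
assert (samples > 0.0).all()  # the truncation removes unphysical values
```

Each sample plays the role of the constant T_1 seen during one run of duration t_algo ≪ T_c.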
Regarding {T_2(ω, t)}, the authors of refs. 17,19,20 made additional measurements to characterize the fluctuations of this parameter. In ref. 17 , the qubit dephasing was studied by observing the fluctuations of the qubit frequency (f_01). The slow integrated fluctuations of the qubit frequency give rise to the pure dephasing time 28 , T_ϕ. Qubit dephasing is a combination of relaxation and pure dephasing, quantified by the relation 26

1/T_2 = 1/(2T_1) + 1/T_ϕ. (12)

The qubits from ref. 17 were shown to be T_1-limited (T_2 ≈ 2T_1). This fact implies that the pure dephasing noise will not be noticeable, since from Eq. (12), T_2 ≈ 2T_1 → 1/T_ϕ ≈ 0. Therefore, for T_1-limited superconducting qubits, neglecting the contribution of T_ϕ is a reasonable assumption, and this results in a TVAD channel.
However, most state-of-the-art quantum processors cannot achieve the T_2 ≈ 2T_1 limit. In these cases, the existence of pure dephasing must be taken into account, giving rise to the TVAPD channel. To that end, the authors of refs. 19,26 studied the mechanism of pure dephasing for non-T_1-limited qubits and modeled the fluctuations of the qubit frequency as a random process based on the combination of a Lorentzian stochastic process and a 1/f^1.1 process. It is easy to corroborate that the coherence time of these stochastic processes is on the order of minutes (BW ≈ 1 mHz, set by the Lorentzian process 19 ). Therefore, following the same reasoning used for the relaxation time fluctuations, and taking into account that the pure dephasing time arises from the qubit frequency process, it is reasonable to model T_ϕ as a random variable T_ϕ(ω) = T_ϕ(ω, t)|_{t=0} whose realizations remain constant for t ∈ [0, t_algo], t_algo ≪ T_c. Moreover, T_ϕ(ω) can be considered to be a truncated Gaussian random variable with probability density function T_ϕ(ω) ~ GN_[0,∞)(μ_Tϕ, σ²_Tϕ). Note that by modeling T_ϕ(ω) this way, and T_1 as T_1(ω) ~ GN_[0,∞)(μ_T1, σ²_T1), the dephasing parameter T_2(ω) required for the TVAPD channel is obtained from expression (12).
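For a non-T_1-limited qubit, a realization of T_2(ω) then follows from independent draws of T_1(ω) and T_ϕ(ω) through expression (12); a sketch with hypothetical parameters:

```python
import numpy as np
from scipy.stats import truncnorm

def t2_from(t1, tphi):
    """Expression (12): 1 / T2 = 1 / (2 T1) + 1 / Tphi."""
    return 1.0 / (1.0 / (2.0 * t1) + 1.0 / tphi)

def sample_trunc_gauss(mu, sigma, size, rng):
    """Truncated Gaussian GN_[0, inf)(mu, sigma^2)."""
    return truncnorm.rvs((0.0 - mu) / sigma, np.inf, loc=mu, scale=sigma,
                         size=size, random_state=rng)

# Hypothetical parameters for a non-T1-limited qubit (illustrative only).
rng = np.random.default_rng(1)
T1 = sample_trunc_gauss(80.0, 12.0, 10_000, rng)     # us
Tphi = sample_trunc_gauss(100.0, 25.0, 10_000, rng)  # us
T2 = t2_from(T1, Tphi)
assert (T2 < 2.0 * T1).all()  # T2 always stays below the T1 limit
```

As a sanity check of the model, letting T_ϕ → ∞ in `t2_from` recovers the T_1-limited case T_2 = 2T_1.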
It should be pointed out that although the time-variant channel models derived in this paper are based on the statistical characterization of the parameters T_1 and T_2 from the experimental results in refs. 17,19,26 , they are applicable to any superconducting quantum processor whose decoherence parameters exhibit slow fluctuations. Moreover, the model is also applicable to any quantum-coherent two-level system that presents similar time dependencies, regardless of its physical implementation; that is, whenever the coherence time, T_c, of the underlying stochastic processes that describe its dynamic behavior is larger than the algorithm processing time.

Numerical results
In this section, we assess the difference between the widely employed static channel models and the proposed TVQCs. We make this comparison by using a metric known as the diamond norm distance (see "Methods"), ||N(μ_T1, μ_T2) − N(ρ, ω, t)||_⋄. More concretely, we perform an extensive numerical analysis over the parameter space for the different types of TVQCs discussed in the previous section (the particular results for the superconducting qubits of ref. 17 are presented in Supplementary Note 2).
The parameters required in the computation of the diamond norm distance between the static and TV channels are the means and the standard deviations of the random variables T_1(ω), T_2(ω), and T_ϕ(ω). Note from Eq. (12) that T_2(ω) is a function of the independent random variables T_1(ω) and T_ϕ(ω). Table 2 shows these parameters. The coefficient of variation, c_v = σ/μ, defined as the ratio between the standard deviation and the mean of each of these random variables, is also shown in the table. Note that the other parameters defining the stochastic random processes T_1(t, ω) and T_ϕ(t, ω) are not involved in the computation of the mean diamond norm distance. However, they were used to derive the parameters in Table 2 and the probability distributions of the random variables T_1(ω) and T_ϕ(ω). The computation of the static channels, denoted as N(μ_T1, μ_T2), is done by substituting the parameters T_1 and T_2 in the Kraus operators of the channels (refer to section "Time-varying quantum channels") 16 by the means E{T_1} = μ_T1 and E{T_2} = μ_T2 of the random variables T_1(ω) and T_2(ω). On the other hand, the realizations of the TV quantum channels are obtained by replacing the parameters T_1 and T_2 in the Kraus operators by the realizations of the random variables T_1(ω) and T_2(ω), respectively.

Table 1. Parameters to model T_1 fluctuations in the superconducting qubits of ref. 17 (refer to "Methods" for discussion of the parameters h_0, A, and τ_0 characterizing Lorentzian random processes). μ_T1 refers to the mean relaxation time and σ_T1 to its standard deviation. The nomenclature for each case is the same as in ref. 17 and is interpreted as QX_CY, where X denotes the superconducting qubit and Y the cooldown of the quantum chip (in ref. 17 several cooldowns were performed and T_1 data was measured in each of them).
In the previous sections, we have analyzed how the decoherence parameters of superconducting qubits fluctuate through time, and proposed TV channel models that describe these varying conditions. We concluded that it is reasonable to assume that these parameters do not fluctuate over the runtime of an encoding-decoding round, but that they do change from round to round if the rounds are sufficiently separated in time. Note that running quantum algorithms requires a large number of QECC rounds. For example, the quantum algorithm in ref. 29 requires 25 billion surface code cycles in an 8 h runtime. Let L be the number of rounds produced during the quantum processor operation, and consider round k, k = 1, …, L. If T_1^(k)(ω) and T_2^(k)(ω) denote the relaxation and dephasing times associated with this round, then the sequences of random variables {T_j^(k)(ω)}_{k=1}^L, j = 1, 2, would be independent and identically distributed, as explained in section "Modeling of T_1 and T_2". Note that the duration of a round is what we have previously called the algorithm time, t_algo, and its value is upper-bounded by min{μ_T1, μ_T2}, since for longer times the superconducting qubit will very likely be in its equilibrium state and, therefore, useless as a resource. Later in this section, we will illustrate that QECCs do in fact have cycle times lower than min{μ_T1, μ_T2} by obtaining the runtime of a Shor code round in the IBM_Q_16_Melbourne processor. To compute the TV channel operator at round k, one needs realizations t_1^(k)(ω) and t_2^(k)(ω) of the random variables T_1^(k)(ω) and T_2^(k)(ω); the static channel corresponds to setting T_1(ω) and T_2(ω) equal to their mean values μ_T1 and μ_T2, and is therefore constant for all rounds.
We are interested in computing the average value of the diamond norm distance over all rounds. That is,

(1/L) Σ_{k=1}^L ||N(μ_T1, μ_T2, t = t_algo) − N(ρ, T_1^(k)(ω), T_2^(k)(ω), t = t_algo)||_⋄. (13)

Note that as L goes to infinity, Eq. (13) converges to the expected value E{||N(μ_T1, μ_T2, t = t_algo) − N(ρ, T_1^(k)(ω), T_2^(k)(ω), t = t_algo)||_⋄}. Figure 1 shows the average diamond norm distance versus algorithm time, t, for the different superconducting qubit scenarios of Table 2. The range of values of t is set to t ∈ [0, min{μ_T1, μ_T2}] and the number of rounds to L = 20,000. Note that the algorithm time has been normalized with respect to min{μ_T1, μ_T2}. The reason behind this normalization is to decouple the simulations from the fact that qubits with longer decoherence parameters take longer to reach significant mean diamond norms. In this way, it is easier to compare in a single figure the effect of time variation for different qubit characteristics.
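For the TVADCTA channel, this round average can be estimated by Monte Carlo using the closed-form diamond norm for depolarizing channels, ||N_D(p_1) − N_D(p_2)||_⋄ = 2|p_1 − p_2| (invoked again in the Discussion). The depolarizing-parameter expression and the numerical values below are a sketch under the assumptions stated earlier:

```python
import numpy as np
from scipy.stats import truncnorm

def adcta_p(t, T1):
    """Depolarizing parameter of the Clifford-twirled AD channel for a
    given T1, assuming gamma = 1 - exp(-t / T1)."""
    return 0.25 * (3.0 - np.exp(-t / T1) - 2.0 * np.exp(-t / (2.0 * T1)))

def mean_diamond_distance(t_algo, mu_T1, sigma_T1, L=20_000, rng=None):
    """Monte Carlo estimate of the round-averaged diamond norm distance,
    Eq. (13), between the static ADCTA channel (T1 fixed to its mean) and
    L independent TVADCTA realizations, via ||.||_<> = 2 |p1 - p2|."""
    a = (0.0 - mu_T1) / sigma_T1
    T1 = truncnorm.rvs(a, np.inf, loc=mu_T1, scale=sigma_T1,
                       size=L, random_state=rng)
    p_static = adcta_p(t_algo, mu_T1)
    return np.mean(2.0 * np.abs(adcta_p(t_algo, T1) - p_static))

# Hypothetical T1-limited scenario: mu_T1 = 80 us, cv(T1) = 25%, t = 0.1 mu_T1.
d = mean_diamond_distance(t_algo=8.0, mu_T1=80.0, sigma_T1=20.0,
                          rng=np.random.default_rng(0))
```

Sweeping `t_algo` over [0, min{μ_T1, μ_T2}] reproduces the qualitative shape of the curves discussed next: the estimate grows with the normalized algorithm time.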
Observe that for all the scenarios considered, the mean diamond norm distance increases with the normalized algorithm time. An interesting conclusion that can be drawn from Fig. 1 is that the impact that time fluctuations of the decoherence parameters have on the behavior of the mean diamond norm distance can be singled out by the coefficients of variation, c_v(T_1) and c_v(T_ϕ): the higher the coefficients of variation, the higher the mean diamond norm distance. Therefore, if the c_v are sufficiently low, a static approach might accurately represent the decoherence processes suffered by the superconducting qubits. Figure 1 also shows that the fluctuations of T_1 are more impactful than the fluctuations of T_ϕ. This can be observed for the APD channels and their twirled approximations, since c_v(T_1) has more impact on the mean diamond norm distance increase than c_v(T_ϕ). This is not surprising considering that T_2 is a function not only of T_ϕ but also of T_1 (refer to Eq. (12)), so the fluctuations of T_1 affect both relaxation and dephasing.
Given that QECCs are used to protect quantum information from decoherence, it is desirable for these quantum information processing tasks to be short, since the longer they take, the noisier the channel becomes. Therefore, it is convenient to have algorithm times t ≪ min{μ_T1, μ_T2}, which also render small mean diamond norm distances, meaning that on average the use of static channel models may be reasonable. However, this last conclusion is misleading since, as shown later in this section, even if the average norm is small, the spread of this metric over different realizations is considerable when the coefficients of variation are large.
To close our discussion on E{||N(μ_T1, μ_T2) − N(ρ, ω, t)||_⋄}, we compare the behavior of this metric for the AD and APD channels to its behavior for the approximated channels. Figure 1 shows that the value of the mean diamond norm distance is smaller for the approximated channels obtained via twirling than for the original channels. This stems from the fact that performing the twirl operation eliminates the off-diagonal elements of the original channels 16 (due to the symmetrization of the channel), and thus their contribution to the diamond norm distance is also eliminated. In any case, Fig. 1 shows that the approximated channels also experience time variation in the decoherence parameters. Furthermore, observe that even though the value of the diamond norm distance is lower for the twirled channels, the shape of the curve that embodies the time dependence of this metric is similar to the curves obtained for the AD and APD channels.
We continue by justifying that the assumption that algorithm times for QECCs are bounded by min{μ_T1, μ_T2} is reasonable for current quantum processors.
To that end, let us consider one of the simplest QECCs known: the Shor code 30 . Note that in refs. [17][18][19]26 , which are used to obtain the parameters that describe the fluctuations of T_1 and T_2, no gates are implemented. Thus, we select a state-of-the-art experimental quantum computer, the IBM_Q_16_Melbourne, to get the gate times 31 . IBM quantum computers use superconducting technology for their qubits, so we expect the behavior of their decoherence parameters to be similar to the ones considered in refs. 17,19 . From the data in Table 3 for the IBM_Q_16_Melbourne quantum computer, implementing the Shor code would require approximately t ≈ 13.52 μs < min{μ_T1, μ_T2} for a full encoding-decoding operation. Therefore, it is reasonable to assume that more advanced error correction methods will have processing time durations similar to that, or even of t ≈ min{μ_T1, μ_T2}, depending on the code construction.

J. Etxezarreta Martinez et al.

Table 2. Decoherence parameter scenarios. We consider the following scenarios for superconducting qubits: T_1-limited 17 , T_1 ≈ T_2 (ref. 31 ), and T_2-dominated (T_2 < T_1) 39 . c_v = σ/μ refers to the coefficient of variation of the random variables. μ_T2 is calculated via expression (12).
The mean diamond norm distance E{||N(μ_T1, μ_T2) − N(ρ, ω, t)||_⋄} has enabled us to analyze the average difference between static and TV quantum channels. However, while studying the average distance corroborates that static channels and TVQCs differ, it fails to portray the variability of ||N(μ_T1, μ_T2) − N(ρ, ω, t)||_⋄. Thus, looking solely at the average of this metric may be shortsighted, in the sense that additional trends and results about the nature of TVQCs and their better modeling accuracy in comparison to static models may be overlooked. Consequently, a boxplot analysis of the realizations of the diamond norm distance is performed with the goal of studying the variability of ||N(μ_T1, μ_T2) − N(ρ, ω, t)||_⋄ (see "Methods" for a description of adjusted boxplots for skewed distributions). For the sake of simplicity of exposition, we present the analysis for the T_1-limited qubit scenario (refer to Supplementary Note 2 for the boxplot analysis of the other scenarios). Figure 2 presents these boxplots for the AD and the ADPTA channels, considering different coefficients of variation of the relaxation time random variable T_1(ω). When c_v(T_1) = 1% (Fig. 2a, d), the size of the boxplots is negligible over the whole range of algorithm times. This means that when the fluctuations of T_1 around its mean are small, the dispersion of ||N(μ_T1, μ_T2) − N(ρ, ω, t)||_⋄ around its mean is nearly zero and, thus, a static channel approach is a valid assumption. Increasing the coefficient of variation to c_v(T_1) = 10% (Fig. 2b, e), the box size and the whisker length begin to widen as the algorithm time increases, meaning that a significantly large number of realizations of ||N(μ_T1, μ_T2) − N(ρ, ω, t)||_⋄ will be higher than the mean given in Fig. 1. Thus, a static noise assumption may start to fail for sufficiently large algorithm times. Finally, for c_v(T_1) = 25% (Fig. 2c, f), one obtains large box sizes and whiskers, showing a large variation of the diamond norm distance around its mean. Therefore, adopting static noise modeling strategies in such scenarios represents an erroneous approach.

Table 3. IBM Q Melbourne gate parameters. IBM_Q_16_Melbourne values for the 1-qubit gate time (t_1Q), 2-qubit gate time (t_2Q), delay time (buffer) between operations (t_Δ), and measurement time (t_meas) 31 .

Fig. 1 Mean diamond norm distance E{||N(μ_T1, μ_T2) − N(ρ, ω, t)||_⋄} between the static and the proposed time-varying quantum channels. The simulation is done using the parameters for the different superconducting qubit scenarios in Table 2. a T_1-limited TVAD. b T_1-limited TVADPTA/TVADCTA. It can be proven that the diamond norm distance between ADPTA and ADCTA channels obtained from AD channels with parameters γ_1, γ_2 is the same, i.e., ||N_ADPTA(γ_1) − N_ADPTA(γ_2)||_⋄ = ||N_ADCTA(γ_1) − N_ADCTA(γ_2)||_⋄ (proof in Supplementary Note 1), and thus b is exactly the same as the mean diamond norm distance E{||N_ADCTA(μ_T1) − N_ADCTA(ρ, ω, t)||_⋄} of the TVADCTA channels.
It is also important to discuss the presence of outliers in these boxplots. These events are significant in number when the coefficient of variation c_v(T_1) is large. Consequently, even for algorithms that require short processing times, where the mean of the diamond norm distance is very small, the use of a static channel model might be misleading if c_v(T_1) is large. For example, for a normalized algorithm time of 0.1, the mean value E{||N(μ_T1) − N(ρ, ω, t)||_⋄} is 0.04 and 0.02 for the AD and ADPTA channels, respectively (refer to Fig. 1a, b), considering c_v(T_1) = 25%. By looking at these values, it would seem reasonable to assume that a static channel approach is valid when simulating QECCs. However, realizations of ||N(μ_T1) − N(ρ, ω, t)||_⋄ ≈ 1.2 may occur both for the AD and ADPTA channels (refer to Fig. 2c, f), and the frequency of such outlier events is significant. Thus, the above assumption fails.
From the above results, it is clear that when the coefficients of variation are large, the diamond norm distance ||N(μ_T1) − N(ρ, ω, t)||_⋄ exhibits a significant amount of spread. This entails that events where the TV channel and the static channel substantially differ will be fairly common, even if the mean value of the diamond norm is not very high. Thus, adopting static noise modeling strategies in such scenarios will be an erroneous approach.
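The dispersion analysis above relies on adjusted boxplots for skewed distributions. A sketch of the adjusted whisker fences, assuming the Hubert-Vandervieren construction based on the medcouple (as implemented in statsmodels):

```python
import numpy as np
from statsmodels.stats.stattools import medcouple

def adjusted_whiskers(x):
    """Whisker fences of the adjusted boxplot (Hubert & Vandervieren);
    observations outside [lo, hi] are flagged as outliers."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    mc = medcouple(x)  # robust skewness measure in [-1, 1]
    if mc >= 0:
        lo = q1 - 1.5 * np.exp(-4.0 * mc) * iqr
        hi = q3 + 1.5 * np.exp(3.0 * mc) * iqr
    else:
        lo = q1 - 1.5 * np.exp(-3.0 * mc) * iqr
        hi = q3 + 1.5 * np.exp(4.0 * mc) * iqr
    return lo, hi

# Right-skewed toy data, mimicking diamond norm realizations with a heavy
# upper tail (illustrative only, not simulation output).
x = np.exp(np.linspace(0.0, 3.0, 201))
lo, hi = adjusted_whiskers(x)
```

For right-skewed data the upper fence is stretched and the lower fence tightened relative to the standard 1.5-IQR rule, so the outliers reported for the diamond norm distance are outliers even after accounting for skewness.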

DISCUSSION
In this paper, we have presented a TV quantum channel model as a way to include the time-dependent fluctuations of the parameters that define the decoherence effects experienced by superconducting qubits. To that end, we have used state-of-the-art experimental results to model the time variations of these parameters and provided an extensive numerical analysis of the proposed TVQC models. The proposed model thus serves to coalesce recent experimental results regarding the fluctuations of the relaxation time, T_1, and the dephasing time, T_2, into a quantum channel model that provides the QEC community with a more realistic portrayal of decoherence effects. The proposed channel models are sufficiently general to include any type of slow variation in the random processes {T_1(ω, t)} and {T_2(ω, t)}. Therefore, even though the present analysis is motivated by and based on superconducting qubits, the proposed model is also applicable to other quantum-coherent two-level systems that present similar time-dependent parameter oscillations, such as trapped ions or quantum dots. In our analysis, the criterion chosen to measure the deviation between time-variant and time-invariant quantum channels has been the diamond norm distance ||N(μ_T1, μ_T2) − N(ρ, ω, t)||_⋄. Based on the statistical characterization of the slow fluctuations of the random processes T_1(t, ω), T_ϕ(t, ω), and T_2(t, ω) relative to the algorithm times, these decoherence parameters can be modeled as random variables taking independent values from round to round. Computing the average diamond norm distance over all rounds of a quantum processing task reveals that quantum channel models that do not consider time fluctuations neglect an integral part of the decoherence effects. The coefficients of variation of the random variables T_1(ω) and T_ϕ(ω) are the adequate parameters to assess how much the TV models differ from the static channel models.
Furthermore, in order to attain a deeper understanding of the dispersion of the realizations of the random variable ||N(μ_T1, μ_T2) − N(ρ, ω, t)||_⋄, a boxplot analysis of this norm distance has been carried out. The primary conclusion drawn from this boxplot study is that the diamond norm distance exhibits a high degree of variability if the coefficients of variation of the decoherence times are sufficiently high: under this condition, the whisker length from the median and the number of outliers beyond it are considerably high. It should be mentioned that the state-of-the-art superconducting processors in ref. 17 have c_v(T_1) ≈ 20% (refer to Table 1). Therefore, the TV quantum channel models proposed in this article are important to describe the decoherence effects suffered by their qubits.
We have seen that TVQCs deviate significantly from static channel models from the viewpoint of the diamond norm distance. As discussed next, the effect of this norm divergence on error correction is not straightforward. We now provide an example to assess the impact that these fluctuations in the diamond norm distance may have on QECCs when implemented in the superconducting qubits of ref. 17 . Consider an ADCTA channel like the one of scenario QA_C5 (c_v(T_1) ≈ 25%) at t/μ_T1 = 0.1. The static version of this channel has a depolarizing probability of p ≈ 0.05. Knowing that the diamond norm for depolarizing channels simplifies to ||N_D(p_1) − N_D(p_2)||_⋄ = 2|p_1 − p_2|, it can be shown that for a channel instance of the TV Clifford-twirl-approximated AD channel (TVADCTA) with a diamond norm distance between the static and TV channels of ||N_ADCTA(μ_T1) − N_ADCTA(ρ, ω, t)||_⋄ = 0.29, the corresponding depolarizing probability of the TV channel is p ≈ 0.15. Let us now consider the toric codes with d ∈ {7, 9} 12 and the QTC from ref. 11 designed for the depolarizing channel. These error correction codes are capable of operating at word error rates (WER) of WER ≈ {5 × 10⁻³, 10⁻³, 10⁻⁶}, respectively, for p = 0.05 (see "Methods" for the definition of WER and a description of the QECC numerical simulations), while for p = 0.15 they fail completely, resulting in WER ≈ 1. This shows that the selected QECCs will operate at a significantly worse WER than would be expected based on a static channel model.
Although there will also be channel instances for which the particular realization of {T_1(ω, t)} will result in operation within a region where the WER of the QECC is actually smaller than what is assumed for the static channel, on average, the effect of these realizations on the overall performance of the code will be much smaller than that of the events where the real channel action is worse than what is reflected by the static model. This winds up degrading the performance of the QECCs, which is reflected by a flattening of the WER curve of the code, as a result of its waterfall region falling at a lower rate. Given that the existence of channel parameter fluctuations with time has been proven by recent experimental outcomes, it is likely that current error correction schemes will actually yield worse performance than what would be expected based on the results that have been obtained for static quantum channels. It is worth mentioning that the diamond norm distance we have chosen for this example is rather small. In fact, Fig. 2 portrays that values of up to ||N_ADCTA(μ_T1) − N_ADCTA(ρ, ω, t)||_⋄ ≈ 1.4 can occur. Note that events with a high diamond norm will all be due to events where the channel instance is very noisy; i.e., for the example we are considering, the events ||N_ADCTA(μ_T1) − N_ADCTA(ρ, ω, t)||_⋄ ≥ 0.1858 will all belong to this kind of TV channel realization.
Let us now consider the toric codes with d ∈ {3, 5} (ref. 12). These surface codes operate at WER ≈ {8 × 10^−2, 2 × 10^−2} for p = 0.05 and at WER ≈ 1 for p = 0.15. Considering that the error rate at the static operation point is not low enough, and given that the waterfall region is relatively flat, for these codes the WER for a good channel realization will not be very different from that of a bad channel realization. Thus, although these codes will experience time-variation effects, these effects will not be as significant as for the previously mentioned codes.
We prove this intuition by simulating the discussed QECCs for the TVADCTA channel obtained for the configurations QA_C5 (c v (T 1 ) ≈ 25%) and QA_C6 (c v (T 1 ) ≈ 22%), and comparing the results with the corresponding static channel (see "Methods" for details about the numerical simulations). Figure 3 shows the results of these simulations. A quick inspection of this figure reveals the expected outcomes. It can be observed that the low distance toric codes (d ∈ {3, 5}) do not exhibit a significant degradation of their error rate performance (especially the 3 × 3 toric code; the 5 × 5 toric code shows signs of performance degradation, but at very low depolarizing probabilities). However, the higher distance toric codes (d ∈ {7, 9}) and the QTC are significantly affected by the time fluctuations. This is reflected by a reduction in the steepness of the waterfall region (it becomes flattened) in the case of the TV channel, which compromises the excellent performance that QECCs exhibit for static channel models. The impact of the coefficient of variation can also be observed, as the scenario with higher c v is flattened more.
The above discussion suggests that the manner in which QECCs are affected by the proposed TVQC model is a combination of two factors: how much the decoherence parameters fluctuate (which is quantified by the coefficients of variation) and the steepness of the WER curve when the QECCs operate over the static channel. This occurs because if the code presents a rather flat WER curve (as a function of the depolarizing probability) in the static channel, the difference in WER between benign and malign channel realizations of the TV channel is not significant, and thus, on average, the overall WER will be similar to that of the corresponding static channel. Since, everything else being equal, shorter error correction codes present flatter waterfall regions, they will be less susceptible to the mismatch. On the other hand, the contrary effect is observed for error correction codes that exhibit a steep waterfall region in the static channel. In these cases, the difference between a good and a bad channel realization might be either complete decoding success or utter failure, implying that, on average, the overall performance in the TV channel will experience substantial degradation. Thus, the better the code performance in the static channel, the more it will be affected in the TV channel model.
In light of the discussion presented in this paper, we consider that the TVQC paradigm is of paramount importance to study and optimize error correction methods that will be capable of protecting qubit-based quantum computers from the hazardous decoherence phenomenon. As a consequence of the irrefutable experimental evidence that decoherence parameters vary in time, the analysis and optimization of QECCs whose performance experiences degradation in the proposed TV channels is of great importance. It is important to remark that the proposed TVQC model is relevant for protocols involving a large number of error correction cycles (e.g., a quantum memory or a long quantum algorithm, such as the one in ref. 29). If a very short algorithm or QECC is run just once or only a few times, the parameters will not fluctuate and the effects of the proposed channel model will not be noticeable. Herein lies important future work on this topic, such as the study of the theoretical limits for QECCs operating over TVQCs and the construction of optimized error correction codes that are apt to address these dynamic scenarios. Note that optimization will only be required for QECCs that present steep waterfall regions in static channels, and only for some code operation points. However, even if the current state-of-the-art experimentally implemented QECCs will not be significantly affected by the TVQC (as they are generally short codes, such as the 3 × 3 toric code discussed above), the proposed channel model will be important in practice when better QECCs are experimentally implemented, such as the 7 × 7 and 9 × 9 toric codes in the near term, or the more advanced QTCs or QLDPCs in the long term. These results also speak toward the importance of superconducting qubit construction and cooldown; if they are optimized correctly, the fluctuations relative to the mean will be milder.
Traditionally, long mean coherence times are sought so that the qubits are able to handle algorithms with longer lifetimes, but this should be done jointly with lowering the dispersion of these parameters if more reliable qubits are desired.

Diamond norm distance
The diamond norm distance 22,23 between two quantum channels N_1 and N_2 is defined as

||N_1 − N_2||_⋄ = max_ρ ||(N_1 ⊗ I)(ρ) − (N_2 ⊗ I)(ρ)||_1,

where || ⋅ ||_1 is the trace norm and the maximization runs over states ρ of the joint system formed by the channel input and an ancillary system on which the identity channel I acts. That is, the diamond norm considers states ρ that might be entangled with some ancillary system that is not altered by the action of the channels. Consider two quantum channels N_1, N_2 and a single channel use. A quantum channel discrimination protocol aims to maximize the probability of correctly identifying which channel has acted on the quantum state when one of them is chosen uniformly at random. The diamond norm distance is related to the minimum probability of error, p_e, that a discrimination protocol for N_1 and N_2 can achieve as

p_e = (1/2)(1 − (1/2)||N_1 − N_2||_⋄).

Therefore, the diamond norm distance serves as a measure of how differently two quantum channels affect input quantum states, and so it is a good metric to assess the dissimilarity between channels.
For Pauli channels N_P defined as

N_P(ρ) = p_I ρ + p_x XρX + p_y YρY + p_z ZρZ,

with p_I = 1 − p_x − p_y − p_z, the diamond norm distance has the closed-form expression 23

||N_P1 − N_P2||_⋄ = Σ_k |p_k^(1) − p_k^(2)|, (17)

where k ∈ {I, X, Y, Z}. Amplitude damping channels N_AD 16 also have a closed-form expression for the diamond norm distance 32 in terms of the damping probabilities γ_1 and γ_2 that define the dynamics of each of the AD channels. The action of phase damping or dephasing channels N_PD 16 can be easily rewritten as a function of the Pauli matrices as

N_PD(ρ) = (1 − λ/2) ρ + (λ/2) ZρZ, (19)

where λ is interpreted as the scattering probability of a photon without loss of energy. Consequently, the PD channel is equivalent to a pure Pauli dephasing channel, that is, a Pauli channel where the only nontrivial errors are the phase-flip errors Z. As a result, using (17), the diamond norm distance between dephasing channels has the closed-form expression

||N_PD(λ_1) − N_PD(λ_2)||_⋄ = |λ_1 − λ_2|.

Finally, for quantum channels whose diamond norm distance has no closed-form expression, a method to efficiently compute it via semidefinite programming (SDP) was presented in ref. 33. The QETLAB tool 34 for MATLAB includes the calculation of diamond norm distances using those SDP methods.

Fig. 3 Performance of the d × d Kitaev toric codes with d ∈ {3, 5, 7, 9} 12 and the QTC proposed in ref. 11. a 3 × 3 toric code. b 5 × 5 toric code. c 7 × 7 toric code. d 9 × 9 toric code. e QTC. Each panel presents performance for (i) the static channel, (ii) the TVADCTA for scenario QA_C5 (c_v(T_1) ≈ 25%), and (iii) the TVADCTA for scenario QA_C6 (c_v(T_1) ≈ 22%).
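The Pauli-channel closed form and the discrimination-error relation lend themselves to a short numerical sketch. The following Python snippet is illustrative only (it is not code from this work); it implements the closed form for Pauli channels and verifies that, for depolarizing channels (X, Y, and Z each applied with probability p/3), it reduces to 2|p_1 − p_2|:

```python
import numpy as np

def pauli_diamond_distance(p1, p2):
    # Closed-form diamond norm distance between two Pauli channels:
    # sum over k in {I, X, Y, Z} of |p_k^(1) - p_k^(2)|.
    return sum(abs(p1[k] - p2[k]) for k in "IXYZ")

def depolarizing_probs(p):
    # Depolarizing channel with probability p: X, Y and Z each with p/3.
    return {"I": 1 - p, "X": p / 3, "Y": p / 3, "Z": p / 3}

def discrimination_error(diamond_distance):
    # Minimum error probability of a single-use discrimination protocol
    # when one of the two channels is chosen uniformly at random.
    return 0.5 * (1 - 0.5 * diamond_distance)

# Depolarizing simplification: the four-term sum equals 2|p1 - p2|.
d = pauli_diamond_distance(depolarizing_probs(0.05), depolarizing_probs(0.15))
pe = discrimination_error(d)  # d ≈ 0.2, so pe ≈ 0.45
```

The same `pauli_diamond_distance` helper also covers the pure-dephasing case, since a PD channel is a Pauli channel with only I and Z terms.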

Lorentzian noise
The noise profile of a Lorentzian process is given by

S_Lor(f) = 4A²τ_0 / (1 + (2πfτ_0)²),

where A is the Lorentzian noise amplitude and τ_0 is the characteristic timescale. The authors in ref. 17 showed that modeling T_1 with the sum of two Lorentzians and a white noise process with fitted parameters exactly matched the experimental data. Noise processes with Lorentzian profiles can be generated by filtering white noise 35 : if white noise with spectral density 4A²τ_0 is passed through a circuit with transfer function F = 1/(1 + j2πfτ_0), the noise process with the Lorentzian profile of interest will be obtained, since |F(f)|² = 1/(1 + (2πfτ_0)²). Making use of this method, the two required Lorentzian noise processes can be constructed and combined with a white noise process of power h_0, resulting in the noise profile of the T_1 parameter required to perform numerical simulations.
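The filtering construction can be sketched as follows. This Python snippet is an illustration, not the simulation code of this work: the white-noise scaling and the explicit Euler discretization of F = 1/(1 + j2πfτ_0) as a one-pole low-pass filter are assumptions of the sketch.

```python
import numpy as np

def lorentzian_noise(A, tau0, dt, n, rng):
    # Generate a noise trace with an approximately Lorentzian spectrum
    # by low-pass filtering white noise.
    # White-noise samples whose (two-sided) PSD is 4*A^2*tau0 at the
    # filter input, under the sampling convention var = PSD / dt.
    sigma = np.sqrt(4 * A**2 * tau0 / dt)
    x = rng.normal(0.0, sigma, n)
    # One-pole IIR filter: dy/dt = (x - y)/tau0, explicit Euler step,
    # which discretizes F = 1/(1 + j*2*pi*f*tau0).
    y = np.empty(n)
    y[0] = x[0]
    alpha = dt / tau0
    for i in range(1, n):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

rng = np.random.default_rng(seed=1)
trace = lorentzian_noise(A=1.0, tau0=1e-3, dt=1e-5, n=20000, rng=rng)
```

With these conventions the total power of the filtered trace is close to the integral of the Lorentzian profile, 2A²; two such traces plus a white-noise floor would then be summed to mimic the T_1 noise profile described above.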

Adjusted boxplots for skewed distributions
Boxplots are a heavily employed tool in statistics to visualize how data is distributed. A boxplot is comprised of a box whose top edge refers to the third quartile (Q3) and whose bottom edge refers to the first quartile (Q1). A line inside this box represents the median, and two whiskers at both ends indicate Q3 + 1.5(Q3 − Q1) for the top whisker and Q1 − 1.5(Q3 − Q1) for the bottom one. Data found outside the whisker range are considered to be outliers in the dataset and are plotted individually. Boxplots are based on the normal distribution, and so they are not very representative of heavily skewed distributions. Note that the length is the same for the top and the bottom whiskers. Consequently, the real skewness of heavily skewed data will not be represented in such boxplots, with several data points being represented as outliers as a result of the long tails that such distributions have. In consequence, conventional boxplots do not reflect a valid range of outliers, since the long-tail side requires that more data be accounted for as typical values. Adjusted boxplots have been presented in the literature as a methodology to take the imbalanced weight of skewed data into account 36. The core idea of this adjusted boxplot is to use a robust measure of the skewness of the dataset, known as the medcouple (MC) 37, to define the length of each of the whiskers 36. For MC ≥ 0, the whiskers are placed at

[Q1 − 1.5 e^(−4MC) IQR, Q3 + 1.5 e^(3MC) IQR],

and for MC < 0 at [Q1 − 1.5 e^(−3MC) IQR, Q3 + 1.5 e^(4MC) IQR], where IQR = Q3 − Q1 refers to the interquartile range. The whiskers of the boxplot will then take the skewness of the distribution into account, depending on whether the distribution is positively skewed (MC > 0), negatively skewed (MC < 0), or symmetric (MC = 0). This way, several data points considered to be outliers in conventional boxplots will not be considered as such by adjusted boxplots (skewness should be taken into account for outlier consideration).
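A minimal sketch of the medcouple and the adjusted whiskers follows. It uses a naive O(n²) kernel (assuming continuous data without ties at the median); the exponential whisker lengths follow the adjusted-boxplot proposal of ref. 36.

```python
import numpy as np

def medcouple(x):
    # Naive O(n^2) medcouple: the median of the kernel
    # h(xi, xj) = ((xj - m) - (m - xi)) / (xj - xi)
    # over pairs xi <= m <= xj with xi < xj, where m is the sample median.
    # Assumes continuous data without ties at the median.
    x = np.sort(np.asarray(x, dtype=float))
    m = np.median(x)
    lo, hi = x[x <= m], x[x >= m]
    kernels = [((xj - m) - (m - xi)) / (xj - xi)
               for xi in lo for xj in hi if xj > xi]
    return float(np.median(kernels))

def adjusted_whiskers(x):
    # Skewness-adjusted whisker bounds; data outside them are outliers.
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    mc = medcouple(x)
    if mc >= 0:
        return (q1 - 1.5 * np.exp(-4 * mc) * iqr,
                q3 + 1.5 * np.exp(3 * mc) * iqr)
    return (q1 - 1.5 * np.exp(-3 * mc) * iqr,
            q3 + 1.5 * np.exp(4 * mc) * iqr)

# Symmetric data gives MC = 0 (classic whiskers); positively skewed data
# gives MC > 0 and a longer upper whisker.
mc_sym = medcouple([1, 2, 3, 4, 5])     # 0.0
mc_skew = medcouple([1, 2, 3, 4, 100])  # positive
```

For MC > 0 the upper whisker extends beyond the conventional Q3 + 1.5·IQR bound, so fewer points in the long right tail are flagged as outliers.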
The probability distribution of ||N(μ_T1) − N(ρ, ω, t)||_⋄ appears to have a heavy positive skew, and thus the adjusted version of the boxplot presented in ref. 36 is necessary for a correct representation of such results.

QECC numerical simulation
Monte Carlo computer simulations of the d × d Kitaev toric codes with d ∈ {3, 5, 7, 9} (ref. 12 ) and the QTC of rate 1/9 in (ref. 11 ) have been carried out, in order to estimate their performance for the different scenarios that have been studied. Toric codes belong to the more general family of surface codes 38 and are [[2d 2 , 2, d]] QECCs defined by the grid length of the code d. A blocklength of k = 1000 logical qubits has been selected for the QTC, as in refs. [9][10][11] .
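As a quick illustration of the [[2d², 2, d]] parameters mentioned above, the code parameters of the simulated toric codes can be tabulated as follows (illustrative sketch):

```python
# [[n, k, d]] parameters of the d x d Kitaev toric code:
# n = 2*d^2 physical qubits, k = 2 logical qubits, minimum distance d.
toric_params = {d: (2 * d**2, 2, d) for d in (3, 5, 7, 9)}
# e.g., the 3 x 3 toric code is an [[18, 2, 3]] code.
```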
The toric codes are decoded using a minimum weight perfect matching decoder, which is implemented in the QECSIM tool 38 . The QTC is decoded via the decoding algorithm presented in refs. 7,8 , which combines two soft-in soft-out decoders.
Each round of the numerical simulation is performed by generating an N-qubit Pauli operator, calculating its associated syndrome, and finally running the decoding algorithm using the syndrome as its input. Once the logical error is estimated, it is compared with the channel error in order to decide whether the decoding round was successful. The operational figure of merit selected to evaluate the performance of these QEC schemes is the WER, which is the probability that at least one qubit of the received block is incorrectly decoded.
Regarding the numerical Monte Carlo methods used to estimate the WER of the Kitaev toric codes and the QTC, the following rule of thumb has been used to select the number of blocks to be transmitted, N_blocks 9-11 :

N_blocks = 100 / WER.

As explained in refs. [9][10][11], under the assumption that the observed error events are independent, this results in a 95% confidence interval of about (0.8·ŴER, 1.25·ŴER), where ŴER refers to the empirically estimated value of the WER.
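The stopping rule above can be sketched as follows. Here `run_block` is a hypothetical placeholder for one full channel-sampling and decoding round (the actual simulations use the toric and QTC decoders described above); transmitting until roughly 100 block errors are observed is equivalent to N_blocks ≈ 100/WER.

```python
import numpy as np

def estimate_wer(run_block, rng, target_errors=100, max_blocks=10**7):
    # Monte Carlo WER estimation following the rule of thumb
    # N_blocks = 100 / WER: transmit blocks until ~100 block (word)
    # errors are observed. For independent error events this yields
    # roughly a (0.8*WER_hat, 1.25*WER_hat) 95% confidence interval.
    errors = blocks = 0
    while errors < target_errors and blocks < max_blocks:
        blocks += 1
        errors += bool(run_block(rng))  # True means a decoding failure
    wer_hat = errors / blocks
    return wer_hat, (0.8 * wer_hat, 1.25 * wer_hat)

# Toy stand-in for a full channel + decoding round: a block is decoded
# incorrectly with probability 0.05, independently of other blocks.
rng = np.random.default_rng(seed=0)
wer_hat, ci = estimate_wer(lambda r: r.random() < 0.05, rng)
```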

DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding authors upon reasonable request.