Suppressing qubit dephasing using real-time Hamiltonian estimation

Unwanted interaction between a quantum system and its fluctuating environment leads to decoherence and is the primary obstacle to establishing a scalable quantum information processing architecture. Strategies such as environmental and materials engineering, quantum error correction and dynamical decoupling can mitigate decoherence, but generally increase experimental complexity. Here we improve coherence in a qubit using real-time Hamiltonian parameter estimation. Using a rapidly converging Bayesian approach, we precisely measure the splitting in a singlet-triplet spin qubit faster than the surrounding nuclear bath fluctuates. We continuously adjust qubit control parameters based on this information, thereby improving the inhomogeneously broadened coherence time from tens of nanoseconds to >2 μs. Because the technique demonstrated here is compatible with arbitrary qubit operations, it is a natural complement to quantum error correction and can be used to improve the performance of a wide variety of qubits in both metrological and quantum information processing applications.

indicating that they are capable of consistent single-shot readout. The difference in the heights of the two peaks is caused by residual exchange (J) during evolution, which causes the axis of evolution around the Bloch sphere to be non-orthogonal to the initial state. For the Bayesian estimate, which requires discretized data (r_k = ±1), we choose a threshold (grey dashed line) at the minimum between the peaks; the FPGA uses this threshold when estimating ΔB_z for adaptive control. When using the same estimation sequence, post-processed oscillations (blue) and data taken using adaptive control (red) show the same decay, indicating similar performance of the estimation. The post-processing technique allows us to explore estimation sequences that are too fast for the FPGA.
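As an illustration of the thresholding step described above, the sketch below places the threshold at the minimum of the bimodal readout histogram and maps each analog single-shot signal to r_k = ±1. This is a schematic reconstruction, assuming two well-separated peaks, not the experiment's analysis code; the bin count and peak-search strategy are arbitrary choices.

```python
import numpy as np

def threshold_from_histogram(signals, bins=100):
    """Place the readout threshold at the minimum of the bimodal histogram
    of single-shot signals, i.e. in the valley between the two state peaks.
    Assumes the two peaks fall in opposite halves of the signal range."""
    counts, edges = np.histogram(signals, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    lo = np.argmax(counts[: bins // 2])               # peak of the lower state
    hi = bins // 2 + np.argmax(counts[bins // 2:])    # peak of the upper state
    valley = lo + np.argmin(counts[lo:hi + 1])        # minimum between them
    return centers[valley]

def discretize(signals, threshold):
    """Map each analog readout to r_k = +1 (above threshold) or -1 (below)."""
    return np.where(np.asarray(signals) > threshold, 1, -1)
```

In practice the valley of a well-separated bimodal histogram is broad and nearly empty, so the exact bin chosen matters little for the discretized outcomes.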

Supplementary Note 1 FPGA and experimental apparatus
The reflected readout drive signal returns to room temperature through a cryogenic circulator and amplifier at 4 K. The signal is amplified again at room temperature before being demodulated to DC. This DC signal is split and sent to a digitizing card (AlazarTech 660) in a computer and a home-built correlated double sampler (CDS). The CDS digitizes the signal and performs a local reference subtraction to reject low-frequency noise. The resulting 16-bit signal is converted to a low-voltage digital signal and sent to the FPGA for processing. The FPGA is a National Instruments model PXI-7841R and is clocked at 40 MHz to maximize processing speed. The probability P(ΔB_z | m_k) is computed for 256 consecutive frequencies in the estimation bandwidth, B, in two parallel processes on the FPGA to decrease calculation time. Since B ≈ 40 MHz is larger than the residual fluctuations of ΔB_z, we increase the frequency resolution by computing the Bayesian estimate of ΔB_z for the middle 256 frequencies inside of B. For these parameters, the minimum calculation time is 3.7 µs for a single t_k. The probability distributions are stored and updated as single-precision floating-point numbers, since we find that single precision improves the accuracy of the estimator over fixed-point numbers.
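The Bayesian update running on the FPGA can be sketched in software. The following Python is an illustrative model, not the FPGA implementation: it assumes a single-shot likelihood of the form P(r_k = ±1 | ΔB_z) = [1 ± (α + β cos(2π ΔB_z t_k))] / 2, where α and β are readout-calibration constants (the values below are placeholders), and a 256-point frequency grid whose span is likewise illustrative rather than the experimental bandwidth.

```python
import numpy as np

def bayesian_update(prior, freqs, r_k, t_k, alpha=0.25, beta=0.67):
    """One Bayes step over a discrete frequency grid:
    P(f | r_k) ∝ P(r_k | f) * P(f), with the cosine likelihood above.
    alpha (readout bias) and beta (visibility) are placeholder values."""
    like = 0.5 * (1.0 + r_k * (alpha + beta * np.cos(2 * np.pi * freqs * t_k)))
    post = prior * like
    return post / post.sum()       # renormalize after every measurement

def estimate_dBz(measurements, freqs):
    """Run the update for each recorded (r_k, t_k) pair, starting from a
    flat prior, and return the maximum-a-posteriori frequency."""
    p = np.full(freqs.size, 1.0 / freqs.size)
    for r_k, t_k in measurements:
        p = bayesian_update(p, freqs, r_k, t_k)
    return freqs[np.argmax(p)]
```

Renormalizing after each of the N measurements keeps the distribution well scaled, which is the same concern that motivates the floating-point (rather than fixed-point) storage on the FPGA.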
After estimating ΔB_z, the FPGA returns the index (an integer between 1 and 256) of the most probable frequency, which must be converted to a voltage to control the VCO. To do so, we apply a linear transformation to the index, V = G × index + O, where the offset O controls the detuning of the driving frequency. We tune the gain G to maximize T_2* using adaptive control (Figure 3a).
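As a minimal sketch of this conversion (the gain and offset values in the usage note are hypothetical, not the experimentally tuned ones):

```python
def index_to_voltage(index, gain, offset):
    """Linear map from the FPGA's most-probable-frequency index (1..256)
    to a VCO control voltage: V = G * index + O.  The offset O sets the
    drive detuning; the gain G is tuned to maximize T2*."""
    if not 1 <= index <= 256:
        raise ValueError("index must be between 1 and 256")
    return gain * index + offset

# Hypothetical gain/offset, for illustration only:
# index_to_voltage(128, gain=0.01, offset=-1.0) gives 0.28 V
```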

Supplementary Note 2 Software Post Processing
To compare post-processing with adaptive control, we first perform the same estimation sequence for both software post-processing and adaptive control, with a 250 kHz repetition rate, t_samp = 12 ns and N = 120, followed by an operation sequence of 30 measurements. We find T_2* = 2148 ± 30 ns with software and T_2* = 2066 ns with adaptive control, showing good agreement between the two approaches (Figure 4a).
For the software post-processing, we can reduce the amount of diffusion that occurs during the operation sequence by performing only one verification measurement following the same estimation sequence, enhancing T_2* to 2580 ± 40 ns. For the software rescaling in Fig. 4d, the 109 estimations were performed in 225 µs instead of the 440 µs used by the FPGA, yielding T_2* = 2840 ± 30 ns. This is likely limited by diffusion and the precision of the estimator with N = 109.
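The T_2* values quoted above come from fits to the decaying oscillation envelope. As a minimal illustration (not the paper's fitting code), assuming a Gaussian envelope A(t) = A0 exp(-(t/T_2*)^2), T_2* can be extracted by a closed-form least-squares fit of ln A against t²:

```python
import numpy as np

def fit_T2star(times, amplitudes):
    """Extract T2* from a Gaussian envelope A(t) = A0 * exp(-(t/T2*)^2).
    ln A is linear in t^2 with slope -1/T2*^2, so a closed-form
    least-squares slope estimate yields T2* directly."""
    x = np.asarray(times, dtype=float) ** 2
    y = np.log(np.asarray(amplitudes, dtype=float))
    xm, ym = x.mean(), y.mean()
    slope = np.sum((x - xm) * (y - ym)) / np.sum((x - xm) ** 2)
    return float(np.sqrt(-1.0 / slope))
```

Working with ln A keeps the fit linear; for noisy data a weighted fit (or a full nonlinear fit of the oscillating signal) would be more appropriate than this sketch.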