Introduction

Quantum computers are general purpose devices that leverage quantum mechanical behaviour to outperform their classical counterparts by reducing the computational time and/or the required physical resources.1 Excitement about quantum computation was initially fuelled by the prime-factorization algorithm developed by Shor,2 which is most popularly associated with the ability to attack currently used cyber security protocols. More importantly, it provided a paradigmatic example of dramatic exponential improvement in computational speed when compared with classical algorithms. It has since been realised that the potential power of quantum computers could have far reaching applications, from quantum chemistry and the associated benefits for medicine and drug discovery,3 to quantum machine learning and artificial intelligence.4 Even before reaching these lofty goals, there may also be practical uses for noisy intermediate-scale quantum (NISQ) devices.5

These quantum devices can be implemented in a large number of ways, for example, using ultracold trapped ions,6,7,8,9,10,11 cavity quantum electrodynamics (QED),12,13,14,15 photonic circuits,16,17,18 silicon quantum dots,19,20,21 and theoretically even by braiding exotic, as-yet-unobserved collective excitations called non-abelian anyons.22,23,24 One of the most promising approaches uses superconducting circuits,25,26,27 where recent advances have resulted in devices consisting of up to 72 qubits, pushing us ever closer to realising so-called quantum supremacy.28 The apparent proximity of current devices to this milestone makes it timely to review the current capabilities and limitations of quantum computers.

Richard Feynman’s original idea was to simulate quantum many-body dynamics—a notoriously hard problem for a classical computer—by using another quantum system.29 Over the last couple of decades, this approach of using a purpose-built quantum simulator has been extremely successful in accessing physics beyond the reach of numerics on a classical computer. Most notable are cold atom experiments with optical lattices,30,31,32,33,34,35,36 where the natural evolution of the atoms corresponds, to a high accuracy, to that of a local Hamiltonian of choice. This has, for example, allowed us to study Hubbard model physics31,35 and many-body localized systems32,33,34 in two dimensions. More recently, there have also been advances in trapped ion quantum simulators, which have the benefit of being able to implement long-range interactions,6,7,37 and have been used to study the Schwinger mechanism of pair production/annihilation in 1D lattice QED7 and Floquet time crystals.38

Universal quantum computers are also increasingly looking like a feasible setting for simulating quantum dynamics. One of the biggest advantages of using a quantum computer for this purpose is the flexibility it offers. A single quantum device could in principle perform simulations that currently require several different experiments, using disparate methods. Furthermore, it should be possible to access new physics not currently accessible, most notably, the simulation of lattice gauge theories with dynamical gauge fields. These are ubiquitous in the theoretical study of strongly-correlated quantum matter but require multi-body couplings, which have so far proven difficult to achieve in experiment.39,40

A particularly exciting opportunity has been provided by IBM in the form of an online quantum computing network called IBM Q. This consists of a set of small quantum computers of 5 and 16 qubits that are freely available to the public, two 20 qubit machines accessible by IBM Q partners,41 and the qiskit python API42 for programming the devices. The publicly available resources have already yielded a broad range of results, such as calculating the ground state of simple molecules,3,43 creating and measuring highly entangled many-qubit states,44,45 implementing quantum algorithms,46,47,48 and simulating non-equilibrium dynamics in the transverse-field Ising,49,50 Heisenberg51,52 and Schwinger53 models, to give just a few examples. Given the infancy of quantum computing efforts, all these results are understandably small scale and of limited accuracy.

IBM is not alone in their efforts to make quantum computing more mainstream, with Microsoft introducing the \(Q\)-sharp programming language,54 Google developing the Cirq python library,55 and Rigetti providing their own Quantum Cloud Service and Forest SDK built on Python.56 Rigetti and IonQ also provide selective public access to hardware—based on superconducting qubits and trapped ions, respectively. All of these resources are giving researchers and the general public around the world hands-on experience with quantum computers. Furthermore, this highlights that there are currently several parallel industrial research and development efforts focussed on quantum computing, quantum programming and cloud-based services that are flexible and prepared for future hardware.

Given the current stage of development, and the immense expectation from the public and physics communities, it is timely to critically assess and benchmark the state-of-the-art. In this article we consider the far-from-equilibrium dynamics of global quantum quenches simulated on an IBM 20 qubit quantum computer. The models that we consider are of central importance to condensed matter physics and display a wide range of phenomenologies. By measuring a range of physical correlators, we can assess the capabilities and limitations of current quantum computers for simulating quantum dynamics.

Results

Setup

In this article, we study global quantum quenches,57,58 that is, we calculate local observables and correlators of the form

$$\left\langle \psi (t)\left|{\hat{O}}_{j}\right|\psi (t)\right\rangle ,\qquad \left\langle \psi (t)\left|{\hat{O}}_{j}{\hat{O}}_{k}\right|\psi (t)\right\rangle ,$$
(1)

where \({\hat{O}}_{j}\) are local operators and the time-dependent states are

$$\left|\psi (t)\right\rangle ={e}^{-i\hat{H}t}\left|\psi (0)\right\rangle ,$$
(2)

and where \(\left|\psi (0)\right\rangle\) is the initial state, which differs globally from an eigenstate of \(\hat{H}\). In other words, we can consider \(\left|\psi (0)\right\rangle\) to be the ground state of a (time-independent) preparation Hamiltonian, \({\hat{H}}_{0}=\hat{H}-h\hat{V}\), where \(\hat{V}\) is a global perturbation, and the perturbation strength \(h\) is instantaneously quenched from zero at time \(t=0\).

We consider one-dimensional spin-1/2 chains, consisting of \(N\) spins, initially prepared in either a domain wall configuration, \(\left|\cdots \,\downarrow \downarrow \downarrow \uparrow \uparrow \uparrow \,\cdots \ \right\rangle\), or the Néel state, \(\left|\cdots \,\uparrow \downarrow \uparrow \downarrow \uparrow \,\cdots \ \right\rangle\). Quenches from these states are particularly easy to prepare due to their local tensor-product structure and have been studied in cold atom experiments.32 The time evolution after the quantum quench will be governed by a Hamiltonian of the form

$$\hat{H}=-J\sum _{j=1}^{N-1}\left({\hat{\sigma }}_{j}^{x}{\hat{\sigma }}_{j+1}^{x}+{\hat{\sigma }}_{j}^{y}{\hat{\sigma }}_{j+1}^{y}\right)+U\sum _{j=1}^{N-1}{\hat{\sigma }}_{j}^{z}{\hat{\sigma }}_{j+1}^{z}+\sum _{j=1}^{N}{h}_{j}{\hat{\sigma }}_{j}^{z},$$
(3)

with \(J\,{>}\,0\), where \({\hat{\sigma }}_{j}^{\alpha }\) are the Pauli matrices with eigenvalues \(\pm\!1\). This model can also be written as a hard-core boson or spinless fermion Hamiltonian via the Jordan-Wigner transformation.

We will consider four distinct cases for the Hamiltonian parameters:

  1. (i)

    The XX chain, defined by \(U=0\) and \({h}_{j}=0\). This is a uniform, non-interacting model.

  2. (ii)

    Disordered XX chain for \(U=0\) and \({h}_{j}\) uniformly randomly sampled from the interval \([-h,h]\). This is the prototypical lattice model for Anderson localization.59

  3. (iii)

    XXZ spin chain, defined by \(U\,{\ne}\, 0\) and \({h}_{j}=0\). This is a particular case of the Heisenberg model for quantum magnetism.

  4. (iv)

    XXZ chain with linear potential, defined by \(U\,{>}\,0\) and \({h}_{j}=hj\), i.e., a linearly increasing potential. This model was recently studied in the context of many-body localization without disorder,60,61 where Wannier-Stark localization62 is induced by the linear potential.

This family of Hamiltonians covers a large range of physics in condensed matter, from the integrable limit,57 to many-body quantum magnetism,63 to that of many-body localization.64,65 These Hamiltonians are ideal testbeds for digital quantum simulation since they can be simulated directly on a quantum computer – the spins-1/2 of the model correspond directly to the qubits of the device, and the local connectivity of the qubits is suited to the local form of the Hamiltonian.

In our simulations we achieve the time evolution using a Trotter decomposition66,67 of the unitary time evolution operator \(\hat{U}(t)={e}^{-i\hat{H}t}\). Our simulation proceeds by the following steps. First, we create the initial state, which is done by applying Pauli-X operations to the default initial state of the IBM machine, \(\left|\cdots \uparrow \uparrow \uparrow \cdots \ \right\rangle\). Second, we discretize time and the evolution becomes a product of discrete evolution operations \(\hat{U}(t)=\hat{U}(\Delta t)\cdots \hat{U}(\Delta t)\). Note that in our plots we include additional data points by varying \(\delta t\) in the final discrete evolution step. Third, we trotterize the discrete operators \(U(\Delta t)\) into a product of one- and two-qubit unitaries. Finally, we decompose these unitaries into the CNOT and single qubit rotation gates that can be directly applied on the IBM devices. Please see Fig. 1 for a schematic of this procedure, and see Methods for more details.
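
As a concrete illustration of this pipeline, the following is a schematic, Qiskit-style sketch (method names follow Qiskit but may differ between versions) of how such a circuit can be assembled for the Hamiltonian of Eq. (3). The bond unitaries here are built from simple basis-change identities, using four CNOTs per bond, rather than the optimal two/three-CNOT decomposition of ref. 92 that we use in practice; the qubit labelling and the use of the basic (rather than symmetric) decomposition are illustrative choices.

```python
from qiskit import QuantumCircuit

def add_bond(qc, j, J, U, dt):
    """exp(-i dt (U Z_j Z_{j+1} - J (X_j X_{j+1} + Y_j Y_{j+1}))) on one bond."""
    # ZZ rotation: CNOT . Rz(2 U dt) . CNOT = exp(-i U dt Z Z)
    qc.cx(j, j + 1); qc.rz(2 * U * dt, j + 1); qc.cx(j, j + 1)
    # XX rotation: Hadamards map X -> Z, then reuse the ZZ rotation
    qc.h(j); qc.h(j + 1)
    qc.cx(j, j + 1); qc.rz(-2 * J * dt, j + 1); qc.cx(j, j + 1)
    qc.h(j); qc.h(j + 1)
    # YY rotation: S^dagger followed by H maps Y -> Z
    qc.sdg(j); qc.sdg(j + 1); qc.h(j); qc.h(j + 1)
    qc.cx(j, j + 1); qc.rz(-2 * J * dt, j + 1); qc.cx(j, j + 1)
    qc.h(j); qc.h(j + 1); qc.s(j); qc.s(j + 1)

def trotter_circuit(N, J, U, h, dt, steps):
    """Domain-wall quench evolved with `steps` basic Trotter steps (Fig. 1b)."""
    qc = QuantumCircuit(N, N)
    for j in range(N // 2):              # flip the left half: |dd...uu...>
        qc.x(j)                          # (|0> = up, |1> = down is assumed)
    for _ in range(steps):
        for j in range(N):
            qc.rz(2 * h[j] * dt, j)      # A_j = exp(-i h_j sigma^z_j dt)
        for j in range(0, N - 1, 2):     # B_j on even bonds
            add_bond(qc, j, J, U, dt)
        for j in range(1, N - 1, 2):     # B_j on odd bonds
            add_bond(qc, j, J, U, dt)
    qc.measure(range(N), range(N))       # z-basis measurement of every spin
    return qc

# e.g. the XX chain (U = 0, h_j = 0) with three Trotter steps of J*dt = 0.5:
circuit = trotter_circuit(N=6, J=1.0, U=0.0, h=[0.0] * 6, dt=0.5, steps=3)
```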

Fig. 1

Schematic of the implementation of Trotterized evolution. a The procedure of adding more Trotter steps to reach longer times. b, c are the basic and symmetric Trotter decomposition, respectively, for the Hamiltonian Eq. (3). In b the operators are \({\hat{A}}_{j}={e}^{-i{h}_{j}{\hat{\sigma }}_{j}^{z}\Delta t},\qquad {\hat{B}}_{j}={e}^{-i(U{\hat{\sigma }}_{j}^{z}{\hat{\sigma }}_{j+1}^{z}-J({\hat{\sigma }}_{j}^{x}{\hat{\sigma }}_{j+1}^{x}+{\hat{\sigma }}_{j}^{y}{\hat{\sigma }}_{j+1}^{y}))\Delta t}\) and in c they are \({\hat{A}}_{j}={e}^{-i{h}_{j}{\hat{\sigma }}_{j}^{z}\frac{\Delta t}{2}},\quad {\hat{B}}_{j}={e}^{-i(U{\hat{\sigma }}_{j}^{z}{\hat{\sigma }}_{j+1}^{z}-J({\hat{\sigma }}_{j}^{x}{\hat{\sigma }}_{j+1}^{x}+{\hat{\sigma }}_{j}^{y}{\hat{\sigma }}_{j+1}^{y}))\frac{\Delta t}{2}},{\hat{C}}_{j}={e}^{-i(U{\hat{\sigma }}_{j}^{z}{\hat{\sigma }}_{j+1}^{z}-J({\hat{\sigma }}_{j}^{x}{\hat{\sigma }}_{j+1}^{x}+{\hat{\sigma }}_{j}^{y}{\hat{\sigma }}_{j+1}^{y}))\Delta t}\).

Once we have constructed the state \(\left|\psi (t)\right\rangle\), we then perform measurements, which allow us to construct the quantities of interest in Eq. (1). We will only consider correlators of \({\hat{\sigma }}_{j}^{z}\) operators, for which we only need to measure the spins in a single basis (the \(z\)-basis). In the following we will use 8192 measurements per data point, which means that the statistical error for these local correlators is \(\sim\!\!0.01\), too small to be visible in our figures.

We consider three different types of dynamical quantities (a post-processing sketch for estimating them from measured bitstrings follows this list):

  • The local magnetization

    $${M}_{j}(t)=\langle \psi (t)|{\hat{\sigma }}_{j}^{z}|\psi (t)\rangle .$$
    (4)

    In the case of a domain wall initial state, we will also compute

    $${N}_{\text{half}}(t)=\sum _{j=1}^{N/2}\left\langle \psi (t)\right|\ \frac{{\hat{\sigma }}_j^{z}+1}{2}\ \left|\psi (t)\right\rangle ,$$
    (5)

    which is equal to zero at \(t=0\) and grows as the domain wall spreads.

  • The connected equal-time correlator

    $$\ \ {C}_{jk}(t)=\left\langle \psi (t)\right|{\hat{\sigma }}_{j}^{z}{\hat{\sigma }}_{k}^{z}\left|\psi (t)\right\rangle -\left\langle \psi (t)\right|{\hat{\sigma }}_{j}^{z}\left|\psi (t)\right\rangle \left\langle \psi (t)\right|{\hat{\sigma }}_{k}^{z}\left|\psi (t)\right\rangle .$$
    (6)

    Note, the connected form of this correlator measures the quantum correlators between distant spins but is not sensitive to the classical correlations in our initial states.

  • The quantum Fisher information (QFI). We will consider a particular case of the QFI for a pure state, namely,

    $$\ \ {F}_{Q}(t)=\sum _{jk}{s}_{j}{s}_{k}\left\langle \psi (t)\right|{\hat{\sigma }}_{j}^{z}{\hat{\sigma }}_{k}^{z}\left|\psi (t)\right\rangle -{\left(\sum _{j}{s}_{j}\left\langle \psi (t)\right|{\hat{\sigma }}_{j}^{z}\left|\psi (t)\right\rangle \right)}^{2},$$
    (7)

    where \({s}_{j}=+1\) for the left half of the sites \(j\) and \({s}_{j}=-1\) for the right half at \(t=0\). More generally, the QFI (for a pure state) is defined as the variance, \(4(\langle {\hat{O}}^{2}\rangle -{\langle \hat{O}\rangle }^{2})\), where \(\hat{O}\) is a sum of local operators that each have a spectrum of unit width. In our case we have \(\hat{O}=\frac{1}{2}{\sum }_{j}{s}_{j}{\hat{\sigma }}_{j}^{z}\). The QFI is an entanglement witness,68,69,70 and for our chosen definition in Eq. (7) it is also closely related to the von Neumann entanglement entropy.71,72,73
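
For concreteness, a hypothetical post-processing sketch for estimating these quantities from measured bitstring frequencies is given below. The dictionary `probs` and the convention that '0' denotes spin-up ('1' spin-down), with the leftmost bit corresponding to the first site, are assumptions; counts returned by Qiskit list qubit 0 rightmost, so a bit-order reversal may be required.

```python
import numpy as np

def z_values(bits):
    """sigma^z eigenvalues (+1 for up, -1 for down) of one measured bitstring."""
    return np.array([1.0 if b == "0" else -1.0 for b in bits])

def magnetization(probs):
    """M_j(t) of Eq. (4) for every site j."""
    return sum(p * z_values(bits) for bits, p in probs.items())

def connected_correlator(probs):
    """C_jk(t) of Eq. (6)."""
    zz = sum(p * np.outer(z_values(bits), z_values(bits)) for bits, p in probs.items())
    m = magnetization(probs)
    return zz - np.outer(m, m)

def quantum_fisher_information(probs, N):
    """F_Q(t) of Eq. (7), with s_j = +1 (-1) on the left (right) half of the chain."""
    s = np.array([1.0] * (N // 2) + [-1.0] * (N - N // 2))
    mean_sq = sum(p * (s @ z_values(bits)) ** 2 for bits, p in probs.items())
    mean = sum(p * (s @ z_values(bits)) for bits, p in probs.items())
    return mean_sq - mean ** 2

# e.g. probs = {"000111": 0.52, "001011": 0.07, ...} for a six-site domain wall quench
```

\({N}_{\text{half}}\) of Eq. (5) follows in the same way by summing \((1+{\hat{\sigma }}_{j}^{z})/2\) over the left half of the chain.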

The IBM quantum computers

The quantum computer that we use is the latest 20-qubit IBM device, codenamed ibmq_poughkeepsie. It consists of a two-dimensional array of qubits with local connectivity. We can perform arbitrary single qubit rotations and controlled-NOT (CNOT) gates between connected qubits, see Methods. For the data presented in this paper the average readout errors, CNOT errors, and T2 (dephasing) times were approximately 4%, 2% and 90 μs, respectively. An important point to note is that the IBM machines are recalibrated on an approximately daily basis, which means the data can vary across days. Crucially, we find that our results are qualitatively reproducible, and we compare data obtained across three consecutive days in the Methods.

To benchmark the accuracy of the simulation, we compare the data with a numerical implementation of the Trotter evolution, as well as continuous-time exact diagonalization (ED). The errors in our results are strongly influenced by the number of CNOT gates in the corresponding quantum circuit. One of the reasons for this is that the fidelities of these two qubit gates are an order of magnitude worse than those of the single qubit gates. The CNOTs also take a longer real-world time to implement than the single qubit gates. The increased implementation time of the circuits increases the potential for errors due to energy relaxation and dephasing, parametrized by the T1 and T2 times, as well as other environmental effects and cross-talk. On the IBM device, we use \(N\) = 6, 8 or 10 of the qubits as our system, with 4 or 5 symmetric Trotter steps. These qubits are chosen as the connected subset with the lowest average CNOT errors, subject to the single qubit measurement errors not exceeding, and the T2 decoherence times not falling below, certain thresholds. Please see Methods for more details about the quantum device and the algorithm used to select the qubits.

Local magnetization

First, we consider results for the uniform XX spin chain Hamiltonian with \(U=0\) and \({h}_{j}=0\) (case (i)), quenched from a domain wall configuration, shown in Fig. 2. Figure 2a–c show a comparison of the magnetization of the fourth (middle) and sixth (end) spins of the chain as computed by exact diagonalization with continuous time evolution, a numerical implementation of the Trotter decomposition and the corresponding data from the IBM machine. The data from the machine is further split into the raw data (orange triangles) and constrained data (red squares). The constrained data only considers those measurement outcomes that have the same total magnetization as the initial state—which is a conserved quantity—that is, we restrict to the physical Hilbert space of the Hamiltonian we are simulating. We discuss this rudimentary error mitigation method further in the context of quantifying the accuracy of the quantum computer (see Fig. 6b), and it will be used in all subsequent figures. See Methods for more details.

Fig. 2

Results for a global quench from a domain wall initial state. a, b The local magnetization of the fourth and sixth spins of the chain with \(N=6\), and c is for the sixth spin for longer times. Plotted are the result of exact diagonalization, numerical Trotter evolution, and the experimental data, both in raw form (IBM) and when the conserved quantities are imposed (IBM constrained), see the main text. d–f The time and site resolved IBM results for the local magnetisation for \(N=6,8,10\) sites, respectively. g The corresponding numerical symmetric Trotter evolution for \(N=10\). Data were obtained on 12 March 2019 for all figures except (c), which were obtained on 3 April 2019.

The data in Fig. 2a, b show that while all curves show reasonable agreement at short times—for instance, we have a delayed decay of the magnetization in Fig. 2b—the accuracy of the IBM data degrades rapidly. For times \(Jt\,{>}\,1\) the magnetization as measured on the machine approaches zero, which is the expected average value if the system thermalizes or if we were to randomly sample states. The agreement between the numerical results obtained from exact diagonalization (ED) and trotterization shows that the inaccuracy of the results is due to the machine and not our approximation of the evolution. This rapid decay in the accuracy with the number of Trotter steps was also observed in ref. 49 for the transverse field Ising model on the 5 and 16 qubit IBM machines.

In Fig. 2c we consider the evolution over a longer time window, up to \(Jt=10\), rather than \(Jt=2.5\). We see that the data very quickly approach zero and remain there, indicating that the system has equilibrated. From this figure we can see that, given the quality of the machine, we want to take the Trotter step \(\Delta t\) as large as possible, so that we can reach longer times before the data artificially thermalize. However, for the Trotter evolution to be accurate we want \(\Delta t\) as small as possible. The values of \(\Delta t\) that we use are chosen (without much optimization) to maintain a reasonable accuracy in both the IBM data and the numerical trotterization.

Next, we consider the site dependence of the magnetization shown in Fig. 2d–f, for \(N=6,8,10\) sites and compared with the numerically computed Trotter evolution shown in Fig. 2g. In these figures we see a clear qualitative agreement between the experimental and numerical results at short times, most notably the presence of a linear light-cone causality structure for the spreading of the domain wall. This qualitative agreement, however, also worsens at longer times, and as we increase the system size. In particular, in Fig. 2d we can see a marked decrease in accuracy every three data points. The origin of this can be explained as follows. Each block of three data points is computed with the same number of Trotter steps, with only the length of the final step varied (as explained earlier). It is when we add an additional Trotter step for the next block—and thus increase the number of gates in the quantum circuit—that we see a drop in the accuracy. This behaviour is also seen in Fig. 6b where we show the number of measurements that are in the physical Hilbert space. There is a clear decrease in the percentage after the introduction of each new Trotter step.

Short-time many-body physics

While the results in Fig. 2 may at first seem discouraging for quantitative large-scale dynamical simulations, we will show in this section that we may still observe qualitative behaviour associated with non-trivial quantum phenomena. Here, we consider \({N}_{\text{half}}(t)\), defined in Eq. (5), after quenching from the domain wall initial state for three different cases.

First, we consider the disordered XX chain (case (ii)), which is known to exhibit Anderson localization in 1D for any non-zero disorder strength \(h\).74 As the disorder strength is increased, the extent of the spreading of the domain wall, and consequently the growth of \({N}_{\text{half}}\), is reduced (indicated by a black arrow). We are able to reproduce this behaviour qualitatively for short times on the IBM machine, as shown in Fig. 3a. The corresponding numerical Trotter results are shown in the inset; comparing the two shows that while the accuracy of the results is quite low, the qualitative behaviour is still captured. Once again, we see that the data are biased towards the scrambled value (1.5 in the present case) as the number of Trotter steps is increased.

Fig. 3

Results for the short-time behaviour of \({N}_{\text{half}}\) after a quench from a domain wall initial state. In all subfigures the inset shows the corresponding results of numerical Trotter evolution. a The disordered XX spin chain with disorder strength controlled by \(h\) using a symmetric Trotter decomposition. b The XXZ spin chain with nearest neighbour interactions parametrized by \(U\) with basic trotterization. c The XXZ spin chain with a linear potential with slope \(h=1.5\), interaction strength \(U\), and a symmetric Trotter decomposition. a, b were obtained on 12 March 2019 and c on 10 May 2019.

Next, we consider the uniform XXZ chain (case (iii)) with \(U\,{>}\,0\) and \({h}_{j}=0\). In contrast to the previous cases, this Hamiltonian is interacting and thus describes true many-body physics. At short times, the spreading of the domain wall is also hindered by the energy cost of the additional interactions between neighbouring spins. Once again, we can see this behaviour qualitatively in the experimental results, shown in Fig. 3b. Note that while the short-time results are similar to the previous case, the suppression here is due to a different physical mechanism and is a many-body effect. The long-time behaviour would be starkly different from that of the Anderson-localized case; accessing it is, however, beyond the current capabilities of the IBM quantum computers.

In the third case we combine features of the previous two models and include both on-site potential energies and interactions. We will consider the XXZ spin chain with a linear potential (case (iv)), and both \(U\,{>}\,0\) and \(h\,{>}\,0\). If we compare with \(U=0\), that is, a linear potential alone, then the eigenstates will be localized,60,61,62 and thus the spreading of the domain wall will be limited. If we have \(U\,{>}\,0\), then the two energy costs can compensate each other. For instance, consider flipping the middle two spins: there will be an increase in energy due to the potential but a decrease in the interaction energy. Therefore, the presence of interactions makes it easier for the domain wall to spread, resulting in an increase of \({N}_{\text{half}}\) as a function of \(U\) (see black arrow). This simple argument of energetics is confirmed by the numerical results in the inset of Fig. 3c, and is qualitatively reproduced in the experimental data shown in the main figure. Note, however, that this trend is less pronounced than in the previous two cases. This is in part due to the small scale of the changes in the exact results, as well as the bias towards the thermal value of 1.5 at long times.

In these data, the addition of disorder and the addition of interactions lead to similar qualitative behaviour on the time scales that we have considered. However, at longer times there is a clear difference between the two cases: in the former, localization leads to the long-time persistence of the initial imbalance, whereas interactions generically lead to ergodic, thermalizing behaviour, resulting in the loss of this information in local observables. With the current devices we are unfortunately unable to distinguish these two regimes.

Spreading of correlations—light cone

In the previous sections, all the data correspond to some combination of local average magnetizations. Here, we look at the correlations between pairs of spatially separated spins, and how these correlations spread. Lieb and Robinson showed that for a local Hamiltonian, correlations can spread at most linearly in time and display a light-cone causality structure.75 For instance, for tight-binding spinless fermions, correlations spread at a speed set by the maximum group velocity, \(v=4J\).57

In Fig. 4 we compare ED, numerical trotterization and experimental results for the connected spin correlator defined in Eq. (6), for \({h}_{j}=U=0\). We measure correlations between the first and the \({j}^{\text{th}}\) spin after a quench from a charge density wave initial state. The ED results show a clear ballistic spreading of correlations, which is also qualitatively reproduced by the numerical trotterization and the IBM data. As with all previous results, the agreement between the numerical and IBM results is best at short times, but the IBM data are able to capture the point where the correlations reach the edge of the system. Furthermore, for this free model, there are only significant correlations along the light-cone and not within it, which is also approximately captured by the experimental data. There does, however, seem to be a slightly faster light-cone velocity in the experiment, which is most evident in the shift of the position of the peak on site 6. This indicates that there is an effective renormalization of the Hamiltonian parameters, particularly \(J\), due to the errors in the machine.

Fig. 4

Results for the connected spin correlator defined in Eq. (6). Data are computed using: a exact diagonalization, b numerical trotterization, c the IBM quantum device. Data are shown for \(N=6,{h}_{j}=0\) and \(U=0\), using a symmetric trotterization, and obtained on 12 March 2019.

Quantum Fisher information

Finally, we consider the quantum Fisher information defined in Eq. (7) starting from both the Néel and domain wall initial states, shown in Fig. 5a, b, respectively. In this section, we will consider the model of case (i), for which the quantum Fisher information has two important relations to entanglement, which we outline below. Note that the quantum Fisher information has a more general definition for mixed states, which reduces to our definition for pure states. We do not consider the more general definition since we are simulating unitary evolution from a pure quantum state and we are not able to differentiate between the unitary and non-unitary errors occurring in our circuits.

Fig. 5

The quantum Fisher information. As computed by ED, numerical trotterization and using the IBM quantum device for \({h}_{j}=0\) and \(U=0\) and a symmetric trotterization. Data are compared with the bipartite von Neumann entanglement entropy \({S}_{\text{vN}}\) with an equal left/right partition, scaled by \(a{N}^{-1}\) with \(a=32/5\), see main text. a Quench from the Néel initial state. b Quench from the domain wall initial state. Data were obtained on 12 March 2019.

For non-interacting models (as is the case for the Hamiltonian of case (i)), there is a direct relationship between the bipartite von Neumann entanglement entropy and the magnetization fluctuations.71,72,73 The former is defined by \({S}_{\text{vN}}=-\text{Tr}\,[{\rho }^{A}\mathrm{ln}{\rho }^{A}]\), where \({\rho }^{A}\) is the reduced density matrix for half of the system. The variance of the half chain magnetization is proportional to our definition of the QFI and we have the approximate relation

$${S}_{\text{vN}}\approx \frac{5}{32}{F}_{Q},$$
(8)

see ref. 73 for details of this cumulant expansion. In MBL systems the QFI—using instead the staggered magnetization for the operator \(\hat{O}\) in Eq. (7)—also appears to mimic the bipartite entanglement entropy and grows logarithmically after a quantum quench.76
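
As an illustration of this relation, the following is a minimal dense exact-diagonalization sketch (practical only for small \(N\); the parameter values and the choice of the Néel initial state are illustrative) that compares the half-chain von Neumann entropy with \((5/32){F}_{Q}\) for the XX chain.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def site_op(op, j, N):
    """Embed a single-site operator at site j of an N-site chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, op if k == j else I2)
    return out

N, J = 6, 1.0
H = sum(-J * (site_op(X, j, N) @ site_op(X, j + 1, N)
              + site_op(Y, j, N) @ site_op(Y, j + 1, N)) for j in range(N - 1))

psi0 = np.array([1.0 + 0j])                              # Neel state |ududud>
for j in range(N):
    psi0 = np.kron(psi0, [1, 0] if j % 2 == 0 else [0, 1])

s = np.array([1.0] * (N // 2) + [-1.0] * (N - N // 2))   # signs s_j of Eq. (7)
O = sum(s[j] * site_op(Z, j, N) for j in range(N))

def qfi(psi):
    """F_Q of Eq. (7) evaluated directly on the wavefunction."""
    return np.real(psi.conj() @ O @ O @ psi - (psi.conj() @ O @ psi) ** 2)

def half_chain_entropy(psi):
    """Von Neumann entropy of the left half, via a Schmidt decomposition."""
    svals = np.linalg.svd(psi.reshape(2 ** (N // 2), -1), compute_uv=False)
    p = svals[svals > 1e-12] ** 2
    return -np.sum(p * np.log(p))

for t in np.linspace(0.0, 2.5, 6):
    psi = expm(-1j * H * t) @ psi0
    print(f"Jt={t:.2f}  S_vN={half_chain_entropy(psi):.3f}  (5/32)F_Q={5 * qfi(psi) / 32:.3f}")
```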

The QFI is also a multi-partite entanglement witness,68,69,77 and in particular, if the state is separable then we have that \({F}_{Q}\le N\), which is known as the shot noise limit in quantum metrology.78,79,80 For a general entangled state, however, the QFI is bounded by \({F}_{Q}\le {N}^{2}\), and a value of \({F}_{Q}/N\ge m\) indicates at least \((m+1)\)-partite entanglement. Note that the QFI is sensitive to the choice of the operator \(\hat{O}\), and for example, if we choose \(\hat{O}\propto {\sum }_{j}{\hat{\sigma }}_{j}^{z}\), the total magnetization, then \({F}_{Q}=0\), since our models conserve the total magnetization.

Let us now consider the IBM results for the QFI, starting with a quench from a Néel initial state shown in Fig. 5a. We first note that the numerical results for the QFI (black) closely follow the bipartite von Neumann entanglement entropy (blue). The results from the IBM machine are also able to reproduce this behaviour quite accurately, characterized by linear growth with a maximum just after \(Jt=1\) due to finite-size effects. The fact that we measure \({F}_{Q}/N\,{>}\,1\) also implies entanglement in the state on the IBM quantum computer.

In Fig. 5b, we consider a quench from a domain wall. In this case both the numerical and experimental simulations have the same qualitative behaviour but are further off in absolute value, as compared with Fig. 5a. The QFI computed on the IBM machine once again reproduces the behaviour of the von Neumann entanglement entropy.

The entanglement entropy is generally a difficult quantity to measure. It typically requires some form of state tomography, which consists of a set of measurements in a number of different bases that grows exponentially with system size, rendering it impractical for large systems. Furthermore, even with low error rates, the resulting density matrix may be unphysical.45,81 The quantum Fisher information may provide an alternative in certain circumstances because it can be significantly easier to compute – for the definition that we consider we only need to measure in a single basis.

Quantifying the accuracy of the quantum computer

While in the previous sections we showed that the current IBM device can qualitatively reproduce physical behaviour, it is also important to develop practical quantitative measures of their accuracy. These measures should allow us to track the evolution of the quantum computers as they are developed and improved, as well as potentially providing further insight into the various sources of error that are present.

Going beyond the reported gate errors, one of the simplest things to measure is the violation of the Mermin inequality,82

$${M}_{\text{GHZ}}=\left|\left\langle \hat{X}\hat{Y}\hat{Y}\right\rangle +\left\langle \hat{Y}\hat{X}\hat{Y}\right\rangle +\left\langle \hat{Y}\hat{Y}\hat{X}\right\rangle -\left\langle \hat{X}\hat{X}\hat{X}\right\rangle \right|\le 2,$$
(9)

where \(\hat{X},\hat{Y}\) are the \(x\) and \(y\) Pauli operators, and we omit the consecutive site labels on the operators. This inequality should hold for any classical theory with local realism. If we consider the GHZ state \(\left|\psi \right\rangle =\frac{1}{\sqrt{2}}(\left|\,\uparrow \uparrow \uparrow \right\rangle -\left|\,\downarrow \downarrow \downarrow \right\rangle )\), then \({M}_{\text{GHZ}}=4\); the bound is maximally violated, and this violation can be used to demonstrate the "quantumness" of the machine. In Fig. 6a we show the values computed across four consecutive days for the first three qubits used for the \(N=6\) simulations. These data show that we are consistently able to violate the Mermin inequality.
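
As an illustration, a schematic Qiskit-style sketch of this test is given below (gate and method names follow Qiskit and may differ between versions; the qubit labelling and GHZ preparation sequence are illustrative). Each of the four Pauli strings in Eq. (9) is measured by rotating the relevant qubits into the \(z\)-basis before measurement.

```python
from qiskit import QuantumCircuit

def mermin_circuit(basis):                  # basis is a string such as "XYY"
    qc = QuantumCircuit(3, 3)
    qc.h(0); qc.cx(0, 1); qc.cx(1, 2)       # (|000> + |111>)/sqrt(2)
    qc.z(0)                                 # relative minus sign of the GHZ state
    for j, b in enumerate(basis):
        if b == "X":
            qc.h(j)                         # rotate X into the measured z-basis
        else:                               # b == "Y"
            qc.sdg(j); qc.h(j)              # rotate Y into the measured z-basis
    qc.measure(range(3), range(3))
    return qc

def pauli_expectation(counts):
    """Expectation value of the measured Pauli string from the returned counts."""
    shots = sum(counts.values())
    return sum(((-1) ** bits.count("1")) * c for bits, c in counts.items()) / shots

# M_GHZ = |<XYY> + <YXY> + <YYX> - <XXX>|, with each term estimated by running
# mermin_circuit("XYY"), ..., mermin_circuit("XXX") and applying pauli_expectation.
```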

Fig. 6

Quantifying the accuracy of the device. a Values of \({M}_{\text{GHZ}}\) defined in Eq. (9) computed on the IBM device for a GHZ state on three qubits. Values of \({M}_{\text{GHZ}}>2\) cannot be explained by a classical theory of local realism, see main text. b The percentage of measured states that are in the physical Hilbert space as a function of time after a quench from a domain wall. c Data for the computation of the quantity \(\sqrt{{\hat{U}}^{-1}(t)\hat{U}(t)}\), as performed on the IBM device, after a quench from a Néel state. Data were obtained for \(N=6\) across four consecutive days in 2019. Dashed lines indicate the average value for a randomly selected state.

As can be seen in the Methods, the fidelities of individual gates do not necessarily reflect the accuracy of a simulation across many qubits. It may, therefore, make sense to consider more practical measures of quality that more directly relate to the simulations we are performing. We use the physical conservation laws of the evolution to improve the accuracy of our simulations by throwing away measurements of unphysical states. The number of measurements that are kept/thrown away can also be taken as a quantitative measure of the effective accuracy of the machine, since for a perfect quantum computer we should find that all measured states are in the physical Hilbert space. In Fig. 6b, we show the percentage of measured states that are physical for four consecutive days, showing the variation in the effective quality of the device.

As an unbiased measure of the quality of our simulation we suggest computing the identity in the form \(\hat{I}=\sqrt{{\hat{U}}^{-1}(t)\hat{U}(t)}\). By implementing a circuit that performs the forward and then the backward time evolution, we can use the probability of returning to the initial state as a measure of the accuracy of the implementation of the unitary \(\hat{U}(t)\). If the implementation is non-unitary then we should expect decay with an increasing number of Trotter steps. We should also expect decay of this quantity if there are only unitary errors, since it takes the form of a Loschmidt echo, which measures the sensitivity of the system to perturbations. It does not rely on any special properties of \(\hat{U}(t)\), such as the presence and knowledge of conserved quantities. It therefore provides an unbiased measure of the quality of the implementation of \(\hat{U}(t)\) and of the quantum device.
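
A hypothetical sketch of this protocol: evolve forward with the Trotter circuit, append its inverse, and record how often the initial bitstring is recovered. Here `build_trotter` stands for a measurement-free version of the circuit construction sketched earlier; `compose`, `inverse` and `measure_all` follow recent Qiskit releases and may be named differently in older ones.

```python
def echo_circuit(build_trotter, N, dt, steps):
    fwd = build_trotter(N, dt, steps)       # forward evolution U(t), no measurement
    circ = fwd.compose(fwd.inverse())       # ... followed by U^{-1}(t)
    circ.measure_all()
    return circ

def return_probability(counts, initial_bitstring):
    """Fraction of shots returning the initial state; 1 for an ideal device."""
    return counts.get(initial_bitstring, 0) / sum(counts.values())
```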

We show the computation of the identity performed on the IBM machine across four consecutive days in Fig. 6c. On an ideal machine this quantity would be identically equal to 1 for all times. However, with each additional Trotter step—corresponding to the observed plateaux—the accuracy drops significantly. We also note that the behaviour observed in Fig. 6b, c is very similar, and both quantities reflect the accuracy of the simulations in the previous sections.

Discussion

Probably the most striking feature of our results from the IBM machines is the low quantitative accuracy when compared with exact numerics. Combined with the limited system sizes and time scales that we can reach, this highlights the current limitations of these quantum devices. Most importantly, it shows that whilst the number of qubits is now reaching limits beyond the capabilities of classical computers, the error rates and/or the isolation of these quantum computers are not yet sufficient for useful computations, at least for accurately simulating quantum dynamics. Due to the large array of possible error sources, pinning down the most damaging for our simulations is a difficult task, and is an important practical area for future research.

Despite this unfortunate conclusion, we are still able to access a range of qualitative physical behaviours demonstrating non-trivial simulations of quantum dynamics. We were able to compute a range of expectation values and two-point correlators, and observe behaviour associated with localization, many-body interactions, and the ballistic spreading of quantum correlations. We also observed the compensation of different energy costs due to on-site potentials and neighbouring site interactions and witnessed the generation of entanglement due to unitary evolution through the QFI.

The goal of using quantum computers for dynamical simulations is to be able to access systems intractable using classical algorithms. The limitations of the current devices that we have observed in our results demonstrate the need for improved quality of the machines, not simply for more qubits. In this type of simulation, the number of gates, and thus the real execution time of the quantum circuits, grows linearly with the system size and with the number of Trotter steps. From this we estimate that at least an order-of-magnitude improvement in some combination of the gate fidelities and/or T1/T2 times would be needed to get close to achieving this goal.

One of the biggest challenges facing the field of quantum computation is how to deal with errors. Although this is primarily an engineering issue to increase the quality, isolation and control of the devices, there is also the theoretical contribution of error correction methods.1,83,84 A particularly promising avenue for error correction is to use surface codes.85,86 One big advantage of these methods is the relatively modest gate fidelities required for them to work effectively. As the size and quality of the quantum computers increases, it is hoped that these error correction schemes will allow us to rapidly increase their scale, and with it their utility. In the meantime, there may also be more room for practical error mitigation schemes, such as the one that we have used, to get the most out of NISQ devices.

While we have considered a range of correlation functions and physical mechanisms, there is still much that can be learnt about the current quantum computers and how well they can simulate quantum dynamics for condensed matter systems. In particular, there are physical mechanisms beyond those that we have considered here, for instance, models with coupling to dynamical gauge fields,7,39,40,53 and the physics of confinement.87,88 In these settings there is also hope that interesting physics can be extracted from the short-time dynamics, which may thus be suited to the current machines. The combination of disorder and interactions, resulting in the many-body localized phase, is also currently a particularly active area of research.32,33,64,65 The investigation of the transition between MBL and ergodic dynamics may also benefit in the future from quantum computation, since it is notoriously difficult to study numerically due to the requirement of large systems and/or long-time simulations.89,90,91

In conclusion, digital quantum simulation is still in its infancy, and we have shown that it requires an order-of-magnitude improvement in fidelity and coherence before it can realistically outperform classical computers, at least when applied to dynamical problems of interest in condensed matter physics. However, while it is hard to predict the pace of technological progress, our results will serve as a useful benchmark for improvements in the foreseeable future; and in the long run they will provide a snapshot of capabilities at the beginning of a new quantum simulation era.

Methods

Implementation: trotterized evolution

For our global quench protocol, we first need to prepare the initial state of our system. Since the IBM quantum computers are initialized in the state \(\left|\,\uparrow \uparrow \,\cdots \ \right\rangle\) by default, both of our choices of initial states are tensor product states in the \(z\)-basis and thus can be created by applications of the Pauli \(X\) gate. Next, we need to implement the time evolution, which proceeds by three main steps that we will briefly outline here.

First, we discretize time, that is, we split the time evolution operator, \(\hat{U}(t)={e}^{-i\hat{H}t}\), into a sequence of discrete operators, i.e., \(\hat{U}(t)=\hat{U}(\Delta t)\cdots \hat{U}(\Delta t)\) with fixed \(\Delta t\). Each application of the discrete operator \(\hat{U}(\Delta t)\) is called a Trotter step. This is illustrated in Fig. 1a where we apply an increasing number of Trotter steps to reach later times.

In our simulations we fix \(\Delta t\), which sets the accuracy of our approximation. However, since we can only implement up to \(5\) Trotter steps, we add additional data points by varying \(\Delta t\) in the final Trotter step. More explicitly, consider Trotter steps \(M\) and \(M+1\); we can add additional data points by using the evolution operator

$${e}^{-i\hat{H}t}\approx \hat{U}{(\Delta t)}^{M}\hat{U}(\delta t),$$
(10)

where \(\delta t=\Delta t/r\) and \(r\) is the number of data points we want between \(t=M\Delta t\) and \(t=(M+1)\Delta t\). Since the accuracy of the Trotter decomposition \(\hat{U}(\delta t)\) is better than that of \(\hat{U}(\Delta t)\) (see below), these extra data points have errors intermediate between those of the \({M}^{\text{th}}\) and \({(M+1)}^{\text{th}}\) steps.
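
One possible bookkeeping for these extra points is sketched below, assuming (as an illustration) that a single shorter step covering the remainder is appended to the full Trotter steps.

```python
import numpy as np

def step_schedule(times, Dt):
    """For each target time, the number of full steps and the final delta_t."""
    schedule = []
    for t in times:
        M = int(np.floor(t / Dt + 1e-12))    # full Trotter steps of length Dt
        schedule.append((t, M, t - M * Dt))  # remainder handled by one shorter step
    return schedule

print(step_schedule(np.linspace(0.25, 1.5, 6), Dt=0.5))
```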

Second, we perform a Trotter decomposition of (trotterize) the unitary evolutions, that is, we approximately decompose the operator \(\hat{U}(\Delta t)\) into a sequence of unitaries that act on at most two neighbouring qubits. In the following we use either a basic or symmetric Trotter decomposition, shown in Fig. 1b, c, respectively. For \(m\) Trotter steps of length \(\Delta t\), the error of the symmetric decomposition is of order \({\mathcal{O}}(m{(\Delta t)}^{3})\), compared with \({\mathcal{O}}(m{(\Delta t)}^{2})\) for the basic decomposition. Note, however, that due to the symmetric structure, when we apply several Trotter steps we can combine several layers of gates. This means that we only need an extra two layers of gates compared with the basic decomposition regardless of the number of Trotter steps.

Third, we must efficiently decompose these two qubit operators into the gates that can be directly implemented on the quantum device, which are the CNOT gate and arbitrary single qubit unitaries. An efficient decomposition is given in ref. 92, which we summarise below. The result is that if \(U\,{\ne}\, 0\) then \(\hat{B}\) and \(\hat{C}\) (defined in the caption of Fig. 1) can be implemented using three CNOTs, and with \(U=0\) this can be reduced to only two CNOTs.

Trotter decomposition

In this paper, we use a Trotter decomposition (commonly known as a trotterization) of the unitary time evolution operator \(\hat{U}(t)={e}^{-i\hat{H}t}\). That is, we want to approximate these operators by a sequence of more easily implemented operators, namely those that act on at most two qubits.

As a starting point, consider a Hamiltonian of the form \(\hat{H}=\hat{A}+\hat{B}\), where \([\hat{A},\hat{B}]\,{\ne}\, 0\). Then, since these operators do not commute in general, we have that

$${e}^{-i\hat{H}t}={e}^{-i\hat{A}t}{e}^{-i\hat{B}t}+{\mathcal{O}}({t}^{2}),$$
(11)

which can be naturally extended to Hamiltonians that are sums of more than two terms. To use this fact for trotterized evolution we follow the two steps outlined in the main text. We now go through these steps in more detail for the Hamiltonian of Eq. (3):

$$\hat{H}=-J\sum _{j=1}^{N-1}\left({\hat{\sigma }}_{j}^{x}{\hat{\sigma }}_{j+1}^{x}+{\hat{\sigma }}_{j}^{y}{\hat{\sigma }}_{j+1}^{y}\right)+U\sum _{j=1}^{N-1}{\hat{\sigma }}_{j}^{z}{\hat{\sigma }}_{j+1}^{z}+\sum _{j=1}^{N}{h}_{j}{\hat{\sigma }}_{j}^{z},$$
(12)

which we rewrite here for convenience.

First, we discretize time and split the evolution operator \(\hat{U}(t)\) into a product of discrete evolution operators \(\hat{U}(\Delta t)\), i.e.,

$${e}^{-i\hat{H}t}={\left({e}^{-i\hat{H}\Delta t}\right)}^{M},$$
(13)

where \(\Delta t=t/M\) and \(M\) is the number of "Trotter steps". Since the Hamiltonian commutes with itself, Eq. (13) is exact. To perform time evolution, we will typically fix \(\Delta t\) and increase the number of Trotter steps \(M\) to reach later times.

The second step is to approximate each of these discrete time operators in a similar manner to Eq. (11). For notational simplicity, let us first define the operators

$${\hat{A}}_{j}={e}^{-i{h}_{j}{\hat{\sigma }}_{j}^{z}\Delta t},\qquad {\hat{B}}_{j}={e}^{-i(U{\hat{\sigma }}_{j}^{z}{\hat{\sigma }}_{j+1}^{z}-J({\hat{\sigma }}_{j}^{x}{\hat{\sigma }}_{j+1}^{x}+{\hat{\sigma }}_{j}^{y}{\hat{\sigma }}_{j+1}^{y}))\Delta t}.$$
(14)

Using these operators, we can make the approximation

$${e}^{-i\hat{H}\Delta t}=\left(\prod _{j}{\hat{A}}_{j}\right)\left(\prod _{j\,\text{even}\,}{\hat{B}}_{j}\right)\left(\prod _{j\,\text{odd}\,}{\hat{B}}_{j}\right)+{\mathcal{O}}({(\Delta t)}^{2}),$$
(15)

which corresponds to the schematic quantum circuit shown in Fig. 1b of the main text, and which we will refer to as the basic trotterization. If we wish to evolve to time \(t=M\Delta t\), then we find that the error is \({\mathcal{O}}(\Delta t)\), which is controlled by the size of the Trotter step \(\Delta t\). We can, therefore, improve the accuracy of the approximation by decreasing \(\Delta t\); however, this must be balanced against the cost of needing more Trotter steps, as explained in the main text.

To improve the accuracy of our simulations, we can use better approximations to the discrete evolution operators by way of higher-order Trotter decompositions.93 The leading error term in Eq. (11) is due to the non-zero commutator \([\hat{A},\hat{B}]\). By compensating for this error, we can increase the order of the leading error term.

The only higher-order decomposition that we will consider is the symmetrized Trotter step. Let us again start with the simple case of \(\hat{H}=\hat{A}+\hat{B}\). The symmetric decomposition would then be

$${e}^{-i\hat{H}t}={e}^{-i\frac{t}{2}\hat{A}}{e}^{-it\hat{B}}{e}^{-i\frac{t}{2}\hat{A}}+{\mathcal{O}}({t}^{3}).$$
(16)

The error in this decomposition is of higher-order due to the symmetry which ensures that \(U(-t)=U{(t)}^{\dagger }={U}^{-1}(t)\). This means that the even-order error terms vanish and the leading error is \(\sim \!{(\Delta t)}^{3}\). See ref. 93 for more details and for an iterative method for constructing higher-order decompositions.

For the Hamiltonian Eq. (3) in question, let us again make some definitions to simplify notation

$$\begin{array}{c}{\hat{A}}_{j}={e}^{-i{h}_{j}{\hat{\sigma }}_{j}^{z}\frac{\Delta t}{2}},\quad {\hat{B}}_{j}={e}^{-i(U{\hat{\sigma }}_{j}^{z}{\hat{\sigma }}_{j+1}^{z}-J({\hat{\sigma }}_{j}^{x}{\hat{\sigma }}_{j+1}^{x}+{\hat{\sigma }}_{j}^{y}{\hat{\sigma }}_{j+1}^{y}))\frac{\Delta t}{2}},\\ {\hat{C}}_{j}={e}^{-i(U{\hat{\sigma }}_{j}^{z}{\hat{\sigma }}_{j+1}^{z}-J({\hat{\sigma }}_{j}^{x}{\hat{\sigma }}_{j+1}^{x}+{\hat{\sigma }}_{j}^{y}{\hat{\sigma }}_{j+1}^{y}))\Delta t},\end{array}$$
(17)

then the symmetric Trotter decomposition is

$${e}^{-i\hat{H}\Delta t}=\left(\prod _{j}{\hat{A}}_{j}\right)\left(\prod _{j\,\text{even}\,}{\hat{B}}_{j}\right)\left(\prod _{j\,\text{odd}\,}{\hat{C}}_{j}\right)\left(\prod _{j\,\text{even}\,}{\hat{B}}_{j}\right)\left(\prod _{j}{\hat{A}}_{j}\right)+{\mathcal{O}}({(\Delta t)}^{3}),$$
(18)

which is shown schematically in Fig. 1c of the main text.
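
The following dense-matrix sketch (small \(N\) only, illustrative parameter values) checks the stated error scaling by comparing Eqs. (15) and (18) with the exact \({e}^{-i\hat{H}\Delta t}\); halving \(\Delta t\) should reduce the single-step error by roughly a factor of four for the basic decomposition and eight for the symmetric one.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def embed(ops, N):
    """Tensor product of `ops` (a dict site -> operator) with identities elsewhere."""
    out = np.array([[1.0 + 0j]])
    for j in range(N):
        out = np.kron(out, ops.get(j, I2))
    return out

N, J, U = 4, 1.0, 0.5
hj = np.full(N, 0.3)
field = sum(hj[j] * embed({j: Z}, N) for j in range(N))
def bond(j):
    return (U * embed({j: Z, j + 1: Z}, N)
            - J * (embed({j: X, j + 1: X}, N) + embed({j: Y, j + 1: Y}, N)))
H_even = sum(bond(j) for j in range(0, N - 1, 2))
H_odd = sum(bond(j) for j in range(1, N - 1, 2))
H = field + H_even + H_odd

def basic(dt):       # Eq. (15)
    return expm(-1j * dt * field) @ expm(-1j * dt * H_even) @ expm(-1j * dt * H_odd)

def symmetric(dt):   # Eq. (18)
    A, Be = expm(-1j * dt / 2 * field), expm(-1j * dt / 2 * H_even)
    return A @ Be @ expm(-1j * dt * H_odd) @ Be @ A

for dt in (0.2, 0.1):
    exact = expm(-1j * dt * H)
    print(f"dt={dt}: basic {np.linalg.norm(basic(dt) - exact, 2):.1e}, "
          f"symmetric {np.linalg.norm(symmetric(dt) - exact, 2):.1e}")
```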

Measurement

When making a measurement we will find the system in one of the many-body states \(\left|\lambda \right\rangle\) in this basis, e.g., \(\left|\,\uparrow \downarrow \downarrow \cdots \ \right\rangle\) or \(\left|\,\downarrow \uparrow \downarrow \cdots \ \right\rangle\). By performing multiple runs and measurements, we can approximate the probability of measuring each of the many-body states, that is, we can extract \({\left|{\alpha }_{\lambda }\right|}^{2}\), where \({\alpha }_{\lambda }\) is the probability amplitude for the state \(\left|\lambda \right\rangle\). These probabilities can then be used to construct the observables. To be more concrete, consider the expectation value of the operator \({\hat{\sigma }}_{j}^{z}\) on site \(j\). This is computed as follows,

$$\left\langle \psi (t)\right|{\hat{\sigma }}_{j}^{z}\left|\psi (t)\right\rangle =\sum _{\lambda :j=\uparrow }| {\alpha }_{\lambda }{| }^{2}-\sum _{\lambda ^{\prime} :j=\downarrow }| {\alpha }_{\lambda ^{\prime} }{| }^{2},$$
(19)

where the sums are over all states with the \({j}^{\text{th}}\) spin up or down, respectively, and \(| {\alpha }_{\lambda }{| }^{2}\) is the proportion of measurements for which we found the state \(\lambda\). In all the following experiments we will use 8192 measurements per data point, which means that the statistical error for these local correlators is \(\sim 0.01\), too small to be visible in our figures.
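
A minimal post-processing sketch of Eq. (19) from a dictionary of measured bitstring counts (assuming that '0' denotes spin-up and that the bit order matches the site order):

```python
import numpy as np

def sigma_z_expectation(counts, j):
    """<sigma^z_j> of Eq. (19) from raw counts."""
    shots = sum(counts.values())
    n_up = sum(c for bits, c in counts.items() if bits[j] == "0")
    return (2 * n_up - shots) / shots        # P(up) - P(down)

# Shot noise on a +/-1 observable is at most 1/sqrt(shots):
print(1 / np.sqrt(8192))                     # ~0.011, consistent with ~0.01 above
```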

Error mitigation: physical Hilbert space

As we noted in the previous section, due to the presence of conserved quantities in the Hamiltonian evolution, the Hilbert space of states splits into those that are physically allowed by the evolution and those that are not. In particular, the models we consider have conserved net magnetization \({S}_{z}={\sum }_{j}{\hat{\sigma }}_{j}^{z}\). This fact turns out to be advantageous, and allows us to perform rudimentary error mitigation – a term that we use to distinguish it from scalable error correction methods, since in this case we deal only with the lowest order errors. The idea is to simply disregard any measurements for states outside of the physical Hilbert space. Let us present a simplified argument for why this might be a good thing to do, in which we first assume that only bit-flip errors can occur.

A single bit-flip in the course of the evolution will take us outside of the Hilbert space, and let us denote the probability of this happening as \(\Delta\). However, the lowest order of errors within the physical Hilbert space is \({\Delta }^{2}\), i.e., we need at least two bit-flip errors to get back to the same total magnetization. Hence, by discarding counts outside of the physical Hilbert space we reduce the leading order error to \({\Delta }^{2}\). If the probability, \(\Delta\), is sufficiently small, then we can rely on these perturbative arguments. However, if the error rate is large enough, then multiple bit-flip errors can become significant and the error mitigation will be ineffective.
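
A sketch of this post-selection from a counts dictionary ('0' for spin-up, '1' for spin-down, as above):

```python
def constrain(counts, n_down_initial):
    """Keep only bitstrings whose number of down spins matches the initial state."""
    kept = {bits: c for bits, c in counts.items() if bits.count("1") == n_down_initial}
    retained_fraction = sum(kept.values()) / sum(counts.values())
    return kept, retained_fraction
```

The retained fraction is the quantity shown in Fig. 6b, and the kept counts are used (suitably renormalized) when evaluating observables such as Eq. (19).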

In Fig. 6b, we show the percentage of measurements that are within the physical Hilbert space, that is, the percentage of measurements that are kept. For the first data points, we are simply measuring the initial state, which has an average measurement fidelity per qubit of \(\sim\!\! 95 \%\) leading to a value of approximately \({(95 \% )}^{6}\approx 70 \%\). At long times the number of retained states stabilises and approaches a value that corresponds to the percentage of states in the total Hilbert space that are physical – in other words, the percentage probability that a randomly selected state is in the physical subspace. Once the retained fraction approaches this value, the errors are very large and the error mitigation is no longer effective.

Bit-flips are not the only type of errors that could occur. As an example, there are also phase errors, which do not necessarily change the net magnetization. However, we note that the constrained data does typically have improved accuracy, and so we use the restriction to the physical Hilbert space for all our subsequent data.

Quantum circuits

Here, we will go over some of the details necessary to implement the trotterized evolution operators on the IBM devices. We will only cover those elements of direct relevance to this paper and refer the reader to ref. 1 for an introduction to quantum circuits and quantum computation. We will first introduce the quantum gates that can be implemented on the IBM quantum computers, and then decompose the two qubit unitary operations that appear in our trotterized evolution operator in terms of the elementary one and two qubit gates.

A quantum circuit consists of an array of quantum channels – which represent the physical qubits – and a series of quantum gates that are applied to them. These quantum gates are unitary operators that can be applied to one or more of the qubits. The IBM quantum devices can implement the CNOT gate along with an arbitrary single qubit gate, parametrized by three phases. The combination of these gates forms a universal set, that is, any \(N\)-qubit gate can be implemented using a combination of these gates, and in principle an arbitrary quantum computation can be performed.

There is a collection of single qubit gates that are useful for writing quantum circuits. We list the most frequently used gates and how they are implemented on the IBM machines. We take the computational basis to be the tensor product of single qubit states in the \(z\)-basis, i.e., \(\{\left|\,\uparrow \right\rangle ,\left|\,\downarrow \right\rangle \}\). All matrix forms of the gates are given in this basis and all measurements are made in this \(z\)-basis. It is important to note that gates in a circuit are read from left to right, whereas the corresponding matrix product acts from right to left, i.e., the first gate applied in a circuit appears rightmost in the product of matrices.

Firstly, we have the Pauli matrices, which in the standard quantum information notation are

$$X=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\qquad Y=\begin{pmatrix}0&-i\\ i&0\end{pmatrix},\qquad Z=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}.$$
(20)

Secondly, we have the Hadamard and the \(S\) and \(T\) phase gates,

$$H=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix},\qquad S=\begin{pmatrix}1&0\\ 0&i\end{pmatrix},\qquad T=\begin{pmatrix}1&0\\ 0&{e}^{i\pi /4}\end{pmatrix}.$$
(21)

And finally, we have the \(X,Y\) and \(Z\) rotation gates,

$${R}_{x}(\theta )={e}^{-i\theta X/2}=\begin{pmatrix}\cos \frac{\theta }{2}&-i\sin \frac{\theta }{2}\\ -i\sin \frac{\theta }{2}&\cos \frac{\theta }{2}\end{pmatrix},\qquad {R}_{y}(\theta )={e}^{-i\theta Y/2}=\begin{pmatrix}\cos \frac{\theta }{2}&-\sin \frac{\theta }{2}\\ \sin \frac{\theta }{2}&\cos \frac{\theta }{2}\end{pmatrix},\qquad {R}_{z}(\theta )={e}^{-i\theta Z/2}=\begin{pmatrix}{e}^{-i\theta /2}&0\\ 0&{e}^{i\theta /2}\end{pmatrix},$$
(22)

which correspond to rotations of the qubit around the \(x,y\) and \(z\) axes respectively. All single qubit gates can be written as a product of these rotation gates, up to a phase. This phase is global and is not measurable and can therefore be omitted. In the IBM machine, all single qubit gates can be directly implemented using

$${U}_{3}(\theta ,\phi ,\lambda )=\begin{pmatrix}\cos \frac{\theta }{2}&-{e}^{i\lambda }\sin \frac{\theta }{2}\\ {e}^{i\phi }\sin \frac{\theta }{2}&{e}^{i(\phi +\lambda )}\cos \frac{\theta }{2}\end{pmatrix}.$$
(23)

For slightly less general gates the IBM computer implements either \({U}_{2}(\phi ,\lambda )={U}_{3}(\pi /2,\phi ,\lambda )\) or \({U}_{1}(\lambda )={U}_{3}(0,0,\lambda )\), which require fewer physical operations and less real time. Before running the circuits, we can combine any string of single qubit gates into a single one of these three gates, using the functions available in qiskit.42

The most important two qubit gate for our purposes is the CNOT gate

$$\,\text{CNOT}\,=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{pmatrix}.$$
(24)

This gate flips the second (target) qubit conditioned on the state of the first (control) qubit. It allows the two qubits to become entangled and, combined with general single qubit gates, forms a set capable of universal quantum computation, see ref. 1 for a proof. The CNOT is currently the only multi-qubit gate that can be directly implemented on the IBM quantum machines.

Also of interest to us is the reversed CNOT gate

$$\overline{\,\text{CNOT}\,}=\begin{pmatrix}1&0&0&0\\ 0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\end{pmatrix}=(H\otimes H)\ \,\text{CNOT}\,\ (H\otimes H),$$
(25)

where we differentiate the reversed CNOT from the CNOT because of the directionality of the IBM machines, i.e., only CNOTs in a given direction can be implemented along the qubit connections. If a CNOT is applied (programmatically) in the wrong direction, the above transformation using Hadamard gates will be applied implicitly by qiskit. Since the single qubit gate fidelities are an order of magnitude better than those of the CNOTs, this transformation is not costly, and these additional gates will often be incorporated into other strings of single qubit gates.
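
A quick numerical check of the Hadamard identity behind the reversed CNOT (an illustration of Eq. (25)):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT_12 = np.array([[1, 0, 0, 0],            # control = first qubit
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]])
CNOT_21 = np.array([[1, 0, 0, 0],            # control = second qubit
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]])
HH = np.kron(H, H)
print(np.allclose(HH @ CNOT_12 @ HH, CNOT_21))   # True
```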

Change of basis

When we make a measurement on the quantum machine it is with respect to a given basis, which we take to be the \(z\)-basis. However, we may choose to change the basis for several reasons such as: to measure different operators, to prepare an initial state, or to apply a gate which is more efficiently implemented in a different basis.

We will consider only local changes of basis, i.e. a change of basis for the individual qubits. While a general local change of basis can be implemented using the general single qubit gates above, the most frequently used will be those that change from the Z basis to the X or Y basis.

To change to the X basis, we use the Hadamard gate, \(H\). This implements the transformation

$$Z\to X,\qquad X\to Z,\qquad Y\to -Y.$$
(26)

Note that this mapping is its own inverse, which is a result of the Hadamard gate being both unitary and Hermitian.
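
This mapping is easily verified numerically (a small illustrative check):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1])
print(np.allclose(H @ Z @ H, X),      # Z -> X
      np.allclose(H @ X @ H, Z),      # X -> Z
      np.allclose(H @ Y @ H, -Y))     # Y -> -Y
```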

To change to the Y basis, we use a combination of the Hadamard and \(S\) gates. We can implement the basis change with the combination \(HSH\), which maps

$$Z\to Y,\qquad Y\to Z,\qquad X\to -X.$$
(27)

Note that this combination of gates is not its own inverse but instead is \(H{S}^{\dagger }H\).

Two qubit gates

The Trotter decomposition allows us to write the general unitary time evolution operator approximately as a product of single and two qubit unitary operators. We, therefore, want to find a way to write a general two qubit unitary in terms of the CNOT gate and single qubit gates that can be applied directly on the IBM devices. The optimal decomposition is found in ref. 92. We briefly review the main results of this paper that are of direct relevance to us.

The optimal decomposition uses the fact that a general matrix in \(U(4)\) can be decomposed as \(U=({A}_{1}\otimes {A}_{2})\cdot N(\alpha ,\beta ,\gamma )\cdot ({A}_{3}\otimes {A}_{4})\),94 where

$$N(\alpha ,\beta ,\gamma )=\exp \left[i\left(\alpha {\sigma }^{x}\otimes {\sigma }^{x}+\beta {\sigma }^{y}\otimes {\sigma }^{y}+\gamma {\sigma }^{z}\otimes {\sigma }^{z}\right)\right].$$
(28)

As a quantum circuit, this can be written as

(29)

This operator \(N(\alpha ,\beta ,\gamma )\) is of direct interest to us for quantum dynamics since it is already of the form required for our Trotter decomposition. This gate can be constructed using a minimum of three CNOTs. The optimal decomposition for \(N(\alpha ,\beta ,\gamma )\) is given by the quantum circuit

(30)

where \(\theta =\frac{\pi }{2}-2\gamma\), \(\phi =2\alpha -\frac{\pi }{2}\), and \(\lambda =\frac{\pi }{2}-2\beta\). Note that ref. 92 uses a different sign convention for the rotation gates. Despite the apparent asymmetry of the decomposition, this sequence of gates is symmetric with respect to swapping the two qubits.

For certain cases of our Hamiltonian, namely when \(U=0\), the \(N(\alpha ,\beta ,\gamma )\) gate is more general than we need. By restricting ourselves to \(N(\alpha ,0,\gamma )\) (plus single qubit basis changes) we can reduce the number of CNOTs required to two. This gives us access to matrices of the form

$$N(\alpha ,0,\gamma )=\exp [i(\alpha {\sigma }^{x}\otimes {\sigma }^{x}+\gamma {\sigma }^{z}\otimes {\sigma }^{z})].$$
(31)

We can proceed with the help of the so-called Magic Matrix92,94

(32)

Using this matrix we find \({{\mathcal{M}}}^{\dagger }N(\alpha ,0,\gamma ){\mathcal{M}}={e}^{i\gamma {\sigma }^{z}}\otimes {e}^{i\alpha {\sigma }^{z}}\), which in turn implies that \(N(\alpha ,0,\gamma )={\mathcal{M}}({e}^{i\gamma {\sigma }^{z}}\otimes {e}^{i\alpha {\sigma }^{z}}){{\mathcal{M}}}^{\dagger }\), since \({\mathcal{M}}\) is unitary. As a quantum circuit this can be written as

(33)

This gate can be further simplified by noting that a product of single qubit gates is itself a single qubit gate. Furthermore, \([S,{R}_{z}(\theta )]=0\) and \(H{R}_{z}(\theta )H={R}_{x}(\theta )\), which gives

(34)

where we have arbitrarily flipped the circuit with respect to the two qubits.
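
The relations in Eqs. (31)-(33) can also be checked numerically. In the sketch below (the test angles are arbitrary), the explicit magic matrix follows the convention of refs. 92,94 and may differ from the circuit in Eq. (32) by a reordering of basis states or a global phase; the check confirms that conjugating \(N(\alpha ,0,\gamma )\) with \({\mathcal{M}}\) yields a product of single qubit \(z\) rotations:

import numpy as np
from scipy.linalg import expm

# Conjugating N(alpha, 0, gamma) with the magic matrix gives single-qubit Z rotations.
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

def N(alpha, beta, gamma):
    return expm(1j * (alpha * np.kron(X, X)
                      + beta * np.kron(Y, Y)
                      + gamma * np.kron(Z, Z)))

M = np.array([[1,  1j, 0,   0],
              [0,  0,  1j,  1],
              [0,  0,  1j, -1],
              [1, -1j, 0,   0]]) / np.sqrt(2)   # magic matrix (convention of refs. 92,94)

alpha, gamma = 0.4, -0.9
lhs = M.conj().T @ N(alpha, 0.0, gamma) @ M
rhs = np.kron(expm(1j * gamma * Z), expm(1j * alpha * Z))  # e^{i gamma Z} (x) e^{i alpha Z}
assert np.allclose(lhs, rhs)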

Choosing the best qubits

In all our numerics we used between 6 and 10 of the qubits of the IBM machines, which is only a subset of the available 20 qubits. We therefore wish to find the best such subset, so that we get the most accurate results from the machine. We do this using a simple iterative algorithm, which we outline here. We note that "best" is a matter of definition, involving a balance between many different parameters. We define best to mean the set of qubits with the lowest average CNOT errors, while also imposing restrictions on the allowed measurement error and T2 times.

The algorithm consists of the following steps:

  1. Restrict the set of allowed qubits to \(\tilde{N}\) by discarding any qubit whose measurement error exceeds a measurement threshold or whose T2 time is below a given threshold.

  2. List all CNOTs that connect the allowed qubits.

  3. Set \(M=N-1\), where \(N\) is the number of qubits we require.

  4. Construct a restricted list of the \(M\) CNOTs with the lowest errors.

  5. From this restricted list of CNOTs, iteratively construct all chains that connect \(N\) qubits.

  6. If no such chain is found and \(M\,{<}\,\tilde{N}\), set \(M=M+1\) and repeat from step 4. If no chain is found and \(M=\tilde{N}\), add an extra qubit to the allowed list, increasing \(\tilde{N}\to \tilde{N}+1\) by raising the measurement threshold, and repeat from step 2.

  7. Once a set of possible chains is found, pick the one with the lowest average CNOT error.

We note that this algorithm is not necessarily efficient or scalable, but it works for the current size of the quantum computers; see refs. 95,96 for alternative approaches. For NISQ devices, we expect that we will similarly want to pick the best subset of qubits, rather than all of them, to achieve more accurate simulations and computations. An efficient algorithm for large systems is therefore likely to be important. A simplified sketch of the procedure used here is given below.
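
The following Python sketch implements a simplified version of this heuristic. The dictionaries of readout errors, T2 times and CNOT errors are hypothetical stand-ins for the calibration data reported for the IBM backends, and steps 3-6 are collapsed into a single loop that grows the restricted CNOT list until a connecting chain is found:

def best_chain(n_qubits, meas_err, t2, cnot_err,
               meas_threshold=0.10, t2_threshold=20.0, meas_step=0.02):
    """Find a chain of `n_qubits` connected qubits with the lowest average CNOT error.

    meas_err : dict {qubit: readout error}
    t2       : dict {qubit: T2 time (microseconds)}
    cnot_err : dict {(q1, q2): CNOT error}, treated as undirected connections
    """
    while meas_threshold <= 1.0:
        # Step 1: discard qubits with poor readout error or short T2.
        allowed = {q for q in meas_err
                   if meas_err[q] <= meas_threshold and t2[q] >= t2_threshold}
        # Step 2: all CNOT connections between allowed qubits, sorted by error.
        links = sorted((err, pair) for pair, err in cnot_err.items()
                       if set(pair) <= allowed)
        # Steps 3-6 (simplified): grow the restricted list until a chain exists.
        for m in range(n_qubits - 1, len(links) + 1):
            restricted = [pair for _, pair in links[:m]]
            chains = _all_chains(restricted, n_qubits)
            if chains:
                # Step 7: return the chain with the lowest average CNOT error.
                return min(chains, key=lambda c: _avg_error(c, cnot_err))
        # No chain found: relax the readout threshold and start again.
        meas_threshold += meas_step
    raise ValueError("no connected chain of the requested length was found")

def _all_chains(links, length):
    """All simple paths visiting `length` qubits that use only the given links."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    paths = [[q] for q in adj]
    for _ in range(length - 1):
        paths = [p + [q] for p in paths for q in adj[p[-1]] if q not in p]
    return paths

def _avg_error(chain, cnot_err):
    errs = [cnot_err.get((a, b), cnot_err.get((b, a)))
            for a, b in zip(chain, chain[1:])]
    return sum(errs) / len(errs)

# Example with made-up calibration numbers for a 4-qubit device:
# best_chain(3, meas_err={0: 0.03, 1: 0.02, 2: 0.08, 3: 0.04},
#               t2={0: 60.0, 1: 45.0, 2: 30.0, 3: 70.0},
#               cnot_err={(0, 1): 0.02, (1, 2): 0.05, (2, 3): 0.03, (0, 3): 0.04})
# -> [1, 0, 3], with average CNOT error 0.03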

Data across different days

One practical issue with using the current IBM machines is that they are recalibrated on an approximately daily basis. Due to the large errors inherent in the devices, this can have a large effect on the results of a simulation, and can even mean that a different set of qubits is the "best" on different days. This has the unfortunate consequence that the data obtained from the machine depend on the day on which they were taken. In this section we compare results across three days, demonstrating this variability. Thankfully, we also show that the conclusions drawn in the previous section remain valid independent of the day on which we performed the computations.

First, in Fig. 7a we show the constrained IBM data for the local density of the end spin (corresponding to that shown in Fig. 2b) obtained across the three days. This demonstrates the large fluctuations in our simulations that are due to the different calibrations of the device. For comparison, in Fig. 7b we show the results of two runs from the same day, separated by approximately 10 h. Unlike in Fig. 7a, the IBM device was not recalibrated between these runs. The difference between the runs (blue squares) is of the order of 5%, which is comparable to the measurement and statistical errors.

Fig. 7

Comparing data from different days. a Simulation of the magnetization on the end spin compared across three consecutive days. b Comparison of two separate runs for the local magnetization on the end site after a quench from a domain wall, with \(N=6\) and the Hamiltonian of case (i). The data sets were obtained on 6 May 2019 with approximately 10 h between runs. The blue squares show the difference between the runs. In both panels we impose the conservation laws; see main text.

Fortunately, the day-to-day fluctuations do not affect the qualitative behaviour that is simulated by the machine. For example, in Fig. 8 we compare the data for the spin correlator, corresponding to Fig. 4c of the main text. While some of the behaviour, such as the linear light-cone spreading at short times, is reproduced in all three subfigures, the quality of the simulation in Fig. 8a is clearly the best. For instance, it shows a much clearer linear spread of the correlations, and it is the only subfigure in which the correlations convincingly reach the boundary of the system.

Fig. 8

Data for the spin correlator computed across three consecutive days, corresponding to Fig. 4 of the main text. The subfigures are labelled by the dates on which the simulations were run.

We also consistently see the correct qualitative behaviour for the spread of particles, \({N}_{\text{half}}\), in Fig. 9, which shows the data corresponding to Fig. 3a. While the results differ quantitatively across the days, the qualitative physics is clear in all plots, confirming the robustness of the results presented above.

Fig. 9

Data computed across three consecutive days, corresponding to Fig. 3a. The date for each subfigure is shown at the top.

Finally, in Table 1 we show the different qubits that were chosen on the three consecutive days, along with the minimum, average and maximum values of the readout errors, CNOT errors and T2 decoherence times for the 6-qubit chain. These values were computed using the calibration data provided by IBM. The table shows that across the three-day period the optimal set of qubits can change, and that the corresponding values can fluctuate significantly.

Table 1 Values for the IBM machine on the three consecutive days of simulations.

While there is clear variation across the three consecutive days that we studied, we point out that the parameters provided by IBM, shown in Table 1, are not necessarily good indicators of the quality of the simulation. For instance, the numbers in the table might suggest that on 12 March 2019 the accuracy of the quantum computer was the worst of the three days, yet we find the opposite when considering the data from our simulations; see, e.g., Fig. 8. While it is true that the results obtained from the other IBM devices are generally worse, which also reflects their significantly larger gate errors, the small set of numbers in Table 1 does not appear to fully characterize the machine.