Introduction

The quantum simulation of quantum chemistry is one of the most anticipated applications of both near-term and fault-tolerant quantum computing. The idea to use quantum processors for simulating quantum systems dates back to Feynman1 and was later formalized by Lloyd,2 who, together with Abrams, also developed the first algorithms for simulating fermions.3 The idea to use such simulations to prepare ground states in quantum chemistry was proposed by Aspuru-Guzik et al.4

That original work simulated the quantum chemistry Hamiltonian in a Gaussian orbital basis. While Gaussian orbitals are compact for molecules, they lead to complex Hamiltonians. Initial approaches had gate complexity \({\cal{O}}(N^{10})\),5,6 and the current lowest scaling algorithm in that representation has gate complexity of roughly \(\tilde {\cal{O}}(N^4)\),7 where N is the number of Gaussian orbitals. Note that we use the notation \(\tilde {\cal{O}}( \cdot )\) to indicate an asymptotic upper bound suppressing polylogarithmic factors.

Recently, Babbush et al.,8 showed that using a plane wave basis restores structure to the Hamiltonian which enables more efficient algorithms. Currently, the two best algorithms simulating the plane wave Hamiltonian are one with \({\cal{O}}(N)\) spatial complexity and \({\cal{O}}(N^3)\) gate complexity (with small constant factors)9 and one with \({\cal{O}}(N\,{\mathrm{log}}\,N)\) spatial complexity and \(\tilde {\cal{O}}(N^2)\) gate complexity (with large constant factors),10 where N is the number of plane waves. The scaling of refs 9,10 assumes constant resolution and volume proportional to N. However, a more appropriate assumption when studying molecules is to take volume proportional to the number of electrons η, in which case the method of ref. 9 yields gate complexity \(\tilde {\cal{O}}(N^{10/3}{\mathrm{/}}\eta ^{1/3} + N^{8/3}{\mathrm{/}}\eta ^{2/3})\) and ref. 10 yields gate complexity \(\tilde {\cal{O}}(N^{8/3}{\mathrm{/}}\eta ^{2/3})\). The reason for taking volume proportional to η is that for condensed-phase systems (e.g., periodic materials) the electron density is independent of computational cell volume and for single molecules the wavefunction dies off exponentially in space outside of a volume scaling linearly in η.

While basis set discretization error is suppressed asymptotically as \({\cal{O}}(1{\mathrm{/}}N)\) regardless of whether N is the number of plane waves11,12 or Gaussians,13,14 there is a significant constant factor difference. Plane waves are the standard for treating periodic systems but one needs roughly a hundred times more plane waves than Gaussians8 to reach the accuracy needed to predict chemical reaction rates. Since requiring a hundred times more qubits is impractical in most contexts, this limits the applicability of these recent algorithms8,9,10,15,16 for molecules.

This work solves the plane wave resolution problem by introducing an algorithm with \({\cal{O}}(\eta \,{\mathrm{log}}\,N)\) spatial complexity and \(\tilde {\cal{O}}(N^{1/3}\eta ^{8/3})\) gate complexity. With this sublinear scaling in N, one can perform simulations with a huge number of plane waves at relatively low cost. Our approach is based on simulating a first-quantized momentum space representation of the potential from the rotating frame of the kinetic operator by using recently introduced interaction picture simulation techniques.10 While the actual implementations have little in common, our algorithm is conceptually dual to the interaction picture work of ref. 10 which simulates a second-quantized plane wave dual representation of the kinetic operator from the rotating frame of the potential. It is also possible to achieve sublinear scaling in basis size without the interaction picture technique; we briefly discuss how qubitization17 could be used to obtain \(\tilde {\cal{O}}(\eta ^{4/3}N^{2/3} + \eta ^{8/3}N^{1/3})\) scaling.

The reason for our greatly increased efficiency is that the complexity scales like the maximum possible energy representable in the basis. In second quantization, the basis includes states with up to η = N electrons, which can have very high energy, even though these states are not used in the simulation. In contrast, first quantization fixes the number of electrons at η. There is still some polynomial scaling with N because the increased resolution allows electrons to approach each other more closely, which raises the maximum potential energy. If we were to consider constant resolution (as in refs 8,9,10), our complexity would actually be polylogarithmic in N, representing an exponential speedup in basis size.

Results

Encoding quantum simulations of electronic structure in momentum space first quantization

We will represent our system of η particles in N orbitals using first quantization. Thus, we require η registers (one for each particle), each of size log N, indexing which orbital that particle occupies. Since the electronic wavefunction must be antisymmetric, our registers will encode the wavefunction as

$${\begin{array}{*{20}{l}}|\psi \rangle = \mathop {\sum}\limits_{p_\ell \in G} {\alpha _{p_1 \cdots p_\eta }} |p_1 \cdots p_i \cdots p_j \cdots p_\eta \rangle \\= - \mathop {\sum}\limits_{p_\ell \in G} {\alpha _{p_1 \cdots p_\eta }} |p_1 \cdots p_j \cdots p_i \cdots p_\eta \rangle \end{array}}$$
(1)

where G is a set of N spin-orbitals, so the summation goes over all subsets of the orbitals that contain η unique elements. The second line of the above equation conveys that, due to antisymmetry, swapping any two electron registers induces a phase of −1. We will specialize to plane wave orbitals in three dimensions and ignore the spin for simplicity, so \(G = [ - N^{1/3}/2,N^{1/3}/2]^3 \cap {\mathbb{Z}}^3\). Using plane waves,

$$\langle r_1 \cdots r_\eta |p_1 \cdots p_\eta \rangle \equiv \sqrt {\frac{1}{{{\mathrm{\Omega }}^\eta }}} \mathop {\prod}\limits_{j = 1}^\eta {e^{ - i\,k_{p_j} \cdot r_j}}$$
(2)

where \(r_j\) is the position of electron j in real space, Ω is the computational cell volume, and \(k_p = 2\pi p/{\mathrm{\Omega }}^{1/3}\) is the wavenumber of plane wave p.
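As a concrete illustration of this encoding, the following sketch (assuming NumPy; the values of N, η, and Ω are arbitrary illustrative choices, and the grid convention for an even number of points per dimension is only one possibility) lays out the η registers of ⌈log2 N⌉ qubits and the map from an integer frequency vector p to its wavenumber kp.

```python
# Sketch of the first-quantized encoding: eta registers of ceil(log2 N) qubits, each
# holding a plane wave index p in G.  N, eta, and Omega below are illustrative values.
import itertools
import math
import numpy as np

N = 4 ** 3        # number of plane waves (4 grid points per dimension)
eta = 2           # number of electrons
Omega = 8.0       # computational cell volume

points_per_dim = round(N ** (1 / 3))
# Integer frequency vectors; one convention for an even number of points per dimension.
G = [np.array(p) for p in
     itertools.product(range(-points_per_dim // 2, points_per_dim // 2), repeat=3)]

def k_p(p):
    """Wavenumber of plane wave p: k_p = 2*pi*p / Omega^(1/3)."""
    return 2 * np.pi * np.asarray(p, dtype=float) / Omega ** (1 / 3)

qubits_per_register = math.ceil(math.log2(N))
print(f"{eta} registers of {qubits_per_register} qubits "
      f"({eta * qubits_per_register} system qubits in total)")
print("k_p for p = (1, 0, 0):", k_p((1, 0, 0)))
```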

Unlike in second quantization where antisymmetry is enforced in the operators so that product states of qubits encode Slater determinants,18,19,20,21,22,23 the antisymmetrization indicated in the second line of Eq. (1) must be enforced explicitly in the wavefunction since the computational basis states in Eq. (2) are not antisymmetric. However, any initial state can be antisymmetrized with gate complexity \({\cal{O}}(\eta \,{\mathrm{log}}\,\eta \,{\mathrm{log}}\,N)\) using the techniques recently introduced in ref. 24. Evolution under the Hamiltonian will maintain antisymmetry provided that it exists in the initial state (a consequence of fermionic Hamiltonians commuting with the electron permutation operator).
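For intuition about the antisymmetrized state, the brute-force classical sketch below sums over all η! permutations with their signs; this is exponentially more expensive than the \({\cal{O}}(\eta \,{\mathrm{log}}\,\eta \,{\mathrm{log}}\,N)\) algorithm of ref. 24 and is only meant to show the target of that procedure.

```python
# Brute-force classical antisymmetrization over all eta! permutations (illustration
# only; the algorithm of ref. 24 achieves this with O(eta log eta log N) gates).
import itertools
import numpy as np

def antisymmetrize(occupied, num_orbitals):
    """Normalized antisymmetric state for a list of distinct occupied orbitals."""
    eta = len(occupied)
    psi = np.zeros(num_orbitals ** eta, dtype=complex)
    for perm in itertools.permutations(range(eta)):
        # Parity of the permutation determines the sign of this term.
        inversions = sum(1 for a in range(eta) for b in range(a + 1, eta)
                         if perm[a] > perm[b])
        index = 0
        for register in range(eta):   # flatten (p_1, ..., p_eta) into a single index
            index = index * num_orbitals + occupied[perm[register]]
        psi[index] += (-1) ** inversions
    return psi / np.linalg.norm(psi)

psi = antisymmetrize([0, 2], num_orbitals=4)
# Swapping the two electron registers flips the sign of every amplitude.
swapped = psi.reshape(4, 4).T.reshape(-1)
print(np.allclose(swapped, -psi))   # True
```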

The use of first quantization dates back to the earliest work in quantum simulation.2,3,25,26,27 Though less common for fermionic systems, several papers have analyzed chemistry simulations using first quantization of real space grids.28,29,30 Such grids are incompatible with a Galerkin formulation (the usual discretization strategy used in chemistry involving integrals over the basis) and require methods such as finite-difference discretization, which lack the variational bounds on basis error guaranteed by the Galerkin formulation. Real space grids also have different convergence properties; for example, ref. 30 finds that in order to maintain constant precision in the representation of certain states, the inverse grid spacing must sometimes scale exponentially in particle number.

Two previous papers31,32 have presented simulation algorithms within a Gaussian orbital basis at spatial complexity \({\cal{O}}(\eta \,{\mathrm{log}}\,N)\). These approaches do not use first quantization (they still enforce symmetry in the operators rather than in the wavefunction); instead, refs 31,32 simulate a block of fixed particle number in the second-quantized Hamiltonian known as the configuration interaction matrix. The more efficient of these two approaches has \(\tilde {\cal{O}}(\eta ^2N^3)\) gate complexity,32 so our \(\tilde {\cal{O}}(\eta ^{8/3}N^{1/3})\) gate complexity is a substantial improvement.

By integrating the plane wave basis functions with the Laplacian and Coulomb operators in the usual Galerkin formulation33 we obtain H = T + U + V such that

$$T = \mathop {\sum}\limits_{j = 1}^\eta {\mathop {\sum}\limits_{p \in G} {\frac{{\left\| {k_p} \right\|^2}}{2}} } |p\rangle \langle p|_j$$
(3)
$$U = - \frac{{4\pi }}{{\mathrm{\Omega }}}\mathop{\sum}\limits_{\ell =1}^L \mathop {\sum}\limits_{j = 1}^\eta\ \sum\limits_{\mathop{p,q \in G}\limits_{p \ne q}} \left( \zeta _\ell \frac{e^{i\,k_{q - p} \cdot R_\ell }}{\left\| {k_{p - q}} \right\|^2} \right) |p\rangle \langle q|_j$$
(4)
$$V = \frac{{2\pi }}{{\mathrm{\Omega}}}\sum\limits_{\substack{i,j = 1 \\ i \ne j}}^{\eta} \;\sum\limits_{p,q \in G} \;\sum\limits_{\substack{\nu \in G_0 \\ (p + \nu ) \in G \\ (q - \nu ) \in G}} \frac{1}{{\left\| {k_\nu } \right\|^{2}}}|p + \nu \rangle \langle p|_i \cdot |q - \nu \rangle \langle q|_j$$
(5)

where T is the kinetic operator, U is the external potential operator, and V is the two-body Coulomb operator. The set \(G_0\) is \(([ - N^{1/3},N^{1/3}]^3 \cap {\mathbb{Z}}^3)\backslash \{ (0,0,0)\}\), \(R_\ell\) are nuclear coordinates, \(\zeta _\ell\) are nuclear charges, L is the number of nuclei, and we use |q〉〈p|j as shorthand notation for

$$I_1 \otimes \cdots \otimes |q\rangle \langle p|_j \otimes \cdots \otimes I_\eta .$$
(6)

While this Hamiltonian corresponds to a cubic cell with periodic boundaries, our approach can be easily extended to different lattice geometries (including non-orthogonal unit cells) and systems of reduced periodicity.34 Note that we have chosen the frequencies in \(G_0\) to span twice the range of those in G in order to cover the maximum momenta that may be exchanged.
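To make Eqs. (3)-(5) concrete, the following sketch (assuming NumPy) builds the dense first-quantized T, U, and V for a toy system with η = 2 electrons, a single nucleus, and a deliberately tiny grid, and checks that the resulting Hamiltonian is Hermitian; the grid size, cell volume, and nuclear data are illustrative choices, not a realistic calculation.

```python
# Build dense first-quantized T, U, V (Eqs. (3)-(5)) for a toy system with eta = 2
# electrons, one nucleus, and a tiny grid, then check Hermiticity.  All numerical
# choices (grid, volume, nuclear data) are illustrative.
import itertools
import numpy as np

Omega = 8.0
nuclei = [(1.0, np.array([0.1, 0.2, 0.3]))]      # (charge zeta, position R)
half = 1                                          # G = {-1, 0, 1}^3, so N = 27
G = [np.array(p) for p in itertools.product(range(-half, half + 1), repeat=3)]
index = {tuple(p): a for a, p in enumerate(G)}
n_pw = len(G)

def k(p):
    return 2 * np.pi * np.asarray(p, float) / Omega ** (1 / 3)

# Kinetic operator, Eq. (3): diagonal in the momentum basis (single register).
T1 = np.diag([np.dot(k(p), k(p)) / 2 for p in G])

# External potential, Eq. (4), acting on a single register.
U1 = np.zeros((n_pw, n_pw), complex)
for (a, p), (b, q) in itertools.product(list(enumerate(G)), repeat=2):
    if a == b:
        continue
    for zeta, R in nuclei:
        U1[a, b] -= (4 * np.pi / Omega) * zeta * np.exp(1j * np.dot(k(q - p), R)) \
                    / np.dot(k(p - q), k(p - q))

I2 = np.eye(n_pw)
T = np.kron(T1, I2) + np.kron(I2, T1)
U = np.kron(U1, I2) + np.kron(I2, U1)

# Two-body Coulomb operator, Eq. (5), summed over the ordered pairs (1, 2) and (2, 1).
V = np.zeros((n_pw ** 2, n_pw ** 2), complex)
G0 = [np.array(v) for v in itertools.product(range(-2 * half, 2 * half + 1), repeat=3)
      if any(v)]
for nu in G0:
    w = (2 * np.pi / Omega) / np.dot(k(nu), k(nu))
    A = np.zeros((n_pw, n_pw))    # sum_p |p + nu><p|, restricted to G
    B = np.zeros((n_pw, n_pw))    # sum_q |q - nu><q|, restricted to G
    for a, p in enumerate(G):
        if tuple(p + nu) in index:
            A[index[tuple(p + nu)], a] = 1.0
        if tuple(p - nu) in index:
            B[index[tuple(p - nu)], a] = 1.0
    V += w * (np.kron(A, B) + np.kron(B, A))

H = T + U + V
print("Hermitian:", np.allclose(H, H.conj().T))   # expected: True
```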

Simulating chemistry in the interaction picture

Our scheme for simulation builds on the interaction picture approach introduced in ref. 10. This approach is useful for performing simulation of a Hamiltonian H = A + B where the norms of A and B differ significantly, so that \(\left\| A \right\| \gg \left\| B \right\|\). The idea is to perform the simulation in the interaction picture in the rotating frame of A so that the large norm of A does not enter the simulation complexity in the usual way.
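The rotating-frame idea can be illustrated numerically with small matrices. The sketch below (assuming NumPy and SciPy; the matrices are random stand-ins for T and U + V) checks that evolving under \(B_I(s) = e^{iAs}Be^{-iAs}\) and then applying \(e^{-iAt}\) reproduces \(e^{-i(A+B)t}\); it is only a first-order product-formula check of the identity underlying the method, not the LCU-based implementation of ref. 10.

```python
# Numerical illustration of the rotating-frame identity behind the interaction
# picture method (not the LCU implementation of ref. 10).  A plays the role of the
# large-norm term and B the small one; both are random stand-ins (NumPy/SciPy assumed).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
dim, t, steps = 4, 1.0, 2000
A = np.diag(rng.uniform(0, 50, dim))                  # large norm, like T
B = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
B = 0.1 * (B + B.conj().T)                            # small norm, like U + V

dt = t / steps
U_I = np.eye(dim, dtype=complex)
for j in range(steps):                                # first-order time-ordered product
    s = (j + 0.5) * dt
    B_I = expm(1j * A * s) @ B @ expm(-1j * A * s)    # B in the rotating frame of A
    U_I = expm(-1j * B_I * dt) @ U_I

# exp(-i(A+B)t) = exp(-iAt) * [time-ordered exp of -i * integral of B_I];
# the residual below shrinks as `steps` grows.
print(np.linalg.norm(expm(-1j * A * t) @ U_I - expm(-1j * (A + B) * t)))
```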

The principle in ref. 10 is similar to Hamiltonian simulation via a Taylor series,35 except that the expression used to approximate the evolution for time t is

$$e^{ - i(A + B)t} \approx {\sum\limits_{k = 0}^{K - 1}} {( - i)^{k}} {\int\nolimits_{0}^{t}} {\mathrm{d}}t_{1} {\int\nolimits_{t_1}^{t}} {\mathrm{d}}t_{2} \cdots {\int\nolimits_{t_{k - 1}}^{t}} {\mathrm{d}}t_{k} {\cal{I}}_{k}$$
(7)
$${\cal{I}}_k = e^{ - iA(t - t_k)}Be^{ - iA(t_k - t_{k - 1})}B \cdots e^{ - iA(t_2 - t_1)}Be^{ - iAt_1}.$$

The operation given by this expression can be implemented by using a linear combination of unitaries (LCU) approach.36 The operator B is expressed as a linear combination of unitaries and the time is discretized, so Eq. (7) is a linear combination of unitaries which can be implemented using a control register and oblivious amplitude amplification.37 For a short time, the cutoff K can be chosen logarithmic in the inverse error. To implement evolution for long times, the time is broken up into a number of time segments of length τ, and this expression is used on each of those segments.

The overall complexity depends on the value of λ, which is the sum of the weights of the unitaries when expressing B as a sum of unitaries. To simulate within error ϵ the number of segments used is \({\cal{O}}(\lambda t)\), and \(K = {\cal{O}}({\mathrm{log}}(\lambda t{\mathrm{/}}\epsilon ){\mathrm{/log}}\,{\mathrm{log}}(\lambda t{\mathrm{/}}\epsilon ))\). The complexity in terms of LCU applications of B and evolutions \(e^{ - iA\tau }\) is therefore

$${\cal{O}}\left( {\lambda t\frac{{{\mathrm{log}}(\lambda t/\epsilon )}}{{{\mathrm{log}}\,{\mathrm{log}}(\lambda t/\epsilon )}}} \right).$$
(8)

There is also a multiplicative factor of \({\mathrm{log}}(t\left\| A \right\|/\epsilon \lambda )\) for the gate complexity, which originates from the complexity of preparing the ancilla states used for the time. This result is given in Lemma 6 of ref. 10. To interpret the result as given in ref. 10, note that the ‘HAM−T’ oracle mentioned in that work includes the evolution under A. That is why the complexity quoted there for the number of applications of \(e^{ - iA\tau }\) does not include a logarithmic factor.

In quantum chemistry one often decomposes the Hamiltonian into three components H = T + U + V, and it is natural to group U and V together, because they usually commute with each other but not with T. The work of ref. 10 focused on the simulation of chemistry in second quantization where \(\left\| {U + V} \right\| = {\cal{O}}(N^{7/3}/\Omega ^{1/3})\) and \(\left\| T \right\| = {\cal{O}}(N^{5/3}/\Omega ^{2/3})\), so \(\left\| {U + V} \right\| \gg \left\| T \right\|\). However, for first-quantized momentum space we will observe the reverse trend that \(\left\| {U + V} \right\| \ll \left\| T \right\|\) when \(N \gg \eta\).

We therefore choose that A = T and B = U + V, and need to express the potential as a linear combination of unitaries in momentum space,

$$B = U + V = \mathop {\sum}\limits_{s = 1}^S {w_s} H_s{\mkern 1mu} ,\qquad \lambda = \mathop {\sum}\limits_{s = 1}^S {w_s} {\mkern 1mu} ,$$
(9)

where the \(w_s\) are real, non-negative scalars and the \(H_s\) are unitary operators, with any phases included in the \(H_s\). A subtlety here is that when expressing U and V as a sum of unitaries we need to account for cases where addition or subtraction of ν would give values outside G. To account for this, we can express U and V as

$$U = \mathop {\sum}\limits_{\nu \in G_0} {\mathop {\sum}\limits_{\ell = 1}^L {\frac{{2\pi \zeta _\ell }}{{{\mathrm{\Omega }}\left\| {k_\nu } \right\|^2}}} } \mathop {\sum}\limits_{j = 1}^\eta {\mathop {\sum}\limits_{x \in \{ 0,1\} } {\left( { - e^{ - i\,k_\nu \cdot R_\ell }\mathop {\sum}\limits_{p \in G} {( - 1)^{x[(p - \nu ) \notin G]}} |p - \nu \rangle \langle p|_j} \right)} }$$
(10)
$$V = \mathop {\sum}\limits_{\nu \in G_0} \frac{\pi }{{\mathrm{\Omega }}\left\| {k_\nu } \right\|^2} \sum\limits_{\mathop{i,j = 1}\limits_{i \ne j}}^\eta \sum\limits_{x \in \{ 0,1\} } \left(\sum\limits_{p,q \in G} {( - 1)} ^{x([(p + \nu ) \notin G] \vee [(q - \nu ) \notin G])}|p + \nu \rangle \langle p|_i \cdot |q - \nu \rangle \langle q|_j \right),$$
(11)

and it is apparent that the parts in between the large parentheses above are unitary, so we take them to be the operators \(H_s\) in Eq. (9). In Eq. (10) we use the convention that Booleans correspond to 0 for false and 1 for true. This modification ensures that there is no contribution to the sum from parts where the additions or subtractions would result in values outside G. For example, for U, if (p − ν) is not in G, then the value of \([(p - \nu ) \notin G]\) is 1. This means that we have

$$\mathop {\sum}\limits_{x \in \{ 0,1\} } {( - 1)^x} = 1 - 1 = 0.$$
(12)

Thus, we see that λ is asymptotically equal to \(\eta ^2\) times

$$\frac{1}{{\mathrm{\Omega }}} {\sum\limits_{\nu \in G_0}} {\frac{{{\mathrm{\Omega }}^{2/3}}}{{\left\| \nu \right\|^2}}} \le \frac{1}{{\mathrm{\Omega }}}{\int\nolimits_0^{2\pi }} {\mathrm{d}}\phi \,{\int\nolimits_0^\pi } {\mathrm{d}}\theta \,{\int\nolimits_0^{N^{1/3}}} {\mathrm{d}}r\,\frac{{{\mathrm{\Omega }}^{2/3}}}{{r^2}}r^2\,{\mathrm{sin}}\,\theta$$
$$= {\cal{O}}\left( {N^{1/3}/{\mathrm{\Omega }}^{1/3}} \right) = {\cal{O}}\left( {N^{1/3}/\eta ^{1/3}} \right)$$
(13)

where in the last line we use \({\mathrm{\Omega }} \propto \eta\), which is typical for molecules.8 From this, we find that \(\lambda = {\cal{O}}(\eta ^{5/3}N^{1/3})\).
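A quick classical check of the scaling in Eq. (13) is possible: the sketch below (assuming NumPy; Ω = 1 is an arbitrary choice) evaluates \(\mathop {\sum}\nolimits_{\nu \in G_0} 1/\left\| {k_\nu } \right\|^2\) directly and confirms that it grows as \(N^{1/3}\), which combined with the \(\eta ^2\) prefactor and Ω ∝ η gives \(\lambda = {\cal{O}}(\eta ^{5/3}N^{1/3})\).

```python
# Direct evaluation of the lattice sum bounded in Eq. (13): the sum over G_0 of
# 1/||k_nu||^2 grows as N^(1/3) (Omega = 1 here is an arbitrary choice).
import itertools
import numpy as np

def lattice_sum(N, Omega=1.0):
    half = round(N ** (1 / 3))       # G_0 spans twice the range of G
    total = 0.0
    for nu in itertools.product(range(-half, half + 1), repeat=3):
        if any(nu):
            k_sq = (2 * np.pi / Omega ** (1 / 3)) ** 2 * sum(x * x for x in nu)
            total += 1.0 / k_sq
    return total

for N in (2 ** 9, 2 ** 12, 2 ** 15):
    print(N, lattice_sum(N) / N ** (1 / 3))   # ratio is roughly constant as N grows
```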

The scaling obtained in Eq. (13) corresponds to the maximum possible value of U + V. Changing the orbital basis does not change the maximum possible energy. If one considers the dual basis to the plane waves, then the orbitals are spatially localized with a minimum separation proportional to \((\Omega /N)^{1/3}\).8 A minimum separation between electrons proportional to \((\Omega /N)^{1/3}\) gives a potential energy scaling as \((N/\Omega )^{1/3}\), as given in Eq. (13). In contrast, the kinetic energy T has worse scaling with N because the maximum speed scales as \((N/\Omega )^{1/3}\) (the inverse of the separation), which is squared to give a kinetic energy proportional to \((N/\Omega )^{2/3}\).8

Performing simulation in the kinetic frame

To implement our algorithm we need to realize \(e^{ - iT\tau }\), as well as (U + V)/λ via a linear combination of unitaries. Using Eq. (3) we express \(e^{ - iT\tau }\) as

$$\mathop {\sum}\limits_{p_\ell \in G} {{\mathrm{exp}}} \left[ { - \frac{{i\tau }}{2}\mathop {\sum}\limits_{j = 1}^\eta {\left\| {k_{p_j}} \right\|^2} } \right]|p_1\rangle \langle p_1|_1 \cdots |p_\eta \rangle \langle p_\eta |_\eta .$$
(14)

Therefore, in order to apply this operator, we just need to iterate through each of the η electron registers to calculate the sum of \(\left\| {k_{p_j}} \right\|^2\), then apply a phase rotation according to that result. The complexity of calculating the square η times is \({\cal{O}}(\eta \,{\mathrm{log}}^2\,N)\) (assuming we are using an elementary multiplication algorithm). The complexity of the controlled rotations is \({\cal{O}}({\mathrm{log}}(\eta N))\), though there will be an additional logarithmic factor if we consider complexity in terms of T gates for circuit synthesis.
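The classical arithmetic behind this step is simple; the sketch below (assuming NumPy, with illustrative momenta, τ, and Ω) makes one pass over the η registers, accumulates \(\mathop {\sum}\nolimits_j \left\| {k_{p_j}} \right\|^2\), and returns the corresponding phase from Eq. (14).

```python
# Classical arithmetic behind exp(-i T tau), Eq. (14): one pass over the eta registers
# accumulating ||k_{p_j}||^2, followed by a single phase.  Inputs are illustrative.
import numpy as np

def kinetic_phase(momenta, tau, Omega):
    """Phase applied to |p_1 ... p_eta> by exp(-i T tau)."""
    total = 0.0
    for p in momenta:                 # iterate through the eta electron registers
        k = 2 * np.pi * np.asarray(p, float) / Omega ** (1 / 3)
        total += np.dot(k, k)         # the squaring dominates the gate cost
    return np.exp(-1j * tau * total / 2)

print(kinetic_phase([(1, 0, 0), (0, -2, 1)], tau=0.01, Omega=8.0))
```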

To apply the U + V operator we will need a select operation and a prepare operation. We use one qubit which selects between performing U and V. For V (the two-electron potential) the select LCU oracle will be

$${\textsc{select}}|0\rangle |i\rangle |j\rangle |\nu \rangle |p_1\rangle _1 \cdots |p_i\rangle _i \cdots |p_j\rangle _j \cdots |p_\eta \rangle _\eta \mapsto$$
$$|0\rangle |i\rangle |j\rangle |\nu \rangle |p_1\rangle _1 \cdots |p_i + \nu \rangle _i \cdots |p_j - \nu \rangle _j \cdots |p_\eta \rangle _\eta .$$
(15)

This operation has complexity \({\cal{O}}(\eta \,{\mathrm{log}}\,N)\), because we can iterate through each of the η electron registers checking if the register number is equal to i or j, and if it is then adding ν (for i) or subtracting ν (for j). For U (the nuclear term), we need to apply

$${\textsc{select}}|1\rangle |\ell \rangle |j\rangle |\nu \rangle |p_1\rangle _1 \cdots |p_j\rangle _j \cdots |p_\eta \rangle _\eta \mapsto$$
$$- e^{ - ik_\nu \cdot R_\ell }|1\rangle |\ell \rangle |j\rangle |\nu \rangle |p_1\rangle _1 \cdots |p_j - \nu \rangle _j \cdots |p_\eta \rangle _\eta .$$
(16)

We again need to iterate through the registers, and subtract ν if the register number is equal to j, which gives complexity \({\cal{O}}(\eta \,{\mathrm{log}}\,N)\). The register |i〉 is replaced with \(|\ell \rangle\), and we need to apply a phase factor \(e^{ - ik_\nu \cdot R_\ell }\). This phase factor can be obtained by first computing the dot product \(k_\nu \cdot R_\ell\), which has complexity \({\cal{O}}({\mathrm{log}}\,N\,{\mathrm{log}}(1/\delta _R))\), where δR is the relative precision with which the positions of the nuclei are specified. For L nuclei (note that L ≤ η), we will have an additional complexity of \({\cal{O}}(L\,{\mathrm{log}}(1/\delta _R))\) in order to access a classical database for the positions of the nuclei \(R_\ell\). Then, applying the controlled rotation has complexity \({\cal{O}}({\mathrm{log}}\,N + {\mathrm{log}}(1/\delta _R))\).

In order to take account of the modification involving x that is referred to in Eq. (12), we would have an additional control qubit for x which would be prepared in an equal superposition. When doing the additions and subtractions, one would check if they give values outside G and perform a Z operation on that ancilla if any of the results were outside G.
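The following sketch emulates the action of select for the two-body term on classical basis-state labels, including the sign flip controlled by the x ancilla when a shifted momentum leaves G; it mirrors Eqs. (15) and (12) but is not the quantum circuit itself, and the register labels and grid bounds are illustrative.

```python
# Classical emulation of the action of select for the two-body term (Eq. (15)),
# including the (-1)^x sign when a shifted momentum falls outside G, so that summing
# over x in {0, 1} cancels those contributions (cf. Eq. (12)).  Labels are illustrative.
def in_G(p, half):
    return all(-half <= x <= half for x in p)

def select_V(momenta, i, j, nu, x, half):
    """Return (sign, new labels) after |p_i> -> |p_i + nu>, |p_j> -> |p_j - nu>."""
    new = [list(p) for p in momenta]
    new[i] = [a + b for a, b in zip(new[i], nu)]
    new[j] = [a - b for a, b in zip(new[j], nu)]
    out_of_range = not (in_G(new[i], half) and in_G(new[j], half))
    sign = -1 if (x == 1 and out_of_range) else 1
    return sign, new

momenta = [(1, 0, 0), (-1, 1, 0)]
for x in (0, 1):
    print(select_V(momenta, i=0, j=1, nu=(1, 0, 0), x=x, half=1))
```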

Let δ be the allowable error in the prepare and select operations. The number of times these operations need to be performed is \(\tilde {\cal{O}}(\lambda t)\), so to obtain total error no greater than \(\epsilon\) we can take \({\mathrm{log}}(1/\delta ) = {\cal{O}}\left( {{\mathrm{log}}\left( {\lambda t/\epsilon } \right)} \right)\). Since λ is polynomial in η and N, we have \({\mathrm{log}}(1/\delta ) = {\cal{O}}\left( {{\mathrm{log}}\left( {\eta Nt/\epsilon } \right)} \right)\). The error in the implementation of U/λ due to the error in the positions of the nuclei is \({\cal{O}}(\delta _RN^{1/3}Z/\eta )\), where Z is the total nuclear charge. Since the total nuclear charge equals the number of electrons (the total charge is zero), Z and η cancel. Then we also obtain \({\mathrm{log}}(1/\delta _R) = {\cal{O}}\left( {{\mathrm{log}}\left( {\eta Nt/\epsilon } \right)} \right)\).

The prepare operation must act as

$$\begin{array}{l}{\textsc{prepare}}|0\rangle ^{ \otimes ({\mathrm{log}}N + 2{\mathrm{log}}\eta + 1)} \\\mapsto \left( {|0\rangle \mathop {\sum}\limits_{i,j = 1}^\eta {\mathop {\sum}\limits_{\nu \in G_0} {\sqrt {\frac{{2\pi }}{{\lambda {\mathrm{\Omega }}\left\| {k_\nu } \right\|^2}}} } } |\nu \rangle |i\rangle |j\rangle } \right. \left. \,+ {|1\rangle \mathop {\sum}\limits_{j = 1}^\eta {\mathop {\sum}\limits_{\ell = 1}^L {\mathop {\sum}\limits_{\nu \in G_0} {\sqrt {\frac{{4\pi \zeta _\ell }}{{\lambda {\mathrm{\Omega }}\left\| {k_\nu } \right\|^2}}} } } } |\nu \rangle |\ell \rangle |j\rangle } \right).\end{array}$$
(17)

This state preparation can be performed by initially rotating the first qubit to give the correct weighting between the U and V terms. We prepare the register |j〉 in an equal superposition. If the first qubit is zero (for the V component) we also prepare the penultimate register in an equal superposition over |i〉. We do not need to explicitly eliminate the case i = j, because in that case the operation performed is the identity and therefore has no effect on the evolution. Preparing an equal superposition over η values has complexity \({\cal{O}}({\mathrm{log}}\,\eta )\).

In the case that the first qubit is one (for the U component), we need to prepare the penultimate register in a superposition over \(|\ell \rangle\) with weightings \(\sqrt {\zeta _\ell }\). The nuclear charges \(\zeta _\ell\) will be given by a classical database with complexity \({\cal{O}}(L)\). To accomplish this one can use the QROM and subsampling strategies discussed in refs 7,9,38. Again recall that L ≤ η. In fact, usually L ≪ η, and the equality is only saturated for systems consisting entirely of hydrogen atoms. For a material, in practice there will be a limited number of nuclear charges with nuclei in a regular array, so this complexity will instead be logarithmic in L. Similarly, for the select operation, a regular array of nuclei will mean that the complexity of applying the phase factor \(e^{ - ik_\nu \cdot R_\ell }\) is logarithmic in L.

The key difficulty in implementing prepare is realizing the superposition over ν with weightings \(1/\left\| {k_\nu } \right\|\). That is, we aim to prepare a state proportional to

$$\mathop {\sum}\limits_{\nu \in G_0} {\frac{1}{{\left\| {k_\nu } \right\|}}} |\nu \rangle .$$
(18)

We describe this procedure in the next section. If there is a regular array of nuclei then the overall complexity obtained is \({\cal{O}}({\mathrm{log}}(\eta Nt/\epsilon ){\mathrm{log}}\,N)\), where we use the fact that L < η. If a full classical database for the nuclei is required, then the complexity will have an additional factor of \({\cal{O}}(L\,{\mathrm{log}}(\eta Nt/\epsilon ))\).

Between implementing \(e^{ - iT\tau }\), select, and prepare, the dominant cost is \({\cal{O}}(\eta \,{\mathrm{log}}^2N)\) for implementing \(e^{ - iT\tau }\). There is also a cost of \({\cal{O}}({\mathrm{log}}(\eta Nt/\epsilon ){\mathrm{log}}\,N)\) for computing \(k_\nu \cdot R_\ell\), but in practice it should be smaller. The factor of \({\mathrm{log}}(t\left\| A \right\|/\epsilon \lambda )\) from ref. 10 will also be smaller. These are the costs of a single segment, and the number of segments is given by Eq. (8) as \(\tilde {\cal{O}}(\lambda t)\), with \(\lambda = {\cal{O}}(\eta ^{5/3}N^{1/3})\). Thus, the total complexity is \(\tilde {\cal{O}}(\eta ^{8/3}N^{1/3}t)\).

Preparing the momentum state

In this section we develop a surprisingly efficient algorithm for preparing the state in Eq. (18). Since this step would otherwise be the bottleneck of our algorithm, our implementation is crucial for the overall scaling. The general approach is to use a series of larger and larger nested cubes, each of which is larger than the previous by a factor of 2. The index μ controls which cube we consider. For each μ we prepare a set of ν values in that cube. We initially prepare a superposition state

$$\frac{1}{{\sqrt {2^{n + 1} - 4} }}\mathop {\sum}\limits_{\mu = 2}^n {\sqrt {2^\mu } } |\mu \rangle$$
(19)

which ensures that we obtain the correct weighting for each cube. This state may be prepared with complexity \({\cal{O}}(n)\), which is low cost because n is logarithmic in N. The overall preparation will be efficient since the value of \(1/\left\| {k_\nu } \right\|\) does not vary by a large amount within each cube, so the amplitude for success is large. The variation of \(1/\left\| {k_\nu } \right\|\) between cubes is accounted for by the weighting in the initial superposition over μ.

To simplify the description of the state preparation, we assume that the representation of the integers for ν uses sign bits. The sign bits will need to be taken account of in the addition circuits. They also need to be taken account of in the preparation, because there are two distinct combinations that correspond to zero. If each of νx, νy, and νz is represented by n bits, then each will give numbers from \(-(2^{n - 1} - 1)\) to \(2^{n - 1} - 1\). That is, we have \(N^{1/3} = 2^{n - 1} - 1\). Controlled by μ we perform Hadamards on μ of the qubits representing νx, νy, νz to represent the values from \(-(2^{\mu - 1} - 1)\) to \(2^{\mu - 1} - 1\).

As mentioned above, due to the representation of the integers the number zero is represented twice, with a plus sign and a minus sign. To ensure that all numbers have the same weighting at this stage, we will flag a minus zero as a failure. The total number of combinations before flagging the failure is \(2^{3\mu }\), so the squared amplitude is the inverse of this. Therefore, the state at this stage is

$$\frac{1}{{\sqrt {2^{n + 1} - 4} }}\mathop {\sum}\limits_{\mu = 2}^n {\mathop {\sum}\limits_{\nu _x,\nu _y,\nu _z = - (2^{\mu - 1} - 1)}^{2^{\mu - 1} - 1} {2^{ - \mu }} } |\mu \rangle |\nu _x\rangle |\nu _y\rangle |\nu _z\rangle .$$
(20)

Next, we test whether all of νx, νy, νz are smaller than \(2^{\mu - 2}\) in absolute value. If they are, then the point is inside the box for the next lower value of μ, and we flag failure on an ancilla qubit. Note that for μ = 2 this means that we test whether ν = 0, which we need to omit. This requires testing whether three bits (one from each of νx, νy, νz) are all zero. The three bits that are tested depend on μ, so the complexity is \({\cal{O}}(n)\) (due to the need to check all 3n qubits). The state excluding the failures can then be given as

$$\frac{1}{{\sqrt {2^{n + 1} - 4} }}\mathop {\sum}\limits_{\mu = 2}^n {\mathop {\sum}\limits_{\nu \in B_\mu } {\frac{1}{{2^\mu }}} } |\mu \rangle |\nu _x\rangle |\nu _y\rangle |\nu _z\rangle ,$$
(21)

where \(B_\mu\) (for box μ) is the set of ν such that the absolute values of νx, νy, νz are less than \(2^{\mu - 1}\), but it is not the case that they are all less than \(2^{\mu - 2}\). That is,

$$B_\mu = \left\{ {\nu \,|\,(0 \le |\nu _x| < 2^{\mu - 1}) \wedge (0 \le |\nu _y| < 2^{\mu - 1}) \wedge (0 \le |\nu _z| < 2^{\mu - 1}) \wedge \left( {(|\nu _x| \ge 2^{\mu - 2}) \vee (|\nu _y| \ge 2^{\mu - 2}) \vee (|\nu _z| \ge 2^{\mu - 2})} \right)} \right\}.$$
(22)
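As a sanity check on this decomposition, the short sketch below (a classical enumeration for a small, arbitrary n) verifies that the boxes \(B_2, \ldots ,B_n\) tile the grid: every nonzero ν with components of magnitude at most \(2^{n - 1} - 1\) lies in exactly one \(B_\mu\).

```python
# Check that the boxes B_2, ..., B_n of Eq. (22) tile the grid: every nonzero nu with
# component magnitudes at most 2^(n-1) - 1 lies in exactly one B_mu (small n for speed).
import itertools

def in_box(nu, mu):
    inside_outer = all(abs(x) < 2 ** (mu - 1) for x in nu)
    inside_inner = all(abs(x) < 2 ** (mu - 2) for x in nu)
    return inside_outer and not inside_inner

n = 5
radius = 2 ** (n - 1) - 1
for nu in itertools.product(range(-radius, radius + 1), repeat=3):
    if any(nu):
        assert sum(in_box(nu, mu) for mu in range(2, n + 1)) == 1
print("every nonzero nu lies in exactly one box")
```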

Next we prepare an ancilla register in an equal superposition of |m〉 for m = 0 to M − 1, where M is a power of two and is chosen to be large enough to provide a sufficiently accurate approximation of the overall state preparation. The preparation of the superposition for m can be obtained entirely using Hadamards. We test the inequality

$$(2^{\mu - 2}/\left\| \nu \right\|)^2 \,>\, m/M.$$
(23)

The left-hand side can be as large as 1 in this region, because the smallest value of \(\left\| \nu \right\|\) in \(B_\mu\) is \(2^{\mu - 2}\), attained when just one of νx, νy, νz has magnitude \(2^{\mu - 2}\) and the other two are equal to zero. That is, we are at the center of a face of the inner cube. In order to avoid divisions, which are costly to implement, the inequality testing will be performed as

$$(2^{\mu - 2})^2M \,>\, m\left(\nu _x^2 + \nu _y^2 + \nu _z^2\right).$$
(24)

The resulting state will be (omitting the parts where the inequality is not satisfied)

$$\frac{1}{{\sqrt {M(2^{n + 1} - 4)} }}\mathop {\sum}\limits_{\mu = 2}^n {\mathop {\sum}\limits_{\nu \in B_\mu } {\mathop {\sum}\limits_{m = 0}^{Q - 1} {\frac{1}{{2^\mu }}} } } |\mu \rangle |\nu _x\rangle |\nu _y\rangle |\nu _z\rangle |m\rangle ,$$
(25)

where \(Q = \lceil {M(2^{\mu - 2}/\left\| \nu \right\|)^2}\rceil\) is the number of values of m satisfying the inequality. The amplitude for each ν will then be proportional to the square root of the number of values of m, and so is

$$\sqrt {\frac{{\left\lceil {M(2^{\mu - 2}/\left\| \nu \right\|)^2} \right\rceil }}{{M2^{2\mu }(2^{n + 1} - 4)}}} \approx \frac{1}{{4\sqrt {2^{n + 1} - 4} }}\frac{1}{{\left\| \nu \right\|}}.$$
(26)

The two sides are approximately equal for large M, and in that limit we obtain amplitudes proportional to \(1/\left\| \nu \right\|\), as required. We have omitted the parts of the state flagged as failure, and the norm squared of the success state gives the probability for success. In the limit of large M, the norm squared is

$${P_n} = \frac{1}{{2^5}({2^n} - 2)}\sum\limits_{\mathop {{\nu_{x}},{\nu_{y}},{\nu_{z}} = - ({2^{n - 1}} - 1)}\limits_{\nu \ne 0} }^{{2^{n - 1}} - 1} {\frac{1}{{\nu_{x}^{2} + \nu_{y}^{2} + \nu_{z}^{2}}}.}$$
(27)

In the limit of large n, this expression can be approximated by an integral, which gives

$$P_n \approx \frac{1}{8}{\int\nolimits_0^1} {\mathrm{d}x}{\int\nolimits_0^1} {\mathrm{d}y} {\int\nolimits_0^1} {\mathrm{d}z} \frac{1}{{x^2 + y^2 + z^2}}.$$
(28)

The integral is the box integral \(B_3(-2)\) in ref. 39. In ref. 39, \(B_3(-2) = 3C_2(-2,1)\), and \(C_2(-2,1)\) is given by Eq. (40) of that work with a = 1. That gives the asymptotic value

$$P_n \approx \frac{3}{8}\left[ {{\mathrm{Ti}}_{\mathrm{2}}(3 - \sqrt 8 ) - G + \frac{\pi }{2}{\mathrm{log}}(1 + \sqrt 2 )} \right] = 0.2398 \ldots ,$$
(29)

where \({\mathrm{Ti}}_2\) is the Lewin inverse tangent integral and G is the Catalan constant. After a single step of amplitude amplification, the probability of failure is

$${\mathrm{sin}}^2\left( {3{\mathrm{arccos}}\left( {\sqrt {P_n} } \right)} \right) \approx 0.001261 \ldots .$$
(30)

Numerically we find \(P_2 = 11/48\), and it increases with n towards the analytically predicted value. Using a single step of amplitude amplification brings the probability of success close to 1 (see Fig. 1). Note that the “failure” here does not mean that the entire algorithm fails. In the case of “failure” of the state preparation one can simply perform the identity instead of the controlled unitaries in the select operation. The result is that a small known amount of the identity is added to the Hamiltonian, which can just be subtracted from any eigenvalue estimates. The amplitude amplification triples the complexity for the state preparation.
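The success probability is easy to evaluate classically; the sketch below sums Eq. (27) directly for small n, reproduces \(P_2 = 11/48\), and prints the post-amplification failure probability of Eq. (30) as it approaches the asymptotic value of Eq. (29) (assuming NumPy; the range of n is an arbitrary choice).

```python
# Exact evaluation of P_n from Eq. (27), the post-amplification failure probability of
# Eq. (30), and a check of P_2 = 11/48.  The range of n is an arbitrary choice.
import itertools
import numpy as np

def success_probability(n):
    radius = 2 ** (n - 1) - 1
    total = sum(1.0 / sum(x * x for x in nu)
                for nu in itertools.product(range(-radius, radius + 1), repeat=3)
                if any(nu))
    return total / (2 ** 5 * (2 ** n - 2))

for n in (2, 3, 4, 5, 6):
    p = success_probability(n)
    print(n, p, np.sin(3 * np.arccos(np.sqrt(p))) ** 2)

print("P_2 == 11/48:", abs(success_probability(2) - 11 / 48) < 1e-12)
```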

Fig. 1: The failure probability for the state preparation after a single step of amplitude amplification. The horizontal dotted line shows the predicted asymptotic value from Eq. (30)

Next we consider the error in the state preparation due to the finite value of M. The relevant quantity is the sum of the errors in the squared amplitudes, as that gives the error in the weightings of the operations applied to the target state. That error is upper bounded by

$$\frac{1}{{M(2^{n + 1} - 4)}}\mathop {\sum}\limits_{\mu = 2}^n {\mathop {\sum}\limits_{\nu \in B_\mu } {\frac{1}{{2^{2\mu }}}} } \,\,{<}\, \frac{1}{{M(2^{n + 1} - 4)}}\mathop {\sum}\limits_{\mu = 2}^n {2^\mu } = \frac{1}{M}.$$
(31)

This error corresponds to the error in implementation of (U + V)/λ. As discussed above, the error in this implementation, δ, can satisfy \({\mathrm{log}}(1/\delta ) = {\cal{O}}({\mathrm{log}}(\eta Nt/\epsilon ))\). Since \(\delta = {\cal{O}}(1/M)\), we can take the number of bits of M as \({\mathrm{log}}\,M = {\cal{O}}({\mathrm{log}}(\eta Nt/\epsilon ))\). The complexities of the steps of this procedure are:

1. The register |μ〉 can be represented in unary, so the state preparation takes a number of gates (rotations and controlled rotations) equal to \(n - 1 = {\cal{O}}({\mathrm{log}}\,N)\), because the dimension is logarithmic in N.

2. The superposition over νx, νy, νz can be produced with \(3n = {\cal{O}}({\mathrm{log}}\,N)\) controlled Hadamards. These Hadamards can be controlled by qubits of the unary register used for |μ〉.

3. Testing whether the negative zero has been obtained can be performed with a multiply-controlled Toffoli with n controls, which has complexity \({\cal{O}}(n) = {\cal{O}}({\mathrm{log}}\,N)\).

4. Testing whether the value of ν is inside the inner box can be performed by using a series of n multiply-controlled Toffolis with 4 controls each (one unary qubit for |μ〉 and one qubit from the register for each of the three components of ν). The complexity is therefore \({\cal{O}}(n) = {\cal{O}}({\mathrm{log}}\,N)\).

5. The preparation of the equal superposition over m requires \({\cal{O}}({\mathrm{log}}(1/\delta ))\) Hadamards.

6. The inequality test involves multiplications, and therefore has complexity given by the product of the numbers of digits (better-scaling multiplication algorithms would only perform better for an unrealistically large number of digits). The complexity is therefore \({\cal{O}}({\mathrm{log}}(1/\delta ){\mathrm{log}}\,N)\).

The inequality test is the most costly step due to the multiplications, and gives the overall cost of the state preparation algorithm. Nevertheless, it still has logarithmic cost, so the complexity of the prepare operation will be negligible compared to the costs of the other steps. This concludes our procedure for realizing the state in Eq. (18), which completes our presentation of the overall simulation procedure.

Discussion

The low scaling dependence of our methods on N allows us to easily overcome the constant factor difference in resolution between plane waves and Gaussians. In fact, using these algorithms we expect that one can achieve precisions limited only by relativistic effects and the Born-Oppenheimer approximation. However, the latter limitation can also be alleviated by our approach since one can use enough plane waves to reasonably span the energy scales required for momentum transfer between nuclei and electrons, and thus support simulations with explicit quantum treatment of the nuclei. We also expect that our approach could be viable for the first generation of fault-tolerant quantum computers.

Let us consider the calculation of the FeMoco cofactor of the nitrogenase enzyme discussed in refs 40,41,42 which involved 54 electrons and 108 Gaussian spin-orbitals. FeMoco is the active site of biological nitrogen fixation and its electronic structure has remained elusive to classical methods. The work of ref. 40 found that roughly \(10^{15}\) T gates would be required, which translates to needing roughly \(10^8\) physical qubits if implemented in the surface code with gates at \(10^{-3}\) error rate. The large qubit count here arises from needing to parallelize magic state distillation (the system register would need only about \(10^5\) physical qubits). In comparison, the \({\cal{O}}(N^3)\) scaling algorithm of ref. 9 has been shown to require less than \(10^9\) T gates to solve a molecule with 100 plane wave spin-orbitals (which is not enough resolution for FeMoco).

Supposing we use \(10^6\) plane wave spin-orbitals for these 54 electrons, our algorithm would require roughly \(10^3\) logical qubits (which can be encoded in roughly \(10^6\) physical qubits under the architecture assumptions discussed in ref. 9, which are more conservative than those in ref. 40). Under these assumptions the value of \(\eta ^{8/3}N^{1/3}\) is only about \(4 \times 10^6\), though there will be significant logarithmic and constant factors in the gate complexity. In comparison, \(N^{8/3}/\eta ^{2/3}\) for the approach of ref. 10 would be about \(7 \times 10^{14}\). While further work would be needed to determine the precise gate counts, it seems reasonable that gate counts would be low enough to perform magic state distillation in series with a single T factory. This back-of-the-envelope estimate suggests that our approach could surpass the accuracy of the FeMoco simulation discussed in ref. 40 while using fewer physical qubits. Using such a large basis may also alleviate the need for active space perturbation techniques such as those discussed in ref. 43.
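For transparency, the arithmetic behind these two numbers is reproduced below (a trivial calculation; the values of η and N are those assumed in the text).

```python
# Arithmetic behind the comparison quoted above: eta = 54 electrons,
# N = 10**6 plane wave spin-orbitals.
eta, N = 54, 10 ** 6
print(f"eta^(8/3) * N^(1/3)  ~ {eta ** (8 / 3) * N ** (1 / 3):.1e}")   # ~ 4e6 (this work)
print(f"N^(8/3) / eta^(2/3)  ~ {N ** (8 / 3) / eta ** (2 / 3):.1e}")   # ~ 7e14 (ref. 10)
```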

Although we have used the interaction picture to achieve our \(\tilde {\cal{O}}(\eta ^{8/3}N^{1/3}t)\) complexity, it is also possible to achieve sublinear complexity in N without the interaction picture. This is because the value of λ associated with the kinetic operator T is \({\cal{O}}(\eta ^{1/3}N^{2/3})\).8 Thus, one could use qubitization and signal processing17,44 where T is simulated using LCU methods. Then, the overall complexity of our approach would be \(\tilde {\cal{O}}(\eta ^{8/3}N^{1/3}t + \eta ^{4/3}N^{2/3}t)\), and the constant factors in the scaling might be smaller.