Variational Monte Carlo with Large Patched Transformers

Large language models, like transformers, have recently demonstrated immense power in text and image generation. This success is driven by the ability to capture long-range correlations between elements in a sequence. The same feature makes the transformer a powerful wavefunction ansatz that addresses the challenge of describing correlations in simulations of qubit systems. Here we consider two-dimensional Rydberg atom arrays to demonstrate that transformers reach higher accuracies than conventional recurrent neural networks for variational ground state searches. We further introduce large, patched transformer models, which consider a sequence of large atom patches, and show that this architecture significantly accelerates the simulations. The proposed architectures reconstruct ground states with accuracies beyond state-of-the-art quantum Monte Carlo methods, allowing for the study of large Rydberg systems in different phases of matter and at phase transitions. Our high-accuracy ground state representations at reasonable computational costs promise new insights into general large-scale quantum many-body systems.

A particularly promising choice is autoregressive neural networks such as the PixelCNN [26] and RNNs [7,8,13,28], which can find ground states and reconstruct quantum states from data with high accuracies. These models consider qubit systems in sequential order, providing an efficient wavefunction encoding. However, these setups experience limitations for systems with strong correlations between qubits far apart in the sequence, which, for example, happens for two-dimensional qubit systems [7,8,26].
Similar to the RNN or PixelCNN approaches, transformer (TF) models [32] can be used as a wavefunction ansatz by considering a sequence of qubits [14, 33–38] or for simulating quantum dynamics [39,40]. Due to their non-recurrent nature and the ability to highlight the influence of specific previous sequence elements, TF models perform better at covering long-range correlations [32], promising to overcome the limitations of RNNs and PixelCNNs [34,35]. In this work, we analyze the performance of the TF wavefunction ansatz for variational ground state searches and observe improved accuracies in the representation of quantum states compared to the RNN approach.
Inspired by the introduction of the vision transformer, which enables the efficient application of TF models for image processing and generation tasks [41], and by previous works in the field [25,34,35], we study RNN and TF models that consider sequences of patches of qubits. This approach reduces the sequence length and thus the computational cost significantly, while accurately capturing correlations within the patch. For further improvements we introduce large, patched transformers (LPTFs), consisting of a powerful patched TF model followed by a computationally efficient patched RNN that breaks large inputs into smaller sub-patches. This architecture allows for an efficient consideration of large patches in the input sequence of the TF network, further reducing the sequence length.
Analyzing different shapes and sizes of input patches, we demonstrate that LPTFs can represent ground states of Rydberg atom arrays with accuracies beyond the RNN ansatz and traditional quantum Monte Carlo simulations, while requiring reasonable computational costs. Our results are consistent in different phases of matter and at quantum phase transitions. While we show that LPTFs can significantly improve numerical investigations of the considered Rydberg models, the introduced network model can similarly be applied to general qubit systems. The results presented in this work suggest that the LPTF model can substantially advance numerical studies of quantum many-body physics.

Rydberg atom arrays
Rydberg atoms, which we use as a qubit model to benchmark our numerical approaches, can be prepared in the ground state |g⟩ and in a highly excited (Rydberg) state |r⟩ [42-46, 48, 49]. We specifically consider the atoms arranged on square lattices of different system sizes, as illustrated in Fig. 1a.
The system of N = L × L atoms is described by the Rydberg Hamiltonian [42,43], with the detuning δ and the Rabi oscillation with frequency Ω generated by an external laser driving. Here we use the off-diagonal operator σ^x_i = |g⟩_i⟨r|_i + |r⟩_i⟨g|_i and the occupation number operator n_i = |r⟩_i⟨r|_i. The last term in the Hamiltonian describes a van der Waals interaction between atoms at positions r_i and r_j, with V_{i,j} = Ω R_b^6 / |r_i − r_j|^6 and Rydberg blockade radius R_b. We further choose the lattice spacing a = 1. By tuning the free parameters in the Rydberg Hamiltonian, the system can be prepared in various phases of matter, separated by different kinds of phase transitions [46-48, 51, 52]. The Rydberg Hamiltonian is stoquastic [54], resulting in a positive and real-valued ground-state wavefunction [44,48,50]. More details on Rydberg atom arrays are provided in the Methods section.
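For reference, since the equation itself is not reproduced in this excerpt, the Rydberg Hamiltonian referred to above takes the standard form (prefactor conventions may differ in the original):

```latex
\hat{H} = -\frac{\Omega}{2}\sum_{i=1}^{N}\hat{\sigma}^x_i
          - \delta\sum_{i=1}^{N}\hat{n}_i
          + \sum_{i<j} V_{i,j}\,\hat{n}_i\hat{n}_j,
\qquad
V_{i,j} = \frac{\Omega R_b^6}{|\mathbf{r}_i - \mathbf{r}_j|^6}.
```

The first term drives Rabi oscillations between |g⟩ and |r⟩, the second rewards Rydberg occupation for positive detuning, and the last term implements the blockade interaction.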

Recurrent neural networks and transformers
Recurrent neural networks (RNNs) provide a powerful wavefunction ansatz that can variationally find ground state representations of quantum many-body systems [4, 6-8, 13, 25, 28, 29]. For this, the possibility to naturally encode probability distributions in RNNs allows the representation of squared wavefunction amplitudes |Ψ(σ)|². Samples drawn from the encoded distribution correspond to state configurations that can be interpreted as outputs of projective measurements, as illustrated in Fig. 1d.
To represent the wavefunction amplitudes Ψ (σ) = ⟨σ|Ψ⟩ of a qubit system, such as an array of Rydberg atoms, a sequential order is defined over the system.
Each atom is iteratively used as an input to the RNN cell, the core element of the network structure, which we choose to be a Gated Recurrent Unit (GRU) [55] inspired by [6,7]. In addition, the RNN cell receives the state of internal hidden units as input. This state is adapted in each iteration and propagated over the input sequence, generating a memory effect. The network output p_RNN(σ_i|σ_<i; W) at each iteration can be interpreted as the probability of the next atom σ_i being in either the ground or the Rydberg state, conditioned on the configuration of all previous atoms σ_<i in the sequence, with variational weights W in the RNN cell. See Fig. 1d and the Methods section for more details. From this output, the state σ_i of the next atom is sampled and used autoregressively as input in the next RNN iteration, as illustrated in Fig. 1b [4,6,7]. We then train the RNN such that it approximates a target state Ψ(σ). While we focus on positive, real-valued wavefunctions in this work, RNNs can represent general wavefunctions by including complex phases as a second network output [7].
The global phase of the encoded state is then expressed as the sum over single-qubit phases.
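The autoregressive sampling loop described above can be sketched in a few lines. This is a minimal numpy illustration in which the GRU cell and the projection step are untrained toy stand-ins (all weight names are ours, not the authors'; the paper uses two projection layers, while one suffices for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def gru_cell(x, h, W):
    # toy stand-in for a trained GRU cell: any map (input, hidden) -> hidden
    return np.tanh(W["Wx"] @ x + W["Wh"] @ h)

def conditional_probs(h, W):
    # projection of the hidden state plus softmax over the 2 single-qubit states
    logits = W["Wo"] @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sample_autoregressive(N, d_h, W):
    """Sample one configuration sigma and its joint probability p_RNN(sigma)."""
    sigma, p_joint = [], 1.0
    x, h = np.zeros(2), np.zeros(d_h)   # initial inputs sigma_0 = 0, h_0 = 0
    for _ in range(N):
        h = gru_cell(x, h, W)
        p = conditional_probs(h, W)
        s = rng.choice(2, p=p)          # sample ground (0) or Rydberg (1) state
        sigma.append(s)
        p_joint *= p[s]                 # product of conditionals
        x = np.eye(2)[s]                # one-hot of sampled state = next input
    return np.array(sigma), p_joint
```

The accumulated product of conditionals is the joint probability, which the ansatz interprets as |Ψ(σ)|² for positive wavefunctions.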
The RNN has shown high accuracies for representing ground states of various quantum systems. However, its sequential nature and the encoding of all information in the hidden unit state pose a challenge for capturing long-range correlations [4, 6-8, 13, 28, 29]. Here we refer to correlations between atoms that appear far from each other in the RNN sequence but not necessarily in the qubit system. Alternative autoregressive network models, such as the PixelCNN, experience similar limitations. These models cover correlations via convolutions with a kernel of a specific size. However, due to increasing computational costs, kernel sizes are commonly chosen rather small, so that the PixelCNN is likewise limited to capturing only local correlations in qubit systems [26]. Specific RNN structures that better match the lattice structure of the considered model, such as two-dimensional RNNs for two-dimensional quantum systems, can overcome this limitation [7,25,28,29]. An alternative approach to improve the representation of long-range correlations is to use transformer (TF) architectures as a wavefunction ansatz [14, 33–35]. These provide a similar autoregressive behavior but do not have a recurrent setup and naturally capture all-to-all interactions [32].
While autoregressively using the states of individual atoms as sequential input, similarly to the RNN, a masked self-attention layer in the TF setup provides trained connections to all previous elements in the sequence [32].
FIG. 1. Illustrating different network models. a, Square lattice of N = 4 × 4 Rydberg atoms randomly occupying the ground state (white) and the Rydberg state (blue). Dash-colored squares indicate patches used as network inputs. b, Recurrent neural network (RNN) processing sequence. The RNN cell iteratively receives input sequence elements σ_i together with a hidden state. At each iteration, the output is used as the next input. The index (p) denotes the input size in the patched RNN. c, Autoregressive transformer (TF) processing sequence, similar to the RNN in b. The multi-headed masked self-attention layer generates weighted connections to previous input sequence elements. For simplicity, we only include the feed-forward layers (FFLs) in the scheme. d, Single patched RNN iteration on inputs of patch size p = 2 × 2 [indicated with (4)]. A softmax function creates a probability distribution p_RNN over all possible patch states conditioned on previous sequence elements. The state of the next patch is sampled and used as the next input state. e, Single patched TF iteration. The input patch is embedded into a state of dimension d_H, and a positional encoding keeps track of the sequence order. The signal is sent into the transformer cell (gray), which we generate similar to [32] and apply T times independently. The output of the transformer cells is used like in the patched RNN to sample the next input. f, Single large, patched transformer (LPTF) iteration for patch size p = 4 × 4 [indicated with (16)] and sub-patch size p_s = 2 × 2. A patched TF model receives a large input patch, and the transformer cell output is propagated as a hidden state h_TF to a patched RNN. The patched RNN autoregressively constructs the input patch of the next LPTF iteration, reducing the output dimension. See Methods for more details on the network models.
See Fig. 1e and the Methods section for more details. These trainable connections generate all-to-all interactions between the atoms in the system and allow the highlighting of high-impact connections or strong correlations. This setup thus promises to represent strongly correlated quantum systems with higher accuracy than the RNN model [14,34,35]. As illustrated in Fig. 1c, the TF model outputs probability distributions which provide an autoregressive wavefunction ansatz Ψ_TF(σ; W) = √(p_TF(σ; W)) [14,34,35], as further explained in the Methods section. Similarly to the RNN, the TF network can represent complex-valued wavefunctions by adding a second output representing the single-qubit phases [7].
In Fig. 2a,b, we compare the performance of RNNs (blue) and TFs (orange) when representing ground states of Rydberg arrays with N = 8 × 8 (a) and N = 16 × 16 (b) atoms. Here we fix R_b = 7^{1/6} ≈ 1.383 and Ω = δ = 1, which brings us into the vicinity of the transition between the disordered and the striated phase [48]. We variationally train the network models by minimizing the energy expectation value, corresponding to a variational Monte Carlo method [4,6,23,56], see Methods section. If not stated otherwise, the energy expectation values in this work are evaluated on N_s = 512 samples generated from the network, which we consider in mini-batches of K = 256 samples. We obtained satisfactory results with d_H = 128 hidden neurons in the RNN and the equivalent embedding dimension d_H = 128 in the TF model. To benchmark the performance of the two models, we show the difference between the ground state energies ⟨H⟩_QMC obtained from quantum Monte Carlo (QMC) simulations at zero temperature [53] and the energy expectation value extracted from network samples σ_s. Here we use the local energy, with |Ψ_W⟩ denoting the wavefunction encoded in either the RNN or the TF network, as discussed in the Methods section. In the QMC simulations, we use the stochastic series expansion approach presented in [53] and evaluate the expectation value on N_s = 7 × 10^4 samples generated from seven independent sample chains. Both system sizes show that TFs converge to the ground state energy within fewer training iterations than the RNN. Additionally, for the larger system in Fig. 2b, TFs outperform RNNs significantly and reach higher accuracies in the ground state energy. This result demonstrates the expected improved performance. We, however, also find that this enhancement comes at the cost of increased computational runtimes τ in hours (h) for 2 × 10^4 training iterations. With τ ≈ 1.5 h and τ ≈ 16 h for N = 8 × 8 and N = 16 × 16 atoms, RNNs process much faster than TFs with τ ≈ 9.5 h and τ ≈ 144 h, respectively. Fig. 2a,b suggest stopping the TF training after fewer iterations due to the faster convergence, but the computational runtime is still too long to allow scaling to large system sizes.
We obtained QMC runtimes of τ ≈ 18 h for N = 8 × 8 and τ ≈ 24 h for N = 16 × 16 for a single run generating N_s = 10^4 samples, showing a more efficient scaling with system size than the network simulations. This behavior can be understood when considering the scaling of the computational cost for generating an individual sample, which is O(N) for the RNN and QMC, and O(N^2) for the TF. In addition, the network models need to evaluate energy expectation values in each training iteration, which comes at complexity O(N^2) for the RNN and O(N^3) for the TF, see Methods for more details. However, due to its non-recurrent setup, the TF enables a parallelization of the energy expectation value evaluation, which is not possible for the RNN ansatz, as further discussed in the Methods. The computational complexity for QMC scales as O(N) for both sampling and energy evaluation [53]. Thus, while the QMC requires longer runtimes than the RNN for small system sizes, it is expected to outperform both the RNN and the TF for larger systems.

Patched inputs
To address the exceeding computational runtime of TF models, we take inspiration from the vision transformer [41] and consider patches of atoms as inputs to both considered network architectures, as illustrated in Fig. 1d,e. This reduces the sequence length to N/p elements for patch size p, leading to a sampling complexity of O(N/p) for the patched RNN and O(N^2/p^2) for the patched TF model, as well as an energy evaluation complexity of O(N^2/p) and O(N^3/p^2), respectively.
We first use patches of p = 2 × 2 atoms. The network output is then a probability distribution over the 2^p = 16 states the atoms in the patch can take, from which the next patch is sampled and used autoregressively as input in the following iteration. As demonstrated in previous works [25,34,35], this significantly reduces the computational runtime due to the shorter sequence length. In addition, we expect it to capture correlations between neighboring atoms with higher accuracies by directly encoding them in the output probabilities. The patched models can also be modified to include complex phases as a second network output, which then correspond to the sum of phases of individual qubits in the patch [7].
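The patch encoding can be illustrated as follows: splitting the lattice into 2 × 2 patches and mapping each patch to one of the 2^p = 16 basis states. The row-major bit ordering within a patch is our illustrative convention, not necessarily the authors':

```python
import numpy as np

def to_patches(lattice, ph=2, pw=2):
    """Split an L x L occupation lattice into a sequence of ph x pw patches
    and encode each patch as an integer in [0, 2**(ph*pw))."""
    L = lattice.shape[0]
    assert L % ph == 0 and L % pw == 0  # system size must divide by patch size
    seq = []
    for r in range(0, L, ph):
        for c in range(0, L, pw):
            # flatten the patch row-major and read it as a binary number
            bits = lattice[r:r+ph, c:c+pw].reshape(-1)
            seq.append(int("".join(map(str, bits)), 2))
    return np.array(seq)
```

A 4 × 4 lattice then becomes a sequence of four integers, each indexing one of the 16 patch states fed to the network.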
Fig. 2c and d show the results for the same N = 8 × 8 and N = 16 × 16 atom Rydberg array ground states as in panels a and b, using the patched RNN (green) and the patched TF setup (red) with p = 2 × 2. The network hyperparameters are the same as for the RNN and the TF network in a and b. The computational runtime reduces significantly to τ ≈ 0.5 h and τ ≈ 3 h, using the patched RNN and the patched TF model for N = 8 × 8 atoms, and to τ ≈ 2 h and τ ≈ 28 h, respectively, for N = 16 × 16 atoms. Convergence further happens within fewer training iterations than for the unpatched networks, and all representations reach energy values within the QMC errors. We even observe energies below the QMC results; these always remain within the QMC uncertainties and thus do not violate the variational principle, which we expect to be satisfied for the number of samples we use to evaluate energy expectation values and for the small variances we observe [56]. These energies suggest that the patched networks find the ground state with better accuracy than the QMC simulations using N_s = 7 × 10^4 samples. The QMC accuracy can be further increased by using more samples, where the uncertainty decreases as ∝ 1/√N_s for uncorrelated samples [53]. However, samples in a single QMC chain are correlated, resulting in an uncertainty scaling ∝ τ_auto/N_s with autocorrelation time τ_auto depending on the evaluated observable [53]. Even though the computational cost of QMC scales linearly with the sample chain size N_s and is thus more efficient than the RNN or the TF approach, which require the generation of N_s samples in each training iteration, we found that reaching higher QMC precisions comes at runtimes that exceed those of the patched RNN and the patched TF due to long autocorrelation times for large system sizes.

Large, patched transformers
Based on the results with p = 2 × 2, we expect even shorter computational runtimes and higher representation accuracies from larger patch sizes. However, as illustrated in Fig. 1d,e, the network output dimension scales exponentially with the input patch size, encoding the probability distribution over all possible patch states. This output scaling leads to the sampling cost scaling as O(2^p N/p) for the patched RNN and as O(N^2/p^2 + 2^p N/p) for the patched TF network, as well as energy evaluation costs scaling as O(2^p N^2/p) and O(N^3/p^2 + 2^p N^2/p), respectively, see Methods. A hierarchical softmax approach is often used in image processing to efficiently address this exponential scaling [57].
Here we introduce large, patched transformers (LPTFs) as an alternative way to enable efficient patch size scaling.
As shown in Fig. 1f, the LPTF model uses a patched TF setup and passes the TF state into a patched RNN as the initial hidden state. The patched RNN splits the input patch into smaller sub-patches of size p_s = 2 × 2, reducing the output of the LPTF model to the probability distribution over the 2^{p_s} = 16 sub-patch states, as further discussed in the Methods. The sampling complexity for this model is reduced to O(N^2/p^2 + 2^{p_s} N/p_s), and the energy evaluation complexity takes the form O(N^3/p^2 + 2^{p_s} N^2/p_s), as derived in the Methods section. Generally, we can use both the patched RNN and the patched TF architecture as base network and subnetwork. We choose this setup here to combine the high accuracies reached with the patched TF network for large system sizes with the computational efficiency of the patched RNN, which can still accurately represent small systems (see Fig. 2a). Being a combination of a TF network and an RNN, the LPTF can similarly be modified to include complex phases as a second network output.
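A single LPTF iteration can be sketched as follows, again with untrained toy weights (all names are ours). The transformer cell itself is omitted and represented only by its output vector h_TF, which seeds the hidden state of the sub-patch RNN:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_cell(x, h, W):
    # toy stand-in for the trained patched-RNN cell
    return np.tanh(W["Wx"] @ x + W["Wh"] @ h)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_large_patch(h_tf, n_sub, d_sub, W):
    """One LPTF iteration: starting from the transformer output h_TF,
    autoregressively sample n_sub sub-patches with 2**d_sub states each."""
    h = h_tf                      # transformer output seeds the RNN hidden state
    x = np.zeros(2 ** d_sub)
    sub_patches, p_joint = [], 1.0
    for _ in range(n_sub):
        h = rnn_cell(x, h, W)
        p = softmax(W["Wo"] @ h)  # distribution over the 2**d_sub sub-patch states
        s = rng.choice(len(p), p=p)
        sub_patches.append(s)
        p_joint *= p[s]
        x = np.eye(2 ** d_sub)[s]
    return sub_patches, p_joint
```

For p = 4 × 4 and p_s = 2 × 2 this corresponds to n_sub = 4 sub-patches with d_sub = 4 atoms each, so the output dimension stays at 16 instead of 2^16.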
In Fig. 2c,d, we compare the performance of the LPTF model to the previously considered network architectures, where we choose p = 4 × 4 (purple) and p = 8 × 8 (brown), with p_s = 2 × 2, using the same hyperparameters for all networks. These models require more training iterations than the patched TF architecture to converge but reach accuracies comparable to the patched RNN and the patched TF network. Even though more training iterations are required, the computational runtimes are reduced to τ ≈ 1 h for N = 8 × 8 with p = 4 × 4, as well as τ ≈ 9 h and τ ≈ 4.5 h for N = 16 × 16 with p = 4 × 4 and p = 8 × 8, respectively. Thus, overall, we obtain convergence within shorter computational runtime.
The observed runtimes are also shorter than QMC runs, even though QMC is expected to outperform the network models for large system sizes due to the linear scaling of its computational costs with N. However, QMC is based on the generation of a chain of correlated samples. For large system sizes, autocorrelation times between samples in the chain increase and the ergodicity of the sampling process is not necessarily guaranteed [53]. Since these limitations do not arise for the exact sampling process in autoregressive ANN methods [7,26], computationally efficient architectures such as the LPTF are still promising candidates for accurate studies of large quantum many-body systems.
Fig. 2e and f show the variances σ²(E) of the energy expectation values obtained with all considered network architectures. As expected [7], they decrease to zero when converging to the ground state energies. This behavior confirms the accurate ground state reconstruction, while the smoothness of all curves demonstrates stable training processes.
We can further increase the patch size p in the LPTF architecture, from which we expect even shorter runtimes. However, this also increases the patch size that needs to be reconstructed with the patched RNN. We thus expect the accuracy to decrease for large p if we keep p_s = 2 × 2 fixed. Fig. 3a shows the reachable energy accuracies for different patch and system sizes. We keep the parameters at R_b = 7^{1/6}, δ = Ω = 1, and evaluate the QMC energies on N_s = 7 × 10^4 samples from seven independent chains [53], where the computational cost for QMC scales as O(N) with the system size. Each LPTF data point corresponds to an average over training iterations 19,000 to 20,000 of ten independently trained networks with the same setup as for Fig. 2. We vary the input patch size between p = 4 × 4 and p = 16 × 16, where we also consider rectangular-shaped patches while fixing p_s = 2 × 2. We ensure that the system size always divides by the input patch size.
As expected, the energy accuracies decrease with increasing patch size, which might result from the limited representational power of the patched RNN for large input p and small p_s, and from the increased amount of information that is encoded in each network iteration. We find accuracies below the QMC uncertainty for up to p = 8 × 8, which still implies a significant speedup compared to single-atom inputs in the TF model, see Fig. 2d. Fig. 3b shows the computational runtimes of single training iteration steps for the different patch and system sizes. Each data point shows an average over 2 × 10^4 training iterations in a single network. We find a rapid decrease in computation times for small patches, while we observe convergence to steady times for larger patches. This behavior results from the increased memory required by larger patch sizes, which forces us to decrease the mini-batch size K of samples for the energy evaluation, see Methods. Smaller mini-batch sizes lead to increased runtimes, which compete with the acceleration from the reduced sequence lengths.
We cannot find a conclusive dependence on the patch shape, with rectangular patches showing a similar behavior as square patches. Thus, the only important factor is the overall patch size, and we conclude that input patches of around p = 8 × 8 atoms provide a good compromise between reduced computation times and high energy accuracies.

Phases of matter in Rydberg atom arrays
We now explore the performance of LPTFs at different points in the Rydberg phase diagram by varying the detuning from δ = 0 to δ = 3 and fixing R_b = 3^{1/6} ≈ 1.2, Ω = 1. With this, we drive the system over the transition between the disordered and the checkerboard phase [46,48]. The order parameter for the checkerboard phase is given by the staggered magnetization [46], where i runs over all N = L × L atoms and n_i = |r⟩_i⟨r|_i is the occupation number operator acting on atom i. The expectation value denotes the average over sample configurations generated via QMC or from the trained network. Fig. 4 shows the staggered magnetization when tuning δ over the phase transition, where we compare LPTF and QMC simulations. The QMC data points show the average of N_s = 7 × 10^5 samples generated from seven independent chains [53]. The LPTF data is averaged over training iterations 11,000 to 12,000 of five independently trained networks. We look at systems with N = 8 × 8 and N = 16 × 16 atoms, choosing patch sizes p = 4 × 4 and p = 8 × 8, respectively, with p_s = 2 × 2.
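Since the equation for the staggered magnetization is not reproduced in this excerpt, the following sketch uses one common convention for the checkerboard order parameter; the exact normalization in the original may differ:

```python
import numpy as np

def staggered_magnetization(samples):
    """Checkerboard order parameter estimated from occupation samples.

    samples: integer array of shape (N_s, L, L) with entries n_i in {0, 1}.
    Convention used here (an assumption): average over samples of
    |sum_i (-1)^(x_i + y_i) n_i| / N.
    """
    _, L, _ = samples.shape
    x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    signs = (-1.0) ** (x + y)                      # +1/-1 sublattice pattern
    per_sample = np.abs((signs * samples).sum(axis=(1, 2))) / (L * L)
    return per_sample.mean()
```

A perfect checkerboard configuration gives 1/2 under this convention, while a featureless configuration averages to values near zero.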
The LPTF model captures the phase transition accurately for both system sizes, overlapping closely with the QMC results for all δ and showing small uncertainties. In the inset of Fig. 4, we plot the absolute difference between the staggered magnetizations obtained with QMC and LPTFs for both system sizes. The most challenging regime to simulate is at δ ≈ 1.2, where we find the phase transition in the main panel. Here the observed difference is ∼ 10^{-2}, demonstrating the high accuracies reachable with the LPTF approach. In the vicinity of the phase transition, the QMC uncertainties increase. This behavior is related to long autocorrelation times τ_auto in the individual sample chains and the uncertainty scaling as ∝ τ_auto/N_s [53]. The errors in the LPTF simulations remain small here, demonstrating a consistent and accurate outcome in all independent networks.

DISCUSSION
We explored the power of transformer (TF) models [32] in representing ground states of two-dimensional Rydberg atom arrays of different sizes by benchmarking them against quantum Monte Carlo simulations [53]. Our work provides a careful performance comparison of TF models with a recurrent neural network (RNN) wavefunction ansatz [4,6,7], showing that TFs reach higher accuracies, especially for larger system sizes, but require longer computational runtimes. We accelerate the network evaluation using patches of atoms as network inputs, inspired by the vision transformer [41], and demonstrate that these models significantly improve computational runtime and reachable accuracies.
Based on the obtained results, we introduce large, patched transformers (LPTFs), which consist of a patched TF network whose output is used as the initial hidden unit state of a patched RNN. This model enables larger input patch sizes, which are broken down into smaller patches in the patched RNN, keeping the network output dimension at a reasonable size.
The LPTF models reach accuracies below the QMC uncertainties for ground states obtained with a fixed number of samples, while requiring significantly reduced computational runtimes compared to traditional neural network models. We are further able to scale the considered system sizes beyond most recent numerical studies, while keeping the accuracies high and the computational costs reasonable [8,46,52,53]. These observations promise the ability to study the scaling behavior of Rydberg atom arrays up to large system sizes, allowing an in-depth exploration of the underlying phase diagram. Such studies go beyond the scope of this proof-of-principle work, and we leave them open for future follow-up works.
Our results show that the LPTF model performs similarly well in different phases of matter of the Rydberg system and accurately captures phase transitions. While we focus on Rydberg atom arrays, the introduced approach can be applied to general quantum many-body systems, where complex-valued wavefunctions can be represented by adding a second output to the autoregressive network architecture as in [7]. While we expect the inclusion of complex phases to make the training process harder [7], modifications of the LPTF setup can be explored in future works to study more complex or larger qubit systems. Such modifications include larger network models with more transformer cells, or higher embedding dimensions, which increase the network expressivity [58]. Additionally, larger input patch sizes can be achieved by including multiple patched RNN and patched TF components in the LPTF architecture, which successively reduce the sub-patch sizes. We further expect that the performance of LPTFs can be enhanced with a data-based initialization, as discussed in [8,31].
Our results and possible future improvements promise high-quality representations of quantum states in various models and phases of matter at affordable computational costs. This prospect suggests significant advances in the modeling of quantum many-body systems, promising insightful follow-up works exploring new physical phenomena.

Rydberg atom arrays
We apply our numerical methods to Rydberg atom arrays as an example of qubit systems. In state-of-the-art experiments, Rydberg atoms are individually addressed via optical tweezers that allow for precise arrangements on arbitrary lattices in up to three dimensions [44,45,48,49]. Fluorescent imaging techniques are then used to perform projective measurements in the Rydberg excitation basis. Such accurate and well-controlled experimental realizations are accompanied by intensive numerical investigations, which have unveiled a great variety of phases of matter, separated by quantum phase transitions, in which Rydberg atom systems can be prepared [46-48, 51, 52]. The atoms on the lattice interact strongly via the Rydberg many-body Hamiltonian in Eq. (1) [42,43]. The Rydberg blockade radius R_b defines a region within which simultaneous excitations of two atoms are penalized.
The ground states of this Rydberg Hamiltonian are fully described by positive, real-valued wavefunctions, so that the outcomes of measurements in the Rydberg occupation basis provide complete information about ground state wavefunctions [44,48,50]. We can thus model ground state wavefunctions with real-valued neural network architectures [6,7]. In this work, we choose Ω = 1 and describe the system in terms of the detuning δ and the Rydberg blockade radius R_b. We further consider square lattices of N = L × L atoms with lattice spacing a = 1 and open boundary conditions.

Recurrent neural network quantum states
Recurrent neural networks (RNNs) are generative network architectures that are optimized to deal with sequential data [55,59]. They naturally encode a probability distribution and enable efficient sample generation. As illustrated in Fig. 1b,d, the RNN input is given by individual elements σ_i of dimension d_I from a given data sequence σ, and a hidden state h_i of dimension d_H. We use the initial input states σ_0 = 0 and h_0 = 0. Throughout this work, we choose d_H = 128. The input is processed in the RNN cell, where non-linear transformations defined via variational parameters W are applied. Here we use the Gated Recurrent Unit (GRU) [55] as RNN cell, which is applied at each iteration with shared weights [59].
We then apply two fully connected projection layers on the hidden state, the first followed by a rectified linear unit (ReLU) activation function and the second followed by a softmax activation function (these layers are not shown in Fig. 1d). This setup generates an output vector of dimension d_O, which is interpreted as a probability distribution over all possible output values [7]. The hidden state configuration is propagated over the input sequence, encoding information of previous inputs and generating a memory effect. This setup conditions the output probability on all previous sequence elements, p_RNN(σ_{i+1}|σ_i, …, σ_1; W), from which an output state is sampled. Here, we use this output as the input element σ_{i+1} of the next iteration, running the network in an autoregressive manner. In this case, the joint probability of the generated sequence is given by p_RNN(σ; W) = Π_i p_RNN(σ_i|σ_{i−1}, …, σ_1; W) [7].
To use the RNN as a wavefunction ansatz to represent quantum states, we consider the quantum system as a sequence of qubits, sampling the state of one qubit at a time and using it as the input in the next RNN iteration [4,6,7]. The hidden state propagation captures correlations in the qubit system by carrying information about previously sampled qubit configurations. We then interpret the probability distribution encoded in the RNN as the squared wavefunction amplitude of the represented quantum state, p_RNN(σ; W) = |Ψ(σ; W)|². This ansatz can model the complete information of ground states of the considered Rydberg Hamiltonian, Eq. (1). Samples generated from the RNN then correspond to outcomes of projective measurements in the computational basis and can be used to estimate expectation values of general observables Ô [1,2,4,6,7,23], where we introduce the local observable O_loc(σ) = Σ_{σ'} ⟨σ|Ô|σ'⟩ Ψ(σ')/Ψ(σ). This local observable is evaluated and averaged over N_s samples σ_s generated from the RNN. To find the ground state representation of a given Hamiltonian in the RNN, we use a gradient descent training algorithm to minimize the energy expectation value ⟨E⟩ = ⟨Ĥ⟩, which can similarly be evaluated using samples from the RNN as stated in Eq. (3) and Eq. (4) [4,6,7,23,56]. We train the RNN using the Adam optimizer (with β_1 = 0.9) to update the trainable network parameters.
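A minimal sketch of the resulting local energy estimator, assuming the standard Rydberg Hamiltonian form H = −Ω/2 Σ σ^x_i − δ Σ n_i + Σ_{i<j} V_ij n_i n_j; the function `psi` is a placeholder for the (real, positive) network wavefunction amplitude:

```python
import numpy as np

def local_energy(sigma, psi, V, delta, omega):
    """E_loc(sigma) = <sigma|H|Psi> / <sigma|Psi> for the Rydberg Hamiltonian.

    sigma: occupation configuration (array of 0/1), psi: amplitude function,
    V: interaction matrix V_ij, delta: detuning, omega: Rabi frequency.
    """
    n = np.asarray(sigma)
    # diagonal terms: detuning and van der Waals interaction
    e = -delta * n.sum()
    e += sum(V[i, j] * n[i] * n[j]
             for i in range(len(n)) for j in range(i + 1, len(n)))
    # off-diagonal terms: sigma^x flips one atom at a time,
    # contributing amplitude ratios psi(sigma') / psi(sigma)
    amp = psi(n)
    for i in range(len(n)):
        flipped = n.copy()
        flipped[i] = 1 - flipped[i]
        e += -0.5 * omega * psi(flipped) / amp
    return e
```

Averaging `local_energy` over network samples σ_s yields the energy expectation value that the gradient descent minimizes.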

Transformer quantum states
Transformer (TF) models can be applied to sequential data similarly to RNNs. Such models do not include a recurrent behavior but are based on self-attention, which provides access to all elements in the sequence and enables the dynamical highlighting of salient information. We use the TF model as introduced in [32] and restrict it to the encoder part only [14, 33–35].
As illustrated in Fig. 1e, the TF model first embeds the given input vector. This embedding corresponds to a linear projection of the input vector of dimension d_I to a vector of embedding dimension d_H with trainable parameters W_I. As a next step, the positional encoding matrix is evaluated and added to the embedded input vector to include information about the positions of the input elements in the sequence [32]. To keep this information when propagating the signal through the TF structure, the overall embedding dimension d_H of internal states is conserved. Throughout this work, we choose d_H = 128.
The embedded input with positional encoding is passed to the transformer cell, where the query, key, and value matrices are generated and multi-headed self-attention is applied [32]. We use a masked self-attention mechanism to ensure the TF model is autoregressive, like the RNN. The output vector of the masked self-attention is then added to the embedded input state, and the sum is normalized before being fed into two feed-forward layers. The normalization is a trained operation that improves the stability of the network training and adds 2d_H trainable parameters. Similar to [32], we apply one feed-forward layer with a ReLU activation function, followed by a linear feed-forward layer, where the weights and biases of both layers are trainable. The first feed-forward layer projects the input into a vector of size d_FF = 2048, while the second feed-forward layer projects it back to size d_H. The output of the feed-forward layers is again added to the output vector of the self-attention cell, and the sum is normalized, see Fig. 1e.
The entire transformer cell, including the self-attention and feed-forward layers as well as the add-and-norm operations, can be applied T times with independent parameters to improve the network expressivity. We obtain satisfactory results with T = 2.
To represent quantum states similarly to the RNN ansatz, we apply two fully connected layers with trainable weights to the output of the transformer cell. The first layer conserves the dimension d_H and is followed by a ReLU activation function. The second layer projects the output to a vector of output dimension d_O and is followed by a softmax activation function. These two layers are not shown in the diagram in Fig. 1e. After the softmax activation function, the output can be treated the same way as the RNN output, and it can be interpreted as a probability distribution from which the next qubit state in the sequence can be sampled [32]. We train the TF model the same way as the RNN, using the Adam optimizer to minimize the energy expectation value via gradient descent. The energy expectation value is obtained in the same way as for the RNN, see Eq. (3) and Eq. (4), and we choose the same values as in the RNN approach for all hyperparameters involved in the training process.

Positional encoding
Since the TF model does not include a recurrent behavior like the RNN, it does not provide any information about the order of the sequence elements by default. A positional encoding algorithm is used to include information about the position of each input. We use the algorithm as proposed in [32], which creates a matrix of dimension L × d_H with sequence length L and embedding dimension d_H. The individual elements are calculated via PE_{l,2i} = sin(l / 10000^{2i/d_H}) and PE_{l,2i+1} = cos(l / 10000^{2i/d_H}), with 0 ≤ l < L indexing the sequence elements and 0 ≤ i < d_H/2 the column indices of the output matrix. The resulting matrix is added to the embedded input of the TF setup, which linearly projects the input vector to a vector of dimension d_H using trainable weights. This operation gives each element unique information about its position in the sequence.

The self-attention mechanism
The self-attention mechanism, as introduced in [32] and illustrated in Fig. 5, projects each embedded sequence element σ_i of dimension d_H to a query vector q_i, a key vector k_i, and a value vector v_i of dimension d_H, with trainable weight matrices W_q, W_k, W_v of dimension d_H × d_H. Query, key, and value vectors of all input elements can be summarized in the corresponding matrices Q, K, and V of dimension L × d_H, with sequence length L.

FIG. 5. Illustration of the attention mechanism applied to sequence element σ_3. Each sequence element σ_j is projected onto a query (q_j), key (k_j), and value (v_j) vector, and the dot product of q_3 is taken with each key vector k_j, j ∈ {1, 2, 3, 4}. Adding a mask matrix eliminates the influence of all elements later in the sequence via m_{i,j>i} = −∞, while contributions of previously sampled sequence elements remain unchanged with m_{i,j≤i} = 0, ensuring the autoregressive behavior. A softmax activation function is applied to all signals before a dot product is taken with the corresponding value vectors. The sum of all resulting signals provides the attention outcome for the input σ_3. This algorithm is the same as introduced in [32].
The attention mechanism then maps the queries and key-value pairs to an output for each sequence element, allowing connections to sequence elements with important information to be highlighted. For each sequence element σ_i, the dot product of the query vector q_i with the key vector k_j is evaluated for all j ∈ {1, ..., L}. We then add a masking term m_{i,j} to the signal, which is given by m_{i,j} = 0 for j ≤ i and m_{i,j} = −∞ for j > i (13). This ensures that the self-attention only considers previous elements in the sequence and does not look at later elements that still need to be determined in the autoregressive setting. Applying a softmax activation function to all signals after adding the mask ensures that the contributions of all later sequence elements with m_{i,j} = −∞ are driven to zero. We then take the dot product of each signal with the corresponding value vector v_j and sum all signals to generate the output of the attention mechanism. The complete attention formalism can thus be summarized as Attention(Q, K, V) = softmax(QK^T + M)V (14), where the mask matrix M has entries m_{i,j} as in Eq. (13).
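A minimal NumPy sketch of this masked self-attention follows; random matrices stand in for the trained W_q, W_k, W_v, and the 1/√(d_H/h) scaling of the multi-headed variant discussed below is omitted:

```python
import numpy as np

def masked_self_attention(x, w_q, w_k, w_v):
    """x: L x d_h embedded sequence; returns L x d_h attention output."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    L = x.shape[0]
    scores = q @ k.T                           # query-key dot products
    # Mask of Eq. (13): -inf strictly above the diagonal (j > i), else 0.
    mask = np.where(np.arange(L)[None, :] > np.arange(L)[:, None],
                    -np.inf, 0.0)
    scores = scores + mask
    # Softmax drives masked (later-element) contributions to zero.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v                         # Eq. (14)
```

The mask makes output i depend only on elements 1, ..., i, which is exactly the causality needed for autoregressive sampling.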
Here we further use multi-headed attention, as discussed in [32]. This approach linearly projects each query, key, and value vector to h vectors with individually trainable projection matrices. We thus end up with h heads with modified query, key, and value vectors on which the attention mechanism is applied; we choose h = 8 throughout this work, as we find it to yield satisfactory results. The linear projection further reduces the dimension of the query, key, and value vectors to d_H/h, so that the outputs of the individual heads can be concatenated to yield the total output dimension d_H of the attention mechanism. We then scale the outcome of each query-key dot product in Eq. (14) with the factor 1/√(d_H/h) [32].
The output of the multi-headed attention formalism is given by MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O with head_l = Attention(QW_l^Q, KW_l^K, VW_l^V), with output weight matrix W^O and query, key, and value weight matrices W_l^Q, W_l^K, and W_l^V for head l. A TF model given an input of dimension d_I has an embedding matrix W_I of dimension d_I × d_H, with embedding dimension d_H. The weight matrices in the multi-headed self-attention mechanism contribute further trainable variational parameters.
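The head splitting and concatenation can be sketched as follows; the per-head projection matrices of dimension d_H × d_H/h and the output matrix W^O are random stand-ins for trained parameters:

```python
import numpy as np

def multi_head_attention(x, heads_qkv, w_o):
    """x: L x d_h input; heads_qkv: list of (W_Q, W_K, W_V) per head,
    each d_h x (d_h/h); w_o: d_h x d_h output projection."""
    L = x.shape[0]
    mask = np.where(np.arange(L)[None, :] > np.arange(L)[:, None],
                    -np.inf, 0.0)
    outs = []
    for w_q, w_k, w_v in heads_qkv:
        q, k, v = x @ w_q, x @ w_k, x @ w_v
        d = q.shape[1]                          # head dimension d_h/h
        scores = (q @ k.T) / np.sqrt(d) + mask  # scaled, masked dot products
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        outs.append(w @ v)
    # Concatenating h heads of dimension d_h/h restores dimension d_h.
    return np.concatenate(outs, axis=1) @ w_o
```

The dimension bookkeeping (h heads of size d_H/h concatenated back to d_H) keeps the cost of multi-headed attention comparable to a single full-dimension head.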

Patched network models
The bottleneck of the RNN and TF wavefunction ansatz is the iteration of the network cell over the entire qubit sequence. This computationally expensive step needs to be done for each sample that is generated, as well as each time a wavefunction Ψ(σ; W) is calculated, which is required to evaluate non-diagonal observables, see Eq. (7). We reduce the number of iterations per network call by shortening the input sequence and in return increasing the dimension of the input vector.
As illustrated in Fig. 1, for two-dimensional Rydberg atom arrays we consider patches of p qubits arranged in squares or rectangles. We flatten these patches into binary input vectors of dimension d_I = p. This modification increases the network input dimension, which is, however, projected to the unaffected hidden state dimension in the RNN cell or the embedding dimension in the TF model. Thus, the computational cost of evaluating the network cell is barely affected by the increased input dimension, while the shorter sequence length leads to significantly reduced computational runtimes. In addition to this expected speed-up, we expect the patched network models to capture local correlations in the system with higher accuracy: neighboring qubit states are now processed in the same iteration, so their correlations are captured directly rather than having to be encoded in the propagated network state.
The network output uses a one-hot encoding of the patched quantum states, so that the output vector is of dimension d_O = 2^p. Each entry represents one possible state of the qubit patch, see Fig. 1d,e. This output dimension, and with it the computational cost of evaluating the softmax function, thus scales exponentially with the patch size. In this work, we only consider patches up to p = 2 × 2 for the patched network models and introduce large, patched TFs to deal with larger patch sizes.
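The one-hot output encoding amounts to a bijection between flattened binary patches and the 2^p basis states; a minimal sketch with hypothetical helper names:

```python
def patch_to_index(patch):
    """Flattened binary patch (list of p bits) -> index in 0..2**p - 1."""
    idx = 0
    for bit in patch:
        idx = (idx << 1) | bit
    return idx

def index_to_patch(idx, p):
    """Inverse map: index -> list of p bits (most significant first)."""
    return [(idx >> (p - 1 - k)) & 1 for k in range(p)]

def one_hot(idx, p):
    """One-hot output vector of dimension 2**p, as produced after softmax
    sampling collapses onto a single patch state."""
    v = [0.0] * (2 ** p)
    v[idx] = 1.0
    return v
```

For p = 2 × 2 this gives the d_O = 16 outcomes quoted below; the exponential growth of this vector is what motivates the LPTF construction for larger patches.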
The patched RNN with p = 2 × 2 has input dimension d_I = 4 and output dimension d_O = 16, so Eq. (8) leads to 70,032 trainable network parameters. For the same input and output dimensions, the patched TF model has 1,203,406 trainable network parameters, according to Eq. (17).
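The RNN count can be reproduced by assembling the layer dimensions listed in the Methods; the helper below is a sketch that assumes exactly the GRU and output-layer shapes stated there:

```python
def rnn_param_count(d_i, d_o, d_h=128):
    """Parameter count of the GRU-based RNN ansatz: three input and three
    recurrent weight matrices with six biases, plus two fully connected
    output layers with their biases."""
    gru = 3 * d_i * d_h + 3 * d_h * d_h + 6 * d_h
    fc = (d_h * d_h + d_h) + (d_h * d_o + d_o)
    return gru + fc
```

With d_I = 4 and d_O = 16 this reproduces the 70,032 parameters quoted for the p = 2 × 2 patched RNN.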

Large, patched transformer models
In the large, patched transformer (LPTF) model, we apply the TF network to a patch of p qubits. However, we truncate the TF model of Fig. 1e right after the transformer cell and do not apply the fully connected layers and the softmax activation function. Instead, we use the generated output state of the transformer cell as the initial hidden state of a patched RNN whose hidden-unit dimension matches the embedding dimension d_H of the TF model. This patched RNN breaks up the large input patch into smaller sub-patches of size p_s, where we always choose a sub-patch size of p_s = 2 × 2 in this work. We then use the patched RNN model to iteratively sample the quantum states of these sub-patches in the same way as when applying the patched RNN to the full system size. The only difference is that the initial hidden state is provided by the TF output, h_0 = h_TF, see Fig. 1f for an illustration.
The total number of trainable network parameters in the LPTF setup is then given by a combination of Eq. (8) and Eq. (17), where the two fully connected layers at the TF output are subtracted. We use the input dimension d_I = p for the patched TF and d_I = p_s for the patched RNN, as well as the output dimension d_O = 2^{p_s}. In this work, we choose p_s = 2 × 2, which yields 1,256,208 + 128p trainable parameters with d_H = 128, d_FF = 2048, and T = 2. For the choice p = 8 × 8, we thus get 1,264,400 variational network parameters.
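The quoted totals follow directly from the stated formula; a one-line check of the arithmetic:

```python
def lptf_param_count(p):
    """Stated LPTF parameter count for p_s = 2x2, d_H = 128,
    d_FF = 2048, T = 2; only the embedding term depends on the patch
    size p."""
    return 1_256_208 + 128 * p
```

Only the input embedding grows (linearly) with the patch size p, so enlarging the patches barely changes the model size.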

Computational complexity
The process of finding ground state representations with ANNs can be divided into two steps, the generation of samples from the network and the evaluation of energy expectation values according to Eq. (6) and Eq. (7) in each training iteration. We start by analyzing the sampling complexity for the different network architectures. As we choose the hidden and the embedding dimension d_H fixed and equal for all architectures, we treat it as a constant in the complexity analysis.
Generating a single sample σ from p_RNN(σ; W) encoded in an RNN requires N executions of the RNN cell, leading to a computational cost of O(N). By considering the patched RNN, we reduce the sequence length from N to N/p, so that only N/p RNN cells are evaluated. However, the output dimension in this case is 2^p, so each evaluation of the RNN cell requires 2^p products to evaluate the outcome probability distribution. This leads to an overall sampling complexity of O(N/p · 2^p) for the patched RNN.
In order to generate a single sample σ from p_TF(σ; W) encoded in the TF network, we similarly need to evaluate the transformer cell N times. However, the attention mechanism itself requires N multiplications that need to be evaluated in each pass through the network. Additionally, the transformer cell contains a projection of the embedded state to a vector of size d_FF ≫ d_H, which requires significantly more multiplication operations than an RNN cell evaluation. Thus, drawing a sample from the TF model comes at computational complexity O(N[N + d_FF]). When introducing the patched TF model, we similarly reduce the sequence length to N/p, so that the full network needs to be evaluated N/p times and the attention mechanism requires N/p multiplications. Also here the output dimension scales as 2^p, leading to a computational cost of O(N/p [N/p + d_FF + 2^p]).
In the LPTF, the transformer cell is followed by a patched RNN with p/p_s cells. Since each LPTF iteration requires the evaluation of one such RNN, we evaluate the transformer cell and p/p_s RNN cells N/p times to generate a single sample σ. While the TF network output is kept at embedding dimension d_H, the RNN output is of dimension 2^{p_s}, leading to a computational complexity of O(N/p [N/p + d_FF] + (N/p_s) 2^{p_s}). This shows a significant reduction of the sampling complexity compared to the patched TF model and explains the observed efficiency of our introduced LPTF architecture.
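The leading-order sampling costs can be compared numerically; the cost models below are sketches that drop constant factors and assume d_FF = 2048 and p_s = 2 × 2:

```python
D_FF, P_S = 2048, 4  # feed-forward dimension and sub-patch size assumed

def cost_patched_tf(n, p):
    # O(N/p * [N/p + d_FF + 2**p]): the softmax over 2**p outcomes
    # dominates for large patches.
    return (n // p) * (n // p + D_FF + 2 ** p)

def cost_lptf(n, p):
    # O(N/p * [N/p + d_FF] + N/p_s * 2**p_s): the exponential term only
    # involves the fixed sub-patch size p_s.
    return (n // p) * (n // p + D_FF) + (n // P_S) * 2 ** P_S
```

For p = p_s the two models coincide, while for large patches the LPTF avoids the 2^p blow-up entirely.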
Next, we consider the complexity of evaluating energy expectation values. While the evaluation of the diagonal part is given by a linear average over all N_s samples and thus scales linearly with the system size N, evaluating the off-diagonal part for each sample σ_s according to Eq. (7) requires the evaluation of Ψ(σ′; W) for all σ′ corresponding to σ_s with one atom flipped. This leads to N evaluations of Ψ(σ′; W) for each sample, each obtained by passing σ′ through the network architecture and obtaining the output probability p_RNN(σ′; W) or p_TF(σ′; W).
Thus, the patched RNN with N/p network cells needs to be evaluated N times for each sample, leading to a computational complexity of O(N_s 2^p N²/p) for obtaining the energy expectation value.
Similarly, the patched TF model with N/p iterations and N/p multiplications in the attention mechanism is evaluated N times for each sample, leading to a computational cost of O(N_s [N³/p² + d_FF N²/p + 2^p N²/p]). For the LPTF we accordingly obtain O(N_s [N³/p² + d_FF N²/p + 2^{p_s} N²/p_s]). The required memory scaling behaves similarly for the discussed network architectures. This scaling can be reduced using optimized implementation algorithms as discussed in the next section. While the TF and LPTF show a worse scaling than the RNN, the evaluation of the off-diagonal energy terms can be parallelized for these two models. Since no autoregressive sampling is required for this task, all N/p masked self-attention layers can be evaluated in parallel, significantly reducing the computational runtime. This parallelization is not possible for the RNN due to its recurrent nature, where the hidden state needs to be evaluated for each individual RNN cell before it can be passed to the next iteration.
The generation of samples with QMC is of computational complexity O(V N), with average interaction strength V over the system. The energy estimation also scales as O(V N) and only needs to be done at the end of the run after all samples have been generated, which is in contrast to the neural network approach that requires the evaluated energy in each training iteration [53]. While QMC thus shows much more promising computational complexity than all three neural network methods when considering the scaling to large system sizes, we observe that it requires longer computational runtimes than the LPTF for the system sizes considered in this work. Considering the uncertainties of expectation values, we find that QMC simulations require far more samples (N_s = 7 × 10^4 in Fig. 2, Fig. 3, and N_s = 7 × 10^5 in Fig. 4) than the ANN approaches (N_s = 512). However, the ANN approaches require the generation of N_s samples in each training iteration, so that overall more samples are generated. The higher uncertainties in the QMC simulations are caused by correlations in the generated sample chains, where autocorrelation times grow with increasing system sizes. Furthermore, ergodicity in the QMC sampling process is not guaranteed for large systems [53]. These problems do not arise in the exact and independent autoregressive sampling of the neural network algorithms, explaining the lower uncertainties observed for smaller sample sizes. At the same time, these observations limit accurate QMC simulations to small system sizes.

Implementation details
We train the network to find the ground state of the Rydberg Hamiltonian by minimizing the energy expectation value, which we evaluate using Eq. (6) and Eq. (7) with the Hamiltonian operator Ĥ, where Ψ(σ; W) denotes a wavefunction encoded in one of the discussed network models. To optimize the variational network parameters, we use the gradient descent algorithm in the same way as discussed in [6,7], with gradients ∂_W ⟨E⟩ ≈ (2/N_s) Σ_s Re[∂_W log Ψ*(σ_s; W) (E_loc(σ_s; W) − ⟨E⟩)] and local energy E_loc(σ; W) = Σ_{σ′} ⟨σ|Ĥ|σ′⟩ Ψ(σ′; W)/Ψ(σ; W). The training process requires the evaluation of the gradients of Ψ(σ; W). To reduce the necessary amount of memory, we always generate a batch of N_s samples from the network without evaluating the gradients. We then pass each sample through the network again to obtain the wavefunction amplitude with the corresponding gradients. This approach requires 2N_s network passes instead of N_s, but evaluating gradients on a given input sequence is less memory-consuming than evaluating gradients on an autoregressive process in PyTorch [60]. We can further reduce the required memory by dividing the total batch of N_s samples into mini-batches of K samples each, which are evaluated in separate processes. This reduces the memory scaling by a factor K/N_s per pass, while requiring N_s/K network passes instead of one. Thus, the smaller we choose the mini-batches, the less memory is required, but the longer the computational runtime. If not stated otherwise, we choose K = 256 in this work.
Considering the off-diagonal term in the Rydberg Hamiltonian, its contribution to the energy expectation value is given by E_off = Σ_{i=1}^N ⟨σ_i^x⟩. Thus, calculating the local energy E_loc(σ_s; W) using Eq. (7) requires, for each sampled state σ_s, the evaluation of Ψ(σ′; W) for all σ′ that correspond to the sampled state with one atom flipped from the ground to the excited state or vice versa [6,7]. Instead of passing N states through the network for each sample we generate, we can reduce the required memory and accelerate our algorithm by splitting the atom sequence into D equally sized parts. In each part, the wavefunction amplitude is evaluated for the states where each of the N/D atoms is flipped. When calculating the wavefunction amplitudes using sequential networks, flipping one atom only affects the calculation for atoms that appear later in the sequence. We store the network outcome for the original state σ_s after every N/D atoms and evaluate each group starting from this stored value. We then only need to pass the sequence from the first atom of the group to the last atom in the system through the network. This ansatz can be used since no gradient needs to be evaluated on this off-diagonal term, and it corresponds to using mini-batches of atoms. This way we reduce the amount of required memory by a factor D/N per pass, while requiring N/D network passes instead of only one. While we now evaluate the network N/D times, the sequence length at iteration d is reduced from N/p to (N − [d − 1]D)/p, leading to an accelerated evaluation of the local energies since the input sequence length is reduced in most cases. In this work, we always split the sequence of atoms into D = N/p parts, with p the patch size in the patched network models.
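The prefix-caching idea behind this grouping can be sketched for a generic sequential model (a toy conditional again stands in for the network): flipping atom j leaves all conditionals before j unchanged, so a cached prefix log-probability can be reused instead of re-running the full sequence:

```python
import math

def cond(prefix):
    """Toy stand-in for the network's conditional p(sigma_i = 1 | prefix)."""
    return 0.3 if (prefix and prefix[-1] == 1) else 0.6

def log_prob_from(sigma, start, cached_log_prefix):
    """Log-probability of sigma, resuming from a cached value that already
    accounts for the first `start` atoms."""
    lp = cached_log_prefix
    for i in range(start, len(sigma)):
        p1 = cond(sigma[:i])
        lp += math.log(p1 if sigma[i] == 1 else 1.0 - p1)
    return lp
```

Restarting from the stored state after the flipped atom's group reproduces the full evaluation while shortening the sequence that must be re-passed through the network.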
FIG. 3. Patch size scaling in large, patched transformers (LPTFs). a, Absolute energy difference between ⟨E⟩ [Eq. (3)] evaluated with N_s = 512 LPTF samples and ⟨H⟩_QMC evaluated with N_s = 7 × 10^4 quantum Monte Carlo (QMC) samples, for ground states of Rydberg arrays with N = 12 × 12 up to N = 40 × 40 (colors), as a function of the patch size p. Error bars denote the standard error of the sampling mean. The black dashed line indicates the QMC uncertainty, while black-edged data points show absolute values of energies below the QMC results. b, Computational runtime per training iteration on NVIDIA Tesla P100 GPUs averaged over 2 × 10^4 iterations of LPTFs as in a. Different shapes denote different mini-batch sizes K, with K = 256 for circles, K = 128 for up-pointing triangles, K = 64 for squares, and K = 32 for down-pointing triangles, see Methods. Error bars are smaller than the data points.

The remaining Adam optimizer parameters are β_2 = 0.999 and learning rate Δ = 0.0005. The GRU cell has three internal weight matrices of dimension d_I × d_H, with input dimension d_I and hidden unit dimension d_H, and three internal weight matrices of dimension d_H × d_H. It furthermore has six internal bias vectors of size d_H, and we add two fully connected layers with weight matrices of dimension d_H × d_H and d_H × d_O and biases of size d_H and d_O, respectively, to obtain the desired RNN output vector with output dimension d_O [7, 55]. Single-qubit inputs give d_I = 1 and d_O = 2 as we use a one-hot encoded output. Together with d_H = 128 as chosen in this work, this leads to a total of 67,074 trainable network parameters.

The weight matrices in the multi-headed self-attention mechanism have dimensions d_H × d_H/h for W_l^Q, W_l^K, and W_l^V, and d_H × d_H for W^O. Each weight matrix also comes with a bias whose size equals the column dimension. The two feed-forward layers in the transformer cell have weight matrices of dimension d_H × d_FF and d_FF × d_H with corresponding biases, and the two norm operations add 4d_H trainable parameters. The transformer cell, containing the attention mechanism, the feed-forward layers, and the norm operations, is applied T times with independent variational parameters. After the transformer cell we add two fully connected layers with weight matrices of dimensions d_H × d_H and d_H × d_O for output dimension d_O. Both layers come with corresponding biases. Single-qubit inputs give d_I = 1 and d_O = 2, using one-hot encoded output. With d_H = 128, d_FF = 2048, and T = 2, as chosen throughout this work, the TF architecture has a total of 1,201,216 trainable variational parameters.