## Abstract

Turbulence is a complex and chaotic state of fluid motion. Atmospheric turbulence within the Earth’s atmosphere poses fundamental challenges for applications such as remote sensing, free-space optical communications and astronomical observation due to its rapid evolution across temporal and spatial scales. Conventional methods for studying atmospheric turbulence face hurdles in capturing the wide-field distribution of turbulence due to its transparency and anisoplanatism. Here we develop a light-field-based plug-and-play wide-field wavefront sensor (WWS), facilitating the direct observation of atmospheric turbulence over 1,100 arcsec at 30 Hz. The experimental measurements agreed with the von Kármán turbulence model, further verified using a differential image motion monitor. Attached to an 80 cm telescope, our WWS enables clear turbulence profiling of three layers below an altitude of 750 m and high-resolution aberration-corrected imaging without additional deformable mirrors. The WWS also enables prediction of the evolution of turbulence dynamics within 33 ms using a convolutional recurrent neural network with wide-field measurements, leading to more accurate pre-compensation of turbulence-induced errors during free-space optical communication. Wide-field sensing of dynamic turbulence wavefronts provides new opportunities for studying the evolution of turbulence in the broad field of atmospheric optics.

## Main

As pointed out by Richard P. Feynman, turbulence is the most important unsolved problem of classical physics^{1}. Atmospheric turbulence, in particular, affects the propagation path of light due to the non-uniform distribution of refractive index, resulting in severe wavefront aberrations^{2}. Such spatially non-uniform aberrations decrease the signal-to-noise ratio and spatial resolution of optical systems, making atmospheric turbulence a fundamental obstacle in free-space optical communications^{3,4}, astronomical imaging^{5,6} and remote sensing^{7,8}. However, a comprehensive numerical description remains elusive and there is a lack of instrumentation that is capable of quantitatively observing the dynamic atmospheric turbulence across a broad field^{5}. This stems from the fact that atmospheric turbulence is transparent and anisoplanatic, rendering existing techniques insufficient^{9,10}.

Various efforts have been invested over the past decades. Schlieren imaging^{11,12,13} with a coherent light source can qualitatively increase the contrast of refractive-index changes within transparent gases. By contrast, methods such as using a differential image motion monitor (DIMM)^{14,15,16}, lidar-based Doppler radar detection^{17,18,19}, temperature fluctuation techniques^{20,21} and image-based approaches using deep learning^{22} offer valuable quantitative assessments of atmospheric turbulence. These methods gauge the statistical strength of turbulence, and involve parameters such as the Fried parameter *r*_{0} or the refractive-index structure constant *C*_{n}^{2} (ref. ^{23}), which characterize the averaged intensity of turbulence. However, these detection approaches fail to capture the instantaneous aberrated wavefronts, which are critical for real-time applications. Adaptive optics is a powerful technique that uses wavefront sensors, deformable mirrors and control systems to measure and correct aberrated wavefronts for incoherent light. In the realm of wavefront detection, traditional adaptive optics systems predominantly rely on Shack–Hartmann wavefront sensors (SHWS). Although recent advancements have introduced the plenoptic wavefront sensor^{24,25} and various innovative methods have aimed at expanding the dynamic range and enhancing the sensing performance^{26,27}, these developments focus primarily on measurements contaminated by a single isoplanatic aberration within a relatively small field of view (FOV) and do not effectively address anisoplanatism. To tackle this issue, researchers have ventured into advanced techniques such as multi-conjugate adaptive optics and ground-layer adaptive optics^{28,29,30}, using multiple wavefront sensors to broaden the sensing field to approximately one arcminute. However, extending the field beyond this scale while preserving high precision remains challenging^{9}.
Furthermore, the intricate optical configurations in these techniques often lead to bulky system designs and high costs. Recently, we developed digital adaptive optics to estimate and correct spatially non-uniform aberrations through a meta-imaging sensor^{31}. However, the iterative reconstruction process incurs substantial computational costs, hindering its practical application in the real-time observation of dynamic turbulence.

Here we report a light-field-based plug-and-play wide-field wavefront sensor (WWS), facilitating the direct observation of atmospheric turbulence across a wide FOV spanning 1,100 arcsec at video rate (30 Hz). A coarse-to-fine slope-estimation algorithm was developed to detect spatially non-uniform wavefronts with light-field measurements at both high precision and high speed. The extended FOV enables successful profiling of the surface-layer turbulence (<750 m altitude) to distinguish three layers with an altitude resolution of up to 43 m. On the basis of Taylor’s ‘frozen flow’ hypothesis, we can predict the evolution of turbulence dynamics 33 ms in advance using a convolutional recurrent neural network with wide-field measurements, leading to more accurate pre-compensation of turbulence-induced errors during free-space optical communications. Compared with traditional wavefront sensors, we obtain a remarkable 195 nm reduction in the root mean squared error (RMSE) of the predicted wavefronts at the central wavelength of 525 nm. The WWS is highly efficient and cost-effective, as it requires no modification of the major optical systems but only equipment installation on the native image plane. Our method is anticipated to disclose the evolutionary mechanisms of atmospheric turbulence, paving the way towards widespread applications.

## Results

### Principle of the WWS

Atmospheric turbulence can be regarded as spatially variant aberrations that manifest themselves as phase modulations of the pupil plane. Conventional imaging devices are limited to detecting the intensity information of three-dimensional (3D) scenes projected onto a two-dimensional (2D) sensor, and are unable to capture phase disturbances. To overcome this limitation, WWS uses a microlens array (MLA) at the image plane with a complementary metal–oxide–semiconductor (CMOS) sensor placed at the back focal plane of the MLA to detect the spatial variance of the coherence in a parallel way (Fig. 1a). In contrast to traditional wavefront sensors such as SHWS, which are typically placed at the conjugated pupil plane to detect the average wavefront within a small FOV (Fig. 1b), WWS is directly placed on the native image plane to obtain the wavefront information across a large FOV. In this set-up, each microlens samples the wavefront from a different FOV, effectively minimizing cross-talk. The wavefront captured along each view direction (corresponding to a local FOV) is a projection of atmospheric turbulence at various altitudes (Fig. 1c). On arrival of the incident light at each localized region of the MLA, it is decoupled into different angles and mapped onto different sensor pixels at the back focal plane, corresponding to different subapertures (Fig. 1a). Subsequently, the subaperture images can be formed by extracting and recombining pixels at the same relative position behind different microlenses together. This process is called pixel realignment^{32} (Extended Data Fig. 1a).
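The pixel-realignment step can be sketched in a few lines of NumPy. This is a minimal illustration only (the function name and array layout are ours, not taken from the paper's code): each microlens covers a p × p block of sensor pixels, and gathering the pixel at the same relative position (u, v) behind every microlens yields one subaperture image.

```python
import numpy as np

def realign_subapertures(lf_image, p):
    """Rearrange a raw light-field image into subaperture images.

    lf_image : 2D array of shape (Ny*p, Nx*p); each microlens covers a
               p x p block of sensor pixels.
    Returns an array of shape (p, p, Ny, Nx): entry [u, v] is the
    subaperture image formed by the pixel at relative position (u, v)
    behind every microlens.
    """
    H, W = lf_image.shape
    ny, nx = H // p, W // p
    # split into (microlens_y, u, microlens_x, v) blocks, then regroup
    blocks = lf_image[:ny * p, :nx * p].reshape(ny, p, nx, p)
    return blocks.transpose(1, 3, 0, 2)  # (u, v, microlens_y, microlens_x)
```

For the prototype described later (15 × 15 pixels per microlens), this produces 225 subaperture images, each at the microlens resolution.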

During imaging, turbulence introduces different aberrated wavefronts at the pupil plane for different local isoplanatic regions. For each sub-FOV with an isoplanatic aberration, the wavefront can be approximated as multiple segmented linear phase modulations at different subapertures, resulting in different lateral shifts for different subaperture images (Fig. 1d). The lateral shifts are proportional to the local slope of each segmented wavefront (see detailed derivation in the Methods). When the aberrations are distributed across the wide FOV in a spatially variant manner, we can observe continuous distortions across each subaperture image. In contrast to the plenoptic wavefront sensor, which considers an averaged slope value for a local region, we have developed a coarse-to-fine slope-map estimation strategy for the calculation of anisoplanatic turbulence. The slope map is continuous and characterized by a few sparsely distributed control points, each of which represents an isoplanatic sub-FOV. Initially, we select a reference subaperture image and calculate the overall shifts for all of the control points across other subaperture images. Subsequently, we perform a precise computation of the subpixel translation for each individual control point (Fig. 1e and Extended Data Fig. 1b). The quantity of sparse control points is adjustable based on the required isoplanatic sub-FOV size, typically around 10 arcsec in the visible spectrum. Moreover, increasing the number of sub-FOVs does not lead to higher computational costs (Extended Data Fig. 1c). After successfully acquiring the slope maps, we can reconstruct the turbulence wavefronts by projecting these maps onto Zernike polynomials. This projection is facilitated using a pretrained multilayer perceptron (MLP) network, which offers a simplified alternative to more complex architectures (Fig. 1e and Methods). This approach balances precision with computational efficiency, making the WWS suitable for video-rate observations (Extended Data Fig. 2).

To determine the optimal parameters of the MLA, the sensor pixel size is first chosen according to the diffraction-limited resolution. Then, a larger pitch size of each microlens with the fixed pixel size leads to more subapertures, generating a denser sampling of the wavefront. However, a larger pitch size also reduces the spatial resolution of each subaperture image, compromising the accuracy of the slope-map estimation. We conducted a numerical simulation to identify the ideal configuration, which indicated that each microlens covering 15 × 15 pixels offers the best performance for aberration estimation and correction (Extended Data Fig. 3a,b).

### Characterization of the WWS

To quantitatively evaluate the WWS, we conducted both simulations and imaging experiments. We first simulated the images captured by the WWS under anisoplanatic atmospheric turbulence (Extended Data Fig. 3c,d). Unlike our previous meta-imaging sensor, which required multiple frames with lateral shifting to enhance the precision of aberration estimation, the WWS leverages the prior knowledge that turbulence-induced distortion varies smoothly and continuously. As a result, the WWS achieves a comparable performance with subpixel accuracy using only a single snapshot (Fig. 2a and Extended Data Fig. 3d,e). This capability is crucial for capturing the dynamics of transient turbulence. By calculating the slopes with a global distribution of textures instead of local image features around several pixels, the WWS also demonstrates strong noise robustness, even for a signal-to-noise ratio as low as 10 dB (Extended Data Fig. 4). We also compared the performance of our method with that of a correlating Shack–Hartmann wavefront sensor (C-SHWS; see detailed simulation parameters in the Methods^{33,34}). The WWS shows a comparable performance to the C-SHWS across the entire FOV (Fig. 2b), but the C-SHWS is still limited to sensing a single isoplanatic region at a time (Extended Data Fig. 5). Through parallel computing using multiple graphics processing units, our WWS achieves wavefront sensing across an FOV of 1,100 arcsec in under 30 ms, enabling the video-rate observation of a wide range of atmospheric turbulence. This speed is over 10,000 times faster than in previous studies^{31,35} (Fig. 2c).

To further validate the performance, we conducted imaging experiments of the lunar surface using the 80 cm Tsinghua-National Astronomical Observatories of China (Tsinghua-NAOC) telescope. Turbulence-induced scanning (TIS) is proposed for the WWS to enhance the spatial sampling rate of subaperture images using the intrinsic tip–tilt components when imaging through the turbulence (see Methods and Extended Data Fig. 6a,b). TIS can effectively eliminate the motion artefacts due to the turbulence dynamics without the need for active scanning (Extended Data Fig. 7a–e). The reconstructed turbulent wavefronts can then be applied directly in the incoherent synthetic aperture algorithm^{31} to achieve high-resolution imaging, with a comparable performance to previous iterative algorithms but a much faster speed. Both the contrast and the resolution outperformed those of traditional 2D imaging, without the need for additional deformable mirrors (Fig. 2d,e). To further demonstrate the stability in long-term sensing scenarios, we plotted a 60 s (15 Hz, 900 frames) kymograph (Fig. 2f–h) of the dashed white line in Fig. 2e. As the reconstruction performance is very sensitive to the accuracy of aberration estimation, our results further validate the precision of wavefront sensing achieved using the WWS, facilitating high-speed aberration-corrected imaging across a wide FOV with low computational costs.

### Observation of wide-field atmospheric turbulence

Observing wide-field atmospheric turbulence has long been a challenge. With WWS, we can now achieve the direct observation of atmospheric turbulence evolution over 1,100 arcsec (each sub-FOV covers 36 × 36 arcsec for better visualization in Fig. 3a and Supplementary Video 1). The frozen-flow characteristics^{36,37,38,39} of turbulence can be clearly observed as the phenomenon gradually moves from the upper right to the lower left (Fig. 3a). The finer version in Fig. 3b offers a more detailed revelation of the atmospheric distribution (each sub-FOV covers 18 × 18 arcsec). To analyse the turbulence statistics, we calculated the inter-mode normalized covariance matrix (Fig. 3c and Methods) of the 4th–35th Zernike-mode coefficients (according to Noll^{40}) based on 1,000 frames of the experimental measurements (Supplementary Table 1). The experimental covariance matrix accords well with the von Kármán turbulence model (Methods and Supplementary Table 2) with a 10 m outer scale (Fig. 3d and Extended Data Fig. 8)^{41,42}, further demonstrating the effectiveness of our method. In addition, we conducted a statistical analysis of the atmospheric turbulence wavefronts at different time intervals (Fig. 3e). For each interval, we can obtain a 2D distribution of the Fried parameter *r*_{0} across the FOV, which indicates the spatial variance of the turbulence strength (insets in Fig. 3e). Our results show quite a uniform distribution of *r*_{0} across nearly 1,000 arcsec during the observation period. Then, we compared our observations obtained from 00:20 to 00:50 GMT + 8 on 8 April 2023 (Xinglong Observatory) with DIMM measurements^{26,27}. The absolute values of the two measurements are closely aligned and exhibit a similar trend of variation (Fig. 3e). 
However, owing to the differences in observation duration (each observation session lasted about 30 s for the WWS and around 1–2 min for the DIMM), the WWS measurements showed localized intensity oscillations, in line with the fluctuation characteristics of atmospheric turbulence. Furthermore, our method enabled dynamic analysis to explore the temporal evolution of atmospheric turbulence (Fig. 3f). The temporal correlation diminishes progressively as the time intervals increase, with the rate of decay influenced by various factors such as the wind speed and the optical aperture size of the telescope. It is important to note that post-decoherence atmospheric turbulence does not exhibit a perfectly zero-mean distribution, primarily due to the dome effect during the experiments^{43,44,45}.
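The inter-mode normalized covariance used in this analysis can be computed directly from the per-frame Zernike coefficients. The following is a sketch (function name is ours, and we assume a correlation-style normalization to unit diagonal):

```python
import numpy as np

def normalized_covariance(coeffs):
    """Inter-mode normalized covariance of Zernike coefficients.

    coeffs : (n_frames, n_modes) array, e.g. the 4th-35th Zernike
             coefficients estimated in each of 1,000 frames.
    Returns an (n_modes, n_modes) matrix with unit diagonal.
    """
    c = coeffs - coeffs.mean(axis=0)   # remove the static (system) component
    cov = c.T @ c / (len(c) - 1)       # sample covariance between modes
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)        # scale to unit diagonal
```

The resulting matrix can then be compared entry-by-entry against the model covariance derived from the von Kármán statistics.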

WWS measurements can also be effectively combined with methods such as slope detection and ranging (SLODAR) to achieve the precise profiling of atmospheric turbulence, restoring the 3D turbulence distribution^{46,47}. Conventionally, SLODAR requires multiple SHWS pairs targeting various view directions or equipped on different telescopes to calculate the cross-correlations between their wavefronts^{48,49}. A larger angular separation between each pair, denoted as δ*θ* (Fig. 3b), introduces a larger baseline, enabling a finer altitude resolution δ*h* for turbulence profiling, especially for near-ground layers (Methods). Meanwhile, more independent pairs with the same δ*θ* yield more reliable results. Using our method, a single WWS can obtain large numbers of cross-correlation results for different δ*θ* values. These diverse results can then be cross-referenced against each other, enhancing the performance of turbulence profiling. We selected 300 frames of our experimental measurements with each sub-FOV covering 36 × 36 arcsec. The cross-correlations of wavefront slopes were computed and temporally averaged with various reference sub-FOVs (some results are shown in Fig. 3g). The intensity distributions along the orange dotted lines (Fig. 3g) indicate the variation in turbulence strength along different altitudes. Each δ*θ* corresponds to 40 independent sub-FOV pairs in our experiment, and the averaged profiles were plotted (Fig. 3h). As δ*θ* was increased to 180 arcsec, we successfully recovered three turbulence peak profiles at distinct altitudes under 750 m.
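The correlation-map accumulation at the heart of this analysis can be sketched as follows. This is a schematic NumPy implementation (function name is ours); the normalization by the overlap count for each subaperture separation follows the SLODAR convention described in the Methods:

```python
import numpy as np

def slodar_map(s1, s2):
    """Time-averaged cross-correlation map of wavefront slopes.

    s1, s2 : arrays of shape (T, n, n) - slopes on an n x n subaperture
    grid for two sub-FOVs over T frames. Returns the (2n-1, 2n-1) map
    C(di, dj), normalized by the number of overlapped subapertures.
    """
    T, n, _ = s1.shape
    C = np.zeros((2 * n - 1, 2 * n - 1))
    for di in range(-(n - 1), n):
        for dj in range(-(n - 1), n):
            i0, i1 = max(0, -di), n - max(0, di)
            j0, j1 = max(0, -dj), n - max(0, dj)
            a = s1[:, i0:i1, j0:j1]
            b = s2[:, i0 + di:i1 + di, j0 + dj:j1 + dj]
            overlap = (i1 - i0) * (j1 - j0)   # O(di, dj)
            C[di + n - 1, dj + n - 1] = (a * b).sum(axis=(1, 2)).mean() / overlap
    return C
```

A turbulent layer common to both sub-FOVs appears as a peak whose offset from the centre scales with the layer altitude; a purely ground-layer component peaks at zero offset.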

### Turbulence prediction with the WWS

The atmospheric turbulence results in signal fading and elevated bit error rates in free-space optical communication, thereby diminishing the reliability of communication links. To address this problem, wavefront pre-compensation has been developed^{2}. This technique involves wavefront sensing in the downlink, followed by wavefront compensation in the uplink (Fig. 4a). However, the time discrepancy δ*t* between the uplink and downlink can introduce compensation errors due to the rapid spatiotemporal evolution of atmospheric turbulence. The wide-field measurements of the WWS make it feasible to predict the turbulence evolution on the basis of Taylor’s frozen-flow hypothesis^{36,37} (Fig. 4b). It posits that the spatial movement of turbulence within a specific time window can be treated as a unified translation of the wavefronts:

$$\varphi \left(\vec{r},{t}_{2}\right)=\varphi \left(\vec{r}-\vec{v}\left({t}_{2}-{t}_{1}\right),{t}_{1}\right)$$

For a specific position \(\vec{r}\) within the entire FOV, t_{2} > t_{1} indicates different times, \(\varphi \left(\vec{r},t\right)\) denotes the turbulence wavefront and \(\vec{v}\) represents the transverse velocity. Conventional methods, however, are typically confined to detecting a local wavefront \(\varphi \left(\vec{{r}_{1}},{t}_{1}\right)\) and \(\vec{v}\) within a small FOV. Accurately predicting \(\varphi \left(\vec{{r}_{1}},{t}_{2}\right)\) or \(\varphi \left(\vec{{r}_{2}},{t}_{1}\right)\) on the basis of solely measuring \(\varphi \left(\vec{{r}_{1}},{t}_{1}\right)\) is challenging due to the intricate and nonlinear characteristics of turbulence. Leveraging the wide-field measurements by the WWS, we thus developed a residual convolutional long short-term memory (ConvLSTM) network (Fig. 4c and Methods) to achieve the precise prediction of atmospheric turbulence in advance (Supplementary Video 2), which can enable more accurate pre-compensation in free-space optical communications.
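The core recurrent building block of such a network can be sketched as a single ConvLSTM cell, an LSTM whose gates are 2D convolutions so that the hidden state preserves the spatial layout of the wavefront maps. This is an illustrative sketch only: the channel counts, kernel size and readout head below are our own choices, not the trained residual architecture described in the Methods.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: LSTM gating computed by 2D convolutions."""

    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        self.hid_ch = hid_ch
        # one convolution produces all four gates (input, forget, output, candidate)
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# Feed five wavefront frames through the cell, then read out a prediction
# of the sixth frame with a 1x1 convolution head (hypothetical readout).
cell = ConvLSTMCell(in_ch=1, hid_ch=8)
head = nn.Conv2d(8, 1, 1)
frames = torch.randn(5, 1, 1, 32, 32)        # (time, batch, channel, H, W)
h = torch.zeros(1, 8, 32, 32)
c = torch.zeros(1, 8, 32, 32)
for t in range(5):
    h, c = cell(frames[t], (h, c))
pred = head(h)                               # predicted sixth-frame wavefront
```

Stacking such cells (with residual connections, as in the paper) lets the network accumulate the translation of wavefront structures across the input frames before extrapolating them forwards in time.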

In our experiments, we selected five frames at 30 Hz as the input to predict the turbulence wavefront at the sixth frame (about 33 ms later). We used two metrics to evaluate the prediction performance: the *R*^{2} and the RMSE of the residual wavefront. The latter metric is directly related to the seeing condition. We divided our time-lapse measurements with a total of 6,000 frames into a training set (the first 5,400 frames) and a testing set (the following 600 frames). In contrast to methods targeting a small isoplanatic field^{50}, our approach enables prediction over a wide area that ranges from 51 to 356 arcsec (along the diagonal) using the wide-field measurements up to 865 arcsec as the input. On the basis of the frozen-flow hypothesis, the turbulence distribution over a larger FOV provides more information for precise and stable predictions (Extended Data Fig. 9a,b). The wind speed plays a critical role in defining the maximum size of the FOV that can effectively contribute to the prediction. Similarly, the number of input frames also influences the outcome. Our ablation study indicated that the accuracy of the predictions reaches a saturation point after the input of five frames (Extended Data Fig. 9c,d). When predicting a single sub-FOV covering 51 arcsec, the WWS improved the mean value of *R*^{2} from 0.45 to 0.86, exhibiting a better performance with reduced variance than traditional wavefront sensors (Fig. 4d). Correspondingly, the RMSE of the residual wavefront was reduced by 195 nm with the central wavelength at 525 nm (Fig. 4e). A typical example is shown in Fig. 4f with an improvement of the signal intensity after pre-compensation, which is hard to obtain using traditional methods. We further evaluated the generalization capability of our network. By fine-tuning the network with a modest dataset from another day (540 frames), we can attain a comparable performance (Fig. 4g,h).
Moreover, by training and testing the network with different temporal sampling rates, we found that a higher frame rate can increase the prediction accuracy further (Extended Data Fig. 9e,f). The experimental results demonstrate the potential of atmospheric turbulence prediction using wide-field observations and deep learning, which facilitates turbulence correction in broad applications.
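The two evaluation metrics can be written compactly as follows (a sketch; the wavefronts are passed as arrays in consistent units, e.g. nanometres, and the function names are ours):

```python
import numpy as np

def r_squared(pred, true):
    """Coefficient of determination between predicted and measured wavefronts."""
    ss_res = np.sum((true - pred) ** 2)
    ss_tot = np.sum((true - true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def residual_rmse(pred, true):
    """RMSE of the residual wavefront after pre-compensation (input units)."""
    return float(np.sqrt(np.mean((pred - true) ** 2)))
```

A perfect prediction gives *R*^{2} = 1 and zero residual RMSE; predicting only the mean wavefront gives *R*^{2} = 0.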

## Discussion

In this work, we introduced the WWS to directly observe wide-field atmospheric turbulence. The WWS is a cost-effective plug-and-play solution that can be easily adapted to most existing systems for quantitative wide-field wavefront sensing without additional modification. This flexibility liberates turbulence studies from the constraints of complicated optical configurations. Moreover, the scope of the turbulence observation using the WWS can be extended from the atmosphere to the ground surface (Extended Data Fig. 10a,b). In addition to using an extended source such as the moon, point sources such as sparse stars can also serve as targets for turbulence observation (Extended Data Fig. 10c–f). However, the long exposure time required will blur the light-field measurements of the WWS (30 Hz) due to the dynamic turbulence, whereas traditional adaptive optics can facilitate long-exposure imaging with real-time feedback of up to kilohertz. Future advancements in computational resources and high-speed cameras may address this problem. We anticipate that the wide-field observation capabilities of the WWS will unlock new possibilities in studying the evolution of atmospheric turbulence and support diverse practical applications.

## Methods

### Experimental set-up and data processing

The hardware instrument of our WWS primarily consists of a CMOS with an MLA attached on top. For our prototype, we used a Flare 48M30-CX camera (IO Industries) with a resolution of 7,920 × 6,004 pixels to enable wide-field observation. The sensor used was a CMOSIS 50000 with a pixel size of 4.6 μm and a peak wavelength of quantum efficiency at 525 nm. Each microlens has a diameter of 69 μm, covering a region of 15 × 15 pixels. A higher spatial sampling rate of the pupil plane was achieved by covering more pixels with a single microlens. The MLA used has an F-number of 10 and a focal length of 690 μm, tailored to match the telescope specifications. It should be noted that all experiments were conducted using the 80 cm Tsinghua-NAOC telescope at the Xinglong Observatory of the NAOC. The CMOS photosensitive area was positioned at the back focal plane of the MLA, and the relative rotation, pitch and yaw between the two components were adjusted using a compact five-axis stage (PY005, Thorlabs). During the experiments, the prototype was placed directly at the image plane of the telescope. The exposure time, shooting time and acquisition frame rate for all of the observation measurements in this paper are provided in Supplementary Table 1. The 1st–35th Zernike coefficients for atmospheric turbulence were estimated, following the conventions according to Noll^{40}. As turbulence is a zero-mean random process, we obtained the system aberration by calculating the mean value over a period along each FOV. In turbulence correction (Fig. 2d,e,g), the 1st–35th Zernike modes were utilized, retaining the system aberration. For atmospheric turbulence prediction (Fig. 4), the 4th–35th Zernike modes were used, excluding the system aberration component.
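The separation of the static system aberration from the zero-mean turbulence described above can be sketched as follows (a minimal illustration with a hypothetical function name):

```python
import numpy as np

def split_system_aberration(coeffs):
    """Separate static system aberration from dynamic turbulence.

    coeffs : (n_frames, n_modes) per-frame Zernike coefficients for one FOV.
    Because turbulence is (approximately) a zero-mean random process, the
    temporal mean estimates the static system aberration, and the residual
    is the turbulence contribution.
    """
    system = coeffs.mean(axis=0)
    turbulence = coeffs - system
    return system, turbulence
```

The same mean-over-time estimate is computed independently for each sub-FOV, since the system aberration is itself spatially variant.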

### Coarse-to-fine slope-estimation algorithm

According to the principle of the WWS, the impact of atmospheric turbulence on imaging can be interpreted as distorted subaperture images through the MLA. In the presence of isoplanatic aberration Φ^{0}(*f*_{x}), there will be a translational difference between subaperture images, which can be expressed as follows:

$${\left\Vert {H}_{u}\left(x\right)\right\Vert }^{2}={\left\Vert {H}_{u}^{0}\left(x+\Delta {s}_{u}^{0}\right)\right\Vert }^{2}$$

where ||*H*_{u}(*x*)||^{2} is the *u*th subaperture image and \({\left\Vert {H}_{u}^{0}\left(x\right)\right\Vert }^{2}\) is its unaberrated counterpart, \({\Phi }_{u}^{0}\left(\;{f}_{x}-{f}_{x}^{\;u}\right)\) corresponds to the aberrated phase Φ^{0}(*f*_{x}) at the subregion *P*_{u} of the pupil plane, *f*_{x} represents the coordinates in the pupil plane; \({f}_{x}^{\;u}\) denotes the centre of *P*_{u}; *x* is defined in the image plane, representing a unique FOV direction (Fig. 1c); \(\Delta {s}_{u}^{0}\) denotes the relative displacement of *H*_{u} caused by Φ^{0}(*f*_{x}), which is also proportional to the overall phase gradient^{51} of Φ^{0}(*f*_{x}) at \({f}_{x}={f}_{x}^{\;u}\). Therefore, the isoplanatic aberration Φ^{0}(*f*_{x}) will introduce a uniform lateral shift \(\Delta {s}_{u}^{0}\) for ||*H*_{u}(*x*)||^{2} (Fig. 1d). However, under anisoplanatic aberrations Φ(*x*,*f*_{x}), each local region of ||*H*_{u}(*x*)||^{2} has its own lateral shift, resulting in global image distortion Δ*s*_{u}(*x*) (Fig. 1a). In practice, we designate a reference subaperture and compute its slope map Δ*s*_{u}(*x*) with respect to other subaperture images (Fig. 1e).
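The proportionality between a segmented linear phase and a lateral image shift is just the Fourier shift theorem, which can be checked numerically in one dimension (a minimal sketch; the image, ramp slope and sizes are arbitrary illustrative values):

```python
import numpy as np

# A linear phase ramp applied in the frequency domain translates the image:
# multiplying by exp(-i*2*pi*s*f) shifts the signal by s pixels.
n = 64
x = np.arange(n)
img = np.exp(-0.5 * ((x - 20.0) / 3.0) ** 2)   # 1D "image" feature at pixel 20

f = np.fft.fftfreq(n)                          # frequency (pupil-like) coordinates
shift = 5.0                                    # slope of the linear phase ramp
ramp = np.exp(-2j * np.pi * shift * f)         # segmented linear phase modulation
shifted = np.fft.ifft(np.fft.fft(img) * ramp).real
# the feature moves from pixel 20 to pixel 25
```

Measuring the displacement of each subaperture image therefore recovers the local wavefront slope at the corresponding pupil subregion.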

To achieve subpixel-precision slope maps, we propose a coarse-to-fine slope-estimation algorithm based on the PyTorch framework^{52}. This algorithm aims to estimate the spatially non-uniform lateral shifts between distorted image pairs, denoted as *I*_{0} and *I*_{t}, specifically caused by anisoplanatic turbulence. As the subaperture images realigned from the light-field image exhibit varying background intensity distributions, which is detrimental to slope estimation, we normalize *I*_{0} and *I*_{t} using intensity maps *M*_{0} and *M*_{t} to achieve a de-intensified version with amplified features, denoted as *U*_{0} and *U*_{t}. Here, *M*_{0} and *M*_{t} are obtained by applying a 0.05-fold adaptive average pooling and a 20-fold bicubic interpolation, as illustrated in Extended Data Fig. 1b. The objective function is defined as below:

$$\mathop{\min }\limits_{\Delta s\left(x\right)}\;{\left\Vert {U}_{t}\left(x+\Delta s\left(x\right)\right)-{U}_{0}\left(x\right)\right\Vert }_{2}^{2}$$

where *U*_{0} is the de-intensified reference image and Δ*s*(*x*) represents the pixel shifting from *U*_{t} to *U*_{0}. The optimization is divided into two parts: first, we estimate the global translation (coarse slope map); then we optimize the local lateral shifts (fine slope map). This design significantly improves the accuracy of the algorithm and reduces the number of iterations. It is important to note that the turbulence-induced distortion is globally smooth and continuous. This enables us to optimize a sparse slope map Δ*s*(*x*′), determined by the number of control points, to represent the dense distribution of atmospheric turbulence (Extended Data Fig. 1b). We then upsample Δ*s*(*x*′) to Δ*s*(*x*) using bicubic interpolation and *U*_{t}(*x*) is warped to *U*_{t}(*x* + Δ*s*(*x*)). The mean squared error loss between *U*_{t}(*x* + Δ*s*(*x*)) and *U*_{0}(*x*) is computed within the 95% central region to avoid boundary issues. The estimation of the global translation takes approximately 20 epochs with a learning rate of 0.01, followed by refinement of the local lateral shifts with at least ten epochs and a learning rate of 0.001. An MLP network is used for projecting the slope map to the Zernike polynomials. The MLP uses ReLU as the activation function and comprises two hidden layers, with 300 nodes in the first and 500 nodes in the second^{53}. The computational time of slope estimation followed by MLP projection for each frame was tested using eight NVIDIA GeForce RTX 4090 graphics cards (Fig. 2c).
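A simplified sketch of this optimization is given below. For brevity it collapses the coarse and fine stages into one loop, omits the 95% central-region crop, and uses a bilinear warp via PyTorch's `grid_sample`; the function name and hyperparameters are illustrative, not those of the released implementation.

```python
import torch
import torch.nn.functional as F

def estimate_slopes(U0, Ut, n_ctrl=8, iters=100, lr=0.05):
    """Sparse-control-point slope estimation (a simplified sketch).

    U0, Ut : (1, 1, H, W) de-intensified reference and target images.
    A sparse (n_ctrl x n_ctrl) grid of pixel shifts is optimized so that
    Ut, warped by the bicubically upsampled map, matches U0 (MSE loss).
    Returns the dense slope map, shape (1, H, W, 2), in pixels.
    """
    _, _, H, W = U0.shape
    ctrl = torch.zeros(1, 2, n_ctrl, n_ctrl, requires_grad=True)
    opt = torch.optim.Adam([ctrl], lr=lr)
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0)     # identity grid (1, H, W, 2)
    scale = torch.tensor([2.0 / (W - 1), 2.0 / (H - 1)])  # pixels -> grid units

    for _ in range(iters):
        dense = F.interpolate(ctrl, size=(H, W), mode="bicubic",
                              align_corners=True)          # dense (1, 2, H, W) map
        flow = dense.permute(0, 2, 3, 1) * scale
        warped = F.grid_sample(Ut, base + flow, mode="bilinear",
                               padding_mode="border", align_corners=True)
        loss = F.mse_loss(warped, U0)                      # match warped Ut to U0
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        dense = F.interpolate(ctrl, size=(H, W), mode="bicubic",
                              align_corners=True)
    return dense.permute(0, 2, 3, 1)
```

Because only the sparse control grid is optimized, the cost is essentially independent of how finely the dense map is sampled, matching the observation that more sub-FOVs do not increase the computational cost.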

### Numerical simulation of the WWS

To assess the accuracy of the algorithm in aberration estimation, we performed simulations by incorporating a globally non-uniform distribution of aberrations (7 × 10) into subaperture images with a FOV spanning 541 arcsec (Fig. 2a,b). The aberrations covered the 4th–35th modes of the Zernike polynomials, with different RMS values. Each microlens covers 225 (15 × 15) pixels, with each pixel corresponding to 0.12 arcsec, as in an 80 cm telescope, and the F-number of the microlens was 10, matching the system parameters. WWS (1 × 1) corresponds to snapshot light-field imaging, where the presence of the microlens leads to a spatial sampling loss for each subaperture image (Fig. 2a). WWS (5 × 5) and WWS (15 × 15) correspond to different scanning times, equivalent to an increase in subaperture spatial sampling compared with 1 × 1. The 15 × 15 resolution means that each subaperture image has the same pixel numbers as the original light-field image. This enhancement can be achieved through a piezo stage in a meta-imaging sensor or the TIS method integrated within our WWS. The experimental results demonstrate that our method maintains high accuracy in wavefront estimation, even on low-resolution subaperture images, showcasing its subpixel-precision capabilities. Furthermore, we introduced Gaussian noise directly into the subaperture images to demonstrate the noise robustness of the WWS (Extended Data Fig. 4). In our simulations (Fig. 2b), the configuration of the C-SHWS is adapted from the specifications of the Thorlabs WFS20-70AR wavefront sensor. There are 15 × 15 subapertures with 15 × 15 pixels for each subaperture. The pixel size is equivalent to 0.8 arcsec and the centre wavelength is set at 525 nm.

### Atmospheric turbulence simulation

The atmospheric turbulence simulations were based fully on the von Kármán turbulence model^{37} and Taylor’s frozen-flow hypothesis, where the 3D volume of atmospheric turbulence is simplified as a compilation of vertically discrete phase-screen layers. Every turbulent layer is locally homogeneous and isotropic, without any interaction with other layers. The von Kármán phase structure function^{38}, *D*_{φ}(*r*), is given by

$${D}_{\varphi }\left(r\right)={k}_{1}{\left(\frac{{L}_{0}}{{r}_{0}}\right)}^{5/3}\left[{k}_{2}-{\left(\frac{2\uppi r}{{L}_{0}}\right)}^{5/6}{K}_{\frac{5}{6}}\left(\frac{2\uppi r}{{L}_{0}}\right)\right]$$

where *k*_{1} ≈ 0.172 and *k*_{2} ≈ 1.006 are constants, *r*_{0} is the Fried parameter, *L*_{0} is the outer scale and \({K}_{\frac{5}{6}}\left(\cdot \right)\) is the modified Bessel function of the second kind. In our simulations, the global *r*_{0} is set to 3.6 cm at 525 nm wavelength and 0° zenith angle, and the number of layers is set to four. The actual global *r*_{0} estimation during observation is carried out using a DIMM^{14,54,55} at the same observatory^{15}. It is worth noting that different definitions of *L*_{0} may lead to controversial values, typically ranging from a few metres to more than 2 km (refs. ^{42,56}). We hereby follow the definition^{14,37,42,57}, and use *L*_{0} = 10 m in all simulations. Owing to the ongoing debate about the outer scale, we also conducted simulations for cases where *L*_{0} is set to distinct values (Extended Data Fig. 8). The results demonstrate that the setting of the *L*_{0} value does not have a significant impact. Consequently, the phase structure function *D*_{φ}(*r*) can be defined as follows:

$${D}_{\varphi }\left(r\right)=\left\langle {\left|\varphi \left(x\right)-\varphi \left(x+r\right)\right|}^{2}\right\rangle =2\left[\left\langle \varphi \left(0\right){\varphi \left(0\right)}^{T}\right\rangle -\left\langle \varphi \left(0\right){\varphi \left(r\right)}^{T}\right\rangle \right]$$

where *φ*(*x*) is the wavefront phase at point *x*, \(\left\langle \cdot \right\rangle\) denotes the ensemble average and \(\left\langle \varphi \left(0\right){\varphi \left(0\right)}^{T}\right\rangle\) is the spatial covariance of the wavefront phase at *x* = 0. Now we consider the Zernike representation of the von Kármán atmospheric turbulence model. Following the definition from Noll^{40}, the Zernike polynomial expansion of an arbitrary phase is given by the following matrix–vector multiplication:

$$\varphi =Za,\qquad a=X\varphi$$
where *φ* is the vectorized phase, \(a=\left(\begin{array}{c}\begin{array}{c}{a}_{1}\\ {a}_{2}\end{array}\\ \begin{array}{c}\vdots \\ {a}_{n}\end{array}\end{array}\right)\) is a vector of Zernike coefficients up to the *n*th mode, *Z* = [*z*_{1}, *z*_{2},… *z*_{n}] is a concatenation of Zernike phase vectors and *X* is the pseudo-inverse of *Z*, as *Z* is not necessarily square. Finally, the spatial covariance matrix of Zernike coefficients (Fig. 3d) can be acquired from:

$$\left\langle a{a}^{{\mathrm{T}}}\right\rangle =X\left\langle \varphi {\varphi }^{{\mathrm{T}}}\right\rangle {X}^{{\mathrm{T}}}$$
where \({\left(\cdot \right)}^{{\mathrm{T}}}\) denotes the transpose.
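As a numerical sanity check, the von Kármán structure function above can be evaluated directly. The sketch below is our minimal illustration, not the simulation code: it uses the exact value *k*_{2} = 2^{−1/6}Γ(5/6) ≈ 1.006 (the rounded constant would cause cancellation error at small *r*/*L*_{0}) and computes *K*_{5/6} from its standard integral representation:

```python
import math

def bessel_k(nu, x, tmax=30.0, n=30000):
    # K_nu(x) = \int_0^inf exp(-x cosh t) cosh(nu t) dt, trapezoidal rule
    dt = tmax / n
    total = 0.0
    for i in range(n + 1):
        t = i * dt
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return total * dt

def von_karman_D(r, r0, L0):
    """von Karman phase structure function D_phi(r) for Fried parameter r0
    and outer scale L0, with k1 ~ 0.172 and k2 ~ 1.006 as quoted in the text."""
    nu = 5.0 / 6.0
    k1 = 0.172
    k2 = 2.0 ** (nu - 1.0) * math.gamma(nu)  # exact form of the ~1.006 constant
    x = 2.0 * math.pi * r / L0
    return k1 * (L0 / r0) ** (5.0 / 3.0) * (k2 - x ** nu * bessel_k(nu, x))

# r0 = 3.6 cm and L0 = 10 m, the values used in the simulations
D1 = von_karman_D(0.036, 0.036, 10.0)
D2 = von_karman_D(0.072, 0.036, 10.0)
```

At *r* = *r*_{0}, D1 lies somewhat below the Kolmogorov value 6.88 rad², and the growth with *r* is slower than the Kolmogorov *r*^{5/3} law, both reflecting the finite outer scale.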

### Atmospheric turbulence profiling with SLODAR

The knowledge of the statistical properties of atmospheric turbulence, for example, the Fried parameter^{23} (*r*_{0}), the vertical profile of turbulence strength^{58} (*C*_{n}^{2}) and the temporal evolution of atmospheric turbulence^{36}, is fundamental in a variety of ground-based observational applications^{5,59,60,61}. First proposed in 2002, SLODAR^{48} is an atmospheric turbulence profiling method based on optical triangulation, and has been widely used at many astronomical observatories^{46,47,62,63}. It estimates the time-averaged cross-correlation of the wavefront slopes measured from two (or a few) target FOVs that are relatively close to each other:

$$C\left(\delta i,\delta j\right)=\frac{\left\langle {\sum }_{i,\,j}{s}_{i,\,j}\left(t\right)\,{s}_{\left(i+\delta i,\,j+\delta j\right)}^{{\prime} }\left(t\right)\right\rangle }{O\left(\delta i,\delta j\right)}$$
Here, *s*_{i,j}(*t*) is the wavefront slope in subaperture (*i*,*j*) at time *t* and \({s}_{\left(i,\;j\right)}^{{\prime} }\left(t\right)\) is the slope for the corresponding subaperture of the second FOV. The angle brackets denote accumulation and averaging over many independent frames; typically 1,000 frames over 30 s are required. *O*(δ*i*,δ*j*) is the total number of overlapped subapertures for separation (δ*i*,δ*j*). The cross-correlations of all possible (δ*i*,δ*j*) form a 2D correlation map, in which a turbulent layer at a given altitude appears as a peak along a specific direction. The altitude is determined by the angular separation δ*θ* of the FOV pair and the position of the peak in their cross-correlation. The altitude resolution (δ*h*) and the maximum altitude recovered by a SLODAR system (*H*_{max}) are given by:

$$\delta h=\frac{w\cos \zeta }{\delta \theta },\qquad {H}_{\max }=N\,\delta h=\frac{Nw\cos \zeta }{\delta \theta }$$
In our experiments, *w* = 0.8/15 ≈ 0.053 m is the subaperture size, *ζ* = 31.65° is the zenith angle and *N* = 15 is the number of subapertures along the diameter of the telescope pupil^{64}. The results corresponding to the same angular separation were averaged for more reliable turbulence profiling, reducing the impact of device noise and turbulence stochasticity. In addition, the available angular baselines of WWS measurements range from 36 arcsec to a few hundred arcseconds, facilitating an extremely fine altitude resolution of 43 m. Three turbulent layers are clearly seen in the correlation maps, and good agreement is found between different baselines, as shown in Fig. 3h.
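The geometric relations above can be checked numerically. The function name is ours, and the 218 arcsec baseline is one example value within the quoted range of a few hundred arcseconds:

```python
import math

ARCSEC = math.pi / (180.0 * 3600.0)  # radians per arcsecond

def slodar_resolution(w, zenith_deg, dtheta_arcsec, n_sub):
    """Altitude resolution dh = w*cos(zeta)/dtheta and maximum sensed
    altitude H_max = N*dh for a SLODAR geometry."""
    dtheta = dtheta_arcsec * ARCSEC
    dh = w * math.cos(math.radians(zenith_deg)) / dtheta
    return dh, n_sub * dh

# w = 0.8/15 m, zeta = 31.65 deg, N = 15, with a ~218 arcsec baseline
dh, hmax = slodar_resolution(0.8 / 15, 31.65, 218.0, 15)
```

With these parameters, dh comes out near the 43 m resolution quoted in the text; smaller baselines give coarser altitude bins but a larger H_max.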

### Turbulence-induced scanning

TIS is a method for increasing the spatial sampling rate of subaperture images within the WWS. In principle, dynamic atmospheric turbulence introduces continuous distortions in each subaperture image (Fig. 1d). The global image distortion can be treated as many local lateral shifts within each sub-FOV, and these lateral shifts can be exploited as a form of dense spatial sampling (Extended Data Fig. 7c), termed TIS. Unlike previous methods that use a piezo stage to achieve uniform dense sampling, TIS is characterized by non-uniform and irregular sampling. The specific procedure is as follows. First, the temporal light-field data are realigned into different subaperture images \(\left\{{H}_{u,{t}_{n}}\left(x\right){|u}=\mathrm{1,2},\ldots ,225{;\,n}=\mathrm{1,2},\ldots ,9\right\}\), where *u* indexes the subaperture and *n* indexes the imaging frames. Taking one subaperture image, \({H}_{1,{t}_{n}}\left(x\right)\), as an example, the sampling points of \({H}_{1,{t}_{n}}\) are uniformly and sparsely distributed (Extended Data Fig. 7d). Using the slope-estimation algorithm, we estimated the flow maps \(\Delta{s}_{1,{t}_{n}}\left(x\right)\) between the reference frame \({H}_{{1,t}_{5}}\left(x\right)\) and the other frames \({H}_{{1,t}_{n}}\left(x\right)\). Subsequently, the spatial coordinates of the sparsely sampled points at different frames can be obtained:

$${x}_{{t}_{n}}=x+\Delta {s}_{1,{t}_{n}}\left(x\right)$$
Furthermore, we merged the nine frames \(\left\{{H}_{1,{t}_{n}}\left(x\right){|n}=\mathrm{1,2},\ldots ,9\right\}\) along with sparse sampling pixels into a single frame with non-uniform densely sampled pixels \(\mathop{\bigcup }\nolimits_{n=1}^{n=9}{H}_{1,{t}_{n}}\left(x\right)\) (Extended Data Fig. 7c). Using their relative spatial coordinates \(\mathop{\bigcup }\nolimits_{n=1}^{n=9}x+\Delta {s}_{1,{t}_{n}}\left(x\right)\), we can obtain a uniform dense-sampled image (Extended Data Fig. 7d):

$${H}_{1,{t}_{5}}^{{\prime} }\left(x\right)={\rm{SI}}\left(\mathop{\bigcup }\limits_{n=1}^{n=9}{H}_{1,{t}_{n}}\left(x\right),\;\mathop{\bigcup }\limits_{n=1}^{n=9}x+\Delta {s}_{1,{t}_{n}}\left(x\right)\right)$$
Here, SI represents scattered interpolation. TIS can be applied to all of the subaperture images \(\left\{{H}_{u,{t}_{5}}^{{\prime} }\left(x\right){|u}=\mathrm{1,2},\ldots ,225\right\}\) for high-resolution imaging through atmospheric turbulence without additional hardware scanning^{31}.
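A minimal sketch of the merge-and-interpolate step, assuming frames whose lateral shifts are already known (in the WWS pipeline they come from the estimated flow maps). Here `scipy.interpolate.griddata` plays the role of the scattered interpolation SI, and the function name `tis_merge` is ours:

```python
import numpy as np
from scipy.interpolate import griddata

def tis_merge(frames, shifts, out_shape):
    """Merge sparsely sampled frames, each displaced by a known lateral
    (dy, dx) shift, into one uniformly sampled image via scattered interpolation."""
    pts, vals = [], []
    for frame, (dy, dx) in zip(frames, shifts):
        h, w = frame.shape
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        # Place each pixel at its true (shifted) location in the scene
        pts.append(np.column_stack([(yy + dy).ravel(), (xx + dx).ravel()]))
        vals.append(frame.ravel())
    gy, gx = np.mgrid[0:out_shape[0], 0:out_shape[1]].astype(float)
    return griddata(np.vstack(pts), np.concatenate(vals), (gy, gx), method='linear')
```

Merging more frames fills the gaps between the sparse samples, which is why longer turbulence-induced scanning sequences yield higher effective resolution.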

### Atmospheric turbulence prediction

We developed a residual ConvLSTM network to accomplish the atmospheric turbulence prediction. The LSTM leverages the temporal information of the measurements, and the convolution makes full use of the spatial information^{65}. The network architecture is shown in Fig. 4c. The first block is a two-layer ConvLSTM with a kernel size of five and 128 hidden-layer channels; the second block is a five-layer ConvLSTM with a kernel size of three and 32 hidden-layer channels. A residual connection was introduced between the two blocks to improve the prediction performance, and a fully connected layer is placed at the end of the network to adjust the output dimension. At each frame, the input wavefront phase is characterized as a tensor of size (*c*,*h*,*w*), where *h* and *w* represent the FOV of the measurements and *c* represents the number of Zernike coefficients. We divide our time-lapse wide-field measurements (a total of 6,000 frames captured at 30 Hz) into a training set (the first 5,400 frames) and a testing set (the remaining 600 frames). In the ‘finetune’ experiments, a separate set of 600 frames captured on another day was used for training and testing. To account for the variations in the scales of the input Zernike coefficients across different modes, a separate pre-normalization was performed on each channel before the channels were fed into the network. The pre-normalization ensures that the model does not focus solely on low-order modes with larger absolute values but also effectively fits the high-order modes, achieving a more balanced and accurate representation of the input measurements. The mean absolute error is adopted as the loss function.
The input frames are from [*t* − *L*, *t* − 1], where *L* represents the number of input frames for atmospheric turbulence prediction, and we computed the loss on the output frames [*t* − *L* + 1, *t*], which has been experimentally shown to help the model converge faster. The parameter *L* is set according to the ablation study in Extended Data Fig. 9c,d. The objective function is defined as follows: \({\mathcal{L}}={{||}{P}_{\left[t-L+1,t\right]}-{\hat{P}}_{\left[t-L+1,t\right]}{||}}_{1},\) where *P* is the ground truth and \(\hat{P}\) is the prediction of the network. The model was trained and tested using an Intel i9-10900X central processing unit with 64 GB random-access memory and an NVIDIA GeForce RTX 3090 graphics processing unit.
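The per-channel pre-normalization can be sketched as follows. The helper name is ours, and in practice the statistics would be computed on the training split only:

```python
import numpy as np

def per_channel_normalize(frames, eps=1e-8):
    """Normalize each Zernike-coefficient channel of a (T, C, H, W) stack
    independently, so low-order modes with large amplitudes do not dominate."""
    mean = frames.mean(axis=(0, 2, 3), keepdims=True)
    std = frames.std(axis=(0, 2, 3), keepdims=True)
    return (frames - mean) / (std + eps), mean, std

# Channels with very different scales, mimicking low- versus high-order modes
rng = np.random.default_rng(1)
frames = rng.normal(size=(20, 5, 8, 8)) * np.arange(1, 6).reshape(1, 5, 1, 1)
normed, mean, std = per_channel_normalize(frames)
```

The saved `mean` and `std` allow the network outputs to be mapped back to physical Zernike coefficients after prediction.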

## Data availability

Demo data and pretrained model weights are publicly available via Zenodo at https://doi.org/10.5281/zenodo.11063855 (ref. ^{66}). Owing to the large size of the raw data, a subset of the raw WWS data is available via Zenodo at https://doi.org/10.5281/zenodo.11063896 (ref. ^{67}) and https://doi.org/10.5281/zenodo.11071397 (ref. ^{68}). Datasets (Supplementary Table 1) are available from L.F. upon reasonable request. Source data are provided with this paper.

## Code availability

Codes for the whole pipeline of the WWS are available via GitHub at https://github.com/freemercury/Widefield_wavefront_sensor.git.

## References

1. Feynman, R. P., Leighton, R. B., Sands, M. & Hafner, E. M. *The Feynman Lectures on Physics Vol. I* (Addison–Wesley, 1964).
2. Tyson, R. K. & Frazier, B. W. *Principles of Adaptive Optics* (CRC, 2022).
3. Ricklin, J. C. & Davidson, F. M. Atmospheric turbulence effects on a partially coherent Gaussian beam: implications for free-space laser communication. *J. Opt. Soc. Am. A* **19**, 1794–1802 (2002).
4. Zhu, X. & Kahn, J. M. Free-space optical communication through atmospheric turbulence channels. *IEEE Trans. Commun.* **50**, 1293–1300 (2002).
5. Roddier, F. in *Progress in Optics* Vol. 19 (ed. Wolf, E.) 281–376 (Elsevier, 1981).
6. Kolmogorov, A. N. The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. *Dokl. Akad. Nauk SSSR* **30**, 301 (1941).
7. Strasburg, J. D. & Harper, W. W. Impact of atmospheric turbulence on beam propagation. In *Proc. SPIE 5413, Laser Systems Technology II* (eds Thompson, W. E. & Brunson, R. L.) 93–102 (SPIE, 2004).
8. Lagouarde, J. P., Irvine, M. & Dupont, S. Atmospheric turbulence induced errors on measurements of surface temperature from space. *Remote Sens. Environ.* **168**, 40–53 (2015).
9. Roggemann, M. C. & Welsh, B. M. *Imaging Through Turbulence* (CRC, 2018).
10. Fried, D. L. Anisoplanatism in adaptive optics. *J. Opt. Soc. Am.* **72**, 52–61 (1982).
11. Settles, G. S. *Schlieren and Shadowgraph Techniques: Visualizing Phenomena in Transparent Media* (Springer, 2001).
12. Skeen, S. A., Manin, J. & Pickett, L. M. Simultaneous formaldehyde PLIF and high-speed schlieren imaging for ignition visualization in high-pressure spray flames. *Proc. Combust. Inst.* **35**, 3167–3174 (2015).
13. Hargather, M. J. & Settles, G. S. Natural-background-oriented schlieren imaging. *Exp. Fluids* **48**, 59–68 (2010).
14. Tokovinin, A. From differential image motion to seeing. *Publ. Astron. Soc. Pac.* **114**, 1156 (2002).
15. Liu, L. Y. et al. Seeing measurements for the Guoshoujing Telescope (LAMOST) site with DIMM. *Res. Astron. Astrophys.* **10**, 1061 (2010).
16. Kornilov, V. et al. Combined MASS–DIMM instruments for atmospheric turbulence studies. *Mon. Not. R. Astron. Soc.* **382**, 1268–1278 (2007).
17. Eberhard, W. L., Cupp, R. E. & Healy, K. R. Doppler lidar measurement of profiles of turbulence and momentum flux. *J. Atmos. Ocean. Technol.* **6**, 809–819 (1989).
18. Engelmann, R. et al. Lidar observations of the vertical aerosol flux in the planetary boundary layer. *J. Atmos. Ocean. Technol.* **25**, 1296–1306 (2008).
19. Browning, K. A. & Watkins, C. D. Observations of clear air turbulence by high power radar. *Nature* **227**, 260–263 (1970).
20. Barletti, R., Ceppatelli, G., Paternò, L., Righini, A. & Speroni, N. Astronomical site testing with balloon borne radiosondes: results about atmospheric turbulence, solar seeing and stellar scintillation. *Astron. Astrophys.* **54**, 649–659 (1977).
21. Wu, S. et al. Measurement and analysis of atmospheric optical turbulence in Lhasa based on thermosonde. *J. Atmos. Sol. Terr. Phys.* **201**, 105241 (2020).
22. Wang, Y., Jin, D., Chen, J. & Bai, X. Revelation of hidden 2D atmospheric turbulence strength fields from turbulence effects in infrared imaging. *Nat. Comput. Sci.* **3**, 687–699 (2023).
23. Fried, D. L. Optical heterodyne detection of an atmospherically distorted signal wave front. *Proc. IEEE* **55**, 57–77 (1967).
24. Chen, H. et al. Deep learning assisted plenoptic wavefront sensor for direct wavefront detection. *Opt. Express* **31**, 2989–3004 (2023).
25. Jiang, W. Overview of adaptive optics development. *Optoelectron. Eng.* **45**, 170489 (2018).
26. Wu, Y., Sharma, M. K. & Veeraraghavan, A. WISH: wavefront imaging sensor with high resolution. *Light Sci. Appl.* **8**, 44 (2019).
27. Feng, B. Y. et al. NeuWS: neural wavefront shaping for guidestar-free imaging through static and dynamic scattering media. *Sci. Adv.* **9**, eadg4671 (2023).
28. Stuik, R. et al. GALACSI – the ground layer adaptive optics system for MUSE. *New Astron. Rev.* **49**, 618–624 (2006).
29. Tokovinin, A. Seeing improvement with ground-layer adaptive optics. *Publ. Astron. Soc. Pac.* **116**, 941 (2004).
30. Rigaut, F. & Neichel, B. Multiconjugate adaptive optics for astronomy. *Annu. Rev. Astron. Astrophys.* **56**, 277–314 (2018).
31. Wu, J. et al. An integrated imaging sensor for aberration-corrected 3D photography. *Nature* **612**, 62–71 (2022).
32. Zhang, Z. & Levoy, M. Wigner distributions and how they relate to the light field. In *2009 IEEE International Conference on Computational Photography (ICCP)* 1–10 (IEEE, 2009).
33. Michau, V. et al. Shack–Hartmann wavefront sensing with extended sources. In *Proc. SPIE 6303, Atmospheric Optical Modeling, Measurement, and Simulation II* (eds Hammel, S. M. & Kohnle, A.) 63030B (SPIE, 2006).
34. Townson, M. J., Kellerer, A. & Saunter, C. D. Improved shift estimates on extended Shack–Hartmann wavefront sensor images. *Mon. Not. R. Astron. Soc.* **452**, 4022–4028 (2015).
35. Wu, J. et al. Iterative tomography with digital adaptive optics permits hour-long intravital observation of 3D subcellular dynamics at millisecond scale. *Cell* **184**, 3318–3332 (2021).
36. Taylor, G. I. The spectrum of turbulence. *Proc. R. Soc. Lond. A* **164**, 476–490 (1938).
37. Winker, D. M. Effect of a finite outer scale on the Zernike decomposition of atmospheric optical turbulence. *J. Opt. Soc. Am. A* **8**, 1568–1573 (1991).
38. Poyneer, L., van Dam, M. & Véran, J.-P. Experimental verification of the frozen flow atmospheric turbulence assumption with use of astronomical adaptive optics telemetry. *J. Opt. Soc. Am. A* **26**, 833–846 (2009).
39. Gendron, E. & Léna, P. Single layer atmospheric turbulence demonstrated by adaptive optics observations. *Astrophys. Space Sci.* **239**, 221–228 (1996).
40. Noll, R. J. Zernike polynomials and atmospheric turbulence. *J. Opt. Soc. Am.* **66**, 207–211 (1976).
41. Ziad, A. et al. Comparison of measurements of the outer scale of turbulence by three different techniques. *Appl. Opt.* **43**, 2316–2324 (2004).
42. Ziad, A. Review of the outer scale of the atmospheric turbulence. In *Proc. SPIE 9909, Adaptive Optics Systems V* (eds Marchetti, E. et al.) 99091K (SPIE, 2016).
43. Lai, O., Withington, J. K., Laugier, R. & Chun, M. Direct measure of dome seeing with a localized optical turbulence sensor. *Mon. Not. R. Astron. Soc.* **484**, 5568–5577 (2019).
44. Guesalaga, A., Neichel, B., Cortés, A., Béchet, C. & Guzmán, D. Using the *C*_{n}^{2} and wind profiler method with wide-field laser-guide-stars adaptive optics to quantify the frozen-flow decay. *Mon. Not. R. Astron. Soc.* **440**, 1925–1933 (2014).
45. Tallis, M. et al. Effects of mirror seeing on high-contrast adaptive optics instruments. *J. Astron. Telesc. Instrum. Syst.* **6**, 15002 (2020).
46. Avila, R. et al. LOLAS: an optical turbulence profiler in the atmospheric boundary layer with extreme altitude resolution. *Mon. Not. R. Astron. Soc.* **387**, 1511–1516 (2008).
47. Osborn, J., Wilson, R., Butterley, T., Shepherd, H. & Sarazin, M. Profiling the surface layer of optical turbulence with SLODAR. *Mon. Not. R. Astron. Soc.* **406**, 1405–1408 (2010).
48. Wilson, R. W. SLODAR: measuring optical turbulence altitude with a Shack–Hartmann wavefront sensor. *Mon. Not. R. Astron. Soc.* **337**, 103–108 (2002).
49. Osborn, J. et al. Optical turbulence profiling with Stereo-SCIDAR for VLT and ELT. *Mon. Not. R. Astron. Soc.* **478**, 825–834 (2018).
50. van Kooten, M., Doelman, N. & Kenworthy, M. Impact of time-variant turbulence behavior on prediction for adaptive optics systems. *J. Opt. Soc. Am. A* **36**, 731–740 (2019).
51. Platt, B. C. & Shack, R. History and principles of Shack–Hartmann wavefront sensing. *J. Refract. Surg.* **17**, S573–S577 (2001).
52. Paszke, A. et al. Automatic differentiation in PyTorch (2017).
53. Nair, V. & Hinton, G. E. Rectified linear units improve restricted Boltzmann machines. In *Proc. 27th International Conference on Machine Learning* (eds Fürnkranz, J. & Joachims, T.) 807–814 (2010).
54. Ma, B. et al. Night-time measurements of astronomical seeing at Dome A in Antarctica. *Nature* **583**, 771–774 (2020).
55. Sarazin, M. & Roddier, F. The ESO differential image motion monitor. *Astron. Astrophys.* **227**, 294–300 (1990).
56. Avila, R. et al. Theoretical spatiotemporal analysis of angle of arrival induced by atmospheric turbulence as observed with the grating scale monitor experiment. *J. Opt. Soc. Am. A* **14**, 3070–3082 (1997).
57. Conan, R., Borgnino, J., Ziad, A. & Martin, F. Analytical solution for the covariance and for the decorrelation time of the angle of arrival of a wave front corrugated by atmospheric turbulence. *J. Opt. Soc. Am. A* **17**, 1807–1818 (2000).
58. Tatarski, V. I., Silverman, R. A. & Chako, N. Wave propagation in a turbulent medium. *Phys. Today* **14**, 46–51 (1961).
59. Ellerbroek, B. L. & Rigaut, F. Methods for correcting tilt anisoplanatism in laser-guide-star-based multiconjugate adaptive optics. *J. Opt. Soc. Am. A* **18**, 2539–2547 (2001).
60. Le Louarn, M., Hubin, N., Sarazin, M. & Tokovinin, A. New challenges for adaptive optics: extremely large telescopes. *Mon. Not. R. Astron. Soc.* **317**, 535–544 (2000).
61. Fusco, T., Conan, J. M., Mugnier, L. M., Michau, V. & Rousset, G. Characterization of adaptive optics point spread function for anisoplanatic imaging. Application to stellar field deconvolution. *Astron. Astrophys. Suppl. Ser.* **142**, 149–156 (2000).
62. Wang, L., Schöck, M. & Chanan, G. Atmospheric turbulence profiling with SLODAR using multiple adaptive optics wavefront sensors. *Appl. Opt.* **47**, 1880–1892 (2008).
63. Laidlaw, D. J. et al. Optimizing the accuracy and efficiency of optical turbulence profiling using adaptive optics telemetry for extremely large telescopes. *Mon. Not. R. Astron. Soc.* **483**, 4341–4353 (2019).
64. Butterley, T., Wilson, R. W. & Sarazin, M. Determination of the profile of atmospheric optical turbulence strength from SLODAR data. *Mon. Not. R. Astron. Soc.* **369**, 835–845 (2006).
65. Shi, X. et al. Convolutional LSTM network: a machine learning approach for precipitation nowcasting. In *Proc. Advances in Neural Information Processing Systems 28* (eds Cortes, C. et al.) 802–810 (Neural Information Processing Systems Foundation, 2015).
66. Hao, Y. et al. Demo data and model weights for “Direct observation of atmospheric turbulence with a video-rate wide-field wavefront sensor”. *Zenodo* https://doi.org/10.5281/zenodo.11063855 (2024).
67. Hao, Y. et al. Raw data for “Direct observation of atmospheric turbulence with a video-rate wide-field wavefront sensor” (part 1). *Zenodo* https://doi.org/10.5281/zenodo.11063896 (2024).
68. Hao, Y. et al. Raw data for “Direct observation of atmospheric turbulence with a video-rate wide-field wavefront sensor” (part 2). *Zenodo* https://doi.org/10.5281/zenodo.11071397 (2024).

## Acknowledgements

We acknowledge the support from the staff of the Xinglong 80 cm telescope of Tsinghua-National Astronomical Observatories, Chinese Academy of Sciences. This project is supported in part by the National Natural Science Foundation of China (numbers 62125106, 61860206003 and 62088102 (to L.F.) and number 62222508 (to J.W.)) and in part by the Ministry of Science and Technology of China (contract number 2021ZD0109901 (to L.F.)).

## Author information

### Authors and Affiliations

### Contributions

J.W., L.F. and Q.D. conceived the project. J.W. designed the project. Y.G., S.W., J.W. and L.Z. designed and built the optical system. Y.G., Y.H. and H.Z. developed the whole pipeline of reconstruction algorithms and conducted the numerical simulations. Y.H. and Y.G. conducted performance optimization on the algorithms. H.Z. designed and implemented the turbulence profiling experiment. Y.G., Y.H., S.W. and H.Z. conducted the telescope experiments. Y.H. and Y.G. conducted the turbulence prediction experiments. Y.G. and S.W. conducted the other imaging experiments. Q.D., L.F. and J.W. supervised the work. Y.G., H.Z., Y.H. and Y.Z. prepared the figures. Y.G., H.Z. and Y.H. wrote the original manuscript with input from all authors. J.W., Q.D. and L.F. reviewed and edited the manuscript.

### Corresponding authors

## Ethics declarations

### Competing interests

The authors declare no competing interests.

## Peer review

### Peer review information

*Nature Photonics* thanks David Brady and Chao Zuo for their contribution to the peer review of this work.

## Additional information

**Publisher’s note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Extended data

### Extended Data Fig. 1 Flowchart of the wavefront sensing pipeline and analysis of computational speed for the wide-field wavefront sensor.

**a**, Flow chart of pixel realignment. Resizing and rotation of the raw measurements are employed to correct minor assembly errors between the complementary metal-oxide-semiconductor (CMOS) sensor and the micro-lens array. The “doughnut” appearance of each microlens in the light-field image is caused by obstruction from the secondary mirror. **b**, Pipeline of the coarse-to-fine slope-map estimation algorithm. **c**, Analysis of computational time for different numbers of control points inside the same field of view (FOV) with 1,100 arcseconds along the diagonal.

### Extended Data Fig. 2 Comparisons between plenoptic wavefront sensor and wide-field wavefront sensor.

**a**, The network architecture of learning-based plenoptic wavefront sensing^{24} used for comparison. **b**, The multilayer perceptron for our wide-field wavefront sensor. **c**, Box plots of residual wavefront errors (root mean square error, RMSE) obtained by plenoptic wavefront sensor and wide-field wavefront sensor. The aberrations are generated randomly with a maximum Zernike mode of 35 (Noll) and the average aberration level (root mean square, RMS) is 1.5λ. We tested n = 1000 different aberrations for comparison. **d**, The average time cost for both methods. Box plots (**c**) show the median, 25% and 75% quartiles, maximum and minimum values, and the outliers.

### Extended Data Fig. 3 Numerical evaluations of microlens pitch sizes on wavefront sensing performance.

**a**, Curves of residual wavefront errors (for isoplanatic aberrations) \({\rm{\sigma }}\) versus different pitch sizes of microlens in terms of different Zernike modes involved. The sensor pixel size is fixed as 4.6 μm and the microlens pitch size corresponds to different sub-aperture numbers. \({\rm{\sigma }}\) is calculated as the ratio of the residual phase RMS to the peak-to-valley value of the added aberration. The FOV of each image is 100×100 square arcseconds. The aberration RMS is 1\(\lambda\). **b**, Curves of residual wavefront errors (for isoplanatic aberrations) \({\rm{\sigma }}\) versus different microlens pitch sizes in terms of different aberration levels with a maximum Zernike mode of 35. Data in (**a**-**b**) are represented by mean ± s.d. and the number of isoplanatic aberrations *n* = 50. **c**, One simulated sub-aperture image obtained by the wide-field wavefront sensor with 7×10 spatially variant aberrations applied in a FOV across 541 arcseconds with a sensor pixel size of 0.12 arcsecond based on an 80-cm telescope. The aberrations are generated randomly with a maximum Zernike mode of 35 (in the order of Noll) and the average aberration RMS is 1\(\lambda\). **d**, Simulated ground truth aberrations across the whole FOV. **e**, Estimated aberrations by the wide-field wavefront sensor.

### Extended Data Fig. 4 Numerical evaluations of the noise robustness for wide-field wavefront sensor.

**a**-**b**, One of the simulated sub-aperture images of the wide-field wavefront sensor with 7×10 spatially variant aberrations at different signal-to-noise ratios (SNRs), obtained by directly adding Gaussian noise to the sub-aperture images. The aberrations are generated randomly with a maximum Zernike mode of 35 (Noll) and the average aberration level (RMS) is 1\(\lambda\). **c**, Box plots of the relative residual wavefront errors \({\rm{\sigma }}\) versus different SNRs. Box plots (**c**) show the median, 25% and 75% quartiles and maximum and minimum values. The number of spatially variant aberrations *n* = 70.

### Extended Data Fig. 5 Schematic simulation pipeline of the correlating Shack-Hartmann wavefront sensor.

**a**, The simulation pipeline of correlating Shack-Hartmann wavefront sensor (C-SHWFS) with an extended object. First, we simulate the sub-aperture images of a point-source SHWFS. Then, the sub-aperture images of C-SHWFS can be obtained by two-dimensional (2D) convolution between the sub-aperture images of SHWFS and isoplanatic patches of the extended object, whose dimensions are determined by the actual FOV of the wavefront sensor. Wavefront slopes of C-SHWFS are estimated by 2D cross-correlation among the corresponding sub-aperture images and the wavefront is acquired by projecting the slopes to Zernike polynomials.

### Extended Data Fig. 6 Comparison of turbulence-corrected imaging methods between meta-imaging sensor and wide-field wavefront sensor.

**a**, Meta-imaging sensor requires lateral scanning through a piezo stage to increase the spatial sampling rate of sub-aperture images, which consequently improves the precision of aberration estimation. The densely sampled sub-aperture images are divided into blocks and local turbulence wavefronts are iteratively optimized. High-resolution (HR) images are then reconstructed block by block based on a simulated ideal point spread function (PSF). **b**, Wide-field wavefront sensor achieves densely sampled sub-aperture images utilizing the turbulence-induced scanning (TIS) method, without the need for additional scanning hardware. Moreover, the global atmospheric turbulence wavefront can be obtained in real time. By incorporating the global turbulence wavefronts, high-speed aberration-corrected imaging across a wide FOV with low computational costs can be achieved.

### Extended Data Fig. 7 Principle of turbulence-induced scanning.

**a**, The schematic diagram of TIS for each sub-aperture image. Due to the inherent spatial sampling loss in the snapshot light-field measurements, each sub-aperture image can be regarded as a uniform sparse sampling. The temporal evolution of atmospheric turbulence induces multiple sampling positions for the same target at different frames. We merge the uniform sparse sampling information of all frames into a single sub-aperture image with non-uniform dense sampling. Then, through the scattered interpolation method, a uniform dense sampling sub-aperture image can be obtained. **b**, Single sub-aperture images with sparse sampling. Under short exposure, the resolution of each sub-aperture is mainly limited by insufficient sampling rather than blurring caused by turbulence. **c**, Single sub-aperture images with non-uniform dense sampling. The non-uniformity leads to severe artefacts. **d**, Single sub-aperture images after TIS. **e**, Traditional 2D imaging results.

### Extended Data Fig. 8 Normalized covariance matrix of different Zernike modes for the von Kármán turbulence model with different outer scales.

**a**-**d**, Normalized covariance matrix for the 4^{th}-35^{th} Zernike modes of the simulated atmospheric turbulence based on the von Kármán turbulence model. When the outer scale \({L}_{0}\) is changed, there are no significant changes in the statistical patterns. **e**, A slight difference occurs when \({L}_{0}\) equals 50 m.

### Extended Data Fig. 9 Ablation analysis of input measurements parameters in atmospheric turbulence prediction.

**a**, R^{2} (coefficient of determination) of atmospheric turbulence prediction across 51 arcseconds with different input FOVs. **b**, RMSE of atmospheric turbulence prediction across 51 arcseconds with different input FOVs. Data are mean ± s.d. and *n* is the number of predicted isoplanatic regions covering 51 arcseconds in (**a**-**f**), where *n* = 570 in (**a**-**b**). **c**, R^{2} of atmospheric turbulence prediction of 51 arcseconds with various lengths of input frames. The input FOV is 865 arcseconds. **d**, RMSE of atmospheric turbulence prediction of 51 arcseconds with various lengths of input frames. The input FOV is 865 arcseconds. *n*_{1} = 594, *n*_{3} = 582, *n*_{5} = 570, *n*_{7} = 558, *n*_{9} = 546 in (**c**-**d**). **e**, R^{2} of atmospheric turbulence prediction of 51 arcseconds with different frame rates. **f**, RMSE of atmospheric turbulence prediction of 51 arcseconds with different frame rates. *n*_{7.5} = 480, *n*_{10} = 510, *n*_{15} = 540, *n*_{30} = 570 in (**e**-**f**).

### Extended Data Fig. 10 Applicability of WWS for diverse scenarios.

**a**, The center view of long-distance (about 2 km) daytime imaging with a wide-field wavefront sensor at 25 Hz. **b**, The aberration maps of ground-surface turbulence observed in **a** at different time points, showing the frozen-flow phenomenon. **c**, The center view of wide-field wavefront sensor with the simulated point sources as targets under anisoplanatic atmospheric turbulence. The RMS of added aberration is 0.5λ, consisting of 4^{th} to 35^{th} Zernike modes. The coarse-to-fine slope estimation algorithm is replaced by centroid extraction with pre-connected-component segmentation in this case. **d**, The ground truth of the turbulence-induced aberrations added to the point sources. **e**, The reconstructed aberrations by wide-field wavefront sensor. **f**, Box plots of the relative residual wavefront errors σ obtained by wide-field wavefront sensor and Shack-Hartmann wavefront sensor. Here, we show the median, 25% and 75% quartiles, maximum and minimum values, and all the data points (*n* = 25).

## Supplementary information

### Supplementary Information

Supplementary Tables 1 and 2.

### Supplementary Video 1

Direct observation of atmospheric turbulence with a video-rate WWS through a ground-based 80 cm telescope. This video showcases the capabilities of our WWS, mounted on the Tsinghua-NAOC 80 cm telescope, to directly observe atmospheric turbulence wavefronts. Sample subaperture images are featured at the timestamp 00:59:19 (GMT + 8) on 8 April 2023. With a resolution of 7,920 × 6,004 pixels, the WWS is capable of conducting real-time observations at a rate of 30 Hz across an expansive field that exceeds 1,100 arcsec. The frozen-flow phenomenon of atmospheric turbulence is vividly captured.

### Supplementary Video 2

Atmospheric turbulence wavefront prediction on the basis of a video-rate WWS. The evolution of turbulence dynamics 33 ms in advance is successfully predicted using a ConvLSTM neural network, according to the frozen-flow hypothesis and the wide-field measurements.

## Source data

### Source Data Fig. 2

Statistical source data.

### Source Data Fig. 3

Statistical source data.

### Source Data Fig. 4

Statistical source data.

### Source Data Extended Data Fig. 2

Statistical source data.

### Source Data Extended Data Fig. 3

Statistical source data.

### Source Data Extended Data Fig. 4

Statistical source data.

### Source Data Extended Data Fig. 9

Statistical source data.

### Source Data Extended Data Fig. 10

Statistical source data.

## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Guo, Y., Hao, Y., Wan, S. *et al.* Direct observation of atmospheric turbulence with a video-rate wide-field wavefront sensor.
*Nat. Photon.* (2024). https://doi.org/10.1038/s41566-024-01466-3

Received:

Accepted:

Published:

DOI: https://doi.org/10.1038/s41566-024-01466-3