Virtual-source imaging and repeatability for complex near surface

Based on seismic interferometry, the virtual-source (VS) method produces virtual gathers at buried receiver locations by crosscorrelating the direct-downgoing waves with the corresponding reflected-upgoing waves from surface-source gathers. In theory, VS records can improve seismic quality by reducing the negative impact of overburden complexities. However, shallow complex structures and near-surface weathering layers not only severely distort the wavepaths, but also introduce multiples, surface waves, scattering noise, and interference among different wave modes. These additional seismic responses contaminate both the direct-downgoing and reflected-upgoing wavefields. As a result, VS gathers suffer from spurious events and unbalanced illumination associated with distorted radiation patterns. The conventional stacking operator can produce significant artifacts for sources whose wavepaths do not cancel effectively. We review three publications and summarize a comprehensive workflow that addresses these issues using data-driven offset stacking, wavelet-crosscorrelation filtering, and radiation-pattern correction. In the data-driven offset-stacking scheme, each individual source contribution is weighted by quality measures computed over the available offsets. The wavelet crosscorrelation transforms time-offset data into local time-frequency and local time-frequency-wavenumber domains, and filters are designed for the power spectrum in each domain. The radiation-pattern correction spatially alters the contaminated direct wavefields using a zero-phase matched filter, such that the filtered wavefield is consistent with the model-based direct P-wavefields observed at the buried receiver locations. The proposed workflow produces significant improvement, as demonstrated on 13 time-lapse field surveys that included substantial repeatability problems across a 17-month survey gap.

The VS response is retrieved by summing weighted crosscorrelations over the available surface sources:

V(r B |r A ; t) = Σ_rS g(r) D(r A |r S ; t) ⊗ U(r B |r S ; t),  (1)

where ⊗ denotes crosscorrelation. r A , r B , and r S denote the spatial locations of the two receivers at A and B, and of the source at S, respectively. V(r B |r A ; t) is the resulting interferometric data with time lag t at receiver r B when r A is treated as a virtual source. D(r A |r S ; t) represents the direct-downgoing wavefield recorded at the buried receiver r A , while U(r B |r S ; t) represents the reflected-upgoing wavefield at r B . g(r) is the weight operator, a simple Gaussian function centered at the VS location, g(r) = exp(−πr²/(2(R/2)²)), where R is the maximum offset and r is the offset between the source r S and the buried receiver r A . g(r) ranges from 1 at the receiver location to approximately 0 at the maximum offset. Figure 1 summarizes the underlying principle by which near-surface complexity degrades the VS response. Direct arrivals D(r A |r S,1 ) and D(r A |r S,2 ) are excited by surface sources r S,1 and r S,2 and recorded by a buried receiver r A ; r S,1 and r S,2 have the same offset r (light blue solid lines) with respect to r A . In the absence of wavefield-separation capability, we treat the direct arrivals as the downgoing wavefields and the late-arriving reflections as the upgoing wavefields. Figure 2 shows a typical record. We consider the unprocessed near-offset data as the downgoing wave, usually at offsets less than 30 m and within 100 ms (the time section above the red dashed lines). These values were estimated from our experience that the early arrivals are relatively unpolluted and still dominated by the direct P-wave. The later-arriving reflections (>300 ms, the time section below the green dashed lines) at all offsets are used as approximations to the upgoing waves. The horizontal distance is 2400 m and the vertical time axis spans 0 s to 2 s (~3300 m).
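As a minimal numerical sketch of the VS retrieval of Eq. (1), the crosscorrelation-and-stack can be written as follows; the array sizes, offsets, and taper width R are illustrative, and the synthetic spike wavefields stand in for real separated down- and upgoing records:

```python
import numpy as np

def gaussian_taper(r, R):
    """Conventional source-side weight g(r): ~1 at the VS location,
    decaying toward 0 at the maximum offset R."""
    return np.exp(-np.pi * r**2 / (2.0 * (R / 2.0)**2))

def virtual_source_trace(D, U, offsets, R):
    """Retrieve one VS trace V(r_B|r_A; t) by crosscorrelating the
    direct-downgoing wavefield D (n_src x n_t, recorded at r_A) with the
    reflected-upgoing wavefield U (n_src x n_t, recorded at r_B), then
    stacking over sources with the Gaussian taper g(r).
    Returns the causal (non-negative) lags, length n_t."""
    n_src, n_t = D.shape
    v = np.zeros(n_t)
    for i in range(n_src):
        # full crosscorrelation of this source's up- and downgoing traces
        xc = np.correlate(U[i], D[i], mode="full")  # lags -(n_t-1)..(n_t-1)
        v += gaussian_taper(offsets[i], R) * xc[n_t - 1:]  # keep causal lags
    return v
```

With impulsive down- and upgoing arrivals, the stacked correlation peaks at the lag equal to the traveltime difference between the two arrivals, which is the redatumed reflection time.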
There are three main challenges that need to be addressed: (1) Nonstationary offset stacking. Figure 1 illustrates a scenario in which D(r A |r S,1 ) and U(r B |r S,1 ) share a common raypath from shot r S,1 . Crosscorrelating with U(r B |r S,1 ) therefore correctly compensates the phase associated with the near-surface propagation and makes a stationary contribution to V(r B |r A ), so this shot should be added into V(r B |r A ) with maximum weight. However, since the weight g(r) (purple bell curve) is not data driven, it provides only a minor weight during offset stacking. In contrast to r S,1 , the near-offset shot r S,1′ , linearly mapped from the buried VS to the surface location, does not share the common raypath (green dashed lines) with r A and therefore contributes destructively. From this analysis we observe that weights g(r) based on surface location (purple bell curve) can assign inappropriate weights and lead to non-stationary contributions. Other shots such as r S,2 may also contribute to V(r B |r A ) constructively, as long as their source raypaths to r A cancel effectively with the raypaths to r B .

Figure 1.
An illustration of virtual-source redatuming in the presence of near-surface complexity. Two surface sources, at the same distance (blue solid lines) from the VS, generate two direct arrivals recorded by the buried receivers. The straight raypath is broken into a set of piecewise rays (yellow solid lines), resulting in inequivalent angle coverage. Surface multiples and near-surface reverberations (yellow dashed lines) also contaminate the direct arrivals (yellow solid lines). The conventional stacking operator (Gaussian function g(r)) is shown as a purple curve on top of the surface. The surface source r S generates direct arrivals recorded by the buried receivers r A to r B , which make a positive contribution to the VS. The near-surface complexity breaks the raypath into a set of piecewise rays (green solid lines) with potentially complex geometry. The incorrectly projected source r S′ (green dashed lines) does not share a common raypath and therefore makes a nonstationary contribution to the VS. Conventional weighting g(r) assigns inappropriate weights to surface sources and degrades the VS responses.
(2) Multi-wave-mode contamination. The intricate overburden introduces near-surface reverberations (yellow dashed lines) and S-wave modes that contaminate the direct arrivals (yellow solid lines), regardless of careful time windowing and spatial energy tapering. Considering this, each of the up- and downgoing wavefields can be written as a summation of different modes:

D(r A |r S ; t) = D LP (r A |r S ; t) + D LM (r A |r S ; t) + D LS (r A |r S ; t),
U(r B |r S ; t) = U LP (r B |r S ; t) + U LM (r B |r S ; t) + U LS (r B |r S ; t),  (2)

where subscripts P, M, and S represent direct P-arrivals, near-surface multiples, and S-wave modes, respectively. We add the subscript L (land data) in front of P, M, and S to avoid subscript conflicts with other sections. (D LP , U LP ), (D LM , U LM ), and (D LS , U LS ) are the associated down- and upgoing components, respectively. Only the crosscorrelation of the downgoing D LP (r A |r S ; t) with the upgoing U LP (r B |r S ; t) in Eq. (2) produces a kinematically correct event. The other terms, D i (r A |r S ; t) ⊗ U j (r B |r S ; t) with i ≠ j, typically generate spurious events. (3) Unbalanced illumination. The straight raypath breaks into a set of piecewise rays (yellow and green solid lines). This generates inequivalent angle coverage (θ 1 of D(r A |r S,1 ) ≠ θ 2 of D(r A |r S,2 )), even though r A is covered by the same offset; the VS illumination is therefore heavily biased beneath the overburden.

Method: A Comprehensive VS Workflow to Tackle Complex Near-Surface
Step 0: Review of the continuous wavelet transform. At the beginning of the methods section, we briefly review the fundamentals of the continuous wavelet transform (CWT), as it underpins the following sections. The CWT decomposes a signal into wavelets: small oscillations that are highly localized in time. The Fourier transform decomposes a signal into infinite-length sines and cosines and thereby loses all time-localization information. The CWT's basis functions, in contrast, are scaled and shifted versions of a time-localized mother wavelet. The CWT is used to construct the time-frequency (TF) representation of a signal and offers good time and frequency localization [22][23][24][25][26][27][28] . This makes the CWT an excellent tool for mapping the changing properties of non-stationary seismic signals.
For a given 1D seismic trace D(t), the forward CWT expands it from a 1D to a 2D signal:

Ψ ϕ (f α , τ) = ∫ D(t) ϕ(f α (t − τ)) dt,  (3)

where Ψ ϕ (f α , τ) is a 2D wavelet coefficient and ϕ(t) denotes the Ricker source wavelet. The symbols f α and τ are the Ricker wavelet frequency and the local time delay, respectively. The inverse wavelet transform brings the 2D signal back to the time domain:

D(t) = (1/C ϕ ) ∫∫ Ψ ϕ (f α , τ) ϕ(f α (t − τ)) df α dτ,  (4)

where C ϕ is a frequency scaling factor. A similar process can be applied to a 2D seismic gather D(r, t), for which the multi-dimensional transform is given by

Ψ ϕ (f α , τ, k r ) = ∫∫ D(r, t) ϕ(f α (t − τ)) e^(−ik r r) dt dr,  (5)

where k r and r represent wavenumber and spatial distance, respectively, and Ψ ϕ (f α , τ, k r ) denotes the 3D wavelet coefficients over times τ, frequencies f α , and wavenumbers k r .
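As a concrete sketch of the forward transform, the TF panel can be computed by correlating the trace with unit-energy Ricker wavelets at a set of trial frequencies. The unit-energy normalization here is one of several possible conventions and is an assumption of this sketch, not necessarily the paper's implementation:

```python
import numpy as np

def ricker(t, f):
    """Ricker (Mexican-hat) wavelet of peak frequency f."""
    a = (np.pi * f * t)**2
    return (1.0 - 2.0 * a) * np.exp(-a)

def cwt_ricker(trace, dt, freqs):
    """Forward CWT of a 1D trace: correlate the trace with scaled Ricker
    wavelets, giving a 2D coefficient panel Psi(f_alpha, tau)."""
    n_t = len(trace)
    t = (np.arange(n_t) - n_t // 2) * dt        # centered time axis
    coeffs = np.empty((len(freqs), n_t))
    for i, f in enumerate(freqs):
        w = ricker(t, f)
        w /= np.sqrt(np.sum(w**2))              # unit-energy normalization
        coeffs[i] = np.correlate(trace, w, mode="same")
    return coeffs
```

For a trace containing a single 30 Hz Ricker arrival, the coefficient magnitude peaks at the 30 Hz row and at the arrival time, which is exactly the time-frequency localization the VS processing relies on.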
Step 1: Data-driven weight offset stacking. The design of stacking weights for the VS has rarely been fully studied in the existing literature (Zhao et al. 21 ). In contrast to previous works, this study proposes a fully data-driven weighting approach with a wavelet-domain implementation. The proposed approach is formulated as

V w (r B |r A ; t) = Σ_rS w(r S ) D(r A |r S ; t) ⊗ U(r B |r S ; t).  (6)

Our goal is to adaptively choose weighting coefficients w(r S ) for all sources within the available offsets such that V w (r B |r A ; t) is computed with less contribution from non-stationary raypaths. The coefficients w(r S ) are entirely data-driven and directly computed from the downgoing and upgoing wavefields. To evaluate the quality of each individual r S contribution to the retrieved V w (r B |r A ; t), we use V(r B |r A ; t) (Eq. (1)) to predict the upgoing wavefields:

U pred (r B |r S ; t) = V(r B |r A ; t) * D(r A |r S ; t),  (7)
where * denotes convolution. We then use the quality of the upgoing prediction U pred (r B |r S ; t) to obtain the weight coefficients. For time-lapse monitoring purposes, we customize Eq. (7) into a target-oriented function focused on the target reservoir. This function is designed to include a variety of seismic attributes, such as the τ, f α , and k r (TFK) content of the target pre-stack events. We therefore first apply the transform of Eq. (3) to each upgoing trace to obtain the 2D wavelet coefficients Ψ ϕ (f α , τ). To improve the robustness of the obtained weights, Eq. (5) is applied to U pred (r B |r S ; t) along the space dimension (Eq. (8)). The initial weighting coefficient w(r S ) is then computed as the normalized cross-spectrum between the predicted U pred (r B |r S ) and the recorded U(r B |r S ) in the TFK domain (Eq. (9)). Figure 3 illustrates a cross-spectrum TFK cube from a selected event. We may restrict the ranges of τ, f α , and k r to retain the target features and mask other unassociated events. As discussed in the introduction, the target horizon (2000 m) lies at about 1 s-1.4 s and has an effective bandwidth of about 20 to 60 Hz 1 . We therefore limit the ranges of τ and f α , and sum over the corresponding frequencies, time lags, and wavenumbers to produce w(r S ). Equation (9) is then applied to these time-lapse datasets.

The resulting stacking weights for the conventional and the new stacking are compared in Fig. 4. The offset stacking of survey S1 (Fig. 4b) and survey S11 (Fig. 4c) yields data-driven, survey-dependent weights, and therefore responds adaptively to time-lapse changes in the near surface at a given VS. In contrast, the conventional spatial taper (Fig. 4a) provides only a static weight distribution for all surveys. Generally, the offset weights decrease with offset and tend to be smaller than the equivalent weights from the conventional taper.
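A simplified, single-trace stand-in for the TFK-domain weighting of Eq. (9) can be sketched as follows, using a plain Fourier cross-spectrum over the 20-60 Hz target band rather than the full wavelet-domain measure; the function names and band limits are illustrative assumptions:

```python
import numpy as np

def source_weight(u_pred, u_obs, dt, band=(20.0, 60.0)):
    """Data-driven stacking weight for one surface source: the normalized
    cross-spectrum between the predicted and recorded upgoing traces,
    summed over the target frequency band. Returns a value in [0, 1];
    1 means the prediction matches the recording perfectly."""
    n = len(u_pred)
    freqs = np.fft.rfftfreq(n, dt)
    P = np.fft.rfft(u_pred)
    O = np.fft.rfft(u_obs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    num = np.abs(np.sum(P[sel] * np.conj(O[sel])))
    den = np.sqrt(np.sum(np.abs(P[sel])**2) * np.sum(np.abs(O[sel])**2))
    return num / den if den > 0 else 0.0

def offset_stack_weights(U_pred, U_obs, dt):
    """Weights for all sources, normalized to sum to 1. A smoothing
    function can be applied afterwards to stabilize the stack."""
    w = np.array([source_weight(p, o, dt) for p, o in zip(U_pred, U_obs)])
    s = w.sum()
    return w / s if s > 0 else w
```

A source whose predicted upgoing wavefield matches the recording (a stationary contribution) receives a large weight, while a source whose prediction is incoherent with the recording is down-weighted.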
Among all surface sources contributing to each selected VS location, the offset weights show similar contribution patterns for the different surveys, but also adapt to time-lapse changes, as seen in the minor pattern dissimilarity. It can also be observed that the near-offset shots have larger contributions. Due to the data-driven nature of the weighting coefficients, they may be influenced by noise and exhibit large lateral variations; a smoothing function can be applied to stabilize the data-driven stacking process. We summarize step 1 as a workflow, colored green, in Fig. 5.

Figure 4. Stacking weights from 9 shot lines of surface sources that contribute to VS (#60): (a) the conventional stacking operator of surveys S1 and S11, (b) the offset stack of survey S1, and (c) the offset stack of survey S11.

Scientific Reports (2019) 9:16656 | https://doi.org/10.1038/s41598-019-53146-w

Step 2: Wavelet-crosscorrelation filtering for multi-wave-mode contamination. Crosscorrelation can be used to determine the relative time delay between two seismic signals, while the wavelet coefficients provide a local time-frequency distribution of the seismic traces. Equation (1) can be written in the form of the wavelet crosscorrelation 29 (Eq. (10)), in which the wavelet coefficients of the direct-downgoing waves D(r A |r S ) and of the reflected-upgoing waves U(r B |r S ) are crosscorrelated in the TF domain. The wavelet crosscorrelation function WV V(rB|rA) (f α , τ) allows the detection of nonstationary coherence structures and the potential time lag between two seismic traces.
It can be shown that WV(f α , τ) 26 is related to the classical crosscorrelation V(r B |r A ) in the Fourier domain:

∠WV(f α , ω) = ∠V(ω),  (11)

|WV(f α , ω)| = f α |ϕ(f α ω)|² |V(ω)|.  (12)

Figure 5. The summarized workflow is colored in green for the section of step 1. The input data is from the buried geophones, and the output comprises adaptive weights for VS offset stacking.
Here WV(f α , ω) is the wavelet crosscorrelation coefficient represented in the wavelet-frequency and angular-frequency domain, and V(ω) and ϕ(f α ω) denote the cross-spectrum and the mother wavelet in the Fourier domain, respectively. Equation (11) shows that WV(f α , ω) has the same phase spectrum as V(ω), where ∠ represents the phase spectrum. In Eq. (12), the amplitude spectrum |WV(f α , ω)| is the cross-spectrum |V(ω)| weighted by the factor f α |ϕ(f α ω)|². Note that WV(f α , ω) is a complex-valued function. Subsequent filtering is applied to the amplitude spectrum only; the original phase of the recorded data is retained to honor the kinematics of the Green's function extracted from the VS process. A 2D TF-domain filter, denoted H(f α , τ), is applied to the obtained wavelet crosscorrelation WV i (f α , τ):

WV o (f α , τ) = H(f α , τ) ∗∗ WV i (f α , τ),  (14)

where ∗∗ in Eq. (14) denotes the 2D convolution operator and WV o (f α , τ) is the output wavelet crosscorrelation in the TF domain. H(f α , τ) may take various 2D filter forms; for the discussion here, we use the first derivative of a 2D Gaussian filter as an example, parameterized by σ fα and σ τ , the standard deviations along the f α and τ axes. Small σ fα and σ τ values lead to higher localization resolution along the f α and τ axes. The Gaussian filter can be replaced by other filters depending on the purpose of the processing. As an example, the resulting wavelet coefficients before and after TF filtering are illustrated in Figs 6 and 7, respectively. The strong-energy events (yellow-orange patches) represent reflections generated by the target reflectors, whereas the weaker blue-green events with random shapes represent scattered noise. Removing the spurious events is desirable to enhance the overall S/N prior to correlation. Figure 7a demonstrates that H(f α , τ) strengthens these deep targets by suppressing the background noise. The inverted time series before and after H(f α , τ) filtering are shown in Figs 6b and 7b, respectively.
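The amplitude-only filtering described above can be sketched as follows. For simplicity, this example smooths the amplitude with a plain 2D Gaussian (via scipy.ndimage) rather than the Gaussian-derivative filter of the paper, but it illustrates the key point: the filter acts on the amplitude spectrum only, while the phase is left untouched:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tf_amplitude_filter(WV, sigma_f=2.0, sigma_tau=5.0):
    """Filter a complex TF panel WV(f_alpha, tau): a 2D Gaussian smoother
    (a simplified stand-in for H(f_alpha, tau)) is applied to the
    amplitude spectrum only, while the original phase is retained to
    honor the kinematics of the extracted Green's function."""
    amp = np.abs(WV)
    phase = np.angle(WV)
    amp_filtered = gaussian_filter(amp, sigma=(sigma_f, sigma_tau))
    return amp_filtered * np.exp(1j * phase)
```

Separating amplitude and phase this way guarantees that denoising cannot shift the traveltimes carried by the correlation, which is essential for time-lapse repeatability.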
We extend the 2D TF filter H(f α , τ) to a 3D TFK filter H(f α , τ, k), which is expected to mitigate coherent noise in the space/wavenumber dimension. Extending the 2D TF filtering given in Eq. (14), we propose a 3D filtering in the time-frequency-space (TFX) domain (Eq. (16)). As a simple example, H(f α , τ, k) can be a simple f α -k bandpass filter that allows a particular propagation velocity to pass while filtering out other wavefield components. As another example, we apply the multi-channel filter H(f α , τ, k) to an entire VS shot gather contaminated by surface waves (Fig. 8a). A TFX cube is generated by gathering slices from each receiver in response to a common virtual shot; the data are then Fourier transformed along the receiver dimension to obtain the corresponding wavenumber spectra. Figure 8b shows the recovered signal after applying the inverse wavelet transform. Figure 8 demonstrates that the wavelet crosscorrelation was able to eliminate residual surface waves with a simple TFX filter, where the traditional VS failed. We summarize step 2 as a workflow, colored blue, in Fig. 9.
Step 3: Radiation-pattern correction for unbalanced illumination. To address the unbalanced-illumination issue, we use a method 30 that iteratively constructs and applies a matched filter toward an ideal 3D amplitude spectrum for the direct downgoing arrivals. The simplest approach to estimating an isotropic radiation pattern is to compute the Green's function of direct P-waves in a homogeneous model 31 . This homogeneous model is 2.5D and is built independently for every VS, spanning from the buried receiver up to the surface, with a radius encompassing the contributing sources. By targeting an optimal approximation to the direct P-wave arrivals before correlating with the reflections, the correct image associated with a cleaner direct P-wave can be enhanced, while near-surface-induced artifacts are suppressed. The following workflow describes the radiation-pattern correction.
Estimate the initial homogeneous model. The length of the straight ray connecting source and receiver is divided by the automatically picked onset arrival time, giving a velocity estimate for each source-receiver pair. The local initial model above each VS is then derived as the average of the calculated velocities. Using this model, the synthetic direct arrivals are generated with an analytical 3D acoustic Green's function 31 :

D P (r A |r S ; t) = (1/(4πr)) ϕ(t − rχ) e^(−ωt/(2Q)),  (18)

where D P (r A |r S ; t) represents the synthetic direct P-waves, which can then be converted into the FK domain. ϕ(t − rχ) denotes the source Ricker wavelet (Eq. (3)), r is the distance between the surface source r S and the buried VS r A , and χ is the constant slowness (the inverse of the P-wave velocity). The factor 1/4πr is the geometrical spreading of a spherical wavefield in a homogeneous medium. The term e^(−ωt/(2Q)) represents the amplitude attenuation of this analytical 3D acoustic Green's function, where Q is a constant seismic quality factor derived from the data to match waveform amplitudes and estimated from a nearby acoustic log. The dominant frequency is 35 Hz, matching the wavelet pattern of the target reservoir reflection. We solve Eq. (18) using the initial homogeneous velocity and illustrate the target synthetic responses in Fig. 10.

Estimate the matched filter so that the filtered recorded wavefields match the modeled wavefields. A cost function in the FK domain is designed to shape the radiation pattern of the field data to match that of the modeled synthetic data (Eq. (19)). As Eq. (19) is solved iteratively, D̃ P,r A is also updated with a new velocity model and is therefore a function of the matched filter from the previous iteration.
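A sketch of the analytical direct-arrival modeling of Eq. (18) is given below. The depth, slowness, and Q values are illustrative placeholders, and the attenuation factor is evaluated at the dominant frequency and the arrival time as a simplification of the frequency-dependent term:

```python
import numpy as np

def ricker(t, f0=35.0):
    """Ricker source wavelet with dominant frequency f0 (35 Hz here,
    matching the target reservoir reflection)."""
    a = (np.pi * f0 * t)**2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_direct(offsets, depth, slowness, t, f0=35.0, Q=60.0):
    """Analytical direct P arrivals in a homogeneous half-space: a Ricker
    wavelet delayed by the straight-ray traveltime r*chi, scaled by the
    1/(4*pi*r) geometrical spreading and a constant-Q attenuation factor
    exp(-omega0*t/(2Q)) evaluated at the dominant angular frequency.
    Q = 60 is an illustrative value, not the field estimate."""
    omega0 = 2.0 * np.pi * f0
    D = np.empty((len(offsets), len(t)))
    for i, x in enumerate(offsets):
        r = np.hypot(x, depth)          # source-to-buried-receiver distance
        tt = r * slowness               # direct-arrival traveltime
        D[i] = (ricker(t - tt, f0) / (4.0 * np.pi * r)
                * np.exp(-omega0 * tt / (2.0 * Q)))
    return D
```

The modeled gather exhibits the desired isotropic radiation pattern: arrival times follow the straight-ray moveout, and amplitudes decay smoothly with distance, which the matched filter then enforces on the field data.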
H̃ is updated iteratively (Eq. (20)). This is an iterative process, since D̃ P,r A is also updated as a function of H̃. The matched filter between the 3D amplitude spectra of D̃ r A and D̃ P,r A may be considered a zero-phase FK filter, enforcing an isotropic radiation pattern associated with the desired direct P-wave while leaving the phase intact. Since this field example has a good initial model estimate, only five iterations are needed for the inversion of the velocity and the matched filter to converge; the stopping criterion is a summed misfit of less than 10⁻². Figure 11 illustrates the gated original direct arrivals of the 3D field common-receiver array (aperture 30 m at 7.5 m sampling interval).

Update the homogeneous model velocity. The homogeneous overburden model is iteratively updated, coupled with the matched-filter update. The velocity model is updated by solving the same minimization cost function, Eq. (20), but with the matched filter fixed to the value obtained from the previous iteration and optimized over the slowness χ. After the initial model is built, the slowness is updated iteratively from the gradient of the synthetic wavefield (Eq. (18)), written in vector notation over all contributing surface sources with respect to the VS. Since χ i is a constant value, the symbol 〈〉 simply averages the gradient ∇g χi over all contributing sources. We use a line search to determine the step length α, and χ i is then iteratively updated in the TX domain. The updated χ i is plugged back into Eq. (18), and the process repeats until the iterations are finished.
Apply the estimated matched filter to the direct wavefields and perform VS. After a few iterations, H̃ from Eq. (20) is applied to yield the final output with radiation-pattern correction, followed by a final Fourier transformation back to V(r B |r A ; t). Again, the phase term e^(jϕ) is kept out of the filter H̃ so that the original phase is preserved. The adaptively updated matched filter is applied only to the direct-downgoing wavefield prior to crosscorrelation with the upgoing wavefield; ideally, the matched filter should not affect any time-lapse signal carried by the reflected wavefield after crosscorrelation. Figure 12 shows the matched-filter output after the final iteration. In this example, the first-arrival picks provide velocity updates of sufficient quality that the matched-filter output after five iterations already gives reasonable results. In the time domain, the iteratively computed matched filter effectively regularizes the spectral bandwidth across the gathers, suppresses the coda waves, and stabilizes the wavelet character (Fig. 12). The direct-downgoing wavefields (Fig. 11) gradually approach the ideal P-wave (Fig. 10) associated with the isotropic radiation pattern. The reason we propose an iterative inversion scheme is that there is always some error associated with auto-pickers. We summarize step 3 as a workflow (orange) in Fig. 13.

Figure 13. The summarized workflow is colored in orange for the section of step 3. The input data is from the buried geophones, and the output comprises an inverted matched filter after radiation-pattern correction.
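One pass of the zero-phase matched filtering can be sketched as follows, with the filter formed as a stabilized amplitude-spectrum ratio in the FK domain. The stabilization constant eps and the simple ratio form are assumptions of this sketch; the paper obtains H̃ by iterative inversion coupled with the velocity update:

```python
import numpy as np

def zero_phase_matched_filter(D_obs, D_mod, eps=1e-3):
    """One pass of the radiation-pattern correction as a zero-phase match
    in the FK domain: H = |D_mod| / (|D_obs| + eps) shapes the amplitude
    spectrum of the recorded direct arrivals toward that of the modeled
    isotropic P-wave, while the recorded phase is left intact."""
    F_obs = np.fft.fft2(D_obs)
    F_mod = np.fft.fft2(D_mod)
    H = np.abs(F_mod) / (np.abs(F_obs) + eps)   # zero-phase: real, >= 0
    F_out = H * F_obs                           # amplitude shaped, phase kept
    return np.real(np.fft.ifft2(F_out)), H
```

Because H is real and non-negative, multiplying by it cannot alter the phase spectrum of the recorded wavefield, so the kinematics of the direct arrivals survive the correction.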

Results
To assess the potential performance gain of the proposed workflow, the buried-receiver data from the same series of field surveys was processed in three stages: noise removal, VS redatuming, and CMP stacking. In this study we focus on the second stage, where four variants of the proposed workflow were evaluated independently for each VS: the conventional procedure, the offset stacking (step 1), plus the wavelet-correlation filter (steps 1 + 2), and plus the radiation-pattern correction (steps 1 + 2 + 3). Figure 14 summarizes this comprehensive workflow by organizing Fig. 5 (green), Fig. 9 (blue), and Fig. 13 (orange) in order. During the VS processing, an auto-picking algorithm selected the first arrivals and windowed the direct-arrival energy in a 60 ms time window. All four outputs of each VS were obtained in parallel from post-correlation gathers stacked within common offsets. A non-VS CMP stack was created as well for comparison.
We plot the non-VS and proposed VS stacks of a selected survey in Fig. 15. The target horizon and selected areas are marked with green and blue arrows, respectively. The non-VS control section (Fig. 15a) shows good signal continuity on the marked reflectors, whereas the conventional VS stack (Fig. 15b) appears noisier, with less reflection continuity in some zones. The VS stack using the offset stack (step 1; Fig. 15c) shows better signal-to-noise ratio (SNR) and better continuity than the conventional stack. Taken one step further, the VS stack using the offset stack and the wavelet-correlation filter (steps 1 + 2; Fig. 15d) demonstrates reduced contamination from residual surface waves (steeply dipping, cross-cutting events) and scattering noise. Finally, the comprehensive VS stack with the offset stack, the wavelet-correlation filter, and the radiation-pattern correction (steps 1 + 2 + 3; Fig. 15e) shows the best SNR on the major reflectors.

Figure 14. The overall workflow consists of step 1 (green), step 2 (blue), and step 3 (orange) for our comprehensive VS processing. The output VS gather combines all three proposed steps and is used for subsequent imaging.

Figure 16 shows a 3D volume comprising the 2D stacks of all 13 surveys as a repeatability illustration. Recall that there is no time-lapse signal in these data, so we focus on repeatability only. Specifically, we can compare surveys across time at specific sections (the arrows in Fig. 16). Repeatability issues can be observed from survey S6 to survey S7 in the non-VS control section (Fig. 16a), particularly in the areas marked by the yellow arrows; this gap is caused by the 17-month acquisition break. The conventional VS (Fig. 16b) shows a repeatability improvement, but the gap still exists.
In contrast, we observe that using our comprehensive workflow gradually mitigates these discontinuities (Fig. 16c,d). Finally, Fig. 16e eliminates this break using a series of our proposed steps (steps 1 + 2 + 3). It shows a marked improvement in repeatability at all reflectors, especially for the time slices at the target level.
As discussed in step 1, the offset-stacking scheme is fully data-driven and target-oriented toward the reservoir target. As a result, we observe that Fig. 16c strengthens the time-lapse continuity of the reservoir target from survey S6 to survey S7. As a side effect, however, other depths may experience a loss of time-lapse repeatability, most notably the discontinuities around surveys 5-6 and 11-12. This is a consequence of the data-driven weighting itself. Specifically, as discussed in step 1, the target response of the monitored reservoir lies around 1.0 s-1.4 s with an effective bandwidth of about 20-60 Hz. The data-driven weight is therefore adjusted to capture this response pattern in the TFK domain (Eq. (9)). As a result, the algorithm strengthens the continuity of events associated with the target TFK properties, whereas other events (e.g., high-frequency or shallow events) may suffer a repeatability loss. We are currently at a preliminary stage of implementation in which the stacking weights are determined heuristically rather than through systematic optimization; a more robust, self-adaptive weighting scheme is one of our ongoing research directions.
We quantify repeatability using the normalized root-mean-square difference (NRMS) computed over a small time window around the target zone 32 . NRMS is calculated between two seismic traces in a given window as the RMS of their difference divided by their average RMS, expressed as a percentage. NRMS is sensitive to differences in seismic waveforms and extremely sensitive to even the smallest changes in the data. NRMS is computed between all surveys, which results in 78 NRMS combinations at each common-depth point (CDP). A histogram of NRMS values for each method is displayed in Fig. 17. The non-VS control section (blue line) shows a bi-modal NRMS distribution (with peaks around 20% and 65%); not surprisingly, the larger NRMS values correspond to survey pairs split between the two survey groups S1-S6 and S7-S13. A similar distribution, though narrower and less pronounced than the non-VS one, is also observed for the conventional VS (red) and the offset-stack VS (green). In contrast, the wavelet VS (black), using steps 1 + 2, bridges the bi-modal gap and yields a single peak at about 29%. Finally, the combined process (steps 1 + 2 + 3) produces the best NRMS values, centered at about 25% (light blue). These results support the observations made from Fig. 16 and indicate that the proposed strategy can effectively reduce overburden effects between surveys separated by a 17-month gap, improving on the conventional VS.
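The NRMS metric described above can be computed directly from its standard definition (RMS of the trace difference over the average RMS of the two traces, in percent):

```python
import numpy as np

def nrms(a, b):
    """Normalized RMS repeatability between two windowed traces,
    as a percentage: 200 * RMS(a - b) / (RMS(a) + RMS(b)).
    0% means identical traces; about 141% is the level expected for two
    uncorrelated traces of equal energy; 200% means equal traces of
    opposite polarity."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 200.0 * rms(a - b) / (rms(a) + rms(b))
```

In practice the metric is evaluated trace by trace in the target window for every survey pair, then aggregated per CDP to build the histograms of Fig. 17.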

Discussion and Conclusion
We review three articles and summarize a comprehensive strategy to improve the image quality and repeatability of virtual-source redatuming in the presence of a complex near surface. This VS method consists of a data-driven offset stack, wavelet crosscorrelation, and radiation-pattern correction. In the first step, the offset stack integrates wavelet transformation and cross-coherence analysis; the offset weight coefficients are computed directly from the data, based on the coherence similarity between the predicted and the original upgoing wavefields. The second step, wavelet-crosscorrelation filtering, integrates wavelet transformation, crosscorrelation, and non-stationary TF and TFK filtering. This integration maintains signal coherence across frequency and wavenumber bands, and exploits the scale dependency at each frequency and wavenumber to allow for better noise filtering and signal separation. Effective noise suppression and high-resolution separation of nonstationary signals are achieved using TF, TFK, or TFX filtering of the wavelet-correlation coefficients. Lastly, prior to crosscorrelation, the direct arrivals are iteratively estimated using a zero-phase matched filter incorporating the 3D radiation pattern. This step modifies the radiation pattern of the virtual source such that it closely resembles that of the ideal direct P-wave with an isotropic radiation pattern. The iterative matched filtering effectively corrected the distorted radiation pattern of the direct-downgoing wavefields and produced a significant improvement in illumination.

Figure 16. 3D volume plots of 2D CDP stacks of all 13 surveys: (a) non-virtual source (non-VS), (b) conventional VS, (c) VS using the offset stack, (d) VS using the offset stack and the wavelet-crosscorrelation filtering, and (e) VS using the offset stack, the wavelet-crosscorrelation filtering, and the radiation-pattern correction. The green arrows indicate the SNR improvement on the target reservoir. The discontinuity between surveys S1-S6 and surveys S7-S13 is marked by the yellow arrows. Panel (e) shows the best continuity along the time-lapse dimension.
Results from field-data tests demonstrate that this comprehensive strategy can effectively attenuate virtual-source artifacts and produce distinct stacked images without requiring a near-surface model. As time-lapse noise is mainly caused by near-surface variations over time, reducing the near-surface influence on reflection signals allows the described methodology to improve time-lapse repeatability as well as imaging quality. In summary, offset stacking, crosscorrelation, and denoising are the three essential components of virtual-source redatuming. Within this framework, we have demonstrated the value of the proposed workflows in a challenging onshore time-lapse application in a desert environment. The improvements were confirmed using 13 time-lapse surveys that included a significant repeatability problem across a 17-month survey gap.