Compressive sensing
09632193 · 2017-04-25
Assignee
Inventors
- Chengbo Li (Houston, TX, US)
- Sam T. Kaplan (Oakland, CA, US)
- Charles C. Mosher (Houston, TX, US)
- Joel D. Brewer (Houston, TX, US)
- Robert G. Keys (Houston, TX, US)
CPC classification
G01V1/36
PHYSICS
G01V2210/57
PHYSICS
International classification
G01V1/36
PHYSICS
Abstract
Computer-implemented method for determining an optimal sampling grid during seismic data reconstruction includes: a) constructing an optimization model, via a computing processor, given by min_u ||Su||_1 s.t. ||Ru − b||_2 ≤ σ, wherein S is a discrete transform matrix, b is seismic data on an observed grid, u is seismic data on a reconstruction grid, and matrix R is a sampling operator; b) defining mutual coherence μ(R, S) through the recovery condition m ≥ C μ²(R, S) S log n,
wherein C is a constant, S is the cardinality of Su, m is proportional to the number of seismic traces on the observed grid, and n is proportional to the number of seismic traces on the reconstruction grid; c) deriving a mutual coherence proxy, wherein the mutual coherence proxy is a proxy for mutual coherence when S is over-complete and wherein the mutual coherence proxy is exactly the mutual coherence when S is a Fourier transform; and d) determining a sample grid r_* = arg min_r μ(r).
Claims
1. A computer-implemented method for determining optimal sampling grid during seismic data reconstruction, the method comprising: a) constructing an optimization model, via a computing processor, given by min_u ||Su||_1 s.t. ||Ru − b||_2 ≤ σ, wherein S is a discrete transform matrix, b is seismic data on an observed grid, u is seismic data on a reconstruction grid, σ represents noise level in observed data, and matrix R is a sampling operator; b) defining mutual coherence as
2. The method of claim 1, wherein the sample grid is determined via randomized greedy algorithm method.
3. The method of claim 2, wherein the randomized greedy algorithm method finds local minimum.
4. The method of claim 1, wherein the sample grid is determined via stochastic global optimization method.
5. The method of claim 1, wherein r_* = arg min_r μ(r) is non-convex.
6. The method of claim 1, wherein the mutual coherence proxy is derived using fast Fourier transform.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) A more complete understanding of the present invention and benefits thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings in which:
DETAILED DESCRIPTION
(16) Reference will now be made in detail to embodiments of the invention, one or more examples of which are illustrated in the accompanying drawings. Each example is provided by way of explanation of the invention, not as a limitation of the invention. It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the scope or spirit of the invention. For instance, features illustrated or described as part of one embodiment can be used on another embodiment to yield a still further embodiment. Thus, it is intended that the present invention cover such modifications and variations that come within the scope of the invention.
(17) Some embodiments of the present invention provide tools and methods for reconstructing seismic data utilizing compressive sensing. Convex optimization models used for reconstructing seismic data can fall under at least two categories: synthesis-based convex optimization model and analysis-based convex optimization model (Candes et al., 2008). As used herein, the term convex optimization problem and its related terms such as convex optimization model generally refer to a mathematical programming problem of finding solutions when confronted with conflicting requirements (i.e., optimizing convex functions over convex sets).
(18) Some embodiments of the present invention provide tools and methods for optimizing the analysis-based convex optimization model. At least one embodiment adapts an alternating direction method (Yang and Zhang, 2011) with a variable-splitting technique (Wang et al., 2008; Li, 2011). This allows a user to take advantage of the structure in the seismic data reconstruction problem to provide a more efficient solution. Other advantages will be apparent from the disclosure herein.
(19) According to one or more embodiments of the present invention, a two-dimensional windowed Fourier transform representation of the data (e.g. Mallat, 2009) may be provided. In some embodiments, an irregular acquisition grid may be provided, which is an additional condition for exact recovery given by compressive sensing theory. The irregularity in seismic data can be quantified by mutual coherence which is a function of the irregular acquisition grid and windowed Fourier transform basis (e.g. Elad et al., 2007).
(20) Some embodiments provide tools and methods for interpolated compressive sensing data reconstruction for recovering seismic data to a regular nominal grid that is independent of the observed trace locations. Advantages include, but are not limited to, 1) one can try distinct nominal grids for data reconstruction after acquisition, and 2) positioning errors occurring during acquisition can be taken into account. Other geophysical methods for seismic data reconstruction can rely on the discrete Fourier transform to allow for the arbitrary relation between observed trace locations and the nominal grid. By contrast, in the present invention, the transform (Fourier or otherwise) is applied to the nominal grid, and the burden of the mismatch between observed trace locations and the nominal grid is shifted to a restriction/sampling operator.
(21) Some embodiments provide tools and methods that derive a mutual coherence proxy applicable to the seismic data reconstruction problem. At least one advantage is that this proxy is efficient to compute. More particularly, it is the maximum non-d.c. component of the Fourier transform of the sampling grid. A greedy optimization algorithm (e.g. Tropp, 2004) is used to find an optimal sampling grid, with the mutual coherence proxy giving a data-independent measure of optimality. The optimization problem is typically non-convex, and so the greedy algorithm finds a locally optimal solution that depends on how the algorithm is initialized.
EXAMPLE 1
(22) Data Reconstruction Model
(23) For data reconstruction, a system is defined, wherein (Herrmann, 2010),
b = RS*x, x = Su, (1)
where b is the observed seismic data, and u is the reconstructed seismic data. Matrix R is a restriction (i.e. sampling) operator, mapping from the reconstructed seismic data to the observed seismic data. If S is an appropriately chosen dictionary, then x is a sparse representation of u. For most over-complete dictionaries, such as the wavelet, curvelet, and windowed Fourier transforms,
S*S = I (2)
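As a minimal illustration of the tight-frame identity in equation 2, the sketch below builds a unitary discrete Fourier transform matrix in Python (an orthonormal, rather than over-complete, choice of S) and checks that S*S = I; all names are illustrative.

```python
import numpy as np

# Build a unitary DFT "dictionary" S: applying the FFT to the rows of the
# identity yields the DFT matrix, and norm="ortho" makes it unitary.
n = 8
S = np.fft.fft(np.eye(n), norm="ortho")

# For this orthonormal choice of S, equation 2 (S*S = I) holds exactly.
gram = S.conj().T @ S
assert np.allclose(gram, np.eye(n))
```

An over-complete dictionary (e.g. a windowed Fourier transform) would have more columns than rows yet can still satisfy the same identity as a tight frame.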
Optimization Models
(24) Given the over-complete linear system in equation 1, and observed data b, solution(s) to the reconstructed data u are computed. A frequently used approach from compressive sensing is to solve either basis pursuit (BP) optimization model for noise-free observed data,
min_x ||x||_1 s.t. RS*x = b (3)
or the basis pursuit de-noising (BPDN) optimization model for noisy or imperfect observed data,
min_x ||x||_1 s.t. ||RS*x − b||_2² ≤ σ (4)
where σ is representative of the noise level in the observed data. For example, if x̃ is the solution to the optimization model in equation 3, then
ũ = S*x̃ (5)
are the reconstructed data. In solving either the BP or BPDN model, an assumption may be made that the reconstructed data u have a sparse representation under the dictionary S. Solving the optimization models in equations 3 and 4 is often referred to as synthesis-based l_1 recovery (Candès et al., 2008). SPGL1, as proposed by van den Berg and Friedlander (2008), and based on an analysis of the Pareto curve, is one of the most efficient of these methods.
(25) An alternative to the synthesis-based optimization models are analysis-based optimization models for both the noise-free case,
min_u ||Su||_1 s.t. Ru = b (6)
and the noisy case,
min_u ||Su||_1 s.t. ||Ru − b||_2 ≤ σ (7)
Solving the optimization models in equations 6 and 7 is called analysis-based l_1 recovery (Candès et al., 2008). When the dictionary S is orthonormal, synthesis- and analysis-based models are theoretically equivalent. However, according to Candès et al. (2008), when S is over-complete, analysis-based optimization models involve fewer unknowns and are computationally easier to solve than their synthesis-based counterparts. Additionally, analysis-based reconstruction may give more accurate solutions than those obtained from synthesis-based reconstruction (Elad et al., 2007).
Alternating Direction Algorithm with Variable Splitting
(26) The SeisADM algorithm performs analysis-based l_1 recovery based on the optimization model in equation 7. SeisADM is based on the alternating direction method (e.g. Gabay and Mercier, 1976; Glowinski, 1984; Yang and Zhang, 2011). The alternating direction method (ADM) has been widely used to solve inverse problems. It is known as a robust and stable iterative algorithm, but is usually very costly due to its estimation of the gradient at each iteration. Here, a variable-splitting technique in combination with ADM is introduced, which utilizes the structure of the seismic data reconstruction model to find an efficient method for solving the optimization model in equation 7. In particular, the facts that S*S = I and that R*R is a diagonal matrix are utilized. A similar algorithm can be derived for the noise-free case (equation 6) as well.
(27) Starting from equation 7, the splitting variable w = Su is introduced to separate the operator S from the non-differentiable l_1 norm, and v = Ru − b to form an l_2-ball constrained optimization problem (only the one splitting variable w is needed to solve the noise-free model, equation 6). Therefore, equation 7 is equivalent to,
min_{u,w,v} ||w||_1 s.t. w = Su, v + b = Ru, ||v||_2 ≤ σ (8)
Ignoring the l_2-ball constraint (||v||_2 ≤ σ), equation 8 has the corresponding augmented Lagrangian function (Gabay and Mercier, 1976),
(28) L(w, u, v) = ||w||_1 − γ*(Su − w) + (β_1/2)||Su − w||_2² − λ*(Ru − b − v) + (β_2/2)||Ru − b − v||_2² (9)
where γ and λ are Lagrange multipliers, and β_1 and β_2 are positive penalty scalars. SeisADM finds the minimum of the equivalent model in equation 8. It does so by minimizing the augmented Lagrangian function in equation 9 with respect to, separately, w, u and v, and then updating the Lagrange multipliers γ and λ.
(29) For constant u and v, the w-subproblem is,
(30) min_w ||w||_1 − γ*(Su − w) + (β_1/2)||Su − w||_2² (10)
Equation 10 is separable with respect to each component w_i of w and has the closed-form solution (e.g. Li, 2011),
(31) w_i = max{|(Su − γ/β_1)_i| − 1/β_1, 0} · sgn((Su − γ/β_1)_i) (11)
where sgn(x) is 1 for x > 0, 0 for x = 0, and −1 for x < 0.
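The closed-form solution in equation 11 is elementwise soft thresholding. A minimal sketch, assuming a standard shrinkage parametrization (the specific input and threshold depend on the penalty scalars, which this sketch abstracts into generic arguments z and tau):

```python
import numpy as np

def soft_threshold(z, tau):
    """Elementwise shrinkage: solves min_w ||w||_1 + (1/(2*tau))*||w - z||_2^2."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

z = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
w = soft_threshold(z, 0.5)
# Entries with |z_i| <= 0.5 are zeroed; the rest shrink toward zero by 0.5,
# giving [-1.5, 0.0, 0.0, 0.0, 1.0].
```

The operation is separable per component, which is exactly why the w-subproblem is cheap.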
(32) For constant w and v, the u-subproblem is,
(33) min_u −γ*(Su − w) + (β_1/2)||Su − w||_2² − λ*(Ru − b − v) + (β_2/2)||Ru − b − v||_2² (12)
Equation 12 is quadratic, with the corresponding normal equations,
(β_1 S*S + β_2 R*R)u = S*(β_1 w + γ) + R*(β_2(b + v) + λ) (13)
Since S*S = I and R*R is a diagonal matrix, one can explicitly and efficiently solve equation 13.
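Because S*S = I and R*R is a diagonal 0/1 mask of observed locations, the normal-equation matrix of equation 13 is diagonal and can be inverted elementwise. A sketch with assumed penalty parameters beta1 and beta2; the right-hand side is a random stand-in for the actual right-hand side of equation 13.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 6
beta1, beta2 = 1.0, 2.0

# R*R is diagonal: 1 at observed grid points, 0 elsewhere.
mask = np.zeros(n)
mask[rng.choice(n, size=m, replace=False)] = 1.0

rhs = rng.standard_normal(n)  # stand-in for the right-hand side of equation 13

# (beta1*S*S + beta2*R*R) reduces to diag(beta1 + beta2*mask): invert elementwise.
u = rhs / (beta1 + beta2 * mask)

# Verify against an explicit dense solve.
A = beta1 * np.eye(n) + beta2 * np.diag(mask)
assert np.allclose(A @ u, rhs)
```

The elementwise division replaces an O(n³) linear solve with O(n) work per iteration.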
(34) For constant w and u, the v-subproblem is,
(35) min_{||v||_2 ≤ σ} −λ*(Ru − b − v) + (β_2/2)||Ru − b − v||_2² (14)
The value of v found from solving equation 14 is equivalent to that found from solving,
(36) min_{||v||_2 ≤ σ} ||v − (Ru − b − λ/β_2)||_2² (15)
(37) Further, if
(38) v̄ = Ru − b − λ/β_2
then it can be shown that the explicit solution of equation 15 is,
(39) v = v̄ · min{σ/||v̄||_2, 1} (16)
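The v-subproblem solution is a projection onto the l_2-ball of radius σ; a minimal sketch:

```python
import numpy as np

def project_l2_ball(vbar, sigma):
    """Project vbar onto {v : ||v||_2 <= sigma} (closed-form v-subproblem)."""
    norm = np.linalg.norm(vbar)
    return vbar if norm <= sigma else vbar * (sigma / norm)

v = project_l2_ball(np.array([3.0, 4.0]), 1.0)
# ||[3, 4]||_2 = 5 > 1, so the vector is rescaled onto the unit ball: [0.6, 0.8].
```

Vectors already inside the ball are returned unchanged, so the projection is the identity for small residuals.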
(40) The SeisADM algorithm is iterative, where for each iteration γ and λ are held constant, and the minima (w̃, ũ, ṽ) of the three sub-problems described above are found. At the end of each iteration, the Lagrange multipliers are updated (Glowinski, 1984),
γ̃ = γ − β_1(Su − w), λ̃ = λ − β_2(Ru − b − v) (17)
Provided that standard conditions on the penalty parameters and multiplier step lengths hold (e.g. Glowinski, 1984), the theoretical convergence of ADM can be guaranteed. Putting all the components together, our algorithm for solving the analysis-based denoising model (equation 7) is summarized in the accompanying drawing.
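The pieces above can be assembled into a compact ADM loop. The sketch below is illustrative, not the patented SeisADM implementation: S is taken to be a unitary DFT (so S*S = I), R is a 0/1 sampling mask, and beta1/beta2 are arbitrary penalty choices under a standard augmented-Lagrangian parametrization.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 128, 48
S = lambda u: np.fft.fft(u, norm="ortho")    # unitary DFT as the dictionary
St = lambda x: np.fft.ifft(x, norm="ortho")  # its adjoint/inverse

idx = np.sort(rng.choice(n, size=m, replace=False))
mask = np.zeros(n)
mask[idx] = 1.0

# Fourier-sparse ground truth and (noise-free) observed traces b = Ru.
x_true = np.zeros(n, dtype=complex)
x_true[[3, 17, 40]] = [5.0, -3.0, 2.0]
u_true = St(x_true).real
b = u_true[idx]
sigma = 1e-6

def soft(z, tau):
    # Elementwise shrinkage, valid for complex input.
    mag = np.abs(z)
    return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-30)) * z, 0.0)

beta1, beta2 = 1.0, 10.0
u = np.zeros(n)
v = np.zeros(m)
gam = np.zeros(n, dtype=complex)  # multiplier for w = Su
lam = np.zeros(m)                 # multiplier for v = Ru - b

for _ in range(300):
    w = soft(S(u) - gam / beta1, 1.0 / beta1)       # w-subproblem: shrinkage
    rhs = St(beta1 * w + gam).real                  # u-subproblem: diagonal
    rhs[idx] += beta2 * (b + v) + lam               #   normal equations
    u = rhs / (beta1 + beta2 * mask)
    vbar = u[idx] - b - lam / beta2                 # v-subproblem: project
    nv = np.linalg.norm(vbar)                       #   onto the l2-ball
    v = vbar if nv <= sigma else vbar * (sigma / nv)
    gam = gam - beta1 * (S(u) - w)                  # multiplier updates (eq. 17)
    lam = lam - beta2 * (u[idx] - b - v)
```

On this toy problem the iterate honors the observed traces, with the data residual driven down toward σ.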
Numerical Results
(42) Two tests are performed and reported in this section to demonstrate the analysis-based l_1 recovery and the efficiency of SeisADM. Specifically, SeisADM is compared with SPGL1. In an effort to make the comparisons fair, the parameters of both SeisADM and SPGL1 are tuned as close to optimally as practical.
(43) Synthetic Data Example
(44) For a synthetic example, data on a regular receiver grid with 7.62 m between adjacent receivers are used. 111 data reconstruction simulations were run, where for each simulation the size of the set of observed traces changes, ranging from 8% to 50% of the total number of reconstructed traces.
(45) The results are shown in the accompanying figures.
(48) Real Data Example
(49) For a real data example, data that were collected with a two-dimensional ocean bottom node acquisition geometry were used. The survey was specifically designed in such a way that the shots are recorded on an irregular acquisition grid. The observed data are reconstructed to a regular shot grid with 3105 shot points and 6.25 m between adjacent shots. The observed data for reconstruction comprise 564 of these 3105 shot points, giving a set of observed shots that is approximately 18% of the reconstructed shot points. The results are for a single ocean bottom node (common receiver gather).
(51) Conclusions
(52) In this Example, the seismic data reconstruction problem using compressive sensing was considered. In particular, the significance of the choice of the optimization model, being either synthesis- or analysis-based, was investigated. The analysis-based l_1 recovery gave more accurate results than synthesis-based l_1 recovery. A new optimization method for analysis-based l_1 recovery, SeisADM, was introduced. SeisADM takes advantage of the properties of the seismic data reconstruction problem to optimize its efficiency. The SeisADM method (used for analysis-based l_1 recovery) required less computation time and behaved more robustly, as compared to the SPGL1 method (used for synthesis-based l_1 recovery). While the application of SeisADM was to the reconstruction of one spatial dimension, this method may be extended to multi-dimensional data reconstruction problems.
EXAMPLE 2
(53) First, the grids used in this Example are defined: 1) the observed grid is an irregular grid on which seismic data are acquired (i.e. observed trace locations), 2) the nominal grid is a regular grid on which seismic data are reconstructed, and 3) the initial grid is a regular grid from which the observed grid is selected using, for example, a jittered sampling scheme.
(54) Traditionally, it is assumed that the initial grid is identical to the nominal grid, and the observed grid lies on a random or jittered subset of the nominal grid. Under these settings, the model from Herrmann and Hennenfent (2008) may be utilized,
b = Ru, x = Su (18)
where b = [b_1, . . . , b_m]^T are observed or acquired seismic data, and u = [u_1, . . . , u_n]^T (m < n) are data on the nominal grid (i.e., the true data). Each of b_i and u_i represents a seismic trace. The operator S is an appropriately chosen dictionary which makes Su sparse or approximately sparse, and R is a restriction/sampling operator which maps data from the nominal grid to the observed grid. Specifically, R is formed by extracting the corresponding rows from an identity matrix. One can recover u by solving an analysis-based basis pursuit denoising model (Candès et al., 2008),
min_u ||Su||_1 s.t. ||Ru − b||_2 ≤ σ (19)
where σ corresponds to the noise level of the observed data. Many algorithms have been developed to solve this model or its variants, such as SPGL1 (van den Berg and Friedlander, 2008), NESTA (Becker et al., 2009), and YALL1 (Yang and Zhang, 2011).
Interpolated Compressive Sensing
(55) If the observed grid is independent of the nominal grid, then the nominal grid can be determined after data acquisition. To generalize the idea of compressive sensing seismic data reconstruction, the fact that seismic data can be well approximated, locally, using a kth-order polynomial on a regular grid is utilized. For example, k=1 if the seismic data are linear in a local sense. For the sake of clarity, reconstruction of seismic data is shown along one spatial dimension, but the method can be easily extended to higher dimensions.
(56) Denote the true locations on the observed grid as p_1, . . . , p_m and the true locations on the nominal grid as l_1, . . . , l_n. For each j = 1, . . . , m and k << n, a window of k + 1 consecutive nominal grid points nearest to p_j is selected,
(57) s_j = arg min_s Σ_{i=0..k} |p_j − l_{s+i}| (20)
This is easy to solve due to the fact that l_1, . . . , l_n are equally spaced. When p_j is not close to the boundary of the nominal grid,
l_{s_j} ≤ p_j ≤ l_{s_j+k} (21)
Based on the assumption made at the beginning of this section, given u_{s_j}, . . . , u_{s_j+k}, each observed trace b_j can be approximated by Lagrange interpolation of the nominal-grid traces,
(58) b_j ≈ Σ_{i=0..k} u_{s_j+i} Π_{t≠i} (p_j − l_{s_j+t}) / (l_{s_j+i} − l_{s_j+t}) (22)
(59) Supposing that u(x) denotes the continuous seismic data in some local window, and u(x) is at least k + 1 times continuously differentiable, then according to the Taylor expansion, the error estimation of the Lagrange interpolation is
(60) u(p_j) − Σ_{i=0..k} u(l_{s_j+i}) L_i(p_j) = u^(k+1)(ξ)/(k+1)! · Π_{i=0..k} (p_j − l_{s_j+i})
for some ξ with l_{s_j} ≤ ξ ≤ l_{s_j+k}.
(61) Inspired by equation 22, the interpolated restriction operator R̃ collects, in row j, the Lagrange interpolation weights that map the nominal-grid traces to the observed location p_j,
(62) and the full operator applied to a gather is R̃ ⊗ I, where the size of the identity matrix I is decided by the number of time samples. Then equation 22 can be rewritten as,
b ≈ R̃u (27)
(63) This demonstrates an embodiment of the interpolated compressive sensing model for seismic data reconstruction. Analogous to equation 19, u can be recovered by solving the following optimization problem,
min_u ||Su||_1 s.t. ||R̃u − b||_2 ≤ σ (28)
(64) One should note that the method described above is fundamentally different from the method which first interpolates the observed data back to the nearest points on the nominal grid and then reconstructs using traditional compressive sensing. The proposed method utilizes the unknown data on the nominal grid as an interpolation basis to match the observed data and forms an inverse problem to recover the unknown data. Theoretically, the interpolation error is O(h^(k+1)) where h is the average spacing of the interpolation basis. Since the nominal grid is much finer than the observed grid (i.e., smaller average spacing), interpolated compressive sensing is expected to be more accurate than first interpolating followed by reconstructing. Moreover, for interpolated compressive sensing, the error could be further attenuated by solving a BP denoising problem such as in equation 28 (Candès et al., 2008).
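A sketch of the k = 1 (linear) case of the interpolated restriction operator: each row holds the Lagrange weights that map nominal-grid traces to one observed location. Function and variable names are hypothetical, and the boundary handling is simplified.

```python
import numpy as np

def linear_restriction(p_obs, l_nominal):
    """Build a k=1 interpolated restriction operator: row j carries the linear
    Lagrange weights that interpolate nominal-grid data to location p_obs[j]."""
    n = len(l_nominal)
    h = l_nominal[1] - l_nominal[0]  # nominal grid is regular
    R = np.zeros((len(p_obs), n))
    for j, p in enumerate(p_obs):
        s = min(int(np.floor((p - l_nominal[0]) / h)), n - 2)
        t = (p - l_nominal[s]) / h           # fractional offset in [0, 1]
        R[j, s], R[j, s + 1] = 1.0 - t, t    # linear Lagrange weights
    return R

l = np.arange(0.0, 10.0, 1.0)       # nominal grid, spacing 1 m
p = np.array([2.25, 5.0, 7.75])     # observed locations off the nominal grid
R = linear_restriction(p, l)
u = 3.0 * l + 1.0                   # data linear in position, so k = 1 is exact
assert np.allclose(R @ u, 3.0 * p + 1.0)
```

For k = 3, each row would instead carry four cubic Lagrange weights over the four nearest nominal points.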
(65) The computational cost is usually dominated by evaluating R̃^T R̃u and S^T Su at each iteration, which is approximately O(kN) and O(N log N) respectively, assuming S has a fast transform (N is the number of samples). Therefore, for seismic data reconstruction, the computational cost of solving the interpolated compressive sensing problem in equation 28 is comparable to solving the traditional compressive sensing problem in equation 19 when k << N. As the order k increases, the accuracy of the reconstruction may improve at the cost of an increasing computational burden.
(66) If k = 1 in equation 22, then our method is called linear-interpolated compressive sensing. Likewise, if k = 3, our method is called cubic-interpolated compressive sensing. In our tests, linear- and cubic-interpolated compressive sensing give comparable and satisfactory reconstruction results. The case k > 3 may only apply to a few extreme cases. The following data examples focus on linear- and cubic-interpolated compressive sensing data reconstruction.
(67) Synthetic Data Example
(68) In order to simulate the scenario where the nominal grid does not necessarily include the observed grid, and also to enable quantitative analysis, start with a finer initial grid for jittered sampling, and select a uniform subset from the initial grid as the nominal grid for reconstruction and for computing signal-to-noise ratios. The l_1 solver used to solve the problem in equation 28 is based on the alternating direction method proposed by Yang and Zhang (2011). Specifically, the results from two special cases (linear and cubic) of the proposed interpolated compressive sensing are compared with the results from traditional compressive sensing. In an effort to make the numerical comparisons fair, the same solver is used for both traditional and interpolated compressive sensing.
(69) For the synthetic example, data generated from the Sigsbee 2a model (Bergsma, 2001) and a two-dimensional acoustic finite-difference simulation are considered. For each common receiver gather, the data are reconstructed to a nominal grid with 306 shot points, with a spacing of 22.89 m between adjacent shot points. The observed shot points are selected from a regular shot point grid with 7.62 m spacing using a jittered algorithm (Hennenfent and Herrmann, 2008). Experiments were performed where the number of observed shot points varies from 15% to 50% of the 306 grid points on the nominal grid. There was a mismatch between the nominal grid for reconstruction and the initial grid used to generate the observations; therefore, an observed shot point does not necessarily correspond to any given point on the reconstruction grid, making the interpolated compressive sensing method applicable.
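Jittered selection of the kind attributed to Hennenfent and Herrmann (2008) can be sketched as picking one point uniformly at random inside each of m equal cells, which controls the maximum gap while keeping the randomness needed for incoherent sampling; the grid sizes below are illustrative.

```python
import numpy as np

def jittered_grid(n, m, rng):
    """Pick m of n grid points: one uniformly random point per equal cell,
    assuming (for simplicity) that n is an integer multiple of m."""
    dr = n // m
    return np.array([j * dr + rng.integers(dr) for j in range(m)])

rng = np.random.default_rng(42)
idx = jittered_grid(306, 102, rng)   # one point in each of 102 cells of size 3

# Cells are disjoint, so the picks are unique and strictly increasing,
# and the largest possible gap is bounded by 2*dr - 1 grid points.
assert len(idx) == 102 and len(set(idx.tolist())) == 102
assert np.all(np.diff(idx) >= 1)
```

Compared with fully uniform random selection, jittering rules out the large acquisition gaps that degrade reconstruction.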
(70) The signal-to-noise ratios for the reconstruction results are shown in the accompanying figures.
(71) A qualitative inspection of the reconstruction results is provided in the accompanying figures.
(72) Real Data Example
(73) Marine data were used which were collected by shooting in an irregular acquisition pattern and recorded with a two-dimensional ocean bottom node acquisition geometry. Two reconstruction experiments using this dataset were performed. In the first, the observed data are reconstructed to a nominal shot grid with 2580 shot points and 7.5 m spacing between adjacent shots. In the second, the observed data are reconstructed to a nominal shot grid with 2037 shot points and 9.5 m spacing between adjacent shots. The observed data for reconstruction comprise 400 shot points that are selected from an initial grid with 6.25 m spacing between adjacent shot points, and 3096 grid points. Similar to the synthetic example, there is a mismatch between the nominal grids for reconstruction and the initial grid used to collect the data. Therefore, as before, an observed shot point does not necessarily correspond to any given point on the nominal grid.
(75) Even though the seismic data are reconstructed to different nominal grids with different spacing, the results are shown in the accompanying figures.
(76) Conclusions
(77) A novel data reconstruction method, interpolated compressive sensing, has been developed. The method allows for a mismatch between the nominal grid that the data are reconstructed to and the observed grid upon which the data are acquired. This method allows any dictionary used in the compressive sensing data reconstruction model to be applied to the regular nominal grid. The relationship between the observed and nominal grids is given by the interpolated restriction operator. The interpolated restriction operator, in turn, accounts for both the reduced size of the observed grid and for when a point on the observed grid does not correspond to a nominal grid point. The latter is done by incorporating Lagrange interpolation into the restriction operator. The interpolated compressive sensing method was applied to both synthetic and real data examples, incorporating both 1st and 3rd order Lagrange interpolation into the interpolated restriction operator. The synthetic results compare linear- and cubic-interpolated compressive sensing to traditional compressive sensing, showing a significant increase in the signal-to-noise ratio of the reconstructed data. Finally, the method was applied to a real data example, and an uplift in quality was observed as compared to traditional compressive sensing.
EXAMPLE 3
(78) This example finds the optimal sampling grid in a seismic data reconstruction problem. The seismic data reconstruction model can be described as (e.g. Herrmann, 2010),
b = Dx, D = RS*, x = Su (29)
where b are seismic data on the observed grid, and u are data on the reconstruction grid (i.e. the true data). The matrix R is a restriction (i.e. sampling) operator, and maps data from the reconstruction grid to the observed grid. If S is a suitably chosen, possibly over-complete, dictionary, then x will have small cardinality (i.e. a small l_0-norm).
Compressive Sensing Optimization Model and Mutual Coherence
(79) Given the under-determined system in equation 29 and the data b, the reconstructed seismic data u are found by solving an analysis-based basis pursuit denoising optimization model (e.g. Candès et al., 2008),
min_u ||Su||_1 s.t. ||Ru − b||_2 ≤ σ (30)
(80) There are many algorithms that can be employed to find the solution of the optimization model in equation 30. In this Example, a variant (Li et al., 2012) of the alternating direction method (e.g. Yang and Zhang, 2011) is used. At least one goal is to design R (i.e. the sampling grid) such that for a given b and S, u is more likely to be recovered successfully.
(81) Compressive sensing provides theorems that give conditions for a successful data reconstruction. For the moment, we consider the following scenario: 1) S ∈ R^{n×n} is an orthonormal matrix, 2) R ∈ R^{m×n} with n > m, 3) D = RS* is such that D is a selection of m rows from S*, and 4) D = RS* is such that the columns of D, d_i, have unit energy (||d_i||_2 = 1, i = 1 . . . n). Under this scenario, solving the optimization program in equation 30 recovers u successfully with overwhelming probability when (Candès et al., 2006),
(82) m ≥ C μ²(R, S) S log n (31)
(83) In equation 31, C is a constant, and S is the cardinality of Su. Importantly for our analysis, μ is the mutual coherence and is a function of S and R. In particular (Donoho and Elad, 2002),
μ(R, S) = max_{i≠j} |d*_i d_j|, i, j = 1 . . . n (32)
(84) This is equivalent to the absolute maximum off-diagonal element of the Gram matrix, G = D*D. Within the context of the seismic data reconstruction problem, n is proportional to the number of seismic traces on the reconstruction grid, and m is proportional to the number of traces on the observed grid. Therefore, if S and C are constant, then for a given number of observed traces, decreasing μ increases the chance of a successful data reconstruction.
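Equation 32 can be sketched directly: normalize the columns of D to unit energy and take the absolute maximum off-diagonal Gram entry. The example restricts rows of a unitary inverse-DFT matrix, mirroring D = RS*; the sizes and the random row selection are illustrative.

```python
import numpy as np

def mutual_coherence(D):
    """Equation 32: absolute maximum off-diagonal entry of the Gram matrix
    G = D*D, after normalizing each column to unit l2 energy."""
    D = D / np.linalg.norm(D, axis=0)
    G = np.abs(D.conj().T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

# D = RS*: keep m rows of a unitary adjoint-DFT matrix S*.
rng = np.random.default_rng(3)
n, m = 64, 16
Sstar = np.fft.ifft(np.eye(n), norm="ortho")
rows = np.sort(rng.choice(n, size=m, replace=False))
D = Sstar[rows, :]
mu = mutual_coherence(D)  # lies in (0, 1] by Cauchy-Schwarz
```

Forming the full n × n Gram matrix is exactly what becomes prohibitive at survey scale, motivating the FFT-based proxy derived next.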
(85) The relation between mutual coherence (equation 32) and the condition for exact recovery (equation 31) makes its analysis appealing. Unfortunately, for problems in seismic data reconstruction it would be prohibitively expensive to compute. However, if S is the discrete Fourier transform matrix, then one can find an efficient method to compute mutual coherence, and use this as a mutual coherence proxy for when S is some over-complete (but perhaps still Fourier-derived) dictionary, such as the windowed Fourier transform.
(86) To derive the mutual coherence proxy, one may begin by following Hennenfent and Herrmann (2008), and note that for the seismic data reconstruction model, R*R is a diagonal matrix with its diagonal being the sampling grid,
(87) R*R = diag(r), r = [r_1, . . . , r_n]^T, r_i ∈ {0, 1} (33)
and the Gram matrix is,
(88) G = D*D = S R*R S* (35)
If S is a discrete Fourier transform matrix, then [S]_{i,j} = ω^{ij} where ω = exp(2π√(−1)/n), and from equation 35,
(89) [G]_{i,j} = Σ_{k=1..n} r_k ω^{(j−i)k} (36)
Equation 36 shows that the off-diagonal elements of the Gram matrix are equal to the non-d.c. components of the Fourier transform of the sampling grid r. Therefore,
(90) μ(r) = max_{l=1,…,n−1} |r̂_l| (37)
where r̂_l are the Fourier transform coefficients of r. Equation 37 can be computed efficiently using the fast Fourier transform, and is our mutual coherence proxy. It is exactly the mutual coherence when S is the Fourier transform, and a proxy for mutual coherence when S is some over-complete dictionary.
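The proxy of equation 37 reduces to a single FFT of the 0/1 sampling grid. The sketch below (with an illustrative normalization by the d.c. component so the value lies in [0, 1]) shows why regular decimation is the worst case while jittering lowers the proxy; the jitter offsets are hand-picked for reproducibility.

```python
import numpy as np

def coherence_proxy(r):
    """Equation 37: maximum non-d.c. Fourier amplitude of the sampling grid r,
    here normalized by the d.c. component (an assumed convention)."""
    rhat = np.abs(np.fft.fft(r))
    return rhat[1:].max() / rhat[0]

n = 64
regular = np.zeros(n)
regular[::8] = 1.0  # regular decimation: all energy in aliasing spikes

jittered = np.zeros(n)
offsets = [0, 3, 1, 7, 2, 5, 4, 6]  # one jittered pick per cell of size 8
jittered[[8 * j + o for j, o in enumerate(offsets)]] = 1.0

# Regular sampling attains the worst proxy value (1); jittering spreads the
# spectral energy and strictly lowers it.
assert np.isclose(coherence_proxy(regular), 1.0)
assert coherence_proxy(jittered) < 1.0
```

This is the quantity the greedy grid-design algorithm of the next section minimizes.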
Greedy Optimization Algorithm for Acquisition Design
(91) Given the mutual coherence proxy in equation 37, a sampling grid r is chosen according to the optimization program,
r_* = arg min_r μ(r) (38)
where μ is given by equation 37. The optimization program in equation 38 is non-convex. To find its solution, a randomized greedy algorithm is proposed. One can think of it as a deterministic alternative to the statistical result found in Hennenfent and Herrmann (2008). The algorithm will find a local minimum and, therefore, does not guarantee convergence to a global minimum. However, in practice, it has been observed that the local minima found using our randomized greedy algorithm are sufficient.
(92) The randomized greedy algorithm for solving equation 38 is shown in Algorithm 1. The algorithm is initialized using a regular sampling grid, where the spacing of the regular grid is Δr = n/m, so that for any integer j ∈ {0, 1, . . . , m−1}, the elements of r (equation 33) are,
r_i = 1 if i = jΔr + 1, and r_i = 0 otherwise (39)
and where for the sake of simplicity in our description, one can assume that n is an integer multiple of m. The reconstruction grid is divided into m disjoint subsets of Δr grid points each, where the jth subset is,
s_j = {jΔr − ⌊Δr/2⌋ + k | k = 1 . . . Δr} (40)
where ⌊x⌋ denotes the integer component of x. In other words, except at the boundaries of the grid, the jth subset is centered on the jth grid point of the regular observed grid. The ordered sets s_j are stored in I, and a corresponding random ordering of these sets is stored using J = PI, where P is a random permutation matrix. The algorithm sequentially steps through the sets in J, and uses a jittering technique so that for each of the Δr elements in s_j, its corresponding grid point is set to 1 while all others in s_j are set to 0, producing a new sampling grid r_k. Subsequently, the mutual coherence μ_k = μ(r_k) is computed using equation 37, and compared to the mutual coherence of r. If a perturbation,
k_* = arg min_k μ(r_k) (41)
of r is found that reduces the mutual coherence, then r is set to r_{k_*}.
Algorithm 1 Randomized Greedy Algorithm
(93) TABLE-US-00001
  r ← 0, Δr ← n/m
  r_i ← 1, for i = jΔr + 1, j = 0, 1, . . . , m−1
  s_j ← {jΔr − ⌊Δr/2⌋ + k | k = 1 . . . Δr}, j = 0, 1, . . . , m−1
  I ← [s_0 s_1 . . . s_{m−1}], J ← PI
  for j = 0 → m−1 do
    s ← [J]_j, μ_0 ← μ(r)
    for k ∈ s do
      r_k ← r with the kth element of s set to 1, all other elements of s set to 0
      μ_k ← μ(r_k)
    end for
    k_* ← arg min_k μ_k
    if μ_{k_*} < μ_0 then r ← r_{k_*}
  end for
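Algorithm 1 can be sketched in Python as follows. The cell partition here is a simpler left-aligned version of the centered subsets s_j, and the sweep count and sizes are arbitrary illustrative choices.

```python
import numpy as np

def coherence_proxy(r):
    # Equation 37, normalized by the d.c. component (assumed convention).
    rhat = np.abs(np.fft.fft(r))
    return rhat[1:].max() / rhat[0]

def greedy_grid(n, m, rng, sweeps=2):
    """Sketch of Algorithm 1: start from a regular grid, visit the m cells in
    random order, and keep any within-cell jitter that lowers the proxy."""
    dr = n // m  # assume n is an integer multiple of m
    r = np.zeros(n)
    r[::dr] = 1.0  # regular initialization (one point per cell)
    cells = [np.arange(j * dr, (j + 1) * dr) for j in range(m)]
    for _ in range(sweeps):
        for j in rng.permutation(m):            # J = PI: randomized cell order
            best_mu, best_r = coherence_proxy(r), r
            for k in cells[j]:                  # jitter within cell j
                r_k = r.copy()
                r_k[cells[j]] = 0.0
                r_k[k] = 1.0
                mu_k = coherence_proxy(r_k)
                if mu_k < best_mu:              # accept only improving moves
                    best_mu, best_r = mu_k, r_k
            r = best_r
    return r

rng = np.random.default_rng(0)
r = greedy_grid(64, 8, rng)
```

Because only improving moves are accepted, the point count is preserved and the proxy is non-increasing, ending strictly below the regular grid's worst-case value of 1.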
Synthetic Data Example
(94) For a synthetic data example, data generated from the Sigsbee 2a model (Bergsma, 2001) and a two-dimensional acoustic finite-difference simulation were used. The data reconstruction of a single common receiver gather was considered, where 184 observed traces are reconstructed to a regular grid with 920 sources and 7.62 m between adjacent sources. Hence, the observed data have 20% as many traces as the reconstructed data. In the data reconstruction model (equation 29), S was allowed to be a two-dimensional windowed Fourier transform.
(95) The results are shown in the accompanying figures.
(96) The Monte Carlo realizations of the restriction operator R give, consistently, small values for their mutual coherence proxy, as shown in the accompanying figures.
(97) Real Data Example
(98) For the real data example, data that were collected with a two-dimensional ocean bottom node acquisition geometry were used. The survey was specifically designed in such a way that the shots are recorded on an irregular acquisition grid. The observed data are reconstructed to a regular shot grid with 3105 shot points and 6.25 m between adjacent shots. The observed data for reconstruction comprise 400 of these 3105 shot points, giving a set of observed shots that is approximately 13% of the size of the set of reconstructed shot points. The results are shown for a single ocean bottom node (common receiver gather). As was the case for the synthetic data example, S was allowed to be a two-dimensional windowed Fourier transform.
(99) Amplitude spectra of the sampling grids (|r̂_l| in equation 37) are shown in the accompanying figures.
(100) Finally, reconstruction results for low and high mutual coherence sampling grids are compared in the accompanying figures.
CONCLUSIONS
(101) The seismic data acquisition design problem was considered from the point of view of compressive sensing seismic data reconstruction. In particular, mutual coherence and a greedy optimization algorithm were utilized to design an optimal acquisition grid. In the synthetic example, the signal-to-noise ratio and the mutual coherence are anti-correlated. Additionally, the synthetic example showed that the randomized greedy algorithm gave a mutual coherence that is lower than that found from a Monte Carlo simulation. Further, the signal-to-noise ratio of the reconstruction result produced from the optimal grid found through the greedy algorithm is similar to that found from the Monte Carlo simulation, which can be predicted from the work of Hennenfent and Herrmann (2008). Finally, the choice of mutual coherence proxy was validated using a real data example, in which a qualitative analysis of the reconstruction results compared a low mutual coherence sampling grid and a high mutual coherence sampling grid of the same survey area.
(102) Although the systems and processes described herein have been described in detail, it should be understood that various changes, substitutions, and alterations can be made without departing from the spirit and scope of the invention as defined by the following claims. Those skilled in the art may be able to study the preferred embodiments and identify other ways to practice the invention that are not exactly as described herein. It is the intent of the inventors that variations and equivalents of the invention are within the scope of the claims while the description, abstract and drawings are not to be used to limit the scope of the invention. The invention is specifically intended to be as broad as the claims below and their equivalents.