Method for recovering a sparse communication signal from a receive signal
09882750 · 2018-01-30
Assignee
Inventors
- Lev Borisovich Rapoport (Shenzhen, CN)
- Yanxing Zeng (Hangzhou, CN)
- Jianqiang Shen (Hangzhou, CN)
- Vladimir Iosifovich Ivanov (Moscow, RU)
CPC classification
International classification
H04L25/03
ELECTRICITY
H03K5/159
ELECTRICITY
H03H7/40
ELECTRICITY
Abstract
The patent application relates to a method for recovering a sparse communication signal from a receive signal, the receive signal being a channel output version of the sparse communication signal, the channel comprising channel coefficients being arranged to form a channel matrix, the method comprising determining a support set indicating a set of first indices of non-zero communication signal coefficients from the channel matrix and the receive signal, determining an estimate of the sparse communication signal upon the basis of the support set, the channel matrix and the receive signal, determining second indices of communication signal coefficients which are not indicated by the support set, and determining the sparse communication signal upon the basis of the support set, the estimate of the sparse communication signal, the second indices and the channel matrix.
Claims
1. A method comprising: determining, by a communication device in a communication network, a support set indicating a set of first indices of non-zero communication signal coefficients from a channel matrix and a receive signal, the receive signal being a channel output version of a sparse communication signal, the channel matrix comprising an arrangement of channel coefficients of the channel; determining, by the communication device, an estimate of the sparse communication signal upon the basis of the support set, the channel matrix and the receive signal; determining, by the communication device, second indices of communication signal coefficients which are not indicated by the support set, wherein determining the second indices of communication signal coefficients comprises minimizing a residual of a dual linear programming problem; and recovering, by the communication device, the sparse communication signal upon the basis of the support set, the estimate of the sparse communication signal, the second indices and the channel matrix.
2. The method according to claim 1, further comprising: updating, by the communication device, the support set upon the basis of the second indices of communication signal coefficients.
3. The method according to claim 2, further comprising: determining, by the communication device, a cardinality of the support set or the updated support set, the cardinality indicating a number of elements of the support set or the updated support set.
4. The method according to claim 1, wherein the sparse communication signal comprises communication signal coefficients and the receive signal comprises receive signal coefficients, and wherein the number of communication signal coefficients is greater than the number of receive signal coefficients.
5. The method according to claim 1, wherein determining the support set or determining the estimate of the sparse communication signal is performed by an orthogonal matching pursuit method.
6. The method according to claim 1, wherein the dual linear programming problem is defined by the following equations:
-e ≤ H^T z ≤ e,
-y^T z → min, wherein H denotes the channel matrix, y denotes the receive signal, e denotes the all-ones vector, and z denotes a dual variable.
7. The method according to claim 1, wherein the residual is defined by:
ρ^+_{S*,j} = 1 + H_j^T z_{S*} or ρ^-_{S*,j} = 1 - H_j^T z_{S*}
with z_{S*} = H_{S*}(H_{S*}^T H_{S*})^{-1} e_{S*},
e_{S*} = (e_{S*,1}, e_{S*,2}, . . . , e_{S*,l*})^T,
e_{S*,i} = 1 if i ∈ S*_+,
e_{S*,i} = -1 if i ∈ S*_-, and
S* = S*_+ ∪ S*_-, wherein S* denotes the support set being divided into two subsets according to the signs of the entries of the vector x_{S*} = (H_{S*}^T H_{S*})^{-1} H_{S*}^T y, S*_+ denotes the subset of the support set S* corresponding to the positive entries of x_{S*}, S*_- denotes the subset of the support set S* corresponding to the negative entries of x_{S*}, e_{S*} denotes an auxiliary vector, e_{S*,i} denotes the coefficient of the auxiliary vector e_{S*} with index i, H_{S*} denotes the channel matrix comprising the columns indicated by the support set S*, z_{S*} denotes an estimate solution of the dual linear programming problem, H_j denotes the column of the channel matrix indicated by the index j, and ρ^+_{S*,j} and ρ^-_{S*,j} denote the residuals.
8. The method according to claim 1, wherein the second indices of communication signal coefficients are determined according to the following equations:
j^{(1)} = arg min_{j ∉ J_{S*}} min(ρ^+_{S*,j}, ρ^-_{S*,j}), min(ρ^+_{S*,j^{(1)}}, ρ^-_{S*,j^{(1)}}) < 0,
wherein j^{(1)} denotes a second index, J_{S*} denotes the set of column indices of the channel matrix corresponding to the support set S*, and ρ^+_{S*,j} and ρ^-_{S*,j} denote residuals of the dual linear programming problem.
9. The method according to claim 1, wherein recovering the sparse communication signal is performed according to the following equation:
x_{S^{(1)}} = (H_{S^{(1)}}^T H_{S^{(1)}})^{-1} H_{S^{(1)}}^T y,
wherein S^{(1)} denotes an updated support set, H_{S^{(1)}} denotes the channel matrix comprising the columns indicated by S^{(1)}, and y denotes the receive signal.
10. The method according to claim 2, wherein updating the support set is performed according to the following equation:
S^{(1)} = S* ∪ {j^{(1)}} \ {j_{i^{(1)}}}, wherein S^{(1)} denotes the updated support set, j^{(1)} denotes a second index inserted into the support set, and j_{i^{(1)}} denotes the index removed from the support set.
11. The method according to claim 3, wherein determining the support set, determining the estimate of the sparse communication signal, determining the second indices of communication signal coefficients, recovering the sparse communication signal, updating the support set, and determining the cardinality are successively repeated until a stopping criterion is met.
12. The method according to claim 11, wherein the sparse communication signal corresponding to the support set with least cardinality is provided as a recovered sparse communication signal.
13. The method according to claim 11, wherein the stopping criterion is a positive value or a zero value of all residuals of a dual linear programming problem, and wherein a residual is defined by:
ρ^+_{S*,j} = 1 + H_j^T z_{S*} or ρ^-_{S*,j} = 1 - H_j^T z_{S*}
with z_{S*} = H_{S*}(H_{S*}^T H_{S*})^{-1} e_{S*},
e_{S*} = (e_{S*,1}, e_{S*,2}, . . . , e_{S*,l*})^T,
e_{S*,i} = 1 if i ∈ S*_+,
e_{S*,i} = -1 if i ∈ S*_-, and
S* = S*_+ ∪ S*_-, wherein S* denotes the support set being divided into two subsets according to the signs of the entries of the vector x_{S*} = (H_{S*}^T H_{S*})^{-1} H_{S*}^T y, S*_+ denotes the subset of the support set S* corresponding to the positive entries of x_{S*}, S*_- denotes the subset of the support set S* corresponding to the negative entries of x_{S*}, e_{S*} denotes an auxiliary vector, e_{S*,i} denotes the coefficient of the auxiliary vector e_{S*} with index i, H_{S*} denotes the channel matrix comprising the columns indicated by the support set S*, z_{S*} denotes an estimate solution of the dual linear programming problem, H_j denotes the column of the channel matrix indicated by the index j, and ρ^+_{S*,j} and ρ^-_{S*,j} denote the residuals.
14. The method according to claim 11, wherein the stopping criterion is an expiry of a predetermined time interval.
15. The method according to claim 2, further comprising: determining a cardinality of the support set or the updated support set, the cardinality indicating a number of elements of the support set or the updated support set.
16. The method according to claim 2, wherein the sparse communication signal comprises communication signal coefficients and the receive signal comprises receive signal coefficients, and wherein the number of communication signal coefficients is greater than the number of receive signal coefficients.
17. The method according to claim 2, wherein the dual linear programming problem is defined by the following equations:
-e ≤ H^T z ≤ e,
-y^T z → min, wherein H denotes the channel matrix, y denotes the receive signal, e denotes the all-ones vector, and z denotes a dual variable.
18. The method according to claim 1, wherein the residual is defined by:
ρ^+_{S*,j} = 1 + H_j^T z_{S*} or ρ^-_{S*,j} = 1 - H_j^T z_{S*}
with z_{S*} = H_{S*}(H_{S*}^T H_{S*})^{-1} e_{S*},
e_{S*} = (e_{S*,1}, e_{S*,2}, . . . , e_{S*,l*})^T,
e_{S*,i} = 1 if i ∈ S*_+,
e_{S*,i} = -1 if i ∈ S*_-, and
S* = S*_+ ∪ S*_-, wherein S* denotes the support set being divided into two subsets according to the signs of the entries of the vector x_{S*} = (H_{S*}^T H_{S*})^{-1} H_{S*}^T y, S*_+ denotes the subset of the support set S* corresponding to the positive entries of x_{S*}, S*_- denotes the subset of the support set S* corresponding to the negative entries of x_{S*}, e_{S*} denotes an auxiliary vector, e_{S*,i} denotes the coefficient of the auxiliary vector e_{S*} with index i, H_{S*} denotes the channel matrix comprising the columns indicated by the support set S*, z_{S*} denotes an estimate solution of the dual linear programming problem, H_j denotes the column of the channel matrix indicated by the index j, and ρ^+_{S*,j} and ρ^-_{S*,j} denote the residuals.
19. The method according to claim 2, wherein the second indices of communication signal coefficients are determined according to the following equations:
j^{(1)} = arg min_{j ∉ J_{S*}} min(ρ^+_{S*,j}, ρ^-_{S*,j}), min(ρ^+_{S*,j^{(1)}}, ρ^-_{S*,j^{(1)}}) < 0,
wherein j^{(1)} denotes a second index, J_{S*} denotes the set of column indices of the channel matrix corresponding to the support set S*, and ρ^+_{S*,j} and ρ^-_{S*,j} denote residuals of the dual linear programming problem.
20. The method according to claim 2, wherein recovering the sparse communication signal is performed according to the following equation:
x_{S^{(1)}} = (H_{S^{(1)}}^T H_{S^{(1)}})^{-1} H_{S^{(1)}}^T y,
wherein S^{(1)} denotes an updated support set, H_{S^{(1)}} denotes the channel matrix comprising the columns indicated by S^{(1)}, and y denotes the receive signal.
21. A non-transitory computer readable medium containing instructions that, when executed by at least one processor of a communication device in a communication network, cause the at least one processor to: determine a support set indicating a set of first indices of non-zero communication signal coefficients from a channel matrix and a receive signal, the receive signal being a channel output version of a sparse communication signal, the channel matrix comprising an arrangement of channel coefficients of the channel; determine an estimate of the sparse communication signal upon the basis of the support set, the channel matrix and the receive signal; determine second indices of communication signal coefficients which are not indicated by the support set, wherein determining the second indices of communication signal coefficients comprises minimizing a residual of a dual linear programming problem; and recover the sparse communication signal upon the basis of the support set, the estimate of the sparse communication signal, the second indices and the channel matrix.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Further embodiments of the patent application will be described with respect to the figures.
DETAILED DESCRIPTION
(4) A method 100 for recovering a sparse communication signal from a receive signal is described in the following.
(5) The receive signal is a channel output version of the sparse communication signal. The channel comprises channel coefficients being arranged to form a channel matrix.
(6) The method 100 comprises determining 101 a support set indicating a set of first indices of non-zero communication signal coefficients from the channel matrix and the receive signal, determining 103 an estimate of the sparse communication signal upon the basis of the support set, the channel matrix and the receive signal, determining 105 second indices of communication signal coefficients which are not indicated by the support set, and determining 107 the sparse communication signal upon the basis of the support set, the estimate of the sparse communication signal, the second indices and the channel matrix.
(7) The sparse communication signal can be represented by a vector. The sparse communication signal can comprise communication signal coefficients. The communication signal coefficients can be real numbers, e.g., 1.5 or 2.3, or complex numbers, e.g., 1+j or 5+3j.
(8) The receive signal can be represented by a vector. The receive signal can comprise receive signal coefficients. The receive signal coefficients can be real numbers, e.g., 1.3 or 2.7, or complex numbers, e.g., 2+3j or 1-5j.
(9) The channel can define a linear relationship between the sparse communication signal and the receive signal. The channel can comprise additive noise. The channel coefficients can be real numbers, e.g., 0.3 or 1.5, or complex numbers, e.g., 3-2j or 1+4j.
(10) The support set can indicate a set of indices of non-zero communication signal coefficients.
(12) The problem of recovering sparse signals from incomplete measurements can be solved using the orthogonal matching pursuit (OMP) method and its modifications. Further, there are linear programming (LP) approaches. However, there have been no attempts to combine OMP and LP into one numerical scheme.
(13) An n-dimensional vector x is called sparse if it has only a few non-zero entries among its n entries in total. Sparse and compressible vectors can be restored from a smaller number of measurements than the total dimension n. Sparseness can appear in different bases, so signals can be represented as sparse vectors in the time domain or in the frequency domain.
(14) Reconstruction of sparse signals is normally more computationally expensive than a mere inverse or pseudo-inverse of the measurement matrix, as used in least-squares computations. Moreover, the measurement matrix is usually not invertible, as the system is under-determined.
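By way of illustration, the least-squares computation referred to here is a single pseudo-inverse application; a minimal NumPy sketch with hypothetical dimensions and random data:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((20, 50))   # under-determined: m = 20 < n = 50
y = rng.standard_normal(20)
x_ls = np.linalg.pinv(H) @ y        # minimum-norm least-squares solution
print(np.count_nonzero(np.abs(x_ls) > 1e-9))  # typically all 50 entries non-zero
```

The point of the surrounding discussion is that this solution is generally dense, which is why sparse recovery needs more than a pseudo-inverse.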
(15) Numerically efficient reconstruction algorithms should be suitable for use in real time. Consider the linear system
Hx = y, (1)
where H ∈ R^{m×n}, x ∈ R^n, y ∈ R^m. The system is under-determined if m < n. More precisely, the system (1) is under-determined if rank(H) < n. Such systems have infinitely many solutions. The shape of the system can be represented schematically by the following matrix-vector pattern:
(16) [Schematic matrix-vector pattern: a wide m×n matrix H times a long vector x equals a short vector y.]
(17) The trade-off for reducing the number of measurements is sparseness of the vector x. Sparseness means that zeros dominate among the entries of x. Methods of searching for the sparsest solutions among all solutions of the system (1) are the key problem considered in compressive sensing (CS) theory.
(18) The key problem of CS theory can be reformulated as reconstruction of the correct support set corresponding to the sparsest vector, i.e., the set of significant non-zero entries:
Supp(x) = {i : x_i ≠ 0}.
It has been shown that while
‖x‖_0 → min,
Hx = y, (P_0)
is the correct formulation of the problem of determining the sparsest solution, where ‖x‖_0 = card(Supp(x)), the relaxed form of this problem,
‖x‖_p → min,
Hx = y, (P_p)
is also sufficient, where
(19) ‖x‖_p = (Σ_{i=1}^n |x_i|^p)^{1/p}
is the l_p-norm of the vector, p > 0. The non-convex nature of this problem for p < 1 makes it hard to solve numerically. One popular method for determining the sparsest solution, i.e., the solution having the largest number of zero entries, consists in solving the problem (P_1), which is a linear programming (LP) problem:
‖x‖_1 → min,
Hx = y, (P_1) (LP)
where ‖x‖_1 is the l_1-norm of the vector,
(20) ‖x‖_1 = Σ_{i=1}^n |x_i|.
That is one of the standard methods exploited in Compressive Sensing (CS) theory.
(21) Its advantages are convexity, the applicability of all known convex optimization methods, including linear programming methods such as the simplex method, and good performance compared with the best possible, but NP-hard, solution of the problem (P_0).
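As an illustrative aside, (P_1) can be solved with an off-the-shelf LP solver by splitting x = u - v with u, v ≥ 0, which is exactly the (PLP) form introduced below; this sketch assumes NumPy/SciPy and hypothetical dimensions:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(H, y):
    """Solve min ||x||_1 subject to Hx = y via the split x = u - v, u, v >= 0."""
    m, n = H.shape
    c = np.ones(2 * n)                       # objective e^T u + e^T v
    A_eq = np.hstack([H, -H])                # constraint H u - H v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]             # x = u - v

rng = np.random.default_rng(1)
H = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 0.7]       # 3-sparse signal
x_hat = l1_min(H, H @ x_true)
print(np.abs(x_hat - x_true).max())          # typically tiny: exact recovery
```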
(22) A more efficient solution for real-time implementation is the Orthogonal Matching Pursuit (OMP) algorithm. Its key idea is formulated as follows. Step 0: Initialize the residual r^{(0)} = y, and initialize the estimate of the support set S^{(0)} = ∅. Step k = 1, 2, . . . : Find the column H_{j^{(k)}} most correlated with the residual,
(23) j^{(k)} = arg max_j |H_j^T r^{(k-1)}|,
and update
S^{(k)} = S^{(k-1)} ∪ {j^{(k)}}, x^{(k)} = (H_{S^{(k)}}^T H_{S^{(k)}})^{-1} H_{S^{(k)}}^T y, r^{(k)} = y - H_{S^{(k)}} x^{(k)}.
(24) This is a kind of so-called greedy algorithm: an index included into the support set is never taken out of the set. The advantage of this OMP algorithm is its low computational complexity compared with other methods. However, the OMP algorithm shows lower performance when reconstructing the sparse signal compared with the results obtained by (P_1).
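A minimal NumPy sketch of the OMP iteration just described (the function name, tolerance, and iteration cap are my own choices, not the patent's):

```python
import numpy as np

def omp(H, y, eps=1e-6, max_iter=None):
    """Orthogonal Matching Pursuit: greedily grow the support set S."""
    m, n = H.shape
    S = []                                       # support estimate S^(0) = empty set
    r = y.copy()                                 # residual r^(0) = y
    x_S = np.zeros(0)
    for _ in range(max_iter or m):
        j = int(np.argmax(np.abs(H.T @ r)))      # column most correlated with r
        if j in S:
            break                                # no new column improves the fit
        S.append(j)
        x_S, *_ = np.linalg.lstsq(H[:, S], y, rcond=None)  # LS fit on the support
        r = y - H[:, S] @ x_S                    # updated residual
        if r @ r <= eps**2:                      # stopping criterion (SC)
            break
    return S, x_S
```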
(25) The mentioned problem can be solved by combining the OMP method and the linear programming method (P_1) into one numerical algorithm.
(26) A structure of the time slot allocated to the problem solution is shown in the figure.
(27) By terminating the process immediately once the allocated time slot is exhausted, as much as possible is gained compared with conventional OMP: the l_1 improvements are made in the time remaining between the completion of OMP (T_OMP) and the end of the allocated slot.
(28) Consider the LP problem induced by the l_1 minimization; it can be equivalently represented as
Hu - Hv = y, u ≥ 0, v ≥ 0,
e^T u + e^T v → min, (PLP)
where e is the all-ones vector of appropriate size. Here x = u - v.
This linear programming problem will be referred to as the primary LP (PLP). Along with (PLP), consider the dual linear programming problem (DLP):
-e ≤ H^T z ≤ e,
-y^T z → min, (DLP)
where z stands for the dual variable.
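For completeness, (DLP) follows from standard LP duality applied to (PLP); the following derivation sketch is my addition and uses the usual Lagrangian argument:

```latex
% Lagrangian of (PLP):
%   L(u, v, z) = e^T u + e^T v - z^T (Hu - Hv - y)
%              = (e - H^T z)^T u + (e + H^T z)^T v + y^T z.
% Minimizing over u, v >= 0 stays bounded iff
%   e - H^T z >= 0  and  e + H^T z >= 0,  i.e.  -e <= H^T z <= e,
% in which case the dual problem is  max_z y^T z,  i.e.  -y^T z -> min.
\begin{aligned}
  \text{(PLP)}&\quad \min_{u,v \ge 0}\; e^T u + e^T v \quad \text{s.t.}\; Hu - Hv = y,\\
  \text{(DLP)}&\quad \min_z\; -y^T z \quad \text{s.t.}\; -e \le H^T z \le e.
\end{aligned}
```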
Suppose that OMP has completed, resulting in:
S* — the last estimate of the support set obtained by OMP, l* = card(S*),
H_{S*} ∈ R^{m×l*},
x_{S*} = (H_{S*}^T H_{S*})^{-1} H_{S*}^T y ∈ R^{l*}.
(29) The set S* can be further divided into two subsets according to the signs of the entries of the vector x_{S*}:
S* = S*_+ ∪ S*_-.
Define:
e_{S*} = (e_{S*,1}, e_{S*,2}, . . . , e_{S*,l*})^T,
e_{S*,i} = 1 if i ∈ S*_+,
e_{S*,i} = -1 if i ∈ S*_-.
Generally l* < m, so the matrix H_{S*} is not square.
(30) The current estimate of the DLP solution corresponding to the basis S* can be calculated as the minimum-norm solution of the under-determined system
H_{S*}^T z = e_{S*}, ‖z‖_2^2 → min,
which has the closed-form solution
z_{S*} = H_{S*}(H_{S*}^T H_{S*})^{-1} e_{S*}. (2)
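A direct NumPy transcription of (2) (the helper name is mine; it assumes H_{S*} has full column rank and solves against the small l*×l* Gram matrix rather than forming an explicit inverse):

```python
import numpy as np

def dual_estimate(H_S, e_S):
    """Minimum-norm solution of H_S^T z = e_S: z = H_S (H_S^T H_S)^{-1} e_S."""
    G = H_S.T @ H_S                      # l* x l* Gram matrix
    return H_S @ np.linalg.solve(G, e_S)
```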
(31) This vector corresponds to the problem (DLP) in that it can be either feasible or infeasible with respect to the system of linear inequalities, and it can be either optimal or not optimal for (DLP). The optimality conditions hold, as the set S* is obtained at the last step of the first stage (OMP). The feasibility conditions of the vector x* obtained at the last iteration of OMP are almost satisfied due to (SC). But (PLP) is dual to (DLP), and the feasibility conditions for (PLP) are the optimality conditions for (DLP).
(32) So, only the feasibility conditions of the vector z_{S*} are checked. Check the residuals of (DLP) for the inequalities with j ∉ J_{S*}, where J_{S*} is the set of column indices corresponding to the basis S*, to see whether there are violated conditions or, what is the same, residuals with incorrect sign.
(33) The residuals of (DLP) are
ρ^+_{S*,j} = 1 + H_j^T z_{S*},
ρ^-_{S*,j} = 1 - H_j^T z_{S*}. (3)
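The residual check (3) in NumPy form, over the indices j ∉ J_{S*} (the function and variable names are mine):

```python
import numpy as np

def dlp_residuals(H, S, z_S):
    """Residuals rho^+_j = 1 + H_j^T z and rho^-_j = 1 - H_j^T z for j outside S."""
    outside = [j for j in range(H.shape[1]) if j not in S]
    corr = H[:, outside].T @ z_S
    return outside, 1.0 + corr, 1.0 - corr     # indices, rho_plus, rho_minus
```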
(34) Both ρ^+_{S*,j} and ρ^-_{S*,j} can be non-negative for all j. If this condition is satisfied, both (PLP) and (DLP) are solved, with solutions x_{S*} and z_{S*} respectively. Now let one of the residuals be negative. Without loss of generality, let
(35) ρ^-_{S*,j^{(1)}} = min_{j ∉ J_{S*}} min(ρ^+_{S*,j}, ρ^-_{S*,j}), (4)
ρ^-_{S*,j^{(1)}} < 0. (5)
Then a new solution
x_{S^{(1)}} = (H_{S^{(1)}}^T H_{S^{(1)}})^{-1} H_{S^{(1)}}^T y (6)
can be generated with the following l_1 improvement property:
‖x_{S^{(1)}}‖_1 < ‖x_{S*}‖_1,
where
S^{(1)} = S* ∪ {j^{(1)}} \ {j_{i^{(1)}}} (7)
and i^{(1)} is the number of the entry to be removed from the basis (the serial number in the series 1, 2, . . . , l*).
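A sketch of the exchange itself: form S^{(1)} by removing the entry at position i^{(1)} and inserting j^{(1)}, then recompute the least-squares solution. The rule for selecting i^{(1)} is not fully reproduced in this text, so this hypothetical helper takes both indices as inputs:

```python
import numpy as np

def exchange_step(H, y, S, j_in, i_out):
    """Form S^(1) by swapping out position i_out for column j_in, then refit x."""
    S1 = [j for k, j in enumerate(S) if k != i_out] + [j_in]
    x1, *_ = np.linalg.lstsq(H[:, S1], y, rcond=None)
    return S1, x1
```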
(36) The Enhanced OMP algorithm can be summarized as follows:
(37)
k = 0;
while (allocated time slot is not exhausted) {
1) perform OMP (or any of its modifications);
2) start an l_1 improvement step: perform the calculations according to (2)-(4);
3) check whether the condition (5) is satisfied; if not, terminate the algorithm, as (DLP) is feasible and so (PLP) has an optimal solution; otherwise, perform the next step;
4) correct the set of active indices and recalculate the solution according to (6), (7); the index removed from the basis is not inserted back, so the modification (7) is greedy;
5) keep the record value of card(S^{(k)}) (or of the l_1-norm): the solution corresponding to the least cardinality (or least l_1-norm) is recorded to be reported as the result;
6) k := k + 1;
}
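Putting the pieces together, a hedged end-to-end sketch of the loop above; it reuses the helpers sketched earlier (omp, dual_estimate, dlp_residuals, exchange_step). Because the leaving-index rule is not fully reproduced here, the sketch simply tries every leaving position and keeps the swap with the smallest l_1 norm, which respects the improvement property (6) but is only a stand-in:

```python
import numpy as np
import time

def eomp(H, y, time_slot=0.05, eps=1e-6):
    """Enhanced OMP sketch: OMP first stage, then greedy l1-improvement swaps."""
    S, x_S = omp(H, y, eps=eps)                      # first stage: OMP
    removed = set()                                  # greedy: never re-insert
    best_S, best_x = list(S), x_S                    # record solution
    t0 = time.monotonic()
    while time.monotonic() - t0 < time_slot:         # allocated time slot
        z_S = dual_estimate(H[:, S], np.sign(x_S))   # (2): e_S from signs of x_S*
        outside, rp, rm = dlp_residuals(H, S, z_S)   # (3)
        ok = [k for k, j in enumerate(outside) if j not in removed]
        if not ok:
            break
        viol = np.minimum(rp[ok], rm[ok])
        if viol.min() >= 0:                          # (5) fails: (DLP) feasible, done
            break
        j_in = outside[ok[int(np.argmin(viol))]]     # most violated index enters
        trials = [exchange_step(H, y, S, j_in, i) for i in range(len(S))]
        S1, x1 = min(trials, key=lambda t: np.abs(t[1]).sum())
        removed.add(next(j for j in S if j not in S1))
        S, x_S = S1, x1
        if np.abs(x_S).sum() < np.abs(best_x).sum(): # keep the record solution
            best_S, best_x = list(S), x_S
    return best_S, best_x
```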
(38) An n-dimensional vector x is sparse if it has only a few non-zero entries among its n entries in total.
(39) A sparse, compressible vector can be restored from a smaller number of measurements than the total dimension n.
(40) Sparseness can appear in different bases, so signals can be represented as sparse vectors in the time domain or in the frequency domain.
(41) Reconstruction of sparse signals is normally more computationally expensive than a mere inverse of the measurement matrix. Moreover, the measurement matrix is usually not invertible, as the system is under-determined.
(42) It is desirable for numerically efficient reconstruction algorithms to be suitable for use in real time.
(43) Recovery of sparse signals from an under-determined or incomplete measurement model can be performed. The linear system
Hx = y,
where H ∈ R^{m×n}, y ∈ R^m, x ∈ R^n and m < n, is under-determined. Having the pattern
(44) [Schematic matrix-vector pattern of the under-determined system, as above.]
it has infinitely many solutions: Null(H) + x_0 is a set of solutions for every single solution x_0. The sparsest solution, i.e., the solution having the largest number of zero entries, can be found as the l_1 minimizer:
‖x‖_1 → min subject to the constraints Hx = y,
which is a linear program (LP). That is the standard problem setup exploited in the Compressive Sensing (CS) theory.
(45) LP is computationally expensive. Much easier for real-time implementation is the Orthogonal Matching Pursuit (OMP) algorithm. Its key idea is formulated as follows.
(46) Step 0: Initialize the residual r_0 = y and the estimate of the support set S_0 = ∅.
(47) Step k = 1, . . . : Find the column H_{j_k} most correlated with the residual,
(48) j_k = arg max_j |H_j^T r_{k-1}|,
and update S_k = S_{k-1} ∪ {j_k}, r_k = y - H_{S_k} x_k, where x_k = (H_{S_k}^T H_{S_k})^{-1} H_{S_k}^T y.
The steps are repeated until the stopping criterion is satisfied.
(49) Suppose the measurement model is described by the equation y = Hx + ε, x ∈ R^n, y ∈ R^m, with the number of measurements m significantly smaller than the length n of the vector x.
(50) Suppose that not more than k entries of the vector x are non-zero. In other words, k ≪ n, and
x_i = 0 for i ∉ supp(x), x_i ≠ 0 for i ∈ supp(x), card(supp(x)) = k.
Moreover, the matrix H satisfies the Restricted Isometry Property (RIP), and the corresponding conditions on its RIP constants are assumed to hold.
(51) The stopping criterion for OMP can be chosen as
r_k^T r_k ≤ ε^2. (SC)
(52) The key idea is to combine the advantages of the OMP and LP approaches, considering OMP as the first stage of the Enhanced OMP (EOMP), and making several iterations of a modified simplex method after (SC) is met, considering S_k as an initial basis and attempting to sequentially modify its content.
(53) The combined algorithm is greedy: the basis is only extended at the first (OMP) stage, and an index removed from the basis at the second stage is not inserted again. The matrix pseudo-inverse is modified using low-rank modifications of matrix factorizations when performing the second stage.
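Paragraph (53)'s low-rank factorization updates can be illustrated as follows (my example, not the patent's equations): when a column h is appended to H_S, the Cholesky factor R of the Gram matrix H_S^T H_S can be extended in O(l^2) operations instead of refactoring from scratch:

```python
import numpy as np
from scipy.linalg import solve_triangular

def chol_append(R, H_S, h):
    """Given upper-triangular R with R^T R = H_S^T H_S, return the factor of the
    Gram matrix after the column h is appended to H_S."""
    w = solve_triangular(R, H_S.T @ h, trans='T')   # solve R^T w = H_S^T h
    d = np.sqrt(h @ h - w @ w)                      # new diagonal entry
    l = R.shape[0]
    R_new = np.zeros((l + 1, l + 1))
    R_new[:l, :l] = R
    R_new[:l, l] = w
    R_new[l, l] = d
    return R_new
```

An analogous rank-one downdate handles the removal of a column at the second stage.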
(54) Consider the LP problem induced by the l_1 minimization. It can be equivalently represented as
Hu - Hv = y, u ≥ 0, v ≥ 0,
e^T u + e^T v → min, (PLP)
where e is the all-ones vector of appropriate size. Here x = u - v. This LP will be referred to as the Primary LP (PLP). Along with (PLP), consider the Dual LP (DLP):
-e ≤ H^T z ≤ e,
-y^T z → min, (DLP)
where z stands for the dual variable.
Suppose that OMP has completed, resulting in:
S* — the last estimate of the support set obtained by OMP, l* = card(S*),
H_{S*} ∈ R^{m×l*},
x_{S*} = (H_{S*}^T H_{S*})^{-1} H_{S*}^T y ∈ R^{l*}.
The set S* can be further divided into two subsets according to the signs of the entries of the vector x_{S*}:
S* = S*_+ ∪ S*_-.
Define
e_{S*} = (e_{S*,1}, e_{S*,2}, . . . , e_{S*,l*})^T,
e_{S*,i} = 1 if i ∈ S*_+,
e_{S*,i} = -1 if i ∈ S*_-.
(55) The current estimate of the DLP solution corresponding to the basis S* can be calculated as the minimum-norm solution of the under-determined system
H_{S*}^T z = e_{S*}, ‖z‖_2^2 → min,
which has the closed-form solution
z_{S*} = H_{S*}(H_{S*}^T H_{S*})^{-1} e_{S*}.
(56) This vector corresponds to the problem (DLP) in that it can be either feasible or infeasible with respect to the system of linear inequalities. It can be either optimal or not optimal for (DLP).
(57) The optimality conditions hold, as the set S* is obtained at the last step of the first stage (OMP). So, the feasibility conditions are almost satisfied due to (SC). But (PLP) is dual to (DLP), and the feasibility conditions for (PLP) are the optimality conditions for (DLP). So, only the feasibility conditions have to be checked.
(58) Let J_{S*} be the set of column indices of the matrix corresponding to the basis S*. Check the residuals of (DLP) for the inequalities with j ∉ J_{S*}.
(59) The residuals of (DLP) are
ρ^+_{S*,j} = 1 + H_j^T z_{S*},
ρ^-_{S*,j} = 1 - H_j^T z_{S*}.
(60) Both ρ^+_{S*,j} and ρ^-_{S*,j} can be non-negative. If so, both (PLP) and (DLP) are solved, with solutions x_{S*} and z_{S*} respectively. Now let one of the residuals be negative; without loss of generality, let ρ^-_{S*,j^{(1)}} < 0. Then a new solution
x_{S^{(1)}}
can be generated with the following l_1 improvement property:
‖x_{S^{(1)}}‖_1 < ‖x_{S*}‖_1,
S^{(1)} = S* ∪ {j^{(1)}} \ {j_{i^{(1)}}},
where i^{(1)} is the number of the entry to be removed from the basis.
(61) Finally, the algorithm can be summarized as follows:
(62)
while (the DLP constraints are violated and the number of iterations has not exceeded a limit) {
perform OMP;
make an l_1 improvement step;
correct the set of active indices: an index removed from the basis is not inserted back;
keep the record value of card(S^{(k)}): the solution corresponding to the least cardinality is reported;
}
(63) In an implementation form, the patent application relates to an enhanced orthogonal matching pursuit for sparse signal recovery.
(64) In an implementation form, the patent application relates to sparse signal detection, especially aspects of its performance quality.
(65) In an implementation form, the patent application relates to a method for recovery of a sparse signal combining OMP with one or several l_1 improvement steps, comprising: performing OMP; computing the solution of the problem (DLP) according to equations (2), (3); checking the residuals (4), (5); making the improvement (6), (7); and keeping the record solution.
(66) In an implementation form, the record solution is the best in terms of the cardinality.
(67) In an implementation form, the record solution is the best in terms of the l.sub.1 criterion value.
(68) In an implementation form, the termination condition is feasibility of (DLP), i.e., condition (5).
(69) In an implementation form, the termination condition is exhaustion of the time allocated to the problem.
(70) In an implementation form, the patent application allows for better performance compared to OMP and/or LP.
(71) In an implementation form, low rank matrix factorizations can be applied at each step of the algorithm to reduce the computational load.
(72) In an implementation form, the patent application relates to a method for recovery of sparse signals combining advantages of OMP and LP approaches.