SIGMA DELTA QUANTIZATION FOR IMAGES

20210368115 · 2021-11-25

Abstract

A technique is presented for quantizing pixels of an image using Sigma Delta quantization. In one aspect, pixel values for an image are segmented into columns of pixel values; and for each column in the matrix, pixel values of a given column are quantized using sigma delta modulation. The pixel values in a given column are preferably quantized as a whole, thereby minimizing accumulated quantization error from a starting pixel value in the given column to a current pixel value in the given column. In another aspect, the pixels of an image are quantized using a 2D generalization of Sigma Delta modulation.

Claims

1. A method for quantizing pixels of an image, comprising: receiving, by a sequencing circuit, pixel values for an image captured by a camera, where the pixel values are arranged in a matrix; grouping, by the sequencing circuit, the pixel values of the matrix into columns of pixel values; and for each column in the matrix, quantizing, by an analog to digital converter, pixel values of a given column using sigma delta modulation.

2. The method of claim 1 further comprises quantizing pixel values in a given column as a whole, thereby minimizing accumulated quantization error from a starting pixel value in the given column to a current pixel value in the given column.

3. The method of claim 1 further comprises quantizing two or more columns of pixel values from the matrix in parallel using sigma delta modulation.

4. The method of claim 1 further comprises quantizing two or more columns of pixel values from the matrix in series using sigma delta modulation.

5. The method of claim 1 further comprises assembling quantized values for each column into a rectangular array and storing the rectangular array as a final image in a non-transitory computer-readable medium.

6. The method of claim 1 further comprises reconstructing a final image from the quantized pixel values of the matrix using a decoder, where the decoder is configured to minimize an image norm during reconstruction.

7. The method of claim 6 wherein the image norm is further defined as a total variation norm.

8. The method of claim 1 further comprises reconstructing a final image from the quantized pixel values of the matrix using a decoder, where the decoder is configured to minimize an image norm subject to a quantization constraint and the quantization constraint is defined by a finite integration operator applied to a quantization error with an L-infinite norm bounded by half of the quantization step-size, wherein the quantization error is a vector such that each element of the vector contains a quantization error for a given pixel in the matrix.

9. An imaging system, comprising: a camera configured to capture an image of a scene, where the image is represented by pixel values arranged in a matrix; a sequencing circuit configured to receive the image from the camera and operates to group the pixel values of the matrix into columns of pixel values; and an analog to digital converter interfaced with the sequencing circuit and, for each column in the matrix, quantizes pixel values of a given column using sigma delta modulation.

10. A method for quantizing pixels of an image, comprising: receiving, by a sequencing circuit, pixel values for an image captured by an imager, where the pixel values are arranged in a matrix; segmenting, by the sequencing circuit, the pixel values of the matrix into one or more patches of pixel values, where each patch of pixel values is a subset of pixel values from the matrix arranged in a two dimensional array; and for each patch in the matrix, quantizing, by an analog to digital converter, the pixel values of a given patch using a two dimensional generalization of sigma delta modulation.

11. The method of claim 10 wherein, for each patch in the matrix, quantizing the pixel values of a given patch further comprises quantizing pixel values along anti-diagonals of the two dimensional array.

12. The method of claim 11 further comprises, for each patch in the matrix, quantizing pixel values along the anti-diagonals in parallel.

13. The method of claim 10 further comprises, for each patch in the matrix, quantizing pixel values in series starting from an upper left corner of the given patch and moving to a lower right corner of the given patch.

14. The method of claim 11 wherein quantizing the sequence of pixel values for a given patch further includes, for a given pixel in the given patch, summing quantization errors associated with at least three pixels neighboring the given pixel and rounding the sum to a nearest member of an alphabet.

15. The method of claim 11 wherein the sequence of pixel values for a given patch is quantized in accordance with

q_{i,j} = Q(u_{i,j−1} + u_{i−1,j} − u_{i−1,j−1} + y_{i,j}),

u_{i,j} = u_{i,j−1} + u_{i−1,j} − u_{i−1,j−1} + y_{i,j} − q_{i,j},

where q_{i,j} is the quantized value, u_{i,j} is the quantization error, y_{i,j} is the pixel value, and Q is a scalar quantizer.

16. The method of claim 10 further comprises assembling quantized values for each of the one or more patches into a rectangular array and storing the rectangular array as a final image in a non-transitory computer-readable medium.

17. The method of claim 10 further comprises reconstructing a final image from the quantized pixel values of the matrix using a decoder, where the decoder is configured to minimize an image norm during reconstruction.

18. The method of claim 17 wherein the image norm is further defined as a total variation norm.

19. The method of claim 10 further comprises reconstructing a final image from the quantized pixel values of the matrix using a decoder, where the decoder is configured to minimize an image norm subject to a quantization constraint and the quantization constraint is defined to ensure that a finite integration operator applied both to the left and the right of a quantization error results in a matrix with an entry-wise maximum absolute value bounded by half of the quantization step-size.

20. A method for quantizing pixels of an image, comprising: receiving, by a sequencing circuit, pixel values for an image captured by a camera, where the pixel values are arranged in a matrix; grouping, by the sequencing circuit, the pixel values of the matrix into subsets of pixel values, where pixels of a given subset of pixels are neighbors to each other; and for each subset of pixels, quantizing, by an analog to digital converter, pixel values of a given subset of pixels using sigma delta modulation.

Description

DRAWINGS

[0015] The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.

[0016] FIG. 1 is a diagram depicting an example imaging system;

[0017] FIG. 2 is a graph showing the compression rate of D, D^2 and D^3 on exponential functions with various frequencies;

[0018] FIG. 3 is a flowchart depicting a technique for quantizing pixels of an image;

[0019] FIG. 4A is a diagram illustrating how to sequence pixel values of an image column wise and in series;

[0020] FIG. 4B is a diagram illustrating how to sequence pixel values of an image column wise and in parallel;

[0021] FIG. 5 is a diagram illustrating how to sequence pixel values from a patch;

[0022] FIGS. 6A-6C are graphs showing a comparison of various 1D signal reconstruction results between the proposed encoder-decoder pairs and MSQ; and

[0023] FIGS. 7A and 7B are graphs showing the reconstruction result of signals that satisfy minimum separation condition.

[0024] Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.

DETAILED DESCRIPTION

[0025] Example embodiments will now be described more fully with reference to the accompanying drawings.

[0026] FIG. 1 depicts an example imaging system 10. The imaging system 10 is comprised generally of an imager 11, a sequencing circuit 12, an analog-to-digital converter (ADC) 13 and a decoder 14. The imager 11 is configured to capture an image of a scene, where the image is represented by pixel values arranged in a matrix. In some embodiments, the imager is further defined as a CCD sensor or CMOS sensor.

[0027] Pixel values for the image are then quantized by the imaging system 10. The sequencing circuit 12 is configured to receive the pixel values for the image from the imager 11 and operates to segment and/or sequence the pixel values of the matrix into columns, rows, or patches as will be further described below. The analog-to-digital converter 13 is interfaced with the sequencing circuit 12 and quantizes the pixel values, for example using sigma delta modulation. In one embodiment, the quantized pixel values are assembled into a rectangular array and the array is stored as a final image in a non-transitory computer-readable memory. In other embodiments, the quantized pixel values are optionally reconstructed before being stored in the non-transitory computer-readable memory. In this case, the decoder 14 is interfaced with the analog-to-digital converter 13 and reconstructs a final image from the quantized pixel values. It is envisioned that other types of signal processing, such as JPEG compression, contrast adjustment, etc., may be applied to the quantized pixel values as well.

[0028] In one example, the imaging system 10 is implemented as part of a camera. It is to be understood that only the relevant components of the imaging system 10 are discussed in relation to FIG. 1, but that other components may be needed to control and manage the overall operation of the system.

[0029] By way of background, the existing quantization schemes are first introduced in the context of image quantization. Let X ∈ ℝ^{N×N} store the pixel values of a grayscale image. Without any ambiguity, the image is referred to as the matrix X.

[0030] For Memoryless Scalar Quantization (MSQ), suppose the alphabet A = A_δ is defined as in (1.1). The scalar quantization Q: ℝ → A quantizes a given scalar by rounding it to the nearest element of the alphabet,

Q(z) = arg min_{v ∈ A} |v − z|.

Memoryless Scalar Quantization applies this scalar quantization to each sample of the input sequence independently. In terms of image quantization, for a given image X, MSQ on X means quantizing each pixel independently,

ℝ^{N×N} ∋ X → q = Q(X), with q_{i,j} = Q(X_{i,j}).

Here q_{i,j} and X_{i,j} are the (i, j)th entries of the quantized image q and the original image X, respectively.
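As a concrete illustration of MSQ, consider the following sketch (an assumption of this description, not code from the disclosure; the helper name `msq` and the 3-bit uniform alphabet on [0, 1] are hypothetical choices):

```python
import numpy as np

def msq(X, delta=1.0 / 7, levels=8):
    """Memoryless Scalar Quantization: round each pixel of X independently
    to the nearest element of a uniform alphabet with step-size delta."""
    alphabet = np.arange(levels) * delta                 # hypothetical 3-bit uniform alphabet on [0, 1]
    idx = np.abs(X[..., None] - alphabet).argmin(-1)     # nearest alphabet element per pixel
    return alphabet[idx]
```

Each pixel error is at most δ/2, but the errors are never compensated across neighboring pixels, which is what produces the banding artifacts at low bit depths discussed below.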

[0031] Sigma delta quantization is an adaptive quantization scheme proven to be more efficient than MSQ in a variety of applications. The adaptiveness comes from the fact that it utilizes the quantization errors of previous samples to increase the accuracy of the current sample. Suppose the sample sequence is y = (y_1, . . . , y_m); the first order ΣΔ quantization q = Q_ΣΔ(y) is obtained by running the following iterations:

q_i = Q(y_i + u_{i−1}),   (Du)_i := u_i − u_{i−1} = y_i − q_i.   (1.2)

One can see from the first equation that the i-th input y_i is quantized by first adding to it the historical error stored in the so-called state variable u_{i−1}, then applying the scalar quantization to the sum. In the second equation, D is the forward finite difference operator/matrix, with 1s on the diagonal and −1s on the sub-diagonal. Hence, the second equation defines a recurrence relation allowing one to update the state variable u_i. The scheme defined in equation (1.2) is the so-called first order quantization scheme because it only uses one step of the historical error, u_{i−1}. More generally, one can define the r-th order ΣΔ quantization, denoted q = Q_ΣΔ^r(y), by involving r steps of historical errors, u_{i−1}, u_{i−2}, . . . , u_{i−r}. More precisely, each entry q_i of q is obtained by

q_i = Q(ρ_r(u_{i−1}, . . . , u_{i−r}) + y_i),   (D^r u)_i := y_i − q_i,

where ρ_r is some general function aggregating the accumulated errors u_{i−1}, . . . , u_{i−r}. The r-th order finite difference operator is defined via D^r u := D(D^{r−1} u).
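The first order iteration (1.2) can be sketched in a few lines (the helper name is hypothetical, and the 1-bit alphabet {0, 1} is chosen purely for illustration):

```python
import numpy as np

def sigma_delta_1d(y, alphabet):
    """First order Sigma Delta quantization (eq. 1.2):
    q_i = Q(y_i + u_{i-1}),  u_i = u_{i-1} + y_i - q_i."""
    alphabet = np.asarray(alphabet, dtype=float)
    q = np.empty(len(y))
    u = 0.0                                              # state variable, u_0 = 0
    for i, yi in enumerate(y):
        s = yi + u                                       # add accumulated historical error
        q[i] = alphabet[np.abs(alphabet - s).argmin()]   # scalar quantizer Q
        u = s - q[i]                                     # recurrence (Du)_i = y_i - q_i
    return q

# For a constant input, the running average of the 1-bit output tracks the
# input value while the state variable stays bounded.
y = np.full(1000, 0.3)
q = sigma_delta_1d(y, [0.0, 1.0])
```

This illustrates the noise shaping: the per-sample error is large (the output is only 0 or 1), but the accumulated error u = cumsum(y − q) never exceeds δ/2.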

[0032] The main advantage of Sigma Delta quantization over MSQ is its adaptive usage of feedback information. The feedback helps the quantizer efficiently use the given bits to maximally store information from special types of signals, such as those with low frequencies. Mathematically, these adaptive quantizers achieve a high information conversion rate through the noise-shaping effect on the quantization error, which means the errors of such quantizers are distributed non-uniformly. When inputting the same random sequence, the error spectrum of MSQ is uniform, while that of Sigma Delta quantization is (nearly) linear. This property has the following theoretical explanation. First, the matrix form of definition (1.2) gives an expression for the first order Sigma Delta quantization error y − q:

y − q = Du,   ‖u‖_∞ ≤ δ/2,

where δ is the quantization step-size in (1.1) and D is the finite difference matrix. This says that y − q ∈ D(B_∞(δ/2)): the quantization error y − q lies in the ℓ_∞ ball of radius δ/2 reshaped by the operator D. In the case of the r-th order ΣΔ quantization (r a positive integer), one similarly has

y − q = D^r u,   ‖u‖_∞ ≤ δ/2,

which means the quantization error y − q lies in the ℓ_∞ ball reshaped by the operator D^r.

[0033] The singular values of D^r determine the radii of the reshaped ℓ_∞ ball containing the quantization errors. The singular vectors of D lie almost aligned with the Fourier basis, and the singular values of D increase with frequency. Therefore, the low-frequency errors, corresponding to smaller singular values of D^r, are compressed most. One can numerically verify this unbalanced error reduction effect of D^r on various sinusoidal frequencies by computing the ratio

ρ_r(w) = ‖D^r e^{−iwt}‖_2 / ‖e^{−iwt}‖_2.

From the plot of ρ_r(w) in FIG. 2, one can see that the low frequency sinusoids lose more energy after going through D^r, especially when the quantization order r is large.
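The ratio ρ_r(w) can be evaluated numerically; the sketch below (the helper name and the discretization length n = 256 are choices made here, not taken from the disclosure) builds the finite difference matrix D and applies it r times to a sampled complex exponential:

```python
import numpy as np

def rho(r, w, n=256):
    """Energy ratio ||D^r e^{-iwt}||_2 / ||e^{-iwt}||_2 for a discrete
    complex exponential of frequency w, t = 0, 1, ..., n-1 (cf. FIG. 2)."""
    t = np.arange(n)
    e = np.exp(-1j * w * t)
    D = np.eye(n) - np.eye(n, k=-1)   # 1s on the diagonal, -1s on the sub-diagonal
    v = e
    for _ in range(r):                # apply D^r
        v = D @ v
    return np.linalg.norm(v) / np.linalg.norm(e)
```

Low frequencies are attenuated far more strongly than high ones, which is precisely the noise-shaping effect described above.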

[0034] Since Sigma Delta quantization keeps most of the error away from the low frequencies, it is ideal for quantizing low-frequency signals. For instance, dense time samples of audio signals can be deemed low-frequency vectors, and they have indeed been shown to be a good application of Sigma Delta quantization. Images, on the other hand, do not consist of only low frequencies, as sharp edges have slowly decaying Fourier coefficients. Therefore, it is not obvious whether applying Sigma Delta quantization to images is beneficial.

[0035] Despite the importance of quantization in image acquisition, MSQ is still the state-of-the-art quantizer in commercial cameras. The major drawback of MSQ is that when the bit-depth (i.e., the number of bits used to represent each pixel) is small, it has a color-banding artifact, i.e., different colors merge together to cause fake contours and plateaus in the quantized image. A known technique called dithering reduces color-banding by randomly perturbing the pixel values (e.g., adding random noise) before quantization. It then breaks artificial contour patterns into the less harmful random noise. However, this random noise is still quite visible and a more fundamental issue is that dithering only randomizes the quantization error instead of reducing it. The same amount of errors still exist in the quantized image and will manifest themselves in other ways.
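A minimal sketch of dithered MSQ (the function name and the uniform dither of width δ are assumptions for illustration, not taken from the disclosure): uniform noise is added before rounding, so banding becomes random noise, while the worst-case per-pixel error actually grows to δ rather than shrinking:

```python
import numpy as np

def dithered_msq(X, delta, rng):
    """MSQ with additive uniform dither: perturb each pixel by up to
    +/- delta/2 before rounding to the uniform grid with step delta."""
    noise = rng.uniform(-delta / 2, delta / 2, size=X.shape)
    return np.round((X + noise) / delta) * delta
```

This makes concrete the point above: dithering only randomizes the quantization error; it does not reduce it.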

[0036] Another method to avoid color-banding is digital halftoning, first proposed in the context of binary printing, where pixel values are converted to 0 or 1 for printing, leading to a possibly severe color-banding artifact. To mitigate it, digital halftoning was proposed based on the ideas of sequential pixel quantization and error diffusion. Error diffusion means the quantization error of the current pixel is spread out to its neighbours to compensate for the overall under/over-shooting. The decay of energy during spreading is set to empirical values that minimize the overall ℓ_2 quantization error of an entire image class. Error diffusion works under a similar assumption as Sigma Delta quantization, namely that the image intensity varies slowly and smoothly. In a sense, it trades color-richness for spatial resolution. Like dithering, error diffusion does not reduce the overall noise but only redistributes it.

[0037] From this discussion, one sees that both dithering and digital halftoning only redistribute the quantization error instead of compressing it. In contrast, this disclosure introduces an improved technique for quantizing pixels of an image. Explicitly, suppose N^2 is the total number of pixels and s is the number of pixels representing curve discontinuities (e.g., edges) in the image; the proposed technique reduces the quantization error from O(N) to O(√s). This is achieved by combining Sigma Delta quantization with an optimization based reconstruction. It was observed in numerical experiments that both the low and high frequency errors are reduced.

[0038] FIG. 3 illustrates one embodiment for quantizing pixels of an image in accordance with this disclosure. As a starting point, an image of a scene is captured at 31 by an imager or imaging device. The image is represented by pixel values arranged in a matrix.

[0039] When using an r-th order Sigma Delta quantization scheme on a 2D image X, the pixel values of the image X need to be converted into sequences, as indicated at 32. One way to do this is to apply Sigma Delta quantization independently to each column of the image,

q = Q_ΣΔ(X),   q = [q_1, . . . , q_N],   q_j = Q_ΣΔ(X_j),   j = 1, . . . , N,

where X_j is the j-th column of X. In other words, the pixel values for each column are quantized as a whole, thereby minimizing accumulated quantization error from a starting pixel value to the currently quantized pixel value in a given column of the image. In one example, the pixel values from the columns of the image are quantized in series (i.e., one column at a time) as shown in FIG. 4A. In another example, the pixel values from the columns of the image are quantized in parallel as shown in FIG. 4B. As opposed to column wise, it is readily understood that pixel values of the image could alternatively be quantized row wise. In other words, the pixel values from a subset of pixels are grouped together and then quantized, where the pixels in the subset of pixels are neighbors to each other in the matrix.
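The column-wise scheme can be sketched as follows (hypothetical helper names; the 3-bit uniform alphabet is an illustrative choice). Each column is scanned top to bottom with its own state variable, and the columns could equally be processed in parallel:

```python
import numpy as np

def sigma_delta_columns(X, alphabet):
    """Apply first order Sigma Delta quantization independently to each
    column of the image X, i.e. q_j = Q_SigmaDelta(X_j), j = 1..N."""
    alphabet = np.asarray(alphabet, dtype=float)
    q = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):          # columns in series (FIG. 4A) or in parallel (FIG. 4B)
        u = 0.0                          # fresh state variable per column
        for i in range(X.shape[0]):      # sequential scan down the column
            s = X[i, j] + u
            q[i, j] = alphabet[np.abs(alphabet - s).argmin()]
            u = s - q[i, j]
    return q
```

The accumulated error down each column, cumsum(X_j − q_j), stays bounded by δ/2, matching the 1D stability bound.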

[0040] With continued reference to FIG. 3, the sequence of pixel values is then quantized at 33 using Sigma Delta modulation. In a first embodiment, 1D Sigma Delta quantization is applied to each column of the image. A key question one may ask is the practicality of the proposed adaptive quantizers in commercial cameras. A natural concern is the waiting time. Unlike MSQ, which quantizes each pixel in parallel, Sigma Delta quantization can only be performed sequentially, which seems to inevitably introduce extra waiting time. However, this is not the case, because current cameras already use sequential quantization architectures for consistency, energy and size considerations. More specifically, in current cameras, to reduce the number of ADCs (Analog to Digital Converters) and save energy, the whole image or a column of pixels is assigned to one ADC, which means these pixels need to wait in a queue to be quantized anyway. Thus, to implement the proposed technique, only an additional memory unit is added to the circuit of a camera.

[0041] Quantized values of the image are assembled into a rectangular array at 35 and stored as a final image in a non-transitory computer-readable medium. In some embodiments, the quantized values of the image may optionally be decoded at 34 using a decoder. The decoder is configured to minimize an image norm (e.g., total variation norm) as will be further described below.
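For reference, the anisotropic total variation norm such a decoder would minimize can be computed as below (a sketch; the decoder itself, i.e. the constrained minimization over images, is not shown, and the helper name is hypothetical):

```python
import numpy as np

def tv_norm(X):
    """Anisotropic total variation of an image: the sum of absolute
    finite differences along rows and columns."""
    return np.abs(np.diff(X, axis=0)).sum() + np.abs(np.diff(X, axis=1)).sum()
```

A constant image has zero total variation, while each sharp edge contributes its height times its length, which is why minimizing this norm favors piecewise-constant reconstructions.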

[0042] Although one can apply 1D ΣΔ quantization column by column to an image, doing so is likely to create discontinuities across columns. As images are two-dimensional arrays, a two-dimensional quantization scheme seems more helpful in maintaining continuity along both rows and columns. For a second embodiment, this disclosure proposes a 2D Sigma Delta quantization which can be applied to one or more patches of an image.

[0043] In a nutshell, the key property that defines the first order 1D Sigma Delta quantization map Q_ΣΔ: ℝ^N ∋ y → q ∈ A^N (A is the alphabet) is that there exists a vector u ∈ ℝ^N, the state variable, such that q and u obey:

[0044] (A1) (boundedness/stability): ‖u‖_∞ ≤ C, for some constant C;

[0045] (A2) (adaptivity): u_i = u_{i−1} + y_i − q_i, ∀i, where u_i, y_i, q_i are the i-th components of the vectors u, y and q, respectively;

[0046] (A3) (causality): q depends only on the history of y, that is, q_i = f(y_i, y_{i−1}, . . . , y_1), for any i and some function f.

[0047] This scheme is extended to two dimensions as described below. The quantization map in 2D is defined as Q_ΣΔ: ℝ^{N×N} ∋ y → q ∈ A^{N×N}, the auxiliary variable is u ∈ ℝ^{N×N}, and the properties (A1)-(A3) become:

[0048] (A1′): ‖u‖_max ≤ C (‖·‖_max denotes the entry-wise maximum absolute value of a matrix);

[0049] (A2′): u_{i,j} = u_{i,j−1} + u_{i−1,j} − u_{i−1,j−1} + y_{i,j} − q_{i,j}, which has the matrix representation DuD^T = y − q; and

[0050] (A3′): q_{i,j} = f({y_{i′,j′}}_{i′≤i, j′≤j}).

Provided that the quantization alphabet is large enough, one can show that u and q satisfying (A1′)-(A3′) can be constructed through the recursive updates

q_{i,j} = Q(u_{i,j−1} + u_{i−1,j} − u_{i−1,j−1} + y_{i,j}),   (2.1)

u_{i,j} = u_{i,j−1} + u_{i−1,j} − u_{i−1,j−1} + y_{i,j} − q_{i,j}.

Note that the first row and column are initialized exactly as in the 1D first order sigma delta quantization. With a 1-bit alphabet, there might not exist a pair of u and q obeying (A1′)-(A3′). With an alphabet of two or more bits, the following proposition ensures the existence of a stable 2D Sigma Delta quantization.
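The recursion (2.1) can be sketched directly (the function name is hypothetical, and the raster scan is one valid order satisfying causality (A3′)). Entries of u outside the array are taken as zero, so the first row and column reduce to the 1D scheme:

```python
import numpy as np

def sigma_delta_2d(Y, alphabet):
    """2D Sigma Delta quantization via recursion (2.1):
    q[i,j] = Q(u[i,j-1] + u[i-1,j] - u[i-1,j-1] + y[i,j]),
    u[i,j] = u[i,j-1] + u[i-1,j] - u[i-1,j-1] + y[i,j] - q[i,j]."""
    alphabet = np.asarray(alphabet, dtype=float)
    n, m = Y.shape
    u = np.zeros((n + 1, m + 1))      # zero-padded state; index (i+1, j+1) holds u_{i,j}
    q = np.empty((n, m))
    for i in range(n):
        for j in range(m):
            s = u[i + 1, j] + u[i, j + 1] - u[i, j] + Y[i, j]
            q[i, j] = alphabet[np.abs(alphabet - s).argmin()]
            u[i + 1, j + 1] = s - q[i, j]
    return q, u[1:, 1:]
```

With the alphabet of Proposition 2.1 below (d = 3 bits on [a, b] = [0, 1] gives C = 0.1 and levels {−0.2, 0, 0.2, . . . , 1.2}), the state satisfies ‖u‖_max ≤ C, and DuD^T = y − q holds by construction.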

[0051] Proposition 2.1: For a given 2D array y ∈ [a, b]^{N×N} and bit depth d ≥ 2, there exists an alphabet A such that the u and q generated by (2.1) satisfy (A1′)-(A3′) with

C = (b − a)/(2(2^d − 3)).

The proof of this proposition is as follows. Without loss of generality, assume that for all 1 ≤ i, j ≤ N, a ≤ y_{i,j} ≤ b, with constants a ≤ b. Let

C = (b − a)/(2(2^d − 3)),

and create the alphabet as

A = {a − 2C, a, a + 2C, . . . , b, b + 2C}.

[0052] Then

|A| = (b + 2C − (a − 2C))/(2C) + 1 = 2^d.

Next, use the second principle of induction to show that u generated by (2.1) satisfies ∥u∥.sub.∞≤C.

[0053] Induction hypothesis: if for all the pairs (m, n) such that m≤i, n≤j, m+n<i+j, |u.sub.m,n|≤C, then |u.sub.i,j|≤C.

[0054] Base case: |u.sub.1,1|=|y.sub.1,1−q.sub.1,1|=|y.sub.1,1−Q(y.sub.1,1)|≤C.

[0055] Induction step: if i=1, q.sub.i,j=Q(y.sub.i,j+u.sub.i,j−1); by the induction hypothesis a−C≤y.sub.i,j+u.sub.i,j−1≤b+C, thus |u.sub.i,j|=|y.sub.i,j+u.sub.i,j−1−q.sub.i,j|≤C. The same reasoning follows when j=1.

If i,j≥2, by induction hypothesis a−3C≤y.sub.i,j+u.sub.i,j−1+u.sub.i−1,j−u.sub.i−1,j−1≤b+3C, we also have |u.sub.i,j|=|y.sub.i,j+u.sub.i,j−1+u.sub.i−1,j−u.sub.i−1,j−1−q.sub.i,j|≤C.
Next, we show that the stability constant

C = (b − a)/(2(2^d − 3)),

corresponding to the uniform alphabet

A = {a − 2C, a, a + 2C, . . . , b, b + 2C}

used in the above proposition, is optimal. Thus, it has been shown that for each patch of pixels comprising an image, the pixel values of a given patch can be quantized using a two dimensional generalization of sigma delta modulation.

[0056] Different techniques for sequencing pixels for two dimensional sigma delta quantization are contemplated by this disclosure. One example for sequencing pixels is illustrated in FIG. 5. In this example, a sequence of pixel values is created by sequencing pixel values along anti-diagonals of the matrix, starting from an upper left corner of the given patch and moving to the lower right corner of the given patch. Within each anti-diagonal, the sequence may go from lower left to upper right as seen in FIG. 5. That is, the sequence of pixel values is P11, P21, P12, P31, P22, P13, P32, P23 and P33. Alternatively, within each anti-diagonal, the sequence may go from upper right to lower left. That is, the sequence of pixel values is P11, P12, P21, P13, P22, P31, P23, P32 and P33. In either case, the sequence of pixels may be quantized in series by an ADC as seen in FIG. 5. In other embodiments, pixels along each anti-diagonal can be quantized in parallel. It is also envisioned that pixels may be sequenced and quantized along diagonals of the matrix as well. These sequencing techniques are applied to each patch from the image.
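The anti-diagonal scan of FIG. 5 can be sketched as an index generator (a hypothetical helper, 0-indexed so that P11 corresponds to (0, 0)). Because each update in (2.1) only looks up and to the left, all entries on one anti-diagonal can be quantized in parallel once the previous two anti-diagonals are done:

```python
def antidiagonal_order(n, m):
    """(row, col) indices of an n x m patch along anti-diagonals, upper-left
    to lower-right; within each anti-diagonal, lower-left to upper-right."""
    order = []
    for s in range(n + m - 1):                               # s = i + j labels the anti-diagonal
        for i in range(min(s, n - 1), max(0, s - m + 1) - 1, -1):
            order.append((i, s - i))                         # lower-left to upper-right
    return order
```

For a 3 x 3 patch this yields P11, P21, P12, P31, P22, P13, P32, P23, P33, matching the series order shown in FIG. 5.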

[0057] Proposition 2.2: For fixed bit depth d ≥ 2, the alphabet A for 2D ΣΔ quantization given in Proposition 2.1 is optimal, in the sense that if C̃ is the stability constant of any other d-bit alphabet Ã (not necessarily equally spaced), then it is necessary that C̃ ≥ C. To prove this by contradiction, assume there exists a d-bit alphabet Ã whose stability constant is smaller, i.e., C̃ < C. Let the alphabet Ã be c_1 < c_2 < . . . < c_N, with N = 2^d. Assume c_1 < . . . < c_i < a ≤ c_{i+1} < . . . < c_j ≤ b < c_{j+1} < . . . < c_N; note that there is no restriction on the range of the alphabet, that is, it is possible that a ≤ c_1 or b ≥ c_N. Also, denote the largest interval length in the alphabet within [a, b] as 2Ĩ = max{2I, b − c_j}, where 2I = max{c_{i+2} − c_{i+1}, . . . , c_j − c_{j−1}}. The case d ≥ 3 is easier than d = 2, so the case d ≥ 3 is proved first.

[0058] For d ≥ 3, start by proving that there are at least two elements of the alphabet within [a, b], so that I is well defined. Notice that

C = (b − a)/(2(2^d − 3)) < (b − a)/10;

if there is zero or only one element of Ã between a and b, then one can choose a ≤ y_{1,1} ≤ b such that

|y_{1,1} − Q_Ã(y_{1,1})| ≥ (b − a)/4 > C > C̃,

which leads to a contradiction.

[0059] Next, consider the following cases. In the first case, a ≤ c_1 or b ≥ c_N. Under this assumption one of the following two cases must hold: 1) a ≤ c_1 and c_1 − a ≥ b − c_N, or 2) b − c_N > c_1 − a. A closer look indicates that these two cases are exactly the same upon exchanging the roles of a and b, hence they share the same proof. Without loss of generality, assume case 1 holds: a ≤ c_1 and c_1 − a ≥ b − c_N. Next, specify the following sub-cases:

[0060] (a) c_N ≤ b. Let [c_ℓ, c_{ℓ+1}] be the largest interval in Ã for some ℓ, i.e., c_{ℓ+1} − c_ℓ = 2I. Choose

y_{1,1} = (c_ℓ + c_{ℓ+1})/2 − ϵ,   y_{1,2} = y_{2,1} = c_ℓ + 2ϵ,

with ϵ small enough, such as 10^{−10}(C − C̃), and y_{2,2} = a. This leads to u_{1,1} = I − ϵ, u_{1,2} = u_{2,1} = −I + ϵ; then q_{2,2} = Q_Ã(y_{2,2} + u_{1,2} + u_{2,1} − u_{1,1}) = Q_Ã(a − 3I + 3ϵ), and the quantization error is

|u_{2,2}| = |y_{2,2} + u_{1,2} + u_{2,1} − u_{1,1} − q_{2,2}| = |Q_Ã(a − 3I + 3ϵ) − (a − 3I + 3ϵ)| = c_1 − a + 3I − 3ϵ
  ≥ c_1 − a + (3/2)·(c_N − c_1)/(2^d − 1) − 3ϵ
  ≥ c_1 − a + (3/2)·(b − a − 2(c_1 − a))/(2^d − 1) − 3ϵ
  ≥ 3(b − a)/(2(2^d − 1)) − 3ϵ
  ≥ (b − a)/(2(2^d − 3)) − 3ϵ = C − 3ϵ > C̃.

The second inequality uses the assumption c_1 − a ≥ b − c_N, and the third uses c_1 − a ≥ 0 and d ≥ 3. This contradicts the assumption ‖u‖_max ≤ C̃.

[0061] (b) c_N > b and c_{j+1} − c_j < 3(b − c_j). If 2Ĩ = max{2I, b − c_j} = b − c_j, let

y_{1,1} = c_j + Ĩ − ϵ,   y_{1,2} = y_{2,1} = (c_{j+1} + c_j)/2 − Ĩ + 2ϵ ≤ b,

with sufficiently small ϵ as in (a), and y_{2,2} = a; then

u_{1,1} = Ĩ − ϵ,   u_{1,2} = u_{2,1} = −(c_{j+1} − c_j)/2 + ϵ < −Ĩ + ϵ.

If 2Ĩ = 2I, one can choose y_{1,1}, y_{1,2}, y_{2,1} as in (a) such that u_{1,1} = Ĩ − ϵ, u_{1,2} = u_{2,1} = −Ĩ + ϵ. In both cases, letting y_{2,2} = a, one has y_{2,2} + u_{1,2} + u_{2,1} − u_{1,1} ≤ a − 3Ĩ + 3ϵ < c_1 and q_{2,2} = Q_Ã(y_{2,2} + u_{1,2} + u_{2,1} − u_{1,1}) = c_1, so the quantization error at q_{2,2} is

|u_{2,2}| = |c_1 − (y_{2,2} + u_{1,2} + u_{2,1} − u_{1,1})| ≥ c_1 − a + 3Ĩ − 3ϵ
  ≥ c_1 − a + (3/2)·(b − c_1)/(2^d − 1) − 3ϵ
  ≥ c_1 − a + (3/2)·(b − a − (c_1 − a))/(2^d − 1) − 3ϵ
  ≥ 3(b − a)/(2(2^d − 1)) − 3ϵ
  ≥ (b − a)/(2(2^d − 3)) − 3ϵ = C − 3ϵ > C̃.

This leads to a contradiction.

[0062] (c) c_N > b and c_{j+1} − c_j ≥ 3(b − c_j). If one chooses

y_{1,1} = (b + c_j)/2 − ϵ

with some small ϵ and y_{1,2} = b, then

u_{1,1} = (b − c_j)/2 − ϵ,   u_{1,2} = (3/2)(b − c_j) − ϵ ≤ C̃.

Since this holds for arbitrarily small ϵ, it must be that b − c_j ≤ (2/3)C̃. This gives b − (2/3)C̃ ≤ c_j ≤ b and

2I ≥ (c_j − c_1)/(j − 1) ≥ (b − (2/3)C̃ − c_1)/(2^d − 2),

where the last inequality is due to the assumption c_N > b, so that j ≤ N − 1. As in (a), one can choose y_{1,1}, y_{1,2}, y_{2,1} properly and y_{2,2} = a such that u_{1,1} = I − ϵ, u_{1,2} = u_{2,1} = −I + ϵ, provided that ϵ is small enough. Then the quantization error at q_{2,2} = Q_Ã(a − 3I + 3ϵ) = c_1 is

|u_{2,2}| = |Q_Ã(a − 3I + 3ϵ) − (a − 3I + 3ϵ)| = c_1 − a + 3I − 3ϵ
  ≥ c_1 − a + (3/2)·(b − (2/3)C̃ − c_1)/(2^d − 2) − 3ϵ
  ≥ c_1 − a + (3/2)·(b − a − (b − a)/(3(2^d − 3)) − (c_1 − a))/(2^d − 2) − 3ϵ
  ≥ (3/2)·(b − a − (b − a)/(3(2^d − 3)))/(2^d − 2) − 3ϵ
  ≥ (b − a)/(2(2^d − 3)) − 3ϵ = C − 3ϵ > C̃.

This also leads to a contradiction.

[0063] In the next case, a > c_1 and b < c_N. If a > c_2 and b < c_{N−1}, then one can easily choose a proper a ≤ y_{1,1} ≤ b with quantization error at least

(1/2)·max{2I, c_{i+1} − a, b − c_j} ≥ (b − a)/(2(2^d − 4 + 1)) = C > C̃,

which leads to a contradiction. Therefore, without loss of generality, assume c_1 < a ≤ c_2 and c_2 − a ≥ b − c_{N−1}; similarly as above, specify the following sub-cases:

[0064] (d) c_{N−1} ≤ b. For an arbitrary constant a − 3I < ξ < c_2, one can choose y_{1,1}, y_{1,2}, y_{2,1}, y_{2,2} properly such that ξ = y_{2,2} + u_{1,2} + u_{2,1} − u_{1,1}. If a ≤ ξ < c_2, set y_{1,1} = y_{1,2} = y_{2,1} = c_2 and y_{2,2} = ξ; then u_{1,1} = u_{1,2} = u_{2,1} = 0 and y_{2,2} + u_{1,2} + u_{2,1} − u_{1,1} = ξ. If a − 3I < ξ < a, denote

w = (a − ξ)/3,

so that 0 < w < I. Let [c_ℓ, c_{ℓ+1}] be the largest interval in Ã for some ℓ, i.e., c_{ℓ+1} − c_ℓ = 2I. Choose y_{1,1} = c_ℓ + w, y_{1,2} = y_{2,1} = c_ℓ − 2w and y_{2,2} = a; then u_{1,1} = w, u_{1,2} = u_{2,1} = −w, and one has y_{2,2} + u_{1,2} + u_{2,1} − u_{1,1} = a − 3w = ξ. Hence, whatever c_1 is, one can always obtain the quantization error

max_{a−3I<ξ<c_2} |ξ − Q_Ã(ξ)| ≥ (1/3)(c_2 − (a − 3I)) = (1/3)(c_2 − a) + I
  ≥ (1/3)(c_2 − a) + (1/2)·(c_{N−1} − c_2)/(2^d − 3)
  ≥ (1/3)(c_2 − a) + (1/2)·(b − a − 2(c_2 − a))/(2^d − 3)
  ≥ (b − a)/(2(2^d − 3)) + (1/3 − 1/(2^d − 3))(c_2 − a)
  ≥ (b − a)/(2(2^d − 3)) = C > C̃.

This leads to a contradiction.

[0065] (e) c.sub.N−1>b and c.sub.j+1−c.sub.j≤ 3/2(b−c.sub.j). Similar to (d), first show that one can choose y.sub.1,1, y.sub.1,2, y.sub.2,1, y.sub.2,2 properly to make y.sub.2,2+u.sub.1,2+u.sub.2,1−u.sub.1,1 an arbitrary constant between a−3Ĩ and c.sub.2. If 2Ĩ=2I, the reasoning is the same as in (d); here we discuss the case 2Ĩ=b−c.sub.j. If a≤ξ<c.sub.2, let y.sub.1,1=y.sub.1,2=y.sub.2,1=c.sub.j, y.sub.2,2=ξ; then y.sub.2,2+u.sub.1,2+u.sub.2,1−u.sub.1,1=ξ. If a−3Ĩ<ξ<a, denote w=a−ξ, then 0<w<3Ĩ, and we specify the following sub-cases: i) if 0<w≤Ĩ, let y.sub.1,1=c.sub.j+w, y.sub.1,2=y.sub.2,1=c.sub.j−w, y.sub.2,2=a; then u.sub.1,1=w, u.sub.1,2=u.sub.2,1=0, and y.sub.2,2+u.sub.1,2+u.sub.2,1−u.sub.1,1=a−w=ξ; ii) if Ĩ<w≤2Ĩ, let y.sub.1,1=c.sub.j+w−Ĩ, y.sub.1,2=c.sub.j+1−w, y.sub.2,1=c.sub.j−(w−Ĩ) and y.sub.2,2=a; then u.sub.1,1=w−Ĩ, u.sub.1,2=−Ĩ, u.sub.2,1=0, and y.sub.2,2+u.sub.1,2+u.sub.2,1−u.sub.1,1=a−w=ξ; iii) if 2Ĩ<w<3Ĩ, let y.sub.1,1=c.sub.j+w−2Ĩ, y.sub.1,2=y.sub.2,1=c.sub.j+1−w+Ĩ and y.sub.2,2=a; then u.sub.1,1=w−2Ĩ, u.sub.1,2=u.sub.2,1=−Ĩ, and y.sub.2,2+u.sub.1,2+u.sub.2,1−u.sub.1,1=a−w=ξ.

[0066] Therefore, whatever c.sub.1 is, the worst case quantization error can reach

[00025]
$$\max_{a-3\tilde{I}<\xi<c_2}|\xi - Q_A(\xi)| = \frac{1}{3}\big(c_2-(a-3\tilde{I})\big) = \frac{1}{3}(c_2-a)+\tilde{I} \ge \frac{1}{3}(c_2-a)+\frac{1}{2}\cdot\frac{b-c_2}{2^d-3} \ge \frac{1}{3}(c_2-a)+\frac{1}{2}\cdot\frac{b-a-(c_2-a)}{2^d-3} = \frac{b-a}{2(2^d-3)}+\Big(\frac{1}{3}-\frac{1}{2(2^d-3)}\Big)(c_2-a) \ge \frac{b-a}{2(2^d-3)} = C > \tilde{C}.$$

This leads to a contradiction.

[0067] (f) c.sub.N−1>b and c.sub.j+1−c.sub.j> 3/2(b−c.sub.j). Similarly to (c), one has j≤N−2 and b−c.sub.j≤ 4/3{tilde over (C)}, then

[00026]
$$\max_{a-3I<\xi<c_2}|\xi - Q_A(\xi)| = \frac{1}{3}\big(c_2-(a-3I)\big) = \frac{1}{3}(c_2-a)+I \ge \frac{1}{3}(c_2-a)+\frac{1}{2}\cdot\frac{b-\frac{4}{3}\tilde{C}-c_2}{2^d-4} \ge \frac{1}{3}(c_2-a)+\frac{1}{2}\cdot\frac{b-a-\frac{2(b-a)}{3(2^d-3)}-(c_2-a)}{2^d-4} \ge \frac{b-a}{2(2^d-3)}+\Big(\frac{1}{3}-\frac{1}{2(2^d-4)}\Big)(c_2-a) \ge \frac{b-a}{2(2^d-3)} = C > \tilde{C}.$$

This also leads to a contradiction.

[0068] For the case d=2, there are only 4 elements in the alphabet Ã={c.sub.1, c.sub.2, c.sub.3, c.sub.4} with c.sub.1<c.sub.2<c.sub.3<c.sub.4. Consider the case a≤c.sub.1 or b≥c.sub.4: if there are at least two elements of à within [a, b], the proof follows the same reasoning as for d≥3, which was discussed in (a)-(c). Here we discuss the case that a≤c.sub.1 and there is only one element of à within [a, b], i.e., a≤c.sub.1≤b<c.sub.2<c.sub.3<c.sub.4. In this case, let

[00027]
$$y_{1,1} = \frac{c_1+b}{2},\quad y_{1,2}=y_{2,1}=y_{2,2}=a,\quad\text{then}\quad u_{1,1}=\frac{b-c_1}{2},\quad u_{1,2}=u_{2,1}=a-c_1,$$
$$u_{2,2} = a+2(a-c_1)-\frac{b-c_1}{2} - Q_A\Big(a+2(a-c_1)-\frac{b-c_1}{2}\Big) = 2(a-c_1)-\frac{b-c_1}{2} \le \frac{a-c_1}{2}-\frac{b-c_1}{2} = -C,\quad\text{hence}\quad |u_{2,2}| \ge \frac{b-a}{2} = C > \tilde{C},$$

which leads to a contradiction.

[0069] Next, the two remaining cases are discussed, in which both a>c.sub.1 and b<c.sub.4 hold: c.sub.1<a≤c.sub.2≤b<c.sub.3<c.sub.4 and c.sub.1<a≤c.sub.2<c.sub.3≤b<c.sub.4, corresponding to the cases in which there are 1 or 2 elements of the alphabet within [a, b], respectively.

[0070] For c.sub.1<a≤c.sub.2<c.sub.3≤b<c.sub.4, without loss of generality, assume c.sub.2+c.sub.3≥b+a. Note that one must have c.sub.2−c.sub.1<b−a since

[00028]
$$\tilde{C} < C = \frac{b-a}{2},$$

combining these two inequalities gives c.sub.1+c.sub.3>2a, so a<½(c.sub.1+c.sub.3)<b. Choose some small ϵ and set the first 3×3 entries of y as

[00029]
$$y = \begin{pmatrix} \frac{1}{2}(c_3+c_2)-\epsilon & c_2 & c_2+2\epsilon & \cdots\\ c_2 & c_2 & \frac{1}{2}(c_1+c_3) & \cdots\\ c_2+2\epsilon & \frac{1}{2}(c_1+c_3) & a & \cdots\\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

Provided that ϵ is small enough, one can check that the first 3×3 entries in u are as follows

[00030]
$$u = \begin{pmatrix} \frac{1}{2}(c_3-c_2)-\epsilon & \frac{1}{2}(c_3-c_2)-\epsilon & -\frac{1}{2}(c_3-c_2)+\epsilon & \cdots\\ \frac{1}{2}(c_3-c_2)-\epsilon & \frac{1}{2}(c_3-c_2)-\epsilon & -\frac{1}{2}(c_2-c_1)+\epsilon & \cdots\\ -\frac{1}{2}(c_3-c_2)+\epsilon & -\frac{1}{2}(c_2-c_1)+\epsilon & a-\frac{1}{2}(c_3+c_2)+3\epsilon & \cdots\\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

By assumption, one has c.sub.2+c.sub.3≥b+a, then for small enough ϵ,

[00031]
$$|u_{3,3}| \ge \frac{b-a}{2} - 3\epsilon = C - 3\epsilon > \tilde{C},$$

this leads to a contradiction.

[0071] For c.sub.1<a≤c.sub.2<b≤c.sub.3<c.sub.4, specify the following two cases:

[0072] (i)

[00032]
$$c_2 \ge \frac{b+a}{2},$$

notice that c.sub.2−c.sub.1<b−a, so c.sub.1+c.sub.2>2a; one can choose y.sub.1,1=c.sub.2, y.sub.1,2=y.sub.2,1=½(c.sub.1+c.sub.2)+ϵ for sufficiently small ϵ, and y.sub.2,2=a. Then u.sub.1,1=0, u.sub.1,2=u.sub.2,1=−½(c.sub.2−c.sub.1)+ϵ, Q.sub.A(y.sub.2,2+u.sub.1,2+u.sub.2,1−u.sub.1,1)=Q.sub.A(a−(c.sub.2−c.sub.1)+2ϵ)=c.sub.1, and

[00033]
$$\big|y_{2,2}+u_{1,2}+u_{2,1}-u_{1,1} - Q_A(y_{2,2}+u_{1,2}+u_{2,1}-u_{1,1})\big| = c_2 - a - 2\epsilon \ge \frac{b-a}{2} - 2\epsilon > \tilde{C},$$

which leads to a contradiction.

[0073] (ii)

[00034]
$$c_2 < \frac{b+a}{2},$$

also notice that c.sub.3−c.sub.2<b−a, then one can choose y as follows with some small ϵ,

[00035]
$$y = \begin{pmatrix} \frac{1}{2}(b+c_2)-\epsilon & c_2 & c_2+\frac{1}{2}(c_3-b)+2\epsilon & \cdots\\ c_2 & c_2 & \frac{1}{2}(c_1+c_3) & \cdots\\ c_2+\frac{1}{2}(c_3-b)+2\epsilon & \frac{1}{2}(c_1+c_3) & a & \cdots\\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

Provided that ϵ is small enough, the corresponding u is

[00036]
$$u = \begin{pmatrix} \frac{1}{2}(b-c_2)-\epsilon & \frac{1}{2}(b-c_2)-\epsilon & -\frac{1}{2}(c_3-c_2)+\epsilon & \cdots\\ \frac{1}{2}(b-c_2)-\epsilon & \frac{1}{2}(b-c_2)-\epsilon & -\frac{1}{2}(c_2-c_1)+\epsilon & \cdots\\ -\frac{1}{2}(c_3-c_2)+\epsilon & -\frac{1}{2}(c_2-c_1)+\epsilon & -\frac{c_2+b}{2}+a+3\epsilon & \cdots\\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

Since c.sub.2≥a, then

[00037]
$$|u_{3,3}| = \frac{c_2+b}{2} - a - 3\epsilon \ge \frac{b-a}{2} - 3\epsilon > \tilde{C},$$

which leads to a contradiction.

[0074] The quantization time of the 2D scheme (2.1) is O(N), because for a fixed t∈{2, 3, . . . , 2N}, all u.sub.i,j with i+j=t (the points on an anti-diagonal) can be computed in parallel. The matrix representation of (2.1) is


y−q=DuD.sup.T.
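The anti-diagonal recursion and the identity y−q=DuDᵀ can be checked numerically. Below is a minimal NumPy sketch of first-order 2D Sigma Delta quantization; the mid-tread uniform quantizer is an illustrative assumption (the alphabet used in hardware may differ), not the patent's exact circuit.

```python
import numpy as np

def sigma_delta_2d(y, delta=0.25):
    """First-order 2D Sigma Delta quantization, a sketch of scheme (2.1):
    q[i,j] = Q(y[i,j] + u[i-1,j] + u[i,j-1] - u[i-1,j-1])
    u[i,j] = u[i-1,j] + u[i,j-1] - u[i-1,j-1] + y[i,j] - q[i,j]
    with zero initial conditions for u outside the index range."""
    n = y.shape[0]
    u = np.zeros((n + 1, n + 1))                  # padded: u[0,:] and u[:,0] are the zero boundary
    q = np.zeros_like(y)
    Q = lambda v: delta * np.round(v / delta)     # mid-tread uniform quantizer (assumed alphabet)
    for i in range(n):
        for j in range(n):
            s = u[i, j + 1] + u[i + 1, j] - u[i, j]   # u[i-1,j] + u[i,j-1] - u[i-1,j-1]
            q[i, j] = Q(y[i, j] + s)
            u[i + 1, j + 1] = s + y[i, j] - q[i, j]
    return q, u[1:, 1:]

# Verify the matrix identity y - q = D u D^T
n = 8
rng = np.random.default_rng(0)
y = rng.uniform(0, 1, (n, n))
q, u = sigma_delta_2d(y)
D = np.eye(n) - np.eye(n, k=-1)                   # 1s on the diagonal, -1s on the sub-diagonal
assert np.allclose(y - q, D @ u @ D.T)
```

The double loop runs serially here for clarity; as the text notes, each anti-diagonal i+j=t depends only on the previous two anti-diagonals and can be computed in parallel.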

It is easy to extend the first-order quantization to higher orders. For r≥1, the r-th order quantization obeys the matrix-form recursive formula


y−q=D.sup.ru(D.sup.r).sup.T.

[0075] Throughout, it has been assumed that the image X to be quantized and reconstructed is an N×N matrix (the derivation for rectangular matrices is similar), X=(x.sub.1, x.sub.2, . . . , x.sub.N)=(x.sub.1, x.sub.2, . . . , x.sub.N).sup.T is the column-wise and row-wise decomposition of X, D is the N×N difference matrix with 1s on the diagonal and −1s on the sub-diagonal, and D.sub.1 is the circulant difference matrix with an extra −1 in the upper right corner. Denote ∥·∥.sub.1 as the entry-wise ℓ.sub.1-norm, and ∥·∥.sub.∞ as the entry-wise ℓ.sub.∞-norm. Also, ℱ.sub.k denotes the discrete Fourier transform operator with frequency k, and F is the N×N DFT matrix. Let F.sub.L contain the rows of F with frequencies within {−L, −L+1, . . . , L}, and P.sub.L=F.sub.L*F.sub.L.
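For concreteness, the operators defined above can be instantiated numerically. The NumPy sketch below is illustrative; the unitary DFT normalization is an assumption made here so that P.sub.L is an orthogonal projection.

```python
import numpy as np

N = 8
# Forward difference matrix: 1s on the diagonal, -1s on the sub-diagonal
D = np.eye(N) - np.eye(N, k=-1)
# Circulant difference matrix: extra -1 in the upper-right corner
D1 = D.copy()
D1[0, -1] = -1

# DFT matrix (unitary normalization assumed) and low-frequency projection P_L
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
L = 2
freqs = np.fft.fftfreq(N, d=1 / N).astype(int)   # frequencies 0,1,...,-1 in FFT order
F_L = F[np.abs(freqs) <= L]                      # the 2L+1 rows with |k| <= L
P_L = F_L.conj().T @ F_L

assert np.allclose(P_L @ P_L, P_L)               # P_L is a projection
assert np.allclose(D1.sum(axis=1), 0)            # each circulant-difference row sums to 0
```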

[0076] A general assumption is that the images satisfy some sparsity property in their gradients. To be more precise, consider three classes of images, each satisfying one of the following three assumptions.

[0077] Assumption 2.1 (β.sup.th order sparsity condition)—Suppose X∈custom-character.sup.N,N is an image whose columns or rows are piece-wise constant or piece-wise linear. Explicitly, the number of nonzero 1.sup.st order or 2.sup.nd order differences in each column or row is smaller than the number of pixels: for β=1 or 2, fix s<N,


∥(D.sup.β).sup.Tx.sub.i∥.sub.0≤s, ∀i=1, 2, . . . , N, or ∥x.sub.j.sup.TD.sup.β∥.sub.0≤s, ∀j=1, 2, . . . , N.

If β=1, the columns or rows of the image X are piece-wise constant; if β=2, they are piece-wise linear.
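A direct way to test Assumption 2.1 numerically is to count the nonzero β-th order differences in every column and every row. The helper below is an illustrative sketch (its name and interface are not from the patent):

```python
import numpy as np

def satisfies_sparsity(X, beta, s):
    """Check the beta-th order sparsity condition of Assumption 2.1:
    every column (or every row) of X has at most s nonzero
    beta-th order differences."""
    N = X.shape[0]
    D = np.eye(N) - np.eye(N, k=-1)
    Db = np.linalg.matrix_power(D, beta)
    col_ok = np.all(np.count_nonzero(Db.T @ X, axis=0) <= s)   # ||(D^beta)^T x_i||_0 <= s
    row_ok = np.all(np.count_nonzero(X @ Db, axis=1) <= s)     # ||x_j^T D^beta||_0 <= s
    return col_ok or row_ok

# A piece-wise constant image: each column has a single jump (beta = 1, small s)
X = np.vstack([np.zeros((4, 8)), np.ones((4, 8))])
assert satisfies_sparsity(X, beta=1, s=2)
```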

[0078] Assumption 2.2—Both columns and rows in X are piece-wise constant or piece-wise linear. Explicitly, for β=1 or 2, fix s<2N.sup.2,


∥(D.sup.β).sup.TX∥.sub.0+∥XD.sup.β∥.sub.0≤s.

[0079] Assumption 2.3 (β.sup.th order minimum separation condition)—X satisfies Assumption 2.1. In addition, the β.sup.th order differences of X in each column or row satisfy the Λ.sub.M-minimum separation condition defined below for some small constant M<<N. Explicitly, this means for β=1 or 2, {D.sub.1.sup.βx.sub.i}.sub.i=1,2, . . . , N or {x.sub.j.sup.T(D.sub.1.sup.β).sup.T}.sub.j=1,2, . . . , N satisfy the Λ.sub.M-minimum separation condition; note that here D.sub.1 is the circulant difference matrix.

[0080] Definition 2.1 (Λ.sub.M-minimum separation condition)—For a vector x∈custom-character.sup.N, let S⊂{1, 2, . . . , N} be its support set; we say that x satisfies the Λ.sub.M-minimum separation condition if

[00038]
$$\min_{s,s'\in S,\; s\ne s'} \frac{1}{N}|s-s'| \ge \frac{2}{M}, \quad (2.2)$$

where |·| is the wrap-around distance. Define C(T, Λ.sub.M) as the space of trigonometric polynomials of degree M on the set T, i.e.,


C(T, Λ.sub.M)={f∈C.sup.∞(T): f(x)=Σ.sub.k=-M.sup.Ma.sub.ke.sup.i2πkx}.

The proposed decoders and their error bounds.

[0081] Let Q be encoder Q.sub.col or Q.sub.2D which will be specified in each case, and X be the image. The proposed decoders for images satisfying different assumptions can be unified in the following framework


{circumflex over (X)}=arg min.sub.Z f(Z,β) s.t. ρ(Z,r)≤c.  (2.3)

Here β is 1 or 2 depending on whether the image is assumed to be piece-wise constant or piece-wise linear, f(Z,β) is some loss function that encourages sparsity in the gradient under various assumptions, r is the order of Sigma Delta quantization, and ρ(Z,r)≤c is the feasibility constraint determined by the quantization scheme. Under this framework, let {circumflex over (X)} be a solution of (2.3), one can obtain reconstruction error bounds of the following type


∥{circumflex over (X)}−X∥.sub.F≤C(β,r,N,δ),  (2.4)

where N is the size of the image, and δ is the alphabet step-size.

[0082] Now specify the explicit form of the optimization framework and error bound for each class of images.

[0083] Class 1: X satisfies Assumption 2.1 with order β=1 or 2 and sparsity s, the encoder is r.sup.th order (r≥β) Q.sub.col with an alphabet step-size δ, use the following optimization for reconstruction with


{circumflex over (X)}=arg min.sub.Z∥(D.sup.β).sup.TZ∥.sub.1 s.t. ∥D.sup.−r(Z−Q.sub.col(X))∥.sub.∞≤δ/2.  (2.5)

Theorem 3.1 shows that the reconstruction error is


∥{circumflex over (X)}−X∥.sub.F≤C√{square root over (sN)}δ.

[0084] Class 2: X satisfies Assumption 2.2 with order β=1 and sparsity s, the encoder is Q.sub.2D with alphabet step-size δ, use the following optimization


{circumflex over (X)}=arg min.sub.Z∥D.sup.TZ∥.sub.1+∥ZD∥.sub.1 s.t. ∥D.sup.−1(Z−Q.sub.2D(X))D.sup.−T∥.sub.∞≤δ/2.  (2.6)

Theorem 3.3 shows that the reconstruction error is bounded by


∥{circumflex over (X)}−X∥.sub.F≤C√{square root over (s)}δ.

Class 3: X satisfies Assumption 2.3 with order β=1 or 2 and sparsity s«N, the encoder is r.sup.th order Sigma Delta quantization applied to each column: Q.sub.col with alphabet spacing δ, r≥β. Here we define a new alphabet à with smaller step-size

[00039]
$$\tilde{\delta} := \frac{2\delta}{(2N)^r}$$

to quantize the last r entries in each column:


Ã:={a, a+{tilde over (δ)}, a+2{tilde over (δ)}, . . . , a+K{tilde over (δ)}, b}, K=max{j: a+j{tilde over (δ)}<b}.
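The refined boundary alphabet can be constructed directly; the sketch below is illustrative (the tie-breaking convention in K is an assumption):

```python
import numpy as np

def refined_alphabet(a, b, delta, r, N):
    """Boundary alphabet with step delta_tilde = 2*delta/(2N)^r, as in Class 3.
    K is the largest j with a + j*delta_tilde < b (tie-breaking assumed)."""
    dt = 2 * delta / (2 * N) ** r
    K = int(np.ceil((b - a) / dt)) - 1
    return np.concatenate([a + dt * np.arange(K + 1), [b]]), dt

A, dt = refined_alphabet(a=0.0, b=1.0, delta=0.5, r=2, N=4)
assert abs(dt - 2 * 0.5 / 8 ** 2) < 1e-12    # delta_tilde = 2*delta/(2N)^r
```

Since the step shrinks like (2N)^{−r}, encoding one boundary pixel costs O(r log N) bits, consistent with the O(log N) boundary budget stated next.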

The total number of boundary bits is of order O(log N), which is negligible compared to the O(N) bits needed for the interior pixels. Hence the following feasibility constraint holds:

[00040]
$$\Big\|\big(D^{-r}(X-Q_{\mathrm{col}}(X))\big)_{N-r:N-1,\,:}\Big\|_\infty \le \Big(\frac{1}{2N}\Big)^r\delta.$$

Then we use the following optimization to obtain the reconstructed image {tilde over (X)}:

[00041]
$$\tilde{X} = \arg\min_Z \|D_1^\beta Z\|_1 \ \text{ subject to }\ \begin{cases}\|D^{-r}(Z-Q_{\mathrm{col}}(X))\|_\infty \le \frac{\delta}{2},\\[2pt] \big\|\big(D^{-r}(Z-Q_{\mathrm{col}}(X))\big)_{N-r:N-1,\,:}\big\|_\infty \le \big(\frac{1}{2N}\big)^r\delta.\end{cases} \quad (2.7)$$

Here (D.sup.−r(Z−Q.sub.col(X))).sub.N−r:N−1,: refers to the last r rows of D.sup.−r(Z−Q.sub.col(X)). The error bound is

[00042]
$$\|\tilde{X}-X\|_F \le C\,\frac{M^{r+\beta-2}}{N^{r-3}}\,\delta.$$

[0085] In MSQ, the worst-case quantization error for each pixel is δ/2. Since the pixels are quantized independently, the total quantization error of the N×N image in Frobenius norm is Nδ/2. Similarly, when using ΣΔ quantizers (Q.sub.col or Q.sub.2D) and decoding with the following naive decoder,


{circumflex over (X)}=Find Z s.t. ∥D.sup.−r(Z−Q.sub.col(X))∥.sub.∞≤δ/2,

the worst-case error is again O(Nδ). This indicates that the TV-norm penalty in the proposed decoders (2.5), (2.7) and (2.6) plays a key role in reducing the error to O(√{square root over (sN)}δ) and O(√{square root over (s)}δ), respectively.

[0086] First, consider images with no minimum separation (Class 1), where the image X satisfies Assumption 2.1 and the encoder is Q.sub.col column by column quantization. With this encoder, the decoder (2.5) can be decoupled into columns, with the reconstructions done in parallel.

[0087] For each column x∈custom-character.sup.N, let q be its r.sup.th order Sigma Delta quantization, i.e., q=Q.sup.ΣΔ,r(x), r≥β,β=1,2. Then the decoder (2.5) reduces to


{circumflex over (x)}=arg min.sub.z∥(D.sup.β).sup.Tz∥.sub.1 subject to ∥D.sup.−r(z−q)∥.sub.∞≤δ/2.  (3.1)

Here D is the finite difference matrix and δ is the quantization step-size. Therefore (D.sup.β).sup.Tz represents the 1.sup.st order or 2.sup.nd order discrete derivatives of z. The ℓ.sub.1-norm is used to promote the sparsity of the derivatives corresponding to edges. The ball constraint is a well-known feasibility constraint for Sigma Delta quantization.
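As an illustration, decoder (3.1) with β=r=1 is an ℓ1 minimization under an ℓ∞ ball constraint, which can be cast as a linear program. The sketch below (NumPy/SciPy; the greedy first-order quantizer with a mid-tread alphabet is an assumption for the example, not the patent's exact encoder) encodes one column and decodes it with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def sigma_delta_column(x, delta):
    """First-order Sigma Delta quantization of one column (greedy rule)."""
    u, q = 0.0, np.zeros_like(x)
    for i, xi in enumerate(x):
        q[i] = delta * np.round((xi + u) / delta)   # mid-tread quantizer (assumed alphabet)
        u += xi - q[i]                               # running error u_i = (D^{-1}(x - q))_i
    return q

def decode_column(q, delta):
    """Decoder (3.1) with beta = r = 1, written as a linear program:
    min ||D^T z||_1  s.t.  ||D^{-1}(z - q)||_inf <= delta / 2."""
    N = len(q)
    D = np.eye(N) - np.eye(N, k=-1)
    Dinv = np.tril(np.ones((N, N)))                  # D^{-1} is the cumulative-sum operator
    # Variables [z; t] with |(D^T z)_i| <= t_i; minimize sum(t).
    c = np.concatenate([np.zeros(N), np.ones(N)])
    A_ub = np.block([[D.T, -np.eye(N)],
                     [-D.T, -np.eye(N)],
                     [Dinv, np.zeros((N, N))],
                     [-Dinv, np.zeros((N, N))]])
    b_ub = np.concatenate([np.zeros(2 * N),
                           Dinv @ q + delta / 2,
                           -(Dinv @ q) + delta / 2])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * N))
    return res.x[:N]

# A piece-wise constant column: x itself is feasible, so the TV of the
# reconstruction can be no larger than the TV of x.
x = np.concatenate([0.3 * np.ones(16), 0.7 * np.ones(16)])
q = sigma_delta_column(x, delta=0.2)
z = decode_column(q, delta=0.2)
D = np.eye(32) - np.eye(32, k=-1)
assert np.abs(D.T @ z).sum() <= np.abs(D.T @ x).sum() + 1e-4
```

Because the encoder Q.sub.col acts column by column, the same LP can be solved independently (and in parallel) for every column of the image.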

[0088] The following theorem provides the error bound of this decoder.

[0089] Theorem 3.1—For first order or second order Sigma Delta quantization, i.e., β=1 or 2, r≥β, assume the support of (D.sup.β).sup.Tx has cardinality s, and {circumflex over (x)} is a solution to (3.1), then


∥{circumflex over (x)}−x∥.sub.2≤C√{square root over (s)}δ.  (3.2)

[0090] Remark 3.2—The above error bound is for each column. Putting the error of all columns together as {circumflex over (X)}, one has


∥{circumflex over (X)}−X∥.sub.F≤C√{square root over (sN)}δ.

[0091] Denote h=(D.sup.β).sup.T({circumflex over (x)}−x), assume the support set of (D.sup.β).sup.Tx is S with cardinality s, the complement set of S is S.sup.c. Since {circumflex over (x)} is a solution to (3.1), one has


∥(D.sup.β).sup.Tx∥.sub.1≥∥(D.sup.β).sup.Tx+h∥.sub.1≥∥(D.sup.β).sup.Tx∥.sub.1−∥h.sub.S∥.sub.1+∥h.sub.Sc∥.sub.1,

which gives ∥h.sub.S∥.sub.1≥∥h.sub.Sc∥.sub.1, one can bound the custom-character.sub.1-norm of the misfit h as


∥h∥.sub.1=∥h.sub.S∥.sub.1+∥h.sub.Sc∥.sub.1≤2∥h.sub.S∥.sub.1≤2.sup.β+r+1sδ,

where the last inequality is due to


∥h∥.sub.∞=∥(D.sup.β).sup.T({circumflex over (x)}−x)∥.sub.∞=∥(D.sup.β).sup.TD.sup.rD.sup.−r({circumflex over (x)}−x)∥.sub.∞≤2.sup.β+rδ.

Then the following properties hold


∥(D.sup.β).sup.T({circumflex over (x)}−x)∥.sub.1≤2.sup.β+r+1sδ, ∥D.sup.−β({circumflex over (x)}−x)∥.sub.∞=∥D.sup.r−βD.sup.−r({circumflex over (x)}−x)∥.sub.∞≤2.sup.r−βδ.

Note that the inequalities above are bounds in the ℓ.sub.1-norm and the ℓ.sub.∞-norm, which are dual to each other; one can therefore bound the reconstruction error ∥{circumflex over (x)}−x∥.sub.2 using


⟨{circumflex over (x)}−x, {circumflex over (x)}−x⟩=⟨(D.sup.β).sup.T({circumflex over (x)}−x), D.sup.−β({circumflex over (x)}−x)⟩≤2.sup.2r+1sδ.sup.2.

That is, ∥{circumflex over (x)}−x∥.sub.2≤C√{square root over (s)}δ.

[0092] Second, consider Class 2, where the image X satisfies Assumption 2.2 and the encoder is Q.sub.2D. For simplicity, assume the patch number is 1 (there is only one patch identical to the original image). Results for larger patch numbers are similar. The following theorem establishes the error bound for 2D reconstruction of X from its quantization Q.sub.2D(X) using (2.6).

[0093] Theorem 3.3—If the original matrix X satisfies Assumption 2.2, let {circumflex over (X)} be a solution to (2.6); then


∥{circumflex over (X)}−X∥.sub.F≤C√{square root over (s)}δ.  (3.3)

Denote H.sub.1=D.sup.T({circumflex over (X)}−X), H.sub.2=({circumflex over (X)}−X)D, S.sub.A and S.sub.B are the support sets of D.sup.TX and XD respectively, the corresponding complement sets are S.sub.A.sup.C and S.sub.B.sup.C, respectively. By assumption, |S.sub.A|+|S.sub.B|≤s. Also notice that


∥D.sup.TX∥.sub.1+∥XD∥.sub.1≥∥D.sup.T{circumflex over (X)}∥.sub.1+∥{circumflex over (X)}D∥.sub.1=∥D.sup.TX+H.sub.1∥.sub.1+∥XD+H.sub.2∥.sub.1≥∥D.sup.TX∥.sub.1−∥(H.sub.1).sub.S.sub.A∥.sub.1+∥(H.sub.1).sub.S.sub.A.sub.C∥.sub.1+∥XD∥.sub.1−∥(H.sub.2).sub.S.sub.B∥.sub.1+∥(H.sub.2).sub.S.sub.B.sub.C∥.sub.1,

which gives ∥(H.sub.1).sub.S.sub.A.sub.C∥.sub.1+∥(H.sub.2).sub.S.sub.B.sub.C∥.sub.1≤∥(H.sub.1).sub.S.sub.A∥.sub.1+∥(H.sub.2).sub.S.sub.B∥.sub.1, hence


∥H.sub.1∥.sub.1+∥H.sub.2∥.sub.1≤2(∥(H.sub.1).sub.S.sub.A∥.sub.1+∥(H.sub.2).sub.S.sub.B∥.sub.1)≤Csδ.

[0094] Here the last inequality is due to ∥H.sub.1∥.sub.∞=∥D.sup.TD(D.sup.−1({circumflex over (X)}−X)D.sup.−T)D.sup.T∥.sub.∞≤8δ, similarly, ∥H.sub.2∥.sub.∞≤8δ. Then one has the following constraints:


∥D.sup.T({circumflex over (X)}−X)∥.sub.1≤Csδ, ∥D.sup.−1({circumflex over (X)}−X)∥.sub.∞≤2δ.

[0095] Similar to the proof of Theorem 3.1, the inequalities above lead to

[00043]
$$\|\hat{X}-X\|_F = \Big\langle D^T(\hat{X}-X),\; D^{-1}(\hat{X}-X)\Big\rangle^{\frac{1}{2}} \le C\sqrt{s}\,\delta,$$

then (3.3) also holds.

[0096] Third, consider reconstruction of images with minimum separation condition (Class 3), where the image X satisfies Assumption 2.3. Same as in Class 1, use Q.sub.col (column by column quantization) for encoding.

[0097] For x∈custom-character.sup.N, q=Q.sup.ΣΔ,r(x), β=1 or 2, r≥β, denote v as the last r entries of D.sup.−r(x−q); then (2.7) reduces to

[00044]
$$\hat{x} = \arg\min_z\|D_1^\beta z\|_1 \quad\text{s.t.}\quad \|D^{-r}(z-q)\|_\infty \le \delta/2,\qquad \big\|\big(D^{-r}(z-q)\big)_{N-r:N-1}-v\big\|_\infty < \Big(\frac{1}{2N}\Big)^r\delta. \quad (3.4)$$

There are two differences between this decoder and that for Class 1: 1) here D.sub.1 is the circulant difference matrix instead of the forward difference matrix, which ensures that the separation condition is satisfied at the boundary; and 2) in order for the extra separation assumption to improve the error bound over Class 1, one needs to use a few more bits to encode the boundary pixels. The total number of boundary bits is of order O(log N), which is negligible compared to the O(N) bits needed for the interior pixels.

[0098] Theorem 3.4—For high order ΣΔ quantization, i.e., r≥2, assume D.sub.1.sup.βx satisfies the Λ.sub.M-minimum separation condition, and {circumflex over (x)} is a solution to (3.4); then for arbitrary resolution L≤N/2, the following error bound holds:

[00045]
$$\|P_L(\hat{x}-x)\|_\infty \le C\,\frac{L^2}{N^r}\,M^{r+\beta-2}\,\delta. \quad (3.5)$$

Here P.sub.L is the projection onto the low-frequency domain with bandwidth L, i.e., P.sub.L=F.sub.L.sup.*F.sub.L with F.sub.L containing the rows of the DFT matrix with frequencies within {−L, −L+1, . . . , L}.

[0099] Again, since the image X is sliced into columns and reconstructed individually, if X satisfies Assumption 2.3, let {circumflex over (X)} be the reconstructed image concatenated from individual columns which are solutions to (3.4), the infinity norm error bound for decoder (2.7) is then

[00046]
$$\|P_L(\hat{X}-X)\|_\infty \le C\,\frac{L^2}{N^r}\,M^{r+\beta-2}\,\delta,$$

note that ∥·∥.sub.∞ is the element-wise ℓ.sub.∞-norm. Substituting L=N/2, one obtains

[00047]
$$\|\hat{X}-X\|_F \le C\,\frac{M^{r+\beta-2}}{N^{r-3}}\,\delta.$$

[0100] Consider using decoder (3.4) only when the gradients of each column satisfy the minimum separation condition, i.e., for all i=1, 2, . . . , N, D.sub.1.sup.βx.sub.i∈custom-character.sup.N satisfies the minimum separation condition with some small constant M«N. In this case, the worst-case ℓ.sub.∞-norm error bound for arbitrary resolution approaches 0 as r→∞.

[0101] In order to prove Theorem 3.4, super-resolution analysis is performed within the Sigma Delta reconstructions, and the analysis is adjusted to fit a discrete setting. First, one needs the following lemma.

[0102] Lemma 3.7—For feasible {circumflex over (x)}∈custom-character.sup.N which satisfies the constraints in (3.4), the following inequality holds:


∥P.sub.MD.sub.1.sup.β({circumflex over (x)}−x)∥.sub.2≲(M/N).sup.r+β√{square root over (N)}δ.

Recall that for z∈custom-character.sup.N, discrete Fourier transform

[00048]
$$\mathcal{F}_kz = \sum_{n=0}^{N-1}z_ne^{-i\frac{2\pi kn}{N}}.$$

For nonzero frequency k≠0, denote

[00049]
$$\alpha = \frac{1}{1-e^{-i\frac{2\pi k}{N}}},$$

then one has

[00050]
$$\mathcal{F}_kD_1z = \sum_{n=0}^{N-1}(D_1z)_ne^{-i\frac{2\pi kn}{N}} = \big(1-e^{-i\frac{2\pi k}{N}}\big)\mathcal{F}_kz = \alpha^{-1}\mathcal{F}_kz.$$
Also,
$$\mathcal{F}_kD^{-1}z = \sum_{n=0}^{N-1}\Big(\sum_{j=0}^{n}z_j\Big)e^{-i\frac{2\pi kn}{N}} = \sum_{n=0}^{N-1}z_n\sum_{j=n}^{N-1}e^{-i\frac{2\pi kj}{N}} = \frac{1}{1-e^{-i\frac{2\pi k}{N}}}\sum_{n=0}^{N-1}z_n\big(e^{-i\frac{2\pi kn}{N}}-1\big) = \alpha\mathcal{F}_kz - \alpha(D^{-1}z)_{N-1}.$$

[0103] Similarly,


$$\mathcal{F}_kD^{-2}z = \alpha\mathcal{F}_kD^{-1}z - \alpha(D^{-2}z)_{N-1} = \alpha^2\mathcal{F}_kz - \alpha^2(D^{-1}z)_{N-1} - \alpha(D^{-2}z)_{N-1},\qquad \mathcal{F}_kD^{-3}z = \alpha^3\mathcal{F}_kz - \alpha^3(D^{-1}z)_{N-1} - \alpha^2(D^{-2}z)_{N-1} - \alpha(D^{-3}z)_{N-1}.$$

[0104] More generally, for β=1 or 2, r≥2,

[00051]
$$\mathcal{F}_kD^{-r}z = \alpha^r\mathcal{F}_kz - \alpha^r(D^{-1}z)_{N-1} - \alpha^{r-1}(D^{-2}z)_{N-1} - \cdots - \alpha(D^{-r}z)_{N-1} = \alpha^{r+\beta}\mathcal{F}_kD_1^\beta z - \alpha^r(D^{-1}z)_{N-1} - \alpha^{r-1}(D^{-2}z)_{N-1} - \cdots - \alpha(D^{-r}z)_{N-1}.$$

[0105] Multiplying α.sup.−(r+β) on both sides and rearranging the terms gives

[00052]
$$\mathcal{F}_kD_1^\beta z = \big(1-e^{-i\frac{2\pi k}{N}}\big)^{r+\beta}\mathcal{F}_kD^{-r}z + \big(1-e^{-i\frac{2\pi k}{N}}\big)^{\beta}(D^{-1}z)_{N-1} + \big(1-e^{-i\frac{2\pi k}{N}}\big)^{\beta+1}(D^{-2}z)_{N-1} + \cdots + \big(1-e^{-i\frac{2\pi k}{N}}\big)^{\beta+r-1}(D^{-r}z)_{N-1}.$$

Note that for k=0, custom-character.sub.0D.sub.1.sup.βz=Σ.sub.n=0.sup.N−1(D.sub.1.sup.βz).sub.n=0. Then the equation above holds for all z∈custom-character.sup.N and integer k with 0≤|k|≤N/2, denote Λ∈custom-character.sup.2M+1,2M+1 as the diagonal matrix with diagonal entries being

[00053]
$$1-e^{-i\frac{2\pi k}{N}},\quad -M\le k\le M,$$

obtain the matrix form of the equations above


$$F_MD_1^\beta z = \Lambda^{r+\beta}F_MD^{-r}z + \Lambda^{\beta}(D^{-1}z)_{N-1}\mathbb{1} + \Lambda^{\beta+1}(D^{-2}z)_{N-1}\mathbb{1} + \cdots + \Lambda^{\beta+r-1}(D^{-r}z)_{N-1}\mathbb{1} = \Lambda^{r+\beta}F_MD^{-r}z + \sum_{\lambda=1}^{r}\Lambda^{\beta+\lambda-1}(D^{-\lambda}z)_{N-1}\mathbb{1}.$$

[0106] Recall that F.sub.M contains the rows of the DFT matrix with frequencies within {−M, −M+1, . . . , M}. Multiplying

[00054]
$$\frac{1}{N}F_M^*$$

on both sides and replacing z with {circumflex over (x)}−x gives

[00055]
$$P_MD_1^\beta(\hat{x}-x) = \frac{1}{N}F_M^*\Lambda^{r+\beta}F_MD^{-r}(\hat{x}-x) + \sum_{\lambda=1}^{r}\frac{1}{N}F_M^*\Lambda^{\beta+\lambda-1}\big(D^{-\lambda}(\hat{x}-x)\big)_{N-1}\,\mathbb{1}.$$

Note that

[00056]
$$\Big|1-e^{-i\frac{2\pi k}{N}}\Big| \le \frac{2\pi|k|}{N} \le \frac{2\pi M}{N},\quad\text{hence}\quad \Big\|\frac{1}{N}F_M^*\Lambda\Big\|_2 \lesssim \frac{1}{\sqrt{N}}\Big(\frac{M}{N}\Big),$$

the following error bound in custom-character.sub.2 norm holds

[00057]
$$\|P_MD_1^\beta(\hat{x}-x)\|_2 \le \Big\|\frac{1}{N}F_M^*\Lambda^{r+\beta}F_M\Big\|_2\,\|D^{-r}(\hat{x}-x)\|_2 + \sum_{\lambda=1}^{r}\Big\|\frac{1}{N}F_M^*\Lambda^{\beta+\lambda-1}\Big\|_2\,\sqrt{2M}\,\big|\big(D^{-\lambda}(\hat{x}-x)\big)_{N-1}\big| \lesssim \Big(\frac{M}{N}\Big)^{r+\beta}\sqrt{N}\,\delta + \sum_{\lambda=1}^{r}\Big(\frac{M}{N}\Big)^{\beta+\lambda-1}2^{\lambda}\cdot 2^{r}\Big(\frac{1}{2N}\Big)^{r}\delta \lesssim \Big(\frac{M}{N}\Big)^{r+\beta}\sqrt{N}\,\delta.$$

Therefore the low-frequency error P.sub.MD.sub.1.sup.β({circumflex over (x)}−x) decreases with the ΣΔ quantization order r. Denote h=D.sub.1.sup.β({circumflex over (x)}−x), and divide h into two parts based on whether the location of each entry is within a neighborhood of the support of D.sub.1.sup.βx.

[0107] For simplicity of proof, one can view the vectors x,{circumflex over (x)},h∈custom-character.sup.N as signals on [0,1] sampled at grid t.sub.n=n/N,n=0,1, . . . N−1. For D.sub.1.sup.βx satisfying Λ.sub.M-minimum separation condition with support set S={ξ.sub.1,ξ.sub.2, . . . , ξ.sub.s}⊏[0,1], define


S.sub.M(j)={x∈[0,1]:|x−ξ.sub.j|≤0.16M.sup.−1}, j=1,2, . . . , s,

and


S.sub.M=∪.sub.j=1.sup.sS.sub.M(j), S.sub.M.sup.c=[0,1]\S.sub.M.

Then the following lemma holds.

[0108] Lemma 3.8—If D.sub.1.sup.βx satisfies the Λ.sub.M-minimum separation condition, then with the definitions above, there exists a constant C>0 such that the following hold:


$$\sum_{t_n\in S_M^c}|h_n| \le C\sqrt{N}\,\|P_Mh\|_2, \quad (3.6)$$


$$\sum_j\sum_{t_n\in S_M(j)}|h_n|\,|t_n-\xi_j|^2 \le CM^{-2}\sqrt{N}\,\|P_Mh\|_2. \quad (3.7)$$

Denote the restriction of h to a set S as P.sub.Sh, and write P.sub.Sh(ξ.sub.j)=|P.sub.Sh(ξ.sub.j)|e.sup.iϕj, j=1, 2, . . . , s. By Lemma 6.1, taking v.sub.j=e.sup.iϕj, j=1, 2, . . . , s, there exist f(t)=Σ.sub.k=−M.sup.Mc.sub.ke.sup.i2πkt defined on [0,1] and constants C.sub.1, C.sub.2 such that


f(ξ.sub.j)=e.sup.iϕj, j=1, 2, . . . , s,  (3.8)


|f(t)|≤1−C.sub.1M.sup.2(t−ξ.sub.j).sup.2,t∈S.sub.M(j),  (3.9)


|f(t)|<1−C.sub.2,t∈S.sub.M.sup.c.  (3.10)

Denote f.sub.n=f(t.sub.n) where t.sub.n=n/N, n=0,1, . . . , N−1, then

[00058]
$$\sum_{t_n\in S}|h_n| = \Big|\sum_{t_n\in S}\bar{f}_nh_n\Big| \le \Big|\sum_{n=0}^{N-1}\bar{f}_nh_n\Big| + \Big|\sum_{t_n\in S_M^c}\bar{f}_nh_n\Big| + \Big|\sum_j\sum_{t_n\in S_M(j)\setminus\{\xi_j\}}\bar{f}_nh_n\Big| \le \Big|\sum_{n=0}^{N-1}\bar{f}_nh_n\Big| + (1-C_2)\sum_{t_n\in S_M^c}|h_n| + \sum_j\sum_{t_n\in S_M(j)}\big(1-C_1M^2(t_n-\xi_j)^2\big)|h_n| = \Big|\sum_{n=0}^{N-1}\bar{f}_nh_n\Big| + \sum_{t_n\in S^c}|h_n| - C_2\sum_{t_n\in S_M^c}|h_n| - C_1M^2\sum_j\sum_{t_n\in S_M(j)}(t_n-\xi_j)^2|h_n|.$$

[0109] Rearranging the inequality, one obtains

[00059]
$$C_2\sum_{t_n\in S_M^c}|h_n| + C_1M^2\sum_j\sum_{t_n\in S_M(j)}(t_n-\xi_j)^2|h_n| \le \Big|\sum_{n=0}^{N-1}\bar{f}_nh_n\Big| + \sum_{t_n\in S^c}|h_n| - \sum_{t_n\in S}|h_n|. \quad (3.11)$$

Note that |Σ.sub.n=0.sup.N−1f.sub.nh.sub.n|=|<f,h>|=|<f,P.sub.Mh>|≤∥f∥.sub.2∥P.sub.Mh∥.sub.2≤√{square root over (N)}∥P.sub.Mh∥.sub.2, also note that {circumflex over (x)} is a solution of (3.4), so

[00060]
$$\|D_1^\beta x\|_1 \ge \|D_1^\beta\hat{x}\|_1 = \|D_1^\beta x + h\|_1 \ge \sum_{t_n\in S}\big|(D_1^\beta x)_n\big| - \sum_{t_n\in S}|h_n| + \sum_{t_n\in S^c}|h_n|,\quad\text{then}\quad \sum_{t_n\in S^c}|h_n| - \sum_{t_n\in S}|h_n| \le 0,$$

then (3.11) becomes


C.sub.2Σ.sub.t.sub.n.sub.∈S.sub.M.sub.c|h.sub.n|+C.sub.1M.sup.2Σ.sub.jΣ.sub.t.sub.n.sub.∈S.sub.M.sub.(j)(t.sub.n−ξ.sub.j).sup.2|h.sub.n|≤√{square root over (N)}∥P.sub.Mh∥.sub.2.

From this inequality we can derive (3.6) and (3.7).

[0110] Next, bound ∥K*D.sub.1.sup.β({circumflex over (x)}−x)∥.sub.∞ for arbitrary kernel K with period 1. For arbitrary

[00061]
$$x_0\in\Big\{0,\ \frac{1}{N},\ \ldots,\ \frac{N-1}{N}\Big\},$$

$$|K*h(x_0)| = \Big|\sum_{n=0}^{N-1}K(x_0-t_n)h_n\Big| \le \Big|\sum_j\sum_{t_n\in S_M(j)}K(x_0-t_n)h_n\Big| + \|K\|_\infty\sum_{t_n\in S_M^c}|h_n|. \quad (3.12)$$

On the interval S.sub.M(j), approximate K(x.sub.0−t.sub.n) with its first-order Taylor expansion around x.sub.0−ξ.sub.j:


$$K(x_0-t_n) = K(x_0-\xi_j) + K'(x_0-\xi_j)(\xi_j-t_n) + \tfrac{1}{2}K''(\mu_n)|t_n-\xi_j|^2,\qquad t_n\in S_M(j),$$

with some μ.sub.n∈S.sub.M(j) depending on x.sub.0, ξ.sub.j and t.sub.n. Inserting this into (3.12), one obtains

[00062]
$$|K*h(x_0)| \lesssim \Big|\sum_j\sum_{t_n\in S_M(j)}\big(K(x_0-\xi_j)-K'(x_0-\xi_j)(t_n-\xi_j)\big)h_n\Big| + \|K''\|_\infty\sum_j\sum_{t_n\in S_M(j)}|t_n-\xi_j|^2|h_n| + \|K\|_\infty\sum_{t_n\in S_M^c}|h_n|.$$

To bound the first term on the right-hand side, use an interpolation argument. Let a, b∈custom-character.sup.s be such that a.sub.j=K(x.sub.0−ξ.sub.j), b.sub.j=−K′(x.sub.0−ξ.sub.j); by Proposition 2.4 in [15], there exists a function f∈C([0,1], Λ.sub.M) such that


$$\|f\|_\infty \lesssim \|K\|_\infty + M^{-1}\|K'\|_\infty,\qquad |f(x)-a_j-b_j(x-\xi_j)| \lesssim \big(M^2\|K\|_\infty + M\|K'\|_\infty\big)|x-\xi_j|^2,\quad x\in S_M(j),$$

which gives

[00063]
$$\Big|\sum_j\sum_{t_n\in S_M(j)}\big(K(x_0-\xi_j)-K'(x_0-\xi_j)(t_n-\xi_j)\big)h_n\Big| \le \Big|\sum_j\sum_{t_n\in S_M(j)}\big(f(t_n)-K(x_0-\xi_j)+K'(x_0-\xi_j)(t_n-\xi_j)\big)h_n\Big| + \Big|\sum_{t_n\in S_M}f_nh_n\Big| \lesssim \big(M^2\|K\|_\infty+M\|K'\|_\infty\big)\sum_j\sum_{t_n\in S_M(j)}|t_n-\xi_j|^2|h_n| + \Big|\sum_{n=0}^{N-1}f_nh_n\Big| + \Big|\sum_{t_n\in S_M^c}f_nh_n\Big|.$$

Also, obtain


$$\Big|\sum_{t_n\in S_M^c}f_nh_n\Big| \lesssim \big(\|K\|_\infty+M^{-1}\|K'\|_\infty\big)\sum_{t_n\in S_M^c}|h_n|,\qquad \Big|\sum_{n=0}^{N-1}f_nh_n\Big| \le \|f\|_2\|P_Mh\|_2 \lesssim \big(\|K\|_\infty+M^{-1}\|K'\|_\infty\big)\sqrt{N}\,\|P_Mh\|_2.$$

Combining these results, one obtains


$$|K*h(x_0)| \lesssim \big(2\|K\|_\infty+M^{-1}\|K'\|_\infty\big)\sum_{t_n\in S_M^c}|h_n| + \big(\|K\|_\infty+M^{-1}\|K'\|_\infty\big)\sqrt{N}\,\|P_Mh\|_2 + \big(M^2\|K\|_\infty+M\|K'\|_\infty+\|K''\|_\infty\big)\sum_j\sum_{t_n\in S_M(j)}|t_n-\xi_j|^2|h_n| \lesssim \big(\|K\|_\infty+M^{-1}\|K'\|_\infty+M^{-2}\|K''\|_\infty\big)\sqrt{N}\,\|P_Mh\|_2. \quad (3.13)$$

Next, for arbitrary resolution L, denote

[00064]
$$K_L(x) = \frac{1}{N}\sum_{k=-L,\,k\ne0}^{L}e^{i2\pi kx},$$

then by direct calculation one obtains

[00065]
$$P_L(\hat{x}-x) = K_L*(\hat{x}-x) + \frac{1}{N}\sum_{n=0}^{N-1}(\hat{x}-x)_n\,\mathbb{1},\qquad \|P_L(\hat{x}-x) - K_L*(\hat{x}-x)\|_\infty < \frac{1}{N^r}\,\delta.$$

Here the last inequality is due to the constraint in (3.4), which forces the absolute values of the last r entries of D.sup.−r({circumflex over (x)}−x) to be smaller than

[00066]
$$2\cdot\Big(\frac{1}{2N}\Big)^r\delta,$$

then

[00067]
$$\Big|\sum_{n=0}^{N-1}(\hat{x}-x)_n\Big| = \big|(D^{-1}(\hat{x}-x))_{N-1}\big| \le 2^{r-1}\,\big\|D^{-r}(\hat{x}-x)_{N-r:N-1}\big\|_\infty \le \Big(\frac{1}{N}\Big)^r\delta.$$

In the following, the goal is to bound the infinity norm ∥K.sub.L*({circumflex over (x)}−x)∥.sub.∞. Consider TV order β=1. For K.sub.L(x), construct a corresponding function {tilde over (K)}.sub.L(x), x∈[0,1], which satisfies D.sub.1{tilde over (K)}.sub.L(t.sub.j)=K.sub.L(t.sub.j) for all j=0, 1, . . . , N−1. Then one can bound ∥K.sub.L*({circumflex over (x)}−x)∥.sub.∞ by


∥K.sub.L*({circumflex over (x)}−x)∥.sub.∞=∥D.sub.1{tilde over (K)}.sub.L*({circumflex over (x)}−x)∥.sub.∞=∥{tilde over (K)}.sub.L*D.sub.1({circumflex over (x)}−x)∥.sub.∞=∥{tilde over (K)}.sub.L*h∥.sub.∞,

which can be further bounded by (3.13). Consider

[00068]
$$\tilde{K}_L(x) = \frac{1}{N}\sum_{k=-L,\,k\ne0}^{L}\frac{1-e^{i2\pi k(x+\frac{1}{N})}}{1-e^{i\frac{2\pi k}{N}}},$$

see that

[00069]
$$\tilde{K}_L(t_j) = \frac{1}{N}\sum_{k=-L,\,k\ne0}^{L}\frac{1-e^{i2\pi k\frac{j+1}{N}}}{1-e^{i\frac{2\pi k}{N}}} = \frac{1}{N}\sum_{k=-L,\,k\ne0}^{L}\sum_{n=0}^{j}e^{i\frac{2\pi kn}{N}} = \sum_{n=0}^{j}K_L(t_n),\quad j=0,1,\ldots,N-1.$$

So it satisfies the desired property


D.sub.1{tilde over (K)}.sub.L(t.sub.j)={tilde over (K)}.sub.L(t.sub.j)−{tilde over (K)}.sub.L(t.sub.j−1)=K.sub.L(t.sub.j).

Now show that the infinity norm of {tilde over (K)}.sub.L(x) is bounded by some constant for arbitrary L≤N/2 and x∈[0,1]. Since e.sup.i2πkx is 1-periodic, one has

[00070]
$$\sup_x|\tilde{K}_L(x)| = \frac{1}{N}\sup_x\Big|\sum_{k=-L,\,k\ne0}^{L}\frac{1-e^{i2\pi kx}}{1-e^{i\frac{2\pi k}{N}}}\Big| = \frac{1}{N}\sup_x\Big|\sum_{k=1}^{L}\Big(\frac{1-\cos(2\pi kx)-i\sin(2\pi kx)}{1-\cos(\frac{2\pi k}{N})-i\sin(\frac{2\pi k}{N})} + \frac{1-\cos(2\pi kx)+i\sin(2\pi kx)}{1-\cos(\frac{2\pi k}{N})+i\sin(\frac{2\pi k}{N})}\Big)\Big| = \frac{1}{N}\sup_x\Big|\sum_{k=1}^{L}\frac{\big(1-\cos(2\pi kx)\big)\big(1-\cos(\frac{2\pi k}{N})\big)+\sin(2\pi kx)\sin(\frac{2\pi k}{N})}{1-\cos(\frac{2\pi k}{N})}\Big| = \frac{1}{N}\sup_x\Big|\sum_{k=1}^{L}\Big(\big(1-\cos(2\pi kx)\big)+\sin(2\pi kx)\frac{\cos(\frac{\pi k}{N})}{\sin(\frac{\pi k}{N})}\Big)\Big| = \frac{L}{N}+\frac{1}{N}\sup_x\Big|\sum_{k=1}^{L}\frac{\sin(2\pi kx)\cos(\frac{\pi k}{N})-\sin(\frac{\pi k}{N})\cos(2\pi kx)}{\sin(\frac{\pi k}{N})}\Big| = \frac{L}{N}+\frac{1}{N}\sup_x\Big|\sum_{k=1}^{L}\frac{\sin\big(2\pi k(x-\frac{1}{2N})\big)}{\sin(\frac{\pi k}{N})}\Big| = \frac{L}{N}+\frac{1}{N}\sup_x\Big|\sum_{k=1}^{L}\frac{\sin(2\pi kx)}{\sin(\frac{\pi k}{N})}\Big|.$$

Notice that for k≤L≤N/2,

[00071]
$$\frac{\pi k}{N} \le \frac{\pi}{2},$$

then $\sin\big(\frac{\pi k}{N}\big)$ is of the same order as $\frac{\pi k}{N}$, since for all $0<x\le\frac{\pi}{2}$ one has $x-\frac{x^3}{6}\le\sin(x)<x$, which further gives

[00075]
$$0.58\,\frac{\pi k}{N} \le \frac{\pi k}{N}-\frac{1}{6}\Big(\frac{\pi k}{N}\Big)^3 \le \sin\Big(\frac{\pi k}{N}\Big) < \frac{\pi k}{N}.$$

Then see that

[00076]
$$\Big|\frac{1}{N}\sum_{k=1}^{L}\frac{\sin(2\pi kx)}{\sin(\frac{\pi k}{N})}-\sum_{k=1}^{L}\frac{\sin(2\pi kx)}{\pi k}\Big| = \frac{1}{N}\Big|\sum_{k=1}^{L}\sin(2\pi kx)\Big(\frac{1}{\sin(\frac{\pi k}{N})}-\frac{1}{\frac{\pi k}{N}}\Big)\Big| = \frac{1}{N}\Big|\sum_{k=1}^{L}\sin(2\pi kx)\,\frac{\frac{\pi k}{N}-\sin(\frac{\pi k}{N})}{\sin(\frac{\pi k}{N})\frac{\pi k}{N}}\Big| \le \frac{1}{N}\sum_{k=1}^{L}\frac{\frac{1}{6}\big(\frac{\pi k}{N}\big)^3}{0.58\big(\frac{\pi k}{N}\big)^2} \le 0.23.$$

It is known that the summation

[00077] $\sum_{k=1}^{n}\frac{\sin(2\pi kx)}{k}$

is uniformly bounded by some constant smaller than 2 for arbitrary $n \in \mathbb{N}$ and $x \in \mathbb{R}$, so $\tilde{K}_L(x)$ is also bounded; that is, there exists some constant C such that $\|\tilde{K}_L\|_\infty \le C$. Therefore by (3.13) one has

[00078] $\|K_L * (\hat{x} - x)\|_\infty \le \|D_1\tilde{K}_L * (\hat{x} - x)\|_\infty = \|\tilde{K}_L * h\|_\infty \le C\,\frac{L^2}{M^2}\,N\cdot\frac{M^{r+1}}{N^{r+1}}\cdot 2\delta = C\,\frac{L^2}{N}\left(\frac{M}{N}\right)^{r-1}\delta.$

For the last inequality, Bernstein's Inequality for trigonometric sums is used to obtain $\|\tilde{K}_L\|_\infty \le C$, $\|\tilde{K}_L'\|_\infty \le CL$, and $\|\tilde{K}_L''\|_\infty \le CL^2$.
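The uniform bound on the partial sums $\sum_{k=1}^{n}\sin(2\pi kx)/k$ invoked above can be probed numerically. This sketch samples only finitely many n and x, so it illustrates rather than proves the bound (the supremum is classically the Gibbs constant, roughly 1.85, which is indeed smaller than 2):

```python
import numpy as np

# empirical maximum of |sum_{k=1}^n sin(2*pi*k*x)/k| over grids of n and x
xs = np.linspace(0.0, 1.0, 4001)
worst = 0.0
for n in (1, 2, 5, 10, 50, 200):
    k = np.arange(1, n + 1)
    s = np.sin(2 * np.pi * np.outer(xs, k)) / k   # shape (len(xs), n)
    worst = max(worst, float(np.abs(s.sum(axis=1)).max()))
print(worst)  # stays below 2, consistent with the stated uniform bound
```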
For TV order β=2, consider

[00079] $\tilde{K}_L(x) = -\frac{1}{N}\sum_{k=-L,\,k\neq 0}^{L}\frac{e^{i2\pi\frac{k}{N}} - e^{i2\pi k\left(\frac{2}{N}+x\right)}}{\left(1 - e^{i2\pi\frac{k}{N}}\right)^2};$

similarly, one can show $\|\tilde{K}_L\|_\infty \le N$, and then

[00080] $\|K_L * (\hat{x} - x)\|_\infty \le \|D_1^2\tilde{K}_L * (\hat{x} - x)\|_\infty = \|\tilde{K}_L * h\|_\infty \le C\,\frac{L^2}{M^2}\,N^3\cdot\frac{M^{r+2}}{N^{r+3}}\cdot 2\delta = C\,L^2\left(\frac{M}{N}\right)^{r}\delta.$

[0111] In conclusion, for β=1 or 2, one has the $\ell_\infty$-norm error bound

[00081] $\|K_L * (\hat{x} - x)\|_\infty \le C\,\frac{L^2}{N^r}\,M^{r+\beta-2}\,\delta,$

which further gives

[00082] $\|P_L(\hat{x} - x)\|_\infty \le \|K_L * (\hat{x} - x)\|_\infty + \left|\frac{1}{N}\sum_{n=0}^{N-1}(\hat{x} - x)_n\right| \le C\,\frac{L^2}{N^r}\,M^{r+\beta-2}\,\delta.$

[0112] An optimization algorithm is presented for solving (3.1). Here, the algorithm is presented for the simple case where the TV order β and the quantization order r are both 1, which reduces (3.1) to

[00083] $\min_z \|D^T z\|_1 \quad \text{subject to} \quad \|D^{-1}(z - q)\|_\infty \le \delta/2. \qquad (4.1)$

All other optimization problems proposed in this paper can be solved in similar ways.
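For intuition on the constraint set in (4.1): in first-order ΣΔ quantization, $D^{-1}(x - q)$ is exactly the vector of accumulated quantization errors, which a greedy ΣΔ scheme keeps within $\delta/2$, so the true signal is feasible for (4.1). The sketch below is a hypothetical re-implementation of a standard greedy first-order ΣΔ quantizer (not the patented circuit), with $D^{-1}$ realized as a cumulative sum:

```python
import numpy as np

def sigma_delta_1st(x, delta):
    """Greedy first-order Sigma Delta: q[n] is the alphabet point closest
    to u[n-1] + x[n], which keeps the state |u[n]| <= delta/2."""
    u, q = 0.0, np.empty_like(x)
    for n, xn in enumerate(x):
        q[n] = delta * np.round((u + xn) / delta)  # mid-tread alphabet
        u = u + xn - q[n]                          # accumulated error
    return q

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 512)
delta = 0.25
q = sigma_delta_1st(x, delta)
# D^{-1} is the cumulative-sum operator for the first-difference matrix D,
# so the true signal x satisfies the constraint in (4.1)
u = np.cumsum(x - q)
assert np.max(np.abs(u)) <= delta / 2 + 1e-12
print("max |D^{-1}(x - q)| =", np.max(np.abs(u)))
```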

[0113] Let us start by writing out the Lagrangian dual of (4.1):

[00084] $\mathcal{L}(z, y) = \|D^T z\|_1 - \frac{\delta}{2}\|y\|_1 + \left\langle y,\, D^{-1}(z - q)\right\rangle. \qquad (4.2)$

This dual formulation (4.2) is a special case of the general form

[00085] $\max_y \min_x\; \langle Lx, y\rangle + g(x) - f^*(y), \qquad (4.3)$

which is known to be solvable by primal-dual algorithms such as the Chambolle-Pock method (Algorithm 1).

TABLE-US-00001
Algorithm 1: Solve (4.3) using the Chambolle-Pock method
Initializations: $\tau, \sigma > 0$, $\tau\sigma < 1$, $\theta \in [0, 1]$, $x_0$, $y_0$, and set $\bar{x}_0 = x_0$
Iterations: update $x_n$, $y_n$, $\bar{x}_n$ as follows:
[00086]
(a) $y_{n+1} = \mathrm{Prox}_{\sigma f^*}(y_n + \sigma L \bar{x}_n)$
(b) $x_{n+1} = \mathrm{Prox}_{\tau g}(x_n - \tau L^* y_{n+1})$
(c) $\bar{x}_{n+1} = x_{n+1} + \theta(x_{n+1} - x_n)$
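The iteration template of Algorithm 1 can be sketched generically. The toy saddle-point problem below uses quadratic g and f, chosen only because the solution then has a closed form to check against; it illustrates updates (a)-(c), not the decoder itself:

```python
import numpy as np

def chambolle_pock(K, prox_sf, prox_tg, tau, sigma, theta, x0, y0, iters):
    # generic primal-dual updates (a)-(c) of Algorithm 1, with L = K
    x, y, xbar = x0.copy(), y0.copy(), x0.copy()
    for _ in range(iters):
        y = prox_sf(y + sigma * (K @ xbar))      # (a)
        x_new = prox_tg(x - tau * (K.T @ y))     # (b)
        xbar = x_new + theta * (x_new - x)       # (c)
        x = x_new
    return x

rng = np.random.default_rng(0)
K = rng.standard_normal((8, 8)) / 4
b = rng.standard_normal(8)
# f(u) = 0.5*||u - b||^2  =>  prox_{sigma f*}(v) = (v - sigma*b)/(1 + sigma)
# g(x) = 0.5*||x||^2      =>  prox_{tau g}(v)   = v/(1 + tau)
tau = sigma = 0.9 / np.linalg.norm(K, 2)         # ensures tau*sigma*||K||^2 < 1
x = chambolle_pock(
    K,
    prox_sf=lambda v: (v - sigma * b) / (1 + sigma),
    prox_tg=lambda v: v / (1 + tau),
    tau=tau, sigma=sigma, theta=1.0,
    x0=np.zeros(8), y0=np.zeros(8), iters=10000,
)
x_star = np.linalg.solve(K.T @ K + np.eye(8), K.T @ b)  # closed-form solution
assert np.max(np.abs(x - x_star)) < 1e-6
```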
A natural way to apply the Chambolle-Pock method to (4.2) is to let $x = D^T z$, $L = D^{-1}D^{-T}$, $g(x) = \|x\|_1$, and

[00087] $f^*(y) = \frac{\delta}{2}\|y\|_1;$

then (4.2) becomes

[00088] $\mathcal{L}(x, y) = \|x\|_1 - \frac{\delta}{2}\|y\|_1 + \left\langle y,\, D^{-1}D^{-T}x - D^{-1}q\right\rangle = \langle Lx, y\rangle + g(x) - f^*(y) \qquad (4.4)$

(with the linear term $\langle y, D^{-1}q\rangle$ absorbed into $f^*$).

Although this can be solved by the Chambolle-Pock method (Algorithm 1), it converges slowly due to the poor conditioning of L (the condition number of L is O(N.sup.2)).
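This conditioning claim is easy to reproduce numerically. The sketch below assumes D is the first-order difference matrix (consistent with r = 1, so that $D^{-1}$ is the cumulative-sum operator) and shows that doubling N roughly quadruples the condition number of $L = D^{-1}D^{-T}$:

```python
import numpy as np

def cond_L(N):
    D = np.eye(N) - np.eye(N, k=-1)   # first-order difference matrix
    Dinv = np.linalg.inv(D)           # lower-triangular cumulative-sum operator
    return np.linalg.cond(Dinv @ Dinv.T)

c32, c64 = cond_L(32), cond_L(64)
print(c32, c64, c64 / c32)    # ratio close to 4, i.e. O(N^2) growth
assert c64 > 64**2            # already exceeds N^2 at N = 64
assert 3.5 < c64 / c32 < 4.5  # doubling N roughly quadruples the condition number
```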

[0114] One proposed strategy is, through a change of variable, to move the large condition number from L to g in (4.3). This is beneficial because, on the one hand, the improved condition number of L accelerates the convergence of the primal-dual updates; on the other hand, the increased condition number of g is harmless to the inner loop due to the use of the proximal map. Explicitly, the change of variable is $x = D^{-1}(z - q)$, so that $z = Dx + q$, and the dual form becomes

[00089] $\mathcal{L}(x, y) = \langle y, x\rangle + \|D^T D x + D^T q\|_1 - \frac{\delta}{2}\|y\|_1. \qquad (4.5)$

Next, let $g(x) = \|D^T D x + D^T q\|_1$ and

[00090] $f^*(y) = \frac{\delta}{2}\|y\|_1;$

one can see that (4.5) also fits into the form of (4.3). Applying the Chambolle-Pock method to (4.5) gives the following algorithm:

TABLE-US-00002
Algorithm 2: Solve (4.5) using the Chambolle-Pock method
Initializations: $\tau, \sigma > 0$, $\tau\sigma < 1$, $\theta \in [0, 1]$, $x_0$, $y_0$, and set $\bar{x}_0 = x_0$
Iterations: update $x_n$, $y_n$, $\bar{x}_n$ as follows:
[00091]
(a) $y_{n+1} = \arg\min_y\; \frac{\sigma\delta}{2}\|y\|_1 + \frac{1}{2}\|y - (y_n + \sigma\bar{x}_n)\|^2$
(b) $x_{n+1} = \arg\min_x\; \tau\|D^T D x + D^T q\|_1 + \frac{1}{2}\|x - (x_n - \tau y_{n+1})\|^2$
(c) $\bar{x}_{n+1} = x_{n+1} + \theta(x_{n+1} - x_n)$
Note that it has been shown that when L = I (I is the identity matrix), the step sizes satisfy τ = 1/σ, and the extrapolation rate θ is set to 1, PDHG is equivalent to DRS and ADMM. Hence the above algorithm can also be derived by appropriate applications of the ADMM algorithm when those special step sizes are used.

[0115] In Algorithm 2, step (a) has a closed-form solution. To solve step (b), one can apply the ADMM algorithm, the procedure of which is stated in Algorithm 3.
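Step (a) is the proximal map of a scaled ℓ1 norm, whose closed form is componentwise soft-thresholding; a minimal sketch (the numeric values are illustrative only):

```python
import numpy as np

def soft_threshold(v, t):
    """Closed form of argmin_y t*||y||_1 + 0.5*||y - v||^2, componentwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# step (a) of Algorithm 2 shrinks y_n + sigma*xbar_n by sigma*delta/2
sigma, delta = 0.5, 0.1                        # illustrative values
v = np.array([0.3, -0.01, 0.02, -0.4])         # stands in for y_n + sigma*xbar_n
y_next = soft_threshold(v, sigma * delta / 2)  # threshold is 0.025
print(y_next)  # large entries shrink by 0.025, small entries vanish
```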

TABLE-US-00003
Algorithm 3: Solve (b) in Algorithm 2 by ADMM
Initializations: $\rho > 0$, $x_0$, $u_0$, $b_0$
Iterations: update $x_n$, $u_n$, $b_n$ as follows:
[00092]
$x_{n+1} = \arg\min_x\; \frac{1}{2}\|x - (x_n - \tau y_{n+1})\|_2^2 + \frac{\rho}{2}\|D^T D x + D^T q - u_n + b_n\|_2^2$
$u_{n+1} = \arg\min_u\; \tau\|u\|_1 + \frac{\rho}{2}\|D^T D x_{n+1} + D^T q - u + b_n\|_2^2$
$b_{n+1} = b_n + D^T D x_{n+1} + D^T q - u_{n+1}$
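The x-update in Algorithm 3 is an unconstrained quadratic minimization and therefore reduces to a linear solve. A sketch under the assumption that D is the first-order difference matrix, with v and w standing in for $x_n - \tau y_{n+1}$ and $D^T q - u_n + b_n$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, rho = 32, 0.7
D = np.eye(N) - np.eye(N, k=-1)       # first-order difference matrix
A = D.T @ D                           # operator inside the quadratic penalty
v = rng.standard_normal(N)            # stands in for x_n - tau*y_{n+1}
w = rng.standard_normal(N)            # stands in for D^T q - u_n + b_n

# minimize 0.5*||x - v||^2 + (rho/2)*||A x + w||^2 via the normal equations
x = np.linalg.solve(np.eye(N) + rho * A.T @ A, v - rho * A.T @ w)

grad = (x - v) + rho * A.T @ (A @ x + w)   # gradient vanishes at the minimizer
assert np.max(np.abs(grad)) < 1e-8
print("x-update gradient residual:", np.max(np.abs(grad)))
```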

[0116] Numerical simulation of the proposed decoder on both 1D synthetic signals and natural images is presented. An experiment is designed to confirm the benefits, proved in Theorem 3.1, of the proposed quantization method over MSQ on 1D signals (representing columns of images). The signal to be quantized is piece-wise constant or piece-wise linear with random boundary locations and random height/slope on each piece, which satisfies Assumption 2.1. MSQ is compared with Sigma Delta quantization coupled with the decoder (3.1). The results are displayed in FIGS. 6A-6C. In FIG. 6A, MSQ is compared with 1st order Sigma Delta quantization, where the signal is piece-wise constant and does not satisfy the minimum separation condition. The decoder used for reconstruction is according to (3.1) with β=1. The SNR of the signal reconstructed from Sigma Delta quantization is 37.00, and the SNR of MSQ is 21.95. In FIG. 6B, MSQ is compared with 2nd order Sigma Delta quantization and a decoder (3.1) with β=2. In this case, the SNR of the Sigma Delta reconstruction is 37.30, and the SNR of MSQ is 27.33. In FIG. 6C, the signal is piece-wise constant with random noise. MSQ is compared with 1st order Sigma Delta quantization and a decoder (3.1) with β=1. The SNR of the Sigma Delta reconstruction is 33.40, and the SNR of MSQ is 20.93. From FIG. 6A, with the same number of bits, the reconstructed signal using 1st order ΣΔ quantization and decoder (3.1) is closer to the true signal and generally preserves the piece-wise constant structure. FIG. 6B shows that the piece-wise linear signal is also better reconstructed by the proposed method. As there is no guarantee that a given signal strictly satisfies our assumption, FIG. 6C tests the stability of the proposed method by adding random noise to the signal. It can be seen that even with additive random noise, the proposed encoder-decoder pair has reasonably good performance.
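The qualitative advantage reported above can be reproduced with a much cruder decoder than (3.1): because first-order ΣΔ pushes quantization error to high frequencies, even plain moving-average smoothing recovers a piece-wise constant signal far better from ΣΔ output than from MSQ output. This sketch is illustrative only and does not reproduce the SNR figures quoted above:

```python
import numpy as np

def msq(x, delta):
    return delta * np.round(x / delta)        # memoryless scalar quantization

def sigma_delta_1st(x, delta):
    u, q = 0.0, np.empty_like(x)              # greedy first-order Sigma Delta
    for n, xn in enumerate(x):
        q[n] = delta * np.round((u + xn) / delta)
        u = u + xn - q[n]
    return q

def snr_db(x, rec):
    return 10 * np.log10(np.sum(x**2) / np.sum((x - rec)**2))

# piece-wise constant signal whose levels are off the quantization grid
x = np.repeat([0.31, -0.47, 0.82, 0.05], 128)
delta, W = 0.5, 16
smooth = lambda q: np.convolve(q, np.ones(W) / W, mode="same")

snr_sd = snr_db(x, smooth(sigma_delta_1st(x, delta)))
snr_msq = snr_db(x, smooth(msq(x, delta)))
print(snr_sd, snr_msq)
assert snr_sd > snr_msq  # SD error is out-of-band and smoothed away; MSQ error is not
```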

[0117] FIGS. 7A and 7B show the reconstruction results for signals that satisfy the minimum separation condition. In FIG. 7A, the signal is piece-wise constant. For Sigma Delta quantization, 1st order Sigma Delta quantization is used with a decoder (3.1) with β=r=1, as well as 3rd order Sigma Delta quantization with a decoder (3.4) with β=1, r=3. The SNRs of the reconstructed signals are 41.90 and 33.69, respectively. The SNR of MSQ is 24.13. In FIG. 7B, the signal is piece-wise linear. For Sigma Delta quantization, 2nd order Sigma Delta quantization is used with a decoder (3.1) with β=r=2, as well as 3rd order Sigma Delta quantization with a decoder (3.4) with β=2, r=3. The SNRs of the reconstructed signals are 47.02 and 36.11, respectively. The SNR of MSQ is 25.70. If the minimum separation condition is met, one can achieve further error reduction by using higher order ΣΔ quantization. In both FIGS. 7A and 7B, the reconstructed signal with higher ΣΔ quantization order r is closer to the true signal compared to the result using the lower quantization order, which agrees with the theoretical result in Theorem 3.4.

[0118] Numerical results are also presented for 2D natural images. The best performance is usually achieved when both the TV order and the ΣΔ quantization order are 1. This setting is used in the following experiments.

[0119] In the first example, gray-scale images are compared across 2D Sigma Delta quantization Q2D coupled with decoder (2.6) (sd2D), 1D Sigma Delta quantization Qcol coupled with decoder (2.5) (sd1D), and MSQ quantization, where all quantizers use the same number of bits and the optimal alphabets. In terms of visual quality, (sd2D) is better than (sd1D) and much better than MSQ. In terms of PSNR, (sd1D) is slightly better than (sd2D) and better than MSQ.

[0120] In the second experiment, the effect of dividing the image into multiple rectangular patches is evaluated for (sd2D), quantizing and reconstructing each patch individually using the test image Lena. The process can be done in parallel, which makes the decoding significantly faster than the single-patch 2D reconstruction. There is usually no visible difference between using one patch and multiple patches in quantization and reconstruction. In both cases, the images look more natural and closer to the original image than MSQ, as the latter is unable to distinguish the subtle differences between pixels, especially in the face and shoulder areas.

[0121] To further investigate where the improved PSNRs of the two Sigma Delta reconstructions come from, the absolute spectra of the three reconstructions are plotted, as well as the absolute spectra of the residue images. The residue image is defined as the difference between the reconstructed image and the original image. As predicted, the decoders can indeed retain the high frequency information while effectively compressing the low frequency noise.
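This spectral observation can be illustrated in 1D with an FFT: the ΣΔ residue has little low-frequency content, while the MSQ residue of a slowly varying signal does. The sketch below is an illustration only, not the 2D experiment described above:

```python
import numpy as np

def sigma_delta_1st(x, delta):
    u, q = 0.0, np.empty_like(x)              # greedy first-order Sigma Delta
    for n, xn in enumerate(x):
        q[n] = delta * np.round((u + xn) / delta)
        u = u + xn - q[n]
    return q

t = np.arange(1024) / 1024
x = 0.6 * np.sin(2 * np.pi * 3 * t) + 0.3 * np.cos(2 * np.pi * 7 * t)
delta = 0.25

res_sd = x - sigma_delta_1st(x, delta)       # Sigma Delta residue
res_msq = x - delta * np.round(x / delta)    # MSQ residue

def lowfreq_fraction(e, K=32):
    E = np.abs(np.fft.rfft(e)) ** 2
    return E[:K].sum() / E.sum()             # energy fraction below bin K

print(lowfreq_fraction(res_sd), lowfreq_fraction(res_msq))
assert lowfreq_fraction(res_sd) < lowfreq_fraction(res_msq)
```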

[0122] The techniques described herein may be implemented by one or more circuits and/or computer programs executed by one or more processors. The computer programs include processor-executable instructions that are stored on a non-transitory tangible computer readable medium. The computer programs may also include stored data. Non-limiting examples of the non-transitory tangible computer readable medium are nonvolatile memory, magnetic storage, and optical storage.

[0123] Some portions of the above description present the techniques described herein in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. These operations, while described functionally or logically, are understood to be implemented by circuits and/or computer programs. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules or by functional names, without loss of generality.

[0124] Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a circuit, a computer system, or combination thereof, that manipulates and transforms data represented as physical (electronic) quantities within the circuit and the computer system memories or registers or other such information storage, transmission or display devices.

[0125] The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.