Method for predicting a block of pixels from at least one patch
09729876 · 2017-08-08
Assignee
Inventors
- Christine Guillemot (Chantepie, FR)
- Safa Cherigui (Villeurbannes, FR)
- Dominique Thoreau (Cesson Sevigne, FR)
- Philippe Guillotel (Vern sur Seiche, FR)
CPC classification
H04N19/105
ELECTRICITY
H04N19/154
ELECTRICITY
International classification
H04N19/154
ELECTRICITY
Abstract
The present invention generally relates to a method for predicting a block of pixels from at least one patch comprising a block of pixels and a causal neighborhood around this block of pixels. The method comprises the following steps: determining a mapping of a causal neighborhood, around that block of pixels to be predicted, on the block of pixels to be predicted in order that the block of pixels of each patch is best predicted by mapping the neighborhood of that patch on the block of pixels of that patch, and predicting the block of pixels from a prediction block computed by applying the determined mapping on the neighborhood of the block of pixels to predict.
Claims
1. A method for predicting a block of pixels comprising: forming a first patch with a neighborhood around the block of pixels to be predicted; searching in a search window at least one other patch similar to the first patch, each patch comprising a block of pixels and a causal neighborhood around that block of pixels; and obtaining a final prediction of the block to be predicted from said at least one other patch; wherein the method further comprises: determining a mapping of a causal neighborhood of a patch, on the block of pixels of said patch in order that the block of pixels of at least one training patch is best predicted by mapping the causal neighborhood of that training patch on the block of pixels of that training patch, and predicting the block of pixels from a prediction block computed by applying the determined mapping on the neighborhood of the block of pixels to predict, where in the course of the sub-step for searching, a patch which is the most similar to the patch formed with the neighborhood and the prediction block is considered, and the patches which are the most similar, in terms of content, to said most similar patch are considered; and adding said prediction block to the first patch.
2. The method as claimed in claim 1, wherein the mapping is determined by using a linear regression approach.
3. The method as claimed in claim 2, wherein the linear regression problem is solved by least squares minimizing at least one prediction error between the block of pixels of a training patch and a block resulting from mapping the neighborhood of that training patch on said block of pixels.
4. The method as claimed in claim 2, wherein the linear regression problem is solved by minimizing at least one conditional probability expressed in a Bayesian context.
5. The method as claimed in claim 1, wherein at least one training patch belongs to an image which does not comprise the block of pixels to predict.
6. The method as claimed in claim 1, where in the course of the sub-step for searching, the P patches which are the most similar, in terms of content, to the patch formed with the neighborhood and the prediction block are considered, and in the course of the sub-step for predicting, the neighborhood of the block to be predicted is predicted by linearly combining the neighborhoods of the P patches, and the final prediction of the block of pixels to be predicted is then obtained by linearly combining the blocks of pixels of said P patches, said linear combination using the weighting coefficients obtained by linearly combining the neighborhoods of the P patches.
7. The method as claimed in claim 1, where in the course of the sub-step for predicting, the neighborhood of the block to be predicted is predicted by linearly combining the neighborhoods of said patches and the neighborhood of the most similar patch, and the final prediction of the block of pixels to be predicted is then obtained by linearly combining the blocks of pixels of said patches and the block of pixels of the most similar patch, said linear combination using the weighting coefficients obtained by linearly combining the neighborhoods of said patches and the neighborhood of the most similar patch.
8. The method as claimed in claim 1, wherein a patch which is the most similar to a first patch formed with the neighborhood and the prediction block is considered, and the block of pixels to be predicted is predicted by the block of pixels of said most similar patch.
9. A method for coding a block of pixels of an image from at least the blocks of patches, wherein the block of pixels is predicted according to the method which conforms to claim 1.
10. A method for decoding a signal of coded data representing a block of pixels, wherein the prediction is obtained according to the method which conforms to claim 1.
11. An apparatus for coding a block of pixels of an image from at least the blocks of patches, wherein it comprises a processor configured to predict the block of pixels according to the method which conforms to claim 1.
12. An apparatus for decoding a signal of coded data representing a block of pixels, wherein it comprises a processor configured to predict the block of pixels according to the method which conforms to claim 1.
Description
4. LIST OF FIGURES
(1) The embodiments will be described with reference to the following figures:
5. DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT OF THE INVENTION
(12) An image comprises pixels or image points, with each of which is associated at least one item of image data. An item of image data is for example an item of luminance data or an item of chrominance data. In the following, the term “block of pixels” is used to designate a set of items of image data.
(13) As illustrated in
(14) The pixels of a causal neighborhood around a block of pixels are usually said to be known because they belong to a reconstructed part SW of the image I. This part SW is usually called a search window. A patch X.sub.i comprises a block X.sub.i.sup.u of n2 pixels at the same positions as the pixels of the block of pixels X.sup.u and a causal neighborhood X.sub.i.sup.k of n1 pixels at the same positions as the pixels of the causal neighborhood X.sup.k.
(15)
(16) The method comprises, as illustrated in
(17) In practice, the mapping F1 is determined from a set of patches which comprises more than one patch X.sub.1. Potentially, a new set of patches is defined for each block of pixels to be predicted.
(18) A set of patches X.sub.i is defined, for example, by considering all possible patches (blocks of pixels) in the search window SW (which is defined within a coded-decoded causal part of the image I). Optionally, only a reduced number of these patches are considered. The set of data points formed by the pixels of the template and the set of data points formed by the patches (template plus block of pixels) belong to two related manifolds. Determining the mapping of a causal neighborhood around a block of pixels on that block of pixels consists thus in “connecting” these two manifolds in order to make sure that a good approximation of the causal neighborhood around the block of pixels to be predicted leads to a good approximation of the patch (template plus block to be predicted).
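The construction of such a set of training patches can be sketched as follows. This is an illustrative sketch only: the raster scan over the search window, the L-shaped template geometry (n_t rows above and n_t columns to the left of an n_b×n_b block), and all names are assumptions, not the patent's exact layout.

```python
import numpy as np

def extract_patches(image, n_b=4, n_t=2):
    """Collect training patches (template X_i^k plus block X_i^u) from a
    causal search window, returned as column matrices M^k and M^u.
    The L-shaped template (n_t rows above, n_t columns left) is an
    illustrative assumption."""
    H, W = image.shape
    Mk, Mu = [], []
    for y in range(n_t, H - n_b + 1):
        for x in range(n_t, W - n_b + 1):
            blk = image[y:y + n_b, x:x + n_b]            # block X_i^u
            top = image[y - n_t:y, x - n_t:x + n_b]      # rows above
            left = image[y:y + n_b, x - n_t:x]           # columns left
            tpl = np.concatenate([top.ravel(), left.ravel()])  # X_i^k
            Mk.append(tpl)
            Mu.append(blk.ravel())
    # One patch per column: M^k is n1 x N, M^u is n2 x N.
    return np.array(Mk).T, np.array(Mu).T
```

In practice the scan would be restricted to the coded-decoded causal part of the image, and only a reduced subset of positions might be kept, as the paragraph above notes.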
(19) According to an embodiment, the mapping F1 is determined by using a linear regression approach.
(20) Mathematically speaking, the mapping is defined by a matrix W of coefficients defined as follows:
(21) Considering N patches X.sub.i (i=1, . . . , N) from the search window SW, each patch X.sub.i comprising a template X.sub.i.sup.k and a block of pixels X.sub.i.sup.u as explained before, the problem to solve is then given by:
(22)
(23) where the n2×n1 matrix W maps the neighbourhood X.sub.i.sup.k of a patch X.sub.i on the block of pixels X.sub.i.sup.u of this patch.
(24) Thus, the coefficients of the matrix W are determined in order to best predict the set of vectors forming the columns of a matrix M.sup.u from the set of vectors forming a matrix M.sup.k:
M.sup.u=W.Math.M.sup.k+E
(25) where E is a prediction error matrix for which each row is a noise vector given for example by:
e.sub.i˜N(0,Σ.sup.2)
(26) This is a problem of multivariate regression where a vector (of correlated random variables) is predicted rather than a single scalar random variable in the case of the well-known simple regression problem.
(27) There exist multiple methods to solve such a multivariate linear regression system in the prior art, and the invention is not restricted to any of these methods.
(28) According to an embodiment, the problem of equation (1) is solved by least squares minimising at least one prediction error between the block of pixels of a patch and a block resulting from the mapping of the neighbourhood of that patch on said block of pixels.
(29) Mathematically speaking, the coefficients of the matrix W are given by minimizing the following prediction error:
∥(M.sup.u).sup.T−(M.sup.k).sup.TW.sup.T∥.sup.2 (2)
which brings together the prediction errors between the blocks of pixels of the patches and the blocks resulting of the mapping of the neighbourhoods of these patches on the blocks of pixels of these patches.
(30) The minimization of the criteria of equation (2) is given by zeroing the derivative with respect to W which leads to the least squares estimator:
W=M.sup.u(M.sup.k).sup.T(M.sup.k(M.sup.k).sup.T).sup.†
where the symbol † denotes the pseudo-inverse.
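A minimal sketch of this least squares estimator, assuming M.sup.k and M.sup.u are stored as NumPy arrays with one training patch per column; the function names are illustrative:

```python
import numpy as np

def fit_mapping(Mk, Mu):
    """Least squares estimate of the mapping W so that M^u ≈ W M^k:
    W = M^u (M^k)^T (M^k (M^k)^T)^dagger, with the pseudo-inverse
    handling a possibly rank-deficient Gram matrix."""
    return Mu @ Mk.T @ np.linalg.pinv(Mk @ Mk.T)

def predict_block(W, neighborhood):
    """Step 2: apply the learned mapping to the causal neighborhood
    X^k of the block to be predicted, yielding the prediction block."""
    return W @ neighborhood
```

On noiseless synthetic data generated by a known mapping, `fit_mapping` recovers that mapping exactly (up to numerical precision), which is a convenient sanity check of the estimator.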
(31) According to an embodiment, the problem of equation (1) is solved by minimising at least one conditional probability expressed in a Bayesian context.
(32) In such a Bayesian context, one needs to estimate the model parameters W. For that, the distributions of the model parameters W and ε are defined by:
P(W,ε|M.sup.u,M.sup.k)∝P(ε)P(W|ε)P(M.sup.u|M.sup.k,W,ε) (3)
where P(ε) is an inverse-Wishart distribution, and P(W|ε) is a matrix normal distribution.
(33) The conditional probability of M.sup.u given the observations M.sup.k, and the unknown regression parameters W and ε is given by
P(M.sup.u|M.sup.k,W,ε)∝(ε.sup.2).sup.−n2/2exp(−½tr((M.sup.u−M.sup.kW).sup.Tε.sup.−1(M.sup.u−M.sup.kW)))
(34) The value of W which maximizes the likelihood (or minimizes the exponent of the above conditional probability) is the one given by
{circumflex over (W)}=((M.sup.k).sup.TM.sup.k).sup.−1(M.sup.k).sup.TM.sup.u
(35) This is the classical unbiased estimate of the regression parameters (matrix W). The value of ε which maximizes the likelihood is given by {circumflex over (ε)}=(M.sup.u−M.sup.kŴ).sup.T(M.sup.u−M.sup.kŴ). The predicted value of M.sup.u is then given by
{circumflex over (M)}.sup.u=M.sup.kŴ.
(36) The procedures to estimate the regression parameters from the above conditional probabilities are described in Chapters 7 and 8 of Daniel B. Rowe, “Multivariate Bayesian Statistics, Models for Source Separation and Signal Unmixing”, Chapman and Hall/CRC, 2003. Basically, there are two methods: computing the marginal posterior mean and the maximum a posteriori estimates. The marginal posterior mean estimation of the parameters involves computing the marginal posterior distribution for each of the parameters and then computing mean estimates from these marginal posterior distributions. To find the marginal posterior distribution of the matrix of the regression coefficients W, the joint posterior distribution (equation (3)) must be integrated with respect to ε. Given that the posterior distribution is of the same form as an inverted Wishart distribution, the integration can be easily performed. The joint posterior distribution may also be maximized with respect to W and ε by direct differentiation.
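As a hedged illustration of this Bayesian route: under a simple isotropic matrix-normal prior on W, the maximum a posteriori estimate reduces to a ridge-regularized form of the least squares estimator. This sketch keeps the M^u ≈ W M^k column convention used earlier and is a deliberate simplification of the full procedures cited above, not their implementation; the regularization weight `lam` is an assumed hyperparameter.

```python
import numpy as np

def map_estimate(Mk, Mu, lam=1e-2):
    """MAP estimate of W under an assumed isotropic matrix-normal prior:
    W = M^u (M^k)^T (M^k (M^k)^T + lam*I)^-1.
    As lam -> 0 this tends to the unregularized least squares solution."""
    n1 = Mk.shape[0]
    return Mu @ Mk.T @ np.linalg.inv(Mk @ Mk.T + lam * np.eye(n1))
```

The regularizer stabilizes the estimate when the Gram matrix M^k (M^k)^T is ill-conditioned, e.g. when few or strongly correlated training patches are available in the search window.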
(37) According to an embodiment, at least one patch belongs to an image which does not comprise the block of pixels to be predicted. That means that the search window SW is defined not only in the image where the block of pixels to be predicted is located but also in at least one other either previous or following image or both.
(38) The method also comprises a step 2 for predicting the block of pixels X.sup.u from a prediction block computed by applying the determined mapping F1 on the neighbourhood X.sup.k of the block of pixels to predict.
(39) According to an embodiment, the block of pixels to be predicted is predicted by applying the determined mapping on the neighbourhood of said block of pixels to be predicted. In other words, the block X.sup.u is predicted by the prediction block .
(40) According to an embodiment illustrated in , the step 2 comprises a sub-step 22 for searching in the search window SW at least one other patch X.sub.m,p similar to the patch X′, and a sub-step 23 for obtaining a final prediction of the block of pixels to be predicted from said at least one other patch.
(41) According to a variant illustrated in , is then obtained by linearly combining the P blocks of pixels X.sub.m,p.sup.u. Said linear combination uses the weighting coefficients used to predict the neighbourhood X.sup.k.
(42) According to a variant illustrated in of the block of pixels to be predicted is then obtained by linearly combining the blocks of pixels X.sub.m,p.sup.u of said (P−1) patches and the block of pixels of the most similar patch. Said linear combination uses the weighting coefficients used to predict the neighbourhood X.sup.k.
(43) For example, the weights used by such a linear combination of neighbourhoods are computed with a similarity kernel function in order to give higher weights to neighbourhoods which are more similar to the neighbourhood (A. Wong and J. Orchard, “A nonlocal-means approach to exemplar-based inpainting,” in IEEE int. Conf image Process. (ICIP), 2006, pp. 2600-2603).
(44) According to a variant, the weighting coefficient relative to a block X.sub.m,p.sup.u may also depend on the distance in the image I between that block and the block of pixels to predict.
(45) Mathematically speaking, the final prediction block is given by:
(46)
(47) with
(48)
and h is a decay coefficient, under the constraint that Σ.sub.pα.sub.p=1.
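The weighted combination described above can be sketched as follows, using an exponential similarity kernel in the spirit of the non-local means weighting cited in the text. The kernel form, the decay coefficient h, and all names are illustrative assumptions rather than the patent's exact formula; only the normalization constraint Σ.sub.pα.sub.p=1 is taken directly from the text.

```python
import numpy as np

def combine_patches(target_tpl, templates, blocks, h=10.0):
    """Predict a block as a similarity-weighted average of P candidate
    blocks.  Weights alpha_p use an assumed exponential kernel on the
    squared template distances, with decay coefficient h, and are
    normalized so that sum_p alpha_p = 1."""
    d = np.array([np.sum((target_tpl - t) ** 2) for t in templates])
    w = np.exp(-d / h)            # more similar template -> higher weight
    alpha = w / w.sum()           # enforce sum_p alpha_p = 1
    return sum(a * b for a, b in zip(alpha, blocks))
```

Templates closer to the target neighbourhood thus contribute more to the final prediction block, while the normalization keeps the combination an affine average of the candidate blocks.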
(49) According to a variant illustrated in is considered, and the block of pixels to be predicted is predicted by the block of pixels of said most similar patch.
(50) Multiple criteria exist in the art to quantify the similarity in terms of visual content of two blocks of pixels (including templates and patches). For example, a Euclidean distance between the pixels of the blocks may be used as a similarity measure. But the invention is not restricted to such a particular similarity measure.
(51) A patch X.sub.m,p minimizes a Euclidean distance with a patch X when:
(52) X.sub.m,p=arg min.sub.i=1, . . . ,M d.sub.i
(53) where d.sub.i=∥X−X.sub.m,p∥.sub.2.sup.2 and M is the number of all (or a subset of the) possible patches in the search window SW.
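The nearest-patch selection above amounts to a linear scan over the candidates; a minimal sketch, with illustrative names:

```python
import numpy as np

def best_match(target, candidates):
    """Return the index of the candidate patch that minimizes the
    squared Euclidean distance d_i = ||X - X_i||_2^2 to the target
    patch (template plus block, flattened to vectors)."""
    d = [np.sum((target - c) ** 2) for c in candidates]
    return int(np.argmin(d))
```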
(54) The invention concerns also a method for coding an image potentially belonging to a sequence of images.
(55) The term “motion” designates data relative to any motion. They comprise motion vectors and potentially the indexes of a reference image which allow the identification of that reference image in a sequence of images. They also comprise information indicating the interpolation type which is applied to a reference block to get a prediction block.
(56) The term “residue” designates the data obtained after extraction of other data. The extraction is generally a subtraction of prediction pixels from source pixels. However, the extraction is more general and comprises notably a weighted subtraction.
(57) The term “transformed residual data” designates residual data on which a transform has been applied. A DCT (Discrete Cosine Transform) is an example of such a transform and is described in chapter 3.4.2.2 of the book of I. E. Richardson titled “H.264 and MPEG-4 video compression” published by J. Wiley & Sons in September 2003. The wavelet transform described in chapter 3.4.2.3 of the book of I. E. Richardson and the Hadamard transform are some other examples of transforms which may also be used.
(58) Such transforms “transform” a block of image data, for example residual data of luminance and/or chrominance, to a “transformed data block” also called “frequency data block” or “coefficient block”. A coefficient block usually comprises a low-frequency coefficient also called DC coefficient and high-frequency coefficients also called AC coefficients.
(59) The term “prediction data” designates data used to predict other data. A prediction block is a block of pixels with associated prediction data. A prediction block of an image is obtained from one or more reference blocks of that image (spatial or intra-prediction), from one or more reference blocks of another image (monodirectional temporal prediction), or from multiple reference blocks belonging to more than one different image (bidirectional temporal prediction).
(60) The term “reconstructed” designates data (for example pixels, blocks) obtained after merging of residues with prediction data. The merging is generally a sum of prediction pixels with residues. However, the merging is more general and comprises notably the weighted sum. A reconstructed block is a block of reconstructed pixels.
(61) In reference to image decoding, the terms “reconstruction” and “decoding” are very often used as being synonymous. Thus, a “reconstructed block” is also designated under the terminology of “decoded block”.
(62)
(63) In the course of a step 81, a prediction block Bpred is determined from motion data outputted by a well-known block-based motion estimation process (“block matching”). However, the invention is not restricted by the use of such a process to determine a prediction Bpred.
(64) In the course of a step 82, a residual block Bres is determined by extracting the prediction block Bpred from the current block of pixels Bc.
(65) For example, the residual block Bres equals the difference pixel by pixel of the current block of pixels Bc and the prediction block Bpred.
(66) In the course of a step 83, the residual block Bres and potentially the motion data relative to the prediction block are encoded to a signal F of coded data.
(67) Usually, the step 83 comprises, before entropy encoding, a transformation of the residual block Bres to a block of coefficients followed by a quantization of said block of coefficients using a quantization parameter.
(68) According to a variant, the residual block Bres is only quantized before entropy encoding.
(69) For example, the residual block Bres, possibly transformed and quantized, is encoded according to a well-known entropy coding process such as VLC-type (Variable Length Coding) coding, using for example determined VLC tables such as described in chapter 3.5.2 of the book of I. E. Richardson titled “H.264 and MPEG-4 video compression” published by J. Wiley & Sons in September 2003.
(70) According to a variant, a CABAC-type process (Context-based Adaptive Binary Arithmetic Coding) may be used as described in chapter 6.5.4 of the book of I. E. Richardson or in section 9.3 of the document ISO/IEC 14496-10 titled “Information technology—Coding of audio-visual objects—Part 10: Advanced Video Coding”.
(71) According to a variant, a CAVLC-type process (Context-based Adaptive Variable Length Coding) may be used as described in section 9.2 of the document ISO/IEC 14496-10 titled “Information technology—Coding of audio-visual objects—Part 10: Advanced Video Coding” or in chapter 6.4.13.2 of the book of I. E. Richardson.
(72)
(73) According to a variant, if the data extracted from the signal F comprise residual data which have been only quantized, the residual block Bres is only inverse quantized. The invention is not restricted by the process used to obtain the residual block Bres.
(74) The method for decoding also comprises a step 92 for determining a prediction block Bpred for the block of pixels to decode from the signal F. As an example, the prediction block Bpred is determined from motion data by decoding a part of the signal of coded data F relative to the block of pixels to decode.
(75) According to a variant, the prediction block Bpred is determined from motion data reconstructed from a template matching process. Such a process is, for example, described in the document VCEG-AG16 of Steffen Kamp et al. titled “Decoder Side Motion Vector Derivation”, published on 20 October 2007 at Shenzhen, China, at the 33rd meeting of the VCEG group of the ITU-T.
(76) The method for decoding also comprises a step 93 for decoding (or reconstructing) the block of pixels by merging the residual block Bres with the prediction block Bpred.
(77) As an example, the decoded block of pixels equals the sum pixel to pixel of the residual block Bres and the prediction block Bpred.
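The residual extraction of step 82 and the reconstruction of step 93 mirror each other; the minimal pixel-wise case (ignoring transform and quantization, which would make the round trip lossy) can be sketched as:

```python
import numpy as np

# Step 82: the residual block is the pixel-by-pixel difference between
# the current block Bc and its prediction Bpred.
Bc = np.array([[10, 12], [14, 16]])
Bpred = np.array([[9, 12], [15, 15]])
Bres = Bc - Bpred

# Step 93: decoding merges the residual block with the prediction block;
# in this simple sum case the reconstruction is exact.
Brec = Bpred + Bres
```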
(78) The method for encoding and decoding described in relation with the
(79)
(80) The apparatus 12 receives a block of pixels Bc to encode as input.
(81) The apparatus 12 is described in terms of functional modules which implement at least a temporal prediction based coding method.
(82) Only the functional modules of the apparatus 12 in relation with the temporal prediction based coding (INTER coding) are shown in
(83) The apparatus 12 comprises a module 1200 for extracting, for example on a pixel basis, a prediction block Bpred from a current block of pixels Bc to generate a residual block Bres. The apparatus 12 also comprises a module 1202 configured to transform and quantize the residual block Bres. The transform T is, for example, a Discrete Cosine Transform (DCT) or any other block-based transform such as a wavelet-based transform. The apparatus 12 further comprises a module 1206 which implements the inverse operations: inverse quantization Q.sup.−1 followed by an inverse transform T.sup.−1. The apparatus 12 further comprises a module 1204 for entropy encoding the quantized data to a signal F of coded data. The module 1206 is linked to a module 1208 which merges, for example on a pixel basis, the block of pixels output by the module 1206 and the prediction block Bpred to generate a block of reconstructed data which is stored in a memory 1210.
(84) The apparatus 12 also comprises a module 1212 to estimate at least one motion vector between the block of pixels Bc and a block of pixels of a reference image Ir stored in the memory 1210, this reference image having been coded and reconstructed.
(85) According to a variant, the motion estimation may be executed between the block of pixels Bc and a block of pixels of an original reference image Ic. In that case, the memory 1210 is not linked to the module 1212.
(86) According to a well-known process, the motion estimation searches in the reference image Ir (or Ic) for motion data, such as for example a motion vector, in order to minimise an error computed between the block of pixels Bc and a block of pixels in that reference image identified by the motion data.
(87) The motion data are then transmitted by the module 1212 to a decision module 1214 which selects a coding mode for the block of pixels Bc from a set of determined coding modes. The selected coding mode is for example the coding mode which minimizes a criterion such as a rate-distortion based criterion. However, the invention is not limited to any process to select a coding mode, which may be selected using any other criterion.
(88) The selected coding mode and the motion data are transmitted to the module 1204 for entropy encoding in the signal F.
(89) The module 1216 determines the prediction block Bpred from the selected coding mode output by the module 1214 and, potentially, from motion data output by the module 1212.
(90) The module 1216 is configured to implement a prediction method as described in relation with
(91)
(92) The apparatus 13 receives a signal F as an input. The signal F of coded data, for example, has been transmitted by an apparatus as described in relation with
(93) The apparatus 13 comprises a module 1300 to generate decoded data, for example coding modes or residual data relative to coded data to decode.
(94) The apparatus 13 further comprises a module to decode motion data.
(95) According to an embodiment, this module is the module 1300 which entropy decodes a part of the signal F of coded data relative to the coding mode and potentially motion data.
(96) According to a variant, not shown, the module to decode motion data is configured to implement a motion estimation process. This solution to decode the motion data is known as a template matching process in the prior art.
(97) The decoded data relative to the block of pixels to decode are then transmitted to a module 1302 which applies an inverse quantization followed by an inverse transform. The module 1302 is identical to the module 1206 of the apparatus 12. The module 1302 is linked to the module 1304 which merges, for example on a pixel basis, the residual block output by the module 1302 and a prediction block to generate a decoded block of pixels Bc (also called reconstructed block) which is then stored in a memory 1306. The apparatus 13 also comprises a module 1308 which determines a prediction block Bpred from the coding mode extracted from the signal F for the block of pixels to decode, and potentially, from motion data output by the module which reconstructs the motion data.
(98) The module 1308 is configured to implement a prediction method as described in relation with
(99) On
(100)
(101) Processing unit 53 can be implemented as a microprocessor, a custom chip, a dedicated (micro-) controller, and so on. Memory 55 can be implemented in any form of volatile and/or non-volatile memory, such as a RAM (Random Access Memory), hard disk drive, non-volatile random-access memory, EPROM (Erasable Programmable ROM), and so on. Device 500 is suited for implementing a data processing device according to the method of the invention. The data processing device 500 and the memory 55 work together for determining a mapping F1 of a causal neighborhood around that block of pixels on the block of pixels to predict in order that the block of pixels of each patch is best predicted by mapping the neighbourhood of that patch on the block of pixels of that patch, and for predicting the block of pixels from a prediction block computed by applying the determined mapping (F1, W) on the neighbourhood of the block of pixels to predict.
(102) The processing unit and the memory of the device 500 are also configured to implement any embodiment and/or variant of the coding and decoding methods described in relation to
(103) While not explicitly described, the present embodiments and variants may be employed in any combination or sub-combination.