Method and apparatus for generating a super-resolved image from a single image
09760977 · 2017-09-12
Assignee
Inventors
CPC classification
G06T3/4092
PHYSICS
G06T3/4053
PHYSICS
International classification
G06T3/40
PHYSICS
Abstract
Known methods for generating super-resolved images from single input images have various disadvantages. An improved method for generating a super-resolved image from a single low-resolution input image comprises up-scaling the input image to generate an initial version of the super-resolved image, searching, for each patch of the low-resolution input image, similar low-resolution patches in first search windows within down-sampled versions of the input image, and determining, in less down-sampled versions of the input image, high-resolution patches that correspond to the similar low-resolution patches. The determined high-resolution patches are cropped, a second search window is determined in the initial version of the super-resolved image, and a best-matching position for each cropped high-resolution patch is searched within the second search window. Finally, each cropped high-resolution patch is added to the super-resolved image at its respective best-matching position.
Claims
1. A method for generating a super-resolved image from a low-resolution input image, comprising: generating an initial super-resolved image by upsampling the input image; generating multiple down-scaled versions of the input image; searching in first search windows within the down-scaled versions of the input image for patches similar to patches of the input image; searching, for each patch that is found within a first down-scaled version of the input image and that is similar to a patch of the input image, a corresponding upscaled patch within a second down-scaled version of the input image that is larger than the first down-scaled version of the input image; cropping the upscaled patches, wherein the outermost pixels of at least one edge of the upscaled patches are discarded; and adding the cropped upscaled patches to the initial super-resolved image, wherein a weighted combination of the cropped upscaled patches to the initial super-resolved image is generated, and wherein the position of the cropped upscaled patches is determined within second search windows that are centred at projected positions of the centres of their corresponding patches of the input image to the scale of the initial super-resolved image.
2. The method of claim 1, wherein the generated down-scaled versions of the input image are at different scales; the input image is separated into a plurality of overlapping patches and for the patches of the input image, said searching in first search windows, for a current patch of the input image, further comprising determining a corresponding position in the initial super-resolved image; said searching corresponding upscaled patches further comprising: searching, for the current patch of the input image k most similar patches within a first search window in each down-sampled version of the input image, k being a pre-defined number and k being one or more; determining for each found patch being one of said k most similar patches in any particular down-sampled version of the input image, a patch at a corresponding position in a different version of the input image that has the next higher resolution than the particular down-sampled version; and defining a second search window within the initial super-resolved image, the second search window being around said determined position corresponding to the current patch; said cropping the upscaled patches further comprising cropping each determined patch from said different version of the input image, wherein pixels of at least one edge of the patch are removed; said adding the cropped upscaled patches to the initial super-resolved image further comprising searching for each determined and cropped patch a best-matching position within the second search window, wherein the determined patches are compared with a portion of the initial super-resolved image that is within the second search window; and adding each determined and cropped patch to the initial super-resolved image at the best-matching position.
3. The method according to claim 1, wherein the second search window has the same size as each determined patch from said different version of the input image, before the cropping.
4. The method according to claim 1, wherein the size of the first search window is 4-6 times the size of each determined patch from said different version of the input image, before the cropping.
5. The method according to claim 1, wherein said generating an initial super-resolved image by upsampling the input image further comprises determining the contents of the higher layer by resizing the lower layer by a factor C and then deblurring the resized lower layer.
6. An apparatus for generating a super-resolved image from a low-resolution input image, comprising: at least one processor connected to an associated memory, the at least one processor being configured to: generate an initial super-resolved image by upsampling the input image; generate a plurality of down-sampled versions at different scales of the input image; separate the input image into a plurality of overlapping patches; search, for a current patch of the input image, k most similar patches within a first search window in each down-sampled version of the input image, k being a pre-defined number; determine for each found patch being one of said k most similar patches in any particular down-sampled version of the input image, a patch at a corresponding position in a less down-sampled version of the input image; define a second search window within the initial super-resolved image, the second search window being around a position that corresponds to the position of the current patch of the input image; crop each determined patch from said less down-sampled version of the input image, wherein pixels of at least one edge of the patch are removed; search, for each determined and cropped patch, a best-matching position within the second search window, wherein the determined patches are compared with a portion of the initial super-resolved image that is within the second search window; and add each determined and cropped patch to the initial super-resolved image at the best-matching position, wherein a weighted combination of the cropped upscaled patches to the initial super-resolved image is generated, and determine the position of the cropped patches within second search windows that are centred at projected positions of the centres of their corresponding patches of the input image to the scale of the initial super-resolved image.
7. The apparatus of claim 6, wherein the processor is further configured to determine a first search window in each down-sampled version of the input image.
8. A non-transitory computer readable storage medium having executable instructions to cause a computer to perform a method for generating a super-resolved image from a low-resolution input image, comprising generating an initial super-resolved image by upsampling the input image; generating multiple down-scaled versions of the input image; searching in first search windows within the down-scaled versions of the input image for patches similar to patches of the input image; searching corresponding upscaled patches; cropping the upscaled patches; and adding the cropped upscaled patches to the initial super-resolved image, wherein a weighted combination of the cropped upscaled patches to the initial super-resolved image is generated, and wherein the position of the cropped upscaled patches is determined within second search windows that are centred at projected positions of the centres of their corresponding patches of the input image to the scale of the initial super-resolved image.
9. The non-transitory computer readable storage medium of claim 8, wherein the generated down-scaled versions of the input image are at different scales; the input image is separated into a plurality of overlapping patches; and for the patches of the input image, said searching in first search windows, for a current patch of the input image, further comprising determining a corresponding position in the initial super-resolved image; said searching corresponding upscaled patches further comprising: searching, for the current patch of the input image, k most similar patches within a first search window in each down-sampled version of the input image, k being a pre-defined number and k being one or more; determining, for each found patch being one of said k most similar patches in any particular down-sampled version of the input image, a patch at a corresponding position in a different version of the input image that has the next higher resolution than the particular down-sampled version; and defining a second search window within the initial super-resolved image, the second search window being around said determined position corresponding to the current patch; said cropping the upscaled patches further comprising cropping each determined patch from said different version of the input image, wherein pixels of at least one edge of the patch are removed; said adding the cropped upscaled patches to the initial super-resolved image further comprising: searching for each determined and cropped patch a best-matching position within the second search window, wherein the determined patches are compared with a portion of the initial super-resolved image that is within the second search window; and adding each determined and cropped patch to the initial super-resolved image at the best-matching position.
10. The non-transitory computer readable storage medium of claim 8, wherein the second search window has the same size as each determined patch from said different version of the input image, before the cropping.
11. The non-transitory computer readable storage medium of claim 8, wherein the size of the first search window is 4-6 times the size of each determined patch from said different version of the input image, before the cropping.
12. The non-transitory computer readable storage medium of claim 8, wherein in generating an initial super-resolved image by upsampling the input image, the contents of the higher layer are determined by resizing the lower layer by a factor C and then deblurring the resized lower layer.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Exemplary embodiments of the invention are described with reference to the accompanying drawings.
DETAILED DESCRIPTION OF THE INVENTION
(11) The present invention provides a mechanism to introduce high-resolution example patches from different scales, partly similar to the method described by Glasner et al. However, the present invention does not require the applying of iterated back-projection (IBP). Known methods require IBP to ensure the consistency of the lower part of the spectrum. According to at least some embodiments of the invention, IBP can be avoided by reconstructing the high-frequency band of the super-resolved image at progressively larger scales and adding it to the interpolation-based low-frequency band. By doing so, it is ensured that the interpolation-based low-frequency band remains consistent with that of the input image in each scale.
(12) Main steps of a method according to the present invention, as shown in the accompanying drawings, are the following.
(13) The first step of constructing examples 11 generates a number of lower-resolution layers that will be used for multi-scale self-similarity analysis. The second step 12 performs the multi-scale self-similarity analysis, which basically includes a search for most similar patches across the several resolutions. The third step of reconstructing the HR image 13 obtains the reconstruction of a super-resolved layer by combination of the examples retrieved by the multi-scale analysis. Several lower-resolution scales of the input images are generated. For each image patch (e.g. 3×3 pixels) of the input image, the k closest matches in each lower scale are obtained. Typical values for k are k=1, k=2 or k=3, but k may be higher. The position and enclosing rectangle of each of these patches are enlarged to the scale of the original input image in order to generate examples for each patch in the input image in higher scales. The algorithm then proceeds in a coarse to fine manner by resizing the current highest layer, applying a deblurring step and synthesizing the high frequency detail by combining the overlapping high-frequency contributions of the examples obtained from the low-resolution image with inverse down-scaling to the up-scaling of the current layer.
(14) Next, details about the example construction 11 are described.
(15) Given the input low-resolution image L.sub.0, the desired magnification factor M and the cross-scale magnification C, the number of intermediate layers (for multi-scale analysis) is computed as
(16) N.sub.L=ceil(log M/log C)
(17) In one embodiment, the lower resolution layers L.sub.−i, i={1, . . . , N.sub.L} are simply obtained by applying a resizing of the input image by a factor (1/C).sup.i. In one embodiment, this can be accomplished by analytic resampling.
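The layer construction described above can be sketched as follows. This is an illustrative Python sketch with hypothetical helper names; plain index subsampling stands in for the analytic resampling of the embodiment, and the number of layers follows from the magnification factor M and the cross-scale magnification C as ceil(log M/log C).

```python
import math
import numpy as np

def num_layers(M, C):
    # Number of intermediate layers so that C**N_L >= M.
    return math.ceil(math.log(M) / math.log(C))

def downscale(img, factor):
    # Plain index subsampling as a stand-in for analytic resampling.
    h, w = img.shape
    nh, nw = int(round(h * factor)), int(round(w * factor))
    rows = np.minimum((np.arange(nh) / factor).astype(int), h - 1)
    cols = np.minimum((np.arange(nw) / factor).astype(int), w - 1)
    return img[np.ix_(rows, cols)]

def build_lower_layers(L0, C, n_layers):
    # L_{-i} is the input image resized by a factor (1/C)**i.
    return [downscale(L0, (1.0 / C) ** i) for i in range(1, n_layers + 1)]

M, C = 4.0, 2.0
N_L = num_layers(M, C)
L0 = np.random.rand(16, 16)
layers = build_lower_layers(L0, C, N_L)
print(N_L, [layer.shape for layer in layers])   # 2 [(8, 8), (4, 4)]
```

In practice the resampling kernel matters for the quality of the example patches; the subsampling above is only to make the layer geometry concrete.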
(19) Next, details about the multi-scale analysis 12 are described.
(20) Given a subdivision of the input image L.sub.0 in overlapping patches with a size of 3×3 pixels (in one embodiment; in other embodiments the patch sizes can be different), the goal of this stage is to find the k closest 3×3 patches to each patch from the input image in each layer L.sub.−i. The location of each of these similar patches (once up-scaled by a factor C.sup.i) determines the position of a larger patch of size (3C.sup.i)×(3C.sup.i) within the input image L.sub.0, which can be used as a higher resolution example of the original patch. This point will be described in more detail below, with respect to the reconstruction stage.
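The patch search and the scale mapping just described can be sketched as follows. This is a minimal illustration with hypothetical helper names; SAD is used as the similarity cost, matching the cost mentioned for the thresholds later in the description.

```python
import numpy as np

def k_best_matches(patch, layer, k):
    """Exhaustive SAD search for the k most similar 3x3 patches in
    `layer`; returns (row, col, cost) for the top-left corners."""
    h, w = layer.shape
    scores = []
    for r in range(h - 2):
        for c in range(w - 2):
            cost = float(np.sum(np.abs(layer[r:r+3, c:c+3] - patch)))
            scores.append((cost, r, c))
    scores.sort()
    return [(r, c, cost) for cost, r, c in scores[:k]]

def example_region(r, c, C, i):
    """Map a 3x3 match at (r, c) in layer L_{-i} to the enclosing
    (3*C**i) x (3*C**i) example region in the input image L_0."""
    s = C ** i
    return int(round(r * s)), int(round(c * s)), int(round(3 * s))

layer = np.zeros((6, 6)); layer[2:5, 2:5] = 1.0
patch = np.ones((3, 3))
matches = k_best_matches(patch, layer, k=1)
r, c, cost = matches[0]
print((r, c), example_region(r, c, C=2, i=1))  # (2, 2) (4, 4, 6)
```

The mapping shows why the lower layers supply high-resolution examples: a perfect 3×3 match in L.sub.−1 pins down a 6×6 region of L.sub.0 for C=2.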
(23) The implemented algorithm performs an exhaustive search over a window. Localizing the search with a window (i.e. limiting the search space to the search window) avoids spending computing cycles in far regions with a very low likelihood of containing similar structures (in scales similar to the input image) and allows extending the search to larger relative areas within the image for different scales (further away from that of the input image), which could contain similar structures at different scales by effect of the perspective.
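The windowed exhaustive search can be sketched as follows; the function name, the window half-width parameter and the SAD cost are illustrative assumptions, not claim language.

```python
import numpy as np

def windowed_search(patch, layer, center, win):
    """Exhaustive SAD search restricted to a square window of
    half-width `win` around `center`, instead of the whole layer."""
    ph, pw = patch.shape
    h, w = layer.shape
    r0, r1 = max(0, center[0] - win), min(h - ph, center[0] + win)
    c0, c1 = max(0, center[1] - win), min(w - pw, center[1] + win)
    best, best_pos = float("inf"), None
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            cost = float(np.sum(np.abs(layer[r:r+ph, c:c+pw] - patch)))
            if cost < best:
                best, best_pos = cost, (r, c)
    return best_pos, best

layer = np.arange(100, dtype=float).reshape(10, 10)
patch = layer[4:7, 4:7].copy()
pos, cost = windowed_search(patch, layer, center=(4, 4), win=2)
print(pos, cost)   # (4, 4) 0.0
```

Enlarging `win` for layers further from the input scale reproduces the behaviour described above: a small search area near the input scale, a larger relative area at distant scales.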
(24) Other embodiments, which may have lower resulting quality, apply approximate global search for best patches instead of the exhaustive localized search explained above. In this case, the so-called Approximate Nearest Neighbors (ANN) approach may be used.
(25) Next, details about the HR reconstruction step 13 are described.
(26) The overall mechanism of one embodiment of the reconstruction stage is depicted in the accompanying drawings.
(27) The algorithm is applied layer by layer, starting from L.sub.0. First, L.sub.0 is resized by the cross-scale magnification factor C (e.g. 3/2 in this example) and deblurred with L1 data cost (also known as "Manhattan norm" cost; note that the L1 data cost is not the L.sub.1 image layer) and Total Variation (TV) regularization (i.e. a cost function such as D(L1)+λR), resulting in an initial estimate of the layer L.sub.1. For a current patch P.sub.40 in L.sub.0, the k best matches from L.sub.−1 are searched (in this example, k=2, but k can be different, e.g. in the range of 1-10). For example, the found best-matching patches are denoted P.sub.−1.41 and P.sub.−1.42.
(28) In general, only patches with a cost (according to a cost function, e.g. the cost function mentioned above) lower than a predefined threshold th are accepted as best matching patches, rather than all the k neighbors. In one embodiment, the threshold for the L.sub.−1 layer's SAD cost is th=0.08. In one embodiment, the threshold is decreased for each successive layer. This reflects the fact that the likelihood that slightly dissimilar patches actually lead to good examples decreases with the magnification factor. In one embodiment, the threshold decrease is 0.02 per successive layer (keeping in mind that cost thresholds cannot be negative, so that the minimum threshold is zero).
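The threshold schedule of this embodiment can be written out as a small helper; the function name is illustrative, while the 0.08 starting value and 0.02 step are the values stated above.

```python
def layer_threshold(i, th0=0.08, step=0.02):
    """SAD acceptance threshold for layer L_{-i}: th0 for i=1,
    reduced by `step` per successive layer, never below zero."""
    return max(0.0, th0 - step * (i - 1))

# Thresholds for layers L_-1 ... L_-6 decay to zero and stay there.
print([layer_threshold(i) for i in range(1, 7)])
```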
(29) In this embodiment, Iterative Back-Projection (IBP) is used to ensure the spectral compatibility of layers L.sub.i and L.sub.i-1. The procedure is repeated until reaching L.sub.N.sub.L, where N.sub.L is the total number of layers.
(31) In an alternative embodiment, which is described in the following, the usage of IBP is omitted. This embodiment uses high-frequency synthesis, so that no problem of spectral compatibility between different layers L.sub.i and L.sub.i-1 occurs.
(32) Next, High-frequency synthesis is described.
(33) In this embodiment, the problem of super-resolving each intermediate layer L.sub.i is treated as the reconstruction of the missing high-frequency band. By resizing a layer L.sub.i-1 by a factor C, the filled-in bandwidth of layer L.sub.i is 1/C. In order to exploit this, the input L.sub.0 layer is further analyzed by decomposing it into the low-frequency band LF.sub.0 (with bandwidth 1/C) and the corresponding high-frequency band (HF.sub.0=L.sub.0−LF.sub.0). For this purpose, the same filter or interpolating kernel as for creating the lower layers L.sub.−i and the upscaled layers L.sub.i is used. In this embodiment, IBP is not used. This is advantageous since IBP leads to ringing artifacts, which decrease image quality or need additional treatment. Such treatment can therefore be omitted. In this embodiment, the examples are not directly the cropped larger patches from L.sub.0, but rather cropped patches from HF.sub.0. The corresponding low-frequency band from LF.sub.0 is used for looking for the target position in L.sub.i. Then, the high-frequency examples are accumulated in their target positions.
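The band split underlying the high-frequency synthesis can be illustrated as follows; a simple box filter stands in for the interpolating kernel of the embodiment (an assumption for illustration), and the two bands reconstruct the input exactly by construction.

```python
import numpy as np

def lowpass(img, k=3):
    """Box-filter low-pass as a stand-in for the interpolating kernel
    (border pixels are handled by clamping the averaging window)."""
    h, w = img.shape
    out = np.empty_like(img)
    r = k // 2
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

L0 = np.random.rand(8, 8)
LF0 = lowpass(L0)      # band-limited part, used to find target positions
HF0 = L0 - LF0         # high-frequency examples are cropped from this band
print(np.allclose(LF0 + HF0, L0))   # True
```

Because HF.sub.0 is defined as the residual L.sub.0−LF.sub.0, accumulating high-frequency examples on top of the interpolated low-frequency band cannot disturb that band, which is why spectral compatibility holds without IBP.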
(36) In one embodiment, the algorithm is applied only once. In another embodiment, the algorithm is applied iteratively more than once, which results in an iterative reconstruction. That is, for each new layer, the multi-scale analysis is performed taking the previous one as the new L.sub.0. This has the advantage that more examples are available in the higher layers, which are far from the initial scale and for which the set of examples would otherwise be reduced.
(39) The concept can be generalized from images to general digital data structures.
(40) In some embodiments, the upscaled input data structure after filtering 130 by the second low-pass filter F.sub.l,1 is downscaled 140 by a downscaling factor d, with n>d. Thus, a total non-integer upscaling factor n/d is obtained for the low-frequency upscaled data structure L.sub.1. The high-frequency upscaled data structure H.sub.1,init (or H.sub.1 respectively) has the same size as the low-frequency upscaled data structure L.sub.1. The size of H.sub.1 may be pre-defined, or derived from L.sub.1. H.sub.1 is initialized in an initialization step 160 to an empty data structure H.sub.1,init of this size.
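The non-integer upscaling by a factor n/d described above can be sketched as follows; pixel repetition stands in for the interpolation and the second low-pass filter F.sub.l,1, and the names are illustrative.

```python
import numpy as np

def fractional_upscale(img, n, d):
    """Upscale by an integer factor n (pixel repetition here, as a
    placeholder for interpolation plus low-pass filtering), then
    downscale by d, giving a total non-integer factor n/d (n > d)."""
    up = np.repeat(np.repeat(img, n, axis=0), n, axis=1)
    return up[::d, ::d]

L0 = np.arange(36, dtype=float).reshape(6, 6)
L1 = fractional_upscale(L0, n=3, d=2)   # total factor 3/2: 6 -> 9
H1_init = np.zeros_like(L1)             # empty high-frequency layer H_1,init
print(L1.shape, H1_init.shape)          # (9, 9) (9, 9)
```

As stated above, H.sub.1 is simply initialized to an empty (zero) data structure of the same size as the low-frequency upscaled data structure L.sub.1.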
(43) The above description is sufficient for a 1-dimensional (1D) data structure. For 2D data structures, the position of a further subsequent patch is found by vertical patch advance (this may or may not be combined with a horizontal patch advance). Also the vertical patch advance includes an overlap, as mentioned above.
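The 2D patch traversal with horizontal and vertical advance can be sketched as follows; the patch size and advance values are assumptions for illustration, and the last patch in each direction is clamped to the border so the data structure is fully covered.

```python
def patch_positions(height, width, patch, advance):
    """Top-left corners of overlapping patches: horizontal advance
    within a row, vertical advance between rows. An advance smaller
    than the patch size yields the overlap described above."""
    def axis_positions(size):
        pos = list(range(0, size - patch + 1, advance))
        if pos[-1] != size - patch:
            pos.append(size - patch)   # clamp final patch to the border
        return pos
    return [(r, c) for r in axis_positions(height)
                   for c in axis_positions(width)]

pos = patch_positions(7, 7, patch=3, advance=2)
print(len(pos), pos[:3])   # 9 [(0, 0), (0, 2), (0, 4)]
```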
(44) The position of the search window is determined according to the position of the current patch.
(46) In one embodiment, the present invention comprises generating an initial version of the super-resolved (SR) image from a low-resolution input image, searching, for each patch of the input image, similar low-resolution (LR) patches in down-sampled versions of the input image, wherein the searching is performed within first search windows, determining, in less down-sampled versions of the input image, high-resolution (HR) patches that correspond to the similar LR patches, cropping the determined HR patches, determining a second search window in the initial version of the SR image, searching, within the second search window, a best-matching position for each cropped HR patch, and adding each cropped HR patch at its respective best-matching position to the SR image. As a result, the initial SR image is enhanced by the detail information that comes from the added patches.
(47) For generating an initial version of the super-resolved image, any conventional upsampling of the input image can be used.
(48) In various embodiments, important features of the invention are the following:
Simple conventional upsampling/upscaling is used for generating the initial version of the super-resolved image (i.e. the higher layer).
Multiple (at least two) down-scaled versions are generated as lower layers.
HF/detail information patches are obtained from the lower layer images, using a first search window in each lower layer image.
A fixed number k of patches (k-Nearest Neighbours, KNN) is obtained from each lower layer image.
Found patches are cropped, and the cropped patches are overlaid on the initial version of the super-resolved image. Cropping includes removing pixels of at least one edge of the patch. E.g., the cropping of a 5×5 pixel patch results in a 5×4 pixel cropped patch, or a 3×3 pixel cropped patch.
When overlaying the cropped patches on the initial version of the super-resolved image, the overlay position is determined within a second search window. In one embodiment, the second search window has the size of the patch before cropping, e.g. 5×5 pixels. In another embodiment, the second search window is slightly larger, e.g. 6×6 pixels (square), or 5×6, 5×7 or 6×7 pixels (non-square). In yet another embodiment, the second search window is slightly smaller, e.g. 4×4 pixels (square), or 4×5 pixels (non-square). If only one edge of the patch was cropped, the search within the second search window is very simple.
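The cropping and the weighted overlay listed above can be sketched as follows; the helper names are illustrative, and uniform (averaging) weights are assumed for the weighted combination.

```python
import numpy as np

def crop_patch(patch, edges):
    """Remove the outermost pixels of the given edges, e.g.
    edges=('top', 'left') turns a 5x5 patch into a 4x4 patch."""
    t = 1 if "top" in edges else 0
    b = patch.shape[0] - (1 if "bottom" in edges else 0)
    l = 1 if "left" in edges else 0
    r = patch.shape[1] - (1 if "right" in edges else 0)
    return patch[t:b, l:r]

def overlay(acc, weight, patch, pos):
    """Accumulate a cropped patch and its weights so that overlapping
    contributions are averaged in the final combination."""
    r, c = pos
    h, w = patch.shape
    acc[r:r+h, c:c+w] += patch
    weight[r:r+h, c:c+w] += 1.0
    return acc, weight

acc = np.zeros((6, 6)); wgt = np.zeros((6, 6))
p = crop_patch(np.ones((5, 5)), edges=("top", "left"))
print(p.shape)                       # (4, 4)
overlay(acc, wgt, p, pos=(1, 1))
result = np.where(wgt > 0, acc / np.maximum(wgt, 1), 0.0)
print(result[1, 1], result[0, 0])    # 1.0 0.0
```

The `pos` argument would come from the best match inside the second search window; pixels never covered by a cropped patch keep the value of the initial super-resolved image.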
(49) In one embodiment, a method for generating a super-resolved image L.sub.1 from a low-resolution input image L.sub.0 comprises steps of
(50) generating an initial super-resolved image by upsampling the input image,
(51) generating multiple down-scaled versions of the input image,
(52) searching in first search windows within the down-scaled versions of the input image for patches similar to patches of the input image, searching corresponding upscaled patches, cropping the upscaled patches, and adding/overlaying the cropped upscaled patches to the initial super-resolved image, wherein the position of the cropped upscaled patches is determined within second search windows.
(53) In one embodiment, a device for generating a super-resolved image L.sub.1 from a low-resolution input image L.sub.0 comprises
(54) an upsampling module for generating an initial super-resolved image by upsampling the input image,
(55) one or more down-scaling modules for generating multiple down-scaled versions of the input image,
(56) a first search module for searching in first search windows within the down-scaled versions of the input image for patches similar to patches of the input image, a patch projection module for searching corresponding upscaled patches, a cropping module for cropping the upscaled patches, and a pixel overlay module for adding/overlaying the cropped upscaled patches pixel-wise to the initial super-resolved image, wherein the position of the cropped upscaled patches is determined within second search windows.
(57) While there has been shown, described, and pointed out fundamental novel features of the present invention as applied to preferred embodiments thereof, it will be understood that various omissions and substitutions and changes in the apparatus and method described, in the form and details of the devices disclosed, and in their operation, may be made by those skilled in the art without departing from the spirit of the present invention. It is expressly intended that all combinations of those elements that perform substantially the same function in substantially the same way to achieve the same results are within the scope of the invention. Substitutions of elements from one described embodiment to another are also fully intended and contemplated.
(58) It will be understood that the present invention has been described purely by way of example, and modifications of detail can be made without departing from the scope of the invention. Each feature disclosed in the description and (where appropriate) the claims and drawings may be provided independently or in any appropriate combination. Features may, where appropriate be implemented in hardware, software, or a combination of the two. Connections may, where applicable, be implemented as wireless connections or wired, not necessarily direct or dedicated, connections.
(59) Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims.
CITED REFERENCES
(60) J. Yang, J. Wright, T. Huang, and Y. Ma, Image Super-resolution via Sparse Representation, IEEE Trans. on Image Processing, vol. 19, no. 11, pp. 2861-2873, Nov. 2010
(61) D. Glasner, S. Bagon, and M. Irani, Super-Resolution from a Single Image, ICCV 2009