SYSTEM AND METHOD FOR IMAGE DEMOSAICING
20200184598 · 2020-06-11
CPC classification
H04N2209/046
ELECTRICITY
G06T3/4015
PHYSICS
Abstract
A method of adaptive demosaicing of a mosaiced image includes receiving the mosaiced image having a mosaic pattern with a first color, a second color, and a third color, obtaining a noise level index for the mosaiced image, determining whether the noise level index is greater than a threshold value, and, in response to the noise level index being greater than the threshold value, performing an adaptive demosaicing on the mosaiced image to generate an adaptively demosaiced image. The adaptive demosaicing includes interpolating values of a portion of unknown pixels of the first color in a horizontal or vertical direction, interpolating values of a portion of unknown pixels of the second color in a horizontal or vertical direction, and interpolating values of a portion of unknown pixels of the third color in a horizontal or vertical direction.
Claims
1. A method of adaptive demosaicing of a mosaiced image comprising: receiving the mosaiced image having a mosaic pattern with a first color, a second color, and a third color; obtaining a noise level index for the mosaiced image; determining whether the noise level index is greater than a threshold value; and in response to the noise level index being greater than the threshold value, performing an adaptive demosaicing on the mosaiced image to generate an adaptively demosaiced image, the adaptive demosaicing including: interpolating values of a portion of unknown pixels of the first color in a horizontal or vertical direction; interpolating values of a portion of unknown pixels of the second color in a horizontal or vertical direction; and interpolating values of a portion of unknown pixels of the third color in a horizontal or vertical direction.
2. The method of claim 1, wherein interpolating the values of the portion of unknown pixels of the second color and interpolating the values of the portion of unknown pixels of the third color are based on interpolating the values of the portion of unknown pixels of the first color.
3. The method of claim 1, wherein interpolating the values of the portion of unknown pixels of the second color and interpolating the values of the portion of unknown pixels of the third color occurs after interpolating the values of the portion of unknown pixels of the first color.
4. The method of claim 1, wherein the mosaiced image includes red, green, and blue pixels in a Bayer pattern.
5. The method of claim 1, wherein the first color is green, the second color is red, and the third color is blue.
6. The method of claim 1, wherein horizontal or vertical interpolation is selected based on known color data of a pixel location.
7. The method of claim 6, wherein the horizontal or vertical interpolation is selected based on a respective vertical and horizontal nearest neighbor pixel comparison.
8. The method of claim 6, wherein selecting one of the horizontal interpolation and the vertical interpolation includes calculating a horizontal gradient and calculating a vertical gradient.
9. The method of claim 1, wherein one of a horizontal interpolation and a vertical interpolation is selected for each given pixel location and interpolation of the unknown pixels of the first, second, and third color is performed for that pixel location using the selected interpolation direction.
10. The method of claim 9, wherein selection of the horizontal interpolation or the vertical interpolation for a given pixel location is based on identifying an interpolation direction at that pixel location that has the least change among similar neighboring pixel color values.
11. The method of claim 1, wherein the adaptive demosaicing further includes applying a median filter to the demosaiced image.
12. The method of claim 1, further comprising displaying the demosaiced image on a display.
13. The method of claim 1, wherein receiving the mosaiced image includes receiving the mosaiced image from an image sensor array.
14. The method of claim 1, further comprising generating the mosaiced image based on sensed light that passes through a lens and a mosaicing filter.
15. The method of claim 1, wherein interpolating the values of the portion of unknown pixels of the first color includes interpolating a value of an unknown pixel of the first color based on a low-pass filter, a known value of one of the second color and the third color at the pixel, and known values of the first color at neighboring pixels.
16. The method of claim 15, further comprising obtaining a zero phase finite impulse response filter as the low-pass filter by solving an optimization problem.
17. A method of adaptive demosaicing of a mosaiced image comprising: receiving the mosaiced image having a mosaic pattern with at least one color; obtaining a noise level index for the mosaiced image; determining whether the noise level index is greater than a threshold value; in response to the noise level index being not greater than the threshold value, performing a bilinear demosaicing on the mosaiced image to generate a bilinearly demosaiced image; and in response to the noise level index being greater than the threshold value, performing an adaptive demosaicing on the mosaiced image to generate an adaptively demosaiced image, the adaptive demosaicing including interpolating values of a portion of unknown pixels of the at least one color in a horizontal or vertical direction.
18. A method of demosaicing a mosaiced image comprising: receiving the mosaiced image having a mosaic pattern with a first, second and third color; selecting, from a plurality of demosaicing methods, a demosaicing method to be performed on the mosaiced image based on a comparison between a noise level index of the mosaiced image and at least one threshold value; and demosaicing the mosaiced image using the selected demosaicing method to generate a demosaiced image.
19. The method of claim 18, wherein the mosaiced image comprises red, green and blue pixels in a Bayer pattern.
20. The method of claim 18, further comprising applying a median filter to the demosaiced image.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] It should be noted that the figures are not drawn to scale and that elements of similar structures or functions are generally represented by like reference numerals for illustrative purposes throughout the figures. It also should be noted that the figures are only intended to facilitate the description of the exemplary embodiments. The figures do not illustrate every aspect of the described embodiments and do not limit the scope of the present disclosure.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0022] Currently-available demosaicing systems and methods fail to provide optimal image interpolation and require large software or hardware overhead. Improved demosaicing systems and methods that provide high-quality image processing while operating at various Signal-to-Noise Ratios (SNRs), and that require minimal computational time and overhead, can therefore prove desirable and provide a basis for a wide range of digital imaging applications, such as digital camera systems and the like. This result can be achieved, according to one embodiment disclosed herein, by a digital camera system 100 as illustrated in
[0023] Turning to
[0024] As illustrated in
[0025] This mosaiced digital image 130 can be converted to a conventional RGB triplet image 140 through interpolation as described herein. For example, each pixel in a conventional RGB image comprises red, green and blue components that are combined to define the color of that pixel from a full spectrum of visible light. Accordingly, a conventional RGB triplet image 140 comprises a red portion 141, a green portion 142, and a blue portion 143, that collectively define a full-color RGB image.
[0026] To convert the mosaiced digital image 130 to an RGB triplet image 140, missing information can be added through a process of interpolation. Stated another way, the mosaiced digital image 130 has only one-third of the total information that will be present in the RGB triplet image 140 when formed, and therefore this missing information can be added by inferring its value based on the pixels around it. For example, as shown in
[0027] When converting to the RGB triplet image 140, however, a 9×9 image for each of the red, green and blue portions 141, 142, 143 can be generated. For example, for the red portion 141, many of the red values for a given pixel location are already known 141K (pixels 141K signified by the R in a given location in the red portion 141) from the value being present in the mosaiced digital image 130. However, the remaining pixel values (pixels 141I signified by the blank pixel in a given location in the red portion 141) can be interpolated so that a full 9×9 array of red values can be present for the red portion 141. Similarly, such interpolation can also occur for the green portion 142 and the blue portion 143. The green portion 142 includes known pixels 142K and unknown pixels 142I that can be interpolated. The blue portion 143 includes known pixels 143K and unknown pixels 143I that can be interpolated.
[0029] The abstracted digital RGB camera system 100 is depicted including a processor 121, memory 122 and display 123; however, further embodiments can include any suitable set of components, and any of the processor 121, memory 122 and/or display 123 can be present in plurality or absent in some embodiments. In one embodiment, the memory 122 can store instructions for performing any of the methods described herein and can store digital images 130, 140 as discussed herein. The display 123 can be configured to display digital images 130, 140 and, in various embodiments, can comprise a Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic Light Emitting Diode (OLED) display, Digital Light Projection (DLP) display, or the like.
[0030] The abstracted digital RGB camera system 100 is depicted using a Single-Lens Reflex (SLR) lens; however, in various embodiments, any suitable lens system can be used, including a pin-hole lens, a biological lens, a simple convex glass lens, or the like. Additionally, lenses in accordance with various embodiments can be configured with certain imaging properties including a macro lens, zoom lens, telephoto lens, fisheye lens, wide-angle lens, or the like.
[0031] Additionally, while the digital RGB camera system 100 can be used to detect light in the visible spectrum and generate images therefrom, in some embodiments, the digital RGB camera system 100 can be adapted to detect light of other wavelengths including gamma rays, X-rays, ultraviolet rays, infrared light, microwaves, radio waves, or the like. Additionally, a digital RGB camera system 100 can be adapted for still images, video images, three-dimensional images, or the like. Accordingly, the present disclosure should not be construed to be limiting to the example digital RGB camera system 100 shown and described herein.
[0035] In decision block 420, a determination is made whether the noise level index is greater than a threshold value. In some embodiments, such a threshold value can be determined manually based on a specific hardware and settings configuration. For example, a user can test a given configuration of a digital camera with images having different noise levels, visually or analytically determine a threshold where bilinear demosaicing is preferable over adaptive demosaicing, and set a threshold level where one demosaicing method is chosen over another. In some embodiments, an optimal threshold value can be calculated through extensive qualitative and quantitative experiments on a given camera. For example, one determined optimal threshold value can be 0.76.
[0036] If the noise level is greater than the threshold value, the sub-method 400A continues to sub-method block 500, where adaptive demosaicing is performed. An example of such a method 500 is shown and described in more detail herein in relation to
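The dispatch of decision block 420 can be summarized in a minimal sketch; the function name and the returned labels are illustrative, not taken from the disclosure, and the default threshold mirrors the 0.76 example above.

```python
def select_demosaicing(noise_level_index, threshold=0.76):
    """Dispatch per decision block 420: adaptive demosaicing when the
    noise level index exceeds the threshold, bilinear otherwise."""
    if noise_level_index > threshold:
        return "adaptive"
    return "bilinear"
```

High-noise images thus receive the direction-aware adaptive path, while low-noise images take the cheaper bilinear path.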
[0037] Bilinear demosaicing can be performed in any suitable way. For example, in one embodiment, bilinear demosaicing uses correlations with neighboring pixels to obtain the color intensities of the other colors. Referring again to
[0038] For example, given that the three color channels R, G, B are correlated, and given that neighboring pixel intensities often change smoothly, interpolation can be based on surrounding values. Accordingly, a horizontal gradient H and a vertical gradient V can be calculated and used to select the interpolation direction. For example, pixel R.sub.5 of
[0039] In the embodiment discussed above, where H>V the vertical interpolation values can be selected. For example, referring to
[0040] On the other hand, where H<V the horizontal interpolation values can be selected. For example, referring to
[0041] However, where H=V the four nearest neighbors can be averaged and used as the interpolation value. For example, referring to
[0042]
[0043] For example, and referring to
H=|(R.sub.1+R.sub.9)/2-R.sub.5|
V=|(R.sub.3+R.sub.7)/2-R.sub.5|
[0044] The V and H values obtained for each pixel can be used to determine whether horizontal or vertical interpolation should be used. For example, in some embodiments, a determination is made that horizontal interpolation should be selected if H<V. On the other hand, a determination is made that vertical interpolation should be selected if H>V. Where H=V, an arbitrary selection can be made or the four nearest neighbor values can be averaged to obtain the value.
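The gradient computation and direction selection described above can be sketched as follows; the function names are illustrative, and the arguments follow the R.sub.1 through R.sub.9 pixel labeling used above (R.sub.1/R.sub.9 horizontal neighbors, R.sub.3/R.sub.7 vertical neighbors).

```python
def gradients(r1, r3, r5, r7, r9):
    """H and V gradients at a red pixel R5, per the expressions above:
    the absolute deviation of R5 from its neighbor averages."""
    h = abs((r1 + r9) / 2 - r5)
    v = abs((r3 + r7) / 2 - r5)
    return h, v

def pick_direction(h, v):
    """H < V: interpolate horizontally; H > V: vertically;
    a tie falls back to averaging the four nearest neighbors."""
    if h < v:
        return "horizontal"
    if h > v:
        return "vertical"
    return "average"
```

Interpolating along the direction of smaller gradient keeps the interpolation from crossing an intensity edge.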
[0045] In further embodiments, other suitable methods of determining whether to use vertical or horizontal interpolation can be used, and such methods can be simple or complex. For example, the method described above uses one nearest-neighbor pixel in each direction. However, in some embodiments, two or more nearest neighbors in each direction can be used. Multiple methods may be used at the same time or combined in some embodiments.
[0046] In some embodiments, it may be desirable to estimate and identify potential edges so as to avoid interpolating across such edges. For example, referring to
H=|G.sub.2-G.sub.8|
V=|G.sub.4-G.sub.6|
[0047] In some embodiments, it may be desirable to use a Laplacian operator for edge detection and/or detection of rapid intensity changes. For example, referring to
H=|G.sub.2-G.sub.8|+|R.sub.5-R.sub.1+R.sub.5-R.sub.9|
V=|G.sub.4-G.sub.6|+|R.sub.5-R.sub.3+R.sub.5-R.sub.7|
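A sketch of these Laplacian-augmented gradients follows, assuming the pixel labeling above (G.sub.2/G.sub.8 the horizontal green neighbors, G.sub.4/G.sub.6 the vertical ones, R.sub.1 through R.sub.9 the same-color neighbors); the function name is illustrative.

```python
def gradients_with_laplacian(g2, g4, g6, g8, r1, r3, r5, r7, r9):
    """H and V per the expressions above: first-order green differences
    plus a second-order (Laplacian) term on the red channel."""
    h = abs(g2 - g8) + abs(r5 - r1 + r5 - r9)
    v = abs(g4 - g6) + abs(r5 - r3 + r5 - r7)
    return h, v
```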
[0048] In various embodiments, the selection method can be based on weighing the time and internal storage costs relative to the capabilities of the camera device 120 (see
[0049] At block 520, a loop begins for all unknown green pixels 142I. At block 525, the vertical or horizontal interpolation values for the given unknown green pixel 142I are determined based on the selected direction, and at loop block 530, the loop for all unknown green pixels 142I ends. The loop for all pixel locations 130 ends in block 535.
[0050] For example, in some embodiments, the horizontal green interpolation values can be determined as follows. Let R(x), G(x), and B(x) respectively represent the red, green, and blue pixels on the Bayer map. Suppose that the differential image G(x)-R(x) changes gradually; in other words, its high-frequency portion changes more slowly than that of G(x). For computation of horizontal interpolation values, respective RGRG rows in the Bayer map (e.g., the row of pixels [R.sub.1, G.sub.2, R.sub.5, G.sub.8, R.sub.9] labeled 205 in
[0051] Where G.sub.0(x) and G.sub.1(x) are the even and odd signals respectively of G(x), then G.sub.0(x) is known and can be directly obtained from the Bayer data, but G.sub.1(x) cannot. For example, referring to
[0052] Accordingly, G(x)=G.sub.0(x)+G.sub.1(x), where all G.sub.0(x) values are already known (e.g., G labeled pixels 142K in green portion 142, shown in
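The decomposition G(x)=G.sub.0(x)+G.sub.1(x) can be sketched as follows, with the even and odd parts zero-filled at the positions they do not cover; the function name is illustrative.

```python
import numpy as np

def split_even_odd(g):
    """Split a row G(x) into an even part G0(x) (known Bayer samples)
    and an odd part G1(x), each zero elsewhere, so that G = G0 + G1."""
    g = np.asarray(g, dtype=float)
    g0 = np.zeros_like(g)
    g1 = np.zeros_like(g)
    g0[0::2] = g[0::2]   # even-indexed samples
    g1[1::2] = g[1::2]   # odd-indexed samples
    return g0, g1
```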
[0053] To obtain currently unknown values G.sub.1(x), assume that G(x) has passed through a linear filter h. In other words, G(x)=h(x)*G(x). Assuming G(x) is a band-limited signal and that h(x) is an ideal low-pass filter, the following can be derived.
G(x)=h.sub.0(x)*G.sub.0(x)+h.sub.1(x)*G.sub.0(x)+h.sub.0(x)*G.sub.1(x)+h.sub.1(x)*G.sub.1(x)
[0054] Accordingly low-pass filter h(x) has the following properties:
h.sub.1(x)*G.sub.0(x)=0, where x is even;
h.sub.0(x)*G.sub.1(x)=0, where x is even;
h.sub.0(x)*G.sub.0(x)=0, where x is odd; and
h.sub.1(x)*G.sub.1(x)=0, where x is odd.
[0055] The above equations can therefore be rewritten as follows.
[0056] Using this formula, it is therefore possible to interpolate all missing green pixel values using G.sub.0(x) and R.sub.1(x). In other words, interpolation of unknown pixel values G.sub.1(x) in an RGRGR row can be done using known values from the adjacent green pixels G.sub.0(x) and from the known red pixel values R.sub.1(x). Interpolation of GBGBG rows can similarly be done using known values from the adjacent green pixels G.sub.1(x) and from the known blue pixel values B.sub.0(x). This same analysis can be applied to RGRGR columns or GBGBG columns to determine vertical green pixel interpolation values.
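A hedged sketch of horizontal green interpolation along an RGRGR row: it assumes the filter h takes the neighbor-average-plus-α-weighted-red-Laplacian form discussed in paragraph [0059], with red at even indices and green at odd indices; the function name and the edge handling are illustrative, not the patent's exact implementation.

```python
import numpy as np

def interpolate_green_row(row, alpha=1.0):
    """Fill the missing green values at the red positions of an
    RGRGR... row: average of the two adjacent greens plus an
    alpha-weighted red Laplacian correction. Edge positions are
    left unfilled for simplicity."""
    row = np.asarray(row, dtype=float)
    green = row.copy()
    for x in range(2, len(row) - 2, 2):  # interior red positions
        avg = (row[x - 1] + row[x + 1]) / 2.0
        lap = (2 * row[x] - row[x - 2] - row[x + 2]) / 4.0
        green[x] = avg + alpha * lap
    return green
```

With alpha set to 0 this reduces to plain neighbor averaging; the correction term pulls the estimate toward the local red curvature.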
[0057] Obtaining a zero phase Finite Impulse Response (FIR) filter h(x) that fulfills the constraints discussed above can be done by solving the following optimization problem:
h.sub.opt=arg min.sub.h Σ.sub.ω w(ω)(1-ĥ(ω)).sup.2
[0058] Where ĥ(ω) represents the Fourier transform of h(x) and w(ω) is the weighting function. Such optimization to obtain a set value of filter h(x) can be done in various suitable ways, including but not limited to using the MATLAB Optimization Toolbox (The MathWorks, Inc., Natick, Mass.), or the like.
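One way to realize such a weighted least-squares FIR design is sketched below; the ideal low-pass target, the passband weighting, and the tap count are assumptions for illustration, not the patent's actual optimization setup.

```python
import numpy as np

def design_zero_phase_lowpass(num_taps=5, n_freq=128, cutoff=0.5):
    """Weighted least-squares design of a symmetric (zero-phase) FIR
    filter: minimize sum_omega w(omega)*(D(omega) - h_hat(omega))^2,
    where D is an ideal low-pass response and w emphasizes the passband."""
    omega = np.linspace(0, np.pi, n_freq)
    desired = (omega <= cutoff * np.pi).astype(float)   # ideal low-pass D
    weight = np.where(desired > 0, 10.0, 1.0)           # passband weighting w
    half = num_taps // 2
    # For symmetric taps, h_hat(omega) = c0 + 2*sum_k c_k*cos(k*omega).
    basis = np.ones((n_freq, half + 1))
    for k in range(1, half + 1):
        basis[:, k] = 2 * np.cos(k * omega)
    w_sqrt = np.sqrt(weight)
    coeffs, *_ = np.linalg.lstsq(basis * w_sqrt[:, None],
                                 desired * w_sqrt, rcond=None)
    return np.concatenate([coeffs[:0:-1], coeffs])  # [c2, c1, c0, c1, c2]
```

Restricting the basis to cosines enforces the zero-phase (symmetric impulse response) constraint by construction.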
[0059] To consider the effects of adjacent red and green pixels on the interpolated green value, it is possible to introduce an impact factor α to determine h as follows.
h=[0,0.5,0,0.5,0]+[-0.25,0,0.5,0,-0.25]*α
[0060] In some embodiments, the impact factor α is set to 1; however, in further embodiments, the impact factor can take any other suitable value.
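Assembling h from the averaging term and an α-weighted correction term might look as follows; the negative quarter taps of the correction vector are an assumed reading and should be treated as an illustration.

```python
import numpy as np

def build_interpolation_filter(alpha=1.0):
    """h as a neighbor-averaging term plus an alpha-weighted correction
    term with assumed taps [-0.25, 0, 0.5, 0, -0.25]."""
    base = np.array([0.0, 0.5, 0.0, 0.5, 0.0])
    correction = np.array([-0.25, 0.0, 0.5, 0.0, -0.25])
    return base + alpha * correction
```

With α = 1 the taps sum to 1, so the filter preserves flat (DC) regions while sharpening around edges.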
[0061] Returning to the method 500 of
[0062] For example, with all green values previously known or interpolated in a horizontal or vertical direction, the method 500 continues by finding vertical or horizontal interpolation values for each of the unknown blue and/or red values. Referring to
[0063] In various embodiments, interpolation of the unknown red and/or blue pixel values 141I, 143I can be performed as follows. For example, as discussed above, an assumption is made that the differential image R-G is band limited and has a frequency substantially less than the Nyquist sampling rate. Accordingly, the differential image R-G can be reconstructed through the already-known differential image R.sub.S-G.sub.S. Similarly, the differential image B-G can be reconstructed through the already-known differential image B.sub.S-G.sub.S. Accordingly, the following equations can respectively be used to calculate R-G and B-G, where L is a low-pass filter.
R-G=L*(R.sub.S-G.sub.S)
B-G=L*(B.sub.S-G.sub.S)
[0064] Accordingly the unknown red and/or blue values for a given pixel location 130 can be interpolated in a horizontal or vertical direction. In one embodiment, L can be a low-pass filter, L=[ ; 1; ]. However in further embodiments, L can be any suitable low-pass filter, or other type of filter.
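A sketch of recovering red values from the difference image, as in the equations above: the taps [0.5, 1, 0.5] chosen for L and the NaN convention for unknown samples are assumptions for illustration.

```python
import numpy as np

def interpolate_red_via_difference(red_known, green_full, taps=None):
    """Reconstruct R - G from its sparse samples with a low-pass filter
    L, then add green back: R = G + L * (R_s - G_s). red_known holds
    NaN at unknown positions; green_full is fully populated."""
    if taps is None:
        taps = np.array([0.5, 1.0, 0.5])
    red_known = np.asarray(red_known, dtype=float)
    green_full = np.asarray(green_full, dtype=float)
    # Sparse difference image: zero where red is unknown.
    diff = np.where(np.isnan(red_known), 0.0, red_known - green_full)
    diff_full = np.convolve(diff, taps, mode="same")
    return green_full + diff_full
```

Because the known difference samples alternate with zeros, the center tap of 1 keeps known values intact while the 0.5 side taps average them into the gaps.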
[0065] Returning again to the method 500 of
[0066] The following examples are included to demonstrate exemplary embodiments. Those skilled in the art, however, should, in light of the disclosure, appreciate that many changes can be made in the specific embodiments which are disclosed and still obtain a like or similar result without departing from the spirit and scope of the disclosure.
Example 1
[0067] The adaptive demosaicing methods described herein were used on a color porcupine moire test target image and compared to other demosaicing methods used on the same test target image. For example, tests were performed using bilinear demosaicing; Variable Number of Gradients (VNG) demosaicing; Patterned Pixel Grouping (PPG) demosaicing; DCB demosaicing (www.linuxphoto.org); Adaptive Filtered Demosaicing (AFD); and Variance of Color Differences (VCD) demosaicing. The test target images processed using adaptive demosaicing methods described herein showed better color restoration effects and better false color removal abilities compared to the other methods.
Example 2
[0068] The computational costs of the example embodiments discussed above were calculated (not including median filtering) in an example of analyzing a 64×64 signal block, giving statistics about the number of addition and multiplication operations for modules embodying various steps of the present methods. For example, module 1 included steps of G interpolation in the horizontal and vertical directions and R and B interpolation in only the horizontal direction; module 2 included interpolation of R and B in the vertical direction; module 3 included selection of horizontal or vertical interpolation based on calculated H and V values; and module 4 included preparation for median filtering. The table below (without median filtering) shows the results and illustrates that the present example embodiments described herein can be performed with relatively minimal computational overhead. This can be desirable regardless of whether the present methods are performed via hardware and/or software.
TABLE-US-00001
                 Module 1   Module 2   Module 3   Module 4 (not done)
Additions        14*64*64   2*64*62    1*64*64    0
Multiplications  14*64*64   0          0          0
Example 3
[0069] Mosaiced images of the exterior of a building at five different noise levels were demosaiced using the example embodiments described herein. The Signal to Noise Ratio (SNR) of the five mosaiced images was 10, 20, 40, 60 and 100 respectively.
[0070] The described embodiments are susceptible to various modifications and alternative forms, and specific examples thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the described embodiments are not to be limited to the particular forms or methods disclosed, but to the contrary, the present disclosure is to cover all modifications, equivalents, and alternatives.