Pixel value calibration method and pixel value calibration device
11516449 · 2022-11-29
Assignee
Inventors
CPC classification
H04N9/646
ELECTRICITY
International classification
Abstract
A pixel value calibration method includes: obtaining input image data generated by pixels, the input image data including a first group of pixel values in a first color plane and a second group of pixel values in a second color plane, generated by a first portion and a second portion of the pixels respectively; determining a difference function associated with filter response values and target values, the filter response values being generated by utilizing characteristic filter coefficients to filter first and second estimated pixel values of estimated pixel data in the first and second color planes, respectively; determining a set of calibration filter coefficients by calculating a solution of the estimated pixel data, the solution resulting in a minimum value of the difference function; and filtering the input image data, by a filter circuit using the set of calibration filter coefficients, to calibrate the first group of pixel values.
Claims
1. A pixel value calibration method, comprising: obtaining input image data generated by a plurality of pixels, wherein the input image data comprises a first group of pixel values in a first color plane outputted by a first portion of the pixels, and a second group of pixel values in a second color plane outputted by a second portion of the pixels; determining a difference function associated with a plurality of groups of filter response values of estimated image data and a plurality of groups of target values, wherein the estimated image data comprises a plurality of first estimated pixel values for the pixels in the first color plane and a plurality of second estimated pixel values for the pixels in the second color plane; the groups of filter response values are generated by utilizing a plurality of sets of characteristic filter coefficients to filter the first estimated pixel values and the second estimated pixel values; an anticipated pixel value of each pixel in the second portion of the pixels in the first color plane serves as a first group of target values in the groups of target values; determining at least one set of calibration filter coefficients by calculating a solution of the estimated image data, the solution resulting in a minimum value of the difference function; and filtering the input image data, by a filter circuit using the at least one set of calibration filter coefficients, to calibrate the first group of pixel values.
2. The pixel value calibration method of claim 1, wherein the pixels are arranged in a plurality of rows; each pixel in the first portion of the pixels is located in a same row as a pixel having a first color and has a second color; and each pixel in the second portion of the pixels is located in a same row as a pixel having a third color and has the second color.
3. The pixel value calibration method of claim 1, wherein the step of determining the at least one set of calibration filter coefficients by calculating the solution of the estimated image data comprises: using a least-squares method to calculate the solution of the estimated image data resulting in the minimum value of the difference function, wherein the groups of filter response values are expressed as a data matrix multiplied by an estimated vector, the data matrix is associated with the sets of characteristic filter coefficients, and the estimated vector is associated with the first estimated pixel values and the second estimated pixel values; and determining the at least one set of calibration filter coefficients according to the data matrix.
4. The pixel value calibration method of claim 3, wherein a first estimated pixel value and a second estimated pixel value for a pixel of interest of the pixels are located in a first row and a second row of the estimated vector, respectively; the step of determining the at least one set of calibration filter coefficients according to the data matrix comprises: determining a filter matrix by multiplying an inverse matrix of a product of a transposed matrix of the data matrix and the data matrix with the transposed matrix; and determining the at least one set of calibration filter coefficients according to one row of the filter matrix corresponding to the first row of the estimated vector and another row of the filter matrix corresponding to the second row of the estimated vector.
5. The pixel value calibration method of claim 1, wherein the step of determining the difference function associated with the groups of filter response values and the groups of target values comprises: determining a weighted squared deviation between the groups of filter response values and the groups of target values, the weighted squared deviation serving as the difference function.
6. The pixel value calibration method of claim 1, wherein the step of calibrating the first group of pixel values comprises: using the at least one set of calibration filter coefficients to filter the input image data to determine a pixel value correction amount for a pixel value in the first group of pixel values; and calibrating the pixel value according to the pixel value correction amount.
7. The pixel value calibration method of claim 1, wherein the step of calibrating the first group of pixel values comprises: using the at least one set of calibration filter coefficients to filter the input image data to calibrate a pixel value of a pixel in the first group of pixel values to an average value of the first estimated pixel value for the pixel in the first color plane and the second estimated pixel value for the pixel in the second color plane.
8. The pixel value calibration method of claim 1, wherein the sets of characteristic filter coefficients comprise a first set of characteristic filter coefficients, configured to sample the first estimated pixel value for each pixel in the first portion of the pixels to generate a first group of filter response values in the groups of filter response values; the first group of filter response values corresponds to the first group of target values.
9. The pixel value calibration method of claim 8, wherein the groups of target values further comprise a second group of target values and a third group of target values; the second group of target values is determined according to the first group of pixel values, and the third group of target values is determined according to the second group of pixel values; the sets of characteristic filter coefficients further comprise a second set of characteristic filter coefficients and a third set of characteristic filter coefficients; the second set of characteristic filter coefficients is configured to sample the first estimated pixel value for each pixel in the first portion of the pixels to generate a second group of filter response values in the groups of filter response values, and the second group of filter response values corresponds to the second group of target values; the third set of characteristic filter coefficients is configured to sample the second estimated pixel value for each pixel in the second portion of the pixels to generate a third group of filter response values in the groups of filter response values, and the third group of filter response values corresponds to the third group of target values.
10. The pixel value calibration method of claim 9, wherein the groups of target values further comprise a fourth group of target values, a fifth group of target values and a sixth group of target values; the fourth group of target values is determined according to a high frequency component of the input image data in the first color plane, the fifth group of target values is determined according to a high frequency component of the input image data in the second color plane, and the sixth group of target values is determined according to a change in a pixel value difference between the corresponding first estimated pixel value and the corresponding second estimated pixel value for each of the pixels; the sets of characteristic filter coefficients further comprise a fourth set of characteristic filter coefficients, a fifth set of characteristic filter coefficients and a sixth set of characteristic filter coefficients; the fourth set of characteristic filter coefficients is configured to perform high-pass filtering on the first estimated pixel values to generate a fourth group of filter response values in the groups of filter response values, and the fourth group of filter response values corresponds to the fourth group of target values; the fifth set of characteristic filter coefficients is configured to perform high-pass filtering on the second estimated pixel values to generate a fifth group of filter response values in the groups of filter response values, and the fifth group of filter response values corresponds to the fifth group of target values; the sixth set of characteristic filter coefficients is configured to perform high-pass filtering on the pixel value difference between the corresponding first estimated pixel value and the corresponding second estimated pixel value for each of the pixels, and accordingly generate a sixth group of filter response values in the groups of filter response values; the sixth group of filter response values corresponds to the sixth group of target values.
11. A pixel value calibration device, comprising: a calculation module, configured to: obtain input image data generated by a plurality of pixels, wherein the input image data comprises a first group of pixel values in a first color plane outputted by a first portion of the pixels, and a second group of pixel values in a second color plane outputted by a second portion of the pixels; determine a difference function associated with a plurality of groups of filter response values of estimated image data and a plurality of groups of target values, wherein the estimated image data comprises a plurality of first estimated pixel values for the pixels in the first color plane and a plurality of second estimated pixel values for the pixels in the second color plane; the groups of filter response values are generated by utilizing a plurality of sets of characteristic filter coefficients to filter the first estimated pixel values and the second estimated pixel values; an anticipated pixel value of each pixel in the second portion of the pixels in the first color plane serves as a first group of target values in the groups of target values; and determine at least one set of calibration filter coefficients by calculating a solution of the estimated image data, the solution resulting in a minimum value of the difference function; and a filter circuit, coupled to the calculation module, the filter circuit being configured to use the at least one set of calibration filter coefficients to filter the input image data to calibrate the first group of pixel values.
12. The pixel value calibration device of claim 11, wherein the pixels are arranged in a plurality of rows; each pixel in the first portion of the pixels is located in a same row as a pixel having a first color and has a second color; and each pixel in the second portion of the pixels is located in a same row as a pixel having a third color and has the second color.
13. The pixel value calibration device of claim 11, wherein the calculation module is configured to use a least-squares method to calculate the solution of the estimated image data resulting in the minimum value of the difference function; the groups of filter response values are expressed as a data matrix multiplied by an estimated vector, the data matrix is associated with the sets of characteristic filter coefficients, and the estimated vector is associated with the first estimated pixel values and the second estimated pixel values; and the calculation module is further configured to determine the at least one set of calibration filter coefficients according to the data matrix.
14. The pixel value calibration device of claim 13, wherein a first estimated pixel value and a second estimated pixel value for a pixel of interest of the pixels are located in a first row and a second row of the estimated vector, respectively; the calculation module is configured to determine a filter matrix by multiplying an inverse matrix of a product of a transposed matrix of the data matrix and the data matrix with the transposed matrix, and determine the at least one set of calibration filter coefficients according to one row of the filter matrix corresponding to the first row of the estimated vector and another row of the filter matrix corresponding to the second row of the estimated vector.
15. The pixel value calibration device of claim 11, wherein the calculation module is configured to determine a weighted squared deviation between the groups of filter response values and the groups of target values, and the weighted squared deviation serves as the difference function.
16. The pixel value calibration device of claim 11, wherein the filter circuit is configured to use the at least one set of calibration filter coefficients to filter the input image data to determine a pixel value correction amount for a pixel value in the first group of pixel values, and calibrate the pixel value according to the pixel value correction amount.
17. The pixel value calibration device of claim 11, wherein the filter circuit is configured to use the at least one set of calibration filter coefficients to filter the input image data to calibrate a pixel value of a pixel in the first group of pixel values to an average value of the first estimated pixel value for the pixel in the first color plane and the second estimated pixel value for the pixel in the second color plane.
18. The pixel value calibration device of claim 11, wherein the sets of characteristic filter coefficients comprise a first set of characteristic filter coefficients, configured to sample the first estimated pixel value for each pixel in the first portion of the pixels to generate a first group of filter response values in the groups of filter response values; the first group of filter response values corresponds to the first group of target values.
19. The pixel value calibration device of claim 18, wherein the groups of target values further comprise a second group of target values and a third group of target values; the second group of target values is determined according to the first group of pixel values, and the third group of target values is determined according to the second group of pixel values; the sets of characteristic filter coefficients further comprise a second set of characteristic filter coefficients and a third set of characteristic filter coefficients; the second set of characteristic filter coefficients is configured to sample the first estimated pixel value for each pixel in the first portion of the pixels to generate a second group of filter response values in the groups of filter response values, and the second group of filter response values corresponds to the second group of target values; the third set of characteristic filter coefficients is configured to sample the second estimated pixel value for each pixel in the second portion of the pixels to generate a third group of filter response values in the groups of filter response values, and the third group of filter response values corresponds to the third group of target values.
20. The pixel value calibration device of claim 19, wherein the groups of target values further comprise a fourth group of target values, a fifth group of target values and a sixth group of target values; the fourth group of target values is determined according to a high frequency component of the input image data in the first color plane, the fifth group of target values is determined according to a high frequency component of the input image data in the second color plane, and the sixth group of target values is determined according to a change in a pixel value difference between the corresponding first estimated pixel value and the corresponding second estimated pixel value for each of the pixels; the sets of characteristic filter coefficients further comprise a fourth set of characteristic filter coefficients, a fifth set of characteristic filter coefficients and a sixth set of characteristic filter coefficients; the fourth set of characteristic filter coefficients is configured to perform high-pass filtering on the first estimated pixel values to generate a fourth group of filter response values in the groups of filter response values, and the fourth group of filter response values corresponds to the fourth group of target values; the fifth set of characteristic filter coefficients is configured to perform high-pass filtering on the second estimated pixel values to generate a fifth group of filter response values in the groups of filter response values, and the fifth group of filter response values corresponds to the fifth group of target values; the sixth set of characteristic filter coefficients is configured to perform high-pass filtering on the pixel value difference between the corresponding first estimated pixel value and the corresponding second estimated pixel value for each of the pixels, and accordingly generate a sixth group of filter response values in the groups of filter response values; the sixth group of filter response values corresponds to the sixth group of target values.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Aspects of the present disclosure are best understood from the following detailed description when read with the accompanying figures. It should be noted that, in accordance with the standard practice in the field, various features are not drawn to scale. In fact, the dimensions of the various features may be arbitrarily increased or reduced for clarity of discussion.
DETAILED DESCRIPTION
(9) The following disclosure provides various embodiments or examples for implementing different features of the present disclosure. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, when an element is referred to as being “connected to” or “coupled to” another element, it may be directly connected to or coupled to the other element, or intervening elements may be present. In addition, reference numerals and/or letters may be repeated in the various examples of the present disclosure. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Furthermore, as could be appreciated, the present embodiments provide many ideas that can be widely applied in various scenarios. The following embodiments are provided for illustration purposes, and shall not be used to limit the scope of the present disclosure.
(10) With the use of suitable/optimal filter design, the proposed pixel value calibration scheme can reconstruct color plane(s) according to real-time image data while keeping respective pattern features of different color planes, such as a Gb plane and a Gr plane, and estimate a pixel value difference or a pixel value correction amount between different color planes to thereby perform pixel value calibration. Further description is provided below.
(12) The pixel value calibration device 100 may include, but is not limited to, a calculation module 110 and a filter circuit 120. The calculation module 110 is configured to obtain the input image data IM, and determine at least one portion of a plurality of groups of target values {T} according to the input image data IM. In addition, the calculation module 110 can be configured to determine a difference function f({R},{T}) associated with a plurality of groups of filter response values {R} of estimated image data EM and a plurality of groups of target values {T}. By calculating a solution of the estimated image data EM which causes the difference function f({R},{T}) to satisfy a pre-determined condition, the calculation module 110 can be configured to determine at least one set of calibration filter coefficients {C}. For example, the calculation module 110 can determine the at least one set of calibration filter coefficients {C} by calculating a solution of the estimated image data EM which results in a minimum value of the difference function f({R},{T}).
(13) The filter circuit 120, coupled to the calculation module 110, is configured to filter the input image data IM according to the at least one set of calibration filter coefficients {C}, and accordingly calibrate pixel value(s) of at least one portion of the input image data IM. For example, when employed to process the Gr-Gb imbalance, the filter circuit 120 can be configured to reconstruct at least one of a Gr plane and a Gb plane according to the at least one set of calibration filter coefficients {C}, and calibrate pixel value(s) outputted by at least one of a Gr pixel and a Gb pixel included in the input image data IM according to a pixel value difference between the Gr plane and the Gb plane.
(15) Additionally, a plurality of pixels P.sub.0, P.sub.2, P.sub.4, P.sub.6, P.sub.8, P.sub.10, P.sub.12, P.sub.14, P.sub.16, P.sub.18, P.sub.20, P.sub.22 and P.sub.24 are configured to provide color information on a green channel. The pixels P.sub.0, P.sub.2, P.sub.4, P.sub.10, P.sub.12, P.sub.14, P.sub.20, P.sub.22 and P.sub.24, arranged in a same row as an R pixel and referred to as Gr pixels, are configured to output a plurality of pixel values Gr.sub.0, Gr.sub.2, Gr.sub.4, Gr.sub.10, Gr.sub.12, Gr.sub.14, Gr.sub.20, Gr.sub.22 and Gr.sub.24 in a color plane, i.e. a Gr plane, respectively. The pixels P.sub.6, P.sub.8, P.sub.16 and P.sub.18, arranged in a same row as a B pixel and referred to as Gb pixels, are configured to output a plurality of pixel values Gb.sub.6, Gb.sub.8, Gb.sub.16 and Gb.sub.18 in a color plane, i.e. a Gb plane, respectively.
(16) To facilitate understanding of the present disclosure, the proposed pixel value calibration scheme is described with reference to the following embodiments where a Gr plane and a Gb plane are reconstructed to calibrate pixel values outputted by a Gr pixel and a Gb pixel. However, this is not intended to limit the scope of the present disclosure. For example, the proposed pixel value calibration scheme can be employed in embodiments where at least one color plane is reconstructed to estimate a pixel value, or a pixel value calibration amount, between different color planes to thereby perform pixel value calibration. Such modifications and alternatives also fall within the spirit and the scope of the present disclosure.
(17) Referring to
(18) Moreover, the calculation module 110 can be configured to use a plurality of sets of characteristic filter coefficients to filter the estimated image data EM to generate a plurality of groups of filter response values {R}, thereby determining the difference function f({R},{T}) associated with the groups of filter response values {R} and the groups of target values {T}.
(19) In the present embodiment, the estimated image data EM may include a plurality of estimated pixel values G.sub.0′(0)-G.sub.0′(24) for the pixels P.sub.0-P.sub.24 in a color plane G.sub.0′ and a plurality of estimated pixel values G.sub.1′(0)-G.sub.1′(24) for the pixels P.sub.0-P.sub.24 in a color plane G.sub.1′. One of the color plane G.sub.0′ and the color plane G.sub.1′ can be the Gr plane, and the other of the color plane G.sub.0′ and the color plane G.sub.1′ can be the Gb plane. For example, in some cases where a pixel of interest of the pixels P.sub.0-P.sub.24 is a Gr pixel, the color plane G.sub.0′ is the Gr plane while the color plane G.sub.1′ is the Gb plane. Respective pixel values of the pixel of interest in the color plane G.sub.0′ and in the color plane G.sub.1′ can be used to estimate a pixel value difference/deviation between the color plane G.sub.0′ and the color plane G.sub.1′. As another example, in some cases where the pixel of interest is a Gb pixel, the color plane G.sub.0′ is the Gb plane while the color plane G.sub.1′ is the Gr plane. In the present embodiment, the pixel of interest can be, but is not limited to, a center pixel of the pixels P.sub.0-P.sub.24. As a result, the color plane G.sub.0′ and the color plane G.sub.1′ can be the Gr plane and the Gb plane, respectively.
(20) The sets of characteristic filter coefficients can be designed according to image characteristics which a reconstructed color plane is expected to have, such that the groups of filter response values {R} can be close to or substantially equal to the groups of target values {T}. Referring to
(21) The image smoothness refers to the fact that natural images usually do not have extremely high frequency components but have good signal continuity to exhibit relatively smooth characteristics. As a result, it can be expected that the reconstructed Gr plane and Gb plane do not have extremely high frequency components. For example, a high frequency component of the input image data IM in the color plane G.sub.0′ can be used to determine the group of target values T.sub.smooth,g0. The set of characteristic filter coefficients M.sub.smooth can be used to perform high-pass filtering on the estimated pixel values G.sub.0′(0)-G.sub.0′(24) in the color plane G.sub.0′, such that the group of filter response values R.sub.smooth,g0 can reflect a high frequency component of the estimated image data EM in the color plane G.sub.0′. In other words, the set of characteristic filter coefficients M.sub.smooth can be implemented using a high-pass filter coefficient matrix, such as a 25×25 filter coefficient matrix. In a case where the estimated image data EM has no or almost no high frequency component in the color plane G.sub.0′, the group of target values T.sub.smooth,g0 can be equal to or approximately equal to zero, and the group of filter response values R.sub.smooth,g0 can be expected to be equal to or approximately equal to zero.
(22) Similarly, the group of target values T.sub.smooth,g1 can be determined according to a high frequency component of the input image data IM in the color plane G.sub.1′. The set of characteristic filter coefficients M.sub.smooth can be used to perform high-pass filtering on the estimated pixel values G.sub.1′(0)-G.sub.1′(24) in the color plane G.sub.1′, such that the group of filter response values R.sub.smooth,g1 can reflect a high frequency component of the estimated image data EM in the color plane G.sub.1′. In a case where the estimated image data EM in the color plane G.sub.1′ has no or almost no high frequency component, the group of target values T.sub.smooth,g1 can be equal to or approximately equal to zero, and the group of filter response values R.sub.smooth,g1 can be expected to be equal to or approximately equal to zero.
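The smoothness constraint of paragraphs (21) and (22) can be illustrated numerically. The following is a hypothetical sketch only (the patent does not specify the filter taps): it builds a Laplacian-style 25×25 high-pass matrix M_smooth acting on a flattened 5×5 plane of estimated pixel values, so that a perfectly smooth plane yields filter response values of approximately zero, matching target values T.sub.smooth of approximately zero.

```python
import numpy as np

def build_m_smooth(n=5):
    """Return an (n*n) x (n*n) high-pass matrix acting on a flattened n x n plane.

    Hypothetical Laplacian-like taps: +4 at the center, -1 at each of the
    four neighbors; the center tap is reduced at borders so every row sums
    to zero (a constant plane then produces a zero response).
    """
    m = np.zeros((n * n, n * n))
    for r in range(n):
        for c in range(n):
            i = r * n + c
            m[i, i] = 4.0
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < n:
                    m[i, rr * n + cc] = -1.0
                else:
                    m[i, i] -= 1.0  # missing neighbor at the border
    return m

M_smooth = build_m_smooth()
g0_flat = np.full(25, 128.0)      # a constant (maximally smooth) G0' plane
R_smooth_g0 = M_smooth @ g0_flat  # high-frequency response: ~0 for a smooth plane
```

With a smooth input plane the response R.sub.smooth,g0 is zero, consistent with the target values described above; a textured plane would instead produce nonzero high-frequency responses.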
(23) It is worth noting that although the estimated pixel values G.sub.0′(0)-G.sub.0′(24) in the color plane G.sub.0′ and the estimated pixel values G.sub.1′(0)-G.sub.1′(24) in the color plane G.sub.1′ are filtered using the same set of characteristic filter coefficients M.sub.smooth in the embodiments described above, the estimated pixel values G.sub.0′(0)-G.sub.0′(24) can be filtered using a set of characteristic filter coefficients different from that used to filter the estimated pixel values G.sub.1′(0)-G.sub.1′(24) without departing from the scope of the present disclosure.
(24) The structure similarity means that different reconstructed color planes can have similar structures. As a result, although a certain pixel value difference exists between the reconstructed Gr plane and the Gb plane, each of the reconstructed Gr plane and Gb plane can exhibit G channel texture characteristics associated with the pixels P.sub.0-P.sub.24. For each pixel, it can be expected that a difference between a structure characteristic of a corresponding estimated pixel value in the reconstructed Gr plane and a structure characteristic of a corresponding estimated pixel value in the reconstructed Gb plane will be quite small. For example, the group of target values T.sub.struct can be determined according to a change in a pixel value difference between a corresponding estimated pixel value in the color plane G.sub.0′ and a corresponding estimated pixel value in the color plane G.sub.1′ for each of the pixels P.sub.0-P.sub.24. The set of characteristic filter coefficients M.sub.struct can be used to perform high-pass filtering on the pixel value difference between the corresponding estimated pixel values in the color plane G.sub.0′ and in the color plane G.sub.1′, such that the group of filter response values R.sub.struct can reflect a change in a high frequency structure difference between the color plane G.sub.0′ and the color plane G.sub.1′ for the estimated image data EM. In other words, the set of characteristic filter coefficients M.sub.struct can be implemented using a high-pass filter coefficient matrix, such as a 25×25 filter coefficient matrix. In a case where the color plane G.sub.0′ and the color plane G.sub.1′ have the same or similar texture structures, the group of target values T.sub.struct can be equal to or approximately equal to zero, and the group of filter response values R.sub.struct can be expected to be equal to or approximately equal to zero.
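The structure-similarity idea above can be sketched with a toy example (assumed filter taps, not from the patent): high-pass filtering the per-pixel difference d = G.sub.0′ − G.sub.1′ yields a response near zero whenever the two planes share the same texture up to a constant Gr-Gb offset.

```python
import numpy as np

def high_pass_1d(d):
    """Second-difference high-pass filter on a flattened difference signal
    (a hypothetical stand-in for the M_struct filtering)."""
    return d[:-2] - 2.0 * d[1:-1] + d[2:]

texture = np.array([10.0, 14.0, 12.0, 16.0, 13.0])  # shared G-channel texture
g0 = texture + 3.0   # G0' plane: same texture with a constant +3 offset
g1 = texture         # G1' plane

# The difference is constant, so its high-frequency content is zero and
# R_struct matches the target T_struct ~ 0.
R_struct = high_pass_1d(g0 - g1)
```

If the two planes had genuinely different textures rather than a mere offset, R_struct would be nonzero and the least-squares solution would be penalized accordingly.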
(25) The data consistency means that pixel values corresponding to the pixels P.sub.0, P.sub.2, P.sub.4, P.sub.10, P.sub.12, P.sub.14, P.sub.20, P.sub.22 and P.sub.24 (Gr pixels) in the reconstructed Gr plane can be equal to or approximately equal to the pixel values Gr.sub.0, Gr.sub.2, Gr.sub.4, Gr.sub.10, Gr.sub.12, Gr.sub.14, Gr.sub.20, Gr.sub.22 and Gr.sub.24 included in the input image data IM, respectively. Also, pixel values corresponding to the pixels P.sub.6, P.sub.8, P.sub.16 and P.sub.18 (Gb pixels) in the reconstructed Gb plane can be equal to or approximately equal to the pixel values Gb.sub.6, Gb.sub.8, Gb.sub.16 and Gb.sub.18 included in the input image data IM.
(26) For example, in some embodiments where the color plane G.sub.0′ and the color plane G.sub.1′ are the Gr plane and the Gb plane respectively, the group of target values T.sub.subraw,g0 can be determined according to the pixel values Gr.sub.0, Gr.sub.2, Gr.sub.4, Gr.sub.10, Gr.sub.12, Gr.sub.14, Gr.sub.20, Gr.sub.22 and Gr.sub.24, and the group of target values T.sub.subraw,g1 can be determined according to the pixel values Gb.sub.6, Gb.sub.8, Gb.sub.16 and Gb.sub.18. The set of characteristic filter coefficients M.sub.subraw,g0 can be used to sample the respective estimated pixel values for the pixels P.sub.0, P.sub.2, P.sub.4, P.sub.10, P.sub.12, P.sub.14, P.sub.20, P.sub.22 and P.sub.24 in the color plane G.sub.0′, such that the group of filter response values R.sub.subraw,g0 can reflect the corresponding pixel value of each Gr pixel in the color plane G.sub.0′. In addition, the set of characteristic filter coefficients M.sub.subraw,g1 can be used to sample the respective estimated pixel values for the pixels P.sub.6, P.sub.8, P.sub.16 and P.sub.18 in the color plane G.sub.1′, such that the group of filter response values R.sub.subraw,g1 can reflect the corresponding pixel value of each Gb pixel in the color plane G.sub.1′.
(27) In some embodiments, the pixel value of each Gr pixel in the input image data IM can directly serve as the group of target values T.sub.subraw,g0, and the pixel value of each Gb pixel in the input image data IM can directly serve as the group of target values T.sub.subraw,g1.
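The data-consistency constraint of paragraphs (25)-(27) amounts to sampling. A hypothetical sketch (positions taken from the 5×5 window described above; all names assumed): each row of the sampling matrix M.sub.subraw,g0 has a single 1 at one Gr pixel index, so the filter response values simply reproduce the estimated G.sub.0′ values at the Gr positions, while the raw pixel values Gr.sub.i serve directly as the targets.

```python
import numpy as np

# Gr pixel indices within the flattened 5x5 window, per paragraph (15).
gr_positions = [0, 2, 4, 10, 12, 14, 20, 22, 24]

# Sampling matrix: one row per Gr pixel, a single 1.0 at that pixel's index.
M_subraw_g0 = np.zeros((len(gr_positions), 25))
for row, p in enumerate(gr_positions):
    M_subraw_g0[row, p] = 1.0

g0_est = np.arange(25, dtype=float)  # stand-in estimated G0' plane
R_subraw_g0 = M_subraw_g0 @ g0_est   # picks out entries 0, 2, 4, ..., 24
```

An analogous matrix M.sub.subraw,g1 would have rows for the Gb positions 6, 8, 16 and 18 acting on the estimated G.sub.1′ plane.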
(29) The texture detail can reflect information regarding signal variations between columns of pixels, and/or information regarding signal variations between rows of pixels. By way of example but not limitation, during interpolation in image processing, pixel values in the Gr plane and the Gb plane can be considered as pixel information on a same color channel. As a result, pixel values of adjacent columns/rows of pixels in the reconstructed Gr plane, or the reconstructed Gb plane, can exhibit brightness variations or image textures associated with the columns/rows of pixels. For example, in some embodiments where the color plane G.sub.0′ and the color plane G.sub.1′ are the Gr plane and the Gb plane respectively, a group of anticipated pixel values G.sub.0,sub″ of Gb pixels in the Gr plane, i.e. anticipated pixel values G.sub.0″(6), G.sub.0″(8), G.sub.0″(16) and G.sub.0″(18) respectively corresponding to the pixel values Gb.sub.6, Gb.sub.8, Gb.sub.16 and Gb.sub.18 in the present embodiment, can serve as the group of target values T.sub.approx, such that the reconstructed Gr plane can be expected to exhibit brightness variations between columns/rows of pixels. Additionally, the set of characteristic filter coefficients M.sub.approx can be used to sample the estimated pixel values G.sub.0′(6), G.sub.0′(8), G.sub.0′(16) and G.sub.0′(18) in the color plane G.sub.0′, such that the group of filter response values R.sub.approx can reflect an estimated pixel value for each Gr pixel in the color plane G.sub.0′.
(30) It is worth noting that the anticipated pixel values G.sub.0″(6), G.sub.0″(8), G.sub.0″(16) and G.sub.0″(18) can be obtained according to various estimation techniques. For example, in some embodiments, the calculation module 110 shown in
(31)
(32)
(33) In the present embodiment, a solution of the estimated pixel values G.sub.0′(0)-G.sub.0′(24) and G.sub.1′(0)-G.sub.1′(24), i.e. the estimated image data EM shown in
(34)
(35) It is worth noting that the above-described methods for performing pixel value calibration according to at least one set of calibration filter coefficients are provided for illustrative purposes, and are not intended to limit the scope of the present disclosure. In some embodiments, the pixel value correction amount is not limited to half of a pixel value difference between a pixel value of a pixel in the color plane G.sub.0′ and a pixel value of the pixel in the color plane G.sub.1′. In some embodiments, the filter circuit 120 can use two sets of calibration filter coefficients f.sub.12,g0 and f.sub.12,g1 to filter the input image data IM to obtain the estimated pixel value G.sub.0′(12) and the estimated pixel value G.sub.1′(12), respectively. Next, the filter circuit 120 can determine a pixel value correction amount, e.g. ½(G.sub.0′(12)−G.sub.1′(12)), according to a difference between the estimated pixel value G.sub.0′(12) and the estimated pixel value G.sub.1′(12), and accordingly calibrate the pixel value Gr.sub.12.
(36) In some embodiments, the calculation module 110 can be configured to determine a set of calibration filter coefficients ½(f.sub.12,g0+f.sub.12,g1). The filter circuit 120 can use the set of calibration filter coefficients ½(f.sub.12,g0+f.sub.12,g1) to filter the input image data IM, thereby calibrating the pixel value Gr.sub.12 to an average value ½(f.sub.12,g0+f.sub.12,g1) V of the estimated pixel value G.sub.0′(12) for the pixel P.sub.12 in the color plane G.sub.0′ and the estimated pixel value G.sub.1′(12) for the pixel P.sub.12 in the color plane G.sub.1′. In some embodiments, the filter circuit 120 can use two sets of calibration filter coefficients f.sub.12,g0 and f.sub.12,g1 to filter the input image data IM so as to obtain the estimated pixel value G.sub.0′(12) and the estimated pixel value G.sub.1′(12), respectively. Next, the filter circuit 120 can calibrate the pixel value Gr.sub.12 to an average value ½(G.sub.0′(12)+G.sub.1′(12)) of the estimated pixel value G.sub.0′(12) and the estimated pixel value G.sub.1′(12).
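Both calibration variants can be sketched as follows, assuming f_g0 and f_g1 are 25-tap calibration filters (rows applied to the flattened 5×5 input V) that estimate G.sub.0′(12) and G.sub.1′(12); the sign convention of the offset correction is an assumption for illustration.

```python
import numpy as np

def calibrate_gr12(v, f_g0, f_g1, mode="offset"):
    """Calibrate the center pixel value Gr12 of a 5x5 neighborhood.

    mode="offset":  subtract the correction amount 1/2*(G0'(12)-G1'(12)).
    mode="average": calibrate to the average 1/2*(G0'(12)+G1'(12)).
    """
    v = np.asarray(v, dtype=float).ravel()
    g0 = f_g0 @ v  # estimated pixel value G0'(12)
    g1 = f_g1 @ v  # estimated pixel value G1'(12)
    if mode == "offset":
        return v[12] - 0.5 * (g0 - g1)
    return 0.5 * (g0 + g1)
```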
(37) Although described with reference to the center pixel of the pixels P.sub.0-P.sub.24 in the above embodiments, the proposed pixel value calibration scheme can be applied to other pixels of interest, e.g. other Gr pixels or Gb pixels, without departing from the scope of the present disclosure.
(38) The above description is provided for illustrative purposes, and is not intended to limit the scope of the present disclosure. In some embodiments, the difference function f({R},{T}) shown in
(39) In some embodiments, the filter circuit design may not take all of the image smoothness, the structure similarity, the data consistency and the texture detail into consideration. For example, it is possible to design a filter circuit without considering one or more image characteristics associated with the image smoothness, the structure similarity, the data consistency and the texture detail.
(40) In some embodiments, the proposed pixel value calibration scheme can be applied in an RYYB color filter array, which is a filter design utilizing one red filter, one blue filter and two yellow filters, a CYYM color filter array, which is a filter design utilizing one cyan filter, one magenta filter and two yellow filters, or other types of color filter arrays. For example, when employed in the RYYB color filter array, the proposed pixel value calibration scheme can be used to correct errors caused by pixel crosstalk between two yellow pixels, one of which is located in a same row as a red pixel while the other is located in a same row as a blue pixel. As another example, when employed in the CYYM color filter array, the proposed pixel value calibration scheme can be used to correct errors caused by pixel crosstalk between two yellow pixels, one of which is located in a same row as a cyan pixel while the other is located in a same row as a magenta pixel.
(41) As long as a pixel value calibration scheme can determine a difference function associated with a plurality of groups of filter response values of estimated image data in different color planes and a plurality of groups of target values, determine at least one set of calibration filter coefficients by calculating a solution of the estimated image data which results in a minimum value of the difference function, and allow a filter circuit to estimate a pixel value difference (or a pixel value calibration amount) between different color planes in real time to thereby perform pixel value calibration, various variations and alternatives fall within the spirit and scope of the present disclosure.
(42)
(43) In step 702, input image data generated by a plurality of pixels is obtained. The input image data includes a first group of pixel values in a first color plane outputted by a first portion of the pixels, and a second group of pixel values in a second color plane outputted by a second portion of the pixels. For example, the calculation module 110 can obtain the input image data IM generated by the pixels P.sub.0-P.sub.24, which include a first group of pixel values in the Gr plane outputted by the pixels P.sub.0, P.sub.2, P.sub.4, P.sub.10, P.sub.12, P.sub.14, P.sub.20, P.sub.22 and P.sub.24, i.e. the pixel values Gr.sub.0, Gr.sub.2, Gr.sub.4, Gr.sub.10, Gr.sub.12, Gr.sub.14, Gr.sub.20, Gr.sub.22 and Gr.sub.24, and a second group of pixel values in the Gb plane outputted by the pixels P.sub.6, P.sub.8, P.sub.16 and P.sub.18, i.e. the pixel values Gb.sub.6, Gb.sub.8, Gb.sub.16 and Gb.sub.18.
(44) In step 704, a difference function associated with a plurality of groups of filter response values of estimated image data and a plurality of groups of target values is determined. The estimated image data includes a plurality of first estimated pixel values for the pixels in the first color plane and a plurality of second estimated pixel values for the pixels in the second color plane. The groups of filter response values are generated by utilizing a plurality of sets of characteristic filter coefficients to filter the first estimated pixel values and the second estimated pixel values. An anticipated pixel value of each pixel in the second portion of the pixels in the first color plane serves as a first group of target values in the groups of target values. For example, the calculation module 110 can determine a difference function associated with the groups of filter response values {R} of the estimated image data EM and the groups of target values {T}, e.g. the difference function fd. In some embodiments, the groups of filter response values {R} can be obtained by using the sets of characteristic filter coefficients M.sub.smooth, M.sub.struct, M.sub.subraw,g0, M.sub.subraw,g1 and M.sub.approx to filter the estimated pixel values G.sub.0′(0)-G.sub.0′(24) in the color plane G.sub.0′ and the estimated pixel values G.sub.1′(0)-G.sub.1′(24) in the color plane G.sub.1′ included in the estimated image data EM. In some embodiments where the color plane G.sub.0′ and the color plane G.sub.1′ are the Gr plane and the Gb plane respectively, the anticipated pixel value for each Gb pixel in the Gr plane can serve as a group of target values in the groups of target values {T} such that pixel values corresponding to adjacent columns/rows of pixels in the reconstructed Gr plane can exhibit brightness variations between the columns/rows of pixels.
(45) In step 706, at least one set of calibration filter coefficients is determined by calculating a solution of the estimated image data, wherein the solution results in a minimum value of the difference function. For example, the calculation module 110 can calculate a solution of the estimated image data EM which results in a minimum value of the difference function fd, thereby determining the at least one set of calibration filter coefficients {C}. In some embodiments, the calculation module 110 can use a least squares method to calculate an optimal solution of the estimated image data EM that results in a minimum value of the difference function fd, thereby determining the at least one set of calibration filter coefficients {C}.
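The least-squares solution of step 706 can be sketched as follows, under the assumption that each characteristic-filter constraint forms one row over the 50 unknowns [G.sub.0′(0)-G.sub.0′(24), G.sub.1′(0)-G.sub.1′(24)] and each target value supplies the corresponding entry of the right-hand side; the function and variable names are illustrative.

```python
import numpy as np

def solve_estimated_image(constraint_rows, target_values):
    """Find the estimated image data EM that minimizes the sum of
    squared differences between filter responses and targets.

    constraint_rows: arrays of characteristic filter coefficients,
        stacked into A of shape (num_constraints, 50).
    target_values: arrays of target values, concatenated into b.
    """
    A = np.vstack(constraint_rows)       # one row per filter response
    b = np.concatenate(target_values)    # one entry per target value
    em, *_ = np.linalg.lstsq(A, b, rcond=None)
    return em[:25], em[25:]              # G0' plane, G1' plane
```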
(46) In step 708, the input image data is filtered by a filter circuit using the at least one set of calibration filter coefficients to thereby calibrate the first group of pixel values. For example, the filter circuit 120 can use the at least one set of calibration filter coefficients {C} to filter the input image data IM, thereby calibrating the pixel values Gr.sub.0, Gr.sub.2, Gr.sub.4, Gr.sub.10, Gr.sub.12, Gr.sub.14, Gr.sub.20, Gr.sub.22 and Gr.sub.24. In some embodiments, the filter circuit 120 can use the set of calibration filter coefficients ½(f.sub.12,g0−f.sub.12,g1) to filter the input image data IM to thereby determine the pixel value correction amount ½(f.sub.12,g0−f.sub.12,g1) V for the pixel value Gr.sub.12, and accordingly correct the pixel value Gr.sub.12.
(47) In some embodiments, in step 702, the first group of pixel values can be pixel values outputted by Gb pixels, such as the pixel values Gb.sub.6, Gb.sub.8, Gb.sub.16 and Gb.sub.18, and the second group of pixel values can be pixel values outputted by Gr pixels, such as the pixel values Gr.sub.0, Gr.sub.2, Gr.sub.4, Gr.sub.10, Gr.sub.12, Gr.sub.14, Gr.sub.20, Gr.sub.22 and Gr.sub.24. In other words, a pixel value to be calibrated can be a pixel value of a Gb pixel included in the input image data IM.
(48) In some embodiments, in step 702, each pixel in the first portion of the pixels is located in a same row as a pixel having a first color and has a second color. Each pixel in the second portion of the pixels is located in a same row as a pixel having a third color and has the second color. For example, in some cases where the pixels P.sub.0-P.sub.24 are arranged in correspondence with an RGGB color filter array, the first color, the second color and the third color can be red, green and blue respectively, or can be blue, green and red respectively. Errors caused by pixel crosstalk between two green pixels, i.e. a Gr pixel and a Gb pixel, can be corrected accordingly. In other words, the first group of pixel values to be calibrated can be pixel value(s) of Gr pixel(s) or pixel value(s) of Gb pixel(s) included in the input image data IM. As another example, in some cases where the pixels P.sub.0-P.sub.24 are arranged in correspondence with an RYYB color filter array, the first color, the second color and the third color can be red, yellow and blue respectively, or can be blue, yellow and red respectively. Errors caused by pixel crosstalk between two yellow pixels, one of which is located in a same row as a red pixel while the other is located in a same row as a blue pixel, can be corrected accordingly.
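For the RGGB (GRBG-tile) case above, the first and second portions of the pixels can be identified by row/column parity; a minimal sketch, assuming green pixels sit at (even row, even column) and (odd row, odd column) sites as in the described 5×5 neighborhood:

```python
def classify_green(row, col):
    """Label a green pixel in an assumed GRBG-style tile: Gr shares a
    row with red pixels, Gb shares a row with blue pixels."""
    if row % 2 == 0 and col % 2 == 0:
        return "Gr"  # green on a red row
    if row % 2 == 1 and col % 2 == 1:
        return "Gb"  # green on a blue row
    return None      # not a green site in this pattern
```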
(49) As a person skilled in the art can appreciate operation of each step of the pixel value calibration method 700 after reading the above paragraphs directed to
(50) With the use of suitable/optimal filter design, the proposed pixel value calibration scheme can reconstruct different color planes, such as a complete Gb plane or Gr plane, according to real-time image data, thereby estimating and calibrating a pixel value difference between different color planes. Alternatively, the proposed pixel value calibration scheme can estimate and calibrate a difference between pixel values for a same pixel in different color planes.
(51) The foregoing outlines features of several embodiments so that those skilled in the art may better understand various aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should also realize that such equivalent embodiments still fall within the spirit and scope of the present disclosure, and they may make various changes, substitutions, and alterations thereto without departing from the spirit and scope of the present disclosure.