METHOD FOR PROCESSING A PIXELS MATRIX IN AN IMAGE PROCESSING CHAIN AND CORRESPONDING ELECTRONIC DEVICE

20230095645 · 2023-03-30


    Abstract

    The method for processing a matrix of pixels, each pixel containing an original red, green, blue, or infrared component, comprises at least one interpolation of an interpolated component, different from the original component of a pixel of interest, from the components of a group of pixels neighboring the pixel of interest. The interpolation comprises: a calculation of the sum of the components of reference pixels weighted by a respectively assigned weight, the reference pixels being pixels of the group having the same original component as the interpolated component; an evaluation of the spatial uniformity of an environment, within the group, of each reference pixel; and a calculation of the weights assigned to the reference pixels at values which are normalized and proportional to the respective spatial uniformity.

    Claims

    1. A method for processing a pixel matrix in an image processing chain circuit couplable to an imager circuit, each pixel comprising an original component of one of an original red, an original green, an original blue, or an original infrared component, the method comprising: interpolating an interpolated component different from the original component of a pixel of interest from components of a group of pixels neighboring the pixel of interest, the interpolating comprising: calculating a sum of the components of reference pixels weighted by an assigned weight, the reference pixels being pixels of the group of pixels having the same original component as the interpolated component, evaluating a spatial uniformity of each reference pixel in an environment within the group of pixels, and calculating assigned weights of the reference pixels with values normalized and proportional to the spatial uniformity.

    2. The method of claim 1, wherein the evaluating the spatial uniformity comprises calculating a gradient of components of pixels having an original green component adjacent to the reference pixels.

    3. The method of claim 2, wherein calculating the gradient of components of pixels comprises measuring an absolute difference between a greatest value and a smallest value of components of pixels having an original green component adjacent to the reference pixels.

    4. The method of claim 2, wherein evaluating the spatial uniformity comprises: identifying an orientation of spatial variation from a comparison of the components of the reference pixels; and selecting pixels with an original green component as used for the calculating the gradient of components of pixels based on the orientation of the spatial variation as identified.

    5. The method of claim 1, wherein the group of pixels comprises a set of pixels belonging to a square of pixels having an odd number of pixels on each side, the pixel of interest located at a center of the square of pixels.

    6. The method of claim 1, wherein a pixel matrix is delivered to the image processing chain circuit by the imager circuit in accordance with an elementary pattern of a red-green-blue-infrared (RGB-IR) 4×4 type having two red pixels, eight green pixels, two blue pixels, and four infrared pixels, each red pixel, blue pixel, and infrared pixel arranged such that it is adjacent only to green pixels.

    7. The method of claim 1, wherein the interpolating is implemented in a processing of depollution of an infrared noise from the pixel matrix, the interpolated component being an infrared component, the pixel of interest having an original red component, an original green component, and an original blue component.

    8. The method of claim 1, wherein the interpolating is implemented in a processing of reconstruction of a visible component instead of an infrared component, the interpolated component being a red component and a blue component, the pixel of interest having an original infrared component.

    9. The method of claim 1, wherein the interpolating is implemented in a processing for formatting the pixel matrix into a Bayer matrix, the interpolated component being one of a red component or a blue component, the pixel of interest having one of an original blue component or an original red component.

    10. An electronic device, comprising: an image processing chain circuit couplable to an imager circuit, the image processing chain circuit configured to: process a pixel matrix, each pixel comprising an original component of one of an original red, an original green, an original blue, or an original infrared component; and determine an interpolated component different from the original component of a pixel of interest from components of a group of pixels neighboring the pixel of interest, the image processing chain circuit, to determine the interpolated component, is configured to: calculate a sum of the components of reference pixels weighted by an assigned weight, the reference pixels being pixels of the group of pixels having the same original component as the interpolated component, evaluate a spatial uniformity of each reference pixel in an environment within the group of pixels, and calculate assigned weights of the reference pixels with values normalized and proportional to the spatial uniformity.

    11. The electronic device of claim 10, wherein the image processing chain circuit, to evaluate the spatial uniformity, is configured to calculate a gradient of components of pixels having an original green component adjacent to the reference pixels.

    12. The electronic device of claim 11, wherein the image processing chain circuit, to calculate the gradient of components of pixels, is configured to measure an absolute difference between a greatest value and a smallest value of components of pixels having an original green component adjacent to the reference pixels.

    13. The electronic device of claim 11, wherein the image processing chain circuit, to evaluate the spatial uniformity, is configured to: identify an orientation of spatial variation from a comparison of the components of the reference pixels; and select pixels with an original green component as used for the calculating the gradient of components of pixels based on the orientation of the spatial variation as identified.

    14. The electronic device of claim 10, wherein the group of pixels comprises a set of pixels belonging to a square of pixels having an odd number of pixels on each side, the pixel of interest located at a center of the square of pixels.

    15. The electronic device of claim 10, wherein the image processing chain circuit is configured to process a pixel matrix in accordance with an elementary pattern of a red-green-blue-infrared (RGB-IR) 4×4 type having two red pixels, eight green pixels, two blue pixels, and four infrared pixels, each red pixel, blue pixel, and infrared pixel arranged such that it is adjacent only to green pixels.

    16. The electronic device of claim 10, wherein the image processing chain circuit comprises a processor for depolluting an infrared noise from the pixel matrix, the interpolated component being an infrared component, the pixel of interest having an original red component, an original green component, and an original blue component.

    17. The electronic device of claim 10, wherein the image processing chain circuit comprises a processor for reconstructing a visible component instead of an infrared component, the interpolated component being a red component and a blue component, the pixel of interest having an original infrared component.

    18. The electronic device of claim 10, wherein the image processing chain circuit comprises a processor for formatting the pixel matrix into a Bayer matrix, the interpolated component being one of a red component or a blue component, the pixel of interest having one of an original blue component or an original red component.

    19. An image processing chain circuit configured to: process a pixel matrix, each pixel comprising an original component of one of an original red, an original green, an original blue, or an original infrared component; and determine an interpolated component different from the original component of a pixel of interest from components of a group of pixels neighboring the pixel of interest, the image processing chain circuit, to determine the interpolated component, is configured to: calculate a sum of the components of reference pixels weighted by an assigned weight, the reference pixels being pixels of the group of pixels having the same original component as the interpolated component, evaluate a spatial uniformity of each reference pixel in an environment within the group of pixels, and calculate assigned weights of the reference pixels with values normalized and proportional to the spatial uniformity.

    20. The image processing chain circuit of claim 19, wherein the image processing chain circuit is configured to process a pixel matrix in accordance with an elementary pattern of a red-green-blue-infrared (RGB-IR) 4×4 type having two red pixels, eight green pixels, two blue pixels, and four infrared pixels, each red pixel, blue pixel, and infrared pixel arranged such that it is adjacent only to green pixels.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0036] For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:

    [0037] FIG. 1 is a block diagram of an embodiment electronic device and photosensitive pixels; and

    [0038] FIGS. 2-6 are embodiments of photosensitive pixels.

    DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS

    [0039] FIG. 1 illustrates an electronic device DIS, including an image processing chain CHT. An input of the image processing chain CHT is intended to be connected to an imager IMG, and an output of the image processing chain CHT is intended to be connected to an image signal processing unit ISP.

    [0040] The imager IMG or the image signal processing unit ISP may belong to the device DIS in a variant that is fully integrated or not. The imager IMG includes a matrix of photosensitive “pixels” in an RGB-IR type configuration, including an interlaced pattern of photosensitive pixels dedicated to the visible light components and photosensitive pixels dedicated to an infrared light component.

    [0041] The photosensitive pixels generate an electrical signal representative of the amount of light received during an acquisition phase, regardless of its wavelength. The components of the photosensitive pixels are conventionally defined by respective blue, green, red, and infrared optical filters facing the corresponding photosensitive pixels. Furthermore, above the pixel matrix, an optical module typically incorporates a dual-band filter with a narrow infrared spectral band, defining the sensitivity to infrared wavelengths, and a visible spectral band. Consequently, the pixels dedicated to the infrared receive an infrared signal, but the pixels dedicated to the visible components also receive part of this infrared signal and are therefore partly polluted by this amount of infrared.

    [0042] The signals generated by the photosensitive pixels of the imager IMG are communicated to the processing chain CHT in the form of a “raw” digital data matrix RGBIR_RAW, also called “data pixels” or simply “pixels.”

    [0043] In the following, which relates to the processing of a digital image data matrix, the term “pixel” designates the position of a datum in the digital data matrix, this position being typically identical to the position of the corresponding photosensitive pixel in the photosensitive matrix of the imager IMG.

    [0044] Furthermore, each pixel is considered to contain a single digital datum, called component, representative of the intensity of the respective red, green, blue, or infrared component in the image at the position of that pixel.

    [0045] In this example, the “raw” digital data matrix RGBIR_RAW is of the RGB-IR 4×4 type, i.e., an elementary pattern of the matrix (i.e., the smallest element that can be repeated to compose the matrix) includes, in a sixteen-pixel square, two red R pixels, eight green G pixels, two blue B pixels, and four infrared IR pixels, arranged so that each red R, blue B, and infrared IR pixel is adjacent only to green G pixels, and typically so that the red R, blue B, and infrared IR pixels are substantially equally distributed in the elementary pattern.
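    As an illustration, one RGB-IR 4×4 elementary pattern satisfying the constraints of [0045] can be written out and checked programmatically. The particular layout below is an assumption for illustration only; the patent's figures may use a different but equivalent arrangement.

```python
# One RGB-IR 4x4 elementary pattern satisfying the constraints of [0045]:
# two R, eight G, two B, and four IR pixels, with every R, B, and IR pixel
# 4-adjacent only to G pixels when the pattern tiles the plane.
# This specific layout is an illustrative assumption.
PATTERN = [
    ["B", "G", "R", "G"],
    ["G", "IR", "G", "IR"],
    ["R", "G", "B", "G"],
    ["G", "IR", "G", "IR"],
]

def component_at(row, col):
    """Component of the pixel at (row, col) in the tiled matrix."""
    return PATTERN[row % 4][col % 4]

def check_adjacency():
    """Verify that every non-green pixel touches only green pixels."""
    for r in range(4):
        for c in range(4):
            if PATTERN[r][c] != "G":
                neighbours = [component_at(r - 1, c), component_at(r + 1, c),
                              component_at(r, c - 1), component_at(r, c + 1)]
                assert all(n == "G" for n in neighbours), (r, c, neighbours)
    return True
```

    Because the matrix is a periodic tiling of the elementary pattern, checking the adjacency property on one 4×4 tile (with modular wrap-around) covers the whole image.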

    [0046] The processing chain CHT includes a depollution processing means DEPOL for depolluting an infrared noise from the pixels, configured to interpolate an infrared noise component in the red R, green G, and blue B pixels of the raw data matrix RGBIR_RAW. The depollution processing means DEPOL is thus capable of subtracting the infrared noise component from the information item contained in each visible pixel R, G, B, and of providing the corresponding “depolluted” components downstream of the processing chain CHT, in particular to a reconstruction processing means RCNST and to a formatting processing means RBAYR.

    [0047] The processing chain CHT further includes a reconstruction processing means RCNST for reconstructing a visible component instead of an infrared component, configured to interpolate a reconstituted red R or blue B component at the position of the infrared IR pixels of the raw data matrix RGBIR_RAW. The reconstruction processing means RCNST is thus capable of providing a reconstructed matrix RGB_RCNST of the non-Bayer RGB type, containing only visible R, G, and B components, but in a format that is not the Bayer format. For example, the matrix RGB_RCNST of the non-Bayer RGB type includes an elementary pattern in a square of sixteen pixels.

    [0048] In practice, the depollution processing means DEPOL and the reconstruction processing means RCNST can be pooled so that their respective functions are implemented in an “interlaced” and concomitant manner, for example, within the framework of a single pass algorithm.

    [0049] In this case, during the first phase of the processing chain CHT, the depollution processing DEPOL of an infrared noise from the visible pixels R, G, and B and, simultaneously, the reconstruction processing RCNST of a visible component instead of an infrared component use the information items obtained by scanning the “raw” digital data matrix RGBIR_RAW row by row, and perform the processing pixel by pixel. The input image RGBIR_RAW is scanned only once, hence the term “single pass.”

    [0050] Briefly, in the manner more fully described below in relation to FIG. 1, as well as on the one hand with FIGS. 2 to 4 and on the other hand with FIG. 5, when the “raw” digital data matrix RGBIR_RAW is scanned, depending on the original component of the processed pixel:

    [0051] If the processed pixel has the original infrared IR component, the depollution processing of the two red R (or blue B) reference pixels is carried out “on the fly,” and the interpolation calculation is then carried out from the depolluted reference pixels. The depolluted reference pixels are then stored in the output image, replacing the initial polluted value.

    [0052] If the pixel of interest has the original green component G, the depollution processing is executed.

    [0053] If the pixel of interest has the original red R or blue B component, then: either the pixel has already been depolluted, or it has not yet been depolluted (when the pixel of interest is not on any diagonal of an infrared pixel, i.e., it is in a corner), and in this case, the depollution processing is carried out.

    [0054] The processing chain CHT finally includes a formatting processing means RBAYR for formatting into a Bayer matrix RGB_BAYR, configured to interpolate, in the reconstructed matrix RGB_RCNST, reconstituted red components R instead of blue components B and reconstituted blue components B instead of red components R, to provide a matrix RGB_BAYR processed in the Bayer format.

    [0055] The Bayer format includes an elementary pattern in a four-pixel square, containing one red R pixel and one blue B pixel on one diagonal, and two green G pixels on the other diagonal.

    [0056] The processed matrix RGB_BAYR can then be “handled” by an image signal processing unit ISP, which is conventional and, in embodiments, adapted for Bayer format matrices.

    [0057] Each processing means DEPOL, RCNST, and RBAYR is, thus, configured to implement, in embodiments, an interpolation of a component, called an interpolated component.

    [0058] The components of the pixels of the data matrices transmitted on the inputs of each of the processing means and processed by each of the processing means are called original components.

    [0059] The pixel on which the interpolated component is calculated is called the pixel of interest P. In the resulting processing matrix, the pixel of interest is called interpolated pixel ITP.

    [0060] For clarity, the following references are shown in relation to the reconstruction processing RCNST of a visible component instead of an infrared component, performed on the raw digital data matrix RGBIR_RAW. This being the case, the interpolation method is described below for the general case, applying equally well in the depollution processing DEPOL on the raw data matrix RGBIR_RAW, in the reconstruction processing RCNST on the raw data matrix RGBIR_RAW in collaboration with at least some pixels resulting from the depollution processing DEPOL, and in the formatting processing RBAYR on the reconstructed and depolluted matrix RGB_RCNST.

    [0061] Thus, for a pixel of interest P of a processed matrix RGBIR_RAW, RGB_RCNST, i.e., the location of a datum in the matrix, the original component (in this example, infrared) IR is the information item known at the input of the processing, contained by this pixel P or by pixels KER neighboring the pixel of interest P. In contrast, the interpolated component ITP is an information item at the position of the pixel of interest P in the matrix, which is unknown before the processing and “reconstructed” or “reconstituted” by calculations executed by the respective processing means DEPOL, RCNST, RBAYR.

    [0062] The interpolation implemented by each processing means DEPOL, RCNST, and RBAYR, uses the known information items of the original components of the pixels of the matrix, in particular the pixels KER neighboring the pixel of interest P, by assigning them a respective weight. The weight is conventionally a coefficient of distribution of the influence in the calculation of each weighted value relative to the others.

    [0063] In the interpolation implemented by the processing means DEPOL, RCNST, and RBAYR, the allocation of the weights is adjusted, taking into account the variations in textures and edges in the image, so that the strongest weights are given to the pixels located in the “flattest,” i.e., the most uniform, or “the least textured” areas.
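    The weighting principle of the two preceding paragraphs — normalized weights proportional to a per-pixel uniformity score, so that the "flattest" areas dominate — can be sketched as follows. This is only the general principle; the helper names are illustrative assumptions, and the exact α-based weighting scheme is detailed below with FIGS. 3 to 5.

```python
def uniformity_weights(uniformities):
    """Turn per-reference-pixel uniformity scores into interpolation
    weights: proportional to uniformity and normalised to sum to 1, so
    the flattest (least textured) environments dominate the sum."""
    total = sum(uniformities)
    if total == 0:  # degenerate case: fall back to equal weights
        return [1.0 / len(uniformities)] * len(uniformities)
    return [u / total for u in uniformities]

def weighted_interpolation(components, weights):
    """Interpolated component: weighted sum of the reference components."""
    return sum(c * w for c, w in zip(components, weights))
```

    For example, with uniformity scores [3, 1] the first reference pixel receives weight 0.75 and the second 0.25.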

    [0064] In this regard, the interpolation of an interpolated component ITP, different from the original component of a pixel of interest P is made from the components of a group of pixels KER neighboring the pixel of interest P, called kernel or pixel kernel KER.

    [0065] For example, the group of pixels neighboring the pixel of interest P, i.e., the kernel KER, comprises a set of pixels belonging to a square of pixels having an odd number of pixels (for example, five) on each side, the pixel of interest P being located in the center of the square KER.
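    As a sketch, the kernel of [0064]-[0065] and the selection of reference pixels can be expressed as follows. The sparse-dictionary matrix representation and the border handling (out-of-matrix positions are simply skipped) are illustrative assumptions, not specified by the description.

```python
def kernel_positions(row, col, side=5):
    """Positions of the square kernel of odd side `side` centred on the
    pixel of interest at (row, col)."""
    half = side // 2
    return [(r, c)
            for r in range(row - half, row + half + 1)
            for c in range(col - half, col + half + 1)]

def reference_pixels(matrix, row, col, interpolated_component, side=5):
    """Reference pixels of the kernel: the positions holding the same
    original component as the interpolated component. `matrix` maps
    (r, c) -> (component, value); positions outside the image are
    skipped (an illustrative border-handling choice)."""
    return [(pos, matrix[pos][1])
            for pos in kernel_positions(row, col, side)
            if pos in matrix and matrix[pos][0] == interpolated_component]
```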

    [0066] Thus, the interpolation comprises a calculation of the sum of the components of reference pixels weighted by a respectively assigned weight, the reference pixels being the pixels of the kernel KER having the same original component (in this example, red) R as the interpolated component R of the resulting pixel ITP in the processed matrix RGB_RCNST.

    [0067] To obtain the weights, the interpolation comprises an evaluation of a spatial uniformity of an environment, within the kernel KER, of each reference pixel (in this example, the pixels having the red component in the kernel KER) R, and a calculation of the weights assigned to the reference pixels R at values normalized and proportional to the respective spatial uniformity.

    [0068] The evaluation of the spatial uniformity of the reference pixels R advantageously comprises a calculation of gradients on the components of the pixels having the original green component G adjacent to the respective reference pixels R. For example, the calculation of the gradients can be obtained by a measurement of the absolute difference between the greatest value and the smallest value of the components of the green pixels G adjacent to the reference pixels R.
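    A minimal sketch of this uniformity measure, under the max-min formulation of [0068]:

```python
def green_gradient(adjacent_greens):
    """Gradient around a reference pixel: absolute difference between the
    greatest and smallest green components adjacent to it. A small value
    indicates a flat (uniform) environment; a large value indicates a
    texture or an edge."""
    return abs(max(adjacent_greens) - min(adjacent_greens))
```

    For example, in a near-flat area `green_gradient([100, 102, 99, 101])` is 3, whereas an edge crossing the neighborhood yields a much larger value.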

    [0069] Within the framework of an implementation of “single pass” depollution DEPOL and reconstruction RCNST processing, the pixels having the original green component G come from the raw data matrix RGBIR_RAW and have not yet received the depollution DEPOL processing at the time of the implementation of the calculation of the gradients mentioned above. This does not pose a problem in practice since it is assumed that the green G components and the infrared noise components on these pixels are correlated, i.e., the infrared noise components are generally uniform in the areas where the green G components are generally uniform, and the infrared noise components have variations in areas where the green G components have variations. Consequently, the presence of the infrared noise component in the data taken into account in calculating the gradients has little or no impact on the final weighting decision.

    [0070] Reference is now made to FIGS. 2 to 6 to detail different cases of evaluation of the spatial uniformity of the environment of the reference pixels, as well as the calculations of the weights and the interpolated component of the respective cases.

    [0071] FIGS. 2 to 4 advantageously correspond to the depollution processing DEPOL, wherein the interpolation is implemented so that the interpolated component is the infrared component IR (infrared noise component), and the pixels of interest P have the original red R, green G, and blue B components. Consequently, the reference pixels have the original infrared IR component.

    [0072] FIG. 2 illustrates the different possible cases of kernel KER and respective reference pixels P1-P6, P1-P4, for two positions of pixels of interest P having the original green component KER_G, KER_G2, and for the position of pixels of interest P having the original red or blue component KER_RB.

    [0073] Thus, for the pixels of interest P with an original green component G, the kernels KER_G, KER_G2 include six reference pixels P1, P2, P3, P4, P5, P6, distributed either in two rows and three columns KER_G, or in three rows and two columns KER_G2, constituting two perfectly equivalent cases by rotation of a quarter of a turn.

    [0074] For the pixels of interest P with an original red R or blue B component, the kernel KER_RB includes four reference pixels P1, P2, P3, P4, located at the diagonals of the pixels of interest P. The case illustrated corresponds to a pixel of interest P with original blue component B, but the distribution of the reference pixels P1, P2, P3, P4 is strictly identical for a pixel of interest P with original red component R.

    [0075] FIG. 3, in relation with the equations Eq.301 to Eq.338, illustrates an example of the interpolation calculations of the value of the interpolated infrared noise component (ITP) within the group KER_G in the case where the pixel of interest P has the original green component.

    [0076] In the equations, references such as P1, P4, G1 express the value of the component of the pixel designated by the reference.

    [0077] In this example, the evaluation of the spatial uniformity first comprises an identification of the orientation of spatial variation ORT_1, ORT_2, ORT_3 from a comparison of the components of the reference pixels P1-P6 based on the equations Eq.301 and Eq.302.


    grad.sub.EW=|(P1+P4)−(P3+P6)|  Eq.301


    grad.sub.NS=|(P1+P2+P3)−(P4+P5+P6)|  Eq.302

    [0078] If grad.sub.NS>grad.sub.EW, then an orientation of spatial variation ORT_1 is identified in a direction N-S (“North-South”). The calculation of the weights W1, W2, W3, W1′, W2′, W3′ as defined by the equations Eq.311-Eq.319, uses a selection of pixels G, G1, G1′ having the original green component, which are aligned with the pixel of interest P in the identified N-S orientation.

    [0079] The evaluation of the spatial uniformity of the environment of the reference pixels P1-P6, is defined by the equations Eq.311 and Eq.312.


    grad=|G1−G|  Eq.311


    grad′=|G1′−G|  Eq.312

    [0080] The calculation of the weights W, W′, W1, W1′ assigned to the reference pixels P1-P6, at values which are normalized and proportional to the respective spatial uniformity, is defined by the equations Eq.313 to Eq.318.

    [00001] α=(1+grad)/(1+grad′)  Eq.313

    {W2′=α*W2; W1+W2+W3+W1′+W2′+W3′=1; W1=W3=0.5*W2; W1′=W3′=0.5*W2′}  Eq.314

    [0081] The resolution of the system Eq.314 gives the different values of the weights assigned to the reference pixels P1-P6 (writing W for W2 and W′ for W2′):

    [00002] W=1/(2(1+α))  Eq.315

    W′=α/(2(1+α))  Eq.316

    W1=1/(4(1+α))  Eq.317

    W1′=α/(4(1+α))  Eq.318
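    Numerically, the closed-form weights of the N-S case can be reproduced and sanity-checked; this is an illustrative Python sketch (the function and variable names are assumptions), verifying in particular that the six weights used in Eq.319 sum to 1.

```python
def ort1_weights(grad, grad_prime):
    """Weights for the N-S orientation (Eq.313 to Eq.318): alpha compares
    the two green gradients, the side with the larger gradient (less
    uniform environment) receives the smaller weights, and the six
    weights are normalised to sum to 1."""
    alpha = (1.0 + grad) / (1.0 + grad_prime)      # Eq.313
    W = 1.0 / (2.0 * (1.0 + alpha))                # Eq.315
    W_prime = alpha / (2.0 * (1.0 + alpha))        # Eq.316
    W1 = 1.0 / (4.0 * (1.0 + alpha))               # Eq.317
    W1_prime = alpha / (4.0 * (1.0 + alpha))       # Eq.318
    # Weight order of Eq.319: [w1, w2, w3, w4, w5, w6]
    return [W1, W, W1, W1_prime, W_prime, W1_prime]
```

    With equal gradients, alpha is 1 and the weights reduce to the homogeneous values of Eq.321 and Eq.322 (0.25 and 0.125).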

    [0082] Finally, the interpolated component (ITP) is obtained by calculating the sum of the components of the reference pixels {Pi}.sub.1≤i≤6 weighted by the respective weights {ωi}.sub.1≤i≤6, as defined by the equation Eq.319.


    ITP=Σ.sub.iPi*ωi with {ωi}.sub.1≤i≤6={ω1=W1; ω2=W; ω3=W1; ω4=W1′; ω5=W′; ω6=W1′}, as represented in the case ORT_1 compared to the Pi of the kernel KER_G of FIG. 3.  Eq.319

    [0083] If grad.sub.NS=grad.sub.EW, then an orientation of spatial variation ORT_2 is identified in no direction, and the values of the weights W1, W are fixed arbitrarily homogeneously as defined by the equations Eq.321-Eq.323.


    W=0.25  Eq.321


    W1=0.125  Eq.322

    [0084] And, the interpolated component (ITP) is obtained by calculating the sum of the components of the reference pixels {Pi}.sub.1≤i≤6 weighted by the respective weights {ωi}.sub.1≤i≤6, as defined by the equation Eq.323.


    ITP=Σ.sub.iPi*ωi with {ωi}.sub.1≤i≤6={ω1=W1; ω2=W; ω3=W1; ω4=W1; ω5=W; ω6=W1}, as represented in the case ORT_2 compared to the Pi of the kernel KER_G of FIG. 3.  Eq.323

    [0085] If grad.sub.NS<grad.sub.EW, then an orientation of spatial variation ORT_3 is identified in a direction W-E (“West-East”). The calculation of the weights W1, W2, W3, W1′, W2′, W3′ as defined by the equations Eq.331-Eq.338, uses a selection of pixels G, G1, G1′ having the original green component, which are aligned with the pixel of interest P in the identified W-E orientation.

    [0086] The evaluation of the spatial uniformity of the environment of the reference pixels P1-P6, is defined by the equations Eq.331 and Eq.332.


    grad=|G1−G|  Eq.331


    grad′=|G1′−G|  Eq.332

    [0087] The calculation of the weights W1, W1′, W assigned to the reference pixels P1-P6, at values which are normalized and proportional to the respective spatial uniformity, is defined by the equations Eq.333 to Eq.337.

    [00003] α=(1+grad)/(1+grad′)  Eq.333

    {W1′=α*W1; 2W+2W1+2W1′=1; W=0.25}  Eq.334

    [0088] The resolution of the system Eq.334 gives the different values of the weights assigned to the reference pixels P1-P6:

    [00004] W1=1/(4(1+α))  Eq.335

    W1′=α/(4(1+α))  Eq.336

    W=0.25  Eq.337

    [0089] Finally, the interpolated component (ITP) is obtained by calculating the sum of the components of the reference pixels {Pi}.sub.1≤i≤6 weighted by the respective weights {ωi}.sub.1≤i≤6, as defined by equation Eq.338.


    ITP=Σ.sub.iPi*ωi with {ωi}.sub.1≤i≤6={ω1=W1; ω2=W; ω3=W1′; ω4=W1; ω5=W; ω6=W1′}, as represented in the case ORT_3 compared to the Pi of the kernel KER_G of FIG. 3.  Eq.338
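    Putting the three cases ORT_1, ORT_2, and ORT_3 together, the interpolation of FIG. 3 for a green pixel of interest can be sketched as follows. This is only an illustration: the function signature, the argument names, and the convention that P1-P3 form the top row (with grad measured on their side) are our assumptions, consistent with Eq.301/Eq.302 but not spelled out in the text.

```python
def interpolate_ker_g(P, G, G1_ns, G1p_ns, G1_we, G1p_we):
    """Interpolated infrared component for a green pixel of interest
    (kernel KER_G, six reference pixels P = [P1..P6], assumed laid out
    as top row P1,P2,P3 and bottom row P4,P5,P6). G is the green value
    of the pixel of interest; (G1_ns, G1p_ns) and (G1_we, G1p_we) are
    the green pixels aligned with it in the N-S and W-E directions."""
    grad_ew = abs((P[0] + P[3]) - (P[2] + P[5]))                 # Eq.301
    grad_ns = abs((P[0] + P[1] + P[2]) - (P[3] + P[4] + P[5]))   # Eq.302
    if grad_ns > grad_ew:                       # ORT_1: N-S variation
        grad, grad_p = abs(G1_ns - G), abs(G1p_ns - G)           # Eq.311/312
        alpha = (1 + grad) / (1 + grad_p)                        # Eq.313
        W = 1 / (2 * (1 + alpha))                                # Eq.315
        W_p = alpha / (2 * (1 + alpha))                          # Eq.316
        w = [W / 2, W, W / 2, W_p / 2, W_p, W_p / 2]             # Eq.319
    elif grad_ns == grad_ew:                    # ORT_2: no direction
        w = [0.125, 0.25, 0.125, 0.125, 0.25, 0.125]             # Eq.321-323
    else:                                       # ORT_3: W-E variation
        grad, grad_p = abs(G1_we - G), abs(G1p_we - G)           # Eq.331/332
        alpha = (1 + grad) / (1 + grad_p)                        # Eq.333
        W1 = 1 / (4 * (1 + alpha))                               # Eq.335
        W1_p = alpha / (4 * (1 + alpha))                         # Eq.336
        w = [W1, 0.25, W1_p, W1, 0.25, W1_p]                     # Eq.338
    return sum(p * wi for p, wi in zip(P, w))
```

    Note that W/2 in the ORT_1 branch equals W1 of Eq.317, so the branch reproduces the six weights of Eq.319 exactly.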

    [0090] FIG. 4, in relation with the equations Eq.401 to Eq.436, illustrates an example of the interpolation calculations of the value of the interpolated infrared noise component (ITP) within the group KER_RB in the case where the pixel of interest P has the original blue component B.

    [0091] In the equations, the references such as P1, P4, GN1 express the value of the component of the pixel designated by the reference.

    [0092] In this example, the evaluation of the spatial uniformity first comprises an identification of the orientation of spatial variation ORT_1, ORT_2, ORT_3 from a comparison of the components of the reference pixels P1-P4 based on the equations Eq.401, Eq.402.


    grad.sub.EW=|(P1+P3)−(P2+P4)|  Eq.401


    grad.sub.NS=|(P1+P2)−(P3+P4)|  Eq.402

    [0093] If grad.sub.NS>grad.sub.EW, then an orientation of spatial variation ORT_1 is identified in a direction N-S (“North-South”), and the calculation of the weights W1, W2, as defined by the equations Eq.411-Eq.416, uses a selection of pixels GN1, GW1, GE1, GS1, GN1′, GE1′, GN2 and, respectively, GN2, GW2, GE2, GS2, GS2′, GW2′, GS1 having the original green component, which are adjacent to the reference pixels P1, P2, P3, P4.

    [0094] The evaluation of the spatial uniformity of the environment of the reference pixels P1-P4, is defined by the equations Eq.411 and Eq.412.


    grad1=max(GN1;GS1;GE1;GW1)−min(GN1;GS1;GE1;GW1)+max(GN1′;GN2;GE1;GE1′)−min(GN1′;GN2;GE1;GE1′)  Eq.411


    grad2=max(GN2;GS2;GE2;GW2)−min(GN2;GS2;GE2;GW2)+max(GS1;GS2′;GW2′;GW2)−min(GS1;GS2′;GW2′;GW2)  Eq.412

    [0095] The calculation of the weights W1, W2 assigned to the reference pixels P1-P4, at values which are normalized and proportional to the respective spatial uniformity, is defined by the equations Eq.413 to Eq.415.

    [00005] α=(1+grad1)/(1+grad2)  Eq.413

    W1=1/(2(1+α))  Eq.414

    W2=α/(2(1+α))  Eq.415

    [0096] Finally, the interpolated component (ITP) is obtained by calculating the sum of the components of the reference pixels {Pi}.sub.1≤i≤4 weighted by the respective weights {ωi}.sub.1≤i≤4, as defined by the equation Eq.416.


    ITP=Σ.sub.iPi*ωi with {ωi}.sub.1≤i≤4={ω1=W1; ω2=W1; ω3=W2; ω4=W2}, as represented in the case ORT_1 compared to the Pi of the kernel KER_RB of FIG. 4.  Eq.416

    [0097] If grad.sub.NS=grad.sub.EW, then an orientation of spatial variation ORT_2 is identified in no direction, and the values of the weights are fixed arbitrarily homogeneously as defined by the equations Eq.421-Eq.422.


    W=0.25  Eq.421

    [0098] And, the interpolated component (ITP) is obtained by calculating the sum of the components of the reference pixels {Pi}.sub.1≤i≤4 weighted by the respective weights {ωi}.sub.1≤i≤4, as defined by the equation Eq.422.


    ITP=Σ.sub.iPi*ωi with {ωi}.sub.1≤i≤4={ω1=W; ω2=W; ω3=W; ω4=W}, as represented in the case ORT_2 compared to the Pi of the kernel KER_RB of FIG. 4.  Eq.422

    [0099] If grad.sub.NS<grad.sub.EW, then an orientation of spatial variation ORT_3 is identified in a direction W-E (“West-East”). The calculation of the weights W1, W2, as defined by the equations Eq.431-Eq.436, uses a selection of pixels GN1, GW1, GE1, GS1, GW2′, GW2, GS2′ and, respectively, GN2, GW2, GE2, GS2, GE1′, GE1 having the original green component, which are adjacent to at least some of the reference pixels P1, P2, P3, P4.

    [0100] The evaluation of the spatial uniformity of the environment of the reference pixels P1-P4 is defined by the equations Eq.431 and Eq.432.


    grad1=max(GN1;GS1;GE1;GW1)−min(GN1;GS1;GE1;GW1)+max(GS1;GS2′;GW2′;GW2)−min(GS1;GS2′;GW2′;GW2)  Eq.431


    grad2=max(GN2;GS2;GE2;GW2)−min(GN2;GS2;GE2;GW2)+max(GN1′;GE1′;GN2;GE1)−min(GN1′;GE1′;GN2;GE1)  Eq.432

    [0101] The calculation of the weights W1, W2 assigned to the reference pixels P1-P4 at values which are normalized and proportional to the respective spatial uniformity is defined by the equations Eq.433 to Eq.435.

    [00006]

    α=(1+grad1)/(1+grad2)  Eq.433

    W1=1/(2(1+α))  Eq.434

    W2=α/(2(1+α))  Eq.435

    [0102] Finally, the interpolated component (ITP) is obtained by calculating the sum of the components of the reference pixels {Pi}.sub.1≤i≤4 weighted by the respective weights {ωi}.sub.1≤i≤4, as defined by the equation Eq.436.


    ITP=Σ.sub.iPi*ωi with {ωi}.sub.1≤i≤4={ω1=W1; ω2=W2; ω3=W1; ω4=W2}, as represented in the case ORT_3 compared to the Pi of the kernel KER_RB of FIG. 4.  Eq.436

    [0103] FIG. 5 advantageously corresponds to the reconstruction processing RCNST of a visible component instead of an infrared component, where the interpolation is implemented so that the interpolated components are the red R and blue B components, and the pixels of interest P have the original infrared IR component. Consequently, the reference pixels have the original red R and blue B components.

    [0104] Equations Eq.501 to Eq.506 describe an example of interpolation calculations of the value of the red or blue interpolated component within the group KER in the case where the pixel of interest P has the original infrared component IR.

    [0105] In the equations, the references such as P1, P2, P11, express the value of the component of the pixel designated by the reference.

    [0106] The case illustrated in relation to FIG. 5 corresponds to the red interpolated component R, the reference pixels P1, P2 being the two pixels having the original red component R in the kernel KER. The substitution of the red pixels by the blue pixels of the kernel KER directly gives the case where the interpolated component is blue.

    [0107] The evaluation of the spatial uniformity of the environment of the reference pixels P1, P2 is defined by the equations Eq.501 and Eq.502.


    grad1=max(P11,P12,P13,P14)−min(P11,P12,P13,P14)  Eq.501


    grad2=max(P21,P22,P23,P24)−min(P21,P22,P23,P24)  Eq.502

    [0108] The calculation of the weights ω1, ω2 assigned to the reference pixels P1, P2 at values which are normalized and proportional to the respective spatial uniformity is defined by the equations Eq.503 to Eq.505.

    [00007]

    α=(1+grad2)/(1+grad1)  Eq.503

    ω1=α/(1+α)  Eq.504

    ω2=1/(1+α)  Eq.505

    [0109] Finally, the interpolated component (ITP) is obtained by calculating the sum of the components of the reference pixels {Pi}.sub.1≤i≤2 weighted by the respective weights {ωi}.sub.1≤i≤2, as defined by the equation Eq.506.


    ITP=Σ.sub.iPi*ωi  Eq.506
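The two-reference interpolation of Eq.501 to Eq.506 can be sketched as follows. This is illustrative, not from the specification: the function name is hypothetical, and nbrs1, nbrs2 stand for the four neighbors P11-P14 and P21-P24 used to evaluate the uniformity around P1 and P2.

```python
def interpolate_rcnst(p1, p2, nbrs1, nbrs2):
    # Eq.501-Eq.502: spatial non-uniformity around each reference pixel
    grad1 = max(nbrs1) - min(nbrs1)
    grad2 = max(nbrs2) - min(nbrs2)
    # Eq.503-Eq.505: normalized weights; the flatter environment dominates
    alpha = (1 + grad2) / (1 + grad1)
    w1 = alpha / (1 + alpha)
    w2 = 1 / (1 + alpha)
    # Eq.506: weighted sum of the two reference components
    return w1 * p1 + w2 * p2
```

When both environments are equally uniform the result is the plain average of P1 and P2; a strongly non-uniform environment around P2 pulls the result toward P1.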

    [0110] FIG. 6 advantageously corresponds to the formatting processing RBAYR of a matrix of pixels in the Bayer format, where the interpolation is implemented so that the interpolated components are either red R or blue B, and the pixels of interest P (shown with a white fill in FIG. 6) have an original component which is respectively either blue B or red R. Consequently, the reference pixels P1-P6 have the original components respectively either red R or blue B.

    [0111] Equations Eq.601 to Eq.630 describe an example of interpolation calculations of the value of the red or blue interpolated component, within the group KER in the case where the pixel of interest P has the original blue or red component.

    [0112] In the equations, the references such as P1, P6, GN2, express the value of the component of the pixel designated by the reference.

    [0113] The case illustrated in relation to FIG. 6 corresponds to the blue interpolated component, the reference pixels P1-P6 being the pixels having the original blue component B in the kernel KER. The substitution of the blue pixels by the red pixels of a corresponding kernel directly gives the case where the interpolated component is red.

    [0114] The evaluation of the spatial uniformity of the environment of the reference pixels P1-P4 is defined by the equations Eq.601 to Eq.606.


    gradNS=|GN−GS|  Eq.601


    gradEW=|GE−GW|  Eq.602


    gradDiag1=|GN−GE|  Eq.603


    gradDiag2=|GW−GS|  Eq.604


    gradDiag3=|GN−GW|  Eq.605


    gradDiag4=|GE−GS|  Eq.606

    [0115] The calculation of the weights WNormNS, WNormEW, WNormDiag assigned to the reference pixels P1-P6 at values which are normalized and proportional to the respective spatial uniformity is defined by the equations Eq.611 to Eq.617.

    [00008]

    InvGradNS=1/(1+gradNS)  Eq.611

    InvGradEW=1/(1+gradEW)  Eq.612

    InvGradDiag=1/(1+Avg(gradDiag1;gradDiag2;gradDiag3;gradDiag4))  Eq.613

    where Avg( ) is a conventional averaging function.

    [00009]

    Sum=InvGradNS+InvGradEW+InvGradDiag  Eq.614

    WNormNS=InvGradNS/Sum  Eq.615

    WNormEW=InvGradEW/Sum  Eq.616

    WNormDiag=InvGradDiag/Sum  Eq.617
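The normalization of Eq.611 to Eq.617 can be sketched as below. The function name and argument layout are illustrative, not from the specification; grad_diags stands for the four diagonal gradients of Eq.603 to Eq.606.

```python
def orientation_weights(grad_ns, grad_ew, grad_diags):
    # Eq.611-Eq.613: inverse gradients (larger where the image is flatter)
    inv_ns = 1 / (1 + grad_ns)
    inv_ew = 1 / (1 + grad_ew)
    inv_diag = 1 / (1 + sum(grad_diags) / len(grad_diags))
    # Eq.614-Eq.617: normalization so the three weights sum to 1
    total = inv_ns + inv_ew + inv_diag
    return inv_ns / total, inv_ew / total, inv_diag / total
```

With all gradients equal the three orientations are weighted identically; a strong N-S gradient reduces WNormNS in favor of the other two orientations.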

    [0116] An average component PNS in the N-S (“North-South”) orientation, an average component PEW in the E-W (“East-West”) orientation, and an average component PDiag in the diagonal orientation are further defined by the equations Eq.621 to Eq.628.

    [00010]

    PNS=(P1+P6)/2  Eq.621

    PEW=(P3+P4)/2  Eq.622

    [0117] The spatial uniformity of the environment of the reference pixels P2, P5 is evaluated for the average component in the diagonal orientation PDiag:


    grad1=max(GN2,GS2,GE2,GW2)−min(GN2,GS2,GE2,GW2)  Eq.623


    grad2=max(GN5,GS5,GE5,GW5)−min(GN5,GS5,GE5,GW5)  Eq.624

    [0118] The weights ω1, ω2 assigned to the reference pixels P2, P5 for the average component in the diagonal orientation PDiag are calculated:

    [00011]

    α=(1+grad1)/(1+grad2)  Eq.625

    ω1=1/(1+α)  Eq.626

    ω2=α/(1+α)  Eq.627

    PDiag=ω1*P2+ω2*P5  Eq.628

    [0119] Finally, the interpolated component ITP is obtained by calculating the sum of the average components in the respective orientations PNS, PEW, PDiag weighted by the respective weights WNormNS, WNormEW, WNormDiag, as defined by the equation Eq.630.


    ITP=WNorm.sub.NS*P.sub.NS+WNorm.sub.EW*P.sub.EW+WNorm.sub.Diag*P.sub.Diag  Eq.630
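The final combination of Eq.621 to Eq.630 can be sketched as follows. This is illustrative, not from the specification: the function name is hypothetical, p maps the reference pixel names to values, w_ns, w_ew, w_diag are the normalized orientation weights of Eq.615 to Eq.617, and grad1, grad2 are the P2/P5 environment gradients of Eq.623 and Eq.624.

```python
def interpolate_bayer(p, w_ns, w_ew, w_diag, grad1, grad2):
    # Eq.621-Eq.622: average components in the N-S and E-W orientations
    p_ns = (p["P1"] + p["P6"]) / 2
    p_ew = (p["P3"] + p["P4"]) / 2
    # Eq.625-Eq.627: diagonal weights from the P2/P5 environment gradients
    alpha = (1 + grad1) / (1 + grad2)
    w1 = 1 / (1 + alpha)
    w2 = alpha / (1 + alpha)
    # Eq.628: average component in the diagonal orientation
    p_diag = w1 * p["P2"] + w2 * p["P5"]
    # Eq.630: combination weighted by the normalized orientation weights
    return w_ns * p_ns + w_ew * p_ew + w_diag * p_diag
```

For a uniform patch (all reference pixels equal, weights summing to 1) the interpolation reproduces that common value exactly, as expected.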

    [0120] The exemplary embodiments and implementations described above thus propose an interpolation technique adapted to three types of image processing operations for processing an image of the RGB-IR matrix type at the input of an image processing unit ISP. The interpolation technique takes into account variations in textures and edges in the image by means of an evaluation of the spatial uniformity of the environments of the reference pixels. The weights assigned to the reference pixels are adjusted based on the spatial uniformity evaluated for the respective pixels, so that the strongest weights are given to the pixels located in the “flattest”, i.e., the most uniform, areas. This improves the image quality and leads to a more faithful reproduction.

    [0121] Examples of principle calculations have been given in this regard; however, the invention is not limited to these examples of embodiment, implementation and calculations, but encompasses all the variants. For example, the calculations may be improved by conventional means, such as proportioning the amount of infrared noise to be subtracted in the depollution mechanism by the ratio between the energy accumulated on the infrared band by the color pixel to be depolluted and the energy accumulated over the entire spectrum (visible and infrared) by the infrared pixel. It is also possible to dimension the size of the kernel “KER” differently according to the type of elementary pattern of the matrix processed.

    [0122] Although the description has been described in detail, it should be understood that various changes, substitutions, and alterations may be made without departing from the spirit and scope of this disclosure as defined by the appended claims. The same elements are designated with the same reference numbers in the various figures. Moreover, the scope of the disclosure is not intended to be limited to the particular embodiments described herein, as one of ordinary skill in the art will readily appreciate from this disclosure that processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, may perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

    [0123] The specification and drawings are, accordingly, to be regarded simply as an illustration of the disclosure as defined by the appended claims, and are contemplated to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the present disclosure.