Device and method for parasitic heat compensation in an infrared camera
11601606 · 2023-03-07
Assignee
Inventors
CPC classification
G01J5/064
PHYSICS
G06T7/80
PHYSICS
G01J5/06
PHYSICS
International classification
G06T7/80
PHYSICS
G01J5/06
PHYSICS
H04N17/00
ELECTRICITY
Abstract
A method of calibrating an infrared (IR) camera including a pixel array housed in a housing, the pixel array having an image sensor and one or more parasitic heat sensing pixels arranged to receive infrared light from different portions of an interior surface of the housing, the method including: receiving, by a processing device, one or more readings from each of the parasitic heat sensing pixels and from each pixel of the pixel array; and generating, by the processing device based on the one or more readings, one or more conversion matrices for converting readings from the parasitic heat sensing pixels into pixel correction values for performing 2D signal correction of signals captured by the image sensor.
Claims
1. A method of calibrating an infrared camera comprising a pixel array housed in a housing, the pixel array having an image sensor and one or more parasitic heat sensing pixels arranged to receive infrared light from different portions of an interior surface of the housing, the method comprising: receiving, by a processing device, one or more readings from each of said parasitic heat sensing pixels and from each pixel of said pixel array; and generating, by the processing device based on said one or more readings, one or more conversion matrices for converting said readings from said parasitic heat sensing pixels into pixel correction values for performing 2D signal correction of signals captured by the image sensor, wherein generating the one or more conversion matrices comprises: determining by the processing device, for each image pixel of the image sensor and for each of said parasitic heat sensing pixels based on an assumption of the responsivity of each pixel, a relative transfer function based on an etendue of each pixel with respect to each of a plurality of zones of said interior surface.
2. The method of claim 1, wherein generating the one or more conversion matrices further comprises: determining the responsivity of each of said image pixels and each of said parasitic heat sensing pixels.
3. The method of claim 2, wherein the responsivity of said parasitic heat sensing pixels is determined by placing a black body in the field of view of the pixels of said pixel array, and taking readings from said parasitic heat sensing pixels at at least two different temperatures.
4. The method of claim 2, wherein determining the relative transfer function based on an etendue of each pixel comprises defining, at least partially by the processing device, a model of the interior surface of said housing comprising a plurality of zones of uniform temperature, and calculating by the processing device the etendue of each pixel with respect to each of the zones of said model based on the geometry of the camera housing and of the pixel array.
5. The method of claim 4, wherein said model is in the form of a dome.
6. The method of claim 5, further comprising determining, by the processing device, a radius of said dome based on an average reading captured by said image sensor while said black body is placed in the field of view of the pixels of said pixel array.
7. The method of claim 5, wherein each of the plurality of zones of said model has the same surface area.
8. A non-transitory storage medium storing computing instructions for implementing the method of claim 1 when executed by a processing device.
9. A computing device configured to perform calibration of an IR camera, the computing device comprising: an IR camera interface configured to receive an image captured by an image sensor of the IR camera and one or more readings from parasitic heat sensing pixels of the IR camera; and a processing device configured to: generate, based on said one or more readings, one or more conversion matrices for converting said readings from said parasitic heat sensing pixels into pixel correction values for performing 2D signal correction of signals captured by the image sensor, wherein generating the one or more conversion matrices comprises: determining, for each image pixel of the image sensor and for each of said parasitic heat sensing pixels based on an assumption of the responsivity of each pixel, a relative transfer function based on an etendue of each pixel with respect to each of a plurality of zones of said interior surface.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The foregoing and other features and advantages will become apparent from the following detailed description of embodiments, given by way of illustration and not limitation with reference to the accompanying drawings.
DETAILED DESCRIPTION
(19) While embodiments are described in the following description in relation with a pixel array of the microbolometer type, it will be apparent to those skilled in the art that the methods described herein could be equally applied to other types of IR cameras, including cooled devices.
(20) Throughout the present disclosure, the term “substantially” is used to designate a tolerance of plus or minus 10% of the value in question. Furthermore, the following terms are considered to have the following definitions in the present disclosure:
(21) pixel array—an arrangement of light sensitive pixels, in which the pixels may be arranged in columns and rows, or in other arrangements;
(22) image sensor—an arrangement, usually rectangular, of pixels of the pixel array that serves for capturing an image from the image scene;
(23) image pixel—each pixel of the image sensor;
(24) parasitic heat sensing pixel—a pixel having a field of view that has been modified with respect to that of the image pixels in order to favour the capture of parasitic heat. For example, each parasitic heat sensing pixel is configured to capture a greater portion of parasitic heat than each image pixel of the pixel array; and
(25) 2D signal correction—the correction of the signals or readings generated by an image sensor prior to the creation of the image, the image optionally being subjected to one or more subsequent steps of image correction.
(27) The pixel array 102 is indicated by a dashed rectangle in
(31) A control circuit (CTRL) 110 for example provides control signals to the pixel array 102, to the reference pixels 106, and to the output block 108.
(32) The raw image I.sub.B and the readings P.sub.R from the parasitic heat sensing pixels 105 are for example provided to an image processing circuit (IMAGE PROCESSING) 112, which for example applies 2D signal correction to the pixels of the image to produce a corrected image I.sub.C. In particular, the image processing circuit 112 for example applies correction of parasitic heat in the captured image based on the readings P.sub.R from the parasitic heat sensing pixels 105 and based on a conversion matrix M.sub.Cpix stored in a non-volatile memory (NVM) 114, which for example permits a conversion of the readings P.sub.R into a correction value for each pixel of the captured image.
(33) Indeed, a voltage reading VOUT from each image pixel 104 of the image sensor 103 can be modelled by the following equation:
VOUT=f({right arrow over (P1)},T.sub.pix)
where T.sub.pix is the temperature of the pixel, {right arrow over (P1)} is a vector representing the parameters of the pixel array effecting the temperature to voltage conversion, such as the conversion gain, losses in the readout path, etc., and f is the function linking the output voltage VOUT to the parameters {right arrow over (P1)} and the temperature T.sub.pix.
(34) The temperature T.sub.pix of each pixel will be influenced by the various thermal components, and can for example be modelled by the following equation:
T.sub.pix=g({right arrow over (P2)},ϕ.sub.parasitic,ϕ.sub.scene,T.sub.CMOS)
where ϕ.sub.scene is the luminous flux arriving at the pixel from the image scene via the optical elements of the IR camera, ϕ.sub.parasitic is the luminous flux arriving at the pixel from sources other than the image scene, such as from the interior surfaces of the housing of the IR camera, T.sub.CMOS is the temperature of the focal plane, in other words the temperature of the substrate on which the image sensor is formed, {right arrow over (P2)} is a vector representing the parameters of the image pixels effecting the conversion of the received luminous flux to the temperature T.sub.pix of the pixel, and g is the function linking the temperature T.sub.pix to the parameters {right arrow over (P2)} and variables ϕ.sub.scene, ϕ.sub.parasitic and T.sub.CMOS.
(35) By estimating the parameters {right arrow over (P1)} and {right arrow over (P2)} and the variables ϕ.sub.parasitic and T.sub.CMOS, and by approximating the functions f and g, it is possible to isolate the component ϕ.sub.scene and thereby generate a thermographic image of the scene. Among these parameters, variables and functions, it is the component ϕ.sub.parasitic that is the most challenging to estimate accurately. Indeed, this component can vary for each image pixel based on the temperature of several different interior surfaces in the IR camera, and the effect on each pixel will depend on the distance and sensitivity of the pixel with respect to the relevant surfaces.
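To make the roles of the two functions concrete, the forward model above can be sketched with simple linear stand-ins for f and g. All coefficients and input values below are illustrative assumptions, not values from the disclosure:

```python
# Hypothetical linear stand-ins for the functions g and f of the model.
# All coefficients are illustrative assumptions, not disclosure values.

def g(phi_parasitic, phi_scene, t_cmos, k_flux=0.5, k_sub=1.0):
    """Pixel temperature T_pix as a weighted sum of the incident fluxes
    and the focal-plane temperature T_CMOS (linear approximation)."""
    return k_sub * t_cmos + k_flux * (phi_parasitic + phi_scene)

def f(t_pix, gain=2.0e-3, offset=0.1):
    """Output voltage VOUT as an affine function of the pixel
    temperature T_pix (illustrative conversion gain and offset)."""
    return offset + gain * t_pix

t_pix = g(phi_parasitic=3.0, phi_scene=10.0, t_cmos=300.0)
v_out = f(t_pix)
```

Isolating ϕ.sub.scene then amounts to inverting f and then g given estimates of the remaining terms, which is why an accurate estimate of ϕ.sub.parasitic matters.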
(36) The present inventors have found that, by using readings from parasitic heat sensing pixels positioned in the image plane, it becomes possible to generate a relatively precise estimation of the luminous flux ϕ.sub.parasitic received by each image pixel, without the use of a temperature probe, as will be described in more detail below.
(38) In an operation 201, the readings P.sub.R captured by the parasitic heat sensing pixels 105 are received by the circuit 112.
(39) In an operation 202, signal correction values are generated based on the readings P.sub.R. For example, the conversion matrix M.sub.Cpix, and optionally one or more further matrices stored by the non-volatile memory 114, are used to convert the readings P.sub.R into a signal correction value for each pixel of the image I.sub.B, as will now be explained in more detail.
(40) In some embodiments, the readings P.sub.R are first processed in order to extract an estimate of the temperature of a plurality q of zones of a model of the interior surface of the IR camera housing, wherein each zone of the model is for example considered to have a uniform temperature. These estimates form a luminance vector {right arrow over (V)}.sub.lum of the form [ϕ.sub.1 . . . ϕ.sub.q], each of the values ϕ.sub.1 . . . ϕ.sub.q representing a luminous flux from the q zones of the model. For example, the readings P.sub.R form an output vector {right arrow over (V)}.sub.out of the form [Out1 . . . Outn], which can for example be characterized as follows:
{right arrow over (V)}.sub.out=M.sub.Clum.Math.{right arrow over (V)}.sub.lum
where M.sub.Clum defines the relationship between the luminance values ϕ.sub.1 . . . ϕ.sub.q and the n readings P.sub.R of the output vector {right arrow over (V)}.sub.out and is for example of the form:
(41)
M.sub.Clum=[P.sub.w1.sup.1 . . . P.sub.w1.sup.q; P.sub.w2.sup.1 . . . P.sub.w2.sup.q; . . . ; P.sub.wn.sup.1 . . . P.sub.wn.sup.q]
wherein the parameters P.sub.w1.sup.1 to P.sub.wn.sup.q represent the relation between the readings Out1 to Outn and the luminance ϕ.sub.i of each zone i.
(42) Thus the luminance vector {right arrow over (V)}.sub.lum can for example be generated from the readings of the output vector {right arrow over (V)}.sub.out based on the following multiplication:
{right arrow over (V)}.sub.lum=M.sup.−1.sub.Clum.Math.{right arrow over (V)}.sub.out
where M.sup.−1.sub.Clum is the inverse of the matrix M.sub.Clum.
(43) The parasitic luminance present at each of the p pixels of the image sensor will be represented herein by a vector {right arrow over (V)}.sub.parasitic of the form [ϕ.sub.parasitic_1 . . . ϕ.sub.parasitic_p]. The conversion matrix M.sub.Cpix is for example adapted to convert the luminance vector {right arrow over (V)}.sub.lum into an estimation of the parasitic luminance present at each pixel in accordance with the following equation:
{right arrow over (V)}.sub.parasitic=M.sub.Cpix.Math.{right arrow over (V)}.sub.lum
(44) The conversion matrix M.sub.Cpix is for example of dimensions p by q, where p is the number of pixels in the image sensor and q is the number of zones of the model of the interior surface of the housing.
(45) In an operation 203, the signal correction values are applied to the pixels of the captured image. For example, this correction may be performed directly to the signals forming the raw image I.sub.B, or after other forms of offset and/or gain correction have been applied to the raw image I.sub.B.
(46) In one embodiment, the signal correction is applied by subtracting, from each of the p pixels of the captured image I.sub.B, the corresponding correction value from the vector {right arrow over (V)}.sub.parasitic. In alternative embodiments, the signal correction is based on an estimation of the inverse of the function g described above in order to determine the scene component ϕ.sub.scene.
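A minimal sketch of operations 201 to 203, assuming a square, invertible matrix M.sub.Clum and using randomly generated matrices in place of real calibration data (all dimensions and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
q, n, h, w = 4, 4, 8, 8          # zones, sensing pixels, image size
p = h * w                        # number of image pixels

M_Clum = np.eye(n, q) + 0.1 * rng.random((n, q))   # n x q, invertible here
M_Cpix = rng.random((p, q)) * 0.01                 # p x q conversion matrix

# Operation 201: readings from the parasitic heat sensing pixels.
V_out = rng.random(n)

# Operation 202: estimate the zone luminances, then the parasitic
# luminance at each image pixel.
V_lum = np.linalg.inv(M_Clum) @ V_out        # V_lum = M_Clum^-1 . V_out
V_parasitic = M_Cpix @ V_lum                 # p correction values

# Operation 203: subtract the correction from the raw image I_B.
I_B = rng.random((h, w))
I_C = I_B - V_parasitic.reshape(h, w)
```

The subtraction in operation 203 corresponds to the first embodiment described above; the alternative based on inverting g would replace the last line.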
(49) The pixel array 102 is mounted on a substrate 402. A housing 404 of the IR camera is also mounted on the substrate 402, and houses the pixel array 102. For example, the housing 404 is formed of moulded plastic, or of metal. In the example of
(50) It should be noted that the particular form of the housing 404 of
(51) An arc 420 in
(52) In some embodiments, the field of view of one or more of the parasitic heat sensing pixels 105 is restricted such that it does not encompass the aperture 418, and thus these pixels are not directly lit by the image scene. It should be noted that even if a parasitic heat sensing pixel targets a zone of the housing close to the aperture 418, such as the zone 416 in
(53) The parasitic heat sensing pixels 105 are for example each oriented, in at least the plane of the pixel array, in a different manner from each other in order to detect parasitic heat from different areas of an interior surface of the housing 404 of the IR camera. For example, one of the parasitic heat sensing pixels 105 is configured to directly receive infrared light only from a first area of the interior surface of the housing, and another of the parasitic heat sensing pixels 105 is configured to directly receive infrared light only from a second area of the interior surface of the housing, the first and second areas being non-overlapping.
(54) Examples of the structure of the parasitic heat sensing pixels 105 will now be described with reference to
(58) The partial light shield 508 for example comprises a support layer 514, for example formed of Si, SiN, SiON, or another material, covered by a reflective layer 516. The support layer 514 is for example suspended over the pixel 105 by a support wall 518, which also for example blocks light from entering from one side of the pixel. The opposite side of the pixel is open, such that light at a certain angle can enter the space between the shield 508 and the reflective layer 510, and be absorbed by the membrane 502. This is aided by the portion 512 of the reflective layer, which for example directs light at a certain angle onto the underside of the partial light shield 508, from which it reflects onto the membrane 502 of the bolometer.
(61) The mask 602 for example comprises a support layer 606 covered by a reflective layer 608 and through which the openings 604 over each pixel 105 are formed. The support layer 606 and reflective layer 608 are for example suspended over the pixels 105 of the sub-array by lateral walls 610.
(62) The openings 604 over each pixel 105 are for example misaligned with respect to the membrane 502 of each bolometer such that only light at certain angles falls on the membrane 502 of each bolometer. Each pixel 105 is for example configured to receive light from a different portion of the interior of the housing.
(64) The cover or mask 602 is represented by dashed lines in
(71) As described above, the signal correction applied to images captured by the image sensor 103 based on readings from the parasitic heat sensing pixels 105 is for example based on an approximation of the interior surface of the camera housing. For example, the conversion matrices M.sup.−1.sub.Clum and M.sub.Cpix described above are based on a model representing the interior surface of the IR camera housing. Examples of models for approximating the interior surface the housing 404 of
(74) According to some embodiments, the model of the interior of the housing is divided into q discrete zones, each zone being considered to have a uniform temperature, as will now be described with reference to
(76) The surface of the model is divided into q discrete zones 808, two of which are shown shaded in the example of
(77) The number q of zones is for example equal to at least two, and in some embodiments to at least eight. It will be apparent to those skilled in the art that the greater the number of zones, the better the precision, but the more complex the image processing for correcting the signals of the images based on the luminance vector {right arrow over (V)}.sub.lum.
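One hypothetical way to divide a hemispherical dome model into q zones of equal surface area (as recited in claim 7) is to cut it into horizontal bands of equal height, since the area of a spherical band depends only on its height (Archimedes' hat-box theorem). This particular partition is an illustrative assumption, not the disclosure's construction:

```python
import numpy as np

def equal_area_bands(q, radius=1.0):
    """Return the elevation angles bounding q equal-area bands of a
    hemispherical dome (illustrative zone model).

    The area of a spherical band is 2*pi*R*h, so bands of equal height
    h have equal surface area.
    """
    heights = np.linspace(0.0, radius, q + 1)
    # Elevation angle theta measured from the horizontal base plane.
    return np.arcsin(heights / radius)

thetas = equal_area_bands(4)
# Band i spans elevations [thetas[i], thetas[i+1]]; each band has
# surface area 2*pi*R^2/q by construction.
```

Finer azimuthal subdivision of each band would give a larger q while preserving equal areas.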
(78) According to embodiments of the present disclosure, the readings from the parasitic heat sensing pixels are used to estimate an average heat of each zone 808 of the model, as will now be described in more detail with reference to
(82) There are three possible relations between the observation areas of the parasitic heat sensing pixels and the zones of the model.
(83) According to a first relation, there are as many parasitic heat sensing pixels as zones in the model, and each parasitic heat sensing pixel has an angular sensitivity in θ and φ adapted to a corresponding one of the zones. Thus the reading from each parasitic heat sensing pixel corresponds directly to a reading for a corresponding zone.
(84) According to a second relation, there is a greater number of parasitic heat sensing pixels than zones of the model, and/or the total area observed by the parasitic heat sensing pixels is greater than the area of the model. For example, the relation is based on the following equation:
M.sub.Clum.Math.{right arrow over (V)}.sub.lum={right arrow over (V)}.sub.out
This can be expressed as:
(85)
[P.sub.w1.sup.1 . . . P.sub.w1.sup.q; . . . ; P.sub.wn.sup.1 . . . P.sub.wn.sup.q].Math.[ϕ.sub.1; . . . ; ϕ.sub.q]=[Out1; . . . ; Outn]
where the model comprises q discrete zones, there are n parasitic heat sensing pixels w1 to wn, the values ϕ.sub.1 to ϕ.sub.q of the vector {right arrow over (V)}.sub.lum correspond to the parasitic luminance from each zone 1 to q, which is the vector to be found, the values P.sub.w1.sup.1 to P.sub.wn.sup.q of the matrix M.sub.Clum represent the contribution of each zone 1 to q to the reading of each parasitic heat sensing pixel, and the values Out1 to Outn of the vector {right arrow over (V)}.sub.out correspond to the readings from the n parasitic heat sensing pixels. In the simplest case (the first relation indicated above), each parasitic heat sensing pixel observes only a corresponding zone, and the matrix M.sub.Clum is a diagonal matrix. However, in other cases, the luminance of each zone 1 to q is determined from a set of weighted contributions of one or more of the parasitic heat sensing pixels.
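For this second relation, the overdetermined system (n > q) can be solved for the luminance vector by least squares rather than a direct inverse. A sketch with illustrative dimensions and noiseless synthetic readings:

```python
import numpy as np

rng = np.random.default_rng(1)
q, n = 3, 6                      # fewer zones than sensing pixels

M_Clum = rng.random((n, q))      # weighted contributions P_wk^i
V_lum_true = np.array([1.0, 2.0, 3.0])
V_out = M_Clum @ V_lum_true      # noiseless readings Out1..Outn

# Least-squares estimate of V_lum from the overdetermined system
# M_Clum . V_lum = V_out.
V_lum, *_ = np.linalg.lstsq(M_Clum, V_out, rcond=None)
```

With noisy readings the same solve averages the redundant observations, which is the benefit of having more sensing pixels than zones.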
(86) According to a third relation, there are fewer parasitic heat sensing pixels than discrete zones in the model. In this case, the above system involving the matrix M.sub.Clum is under-determined, as will now be described with reference to an example of
(88)
Out.sub.k=Σ.sub.(i,j)∈Ω S.sub.i,j.Math.ϕ.sub.i,j
wherein Ω is a 2D surface representing the model divided into discrete zones i,j, S.sub.i,j is the intersection surface between each zone and the observation area of the pixel k, and ϕ.sub.i,j is the surface luminance flux of the zone i,j.
(89) In the case that the entire surface Ω is not fully observed by the collection of parasitic heat sensing pixels as shown in
Δϕ.sub.i,j=0
where Δ represents the Laplacian of the luminance. The non-uniform spatial distribution of the luminance is then for example solved for each zone ϕ.sub.i,j based on the above hypothesis, and on an a priori hypothesis for the thermal diffusion in any white zones, i.e. zones that are not intersected by any observation area 810.
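A sketch of such a regularized solve, simplifying the 2D zone grid to a 1-D ring of q zones so that the Laplacian becomes a simple circulant matrix (the geometry, regularization weight and values are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
q, n = 8, 3                      # more zones than sensing pixels

S = rng.random((n, q))           # intersection surfaces S_ij per pixel
phi_true = np.full(q, 2.0)       # a smooth (here uniform) luminance
out = S @ phi_true               # the n pixel readings

# Discrete Laplacian on a ring of q zones: (L @ phi) approximates
# Delta(phi) at each zone.
L = (-2.0 * np.eye(q)
     + np.roll(np.eye(q), 1, axis=1)
     + np.roll(np.eye(q), -1, axis=1))

# Minimize ||S phi - out||^2 + lam * ||L phi||^2, i.e. the readings
# plus the smoothness prior Delta(phi) = 0 closing the system.
lam = 1e-2
A = np.vstack([S, np.sqrt(lam) * L])
b = np.concatenate([out, np.zeros(q)])
phi, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The Laplacian rows supply the equations missing from the under-determined system; on a real 2D zone grid L would be the 2D discrete Laplacian instead.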
(90) A method of calibrating an IR camera comprising parasitic heat sensing pixels in order to construct the conversion matrices M.sup.−1.sub.Clum and M.sub.Cpix will now be described with reference to
(93) The generation of the at least one conversion matrix involves determining the correlation between the outputs of the parasitic heat sensing pixels and the parasitic luminous flux received by each image pixel. In other words, a relative map of the response by each parasitic heat sensing pixel and each image pixel to an exact same luminance variation should be estimated. This can be represented by the following equation:
(94)
ΔV.sub.out(x,y)=Resp(x,y).Math.Σ.sub.i T.sub.i(x,y).Math.∂ϕ.sub.i
where ΔV.sub.out(x,y) is the variation of the output voltage of each pixel at position (x,y), ∂ϕ.sub.i is the variation in the luminance ϕ.sub.i at each zone i of the model of the interior surface of the housing, T.sub.i(x,y) is the etendue of each pixel with respect to each zone i, and Resp(x,y) is the responsivity of each pixel.
(95) When calibrating a standard infrared image pixel array, a gain map is generally used in a process known as 2-point non-uniformity correction. In the case of the pixel array of the present disclosure, exposing the parasitic heat sensing pixels and image pixels to a same luminance variation would in practice be difficult, and the calibration process would be long. Instead, the present inventors propose to perform the calibration using two main operations (1001 and 1003), as will now be described in more detail.
(96) In an operation 1001, relative transfer functions are determined between the surface contribution of the interior surface of the camera housing and the luminous flux received by the parasitic heat sensing pixels and by the image pixels. This corresponds to the etendue between each pixel and the various zones i of the model. In this operation, it is assumed that all of the pixels have the same response in terms of their voltage generated for a given received luminous flux of a given power (watts, W) and for a given solid angle (steradian, sr). Based on the geometry of the camera housing and of the pixels of the pixel array, the etendue T.sub.i(x,y) of each parasitic heat sensing pixel and of each image pixel at position (x,y) with respect to each zone i can for example be estimated, as will now be described.
(97) As known by those skilled in the art, in the field of optics, the etendue defines the extent to which light is spread out in area and angle.
(98) The etendue T.sub.i(x,y) for each pixel of the pixel array with respect to a zone i of the interior surface of the camera housing, assuming that this surface is in the form of a dome of radius R, can be defined as follows:
(99)
T.sub.i(x,y)=S.sub.pixel.Math.∫.sub.φ∫.sub.θ cos θ sin θ dθdφ
where S.sub.pixel is the surface area of the pixel, θ is the elevation angle, φ is the azimuth angle, and d is the distance between the pixel and the centre of the dome, the bounds of integration in θ and φ delimiting the zone i as seen from the pixel and thus depending on the radius R and the distance d. Thus, based on the geometry of the pixel array and of the interior of the camera, it is possible to estimate the etendue T.sub.i(x,y) of each image pixel and parasitic heat sensing pixel based on the above equation.
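As a numerical illustration of the etendue estimate, the standard form dG = S·cos θ·dΩ can be integrated over a zone's angular extent. The pixel area, angular bounds and discretization below are illustrative; in the disclosure the bounds of a zone i would follow from the dome geometry (R and d):

```python
import numpy as np

def etendue(s_pixel, theta_min, theta_max, phi_min, phi_max, steps=2000):
    """Numerically integrate dG = S * cos(theta) * dOmega over the
    angular extent of one zone, with dOmega = sin(theta) dtheta dphi
    and theta the polar angle from the pixel normal (illustrative)."""
    dtheta = (theta_max - theta_min) / steps
    theta = theta_min + (np.arange(steps) + 0.5) * dtheta   # midpoints
    inner = np.sum(np.cos(theta) * np.sin(theta)) * dtheta
    return s_pixel * (phi_max - phi_min) * inner

# Sanity check over the full hemisphere, where G = pi * S_pixel.
g_hemisphere = etendue(1e-9, 0.0, np.pi / 2, 0.0, 2 * np.pi)
```

Evaluating the same integral over the angular bounds of each zone i, for each pixel position (x,y), yields the table of etendues T.sub.i(x,y) used by operation 1001.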
(100) The operation 1001 is for example performed once for a given type of IR camera having a given housing and pixel array, the generated etendues being relevant to any IR camera having the given geometry of the camera housing and of the pixel array.
(101) Optionally, in an operation 1002, one or more parameters of the model of the interior of the housing of the IR camera may be determined. For example, in the case that the model is a dome, the radius R of the model of the dome is for example defined based on an estimate of the average level of luminous flux received from the interior of the housing.
(102) In an operation 1003, a unitary calibration is for example performed for each IR camera unit in a family of products in order to determine absolute values of the transfer functions between the surface contribution of the model of the interior surface of the camera housing and the pixel readings from the image sensor and from the parasitic heat sensing pixels. In particular, this for example involves determining the relative responsivity Resp(x,y) of each pixel for a same solid angle. For the image pixels of the image sensor, the responsivity Resp(x,y) can for example be determined using known calibration techniques, such as based on 2-point non-uniformity-correction. As regards the characterisation of the parasitic heat sensing pixels, this is for example performed by placing a dome-shaped black-body over the pixel array and obtaining readings from each of the parasitic heat sensing pixels for two different temperatures of the black body.
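A sketch of the black-body characterisation step: with readings at two black-body temperatures, the relative responsivity of each parasitic heat sensing pixel follows as the ratio of the output variation to the flux variation. The readings and flux values below are illustrative, not measured data:

```python
import numpy as np

def responsivity(out_t1, out_t2, phi_t1, phi_t2):
    """Relative responsivity per pixel from a 2-point black-body
    measurement: delta(output reading) / delta(incident flux).
    Inputs are per-pixel reading arrays and the scalar fluxes
    corresponding to the two black-body temperatures."""
    return (out_t2 - out_t1) / (phi_t2 - phi_t1)

# Illustrative readings from 3 parasitic heat sensing pixels at the
# two black-body temperatures, and the corresponding fluxes.
out_t1 = np.array([0.10, 0.12, 0.11])
out_t2 = np.array([0.30, 0.36, 0.33])
resp = responsivity(out_t1, out_t2, phi_t1=1.0, phi_t2=3.0)
```

The dome-shaped black body ensures all parasitic heat sensing pixels see the same flux variation regardless of their restricted fields of view.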
(103) Once this relative responsivity has been determined for each pixel, the matrices M.sup.−1.sub.Clum and M.sub.Cpix can for example be determined based on the responsivity Resp(x,y) and etendue T.sub.i(x,y) of each pixel.
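One hypothetical way the per-pixel responsivities and etendues could be composed into the p-by-q conversion matrix, sketched as an element-wise scaling; this is a plausible composition under the model of equation (94), not the disclosure's exact construction:

```python
import numpy as np

rng = np.random.default_rng(3)
p, q = 16, 4                     # image pixels, zones of the model

T = rng.random((p, q))           # etendues T_i(x,y) from operation 1001
resp = rng.random(p)             # responsivities Resp(x,y), operation 1003

# Hypothetical composition: each entry couples pixel (x,y) to zone i
# through its etendue toward that zone, scaled by its responsivity.
M_Cpix = resp[:, None] * T       # p x q conversion matrix
```

The same composition applied to the parasitic heat sensing pixels would give the rows of M.sub.Clum, whose inverse (or pseudo-inverse) is then stored as M.sup.−1.sub.Clum.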
(104) An advantage of the embodiments described herein is that a parasitic heat component in an image captured by an IR camera can be estimated relatively precisely without the use of a temperature probe. For example, the present inventors have found that a precision of +/−1° C. can be achieved.
(105) Having thus described at least one illustrative embodiment, various alterations, modifications and improvements will readily occur to those skilled in the art. For example, it will be apparent to those skilled in the art that the embodiments of the parasitic heat sensing pixels merely provide one example, and that other pixel structures for limiting the field of view of the pixels would be possible.
(106) Furthermore, while example embodiments have been described in relation with a dome-shaped model, it will be apparent to those skilled in the art how the calculations could be adapted to other forms of models.
(107) Furthermore, it will be apparent to those skilled in the art that, while embodiments have been described involving the use of two conversion matrices M.sup.−1.sub.Clum and M.sub.Cpix, in alternative embodiments a single conversion matrix, or more than two conversion matrices, could be employed.
(108) Furthermore, it will be apparent to those skilled in the art that the various features described in relation with the various embodiments could be combined, in alternative embodiments, in any combination.