System and method for acquiring visible and near infrared images by means of a single matrix sensor
10477120 · 2019-11-12
Assignee
Inventors
Cpc classification
H04N23/11
ELECTRICITY
H04N2209/046
ELECTRICITY
H04N9/646
ELECTRICITY
International classification
Abstract
A system for acquiring images in color and near-infrared, comprising: a matrix sensor, which comprises first, second, and third types of pixels sensitive to respective visible colors and a fourth type of panchromatic pixels sensitive in the near-infrared; and a signal processing circuit configured to reconstruct first and second sets of monochromatic images, a panchromatic image, a color image from the images of the first set and from the panchromatic image, and at least one image in the near-infrared from the images of the second set and from the panchromatic image. Also provided are a visible-near-infrared bispectral camera comprising such an acquisition system, and a method implemented by means of such a camera.
Claims
1. An image acquisition system, comprising: a matrix sensor comprising a two-dimensional arrangement of a plurality of pixels, each of said pixels being adapted to generate an electrical signal representative of light intensity at a point of an optical image of a scene; and a signal processing circuit configured to process the electrical signal generated by each of said pixels to generate digital images of said scene; wherein the plurality of pixels comprises colored pixels of first, second, and third types and panchromatic pixels; wherein the colored pixels of the first type are sensitive to visible light in a first spectral band; wherein the colored pixels of the second type are sensitive to visible light in a second spectral band different from the first spectral band; and wherein the colored pixels of the third type are sensitive to visible light in a third spectral band different from the first and second spectral bands, wherein a combination of the first, second, and third spectral bands reconstitutes all visible spectrum; wherein the panchromatic pixels are sensitive to the visible spectrum and near-infrared; and wherein said signal processing circuit is further configured to: reconstruct a first set of monochromatic images from the electrical signals generated by the colored pixels of the first, second, and third types; reconstruct a panchromatic image from the electrical signals generated by the panchromatic pixels; reconstruct a second set of monochromatic images from the electrical signals generated by the colored pixels, and from said panchromatic image; reconstruct a color image by application of a first colorimetry matrix to the monochromatic images of the first set and to said panchromatic image; reconstruct at least one image in the near-infrared by application of a second colorimetry matrix at least to the monochromatic images of the second set and to said panchromatic image; and supply as output said color image and said at least one image in the near-infrared.
2. The image acquisition system as claimed in claim 1, wherein said colored pixels are further sensitive to the near-infrared, and wherein there are no other colored pixels in said image acquisition system.
3. The image acquisition system as claimed in claim 1, wherein the colored pixels of the first, second, and third types are sensitive to green light, blue light, and red light, respectively.
4. The image acquisition system as claimed in claim 1, wherein at least 25% of the plurality of pixels are panchromatic.
5. The image acquisition system as claimed in claim 1, wherein said signal processing circuit is further configured to reconstruct the monochromatic images of said first set by application of a method comprising the following steps: determining the light intensity associated with said each colored pixel of said first type and reconstructing a first monochromatic image of said first set by interpolation; determining the light intensity associated with said each colored pixel of the second and third types, and subtracting therefrom a value representative of an intensity associated with a corresponding pixel of said first monochromatic image; and reconstructing new monochromatic images by interpolation of light intensity values of the respective colored pixels of said second and third types, from which have been subtracted said values representative of the intensity associated with the corresponding pixel of said first monochromatic image, then combining these new reconstructed images with said first monochromatic image to obtain respective final monochromatic images of said first set.
6. The image acquisition system as claimed in claim 1, wherein said signal processing circuit is further configured to reconstruct said panchromatic image by interpolation of the electrical signals generated by the panchromatic pixels.
7. The image acquisition system as claimed in claim 1, wherein said signal processing circuit is further configured to reconstruct the monochromatic images of said second set by computing a luminance level of each pixel of each said image by application of a linear function, defined locally, to luminance of a corresponding pixel in the panchromatic image.
8. The image acquisition system as claimed in claim 1, wherein said signal processing circuit is further configured to reconstruct the monochromatic images of said second set by computing a luminance level of each pixel of each said image by means of a non-linear function of luminance levels of a plurality of pixels of the panchromatic image in a neighborhood of the pixel of said panchromatic image corresponding to said pixel of said image of the second set and/or of the light intensity of a plurality of colored pixels.
9. The image acquisition system as claimed in claim 1, wherein said matrix sensor is composed of a periodic repetition of blocks containing pseudo-random distributions of pixels of the different types and wherein said signal processing circuit is further configured to: extract regular patterns of pixels of the same types from said matrix sensor; and reconstruct said first set and second set of monochromatic images by parallel processing of said regular patterns of pixels of the same types.
10. The image acquisition system as claimed in claim 1, wherein said signal processing circuit is further configured to reconstruct a monochromatic image with low brightness level by application of a third colorimetry matrix at least to the monochromatic images of the second set and to said panchromatic image.
11. The image acquisition system as claimed in claim 1, wherein said matrix sensor further comprises a two-dimensional arrangement of another plurality of pixels that are only sensitive to the near-infrared, and wherein said signal processing circuit is further configured to generate an image in the near-infrared using electrical signals generated by the other pixels.
12. The image acquisition system as claimed in claim 1, further comprising an actuator for producing a relative periodic displacement between the matrix sensor and the optical image, wherein the matrix sensor is adapted to reconstruct said first and second sets of monochromatic images and said panchromatic image from electrical signals generated by the pixels of the matrix sensor corresponding to a plurality of distinct relative positions of the matrix sensor and of the optical image.
13. The image acquisition system as claimed in claim 1, wherein said signal processing circuit is produced from a programmable logic circuit.
14. A visible-near-infrared bispectral camera comprising: an image acquisition system as claimed in claim 1; and an optical system adapted to form an optical image of a scene on a matrix sensor of such an image acquisition system, without filtering of near-infrared.
15. A method for simultaneous acquisition of images in color and in the near-infrared by use of a bispectral camera as claimed in claim 14.
16. The image acquisition system as claimed in claim 4, wherein at least 50% of the plurality of pixels are panchromatic.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Other features, details and advantages of the invention will emerge on reading the description given with reference to the attached drawings, given by way of example.
DETAILED DESCRIPTION
(10) The matrix sensor can be of CCD or CMOS type; in the latter case, it can incorporate an analog-digital converter so as to directly supply digital signals at its output. In any case, it comprises at least four different types of pixels: first three types sensitive to spectral bands corresponding to colors which, mixed, reproduce the white of the visible spectral band (typically red, green and blue), and a panchromatic fourth type. In a preferred embodiment, all these types of pixels also exhibit a non-zero sensitivity in the near-infrared, which is the case for silicon sensors. This sensitivity in the near-infrared is generally considered a nuisance and eliminated using an optical filter, but it is exploited by the invention. Advantageously, the pixels all have the same structure, and differ only by a filtering coating on their surface (absent in the case of the panchromatic pixels), generally polymer-based.
(11) As will be explained in more detail later, referring to
(12) The figure identifies the visible (350-700 nm) VIS and near-infrared (800-1100 nm) PIR spectral bands. The intermediate band (700-800 nm) can be filtered, but that is not advantageous in the case of the invention; more usefully, it can be considered as near-infrared.
(13) Advantageously, the matrix sensor CM can also be sparse, which means that the panchromatic pixels are at least as numerous as, and preferably more numerous than, those of each of the three colors. Advantageously, at least half of the pixels are panchromatic. That makes it possible to enhance the sensitivity of the sensor because the panchromatic pixels, not including any filter, receive more light than the colored pixels.
(14) The arrangement of the pixels can be pseudo-random, but is preferably regular (that is to say periodic according to the two spatial dimensions) in order to facilitate the image processing operations. It can notably be a periodicity on a random pattern, that is to say a periodic repetition of blocks within which the pixels are distributed pseudo-randomly.
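By way of illustration, the periodic repetition of blocks with pseudo-random internal distributions can be sketched as follows. This is a hypothetical layout, not the patented one; the 4×4 block size, the type codes (0 = panchromatic, 1 = red, 2 = green, 3 = blue) and the 50% panchromatic share are assumptions of this sketch:

```python
import numpy as np

# Sketch: a mosaic that is periodic at the block level but pseudo-random
# inside each block. Type codes (assumptions of this illustration):
# 0 = panchromatic, 1 = red, 2 = green, 3 = blue.
def make_mosaic(block_m=4, block_n=4, n_blocks_y=2, n_blocks_x=2, seed=0):
    rng = np.random.default_rng(seed)
    # Controlled distribution: half the pixels panchromatic,
    # the remainder split between the three colors.
    n_pix = block_m * block_n
    labels = np.array([0] * (n_pix // 2)
                      + [1, 2, 3] * (n_pix // 2 // 3 + 1))[:n_pix]
    rng.shuffle(labels)                      # pseudo-random inside the block
    block = labels.reshape(block_m, block_n)
    return np.tile(block, (n_blocks_y, n_blocks_x))  # periodic repetition

mosaic = make_mosaic()
```

The same block therefore repeats across the whole sensor, which is what allows the regular sub-pattern extraction described further on.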
(15) As an example, the left-hand part of
(16) It is also possible to use a sensor obtained by the regular repetition of a block of dimensions M×N containing a distribution of colored and panchromatic pixels that is pseudo-random (but with a controlled distribution between these different types of pixels).
(17) For example,
(18) It is also possible to use more than three types of colored pixels, exhibiting different sensitivity bands, in order to obtain a plurality of monochromatic visible (and, if appropriate, in the near-infrared) images corresponding to these bands. It is thus possible to obtain hyperspectral images.
(19) Moreover, it is not essential for the colored pixels (or all of them) to be sensitive to the near-infrared: it can be sufficient for the panchromatic pixels to be so.
(21) The circuit CTS receives as input a set of digital signals representing the light intensity values detected by the different pixels of the matrix sensor CM. In the figure, this set of signals is designated by the expression full-band sparse image. The first processing operation consists in extracting from this set the signals corresponding to the pixels of the different types. Considering the case of a regular arrangement of M×N blocks of pixels (M>1 and/or N>1), this can be called the extraction of the R.sub.PB (full-band red), V.sub.PB (full-band green), B.sub.PB (full-band blue) and M.sub.PB (full-band panchromatic) patterns. These patterns correspond to downsampled images, or images with holes; it is therefore necessary to proceed with the reconstruction of complete images, sampled at the pitch of the matrix sensor.
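This pattern-extraction step can be sketched as follows. The type codes and the use of NaN to mark the "holes" are assumptions of this illustration:

```python
import numpy as np

# Sketch: split the "full-band sparse image" into one sparse pattern per
# pixel type, using a known mosaic map. Assumed type codes:
# 0 = panchromatic (M_PB), 1 = red (R_PB), 2 = green (V_PB), 3 = blue (B_PB).
def extract_patterns(raw, mosaic):
    patterns = {}
    for code, name in [(0, "M_PB"), (1, "R_PB"), (2, "V_PB"), (3, "B_PB")]:
        p = np.full(raw.shape, np.nan)   # NaN marks the holes
        mask = mosaic == code
        p[mask] = raw[mask]              # keep only this type's samples
        patterns[name] = p
    return patterns
```

Each pattern then goes through its own reconstruction (interpolation) to yield an image sampled at the full sensor pitch.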
(22) The reconstruction of a full-band panchromatic image IM.sub.PB is the simplest operation, particularly when the panchromatic pixels are the most numerous. Such is the case in
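A minimal sketch of this reconstruction, assuming a simple 3×3 neighbor average (the description does not prescribe a particular interpolator, so an edge-aware method could equally be used):

```python
import numpy as np

# Sketch: reconstruct the full-band panchromatic image IM_PB by filling
# each missing pixel (NaN) with the mean of its valid 3x3 neighbors.
def interpolate_pan(pattern):
    out = pattern.copy()
    h, w = out.shape
    for y, x in np.argwhere(np.isnan(pattern)):
        win = pattern[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
        valid = win[~np.isnan(win)]
        out[y, x] = valid.mean() if valid.size else 0.0
    return out
```

Because at least half of the pixels are panchromatic in the preferred embodiments, every hole normally has several valid neighbors, which keeps this simple interpolation accurate.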
(23) The reconstruction of the full-band colored images (red, green, blue) is performed twice, by means of two different methods. A first method is called intra-channel, because it uses only the colored pixels to reconstruct the colored images; a second method is called inter-channel, because it also uses the information from the panchromatic pixels. Examples of such methods will be described later, with reference to
(24) The full-band images, whether obtained by an intra-channel or inter-channel method are not directly usable, because they are polluted by the NIR (near-infrared) component not filtered by the optical system. This NIR component can be eliminated by combining the full-band images IR.sub.PB, IV.sub.PB, IB.sub.PB obtained by the intra-channel method with the full-band panchromatic image by means of a first colorimetry matrix (reference MCol1 in
(25) The full-band images IR*.sub.PB, IV*.sub.PB, IB*.sub.PB obtained by the inter-channel method are also combined with the full-band panchromatic image by means of a second colorimetry matrix (reference MCol2 in
(26) Optionally, the combination of the full-band images IR*.sub.PB, IV*.sub.PB, IB*.sub.PB obtained by the inter-channel method with the full-band panchromatic image by means of a third colorimetry matrix (reference MCol3 in
(27) In some cases, interest could be focused solely on the image in the near-infrared I.sub.PIR and possibly on the monochromatic visible image with low brightness I.sub.BNL. In these cases, it would not be necessary to implement the intra-channel reconstruction method.
(28) An advantageous image reconstruction method of intra-channel type will now be described referring to
(29) In the matrix sensor of
(31) The reconstruction of the full-band red and blue images is a little more complex. It is based on a method similar to the hue-constancy method described in U.S. Pat. No. 4,642,678.
(32) Firstly, the full-band green image IV.sub.PB is subtracted from the patterns of red and blue pixels. More specifically, this means that a value representative of the intensity of the corresponding pixel of the full-band green image IV.sub.PB is subtracted from the signal derived from each red or blue pixel. The pattern of red pixels is broken down into two sub-patterns SMPR1, SMPR2; after subtraction, the modified sub-patterns SMPR1, SMPR2 are obtained; likewise, the pattern of blue pixels is broken down into two sub-patterns SMPB1, SMPB2; after subtraction, the modified sub-patterns SMPB1, SMPB2 are obtained. That is illustrated in
(33) Next, as illustrated in
(34) The full-band red image IR.sub.PB and the full-band blue image IB.sub.PB are obtained by adding the full-band green image IV.sub.PB back to the modified red and blue images reconstructed in the previous step.
(35) The benefit of proceeding in this way (subtracting the reconstructed green image from the patterns of red and blue pixels, then adding it back at the end of the processing) is that the modified patterns exhibit a low intensity dynamic range, which makes it possible to reduce the interpolation errors. The problem is less acute for the green, which is sampled more finely.
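The intra-channel sequence above (interpolate green, subtract it at the red/blue sites, interpolate the low-dynamic difference, add the green back) can be sketched as follows. The nearest-valid-pixel interpolator is an assumption of this illustration; any interpolator would fit the same scheme:

```python
import numpy as np

# Helper (illustrative): fill each NaN with the nearest valid sample
# (L1 distance). A production interpolator would be more elaborate.
def fill_nearest(sparse):
    out = sparse.copy()
    ys, xs = np.where(~np.isnan(sparse))
    pts = np.stack([ys, xs], axis=1)
    for y, x in np.argwhere(np.isnan(sparse)):
        d = np.abs(pts - np.array([y, x])).sum(axis=1)
        out[y, x] = sparse[tuple(pts[d.argmin()])]
    return out

# Sketch of the hue-constancy reconstruction for the red channel:
# subtract the full green image at the red sites, interpolate the
# low-dynamic difference, then add the green image back.
def reconstruct_red(red_pattern, green_full):
    diff = red_pattern - green_full      # NaN stays NaN at the holes
    diff_full = fill_nearest(diff)       # interpolate the difference
    return diff_full + green_full        # add the green image back
```

The interpolation thus operates on the red-minus-green difference, whose reduced dynamic range is precisely what limits the interpolation error.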
(36) The inter-channel reconstruction is performed differently. It explicitly exploits the panchromatic pixels, contrary to the intra-channel reconstruction, which exploits only the red, green and blue pixels. As an example, it can be performed by means of an algorithm that can be qualified as a monochrome law, which is illustrated using
(38) The first step of the method consists in reconstructing rows of the blue component of the image using the panchromatic image; only the rows containing the blue pixels are reconstructed in this way, i.e. one row in four. At the end of this step, there are complete blue rows, separated by rows in which the blue component is not defined. Looking at the columns, it will be noted that, in each column, one pixel in four is blue. It is therefore possible to reconstitute blue columns by interpolation assisted by the knowledge of the panchromatic image, as was done for the rows. The same process is applied for the green and red components.
(39) The application of the monochrome law to reconstruct a colored component of the image proceeds in the following way.
(40) Interest is focused on the pixels M.sub.1 to M.sub.5 of the reconstituted panchromatic image which are situated between two pixels C.sub.1 and C.sub.5 of the pattern of the color concerned, including the end pixels M.sub.1, M.sub.5 which are co-located with these two colored pixels. Then, a determination is made as to whether the corresponding portion of the panchromatic image can be considered uniform. To do this, the total variation of panchromatic luminance between M.sub.1 and M.sub.5 is compared to a threshold Th. If |M.sub.5−M.sub.1|<Th, then the zone is considered uniform, otherwise it is considered non-uniform.
(41) If the zone of the panchromatic image is considered uniform, a check is carried out to see whether the total panchromatic luminance M.sub.1+M.sub.2+M.sub.3+M.sub.4+M.sub.5 is below a threshold, a function in particular of the thermal noise. If it is, the panchromatic image does not contain usable information, and the reconstruction of the colored component (more specifically, the computation of the luminance of the colored pixels C.sub.2, C.sub.3 and C.sub.4) is done by linear interpolation between C.sub.1 and C.sub.5. Otherwise, a step-by-step reconstruction is carried out:
(42) C.sub.i+1=C.sub.i·M.sub.i+1/M.sub.i (i=1-4)
(43) In other words, the luminance of each colored pixel to be reconstructed is determined from that of the immediately preceding colored pixel, in the order of reconstruction, by applying to it the local variation rate measured on the panchromatic image.
(44) If the luminance has to be an integer value, the computed result is rounded.
(45) If the zone of the panchromatic image is not considered uniform (|M.sub.5−M.sub.1|≥Th), then it is possible to directly reconstruct the colored pixels C.sub.2-C.sub.4 by application of a monochrome law, that is to say the affine function expressing C.sub.i as a function of M.sub.i (i=1-5) and such that the computed values of C.sub.1 and C.sub.5 coincide with the measured values:
(46) C.sub.i=C.sub.1+(C.sub.5−C.sub.1)·(M.sub.i−M.sub.1)/(M.sub.5−M.sub.1) (i=1-5)
(47) Thereagain, if the luminance has to be an integer value, the computed result is rounded.
(48) The reconstruction by direct application of the monochrome law can lead to an excessively great dynamic range of the reconstructed colored component, or else to the saturation thereof. In this case, it may be worthwhile to revert to a step-by-step reconstruction. For example, an excessively great dynamic range condition can be observed when
(49) Max(C.sub.i)−min(C.sub.i)>Th1,
where Th1 is a threshold, generally different from Th.
(50) A saturation can be observed if min(C.sub.i)<0 or if Max(C.sub.i) is greater than a maximum allowable value (65535 considering the case of a luminance expressed by an integer number coded on 16 bits).
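The whole decision procedure (uniformity test, noise test, step-by-step reconstruction or affine monochrome law) can be sketched in one dimension as follows. The multiplicative step-by-step rule and the threshold values are this sketch's reading of the text, not quoted values:

```python
# Sketch of the "monochrome law" reconstruction between two measured
# colored pixels C1 and C5, given the co-located and intermediate
# panchromatic luminances M = [M1..M5]. Thresholds are illustrative.
def reconstruct_segment(C1, C5, M, th=32.0, noise=50.0):
    n = len(M)
    if abs(M[-1] - M[0]) < th:                 # zone considered uniform
        if sum(M) < noise:                     # pan image unusable:
            # plain linear interpolation between C1 and C5
            return [C1 + (C5 - C1) * i / (n - 1) for i in range(n)]
        out = [C1]                             # step-by-step: apply the
        for i in range(1, n):                  # local pan variation rate
            out.append(out[-1] * M[i] / M[i - 1])
        return out
    # non-uniform zone: affine monochrome law, pinned at C1 and C5
    a = (C5 - C1) / (M[-1] - M[0])
    return [C1 + a * (Mi - M[0]) for Mi in M]
```

A saturation or excessive-dynamic-range test on the returned values (as described above) would then decide whether to fall back to the step-by-step branch.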
(51) Obviously, the configuration of the
(52) Variants of this method are possible. For example, another approach for determining the luminance of the colored pixels, using both nearby colored pixels (situated at a distance that depends on the pattern of the colored pixels concerned) and the reconstituted neighboring panchromatic pixels, consists in using non-linear functions which approximate the distribution of the colored pixels, for example a polynomial approximation (and more generally a non-linear spatial surface function) fitted to the neighboring panchromatic pixels. The advantage of these non-linear functions, whether mono-axis or, on the contrary, bi-axis surface functions, is that they take account of the distribution of the colored pixels on a larger scale than just the colored pixels closest to the pixel to be reconstructed. Within this framework, it is also possible to use more general value-diffusion functions which exploit the local gradients and abrupt jumps appearing in the luminance values of the panchromatic pixels. Whatever the method used, the principle remains the same: exploit the panchromatic pixels, which are more numerous than the colored pixels, and their law of variation, to reconstruct the colored pixels.
(53) Although the monochrome law method involves an approach with two successive mono-axis passes, the use of surface functions or of diffusion equations makes it possible to reconstruct the colored pixels through a single-pass approach.
(54) At this stage of the processing, there are a full-band panchromatic image, IM.sub.PB, and two sets of three full-band monochromatic images (IR.sub.PB, IV.sub.PB, IB.sub.PB) and (IR*.sub.PB, IV*.sub.PB, IB*.sub.PB). As described above, none of these images is directly usable. However, a color image in visible light I.sub.VIS can be obtained by combining the full-band images of the first set (IR.sub.PB, IV.sub.PB, IB.sub.PB) and the full-band panchromatic image IM.sub.PB via a 3×4 colorimetry matrix, MCol1. More specifically, the red component IR of the visible image I.sub.VIS is given by a linear combination of IR.sub.PB, IV.sub.PB, IB.sub.PB and IM.sub.PB with coefficients a.sub.11, a.sub.12, a.sub.13 and a.sub.14. Similarly, the green component IV is given by a linear combination of IR.sub.PB, IV.sub.PB, IB.sub.PB and IM.sub.PB with coefficients a.sub.21, a.sub.22, a.sub.23 and a.sub.24, and the blue component IB is given by a linear combination of IR.sub.PB, IV.sub.PB, IB.sub.PB and IM.sub.PB with coefficients a.sub.31, a.sub.32, a.sub.33 and a.sub.34. That is illustrated by
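The application of the colorimetry matrices can be sketched as follows. Rows 1-3 of the 4×4 matrix A form MCol1 (visible RGB from the intra-channel set); row 4 alone is MCol2 (NIR from the inter-channel set). The coefficient values below are placeholders, not calibrated ones:

```python
import numpy as np

# Placeholder 4x4 colorimetry matrix A (values are illustrative only).
A = np.array([[ 1.6, -0.2, -0.1, -0.3],   # a_11..a_14 -> IR
              [-0.2,  1.7, -0.2, -0.3],   # a_21..a_24 -> IV
              [-0.1, -0.2,  1.6, -0.3],   # a_31..a_34 -> IB
              [-0.4, -0.4, -0.4,  1.2]])  # a_41..a_44 -> I_PIR

def apply_colorimetry(stack, M):
    """stack: (4, H, W) full-band images in the order (R, V, B, pan);
    returns one output image per row of M."""
    return np.tensordot(M, stack, axes=([1], [0]))

# Usage (names assumed): I_VIS uses the intra-channel set with MCol1 = A[:3];
# I_PIR uses the inter-channel set with MCol2 = A[3:].
# I_VIS = apply_colorimetry(np.stack([IR_PB, IV_PB, IB_PB, IM_PB]), A[:3])
# I_PIR = apply_colorimetry(np.stack([IRs_PB, IVs_PB, IBs_PB, IM_PB]), A[3:])
```

Each output component is thus exactly the per-pixel linear combination described in the text.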
(55) Next, the visible image I.sub.VIS can be enhanced by a conventional white balance operation, to take account of the difference in lighting of the scene relative to that used to establish the coefficients of the colorimetry matrix.
(56) Likewise, an image in the near-infrared I.sub.PIR can be obtained by combining the full-band images of the second set (IR.sub.PB*, IV.sub.PB*, IB.sub.PB*) and the full-band panchromatic image IM.sub.PB via a second, 1×4 colorimetry matrix, MCol2. In other words, the image in the near-infrared I.sub.PIR is given by a linear combination of IR.sub.PB*, IV.sub.PB*, IB.sub.PB* and IM.sub.PB with coefficients a.sub.41, a.sub.42, a.sub.43 and a.sub.44. That is illustrated by
(57) If several types of pixels exhibit different spectral sensitivities in the near-infrared, it is possible to obtain a plurality of images in the near-infrared that are different, corresponding to N.sub.PIR different spectral sub-bands (with N.sub.PIR>1). In this case, the second colorimetry matrix MCol2 becomes an N.sub.PIR×(N.sub.PIR+3) matrix, N.sub.PIR being the number of images in the near-infrared that are to be obtained. The case dealt with previously is the particular case where N.sub.PIR=1.
(58) Next, the image in the near-infrared I.sub.PIR can be enhanced by a conventional spatial filtering operation. This operation can for example be an outline enhancement operation associated or not with an adaptive filtering of the noise (the possible outline enhancement techniques that can be cited include the operation consisting in passing a high-pass convolution filter over the image).
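As an illustration of the outline-enhancement option, a high-pass (Laplacian) convolution component can be added back to the image; the kernel and gain below are illustrative choices, not prescribed by the description:

```python
import numpy as np

# Illustrative high-pass (Laplacian) kernel for outline enhancement.
KERNEL = np.array([[ 0, -1,  0],
                   [-1,  4, -1],
                   [ 0, -1,  0]], dtype=float)

def enhance_outlines(img, gain=0.5):
    """Add a scaled high-pass component back to the image (unsharp-style)."""
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    high = np.zeros_like(img, dtype=float)
    for dy in range(3):                      # explicit 3x3 convolution
        for dx in range(3):
            high += KERNEL[dy, dx] * pad[dy:dy + h, dx:dx + w]
    return img + gain * high
```

On a uniform zone the high-pass component is zero, so flat areas are untouched, which is why this kind of filter pairs well with adaptive noise filtering.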
(59) The matrices MCol1 and MCol2 are in fact sub-matrices of a same colorimetry matrix A, of dimensions 4×4 in the particular case where N.sub.PIR=1 and where there are three types of colored pixels, which is not used as such.
(60) The size of the colorimetry matrices must be modified if the matrix sensor has more than three different types of colored pixels. For example, just as there can be N.sub.PIR pixel types having different spectral sensitivities in the infrared, there can be N.sub.VIS (with N.sub.VIS>3) types of pixels sensitive to different sub-bands in the visible, in addition to the unfiltered panchromatic pixels. These N.sub.VIS types of pixels can moreover exhibit different spectral sensitivities in the near-infrared to allow the acquisition of N.sub.PIR PIR images. Assuming that there are no pixels sensitive only in the infrared, the colorimetry matrix MCol1 is then of dimensions 3×(N.sub.VIS+1).
(61) Moreover, the monochromatic visible image I.sub.BNL can be obtained by combining the full-band images of the second set (IR.sub.PB*, IV.sub.PB*, IB.sub.PB*) and the full-band panchromatic image IM.sub.PB via a third, 1×4 colorimetry matrix, MCol3. In other words, the image I.sub.BNL is given by a linear combination of IR.sub.PB*, IV.sub.PB*, IB.sub.PB* and IM.sub.PB with coefficients ã.sub.41, ã.sub.42, ã.sub.43, ã.sub.44, which form the last row of another 4×4 colorimetry matrix Ã, which is also not used as such. That is illustrated by
(62) The image I.sub.BNL can, in its turn, be enhanced by a conventional spatial filtering operation.
(63) The colorimetry matrices A and Ã can be obtained by a calibration method. The latter consists, for example, in using a test pattern on which different paints reflecting in the visible and the NIR have been deposited, illuminating it with controlled lighting, and comparing the theoretical luminance values that these paints should have in the visible and the NIR with those measured, the coefficients of a 4×4 colorimetry matrix being fitted by least squares. The colorimetry matrix can also be enhanced by weighting the colors that are to be revealed as a priority, or by adding measurements made on natural objects present in the scene. The proposed method (use of the NIR in addition to color, use of the 4×4 matrix, use of different paints emitting both in the visible and the NIR) differs from the conventional methods confined to the color, exploiting a 3×3 matrix and a conventional test pattern such as the X-Rite checkerboard or the Macbeth chart.
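The least-squares fit of this calibration can be sketched as follows, assuming K paint patches whose theoretical visible and NIR responses are known; the function name and data layout are assumptions of this illustration:

```python
import numpy as np

# Sketch: given, for K calibrated paints, the measured full-band responses
# (R_PB, V_PB, B_PB, M_PB) and the theoretical (R, G, B, NIR) values, fit
# the 4x4 colorimetry matrix A by least squares.
def fit_colorimetry(measured, theoretical):
    """measured: (K, 4), theoretical: (K, 4); returns A such that
    theoretical ~= measured @ A.T, i.e. each row of A maps one output."""
    A_T, *_ = np.linalg.lstsq(measured, theoretical, rcond=None)
    return A_T.T
```

With K well above 4 patches the fit is over-determined, and weighting selected rows of `measured`/`theoretical` implements the priority-color enhancement mentioned above.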
(65) In the example of
(66) By taking the example of a double acquisition frequency, from two acquired images corresponding to two opposite extreme positions of the sensor (left-hand part of the
(67) an image in color formed by the repetition of a pattern of four pixels (two greens arranged along a diagonal, one blue and one red: a so-called Bayer matrix), formed by reconstructed pixels having an elongate form in the direction of the displacement, with an aspect ratio of 2; and
(68) a full panchromatic image, directly usable without the need for interpolation;
(69) these two reconstructed images being acquired at a rate two times lower than the acquisition frequency.
(70) Indeed, the micro-scanning completes the panchromatic and colored pixel information, and the processing operations presented in the context of the present invention can be applied directly to the patterns generated from the detector before micro-scanning and to the additional patterns obtained after micro-scanning; it is therefore not essential to use a specific pattern as presented in
(71) As an example,
(72) Hitherto, only the case of a matrix sensor comprising exactly four types of pixels (red, green, blue and panchromatic) has been considered, but that is not an essential limitation. It is possible to use three types of colored pixels, or even more, exhibiting sensitivity curves different from those illustrated in
(73) The signals from the pixels of the fifth type can be used in different ways. For example, it is possible to reconstruct, by the intra-channel method of