Imager system with two sensors
10848658 · 2020-11-24
Assignee
Inventors
CPC classification
H04N25/533
ELECTRICITY
H04N23/45
ELECTRICITY
G06T7/80
PHYSICS
International classification
G06T7/80
PHYSICS
Abstract
The invention relates to an imager system comprising a main image sensor (1) and comprising a main matrix (2) of active pixels exhibiting a first instantaneous dynamic span of luminous sensitivity, and a main reading circuit adapted for reading the pixels of the main image sensor (1) and for acquiring a main image on the basis of said reading, an auxiliary image sensor (11) comprising a second matrix (12) of active pixels exhibiting a second instantaneous dynamic span of luminous sensitivity which is more extensive than the first instantaneous dynamic span of luminous sensitivity, and an auxiliary reading circuit adapted for reading the active pixels of the auxiliary image sensor (11) and for acquiring an auxiliary image on the basis of said reading, and a data processing unit (10) configured to determine at least one value of an acquisition parameter of the main image sensor on the basis of the auxiliary image.
Claims
1. Imager system comprising a main image sensor having a first acquisition field and comprising a main array of active pixels having a first instantaneous dynamic range of light sensitivity, and a main readout circuit adapted to read the pixels of the main image sensor after exposure of said pixels and to acquire a main image from said readout, wherein the main image sensor has a voltage response that is at least in part linear to light exposure in the first instantaneous dynamic range of the light sensitivity, wherein the imager system comprises: an auxiliary image sensor having a second acquisition field covering at least in part the first acquisition field, and comprising a second array of active pixels having a second instantaneous dynamic range of light sensitivity wider than the first instantaneous dynamic range of light sensitivity, and an auxiliary readout circuit adapted to read the active pixels of the auxiliary image sensor after exposure of said active pixels and to acquire an auxiliary image from said readout, wherein the auxiliary image sensor has a non-linear voltage response to light exposure in the first instantaneous dynamic range of light sensitivity, and a data processing unit configured to determine at least one value of an acquisition parameter for the main image sensor from the auxiliary image.
2. The imager system according to claim 1, wherein the data processing unit is configured to determine a spatial distribution of luminosity in the auxiliary image, and wherein the data processing unit is configured to determine, as acquisition parameter of the main image, at least the exposure time of the active pixels of the main image sensor and/or an amplification gain of the main readout circuit from the spatial distribution of luminosity in the auxiliary image.
3. The imager system according to claim 1, wherein the width of the second instantaneous dynamic range of light sensitivity is at least twice the width of the first instantaneous dynamic range of light sensitivity.
4. The imager system according to claim 1, wherein the main array of the main image sensor has a pixel density at least twice as high as the pixel density of the active pixels of the array of active pixels of the auxiliary image sensor.
5. The imager system according to claim 1, wherein the pixels of the main array of the main image sensor and the active pixels of the auxiliary array of the auxiliary image sensor are interleaved on a surface of one same substrate.
6. The imager system according to claim 1, wherein the pixels of the main array of the main image sensor are arranged on a first substrate and the active pixels of the auxiliary array of the auxiliary image sensor are arranged on a second substrate separate from the first substrate.
7. The imager system according to claim 1, wherein the active pixels of the auxiliary image sensor have an aperiodic spatial distribution.
8. The imager system according to claim 1, wherein the active pixels of the auxiliary image sensor are pixels with a logarithmic response.
9. The imager system according to claim 1, wherein the active pixels of the auxiliary image sensor are pixels having a counter adapted to accumulate charges on exposure and to count the number of times charge accumulation reaches an accumulation reset threshold.
10. The imager system according to claim 1, wherein the active pixels of the auxiliary image sensor are discharge pixels adapted to discharge on exposure for a discharge time that is a function of the luminance of exposure.
11. The imager system according to claim 1, wherein the auxiliary image sensor is provided with a colour filter array, and the data processing unit is configured to determine at least the white balance from the auxiliary image as acquisition parameter for the main image.
12. The imager system according to claim 1, wherein the data processing unit is configured to determine several different values of an acquisition parameter for the main image sensor from the auxiliary image, and the main image sensor is configured to acquire several main images using a different value of the acquisition parameter for each of said main images.
13. Method for acquiring an image by means of an imager system according to claim 1, said method comprising the steps whereby: the auxiliary image sensor acquires an auxiliary image, the data processing unit determines at least one value of an acquisition parameter for the main image sensor from the auxiliary image, the main image sensor acquires at least one main image using said value of the acquisition parameter.
14. The method according to claim 13, wherein the data processing unit determines several different values of an acquisition parameter for the main image sensor from the auxiliary image, and wherein the main image sensor acquires several main images using a different value of the acquisition parameter for each of said main images.
Description
PRESENTATION OF THE FIGURES
(1) The invention will be better understood by means of the following description relating to embodiments and variants of the present invention, given as nonlimiting examples and explained with reference to the appended schematic drawings in which:
(10) In all the Figures, similar elements are designated by the same references.
DETAILED DESCRIPTION
(11) In the present description, an image means a set of data providing a two-dimensional, spatially distributed representation of at least one optical characteristic, typically luminance, of elements in a scene, said data being spatially organized according to said spatial representation. In the following description, the main array of the main image sensor is composed of active pixels. Other configurations can be used, for example with pixels other than active pixels for the main array.
(12) With reference to
(13) The main image sensor may in particular be a sensor supporting multiple exposure times, which can either be implemented as separate shots or mixed together in a single shot, according to the conditions set forth above, in particular for methods 1) and 2) already mentioned.
(14) Each active pixel comprises a photodetector, e.g. a photodiode, and an active amplifier. It may be a conventional three-transistor or four-transistor active pixel, or another configuration. Preferably, for reasons of compactness, each active pixel is a four-transistor pixel whose output amplifier is shared by several photodiodes.
(15) The main image sensor 1 also comprises optics 3 comprising the usual components for image capture, such as a lens 4 or a lens assembly 5 equipped with a diaphragm or shutter. The optics 3 can be controlled in particular through acquisition parameters such as exposure time or diaphragm aperture, which control the amount of light received by the active pixels.
(16) The main image sensor 1 has a first acquisition field 6 which extends from the optics 3. In the acquisition field 6 there is a scene composed of different elements of varied luminance. When the active pixels of the main array 2 are exposed, the light emitted/reflected by these different elements of the scene passes through the optics 3 and reaches the active pixels of the main array 2. The amount of incident light on the active pixels is dependent on exposure time and the characteristics of the optics 3, and in particular on aperture size.
(17) The main array 2 is formed on a substrate 7 which typically contains the main readout circuit. The electrical characteristics of the active pixels vary with the amount of incident light during exposure. On readout of these active pixels, the main readout circuit measures the electrical characteristics of each active pixel and thereby acquires a main image representing the amount of incident light on each active pixel.
(18) The imager system also comprises an auxiliary image sensor 11 having a second acquisition field 16 overlapping the first acquisition field 6 at least in part. It is preferable that the first acquisition field 6 and second acquisition field 16 have maximum coincidence, for example more than 80%, in particular at more than ten centimetres from the imager system, which is generally a minimum distance for image capture. Good superimposition of the acquisition fields can be obtained by placing the sensors adjacent to each other with parallel or converging optical axes. Preferably, the second acquisition field 16 and the first acquisition field are the same, which can be obtained by using the same optics for the two sensors 1, 11, and in particular by using an image return device such as a mirror.
(19) The second array 12 of active pixels has a second instantaneous dynamic range of light sensitivity that is wider than the first instantaneous dynamic range of light sensitivity. Preferably, the width of the second instantaneous dynamic range of light sensitivity is at least twice the width of the first instantaneous dynamic range of light sensitivity, and further preferably at least ten times the width of the first instantaneous dynamic range of light sensitivity.
(20) The width of an instantaneous dynamic range of light sensitivity can simply be determined by identifying its limits with fixed acquisition parameters. For the lower limit, the noise affecting the sensor under test conditions is first determined: the sensor is placed in the dark, an image is acquired, and a noise level is statistically determined from the values of the pixels in the acquired image. The lower limit of the instantaneous dynamic range of light sensitivity corresponds to the luminance of the elements in the acquisition field which allows a pixel value higher than this noise level to be obtained.
(21) The upper limit corresponds to the luminance of the elements in the acquisition field at and beyond which the active pixels start to saturate. The instantaneous dynamic range of the active pixels can then be inferred by calculating the ratio of the two limits determined as explained above.
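As a non-limiting illustration, the measurement of the two limits described in paragraphs (20) and (21) can be sketched as follows; the noise convention (mean plus three standard deviations) and the use of a measured response curve are assumptions for the sketch, not details taken from the patent:

```python
import numpy as np

def dynamic_range(dark_frame, luminances, pixel_values):
    """Estimate the instantaneous dynamic range of a sensor.

    dark_frame: pixel values acquired in the dark (noise sample).
    luminances / pixel_values: measured response curve acquired with
    fixed acquisition parameters, luminance monotonically increasing.
    """
    # Noise level: statistical ceiling of the dark-frame values
    # (mean + 3 sigma is one common convention; an assumption here).
    noise_level = dark_frame.mean() + 3 * dark_frame.std()

    # Lower limit: first luminance whose pixel value exceeds the noise.
    lower = next(l for l, v in zip(luminances, pixel_values)
                 if v > noise_level)

    # Upper limit: first luminance at which the response saturates.
    saturation = pixel_values.max()
    upper = next(l for l, v in zip(luminances, pixel_values)
                 if v >= saturation)

    # Instantaneous dynamic range as the ratio of the two limits,
    # also expressed in decibels.
    ratio = upper / lower
    return lower, upper, 20 * np.log10(ratio)
```

With a synthetic linear sensor saturating at one tenth of the swept luminance range, the function returns the two limits and a range of about 34 dB.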
(22) The auxiliary image sensor 11 also comprises optics 13 comprising the usual components for image capture, such as a lens 14 or a lens assembly 15 equipped with a diaphragm or shutter. The optics 13 can be controlled in particular through acquisition parameters such as exposure time or diaphragm aperture, which control the amount of light received by the active pixels. However, simplified optics having a lens with fixed aperture may suffice.
(23) Similar to the first sensor 1, the second sensor comprises an auxiliary readout circuit adapted to read the active pixels of the auxiliary image sensor 11 after exposure of said active pixels, and to acquire an auxiliary image from said readout. Similar to the main image sensor 1, the auxiliary array 12 is formed on a substrate 17, which typically comprises the auxiliary readout circuit. The electrical characteristics of the active pixels vary with the amount of incident light during exposure. On reading these active pixels, the auxiliary readout circuit measures the electrical characteristics of each active pixel and thereby acquires the auxiliary image representing the amount of incident light on each active pixel.
(24) The auxiliary image sensor 11 is therefore used to acquire an auxiliary image before acquisition of a main image by the main image sensor 1 so that at least one acquisition parameter for the main image sensor is determined from this auxiliary image and can be used to acquire the main image.
(25) For this purpose, the imager system also comprises a data processing unit 10 configured to determine at least one value of an acquisition parameter for the main image sensor from the auxiliary image. For example, this acquisition parameter can be exposure time and/or diaphragm aperture, and/or amplification gain of the main readout circuit and/or optical attenuation of an optical attenuator and/or white balance. The processing unit 10 is therefore connected both with the auxiliary image sensor 11 to receive the auxiliary image and with the main image sensor 1 to provide the latter with the value of the acquisition parameter.
(26) In particular, the data processing unit 10 can be configured to determine a spatial distribution of luminosity in the auxiliary image and to determine, as value of the acquisition parameter of the main image, at least one exposure time of the active pixels of the main image sensor 1 and/or an amplification gain of the main readout circuit from the spatial distribution of luminosity inferred from the auxiliary image.
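As a non-limiting illustration of paragraph (26), a processing unit could derive an exposure time from the spatial distribution of luminosity in the auxiliary image as sketched below; the percentile-based heuristic, the target fraction and the linearity assumption are illustrative, not specified by the patent:

```python
import numpy as np

def exposure_from_auxiliary(aux_luminance, current_exposure,
                            target_fraction=0.8, percentile=99):
    """Pick a main-sensor exposure time from the auxiliary image.

    Hypothetical heuristic: choose the exposure so that the
    `percentile`-th luminance of the auxiliary image lands at
    `target_fraction` of the main sensor's full scale (normalised
    to 1), assuming the main sensor's response is linear.
    """
    bright = np.percentile(aux_luminance, percentile)  # near-maximum luminance
    if bright <= 0:
        return current_exposure  # scene is dark; keep the current exposure
    # Linear sensor: recorded value is proportional to luminance x exposure.
    return current_exposure * target_fraction / bright
```

For a uniform auxiliary image of luminance 0.4 and a current exposure of 10 ms, the sketch returns 20 ms, doubling the exposure to place the scene at 80% of full scale.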
(27) The method for acquiring an image by means of an imager system according to any of the possible embodiments therefore comprises the steps whereby:
(28) the auxiliary image sensor 11 acquires an auxiliary image,
(29) the data processing unit 10 determines at least one value of an acquisition parameter for the main image sensor 1 from the auxiliary image,
(30) the main image sensor 1 acquires a main image using said value of the acquisition parameter.
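The three steps above can be sketched as follows; the sensor and processing-unit interfaces are hypothetical and stand in for the hardware described in the patent:

```python
def acquire(aux_sensor, main_sensor, processing_unit):
    """Acquisition method of the imager system, as a sketch.

    1. The auxiliary sensor acquires an auxiliary image.
    2. The processing unit derives one or more sets of acquisition
       parameters from that auxiliary image.
    3. The main sensor acquires one main image per parameter set.
    """
    aux_image = aux_sensor.acquire()
    parameter_sets = processing_unit.determine_parameters(aux_image)
    return [main_sensor.acquire(**params) for params in parameter_sets]
```

Returning a list covers both the single-image method of claim 13 and the multi-image variant of claim 14.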
(31) Insofar as the auxiliary image is essentially used to determine the value of an acquisition parameter for the main image sensor 1, the resolution of the auxiliary image does not need to be as high as the resolution of the main image. It follows that the density of active pixels in the auxiliary image sensor 11 does not need to be as high as the density of active pixels in the main image sensor 1.
(32) The main array 2 of the main image sensor 1 may therefore have a density of active pixels at least twice, and preferably at least ten times, the density of the active pixels in the array 12 of active pixels of the auxiliary image sensor 11. In terms of number of pixels, the main array 2 of the main image sensor 1 may have more than ten times, and preferably more than one hundred times, the number of active pixels in the array 12 of active pixels of the auxiliary image sensor 11.
(33) Low resolution for the auxiliary image reduces the burden of processing applied by the processing unit, and in particular therefore allows accelerated analysis of the auxiliary image. This low resolution also allows the advantageous use of logarithmic pixels without saturation and without control over image capture parameters as proposed by patent EP2186318.
(34) The auxiliary image sensor 11 nevertheless has sufficient resolution to allow fine analysis of the auxiliary image. The array 12 of active pixels of the auxiliary image sensor 11 therefore preferably has more than 320×240 pixels, and preferably more than one thousandth, or even one hundredth, of the number of active pixels in the main array 2 of the main image sensor 1.
(35) It is possible to provide the auxiliary image sensor with a colour filter array (CFA). This colour filter array is composed of small colour filters positioned in front of the photosensitive elements of the active pixels of the auxiliary image sensor. It is then possible to determine the distribution of luminance in different colours. In this case, the data processing unit 10 can be configured to determine at least the white balance from the auxiliary image as acquisition parameter for the main image.
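A possible sketch of white-balance determination from a colour auxiliary image follows; the gray-world algorithm is an assumption for illustration only, as the patent merely states that the white balance is determined from the auxiliary image:

```python
import numpy as np

def white_balance_gains(aux_rgb):
    """Gray-world white balance from the auxiliary image.

    aux_rgb: H x W x 3 array of demosaiced auxiliary-image colours.
    Returns per-channel gains, normalised so the green gain is 1
    (a common convention, assumed here).
    """
    # Gray-world assumption: the average colour of the scene is neutral,
    # so each channel mean should be equalised to the green mean.
    means = aux_rgb.reshape(-1, 3).mean(axis=0)
    return means[1] / means
```

These gains could then be handed to the main image pipeline as the white-balance acquisition parameter of claim 11.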
(36) In the example in
(37) However, said configuration is not necessary. To simplify the structure of the imager system, and with reference to
(38) The two sensors 1, 11 with interleaved arrays share the same common optics 23, for example with a common lens 24 and a common lens assembly, and therefore the same common acquisition field 26, which thus forms both the first acquisition field and the second acquisition field. However, each image sensor 1, 11 can maintain a readout circuit dedicated to its respective active pixels.
(40) As previously, it is to be noted that the number of active pixels 30 of the main image sensor 1 is much higher, at least twice and preferably at least ten times higher, than the number of active pixels 32 of the auxiliary image sensor 11. Preferably, the active pixels 32 of the auxiliary image sensor 11 are not adjacent to each other but are individually isolated from each other by active pixels 30 of the main image sensor 1. Also preferably, the active pixels 32 of the auxiliary image sensor 11 have an aperiodic spatial distribution to prevent geometric artefacts such as a moiré effect, and therefore do not have a symmetrical distribution within the active pixels 30 of the main image sensor 1.
(41) As indicated above, the active pixels of the main image sensor 1 are of the type commonly used in imagers to obtain high-definition images, e.g. three-transistor or four-transistor active pixels. The active pixels 30 of the main image sensor 1 and the active pixels 32 of the auxiliary image sensor 11 differ in their instantaneous dynamic ranges of light sensitivity.
(42) To obtain active pixels 32 of the auxiliary image sensor 11 having a second instantaneous dynamic range of light sensitivity that is substantially wider than that of the active pixels 30 of the main image sensor 1, several solutions are possible.
(43) Whereas the main image sensor 1 generally has a voltage response that is at least in part linear to light exposure in the first instantaneous dynamic range of light sensitivity, the auxiliary image sensor 11 preferably has a non-linear voltage response to light exposure in the first instantaneous dynamic range of light sensitivity.
(44) This non-linearity can be obtained by using pixels with logarithmic response as active pixels 32 of the auxiliary image sensor 11. For example, patent EP2186318 proposes an active pixel structure of CMOS type that can be used. Other configurations of logarithmic-response pixels can be used. Since the active pixels have a logarithmic response, their output varies only slowly at high luminosity, which prevents saturation. In this manner, a near-infinite instantaneous dynamic range of light sensitivity can be obtained.
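A toy model illustrates why logarithmic pixels scarcely saturate; all constants are illustrative and are not taken from the patent or from EP2186318:

```python
import math

def log_pixel_response(luminance, v0=0.5, scale=0.06, l_ref=1.0):
    """Toy model of a logarithmic pixel's output voltage.

    v0, scale, l_ref: illustrative constants (offset voltage,
    volts per natural-log unit, reference luminance).

    The output grows with log(luminance), so doubling the luminance
    adds a fixed voltage increment instead of doubling the value:
    the pixel compresses a very wide luminance range into a small
    voltage swing and effectively never saturates.
    """
    return v0 + scale * math.log(luminance / l_ref)
```

Doubling the luminance from 1 to 2 or from 100 to 200 adds the same increment, scale × ln 2, which is the compression that widens the instantaneous dynamic range.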
(47) For the first exposure, at high luminance, discharge 61 takes place rapidly, so that the charge threshold 60 is reached after a short time t1. For the second exposure, at low luminance, discharge 62 occurs more slowly, so that the charge threshold 60 is reached only after a time t2 longer than t1. The signal corresponding to the luminance of exposure can then be determined as being proportional to the inverse of the discharge time. The signal for the exposure at high luminance, 1/t1, is therefore greater than the signal for the exposure at low luminance, 1/t2. High dynamics are possible if the timing is sufficiently fine-grained.
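The discharge-time principle of paragraph (47) can be sketched as follows; the constant-rate discharge model and its constants are illustrative assumptions:

```python
def discharge_time(luminance, threshold=1.0, rate_per_lux=0.01, dt=1e-4):
    """Simulate a discharge pixel: the accumulated drop grows at a rate
    proportional to luminance until the threshold is reached
    (illustrative model; threshold, rate and time step are assumptions).
    Returns the time needed to reach the threshold."""
    t, dropped = 0.0, 0.0
    while dropped < threshold:
        dropped += rate_per_lux * luminance * dt
        t += dt
    return t

def discharge_signal(t):
    # Signal proportional to the inverse of the discharge time,
    # as in paragraph (47); the proportionality constant is omitted.
    return 1.0 / t
```

A bright element reaches the threshold at a time t1 shorter than the time t2 of a dim element, so its reconstructed signal 1/t1 is larger than 1/t2.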
(48) Other technical solutions for extending the dynamic range can be found, for example, in the book High Dynamic Range Imaging: Sensors and Architectures, by Arnaud Darmont, 2013, ISBN 9780819488305.
(49) The large width of the second instantaneous dynamic range of light sensitivity allows improved acquisition of the main image by means of the auxiliary image. For this purpose, the data processing unit 10 is configured to determine at least one value of an acquisition parameter for the main image sensor from the auxiliary image.
(50) It is also possible to acquire several main images from one same auxiliary image. For this purpose, the data processing unit 10 is configured to determine several different values of an acquisition parameter for the main image sensor 1 from the auxiliary image, and the main image sensor is configured to acquire several main images using a different value of the acquisition parameter for each of said main images.
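One possible, hypothetical scheme for deriving several exposure values from a single auxiliary image is sketched below: the auxiliary luminances are split into bands no wider than the main sensor's instantaneous dynamic range, and one exposure is computed per band. The band-splitting rule, the target fraction and the linear-sensor assumption are all illustrative:

```python
import numpy as np

def bracketed_exposures(aux_luminance, main_range_stops=8, target=0.8):
    """Derive several main-sensor exposure times from one auxiliary image.

    main_range_stops: width of the main sensor's instantaneous dynamic
    range, in stops (powers of two); an assumed figure.
    Each returned exposure places the top luminance of one band at
    `target` of the main sensor's full scale (linear response assumed).
    """
    lum = np.asarray(aux_luminance, dtype=float)
    lum = lum[lum > 0]                       # logarithm needs positive values
    lo, hi = np.log2(lum.min()), np.log2(lum.max())
    # Number of bands needed to cover the auxiliary range without any
    # band exceeding the main sensor's dynamic range.
    n_bands = max(1, int(np.ceil((hi - lo) / main_range_stops)))
    edges = np.linspace(lo, hi, n_bands + 1)
    # One exposure per band, anchored on the band's upper luminance.
    return [target / 2.0 ** top for top in edges[1:]]
```

An auxiliary scene spanning 16 stops with an 8-stop main sensor yields two exposures, matching the two-shot capture described in paragraphs (53) to (55).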
(53) The processing unit then determines at least one value of an acquisition parameter so that the first instantaneous dynamic range of light sensitivity 71 of the active pixels of the main image sensor best corresponds to the luminance of the elements belonging to the first portion 100, as illustrated in
(54) From the same auxiliary image, the processing unit also determines at least one acquisition parameter so that the first instantaneous dynamic range of light sensitivity 71 of the active pixels of the main image sensor best corresponds to the luminance of the elements belonging to portion 101, as illustrated in
(55) The invention therefore makes it possible, with two main images, to capture all the elements in the first acquisition field 6, ensuring that there is no pixel saturation and therefore no loss of information.
(56) It can be seen that these two exposures indeed leave no hole in the luminance scale between the dynamic ranges in
(57) The invention is not limited to the embodiment described and illustrated in the appended Figures. Modifications remain possible, in particular regarding the composition of the various elements or via substitution of technical equivalents, without departing from the protected scope of the invention.