IMAGE PROCESSING FOR ON-CHIP INFERENCE
20220351506 · 2022-11-03
CPC classification
H04N23/743
ELECTRICITY
G06V10/12
PHYSICS
G06V10/25
PHYSICS
H04N23/676
ELECTRICITY
Abstract
The present disclosure relates to a method of performing, by an image processing circuit, an inference operation comprising: capturing first and second images using first and second values respectively of an image capture parameter; generating, for a first region of the first and second images, first and second estimates respectively of an image quality metric, wherein the image quality metric is dependent on the value of the image capture parameter; calculating first and second distances between the first and second estimates respectively and first and second target levels respectively; and supplying a result of the inference operation performed on the first region of either the first or second image selected based on the first and second distances.
Claims
1. A method of performing an inference operation comprising: capturing a first image F.sub.H using a first value P.sub.H of an image capture parameter; capturing a second image F.sub.L using a second value P.sub.L, lower than the first value, of the image capture parameter; generating, by an image processing circuit, for a first region R.sub.1 of the first image, a first estimate E.sub.H,i of an image quality metric, wherein the image quality metric is dependent on the value of the image capture parameter; calculating, by the image processing circuit, a first distance D.sub.H,1 between the first estimate E.sub.H,i and a first target level M.sub.H; generating, by the image processing circuit, for a first region R.sub.1 of the second image, a second estimate E.sub.L,i of the image quality metric, wherein the first regions of the first and second images are spatially corresponding regions; calculating, by the image processing circuit, a second distance D.sub.L,1 between the second estimate E.sub.L,i and a second target level M.sub.L; and supplying, by the image processing circuit, a result Z.sub.i of the inference operation performed on the first region R.sub.1 of either the first or second image selected based on the first and second distances.
2. The method of claim 1, further comprising: calculating, by the image processing circuit, a first new value P.sub.H(j+1) of the image capture parameter based on at least the first estimate E.sub.H,i; capturing a third image F.sub.H(j+1) using the first new value P.sub.H(j+1) of the image capture parameter; calculating, by the image processing circuit, a second new value P.sub.L(j+1) of the image capture parameter based on at least the second estimate E.sub.L,i; and capturing a fourth image F.sub.L(j+1) using the second new value P.sub.L(j+1) of the image capture parameter.
3. The method of claim 2, wherein the first new value P.sub.H(j+1) is further calculated based on the first target level M.sub.H, and the second new value P.sub.L(j+1) is further calculated based on the second target level M.sub.L.
4. The method of claim 1, further comprising: performing the inference operation on the first region R.sub.1 of the first image F.sub.H to generate a first inference result Z.sub.H,1; and performing the inference operation on the first region R.sub.1 of the second image to generate a second inference result Z.sub.L,1, wherein supplying the result of the inference operation comprises selecting the first inference result or the second inference result based on the first and second distances D.sub.H,1, D.sub.L,1.
5. The method of claim 1, further comprising: comparing, by the image processing circuit, the first and second distances D.sub.H,1, D.sub.L,1; if the first distance is lower than the second distance, performing the inference operation on the first region R.sub.1 of the first image F.sub.H to generate a first inference result Z.sub.H,1, and supplying the first inference result Z.sub.H,1 as the result Z.sub.i of the inference operation; and if the second distance is lower than the first distance, performing the inference operation on the first region R.sub.1 of the second image F.sub.L to generate a second inference result Z.sub.L,1, and supplying the second inference result Z.sub.L,1 as the result Z.sub.i of the inference operation.
6. The method of claim 1, further comprising: generating, by the image processing circuit, for a second region R.sub.2 of the first image, a third estimate E.sub.H,2 of the image quality metric; calculating, by the image processing circuit, a further first distance D.sub.H,2 between the third estimate E.sub.H,2 and the first target level M.sub.H; generating, by the image processing circuit, for the second region R.sub.2 of the second image, a fourth estimate E.sub.L,2 of the image quality metric, wherein the second regions R.sub.2 of the first and second images are spatially corresponding regions; calculating, by the image processing circuit, a further second distance D.sub.L,2 between the fourth estimate E.sub.L,2 and the second target level M.sub.L; and supplying a result of the inference operation performed on the second region of either the first or second image selected based on the further first and further second distances.
7. The method of claim 6, further comprising: calculating, by the image processing circuit, a first new value P.sub.H(j+1) of the image capture parameter based on at least the first estimate E.sub.H,i; capturing a third image F.sub.H(j+1) using the first new value P.sub.H(j+1) of the image capture parameter; calculating, by the image processing circuit, a second new value P.sub.L(j+1) of the image capture parameter based on at least the second estimate E.sub.L,i; and capturing a fourth image F.sub.L(j+1) using the second new value P.sub.L(j+1) of the image capture parameter, wherein the first new value P.sub.H(j+1) of the image capture parameter is based on a minimum min(E.sub.H,i) of at least the first and third estimates E.sub.H,i, and the second new value P.sub.L(j+1) of the image capture parameter is based on a maximum max(E.sub.L,i) of at least the second and fourth estimates E.sub.L,i.
8. The method of claim 1, wherein the image capture parameter is an exposure time.
9. The method of claim 1, wherein the first and second estimates E.sub.H,i, E.sub.L,i of the image quality metric are average pixel values of the pixels of the first region R.sub.1.
10. The method of claim 1, wherein the result Z.sub.i of the inference operation indicates a confidence level of a detection of an object in the first region.
11. The method of claim 10, further comprising comparing, by the image processing circuit, the result Z.sub.i of the inference operation with a threshold value th.sub.d, and outputting the first and/or second image F.sub.H, F.sub.L if the threshold value is exceeded.
12. An imaging device comprising: one or more image sensors configured to capture a first image F.sub.H using a first value P.sub.H of an image capture parameter and a second image F.sub.L using a second value P.sub.L, lower than the first value, of the image capture parameter; and an image processing circuit configured to: generate, for a first region R.sub.1 of the first image, a first estimate E.sub.H,i of an image quality metric, wherein the image quality metric is dependent on the value of the image capture parameter; calculate a first distance D.sub.H,1 between the first estimate E.sub.H,i and a first target level M.sub.H; generate, for a first region R.sub.1 of the second image, a second estimate E.sub.L,i of the image quality metric, wherein the first regions of the first and second images are spatially corresponding regions; calculate a second distance D.sub.L,1 between the second estimate E.sub.L,i and a second target level M.sub.L; and supply a result Z.sub.i of an inference operation performed on the first region R.sub.1 of either the first or second image selected based on the first and second distances.
13. The imaging device of claim 12, wherein the one or more image sensors and the image processing circuit are in a same integrated circuit chip.
14. The imaging device of claim 12, wherein the image processing circuit is further configured to: generate, for a second region R.sub.2 of the first image, a third estimate E.sub.H,2 of the image quality metric; calculate a further first distance D.sub.H,2 between the third estimate E.sub.H,2 and the first target level M.sub.H; generate, for the second region R.sub.2 of the second image, a fourth estimate E.sub.L,2 of the image quality metric, wherein the second regions R.sub.2 of the first and second images are spatially corresponding regions; calculate a further second distance D.sub.L,2 between the fourth estimate E.sub.L,2 and the second target level M.sub.L; and supply a result of the inference operation performed on the second region of either the first or second image selected based on the further first and further second distances.
15. The imaging device of claim 14, wherein the image processing circuit is further configured to: calculate a first new value P.sub.H(j+1) of the image capture parameter based on a minimum min(E.sub.H,i) of at least the first and third estimates E.sub.H,i; capture a third image F.sub.H(j+1) using the first new value P.sub.H(j+1) of the image capture parameter; calculate a second new value P.sub.L(j+1) of the image capture parameter based on a maximum max(E.sub.L,i) of at least the second and fourth estimates E.sub.L,i; and capture a fourth image F.sub.L(j+1) using the second new value P.sub.L(j+1) of the image capture parameter.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] The foregoing features and advantages, as well as others, will be described in detail in the following description of specific embodiments given by way of illustration and not limitation with reference to the accompanying drawings.
DETAILED DESCRIPTION OF THE PRESENT EMBODIMENTS
[0029] Like features have been designated by like references in the various figures. In particular, the structural and/or functional features that are common among the various embodiments may have the same references and may have identical structural, dimensional and material properties.
[0030] Unless indicated otherwise, when reference is made to two elements connected together, this signifies a direct connection without any intermediate elements other than conductors, and when reference is made to two elements coupled together, this signifies that these two elements can be connected or they can be coupled via one or more other elements.
[0031] In the following disclosure, unless indicated otherwise, when reference is made to absolute positional qualifiers, such as the terms “front”, “back”, “top”, “bottom”, “left”, “right”, etc., or to relative positional qualifiers, such as the terms “above”, “below”, “higher”, “lower”, etc., or to qualifiers of orientation, such as “horizontal”, “vertical”, etc., reference is made to the orientation shown in the figures, or to an imaging device as orientated during normal use.
[0032] Unless specified otherwise, the expressions “around”, “approximately”, “substantially” and “in the order of” signify within 10%, and preferably within 5%.
[0033] The term “image capture parameter” is used to designate any of a broad range of parameters that may be set when an image is to be captured by an image sensor. These for example include:
[0034] a parameter setting the exposure time, including the integration time of a photodiode, or other type of photosite, of each pixel and/or the opening time of a shutter, in order to reduce the effects of data quantization and noise by setting the dynamic range based on the scene;
[0035] a parameter setting the focal plane, for example by adjusting the lens power and/or depth of field, in order to obtain a sharp image; and
[0036] a parameter setting the gain, including the conversion gain of each pixel and the gain in the read out circuitry, which is for example at the bottom of the columns of the pixel array.
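Purely by way of illustration, and not as part of the original disclosure, the following Python sketch groups the three families of parameters above into one structure; the field names and numeric values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CaptureParams:
    """Hypothetical grouping of the image capture parameters listed above."""
    exposure_time_us: int  # integration/shutter time, setting the dynamic range
    focus_dioptres: float  # lens power, setting the focal plane
    analog_gain: float     # pixel conversion gain and column readout gain

# Two values of the exposure-time parameter, the first higher than the second,
# as used for the frames F_H and F_L described below.
P_H = CaptureParams(exposure_time_us=8000, focus_dioptres=0.0, analog_gain=2.0)
P_L = CaptureParams(exposure_time_us=1000, focus_dioptres=0.0, analog_gain=2.0)
```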
[0038] The imaging device 100 for example comprises an image sensor 102 and an image processing circuit 104. The image sensor 102 for example comprises an array of pixels, the array being formed on a focal plane of the image sensor 102. As known by those skilled in the art, light from the image scene is for example focused onto the image sensor 102 via an optical system (not illustrated), which may comprise lenses, filters, and/or other optical elements. The image sensor 102 is for example a CMOS sensor that is sensitive to visible light wavelengths, although in alternative embodiments the image sensor 102 could be of another type, including technologies sensitive to other light wavelengths, such as infrared.
[0039] The image processing circuit 104 is for example implemented by dedicated hardware. In some embodiments, the image processing circuit 104 is integrated in a same integrated circuit as the image sensor 102, although in alternative embodiments they could be implemented by separate chips. More generally, the imaging device 100 is for example a full-custom CMOS system-on-chip.
[0040] The image processing circuit 104 for example provides one or more image capture parameters (CONTROL PARAMETERS) to the image sensor 102 in order to control the image capture operation. The image processing circuit 104 receives, from the image sensor 102, image data, for example in the form of image frames (FRAMES), over a suitable communications interface.
[0041] The image processing circuit 104 is for example configured to output the image data in the form of a data signal (DATA). In some embodiments, prior to outputting the image data, one or more image processing operations are for example performed on the image data. For example, these image processing operations may involve filtering out noise from the raw image data provided by the image sensor 102, and/or other image processing adjustments.
[0042] Furthermore, the image processing circuit 104 is for example configured to perform an inference based on the image data in order to generate an inference result (INFERENCE). For example, this inference involves applying logical rules to the image data in order to implement functions such as classification or regression on this data. For example, the inference operation may include one or more of:
[0043] object and/or event detection;
[0044] presence detection and/or movement detection; and
[0045] the detection and/or measurement of certain environmental conditions.
[0046] In some embodiments, the image processing circuit 104 has machine learning capabilities, and for example comprises an artificial neural network that has been trained to implement the inference algorithm. The use of artificial neural networks for performing inferences on image data is well known to those skilled in the art, and will not be described in detail here.
[0048] The image processing circuit 104 for example supplies to the image sensor 102 a first value P.sub.H of an image capture parameter and a second value P.sub.L of the same image capture parameter, the second value being lower than the first.
[0049] The image sensor 102 for example provides two types of image data, a first type captured using the parameter P.sub.H, for example in the form of image frames F.sub.H, and a second type captured using the parameter P.sub.L, for example in the form of image frames F.sub.L. In some embodiments, single frames F.sub.H and F.sub.L are interlaced at the output of the image sensor 102, although in alternative embodiments there could be an interlacing of bursts of two or more frames F.sub.H with bursts of two or more frames F.sub.L.
[0050] In some embodiments, the image processing circuit 104 is configured to output some or all of the captured frames F.sub.H and/or F.sub.L. In some cases, the image processing circuit 104 is configured to output frames F.sub.H′ corresponding to the frames F.sub.H after some image processing, and frames F.sub.L′ corresponding to the frames F.sub.L after some image processing. Alternatively, the raw image frames F.sub.H and/or F.sub.L are outputted by the image processing circuit 104.
[0051] The image processing circuit 104 is also configured to output a result Z of the inference. For example, an inference result Z is provided for each pair of frames F.sub.H and F.sub.L processed together, as described in more detail below. It would also be possible to output a result Z based on two or more successive frames F.sub.H and two or more successive frames F.sub.L. Each result Z may be a single inference for the associated frames F.sub.H and F.sub.L, or a set of inferences Z.sub.1 . . . N for a plurality of regions R.sub.1 . . . N of the associated frames F.sub.H and F.sub.L.
[0052] According to the embodiments described herein, the inference result is based on a region of the frame F.sub.H or based on a corresponding region of the frame F.sub.L, selected based on a distance calculation, as will now be described in more detail.
[0055] In some embodiments, the image processing circuit 104 comprises an auto-bracketing module 302, an inference algorithm 304 and an arbiter 306, which will now be described.
[0056] The auto-bracketing module 302 for example receives the frames F.sub.H and F.sub.L captured by the image sensor 102, and also target levels M.sub.H and M.sub.L for an image quality metric of the frames F.sub.H and F.sub.L respectively.
[0057] In some embodiments, each of the captured frames F.sub.H, F.sub.L comprises one or more regions, corresponding to groups of pixels within the frames. In one example, each frame comprises N regions R.sub.1 . . . N.
[0058] The auto-bracketing module 302 for example generates, for the regions R.sub.1 . . . N of the frames F.sub.H and F.sub.L, corresponding estimates E.sub.H,1 . . . N and E.sub.L,1 . . . N of the image quality metric.
[0059] In some embodiments, in the case that the regions R.sub.1 . . . N have different areas from each other, the calculation of the image quality estimates E.sub.H,1 . . . N and E.sub.L,1 . . . N involves resizing the regions so that these estimates are all based on regions having the same size or resolution in terms of pixels.
[0060] In some embodiments, the image quality estimates E.sub.H,1 . . . N and E.sub.L,1 . . . N are related to frame statistics representing the dynamic range of the pixel values of the frames F.sub.H and F.sub.L calculated independently for each region R.sub.1 . . . N.
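As an illustrative sketch only, the estimate of each region can be computed as the average pixel value of the region, which is the metric given in claim 9; the regular grid of equally sized regions assumed here is one possible layout, not mandated by the text.

```python
import numpy as np

def region_estimates(frame: np.ndarray, grid: tuple) -> np.ndarray:
    """Estimate the image quality metric E_i of each region R_i of a frame,
    here as the average pixel value of the region (see claim 9). The frame
    is assumed to divide evenly into a grid of equal regions."""
    rows, cols = grid
    h, w = frame.shape[0] // rows, frame.shape[1] // cols
    return np.array([frame[r * h:(r + 1) * h, c * w:(c + 1) * w].mean()
                     for r in range(rows) for c in range(cols)])

# E_H,1...N and E_L,1...N for a pair of captured frames F_H, F_L (dummy data).
rng = np.random.default_rng(0)
F_H = rng.integers(0, 256, (120, 160))
F_L = rng.integers(0, 64, (120, 160))
E_H = region_estimates(F_H, (3, 4))  # N = 12 regions
E_L = region_estimates(F_L, (3, 4))
```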
[0061] The image quality estimates E.sub.H,1 . . . N and E.sub.L,1 . . . N are for example used by the module 302 to adjust the parameters P.sub.H and P.sub.L. The adjusted parameters are then for example provided to the image sensor 102 for use in the subsequent image capture operations. An example of this adjustment is described in more detail below in relation with operations 704 and 704′.
[0062] The image quality estimates E.sub.H,1 . . . N and E.sub.L,1 . . . N are also for example used by the module 302 to calculate distances D.sub.H,1 to D.sub.H,N between the image quality estimates E.sub.H,1 to E.sub.H,N respectively and the target level M.sub.H, and to calculate distances D.sub.L,1 to D.sub.L,N between the image quality estimates E.sub.L,1 to E.sub.L,N respectively and the target level M.sub.L. For example, in one embodiment, the distances D.sub.H,1 to D.sub.H,N are calculated using a function dist.sub.H,i(E.sub.H,i, M.sub.H) and the distances D.sub.L,1 to D.sub.L,N are calculated using a function dist.sub.L,i(E.sub.L,i, M.sub.L). In some cases, the distance calculation functions are the same, in other words dist.sub.H,i( . . . , . . . )=dist.sub.L,i( . . . , . . . ). In one example, D.sub.H,i=[abs(E.sub.H,i−M.sub.H)], and D.sub.L,i=[abs(E.sub.L,i−M.sub.L)].
[0063] The inference algorithm 304 for example receives the frames F.sub.H and F.sub.L captured by the image sensor 102, and performs inferences on the regions of these frames to generate inference results Z. For example, the inference algorithm 304 generates inference results Z.sub.H,1 . . . N for the regions R.sub.1 . . . N of the frame F.sub.H, and inference results Z.sub.L,1 . . . N for the regions R.sub.1 . . . N of the frame F.sub.L.
[0064] The distances D.sub.H,1 . . . N and D.sub.L,1 . . . N generated by the auto-bracketing module 302, and the inference results Z.sub.H,1 . . . N and Z.sub.L,1 . . . N, are for example supplied to the arbiter 306. The arbiter 306 is for example configured to select, for each of the regions R.sub.1 . . . N, the inference result associated with the region having the lowest distance. In other words, for each region R.sub.i, with i from 1 to N, the inference result Z.sub.H,i is chosen if D.sub.H,i<D.sub.L,i, and the inference result Z.sub.L,i is chosen if D.sub.L,i≤D.sub.H,i.
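A minimal sketch of this arbitration, assuming the example distance dist(E, M) = abs(E − M) given above; ties go to Z.sub.L,i, following the “≤” of the text.

```python
def arbiter(E_H, E_L, Z_H, Z_L, M_H, M_L):
    """Select, region by region, the inference result of the frame whose
    image quality estimate is closest to its target level."""
    Z = []
    for e_h, e_l, z_h, z_l in zip(E_H, E_L, Z_H, Z_L):
        D_H = abs(e_h - M_H)  # distance D_H,i
        D_L = abs(e_l - M_L)  # distance D_L,i
        Z.append(z_h if D_H < D_L else z_l)  # Z_L,i chosen when D_L,i <= D_H,i
    return Z
```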
[0065] The inference results chosen for each region for example form an output set of results Z.sub.1 . . . N of the image processing circuit 104. Each inference result Z.sub.1 . . . N is for example a scalar value, although depending on the inference operation, it could alternatively be a more complex result, such as a vector. In some embodiments, the inference operation is a classification operation, and the inference result is a confidence level in the given label, corresponding for example to the presence of an object in the given region R. For example, the inference algorithm has been trained such that when the result is positive for a given region, this signifies that an object or other characteristic has been found. Alternatively, rather than the inference operation being a classification operation, it could be a regression operation that estimates a quantity associated with the given region R, for example a count of specific objects present in the region.
[0066] In an alternative embodiment, rather than the inference algorithm 304 systematically calculating all of the inference results Z.sub.H,1 to Z.sub.H,N for each frame F.sub.H, and all of the inference results Z.sub.L,1 to Z.sub.L,N for each frame F.sub.L, the distances D.sub.H,1 . . . N and D.sub.L,1 . . . N could be supplied by the auto-bracketing module 302 to the inference algorithm 304, the inference algorithm 304 being configured to compare the distances for each region, and to perform the inference only on the one of the two corresponding regions having the lowest distance. In other words, for each region R.sub.i, with i from 1 to N, the inference result Z.sub.H,i is calculated if D.sub.H,i<D.sub.L,i, and the inference result Z.sub.L,i is calculated if D.sub.L,i≤D.sub.H,i. The inference result chosen for each region then for example forms, as before, the output set of results Z.sub.1 . . . N of the image processing circuit 104. Thus, in this case, the arbiter 306 can be omitted.
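A sketch of this variant, under the same assumptions as above; `infer` stands in for the inference algorithm 304 and is a placeholder, not an interface from the disclosure. Only one of the two candidate inferences per region is ever evaluated, which is the point of the variant.

```python
def select_then_infer(regions_H, regions_L, E_H, E_L, M_H, M_L, infer):
    """Compare the distances first, then run the inference only on the
    region of the frame having the lowest distance."""
    Z = []
    for r_h, r_l, e_h, e_l in zip(regions_H, regions_L, E_H, E_L):
        if abs(e_h - M_H) < abs(e_l - M_L):
            Z.append(infer(r_h))  # Z_H,i computed; Z_L,i never evaluated
        else:
            Z.append(infer(r_l))  # Z_L,i computed; Z_H,i never evaluated
    return Z
```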
[0067] In some embodiments, some or all of the functions of the image processing circuit 104 are implemented in software, for example by a processor coupled, via a bus 508, to one or more memories storing the corresponding computing instructions and data.
[0068] An input/output interface (I/O INTERFACE) 518 is also for example coupled to the bus 508 and permits communication with other devices such as the image sensor 102 and other hardware of imaging device 100.
[0069] Rather than being implemented in software, it would also be possible that some or all of the functions of the image processing circuit 104 are implemented by one or more dedicated hardware circuits, such as by an ASIC (application specific integrated circuit) or by an FPGA (field-programmable gate array). In the case that the inference algorithm 304 is implemented by an artificial neural network, this network may be implemented in software, in other words by computing instructions and data stored in memories of the circuit 104, or at least partially by dedicated hardware.
[0071] A function 601 (FRAME SEQUENCER) involves controlling, by the image processing circuit 104, the image sensor 102 to generate interlaced frames F.sub.H and F.sub.L based on the image capture parameters P.sub.H and P.sub.L respectively. The frames F.sub.H are generated and processed by a set of operations 602 to 606 (F.sub.H PROCESSING) and the frames F.sub.L are generated and processed by a set of operations 602′ to 606′ (F.sub.L PROCESSING).
[0072] In the operation 602 (F.sub.H FRAME ACQ), a frame F.sub.H is acquired from the image sensor 102.
[0073] Similarly, in the operation 602′ (F.sub.L FRAME ACQ), a frame F.sub.L is acquired from the image sensor 102.
[0074] In operations 603-1 to 603-N (COMPUTE), the estimations E.sub.H,1 . . . N of the image quality metric and the inference values Z.sub.H,1 . . . N are for example generated for the regions R.sub.1 . . . N respectively of the frame F.sub.H.
[0075] Similarly, in operations 603-1′ to 603-N′ (COMPUTE), the estimations E.sub.L,1 . . . N of the image quality metric and the inference values Z.sub.L,1 . . . N are for example generated for the regions R.sub.1 . . . N respectively of the frame F.sub.L.
[0076] In operations 604-1 to 604-N (COMPUTE), the distances D.sub.H,1 . . . N between the estimated image quality metrics E.sub.H,1 . . . N and the target level M.sub.H are for example computed.
[0077] Similarly, in operations 604-1′ to 604-N′ (COMPUTE), the distances D.sub.L,1 . . . N between the estimated image quality metrics E.sub.L,1 . . . N and the target level M.sub.L are for example computed.
[0078] In an operation 605 (COMPUTE), an image quality metric estimate E.sub.H for the frame F.sub.H is for example generated. In some embodiments, the image quality metric E.sub.H is selected as the lowest value among the estimates E.sub.H,1 . . . N.
[0079] Similarly, in an operation 605′, an estimation of the image quality metric E.sub.L for the frame F.sub.L is generated. In some embodiments, the image quality metric E.sub.L is selected as the highest value among the estimates E.sub.L,1 . . . N.
[0080] In an operation 606 (UPDATE), the parameter P.sub.H is for example updated based on the estimation of the image quality metric E.sub.H and on the target level M.sub.H.
[0081] Similarly, in an operation 606′ (UPDATE), the parameter P.sub.L is for example updated based on the estimation of the image quality metric E.sub.L and on the target level M.sub.L.
[0082] In some embodiments, updating the parameters in the operations 606 and 606′ involves the use of a look-up table. Furthermore, in some embodiments, updating the parameters involves forcing the parameters P.sub.L and P.sub.H to be different from each other, with P.sub.H>P.sub.L. In some embodiments, M.sub.H and M.sub.L may be identical.
[0083] In an operation 607 (ARBITER), inference results Z.sub.1 . . . N for the regions R.sub.1 . . . N are for example generated based on the inference results Z.sub.H,1 . . . N and corresponding distances D.sub.H,1 . . . N and on the inference results Z.sub.L,1 . . . N and corresponding distances D.sub.L,1 . . . N.
[0084] In some cases, the image processing circuit 104 is configured to output the frame F.sub.H and/or F.sub.L only in the case of an object detection or other form of significant inference result concerning one of these frames. In such a case, the operation 607 for example involves comparing each of the inference results Z.sub.1 . . . N to a detection threshold th.sub.d, and if Z.sub.i>th.sub.d, the image processing circuit 104 is configured to output the frame F.sub.H and/or F.sub.L in addition to the inference results Z.sub.1 . . . N.
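An illustrative sketch of this gating, not taken from the disclosure: the frames accompany the results only when at least one regional result exceeds the detection threshold th.sub.d.

```python
def gated_output(Z, F_H, F_L, th_d):
    """Forward the frames only in the case of a significant inference result."""
    if any(z_i > th_d for z_i in Z):
        return Z, (F_H, F_L)  # detection: output the frames as well
    return Z, None            # no detection: output only the results Z_1...N
```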
[0086] An operation 701 for example involves capturing the frames F.sub.Hj and F.sub.Lj using the current values of the image capture parameters, in this example exposure times T.sub.Hj and T.sub.Lj.
[0087] In an operation 702 (CALCULATE IMAGE G.sub.Hj), tone-mapping and resolution reduction are used to convert the frame F.sub.Hj into an image G.sub.Hj, which for example uses a log.sub.2 representation of each pixel of the image. For example, the binary code representing the value of each pixel is converted into a representation based on log.sub.2 conversion through a Maximum Significant Bit position operator, e.g. 001XXXXX→101 (5) or 00001XXX→011 (3). An 8-bit coded pixel value is thus encoded with 3 bits.
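An illustrative sketch of this operator: each 8-bit pixel value is replaced by the position of its most significant set bit, which fits in 3 bits and approximates log.sub.2 of the pixel value.

```python
def msb_position(pixel: int) -> int:
    """Position of the most significant set bit of an 8-bit pixel value,
    e.g. 0b001XXXXX -> 5 (coded 101) and 0b00001XXX -> 3 (coded 011)."""
    return pixel.bit_length() - 1 if pixel else 0

assert msb_position(0b00110101) == 5  # 001XXXXX -> 101 (5)
assert msb_position(0b00001110) == 3  # 00001XXX -> 011 (3)
```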
[0088] Similarly, in an operation 702′ (CALCULATE IMAGE G.sub.Lj), tone-mapping and resolution reduction are used to generate an image G.sub.Lj, in a similar manner to the generation of the image G.sub.Hj.
[0089] In an operation 703 (CALCULATE ESTIMATES E.sub.H,1 . . . N), the estimates E.sub.H,1 . . . N of operations 603-1 to 603-N are for example generated based on the image G.sub.Hj. For example, the estimate of the image quality for a region R.sub.i of the frame is calculated based on the sum of the pixels of the region R.sub.i in the image G.sub.Hj, keeping only a certain number of the highest significant bits. For example, the calculation is represented by the following equation:
E.sub.H,i=(Σ.sub.p∈R.sub.i G.sub.Hj(p))>>b
where b is an integer representing the number of bits that is removed from the result of the sum, and >> designates a right bit shift. As one example, the sum is calculated using 12 bits, the 9 least significant bits are removed (b=9), and thus a 3-bit value remains.
[0090] Similarly, in an operation 703′ (CALCULATE ESTIMATES E.sub.L,1 . . . N), the estimates E.sub.L,1 . . . N of operations 603-1′ to 603-N′ are for example generated based on the image G.sub.Lj. For example, the estimate of the image quality for a region R.sub.i of the frame is calculated based on the sum of the pixels of the region R.sub.i in the image G.sub.Lj, for example as represented by the following equation:
E.sub.L,i=(Σ.sub.p∈R.sub.i G.sub.Lj(p))>>b
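The following sketch applies this computation to one region: the tone-mapped pixel values are summed and the b least significant bits of the sum are dropped (b=9 in the example above); the region size is an assumption chosen so that the sum fits in 12 bits.

```python
import numpy as np

def region_estimate(G: np.ndarray, region: tuple, b: int = 9) -> int:
    """Sum the tone-mapped (3-bit) pixels of a region of the image G and
    drop the b least significant bits of the sum, leaving a 3-bit estimate."""
    return int(G[region].sum()) >> b

# One 16x32 region: 512 pixels of at most 7 give a sum below 2**12, so
# removing 9 bits (b = 9) leaves a 3-bit value, as in the example above.
G_Hj = np.random.default_rng(1).integers(0, 8, (120, 160))
E_Hi = region_estimate(G_Hj, (slice(0, 16), slice(0, 32)))
```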
[0091] An operation 704 (GENERATE T.sub.H,(j+1) BASED ON min(E.sub.H,1 . . . N) AND ON M.sub.H) for example involves generating a new exposure time T.sub.H,(j+1), for example according to the following equation:
T.sub.H,(j+1)=a·q^(k.sub.H,(j+1))
where k.sub.H is an exposure time index, a is a minimum integration time, and q is for example the ratio between two successive integration times for successive indexes k.sub.H. For example, a simplification can be achieved for certain choices of q, the index k.sub.H then for example being updated as follows:
k.sub.H,(j+1)=k.sub.Hj+M.sub.H−min(E.sub.H,i)
[0092] In some embodiments, the parameter T.sub.H,(j+1) is updated based on the variable k.sub.H using a look-up table. Furthermore, in some embodiments, in order to speed up the convergence time, in the case that the estimate E.sub.H,i is the result of a linear operation, such as a mean value, and thus without a tone-mapping stage, the mechanism for updating the index k.sub.H could be based on a feedback control. For example, the index k.sub.H,(j+1) could be updated based on the equation k.sub.H,(j+1)=k.sub.H,j+[log.sub.q(M.sub.H)−log.sub.q(E.sub.H)], where [.] is a function bringing the result into the integer domain, such as a rounding operation, threshold function, etc.
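An illustrative sketch of this update, assuming q=2 and arbitrary values of a and of the table size, none of which are specified above: the index moves by the gap between the target M.sub.H and the minimum regional estimate, and a look-up table maps the index to the integration time T.sub.H=a·q^k.

```python
A_MIN_US = 10  # assumed minimum integration time a, in microseconds
Q = 2          # assumed ratio q between successive integration times
K_MAX = 12     # assumed number of entries in the look-up table

# Look-up table implementing T = a * q**k for k = 0 .. K_MAX - 1.
EXPOSURE_LUT_US = [A_MIN_US * Q**k for k in range(K_MAX)]

def update_k_H(k_H: int, M_H: int, E_H: list) -> int:
    """k_H,(j+1) = k_H,j + M_H - min(E_H,i), clamped to the table range."""
    return max(0, min(K_MAX - 1, k_H + M_H - min(E_H)))

# One update step: the darkest region is 2 levels below target, so the index,
# and hence the integration time, is raised by two steps.
k_H = update_k_H(k_H=5, M_H=6, E_H=[4, 5, 6])  # -> 7
T_H_next_us = EXPOSURE_LUT_US[k_H]             # 10 * 2**7 = 1280 us
```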
[0093] Similarly, an operation 704′ (GENERATE T.sub.L,(j+1) BASED ON max(E.sub.L,1 . . . N) AND ON M.sub.L) for example involves generating a new exposure time T.sub.L,(j+1), for example according to the following equation:
T.sub.L,(j+1)=a·q^(k.sub.L,(j+1))
where k.sub.L is an exposure time index, a is the minimum integration time as before, and q is for example, as before, the ratio between two successive integration times for successive indexes k.sub.L. The index k.sub.L is for example updated as follows:
k.sub.L,(j+1)=k.sub.L,j+M.sub.L−max(E.sub.L,i) [Math 6]
[0094] In some embodiments, the parameter T.sub.L,(j+1) is updated based on the variable k.sub.L using a look-up table. Furthermore, in some embodiments, in order to speed up the convergence time, in the case that the estimate E.sub.L,i is the result of a linear operation, such as a mean value, the mechanism for updating the index k.sub.L could be based on a feedback control. For example, the index k.sub.L,(j+1) could be updated based on the equation k.sub.L,(j+1)=k.sub.L,j+[log.sub.q(M.sub.L)−log.sub.q(E.sub.L)], where [.] is a function bringing the result into the integer domain, such as a rounding operation, threshold function, etc.
[0095] It will be apparent to those skilled in the art that the above example of how to update the parameters based on the target levels M.sub.H and M.sub.L and based on the estimates E.sub.H,1 . . . N and E.sub.L,1 . . . N is merely one example, and that different calculations could be used.
[0096] For example, in some embodiments, the parameters T.sub.H and T.sub.L are modified in steps of fixed size ΔT, equal for example to the smallest step size, such that the parameters are modified incrementally over many cycles. According to one example, T.sub.H,(j+1)=T.sub.Hj+ΔT if E.sub.H<M.sub.H, or T.sub.H,(j+1)=T.sub.Hj−ΔT if E.sub.H>M.sub.H, and T.sub.L,(j+1)=T.sub.Lj+ΔT if E.sub.L<M.sub.L, or T.sub.L,(j+1)=T.sub.Lj−ΔT if E.sub.L>M.sub.L.
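A sketch of this fixed-step alternative; the step ΔT and the incremental convergence over many cycles are as described above, everything else is illustrative.

```python
def step_update(T: float, E: float, M: float, dT: float) -> float:
    """Move the exposure time one fixed step towards the target level M."""
    if E < M:
        return T + dT  # estimate below target: lengthen the exposure
    if E > M:
        return T - dT  # estimate above target: shorten the exposure
    return T           # on target: leave the exposure unchanged
```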
[0097] After operations 704 and 704′, further images are for example captured using the updated parameters T.sub.H,(j+1) and T.sub.L,(j+1). For example, in operations 705 and 705′ after the operations 704 and 704′ respectively, j is incremented, and then the method returns to the operation 701 in which the new frames are captured using the updated parameters. Furthermore, also after operations 704 and 704′, an inference result is for example generated for each region R.sub.1 . . . N of the frames in operations 706 to 713, as will now be described in more detail.
[0098] The operation 706 (CALCULATE INFERENCES Z.sub.H,1 . . . N) corresponds to the inference calculation of the operations 603-1 to 603-N described above.
[0099] Similarly, the operation 706′ (CALCULATE INFERENCES Z.sub.L,1 . . . N) corresponds to the inference calculation of the operations 603-1′ to 603-N′ described above.
[0100] The operation 707 (CALCULATE DISTANCES D.sub.H,1 . . . N) corresponds to the distance calculation of operations 604-1 to 604-N described above.
[0101] Similarly, the operation 707′ (CALCULATE DISTANCES D.sub.L,1 . . . N) corresponds to the distance calculation of operations 604-1′ to 604-N′ described above.
[0102] After operations 707 and 707′, arbitration is performed in operations 708 to 713, corresponding to the operation 607 described above.
[0103] It will be noted that updating the parameter T.sub.H based on the minimum region-based estimator E.sub.H and the parameter T.sub.L based on the maximum region-based estimator E.sub.L has the advantage of intrinsically leading to different parameter values.
[0105] An advantage of the embodiments described herein is that an inference operation can be applied to a best case of two different captured images based on a relatively simple distance calculation.
[0106] Further, in the case that the image capture parameters P.sub.H and P.sub.L are exposure times T.sub.H and T.sub.L, an advantage is that the embodiments described herein provide a solution of relatively low complexity for performing inference operations on low dynamic range images, with a performance close to that of an inference performed on a corresponding high dynamic range image. Indeed, while an alternative solution could be to capture high dynamic range images, or to merge two low dynamic range images in order to generate a high dynamic range image, processing such images would be very complex: the inference algorithm would have to be designed or trained to process such images, and its size and complexity would be very high in view of the high number of bits per pixel. By applying the same inference algorithm to either or both of two frames captured with different image capture parameters, and selecting the inference result based on the region of the two frames that best matches a target image quality, the inference algorithm remains relatively simple.
[0107] Various embodiments and variants have been described. Those skilled in the art will understand that certain features of these embodiments can be combined and other variants will readily occur to those skilled in the art. For example, it will be apparent to those skilled in the art that:
[0108] while embodiments have been described in which the frames F.sub.H and F.sub.L are captured by the same image sensor, it would also be possible to capture the frames F.sub.H with one image sensor, and to capture the frames F.sub.L with another image sensor;
[0109] the frames F.sub.H and F.sub.L that are processed as a pair are for example images captured at relatively close time instances, but these frames are not necessarily sequential frames from the image sensor;
[0110] in some embodiments, in the case that the frames F.sub.H and F.sub.L have different integration times, it would be possible to capture the frames F.sub.L and F.sub.H sequentially without resetting the pixels of the image sensor between the frames. For example, the frame F.sub.L is read in a non-destructive manner while the pixels continue to integrate, and then the frame F.sub.H is captured after a further integration period;
[0111] while the estimates E.sub.H and E.sub.L have been described as being based on the minimum among the estimates E.sub.H,1 . . . N and the maximum among the estimates E.sub.L,1 . . . N, in alternative embodiments the estimates E.sub.H and E.sub.L could be calculated based on more than one of the regional estimates E.sub.H,1 . . . N and E.sub.L,1 . . . N respectively;
[0112] while embodiments have been described in which there are two types of frames F.sub.L and F.sub.H that are captured, it would also be possible to apply the teaching described herein to more than two types of frames, an additional medium frame F.sub.M for example being added, captured based on a medium parameter P.sub.M.
[0113] Finally, the practical implementation of the embodiments and variants described herein is within the capabilities of those skilled in the art based on the functional description provided hereinabove.