Image processing device that generates an image from pixels with different exposure times
09800806 · 2017-10-24
CPC classification
H04N25/533
ELECTRICITY
H04N25/75
ELECTRICITY
Abstract
An image processing apparatus includes an intermediate image generating unit configured to receive an image which has been shot with differing exposure times set by region, generate a plurality of exposure pattern images corresponding to the differing exposure times based on the input image, and generate a plurality of timing images which are difference images of the plurality of exposure pattern images; and a distortion correction processing unit configured to generate a corrected image, equivalent to an image exposed at a predetermined exposure time, by synthesis processing of the plurality of timing images.
Claims
1. An imaging system, comprising: an imaging apparatus configured to have a plurality of pixels which are arranged in a plurality of pixel groups, wherein each pixel group of the plurality of pixel groups includes at least two pixels; and one or more processors configured to: generate a plurality of exposure pattern images based on each pixel signal from each pixel group of the plurality of pixel groups, wherein each pixel group of the plurality of pixel groups has different exposure times, and wherein the plurality of exposure pattern images correspond to the different exposure times; and generate a corrected image based on synthesis of the plurality of exposure pattern images that have different exposure start times within a set interval.
2. The imaging system according to claim 1, wherein each pixel of the plurality of pixels includes a photoelectric conversion element.
3. The imaging system according to claim 1, wherein each pixel of the plurality of pixels includes a transistor element.
4. The imaging system according to claim 3, wherein the transistor element is an amplifying transistor.
5. The imaging system according to claim 3, wherein the transistor element is a select transistor.
6. The imaging system according to claim 1, wherein the one or more processors are further configured to: generate an image based on each pixel group of the plurality of pixel groups; input the image which has been shot with different exposure times set by region; and generate the plurality of exposure pattern images that correspond to the different exposure times based on the input image.
7. The imaging system according to claim 6, wherein the input image is shot with a different exposure start time for each row of a plurality of rows and a different exposure duration for each pixel group of the plurality of pixel groups.
8. The imaging system according to claim 1, wherein the one or more processors are further configured to generate, from the plurality of exposure pattern images, a plurality of difference images.
9. The imaging system according to claim 8, wherein each difference image of the plurality of difference images comprises a plurality of pixels that have pixel values that are a difference between corresponding pixels of two or more of the plurality of exposure pattern images.
10. The imaging system according to claim 8, wherein the one or more processors are further configured to generate the corrected image based on synthesis of the plurality of difference images that have different exposure start times within the set interval.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF EMBODIMENTS
(30) Details of the image processing apparatus and image processing method and program according to the present disclosure will be described below with reference to the diagrams. Description will be given in the following order.
(31) 1. Configuration Example of Image Processing Apparatus
(32) 2. Details of Configuration and Processing of Distortion Correcting Unit
(33) 2-1. Overall Configuration and Processing of Distortion Correcting Unit
(34) 2-2. Processing of Intermediate Image Generating Unit
(35) 2-3. Processing of Distortion Correction Processing Unit
(36) 3. Other Embodiments
(37) 3-1. Modification of Exposure Control Pattern
(38) 3-2. Modification of Exposure Time Control
(39) 3-3. Modification of Frame Buffer Setting
(40) 3-4. Modification of Distortion Correcting Processing
(41) 4. Shared Configuration Example with Other Image Processing
(42) 5. Description of Advantages of Processing of the Present Disclosure
1. Configuration Example of Image Processing Apparatus
(43) First, a configuration example of the imaging apparatus serving as an embodiment of an image processing apparatus will be described with reference to
(44) Incident light enters the imaging device (CMOS image sensor) 202 via the optical lens 201. The imaging device 202 is an imaging device wherein sequential readout processing by region, e.g. by row, according to a focal-plane operation, is executed.
(45) The imaging device 202 accumulates charge in its pixels according to the incident light, and outputs a photoelectric conversion signal based on the accumulated charge to the distortion correcting unit 203 as image data.
(46) The distortion correcting unit 203 inputs the output image signal from the imaging device 202, and corrects the image distortion resulting from a focal-plane operation. This processing will be described in detail later. The distortion correcting unit 203 outputs the corrected image 204 generated by distortion correction to the signal processing unit 205.
(47) The signal processing unit 205 receives the corrected image 204 that has been subjected to distortion correction, performs predetermined image processing on the input image, such as white balance adjustment processing and gamma correction processing, for example, and generates and outputs an output image 206.
(48) The imaging apparatus is an imaging apparatus that can photograph moving or still images, and the imaging device (CMOS image sensor) 202 has a similar configuration as that described above with reference to
(49) For example, exposure time control by image region is executed with a certain configuration described in Japanese Unexamined Patent Application Publication Nos. 2006-253876 and 2006-542337, and Japanese Patent Application No. 2008-147818 described above, or another configuration according to the related art.
(50) Note that with the examples described below, as an example of the imaging device 202, an example using an imaging device having the configuration shown in
(51) The imaging device 202 has an RGB array as shown in
(52) (a) Pixel block with the longest exposure time (Exposure Pattern 1)
(53) (b) Pixel block with the second longest exposure time (Exposure Pattern 2)
(54) (c) Pixel block with the third longest exposure time (Exposure Pattern 3), and
(55) (d) Pixel block with the shortest exposure time (Exposure Pattern 4).
(56) Pixel blocks with such four patterns of exposure times are set so as to be adjacent, as shown in
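The four-pattern block arrangement above can be illustrated with a short sketch. The concrete 2×2-pixel-block layout below is an assumption for illustration (the actual arrangement follows the referenced figure), and `exposure_pattern_map` is a hypothetical helper name:

```python
import numpy as np

def exposure_pattern_map(height, width):
    """Assign one of four exposure patterns (1..4) to each pixel.

    Hypothetical layout: 2x2 pixel blocks tile the sensor, and the four
    patterns repeat in a 2x2 arrangement of blocks, so that blocks with
    the four exposure times are set adjacent to one another.
    """
    pattern = np.empty((height, width), dtype=np.uint8)
    for y in range(height):
        for x in range(width):
            block_y = (y // 2) % 2   # block row within the repeating unit
            block_x = (x // 2) % 2   # block column within the repeating unit
            pattern[y, x] = 1 + block_y * 2 + block_x
    return pattern
```

For an 8×8 sensor this yields pattern 1 at pixel (0, 0), pattern 2 at (0, 2), pattern 3 at (2, 0), and pattern 4 at (2, 2), repeating every four pixels.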
2. Details of Configuration and Processing of Distortion Correcting Unit
(57) 2-1. Overall Configuration and Processing of Distortion Correcting Unit
(58) Next, the configuration and processing of the distortion correcting unit 203 will be described with reference to
(59) Note that processing to output one image using continuously shot images can also be performed at the time of still image photography, and so can be applied when shooting either still images or moving pictures. If the three continuously shot images are frames N−1, N, and N+1, and the sensor output image 211, which is the newest shot image, is frame N+1, then the frame images are stored with frame N in the frame buffer 212 and frame N−1 in the frame buffer 213.
(60) The intermediate image generating units 214 through 216 each generate multiple intermediate images as to the sensor output image 211 and the stored images in the frame buffers 212 and 213. This intermediate image generating processing will be described later.
(61) The multiple intermediate images generated by the intermediate image generating units 214 through 216 are input into the distortion correction processing unit 218. The distortion correction processing unit 218 receives the multiple intermediate images generated by the intermediate image generating units 214 through 216, along with scanning line position information 217 input from the control unit 207, corrects the image distortion from a focal-plane shutter, and generates the corrected output image 219. This processing will be described later.
(62) 2-2. Processing of Intermediate Image Generating Unit
(63) Next, processing of the intermediate image generating unit configured within the distortion correcting unit 203 shown in
(64) The intermediate image generating units 214 through 216 perform processing as to images that have each been consecutively shot. Specifically, the intermediate image generating unit 214 executes processing as to the sensor output image (frame N+1), the intermediate image generating unit 215 executes processing as to the stored image (frame N) in the frame buffer 212, and the intermediate image generating unit 216 executes processing as to the stored image (frame N−1) in the frame buffer 213.
(65) The processing executed by these three intermediate image generating units 214 through 216 differs only in the images to be processed, and is otherwise basically the same. Accordingly, the processing of the intermediate image generating unit 214 will be described below as a representative example.
(66) A sensor output image 211 is input into the intermediate image generating unit 214. The sensor output image 211 is an image having been shot, with exposure patterns (exposure patterns 1 through 4) that differ by pixel region, i.e., by setting four types of different exposure times, as described above with reference to
(67) First, the interpolating processing unit 222 shown in
(68) (a) Exposure pattern image 223 that is equivalent to an image shot with exposure pattern 1 having the longest exposure time,
(69) (b) Exposure pattern image 224 that is equivalent to an image shot with exposure pattern 2 having the second longest exposure time,
(70) (c) Exposure pattern image 225 that is equivalent to an image shot with exposure pattern 3 having the third longest exposure time, and
(71) (d) Exposure pattern image 226 that is equivalent to an image shot with exposure pattern 4 having the shortest exposure time; these four exposure pattern images are generated.
(72) The interpolation processing executed in exposure pattern image generating by the interpolation processing unit 222 is an interpolation method by a filter such as linear interpolation or the like, or a method wherein edge direction detection is performed and interpolation is based thereupon, or the like. The four exposure pattern images 223 through 226 generated by the interpolation processing unit 222 are input into the three difference image generating units 227 through 229, paired such that each pair has the least difference in exposure time.
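The per-pattern interpolation can be sketched as follows, using a simple windowed-mean fill-in as a stand-in for the linear or edge-direction interpolation mentioned above; the function and parameter names are illustrative, not from the source:

```python
import numpy as np

def make_exposure_pattern_image(raw, pattern_map, pattern):
    """Build one full-resolution exposure pattern image by interpolation.

    Pixels actually exposed with `pattern` keep their raw value; every
    other pixel is filled with the mean of the valid pixels in a 5x5
    surrounding window (a minimal stand-in for the filter-based or
    edge-directed interpolation described in the text).
    """
    h, w = raw.shape
    out = np.zeros_like(raw, dtype=float)
    valid = pattern_map == pattern
    for y in range(h):
        for x in range(w):
            if valid[y, x]:
                out[y, x] = raw[y, x]
            else:
                y0, y1 = max(0, y - 2), min(h, y + 3)
                x0, x1 = max(0, x - 2), min(w, x + 3)
                window = valid[y0:y1, x0:x1]
                out[y, x] = raw[y0:y1, x0:x1][window].mean()
    return out
```

Running this once per exposure pattern yields the four full-resolution exposure pattern images 223 through 226.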
(73) The processing of the difference image generating units 227 through 229 will be described with reference to
(74) The difference image generating units 227 through 229 each input a pair having little difference in exposure time from the four exposure pattern images and compute difference pixel values of corresponding pixels, and generate the images made up of the difference pixel values thereof as timing images 230 through 232.
(75) The difference image generating unit 227 generates a first timing image 230 as below. The first timing image 230 is an image made up of the difference between a pixel value of the first exposure pattern image 223 which is equivalent to an image shot with the exposure pattern 1 which is the longest exposure time, and a pixel value of the second exposure pattern image 224 which is equivalent to an image shot with the exposure pattern 2 which is the second longest exposure time.
(76) The difference image generating unit 228 generates a second timing image 231 as below. The second timing image 231 is an image made up of the difference between a pixel value of the second exposure pattern image 224 which is equivalent to an image shot with the exposure pattern 2 which is the second longest exposure time, and a pixel value of the third exposure pattern image 225 which is equivalent to an image shot with the exposure pattern 3 which is the third longest exposure time.
(77) The difference image generating unit 229 generates a third timing image 232 as below. The third timing image 232 is an image made up of the difference between a pixel value of the third exposure pattern image 225 which is equivalent to an image shot with the exposure pattern 3 which is the third longest exposure time, and a pixel value of the fourth exposure pattern image 226 which is equivalent to an image shot with the exposure pattern 4 which is the shortest exposure time.
(78) Also, the fourth timing image 233 uses the fourth exposure pattern image 226 without change, which is equivalent to the image shot with the exposure pattern 4 which is the shortest exposure time. The four difference images correspond to images shot with the settings of the exposure times as shown in
(79) (1) First timing image 230 is an image shot at exposure time T (t0 to t1),
(80) (2) Second timing image 231 is an image shot at exposure time T (t1 to t2),
(81) (3) Third timing image 232 is an image shot at exposure time T (t2 to t3), and
(82) (4) Fourth timing image 233 is an image shot at exposure time T (t3 to t4).
(83) Thus, the four timing images (difference images) 230 through 233 are equivalent to four consecutively shot images at the same exposure time (T), wherein the shooting timing has shifted by T each time.
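The relation between the four exposure pattern images and the four timing images described above can be summarized in a minimal sketch (names are illustrative):

```python
import numpy as np

def timing_images(e1, e2, e3, e4):
    """Derive the four timing images from the four exposure pattern images.

    e1..e4 are the exposure pattern images for the longest through the
    shortest exposure time. Each of the first three timing images is the
    difference of the two pattern images with adjacent exposure times;
    the shortest-exposure image is used as the fourth timing image
    without change.
    """
    t1 = e1 - e2   # light gathered from t0 to t1
    t2 = e2 - e3   # light gathered from t1 to t2
    t3 = e3 - e4   # light gathered from t2 to t3
    t4 = e4        # light gathered from t3 to t4
    return t1, t2, t3, t4
```

With ideal pattern images whose values are proportional to exposure time, the four results are equal, matching the description of four consecutive shots at the same exposure time T.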
(84) Thus, each of the intermediate image generating units 214 through 216 shown in
(85) The intermediate image generating unit 215 generates four timing images based on the stored image (frame N) of the frame buffer 212. The intermediate image generating unit 216 generates four timing images based on the stored image (frame N−1) of the frame buffer 213.
(86) 2-3. Processing of Distortion Correction Processing Unit
(87) Next, processing of the distortion correction processing unit 218 that is configured within the distortion correcting unit 203 shown in
(88) The distortion correction processing here is described with reference to
(89) The images of frames N−1, N, and N+1 which are the three consecutively shot images correspond to images that are to be processed with the three intermediate image generating units 214 through 216 of the distortion correcting unit 203 shown in
(90) The intermediate image generating unit 215 generates four timing images based on the stored image (frame N) of the frame buffer 212. The intermediate image generating unit 216 generates four timing images based on the stored image (frame N−1) of the frame buffer 213.
(91)
(92) The four timing images of frame N are generated by the intermediate image generating unit 215 based on the stored image (frame N) of the frame buffer 212. The four timing images of frame N+1 are generated by the intermediate image generating unit 214 based on the sensor output image (frame N+1).
(93) In
(94) An example of processing executed with the distortion correction processing unit 218 will be described with reference to
(95) For example, processing to generate an image with the output timing (Tx) shown in
(96) Processing to compute the pixel values of the pixels on the row of the upper edge making up the image at the output timing (Tx) shown in this
(97) On the other hand, the shooting timing of timing images A and E is partially overlapping with the output timing (Tx) of the image planned to be generated and partially not overlapping. Accordingly, blending processing is performed which multiplies these by a predetermined weighting coefficient.
(98) The pixel values of the pixels on the row of the upper edge (OUT) making up the image at the output timing (Tx) shown in
OUT=a×A+B+C+D+(1−a)×E (Expression 1)
(99) where A through E are pixel values at the same pixel position in the various timing images, i.e. at the corresponding pixel position, and a is a weighting coefficient.
(100) Note that the weighting coefficient a sets the value that is equivalent to the overlapping ratio of the output timing (Tx) shown in
(101) In the above (Expression 1), the timing images B, C, and D contribute their pixel values at 100%, without blending and without change, and so blurring from interpolation processing can be minimized.
(102) As shown in
(103) Pixel value computing processing for rows on the upper edge has been described above, but for example in the case of computing pixel values of an image at output timing (Tx) shown in
(104) The pixel value (OUT) of the pixel on the lower edge row can be found with (Expression 2) below
OUT=P+Q+R+S (Expression 2)
(105) where P through S are pixel values at the same pixel position in the various timing images, i.e. the pixel values at corresponding pixel positions.
(106) In this case, the total of the exposure timings of the four timing images exactly matches the output timing (Tx), and no partially overlapping timing image has to be used. The pixel values of the output image for other rows can similarly be computed with pixel value adding processing of multiple timing images using such a pixel value (OUT) computing expression.
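Both (Expression 1) and (Expression 2) can be captured in one short sketch, since setting the weighting coefficient a to 1 removes the fifth image and reduces the blend to the plain four-image sum used for rows whose timing images exactly cover the output interval:

```python
def synthesize_row(a, imgs):
    """Blend timing-image pixel values per (Expression 1):

        OUT = a*A + B + C + D + (1 - a)*E

    where a is the ratio by which the first timing image overlaps the
    output timing (Tx). With a = 1 this reduces to the four-image sum
    OUT = A + B + C + D, the form of (Expression 2). Works on scalars
    or on whole image rows (e.g. numpy arrays).
    """
    A, B, C, D, E = imgs
    return a * A + B + C + D + (1 - a) * E
```

The same function is applied row by row with a row-dependent weighting coefficient, since the overlap ratio changes from the upper edge to the lower edge of the image.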
(107) Thus, the distortion correction processing unit 218 synthesizes multiple timing images to compute the pixel values of the corrected images, and generates and outputs the corrected image 204.
(108) As we can see from the processing described with reference to
(109) Consequently, the output image can be generated as an image shot with settings of roughly the same exposure time (Tx) for all of the rows from the upper edge row to the lower edge row.
(110) Accordingly, the output image generated with the present method has very little occurrence of image distortion resulting from focal-plane operations such as described above with reference to
(111) Consequently, the corrected image 204 generated by the distortion correcting unit 203 becomes an image with suppressed image distortion.
(112) As shown in
3. Other Embodiments
(113) Next, other embodiments will be described.
(114) 3-1. Modification of Exposure Control Pattern
(115) In the above-described examples, an example is described wherein setting is performed for four exposure times in increments of pixel blocks of a rectangular region, with reference to
(116) The setting patterns for exposure regions and exposure times are not restricted to such settings, and various other settings can be made. Regions serving as control increments of exposure time in the input image may be set variously, such as pixel blocks made up of multiple pixels, rows, or individual pixels.
(117) Note that in the above-described example, the number of exposure control patterns is four, but the present disclosure can be implemented if there are two or more patterns. An example of changing two exposure patterns is shown in
(118) Also, the pixel array described with reference to
(119) The pixel array shown in
(120) (1) First timing image=exposure pattern 1−exposure pattern 2
(121) (2) Second timing image=exposure pattern 2
(122) These two timing images can be generated, and a corrected image with reduced distortion can be generated by synthesizing processing of the timing images, similar to the image generating processing described with reference to
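The two-pattern variant above is a degenerate case of the earlier difference computation, and can be sketched as:

```python
def two_pattern_timing_images(e_long, e_short):
    """Two-exposure-pattern modification described in the text:

        first timing image  = exposure pattern 1 - exposure pattern 2
        second timing image = exposure pattern 2 (used without change)

    e_long and e_short are the exposure pattern images for the longer
    and shorter exposure times; works on scalars or array images.
    """
    return e_long - e_short, e_short
```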
(123) 3-2. Modification of Exposure Time Control
(124) In the above-described example, a setting wherein the exposure start is shifted in region increments and the readout timings are aligned was described as the shooting processing for obtaining images having different exposure times, but the exposure time control in region increments can be set in various ways.
(125) For example, as shown in
(126) 3-3. Modification of Frame Buffer Setting
(127) With the configuration described above with reference to
(128) Note that in the case of using this configuration, the distortion correction processing unit 218 generates a distortion corrected image using a timing image generated by applying only two consecutive frames as shown in
(129) Further, the present disclosure may have a configuration wherein a frame buffer is not used, as shown in
(130) In this case, the setting of the exposure time period of an image output as shown in
(131) Further, the distortion correcting unit 203 can also generate a signal with a frame rate higher than the frame rate of the input image. That is to say, the intermediate image generating unit generates timing images as consecutively shot images having exposure times shorter than that of the input image, and an output unit outputs the timing images generated by the intermediate image generating unit as a high frame rate image, thereby enabling a signal with a frame rate higher than that of the input image to be generated and output.
(132) Specifically, for example, as shown in
(133) 3-4. Modification of Distortion Correction Processing
(134) With the examples described above, an example using
(135) For example, motion is detected from an image A and an image B which are two adjacent timing images shown in
(136) These images are used to compute the pixel value (OUT) of the corrected image, according to (Expression 3) shown below.
OUT=a×A′+B+C+D+(1−a)×E′ (Expression 3)
(137) where A′, B, C, D, E′ are pixel values at the same pixel positions of the various timing images or motion compensation timing image, i.e. of corresponding pixel positions, and a is a weighting coefficient.
(138) Note that the weighting coefficient a sets a value that is equivalent to an overlapping rate between the output timing (Tx) shown in
4. Shared Configuration Example with Other Image Processing
(139) The present disclosure includes a method to reduce distortion resulting from focal-plane shutter operations, but by using other processing also at the same time, image quality can be further improved. Several of such configuration examples will be described.
(140) In the example described above, in order to correct focal-plane distortion as shown in
(141) Such processing does not cause a problem for images that are blurred due to motion that causes distortion, but a problem occurs in that, when shooting a subject that is completely still, the resolution deteriorates.
(142) A configuration example of the distortion correcting unit 203 to reduce focal-plane distortion while preventing such resolution deterioration will be described with reference to
(143) The sensor output image 211 is an image with exposure times that differ by pixel. A gain compensation processing unit 241 performs processing to multiply the pixel values by a gain according to the exposure time, in region increments of the sensor output image 211.
(144) For a still image, an image having no resolution deterioration, the same as with a normal Bayer array, can be obtained by executing the gain compensation processing. However, the output of the gain compensation processing unit 241 has distortion, and the amount of motion blur differs for each exposure control pattern, whereby in the case there is motion, image breakdown occurs.
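The gain compensation step can be sketched as follows; the function name and the per-pixel exposure map are assumptions for illustration:

```python
import numpy as np

def gain_compensate(raw, exposure_map, t_ref):
    """Multiply each pixel by a gain that normalizes its exposure time.

    `exposure_map` holds each pixel's exposure time and `t_ref` is a
    reference exposure time. For a still subject this recovers an image
    equivalent to one uniformly exposed for t_ref, which is the
    resolution-preserving path described in the text.
    """
    return raw * (t_ref / exposure_map)
```

For example, pixels exposed for 4T, 3T, 2T, and T that received values proportional to their exposure times all map to the same value after compensation.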
(145) Therefore, motion is detected by pixel or by area with the motion detecting unit 242, and the motion adapting processing unit 243 performs selection processing such that the output image of the distortion correction processing unit 218 is used in pixel regions having motion and the output image of the gain compensation processing unit 241 is used in locations having no motion, or performs blending processing according to the amount of motion. Thus, the resolution of image regions having no motion is preserved as before, and locations having motion have reduced distortion.
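The motion-adaptive selection or blending can be sketched as below; the per-pixel motion measure in [0, 1] and all names are illustrative assumptions:

```python
import numpy as np

def motion_adaptive_merge(corrected, gain_compensated, motion_amount):
    """Blend the two correction paths by detected motion.

    `corrected` stands in for the output of the distortion correction
    processing unit, `gain_compensated` for the output of the gain
    compensation processing unit, and `motion_amount` for a per-pixel
    value in [0, 1] from the motion detecting unit. Where motion is 1
    the distortion-corrected image is used; where it is 0 the
    full-resolution gain-compensated image is used; in between the two
    are blended, covering both the selection and blending cases.
    """
    w = np.clip(motion_amount, 0.0, 1.0)
    return w * corrected + (1.0 - w) * gain_compensated
```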
(146) Further, portions having no motion use a frame memory to perform noise reduction processing in the temporal direction (also called three-dimensional NR or 3DNR), thereby reducing noise in the still image portions. Further, a configuration may be made which adds a pixel value saturation countermeasure processing.
(147) In
(148) For example, in the case that the corresponding pixel values of the two images applied to generate the timing image (difference image) are originally 1200 and 800,
Difference image pixel value=1200−800=400
(149) holds true. However, in the case that the output of the sensor is 10 bits, the sensor can only output pixel values 0 through 1023. In this case the pixel value 1200 mentioned above is output as pixel value 1023, so
Difference image pixel value=1023−800=223
(150) holds true, and a timing image (difference image) having a pixel value smaller than the actual value is generated.
(151) As a saturation countermeasure, the simplest is to perform clipping processing with values that differ according to exposure patterns.
(152) As shown in
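One possible form of the clipping countermeasure is sketched below. The exact clip levels, scaled by the exposure-time ratio of the two patterns, are an assumption for illustration, not a value taken from the source:

```python
import numpy as np

def clip_for_difference(e_long, e_short, ratio, full_scale=1023.0):
    """Saturation countermeasure by clipping with pattern-dependent values.

    When the longer-exposure pixel saturates at the sensor's full scale
    (e.g. 1023 for a 10-bit output), a plain difference against the
    shorter-exposure value comes out too small. Here the shorter-exposure
    image is clipped at full_scale / ratio, where `ratio` is the
    (long / short) exposure-time ratio, so the two values stay consistent
    before the difference is taken. These clip levels are illustrative.
    """
    e_short = np.minimum(e_short, full_scale / ratio)  # short image clipped so that
    e_long = np.minimum(e_long, full_scale)            # ratio * short stays in range
    return e_long - e_short
```

Unsaturated pixels pass through unchanged, while saturated pixels produce a consistent, if weakened, difference rather than an arbitrarily small one.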
(153) Also, with another method, saturation is detected, and a difference image is not generated for the saturated portion. For this saturated portion, pixel value setting is performed by a dynamic range expanding method that is disclosed in PTL 3, for example. With such processing, for example, the distortion reduction effect is weakened, but the dynamic range can be expanded.
5. Description of Advantages of Processing of the Present Disclosure
(154) Next, advantages of processing according to the present disclosure will be described.
(155) In order to show the advantages of the present disclosure, focal-plane distortion will be described with reference to
(156) As a comparison with the related art, a focal-plane distortion reduction effect with a method shown in Japanese Unexamined Patent Application Publication No. 2006-148496 (or
(157) The distortion reduction processing result from the Japanese Unexamined Patent Application Publication No. 2006-148496 is shown in
(158) The focal-plane distortion reduction effect according to the present disclosure will be shown. As described with reference to
(159) For example, a specific example of multiple timing images (difference images) is an image such as that shown in
(160) As compared to the image shown in
(161) As shown above, according to the present disclosure, an image set to exposure times that differ by region is shot with an imaging device, and with processing applying the shot image, images subjected to focal-plane distortion correction can be generated.
(162) In order to obtain a similar effect with a method according to related art, a sensor has to operate at a high speed, but not with the present disclosure, so demerits such as increased power consumption that accompany an increase in operation speed of the imaging device do not occur. Also, for a similar reason, the storage apparatus serving as a frame buffer does not have to be capable of high speed operations, and power consumption and apparatus cost can be reduced.
(163) Further, complicated computations such as motion vector calculating processing and the like do not have to be performed, so reduction of computing load and high speed processing are realized. Note that even in the case wherein the present disclosure and the motion vector are configured so as to share the computing processing, the time difference between timing images that are to be subjected to motion vector computing is small, and motion vector computing processing can be executed with high precision, whereby a highly precise distortion reduction effect can be obtained.
(164) The present disclosure has been described above with reference to specific embodiments. However, it is obvious that one skilled in the art can make modifications and substitutions to these examples within the scope and essence of the present disclosure. That is to say, the present embodiments have been disclosed in exemplary form, and should not be interpreted in a restricted manner. In order to determine the essence of the present disclosure, the Claims should be referenced.
(165) Also, the series of processing described in the Specification can be executed with hardware, software, or a combined configuration of both. In the case of executing the processing with software, a program in which the processing sequence is recorded is installed in the memory of a computer built into dedicated hardware and executed, or the program is installed in a general-use computer that can execute various types of processing and executed there. For example, the program can be recorded beforehand in a recording medium. In addition to being installed from a recording medium to a computer, the program can be received via a network such as a LAN (Local Area Network) or the Internet, and installed in a recording medium such as a built-in hard disk.
(166) Note that the various types of processing described in the Specification are not only executed in a time-series manner according to the description, but may be executed in parallel or individually, according to the processing capability of the apparatus executing the processing, or as suitable. Also, a system according to the present Specification is a theoretical collective configuration of multiple apparatuses, and is not restricted to apparatuses with various configurations within the same housing.
(167) The present disclosure contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2011-038240 filed in the Japan Patent Office on Feb. 24, 2011, the entire contents of which are hereby incorporated by reference.
(168) It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.