IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM
20230209220 · 2023-06-29
Inventors
CPC classification: H04N25/48; H04N25/683 (ELECTRICITY)
International classification: H04N25/683; H04N25/48; H04N17/00 (ELECTRICITY)
Abstract
An image processing device according to the present disclosure includes: a defect candidate pixel detecting unit that detects a defect candidate pixel for each of a plurality of captured images captured in a state where positional relationships between an imaging range and an image sensor including a plurality of pixels are caused to be different from each other by performing defect candidate pixel detecting processing on each of the plurality of captured images; and an interpolation target defective pixel determining unit that determines, as an interpolation target defective pixel, a pixel detected as the defect candidate pixel a number of times greater than or equal to a threshold value by the defect candidate pixel detecting unit.
Claims
1. An image processing device comprising: a defect candidate pixel detecting unit that detects a defect candidate pixel for each of a plurality of captured images captured in a state where positional relationships between an imaging range and an image sensor comprising a plurality of pixels are caused to be different from each other by performing defect candidate pixel detecting processing on each of the plurality of captured images; and an interpolation target defective pixel determining unit that determines, as an interpolation target defective pixel, a pixel detected as the defect candidate pixel a number of times greater than or equal to a threshold value by the defect candidate pixel detecting unit.
2. The image processing device according to claim 1, wherein the threshold value is less than or equal to the number of the plurality of captured images.
3. The image processing device according to claim 1, wherein the threshold value is less than the number of the plurality of captured images.
4. The image processing device according to claim 1, wherein the defect candidate pixel detecting unit detects whether or not each of pixels of interest in each of the plurality of captured images is the defect candidate pixel on a basis of a comparison result between a pixel value of the pixel of interest and a pixel value of a detection neighboring pixel that is a neighboring pixel of the pixel of interest.
5. The image processing device according to claim 4, wherein the interpolation target defective pixel determining unit determines the pixel of interest as the interpolation target defective pixel in a case where polarities of differences between the pixel of interest and each of a plurality of the detection neighboring pixels are the same and absolute values of the differences exceed a threshold value, and the pixel of interest is detected as the defect candidate pixel a number of times greater than or equal to the threshold value by the defect candidate pixel detecting processing.
6. The image processing device according to claim 1, further comprising: an interpolation target defective pixel interpolating unit that outputs a plurality of defect-corrected images corresponding to the plurality of captured images by interpolating the interpolation target defective pixel with an interpolation neighboring pixel that is a neighboring pixel of the interpolation target defective pixel in a case where each of the plurality of captured images includes the interpolation target defective pixel.
7. The image processing device according to claim 6, wherein the interpolation target defective pixel interpolating unit interpolates the interpolation target defective pixel using a pixel included in a same captured image as a captured image of the interpolation target defective pixel as the interpolation neighboring pixel.
8. The image processing device according to claim 6, further comprising: a composition unit that sets, among the plurality of captured images, for a captured image determined by the interpolation target defective pixel determining unit to include the interpolation target defective pixel, the captured image subjected to interpolation of the interpolation target defective pixel by the interpolation target defective pixel interpolating unit as a composition target image and, for a captured image determined by the interpolation target defective pixel determining unit not to include the interpolation target defective pixel, the captured image itself as a composition target image, and then generates a composite image having a higher image quality than image quality of each of a plurality of the composition target images by performing composite processing using the plurality of composition target images.
9. The image processing device according to claim 8, wherein the composite image has a larger number of pixels than a number of pixels of each of the plurality of captured images.
10. The image processing device according to claim 8, wherein the composite image has a higher color resolution than a color resolution of each of the plurality of captured images.
11. The image processing device according to claim 6, wherein a difference between pixels at positions corresponding to each other between each of the plurality of captured images and a defect-corrected image corresponding to the captured image among the plurality of defect-corrected images is calculated, and interpolation target defective pixel information is generated, the interpolation target defective pixel information indicating that a pixel whose difference is greater than or equal to a predetermined value is a defective pixel.
12. The image processing device according to claim 11, further comprising: an association unit that associates the plurality of captured images with the interpolation target defective pixel information.
13. The image processing device according to claim 1, wherein the plurality of captured images is obtained by capturing images in a state in which positional relationships between the imaging range and the image sensor are caused to be different from each other by a pixel or a subpixel.
14. An image processing device comprising: a defect candidate pixel detecting unit that detects a defect candidate pixel for each of a plurality of captured images captured in a state where positional relationships between an imaging range and an image sensor comprising a plurality of pixels are caused to be different from each other by performing defect candidate pixel detecting processing on each of the plurality of captured images; and an association unit that associates the captured images with a detection result of the defect candidate pixel detecting unit.
15. The image processing device according to claim 14, wherein the detection result is defect candidate pixel detection information indicating a number of times each pixel of the captured images is detected as the defect candidate pixel by the defect candidate pixel detecting unit.
16. The image processing device according to claim 14, further comprising: an interpolation target defective pixel determining unit that determines, as an interpolation target defective pixel, a pixel detected as the defect candidate pixel a number of times greater than or equal to a threshold value by the defect candidate pixel detecting unit.
17. The image processing device according to claim 16, wherein the detection result is interpolation target defective pixel address information indicating an address of the interpolation target defective pixel.
18. The image processing device according to claim 6, further comprising: an association unit that associates address difference information with the plurality of defect-corrected images, the address difference information indicating addresses of pixels ranked from a highest rank to a predetermined rank in descending order of absolute values among a plurality of differences calculated for pixels at positions corresponding to each other in the plurality of captured images and the plurality of defect-corrected images, and the differences for those pixels.
19. An image processing device comprising: a pixel value restoring unit that restores a pixel value of a pixel in a plurality of captured images by using address difference information indicating an address of a pixel for which an absolute value of a difference, calculated between pixels at positions corresponding to each other in one of the plurality of captured images and a corresponding one of a plurality of defect-corrected images, is greater than or equal to a predetermined value, and indicating the difference for the pixel.
20. An image processing method of executing control of: detecting a defect candidate pixel for each of a plurality of captured images captured in a state where positional relationships between an imaging range and an image sensor comprising a plurality of pixels are caused to be different from each other by performing defect candidate pixel detecting processing on each of the plurality of captured images; and determining, as an interpolation target defective pixel, a pixel detected as the defect candidate pixel a number of times greater than or equal to a threshold value.
21. An image processing program of executing control of: detecting a defect candidate pixel for each of a plurality of captured images captured in a state where positional relationships between an imaging range and an image sensor comprising a plurality of pixels are caused to be different from each other by performing defect candidate pixel detecting processing on each of the plurality of captured images; and determining, as an interpolation target defective pixel, a pixel detected as the defect candidate pixel a number of times greater than or equal to a threshold value.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
[0044] Hereinafter, embodiments of the present disclosure will be described in detail on the basis of the drawings. Note that an image processing device, an image processing method, and an image processing program according to the present application are not limited by the embodiments. Note that in each of the following embodiments, the same parts are denoted by the same symbols, and redundant description will be omitted.
[0045] The present disclosure will be described in the following order of items.
[0046] 1. Embodiments
[0047] 1-1. Overview
[0048] 1-1-1. Pixel-Shifted High-Image-Quality Imaging Mode
[0049] 1-1-2. Defect Correction
[0050] 1-1-3. Exemplary Procedure of Pixel-Shifted High-Image-Quality Imaging Mode
[0051] 1-1-4. Comparative Technology and Problems Thereof
[0052] 1-2. Overview of Image Processing According to Embodiment of Present Disclosure
1-2-1. Overview of Processing
[0053] 1-2-2. Specific Examples of Processing
[0054] 1-2-3. Other Variations
[0055] 1-3. Configuration of Devices Applicable as Image Processing Device
[0056] 1-3-1. Configuration of Imaging Device
[0057] 1-4. Configuration of Image Processing Device
[0058] 1-5. Other Configurations of Imaging Device
[0059] 1-6. Processing Procedure by Image Processing System
[0060] 1-7. Processing Example by Image Processing System
[0061] 1-7-1. First Processing Example
[0062] 1-7-2. Second Processing Example
[0063] 1-7-3. Third Processing Example
[0064] 1-7-4. Fourth Processing Example
[0065] 1-7-5. Fifth Processing Example
[0066] 1-7-6. Sixth Processing Example
[0067] 1-7-7. Seventh Processing Example
[0068] 2. Other Embodiments
[0069] 2-1. Others
[0070] 3. Effects of Present Disclosure
[0071] 4. Hardware Configuration
1. Embodiments
1-1. Overview
[0072] Prior to describing the embodiments of the present disclosure, first, an overview of the present disclosure will be described.
1-1-1. Pixel-Shifted High-Image-Quality Imaging Mode
[0073] There is known a pixel-shifted high-image-quality imaging method of performing a plurality of times of imaging while physically shifting an image sensor, a lens, or the like by a minute distance, combining the plurality of captured images, and generating a high-quality image.
1-1-2. Defect Correction
[0074] Defective pixel correction includes a method in which a position where a defective pixel is located (a defective pixel address) is stored in advance and defect correction is performed on the pixel at the stored defective pixel position (hereinafter also referred to as "address-type defect correction") and a method in which defect correction is performed on a pixel in a case where the pixel is detected as a defective pixel (hereinafter also referred to as "detection-type defect correction"). In the detection-type defect correction, for example, in order to correct a defective pixel that subsequently occurs due to a temporal change of an image sensor or a defective pixel that becomes apparent only at the time of long exposure or high temperature, whether or not a pixel is a defective pixel is estimated from the values of the pixel to be processed (also referred to as a "pixel of interest") and its neighboring pixels, and in a case where it is determined (detected) that the pixel is a defective pixel, the pixel is corrected. The address-type defect correction and the detection-type defect correction are often used in combination.
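Although the disclosure does not prescribe any implementation, the combined use of the two methods can be sketched as follows in Python. The [0, 1] value range, the horizontal-only neighborhood, the 0.8 threshold, and all function and variable names are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def correct_defects(img, defect_addresses, threshold=0.8):
    """Address-type correction followed by detection-type correction.

    img: 2D float array of single-channel RAW values (assumed in [0, 1]).
    defect_addresses: (row, col) positions of defective pixels stored in advance.
    """
    out = img.copy()
    h, w = out.shape

    # Address-type: interpolate pixels at the pre-stored defective positions.
    for r, c in defect_addresses:
        neighbors = [out[r, cc] for cc in (c - 1, c + 1) if 0 <= cc < w]
        out[r, c] = np.mean(neighbors)

    # Detection-type: estimate whether each pixel of interest is defective
    # from its value and the values of its neighboring pixels, and correct
    # it when it is detected as a defective pixel.
    for r in range(h):
        for c in range(1, w - 1):
            d_left = out[r, c] - out[r, c - 1]
            d_right = out[r, c] - out[r, c + 1]
            if d_left * d_right > 0 and min(abs(d_left), abs(d_right)) > threshold:
                out[r, c] = (out[r, c - 1] + out[r, c + 1]) / 2
    return out
```

A pixel can thus be corrected either because its address was stored in advance or because it was detected at processing time, mirroring the combined use described above.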
[0075] In the detection-type defect correction, there is a case where a pixel that receives light from a sparkle that is likely to occur on a reflecting surface such as metal, from a star in the night sky, or the like is erroneously determined as a defect. In normal imaging of one image (frame), a side effect of erroneous detection of a defective pixel is a slight decrease in contrast or slight coloring; however, in the pixel-shifted high-image-quality imaging, in a case where defect correction is performed on the same subject portion in some of a plurality of captured images (a plurality of frames) but not in others, there is a possibility that the image quality is deteriorated as a result of combining these images.
1-1-3. Exemplary Procedure of Pixel-Shifted High-Image-Quality Imaging Mode
[0076] Here, an exemplary procedure of the pixel-shifted high-image-quality imaging mode to which the technology of the present disclosure is applied will be described. In the imaging method of the exemplary procedure, a mechanism that shifts an image sensor vertically and horizontally within a light receiving plane, for example, by 1 pixel or 0.5 pixels, is provided. As the pixel-shifted high-image-quality imaging as described above, for example, the following first to third exemplary procedures are conceivable.
[0077] The first exemplary procedure is a method of performing a shift by one pixel, in which imaging is performed a total of 2×2=4 times by shifting by "0" and "1" pixels in the horizontal direction and by "0" and "1" pixels in the vertical direction, and the four captured images (RAW images) are combined. A RAW image is image data in which the output of the image sensor is recorded as it is, for example, an image that is output from the image sensor and retains the arrangement of the color filters as it is.
[0078] In the second exemplary procedure, shifting is performed by 0.5 pixels. Imaging is performed a total of 4×4=16 times by shifting by “0”, “0.5”, “1.0”, and “1.5” pixels in the horizontal direction and shifting by “0”, “0.5”, “1.0”, and “1.5” pixels in the vertical direction, and the sixteen captured images (RAW images) are combined.
[0079] In the third exemplary procedure, imaging is performed eight times; two variations will be described below as third exemplary procedures (a) and (b). In the third exemplary procedure (a), imaging is performed a total of 4×2=8 times by shifting by "0", "0.5", "1.0", and "1.5" pixels in the horizontal direction and by "0" and "0.5" pixels in the vertical direction (or, alternatively, twice in the horizontal direction and four times in the vertical direction), and the eight captured images (RAW images) are combined. In the third exemplary procedure (b), first, similarly to the first exemplary procedure, imaging is performed a total of 2×2=4 times by shifting by "0" and "1" pixels in the horizontal direction and by "0" and "1" pixels in the vertical direction; each of these four imaging positions is then further shifted obliquely by "0.5" pixels for four more times of imaging, resulting in a total of 8 times of imaging, and the eight captured images (RAW images) are combined.
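The shift patterns of the first to third exemplary procedures can be enumerated as in the following Python sketch. The procedure names and the (horizontal, vertical) tuple representation in pixel units are assumptions for illustration, not terms of the disclosure.

```python
def shift_offsets(procedure):
    """Enumerate sensor shift offsets (horizontal, vertical) in pixel units
    for the exemplary pixel-shifted imaging procedures."""
    if procedure == "first":       # 2x2 = 4 shots, 1-pixel steps
        return [(h, v) for v in (0.0, 1.0) for h in (0.0, 1.0)]
    if procedure == "second":      # 4x4 = 16 shots, 0.5-pixel steps
        steps = (0.0, 0.5, 1.0, 1.5)
        return [(h, v) for v in steps for h in steps]
    if procedure == "third_a":     # 4x2 = 8 shots
        return [(h, v) for v in (0.0, 0.5) for h in (0.0, 0.5, 1.0, 1.5)]
    if procedure == "third_b":     # 4 one-pixel shots plus a 0.5-pixel
        base = [(h, v) for v in (0.0, 1.0) for h in (0.0, 1.0)]  # oblique
        return base + [(h + 0.5, v + 0.5) for h, v in base]      # repeat
    raise ValueError(procedure)
```

Each list has the shot count stated in the text (4, 16, 8, and 8 shots respectively), and every offset within a procedure is distinct.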
[0080] In a case where the image sensor has a Bayer arrangement as illustrated in
[0082] In the second exemplary procedure, an image having 2×2=4 times the number of pixels of an image obtained by one imaging is obtained, the resolution is improved by about 2 times, and moire and jaggies in color and luminance are reduced.
[0083] In the third exemplary procedure (a), the number of green pixels increases fourfold, whereas the increase in red and blue pixels is merely twofold. Therefore, although the color resolution of the third exemplary procedure (a) is not as high as that of the second exemplary procedure, image quality comparable to that of the second exemplary procedure can be expected in other respects. In the third exemplary procedure (b), the resolution in the oblique direction is not as good as that of the second exemplary procedure in both luminance and color; however, a resolution comparable to that of the second exemplary procedure can be expected in the horizontal and vertical directions.
[0084] Note that, in the present specification, the first exemplary procedure is referred to as the pixel-shifted image-quality-improved imaging mode, the second and third exemplary procedures are referred to as a pixel-shifted super-resolution imaging mode, and the two modes are collectively referred to as the pixel-shifted high-image-quality imaging mode. The composite processing in the pixel-shifted image-quality-improved imaging mode of the first exemplary procedure is referred to as image-quality-improved composite processing, and the composite processing in the pixel-shifted super-resolution imaging mode of the second and third exemplary procedures is referred to as super-resolution composite processing. In addition, the image-quality-improved composite processing and the super-resolution composite processing are collectively referred to as high-image-quality composite processing. Note that methods other than the above examples are conceivable as the pixel-shifted high-image-quality imaging mode, and the application scope of the present invention is not limited to the above exemplary procedures.
[0085] The third exemplary procedure will be described with reference to
[0086] First, explanation will be given on green pixels illustrated in
[0087] Hereinafter, for convenience of description, as illustrated in
[0088] Next, red pixels illustrated in
[0089] Hereinafter, for convenience of explanation, as illustrated in
[0090] With respect to blue pixels, positions of non-hatched rectangles in the image group RD4 are imaged by processing similar to that in
[0091] In addition, the processing of the third exemplary procedure (b) is as illustrated in
[0092] A pixel group RD11 illustrated in
[0093] Furthermore, in the case of the method illustrated in
1-1-4. Comparative Technology and Problems Thereof
[0094] In the present specification, technology in which detection-type defect correction is independently applied to a plurality of images obtained in the above-described pixel-shifted high-image-quality imaging mode is defined as comparative technology, and a problem thereof will be described here. Not limited to the case of the pixel-shifted high-image-quality imaging mode, in the detection-type defect correction, there is a possibility that a pixel that has received light from a subject having a small area and a large luminance difference from the vicinity thereof, such as glitter which is likely to occur on a reflecting surface such as metal or star light in the night sky, is erroneously determined as a defect. This is because these subjects can give excessive values with respect to values of neighboring pixels and thus have the same properties as those of defects, and it is difficult to distinguish between them.
[0095] In addition, in the case of normal imaging of a single image, a side effect of erroneous detection appears as a phenomenon in which the brightness of a bright spot is weakened or, in the case of a subject with a fine pattern, the contrast is reduced, and this is subjectively allowable in many cases. However, in the pixel-shifted high-image-quality imaging mode, a subject that gives an excessive value in the corresponding pixel of each image before composition does not necessarily give an excessive value in the corresponding pixel of the composite image.
[0096] The above point will be described with reference to
[0097] In
[0098] In the imaging (first imaging) illustrated in
[0099] Then, the detection-type defect correction is performed on the pixel values PT11, PT12, PT13, PT14, and PT15 in
[0100] First, in the detection step, in a case where the polarities (plus or minus) of the two differences between the pixel value of a pixel of interest and the pixel values of both adjacent pixels are the same, and the absolute values of both differences exceed a threshold value (here, 0.8), the pixel of interest is detected as a defective pixel.
[0101] Then, in the interpolation step, in a case where the pixel of interest is detected as a defective pixel in the detection step, the value of the pixel of interest (the defective pixel) is replaced (interpolated) with the average value of the pixel values of both adjacent pixels. Hereinafter, this interpolation step may be referred to as an "interpolation step of the comparative technology."
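The detection and interpolation steps of the comparative technology can be sketched in Python as follows, assuming a one-dimensional row of pixel values and the 0.8 threshold used in the example; the function name is illustrative.

```python
def comparative_defect_correction(values, threshold=0.8):
    """Detection-type defect correction of the comparative technology.

    Detection step: a pixel of interest is a defective pixel when the
    differences from BOTH adjacent pixels share the same polarity and
    BOTH absolute values exceed the threshold.
    Interpolation step: a detected pixel is replaced with the average
    of its two adjacent pixels.
    """
    out = list(values)
    for i in range(1, len(values) - 1):
        d_prev = values[i] - values[i - 1]
        d_next = values[i] - values[i + 1]
        same_polarity = d_prev * d_next > 0
        if same_polarity and abs(d_prev) > threshold and abs(d_next) > threshold:
            out[i] = (values[i - 1] + values[i + 1]) / 2
    return out
```

An isolated bright value such as 1.0 surrounded by 0.0 is flattened to 0.0, while a value of 0.5 (below the threshold) survives, which is exactly how a small high-luminance subject can be mistaken for a defect.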
[0102] By performing the above processing, the pixel value PT13 in
[0104] Therefore, in a case where the super-resolution image in
1-2. Overview of Image Processing According to Embodiment of Present Disclosure
[0105] Therefore, the present disclosure proposes a method of appropriately determining a defective pixel by determining the defective pixel using results of defect detection of each pixel for each image.
1-2-1. Overview of Processing
[0106] First, the principle of the present technology will be described while describing the processing overview of the present disclosure with reference to
[0107] Processing in a case where there is a defective pixel will be described with reference to
[0108] In the imaging (first imaging) illustrated in
[0109] Then, similarly to
[0110] The pixel value PT33 in
1-2-2. Specific Example of Processing
[0111] The principle of processing of determining a defective pixel will be described hereinafter with reference to
[0112] In
[0113] In
[0114] In
[0115] Next, the image processing device TD performs the defect candidate pixel detecting processing on each pixel of a captured image IM2 obtained by shifting the position of the subject SB1 in the plane (light receiving plane) of the image sensor 121 from the first position to a second position and performing imaging with a different imaging range (step S2). In
[0116] Next, the image processing device TD performs the defect candidate pixel detecting processing on each pixel of a captured image IM3 obtained by shifting the position of the subject SB1 in the plane (light receiving plane) of the image sensor 121 from the second position to a third position and performing imaging with a different imaging range (step S3). In
[0117] Next, the image processing device TD performs the defect candidate pixel detecting processing on each pixel of a captured image IM4 obtained by shifting the position of the subject SB1 in the plane (light receiving plane) of the image sensor 121 from the third position to a fourth position and performing imaging with a different imaging range (step S4). In
[0118] Then, the image processing device TD determines a defective pixel using the defect candidate pixel detection map MP1-4 (step S5). In the example of
[0119] Then, for each captured image including the interpolation target defective pixel, the image processing device TD performs the defective pixel interpolating processing, interpolating the pixel corresponding to the pixel P6 determined as the interpolation target defective pixel with neighboring pixels in the captured images IM1 to IM4. For example, in a case where the image sensor 121 is a color image sensor using color filters of the Bayer arrangement and the pixel P6 is a green pixel, the image processing device TD replaces the pixel value of the pixel P6 with an average value of the pixel values of the nearest green pixels P1, P3, P9, and P11. That is, the image processing device TD interpolates the pixel P6 of the captured image IM1 by using, as interpolation neighboring pixels, the pixels P1, P3, P9, and P11 included in the same captured image IM1 as the pixel P6, which is the interpolation target defective pixel. The image processing device TD performs the processing of interpolating the pixel P6 similarly for the captured images IM2 to IM4. Then, the image processing device TD generates a composite image by combining the captured images IM1 to IM4 after the defective pixel interpolating processing.
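The counting of steps S1 to S4, the determination of step S5, and the interpolation of a green pixel can be sketched as follows, assuming the per-image detection results are given as boolean maps; the function names and the border handling are illustrative assumptions.

```python
import numpy as np

def determine_interpolation_targets(candidate_maps, threshold):
    """candidate_maps: list of boolean 2D arrays, one per captured image,
    True where the defect candidate pixel detecting processing flagged the
    pixel. A pixel detected at least `threshold` times (for example, the
    number of captured images N, or N-1) is determined as an interpolation
    target defective pixel."""
    counts = np.sum(np.stack(candidate_maps), axis=0)
    return counts >= threshold

def interpolate_green(img, r, c):
    """Replace a green interpolation target defective pixel with the average
    of the four nearest same-color (diagonal) neighbors, as in the example
    with pixels P1, P3, P9, and P11 around P6. Modifies img in place."""
    neighbors = [img[rr, cc]
                 for rr, cc in ((r - 1, c - 1), (r - 1, c + 1),
                                (r + 1, c - 1), (r + 1, c + 1))
                 if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]]
    img[r, c] = np.mean(neighbors)
```

A pixel flagged in all four maps reaches the threshold and is interpolated in every captured image before the images are combined; a pixel flagged in only one map does not.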
[0120] As described above, the number of times each pixel is detected as a defect candidate pixel is counted, and a pixel whose number of times reaches a threshold value is determined as an interpolation target defective pixel. As a result, it is possible to appropriately determine the interpolation target defective pixel.
[0121] Moreover, the defective pixel interpolating processing is enabled for the pixel determined as the interpolation target defective pixel. As a result, the pixel that is the actual defect can be regarded as the interpolation target defective pixel and appropriately subjected to the detection-type defect correction.
[0122] If a pixel of interest is actually a defective pixel, it is detected as a defect candidate pixel in every image used for the composition, and thus the count value matches the number of captured images used for the composition. On the other hand, even if there is an image in which a pixel that is not a defect is erroneously detected as a defect candidate pixel depending on the pattern, it is unlikely that the pixel is erroneously detected as a defect candidate pixel in all of the images. Therefore, by determining, as an interpolation target defective pixel, only a pixel detected as a defect candidate pixel in all images used for the composition, and enabling the defective pixel interpolating processing only for the interpolation target defective pixel, it becomes possible to obtain good image quality without side effects due to erroneous detection.
1-2-3. Other Variations
[0123] In the above description, the determination criterion (threshold value) for an interpolation target defective pixel is the number of all captured images to be combined (N, 4 in
[0124] The image sensor 121 may be either a black-and-white sensor (monochrome sensor) or a color image sensor. Hereinafter, side effects due to the detection-type defect correction in a case where a color captured image is a target, that is, in a case where the image sensor 121 is a color image sensor will be specifically described. For example, in a color image sensor using color filters of the Bayer arrangement, green pixels are densely arranged, and red and blue pixels are more sparsely arranged than green pixels. Therefore, in the detection-type defect correction, a defect is detected by referring to pixels farther for red and blue than for green.
[0125] Therefore, the rate at which a high-luminance subject having a small area and a large luminance difference from the vicinity is erroneously recognized as a defect is higher for the red and blue pixels than for the green pixels. For this reason, in a case as illustrated in
[0126] A composite image example IMS illustrated in
[0127] An example of simulation of side effects will be described with
[0130] In the present technology, the number of times each pixel of each captured image is detected as a defect is counted. For example, using the image sensor 121 (see
[0131] Using the image sensor 121 to which the Bayer arrangement BA illustrated in
[0132] The image processing device TD performs defect detection processing on each of the plurality of captured images. In the example of
[0133] For example, in a case where the polarities (plus or minus) of differences between a pixel to be subjected to the defect candidate pixel detecting processing (a pixel of interest) and neighboring pixels having the same color as the pixel of interest are the same, and the absolute values of the differences both exceed the threshold value (such as 0.6 or 0.75), the image processing device TD (see
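For a Bayer RAW image, the defect candidate pixel detecting processing that compares a pixel of interest only with same-color neighbors can be sketched as follows. The horizontal-only neighborhood at a distance of two pixels and the function name are simplifying assumptions; the thresholds such as 0.6 or 0.75 follow the text.

```python
import numpy as np

def detect_candidates_color(raw, threshold):
    """Defect candidate pixel detection on a Bayer RAW image.

    Each pixel of interest is compared only with neighboring pixels of the
    SAME color; in a Bayer row, the nearest same-color neighbors sit two
    pixels away horizontally, so the detection references farther pixels
    than it would on a monochrome sensor. Returns a boolean map that is
    True where a pixel is detected as a defect candidate pixel.
    """
    h, w = raw.shape
    candidates = np.zeros((h, w), dtype=bool)
    for r in range(h):
        for c in range(2, w - 2):
            d_left = raw[r, c] - raw[r, c - 2]   # same-color neighbor (left)
            d_right = raw[r, c] - raw[r, c + 2]  # same-color neighbor (right)
            same_polarity = d_left * d_right > 0
            if same_polarity and abs(d_left) > threshold and abs(d_right) > threshold:
                candidates[r, c] = True
    return candidates
```

Running this on each shifted captured image yields the per-image boolean maps whose counts are then compared with the threshold to determine interpolation target defective pixels.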
[0134] Then, a pixel whose number of times of detection as a defect candidate pixel has reached the number of the plurality of captured images (for example, N) is determined as an interpolation target defective pixel. As a result, it is possible to appropriately determine the interpolation target defective pixel also for a color image.
[0135] In addition, the image processing device TD performs the defective pixel interpolating processing using neighboring pixels only for a pixel determined as an interpolation target defective pixel. As a result, similarly to the black-and-white image, the image processing device TD can appropriately perform the detection-type defect correction processing, targeted on an interpolation target defective pixel that is an actual defective pixel, also for a color image. The image processing device TD generates, for each color, an image in which the defective pixel interpolating processing using neighboring pixels has been performed only on pixels determined as interpolation target defective pixels, and generates a composite image by the high-image-quality composite processing that combines these images of every color. In the example of
1-3. Configuration of Device Applicable as Image Processing Device
[0136] The image processing device that performs the above processing can be implemented in various devices. First, devices to which the technology of the present disclosure can be applied will be described with reference to
[0137] Illustrated in
[0138] As the image source VS, an imaging device 1, a server 4, a recording medium 5, and the like are presumed. As the image processing device TD, a mobile terminal 2 such as a smartphone, a personal computer 3, or the like is presumed. In addition, various devices such as a dedicated image editing device, a cloud server, a television device, and a video recording and reproducing device are also presumed as the image processing device TD.
[0139] The imaging device 1 as the image source VS is a digital camera or the like and transfers an image file MF obtained by imaging to the mobile terminal 2, the personal computer 3, or the like via wired or wireless communication. The server 4 may be a local server, a network server, a cloud server, or the like and refers to a device capable of providing the image file MF captured by the imaging device 1. The server 4 transfers the image file MF to the mobile terminal 2, the personal computer 3, or the like via some transmission path.
[0140] The recording medium 5 may be any of a solid-state memory such as a memory card, a disc-shaped recording medium such as an optical disc, a tape-shaped recording medium such as a magnetic tape, and the like, and is a removable recording medium on which the image file MF captured by the imaging device 1 is recorded. The image file MF read from the recording medium 5 is read by the mobile terminal 2, the personal computer 3, or the like.
[0141] The mobile terminal 2, the personal computer 3, or the like as the image processing device TD can perform the image processing on the image file MF acquired from the above image source VS.
[0142] Note that the mobile terminal 2 or the personal computer 3 may serve as the image source VS for another mobile terminal 2 or personal computer 3 functioning as the image processing device TD.
[0143]
[0144] Similarly, since the mobile terminal 2 can be the image source VS by having an imaging function, the mobile terminal 2 can output an image in which a defective pixel is interpolated by performing the defect candidate pixel detecting processing, the interpolation target defective pixel determining processing, and the defective pixel interpolating processing on the image file MF generated by imaging. Note that, without being limited to the imaging device 1 or the mobile terminal 2, various other devices are also conceivable as a device that can serve both as an image source and an image processing device.
[0145] As described above, there are various devices that function as the image processing device TD and various image sources VS of the embodiment; however, in
1-3-1. Configuration of Imaging Device
[0146] The configuration of the imaging device 1 that is an image source VS as well as an image processing device TD in
[0147] The imaging device 1 in
[0148] The lens system 11 includes lenses (for example, a lens 111 in
[0149] The imaging element unit 12 includes, for example, the image sensor 121 (imaging element) of a complementary metal oxide semiconductor (CMOS) type, a charge coupled device (CCD) type, or the like. The imaging element unit 12 executes, for example, correlated double sampling (CDS) processing, automatic gain control (AGC) processing, and the like on an electric signal obtained by photoelectrically converting the light received by the image sensor 121 and further performs analog-to-digital (A/D) conversion processing. Then, an imaging signal as digital data is output to the detection-type defect correction processing unit 13 or the control unit 21 in the subsequent stage.
[0150] The image sensor 121 may be configured so that each of a plurality of pixels detects the intensity of light and captures a monochrome image. Alternatively, as illustrated in
[0151] The imaging element unit 12 includes a driving unit (for example, an image sensor driving unit 123 in
[0152] In the above example, the case where the imaging device 1 implements the pixel-shifted high-image-quality imaging mode by changing the position of the image sensor 121 has been described; however, imaging may be performed by changing the position or the attitude of components other than the image sensor 121. For example, the imaging device 1 may perform imaging by changing the position or the attitude of the lens system 11 or the device itself (that is, the entire imaging device 1). For example, the imaging device 1 may change the position or the attitude of a lens of the lens system 11. Furthermore, the position or the attitude of the imaging device 1 may be modified by a driving device (a camera platform, a tripod, a stabilizer, or the like) other than the imaging device 1 or by a user.
[0153] The detection-type defect correction processing unit 13 performs defect candidate pixel detecting processing, interpolation target defective pixel determining processing, defective pixel interpolating processing, and others on the captured image input from the imaging element unit 12 and passes the captured image after the interpolation processing to the composition processing unit 14. The detection-type defect correction processing unit 13 is, for example, a detection-type defect correcting circuit 130 as illustrated in
[0154] The composition processing unit 14 performs the high-image-quality composite processing on a defect-corrected image in which a defective pixel is interpolated by the detection-type defect correction processing unit 13. The composition processing unit 14 performs the high-image-quality composite processing corresponding to the number of captured images as described above.
[0155] The development processing unit 15 performs development processing on the composite image generated by the composition processing unit 14. The development processing performed by the development processing unit 15 includes processing of converting RGB data into luminance data Y and chrominance data C of a YC system, adjustment of white balance, γ correction, and others. Note that the development processing performed by the development processing unit 15 does not include demosaic processing.
[0156] The recording control unit 16 performs recording and reproduction with respect to a recording medium such as a nonvolatile memory, for example. The recording control unit 16 performs processing of recording, for example, an image file MF such as moving image data or still image data, an image after the development processing, or the like on a recording medium. Various actual forms are conceivable as the recording control unit 16. For example, the recording control unit 16 may be configured as a flash memory built in the imaging device 1 and a write and read circuit thereof or may be in a form of a card recording and reproducing unit that performs recording and reproducing access to a recording medium that can be attached to and detached from the imaging device 1 such as a memory card (portable flash memory or the like). Alternatively, the recording control unit 16 may be a hard disk drive (HDD) or the like as a form built in the imaging device 1.
[0157] The display unit 17 performs various displays for an imaging operator, and is, for example, a display panel or a viewfinder implemented by a display device such as a liquid crystal display (LCD) or an organic electro-luminescence (EL) display arranged in a housing of the imaging device 1. The display unit 17 executes various displays on a display screen on the basis of an instruction from the control unit 21. For example, the display unit 17 displays a reproduced image of image data read from the recording medium in the recording control unit 16. Furthermore, the display unit 17 causes various operation menus, icons, messages, and the like, that is, display as a graphical user interface (GUI), to be executed on the screen on the basis of an instruction from the control unit 21.
[0158] The output unit 18 performs data communication or network communication with an external device in a wired or wireless manner. The output unit 18 transmits and outputs captured image data (still image file or moving image file) to, for example, an external display device, a recording device, a reproduction device, or the like. Furthermore, as a network communication unit, the output unit 18 may perform communication via various networks such as the Internet, a home network, and a local area network (LAN) and transmit and receive various types of data to and from a server, a terminal, or the like on the network.
[0159] The operation unit 19 collectively indicates input devices for a user to perform various types of operation input. Specifically, the operation unit 19 indicates various operators (keys, dials, touch panels, touch pads, etc.) provided in a housing of the imaging device 1. The operation of the user is detected by the operation unit 19, and a signal corresponding to the input operation is sent to the control unit 21.
[0160] The control unit 21 includes a microcomputer (arithmetic processing device) including a central processing unit (CPU). The memory unit 20 stores information and the like used for processing by the control unit 21. The memory unit 20 comprehensively indicates, for example, a read only memory (ROM), a random access memory (RAM), a flash memory, and the like. The memory unit 20 may be a memory area built in a microcomputer chip as the control unit 21 or may be configured by a separate memory chip. The control unit 21 controls the entire imaging device 1 by executing a program stored in the ROM, the flash memory, or the like of the memory unit 20. For example, the control unit 21 controls the operation of each unit as necessary with respect to control of the shutter speed of the imaging element unit 12, an instruction of various types of image processing in the detection-type defect correction processing unit 13, imaging operation or recording operation in accordance with user's operation, reproduction operation of a recorded image file, operation of the lens system 11 such as zooming, focusing, and diaphragm adjustment in a lens barrel, user interface operation, and the like.
[0161] The RAM in the memory unit 20 is used for temporary storage of data, programs, and the like as a work area at the time of various data processing of the CPU of the control unit 21. The ROM and the flash memory (nonvolatile memory) in the memory unit 20 are used for storing the operating system (OS) for the CPU to control each unit, content files such as image files, application programs for various types of operation, firmware, and others.
[0162] The driver unit 22 includes, for example, a motor driver for a zoom lens driving motor, a motor driver for a focus lens driving motor, a motor driver for a motor of a diaphragm mechanism, and the like. These motor drivers apply a drive current to a corresponding driver according to an instruction from the control unit 21 and cause the driver to execute shift of a focus lens or a zoom lens, opening or closing of diaphragm blades of the diaphragm mechanism, and the like.
[0163] The sensor unit 23 comprehensively indicates various sensors mounted on the imaging device. As the sensor unit 23, for example, an acceleration sensor, a position information sensor, an illuminance sensor, or the like may be mounted. Note that the imaging device 1 of
[0164] Here,
[0165] The defect candidate pixel detecting unit 131 performs the defect candidate pixel detecting processing on a RAW image input thereto. The input RAW image is stored in a memory 202 as a temporarily-stored captured image 203. The memory 202 may be the memory unit 20. Note that the memory 202 may be inside the detection-type defect correcting circuit 130.
[0166] A detection result 204 of the defect candidate pixel detecting processing by the defect candidate pixel detecting unit 131 is stored in the memory 202. The detection result 204 is, for example, a defect candidate pixel detection map indicating an address and a count value of a defect candidate pixel. Note that the detection result 204 is not limited to the defect candidate pixel detection map and may be a defect candidate pixel address list or the like that indicates addresses of pixels detected as defect candidate pixels. In a case where the defect candidate pixel detecting unit 131 performs counting-up, the defect candidate pixel detecting unit 131 holds the count value of each pixel and stores the detection result 204 of the defect candidate pixel detecting processing in the memory 202. In the case of grasping only the addresses of defect candidate pixels, the defect candidate pixel detecting unit 131 counts up at the time of overwriting the corresponding addresses in the memory 202.
[0167] The imaging device 1 associates the input RAW image with a defect candidate pixel detection map that is the detection result 204 of the defect candidate pixel detecting unit. Here, the term “associate” means, for example, to allow another piece of information to be used (linked) when one piece of information (data, command, program, etc.) is processed. That is, the pieces of information associated with each other may be integrated as one file or the like or may be separate pieces of information. For example, information B associated with information A may be transmitted on a transmission path different from that of the information A. Furthermore, for example, information B associated with information A may be recorded in a recording medium different from that of the information A (or another recording area of the same recording medium). Note that this “association” may be performed on a part of information instead of the entire information. For example, an image and information corresponding to the image may be associated with each other in any unit such as a plurality of frames, one frame, or a part in a frame. More specifically, for example, actions such as assigning the same ID (identification information) to a plurality of pieces of information, recording a plurality of pieces of information in the same recording medium, storing a plurality of pieces of information in the same folder, storing a plurality of pieces of information in the same file (assigning one piece of information to another as metadata), embedding a plurality of pieces of information in the same stream, and embedding metadata in an image such as an electronic watermark are included in “associating”. The recording control unit 16, the output unit 18, or the control unit 21 in
[0168] The interpolation target defective pixel determining unit 132 performs the interpolation target defective pixel determining processing. The interpolation target defective pixel determining unit 132 determines an interpolation target defective pixel using the detection result 204 stored in the memory 202. For example, the interpolation target defective pixel determining unit 132 determines a pixel whose count value is larger than or equal to a threshold value in the defect candidate pixel detection map as the detection result 204 as an interpolation target defective pixel. The interpolation target defective pixel determining unit 132 passes an interpolation target defective pixel address list indicating a pixel determined as an interpolation target defective pixel to the interpolation target defective pixel interpolating unit 133.
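The count-versus-threshold determination described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the circuit's implementation: `detection_map` stands in for the defect candidate pixel detection map, and the function name is hypothetical.

```python
import numpy as np

def determine_interpolation_targets(detection_map: np.ndarray, threshold: int):
    """Return the (row, col) addresses of pixels whose defect candidate
    count is greater than or equal to the threshold value."""
    rows, cols = np.nonzero(detection_map >= threshold)
    return list(zip(rows.tolist(), cols.tolist()))
```

With the threshold equal to the number of captured images N, only pixels detected as candidates in every image become interpolation targets, which filters out pixels falsely detected in just one or a few images.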
[0169] The interpolation target defective pixel interpolating unit 133 performs the defective pixel interpolating processing on the basis of the determination result by the interpolation target defective pixel determining unit 132. The interpolation target defective pixel interpolating unit 133 interpolates each pixel corresponding to an interpolation target defective pixel indicated by the interpolation target defective pixel address list among pixels in the temporarily-stored captured image 203 stored in the memory 202 with neighboring pixels. The interpolation target defective pixel interpolating unit 133 passes the defect-corrected image CIM generated by the defective pixel interpolating processing to the composition processing unit 14.
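A simple form of the neighboring-pixel interpolation described here can be sketched as below. The sketch assumes a monochrome image and a plain 3×3 neighborhood mean; for a color filter array (for example, Bayer) RAW image, the neighbors would instead be taken from same-color pixels. The function name is hypothetical.

```python
import numpy as np

def interpolate_defective_pixels(image: np.ndarray, targets):
    """Replace each interpolation target defective pixel with the mean of
    its in-bounds 3x3 neighbors, excluding other target pixels."""
    corrected = image.astype(np.float64).copy()
    h, w = image.shape
    for y, x in targets:
        neighbors = [image[ny, nx]
                     for ny in range(max(0, y - 1), min(h, y + 2))
                     for nx in range(max(0, x - 1), min(w, x + 2))
                     if (ny, nx) != (y, x) and (ny, nx) not in targets]
        if neighbors:  # leave the pixel unchanged if no valid neighbor exists
            corrected[y, x] = sum(neighbors) / len(neighbors)
    return corrected
```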
1-4. Configuration of Image Processing Device
[0170] Furthermore, the image file MF can be transferred to the image processing device TD such as the mobile terminal 2 and subjected to the image processing. The mobile terminal 2 and the personal computer 3 serving as the image processing device TD can be implemented as an image processing device having the configuration illustrated in
[0171] In
[0172] An input unit 76 including an operator or an operation device is connected to the input and output interface 75. For example, as the input unit 76, various operators or operation devices such as a keyboard, a mouse, a key, a dial, a touch panel, a touch pad, and a remote controller are conceivable. An operation by a user is detected by the input unit 76, and a signal corresponding to the input operation is interpreted by the CPU 71.
[0173] In addition, a display unit 77 including an LCD, an organic EL panel, or the like and an audio output unit 78 including a speaker or the like are integrally or separately connected to the input and output interface 75. The display unit 77 performs various displays and includes, for example, a display device included in a housing of the image processing device 70, a separate display device connected to the image processing device 70, or the like. The display unit 77 displays images and the like for various types of image processing on a display screen on the basis of an instruction from the CPU 71. In addition, the display unit 77 displays various operation menus, icons, messages, and the like, in other words, performs display as a graphical user interface (GUI) on the basis of an instruction from the CPU 71.
[0174] In some cases, the storage unit 79 including a hard disk, a solid state memory, or the like and a communication unit 80 including a modem or the like are connected to the input and output interface 75. The communication unit 80 performs communication processing via a transmission path such as the Internet or performs communication with various devices by wired or wireless communication, bus communication, or others.
[0175] A drive 82 is further connected to the input and output interface 75 as necessary, and a removable recording medium 81 such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory is mounted as required. A data file such as the image file MF, various computer programs, and the like can be read from the removable recording medium 81 by the drive 82. The read data file is stored in the storage unit 79, or images or audio included in the data file are output by the display unit 77 or the audio output unit 78. Furthermore, a computer program or the like read from the removable recording medium 81 is installed in the storage unit 79, as necessary.
[0176] In the image processing device 70, for example, software for image processing as the image processing device of the present disclosure can be installed via network communication by the communication unit 80 or the removable recording medium 81. Alternatively, the software may be stored in advance in the ROM 72, the storage unit 79, or the like.
1-5. Other Configurations of Imaging Device
[0177] Another example of the configuration of the imaging device 1 that is both the image source VS and the image processing device TD in
[0178] The imaging device 1 in
[0179] The image sensor driving unit 123 shifts the image sensor 121 in the horizontal direction and the vertical direction by one pixel or by one subpixel. For example, the image sensor driving unit 123 causes the image sensor driving mechanism 122, which is an actuator, to shift the image sensor 121 in accordance with an instruction from the control unit 21.
[0180] In a case where the position or the attitude of the lens 111 is modified, the lens driving unit 222 may cause the lens driving mechanism 221 which is an actuator to shift the lens 111 in accordance with an instruction from the control unit 21.
[0181] The address-type defect correction processing unit 24 performs the address-type defect correction processing on images captured by the image sensor 121. The address-type defect correction processing unit 24 performs the address-type defect correction processing on a defective pixel indicated by an address (position) of the defective pixel stored in the defect address storing unit 201. The imaging device 1 may not include the address-type defect correction processing unit 24 or the defect address storing unit 201. The output unit 18, the memory unit 20, or the control unit 21 in
[0182] Note that some of the functions of the imaging device 1 described with reference to
1-6. Processing Procedure by Image Processing System
[0183] First, a processing procedure by the image processing system 50 will be described with reference to
[0184] First, the flow of processing by the image processing system will be described with reference to
[0185] As illustrated in
[0186] Next, an example of processing in a case where functions are shared by the imaging device 1 and the image processing device 70 included in the image processing system 50 will be described with reference to
[0187] As illustrated in
[0188] Then, the imaging device 1 performs the defect candidate pixel detecting processing (step S202). Note that the steps illustrated in
[0189] Then, the imaging device 1 performs the interpolation target defective pixel determining processing (step S203). The imaging device 1 compares the number of times (count value) each pixel is detected as a defect candidate pixel with a threshold value and determines a pixel detected as a defect candidate pixel a number of times greater than or equal to the threshold value as an interpolation target defective pixel.
[0190] Then, the imaging device 1 transmits a plurality of captured images or interpolation target defective pixel information DPI indicating interpolation target defective pixels to the image processing device 70 (step S204). Note that the imaging device 1 may transmit a plurality of captured images subjected to the detection-type defect correction processing by the imaging device 1 itself or the interpolation target defective pixel information DPI to the image processing device 70 without transmitting a plurality of captured images not subjected to the detection-type defect correction processing of the present embodiment. This point will be described with reference to
[0191] Then, the image processing device 70 that has received the information from the imaging device 1 performs the defective pixel interpolating processing on each of the interpolation target defective pixels of the captured image including the interpolation target defective pixels among the plurality of captured images (step S205).
[0192] Then, the image processing device 70 performs the high-image-quality composite processing (step S206). The image processing device 70 performs the high-image-quality composite processing using, for each image on which the defective pixel interpolating processing has been performed, the image subjected to that processing and, for each image on which the defective pixel interpolating processing has not been performed, the unprocessed image.
[0193] Then, the image processing device 70 performs the development processing on the composite image combined by the high-image-quality composite processing (step S207).
1-7. Processing Example by Image Processing System
[0194] Hereinafter, a processing example by the image processing system 50 will be described with reference to
1-7-1. First Processing Example
[0195] First, a first processing example will be described with reference to
[0196] First, the premise of the diagram illustrated in
[0197] In
[0198] First, an overview of the processing of
[0199] Here, for example, in a case of an actual defective pixel, after the captured images (N images in
[0200] Subsequently, each piece of processing will be specifically described with reference to
[0201] The defect candidate pixel detecting unit 131 of the imaging device 1 performs the defect candidate pixel detecting processing on each pixel of each of the captured images of the captured image group IMG and generates the defect candidate pixel detection map MP11 indicating the number of times (count value) each pixel is detected as a defective pixel. The defect candidate pixel detection map MP11 has a size of width W×height H (W×H elements) corresponding to the size of each of the captured images of the captured image group IMG. For example, in the defect candidate pixel detection map MP11, the count values of all the pixels are initialized to 0 before the defect candidate pixel detecting processing, and the count value at the address of a pixel detected as a defect candidate pixel is incremented by 1. The imaging device 1 associates the captured image group IMG with the defect candidate pixel detection map MP11. The imaging device 1 transmits the defect candidate pixel detection map MP11 and the captured image group IMG to the image processing device 70. For example, in
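The map construction just described can be sketched as follows. This is an illustrative sketch only: `detect_defect_candidates` is a hypothetical stand-in for the per-image defect candidate pixel detecting processing of the defect candidate pixel detecting unit 131, assumed to return a boolean candidate mask per captured image.

```python
import numpy as np

def build_detection_map(captured_images, detect_defect_candidates):
    """Accumulate, per pixel address, the number of captured images in
    which that pixel was detected as a defect candidate."""
    h, w = captured_images[0].shape
    detection_map = np.zeros((h, w), dtype=np.int32)  # all counts start at 0
    for image in captured_images:
        # Add 1 at every address flagged as a candidate in this image.
        detection_map += detect_defect_candidates(image).astype(np.int32)
    return detection_map
```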
[0202] Then, using the captured image group IMG and the defect candidate pixel detection map MP11 received from the imaging device 1, the image processing device 70 performs the interpolation target defective pixel determining processing by the interpolation target defective pixel determining unit 132 or the defective pixel interpolating processing by the interpolation target defective pixel interpolating unit 133. The image processing device 70 determines a pixel whose value in the defect candidate pixel detection map MP11 is larger than or equal to the threshold value as an interpolation target defective pixel and applies the defective pixel interpolating processing using neighboring pixels only to the interpolation target defective pixel.
[0203] Then, the image processing device 70 generates a defect-corrected image group CIG including N images such as defect-corrected images CI1, CI2, and CI3 that are obtained by performing the defective pixel interpolating processing on the interpolation target defective pixel. Then, the image processing device 70 generates a final image (output image), which is an image after the high-image-quality composite processing and the development processing, by performing the high-image-quality composite processing and the development processing using the defect-corrected image group CIG. Note that, in a case where there is no need to perform display or the like, the development processing is unnecessary, and the composite image may be recorded, transmitted, or output.
1-7-2. Second Processing Example
[0204] Next, a second processing example will be described with reference to
[0205] First, an overview of the processing of
[0206] Hereinafter, differences from
[0207] Then, the image processing device 70 performs the defective pixel interpolating processing by the interpolation target defective pixel interpolating unit 133 using the captured image group IMG and the defect address list LT11 received from the imaging device 1. The image processing device 70 applies the defective pixel interpolating processing only to a pixel corresponding to an address included in the defect address list LT11.
[0208] Then, the image processing device 70 generates the defect-corrected image group CIG including N images such as the defect-corrected images CI1, CI2, and CI3 in which a defective pixel is corrected. Then, the image processing device 70 generates a final image (output image) by performing the high-image-quality composite processing and the development processing using the defect-corrected image group CIG.
1-7-3. Third Processing Example
[0209] Next, a third processing example will be described with reference to
[0210] First, an overview of the processing of
[0211] Hereinafter, differences from
[0212] The detection-type defect correcting circuit CC performs the defect candidate pixel detecting processing on each image of the captured image group IMG by the defect candidate pixel detecting processing and generates the defect candidate pixel detection map MP11 on the basis of the result. The detection-type defect correcting circuit CC generates the defect candidate pixel detection map MP11 indicating the number of times (count value) each pixel is detected as a defect. For example, the detection-type defect correcting circuit CC generates the defect candidate pixel detection map MP11 which is a map having a size of W×H (W×H elements) corresponding to the size of each image of the captured image group IMG by the defect candidate pixel detecting processing.
[0213] The detection-type defect correcting circuit CC performs the defective pixel interpolating processing using the result of the defect candidate pixel detecting processing and performs the defective pixel interpolating processing on a defective pixel. As a result, the detection-type defect correcting circuit CC generates a defect-corrected image group CIG1 including N images such as defect-corrected images CI11, CI12, and CI13 in which a defective pixel is corrected. The detection-type defect correcting circuit CC outputs the defect-corrected image group CIG1. The detection-type defect correcting circuit CC can generate a defect-corrected image group CIG1 that is defect-corrected but not subjected to composition, and the imaging device 1 can acquire the defect-corrected image group CIG1.
[0214] The imaging device 1 acquires the defect candidate pixel detection map MP11 from the detection-type defect correcting circuit CC and transmits the acquired defect candidate pixel detection map MP11 or the captured image group IMG to the image processing device 70.
[0215] Then, using the captured image group IMG and the defect candidate pixel detection map MP11 received from the imaging device 1, the image processing device 70 performs the interpolation target defective pixel determining processing by the interpolation target defective pixel determining unit 132 or the defective pixel interpolating processing by the interpolation target defective pixel interpolating unit 133. The image processing device 70 applies the defective pixel interpolating processing only to a pixel whose value in the defect candidate pixel detection map MP11 is larger than or equal to the threshold value. In this case, the image processing device 70 applies the defective pixel interpolating processing only to a pixel detected as a defect in all the images of the captured image group IMG.
[0216] Then, the image processing device 70 generates a defect-corrected image group CIG2 including N images such as the defect-corrected images CI1, CI2, and CI3 in which a defective pixel is corrected. Then, the image processing device 70 generates a final image (output image) by performing the high-image-quality composite processing and the development processing using the defect-corrected image group CIG2.
1-7-4. Fourth Processing Example
[0217] Next, a fourth processing example will be described with reference to
[0218] First, an overview of the processing of
[0219] Hereinafter, differences from
[0220] Using the detection-type defect correcting circuit CC, the imaging device 1 generates a defect-corrected image group CIG1 including N images such as the defect-corrected images CI11, CI12, and CI13 in which a defective pixel is corrected.
[0220] A difference processing unit 134 of the imaging device 1 obtains a difference between each pixel of each of the images of the captured image group IMG and a pixel of an image of the defect-corrected image group CIG1 corresponding to the image of the captured image group IMG. Note that, in a case where a pixel is detected as a defective pixel and is corrected, the difference is not zero, and thus it can be determined that a pixel whose difference is not zero has been detected as a defective pixel and subjected to defect correction. The difference processing unit 134 obtains a difference between each pixel of the captured image IM1 of the captured image group IMG and a pixel of the defect-corrected image CI11 of the defect-corrected image group CIG1 corresponding to the pixel of the captured image IM1. Then, the difference processing unit 134 adds "1" to the defect candidate pixel detection map MP11 for the pixel whose difference is not zero. Note that the criterion is not limited to a difference of exactly zero; for example, in consideration of noise or the like, "1" may be added in a case where the difference is greater than or equal to a predetermined value. In addition, the difference processing unit 134 obtains a difference between each pixel of the captured image IM2 and a pixel of the defect-corrected image CI12 corresponding to the pixel of the captured image IM2. Then, the difference processing unit 134 adds "1" to the defect candidate pixel detection map MP11 for the pixel whose difference is not zero. The difference processing unit 134 repeats similar processing for the N images. In this manner, the imaging device 1 generates the defect candidate pixel detection map MP11 on the basis of a comparison result between the captured image group IMG and the defect-corrected image group CIG1. The difference processing unit 134 associates the captured image group IMG with the defect candidate pixel detection map MP11.
The imaging device 1 transmits the generated defect candidate pixel detection map MP11 and the captured image group IMG to the image processing device 70.
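As a non-limiting illustrative sketch of the difference processing described above (the function name, the NumPy representation of the images, and the optional noise margin are assumptions introduced for illustration, not part of the disclosed embodiment), the generation of the defect candidate pixel detection map MP11 might look like:

```python
import numpy as np

def build_detection_map(captured_images, corrected_images, noise_margin=0):
    """Count, per pixel, how many (captured, defect-corrected) image pairs
    show a difference, i.e., how many times the pixel appears to have been
    corrected as a defective pixel."""
    detection_map = np.zeros(captured_images[0].shape, dtype=np.int32)
    for cap, cor in zip(captured_images, corrected_images):
        # A pixel whose absolute difference exceeds the noise margin is
        # taken to have been defect-corrected; add "1" to its count.
        diff = np.abs(cap.astype(np.int64) - cor.astype(np.int64))
        detection_map += (diff > noise_margin).astype(np.int32)
    return detection_map
```

Setting `noise_margin` to a positive value corresponds to the note above that “1” may instead be added only when the difference is greater than or equal to a predetermined value.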
[0222] Then, using the captured image group IMG and the defect candidate pixel detection map MP11 received from the imaging device 1, the image processing device 70 performs the interpolation target defective pixel determining processing by the interpolation target defective pixel determining unit 132 or the defective pixel interpolating processing by the interpolation target defective pixel interpolating unit 133. Since the subsequent processing is similar to that in
1-7-5. Fifth Processing Example
[0223] Next, a fifth processing example will be described with reference to
[0224] First, an overview of the processing of
[0225] Hereinafter, differences from
[0226] The detection-type defect correcting circuit CC performs the defect candidate pixel detecting processing on each image of the captured image group IMG by the defect candidate pixel detecting processing by the defect candidate pixel detecting unit 131 and generates the defect candidate pixel detection map MP11 on the basis of the result. The detection-type defect correcting circuit CC also performs the defective pixel interpolating processing on a defective pixel by the defective pixel interpolating unit 133a by using the result of the defect candidate pixel detecting processing by the defect candidate pixel detecting unit 131. As a result, the detection-type defect correcting circuit CC generates a defect-corrected image group CIG1 including N images such as defect-corrected images CI11, CI12, and CI13 in which a defective pixel is corrected. The detection-type defect correcting circuit CC outputs the defect-corrected image group CIG1.
[0227] Furthermore, as described above, the imaging device 1 can transmit the result of the defect candidate pixel detecting processing by the defect candidate pixel detecting unit 131 of the detection-type defect correcting circuit CC to the image processing device 70. The imaging device 1 associates the captured image group IMG with the defect candidate pixel detection map MP11. The imaging device 1 acquires the defect candidate pixel detection map MP11 from the detection-type defect correcting circuit CC and associates the captured image group IMG, the defect candidate pixel detection map MP11, and the defect-corrected image group CIG1. The imaging device 1 transmits the defect candidate pixel detection map MP11, the captured image group IMG, or the defect-corrected image group CIG1 to the image processing device 70.
[0228] Then, the image processing device 70 selects a switch SW1 for a pixel for which the captured image group IMG is used and selects a switch SW2 for a pixel for which the defect-corrected image group CIG1 is used, thereby generating the defect-corrected image group CIG2. The image processing device 70 selects the switch SW2 for a pixel whose value in the defect candidate pixel detection map MP11 is larger than or equal to the threshold value. That is, the image processing device 70 uses a corrected pixel of the defect-corrected image group CIG1 for a pixel whose value in the defect candidate pixel detection map MP11 is larger than or equal to the threshold value, that is, for a defective pixel, and uses an uncorrected pixel of the captured image group IMG for a pixel that is not a defective pixel. The image processing device 70 may include a selection unit that performs selection by the switches SW1 and SW2.
[0229] As a result, the image processing device 70 generates the defect-corrected image group CIG2 including N images such as the defect-corrected images CI1, CI2, and CI3 in which a defective pixel is corrected. Then, the image processing device 70 generates a final image (output image) by performing the high-image-quality composite processing and the development processing using the defect-corrected image group CIG2. Note that the image processing device 70 may receive only information necessary for generation of the defect-corrected image group CIG2 from the imaging device 1. In this case, the image processing device 70 may request, from the imaging device 1, the image selected for each pixel and may receive information of the pixels of the selected image from the imaging device 1.
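The per-pixel selection by the switches SW1 and SW2 described in [0228] can be sketched as follows (an illustrative assumption-laden sketch: the function name and the NumPy form of the inputs are not part of the embodiment):

```python
import numpy as np

def select_corrected_pixels(captured, corrected, detection_map, threshold):
    """Per-pixel switch between a captured image and its defect-corrected
    counterpart, driven by the defect candidate pixel detection map."""
    # SW2: use the corrected pixel where the detection-map count is
    # greater than or equal to the threshold (i.e., a defective pixel).
    # SW1: otherwise keep the uncorrected captured pixel.
    use_corrected = detection_map >= threshold
    return np.where(use_corrected, corrected, captured)
```

Applying this selection to each of the N image pairs yields the defect-corrected image group CIG2.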
1-7-6. Sixth Processing Example
[0230] Next, a sixth processing example will be described with reference to
[0231] First, an overview of the processing of
[0232] Hereinafter, differences from
[0233] Using the detection-type defect correcting circuit CC, the imaging device 1 generates a defect-corrected image group CIG1 including N images such as the defect-corrected images CI11, CI12, and CI13 in which a defective pixel is corrected.
[0234] The difference processing unit 134 of the imaging device 1 generates the defect candidate pixel detection map MP11 by processing similar to that in
[0235] Then, the image processing device 70 selects a switch SW1 for a pixel for which the captured image group IMG is used and selects a switch SW2 for a pixel for which the defect-corrected image group CIG1 is used, thereby generating the defect-corrected image group CIG2. Since the subsequent processing is similar to that in
[0236] In addition, in
1-7-7. Seventh Processing Example
[0237] Next, a seventh processing example will be described with reference to
[0238] First, an overview of the processing of
[0239] Hereinafter, differences from
[0240] Using the detection-type defect correcting circuit CC, the imaging device 1 generates a defect-corrected image group CIG1 including N images such as the defect-corrected images CI11, CI12, and CI13 in which a defective pixel is corrected.
[0241] The difference processing unit 134 of the imaging device 1 performs difference processing of obtaining a difference between pixels at corresponding positions between each image of the captured image group IMG and an image of the defect-corrected image group CIG1 corresponding to the image of the captured image group IMG and generates a difference map including all the differences.
[0242] In
[0243] Then, the imaging device 1 generates a table (also referred to as an “address-difference table TB11”) for pixels having values other than zero in a number of images greater than or equal to the threshold value. Note that the criterion is not limited to a nonzero value; for example, in consideration of noise or the like, the address-difference table TB11 may be generated for pixels whose difference is greater than or equal to a predetermined value in a number of images greater than or equal to the threshold value. The address-difference table TB11 is information in which information (an address) specifying each pixel is associated with information (differences) indicating the difference in each image. For example, the address-difference table TB11 includes information in which N differences, such as a difference #1 indicating the difference of a first image combination (the captured image IM1 and the defect-corrected image CI11) and a difference #2 indicating the difference of a second image combination (the captured image IM2 and the defect-corrected image CI12), are associated with an address #1 specifying one pixel.
[0244] As described above, the data size necessary for the address-difference table TB11 is variable. Meanwhile, by sorting the pixels included in the address-difference table TB11 in descending order of the absolute values of the differences and sending only the addresses of pixels in higher ranks, the table size can be set to a fixed length, and the data size can be reduced at the same time. In this case, a pixel having small absolute values of differences, that is, a minor defect, is not corrected.
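A possible sketch of the table generation in [0243] and the fixed-length truncation in [0244] follows. The function name, the sign convention of the stored differences (corrected minus captured, so that subtraction later restores the captured value), and the `max_entries` parameter are all assumptions for illustration:

```python
import numpy as np

def build_address_difference_table(captured_images, corrected_images,
                                   threshold, max_entries=None):
    """Build an address-difference-table-like structure: entries for pixels
    whose difference is nonzero in at least `threshold` images."""
    # Per-image differences, recorded as (corrected - captured).
    diffs = [cor.astype(np.int64) - cap.astype(np.int64)
             for cap, cor in zip(captured_images, corrected_images)]
    # Count, per pixel, in how many images the difference is nonzero.
    counts = sum((d != 0).astype(np.int32) for d in diffs)
    table = []
    for addr in zip(*np.nonzero(counts >= threshold)):
        table.append((tuple(int(i) for i in addr),
                      [int(d[addr]) for d in diffs]))
    if max_entries is not None:
        # Fixed-length table: keep only the entries with the largest
        # absolute differences; minor defects are dropped (uncorrected).
        table.sort(key=lambda e: max(abs(v) for v in e[1]), reverse=True)
        table = table[:max_entries]
    return table
```

With `max_entries` set, the table size is fixed regardless of how many pixels qualify, at the cost of leaving small-level defects uncorrected, as noted above.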
[0245] The imaging device 1 associates the defect-corrected image group CIG1 with the address-difference table TB11. The imaging device 1 transmits the address-difference table TB11 or the defect-corrected image group CIG1 to the image processing device 70. Note that, in a case where the image processing device 70 does not have information such as the positional relationship of each image of the defect-corrected image group CIG1, the imaging device 1 may transmit metadata indicating the positional relationship or the like of each image of the defect-corrected image group CIG1 to the image processing device 70.
[0246] Then, a pixel value restoring unit 135 of the image processing device 70 restores the pixel values before correction using the defect-corrected image group CIG1 received from the imaging device 1 and the address-difference table TB11. The pixel value restoring unit 135 generates (restores) the pixel values before correction from the difference information for the pixels at the addresses recorded in the address-difference table TB11. For example, by using the address of a pixel indicated by the address-difference table TB11 and the differences of the pixel, the pixel value restoring unit 135 subtracts the differences from the pixel value of the pixel in the defect-corrected image group CIG1 and thereby restores the pixel value of the pixel in the captured image group IMG. As a result, the image processing device 70 generates the defect-corrected image group CIG2 including N images such as the defect-corrected images CI1, CI2, and CI3 in which the pixel values of the pixels indicated by the address-difference table TB11, among the pixels in the defect-corrected image group CIG1, are restored to the values before correction. Then, the image processing device 70 generates a final image (output image) by performing the high-image-quality composite processing and the development processing using the defect-corrected image group CIG2.
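The restoration described in [0246] can be sketched as follows (an illustrative assumption: each table entry records, per image, the difference as corrected minus captured, so that subtracting the stored difference from the corrected pixel recovers the captured value, matching the “subtracts the differences” description above):

```python
import numpy as np

def restore_pixel_values(corrected_images, address_difference_table):
    """Restore pre-correction pixel values in defect-corrected images
    using an address-difference-table-like list of entries."""
    restored = [img.copy() for img in corrected_images]
    for addr, per_image_diffs in address_difference_table:
        # Subtract each image's recorded difference at the pixel address.
        for img, d in zip(restored, per_image_diffs):
            img[addr] = img[addr] - d
    return restored
```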
[0247] Note that the numerical values in the address-difference table TB11 may be the original pixel values instead of the absolute values of the differences. Even in a mode in which N RAW images (data) are recorded for a higher image quality with pixel-shifting and sent to a PC, there is also a use case in which development with normal resolution is performed using only one of the N RAW images. Therefore, according to the method of
[0248] As described above, for each of
2. Other Embodiments
[0249] The processing according to the above embodiments may be performed in various different modes (modifications) other than in the above embodiments or modifications.
2-1. Others
[0250] Among the processing described in the above embodiments, the whole or a part of the processing described as that performed automatically can be performed manually, or the whole or a part of the processing described as that performed manually can be performed automatically by a known method. In addition, a processing procedure, a specific name, and information including various types of data or parameters illustrated in the above or in the drawings can be modified as desired unless otherwise specified. For example, various types of information illustrated in the drawings are not limited to the information illustrated.
[0251] In addition, each component of each device illustrated in the drawings is conceptual in terms of function and does not need to be necessarily physically configured as illustrated in the drawings. That is, a specific form of distribution or integration of each device is not limited to those illustrated in the drawings, and the whole or a part thereof can be functionally or physically distributed or integrated in any unit depending on various loads, usage status, and others.
[0252] In addition, the above embodiments and modifications can be combined as appropriate within a range where there is no conflict in the processing content.
[0253] Furthermore, the effects described herein are merely examples and are not limiting, and other effects may be achieved.
3. Effects of Present Disclosure
[0254] As described above, the image processing device (imaging device 1 in the embodiment) according to the present disclosure includes the defect candidate pixel detecting unit (defect candidate pixel detecting unit 131 in the embodiment) and the interpolation target defective pixel determining unit (interpolation target defective pixel determining unit 132 in the embodiment). The defect candidate pixel detecting unit detects a defect candidate pixel for each of a plurality of captured images captured in a state where positional relationships between an imaging range and the image sensor including a plurality of pixels are caused to be different from each other by performing the defect candidate pixel detecting processing on each of the plurality of captured images. The interpolation target defective pixel determining unit determines, as an interpolation target defective pixel, a pixel detected as a defect candidate pixel a number of times greater than or equal to a threshold value by the defect candidate pixel detecting unit.
[0255] As described above, the image processing device according to the present disclosure detects a defect candidate pixel for each of the plurality of captured images by performing the defect candidate pixel detecting processing on each of the plurality of captured images and determines a pixel detected as a defect candidate pixel a number of times greater than or equal to the threshold value as an interpolation target defective pixel, thereby enabling appropriate determination of a defective pixel and prevention of deterioration in the quality of an image obtained in the pixel-shifted high-image-quality imaging mode.
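The core determination of [0254] and [0255] can be expressed compactly as follows (a minimal sketch; the function name and the boolean-map representation of per-image detection results are assumptions for illustration):

```python
import numpy as np

def determine_interpolation_targets(candidate_maps, threshold):
    """candidate_maps: one boolean map per captured image, True where the
    pixel was detected as a defect candidate in that image."""
    counts = sum(m.astype(np.int32) for m in candidate_maps)
    # A pixel is an interpolation target defective pixel when it was
    # detected a number of times greater than or equal to the threshold.
    return counts >= threshold
```

Per configurations (2) and (3), the threshold is at most (or strictly less than) the number of captured images, so a pixel need not be flagged in every image to be treated as defective.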
[0256] The threshold value is less than or equal to the number of the plurality of captured images. In this manner, the image processing device can appropriately determine the interpolation target defective pixel depending on the number of the plurality of captured images by using the threshold value which is less than or equal to the number of the plurality of captured images.
[0257] The threshold value is less than the number of the plurality of captured images. In this manner, the image processing device can appropriately determine the interpolation target defective pixel depending on the number of the plurality of captured images by using the threshold value which is less than the number of the plurality of captured images.
[0258] The defect candidate pixel detecting unit detects whether or not each of pixels of interest in each of the plurality of captured images is a defect candidate pixel on the basis of a comparison result between a pixel value of each of the pixels of interest and a pixel value of a detection neighboring pixel that is a neighboring pixel of each of the pixels of interest. As described above, the image processing device can appropriately detect a defect candidate pixel by detecting whether or not each pixel of interest is a defect candidate pixel on the basis of the comparison result between a pixel value of each pixel of interest and pixel values of neighboring pixels.
[0259] The interpolation target defective pixel determining unit determines a pixel of interest as an interpolation target defective pixel in a case where polarities of differences between the pixel of interest and each of a plurality of detection neighboring pixels are the same and absolute values of the differences exceed a threshold value and in a case where the pixel of interest is determined as a defect candidate pixel a number of times greater than or equal to a threshold value by the defect candidate pixel detecting processing of detecting the pixel of interest as a defect candidate pixel. In this manner, the image processing device can appropriately detect a defect candidate pixel by detecting a defect candidate pixel using the polarities of the differences between the pixel of interest and each of the plurality of detection neighboring pixels.
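The polarity condition of [0259] can be sketched for a single pixel of interest as follows (an illustrative sketch assuming a square neighborhood patch centered on the pixel; the window size and function name are not specified by the disclosure):

```python
import numpy as np

def is_defect_candidate(patch, diff_threshold):
    """Decide whether the center pixel of a square patch is a defect
    candidate: all differences between the center and its neighbors must
    share the same polarity (sign), and all absolute differences must
    exceed diff_threshold."""
    center = int(patch[patch.shape[0] // 2, patch.shape[1] // 2])
    neighbors = patch.flatten().tolist()
    neighbors.pop(len(neighbors) // 2)  # remove the center itself
    diffs = [center - int(n) for n in neighbors]
    same_polarity = all(d > 0 for d in diffs) or all(d < 0 for d in diffs)
    large_enough = all(abs(d) > diff_threshold for d in diffs)
    return same_polarity and large_enough
```

The same-polarity check rejects pixels sitting on an edge or texture, where differences to neighbors typically have mixed signs.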
[0260] The image processing device includes the interpolation target defective pixel interpolating unit (interpolation target defective pixel interpolating unit 133 in the embodiment). The interpolation target defective pixel interpolating unit outputs a plurality of defect-corrected images corresponding to the plurality of captured images by interpolating the interpolation target defective pixel with an interpolation neighboring pixel that is a neighboring pixel of an interpolation target defective pixel in a case where each of the plurality of captured images includes the interpolation target defective pixel. As described above, the image processing device can interpolate the interpolation target defective pixel appropriately by interpolating the interpolation target defective pixel using neighboring pixels of the interpolation target defective pixel.
[0261] The interpolation target defective pixel interpolating unit interpolates the interpolation target defective pixel using a pixel included in the same captured image as a captured image of the interpolation target defective pixel as interpolation neighboring pixels. As described above, the image processing device can appropriately interpolate the interpolation target defective pixel by interpolating the interpolation target defective pixel using pixels included in the same captured image as that of the interpolation target defective pixel.
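A simple stand-in for the neighbor interpolation of [0260] and [0261] follows; averaging the non-defective neighbors within the same captured image is an assumption for illustration, as the disclosure does not fix a particular interpolation formula:

```python
import numpy as np

def interpolate_defective_pixels(image, target_mask):
    """Interpolate each interpolation target defective pixel with the mean
    of its non-defective 8-neighbors in the same captured image."""
    out = image.astype(np.float64).copy()
    h, w = image.shape
    for y, x in zip(*np.nonzero(target_mask)):
        vals = []
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                # Skip the pixel itself, out-of-bounds positions, and
                # neighbors that are themselves interpolation targets.
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w \
                        and not target_mask[ny, nx]:
                    vals.append(image[ny, nx])
        if vals:
            out[y, x] = float(np.mean(vals))
    return out.astype(image.dtype)
```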
[0262] The image processing device includes a composition unit (composition processing unit 14 in the embodiment). The composition unit sets, among the plurality of captured images, for a captured image determined by the interpolation target defective pixel determining unit to include an interpolation target defective pixel, a captured image subjected to interpolation of the interpolation target defective pixel by the interpolation target defective pixel interpolating unit as a composition target image and, for an image determined not to include an interpolation target defective pixel by the interpolation target defective pixel determining unit, sets the image as a composition target image and then generates a composite image having a higher image quality than image quality of each of a plurality of composition target images by performing composite processing (high-image-quality composite processing) using the plurality of composition target images. In this manner, the image processing device can generate a composite image having higher image quality than those of the images before the composition.
[0263] The composite image has a larger number of pixels than that of each of the plurality of captured images. For example, an image obtained by imaging 8 images, 16 images, or more images, for example, with a shift amount of less than one pixel and combining these images has a larger number of pixels than an image developed by a normal method using one of the plurality of captured images. In this manner, the image processing device can generate a composite image having a larger number of pixels than those of the images before the composition.
[0264] The composite image has a higher color resolution than that of each of the plurality of captured images. For example, an image obtained by combining images obtained by capturing images four times with a shift by one pixel has higher oblique resolution and color resolution than those of an image obtained by developing one of the four images by a normal method. In this manner, the image processing device can generate a composite image having a higher color resolution than those of the images before the composition.
[0265] The image processing device calculates a difference between pixels at positions corresponding to each other between each of the plurality of captured images and a defect-corrected image corresponding to the captured image among the plurality of defect-corrected images and generates the interpolation target defective pixel information (interpolation target defective pixel information DPI) indicating that a pixel whose difference is greater than or equal to a predetermined value is a defective pixel. As described above, the image processing device can generate the interpolation target defective pixel information indicating interpolation target defective pixels by using the differences of pixels between the plurality of captured images and the defect-corrected images after the interpolation processing.
[0266] The image processing device includes the association unit (difference processing unit 134 in the embodiment). The association unit associates the plurality of captured images with the interpolation target defective pixel information. As described above, the image processing device can allow an interpolation target pixel to be specified among the pixels of the plurality of captured images based on association by associating the plurality of captured images with the interpolation target defective pixel information.
[0267] The plurality of captured images is obtained by capturing images in a state in which positional relationships between the imaging range and the image sensor are caused to be different from each other by a pixel or a subpixel. The image processing device can appropriately determine an interpolation target defective pixel for the plurality of captured images captured with positional relationships different by one pixel or by one subpixel.
[0268] The defect candidate pixel detecting unit detects a defect candidate pixel for each of a plurality of captured images captured in a state where positional relationships between an imaging range and the image sensor including a plurality of pixels are caused to be different from each other by performing the defect candidate pixel detecting processing on each of the plurality of captured images. The association unit (the recording control unit 16, the output unit 18, the memory unit 20, the control unit 21, or others in the embodiment) associates the captured images with the detection result of the defect candidate pixel detecting unit. As described above, the image processing device can allow a defect candidate pixel to be specified among the pixels of the plurality of captured images based on association by associating the plurality of captured images with the detection result of defect candidate pixels.
[0269] The detection result is defect candidate pixel detection information (defect candidate pixel detection map) indicating a number of times each pixel of the captured images is detected as a defect candidate pixel by the defect candidate pixel detecting unit. As described above, by associating the plurality of captured images with the defect candidate pixel detection information indicating the number of times of detection as a defect candidate pixel, the image processing device can allow the number of times each pixel of the plurality of captured images has been detected as a defect candidate pixel to be specified on the basis of the association.
[0270] The interpolation target defective pixel determining unit determines, as an interpolation target defective pixel, a pixel detected as a defect candidate pixel a number of times greater than or equal to a threshold value by the defect candidate pixel detecting unit. As described above, the image processing device can prevent deterioration in the quality of an image obtained in the pixel-shifted high-image-quality imaging mode by determining a pixel detected as a defect candidate pixel a number of times greater than or equal to the threshold value as an interpolation target defective pixel and thereby appropriately determining a defective pixel.
[0271] The detection result is interpolation target defective pixel address information (interpolation target defective pixel address list) indicating the addresses of interpolation target defective pixels. As described above, the image processing device can allow an address of an interpolation target defective pixel to be specified on the basis of association by associating the plurality of captured images with the interpolation target defective pixel address information indicating addresses of interpolation target defective pixels.
[0272] The association unit associates the address difference information with the plurality of defect-corrected images, the address difference information indicating addresses of pixels, of the highest rank to a predetermined rank in a descending order of absolute values of differences among absolute values of a plurality of differences calculated for pixels at positions corresponding to each other in the plurality of captured images and the plurality of defect-corrected images, and the differences. As described above, the image processing device can allow an image to which the address difference information is applied to be specified by associating the address difference information, indicating the addresses of the pixels having larger absolute values of the differences and the differences, with the plurality of defect-corrected images. As a result, a reception-side device that has received the address difference information from the image processing device can specify the image to which the address difference information is applied and apply the address difference information to the image. The image processing device can also suppress an increase in the data amount to be transmitted to an external device by using the address difference information.
[0273] The image processing device includes the pixel value restoring unit (pixel value restoring unit 135 in the embodiment). The pixel value restoring unit restores a pixel value of a pixel in the plurality of captured images by using address difference information indicating an address of a pixel for which an absolute value of a difference, calculated between pixels at positions corresponding to each other in the plurality of captured images and the plurality of defect-corrected images corresponding to the plurality of captured images, is greater than or equal to a predetermined value, together with the differences for the pixel. As described above, the image processing device can restore the pixel value of the pixel in the plurality of captured images from the plurality of defect-corrected images by using the address difference information.
4. Hardware Configuration
[0274] An information device such as the image processing device TD according to the embodiments described above is implemented by, for example, a computer 1000 having a configuration as illustrated in
[0275] The CPU 1100 operates in accordance with a program stored in the ROM 1300 or the HDD 1400 and controls each of the components. For example, the CPU 1100 loads a program stored in the ROM 1300 or the HDD 1400 in the RAM 1200 and executes processing corresponding to various programs.
[0276] The ROM 1300 stores a boot program such as a basic input output system (BIOS) executed by the CPU 1100 when the computer 1000 is activated, a program dependent on the hardware of the computer 1000, and the like.
[0277] The HDD 1400 is a computer-readable recording medium that non-transiently records a program to be executed by the CPU 1100, data used by such a program, and the like. Specifically, the HDD 1400 is a recording medium that records an image processing program according to the present disclosure, which is an example of program data 1450.
[0278] The communication interface 1500 is an interface for the computer 1000 to be connected with an external network 1550 (for example, the Internet). For example, the CPU 1100 receives data from another device or transmits data generated by the CPU 1100 to another device via the communication interface 1500.
[0279] The input and output interface 1600 is an interface for connecting an input and output device 1650 and the computer 1000. For example, the CPU 1100 receives data from an input device such as a keyboard or a mouse via the input and output interface 1600. The CPU 1100 also transmits data to an output device such as a display, a speaker, or a printer via the input and output interface 1600. Furthermore, the input and output interface 1600 may function as a media interface that reads a program or the like recorded in a recording medium. A recording medium refers to, for example, an optical recording medium such as a digital versatile disc (DVD) or a phase change rewritable disk (PD), a magneto-optical recording medium such as a magneto-optical disk (MO), a tape medium, a magnetic recording medium, or a semiconductor memory.
[0280] For example, in a case where the computer 1000 functions as the imaging device 1 according to the embodiment, the CPU 1100 of the computer 1000 implements the function of the control unit 21 or other units by executing the image processing program loaded on the RAM 1200. Meanwhile, the HDD 1400 stores the image processing program according to the present disclosure or data in the memory unit 20. Note that although the CPU 1100 reads the program data 1450 from the HDD 1400 and executes the program data 1450, as another example, these programs may be acquired from another device via the external network 1550.
[0281] Note that the present technology can also have the following configurations.
[0282] (1)
[0283] An image processing device comprising:
[0284] a defect candidate pixel detecting unit that detects a defect candidate pixel for each of captured images captured in a state where positional relationships between an imaging range and an image sensor comprising a plurality of pixels are caused to be different from each other by performing defect candidate pixel detecting processing on each of the plurality of captured images; and
[0285] an interpolation target defective pixel determining unit that determines, as an interpolation target defective pixel, a pixel detected as the defect candidate pixel a number of times greater than or equal to a threshold value by the defect candidate pixel detecting unit.
[0286] (2)
[0287] The image processing device according to (1),
[0288] wherein the threshold value is
[0289] less than or equal to the number of the plurality of captured images.
[0290] (3)
[0291] The image processing device according to (1),
[0292] wherein the threshold value is
[0293] less than the number of the plurality of captured images.
[0294] (4)
[0295] The image processing device according to any one of (1) to (3),
[0296] wherein the defect candidate pixel detecting unit
[0297] detects whether or not each of pixels of interest in each of the plurality of captured images is the defect candidate pixel on a basis of a comparison result between a pixel value of the pixel of interest and a pixel value of a detection neighboring pixel that is a neighboring pixel of the pixel of interest.
[0298] (5)
[0299] The image processing device according to (4),
[0300] wherein the interpolation target defective pixel determining unit
[0301] determines the pixel of interest as the interpolation target defective pixel in a case where polarities of differences between the pixel of interest and each of a plurality of the detection neighboring pixels are the same and absolute values of the differences exceed a threshold value and in a case where the pixel of interest is determined to be the defect candidate pixel a number of times greater than or equal to the threshold value by the defect candidate pixel detecting processing of detecting the pixel of interest as the defect candidate pixel.
[0302] (6)
[0303] The image processing device according to any one of (1) to (5), further comprising:
[0304] an interpolation target defective pixel interpolating unit that outputs a plurality of defect-corrected images corresponding to the plurality of captured images by interpolating the interpolation target defective pixel with an interpolation neighboring pixel that is a neighboring pixel of the interpolation target defective pixel in a case where each of the plurality of captured images includes the interpolation target defective pixel.
[0305] (7)
[0306] The image processing device according to (6),
[0307] wherein the interpolation target defective pixel interpolating unit
[0308] interpolates the interpolation target defective pixel using a pixel included in a same captured image as a captured image of the interpolation target defective pixel as the interpolation neighboring pixel.
[0309] (8)
[0310] The image processing device according to (6) or (7), further comprising:
[0311] a composition unit that sets,
[0312] among the plurality of captured images,
[0313] for a captured image determined by the interpolation target defective pixel determining unit to include the interpolation target defective pixel, a captured image subjected to interpolation of the interpolation target defective pixel by the interpolation target defective pixel interpolating unit as a composition target image and,
[0314] for a captured image determined not to include the interpolation target defective pixel by the interpolation target defective pixel determining unit, the captured image itself as a composition target image, and then
[0315] generates a composite image having a higher image quality than image quality of each of a plurality of the composition target images by performing composite processing using the plurality of composition target images.
[0316] (9)
[0317] The image processing device according to (8),
[0318] wherein the composite image has
[0319] a larger number of pixels than a number of pixels of each of the plurality of captured images.
[0320] (10)
[0321] The image processing device according to (8) or (9),
[0322] wherein the composite image has
[0323] a higher color resolution than a color resolution of each of the plurality of captured images.
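One way the composition of clauses (8) and (9) can yield more pixels than each captured image is pixel-shift composition, sketched below. This is an assumed illustration: the disclosure does not fix the number of composition target images or the placement order, so the four half-pixel shifts and their (dy, dx) mapping here are hypothetical.

```python
def compose_pixel_shift(images):
    """Clauses (8)-(9): combine four composition target images captured with
    half-pixel shifts into a composite with twice the pixel count in each
    dimension. The shift-to-offset order below is an assumption."""
    h, w = len(images[0]), len(images[0][0])
    composite = [[0] * (2 * w) for _ in range(2 * h)]
    offsets = ((0, 0), (0, 1), (1, 0), (1, 1))  # assumed capture order
    for img, (oy, ox) in zip(images, offsets):
        for y in range(h):
            for x in range(w):
                composite[2 * y + oy][2 * x + ox] = img[y][x]
    return composite
```

Because every composition target image has already had its interpolation target defective pixels corrected (or was determined to have none), no defect value is carried into the composite.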
[0324] (11)
[0325] The image processing device according to any one of (6) to (10),
[0326] wherein a difference between pixels at positions corresponding to each other between each of the plurality of captured images and a defect-corrected image corresponding to the captured image among the plurality of defect-corrected images is calculated, and interpolation target defective pixel information is generated, the interpolation target defective pixel information indicating that a pixel whose difference is greater than or equal to a predetermined value is a defective pixel.
[0327] (12)
[0328] The image processing device according to (11), further comprising:
[0329] an association unit that associates the plurality of captured images with the interpolation target defective pixel information.
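The generation of interpolation target defective pixel information in clause (11) can be sketched as follows. Taking the absolute value of the difference and the `(image index, y, x)` address format are assumptions for illustration.

```python
def defective_pixel_info(captured, corrected, predetermined):
    """Clause (11): compare each captured image with its corresponding
    defect-corrected image at every pixel position and flag addresses whose
    difference magnitude is at least the predetermined value as defective."""
    info = set()
    for idx, (cap, cor) in enumerate(zip(captured, corrected)):
        for y, (row_a, row_b) in enumerate(zip(cap, cor)):
            for x, (a, b) in enumerate(zip(row_a, row_b)):
                if abs(a - b) >= predetermined:
                    info.add((idx, y, x))
    return info
```

Per clause (12), this information would then be associated with the plurality of captured images, e.g. as metadata recorded alongside them.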
[0330] (13)
[0331] The image processing device according to any one of (1) to (12),
[0332] wherein the plurality of captured images is
[0333] obtained by capturing images in a state in which positional relationships between the imaging range and the image sensor are caused to be different from each other by a pixel or a subpixel.
[0334] (14)
[0335] An image processing device comprising:
[0336] a defect candidate pixel detecting unit that detects a defect candidate pixel for each of captured images captured in a state where positional relationships between an imaging range and an image sensor comprising a plurality of pixels are caused to be different from each other by performing defect candidate pixel detecting processing on each of the plurality of captured images; and
[0337] an association unit that associates the captured images and a detection result of the defect candidate pixel detecting unit.
[0338] (15)
[0339] The image processing device according to (14),
[0340] wherein the detection result is
[0341] defect candidate pixel detection information indicating a number of times each pixel of the captured images is detected as the defect candidate pixel by the defect candidate pixel detecting unit.
[0342] (16)
[0343] The image processing device according to (14) or (15), further comprising:
[0344] an interpolation target defective pixel determining unit that determines, as an interpolation target defective pixel, a pixel detected as the defect candidate pixel a number of times greater than or equal to a threshold value by the defect candidate pixel detecting unit.
[0345] (17)
[0346] The image processing device according to (16),
[0347] wherein the detection result is
[0348] interpolation target defective pixel address information indicating an address of the interpolation target defective pixel.
[0349] (18)
[0350] The image processing device according to any one of (6) to (12), further comprising:
[0351] an association unit that associates address difference information with the plurality of defect-corrected images, the address difference information indicating addresses of pixels ranked from a highest rank to a predetermined rank in a descending order of absolute values among a plurality of differences calculated for pixels at positions corresponding to each other in the plurality of captured images and the plurality of defect-corrected images, and the differences of the pixels.
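The address difference information of clause (18) can be sketched as a top-k selection over per-pixel differences. The entry format `((image index, y, x), signed difference)` and the parameter name `top_k` (the "predetermined rank") are assumptions.

```python
def address_difference_info(captured, corrected, top_k):
    """Clause (18): record the addresses and signed differences of the
    pixels with the top_k largest absolute differences between each
    captured image and its defect-corrected image."""
    entries = []
    for idx, (cap, cor) in enumerate(zip(captured, corrected)):
        for y, (row_a, row_b) in enumerate(zip(cap, cor)):
            for x, (a, b) in enumerate(zip(row_a, row_b)):
                entries.append(((idx, y, x), a - b))
    # Highest rank to predetermined rank, in descending order of |difference|.
    entries.sort(key=lambda e: abs(e[1]), reverse=True)
    return entries[:top_k]
```

Keeping the signed difference, not just the address, is what allows the original captured pixel value to be recovered later from the defect-corrected image.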
[0352] (19)
[0353] An image processing device comprising:
[0354] a pixel value restoring unit that restores a pixel value of a pixel in a plurality of captured images by using a plurality of defect-corrected images corresponding to the plurality of captured images and address difference information indicating an address of the pixel, for which an absolute value of a difference calculated between pixels at positions corresponding to each other in the plurality of captured images and the plurality of defect-corrected images is greater than or equal to a predetermined value, and the difference for the pixel.
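The restoration of clause (19) is the inverse of recording the address difference information: adding each recorded signed difference back to the defect-corrected image recovers the captured pixel value at that address. The `((image index, y, x), difference)` entry format below is an assumed illustration.

```python
def restore_pixels(corrected, info):
    """Clause (19): restore original captured pixel values by adding the
    recorded signed difference back into the defect-corrected images at
    each address listed in the address difference information.
    info is a list of ((image index, y, x), difference) entries (assumed)."""
    restored = [[row[:] for row in img] for img in corrected]
    for (idx, y, x), diff in info:
        restored[idx][y][x] += diff
    return restored
```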
[0355] (20)
[0356] An image processing method of executing control of:
[0357] detecting a defect candidate pixel for each of a plurality of captured images captured in a state where positional relationships between an imaging range and an image sensor comprising a plurality of pixels are caused to be different from each other by performing defect candidate pixel detecting processing on each of the plurality of captured images; and
[0358] determining, as an interpolation target defective pixel, a pixel detected as the defect candidate pixel a number of times greater than or equal to a threshold value.
[0359] (21)
[0360] An image processing program of executing control of:
[0361] detecting a defect candidate pixel for each of captured images captured in a state where positional relationships between an imaging range and an image sensor comprising a plurality of pixels are caused to be different from each other by performing defect candidate pixel detecting processing on each of the plurality of captured images; and
[0362] determining, as an interpolation target defective pixel, a pixel detected as the defect candidate pixel a number of times greater than or equal to a threshold value.
REFERENCE SIGNS LIST
[0363] 1 IMAGING DEVICE
[0364] 11 LENS SYSTEM
[0365] 12 IMAGING ELEMENT UNIT
[0366] 121 IMAGE SENSOR
[0367] 13 DETECTION-TYPE DEFECT CORRECTION PROCESSING UNIT
[0368] 131 DEFECT CANDIDATE PIXEL DETECTING UNIT
[0369] 132 INTERPOLATION TARGET DEFECTIVE PIXEL DETERMINING UNIT
[0370] 133 INTERPOLATION TARGET DEFECTIVE PIXEL INTERPOLATING UNIT
[0371] 14 COMPOSITION PROCESSING UNIT
[0372] 15 DEVELOPMENT PROCESSING UNIT
[0373] 16 RECORDING CONTROL UNIT
[0374] 17 DISPLAY UNIT
[0375] 18 OUTPUT UNIT
[0376] 19 OPERATION UNIT
[0377] 20 MEMORY UNIT
[0378] 21 CONTROL UNIT
[0379] 22 DRIVER UNIT
[0380] 23 SENSOR UNIT
[0381] 50 IMAGE PROCESSING SYSTEM
[0382] 70 IMAGE PROCESSING DEVICE