THIN MULTI-APERTURE IMAGING SYSTEM WITH AUTO-FOCUS AND METHODS FOR USING SAME
20190149721 · 2019-05-16
Inventors
- Gal Shabtay (Tel-Aviv, IL)
- Noy Cohen (Tel-Aviv, IL)
- Nadav Geva (Tel-Aviv, IL)
- Oded Gigushinski (Herzlia, IL)
- Ephraim Goldenberg (Ashdod, IL)
CPC classification
H04N23/45
ELECTRICITY
G02B7/36
PHYSICS
H04N23/67
ELECTRICITY
H04N25/133
ELECTRICITY
H04N23/951
ELECTRICITY
H04N5/2628
ELECTRICITY
H04N23/55
ELECTRICITY
G03B30/00
PHYSICS
G02B27/646
PHYSICS
International classification
H04N5/262
ELECTRICITY
G02B7/36
PHYSICS
G02B27/64
PHYSICS
Abstract
Dual-aperture digital cameras with auto-focus (AF) and related methods for obtaining a focused and, optionally, optically stabilized color image of an object or scene. A dual-aperture camera includes a first sub-camera having a first optics bloc and a color image sensor for providing a color image, a second sub-camera having a second optics bloc and a clear image sensor for providing a luminance image, the first and second sub-cameras having substantially the same field of view, an AF mechanism coupled mechanically at least to the first optics bloc, and a camera controller coupled to the AF mechanism and to the two image sensors and configured to control the AF mechanism, to calculate a scaling difference and a sharpness difference between the color and luminance images, the scaling and sharpness differences being due to the AF mechanism, and to process the color and luminance images into a fused color image using the calculated differences.
Claims
1. A dual-aperture digital camera with autofocus (AF) for imaging an object or scene, comprising: a) a first sub-camera that includes a first optics bloc and a first, color image sensor with a first number of pixels, the first sub-camera operative to provide a first, color image of the object or scene; b) a second sub-camera that includes a second optics bloc and a second image sensor having a second number of pixels, the second sub-camera operative to provide a second image of the object or scene; c) an AF mechanism coupled mechanically at least to the first optics bloc and used to perform an AF action on at least the first optics bloc; and d) a camera controller coupled to the AF mechanism and to the two image sensors and configured, based on calibration data acquired through a calibration step, to control the AF mechanism, to estimate the scale difference between the first and second images and to process the first and second images into a fused color image using the calculated scale difference.
2. The dual-aperture digital camera of claim 1, wherein the color image sensor includes a non-standard color filter array.
3. The dual-aperture digital camera of claim 1, wherein the camera controller configuration is based on calibration data acquired through a calibration step.
4. The dual-aperture digital camera of claim 3, wherein the calibration data is saved in a one-time programmable memory or an EEPROM in the dual-aperture digital camera.
5. The dual-aperture digital camera of claim 1, wherein the first number of pixels and the second number of pixels are the same.
6. The dual-aperture digital camera of claim 1, wherein the first number of pixels and the second number of pixels are different.
7. The dual-aperture digital camera of claim 1, wherein the first, color image sensor and the second image sensor have pixels of the same size.
8. The dual-aperture digital camera of claim 1, wherein the camera controller is further configured to find corresponding points in the first and second images, estimate a scale factor and use the scale factor to process the first and second images into a fused color image.
9. The dual-aperture digital camera of claim 1, wherein the camera controller is further configured to preprocess the first and second images, to obtain respective rectified first and second images, to register the rectified first and second images to obtain respective registered first and second images, and to fuse the registered first and second images into a fused color image.
10. The dual-aperture camera of claim 1, wherein the first sub-camera includes an infra-red (IR) filter that blocks IR wavelengths from entering the first image sensor and wherein the second sub-camera is configured to allow at least some IR wavelengths to enter the second image sensor.
11. The dual-aperture camera of claim 1, wherein the second image sensor is a luminance image sensor and wherein the second image is a luminance image.
12. The dual-aperture digital camera of claim 8, wherein the camera controller is further configured to preprocess the first and second images, to obtain respective rectified first and second images, to register the rectified first and second images to obtain respective registered first and second images, and to fuse the registered first and second images into a fused color image.
13. The dual-aperture digital camera of claim 9, wherein epipolar lines in the rectified first and second images are substantially parallel to horizontal or vertical axes of the rectified first and second images.
14. The dual-aperture digital camera of claim 12, wherein epipolar lines in the rectified first and second images are substantially parallel to horizontal or vertical axes of the rectified first and second images.
15. The dual-aperture digital camera of claim 14, wherein the camera controller configuration is based on calibration data acquired through a calibration step, wherein the calibration data is saved in a one-time programmable memory or an EEPROM in the dual-aperture digital camera.
16. The dual-aperture digital camera of claim 15, wherein the first number of pixels and the second number of pixels are different.
17. A method, comprising: in a dual camera: a) obtaining simultaneously a first, color image of an object or scene with a first sub-camera and a second image of the object or scene with a second sub-camera, wherein the first sub-camera includes a first optics bloc, wherein the second sub-camera includes a second optics bloc and wherein the first and second sub-cameras have substantially the same field of view; b) using an AF mechanism coupled mechanically at least to the first optics bloc to perform an AF action on at least the first optics bloc; c) preprocessing the first image and the second image to obtain respective rectified and scale-adjusted first and second images while considering scaling differences caused by the AF action; d) performing registration between the rectified and scale-adjusted first and second images to obtain registered images; and e) fusing the registered images into a focused fused color image.
18. The method of claim 17, wherein the fusing of the registered images is not performed in non-focused areas.
19. The method of claim 17, wherein the preprocessing to obtain respective rectified scale-adjusted first and second images includes calculating a set of corresponding points in the first and second images and estimating a scale factor S.
20. The method of claim 19, further comprising using scaling factor S to scale one of the images to match the other image, thereby obtaining the registered images.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] Non-limiting examples of embodiments disclosed herein are described below with reference to figures attached hereto that are listed following this paragraph. The drawings and descriptions are meant to illuminate and clarify embodiments disclosed herein, and should not be considered limiting in any way.
DETAILED DESCRIPTION
[0033] The sensors used in each sub-camera may have different color filter arrays (CFAs). In some embodiments, sensor 1 may have one type of CFA, while sensor 2 may have another type of CFA. In some embodiments, sensor 1 may have a CFA and sensor 2 may have a white or clear filter array (marked by W), in which all the pixels absorb the same wide range of wavelengths, e.g. between 400 nm and 700 nm (instead of each pixel absorbing a smaller portion of the spectrum). A sensor having a color filter array is referred to henceforth as a color image sensor, while a sensor with a clear or W filter array is referred to as a clear image sensor.
[0034] The CFA of sensor 1 may be standard or non-standard. As used herein, a standard CFA may include a known CFA such as Bayer, RGBE, CYYM, CYGM and different RGBW filters such as RGBW#1, RGBW#2 and RGBW#3. Examples of non-standard (non-Bayer) CFA patterns include repetitions of a 2×2 micro-cell in which the color filter order is RRBB, RBBR or YCCY, where Y=Yellow=Green+Red and C=Cyan=Green+Blue; repetitions of a 3×3 micro-cell in which the color filter order is GBRRGBBRG (e.g. as in sensor 1 in FIG. 3A); and repetitions of a 6×6 micro-cell in which the color filter order is one of the following options: [0035] 1. Line 1: RBBRRB. Line 2: RWRBWB. Line 3: BBRBRR. Line 4: RRBRBB. Line 5: BWBRWR. Line 6: BRRBBR. [0036] 2. Line 1: BBGRRG. Line 2: RGRBGB. Line 3: GBRGRB. Line 4: RRGBBG. Line 5: BGBRGR. Line 6: GRBGBR. [0037] 3. Line 1: RBBRRB. Line 2: RGRBGB. Line 3: BBRBRR. Line 4: RRBRBB. Line 5: BGBRGR. Line 6: BRRBBR. [0038] 4. Line 1: RBRBRB. Line 2: BGBRGR. Line 3: RBRBRB. Line 4: BRBRBR. Line 5: RGRBGB. Line 6: BRBRBR.
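The repetition of a micro-cell across the sensor, as described above, can be sketched with a short helper. This is an illustrative sketch only (the function name and dimensions are not from the patent); it tiles a micro-cell given as a list of equal-length strings over a sensor of arbitrary size, here using the 2×2 RRBB example from paragraph [0034].

```python
# Illustrative sketch (not from the patent): tiling a CFA micro-cell
# across a sensor grid. Each character names the color filter of one pixel.

def tile_cfa(micro_cell, rows, cols):
    """Repeat a micro-cell (list of equal-length strings) over rows x cols pixels."""
    h, w = len(micro_cell), len(micro_cell[0])
    return [''.join(micro_cell[r % h][c % w] for c in range(cols))
            for r in range(rows)]

# 2x2 micro-cell with color filter order RRBB
rrbb = ["RR", "BB"]
pattern = tile_cfa(rrbb, 4, 6)
```

The same helper tiles any of the 3×3 or 6×6 micro-cells listed above by passing their lines as strings.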
[0041] Unlike in a sensor with a color CFA (i.e. sensor 1), absorption of IR light in clear sensor 2 does not introduce color cross-talk (since the sensor records a panchromatic image of the scene).
[0042] Removing the IR filter may have some negative implications on image quality. For example, extending the range of wavelengths that are captured by the camera may lead to longitudinal chromatic aberrations that may degrade the Point Spread Function (PSF), resulting in a blurrier image. To address this issue, in an embodiment, the optics of sub-camera 2 are optimized across both the visible and the IR range, to mitigate the effect of chromatic aberrations and to result in a more compact PSF compared with standard compact camera optics that use an IR filter. This is unlike the standard optimization process, which considers only wavelengths inside the visible range.
[0043] In use, the two sub-cameras share a similar FOV and have substantially equal (limited only by manufacturing tolerances) focal lengths. The image capture process is synchronized, so that the two sub-cameras capture an image of the scene at the same moment. Due to the small baseline between the two apertures of the sub-cameras (which may be only a few millimeters, for example 6.5 mm or 8.5 mm), the output images may show parallax, depending on the object distances in the scene. A digital image processing algorithm combines the two images into one image, in a process called image fusion. Henceforth, the algorithm performing this process is called the image fusion algorithm. The resulting image may have a higher resolution (in terms of image pixels) and/or a higher effective resolution (in terms of the ability to resolve spatial frequencies in the scene, higher effective resolution meaning the ability to resolve higher spatial frequencies) and/or a higher SNR than that of a single sub-camera image.
[0044] In terms of resolution and exemplarily, if each sub-camera produces a 5 megapixel (2592×1944 pixels) image, the image fusion algorithm may combine the two images to produce one image with 8 megapixel (3264×2448 pixels) resolution. In terms of effective resolution, assuming that an imaged object or scene includes spatial frequencies, the use of a dual-aperture camera having a clear sensor and a color sensor as disclosed herein leads to an overall increase in effective resolution because of the ability of the clear sensor to resolve higher spatial frequencies of the luminance component of the scene, compared with a color sensor. The fusion of the color and clear images as performed in a method disclosed herein (see below) adds information in spatial frequencies which are higher than what could be captured by a color (e.g. Bayer) sub-camera.
[0045] In order to generate a higher-resolution or higher effective resolution image, the image fusion algorithm combines the color information from sub-camera 1 with the luminance information from sub-camera 2. Since clear sensor 2 samples the scene at a higher effective spatial sampling rate compared with any color channel or luminance thereof in the color sensor 1, the algorithm synthesizes an image that includes information at higher spatial frequencies compared with the output image from sub-camera 1 alone. The target of the algorithm is to achieve a spatial resolution similar to that obtained from a single-aperture camera with a sensor that has a higher number of pixels. Continuing the example above, the algorithm may combine two 5 megapixel images, one color and one luminance, to produce one 8 megapixel image with information content similar to that of a single-aperture 8 megapixel color camera.
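The combination of color information from sub-camera 1 with luminance information from sub-camera 2 can be illustrated per pixel. The sketch below is an assumption-laden simplification, not the patent's algorithm: it uses BT.601-style luma/chroma conversion (the patent does not specify a color space), takes chrominance from a registered color-sensor pixel, and substitutes the clear-sensor sample as luminance. Real fusion operates on full images after rectification and registration.

```python
# Illustrative sketch (assumptions, not the patent's method): fusing one
# registered pixel pair, keeping chrominance from the color sensor and
# taking luminance from the clear sensor.

def fuse_pixel(clear_y, color_rgb):
    r, g, b = color_rgb
    # Luma/chroma split of the color-sensor pixel (BT.601-style coefficients)
    y_color = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y_color)
    cr = 0.713 * (r - y_color)
    # Replace luminance with the clear-sensor sample, which resolves
    # higher spatial frequencies than the color sensor
    y = clear_y
    # Convert back to RGB around the new luminance
    r_out = y + 1.403 * cr
    b_out = y + 1.773 * cb
    g_out = (y - 0.299 * r_out - 0.114 * b_out) / 0.587
    return (r_out, g_out, b_out)
```

When the clear-sensor luminance happens to equal the color pixel's own luma, the pixel round-trips to (approximately) its original RGB values, which is a useful sanity check on the conversion.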
[0046] In addition to improved spatial resolution, the image fusion algorithm uses the luminance information from clear sensor 2 to generate an image with increased SNR compared with an image from a corresponding single-aperture camera. Because the pixels of sensor 2 are not covered by color filters, each pixel absorbs light over a wider wavelength spectrum, resulting in a significant increase in light efficiency compared with a color CFA camera. In an embodiment, the fusion of clear image information and color image information then provides an approximately 3 dB SNR increase over that of a single-aperture digital camera.
[0047] As clear sensor 2 is more sensitive than color sensor 1, there may be a need to adjust exposure times or analog gains to match the digital signal levels between the two cameras. This may be achieved by setting the same exposure time for both sensors and configuring a different analog gain for each sensor, or by fixing the same analog gain in both sensors and configuring a different exposure time for each sensor.
[0049] The scale adjustment, done after the rectification step, is described now in more detail with reference to
S=(Y2^T*W*Y2)^(-1)*(Y2^T*W*Y1)
where Y1 is a vector of Y coordinates of points taken from one image, Y2 is a vector of Y coordinates of points taken from the other image, and W is a diagonal matrix that holds the absolute values of Y2. Scaling factor S is then used in step 426 to scale one image in order to match the scale between the two images: the point coordinates of that image are multiplied by scaling factor S. Finally, in step 428, the corresponding pairs of scaled points are used to calculate a shift between the two images along the x and y axes. In an embodiment, only a subset of the corresponding points that lie in a certain ROI is used to calculate the shift in x and y. For example, the ROI may be the region used to determine the focus, and may be chosen by the user or by the camera software (SW). The estimated shift is applied to one of the images or to both images. The result of the scale adjustment process in
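For Y-coordinate vectors, the reconstructed weighted least-squares expression S = (Y2^T·W·Y2)^(-1)·(Y2^T·W·Y1), with W the diagonal matrix of |Y2| values, reduces to a scalar ratio of sums, and the subsequent shift of step 428 is a mean offset over the scaled corresponding points. The sketch below assumes that reconstruction; the function names are illustrative.

```python
# Sketch of the scale and shift estimates of steps 424-428, assuming the
# weighted least-squares form S = (Y2^T W Y2)^(-1) (Y2^T W Y1), W = diag(|Y2|).

def estimate_scale(y1, y2):
    """y1, y2: Y coordinates of corresponding points in the two images."""
    num = sum(abs(b) * b * a for a, b in zip(y1, y2))  # Y2^T * W * Y1
    den = sum(abs(b) * b * b for b in y2)              # Y2^T * W * Y2
    return num / den

def estimate_shift(pts1, pts2, scale):
    """Mean x/y offset between points in image 1 and scaled points in image 2
    (a simple average; an ROI subset may be used instead, as described above)."""
    n = len(pts1)
    dx = sum(x1 - scale * x2 for (x1, _), (x2, _) in zip(pts1, pts2)) / n
    dy = sum(y1 - scale * y2 for (_, y1), (_, y2) in zip(pts1, pts2)) / n
    return dx, dy
```

If one image is an exact scaled-and-shifted copy of the other, the estimator recovers the scale factor exactly, since the weighted residual is zero.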
Auto-Focus
[0054] Using two images instead of one can help reduce the noise and improve the robustness and accuracy of the AF process (algorithm).
[0055] In an embodiment, some or all the optical elements of sub-camera 1 and sub-camera 2, are made on the same die, using wafer-level optics manufacturing techniques or injection molding of glass or plastic materials. In this case, the single AF mechanism moves the optical dies on which the optical elements of the two sub-cameras are fabricated, so that the two optical stacks move together.
[0056] In another embodiment, a camera is similar to camera 500 and includes a single AF mechanism placed on sub-camera 1 (with the color CFA). Sub-camera 2 does not have an AF mechanism, but instead uses fixed-focus optics with unique characteristics that provide an extended depth of focus, achieved by means of optical design (e.g. by employing optics with a narrower aperture and a higher F-number). The optical performance of the optics of sub-camera 2 is designed to support sharp images for object distances between infinity and several cm from the camera. In this case, the fusion algorithm can be applied to enhance output resolution for a wider range of object distances compared with the single-AF embodiment described above. There is usually a tradeoff between the DOF of the camera and the minimal achievable PSF size across the DOF range. An algorithm may be used to enhance the sharpness of the image captured by sub-camera 2 before the fusion algorithm is applied to combine the two images. Such an algorithm is known in the art.
[0057] To conclude, dual-aperture cameras and methods of use of such cameras disclosed herein have a number of advantages over single-aperture cameras in terms of camera height, resolution, effective resolution and SNR. In terms of camera height, in one example, a standard 8 Mpix camera with a 70 degree diagonal FOV may have a module height of 5.7 mm. In comparison, a dual-aperture camera disclosed herein, with two 5 Mpix image sensors (one color and one clear), each with a 70 degree diagonal FOV, may have a module height of 4.5 mm. In another example, a standard 8 Mpix camera with a 76 degree diagonal FOV may have a module height of 5.2 mm. In comparison, a dual-aperture camera disclosed herein, with two 5 Mpix image sensors (one color and one clear), each with a 76 degree diagonal FOV, may have a module height of 4.1 mm.
[0058] While this disclosure has been described in terms of certain embodiments and generally associated methods, alterations and permutations of the embodiments and methods will be apparent to those skilled in the art. The disclosure is to be understood as not limited by the specific embodiments described herein, but only by the scope of the appended claims.