Patent classifications
G06T3/4015
PARALLEL COMPUTER VISION AND IMAGE SCALING ARCHITECTURE
Embodiments relate to the architecture of a vision pipe included in an image signal processor. The architecture includes a front-end portion with a pair of image signal pipelines that generate updated luminance image data. A back-end portion of the vision pipe architecture receives the updated luminance image data from the front-end portion and performs, in parallel, scaling and various computer vision operations on that data. The back-end portion may repeat this parallel operation on successively scaled luminance images to generate a pyramid image.
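The scale-then-process loop described by this abstract can be sketched in NumPy. The 2x2 box-filter downscale and the gradient operator below are stand-ins chosen for illustration; the abstract does not specify which scaler or computer-vision operations the back-end uses, nor how many pyramid levels are produced.

```python
import numpy as np

def downscale_2x(luma):
    """2x2 box-filter downscale; a stand-in for the back-end scaler."""
    h, w = luma.shape
    return luma[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def cv_op(luma):
    """Placeholder computer-vision operation: horizontal gradient magnitude."""
    return np.abs(np.diff(luma, axis=1))

def build_pyramid(luma, levels):
    """Run the CV op on each level, then scale down for the next level.
    In the described hardware the two steps run in parallel."""
    images, features = [], []
    for _ in range(levels):
        images.append(luma)
        features.append(cv_op(luma))
        luma = downscale_2x(luma)
    return images, features
```

Each pyramid level halves the resolution, so an 8x8 luminance input yields 8x8, 4x4, and 2x2 levels over three iterations.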
LENS AND COLOR FILTER ARRANGEMENT, SUPER-RESOLUTION CAMERA SYSTEM AND METHOD
A lens and colour filter assembly contains lens units, each assigned to a single-colour colour filter unit. The lens and colour filter assembly may be combined with pixel units such that a plurality of monochromatic, low-resolution images can be obtained, where the monochromatic images are shifted versions of the same image object. A super-resolution technique comprising shift compensation yields a mosaicked image, which is then demosaiced. Only a few artefacts appear in the resultant image. Simple colour filter arrays allow a simplified fabrication process and produce fewer chromatic aberrations at lower computational effort.
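The shift-and-add step implied here can be sketched as follows. The sketch assumes exactly four monochromatic low-resolution images offset by half a pixel in each direction; real shift compensation would first estimate arbitrary sub-pixel shifts, which the abstract leaves unspecified.

```python
import numpy as np

def mosaic_from_shifted(tl, tr, bl, br):
    """Interleave four half-pixel-shifted low-resolution images onto a
    2x grid, forming the shift-compensated mosaicked image that is
    subsequently demosaiced."""
    h, w = tl.shape
    hi = np.empty((2 * h, 2 * w), dtype=tl.dtype)
    hi[0::2, 0::2] = tl  # top-left sample sites
    hi[0::2, 1::2] = tr  # shifted half a pixel right
    hi[1::2, 0::2] = bl  # shifted half a pixel down
    hi[1::2, 1::2] = br  # shifted diagonally
    return hi
```

Because each low-resolution image came through its own single-colour filter unit, the interleaved result is a colour mosaic at twice the native resolution.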
Systems and methods for obtaining color imagery using single photon avalanche diodes
A system for obtaining color imagery using SPADs includes a SPAD array that has a plurality of SPAD pixels. Each of the plurality of SPAD pixels includes a respective color filter positioned thereover. The system is configurable to capture an image frame using the SPAD array and generate a filtered image by performing a temporal filtering operation using the image frame and at least one preceding image frame. The at least one preceding image frame is captured by the SPAD array at a timepoint that temporally precedes a timepoint associated with the image frame. The system is also configurable to, after performing the temporal filtering operation, generate a color image by demosaicing the filtered image.
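The temporal filtering operation over the current and preceding SPAD frames can be sketched as a running exponential average. The recursive form and the weight `alpha` are assumptions; the abstract only states that the current frame is filtered against at least one temporally preceding frame before demosaicing.

```python
import numpy as np

def temporal_filter(frame, history, alpha=0.2):
    """Blend the current SPAD image frame with the filtered history of
    preceding frames to suppress photon shot noise before demosaicing.
    alpha (weight on the current frame) is an assumed parameter."""
    if history is None:
        return frame.astype(float)
    return alpha * frame + (1.0 - alpha) * history
```

Feeding successive frames through this filter, then demosaicing the final `history`, matches the order of operations the abstract describes: temporal filtering first, colour reconstruction afterwards.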
IMAGE PROCESSOR
An image processor processes an image having color pixels (R, G, B) arranged in a Bayer pattern and provides a de-correlated image composed of three types of components (Y, Cr, Cb). The image processor provides a component of the first type (Y) as a substitute for a pixel of the first type (G), where the component of the first type (Y) is a weighted combination of a cluster of pixels that includes the pixel of the first type (G) and its neighboring pixels. The neighboring pixels of the second and third types (R, B) have an overall positive weighting factor, corresponding to an overall addition of neighboring pixels of the second and third types (R, B) to the pixel of the first type (G). The image processor provides a component of the second type (Cr) and a component of the third type (Cb) in a similar fashion.
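The weighted-cluster substitution for the Y component can be sketched on a 3x3 Bayer patch centred on a G pixel. The specific weights below are illustrative (chosen to sum to 1) and are not taken from the patent; the only property the abstract fixes is that the R/B neighbours carry an overall positive weight.

```python
import numpy as np

def y_from_cluster(patch, w_center=0.5, w_cross=0.125):
    """Replace the central G pixel of a 3x3 Bayer patch with a Y value:
    a weighted combination of the G pixel and its four R/B neighbours.
    The R/B weight is overall positive, as the abstract requires;
    the numeric weights are assumptions."""
    g = patch[1, 1]
    rb = patch[0, 1] + patch[2, 1] + patch[1, 0] + patch[1, 2]
    return w_center * g + w_cross * rb
```

With weights summing to 1, a flat grey patch maps to the same grey value, which is the behaviour one would expect of a luma substitute.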
IMAGE SENSOR, IMAGING APPARATUS, ELECTRONIC DEVICE, IMAGE PROCESSING SYSTEM, AND SIGNAL PROCESSING METHOD
Provided are an image sensor, an imaging apparatus, and a signal processing method. The image sensor includes a filter array, a pixel array, and a processing circuit. The filter array includes a plurality of filter regions, each including a plurality of filter units. The processing circuit is configured to: combine the electrical signals generated by the pixels corresponding to each filter unit and output the result as a combined luminance value, forming a first intermediate image; generate a first color signal, a second color signal, and a third color signal based on the electrical signals generated by the pixels corresponding to each filter region; process the first, second, and third color signals to obtain a plurality of second intermediate images representing chrominance values of the filter regions; and fuse the first intermediate image with the second intermediate images to obtain a first target image.
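The first step, combining per-filter-unit pixel signals into one luminance value each, is a binning operation and can be sketched directly. The 2x2 filter-unit size and the use of a sum (rather than an average) as the combination are assumptions; the abstract does not fix either.

```python
import numpy as np

def first_intermediate_image(pixels, unit=2):
    """Combine (sum) the electrical signals of the pixels under each
    filter unit into one combined luminance value per unit, forming
    the first intermediate image. unit=2 (2x2 pixels per filter unit)
    is an assumed geometry."""
    h, w = pixels.shape
    return pixels.reshape(h // unit, unit, w // unit, unit).sum(axis=(1, 3))
```

The second intermediate images (chrominance per filter region) would be produced at a coarser grid in the same reshaping style, then upsampled and fused with this luminance image.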
Image capturing device, image processing device and display device for setting different exposure conditions
An image capturing device includes: an image capturing element having a first image capturing region that captures an image of a photographic subject and outputs a first signal, and a second image capturing region that captures an image of the photographic subject and outputs a second signal; a setting unit that sets an image capture condition for the first image capturing region to an image capture condition that is different from an image capture condition for the second image capturing region; a correction unit that performs correction upon the second signal, for employment in interpolation of the first signal; and a generation unit that generates an image of the photographic subject that has been captured by the first image capturing region by employing a signal generated by interpolating the first signal according to the second signal as corrected by the correction unit.
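The correction of the second signal before it is used to interpolate the first signal can be sketched under a simple assumption: that the two image capture conditions differ only in exposure time and that the sensor response is linear. Both the gain model and the three-tap interpolation below are illustrative, not taken from the patent.

```python
def correct_second_signal(second, exposure_first, exposure_second):
    """Scale the second region's signal to the first region's capture
    condition so it can be employed in interpolation of the first
    signal. A linear gain over exposure time is an assumption."""
    return second * (exposure_first / exposure_second)

def interpolate_first(first_left, first_right, second_corrected):
    """Interpolate a first-region pixel from its first-region neighbours
    and the corrected second-region signal (simple average, illustrative)."""
    return (first_left + first_right + second_corrected) / 3.0
```

Without the correction step, a pixel captured at half the exposure would drag the interpolated value down; scaling it first keeps the interpolation consistent across the region boundary.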
Dual sensor imaging system and imaging method thereof
A dual sensor imaging system and an imaging method thereof are provided. The method includes: identifying an imaging scene; controlling a color sensor and an IR sensor to respectively capture color images and IR images using capture conditions suited to the imaging scene; calculating a signal-to-noise ratio (SNR) difference between each color image and the IR images, and a luminance mean value of each color image; selecting the color image and IR image captured under conditions where the SNR difference is less than an SNR threshold and the luminance mean value is greater than a luminance threshold, and executing a feature domain transformation to extract partial details of the imaging scene; and fusing the selected color image and IR image to adjust the partial details of the color image according to the guidance of the partial details of the IR image, obtaining a scene image with full details.
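The selection step, thresholding on SNR difference and mean luminance, can be sketched as below. The SNR estimator (mean over standard deviation) is an assumption; the abstract does not define how SNR is computed.

```python
import numpy as np

def select_images(color_images, ir_image, snr_diff_max, luma_min):
    """Pick the first colour image whose SNR differs from the IR image's
    SNR by less than snr_diff_max and whose luminance mean value exceeds
    luma_min. SNR is estimated as mean/std, an assumed estimator."""
    def snr(img):
        return img.mean() / (img.std() + 1e-9)
    for img in color_images:
        if abs(snr(img) - snr(ir_image)) < snr_diff_max and img.mean() > luma_min:
            return img
    return None
```

A candidate can match the IR image's SNR yet still be rejected for being too dark, which is why the abstract imposes both tests before the feature domain transformation.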
SYSTEM AND METHOD FOR MULTI-EXPOSURE, MULTI-FRAME BLENDING OF RED-GREEN-BLUE-WHITE (RGBW) IMAGES
A method includes obtaining multiple images of a scene using at least one red-green-blue-white (RGBW) image sensor. The method also includes generating multi-channel frames at different exposure levels from the images. The method further includes estimating motion across exposure differences between the different exposure levels using a white channel of the multi-channel frames as a guidance signal to generate multiple motion maps. The method also includes estimating saturation across the exposure differences between the different exposure levels to generate multiple saturation maps. The method further includes using the generated motion maps and saturation maps to recover saturations from the different exposure levels and generate a saturation-free RGBW frame. In addition, the method includes processing the saturation-free RGBW frame to generate a final image of the scene.
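The two map-estimation steps can be sketched per pixel on the white channel. The exposure-ratio alignment, the disagreement threshold, and the saturation level are all assumed parameters; the abstract states only that the white channel guides motion estimation and that saturation is estimated across exposure differences.

```python
import numpy as np

def motion_map(white_low, white_high, exposure_ratio, thresh=0.1):
    """Estimate motion across an exposure difference using the white (W)
    channel as guidance: brighten the low exposure by the exposure ratio
    and flag pixels that still disagree. thresh is an assumption."""
    aligned = np.clip(white_low * exposure_ratio, 0.0, 1.0)
    return (np.abs(aligned - white_high) > thresh).astype(float)

def saturation_map(white, sat_level=0.95):
    """Flag saturated pixels in one exposure level; these are later
    recovered from a shorter exposure using the motion maps."""
    return (white >= sat_level).astype(float)
```

For a static scene, the brightened low exposure matches the high exposure and the motion map is empty, so saturated high-exposure pixels can be safely replaced from the shorter exposure.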
IMAGE PROCESSING DEVICE, PROCESSING METHOD THEREOF, AND IMAGE PROCESSING SYSTEM INCLUDING THE IMAGE PROCESSING DEVICE
An image processing method performed by an image processing device may include receiving input image data from an image sensor, selecting a convolution filter corresponding to each unit region from among a plurality of convolution filters based on color pattern information of the input image data, and generating, based on the selected convolution filter, a first image to be displayed from the input image data.
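The selection-then-convolution flow can be sketched as a lookup from colour pattern information to a kernel, followed by a spatial filter. The pattern names, the kernels, and the single-channel treatment are all hypothetical; the patent does not disclose the filter bank.

```python
import numpy as np

def conv_same(img, kernel):
    """Naive zero-padded 'same' correlation; the kernels below are
    symmetric, so this equals convolution."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (pad[i:i + kh, j:j + kw] * kernel).sum()
    return out

# Hypothetical mapping from colour pattern information to kernels.
FILTERS = {
    "bayer": np.array([[0.0, 0.25, 0.0],
                       [0.25, 0.0, 0.25],
                       [0.0, 0.25, 0.0]]),
    "quad_bayer": np.full((3, 3), 1.0 / 9.0),
}

def generate_first_image(raw, pattern):
    """Select the convolution filter matching the unit region's colour
    pattern and apply it to the input image data."""
    return conv_same(raw, FILTERS[pattern])
```

Keying the filter bank on the colour pattern is what lets one device serve sensors with different mosaics, which is the point of the claimed selection step.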
MULTI-MODE DEMOSAICING FOR RAW IMAGE DATA
Embodiments relate to a multi-mode demosaicing circuit able to receive and demosaic image data in different raw image formats, such as the Bayer raw image format and the Quad Bayer raw image format. The multi-mode demosaicing circuit comprises separate circuitry for demosaicing the different image formats, each accessing a shared working memory. In addition, the multi-mode demosaicing circuit shares memory with a post-processing and scaling circuit configured to perform subsequent post-processing and/or scaling of the demosaiced image data. The operations of the post-processing and scaling circuit are modified based on the original raw image format of the demosaiced image data to use different amounts of the shared memory, compensating for the additional memory utilized by the multi-mode demosaicing circuit when demosaicing certain types of image data.