H04N13/232

Multi-aperture ranging devices and methods

Embodiments of systems and methods for multi-aperture ranging are disclosed. An embodiment of a device includes a main lens configured to receive an image from its field of view; a multi-aperture optical component having optical elements optically coupled to the main lens and configured to create a multi-aperture image set that includes a plurality of subaperture images, wherein at least one point in the field of view is captured by at least two of the subaperture images; an array of sensing elements configured to generate signals; a readout integrated circuit (ROIC) configured to receive the signals, convert them to digital data, and output the digital data; and an image processing system, responsive to the digital data output from the ROIC, configured to generate disparity values that correspond to at least one point in common between the at least two subaperture images.
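The disparity generation step described above can be illustrated with a minimal sum-of-absolute-differences block matcher between two subaperture images. This is a generic sketch of disparity estimation, not the patented processing chain; the function name, window size, and search range are illustrative assumptions.

```python
def disparity_sad(left, right, window=1, max_disp=4):
    """Estimate per-pixel horizontal disparity between two subaperture
    images (2-D lists of intensities) by sum-of-absolute-differences
    block matching. Generic sketch only, not the patented method."""
    h, w = len(left), len(left[0])
    disp = [[0] * w for _ in range(h)]
    for y in range(window, h - window):
        for x in range(window, w - window):
            best, best_d = float("inf"), 0
            # try each candidate disparity and keep the best match
            for d in range(min(max_disp, x - window) + 1):
                sad = 0
                for dy in range(-window, window + 1):
                    for dx in range(-window, window + 1):
                        sad += abs(left[y + dy][x + dx] - right[y + dy][x + dx - d])
                if sad < best:
                    best, best_d = sad, d
            disp[y][x] = best_d
    return disp
```

For a point imaged by two subaperture images, the recovered disparity relates directly to range through the geometry of the multi-aperture optics.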

Image capture apparatus and image signal processing apparatus
09800861 · 2017-10-24

An image capture apparatus includes an image capture unit having a plurality of unit pixels, each including a plurality of photo-electric conversion units per condenser unit, and a recording unit that records captured image signals, which are captured by the image capture unit and are respectively read out from the plurality of photo-electric conversion units. The recording unit records, in association with each captured image signal, identification information that allows identification of the photo-electric conversion unit used to obtain that signal.
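The association between a captured image signal and the identification of its photo-electric conversion unit can be sketched as a simple record type. The field names and the "A"/"B" unit labels are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class CapturedSignal:
    """One captured image signal tagged with identification information
    for the photo-electric conversion unit that produced it.
    Field names and unit labels are hypothetical."""
    unit_pixel_index: int  # which unit pixel under the condenser unit
    pe_unit_id: str        # which photo-electric conversion unit (e.g. "A" or "B")
    value: int             # the recorded signal level

# two signals read out from the same unit pixel, one per conversion unit
record = [CapturedSignal(0, "A", 118), CapturedSignal(0, "B", 121)]
```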

WIDE VIEWING ANGLE STEREO CAMERA APPARATUS AND DEPTH IMAGE PROCESSING METHOD USING THE SAME
20220060677 · 2022-02-24

Disclosed are a wide viewing angle stereo camera apparatus and a depth image processing method using the same. A stereo camera apparatus includes a receiver configured to receive a first image and a second image of a subject captured through a first lens and a second lens that are provided in a vertical direction; a converter configured to convert the received first image and second image using a map projection scheme; and a processor configured to extract a depth of the subject by performing stereo matching, in a height direction, on the first image and the second image converted using the map projection scheme.
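Conversion of a wide-angle image into a map projection can be illustrated by mapping a fisheye pixel to longitude/latitude, the coordinates of an equirectangular projection. The equidistant fisheye model and the function signature below are illustrative assumptions; the patent does not fix a specific projection here.

```python
import math

def fisheye_to_equirect_dir(px, py, cx, cy, f):
    """Map a pixel (px, py) of an equidistant fisheye image with
    principal point (cx, cy) and focal length f (pixels/radian) to
    (longitude, latitude) in radians, the coordinates used by an
    equirectangular map projection. Illustrative sketch only."""
    dx, dy = px - cx, py - cy
    r = math.hypot(dx, dy)
    theta = r / f              # angle from the optical axis (equidistant model)
    phi = math.atan2(dy, dx)   # azimuth around the optical axis
    # unit direction vector, optical axis along +z
    vx = math.sin(theta) * math.cos(phi)
    vy = math.sin(theta) * math.sin(phi)
    vz = math.cos(theta)
    return math.atan2(vx, vz), math.asin(vy)
```

With both views reprojected this way, corresponding points for two vertically arranged lenses move along the height axis, so the stereo matching can be restricted to that direction.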

Estimating surface properties using a plenoptic camera

A plenoptic camera captures a plenoptic image of an object illuminated by a point source (preferably, collimated illumination). The plenoptic image is a sampling of the four-dimensional light field reflected from the object. The plenoptic image is made up of superpixels, each of which is made up of subpixels. Each superpixel captures light from a certain region of the object (i.e., a range of x,y spatial locations) and the subpixels within a superpixel capture light propagating within a certain range of directions (i.e., a range of u,v angular directions). Accordingly, optical properties estimation, surface normal reconstruction, depth estimation, and three-dimensional rendering can be provided by processing only a single plenoptic image. In one approach, the plenoptic image is used to estimate the bidirectional reflectance distribution function (BRDF) of the object surface.
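The superpixel/subpixel decomposition described above can be sketched by indexing into a raw plenoptic image stored as a 2-D array. The regular n×n grid layout (no lenslet rotation or vignetting correction) is a simplifying assumption for illustration.

```python
def superpixel(raw, sx, sy, n):
    """Extract the n-by-n block of subpixels belonging to superpixel
    (sx, sy) from a raw plenoptic image stored as a 2-D list.
    Subpixel (u, v) samples one direction of the light leaving the
    (sx, sy) region of the object, so the full image is a sampling of
    the 4-D light field L(x, y, u, v). Hypothetical regular layout."""
    return [[raw[sy * n + v][sx * n + u] for u in range(n)]
            for v in range(n)]
```

Sweeping over the subpixels of one superpixel gives the reflected radiance as a function of direction for one surface patch, which is exactly the per-patch slice needed for BRDF or surface-normal estimation.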

Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures

Imager arrays, array camera modules, and array cameras in accordance with embodiments of the invention utilize pixel apertures to control the amount of aliasing present in captured images of a scene. One embodiment includes a plurality of focal planes, control circuitry configured to control the capture of image information by the pixels within the focal planes, and sampling circuitry configured to convert pixel outputs into digital pixel data. In addition, each pixel in the plurality of focal planes includes a pixel stack comprising a microlens and an active area: light incident on the surface of the microlens is focused onto the active area by the microlens, and the active area samples the incident light to capture image information. The pixel stack defines a pixel area and includes a pixel aperture whose size is smaller than the pixel area.
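Why a pixel aperture smaller than the pixel area controls aliasing can be seen from the standard box-filter (sinc) model of a pixel's modulation transfer function: shrinking the aperture raises the response near the Nyquist rate, so more aliasable high-frequency content survives for super-resolution processing. The function below is that textbook model, not the patent's circuitry.

```python
import math

def pixel_mtf(freq, aperture):
    """MTF of a square pixel aperture at spatial frequency `freq`
    (cycles per pixel pitch), for an aperture of width `aperture`
    expressed as a fraction of the pixel pitch. Standard |sinc| model
    of a box filter; illustrative only."""
    x = math.pi * freq * aperture
    return 1.0 if x == 0 else abs(math.sin(x) / x)
```

At the Nyquist frequency (0.5 cycles per pitch), a half-pitch aperture passes noticeably more contrast than a full-pitch one, which is the aliasing-preserving effect the embodiments exploit.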

System and method for lightfield capture
11256214 · 2022-02-22

A system for generating holographic images or videos includes a camera array, a plurality of processors, and a central computing system. A method for generating holographic images can include receiving a set of images and processing the images.

Method for compressing light-field data

The present invention relates to a method for compressing light fields by exploiting their overall 4D redundancy using a hybrid approach that combines the benefits of sparse coding with pseudo-video-sequence or multi-view coding to exploit the redundancy between sub-aperture images (inter-SAI redundancy), achieving very competitive results. This redundancy is particularly high when the light field is densely sampled; the invention is therefore especially efficient for densely sampled light fields, such as those acquired by light-field cameras.
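The pseudo-video-sequence idea can be illustrated by the ordering step: the 2-D grid of sub-aperture images is linearized so that consecutive "frames" are neighbouring views, maximizing the inter-SAI redundancy a video coder can exploit. The serpentine scan below is one common choice; it is a sketch of the ordering only, not the patent's hybrid sparse/video coder.

```python
def pseudo_video_order(rows, cols):
    """Linearize the (row, col) grid of sub-aperture images (SAIs) as a
    serpentine scan, so adjacent frames of the pseudo-video sequence are
    always neighbouring views. Illustrative scan order only."""
    order = []
    for r in range(rows):
        # alternate left-to-right and right-to-left per row
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        order.extend((r, c) for c in cs)
    return order
```

Feeding the SAIs to an inter-frame coder in this order keeps every frame-to-frame prediction between directly adjacent views, which is where the disparity (and hence the residual) is smallest for densely sampled light fields.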

LIGHT-FIELD CAMERA AND CONTROLLING METHOD
20170289522 · 2017-10-05

A method for controlling a light-field camera device includes controlling the light-field camera to capture a plurality of images. Situational measurements or markers of the light-field camera are recorded as each of the plurality of images is captured, and the plurality of images are composed into a three-dimensional image for composing and capturing a desired image.

Methods and apparatus for environmental measurements and/or stereoscopic image capture

A camera rig including one or more stereoscopic camera pairs and/or one or more light field cameras is described. Images from the light field cameras and the stereoscopic camera pairs are captured at the same time. The light field images are used to generate an environmental depth map that accurately reflects the environment in which the stereoscopic images are captured at the time of image capture. In addition to providing depth information, images captured by the light field camera or cameras are combined with, or used in place of, stereoscopic image data to allow viewing and/or display of portions of a scene not captured by a stereoscopic camera pair.