H04N13/218

Image processing apparatus, image capturing apparatus, and image processing program

Images can be processed using an image processing apparatus including: an image data obtaining section that obtains at least two pieces of parallax image data from an image capturing element in which one color filter and one opening mask correspond to each of at least a part of the photoelectric conversion elements, and that outputs the at least two pieces of parallax image data; and a correcting section that corrects a color imbalance between corresponding pixels of the at least two pieces of parallax image data, based on at least one of the position of the photoelectric conversion element in the image capturing element and the opening displacement of the opening mask.
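The claimed correction can be pictured with a simple shading-style balance (an illustrative sketch, not the patented method): estimate a smooth per-pixel gain between the two parallax images, attribute it to the position- and opening-dependent imbalance rather than to scene content, and split it evenly between the views. `balance_parallax` and its parameters are hypothetical names.

```python
import numpy as np

def balance_parallax(left, right, sigma=8):
    """Illustrative correction: estimate a smooth per-pixel gain between
    two parallax images and split the imbalance evenly between them."""
    def blur(img):
        # Simple box blur as a stand-in for a Gaussian low-pass, so the
        # gain reflects slowly varying shading, not scene detail/disparity.
        k = 2 * sigma + 1
        pad = np.pad(img, sigma, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    eps = 1e-6
    gain = blur(left) / (blur(right) + eps)  # > 1 where left is brighter
    half = np.sqrt(gain)                     # split the imbalance evenly
    return left / half, right * half
```

After correction, a uniform brightness offset between the two views is removed while scene detail in each view is preserved.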

Depth sensor
09854227 · 2017-12-26

A depth sensor comprises at least one imaging sensor, at least one multifocal lens, and a focus analyzer. The depth sensor analyzes the in-focus status of electromagnetic radiation, directed by the multifocal lens(es) onto sensing zone(s) of the imaging sensor(s) from spatial zone(s) in a measurement field, to detect the presence of object(s) in the spatial zone(s).
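A depth-from-focus reading of this abstract can be sketched as follows: score each sensing zone with a sharpness measure (variance of a discrete Laplacian is a common choice, assumed here) and treat a high in-focus score as evidence of an object in the corresponding spatial zone. The function names and threshold are assumptions, not the patented analyzer.

```python
import numpy as np

def focus_measure(tile):
    """Sharpness proxy: variance of a discrete Laplacian over one zone."""
    lap = (-4 * tile[1:-1, 1:-1]
           + tile[:-2, 1:-1] + tile[2:, 1:-1]
           + tile[1:-1, :-2] + tile[1:-1, 2:])
    return float(lap.var())

def detect_presence(zones, threshold):
    """Each entry of `zones` is the image tile that one focal region of the
    multifocal lens projects onto; an in-focus (sharp) tile implies an
    object present at the corresponding depth."""
    scores = [focus_measure(z) for z in zones]
    best = int(np.argmax(scores))
    return best, scores[best] > threshold
```

The index of the sharpest zone stands in for a coarse depth estimate; the threshold separates "object present" from "nothing in focus anywhere".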

SYSTEM AND METHOD FOR CAPTURING HORIZONTAL DISPARITY STEREO PANORAMA
20170366800 · 2017-12-21

A system for capturing a horizontal-disparity stereo panorama is disclosed. The system includes a multi-surface selective light reflector unit, a secondary reflector, and a computing unit. The multi-surface selective light reflector unit (a) obtains light rays from a 3D scene of the outside world that are relevant to creating (i) a left-eye panorama and (ii) a right-eye panorama and (b) reflects the light rays without internal reflections between them. The secondary reflector (a) obtains the reflected light rays from the multi-surface selective light reflector unit and (b) reflects them through a viewing aperture. The computing unit captures (i) the reflected light rays from the secondary reflector and (ii) the upper part of the 3D scene from a concave lens as a warped image, and processes the warped image into (a) the left-eye panorama and (b) the right-eye panorama.

System and method of limiting processing by a 3D reconstruction system of an environment in a 3D reconstruction of an event occurring in an event space

A method of limiting processing by a 3D reconstruction system of an environment in a 3D reconstruction of an event includes: dividing, by a subdivision module, the volume into sub-volumes; projecting each of the sub-volumes from each camera to create a set of sub-volume masks relative to each camera; creating an imaging mask for each camera; comparing, for each camera and by the subdivision module, the respective imaging mask to the respective sub-volume mask, and extracting at least one feature from at least one imaging mask; saving, by the subdivision module, the at least one feature to a subspace division mask; cropping the at least one feature from the imaging frames using the subspace division mask; and processing only the at least one feature for a 3D reconstruction. The system includes cameras for recording the event in imaging frames, and a subdivision module for dividing the volume into sub-volumes.

Imaging Apparatus
20220385879 · 2022-12-01

Methods and apparatus provide for: capturing an image of a subject from a position; capturing a plurality of images of the subject from a plurality of further positions around the position, such that the plurality of captured images are different in image quality or view angle from the image captured from the position; and generating data to be output on the basis of the image captured from the position and the plurality of images captured from the further positions, where at least one of: the capturing of the image or of the plurality of images uses pixels capable of detecting light in an infrared wavelength band, and the generating includes synthesizing the image from the position with the images captured from the further positions and changing a synthesis ratio according to an image synthesis position.
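The position-dependent synthesis ratio can be illustrated with a minimal blend: a weight map decides, per pixel, how much of the base-position image versus the average of the further-position images contributes. `synthesize` and `weight_map` are hypothetical names for this sketch, not the claimed apparatus.

```python
import numpy as np

def synthesize(center_img, surround_imgs, weight_map):
    """Blend the base-position image with images from further positions,
    varying the synthesis ratio per pixel. weight_map is in [0, 1]:
    1 = use the center image, 0 = use the surround average."""
    surround = np.mean(surround_imgs, axis=0)
    # Broadcast the 2D weight map over color channels if present.
    w = weight_map[..., None] if center_img.ndim == 3 else weight_map
    return w * center_img + (1.0 - w) * surround
```

In practice the weight map would follow the image synthesis position, e.g. favoring the sharper or better-exposed source at each pixel.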

WAFER LEVEL OPTICS FOR FOLDED OPTIC PASSIVE DEPTH SENSING SYSTEM
20170359568 · 2017-12-14

Certain aspects relate to wafer-level optical designs for a folded optic stereoscopic imaging system. One example folded optical path includes first and second reflective surfaces defining first, second, and third optical axes, where the first reflective surface redirects light from the first optical axis to the second optical axis and the second reflective surface redirects light from the second optical axis to the third optical axis. Such an example folded optical path further includes wafer-level optical stacks providing ten lens surfaces distributed along the first and second optical axes. A variation on the example folded optical path includes a prism having the first reflective surface, wherein plastic lenses are formed in or secured to the input and output surfaces of the prism in place of two of the wafer-level optical stacks.

FOLDED OPTIC PASSIVE DEPTH SENSING SYSTEM

Certain aspects relate to systems and techniques for folded optic stereoscopic imaging, wherein a number of folded optic paths each direct a different one of a corresponding number of stereoscopic images toward a portion of a single image sensor. Each folded optic path can include a set of optics including a first light folding surface positioned to receive light propagating from a scene along a first optical axis and redirect the light along a second optical axis, a second light folding surface positioned to redirect the light from the second optical axis to a third optical axis, and lens elements positioned along at least the first and second optical axes and including a first subset having telescopic optical characteristics and a second subset lengthening the optical path length. The sensor can be a three-dimensionally stacked backside illuminated sensor wafer and reconfigurable instruction cell array processing wafer that performs depth processing.

STEREO IMAGING SYSTEM
20170351085 · 2017-12-07

A stereoscopic optical system that includes an image member that is located at a position along a center optical axis and that has a first stereoscopic image area on a first side of the optical axis for receipt of a first stereoscopic image thereon and a second, separate stereoscopic image area on a second, separate side of the optical axis for receipt of a second, separate stereoscopic image thereon. The system includes an optical arrangement extending along the center optical axis and includes a roof prism with first and second roof segments. The arrangement is configured to transmit image-forming rays passing through the first roof segment to the first stereoscopic image area along a first optical path through the arrangement and is configured to transmit image-forming rays passing through the second roof segment to the second stereoscopic image area along a second, different optical path through the arrangement.

Image processing device, imaging device, and image processing program

An image processing apparatus is provided that is configured to: extract a first pixel value corresponding to a first viewpoint, one of a plurality of viewpoints used to capture a subject image, at a target pixel position from image data having the first pixel value; extract second and third luminance values corresponding to second and third viewpoints that are different from the first viewpoint, at the target pixel position from luminance image data having the second and third luminance values; and calculate at least one of the second and third pixel values of the second and third viewpoints such that the relational expression between the second or third pixel value and the extracted first pixel value remains correlated with the relational expression defined by the second and third luminance values.
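One simple, hypothetical reading of the claimed relational expression is a ratio: derive the second and third pixel values from the first so that their ratio follows the second/third luminance ratio while their geometric mean stays at the first-viewpoint value. This is an illustration of the idea, not the claimed formula.

```python
def side_pixel_values(p1, l2, l3, eps=1e-6):
    """Hypothetical ratio reading: the second/third pixel values inherit
    the second/third luminance ratio, centered on the first pixel value
    (so p2 * p3 == p1 ** 2 and p2 / p3 tracks l2 / l3)."""
    r = ((l2 + eps) / (l3 + eps)) ** 0.5
    p2 = p1 * r   # pushed toward the second viewpoint
    p3 = p1 / r   # pushed toward the third viewpoint
    return p2, p3
```

Applying this at every target pixel position would produce the second- and third-viewpoint images from one captured viewpoint plus the two luminance maps.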