H04N2013/0077

Method and apparatus for determining disparity

A disparity determination method and apparatus are provided. The disparity determination method includes receiving first signals of an event from a first sensor disposed at a first location and second signals of the event from a second sensor disposed at a second location that is different from the first location, and extracting a movement direction of the event based on at least one of the first signals and the second signals. The disparity determination method further includes determining a disparity between the first sensor and the second sensor based on the movement direction, a difference between times at which the event is sensed by corresponding pixels in the first sensor, and a difference between times at which the event is sensed by corresponding pixels in the first sensor and the second sensor.
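The timing relationship described above can be sketched simply: the time difference between neighbouring pixels of the first sensor gives the event's speed along the baseline, and multiplying that speed by the inter-sensor time difference yields a disparity in pixels. A minimal Python sketch, with all function and parameter names hypothetical:

```python
def estimate_disparity(t_a, t_b, t_first, t_second, direction=1):
    # t_a, t_b: times at which the event is sensed by two neighbouring
    # pixels of the first sensor (one pixel pitch apart along the baseline).
    # t_first, t_second: times at which the event is sensed by corresponding
    # pixels of the first and second sensors.
    # direction: +1 or -1, the extracted movement direction along the baseline.
    dt_pixel = t_b - t_a            # time the event takes to cross one pixel
    if dt_pixel == 0:
        raise ValueError("event speed cannot be resolved")
    speed = direction / dt_pixel    # event speed in pixels per unit time
    return speed * (t_second - t_first)  # disparity in pixels
```

With an event crossing one pixel per 10 time units and reaching the second sensor 50 units later, the estimated disparity is 5 pixels.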

PHOTOGRAPHING DEVICE AND VEHICLE

A photographing device includes a first image sensor, a first filter area, a second image sensor, a first distance calculating unit, and a second distance calculating unit. The first image sensor includes a first sensor receiving light of a first wavelength band and outputting a target image, and a second sensor receiving light of a second wavelength band and outputting a reference image. The first filter area transmits a first light of a third wavelength band, which includes at least part of the first wavelength band, the first light being a part of light incident on the first image sensor. The second image sensor outputs a first image. The first distance calculating unit calculates a first distance to an object captured in the target image and the reference image. The second distance calculating unit calculates a second distance to an object captured in the reference image and the first image.
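Both distance calculating units can be illustrated with classic stereo triangulation, Z = f·B/d; the abstract does not specify the formula, so this is a sketch under that assumption, with hypothetical names and values:

```python
def distance_from_disparity(focal_px, baseline_m, disparity_px):
    # Classic triangulation: distance Z = f * B / d, where f is the focal
    # length in pixels, B the baseline between the two views in metres, and
    # d the measured disparity in pixels.
    return focal_px * baseline_m / disparity_px

# The device would form two such estimates: a first distance from the
# target/reference image pair of the first image sensor (short baseline),
# and a second distance from the reference image and the second image
# sensor's output (longer baseline).
first_distance = distance_from_disparity(1000.0, 0.01, 2.0)
second_distance = distance_from_disparity(1000.0, 0.10, 20.0)
```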

Image processing device, imaging device, and image processing program

An image processing apparatus is provided that is configured to: extract a first pixel value corresponding to a first viewpoint, which is one of a plurality of viewpoints used to capture a subject image, at a target pixel position from image data having the first pixel value; extract second and third luminance values corresponding to second and third viewpoints, which are different from the first viewpoint, at the target pixel position from luminance image data having the second and third luminance values; and calculate at least one of second and third pixel values of the second and third viewpoints such that a relational expression between the second or third pixel value and the extracted first pixel value remains correlated with a relational expression defined by the second and third luminance values.
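One plausible instance of such a relational expression is to keep the difference between the second and third viewpoint pixel values equal to the difference between their luminance values, centred on the first-viewpoint value. The abstract does not commit to this particular expression, so the following is only an illustrative sketch with hypothetical names:

```python
def viewpoint_pixel_values(p1, l2, l3):
    # Preserve the relation p2 - p3 == l2 - l3 while keeping the two
    # viewpoint pixel values centred on the first-viewpoint value p1.
    half = (l2 - l3) / 2.0
    p2 = p1 + half
    p3 = p1 - half
    return p2, p3
```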

IMAGING SYSTEM INCLUDING LIGHT SOURCE, IMAGE SENSOR, AND DOUBLE-BAND PASS FILTER
20170347086 · 2017-11-30

An imaging system includes a light source that, in operation, emits an emitted light containing a near-infrared light in a first wavelength region, an image sensor, and a double-band pass filter that transmits a visible light in at least a part of a wavelength region out of a visible region and the near-infrared light in the first wavelength region. The image sensor includes light detection cells, a first filter that selectively transmits the near-infrared light in the first wavelength region, second to fourth filters that selectively transmit lights in second to fourth wavelength regions, respectively, which are contained in the visible light, and an infrared absorption filter. The infrared absorption filter faces the second to fourth filters and absorbs the near-infrared light in the first wavelength region.
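The double-band pass filter's behaviour amounts to a membership test on two wavelength intervals. A minimal sketch; the band edges below are illustrative placeholders, not values from the patent:

```python
def double_band_pass(wavelength_nm, visible=(420, 650), nir=(800, 880)):
    # True if the wavelength falls in either pass band: the visible band
    # or the first (near-infrared) wavelength region.
    return (visible[0] <= wavelength_nm <= visible[1]
            or nir[0] <= wavelength_nm <= nir[1])
```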

System and method for generating dewarped image using projection patterns captured from omni-directional stereo cameras

A system for generating a high-resolution de-warped omni-directional stereo image from a captured omni-directional stereo image by correcting optical distortions using projection patterns is provided. The system includes a projection pattern capturing arrangement, a projector or a display, and a de-warping server. The projection pattern capturing arrangement includes one or more omni-directional cameras to capture projection patterns from the captured omni-directional stereo image from each omni-directional stereo camera. The projector or the display displays the projection patterns. The de-warping server obtains the projection patterns and processes them to generate the high-resolution de-warped omni-directional stereo image by correcting the optical distortions in the captured omni-directional stereo image and mapping the captured omni-directional stereo image to the high-resolution de-warped omni-directional stereo image.
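De-warping of this kind reduces to a per-pixel remap: for each output pixel, sample the distorted input pixel that a calibration table points at. A minimal Python sketch (in the described system the table would be recovered from the captured projection patterns; the names here are hypothetical):

```python
def dewarp(image, mapping):
    # mapping[y][x] = (src_y, src_x): for each output pixel, the distorted
    # input pixel it should be sampled from.
    h, w = len(mapping), len(mapping[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = mapping[y][x]
            out[y][x] = image[sy][sx]
    return out
```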

Methods for automatic registration of 3D image data

A method for automatic registration of 3D image data, captured by a 3D image capture system having an RGB camera and a depth camera, includes capturing 2D image data with the RGB camera at a first pose; capturing depth data with the depth camera at the first pose; performing an initial registration of the RGB camera to the depth camera; capturing 2D image data with the RGB camera at a second pose; capturing depth data at the second pose; and calculating an updated registration of the RGB camera to the depth camera.
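The "updated registration" step can be illustrated with one simple refinement: holding the rotation fixed, re-estimate the RGB-to-depth translation as the old translation plus the mean residual over corresponding 3-D points. This is only one possible update rule, not the patent's specific algorithm, and all names are hypothetical:

```python
def refine_translation(rgb_points, depth_points, t):
    # rgb_points / depth_points: corresponding 3-D points observed by the
    # RGB camera and the depth camera; t: current translation estimate.
    n = len(rgb_points)
    update = [0.0, 0.0, 0.0]
    for (rx, ry, rz), (dx, dy, dz) in zip(rgb_points, depth_points):
        update[0] += dx - (rx + t[0])
        update[1] += dy - (ry + t[1])
        update[2] += dz - (rz + t[2])
    return tuple(t[i] + update[i] / n for i in range(3))
```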

Systems and methods of creating a three-dimensional virtual image
11265531 · 2022-03-01

Aspects are generally directed toward creating a three-dimensional virtual model of a three-dimensional object identified by a user. The system captures a plurality of two-dimensional images of the object in succession from different orientations and records the plurality of images on a storage medium. The system then determines the relative change in position across the plurality of images by comparing two subsequent images, wherein the relative change is determined by a difference in color intensity values between the pixels of one image and another image, generates a plurality of arrays from the determined differences, and generates a computer image from the plurality of arrays, wherein the computer image represents the three-dimensional object.
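The array generation step amounts to a per-pixel absolute intensity difference between consecutive frames. A minimal sketch with hypothetical names (the estimation of camera motion from these arrays is not shown):

```python
def intensity_difference_array(frame_a, frame_b):
    # Per-pixel absolute difference in intensity between two consecutive
    # frames; the relative change in camera position is later estimated
    # from arrays built this way.
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]
```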

IMAGE CONVERSION APPARATUS AND METHOD

Disclosed herein are an image conversion apparatus and method. The image conversion method includes generating a multi-plane image reconfigured into layers in a depth direction based on multiple pieces of multi-view image data, generating an aggregated layer by aggregating the layers of the multi-plane image into at least one layer, and converting a multi-plane image including the aggregated layer into a two-dimensional (2D) atlas image.
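The aggregation and atlas steps can be sketched as follows, assuming one simple aggregation rule (front-most non-zero sample wins) and a trivial side-by-side atlas layout; the patent does not specify either choice, and all names are hypothetical:

```python
def aggregate_and_pack(layers, group_size):
    # layers: multi-plane image as a front-to-back list of 2-D layers.
    # Aggregate every group_size consecutive depth layers (front-most
    # non-zero sample wins), then tile the aggregated layers side by side
    # into a single 2-D atlas row.
    h, w = len(layers[0]), len(layers[0][0])
    aggregated = []
    for i in range(0, len(layers), group_size):
        group = layers[i:i + group_size]
        aggregated.append([[next((g[y][x] for g in group if g[y][x] != 0), 0)
                            for x in range(w)] for y in range(h)])
    return [sum((layer[y] for layer in aggregated), []) for y in range(h)]
```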

LIVE ACTION VOLUMETRIC VIDEO COMPRESSION / DECOMPRESSION AND PLAYBACK
20170310945 · 2017-10-26

A method for compressing geometric data and video is disclosed. The method includes receiving video and associated geometric data for a physical location, generating a background video from the video, and generating background geometric data for the geometric data outside of a predetermined distance from a capture point for the video as a skybox sphere at a non-parallax distance. The method further includes generating a geometric shape for a first detected object within the predetermined distance from the capture point from the geometric data, generating shape textures for the geometric shape from the video, and encoding the background video and shape textures as compressed video along with the geometric shape and the background geometric data as encoded volumetric video.
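The foreground/background split described above can be sketched as a distance test against the capture point, with far geometry projected onto a fixed "skybox" sphere at a non-parallax distance. A minimal Python sketch with hypothetical names and radii:

```python
import math

def split_scene(points, capture_point, radius, skybox_radius):
    # Points within `radius` of the capture point stay as foreground
    # geometry; everything further away is pushed out onto a fixed skybox
    # sphere of radius `skybox_radius` and kept as background.
    foreground, background = [], []
    for p in points:
        v = [p[i] - capture_point[i] for i in range(3)]
        d = math.sqrt(sum(c * c for c in v))
        if d <= radius:
            foreground.append(p)
        else:
            s = skybox_radius / d
            background.append(tuple(capture_point[i] + v[i] * s
                                    for i in range(3)))
    return foreground, background
```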

SYSTEM AND METHOD FOR ASSISTED 3D SCANNING

A three-dimensional scanning system includes: a camera configured to capture images; a processor; and memory coupled to the camera and the processor, the memory being configured to store: the images captured by the camera; and instructions that, when executed by the processor, cause the processor to: control the camera to capture one or more initial images of a subject from a first pose of the camera; compute a guidance map in accordance with the one or more initial images to identify one or more next poses; control the camera to capture one or more additional images from at least one of the one or more next poses; update the guidance map in accordance with the one or more additional images; and output the images captured by the camera to generate a three-dimensional model.
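A guidance map of this kind can be reduced to a per-view-bin coverage count: capturing images increments the bins they cover, and the next pose is the least-observed bin (a minimal next-best-view heuristic, not the patent's specific method; all names are hypothetical):

```python
def update_guidance(coverage, observed_bins):
    # Increment the observation count of each view bin covered by the
    # newly captured images; the guidance map is this per-bin count.
    for b in observed_bins:
        coverage[b] = coverage.get(b, 0) + 1
    return coverage

def next_pose(coverage):
    # Steer the user toward the view bin with the lowest coverage.
    return min(coverage, key=coverage.get)
```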