SIGNAL PROCESSING APPARATUS, MOVING BODY, AND STEREO CAMERA
20200065987 · 2020-02-27

Parallax information in a first direction is obtained from a first imaging device and a second imaging device. Parallax information in a second direction differing from the first direction is obtained from first and second photoelectric conversion portions included in the first imaging device. Distance information on a distance to an object is obtained from the parallax information in the first direction and the parallax information in the second direction.
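The core relationship behind both parallax measurements is stereo triangulation. A minimal sketch, assuming pinhole geometry and a simple weighted-average fusion rule (the abstract does not specify how the two directional estimates are combined); all numeric values are illustrative:

```python
def disparity_to_distance(disparity_px, focal_length_px, baseline_m):
    """Triangulate distance from parallax: Z = f * B / d, with f the focal
    length in pixels, B the baseline in meters, d the disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

def fuse_directions(z_first_dir, z_second_dir, w=0.5):
    """Combine the two directional distance estimates. A weighted average
    is one simple assumption; the patent abstract does not give the rule."""
    return w * z_first_dir + (1.0 - w) * z_second_dir

# First direction: the two-camera pair (wide baseline, larger disparity)
z_h = disparity_to_distance(4.0, 700.0, 0.12)    # about 21.0 m
# Second direction: the in-sensor photodiode pair (tiny baseline, tiny disparity)
z_v = disparity_to_distance(0.14, 700.0, 0.004)  # about 20.0 m
z = fuse_directions(z_h, z_v)                    # about 20.5 m
```

The point of the two directions is robustness: an edge parallel to one baseline produces no measurable disparity in that direction, but the orthogonal pair can still resolve it.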

Depth estimation method and depth estimation apparatus of multi-view images
10573017 · 2020-02-25

A depth estimation method and a depth estimation apparatus for multi-view images. The method takes each image among a plurality of images of the same scene as a current image and performs the following processing: obtaining an initial depth value of each pixel in the current image; dividing the current image into a plurality of superpixels; obtaining plane parameters of the superpixels according to a predetermined constraint condition based on the initial depth values; and generating a depth value of each pixel in the superpixels based on the plane parameters. The predetermined constraint condition includes a co-connection constraint, which is related to the difference between depth values of adjacent points on neighboring superpixels that do not occlude each other.
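The plane-parameter representation and the co-connection constraint can be sketched as follows. The plane model z = a*u + b*v + c and the quadratic penalty form are assumptions for illustration; the abstract only names the constraint:

```python
def plane_depth(params, u, v):
    """Depth of pixel (u, v) under a superpixel's plane model z = a*u + b*v + c."""
    a, b, c = params
    return a * u + b * v + c

def coconnection_cost(params_p, params_q, shared_boundary):
    """Co-connection constraint between two neighboring, mutually
    non-occluding superpixels: penalize depth disagreement at adjacent
    boundary points (quadratic penalty is an assumed form)."""
    return sum((plane_depth(params_p, u, v) - plane_depth(params_q, u, v)) ** 2
               for (u, v) in shared_boundary)

# Two planes that agree along the shared boundary incur zero cost
boundary = [(10, y) for y in range(5)]
cost = coconnection_cost((0.1, 0.0, 2.0), (0.1, 0.0, 2.0), boundary)  # 0.0
```

Once plane parameters are optimized under such constraints, the final per-pixel depth is simply `plane_depth` evaluated at every pixel of the superpixel, which is what makes the output piecewise planar and smooth across non-occluding neighbors.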

Image capturing apparatus, monitoring system, image processing apparatus, image capturing method, and non-transitory computer readable recording medium

There is provided an image capturing apparatus that captures a plurality of images, calculates a three-dimensional position from the plurality of images, and outputs the plurality of images and information about the three-dimensional position. The image capturing apparatus includes an image capturing unit, a camera parameter storage unit, a position calculation unit, a position selection unit, and an image complementing unit. The image capturing unit outputs the plurality of images using at least three cameras. The camera parameter storage unit stores, in advance, camera parameters including occlusion information. The position calculation unit calculates three-dimensional positions of a plurality of points. The position selection unit selects a piece of position information relating to a subject area that has no occlusion, and outputs the selected position information. The image complementing unit generates a complementary image and outputs the complementary image together with the selected position information.
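The selection step can be sketched as a filter over candidate 3-D estimates. The data shapes below are illustrative assumptions; in the apparatus the occlusion knowledge comes from the stored camera parameters:

```python
def select_positions(point_candidates, occluded_pairs):
    """Position selection: keep 3-D points whose subject area is visible
    (not occluded) for the camera pair that produced them.

    point_candidates: list of (camera_pair, area_id, xyz)  -- illustrative layout
    occluded_pairs:   set of (camera_pair, area_id) known to be occluded
    """
    return [xyz for pair, area, xyz in point_candidates
            if (pair, area) not in occluded_pairs]

# With at least three cameras, the same area is triangulated by several pairs;
# an occluded pair's estimate is dropped in favor of an unobstructed one.
candidates = [(("cam0", "cam1"), "A", (1.0, 2.0, 5.0)),
              (("cam1", "cam2"), "A", (1.1, 2.0, 5.1))]
occluded = {(("cam0", "cam1"), "A")}
kept = select_positions(candidates, occluded)  # only the cam1/cam2 estimate
```

This is why the apparatus needs at least three cameras: with only two, an occlusion leaves no alternative pair to fall back on.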

Systems and methods for enhanced 3D modeling of a complex object
10565787 · 2020-02-18

A system and method for remotely and accurately generating a 3D model of a complex object is provided through the use of laser scan data and a plurality of overlapping images taken of the complex object. To generate the 3D model, first, second, and third 3D point clouds may be derived from laser scan data obtained from one or more LiDAR scanners at first, second, and third locations, respectively, near the complex object. A fourth 3D point cloud of a first portion of the complex object may be derived from a plurality of overlapping images, wherein at least a section of the first portion of the complex object is partially or wholly occluded. The first, second, third, and fourth 3D point clouds may be combined into a single 3D point cloud, and a 3D model of the complex object may be generated from the single 3D point cloud.
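The combination step, reduced to its simplest form, is a merge of the four clouds. This sketch assumes the clouds are already registered into a common coordinate frame; in practice an alignment step (e.g. ICP or target-based registration) would precede the merge:

```python
def combine_point_clouds(clouds):
    """Merge per-source 3D point clouds (as lists of (x, y, z) tuples) into
    a single cloud. Assumes all clouds share one coordinate frame."""
    merged = []
    for cloud in clouds:
        merged.extend(cloud)
    return merged

# Three LiDAR scans plus one photogrammetric cloud (values illustrative)
lidar_1 = [(0.0, 0.0, 1.0)]
lidar_2 = [(0.5, 0.0, 1.0)]
lidar_3 = [(1.0, 0.0, 1.0)]
photo   = [(0.2, 0.3, 1.1)]   # covers the partially occluded portion
single_cloud = combine_point_clouds([lidar_1, lidar_2, lidar_3, photo])
```

The design rationale is complementarity: the image-derived fourth cloud fills in the portion the scanners cannot see from their three locations.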

System and Method for Performing Quality Control of Manufactured Models

Disclosed herein are example embodiments of methods and systems for identifying manufacturing defects of a manufactured dentition model. One of the methods for performing quality control comprises determining whether the manufactured dentition model is a good product or a defective one based on a statistical characteristic of a differences model.
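A decision of this form can be sketched as thresholding summary statistics of the per-point deviations. The choice of statistics (mean absolute deviation, maximum deviation) and the tolerance values are assumptions; the abstract only says a statistical characteristic of the differences model is evaluated:

```python
import statistics

def is_defective(differences, mean_tol=0.10, max_tol=0.30):
    """Good/defective decision from a 'differences model': per-point
    deviations (e.g. in mm) between the scanned manufactured dentition
    model and its intended design."""
    devs = [abs(d) for d in differences]
    return statistics.fmean(devs) > mean_tol or max(devs) > max_tol

flag_good = is_defective([0.02, -0.03, 0.05, 0.01])  # False: within tolerance
flag_bad = is_defective([0.02, -0.03, 0.45, 0.01])   # True: one large deviation
```

Using both an aggregate and an extreme statistic catches two distinct failure modes: a systematic warp of the whole model and a single localized defect.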

IMAGE PROCESSING APPARATUS, RANGING APPARATUS AND PROCESSING APPARATUS
20200051264 · 2020-02-13

According to one embodiment, an image processing apparatus includes a memory and one or more hardware processors electrically coupled to the memory. The one or more hardware processors acquire a first image of an object including a first shaped blur and a second image of the object including a second shaped blur. The first image and the second image are captured at the same time through a single image-forming optical system. The one or more hardware processors acquire distance information to the object based on the first image and the second image, using a statistical model that has been trained in advance.

REMOVAL OF PROJECTION NOISE AND POINT-BASED RENDERING

Embodiments described herein provide an apparatus comprising a processor and a memory communicatively coupled to the processor. The processor divides a first image projection into a plurality of regions, the plurality of regions comprising a plurality of points; determines an accuracy rating for each of the plurality of regions; and applies a first rendering technique to a first region when the accuracy rating for that region fails to meet an accuracy threshold, or a second rendering technique to the first region when the accuracy rating meets the threshold. Other embodiments may be described and claimed.
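The per-region dispatch reduces to a threshold test. The concrete techniques are not named in the abstract; point splatting for low-accuracy regions versus mesh rendering for high-accuracy regions is one plausible pairing, used here purely as labels:

```python
def pick_renderer(accuracy, threshold, low_accuracy_technique, high_accuracy_technique):
    """Apply the first technique when a region's accuracy rating fails the
    threshold, otherwise the second."""
    return low_accuracy_technique if accuracy < threshold else high_accuracy_technique

# Illustrative regions of a projected point set, each with an accuracy rating
regions = [{"id": 0, "accuracy": 0.42}, {"id": 1, "accuracy": 0.91}]
plan = {r["id"]: pick_renderer(r["accuracy"], 0.8, "splat", "mesh")
        for r in regions}
# plan == {0: "splat", 1: "mesh"}
```

Gating the technique per region lets the renderer spend expensive reconstruction only where the projected points are reliable, while noisier regions get a cheaper, more forgiving technique.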

Image processing device, object recognizing device, device control system, image processing method, and computer-readable medium

An image processing device includes: a first generating unit configured to generate, based on a distance image generated from a plurality of captured images taken by a plurality of imaging units facing a travelling direction and made up of distance values corresponding to the travelling direction, a first image indicating a frequency distribution associating actual distances in a direction orthogonal to the travelling direction with the distance values; a second generating unit configured to generate, based on the distance image, a second image indicating a frequency distribution associating a horizontal direction of the distance image with the distance values; a first processing unit configured to detect a face of an object represented by the first image, using at least the first image; and a second processing unit configured to identify a type of the face of the object, using at least the second image.
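The second generating unit's output can be sketched as a per-column histogram of distance (disparity) values, often called a U-disparity map in the stereo-vision literature. The input layout below is an illustrative assumption: rows of integer disparities, with 0 meaning no measurement:

```python
from collections import Counter

def u_disparity(disparity_image):
    """For each horizontal image position (column), build a frequency
    distribution of the distance values occurring in that column.
    A tall peak at one disparity typically marks an upright object face."""
    width = len(disparity_image[0])
    histograms = [Counter() for _ in range(width)]
    for row in disparity_image:
        for u, d in enumerate(row):
            if d > 0:
                histograms[u][d] += 1
    return histograms

image = [[0, 8, 8],
         [3, 8, 8],
         [3, 8, 7]]
hists = u_disparity(image)
# hists[1][8] == 3: column 1 sees disparity 8 in every row
```

The first generating unit builds the analogous distribution over real lateral distances; detecting an object face then amounts to finding the concentrated peaks in these maps.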

Systems and Methods for Decoding Image Files Containing Depth Maps Stored as Metadata

Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from a reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post-process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.

SENSOR-BASED MOVING OBJECT LOCALIZATION SYSTEM AND METHOD

A moving object localization system includes: a sensor that detects a moving object and measures positional information of the moving object; and a server that collects the measured information from the sensor and calculates a position of the moving object.