Patent classifications
G06T7/596
INFORMATION PROCESSING APPARATUS RELATING TO GENERATION OF VIRTUAL VIEWPOINT IMAGE, METHOD AND STORAGE MEDIUM
An object is to make it possible to arbitrarily set the height and moving speed of a virtual camera, and to obtain a virtual viewpoint video image through a simple operation in a short time. The information processing apparatus sets a movement path of a virtual viewpoint relating to a virtual viewpoint image generated based on a plurality of images obtained by a plurality of cameras, and includes: a specification unit configured to specify a movement path of a virtual viewpoint; a display control unit configured to display, on a display screen, a plurality of virtual viewpoint images in accordance with the movement path specified by the specification unit; a reception unit configured to receive an operation for at least one of the plurality of virtual viewpoint images displayed on the display screen; and a change unit configured to change the movement path specified by the specification unit in accordance with the operation received by the reception unit.
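The movement-path editing described above can be sketched as a list of keyframes that the change unit rewrites when the user manipulates one of the displayed preview images. The keyframe representation (position plus time) and the edit operation are assumptions for illustration; the patent does not specify the data structure.

```python
from dataclasses import dataclass, replace
from typing import List, Tuple

@dataclass(frozen=True)
class ViewpointKeyframe:
    # Hypothetical representation: virtual-camera position (x, y, z) in
    # metres and the time (seconds) at which the viewpoint reaches it.
    position: Tuple[float, float, float]
    time: float

def change_path(path: List[ViewpointKeyframe], index: int,
                new_position: Tuple[float, float, float]) -> List[ViewpointKeyframe]:
    """Return a new movement path with one keyframe moved, e.g. after the
    user drags the preview image corresponding to that keyframe."""
    edited = list(path)
    edited[index] = replace(edited[index], position=new_position)
    return edited

path = [ViewpointKeyframe((0.0, 1.5, 0.0), 0.0),
        ViewpointKeyframe((10.0, 1.5, 0.0), 2.0)]
# Raise the virtual camera at the second keyframe to a height of 5 m.
new_path = change_path(path, 1, (10.0, 5.0, 0.0))
```

Because keyframes are immutable and the edit returns a fresh list, the original path survives for undo, matching the pattern of re-displaying the virtual viewpoint images after each change.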
Viewpoint-Adaptive Three-Dimensional (3D) Personas
Systems and methods relate to receiving a plurality of video streams captured of a subject by a plurality of video cameras, each video stream including video frames time-synchronized according to a shared frame rate, each video camera having a known vantage point in a predetermined coordinate system; obtaining at least one three-dimensional (3D) mesh of the subject at the shared frame rate, the 3D mesh time-synchronized with the video frames of the video streams, the at least one mesh including a plurality of vertices with known locations in the predetermined coordinate system; calculating one or more lists of visible-vertices at the shared frame rate, each list including a subset of the plurality of vertices of the at least one 3D mesh of the subject, the subset being a function of the location of the known vantage point associated with at least one of the plurality of video cameras; generating one or more time-synchronized data streams at the shared frame rate, the one or more time-synchronized data streams including: one or more video streams encoding at least one of the plurality of video streams; and one or more geometric-data streams including the calculated one or more visible-vertices lists; and transmitting the one or more time-synchronized data streams to a receiver for rendering of a viewpoint-adaptive 3D persona of the subject.
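The per-camera visible-vertices lists above can be approximated with a simple orientation test: a vertex is a candidate for visibility from a vantage point when its normal faces that camera. This is a minimal sketch under that assumption; a full implementation would also resolve occlusion (e.g. with a depth buffer), which the abstract leaves unspecified.

```python
import numpy as np

def visible_vertices(vertices: np.ndarray, normals: np.ndarray,
                     camera_pos: np.ndarray) -> np.ndarray:
    """Return indices of mesh vertices whose normal faces the camera
    (a back-face test only; occlusion handling is omitted)."""
    to_camera = camera_pos - vertices                 # (N, 3) vertex -> camera
    facing = np.einsum('ij,ij->i', normals, to_camera)  # per-vertex dot product
    return np.nonzero(facing > 0.0)[0]

verts = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
norms = np.array([[0.0, 0.0, -1.0], [0.0, 0.0, 1.0]])
cam = np.array([0.0, 0.0, 5.0])
idx = visible_vertices(verts, norms, cam)  # only the vertex facing the camera
```

Running this test once per frame per camera yields exactly the kind of time-synchronized visible-vertices list the geometric-data stream carries.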
Structured-stereo imaging assembly including separate imagers for different wavelengths
The present disclosure describes structured-stereo imaging assemblies including separate imagers for different wavelengths. The imaging assembly can include, for example, multiple imager sub-arrays, each of which includes a first imager to sense light of a first wavelength or range of wavelengths and a second imager to sense light of a different second wavelength or range of wavelengths. Images acquired from the imagers can be processed to obtain depth information and/or improved accuracy. Various techniques are described that can facilitate determining whether any of the imagers or sub-arrays are misaligned.
Method and System for Multiple Stereo Based Depth Estimation and Collision Warning/Avoidance Utilizing the Same
The present teaching relates to a method, system, medium, and implementation for determining depth information in autonomous driving. Stereo images are first obtained from multiple stereo pairs selected from at least two stereo pairs. The at least two stereo pairs have stereo cameras installed with the same baseline and in the same vertical plane. Left images from the multiple stereo pairs are fused to generate a fused left image, and right images from the multiple stereo pairs are fused to generate a fused right image. Disparity is then estimated based on the fused left and right images, and depth information can be computed based on the stereo images and the disparity.
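The fusion-then-disparity pipeline above reduces, for each pixel, to the standard pinhole stereo relation depth = f · B / d. A minimal sketch follows; the mean is used as a stand-in fusion operator, since the abstract does not specify how the left (or right) images are combined.

```python
import numpy as np

def fuse(images):
    """Hypothetical fusion of co-registered left (or right) images from the
    stereo pairs; the patent does not name the operator, so a mean is used."""
    return np.mean(np.stack(images), axis=0)

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Pinhole stereo relation: depth = f * B / d."""
    return focal_px * baseline_m / disparity_px

# A 700 px focal length and a 0.5 m baseline seeing a 7-pixel disparity
# place the point 50 m away.
d = depth_from_disparity(700.0, 0.5, 7.0)  # -> 50.0
```

The same-baseline, same-vertical-plane constraint in the abstract is what makes pixel-wise fusion of the left images (and of the right images) geometrically meaningful before disparity estimation.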
INFORMATION PROCESSING APPARATUS AND METHOD, VEHICLE, AND INFORMATION PROCESSING SYSTEM
An automobile-mounted imaging apparatus and a computer-readable storage medium for detecting a distance to at least one object. The apparatus comprises circuitry configured to select, based on at least one condition, at least two images from images captured by at least three cameras to use for detecting the distance to the at least one object. Alternatively or additionally, the apparatus comprises circuitry configured to select, based on at least one condition, two cameras of at least three cameras for detecting the distance to the at least one object. Alternatively or additionally, the apparatus comprises circuitry configured to determine, based on at least one condition, which of at least two cameras among at least three cameras capturing images to use for detecting the distance to the at least one object.
Color Haar Classifier for Retail Shelf Label Detection
A method is described that enables a multiple-camera sensor suite mounted on an autonomous robot to detect and recognize retail shelf labels using color Haar classifiers.
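A Haar classifier evaluates rectangle-difference features in constant time via an integral image; a "color" variant applies such features to individual color channels rather than grayscale. The sketch below shows the core primitive on one channel; the window coordinates and the two-rectangle feature are illustrative, not the patent's specific feature set.

```python
import numpy as np

def padded_integral(channel: np.ndarray) -> np.ndarray:
    """Integral image with a zero row/column so box sums can start at index 0."""
    return np.pad(channel.astype(np.int64), ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def box_sum(ii: np.ndarray, r0: int, c0: int, r1: int, c1: int) -> int:
    """Sum of channel[r0:r1, c0:c1] in O(1) from the integral image."""
    return int(ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0])

def haar_two_rect(ii: np.ndarray, r0: int, c0: int, r1: int, c1: int) -> int:
    """Two-rectangle Haar feature: left half minus right half of the window.
    A label's bright background against a dark shelf edge gives a strong response."""
    cm = (c0 + c1) // 2
    return box_sum(ii, r0, c0, r1, cm) - box_sum(ii, r0, cm, r1, c1)
```

A cascade of such features, thresholded and boosted per color channel, is the usual way a Haar classifier is trained and applied over a sliding window.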
Depth map generation apparatus, method and non-transitory computer-readable medium therefor
The present disclosure provides a depth map generation apparatus, including a camera assembly with at least three cameras, an operation mode determination module and a depth map generation module. The camera assembly with at least three cameras may include a first camera, a second camera and a third camera that are sequentially aligned on a same axis. The operation mode determination module may be configured to determine an operation mode of the camera assembly. The operation mode includes at least: a first mode using images of non-adjacent cameras, and a second mode using images of adjacent cameras. Further, the depth map generation module may be configured to generate depth maps according to the determined operation mode.
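With three cameras at positions 0, b, and 2b on one axis, adjacent pairs give a baseline of b and the non-adjacent pair gives 2b; a wider baseline improves precision for distant scenes, while a narrower one suffers less occlusion up close. A minimal sketch of the mode decision under that reasoning, with an assumed distance threshold the patent does not specify:

```python
def baseline_m(mode: str, spacing_m: float) -> float:
    """Baseline of the chosen pair given the inter-camera spacing b."""
    return 2 * spacing_m if mode == "non_adjacent" else spacing_m

def select_mode(estimated_distance_m: float, threshold_m: float = 10.0) -> str:
    """Illustrative operation-mode choice: non-adjacent (wide-baseline) cameras
    for distant scenes, adjacent cameras for near scenes. The 10 m threshold
    is an assumption, not a value from the disclosure."""
    return "non_adjacent" if estimated_distance_m > threshold_m else "adjacent"

mode = select_mode(50.0)          # distant scene -> "non_adjacent"
b = baseline_m(mode, 0.1)         # 0.1 m spacing -> 0.2 m effective baseline
```

The depth map generation module would then run an ordinary stereo pipeline on the pair the selected mode designates.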
Real-time omnidirectional stereo matching method using multi-view fisheye lenses and system thereof
Provided is a real-time omnidirectional stereo matching method in a camera system including a first pair of fisheye cameras including first and second fisheye cameras provided to perform shooting in opposite directions and a second pair of fisheye cameras including third and fourth fisheye cameras provided to perform shooting in opposite directions and in which the first pair of fisheye cameras and the second pair of fisheye cameras are vertically provided, including receiving fisheye images of a subject captured through the first to the fourth fisheye cameras; selecting one fisheye camera from among fisheye cameras for each pixel of a preset reference fisheye image among the fisheye images using a sweep volume for preset distance candidates; generating a distance map for all pixels using the reference fisheye image and a fisheye image of the one fisheye camera; and performing real-time stereo matching on the fisheye images using the distance map.
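The per-pixel camera selection over a sweep volume can be sketched as follows: given a cost volume of matching costs between the reference fisheye image and each candidate camera at each distance hypothesis, pick per pixel the camera whose best hypothesis is cheapest. The cost-volume layout and the min-of-min selection rule are assumptions for illustration.

```python
import numpy as np

def best_camera_per_pixel(cost_volume: np.ndarray) -> np.ndarray:
    """cost_volume: (num_cameras, num_distances, H, W) matching costs between
    the reference fisheye image and each candidate camera over the preset
    distance candidates. Returns an (H, W) map of selected camera indices."""
    min_over_dist = cost_volume.min(axis=1)   # best hypothesis per camera
    return min_over_dist.argmin(axis=0)       # cheapest camera per pixel

cost = np.zeros((2, 3, 1, 2))
cost[0] = 1.0
cost[1] = 2.0
cost[1, 2, 0, 1] = 0.5                        # camera 1 wins at pixel (0, 1)
selection = best_camera_per_pixel(cost)       # [[0, 1]]
```

The distance map for all pixels then comes from stereo matching each pixel against its selected camera, which is what makes the method tractable in real time.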
System and Method for Performing Quality Control of Manufactured Models
Disclosed herein are example embodiments of methods and systems for identifying manufacturing defects of a manufactured dentition model. One of the methods for performing quality control comprises: determining whether the manufactured dentition model is a good or a defective product based on a statistical characteristic of a differences model. The differences model can be generated based on differences between scanned 3D patient-dentition data and scanned 3D manufactured-dentition data. The scanned 3D patient-dentition data can be generated using 3D data of a patient's dentition, and the scanned 3D manufactured-dentition data can be generated using 3D data of the manufactured dentition model. The manufactured dentition model can be a 3D printed model.
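One plausible "statistical characteristic" of the differences model is the RMS of the per-point deviations between the two scans, thresholded against a tolerance. Both the choice of RMS and the 0.2 mm tolerance below are assumptions for illustration, not values from the disclosure.

```python
import math

def is_good_model(differences_mm, rms_tolerance_mm: float = 0.2) -> bool:
    """Classify a manufactured dentition model from per-point deviations
    (signed, in mm) between the scanned patient dentition and the scanned
    3D-printed model: pass when the RMS deviation is within tolerance."""
    rms = math.sqrt(sum(d * d for d in differences_mm) / len(differences_mm))
    return rms <= rms_tolerance_mm

is_good_model([0.05, -0.10, 0.08])   # small deviations -> good product
is_good_model([0.50, 0.60, -0.40])   # large deviations -> defective
```

Other characteristics (maximum deviation, percentile bounds) would slot into the same pass/fail structure.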
Online calibration of 3D scan data from multiple viewpoints
A calibration system and method for online calibration of 3D scan data from multiple viewpoints is provided. The calibration system receives a set of depth scans and a corresponding set of color images of a scene that includes a human-object as part of a foreground of the scene. The calibration system extracts a first three-dimensional (3D) representation of the foreground based on a first depth scan and spatially aligns the extracted first 3D representation with a second 3D representation of the foreground. The first 3D representation and the second 3D representation are associated with a first viewpoint and a second viewpoint, respectively, in a 3D environment. The calibration system updates the spatially aligned first 3D representation based on the set of color images and a set of structural features of the human-object and reconstructs a 3D mesh of the human-object based on the updated first 3D representation of the foreground.
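The spatial alignment of the two foreground representations is, at its core, a rigid registration problem. A minimal building block is the Kabsch least-squares alignment of corresponded point sets, sketched below; a full online calibration would establish correspondences first (e.g. from the structural features of the human-object), which is omitted here.

```python
import numpy as np

def rigid_align(source: np.ndarray, target: np.ndarray):
    """Least-squares rigid transform (Kabsch) mapping corresponded source
    points onto target points: returns R, t with target ~= R @ source + t."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

Iterating this step with re-estimated correspondences gives ICP, the usual engine behind aligning a 3D representation from one viewpoint with the same foreground seen from another.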