Patent classifications
G06T7/596
Systems and methods for depth estimation by learning triangulation and densification of sparse points for multi-view stereo
Systems and methods for estimating depths of features in a scene or environment surrounding a user of a spatial computing system, such as a virtual reality, augmented reality or mixed reality (collectively, cross reality) system, in an end-to-end process. The estimated depths can be utilized by a spatial computing system, for example, to provide an accurate and effective 3D cross reality experience.
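The densification step named in the title (turning a sparse set of triangulated depth samples into a full depth map) can be sketched as follows. This is an illustrative stand-in, not the patented learned pipeline: a simple nearest-neighbor fill replaces the learned densification network, and all names are assumptions.

```python
import numpy as np

def densify(sparse_uv, sparse_depth, height, width):
    """Nearest-neighbor densification of sparse depth samples
    (a stand-in for a learned densification stage)."""
    ys, xs = np.mgrid[0:height, 0:width]
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    # squared distance from every pixel to every sparse sample
    d2 = ((grid[:, None, :] - sparse_uv[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return sparse_depth[nearest].reshape(height, width)

# Two triangulated samples on a 4x4 image (toy data)
uv = np.array([[0.0, 0.0], [3.0, 3.0]])
z = np.array([1.0, 2.0])
dense = densify(uv, z, 4, 4)
```

Each pixel simply inherits the depth of its closest sparse sample; a real multi-view-stereo system would instead propagate and refine depths with learned features.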
Streaming mixed-reality environments between multiple devices
An immersive content presentation system can capture the motion or position of a performer in a real-world environment. A game engine can be modified to receive the position or motion of the performer and identify predetermined gestures or positions that can be used to trigger actions in a 3-D virtual environment, such as generating a digital effect, transitioning virtual assets through an animation graph, adding new objects, and so forth. Views of the 3-D environment can be rendered and composited. Information for constructing the composited views can be streamed to numerous display devices in many different physical locations using a customized communication protocol. Multiple real-world performers can interact with virtual objects through the game engine in a shared mixed-reality experience.
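The gesture-triggered actions described above can be sketched as a trigger table mapping recognized gestures to scene actions. This is a minimal illustration, not the patent's implementation; the class and method names are assumptions.

```python
from typing import Callable

class VirtualScene:
    """Toy stand-in for a game engine's 3-D virtual environment."""
    def __init__(self):
        self.events = []
    def spawn_effect(self, name):
        self.events.append(("effect", name))
    def advance_animation(self, asset):
        self.events.append(("animate", asset))

class GestureTriggerTable:
    """Dispatch predetermined performer gestures to scene actions."""
    def __init__(self, scene):
        self.scene = scene
        self.triggers: dict[str, Callable[[], None]] = {}
    def register(self, gesture, action):
        self.triggers[gesture] = action
    def on_gesture(self, gesture):
        # unregistered gestures are ignored
        if gesture in self.triggers:
            self.triggers[gesture]()

scene = VirtualScene()
table = GestureTriggerTable(scene)
table.register("raise_hand", lambda: scene.spawn_effect("sparkles"))
table.register("point", lambda: scene.advance_animation("door"))
table.on_gesture("raise_hand")   # performer motion triggers a digital effect
table.on_gesture("unknown")      # not a predetermined gesture: no action
```

In a real engine the actions would mutate scene graph state rather than append to a log, but the dispatch pattern is the same.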
Shape measuring system and shape measuring method
A shape measuring system includes an object detection unit attached to a work machine and configured to detect an object and output information of the object; a shape detection unit configured to output shape information representing a three-dimensional shape of the object by using the information of the object detected by the object detection unit; and an information providing unit configured to attach, to the shape information, time information that specifies the shape information, and to output the shape information.
MULTIMODAL FOREGROUND BACKGROUND SEGMENTATION
The subject disclosure is directed towards a framework that is configured to allow different background-foreground segmentation modalities to contribute towards segmentation. In one aspect, pixels are processed based upon RGB background separation, chroma keying, IR background separation, current depth versus background depth and current depth versus threshold background depth modalities. Each modality may contribute as a factor that the framework combines to determine a probability as to whether a pixel is foreground or background. The probabilities are fed into a global segmentation framework to obtain a segmented image.
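The per-modality contributions described above can be sketched as a weighted combination of foreground probabilities. This is an illustrative sketch, not the disclosed framework: a weighted average stands in for whatever combination rule the framework uses, and all names and weights are assumptions.

```python
import numpy as np

def fuse_modalities(probs, weights=None):
    """Combine per-modality foreground probabilities per pixel.

    probs: list of HxW arrays in [0, 1], one per modality
           (e.g. RGB separation, chroma keying, IR separation, depth tests)
    weights: optional per-modality confidence weights
    Returns an HxW foreground-probability map.
    """
    probs = np.stack(probs, axis=0)
    if weights is None:
        weights = np.ones(len(probs))
    weights = np.asarray(weights, dtype=float)
    # Weighted average of the modality probabilities; a real system
    # might instead multiply likelihood factors or feed them to a CRF.
    return np.tensordot(weights / weights.sum(), probs, axes=1)

# Example: RGB separation says foreground, the depth test disagrees
rgb_p   = np.array([[0.9, 0.2]])
depth_p = np.array([[0.4, 0.1]])
fused = fuse_modalities([rgb_p, depth_p], weights=[1.0, 2.0])
mask = fused > 0.5   # hard segmentation after global thresholding
```

The fused map would then be handed to a global segmentation stage (e.g. a graph-cut solver) rather than thresholded directly as in this toy example.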
DATA CAPTURE SYSTEM AND METHOD FOR OBJECT DIMENSIONING
A data capture system for object dimensioning includes: a motion sensor configured to generate a detection signal responsive to detecting an object at a capture position within a capture volume; a capture controller connected to the motion sensor and configured, responsive to receiving the detection signal, to generate and transmit a shutter command substantially simultaneously to each of a plurality of cameras, causing each camera to capture a respective image of a synchronous set of images of the capture volume; and an image processing server connected to each of the plurality of cameras and configured to receive the synchronous set of images from the cameras and to store the synchronous set of images in a common repository. The image processing server is further configured to generate a point cloud representing the object based on the synchronous set of images, for use in determining dimensions of the object.
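The controller's role of firing all cameras at once on a detection signal can be sketched as below. This is an assumed, simplified model: camera stubs stand in for real hardware, and threads approximate the "substantially simultaneous" shutter broadcast.

```python
import threading
import time

class Camera:
    """Hypothetical camera stub that records when it was triggered."""
    def __init__(self, cam_id):
        self.cam_id = cam_id
        self.captured_at = None
    def capture(self, frame_id):
        self.captured_at = time.monotonic()
        return (self.cam_id, frame_id)

class CaptureController:
    """On a motion-detection signal, trigger all cameras near-simultaneously
    and collect the synchronous set of images."""
    def __init__(self, cameras):
        self.cameras = cameras
    def on_detection(self, frame_id):
        images = [None] * len(self.cameras)
        threads = [
            threading.Thread(
                target=lambda i=i, c=c: images.__setitem__(i, c.capture(frame_id)))
            for i, c in enumerate(self.cameras)
        ]
        for t in threads:
            t.start()   # broadcast the shutter command
        for t in threads:
            t.join()
        return images   # one synchronous set, ready for a common repository

cams = [Camera(i) for i in range(4)]
ctrl = CaptureController(cams)
shots = ctrl.on_detection(frame_id=1)
```

In the real system the cameras are separate devices and synchronization happens over a hardware or network trigger, not Python threads; the point is that one detection event yields one tagged, synchronous image set.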
IMAGING APPARATUS, ACCESSORY, PROCESSING APPARATUS, PROCESSING METHOD, AND STORAGE MEDIUM
An imaging apparatus includes an image sensor configured to photoelectrically convert an object image formed by an imaging optical system under at least three states in which the positions of the light sources configured to emit light differ from each other, and to output at least three sets of image data, and a luminance distribution acquirer configured to acquire information on a plurality of luminance distributions of the image data based on common information on a common light amount distribution regarding the at least three states.
THREE-DIMENSIONAL SENSOR SYSTEM AND THREE-DIMENSIONAL DATA ACQUISITION METHOD
A three-dimensional sensor system includes three cameras, a projector, and a processor. The projector simultaneously projects at least two linear patterns on the surface of an object. The three cameras synchronously capture a first two-dimensional (2D) image, a second 2D image, and a third 2D image of the object, respectively. The processor extracts a first set and a second set of 2D lines from the at least two linear patterns on the first 2D image and the second 2D image, respectively; generates a candidate set of three-dimensional (3D) points from the first set and the second set of 2D lines; and selects, from the candidate set of 3D points, an authentic set of 3D points that matches a projection contour line of the object surface by: performing data verification on the candidate set of 3D points using the third 2D image, and filtering the candidate set of 3D points.
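The candidate-generation and third-view verification steps above can be sketched with standard two-view triangulation plus a reprojection test. This is an illustrative sketch under assumed toy geometry (identity intrinsics, translated cameras), not the patented method; the tolerance and all names are assumptions.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # smallest-singular-value vector
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def verify_candidates(P1, P2, P3, pts1, pts2, pts3, tol=0.01):
    """Triangulate every pairing of line points from cameras 1 and 2, then
    keep only candidates whose reprojection into camera 3 lands near an
    observed point there (the 'authentic' set)."""
    authentic = []
    for x1 in pts1:
        for x2 in pts2:
            X = triangulate(P1, P2, x1, x2)
            x3 = project(P3, X)
            if min(np.linalg.norm(x3 - q) for q in pts3) < tol:
                authentic.append(X)
    return authentic

# Toy setup: identity intrinsics, three translated cameras (assumed geometry).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
P3 = np.hstack([np.eye(3), np.array([[0.0], [-1.0], [0.0]])])
pts1 = [np.array([0.125, 0.05])]
pts2 = [np.array([-0.125, 0.05]), np.array([0.3, 0.3])]  # second is spurious
pts3 = [np.array([0.125, -0.2])]
good = verify_candidates(P1, P2, P3, pts1, pts2, pts3, tol=0.01)
```

The spurious pairing triangulates to a 3D point whose reprojection misses the observed line in the third image, so the filter discards it, which is the role the third camera plays in the abstract.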
Method and apparatus for processing image content
A method and system are provided for processing image content. The method comprises receiving information about a content image captured by at least one camera. The content includes a multi-view representation of an image containing both distorted and undistorted areas. Camera parameters and image parameters are then obtained and used to determine which areas of the image are distorted and which are undistorted. This information is used to calculate a depth map of the image. A final stereoscopic image is then rendered using the distorted and undistorted areas and the calculated depth map.
Dynamic scene three-dimensional reconstruction method, apparatus and system, server, and medium
Provided are a three-dimensional reconstruction method, apparatus and system for a dynamic scene, a server, and a medium. The method includes: acquiring multiple continuous depth image sequences of the dynamic scene, where the multiple continuous depth image sequences are captured by an array of drones equipped with depth cameras; fusing the multiple continuous depth image sequences to establish a three-dimensional reconstruction model of the dynamic scene; calculating target observation points for the array of drones according to the three-dimensional reconstruction model and the current poses of the array of drones; and instructing the array of drones to move to the target observation points to capture images, and updating the three-dimensional reconstruction model according to multiple continuous depth image sequences captured by the array of drones at the target observation points.
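The target-observation-point calculation can be sketched as a greedy next-best-view selection. This is an illustrative stand-in for the patent's calculation (which uses the reconstruction model and drone poses); the confidence scores, visibility sets, and names below are all assumptions.

```python
import numpy as np

def choose_targets(voxel_confidence, candidate_views, visibility, n_drones):
    """Greedy next-best-view selection: score each candidate observation
    point by the total uncertainty (1 - confidence) of the model voxels it
    can see, then send the drones to the highest-scoring points."""
    gains = np.array([
        (1.0 - voxel_confidence[visibility[i]]).sum()
        for i in range(len(candidate_views))
    ])
    best = np.argsort(gains)[::-1][:n_drones]
    return [candidate_views[i] for i in best]

# Four model voxels with per-voxel reconstruction confidence, and three
# candidate observation points, each seeing a subset of voxels (toy data).
conf = np.array([0.9, 0.1, 0.2, 0.8])
candidates = ["A", "B", "C"]
visibility = [np.array([0]), np.array([1, 2]), np.array([3])]
targets = choose_targets(conf, candidates, visibility, n_drones=2)
```

Viewpoint "B" sees the two least-reconstructed voxels, so it scores highest; the drones are dispatched there first, and the model is re-fused from the new captures.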
Methods and apparatus for generating a three-dimensional reconstruction of an object with reduced distortion
Methods, systems, and computer readable media for generating a three-dimensional reconstruction of an object with reduced distortion are described. In some aspects, a system includes at least two image sensors, at least two projectors, and a processor. Each image sensor is configured to capture one or more images of an object. Each projector is configured to illuminate the object with an associated optical pattern and from a different perspective. The processor is configured to perform the acts of receiving, from each image sensor, for each projector, images of the object illuminated with the associated optical pattern and generating, from the received images, a three-dimensional reconstruction of the object. The three-dimensional reconstruction has reduced distortion due to the received images of the object being generated when each projector illuminates the object with an associated optical pattern from the different perspective.