Patent classifications
G06T7/596
IMAGE CAPTURING APPARATUS, MONITORING SYSTEM, IMAGE PROCESSING APPARATUS, IMAGE CAPTURING METHOD, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM
There is provided an image capturing apparatus that captures a plurality of images, calculates a three-dimensional position from the plurality of images, and outputs the plurality of images and information about the three-dimensional position. The image capturing apparatus includes an image capturing unit, a camera parameter storage unit, a position calculation unit, a position selection unit, and an image complementing unit. The image capturing unit outputs the plurality of images using at least three cameras. The camera parameter storage unit stores in advance camera parameters including occlusion information. The position calculation unit calculates three-dimensional positions of a plurality of points. The position selection unit selects a piece of position information relating to a subject area that does not have an occlusion, and outputs the selected position information. The image complementing unit generates a complementary image, and outputs the complementary image and the selected position information.
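The position-selection step can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the data layout (per-point candidate positions keyed by camera pair, and per-point sets of occluding cameras derived from the stored occlusion information) is assumed for the example.

```python
# Illustrative sketch: pick, for each scene point, a 3D position computed from
# a camera pair in which the point is not occluded. The dictionary shapes are
# assumptions for this example, not the patent's actual data structures.

def select_positions(candidates, occlusion):
    """candidates: {point_id: {(cam_a, cam_b): (x, y, z)}}
       occlusion:  {point_id: set of cameras in which the point is occluded}"""
    selected = {}
    for point_id, by_pair in candidates.items():
        occluded = occlusion.get(point_id, set())
        for (cam_a, cam_b), xyz in by_pair.items():
            if cam_a not in occluded and cam_b not in occluded:
                selected[point_id] = xyz
                break  # keep the first occlusion-free estimate
    return selected
```

With at least three cameras, a point occluded in one view can still be recovered from a pair that excludes the occluding camera, which is what motivates storing occlusion information with the camera parameters.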
Systems and Methods for Encoding Image Files Containing Depth Maps Stored as Metadata
Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from a reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post-process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
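The "locate the encoded image / locate the metadata" steps imply a container format with addressable chunks. The abstract does not specify the layout, so the sketch below assumes a simple tag-length-value container (4-byte ASCII tags, big-endian 32-bit lengths) purely for illustration:

```python
import struct

# Illustrative sketch: walk a hypothetical tag-length-value container and
# index its chunks, so a renderer could look up the encoded image ("IMG ")
# and the depth-map metadata ("DPTH"). The tags and layout are assumptions,
# not the file format described in the patent.

def locate_chunks(blob):
    """Return {tag: payload} for every TLV chunk in the file."""
    chunks, offset = {}, 0
    while offset + 8 <= len(blob):
        tag = blob[offset:offset + 4].decode("ascii")
        (length,) = struct.unpack(">I", blob[offset + 4:offset + 8])
        chunks[tag] = blob[offset + 8:offset + 8 + length]
        offset += 8 + length
    return chunks
```

A renderer would then decode the image payload and apply depth-dependent post-processing per pixel using the depth-map chunk.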
SYSTEMS AND METHODS FOR OBJECT REPLACEMENT
Some embodiments provide systems and methods to enable object replacement. A central computing system can receive data associated with quantities of like physical objects from remote systems. The central computing system can adjust a first quantity of the like physical objects stored in a first one of the remote systems based on a second quantity of the like physical objects stored in at least another one of the remote systems. The central computing system can determine that the like physical objects are absent from a facility. An autonomous robot device can detect a vacant space at the designated location at which the like physical objects are supposed to be disposed. Using its image capturing device, the autonomous robot device can capture an image of the vacant space. The central computing system can determine a set of like replacement physical objects to be disposed in the vacant space.
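The quantity-adjustment and absence-determination steps can be sketched with a toy inventory model. The function names and the rebalancing rule (move units from a donor facility's count to another's) are illustrative assumptions; the patent does not specify how the adjustment is computed.

```python
# Illustrative sketch: the central system adjusts one facility's recorded
# quantity based on another's, and flags the like objects as absent when
# every facility's count has reached zero. All names and the transfer rule
# are assumptions for this example.

def reconcile(quantities, target, donor, amount):
    """Shift up to `amount` units of like objects from donor's count to target's."""
    moved = min(amount, quantities[donor])
    quantities[donor] -= moved
    quantities[target] += moved
    return quantities

def is_absent(quantities):
    """True when the like physical objects are absent from every facility."""
    return all(q == 0 for q in quantities.values())
```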
Three-dimensional sensor system and three-dimensional data acquisition method
A three-dimensional sensor system includes three cameras, a projector, and a processor. The projector simultaneously projects at least two linear patterns on the surface of an object. The three cameras synchronously capture a first two-dimensional (2D) image, a second 2D image, and a third 2D image of the object, respectively. The processor extracts a first set and a second set of 2D lines from the at least two linear patterns on the first 2D image and the second 2D image, respectively; generates a candidate set of three-dimensional (3D) points from the first set and the second set of 2D lines; and selects, from the candidate set of 3D points, an authentic set of 3D points that matches a projection contour line of the object surface by: performing data verification on the candidate set of 3D points using the third 2D image, and filtering the candidate set of 3D points.
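The select-and-verify step can be illustrated for a simplified rectified rig: candidate 3D points are triangulated from the first two cameras, then each candidate is reprojected into the third camera and kept only if it agrees with an observation there. The focal length, baselines, and tolerance below are illustrative values, not parameters from the patent.

```python
# Illustrative sketch for a rectified three-camera rig. Cameras 1 and 2 form
# a horizontal stereo pair (baseline b12); camera 3 is offset by b13 along the
# same axis and is used only for data verification. f is focal length in px.

def triangulate(x1, x2, y, f=500.0, b12=0.1):
    """Candidate 3D point from a rectified pair: Z = f * b12 / disparity."""
    z = f * b12 / (x1 - x2)
    return (x1 * z / f, y * z / f, z)

def verify(point, observed_x3, f=500.0, b13=0.2, tol=1.0):
    """Reproject into camera 3 and check against the observed x-coordinate."""
    x, _, z = point
    x3 = f * (x - b13) / z
    return abs(x3 - observed_x3) <= tol

def authentic_points(candidates, observations3):
    """Filter the candidate set, keeping points confirmed by the third image."""
    return [p for p, obs in zip(candidates, observations3) if verify(p, obs)]
```

With two simultaneously projected lines the two-camera correspondences are ambiguous, which is why the third image is needed to filter spurious candidates.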
SELECTIVELY PAIRED IMAGING ELEMENTS FOR STEREO IMAGES
This disclosure describes a configuration of an aerial vehicle, such as an unmanned aerial vehicle (UAV), that includes a plurality of cameras that may be selectively combined to form a stereo pair for use in obtaining stereo images that provide depth information corresponding to objects represented in those images. Depending on the distance between an object and the aerial vehicle, different cameras may be selected for the stereo pair based on the baseline between those cameras and a distance between the object and the aerial vehicle. For example, cameras with a small baseline (close together) may be selected to generate stereo images and depth information for an object that is close to the aerial vehicle. In comparison, cameras with a large baseline may be selected to generate stereo images and depth information for an object that is farther away from the aerial vehicle.
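The baseline-selection logic can be made concrete with the standard stereo depth-error model, dZ ≈ Z² · dd / (f · B): for a fixed disparity noise dd, error grows quadratically with distance and shrinks with baseline, so a farther object needs a wider pair. The focal length, disparity noise, and error budget below are illustrative assumptions, not values from the disclosure.

```python
# Illustrative sketch: choose the smallest-baseline camera pair whose expected
# depth error at the object's distance stays within a budget. Constants are
# assumptions for this example.

def depth_error(distance, baseline, focal_px=600.0, disparity_noise_px=0.25):
    """Approximate stereo depth error: dZ = Z^2 * dd / (f * B)."""
    return distance ** 2 * disparity_noise_px / (focal_px * baseline)

def select_pair(pairs, distance, max_error):
    """pairs: [(name, baseline_m)]. Smallest baseline meeting the budget wins."""
    for name, baseline in sorted(pairs, key=lambda p: p[1]):
        if depth_error(distance, baseline) <= max_error:
            return name
    return None  # no pair on the vehicle can meet the budget at this range
```

This matches the abstract's behavior: close objects get the small-baseline (close-together) pair, distant objects the large-baseline pair.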
Sensor-based moving object localization system and method
A moving object localization system includes: a sensor that detects a moving object and measures positional information of the moving object; and a server that collects the measured information from the sensor and calculates a position of the moving object.
Hybrid system with a structured-light stereo device and a time of flight device
A system for real-time depth sensing includes a structured-light stereo device, a time of flight device, and a computing device. The computing device executes instructions that cause the computing device to determine depth measurements of a scene using information received by the structured-light stereo device and to determine time of flight measurements of the scene using information received by the time of flight device. The computing device executes further instructions that cause the computing device to generate a depth map using the depth measurements, to generate calibration points using the time of flight measurements, and to update the depth map using the calibration points.
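The update step can be sketched as a calibration fit: the sparse time-of-flight measurements act as calibration points, and a correction fitted to them is applied across the dense structured-light depth map. A one-dimensional scale/offset least-squares fit stands in here for whatever correction model the system actually uses; it is an assumption for illustration.

```python
# Illustrative sketch: fit tof ~= a * stereo + b over sparse calibration
# points, then apply the correction to every pixel of the dense depth map.

def fit_correction(stereo_depths, tof_depths):
    """Ordinary least squares for a * x + b over paired depth samples."""
    n = len(stereo_depths)
    sx, sy = sum(stereo_depths), sum(tof_depths)
    sxx = sum(x * x for x in stereo_depths)
    sxy = sum(x * y for x, y in zip(stereo_depths, tof_depths))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def update_depth_map(depth_map, a, b):
    """Apply the fitted correction to a dense 2D depth map (list of rows)."""
    return [[a * d + b for d in row] for row in depth_map]
```

This reflects the usual division of labor in such hybrids: the structured-light stereo device supplies dense but bias-prone depth, while the time-of-flight device supplies sparse but metrically accurate anchors.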
DRONE WITH WIDE FRONTAL FIELD OF VIEW
A drone includes a frame and a plurality of motors attached to the frame. Each motor of the plurality of motors is connected to a respective propeller located below the frame. A tail motor is attached to the frame. The tail motor is connected to a tail propeller located above the frame. Cameras are attached to the frame and located above the frame. The cameras have fields of view extending over the plurality of propellers.
APPARATUS AND METHODS FOR DETERMINING MULTI-SUBJECT PERFORMANCE METRICS IN A THREE-DIMENSIONAL SPACE
Apparatus and methods for extraction and calculation of multi-person performance metrics in a three-dimensional space. An example apparatus includes: a detector to identify a first subject in a first image captured by a first image capture device based on a first set of two-dimensional kinematic keypoints in the first image, the two-dimensional kinematic keypoints corresponding to joints of the first subject, the first image capture device associated with a first view of the first subject; a multi-view associator to verify the first subject using the first image and a second image captured by a second image capture device, the second image capture device associated with a second view of the first subject, the second view different than the first view; and a keypoint generator to generate three-dimensional keypoints for the first subject using the first set of two-dimensional kinematic keypoints.
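The multi-view association step can be sketched for rectified cameras, where corresponding keypoints of the same subject lie on (nearly) the same image rows. The matching rule, tolerance, and keypoint layout are illustrative assumptions; the patent does not commit to this particular verification criterion.

```python
# Illustrative sketch: verify a cross-view subject match by the mean row
# difference of paired 2D kinematic keypoints (x, y), assuming rectified
# views. Greedy nearest-match association; threshold is an assumption.

def row_disagreement(kps_view1, kps_view2):
    """Mean |y1 - y2| over paired keypoints from the two views."""
    diffs = [abs(y1 - y2) for (_, y1), (_, y2) in zip(kps_view1, kps_view2)]
    return sum(diffs) / len(diffs)

def associate(subjects_view1, subjects_view2, tol=3.0):
    """Match each view-1 subject to the most row-consistent view-2 subject."""
    matches = {}
    for i, kps1 in enumerate(subjects_view1):
        best = min(range(len(subjects_view2)),
                   key=lambda j: row_disagreement(kps1, subjects_view2[j]))
        if row_disagreement(kps1, subjects_view2[best]) <= tol:
            matches[i] = best
    return matches
```

Once a subject is verified across views, its matched 2D keypoints can be triangulated into the 3D keypoints from which performance metrics are computed.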
Method and device for obtaining 3D images
A method and device are provided for obtaining a 3D image. The method includes sequentially projecting a plurality of beams to an object, each of the plurality of projected beams corresponding to a respective one of a plurality of sectors included in a pattern; detecting a plurality of beams reflected off of the object corresponding to the plurality of projected beams; identifying time-of-flight (ToF) of each of the plurality of projected beams based on the plurality of detected beams; identifying a distortion of the pattern, which is caused by the object, based on the plurality of detected beams; and generating a depth map for the object based on the distortion of the pattern and the ToF of each of the plurality of projected beams, wherein the plurality of detected beams are commonly used to identify the ToF and the distortion of the pattern.
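The dual use of the detected beams can be illustrated with the two underlying depth relations: time of flight gives d = c · t / 2, and the pattern distortion gives depth by triangulating the lateral shift of a projected stripe. The per-pixel weighted fusion, focal length, and baseline below are illustrative assumptions; the patent only states that both cues feed the depth map.

```python
# Illustrative sketch: two depth estimates recovered from the same detected
# beams, merged by a weighted average. Fusion weights and triangulation
# parameters are assumptions for this example.

C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_s):
    """Depth from round-trip time of flight: d = c * t / 2."""
    return C * round_trip_s / 2.0

def pattern_depth(shift_px, focal_px=600.0, baseline_m=0.05):
    """Depth from the lateral shift (distortion) of a projected pattern."""
    return focal_px * baseline_m / shift_px

def fused_depth(round_trip_s, shift_px, w_tof=0.5):
    """Blend the two cues into one depth value for the depth map."""
    return w_tof * tof_depth(round_trip_s) + (1 - w_tof) * pattern_depth(shift_px)
```

Using one set of detected beams for both measurements is the distinctive point of the abstract: no separate capture pass is needed for the ToF and structured-light channels.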