G06T7/596

User-Guidance System Based on Augmented-Reality and/or Posture-Detection Techniques
20200311429 · 2020-10-01 ·

A user-guidance system that utilizes augmented-reality (AR) components and human-posture-detection techniques is presented. The system helps users conduct 3D body scans with smart devices more efficiently and accurately. Computer-generated AR components provide on-screen guidance that directs a camera operator to position the camera at a particular location, and with a particular tilt orientation, relative to a target object, so as to capture an image that includes a region of the target object for 3D reconstruction. Human-posture-detection techniques detect a human user's real-time posture and provide real-time on-screen feedback and instructions that guide the user toward the intended best posture for 3D reconstruction of the human user.
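The posture-feedback idea can be sketched as comparing detected joint angles against a target posture and emitting on-screen hints. This is a minimal illustration, not the patented method; the joint names, tolerance, and message format are assumptions.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 2D keypoints a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / n))))

def posture_feedback(detected, target, tol=10.0):
    """Compare detected joint angles (by joint name) against the intended
    scan posture; return guidance strings for joints outside tolerance."""
    hints = []
    for joint, want in target.items():
        got = detected[joint]
        if abs(got - want) > tol:
            hints.append(f"{joint}: adjust by {want - got:+.0f} deg")
    return hints
```

In a real system the detected angles would come from a pose-estimation model on the live camera feed, and the hints would be rendered as AR overlays.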

Obstacle avoidance system based on embedded stereo vision for unmanned aerial vehicles

Embodiments described herein provide various examples of an automatic obstacle-avoidance system for unmanned vehicles using embedded stereo vision techniques. In one aspect, an unmanned aerial vehicle (UAV) capable of performing autonomous obstacle detection and avoidance is disclosed. This UAV includes: one or more processors and a memory; a stereo vision camera set, coupled to the one or more processors and the memory, to capture a sequence of stereo images; and a stereo vision module configured to: receive a pair of stereo images captured by a pair of stereo vision cameras; perform a border-cropping operation on the pair of stereo images to obtain a pair of cropped stereo images; perform a subsampling operation on the pair of cropped stereo images to obtain a pair of subsampled stereo images; and perform a dense stereo matching operation on the pair of subsampled stereo images to generate a dense three-dimensional (3D) point map of a space corresponding to the pair of stereo images.
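The crop / subsample / dense-match pipeline can be sketched in NumPy. The block-matching (SAD) matcher below is a generic stand-in for whatever dense stereo method the embodiment uses; window size and disparity range are arbitrary assumptions.

```python
import numpy as np

def crop_border(img, margin):
    """Border cropping: discard `margin` pixels on each side."""
    return img[margin:-margin, margin:-margin]

def subsample(img, factor):
    """Subsampling: keep every `factor`-th pixel in each dimension."""
    return img[::factor, ::factor]

def dense_stereo_match(left, right, max_disp, win=2):
    """Dense stereo matching via sum-of-absolute-differences block matching.
    Returns a per-pixel disparity map; depth (and hence the 3D point map)
    follows from depth = focal_length * baseline / disparity."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            costs = [np.abs(patch - right[y - win:y + win + 1,
                                          x - d - win:x - d + win + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

Cropping and subsampling shrink the images before matching, which is what makes dense matching tractable on an embedded platform.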

Systems and methods for estimating depth from projected texture using camera arrays

Systems and methods in accordance with embodiments of the invention estimate depth from projected texture using camera arrays. One embodiment of the invention includes: at least one two-dimensional array of cameras comprising a plurality of cameras; an illumination system configured to illuminate a scene with a projected texture; a processor; and memory containing an image processing pipeline application and an illumination system controller application. The illumination system controller application directs the processor to control the illumination system to illuminate a scene with a projected texture. The image processing pipeline application directs the processor to: utilize the illumination system controller application to control the illumination system to illuminate a scene with a projected texture; capture a set of images of the scene illuminated with the projected texture; and determine depth estimates for pixel locations in an image from a reference viewpoint using at least a subset of the set of images. Generating a depth estimate for a given pixel location in the image from the reference viewpoint includes: identifying pixels in the at least a subset of the set of images that correspond to the given pixel location in the image from the reference viewpoint based upon expected disparity at a plurality of depths along a plurality of epipolar lines aligned at different angles; comparing the similarity of the corresponding pixels identified at each of the plurality of depths; and selecting the depth from the plurality of depths at which the identified corresponding pixels have the highest degree of similarity as a depth estimate for the given pixel location in the image from the reference viewpoint.
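The depth-hypothesis search can be illustrated for a single reference pixel: at each candidate depth, the expected disparity along each camera's epipolar line is computed, the corresponding pixels are gathered, and the depth where they agree best wins. This is a simplified sketch assuming a pinhole model with per-camera baselines; variance is used as a generic (dis)similarity measure, and all parameter names are assumptions.

```python
import numpy as np

def estimate_depth(ref_img, alt_imgs, baselines, focal, px, py, depths):
    """Pick the candidate depth at which pixels sampled along each camera's
    epipolar line (expected disparity d = focal * baseline / depth) are most
    similar to the reference pixel (lowest sample variance)."""
    best_depth, best_cost = None, np.inf
    for z in depths:
        samples = [ref_img[py, px]]
        ok = True
        for img, (bx, by) in zip(alt_imgs, baselines):
            # expected disparity along this camera's epipolar line
            dx = int(round(focal * bx / z))
            dy = int(round(focal * by / z))
            qx, qy = px - dx, py - dy
            if not (0 <= qy < img.shape[0] and 0 <= qx < img.shape[1]):
                ok = False
                break
            samples.append(img[qy, qx])
        if ok:
            cost = np.var(samples)  # low variance = high similarity
            if cost < best_cost:
                best_cost, best_depth = cost, z
    return best_depth
```

The projected texture matters because it guarantees distinctive pixel values even on textureless surfaces, so the variance test has something to discriminate on.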

System and method for 3D profile determination using model-based peak selection

This invention provides a system and method for selecting the correct profile from a range of peaks generated by analyzing a surface with multiple exposure levels applied at discrete intervals. The cloud of peak information is resolved, by comparison to a model profile, into a best candidate that accurately represents the object profile. Illustratively, a displacement sensor projects a line of illumination on the surface and receives reflected light at a sensor assembly at a set exposure level. A processor varies the exposure level setting in a plurality of discrete increments, and stores an image of the reflected light for each of the increments. A determination process combines the stored images and aligns the combined images with respect to a model image. Points from the combined images are selected based upon closeness to the model image to provide a candidate profile of the surface.
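The selection step can be sketched as follows: each profile column carries several candidate peak heights pooled from the exposure sweep, and the candidate closest to the model profile is kept. This is a minimal reading of the closeness criterion; the data layout is an assumption.

```python
import numpy as np

def select_profile(peak_candidates, model_profile):
    """peak_candidates[i]: candidate peak heights for column i, pooled from
    images taken at several exposure increments. For each column, keep the
    candidate nearest the model profile value."""
    profile = []
    for cands, m in zip(peak_candidates, model_profile):
        cands = np.asarray(cands, dtype=float)
        profile.append(cands[np.argmin(np.abs(cands - m))])
    return np.array(profile)
```

The multi-exposure sweep exists because no single exposure captures both dark and specular regions well; the model comparison then rejects the spurious peaks (saturation, secondary reflections) that the sweep inevitably produces.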

Processing device, object recognition apparatus, device control system, processing method, and computer-readable recording medium

According to an embodiment, a processing device includes a generating unit, a detecting unit, and a determining unit. The generating unit is configured to generate two-dimensional distribution information of an object, the two-dimensional distribution information associating at least a lateral-direction distance of the object with a depth-direction distance of the object. The detecting unit is configured to detect a continuous area having continuity in the depth direction in the two-dimensional distribution information. The determining unit is configured to determine whether the continuous area represents a detection target.
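The three units can be sketched as operations on an occupancy grid over (depth, lateral) bins. The grid construction, run-length continuity test, and minimum-extent threshold below are illustrative assumptions, not the claimed criteria.

```python
import numpy as np

def build_distribution(lateral, depth, lat_bins, depth_bins):
    """Generating unit: occupancy grid associating lateral distance
    with depth distance (rows = depth bins, columns = lateral bins)."""
    grid, _, _ = np.histogram2d(depth, lateral, bins=[depth_bins, lat_bins])
    return grid > 0

def longest_depth_run(grid, col):
    """Detecting unit: length of the longest continuous run of occupied
    cells in the depth direction within one lateral column."""
    best = run = 0
    for occupied in grid[:, col]:
        run = run + 1 if occupied else 0
        best = max(best, run)
    return best

def is_detection_target(grid, col, min_len):
    """Determining unit: a continuous area counts as a target only if its
    depth-direction extent reaches min_len cells."""
    return longest_depth_run(grid, col) >= min_len
```

Grids of this kind (often called U-maps in the stereo-vision literature) make obstacles appear as compact connected regions, which is why depth-direction continuity is a useful target test.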

Shape refinement of three dimensional shape model
10748351 · 2020-08-18 · ·

An electronic apparatus for shape refinement of a three-dimensional (3D) shape model is provided. The electronic apparatus generates a back-projected image for an object portion based on an initial 3D shape model of the object portion and a texture map of the object portion. The electronic apparatus computes an optical flow map between the back-projected image and a two-dimensional (2D) color image of the object portion. The electronic apparatus determines a plurality of 3D correspondence points for a corresponding plurality of vertices of the initial 3D shape model, based on the optical flow map and a depth image of the object portion. The electronic apparatus estimates a final 3D shape model that corresponds to a shape-refined 3D model of the object portion based on the initial 3D shape model and the plurality of 3D correspondence points for the corresponding plurality of vertices of the initial 3D shape model.
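The correspondence step can be sketched per vertex: project the vertex into the back-projected image, follow the optical-flow vector to the matching pixel in the color image, then back-project that pixel through the depth image into 3D. The pinhole intrinsics (fx, fy, cx, cy) and the flow-map layout are assumptions of this sketch.

```python
import numpy as np

def correspondence_points(vertices_2d, flow_map, depth_image, fx, fy, cx, cy):
    """For each projected vertex (u, v), follow the optical flow to the
    matching pixel in the 2D color image, then back-project it using the
    depth image and pinhole intrinsics to get a 3D correspondence point."""
    points = []
    for (u, v) in vertices_2d:
        du, dv = flow_map[v, u]          # flow from back-projected to color image
        u2, v2 = int(u + du), int(v + dv)
        z = depth_image[v2, u2]
        points.append(((u2 - cx) * z / fx, (v2 - cy) * z / fy, z))
    return np.array(points)
```

The final estimation step would then deform the initial 3D shape model so its vertices move toward these correspondence points, e.g. by least-squares fitting with a smoothness prior.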

Image processing apparatus, object shape estimation method, and storage medium
10742852 · 2020-08-11 · ·

Highly accurate estimation results are obtained even when the cameras used for shape estimation of an object are distributed among a plurality of points of interest. The image processing apparatus of the present invention includes: an estimation unit configured to estimate, in units of camera groups, the shape of an object within a multi-viewpoint video image captured by each of a plurality of camera groups; and an integration unit configured to integrate the shape estimation results of the camera groups based on a camera map indicating the positional relationship between the common image-capturing areas of the plurality of camera groups.
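One simplified reading of the integration unit, assuming per-group estimates are boolean voxel occupancy grids: the camera map assigns each voxel to the group whose common image-capturing area covers it, and voxels covered by several groups are resolved by intersection. All of this is an assumption for illustration, not the claimed integration rule.

```python
import numpy as np

def integrate_estimates(group_occupancies, camera_map):
    """group_occupancies[g]: boolean voxel grid estimated by camera group g.
    camera_map: integer grid assigning each voxel to one group's common
    image-capturing area; -1 marks overlap, resolved by intersection."""
    out = np.zeros_like(camera_map, dtype=bool)
    for g, occ in enumerate(group_occupancies):
        out |= occ & (camera_map == g)
    overlap = camera_map == -1
    out[overlap] = np.logical_and.reduce([occ[overlap] for occ in group_occupancies])
    return out
```

The point of the camera map is that each group's estimate is only trusted where that group actually has coverage, so distributing cameras across points of interest does not degrade the merged shape.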

Three-dimensional environment modeling based on a multi-camera convolver system

A three-dimensional model of the environment of one or more camera devices is determined, in which image processing for inferring the model may be performed at the one or more camera devices.

DETECTING, TRACKING AND COUNTING OBJECTS IN VIDEOS
20200242784 · 2020-07-30 · ·

Various embodiments are disclosed for detecting, tracking and counting objects of interest in video. In an embodiment, a method of detecting and tracking objects of interest comprises: obtaining, by a computing device, multiple frames of images from an image capturing device; detecting, by the computing device, objects of interest in each frame; accumulating, by the computing device, multiple frames of object detections; creating, by the computing device, object tracks based on a batch of object detections over multiple frames; and associating, by the computing device, the object tracks over consecutive batches.
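The track-creation step can be sketched with greedy IoU association between consecutive frames within a batch. The threshold and greedy matching are illustrative choices, not the disclosed method.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def build_tracks(batch, thresh=0.3):
    """Create tracks from a batch of per-frame detections by greedily
    matching each track's last box to the best-overlapping detection
    in the next frame; unmatched detections start new tracks."""
    tracks = [[d] for d in batch[0]]
    for frame in batch[1:]:
        unmatched = list(frame)
        for t in tracks:
            if not unmatched:
                break
            best = max(unmatched, key=lambda d: iou(t[-1], d))
            if iou(t[-1], best) >= thresh:
                t.append(best)
                unmatched.remove(best)
        tracks.extend([d] for d in unmatched)
    return tracks
```

Associating tracks over consecutive batches would apply the same overlap test between the tail of each track in one batch and the head of each track in the next; counting then reduces to counting distinct associated tracks.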

INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
20200244843 · 2020-07-30 · ·

The present disclosure relates to an information processing apparatus and an information processing method that are configured to be capable of efficiently acquiring information for use in generating three-dimensional data from two-dimensional image data. A grouping block sorts two or more virtual cameras for acquiring two-dimensional image data into two or more groups. A global table generation block generates a global table in which group information related to each of the two or more groups is registered. A group table generation block generates, for each group, a group table in which camera information, for use in generating three-dimensional data from the two-dimensional image data acquired by the virtual cameras sorted into that group, is registered. The present disclosure is applicable to an encoding apparatus and the like, for example.
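The grouping and table-generation blocks can be sketched as plain dictionary structures; a client then needs to read only the global table plus the group tables for the groups it cares about. The field names and the camera-parameter layout are assumptions for illustration.

```python
def build_tables(cameras, group_of):
    """Grouping block + table generation blocks: sort virtual cameras into
    groups, build a global table of per-group info and one group table per
    group holding the camera information needed for 3D generation.
    `cameras`: camera id -> parameter dict; `group_of`: camera id -> group."""
    groups = {}
    for cam_id, g in group_of.items():
        groups.setdefault(g, []).append(cam_id)
    global_table = {g: {"group_id": g, "num_cameras": len(ids)}
                    for g, ids in groups.items()}
    group_tables = {g: {cid: cameras[cid] for cid in ids}
                    for g, ids in groups.items()}
    return global_table, group_tables
```

The efficiency claim follows from this two-level layout: the small global table is enough to decide which groups are relevant, so camera information for the remaining groups never has to be fetched or decoded.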