G06T7/596

VIRTUAL PHOTOGRAMMETRY
20200020155 · 2020-01-16 ·

Multiple snapshots of a scene are captured within an executing application (e.g., a video game). When each snapshot is captured, associated color values per pixel and a distance or depth value z per pixel are stored. The depth information from the snapshots is accessed, and a point cloud representing the depth information is constructed. A mesh structure is constructed from the point cloud. The light field(s) on the surface(s) of the mesh structure are calculated. A surface light field is represented as a texture. A renderer uses the surface light field with geometry information to reproduce the scene captured in the snapshots. The reproduced scene can be manipulated and viewed from different perspectives.
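
The snapshot-to-point-cloud step can be sketched as back-projecting each pixel's stored depth value through camera intrinsics; the intrinsic parameters fx, fy, cx, cy below are hypothetical and not taken from the abstract:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into a 3D point cloud
    (a sketch of the snapshot-to-point-cloud step; a pinhole camera
    with intrinsics fx, fy, cx, cy is an assumption)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

Running this over every captured snapshot and concatenating the results yields the merged point cloud from which a mesh would then be built.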

PARAMETERIZING 3D SCENES FOR VOLUMETRIC VIEWING

A target view into a 3D scene depicted by a multiview image is determined. The multiview image comprises sampled views at sampled view positions distributed throughout a viewing volume. Each sampled view comprises a wide-field-of-view (WFOV) image and a WFOV depth map as seen from its respective sampled view position. The target view is used to select a set of sampled views from the sampled views. A display image is caused to be rendered on a display of a wearable device. The display image is generated based on the WFOV image and the WFOV depth map of each sampled view in the selected set.
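
A minimal sketch of the view-selection step, assuming a simple nearest-neighbour rule over the sampled view positions (the abstract does not specify the selection criterion):

```python
import numpy as np

def select_sampled_views(target_pos, sample_positions, k=4):
    """Pick the k sampled view positions closest to the target view
    position inside the viewing volume (hypothetical nearest-neighbour
    selection; k and the distance metric are assumptions)."""
    d = np.linalg.norm(
        np.asarray(sample_positions, dtype=float) - np.asarray(target_pos, dtype=float),
        axis=1)
    return np.argsort(d)[:k].tolist()  # indices into sample_positions
```

The WFOV images and depth maps of the returned views would then be warped and blended to synthesize the display image.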

Three-dimensional (3D) reconstructions of dynamic scenes using a reconfigurable hybrid imaging system
10529086 · 2020-01-07 ·

A computer-implemented method for a three-dimensional (3D) reconstruction of a dynamic scene includes receiving a plurality of color image sequences from a plurality of color imaging sensors, and at least one depth image sequence from at least one depth imaging sensor, where the color imaging sensor quantity is larger than the depth imaging sensor quantity. A plurality of calibrated color image sequences and at least one calibrated depth image sequence are generated based on the plurality of color image sequences and the at least one depth image sequence. A plurality of initial 3D patches is constructed using the plurality of calibrated color image sequences and the at least one calibrated depth image sequence. A 3D patch cloud is generated by expanding the plurality of initial 3D patches.
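
The patch-expansion step can be illustrated with a toy region-growing loop; the real method expands patches using photo-consistency across the calibrated color sequences, which is replaced here by a plain distance test:

```python
import numpy as np

def expand_patches(seeds, candidates, radius=1.5):
    """Grow a patch cloud from initial 3D patches: repeatedly add any
    candidate point within `radius` of a point already in the cloud
    (a toy stand-in for patch expansion; the radius criterion is an
    assumption replacing photo-consistency checks)."""
    cloud = [tuple(map(float, s)) for s in seeds]
    remaining = {tuple(map(float, c)) for c in candidates} - set(cloud)
    grew = True
    while grew:
        grew = False
        for c in sorted(remaining):
            if any(np.linalg.norm(np.subtract(c, p)) <= radius for p in cloud):
                cloud.append(c)
                remaining.discard(c)
                grew = True
    return cloud
```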

System for a depth mapping device

A depth mapping device comprises a plurality of imaging components arranged among one or more rings. The device may be designed by identifying a plurality of configurations satisfying a set of input parameters that includes a number of imaging components, each configuration indicating a different arrangement of imaging components among one or more rings. For each configuration, coverages for different configuration parameter sets are analyzed to determine the parameter set with the highest coverage, where coverage indicates the amount of a local area viewable by a depth mapping device using that configuration and parameter set. The configuration with the highest coverage is selected as the target configuration and used to generate a depth mapping device design to be manufactured.
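
The configuration search can be sketched as enumerating distributions of imaging components among rings and scoring each with a coverage callback; `coverage_fn` is a hypothetical stand-in for the patent's coverage analysis:

```python
import itertools

def best_configuration(n_components, max_rings, coverage_fn):
    """Enumerate arrangements of n_components imaging components among
    1..max_rings rings and keep the one with the highest coverage
    (coverage_fn is an assumed scoring callback)."""
    best, best_cov = None, float("-inf")
    for n_rings in range(1, max_rings + 1):
        # compositions of n_components into n_rings positive parts
        for cuts in itertools.combinations(range(1, n_components), n_rings - 1):
            parts = [b - a for a, b in zip((0,) + cuts, cuts + (n_components,))]
            cov = coverage_fn(parts)
            if cov > best_cov:
                best, best_cov = parts, cov
    return best, best_cov
```

Here a configuration is reduced to the per-ring component counts; a fuller model would also sweep the per-ring tilt and spacing parameters mentioned as configuration parameter sets.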

SYSTEM AND METHOD FOR DETERMINING OPERATING DEFLECTION SHAPES OF A STRUCTURE USING OPTICAL TECHNIQUES
20190385326 · 2019-12-19 ·

A system for measuring total operating deflection shapes of a structure includes one or more imagers, each including two cameras spaced apart from one another, with each imager oriented and positioned to have corresponding fields of view of a different corresponding section of the structure; the sections may overlap one another. Each camera generates a data stream that is communicated to a controller configured to measure the response of the structure to an excitation, such as a vibration or an impulse. The system converts time-domain data from each of the data streams to frequency-domain data using a Fourier transform algorithm, then scales and stitches the frequency-domain data together to obtain the total operating deflection shapes of the structure.
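
A simplified sketch of the FFT-and-stitch pipeline, assuming one shared overlap point per section boundary and amplitude-ratio scaling at that point (both are assumptions; the patent does not fix the overlap size or scaling rule):

```python
import numpy as np

def section_spectrum(time_series, sample_rate):
    """Per-section step: convert a time-domain response to the
    frequency domain via an FFT."""
    freqs = np.fft.rfftfreq(len(time_series), d=1.0 / sample_rate)
    return freqs, np.fft.rfft(time_series)

def stitch_shapes(sections):
    """Scale each successive section's deflection-shape values so they
    agree at the single shared overlap point, then concatenate
    (a toy model of the scale-and-stitch step)."""
    total = list(sections[0])
    for sec in sections[1:]:
        scale = total[-1] / sec[0]  # match amplitudes at the shared point
        total.extend(s * scale for s in sec[1:])
    return total
```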

System and method for performing quality control of manufactured models

Disclosed herein are example embodiments of methods and systems for identifying manufacturing defects of a manufactured dentition model. One of the methods for performing quality control comprises determining whether the manufactured dentition model is a good or a defective product based on a statistical characteristic of a differences model. The differences model can be generated based on differences between scanned 3D patient-dentition data and scanned 3D manufactured-dentition data. The scanned 3D patient-dentition data can be generated using 3D data of a patient's dentition, and the scanned 3D manufactured-dentition data can be generated using 3D data of the manufactured dentition model. The manufactured dentition model can be a 3D printed model.
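
A minimal sketch of the quality-control decision, assuming the statistical characteristic is an RMS of per-point deviations and using a hypothetical threshold (the abstract names neither):

```python
import numpy as np

def classify_model(patient_pts, manufactured_pts, rms_threshold=0.1):
    """Build a differences model as per-point deviations between the
    scanned patient dentition and the scanned manufactured model, then
    classify on an RMS statistic (threshold and choice of RMS are
    illustrative assumptions)."""
    diffs = np.linalg.norm(
        np.asarray(patient_pts, dtype=float) - np.asarray(manufactured_pts, dtype=float),
        axis=1)
    rms = float(np.sqrt(np.mean(diffs ** 2)))
    return ("good" if rms <= rms_threshold else "defective"), rms
```

In practice the two scans would first be registered to a common frame and correspondences established before the point-wise differences are taken.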

METHOD OF REAL-TIME GENERATION OF 3D IMAGING
20240062463 · 2024-02-22 ·

The present invention relates to a method of real-time generation of a 3D geometry of an object. The method comprises the steps of calibrating at least one RGB camera pair arranged to provide images of the object, receiving input images of the object from the at least one RGB camera pair, and performing a stereo reconstruction in a first hierarchical level using the input images. The stereo reconstruction step further comprises the steps of a) performing a ray marching operation in a first resolution on the input images to determine geometry positions along each view ray of the images; b) applying a uniqueness criterion to the geometry positions; c) determining a normal for each geometry position; d) performing a regularization operation based on the geometry positions and the respective normal, providing updated geometry positions; and e) performing an interpolation operation on the updated geometry positions and respective normal. The method further comprises repeating steps a) and c)-e) in at least one iteration in at least one ascending hierarchical level, wherein the resolution in the ray marching operation is doubled for each iteration, resulting in a geometry buffer for each of the at least one camera pair. The present invention further relates to a 3D image generation device and a system comprising such device.
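
The hierarchical loop, with the ray-marching resolution doubling per level, can be sketched with a callback standing in for steps a) and c)-e):

```python
def hierarchical_stereo(levels, base_resolution, refine_fn):
    """Run the reconstruction over ascending hierarchical levels,
    doubling the ray-marching resolution each iteration; refine_fn is a
    hypothetical callback standing in for steps a) and c)-e), taking
    the previous geometry and the current resolution."""
    resolution = base_resolution
    geometry = None
    for _level in range(levels):
        geometry = refine_fn(geometry, resolution)  # march, normals, regularize, interpolate
        resolution *= 2
    return geometry, resolution // 2  # final geometry buffer and the resolution last used
```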

Acceleration method of depth estimation for multiband stereo cameras

The present invention belongs to the field of image processing and computer vision, and discloses an acceleration method of depth estimation for multiband stereo cameras. During binocular stereo matching in each band, the matched images are compressed, which on one hand offsets disparity errors introduced by binocular image rectification, making the matching more accurate, and on the other hand reduces calculation overhead. In addition, before cost aggregation, the cost maps are transversely compressed and sparsely matched, reducing the calculation overhead again. Disparity maps obtained under different modes are fused to obtain all-weather, more complete and more accurate depth information.
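
The transversal compression before cost aggregation can be sketched as averaging adjacent columns of a matching-cost volume, cutting the aggregation workload by roughly the compression factor (a simplified stand-in; the sparse-matching part is omitted):

```python
import numpy as np

def compress_cost_volume(cost, factor=2):
    """Transversely compress an H x W x D matching-cost volume by
    averaging groups of `factor` adjacent columns before cost
    aggregation (column averaging is an assumed compression scheme)."""
    h, w, d = cost.shape
    w2 = w // factor
    return cost[:, :w2 * factor, :].reshape(h, w2, factor, d).mean(axis=2)
```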

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
20240054668 · 2024-02-15 ·

An object is to obtain highly accurate three-dimensional shape data from three-dimensional shape data representing an approximate shape of an object. Three-dimensional shape data of an object captured in a plurality of captured images from different viewpoints is obtained. Further, surface three-dimensional information on the object is derived based on the plurality of captured images. Then, the derived surface three-dimensional information is selected based on its distance from the shape surface of the object represented by the three-dimensional shape data.
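
A minimal sketch of the distance-based selection, representing the approximate shape surface as a point set and keeping only nearby derived points (the point-set surface proxy and the tolerance are assumptions):

```python
import numpy as np

def select_surface_points(derived_pts, approx_surface_pts, max_dist=0.05):
    """Keep only derived surface points lying within max_dist of the
    approximate shape surface, here sampled as a point set."""
    derived = np.asarray(derived_pts, dtype=float)
    surface = np.asarray(approx_surface_pts, dtype=float)
    # distance from each derived point to its nearest surface sample
    d = np.linalg.norm(derived[:, None, :] - surface[None, :, :], axis=2).min(axis=1)
    return derived[d <= max_dist]
```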

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
20240054747 · 2024-02-15 ·

An object is to obtain highly accurate three-dimensional shape data from three-dimensional shape data representing an approximate shape of an object. Three-dimensional shape data of an object existing in an image capturing space is obtained. Further, a plurality of distance images, each representing a distance to the object and corresponding to one of a plurality of viewpoints, is obtained. Then, a correction is performed that evaluates the three-dimensional shape data based on the plurality of distance images and, based on the results of the evaluation, deletes unit elements estimated not to represent the shape of the object from among the unit elements configuring the three-dimensional shape data.
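
The evaluate-and-delete correction can be sketched as a carving test per unit element, assuming each voxel's per-view distance and the corresponding distance-image observation are already associated (the voting rule and tolerance are illustrative):

```python
def carve_voxels(voxel_depths, observed_depths, tolerance=0.1):
    """voxel_depths[v][k]: distance from viewpoint k to voxel v;
    observed_depths[v][k]: distance-image value at the voxel's
    projection in view k. A voxel is deleted when some view observes
    the surface clearly farther away than the voxel, i.e. the voxel
    sits in observed free space (a simplified evaluation; the
    tolerance is an assumption). Returns indices of kept voxels."""
    keep = []
    for v, (vd, od) in enumerate(zip(voxel_depths, observed_depths)):
        in_free_space = any(o - d > tolerance for d, o in zip(vd, od))
        if not in_free_space:
            keep.append(v)
    return keep
```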