Patent classifications
G06T2200/08
Systems and methods for generating three dimensional geometry
Systems and methods are described for creating three dimensional models of building objects by creating a point cloud from a plurality of input images, defining edges of the building object's surfaces represented by the point cloud, creating simplified geometries of the building object's surfaces, and constructing a building model based on the simplified geometries. Input images may include ground, orthographic, or oblique images. The resultant model may be scaled according to correlation with select image types, and may be textured.
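The "simplified geometries" step of the abstract can be illustrated with a minimal numpy sketch: assuming a surface's points have already been clustered out of the point cloud, a plane is fitted by SVD and the cluster is collapsed to a four-corner quad. This is only a schematic of the idea, not the patented implementation.

```python
import numpy as np

def simplify_surface(points):
    """Collapse a noisy, roughly planar point cluster from the point cloud
    into a simplified quad (four corners) plus its plane normal."""
    centroid = points.mean(axis=0)
    # SVD of the centered points: the last right-singular vector is the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[2]
    # Project onto the two in-plane axes and take the bounding rectangle.
    uv = (points - centroid) @ vt[:2].T
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    corners_uv = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                           [hi[0], hi[1]], [lo[0], hi[1]]])
    # Lift the rectangle corners back into 3D.
    return centroid + corners_uv @ vt[:2], normal

# A noisy wall patch lying near the z = 0 plane.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, (200, 3))
pts[:, 2] *= 0.01
corners, normal = simplify_surface(pts)
```

A building model would then be constructed by stitching such quads together along the detected surface edges.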
Methods and systems for predicting pressure maps of 3D objects from 2D photos using deep learning
A structured 3D model of a real-world object is generated from a series of 2D photographs of the object, using photogrammetry, a keypoint detection deep learning network (DLN), and retopology. In addition, object parameters of the object are received. A pressure map of the object is then generated by a pressure estimation DLN based on the structured 3D model and the object parameters. The pressure estimation DLN was trained on structured 3D models, object parameters, and pressure maps of a plurality of objects belonging to a given object category. The pressure map of the real-world object can be used in downstream processes, such as custom manufacturing.
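The dataflow in the abstract (photogrammetry cloud, retopology to a structured model, then pressure estimation from the model plus object parameters) can be sketched with stand-in functions; the two stubs below only mimic the interfaces of the retopology step and the pressure-estimation DLN, which in the actual system are a mesh-processing stage and a trained network.

```python
import numpy as np

def retopologize(raw_vertices, n_vertices=16):
    """Stand-in for retopology: resample the raw photogrammetry cloud to a
    fixed-size, structured vertex set so the DLN input shape is constant."""
    idx = np.linspace(0, len(raw_vertices) - 1, n_vertices).astype(int)
    return raw_vertices[idx]

def estimate_pressure(structured_model, object_params):
    """Stand-in for the pressure-estimation DLN: maps the structured model
    plus object parameters to one pressure value per vertex."""
    w = np.resize(np.asarray(object_params, dtype=float),
                  structured_model.shape[1])
    return structured_model @ w

cloud = np.random.rand(500, 3)        # photogrammetry output (from 2D photos)
model = retopologize(cloud)           # structured 3D model
pressure_map = estimate_pressure(model, [1.0, 0.5, 2.0])  # hypothetical params
```

The fixed vertex count is the point of retopology here: it gives every object in a category the same input shape, which is what lets one trained network produce a per-vertex pressure map.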
Systems and methods for digitally representing a scene with multi-faceted primitives
Disclosed is a system and associated methods for generating and rendering a polyhedral point cloud that represents a scene with multi-faceted primitives. Each multi-faceted primitive stores multiple sets of values that represent different non-positional characteristics that are associated with a particular point in the scene from different angles. For instance, the system generates a multi-faceted primitive for a particular point of the scene that is captured in a first capture from a first position and a second capture from a different second position. Generating the multi-faceted primitive includes defining a first facet with a first surface normal oriented towards the first position and first non-positional values based on descriptive characteristics of the particular point in the first capture, and defining a second facet with a second surface normal oriented towards the second position and second non-positional values based on different descriptive characteristics of the particular point in the second capture.
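The facet structure described above lends itself to a small data-structure sketch: each facet pairs a capture-facing normal with the non-positional values seen from that side, and rendering picks the facet that best faces the viewer. This is an illustrative reading of the abstract, not the disclosed implementation.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Facet:
    normal: np.ndarray   # surface normal oriented toward the capture position
    values: np.ndarray   # non-positional values (e.g. RGB) seen from that side

@dataclass
class MultiFacetedPrimitive:
    position: np.ndarray                    # the particular point in the scene
    facets: list = field(default_factory=list)

    def add_capture(self, capture_pos, values):
        """Define a facet whose normal points from the point to the capture."""
        d = np.asarray(capture_pos, dtype=float) - self.position
        self.facets.append(Facet(d / np.linalg.norm(d), np.asarray(values)))

    def render_from(self, view_pos):
        """Return the values of the facet that best faces the viewer."""
        v = np.asarray(view_pos, dtype=float) - self.position
        v /= np.linalg.norm(v)
        return max(self.facets, key=lambda f: f.normal @ v).values

p = MultiFacetedPrimitive(position=np.zeros(3))
p.add_capture([0, 0, 5], [255, 0, 0])    # red as captured from the front
p.add_capture([0, 0, -5], [0, 0, 255])   # blue as captured from behind
```

Viewed from the front the primitive renders red; from behind, blue — one point, view-dependent appearance.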
Network and system for pose and size estimation
A network for category-level 6D pose and size estimation, including a 3D-OCR module for 3D Orientation-Consistent Representation, a GeoReS module for Geometry-constrained Reflection Symmetry, and a MPDE module for Mirror-Paired Dimensional Estimation; wherein the 3D-OCR module and the GeoReS module are incorporated in parallel; the 3D-OCR module receives a canonical template shape including canonical category-specific keypoints; the GeoReS module receives an original input depth observation including pre-processed predicted category labels and potential masks of the target instances; the MPDE module receives the output from the GeoReS module as well as the original input depth observation; and the network outputs the estimation results based on the output of the MPDE module, the output of the 3D-OCR module, as well as the canonical template shape. Also provided are corresponding systems and methods.
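The module wiring claimed above (3D-OCR and GeoReS in parallel, MPDE fed by GeoReS plus the raw depth, and a final fusion) can be sketched with toy stand-ins; each function below only mimics the stated inputs and outputs of its module, not the actual network internals.

```python
import numpy as np

def ocr_module(template_keypoints):
    """3D-OCR stand-in: an orientation-consistent feature derived from
    the canonical category-specific keypoints."""
    return template_keypoints.mean(axis=0)

def geores_module(depth_obs):
    """GeoReS stand-in: a reflection-symmetry feature from the masked,
    pre-processed depth observation."""
    return depth_obs - depth_obs.mean(axis=0)

def mpde_module(geo_feature, depth_obs):
    """MPDE stand-in: a mirror-paired size (extent) estimate.
    (depth_obs is part of the claimed dataflow; unused in this toy.)"""
    return np.abs(geo_feature).max(axis=0) * 2.0

def pose_and_size(template_keypoints, depth_obs):
    # 3D-OCR and GeoReS are incorporated in parallel on their inputs.
    ocr_feature = ocr_module(template_keypoints)
    geo_feature = geores_module(depth_obs)
    # MPDE consumes the GeoReS output plus the original depth observation.
    size = mpde_module(geo_feature, depth_obs)
    # The final estimate fuses the MPDE and 3D-OCR outputs.
    orientation = ocr_feature / np.linalg.norm(ocr_feature)
    return orientation, size

rng = np.random.default_rng(3)
template = rng.random((8, 3)) + 0.5   # canonical template keypoints
depth = rng.random((100, 3))          # pre-processed depth observation
orientation, size = pose_and_size(template, depth)
```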
SYSTEMS AND METHODS FOR MEDICAL IMAGE REGISTRATION
There is provided a method for registration of intravital anatomical imaging modality image data and nuclear medicine image data of a patient's heart comprising: obtaining anatomical image data including a heart of a patient outputted by an anatomical intravital imaging modality; obtaining at least one nuclear medicine image data outputted by a nuclear medicine imaging modality, the nuclear medicine image data including the heart of the patient; identifying a segmentation of a network of vessels of the heart in the anatomical image data; identifying a contour of at least part of the heart in the nuclear medicine image data, the contour including at least one muscle wall border of the heart; correlating between the segmentation and the contour; registering the correlated segmentation and the correlated contour to form a registered image of the anatomical image data and the nuclear medicine image data; and providing the registered image for display.
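Once the vessel segmentation and the wall contour have been correlated into paired landmarks, the registration step reduces to aligning two point sets. A minimal rigid-alignment sketch (Kabsch algorithm, a standard technique; the patent does not specify this particular method) looks as follows:

```python
import numpy as np

def register_rigid(src, dst):
    """Kabsch alignment: the rigid transform (R, t) mapping src onto dst.
    Here src = vessel-segmentation landmarks from the anatomical image and
    dst = their correlated points on the nuclear-medicine wall contour."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    h = (src - sc).T @ (dst - dc)             # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))    # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dc - r @ sc
    return r, t

rng = np.random.default_rng(1)
seg = rng.random((20, 3))                             # segmented landmarks
r_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90° turn
contour = seg @ r_true.T + np.array([5.0, 0.0, 0.0])  # correlated contour
r, t = register_rigid(seg, contour)
registered = seg @ r.T + t                            # registered image space
```

With the transform recovered, the anatomical and nuclear medicine volumes can be resampled into a common frame and fused for display.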
METHOD FOR TURBINE COMPONENT QUALIFICATION
A method for evaluating a turbine component includes inducing a thermal response of the component at an initial time, capturing a two-dimensional infrared image of the thermal response of the component with a thermal imaging device, wherein the two-dimensional infrared image comprises a plurality of infrared image pixels, generating a two-dimensional-to-three-dimensional mapping template to correlate two-dimensional infrared image data with three-dimensional locations on the component, mapping at least a subset of the plurality of infrared image pixels of the two-dimensional infrared image to three-dimensional coordinates using the mapping template, and generating a three-dimensional infrared image and infrared data of the component from the infrared image pixels mapped to three-dimensional coordinates, wherein the three-dimensional infrared image and infrared data are used to qualify the component for use.
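The mapping template is essentially a per-pixel lookup from image coordinates to 3D surface coordinates. The sketch below fakes one for a cylindrical component; a real template would come from a CAD model and a calibrated camera pose, which the abstract leaves unspecified.

```python
import numpy as np

def build_mapping_template(h, w, radius=1.0):
    """Illustrative 2D-to-3D template: treat the IR image as a cylindrical
    unwrap of the component surface, one 3D coordinate per pixel."""
    v, u = np.mgrid[0:h, 0:w]
    theta = 2.0 * np.pi * u / w
    return np.stack([radius * np.cos(theta),
                     radius * np.sin(theta),
                     v.astype(float)], axis=-1)       # shape (h, w, 3)

def map_pixels_to_3d(ir_image, template):
    """Attach each IR pixel's intensity to its 3D surface coordinate."""
    xyz = template.reshape(-1, 3)
    temps = ir_image.reshape(-1, 1)
    return np.hstack([xyz, temps])                    # (h*w, 4): x, y, z, T

ir = np.random.rand(4, 8)                 # 2D infrared image (toy size)
template = build_mapping_template(*ir.shape)
points_3d = map_pixels_to_3d(ir, template)
```

The resulting (x, y, z, temperature) points are the "three-dimensional infrared image and infrared data" that a qualification step would then threshold or compare against limits.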
Photometric-based 3D object modeling
Aspects of the present disclosure involve a system and a method for performing operations comprising: accessing a source image depicting a target structure; accessing one or more target images depicting at least a portion of the target structure; computing correspondence between a first set of pixels in the source image of a first portion of the target structure and a second set of pixels in the one or more target images of the first portion of the target structure, the correspondence being computed as a function of camera parameters that vary between the source image and the one or more target images; and generating a three-dimensional (3D) model of the target structure based on the correspondence between the first set of pixels in the source image and the second set of pixels in the one or more target images based on a joint optimization of target structure and camera parameters.
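The core of the correspondence computation is that pixel matches between views are a function of structure and camera parameters, and the best parameters are those that minimize photometric error. A 1D toy makes the idea concrete: a single shift stands in for the jointly optimized depth and camera parameters, and it is recovered by minimizing intensity differences. This is only a schematic of photometric correspondence, not the disclosed joint optimizer.

```python
import numpy as np

def photometric_cost(src_row, tgt_row, shift):
    """Sum of squared intensity differences when the source pattern is
    assumed to appear `shift` pixels earlier in the target view."""
    n = len(src_row) - shift
    return float(((src_row[shift:shift + n] - tgt_row[:n]) ** 2).sum())

def best_correspondence(src_row, tgt_row, max_shift=8):
    """Brute-force the shift (a 1D stand-in for the structure and camera
    parameters) that best explains the target intensities."""
    costs = {s: photometric_cost(src_row, tgt_row, s) for s in range(max_shift)}
    return min(costs, key=costs.get)

src = np.sin(np.linspace(0.0, 6.0, 64))   # intensities along a source row
tgt = np.empty_like(src)
tgt[:-3] = src[3:]                        # the target view shifts it by 3 px
tgt[-3:] = 0.0
```

In the full method the same principle is applied in 2D over many target images, with gradient-based joint optimization over the 3D structure and the per-image camera parameters instead of a brute-force search.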
REINFORCEMENT LEARNING-BASED LABEL-FREE SIX-DIMENSIONAL OBJECT POSE PREDICTION METHOD AND APPARATUS
Provided are a reinforcement learning-based label-free six-dimensional object pose prediction method and apparatus. The method includes: obtaining a target image to be predicted, the target image being a two-dimensional image including a target object; performing pose prediction based on the target image by using a pre-trained pose prediction model to obtain a prediction result, the pose prediction model being obtained by performing reinforcement learning based on a sample image; and determining a three-dimensional position and a three-dimensional direction of the target object based on the prediction result. Because the pose prediction model is trained by introducing reinforcement learning and pose prediction is performed on the target image with this pre-trained model, six-dimensional object pose estimation from two-dimensional images can be solved in the absence of real pose annotations, preserving prediction quality for label-free six-dimensional object pose prediction.
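The label-free idea is that the reward signal comes from a self-supervised alignment score (for example, render-and-compare) rather than from ground-truth pose labels. A toy bandit over discrete orientations sketches the principle; the simulated reward and the angle grid are illustrative inventions, not the patented training scheme.

```python
import random

ANGLES = list(range(0, 360, 45))   # toy discrete orientation space
TRUE_ANGLE = 90                    # hidden; only drives the simulated reward

def alignment_reward(predicted):
    """Stand-in for a label-free score such as render-and-compare:
    better-aligned predictions earn higher reward, no annotation needed."""
    diff = min(abs(predicted - TRUE_ANGLE), 360 - abs(predicted - TRUE_ANGLE))
    return -diff

def train(episodes=500, eps=0.2, seed=0):
    """Epsilon-greedy reinforcement learning over the orientation bandit."""
    rng = random.Random(seed)
    value = {a: 0.0 for a in ANGLES}
    counts = {a: 0 for a in ANGLES}
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.choice(ANGLES)              # explore
        else:
            a = max(value, key=value.get)       # exploit current estimate
        r = alignment_reward(a)
        counts[a] += 1
        value[a] += (r - value[a]) / counts[a]  # incremental mean update
    return max(value, key=value.get)
```

The policy converges on the true orientation purely from the alignment reward — the training loop never sees a pose annotation.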
THREE-DIMENSIONAL MODEL GENERATION METHOD AND THREE-DIMENSIONAL MODEL GENERATION DEVICE
A three-dimensional model generation method executed by an information processing device includes: obtaining images generated by shooting a subject from respective viewpoints; searching for a similar point that is similar to a first point in a first image among the images, from second points in a search area in a second image different from the first image, the search area being provided based on the first point; calculating an accuracy of a search result of the searching, using degrees of similarity between the first point and the respective second points; and generating a three-dimensional model using the search result and the accuracy.
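The search-and-accuracy step can be sketched directly: each candidate second point in the search area is scored against the first point's neighborhood, and the accuracy is derived from how clearly the best similarity beats the runner-up (an ambiguous search scores near zero). Normalized cross-correlation is used here as a common choice of similarity; the abstract does not name a specific measure.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation: similarity between two patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def search_similar_point(first_patch, second_image, row, search_cols):
    """Score candidate second points along a search area (here, a row span)
    and return the best match plus an accuracy from the similarity spread."""
    k = first_patch.shape[0]
    sims = np.array([ncc(first_patch, second_image[row:row + k, c:c + k])
                     for c in search_cols])
    order = np.argsort(sims)[::-1]
    best, runner_up = sims[order[0]], sims[order[1]]
    accuracy = best - runner_up   # near 0 when the search is ambiguous
    return search_cols[order[0]], accuracy

rng = np.random.default_rng(2)
second = rng.random((8, 20))              # the second image
first_patch = second[2:5, 7:10].copy()    # the first point's neighborhood
col, accuracy = search_similar_point(first_patch, second, 2, list(range(15)))
```

Downstream, low-accuracy matches can be down-weighted or discarded when triangulating the three-dimensional model, which is the role the accuracy plays in the described method.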