Patent classifications
G06T17/00
Image processing apparatus and image processing method for generating a strobe image using a three-dimensional model of an object
Provided are an image processing apparatus and an image processing method that enable a strobe image using a 3D model to be generated. A strobe model is generated in which 3D models of an object at a plurality of times, generated from a plurality of viewpoint images captured from a plurality of viewpoints, are disposed in a three-dimensional space. When the strobe model is generated, a target object, i.e., an object whose 3D model is to be disposed in the strobe model, is set according to a degree of object relevance indicating relevance to a key object that serves as a reference for disposition of 3D models in the strobe model.
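The selection step in this abstract can be sketched in code. This is a minimal illustrative sketch, not the patent's method: the `Object3D` class, the distance-based `relevance` function, and the threshold are all assumptions introduced here to show how a degree of object relevance to a key object could gate which 3D models are disposed in the strobe model.

```python
# Illustrative sketch only; Object3D, relevance(), and the threshold
# are assumptions, not taken from the patent.
from dataclasses import dataclass

@dataclass
class Object3D:
    name: str
    position: tuple  # (x, y, z) at a given capture time

def relevance(obj: Object3D, key: Object3D) -> float:
    """Toy relevance measure: inverse of distance to the key object."""
    d = sum((a - b) ** 2 for a, b in zip(obj.position, key.position)) ** 0.5
    return 1.0 / (1.0 + d)

def build_strobe_model(models_by_time, key: Object3D, threshold=0.5):
    """Dispose only 3D models whose relevance to the key object
    exceeds the threshold, across all capture times."""
    scene = []
    for t, objects in models_by_time.items():
        for obj in objects:
            if relevance(obj, key) >= threshold:
                scene.append((t, obj))
    return scene
```

With this sketch, an object far from the key object would be excluded from the strobe scene while a nearby one is kept.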
Systems and methods for scanning three-dimensional objects
According to at least one aspect, a system for scanning an object is provided. The system comprises at least one hardware processor; and at least one non-transitory computer-readable storage medium storing processor executable instructions that, when executed by the at least one hardware processor, cause the at least one hardware processor to perform: generating a first 3-dimensional (3D) model of the object; identifying a set of imaging positions from which to capture at least one image based on the first 3D model of the object; obtaining a set of images of the object captured at, or approximately at, the set of imaging positions; and generating a second 3D model of the object based on the set of images.
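The coarse-to-fine scanning loop in the abstract above can be sketched as follows. This is a hedged stand-in, not the patented system: the bounding-sphere "first model", the circular placement of imaging positions, and the `refine` step are assumptions chosen only to illustrate the two-pass structure (first model → planned positions → captured images → second model).

```python
# Hedged sketch of the two-pass scanning loop; all geometry and
# helper functions are illustrative stand-ins, not the patent's method.
import math

def coarse_model():
    """Pretend first pass: a rough bounding sphere of the object."""
    return {"radius": 1.0, "center": (0.0, 0.0, 0.0)}

def plan_imaging_positions(model, n=8, standoff=2.0):
    """Place n camera positions on a circle around the coarse model."""
    cx, cy, cz = model["center"]
    r = model["radius"] * standoff
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n),
             cz)
            for k in range(n)]

def refine(model, images):
    """Pretend second pass: refine the model using the new images."""
    return {**model, "views_used": len(images)}

first = coarse_model()
positions = plan_imaging_positions(first)
images = [f"img_{i}" for i, _ in enumerate(positions)]  # captured at/near positions
second = refine(first, images)
```

The point of the sketch is the data flow: the first 3D model drives where the next images are taken, and those images drive the second model.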
Photography-based 3D modeling system and method, and automatic 3D modeling apparatus and method
The present disclosure discloses a photography-based 3D modeling system and method, and an automatic 3D modeling apparatus and method, including: (S1) attaching a mobile device and a camera to the same camera stand; (S2) obtaining multiple images used for positioning from the camera or the mobile device during movement of the stand, and obtaining a position and a direction of each photo capture point, to build a tracking map that uses a global coordinate system; (S3) generating 3D models on the mobile device or a remote server based on an image used for 3D modeling at each photo capture point; and (S4) placing the individual 3D models of all photo capture points in the global three-dimensional coordinate system based on the position and the direction obtained in S2, and connecting the individual 3D models of multiple photo capture points to generate an overall 3D model that includes multiple photo capture points.
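Step S4 above, placing each capture point's individual model into the global coordinate system using the tracked position and direction, can be sketched with a simple rigid transform. This is an illustrative reduction (yaw-only rotation about z), not the disclosed implementation; `to_global` and `merge_models` are names introduced here.

```python
# Illustrative sketch of step S4: place local models into the global
# frame using each capture point's tracked position and heading.
# Yaw-only rotation is a simplifying assumption made for this sketch.
import math

def to_global(points, position, yaw):
    """Transform one capture point's local model points into the
    global frame via rotation about z (yaw) plus translation."""
    c, s = math.cos(yaw), math.sin(yaw)
    px, py, pz = position
    return [(c * x - s * y + px, s * x + c * y + py, z + pz)
            for x, y, z in points]

def merge_models(local_models):
    """local_models: list of (points, position, yaw), one per capture
    point; returns the combined overall model in global coordinates."""
    overall = []
    for points, position, yaw in local_models:
        overall.extend(to_global(points, position, yaw))
    return overall
```

A full system would use a complete 6-DoF pose from the tracking map rather than a planar yaw, but the placement logic is the same.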
Feature detection by deep learning and vector field estimation
A system and method for extracting features from a 2D image of an object using a deep learning neural network and a vector field estimation process. The method includes extracting a plurality of possible feature points, generating a mask image that defines pixels in the 2D image where the object is located, and generating a vector field image for each extracted feature point that includes an arrow directed towards the extracted feature point. The method also includes generating a vector intersection image by identifying an intersection point where the arrows for every combination of two pixels in the 2D image intersect. The method assigns a score for each intersection point depending on the distance from each pixel for each combination of two pixels and the intersection point, and generates a point voting image that identifies a feature location from a number of clustered points.
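The pairwise vector-intersection voting described above can be sketched as follows. This is a simplified illustration, not the patented method: the ray-intersection formula is standard 2D geometry, while the exponential distance score and "best point" selection (in place of full point-voting and clustering) are assumptions made for brevity.

```python
# Simplified sketch of pairwise vector-intersection voting; the
# scoring function and winner selection are toy stand-ins for the
# point-voting / clustering stage described in the abstract.
import math

def intersect(p1, d1, p2, d2):
    """Intersection of rays p + t*d; returns None if parallel."""
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < 1e-9:
        return None
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

def vote_feature_point(pixels, vectors, sigma=10.0):
    """For every pair of pixels, intersect their vectors, score the
    intersection by its distance to the two voting pixels, and return
    the highest-scoring candidate feature location."""
    best, best_score = None, -1.0
    for i in range(len(pixels)):
        for j in range(i + 1, len(pixels)):
            pt = intersect(pixels[i], vectors[i], pixels[j], vectors[j])
            if pt is None:
                continue
            d = sum(math.hypot(pt[0] - p[0], pt[1] - p[1])
                    for p in (pixels[i], pixels[j]))
            score = math.exp(-d / sigma)
            if score > best_score:
                best, best_score = pt, score
    return best
```

Two pixels whose predicted vectors point at the same location produce an intersection close to both voters, which scores highest.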
Neural network processing for multi-object 3D modeling
Embodiments are directed to neural network processing for multi-object three-dimensional (3D) modeling. An embodiment of a computer-readable storage medium includes executable computer program instructions for obtaining data from multiple cameras, the data including multiple images, and generating a 3D model for 3D imaging based at least in part on the data from the cameras, wherein generating the 3D model includes one or more of performing processing with a first neural network to determine temporal direction based at least in part on motion of one or more objects identified in an image of the multiple images or performing processing with a second neural network to determine semantic content information for an image of the multiple images.
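The multi-camera pipeline described above can be sketched skeletally. Both "networks" below are stubs introduced purely for illustration; a real embodiment would substitute trained models for temporal-direction and semantic-content inference.

```python
# Skeletal sketch of the multi-camera pipeline; both network
# functions are stubs, not real models.
def temporal_direction_net(images):
    """Stub: infer temporal direction from object motion across frames."""
    return "forward" if len(images) > 1 else "unknown"

def semantic_content_net(image):
    """Stub: return semantic labels for one image."""
    return {"labels": ["object"]}

def build_3d_model(camera_data):
    """Combine images from multiple cameras and attach the two
    neural-network outputs to the resulting model record."""
    images = [img for cam in camera_data for img in cam["images"]]
    return {
        "temporal_direction": temporal_direction_net(images),
        "semantics": [semantic_content_net(img) for img in images],
        "num_views": len(images),
    }
```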
Determining spatial relationship between upper teeth and facial skeleton
A computer-implemented method includes receiving a 3D model representative of upper teeth (U1) of a patient (P) and receiving a plurality of images of a face of the patient (P). The method also includes generating a facial model (200) of the patient based on the received plurality of images and determining, based on the determined facial model (200), the received 3D model of the upper teeth (U1), and the plurality of images, a spatial relationship between the upper teeth (U1) of the patient (P) and a facial skeleton of the patient (P).
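The final determination step, expressing the upper-teeth model in the facial model's frame, can be sketched as a toy registration. This is an assumption-laden simplification: a real system would solve a full 6-DoF least-squares registration over many landmarks, whereas this sketch estimates only a translation between corresponding landmark centroids, and both helper names are invented here.

```python
# Toy sketch: estimate only the translation between the teeth-model
# frame and the facial-model frame from corresponding landmarks.
# A real system would solve a full 6-DoF rigid registration.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def spatial_offset(teeth_landmarks, face_landmarks):
    """Translation taking the teeth frame to the facial frame,
    estimated from corresponding landmark centroids."""
    ct = centroid(teeth_landmarks)
    cf = centroid(face_landmarks)
    return tuple(cf[i] - ct[i] for i in range(3))
```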