Patent classifications
G06T17/00
POINT CLOUD DATA ENCODING METHOD AND DECODING METHOD, DEVICE, MEDIUM, AND PROGRAM PRODUCT
A point cloud data encoding method and decoding method, a device, a medium, and a program product are provided, relating to the field of point cloud application technologies. One method includes obtaining point cloud data comprising at least two data points, and sequentially encoding the data points according to their encoding orders to obtain encoded point cloud data corresponding to the point cloud data, wherein the encoding orders of the data points are determined based on distances among the data points. Another method includes obtaining encoded point cloud data and reference information, the reference information indicating a start reference data point of an encoding queue, and sequentially decoding data points according to their encoding orders based on the reference information and the encoded point cloud data.
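The abstract does not disclose how the distance-based encoding order is computed; one plausible reading is a greedy nearest-neighbor queue, where each next point encoded is the unvisited point closest to the last one. The sketch below is an illustration under that assumption, not the claimed encoder:

```python
import math

def encoding_order(points, start=0):
    """Greedy nearest-neighbor ordering: the next point in the encoding
    queue is the unvisited point closest to the one just encoded."""
    remaining = set(range(len(points)))
    order = [start]
    remaining.discard(start)
    while remaining:
        last = points[order[-1]]
        nxt = min(remaining, key=lambda i: math.dist(points[i], last))
        order.append(nxt)
        remaining.discard(nxt)
    return order

pts = [(0.0, 0.0), (5.0, 5.0), (0.1, 0.0), (5.1, 5.0)]
print(encoding_order(pts))  # [0, 2, 1, 3] — spatially close points become adjacent in the queue
```

Ordering near points adjacently tends to shrink the deltas between consecutive points, which is what makes such an order useful for predictive or delta-based compression.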
SYSTEM AND METHOD FOR GENERATING VIRTUAL PSEUDO 3D OUTPUTS FROM IMAGES
A method for generating virtual pseudo three-dimensional (3D) 360-degree outputs from 2D images of an object 102 is provided. An image viewer plane of the object 102 in the 3D image to be rendered on a user device 108 is detected using an augmented reality technique. The image viewer plane is placed facing the user device 108, rendering ‘Image 0’, and movement coordinates of the user device 108 with respect to the image viewer plane are detected to calculate the virtual pseudo 3D image set to be displayed based on at least one angle of view, by performing interpolation between two consecutive virtual pseudo 3D images. The image viewer plane is changed with respect to the movement of the user device 108 to change the virtual pseudo 3D image and the interpolated virtual pseudo 3D image on the plane, and that image is displayed as an augmented reality object to the user device 108 in real time.
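The interpolation step above implies mapping a viewing angle to the two flanking pre-rendered views and a blend weight between them. The sketch below assumes evenly spaced views around 360 degrees; the view count and linear weighting are illustrative assumptions, not details from the patent:

```python
def view_blend(angle_deg, num_views=36):
    """Map a viewing angle to the indices of the two flanking
    pre-rendered views and a linear interpolation weight between them.
    Assumes num_views images evenly spaced around 360 degrees."""
    step = 360.0 / num_views
    a = angle_deg % 360.0
    lo = int(a // step) % num_views          # view just below the angle
    hi = (lo + 1) % num_views                # next consecutive view
    w = (a - lo * step) / step               # 0.0 at 'lo', 1.0 at 'hi'
    return lo, hi, w

print(view_blend(15.0))  # (1, 2, 0.5): halfway between view 1 (10°) and view 2 (20°)
```

A renderer would then blend the two images pixel-wise as `(1 - w) * image[lo] + w * image[hi]` to produce the interpolated pseudo-3D frame.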
COMPUTER IMPLEMENTED METHODS FOR DENTAL DESIGN
Computer implemented method of generating a dental design, comprising: a) capturing a facial image comprising a head of a patient and a smile; b) displaying it as a first image; c) capturing a 3D intraoral scan; d) aligning the 3D scan to the head; e) determining bounding boxes in the 3D scan, each comprising a single tooth; f) showing a view of the 3D scan and the bounding boxes as a second image; g) showing the bounding boxes as an overlay on the first image; i) allowing the bounding boxes to be resized or repositioned; ii) defining a limited set of parameters to characterize the tooth inside each bounding box, searching a number of candidate matching teeth in a 3D digital library of teeth, and proposing a candidate matching tooth; iii) overlaying the first image with a digital representation of the proposed candidate matching tooth from the digital library.
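The candidate search in step ii) amounts to a nearest-match lookup over the limited parameter set. The sketch below is a minimal illustration assuming hypothetical parameters (e.g. width, height, curvature) and a Euclidean distance ranking; the actual parameters and matching criterion are not specified in the abstract:

```python
def candidate_teeth(target, library, k=2):
    """Rank library teeth by Euclidean distance between their parameter
    vectors and the target tooth's parameters; return the top k names."""
    def dist(params):
        return sum((p - t) ** 2 for p, t in zip(params, target)) ** 0.5
    ranked = sorted(library.items(), key=lambda kv: dist(kv[1]))
    return [name for name, _ in ranked[:k]]

# Hypothetical library entries: (width_mm, height_mm, curvature)
library = {
    "incisor_A": (8.5, 10.2, 0.3),
    "incisor_B": (9.0, 11.0, 0.4),
    "canine_C":  (7.6, 12.5, 0.9),
}
print(candidate_teeth((8.6, 10.4, 0.3), library))  # ['incisor_A', 'incisor_B']
```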
3D BUILDING GENERATION USING TOPOLOGY
Embodiments provide systems and methods for three-dimensional building generation from machine learning and topological models. The method uses topology models that are converted into vertices and edges. A BGAN (building generative adversarial network) is used to create fake vertices and edges, and then to generate random samples from seen samples of different building structures based on the relationships of vertices and edges. The resulting embeddings are fed into a machine-trained network to create a digital structure from the image.
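The vertex/edge representation the BGAN consumes can be pictured as coordinate lists plus a connectivity list; a "fake" sample then shares a seen building's topology while varying its geometry. The sketch below only illustrates that data layout with a coordinate jitter, it is not a GAN:

```python
import random

def perturb_topology(vertices, edges, noise=0.1, rng=None):
    """Illustration of a generated sample: jitter vertex coordinates
    while keeping the edge list (the topology) fixed, yielding a new
    geometry with the same connectivity as a seen building."""
    rng = rng or random.Random(0)
    fake = [tuple(c + rng.uniform(-noise, noise) for c in v) for v in vertices]
    return fake, list(edges)

verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]  # square footprint
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]              # connectivity
fake_verts, fake_edges = perturb_topology(verts, edges)
print(fake_edges == edges)  # True — connectivity is preserved
```

In an actual GAN this perturbation would be replaced by a learned generator, and a discriminator would judge whether a vertex/edge sample looks like a real building.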
SYSTEM AND METHOD FOR GENERATING 3D OBJECTS FROM 2D IMAGES OF GARMENTS
A system for generating three-dimensional (3D) objects from two-dimensional (2D) images of garments is presented. The system includes a data module configured to receive a 2D image of a selected garment and a target 3D model. The system further includes a computer vision model configured to generate a UV map of the 2D image of the selected garment. The system moreover includes a training module configured to train the computer vision model based on a plurality of 2D training images and a plurality of ground truth (GT) panels for a plurality of 3D training models. The system furthermore includes a 3D object generator configured to generate a 3D object corresponding to the selected garment based on the UV map generated by a trained computer vision model and the target 3D model. A related method is also presented.
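A UV map assigns each 3D-model vertex a coordinate in the 2D garment image, so generating the 3D object includes looking those texels up. The sketch below illustrates that lookup with nearest-pixel sampling; the list-of-lists "image" and the sampling rule are simplifying assumptions:

```python
def sample_texture(uv_map, texture):
    """For each 3D-model vertex, look up its (u, v) coordinate in the
    UV map and fetch the corresponding texel from the 2D garment image.
    Uses nearest-pixel sampling; u and v are in [0, 1]."""
    h = len(texture)
    w = len(texture[0])
    colors = []
    for u, v in uv_map:
        x = min(int(u * (w - 1)), w - 1)
        y = min(int(v * (h - 1)), h - 1)
        colors.append(texture[y][x])
    return colors

texture = [["red", "blue"], ["green", "white"]]  # 2x2 garment image
uv_map = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # per-vertex UV coordinates
print(sample_texture(uv_map, texture))  # ['red', 'blue', 'green']
```

In the described system it is the trained computer vision model that predicts the UV map from the 2D image; the lookup itself is the cheap part.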
System and method for generating a virtual mathematical model of the dental (stomatognathic) system
A method for forming a virtual 3D mathematical model of a dental system, including receiving DICOM files representing the dental system; identifying the number and location of voxels of tissues of the dental system; combining the voxels of the tissues into voxels of organs of the dental system; combining the organs into the virtual 3D mathematical model of the dental system, wherein the virtual 3D mathematical model supports linear, non-linear and volumetric measurements of the dental system; and presenting the virtual 3D mathematical model to a user. The DICOM files can come from cone beam or multispiral computed tomography, MRT, PET and/or ultrasonography. The tissues include enamel, dentin, pulp, cartilage, periodontium, and/or jaw bone. The organs include teeth, gums, temporomandibular joint and/or jaw. A size of the voxels is typically between 40 μm and 200 μm.
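The tissue-to-organ combination step can be pictured as grouping tissue-labeled voxels under organ labels. The tissue-to-organ mapping below follows the pairings the abstract lists (enamel/dentin/pulp into teeth, and so on), but the dictionary-of-voxels layout is an illustrative assumption:

```python
# Tissue labels and the organs they compose (per the abstract's pairings)
ORGAN_OF = {
    "enamel": "tooth", "dentin": "tooth", "pulp": "tooth",
    "periodontium": "gums", "cartilage": "tmj", "jaw_bone": "jaw",
}

def combine_voxels(tissue_voxels):
    """Group tissue-labeled voxels into organ-labeled voxel sets.
    tissue_voxels maps (x, y, z) voxel coordinates to tissue labels."""
    organs = {}
    for coord, tissue in tissue_voxels.items():
        organs.setdefault(ORGAN_OF[tissue], set()).add(coord)
    return organs

voxels = {(0, 0, 0): "enamel", (0, 0, 1): "dentin", (1, 0, 0): "jaw_bone"}
print(sorted(combine_voxels(voxels)))  # ['jaw', 'tooth']
```

With the voxel size known (40–200 μm per side), the volumetric measurements the model supports reduce to counting an organ's voxels and multiplying by the single-voxel volume.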
System and method for performing a thermal simulation of a powder bed based additive process
A method for performing a thermal simulation of an additive manufacturing process that includes accessing, using one or more processors, a voxel model representing a representative system. The voxel model includes a first transition associated with a first group of one or more voxels transitioning between liquid and vapor, a second transition associated with a second group of one or more voxels transitioning between solid and liquid, a third transition associated with a third group of one or more voxels undergoing sintering, and a fourth transition associated with a fourth group of one or more voxels undergoing a solid-state phase change. The method determines a flux imbalance metric based on a flux and the rates of change of the first, second, third, and fourth transitions, and determines one or more temperatures for the representative system based on the flux imbalance metric.
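A natural reading of the flux imbalance metric is an energy-balance residual: the net heat flux into a voxel minus the energy consumed by each of the four phase transitions (transition rate times latent heat). The sketch below is a sketch under that assumption, with made-up latent-heat values; it is not the patented solver:

```python
def flux_imbalance(flux_in, latent_rates, latent_heats):
    """Residual of a voxel energy balance: net flux minus the energy
    consumed by each phase transition (rate x latent heat). A converged
    temperature solve would drive this metric toward zero."""
    consumed = sum(r * L for r, L in zip(latent_rates, latent_heats))
    return flux_in - consumed

# Rates of the four transitions: vaporization, melting, sintering, solid-state
rates = [0.0, 0.002, 0.001, 0.0005]          # kg/s per voxel (hypothetical)
heats = [2.26e6, 2.7e5, 1.0e5, 5.0e4]        # latent heats, J/kg (hypothetical)
print(flux_imbalance(700.0, rates, heats))   # 700 - 665 = 35.0 W of residual
```

A temperature update loop would then adjust voxel temperatures (and hence the transition rates) until the residual falls below a convergence tolerance.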