G06T17/30

Environment model with surfaces and per-surface volumes

In one embodiment, a method includes receiving sensor data of a scene captured using one or more sensors; generating, based on the sensor data, (1) a number of virtual surfaces representing a number of detected planar surfaces in the scene and (2) a point cloud representing detected features of objects in the scene; assigning each point in the point cloud to one or more of the number of virtual surfaces; generating occupancy volumes for each of the number of virtual surfaces based on the points assigned to that virtual surface; generating a datastore including the number of virtual surfaces, the occupancy volumes of each of the number of virtual surfaces, and a spatial relationship between the number of virtual surfaces; receiving a query; and sending a response to the query, the response including an identified subset of the number of virtual surfaces in the datastore that satisfy the query.
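As a rough illustration only (not the claimed method), the point-to-surface assignment and per-surface occupancy steps could be sketched as follows, assuming each virtual surface is an illustrative (origin, normal) plane and the occupancy volume is a simple axis-aligned bounding box:

```python
import numpy as np

def assign_points_to_surfaces(points, surfaces, max_dist=0.5):
    """Assign each point to the nearest planar surface within max_dist.

    points:   (N, 3) array of detected feature points.
    surfaces: list of (origin, normal) tuples describing virtual planes.
    Returns one list of point indices per surface.
    """
    assignments = [[] for _ in surfaces]
    for i, p in enumerate(points):
        dists = [abs(np.dot(p - o, n)) for o, n in surfaces]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            assignments[j].append(i)
    return assignments

def occupancy_volume(points, idx):
    """Axis-aligned bounding box of the points assigned to one surface."""
    pts = points[idx]
    return pts.min(axis=0), pts.max(axis=0)
```

A datastore entry per surface could then pair the plane parameters with its occupancy volume, so a query ("which surfaces have free space above them?") reduces to filtering these records.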

PARAMETERIZATION OF DIGITAL ORGANIC GEOMETRIES
20220374556 · 2022-11-24

Examples described herein provide a computer-implemented method that includes performing feature recognition and boundary-represented fitting for a given digital geometry representation to classify a geometric primitive and a freeform surface based on a fitting tolerance criterion. The method further includes parameterizing the geometric primitive and the freeform surface. The method further includes building an associative computer aided design (CAD) geometry using the parameterized geometric primitive and the parameterized freeform surface, wherein associativity is maintained between the digital geometry representation and the associative CAD geometry.
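A minimal sketch of the primitive-vs-freeform classification step, under the assumption that a patch is a "primitive" when a fitted plane reproduces it within the tolerance (the actual method would fit a full set of primitives and splines):

```python
import numpy as np

def classify_surface(points, tol=1e-3):
    """Fit a plane to sampled surface points; classify as a geometric
    primitive if the RMS fitting residual is within tolerance, otherwise
    mark the patch as freeform (a candidate for spline fitting)."""
    centroid = points.mean(axis=0)
    # The plane normal is the singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    residuals = (points - centroid) @ normal
    rms = np.sqrt(np.mean(residuals ** 2))
    return ("primitive:plane" if rms <= tol else "freeform"), rms
```

The same tolerance-driven decision generalizes to cylinders, spheres, and cones by fitting each candidate primitive and keeping the one whose residual passes.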

System for procedural generation of braid representations in a computer image generation system
11501493 · 2022-11-15

A computer-implemented method for procedurally simulating braided strands of fibers may include, under the control of one or more computer systems configured with executable instructions, obtaining a set of parameters of the braided strands of the fibers, the set of parameters indicating a braid spine, generating, based at least in part on the set of parameters, a set of interlacing strand spines that follow the braid spine within a tolerance according to the set of parameters, and computing a set of first geometric structures corresponding to the set of interlacing strand spines.
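As an illustrative sketch (a stand-in for true braid interlacement, not the patented generator), strand spines that follow a braid spine within a fixed radius can be produced as phase-shifted helical offsets:

```python
import numpy as np

def strand_spines(braid_spine, n_strands=3, radius=0.1, crossings=4):
    """Generate interlacing strand spines that wind around a braid spine.

    braid_spine: (M, 3) polyline of points along the braid.
    Each strand is offset from the spine in the x-y plane by a phase-shifted
    helix, so every strand stays within `radius` of the braid spine.
    """
    m = len(braid_spine)
    t = np.linspace(0.0, 2.0 * np.pi * crossings, m)
    spines = []
    for k in range(n_strands):
        phase = 2.0 * np.pi * k / n_strands
        offset = np.stack([radius * np.cos(t + phase),
                           radius * np.sin(t + phase),
                           np.zeros(m)], axis=1)
        spines.append(braid_spine + offset)
    return spines
```

The geometric structures (tubes or ribbons) would then be swept along each strand spine in a later step.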

Method and apparatus for 3-D auto tagging

A multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of the object captured from a camera. Selectable tags can be placed at locations on the object in the MVIDMR. When a selectable tag is selected, media content can be output which shows details of the object at the location where the selectable tag is placed. A machine learning algorithm can be used to automatically recognize landmarks on the object in the frames of the MVIDMR, and a structure from motion calculation can be used to determine 3-D positions associated with the landmarks. A 3-D skeleton associated with the object can be assembled from the 3-D positions and projected into the frames associated with the MVIDMR. The 3-D skeleton can be used to determine the selectable tag locations in the frames of the MVIDMR of the object.
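The projection of 3-D skeleton joints into a frame can be sketched with a standard pinhole camera model (intrinsics K, rotation R, translation t are per-frame quantities that structure from motion would recover; the function name is illustrative):

```python
import numpy as np

def project_landmarks(points_3d, K, R, t):
    """Project 3-D landmark positions (e.g. skeleton joints recovered by
    structure from motion) into a frame using a pinhole camera model.

    points_3d: (N, 3) world coordinates.
    K: (3, 3) intrinsics; R: (3, 3) rotation; t: (3,) translation.
    Returns (N, 2) pixel coordinates for placing selectable tags.
    """
    cam = points_3d @ R.T + t          # world -> camera coordinates
    uvw = cam @ K.T                    # camera -> homogeneous image coords
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide
```

Running this per frame yields the 2-D tag locations that track the object as the MVIDMR viewpoint changes.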

METHOD AND DEVICE FOR 3D SHAPE MATCHING BASED ON LOCAL REFERENCE FRAME
20220343105 · 2022-10-27

A method and a device for 3D shape matching based on a local reference frame are proposed. In the method, after a 3D point cloud and feature points are acquired, the feature point set is projected onto a plane, and feature transformation is performed on the projected points using at least one factor from among the distances between the 3D points and the feature points, the distances between the 3D points and the projected points, and the average distances between the 3D points and their 1-ring neighboring points, to acquire a point distribution with a larger variance in a certain direction than the projected point set; the local reference frame is then determined based on the transformed point distribution. A 3D local feature descriptor established based on this local reference frame can encode the 3D local surface information more robustly, so as to obtain a better 3D shape matching effect.
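A simplified sketch of building such a local reference frame, assuming the z-axis is the surface normal and using only one of the factors named above (distance to the feature point) as the weighting:

```python
import numpy as np

def local_reference_frame(neighbors, feature_point, normal):
    """Build a local reference frame at a feature point.

    The z-axis is the surface normal; neighbors are projected onto the
    tangent plane and weighted by their distance to the feature point,
    and the dominant direction of the weighted projected scatter gives
    a repeatable x-axis.
    """
    z = normal / np.linalg.norm(normal)
    d = neighbors - feature_point
    proj = d - np.outer(d @ z, z)              # project onto tangent plane
    w = np.linalg.norm(d, axis=1)              # distance-based weights
    cov = (proj * w[:, None]).T @ proj         # weighted scatter matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    x = eigvecs[:, -1]                         # largest-eigenvalue direction
    x = x - (x @ z) * z                        # keep x orthogonal to z
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return x, y, z
```

Distance weighting stretches the scatter along one direction, which is what makes the resulting x-axis less ambiguous than one computed from the raw projected points.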

Systems and methods for generating non-approximated three-dimensional representations

A modeling system includes a controller in communication with a memory, the memory including a unit cell defined by a unit cell structure and a control voxel, wherein the unit cell structure is mapped to the control voxel. The controller is configured to: divide a three-dimensional geometry into a plurality of voxels; and populate each voxel with a corresponding unit cell. For each voxel, the corresponding unit cell structure is modified to fit within each voxel according to the mapping of the unit cell structure to the control voxel surfaces by positioning the unit cell structure relative to the voxel surfaces corresponding to the control voxel surfaces of the mapping. At least one of the voxels of the plurality of voxels has a voxel shape that is different than a control voxel shape of the control voxel.
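One common way to position a unit-cell structure inside a voxel whose shape differs from the control voxel is trilinear interpolation of the voxel's eight corners; the sketch below assumes the control voxel is the unit cube and is only an illustration of that mapping idea:

```python
import numpy as np

def map_unit_cell(cell_points, voxel_corners):
    """Map unit-cell structure points from the control voxel (unit cube)
    into a target voxel via trilinear interpolation of its 8 corners.

    cell_points:   (N, 3) points with coordinates in [0, 1]^3.
    voxel_corners: (8, 3) corners ordered by the (i, j, k) bits of the
                   index, i.e. corner b sits at (b&1, (b>>1)&1, (b>>2)&1)
                   in the control cube.
    """
    u, v, w = cell_points[:, 0:1], cell_points[:, 1:2], cell_points[:, 2:3]
    out = np.zeros_like(cell_points)
    for b in range(8):
        wu = u if b & 1 else 1 - u
        wv = v if b & 2 else 1 - v
        ww = w if b & 4 else 1 - w
        out += wu * wv * ww * voxel_corners[b]
    return out
```

Because the mapping is exact at the corners, adjacent voxels that share a face deform the shared cell boundary identically, keeping the lattice watertight without approximation.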

Systems and methods for reconstructing a gingival profile in an arch form 3D digital model

A method of reconstructing a gingival profile comprising: generating, by a processor, a defined cross section of an arch form model extending through a tooth axis of a given tooth and a defined gingiva region; identifying, within a tooth cross section profile of the defined cross section, a set of reference points for generating a parametric curve defining at least a portion of the tooth cross section profile; generating the parametric curve based on the set of reference points; generating a first undefined cross section of the arch form model extending through the tooth axis of the given tooth and an undefined gingiva region; constructing, in the first undefined cross section, at least a portion of the parametric curve, thereby generating a first reconstructed gingival profile, and updating the arch form model with the first reconstructed gingival profile; and storing the arch form model including the reconstructed gingival profile.
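The parametric curve built from reference points could, for illustration, be a quadratic Bezier (the abstract does not specify the curve family; three reference points are assumed here):

```python
import numpy as np

def quadratic_bezier(p0, p1, p2, n=20):
    """Evaluate a quadratic Bezier curve from three reference points
    sampled on the cross-section profile: p0 and p2 are interpolated
    endpoints and p1 acts as the control point."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2
```

Re-evaluating the same curve in each undefined cross section is what transfers the profile shape from the defined region to the reconstructed one.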

THREE-DIMENSIONAL (3D) MODEL GENERATION FROM TWO-DIMENSIONAL (2D) IMAGES
20230083607 · 2023-03-16

A model generation system generates three-dimensional (3D) models for objects based on two-dimensional (2D) images of the objects. The model generation system may receive an object image and generate a 3D object model for the object based on the object image. The model generation system may generate an object skeleton for the object based on the object image and use the object skeleton to generate pixel partitions representing parallel cross sections of the object. The model generation system may apply a machine-learning model (e.g., a neural network) to the object image to determine parameters for a shape that would best represent each parallel cross section, and then generate the 3D object model for the object based on the shapes of the cross sections, the object image, and the object skeleton.
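The final assembly step can be sketched by lofting predicted cross-section shapes along the skeleton; the ellipse parameterization and straight z-aligned skeleton below are illustrative assumptions, standing in for whatever shape parameters the network predicts:

```python
import numpy as np

def lofted_cross_sections(skeleton, params, samples=16):
    """Build 3-D surface points by sweeping predicted cross-section
    shapes along an object skeleton.

    skeleton: (M, 3) points along the object's medial axis (z-aligned here).
    params:   (M, 2) per-section (a, b) ellipse radii, as a network might
              predict for each parallel cross section.
    Returns an (M, samples, 3) array of ring points.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    rings = np.empty((len(skeleton), samples, 3))
    for i, (c, (a, b)) in enumerate(zip(skeleton, params)):
        rings[i, :, 0] = c[0] + a * np.cos(theta)
        rings[i, :, 1] = c[1] + b * np.sin(theta)
        rings[i, :, 2] = c[2]
    return rings
```

Triangulating between consecutive rings would then produce the watertight 3D object model.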