Patent classifications
G06T17/00
System and method for performing a thermal simulation of a powder bed based additive process
A method for performing a thermal simulation of an additive manufacturing process that includes accessing a voxel model representing a representative system using one or more processors. The voxel model includes a first transition associated with a first group of one or more voxels transitioning between liquid and vapor, a second transition associated with a second group of one or more voxels transitioning between solid and liquid, a third transition associated with a third group of one or more voxels undergoing sintering, and a fourth transition associated with a fourth group of one or more voxels undergoing a solid-state phase change. The method determines a flux imbalance metric based on a flux, a rate of change of the first transition, a rate of change of the second transition, a rate of change of the third transition, and a rate of change of the fourth transition. The method then determines one or more temperatures for the representative system based on the flux imbalance metric.
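One plausible reading of the flux imbalance metric (not taken from the patent itself) is an energy residual per voxel group: the net flux into the voxels minus the latent heat consumed by each of the four transitions, with the residual driving a temperature update. The latent-heat constants and the explicit-Euler update below are illustrative assumptions.

```python
# Hedged sketch: the flux imbalance as an energy residual. All constants and
# function names are illustrative assumptions, not the patented implementation.

L_VAPORIZATION = 6.1e6   # J/kg, liquid <-> vapor (illustrative value for steel)
L_FUSION = 2.7e5         # J/kg, solid <-> liquid
L_SINTERING = 1.0e4      # J/kg, sintering (assumed small)
L_SOLID_STATE = 5.0e3    # J/kg, solid-state phase change

def flux_imbalance(flux, r_vapor, r_melt, r_sinter, r_solid_state):
    """Net flux minus the energy-rate sinks of the four transitions (per-kg basis)."""
    return (flux
            - L_VAPORIZATION * r_vapor     # rate of the first transition
            - L_FUSION * r_melt            # rate of the second transition
            - L_SINTERING * r_sinter       # rate of the third transition
            - L_SOLID_STATE * r_solid_state)  # rate of the fourth transition

def update_temperature(T, imbalance, dt, cp=500.0):
    """Explicit-Euler temperature update driven by the residual imbalance."""
    return T + dt * imbalance / cp
```

With all transition rates at zero, the imbalance reduces to the applied flux, and temperatures rise in proportion to it; an active transition absorbs part of the flux as latent heat.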
Adaptive model updates for dynamic and static scenes
In one embodiment, a computing system may update a first 3D model of a region of an environment based on comparisons between the first 3D model and first depth measurements of the region generated during a first time period. The computing system may determine that the region is static by comparing the first 3D model to second depth measurements of the region generated during a second time period. The computing system may, in response to determining that the region is static, detect whether the region changed after the second time period based on comparisons between a second 3D model of the region and third depth measurements of the region generated after the second time period, the second 3D model having a lower resolution than the first 3D model. The computing system may, in response to detecting a change in the region, update the first 3D model of the region.
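The scheme above can be sketched as a per-region state machine: compare incoming depth against the full-resolution model while the region is dynamic, then switch to a cheaper low-resolution comparison once it settles. The depth maps, the 4x downsampling used as the "second 3D model," and the residual threshold below are all stand-in assumptions.

```python
import numpy as np

class RegionTracker:
    """Illustrative sketch: full-res updates while dynamic, low-res change
    detection while static. Not the described system's actual implementation."""

    def __init__(self, initial_depth, threshold=0.01):
        self.model = initial_depth.copy()              # first (high-res) 3D model proxy
        self.low_res = initial_depth[::4, ::4].copy()  # second, lower-resolution model
        self.threshold = threshold
        self.is_static = False

    def observe(self, depth):
        if self.is_static:
            # Cheap check against the low-resolution model only.
            changed = np.mean(np.abs(depth[::4, ::4] - self.low_res)) > self.threshold
            if changed:
                self.is_static = False
                self.model = depth.copy()              # update the high-res model
                self.low_res = depth[::4, ::4].copy()
        else:
            residual = np.mean(np.abs(depth - self.model))
            if residual <= self.threshold:
                self.is_static = True                  # region settled; drop to low-res
            else:
                self.model = depth.copy()
                self.low_res = depth[::4, ::4].copy()
```

The design point is that static regions, which dominate most environments, pay only for the downsampled comparison until something actually moves.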
Two-dimensional image collection for three-dimensional body composition modeling
Described are systems and methods directed to generation of a dimensionally accurate three-dimensional (“3D”) body model of a body, such as a human body, based on two-dimensional (“2D”) images of that body. A user may use a 2D camera, such as a digital camera typically included in many of today's portable devices (e.g., cell phones, tablets, laptops, etc.), to obtain a series of 2D body images of their body from different directions with respect to the camera. The 2D body images may then be used to generate a plurality of predicted body parameters corresponding to the body represented in the 2D body images. Those predicted body parameters may then be further processed to generate a dimensionally accurate 3D model of the body of the user.
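The abstract does not say how predicted body parameters become a mesh, but one common technique for this final step is a linear shape-blendshape body model (SMPL-style): a template mesh deformed by a weighted sum of shape basis directions. The template and basis arrays here are made-up stand-ins for illustration only.

```python
import numpy as np

def body_from_parameters(template_vertices, shape_basis, betas):
    """Sketch of parametric mesh generation (assumed, not from the patent).

    template_vertices: (V, 3) mean body mesh
    shape_basis:       (K, V, 3) per-parameter vertex displacement directions
    betas:             (K,) predicted body parameters from the 2D images
    """
    # Each beta scales one basis displacement; the sum deforms the template.
    return template_vertices + np.tensordot(betas, shape_basis, axes=1)
```

In a full pipeline, the betas would come from a network trained on the multi-directional 2D images, and the resulting vertices would be scaled to the user's measured height to keep the model dimensionally accurate.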
Systems and methods for detecting and correcting data density during point cloud generation
A point cloud capture system is provided to detect and correct data density during point cloud generation. The system obtains data points that are distributed within a space and that collectively represent one or more surfaces of an object, scene, or environment. The system computes the different densities with which the data points are distributed in different regions of the space, and presents an interface with a first representation for a first region of the space in which a first subset of the data points are distributed with a first density, and a second representation for a second region of the space in which a second subset of the data points are distributed with a second density.
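The density computation can be sketched by binning points into a uniform voxel grid and reporting points per unit volume for each occupied cell, so low-density regions can be surfaced in the interface for recapture. The cell size and density threshold are assumed parameters, not values from the patent.

```python
import numpy as np

def region_densities(points, cell_size=1.0):
    """points: (N, 3) array. Returns occupied grid cells and their densities."""
    cells = np.floor(points / cell_size).astype(np.int64)
    # Group points by cell and count how many fall in each.
    unique_cells, counts = np.unique(cells, axis=0, return_counts=True)
    densities = counts / cell_size ** 3   # points per unit volume
    return unique_cells, densities

def low_density_cells(points, cell_size=1.0, min_density=10.0):
    """Cells whose density falls below a threshold, e.g. for a warning overlay."""
    cells, densities = region_densities(points, cell_size)
    return cells[densities < min_density]
```

An interface could then render the first (dense) regions normally and highlight the second (sparse) regions returned by `low_density_cells`.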
Single-pass object scanning
Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model based on a selected subset of the images and depth data corresponding to each of the images of the subset. For example, an example process may include acquiring sensor data during movement of the device in a physical environment including an object, the sensor data including images of a physical environment captured via a camera on the device, selecting a subset of the images based on assessing the images with respect to motion-based defects based on device motion and depth data, and generating a 3D model of the object based on the selected subset of the images and depth data corresponding to each of the images of the selected subset.
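The frame-selection step can be illustrated with a simple motion-blur proxy (the actual assessment criteria are not specified in the abstract): score each image by apparent motion speed times exposure time, and keep frames whose expected blur stays under a pixel budget. The field names and threshold are assumptions.

```python
# Illustrative sketch of motion-based frame selection, not the disclosed method.

def select_frames(frames, max_blur_px=1.0):
    """frames: dicts with 'speed' (px/s apparent motion from device tracking)
    and 'exposure' (s). Keeps frames with expected blur under the budget."""
    selected = []
    for frame in frames:
        blur_px = frame["speed"] * frame["exposure"]  # expected motion blur
        if blur_px <= max_blur_px:
            selected.append(frame)
    return selected
```

Only the surviving subset, with its corresponding depth data, would then be fed to reconstruction, which is what makes a single unrehearsed pass around the object viable.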
System for authorizing rendering of objects in three-dimensional spaces
Systems and methods for authorizing rendering of objects in three-dimensional spaces are described. The system may include a first system defining a virtual three-dimensional space including the placement of a plurality of objects in the three-dimensional space, a second system including a plurality of rules associated with portions of the three-dimensional space, and a device coupled to the first system and the second system. The device may receive a request to render a volume of the three-dimensional space, retrieve objects for the volume of the three-dimensional space, retrieve rules associated with the three-dimensional space, and apply the rules for the three-dimensional space to the objects.
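The device's request flow can be sketched as: gather the objects inside the requested volume, then pass each one through the rules attached to that portion of space before it is rendered. The axis-aligned volume representation and predicate-style rules below are assumed for illustration; the patent does not specify a data model.

```python
# Minimal sketch of the retrieve-and-apply-rules step (data model assumed).

def inside(volume, position):
    """volume: ((x0, y0, z0), (x1, y1, z1)) axis-aligned bounds."""
    lo, hi = volume
    return all(l <= p <= h for l, p, h in zip(lo, position, hi))

def render_volume(volume, objects, rules):
    """objects: dicts with a 'position'; rules: predicates object -> bool.
    Returns only the objects authorized for rendering in this volume."""
    in_volume = [o for o in objects if inside(volume, o["position"])]
    return [o for o in in_volume if all(rule(o) for rule in rules)]
```

In practice the object lookup would come from the first system, the rule lookup from the second, with the device only combining the two per request.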
Methods and apparatuses for customizing a rapid palatal expander
Methods for designing and fabricating a series of apparatuses for expanding a patient's palate (“palatal expanders”). In particular, described herein are methods and apparatuses for forming palatal expanders, including rapid palatal expanders, as well as series of palatal expanders formed as described herein and apparatuses for designing and fabricating them.