Patent classifications
G06T2219/2004
Figure: photo of a patient with a new simulated smile in orthodontic treatment review software
A computer-implemented method for generating a virtual depiction of an orthodontic treatment of a patient is disclosed herein. The method may involve gathering a three-dimensional (3D) model of the patient's dentition at a specific treatment stage of an orthodontic treatment plan, along with an image of the patient's face and dentition. A first set of reference points marked on the 3D model of the patient's dentition and a second set of reference points marked on the dentition in the image may be received. The image of the patient's dentition may be projected into 3D space to create a projected 3D model of the imaged dentition. Based on a comparison of the first set of reference points with projections of the second set of reference points, a plurality of modified images of the patient may be constructed to depict progressive stages of the treatment plan.
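As context for the comparison step, here is a minimal sketch of one way to relate the two reference-point sets: back-project the image points into 3D with a pinhole camera model (per-point depths and the intrinsics fx, fy, cx, cy are assumptions not specified in the abstract), then solve for the rigid transform that best aligns them with the model points using the Kabsch algorithm.

```python
import numpy as np

def backproject(pts_2d, depths, fx, fy, cx, cy):
    # Pinhole back-projection of 2D image points into 3D camera space
    # (per-point depths and camera intrinsics are assumed known).
    x = (pts_2d[:, 0] - cx) * depths / fx
    y = (pts_2d[:, 1] - cy) * depths / fy
    return np.column_stack([x, y, depths])

def rigid_align(src, dst):
    # Kabsch algorithm: least-squares rotation R and translation t
    # mapping the src point set onto the dst point set.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

The residual after alignment is one plausible basis for the comparison the abstract describes; the patent itself does not fix the alignment method.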
Photography-based 3D modeling system and method, and automatic 3D modeling apparatus and method
The present disclosure provides a photography-based 3D modeling system and method, and an automatic 3D modeling apparatus and method, including: (S1) attaching a mobile device and a camera to the same camera stand; (S2) obtaining multiple images used for positioning from the camera or the mobile device while the stand moves, and obtaining the position and direction of each photo capture point, to build a tracking map that uses a global coordinate system; (S3) generating 3D models on the mobile device or a remote server based on an image used for 3D modeling at each photo capture point; and (S4) placing the individual 3D models of all photo capture points in the global coordinate system based on the positions and directions obtained in S2, and connecting the individual 3D models of the photo capture points to generate an overall 3D model that spans multiple photo capture points.
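A compact sketch of step (S4), assuming each per-capture-point model is a point set and each capture point's direction is a yaw angle about the vertical axis (the abstract does not fix either representation):

```python
import numpy as np

def pose_to_matrix(position, yaw):
    # 4x4 transform from a capture point's global position and heading
    # (yaw in radians about the vertical z-axis).
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = position
    return T

def merge_models(local_models, poses):
    # Place each per-capture-point model (Nx3 points) into the global
    # frame using the pose from step S2, then concatenate (step S4).
    merged = []
    for pts, (position, yaw) in zip(local_models, poses):
        T = pose_to_matrix(np.asarray(position, dtype=float), yaw)
        homog = np.hstack([pts, np.ones((len(pts), 1))])
        merged.append((homog @ T.T)[:, :3])
    return np.vstack(merged)
```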
Determining Spatial Relationship Between Upper and Lower Teeth
A computer-implemented method includes receiving a 3D model of the upper teeth (U1) of a patient (P) and a 3D model of the lower teeth (L1) of the patient (P), and receiving a plurality of 2D images, each representative of at least a portion of the upper teeth (U1) and lower teeth (L1) of the patient (P). The method also includes determining, based on the 2D images, a spatial relationship between the upper teeth (U1) and the lower teeth (L1) of the patient (P).
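One hedged reading of the final step: if a pose of each jaw's 3D model can be estimated from every 2D image (by some fitting procedure the abstract leaves open), the spatial relationship can be expressed as the lower-jaw pose in the upper-jaw frame, averaged across images. The per-image pose fitting is assumed here:

```python
import numpy as np

def mean_rotation(Rs):
    # Chordal mean of rotation matrices: average, then project the
    # average back onto SO(3) via SVD.
    U, _, Vt = np.linalg.svd(np.mean(Rs, axis=0))
    d = np.sign(np.linalg.det(U @ Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt

def upper_lower_relationship(poses_upper, poses_lower):
    # Each pose is a 4x4 world-from-model transform estimated from one
    # 2D image; the relationship is the lower jaw in the upper frame.
    rel = [np.linalg.inv(Tu) @ Tl for Tu, Tl in zip(poses_upper, poses_lower)]
    R = mean_rotation([T[:3, :3] for T in rel])
    t = np.mean([T[:3, 3] for T in rel], axis=0)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```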
VIRTUAL REALITY SIMULATOR AND VIRTUAL REALITY SIMULATION PROGRAM
A VR (Virtual Reality) simulator projects or displays a virtual space image on a screen that is installed at a position distant from a user in real space and does not move together with the user. More specifically, the VR simulator acquires a real user position, i.e., the position of the user's head in real space. The VR simulator then acquires a virtual user position, i.e., the position in a virtual space corresponding to the real user position. The VR simulator acquires the virtual space image by imaging the virtual space with a camera placed at the virtual user position, based on virtual space configuration information indicating the configuration of the virtual space. Here, the VR simulator acquires the virtual space image such that the vanishing point lies in a horizontal direction as viewed from the virtual user position.
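The horizontal-vanishing-point condition amounts to rendering with a level gaze: if the virtual camera's forward vector has zero pitch, the vanishing point of depth-parallel lines sits on the horizon as seen from the virtual user position. A minimal sketch in a z-up, right-handed convention (the convention is an assumption):

```python
import numpy as np

def level_view_matrix(eye, yaw):
    # View matrix whose gaze is horizontal (zero pitch), so the vanishing
    # point of depth-parallel lines lies in a horizontal direction from eye.
    forward = np.array([np.cos(yaw), np.sin(yaw), 0.0])  # no vertical tilt
    up = np.array([0.0, 0.0, 1.0])
    right = np.cross(forward, up)
    R = np.stack([right, up, -forward])  # camera axes as rows (camera looks down -z)
    V = np.eye(4)
    V[:3, :3] = R
    V[:3, 3] = -R @ np.asarray(eye, dtype=float)
    return V
```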
GINGIVA STRIP PROCESSING USING ASYNCHRONOUS PROCESSING
Methods and apparatuses for asynchronously identifying and modeling a gingiva strip from a three-dimensional (3D) dental model of a patient's dentition. These methods may reduce the time required to generate accurate 3D dental models and may therefore streamline the process of generating dental treatment plans.
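A minimal sketch of the asynchronous pattern using Python's concurrent.futures; the function name and the model representation are hypothetical placeholders, not the patent's API:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_gingiva_strip(dental_model):
    # Hypothetical stand-in for the gingiva-strip identification and
    # modeling step performed on the 3D dental model.
    return [v for v in dental_model if v.get("near_gumline")]

with ThreadPoolExecutor() as pool:
    model = [{"near_gumline": True}, {"near_gumline": False}]
    future = pool.submit(extract_gingiva_strip, model)  # runs in background
    # ... other treatment-planning work proceeds concurrently here ...
    strip = future.result()  # join only when the strip is actually needed
```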
Cross reality system with fast localization
A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps, even maps of very large environments, and render virtual content specified in relation to those maps. The cross reality system may quickly process a batch of images acquired with a portable device to determine whether there is sufficient consistency across the batch in the computed localization. Processing on at least one image from the batch may determine a rough localization of the device to the map. This rough localization result may be used in a refined localization process for the image for which it was generated. The rough localization result may also be selectively propagated to a refined localization process for other images in the batch, enabling rough localization processing to be skipped for the other images.
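A sketch of the batch flow the abstract describes, with rough_localize and refine_localize assumed as opaque callables returning 4x4 camera poses and the consistency test simplified to the spread of refined camera centers:

```python
import numpy as np

def localize_batch(images, rough_localize, refine_localize, max_spread=0.5):
    # Run the expensive rough localization once, seed every per-image
    # refinement with its result (skipping rough localization for the
    # rest of the batch), then check cross-batch consistency.
    seed = rough_localize(images[0])
    poses = [refine_localize(img, seed) for img in images]
    centers = np.array([T[:3, 3] for T in poses])
    spread = np.linalg.norm(centers - centers.mean(axis=0), axis=1).max()
    return poses if spread <= max_spread else None  # None: inconsistent batch
```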
Mesh updates via mesh frustum cutting
Various implementations or examples set forth a method for scanning a three-dimensional (3D) environment. The method includes generating, based on sensor data captured by a depth sensor on a device, one or more 3D meshes representing a physical space, wherein each of the 3D meshes comprises a corresponding set of vertices and a corresponding set of faces comprising edges between pairs of vertices; determining that a mesh is visible in a current frame captured by an image sensor on the device; determining, based on the corresponding set of vertices and the corresponding set of faces for the mesh, a portion of the mesh that lies within a view frustum associated with the current frame; and updating the one or more 3D meshes by texturing the portion of the mesh with one or more pixels in the current frame onto which the portion is projected.
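A minimal sketch of the frustum-cutting test, assuming the view frustum is given as six inward-facing planes and faces are vertex-index triples (both common but unstated representations):

```python
import numpy as np

def faces_in_frustum(vertices, faces, planes):
    # vertices: (N,3) float array; faces: (M,3) int index array;
    # planes: (6,4) array of (a,b,c,d) with a*x+b*y+c*z+d >= 0 inside.
    homog = np.hstack([vertices, np.ones((len(vertices), 1))])
    inside = (homog @ planes.T >= 0).all(axis=1)  # per-vertex: inside all planes
    return faces[inside[faces].all(axis=1)]       # keep fully inside faces
```

Faces straddling a frustum plane could be clipped rather than dropped; the all-vertices-inside test keeps the sketch short. The returned portion is what would then be textured with the current frame's pixels.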
Mixed-reality surgical system with physical markers for registration of virtual models
An example method includes obtaining a virtual model of a portion of a patient's anatomy from a virtual surgical plan for an orthopedic joint repair surgical procedure to attach a prosthetic to the anatomy; identifying, based on data obtained by one or more sensors, positions of one or more physical markers positioned relative to the anatomy of the patient; and registering, based on the identified positions, the virtual model of the portion of the anatomy with a corresponding observed portion of the anatomy.
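Point-based rigid registration is a standard way to realize the final step: given the planned marker positions on the virtual model and the sensed marker positions, solve for the rigid transform and report a mean fiducial registration error (FRE). A sketch, assuming at least three non-collinear markers:

```python
import numpy as np

def register(markers_virtual, markers_observed):
    # Kabsch-style registration: find R, t minimizing
    # sum ||R @ v_i + t - o_i||^2 over the paired markers.
    mv = markers_virtual.mean(axis=0)
    mo = markers_observed.mean(axis=0)
    H = (markers_virtual - mv).T @ (markers_observed - mo)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mo - R @ mv
    fre = np.linalg.norm(markers_virtual @ R.T + t - markers_observed,
                         axis=1).mean()
    return R, t, fre  # fre: mean fiducial registration error
```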
Performing 3D reconstruction via an unmanned aerial vehicle
In some examples, an unmanned aerial vehicle (UAV) employs one or more image sensors to capture images of a scan target and may use distance information from the images to determine respective locations in three-dimensional (3D) space of a plurality of points of a 3D model representative of a surface of the scan target. The UAV may compare a first image with a second image to determine a difference between a current frame of reference position for the UAV and an estimate of an actual frame of reference position for the UAV. Further, based at least on the difference, the UAV may determine, while the UAV is in flight, an update to the 3D model including at least one of an updated location of at least one point in the 3D model, or a location of a new point in the 3D model.
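A deliberately simplified sketch of the update step: estimate the frame-of-reference difference from 3D points matched between the two images (translation-only here, though the patent's comparison may recover a full transform), then shift the existing model points accordingly:

```python
import numpy as np

def estimate_drift(pts_first, pts_second):
    # Translation-only estimate of the difference between the current
    # and actual frames of reference, from 3D points matched across
    # the first and second images.
    return pts_second.mean(axis=0) - pts_first.mean(axis=0)

def update_model(model_points, drift):
    # Apply the correction to existing model points in flight; new
    # points are then added directly in the corrected frame.
    return model_points + drift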
Real-time virtual try-on item modeling
A method includes generating, based on user images, a user 3-D model. The method proceeds with obtaining, via a user interface, a request to graphically represent an accessory on a user graphical representation, which is generated using the user 3-D model. In response to the request, an accessory 3-D model is obtained. Further, the method includes positioning, via the user interface and based on parameters of the user 3-D model and the accessory 3-D model, an accessory graphical representation onto the user graphical representation. The method further includes updating, in response to detecting user movement, the user 3-D model and the accessory 3-D model, and presenting, via the user interface and based on these updated 3-D models, the accessory graphical representation and the user graphical representation in accordance with the user movement.
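One minimal way to realize the positioning step: fit a scale-and-translation map (rotation omitted for brevity) that carries the accessory's anchor points onto matching landmarks on the user model; the anchor/landmark correspondence is an assumption, e.g. ear landmarks for glasses.

```python
import numpy as np

def place_accessory(accessory_pts, anchor_src, anchor_dst):
    # Fit scale s and translation t so the accessory's anchor points
    # (anchor_src) land on the user model's landmarks (anchor_dst),
    # then apply the same map to the full accessory point set.
    src_c = anchor_src - anchor_src.mean(axis=0)
    dst_c = anchor_dst - anchor_dst.mean(axis=0)
    scale = np.linalg.norm(dst_c) / np.linalg.norm(src_c)  # RMS-spread ratio
    t = anchor_dst.mean(axis=0) - scale * anchor_src.mean(axis=0)
    return scale * accessory_pts + t
```

On each detected user movement, the landmarks would be re-estimated and place_accessory re-run, which is the update-and-present loop of the method in miniature.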