Patent classification: G09B23/283
Anatomic Apparatus and Training System for Remote and Interactive Hands-On Surgical Training for Dentists
A training system for dental procedures is described, having a training device and a self-assessment for use with the training device. The training device is printed with a 3D printer and includes a predetermined anatomic form of at least a portion of a human jaw structure and a predetermined anatomic form of at least one human tooth structure. Part of the printed human tooth structure is rooted in the printed human jaw structure, and both structures are designed to share at least one analogous physical property with their corresponding human structures. The self-assessment includes a pictorial array of procedural outcomes for a procedural step and at least one feedback instruction, so that a user can identify which image in the pictorial array best represents the user's own outcome of the procedural step performed on the training device.
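The pictorial self-assessment above amounts to a lookup from the outcome image a user selects to a feedback instruction. A minimal sketch follows; the image labels and feedback strings are illustrative assumptions, not taken from the patent:

```python
# Illustrative mapping from outcome images to feedback instructions
# (labels and wording are made up for the sketch).
FEEDBACK = {
    "A": "Outcome acceptable: preparation depth is within tolerance.",
    "B": "Over-reduction: lighten bur pressure and recheck depth.",
    "C": "Under-reduction: extend the preparation and reassess.",
}

def assess(selected_image: str) -> str:
    """Return the feedback instruction for the outcome image the user picked."""
    if selected_image not in FEEDBACK:
        raise ValueError(f"unknown outcome image: {selected_image!r}")
    return FEEDBACK[selected_image]
```

For example, a user who judges their result to look like image "B" receives the over-reduction feedback instruction.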
Kit and Method for Home Use for Interproximal Reduction (IPR) Stripping of Teeth for Braces or Aligners
An interproximal reduction (IPR) kit and method for using the same. The patient takes their own dental impression and submits it to a dental service provider for a treatment plan. A set of aligners based on the treatment plan is then delivered to the patient as part of a kit. The kit also includes a set of written instructions and/or a treatment diagram and a plurality of strips used to reduce the widths of the patient's teeth at specific spots according to the treatment plan. The patient follows the reapproximation instructions or is directed to a website for supervised performance of the reapproximation process. After sufficient instruction, the patient applies the strips to the correct spots between their teeth. Once reapproximation is complete, the patient is ready to begin wearing the aligners as dictated by the treatment plan.
VISION-HAPTICS FUSED AUGMENTED REALITY SIMULATOR FOR DENTAL SURGICAL SKILL TRAINING
A vision-haptics fused augmented reality simulator for dental surgical skill training includes a dental simulation training platform built on an artificial head phantom, a dental operation training system based on a haptic feedback device, and an observation system based on an augmented reality head-mounted display. A virtual dental model is generated by modeling from CBCT data and scan data of a patient's oral cavity to construct a virtual dental environment, and the virtual oral cavity is spatially matched to the physical dental model using the virtual dental model and feature points obtained by scanning the artificial head phantom. In the virtual dental surgery simulation method, haptic and visual information are output at frequencies of not less than 1000 Hz and 60 Hz, respectively; a visual information processing method is performed on mesh data, and a haptics-vision space calibration method is performed based on information about the operator's head.
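The dual update rates (haptics at not less than 1000 Hz, visuals at 60 Hz) can be illustrated with a minimal fixed-timestep loop. The counters below stand in for the real haptic-device and renderer calls, which are assumptions of this sketch:

```python
HAPTIC_HZ = 1000   # haptic force output rate
VISUAL_HZ = 60     # visual frame rate

def run(duration_s: float) -> tuple[int, int]:
    """Step the simulation at the haptic rate; render at the visual rate."""
    haptic_updates = visual_updates = 0
    next_visual_t = 0.0
    for tick in range(int(duration_s * HAPTIC_HZ)):
        t = tick / HAPTIC_HZ
        haptic_updates += 1            # force/torque output on every tick
        if t >= next_visual_t:         # frame due only at the 60 Hz cadence
            visual_updates += 1
            next_visual_t += 1.0 / VISUAL_HZ
    return haptic_updates, visual_updates
```

Running one simulated second yields 1000 haptic updates and 60 visual frames, matching the stated frequency floor for each channel.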
Portable camera aided simulator (PortCAS) for minimally invasive surgical training
The present disclosure is directed to a system and method for surgical training with low-cost, reusable materials and a highly customizable virtual environment for skill-building. According to various embodiments, a surgical training tool is usable in conjunction with a support structure configured to at least partially constrain the tool's movement. Meanwhile, the tool is tracked in real time with off-tool detectors to generate a tool path driving a virtual rendering of the surgical training tool in an operative environment. The virtual rendering may be visually observable via a display device and may include a customizable and/or selectable operative environment with one or more structures that can be operated on by the virtual surgical training tool. By tracking the virtual tool's interaction with the virtual structures, a task path may be established for documenting and/or objectively assessing the performance of one or more operative tasks.
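One common way to score a recorded tool path objectively is path efficiency: total distance travelled versus the straight-line distance between start and end. This is a hedged sketch of the assessment idea, not necessarily the patent's metric:

```python
import math

def path_length(points) -> float:
    """Total 3D distance travelled along a sequence of tracked tool positions."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def path_efficiency(points) -> float:
    """1.0 means perfectly direct motion; lower values mean wasted movement."""
    direct = math.dist(points[0], points[-1])
    travelled = path_length(points)
    return direct / travelled if travelled else 1.0
```

A path that detours through a right angle, for instance, scores about 0.71 rather than 1.0, quantifying the indirect motion for later review.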
Movement tracking and simulation device and method
Apparatus for dental simulation comprising: a display; one or more processors; a hand piece comprising a light emitter and an inertial measurement unit (IMU), wherein the IMU is in communication with at least one of the one or more processors and is configured to generate IMU position and orientation data describing the position and orientation of the hand piece; and at least two cameras in communication with at least one of the one or more processors, wherein each of the at least two cameras is configured to generate a series of images using light emitted by the light emitter. The one or more processors are configured to: receive the series of images from each of the at least two cameras; generate, based on two-dimensional coordinate data derived from the series of images, three-dimensional coordinate data identifying the position of the light emitter; receive the IMU position and orientation data; combine the IMU position and orientation data and the three-dimensional coordinate data to generate simulation instructions; and display, on the display, a three-dimensional virtual image of an object and animate movement of the object according to the simulation instructions.
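For intuition, the position-from-two-cameras step plus IMU fusion can be sketched under a strong simplifying assumption: a rectified stereo pair with known focal length and baseline, so depth follows from pixel disparity. The calibration constants below are made up for the sketch:

```python
FOCAL_PX = 800.0    # focal length in pixels (assumed calibration)
BASELINE_M = 0.10   # distance between the two cameras, metres (assumed)

def triangulate(left_xy, right_xy):
    """Recover the emitter's 3D position from its 2D coordinates in each view."""
    disparity = left_xy[0] - right_xy[0]
    z = FOCAL_PX * BASELINE_M / disparity   # depth from disparity
    x = left_xy[0] * z / FOCAL_PX
    y = left_xy[1] * z / FOCAL_PX
    return (x, y, z)

def fuse(left_xy, right_xy, imu_orientation_rpy):
    """Combine optical position with IMU orientation into one pose record."""
    return {"position": triangulate(left_xy, right_xy),
            "orientation": imu_orientation_rpy}
```

A real system would calibrate both cameras, undistort the images, and filter the IMU stream; this only shows why two views suffice for position while the IMU supplies orientation.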
Guided Operation of a Language-Learning Device Based on Learned User Memory Characteristics
A personal electronic device is adapted to construct data tables from user input received over time relating to translations of translatable items from a first language to a second language. Entries in a data table for a user are dynamic and may indicate likelihoods of the user correctly translating translatable items as a function of time. Each translatable item may have a different time-dependent likelihood of a correct translation. Operation of the personal electronic device for a user may be based in part on the acquired, time-dependent likelihoods for that user, so that information may be presented to the user in a more efficient manner.
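A per-item, time-dependent likelihood table can be sketched with an exponential forgetting model: recall probability decays at an item-specific rate, and items falling below a threshold are presented next. The exponential form, threshold, and all names are assumptions; the abstract does not specify a functional form:

```python
import math

def recall_probability(hours_since_review: float, decay_per_hour: float) -> float:
    """Assumed exponential forgetting curve for one translatable item."""
    return math.exp(-decay_per_hour * hours_since_review)

def items_to_review(table, now_hours: float, threshold: float = 0.5):
    """Return items whose predicted recall has fallen below the threshold.

    table maps item -> (last_review_time_hours, decay_per_hour).
    """
    due = []
    for item, (last_review, decay) in table.items():
        if recall_probability(now_hours - last_review, decay) < threshold:
            due.append(item)
    return due
```

An item the user forgets quickly (large decay rate) comes due sooner than one the user reliably recalls, which is the efficiency gain the abstract describes.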
METHODS FOR VISUALIZING TREATMENT OUTCOMES
Methods for visualizing treatment outcomes are provided. In some embodiments, a method includes obtaining a first 2D image of a patient's teeth from a mobile device, and obtaining an arch model representing an outcome for the patient's teeth after orthodontic treatment. The method can include matching the patient's teeth in the first 2D image to corresponding teeth in the arch model, using a parametric model. The method can also include determining a target position for the patient's teeth, based on a position of the matched corresponding teeth in the arch model. The method can further include generating a second 2D image depicting the patient's teeth in the target position.
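To illustrate the matching step with a toy parametric model: fit a uniform scale and translation that map arch-model tooth centroids onto the tooth centroids detected in the 2D photo, using closed-form least squares. The real parametric model is richer (full pose, camera projection); this only conveys the idea:

```python
def fit_scale_translation(model_pts, image_pts):
    """Least-squares fit of scale s and translation (tx, ty) so that
    image ~= s * model + (tx, ty), for paired 2D centroid lists."""
    n = len(model_pts)
    mx = sum(p[0] for p in model_pts) / n   # model centroid
    my = sum(p[1] for p in model_pts) / n
    ix = sum(p[0] for p in image_pts) / n   # image centroid
    iy = sum(p[1] for p in image_pts) / n
    num = sum((p[0] - mx) * (q[0] - ix) + (p[1] - my) * (q[1] - iy)
              for p, q in zip(model_pts, image_pts))
    den = sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in model_pts)
    s = num / den
    return s, ix - s * mx, iy - s * my      # scale, tx, ty
```

Once fitted, the same parameters can place the treated (target-position) arch model back into the photo's coordinates to render the second 2D image.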
In vitro dynamic mouth simulator
An in vitro dynamic mouth model includes an upper jaw that includes a plurality of protuberances simulating human teeth, a lower jaw that is coupled with a rounded silicone pad simulating human tongue, and a mouth wall that encapsulates food sample(s) subjected to in vitro mastication such that the food sample remains within the mouth model. The mouth wall contains at least one hole that allows injection of simulated saliva fluid. As simulated chewing takes place, the injected fluid directly interacts with the food sample.
Oral Hygiene Systems and Methods
A method for promoting compliance with an oral hygiene regimen includes displaying, on a display device, a representation of at least a portion of a set of teeth of a user. The method also includes overlaying an indicium on the representation such that the indicium is associated with a first section of the representation. Responsive to a determination, via at least one of one or more processors, that a head of an oral hygiene device is positioned directly adjacent to a first section of the set of teeth that corresponds to the first section of the representation for at least a predetermined amount of time, the indicium is removed from the display device.
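The dwell-time logic above can be sketched as a small tracker: while the brush head is detected over a tooth section, that section's dwell time accumulates, and once it reaches the required duration the overlaid indicium is cleared. Section labels and the 3-second threshold are illustrative assumptions:

```python
REQUIRED_S = 3.0   # assumed predetermined brushing time per section

class BrushingTracker:
    def __init__(self, sections):
        self.indicia = set(sections)           # sections still showing an indicium
        self.dwell = {s: 0.0 for s in sections}

    def update(self, section: str, dt: float) -> None:
        """Called each sensing tick with the section under the brush head."""
        self.dwell[section] += dt
        if self.dwell[section] >= REQUIRED_S:
            self.indicia.discard(section)      # remove the overlay from the display
```

After three seconds over one section, only that section's indicium disappears; the remaining indicia keep prompting the user toward uncovered teeth.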
Augmented reality enhancements for dental practitioners
A processing device receives, from an image capture device associated with an augmented reality (AR) display, a plurality of images of a face of a patient. The processing device selects a subset of the plurality of images that meet one or more image selection criteria. The selection comprises determining, from the plurality of images, a first image that represents a first position extreme for the face; determining, from the plurality of images, a second image that represents a second position extreme of the face; selecting the first image; and selecting the second image. The processing device further generates a model of a jaw of the patient based at least in part on the subset of the plurality of images that have been selected.
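The position-extreme selection can be sketched by scoring each captured image with a single pose measurement (here a hypothetical jaw-opening angle, standing in for whatever pose estimate the real criteria use) and keeping the minimum and maximum:

```python
def select_extremes(images):
    """images: list of (image_id, pose_angle) pairs.

    Returns the ids of the two images at the positional extremes,
    e.g. fully closed and fully open jaw positions.
    """
    lo = min(images, key=lambda im: im[1])
    hi = max(images, key=lambda im: im[1])
    return [lo[0], hi[0]]
```

The two extreme views bracket the jaw's range of motion, which is what makes the selected subset sufficient for generating the jaw model.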