Patent classifications
G06T19/00
AUGMENTED REALITY GUIDANCE OVERLAY
Embodiments of the present invention provide computer-implemented methods, computer program products, and computer systems. Embodiments of the present invention can, in response to receiving a request, identify a core component from source material based on topic analysis. Embodiments of the present invention can then generate three-dimensional representations of physical core components associated with the request. Finally, embodiments of the present invention render the generated three-dimensional representations of the physical core components over the physical core components.
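As a rough sketch of the flow this abstract describes (identify a component via topic analysis, then render its 3D representation over the physical part), the following Python is illustrative only; the topic index, the `Component` type, and all helper names are assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Component:
    name: str
    mesh_path: str  # path to a 3D representation of the physical component

# Hypothetical topic index mapping topic terms to physical core components.
TOPIC_INDEX = {
    "toner": Component("toner cartridge", "meshes/toner.obj"),
    "fuser": Component("fuser unit", "meshes/fuser.obj"),
}

def identify_core_component(source_material: str) -> Optional[Component]:
    """Naive topic analysis: match indexed topic terms against the source text."""
    text = source_material.lower()
    for topic, component in TOPIC_INDEX.items():
        if topic in text:
            return component
    return None

def handle_request(request_text: str) -> None:
    component = identify_core_component(request_text)
    if component is None:
        return
    # A real system would load the mesh and anchor it over the tracked pose
    # of the physical component in the AR view.
    print(f"Rendering {component.mesh_path} over the physical {component.name}")

handle_request("How do I replace the toner in this printer?")
```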
DYNAMIC WIDGET PLACEMENT WITHIN AN ARTIFICIAL REALITY DISPLAY
The disclosed computer-implemented method may include (1) identifying a trigger element within a field of view presented by a display element of an artificial reality device, (2) determining a position of the trigger element within the field of view, (3) selecting a position within the field of view for a virtual widget based on the position of the trigger element, and (4) presenting the virtual widget at the selected position via the display element. Various other methods, systems, and computer-readable media are also disclosed.
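A minimal sketch of steps (1)-(4) in Python, assuming the trigger element has already been detected as a 2D bounding box in normalized field-of-view coordinates; the placement heuristic (prefer the space beside the trigger, clamped to the view) is an illustrative assumption, not the patent's method.

```python
from typing import NamedTuple

class Rect(NamedTuple):
    x: float  # left edge, normalized field-of-view coordinates in [0, 1]
    y: float  # top edge
    w: float
    h: float

def select_widget_position(trigger: Rect, widget_w: float, widget_h: float):
    """Step (3): pick a widget position based on the trigger's position."""
    # Prefer the space to the right of the trigger; fall back to its left.
    x = trigger.x + trigger.w + 0.02
    if x + widget_w > 1.0:
        x = trigger.x - widget_w - 0.02
    # Align vertically with the trigger, clamped so the widget stays in view.
    y = min(max(trigger.y, 0.0), 1.0 - widget_h)
    return max(x, 0.0), y

# Steps (1)-(2): an upstream detector found the trigger at the top right.
trigger = Rect(x=0.7, y=0.1, w=0.2, h=0.1)
pos = select_widget_position(trigger, widget_w=0.25, widget_h=0.15)
print(f"Step (4): present widget at {pos} via the display element")
```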
SYSTEM AND METHOD FOR GENERATING 3D OBJECTS FROM 2D IMAGES OF GARMENTS
A system for generating three-dimensional (3D) objects from two-dimensional (2D) images of garments is presented. The system includes a data module configured to receive a 2D image of a selected garment and a target 3D model. The system further includes a computer vision model configured to generate a UV map of the 2D image of the selected garment. The system moreover includes a training module configured to train the computer vision model based on a plurality of 2D training images and a plurality of ground truth (GT) panels for a plurality of 3D training models. The system furthermore includes a 3D object generator configured to generate a 3D object corresponding to the selected garment based on the UV map generated by the trained computer vision model and the target 3D model. A related method is also presented.
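The following is a hedged sketch of how such a pipeline might be wired in PyTorch. The tiny encoder and output head stand in for the computer vision model, and random tensors stand in for the 2D training images and ground-truth panels; the architecture and loss are assumptions, not the patent's design.

```python
import torch
from torch import nn

class UVMapNet(nn.Module):
    """Stand-in computer vision model: image in, per-pixel UV coordinates out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Two output channels: predicted (u, v) for each pixel.
        self.head = nn.Conv2d(32, 2, 1)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.encoder(image)))

def train_step(model, optimizer, images, gt_uv):
    """One step of the training module: regress UV maps against GT panels."""
    optimizer.zero_grad()
    loss = nn.functional.l1_loss(model(images), gt_uv)
    loss.backward()
    optimizer.step()
    return loss.item()

model = UVMapNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(4, 3, 64, 64)  # batch of 2D training images (placeholder)
gt_uv = torch.rand(4, 2, 64, 64)   # ground-truth UV panels (placeholder)
print("loss:", train_step(model, opt, images, gt_uv))
# At inference, the predicted UV map would drive texturing of the target 3D model.
```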
Method of Operating Intraoral Scanner for Fast and Accurate Full Mouth Reconstruction
An intraoral scanner includes an image capturing device and a processor. A method of operating the intraoral scanner includes the image capturing device sequentially capturing M images of a buccal bite, the processor generating M sets of buccal bite point clouds according to the M images, the processor matching the M sets of buccal bite point clouds to generate a bite model, when the number of data points of the bite model exceeds a first threshold, the processor computing P sets of bite feature descriptors of the bite model, and, when a predetermined quantity of bite feature descriptors in a set of bite feature descriptors of the P sets of bite feature descriptors exceeds a second threshold, the processor registering an upper arch model and a lower arch model to the bite model to generate a full mouth model.
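A control-flow sketch of this thresholded pipeline in Python, with NumPy stand-ins for capture, reconstruction, matching, and descriptor computation; the threshold values and all helper bodies are placeholders, and the final arch registration (e.g., a rigid ICP alignment) is only noted in a comment.

```python
import numpy as np

FIRST_THRESHOLD = 5000   # minimum number of bite-model data points (assumed)
SECOND_THRESHOLD = 50    # minimum descriptors in one descriptor set (assumed)

def capture_images(m: int) -> list:
    """Stand-in for the image capturing device (M buccal-bite images)."""
    return [np.random.rand(480, 640) for _ in range(m)]

def image_to_point_cloud(image: np.ndarray) -> np.ndarray:
    """Stand-in reconstruction: one buccal bite point cloud per image."""
    return np.random.rand(2000, 3)

def match_point_clouds(clouds: list) -> np.ndarray:
    """Stand-in matching/fusion of the M point clouds into a bite model."""
    return np.vstack(clouds)

def compute_feature_descriptors(model: np.ndarray, p: int) -> list:
    """Stand-in for the P sets of bite feature descriptors."""
    return list(np.array_split(model, p))

def reconstruct_full_mouth(m: int = 8, p: int = 4):
    clouds = [image_to_point_cloud(img) for img in capture_images(m)]
    bite_model = match_point_clouds(clouds)
    if len(bite_model) <= FIRST_THRESHOLD:
        return None  # keep scanning: not enough bite-model data yet
    descriptors = compute_feature_descriptors(bite_model, p)
    if all(len(d) <= SECOND_THRESHOLD for d in descriptors):
        return None  # no descriptor set is rich enough for registration
    # The upper and lower arch models would be registered to the bite model
    # here (e.g., rigid ICP) to produce the full mouth model.
    return bite_model

print("full mouth model built:", reconstruct_full_mouth() is not None)
```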
System and method for an augmented reality goal assistant
A method for an augmented reality goal assistant is described. The method includes detecting an object associated with a behavioral goal of a user. The method also includes altering an appearance of the object based on the behavioral goal of the user. The method further includes displaying the altered appearance of the detected object on an augmented reality headset, such that the appearance of the detected object is modified based on the behavioral goal of the user.
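A minimal sketch of the detect-then-alter step, assuming upstream object detection has already produced a label and bounding box; the goal-to-alteration mapping is purely illustrative and not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectedObject:
    label: str
    bbox: tuple  # (x, y, w, h) in display coordinates

# Illustrative mapping: behavioral goal -> appearance alteration per object label.
GOAL_RULES = {
    "quit_smoking": {"cigarette_pack": "desaturate_and_dim"},
    "eat_healthy": {"candy_bar": "desaturate_and_dim", "apple": "highlight"},
}

def alter_appearance(obj: DetectedObject, goal: str) -> Optional[str]:
    """Return the alteration the headset should apply, if any."""
    return GOAL_RULES.get(goal, {}).get(obj.label)

obj = DetectedObject(label="candy_bar", bbox=(120, 80, 60, 40))
effect = alter_appearance(obj, goal="eat_healthy")
if effect:
    # The headset compositor would render this effect over obj.bbox.
    print(f"Apply '{effect}' over {obj.bbox}")
```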
Adaptive model updates for dynamic and static scenes
In one embodiment, a computing system may update a first 3D model of a region of an environment based on comparisons between the first 3D model and first depth measurements of the region generated during a first time period. The computing system may determine that the region is static by comparing the first 3D model to second depth measurements of the region generated during a second time period. The computing system may, in response to determining that the region is static, detect whether the region changed after the second time period based on comparisons between a second 3D model of the region and third depth measurements of the region generated after the second time period, the second 3D model having a lower resolution than the first 3D model. The computing system may, in response to detecting a change in the region, update the first 3D model of the region.
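A sketch of the static/dynamic switch in Python: a high-resolution depth model is compared against new measurements, and once the region is deemed static, a cheaper low-resolution model is used for change detection. The tolerance value and downsampling factor are assumptions for illustration.

```python
import numpy as np

STATIC_TOLERANCE = 0.01  # max mean depth error (meters) to call a region static

def mean_depth_error(model_depth: np.ndarray, measured: np.ndarray) -> float:
    return float(np.abs(model_depth - measured).mean())

def downsample(depth: np.ndarray, factor: int = 4) -> np.ndarray:
    """Low-resolution proxy model for cheap change detection."""
    return depth[::factor, ::factor]

# First period: the high-resolution model, refined from depth measurements.
model_hi = np.random.rand(64, 64)
# Second period: new measurements of the same region (nearly unchanged here).
second = model_hi + np.random.normal(0, 0.001, model_hi.shape)

is_static = mean_depth_error(model_hi, second) < STATIC_TOLERANCE
if is_static:
    model_lo = downsample(model_hi)
    # Later periods: compare measurements against the cheap low-res model only.
    third = np.random.rand(64, 64)
    if mean_depth_error(model_lo, downsample(third)) >= STATIC_TOLERANCE:
        model_hi = third  # change detected: update the high-resolution model
print("region static:", is_static)
```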
3D user interface depth forgiveness
A head-worn device system includes one or more cameras, one or more display devices, and one or more processors. The system also includes a memory storing instructions that, when executed by the one or more processors, configure the system to generate a virtual object, generate a virtual object collider for the virtual object, determine a conic collider for the virtual object, provide the virtual object to a user, detect a landmark on the user's hand in the real world, generate a landmark collider for the landmark, and determine a selection of the virtual object by the user based on detecting a collision of the landmark collider with both the conic collider and the virtual object collider.
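A geometric sketch of the dual-collider test: a sphere collider on the virtual object plus a cone from the viewer through the object, so a fingertip that is angularly on target still selects the object even when its depth is slightly off. For simplicity the hand landmark is treated as a point rather than given its own collider, and all dimensions are illustrative.

```python
import numpy as np

def in_sphere(point, center, radius):
    """Collision test against the virtual object's (sphere) collider."""
    return np.linalg.norm(point - center) <= radius

def in_cone(point, apex, axis, half_angle_deg):
    """Collision test against the conic collider: the cone runs from the
    viewer through the object, so angular alignment matters while depth
    along the axis is 'forgiven'."""
    v = point - apex
    dist = np.linalg.norm(v)
    if dist == 0.0:
        return True
    cos_angle = np.dot(v / dist, axis / np.linalg.norm(axis))
    return cos_angle >= np.cos(np.radians(half_angle_deg))

# A fingertip landmark slightly in front of the object but angularly on target.
eye = np.zeros(3)
obj_center = np.array([0.0, 0.0, 1.0])
fingertip = np.array([0.01, 0.0, 0.8])

selected = (in_cone(fingertip, eye, obj_center - eye, half_angle_deg=5.0)
            and in_sphere(fingertip, obj_center, radius=0.25))
print("object selected:", selected)
```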
Augmented reality object manipulation
Among other things, embodiments of the present disclosure improve the functionality of computer imaging software and systems by facilitating the manipulation of virtual content displayed in conjunction with images of real-world objects and environments. Embodiments of the present disclosure allow different virtual objects to be moved onto different physical surfaces, as well as manipulated in other ways.
Insertion of annotation labels in a CAD drawing
The current invention concerns a computer-implemented method, a computer system, and a computer program product for annotation positioning in a CAD drawing. A CAD drawing comprising N anchor points is obtained, with N≥2. A candidate set comprising multiple candidate points is obtained. From the candidate set, N placement points are selected. Each anchor point is associated with a placement point based on combinatorial optimization of an objective function dependent on distances, each distance being between an anchor point and a placement point. In the CAD drawing, for each anchor point, a leader line between the anchor point and the associated placement point and an annotation label at or near the associated placement point are inserted.
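The anchor-to-placement association described here is an instance of the linear assignment problem, so a sketch can lean on SciPy's linear_sum_assignment. Minimizing total leader-line (Euclidean) length is an assumed objective; the patent's objective function may weigh distances differently.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# N anchor points in the drawing and a larger candidate set of placement points.
anchors = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 3.0]])
candidates = np.array([[1.0, 0.5], [5.0, 1.0], [2.0, 4.0], [0.0, 2.0], [6.0, 3.0]])

# Objective: total leader-line length, i.e., summed anchor-to-candidate distances.
cost = np.linalg.norm(anchors[:, None, :] - candidates[None, :, :], axis=2)

# Combinatorial optimization: each anchor gets a distinct placement point,
# selecting N of the candidates in the process.
rows, cols = linear_sum_assignment(cost)
for a, c in zip(rows, cols):
    print(f"anchor {anchors[a]} -> label at {candidates[c]}, "
          f"leader length {cost[a, c]:.2f}")
```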