G06T2219/20

OBJECT CREATION USING BODY GESTURES

An intuitive interface may allow users of a computing device (e.g., children, etc.) to create imaginary three-dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of a user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using their body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
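The scan-then-edit loop described above can be sketched as follows. This is a minimal illustration with hypothetical names (the patent does not disclose these functions): an initial shape is derived from scanned gesture points, and recognized gestures are dispatched to edits.

```python
# Hypothetical sketch of the gesture-driven creation loop: scan a body
# gesture into an initial shape, then apply gesture-mapped edits.

def scan_initial_shape(gesture_points):
    """Turn a scanned body gesture (a list of (x, y, z) points) into an
    initial object, here simply the point set centered at its centroid."""
    n = len(gesture_points)
    cx = sum(p[0] for p in gesture_points) / n
    cy = sum(p[1] for p in gesture_points) / n
    cz = sum(p[2] for p in gesture_points) / n
    return [(x - cx, y - cy, z - cz) for x, y, z in gesture_points]

def apply_gesture(shape, gesture):
    """Dispatch a recognized gesture name to an edit on the object."""
    if gesture == "stretch_x":
        return [(2 * x, y, z) for x, y, z in shape]
    if gesture == "flatten_z":
        return [(x, y, 0.0) for x, y, z in shape]
    return shape  # unrecognized gestures leave the object unchanged

shape = scan_initial_shape([(1, 2, 3), (3, 2, 1)])
shape = apply_gesture(shape, "stretch_x")
```

A real system would operate on meshes and recognized skeletal poses rather than raw points, but the control flow (scan, then iterative gesture edits) is the same.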

Orthodontics treatment systems and methods
11696816 · 2023-07-11

Techniques for preparing a model (e.g., a digital model) of a person's teeth are described. The techniques include receiving a model of a maxillary arch and a mandibular arch of a person's teeth, operating an automated mesh cleaning operation to modify the model, segmenting the model to identify individual teeth and gum tissue in the model, identifying and marking features of each tooth of the model, adjusting the individual teeth of the model into a recommended orientation relative to each other, and applying a treatment method. The treatment method may include a selection of a bracket type and an aligner type. The techniques may determine a proposed location of brackets or aligners on the individual teeth based on the applied treatment method and export a digital model of a bracket tray for formation of a bracket tray that can be used to secure the brackets to a person's teeth.
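The abstract enumerates an ordered sequence of processing stages. A minimal sketch of that ordering, with stage names taken from the text but placeholder implementations (the actual algorithms are not disclosed here):

```python
# Illustrative pipeline ordering for the dental-model preparation steps.
# Stage names follow the abstract; the registry functions are stand-ins.

PIPELINE = [
    "receive_arch_model",      # maxillary + mandibular arches
    "clean_mesh",              # automated mesh cleaning
    "segment_teeth_and_gums",  # identify individual teeth and gum tissue
    "mark_tooth_features",     # identify and mark features per tooth
    "adjust_orientation",      # move teeth into recommended orientation
    "apply_treatment_method",  # bracket type + aligner type selection
]

def run_pipeline(model, stages, registry):
    """Thread the model through each named stage in order."""
    for name in stages:
        model = registry[name](model)
    return model

# Toy registry: each stage just records that it ran on the model.
registry = {name: (lambda m, n=name: m + [n]) for name in PIPELINE}
result = run_pipeline([], PIPELINE, registry)
```

The point of the sketch is only the dependency order: segmentation needs a cleaned mesh, feature marking needs segmented teeth, and the treatment method is applied last.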

ALTERING PROPERTIES OF RENDERED OBJECTS VIA CONTROL POINTS

Altering properties of rendered objects and/or mixed reality environments utilizing control points associated with the rendered objects and/or mixed reality environments is described. Techniques described can include detecting a gesture performed by or in association with a control object. Based at least in part on detecting the gesture, techniques described can identify a target control point that is associated with a rendered object and/or a mixed reality environment. As the control object moves within the mixed reality environment, the target control point can track the movement of the control object. Based at least in part on the movement of the control object, a property of the rendered object and/or the mixed reality environment can be altered. A rendering of the rendered object and/or the mixed reality environment can be modified to reflect any alterations to the property.
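The tracking behavior described above can be illustrated with a toy control point. All names here are hypothetical: the point mirrors the control object's displacement, and a scalar property of the rendered object is altered as a function of that movement.

```python
# Minimal sketch: a control point follows a control object (e.g., a hand)
# and alters a property of the rendered object based on the movement.

class ControlPoint:
    def __init__(self, position, scale=1.0):
        self.position = position  # (x, y, z) in the mixed reality scene
        self.scale = scale        # property altered by the gesture

    def track(self, old_pos, new_pos):
        """Follow the control object's movement; alter the property
        proportionally to the vertical displacement."""
        dx, dy, dz = (n - o for n, o in zip(new_pos, old_pos))
        self.position = tuple(p + d
                              for p, d in zip(self.position, (dx, dy, dz)))
        self.scale *= 1.0 + dy  # e.g., dragging upward enlarges the object

cp = ControlPoint((0.0, 0.0, 0.0))
cp.track((1.0, 1.0, 1.0), (1.0, 1.5, 1.0))  # control object moved up 0.5
```

After `track`, a renderer would redraw the object with the new scale, matching the abstract's final step of modifying the rendering to reflect the altered property.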

ESTABLISHING A TOKENIZED LICENSE OF A VIRTUAL ENVIRONMENT LEARNING OBJECT

A method includes a computing device of a computing infrastructure interpreting a request from a user computing device of the computing infrastructure to license a set of learning objects pertaining to a common topic for use by the user computing device, producing licensee information for the set of learning objects. The method further includes identifying a non-fungible token (NFT) associated with the set of learning objects and establishing, with the user computing device, agreed licensing terms utilizing the licensee information and based on available licensing terms of a smart contract for the set of learning objects. The method further includes generating a license smart contract for the set of learning objects to include the licensee information and the agreed licensing terms and causing generation of a license block affiliated with the NFT via a blockchain of the object distributed ledger.
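The flow — agree terms against what the smart contract offers, then append a license block tied to the NFT — can be sketched with a toy ledger. Field names and the hash-chained block format are illustrative assumptions, not the patent's data model.

```python
# Hypothetical sketch: agree licensing terms, then append a license block
# affiliated with the NFT to a simplified hash-chained ledger.

import hashlib
import json

def agree_terms(available_terms, requested):
    """Keep only the requested terms the smart contract actually offers."""
    return {k: v for k, v in available_terms.items() if k in requested}

def add_license_block(ledger, nft_id, licensee, terms):
    """Append a block containing the licensee info and agreed terms,
    chained to the previous block by its hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = {"nft": nft_id, "licensee": licensee,
               "terms": terms, "prev": prev_hash}
    block_hash = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    ledger.append({**payload, "hash": block_hash})
    return ledger

ledger = add_license_block(
    [], "nft-123", "user-device-7",
    agree_terms({"duration": "1y", "seats": 5}, ["duration"]))
```

A real object distributed ledger would involve consensus and on-chain smart-contract execution; the sketch only shows how the licensee information and agreed terms end up in a block affiliated with the NFT.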

VIDEO STREAM AUGMENTING

Augmenting a video stream of an environment is provided, the environment containing a private entity to be augmented. Video of the environment is processed in accordance with an entity recognition process to identify the presence of at least part of an entity in the environment. It is determined whether the identified entity is to be augmented based on information relating to the identified entity and the private entity. Based on determining that the identified entity is to be augmented, the video stream is modified to replace at least a portion of the identified entity with a graphical element adapted to obscure the portion of the identified entity in the video stream. By modifying the video stream to obscure an entity, private or personal information in the environment may be prevented from being displayed to a viewer of the video stream.
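The modification step — replacing a portion of the identified entity with an obscuring graphical element — reduces, per frame, to overwriting a region of the image. A toy sketch using a nested list as a stand-in for a frame (names illustrative):

```python
# Toy sketch of the frame-modification step: once an entity is identified
# and judged private, its bounding box is overwritten with an obscuring
# element (here a solid fill).

def obscure_region(frame, box, fill=0):
    """Replace pixels inside box=(x0, y0, x1, y1) with a fill value."""
    x0, y0, x1, y1 = box
    for y in range(y0, y1):
        for x in range(x0, x1):
            frame[y][x] = fill
    return frame

frame = [[255] * 4 for _ in range(4)]          # 4x4 all-white frame
obscure_region(frame, (1, 1, 3, 3))            # obscure the 2x2 center
```

A production system would blur or inpaint rather than flat-fill, and would run the entity recognizer on every frame to keep the box tracking the entity, but the per-frame edit has this shape.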

Previewing changes on a geometric design
09741157 · 2017-08-22

Described is a method for visually presenting, or previewing, changes to 3-dimensional geometry. In Onshape, a user may apply a sequence of configurable geometric operations in order to design a 3-dimensional model. When a user edits a specific operation, the method provides a way for the user to see the effects the changes will have on the model. The method provides high-fidelity visualizations of the user's design as it would appear before the operation is applied, after the operation is applied, and with the operation's effects in conjunction with the effects of all operations in the sequence. The method also provides an interface for transitioning between these visualized states, allowing the user to effectively and efficiently understand the effect of the changes.
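The three visualized states map naturally onto replaying an operation sequence up to, through, and past the edited operation. A minimal sketch with operations modeled as functions on a value (the real system operates on parametric geometry; these names are illustrative):

```python
# Illustrative sketch of the three preview states: the model before the
# edited operation, after it, and with the full remaining sequence applied.

def preview_states(base, operations, edit_index):
    """Return (before, after, final) for the operation at edit_index."""
    before = base
    for op in operations[:edit_index]:
        before = op(before)            # replay everything upstream
    after = operations[edit_index](before)  # apply the edited operation
    final = after
    for op in operations[edit_index + 1:]:
        final = op(final)              # replay everything downstream
    return before, after, final

ops = [lambda v: v + 1, lambda v: v * 10, lambda v: v - 3]
states = preview_states(0, ops, 1)     # editing the middle operation
```

Recomputing all three states on each edit is what lets the interface transition between them without committing the change.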

METHOD AND SYSTEM FOR BRACES REMOVAL FROM DENTITION MESH
20220233275 · 2022-07-28

A method for generating a digital model of dentition, executed at least in part by a computer, acquires a 3-D digital mesh that is representative of the dentition along a dental arch, including braces, teeth, and gingival tissue. The method modifies the 3-D digital mesh to generate a digital mesh dentition model by processing the digital mesh and automatically detecting one or more initial bracket positions from the acquired mesh, processing the initial bracket positions to identify bracket areas for braces that lie against tooth surfaces, identifying one or more brace wires extending between brackets, removing one or more brackets and one or more wires from the dentition model, and forming a reconstructed tooth surface within the digital mesh dentition model where the one or more brackets have been removed. The modified 3-D digital mesh dentition model is displayed, stored, or transmitted over a network to another computer.
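The remove-then-reconstruct step can be illustrated on labeled vertices. This is a deliberately crude stand-in (labels, and the centroid-based hole fill, are assumptions for illustration; the patent's surface reconstruction is far more involved):

```python
# Simplified sketch of the mesh-editing steps: drop bracket and wire
# geometry, then patch each removed spot with an interpolated tooth point.

def remove_brackets(vertices):
    """vertices: list of (label, (x, y, z)) pairs. Remove bracket/wire
    vertices and fill each hole with the centroid of surviving tooth
    vertices (a toy reconstruction)."""
    kept = [(lbl, p) for lbl, p in vertices if lbl not in ("bracket", "wire")]
    tooth = [p for lbl, p in kept if lbl == "tooth"]
    n = len(tooth)
    centroid = tuple(sum(p[i] for p in tooth) / n for i in range(3))
    removed = len(vertices) - len(kept)
    # one reconstructed vertex per removed vertex, on the tooth surface
    kept += [("reconstructed", centroid)] * removed
    return kept

verts = [("tooth", (0, 0, 0)), ("tooth", (2, 0, 0)), ("bracket", (1, 1, 0))]
result = remove_brackets(verts)
```

In the actual method the bracket areas are detected automatically from the mesh and the reconstructed surface must blend smoothly with the surrounding enamel; the sketch only captures the remove-and-fill structure.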

JOINT RETRIEVAL AND MESH DEFORMATION

Embodiments provide systems, methods, and computer storage media for generating a 3D model from a target 2D image or 3D point cloud (e.g., generated by a 3D scan). Given a particular target, a retrieval network retrieves or identifies a source model from a database, and a deformation network deforms the source model to fit the target. In some cases, joint learning is employed to enable the retrieval and deformation networks to jointly learn a deformation-aware retrieval embedding space and an individualized deformation space for each source model. In some cases, the retrieval network retrieves based on distance in the deformation-aware retrieval embedding space, enabling the retrieval module to retrieve a source model that best fits to the target after deformation. In some cases, a deformation is decomposed into a plurality of per-part deformations, and/or the retrieval embedding space is used to select training data.
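The retrieve-then-deform structure can be sketched without the learned networks: retrieval is nearest-neighbor search in an embedding space, and deformation fits the retrieved model to the target. The embeddings and the uniform-scaling "deformation" below are toy stand-ins, not the paper's learned spaces.

```python
# Sketch of deformation-aware retrieval: pick the source model whose
# embedding is nearest the target's, then deform it toward the target.

def retrieve(target_emb, sources):
    """sources: {name: embedding}. Return the name with the smallest
    squared Euclidean distance to the target embedding."""
    def dist2(e):
        return sum((a - b) ** 2 for a, b in zip(e, target_emb))
    return min(sources, key=lambda name: dist2(sources[name]))

def deform(source_points, target_scale):
    """Toy per-model deformation: uniform scaling toward the target."""
    return [tuple(c * target_scale for c in p) for p in source_points]

sources = {"chair": (0.0, 1.0), "table": (1.0, 0.0)}
best = retrieve((0.9, 0.1), sources)
fitted = deform([(1, 1, 1)], 2.0)
```

The key idea the abstract describes is that the embedding space is trained so this nearest-neighbor step favors sources that fit well *after* deformation, not sources that merely look similar before it.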

ORTHODONTICS TREATMENT SYSTEMS AND METHODS
20210401546 · 2021-12-30

Techniques for preparing a model (e.g., a digital model) of a person's teeth are described. The techniques include receiving a model of a maxillary arch and a mandibular arch of a person's teeth, operating an automated mesh cleaning operation to modify the model, segmenting the model to identify individual teeth and gum tissue in the model, identifying and marking features of each tooth of the model, adjusting the individual teeth of the model into a recommended orientation relative to each other, and applying a treatment method. The treatment method may include a selection of a bracket type and an aligner type. The techniques may determine a proposed location of brackets or aligners on the individual teeth based on the applied treatment method and export a digital model of a bracket tray for formation of a bracket tray that can be used to secure the brackets to a person's teeth.

Modifying three-dimensional representations using digital brush tools
11315315 · 2022-04-26

Systems, methods, and non-transitory computer-readable media are disclosed for modifying voxel-based 3D representations using 3D digital brush tools and/or resolution filters. For example, the disclosed systems can utilize 3D digital brush tools (e.g., a digital blur brush tool, a digital smudge brush tool, and/or a digital melt brush tool) to identify and modify one or more voxels within a 3D representation using multiple buffers of visual properties. Additionally, the disclosed systems can modify one or more voxels within a 3D representation by rendering the one or more voxels at varying levels of detail using an octree (e.g., a mosaic filter tool). In particular, the disclosed systems can identify one or more voxels within an octree that are smaller than a target voxel size. Moreover, the disclosed systems can combine the identified one or more voxels within the octree to render the 3D representation at varying levels of detail.
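The octree-based level-of-detail step — find voxels smaller than a target size and combine them — can be sketched with a minimal octree. The node layout (a leaf color value, or a list of eight children) and the color-averaging merge are illustrative assumptions:

```python
# Illustrative sketch of the mosaic/LOD step: walk an octree and merge
# any subdivision whose child voxels fall below the target voxel size
# into a single voxel averaging its children's colors.

def simplify(node, size, target_size):
    """node: a color value (leaf) or a list of 8 children.
    Returns (simplified_node, voxel_count)."""
    if not isinstance(node, list):
        return node, 1                      # leaf voxel, rendered as-is
    child_size = size / 2
    if child_size < target_size:
        # children are too small to render individually: combine them
        colors = [simplify(c, child_size, target_size)[0] for c in node]
        return sum(colors) / len(colors), 1
    results = [simplify(c, child_size, target_size) for c in node]
    return [r[0] for r in results], sum(r[1] for r in results)

tree = [[1] * 8] + [0] * 7     # one subdivided child, seven leaf voxels
color, count = simplify(tree, size=2.0, target_size=1.0)
```

Raising `target_size` collapses more of the tree into fewer, larger voxels, which is how a single octree supports rendering the same 3D representation at varying levels of detail.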