G06T2219/012

Patient-specific instrumentation for implant revision surgery

A system for creating at least one model of a bone and implanted implant comprises a processing unit; and a non-transitory computer-readable memory communicatively coupled to the processing unit and comprising computer-readable program instructions executable by the processing unit for: obtaining at least one image of at least part of a bone and of an implanted implant on the bone, the at least one image being patient specific, obtaining a virtual model of the implanted implant using an identity of the implanted implant, overlaying the virtual model of the implanted implant on the at least one image to determine a relative orientation of the implanted implant relative to the bone in the at least one image, and generating and outputting a current bone and implant model using the at least one image, the virtual model of the implanted implant and the overlaying.
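The claimed pipeline (patient image in, implant identity in, combined bone-and-implant model out) can be sketched as follows. This is a minimal illustration, not the patent's method: the catalog contents, the axis-based orientation estimate, and all function names are assumptions.

```python
import math

# Hypothetical implant catalog keyed by implant identity; the entry
# and its fields are illustrative, not from the patent.
IMPLANT_MODELS = {"hip-stem-42": {"length_mm": 130.0, "axis": (0.0, 1.0)}}

def relative_orientation(bone_axis, implant_axis):
    """Signed angle (degrees) of the implant axis relative to the bone
    axis, both given as 2D direction vectors extracted from the image."""
    angle = math.degrees(math.atan2(implant_axis[1], implant_axis[0])
                         - math.atan2(bone_axis[1], bone_axis[0]))
    return (angle + 180.0) % 360.0 - 180.0  # normalize to [-180, 180)

def build_bone_implant_model(image_axes, implant_id):
    """Combine the patient-specific image with the catalog model of the
    identified implant into a current bone-and-implant record."""
    virtual_model = IMPLANT_MODELS[implant_id]
    theta = relative_orientation(image_axes["bone"], image_axes["implant"])
    return {"implant": implant_id, "orientation_deg": theta,
            "virtual_model": virtual_model}
```

In practice the overlay step would be a full 2D/3D registration; the angle between two extracted axes stands in for that here.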

Systems and methods for controlling virtual scene perspective via physical touch input

Systems, methods, and non-transitory computer readable media for controlling perspective in an extended reality environment are disclosed. In one embodiment, a non-transitory computer readable medium contains instructions to cause a processor to perform the steps of: outputting for presentation via a wearable extended reality appliance (WER-appliance), first display signals reflective of a first perspective of a scene; receiving first input signals caused by a first multi-finger interaction with a touch sensor; in response, outputting for presentation via the WER-appliance second display signals to modify the first perspective of the scene, causing a second perspective of the scene to be presented via the WER-appliance; receiving second input signals caused by a second multi-finger interaction with the touch sensor; and in response, outputting for presentation via the WER-appliance third display signals to modify the second perspective of the scene, causing a third perspective of the scene to be presented via the WER-appliance.
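The chain of multi-finger interactions driving successive perspectives could be modeled as a small gesture table. The gestures, finger counts, and adjustment amounts below are assumptions chosen for illustration:

```python
# Hypothetical mapping from (gesture, finger count) to a perspective
# change; the entries are illustrative, not from the patent.
GESTURES = {
    ("pinch", 2): ("zoom", 0.8),    # two-finger pinch zooms out
    ("spread", 2): ("zoom", 1.25),  # two-finger spread zooms in
    ("drag", 3): ("orbit", 15.0),   # three-finger drag orbits the camera
}

def apply_gesture(perspective, gesture, fingers):
    """Return the next scene perspective after a multi-finger
    interaction, leaving the previous perspective unmodified."""
    action, amount = GESTURES[(gesture, fingers)]
    nxt = dict(perspective)
    if action == "zoom":
        nxt["distance"] *= amount
    elif action == "orbit":
        nxt["azimuth_deg"] = (nxt["azimuth_deg"] + amount) % 360.0
    return nxt
```

Applying two gestures in sequence mirrors the first-to-second-to-third perspective progression in the claim.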

Integration of a two-dimensional input device into a three-dimensional computing environment

A workstation enables operation of a 2D input device with a 3D interface. A cursor position engine determines the 3D position of a cursor controlled by the 2D input device as the cursor moves within a 3D scene displayed on a 3D display. The cursor position engine determines the 3D position of the cursor for a current frame of the 3D scene based on a current user viewpoint, a current mouse movement, a CD gain value, a Voronoi diagram, and an interpolation algorithm, such as the Laplacian algorithm. A CD gain engine computes CD gain optimized for the 2D input device operating with the 3D interface. The CD gain engine determines the CD gain based on specifications for the 2D input device and the 3D display. The techniques performed by the cursor position engine and the techniques performed by the CD gain engine can be performed separately or in conjunction.
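One plausible reading of deriving CD gain from device and display specifications, following the usual control-display gain definition (cursor travel per unit of physical device travel); the `pointer_speed` parameter and the function itself are assumptions, not the patent's optimization:

```python
def cd_gain(mouse_counts_per_inch, display_pixels_per_inch,
            pointer_speed=1.0):
    """Control-display gain: inches of cursor travel on the display per
    inch of physical mouse travel. `pointer_speed` stands in for an
    OS-level pixels-per-count scaling factor (an assumed parameter)."""
    cursor_pixels_per_inch_of_mouse = mouse_counts_per_inch * pointer_speed
    return cursor_pixels_per_inch_of_mouse / display_pixels_per_inch
```

For example, an 800 CPI mouse on a 100 PPI display at unit pointer speed yields a dimensionless gain of 8: each inch of mouse travel moves the cursor eight inches.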

Generating two-dimensional views with gridline information

An example computing system is configured to extract gridline information from a two-dimensional drawing file and determine, for the gridline information, first coordinate information that is based on a first datum. The computing system converts the first coordinate information into second coordinate information that is based on a second datum, where the second coordinate information is used by a three-dimensional drawing file. The computing system is also configured to receive a request to generate a two-dimensional view of the three-dimensional drawing file, where the two-dimensional view includes an intersection of two meshes within the three-dimensional drawing file. The computing system generates the two-dimensional view of the three-dimensional drawing file and adds, to the generated two-dimensional view, (i) at least one gridline corresponding to the gridline information and (ii) dimensioning information involving the at least one gridline and at least one of the two meshes.
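The datum-to-datum conversion step could be as simple as an offset-and-scale re-mapping of gridline positions. A minimal sketch, assuming one gridline axis and a linear relation between the two datums (the class and function names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Gridline:
    label: str
    position: float  # along one axis, in the 2D drawing's datum

def convert_gridlines(gridlines, datum_offset, scale=1.0):
    """Re-express gridline positions, given in coordinates based on the
    2D drawing's datum, in the datum used by the 3D drawing file."""
    return [Gridline(g.label, (g.position - datum_offset) * scale)
            for g in gridlines]
```

The converted gridlines can then be drawn into any generated two-dimensional view alongside the dimensioning information.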

METHOD FOR CONTROLLING DIMENSIONAL TOLERANCES, SURFACE QUALITY, AND PRINT TIME IN 3D-PRINTED PARTS
20230004143 · 2023-01-05

A method for generating print images for additive manufacturing includes: accessing a part model; accessing a set of dimensional tolerances for the part model; and segmenting the part model into a set of model layers. The method also includes, for each model layer: detecting an edge in the model layer; assigning a dimensional tolerance to the edge; defining an outer exposure shell inset from the edge by an erosion distance inversely proportional to a width of the dimensional tolerance; defining an inner exposure shell inset from the outer exposure shell and scheduled for exposure separately from the outer exposure shell; defining an outer exposure energy proportional to the width of the dimensional tolerance and assigned to the outer exposure shell; and defining an inner exposure energy greater than the outer exposure energy and assigned to the inner exposure shell.
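The per-edge exposure relations stated in the claim (erosion distance inversely proportional to tolerance width, outer energy proportional to it, inner energy strictly greater) can be sketched directly; the proportionality constants are illustrative assumptions, not values from the patent:

```python
def exposure_plan(tolerance_width_mm, k_erode=0.01, k_energy=50.0,
                  inner_boost=1.3):
    """Per-edge exposure parameters: a tighter tolerance (smaller width)
    yields a larger erosion inset and a lower outer exposure energy,
    limiting over-cure bleed past the nominal edge. All constants here
    are assumed, for illustration only."""
    erosion_mm = k_erode / tolerance_width_mm     # inversely proportional
    outer_energy = k_energy * tolerance_width_mm  # proportional
    inner_energy = inner_boost * outer_energy     # strictly greater
    return {"erosion_mm": erosion_mm, "outer_energy": outer_energy,
            "inner_energy": inner_energy}
```

Note the trade-off the claim encodes: loose-tolerance edges get shallow insets and higher outer energy (faster, rougher), tight-tolerance edges get deep insets and gentler outer exposure.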

GENERATION OF DIGITAL 3D MODELS OF BODY SURFACES WITH AUTOMATIC FEATURE IDENTIFICATION
20230005229 · 2023-01-05

A computer system obtains at least one 3D scan of a body surface; automatically identifies, based on the at least one 3D scan, one or more features (e.g., nose, lips, eyes, eyebrows, cheekbones, or specific portions thereof, or other features) of the body surface; and generates a digital 3D model of the body surface. The digital 3D model includes the identified features of the body surface. In an embodiment, the step of generating the digital 3D model is based on the at least one 3D scan and the identified features of the body surface. In an embodiment, the digital 3D model comprises a 3D mesh file. The digital 3D model can be used in various ways. For example, output of a manufacturing process (e.g., a 3D printed item, a cosmetics product, a personal care product) can be based on the digital 3D model.
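The generation step, combining the scan geometry with the automatically identified features, can be sketched as below; the detector output format (feature name to vertex index) and the function name are assumptions:

```python
def generate_model(scan_vertices, detected_features):
    """Build a digital 3D model that carries both the scanned mesh and
    the automatically identified surface features. `detected_features`
    maps a feature name (e.g. "nose_tip") to a vertex index produced by
    an assumed upstream feature detector."""
    return {
        "mesh": {"vertices": list(scan_vertices)},
        "features": {name: scan_vertices[i]
                     for name, i in detected_features.items()},
    }
```

A downstream manufacturing step could then read feature positions directly from the model rather than re-detecting them.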

METHOD TO FACILITATE MASS CONVERSION OF 2D DRAWINGS TO 3D MODELS
20220366662 · 2022-11-17

An internet or cloud-based system, method, or platform (“platform”) used to facilitate the conversion of electronic two-dimensional drawings to three-dimensional models. A group of people (“crowd”) that has been found qualified to make such conversions is selected for the conversion. The two-dimensional drawings are transmitted to the crowd for conversion to three-dimensional models. In some embodiments, multiple instances of the same two-dimensional drawings (or image data) are sent to multiple, independent crowd members so that multiple versions of the same three-dimensional model can be created. Once the models are complete and returned, they are compared to each other on multiple features or characteristics. If two or more three-dimensional models are found to match within the prescribed tolerances, they are determined to be an accurate representation of the product or device shown in the two-dimensional drawings. In some embodiments, the two-dimensional drawings can be divided into subparts and submitted to different crowd members for conversion.
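The acceptance rule (two or more independently produced models matching within prescribed tolerances) can be sketched as a pairwise comparison; the per-feature numeric representation and function names are illustrative assumptions:

```python
def models_match(model_a, model_b, tolerances):
    """True when every compared feature or characteristic of two
    independently created 3D models agrees within its tolerance.
    Models are dicts of feature name -> measured value (assumed)."""
    return all(abs(model_a[f] - model_b[f]) <= tol
               for f, tol in tolerances.items())

def conversion_accepted(models, tolerances):
    """A conversion is accepted when at least two crowd-produced models
    match each other within the prescribed tolerances."""
    return any(models_match(a, b, tolerances)
               for i, a in enumerate(models)
               for b in models[i + 1:])
```

The independence of the crowd members is what makes agreement within tolerance evidence of accuracy, since errors by different members are unlikely to coincide.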

Dynamic dimensioning indicators

An example computing system is configured to (i) generate a cross-sectional view of a three-dimensional drawing file; (ii) receive a first user input indicating a selection of a first mesh, wherein the selection comprises a selection point that establishes a first end point; (iii) generate a first representation indicating an alignment of the first end point with at least one corresponding geometric feature of the first mesh and a second representation indicating a set of one or more directions; (iv) receive a second user input indicating a given direction; (v) based on receiving the second user input, generate a dynamic representation of the dimensioning information along the given direction; (vi) receive a third user input indicating that the second user input is complete; and (vii) based on receiving the third user input, add the dimensioning information to the cross-sectional view between the first end point and the second end point.
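The dynamic measurement in step (v), a dimension between two end points taken along the user-chosen direction, is a projection; the function below is an assumed illustration of that geometry, not the patent's implementation:

```python
def dimension_along(p1, p2, direction):
    """Length between two 2D end points measured along the user-chosen
    dimensioning direction (a vector, not necessarily unit length):
    the absolute projection of p2 - p1 onto the direction."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    ux, uy = direction
    norm = (ux * ux + uy * uy) ** 0.5
    return abs(dx * ux + dy * uy) / norm
```

Recomputing this value as the pointer moves gives the "dynamic" indicator; the final value is what gets added to the cross-sectional view.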

Systems and methods for controlling cursor behavior

Systems, methods, and non-transitory computer readable media containing instructions for causing at least one processor to perform operations to enable cursor control in an extended reality space are provided. In one implementation, the processor is configured to perform operations comprising receiving from an image sensor first image data reflecting a first region of focus of a user of a wearable extended reality appliance; causing a first presentation of a virtual cursor in the first region of focus; receiving from the image sensor second image data reflecting a second region of focus of the user outside an initial field of view in the extended reality space; receiving input data indicative of a desire of the user to interact with the virtual cursor; and causing a second presentation of the virtual cursor in the second region of focus in response to the input data.
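The relocation decision, moving the cursor only when the new region of focus falls outside the initial field of view and the user signals intent to interact, can be sketched as follows; the bounding-box field-of-view representation and the function name are assumptions:

```python
def update_cursor(cursor, initial_fov, focus_center, wants_interaction):
    """Re-present the virtual cursor at the user's new region of focus
    when that region lies outside the initial field of view (given as
    an (xmin, ymin, xmax, ymax) box) and the input data signals intent
    to interact; otherwise the cursor stays where it is."""
    xmin, ymin, xmax, ymax = initial_fov
    fx, fy = focus_center
    outside = not (xmin <= fx <= xmax and ymin <= fy <= ymax)
    return focus_center if (outside and wants_interaction) else cursor
```

Gating on the intent signal keeps the cursor from chasing every glance, which is the behavior the claim's input-data condition describes.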

BROWSER OPTIMIZED INTERACTIVE ELECTRONIC MODEL BASED DETERMINATION OF ATTRIBUTES OF A STRUCTURE

An interactive 3D electronic representation of a physical scene is executed in a browser. The browser has a limited computing capability compared to a native application or hardware usable by the computer. The interactive 3D representation is configured to minimize overall computing resources and processing time. Attributes of data items corresponding to surfaces and/or contents in the physical scene are extracted from the interactive 3D representation. Interactive verification of the attributes of a subset of the data items is performed in the browser by: flattening a selected view of a ceiling, floor, or wall into two dimensions; receiving user adjustments (if needed) to the dimensions and/or locations of the selected ceiling, floor, or wall; receiving user indications (if needed) of cut outs in the selected ceiling, floor, or wall; and updating the interactive 3D representation based on adjustments to the dimensions and/or locations, and/or the indications of cut outs.
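One attribute the flattened two-dimensional view makes cheap to verify is net surface area after cut-outs; the rectangular cut-out model and function name below are illustrative assumptions:

```python
def flattened_area(width, height, cutouts):
    """Net area of a flattened (2D) view of a ceiling, floor, or wall
    after removing user-indicated rectangular cut-outs, each given as
    (w, h). Assumes cut-outs lie inside the surface and don't overlap."""
    return width * height - sum(w * h for w, h in cutouts)
```

Working in the flattened 2D view keeps this arithmetic trivial for the browser, consistent with the goal of minimizing computing resources.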