Patent classifications
G06T2219/012
ESTIMATING DIMENSIONS OF GEO-REFERENCED GROUND-LEVEL IMAGERY USING ORTHOGONAL IMAGERY
A system and method are provided for measuring building facade elements by combining ground-level and orthogonal imagery. Dimensions of the facade elements are derived from ground-level imagery that is scaled and geo-referenced using orthogonal imagery. The method continues by creating a tabular dataset of measurements for one or more architectural elements such as siding (e.g., aluminum, vinyl, wood, brick and/or paint), windows, or doors. The tabular dataset can be part of an estimate report.
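The scaling step described above can be sketched minimally: a facade width taken from geo-referenced orthogonal (overhead) imagery calibrates the pixel scale of a ground-level photo, so facade elements measured in pixels can be reported in real units. All function names and numbers below are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch: calibrate a ground-level image with one known
# dimension from orthogonal imagery, then tabulate element dimensions.

def scale_factor(facade_width_m: float, facade_width_px: float) -> float:
    """Metres per pixel in the ground-level image."""
    return facade_width_m / facade_width_px

def measure_elements(elements_px: dict, m_per_px: float) -> list:
    """Produce a tabular dataset of element dimensions in metres."""
    rows = []
    for name, (w_px, h_px) in elements_px.items():
        rows.append({"element": name,
                     "width_m": round(w_px * m_per_px, 2),
                     "height_m": round(h_px * m_per_px, 2)})
    return rows

# Facade spans 9.6 m in orthogonal imagery and 1200 px at ground level.
s = scale_factor(9.6, 1200.0)
table = measure_elements({"window_1": (150, 200),
                          "door": (110, 260)}, s)
```

Each row of `table` is one line of the tabular dataset the abstract mentions, ready to feed into an estimate report.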
METHOD FOR MOVING AND ALIGNING 3D OBJECTS IN A PLANE WITHIN THE 2D ENVIRONMENT
Example systems and methods are provided for virtual visualization of a three-dimensional model of an object in a two-dimensional environment. The method may include moving and aligning the three-dimensional model of the object along a plane in the two-dimensional environment.
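Constraining movement to a plane, as described above, is commonly done by projecting the proposed new position onto the plane so the object slides along it rather than leaving it. The following is an assumed geometric sketch, not the patent's method:

```python
# Project a dragged 3D position onto a plane (point + unit normal),
# so the object stays aligned with that plane while it moves.

def project_onto_plane(point, plane_point, plane_normal):
    """Project `point` onto the plane defined by a point and unit normal."""
    # Signed distance from the plane, measured along its normal.
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))
    return tuple(p - d * n for p, n in zip(point, plane_normal))

# Ground plane y = 0: a drag toward (2.0, 1.5, -3.0) lands on the plane.
pos = project_onto_plane((2.0, 1.5, -3.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```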
Dynamic adjustment of cross-sectional views
An example computing system is configured to (i) receive a request to generate a cross-sectional view of a three-dimensional drawing file, where the cross-sectional view is based on a location of a cross-section line within the three-dimensional drawing file and includes an intersection of two meshes within the three-dimensional drawing file; (ii) generate the cross-sectional view of the three-dimensional drawing file; (iii) add, to the generated cross-sectional view, dimensioning information involving at least one of the two meshes; (iv) generate one or more controls for adjusting a location of the cross-section line within the three-dimensional drawing file; and (v) based on an input indicating a selection of the one or more controls, adjust the location of the cross-section line within the three-dimensional drawing file, update the cross-sectional view based on the adjusted location of the cross-section line, and update the dimensioning information to correspond to the updated cross-sectional view.
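The adjust-and-regenerate loop in steps (ii)–(v) can be illustrated with a toy model in which axis-aligned boxes stand in for the two meshes and the cross-section line is a cutting plane x = x_cut; moving the line regenerates the section and its dimensioning information. Names and geometry are assumptions for illustration only:

```python
# Illustrative sketch: slice boxes (stand-ins for meshes) with the
# plane x = x_cut; each returned rectangle carries dimensioning info.

def cross_section(boxes, x_cut):
    """Return (y, z) rectangles where the plane x = x_cut cuts the boxes.

    Each box is ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    """
    rects = []
    for (x0, y0, z0), (x1, y1, z1) in boxes:
        if x0 <= x_cut <= x1:
            rects.append({"y": (y0, y1), "z": (z0, z1),
                          "height": z1 - z0})   # dimensioning information
    return rects

boxes = [((0, 0, 0), (4, 2, 3)), ((3, 0, 0), (6, 1, 2))]
view = cross_section(boxes, 3.5)    # cut line intersects both "meshes"
view2 = cross_section(boxes, 5.0)   # after adjusting the cut line: one mesh
```

Re-running `cross_section` after each control input plays the role of updating the view and its dimensioning to the new line location.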
Tactile solutions integration for patient specific treatment
Embodiments of a method are described. The method comprises receiving an image file of a region of interest of an anatomy of a living organism, analyzing the image file and extracting image coordinates and density information of a plurality of points in the image file, training a neural network using collective information available in a database, registering the region of interest of the anatomy as a virtual version using an input from the neural network, and subsequently training the neural network using an input from a user and the collective information available in the database. The collective information is recorded in the database with respect to a plurality of clusters of different physiological states of the living organism. The method further comprises performing a treatment on the region of interest. In an embodiment, the treatment is performed using the input from the neural network.
CONTROLLING 3D POSITIONS IN RELATION TO MULTIPLE VIRTUAL PLANES
Systems, methods, and non-transitory computer readable media containing instructions for moving virtual content between virtual planes in three-dimensional space are provided. In one implementation, the wearable extended reality appliance may present a virtual object at a first location on a first virtual plane; intraplanar input signals may be received for causing the virtual object to move to a second location on the first virtual plane, and in response the wearable extended reality appliance may virtually display an intraplanar movement of the virtual object from the first location to the second location; and interplanar input signals may be received for causing the virtual object to move to a third location on a second virtual plane, while the virtual object is in the second location, and in response the wearable extended reality appliance may virtually display an interplanar movement of the virtual object from the second location to the third location.
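The distinction between intraplanar and interplanar movement above reduces to whether the target location lies on the object's current virtual plane. A hypothetical data model (not the patent's) makes this concrete:

```python
# Each virtual object tracks its plane and a 2D location on that plane;
# a move within the same plane is intraplanar, a move across planes
# is interplanar.

class VirtualObject:
    def __init__(self, plane, location):
        self.plane = plane
        self.location = location

    def move(self, plane, location):
        kind = "intraplanar" if plane == self.plane else "interplanar"
        self.plane, self.location = plane, location
        return kind

obj = VirtualObject("plane1", (0.0, 0.0))
k1 = obj.move("plane1", (1.0, 2.0))   # second location, same plane
k2 = obj.move("plane2", (1.0, 2.0))   # third location, different plane
```

The returned `kind` would select which movement animation the appliance displays.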
GESTURE INTERACTION WITH INVISIBLE VIRTUAL OBJECTS (AS AMENDED)
Methods, systems, apparatuses, and non-transitory computer-readable media are provided for enabling gesture interaction with invisible virtual objects. In one implementation, the computer-readable medium includes instructions to cause a processor to receive image data captured by at least one image sensor of a wearable extended reality appliance in a field of view; display a plurality of virtual objects in a portion of the field of view; receive a selection of a specific physical object; receive a selection of a specific virtual object; dock the specific virtual object with the specific physical object; when the specific physical object and the specific virtual object are outside the portion of the field of view such that the specific virtual object is invisible to a user of the wearable extended reality appliance, receive a gesture input indicating interaction with the specific virtual object; and cause an output associated with the specific virtual object.
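The key behaviour above is that a gesture can target a docked virtual object even when it sits outside the rendered portion of the field of view. A simplified one-dimensional sketch, with all angles and thresholds assumed for illustration:

```python
# Visibility and targeting modelled as angles (degrees) from the view
# centre; a gesture aimed near the docked object triggers its output
# whether or not the object is inside the rendered field of view.

def in_view(angle_deg, fov_half_deg=35.0):
    return abs(angle_deg) <= fov_half_deg

def handle_gesture(docked_angle_deg, gesture_angle_deg, tol_deg=5.0):
    visible = in_view(docked_angle_deg)
    targeted = abs(gesture_angle_deg - docked_angle_deg) <= tol_deg
    if targeted:
        return "output" + ("" if visible else " (object invisible)")
    return None

# Object docked at 60 degrees, well outside a 70-degree rendered window.
r = handle_gesture(docked_angle_deg=60.0, gesture_angle_deg=62.0)
```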
SYSTEMS AND METHODS FOR CONTROLLING VIRTUAL SCENE PERSPECTIVE VIA PHYSICAL TOUCH INPUT
Systems, methods, and non-transitory computer readable media for controlling perspective in an extended reality environment are disclosed. In one embodiment, a non-transitory computer readable medium contains instructions to cause a processor to perform the steps of: outputting for presentation via a wearable extended reality appliance (WER-appliance), first display signals reflective of a first perspective of a scene; receiving first input signals caused by a first multi-finger interaction with the touch sensor; in response, outputting for presentation via the WER-appliance second display signals to modify the first perspective of the scene, causing a second perspective of the scene to be presented via the WER-appliance; receiving second input signals caused by a second multi-finger interaction with the touch sensor; and in response, outputting for presentation via the WER-appliance third display signals to modify the second perspective of the scene, causing a third perspective of the scene to be presented via the WER-appliance.
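The first-to-second-to-third perspective progression above can be sketched as successive transforms applied per touch input. The gesture-to-transform mapping below (pinch scales, drag pans) is a common convention assumed for illustration, not taken from the patent:

```python
# Each multi-finger interaction yields display signals for a new
# perspective: a pinch changes zoom, a two-finger drag changes pan.

def next_perspective(p, gesture):
    kind, amount = gesture
    p = dict(p)                       # perspectives are immutable snapshots
    if kind == "pinch":
        p["zoom"] = round(p["zoom"] * amount, 3)
    elif kind == "drag":
        p["pan"] = (p["pan"][0] + amount[0], p["pan"][1] + amount[1])
    return p

p1 = {"zoom": 1.0, "pan": (0.0, 0.0)}             # first perspective
p2 = next_perspective(p1, ("pinch", 1.5))         # second perspective
p3 = next_perspective(p2, ("drag", (10.0, 0.0)))  # third perspective
```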
SYSTEMS AND METHODS FOR CONTROLLING CURSOR BEHAVIOR
Systems, methods, and non-transitory computer readable media containing instructions for causing at least one processor to perform operations to enable cursor control in an extended reality space are provided. In one implementation, the processor is configured to perform operations comprising receiving from an image sensor first image data reflecting a first region of focus of a user of a wearable extended reality appliance; causing a first presentation of a virtual cursor in the first region of focus; receiving from the image sensor second image data reflecting a second region of focus of the user outside the initial field of view in the extended reality space; receiving input data indicative of a desire of the user to interact with the virtual cursor; and causing a second presentation of the virtual cursor in the second region of focus in response to the input data.
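The cursor behaviour above amounts to: the cursor stays at its first presentation until input data signals intent to interact, then jumps to the user's current region of focus. A minimal sketch under those assumptions:

```python
# Gaze-driven cursor relocation: focus updates alone do not move the
# cursor; an explicit interaction intent relocates it to current focus.

class CursorController:
    def __init__(self, focus):
        self.cursor = focus                  # first presentation

    def update(self, focus, wants_cursor):
        if wants_cursor:
            self.cursor = focus              # second presentation
        return self.cursor

c = CursorController(focus=(0.1, 0.2))
c.update(focus=(0.8, 0.9), wants_cursor=False)   # focus moved, no intent
pos = c.update(focus=(0.8, 0.9), wants_cursor=True)
```

Gating the jump on `wants_cursor` avoids the cursor chasing every saccade of the user's gaze.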
INCREMENTAL HIGHLIGHTING DURING VIRTUAL OBJECT SELECTION
Methods, systems, apparatuses, and non-transitory computer-readable media are provided for incremental convergence in an extended reality environment during virtual object selection. In one implementation, the computer-readable medium may contain instructions that when executed by at least one processor cause the at least one processor to display a plurality of dispersed virtual objects across a plurality of virtual regions, the plurality of virtual regions including at least a first virtual region and a second virtual region that differs from the first virtual region; receive an initial kinesics input tending toward the first virtual region; highlight a group of virtual objects in the first virtual region based on the initial kinesics input; receive a refined kinesics input tending toward a particular virtual object from among the highlighted group of virtual objects; and trigger a functionality associated with the particular virtual object based on the refined kinesics input.
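The two-stage convergence above can be sketched as nearest-match selection applied twice: the initial kinesics input picks a region (highlighting its group), and the refined input picks one object within that group. Angles and names below are illustrative assumptions:

```python
# Regions and objects modelled as angular positions (degrees); each
# kinesics input selects whatever lies closest to its direction.

def nearest(items, direction):
    """Name whose angle is closest to the pointing direction."""
    return min(items, key=lambda kv: abs(kv[1] - direction))[0]

regions = {"left": -30.0, "right": 30.0}
objects = {"left": {"a": -35.0, "b": -25.0},
           "right": {"c": 25.0, "d": 35.0}}

region = nearest(regions.items(), -28.0)       # initial kinesics input
highlighted = objects[region]                  # highlight the group
target = nearest(highlighted.items(), -26.0)   # refined kinesics input
```

Triggering `target`'s functionality completes the incremental selection.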
ENVIRONMENTALLY ADAPTIVE EXTENDED REALITY DISPLAY SYSTEM (AS AMENDED)
A method for facilitating an environmentally adaptive extended reality display in a physical environment includes virtually displaying content via a wearable extended reality appliance operating in the physical environment, wherein displaying content via the wearable extended reality appliance is associated with at least one adjustable extended reality display parameter. Image data is obtained from the wearable extended reality appliance and a specific environmental change unrelated to the virtually displayed content is detected in the image data. A group of rules associating environmental changes with changes in the at least one adjustable extended reality display parameter is accessed and a specific rule of the group of rules is determined, the specific rule corresponding to the specific environmental change. The specific rule is implemented to adjust the at least one adjustable extended reality display parameter based on the specific environmental change.