G06T2219/012

AUGMENTED REALITY INTERACTION AND CONTEXTUAL MENU SYSTEM
20210335043 · 2021-10-28

Disclosed herein are various embodiments for an augmented reality contextual menu system. An embodiment operates by determining a position of a collapsed menu icon, representing a menu within an augmented reality computing environment, on an interface of the augmented reality computing environment, based on a position of a device used to view and interact with the interface. A hand gesture from a user, corresponding to an expansion command associated with displaying a plurality of menu options on the interface, is detected. The collapsed menu icon is replaced with a plurality of expanded menu icons, wherein each expanded menu icon corresponds to one of the plurality of menu options and enables user access to functionality within the augmented reality computing environment.
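The expand step described in the abstract can be sketched as a simple state transition: on an expansion gesture, the single collapsed icon is swapped for one icon per menu option. This is a hypothetical illustration only; the function and names below are not from the patent.

```python
def handle_gesture(icons, gesture, menu_options):
    """Toy sketch: replace a collapsed menu icon with expanded option icons.

    icons        -- current icons shown on the interface
    gesture      -- a recognized hand-gesture label (assumption: already classified)
    menu_options -- the menu options the expanded icons should expose
    """
    if gesture == "expand" and icons == ["collapsed_menu"]:
        # Each expanded icon corresponds to exactly one menu option.
        return [f"icon:{opt}" for opt in menu_options]
    return icons  # any other gesture leaves the interface unchanged

icons = handle_gesture(["collapsed_menu"], "expand", ["move", "scale", "annotate"])
print(icons)  # → ['icon:move', 'icon:scale', 'icon:annotate']
```

A real system would also position each expanded icon relative to the viewing device, per the abstract's first step.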

AUGMENTED REALITY INTERACTION, MODELING, AND ANNOTATION SYSTEM
20210335044 · 2021-10-28

Disclosed herein are various embodiments for an augmented reality interaction, modeling, and annotation system. An embodiment operates by displaying a digital model of an object within an interface of an augmented reality computing system, wherein the model is associated with a first set of dimensions corresponding to a size of the object and a second set of dimensions corresponding to a size of the digital model. A command to resize the digital model to a third set of dimensions different from the second set of dimensions is received from a user. The digital model is resized within the interface in accordance with the third set of dimensions. The resized digital model of the object is displayed in accordance with the third set of dimensions, wherein the first set of dimensions corresponding to the size of the object remains unchanged within the augmented reality computing system.
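The key idea, two independent dimension sets where resizing touches only the display dimensions, can be illustrated with a minimal sketch. The class and field names are assumptions for illustration, not the patent's terminology.

```python
from dataclasses import dataclass
from typing import Tuple

Dims = Tuple[float, float, float]  # (width, height, depth)

@dataclass
class DigitalModel:
    object_dims: Dims   # real-world size of the object; never mutated by resizing
    display_dims: Dims  # current on-screen size of the digital model

    def resize(self, new_dims: Dims) -> None:
        # Resizing only changes how the model is drawn in the interface;
        # the recorded real-world dimensions remain unchanged.
        self.display_dims = new_dims

model = DigitalModel(object_dims=(2.0, 1.0, 0.5), display_dims=(0.2, 0.1, 0.05))
model.resize((0.4, 0.2, 0.1))
print(model.display_dims)  # → (0.4, 0.2, 0.1)
print(model.object_dims)   # → (2.0, 1.0, 0.5)
```

Keeping the two sets separate is what lets annotations or measurements made against the model still refer to the object's true size.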

DESIGN ASSISTANCE DEVICE, DESIGN ASSISTANCE SYSTEM, SERVER, AND DESIGN ASSISTANCE METHOD

Provided are a design assistance device, a design assistance system, a server, and a design assistance method that display the shapes and specification information of multiple items in an item group so that a user can easily view them. The design assistance device includes: a three-dimensional data acquisition unit that acquires three-dimensional data of an item group including a plurality of items; a display data generation unit that generates two-dimensional display data including a two-dimensional display representing an appearance of the item group and specification information of at least one of the items in one observation direction in the three-dimensional data; and a display mode determination unit that determines a display mode of the specification information in the two-dimensional display based on the shape of each item as well as the relative positions and relative orientations between the items as viewed in the one observation direction.

Methods for determining environmental parameter data of a real object in an image
11144680 · 2021-10-12

Example systems and methods are described for virtual visualization of a three-dimensional (3D) model of an object in a two-dimensional (2D) environment. The method may include capturing the 2D environment and adding scale and perspective to the 2D environment. Further, a user may select intersection points on a ground plane of the 2D environment to form walls, thereby converting the 2D environment into a 3D space. The user may further add 3D models of objects on the wall plane such that the objects remain flush with the wall plane.

Methods and apparatus for dimensioning an object using proximate devices

Methods and apparatus for dimensioning an object by enlisting proximate devices to obtain image data representative of the object from multiple perspectives are provided. An example method includes capturing, by a first image capture device, first image data representative of an object from a first perspective; determining whether a second image capture device is within proximity of the first image capture device; and when the second image capture device is within proximity of the first image capture device, sending a request to the second image capture device for second image data representative of the object from a second perspective, wherein the first image data and the second image data are combinable to form a composite representative of the object.
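The enlistment flow, capture locally, check proximity, then request a second perspective, can be sketched as below. The `Camera` class, distance threshold, and string "frames" are all hypothetical stand-ins for real image data and device discovery.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Camera:
    name: str
    position: Tuple[float, float]  # (x, y) in metres; assumption: known via discovery

    def capture(self, perspective: str) -> str:
        # Stand-in for real image data from this device.
        return f"{self.name}:{perspective}"

def distance(a: Camera, b: Camera) -> float:
    dx, dy = a.position[0] - b.position[0], a.position[1] - b.position[1]
    return (dx * dx + dy * dy) ** 0.5

def build_composite(first: Camera, candidates: List[Camera], max_range: float = 5.0) -> List[str]:
    frames = [first.capture("view-0")]           # first perspective, captured locally
    for i, dev in enumerate(candidates, start=1):
        if distance(first, dev) <= max_range:    # proximity check
            frames.append(dev.capture(f"view-{i}"))  # request data from the proximate device
    return frames                                # combinable into a composite of the object

a = Camera("phone-A", (0.0, 0.0))
b = Camera("phone-B", (3.0, 4.0))    # 5 m away: within range
c = Camera("phone-C", (30.0, 0.0))   # too far: ignored
print(build_composite(a, [b, c]))    # → ['phone-A:view-0', 'phone-B:view-1']
```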

Augmented reality interaction and contextual menu system
11145135 · 2021-10-12

Disclosed herein are various embodiments for an augmented reality contextual menu system. An embodiment operates by determining a position of a collapsed menu icon, representing a menu within an augmented reality computing environment, on an interface of the augmented reality computing environment, based on a position of a device used to view and interact with the interface. A hand gesture from a user, corresponding to an expansion command associated with displaying a plurality of menu options on the interface, is detected. The collapsed menu icon is replaced with a plurality of expanded menu icons, wherein each expanded menu icon corresponds to one of the plurality of menu options and enables user access to functionality within the augmented reality computing environment.

XR device for providing AR mode and VR mode and method for controlling the same
11138797 · 2021-10-05

An XR device for providing an augmented reality (AR) mode and a virtual reality (VR) mode, and a method for controlling the same, are disclosed. The XR device is applicable to 5G communication technology, robot technology, autonomous driving technology, and Artificial Intelligence (AI) technology. When the user wears the XR device in front of a mirror and selects a virtual item from among the displayed fitting items, the XR device displays the virtual clothes item on the user's appearance in the mirror.

IMAGE MATCHING FOR FRACTURE REDUCTION
20210361378 · 2021-11-25

In one example a method of fracture reduction includes imaging, intraoperatively, a fractured bone of a patient to obtain a first representation of the fractured bone in a computing system. The fractured bone defines at least a first bone fragment, and a second bone fragment that is separated from the first bone fragment by a fracture. The method includes imaging, intraoperatively, a contralateral bone of the patient to obtain a second representation of the contralateral bone in the computing system. The method includes generating, intraoperatively in the computing system, a mirrored representation of the first representation or the second representation. The first representation is compared to a representation of a desired orientation of the fractured bone, where the representation of the desired orientation is the mirrored representation of the second representation. Alternatively, the mirrored representation is compared to the representation of the desired orientation of the fractured bone, where the representation of the desired orientation is the second representation.
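The mirroring-and-comparison step can be illustrated on toy point sets: reflecting a representation of the contralateral bone across the sagittal (YZ) plane yields a template for the desired orientation of the fractured bone. This is a geometric sketch only; the actual method operates on intraoperative images, and all names below are hypothetical.

```python
from typing import List, Tuple

Point = Tuple[float, float, float]

def mirror_x(points: List[Point]) -> List[Point]:
    """Mirror a 3-D point set across the YZ plane (x -> -x)."""
    return [(-x, y, z) for (x, y, z) in points]

def mean_error(a: List[Point], b: List[Point]) -> float:
    """Mean point-to-point distance between two equally sized point sets."""
    return sum(
        ((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5
        for (ax, ay, az), (bx, by, bz) in zip(a, b)
    ) / len(a)

# Toy landmark sets for the fractured bone and the opposite-side (contralateral) bone.
fractured = [(1.0, 0.0, 0.0), (2.0, 1.0, 0.0)]
contralateral = [(-1.0, 0.0, 0.0), (-2.0, 1.0, 0.0)]

desired = mirror_x(contralateral)          # mirrored representation = desired orientation
print(mean_error(fractured, desired))      # → 0.0 (fragments match the mirrored template)
```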

Method for implementing virtual scene conversion and related apparatus

Embodiments of this application disclose a method for implementing virtual scene conversion. The method in the embodiments includes: displaying an initial virtual scene, a first scene conversion trigger set, and partial information of at least one target virtual scene in a screen area, a scene conversion trigger being used to convert between associated virtual scenes; determining, when a trigger operation on the first scene conversion trigger is received, the target virtual scene and a second scene conversion trigger set based on the first scene conversion trigger operation in the initial virtual scene, the second scene conversion trigger set including at least one second scene conversion trigger associated with a plurality of determined target virtual scenes; and rendering and displaying the target virtual scene and the second scene conversion trigger set in the screen area. The embodiments of this application improve adaptability to different services and broaden the range of application scenarios.

ESTIMATING DIMENSIONS OF GEO-REFERENCED GROUND-LEVEL IMAGERY USING ORTHOGONAL IMAGERY
20210217231 · 2021-07-15

A system and method are provided for measuring building façade elements by combining ground-level and orthogonal imagery. Measurements of the dimensions of building façade elements are based on ground-level imagery that is scaled and geo-referenced using orthogonal imagery. The method continues by creating a tabular dataset of measurements for one or more architectural elements such as siding (e.g., aluminum, vinyl, wood, brick, and/or paint), windows, or doors. The tabular dataset can be part of an estimate report.
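The scaling step reduces to recovering a metres-per-pixel factor from a span whose true length is known from the geo-referenced orthogonal imagery, then applying that factor to pixel measurements in the ground-level image. A minimal arithmetic sketch, with made-up numbers:

```python
def metres_per_pixel(known_distance_m: float, pixel_distance: float) -> float:
    """Scale factor from a span visible in both the ground-level image and
    the geo-referenced orthogonal imagery (assumed already matched)."""
    return known_distance_m / pixel_distance

def measure(pixel_length: float, scale: float) -> float:
    """Convert a pixel measurement in the ground-level image to metres."""
    return pixel_length * scale

# Hypothetical example: a façade edge known to be 10 m spans 500 px.
scale = metres_per_pixel(10.0, 500)  # 0.02 m/px
window_width = measure(60, scale)    # a window is 60 px wide
print(window_width)                  # → 1.2 (metres)
```

Results like `window_width` are the kind of values that would populate the tabular dataset in the estimate report.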