G06T19/006

AUGMENTED-REALITY ENDOSCOPIC VESSEL HARVESTING

An endoscopic vessel harvesting system for surgically removing a blood vessel for use in coronary bypass employs endoscopic instruments to isolate and sever the vessel. An endoscopic camera in the instruments captures images from a distal tip of the instrument within a dissected tunnel around the vessel. An image processor assembles a three-dimensional model of the tunnel from a series of images captured by the endoscopic camera. An augmented-reality display coupled to the image processor renders (i.e., visibly displays in the user's field of view) a consolidated map representing the three-dimensional model, along with a marker in association with the map indicating a current location of the distal tip.
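The map-plus-marker structure described above can be sketched minimally: successive tip positions estimated from the camera frames form the consolidated tunnel map, and the marker tracks the most recent position. The class and field names are illustrative, not from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class TunnelMap:
    """Consolidated map of the dissected tunnel, built from camera frames."""
    path: list = field(default_factory=list)  # estimated (x, y, z) tip positions

    def add_frame(self, position):
        # Each captured frame contributes an estimated tip position to the map.
        self.path.append(position)

    def current_tip(self):
        # The marker is rendered at the most recently estimated tip location.
        return self.path[-1] if self.path else None
```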

VISUALIZATION OF A KNOWLEDGE DOMAIN

An exemplary process obtains sensor data corresponding to a physical environment including one or more physical objects. A physical property of the one or more physical objects is determined based on the sensor data. A presentation mode associated with a knowledge domain is determined. An extended reality environment including a view of the physical environment and a visualization selected based on the determined presentation mode is provided. The visualization includes virtual content associated with the knowledge domain. The virtual content is provided based on display characteristics specified by the presentation mode that depend upon the physical property of the one or more objects.
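The dependency of display characteristics on a measured physical property can be sketched as follows; the mode names, the choice of object width as the property, and the sizing rule are all hypothetical, since the abstract leaves them open.

```python
# Hypothetical presentation modes for a knowledge domain: each maps a
# measured physical property (here, object width in metres) to display
# characteristics for the associated virtual content.
PRESENTATION_MODES = {
    "compact": {"min_label_pt": 8, "scale": 40},
    "detailed": {"min_label_pt": 12, "scale": 80},
}

def display_characteristics(mode_name, object_width_m):
    """Derive a label size from the object's physical property per the mode."""
    mode = PRESENTATION_MODES[mode_name]
    # Larger physical objects get proportionally larger labels,
    # floored at the mode's minimum point size.
    label_pt = max(mode["min_label_pt"], round(mode["scale"] * object_width_m))
    return {"label_pt": label_pt}
```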

AUGMENTED REALITY SYSTEM AND METHODS FOR STEREOSCOPIC PROJECTION AND CROSS-REFERENCING OF LIVE X-RAY FLUOROSCOPIC AND COMPUTED TOMOGRAPHIC C-ARM IMAGING DURING SURGERY
20230050636 · 2023-02-16

A method for performing a procedure on a patient includes acquiring a three-dimensional image of a location of interest on the patient and a two-dimensional image of the location of interest. A computer system can relate the three-dimensional image with the two-dimensional image to form a holographic image dataset. The computer system can register the holographic image dataset with the patient. An augmented reality system can render a hologram, registered to the patient, based on the holographic image dataset. The hologram can include a projection of the three-dimensional image and a projection of the two-dimensional image. The practitioner can view the hologram with the augmented reality system and perform the procedure on the patient. The practitioner can employ the augmented reality system to visualize a point on the projection of the three-dimensional image and a corresponding point on the projection of the two-dimensional image during the procedure.
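Cross-referencing a point between the three-dimensional and two-dimensional projections amounts to mapping a 3-D coordinate into the 2-D image once the two datasets are related. A minimal sketch, assuming a simplified 3×4 pinhole projection matrix (the patent's actual registration pipeline is not specified in the abstract):

```python
import numpy as np

def project_point(P, xyz):
    """Map a 3-D point (e.g., CT frame) to its 2-D fluoroscopic pixel."""
    X = np.append(np.asarray(xyz, dtype=float), 1.0)  # homogeneous coordinates
    u, v, w = P @ X
    return (u / w, v / w)

# Illustrative camera looking down +z with focal length f = 2.
f = 2.0
P = np.array([[f, 0, 0, 0],
              [0, f, 0, 0],
              [0, 0, 1, 0]])
```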

WAVEGUIDES WITH INTEGRATED OPTICAL ELEMENTS AND METHODS OF MAKING THE SAME
20230047616 · 2023-02-16

An example waveguide can include a polymer layer having substantially optically transparent material with first and second major surfaces configured such that light containing image information can propagate through the polymer layer being guided therein by reflecting from the first and second major surfaces via total internal reflection. The first major surface can include first smaller and second larger surface portions monolithically integrated with the polymer layer and with each other. The first smaller surface portion can include at least a part of an in-coupling optical element configured to couple light incident on the in-coupling optical element into the polymer layer for propagation therethrough by reflection from the second major surface and the second larger surface portion of the first major surface.
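Total internal reflection confines light between the two major surfaces only above the critical angle, which follows from Snell's law. A worked sketch with illustrative refractive indices (n ≈ 1.7 for a high-index polymer, 1.0 for air; the abstract gives no specific values):

```python
import math

def critical_angle_deg(n_core, n_clad=1.0):
    """Critical angle for total internal reflection at a core/cladding interface.

    Rays striking a major surface at internal angles above this value
    are fully reflected and remain guided in the polymer layer.
    """
    return math.degrees(math.asin(n_clad / n_core))

theta_c = critical_angle_deg(1.7)  # roughly 36 degrees for n = 1.7 against air
```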

METHOD AND SYSTEM FOR GAZE-BASED CONTROL OF MIXED REALITY CONTENT
20230048185 · 2023-02-16

Systems and methods are presented for discovering and positioning content in augmented reality space. A method includes forming a three-dimensional (3D) map of the surroundings of a user of an augmented reality (AR) head-mounted display (HMD); determining a depth-wise location of a gaze point of the user based on eye gaze direction and eye vergence; determining a visual guidance line pathway in the 3D map; guiding an action of the user along the visual guidance line pathway at one or more identified focal points; and rendering a mixed reality (MR) object along the visual guidance line pathway at a location corresponding to a direction of the user's gaze.
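The depth-wise gaze location can be recovered from vergence by simple geometry: the two gaze rays converge at the fixation point, so the interpupillary distance and the total vergence angle determine the depth. A minimal sketch of that geometry (the abstract does not give a formula; this is the standard triangulation):

```python
import math

def gaze_depth_m(ipd_m, vergence_rad):
    """Distance from the eye baseline to the gaze point on the midline.

    Each eye rotates inward by half the total vergence angle, so
    tan(vergence / 2) = (ipd / 2) / depth, giving the depth below.
    """
    return (ipd_m / 2.0) / math.tan(vergence_rad / 2.0)
```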

ACCURATE POSITIONING OF AUGMENTED REALITY CONTENT

A system for accurately positioning augmented reality (AR) content within a coordinate system such as the World Geodetic System (WGS) may include AR content tethered to trackable physical features. As the system is used by mobile computing devices, each mobile device may calculate and compare relative positioning data between the trackable features. The system may connect and group the trackable features hierarchically, as measurements are obtained. As additional measurements are made of the trackable features in a group, the relative position data may be improved, e.g., using statistical methods.
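The statistical improvement of relative position data as measurements accumulate can be sketched with a running mean per feature pair; the patent leaves the exact statistical method open, so this is only one plausible minimal choice.

```python
class RelativePosition:
    """Running-mean estimate of the relative position between two
    trackable features, refined as devices contribute measurements."""

    def __init__(self):
        self.n = 0
        self.mean = (0.0, 0.0, 0.0)

    def add_measurement(self, xyz):
        # Welford-style incremental mean: each new measurement nudges
        # the estimate, so the estimate tightens as n grows.
        self.n += 1
        self.mean = tuple(m + (x - m) / self.n for m, x in zip(self.mean, xyz))
```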

GENERATING AUGMENTED REALITY IMAGES FOR DISPLAY ON A MOBILE DEVICE BASED ON GROUND TRUTH IMAGE RENDERING
20230048235 · 2023-02-16

Systems and methods are disclosed herein for monitoring a location of a client device associated with a transportation service and generating augmented reality images for display on the client device. The systems and methods use sensor data from the client device and a device localization process to monitor the location of the client device by comparing renderings of images captured by the client device to renderings of the vicinity of the pickup location. The systems and methods determine navigation instructions from the user's current location to the pickup location and select one or more augmented reality elements associated with the navigation instructions and/or landmarks along the route to the pickup location. The systems and methods instruct the client device to overlay the selected augmented reality elements on a video feed of the client device.
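Selecting augmented reality elements associated with landmarks along the route can be sketched as a proximity filter against the navigation waypoints. Planar distances, the landmark names, and the threshold are all illustrative, not from the patent.

```python
def select_ar_elements(landmarks, route, max_dist_m=20.0):
    """Keep landmarks lying within max_dist_m of any route waypoint.

    landmarks: {name: (x, y)} positions in a local metric frame.
    route: list of (x, y) waypoints from the user to the pickup location.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    return [name for name, pos in landmarks.items()
            if any(dist(pos, wp) <= max_dist_m for wp in route)]
```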

COORDINATING ALIGNMENT OF COORDINATE SYSTEMS USED FOR A COMPUTER GENERATED REALITY DEVICE AND A HAPTIC DEVICE
20230050367 · 2023-02-16

A first electronic device controls a second electronic device to measure a position of the first electronic device. The first electronic device includes a motion sensor, a network interface circuit, a processor, and a memory. The motion sensor senses motion of the first electronic device. The network interface circuit communicates with the second electronic device. The memory stores program code that is executed by the processor to perform operations that include, responsive to determining that the first electronic device has a level of motion that satisfies a defined rule, transmitting a request for the second electronic device to measure a position of the first electronic device. The position of the first electronic device is sensed and then stored in the memory. An acknowledgement is received from the second electronic device indicating that it has stored sensor data that can be used to measure the position of the first electronic device.
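The "defined rule" that triggers the measurement request is left open in the abstract; one plausible minimal rule is accumulated displacement crossing a threshold, sketched below with illustrative names and units.

```python
class MeasurementTrigger:
    """Decides when to ask the second device for a position measurement.

    Assumed rule (not specified in the patent): request a fix once the
    motion sensed since the last request exceeds a threshold.
    """

    def __init__(self, threshold_m=0.05):
        self.threshold_m = threshold_m
        self.accumulated = 0.0

    def on_motion(self, delta_m):
        # The motion sensor reports incremental movement in metres.
        self.accumulated += abs(delta_m)
        if self.accumulated >= self.threshold_m:
            self.accumulated = 0.0
            return True   # transmit a measurement request to the second device
        return False
```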

ROTATIONAL DEVICE FOR AN AUGMENTED REALITY DISPLAY SURFACE USING NFC TECHNOLOGY
20230050375 · 2023-02-16

A device for displaying AR markings comprises a top and a base, with the top rotatably attached to the base and the base configured to be held by a hand or placed on a fixed surface. The AR markings are positioned on the top such that when the top rotates with respect to the base, so do the AR markings. When the AR markings are scanned by an appropriate scanning and display device, such as a smartphone, a 3D image associated with the AR markings is displayed on the display device as an augmented reality projection. When the top rotates with respect to the base, so too does the augmented reality projection.

SYSTEMS AND METHODS FOR MASKING A RECOGNIZED OBJECT DURING AN APPLICATION OF A SYNTHETIC ELEMENT TO AN ORIGINAL IMAGE
20230050857 · 2023-02-16

An exemplary object masking system is configured to mask a recognized object during an application of a synthetic element to an original image. For example, the object masking system accesses a model of a recognized object depicted in an original image of a scene. The object masking system associates the model with the recognized object. The object masking system then generates presentation data for use by a presentation system to present an augmented version of the original image in which a synthetic element added to the original image is, based on the model as associated with the recognized object, prevented from occluding at least a portion of the recognized object. In this way, the synthetic element is made to appear as if located behind the recognized object. Corresponding systems and methods are also disclosed.
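The occlusion behavior described above can be sketched as per-pixel depth masking: the synthetic element is suppressed wherever the recognized object's model is closer to the camera, so the element appears to sit behind the object. The depth buffers here are illustrative stand-ins for whatever the model provides.

```python
import numpy as np

def composite(original, synthetic, synthetic_depth, object_depth):
    """Overlay synthetic pixels only where they are in front of the object.

    original, synthetic: (H, W, 3) images.
    synthetic_depth, object_depth: (H, W) per-pixel depths (smaller = closer).
    """
    # Where the recognized object is closer, keep the original pixels,
    # preventing the synthetic element from occluding the object.
    occluded = synthetic_depth >= object_depth
    return np.where(occluded[..., None], original, synthetic)
```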