G02B2027/014

AUGMENTED REALITY SYSTEM AND METHODS FOR STEREOSCOPIC PROJECTION AND CROSS-REFERENCING OF LIVE X-RAY FLUOROSCOPIC AND COMPUTED TOMOGRAPHIC C-ARM IMAGING DURING SURGERY
20230050636 · 2023-02-16

A method for performing a procedure on a patient includes acquiring a three-dimensional image and a two-dimensional image of a location of interest on the patient. A computer system can relate the three-dimensional image with the two-dimensional image to form a holographic image dataset. The computer system can register the holographic image dataset with the patient. An augmented reality system can render a hologram, based on the holographic image dataset, relative to the patient. The hologram can include a projection of the three-dimensional image and a projection of the two-dimensional image. A practitioner can view the hologram with the augmented reality system and perform the procedure on the patient. The practitioner can employ the augmented reality system to visualize a point on the projection of the three-dimensional image and a corresponding point on the projection of the two-dimensional image during the procedure.
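The cross-referencing step can be illustrated with a small sketch (an assumption-laden illustration, not the patented method): once the two images are registered into one holographic dataset, a point picked on the three-dimensional projection can be mapped to its counterpart on the two-dimensional projection through a 3×4 projection matrix; the matrix and point values below are hypothetical.

```python
def project_point(P, xyz):
    """Map a 3-D point (e.g., picked on the CT hologram) to its corresponding
    2-D point (e.g., on the fluoroscopic projection) via a 3x4 projection
    matrix P obtained from registration.  P and the coordinates are
    hypothetical stand-ins for a registered holographic dataset."""
    x, y, z = xyz
    u, v, w = (sum(P[r][c] * val for c, val in enumerate((x, y, z, 1.0)))
               for r in range(3))
    return (u / w, v / w)

# With a trivial projection matrix, the 3-D point (2, 4, 2) lands at (1.0, 2.0).
P = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0]]
uv = project_point(P, (2.0, 4.0, 2.0))
```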

METHOD AND SYSTEM FOR GAZE-BASED CONTROL OF MIXED REALITY CONTENT
20230048185 · 2023-02-16 ·

Systems and methods are presented for discovering and positioning content into augmented reality space. A method includes forming a three-dimensional (3D) map of surroundings of a user of an augmented reality (AR) head mounted display (HMD); determining a depth-wise location of a gaze point of a user based on eye gaze direction and eye vergence; determining a visual guidance line pathway in the 3D map; guiding an action of the user along the visual guidance line pathway at one or more identified focal points; and rendering a mixed reality (MR) object along the visual guidance line pathway at a location corresponding to a direction of the user’s gaze.
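The depth-wise gaze determination from eye vergence can be sketched geometrically (a minimal illustration under a symmetric-fixation assumption; the function and constants are not from the patent): the two gaze rays and the interpupillary baseline form an isosceles triangle, so the gaze-point depth follows from the vergence angle.

```python
import math

def vergence_depth(ipd_m, vergence_rad):
    """Depth of the binocular gaze point from eye vergence.

    Assumes the eyes fixate symmetrically on the midline, so
    depth = (IPD / 2) / tan(vergence / 2).
    """
    return (ipd_m / 2.0) / math.tan(vergence_rad / 2.0)

# A 63 mm IPD with a 3.6-degree vergence angle places the gaze point
# roughly 1 m in front of the user; doubling the vergence halves the depth.
depth = vergence_depth(0.063, math.radians(3.6))
```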

SYSTEM AND METHOD FOR ENHANCING VISUAL ACUITY

A head wearable display system comprising: a target object detection module receiving multiple image pixels of a first portion and a second portion of a target object, and the corresponding depths; a first light emitter emitting multiple first-eye light signals to display a first-eye virtual image of the first portion and the second portion of the target object for a viewer; a first light direction modifier for respectively varying a light direction of each of the multiple first-eye light signals emitted from the first light emitter; a first collimator; and a first combiner for redirecting and converging the multiple first-eye light signals towards a first eye of the viewer. The first-eye virtual image of the first portion of the target object in a first field of view has a greater number of the multiple first-eye light signals per degree than that of the first-eye virtual image of the second portion of the target object in a second field of view.

PERIPHERAL LIGHT FIELD DISPLAY
20230049531 · 2023-02-16 ·

A Head Mounted Display (HMD) includes a pixel array having multiple pixels configured in a two-dimensional surface, each pixel providing multiple light beams forming an image provided to a user. The HMD also includes a first optical element configured to provide a central portion of a field of view for the image through an eyebox that limits a volume including a pupil of the user, and a second optical element configured to provide a peripheral portion of the field of view for the image through the eyebox, wherein the peripheral portion of the field of view comprises at least one steradian of a user's field of view at a resolution of at least fifteen arcminutes.
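The claimed peripheral coverage implies a concrete pixel budget; a back-of-the-envelope sketch (small-angle approximation assumed, not a figure from the patent): one steradian is about 3283 square degrees, and a fifteen-arcminute pitch is a quarter degree per pixel.

```python
import math

SQ_DEG_PER_SR = (180.0 / math.pi) ** 2  # ~3282.8 square degrees per steradian

def pixel_budget(steradians, arcmin_per_pixel):
    """Approximate pixel count to tile a solid angle at a given angular
    pitch, treating the field as locally flat: area in square degrees
    divided by the square of the per-pixel pitch in degrees."""
    pitch_deg = arcmin_per_pixel / 60.0
    return math.ceil(steradians * SQ_DEG_PER_SR / pitch_deg ** 2)

# One steradian at a 15-arcminute pitch needs on the order of 5e4 pixels.
n = pixel_budget(1.0, 15.0)
```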

Metasurfaces with light-redirecting structures including multiple materials and methods for fabricating

Display devices include waveguides with metasurfaces as in-coupling and/or out-coupling optical elements. The metasurfaces may be formed on a surface of the waveguide and may include a plurality or an array of sub-wavelength-scale (e.g., nanometer-scale) protrusions. Individual protrusions may include horizontal and/or vertical layers of different materials which may have different refractive indices, allowing for enhanced manipulation of light redirecting properties of the metasurface. Some configurations and combinations of materials may advantageously allow for broadband metasurfaces. Manufacturing methods described herein provide for vertical and/or horizontal layers of different materials in a desired configuration or profile.

Visual-inertial tracking using rolling shutter cameras

Visual-inertial tracking of an eyewear device using one or more rolling shutter cameras. The eyewear device includes a position determining system. Visual-inertial tracking is implemented by sensing motion of the eyewear device. An initial pose is obtained for a rolling shutter camera and an image of an environment is captured. The image includes feature points, each captured at a particular capture time. A number of poses for the rolling shutter camera is computed based on the initial pose and the sensed movement of the device; the number of computed poses is responsive to the sensed movement. A computed pose is selected for each feature point in the image by matching the capture time for the feature point to the computed time for the computed pose. The position of the eyewear device is determined within the environment using the feature points and the selected computed poses for the feature points.
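The pose-per-capture-time matching can be sketched with a one-dimensional, constant-velocity stand-in for the full 6-DoF problem (all names and the motion model are assumptions for illustration, not the patented implementation):

```python
import bisect
from dataclasses import dataclass

@dataclass
class Pose:
    t: float  # time (s) within the rolling-shutter readout
    x: float  # 1-D position standing in for a full 6-DoF pose

def compute_poses(t0, x0, vx, times):
    """One pose per sampled readout time, propagated from the initial pose
    under a constant-velocity motion model (vx from sensed movement)."""
    return [Pose(t, x0 + vx * (t - t0)) for t in times]

def select_pose(poses, capture_time):
    """Match a feature point's capture time to the nearest computed pose."""
    times = [p.t for p in poses]
    i = bisect.bisect_left(times, capture_time)
    candidates = poses[max(0, i - 1): i + 1]
    return min(candidates, key=lambda p: abs(p.t - capture_time))

# Four poses across a 20 ms rolling-shutter readout; a feature captured at
# 7 ms is matched to the pose computed for ~6.7 ms.
poses = compute_poses(0.0, 0.0, 1.0, [0.0, 0.02 / 3, 0.04 / 3, 0.02])
match = select_pose(poses, 0.007)
```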

Head up display speckle contrast determination systems and methods

A system for measuring speckle contrast includes: a head up display (HUD) system configured to output a predetermined image and having a first pixels per degree (PPD); an imaging colorimeter having a field of view, positioned such that the predetermined image is in the field of view, having a second PPD that is at least 2.2 times greater than the first PPD of the HUD system, and configured to capture an image including the predetermined image; and a speckle contrast module configured to determine a speckle contrast of the HUD system based on the image.
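The quantity being determined has a standard definition; a minimal sketch of the speckle-contrast computation over a nominally uniform patch of the captured image, plus a check of the 2.2× sampling condition (function names are illustrative, not from the patent):

```python
import statistics

def speckle_contrast(intensities):
    """Speckle contrast C = sigma / mu: the standard deviation of pixel
    intensity over a nominally uniform region, divided by the mean.
    C approaches 1 for fully developed speckle, 0 for a speckle-free image."""
    return statistics.pstdev(intensities) / statistics.fmean(intensities)

def colorimeter_oversamples(hud_ppd, colorimeter_ppd):
    """Check the stated sampling condition: the imaging colorimeter's PPD
    must be at least 2.2 times the HUD system's PPD."""
    return colorimeter_ppd >= 2.2 * hud_ppd

c_flat = speckle_contrast([100, 100, 100, 100])   # 0.0: no speckle
c_noisy = speckle_contrast([80, 120, 90, 110])    # ~0.16
```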

Depth estimation using biometric data

A method of generating a depth estimate based on biometric data starts with a server receiving positioning data from a first device associated with a first user. The first device generates the positioning data based on analysis of a data stream comprising images of a second user that is associated with a second device. The server then receives biometric data of the second user from the second device. The biometric data is based on output from a sensor or a camera included in the second device. The server then determines a distance of the second user from the first device using the positioning data and the biometric data of the second user. Other embodiments are described herein.
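One way biometric data can constrain range is as a known physical scale in the image; a pinhole-camera sketch (the interpupillary-distance example and all names are assumptions, not the patented algorithm):

```python
def range_from_known_scale(focal_px, real_size_m, image_size_px):
    """Pinhole-camera range estimate: a feature of known physical size
    (e.g., an interpupillary distance supplied as biometric data) that
    spans image_size_px pixels in the image lies at
    distance = focal_px * real_size_m / image_size_px."""
    return focal_px * real_size_m / image_size_px

# A 63 mm interpupillary distance spanning 63 px, with a 1000 px focal
# length, implies the second user is about 1 m from the first device.
d = range_from_known_scale(1000.0, 0.063, 63.0)
```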

Augmented reality for vehicle operations

A method includes: saving in-flight data from an aircraft during a simulated training exercise, wherein the in-flight data includes geospatial locations of the aircraft, positional attitudes of the aircraft, and head positions of a pilot operating the aircraft; saving simulation data relating to a simulated virtual object presented to the pilot as augmented reality content in-flight, wherein the virtual object was programmed to interact with the aircraft during the simulated training exercise; and representing the in-flight data from the aircraft and the simulation data relating to the simulated virtual object as a replay of the simulated training exercise.

Color-sensitive virtual markings of objects
11582312 · 2023-02-14

Disclosed are systems, methods, and non-transitory computer readable media for making virtual colored markings on objects. Instructions may include receiving an indication of an object; receiving from an image sensor an image of a hand of an individual holding a physical marking implement; detecting in the image a color associated with the marking implement; receiving from the image sensor image data indicative of movement of a tip of the marking implement and locations of the tip; determining from the image data when the locations of the tip correspond to locations on the object; and generating, in the detected color, virtual markings on the object at the corresponding locations.
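The detect-color-then-mark flow can be sketched in a few lines (pure-Python stand-ins for the image-sensor pipeline; all names, data shapes, and values are assumptions for illustration):

```python
def detect_tip_color(image, x, y, r=1):
    """Estimate the marking implement's color as the mean RGB over a small
    patch around the detected tip location.  `image` is a row-major 2-D
    list of (r, g, b) tuples standing in for image-sensor data."""
    patch = [image[j][i]
             for j in range(max(0, y - r), min(len(image), y + r + 1))
             for i in range(max(0, x - r), min(len(image[0]), x + r + 1))]
    return tuple(sum(px[k] for px in patch) // len(patch) for k in range(3))

def virtual_markings(object_locations, tip_path, color):
    """Keep only the tip locations that correspond to locations on the
    object, each paired with the detected color."""
    return [(loc, color) for loc in tip_path if loc in object_locations]

image = [[(200, 30, 30)] * 5 for _ in range(5)]   # a uniformly red patch
color = detect_tip_color(image, 2, 2)             # -> (200, 30, 30)
marks = virtual_markings({(0, 0), (1, 1)}, [(0, 0), (9, 9), (1, 1)], color)
```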