G02B2027/0198

Method and device for controlling the positioning of a mounted information display device

This method is implemented in a mounted information display device that incorporates a main sensor and an inertial sensor and that determines its positioning by a hybrid inertial method, i.e., by determining a calculated position with a main method using data acquired by the main sensor and then determining a succession of estimated positions using the calculated position and data acquired by the inertial sensor. The method includes: obtaining, at a first calculation time instant T1, a first estimated position of the device at a reference time instant, calculated by the hybrid inertial method; obtaining, at a second time instant T2, a second estimated position of the device at the same reference time instant, calculated by the main method; comparing the difference between the first and second positions against a tolerance threshold; and, if the difference is less than the threshold, validating the positioning calculation by the hybrid inertial method, otherwise raising an alert.
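The validation step described above can be sketched as a simple consistency check. This is a minimal illustration, not the patented method: the function name, the use of Euclidean distance as the "difference", and the 3D-tuple position representation are all assumptions.

```python
import math

def validate_hybrid_positioning(pos_hybrid, pos_main, tolerance):
    """Compare the hybrid-inertial position estimate against the
    main-method estimate for the same reference time instant.

    pos_hybrid, pos_main: (x, y, z) estimates at the reference instant.
    tolerance: maximum allowed Euclidean distance between the estimates.
    Returns True if the hybrid calculation is validated, False if an
    alert should be raised (distance here is an assumed metric).
    """
    diff = math.dist(pos_hybrid, pos_main)
    return diff < tolerance

# A 3 cm discrepancy against a 5 cm tolerance: validated.
ok = validate_hybrid_positioning((1.00, 2.00, 0.50), (1.03, 2.00, 0.50), 0.05)
if not ok:
    print("ALERT: hybrid inertial positioning out of tolerance")
```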

Augmented reality display for macular degeneration
11681146 · 2023-06-20 · ·

Disclosed is a method of providing visual assistance using a head-worn device including one or more display devices and one or more cameras. The method comprises capturing a forward visual field using at least one of the cameras and displaying a portion of the forward visual field in a peripheral field of view using one or more of the display devices. The method may provide improved visual perception for people with macular degeneration. The method may include mapping a central portion of the forward visual field to a near-peripheral field of view, wherein the mapped central portion is displayed in a peripheral field of view using a forward display device of the head-worn device. A portion of the forward visual field may also be displayed in a peripheral field of view using a peripheral display device of the head-worn device.
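The central-to-peripheral mapping could be sketched as a pixel-level remap of a captured frame. This is purely illustrative: the crop fraction, the offset, and the simple copy-paste warp are assumptions, not the patent's mapping.

```python
import numpy as np

def remap_central_to_peripheral(frame, central_frac=0.3, ring_offset=0.25):
    """Illustrative remapping: crop the central portion of a captured
    forward-field frame and duplicate it at a near-peripheral offset,
    where a wearer with macular degeneration retains usable vision.
    central_frac and ring_offset are assumed illustrative parameters.
    """
    h, w = frame.shape[:2]
    ch, cw = int(h * central_frac), int(w * central_frac)
    top, left = (h - ch) // 2, (w - cw) // 2
    central = frame[top:top + ch, left:left + cw].copy()

    out = frame.copy()
    # Paste the central crop at a vertical offset into the near-periphery.
    dst_top = min(h - ch, top + int(h * ring_offset))
    out[dst_top:dst_top + ch, left:left + cw] = central
    return out

frame = np.arange(100 * 100 * 3, dtype=np.uint8).reshape(100, 100, 3)
shifted = remap_central_to_peripheral(frame)
```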

Camera System

A device for MR/VR systems that includes a two-dimensional array of cameras that capture images of respective portions of a scene. The cameras are positioned along a spherical surface so that the cameras have adjacent fields of view. The entrance pupils of the cameras are positioned at or near the user’s eye while the cameras also form optimized images at the sensor. Methods for reducing the number of cameras in an array, as well as methods for reducing the number of pixels read from the array and processed by the pipeline, are also described.

DYNAMIC IMAGE PROCESSING DEVICE FOR HEAD MOUNTED DISPLAY, DYNAMIC IMAGE PROCESSING METHOD FOR HEAD MOUNTED DISPLAY AND HEAD MOUNTED DISPLAY SYSTEM

This dynamic image processing device (20) for a head mounted display includes: an attitude detection means (30) capable of detecting the attitude of an imaging device affixed to the head of a user; a first image deviation amount calculation means (41) that calculates a first image deviation amount (G1) in the yawing and pitching directions of the imaging device based on the detection result of the attitude detection means; a second image deviation amount calculation means (42) that calculates a second image deviation amount (G2) between a past image (52) and a current frame image (51) based on the first image deviation amount, the current frame image captured by the imaging device, and the past image; and an image synthesis means (43) that corrects the past image based on the second image deviation amount and synthesizes the corrected past image with the current frame image.
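The deviation-compensated synthesis can be sketched as: convert the attitude change (the first deviation amount, in yaw/pitch degrees) into a pixel shift (the second deviation amount), shift the past image to re-register it, then blend it with the current frame. The pixels-per-degree factor, the wrap-around shift, and the blend weight are assumptions for illustration.

```python
import numpy as np

def synthesize_frames(past, current, yaw_deg, pitch_deg,
                      px_per_deg=10.0, alpha=0.5):
    """Minimal sketch of deviation-compensated image synthesis.

    yaw_deg/pitch_deg: attitude change between the two frames
    (first deviation amount). The pixel shift derived from them
    stands in for the second deviation amount.
    px_per_deg and alpha are illustrative assumptions.
    """
    dy = int(round(pitch_deg * px_per_deg))   # pitch -> vertical shift
    dx = int(round(yaw_deg * px_per_deg))     # yaw -> horizontal shift
    corrected = np.roll(past, shift=(dy, dx), axis=(0, 1))
    # Blend the re-registered past image with the current frame.
    return (alpha * corrected + (1 - alpha) * current).astype(past.dtype)
```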

Alignment of 3D representations for hologram/avatar control

In various examples there is an apparatus for aligning three-dimensional, 3D, representations of people. The apparatus comprises at least one processor and a memory storing instructions that, when executed by the at least one processor, perform a method comprising accessing a first 3D representation which is an instance of a parametric model of a person; accessing a second 3D representation which is a photoreal representation of the person; computing an alignment of the first and second 3D representations; and computing and storing a hologram from the aligned first and second 3D representations such that the hologram depicts parts of the person which are observed in only one of the first and second 3D representations; or controlling an avatar representing the person where the avatar depicts parts of the person which are observed in only one of the first and second 3D representations.

Inflatable virtual reality headset system
09829711 · 2017-11-28 ·

An inflatable headset system for virtual and augmented reality applications includes multiple inflatable segments that can be inflated or deflated with one or more valves. One inflatable segment either houses a dedicated display device or defines a receptacle for a mobile device. Another inflatable segment includes one or more lenses that are positioned to cooperate with the display of the display device or mobile device when the inflatable segments are at least partially inflated. Preferably, at least one of the inflatable segments has a shape that, when at least partially inflated, acts as a headset frame. The system may further include optional fasteners or straps, optional computer components, optional sensors, and optional input/output devices.

EXTERNAL USER INTERFACE FOR HEAD WORN COMPUTING
20170336872 · 2017-11-23 ·

Head-worn computers may include two physically separated cameras mounted on a front surface that capture movements of a finger of the user of the head-worn computer. The head-worn computer includes an image source adapted to display content in a see-through display of the head-worn computer, wherein the content is positioned to be perceived by a user as positioned on a surface proximate the head-worn computer. A processor of the head-worn computer is adapted to develop a 3D model, based on the captured finger movements and the position of the content, describing an interaction of the finger with the content.
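Locating a fingertip from two physically separated cameras is a standard triangulation problem: each camera yields a 3D ray, and the point is estimated as the midpoint of the shortest segment between the two rays. The sketch below is this textbook technique, not the patent's specific processing; ray origins and directions are assumed already known from calibration.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Estimate a 3D point (e.g. a fingertip) from two camera rays.

    o1, o2: ray origins (camera centers); d1, d2: ray directions.
    Returns the midpoint of the shortest segment between the rays
    (standard closest-point-of-two-lines formula).
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = o1 + s * d1               # closest point on ray 1
    p2 = o2 + t * d2               # closest point on ray 2
    return (p1 + p2) / 2.0
```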

VISUAL-FIELD INFORMATION COLLECTION METHOD AND SYSTEM FOR EXECUTING THE VISUAL-FIELD INFORMATION COLLECTION METHOD
20170336879 · 2017-11-23 ·

A visual-field information collection method is capable of collecting visual-field information of a user wearing a head mounted display (HMD) without imposing a large calculation load on a processor. The method, executed by a processor, includes arranging, in a virtual space in which the user wearing the HMD is immersed, a virtual camera that defines a visual axis for specifying the visual field of the user. The method includes determining the visual axis in accordance with one of a movement of the HMD and a line-of-sight direction of the user. The method includes determining whether or not the visual axis has moved from a predetermined first position to a predetermined second position. The method includes generating movement information indicating that the visual axis has moved when the visual axis moves from the predetermined first position to the predetermined second position.
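The movement-information step can be sketched as scanning a sequence of visual-axis samples for first-to-second transitions. Representing axis positions as discrete region labels is an assumption made here for illustration; the patent does not specify this representation.

```python
def detect_axis_movement(axis_positions, first_region, second_region):
    """Report each time the visual axis moves from a predetermined
    first position to a predetermined second position.

    axis_positions: per-sample region labels for the visual axis
    (an assumed discretization). Returns one movement-information
    record per detected first->second transition.
    """
    events = []
    prev = None
    for t, region in enumerate(axis_positions):
        if prev == first_region and region == second_region:
            events.append({"time": t, "from": first_region, "to": second_region})
        prev = region
    return events

# The axis moves from region "A" to region "B" at samples 2 and 5.
samples = ["A", "A", "B", "B", "A", "B"]
events = detect_axis_movement(samples, "A", "B")
```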

SPATIAL LOCATION PRESENTATION IN HEAD WORN COMPUTING
20230176807 · 2023-06-08 ·

Aspects of the present invention relate to the presentation of digital content, in a see-through display, representing a known location in an environment proximate to a head worn computer.

Image Data Set Alignment for an AR Headset Using Anatomic Structures and Data Fitting
20230169740 · 2023-06-01 ·

A technology is described for aligning an image data set with a patient using an augmented reality (AR) headset. A method may include obtaining an image data set representing an anatomical structure of a patient. A two-dimensional (2D) X-ray generated image of at least a portion of the anatomical structure of the patient in the image data set and a visible marker may be obtained. The image data set can be aligned to the X-ray generated image by using data fitting. A location of the visible marker may be defined in the image data set using alignment with the X-ray generated image. The image data set may be aligned with a body of the patient, using the visible marker in the image data set as referenced to the visible marker seen on the patient through the AR headset.
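A common form of the "data fitting" used to align an image data set with an X-ray generated image is a rigid landmark fit (Kabsch/Procrustes). The 2D sketch below illustrates that generic technique only; the patent's actual fitting procedure is not specified here, and the assumption of known landmark correspondences is mine.

```python
import numpy as np

def rigid_fit_2d(src, dst):
    """Illustrative rigid data fit: find rotation R and translation t
    such that dst_i ~= R @ src_i + t for corresponding landmark points
    (e.g. image-data-set landmarks vs. X-ray landmarks).

    Uses the Kabsch algorithm: SVD of the cross-covariance of the
    mean-centered point sets, with a determinant check to exclude
    reflections.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t
```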