
DISPLAY CONTROL DEVICE AND HEAD-UP DISPLAY DEVICE
20230008648 · 2023-01-12 ·

When a loss of the driver's viewpoint position occurs while viewpoint-position follow-up warping control is being executed to update warping parameters according to the driver's viewpoint position, and the viewpoint position is then re-detected, an immediate parameter update can instantaneously change the appearance of the image and make the driver feel uneasy; this is suppressed. When a viewpoint loss, in which the position of at least one of the right and left viewpoints becomes unclear, is detected, a control unit executing the viewpoint-position follow-up warping control maintains, during the viewpoint loss period, the warping parameters set immediately before that period, and, when the viewpoint position is re-detected after the viewpoint loss period, invalidates at least one warping process that uses the warping parameters corresponding to the re-detected viewpoint position.
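The hold-then-skip behaviour described above can be sketched as a small controller. This is an illustrative reading of the abstract, not the patented implementation; the class and function names (`WarpingController`, `params_for`) are assumptions.

```python
# Sketch: hold warping parameters during viewpoint loss, and invalidate
# (skip) the first parameter update after the viewpoint is re-detected,
# so the image does not change abruptly.

class WarpingController:
    def __init__(self, params_for):
        self.params_for = params_for  # maps a viewpoint position to warping parameters
        self.params = None            # currently applied warping parameters
        self.lost = False             # True while the viewpoint is unclear

    def on_frame(self, viewpoint):
        if viewpoint is None:         # viewpoint loss: keep the last parameters
            self.lost = True
            return self.params
        if self.lost:                 # re-detection: skip this one update
            self.lost = False
            return self.params
        self.params = self.params_for(viewpoint)
        return self.params
```

For example, with `params_for = lambda v: v * 2`, a frame sequence `1, None, 5, 5` yields parameters `2, 2, 2, 10`: the first frame after re-detection still uses the held parameters.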

CAMERA CONTROL USING SYSTEM SENSOR DATA

A method for using cameras in an augmented reality headset is provided. The method includes receiving a signal from a sensor mounted on a headset worn by a user, the signal being indicative of a user intention for capturing an image. The method also includes identifying the user intention for capturing the image based on a model that classifies the signal from the sensor according to the user intention, selecting a first image capturing device in the headset based on a specification of the first image capturing device and the user intention for capturing the image, and capturing the image with the first image capturing device. An augmented reality headset, as well as a memory storing instructions and a processor that executes the instructions to cause the augmented reality headset to perform the method above, are also provided.
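The classify-then-select step might look like the following minimal sketch. The classifier rule, camera list, and threshold are all assumptions standing in for the trained model and device specifications the abstract refers to.

```python
# Sketch: map a sensor signal to a capture intention, then pick the camera
# whose specification best fits that intention.

CAMERAS = [
    {"name": "wide", "fov_deg": 110, "resolution_mp": 8},
    {"name": "tele", "fov_deg": 30,  "resolution_mp": 24},
]

def classify_intention(signal):
    # Stand-in for the trained model: a fast head motion suggests a wide
    # scene capture, a steady gaze suggests a detailed close-up.
    return "scene" if signal["head_speed"] > 0.5 else "detail"

def select_camera(intention):
    # Rank cameras by the specification that matters for this intention.
    key = "fov_deg" if intention == "scene" else "resolution_mp"
    return max(CAMERAS, key=lambda c: c[key])
```

Here a `head_speed` of 0.9 would classify as "scene" and select the wide-field camera, while a steady gaze would select the high-resolution one.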

CONVERTIBLE WAVEGUIDE OPTICAL ENGINE ASSEMBLY FOR HEAD-MOUNTED DEVICE
20230010650 · 2023-01-12 ·

A head-mounted computing device having a convertible waveguide optical engine assembly is disclosed. The waveguide in accordance with aspects herein can be utilized in its transparent configuration, or may be provided with means for blocking light from passing through it, either by mechanical means or by different types of treatments that can switch the waveguide between opaque and transparent states based on an external stimulus, such as, for example, electricity, temperature, light, and the like. Further, the waveguide optical engine assembly has a compact footprint, which is advantageous for head-mounted computing devices. In addition to its compact footprint, the configuration of the waveguide optical assembly, as disclosed, allows for maximizing the advantages provided by the waveguide with respect to eye box and eye relief.

Integration of a two-dimensional input device into a three-dimensional computing environment

A workstation enables operation of a 2D input device with a 3D interface. A cursor position engine determines the 3D position of a cursor controlled by the 2D input device as the cursor moves within a 3D scene displayed on a 3D display. The cursor position engine determines the 3D position of the cursor for a current frame of the 3D scene based on a current user viewpoint, a current mouse movement, a CD gain value, a Voronoi diagram, and an interpolation algorithm, such as the Laplacian algorithm. A CD gain engine computes CD gain optimized for the 2D input device operating with the 3D interface. The CD gain engine determines the CD gain based on specifications for the 2D input device and the 3D display. The techniques performed by the cursor position engine and the techniques performed by the CD gain engine can be performed separately or in conjunction.
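A minimal sketch of applying a CD (control-display) gain to 2D mouse motion follows. The gain formula here is an assumption; the abstract says only that the gain is derived from the specifications of the 2D input device and the 3D display, and the lift from 2D motion to a 3D position (via the Voronoi diagram and Laplacian interpolation) is not reproduced here.

```python
# Sketch: a hypothetical CD gain from device specifications, applied to
# mouse deltas in a surface's local 2D frame.

def cd_gain(mouse_dpi, display_ppi, scene_scale=1.0):
    # Hypothetical gain: ratio of display resolution to mouse resolution,
    # scaled by the apparent size of the 3D scene.
    return (display_ppi / mouse_dpi) * scene_scale

def move_cursor(pos, mouse_delta, gain):
    # 2D step; a Voronoi diagram plus an interpolation step (e.g. the
    # Laplacian algorithm) would then map this position into the 3D scene.
    dx, dy = mouse_delta
    return (pos[0] + gain * dx, pos[1] + gain * dy)
```

For instance, an 800 DPI mouse on a 100 PPI display gives a gain of 0.125, so an 8-count mouse step moves the cursor one display unit.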

Determination of position of a head-mounted device on a user

There is provided a method and system for determining if a head-mounted device for extended reality (XR) is correctly positioned on a user, and optionally performing a position correction procedure if the head-mounted device is determined to be incorrectly positioned on the user. Embodiments include: performing eye tracking by estimating, based on a first image of a first eye of the user, a position of a pupil in two dimensions; determining whether the estimated position of the pupil of the first eye is within a predetermined allowable area in the first image; and, if the determined position of the pupil of the first eye is inside the predetermined allowable area, concluding that the head-mounted device is correctly positioned on the user; or, if the determined position of the pupil of the first eye is outside the predetermined allowable area, concluding that the head-mounted device is incorrectly positioned on the user.
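The positioning check reduces to testing the estimated 2D pupil position against the predetermined allowable area. The rectangular shape of the area below is an assumption; the abstract does not fix the area's geometry.

```python
# Sketch: is the pupil centre (in image coordinates) inside the
# predetermined allowable area of the eye-tracking image?

def is_correctly_positioned(pupil_xy, allowed):
    x, y = pupil_xy
    x0, y0, x1, y1 = allowed  # allowable area as (left, top, right, bottom)
    return x0 <= x <= x1 and y0 <= y <= y1
```

If the check fails, the device would be concluded to be incorrectly positioned, optionally triggering the position correction procedure.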

OPTICAL SYSTEMS AND METHODS FOR PREDICTING FIXATION DISTANCE

Head-mounted display systems may include an eye-tracking subsystem and a fixation distance prediction subsystem. The eye-tracking subsystem may be configured to determine at least a gaze direction of a user's eyes and an eye movement speed of the user's eyes. The fixation distance prediction subsystem may be configured to predict, based on the eye movement speed and the gaze direction of the user's eyes, a fixation distance at which the user's eyes will become fixated prior to the user's eyes reaching a fixation state associated with the predicted fixation distance. Additional methods, systems, and devices are also disclosed.
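One plausible ingredient of such a prediction is the vergence geometry of the two gaze directions: where the eyes are converging indicates the distance at which they will fixate. The symmetric-convergence formula below is an illustrative assumption, not the patented method, which also uses eye movement speed to predict the fixation point before the eyes settle.

```python
# Sketch: distance to the vergence point for symmetric convergence of the
# two eyes, given the interpupillary distance (IPD) and each eye's inward
# rotation angle from straight ahead.
import math

def fixation_distance(ipd_m, left_angle_rad, right_angle_rad):
    vergence = left_angle_rad + right_angle_rad   # total convergence angle
    return (ipd_m / 2) / math.tan(vergence / 2)   # distance in metres
```

With a 64 mm IPD and 0.016 rad of inward rotation per eye, this yields a fixation distance of about 2 m.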

System for gaze interaction

The present invention provides improved methods and systems for assisting a user in interacting with a graphical user interface by combining gaze-based input with gesture-based user commands. The present invention provides systems, devices and methods that enable a user of a computer system without a traditional touch-screen to interact with graphical user interfaces in a touch-screen-like manner using a combination of gaze-based input and gesture-based user commands. Furthermore, the present invention offers touch-screen-like interaction using gaze input and gesture-based input as a complement or an alternative to touch-screen interaction with a computer device having a touch-screen, such as, for instance, in situations where interaction with the regular touch-screen is cumbersome or ergonomically challenging. Further, the present invention provides systems, devices and methods for combined gaze- and gesture-based interaction with graphical user interfaces to achieve a touch-screen-like environment in computer systems without a traditional touch-screen, or in computer systems having a touch-screen that is arranged ergonomically unfavourably for the user, or arranged such that it is more comfortable for the user to use gesture and gaze for the interaction than the touch-screen.

ELECTRONIC CIRCUIT INTEGRATION TO SMART GLASSES FOR ENHANCED REALITY APPLICATIONS

A device including a frame and an eyepiece is provided. The eyepiece includes a front glass, a rear glass, and an active element sandwiched between the front glass and the rear glass. The active element is electrically activated, via an interconnect, by a flex circuit enclosed between a top portion of the frame and a cap, the flex circuit including a memory and a processor. A method for assembling the above device is also provided.

Methods and Systems for Displaying Eye Images to Subjects and for Interacting with Virtual Objects
20230045213 · 2023-02-09 ·

A processing subsystem generates perceived images from information-bearing nerve impulses that are transmitted from a subject's eye(s) to a visual processing region of the subject's brain along one or more nerves in response to the subject viewing a real-world scene. The processing subsystem generates display images based on the perceived images, and controls a display device to display the display images to the subject. In certain embodiments, the processing subsystem generates the display images by manipulating or modifying the perceived images to include virtual images, and provides a type of virtual pointing on the display images that is used to invoke one or more actions.

Virtually modeling clothing based on 3D models of customers

Three-dimensional models (or avatars) may be defined based on imaging data captured from a customer. The avatars may be based on a virtual mannequin having one or more dimensions in common with the customer, a body template corresponding to the customer, or imaging data captured from the customer. The avatars are displayed on displays or in user interfaces and used for any purpose, such as to depict how clothing will appear or behave while worn by the customer, alone or with other clothing. Customers may drag and drop images of clothing onto the avatars. One or more of the avatars may be displayed on any display, such as a monitor or a virtual reality headset, which may depict the avatars in a static or dynamic mode. Images of avatars and clothing may be used to generate print catalogs depicting the appearance or behavior of the clothing while worn by the customer.