Patent classifications
G02B27/017
Devices, systems and methods for predicting gaze-related parameters using a neural network
A method for creating and updating a database is disclosed. In one example, the method includes presenting a first stimulus to a first user wearing a head-wearable device and, when the first user is expected to respond to the first stimulus or expected to have responded to the first stimulus, using a first camera of the head-wearable device to generate a first left image of at least a portion of the left eye of the first user and using a second camera of the head-wearable device to generate a first right image of at least a portion of the right eye of the first user. A data connection is established between the head-wearable device and the database. A first dataset is generated comprising the first left image, the first right image, and a first representation of a gaze-related parameter, the first representation being correlated with the first stimulus, and the first dataset is added to the database.
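As an illustration of the dataset-assembly step this abstract describes, the sketch below pairs a left-eye and a right-eye image with a stimulus-derived label and appends the record to a database. The record layout, the list-backed database, and the use of the stimulus position as the gaze-related representation are assumptions for illustration, not the patent's specification.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GazeRecord:
    """One training record: paired eye images plus the stimulus-derived label."""
    left_image: bytes            # image from the first (left-eye) camera
    right_image: bytes           # image from the second (right-eye) camera
    gaze_parameter: Tuple[float, float]  # representation correlated with the
                                         # stimulus, e.g. its screen position

def add_record(database: List[GazeRecord],
               left_image: bytes,
               right_image: bytes,
               stimulus_position: Tuple[float, float]) -> None:
    # The stimulus position stands in for the "first representation of a
    # gaze-related parameter" correlated with the presented stimulus.
    database.append(GazeRecord(left_image, right_image, stimulus_position))

# Usage: images would come from the head-wearable device's two cameras,
# captured while the user is expected to be responding to the stimulus.
db: List[GazeRecord] = []
add_record(db, b"<left eye frame>", b"<right eye frame>", (0.25, 0.75))
```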
VIRTUAL REALITY SIMULATOR AND VIRTUAL REALITY SIMULATION PROGRAM
A VR (Virtual Reality) simulator projects or displays a virtual space image on a screen that is installed at a position distant from a user in a real space and does not move integrally with the user. More specifically, the VR simulator acquires a real user position, which is the position of the user's head in the real space. The VR simulator then acquires a virtual user position, which is the position in a virtual space corresponding to the real user position. The VR simulator acquires the virtual space image by imaging the virtual space using a camera placed at the virtual user position, based on virtual space configuration information indicating a configuration of the virtual space. Here, the VR simulator acquires the virtual space image such that a vanishing point exists in a horizontal direction as viewed from the virtual user position.
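The real-to-virtual position mapping and camera placement the abstract walks through can be sketched as follows. The coordinate frames, the scale factor, and the reading of the vanishing-point constraint as a zero camera pitch (a level optical axis keeps the vanishing point horizontal) are all assumptions made for illustration.

```python
import numpy as np

def to_virtual_position(real_head_pos: np.ndarray,
                        origin_offset: np.ndarray,
                        scale: float = 1.0) -> np.ndarray:
    """Map the tracked real-space head position to the virtual user position."""
    return scale * (real_head_pos - origin_offset)

def camera_pose(virtual_user_pos: np.ndarray, yaw_deg: float) -> dict:
    # Keeping the camera's pitch at zero keeps its optical axis horizontal,
    # so the vanishing point lies in a horizontal direction as viewed from
    # the virtual user position (one way to read the abstract's constraint).
    return {"position": virtual_user_pos, "yaw": yaw_deg, "pitch": 0.0}

head = np.array([1.2, 1.6, 0.4])  # metres, tracked head position in real space
pose = camera_pose(to_virtual_position(head, np.zeros(3)), yaw_deg=30.0)
```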
CAMERA CONTROL USING SYSTEM SENSOR DATA
A method for using cameras in an augmented reality headset is provided. The method includes receiving a signal from a sensor mounted on a headset worn by a user, the signal being indicative of a user intention for capturing an image. The method also includes identifying the user intention for capturing the image, based on a model to classify the signal from the sensor according to the user intention, selecting a first image capturing device in the headset based on a specification of the first image capturing device and the user intention for capturing the image, and capturing the image with the first image capturing device. An augmented reality headset, a memory storing instructions, and a processor that executes the instructions to cause the augmented reality headset to perform the above method are also provided.
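A toy version of the described selection logic: a stub classifier maps the sensor signal to an intent label, and the camera whose specification best fits that intent is chosen. The intent labels, camera specs, and matching rule below are invented for illustration; the patent's model and sensors are not reproduced here.

```python
from dataclasses import dataclass
from typing import Sequence

@dataclass
class CameraSpec:
    name: str
    field_of_view_deg: float
    resolution_mp: float

def classify_intent(sensor_signal: Sequence[float]) -> str:
    # Stand-in for the trained model: a real system would classify IMU or
    # eye-tracking signals; here a simple energy threshold picks the label.
    return "wide_scene" if sum(abs(s) for s in sensor_signal) > 1.0 else "detail_shot"

def select_camera(intent: str, cameras: Sequence[CameraSpec]) -> CameraSpec:
    # Match the intent to a camera specification: wide scenes favour field
    # of view, detail shots favour resolution.
    key = (lambda c: c.field_of_view_deg) if intent == "wide_scene" \
          else (lambda c: c.resolution_mp)
    return max(cameras, key=key)

cams = [CameraSpec("ultrawide", 120.0, 8.0), CameraSpec("main", 70.0, 48.0)]
chosen = select_camera(classify_intent([0.2, 0.1]), cams)  # -> "main"
```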
CONVERTIBLE WAVEGUIDE OPTICAL ENGINE ASSEMBLY FOR HEAD-MOUNTED DEVICE
A head-mounted computing device having a convertible waveguide optical engine assembly is disclosed. The waveguide in accordance with aspects herein can be utilized in its transparent configuration, or may be provided with means for blocking light from passing through it, either by mechanical means or by different types of treatments that can switch the waveguide between opaque and transparent states based on an external stimulus such as, for example, electricity, temperature, or light. Further, the waveguide optical engine assembly has a compact footprint, which is advantageous for head-mounted computing devices. In addition to this compact footprint, the configuration of the waveguide optical engine assembly, as disclosed, allows the advantages provided by the waveguide with respect to eye box and eye relief to be maximized.
Determination of position of a head-mounted device on a user
There is provided a method and system for determining whether a head-mounted device for extended reality (XR) is correctly positioned on a user, and optionally performing a position correction procedure if the head-mounted device is determined to be incorrectly positioned on the user. Embodiments include: performing eye tracking by estimating, based on a first image of a first eye of the user, a position of a pupil in two dimensions; determining whether the estimated position of the pupil of the first eye is within a predetermined allowable area in the first image; and, if the estimated position of the pupil of the first eye is inside the predetermined allowable area, concluding that the head-mounted device is correctly positioned on the user, or, if the estimated position of the pupil of the first eye is outside the predetermined allowable area, concluding that the head-mounted device is incorrectly positioned on the user.
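The decision rule itself is simple enough to sketch directly. The allowable area is modeled here as an axis-aligned rectangle in normalized image coordinates, which is one plausible reading of the abstract; the bounds are invented for illustration.

```python
from typing import Tuple

def headset_correctly_positioned(
        pupil_xy: Tuple[float, float],
        allowed: Tuple[float, float, float, float] = (0.3, 0.3, 0.7, 0.7)
) -> bool:
    """Return True if the estimated 2-D pupil position falls inside the
    predetermined allowable area (x_min, y_min, x_max, y_max), given in
    normalized image coordinates."""
    x, y = pupil_xy
    x_min, y_min, x_max, y_max = allowed
    return x_min <= x <= x_max and y_min <= y <= y_max

# A pupil estimated near the image edge implies the device sits askew:
headset_correctly_positioned((0.92, 0.5))   # -> False, trigger correction
headset_correctly_positioned((0.48, 0.55))  # -> True
```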
Utilizing dual cameras for continuous camera capture
An eyewear device adjusts an on time and an off time of a pair of cameras to control heating of the cameras and of the eyewear device. Each of the pair of cameras has a duty cycle determining when the respective camera is on and off. A camera control chart contains the duty cycles. The eyewear may include a temperature sensor such that the on and off times of the cameras are a function of the temperature measured by the sensor.
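The camera control chart and its temperature dependence might look like the following; the temperature bands and duty-cycle values are invented for illustration. With a pair of cameras, the second camera's on time can be scheduled into the first camera's off time so capture stays continuous while each camera cools.

```python
from typing import Tuple

# Hypothetical camera control chart: temperature band -> (on_s, off_s) duty cycle.
CONTROL_CHART = [
    (40.0, (5.0, 1.0)),   # below 40 C: mostly on
    (55.0, (3.0, 3.0)),   # 40-55 C: balanced duty cycle
    (70.0, (1.0, 5.0)),   # 55-70 C: mostly off to shed heat
]

def duty_cycle(temperature_c: float) -> Tuple[float, float]:
    """Look up on/off times for the current sensed temperature."""
    for limit, cycle in CONTROL_CHART:
        if temperature_c < limit:
            return cycle
    return (0.0, 10.0)  # above the chart: keep the camera off

duty_cycle(48.5)  # -> (3.0, 3.0)
```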
Mixed-reality surgical system with physical markers for registration of virtual models
An example method includes obtaining a virtual model of a portion of an anatomy of a patient from a virtual surgical plan for an orthopedic joint repair surgical procedure to attach a prosthetic to the anatomy; identifying, based on data obtained by one or more sensors, positions of one or more physical markers positioned relative to the anatomy of the patient; and registering, based on the identified positions, the virtual model of the portion of the anatomy with a corresponding observed portion of the anatomy.
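Registering a virtual model to observed marker positions is a classic rigid-alignment problem; one common approach (not necessarily the patent's) is the Kabsch/Procrustes solution over corresponding marker points, sketched below with invented marker coordinates.

```python
import numpy as np

def rigid_register(model_pts: np.ndarray, observed_pts: np.ndarray):
    """Kabsch alignment: find rotation R and translation t mapping the
    virtual model's marker positions onto the observed marker positions."""
    mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (observed_pts - oc)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = oc - R @ mc
    return R, t

model = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0.]])  # markers, model frame
rot_z = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.]])   # 90-degree yaw
obs = model @ rot_z.T + np.array([5.0, 2.0, 1.0])       # observed positions
R, t = rigid_register(model, obs)  # recovers the yaw and the offset
```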
DEVICE AND METHOD FOR FOVEATED RENDERING
A display driver includes image processing circuitry and drive circuitry. The image processing circuitry is configured to receive a foveal image, a full frame image, and coordinate data that specifies a position of the foveal image in the full frame image. The image processing circuitry is further configured to render a resulting image based on the full frame image independently of the foveal image in response to detection of a data error within the coordinate data. The drive circuitry is configured to drive a display panel based on the resulting image.
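A sketch of the compositing-with-fallback behaviour: the foveal image is overlaid on the full frame at the given coordinates unless the coordinate data fails a validity check, in which case the full frame is rendered alone. The simple bounds test here stands in for whatever integrity check the actual display driver applies.

```python
import numpy as np

def render(full_frame: np.ndarray, foveal: np.ndarray, coord) -> np.ndarray:
    """Overlay the high-detail foveal image onto the full frame at (x, y);
    on a coordinate data error, fall back to the full frame alone."""
    h, w = full_frame.shape[:2]
    fh, fw = foveal.shape[:2]
    x, y = coord
    # Detected data error: the coordinates would place the foveal image
    # (partly) outside the full frame, so render independently of it.
    if not (0 <= x <= w - fw and 0 <= y <= h - fh):
        return full_frame
    out = full_frame.copy()
    out[y:y + fh, x:x + fw] = foveal
    return out

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
fovea = np.full((256, 256, 3), 255, dtype=np.uint8)
image = render(frame, fovea, (800, 400))   # composited
image = render(frame, fovea, (5000, 400))  # error -> full frame only
```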
AN EYE TRACKING DEVICE AND METHOD THEREOF
The present invention relates to a stereo eye tracking technique. The eye tracking device comprises a processing unit configured and operable for receiving at least one image indicative of a user's eye; identifying in the image first data indicative of pupil parameters; receiving second data indicative of an alternative eye-tracking measurement, wherein the second data is more accurate than the first data; and correlating the first and second data to determine a three-dimensional position and gaze direction of the user's eye.
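One way to read the correlation step is as a fusion of a fast but coarse pupil-based estimate with a slower, more accurate reference signal. The complementary weighting below is an illustrative stand-in for the patent's method, with invented direction vectors and weight.

```python
import numpy as np

def fuse_gaze(pupil_dir: np.ndarray,
              reference_dir: np.ndarray,
              reference_weight: float = 0.8) -> np.ndarray:
    """Blend the pupil-derived gaze direction with the more accurate
    alternative estimate, then renormalize to a unit direction vector."""
    fused = (1.0 - reference_weight) * pupil_dir + reference_weight * reference_dir
    return fused / np.linalg.norm(fused)

pupil = np.array([0.10, 0.02, 0.99])      # from pupil-parameter detection
reference = np.array([0.07, 0.00, 1.00])  # from the alternative tracker
gaze = fuse_gaze(pupil / np.linalg.norm(pupil),
                 reference / np.linalg.norm(reference))
```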
System for gaze interaction
The present invention provides improved methods and systems for assisting a user when interacting with a graphical user interface by combining gaze based input with gesture based user commands. The present invention provides systems, devices and methods that enable a user of a computer system without a traditional touch-screen to interact with graphical user interfaces in a touch-screen like manner using a combination of gaze based input and gesture based user commands. Furthermore, the present invention offers a solution for touch-screen like interaction using gaze input and gesture based input as a complement or an alternative to touch-screen interactions with a computer device having a touch-screen, for instance in situations where interaction with the regular touch-screen is cumbersome or ergonomically challenging. Further, the present invention provides systems, devices and methods for combined gaze and gesture based interaction with graphical user interfaces to achieve a touch-screen like environment in computer systems without a traditional touch-screen, in computer systems having a touch-screen arranged ergonomically unfavourably for the user, or with a touch-screen arranged such that it is more comfortable for the user to use gesture and gaze for the interaction than the touch-screen.
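Combining gaze and gesture as described amounts to resolving a target from the gaze point and an action from the gesture, much as a touch event carries both at once. The widget model, event names, and dispatch logic below are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional, Sequence, Tuple

@dataclass
class Widget:
    name: str
    bounds: Tuple[float, float, float, float]  # x_min, y_min, x_max, y_max

def widget_at(gaze_xy: Tuple[float, float],
              widgets: Sequence[Widget]) -> Optional[Widget]:
    """The gaze point selects the target, as a touch point would."""
    x, y = gaze_xy
    for w in widgets:
        x0, y0, x1, y1 = w.bounds
        if x0 <= x <= x1 and y0 <= y <= y1:
            return w
    return None

def handle(gaze_xy: Tuple[float, float], gesture: str,
           widgets: Sequence[Widget]) -> str:
    # The gesture supplies the command a touch event would have carried.
    target = widget_at(gaze_xy, widgets)
    if target is None:
        return "no target under gaze"
    return f"{gesture} on {target.name}"  # e.g. dispatch tap/scroll here

ui = [Widget("play_button", (100, 100, 200, 150))]
handle((150, 120), "pinch", ui)  # -> "pinch on play_button"
```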