Patent classifications
G02B2027/014
Reducing head mounted display power consumption and heat generation through predictive rendering of content
Systems, methods, and non-transitory computer-readable media are disclosed for selectively rendering augmented reality content based on predictions regarding a user's ability to visually process the augmented reality content. For instance, the disclosed systems can identify eye tracking information for a user at an initial time. Moreover, the disclosed systems can predict a change in an ability of the user to visually process an augmented reality element at a future time based on the eye tracking information. Additionally, the disclosed systems can selectively render the augmented reality element at the future time based on the predicted change in the ability of the user to visually process the augmented reality element.
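The abstract above describes predicting a user's ability to visually process content from eye tracking data and rendering selectively on that basis. A minimal illustrative sketch follows; the velocity threshold, function names, and the use of saccadic suppression as the "predicted change in ability" are assumptions for illustration, not taken from the patent.

```python
# Hypothetical sketch: skip rendering a frame when gaze velocity predicts a
# saccade, during which the user cannot visually process the AR element.
SACCADE_VELOCITY_THRESHOLD = 300.0  # deg/s; an assumed suppression threshold

def predict_gaze_velocity(samples, dt):
    """Estimate angular gaze velocity (deg/s) from the last two gaze samples."""
    (x0, y0), (x1, y1) = samples[-2], samples[-1]
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt

def should_render(samples, dt):
    """Render only when the user is predicted able to process the element."""
    if len(samples) < 2:
        return True  # not enough history to predict; render conservatively
    return predict_gaze_velocity(samples, dt) < SACCADE_VELOCITY_THRESHOLD

# fixation: slow drift, render the element
assert should_render([(0.0, 0.0), (0.1, 0.0)], dt=0.01)
# saccade: fast sweep, skip the frame to save power and heat
assert not should_render([(0.0, 0.0), (10.0, 0.0)], dt=0.01)
```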
Systems, methods, and graphical user interfaces for updating display of a device relative to a user's body
An electronic device, while the electronic device is worn over a predefined portion of the user's body, displays, via a display generation component arranged on the electronic device opposite the predefined portion of the user's body, a graphical representation of an exterior view of a body part that corresponds to the predefined portion of the user's body. The electronic device detects a change in position of the electronic device with respect to the predefined portion of the user's body. In response to detecting the change in position, the electronic device modifies the graphical representation of the exterior view of the body part that corresponds to the predefined portion of the user's body in accordance with the detected change in position of the electronic device with respect to the predefined portion of the user's body.
Image display device and reboot method for image display device
An HMD including an image display unit that notifies detection of an abnormality, a DP six-axis sensor that detects whether the image display unit is mounted, and a DP control unit that executes a first reboot mode for rebooting the HMD in accordance with a first order when mounting of the image display unit is not detected and the abnormality of the HMD is detected, and a second reboot mode for rebooting the HMD in accordance with a second order when mounting of the image display unit is detected and the abnormality of the HMD is detected. The order of reboot of the image display unit in the second order is set before the order of reboot of the image display unit in the first order.
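The reboot logic above amounts to choosing a subsystem reboot sequence from two conditions. The following sketch is an assumption-laden illustration; the subsystem names and the intuition for why the display unit reboots earlier when the HMD is worn (so the wearer gets visual feedback sooner) are invented, not stated in the patent.

```python
# Illustrative two-mode reboot ordering; subsystem names are assumptions.
def reboot_order(mounted: bool, abnormal: bool):
    """Return the reboot sequence for the HMD, or None if no abnormality."""
    if not abnormal:
        return None
    if mounted:
        # second reboot mode: image display unit comes earlier in the order
        return ["image_display_unit", "sensors", "controller"]
    # first reboot mode: image display unit comes later in the order
    return ["controller", "sensors", "image_display_unit"]

assert reboot_order(True, True)[0] == "image_display_unit"
assert reboot_order(False, True)[-1] == "image_display_unit"
assert reboot_order(True, False) is None
```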
Method and device for refraction adjustment, and augmented reality apparatus
A method and device for refraction adjustment in an augmented reality apparatus, and an augmented reality apparatus. The method for refraction adjustment includes: receiving light rays reflected from the eyes of a user wearing an augmented reality apparatus; determining a pupil distance of the user according to the reflected light rays; and generating a refraction correction signal according to the pupil distance of the user and a desired diopter, for correcting the diopters of the user's eyes by means of a refraction adjustment element.
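The three steps above can be sketched as a simple pipeline. Everything here is an illustrative assumption: the 2D pupil-center representation, the actuator-step mapping, and the `steps_per_diopter` constant are not from the patent.

```python
# Hedged sketch of the refraction-adjustment pipeline described above.
def pupil_distance_mm(left_pupil, right_pupil):
    """Pupil distance (mm) from per-eye pupil centers estimated
    from the reflected light rays (estimation itself is stubbed)."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    return (dx * dx + dy * dy) ** 0.5

def correction_signal(pd_mm, desired_diopter, steps_per_diopter=40):
    """Map the desired diopter to actuator steps driving the
    refraction adjustment element for each eye."""
    steps = round(desired_diopter * steps_per_diopter)
    return {"pupil_distance_mm": pd_mm, "left_steps": steps, "right_steps": steps}

pd = pupil_distance_mm((0.0, 0.0), (62.0, 0.0))
signal = correction_signal(pd, desired_diopter=-2.5)
assert pd == 62.0 and signal["left_steps"] == -100
```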
Systems and methods for controlling virtual scene perspective via physical touch input
Systems, methods, and non-transitory computer readable media for controlling perspective in an extended reality environment are disclosed. In one embodiment, a non-transitory computer readable medium contains instructions to cause a processor to perform the steps of: outputting for presentation via a wearable extended reality appliance (WER-appliance), first display signals reflective of a first perspective of a scene; receiving first input signals caused by a first multi-finger interaction with a touch sensor; in response, outputting for presentation via the WER-appliance second display signals to modify the first perspective of the scene, causing a second perspective of the scene to be presented via the WER-appliance; receiving second input signals caused by a second multi-finger interaction with the touch sensor; and in response, outputting for presentation via the WER-appliance third display signals to modify the second perspective of the scene, causing a third perspective of the scene to be presented via the WER-appliance.
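The first/second/third perspective sequence above can be illustrated as successive gesture applications. The gesture vocabulary, the yaw/pitch/distance perspective representation, and the scaling constants are all assumptions made for this sketch.

```python
# Illustrative mapping from multi-finger touch interactions to perspective changes.
def apply_gesture(perspective, gesture):
    """Return a new perspective after one multi-finger interaction."""
    p = dict(perspective)
    if gesture["type"] == "two_finger_drag":
        p["yaw"] += gesture["dx"] * 0.5    # orbit the scene horizontally
        p["pitch"] += gesture["dy"] * 0.5  # and vertically
    elif gesture["type"] == "pinch":
        p["distance"] *= gesture["scale"]  # zoom toward or away from the scene
    return p

first = {"yaw": 0.0, "pitch": 0.0, "distance": 2.0}
second = apply_gesture(first, {"type": "two_finger_drag", "dx": 40, "dy": 0})
third = apply_gesture(second, {"type": "pinch", "scale": 0.5})
assert second["yaw"] == 20.0
assert third["distance"] == 1.0
```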
Measurement method and system
Methods and systems for determining an individual gaze value are disclosed herein. An exemplary method involves: (a) receiving gaze data for a first wearable computing device, wherein the gaze data is indicative of a wearer-view associated with the first wearable computing device, and wherein the first wearable computing device is associated with a first user-account; (b) analyzing the gaze data from the first wearable computing device to detect one or more occurrences of one or more advertisement spaces in the gaze data; (c) based at least in part on the one or more detected advertisement-space occurrences, determining an individual gaze value for the first user-account; and (d) sending a gaze-value indication, wherein the gaze-value indication indicates the individual gaze value for the first user-account.
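Steps (a) through (d) above reduce to aggregating detected advertisement-space occurrences into a per-account value. In this sketch the detection step (computer vision over the wearer-view gaze data) is stubbed out as pre-detected dwell intervals, and the flat pricing rate is an invented assumption.

```python
# Sketch of computing an individual gaze value from detected ad-space occurrences.
def individual_gaze_value(occurrences, rate_per_second=0.01):
    """Sum dwell time across (start, end) occurrence intervals, priced flat."""
    total_dwell = sum(end - start for start, end in occurrences)
    return round(total_dwell * rate_per_second, 4)

# two detected advertisement-space occurrences: 3 s and 5 s of dwell
assert individual_gaze_value([(10.0, 13.0), (40.0, 45.0)]) == 0.08
```

A real detector would emit these intervals by matching known advertisement spaces against wearer-view frames; only the aggregation step is shown here.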
Vehicular video camera display system
A vehicular video camera display system includes an interior rearview mirror assembly having a casing and an electro-optic reflective element, with a video display device disposed in the casing behind the electro-optic reflective element. With the interior rearview mirror assembly mounted at the interior cabin portion of the vehicle, a video display screen of the video display device is operable to display video images that are viewable through the electro-optic reflective element by a driver of the vehicle. A rearward-viewing video camera is disposed at a rear portion of the vehicle and views at least rearward of the vehicle. Control circuitry is disposed at the interior rearview mirror assembly. Image data captured by the rearward-viewing video camera is communicated from the rearward-viewing video camera via a twisted pair wire to the control circuitry disposed at the interior rearview mirror assembly.
Gaze tracking apparatus and systems
A system configured to perform an eye tracking process using a head-mountable eye-tracking arrangement, the system comprising an eye tracking unit, located on the head-mountable arrangement, operable to detect motion of one or both of the user's eyes, a relative motion identification unit operable to identify motion of the head-mountable arrangement relative to the user's head, and a correction unit operable to determine a correction to the eye tracking process in dependence upon the identified motion of the head-mountable arrangement relative to the user's head.
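The correction described above can be illustrated as subtracting the identified headset slippage from the raw eye-motion signal, leaving eye-in-head motion only. The 2D angular representation and the simple subtractive model are assumptions for this sketch.

```python
# Hedged sketch: correct raw eye tracking for motion of the headset on the head.
def corrected_gaze(raw_gaze_delta, headset_delta):
    """Remove apparent gaze motion caused by headset slippage."""
    return (raw_gaze_delta[0] - headset_delta[0],
            raw_gaze_delta[1] - headset_delta[1])

# headset slips 1 deg; raw tracker reports 3 deg, so true eye motion is 2 deg
assert corrected_gaze((3.0, 0.0), (1.0, 0.0)) == (2.0, 0.0)
```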
Display latency reduction
A display device dynamically determines pixel settle times to reduce display latency. The display device includes a backlight unit (BLU) for providing light for displaying an image, a plurality of pixels for modulating the light provided by the BLU, and a controller circuit for controlling the BLU and the plurality of pixels. The controller circuit determines a settle time from display data for a current display frame and display data for a previous display frame, and turns on the BLU based on the determined settle time. The determined settle time corresponds to an expected amount of time for the plurality of pixels to transition from a first state corresponding to the display data for the previous display frame to a second state corresponding to the display data for the current display frame.
Head mounted display apparatus
A head mounted display apparatus or the like faithfully expresses occlusion even in binocular vision in an AR display. A head mounted display apparatus 10 includes a lens 12, a lens 13, a camera 14, a camera 15, and a control processor 16. A CG image for a right eye is displayed on the lens 12. A CG image for a left eye is displayed on the lens 13. The camera 14 captures an image for the right eye. The camera 15 captures an image for the left eye. The control processor 16 generates the CG image for the right eye in which occlusion at the time of seeing by the right eye is expressed and the CG image for the left eye in which occlusion at the time of seeing by the left eye is expressed, based on the images captured by the cameras 14 and 15, and projects the generated CG image for the right eye and CG image for the left eye onto the lenses 12 and 13. A center of a lens of the camera 14 is provided at the same position as a center of the lens 12. A center of a lens of the camera 15 is provided at the same position as a center of the lens 13.
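Per-eye occlusion as described above can be illustrated with a depth test: a CG pixel is drawn for an eye only where the CG surface is nearer than the real scene seen by that eye's camera. The flat per-pixel data layout and depth recovery from the camera images are assumptions for this sketch.

```python
# Minimal per-eye occlusion sketch: compare CG depth against real-scene depth
# recovered from the corresponding eye's camera, pixel by pixel.
def composite_eye(real_depth, cg_depth, cg_color):
    """Return per-pixel CG colors, with None where the real scene occludes CG."""
    return [c if d_cg < d_real else None
            for d_real, d_cg, c in zip(real_depth, cg_depth, cg_color)]

# middle pixel: real object at 0.5 is nearer than CG at 0.9, so CG is occluded
right_eye = composite_eye([1.0, 0.5, 2.0], [0.8, 0.9, 0.9], ["r0", "r1", "r2"])
assert right_eye == ["r0", None, "r2"]
```

Running this independently per eye, with each camera centered on its lens as the abstract specifies, yields occlusion consistent between the two eyes' viewpoints.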