Sliced encoding and decoding for remote rendering
Disclosed herein are a device and a method of remotely rendering an image. In one approach, a device divides an image of an artificial reality space into a plurality of slices. In one approach, the device encodes a first slice of the plurality of slices. In one approach, the device encodes a portion of a second slice of the plurality of slices while the device encodes a portion of the first slice. In one approach, the device transmits the encoded first slice of the plurality of slices to a head wearable display. In one approach, the device transmits the encoded second slice of the plurality of slices to the head wearable display while the device transmits a portion of the encoded first slice to the head wearable display.
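The pipelining described above, in which encoding of the next slice and transmission of already-encoded slices overlap in time, can be sketched as follows. This is an illustrative assumption, not the patent's implementation; `encode_slice` and `send_slice` are hypothetical placeholders for a real per-slice encoder and a link to the head-wearable display.

```python
import queue
import threading

sent = []  # records transmitted slices (stand-in for the display link)

def encode_slice(slice_pixels):
    # Placeholder for a real per-slice video encoder.
    return bytes(slice_pixels)

def send_slice(encoded):
    # Placeholder for transmission to the head-wearable display.
    sent.append(encoded)

def render_frame(frame_slices):
    encoded_q = queue.Queue()

    def transmitter():
        while True:
            item = encoded_q.get()
            if item is None:          # sentinel: all slices queued
                break
            send_slice(item)

    tx = threading.Thread(target=transmitter)
    tx.start()
    for s in frame_slices:            # encoding overlaps with transmission
        encoded_q.put(encode_slice(s))
    encoded_q.put(None)
    tx.join()
```

Because the transmitter thread drains the queue while the main thread is still encoding later slices, an encoded slice can be in flight before the whole frame is encoded, which is the latency benefit the abstract describes.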
System and method for capturing a spatial orientation of a wearable device
A system and method capture a spatial orientation of a wearable device. The system has at least one capturing unit and at least one processor unit. The at least one capturing unit is designed to capture at least one first position parameter in relation to the wearable device and to capture at least one second position parameter in relation to a body part of a person on which the wearable device is arranged. The at least one processor unit is designed to determine a spatial orientation of the wearable device on the basis of the at least one first position parameter and the at least one second position parameter.
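One way the processor unit could combine the two position parameters is to treat them as two 3D points, one on the device and one on the body part, and derive an orientation from the direction between them. This is my own minimal formulation, not the patent's method, and the function name is hypothetical.

```python
import math

def orientation_from_positions(device_pos, body_pos):
    """Return (yaw, pitch) in degrees of the body-part -> device direction.

    device_pos: (x, y, z) of the first position parameter (on the device).
    body_pos:   (x, y, z) of the second position parameter (on the body part).
    """
    dx = device_pos[0] - body_pos[0]
    dy = device_pos[1] - body_pos[1]
    dz = device_pos[2] - body_pos[2]
    yaw = math.degrees(math.atan2(dy, dx))                  # heading in the x-y plane
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # elevation
    return yaw, pitch
```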
Systems and methods for operating a head-mounted display system based on user identity
Systems and methods for depth plane selection in display systems such as augmented reality display systems, including mixed reality display systems, are disclosed. A display(s) may present virtual image content via image light to an eye(s) of a user. The display(s) may output the image light to the eye(s) of the user, the image light to have different amounts of wavefront divergence corresponding to different depth planes at different distances away from the user. A camera(s) may capture images of the eye(s). An indication may be generated based on obtained images of the eye(s), indicating whether the user is identified. The display(s) may be controlled to output the image light to the eye(s) of the user, the image light to have the different amounts of wavefront divergence based at least in part on the generated indication indicating whether the user is identified.
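The control flow above can be sketched as a small selection function: identification gates whether per-user calibrated depth planes or default planes supply the wavefront divergence for a given content depth. All names and the diopter values are my own illustrative assumptions, not taken from the patent.

```python
# Depth planes expressed as diopters of wavefront divergence (1/meters).
DEFAULT_PLANES = [0.5, 2.0]

def select_divergence(content_depth_m, identified, user_planes=None):
    """Pick the wavefront divergence (diopters) for content at a given depth.

    Uses per-user calibrated planes only when the eye images identified
    the user; otherwise falls back to the default planes.
    """
    planes = user_planes if (identified and user_planes) else DEFAULT_PLANES
    target = 1.0 / content_depth_m          # depth in meters -> diopters
    return min(planes, key=lambda p: abs(p - target))
```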
Surface Puck
An image orientation system is provided wherein images (rays of light) are projected to a user based on the user's field of view or viewing angle. As the rays of light are projected, streams of air can be produced that bend or focus the rays of light toward the user's field of view. The streams of air can be cold air, hot air, or combinations thereof. Further, an image receiver can be utilized to receive the produced image/rays of light directly in line with the user's field of view. The image receiver can be a wearable device, such as a head mounted display.
HEAD MOUNTED DISPLAY AND METHOD FOR CONTROLLING THE SAME
Disclosed are a head mounted display (HMD) and a method for controlling the same. The HMD includes: a body having a display unit; and a lens driving unit provided at the body and configured to move a lens unit spaced apart from the display unit. The lens driving unit includes: a lens frame having a first tube portion protruding in a first direction and coupled to the body; a lens housing having a second tube portion protruding in the first direction and holding the lens unit, the second tube portion movable relative to the first tube portion; a link unit coupled to the lens frame and the lens housing and configured to move the lens housing; and a driving unit provided at one side of the first tube portion and configured to operate the link unit.
MIXED REALITY DISPLAY DEVICE
Examples are disclosed that relate to mixed reality display devices. One example provides a head-mounted display device comprising a display, a lens system, and a curved Fresnel combiner. The curved Fresnel combiner is configured to direct light received from the display via the lens system toward an eyebox, and is at least partially transmissive to background light.
Addressable crossed line projector for depth camera assembly
A projector for illuminating a target area is presented. The projector includes an array of emitters positioned on a substrate according to a distribution. Each emitter in the array of emitters has a non-circular emission area. Operation of at least a portion of the array of emitters is controlled based in part on emission instructions to emit light. The light from the projector is configured to illuminate the target area. The projector can be part of a depth camera assembly for depth sensing of a local area, or part of an eye tracker for determining a gaze direction for an eye.
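The addressable control described above, where emission instructions activate only part of the emitter array, can be illustrated with a toy model. The class and instruction format are hypothetical sketches of the idea, not the patent's design.

```python
class EmitterArray:
    """Toy model of an addressable emitter array on a substrate."""

    def __init__(self, rows, cols):
        self.active = [[False] * cols for _ in range(rows)]

    def apply_instructions(self, instructions):
        """Apply emission instructions: an iterable of (row, col, on) tuples.

        Only the addressed emitters change state, so a subset of the
        array (e.g. one line of a crossed-line pattern) can be lit.
        """
        for row, col, on in instructions:
            self.active[row][col] = on

    def active_count(self):
        return sum(v for row in self.active for v in row)
```

Addressing subsets this way is what lets the same hardware serve both structured-light depth sensing over a local area and sparse illumination for eye tracking.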
Head up display apparatus
A head up display apparatus includes: an image display apparatus having a light source and a display element and forming an image; an image-light projecting means displaying a virtual image in front of a vehicle by projecting the image light emitted from the image display apparatus so that it is reflected on a windshield; and a point-of-view detecting system sensing the driver's point of view. In the head up display apparatus, the image-light projecting means includes a means for generating illumination light consisting entirely of single-color visible light that is emitted toward the driver's face in a predetermined state of the vehicle.
Display with image light steering
A display device includes a directional illuminator providing a light beam, a display panel downstream of the directional illuminator for receiving and spatially modulating the light beam, and a beam redirecting module downstream of the display panel for variably redirecting the spatially modulated light beam. Steering the illuminating light with the beam redirecting module enables the exit pupil of the display device to be steered to match the user's eye location(s).
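The geometry behind matching the exit pupil to the eye location reduces, in the simplest case, to deflecting the beam by the angle subtended by the eye's lateral offset at its distance from the display. This one-line formulation is my own illustration, not the patent's steering model.

```python
import math

def steering_angle_deg(eye_offset_mm, eye_distance_mm):
    """Deflection angle that moves the exit pupil onto an eye that is
    laterally offset from the optical axis by eye_offset_mm at a
    distance of eye_distance_mm from the beam redirecting module."""
    return math.degrees(math.atan2(eye_offset_mm, eye_distance_mm))
```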
Presentation of an enriched view of a physical setting
In one implementation, a non-transitory computer-readable storage medium stores program instructions computer-executable on a computer to perform operations. The operations include presenting, on a display of an electronic device, first content representing a standard view of a physical setting depicted in image data generated by an image sensor of the electronic device. While presenting the first content, an interaction with an input device of the electronic device is detected that is indicative of a request to present an enriched view of the physical setting. In accordance with detecting the interaction, second content is formed representing the enriched view of the physical setting by applying an enrichment effect that alters or supplements the image data generated by the image sensor. The second content representing the enriched view of the physical setting is presented on the display.
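The standard-view/enriched-view switch above can be sketched as a gate on the same image data: the camera frames pass through unchanged until an interaction requests enrichment, at which point an enrichment effect is applied. The brightness-boost effect and the function names are illustrative assumptions, not the patent's effect.

```python
def enrich(pixels, gain=2):
    """A stand-in enrichment effect: amplify 8-bit pixel values, clamped."""
    return [min(255, p * gain) for p in pixels]

def present(pixels, enriched_requested):
    """Form the content to display: the standard view of the image data,
    or the enriched view when an interaction has requested it."""
    return enrich(pixels) if enriched_requested else pixels
```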