Patent classifications
G02B27/0179
INTERACTION PERIPHERAL, DETECTION METHOD, VIRTUAL REALITY HEADSET, METHOD FOR REPRODUCING A REAL POINT IN VIRTUAL SPACE, DEVICE AND METHOD FOR VIRTUALISING A REAL SPACE
An interaction peripheral, a method for detecting a real point, a virtual reality headset, a method for reproducing a real point in virtual space, and a device and method for virtualising a real space, particularly allowing a plane to be obtained in two, three or n dimensions of a real space which may be reproducible in virtual reality. The interaction peripheral, which can be connected to a virtual reality headset, includes a range finder that can supply to the headset a measurement signal including a relative position measurement of a real point of a real space, the real point being sighted by the range finder. The measurement signal enables reproduction of the measured real point in a virtual space generated by the headset. Thus, the real point can be reproduced in virtual space while reducing the risk of error, because the measurement tool is a simple interaction peripheral handled by the user.
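The abstract does not specify the coordinate transform, but the core step it describes — turning a range-finder reading taken relative to the headset into a point in the virtual space — can be sketched as a single rigid-body transform. The function name, the rotation-matrix representation of the headset pose, and the unit aim vector are all illustrative assumptions, not details from the patent:

```python
import numpy as np

def reproduce_real_point(headset_position, headset_rotation, aim_direction, measured_distance):
    """Map a range-finder measurement, taken relative to the headset,
    into the coordinate frame of the virtual space.

    headset_position:  (3,) headset position in virtual-space coordinates
    headset_rotation:  (3, 3) rotation matrix from headset frame to virtual frame
    aim_direction:     (3,) unit vector along which the range finder is sighted,
                       expressed in the headset frame (hypothetical representation)
    measured_distance: scalar distance returned by the range finder
    """
    relative_point = measured_distance * np.asarray(aim_direction, dtype=float)
    return np.asarray(headset_position, dtype=float) + headset_rotation @ relative_point

# Headset at the origin, unrotated, sighting a point 2 m straight ahead
point = reproduce_real_point([0.0, 0.0, 0.0], np.eye(3), [0.0, 0.0, 1.0], 2.0)
# point -> [0.0, 0.0, 2.0]
```

Any pose-tracking source (inside-out tracking, external base stations) could supply the position and rotation; the sketch only shows how the relative measurement composes with that pose.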
Head Up Display Apparatus With a Bright Energy Efficient Backlight for a Vehicle
A head up display apparatus for a vehicle includes an imaging unit that generates a projection light beam with display content. The imaging unit includes a transmissive display indication layer with selectively controllable display elements distributed over an area, a matrix backlight that provides backlighting for the layer and includes selectively controllable light sources distributed along the transmissive display indication layer, and a collimation array with collimators arranged between a light source and the transmissive display indication layer. The apparatus further includes a projection panel in the beam path of the projection light beam generated by the imaging unit for reflecting the projection light beam to a user, the projection panel being arranged in the beam path such that a virtual display image is generated behind it in the visual field of the user.
HEAD-UP DISPLAY DEVICE AND MOBILE OBJECT
A display region has a curved surface shape having upper and lower end portions disposed at positions closer to a visual field than a reference plane, and a central portion disposed at a position farther from the visual field than the reference plane. A first convergence angle difference between a convergence angle from the eye position to the upper end portion and a convergence angle from the eye position to a first point on the reference plane through the upper end portion, a second convergence angle difference between a convergence angle to the central portion and a convergence angle to a second point on the reference plane through the central portion, and a third convergence angle difference between a convergence angle to the lower end portion and a convergence angle to a third point on the reference plane through the lower end portion each fall within four milliradians.
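The convergence angle in this abstract is the angle subtended at the two eyes by a viewed point, so each "convergence angle difference" compares that angle for a point on the curved display region with the angle for the corresponding point on the flat reference plane. A minimal numerical check of the four-milliradian bound, assuming a 64 mm interpupillary distance and illustrative point coordinates not taken from the patent:

```python
import numpy as np

def convergence_angle(left_eye, right_eye, point):
    """Angle (radians) between the lines of sight from each eye to the point."""
    v_left = np.asarray(point, float) - np.asarray(left_eye, float)
    v_right = np.asarray(point, float) - np.asarray(right_eye, float)
    cos_a = np.dot(v_left, v_right) / (np.linalg.norm(v_left) * np.linalg.norm(v_right))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

# Assumed 64 mm interpupillary distance; z is the viewing direction.
left, right = [-0.032, 0.0, 0.0], [0.032, 0.0, 0.0]

# A point on the curved display region sits 1 cm nearer than the matching
# point on the reference plane (illustrative geometry).
angle_display = convergence_angle(left, right, [0.0, 0.1, 2.49])
angle_reference = convergence_angle(left, right, [0.0, 0.1, 2.50])
difference = abs(angle_display - angle_reference)  # well under 4e-3 rad here
```

At these viewing distances a centimetre of surface deviation produces a convergence angle difference of roughly a tenth of a milliradian, which illustrates why a gently curved region can stay within the claimed four-milliradian bound.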
Short distance illumination of a spatial light modulator using a single reflector
A display device includes a light source, a spatial light modulator, and an optical element. The optical element includes a reflective surface. The optical element is positioned relative to the light source so that at least a portion of the illumination light emitted by the light source and received by the optical element is reflected at the reflective surface back toward the light source. The spatial light modulator is positioned to receive at least a portion of the illumination light reflected by the reflective surface. A method performed by the display device is also disclosed.
Head-up display apparatus
The head-up display apparatus includes a display-light emitting unit having a second inclination angle around an axis parallel to the vertical direction, so that its first direction side is closer to the user than its second direction side, and configured to emit display light forming a rectangular image corresponding to the rectangular virtual image. The apparatus also includes a reflecting mirror having a third inclination angle around the axis parallel to the vertical direction, so that its first direction side is closer to the user than its second direction side, and configured to reflect the display light and emit the reflected light to the virtual-image display unit.
Robotic assistant and method for controlling the same
A robotic assistant includes a base; an elevation mechanism positioned on the base; a display rotatably mounted on the elevation mechanism; a camera positioned on the display; and a control system that receives command instructions. In response to the command instructions, the control system is to detect movement of a face of the user in a vertical direction based on the images captured by the camera. In response to detection of that movement, the control system is to rotate the display and actuate the elevation mechanism to move the display up and down, allowing the camera to face the face of the user throughout the movement of the face in the vertical direction.
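The vertical-tracking behaviour described here amounts to a simple closed-loop controller: compare the detected face position with the image centre and step the elevation mechanism to reduce the error. The following sketch assumes a pixel deadband, a fixed step size, and millimetre travel limits, all of which are illustrative parameters rather than details from the patent:

```python
def track_face_vertically(face_center_y, frame_center_y, lift_position,
                          deadband_px=20, step_mm=5, min_mm=0, max_mm=300):
    """Return a new elevation setpoint that keeps the detected face near
    the vertical centre of the camera frame.

    face_center_y:  y pixel coordinate of the detected face centre
    frame_center_y: y pixel coordinate of the image centre
    lift_position:  current height of the elevation mechanism (mm)
    """
    error = face_center_y - frame_center_y
    if abs(error) <= deadband_px:
        return lift_position              # face already roughly centred
    # Image y grows downward: face below centre -> lower the display,
    # face above centre -> raise it.
    direction = -1 if error > 0 else 1
    return max(min_mm, min(max_mm, lift_position + direction * step_mm))

new_height = track_face_vertically(face_center_y=100, frame_center_y=240,
                                   lift_position=100)
# new_height -> 105 (face above centre, so the display is raised one step)
```

A real controller would likely scale the step with the error (proportional control) and coordinate the display rotation with the lift, but the deadband-plus-step loop captures the control structure the abstract describes.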
MIXED, VIRTUAL AND AUGMENTED REALITY HEADSET AND SYSTEM
A mixed, virtual and augmented reality headset having a front casing (2) with a housing receiving a smartphone (19) facing the holographic display (5); a curved holographic display (5) in the front portion of the headset reflecting a projected image (11) via the display of a smartphone (19) and simultaneously allowing the user to see through same; a motorised mirror (14) positioned in a withdrawn position or in an extended position in front of the holographic display (5) reflecting the projected image (11) via the smartphone (19); two motorised lenses (15) positioned in a withdrawn position or in an extended position in front of the pupils (13) of the user; a mirror system (16) reflecting a real external image (10) with respect to the headset (1) towards a camera of the smartphone (19); and a control unit (50) controlling the position of the motorised lenses (15) and mirror (14).
DISPLAY DEVICE AND DISPLAY METHOD
The present technology provides a display device capable of appropriately displaying information in a visual field range of a user. The display device includes: a display system configured to display information in a visual field range of a user by irradiating a retina of an eyeball with light using an element integrally provided on the eyeball of the user; a detection system configured to detect a change in an orientation and/or a position of the eyeball; and a control system configured to control a display position and/or a display mode of the information in the visual field range on the basis of a detection result in the detection system.
CAMERA CONTROL USING SYSTEM SENSOR DATA
A method for using cameras in an augmented reality headset is provided. The method includes receiving a signal from a sensor mounted on a headset worn by a user, the signal being indicative of a user intention for capturing an image. The method also includes identifying the user intention for capturing the image based on a model that classifies the signal from the sensor according to the user intention, selecting a first image capturing device in the headset based on a specification of the first image capturing device and the user intention for capturing the image, and capturing the image with the first image capturing device. An augmented reality headset, a memory storing instructions, and a processor to execute the instructions to cause the augmented reality headset to perform the above method are also provided.
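The selection step in this abstract pairs a classified intent with camera specifications. A minimal sketch of that pipeline, where the intent labels, the stand-in classifier, and the spec fields are all hypothetical — the patent's model would be a trained classifier, not a threshold:

```python
from dataclasses import dataclass

@dataclass
class CameraSpec:
    name: str
    field_of_view_deg: float
    resolution_mp: float

def classify_intent(sensor_signal):
    """Stand-in for the abstract's model: map a raw sensor reading to a
    capture intent. A real system would use a trained classifier."""
    return "wide_scene" if sensor_signal["head_motion"] > 0.5 else "detail_shot"

def select_camera(cameras, intent):
    """Pick the camera whose specification best matches the inferred intent."""
    if intent == "wide_scene":
        return max(cameras, key=lambda c: c.field_of_view_deg)
    return max(cameras, key=lambda c: c.resolution_mp)

cameras = [CameraSpec("ultrawide", 120.0, 8.0), CameraSpec("main", 70.0, 48.0)]
chosen = select_camera(cameras, classify_intent({"head_motion": 0.8}))
# chosen.name -> "ultrawide"
```

The point of the structure is the separation of concerns: the classifier owns intent, the selector owns the spec match, so either can be swapped without touching the other.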
OPTICAL SYSTEMS AND METHODS FOR PREDICTING FIXATION DISTANCE
Head-mounted display systems may include an eye-tracking subsystem and a fixation distance prediction subsystem. The eye-tracking subsystem may be configured to determine at least a gaze direction of a user's eyes and an eye movement speed of the user's eyes. The fixation distance prediction subsystem may be configured to predict, based on the eye movement speed and the gaze direction of the user's eyes, a fixation distance at which the user's eyes will become fixated prior to the user's eyes reaching a fixation state associated with the predicted fixation distance. Additional methods, systems, and devices are also disclosed.