Patent classifications
G02B2027/0185
Method and device for adjusting pupil distance of virtual reality display device
Disclosed are a method and device for adjusting the pupil distance of a virtual reality display device. The method adjusts a second pupil distance on the device according to a first pupil distance, where the first pupil distance is the interpupillary distance of the user wearing the device and the second pupil distance is the distance between the focal points of the device's two lenses. The method comprises: detecting whether the first pupil distance and the second pupil distance match; and, if they do not match, executing a preset matching operation on the device. The present application addresses the technical problem of virtual reality display devices being bulky and heavy.
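The detect-then-adjust loop claimed above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; the tolerance value and the actuator behavior are assumptions.

```python
TOLERANCE_MM = 1.0  # assumed matching tolerance; the abstract does not specify one

def pupil_distances_match(first_pd_mm: float, second_pd_mm: float,
                          tolerance_mm: float = TOLERANCE_MM) -> bool:
    """First PD: measured interpupillary distance of the wearer.
    Second PD: current distance between the focal points of the two lenses."""
    return abs(first_pd_mm - second_pd_mm) <= tolerance_mm

def preset_matching_operation(first_pd_mm: float, second_pd_mm: float) -> float:
    """If the two distances do not match, drive the lens spacing toward the
    wearer's measured pupil distance and return the new second PD."""
    if pupil_distances_match(first_pd_mm, second_pd_mm):
        return second_pd_mm
    return first_pd_mm  # e.g. command lens actuators to the measured PD
```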
Augmented reality imaging system
An optical system is presented for use in an augmented reality imaging system. The optical system comprises a light directing device and a projecting optical device. The light directing device is configured to direct input light, including light indicative of an augmented image to be projected and light indicative of a real image of an external scene, to propagate to an imaging plane. The projecting optical device has a fixed field of view and a plurality of different focal parameters at different regions thereof, corresponding to different vision zones within the field of view. The projecting optical device is configured to affect the propagation of at least one of the light indicative of the augmented image and the light indicative of the real image such that, for each of the different regions, interaction of a part of the augmented-image light and a part of the real-image light with that region directs both parts along a substantially common output propagation path corresponding to the focal parameter of that region.
SEE-THROUGH COMPUTER DISPLAY SYSTEMS
Embodiments include a head-worn display comprising a display panel sized and positioned to produce a field of view that presents digital content to an eye of a user, and a processor adapted to present the digital content to the display panel such that the content occupies only a central portion of the field of view, leaving blank areas at the horizontally opposing edges. The processor is further adapted to shift the digital content into one of the blank areas to adjust the convergence distance of the content and thereby change its perceived distance from the user.
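The shift-to-adjust-convergence idea follows from simple vergence geometry: moving each eye's copy of the content horizontally changes where the two lines of sight intersect. A hedged sketch, assuming content initially rendered at optical infinity and an assumed display resolution in pixels per degree:

```python
import math

def convergence_shift_pixels(ipd_m: float, target_distance_m: float,
                             pixels_per_degree: float) -> float:
    """Per-eye horizontal shift, in pixels toward the nose, that makes the two
    eyes' lines of sight converge at the target distance, for content that
    starts at zero vergence (infinity). Geometry only; the patent's actual
    shifting logic is not disclosed in this abstract."""
    half_vergence_deg = math.degrees(math.atan(ipd_m / (2.0 * target_distance_m)))
    return half_vergence_deg * pixels_per_degree
```

For a 63 mm IPD, content placed at 0.5 m needs roughly four times the per-eye shift of content placed at 2 m.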
IMAGE DISPLAY DEVICE INCLUDING MOVEABLE DISPLAY ELEMENT AND IMAGE DISPLAY METHOD
An image display device includes: a processor that sets the location of a virtual image plane on which a virtual image is formed, according to depth information included in first image data, and generates second image data by correcting the first image data based on the set location of the virtual image plane; an image forming optical system including a display element configured to modulate light to form a display image according to the second image data, and a light transfer unit, comprising a focusing member, that forms the virtual image corresponding to the display image on the virtual image plane; and a drive unit that drives the image forming optical system to adjust the location of the virtual image plane.
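The drive unit's job can be illustrated with the thin-lens relation: placing the display element inside the focal length of the focusing member forms a virtual image, and the display-to-lens distance selects the virtual image plane. A sketch under a thin-lens assumption (not the patent's optical prescription):

```python
def display_offset_m(focal_length_m: float, virtual_image_distance_m: float) -> float:
    """Distance from the focusing member at which to place the display element
    so that a virtual image forms at the requested distance in front of the
    viewer. Thin-lens relation with the display inside the focal length:
    1/d_o = 1/f + 1/d_v, so d_o < f always."""
    return 1.0 / (1.0 / focal_length_m + 1.0 / virtual_image_distance_m)
```

With a 50 mm focusing member, a virtual image at 1 m requires the display at about 47.6 mm; pulling the image nearer pulls the display closer to the lens.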
Display device, in particular for vehicle, and vehicle having display device with volume hologram
A volume hologram is arranged inside a transparent portion of a pane of a display device for a vehicle. The display device further includes a light source by which light is coupled into the volume hologram. An image appearing three-dimensional to a human observer can be generated by use of the volume hologram. A camera includes a light-sensitive image sensor to acquire images via an optical unit which is at least partially formed by the transparent portion of the pane.
Field of View Optimization
Systems and methods disclosed herein include, among other aspects, a head-up display comprising an eye-box having a first dimension and a second dimension. The head-up display is arranged to form first image content in a first image area at a first image area distance from the eye-box, and second image content in a second image area at a second image area distance from the eye-box. The first image area distance is less than the second image area distance, and the first image area at least partially overlaps the second image area in the first dimension. The second image area subtends a smaller angle than the first image area in at least one direction of the first dimension.
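The comparison of image areas "in angular space" reduces to the angle each area subtends at the eye-box. A minimal geometric sketch (the widths and distances are illustrative, not from the patent):

```python
import math

def angular_extent_deg(image_width_m: float, distance_m: float) -> float:
    """Full angle subtended at the eye-box by an image area of the given
    width at the given distance."""
    return 2.0 * math.degrees(math.atan(image_width_m / (2.0 * distance_m)))
```

An image area of the same physical width placed at the greater distance subtends the smaller angle, which is the relationship the claim describes.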
COMBINED BIREFRINGENT MATERIAL AND REFLECTIVE WAVEGUIDE FOR MULTIPLE FOCAL PLANES IN A MIXED-REALITY HEAD-MOUNTED DISPLAY DEVICE
An optical combiner in a display system of a mixed-reality head-mounted display (HMD) device comprises a lens of birefringent material and a ferroelectric liquid crystal (FLC) modulator that are adapted for use with a reflective waveguide to provide multiple different focal planes on which holograms of virtual-world objects (i.e., virtual images) are displayed. The birefringent lens has two orthogonal refractive indices, ordinary and extraordinary, and which one applies is determined by the polarization state of the incident light. Depending on the rotation of the polarization axis by the FLC modulator, the incoming light to the birefringent lens is focused either at a distance corresponding to the ordinary refractive index or at a distance corresponding to the extraordinary refractive index. Virtual image light leaving the birefringent lens is in-coupled to a see-through reflective waveguide, which is configured to form an exit pupil for the optical combiner, enabling an HMD device user to view the virtual images from the source.
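Why two indices yield two focal planes can be seen from the lensmaker's equation: for a thin symmetric biconvex lens, 1/f = 2(n - 1)/R, so each refractive index gives its own focal length. A sketch with illustrative calcite-like indices (assumed values, not from the patent):

```python
def birefringent_focal_lengths_m(radius_m: float, n_o: float, n_e: float):
    """Thin symmetric biconvex lens: 1/f = 2(n - 1)/R. The polarization state
    selected by the FLC modulator determines which index, and therefore which
    focal length, the light experiences."""
    f_ordinary = radius_m / (2.0 * (n_o - 1.0))
    f_extraordinary = radius_m / (2.0 * (n_e - 1.0))
    return f_ordinary, f_extraordinary
```

The larger index focuses more strongly, so the two polarization states land on two distinct focal planes.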
VIRTUAL AND AUGMENTED REALITY SYSTEMS AND METHODS
Methods and systems are disclosed for presenting virtual objects on a limited number of depth planes using, e.g., an augmented reality display system. A farthest one of the depth planes is within a mismatch tolerance of optical infinity. The display system may switch the depth plane on which content is actively displayed, so that the content is displayed on the depth plane on which a user is fixating. The impact of errors in fixation tracking is addressed using partially overlapping depth planes. The display system determines the depth at which the user is fixating and then decides whether to adjust the selection of the depth plane at which a virtual object is presented. The determination may be based on whether the fixation depth falls within a depth overlap region of adjacent depth planes, with the display system switching the active depth plane only when the fixation depth falls outside the overlap region.
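The overlap-region logic amounts to hysteresis in plane selection. A minimal sketch, with assumed depth ranges in meters (the patent does not publish its plane boundaries in this abstract):

```python
import math

PLANES_M = [(0.3, 1.2), (0.9, math.inf)]  # assumed near/far ranges; adjacent
                                          # planes overlap between 0.9 and 1.2 m

def select_depth_plane(fixation_depth_m: float, current_plane: int,
                       planes=PLANES_M) -> int:
    """Keep the active plane while the fixation depth stays inside its range;
    the overlap region acts as hysteresis, so small fixation-tracking errors
    near a boundary do not cause flicker between adjacent planes."""
    near, far = planes[current_plane]
    if near <= fixation_depth_m <= far:
        return current_plane
    for i, (n, f) in enumerate(planes):
        if n <= fixation_depth_m <= f:
            return i
    return 0 if fixation_depth_m < planes[0][0] else len(planes) - 1
```

A fixation depth of 1.0 m lies in the overlap, so whichever plane is currently active stays active; a depth of 2.0 m forces a switch to the far plane.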
A MULTI-PLANE DISPLAY DEVICE
A head-up display is described. A spatial light modulator is arranged to display a diffractive pattern of first picture content and/or second picture content. A screen assembly has first and second diffusers arranged in a stepped configuration so that the first diffuser is spatially offset from the second diffuser by a perpendicular distance. A light source is arranged to illuminate the diffractive pattern such that the first picture content is formed on the first diffuser and/or the second picture content is formed on the second diffuser. An optical system comprising at least one optical element having optical power is arranged so that the first and second diffusers have different object distances to the optical system.
System and method for presenting image content on multiple depth planes by providing multiple intra-pupil parallax views
An augmented reality display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity is selected using a light source that outputs light for different images from different locations, with spatial differences in the locations of the light output providing differences in the paths that the light takes to the eye, which in turn provide different amounts of parallax disparity. Advantageously, the wavefront divergence, and the accommodation cue provided to the eye of the user, may be varied by appropriate selection of parallax disparity, which may be set by selecting the amount of spatial separation between the locations of light output.