Patent classifications
H04N13/341
Virtual and augmented reality systems and methods
A method for displaying virtual content to a user includes determining an accommodation of the user's eyes. The method also includes delivering, through a first waveguide of a stack of waveguides, light rays having a first wavefront curvature based at least in part on the determined accommodation, wherein the first wavefront curvature corresponds to a focal distance of the determined accommodation. The method further includes delivering, through a second waveguide of the stack of waveguides, light rays having a second wavefront curvature, the second wavefront curvature associated with a predetermined margin of the focal distance of the determined accommodation.
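The selection logic this abstract describes can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the depth planes of the waveguide stack, the `select_waveguides` helper, and the diopter values are all assumptions chosen for the example (accommodation and depth planes expressed in diopters, i.e. 1 / focal distance in meters).

```python
# Assumed depth planes for a five-waveguide stack, in diopters (hypothetical).
WAVEGUIDE_DIOPTERS = [0.0, 0.5, 1.0, 2.0, 3.0]

def select_waveguides(accommodation_diopters, margin=0.5):
    """Pick the waveguide whose depth plane best matches the measured
    accommodation, plus a second waveguide whose depth plane lies within
    a predetermined margin of that focal distance (None if no plane
    other than the first falls inside the margin)."""
    # First waveguide: closest depth plane to the measured accommodation.
    first = min(WAVEGUIDE_DIOPTERS,
                key=lambda d: abs(d - accommodation_diopters))
    # Second waveguide: nearest remaining plane within the margin.
    candidates = [d for d in WAVEGUIDE_DIOPTERS
                  if d != first and abs(d - accommodation_diopters) <= margin]
    second = (min(candidates, key=lambda d: abs(d - accommodation_diopters))
              if candidates else None)
    return first, second
```

Driving two planes bracketing the measured accommodation in this way gives the eye a consistent focus cue even when the accommodation estimate falls between discrete depth planes.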
System and method for generating a mixed reality experience
An electronic device includes an image sensor, a projector, an adjustable mount, a processor, and a memory. The memory stores instructions executable by the processor to: receive at least one image of an environment around the electronic device from the image sensor; determine a face pose of a viewer based on the at least one image; determine characteristics of a projection surface based on the at least one image; control the projector to project a plurality of images onto the projection surface, the images determined based in part on the face pose of the viewer and the characteristics of the projection surface, wherein the images are configured to be perceived as a three-dimensional (3D) object image when viewed through 3D glasses; and control the adjustable mount to adjust a position or an orientation of the projector based in part on a change in the face pose of the viewer.
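The mount-control step at the end of this abstract amounts to tracking the change in face pose and repositioning the projector when the change is significant. A minimal sketch, assuming pose is reduced to (yaw, pitch) angles in degrees and assuming a hypothetical deadband threshold to avoid jitter:

```python
def mount_adjustment(prev_pose, new_pose, deadband_deg=2.0):
    """Return the (delta_yaw, delta_pitch) the adjustable mount should
    apply for a change in the viewer's face pose, or (0, 0) when the
    change stays within the deadband. Poses are (yaw, pitch) in degrees;
    the deadband value is an assumption for illustration."""
    d_yaw = new_pose[0] - prev_pose[0]
    d_pitch = new_pose[1] - prev_pose[1]
    # Ignore small pose changes so the mount does not chatter.
    if max(abs(d_yaw), abs(d_pitch)) < deadband_deg:
        return (0.0, 0.0)
    return (d_yaw, d_pitch)
```

In a full system this delta would feed the mount's servo loop, while the projected stereo image pair is simultaneously re-warped for the new pose and the measured projection-surface characteristics.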
Generating composite stereoscopic images
A system, method, or computer program product for generating composite images. One of the systems includes a capture device to capture an image of a physical environment; and one or more storage devices storing instructions that are operable, when executed by one or more processors of the system, to cause the one or more processors to: obtain an image of the physical environment as captured by the capture device, identify a visually-demarked region on a surface in the physical environment as depicted in the image, process the image to generate a composite image of the physical environment that includes a depiction of a virtual object, wherein a location of the depiction of the virtual object in the composite image is based on a location of the depiction of the visually-demarked region in the image, and cause the composite image to be displayed for a user.
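The compositing step can be illustrated with a simple sketch: once the visually-demarked region has been located in the captured image (here assumed to be given as a bounding box), the virtual object's depiction is anchored to it. The sprite-pasting approach and the `composite` helper are assumptions for illustration, not the patent's method.

```python
import numpy as np

def composite(image, region_bbox, sprite):
    """Paste a virtual-object sprite into a captured image, centered on
    the visually-demarked region. `region_bbox` is (x0, y0, x1, y1) in
    pixel coordinates; `sprite` is a small image array."""
    out = image.copy()
    x0, y0, x1, y1 = region_bbox
    cx, cy = (x0 + x1) // 2, (y0 + y1) // 2  # center of the marked region
    h, w = sprite.shape[:2]
    top, left = cy - h // 2, cx - w // 2
    # Place the virtual object's depiction at the region's center.
    out[top:top + h, left:left + w] = sprite
    return out
```

A real pipeline would also handle perspective (projecting the object consistently with the camera pose) and alpha blending, but the anchoring of the virtual object to the detected region is the core of the claim.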
System and method for presenting image content on multiple depth planes by providing multiple intra-pupil parallax views
An augmented reality display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity is selected using a light source that outputs light for different images from different locations, with spatial differences in the locations of the light output providing differences in the paths that the light takes to the eye, which in turn provide different amounts of parallax disparity. Advantageously, the wavefront divergence, and the accommodation cue provided to the eye of the user, may be varied by appropriate selection of parallax disparity, which may be set by selecting the amount of spatial separation between the locations of light output.
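The relationship the abstract relies on, between the spatial separation of the light-output locations and the resulting wavefront-divergence cue, can be sketched under a simple paraxial assumption. Everything here is an illustrative approximation, not the patent's optics: two output locations separated by `source_separation_m` behind a collimating lens of focal length `collimator_focal_m` produce beams differing in angle by roughly separation / f, and two such beams entering the pupil a baseline apart appear to diverge from a point at distance baseline / angle, i.e. a cue of angle / baseline diopters.

```python
def accommodation_cue_diopters(source_separation_m,
                               collimator_focal_m,
                               pupil_baseline_m):
    """Paraxial sketch: map the spatial separation of light-output
    locations to an approximate accommodation cue in diopters.
    All parameter names are assumptions for this illustration."""
    # Angular disparity between the two intra-pupil beams (small-angle).
    angle = source_separation_m / collimator_focal_m
    # Apparent divergence point at distance baseline / angle
    # -> vergence cue of angle / baseline diopters.
    return angle / pupil_baseline_m
```

Under these assumptions, a 0.1 mm source separation behind a 50 mm collimator, sampled across a 2 mm intra-pupil baseline, yields roughly a 1-diopter cue, consistent with the abstract's point that the cue is set purely by choosing the spatial separation of the light output.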
Systems and methods for efficient generation of single photon avalanche diode imagery with persistence
A system for efficiently generating single photon avalanche diode (SPAD) imagery with persistence is configurable to capture an image frame, capture pose data associated with the capturing of the image frame, and access a persistence frame. The persistence frame includes a preceding composite image frame generated based on at least two preceding image frames. The at least two preceding image frames are associated with timepoints that precede a capture timepoint associated with the image frame. The system is configurable to generate a persistence term based on (i) the pose data, (ii) a similarity comparison based on the image frame and the persistence frame, or (iii) a signal strength associated with the image frame. The system is configurable to generate a composite image based on the image frame, the persistence frame, and the persistence term. The persistence term defines a contribution of the image frame and the persistence frame to the composite image.
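The compositing the abstract describes is essentially a weighted blend whose weight is the persistence term. The sketch below is hypothetical: the weighting heuristic, its coefficients, and the per-pixel blend are assumptions illustrating how the three listed cues (pose change, frame similarity, signal strength) could set the new frame's contribution.

```python
def persistence_term(pose_delta, similarity, signal_strength,
                     pose_weight=0.5, sim_weight=0.3, signal_weight=0.2):
    """Hypothetical heuristic combining the three cues the abstract lists.
    More camera motion, lower frame similarity, or stronger signal all
    favor the newly captured frame over the persistence frame.
    Inputs are assumed normalized to [0, 1] (pose_delta is clamped)."""
    term = (pose_weight * min(pose_delta, 1.0)
            + sim_weight * (1.0 - similarity)
            + signal_weight * signal_strength)
    return max(0.0, min(1.0, term))

def composite(frame, persistence_frame, term):
    """Per-pixel blend: the persistence term defines each input's
    contribution to the composite image."""
    return [term * f + (1.0 - term) * p
            for f, p in zip(frame, persistence_frame)]
```

With `term = 0` the composite is the pure persistence frame (maximum temporal denoising, useful for a noisy low-light SPAD signal); with `term = 1` it is the new frame alone (no ghosting under motion).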
Display system and method thereof
A display system includes an image display device and a pair of shutter glasses. The image display device includes a display panel, a display communication circuit, and a control circuit. The control circuit is configured to control the display communication circuit to transmit a shutter glasses control signal, and to control the display panel to alternately display a normal image and a compensation image according to a timing control signal. The pair of shutter glasses includes a pair of shutters, a glasses communication circuit, and a shutter control circuit. The glasses communication circuit is configured to receive the shutter glasses control signal. The shutter control circuit is configured to alternately open and close the pair of shutters according to the shutter glasses control signal.
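The alternation the abstract describes can be sketched as a frame schedule. This is an assumed interpretation for illustration: the panel shows the normal image while the shutters are open and the compensation image while they are closed, so the viewer only ever perceives the normal image.

```python
def shutter_schedule(n_frames):
    """Sketch of the display/shutter alternation: each entry pairs the
    panel's content with the shutter state for one frame period.
    The even/odd phase assignment is an assumption for illustration."""
    schedule = []
    for i in range(n_frames):
        if i % 2 == 0:
            # Normal image is shown; shutters open so the viewer sees it.
            schedule.append(("normal", "open"))
        else:
            # Compensation image is shown; shutters closed, hiding it.
            schedule.append(("compensate", "closed"))
    return schedule
```

In the patented system this phase relationship is what the shutter glasses control signal carries, so the glasses' shutter control circuit stays synchronized with the panel's timing control signal.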