Patent classifications
H04N13/361
Virtual and augmented reality systems and methods
A method for displaying virtual content to a user includes determining an accommodation of the user's eyes. The method also includes delivering, through a first waveguide of a stack of waveguides, light rays having a first wavefront curvature based at least in part on the determined accommodation, wherein the first wavefront curvature corresponds to a focal distance of the determined accommodation. The method further includes delivering, through a second waveguide of the stack of waveguides, light rays having a second wavefront curvature, the second wavefront curvature associated with a predetermined margin of the focal distance of the determined accommodation.
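The waveguide-selection logic this abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, the metre-based focal-plane list, and the fixed margin are all assumptions made for the example.

```python
def select_waveguides(accommodation_m, focal_planes_m, margin_m=0.5):
    """Pick the waveguide whose focal plane best matches the eyes'
    measured accommodation (the "first waveguide"), plus any neighbours
    whose focal planes fall within a predetermined margin of that
    distance (the "second waveguide"). Distances are in metres."""
    primary = min(range(len(focal_planes_m)),
                  key=lambda i: abs(focal_planes_m[i] - accommodation_m))
    secondary = [i for i, f in enumerate(focal_planes_m)
                 if i != primary and abs(f - accommodation_m) <= margin_m]
    return primary, secondary
```

For example, with focal planes at 0.5 m, 1 m, and 2 m and a measured accommodation of 1 m, the 1 m waveguide is primary and any plane within the margin is driven as a secondary.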
System and method for combining text with three dimensional content
A system and method for combining and/or displaying text with three-dimensional content. The system and method insert text at the same level as the highest depth value in the 3D content. One example of 3D content is a two-dimensional image and an associated depth map. In this case, the depth value of the inserted text is adjusted to match the largest depth value of the given depth map. Another example of 3D content is a plurality of two-dimensional images and associated depth maps. In this case, the depth value of the inserted text is continuously adjusted to match the largest depth value of a given depth map. A further example of 3D content is stereoscopic content having a right eye image and a left eye image. In this case, the text in one of the left eye image and the right eye image is shifted to match the largest depth value in the stereoscopic image. Yet another example of 3D content is stereoscopic content having a plurality of right eye images and left eye images. In this case, the text in one of the left eye images or right eye images is continuously shifted to match the largest depth value in the stereoscopic images. As a result, the system and method of the present disclosure produce text combined with 3D content wherein the text does not obstruct the 3D effects in the 3D content and does not create visual fatigue for the viewer.
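The depth-map variants above reduce to a simple rule: place the text at the frame's largest depth value, and re-evaluate it per frame for video. A minimal sketch, assuming depth maps are plain nested lists and that larger values mean "nearer to the viewer" as in this abstract; the function names are illustrative, not from the patent:

```python
def nearest_text_depth(depth_map):
    """Depth at which inserted text cannot be occluded: the largest
    depth value anywhere in the map."""
    return max(max(row) for row in depth_map)

def nearest_text_depths(depth_maps):
    """For a sequence of frames, continuously re-adjust the text depth
    so that moving objects never pop in front of the caption."""
    return [nearest_text_depth(dm) for dm in depth_maps]
```

For the stereoscopic variants, the same maximum would instead be converted to a horizontal disparity and applied as a shift of the text in one eye's image.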
A SYSTEM COMPRISING MULTIPLE DIGITAL CAMERAS VIEWING A LARGE SCENE
Multiple digital cameras view a large scene, such as a part of a city. Some of the cameras view different parts of that scene, and video feeds from the cameras are processed at a computer to generate a photo-realistic synthetic 3D model of the scene. This enables the scene to be viewed from any viewing angle, including angles that the original, real cameras do not occupy—i.e. as though viewed from a ‘virtual camera’ that can be positioned in any arbitrary position. The 3D model combines both static elements that do not alter in real-time, and also dynamic elements that do alter in real-time or near real-time.
THREE-DIMENSIONAL DISPLAY DEVICE, THREE-DIMENSIONAL DISPLAY SYSTEM, HEAD-UP DISPLAY, AND MOBILE OBJECT
A display device includes: a display panel configured to display a parallax image including a first image to be viewed by a first eye of a user and a second image to be viewed by a second eye of the user; and an optical member including a plurality of optical elements arranged along a predetermined direction which includes a component in a parallax direction of the first eye and the second eye. A beam direction of the parallax image is defined by the plurality of optical elements. The display panel includes a plurality of subpixels, each including a plurality of minipixels. Each of the minipixels included in the plurality of subpixels is configured to display a different image.
Selective mono/stereo visual displays
To enhance a mono-output-only controller such as a mobile OS to support selective mono/stereo/mixed output, a stereo controller is instantiated in communication with the mono controller. The stereo controller coordinates stereo output, but calls and adapts functions already present in the mono controller for creating surface and image buffers, rendering, compositing, and/or merging. For content designated for 2D display, left and right surfaces are rendered from a mono perspective; for content designated for 3D display, left and right surfaces are rendered from left and right stereo perspectives, respectively. Some, all, or none of available content may be delivered to a stereo display in 3D, with a remainder delivered in 2D, and with comparable content still delivered in 2D to the mono display. The stereo controller is an add-on; the mono controller need not be replaced, removed, deactivated, or modified, facilitating transparency and backward compatibility.
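The per-content rendering rule in this abstract (mono perspective for 2D content, offset left/right perspectives for 3D content) can be sketched as below. This is an illustrative reduction under stated assumptions, not the patented stereo controller: the dict-based content records, the `render` stand-in, and the eye-offset parameter are all invented for the example.

```python
def render(content, camera_x):
    # Stand-in for the mono controller's existing render function:
    # records which camera position produced the surface.
    return {"id": content["id"], "camera_x": camera_x}

def render_surfaces(content, eye_offset=0.06):
    """For content designated 2D, both eyes receive the same mono view;
    for content designated 3D, left and right surfaces are rendered
    from horizontally offset stereo perspectives."""
    if content["mode"] == "3D":
        left = render(content, camera_x=-eye_offset / 2)
        right = render(content, camera_x=+eye_offset / 2)
    else:
        mono = render(content, camera_x=0.0)
        left = right = mono
    return left, right
```

A compositor following the abstract would then merge per-eye surface lists, so 2D and 3D content can coexist in one stereo frame while the unmodified mono controller keeps serving the mono display.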
DISPLAY DEVICE
According to one embodiment, a display device includes a display area, a light control element, a projection surface and at least one of one or more mirrors or one or more lenses. The display area includes subpixels, and includes a first area containing first subpixels in the subpixels and a second area containing second subpixels in the subpixels. The light control element is overlapped with the first area. The projection surface projects an image displayed in the display area. A virtual image perceived by a user who views the projection surface includes a first virtual image corresponding to the first area and perceived as a stereoscopic virtual image, and a second virtual image corresponding to the second area and perceived as a planar virtual image.
Integrated display rendering
A system, method, or computer program product that displays a first image, based on first image data, in a display area of a first display device; receives at least one camera-captured second image of an environment, the second image capturing at least a portion of the first image displayed in the display area; determines a location and orientation of the first display device relative to the camera; determines a portion of the second image that corresponds to the portion of the first image displayed in the display area; generates, from the first image data, a third image that corresponds to the portion of the first image displayed on the first display device as viewed from the point of view of the camera; generates a composite image of the environment by replacing at least a portion of the second image with the third image; and displays the composite image.
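The final compositing step, replacing the display's region of the camera frame with a clean re-render, is the easiest part to sketch. A minimal illustration assuming images are nested lists of pixel values and that the display's location has already been resolved into a boolean mask; the function name and mask representation are assumptions, not from the patent:

```python
def composite(camera_image, display_mask, rendered_patch):
    """Build the composite image: wherever the mask marks a pixel as
    belonging to the physical display, substitute the corresponding
    pixel re-rendered from the original first image data; keep the
    camera's pixel everywhere else."""
    return [
        [rendered_patch[y][x] if display_mask[y][x] else px
         for x, px in enumerate(row)]
        for y, row in enumerate(camera_image)
    ]
```

In a real pipeline the mask would come from the pose-estimation step (display location and orientation relative to the camera) rather than being supplied directly.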