Patent classifications
H04N13/361
Integrated display rendering
A system, method, or computer program product for displaying a first image based on first image data in a display area of a first display device, receiving at least one camera-captured second image of an environment with the second image capturing at least a portion of the first image displayed in the display area, determining a location and orientation of the first display device relative to the camera, determining a portion of the second image that corresponds to the portion of the first image displayed in the display area, generating, from the first image data, a third image that corresponds to the portion of the first image displayed on the first display device as viewed from a point of view of the camera, generating a composite image of the environment by replacing at least a portion of the second image with the third image, and displaying the composite image.
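The final compositing step can be sketched as follows. This is a minimal illustration, not the patented method: the detected display region is reduced to an axis-aligned rectangle standing in for the full pose-derived quad, and a nearest-neighbour resample stands in for a full perspective re-projection from the camera's point of view. The function name and `region` parameter are assumptions for illustration.

```python
import numpy as np

def composite_display_region(camera_img, display_img, region):
    """Replace the detected display region of a camera frame with a
    re-rendering of the original (first) image data.

    `region` is (top, left, height, width) — a rectangle stands in
    for the pose-derived display quad described in the abstract."""
    top, left, h, w = region
    src_h, src_w = display_img.shape[:2]
    # Nearest-neighbour resample of the source image data to the region
    # size, standing in for a perspective re-projection (the "third image").
    rows = np.arange(h) * src_h // h
    cols = np.arange(w) * src_w // w
    patch = display_img[rows[:, None], cols[None, :]]
    # Composite: overwrite only the region that showed the first display.
    out = camera_img.copy()
    out[top:top + h, left:left + w] = patch
    return out
```

A real implementation would estimate a homography from the display's pose and warp accordingly; the region replacement logic is otherwise the same.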
Partial light field display architecture
The disclosure describes various aspects of a partial light field display architecture. In an aspect, a light field display includes multiple picture elements (e.g., super-raxels), where each picture element includes a first portion having a first set of light emitting elements, where the first portion is configured to produce light outputs that contribute to at least one two-dimensional (2D) view. Each picture element also includes a second portion including a second set of light emitting elements (e.g., sub-raxels) configured to produce light outputs (e.g., ray elements) that contribute to at least one three-dimensional (3D) view. The light field display also includes electronic means configured to drive the first set of light emitting elements and the second set of light emitting elements in each picture element. The light field display can also dynamically identify the first portion and the second portion and allocate light emitting elements accordingly.
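The dynamic allocation of a picture element's emitters into 2D and 3D portions can be sketched as below. The split policy shown — all emitters to one portion for pure 2D or 3D, half-and-half for a mixed element — is an illustrative assumption; the abstract leaves the allocation policy open.

```python
def allocate_picture_element(emitter_ids, mode):
    """Partition one super-raxel's sub-raxels into the first (2D)
    portion and the second (3D) portion.

    `mode` and the split policy are illustrative assumptions."""
    n = len(emitter_ids)
    if mode == "2d":
        split = n        # every emitter contributes to the 2D view
    elif mode == "3d":
        split = 0        # every emitter emits a distinct ray element
    else:                # "mixed": both portions present in one element
        split = n // 2
    return {"2d": emitter_ids[:split], "3d": emitter_ids[split:]}
```

The drive electronics would then address the two returned sets differently: the 2D portion driven as one picture element, the 3D portion driven per ray element.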
METHOD AND APPARATUS FOR SELECTIVE MONO/STEREO VISUAL DISPLAY
To enhance a mono-output-only controller such as a mobile OS to support selective mono/stereo/mixed output, a stereo controller is instantiated in communication with the mono controller. The stereo controller coordinates stereo output, but calls and adapts functions already present in the mono controller for creating surface and image buffers, rendering, compositing, and/or merging. For content designated for 2D display, left and right surfaces are rendered from a mono perspective; for content designated for 3D display, left and right surfaces are rendered from left and right stereo perspectives, respectively. Some, all, or none of available content may be delivered to a stereo display in 3D, with a remainder delivered in 2D, and with comparable content still delivered in 2D to the mono display. The stereo controller is an add-on; the mono controller need not be replaced, removed, deactivated, or modified, facilitating transparency and backward compatibility.
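The per-content rendering rule — mono perspective for both eyes when content is designated 2D, offset perspectives when designated 3D — can be sketched as follows. The inner `render` function is a hypothetical stand-in for the mono controller's existing rendering call, and the item/mode structure is an assumption for illustration.

```python
def render_stereo_pair(content_items, eye_offset=0.03):
    """Sketch of the selective mono/stereo pass coordinated by the
    stereo controller. Each item carries a 'mode' flag; 2D items reuse
    the mono camera for both eyes, 3D items get left/right cameras
    offset by half the interaxial distance."""
    def render(item, cam_x):
        # Hypothetical stand-in for the mono controller's render call.
        return (item["id"], round(cam_x, 4))

    left, right = [], []
    for item in content_items:
        if item["mode"] == "3d":
            left.append(render(item, -eye_offset / 2))
            right.append(render(item, +eye_offset / 2))
        else:  # 2D content: both eyes share the mono perspective
            left.append(render(item, 0.0))
            right.append(render(item, 0.0))
    return left, right
```

Because the same mono-controller render call serves both branches, nothing in the mono pipeline needs replacing — which is the backward-compatibility point the abstract emphasizes.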
Display device, display system, and movable vehicle
A display device is switchable between 3D image display and 2D image display. The display device includes a first display panel, a second display panel, a controller, and an optical system. The first and second display panels each include subpixels arranged in a grid. The controller is configured to switch between multiple display modes, including a first display mode for displaying a 2D image and a second display mode for displaying a parallax image on the first display panel, and to switch a drive mode of the second display panel between multiple drive modes, including a first drive mode corresponding to the first display mode and a second drive mode corresponding to the second display mode.
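The coupling between the first panel's display mode and the second panel's drive mode can be sketched as below. The mode names are illustrative assumptions (a common realization drives the second panel uniformly for 2D and as a parallax barrier for 3D, but the abstract does not commit to one).

```python
class DualPanelController:
    """Sketch of the controller's mode coupling: the second panel's
    drive mode always switches in lockstep with the first panel's
    display mode, so a parallax image is only shown while the second
    panel is driven to separate the views."""

    # Illustrative mapping; names are assumptions, not patent terms.
    MODES = {"2d": "uniform",   # second panel passes light unchanged
             "3d": "barrier"}   # second panel separates left/right views

    def __init__(self):
        self.display_mode = "2d"
        self.drive_mode = self.MODES["2d"]

    def switch(self, display_mode):
        if display_mode not in self.MODES:
            raise ValueError(f"unknown display mode: {display_mode}")
        self.display_mode = display_mode
        self.drive_mode = self.MODES[display_mode]  # always in lockstep
```

Keeping the two switches in one method guarantees the panels never disagree, which is the failure mode (a parallax image with no view separation, or vice versa) the paired modes exist to prevent.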
SELECTIVE MONO/STEREO VISUAL DISPLAYS
To enhance a mono-output-only controller such as a mobile OS to support selective mono/stereo/mixed output, a stereo controller is instantiated in communication with the mono controller. The stereo controller coordinates stereo output, but calls and adapts functions already present in the mono controller for creating surface and image buffers, rendering, compositing, and/or merging. For content designated for 2D display, left and right surfaces are rendered from a mono perspective; for content designated for 3D display, left and right surfaces are rendered from left and right stereo perspectives, respectively. Some, all, or none of available content may be delivered to a stereo display in 3D, with a remainder delivered in 2D, and with comparable content still delivered in 2D to the mono display. The stereo controller is an add-on; the mono controller need not be replaced, removed, deactivated, or modified, facilitating transparency and backward compatibility.
Method and apparatus for combining 3D image and graphical data
Three-dimensional [3D] image data and auxiliary graphical data are combined for rendering on a 3D display (30) by detecting depth values occurring in the 3D image data and setting auxiliary depth values for the auxiliary graphical data (31) adaptively, in dependence on the detected depth values. The 3D image data and the auxiliary graphical data at the auxiliary depth value are combined based on the depth values of the 3D image data. First, an area of attention (32) in the 3D image data is detected. A depth pattern for the area of attention is determined, and the auxiliary depth values are set in dependence on the depth pattern.
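The adaptive depth-setting step can be sketched as below. Assumptions for illustration: the depth convention (smaller value = closer to the viewer), the choice of "nearest point in the area of attention" as the depth pattern, and the `margin` offset that places the overlay just in front of it.

```python
def auxiliary_depth(depth_map, attention_box, margin=2):
    """Set the overlay (e.g. subtitle) depth adaptively: summarize the
    depth pattern inside the area of attention as its nearest-to-viewer
    depth, then place the auxiliary graphics just in front of it.

    Convention assumed: smaller depth value = closer to the viewer.
    `attention_box` is (top, left, height, width)."""
    top, left, h, w = attention_box
    region = [depth_map[r][left:left + w] for r in range(top, top + h)]
    nearest = min(min(row) for row in region)  # depth pattern: nearest point
    return nearest - margin                    # overlay sits in front of it
```

Placing the overlay relative to the attended region, rather than at a fixed depth, avoids the overlay intersecting foreground objects while keeping it close enough to limit eye-strain from depth jumps.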
Systems and methods for occluding images and videos subject to augmented-reality effects
In one embodiment, a method includes a system accessing an image, which may comprise covered and uncovered portions, and an overlay image comprising opaque pixels. The covered portion may be configured to be covered by the opaque pixels of the overlay image. The system may generate a data structure comprising data elements associated with pixels of the image. Each of the data elements associated with a covered pixel in the covered portion of the image may be configured to identify an uncovered pixel in the uncovered portion of the image that is closest to the covered pixel. Each covered pixel in the covered portion of the image may be modified by accessing the data element associated with the covered pixel, determining a distance between the covered pixel and an associated closest uncovered pixel using the accessed data element, and modifying a color of the covered pixel based on the distance.
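The data structure described — each covered pixel identifying its closest uncovered pixel, with the distance driving a color modification — behaves like a distance transform. The sketch below uses a multi-source BFS for the distance field and an exponential attenuation per step; the attenuation rule and `dim` factor are illustrative assumptions, not the patent's specific color modification.

```python
from collections import deque

def shade_covered(mask, dim=0.9):
    """Sketch of the occlusion pass: mask[r][c] is True where the
    overlay's opaque pixels cover the image. A multi-source BFS fills
    in, for every covered pixel, its distance (in 4-connected steps)
    to the closest uncovered pixel, and that distance then drives the
    color modification — here, a per-step attenuation factor."""
    h, w = len(mask), len(mask[0])
    # Distance field: 0 at uncovered pixels, unknown (None) elsewhere.
    dist = [[0 if not mask[r][c] else None for c in range(w)]
            for r in range(h)]
    q = deque((r, c) for r in range(h) for c in range(w) if not mask[r][c])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    # Farther from any uncovered pixel -> stronger attenuation.
    return [[dim ** d for d in row] for row in dist]
```

Precomputing the distance field once per overlay shape means the per-pixel color modification is a single lookup at render time, which is presumably why the abstract frames it as a stored data structure rather than a per-frame search.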