Patent classifications
H04N13/302
Layered scene decomposition codec system and methods
A system and methods are provided for a CODEC that drives a real-time light field display for multi-dimensional video streaming, interactive gaming, and other light field display applications, applying a layered scene decomposition strategy. Multi-dimensional scene data is divided into a plurality of data layers whose depths increase with the distance between a given layer and the plane of the display. Data layers are sampled using a plenoptic sampling scheme and rendered using hybrid rendering, such as perspective and oblique rendering, to encode light fields corresponding to each data layer. The resulting compressed (layered) core representation of the multi-dimensional scene data is produced at predictable rates, then reconstructed and merged at the light field display in real time by applying view synthesis protocols, including edge-adaptive interpolation, to reconstruct pixel arrays in stages (e.g., columns then rows) from reference elemental images.
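The decomposition step above can be sketched as follows. This is an illustrative toy, not the patented method: scene points are binned into depth layers, and each layer gets a sampling stride that grows with depth, mimicking the idea that distant layers tolerate sparser plenoptic sampling. The `Layer` type, the boundary values, and the stride rule are all assumptions made for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    near: float          # layer's near depth bound (distance from display plane)
    far: float           # layer's far depth bound
    points: list = field(default_factory=list)

def decompose(points, boundaries):
    """Assign (x, y, z) scene points to depth layers defined by `boundaries`."""
    layers = [Layer(n, f) for n, f in zip(boundaries, boundaries[1:])]
    for p in points:
        for layer in layers:
            if layer.near <= p[2] < layer.far:
                layer.points.append(p)
                break
    return layers

def sampling_stride(layer, base_stride=1):
    """Toy plenoptic rule (an assumption): sample more sparsely as depth grows."""
    return base_stride * max(1, int(layer.near))

points = [(0.0, 0.0, 0.5), (1.0, 2.0, 1.5), (3.0, 1.0, 3.2)]
layers = decompose(points, boundaries=[0.0, 1.0, 2.0, 4.0])
strides = [sampling_stride(l) for l in layers]
print(strides)  # [1, 1, 2]
```

The per-layer strides are where a real codec would hook in its sampling scheme; here they only illustrate that deeper layers can carry fewer reference elemental images.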
Method and computing device for interacting with autostereoscopic display, autostereoscopic display system, autostereoscopic display, and computer-readable storage medium
A method for interacting with an autostereoscopic display is disclosed. The method includes initiating displaying, by the autostereoscopic display, a left eye view and a right eye view that contain a virtual manipulated object; determining a real-world coordinate of the virtual manipulated object as perceived by a user located at a predetermined viewing position of the autostereoscopic display; receiving an interaction action of the user's manipulating body acquired by a motion tracker, where the interaction action includes a real-world coordinate of the manipulating body; determining whether an interaction condition is triggered based at least in part on the real-world coordinate of the virtual manipulated object and the real-world coordinate of the manipulating body; and, in response to determining that the interaction condition is triggered, refreshing the left eye view and the right eye view based on the interaction action of the manipulating body acquired by the motion tracker.
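The abstract leaves the interaction condition unspecified; one plausible form, sketched here as an assumption, is a proximity test between the tracked manipulating body and the perceived real-world position of the virtual object. The threshold value is made up for illustration.

```python
import math

def interaction_triggered(object_xyz, body_xyz, threshold=0.03):
    """Assumed condition: body within `threshold` metres of the virtual object."""
    return math.dist(object_xyz, body_xyz) <= threshold

# Perceived object 30 cm in front of the display centre; fingertip nearby.
print(interaction_triggered((0.0, 0.0, 0.30), (0.01, 0.0, 0.31)))  # True
```

When the condition fires, the system would refresh both eye views from the new body coordinate; when it does not, the views stay unchanged.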
3D DISPLAY SYSTEM AND 3D DISPLAY METHOD
A 3D display system and a 3D display method are provided. The 3D display system includes a 3D display, a memory, and one or more processors. The memory records a plurality of modules, and the processor accesses and executes the modules recorded in the memory. The modules include a bridge interface module and a 3D display service module. When an application is executed by the processor, the bridge interface module creates a virtual extended screen and moves the application to the virtual extended screen. The bridge interface module then obtains a 2D content frame of the application from the virtual extended screen via a screenshot function. The 3D display service module converts the 2D content frame into a 3D format frame by communicating with a third-party software development kit, and provides the 3D format frame to the 3D display for display.
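The bridging flow can be sketched as a small pipeline. The class names, the string-based frame type, and the SDK callable below are hypothetical stand-ins for illustration, not the patented modules' actual interfaces.

```python
# Sketch of the flow: app -> virtual extended screen -> screenshot -> SDK -> 3D.
class VirtualExtendedScreen:
    def __init__(self):
        self.frame = None
    def render(self, app_name):
        # Stand-in for the application drawing into the off-screen surface.
        self.frame = f"2D:{app_name}"
    def screenshot(self):
        return self.frame

class BridgeInterfaceModule:
    def __init__(self):
        self.screen = VirtualExtendedScreen()
    def capture(self, app_name):
        self.screen.render(app_name)       # "move" the app to the virtual screen
        return self.screen.screenshot()    # grab its 2D content frame

class DisplayServiceModule:
    def __init__(self, sdk_convert):
        self.sdk_convert = sdk_convert     # third-party SDK entry point (assumed)
    def to_3d(self, frame_2d):
        return self.sdk_convert(frame_2d)

bridge = BridgeInterfaceModule()
service = DisplayServiceModule(sdk_convert=lambda f: f.replace("2D:", "3D:"))
frame_3d = service.to_3d(bridge.capture("media_player"))
print(frame_3d)  # 3D:media_player
```

The point of the split is that the application never knows it is being captured: the bridge owns the off-screen surface, and only the service module talks to the conversion SDK.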
FLOATING-INFORMATION DISPLAY
A floating-information display includes a first quarter-wave retarder disposed on a side of an optical plate. A reflective polarizer is disposed between the first quarter-wave retarder and the optical plate. A first display is configured to transmit a first image along a first axis through the first quarter-wave retarder to the reflective polarizer. The reflective polarizer redirects the first image along a second axis through the first quarter-wave retarder toward a viewer. The first image appears to the viewer to be oriented normal to the second axis and at a first location. A second display is configured to transmit a second image to the optical plate. The second image is transferred through the first quarter-wave retarder along the second axis toward the viewer. The second image appears to the viewer to be oriented normal to the second axis and at a second location.
DISPLAY DEVICE
According to an aspect, a display device includes: a display panel including a plurality of pixels disposed in a matrix having a row-column configuration; and a light control panel configured to control light traveling from the display panel to a plurality of viewpoints such that the light varies from viewpoint to viewpoint. The pixels include a first pixel configured to output light in a first color and a second pixel configured to output light in a second color. The first and second pixels are arranged in a column direction orthogonal to an arrangement direction of the viewpoints. A width of each pixel in a row direction along the arrangement direction of the viewpoints is greater than a width of the pixel in the column direction.
EYE TRACKING METHOD AND EYE TRACKING DEVICE
The disclosure provides an eye tracking method and an eye tracking device. The method includes obtaining a reference interpupillary distance value; taking images of a user of a 3D display and finding, in each image, a first eye pixel coordinate corresponding to a first eye of the user and a second eye pixel coordinate corresponding to a second eye of the user; detecting first and second eye spatial coordinates of the first and second eyes, and determining projection coordinates based on the first eye spatial coordinate, the second eye spatial coordinate, and optical parameters of the image capturing elements; determining an optimization condition related to the first and second eye spatial coordinates based on the first and second eye pixel coordinates, the projection coordinates, and the reference interpupillary distance value for each image; and optimizing the first and second eye spatial coordinates based on the optimization condition.
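A cost function of the kind the abstract gestures at can be sketched as follows. This is a hedged guess at the shape of the optimization condition, not the patented formulation: it penalizes the reprojection gap between detected eye pixel coordinates and the pinhole projection of candidate 3D eye positions, plus the deviation of the candidate interpupillary distance from the reference value. The intrinsics matrix, weight, and coordinates are all made-up illustration values.

```python
import numpy as np

def project(K, X):
    """Pinhole projection of a 3D point X with camera intrinsics K."""
    p = K @ X
    return p[:2] / p[2]

def eye_cost(X_left, X_right, px_left, px_right, K, ipd_ref, w_ipd=1.0):
    """Reprojection error plus an interpupillary-distance prior (assumed form)."""
    reproj = (np.linalg.norm(project(K, X_left) - px_left) ** 2
              + np.linalg.norm(project(K, X_right) - px_right) ** 2)
    ipd_term = (np.linalg.norm(X_left - X_right) - ipd_ref) ** 2
    return reproj + w_ipd * ipd_term

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
X_l = np.array([-0.032, 0.0, 0.6])   # candidate left-eye position (metres)
X_r = np.array([0.032, 0.0, 0.6])    # candidate right-eye position
cost = eye_cost(X_l, X_r, project(K, X_l), project(K, X_r), K, ipd_ref=0.064)
print(cost)  # 0.0 when projections match and the IPD equals the reference
```

Feeding this cost to any standard minimizer over the six eye coordinates would implement the final "optimizing" step; with multiple images, the per-image terms would simply be summed.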
Method for optimized viewing experience and reduced rendering for autostereoscopic 3D, multiview and volumetric displays
A system and method for creating an improved three-dimensional image includes several steps. One step includes providing one or more adjacent viewing zones, where each adjacent viewing zone includes several views of content; the adjacent viewing zones include central subset views, which are centrally located views within the adjacent viewing zones, and transition subset views, which are views at the edges of the adjacent viewing zones. Another step includes inserting at least one of the central subset views into the transition zone to create an expanded transition zone. A further step includes removing at least one transition subset view from the adjacent viewing zone and replacing it with the inserted central subset view.
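The replacement step can be sketched as a simple list operation. The view labels, zone size, and the symmetric one-per-side replacement below are assumptions for illustration, not the patented parameters.

```python
# Edge views (the "transition subset") are swapped for copies of a centrally
# located view, widening the comfortable region between adjacent zones.
def expand_transition(zone, n_edge=1):
    """Replace the outermost `n_edge` views on each side with a central view."""
    center = zone[len(zone) // 2]
    return [center] * n_edge + zone[n_edge:-n_edge] + [center] * n_edge

zone = ["v0", "v1", "v2", "v3", "v4"]
print(expand_transition(zone))  # ['v2', 'v1', 'v2', 'v3', 'v2']
```

The effect is that a viewer drifting toward a zone boundary sees a repeated central view instead of an abrupt jump to the next zone's edge view, which also reduces how many distinct views must be rendered.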