H04N13/366

Video transmission method, video transmission device, video receiving method and video receiving device
11528509 · 2022-12-13

A video transmission method includes predicting, from a texture picture or a depth picture of an anchor viewing position, a picture for a target viewing position on the basis of target viewing position information, and processing a prediction error with respect to the predicted picture on the basis of a source picture of the target viewing position. An error-prone region map is generated on the basis of the predicted picture and the source picture. The method also includes patch packing the prediction-error-processed picture on the basis of the error-prone region map and encoding the packed patch on the basis of the texture picture or the depth picture of the anchor viewing position.
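The error-prone region map and patch-packing steps described above can be sketched as follows. This is a minimal illustration, not the patented method itself: the block size, the mean-absolute-error threshold, and the function names are all assumptions.

```python
import numpy as np

def error_prone_region_map(predicted, source, threshold=10.0, block=16):
    """Mark blocks where the view-synthesis prediction error is large.

    `predicted` is the picture synthesized for the target viewing position
    from the anchor view; `source` is the captured picture at the target
    viewing position. A block is error-prone when its mean absolute error
    exceeds the threshold.
    """
    err = np.abs(predicted.astype(np.float32) - source.astype(np.float32))
    rows, cols = err.shape[0] // block, err.shape[1] // block
    region_map = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            blk = err[r * block:(r + 1) * block, c * block:(c + 1) * block]
            region_map[r, c] = blk.mean() > threshold
    return region_map

def pack_patches(residual, region_map, block=16):
    """Collect only the error-prone blocks of the residual picture as patches
    (block coordinates plus pixel data), ready to be packed and encoded."""
    patches = []
    for r, c in zip(*np.nonzero(region_map)):
        patch = residual[r * block:(r + 1) * block, c * block:(c + 1) * block]
        patches.append(((int(r), int(c)), patch))
    return patches
```

Only the regions where prediction from the anchor view fails are carried as patches; well-predicted regions need no extra data beyond the anchor pictures.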

DISPLAY PANEL, DISPLAY METHOD THEREOF AND DISPLAY DEVICE
20220394236 · 2022-12-08 ·

A display panel, a display method thereof, and a display device are provided. The display panel includes a cylindrical lens array on the light-emitting side of a display substrate. The display substrate includes a back plate, a pixel definition layer, and a sub-pixel unit array on the back plate. Each sub-pixel unit is located in a pixel region defined by the pixel definition layer and includes at least two secondary sub-pixels. The cylindrical lenses correspond to the sub-pixel units; each cylindrical lens has a cylindrical surface facing away from the back plate and a focus point on the surface of a sub-pixel unit away from the back plate. The panel achieves multi-viewpoint parallax 3D display compatible with near-to-eye light field display and with 2D display using sub-pixels of the same gray scale, effectively reducing crosstalk, mitigating the Moiré phenomenon, increasing the stereo perception of the parallax 3D display, and alleviating the visual fatigue caused by the conflict between monocular focusing and binocular convergence.
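As background for how sub-pixels under a cylindrical (lenticular) lens address discrete viewing directions, the sketch below shows a standard view-interleaving assignment for multi-view displays. It is a generic illustration, not taken from the patent; the function name and parameters are assumptions.

```python
def viewpoint_index(column, subpixels_per_lens, num_views):
    """Assign each sub-pixel column under a cylindrical lens to one of
    `num_views` discrete viewing directions. Columns repeat with the lens
    pitch, so position within the lens determines the emitted view."""
    return (column % subpixels_per_lens) * num_views // subpixels_per_lens
```

With as many sub-pixels per lens as views, adjacent sub-pixels feed adjacent viewpoints; driving all views with the same gray scale reduces the panel to ordinary 2D display, which is the compatibility the abstract describes.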

Providing a three-dimensional preview of a three-dimensional reality video

An illustrative method includes providing, for display by a display device, a first preview of a three-dimensional (3D) virtual reality video in a 3D object in a virtual reality menu user interface, the first preview corresponding to a first location within the 3D object; receiving data describing a rotation of the 3D object in the virtual reality menu user interface; and providing, for display by the display device and based on the rotation of the 3D object, a second preview of the 3D virtual reality video in the 3D object in the virtual reality menu user interface, the second preview corresponding to a second location within the 3D object.
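The rotation-to-location mapping in this method can be modeled minimally as follows, assuming the rotation reduces to a yaw angle that selects which location of the video the visible face of the 3D object previews. The class and method names are hypothetical.

```python
class PreviewObject:
    """Minimal model of a rotatable 3D preview object: each rotation event
    shifts which location of the VR video the visible face corresponds to."""

    def __init__(self, yaw_deg=0.0):
        self.yaw_deg = yaw_deg  # current preview location, as yaw in degrees

    def rotate(self, delta_deg):
        """Apply a rotation event and return the new preview location,
        wrapping around the full 360-degree extent of the video."""
        self.yaw_deg = (self.yaw_deg + delta_deg) % 360.0
        return self.yaw_deg
```

The first preview corresponds to the initial yaw; after a rotation event, the returned yaw selects the second preview's location within the video.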

VIRTUAL REALITY INTERACTION METHOD, DEVICE AND SYSTEM
20220385884 · 2022-12-01 ·

A method and system for aligning the exposure center points of multiple cameras in a VR system are provided. The method includes: acquiring image data of a first-type frame at a preset frame rate; adjusting VTS data of the first-type frame, the VTS data changing as the exposure parameters change, so as to fix the time interval between the exposure center point of the first-type frame and an FSIN synchronization signal in the VR system; acquiring image data of a second-type frame at the preset frame rate; and adjusting VTS data of the second-type frame according to the VTS data of the first-type frame, fixing the time interval between the exposure center point of the second-type frame and the FSIN synchronization signal, so as to complete the alignment of the exposure center points of the cameras.
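One plausible reading of the timing arithmetic behind this: lengthening the exposure by d line periods moves the exposure center by d/2, so the VTS (total frame length in lines) can be shifted by the opposite amount to keep the center at a fixed interval from FSIN. The sketch below illustrates that compensation; the function name, unit conventions, and sign convention are assumptions, not taken from the patent.

```python
def adjust_vts(base_vts, base_exposure_lines, new_exposure_lines):
    """Return a compensated VTS (frame length in line periods) so that the
    exposure center point stays a fixed interval from the FSIN signal.

    Changing the exposure by d lines moves its center by d/2 line periods;
    shifting the frame length by the opposite amount cancels that movement.
    """
    delta_center = (new_exposure_lines - base_exposure_lines) / 2.0
    return base_vts - delta_center
```

Applying the same compensation to the second-type frames, referenced to the VTS chosen for the first-type frames, keeps both exposure centers locked to the shared FSIN signal.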

SPATIAL IMAGE CLUSTER FOR VEHICLE
20220379726 · 2022-12-01

A cluster according to an embodiment of the disclosure includes a display and a spatial image panel. The display is installed in the vehicle to output predetermined information as a 2D image. The spatial image panel is configured to output a 3D image in a predetermined space in front of it. The spatial image panel includes a first lens array, a second lens array, and a refractive medium. The first lens array is disposed adjacent to the display and includes a plurality of first lenses arranged on the same plane. The second lens array is disposed in parallel with the first lens array so that the first lenses and the second lenses overlap each other. The refractive medium is disposed between the first lens array and the second lens array.

CALIBRATING SENSOR ALIGNMENT WITH APPLIED BENDING MOMENT

Examples are disclosed that relate to calibration data related to a determined alignment of sensors on a wearable display device. One example provides a wearable display device comprising a frame, a first sensor and a second sensor, one or more displays, a logic system, and a storage system. The storage system comprises calibration data related to a determined alignment of the sensors with the frame in a bent configuration, as well as instructions executable by the logic system. The instructions are executable to obtain first sensor data and second sensor data respectively from the first and second sensors, determine a distance from the wearable display device to a feature based at least upon the first and second sensor data using the calibration data, obtain a stereo image to display based upon the distance from the wearable display device to the feature, and output the stereo image via the displays.
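The distance determination can be sketched with the standard stereo range equation Z = f·B/d, where the stored calibration enters as a correction to the measured disparity: bending the frame shifts the sensors' relative alignment, which offsets the disparity. The bias term and function name below are illustrative assumptions, not the device's actual calibration model.

```python
def distance_from_disparity(focal_px, baseline_m, disparity_px,
                            disparity_bias_px=0.0):
    """Estimate range from two-sensor disparity: Z = f * B / d.

    `disparity_bias_px` stands in for the stored calibration data: it
    corrects the measured disparity for the frame's bent configuration
    before the range equation is applied.
    """
    d = disparity_px - disparity_bias_px
    if d <= 0:
        raise ValueError("corrected disparity must be positive")
    return focal_px * baseline_m / d
```

With the corrected distance in hand, the device can render the left- and right-eye views of the stereo image at a depth consistent with the feature.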