H04N13/351

SYSTEM AND METHOD FOR CAPTURING AND VIEWING PANORAMIC IMAGES HAVING MOTION PARALLAX DEPTH PERCEPTION WITHOUT IMAGE STITCHING
20170366803 · 2017-12-21

A system for acquiring a sequence of image frames for display having depth perception through motion parallax includes a base unit, a stage unit, and a camera unit. The stage unit is disposed over the base unit, is configured to rotate with respect to the base unit about an axis of rotation, and is configured to hold the camera unit thereon at a predetermined offset, as measured from the axis of rotation to a no-parallax point or least-parallax point of the camera unit. The camera unit is configured to acquire the sequence of image frames as it is rotated about the axis of rotation by the stage unit while being kept at the predetermined offset. The predetermined offset is a positive distance value.
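The geometry described above can be sketched briefly: as the stage rotates, the camera's no-parallax point traces a circle of radius equal to the predetermined offset around the axis of rotation. The helper below is a hypothetical illustration of where that point sits at each capture angle; the function name and frame count are assumptions, not from the patent.

```python
import math

def npp_positions(offset, num_frames):
    """Positions of the camera's no-parallax point over one full stage
    rotation about the axis (taken as the origin), with the point held
    at a fixed positive offset from the axis. Illustrative sketch only."""
    assert offset > 0, "the predetermined offset must be a positive distance"
    positions = []
    for i in range(num_frames):
        theta = 2.0 * math.pi * i / num_frames  # capture angle for frame i
        positions.append((offset * math.cos(theta), offset * math.sin(theta)))
    return positions
```

Because the offset is strictly positive, successive frames are captured from slightly different vantage points, which is what yields motion parallax on playback without any stitching.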

UPDATING AN ASSET WITHIN A VIRTUAL REALITY ENVIRONMENT

A method executed by a computing entity includes generating a virtual reality environment utilizing a group of object representations by rendering an assessment asset for a first set of object representations to produce first portrayal 3-D video frames of a first piece of information for the virtual reality environment. The method further includes obtaining a first conveyance effectiveness level with regard to the first piece of information based on the first portrayal 3-D video frames within the virtual reality environment. When the first conveyance effectiveness level is less than a minimum conveyance effectiveness threshold level, the method further includes rendering an updated first set of object representations based on the first set of object representations and the first conveyance effectiveness level to produce updated first portrayal 3-D video frames for the first piece of information within the virtual reality environment.
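The render/assess/update loop described above can be sketched as follows. All function names here are placeholders standing in for the abstract's steps (rendering frames, obtaining a conveyance effectiveness level, updating the object representations), not an actual API.

```python
def update_until_effective(render, assess, update, object_reps, threshold, max_iters=5):
    """Render frames from the object representations, assess the conveyance
    effectiveness level, and re-render with updated representations while
    the level stays below the threshold. Hypothetical sketch of the loop."""
    frames = render(object_reps)
    level = assess(frames)
    for _ in range(max_iters):
        if level >= threshold:
            break  # information is being conveyed effectively enough
        object_reps = update(object_reps, level)  # update step from the abstract
        frames = render(object_reps)
        level = assess(frames)
    return frames, level
```

Passing the update step in as a callable keeps the loop agnostic to how the representations are actually refined, which is the part the patent leaves to the conveyance effectiveness feedback.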

Multi-view display control

A multi-view display controller determines view angles for each view of a multi-view media content for each viewer watching a multi-view display. The view angles determined for a viewer collectively define a viewer cone within which the views are directed toward the viewer. Media data of the multi-view media content is output together with information of the determined view angles to the multi-view display in order to allow each viewer to have the same experience of the displayed media content regardless of where the viewer is positioned relative to the multi-view display.
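One way to read the viewer-cone idea is that the per-view angles are centered on each viewer's angular position relative to the display and spread evenly across a fixed cone width, so every viewer receives the same sequence of views. The sketch below assumes that simple geometry; the cone width and coordinate convention are illustrative choices, not the patent's.

```python
import math

def viewer_cone_angles(viewer_x, viewer_z, num_views, cone_width_deg):
    """Per-view angles (degrees) for one viewer: center the cone on the
    viewer's direction as seen from the display center (origin) and spread
    the views evenly across the cone width. Hypothetical sketch."""
    center = math.degrees(math.atan2(viewer_x, viewer_z))  # viewer's angular position
    if num_views == 1:
        return [center]
    step = cone_width_deg / (num_views - 1)
    start = center - cone_width_deg / 2.0
    return [start + i * step for i in range(num_views)]
```

A viewer directly in front of the display and one off to the side then get identically spaced view angles, just shifted to point at their respective positions.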

Method and System for Evaluating Viewpoint Density, Processing Device, and Computer Storage Medium
20230188700 · 2023-06-15

A method, a system, a processing device, and a computer storage medium for evaluating a viewpoint density are provided. The method includes: acquiring a quantity of viewpoints of a display panel; comparing the image spot radius of each viewpoint with the image point spacing between that viewpoint and an adjacent viewpoint; selecting one viewpoint as a reference viewpoint and calculating a crosstalk value between each other viewpoint and the reference viewpoint; and evaluating the viewpoint density of the auto-stereoscopic display according to the comparison of the image spot radius with the image point spacing and the calculated crosstalk values.
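A toy version of that evaluation rule is sketched below: a spot radius smaller than the neighbour spacing means adjacent viewpoints do not overlap, and a crosstalk value below a threshold means the current density is tolerable. The threshold value and the returned labels are assumptions for illustration; the patent does not specify them.

```python
def evaluate_viewpoint_density(spot_radius, spacing, crosstalk, max_crosstalk=0.05):
    """Evaluate viewpoint density from the spot-radius/spacing comparison
    and the measured crosstalk against the reference viewpoint.
    Thresholds and labels are illustrative assumptions."""
    if spot_radius < spacing and crosstalk <= max_crosstalk:
        # spots do not overlap and crosstalk is low: room for more viewpoints
        return "density can be increased"
    if spot_radius >= spacing and crosstalk > max_crosstalk:
        # overlapping spots with high crosstalk: too many viewpoints
        return "density too high"
    return "density acceptable"
```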

AUTOSTEREOSCOPIC MULTI-VIEW SYSTEM
20170347083 · 2017-11-30

A method for the autostereoscopic representation of images on a display screen includes the steps of selecting a view mode from a plurality of pre-defined view modes; creating a channel mask that defines a number N of channels per segment of the optical plate, wherein N is larger than or equal to the number of views in the selected view mode; providing a texture for each of the N channels; correlating each screen pixel with at least one texture by reference to the channel mask; and applying an allocation algorithm for allocating the total of the image information to be displayed at a time to at least two textures such that each texture includes the information for one view.
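The channel mask and allocation steps above can be sketched in minimal form. The cyclic pixel-to-channel layout and the wrap-around of views onto extra channels are simplifying assumptions standing in for the patent's segment geometry of the optical plate.

```python
def build_channel_mask(num_pixels_x, n_channels):
    """Channel mask: assign each screen pixel column to one of the N
    channels by cycling through them. A simple cyclic layout is assumed
    in place of the optical plate's actual segment geometry."""
    return [x % n_channels for x in range(num_pixels_x)]

def allocate_views_to_textures(views, n_channels):
    """Allocate the image information of the selected view mode across the
    N channel textures so each texture holds the information for one view;
    channels beyond the view count wrap around (an illustrative choice)."""
    return [views[c % len(views)] for c in range(n_channels)]
```

A pixel is then rendered by looking up its channel in the mask and sampling the texture allocated to that channel.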

Apparatus and method for displaying multi-depth image
11677929 · 2023-06-13

A method is provided for displaying a multi-depth image in which one or more images are inserted into a main image in a tree structure and one or more objects are mapped to at least some of the one or more images. The method comprises generating at least one content tree from the tree structure of the multi-depth image; and configuring a display area corresponding to each of the at least one content tree. The display area includes a first area for displaying an image corresponding to each node of the content tree, and a second area for displaying an object corresponding to each node of the content tree or a playback user interface (UI) of the object. The first area and the second area respectively display the image and the object mapped thereto or the playback UI in synchronization with each other.
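A minimal sketch of the data structure and the content-tree step follows. The `Node` class and the depth-first pairing of each node's image with its mapped object are hypothetical simplifications; the patent does not specify a traversal order.

```python
class Node:
    """One node of a multi-depth image: an image, an optionally mapped
    object, and child images inserted into it (hypothetical structure)."""
    def __init__(self, image, obj=None, children=None):
        self.image = image
        self.obj = obj
        self.children = children or []

def content_tree(node):
    """Flatten the multi-depth image tree into (image, object) pairs in
    depth-first order, pairing the first area's image with the second
    area's object/playback UI for synchronized display. Sketch only."""
    pairs = [(node.image, node.obj)]
    for child in node.children:
        pairs.extend(content_tree(child))
    return pairs
```

Each pair then drives the two display areas together: the first area shows the image, and the second shows the mapped object or its playback UI in sync.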

Systems and methods for projecting images from light field displays based on positional tracking data
11675212 · 2023-06-13

Systems and methods presented herein include light field displays configured to display primary autostereoscopic images and to simultaneously project light rays toward display devices (e.g., either reflective devices or cameras) to display secondary autostereoscopic images via the display devices. The light rays projected from the light field displays are controlled by a control system based at least in part on positional tracking data (e.g., position, orientation, and/or movement) of the display devices and/or of a portion of humans associated with the display devices, which may be detected via sensors of the display devices and/or via cameras disposed about a physical environment within which the display devices and the humans are located. Specifically, the control system calculates light field vector functions for light rays to be projected toward each individual display device based at least in part on positional tracking data for that particular display device and/or its associated human.
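A building block of such a per-device light field vector function is the direction from an emitter on the display to the tracked device. The sketch below computes that unit direction from positional tracking data; the names and the flat 3-D coordinate convention are illustrative assumptions.

```python
import math

def ray_toward_device(emitter, device_pos):
    """Unit direction vector from a light-field display emitter to a
    tracked display device, the kind of per-device quantity a light
    field vector function could be built from. Illustrative only."""
    dx = device_pos[0] - emitter[0]
    dy = device_pos[1] - emitter[1]
    dz = device_pos[2] - emitter[2]
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / norm, dy / norm, dz / norm)
```

Recomputing this direction as the tracking data updates is what lets the projected secondary image follow a moving device or viewer.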

Systems and methods for projecting images from light field displays based on reflected light rays
11675213 · 2023-06-13

Systems and methods presented herein include light field displays configured to display primary autostereoscopic images and to simultaneously project (e.g., in real time, while displaying their own primary autostereoscopic images) light rays toward display devices (e.g., either reflective devices or cameras) to display secondary autostereoscopic images via the display devices. The light rays projected from the light field displays are controlled by a control system based at least in part on positional data (e.g., position, orientation, and/or movement) of the display devices, which may be determined by the control system based at least in part on detection of light rays that are reflected off the display devices.

Autostereoscopic 3D image display device for flattening viewing zone and minimizing dynamic crosstalk

The present invention relates to a 3D image display device that includes an image display panel for displaying a 3D image, a control unit for controlling a viewpoint image, and a viewer position tracking system for determining the position of a viewer's pupil and transmitting positional information to the control unit. The image display panel provides multiple viewpoints, such as four or more viewpoints, and the brightness at the intersection of the viewing zone for any one of the multiple viewpoints with that of an adjacent viewpoint is at least 85% of the maximum brightness of one viewpoint.
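The 85% crossover criterion can be checked numerically under an assumed brightness model. The sketch below models each viewing zone as an identical Gaussian profile, which is purely an assumption for illustration; the patent only states the threshold itself.

```python
import math

def crossover_brightness(zone_center_a, zone_center_b, sigma):
    """Brightness at the crossover point between two adjacent viewing
    zones, modelled as identical Gaussian profiles (an assumed model),
    returned as a fraction of the peak brightness of one viewpoint."""
    midpoint = (zone_center_a + zone_center_b) / 2.0
    return math.exp(-((midpoint - zone_center_a) ** 2) / (2.0 * sigma ** 2))

def zone_is_flat(zone_center_a, zone_center_b, sigma, threshold=0.85):
    """True when adjacent zones intersect at or above 85% of one
    viewpoint's maximum brightness: the flattened-viewing-zone condition."""
    return crossover_brightness(zone_center_a, zone_center_b, sigma) >= threshold
```

Wide zone profiles relative to the zone spacing satisfy the condition (a flat combined brightness across zones, reducing dynamic crosstalk as the tracked pupil moves), while narrow profiles fail it.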