H04N13/351

Display Device

A display device is provided. The display device includes a plurality of sub-pixels and a light splitting structure including a plurality of light splitting portions. Each of the plurality of sub-pixels includes a plurality of display units. The plurality of sub-pixels are arranged as a plurality of sub-pixel row groups, each of which includes at least two rows of sub-pixels; a gap is provided between two adjacent sub-pixels in each row of sub-pixels, and two adjacent rows of sub-pixels in each sub-pixel row group are shifted from each other in a row direction so that a sub-pixel in one row and the gap between two adjacent sub-pixels in the other row are offset from each other. A ratio of a size of each light splitting portion along the row direction to a pitch of the sub-pixels is in a range from 0.9 to 1.1.
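The pitch-ratio constraint above can be checked with a minimal sketch; the function and parameter names (`lens_width_um`, `subpixel_pitch_um`) are illustrative assumptions, not terms from the patent:

```python
# Hedged sketch: verify the claimed geometric constraint that the size of a
# light splitting portion along the row direction stays within 0.9-1.1 of
# the sub-pixel pitch. Units (micrometers) are an illustrative assumption.

def splitting_ratio_ok(lens_width_um: float, subpixel_pitch_um: float) -> bool:
    """Return True if the width-to-pitch ratio lies in [0.9, 1.1]."""
    ratio = lens_width_um / subpixel_pitch_um
    return 0.9 <= ratio <= 1.1

print(splitting_ratio_ok(30.0, 30.5))  # ratio ~0.98 -> True
print(splitting_ratio_ok(30.0, 25.0))  # ratio 1.2 -> False
```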

Auto-stereoscopic display device with a striped backlight and two lenticular lens arrays

An autostereoscopic display device comprising a backlight (66), a display panel (62) comprising rows and columns of pixels, and a lenticular arrangement (60, 64). The backlight (66) provides a striped output comprising stripes in the column direction or offset by an acute angle to the column direction. The lenticular arrangement comprises a first lenticular lens array (60) on the side of the display panel (62) facing the display output, for directing different display panel pixel outputs in different directions, and a second lenticular lens array (64) on the opposite side of the display panel (62), facing the backlight (66), for collimating the striped backlight output.

Hybrid stereo rendering for depth extension in dynamic light field displays
11483543 · 2022-10-25

An apparatus and method for hybrid rendering. For example, one embodiment of a method comprises: identifying left and right views of a user's eyes; generating at least one depth map for the left and right views; calculating depth clamping thresholds including a minimum depth value and a maximum depth value; transforming the depth map in accordance with the minimum depth value and maximum depth value; and performing view synthesis to render left and right views using the transformed depth map.
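The depth-clamping step described above can be sketched as follows; the abstract only states that minimum and maximum depth values are calculated and the depth map transformed accordingly, so the function below is a hedged illustration of that clamping, with assumed names and thresholds:

```python
import numpy as np

# Hedged sketch: transform a per-view depth map in accordance with
# precomputed minimum and maximum depth values before view synthesis.
# How d_min and d_max are chosen is not specified in the abstract; here
# they are simply passed in.

def clamp_depth_map(depth: np.ndarray, d_min: float, d_max: float) -> np.ndarray:
    """Clamp every depth sample into the range [d_min, d_max]."""
    return np.clip(depth, d_min, d_max)

depth = np.array([0.1, 1.0, 5.0, 50.0])
clamped = clamp_depth_map(depth, d_min=0.5, d_max=10.0)
print(clamped)  # [ 0.5  1.   5.  10. ]
```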

Precision multi-view display

A precision multi-view (MV) display system can accurately and simultaneously display different content to different viewers over a wide field of view. The MV display system may include features that enable individual MV display devices to be easily and efficiently tiled to form a larger MV display. A graphical interface enables a user to graphically specify viewing zones and associate content that will be visible in those zones in a simple manner. A calibration procedure enables the specification of content at precise viewing locations.

INFORMATION PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD
20230128305 · 2023-04-27

There is provided an image processing system including a first information processing apparatus and a second information processing apparatus. The first information processing apparatus performs processing, for a plurality of captured images simultaneously captured by a plurality of image capturing devices, for identifying a generation target image section for which a free-viewpoint image is to be generated; performs control to transmit image data for the generation target image section in each of the plurality of captured images as image data used for generation of a free-viewpoint image in the second information processing apparatus; and generates an output image including the received free-viewpoint image. The second information processing apparatus performs processing for acquiring the image data for the generation target image section in each of the plurality of captured images, performs processing for generating a free-viewpoint image by using the acquired image data, and performs control to transmit the generated free-viewpoint image to the first information processing apparatus.

Method for optimized viewing experience and reduced rendering for autostereoscopic 3D, multiview and volumetric displays

A system and method for creating an improved three-dimensional image includes several steps. One step includes providing one or more adjacent viewing zones, where each of the adjacent viewing zones includes several views of content, the central subset views are the centrally located views within each viewing zone, and the transition subset views are the views at the edges of each viewing zone. Another step includes inserting at least one of the central subset views into the transition zone to create an expanded transition zone. A further step includes removing at least one transition subset view from the adjacent viewing zone and replacing it with the inserted at least one central subset view.
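The view-replacement steps above can be sketched in a few lines; views are modeled as plain labels, and the zone layout, names, and symmetric one-sided transition width are illustrative assumptions rather than details from the patent:

```python
# Hedged sketch: replace the transition subset views at the edges of a
# viewing zone with duplicates of the nearest central subset views, so
# fewer distinct views need to be rendered.

def expand_transitions(zone_views: list, n_transition: int) -> list:
    """Replace the outermost n_transition views on each side of the zone
    with copies of the nearest centrally located views."""
    central = zone_views[n_transition:-n_transition]
    return ([central[0]] * n_transition) + central + ([central[-1]] * n_transition)

views = ["v0", "v1", "v2", "v3", "v4", "v5"]
print(expand_transitions(views, 1))  # ['v1', 'v1', 'v2', 'v3', 'v4', 'v4']
```

Duplicating central views into the transition regions widens the region over which a stable image is seen, at the cost of fewer distinct viewpoints near zone boundaries.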

METHOD AND APPARATUS FOR VIRTUAL SPACE CONSTRUCTING BASED ON STACKABLE LIGHT FIELD

The electronic apparatus includes a memory storing a multiple light field unit (LFU) structure in which a plurality of light fields is arranged in a lattice structure, and a processor configured to, based on a view position within the lattice structure being determined, generate a 360-degree image for the view position by using the multiple LFU structure. The processor is configured to select the LFU to which the view position belongs from the multiple LFU structure, allocate a rendering field-of-view (FOV) in predetermined degrees based on the view position, generate a plurality of view images based on a plurality of light fields of the selected LFU and the allocated FOV, and generate the 360-degree image by combining the generated plurality of view images.
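The LFU-selection and FOV-allocation steps can be sketched as below; the unit lattice spacing and the equal-angle FOV split are illustrative assumptions, since the abstract only says the LFU containing the view position is selected and FOVs are allocated in predetermined degrees:

```python
# Hedged sketch: pick the light field unit (LFU) cell containing a view
# position on a 2D lattice, then split 360 degrees into equal rendering
# FOVs, one per view image to be generated and combined.

def select_lfu(x: float, y: float, cell: float = 1.0) -> tuple:
    """Return the lattice indices of the LFU cell containing (x, y)."""
    return (int(x // cell), int(y // cell))

def allocate_fovs(n_views: int = 4) -> list:
    """Split 360 degrees into n_views equal angular rendering FOVs."""
    step = 360.0 / n_views
    return [(i * step, (i + 1) * step) for i in range(n_views)]

print(select_lfu(2.3, 0.7))  # (2, 0)
print(allocate_fovs(4))      # [(0.0, 90.0), (90.0, 180.0), (180.0, 270.0), (270.0, 360.0)]
```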