H04N13/194

Distribution of multiple signals of video content independently over a network

A stereoscopic production solution, e.g., for live events, that provides 3D video asset distribution to multiple devices and networks is described. In some embodiments, live or recorded 3D video content may be accessible by different service providers with different subscribers/users and protocols across a network of the content provider. A first video signal corresponding to a first video feed for one eye of a viewer may be received and a second video signal corresponding to a second video feed for the second eye of the viewer may be received. The first video signal and the second video signal may be encoded. The encoded first video signal and the encoded second video signal may be transmitted independently over a network. The two video signals may be received and frame synced at an off-site location for eventual rendering to a display device.
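Because the two eye signals travel independently over the network, the receiving side must re-pair frames before rendering. Below is a minimal sketch of timestamp-based frame syncing, assuming each decoded frame arrives tagged with a capture timestamp; the function and tuple layout are illustrative, not taken from the patent:

```python
def frame_sync(left_frames, right_frames):
    """Pair left-eye and right-eye frames that share a capture timestamp.

    Each input is a list of (timestamp_seconds, frame) tuples; frames may
    arrive in any order because the two signals are transmitted independently.
    """
    # Quantize timestamps to whole milliseconds so float jitter cannot
    # break the match between the two streams.
    right_by_ts = {round(ts * 1000): frame for ts, frame in right_frames}
    synced = []
    for ts, left in sorted(left_frames):
        key = round(ts * 1000)
        if key in right_by_ts:  # frames with no matching pair are dropped
            synced.append((ts, left, right_by_ts[key]))
    return synced
```

The synced triples can then be handed to the renderer as stereo pairs; frames whose counterpart never arrived are simply skipped rather than shown to one eye only.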

Enabling motion parallax with multilayer 360-degree video

Systems and methods are described for simulating motion parallax in 360-degree video. In an exemplary embodiment for producing video content, a method includes: obtaining a source video; determining, based on information received from a client device, a selected number of depth layers; producing, from the source video, a plurality of depth layer videos corresponding to the selected number of depth layers, wherein each depth layer video is associated with at least one respective depth value and includes regions of the source video having depth values corresponding to that associated depth value; and sending the plurality of depth layer videos to the client device.
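The layer-production step can be sketched as thresholding a per-pixel depth map into bands, one band per requested layer. This illustration assumes the frame and its depth map are plain 2D lists and that the bands are uniform; the names and the uniform-band choice are assumptions, not details from the abstract:

```python
def split_into_depth_layers(frame, depth_map, num_layers, max_depth):
    """Split one frame into depth-layer images.

    A pixel lands in the layer whose depth band contains its depth value;
    positions outside a layer's band stay None (i.e., transparent).
    """
    height, width = len(frame), len(frame[0])
    layers = [[[None] * width for _ in range(height)] for _ in range(num_layers)]
    band = max_depth / num_layers  # uniform depth-band width per layer
    for y in range(height):
        for x in range(width):
            idx = min(int(depth_map[y][x] / band), num_layers - 1)
            layers[idx][y][x] = frame[y][x]
    return layers
```

A client requesting more layers gets finer depth quantization, and hence smoother parallax, at the cost of more video streams to decode.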

Signaling a cancel flag in a video bitstream
11706398 · 2023-07-18

A method of coding implemented by a video encoder. The method includes encoding a representation of video data into a bitstream, the bitstream being prohibited from including both a fisheye supplemental enhancement information (SEI) message and either a projection indication SEI message or a frame packing indication SEI message that apply to the same coded picture in the bitstream; and transmitting the bitstream to a video decoder.
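The prohibition amounts to a per-picture mutual-exclusion rule over SEI message types. A conformance check for one coded picture could look like the following sketch, where the string labels are illustrative stand-ins for the actual SEI payload type codes:

```python
def picture_sei_conformant(sei_types):
    """Return True if the set of SEI message types applying to one coded
    picture satisfies the constraint: a fisheye SEI message must not
    coexist with a projection indication or frame packing indication
    SEI message for the same picture."""
    has_fisheye = 'fisheye' in sei_types
    has_conflicting = ('projection_indication' in sei_types
                       or 'frame_packing_indication' in sei_types)
    return not (has_fisheye and has_conflicting)
```

An encoder would apply such a check per picture before emitting the bitstream; a decoder could use the same rule to reject non-conformant input.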

Positional zero latency

Based on view tracking data, a viewer's view direction to a three-dimensional (3D) scene depicted by a first video image is determined. The first video image has been streamed in a video stream to a streaming client device before a first time point and rendered by the streaming client device to the viewer at the first time point. Based on the viewer's view direction, a target view portion is identified in a second video image to be streamed in the video stream to the streaming client device and rendered at a second time point subsequent to the first time point. The target view portion is encoded into the video stream with a higher target spatiotemporal resolution than that used to encode the remaining non-target view portions of the second video image.
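One common way to realize this unequal encoding is tiled streaming: tiles that overlap the viewer's predicted viewport are sent at the high target resolution and the rest at a lower one. The sketch below works over horizontal yaw tiles of an equirectangular frame; the tile count and field-of-view defaults are made-up parameters, not values from the patent:

```python
def assign_tile_quality(view_yaw_deg, fov_deg=90.0, num_tiles=8):
    """Label each horizontal tile 'high' or 'low' depending on whether its
    center falls inside the viewer's horizontal field of view."""
    tile_width = 360.0 / num_tiles
    qualities = []
    for i in range(num_tiles):
        center = i * tile_width + tile_width / 2
        # Shortest angular distance from the tile center to the view direction.
        diff = abs((center - view_yaw_deg + 180.0) % 360.0 - 180.0)
        qualities.append('high' if diff <= fov_deg / 2 else 'low')
    return qualities
```

Re-running this per rendered frame keeps the high-resolution region tracking the viewer, which is what hides the streaming latency behind the viewer's head motion.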

Depth map re-projection on user electronic devices

A method includes rendering, on displays of an extended reality (XR) display device, a first sequence of image frames based on image data received from an external electronic device associated with the XR display device. The method further includes detecting an interruption to the image data received from the external electronic device, and accessing a plurality of feature points from a depth map corresponding to the first sequence of image frames. The plurality of feature points includes movement and position information of one or more objects within the first sequence of image frames. The method further includes performing a re-warping to at least partially re-render the one or more objects based at least in part on the plurality of feature points and spatiotemporal data, and rendering a second sequence of image frames corresponding to the partial re-rendering of the one or more objects.
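The re-warping step can be approximated by extrapolating each stored feature point along its last known velocity for the duration of the interruption. A toy sketch under that assumption follows; the point structure and function name are hypothetical:

```python
def rewarp_feature_points(feature_points, elapsed_s):
    """Extrapolate feature-point positions across an image-data interruption.

    Each feature point carries a 3D position and a velocity estimated from
    the first sequence of image frames; positions are advanced by elapsed_s
    seconds so the objects can be partially re-rendered in place until
    fresh image data resumes.
    """
    warped = []
    for fp in feature_points:
        new_pos = tuple(p + v * elapsed_s
                        for p, v in zip(fp['pos'], fp['vel']))
        warped.append({'pos': new_pos, 'vel': fp['vel']})
    return warped
```

The warped points would then drive the second sequence of image frames, keeping tracked objects moving plausibly even while the link to the external device is down.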

Data generation method, driving method, computer device, display apparatus and system

Disclosed are a method for generating display data by a rotatory stereoscopic display apparatus, a display driving method, a computer device, a rotatory stereoscopic display apparatus, and a stereoscopic display system. The method for generating display data includes: generating, based on display parameters of the rotatory stereoscopic display apparatus and a model to be displayed, an image array for displaying the model; generating, for an image in the image array, an initial data stream of the image, the initial data stream including the grayscale datum of each pixel in the image; and performing data compression on the initial data stream to generate a compressed data stream, the compressed data stream including data units for the pixels whose grayscale data are non-zero, each data unit including the grayscale datum of the pixel and the order (i.e., position) of that datum in the initial data stream.
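The described compression keeps only the non-zero pixels, each stored as a data unit holding its grayscale value and its order in the initial stream. A minimal round-trip sketch of that scheme:

```python
def compress_stream(initial_stream):
    """Keep (order, grayscale) data units for pixels whose grayscale is non-zero."""
    return [(order, g) for order, g in enumerate(initial_stream) if g != 0]

def decompress_stream(data_units, length):
    """Rebuild the initial data stream, filling unlisted positions with zero."""
    stream = [0] * length
    for order, g in data_units:
        stream[order] = g
    return stream
```

Frames for a rotatory volumetric display are typically sparse (most pixels dark at any instant), so storing only the lit pixels with their positions can shrink the stream substantially.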