H04N13/161

Method and apparatus for transmitting video content using edge computing service
11570486 · 2023-01-31

An example method, performed by an edge data network, of transmitting video content includes: obtaining first bearing information from an electronic device connected to the edge data network; determining second predicted bearing information based on the first bearing information; determining a second predicted partial image corresponding to the second predicted bearing information; transmitting, to the electronic device, a second predicted frame generated by encoding the second predicted partial image; obtaining, from the electronic device, second bearing information corresponding to a second partial image; comparing the second predicted bearing information to the obtained second bearing information; generating, based on a result of the comparing, a compensation frame using at least two of a first partial image corresponding to the first bearing information, the second predicted partial image, or the second partial image corresponding to the second bearing information; and transmitting the generated compensation frame to the electronic device based on the result of the comparing.
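The flow in the abstract (predict the next bearing, compare it to the bearing the device actually reports, and decide whether a compensation frame is needed) can be sketched as follows. This is an illustrative sketch only: the function names, the linear-extrapolation prediction model, and the angular tolerance are assumptions, not taken from the patent.

```python
# Hypothetical sketch of the bearing-prediction / compensation decision
# described in the abstract. Bearings are modeled as (yaw, pitch, roll) tuples.

def predict_bearing(first_bearing, angular_velocity):
    """One simple prediction model: linearly extrapolate the first bearing
    using an estimated angular velocity (illustrative assumption)."""
    return tuple(b + v for b, v in zip(first_bearing, angular_velocity))

def needs_compensation(predicted_bearing, actual_bearing, tolerance_deg=5.0):
    """Compare the predicted bearing to the bearing reported by the device.
    If any axis diverges beyond the tolerance, the edge data network would
    generate and transmit a compensation frame."""
    return any(abs(p - a) > tolerance_deg
               for p, a in zip(predicted_bearing, actual_bearing))
```

In this sketch, `needs_compensation` stands in for the "comparing" step; the compensation frame itself would then be built from two or more of the partial images named in the abstract.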

A METHOD AND APPARATUS FOR ENCODING AND DECODING OF MULTIPLE-VIEWPOINT 3DOF+ CONTENT

A method for encoding volumetric video content representative of a 3D scene is disclosed. The method comprises obtaining a reference viewing bounding box and an intermediate viewing bounding box defined within the 3D scene. For the reference viewing bounding box, the volumetric video reference sub-content is encoded as a central image and peripheral patches for parallax. For the intermediate viewing bounding box, the volumetric video intermediate sub-content is encoded as intermediate central patches, which are differences between the intermediate central image and the reference central image.
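The core idea of the intermediate central patches (encode differences against the reference central image, then add them back on decode) can be sketched as below. The function names and the plain per-pixel subtraction are illustrative assumptions; the patent's actual patch generation is not specified at this level.

```python
# Illustrative sketch: an intermediate central patch as a per-pixel difference
# between the intermediate central image and the reference central image.
# Images are modeled as 2D lists of numeric sample values.

def difference_patch(intermediate_image, reference_image):
    """Encode side: store only what differs from the reference."""
    return [[i - r for i, r in zip(irow, rrow)]
            for irow, rrow in zip(intermediate_image, reference_image)]

def reconstruct(reference_image, patch):
    """Decode side: reference plus difference recovers the intermediate image."""
    return [[r + d for r, d in zip(rrow, prow)]
            for rrow, prow in zip(reference_image, patch)]
```

The benefit this models is compactness: when the two viewing boxes see mostly the same scene, the difference patch is mostly zeros and compresses well.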

IMAGE PROCESSING DEVICE
20230231965 · 2023-07-20

An image processing device includes a rotation processor and an image processor. The rotation processor receives an input image and generates a temporary image according to the input image. The image processor is coupled to the rotation processor and outputs a processed image according to the temporary image, wherein the image processor has a predetermined image processing width, a width of the input image is larger than the predetermined image processing width, and a width of the temporary image is less than the predetermined image processing width.
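One way the width constraint in this abstract can be satisfied is if the rotation processor turns a too-wide image so that its (smaller) height becomes the temporary image's width. The sketch below illustrates that idea only; the function names and the 90-degree rotation are assumptions, not details from the patent.

```python
# Illustrative sketch: rotate an input image whose width exceeds the image
# processor's predetermined processing width, so the temporary image fits.
# Images are modeled as 2D lists (rows of pixel values).

def rotate_90(image):
    """Rotate 90 degrees clockwise: a WxH input becomes an HxW output."""
    return [list(row) for row in zip(*image[::-1])]

def fit_for_processing(image, max_width):
    """Produce a temporary image whose width is within the processor's
    predetermined image processing width, rotating if necessary."""
    if len(image[0]) > max_width:
        image = rotate_90(image)  # width of the result is the original height
    return image
```

Here a 3-pixel-wide, 2-pixel-tall input with a processing width of 2 is rotated into a 2-pixel-wide temporary image, matching the relationship the abstract states between the three widths.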

Signaling a cancel flag in a video bitstream
11706398 · 2023-07-18

A method of coding implemented by a video encoder. The method includes encoding a representation of video data into a bitstream, the bitstream being prohibited from including both a fisheye supplemental enhancement information (SEI) message and one of a projection indication SEI message and a frame packing indication SEI message that apply to any particular coded picture in the bitstream; and transmitting the bitstream to a video decoder.
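The bitstream constraint here is a mutual-exclusion rule per coded picture. A conformance check for that rule could be sketched as follows; the message labels and the set-based representation are illustrative assumptions, not the patent's syntax.

```python
# Illustrative conformance check: for each coded picture, a fisheye SEI
# message must not co-occur with a projection-indication or
# frame-packing-indication SEI message applying to that same picture.

FORBIDDEN_WITH_FISHEYE = {"projection_indication", "frame_packing_indication"}

def bitstream_constraint_ok(sei_messages_per_picture):
    """Each element is the set of SEI message types applying to one picture."""
    for messages in sei_messages_per_picture:
        if "fisheye" in messages and (FORBIDDEN_WITH_FISHEYE & messages):
            return False
    return True
```

Note the rule is per picture: a fisheye SEI on one picture and a projection indication SEI on a different picture would not, in this reading, violate the constraint.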

Positional zero latency

Based on view tracking data, a viewer's view direction to a three-dimensional (3D) scene depicted by a first video image is determined. The first video image was streamed in a video stream to a streaming client device before a first time point and rendered by the streaming client device to the viewer at the first time point. Based on the viewer's view direction, a target view portion is identified in a second video image to be streamed in the video stream to the streaming client device and rendered at a second time point subsequent to the first time point. The target view portion is encoded into the video stream with a higher target spatiotemporal resolution than that used to encode the remaining non-target view portions of the second video image.
