H04N19/436

Bitstream signaling of error mitigation in sub-picture bitstream based viewport dependent video coding

A video coding mechanism for viewport-dependent video coding is disclosed. The mechanism includes mapping a spherical video sequence into a plurality of sub-picture video sequences. The mechanism further includes encoding the plurality of sub-picture video sequences as sub-picture bitstreams in a manner that supports merging of the plurality of sub-picture bitstreams: the encoding ensures that each sub-picture bitstream is self-referenced and that two or more of the sub-picture bitstreams can be merged into a single video bitstream using a lightweight bitstream rewriting process that does not change any block-level coding results. A mergable indication is encoded to indicate that the sub-picture bitstream containing it is compatible with a multi-bitstream merge function for reconstruction of the spherical video sequence. A set of the sub-picture bitstreams and the mergable indication are transmitted toward the decoder to support decoding and displaying a virtual reality video viewport.
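The merge described above can be sketched as follows. This is a minimal illustration, not the patented rewriter: the container class, field names, and header format are hypothetical, but the two properties the abstract names are kept, namely that the merge refuses streams lacking the mergable indication and that block-level payloads are copied verbatim, never re-encoded.

```python
from dataclasses import dataclass

@dataclass
class SubPictureBitstream:
    # Hypothetical container for one encoded sub-picture sequence.
    region: tuple      # (x, y, w, h): placement on the projected picture
    mergable: bool     # the signalled "mergable indication"
    payload: bytes     # block-level coded data (never touched by the merge)

def merge_subpictures(streams):
    """Lightweight merge: header-level rewriting only; block-level
    payloads are concatenated unchanged."""
    if not all(s.mergable for s in streams):
        raise ValueError("every sub-picture bitstream must signal mergability")
    merged = bytearray()
    for s in streams:
        # a real rewriter would update slice addresses / parameter sets here;
        # this sketch just prefixes each payload with a region header
        merged += ("region=%d,%d,%d,%d;" % s.region).encode()
        merged += s.payload
    return bytes(merged)
```

The point of the mergable check is that a compliant receiver can compose any viewport-covering subset of streams without touching coded blocks.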

Sliced encoding and decoding for remote rendering

Disclosed herein are a device and a method for remotely rendering an image. In one approach, a device divides an image of an artificial reality space into a plurality of slices and encodes a first slice of the plurality of slices. The device encodes a portion of a second slice of the plurality of slices while it is still encoding a portion of the first slice. The device transmits the encoded first slice to a head wearable display, and transmits the encoded second slice to the head wearable display while a portion of the encoded first slice is still being transmitted.
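The benefit of overlapping encode and transmit per slice can be shown with a toy timing model. The time units and the two-stage (encode, then send) structure are illustrative assumptions, not from the disclosure; the model only demonstrates why pipelining slices lowers end-to-end latency versus encoding the whole frame before sending it.

```python
def pipeline_schedule(num_slices, encode_time=2, send_time=3):
    """Toy timing model (illustrative units): slice k is transmitted
    while slice k+1 is still being encoded."""
    encode_end = []
    send_end = []
    for k in range(num_slices):
        # slices are encoded back to back on one encoder
        encode_end.append((encode_end[k - 1] if k else 0) + encode_time)
        # transmission starts once the slice is encoded and the link is free
        start = max(encode_end[k], send_end[k - 1] if k else 0)
        send_end.append(start + send_time)
    return encode_end, send_end
```

With four slices this finishes at t=14, whereas encoding the full frame (8 units) and then sending it (12 units) would finish at t=20.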

Sample array coding for low-delay

The entropy coding of a current part of a predetermined entropy slice is based not only on the respective probability estimations of the predetermined entropy slice, as adapted using the previously coded part of that entropy slice, but also on probability estimations as used in the entropy coding of a spatially neighboring entropy slice, preceding in entropy slice order, at a neighboring part thereof. Thereby, the probability estimations used in entropy coding are adapted more closely to the actual symbol statistics, which reduces the loss in coding efficiency normally caused by low-delay concepts. Temporal interrelationships may be exploited additionally or alternatively.
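The state-inheritance idea can be sketched with a toy probability model. The counts-based estimator below stands in for real CABAC context states, and the two-symbol sync point is an assumed parameter (chosen to resemble wavefront-style synchronization); the sketch only shows the mechanism of a slice starting from its neighbor's partially adapted state rather than a fixed initialization.

```python
class BinProbModel:
    """Toy counts-based probability estimator, standing in for a set of
    CABAC context states."""
    def __init__(self, ones=1, total=2):
        self.ones, self.total = ones, total
    def p_one(self):
        return self.ones / self.total
    def update(self, bit):
        self.ones += bit
        self.total += 1
    def clone(self):
        return BinProbModel(self.ones, self.total)

def code_entropy_slices(slices, sync_point=2):
    """Each entropy slice inherits the probability state its preceding
    (spatially neighboring) slice had after `sync_point` symbols, instead
    of restarting from the fixed initial state."""
    snapshots = {}
    final = []
    for idx, symbols in enumerate(slices):
        model = snapshots[idx - 1].clone() if (idx - 1) in snapshots else BinProbModel()
        for pos, bit in enumerate(symbols):
            model.update(bit)
            if pos + 1 == sync_point:
                snapshots[idx] = model.clone()  # state handed to the next slice
        final.append(model)
    return final
```

Because the second slice starts already biased toward the statistics the first slice observed, fewer symbols are "wasted" re-learning them.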

IMAGE ENCODING AND DECODING METHODS AND DEVICES THEREOF
20180007375 · 2018-01-04

Image encoding and decoding methods and devices thereof are provided. The encoding method includes: performing downsampling on a first image to obtain a second image; encoding the second image to obtain a second image bit stream, and sending the second image bit stream to a decoding end; processing the second image to obtain a third image having the same resolution as the first image; calculating a difference between the third image and the first image to obtain a first difference image; regulating pixel values of the first difference image to fall within a pre-set range to obtain a second difference image; and encoding the second difference image to obtain a second difference image bit stream, and sending the second difference image bit stream to the decoding end to enable the decoding end to reconstruct the first image.
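The downsample-plus-residual pipeline can be walked through on a 1-D "image". The drop/repeat resampling, the [0, 255] range, and the +128 offset are assumptions for the sketch (a codec would use proper filters and would also compress both outputs); the structure follows the abstract: downsample, upsample back, take the difference, regulate it into a pre-set range, and let the decoder invert those steps.

```python
def encode_pair(first):
    """Encoder side on a 1-D signal of even length."""
    second = first[::2]                              # downsampled image
    third = [v for v in second for _ in (0, 1)]      # back to full resolution
    diff = [a - b for a, b in zip(first, third)]     # first difference image
    # regulate into the pre-set range [0, 255]: offset by 128 and clip
    # (clipping makes very large residuals lossy -- acceptable for a sketch)
    diff2 = [max(0, min(255, d + 128)) for d in diff]
    return second, diff2

def decode_pair(second, diff2):
    """Decoder side: rebuild the third image, undo the offset, add back."""
    third = [v for v in second for _ in (0, 1)]
    return [t + d - 128 for t, d in zip(third, diff2)]
```

When the residuals stay inside the regulated range, the reconstruction is exact.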

ADAPTIVE TILE DATA SIZE CODING FOR VIDEO AND IMAGE COMPRESSION
20180007366 · 2018-01-04

A method for encoding a video signal includes estimating a space requirement for encoding a tile of a video frame, writing a first value in a first value space of the bitstream, wherein the first value describes a size of a second value space, and defining the second value space in the bitstream, wherein the size of the second value space is based on the estimated space requirement. The method also includes writing encoded content in a content space of the bitstream, determining a size of the content space subsequent to writing the encoded content, and writing a second value in the second value space of the bitstream, wherein the second value describes the size of the content space.
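The reserve-then-backfill scheme can be sketched directly on a byte buffer. The one-byte first value and big-endian size field are assumed encodings, not from the claims; the sketch shows the two-pass structure: derive the size field's width from the estimate, reserve it, write the content, then backfill the actual content size.

```python
def write_tile(bitstream: bytearray, tile_payload: bytes, estimated_size: int):
    """Two-pass tile size coding: the first value records how many bytes the
    size field occupies (derived from the estimate); the size field itself is
    backfilled once the content size is known. to_bytes() raises OverflowError
    if the estimate was too small -- a real encoder would guard against that."""
    size_field_bytes = max(1, (estimated_size.bit_length() + 7) // 8)
    bitstream.append(size_field_bytes)                # first value space
    pos = len(bitstream)
    bitstream.extend(b"\x00" * size_field_bytes)      # reserved second value space
    bitstream.extend(tile_payload)                    # content space
    content_size = len(bitstream) - pos - size_field_bytes
    # second value: the actual size of the content space
    bitstream[pos:pos + size_field_bytes] = content_size.to_bytes(size_field_bytes, "big")
```

Sizing the field from the estimate rather than a fixed width is what makes the scheme "adaptive": small tiles pay only one length byte.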

BLOCK-BASED PARALLEL DEBLOCKING FILTER IN VIDEO CODING
20180014035 · 2018-01-11

Deblocking filtering is provided in which an 8×8 filtering block covering eight-sample vertical and horizontal boundary segments is divided into filtering sub-blocks that can be independently processed. To process the vertical boundary segment, the filtering block is divided into top and bottom 8×4 filtering sub-blocks, each covering a respective top and bottom half of the vertical boundary segment. To process the horizontal boundary segment, the filtering block is divided into left and right 4×8 filtering sub-blocks, each covering a respective left and right half of the horizontal boundary segment. The computation of the deviation d for a boundary segment in a filtering sub-block is performed using only samples from rows or columns in that filtering sub-block. Consequently, the filter on/off decisions and the weak/strong filtering decisions of the deblocking filtering are performed using samples contained within individual filtering blocks, thus allowing fully parallel processing of the filtering blocks.
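The restricted deviation computation can be sketched for a vertical boundary. The second-difference form is the HEVC-style activity measure; the choice of which rows of the sub-block to sample is an assumption here. The key property is that `d` for the top 8×4 sub-block depends only on that sub-block's own rows, so top and bottom halves can be evaluated in parallel.

```python
def deviation(block, rows, col=4):
    """Deviation d for a vertical boundary at column `col`, using only the
    given rows -- i.e. only rows inside one 8x4 filtering sub-block -- so the
    top (rows 0-3) and bottom (rows 4-7) sub-blocks can be filtered in
    parallel. Uses HEVC-style second differences on each side of the edge."""
    d = 0
    for r in rows:
        # three samples left of the boundary (p side) and three right (q side)
        p2, p1, p0 = block[r][col - 3], block[r][col - 2], block[r][col - 1]
        q0, q1, q2 = block[r][col], block[r][col + 1], block[r][col + 2]
        d += abs(p2 - 2 * p1 + p0) + abs(q2 - 2 * q1 + q0)
    return d
```

A small `d` (smooth sides) keeps the filter on; a large `d` indicates real texture that should not be smoothed away.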

ENCODING METHOD AND APPARATUS, AND DECODING METHOD AND APPARATUS

An encoding apparatus for encoding an image includes: a communicator configured to receive, from a device, device information related to the device; and a processor configured to encode the image by using image information of the image and the device information, wherein the processor is further configured to process the image according to at least one of the device information and the image information, determine a non-encoding region, a block-based encoding region, and a pixel-based encoding region of the image according to at least one of the device information and the image information, perform block-based encoding on the block-based encoding region by using a quantization parameter determined according to at least one of the device information and the image information, perform pixel-based encoding on the pixel-based encoding region, generate an encoded image by entropy encoding a symbol determined by the block-based encoding or the pixel-based encoding, and generate a bitstream comprising the encoded image, region information of the block-based encoding region and the pixel-based encoding region, and quantization information of the quantization parameter, and wherein the communicator is further configured to transmit the bitstream to the device.
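The three-way region decision can be sketched as a classifier. The specific signals used (a visibility set as the device information, per-block variance as the image information) and the threshold are illustrative assumptions; the abstract only requires that device and image information jointly determine which blocks are skipped, pixel-coded, or block-coded.

```python
def classify_regions(block_stats, visible_ids, flat_threshold=2.0):
    """Hypothetical region decision: blocks the device cannot display are not
    encoded; nearly-flat visible blocks get pixel-based coding; the rest get
    block-based coding (with a QP chosen elsewhere from the same inputs)."""
    regions = {}
    for block_id, variance in block_stats.items():
        if block_id not in visible_ids:
            regions[block_id] = "non-encoding"      # device info rules it out
        elif variance < flat_threshold:
            regions[block_id] = "pixel-based"       # image info: little texture
        else:
            regions[block_id] = "block-based"
    return regions
```

The region map produced here corresponds to the "region information" the abstract says is written into the bitstream alongside the encoded image.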