Patent classifications
H04N19/70
VOLUMETRIC VIDEO WITH AUXILIARY PATCHES
Methods and devices for encoding and decoding data representative of a 3D scene are disclosed. A set of first patches is generated from a first MVD content acquired from a first region of the 3D scene. A patch is a part of one of the views of the MVD content. A set of second patches is generated from a second MVD content acquired from a second region of the 3D scene. An atlas packing the first and second patches is generated and associated with metadata indicating, for a patch of the atlas, whether the patch is a first or a second patch. At the decoding side, first patches are used for rendering the viewport image and second patches are used for pre-processing or post-processing the viewport image.
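The routing described above can be sketched as a simple dispatch on per-patch metadata. The structures below (`Patch`, the boolean metadata map) are hypothetical illustrations, not the patent's actual syntax:

```python
# Sketch: split the patches of an atlas into rendering patches ("first")
# and auxiliary pre/post-processing patches ("second") according to
# per-patch metadata. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Patch:
    patch_id: int
    pixels: bytes  # placeholder for the patch's texture/depth samples

def split_patches(atlas_patches, metadata):
    """metadata maps patch_id -> True for a first patch, False for a second."""
    render_patches, aux_patches = [], []
    for p in atlas_patches:
        (render_patches if metadata[p.patch_id] else aux_patches).append(p)
    return render_patches, aux_patches

patches = [Patch(0, b""), Patch(1, b""), Patch(2, b"")]
meta = {0: True, 1: False, 2: True}
first, second = split_patches(patches, meta)
```

A decoder following this scheme would hand `first` to the viewport renderer and `second` to the pre/post-processing stage.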
VIDEO DECODING METHOD AND APPARATUS
Disclosed herein is a video decoding method for decoding an input bitstream in which each picture has been encoded after being split into a plurality of tiles. The method includes decoding partial decoding information included in the input bitstream and determining one or more target tiles to be decoded among the plurality of tiles according to the partial decoding information; and decoding video data corresponding to the one or more target tiles, wherein the partial decoding information includes at least one of first information indicating whether to perform partial decoding and second information indicating an area on which partial decoding is to be performed.
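The target-tile selection can be sketched as follows, with the "first information" modeled as an enable flag and the "second information" as a set of tile indices. This dictionary layout is an assumption for illustration, not the bitstream syntax:

```python
def select_target_tiles(num_tiles, partial_info):
    """Pick which tile indices to decode.

    partial_info: hypothetical dict with
      'enabled' -- first information: whether to perform partial decoding
      'area'    -- second information: set of tile indices to decode
    """
    if not partial_info.get("enabled", False):
        return list(range(num_tiles))  # partial decoding off: decode all tiles
    return [i for i in range(num_tiles) if i in partial_info["area"]]
```

For example, with four tiles and `{"enabled": True, "area": {1, 2}}`, only tiles 1 and 2 would be decoded.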
VIDEO DATA STREAM, VIDEO ENCODER, APPARATUS AND METHODS FOR HRD TIMING FIXES, AND FURTHER ADDITIONS FOR SCALABLE AND MERGEABLE BITSTREAMS
A video data stream having a video encoded thereinto is provided. The video data stream comprises an indication that indicates whether or not one or more scalable nesting supplemental enhancement information messages comprising timing information for each of one or more output layer sets are present within the video data stream.
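The presence indication can be sketched as a flag that gates the parsing of one timing payload per output layer set. The `BitReader` interface and the single-integer timing payload are stand-ins for illustration, not the stream's actual SEI syntax:

```python
class BitReader:
    """Minimal stand-in for a bitstream reader (hypothetical interface)."""
    def __init__(self, flags, uints):
        self._flags = iter(flags)
        self._uints = iter(uints)

    def read_flag(self):
        return next(self._flags)

    def read_uint(self):
        return next(self._uints)

def parse_nesting_indication(reader, num_output_layer_sets):
    """If the presence flag is set, parse one timing value per output layer set."""
    present = reader.read_flag()
    timings = ([reader.read_uint() for _ in range(num_output_layer_sets)]
               if present else [])
    return present, timings
```

When the flag is absent or zero, a decoder would skip the nested timing payloads entirely.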
QUANTIZATION PARAMETER CODING
An apparatus for processing a video may receive a quantization parameter (QP) adjustment value associated with a syntax level at the syntax level. In examples, the apparatus may obtain the QP adjustment value associated with the syntax level, for example, via signalling at the syntax level. The apparatus may apply the QP adjustment value to a QP associated with the syntax level to obtain an adjusted QP associated with the syntax level. The syntax level may include a coding block level or a transform unit (TU) level. In examples, if the syntax level is a TU level, the decoder may receive the QP adjustment value for a first TU (for example, a current TU) and obtain a QP for a second TU that precedes the first TU in coding order based on a QP predictor, for example, instead of a signalled QP adjustment value for the second TU.
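The core operation, applying a signalled adjustment to a syntax-level QP, can be sketched in a few lines. The clamping range 0..63 is an assumption (it matches VVC's luma QP range, not necessarily this patent's):

```python
def adjusted_qp(base_qp, qp_delta, qp_min=0, qp_max=63):
    """Apply a signalled QP adjustment at a syntax level and clamp the result.

    The [qp_min, qp_max] range is an illustrative assumption.
    """
    return max(qp_min, min(qp_max, base_qp + qp_delta))
```

For instance, a block-level QP of 30 with a signalled delta of 4 yields an adjusted QP of 34, and out-of-range results are clamped.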
DETERMINING A PARAMETRIZATION FOR CONTEXT-ADAPTIVE BINARY ARITHMETIC CODING
A video decoder employs context-adaptive binary arithmetic coding for decoding a video from a data stream. The video decoder determines a parametrization for the context-adaptive binary arithmetic coding.
ENCODING AND DECODING VIEWS ON VOLUMETRIC IMAGE DATA
An encoding method comprises obtaining (101) an input set of volumetric image data, selecting (103) data from the image data for multiple views based on a visibility of the data from a respective viewpoint at a respective viewing direction and/or within a respective field of view such that a plurality of the views comprises only a part of the image data, encoding (105) each of the views as a separate output set (31), and generating (107) metadata which indicates the viewpoints. A decoding method comprises determining (121) a desired user viewpoint, obtaining (123) the metadata, selecting (125) one or more of the available viewpoints based on the desired user viewpoint, obtaining (127) one or more sets of image data in which one or more available views corresponding to the selected one or more available viewpoints have been encoded, and decoding (129) at least one of the one or more available views.
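The decoder-side selection of available viewpoints based on the desired user viewpoint can be sketched as a nearest-neighbour choice over viewpoint positions. The distance-based criterion and the data layout are assumptions for illustration; the patent leaves the selection rule open:

```python
import math

def select_viewpoints(user_pos, available, k=2):
    """Return the names of the k available viewpoints closest to user_pos.

    available: hypothetical dict mapping viewpoint name -> (x, y, z) position.
    """
    return sorted(available, key=lambda name: math.dist(user_pos, available[name]))[:k]

views = {"front": (0.0, 0.0, 0.0),
         "left": (-1.0, 0.0, 0.0),
         "back": (0.0, 0.0, 2.0)}
chosen = select_viewpoints((0.1, 0.0, 0.0), views, k=2)
```

The decoder would then fetch and decode only the encoded sets for the chosen viewpoints, per the metadata.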
METHOD, APPARATUS AND SYSTEM FOR ENCODING AND DECODING A CODING TREE UNIT
A method of decoding a coding unit from a coding tree unit of an image frame from a video bitstream. The method comprises determining a scan pattern for a transform block, the scan pattern progressing from a current collection to a next collection of a plurality of collections after completing scanning of the current collection; decoding residual coefficients from the video bitstream according to the scan pattern; determining a multiple transform selection index for the coding unit, decoding the multiple transform selection index from the video bitstream if a last significant coefficient encountered along the scan pattern is at or within a threshold Cartesian location of the transform block, and determining the multiple transform selection index to indicate that multiple transform selection is not used if the last significant residual coefficient position of the transform block along the scan pattern is outside the threshold location; and transforming the decoded residual coefficients.
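The conditional decoding of the multiple transform selection index can be sketched as a gate on the last significant coefficient's Cartesian position. The function names and the value 0 meaning "MTS not used" are illustrative assumptions:

```python
def decode_mts_index(last_sig_pos, threshold, read_index):
    """Decode or infer the multiple transform selection (MTS) index.

    last_sig_pos: (x, y) of the last significant coefficient along the scan.
    threshold:    (x, y) threshold Cartesian location of the transform block.
    read_index:   callable that reads the MTS index from the bitstream
                  (hypothetical interface).
    """
    x, y = last_sig_pos
    tx, ty = threshold
    if x <= tx and y <= ty:
        return read_index()  # index is present in the bitstream
    return 0  # inferred: multiple transform selection not used
```

With a threshold of (3, 3), a last significant coefficient at (1, 2) would trigger a bitstream read, while one at (5, 1) would infer that MTS is not used.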