Patent classifications
H04N19/139
Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
A moving picture coding apparatus includes an intra-inter prediction unit. When selectively adding, to a candidate list, a motion vector of each of one or more corresponding blocks (each being either a block in the current picture that spatially neighbors the current block to be coded, or a block in another picture that temporally neighbors the current block), the intra-inter prediction unit calculates a second motion vector by performing a scaling process on a first motion vector of the temporally neighboring corresponding block, determines whether the magnitude of the second motion vector is within a predetermined range, and adds the second motion vector to the list only when the magnitude is determined to be within that range.
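The scale-then-range-check step in this abstract can be illustrated with a minimal sketch. The function name, the integer-division scaling, and the 16-bit-style magnitude bound are all illustrative assumptions, not details taken from the patent:

```python
def scale_and_filter_mv(mv, td_cur, td_col, mv_limit=32767):
    """Scale a temporally neighboring block's motion vector by the ratio of
    temporal distances, then keep it only if the scaled result stays within
    a predetermined magnitude range (hypothetical bound: mv_limit).

    mv:      (x, y) first motion vector of the co-located corresponding block
    td_cur:  temporal distance from the current picture to its reference
    td_col:  temporal distance from the co-located picture to its reference
    """
    scaled = (mv[0] * td_cur // td_col, mv[1] * td_cur // td_col)
    # Second motion vector is added to the list only when within the range
    if abs(scaled[0]) <= mv_limit and abs(scaled[1]) <= mv_limit:
        return scaled
    return None

candidates = []
mv2 = scale_and_filter_mv((120, -48), td_cur=2, td_col=4)
if mv2 is not None:
    candidates.append(mv2)
```

The sketch mirrors the abstract's control flow: candidates that scale to an out-of-range magnitude are simply never appended to the list.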
IMAGE ENCODING DEVICE, IMAGE DECODING DEVICE, AND THE PROGRAMS THEREOF
An image coding device is provided with a determination unit which determines whether to apply an orthogonal transform to a transform block, obtained by dividing a prediction difference signal that indicates the difference between an input image and a predicted image, or to perform a transform skip in which the orthogonal transform is not applied, and an orthogonal transform unit which performs the processing selected by that determination. The device further comprises a quantization unit which, when transform skip is selected, quantizes the transform block using a first quantization matrix, shared in advance with the decoding side, in which the quantization step sizes of all elements are equal, and which, when the orthogonal transform is applied, quantizes the transform block using either the first quantization matrix or a second quantization matrix that is transmitted to the decoding side.
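The matrix-selection rule described here is simple to sketch: a flat matrix is mandatory for transform-skip blocks, while transformed blocks may use either the flat matrix or a transmitted one. The function name and plain element-wise integer quantization are illustrative assumptions:

```python
def quantize_block(block, transform_skip, flat_qm, custom_qm=None):
    """Quantize a transform block per the abstract's rule.

    transform_skip: True means the orthogonal transform was not applied,
      so the flat matrix (equal step for every element) must be used.
    flat_qm:   first quantization matrix, shared in advance with the decoder.
    custom_qm: optional second matrix, transmitted to the decoder; only
      usable when the orthogonal transform was applied.
    """
    qm = flat_qm if transform_skip or custom_qm is None else custom_qm
    # Element-wise quantization (simplified: plain integer division)
    return [[c // q for c, q in zip(row, qrow)]
            for row, qrow in zip(block, qm)]
```

For a transform-skip block, `custom_qm` is ignored even if supplied, matching the constraint that spatial-domain residuals see a uniform quantization step.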
METHOD AND DEVICE FOR IMAGE DECODING ACCORDING TO INTER-PREDICTION IN IMAGE CODING SYSTEM
An image decoding method performed by a decoding device according to the present disclosure comprises: a step of deriving reference picture list 0 (L0) and reference picture list 1 (L1); a step of deriving two motion vectors (MVs) for a current block, the two MVs including MVL0 for L0 and MVL1 for L1; a step of determining whether to apply bi-prediction optical flow (BIO) prediction, which derives refined motion vectors on a per-sub-block basis, to the current block; a step of deriving a refined motion vector for a sub-block of the current block based on MVL0 and MVL1 when BIO prediction is applied to the current block; and a step of deriving a prediction sample based on the refined motion vector.
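The per-sub-block refinement structure can be sketched as follows. The actual BIO solve derives the offset from sample gradients, which is abstracted here into a placeholder callback; the symmetric plus/minus application of the offset to MVL0 and MVL1 (reflecting opposite temporal directions) is a common convention for optical-flow refinement, assumed rather than quoted from the patent:

```python
def derive_refined_mvs(mvl0, mvl1, subblocks, refine_fn):
    """Once BIO is enabled for a block, derive one refined MV pair per
    sub-block from MVL0 and MVL1.

    refine_fn: stand-in for the gradient-based optical-flow solve; it maps
      a sub-block to a small (dvx, dvy) refinement offset.
    """
    refined = []
    for sb in subblocks:
        dvx, dvy = refine_fn(sb)
        # Apply the offset symmetrically to the two motion vectors
        refined.append(((mvl0[0] + dvx, mvl0[1] + dvy),
                        (mvl1[0] - dvx, mvl1[1] - dvy)))
    return refined
```

Prediction samples would then be generated per sub-block from each refined MV pair rather than from the block-level MVs.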
SIGNALING OF FLAG INDICATING ZERO MOTION VECTOR DIFFERENCE FOR A CONTROL POINT
A method for video encoding includes determining a corresponding motion vector for each of multiple control points of a base predictor. The method further includes determining a corresponding motion vector difference for each of the multiple control points of the base predictor based on the determined motion vector for each respective control point. The method further includes generating prediction information for the current block to be included in a coded video bitstream. The prediction information includes (i) a usage flag indicative of the affine merge mode with offset, (ii) offset parameters defining the determined corresponding motion vector difference for each of the one or more control points, and (iii) a zero motion vector difference flag for the multiple control points of the base predictor. The zero motion vector difference flag indicates whether offset parameters for the respective control point are provided in the prediction information.
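The signaling rule, that offset parameters are present only when the zero-MVD flag says so, can be sketched as a small builder. The dictionary field names are illustrative, not taken from any standard's syntax tables, and a per-control-point flag is assumed:

```python
def build_prediction_info(control_point_mvds):
    """Assemble the signaled prediction information for affine merge mode
    with offset: a usage flag, then per control point a zero-MVD flag and,
    only when that flag is 0, the offset parameters for that control point.
    """
    info = {"affine_merge_with_offset_flag": 1, "control_points": []}
    for mvd in control_point_mvds:
        zero = (mvd == (0, 0))
        cp = {"zero_mvd_flag": int(zero)}
        if not zero:
            cp["offset_params"] = mvd  # omitted entirely when MVD is zero
        info["control_points"].append(cp)
    return info
```

The decoder-side reading mirrors this: when `zero_mvd_flag` is 1 it infers a zero MVD and parses no offset parameters, saving bits for the common case.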
METHODS AND APPARATUS FOR PERFORMING REAL-TIME VVC DECODING
Apparatus and methods for implementing a real-time Versatile Video Coding (VVC) decoder use multiple threads to address the limitations of existing parallelization techniques and to fully utilize the available CPU computation resources without compromising coding efficiency. The proposed multi-threaded (MT) framework uses CTU-level parallel processing techniques without compromising memory bandwidth. Picture-level parallel processing separates the sequence into temporal levels by considering the pictures' referencing hierarchy. Embodiments are provided using various optimization techniques to achieve real-time VVC decoding on heterogeneous platforms with multi-core CPUs, for bitstreams generated by a VVC reference encoder with a default configuration.
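The picture-level scheme, grouping pictures by temporal level from the referencing hierarchy and decoding levels in order, can be sketched with a thread pool. The `tlevel` field, the function names, and the callback-based decode are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def decode_by_temporal_level(pictures, decode_one):
    """Picture-level parallelism sketch: pictures at the same temporal
    level do not reference one another, so each level is decoded in order
    while the pictures within a level run concurrently.

    pictures:   dicts with a precomputed "tlevel" from the ref hierarchy
    decode_one: callable that decodes a single picture
    """
    levels = {}
    for pic in pictures:
        levels.setdefault(pic["tlevel"], []).append(pic)
    decoded = []
    with ThreadPoolExecutor() as pool:
        for tl in sorted(levels):          # lower levels first (references)
            decoded.extend(pool.map(decode_one, levels[tl]))
    return decoded
```

CTU-level parallelism would nest inside `decode_one` in the same spirit, subject to intra-picture CTU dependencies; that layer is omitted here for brevity.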
INTER PREDICTION METHOD BASED ON VARIABLE COEFFICIENT DEEP LEARNING
An inter prediction method allows a variable-coefficient deep learning model to adaptively learn the characteristics of a video, transmits the variable-coefficient model parameters generated by that learning from an image encoding device to an image decoding device, and refers to a virtual reference frame generated by the variable-coefficient deep learning model.
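The encoder-learns, signals-parameters, decoder-synthesizes flow can be illustrated with a deliberately tiny stand-in: a two-coefficient linear blend in place of the patent's deep network. The function name, the blend form, and the coefficient values are all assumptions made only to show the data flow:

```python
def synthesize_virtual_reference(prev_frame, next_frame, coeffs):
    """Stand-in for the variable-coefficient model: blend two decoded
    frames using coefficients learned per video on the encoder side and
    transmitted in the bitstream. The real model is a deep network whose
    variable coefficients play the role of (a, b) here.
    """
    a, b = coeffs
    return [a * p + b * n for p, n in zip(prev_frame, next_frame)]

# Encoder learns coeffs for this video and signals them; the decoder
# reuses them to build the same virtual reference frame.
coeffs = (0.5, 0.5)
virtual_ref = synthesize_virtual_reference([10, 20], [30, 40], coeffs)
```

Both sides then add `virtual_ref` to the reference list, so subsequent blocks can predict from it exactly as from a decoded picture.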