Patent classifications
H04N19/577
METHOD AND APPARATUS FOR ENCODING/DECODING IMAGE
Disclosed herein are an image encoding method and an image decoding method. The image decoding method includes determining an initial motion vector of a current block using a motion vector of a reconstructed region, searching for the motion vector of the current block based on the initial motion vector, and generating a prediction sample of the current block using the motion vector. The initial motion vector includes a motion vector in a past direction and a motion vector in a future direction.
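The motion search described above can be sketched as a small integer-pixel refinement around the initial motion vector. The SAD cost, the search range, and the `ref` accessor below are illustrative assumptions, not details fixed by the abstract:

```python
def refine_mv(cur, ref, init_mv, search_range=2):
    """Search around init_mv for the displacement minimizing SAD.

    cur: current-block samples as a flat list (assumed layout).
    ref: callable returning the reference-block samples for a given
         motion vector (hypothetical accessor, for illustration).
    """
    best_mv, best_cost = init_mv, None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            mv = (init_mv[0] + dx, init_mv[1] + dy)
            # Sum of absolute differences between current and candidate block
            cost = sum(abs(a - b) for a, b in zip(cur, ref(mv)))
            if best_cost is None or cost < best_cost:
                best_mv, best_cost = mv, cost
    return best_mv
```

In a bi-predictive setting the same search would run once per direction (past and future), matching the two initial motion vectors named in the abstract.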
RESTRICTIONS OF USAGE OF TOOLS ACCORDING TO REFERENCE PICTURE TYPES
A video processing method includes determining, for a conversion between a current video block of a video including multiple video blocks and a coded representation of the video, and from types of reference pictures used for the conversion, applicability of a coding tool to the current video block and performing the conversion based on the determining. The method may be performed by a video decoder or a video encoder or a video transcoder.
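The determining step can be illustrated as a simple gate on reference-picture types. The concrete rule below (the tool is disabled when any reference picture is long-term) is an assumed example, since the abstract leaves the exact restriction open:

```python
def tool_applicable(ref_picture_types):
    """Decide whether a coding tool may be used for the current block.

    Assumption for illustration: the tool is allowed only when every
    reference picture used for the conversion is short-term; a single
    long-term reference picture disables it.
    """
    return all(t == "short_term" for t in ref_picture_types)
```

The conversion then proceeds with or without the tool based on this result.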
SYSTEMS AND METHODS FOR PERFORMING MOTION VECTOR PREDICTION USING A DERIVED SET OF MOTION VECTORS
This disclosure relates to video coding and more particularly to techniques for performing motion vector prediction. According to an aspect of the invention, a motion vector and a corresponding reference picture identifier for the motion vector are received; a reference picture corresponding to a second motion vector is determined based on the reference picture corresponding to the received motion vector and a current picture; a scaling value is determined based on the determined reference picture, the reference picture corresponding to the received motion vector, and the current picture; and the second motion vector is generated from the received motion vector by scaling with the scaling value.
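The scaling described above can be made concrete with the HEVC-style temporal motion vector scaling process, where the scaling value is derived from picture-order-count (POC) distances. Using this particular fixed-point process is an assumption for illustration; the abstract does not tie itself to it:

```python
def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def scale_mv(mv, poc_cur, poc_ref_src, poc_ref_dst):
    """Scale mv (an (x, y) pair) from its source reference picture to a
    target reference picture, using POC distances as in the HEVC-style
    scaling process."""
    tb = clip3(-128, 127, poc_cur - poc_ref_dst)  # distance to target ref
    td = clip3(-128, 127, poc_cur - poc_ref_src)  # distance to source ref
    if tb == td:  # same picture distance: the vector is unchanged
        return mv
    tx = int((16384 + (abs(td) >> 1)) / td)  # truncating division
    scale = clip3(-4096, 4095, (tb * tx + 32) >> 6)
    def comp(v):
        s = scale * v
        return clip3(-32768, 32767, (-1 if s < 0 else 1) * ((abs(s) + 127) >> 8))
    return (comp(mv[0]), comp(mv[1]))
```

For example, scaling a vector from a reference one picture-distance in the past to a reference the same distance in the future simply mirrors it.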
SIGNALING OF FLAG INDICATING ZERO MOTION VECTOR DIFFERENCE FOR A CONTROL POINT
A method for video encoding includes determining a corresponding motion vector for each of multiple control points of a base predictor. The method further includes determining a corresponding motion vector difference for each of the multiple control points of the base predictor based on the determined motion vector for each respective control point. The method further includes generating prediction information of a current block to be included in a coded video bitstream. The prediction information includes (i) a usage flag indicative of the affine merge mode with offset, (ii) offset parameters defining the determined corresponding motion vector difference for each of one or more of the control points, and (iii) a zero motion vector difference flag for each of the multiple control points of the base predictor. The zero motion vector difference flag indicates whether offset parameters for the respective control point are provided in the prediction information.
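The signaling described above can be sketched as follows: a per-control-point zero-MVD flag gates whether offset parameters appear in the stream. The flat-list bitstream and the exact field order below are hypothetical simplifications for illustration:

```python
def write_prediction_info(bs, mvds):
    """Append the prediction information described in the abstract to bs.

    bs:   a list standing in for the coded bitstream (assumption).
    mvds: one (dx, dy) motion vector difference per control point.
    """
    bs.append(1)                 # usage flag: affine merge mode with offset
    for mvd in mvds:             # one entry per control point
        zero = int(mvd == (0, 0))
        bs.append(zero)          # zero motion vector difference flag
        if not zero:
            bs.extend(mvd)       # offset parameters only when MVD is non-zero
    return bs
```

A decoder would mirror this: it reads the flag and skips the offset parameters for any control point whose flag is set.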
DMVR-BASED INTER-PREDICTION METHOD AND DEVICE
An image decoding method includes: acquiring, from a bitstream, luma weight L0 flag information indicating whether there is an L0 prediction-related weight factor and luma weight L1 flag information indicating whether there is an L1 prediction-related weight factor; determining to apply decoder-side motion vector refinement (DMVR) to an L0 motion vector and L1 motion vector for a current block, when the luma weight L0 flag information and the luma weight L1 flag information are both zero; when it has been determined to apply DMVR, deriving a refined L0 motion vector and a refined L1 motion vector by applying the DMVR to the current block; deriving prediction samples for the current block on the basis of L0 prediction using the refined L0 motion vector and L1 prediction using the refined L1 motion vector; and generating reconstruction samples for the current block on the basis of the prediction samples.
DMVR-BASED INTER PREDICTION METHOD AND APPARATUS
A video decoding method comprises: deriving L0 and L1 motion vectors of a current block; deriving decoder-side motion vector refinement (DMVR) flag information indicating whether to apply DMVR to the current block; when the DMVR flag information indicates that DMVR is to be applied to the current block, deriving refined L0 and L1 motion vectors based on the L0 and L1 motion vectors by applying DMVR to the current block; deriving prediction samples of the current block based on the refined L0 and L1 motion vectors; and generating reconstructed samples of the current block based on the prediction samples, wherein the deriving of the DMVR flag information comprises deriving the DMVR flag information to indicate that DMVR is to be applied to the current block when the height of the current block is 8 or greater and the values of the L0 and L1 luma weighted prediction flag information are both 0.
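The DMVR applicability check named in the wherein-clause reduces to a simple predicate. Only the two conditions stated in the abstract are modeled; real codecs (e.g., VVC) add further conditions such as merge mode and symmetric reference-picture distances:

```python
def derive_dmvr_flag(height, luma_weight_l0_flag, luma_weight_l1_flag):
    """DMVR flag derivation as described in the abstract: DMVR is applied
    only when the block height is at least 8 and neither prediction list
    uses explicit luma weighted prediction."""
    return height >= 8 and luma_weight_l0_flag == 0 and luma_weight_l1_flag == 0
```

When this returns False, the decoder skips refinement and predicts directly from the unrefined L0/L1 motion vectors.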
Simplified processing of weighted prediction syntax and semantics using a bit depth variable for high precision data
Particular embodiments may remove a condition check in the semantics for checking a high-precision data flag, which simplifies the semantics used in the encoding and decoding process. In this case, even if the high-precision data flag is not set, the value of the weighted prediction syntax element is set by the BitDepth variable. However, even when BitDepth does not indicate high-precision data, such as 8 bits, the range for the weighted prediction syntax element remains the same as the fixed value. For example, the syntax elements luma_offset_l0[i], luma_offset_l1[i], delta_chroma_offset_l0[i][j], and delta_chroma_offset_l1[i][j] use the variable BitDepth as described above whether or not the flag extended_precision_processing_flag, which indicates whether the bit depth is above a threshold, is enabled.
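The BitDepth-derived range can be written out directly, which also shows why the 8-bit case collapses to the fixed range. The range formula below follows the usual HEVC-style derivation for the luma offset elements and is assumed rather than quoted from the abstract:

```python
def luma_offset_range(bit_depth):
    """Range of luma_offset_l0[i] / luma_offset_l1[i] derived from
    BitDepth: [-(1 << (BitDepth - 1)), (1 << (BitDepth - 1)) - 1].
    For 8-bit data this is [-128, 127], i.e. the same as the fixed
    range, so dropping the high-precision flag check changes nothing
    in the non-extended case."""
    half = 1 << (bit_depth - 1)
    return (-half, half - 1)
```

A conformance check would then simply assert that a parsed offset lies within this range, regardless of extended_precision_processing_flag.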