Patent classifications
H04N19/109
Priority-based non-adjacent merge design
Devices, systems and methods for constructing low-complexity non-adjacent merge candidates. In a representative aspect, a method for video processing includes receiving a current block of video data, selecting, based on a rule, a first non-adjacent block that is not adjacent to the current block, constructing a first merge candidate comprising motion information based on the first non-adjacent block, identifying a second non-adjacent block that is not adjacent to the current block and different from the first non-adjacent block, based on determining that the second non-adjacent block fails to satisfy the rule, refraining from adding a second merge candidate derived from the second non-adjacent block, constructing a merge candidate list based on the first merge candidate, and decoding the current block based on the merge candidate list.
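The abstract's rule-gated list construction can be sketched as follows. The distance-based rule, the block layout, and all names here are illustrative assumptions; the patent does not disclose the actual rule in the abstract:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    x: int           # top-left sample position of the block
    y: int
    mv: tuple        # motion information carried by the block

def satisfies_rule(current: Block, candidate: Block, max_dist: int = 16) -> bool:
    """Assumed rule for illustration: the non-adjacent block must lie
    within max_dist samples of the current block (limits memory cost)."""
    return (abs(candidate.x - current.x) <= max_dist
            and abs(candidate.y - current.y) <= max_dist)

def build_merge_list(current: Block, non_adjacent: list) -> list:
    """Add one merge candidate per non-adjacent block that satisfies the
    rule; refrain from adding candidates for blocks that fail it."""
    merge_list = []
    for blk in non_adjacent:
        if satisfies_rule(current, blk) and blk.mv not in merge_list:
            merge_list.append(blk.mv)   # candidate = motion info of block
    return merge_list
```

A block far from the current block is skipped rather than added, which is the "refraining" step of the claim.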
INTER PREDICTION METHOD AND APPARATUS FOR SAME
According to the present invention, an image encoding apparatus comprises: a motion prediction unit which derives motion information for a current block, the motion information including L0 motion information and L1 motion information; a motion compensation unit which performs motion compensation for the current block on the basis of at least one of the L0 motion information and the L1 motion information so as to generate a prediction block corresponding to the current block; and a restoration block generating unit which generates a restoration block corresponding to the current block based on the prediction block. According to the present invention, image encoding efficiency can be improved.
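A minimal sketch of the flow described above: motion compensation from L0 and/or L1 motion information produces a prediction block, and the restoration (reconstruction) block is prediction plus residual. Integer motion vectors and nearest-sample fetch are simplifying assumptions here; real codecs interpolate at sub-pel positions:

```python
def motion_compensate(ref, mv, x, y, w, h):
    """Fetch a w*h prediction from reference frame `ref` (2-D list of
    samples) at (x + mv_x, y + mv_y), clamping to the frame borders."""
    H, W = len(ref), len(ref[0])
    return [[ref[min(max(y + dy + mv[1], 0), H - 1)]
                [min(max(x + dx + mv[0], 0), W - 1)]
             for dx in range(w)] for dy in range(h)]

def predict_block(ref_l0, mv_l0, ref_l1, mv_l1, x, y, w, h):
    """Uni-prediction when only one of L0/L1 is available, otherwise
    average the two motion-compensated predictions (bi-prediction)."""
    p0 = motion_compensate(ref_l0, mv_l0, x, y, w, h) if mv_l0 is not None else None
    p1 = motion_compensate(ref_l1, mv_l1, x, y, w, h) if mv_l1 is not None else None
    if p0 and p1:
        return [[(a + b + 1) // 2 for a, b in zip(r0, r1)]
                for r0, r1 in zip(p0, p1)]
    return p0 or p1

def restore_block(pred, resid):
    """Restoration block = prediction block + residual."""
    return [[p + r for p, r in zip(pr, rr)] for pr, rr in zip(pred, resid)]
```

The `(a + b + 1) // 2` average mirrors the rounding typically used when combining two prediction hypotheses.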
SETTING MOTION VECTOR PRECISION FOR INTRA PREDICTION WITH MOTION VECTOR DIFFERENCE
A method for video encoding includes setting a motion vector precision associated with a current block to be encoded in a current picture, and determining a motion vector for encoding the current block based on the motion vector precision. The method also includes determining a motion vector difference for the current block based on (i) the determined motion vector for encoding the current block, (ii) a predicted motion vector of the current block in inter prediction mode, and (iii) the motion vector precision. The method further includes encoding the current block according to the determined motion vector, and generating a coded video bitstream including the encoded current block and including prediction information indicating that the current block is coded in inter prediction mode and indicating the determined motion vector difference for the current block.
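The MVD computation described above can be sketched with an assumed precision model: the precision is a step size in quarter-sample units (1 = quarter-pel, 4 = integer-pel, 16 = 4-pel), as in AMVR-style schemes. The exact semantics in the patent may differ:

```python
def round_to_precision(v, step):
    """Round a quarter-pel value to the nearest multiple of `step`,
    rounding half away from zero (an assumed convention)."""
    q = (abs(v) + step // 2) // step
    return q * step if v >= 0 else -q * step

def compute_mvd(mv, mvp, step):
    """MVD = (motion vector - predicted motion vector), with both
    vectors first aligned to the signaled motion vector precision."""
    mv_r = (round_to_precision(mv[0], step), round_to_precision(mv[1], step))
    mvp_r = (round_to_precision(mvp[0], step), round_to_precision(mvp[1], step))
    return (mv_r[0] - mvp_r[0], mv_r[1] - mvp_r[1])
```

At quarter-pel precision (`step = 1`) the rounding is the identity and the MVD is the plain difference.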
DMVR-BASED INTER PREDICTION METHOD AND APPARATUS
A video decoding method comprises: deriving L0 and L1 motion vectors of a current block; deriving decoder-side motion vector refinement (DMVR) flag information indicating whether to apply a DMVR to the current block; when the DMVR flag information indicates that the DMVR is to be applied to the current block, deriving refined L0 and L1 motion vectors based on the L0 and L1 motion vectors by applying the DMVR to the current block; deriving prediction samples of the current block based on the refined L0 and L1 motion vectors; and generating reconstructed samples of the current block based on the prediction samples, wherein the deriving of the DMVR flag information comprises deriving DMVR flag information indicating that the DMVR is to be applied to the current block when the height of the current block is 8 or more and the values of the L0 and L1 luma weighted prediction flag information are both 0.
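The flag-derivation condition stated in the abstract, plus a toy mirrored refinement search, can be sketched as follows. Only the two conditions named in the abstract are checked; the actual specification imposes further DMVR enabling conditions, and the cost function here is a placeholder supplied by the caller:

```python
def derive_dmvr_flag(block_height, luma_weight_l0_flag, luma_weight_l1_flag):
    """DMVR flag per the abstract: height >= 8 and both L0/L1 luma
    weighted-prediction flags equal to 0."""
    return (block_height >= 8
            and luma_weight_l0_flag == 0
            and luma_weight_l1_flag == 0)

def refine_mv(mv_l0, mv_l1, cost):
    """Toy DMVR step: search a +/-1 integer offset minimizing `cost`,
    applying the offset to L0 and its mirror to L1."""
    best = min(((dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)),
               key=lambda d: cost(d))
    return ((mv_l0[0] + best[0], mv_l0[1] + best[1]),
            (mv_l1[0] - best[0], mv_l1[1] - best[1]))
```

The mirroring reflects DMVR's assumption of symmetric motion between the two reference lists.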
INTER PREDICTION METHOD BASED ON VARIABLE COEFFICIENT DEEP LEARNING
An inter prediction method allows a variable coefficient deep learning model to adaptively learn the characteristics of a video, transmits the variable coefficient deep learning model parameters generated by that learning from an image encoding device to an image decoding device, and refers to a virtual reference frame generated by the variable coefficient deep learning model.
CASCADE PREDICTION
A first predictor is applied to an input image to generate first-stage predicted codewords approximating prediction target codewords of a prediction target image. Second-stage prediction target values are created by performing an inverse cascade operation on the prediction target codewords and the first-stage predicted codewords. A second predictor is applied to the input image to generate second-stage predicted values approximating the second-stage prediction target values. Multiple sets of cascade prediction coefficients are generated to comprise first and second sets of cascade prediction coefficients specifying the first and second predictors. The multiple sets of cascade prediction coefficients are encoded, in a video signal, as image metadata. The video signal is further encoded with the input image.
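The two-stage scheme above can be sketched with two assumptions: the predictors are simple linear fits, and the inverse cascade operation is subtraction (forming a residual target for the second stage). The actual predictors and cascade operation in the patent may be considerably richer:

```python
def fit_linear(xs, ys):
    """Least-squares fit y ~ a*x + b; (a, b) plays the role of one set
    of cascade prediction coefficients."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var if var else 0.0
    return a, my - a * mx

def cascade_fit(input_cw, target_cw):
    """Stage 1 predicts the target codewords; the inverse cascade
    operation (subtraction, assumed) yields stage-2 targets; stage 2
    predicts the residual. Returns both coefficient sets."""
    c1 = fit_linear(input_cw, target_cw)
    stage1 = [c1[0] * x + c1[1] for x in input_cw]
    resid = [t - p for t, p in zip(target_cw, stage1)]
    c2 = fit_linear(input_cw, resid)
    return c1, c2

def cascade_apply(x, c1, c2):
    """Decoder side: sum the two stage predictions."""
    return (c1[0] * x + c1[1]) + (c2[0] * x + c2[1])
```

In the patent, the coefficient sets would be carried as image metadata in the video signal rather than recomputed at the decoder.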
CONSTRAINTS ON REFERENCE PICTURE LISTS ENTRIES
A video processing method includes performing a conversion between a video having one or more video layers including one or more video pictures and a bitstream of the video according to a rule. The rule specifies a condition under which no picture that has been generated by a decoding process for generating an unavailable reference picture is referred to by an active entry in a reference picture list of a current slice of a current picture.
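A conformance check matching the stated rule might look like the sketch below. The picture model and flag name are assumptions for illustration; in the actual specification the "unavailable reference picture" generation process and the condition are defined in detail:

```python
from dataclasses import dataclass

@dataclass
class Picture:
    poc: int                                   # picture order count
    generated_for_unavailable_ref: bool = False  # produced by the
                                                 # unavailable-ref process

def rpl_active_entries_conform(active_entries, condition_holds=True):
    """True iff the constraint is satisfied: when the specified
    condition holds, no active entry in the reference picture list may
    be a picture generated for an unavailable reference."""
    if not condition_holds:
        return True
    return all(not pic.generated_for_unavailable_ref for pic in active_entries)
```

A conforming encoder would simply never place such a generated picture in an active entry when the condition applies.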