Patent classifications
H04N19/56
Limited memory access window for motion vector refinement
The present disclosure relates to motion vector refinement. As a first step, an initial motion vector and a template for the block are obtained. Then, the refinement of the initial motion vector is determined by template matching with said template in a search space. The search space is located at a position given by the initial motion vector and includes one or more fractional sample positions, wherein each of the fractional sample positions belonging to the search space is obtained by interpolation filtering with a filter of a predefined tap-size accessing integer samples only within a window, said window being formed by the integer samples accessible for template matching in said search space.
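As a rough illustration of the window constraint described above, the following sketch computes the size of the integer-sample window needed so that interpolation for any fractional position in the search space stays inside it. The function and parameter names are illustrative, not taken from the patent.

```python
def access_window_size(block_w, block_h, search_range, filter_taps):
    """Size of the integer-sample window required for motion vector
    refinement (illustrative sketch, not the patented method itself).

    Each fractional sample needs (filter_taps - 1) additional integer
    samples around it for interpolation, and the search space extends
    search_range integer positions on each side of the position given
    by the initial motion vector.
    """
    ext = 2 * search_range + filter_taps - 1
    return (block_w + ext, block_h + ext)
```

For example, an 8x8 block with a search range of 2 and an 8-tap interpolation filter would need a 19x19 window of integer samples.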
History-based motion vector prediction
Systems, methods, and computer-readable media are provided for updating history-based motion vector tables. In some examples, a method can include obtaining one or more blocks of video data; determining a first motion vector derived from a first control point of a block of the one or more blocks, the block being coded using an affine motion mode; determining a second motion vector derived from a second control point of the block; based on the first motion vector and the second motion vector, estimating a third motion vector for a predetermined location within the block; and populating a history-based motion vector predictor (HMVP) table with the third motion vector.
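The update procedure described in the abstract can be sketched as follows. The centre-MV estimate here is a simple midpoint of the two control-point motion vectors, standing in for evaluating the affine model at a predetermined location; the FIFO update with redundancy removal mirrors the HMVP table behaviour used in recent video codecs. All names and the table layout are assumptions for illustration.

```python
def estimate_center_mv(cp0, cp1):
    # Midpoint of two control-point MVs, used here as a simple
    # stand-in for evaluating the affine model at the block centre.
    return ((cp0[0] + cp1[0]) // 2, (cp0[1] + cp1[1]) // 2)

def update_hmvp(table, mv, max_size=5):
    # FIFO update with redundancy removal: an identical entry is
    # moved to the most-recent position, and the oldest entry is
    # evicted once the table exceeds max_size.
    if mv in table:
        table.remove(mv)
    table.append(mv)
    while len(table) > max_size:
        table.pop(0)
```

A caller would derive the two control-point MVs of an affine-coded block, estimate the third MV with `estimate_center_mv`, and pass it to `update_hmvp`.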
Inter Prediction Method, Encoder, Decoder, and Storage Medium
Embodiments of the present application provide an inter prediction method, an encoder, a decoder, and a storage medium. The method comprises: determining a prediction mode parameter of a current block; when the prediction mode parameter indicates that the current block determines an inter prediction value by using a geometrical partition mode (GPM), constructing a merge candidate list of the current block; constructing a GPM motion information candidate list according to first motion information in the merge candidate list of the current block; and determining an inter prediction value of the current block according to the GPM motion information candidate list.
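The construction of a GPM motion information candidate list from a merge candidate list might look like the following sketch. It uses the parity rule familiar from VVC-style GPM (even-indexed merge candidates contribute their list-0 motion information, odd-indexed ones their list-1 information, falling back to the other list when absent); the data layout is a hypothetical one chosen for illustration.

```python
def build_gpm_list(merge_list):
    """Build a uni-prediction GPM candidate list from a merge list.

    merge_list: list of dicts with optional 'L0'/'L1' motion info
    (hypothetical structure). Candidates alternate between list-0 and
    list-1 motion information by index parity, falling back to the
    other list when the preferred one is unavailable.
    """
    gpm = []
    for n, cand in enumerate(merge_list):
        pref = 'L1' if n % 2 else 'L0'
        other = 'L0' if pref == 'L1' else 'L1'
        mi = cand.get(pref) or cand.get(other)
        if mi is not None:
            gpm.append(mi)
    return gpm
```

The inter prediction value of the current block would then be derived by blending the two partition predictions obtained from entries of this list.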
Sub-block motion derivation and decoder-side motion vector refinement for merge mode
Systems, methods, and instrumentalities for sub-block motion derivation and motion vector refinement for merge mode may be disclosed herein. Video data may be coded (e.g., encoded and/or decoded). A collocated picture for a current slice of the video data may be identified. The current slice may include one or more coding units (CUs). One or more neighboring CUs may be identified for a current CU. A neighboring CU (e.g., each neighboring CU) may correspond to a reference picture. A (e.g., one) neighboring CU may be selected to be a candidate neighboring CU based on the reference pictures and the collocated picture. A motion vector (MV) (e.g., collocated MV) may be identified from the collocated picture based on an MV (e.g., a reference MV) of the candidate neighboring CU. The current CU may be coded (e.g., encoded and/or decoded) using the collocated MV.
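The neighbour-selection step described above can be sketched as follows: a neighbouring CU is chosen as the candidate based on whether its reference picture matches the collocated picture. Field names and the POC-based comparison are assumptions made for this sketch.

```python
def select_candidate(neighbors, collocated_poc):
    """Pick a candidate neighbouring CU for collocated-MV derivation.

    neighbors: list of dicts with 'ref_poc' (picture order count of
    the reference picture) and 'mv' keys (hypothetical layout).
    Prefers the first neighbour whose reference picture is the
    collocated picture; otherwise falls back to the first neighbour.
    """
    for cu in neighbors:
        if cu['ref_poc'] == collocated_poc:
            return cu
    return neighbors[0] if neighbors else None
```

The reference MV of the selected candidate would then be used to locate the collocated MV in the collocated picture, with which the current CU is coded.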
MOTION VECTOR CODING METHOD AND MOTION VECTOR DECODING METHOD
A motion vector coding unit executes processing including a neighboring block specification step of specifying a neighboring block which is located in the neighborhood of a current block; a judgment step of judging whether or not the neighboring block has been coded using a motion vector of another block; a prediction step of deriving a predictive motion vector of the current block using a motion vector calculated from the motion vector of the other block as a motion vector of the neighboring block; and a coding step of coding the motion vector of the current block using the predictive motion vector.
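Once a motion vector has been assigned to each neighboring block (including one calculated from another block's motion vector, as in the judgment and prediction steps above), a common way to derive the predictive motion vector is a component-wise median of the neighbours, as in H.264-style prediction. This is a generic sketch of that derivation, not the specific procedure claimed in the patent.

```python
def median_mv(mvs):
    # Component-wise median of the neighbouring motion vectors
    # (classic predictor; shown here as a generic example).
    xs = sorted(mv[0] for mv in mvs)
    ys = sorted(mv[1] for mv in mvs)
    m = len(mvs) // 2
    return (xs[m], ys[m])

def encode_mvd(current_mv, neighbor_mvs):
    # The coding step transmits the difference between the current
    # block's MV and the predictive MV.
    pred = median_mv(neighbor_mvs)
    return (current_mv[0] - pred[0], current_mv[1] - pred[1])
```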
IMAGE BLOCK-BASED MATCHING METHOD AND SYSTEM, AND VIDEO PROCESSING DEVICE
This application provides an image block matching method performed at a computing device, the method including: obtaining a target image block and an image; identifying a candidate image block within the image and multiple search points in the candidate image block; calculating a plurality of differences between the target image block and the candidate image block, each difference corresponding to a respective search point; choosing, among the plurality of differences, a smallest value and a corresponding smallest-value search point; and when the smallest-value search point is at the center of the target image block, choosing the candidate image block as a match of the target image block.
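The matching procedure above can be sketched with a sum-of-absolute-differences (SAD) cost: differences are computed at each search point, the smallest is found, and the candidate block is accepted as a match when the minimum lies at the centre point. The flat pixel-list representation and the `(0, 0)` centre convention are assumptions for this sketch.

```python
def sad(a, b):
    # Sum of absolute differences between two equal-length pixel lists.
    return sum(abs(x - y) for x, y in zip(a, b))

def match_at_center(target, candidates_by_offset):
    """Check whether the best search point is the centre one.

    candidates_by_offset: dict mapping (dx, dy) search-point offsets
    to candidate pixel lists (illustrative layout). Returns True when
    the smallest difference occurs at the (0, 0) centre point, i.e.
    the candidate block is chosen as a match of the target block.
    """
    diffs = {off: sad(target, blk) for off, blk in candidates_by_offset.items()}
    best = min(diffs, key=diffs.get)
    return best == (0, 0)
```

When the minimum is not at the centre, a search procedure would typically continue from the offset of the smallest-value search point.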