H04N19/583

CODING VIDEO DATA USING A TWO-LEVEL MULTI-TYPE-TREE FRAMEWORK

An example device for decoding video data includes a video decoder configured to decode one or more syntax elements at a region-tree level of a region-tree of a tree data structure for a coding tree block (CTB) of video data, the region-tree having one or more region-tree nodes including region-tree leaf and non-leaf nodes, each of the region-tree non-leaf nodes having at least four child region-tree nodes, decode one or more syntax elements at a prediction-tree level for each of the region-tree leaf nodes of one or more prediction trees of the tree data structure for the CTB, the prediction trees each having one or more prediction-tree leaf and non-leaf nodes, each of the prediction-tree non-leaf nodes having at least two child prediction-tree nodes, each of the prediction-tree leaf nodes defining respective coding units (CUs), and decode video data for each of the CUs.
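The two-level structure above can be sketched as a quadtree (region tree, each non-leaf node has four children) whose leaves each root a binary prediction tree whose leaves are CUs. The sketch below is illustrative only: the split decisions are read from pre-decoded flag lists standing in for the entropy-decoded syntax elements, and the binary split direction rule is an assumption, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    x: int
    y: int
    w: int
    h: int
    children: list = field(default_factory=list)

def build_region_tree(node, flags):
    """Consume one split flag; on split, create four quadrant children."""
    if flags and flags.pop(0):
        hw, hh = node.w // 2, node.h // 2
        for dx, dy in ((0, 0), (hw, 0), (0, hh), (hw, hh)):
            child = Node(node.x + dx, node.y + dy, hw, hh)
            node.children.append(child)
            build_region_tree(child, flags)

def region_leaves(node):
    """Collect region-tree leaves; each roots a prediction tree."""
    if not node.children:
        return [node]
    return [leaf for c in node.children for leaf in region_leaves(c)]

def build_pred_tree(node, flags):
    """Binary split: split the longer dimension (an illustrative rule)."""
    if flags and flags.pop(0):
        if node.w >= node.h:
            a = Node(node.x, node.y, node.w // 2, node.h)
            b = Node(node.x + node.w // 2, node.y, node.w // 2, node.h)
        else:
            a = Node(node.x, node.y, node.w, node.h // 2)
            b = Node(node.x, node.y + node.h // 2, node.w, node.h // 2)
        node.children = [a, b]
        build_pred_tree(a, flags)
        build_pred_tree(b, flags)
```

For a 64x64 CTB, flags `[1, 0, 0, 0, 0]` at the region level yield four 32x32 region-tree leaves; a prediction-tree flag list `[1, 0, 0]` then splits one leaf into two 16x32 CUs.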

Compressed dynamic image encoding device, compressed dynamic image decoding device, compressed dynamic image encoding method and compressed dynamic image decoding method

A compressed dynamic image encoding device is provided, in which a motion vector is generated by searching a reference image for an image area most similar to an image area of a video input signal; a motion-compensated reference image is generated from the motion vector and the reference image; a prediction residual is generated by subtracting the motion-compensated reference image from the video input signal; the reference image is generated by adding the motion-compensated reference image and the result of processing performed on the prediction residual; and an encoded video output signal is generated by the processing performed on the prediction residual. The reference image comprises on-screen reference images, located inside a video display screen, and an off-screen reference image located outside the video display screen, and the off-screen reference image is generated based on the positional relationship of a plurality of similar reference images among the on-screen reference images.
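The closed prediction loop described above can be illustrated with a toy 1D sketch: search the reference for the best-matching offset (the motion vector), form the motion-compensated prediction, subtract to get the residual, and reconstruct by adding the prediction back. The SAD metric, search range, and identity residual "processing" (standing in for transform/quantization) are assumptions for illustration; the off-screen reference generation is not modeled here.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def motion_search(block, ref, pos, search=2):
    """Return the offset (motion vector) into `ref` minimizing SAD."""
    best_mv, best_cost = 0, None
    for mv in range(-search, search + 1):
        start = pos + mv
        if start < 0 or start + len(block) > len(ref):
            continue  # candidate would fall outside the reference
        cost = sad(block, ref[start:start + len(block)])
        if best_cost is None or cost < best_cost:
            best_cost, best_mv = cost, mv
    return best_mv

def encode_block(block, ref, pos):
    """One pass of the loop: search, predict, residual, reconstruct."""
    mv = motion_search(block, ref, pos)
    pred = ref[pos + mv:pos + mv + len(block)]
    resid = [x - p for x, p in zip(block, pred)]
    recon = [p + r for p, r in zip(pred, resid)]  # decoder-side reference
    return mv, resid, recon
```

With a perfectly matching reference region, the residual is all zeros and the reconstruction equals the input block.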

Method and system of coding prediction for screen video

According to one exemplary embodiment, the disclosure provides a method of coding prediction for screen video. The method classifies a plurality of coding blocks into a plurality of block types by using a classifier; and uses a computing device to filter at least one candidate block from the plurality of coding blocks, according to the plurality of block types of the plurality of coding blocks, and compute a first candidate motion vector set of a type-based motion merge mode and a second candidate motion vector set of a type-based advanced motion vector prediction mode, wherein each of the at least one candidate block has a block-type different from that of a current coding block.
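The type-based filtering can be sketched as follows: label each block with a classifier, then drop neighbor candidates whose type differs from the current block's before building the merge/AMVP candidate sets. The "text"/"picture" labels and the unique-value classifier rule are illustrative assumptions, not the patent's classifier.

```python
def classify(pixels):
    """Toy classifier: few distinct values -> 'text', else 'picture'."""
    return "text" if len(set(pixels)) <= 2 else "picture"

def filter_candidates(current_pixels, neighbors):
    """Keep only neighbor candidates matching the current block's type.

    Each neighbor is a dict with an "mv" and its "pixels"; candidates
    whose block type differs from the current block's are filtered out.
    """
    cur_type = classify(current_pixels)
    return [n for n in neighbors if classify(n["pixels"]) == cur_type]
```

A flat, two-tone block (typical of rendered text) keeps only other text-typed neighbors as candidates.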

Method and apparatus for fine-grained motion boundary processing
09813730 · 2017-11-07

A method and apparatus for deriving fine-grained motion-compensated prediction of boundary pixels in a video coding system are disclosed. Embodiments of the present invention determine one or more neighboring coding units (CUs) adjacent to a current CU. For each neighboring CU, a motion-compensated prediction is derived using the motion vector (MV) of that neighboring CU. The pre-generated predictors at the bottom side or the right side of each neighboring CU are derived and stored on a smallest-CU (SCU) basis. The pre-generated predictors and the motion-compensated predictor for a current boundary pixel are combined using weighting factors to form a final predictor for the current pixel.
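The final combination step above amounts to a weighted average of the current CU's predictor and the stored neighbor-side predictor at each boundary pixel. The weight values below (3:1 in favor of the current CU) are illustrative assumptions; the patent does not fix them in the abstract.

```python
def blend_boundary(cu_pred, neighbor_pred, w_cur=3, w_nb=1):
    """Combine two predictors per pixel with integer weighted averaging.

    cu_pred:       motion-compensated predictor from the current CU's MV
    neighbor_pred: pre-generated predictor stored from the neighboring CU
    Rounding offset (total // 2) gives round-half-up integer division.
    """
    total = w_cur + w_nb
    return [(w_cur * c + w_nb * n + total // 2) // total
            for c, n in zip(cu_pred, neighbor_pred)]
```

For boundary samples `[100, 100]` and neighbor-side predictors `[60, 80]`, the 3:1 blend yields `[90, 95]`, pulling the boundary toward the neighbor's prediction and smoothing the block edge.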

SIMPLIFICATION OF HISTORY-BASED MOTION VECTOR PREDICTION
20220046273 · 2022-02-10

A method of coding video data, including constructing a history-based motion vector prediction (HMVP) candidate history table that includes motion vector information of previously coded blocks that extend beyond adjacent neighboring blocks of a current block, constructing a motion vector predictor list, and adding one or more HMVP candidates from the HMVP candidate history table to the motion vector predictor list. Adding the one or more HMVP candidates from the HMVP candidate history table comprises comparing a first HMVP candidate in the HMVP candidate history table to two entries in the motion vector predictor list and no other entries, and adding the first HMVP candidate to the motion vector predictor list when the first HMVP candidate is different than both of the two entries in the motion vector predictor list. The method also includes coding the current block of video data using the motion vector predictor list.
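The simplification above can be sketched directly: when promoting a candidate from the HMVP history table into the predictor list, it is compared against only the first two list entries, and appended when it differs from both. Modeling MVs as `(dx, dy)` tuples and the list-size limit are illustrative assumptions.

```python
def add_hmvp_candidates(pred_list, hmvp_table, max_size):
    """Append HMVP candidates with the simplified two-entry pruning.

    Each candidate is compared to the first two entries of the predictor
    list and no other entries; it is added only if it differs from both,
    so a duplicate of a later entry may still be appended (the cost of
    the simplification).
    """
    for cand in hmvp_table:
        if len(pred_list) >= max_size:
            break
        if all(cand != entry for entry in pred_list[:2]):
            pred_list.append(cand)
    return pred_list
```

Note in the usage below that `(5, 5)` is appended even though it already appears as the third list entry, since only the first two entries are checked.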

BLOCK BOUNDARY PREDICTION REFINEMENT WITH OPTICAL FLOW
20220239921 · 2022-07-28

Systems, methods, and instrumentalities are disclosed for sub-block/block refinement, including sub-block/block boundary refinement, such as block boundary prediction refinement with optical flow (BBPROF). A block comprising a current sub-block may be decoded based on a sample value for a first pixel that is obtained based on, for example, an MV for a current sub-block, an MV for a sub-block adjacent the current sub-block, and a sample value for a second pixel adjacent the first pixel. BBPROF may include determining spatial gradients at pixel(s)/sample location(s). An MV difference may be calculated between a current sub-block and one or more neighboring sub-blocks. An MV offset may be determined at pixel(s)/sample location(s) based on the MV difference. A sample value offset for the pixel in a current sub-block may be determined. The prediction for a reference picture list may be refined by adding the calculated sample value offset to the sub-block prediction.
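The per-pixel refinement step follows the optical-flow model: the sample offset is the dot product of the local spatial gradient with a per-pixel MV offset derived from the MV difference between the current and neighboring sub-blocks, i.e. delta = gx * dmv_x + gy * dmv_y. In this sketch the gradients are simple central differences, and the distance-based weighting of the MV difference is folded into the `dmv` argument for brevity; both choices are illustrative assumptions.

```python
def spatial_gradient(samples, x, y):
    """Central-difference spatial gradient on a 2D list of samples."""
    gx = (samples[y][x + 1] - samples[y][x - 1]) / 2.0
    gy = (samples[y + 1][x] - samples[y - 1][x]) / 2.0
    return gx, gy

def refine_sample(samples, x, y, dmv):
    """Add the optical-flow sample offset gx*dmv_x + gy*dmv_y."""
    gx, gy = spatial_gradient(samples, x, y)
    return samples[y][x] + gx * dmv[0] + gy * dmv[1]
```

With a purely horizontal MV offset the correction depends only on the horizontal gradient, and vice versa, so pixels near a sub-block boundary are nudged toward the neighbor's motion.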

Method and device for video signal processing
11206419 · 2021-12-21

An image decoding method according to the present invention may comprise obtaining a motion vector of a current block, updating the motion vector when bi-directional optical flow is applied to the current block, and performing motion compensation on the current block by using the updated motion vector.
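The control flow above reduces to a conditional refinement: the signaled MV is used as-is unless bi-directional optical flow applies, in which case it is updated before motion compensation. Since the abstract does not specify the refinement derivation, it is abstracted here as an offset argument; the tuple MV model is an illustrative assumption.

```python
def decode_block_mv(signaled_mv, bio_enabled, bio_offset=(0, 0)):
    """Return the MV to use for motion compensation.

    The MV is updated only when bi-directional optical flow (BIO)
    applies to the current block; otherwise it passes through unchanged.
    """
    mv = signaled_mv
    if bio_enabled:
        mv = (mv[0] + bio_offset[0], mv[1] + bio_offset[1])
    return mv
```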
