Patent classifications
H04N19/159
Video coding method on basis of secondary transform, and device for same
A video decoding method according to the present document is characterized by comprising: a step of deriving transform coefficients through inverse quantization on the basis of quantized transform coefficients for a target block; a step of deriving modified transform coefficients on the basis of an inverse reduced secondary transform (RST) of the transform coefficients; and a step of generating a reconstructed picture on the basis of residual samples for the target block, the residual samples being derived through an inverse primary transform of the modified transform coefficients, wherein the inverse RST using a transform kernel matrix is performed on transform coefficients of the upper-left 4×4 region of an 8×8 region of the target block, and the modified transform coefficients of the upper-left, upper-right, and lower-left 4×4 regions of the 8×8 region are derived through the inverse RST.
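The abstract above can be read as a matrix-vector operation: the 16 dequantized coefficients of the upper-left 4×4 region are multiplied by a 48×16 transform kernel matrix, and the 48 outputs are scattered into three 4×4 sub-regions of the 8×8 block. The sketch below illustrates that data flow only; the function name, the row-major scan order, and the region ordering are assumptions, not the claimed method.

```python
def inverse_rst_8x8(coeffs_8x8, kernel):
    # Gather the 16 input coefficients from the upper-left 4x4 region
    # (row-major scan order is an assumption for illustration).
    x = [coeffs_8x8[r][c] for r in range(4) for c in range(4)]
    # Multiply by the 48x16 transform kernel matrix: y = K @ x.
    y = [sum(kernel[i][j] * x[j] for j in range(16)) for i in range(48)]
    # Scatter the 48 outputs into the upper-left, upper-right and
    # lower-left 4x4 regions; the lower-right 4x4 region stays zero.
    out = [[0] * 8 for _ in range(8)]
    regions = [(0, 0), (0, 4), (4, 0)]  # (row, col) origins of the three 4x4s
    k = 0
    for r0, c0 in regions:
        for r in range(4):
            for c in range(4):
                out[r0 + r][c0 + c] = y[k]
                k += 1
    return out
```

Note the asymmetry the abstract describes: 16 coefficients go in, 48 modified coefficients come out, which is what makes the secondary transform "reduced".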
Video decoding method and apparatus and video encoding method and apparatus
Provided is a video decoding method including determining a displacement vector per unit time of pixels of a current block in a horizontal direction or a vertical direction, the pixels including a pixel adjacent to an inside of a boundary of the current block, by using values about reference pixels included in a first reference block and a second reference block, without using a stored value about a pixel located outside boundaries of the first reference block and the second reference block; and obtaining a prediction block of the current block by performing block-unit motion compensation and pixel-group-unit motion compensation on the current block by using a gradient value in the horizontal direction or the vertical direction of a first corresponding reference pixel in the first reference block which corresponds to a current pixel included in a current pixel group in the current block, a gradient value in the horizontal direction or the vertical direction of a second corresponding reference pixel in the second reference block which corresponds to the current pixel, a pixel value of the first corresponding reference pixel, a pixel value of the second corresponding reference pixel, and a displacement vector per unit time of the current pixel in the horizontal direction or the vertical direction. In this regard, the current pixel group may include at least one pixel.
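Two details of the abstract above lend themselves to a short sketch: gradients are computed with indices clamped to the reference-block boundary (so no stored value outside the block is read), and the per-pixel prediction combines the two reference samples, their gradients, and the per-unit-time displacement vector. The combination formula below follows the common bi-directional optical-flow form and is an assumption for illustration, not the claimed equations.

```python
def gradient_h(block, r, c):
    # Horizontal gradient with the column index clamped to the block
    # boundary, so no sample outside the reference block is ever read.
    w = len(block[0])
    left = block[r][max(c - 1, 0)]
    right = block[r][min(c + 1, w - 1)]
    return (right - left) / 2

def refined_prediction(p0, p1, gx0, gx1, gy0, gy1, vx, vy):
    # Average of the two reference samples plus a gradient correction
    # driven by the per-pixel displacement vector (vx, vy).
    return (p0 + p1 + vx * (gx0 - gx1) + vy * (gy0 - gy1)) / 2
```

With a zero displacement vector this reduces to plain bi-prediction, which matches the idea of pixel-group refinement layered on top of block-unit motion compensation.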
Adaptive motion vector precision for affine motion model based video coding
Systems and methods are described for video coding using affine motion models with adaptive precision. In an example, a block of video is encoded in a bitstream using an affine motion model, where the affine motion model is characterized by at least two motion vectors. A precision is selected for each of the motion vectors, and the selected precisions are signaled in the bitstream. In some embodiments, the precisions are signaled by including in the bitstream information that identifies one of a plurality of elements in a selected predetermined precision set. The identified element indicates the precision of each of the motion vectors that characterize the affine motion model. In some embodiments, the precision set to be used is signaled expressly in the bitstream; in other embodiments, the precision set may be inferred, e.g., from the block size, block shape or temporal layer.
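The signaling scheme in the abstract above, where a single bitstream element selects one entry of a predetermined precision set and that entry fixes the precision of every control-point motion vector, can be sketched as a table lookup plus rounding. The set contents, the pel units, and the function names below are illustrative assumptions; the actual sets and their inference rules (block size, shape, temporal layer) are defined by the codec.

```python
# Hypothetical precision sets: each element gives the precision (in pel)
# of the two motion vectors characterising a 4-parameter affine model.
PRECISION_SETS = {
    0: [(1 / 4, 1 / 4), (1 / 16, 1 / 16), (1, 1)],
}

def mv_precisions(set_id, element_idx):
    # The signalled index identifies one element of the selected set; that
    # element indicates the precision of each control-point motion vector.
    return PRECISION_SETS[set_id][element_idx]

def round_to_precision(value_pel, precision_pel):
    # Round a motion-vector component to the nearest multiple of the
    # selected precision.
    return round(value_pel / precision_pel) * precision_pel
```

A finer precision (e.g. 1/16 pel) costs more bits per motion-vector difference but models the affine field more accurately, which is why letting the precision adapt per block is attractive.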
Apparatus of decoding video data
An apparatus can include a prediction mode decoding module configured to derive a luma intra prediction mode and a chroma intra prediction mode; a prediction size determining module configured to determine a size of a luma transform unit and a size of a chroma transform unit using transform size information; a reference pixel generating module configured to generate reference pixels if at least one reference pixel is unavailable; a reference pixel filtering module configured to adaptively filter the reference pixels of a current luma block based on the luma intra prediction mode and the size of the luma transform unit, and not to filter the reference pixels of a current chroma block; a prediction block generating module configured to generate prediction blocks of the current luma block and the current chroma block; a residual block generating module configured to generate a luma residual block and a chroma residual block; and an adder.
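Two of the modules above are simple enough to sketch: the reference pixel generating module substitutes each unavailable sample with a neighbouring available one, and the filtering module filters luma references adaptively while never filtering chroma references. The substitution scheme (nearest-available fill) and the specific luma filtering rule below are common conventions used here as assumptions, not the apparatus's exact logic.

```python
def fill_unavailable(refs):
    # refs: reference samples in scan order, with None where unavailable.
    # Substitute each unavailable sample with the nearest preceding
    # available one; leading gaps copy the first available sample.
    out = list(refs)
    last = next((v for v in out if v is not None), 128)  # mid-grey fallback
    for i, v in enumerate(out):
        if v is None:
            out[i] = last
        else:
            last = v
    return out

def filter_reference(is_luma, intra_mode, tu_size):
    # Chroma reference samples are never filtered; luma filtering depends
    # on the intra mode and transform-unit size (rule here is illustrative,
    # with mode 1 standing for DC, which is typically left unfiltered).
    if not is_luma:
        return False
    return tu_size >= 8 and intra_mode != 1
```

Keeping chroma unfiltered is a complexity/quality trade-off: chroma planes are smoother to begin with, so the smoothing filter buys little there.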
Techniques for decoding or coding images based on multiple intra-prediction modes
Aspects of the present disclosure provide techniques for deriving one or more intra prediction modes (IPMs) from a subset of IPM candidates in order to determine a predictor to use for decoding a block of an image. In some aspects, the subset may include fewer IPMs than the full set of all available IPM candidates (e.g., 67 IPMs in VVC or 35 in HEVC). In some aspects, the subset of IPM candidates may be based on a most probable mode (MPM) list that can be used to determine or signal an IPM based on IPMs previously used in decoding other blocks.
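An MPM list of the kind mentioned above is typically built from the modes of previously decoded neighbouring blocks, padded with common default modes. The sketch below shows that construction; the mode numbering (VVC-style indices) and the candidate ordering are assumptions for illustration, not the disclosure's specific derivation.

```python
PLANAR, DC, HOR, VER = 0, 1, 18, 50  # assumed VVC-style mode indices

def build_mpm_list(left_mode, above_mode, num_mpm=3):
    # Collect distinct candidates from the neighbouring blocks first, then
    # pad with common default modes until the list is full. Signalling an
    # index into this short list is cheaper than coding one of 67 modes.
    mpm = []
    for m in (left_mode, above_mode, PLANAR, DC, VER, HOR):
        if m is not None and m not in mpm:
            mpm.append(m)
        if len(mpm) == num_mpm:
            break
    return mpm
```

This illustrates the size argument in the abstract: a decoder searching or signalling within a 3-entry subset does far less work than one considering all 67 VVC (or 35 HEVC) modes.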