H04N19/70

Restrictions on in-loop filtering

Devices, systems and methods for digital video coding, which include: determining whether a sample is located at a sub-block transform boundary in a case where a sub-block transform is applied; applying a deblocking filter process if it is determined that the sample is located at a sub-block transform boundary; and performing a conversion between the video and a bitstream representation of the video.
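The boundary test described above can be sketched as follows. This is a hypothetical simplification assuming the sub-block transform splits the block into two halves, so the only internal boundary lies at the half-width position; the function names and the half-split rule are assumptions, not the claimed method.

```python
def sbt_boundary_positions(block_width, sbt_applied):
    """Return x-positions of internal sub-block transform boundaries.

    Hypothetical helper: assumes SBT splits the block into two equal
    halves, so the single internal boundary sits at block_width // 2.
    """
    if not sbt_applied:
        return []
    return [block_width // 2]

def should_deblock_sample(x, block_width, sbt_applied):
    """Apply the deblocking filter only to samples on an SBT boundary."""
    return x in sbt_boundary_positions(block_width, sbt_applied)
```

With a 16-wide block, only the column at x = 8 would be filtered, and nothing is filtered when SBT is off.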

Palette coding for screen content coding
11558627 · 2023-01-17

Sketch copy mode may be used to code blocks comprising irregular lines, syntax redundancy may be removed from blocks with special characteristics, and/or run value coding may be simplified. Parsing dependencies in the palette coding design may be removed. For example, the context modeling dependency of the syntax element palette_transpose_flag may be removed by simplifying the corresponding context model. The context modeling of the syntax element palette_mode may be removed, for example, by using run-length coding without context. The syntax parsing dependencies and/or syntax signaling dependencies related to escape color signaling may be removed. A palette table generation process may handle input screen content video with high bit depths, for example, at the encoder side.
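The context-free alternative mentioned for palette_mode can be illustrated with plain run-length coding. This sketch is an assumption-laden illustration of run-length coding in general, not the patented syntax: it simply collapses repeated symbols into (symbol, run) pairs with no context model.

```python
def run_length_encode(flags):
    """Encode a symbol list as (symbol, run) pairs -- a context-free
    alternative to context-modelled coding of per-sample mode flags."""
    runs = []
    for s in flags:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([s, 1])       # start a new run
    return [(s, r) for s, r in runs]

def run_length_decode(runs):
    """Expand (symbol, run) pairs back into the original symbol list."""
    out = []
    for s, r in runs:
        out.extend([s] * r)
    return out
```

Because no context is carried between symbols, the two directions can be parsed independently, which is the point of removing the parsing dependency.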

Sub-block motion derivation and decoder-side motion vector refinement for merge mode
11558633 · 2023-01-17

Systems, methods, and instrumentalities for sub-block motion derivation and motion vector refinement for merge mode are disclosed herein. Video data may be coded (e.g., encoded and/or decoded). A collocated picture for a current slice of the video data may be identified. The current slice may include one or more coding units (CUs). One or more neighboring CUs may be identified for a current CU. A neighboring CU (e.g., each neighboring CU) may correspond to a reference picture. A (e.g., one) neighboring CU may be selected to be a candidate neighboring CU based on the reference pictures and the collocated picture. A motion vector (MV) (e.g., collocated MV) may be identified from the collocated picture based on an MV (e.g., a reference MV) of the candidate neighboring CU. The current CU may be coded (e.g., encoded and/or decoded) using the collocated MV.
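The selection and lookup steps above can be sketched as follows. This is a hypothetical simplification: the selection rule (pick the first neighbor whose reference picture is the collocated picture), the POC-based comparison, and the 4x4-grid MV field lookup are all assumptions for illustration, not the claimed procedure.

```python
def select_candidate_neighbor(neighbors, collocated_poc):
    """Pick the first neighboring CU whose reference picture is the
    collocated picture (hypothetical selection rule)."""
    for cu in neighbors:
        if cu["ref_poc"] == collocated_poc:
            return cu
    return neighbors[0] if neighbors else None

def derive_collocated_mv(cur_pos, ref_mv, col_mv_field):
    """Fetch the MV stored in the collocated picture at the location the
    candidate's reference MV points to (4x4-grid lookup, a common
    simplification in MV-field storage)."""
    x = (cur_pos[0] + ref_mv[0]) >> 2   # luma position -> 4x4 grid index
    y = (cur_pos[1] + ref_mv[1]) >> 2
    return col_mv_field.get((x, y), (0, 0))
```

The current CU would then be predicted with the returned collocated MV.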

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD
20230009580 · 2023-01-12

The present technology relates to an image processing device and an image processing method that enable simplification of processing.

When matrix-based intra prediction, that is, intra prediction using a matrix operation, is performed on a current prediction block to be encoded or decoded, the matrix-based intra prediction is performed using a coefficient that relates to a sum of change amounts of pixel values and that is set to a fixed value, and a predicted image of the current prediction block is generated. The current prediction block is then encoded or decoded using the predicted image. The present technology can be applied, for example, when encoding and decoding an image.
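The core matrix operation can be sketched as an integer matrix-vector product over downsampled boundary samples. The shift and rounding offset below are illustrative fixed values, not the values from the disclosure; the matrix contents and the rounding scheme are assumptions.

```python
import numpy as np

def mip_predict(boundary, matrix, shift=6, offset=32):
    """Matrix-based intra prediction sketch: predicted samples are an
    integer matrix product of downsampled boundary samples, scaled by a
    fixed right-shift with rounding (shift/offset are illustrative)."""
    acc = matrix @ boundary          # the matrix operation
    return (acc + offset) >> shift   # fixed-value scaling coefficient
```

With a scaled identity matrix (64 at the fixed shift of 6), the boundary samples pass through unchanged, which is a quick sanity check on the rounding.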

SPATIAL NEIGHBOR BASED AFFINE MOTION DERIVATION

An electronic apparatus performs a method of coding video data. The method includes receiving, from a bitstream of the video data, a first syntax element indicating that an affine motion model is enabled for a current coding block, estimating parameters of the affine motion model using gradients of motion vectors of multiple spatial neighboring blocks of the current coding block, and constructing motion vectors of the affine motion model for the current coding block by using the estimated parameters. In some embodiments, constructing the motion vectors further includes converting the estimated parameters into control point motion vectors (CPMVs) and adding the CPMVs into a current affine merge candidate list. In some embodiments, constructing the motion vectors further includes deriving a motion vector predictor for an affine mode.
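The parameter-to-CPMV conversion can be sketched with the standard 6-parameter affine model mv(x, y) = (a + c·x + d·y, b + e·x + f·y), evaluated at the three corner positions. The parameter naming and corner placement are conventional assumptions; the gradient-based estimation step itself is not shown.

```python
def params_to_cpmvs(a, b, c, d, e, f, width, height):
    """Convert 6-parameter affine model mv(x, y) = (a + c*x + d*y,
    b + e*x + f*y) into the three corner control-point MVs."""
    mv0 = (a, b)                              # top-left corner (0, 0)
    mv1 = (a + c * width, b + e * width)      # top-right corner (W, 0)
    mv2 = (a + d * height, b + f * height)    # bottom-left corner (0, H)
    return mv0, mv1, mv2
```

The three CPMVs returned here are what would be appended to the affine merge candidate list.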

METHOD FOR ENCODING IMMERSIVE IMAGE AND METHOD FOR DECODING IMMERSIVE IMAGE
20230011027 · 2023-01-12

Disclosed herein is a method for encoding an immersive image. The method includes detecting a non-diffuse surface in a first texture image of a first view, generating an additional texture image from the first texture image based on the detected non-diffuse surface, performing pruning on the additional texture image based on a second texture image of a second view, generating a texture atlas based on the pruned additional texture image, and encoding the texture atlas.
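The pruning step can be sketched in a heavily simplified form: samples of the additional texture that are already represented, within a threshold, by the second view's texture are zeroed out. This ignores the inter-view reprojection a real pruner performs; the threshold, alignment assumption, and function name are all hypothetical.

```python
import numpy as np

def prune(additional, reference, threshold=8):
    """Zero out samples of the additional texture already represented
    (within a threshold) in the reference view's texture. Assumes the
    two textures are already aligned -- a large simplification."""
    diff = np.abs(additional.astype(int) - reference.astype(int))
    keep = diff > threshold             # True where the sample is novel
    return np.where(keep, additional, 0), keep
```

Only the surviving (non-zero) samples would be packed into the texture atlas.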

Contour mode prediction

A video decoder, and a corresponding method, for supporting a prediction mode for predicting blocks of a video are configured to predict each of the blocks by extrapolating a neighborhood of the respective block into the block along a direction that varies across the respective block.
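The varying-direction extrapolation can be sketched as follows: each sample traces its per-sample direction back to the reconstructed top row (falling back to the left column when the trace leaves the block). The integer direction function and the fallback rule are illustrative assumptions, not the disclosed predictor.

```python
def contour_predict(top, left, width, height, direction_fn):
    """Predict each sample by extrapolating the top/left neighborhood
    along a per-sample direction direction_fn(x, y) -> horizontal step
    per row (simplified integer-direction sketch)."""
    pred = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            dx = direction_fn(x, y)       # direction varies across block
            src = x - dx * (y + 1)        # trace back to the top row
            if 0 <= src < width:
                pred[y][x] = top[src]
            else:
                pred[y][x] = left[y]      # fall back to the left column
    return pred
```

With a constant zero direction this degenerates to plain vertical prediction, which makes the varying-direction case easy to compare against.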