H04N19/80

ADAPTIVE INTERPOLATION FILTER FOR MOTION COMPENSATION
20220385897 · 2022-12-01

A video processing apparatus may comprise one or more processors that are configured to determine an interpolation filter length for an interpolation filter associated with a coding unit (CU) based on a size of the CU. The one or more processors may be configured to determine an interpolated reference sample based on the determined interpolation filter length for the interpolation filter and a reference sample for the CU. The one or more processors may be configured to predict the CU based on the interpolated reference sample. For example, if a first CU has a size that is greater than the size of a second CU, the one or more processors may be configured to use a shorter interpolation filter for the first CU than for the second CU.
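
The size-dependent rule in the abstract (larger CU, shorter filter) can be sketched as follows; the tap counts and size threshold are illustrative assumptions, not values from the patent:

```python
def interpolation_filter_length(cu_width: int, cu_height: int,
                                size_threshold: int = 16) -> int:
    """Return a number of interpolation filter taps for a CU of the given size.

    Hypothetical rule: CUs larger than the threshold use a shorter
    (6-tap) filter; smaller CUs use a longer (8-tap) filter.
    """
    # Use the larger CU dimension as the size measure (an assumption).
    cu_size = max(cu_width, cu_height)
    # Larger CU -> shorter interpolation filter, per the abstract's example.
    return 6 if cu_size > size_threshold else 8
```

Under these assumed values, a 32x32 CU would use a 6-tap filter while an 8x8 CU would use an 8-tap filter.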

ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD

An encoder includes circuitry and memory coupled to the circuitry. The circuitry: executes a second process of applying a second filter to the first image to generate a second image, not holding the second image as a reference image, holding the first image as a reference image, and displaying the second image; writes coefficients of each of one or more filter candidates that are candidates for the second filter into a bitstream, wherein the coefficients are included in a first storage location when written into the bitstream; and writes a parameter that specifies, for each image, one of the one or more filter candidates as the second filter into the bitstream, wherein the parameter is included in a second storage location when written into the bitstream, and the second storage location is different from the first storage location.
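
The two-location signalling described above can be sketched as a dict-based "bitstream": the candidate coefficients go into one storage location, and the per-image parameter selecting one candidate goes into a different location. The location names and the dict representation are assumptions for illustration only:

```python
def write_bitstream(filter_candidates, per_image_selection):
    """Write filter-candidate coefficients and per-image selection
    parameters into two distinct storage locations of a toy bitstream."""
    return {
        # First storage location: coefficients of each filter candidate.
        "parameter_set": {
            "filter_coefficients": [list(c) for c in filter_candidates],
        },
        # Second, distinct storage location: one selection per image.
        "picture_headers": [
            {"image": i, "filter_index": idx}
            for i, idx in enumerate(per_image_selection)
        ],
    }
```

Keeping the coefficients out of the per-picture headers means they are written once and merely referenced per image by index.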

CONTENT-ADAPTIVE ONLINE TRAINING METHOD AND APPARATUS FOR POST-FILTERING
20220385896 · 2022-12-01

Aspects of the disclosure provide a method, an apparatus, and a non-transitory computer-readable storage medium for video decoding. The apparatus can include processing circuitry. The processing circuitry is configured to receive an image or video comprising one or more blocks. The processing circuitry can decode a first post-filtering parameter in the image or video corresponding to the one or more blocks to be reconstructed. The first post-filtering parameter applies to at least one of the one or more blocks and has been updated by a post-filtering module in a post-filtering neural network (NN) that is trained based on a training dataset. The processing circuitry can determine the post-filtering NN in a video decoder corresponding to the one or more blocks based on the first post-filtering parameter. The processing circuitry can decode the one or more blocks based on the determined post-filtering NN corresponding to the one or more blocks.
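
The decoder-side flow above can be sketched minimally: a decoded post-filtering parameter selects the post-filtering NN, which is then applied to the blocks. The "NN" here is a stand-in callable keyed by parameter value; the table structure and parameter form are assumptions:

```python
def apply_post_filter(blocks, post_filter_param, nn_table):
    """Determine the post-filtering NN from the decoded parameter,
    then apply it to each block being reconstructed."""
    # Determine the post-filtering NN corresponding to the blocks.
    post_filter_nn = nn_table[post_filter_param]
    # Decode (post-filter) each block with the determined NN.
    return [post_filter_nn(block) for block in blocks]
```

A trained model would replace the lookup-table callables; the control flow (decode parameter, determine NN, filter blocks) is the part the abstract describes.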

Methods and devices for encoding and decoding a data stream representing at least one image that disables post-processing of reconstructed block based on prediction mode

A method for decoding a data stream representative of an image split into blocks. For a current block of the image, an item of information indicating a coding mode among a first and a second coding mode of the current block is decoded from the data stream and the current block is decoded depending on this information. When the coding mode of the current block corresponds to the second coding mode, the current block is reconstructed from a prediction obtained, for each pixel, from another previously decoded pixel belonging to the current block or to a previously decoded block of the image, and from a decoded residue associated with the pixel. At least one processing method is applied to the reconstructed current block for at least one pixel of the current block depending on the coding mode of the current block and/or the coding mode of the neighbouring blocks.
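
The second-coding-mode reconstruction above can be sketched as follows: each pixel is predicted from a previously decoded pixel and its decoded residue is added, and post-processing is then skipped depending on the coding mode. The left-neighbour predictor and the mode constants are illustrative assumptions:

```python
MODE_1, MODE_2 = 1, 2  # hypothetical labels for the two coding modes

def reconstruct_block(residues, left_column, coding_mode, post_process):
    """Reconstruct one block row by row; each pixel is predicted from the
    previously decoded pixel to its left (inside the block, or in the
    previously decoded block for the first column)."""
    block = []
    for row, res_row in enumerate(residues):
        decoded_row = []
        prev = left_column[row]      # pixel from a previously decoded block
        for res in res_row:
            pixel = prev + res       # prediction + decoded residue
            decoded_row.append(pixel)
            prev = pixel             # the next pixel predicts from this one
        block.append(decoded_row)
    # Post-processing is disabled when the block uses the second coding mode.
    if coding_mode != MODE_2:
        block = [[post_process(p) for p in row] for row in block]
    return block
```

Disabling the post-processing for the second mode avoids altering pixels that later pixels in the same block were predicted from.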

Image processing apparatus, image processing method and image processing program

An image processing apparatus performs correction for each frame group, where a frame group is a predetermined number of frames into which video data is divided. The apparatus includes a decoding unit configured to obtain a corrected frame group by correcting a second frame group, which is a frame group temporally continuous with a first frame group, using a feature quantity of the first frame group. The decoding unit performs the correction so that subjective image quality, based on the relationship between the second frame group and the frame group that temporally follows it, is increased, and so that a predetermined classifier classifies the concatenation of the second frame group with its temporally subsequent frame group as the same as the concatenation of the corrected frame group with a corrected version of that subsequent frame group.
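
The correction thus balances two terms: a subjective-quality term and a classifier-consistency term. A minimal sketch of such a combined objective, with both score functions and the weighting being assumptions:

```python
def correction_objective(quality_score, classifier_same_prob, weight=0.5):
    """Combined score to maximize when correcting a frame group.

    quality_score        -- hypothetical subjective-quality measure in [0, 1]
    classifier_same_prob -- probability the classifier judges the corrected
                            concatenation the same as the original one
    """
    # Weighted sum: higher is better on both criteria.
    return weight * quality_score + (1.0 - weight) * classifier_same_prob
```

In a real system both inputs would come from learned models; the point is that the correction is optimized jointly, not for quality alone.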

Loop filter block flexible partitioning
11516469 · 2022-11-29

A method of loop filtering in a video coding process comprises receiving image data; analyzing the image data; flexibly partitioning the image data into loop filtering blocks (LFBs) so that LFBs in at least one of a first row and a first column of a frame can be smaller than the other LFBs within the same frame; and applying a loop filter to the LFBs.
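
Along one dimension, this flexible partitioning can be sketched by letting the first LFB absorb the remainder when the frame dimension is not a multiple of the base LFB size (conventional raster partitioning would leave the remainder in the last block). The base size and remainder placement are assumptions:

```python
def lfb_sizes(frame_dim, lfb_size):
    """Sizes of loop filtering blocks along one dimension of a frame;
    any remainder becomes a smaller LFB in the first row/column."""
    remainder = frame_dim % lfb_size
    sizes = [lfb_size] * (frame_dim // lfb_size)
    if remainder:
        # Smaller LFB placed first, per the abstract's flexible scheme.
        sizes = [remainder] + sizes
    return sizes
```

For example, a 70-sample dimension with a base LFB size of 32 partitions as [6, 32, 32], keeping all full-size LFBs aligned to the frame's trailing edge.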