H04N19/98

Method for encoding and decoding image information

The present invention relates to a method for encoding and decoding image information and to an apparatus using the same. The method for encoding the image information according to the present invention comprises the steps of: generating a recovery block; applying a deblocking filter to the recovery block; applying a sample adaptive offset (SAO) to the recovery block to which the deblocking filter has been applied; and transmitting the image information including information on the applied SAO, wherein, in the transmitting step, when a band offset is applied during the step of applying the SAO, information specifying the bands that cover the range of pixel values to which the band offset is applied is transmitted.
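The band-offset signaling described above can be sketched as follows. This is an illustrative HEVC-style band offset, not the patented encoder itself: the 8-bit pixel range [0, 255] is split into 32 bands of width 8, and the bitstream signals a starting band position plus offsets for 4 consecutive bands, which together specify the range of pixel values the offset covers.

```python
BAND_WIDTH = 8          # 256 values / 32 bands
NUM_SIGNALED_BANDS = 4  # offsets are signaled for 4 consecutive bands

def apply_band_offset(pixels, start_band, offsets):
    """Add the signaled offset to pixels whose band falls in the covered range."""
    assert len(offsets) == NUM_SIGNALED_BANDS
    out = []
    for p in pixels:
        band = p // BAND_WIDTH
        idx = band - start_band
        if 0 <= idx < NUM_SIGNALED_BANDS:
            p = max(0, min(255, p + offsets[idx]))  # clip to valid range
        out.append(p)
    return out
```

For example, with `start_band=12` and `offsets=[3, -2, 1, 0]`, a pixel value of 100 (band 12) is shifted to 103, while a pixel value of 130 (band 16) lies outside the covered bands and is left unchanged.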

Methods and apparatus for depth encoding and decoding

Methods and devices for encoding/decoding data representative of the depth of a 3D scene. The depth data are quantized into a range of quantized depth values larger than the range of encoding values allowed by a determined encoding bit depth. For each block of pixels comprising depth data, a first set of candidate quantization parameters is determined. A second set of quantization parameters is then determined as a subset of the union of the first sets, comprising the candidate quantization parameters common to a plurality of blocks, and one or more quantization parameters of the second set are associated with each block of pixels of the picture. The second set of quantization parameters is encoded, and the quantized depth values are encoded according to the associated quantization parameters.
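The two-stage parameter selection above can be sketched as follows; the selection criterion (keep candidates shared by at least `min_blocks` blocks) is an assumption for illustration, not the claimed rule.

```python
from collections import Counter

def select_common_qps(first_sets, min_blocks=2):
    """first_sets: one set of candidate quantization parameters per block.
    Returns the second set: candidates common to a plurality of blocks."""
    counts = Counter(qp for s in first_sets for qp in set(s))
    return sorted(qp for qp, n in counts.items() if n >= min_blocks)

# Three blocks propose candidate QPs; only 18 and 26 recur across blocks,
# so only those survive into the (smaller, cheaper-to-encode) second set.
first_sets = [{10, 18, 26}, {18, 26, 34}, {18, 42}]
second_set = select_common_qps(first_sets)
print(second_set)  # [18, 26]
```

Encoding only the shared second set, rather than every per-block first set, is what keeps the parameter overhead small.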

Method to extend the range of rice parameter for residual coding in transform skip mode
11265536 · 2022-03-01

A method, computer program, and computer system are provided for video coding. Video data comprising a reference frame and one or more residual blocks is received. Transform coefficients associated with the residual blocks are identified. The video data corresponding to the residual blocks is encoded based on an extended dynamic range associated with the transform coefficients.
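A small Golomb-Rice sketch shows why an extended Rice-parameter range matters for residual coding: large transform-skip residuals at high bit depths produce very long unary prefixes unless the parameter k is allowed to grow. The legacy cap of k ≤ 4 used below is an assumption for illustration.

```python
def rice_encode(value, k):
    """Encode a non-negative value with Rice parameter k: a unary quotient
    (q ones terminated by a zero) followed by the k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    prefix = "1" * q + "0"
    return prefix + format(r, f"0{k}b") if k > 0 else prefix

# A residual of 1000 under a capped parameter vs. an extended one:
print(len(rice_encode(1000, 4)))  # 67 bits (62-one unary prefix)
print(len(rice_encode(1000, 8)))  # 12 bits
```

Extending the allowed range of k lets the codeword length track the larger dynamic range of transform-skip coefficients.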

REGION BASED PROCESSING

Systems, apparatuses and methods may provide for technology that partitions a high dynamic range (HDR) image into a plurality of regions and determines, on a per-region basis, a luminance level of the HDR image. Additionally, the technology may select, on a per-region basis, an encoding amount for each region in the plurality of regions based on the luminance level.
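The per-region flow can be sketched as below. The tiling, the luminance statistic (a per-tile mean), and the bit-allocation rule are all illustrative assumptions, not the claimed technique.

```python
def region_luminance(image, tile):
    """image: 2-D list of luminance samples; tile: side of square regions.
    Returns the mean luminance of each region, in row-major order."""
    h, w = len(image), len(image[0])
    means = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            vals = [image[j][i]
                    for j in range(y, min(y + tile, h))
                    for i in range(x, min(x + tile, w))]
            means.append(sum(vals) / len(vals))
    return means

def encoding_amount(mean_luma, base_bits=8):
    """Assumed rule: spend extra bits on bright (highlight) regions."""
    return base_bits + (2 if mean_luma > 100.0 else 0)
```

A 2x4 frame split into 2x2 tiles yields one dark and one bright region, which then receive 8 and 10 bits respectively under this toy rule.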

VIDEO RECEPTION METHOD, VIDEO TRANSMISSION METHOD, VIDEO RECEPTION APPARATUS, AND VIDEO TRANSMISSION APPARATUS
20220060756 · 2022-02-24

Provided is a video reception method performed by a video reception apparatus including a display. The video reception method includes: receiving a reception signal multiplexed from video data and audio data; outputting, as first transfer characteristics information, transfer characteristics obtained by demultiplexing the reception signal; outputting, as second transfer characteristics information, transfer characteristics obtained by decoding the video data, the second transfer characteristics information being information for specifying, at frame accuracy, a transfer function corresponding to a luminance dynamic range of the video data; and displaying the video data while controlling a luminance dynamic range of the display at frame accuracy according to the second transfer characteristics information.
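The two levels of transfer-characteristics signaling can be sketched as below; the data layout is an assumption for illustration. The container-level value obtained by demultiplexing applies to the whole stream, while the value obtained by decoding each frame lets the display's luminance dynamic range follow the content at frame accuracy.

```python
# First transfer characteristics information: one value per stream,
# obtained by demultiplexing the reception signal.
CONTAINER_TC = "SDR"

# Second transfer characteristics information: one value per decoded
# frame, specifying the transfer function at frame accuracy.
frames = [
    {"pts": 0, "tc": "SDR"},
    {"pts": 1, "tc": "HLG"},  # an HDR scene starting mid-stream
    {"pts": 2, "tc": "HLG"},
]

def display_modes(decoded_frames):
    """Control the display per frame from the frame-level information,
    rather than from the single stream-level value."""
    return [f["tc"] for f in decoded_frames]

print(display_modes(frames))  # ['SDR', 'HLG', 'HLG']
```

Relying only on the container-level value would hold the display at "SDR" across the mid-stream switch; the frame-level information avoids that mismatch.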

MACHINE LEARNING BASED DYNAMIC COMPOSING IN ENHANCED STANDARD DYNAMIC RANGE VIDEO (SDR+)

Training image pairs are received, each comprising a training SDR image and a corresponding training HDR image that depict the same visual content at different luminance dynamic ranges. Training image feature vectors are extracted from the training SDR images in the training image pairs. The training image feature vectors are used to train backward reshaping metadata prediction models for predicting operational parameter values of the backward reshaping mappings used to backward reshape SDR images into mapped HDR images.
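The training pipeline above can be sketched minimally as follows. Both the feature choice (a luma histogram) and the predictor (nearest training example) are illustrative assumptions; the actual models predict operational parameter values of the backward reshaping mappings.

```python
def luma_histogram(sdr_image, bins=8):
    """Feature vector: normalized histogram of 8-bit luma samples."""
    flat = [v for row in sdr_image for v in row]
    hist = [0] * bins
    for v in flat:
        hist[min(v * bins // 256, bins - 1)] += 1
    return [h / len(flat) for h in hist]

def train(pairs):
    """pairs: (training SDR image, reshaping parameters derived from the
    paired training HDR image). Stores (feature, parameters) examples."""
    return [(luma_histogram(sdr), params) for sdr, params in pairs]

def predict(model, sdr_image):
    """Predict reshaping parameters for a new SDR image from the training
    example whose feature vector is nearest (a toy stand-in for the
    regression or network models used in practice)."""
    f = luma_histogram(sdr_image)
    best = min(model, key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], f)))
    return best[1]
```

Given one dark and one bright training pair, a new dark SDR image is mapped to the parameters learned from the dark pair, illustrating how features drive the metadata prediction.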