H04N19/126

METHOD FOR GENERATING PREDICTION BLOCK IN AMVP MODE
20180007380 · 2018-01-04

A method of encoding an image in a merge mode, the method including determining motion information of a current prediction unit and generating a prediction block using the motion information; generating a residual block using an original block and the prediction block, transforming the residual block to generate a transformed block, quantizing the transformed block using a quantization parameter to generate a quantized block, and scanning the quantized block to entropy-code the quantized block; and encoding the motion information using effective spatial and temporal merge candidates of the current prediction unit. Further, a motion vector of the temporal merge candidate is a motion vector of a temporal merge candidate within a temporal merge candidate picture, and the quantization parameter is encoded using an average of two effective quantization parameters among a left quantization parameter, an upper quantization parameter and a previous quantization parameter of a current coding unit. Also, when the quantized block is larger than a predetermined size, the quantized block is divided into a plurality of subblocks to be scanned, and a scan pattern for scanning the plurality of subblocks is the same as a scan pattern for scanning quantized coefficients within each subblock. In addition, information indicating a position of a last non-zero quantized coefficient in a transform unit is transmitted to a video decoder.
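The quantization-parameter prediction described above (averaging two effective QPs among the left, upper, and previous coding units) can be sketched as follows. This is an illustrative reading of the abstract, not the patented implementation; the helper name `predict_qp` and the `None`-for-unavailable convention are assumptions.

```python
def predict_qp(qp_left, qp_above, qp_prev):
    """Average the first two effective (available) quantization parameters
    among the left, upper, and previous coding units; fall back to the
    single effective one if only one is available."""
    candidates = [qp for qp in (qp_left, qp_above, qp_prev) if qp is not None]
    if len(candidates) >= 2:
        # Average of the two effective QPs, rounded to the nearest integer.
        return (candidates[0] + candidates[1] + 1) // 2
    if candidates:
        return candidates[0]
    return None  # no predictor available; a slice-level QP would be used

# Example: left CU unavailable, upper QP = 30, previous QP = 27
print(predict_qp(None, 30, 27))  # averages 30 and 27 with rounding: 29
```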

RESHAPING CURVE OPTIMIZATION IN HDR CODING

In a system for coding high dynamic range (HDR) images using lower-dynamic range (LDR) images, a reshaping function allows for a more efficient distribution of the codewords in the lower dynamic range images for improved compression. A trim pass of the LDR images by a colorist may satisfy a director's intent for a given “look,” but may also result in unpleasant clipping artifacts in the reconstructed HDR images. Given an original forward reshaping function which maps HDR luminance values to LDR pixel values, a processor identifies areas of potential clipping and generates modified forward and backward reshaping functions to reduce the visibility of potential artifacts from the trim pass process while preserving the director's intent.
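One simple way to picture the clipping mitigation described above is to detect where a forward reshaping lookup table saturates at the top codeword and re-spread codewords over that range so the curve stays monotone (and thus invertible for backward reshaping). This is an illustrative sketch, not the patented algorithm; the LUT representation and the linear re-spreading are assumptions.

```python
import numpy as np

def soften_clipping(fwd_lut, max_code=1023):
    """Re-spread the clipped tail of a forward reshaping LUT (HDR luminance
    index -> 10-bit LDR codeword) so the mapping stays strictly monotone."""
    lut = np.asarray(fwd_lut, dtype=float)
    clipped = lut >= max_code            # entries pinned to the top codeword
    n = int(clipped.sum())
    if n <= 1:
        return lut                       # no clipped run; nothing to do
    start = int(np.argmax(clipped))      # first clipped index
    lo = lut[start - 1] if start > 0 else 0.0
    out = lut.copy()
    # Spread the clipped tail linearly from the last unclipped value up to
    # max_code, restoring a slope where the curve was flat.
    out[start:] = np.linspace(lo, max_code, n + 1)[1:]
    return out
```

A matching backward reshaping function would then be rebuilt from the modified forward curve, which is possible precisely because the output is monotone.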

METHOD FOR GENERATING PREDICTION BLOCK IN AMVP MODE
20180014027 · 2018-01-11

A method of encoding an image in a merge mode, the method including determining motion information of a current prediction unit and generating a prediction block using the motion information; generating a residual block using an original block and the prediction block, transforming the residual block to generate a transformed block, quantizing the transformed block using a quantization parameter to generate a quantized block, and scanning the quantized block to entropy-code the quantized block; and encoding the motion information using effective spatial and temporal merge candidates of the current prediction unit. In addition, a motion vector of the temporal merge candidate is a motion vector of a temporal merge candidate within a temporal merge candidate picture, and the quantization parameter is encoded using an average of two effective quantization parameters among a left quantization parameter, an upper quantization parameter and a previous quantization parameter of a current coding unit. Also, when the quantized block is larger than a predetermined size, the quantized block is divided into a plurality of subblocks to be scanned, and a scan pattern for scanning the plurality of subblocks is the same as a scan pattern for scanning quantized coefficients within each subblock. Further, a scanning scheme for scanning the quantized coefficients is determined according to an intra-prediction mode and a size of a transform unit.
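The mode-dependent scan selection mentioned at the end of this abstract can be sketched as below. The specific mode ranges and size threshold follow HEVC-style conventions and are assumptions for illustration; the abstract itself does not fix them.

```python
def select_scan(intra_mode, tu_size):
    """Pick a coefficient scan pattern from the intra-prediction mode and
    the transform-unit size (HEVC-style illustrative rule)."""
    if tu_size <= 8:                  # mode-dependent scan for small TUs only
        if 6 <= intra_mode <= 14:     # near-horizontal prediction modes
            return "vertical"
        if 22 <= intra_mode <= 30:    # near-vertical prediction modes
            return "horizontal"
    return "diagonal"                 # default diagonal (zig-zag-like) scan

# A near-vertical mode on a 4x4 TU scans horizontally; large TUs always
# use the diagonal scan regardless of mode.
print(select_scan(26, 4))   # "horizontal"
print(select_scan(26, 16))  # "diagonal"
```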

SYSTEMS AND METHODS FOR COMPRESSING IMAGE DATA GENERATED BY A COMPUTED TOMOGRAPHY (CT) IMAGING SYSTEM
20180014016 · 2018-01-11

A compression device for compressing image data generated by a computed tomography (CT) imaging system is described herein. The compression device is configured to compress the image data by implementing a method including receiving image data from the CT imaging system and requantizing the image data in a square root domain. The method further includes identifying a group of projections (GOP) in the image data, including a first projection and a plurality of subsequent projections, and performing spatial-delta encoding on the first projection and temporal-delta encoding on each of the plurality of subsequent projections. The method also includes identifying a signed value in the GOP, and converting the signed value to an unsigned value. The method further includes entropy coding the image data in the GOP, and packetizing the GOP for transmission or storage.
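The pipeline stages described above (square-root-domain requantization, spatial-delta coding of the first projection, temporal-delta coding of the rest, and signed-to-unsigned conversion ahead of entropy coding) can be sketched as follows. The GOP layout as a 3-D array and the zig-zag signed mapping are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def zigzag(v):
    """Map signed deltas to unsigned values: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return np.where(v >= 0, 2 * v, -2 * v - 1)

def compress_gop(gop):
    """Prepare a group of projections (projection, row, column) for entropy
    coding: requantize in the square-root domain, delta-encode, unsign."""
    # 1) Requantize in the square-root domain: coarser steps at high counts,
    #    matching the noise characteristics of photon-counting data.
    q = np.round(np.sqrt(np.asarray(gop, dtype=float))).astype(np.int64)
    deltas = np.empty_like(q)
    # 2) Spatial-delta encode the first projection (differences along rows).
    deltas[0] = np.diff(q[0], prepend=0, axis=-1)
    # 3) Temporal-delta encode each later projection against its predecessor.
    deltas[1:] = q[1:] - q[:-1]
    # 4) Convert signed deltas to unsigned values for entropy coding.
    return zigzag(deltas)
```

The unsigned output would then be entropy coded and packetized; decompression inverts each stage in reverse order.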

Joint transform coding of multiple color components
11711514 · 2023-07-25

A method and apparatus are provided, comprising computer code configured to cause one or more processors to perform: receiving video data in an AOMedia Video 1 (AV1) format comprising data of at least two chroma prediction-residual signal blocks; applying a transformation between the chroma prediction-residual signal blocks and at least one signal block having a size less than or equal to a combination of the chroma prediction-residual signal blocks; and decoding the video data based on an output of the transformation comprising the at least one signal block having the size less than or equal to the combination of the chroma prediction-residual blocks.
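One concrete way to transform two chroma residual blocks into a joint representation is a simple sum/difference pairing, shown below as an illustration. The abstract does not fix the transform; this particular choice, and the function names, are assumptions. Because Cb and Cr residuals are often correlated, the difference block tends to be near zero and cheap to code.

```python
import numpy as np

def joint_forward(cb, cr):
    """Combine two chroma residual blocks into a joint (sum) block and a
    difference block."""
    cb, cr = np.asarray(cb, dtype=int), np.asarray(cr, dtype=int)
    return cb + cr, cb - cr

def joint_inverse(s, d):
    """Recover the original Cb and Cr residual blocks exactly: s + d and
    s - d are always even, so integer division is lossless."""
    s, d = np.asarray(s, dtype=int), np.asarray(d, dtype=int)
    return (s + d) // 2, (s - d) // 2
```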

IMAGE ENCODING DEVICE, IMAGE DECODING DEVICE, AND THE PROGRAMS THEREOF

An image coding device is provided with a determination unit that determines whether to apply an orthogonal transform to a transform block, obtained by dividing a prediction difference signal indicating the difference between an input image and a predicted image, or to perform a transform skip in which the orthogonal transform is not applied, and an orthogonal transform unit that performs the processing selected on the basis of the determination. The image coding device further comprises a quantization unit that, when the transform skip is selected on the basis of the determination, quantizes the transform block using a first quantization matrix, shared in advance with the decoding side, in which the quantization roughnesses of all elements are equal, and that, when the orthogonal transform is applied to the transform block on the basis of the determination, quantizes the transform block using either the first quantization matrix or a second quantization matrix that is transmitted to the decoding side.
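The quantization-matrix selection described above can be sketched as follows: transform-skip blocks always use the flat shared matrix, while transformed blocks may use either that matrix or one signalled to the decoder. The function signature and example matrices are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def quantize_block(block, transform_skip, qstep, flat_qm, coded_qm=None):
    """Quantize a transform block, choosing the quantization matrix per the
    transform-skip decision."""
    block = np.asarray(block, dtype=float)
    if transform_skip:
        # Transform skip: always the flat matrix (all elements have equal
        # quantization roughness), shared in advance with the decoder.
        qm = flat_qm
    else:
        # Orthogonal transform applied: either the flat matrix or a second
        # matrix transmitted to the decoding side.
        qm = coded_qm if coded_qm is not None else flat_qm
    return np.round(block / (qm * qstep)).astype(int)

flat = np.ones((2, 2))
print(quantize_block([[4, 8], [12, 16]], True, 4, flat))
```

Using a flat matrix for transform skip makes sense because the block holds spatial-domain samples rather than frequency coefficients, so frequency-weighted quantization would have no perceptual basis.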