Patent classifications
H04N19/85
Optimized decoded high dynamic range image saturation
To enable better color and in particular color saturation control for HDR image handling systems which need to perform luminance dynamic range conversion, e.g. from an SDR image to an image optimized for rendering on a display of higher display peak brightness and dynamic range, the inventors invented an apparatus (400) for processing a color saturation (C′bL, C′rL) of an input color (Y′L, C′bL, C′rL) of an input image (Im_RLDR) to yield an output color (Y′M, Cb′M, Cr′M) of an output image (Im3000nit) corresponding to the input image. The output image is a re-grading of the input image, characterized in that its pixel colors have a different normalized luminance position (Y2) compared to the normalized luminance positions of the input colors (Y1), a normalized luminance being defined as the luminance of a pixel divided by the maximal codeable luminance of the respective image's luminance representation, whereby the ratio of the maximum codeable luminance of the input image to the maximum codeable luminance of the output image is at least 4, or at most ¼. The apparatus comprises: a receiver (206) arranged to receive a luminance mapping function (F_L_s2h), defining a mapping between the luminance of the input color (Y′L) and a reference luminance (L′_HDR), and an initial saturation processing function (F_sat), defining saturation boost values (b) for different values of the luminance of the input color (Y′L); a display tuning unit (1009) arranged to calculate a display-tuned luminance mapping function (F_L_da) based on the luminance mapping function (F_L_s2h) and at least one of a display peak brightness (PB_D) and a minimum discernible black (MB_D); a luminance processor (401) arranged to apply the display-tuned luminance mapping function (F_L_da) to determine an output luminance (Y′M) from the input luminance (Y′L) of the input color; and a saturation processing unit (410, 411) arranged to map the input color saturation (C′bL, C′rL) to the color saturation (Cb′M, Cr′M) of the output color on the basis of a saturation processing strategy which specifies saturation multipliers for the normalized luminance values (Y_norm). The apparatus is further characterized in that it comprises a saturation factor determination unit (402) arranged to calculate a final saturation processing strategy (b; Bcorr) based on the initial saturation processing strategy (F_sat) and on a secondary luminance value (Y′_H), which is derivable from the output luminance (Y′M) by applying a luminance mapping function (F_M2H) based on the luminance mapping function (F_L_s2h).
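The claimed processing chain can be illustrated with a minimal sketch. The power-law luminance mapping and the linear saturation-boost function below are assumptions chosen for illustration only; they stand in for the patented functions F_L_s2h and F_sat, whose actual shapes are not given in the abstract.

```python
# Hypothetical sketch of luminance-dependent saturation processing.
# The mapping and boost functions are illustrative assumptions, not the
# patented F_L_s2h / F_sat.

def luminance_map_s2h(y_norm: float) -> float:
    """Placeholder SDR-to-HDR luminance mapping (assumed power law)."""
    return y_norm ** 0.5  # brightens mid-tones

def saturation_boost(y_norm: float) -> float:
    """Placeholder saturation function: boost darker pixels more."""
    return 1.0 + 0.5 * (1.0 - y_norm)

def process_pixel(y_in: float, cb_in: float, cr_in: float):
    y_out = luminance_map_s2h(y_in)     # display-tuned luminance mapping
    b = saturation_boost(y_out)         # final boost from the secondary luminance
    return y_out, cb_in * b, cr_in * b  # scale chroma around the achromatic axis

y, cb, cr = process_pixel(0.25, -0.1, 0.2)
```

Note that the boost is evaluated at the *output* (secondary) luminance rather than the input luminance, mirroring the claim's final saturation processing strategy.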
Image processing method, apparatus, device and storage medium
The present application discloses an image processing method, apparatus, device and storage medium. A specific implementation is: obtaining an original image and a noise-added image, where the noise-added image is the original image after noise has been added, and the number of noisy pixels in the original image is less than a preset number; encoding and decoding the original image and the noise-added image respectively, to obtain a first decoded image corresponding to the original image and a second decoded image corresponding to the noise-added image; obtaining a first PSNR between the first decoded image and the original image; obtaining a second PSNR between the second decoded image and the noise-added image; and outputting the first PSNR and the second PSNR.
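The two PSNR measurements in the abstract follow the standard definition. A minimal sketch, treating images as flat pixel lists (the codec itself is out of scope, so the "decoded" pixels below are assumed example values):

```python
import math

def psnr(ref, dec, max_val=255.0):
    """Standard PSNR between a reference image and its decoded version."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, dec)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

original = [100, 120, 130, 140]
noise_added = [100, 120, 130, 156]   # one noisy pixel, below the preset count
decoded_1 = [101, 119, 130, 141]     # assumed codec output for the original
decoded_2 = [101, 119, 131, 150]     # assumed codec output for the noisy image

first_psnr = psnr(original, decoded_1)
second_psnr = psnr(noise_added, decoded_2)
```

Comparing the two outputs indicates how much the added noise degrades the codec's fidelity on otherwise identical content.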
STREAM REPROCESSING SYSTEM AND METHOD FOR OPERATING THE SAME
Provided are a stream reprocessing system on chip (SoC) and a method for operating the same. The stream reprocessing system includes a plurality of processors including a central processing unit (CPU); a memory controller configured to receive a stream; and a stream reprocessor configured to reprocess the stream, wherein the stream reprocessor includes: a control unit configured to determine whether to perform the reprocessing on the stream; a reprocessing unit configured to reprocess the stream upon receiving a command to do so from the control unit; and an output unit configured to transmit the reprocessed stream to a memory.
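The three units of the stream reprocessor (control, reprocessing, output) can be modeled as a small pipeline. The decision rule and the byte-level "reprocessing" below are illustrative assumptions; the patent does not specify what transformation the reprocessing unit applies.

```python
# Toy software model of the stream reprocessor's three units.
# The decision rule and transformation are assumptions for illustration.

def control_unit(stream: bytes) -> bool:
    """Decide whether reprocessing is needed (assumed: any byte above 0x7F)."""
    return any(b > 0x7F for b in stream)

def reprocessing_unit(stream: bytes) -> bytes:
    """Assumed reprocessing: clamp every byte into the 7-bit range."""
    return bytes(min(b, 0x7F) for b in stream)

def output_unit(stream: bytes, memory: list) -> None:
    memory.append(stream)  # stand-in for the write to memory

def stream_reprocessor(stream: bytes, memory: list) -> None:
    if control_unit(stream):                 # control unit decides
        stream = reprocessing_unit(stream)   # reprocessing unit transforms
    output_unit(stream, memory)              # output unit stores the result
```

Streams that the control unit deems clean pass through to memory untouched, which is the point of gating the reprocessing unit behind a decision step.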
CONTENT-ADAPTIVE ONLINE TRAINING METHOD AND APPARATUS FOR POST-FILTERING
Aspects of the disclosure provide a method, an apparatus, and a non-transitory computer-readable storage medium for video decoding. The apparatus can include processing circuitry. The processing circuitry is configured to receive an image or video comprising one or more blocks. The processing circuitry can decode, from the image or video, a first post-filtering parameter corresponding to the one or more blocks to be reconstructed. The first post-filtering parameter applies to at least one of the one or more blocks and has been updated by a post-filtering module of a post-filtering neural network (NN) that is trained on a training dataset. The processing circuitry can determine, in a video decoder, the post-filtering NN corresponding to the one or more blocks based on the first post-filtering parameter, and can decode the one or more blocks based on the determined post-filtering NN.
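The decoder-side flow can be sketched with a deliberately trivial "post-filter" in place of the neural network: a signaled parameter, updated online for this content, configures the filter that is then applied to each reconstructed block. All names and the blending filter are assumptions.

```python
# Sketch of content-adaptive post-filtering at the decoder.
# The blend-toward-mean filter is a stand-in for the post-filtering NN;
# `signaled_strength` plays the role of the decoded first post-filtering parameter.

def apply_post_filter(block, strength):
    """Toy post-filter: blend each sample toward the block mean by `strength`."""
    mean = sum(block) / len(block)
    return [round((1 - strength) * s + strength * mean) for s in block]

def decode_blocks(reconstructed_blocks, signaled_strength):
    # The decoder parses the content-adapted parameter from the stream,
    # configures the post-filter with it, then filters every block.
    return [apply_post_filter(b, signaled_strength) for b in reconstructed_blocks]
```

The key idea being modeled is that the filter parameter travels in the bitstream, so the encoder can fine-tune it per content and the decoder simply applies it.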
METHODS AND APPARATUS FOR PROCESSING OF HIGH-RESOLUTION VIDEO CONTENT
The present disclosure relates to methods and apparatuses for processing of high-resolution video content. In an embodiment, a method includes generating a first group of video frames from the video content, the first group having a first resolution lower than the resolution of the video content and a first rate-distortion score. The method further includes generating a second group of video frames from the video content, the second group having a second resolution lower than the resolution of the video content and a second rate-distortion score. The method further includes selecting an optimal group of video frames from the first and second groups based on a comparison between the first and second rate-distortion scores, the optimal group having the lower of the two rate-distortion scores.
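The selection step is a straightforward minimization over candidate encodings. A minimal sketch, assuming the usual Lagrangian rate-distortion cost J = D + λ·R (the abstract does not state which RD metric is used, so that choice and all values below are illustrative):

```python
# Hedged sketch of resolution selection by rate-distortion score.
# The Lagrangian cost and the example numbers are assumptions.

def rd_score(distortion, rate, lam=0.1):
    """Assumed rate-distortion cost: J = D + lambda * R."""
    return distortion + lam * rate

def select_group(groups):
    """Pick the group of frames with the lowest rate-distortion score."""
    return min(groups, key=lambda g: g["score"])

group_a = {"resolution": (1920, 1080), "score": rd_score(4.0, 30.0)}  # J = 7.0
group_b = {"resolution": (1280, 720),  "score": rd_score(6.0, 12.0)}  # J = 7.2
best = select_group([group_a, group_b])
```

The same comparison generalizes to any number of candidate downscaled groups: generate each, score it, keep the minimum.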
Picture coding method, picture decoding method, picture coding apparatus, picture decoding apparatus, and program thereof
A picture coding method of the present invention codes a picture signal and a ratio between the number of luminance pixels and the number of chrominance pixels of the picture signal; one coding method out of at least two coding methods is then selected depending on the ratio. Next, data related to the picture size is coded in accordance with the selected coding method. The data related to the picture size indicates either the size of the picture corresponding to the picture signal or an output area, i.e. the pixel area to be output at decoding out of the whole pixel area coded in the picture signal.
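The luma/chroma pixel ratio is effectively the chroma subsampling format (4:2:0, 4:2:2, 4:4:4), and the abstract's idea can be sketched as dispatching on that ratio. The two "methods" below (coding the size in chroma units versus in luma samples) are assumed examples; the patent does not name its coding methods.

```python
# Sketch of ratio-dependent picture-size coding. The two methods are assumptions.

def chroma_ratio(fmt):
    # luma pixels per chroma-component pixel for common subsampling formats
    return {"4:2:0": 4, "4:2:2": 2, "4:4:4": 1}[fmt]

def code_picture_size(width, height, fmt):
    r = chroma_ratio(fmt)
    if r > 1:
        # Method A (assumed): size expressed in chroma-sample units, implying
        # the luma size is a multiple of the subsampling factor per dimension.
        return ("chroma_units", width // 2, height // (2 if r == 4 else 1))
    # Method B (assumed): size expressed directly in luma samples.
    return ("luma_samples", width, height)
```

Dispatching on the coded ratio lets the decoder pick the matching interpretation of the size fields without extra signaling.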
Methods and devices for encoding and decoding a data stream representing at least one image that disables post-processing of reconstructed block based on prediction mode
A method for decoding a data stream representative of an image split into blocks. For a current block of the image, an item of information indicating a coding mode, among a first and a second coding mode, of the current block is decoded from the data stream, and the current block is decoded depending on this information. When the coding mode of the current block corresponds to the second coding mode, the current block is reconstructed from a prediction obtained, for each pixel, from another previously decoded pixel belonging to the current block or to a previously decoded block of the image, and from a decoded residue associated with the pixel. At least one processing method is applied to the reconstructed current block, for at least one pixel of the current block, depending on the coding mode of the current block and/or the coding modes of the neighbouring blocks.