Patent classifications
H04N19/86
IMAGE PROCESSING DEVICE AND METHOD
The present invention relates to an image processing device and method enabling noise removal to be performed in accordance with image content and bit rate. A low-pass filter setting unit 93 sets, from filter coefficients stored in a built-in filter coefficient memory 94, a filter coefficient corresponding to intra prediction mode information and a quantization parameter. A neighboring image setting unit 81 uses the filter coefficient set by the low-pass filter setting unit 93 to subject neighboring pixel values of a current block from frame memory 72 to filtering processing. A prediction image generating unit 82 performs intra prediction using the filtered neighboring pixel values from the neighboring image setting unit 81, and generates a prediction image. The present invention can be applied to an image encoding device which encodes in the H.264/AVC format, for example.
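The filtering step described above can be sketched as a small function. This is an illustrative sketch only: the 3-tap [1, 2, 1] kernel stands in for whichever coefficient set the filter setting unit would select based on the intra prediction mode and quantization parameter; the actual coefficients are stored in the patent's filter coefficient memory and are not specified here.

```python
def smooth_neighbors(neighbors, coeffs=(1, 2, 1)):
    """Apply a 3-tap low-pass filter to intra-prediction reference samples.

    `coeffs` is a placeholder for a coefficient set that would be looked up
    by intra mode and quantization parameter; edges are handled by clamping.
    """
    c0, c1, c2 = coeffs
    norm = c0 + c1 + c2  # normalization factor (rounding integer divide)
    out = []
    for i, p in enumerate(neighbors):
        left = neighbors[max(i - 1, 0)]
        right = neighbors[min(i + 1, len(neighbors) - 1)]
        out.append((c0 * left + c1 * p + c2 * right + norm // 2) // norm)
    return out
```

The prediction image generator would then run the normal intra prediction modes over the smoothed samples instead of the raw reconstructed neighbors.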
METHOD AND APPARATUS FOR ENCODING/DECODING IMAGES CONSIDERING LOW FREQUENCY COMPONENTS
The method performed by an apparatus for encoding a current block includes: generating a predicted block by predicting the current block; generating a residual block of the current block by subtracting the predicted block from the current block; partitioning the residual block into a plurality of subblocks having various sizes, and transforming each of the subblocks by using a transform unit of a size identical to each of the subblocks, to thereby generate transform blocks of the subblocks; quantizing the transform blocks; and encoding transform coefficients of each of the quantized transform blocks.
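The pipeline enumerated above (predict, subtract, partition, transform, quantize) can be sketched as follows. This is a simplified illustration: it uses a single fixed subblock size rather than the variously sized subblocks the abstract describes, an identity "transform" in place of a real size-matched DCT-like transform, and a uniform quantization step; all of those simplifications are assumptions.

```python
def encode_block(current, predicted, sub_size, quant_step):
    """Sketch of residual generation, subblock partitioning, and quantization.

    `current` and `predicted` are square blocks (lists of rows). The
    transform is a stand-in identity; a real codec would apply a transform
    whose size matches each subblock, then quantize its coefficients.
    """
    n = len(current)
    # Residual block = current block minus predicted block.
    residual = [[current[r][c] - predicted[r][c] for c in range(n)]
                for r in range(n)]
    quantized_subblocks = []
    for r0 in range(0, n, sub_size):
        for c0 in range(0, n, sub_size):
            sub = [[residual[r0 + r][c0 + c] for c in range(sub_size)]
                   for r in range(sub_size)]
            # Placeholder transform (identity), then uniform quantization.
            q = [[v // quant_step for v in row] for row in sub]
            quantized_subblocks.append(((r0, c0), q))
    return quantized_subblocks
```

Each quantized subblock's coefficients would then be entropy-encoded into the bitstream.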
BLOCK-BASED PARALLEL DEBLOCKING FILTER IN VIDEO CODING
Deblocking filtering is provided in which an 8×8 filtering block covering eight-sample vertical and horizontal boundary segments is divided into filtering sub-blocks that can be independently processed. To process the vertical boundary segment, the filtering block is divided into top and bottom 8×4 filtering sub-blocks, each covering a respective top and bottom half of the vertical boundary segment. To process the horizontal boundary segment, the filtering block is divided into left and right 4×8 filtering sub-blocks, each covering a respective left and right half of the horizontal boundary segment. The computation of the deviation d for a boundary segment in a filtering sub-block is performed using only samples from rows or columns in the filtering sub-block. Consequently, the filter on/off decisions and the weak/strong filtering decisions of the deblocking filtering are performed using samples contained within individual filtering blocks, thus allowing full parallel processing of the filtering blocks.
VIDEO ENCODING METHOD AND VIDEO ENCODING APPARATUS FOR SIGNALING SAO PARAMETERS
The present disclosure relates to signaling of sample adaptive offset (SAO) parameters determined to minimize an error between an original image and a reconstructed image in video encoding and decoding operations. An SAO decoding method includes obtaining context-encoded leftward SAO merge information and context-encoded upward SAO merge information from a bitstream of a largest coding unit (LCU); obtaining SAO on/off information context-encoded with respect to each color component, from the bitstream; if the SAO on/off information indicates that the SAO operation is to be performed, obtaining absolute offset value information for each SAO category bypass-encoded with respect to each color component, from the bitstream; and obtaining one of band position information and edge class information bypass-encoded with respect to each color component, from the bitstream.
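The parse order described above can be sketched as a small driver over two bin readers, one for context-coded bins and one for bypass-coded bins. The function names, return shapes, the three color components, and the four offsets per component are illustrative assumptions; only the ordering of syntax elements follows the abstract.

```python
def parse_sao_params(read_ctx, read_bypass):
    """Illustrative per-LCU SAO parse order.

    read_ctx models a context-coded bin reader, read_bypass a bypass-coded
    bin reader; both are assumed callables supplied by the entropy decoder.
    """
    params = {
        "merge_left": read_ctx(),   # context-coded leftward SAO merge flag
        "merge_up": read_ctx(),     # context-coded upward SAO merge flag
    }
    if not (params["merge_left"] or params["merge_up"]):
        params["components"] = []
        for comp in ("Y", "Cb", "Cr"):
            on = read_ctx()         # context-coded SAO on/off per component
            entry = {"component": comp, "on": on}
            if on:
                # Bypass-coded absolute offset values, one per SAO category.
                entry["abs_offsets"] = [read_bypass() for _ in range(4)]
                # Bypass-coded band position or edge class information.
                entry["band_or_edge"] = read_bypass()
            params["components"].append(entry)
    return params
```

Grouping the context-coded flags first and the offset magnitudes in bypass mode matches the abstract's split between context-encoded and bypass-encoded elements.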
Methods and Apparatus of Decoding Process for Palette Syntax
Methods and apparatus for image or video decoding in a video decoding system are disclosed. Input data associated with a current block coded in palette mode is received to parse a palette predictor run. A position of reused colors in a palette predictor table is computed according to the palette predictor run. A size of the palette predictor table is determined and compared with the position computed according to the palette predictor run to obtain a comparison result. The decoder applies palette decoding to the current block according to the comparison result. If the comparison result indicates that the position computed according to the palette predictor run is not within the palette predictor table, the position is changed to a new position to indicate a corresponding reused color for the current block, or a decoding process of palette predictor reuse flags is terminated.
ADAPTIVE PRE-FILTERING BASED ON VIDEO COMPLEXITY, OUTPUT BIT RATE, AND VIDEO QUALITY PREFERENCES
Approaches are disclosed for dynamic pre-filtering of digital video based on video complexity and output bit rate. An adaptive video preprocessor determines a current video complexity of the digital video and an output bit rate. Thereafter, the adaptive video preprocessor dynamically updates the strength of one or more preprocessing filters based on the current video complexity and the output bit rate for the digital video. The adaptive video preprocessor may update the strength of a preprocessing filter based, at least in part, upon selected values of a video quality preference category. A video quality preference category may be assigned natural language values which may each be translated into a particular strength value for at least one of the one or more preprocessing filters.
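The complexity/bit-rate/preference mapping described above can be sketched as a toy strength function. The natural-language preference names, their scale factors, the complexity-per-bit heuristic, and the clamp are all illustrative assumptions, not the patent's actual mapping.

```python
def filter_strength(complexity, bit_rate, preference="balanced"):
    """Toy mapping from video complexity and output bit rate to a
    pre-filter strength.

    More complexity per available bit implies stronger pre-filtering;
    natural-language preference values translate to scale factors.
    """
    preference_scale = {"sharp": 0.5, "balanced": 1.0, "smooth": 1.5}
    base = complexity / max(bit_rate, 1e-9)  # guard against zero bit rate
    return min(10.0, base * preference_scale[preference])
```

A preprocessor would re-evaluate this as complexity and target bit rate change, pushing the updated strength into each preprocessing filter.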