Patent classifications
H04N19/00
Method and apparatus of mode- and size-dependent block-level restrictions for position dependent prediction combination
An intra prediction method is described. The method can include determining a prediction value for a sample of a current block from one or more reference samples outside the current block by using an intra prediction mode. The method can also include deriving a weighted prediction value, when one or more predefined conditions are not satisfied, wherein the one or more predefined conditions relate to at least one of a width and/or a height of the current block and the intra prediction mode. Furthermore, the method can include coding the current block using the weighted prediction value, when the one or more predefined conditions are not satisfied.
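The gating described above (applying the weighted combination only when block-size and mode conditions hold) can be sketched as follows. The thresholds, mode identifiers, and 6-bit weighting are illustrative assumptions in the style of PDPC, not the claim's actual conditions:

```python
# Hypothetical sketch of a size/mode gate for a PDPC-style weighted
# prediction; thresholds, mode IDs, and weights are illustrative.

PLANAR, DC = 0, 1  # illustrative intra mode identifiers

def use_weighted_prediction(width, height, mode):
    # Example condition: skip the weighted combination for small blocks
    # and apply it only for a subset of intra prediction modes.
    if width * height < 32:
        return False
    return mode in (PLANAR, DC)

def weighted_sample(pred, ref_top, ref_left, wt, wl):
    # Blend the intra prediction value with reference samples using
    # position-dependent weights (6-bit fixed point, round to nearest).
    return (wl * ref_left + wt * ref_top
            + (64 - wl - wt) * pred + 32) >> 6
```

For example, with weights `wt = wl = 16`, a quarter of the output comes from each reference sample and half from the intra prediction value.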
Source-consistent techniques for predicting absolute perceptual video quality
In various embodiments, a perceptual quality application computes an absolute quality score for encoded video content. In operation, the perceptual quality application selects a model based on the spatial resolution of the video content from which the encoded video content is derived. The model associates a set of objective values for a set of objective quality metrics with an absolute quality score. The perceptual quality application determines a set of target objective values for the objective quality metrics based on the encoded video content. Subsequently, the perceptual quality application computes the absolute quality score for the encoded video content based on the selected model and the set of target objective values. Because the absolute quality score is independent of the quality of the video content, the absolute quality score accurately reflects the perceived quality of a wide range of encoded video content when decoded and viewed.
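The selection-then-scoring flow above can be sketched as a per-resolution lookup followed by a model evaluation. The metric names, linear model form, and coefficients here are invented for illustration; the abstract does not specify the model family:

```python
# Illustrative sketch: pick a model based on the source's spatial
# resolution, then map objective metric values to an absolute quality
# score. Metric names and coefficients are invented.

MODELS = {
    # (weight per metric, intercept), keyed by spatial resolution class
    "hd": ({"vif": 40.0, "dlm": 55.0}, 5.0),
    "sd": ({"vif": 35.0, "dlm": 50.0}, 8.0),
}

def resolution_class(height):
    return "hd" if height >= 720 else "sd"

def absolute_quality(height, objective_values):
    weights, intercept = MODELS[resolution_class(height)]
    score = intercept + sum(weights[m] * objective_values[m] for m in weights)
    return max(0.0, min(100.0, score))  # clamp to a 0-100 scale
```

Because the model is keyed to the source resolution rather than to the source's own quality, the same score scale applies across different inputs, matching the "absolute" framing of the abstract.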
Method and device for filtering
Disclosed herein are a video decoding method and apparatus and a video encoding method and apparatus, and more particularly, a method and apparatus for performing filtering in video encoding and decoding. An encoding apparatus may perform filtering on a target, and may generate filtering information indicating whether filtering has been performed on the target. Further, the encoding apparatus may generate a bitstream including filtering information. A decoding apparatus may determine, based on filtering information, whether to perform filtering on a target, and may perform filtering on the target. The decoding apparatus may receive filtering information from the encoding apparatus through a bitstream or may derive filtering information using additional information.
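The signaling loop above (encoder decides and signals, decoder checks the flag before filtering) can be sketched as follows. The 3-tap filter and the dictionary "bitstream" are stand-ins; all names are invented:

```python
# Minimal sketch of flag-driven filtering: the encoder signals whether
# filtering is to be performed; the decoder checks the flag before
# filtering the target. The filter and container are toy stand-ins.

def smooth(samples):
    # Toy 3-tap [1, 2, 1] filter standing in for the real in-loop filter.
    out = samples[:]
    for i in range(1, len(samples) - 1):
        out[i] = (samples[i - 1] + 2 * samples[i] + samples[i + 1] + 2) // 4
    return out

def encode(samples, apply_filter):
    # The encoder generates filtering information and includes it in
    # the "bitstream" alongside the coded samples.
    return {"filtering_flag": int(apply_filter), "data": samples}

def decode(bitstream):
    # The decoder determines from the flag whether to filter the target.
    if bitstream["filtering_flag"]:
        return smooth(bitstream["data"])
    return bitstream["data"]
```

The abstract also allows the decoder to derive the flag from additional information instead of parsing it, which would replace the dictionary lookup with a derivation function.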
Adaptive control point selection for affine motion model based video coding
Systems, methods, and instrumentalities are disclosed for motion vector clipping when affine motion mode is enabled for a video block. A video coding device may determine that an affine mode for a video block is enabled. The video coding device may determine a plurality of control point affine motion vectors associated with the video block, clip them, and store the plurality of clipped control point affine motion vectors for motion vector prediction of a neighboring control point affine motion vector. The video coding device may derive a sub-block motion vector associated with a sub-block of the video block, clip the derived sub-block motion vector, and store it for spatial motion vector prediction or temporal motion vector prediction. For example, the video coding device may clip the derived sub-block motion vector based on a motion field range that may be based on a bit depth value.
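The bit-depth-based clipping mentioned at the end can be sketched as a symmetric two's-complement range. The range formula is a plausible assumption, not taken from the filing:

```python
# Sketch of clipping a derived sub-block motion vector to a motion
# field range determined by a bit depth; the range formula
# [-2^(bd-1), 2^(bd-1) - 1] is an assumption for illustration.

def mv_range(bit_depth):
    lo = -(1 << (bit_depth - 1))
    hi = (1 << (bit_depth - 1)) - 1
    return lo, hi

def clip_mv(mv, bit_depth):
    # Clamp each component of the (x, y) motion vector into range.
    lo, hi = mv_range(bit_depth)
    return tuple(max(lo, min(hi, c)) for c in mv)
```

Storing only clipped vectors keeps the motion field within a fixed storage width, which is the practical motivation for tying the range to a bit depth.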
Encoder, decoder, encoding method, and decoding method
An encoder includes circuitry and memory connected to the circuitry. In operation, the circuitry: corrects a base motion vector using a correction value for correcting the base motion vector in a predetermined direction; and encodes a current partition to be encoded in an image of a video, using the base motion vector corrected. The correction value is specified by a first parameter and a second parameter, the first parameter indicating a table to be selected from among a plurality of tables each including values, the second parameter indicating one of the values included in the table to be selected indicated by the first parameter. In each of the plurality of tables, a smaller value among the values is assigned a smaller index. Each of the plurality of tables includes a different minimum value among the values.
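The two-parameter table lookup above can be sketched directly. The table contents are illustrative, but they respect the stated constraints: within each table, smaller values get smaller indices, and each table has a different minimum value:

```python
# Sketch of a two-parameter motion vector correction: the first
# parameter selects a table, the second selects a value within it.
# Table contents are illustrative; note the differing minimum values
# and the ascending order (smaller value, smaller index).

TABLES = [
    [1, 2, 4, 8, 16],    # table 0: minimum value 1 (fine corrections)
    [4, 8, 16, 32, 64],  # table 1: minimum value 4 (coarse corrections)
]

def correction_value(table_idx, value_idx):
    return TABLES[table_idx][value_idx]

def correct_mv(base_mv, table_idx, value_idx, direction):
    # direction is a unit vector giving the predetermined direction,
    # e.g. (1, 0) for horizontal, (0, -1) for vertical.
    d = correction_value(table_idx, value_idx)
    return (base_mv[0] + d * direction[0], base_mv[1] + d * direction[1])
```

Splitting the correction into a table choice and an in-table index lets the encoder trade precision for range without enlarging any single table.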
System and method for video coding
A decoder includes circuitry which, in operation, parses a first flag indicating whether a CCALF (cross component adaptive loop filtering) process is enabled for a first block located adjacent to a left side of a current block; parses a second flag indicating whether the CCALF process is enabled for a second block located adjacent to an upper side of the current block; determines a first index associated with a color component of the current block; and derives a second index indicating a context model, using the first flag, the second flag, and the first index. The circuitry, in operation, performs entropy decoding of a third flag indicating whether the CCALF process is enabled for the current block, using the context model indicated by the second index; and performs the CCALF process on the current block in response to the third flag indicating the CCALF process is enabled for the current block.
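The context-index derivation above (two neighbor flags plus a color-component index selecting one of several context models) can be sketched as follows. The layout of contexts per component is an assumption for illustration:

```python
# Sketch of deriving a context model index for the CCALF enable flag
# from the left/above neighbor flags and the color component index;
# the per-component layout of context models is assumed.

def ccalf_ctx_index(left_enabled, above_enabled, component_idx,
                    ctx_per_component=3):
    # Neighbor flags contribute an increment of 0, 1, or 2; each color
    # component gets its own contiguous run of context models.
    inc = int(left_enabled) + int(above_enabled)
    return component_idx * ctx_per_component + inc
```

Conditioning the entropy coder's context on the neighbors exploits the spatial correlation of CCALF usage: if both adjacent blocks enable the filter, the current block's flag is more likely to be 1, and the chosen context model reflects that.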
Data compression device and compression method configured to gradually adjust a quantization step size to obtain an optimal target quantization step size
A data compression device and a compression method are provided. The data compression device includes a quantization table processing unit and a quantization unit. The quantization table processing unit determines, according to a target compression ratio, a target quantization table whose quantization coefficients satisfy a preset condition on data distortion rate and compression ratio. By constructing different quantization tables for different data, the distortion rate is greatly reduced while the compression ratio is satisfied, alleviating the prior-art issue that the distortion rate and the compression ratio cannot be satisfied simultaneously.
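The title's "gradually adjust a quantization step size" can be sketched as an iterative search that coarsens the step until a target cost is met. The quantizer and the cost model here are toy stand-ins, not the patented method:

```python
# Sketch of gradually adjusting a quantization step size until the
# compressed size meets a target; the cost model (count of nonzero
# quantized values) is a toy stand-in for a real rate estimate.

def quantize(data, step):
    return [round(x / step) for x in data]

def compressed_cost(quantized):
    # Toy rate model: nonzero coefficients dominate the bitstream.
    return sum(1 for q in quantized if q != 0)

def find_step(data, target_cost, max_step=1024):
    step = 1
    while step < max_step and compressed_cost(quantize(data, step)) > target_cost:
        step *= 2  # gradually coarsen quantization
    return step
```

A real implementation would refine the step (e.g. binary search between the last two candidates) and measure distortion as well as rate; the sketch only shows the gradual-adjustment loop.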
Neural network based image set compression
Techniques for coding sets of images with neural networks include transforming a first image of a set of images into coefficients with an encoder neural network, encoding a group of the coefficients as an integer patch index into a coding table of table entries, each having a vector of coefficients, and storing a collection of patch indices as a first coded image. The encoder neural network may be configured with encoder weights determined jointly with corresponding decoder weights of a decoder neural network on the set of images.
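The patch-index step above amounts to vector quantization: each group of coefficients is replaced by the index of the nearest table entry. The table contents are illustrative, and the neural transform itself is omitted:

```python
# Sketch of encoding coefficient groups as integer indices into a
# coding table of coefficient vectors (nearest-neighbor vector
# quantization); the table is illustrative and the encoder network
# producing the coefficients is omitted.

CODING_TABLE = [
    (0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0),
]

def patch_index(coeffs):
    # Pick the table entry closest to the coefficient group.
    def dist2(entry):
        return sum((a - b) ** 2 for a, b in zip(coeffs, entry))
    return min(range(len(CODING_TABLE)), key=lambda i: dist2(CODING_TABLE[i]))

def encode_coefficients(groups):
    # The stored "coded image" is just the collection of patch indices.
    return [patch_index(g) for g in groups]

def decode_indices(indices):
    return [CODING_TABLE[i] for i in indices]
```

Because every image in the set shares one coding table, per-image storage shrinks to a list of small integers, at the cost of quantization error in the reconstructed coefficients.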
Method and apparatus for encoding an image
The present embodiments obtain chroma components representative of the chroma components of an output image from color components representative of an input image, and, if a value of a pixel in at least one of said chroma components exceeds a given value, modify the value of said pixel in at least one of said color components in such a way that the value of said pixel in said at least one of said chroma components is lower than or equal to said given value.
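The loop described above (derive chroma from the color components, then adjust the color components until the chroma fits a bound) can be sketched with one possible adjustment, scaling the color components toward their luma, which shrinks both chroma components while preserving luma. The BT.709-style coefficients and the scaling choice are assumptions for illustration:

```python
# Sketch of the described clamp: derive chroma from RGB colour
# components, and if a chroma value exceeds a given bound, modify the
# colour components so the recomputed chroma fits the bound.
# BT.709-style coefficients; the toward-luma scaling is one choice.

def luma(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def chroma(r, g, b):
    y = luma(r, g, b)
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    return cb, cr

def clamp_chroma(r, g, b, bound):
    cb, cr = chroma(r, g, b)
    peak = max(abs(cb), abs(cr))
    if peak <= bound:
        return (r, g, b)
    # Pull each colour component toward the luma; since cb and cr are
    # linear in (component - luma), both shrink by the same factor
    # while the luma itself is unchanged.
    y = luma(r, g, b)
    s = bound / peak
    return tuple(y + s * (c - y) for c in (r, g, b))
```

Scaling toward luma is convenient because it guarantees the recomputed chroma lands exactly on the bound without iterating, but the abstract permits any modification of the color components that achieves the constraint.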