Patent classifications
H04N19/44
SELECTION OF EXPLICIT MODE OR IMPLICIT MODE IN INTRA BLOCK COPY CODING
A method for video encoding includes determining whether coding of a current block in an IBC prediction mode is performed in an explicit mode or an implicit mode based on whether a difference exists between a block vector and a corresponding block vector predictor. The current block is part of a current picture to be coded. The method further includes constructing a block vector predictor candidate list for the current block, the block vector predictor candidate list having a first number of block vector predictor candidates in the implicit mode, and having a second number of block vector predictor candidates in the explicit mode. The method further includes selecting a block vector predictor candidate from the constructed block vector predictor candidate list and encoding the current block according to the selected block vector predictor candidate.
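The abstract's mode decision can be sketched as follows. This is a hypothetical illustration, not the claimed implementation: it assumes the implicit mode is merge-like (only a candidate index is signalled when the block vector exactly matches a candidate), while the explicit mode signals an index plus a block vector difference (BVD).

```python
def choose_ibc_mode(block_vector, predictor_candidates):
    """Hypothetical sketch: pick the implicit mode when the block vector
    exactly matches a predictor candidate (no difference to signal), else
    the explicit mode with a block vector difference (BVD)."""
    for idx, cand in enumerate(predictor_candidates):
        if cand == block_vector:
            # No difference exists: implicit mode, signal only the index.
            return ("implicit", idx, None)
    # A difference exists: explicit mode, pick the closest predictor
    # (by L1 distance here) and signal its index plus the BVD.
    best_idx = min(
        range(len(predictor_candidates)),
        key=lambda i: abs(predictor_candidates[i][0] - block_vector[0])
                    + abs(predictor_candidates[i][1] - block_vector[1]))
    bvd = (block_vector[0] - predictor_candidates[best_idx][0],
           block_vector[1] - predictor_candidates[best_idx][1])
    return ("explicit", best_idx, bvd)
```

Note that the two modes may use candidate lists of different lengths (the "first number" and "second number" in the abstract); the sketch above simply takes whatever list it is given.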
DECODER WITH MERGE CANDIDATE REORDER BASED ON COMMON MOTION VECTOR
A decoder includes circuitry configured to receive a bitstream; construct, for a current block, a motion vector candidate list including a motion vector candidate having motion information that characterizes a global motion vector; reorder the motion vector candidate list such that the motion vector candidate having the motion information that characterizes the global motion vector is first in the reordered motion vector candidate list; and reconstruct pixel data of the current block using the reordered motion vector candidate list. Related apparatus, systems, techniques and articles are also described.
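The reordering step can be illustrated with a short sketch. The candidate and global-motion representations here are assumptions for illustration only; the point is that any candidate matching the global motion vector is moved to the front, which lets the (typically shorter) merge index code it more cheaply.

```python
def reorder_merge_list(candidates, global_mv):
    """Hypothetical sketch: stable reorder of a merge candidate list so
    that candidates whose motion vector equals the global motion vector
    come first; all other candidates keep their relative order."""
    matching = [c for c in candidates if c["mv"] == global_mv]
    others = [c for c in candidates if c["mv"] != global_mv]
    return matching + others
```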
Image encoding method using a skip mode, and a device using the method
Disclosed are an image encoding method using a skip mode and a device using the method. The image encoding method may comprise the steps of: judging whether residual block data exists for a prediction target block on the basis of predetermined data indicating whether residual block data has been encoded; and, if residual block data exists, restoring the prediction target block on the basis of the residual block data and an intra-picture prediction value of the prediction target block. Consequently, encoding and decoding efficiency can be increased by encoding and decoding residual data only for prediction target blocks that require a residual data block in accordance with intra-picture similarity.
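The decoder-side behaviour described above amounts to a conditional residual addition. The flag name and block representation below are illustrative assumptions, not the patent's syntax:

```python
def reconstruct_block(intra_pred, residual, residual_coded_flag):
    """Hypothetical sketch: add the residual only when the signalled flag
    says residual data was encoded; otherwise the intra prediction alone
    reconstructs the block (skip-mode behaviour)."""
    if not residual_coded_flag or residual is None:
        # Skip: prediction is the reconstruction, no residual decoded.
        return [row[:] for row in intra_pred]
    return [[p + r for p, r in zip(prow, rrow)]
            for prow, rrow in zip(intra_pred, residual)]
```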
Multi-output decoder for texture decompression
A decoder is configured to decode a plurality of texels from a received block of texture data encoded according to the Adaptive Scalable Texture Compression (ASTC) format, and includes a parameter decode unit configured to decode configuration data for the received block of texture data, a colour decode unit configured to decode colour endpoint data for the plurality of texels of the received block in dependence on the configuration data, a weight decode unit configured to decode interpolation weight data for each of the plurality of texels of the received block in dependence on the configuration data, and at least one interpolator unit configured to calculate a colour value for each of the plurality of texels of the received block using the interpolation weight data for that texel and a pair of colour endpoints from the colour endpoint data. At least one of the parameter decode unit, colour decode unit and weight decode unit is configured to decode intermediate data from the received block that is common to the decoding of at least a subset of texels of that block and to use that decoded intermediate data as part of the decoding of at least two of the plurality of texels from the received block of texture data.
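The final interpolator stage mentioned in the abstract can be sketched using the ASTC-style endpoint blend, where weights lie in the range 0..64. The decoded endpoints and unquantised weights are exactly the kind of per-block intermediate data a multi-output decoder computes once and reuses across texels; the function below is a simplified illustration, not the patented hardware:

```python
def interpolate_texel(endpoint0, endpoint1, weight):
    """ASTC-style endpoint interpolation for one texel.
    endpoint0/endpoint1: per-channel colour endpoints (e.g. RGBA tuples),
    decoded once per partition and shared across the block's texels.
    weight: this texel's unquantised interpolation weight in 0..64."""
    return tuple((e0 * (64 - weight) + e1 * weight + 32) >> 6
                 for e0, e1 in zip(endpoint0, endpoint1))
```

Because the endpoint pair is fixed for all texels of a partition, only the per-texel weight varies in this loop, which is what makes decoding several texels per cycle from shared intermediate data attractive.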
Method and apparatus for video coding
A method and an apparatus for video decoding are disclosed. The apparatus decodes prediction information of a current block from a coded video bitstream. The prediction information indicates an intra block copy mode. The current block is one of a plurality of coding blocks in a current region of a current coding tree block (CTB) in a current picture. The apparatus determines whether the current block is to be reconstructed first in the current region. When the current block is to be reconstructed first in the current region, the apparatus determines a block vector for the current block where a reference block indicated by the block vector is in a search range in the current picture that excludes a collocated region in a previously reconstructed CTB. A position of the collocated region in the previously reconstructed CTB has a same relative position as the current region in the current CTB.
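The search-range constraint can be made concrete with a small geometric check. This sketch assumes a 128x128 CTB divided into 64x64 regions and that the "previously reconstructed CTB" is the CTB immediately to the left; both are illustrative assumptions, and the collocated region is the 64x64 area at the same in-CTB offset as the current block's region:

```python
def overlaps_collocated(cur_x, cur_y, blk_w, blk_h, bv, ctb=128, region=64):
    """Hypothetical sketch: return True if the reference block indicated
    by block vector bv overlaps the region of the left CTB collocated
    with the current block's region (such a BV would be disallowed)."""
    ref_x, ref_y = cur_x + bv[0], cur_y + bv[1]
    # Top-left of the collocated region: same in-CTB region offset,
    # shifted one CTB to the left.
    col_x = (cur_x // ctb - 1) * ctb + ((cur_x % ctb) // region) * region
    col_y = (cur_y // ctb) * ctb + ((cur_y % ctb) // region) * region
    # Axis-aligned rectangle overlap test.
    return (ref_x < col_x + region and ref_x + blk_w > col_x and
            ref_y < col_y + region and ref_y + blk_h > col_y)
```

The rationale behind excluding this region is memory reuse: once the decoder starts writing the current region, the collocated area of the reference buffer is being overwritten and can no longer serve as an IBC reference.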
Data preprocessing and data augmentation in frequency domain
Methods and systems are provided for implementing preprocessing operations and augmentation operations upon image datasets transformed to frequency domain representations, including decoding images of an image dataset to generate a frequency domain representation of the image dataset; performing a resizing operation based on resizing factors on the image dataset in a frequency domain representation; performing a reshaping operation based on reshaping factors on the image dataset in a frequency domain representation; and performing a cropping operation on the image dataset in a frequency domain representation. The methods and systems may further include performing an augmentation operation on the image dataset in a frequency domain representation. Methods and systems of the present disclosure may free learning models from computational overhead caused by transforming image datasets into frequency domain representations. Furthermore, computational overhead caused by inverse transformation operations is also alleviated.
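A well-known way to realise a frequency-domain resizing operation of this kind is to truncate (downscale) or zero-pad (upscale) the 2-D DCT coefficient block; the sketch below illustrates that general technique and is not taken from the disclosure. The scaling factor assumes an orthonormal DCT so that pixel magnitudes remain comparable after the inverse transform:

```python
import numpy as np

def dct_resize(coeffs, out_h, out_w):
    """Hypothetical sketch: resize an image in the frequency domain by
    keeping the low-frequency corner of its 2-D DCT coefficients
    (truncate to downscale, zero-pad to upscale)."""
    in_h, in_w = coeffs.shape
    out = np.zeros((out_h, out_w), dtype=coeffs.dtype)
    h, w = min(in_h, out_h), min(in_w, out_w)
    out[:h, :w] = coeffs[:h, :w]  # low frequencies live at the top-left
    # Rescale so mean intensity is preserved under an orthonormal DCT.
    return out * np.sqrt((out_h * out_w) / (in_h * in_w))
```

Cropping and reshaping can be composed with the same coefficient block, which is how a pipeline like the one described avoids repeated inverse/forward transforms between preprocessing steps.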