Patent classifications
H04N19/184
Method and Apparatus for Entropy Coding of Source Samples with Large Alphabet
A general entropy coding method for source symbols is disclosed. The method determines a prefix part and any suffix part for the current symbol, then divides the prefix of the source symbol into at least two parts by comparing a test value related to the prefix part against a threshold. If the test value is greater than or equal to the threshold, the method derives a first binary string by binarizing a first prefix part, related to the prefix part, using a first variable-length code. If the test value is less than the threshold, the method derives a second binary string by binarizing a second prefix part, related to the prefix part, using a second variable-length code or a first fixed-length code. The method then encodes at least one of the first binary string and the second binary string using a CABAC mode.
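The two-branch prefix split can be sketched as follows. This is a minimal illustration, not the claimed method: the threshold, the unary code standing in for the "first variable length code", and the fixed-length width are all assumptions.

```python
THRESHOLD = 4          # assumed threshold separating the two prefix branches
FIXED_LEN_BITS = 2     # assumed width of the fixed-length code

def binarize_prefix(prefix: int) -> str:
    """Binarize a prefix value using one of two codes.

    If the test value (here simply the prefix itself) reaches the
    threshold, the excess is covered by a variable-length (unary) code;
    otherwise a short fixed-length code covers the small values.
    """
    if prefix >= THRESHOLD:
        excess = prefix - THRESHOLD               # first prefix part
        return "1" * excess + "0"                 # unary: a variable-length code
    return format(prefix, f"0{FIXED_LEN_BITS}b")  # fixed-length code

print(binarize_prefix(6))  # -> "110"
print(binarize_prefix(2))  # -> "10"
```

The resulting bins would then feed a CABAC engine; the bypass/context decision is outside this sketch.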
ADAPTIVE TILE DATA SIZE CODING FOR VIDEO AND IMAGE COMPRESSION
A method for encoding a video signal includes estimating a space requirement for encoding a tile of a video frame, writing a first value in a first value space of a bitstream, wherein the first value describes a size of a second value space, and defining the second value space in the bitstream, wherein the size of the second value space is based on the estimated space requirement. The method also includes writing encoded content in a content space of the bitstream, determining a size of the content space subsequent to writing the encoded content in the content space, and writing a second value in the second value space of the bitstream, wherein the second value describes the size of the content space.
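The reserve-then-backfill scheme above can be sketched as follows. Byte granularity and the big-endian size encoding are illustrative assumptions; the abstract does not fix either.

```python
def encode_tile(bitstream: bytearray, tile_content: bytes, estimated_size: int) -> None:
    """Write a tile with an adaptively sized, backfilled length field."""
    # First value: how many bytes the (second) size field occupies,
    # derived from the estimated space requirement.
    size_field_bytes = max(1, (estimated_size.bit_length() + 7) // 8)
    bitstream.append(size_field_bytes)            # first value space

    size_field_pos = len(bitstream)
    bitstream.extend(b"\x00" * size_field_bytes)  # second value space, reserved

    bitstream.extend(tile_content)                # content space
    content_size = len(tile_content)              # known only after writing

    # Backfill the second value with the actual content size.
    bitstream[size_field_pos:size_field_pos + size_field_bytes] = \
        content_size.to_bytes(size_field_bytes, "big")

stream = bytearray()
encode_tile(stream, b"tiledata", estimated_size=300)
```

Sizing the length field from the estimate avoids always reserving a worst-case field while still allowing the true size to be written after encoding.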
System and Method for Synchronizing Timing Across Multiple Streams
Systems and methods of adaptive streaming are discussed. Transcoded copies of a source stream may be aligned with one another such that the independently specified portions of each transcoded stream occur at the same locations within the content. These transcoded copies may be produced by one or more transcoders, whose outputs are synchronized by a delay adjuster. A fragmenter may use the synchronized and aligned streams to efficiently produce fragments suitable for use in adaptive streaming.
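A delay adjuster of the kind described can be sketched as a timestamp shift that brings every rendition onto common fragment boundaries. The data model (keyframe timestamps in seconds) and the choice of the latest start as the common origin are assumptions for illustration.

```python
def align_streams(streams: dict[str, list[float]]) -> dict[str, list[float]]:
    """Shift each transcoded stream's keyframe timestamps so that all
    renditions share the same fragment boundaries.

    Each list holds the independently decodable points of one rendition;
    slower transcoders emit the same content with a constant delay.
    """
    origin = max(ts[0] for ts in streams.values())  # latest starting point
    return {name: [t - ts[0] + origin for t in ts]
            for name, ts in streams.items()}

hi = [0.0, 2.0, 4.0]
lo = [0.5, 2.5, 4.5]   # same content, delayed by a slower transcoder
aligned = align_streams({"hi": hi, "lo": lo})
```

With boundaries aligned, a fragmenter can cut each rendition at identical locations and produce switchable fragments.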
PICTURE ENCODING/DECODING METHOD AND RELATED APPARATUS
A picture encoding/decoding method and a related apparatus are provided. The picture decoding method includes obtaining a current picture; selecting, from a knowledge base, K reference pictures of the current picture, where at least one picture in the knowledge base does not belong to a random access segment in which the current picture is located and wherein K is an integer greater than or equal to 1; and decoding the current picture according to the K reference pictures.
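The selection step can be sketched as ranking knowledge-base pictures by a similarity measure and keeping the top K. The dictionary data model and the `similarity` metric are placeholders; the abstract does not specify how the K references are chosen.

```python
def similarity(a: dict, b: dict) -> int:
    # Placeholder metric: negative absolute difference of scene hashes.
    # A real codec would use feature matching or rate-distortion cost.
    return -abs(a["scene_hash"] - b["scene_hash"])

def select_references(knowledge_base: list, current_picture: dict, k: int = 1) -> list:
    """Pick the K knowledge-base pictures most similar to the current one.

    Unlike a conventional reference list, the knowledge base may hold
    pictures outside the current random access segment.
    """
    assert k >= 1
    ranked = sorted(knowledge_base,
                    key=lambda pic: similarity(pic, current_picture),
                    reverse=True)
    return ranked[:k]
```

Decoding then proceeds with the selected K pictures as references for the current picture.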
SUPER-TRANSFORM VIDEO CODING
Super-transform coding may include identifying a plurality of sub-blocks for prediction coding a current block, determining whether to encode the current block using a super-transform, and super-prediction coding the current block. Super-prediction coding may include generating a super-prediction block for the current block by generating a prediction block for each unpartitioned sub-block of the current block, generating a super-prediction block for each partitioned sub-block of the current block by super-prediction coding the sub-block, and including the prediction blocks and super-prediction blocks for the sub-blocks in a super-prediction block for the current block. Including the prediction blocks and super-prediction blocks for the sub-blocks in a super-prediction block for the current block may include filtering at least a portion of each prediction block and each super-prediction block based on a spatially adjacent prediction block. Super-transform coding may include transforming the super-prediction block for the current block using a corresponding super-transform.
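The recursive build-up of the super-prediction block can be sketched as below. Blocks are modelled as one-dimensional dictionaries and the boundary filter is a simple average; both are illustrative stand-ins, not the claimed filtering.

```python
def predict(leaf: dict) -> list:
    # Ordinary prediction for an unpartitioned sub-block; a constant
    # (DC) fill stands in for real intra/inter prediction.
    return [leaf["dc"]] * leaf["size"]

def filter_and_merge(parts: list) -> list:
    # Assemble sub-block predictions into one super-prediction block,
    # smoothing each boundary sample against the spatially adjacent
    # prediction (crude stand-in for the described filtering).
    merged = []
    for part in parts:
        if merged:
            merged[-1] = (merged[-1] + part[0]) // 2
        merged.extend(part)
    return merged

def super_predict(block: dict) -> list:
    """Build a prediction covering the whole block, recursing into
    partitioned sub-blocks, so one super-transform can cover it."""
    if not block.get("children"):            # unpartitioned sub-block
        return predict(block)
    parts = [super_predict(child) for child in block["children"]]
    return filter_and_merge(parts)
```

A single transform sized to the whole block would then be applied to the residual of this super-prediction.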
Systems and methods for linear model derivation
The present disclosure provides a video data processing method. The method includes receiving a bitstream; decoding an index associated with a coding unit based on the bitstream, the index indicating a selection mode among at least four selection modes; determining four samples based on the index; determining two parameters based on the four samples; determining predicted samples of the coding unit based on the two parameters; and decoding the coding unit based on the predicted samples.
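Deriving two parameters from four samples can be sketched in the spirit of cross-component linear model (CCLM) derivation: average the two smallest and two largest reference samples, then fit a line through the two averaged points. The exact sample positions, rounding, and the role of the signalled index are assumptions, not the claimed method.

```python
def derive_linear_model(pairs: list) -> tuple:
    """pairs: four (reference, target) sample pairs chosen by the index.

    Returns (alpha, beta) so that prediction = alpha * reference + beta.
    """
    pairs = sorted(pairs)                  # sort by reference value
    lo, hi = pairs[:2], pairs[2:]          # two smallest / two largest
    x_lo = (lo[0][0] + lo[1][0]) / 2
    y_lo = (lo[0][1] + lo[1][1]) / 2
    x_hi = (hi[0][0] + hi[1][0]) / 2
    y_hi = (hi[0][1] + hi[1][1]) / 2
    alpha = (y_hi - y_lo) / (x_hi - x_lo)  # slope of the fitted line
    beta = y_lo - alpha * x_lo             # offset
    return alpha, beta

def predict_samples(refs: list, alpha: float, beta: float) -> list:
    """Apply the two-parameter model to reference samples."""
    return [alpha * r + beta for r in refs]
```

Averaging pairs before fitting makes the derivation robust to a single outlier sample while still needing only four inputs.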