Patent classifications
H04N19/30
Signaling for addition or removal of layers in scalable video
In one embodiment, a method of signaling individual layers in a transport stream includes: determining a plurality of layers in a transport stream, wherein each layer includes a respective transport stream parameter setting; determining an additional layer for the plurality of layers in the transport stream, wherein the additional layer enhances one or more of the plurality of layers, including a base layer, and the respective transport stream parameter settings for the plurality of layers do not take into account the additional layer; and determining an additional transport stream parameter setting for the additional layer, the additional transport stream parameter setting specifying a relationship between the additional layer and at least a portion of the plurality of layers, wherein the additional transport stream parameter setting is used to decode the additional layer and the at least a portion of the plurality of layers.
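The core idea above — existing layer settings remain untouched while only the new layer's setting records its relationship to the layers it enhances — can be sketched as follows. This is a minimal illustration, not the patented signaling syntax; the `LayerParams` structure, `pid` field, and `dependencies` list are hypothetical names chosen for the example.

```python
from dataclasses import dataclass, field

@dataclass
class LayerParams:
    """Hypothetical transport stream parameter setting for one layer."""
    pid: int  # packet identifier carrying the layer (illustrative)
    dependencies: list = field(default_factory=list)  # layer IDs this layer enhances

def add_enhancement_layer(layers, new_id, pid, enhanced_ids):
    """Register an additional layer without modifying existing settings."""
    # Only the additional layer's setting specifies the relationship to
    # the layers it enhances; prior settings are left as-is.
    layers[new_id] = LayerParams(pid=pid, dependencies=list(enhanced_ids))
    return layers

# Base layer 0 and an enhancement layer 1; then add a new layer 2
# that enhances both, without changing their parameter settings.
layers = {0: LayerParams(pid=0x100),
          1: LayerParams(pid=0x101, dependencies=[0])}
add_enhancement_layer(layers, 2, 0x102, [0, 1])
```

A decoder can then walk `dependencies` transitively to find the portion of the plurality of layers needed to decode the additional layer.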
SIMPLIFICATIONS OF CROSS-COMPONENT LINEAR MODEL
A computing device performs a method of decoding video data by: reconstructing a luma block corresponding to a chroma block; searching a sub-group of a plurality of reconstructed neighboring luma samples in a predefined order to identify a maximum luma sample and a minimum luma sample; computing a down-sampled maximum luma sample corresponding to the maximum luma sample; computing a down-sampled minimum luma sample corresponding to the minimum luma sample; identifying a first reconstructed chroma sample corresponding to the down-sampled maximum luma sample and a second reconstructed chroma sample corresponding to the down-sampled minimum luma sample; generating a linear model using the down-sampled maximum luma sample, the down-sampled minimum luma sample, the first reconstructed chroma sample, and the second reconstructed chroma sample; computing down-sampled luma samples from luma samples of the reconstructed luma block, wherein each down-sampled luma sample corresponds to a chroma sample of the chroma block; and predicting chroma samples of the chroma block by applying the linear model to the corresponding down-sampled luma samples.
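The two-point model derivation described above can be sketched as follows. This is a simplified floating-point illustration (real codecs use fixed-point arithmetic and down-sampling filters); the sample lists stand in for the searched sub-group of neighboring samples, and down-sampling is assumed to have been applied already.

```python
def cclm_predict(neighbor_luma, neighbor_chroma, down_luma):
    """Derive a linear model from the max/min neighboring luma samples
    and their co-located chroma samples, then predict chroma samples."""
    # Search the sub-group for the maximum and minimum luma samples.
    i_max = max(range(len(neighbor_luma)), key=lambda i: neighbor_luma[i])
    i_min = min(range(len(neighbor_luma)), key=lambda i: neighbor_luma[i])
    l_max, l_min = neighbor_luma[i_max], neighbor_luma[i_min]
    # First/second reconstructed chroma samples corresponding to max/min luma.
    c_max, c_min = neighbor_chroma[i_max], neighbor_chroma[i_min]
    # Two-point linear model: chroma = alpha * luma + beta.
    alpha = (c_max - c_min) / (l_max - l_min) if l_max != l_min else 0.0
    beta = c_min - alpha * l_min
    # Apply the model to each down-sampled luma sample of the block.
    return [alpha * l + beta for l in down_luma]
```

For example, with neighboring luma `[10, 50, 30]` and chroma `[20, 100, 60]`, the model is `alpha = 2`, `beta = 0`, so a down-sampled luma sample of 25 predicts a chroma value of 50.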
RESTRICTIONS OF USAGE OF TOOLS ACCORDING TO REFERENCE PICTURE TYPES
A video processing method includes determining, for a conversion between a current video block of a video including multiple video blocks and a coded representation of the video, and from the types of reference pictures used for the conversion, the applicability of a coding tool to the current video block, and performing the conversion based on the determining. The method may be performed by a video decoder, a video encoder, or a video transcoder.
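The determination step can be sketched as a predicate over reference picture types. The specific rule below (restricting bi-prediction refinement tools such as BDOF and DMVR to short-term references) is only one illustrative instance of such a restriction, not the full set of rules in the abstract; the tool names and type strings are assumptions for the example.

```python
def tool_applicable(tool, ref_types):
    """Decide whether a coding tool applies to the current block,
    based on the types of its reference pictures (illustrative rule)."""
    # Example restriction: decoder-side refinement tools are only
    # applicable when every reference picture is a short-term picture.
    if tool in ("BDOF", "DMVR"):
        return all(t == "short-term" for t in ref_types)
    # Other tools are unrestricted in this sketch.
    return True

# The conversion then either uses or skips the tool per block.
def convert_block(tool, ref_types):
    return "with " + tool if tool_applicable(tool, ref_types) else "without " + tool
```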
TECHNIQUES FOR IMPLEMENTING A DECODING ORDER WITHIN A CODED PICTURE
A method for video processing is described. The method includes performing, according to a rule, a conversion between a video including one or more pictures, each including one or more subpictures that include one or more slices, and a bitstream representation of the video, wherein the bitstream representation includes a number of coded units, and wherein the rule specifies that a decoding order of coded units within a subpicture is in an increasing order of the subpicture-level slice index values of the coded units.
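The ordering rule reduces to a sort by subpicture-level slice index, which can be sketched as follows. The `subpic_slice_idx` field name is hypothetical; it stands in for whatever syntax element carries the subpicture-level slice index of each coded unit.

```python
def decoding_order(coded_units):
    """Return coded units of one subpicture in decoding order:
    increasing subpicture-level slice index."""
    return sorted(coded_units, key=lambda cu: cu["subpic_slice_idx"])

# Coded units arriving in arbitrary order within a subpicture.
units = [{"name": "B", "subpic_slice_idx": 2},
         {"name": "A", "subpic_slice_idx": 0},
         {"name": "C", "subpic_slice_idx": 1}]
```

Applying `decoding_order(units)` yields the units in index order 0, 1, 2, which is the order in which a conforming decoder processes them.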
VIRTUAL TEMPORAL AFFINE CANDIDATES
A video encoder or decoder processes portions of video using virtual temporal affine motion candidates. Under the general aspects, virtual temporal affine candidates are created using only the classical temporal motion buffer information, avoiding the storage of additional affine parameters in a temporal motion buffer. A motion field for encoding or decoding a video block is generated based on the virtual temporal affine candidates. In one embodiment, collocated motion candidates are rescaled by adjusting the picture order count of the determined motion field. In another embodiment, resolution adaptation is performed to enable a current motion buffer to correspond to a reference motion buffer.
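The rescaling of collocated motion candidates mentioned in the first embodiment relies on the classical temporal motion vector scaling by picture order count (POC) distances, which can be sketched as below. This floating-point version is a simplification of the fixed-point scaling used in practice and is not the specific derivation claimed in the abstract.

```python
def rescale_mv(mv, poc_cur, poc_cur_ref, poc_col, poc_col_ref):
    """Rescale a collocated motion vector by the ratio of POC distances,
    as in classical temporal motion vector prediction."""
    td = poc_col - poc_col_ref  # temporal distance of the collocated MV
    tb = poc_cur - poc_cur_ref  # temporal distance for the current block
    if td == 0:
        return mv  # degenerate case: no scaling possible
    s = tb / td
    return (mv[0] * s, mv[1] * s)
```

For instance, a collocated vector spanning a POC distance of 4 is halved when reused across a current POC distance of 2. A virtual temporal affine candidate can then be assembled from several such rescaled vectors taken from the temporal motion buffer, with no extra affine parameters stored.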