Patent classifications
H04N19/60
Encoder, Decoder and Related Methods
There is disclosed an encoder, a decoder, related methods, and non-transitory storage units storing instructions which, when executed by a computer, cause the computer to perform the methods.
At an encoder (300), after a spatial transformation stage (304), there is obtained a spatially transformed version (306) of input image information (302) having multiple bands and, for each band, multiple transform band coefficients. After the generation of precincts (311), each comprising transform coefficients covering a predetermined spatial area of the input image information (302), there is provided a component transformation stage (320, 325), to apply one component transformation (CTr) selected (327) out of a plurality of predetermined component transformations, to each band (102′) of each precinct (311). Hence, there is obtained a spatially transformed and color transformed version (323) of the input image information (302), which is subsequently quantized and entropy encoded.
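The per-precinct component transformation described above can be illustrated with a small sketch. This is not the patented method itself: the two transforms (an identity pass-through and an RCT-like reversible decorrelation) and the magnitude-sum selection cost are assumptions chosen for illustration; the abstract only states that one transformation is selected out of a plurality of predetermined ones per band of each precinct.

```python
import numpy as np

def identity_ctr(r, g, b):
    # Predefined transform 1: leave the components unchanged.
    return r, g, b

def rct_ctr(r, g, b):
    # Predefined transform 2: reversible color transform (RCT-like)
    # that decorrelates the three components.
    y = (r + 2 * g + b) // 4
    cb = b - g
    cr = r - g
    return y, cb, cr

CTRS = {"identity": identity_ctr, "rct": rct_ctr}

def select_ctr(band_r, band_g, band_b):
    """For one band of one precinct, apply each predefined component
    transformation (CTr) and keep the one whose output coefficients have
    the smallest total magnitude (a simple stand-in cost)."""
    best_name, best_cost, best_out = None, None, None
    for name, ctr in CTRS.items():
        out = ctr(band_r, band_g, band_b)
        cost = sum(np.abs(c).sum() for c in out)
        if best_cost is None or cost < best_cost:
            best_name, best_cost, best_out = name, cost, out
    return best_name, best_out
```

On strongly correlated component bands the RCT-like transform drives two of the three outputs toward zero, so it wins under the magnitude cost; on uncorrelated bands the identity may be cheaper, which is why a per-band selection can help.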
ENCODING AND DECODING A SEQUENCE OF PICTURES
An apparatus for decoding a sequence of pictures from a data stream is configured for decoding a picture of the sequence by: deriving a residual transform signal of the picture from the data stream; combining the residual transform signal with a buffered transform signal of a previous picture of the sequence so as to obtain a transform signal of the picture, the transform signal representing the picture in spectral components; and subjecting the transform signal to a spectral-to-spatial transformation, wherein the buffered transform signal comprises a selection out of spectral components representing the previous picture.
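A minimal sketch of this transform-domain decoding loop, assuming an orthonormal DCT-II as the spectral transform and a binary mask as the "selection out of spectral components" of the previous picture; both choices are illustrative, not taken from the abstract.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis, used here as the spectral transform.
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2 / n)

def decode_picture(residual_spec, buffered_spec, keep_mask, T):
    """Combine the residual transform signal with the selected spectral
    components of the previous picture's buffered transform signal, then
    apply the spectral-to-spatial transformation."""
    spec = residual_spec + buffered_spec * keep_mask  # selection of components
    pixels = T.T @ spec @ T                           # inverse 2-D transform
    return spec, pixels
```

Because prediction happens in the spectral domain, the encoder only has to send `cur_spec - prev_spec * keep_mask`, and the decoder's buffer never needs the previous picture's full spectrum, only the selected components.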
Method and apparatus for video coding
A method of video decoding can include receiving a bit stream including coded bits of bins of syntax elements. The syntax elements are of various types that correspond to transform coefficients of a transform block in a coded picture. Context modeling is performed to determine a context model for each bin of the syntax elements. In a given frequency region of the transform block, for one type of the syntax elements, a group of the transform coefficients having different template magnitudes within a predetermined range share a same context model, or one of the transform coefficients uses the same context model for possible different template magnitudes of the one of the transform coefficients. The possible different template magnitudes are within the predetermined range. The coded bits are decoded based on the context models determined for each bin of the syntax elements to determine the bins of the syntax elements.
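The context-sharing idea above can be sketched as a context-index function: clipping the template magnitude to a predetermined range makes all larger magnitudes map to one shared context model. The particular neighbour template and the clipping bound are assumptions for illustration, not details fixed by the abstract.

```python
def template_magnitude(coeffs, pos):
    """Sum of absolute values of already-coded right/below neighbours
    (a common local-template choice; the exact template is an assumption)."""
    h, w = len(coeffs), len(coeffs[0])
    y, x = pos
    neighbours = [(y, x + 1), (y + 1, x), (y, x + 2), (y + 2, x), (y + 1, x + 1)]
    return sum(abs(coeffs[j][i]) for j, i in neighbours if j < h and i < w)

def context_index(region, template_mag, max_mag=4):
    """Map (frequency region, clipped template magnitude) to a context
    model index; all magnitudes at or above max_mag share one context,
    so different template magnitudes within the range reuse a model."""
    return region * (max_mag + 1) + min(template_mag, max_mag)
```

Sharing one context across a magnitude range reduces the number of context models the arithmetic coder must maintain and adapt, at the cost of a slightly coarser probability estimate.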
Method for encoding/decoding block information using quad tree, and device for using same
The disclosed method of decoding an intra prediction mode comprises the steps of: determining, on the basis of 1-bit information, whether the intra prediction mode of a present prediction unit is the same as a first candidate intra prediction mode or a second candidate intra prediction mode; if the intra prediction mode of the present prediction unit is the same as at least one of the first and second candidate intra prediction modes, determining, on the basis of additional 1-bit information, which of the two candidate intra prediction modes it matches; and decoding the intra prediction mode of the present prediction unit.
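The two-bit candidate decision above can be sketched as follows. The abstract only specifies the candidate path; the escape path (`read_remaining`) for modes that match neither candidate is an assumption added so the sketch is complete.

```python
def decode_intra_mode(read_bit, cand0, cand1, read_remaining):
    """First bit: is the present unit's intra mode one of the two
    candidate modes? If yes, the additional bit selects between them;
    otherwise a remaining-mode codeword is parsed (assumed escape path)."""
    if read_bit():                     # mode equals one of the candidates
        return cand1 if read_bit() else cand0
    return read_remaining()
```

With likely candidates derived from neighbouring blocks, the common case costs only two bits, which is the point of signalling the mode this way.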
Image coding apparatus for coding tile boundaries
An image decoding apparatus obtains pieces of coded data that are included in a bitstream and generated by coding tiles. Tile boundary independence information is further obtained from the bitstream, with the tile boundary independence information indicating whether each of the boundaries between the tiles is a first boundary or a second boundary. The pieces of coded data are decoded to generate image data of the tiles. Image data of a first tile is generated by decoding a first code string included in first coded data with reference to decoding information of a decoded tile when the tile boundary independence information indicates the first boundary, and by decoding the first code string without referring to the decoding information of the decoded tile when the tile boundary independence information indicates the second boundary.
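A toy sketch of the boundary-dependent decoding above: samples are coded as deltas from a running predictor, and the boundary flag decides whether the predictor is seeded from the neighbouring decoded tile (first, dependent boundary) or from a fixed default (second, independent boundary). The delta coding itself is an assumption used only to make the reference/no-reference distinction concrete.

```python
def decode_tile(deltas, neighbour_last, independent):
    """Toy decode of one tile's code string. Across a dependent (first)
    boundary the predictor starts from the neighbouring tile's last
    reconstructed sample; across an independent (second) boundary it
    starts from a fixed default (0), i.e. without referring to the
    decoded tile's information."""
    pred = 0 if independent else neighbour_last
    out = []
    for d in deltas:
        pred += d
        out.append(pred)
    return out
```

Independent boundaries cost some compression efficiency (the predictor restarts) but let tiles be decoded in parallel or in isolation, which is the trade-off the signalled flag exposes.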
Point cloud compression using video encoding with time consistent patches
A system comprises an encoder configured to compress attribute and/or spatial information for a point cloud and/or a decoder configured to decompress compressed attribute and/or spatial information for the point cloud. To compress the attribute and/or spatial information, the encoder is configured to convert a point cloud into an image based representation. Also, the decoder is configured to generate a decompressed point cloud based on an image based representation of a point cloud. In some embodiments, an encoder generates time-consistent patches for multiple versions of the point cloud at multiple moments in time and uses the time-consistent patches to generate image based representations of the point cloud at the multiple moments in time.
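One way to obtain time-consistent patches is to match each patch in the current frame to a patch in the previous frame so matched patches can keep the same atlas placement over time. The sketch below uses greedy bounding-box IoU matching; that criterion and the threshold are assumptions for illustration, not the patented procedure.

```python
def iou(a, b):
    # Axis-aligned patch bounding boxes as (x0, y0, x1, y1).
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_patches(prev_boxes, cur_boxes, threshold=0.3):
    """Greedily match current-frame patches to previous-frame patches by
    bounding-box overlap, so matched patches can reuse the same index and
    placement in the image based representation across moments in time."""
    matches, used = {}, set()
    for i, cb in enumerate(cur_boxes):
        best_j, best = None, threshold
        for j, pb in enumerate(prev_boxes):
            if j in used:
                continue
            score = iou(cb, pb)
            if score > best:
                best_j, best = j, score
        if best_j is not None:
            matches[i] = best_j
            used.add(best_j)
    return matches
```

Keeping matched patches in the same atlas location makes the packed images change slowly over time, which the downstream video encoder can exploit with inter prediction.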