Patent classifications
H04N19/45
Method and apparatus for reconstructing 360-degree image according to projection format
Disclosed are methods and apparatuses for image data encoding/decoding. A method for decoding a 360-degree image includes the steps of: receiving a bitstream obtained by encoding a 360-degree image; generating a prediction image by making reference to syntax information obtained from the received bitstream; adding the generated prediction image to a residual image obtained by dequantizing and inverse-transforming the bitstream, so as to obtain a decoded image; and reconstructing the decoded image into a 360-degree image according to a projection format. Therefore, the performance of image data compression can be improved.
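The decoding pipeline the abstract describes (prediction + dequantized, inverse-transformed residual → decoded image) can be illustrated with a minimal sketch. All function names are hypothetical, the inverse transform is reduced to an identity placeholder, and a one-dimensional "block" stands in for a real picture:

```python
def dequantize(coeffs, qstep):
    # Inverse quantization: scale each received coefficient by the quantization step.
    return [c * qstep for c in coeffs]

def inverse_transform(coeffs):
    # Placeholder: identity here; a real codec would apply an inverse DCT/DST.
    return list(coeffs)

def reconstruct_block(prediction, quantized_residual, qstep, clip_max=255):
    # Decoded block = prediction image + dequantized, inverse-transformed
    # residual, clipped to the valid sample range.
    residual = inverse_transform(dequantize(quantized_residual, qstep))
    return [max(0, min(clip_max, p + r)) for p, r in zip(prediction, residual)]

decoded = reconstruct_block([100, 100, 100, 100], [1, -1, 0, 2], qstep=4)
print(decoded)  # [104, 96, 100, 108]
```

The final step of the claimed method, reprojecting the decoded picture into a 360-degree image according to the signalled projection format, is omitted here.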
TRANSFORM SELECTION IN A VIDEO ENCODER AND/OR VIDEO DECODER
A process for selecting a transform set for a prediction block. The process can be used in both an encoder and a decoder, for example for a prediction block that has been predicted from a reference block. In some embodiments, both the prediction block and the reference block are intra blocks.
EXTENSION OF EFFECTIVE SEARCH RANGE FOR CURRENT PICTURE REFERENCING
A method of video encoding includes determining whether a reference block for a current block is located in a different coding tree unit (CTU) than the CTU of the current block. The method also includes, in response to the reference block being located in the different CTU, (i) determining whether a memory location of a reference sample memory for the reference block is available, where the area of the reference sample memory being checked is collocated in the different CTU with a corresponding area in the CTU of the current block. In response to the determination that the reference block is located in the different CTU, the method also includes, (ii) in response to a determination that the memory location for the reference block is available, retrieving, from the memory location corresponding to the reference block, one or more samples to encode the current block.
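The CTU comparison and collocated-area availability test can be sketched as follows. This is a simplified model under assumed names: the reference sample memory is represented as a set of positions within the current CTU that have already been overwritten by newly reconstructed samples, so a collocated position still "holds" the neighbouring CTU's samples only if it has not been overwritten yet:

```python
CTU_SIZE = 128  # assumed CTU dimension

def ctu_index(x, y):
    # CTU grid coordinates of a sample position.
    return (x // CTU_SIZE, y // CTU_SIZE)

def reference_available(cur_x, cur_y, ref_x, ref_y, overwritten):
    # Same CTU: the reference samples are always in the reconstruction buffer.
    if ctu_index(ref_x, ref_y) == ctu_index(cur_x, cur_y):
        return True
    # Different CTU: the reference is usable only while the collocated
    # position inside the current CTU has not yet been overwritten in the
    # shared reference sample memory.
    collocated = (ref_x % CTU_SIZE, ref_y % CTU_SIZE)
    return collocated not in overwritten

# Current block in CTU (1, 0); reference in CTU (0, 0).
print(reference_available(130, 10, 10, 10, overwritten=set()))       # True
print(reference_available(130, 10, 10, 10, overwritten={(10, 10)}))  # False
```

This mirrors the idea of extending the effective search range for current picture referencing: samples from the neighbouring CTU remain reachable exactly as long as their memory slots have not been reclaimed.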
Method and device for encoding or decoding image
An image decoding method and apparatus according to an embodiment may extract, from a bitstream, a quantization coefficient generated through core transformation, secondary transformation, and quantization; generate an inverse-quantization coefficient by performing inverse quantization on the quantization coefficient; generate a secondary inverse-transformation coefficient by performing secondary inverse-transformation on a low frequency component of the inverse-quantization coefficient, the secondary inverse-transformation corresponding to the secondary transformation; and perform core inverse-transformation on the secondary inverse-transformation coefficient, the core inverse-transformation corresponding to the core transformation.
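The secondary inverse transform described above applies only to the low-frequency (top-left) portion of the coefficient block, leaving the remaining coefficients untouched. A minimal sketch, with a one-sided matrix product standing in for a real secondary transform (actual designs are typically non-separable) and all names hypothetical:

```python
def secondary_inverse_transform(coeffs, kernel):
    # Apply the secondary inverse-transform kernel to the top-left n x n
    # low-frequency sub-block only; high-frequency coefficients pass through.
    n = len(kernel)
    out = [row[:] for row in coeffs]
    for i in range(n):
        for j in range(n):
            out[i][j] = sum(kernel[i][k] * coeffs[k][j] for k in range(n))
    return out

# A 2x2 row-swap kernel applied to a 3x3 coefficient block: only the
# top-left 2x2 region changes.
result = secondary_inverse_transform(
    [[1, 2, 3], [4, 5, 6], [7, 8, 9]],
    [[0, 1], [1, 0]],
)
print(result)  # [[4, 5, 3], [1, 2, 6], [7, 8, 9]]
```

Core inverse transformation would then be applied to the full block, per the abstract.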
Image encoding and decoding method, apparatus, and system, and storage medium to determine a transform core pair to effectively reduce encoding complexity
An image encoding and decoding method, includes: determining location information of a target reconstructed image block of a current to-be-encoded image block, where the target reconstructed image block is a reconstructed image block used to determine motion information of the current to-be-encoded image block; determining a first transform core pair based on the location information of the target reconstructed image block; and transforming a residual signal of the current to-be-encoded image block based on the first transform core pair, to obtain a transform coefficient.
SIGNAL RECONSTRUCTION METHOD, SIGNAL RECONSTRUCTION APPARATUS, AND PROGRAM
Provided is a signal reconstruction method executed by a signal reconstruction apparatus that includes a processor and a memory storing a codec. The signal reconstruction method includes reconstructing an input signal according to a desired purpose; in the reconstructing, the likelihood that the input signal is a predetermined type of signal is taken into account by executing coding on a processing result of the input signal, based on the codec determined in advance according to the type of the input signal.
Video decoding method and apparatus using multi-core transform, and video encoding method and apparatus using multi-core transform
A method and apparatus for performing transformation and inverse transformation on a current block by using multi-core transform kernels in video encoding and decoding processes. A video decoding method may include obtaining, from a bitstream, multi-core transformation information indicating whether multi-core transformation kernels are to be used according to a size of a current block; obtaining horizontal transform kernel information and vertical transform kernel information from the bitstream when the multi-core transformation kernels are used according to the multi-core transformation information; determining a horizontal transform kernel for the current block according to the horizontal transform kernel information; determining a vertical transform kernel for the current block according to the vertical transform kernel information; and performing inverse transformation on the current block by using the horizontal transform kernel and the vertical transform kernel.
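The kernel-selection logic the abstract walks through (size-gated multi-core transformation information, then separate horizontal and vertical kernel indices) can be sketched as a small lookup. Kernel names follow the DCT-II / DST-VII / DCT-VIII family commonly used for multiple-transform selection; the table and function names are assumptions, not the patent's notation:

```python
# Hypothetical index-to-kernel table for a multi-core transform scheme.
KERNELS = {0: "DCT-II", 1: "DST-VII", 2: "DCT-VIII"}

def select_kernels(multi_core_enabled, h_index, v_index):
    # If multi-core transformation is not used for this block size,
    # fall back to the default kernel in both directions; otherwise
    # pick the horizontal and vertical kernels from the signalled indices.
    if not multi_core_enabled:
        return KERNELS[0], KERNELS[0]
    return KERNELS[h_index], KERNELS[v_index]

print(select_kernels(True, 1, 2))   # ('DST-VII', 'DCT-VIII')
print(select_kernels(False, 1, 2))  # ('DCT-II', 'DCT-II')
```

Inverse transformation would then be performed on the current block with the selected horizontal and vertical kernels.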
Point Cloud Compression
A system comprises an encoder configured to compress attribute information for a point cloud and/or a decoder configured to decompress compressed attribute information for the point cloud. Attribute values for at least one starting point are included in a compressed attribute information file, and attribute correction values used to correct predicted attribute values are included in the compressed attribute information file. Attribute values are predicted based, at least in part, on attribute values of neighboring points and distances between a particular point for which an attribute value is being predicted and the neighboring points. The predicted attribute values are compared to attribute values of the point cloud prior to compression to determine the attribute correction values. A decoder follows a similar prediction process as the encoder and corrects predicted values using the attribute correction values included in the compressed attribute information file.
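The distance-based prediction plus correction-value scheme can be illustrated with inverse-distance weighting, a common choice for this kind of neighbor-based attribute prediction (the abstract does not specify the exact weighting, so this is an assumption, as are the function names):

```python
def predict_attribute(neighbors):
    # neighbors: list of (attribute_value, distance) pairs.
    # Inverse-distance weighting: nearer neighbors contribute more.
    weights = [1.0 / d for _, d in neighbors]
    total = sum(weights)
    return sum(w * a for (a, _), w in zip(neighbors, weights)) / total

def decode_attribute(neighbors, correction):
    # Decoder repeats the encoder's prediction, then applies the
    # transmitted attribute correction value.
    return predict_attribute(neighbors) + correction

# Two equidistant neighbors with attributes 10 and 30 predict 20.0;
# a correction of -2 recovers the original attribute 18.0.
print(decode_attribute([(10, 1.0), (30, 1.0)], correction=-2))  # 18.0
```

Because encoder and decoder run the same prediction, only the (typically small) correction values need to be transmitted.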
Integrated image reshaping and video coding
Given a sequence of images in a first codeword representation, methods, processes, and systems are presented for integrating reshaping into a next-generation video codec for encoding and decoding the images, wherein reshaping allows part of the images to be coded in a second codeword representation which allows more efficient compression than the first codeword representation. A variety of architectures are discussed, including: an out-of-loop reshaping architecture, an in-loop reshaping architecture for intra pictures only, an in-loop architecture for prediction residuals, and a hybrid in-loop reshaping architecture. Syntax methods for signaling reshaping parameters, and image-encoding methods optimized with respect to reshaping, are also presented.
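Reshaping of this kind is usually realized as a piecewise-linear mapping between the two codeword representations, built from signalled pivot points, with an inverse mapping applied at the decoder. A minimal sketch under that assumption (pivot values and function names are illustrative only):

```python
def build_forward_lut(pivots):
    # pivots: ordered (input_codeword, mapped_codeword) pairs defining a
    # piecewise-linear reshaping curve; returns a per-codeword lookup table.
    lut = []
    for (x0, y0), (x1, y1) in zip(pivots, pivots[1:]):
        for x in range(x0, x1):
            lut.append(round(y0 + (y1 - y0) * (x - x0) / (x1 - x0)))
    lut.append(pivots[-1][1])
    return lut

def inverse_map(lut, y):
    # Inverse reshaping: the codeword whose forward mapping is closest to y.
    return min(range(len(lut)), key=lambda x: abs(lut[x] - y))

# Dark codewords 0..4 get twice the codeword budget of bright ones 4..8.
lut = build_forward_lut([(0, 0), (4, 8), (8, 12)])
print(lut)                  # [0, 2, 4, 6, 8, 9, 10, 11, 12]
print(inverse_map(lut, 4))  # 2
```

Which picture regions pass through the forward map, and where the inverse map sits relative to prediction, is exactly what distinguishes the out-of-loop, in-loop, residual, and hybrid architectures listed above.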