H04N19/57

Point cloud compression using video encoding with time consistent patches

A system comprises an encoder configured to compress attribute and/or spatial information for a point cloud and/or a decoder configured to decompress compressed attribute and/or spatial information for the point cloud. To compress the attribute and/or spatial information, the encoder is configured to convert a point cloud into an image-based representation. Likewise, the decoder is configured to generate a decompressed point cloud from an image-based representation of a point cloud. In some embodiments, an encoder generates time-consistent patches for multiple versions of the point cloud at multiple moments in time and uses the time-consistent patches to generate image-based representations of the point cloud at those moments.
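One way to keep patches time-consistent is to associate each patch in the current frame with a corresponding patch in the previous frame, so matched patches can reuse the same atlas position across frames. The following is a minimal Python sketch using greedy nearest-centroid matching; the function name and the centroid-only criterion are illustrative assumptions (a real encoder would also consider patch normals, size, and overlap):

```python
def match_patches(prev_patches, cur_patches):
    """Time-consistent patch association sketch: greedily pair each patch
    centroid in the current frame with the closest unused patch centroid
    from the previous frame, so corresponding patches can keep stable
    positions in the image-based (atlas) representation across time.
    Centroids are (x, y, z) tuples; matching by centroid distance alone
    is a simplifying assumption."""
    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))

    mapping = {}   # current patch index -> previous patch index
    used = set()
    for i, c in enumerate(cur_patches):
        best = min((j for j in range(len(prev_patches)) if j not in used),
                   key=lambda j: dist2(c, prev_patches[j]), default=None)
        if best is not None:
            mapping[i] = best
            used.add(best)
    return mapping
```

Patches left unmatched (e.g., newly appearing geometry) would be packed as new patches in a real encoder.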

Video signal processing method and apparatus using adaptive motion vector resolution

A video signal decoding device comprising a processor, wherein the processor is configured to: obtain reference samples corresponding to a first side of a current block and reference samples corresponding to a second side of the current block, obtain a direct current (DC) value for prediction of the current block based on a reference sample set comprising at least some of the reference samples corresponding to the first side and the reference samples corresponding to the second side, and reconstruct the current block based on the DC value.

IMAGE ENCODING/DECODING METHOD AND APPARATUS BASED ON WRAP-AROUND MOTION COMPENSATION, AND RECORDING MEDIUM STORING BITSTREAM
20230012751 · 2023-01-19

An image encoding/decoding method and apparatus are provided. An image decoding method includes obtaining inter prediction information of a current block and wraparound information from a bitstream, and generating a prediction block of the current block based on the inter prediction information and the wraparound information. The wraparound information may include a first flag specifying whether wraparound motion compensation is enabled for a current video sequence including the current block. The first flag may have a first value specifying that the wraparound motion compensation is disabled when one or more subpictures that are coded independently and have a width different from the width of the current picture are present in the current video sequence.
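The two pieces of decoder behavior described above can be sketched directly: the enable flag is forced off when any independently coded subpicture has a width different from the picture width, and when wraparound is enabled, a reference-sample x-coordinate beyond the picture edge wraps to the opposite side (as in 360° ERP video) instead of being clamped. A minimal sketch, with hypothetical function names and a simplified dict representation of subpictures:

```python
def wraparound_allowed(subpictures, pic_width):
    """The enable flag must specify 'disabled' if any independently coded
    subpicture has a width different from the current picture width."""
    return not any(sp["independent"] and sp["width"] != pic_width
                   for sp in subpictures)

def wrap_sample_x(x, pic_width, wraparound_enabled):
    """Horizontal position of a reference sample during motion
    compensation. With wraparound enabled, out-of-bounds x wraps to the
    opposite picture edge (Python's % already yields a non-negative
    result for negative x); otherwise it clamps to the picture."""
    if wraparound_enabled:
        return x % pic_width
    return min(max(x, 0), pic_width - 1)
```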

Hash-based motion searching

Methods, systems, and devices for hash-based motion estimation in video coding are described. An exemplary method of video processing includes determining, for a conversion between a current block of a video and a bitstream representation of the video, motion information associated with the current block using a hash-based motion search, where the size of the current block is M×N, M and N being positive integers with M not equal to N; applying, based on the motion information and a video picture comprising the current block, a prediction for the current block; and performing, based on the prediction, the conversion.
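The core of a hash-based motion search is to precompute a table mapping block-content hashes to positions in the reference picture; the search for a current block then becomes a table lookup instead of an exhaustive pixel comparison. A minimal Python sketch (function names are mine; real codecs typically use CRC-based hashes and verify candidates sample-by-sample to rule out collisions):

```python
def block_hash(frame, x, y, w, h):
    """Hash the w×h block whose top-left corner is (x, y)."""
    rows = tuple(tuple(frame[y + j][x + i] for i in range(w))
                 for j in range(h))
    return hash(rows)

def build_hash_table(ref, w, h):
    """Map every w×h block hash in the reference picture to the list of
    positions where that content occurs. A non-square M×N block size
    (w != h) is handled identically."""
    table = {}
    for y in range(len(ref) - h + 1):
        for x in range(len(ref[0]) - w + 1):
            table.setdefault(block_hash(ref, x, y, w, h), []).append((x, y))
    return table

def hash_motion_search(table, cur, cx, cy, w, h):
    """Return candidate reference positions whose block content hashes
    match the current block at (cx, cy)."""
    return table.get(block_hash(cur, cx, cy, w, h), [])
```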

Method and device for encoding or decoding image

An image decoding method and apparatus according to an embodiment may extract, from a bitstream, a quantization coefficient generated through core transformation, secondary transformation, and quantization; generate an inverse-quantization coefficient by performing inverse quantization on the quantization coefficient; generate a secondary inverse-transformation coefficient by performing secondary inverse-transformation on a low frequency component of the inverse-quantization coefficient, the secondary inverse-transformation corresponding to the secondary transformation; and perform core inverse-transformation on the secondary inverse-transformation coefficient, the core inverse-transformation corresponding to the core transformation.
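The decoding pipeline described above (inverse quantization, then a secondary inverse transform restricted to the low-frequency region, then the core inverse transform) can be sketched with matrix operations. The transform matrices below are hypothetical stand-ins for the codec's actual kernels, and the sketch assumes separable transforms applied as left/right matrix products:

```python
import numpy as np

def decode_coefficients(qcoeff, qstep, sec_inv, core_inv, lf=4):
    """Sketch of the claimed decoding order:
    1. inverse quantization of the extracted quantized coefficients,
    2. secondary inverse transform on the low-frequency (top-left
       lf×lf) region only,
    3. core inverse transform over the full block.
    sec_inv (lf×lf) and core_inv (block-sized) are illustrative inverse
    transform matrices, not the standard's actual kernels."""
    c = qcoeff * qstep                                  # inverse quantization
    c[:lf, :lf] = sec_inv @ c[:lf, :lf] @ sec_inv.T     # secondary inverse
    return core_inv @ c @ core_inv.T                    # core inverse
```

With identity matrices for both transforms the pipeline reduces to plain dequantization, which makes the data flow easy to check.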

USING UNREFINED MOTION VECTORS FOR PERFORMING DECODER-SIDE MOTION VECTOR DERIVATION

A device for decoding video data includes a memory configured to store video data; and one or more processors implemented in circuitry and configured to: determine a deterministic bounding box from which to retrieve reference samples of reference pictures of video data for performing decoder-side motion vector derivation (DMVD) for a current block of the video data; derive a motion vector for the current block according to DMVD using the reference samples within the deterministic bounding box; form a prediction block using the motion vector; and decode the current block using the prediction block.
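The point of a deterministic bounding box is that the decoder can compute, from the unrefined motion vector alone, the exact region of reference samples DMVD may touch, before any refinement happens. A minimal sketch under assumed parameters (the interpolation-filter margin of 3 and all function names are illustrative):

```python
def bounding_box(init_mv, block_w, block_h, search_range, interp_margin=3):
    """Deterministic bounding box around the initial (unrefined) MV:
    the block footprint, extended by the DMVD search range and the
    interpolation filter margin, so every possible sample fetch is
    known before refinement."""
    mx, my = init_mv
    return (mx - search_range - interp_margin,
            my - search_range - interp_margin,
            mx + block_w - 1 + search_range + interp_margin,
            my + block_h - 1 + search_range + interp_margin)

def fetch_region(mv, block_w, block_h, interp_margin=3):
    """Reference-sample region needed to interpolate the block at mv."""
    mx, my = mv
    return (mx - interp_margin, my - interp_margin,
            mx + block_w - 1 + interp_margin, my + block_h - 1 + interp_margin)

def inside_box(region, box):
    """True if a fetch region lies entirely within the bounding box."""
    x0, y0, x1, y1 = region
    bx0, by0, bx1, by1 = box
    return x0 >= bx0 and y0 >= by0 and x1 <= bx1 and y1 <= by1
```

Any refined MV within the search range yields a fetch region inside the box, so memory access is bounded regardless of the refinement outcome.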

Video codec using template matching prediction

A video decoder and/or video encoder configured to: determine a set of search area location candidates in a reference picture of a video; match the set of search area location candidates with a current template area adjacent to a current block of a current picture to obtain a best matching search area location candidate; select, out of a search area positioned in the reference picture at the best matching search area location candidate, a set of one or more predictor blocks by matching the current template area against the search area; and predictively decode/encode the current block from/into a data stream based on the set of one or more predictor blocks.
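The first stage of the search above, picking the best-matching search area location by comparing templates, can be sketched with an L-shaped template and a sum-of-absolute-differences cost. Function names, the one-sample-wide template, and the SAD metric are illustrative assumptions:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length sample lists."""
    return sum(abs(x - y) for x, y in zip(a, b))

def extract_template(frame, x, y, w, h, t=1):
    """L-shaped template of thickness t: samples above and to the left
    of the w×h block at (x, y). Assumes the template lies inside the
    frame (no boundary handling in this sketch)."""
    top = [frame[y - 1 - j][x + i] for j in range(t) for i in range(w)]
    left = [frame[y + j][x - 1 - i] for i in range(t) for j in range(h)]
    return top + left

def best_template_location(ref, cur_template, candidates, w, h):
    """Among candidate search-area locations in the reference picture,
    return the one whose template best matches the current template."""
    return min(candidates,
               key=lambda p: sad(extract_template(ref, p[0], p[1], w, h),
                                 cur_template))
```

The second stage would then scan positions inside the chosen search area with the same template cost to collect the final set of predictor blocks.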