Patent classifications
H04N19/90
Attribute value of reconstructed position associated with plural original points
Aspects of the disclosure provide methods and apparatuses for point cloud compression. In some examples, an apparatus for point cloud compression includes processing circuitry. The processing circuitry determines a plurality of original points in a point cloud that is quantized to a reconstructed position. The processing circuitry determines an attribute value of the reconstructed position based on attribute information of the plurality of original points. Further, the processing circuitry encodes the point cloud based on the determined attribute value of the reconstructed position.
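The core idea above (several original points quantize to one reconstructed position, whose attribute is derived from all of them) can be sketched as follows. This is a minimal illustration, assuming a uniform quantization grid and a simple mean as the attribute-derivation rule; the actual disclosure may use a different quantizer and combination rule.

```python
from collections import defaultdict

def quantize(pos, step):
    """Quantize a 3-D position to the nearest grid point (illustrative)."""
    return tuple(round(c / step) for c in pos)

def reconstructed_attributes(points, step=2.0):
    """Map each reconstructed (quantized) position to the mean attribute
    of all original points that fall onto it."""
    groups = defaultdict(list)
    for pos, attr in points:
        groups[quantize(pos, step)].append(attr)
    return {q: sum(attrs) / len(attrs) for q, attrs in groups.items()}

# Two original points quantize to the same grid cell (1, 0, 0); the
# reconstructed position receives the average of their attributes.
points = [((1.9, 0.1, 0.0), 100), ((2.1, -0.1, 0.0), 120), ((6.0, 0.0, 0.0), 50)]
print(reconstructed_attributes(points))
```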
METHODS AND SYSTEMS FOR COMBINED LOSSLESS AND LOSSY CODING
An encoder includes circuitry configured to receive an input video, select a current frame, identify a first sub-picture of the current frame to be encoded using a lossless encoding protocol, and encode the current frame, wherein encoding the current frame includes encoding the first sub-picture using the lossless encoding protocol.
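The per-sub-picture mode decision can be sketched as below. This is an illustrative toy, assuming that "lossless" means passing samples through unchanged and that "lossy" means uniform scalar quantization; the flag set `lossless_ids` stands in for whatever signalling the actual encoder uses.

```python
def encode_subpicture(samples, lossless, qstep=8):
    """Lossless path copies samples exactly; lossy path quantizes them."""
    if lossless:
        return list(samples)
    return [qstep * round(s / qstep) for s in samples]

def encode_frame(frame, lossless_ids):
    """frame: {subpicture_id: samples}; lossless_ids: ids signalled to use
    the lossless protocol. Each sub-picture is coded independently."""
    return {sid: encode_subpicture(s, sid in lossless_ids)
            for sid, s in frame.items()}

frame = {0: [13, 27, 41], 1: [13, 27, 41]}
coded = encode_frame(frame, lossless_ids={0})
print(coded)  # sub-picture 0 is preserved exactly; sub-picture 1 is quantized
```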
CODING VIDEO FRAME KEY POINTS TO ENABLE RECONSTRUCTION OF VIDEO FRAME
An image processing method includes receiving coded image data of a video clip transmitted by an encoder, and decoding the coded image data to obtain a first video frame, key points in the first video frame, and key points in a second video frame not included in the coded image data received from the encoder. The key points represent positions of an object in the first and second video frames. The method further includes generating transforming information of motion of the object according to the key points in the first video frame and the key points in the second video frame, and reconstructing the second video frame according to the first video frame and the transforming information.
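The "transforming information of motion" derived from corresponding key points can be sketched as a least-squares affine fit. This is only an illustration, assuming 2-D key points and an affine motion model; the patent's actual transform model may differ.

```python
import numpy as np

def estimate_affine(src, dst):
    """Fit a 2-D affine transform mapping key points src -> dst by least
    squares: [x', y'] = [x, y, 1] @ M, with M a (3, 2) parameter matrix."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return M

def warp(points, M):
    """Apply the estimated transform to reconstruct positions in the
    second frame from positions in the first frame."""
    pts = np.asarray(points, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 1), (3, 1), (2, 2)]   # key points moved by a pure translation (2, 1)
M = estimate_affine(src, dst)
print(warp([(5, 5)], M))          # a point in frame 1 mapped into frame 2
```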
Method and apparatus for transform-based image encoding/decoding
The present invention relates to a method and apparatus for encoding and decoding a video image based on transform. The method for decoding a video includes: determining a transform mode of a current block; inverse-transforming residual data of the current block according to the transform mode of the current block; and rearranging the inverse-transformed residual data of the current block according to the transform mode of the current block, wherein the transform mode includes at least one of SDST (Shuffling Discrete Sine Transform), SDCT (Shuffling Discrete Cosine Transform), DST (Discrete Sine Transform) or DCT (Discrete Cosine Transform).
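The "shuffling" step that distinguishes SDST/SDCT from plain DST/DCT can be sketched as a permutation of residual samples applied at the encoder and inverted at the decoder after the inverse transform. The permutation below is purely illustrative; the actual rearrangement pattern is defined by the transform mode in the disclosure.

```python
def shuffle(block, perm):
    """Encoder-side rearrangement of residual samples (the 'S' in SDST/SDCT)."""
    return [block[i] for i in perm]

def unshuffle(block, perm):
    """Decoder-side inverse rearrangement, applied after the inverse transform."""
    out = [0] * len(block)
    for dst, src in enumerate(perm):
        out[src] = block[dst]
    return out

perm = [3, 1, 2, 0]               # illustrative permutation for a 2x2 block
residual = [10, -4, 7, 2]
shuffled = shuffle(residual, perm)
print(shuffled)                    # rearranged residual fed to the transform
print(unshuffle(shuffled, perm))   # decoder recovers the original ordering
```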
PARALLELIZED VIDEO DECODING USING A NEURAL NETWORK
In a method for decoding a data stream by way of an electronic device (10) including a processor (14), and a parallelized processing unit (16) designed to perform a plurality of operations of the same type in parallel at a given time, the data stream includes a first dataset (Fet) and a second dataset (Fnn) representative of audio or video content. The decoding method includes the processor (14) processing data from the first dataset (Fet), obtaining the audio or video content by processing (E70) data from the second dataset (Fnn) using a process depending at least partially on the data from the first set (Fet) and using an artificial neural network (18) implemented by the parallelized processing unit (16).
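The two-part decode above (a processor parses the first dataset, and a neural network on a parallelized unit maps the second dataset to content, conditioned on the first) can be sketched as follows. This is a heavily simplified stand-in: the "network" is a single dense layer with identity weights, and the first dataset is reduced to per-channel scale parameters, both of which are assumptions for illustration only.

```python
import numpy as np

def decode(stream):
    """Sketch: CPU-side parsing of Fet yields parameters that condition the
    neural-network processing of Fnn into the decoded content."""
    f_et, f_nn = stream["Fet"], stream["Fnn"]

    # Step 1: conventional processing of the first dataset.
    scale = np.asarray(f_et["scales"], float)

    # Step 2: the "neural network" (one dense layer + ReLU, for illustration)
    # processes the second dataset; its output is conditioned on the parsed
    # parameters from step 1.
    latents = np.asarray(f_nn, float)
    weights = np.eye(latents.shape[-1])          # stand-in for learned weights
    return np.maximum(latents @ weights, 0) * scale

stream = {"Fet": {"scales": [2.0, 0.5]}, "Fnn": [[1.0, -3.0], [4.0, 6.0]]}
print(decode(stream))
```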
Video coding method, video decoding method, video coding apparatus and video decoding apparatus
A moving picture coding method includes: making a determination as to whether or not to code all blocks in a current picture in the skip mode; setting, based on a result of the determination, a first flag indicating whether or not a temporally neighboring block is to be referenced, a value of a parameter for determining a total number of merging candidates, and a second flag for each block included in the current picture, the second flag indicating whether or not the block is to be coded in the skip mode; calculating, as a merging candidate, a neighboring block usable for merging; and coding an index which indicates a merging candidate to be used for coding of the current block and attaching the coded index to a bitstream.
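The candidate-list construction and index signalling described above can be sketched as below. This is illustrative only: motion information is reduced to a 2-D motion vector, `None` marks an unusable neighbour, and the cap `max_candidates` stands in for the parameter that determines the total number of merging candidates.

```python
def build_merge_candidates(neighbors, max_candidates):
    """Collect usable neighbouring blocks' motion info as merging
    candidates, dropping duplicates and capping the list size."""
    candidates = []
    for mv in neighbors:
        if mv is not None and mv not in candidates:
            candidates.append(mv)
        if len(candidates) == max_candidates:
            break
    return candidates

# Spatial/temporal neighbours of the current block (None = not usable).
neighbors = [(1, 0), None, (1, 0), (0, 2), (3, 3)]
cands = build_merge_candidates(neighbors, max_candidates=3)
index = cands.index((0, 2))   # index of the chosen candidate, coded in the bitstream
print(cands, index)
```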