H04N19/91

ENTROPY ENCODING/DECODING METHOD AND APPARATUS
20230239516 · 2023-07-27

The technology of this application relates to an entropy encoding method. The method includes: obtaining base layer information of a to-be-encoded picture block, where the base layer information corresponds to M samples in the picture block and M is a positive integer; obtaining K elements corresponding to enhancement layer information of the picture block, where the enhancement layer information corresponds to N samples in the picture block, both K and N are positive integers, and N ≥ M; inputting the base layer information into a neural network to obtain K groups of probability values, where the K groups correspond to the K elements and each group represents the probabilities of a plurality of candidate values of the corresponding element; and performing entropy encoding on the K elements based on the K groups of probability values.
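The flow described above can be sketched in a few lines: a stand-in for the neural network maps base-layer samples to K probability groups, and the entropy-coding step is approximated by the ideal code length Σ −log₂ p. All names, the softmax-style model, and the candidate count are illustrative assumptions, not the patented design.

```python
import math

def toy_probability_model(base_layer, k, num_candidates=4):
    # Stand-in for the neural network: derives K groups of probability
    # values from the base-layer samples (hypothetical model, for shape only).
    probs = []
    for i in range(k):
        logits = [(base_layer[(i + j) % len(base_layer)] % 5) + 1
                  for j in range(num_candidates)]
        total = sum(math.exp(x) for x in logits)
        probs.append([math.exp(x) / total for x in logits])
    return probs

def ideal_code_length(elements, prob_groups):
    # Ideal entropy-coded size in bits: sum of -log2 p(chosen value),
    # which a real arithmetic coder approaches asymptotically.
    return sum(-math.log2(group[e]) for e, group in zip(elements, prob_groups))
```

Each group sums to 1 by construction, so it is a valid distribution over the candidate values of its element.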

Encoder, decoder, computer program and computer program product for processing a frame of a video sequence

An encoder is provided that comprises a partitioner and an entropy coder. The partitioner is configured to receive a current block of the frame and obtain a list of candidate geometric partitioning (GP) lines. Each of the candidate GP lines is generated based on information of one or more candidate neighbor blocks of the current block. The partitioner is further configured to determine a final GP line that partitions the current block into two segments, select a GP line from the list to obtain a selected GP line, and generate a GP parameter for the current block. The GP parameter includes offset information indicating an offset between the final GP line and the selected GP line. The entropy coder is configured to encode the GP parameter.
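A minimal sketch of the candidate-plus-offset idea: a GP line is modelled as a quantized (angle, distance) pair, the candidate nearest the final line is chosen, and only the residual offset is put in the GP parameter. The `GPLine` type, the L1 cost metric, and the parameter layout are hypothetical choices for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GPLine:
    angle_idx: int  # quantized angle of the partition line
    dist_idx: int   # quantized distance from the block centre

def build_gp_parameter(candidates, final):
    # Pick the candidate line closest to the final line (toy L1 cost),
    # then signal only the small residual offset instead of the full line.
    def cost(c):
        return abs(final.angle_idx - c.angle_idx) + abs(final.dist_idx - c.dist_idx)
    idx = min(range(len(candidates)), key=lambda i: cost(candidates[i]))
    sel = candidates[idx]
    offset = (final.angle_idx - sel.angle_idx, final.dist_idx - sel.dist_idx)
    return {"candidate_index": idx, "offset": offset}
```

Because neighbor-derived candidates tend to lie close to the final line, the offset is usually small and cheap to entropy-code.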

Methods and devices using direct coding in point cloud compression
11570481 · 2023-01-31

Methods and devices for coding point clouds use a direct coding mode to code the coordinates of a point within a sub-volume associated with a current node, instead of a pattern of occupancy for child nodes. Eligibility for direct coding is based on occupancy data from another node. If eligible, a flag in the bitstream signals whether direct coding is applied to points in the sub-volume.
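The eligibility-then-flag logic can be sketched as follows. The sparsity rule (few occupied sibling nodes), the popcount threshold, and the list-based "bitstream" are all illustrative assumptions; a real codec would use arithmetic-coded syntax elements.

```python
def direct_mode_eligible(other_occupancy_bits, max_occupied=2):
    # Hypothetical rule: a sparsely occupied neighbouring/parent node
    # suggests the region is sparse enough for direct coordinate coding.
    return bin(other_occupancy_bits).count("1") <= max_occupied

def encode_node(bitstream, occupancy, points, parent_occupancy):
    if direct_mode_eligible(parent_occupancy):
        use_direct = len(points) <= 2          # encoder decision (illustrative)
        bitstream.append(1 if use_direct else 0)  # the signalled flag
        if use_direct:
            bitstream.extend(points)           # code point coordinates directly
            return
    bitstream.append(occupancy)                # fall back to child-occupancy pattern
```

Note that the flag is only emitted when the eligibility test passes, so ineligible nodes spend no bits on it.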

Texture compression
11568572 · 2023-01-31

A computer-implemented method comprises receiving a first compressed representation of a texture map in a first compression format, wherein the first compressed representation has been compressed using a first compressor, and receiving an array of compression parameters for a second compressor, the array including one or more compression parameters for each of a plurality of pixel regions of the texture map. The method further comprises decompressing the first compressed representation to obtain the texture map, and compressing, using the second compressor, the texture map into a second compressed representation in a second compression format, each of the pixel regions being compressed in accordance with its respective compression parameters. The method further comprises storing the second compressed representation in one or more memories accessible by a graphics processing unit, and selectively decompressing portions of the second compressed representation using the graphics processing unit.
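The transcode pipeline reduces to: decompress format 1, split into regions, recompress each region with its own parameter. The sketch below uses toy stand-ins (run-length pairs for format 1, per-region bit-shift quantization for format 2); neither corresponds to a real GPU texture format.

```python
def transcode_texture(blob, region_params, region_size=4):
    # Toy stand-ins for the two compressors (not real texture formats):
    # format 1 is run-length (value, count) pairs; format 2 quantizes
    # each pixel region by its own shift parameter.
    texture = [v for v, n in blob for _ in range(n)]        # decompress format 1
    regions = [texture[i:i + region_size]
               for i in range(0, len(texture), region_size)]
    return [
        [v >> q for v in region]                            # per-region recompression
        for region, q in zip(regions, region_params)
    ]
```

The point of the per-region parameter array is visible even in the toy: each region can trade quality for size independently.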

Method for encoding and decoding video by using motion vector differential value, and apparatus for encoding and decoding motion information

Provided is a video decoding method including: generating a merge candidate list including neighboring blocks referenced to predict a motion vector of a current block in a skip mode or a merge mode; when a merge motion vector difference is used, according to merge difference mode information indicating whether the merge motion vector difference and a motion vector determined from the merge candidate list are used, determining a base motion vector from a candidate selected from the merge candidate list based on merge candidate information; determining the motion vector of the current block by using the base motion vector and a merge motion vector difference of the current block, the merge motion vector difference being determined by using a distance index and a direction index; and reconstructing the current block by using the motion vector of the current block.
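The base-vector-plus-signalled-difference reconstruction can be sketched with lookup tables in the spirit of VVC's MMVD (distance steps in quarter-sample units, four axis-aligned directions). The exact table values and function names here are illustrative, not a normative decoder.

```python
# Offset tables in the spirit of VVC's merge mode with MVD (quarter-sample units)
MMVD_DISTANCES = [1, 2, 4, 8, 16, 32, 64, 128]
MMVD_DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # +x, -x, +y, -y

def reconstruct_mv(merge_list, candidate_idx, distance_idx, direction_idx):
    # Base motion vector comes from the signalled merge candidate;
    # the difference is rebuilt from the distance and direction indices.
    base_x, base_y = merge_list[candidate_idx]
    step = MMVD_DISTANCES[distance_idx]
    dx, dy = MMVD_DIRECTIONS[direction_idx]
    return (base_x + dx * step, base_y + dy * step)
```

Signalling two small indices instead of a full motion vector difference is what makes the mode cheap: the decoder expands them through the shared tables.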

Selectable transcode engine systems and methods

An electronic device includes a video encoding pipeline configured to encode source image data. The video encoding pipeline includes a first transcode engine and a second transcode engine. The electronic device also includes processing circuitry configured to determine a target throughput for a bin stream and, based on the target throughput, determine whether to encode the bin stream using only the first transcode engine or both transcode engines. The processing circuitry then causes the bin stream to be encoded by only the first transcode engine or by both engines, in accordance with that determination.
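The throughput-driven engine selection reduces to a threshold test plus an optional split of the bin stream. The single-capacity threshold and the even halving below are hypothetical simplifications of whatever policy the hardware actually applies.

```python
def select_engines(target_bins_per_sec, engine_capacity):
    # Bring in the second engine only when one engine cannot meet the
    # target throughput (illustrative threshold rule).
    return 2 if target_bins_per_sec > engine_capacity else 1

def encode_bin_stream(bins, target, capacity):
    n = select_engines(target, capacity)
    if n == 1:
        return [bins]                        # first engine takes the whole stream
    mid = len(bins) // 2
    return [bins[:mid], bins[mid:]]          # split work across both engines
```

Running one engine for low-rate streams saves power; splitting only under high targets keeps both cases on the same code path.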
