Patent classifications
H04N19/80
Three-Dimensional Mesh Compression Using a Video Encoder
A system comprises an encoder configured to compress and encode data for a three-dimensional mesh using a video encoding technique. To compress the three-dimensional mesh, the encoder determines sub-meshes and, for each sub-mesh, texture patches and geometry patches. The encoder also determines patch connectivity information and patch texture coordinates for the texture patches and geometry patches. The texture patches and geometry patches are packed into video image frames and encoded using a video codec. Additionally, the encoder determines boundary stitching information for the sub-meshes. A decoder receives a bitstream generated by the encoder and reconstructs the three-dimensional mesh.
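The encoder-side flow described above (sub-meshes, patch packing, side information, boundary stitching) can be sketched as follows. This is a minimal illustration, not the patented implementation: all class names, the naive row-packing layout, and the bitstream-as-dict representation are assumptions for clarity, and the packed frames would in practice be handed to a real video codec.

```python
from dataclasses import dataclass, field

@dataclass
class Patch:
    kind: str           # "texture" or "geometry"
    pixels: list        # 2D patch samples to pack into a frame
    connectivity: list  # patch connectivity information
    uv_coords: list     # patch texture coordinates

@dataclass
class SubMesh:
    patches: list = field(default_factory=list)

def pack_patches_into_frame(patches, frame_w=64, frame_h=64):
    """Pack patches into one video image frame (naive top-down row packing)."""
    frame = [[0] * frame_w for _ in range(frame_h)]
    placements = []
    y = 0
    for p in patches:
        h = len(p.pixels)
        w = len(p.pixels[0]) if h else 0
        for r in range(h):
            frame[y + r][:w] = p.pixels[r]
        placements.append((p.kind, 0, y, w, h))  # (kind, x, y, w, h)
        y += h
    return frame, placements

def encode_mesh(sub_meshes):
    """Assemble a bitstream-like dict: packed frames plus side information."""
    frames, side_info = [], []
    for sm in sub_meshes:
        frame, placements = pack_patches_into_frame(sm.patches)
        frames.append(frame)  # would be encoded with a video codec
        side_info.append({
            "placements": placements,
            "connectivity": [p.connectivity for p in sm.patches],
            "uv": [p.uv_coords for p in sm.patches],
        })
    # Boundary stitching info tells the decoder how sub-mesh borders rejoin.
    stitching = [(i, i + 1) for i in range(len(sub_meshes) - 1)]
    return {"frames": frames, "side_info": side_info, "stitching": stitching}
```

A decoder would walk the same structures in reverse: decode the frames, cut patches back out using the placements, and rejoin sub-mesh borders using the stitching pairs.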
LOOP FILTER BLOCK FLEXIBLE PARTITIONING
A method of loop filtering in a video coding process comprises receiving image data; analyzing the image data; flexibly partitioning the image data into loop filtering blocks (LFBs) to allow LFBs in at least one of a first row and a first column of a frame to be smaller than other LFBs within the same frame; and applying a loop filter to the LFBs.
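The flexible partitioning idea above can be sketched as a grid split in which any remainder along an axis is assigned to the first row/column rather than the last, so only those edge LFBs may be smaller than the base size. This is an assumed interpretation for illustration; the function names and the specific remainder policy are not from the patent.

```python
def partition_axis(length, base):
    """Split one axis into LFB sizes; any remainder becomes the FIRST block,
    so only the first row/column may be smaller than the base LFB size."""
    remainder = length % base
    sizes = [remainder] if remainder else []
    sizes += [base] * (length // base)
    return sizes

def partition_frame(width, height, lfb_size):
    """Return (x, y, w, h) rectangles tiling the whole frame."""
    col_sizes = partition_axis(width, lfb_size)
    row_sizes = partition_axis(height, lfb_size)
    blocks, y = [], 0
    for h in row_sizes:
        x = 0
        for w in col_sizes:
            blocks.append((x, y, w, h))
            x += w
        y += h
    return blocks
```

For a 1920x1080 frame with 128-sample LFBs, the width divides evenly but the height leaves a 56-sample remainder, so the first row of LFBs is 128x56 while all others are full 128x128 blocks.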
Method and apparatus for video coding
An apparatus includes processing circuitry configured to decode coding information of a current block from a coded video bitstream. The coding information can indicate that a first prediction mode of the current block is one of a plurality of screen content coding (SCC) tools. The processing circuitry can determine whether at least one loop filter associated with the current block is disabled based on at least one of the first prediction mode of the current block and a first quantization parameter (QP) of the current block. In response to the at least one loop filter being determined as disabled, the processing circuitry can reconstruct the current block without the at least one loop filter.
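The decision rule above — disable loop filtering based on the prediction mode (an SCC tool) and/or the QP — can be sketched as below. The set of SCC tool names and the QP threshold are illustrative assumptions, not values from the patent or any specific standard.

```python
# Illustrative SCC tool names (e.g., intra block copy, palette mode).
SCC_TOOLS = {"IBC", "PALETTE", "BDPCM", "ACT"}
LOSSLESS_QP = 4  # assumed threshold indicating (near-)lossless coding

def loop_filters_disabled(pred_mode, qp):
    """Disable loop filters when the block uses an SCC tool, or when its
    QP indicates near-lossless coding (either condition suffices here)."""
    return pred_mode in SCC_TOOLS or qp <= LOSSLESS_QP

def reconstruct_block(samples, pred_mode, qp, loop_filter):
    """Apply the loop filter only when it is not disabled for this block."""
    if loop_filters_disabled(pred_mode, qp):
        return samples  # reconstruct without the loop filter
    return [loop_filter(s) for s in samples]
```

The rationale is that SCC content (text, graphics) and near-lossless blocks tend to have sharp edges that loop filters would smear rather than improve.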
Relationship modeling of encode quality and encode parameters based on source attributes
A source quality of a source video and a source content complexity of the source video are identified. Parameter constraints with respect to parameters of an operation are received. The source quality, the source content complexity, and the parameter constraints are applied to a deep neural network (DNN), producing DNN outputs. The DNN outputs are combined using domain knowledge to provide the filter parameters, as predicted, to a filter chain, such that applying the filter chain to the input source video results in an output video achieving the full reference video quality score.
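The pipeline above — source attributes plus parameter constraints in, DNN outputs combined with domain knowledge out — can be sketched as follows. This is a toy stand-in: the two-layer network uses random weights (a real system would train them on source-attribute/quality pairs), and "domain knowledge" is reduced to clamping the raw outputs into the given per-parameter bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer MLP standing in for the DNN; weights are untrained here.
W1, b1 = rng.normal(size=(10, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 4)), np.zeros(4)

def dnn_forward(features):
    h = np.tanh(features @ W1 + b1)
    return h @ W2 + b2  # raw DNN outputs

def predict_filter_params(source_quality, content_complexity, constraints):
    """constraints: four (lo, hi) bounds, one per filter parameter.
    'Domain knowledge' here is simply clamping raw outputs into bounds."""
    feats = np.concatenate([[source_quality, content_complexity],
                            np.asarray(constraints).ravel()])
    raw = dnn_forward(feats)
    lo = np.array([c[0] for c in constraints])
    hi = np.array([c[1] for c in constraints])
    return np.clip(raw, lo, hi)
```

The clamped outputs would then drive a filter chain whose result is scored with a full-reference quality metric against the source.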
ADAPTIVE UP-SAMPLING FILTER FOR LUMA AND CHROMA WITH REFERENCE PICTURE RESAMPLING (RPR)
Coded information indicates that an adaptive up-sampling filter is applied to a current block in a current picture. A respective class for each of a plurality of subblocks of the current block is determined. A respective filter coefficient set is determined for each of the plurality of subblocks from a plurality of filter coefficient sets of the adaptive up-sampling filter. The respective filter coefficient set is determined based on at least one class corresponding to the respective subblock and a respective sampling rate of reference picture resampling (RPR) applied on a reference picture of the current picture. The respective sampling rate is associated with one of a plurality of phases of the RPR. The adaptive up-sampling filter is applied to the current block to generate filtered reconstructed samples of the current block based on the determined respective filter coefficient sets without applying a secondary adaptive loop filter (ALF).
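The per-subblock selection above — classify each subblock, then pick a coefficient set indexed by (class, RPR phase) — can be sketched as below. Everything here is an assumption for illustration: the gradient-based classifier loosely mimics ALF-style classification, the coefficient table is random rather than signaled, and a real codec would apply proper 2-D separable filtering instead of this 1-D convolution over flattened samples.

```python
import numpy as np

N_CLASSES, N_PHASES, TAPS = 4, 4, 3

# Hypothetical coefficient table: one 3-tap set per (class, RPR phase).
rng = np.random.default_rng(1)
coeff_table = rng.normal(size=(N_CLASSES, N_PHASES, TAPS))

def classify_subblock(subblock):
    """Very rough ALF-style classification from gradient activity/direction."""
    gx = np.abs(np.diff(subblock, axis=1)).sum()
    gy = np.abs(np.diff(subblock, axis=0)).sum()
    activity = 0 if gx + gy < subblock.size else 1
    direction = 0 if gx >= gy else 1
    return 2 * activity + direction  # class index in [0, 3]

def filter_block(block, phase, sub=4):
    """Filter each sub x sub subblock with the coefficient set chosen by
    (subblock class, RPR phase); no secondary ALF stage is applied after."""
    out = block.astype(float).copy()
    for y in range(0, block.shape[0], sub):
        for x in range(0, block.shape[1], sub):
            sb = block[y:y + sub, x:x + sub]
            c = coeff_table[classify_subblock(sb), phase]
            out[y:y + sub, x:x + sub] = np.convolve(
                sb.ravel().astype(float), c, mode="same").reshape(sb.shape)
    return out
```

Indexing the table by RPR phase lets each fractional sample position get its own interpolation response, which is why a separate secondary ALF pass becomes unnecessary in this design.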