Patent classifications
H04N19/137
NON-INTERLEAVED SEPARATE TREE
Aspects of the disclosure provide methods and apparatuses for video encoding/decoding. In some examples, an apparatus for video decoding includes processing circuitry. The processing circuitry determines that a non-interleaved separate tree structure is used for coding different color components of coding tree units (CTUs) in a bitstream. The processing circuitry decodes a first color component of a plurality of CTUs from a first portion of the bitstream, and decodes a second color component of the plurality of CTUs from a second portion of the bitstream, wherein the second portion is located after the first portion in the bitstream.
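The decoding order described in the abstract can be illustrated with a small sketch. The function name and dictionary layout below are illustrative assumptions, not the patented parsing logic: the point is only that one color component for all CTUs is read from the first bitstream portion before any data of the second component, which sits in a later portion.

```python
# Hypothetical sketch of the non-interleaved decoding order: all first-component
# (e.g. luma) CTU data is read from the first bitstream portion, then all
# second-component (e.g. chroma) CTU data from the portion that follows it.
def decode_non_interleaved(portions, num_ctus):
    """portions: {'first': [...], 'second': [...]} -- placeholder per-CTU payloads."""
    first = [portions['first'][i] for i in range(num_ctus)]    # first portion
    second = [portions['second'][i] for i in range(num_ctus)]  # later portion
    return list(zip(first, second))  # per-CTU (component1, component2) pairs
```

This contrasts with an interleaved single-tree order, where both components of CTU i would be parsed before any data of CTU i+1.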
System and method for compressing streaming interactive video
A computer-implemented method is provided. The method includes executing a video game on a server unit and said server unit producing uncompressed interactive video. The method includes processing the uncompressed interactive video at a compression unit associated with the server unit. The compression unit outputs compressed interactive video, and the server unit and the compression unit are located at a data center. The method includes streaming the compressed interactive video over a packetized network from the data center to one or more client devices associated with one or more users. Each client device is located geographically remote from the data center, and the server is configured to receive input to drive gameplay of the video game by said one or more client devices. The compressed interactive video is configured for decompression and presentation at said one or more client devices. The method includes receiving, by the server, updates from said one or more client devices regarding a quality of said uncompressed interactive video that is received from said streaming. The method includes adjusting automatically, by the compression unit, a rate of compression provided to one or more of said client devices based on said updates received regarding the quality of said uncompressed interactive video for the video game.
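The feedback loop in the method above can be sketched as follows. The class name, quality thresholds, and step size are illustrative assumptions, not the patented control law: clients report the quality of the video they receive, and the compression unit moves its rate of compression up or down in response.

```python
class CompressionUnit:
    """Illustrative server-side compression controller (hypothetical thresholds)."""

    def __init__(self, rate=0.5):
        self.rate = rate  # fraction of compression applied, clamped to [0.1, 1.0]

    def adjust(self, reported_quality):
        """reported_quality in [0, 1] comes from a client's quality update."""
        if reported_quality < 0.5:
            # Poor received quality suggests a constrained link: compress harder.
            self.rate = min(1.0, self.rate + 0.1)
        elif reported_quality > 0.8:
            # Comfortable headroom: back off and spend more bits on fidelity.
            self.rate = max(0.1, self.rate - 0.1)
```

A per-client instance of such a controller would realize the claimed automatic adjustment of the compression rate "provided to one or more of said client devices".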
VIDEO DECODING IMPLEMENTATIONS FOR A GRAPHICS PROCESSING UNIT
Video decoding innovations for multithreading implementations and graphics processing unit (“GPU”) implementations are described. For example, for multithreaded decoding, a decoder uses innovations in the areas of layered data structures, picture extent discovery, a picture command queue, and/or task scheduling for multithreading. Or, for a GPU implementation, a decoder uses innovations in the areas of inverse transforms, inverse quantization, fractional interpolation, intra prediction using waves, loop filtering using waves, memory usage and/or performance-adaptive loop filtering. Innovations are also described in the areas of error handling and recovery, determination of neighbor availability for operations such as context modeling and intra prediction, CABAC decoding, computation of collocated information for direct mode macroblocks in B slices, reduction of memory consumption, implementation of trick play modes, and picture dropping for quality adjustment.
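"Intra prediction using waves" and "loop filtering using waves" refer to wavefront-style parallel processing: a block at (x, y) depends on its left and top neighbors, so all blocks on the same anti-diagonal are mutually independent and can be processed concurrently. A minimal sketch of such a schedule follows; the function name and grid model are assumptions for illustration, not the patent's implementation.

```python
def wavefront_order(cols, rows):
    """Group block coordinates (x, y) into anti-diagonal waves.

    Block (x, y) depends on (x - 1, y) and (x, y - 1), so every block in a
    wave is ready once the previous wave completes; blocks within one wave
    can be dispatched to GPU shader cores or worker threads in parallel.
    """
    for wave in range(cols + rows - 1):
        yield [(x, wave - x) for x in range(cols) if 0 <= wave - x < rows]
```

For a 2x2 grid this yields three waves of sizes 1, 2, 1; wider grids expose proportionally more parallelism in the middle waves.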
Method and Apparatus Using Affine Non-Adjacent Candidates for Video Coding
Methods and apparatus for video coding using non-adjacent affine candidates are provided. According to this method, one or more neighboring MVs (motion vectors) are determined from one or more non-adjacent affine-coded neighbors of the current block. CPMVs (Control-Point Motion Vectors) are determined based on said one or more neighboring MVs, wherein if a target neighboring block associated with one target neighboring MV is outside an available region, a derived CPMV is generated to replace the target neighboring MV. An affine merge list or an affine AMVP (Advanced Motion Vector Prediction) list having said one or more neighboring MVs as one non-adjacent affine candidate is generated, wherein said one non-adjacent affine candidate generates a non-adjacent affine predictor using motion information according to the CPMVs. The current block is encoded or decoded using a motion candidate selected from the affine merge list or the affine AMVP list.
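As background for the abstract above: the CPMVs drive a block-level affine motion field. The standard six-parameter affine model (as used in VVC; this is general background, not the claimed candidate-derivation logic) interpolates the MV at any position inside a w-by-h block from control-point MVs at the top-left, top-right, and bottom-left corners.

```python
def affine_mv(cpmv0, cpmv1, cpmv2, w, h, x, y):
    """Six-parameter affine model: MV at position (x, y) inside a w-by-h block.

    cpmv0, cpmv1, cpmv2 are the control-point MVs at the top-left, top-right,
    and bottom-left corners, each an (mvx, mvy) pair.
    """
    dx = ((cpmv1[0] - cpmv0[0]) / w, (cpmv1[1] - cpmv0[1]) / w)  # horizontal gradient
    dy = ((cpmv2[0] - cpmv0[0]) / h, (cpmv2[1] - cpmv0[1]) / h)  # vertical gradient
    return (cpmv0[0] + dx[0] * x + dy[0] * y,
            cpmv0[1] + dx[1] * x + dy[1] * y)
```

Evaluating the model at a corner reproduces that corner's CPMV, which is why replacing an unavailable neighboring MV with a derived CPMV keeps the motion field consistent.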
Motion compensation boundary filtering
At least a method and an apparatus are presented for efficiently encoding or decoding video. For example, a prediction block for a current block is obtained. A reconstructed neighboring block of the prediction block is obtained. Filtering is performed on a boundary between the prediction block and the reconstructed neighboring block. At the encoder side, the prediction residual is obtained as the difference between the filtered prediction block and the current block, and then encoded. At the decoder side, the prediction residual is added to the filtered prediction block to reconstruct the current block.
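The encoder/decoder symmetry described above can be sketched on a single row of samples. The (3*pred + neighbor)/4 filter tap and the variable names are illustrative assumptions; the abstract does not fix a particular filter. Because both sides apply the same boundary filter to the prediction, adding the residual back reproduces the current block exactly.

```python
def filter_boundary(pred_row, neighbor_row):
    """Smooth the prediction row that borders the reconstructed neighbor
    with an illustrative (3 * pred + neighbor) // 4 weighting."""
    return [(3 * p + n) // 4 for p, n in zip(pred_row, neighbor_row)]

# Encoder side: residual is taken against the *filtered* prediction.
pred, neighbor, current = [10, 10, 10], [2, 6, 10], [9, 10, 11]
filtered = filter_boundary(pred, neighbor)             # [8, 9, 10]
residual = [c - f for c, f in zip(current, filtered)]  # encoded and transmitted

# Decoder side: apply the same filter, then add the residual back.
recon = [f + r for f, r in zip(filter_boundary(pred, neighbor), residual)]
```

Here `recon` equals `current`, showing the round trip is exact before any residual quantization.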