Patent classifications
H04N19/139
INTER PREDICTION METHOD BASED ON VARIABLE COEFFICIENT DEEP LEARNING
An inter prediction method allows a variable-coefficient deep learning model to adaptively learn the characteristics of a video, transmits the variable-coefficient model parameters generated by that learning from an image encoding device to an image decoding device, and refers to a virtual reference frame generated by the variable-coefficient deep learning model.
Video Encoding or Decoding Methods and Apparatuses with Scaling Ratio Constraint
Video processing methods and apparatuses for processing a current block in a current picture by reference picture resampling include receiving input data of the current block and determining a scaling window of the current picture and a scaling window of a reference picture. The current picture and reference picture may have different scaling window sizes. A ratio between a scaling window width, height, or size of the current picture and a scaling window width, height, or size of the reference picture is constrained to be within a ratio constraint. A reference block is generated from the reference picture according to the ratio, and used to encode or decode the current block.
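The ratio constraint described above can be sketched as a simple bounds check. The bound values and the checking of width, height, and area with the same bounds are illustrative assumptions, not the normative limits of any particular codec specification:

```python
def within_ratio_constraint(cur_win, ref_win, min_ratio=0.125, max_ratio=2.0):
    """Check that the reference/current scaling-window ratios lie inside an
    allowed range.  min_ratio and max_ratio are illustrative placeholders."""
    cur_w, cur_h = cur_win
    ref_w, ref_h = ref_win
    # Check the width ratio, the height ratio, and the size (area) ratio.
    pairs = ((cur_w, ref_w), (cur_h, ref_h), (cur_w * cur_h, ref_w * ref_h))
    for cur, ref in pairs:
        ratio = ref / cur
        if not (min_ratio <= ratio <= max_ratio):
            return False
    return True
```

A conforming encoder would only select reference pictures whose scaling windows pass such a check before generating the reference block.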
Encoding Device and Method for Video Analysis and Composition
An encoding device for video analysis and composition includes circuitry configured to receive an input video having a first data volume, determine at least a region of interest of the input video, and encode at least an output video as a function of the input video and the at least a region of interest, wherein the at least an output video has at least a second data volume, and the at least a second data volume is less than the first data volume.
Simplified processing of weighted prediction syntax and semantics using a bit depth variable for high precision data
Particular embodiments may remove a condition check in the semantics for checking a high-precision data flag. This simplifies the semantics used in the encoding and decoding process. In this case, even if the high-precision data flag is not set, the value of the weighted prediction syntax element is set by the BitDepth variable. However, even when BitDepth does not indicate high-precision data, such as 8 bits, the range for the weighted prediction syntax element remains the same as the fixed value. For example, the syntax elements luma_offset_l0[i], luma_offset_l1[i], delta_chroma_offset_l0[i][j], and delta_chroma_offset_l1[i][j] use the variable BitDepth as described above whether or not the flag extended_precision_processing_flag, which indicates whether the bit depth is above a threshold, is enabled.
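The simplification above derives the offset range from BitDepth unconditionally, so that an 8-bit depth reproduces the conventional fixed range. A minimal sketch, assuming an HEVC-style symmetric range expression (the exact normative range is codec-specific):

```python
def wp_offset_range(bit_depth):
    """Range of weighted-prediction offset syntax elements such as
    luma_offset_l0[i], derived from BitDepth regardless of
    extended_precision_processing_flag (per the described simplification)."""
    half = 1 << (bit_depth - 1)
    return -half, half - 1
```

For BitDepth equal to 8, this yields the familiar fixed range of -128 to 127, which is why dropping the flag check does not change the 8-bit behavior.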
Image coding method, image coding apparatus, image decoding method, image decoding apparatus, and image coding and decoding apparatus
A moving picture decoding apparatus, method, and medium for decoding a current block are provided. A first candidate is derived from a first motion vector that has been used to decode a first block. The first block is adjacent to the current block. A first index identifying a reference picture to be selected for decoding the current block is decoded. A second candidate having a second motion vector that includes a non-zero value is derived. The non-zero value is assigned to the reference picture. A selected candidate is selected from a plurality of candidates, including the first candidate and the second candidate. A second index identifying the selected candidate is decoded. The current block is decoded using the selected candidate. The second candidate includes the non-zero value assigned to the reference picture, with the reference picture being selected from a plurality of referable reference pictures.
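The candidate-list construction described above can be sketched as follows. The function and field names, and the preset non-zero motion vector, are hypothetical illustrations rather than terms from the patent:

```python
def build_merge_candidates(neighbor_mv, num_ref_pics, preset_mv=(1, 0)):
    """Sketch of the described candidate derivation.  'preset_mv' stands in
    for the second motion vector that includes a non-zero value."""
    # First candidate: reuse the motion vector of the adjacent, already
    # decoded block.
    candidates = [{"mv": neighbor_mv, "ref_idx": 0}]
    # Second-type candidates: the non-zero motion vector, assigned to each
    # reference picture selectable from the referable reference pictures.
    for ref_idx in range(num_ref_pics):
        candidates.append({"mv": preset_mv, "ref_idx": ref_idx})
    # A decoded index then selects one of these candidates for the block.
    return candidates
```

Padding the list with such additional candidates keeps the decoded candidate index meaningful even when few neighboring blocks supply usable motion vectors.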
Complexity reduction of overlapped block motion compensation
Overlapped block motion compensation (OBMC) may be performed for a current video block based on motion information associated with the current video block and motion information associated with one or more neighboring blocks of the current video block. Under certain conditions, some or all of these neighboring blocks may be omitted from the OBMC operation of the current block. For instance, a neighboring block may be skipped during the OBMC operation if the current video block and the neighboring block are both uni-directionally or both bi-directionally predicted, if the motion vectors associated with the current block and the neighboring block refer to a same reference picture, and if a sum of absolute differences between those motion vectors is smaller than a threshold value. Further, OBMC may be conducted in conjunction with regular motion compensation and may use simpler filters than those traditionally allowed.
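The three skip conditions above can be sketched as a single predicate. The data layout and threshold value are assumptions for illustration, not the patent's concrete encoding:

```python
def skip_neighbor_in_obmc(cur, nb, mv_sad_threshold=2):
    """Return True when the neighboring block 'nb' may be omitted from the
    OBMC operation of the current block 'cur'.  Each block is a dict with a
    prediction-direction flag 'bi', a reference index 'ref', and a motion
    vector 'mv' (x, y)."""
    # 1) Both blocks must be predicted the same way (both uni- or both bi-).
    if cur["bi"] != nb["bi"]:
        return False
    # 2) Their motion vectors must refer to the same reference picture.
    if cur["ref"] != nb["ref"]:
        return False
    # 3) The sum of absolute differences between the motion vectors must be
    #    below the threshold, i.e. the motion is nearly identical.
    sad = sum(abs(a - b) for a, b in zip(cur["mv"], nb["mv"]))
    return sad < mv_sad_threshold
```

When the predicate holds, blending in the neighbor's prediction would change the result only marginally, so skipping it reduces complexity with little quality cost.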