Patent classifications
H04N19/563
Method and apparatus for encoding/decoding a video signal, and a recording medium storing a bitstream
A video decoding method according to the present disclosure may include determining whether an affine motion model is applied to a current block, performing motion compensation for the current block according to whether the affine motion model is applied, determining values of a first variable and a second variable representing whether a prediction block obtained by the motion compensation will be refined, and determining a padding size of the prediction block.
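The decision logic described in the abstract above can be sketched as follows. The mapping of the two variables to PROF-style and BDOF-style refinement, and the 2-sample padding border, are illustrative assumptions for this sketch, not the claimed method:

```python
def prediction_padding_size(affine: bool, refine_first: bool, refine_second: bool) -> int:
    """Illustrative sketch: derive a prediction-block padding size from
    two refinement flags. The interpretation of the flags (e.g., optical
    -flow refinement for affine blocks vs. bi-directional optical flow
    otherwise) and the 2-sample border are assumptions."""
    if affine:
        refined = refine_first   # refinement applicable to affine blocks
    else:
        refined = refine_second  # refinement applicable to translational blocks
    return 2 if refined else 0   # extended border only when refinement applies
```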
Padding process in adaptive loop filtering
An example method of video processing includes determining, for a conversion between a current block of a video and a bitstream representation of the video, that one or more samples of the video outside the current block are unavailable for a coding process of the conversion. The coding process comprises an adaptive loop filter (ALF) coding process. The method also includes performing, based on the determining, the conversion by using padded samples for the one or more samples of the video. The padded samples are generated by checking for availability of samples in an order.
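The order-based padding described above can be sketched as a substitution that checks candidate neighbour positions in a fixed order and uses the first available one. The specific offset order and the zero fallback are assumptions for illustration, not the order claimed in the patent:

```python
def pad_sample(frame, x, y, is_available, order):
    """Sketch of availability-order padding for ALF: if sample (x, y) is
    unavailable, substitute the first available neighbour found by
    checking (dx, dy) offsets in the given order."""
    if is_available(x, y):
        return frame[y][x]
    for dx, dy in order:
        nx, ny = x + dx, y + dy
        in_bounds = 0 <= ny < len(frame) and 0 <= nx < len(frame[0])
        if in_bounds and is_available(nx, ny):
            return frame[ny][nx]
    return 0  # fallback value; a real codec would use, e.g., mid-grey
```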
HYBRID CUBEMAP PROJECTION FOR 360-DEGREE VIDEO CODING
A system, method, and/or instrumentality may be provided for coding a 360-degree video. A picture of the 360-degree video may be received. The picture may include one or more faces associated with one or more projection formats. A first projection format indication may be received that indicates a first projection format may be associated with a first face. A second projection format indication may be received that indicates a second projection format may be associated with a second face. Based on the first projection format, a first transform function associated with the first face may be determined. Based on the second projection format, a second transform function associated with the second face may be determined. At least one decoding process may be performed on the first face using the first transform function and/or at least one decoding process may be performed on the second face using the second transform function.
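Per-face transform selection as described above can be sketched with a lookup from signalled projection format to transform function. The format names (CMP, EAC) and mapping formulas follow common 360-degree video practice and are assumptions of this sketch, not the claimed signalling:

```python
import math

def cmp_transform(u):
    """Conventional cubemap (CMP): identity mapping on a face coordinate
    u in [-1, 1]."""
    return u

def eac_transform(u):
    """Equi-angular cubemap (EAC): arctangent-based mapping that
    equalises sampling density across the face."""
    return math.tan(u * math.pi / 4)

# Hypothetical registry of per-format transforms for this sketch.
TRANSFORMS = {"CMP": cmp_transform, "EAC": eac_transform}

def face_transform(projection_format: str):
    """Select the transform function indicated for a face."""
    return TRANSFORMS[projection_format]
```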
Apparatus for selecting an intra-prediction mode for padding
Video codec supporting temporal inter-prediction, configured to pad an area of a referenced portion of a reference picture which extends beyond a border of the reference picture, the referenced portion being referenced by an inter-predicted block of a current picture, by selecting one of a plurality of intra-prediction modes and padding the area using the selected intra-prediction mode.
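Mode selection for border padding can be sketched as a cost search: extrapolate the reference-picture border with each candidate mode, compare against a template of samples just inside the border, and keep the cheapest. The candidate set (DC and horizontal) and the SAD cost are assumptions for illustration:

```python
def select_padding_mode(border, template):
    """Sketch: choose the intra-like padding mode whose extrapolation of
    the border samples best matches a template inside the picture,
    measured by sum of absolute differences (SAD)."""
    candidates = {
        "DC": [sum(border) // len(border)] * len(template),  # flat fill
        "HOR": list(border[:len(template)]),                 # copy border row
    }
    def sad(pred):
        return sum(abs(p - t) for p, t in zip(pred, template))
    return min(candidates, key=lambda mode: sad(candidates[mode]))
```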
Method and Apparatus of Boundary Padding for VR Video Processing
A method and apparatus for video coding or processing for an image sequence corresponding to virtual reality (VR) video are disclosed. According to embodiments of the present invention, a padded area outside one cubic face frame boundary of one cubic face frame is padded to form a padded cubic face frame using one or more extended cubic faces, where at least one boundary cubic face in said one cubic face frame has one padded area using pixel data derived from one extended cubic face in a same cubic face frame.
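Padding a face with pixel data from another face of the same frame can be sketched as copying a strip from the geometrically adjacent face. Face orientation and rotation are ignored here for simplicity, so this is only an illustrative sketch of the idea:

```python
def pad_with_neighbor_face(face, neighbor, pad):
    """Sketch: extend a cube face to the right using the first `pad`
    columns of the geometrically adjacent face from the same cubic face
    frame. Any rotation needed to align the faces is omitted."""
    return [row + nrow[:pad] for row, nrow in zip(face, neighbor)]
```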
360-DEGREE VIDEO CODING USING GEOMETRY PROJECTION
Processing video data may include capturing the video data with multiple cameras and stitching the video data together to obtain a 360-degree video. A frame-packed picture may be provided based on the captured and stitched video data. A current sample location may be identified in the frame-packed picture. Whether a neighboring sample location is located outside of a content boundary of the frame-packed picture may be determined. When the neighboring sample location is located outside of the content boundary, a padding sample location may be derived based on at least one circular characteristic of the 360-degree video content and the projection geometry. The 360-degree video content may be processed based on the padding sample location.
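The derivation of a padding sample location from the circular characteristics of 360-degree content can be sketched for an assumed equirectangular (ERP) geometry: longitude wraps around horizontally, and crossing a pole reflects the vertical coordinate while shifting longitude by half the picture width. Other projection geometries would need their own derivations:

```python
def derive_padding_location(x, y, width, height):
    """Sketch for ERP: remap a neighbouring sample location that falls
    outside the content boundary using the sphere's circular
    characteristics (longitude wrap-around, reflection at the poles)."""
    if y < 0 or y >= height:                         # crossed a pole
        y = -y - 1 if y < 0 else 2 * height - 1 - y  # reflect latitude
        x += width // 2                              # half-turn in longitude
    x %= width                                       # wrap longitude
    return x, y
```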
METHOD AND DEVICE FOR IMAGE ENCODING AND DECODING, AND RECORDING MEDIUM HAVING BIT STREAM STORED THEREIN
Disclosed is a method of decoding an image and a method of encoding an image. The method of decoding an image includes: obtaining motion-constrained tile set information; determining, on the basis of the motion-constrained tile set information, a first boundary region of a collocated tile set within a reference picture, which corresponds to a motion-constrained tile set; padding a second boundary region corresponding to the first boundary region; and performing inter prediction on the motion-constrained tile set by using a collocated tile set that includes the padded second boundary region.
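Padding the boundary region of the collocated tile set can be sketched as edge replication, so that inter prediction for the motion-constrained tile set never reads samples outside it. Only horizontal replication is shown; a real codec would also pad vertically and fill the corners:

```python
def pad_tile_set_boundary(tile, pad):
    """Sketch: horizontally pad a collocated tile set by replicating its
    left and right edge samples `pad` times."""
    return [[row[0]] * pad + row + [row[-1]] * pad for row in tile]
```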