Patent classifications
H04N19/80
CROSS COMPONENT FILTERING-BASED IMAGE CODING APPARATUS AND METHOD
According to one embodiment of the present document, a cross component adaptive loop filtering (CCALF) process may be performed. The CCALF process can enhance the filtering performance for chroma components and improve the subjective/objective image quality of a picture.
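The cross-component idea in this abstract can be sketched as a small linear filter over co-located luma samples whose output corrects each chroma sample. This is a minimal illustration, not the VVC-specified CCALF: the filter shape, the coefficient scale (`// 64`), and the 4:2:0 co-location assumption are all assumptions.

```python
def ccalf_refine(luma, chroma, coeffs, offsets):
    """Refine chroma samples from co-located luma neighbours (sketch).

    coeffs/offsets define a high-pass filter on luma differences; the
    resulting correction is added to the chroma sample and clipped.
    """
    h, w = len(chroma), len(chroma[0])
    out = [row[:] for row in chroma]
    for y in range(h):
        for x in range(w):
            # 4:2:0 assumed: chroma (x, y) co-locates with luma (2x, 2y).
            ly, lx = 2 * y, 2 * x
            center = luma[ly][lx]
            delta = 0
            for (dy, dx), c in zip(offsets, coeffs):
                # Clamp neighbour coordinates at the frame border.
                ny = min(max(ly + dy, 0), len(luma) - 1)
                nx = min(max(lx + dx, 0), len(luma[0]) - 1)
                delta += c * (luma[ny][nx] - center)  # high-pass term
            # Illustrative fixed-point scale of 64, clip to 8-bit range.
            out[y][x] = min(max(chroma[y][x] + delta // 64, 0), 255)
    return out
```

Because the correction is built from luma *differences*, flat luma regions leave the chroma untouched, which matches the intuition that CCALF transfers luma edge detail into chroma.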
ENHANCING 360-DEGREE VIDEO USING CONVOLUTIONAL NEURAL NETWORK (CNN)-BASED FILTER
An example apparatus for enhancing video includes a decoder to decode a received 360-degree projection format video bitstream to generate a decoded 360-degree projection format video. The apparatus also includes a viewport generator to generate a viewport from the decoded 360-degree projection format video. The apparatus further includes a convolutional neural network (CNN)-based filter to remove an artifact from the viewport to generate an enhanced image. The apparatus further includes a displayer to send the enhanced image to a display.
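The decode → viewport → CNN filter → display chain above can be sketched as follows. The decoder and display are stubbed, and a 3x3 mean filter stands in for the trained artifact-removal CNN; all names here are illustrative, not from the patent.

```python
def generate_viewport(frame, x, y, w, h):
    """Crop a viewport window from a decoded projection-format frame."""
    return [row[x:x + w] for row in frame[y:y + h]]

def cnn_filter_stub(img):
    """3x3 mean filter standing in for a CNN-based artifact remover."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for yy in range(h):
        for xx in range(w):
            vals = [img[j][i]
                    for j in range(max(yy - 1, 0), min(yy + 2, h))
                    for i in range(max(xx - 1, 0), min(xx + 2, w))]
            out[yy][xx] = sum(vals) / len(vals)
    return out

def enhance_and_display(decoded_frame, viewport_rect, display):
    """Viewport generator -> filter -> displayer, as in the abstract."""
    vp = generate_viewport(decoded_frame, *viewport_rect)
    display(cnn_filter_stub(vp))
```

Filtering the viewport rather than the full projection-format frame keeps the filter's input in the geometry the viewer actually sees, which is the motivation the abstract implies.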
EFFICIENT ENCODING OF FILM GRAIN NOISE
One embodiment of the present invention sets forth a technique for encoding video frames. The technique includes performing one or more operations to generate a plurality of denoised video frames associated with a video sequence. The technique also includes determining a first set of motion vectors based on a first denoised frame included in the plurality of denoised video frames and a second denoised frame included in the plurality of denoised video frames, and determining a first residual between the second denoised frame and a prediction frame associated with the second denoised frame. The technique further includes performing one or more operations to generate an encoded video frame associated with the second denoised frame based on the first set of motion vectors, the first residual, and a first frame that is included in the video sequence and corresponds to the first denoised frame.
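The pipeline in this abstract can be sketched on 1-D "frames" for brevity: denoise, estimate motion between the two denoised frames, form the residual against the motion-compensated prediction, and bundle it with the original grainy reference. The 3-tap denoiser, the global-shift motion model, and all function names are illustrative simplifications of the claimed technique.

```python
def denoise(frame):
    """3-tap moving average as a stand-in for a real film-grain denoiser."""
    n = len(frame)
    return [sum(frame[max(i - 1, 0):min(i + 2, n)]) /
            len(frame[max(i - 1, 0):min(i + 2, n)]) for i in range(n)]

def estimate_motion(ref, cur, search=2):
    """Global shift minimizing the sum of absolute differences (SAD)."""
    best_shift, best_sad = 0, float('inf')
    for s in range(-search, search + 1):
        sad = sum(abs(cur[i] - ref[i - s])
                  for i in range(len(cur)) if 0 <= i - s < len(ref))
        if sad < best_sad:
            best_shift, best_sad = s, sad
    return best_shift

def encode_pair(frame0, frame1):
    """Encode frame1 from denoised frames plus the original reference."""
    d0, d1 = denoise(frame0), denoise(frame1)
    mv = estimate_motion(d0, d1)
    prediction = [d0[i - mv] if 0 <= i - mv < len(d0) else 0
                  for i in range(len(d1))]
    residual = [a - b for a, b in zip(d1, prediction)]
    # The encoded frame carries the motion vector, the residual, and the
    # original (grainy) reference frame, so grain is not spent as residual
    # bits but can be modelled from the original.
    return {'mv': mv, 'residual': residual, 'reference': frame0}
```

The point of estimating motion on the *denoised* frames is that random grain does not match between frames, so denoising first yields cleaner motion vectors and smaller residuals.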
ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD
For a location displaced by four samples in a vertical direction or a horizontal direction from a current location, the encoder performs a first determination of determining only whether the location displaced by four samples is a TU boundary, where the current location is a sample location of a current sub-block boundary on which the determination process is to be performed. In the first determination, when it is determined that the location displaced by four samples is a TU boundary, the encoder sets a maximum filter length to a first value; otherwise, the encoder performs a second determination of determining whether a location displaced by eight samples in the vertical direction or the horizontal direction from the current location is a TU boundary.
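The two-stage boundary check above is concrete enough to sketch directly: test the location 4 samples away first, and only if it is not a TU boundary, test the location 8 samples away. The specific length values below are assumptions; the abstract only calls the first of them "a first value".

```python
LEN_IF_TU_AT_4 = 3   # the "first value" in the abstract; number is assumed
LEN_IF_TU_AT_8 = 5   # assumed value for the second-determination case
LEN_OTHERWISE = 7    # assumed value when neither check finds a boundary

def max_filter_length(pos, is_tu_boundary):
    """Derive the maximum deblocking filter length at sub-block
    boundary position `pos` (sketch of the two-stage determination)."""
    if is_tu_boundary(pos + 4):      # first determination: 4 samples away
        return LEN_IF_TU_AT_4
    if is_tu_boundary(pos + 8):      # second determination: 8 samples away
        return LEN_IF_TU_AT_8
    return LEN_OTHERWISE
```

Ordering the checks this way means the 8-sample test is skipped entirely whenever a TU boundary sits 4 samples away, which is the efficiency the claim is driving at.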
Method and apparatus for processing video signal
A method for decoding a video according to the present invention may comprise: deriving a spatial merge candidate for a current block, generating a merge candidate list for the current block based on the spatial merge candidate, obtaining motion information for the current block based on the merge candidate list, and performing motion compensation for the current block using the motion information. Herein, if the current block does not have a pre-defined shape or does not have a size equal to or greater than a pre-defined size, the spatial merge candidate of the current block is derived based on a block having the pre-defined shape or having a size equal to or greater than the pre-defined size, the block comprising the current block.
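The fallback rule in this abstract can be sketched as choosing which block's neighbours supply the spatial merge candidates. The square-shape assumption, the value of `MIN_SIZE`, and the grid-alignment rule for finding the containing block are all illustrative readings of the claim, not specified by it.

```python
MIN_SIZE = 8  # assumed "pre-defined size"; assumed pre-defined shape: square

def merge_base_block(x, y, w, h):
    """Return (x, y, w, h) of the block whose spatial neighbours are used
    to derive merge candidates for the current block (sketch)."""
    if w == h and w >= MIN_SIZE:
        # Block already has the pre-defined shape and size: use it directly.
        return (x, y, w, h)
    # Otherwise derive candidates from the MIN_SIZE-aligned block that
    # contains the current block.
    return (x - x % MIN_SIZE, y - y % MIN_SIZE, MIN_SIZE, MIN_SIZE)
```

Sharing one base block across all the small or irregular partitions inside it lets those partitions reuse a single set of spatial merge candidates, which simplifies candidate derivation.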
Method and device for filtering
Disclosed herein are a video decoding method and apparatus and a video encoding method and apparatus, and more particularly, a method and apparatus for performing filtering in video encoding and decoding. An encoding apparatus may perform filtering on a target, and may generate filtering information indicating whether filtering has been performed on the target. Further, the encoding apparatus may generate a bitstream including filtering information. A decoding apparatus may determine, based on filtering information, whether to perform filtering on a target, and may perform filtering on the target. The decoding apparatus may receive filtering information from the encoding apparatus through a bitstream or may derive filtering information using additional information.
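The signalling scheme described here can be sketched as a one-bit flag path plus a derivation path. The bitstream model and the QP-based derivation rule are purely illustrative stand-ins for "additional information".

```python
def write_filtering_flag(bitstream, filtered):
    """Encoder side: signal whether filtering was performed on the target."""
    bitstream.append(1 if filtered else 0)

def read_filtering_flag(bitstream, pos, extra=None):
    """Decoder side: read the flag if present, else derive it."""
    if pos < len(bitstream):
        return bitstream[pos] == 1          # explicit signalling path
    # Flag not signalled: derive from additional information. The
    # QP threshold below is an invented illustrative rule.
    return bool(extra) and extra.get('qp', 0) > 30

# Usage: the encoder writes the flag, the decoder reads it back.
bits = []
write_filtering_flag(bits, True)
```

The derivation path is what lets the encoder omit the flag for targets where both sides can reach the same decision from information they already share.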