Patent classifications
H04N19/583
COMPLEXITY REDUCTION OF OVERLAPPED BLOCK MOTION COMPENSATION
Overlapped block motion compensation (OBMC) may be performed for a current video block based on motion information associated with the current video block and motion information associated with one or more neighboring blocks of the current video block. Under certain conditions, some or all of these neighboring blocks may be omitted from the OBMC operation of the current block. For instance, a neighboring block may be skipped during the OBMC operation if the current video block and the neighboring block are both uni-directionally or bi-directionally predicted, if the motion vectors associated with the current block and the neighboring block refer to a same reference picture, and if a sum of absolute differences between those motion vectors is smaller than a threshold value. Further, OBMC may be conducted in conjunction with regular motion compensation and may use simpler filters than those traditionally allowed.
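The skipping conditions described in this abstract can be sketched as follows; the function name, parameter layout, and the quarter-pel threshold value are illustrative assumptions, not taken from the patent:

```python
def can_skip_neighbor(cur_mvs, nbr_mvs, cur_refs, nbr_refs, threshold=1):
    """Decide whether a neighboring block may be skipped during OBMC.

    cur_mvs / nbr_mvs: lists of (mvx, mvy) motion vectors, one per
    prediction direction; cur_refs / nbr_refs: reference-picture indices.
    """
    # Both blocks must use the same prediction type (uni- or bi-directional).
    if len(cur_mvs) != len(nbr_mvs):
        return False
    for (c_mv, c_ref), (n_mv, n_ref) in zip(
            zip(cur_mvs, cur_refs), zip(nbr_mvs, nbr_refs)):
        # The motion vectors must refer to the same reference picture...
        if c_ref != n_ref:
            return False
        # ...and their sum of absolute differences must fall below a threshold.
        sad = abs(c_mv[0] - n_mv[0]) + abs(c_mv[1] - n_mv[1])
        if sad >= threshold:
            return False
    return True
```

When all three conditions hold, the neighbor contributes (nearly) the same prediction as the current block's own motion, so blending it in adds complexity without benefit.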
MAPPING-AWARE CODING TOOLS FOR 360 DEGREE VIDEOS
Mapping-aware coding tools for 360 degree videos adapt conventional video coding tools for 360 degree video data using parameters related to a spherical projection of the 360 degree video data. The mapping-aware coding tools perform motion vector mapping techniques, adaptive motion search pattern techniques, adaptive interpolation filter selection techniques, and adaptive block partitioning techniques. Motion vector mapping includes calculating a motion vector for a pixel of a current block by mapping the location of the pixel within a two-dimensional plane (e.g., video frame) onto a sphere and mapping a predicted location of the pixel on the sphere determined based on rotation parameters back onto the plane. Adaptive motion searching, adaptive interpolation filter selection, and adaptive block partitioning operate according to density distortion based on locations along the sphere. These mapping-aware coding tools contemplate changes to video information by the mapping of 360 degree video data into a conventional video format.
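The motion vector mapping step (plane to sphere, rotate, sphere back to plane) can be sketched for an equirectangular projection; the projection choice and the treatment of the rotation as simple longitude/latitude offsets are simplifying assumptions:

```python
import math

def plane_to_sphere(x, y, w, h):
    # Equirectangular mapping: column -> longitude, row -> latitude.
    lon = (x / w) * 2 * math.pi - math.pi
    lat = math.pi / 2 - (y / h) * math.pi
    return lon, lat

def sphere_to_plane(lon, lat, w, h):
    x = (lon + math.pi) / (2 * math.pi) * w
    y = (math.pi / 2 - lat) / math.pi * h
    return x, y

def mapped_motion_vector(x, y, w, h, d_lon, d_lat):
    """Motion vector for pixel (x, y) implied by a rotation on the sphere."""
    lon, lat = plane_to_sphere(x, y, w, h)
    px, py = sphere_to_plane(lon + d_lon, lat + d_lat, w, h)
    return px - x, py - y
```

Because the same angular rotation moves pixels by different amounts depending on where they land on the sphere, the resulting per-pixel motion vectors vary across the block, which is exactly what the mapping-aware tools exploit.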
Moving picture coding device, moving picture coding method, and moving picture coding program, and moving picture decoding device, moving picture decoding method, and moving picture decoding program
A merging motion information calculating unit calculates the motion information of a plurality of coded neighboring blocks, located at predetermined positions spatially neighboring a coding target block, as spatial motion information candidates of the coding target block. When two or more of these spatial candidates have the same motion information, only one of them is retained as a spatial motion information candidate. The unit also calculates a temporal motion information candidate of the coding target block by using the motion information of a coded block included in a picture that differs in time from the picture including the coding target block, and includes the spatial motion information candidates and the temporal motion information candidate in the candidates for the motion information.
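The candidate-list construction described above can be sketched as follows; the maximum candidate count, the candidate ordering, and the tuple encoding of motion information are illustrative assumptions:

```python
def build_merge_candidates(spatial, temporal, max_candidates=5):
    """Build a motion-information candidate list for a coding target block.

    spatial: motion information of coded neighboring blocks at predetermined
    positions, in scan order; temporal: a candidate derived from a coded block
    in a temporally different picture. Each item is a hashable tuple such as
    (mvx, mvy, ref_idx).
    """
    candidates = []
    # Keep only one of any spatial candidates sharing the same motion info.
    for info in spatial:
        if info not in candidates:
            candidates.append(info)
    # Append the temporal candidate after the pruned spatial candidates.
    if temporal is not None and temporal not in candidates:
        candidates.append(temporal)
    return candidates[:max_candidates]
```

Pruning duplicate spatial candidates before adding the temporal one keeps list positions meaningful, so fewer bits are spent signaling which candidate the encoder chose.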
Adaptive Overlapped Block Prediction in Variable Block Size Video Coding
Encoding frames of a video stream may include encoding a current block of a current frame, generating a base prediction block for the current block based on current prediction parameters associated with the current block, identifying adjacent prediction parameters used for encoding previously encoded adjacent blocks that are adjacent to the current block. At least one side of the current block is adjacent to two or more of the previously encoded adjacent blocks. The encoding may include determining overlap regions in the current block, each of the overlap regions corresponding to a respective previously encoded adjacent block, generating an overlapped prediction of pixel values for each of the overlap regions according to a weighted function of the base prediction and a prediction based on the adjacent prediction parameters. The weighted function may be based on a difference between the current prediction parameters and the adjacent prediction parameters.
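The weighted blending for one pixel of an overlap region can be sketched as below; the specific falloff of the weight with motion-vector difference (1 / (1 + |dmv|)) and the 0.5 weight cap are illustrative assumptions, since the abstract only states that the weighting depends on the parameter difference:

```python
def overlapped_pixel(base_pred, adj_pred, cur_mv, adj_mv, max_weight=0.5):
    """Blend the base prediction with the adjacent-parameter prediction.

    base_pred / adj_pred: predicted values for one pixel in the overlap
    region; cur_mv / adj_mv: (mvx, mvy) motion vectors standing in for the
    current and adjacent prediction parameters.
    """
    # Dissimilarity between the two blocks' prediction parameters.
    dmv = abs(cur_mv[0] - adj_mv[0]) + abs(cur_mv[1] - adj_mv[1])
    # The adjacent prediction's weight decays as the parameters diverge.
    w_adj = max_weight / (1.0 + dmv)
    return (1.0 - w_adj) * base_pred + w_adj * adj_pred
```

With identical parameters the two predictions are averaged equally; as the parameters diverge, the overlapped prediction falls back toward the block's own base prediction.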
Adaptive Overlapped Block Prediction in Variable Block Size Video Coding
Decoding a current block of an encoded video stream may include generating a base prediction block for the current block based on current prediction parameters associated with the current block, identifying adjacent prediction parameters used for decoding a previously decoded adjacent block that is adjacent to the current block, and determining an overlap region within the current block and adjacent to the adjacent block. The overlap region has a size being determined as a function of a difference between the first prediction parameters and the adjacent prediction parameters. For each pixel within the overlap region, an overlapped prediction of a pixel value may be generated as a function of the base prediction and a prediction based on the adjacent prediction parameters.
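The size selection for the overlap region can be sketched as follows; the direction of the dependence (shrinking the region as parameters diverge), the halving rule, and the minimum size are all illustrative assumptions, since the abstract states only that the size is a function of the parameter difference:

```python
def overlap_region_size(cur_mv, adj_mv, block_size, min_size=2):
    """Pick the overlap-region extent (in pixels) next to an adjacent block.

    cur_mv / adj_mv: (mvx, mvy) motion vectors standing in for the current
    and adjacent prediction parameters; block_size: the current block's
    dimension along the shared edge.
    """
    dmv = abs(cur_mv[0] - adj_mv[0]) + abs(cur_mv[1] - adj_mv[1])
    size = block_size // 2            # start from half the block dimension
    while dmv > 0 and size > min_size:
        size //= 2                    # adjust the region as parameters diverge
        dmv //= 2
    return size
```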
Interaction between IBC and ATMVP
Devices, systems and methods for applying intra-block copy (IBC) in video coding are described. In general, methods for integrating IBC with existing motion compensation algorithms for video encoding and decoding are described. In a representative aspect, a method for video encoding using IBC includes determining whether a current block of the current picture is to be encoded using a motion compensation algorithm, and encoding, based on the determining, the current block by selectively applying an intra-block copy to the current block. In a representative aspect, another method for video encoding using IBC includes determining whether a current block of the current picture is to be encoded using an intra-block copy, and encoding, based on the determining, the current block by selectively applying a motion compensation algorithm to the current block.
Reduced complexity video filtering using stepped overlapped transforms
Pseudorandom overlapped block processing may be provided. First, a first temporal sequence of video frames corrupted with noise may be received. Next, matched frames may be produced by frame matching video frames of the first temporal sequence according to a first stage of processing. Then the matched frames may be denoised according to a second stage of processing. The second stage of processing may commence responsive to completion of the first stage of processing for all the video frames of the first temporal sequence. The second stage of processing may comprise overlapped block processing. The overlapped block processing may comprise pseudorandom overlapped block processing having no successive pixels both horizontally and vertically that are one-sized in transforms.
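The pseudorandom placement of overlapped blocks can be sketched as follows; the jitter range, the fixed step, and the use of Python's seeded random module are illustrative assumptions:

```python
import random

def pseudorandom_block_origins(width, height, block=8, step=4, seed=0):
    """Generate overlapped block origins with pseudorandom jitter.

    Regular overlapped processing shifts each block by a fixed step; adding
    a pseudorandom offset to each shift avoids repeating the same alignment
    of successive blocks, reducing structured artifacts in the denoised output.
    """
    rng = random.Random(seed)
    origins = []
    for y in range(0, height - block + 1, step):
        for x in range(0, width - block + 1, step):
            jx = rng.randrange(step)  # pseudorandom horizontal offset
            jy = rng.randrange(step)  # pseudorandom vertical offset
            origins.append((min(x + jx, width - block),
                            min(y + jy, height - block)))
    return origins
```

Each block would then be transformed, thresholded, inverse-transformed, and accumulated into the output with per-pixel normalization, as in conventional overlapped transform denoising.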
Filtering method for removing blocking artifact and apparatus
The present invention relates to the field of video image processing, and provides a filtering method and apparatus to resolve the problem that the subjective and objective quality of an image deteriorates because filtering cannot be performed on the internal blocks of a non-translational motion prediction unit.