H04N19/553

Systems and methods of performing improved local illumination compensation

Techniques and systems are provided for processing video data. For example, video data can be obtained for processing by an encoding device or a decoding device. Bi-predictive motion compensation can then be performed for a current block of a picture of the video data. Performing the bi-predictive motion compensation includes deriving one or more local illumination compensation parameters for the current block using a template of the current block, a first template of a first reference picture, and a second template of a second reference picture. The templates can include neighboring samples of the current block, the first reference picture, and the second reference picture. The first template of the first reference picture and the second template of the second reference picture can be used simultaneously to derive the one or more local illumination compensation parameters.
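The bi-predictive derivation described above can be sketched as a least-squares fit: assuming (this is an illustrative assumption, not the claimed method) a linear illumination model in which the current block's neighboring samples are approximated as a weighted sum of the two reference templates plus an offset, both reference templates enter one joint solve.

```python
import numpy as np

def derive_lic_params(cur_template, ref0_template, ref1_template):
    """Jointly fit cur ~ a0*ref0 + a1*ref1 + b over the neighboring-sample
    templates, using both reference templates simultaneously (a sketch of
    bi-predictive local illumination compensation)."""
    A = np.stack([ref0_template, ref1_template,
                  np.ones_like(ref0_template)], axis=1).astype(float)
    coeffs, *_ = np.linalg.lstsq(A, cur_template.astype(float), rcond=None)
    a0, a1, b = coeffs
    return a0, a1, b

def apply_lic(pred0, pred1, a0, a1, b):
    """Apply the derived parameters to the two motion-compensated predictions."""
    return a0 * pred0 + a1 * pred1 + b
```

The function names and the specific least-squares formulation are hypothetical; a real codec would derive the parameters in fixed-point arithmetic over the reconstructed neighboring samples.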

Image processing apparatus, image processing method, and program
10776927 · 2020-09-15

A motion information calculation unit acquires motion information between a plurality of target images. An occlusion information calculation unit generates occlusion information between the target images. An image interpolation processing unit determines priority of the motion information based on the motion information and the occlusion information, and performs predetermined image processing for the target images by using motion information that is weighted based on the priority.
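The priority-weighted use of motion information can be illustrated as follows; the weighting rule (zero weight for occluded motion, otherwise inverse to motion magnitude) is an assumption for the sketch, not the rule claimed in the patent.

```python
import numpy as np

def weighted_motion_blend(candidates, motion_mags, occluded):
    """Blend per-pixel interpolation candidates, down-weighting motion
    information flagged as occluded. `candidates`, `motion_mags`, and
    `occluded` all have shape (num_candidates, H, W)."""
    # Priority: occluded motion gets zero weight; otherwise smaller
    # motion magnitude is treated as more reliable (assumed heuristic).
    w = np.where(occluded, 0.0, 1.0 / (1.0 + motion_mags))
    w_sum = w.sum(axis=0)
    # Normalize; fall back to a uniform blend where every candidate is occluded.
    w = np.where(w_sum > 0, w / np.maximum(w_sum, 1e-12), 1.0 / len(candidates))
    return (w * candidates).sum(axis=0)
```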

THREE-DIMENSIONAL MODEL ENCODING DEVICE, THREE-DIMENSIONAL MODEL DECODING DEVICE, THREE-DIMENSIONAL MODEL ENCODING METHOD, AND THREE-DIMENSIONAL MODEL DECODING METHOD
20200250798 · 2020-08-06

A three-dimensional model encoding device includes: a projector that generates a two-dimensional image by projecting a three-dimensional model to at least one two-dimensional plane; a corrector that generates, using the two-dimensional image, a corrected image by correcting one or more pixels forming an inactive area to which the three-dimensional model is not projected, the inactive area being included in the two-dimensional image; and an encoder that generates encoded data by performing two-dimensional encoding on the corrected image.
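The correction step can be sketched as filling the inactive (unprojected) pixels before handing the image to a 2D encoder. Mean fill is an illustrative choice only; real encoders may use padding or extrapolation from nearby active pixels.

```python
import numpy as np

def correct_inactive_area(image, active_mask):
    """Replace pixels where the 3D model was not projected with the mean
    of the active pixels, so the 2D encoder does not spend bits on sharp
    dummy values in the inactive area (mean fill is an assumption)."""
    out = image.astype(float).copy()
    if active_mask.any():
        out[~active_mask] = image[active_mask].mean()
    return out
```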

Apparatuses and methods for encoding and decoding a video coding block of a video signal

A decoding apparatus partitions a video coding block based on coding information into two or more segments including a first segment and a second segment. The coding information comprises a first segment motion vector associated with the first segment and a second segment motion vector associated with the second segment. A co-located first segment in a first reference frame is determined based on the first segment motion vector, and a co-located second segment in a second reference frame is determined based on the second segment motion vector. A predicted video coding block is generated based on the co-located first segment and the co-located second segment. A divergence measure is determined based on the first segment motion vector and the second segment motion vector, and either a first or a second filter is applied to the predicted video coding block depending on the divergence measure.
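The divergence-dependent filter selection can be illustrated with a simple measure; the negative cosine similarity used here (vectors pointing apart give high divergence) and the two filter names are assumptions for the sketch, not the patented definitions.

```python
import numpy as np

def select_boundary_filter(mv1, mv2, threshold=0.0):
    """Choose a filter for the predicted block from a divergence measure
    of the two segment motion vectors: negative cosine similarity, so
    opposing vectors yield high divergence (assumed measure)."""
    d = -float(np.dot(mv1, mv2)) / (
        np.linalg.norm(mv1) * np.linalg.norm(mv2) + 1e-12)
    # Diverging motion suggests a revealed area: smooth more strongly.
    return ("strong_smoothing" if d > threshold else "weak_smoothing"), d
```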

Video coding device, video coding method, video decoding device, and video decoding method
10652549 · 2020-05-12

From among blocks in a coding-target picture included in video data, a region determination circuit considers a first block encoded by referring to a first prediction block, the first prediction block having been generated by applying a bidirectional prediction mode to a first component of a pixel value. On the basis of a difference value for the first component between corresponding pixels of the first prediction block and the first block, the circuit determines a partial region to which a unidirectional prediction mode is to be applied for a second component. A prediction circuit generates a second prediction block for the second component by applying the unidirectional prediction mode to the partial region and a bidirectional prediction mode to the remaining region. An encoder calculates a prediction error for the second component between corresponding pixels of the first block and the second prediction block and encodes the prediction error.
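The per-region mode switch can be sketched as a thresholding step: where the first component's bi-prediction error was large, the second component falls back to uni-directional prediction. The threshold rule and simple averaging for bi-prediction are assumptions for illustration.

```python
import numpy as np

def second_component_prediction(uni_pred, other_pred, first_diff, thresh):
    """Predict the second component: use uni-directional prediction
    (`uni_pred`) in the partial region where the first component's
    bi-prediction difference exceeded `thresh`, and bi-directional
    averaging elsewhere (threshold rule is an assumption)."""
    uni_region = first_diff > thresh
    bi_pred = (uni_pred + other_pred) / 2.0
    return np.where(uni_region, uni_pred, bi_pred)
```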

Image processing method and apparatus

An image processing method includes obtaining multiple video frames collected from the same scene at different angles, and determining a depth map of each video frame according to corresponding pixels among the multiple video frames; and supplementing missing background regions of the multiple video frames according to the depth maps, to obtain supplemented video frames and depth maps of the supplemented video frames. The method also includes generating an alpha image of each video frame according to an occlusion relationship, in a missing background region, between that video frame and its supplemented video frame, and generating a browsing frame at a specified browsing angle according to the multiple video frames, their supplemented video frames, and the alpha images.
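The final use of the alpha image is standard per-pixel compositing: alpha near 1 keeps the original frame, alpha near 0 reveals the supplemented background. This is a generic compositing sketch, not the patent's specific rendering pipeline.

```python
import numpy as np

def composite_browsing_frame(frame, supplemented, alpha):
    """Blend a (view-warped) original frame with its background-supplemented
    version using the per-pixel alpha image. All arrays share one shape."""
    return alpha * frame + (1.0 - alpha) * supplemented
```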