H04N19/51

METHOD AND APPARATUS FOR DECODING INTER-LAYER VIDEO, AND METHOD AND APPARATUS FOR ENCODING INTER-LAYER VIDEO
20180007379 · 2018-01-04

Provided is an inter-layer video decoding method including obtaining a disparity vector of a current block included in a first layer image; determining a block of a second layer image corresponding to the current block by using the obtained disparity vector; determining a reference block including a sample that contacts a boundary of the block; obtaining a motion vector of the reference block; and determining a motion vector of the current block included in the first layer image by using the obtained motion vector.
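
The derivation above can be sketched as follows. This is a minimal illustration only: the block coordinates, the 4×4 motion-field granularity, the choice of boundary sample, and the zero-MV fallback are assumptions for the sketch, not details taken from the patent.

```python
# Hedged sketch of disparity-based inter-view motion derivation: shift the
# current block by the disparity vector, pick a sample that contacts the
# boundary of the corresponding second-layer block, and take the motion
# vector of the reference block containing that sample.

def derive_motion_vector(cur_x, cur_y, disparity, block_w, block_h,
                         second_layer_mv):
    """second_layer_mv: dict mapping a (col, row) 4x4 grid cell of the
    second layer image to its motion vector."""
    # Corresponding block in the second layer image.
    corr_x = cur_x + disparity[0]
    corr_y = cur_y + disparity[1]
    # A sample just right of the block boundary (one illustrative choice).
    boundary_sample = (corr_x + block_w, corr_y + block_h // 2)
    # Reference block = the 4x4 grid cell containing that sample.
    cell = (boundary_sample[0] // 4, boundary_sample[1] // 4)
    mv = second_layer_mv.get(cell)
    if mv is None:
        return (0, 0)  # illustrative fallback: zero motion vector
    return mv
```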

Method and Apparatus for Compressing Coding Unit in High Efficiency Video Coding
20180014028 · 2018-01-11

Methods for decoding of a video bitstream by a video decoding circuit are provided. In one implementation, a method receives coded data for a 2N×2N coding unit (CU) from the video bitstream, selects one or more first codewords according to whether asymmetric motion partition is disabled or enabled when a size of said 2N×2N CU is not equal to a smallest CU size, wherein none of the first codewords corresponds to INTER N×N partition, selects one or more second codewords when the size of said 2N×2N CU is equal to the smallest CU size, wherein none of the second codewords corresponds to the INTER N×N partition when N is 4, determines a CU structure for said 2N×2N CU from the video bitstream using said one or more first codewords or said one or more second codewords, and decodes the video bitstream using the CU structure.
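
The codeword-set selection described above can be sketched as a choice between two candidate tables. The partition names and the exact table contents below are illustrative assumptions (real HEVC binarizations are context-dependent); only the two stated constraints come from the abstract: the first set never includes INTER N×N, and the second set excludes INTER N×N when N is 4.

```python
# Illustrative sketch of partition-candidate selection for a 2Nx2N CU.

ALL_INTER = ["2Nx2N", "2NxN", "Nx2N", "NxN",
             "2NxnU", "2NxnD", "nLx2N", "nRx2N"]

def partition_codewords(cu_size, smallest_cu_size, amp_enabled):
    """Return the allowed partition candidates for a 2Nx2N CU."""
    n = cu_size // 2
    if cu_size != smallest_cu_size:
        # First set: never includes INTER NxN; asymmetric motion
        # partitions only when AMP is enabled.
        cands = [p for p in ALL_INTER if p != "NxN"]
        if not amp_enabled:
            cands = [p for p in cands if p in ("2Nx2N", "2NxN", "Nx2N")]
        return cands
    # Second set: at the smallest CU size, NxN is allowed unless N == 4.
    cands = ["2Nx2N", "2NxN", "Nx2N"]
    if n != 4:
        cands.append("NxN")
    return cands
```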

METHOD FOR GENERATING PREDICTION BLOCK IN AMVP MODE
20180014027 · 2018-01-11

A method of encoding an image in a merge mode, the method including determining motion information of a current prediction unit, and generating a prediction block using the motion information; generating a residual block using an original block and the prediction block, transforming the residual block to generate a transformed block, quantizing the transformed block using a quantization parameter to generate a quantized block, and scanning the quantized block to entropy-code the quantized block; and encoding the motion information using effective spatial and temporal merge candidates of the current prediction unit. In addition, a motion vector of the temporal merge candidate is a motion vector of a temporal merge candidate within a temporal merge candidate picture, and the quantization parameter is encoded using an average of two effective quantization parameters among a left quantization parameter, an upper quantization parameter and a previous quantization parameter of a current coding unit. Also, when the quantized block is larger than a predetermined size, the quantized block is divided into a plurality of subblocks to be scanned, and a scan pattern for scanning the plurality of subblocks is the same as a scan pattern for scanning quantized coefficients within each subblock. Further, a scanning scheme for scanning the quantized coefficients is determined according to an intra-prediction mode and a size of a transform unit.
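
The quantization-parameter prediction rule stated above (the average of two effective values among the left, upper and previous QP) can be sketched minimally as below. Using None as the unavailability sentinel and rounding the average to the nearest integer are assumptions made for illustration.

```python
# Minimal sketch: average the first two effective (available) values
# among the left, upper, and previous QP of the current coding unit.

def predict_qp(left_qp, upper_qp, prev_qp):
    effective = [q for q in (left_qp, upper_qp, prev_qp) if q is not None]
    if len(effective) >= 2:
        # Rounded average of the first two effective QPs.
        return (effective[0] + effective[1] + 1) // 2
    if effective:
        return effective[0]
    raise ValueError("no effective quantization parameter available")
```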

Method and apparatus for adaptive illumination compensation in video encoding and decoding

Different implementations are described for determining one or more illumination compensation (IC) parameters for a current block being encoded by a video encoder or decoded by a video decoder, based on the selection of one or more neighboring samples. The selection of the one or more neighboring samples is based on information used to reconstruct a plurality of neighboring reconstructed blocks. The selection may be based on the motion information, such as motion vector and reference picture information. In one example, only samples from neighboring reconstructed blocks that have (1) the same reference picture index and/or (2) a motion vector close to the motion vector of the current block are selected. In another example, if the current block derives or inherits some motion information from a top or left neighboring block, then only the top or left neighboring samples are selected for IC parameter calculation.
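
The first example above can be sketched as a filter-then-fit step: keep only neighboring samples whose block shares the current block's reference picture index and has a motion vector close to the current one, then fit the common linear IC model y = a·x + b by least squares. The closeness threshold, the tuple layout, and the least-squares fit itself are illustrative assumptions, not details from the abstract.

```python
# Hedged sketch of motion-based neighbor selection for IC parameters.

def select_and_fit_ic(neighbors, cur_ref_idx, cur_mv, mv_thresh=1):
    """neighbors: list of (ref_idx, mv, ref_sample, recon_sample) tuples,
    one per neighboring sample. Returns (scale a, offset b)."""
    pairs = [(x, y) for ref, mv, x, y in neighbors
             if ref == cur_ref_idx
             and abs(mv[0] - cur_mv[0]) <= mv_thresh
             and abs(mv[1] - cur_mv[1]) <= mv_thresh]
    if len(pairs) < 2:
        return 1.0, 0.0  # identity model fallback
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    n = len(pairs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in pairs)
    denom = n * sxx - sx * sx
    if denom == 0:
        return 1.0, (sy - sx) / n
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b
```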

Method and device for sharing a candidate list

The present invention relates to a method and device for sharing a candidate list. A method of generating a merging candidate list for a predictive block may include: producing, on the basis of a coding block including a predictive block on which a parallel merging process is performed, at least one of a spatial merging candidate and a temporal merging candidate of the predictive block; and generating a single merging candidate list for the coding block on the basis of the produced merging candidate. Thus, it is possible to increase processing speeds for coding and decoding by performing inter-picture prediction in parallel on a plurality of predictive blocks.
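
The key point above, one merging candidate list built at the coding-block level and shared by all prediction blocks inside it, can be sketched as below. The candidate representation, the deduplication, and the zero-vector padding are conventional illustrative assumptions, not specifics from the abstract.

```python
# Minimal sketch of a shared merge candidate list: candidates are derived
# once from the coding block's neighbors, so every prediction block in the
# coding block can be predicted in parallel from the same list.

def build_shared_merge_list(spatial_neighbors, temporal_candidate,
                            max_candidates=5):
    """spatial_neighbors: MVs of the coding block's neighbors (None when
    a neighbor is unavailable)."""
    merge_list = []
    for mv in spatial_neighbors:
        if mv is not None and mv not in merge_list:
            merge_list.append(mv)
    if temporal_candidate is not None and temporal_candidate not in merge_list:
        merge_list.append(temporal_candidate)
    # Pad with the zero motion vector up to the fixed list size.
    while len(merge_list) < max_candidates:
        merge_list.append((0, 0))
    return merge_list[:max_candidates]
```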

Method and device for inducing motion information between temporal points of sub prediction unit

According to the present invention, there is provided a method of encoding a three-dimensional (3D) image, the method comprising: determining a prediction mode for a current block as an inter prediction mode; determining whether a reference block corresponding to the current block in a reference picture has motion information; when the reference block has the motion information, deriving motion information on the current block for each sub prediction block in the current block; and deriving a prediction sample for the current block based on the motion information on the current block.
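
The per-sub-prediction-block derivation above can be sketched as follows: when the reference block has motion information, each sub prediction block of the current block takes the motion of the co-located sub block in the reference block. The grid layout, the 8×8 sub-block size, and the default-MV fallback for empty sub blocks are illustrative assumptions.

```python
# Hedged sketch of sub-prediction-unit motion derivation from a
# co-located reference block.

def derive_sub_pu_motion(ref_sub_mvs, block_w, block_h, sub_size=8,
                         default_mv=(0, 0)):
    """ref_sub_mvs: dict mapping a sub-block (col, row) in the reference
    block to its motion vector, or None when that sub block has none."""
    mvs = {}
    for row in range(block_h // sub_size):
        for col in range(block_w // sub_size):
            mv = ref_sub_mvs.get((col, row))
            mvs[(col, row)] = mv if mv is not None else default_mv
    return mvs
```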