H04N19/13

Method and Apparatus for Compressing Coding Unit in High Efficiency Video Coding
20180014028 · 2018-01-11 ·

Methods for decoding a video bitstream by a video decoding circuit are provided. In one implementation, a method receives coded data for a 2N×2N coding unit (CU) from the video bitstream, selects one or more first codewords according to whether asymmetric motion partition is disabled or enabled when a size of said 2N×2N CU is not equal to a smallest CU size, wherein none of the first codewords corresponds to INTER N×N partition, selects one or more second codewords when the size of said 2N×2N CU is equal to the smallest CU size, wherein none of the second codewords corresponds to the INTER N×N partition when N is 4, determines a CU structure for said 2N×2N CU from the video bitstream using said one or more first codewords or said one or more second codewords, and decodes the video bitstream using the CU structure.
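The codeword-set selection described above can be sketched as follows. This is an illustrative reading of the abstract, not the patent's actual binarization tables; the mode-name strings and the function name are assumptions.

```python
# Hypothetical sketch: choose the set of inter partition modes that may be
# signaled for a 2N×2N CU, based on whether the CU is the smallest CU and
# whether asymmetric motion partition (AMP) is enabled.

def partition_codewords(cu_size, smallest_cu_size, amp_enabled):
    """Return the list of inter partition modes a codeword may map to."""
    if cu_size != smallest_cu_size:
        # First codeword set: never includes INTER N×N.
        modes = ["2Nx2N", "2NxN", "Nx2N"]
        if amp_enabled:
            # AMP adds the four asymmetric partitions.
            modes += ["2NxnU", "2NxnD", "nLx2N", "nRx2N"]
    else:
        # Second codeword set: INTER N×N is excluded only when N == 4,
        # i.e. when the smallest CU is 8×8.
        modes = ["2Nx2N", "2NxN", "Nx2N"]
        if cu_size // 2 != 4:
            modes.append("NxN")
    return modes
```

For example, an 8×8 smallest CU (N = 4) yields no `NxN` entry, while a 16×16 smallest CU does.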

METHOD FOR GENERATING PREDICTION BLOCK IN AMVP MODE
20180014027 · 2018-01-11 · ·

A method of encoding an image in a merge mode, the method including determining motion information of a current prediction unit, and generating a prediction block using the motion information; generating a residual block using an original block and the prediction block, transforming the residual block to generate a transformed block, quantizing the transformed block using a quantization parameter to generate a quantized block, and scanning the quantized block to entropy-code the quantized block; and encoding the motion information using effective spatial and temporal merge candidates of the current prediction unit. In addition, a motion vector of the temporal merge candidate is a motion vector of a temporal merge candidate within a temporal merge candidate picture, and the quantization parameter is encoded using an average of two effective quantization parameters among a left quantization parameter, an upper quantization parameter and a previous quantization parameter of a current coding unit. Also, when the quantized block is larger than a predetermined size, the quantized block is divided into a plurality of subblocks to be scanned, and a scan pattern for scanning the plurality of subblocks is the same as a scan pattern for scanning quantized coefficients within each subblock. Further, a scanning scheme for scanning the quantized coefficients is determined according to an intra-prediction mode and a size of a transform unit.
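The quantization-parameter prediction described above, averaging two "effective" (available) QPs among the left, upper, and previous QPs, can be sketched as below. The rounding convention and fallback behavior are assumptions; the patent's exact derivation may differ.

```python
# Hypothetical sketch of QP prediction: average the first two available
# candidates among (left, above, previous); fall back to a single
# candidate if only one is available. `None` marks an unavailable QP.

def predict_qp(left_qp, above_qp, prev_qp):
    available = [q for q in (left_qp, above_qp, prev_qp) if q is not None]
    if len(available) >= 2:
        # Rounded average of the first two effective QPs.
        return (available[0] + available[1] + 1) // 2
    return available[0] if available else None
```

For example, with left/above/previous QPs of 30, 34, and 28, the predictor averages the left and upper values; if the left neighbor is unavailable, it averages the upper and previous values instead.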

SIGNIFICANCE MAP ENCODING AND DECODING USING PARTITION SELECTION
20180014030 · 2018-01-11 · ·

Methods of encoding and decoding for video data are described in which significance maps are encoded and decoded using non-spatially-uniform partitioning of the map into parts, wherein the bit positions within each part are associated with a given context. Example partition sets and processes for selecting from amongst predetermined partition sets and communicating the selection to the decoder are described.
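A non-spatially-uniform partition of a significance map into context regions might look like the following sketch. The specific regions here are an illustration, not one of the patent's actual partition sets.

```python
# Hypothetical 4×4 significance-map partition: the DC position gets its
# own context, the remaining low-frequency positions share a second
# context, and all high-frequency positions share a third. Positions
# within a part thus map to one context index each.

def context_index(x, y):
    """Context index for significance-map position (x, y) in a 4x4 map."""
    if (x, y) == (0, 0):
        return 0  # DC position: dedicated context
    if x + y <= 2:
        return 1  # low-frequency region
    return 2      # high-frequency region
```

Note the parts are unequal in size (1, 5, and 10 positions), which is what "non-spatially-uniform" refers to: context granularity is concentrated where significance statistics vary most.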

VIDEO ENCODING METHOD AND VIDEO ENCODING APPARATUS FOR SIGNALING SAO PARAMETERS

The present disclosure relates to signaling of sample adaptive offset (SAO) parameters determined to minimize an error between an original image and a reconstructed image in video encoding and decoding operations. An SAO decoding method includes obtaining context-encoded leftward SAO merge information and context-encoded upward SAO merge information from a bitstream of a largest coding unit (LCU); obtaining SAO on/off information context-encoded with respect to each color component, from the bitstream; if the SAO on/off information indicates that an SAO operation is to be performed, obtaining absolute offset value information for each SAO category bypass-encoded with respect to each color component, from the bitstream; and obtaining one of band position information and edge class information bypass-encoded with respect to each color component, from the bitstream.
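The parsing order described above can be sketched with a toy token stream standing in for the bitstream. The reader interface, the four-category offset count, and the dictionary field names are assumptions for illustration, not the patent's actual syntax elements.

```python
# Hypothetical sketch of the SAO parameter parsing order: merge flags
# first, then per-component on/off, absolute offsets, and band/edge
# info. Tokens are pre-decoded values standing in for context-coded
# and bypass-coded bins.

def parse_sao(tokens, num_components=3, num_categories=4):
    it = iter(tokens)
    params = {
        "merge_left": next(it),  # context-encoded leftward merge flag
        "merge_up": next(it),    # context-encoded upward merge flag
        "components": [],
    }
    for _ in range(num_components):  # e.g. Y, Cb, Cr
        comp = {"on": next(it)}      # context-encoded SAO on/off
        if comp["on"]:
            # Bypass-encoded absolute offsets, one per SAO category.
            comp["abs_offsets"] = [next(it) for _ in range(num_categories)]
            # Bypass-encoded band position OR edge class.
            comp["band_or_edge"] = next(it)
        params["components"].append(comp)
    return params
```

When a component's on/off flag is zero, no further SAO fields are read for it, mirroring the conditional parse in the abstract.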

ENCODING METHOD AND APPARATUS, AND DECODING METHOD AND APPARATUS

An encoding apparatus for encoding an image includes: a communicator configured to receive, from a device, device information related to the device; and a processor configured to encode the image by using image information of the image and the device information, wherein the processor is further configured to process the image according to at least one of the device information and the image information, determine a non-encoding region, a block-based encoding region, and a pixel-based encoding region of the image according to at least one of the device information and the image information, perform block-based encoding on the block-based encoding region by using a quantization parameter determined according to at least one of the device information and the image information, perform pixel-based encoding on the pixel-based encoding region, generate an encoded image by entropy encoding a symbol determined by the block-based encoding or the pixel-based encoding, and generate a bitstream comprising the encoded image, region information of the block-based encoding region and the pixel-based encoding region, and quantization information of the quantization parameter, and wherein the communicator is further configured to transmit the bitstream to the device.
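The three-way region decision above can be sketched as a per-block classifier. The features and thresholds here (visibility on the device, an edge-energy measure for text-like content) are purely assumed; the patent only says the decision depends on device information and image information.

```python
# Hypothetical sketch of the three-way region decision: content the
# device will not display is left un-encoded, text-like (high-gradient)
# blocks go to pixel-based encoding, and everything else to block-based
# encoding with a quantization parameter.

def classify_block(visible_on_device, edge_energy, text_threshold=50):
    """Assign one block to one of the three region types."""
    if not visible_on_device:
        return "non-encoding"
    if edge_energy > text_threshold:
        return "pixel-based"   # sharp edges survive pixel-based coding better
    return "block-based"       # natural content suits transform coding
```

A bitstream would then carry, alongside the encoded image, the region map produced by this classification and the quantization information used for the block-based region, matching the abstract's bitstream contents.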

SYSTEMS AND METHODS FOR COMPRESSING IMAGE DATA GENERATED BY A COMPUTED TOMOGRAPHY (CT) IMAGING SYSTEM
20180014016 · 2018-01-11 ·

A compression device for compressing image data generated by a computed tomography (CT) imaging system is described herein. The compression device is configured to compress the image data by implementing a method including receiving image data from the CT imaging system and requantizing the image data in a square root domain. The method further includes identifying a group of projections (GOP) in the image data, including a first projection and a plurality of subsequent projections, and performing spatial-delta encoding on the first projection and temporal-delta encoding on each of the plurality of subsequent projections. The method also includes identifying a signed value in the GOP, and converting the signed value to an unsigned value. The method further includes entropy coding the image data in the GOP, and packetizing the GOP for transmission or storage.
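The GOP delta step and the signed-to-unsigned conversion can be sketched as follows. The "zigzag" mapping shown (0, -1, 1, -2, 2 → 0, 1, 2, 3, 4) is a common choice for making deltas entropy-coder friendly; the patent may use a different scheme, and the one-dimensional projections here stand in for full 2-D detector rows.

```python
# Hypothetical sketch: spatial-delta encode the first projection of a
# GOP, temporal-delta encode each subsequent projection against its
# predecessor, then map all signed deltas to unsigned values.

def zigzag(v):
    """Map signed ints to unsigned: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return (v << 1) if v >= 0 else (-v << 1) - 1

def delta_encode_gop(gop):
    """gop: list of projections, each a list of ints (1-D for brevity)."""
    out = []
    first = gop[0]
    # Spatial deltas within the first projection (first sample kept as-is).
    out.append([first[0]] + [first[i] - first[i - 1]
                             for i in range(1, len(first))])
    # Temporal deltas: each later projection minus the previous one.
    for prev, cur in zip(gop, gop[1:]):
        out.append([c - p for c, p in zip(cur, prev)])
    # Convert every signed delta to an unsigned value.
    return [[zigzag(v) for v in proj] for proj in out]
```

Because adjacent CT projections are highly correlated, the temporal deltas cluster near zero, which is what makes the subsequent entropy-coding step effective.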