H04N19/50

THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE
20230005187 · 2023-01-05 ·

A three-dimensional data encoding method includes: calculating a prediction residual that is a difference between information of a three-dimensional point included in point cloud data and a predicted value; and generating a bitstream including first information regarding the prediction residual, second information indicating a bit count of the first information, and third information indicating a bit count of the second information.
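The three-field layout described above can be sketched as follows. This is a hypothetical illustration, not the patented codec: `encode_fields` is an invented helper, and the zigzag mapping of signed residuals to unsigned codes is an assumed convention common in point-cloud and video coders.

```python
def encode_fields(value: int, predicted: int) -> tuple:
    """Sketch: encode a prediction residual as three nested fields.

    first  = the residual's binary representation (the "first information")
    second = bit count of `first`  (the "second information")
    third  = bit count of `second` (the "third information")
    """
    residual = value - predicted
    # Zigzag-map the signed residual to an unsigned code (assumed convention).
    u = 2 * residual if residual >= 0 else -2 * residual - 1
    first = format(u, "b")
    second = format(len(first), "b")
    third = format(len(second), "b")
    # A decoder would read `third` first, which tells it how many bits
    # `second` occupies, which in turn tells it how many bits `first` occupies.
    return first, second, third
```

Nesting the lengths this way lets the decoder parse a variable-length residual without any fixed upper bound on its size.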

Surveillance Camera Upgrade via Removable Media having Deep Learning Accelerator and Random Access Memory
20230007317 · 2023-01-05 ·

Systems, devices, and methods related to a deep learning accelerator and memory are described. For example, a removable media (e.g., a memory card, or a USB drive) may be configured to execute instructions with matrix operands and configured with: an interface to receive a video stream; and random access memory to buffer a portion of the video stream as an input to an artificial neural network and to store instructions executable by the deep learning accelerator and matrices of the artificial neural network. Such a removable media can be used to replace an existing removable media used in a surveillance camera to record video or images. The deep learning accelerator can execute the instructions to generate analytics of the buffered portion using the artificial neural network, enabling the surveillance camera that is upgraded via the use of the removable media to provide intelligent services based on the analytics.
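The data flow described in the abstract can be modeled in a few lines. This is a loose sketch, not the patented device: `SmartMediaSketch` is an invented name, the deque stands in for the random access memory buffer, and a plain callback stands in for the deep learning accelerator running the neural network.

```python
from collections import deque

class SmartMediaSketch:
    """Hypothetical model of the removable media: ordinary recording
    plus a RAM buffer feeding an analytics function."""

    def __init__(self, buffer_frames, analyze):
        self.buffer = deque(maxlen=buffer_frames)  # RAM video buffer
        self.analyze = analyze                     # stands in for DLA + ANN
        self.storage = []                          # the recording function
        self.analytics = []

    def receive(self, frame):
        self.storage.append(frame)   # behave like the media being replaced
        self.buffer.append(frame)
        if len(self.buffer) == self.buffer.maxlen:
            # run the "neural network" over the buffered portion
            self.analytics.append(self.analyze(list(self.buffer)))
```

The point of the design is that the camera itself is unmodified: it keeps writing frames as before, and the intelligence lives entirely on the media.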

ENTROPY ENCODING/DECODING METHOD AND APPARATUS
20230239516 · 2023-07-27 ·

The technology of this application relates to an entropy encoding method that includes: obtaining base layer information of a to-be-encoded picture block, where the base layer information corresponds to M samples in the picture block, and M is a positive integer; obtaining K elements corresponding to enhancement layer information of the picture block, where the enhancement layer information corresponds to N samples in the picture block, both K and N are positive integers, and N≥M; inputting the base layer information into a neural network to obtain K groups of probability values, where the K groups of probability values correspond to the K elements, and each group of probability values represents probabilities of a plurality of candidate values of the corresponding element; and performing entropy encoding on the K elements based on the K groups of probability values.
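The role of the K probability groups can be illustrated by computing the ideal entropy-coded cost of the K elements. This is a simplified sketch, not the claimed method: `fake_network` is an invented stand-in for the neural network, and real entropy coding would use an arithmetic coder rather than summing `-log2(p)` directly.

```python
import math

def entropy_code_cost(elements, prob_groups):
    """Ideal entropy-coding cost in bits of K enhancement-layer elements,
    given K groups of candidate-value probabilities (one group per element)."""
    assert len(elements) == len(prob_groups)
    bits = 0.0
    for e, probs in zip(elements, prob_groups):
        bits += -math.log2(probs[e])  # likely values get shorter codes
    return bits

def fake_network(base_layer_info, k, num_candidates=4):
    """Stand-in for the neural network: here just uniform probabilities,
    where the real model would condition on the base layer information."""
    return [[1.0 / num_candidates] * num_candidates for _ in range(k)]
```

The compression gain of the scheme comes entirely from how much better than uniform the network's conditional probabilities are.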

Method of encoding an image into a coded image, method of decoding a coded image, and apparatuses thereof

A method of encoding an image into a coded image, the method comprising: writing a quantization offset parameter into the coded image, determining a prediction mode type for coding a block of image samples of the image into a coding unit of the coded image, determining a quantization parameter for the block of image samples, and determining if the prediction mode type is of a predetermined type, wherein if the prediction mode type is of the predetermined type, the method further comprises: modifying the determined quantization parameter using the quantization offset parameter, and performing a quantization process for the block of image samples using the modified quantization parameter, and wherein if the prediction mode type is not of the predetermined type, the method further comprises: performing a quantization process for the block of image samples using the determined quantization parameter.
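The mode-dependent branch in the claim reduces to a small conditional. This is an illustrative sketch only: `quantize_block` is an invented helper, the mode names are placeholders, and the `2**(qp/6)` step-size mapping is an assumed H.264/HEVC-style convention, not taken from the patent.

```python
def quantize_block(samples, qp, mode, predetermined_mode, qp_offset):
    """Sketch: if the block's prediction mode is of the predetermined
    type, modify the QP by the signalled offset before quantizing."""
    if mode == predetermined_mode:
        qp = qp + qp_offset          # modified quantization parameter
    step = 2 ** (qp / 6)             # illustrative QP-to-step mapping
    return [round(s / step) for s in samples]
```

Because the offset is written once into the coded image, the decoder can apply the identical modification without any per-block signalling overhead.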

Method and apparatus for adaptively encoding and decoding a quantization parameter based on a quadtree structure

Disclosed are an apparatus and a method of encoding/decoding a video, particularly a method and an apparatus for storing a quantization parameter differential value in a largest coding unit (LCU) based on quadtree splitting and adaptively predicting a quantization parameter value based on context information on neighboring CUs. The quadtree-based quantization parameter encoding and decoding methods and apparatuses efficiently signal information on a block having a quantization parameter differential value based on splitting information on a CU, and adaptively predict a quantization parameter value using context information including the block size, block partitioning, and quantization parameter of a neighboring CU.

Indication of two-step cross-component prediction mode

A method for video bitstream processing includes generating a prediction block for a first video block of a video related to a first component, where the prediction block is selectively generated according to a criterion by applying a two-step cross-component prediction mode (TSCPM), and performing a conversion between the first video block and a bitstream representation of the video using the prediction block, wherein a first field in the bitstream representation corresponds to the TSCPM.
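The "two-step" structure of TSCPM can be sketched numerically. This is a heavily simplified illustration, not the normative AVS3 procedure: the min/max two-point model fit and the 2:1 horizontal averaging filter are stand-ins for the actual parameter derivation and downsampling filters.

```python
def tscpm_predict(luma_block, neighbor_luma, neighbor_chroma):
    """Sketch of two-step cross-component prediction:
    step 1: fit chroma ~ a*luma + b from neighbouring reconstructed
            samples and apply the model to the collocated luma block;
    step 2: downsample the temporary block to chroma resolution."""
    lo, hi = min(neighbor_luma), max(neighbor_luma)
    c_lo = neighbor_chroma[neighbor_luma.index(lo)]
    c_hi = neighbor_chroma[neighbor_luma.index(hi)]
    a = (c_hi - c_lo) / (hi - lo) if hi != lo else 0
    b = c_lo - a * lo
    temp = [[a * x + b for x in row] for row in luma_block]   # step 1
    # step 2: simple 2:1 horizontal average as a stand-in filter
    return [[(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
            for row in temp]
```

Applying the linear model at full luma resolution before downsampling (rather than the reverse) is what distinguishes the two-step mode from ordinary cross-component linear-model prediction.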