H04N19/30

Method and apparatus for processing video signal
11700389 · 2023-07-11

A method for decoding a video according to the present invention may comprise: decoding information indicating whether a non-zero transform coefficient exists in a current block; when the information indicates that a non-zero transform coefficient exists in the current block, determining a scanning order of the current block; and decoding a transform coefficient included in the current block according to the determined scanning order.
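The decoding flow described above can be sketched in a few lines: a coded-block flag gates coefficient parsing, and a scan order maps the coefficient stream onto block positions. All names here are illustrative stand-ins, not syntax elements from any codec specification; the up-right diagonal scan is one common choice of scanning order, used purely as an example.

```python
def diagonal_scan(size):
    """Return (row, col) positions of an up-right diagonal scan."""
    positions = []
    for d in range(2 * size - 1):
        for r in range(size - 1, -1, -1):
            c = d - r
            if 0 <= c < size:
                positions.append((r, c))
    return positions

def decode_block(coded_block_flag, coeff_stream, size):
    """Fill a size x size block; skip parsing when no non-zero coefficient exists."""
    block = [[0] * size for _ in range(size)]
    if not coded_block_flag:            # flag says: no non-zero coefficient
        return block                    # nothing further is parsed
    scan = diagonal_scan(size)          # scanning order determined for this block
    for (r, c), level in zip(scan, coeff_stream):
        block[r][c] = level             # place coefficients in scan order
    return block
```

The key point the abstract makes is the early exit: when the flag signals an all-zero block, neither the scan order nor the coefficients need to be decoded.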

METHOD AND APPARATUS FOR ENCODING/DECODING DEEP LEARNING NETWORK

Disclosed herein are a method and apparatus for encoding/decoding a deep learning network. According to an embodiment, the method for decoding a deep learning network may include: decoding network header information regarding the deep learning network; decoding layer header information regarding a plurality of layers in the deep learning network; decoding layer data information regarding specific information of the plurality of layers; and obtaining the deep learning network and the plurality of layers therein. The layer header information includes layer distinction information for distinguishing the plurality of layers.
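A minimal sketch of the decoding order the abstract lays out: network header first, then per-layer header and data, with a layer identifier in each header serving as the distinction information. The data model and field names are hypothetical, invented only to illustrate the structure.

```python
def decode_network(stream):
    """Decode in the abstract's order: network header, layer headers, layer data."""
    network = {"header": stream["network_header"], "layers": []}
    for layer in stream["layers"]:
        header = layer["layer_header"]      # carries layer-distinction info
        network["layers"].append({
            "id": header["layer_id"],       # distinguishes this layer
            "data": layer["layer_data"],    # layer-specific information
        })
    return network
```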

Profile, tier and layer indication in video coding
11700390 · 2023-07-11

A method of video processing includes performing a conversion between a video including a plurality of video layers and a bitstream of the video, wherein the bitstream includes a plurality of output layer sets (OLSs), each including one or more of the plurality of video layers, and the bitstream conforms to a format rule, wherein the format rule specifies that, for an OLS having a single layer, a profile-tier-level (PTL) syntax structure that indicates a profile, a tier and a level for the OLS is included in a video parameter set for the bitstream, and the PTL syntax structure for the OLS is also included in a sequence parameter set coded in the bitstream.
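The format rule can be restated as a simple conformance check: for an OLS containing exactly one layer, the PTL structure signaled in the video parameter set must also appear in the sequence parameter set. The sketch below uses an invented data model for illustration; it is not the actual VVC syntax.

```python
def ptl_consistent(ols, vps, sps):
    """Check the single-layer-OLS rule: PTL must be in both VPS and SPS."""
    if len(ols["layers"]) != 1:
        return True                          # rule applies only to single-layer OLSs
    ptl = vps["ptl_by_ols"].get(ols["id"])   # PTL signaled in the VPS for this OLS
    return ptl is not None and sps.get("ptl") == ptl
```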

Removal delay parameters for video coding
11553198 · 2023-01-10

A system for decoding a video bitstream receives a bitstream and a plurality of enhancement bitstreams, together with a video parameter set and a video parameter set extension. The system also receives an output layer set change message including information indicating a change in at least one output layer set.
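One way to picture the change message is as a delta applied to the decoder's active output layer set. The message fields below are hypothetical, chosen only to make the idea concrete; the patent does not specify this format.

```python
def apply_ols_change(active_ols, change_msg):
    """Return the new set of output layers after a change message."""
    updated = set(active_ols)
    updated -= set(change_msg.get("removed_layers", []))  # layers dropped
    updated |= set(change_msg.get("added_layers", []))    # layers added
    return sorted(updated)
```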

GUIDED PROBABILITY MODEL FOR COMPRESSED REPRESENTATION OF NEURAL NETWORKS

In example embodiments, an apparatus, a method, and a computer program product are provided. The apparatus comprises at least one processor and at least one non-transitory memory including computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: determine a processing order of building blocks to encode or decode a media item; and determine a number of processing steps required to encode or decode the media item, wherein the processing order of building blocks and the number of processing steps are determined based on a content of the media item by using a guided probability model based on a neural network.
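The core idea can be sketched as follows: a model scores candidate building blocks given the media content, and both the processing order and the number of steps fall out of those scores. The `score` callable below is a stand-in for the neural-network-based guided probability model; everything here is illustrative.

```python
def plan_processing(content, blocks, score):
    """Derive processing order and step count from content-dependent scores."""
    scored = sorted(((score(content, b), b) for b in blocks), reverse=True)
    order = [b for s, b in scored if s > 0]   # processing order of building blocks
    steps = len(order)                        # number of processing steps
    return order, steps
```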

High dynamic range processing on spherical images

Image signal processing includes obtaining two or more image signals from a first hyper-hemispherical image sensor, where each of the two or more image signals has a different exposure, and obtaining two or more image signals from a second hyper-hemispherical image sensor, where each of the two or more image signals has a different exposure. Image signal processing includes generating an exposure compensated image based on a gain value applied to an exposure level of a first image and a gain value applied to an exposure level of a second image. Image signal processing further includes performing high dynamic range (HDR) processing on the exposure compensated image. The HDR processing may be performed on a high-frequency portion of the exposure compensated image.
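The exposure-compensation step above can be sketched simply: a per-image gain brings each differently exposed capture to a common exposure level before merging. The pixel-wise average used here for the merge is purely illustrative; the patent's actual combination before HDR processing may differ.

```python
def exposure_compensate(img_a, gain_a, img_b, gain_b):
    """Apply a gain to each image's exposure level, then merge pixel-wise."""
    comp_a = [p * gain_a for p in img_a]        # first image, gain applied
    comp_b = [p * gain_b for p in img_b]        # second image, gain applied
    return [(a + b) / 2 for a, b in zip(comp_a, comp_b)]
```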

Signaling of in-loop reshaping information using parameter sets

A method for video processing includes performing a conversion between a current video block of a video region of a video and a coded representation of the video, wherein the conversion uses a coding mode in which the current video block is constructed based on a first domain and a second domain and/or a chroma residue is scaled in a luma-dependent manner, and wherein a parameter set in the coded representation comprises parameter information for the coding mode.
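The luma-dependent chroma residue scaling can be pictured as a table lookup: a scale factor chosen from the average reconstructed luma is applied to the chroma residual. The lookup table below is invented for illustration; in practice its parameters would be derived from the signaled parameter set, not hard-coded.

```python
def scale_chroma_residual(residual, avg_luma, scale_lut):
    """Scale a chroma residual by a factor chosen from the average luma."""
    idx = min(avg_luma // 64, len(scale_lut) - 1)   # coarse luma bin (illustrative)
    return [r * scale_lut[idx] for r in residual]
```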