H04N19/46

METHOD FOR SIGNALING MIXED NAL UNIT TYPE AND SUBPICTURE PARTITIONING CODED VIDEO STREAM
20230007292 · 2023-01-05

A method, a computer program, and a computer system are provided for coding video data. Video data including one or more subpictures is received. A network abstraction layer (NAL) unit type associated with each of the one or more subpictures is identified by checking a flag indicating mixed NAL units in the one or more subpictures. The video data is decoded based on the identified NAL unit types.
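
The type-identification step above can be sketched as follows. The function and field names (`mixed_nalu_types_flag`, `nal_unit_type`) are illustrative assumptions, not the actual syntax elements of any codec standard.

```python
def identify_nal_unit_types(subpictures, mixed_nalu_types_flag):
    """Return the NAL unit type associated with each subpicture.

    When the mixed-NAL flag is not set, every subpicture of the picture
    shares a single NAL unit type; when it is set, each subpicture
    carries its own type.
    """
    if not mixed_nalu_types_flag:
        # All VCL NAL units of the picture have the same type.
        common_type = subpictures[0]["nal_unit_type"]
        return [common_type] * len(subpictures)
    # Mixed picture: each subpicture keeps the type it was coded with.
    return [sp["nal_unit_type"] for sp in subpictures]
```

For example, a mixed picture might pair a refreshed (IDR) subpicture with trailing subpictures, whereas a non-mixed picture resolves every subpicture to the same type.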

JOINT CODING OF PALETTE MODE USAGE INDICATION
20230007271 · 2023-01-05

Devices, systems and methods for palette mode coding are described. An exemplary method for video processing includes determining, for a conversion between a block of a video region in a video and a bitstream representation of the video, a prediction mode based on one or more allowed prediction modes that include at least a palette mode of the block. An indication of usage of the palette mode is determined according to the prediction mode. The method also includes performing the conversion based on the one or more allowed prediction modes.
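
A minimal sketch of deriving the palette-usage indication from the allowed prediction modes follows. The mode names and the rule for which modes are allowed are hypothetical placeholders, not the abstract's actual signaling scheme.

```python
def allowed_prediction_modes(slice_type, palette_enabled):
    """Illustrative set of prediction modes allowed for a block."""
    modes = ["intra"]
    if slice_type != "I":
        modes.append("inter")   # inter prediction only outside I slices
    if palette_enabled:
        modes.append("palette")
    return modes


def palette_usage_indication(pred_mode, allowed_modes):
    # The usage indication follows from the prediction mode itself:
    # it can only be asserted when palette is among the allowed modes.
    return pred_mode == "palette" and "palette" in allowed_modes
```

Deriving the indication this way means no separate palette flag needs to be coded for blocks whose allowed-mode set excludes palette.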

Surveillance Camera Upgrade via Removable Media having Deep Learning Accelerator and Random Access Memory
20230007317 · 2023-01-05 ·

Systems, devices, and methods related to a deep learning accelerator and memory are described. For example, a removable medium (e.g., a memory card or a USB drive) may be configured to execute instructions with matrix operands and configured with: an interface to receive a video stream; and random access memory to buffer a portion of the video stream as an input to an artificial neural network and to store instructions executable by the deep learning accelerator and matrices of the artificial neural network. Such a removable medium can replace an existing removable medium used in a surveillance camera to record video or images. The deep learning accelerator can execute the instructions to generate analytics of the buffered portion using the artificial neural network, enabling the surveillance camera upgraded via the removable medium to provide intelligent services based on the analytics.
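
A toy model of that arrangement is sketched below: a bounded buffer stands in for the medium's random access memory, and a bare dot product stands in for the accelerator's matrix-operand instructions. The class and method names are illustrative, not taken from the abstract.

```python
from collections import deque


class RemovableMediaAccelerator:
    """Toy model of the described removable medium: RAM buffers a portion
    of the video stream, and stored 'instructions with matrix operands'
    (here reduced to a dot product) run the network over each frame."""

    def __init__(self, buffer_frames, weights):
        self.buffer = deque(maxlen=buffer_frames)  # RAM holding recent frames
        self.weights = weights                     # stored ANN matrices

    def receive(self, frame):
        """Interface receiving one frame of the video stream."""
        self.buffer.append(frame)  # oldest frame is evicted when full

    def analytics(self):
        """Run the network over the buffered portion of the stream."""
        return [sum(w * x for w, x in zip(self.weights, f))
                for f in self.buffer]
```

The camera itself needs no modification: it keeps writing frames to the medium, and the analytics become available as an added service.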

Opportunistic progressive encoding
11570838 · 2023-01-31

Methods, systems, and devices are described for communicating data from multiple data terminals to an aggregator terminal over communication links having changing link conditions. In some embodiments, source data is received at multiple data terminals, each in communication with an aggregator terminal over a communication link. For example, during a live newscast, one mobile camera may receive live video of an event from a first position while another mobile camera receives live video of the event from a second position. For various reasons (e.g., as the cameras move), each communication link may experience independently changing link conditions. Each data terminal encodes the source data (or stores source data for later encoding) as a function of its respective link conditions and transmits the encoded source data over its respective communication link to the aggregator terminal.
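
One plausible per-terminal policy is sketched below: the terminal greedily sends as many progressive layers as the current link budget allows and stores the rest for later. The layer names and the base-first greedy rule are assumptions for illustration; the abstract does not fix a specific encoding policy.

```python
def encode_for_link(layers, link_kbps):
    """Split progressive layers into those transmitted now and those
    stored for later, given the terminal's current link capacity.

    `layers` is an ordered list of (name, rate_kbps) pairs, base layer
    first, followed by enhancement layers.
    """
    send, store = [], []
    budget = link_kbps
    for name, kbps in layers:
        if kbps <= budget:
            send.append(name)       # fits the current link conditions
            budget -= kbps
        else:
            store.append(name)      # held until the link improves
    return send, store
```

Because each terminal applies this independently, a camera on a degraded link naturally falls back to coarser layers while others keep sending full quality.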

Adaptive video streaming

A method, system, and apparatus for image capture, analysis, and transmission are provided. A link aggregation method involves identifying controller network ports to a source connected to the same subnetwork; producing packets associated with corresponding controller network ports selected by the source CPU for substantially uniform selection; and transmitting the packets to their corresponding network ports. An image analysis method involves producing, by a camera, an indication of whether a region of an image differs by a threshold extent from a corresponding region of a reference image; transmitting the indication and image data to a controller via a communications network; and storing, at the controller, the image data in association with the indication. The controller may perform operations according to positive indications. A transmission method involves receiving user input in respect of a video stream and transmitting, in accordance with the user input, selected data packets of selected image frames thereof.
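
The camera-side change indication can be sketched as a simple per-region comparison. Mean absolute pixel difference is an illustrative metric chosen here for the "threshold extent" test; the text does not specify one.

```python
def region_changed(region, reference, threshold):
    """Camera-side indication of whether a region of an image differs by
    a threshold extent from the corresponding region of a reference
    image. Regions are flat sequences of pixel values."""
    # Mean absolute difference across the region (assumed metric).
    diff = sum(abs(a - b) for a, b in zip(region, reference)) / len(region)
    return diff > threshold
```

The controller can then store each region's image data alongside its indication and act only on regions flagged as changed.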

Method of encoding an image into a coded image, method of decoding a coded image, and apparatuses thereof

A method of encoding an image into a coded image comprises: writing a quantization offset parameter into the coded image; determining a prediction mode type for coding a block of image samples of the image into a coding unit of the coded image; determining a quantization parameter for the block of image samples; and determining whether the prediction mode type is of a predetermined type. If the prediction mode type is of the predetermined type, the method further comprises modifying the determined quantization parameter using the quantization offset parameter and performing a quantization process for the block of image samples using the modified quantization parameter. If the prediction mode type is not of the predetermined type, the method performs the quantization process for the block of image samples using the determined quantization parameter.
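
The branch on the prediction mode type reduces to a small QP-selection rule, sketched below. Treating "palette" as the predetermined type is an assumption for illustration; the abstract leaves the type unspecified.

```python
def quantization_parameter(base_qp, qp_offset, pred_mode_type,
                           predetermined_type="palette"):
    """Return the QP to use for the block: modified by the signaled
    offset when the block's prediction mode type matches the
    predetermined type, otherwise the determined QP unchanged."""
    if pred_mode_type == predetermined_type:
        return base_qp + qp_offset  # offset-modified quantization
    return base_qp                  # ordinary quantization path
```

A decoder applying the same rule recovers which QP governed the block's quantization process from the signaled offset and mode type alone.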