H04N19/90

Systems and methods for using pre-calculated block hashes for image block matching
11317123 · 2022-04-26

A server accesses a previous frame of an image in a video, obtains a hash value for each pixel in the previous frame, and creates a hash map that stores each of the hash values. The server receives a current frame of the image and separates the current frame into a plurality of current blocks of pixels. The server calculates, using a hash function, a hash value for each of the current blocks of pixels. The server compares the hash values in the hash map with the hash values associated with the current frame and identifies a hash value in the hash map that matches a hash value in the current frame. The server compresses the current frame for transmission to a client using the identified matching hash values, and pre-calculates a new hash map based on the current frame for use in compressing the next frame of the video.
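As an illustrative sketch only (not the patented implementation), the hash-map construction and block matching described above might look like the following, assuming aligned fixed-size blocks and a generic digest (`md5` here) standing in for whatever hash function the server uses:

```python
import hashlib

def block_hash(block):
    """Hash a block of pixel values (a tuple of rows of 0-255 ints)."""
    data = bytes(v for row in block for v in row)
    return hashlib.md5(data).hexdigest()

def build_hash_map(frame, size):
    """Map block hash -> top-left position for every aligned block in a frame."""
    h, w = len(frame), len(frame[0])
    table = {}
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            blk = tuple(tuple(frame[y + dy][x + dx] for dx in range(size))
                        for dy in range(size))
            table[block_hash(blk)] = (y, x)
    return table

def match_blocks(current, hash_map, size):
    """For each current block, return the matching previous position or None."""
    matches = {}
    h, w = len(current), len(current[0])
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            blk = tuple(tuple(current[y + dy][x + dx] for dx in range(size))
                        for dy in range(size))
            matches[(y, x)] = hash_map.get(block_hash(blk))
    return matches
```

Matched blocks can then be encoded as references into the previous frame, while unmatched blocks are encoded directly; `build_hash_map` over the current frame yields the pre-calculated map for the next frame.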

THREE-DIMENSIONAL NOISE REDUCTION

Systems and methods are disclosed for image signal processing. For example, methods may include receiving a current image of a sequence of images from an image sensor; combining the current image with a recirculated image to obtain a noise reduced image, where the recirculated image is based on one or more previous images of the sequence of images from the image sensor; determining a noise map for the noise reduced image, where the noise map is determined based on estimates of noise levels for pixels in the current image, a noise map for the recirculated image, and a set of mixing weights; recirculating the noise map with the noise reduced image to combine the noise reduced image with a next image of the sequence of images from the image sensor; and storing, displaying, or transmitting an output image that is based on the noise reduced image.
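A minimal per-pixel sketch of the recirculation scheme above, under the simplifying assumptions that a single scalar mixing weight `alpha` stands in for the set of mixing weights and that noise levels are standard deviations of independent noise (a real pipeline would derive per-pixel weights from the noise estimates):

```python
def temporal_nr(current, recirc, noise_cur, noise_recirc, alpha=0.5):
    """Blend the current frame with the recirculated frame and update the
    recirculated noise map alongside it.

    current, recirc           -- flat lists of pixel values
    noise_cur, noise_recirc   -- per-pixel noise level estimates (std devs)
    alpha                     -- weight given to the current frame
    """
    # Weighted average of current and recirculated pixels.
    denoised = [alpha * c + (1 - alpha) * r for c, r in zip(current, recirc)]
    # Assuming independent noise, variances combine with squared weights.
    noise_out = [
        (alpha ** 2 * nc ** 2 + (1 - alpha) ** 2 * nr ** 2) ** 0.5
        for nc, nr in zip(noise_cur, noise_recirc)
    ]
    # Both outputs are recirculated for blending with the next image.
    return denoised, noise_out
```

The key point of the abstract is that the noise map travels with the noise-reduced image, so the mixer knows how much residual noise the recirculated image carries when the next frame arrives.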

FEATURE DATA ENCODING METHOD, APPARATUS AND DEVICE, FEATURE DATA DECODING METHOD, APPARATUS AND DEVICE, AND STORAGE MEDIUM
20230247230 · 2023-08-03

A feature data encoding method is provided. First feature data corresponding to each of channels to be encoded is determined; data type conversion processing is performed on the first feature data corresponding to each channel to obtain second feature data that meets a data input condition of an apparatus for encoding feature data; spatial re-expression is performed on the second feature data corresponding to each channel to obtain third feature data, wherein a height of the third feature data is a first height, a width of the third feature data is a first width, and the third feature data comprises the second feature data located at a target position of each of the channels; and the third feature data is encoded and signalled in a bitstream.
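One common reading of the spatial re-expression step is tiling the per-channel feature planes into a single plane of the first height by the first width, with each channel placed at its target position. A hypothetical sketch (the grid width `cols` and row-major placement are assumptions, not details from the abstract):

```python
def tile_channels(channels, cols):
    """Tile equal-sized 2-D channel planes into one plane, row-major,
    cols channels per grid row. Returns the combined third-feature plane."""
    ch_h, ch_w = len(channels[0]), len(channels[0][0])
    rows = -(-len(channels) // cols)  # ceiling division
    plane = [[0] * (cols * ch_w) for _ in range(rows * ch_h)]
    for idx, ch in enumerate(channels):
        oy, ox = (idx // cols) * ch_h, (idx % cols) * ch_w  # target position
        for y in range(ch_h):
            for x in range(ch_w):
                plane[oy + y][ox + x] = ch[y][x]
    return plane
```

The resulting plane has first height `rows * ch_h` and first width `cols * ch_w` and can be handed to a conventional picture encoder as a single image.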

Pruning methods and apparatuses for neural network based video coding
11765376 · 2023-09-19

A pruning method of neural network based video coding of a current block of a picture of a video sequence is performed by at least one processor and includes categorizing parameters of a neural network into groups, setting a first index to indicate that a first group of the groups is to be pruned, and a second index to indicate that a second group of the groups is not to be pruned, and transmitting, to a decoder, the set first index and the set second index. Based on the transmitted first index and the transmitted second index, the current block is processed using the parameters of which the first group of the groups is pruned.
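The group-wise pruning signalled by the two indices can be sketched as follows; the data layout (flat parameter list, a group-id to parameter-index mapping, boolean flags standing in for the transmitted indices) is a hypothetical simplification:

```python
def apply_group_pruning(params, groups, prune_flags):
    """Zero out the parameters of every group flagged for pruning.

    params      -- flat list of neural network parameters
    groups      -- dict: group id -> list of parameter indices in that group
    prune_flags -- dict: group id -> True (prune) / False (keep), mirroring
                   the first/second index transmitted to the decoder
    """
    out = list(params)
    for gid, indices in groups.items():
        if prune_flags.get(gid):
            for i in indices:
                out[i] = 0.0  # pruned parameters contribute nothing
    return out
```

Because only group indices (not per-parameter masks) are transmitted, the decoder can reconstruct exactly which parameters to drop before processing the current block.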

Deep Neural Network (DNN)-Based Reconstruction Method and Apparatus for Compressive Video Sensing (CVS)
20220030281 · 2022-01-27

The present disclosure provides a deep neural network (DNN)-based reconstruction method and apparatus for compressive video sensing (CVS). The method divides a video signal into key frames and non-key frames. A key frame is reconstructed by using an existing image reconstruction method, while a non-key frame is reconstructed by using a dedicated DNN proposed in the present disclosure. The neural network includes an adaptive sampling module, a multi-hypothesis prediction module, and a residual reconstruction module, and makes full use of the spatio-temporal correlation of the video signal to sample and reconstruct it. This keeps the time complexity of the algorithm low while improving reconstruction quality. The method is therefore applicable to video sensing systems with limited resources on the sampling side and high requirements for reconstruction quality and real-time performance.
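The key/non-key split that drives the two reconstruction paths can be sketched as below; the fixed group-of-pictures size and the rule that the first frame of each group is the key frame are assumptions for illustration:

```python
def split_key_frames(frames, gop_size):
    """Partition a frame sequence into key frames (reconstructed by a
    conventional image method) and non-key frames (reconstructed by the
    prediction network). Returns two lists of (index, frame) pairs."""
    key, non_key = [], []
    for i, frame in enumerate(frames):
        # First frame of each group-of-pictures is treated as the key frame.
        (key if i % gop_size == 0 else non_key).append((i, frame))
    return key, non_key
```

The reconstructed key frames then serve as reference hypotheses when the multi-hypothesis prediction module reconstructs the surrounding non-key frames.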

Method and apparatus for encoding and decoding a texture block using depth based block partitioning

The invention relates to an apparatus for decoding an encoded texture block of a texture image, the decoding apparatus comprising: a partitioner (510) adapted to determine a partitioning mask (332) for the encoded texture block (312′) based on depth information (322) associated with the encoded texture block, wherein the partitioning mask (332) is adapted to define a plurality of partitions (P1, P2) and to associate each texture block element of the encoded texture block with a partition of the plurality of partitions of the encoded texture block; and a decoder (720) adapted to decode the partitions of the plurality of partitions of the encoded texture block based on the partitioning mask.
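A minimal sketch of deriving a two-partition mask from the co-located depth block; thresholding against the mean depth is one common heuristic and is an assumption here, not a detail taken from the abstract:

```python
def depth_partition_mask(depth_block):
    """Derive a binary partitioning mask from a depth block: each element is
    assigned to partition P1 (0) or P2 (1) by comparing its depth sample to
    the block's mean depth."""
    flat = [d for row in depth_block for d in row]
    mean = sum(flat) / len(flat)
    return [[1 if d > mean else 0 for d in row] for row in depth_block]
```

Because the decoder can recompute the same mask from the already-decoded depth information, no explicit partition shape needs to be signalled for the texture block.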
