Patent classifications
H04N19/426
HIERARCHICAL SURVEILLANCE VIDEO COMPRESSION REPOSITORY
Apparatus and methods for processing video surveillance data include training a data repository, using a first plurality of surveillance video files including a first plurality of video frames, to identify macroblocks of the video frames representing average content of the first plurality of surveillance video files. An ordered data structure is generated by sorting the macroblocks of the video frames based on image differences within the macroblocks. The ordered data structure includes a root node. A second plurality of surveillance video files including a second plurality of video frames is received. The second plurality of video frames is inserted into the generated ordered data structure. For each frame of the second plurality of video frames, a reference into the generated ordered data structure is stored in the data repository along with a difference between the frame and its referenced content.
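The repository idea above can be sketched in a few lines. This is an illustrative toy, not the patented implementation: all function names are assumptions, macroblocks are flat integer lists, and the "ordered data structure" is a list sorted by distance from the trained average (index 0 playing the role of the root node).

```python
# Toy sketch (hypothetical names) of the training/storage flow described above.

def sad(a, b):
    """Sum of absolute differences between two equal-size macroblocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def train(frames):
    """'Train' on the first plurality of frames: element-wise average macroblock."""
    n = len(frames)
    return [sum(f[i] for f in frames) // n for i in range(len(frames[0]))]

def build_ordered_structure(macroblocks, avg):
    """Sort macroblocks by image difference from the average; index 0 is the root."""
    return sorted(macroblocks, key=lambda mb: sad(mb, avg))

def store_frame(frame, structure):
    """Store a new frame as a reference into the structure plus a difference."""
    ref_idx = min(range(len(structure)), key=lambda i: sad(frame, structure[i]))
    delta = [x - y for x, y in zip(frame, structure[ref_idx])]
    return ref_idx, delta

training = [[10, 12, 14], [12, 14, 16], [14, 16, 18]]
avg = train(training)                              # element-wise average
structure = build_ordered_structure(training, avg) # root at index 0
ref_idx, delta = store_frame([13, 15, 17], structure)
# A frame is recovered as reference + stored difference:
restored = [r + d for r, d in zip(structure[ref_idx], delta)]
```

Storing only `(ref_idx, delta)` per frame is what makes the repository a compression scheme: frames close to already-stored content carry small deltas.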
Systems and methods for player input motion compensation by anticipating motion vectors and/or caching repetitive motion vectors
Systems and methods for reducing latency through motion estimation and compensation techniques are disclosed. The systems and methods include a client device that uses transmitted lookup tables from a remote server to match user input to motion vectors, and tag and sum those motion vectors. When a remote server transmits encoded video frames to the client, the client decodes those video frames and applies the summed motion vectors to the decoded frames to estimate motion in those frames. In certain embodiments, the systems and methods generate motion vectors at a server based on predetermined criteria and transmit the generated motion vectors and one or more invalidators to a client, which caches those motion vectors and invalidators. The server instructs the client to receive input from a user, and use that input to match to cached motion vectors or invalidators. Based on that comparison, the client then applies the matched motion vectors or invalidators to effect motion compensation in a graphic interface. In other embodiments, the systems and methods cache repetitive motion vectors at a server, which transmits a previously generated motion vector library to a client. The client stores the motion vector library, and monitors for user input data. The server instructs the client to calculate a motion estimate from the input data and instructs the client to update the stored motion vector library based on the input data, so that the client applies the stored motion vector library to initiate motion in a graphic interface prior to receiving actual motion vector data from the server. In this manner, latency in video data streams is reduced.
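The client-side caching described above can be sketched as a small lookup structure. This is a hypothetical illustration of the idea, not the patented system: the input names, vector values, and the `MotionVectorCache` class are all assumptions.

```python
# Hypothetical sketch: the client holds a server-transmitted motion vector
# library, matches user input against it, and applies the matched vector to
# the display position before real motion data arrives. Invalidators drop
# entries the server no longer considers valid.

class MotionVectorCache:
    def __init__(self, library):
        # library: user input -> (dx, dy) motion vector, sent ahead by the server
        self.library = dict(library)

    def lookup(self, user_input):
        """Return the cached motion vector for an input, if still valid."""
        return self.library.get(user_input)

    def invalidate(self, user_input):
        """Apply a server-sent invalidator: drop the cached entry."""
        self.library.pop(user_input, None)

    def apply(self, position, user_input):
        """Shift the position by the cached vector (the latency-hiding step)."""
        mv = self.lookup(user_input)
        if mv is None:
            return position  # no cached estimate: wait for server data
        return (position[0] + mv[0], position[1] + mv[1])

cache = MotionVectorCache({"move_right": (4, 0), "jump": (0, -8)})
pos = cache.apply((100, 50), "move_right")   # estimate applied immediately
cache.invalidate("jump")                     # invalidator received
pos2 = cache.apply(pos, "jump")              # unchanged: entry invalidated
```

The latency win comes from `apply` running on the client against cached data, rather than waiting a round trip for server-computed motion vectors.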
Adaptive resolution management prediction rescaling
A method includes receiving a reference frame, determining, for a current block, a scaling constant, determining a scaled reference block using the reference frame and the scaling constant, determining a scaled prediction block using the scaled reference block, and reconstructing pixel data of the current block using the scaled prediction block. Related apparatus, systems, techniques, and articles are also described.
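A minimal sketch of deriving a scaled reference block, assuming nearest-neighbour resampling (a real codec would use filtered interpolation, and the function name is an assumption):

```python
# Illustrative only: map each output sample of a size x size block back
# through the scaling constant into the reference frame.

def scale_block(frame, x, y, size, scale):
    """Sample a size x size block from `frame` at (x, y), rescaled by `scale`."""
    block = []
    for j in range(size):
        row = []
        for i in range(size):
            # Clamp so coordinates stay inside the reference frame.
            src_x = min(int((x + i) * scale), len(frame[0]) - 1)
            src_y = min(int((y + j) * scale), len(frame) - 1)
            row.append(frame[src_y][src_x])
        block.append(row)
    return block

reference = [[c + 10 * r for c in range(8)] for r in range(8)]
# A scaling constant of 2 reads every other reference sample.
scaled = scale_block(reference, 0, 0, 2, 2)
```

The scaled block then serves as the prediction block that the current block's pixel data is reconstructed against.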
VIDEO CODING AND DECODING
A sequence of images is encoded in a bitstream as a series of picture units PU-01˜03. Each picture unit corresponds to one encoded image and includes one or more network abstraction layer (NAL) units NAL-01˜23. The NAL units may be video coding layer (VCL) NAL units, which each contain encoded image data, or adaptation parameter set NAL units, which each contain an adaptation parameter set (APS) having parameters for performing one or more types of processing operation on the image data contained in one or more VCL NAL units. The APS NAL units may be prefix APS NAL units P-APS or suffix APS NAL units S-APS. An additional constraint is applied to the bitstream prohibiting inclusion, in a picture unit, of a prefix APS NAL unit after the first NAL unit of the picture unit concerned. This can avoid more than one APS applying to slices belonging to the same picture unit, and hence reduce the size of an APS buffer. Alternatively, or in addition, it is permitted to include, in the same picture unit, a prefix APS NAL unit and a suffix APS NAL unit having the same APS type and the same APS identifier but different contents. This can reduce rewriting operations when performing random access decoding at a specific timing in the coded video sequence.
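Reading the constraint literally, prefix APS NAL units must come before any other NAL unit in their picture unit. A hypothetical conformance check (NAL unit kind names are illustrative, not from any spec):

```python
# Illustrative checker for the stated bitstream constraint: no prefix APS
# NAL unit may appear after the first non-prefix-APS NAL unit of a picture
# unit. A picture unit is modeled as an ordered list of NAL unit kinds.

def prefix_aps_positions_valid(picture_unit):
    """Return False if a prefix APS NAL unit follows any other NAL unit kind."""
    seen_other = False
    for kind in picture_unit:
        if kind != "PREFIX_APS":
            seen_other = True
        elif seen_other:
            return False  # prefix APS after the picture unit's leading NAL units
    return True

ok = prefix_aps_positions_valid(["PREFIX_APS", "VCL", "VCL", "SUFFIX_APS"])
bad = prefix_aps_positions_valid(["VCL", "PREFIX_APS", "VCL"])
```

Enforcing this ordering is what guarantees a single APS applies to all slices of the picture unit, which in turn bounds the APS buffer size.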
LUMA MAPPING- AND CHROMA SCALING-BASED VIDEO OR IMAGE CODING
According to the disclosure of the present document, for LMCS, a linear reshaper may be used, LMCS codewords (or ranges thereof) may be restricted, and in the chroma scaling of the LMCS, a single chroma residual scaling factor may be used. Accordingly, the resources/cost (of software or hardware) necessary for an LMCS procedure may be minimized, and latency in coding may be eliminated, thus enabling the LMCS procedure to be efficiently performed.
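The latency argument can be made concrete: with a linear reshaper the luma mapping is a single segment, so chroma residual scaling needs only one factor rather than a per-block lookup that depends on reconstructed luma. The sketch below is illustrative only and does not follow the normative LMCS derivation; the fixed-point shift of 6 bits and all names are assumptions.

```python
# Illustrative fixed-point sketch of a linear luma reshaper and a single
# chroma residual scaling factor (6-bit fractional precision assumed).

def linear_reshape(luma, slope, offset, max_val=1023):
    """Map a luma sample through one linear segment, clamped to 10-bit range."""
    return max(0, min(max_val, (luma * slope) // 64 + offset))

def scale_chroma_residual(residual, factor):
    """Apply a single chroma residual scaling factor to every sample.
    No per-block luma dependency, hence no added pipeline latency."""
    return [(r * factor) // 64 for r in residual]

mapped = linear_reshape(512, 70, 0)          # 512 * 70 // 64
chroma = scale_chroma_residual([64, -32, 16], 60)
```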
Method and apparatus for video encoding and decoding
A video encoding method, performed by a computer device, includes: obtaining a reference frame corresponding to a current frame from a video input to be encoded; determining a sampling manner corresponding to the current frame; sampling the reference frame based on the sampling manner according to resolution information of the current frame, to obtain a target reference frame corresponding to the reference frame; and encoding the current frame according to the target reference frame.
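The sampling step above can be sketched as follows. This is a minimal illustration under assumed names, using nearest-neighbour down/upsampling by integer factors; the actual method would choose the sampling manner per frame and use proper resampling filters.

```python
# Illustrative: resample the reference frame so its resolution matches the
# current frame, producing the target reference frame used for encoding.

def downsample(frame, factor):
    """Keep every `factor`-th sample in both dimensions."""
    return [row[::factor] for row in frame[::factor]]

def upsample(frame, factor):
    """Repeat each sample `factor` times in both dimensions (nearest)."""
    out = []
    for row in frame:
        wide = [v for v in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))
    return out

def target_reference(reference, ref_res, cur_res):
    """Sample the reference according to the current frame's resolution."""
    if cur_res < ref_res:
        return downsample(reference, ref_res // cur_res)
    if cur_res > ref_res:
        return upsample(reference, cur_res // ref_res)
    return reference

ref = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
half = target_reference(ref, 4, 2)   # current frame at half resolution
```

Matching resolutions first means the subsequent prediction operates sample-for-sample against the target reference frame.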
VIDEO DECODING APPARATUS AND VIDEO CODING APPARATUS
A video decoding apparatus includes matrix reference pixel derivation circuitry that derives reference samples by using top neighboring samples and left neighboring samples of a current block, weight matrix derivation circuitry that derives a weight matrix, matrix prediction image derivation circuitry that derives a prediction image, and matrix prediction image interpolation circuitry that derives a predicted image by using the prediction image. A size index is derived according to a value of a target block width and a value of a target block height. A prediction size is derived using the size index. In a case that a first condition, that both the value of the transform block width and the value of the transform block height are equal to 4, is true, the size index is set equal to 0 and the prediction size is set equal to 4.
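The size-index rule quoted above can be written out directly. Only the 4x4 case (index 0, prediction size 4) is stated in the text; the other mappings below are assumptions added to make the sketch complete.

```python
# Sketch of the size-index derivation: the first condition (both block
# dimensions equal to 4) gives size index 0 and prediction size 4. The
# remaining branches are illustrative assumptions, not from the text.

def derive_size_index(width, height):
    """Derive the matrix-prediction size index from block dimensions."""
    if width == 4 and height == 4:
        return 0            # first condition stated in the text
    if width <= 8 and height <= 8:
        return 1            # assumed mapping for mid-size blocks
    return 2                # assumed mapping for larger blocks

def prediction_size(size_index):
    """Prediction size derived from the size index (4 for index 0)."""
    return {0: 4, 1: 4, 2: 8}[size_index]

idx = derive_size_index(4, 4)    # 0, per the first condition
pred = prediction_size(idx)      # 4
```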