Patent classifications
H04N19/587
Video Frame Interpolation Via Feature Pyramid Flows
Systems and methods for generating interpolated images are disclosed. In examples, image features are extracted from a first image and a second image; these features may be warped using first and second pluralities of parameters. A first candidate intermediate frame may be generated based on the warped first features and the warped second features. Multi-scale features associated with the image features extracted from the first image and the second image may be obtained and warped using the first and second pluralities of parameters. A second candidate intermediate frame may be generated based on the warped first multi-scale features and the warped second multi-scale features. By blending the first candidate intermediate frame with the second candidate intermediate frame, an interpolated image may be generated.
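The warp-then-blend pipeline described in the abstract can be sketched minimally. The function names (`warp`, `blend`), the nearest-neighbor sampling, and the per-pixel blending mask are illustrative assumptions; the patent itself specifies neither the warping operator nor the blending rule.

```python
import numpy as np

def warp(image, flow):
    """Backward-warp an image by a per-pixel flow field using
    nearest-neighbor sampling (a simplification; real systems
    typically use bilinear sampling).
    image: (H, W) array; flow: (H, W, 2) array of (dy, dx) offsets."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

def blend(candidate_a, candidate_b, mask):
    """Per-pixel convex combination of two candidate intermediate
    frames; mask values lie in [0, 1]."""
    return mask * candidate_a + (1.0 - mask) * candidate_b
```

With a zero flow field, `warp` is the identity, and a constant 0.5 mask averages the two candidates.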
Methods and systems for video processing
A method for processing an online video stream may include determining a transmission performance of a network for a queue of video frames, wherein each video frame in the queue may be associated with a priority level. The method may also include determining a maximum discarding level based on the transmission performance of the network. The method may further include removing a target video frame of which the associated priority level is lower than or equal to the maximum discarding level from the queue.
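The priority-based discarding described above can be sketched as follows. The mapping from transmission performance to a discarding level is a hypothetical policy (the abstract does not define one), and the function names and the bandwidth parameters are assumptions for illustration.

```python
from collections import deque

def max_discarding_level(measured_kbps, required_kbps, num_levels=4):
    """Map a bandwidth shortfall to a maximum discarding level
    (hypothetical policy): the larger the shortfall relative to the
    required rate, the higher the level, capped at num_levels."""
    if measured_kbps >= required_kbps:
        return 0  # network keeps up; no frames need to be dropped
    shortfall = 1.0 - measured_kbps / required_kbps
    return min(num_levels, 1 + int(shortfall * num_levels))

def drop_low_priority(queue, max_discard_level):
    """Remove frames whose priority level is lower than or equal to
    the maximum discarding level.
    queue: iterable of (frame_id, priority_level) pairs."""
    return deque(f for f in queue if f[1] > max_discard_level)
```

For example, a measured rate equal to or above the required rate yields level 0, so every queued frame is kept.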
METHODS AND SYSTEMS FOR MANAGING STORAGE OF VIDEOS IN A STORAGE DEVICE
Systems and methods for storing videos in a storage device, more specifically in a digital video recorder (DVR). The method includes determining, for each video, a condition indicating a need to reduce the size of the respective video, the condition being at least that a storage period of the respective video exceeds a threshold time period. Upon that determination, the video size is reduced either by reducing at least one quality parameter of the respective video or by eliminating at least one video segment corresponding to an uneventful region. After the size reduction, the respective video is stored as a modified version in the storage device. Further, the video, or a portion of the video comprising important events, can be shared with a remote device.
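The two reduction paths (age-triggered, then quality reduction or uneventful-segment removal) can be sketched minimally. The 30-day retention default and the `(start, end, has_event)` segment representation are assumptions for illustration only.

```python
import datetime as dt

def needs_reduction(stored_at, now, retention_days=30):
    """True when a video's storage period exceeds the threshold
    (retention_days is a hypothetical default)."""
    return (now - stored_at) > dt.timedelta(days=retention_days)

def prune_uneventful(segments):
    """Keep only segments flagged as containing events.
    segments: list of (start_s, end_s, has_event) tuples."""
    return [(start, end) for start, end, has_event in segments if has_event]
```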
Neural Network-Based Video Compression with Spatial-Temporal Adaptation
A mechanism for processing video data is disclosed. A determination is made to apply an end-to-end neural network-based video codec to a current video unit of a video. The end-to-end neural network-based video codec comprises a spatial-temporal adaptive compression (STAC) component including a frame extrapolative compression (FEC) branch and an image compression branch. A conversion is performed between the current video unit and a bitstream of the video via the end-to-end neural network-based video codec.
HIGH QUALITY UI ELEMENTS WITH FRAME EXTRAPOLATION
A frame processor may generate a mask based on one or more static regions of a first set of frames of a plurality of previous frames and adjust the mask to determine alpha data and/or conceal distorted content associated with the one or more static regions of the first set of frames. The distorted content may be caused by extrapolation of a frame from a second set of frames of the plurality of previous frames. The frame processor may generate a composite frame by applying at least one of the mask or the alpha data to a previous frame of the plurality of previous frames, and applying that masked previous frame to the frame extrapolated from the second set of frames of the plurality of previous frames.
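The masked compositing step can be sketched as a per-pixel weighted combination. Treating the mask and alpha data as multiplicative weights is an assumption; the abstract does not specify the compositing arithmetic.

```python
import numpy as np

def composite(extrapolated, previous, mask, alpha):
    """Composite static regions (mask == 1) from a previous frame over
    an extrapolated frame, weighted by per-pixel alpha data
    (hypothetical sketch of the masking arithmetic)."""
    weight = alpha * mask
    return weight * previous + (1.0 - weight) * extrapolated
```

Where the mask is zero the extrapolated frame passes through unchanged; where the mask and alpha are one, the static content from the previous frame fully replaces (conceals) the extrapolated content.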
Method and apparatus for encoding or decoding video data in FRUC mode with reduced memory accesses
The present disclosure concerns a method and a device for encoding or decoding video data. It concerns more particularly the encoding according to a particular encoding mode using a decoder side motion vector derivation mode referenced as frame-rate up conversion mode or FRUC mode. It concerns encoding and decoding improvement which reduce the need for memory accesses when using an encoding mode where the motion information is predicted using a decoder side motion vector derivation method.
Video decoding method and camera
The present invention provides a video decoding method and a camera. The method comprises: loading a video and decoding each frame of the video; calculating a first optical flow field for any two adjacent frames; calculating, according to the first optical flow field, a second optical flow field at the position where a frame is to be inserted, the position being located between the two adjacent frames; using the second optical flow field to calculate, for each pixel of the frame to be inserted, a corresponding pixel position in the earlier of the two adjacent frames, and assigning that pixel value of the previous frame to the pixel of the frame to be inserted; and arranging the inserted frames and the decoded original frames in time sequence, so as to reconstitute a video having a higher frame rate. The present invention eliminates or reduces discrete motion, providing a smoother viewing experience.
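The pixel-copying interpolation step can be sketched as below. Halving the inter-frame flow to obtain the flow at the midpoint, and the nearest-neighbor sampling, are simplifying assumptions; the abstract does not state how the second flow field is derived from the first.

```python
import numpy as np

def insert_midpoint_frame(prev_frame, flow):
    """Synthesize a frame halfway between two decoded frames
    (hypothetical sketch): scale the inter-frame flow by 0.5, then for
    each pixel of the inserted frame, copy the pixel value from the
    corresponding position in the previous frame.
    prev_frame: (H, W) array; flow: (H, W, 2) array of (dy, dx)."""
    h, w = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mid_flow = 0.5 * flow  # assumed linear motion to the midpoint
    src_y = np.clip(np.round(ys - mid_flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - mid_flow[..., 1]).astype(int), 0, w - 1)
    return prev_frame[src_y, src_x]
```

With zero flow (a static scene), the inserted frame is an exact copy of the previous frame, as expected.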
FRACTIONAL SAMPLE INTERPOLATION FOR REFERENCE PICTURE RESAMPLING
Concepts are described, including encoding of a video into a data stream and decoding of a data stream having a video encoded thereinto, using motion-compensated prediction between pictures of equal resolution and pictures of different resolution, based on motion vectors at a half-sample resolution and on motion vectors at a different resolution, using interpolation filters to obtain sub-sample values within a reference sample array. The interpolation filter is selected from two interpolation filter versions that differ in edge-preserving property, and the selection depends on whether a current picture is equal in picture resolution to the reference sample array in the horizontal and/or vertical dimension, and/or on constraint information in the data stream.
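The selection logic can be sketched as a decision function. The filter names, tap counts, and the exact role of the constraint flag are hypothetical; the abstract only specifies that the choice depends on resolution equality and/or constraint information.

```python
def select_interpolation_filter(cur_w, cur_h, ref_w, ref_h,
                                constraint_flag=False):
    """Choose between two hypothetical half-sample filter versions:
    the more edge-preserving one when the current picture matches the
    reference sample array in both dimensions and no constraint
    forbids it, otherwise the smoother one for resampled references."""
    same_resolution = (cur_w == ref_w) and (cur_h == ref_h)
    if same_resolution and not constraint_flag:
        return "edge_preserving_filter"
    return "smoothing_filter"
```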