H04N7/0137

Video processing method, computer device, and storage medium

A video processing method is provided. In the method, circuitry of a terminal determines a second video frame portion associated with a second time point that is after a first time point associated with a first video frame portion of a video. The circuitry generates a transitional video frame based on the first video frame portion and the second video frame portion. A color value of a pixel at a target pixel location in the transitional video frame is within a target color interval. The target color interval is determined according to a color value of a pixel at the target pixel location in the first video frame portion and a color value of a pixel at the target pixel location in the second video frame portion. The circuitry performs display control of the video according to the transitional video frame.
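The interval constraint on the transitional frame can be sketched as follows. This is a minimal NumPy illustration, not the patent's method: the function name `transitional_frame` and the convex-blend choice are assumptions; any pixel-wise rule whose output stays between the two source pixels would satisfy the stated constraint.

```python
import numpy as np

def transitional_frame(first, second, alpha=0.5):
    """Blend two video frame portions so that every pixel of the
    transitional frame lies within the color interval spanned by the
    corresponding pixels of the first and second portions."""
    a = first.astype(np.float32)
    b = second.astype(np.float32)
    blended = (1.0 - alpha) * a + alpha * b
    lo = np.minimum(a, b)
    hi = np.maximum(a, b)
    # The clip is redundant for a convex blend but enforces the
    # target color interval explicitly.
    return np.clip(blended, lo, hi).astype(first.dtype)
```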

METHOD FOR FORMING AN OUTPUT IMAGE SEQUENCE FROM AN INPUT IMAGE SEQUENCE, METHOD FOR RECONSTRUCTING AN INPUT IMAGE SEQUENCE FROM AN OUTPUT IMAGE SEQUENCE, ASSOCIATED DEVICES, SERVER EQUIPMENT, CLIENT EQUIPMENT AND COMPUTER PROGRAMS

A method for forming an image sequence that is an output sequence, from an input image sequence, is provided. The input image sequence has an input spatial resolution and an input temporal resolution. The output sequence has an output temporal resolution equal to the input temporal resolution and an output spatial resolution equal to a predetermined fraction 1/N of the input spatial resolution, where N is an integer greater than or equal to 2. The method, implemented for a sub-sequence of the input image sequence that is a current input sub-sequence and that includes a preset number of images, includes: obtaining a temporal frequency that is an image frequency associated with the current input sub-sequence; processing the current input sub-sequence to obtain an output sub-sequence; and inserting the output sub-sequence and the associated image frequency into an output container.
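The per-sub-sequence flow can be sketched as follows. This is an assumption-laden toy: naive 1/n decimation per axis stands in for whatever resampling the method actually specifies, and a plain dict stands in for the output container; `form_output_sequence` is a hypothetical name.

```python
import numpy as np

def form_output_sequence(frames, image_frequency, n=2):
    """Downsample each frame of the current input sub-sequence by 1/n
    per spatial axis while keeping the temporal resolution, then pack
    the sub-sequence together with its image frequency."""
    assert n >= 2, "the fraction 1/N requires N >= 2"
    out_frames = [frame[::n, ::n] for frame in frames]  # naive decimation
    return {"image_frequency": image_frequency, "frames": out_frames}
```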

Video Repair Method and Apparatus, and Storage Medium

The present disclosure relates to a video repair method and apparatus, an electronic device, and a storage medium. The method includes: acquiring first forward optical flows and first reverse optical flows between adjacent images among continuous multiple frames of images; respectively performing optical flow optimization processing on the first forward optical flows and the first reverse optical flows to obtain second forward optical flows corresponding to the first forward optical flows and second reverse optical flows corresponding to the first reverse optical flows; performing forward conduction optimization and reverse conduction optimization on the continuous multiple frames of images by utilizing the second forward optical flows and the second reverse optical flows, respectively, until all the images in the optimized continuous multiple frames of images satisfy repair requirements; and obtaining repaired images of the continuous multiple frames of images according to the optimized images obtained by the forward conduction optimization and the reverse conduction optimization.
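The conduction idea (pulling known pixels along flow into missing regions, pass after pass) can be sketched as follows. This is a heavily simplified stand-in: a constant integer flow per step replaces the per-pixel optimized optical flows, and `conduction_pass` is a hypothetical name.

```python
import numpy as np

def conduction_pass(frames, masks, steps):
    """One conduction pass over a list of single-channel frames.
    `masks[i]` is True where frame i is missing; each step
    (i, j, (dy, dx)) fills missing pixels of frame i from frame j
    through a constant integer flow."""
    h, w = frames[0].shape
    ys, xs = np.mgrid[0:h, 0:w]
    for i, j, (dy, dx) in steps:
        src_y = np.clip(ys + dy, 0, h - 1)
        src_x = np.clip(xs + dx, 0, w - 1)
        # Fill only where frame i is missing and the source pixel is known.
        fill = masks[i] & ~masks[j][src_y, src_x]
        frames[i][fill] = frames[j][src_y, src_x][fill]
        masks[i][fill] = False
    return frames, masks
```

Forward conduction would iterate steps (i+1, i, flow) over increasing i, reverse conduction the mirror image, repeating until every mask is empty.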

Systems and methods for enhanced motion detection, object tracking, situational awareness and super resolution video using microscanned images

A method for displaying super-resolution video of at least one moving object without image artifacts is provided. The method includes the procedures of: acquiring microscanned images of the at least one moving object, a first and a second subset of the images respectively forming a first and a second data set; for each data set, analyzing at least a portion of the subset of images for spatial and temporal information, and determining a respective movement indication of the moving object according to the spatial and temporal information; in parallel to the analyzing procedure, forming a respective super-resolution image from each data set and designating a respective bounded area surrounding the moving object; and repeatedly displaying each super-resolution image outside the bounded area a plurality of times at a video frame rate while displaying, during those times and within the respective bounded area, a plurality of consecutive microscanned images of the moving object at the video frame rate.
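The core of microscan super-resolution, combining sub-pixel-shifted low-resolution captures into one denser grid, can be sketched as follows. This is a naive shift-and-interleave illustration with no registration, denoising, or deblurring; `interleave_superres` is a hypothetical name, not the patent's procedure.

```python
import numpy as np

def interleave_superres(images, shifts, factor=2):
    """Place each microscanned low-resolution image at its integer
    sub-pixel shift on a grid `factor` times denser in each axis."""
    h, w = images[0].shape
    hi = np.zeros((h * factor, w * factor), dtype=images[0].dtype)
    for img, (dy, dx) in zip(images, shifts):
        hi[dy::factor, dx::factor] = img
    return hi
```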

VIDEO FRAME PULLDOWN BASED ON FRAME ANALYSIS
20210021783 · 2021-01-21

The described technology is directed towards generating a new video image sequence (e.g., for playback at 30 frames per second) based on an existing video image sequence (e.g., originated for playback at 24 frames per second). The technology is based on processing frames, e.g., adjacent pairs of frames in a four-frame sequence, to obtain candidate frames for selecting a similar candidate frame to insert into the original sequence to create the new sequence (e.g., a five-frame sequence). Aspects include selecting a repeated frame to insert or creating a new frame from existing frames to insert, to generate the new sequence based on a difference/scoring comparison.
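The 4-to-5 expansion with a difference/scoring comparison can be sketched as follows. This is one plausible reading, not the patent's algorithm: mean absolute difference stands in for the scoring, and the sketch always inserts a blend of the most similar adjacent pair (the abstract's other option, repeating one frame of that pair, would slot in at the same point).

```python
import numpy as np

def pulldown_4_to_5(frames):
    """Expand every 4-frame group into a 5-frame group: score adjacent
    pairs by mean absolute difference and insert a blend of the most
    similar pair immediately after that pair's first frame."""
    out = []
    for g in range(0, len(frames) - 3, 4):
        group = frames[g:g + 4]
        diffs = [np.abs(a.astype(np.float32) - b.astype(np.float32)).mean()
                 for a, b in zip(group, group[1:])]
        k = int(np.argmin(diffs))        # most similar adjacent pair
        blend = ((group[k].astype(np.float32) +
                  group[k + 1].astype(np.float32)) / 2).astype(group[0].dtype)
        out.extend(group[:k + 1] + [blend] + group[k + 1:])
    return out
```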

Multi-frame video interpolation using optical flow
10776688 · 2020-09-15

Video interpolation is used to predict one or more intermediate frames at timesteps defined between two consecutive frames. A first neural network model approximates optical flow data defining motion between the two consecutive frames. A second neural network model refines the optical flow data and predicts visibility maps for each timestep. The two consecutive frames are warped according to the refined optical flow data for each timestep to produce pairs of warped frames for each timestep. The second neural network model then fuses the pair of warped frames based on the visibility maps to produce the intermediate frame for each timestep. Artifacts caused by motion boundaries and occlusions are reduced in the predicted intermediate frames.
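The final fusion step can be sketched as follows, assuming a single visibility map `vis0` with the complementary map implied (`1 - vis0`). This is a common time-weighted fusion rule, not necessarily the exact one the second network learns; the warping and the networks themselves are out of scope here.

```python
import numpy as np

def fuse_warped_frames(warp0, warp1, vis0, t):
    """Fuse the two frames warped to timestep t in (0, 1): weight each
    warped frame by its visibility (vis0 is 1 where the pixel is
    visible in frame 0, 0 where occluded) and by temporal proximity,
    then normalize so the weights sum to one."""
    w0 = (1.0 - t) * vis0
    w1 = t * (1.0 - vis0)
    return (w0 * warp0 + w1 * warp1) / np.maximum(w0 + w1, 1e-8)
```

Occluded pixels thus draw only on the frame in which they are visible, which is how the visibility maps suppress ghosting at motion boundaries.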

APPARATUS AND METHODS FOR ARTIFACT DETECTION AND REMOVAL USING FRAME INTERPOLATION TECHNIQUES
20200160495 · 2020-05-21

Methods and apparatus for the generation of interpolated frames of video data. In one embodiment, the interpolated frames of video data are generated by obtaining two or more frames of video data from a video sequence; determining frame errors for the obtained two or more frames from the video sequence; determining whether the frame errors exceed a threshold value; performing a multi-pass operation; performing a single-pass operation; performing frame blending; performing edge correction; and generating the interpolated frame of image data.
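The pipeline wiring can be sketched as follows. This is one plausible interpretation: the abstract does not say how the threshold test relates to the two operations, so the sketch assumes it selects the heavier multi-pass path, and every stage is an injected callable because the abstract does not define any of them.

```python
def interpolate_frame(f0, f1, frame_error, threshold,
                      multi_pass, single_pass, blend, edge_correct):
    """Hypothetical wiring of the listed stages: pick multi-pass when
    the frame error exceeds the threshold, otherwise single-pass,
    then apply frame blending and edge correction to the result."""
    op = multi_pass if frame_error > threshold else single_pass
    return edge_correct(blend(op(f0, f1)))
```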

Image processing method capable of deinterlacing the interlacing fields
10587840 · 2020-03-10

The image processing system receives first field information, second field information, and third field information. The first and the third field information correspond to first pixels, and the second field information corresponds to second pixels; the first pixels and the second pixels are disposed in interlaced rows. The system generates the motion adaptive deinterlacing parameter of a first pixel by performing motion detection and interpolation according to the first and the third field information, calculates the horizontal and the vertical compensating display parameters of the first pixel according to the horizontal and vertical motion estimation values and the first and the third field information, and generates the mixed display parameter of the first pixel by using a weighted average of the horizontal or the vertical compensating display parameter of the first pixel and the motion adaptive deinterlacing parameter of the first pixel.
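The final mixing step reduces to a weighted average; a one-line sketch follows. The name `mixed_display_parameter` and the assumption that the weight tracks motion-estimation confidence are illustrative, not from the patent.

```python
def mixed_display_parameter(compensating, deinterlacing, weight):
    """Weighted average of a compensating display parameter
    (horizontal or vertical, whichever was selected) and the motion
    adaptive deinterlacing parameter; `weight` in [0, 1] is assumed
    to favor compensation when the motion estimate is reliable."""
    assert 0.0 <= weight <= 1.0
    return weight * compensating + (1.0 - weight) * deinterlacing
```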

Occlusion processing for frame rate conversion using deep learning

A method for converting a frame rate of an input video is provided. The method includes performing a motion estimation by generating at least one motion field for at least one pair of frames of the input video, wherein the at least one motion field includes a set of motion vectors for each block of a first reference frame of the input video, the set of motion vectors pointing to a second reference frame of the input video; and preparing first data for a predetermined interpolation phase for each block of an interpolated frame, where the first data include at least one parameter for a pre-trained occlusion processing (OcC) convolutional neural network (CNN), the at least one parameter being obtained based on the at least one motion field.
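The block motion field underlying such a scheme can be sketched with classic exhaustive block matching. This is a generic stand-in for the patent's motion estimation (which it does not detail), and says nothing about the OcC CNN itself; `block_motion_field` and the SAD criterion are assumptions.

```python
import numpy as np

def block_motion_field(ref1, ref2, block=4, search=2):
    """For each `block`x`block` tile of the first reference frame,
    exhaustively search a +/-`search` window in the second reference
    frame for the displacement with the lowest sum of absolute
    differences; the resulting vectors form one motion field."""
    h, w = ref1.shape
    field = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            tile = ref1[by:by + block, bx:bx + block].astype(np.float32)
            best_sad, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = ref2[y:y + block, x:x + block].astype(np.float32)
                        sad = float(np.abs(tile - cand).sum())
                        if best_sad is None or sad < best_sad:
                            best_sad, best_mv = sad, (dy, dx)
            field[(by, bx)] = best_mv
    return field
```

Per-block statistics of such a field (vector divergence, match quality) are the kind of quantity one could feed to an occlusion-processing network as the abstract's "at least one parameter".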