Patent classifications
H04N7/0137
VIDEO FRAME PULLDOWN BASED ON FRAME ANALYSIS
The described technology is directed towards generating a new video image sequence (e.g., for playback at 30 frames per second) based on an existing video image sequence (e.g., originated for playback at 24 frames per second). The technology is based on processing frames, e.g., adjacent pairs of frames in a four-frame sequence, to obtain candidate frames for selecting a similar candidate frame to insert into the original sequence to create the new sequence (e.g., a five-frame sequence). Aspects include selecting a repeated frame to insert or creating a new frame from existing frames to insert, to generate the new sequence based on a difference/scoring comparison.
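The selection step above can be sketched with a simple difference score. This is a minimal illustration, assuming mean absolute pixel difference as the score and "repeat a frame from the most-similar adjacent pair" as the selection rule; the abstract describes only a generic difference/scoring comparison and fixes neither choice:

```python
import numpy as np

def frame_difference(a, b):
    """Mean absolute pixel difference between two frames."""
    return float(np.mean(np.abs(a.astype(np.int32) - b.astype(np.int32))))

def pulldown_4_to_5(frames):
    """Expand a 4-frame sequence to 5 frames by repeating the frame whose
    adjacent-pair difference score is lowest (hypothetical scoring rule)."""
    assert len(frames) == 4
    # Score each adjacent pair; the pair with the smallest difference is the
    # best place to insert a repeated frame, since the repeat is least visible.
    scores = [frame_difference(frames[i], frames[i + 1]) for i in range(3)]
    i = int(np.argmin(scores))
    # Repeat the first frame of the most-similar pair.
    return frames[: i + 1] + [frames[i]] + frames[i + 1 :]

# Toy example: four 2x2 grayscale frames; frames 1 and 2 are nearly identical.
f = [np.full((2, 2), v, dtype=np.uint8) for v in (0, 100, 101, 200)]
new_seq = pulldown_4_to_5(f)
```

In this toy run the repeat lands between the two nearly identical frames, which is where a repeated frame is hardest to notice.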
METHOD FOR VIDEO FRAME INTERPOLATION, AND ELECTRONIC DEVICE
The disclosure provides a method for video frame interpolation. The method includes: obtaining a first visual semantic feature and first pixel information of a first frame, and obtaining a second visual semantic feature and second pixel information of a second frame; generating semantic optical flow information based on the first visual semantic feature and the second visual semantic feature; generating pixel optical flow information based on the first pixel information and the second pixel information; and generating an interpolation frame between the first frame and the second frame based on the semantic optical flow information and the pixel optical flow information, and inserting the interpolation frame between the first frame and the second frame.
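The two optical-flow sources can be combined in many ways; below is a minimal sketch assuming a simple weighted average as the fusion rule (the abstract says only that the interpolation frame is generated from both kinds of flow, so `alpha` and the averaging are assumptions):

```python
import numpy as np

def fuse_flows(semantic_flow, pixel_flow, alpha=0.5):
    """Fuse semantic-level and pixel-level optical flow fields.
    The weighted average is an illustrative assumption, not the
    patented combination rule."""
    return alpha * semantic_flow + (1.0 - alpha) * pixel_flow

# Toy fields: the semantic flow says "2 px right", the pixel flow "1 px right".
semantic = np.tile(np.array([2.0, 0.0]), (4, 4, 1))
pixel = np.tile(np.array([1.0, 0.0]), (4, 4, 1))
fused = fuse_flows(semantic, pixel)   # 1.5 px right everywhere
```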
OCCLUSION PROCESSING FOR FRAME RATE CONVERSION USING DEEP LEARNING
A method for converting the frame rate of an input video is provided. The method includes performing motion estimation by generating at least one motion field for at least one pair of frames of the input video, wherein the at least one motion field includes a set of motion vectors for each block of a first reference frame of the input video, the set of motion vectors pointing to a second reference frame of the input video, and preparing first data for a predetermined interpolation phase for each block of an interpolated frame, where the first data include at least one parameter for a pre-trained occlusion processing (OcC) convolutional neural network (CNN), the at least one parameter being obtained based on the at least one motion field.
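The motion-estimation stage that feeds the OcC CNN can be sketched as exhaustive block matching, producing one motion vector per block of the first reference frame. This is an illustrative stand-in only: the patent does not specify the estimator, and the CNN itself is omitted here:

```python
import numpy as np

def block_motion_field(ref1, ref2, block=4, search=2):
    """Exhaustive block matching: for each `block`x`block` tile of ref1,
    find the displacement into ref2 (within +/- `search` px) minimizing
    the sum of absolute differences (SAD)."""
    h, w = ref1.shape
    field = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            tile = ref1[y0:y0 + block, x0:x0 + block].astype(int)
            best, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + block > h or x1 + block > w:
                        continue  # candidate window falls outside ref2
                    cand = ref2[y1:y1 + block, x1:x1 + block].astype(int)
                    sad = int(np.abs(tile - cand).sum())
                    if best is None or sad < best:
                        best, best_v = sad, (dx, dy)
            field[by, bx] = best_v
    return field
```

The resulting per-block field is the kind of motion data from which the per-block CNN parameters would then be derived.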
METHOD AND APPARATUS FOR INTERPOLATING FRAME TO VIDEO, AND ELECTRONIC DEVICE
The disclosure provides a method and an apparatus for interpolating a frame to a video. A first deep-level feature of a first frame is obtained and a second deep-level feature of a second frame is obtained. Forward optical flow information and inverse optical flow information between the first frame and the second frame are obtained based on the first deep-level feature and the second deep-level feature. An interpolated frame between the first frame and the second frame is generated based on the forward optical flow information and the inverse optical flow information, and the interpolated frame is inserted between the first frame and the second frame.
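The bidirectional step can be sketched with the common midpoint approximation: sample the first frame along half the inverse flow, sample the second frame along half the forward flow, and average the two. The nearest-neighbor sampling and the plain average are simplifying assumptions, not the patented method:

```python
import numpy as np

def interpolate_bidirectional(f0, f1, flow_fwd, flow_bwd):
    """Midpoint interpolation from forward flow (f0 -> f1) and inverse
    flow (f1 -> f0), using the approximations
    F_mid->0 ~ 0.5 * flow_bwd and F_mid->1 ~ 0.5 * flow_fwd."""
    def backward_warp(img, flow, t):
        h, w = img.shape
        out = np.empty_like(img, dtype=float)
        for y in range(h):
            for x in range(w):
                dx, dy = flow[y, x]
                sx = min(max(int(round(x + t * dx)), 0), w - 1)
                sy = min(max(int(round(y + t * dy)), 0), h - 1)
                out[y, x] = img[sy, sx]
        return out
    w0 = backward_warp(f0, flow_bwd, 0.5)  # sample f0 via the inverse flow
    w1 = backward_warp(f1, flow_fwd, 0.5)  # sample f1 via the forward flow
    return 0.5 * (w0 + w1)
```

For a uniform 2-pixel rightward shift, both samples agree and the midpoint frame sits exactly one pixel between the inputs.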
Apparatus and methods for artifact detection and removal using frame interpolation techniques
Methods and apparatus for the generation of interpolated frames of video data. In one embodiment, the interpolated frames of video data are generated by obtaining two or more frames of video data from a video sequence; determining frame errors for the obtained two or more frames from the video sequence; determining whether the frame errors exceed a threshold value; performing a multi-pass operation; performing a single-pass operation; performing frame blending; performing edge correction; and generating the interpolated frame of image data.
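The thresholding step can be sketched as a simple dispatch plus the blending primitive. Which operation corresponds to which side of the threshold is an assumption; the abstract only lists the operations:

```python
import numpy as np

def choose_interpolation_mode(frame_errors, threshold):
    """Map per-frame error scores to an operation, following the abstract's
    thresholding step (the error-to-mode mapping is an assumption)."""
    return ["multi_pass" if e > threshold else "single_pass"
            for e in frame_errors]

def blend_frames(f0, f1, w=0.5):
    """Weighted frame blending, one of the listed operations."""
    return w * f0.astype(float) + (1.0 - w) * f1.astype(float)
```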
VIDEO DECODING METHOD AND CAMERA
The present invention provides a video decoding method and a camera. The method comprises: loading a video, and decoding each frame from the video; calculating a first optical flow field for any two adjacent frames; calculating, according to the first optical flow field, a second optical flow field at the position where a frame is to be inserted, the position being located between the two adjacent frames; using the second optical flow field to calculate, for each pixel of the frame to be inserted, the corresponding pixel position in the earlier of the two adjacent frames, and assigning the pixel value of that frame to the pixel of the frame to be inserted; and placing the inserted frames and the decoded original frames together in time order, so as to reconstitute a video having a high frame rate. The present invention eliminates or reduces discrete motion, providing a smoother viewing experience.
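The per-pixel assignment described here maps directly to code. A minimal sketch, assuming the flow field stores forward motion in pixels per frame interval and using nearest-neighbor sampling (the claims do not specify the sampling):

```python
import numpy as np

def insert_frame(prev_frame, flow_1, t=0.5):
    """Build the inserted frame: scale the adjacent-pair flow `flow_1` by
    the temporal position `t` to get the second flow field at the insertion
    point, then copy each pixel from the corresponding position in the
    previous frame."""
    h, w = prev_frame.shape
    flow_2 = t * flow_1                      # second flow field at position t
    out = np.empty_like(prev_frame)
    for y in range(h):
        for x in range(w):
            dx, dy = flow_2[y, x]
            # A pixel at p in the previous frame moves to p + flow_2 by
            # time t, so (x, y) in the inserted frame came from (x, y) - flow_2.
            sx = min(max(int(round(x - dx)), 0), w - 1)
            sy = min(max(int(round(y - dy)), 0), h - 1)
            out[y, x] = prev_frame[sy, sx]
    return out
```

For a uniform 2-pixel-per-frame rightward motion, the frame inserted at t = 0.5 is the previous frame shifted right by one pixel, as expected.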
Display Device, Signal Processing Device, And Signal Processing Method
The present technology relates to a display device, a signal processing device, and a signal processing method that make it possible to realize interpolation better suited for viewing when interpolating motion between original images.
Provided is a display device including a signal processing unit that, when an interpolation frame interpolating between original frames is generated along a time axis, controls the interpolation rate of the interpolation frame depending on the motion between the original frames in a certain direction. The present technology can be applied, for example, to a television receiver.
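The rate control can be sketched as a scalar rule that lowers the interpolation rate as motion in one direction grows. The chosen direction (vertical) and the linear falloff are purely illustrative assumptions; the abstract only states that the rate depends on directional motion:

```python
def interpolation_rate(motion_x, motion_y, vertical_weight=0.2):
    """Toy control rule: full interpolation (rate 1.0) for purely
    horizontal motion, tapering toward no interpolation (rate 0.0)
    as vertical motion grows. Both the direction and the falloff
    are hypothetical."""
    rate = 1.0 - vertical_weight * abs(motion_y)
    return max(0.0, min(1.0, rate))
```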
Frame interpolation with multi-scale deep loss functions and generative adversarial networks
A method includes selecting two or more frames from a plurality of frames of a video; downscaling the two or more frames; estimating flow data based on an optical flow associated with the downscaled frames; upscaling the flow data; generating refined flow data based on the upscaled flow data and the downscaled frames; upscaling the refined flow data; and synthesizing an image based on the upscaled refined flow data and the two or more frames.
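The pipeline reads as a coarse-to-fine flow estimator and can be sketched as a scaffold, with the learned estimation and refinement networks abstracted into callables (`estimate` and `refine` are hypothetical placeholders; the patent trains them with multi-scale losses and a GAN):

```python
import numpy as np

def downscale(img, factor=2):
    """Average-pool downscaling (stand-in for the method's downscaler)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upscale_flow(flow, factor=2):
    """Nearest-neighbor upscaling of a flow field; vectors are also
    multiplied by the factor so displacements stay in pixel units."""
    up = np.repeat(np.repeat(flow, factor, axis=0), factor, axis=1)
    return up * factor

def coarse_to_fine_flow(f0, f1, estimate, refine):
    """Skeleton of the abstract's steps: downscale, estimate flow at the
    coarse scale, upscale, refine against the downscaled frames, and
    upscale the refined flow."""
    d0, d1 = downscale(f0), downscale(f1)
    flow = estimate(d0, d1)       # coarse flow estimate
    flow = upscale_flow(flow)     # upscale the flow data
    flow = refine(flow, d0, d1)   # refine using the downscaled frames
    return upscale_flow(flow)     # upscale the refined flow data
```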
APPARATUS AND METHOD FOR VIRTUAL REALITY SICKNESS REDUCTION BASED ON VIRTUAL REALITY SICKNESS ASSESSMENT
Disclosed is an apparatus and method for VR sickness reduction based on VR sickness assessment. According to an embodiment of the inventive concept, an apparatus for reducing cybersickness from virtual reality (VR) content includes a first module that extracts feature information for each of several predetermined cybersickness-precipitating factors by analyzing the VR content, and a second module that determines, based on the extracted feature information, which of those factors requires cybersickness reduction, and that regenerates the VR content so that its cybersickness score is not greater than a predetermined reference score, by performing the reduction on the corresponding feature information using a deep learning neural network pre-trained for each determined cybersickness-precipitating factor.
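The two-module control loop can be caricatured as score-driven greedy reduction. This is a loose sketch only: in the patent the factor scores come from the assessment analysis, and the reduction is performed by per-factor deep networks rather than a scalar decrement:

```python
def reduce_cybersickness(factor_scores, reference_score, reduce_step=1.0):
    """Repeatedly pick the worst-scoring precipitating factor and reduce
    it until the overall cybersickness score is at or below the reference.
    Scores and the fixed reduction step are illustrative assumptions."""
    scores = dict(factor_scores)
    while sum(scores.values()) > reference_score:
        worst = max(scores, key=scores.get)   # factor needing reduction
        scores[worst] = max(0.0, scores[worst] - reduce_step)
        if all(v == 0.0 for v in scores.values()):
            break  # nothing left to reduce
    return scores
```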