Patent classifications
H04N7/0135
IMAGING SYSTEM AND ELECTRONIC DEVICE
An imaging system that has an image processing function and is capable of generating an interpolation image is provided. The imaging system has an additional function such as image processing and can generate an interpolation image by using image data output from an imaging device. The imaging device can perform filter processing in parallel during a light exposure period, and can therefore perform a large number of arithmetic operations and generate a high-quality interpolation image. The number of arithmetic operations can be increased further during image capture in a dark place, which requires a long exposure time. Accordingly, the frame rate can be substantially increased, and high-quality moving image data can be generated.
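The interpolation described here can be illustrated with a minimal sketch: blending two captured frames at an intermediate temporal position. The function and parameter names are illustrative assumptions, not from the patent, and the blending is the simplest possible scheme.

```python
# Hypothetical sketch: generating an interpolation frame between two
# captured frames by linear temporal blending. Names are illustrative.
import numpy as np

def interpolate_frame(frame_a: np.ndarray, frame_b: np.ndarray,
                      t: float = 0.5) -> np.ndarray:
    """Blend two frames at temporal position t in [0, 1]."""
    return (1.0 - t) * frame_a.astype(np.float64) + t * frame_b.astype(np.float64)

frame_a = np.zeros((2, 2))
frame_b = np.full((2, 2), 10.0)
mid = interpolate_frame(frame_a, frame_b)  # midpoint frame, all pixels 5.0
```

A real implementation of the patent's approach would replace this blend with the filter processing performed in parallel during exposure.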
Image processing apparatus and recording medium
An image processing apparatus includes: a signal receiver configured to receive an input image; an image processor configured to process the input image and generate an output image; a storage configured to store a first patch corresponding to a first pixel of the input image; and a controller configured to control the image processor to generate the output image by applying the first patch stored in the storage to the first pixel.
Systems and methods for psycho-signal processing
Systems and methods for psycho-signal processing. According to an aspect, a method includes receiving a visual representation of a subject. The method also includes performing a structured motion operation on the received visual representation to generate a modified visual representation of the subject. The method further includes presenting, via a user interface, the modified visual representation.
METHOD AND APPARATUS FOR PROCESSING VIDEO
Systems and methods are described for processing video content. Content may be encoded/transcoded for delivery to a computing device that requested the content. The content may be encoded as temporally interlaced blocks, which interlace frames over time. Because the frames are temporally interlaced, the computing device may be able to decode and play back a full-length or complete video at a reduced quality even if every temporally interlaced block was not received by the computing device, such as when network bandwidth is low or the network connection is disrupted. Receiving only a portion of the temporally interlaced blocks may still be sufficient to display a full-length video at a reduced quality (e.g., at a reduced bit rate).
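The temporal-interlacing scheme can be sketched as distributing frames round-robin across blocks, so that any received subset of blocks still covers the full duration at a reduced frame rate. The round-robin layout and function names below are assumptions for illustration.

```python
# Sketch of temporal interlacing: frames are dealt round-robin across
# num_blocks blocks; losing a block drops frame rate, not duration.
def interlace(frames, num_blocks):
    """Split a frame sequence into temporally interlaced blocks."""
    return [frames[i::num_blocks] for i in range(num_blocks)]

def playback(blocks_received, num_blocks):
    """Reassemble whichever (offset, block) pairs arrived; missing
    frames are simply skipped, yielding a reduced frame rate."""
    out = {}
    for offset, block in blocks_received:
        for j, frame in enumerate(block):
            out[offset + j * num_blocks] = frame
    return [out[i] for i in sorted(out)]

frames = list(range(8))                  # frame indices 0..7
blocks = interlace(frames, 2)            # [[0, 2, 4, 6], [1, 3, 5, 7]]
partial = playback([(0, blocks[0])], 2)  # only block 0 arrives -> [0, 2, 4, 6]
```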
Increasing resolution and luminance of a display
The disclosed system modifies luminance of a display associated with a selective screen. The display provides a camera with an image having resolution higher than the resolution of the display by presenting multiple images while the selective screen enables light from different portions of the multiple images to reach the camera. The resulting luminance of the recorded image is lower than a combination of luminance values of the multiple images. The processor obtains a criterion indicating a property of the input image where image detail is unnecessary. The processor detects a region of the input image satisfying the criterion, and determines a region of the selective screen corresponding to the region of the input image. The processor increases the luminance of the display by disabling the region of the selective screen corresponding to the region of the input image.
Increasing resolution and luminance of a display
The disclosed system increases resolution of a display. The display operates at a predetermined frequency by displaying a first image at a first time and a second image at a second time. A selective screen disposed between the display and the camera includes multiple light transmitting elements. A light transmitting element A redirects a first portion of light transmitted by the display. A light transmitting element B allows a second portion of light transmitted by the display to reach the camera. The selective screen increases the resolution of the display by operating at the predetermined frequency and causing a first portion of the first image to be shown at the first time, and a second portion of the second image to be shown at the second time. The camera forms an image from the first portion of the first image, and the second portion of the second image.
OPTIMIZATION OF ADAPTIVE CONVOLUTIONS FOR VIDEO FRAME INTERPOLATION
Embodiments are disclosed for video image interpolation. In some embodiments, video image interpolation includes receiving a pair of input images from a digital video, determining, using a neural network, a plurality of spatially varying kernels each corresponding to a pixel of an output image, convolving a first set of spatially varying kernels with a first input image from the pair of input images and a second set of spatially varying kernels with a second input image from the pair of input images to generate filtered images, and generating the output image by performing kernel normalization on the filtered images.
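The adaptive-convolution step described above can be sketched as follows: each output pixel has its own kernel per input image, the two filtered sums are added, and the result is divided by the summed kernel weights (the kernel normalization the abstract names). Shapes, names, and the naive double loop are assumptions; a real implementation would use the network-predicted kernels and a vectorized form.

```python
# Sketch of spatially varying (adaptive) kernel interpolation with
# kernel normalization. k1/k2 hold one k x k kernel per output pixel.
import numpy as np

def adaptive_filter(img1, img2, k1, k2):
    """img1/img2: (H, W) inputs; k1/k2: (H, W, k, k) per-pixel kernels."""
    H, W, k, _ = k1.shape
    r = k // 2
    p1 = np.pad(img1, r, mode="edge")
    p2 = np.pad(img2, r, mode="edge")
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            w1, w2 = k1[y, x], k2[y, x]
            num = (np.sum(p1[y:y + k, x:x + k] * w1)
                   + np.sum(p2[y:y + k, x:x + k] * w2))
            den = np.sum(w1) + np.sum(w2)  # kernel normalization
            out[y, x] = num / den
    return out

img1 = np.zeros((2, 2))
img2 = np.full((2, 2), 4.0)
ker = np.ones((2, 2, 3, 3))
mid = adaptive_filter(img1, img2, ker, ker)  # uniform kernels -> all 2.0
```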
VIDEO FRAME PULLDOWN BASED ON FRAME ANALYSIS
The described technology is directed towards generating a new video image sequence (e.g., for playback at 30 frames per second) based on an existing video image sequence (e.g., originated for playback at 24 frames per second). The technology is based on processing frames, e.g., adjacent pairs of frames in a four-frame sequence, to obtain candidate frames for selecting a similar candidate frame to insert into the original sequence to create the new sequence (e.g., a five-frame sequence). Aspects include selecting a repeated frame to insert or creating a new frame from existing frames to insert, to generate the new sequence based on a difference/scoring comparison.
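The candidate-selection idea can be sketched for one 4-frame group: score candidate insert frames (a repeat of an existing frame, or a blend of the adjacent pair) against their neighbors, and insert the best-scoring one to form a 5-frame group. The mean-absolute-difference scoring and the insertion position are assumptions for illustration, not the patent's specific comparison.

```python
# Hypothetical sketch of pulldown by candidate scoring: expand a
# 4-frame (24 fps) group to 5 frames (30 fps) by inserting the
# candidate frame that differs least from its neighbors.
import numpy as np

def score(candidate, prev_frame, next_frame):
    """Lower is better: total mean absolute difference vs. neighbors."""
    return (np.abs(candidate - prev_frame).mean()
            + np.abs(candidate - next_frame).mean())

def pulldown_group(group):
    """Insert one frame between frames 1 and 2 of a 4-frame group."""
    a, b = group[1], group[2]
    candidates = [a.copy(), b.copy(), (a + b) / 2.0]  # repeat or blend
    best = min(candidates, key=lambda c: score(c, a, b))
    return group[:2] + [best] + group[2:]

group = [np.full((2, 2), float(v)) for v in (0, 10, 20, 30)]
out = pulldown_group(group)  # five frames; a repeat or blend inserted
```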
VIDEO FRAME INTERPOLATION METHOD AND DEVICE, COMPUTER READABLE STORAGE MEDIUM
A video frame interpolation method and device, and a computer-readable storage medium are described. The method includes: inputting at least two image frames into a video frame interpolation model to obtain at least one frame-interpolation image frame; training an initial model using a first loss to obtain a reference model; copying the reference model to obtain three reference models with shared parameters; selecting different target sample images according to preset rules to train the first and second reference models, obtaining first and second frame-interpolation results; selecting third target sample images from the first and second frame-interpolation results to train the third reference model, obtaining the frame-interpolation result; obtaining a total loss of the first training model based on the frame-interpolation result and the sample images; adjusting parameters of the first training model based on the total loss; and using the model obtained after a predetermined number of iterations as the video frame interpolation model.