Patent classifications
H04N5/147
Time compressing video content
Methods and systems for compressing video content are presented. The methods and systems include analyzing a sequence of media frames stored in the memory device and calculating a displacement level of each of the media frames. The displacement level indicates how different each of the media frames is from the previous media frame. The sequence of media frames is divided into a plurality of cuts where each cut ends at a media frame having a substantially high displacement level. Frames to be removed from the sequence of media frames are identified in each cut based upon the frame's displacement level. The identified frames are then removed.
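The approach above can be sketched in a few lines, assuming frames are modeled as equal-length lists of pixel intensities and the displacement level is taken to be the mean absolute pixel difference from the previous frame (the patent does not specify a metric; `cut_threshold` and `drop_threshold` are hypothetical parameters):

```python
def displacement_levels(frames):
    """Mean absolute pixel difference of each frame from its predecessor.
    The first frame's displacement is defined as 0.0 by convention."""
    levels = [0.0]
    for prev, cur in zip(frames, frames[1:]):
        levels.append(sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur))
    return levels


def time_compress(frames, cut_threshold, drop_threshold):
    """Keep cut-boundary frames (displacement >= cut_threshold) and, within
    each cut, drop frames whose displacement falls below drop_threshold."""
    levels = displacement_levels(frames)
    kept = []
    for i, (frame, level) in enumerate(zip(frames, levels)):
        # Cut boundaries and the first frame are always retained; other
        # frames survive only if they differ enough from their predecessor.
        if i == 0 or level >= cut_threshold or level >= drop_threshold:
            kept.append(frame)
    return kept
```

With `cut_threshold=100` and `drop_threshold=5`, near-duplicate frames are removed while cut boundaries survive.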
SHOT-CHANGE DETECTION USING CONTAINER LEVEL INFORMATION
The disclosed computer-implemented method may include, for a current frame of a sequence of video frames, determining a frame type label of the current frame. The method may include, in response to determining that the current frame is labeled as an intra frame (I-frame), decoding the current frame and comparing the decoded frame to historical I-frame data. The method may also include, in response to the comparison satisfying a shot-change threshold, flagging the current frame as a shot-change frame, and in response to flagging the current frame as the shot-change frame, storing the current frame for a subsequent shot-change detection. The method may further include updating, based on flagged shot-change frames, shot boundaries for the sequence of video frames. Various other methods, systems, and computer-readable media are also disclosed.
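A minimal sketch of this idea follows, assuming frames arrive as `(frame_type, pixels)` tuples read from container-level metadata and the "historical I-frame data" is simply the last flagged I-frame; the mean-absolute-difference comparison and the seeding of the first I-frame are assumptions, not details from the disclosure:

```python
def detect_shot_changes(frames, threshold):
    """Return indices of frames flagged as shot changes. Only frames
    labeled 'I' are decoded and compared; P/B frames are skipped, which
    is the container-level shortcut described above."""
    boundaries = []
    last_i = None  # historical I-frame data
    for idx, (ftype, pixels) in enumerate(frames):
        if ftype != 'I':
            continue  # no decode needed for non-I frames
        if last_i is None:
            last_i = pixels  # first I-frame seeds the history
            continue
        diff = sum(abs(a - b) for a, b in zip(last_i, pixels)) / len(pixels)
        if diff >= threshold:
            boundaries.append(idx)
            last_i = pixels  # store flagged frame for subsequent detection
    return boundaries
```

Note that, per the abstract, the stored reference is only updated when a frame is actually flagged as a shot change.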
Synchronization and presentation of multiple 3D content streams
Systems, methods, and computer-readable media are disclosed for synchronization and presentation of multiple 3D content streams. Example methods may include determining a first content stream of 3D content to send to a user device, where movement of the user device causes presentation of different portions of the 3D content at the user device, and determining a first position of the user device. Some methods may include causing presentation of a first portion of the first content stream at the user device, where the first portion corresponds to the first position, determining a second content stream of 3D content, and causing presentation of a second portion of the second content stream at the user device, where the second portion corresponds to the first position of the user device.
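The key property here is that the same device position selects the corresponding portion of whichever stream is current. A toy sketch, modeling each 3D stream as 360 degree-indexed samples and the device position as a yaw angle (both modeling choices are assumptions for illustration):

```python
def portion_for_position(stream, position, fov=90):
    """Return the portion of a 360-sample 'stream' visible from a given
    yaw position, for a hypothetical field of view in degrees."""
    return [stream[(position + d) % 360] for d in range(fov)]
```

Switching from a first stream to a second stream while holding the position fixed yields the second portion corresponding to the same first position, which is the synchronization behavior described.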
Video reformatting system
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for receiving, by one or more processors, a video comprising one or more shots in a first aspect ratio; detecting a first shot of the one or more shots, the first shot comprising a sequence of frames; and identifying an object that appears throughout a continuous portion of frames of the sequence of frames in the first shot. A visual presentation of the object in the first shot is automatically modified and a modified video comprising the one or more shots in a second aspect ratio is generated based on the automatically modified visual presentation of the object in the first shot.
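One common way to realize the aspect-ratio change while keeping the identified object visible is to crop a window centered on the object; the sketch below assumes this (the disclosure itself only says the object's visual presentation is "automatically modified") and works in one dimension for brevity:

```python
def reframe(frame_width, frame_height, object_x, target_aspect):
    """Compute a crop window (left edge, width) in the original frame,
    centered on the tracked object's x-coordinate and matching the
    target aspect ratio, clamped so it stays inside the frame."""
    crop_w = min(frame_width, int(frame_height * target_aspect))
    left = min(max(object_x - crop_w // 2, 0), frame_width - crop_w)
    return left, crop_w
```

For a 1920x1080 shot reformatted to 9:16, an object at x=960 yields a centered crop, while an object near an edge yields a crop clamped to that edge.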
Video signal processing device, video freeze detection circuit and video freeze detection method
A video signal processing device includes: a video signal dividing unit configured to divide a video signal into first to k-th (k is an integer of 2 or greater) partial video signals for each frame; a video change detection unit configured to determine, for each of the first to k-th partial video signals, whether or not a video based on the partial video signals has changed between respective frames, and generate first to k-th video change detection signals representing the respective detection results; and a video sameness determination unit configured to generate a video sameness signal indicating that the video signal has not changed, if the number of video change detection signals that indicate the video has not changed, among the first to k-th video change detection signals, is greater than a prescribed number.
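The voting logic above is easy to capture in code. A sketch, assuming frames are equal-length pixel lists, regions are k contiguous slices, and a region "changed" if any pixel in it differs (the actual change-detection circuit is unspecified):

```python
def detect_freeze(prev_frame, cur_frame, k, prescribed):
    """Divide the frame into k partial regions and count those with no
    change; report a freeze (video sameness signal) if more than
    `prescribed` regions are unchanged."""
    n = len(cur_frame)
    unchanged = 0
    for i in range(k):
        lo, hi = i * n // k, (i + 1) * n // k  # i-th partial region
        if prev_frame[lo:hi] == cur_frame[lo:hi]:
            unchanged += 1
    return unchanged > prescribed
```

Requiring only "more than a prescribed number" of unchanged regions, rather than all of them, makes the freeze decision robust to noise in a few regions.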
Systems and methods for producing a flipbook
A system for producing a flipbook includes a processor that receives a video comprising a plurality of frames and selects a segment comprising a start frame, an end frame, and a plurality of frames therebetween. The processor can analyze the frames of the segment to determine an average rate of change of the plurality of frames and a threshold of relative image difference based on the average rate of change of the plurality of frames and a baseline frame rate. The processor can select, based on the results of its analysis, a plurality of selected frames, each of the selected frames being separated from two other selected frames by a sub-segment of the video, wherein each pair of adjacent frames comprises a relative image difference above the threshold and wherein each selected frame meets quality criteria not met by one or more local frames. The processor arranges the selected frames in temporal order, adds a protruding edge to each of the selected frames, and transmits data representing each of the selected frames to a printer for printing and binding a flipbook.
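The frame-selection step can be sketched as a greedy pass, assuming frames are reduced to scalar "image signatures", the threshold is the average change scaled by a hypothetical rate factor, and the per-frame quality criteria are omitted for brevity:

```python
def select_flipbook_frames(frames, rate_factor=2.0):
    """Greedily pick frames so each adjacent selected pair differs by
    more than a threshold derived from the average rate of change."""
    diffs = [abs(b - a) for a, b in zip(frames, frames[1:])]
    avg_change = sum(diffs) / len(diffs)
    threshold = avg_change * rate_factor  # hypothetical scaling
    selected = [frames[0]]
    for f in frames[1:]:
        if abs(f - selected[-1]) > threshold:
            selected.append(f)
    return selected
```

Frames that barely differ from the last selected frame are skipped, so each page of the flipbook shows visible motion.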
VIDEO FRAME ANALYSIS FOR TARGETED VIDEO BROWSING
A method for video frame analysis includes determining a first dissimilarity metric and a second dissimilarity metric. The first dissimilarity metric may correspond to a first difference between a first foreground of a first key frame in a video and a second foreground of a second key frame following the first key frame in the video. The second dissimilarity metric may correspond to a second difference between the second foreground of the second key frame and a third foreground of a third key frame following the second key frame in the video. A playback of the video may be generated based on the first dissimilarity metric and the second dissimilarity metric. Related systems and computer program products are also provided.
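A minimal sketch of the two dissimilarity metrics, assuming foregrounds are equal-length binary masks and dissimilarity is the fraction of differing mask pixels (the abstract does not fix a metric, and the playback-weighting policy below is an assumption):

```python
def foreground_dissimilarity(fg_a, fg_b):
    """Fraction of foreground-mask positions that differ between two
    key frames (0.0 = identical foregrounds, 1.0 = fully different)."""
    return sum(a != b for a, b in zip(fg_a, fg_b)) / len(fg_a)


def playback_weights(foregrounds):
    """One dissimilarity per consecutive key-frame pair; a player could,
    for example, dwell longer on transitions with higher weights."""
    return [foreground_dissimilarity(a, b)
            for a, b in zip(foregrounds, foregrounds[1:])]
```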
Low power framework for processing, compressing, and transmitting images at a mobile image capture device
The present disclosure provides an image capture, curation, and editing system that includes a resource-efficient mobile image capture device that continuously captures images. In particular, the present disclosure provides low power frameworks for processing, compressing, and transmitting images at a mobile image capture device. One example low power framework includes a scene analyzer that analyzes a scene depicted by a first image and determines whether to store the first image in a non-volatile memory or to discard the first image from a temporary image buffer without storing the first image in the non-volatile memory.
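The store-or-discard decision of the scene analyzer can be sketched as a simple filter over the temporary buffer; the scoring function and threshold here are placeholders for the (unspecified) scene analysis:

```python
def curate(buffer, score_fn, threshold):
    """Run a hypothetical scene-analysis score on each buffered image;
    images at or above the threshold go to non-volatile storage, the
    rest are discarded from the temporary buffer."""
    stored, discarded = [], 0
    for image in buffer:
        if score_fn(image) >= threshold:
            stored.append(image)
        else:
            discarded += 1
    return stored, discarded
```

Discarding uninteresting frames before they ever reach non-volatile memory is what keeps the capture pipeline low-power.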
Information processing apparatus, information processing method, and program
There is provided an image archiving method for use with a writing target, comprising the steps of receiving a series of captured images of the writing target; detecting a difference between first and second received candidate images separated by a predetermined period of time, where additive differences are indicative of writing and subtractive differences are indicative of erasure; upon detecting a subtractive difference, temporarily retaining the last candidate image captured prior to the detection, and detecting whether the subtractive difference relative to the retained image exceeds a subtraction threshold amount; and if so, then storing the retained image.
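A sketch of the erase-triggered archiving rule, modeling each captured image as a set of "inked" pixel coordinates so that set difference directly gives the subtractive difference (the reset of the retained image when no erasure is in progress is an assumed simplification):

```python
def archive_images(images, subtraction_threshold):
    """Archive the last pre-erasure image once the amount erased
    relative to it exceeds the subtraction threshold."""
    archived = []
    retained = None
    prev = images[0]
    for cur in images[1:]:
        if prev - cur:  # subtractive difference: something was erased
            if retained is None:
                retained = prev  # temporarily retain pre-erasure image
            if len(retained - cur) > subtraction_threshold:
                archived.append(retained)  # store the retained image
                retained = None
        else:
            retained = None  # only writing occurred; no erasure pending
        prev = cur
    return archived
```

The effect is that the fullest version of the whiteboard is saved just before a substantial erasure, rather than saving every frame.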
Event detection apparatus and event detection method
An event detection apparatus includes an input unit configured to input a plurality of time-sequential images, a first extraction unit configured to extract sets of first image samples according to respective different sample scales from a first time range of the plurality of time-sequential images based on a first scale parameter, a second extraction unit configured to extract sets of second image samples according to respective different sample scales from a second time range of the plurality of time-sequential images based on a second scale parameter, a dissimilarity calculation unit configured to calculate a dissimilarity between the first and second image samples based on the sets of the first and second image samples, and a detection unit configured to detect an event from the plurality of time-sequential images based on the dissimilarity.
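The two-window, multi-scale comparison can be sketched as follows, reducing each image to a scalar and taking the dissimilarity to be the absolute difference of window means averaged over the scale pairs (both reductions are assumptions made for brevity):

```python
def event_score(images, t, past_scales, recent_scales):
    """Dissimilarity between a past window ending at time t and a recent
    window starting at t, averaged over sample scales drawn from the
    first and second scale parameters."""
    score = 0.0
    for sa, sb in zip(past_scales, recent_scales):
        past = images[max(0, t - sa):t]      # first time range
        recent = images[t:t + sb]            # second time range
        score += abs(sum(past) / len(past) - sum(recent) / len(recent))
    return score / len(past_scales)
```

A detector would then report an event wherever this score exceeds a chosen threshold; using several scales makes both abrupt and gradual changes visible.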