Patent classifications
G11B27/28
METHOD AND SYSTEM FOR AUTOMATIC PRE-RECORDATION VIDEO REDACTION OF OBJECTS
A system and a method for automatic video redaction are provided herein. The method may include: receiving an input video comprising a sequence of frames captured by a camera, wherein the input video includes live video obtained directly from the camera, wherein recordation of the video directly from the camera is disabled; performing visual analysis of the input video to detect portions of the frames of the input video in which one of a plurality of predefined objects, or a descriptor thereof, is detected; generating a redacted input video by replacing the portions of the frames with new portions of other visual content; and recording the redacted input video on a data storage device, wherein the generating of the redacted input video is carried out by a computer processor after the input video is captured by the camera and before the recording of the redacted input video on the data storage device.
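As a rough illustrative sketch of the pre-recordation redaction pipeline described above (the `detect` callback, the bounding-box format, and the fill strategy are assumptions, not the claimed implementation), detected regions can be overwritten in-flight so that only redacted frames ever reach storage:

```python
import numpy as np

def redact_frame(frame, boxes, fill_value=0):
    """Replace each detected region (hypothetical (x0, y0, x1, y1) boxes)
    with a fill value, leaving the original frame untouched."""
    redacted = frame.copy()
    for (x0, y0, x1, y1) in boxes:
        redacted[y0:y1, x0:x1] = fill_value
    return redacted

def redact_stream(frames, detect):
    """Redact each frame between capture and recording: frames are
    processed lazily, so unredacted pixels are never written to storage."""
    for frame in frames:
        yield redact_frame(frame, detect(frame))
```

Because `redact_stream` is a generator, the recorder only ever consumes redacted frames, matching the claim that redaction happens after capture but before recording.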
Image processing apparatus, image processing method and medium
An object of one embodiment of the present disclosure is to provide a product with high added value to a user by preventing the combination of an unnatural character string, the combination of no character string at all, and the like, in a case where there is no voice, or almost no voice, before or after an image selected from within a moving image. One embodiment of the present disclosure is an image processing apparatus including: a selection unit configured to select, from a moving image including a plurality of frames, a part of the moving image; an extraction unit configured to extract a voice during a predetermined time corresponding to the selected part of the moving image; and a combination unit configured to combine a character string based on the voice extracted by the extraction unit with the part of the moving image selected by the selection unit.
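A minimal sketch of the voice-gated captioning idea, under assumed data shapes (timed transcript segments as `(start, end, text)` tuples and a hypothetical `min_voice_s` threshold — neither is specified by the abstract):

```python
def combine_caption(audio_segments, sel_start, sel_end, min_voice_s=0.5):
    """Combine a caption with the selected part of the moving image only
    when enough voice overlaps the selection window; otherwise return
    None so no unnatural (or empty) character string is combined."""
    voiced = sum(
        max(0.0, min(end, sel_end) - max(start, sel_start))
        for start, end, _text in audio_segments
    )
    if voiced < min_voice_s:
        return None  # no or almost no voice: skip captioning
    return " ".join(
        text for start, end, text in audio_segments
        if start < sel_end and end > sel_start
    )
```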
METHOD OF IDENTIFYING AN ABRIDGED VERSION OF A VIDEO
A computer-implemented method of identifying whether a target video comprises an abridged version of a reference video includes evaluating condition a) that the target video does not comprise all shots of the reference video; condition b) that the target video includes groups of consecutive shots also included in the reference video; and condition c) that all shots which are present in both the target video and the reference video are in the same order. The method further includes identifying whether the target video comprises an abridged version of the reference video, and outputting a result of the identifying. The target video is identified as comprising an abridged version of the reference video on condition that conditions a), b) and c) are met. Also provided are a data processing apparatus for performing the method, and a computer program and computer readable storage medium comprising instructions to perform the method.
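Conditions a) through c) can be checked directly once both videos are reduced to shot identifiers. This is only a sketch under assumed inputs (hashable shot IDs, every target shot drawn from the reference), not the claimed method:

```python
def is_abridged(target_shots, reference_shots):
    """Return True iff the target looks like an abridged reference:
    a) it omits at least one reference shot,
    b) it contains at least one group of consecutive reference shots,
    c) shared shots appear in the reference's order."""
    ref_index = {shot: i for i, shot in enumerate(reference_shots)}
    # assumed precondition: every target shot comes from the reference
    if not all(shot in ref_index for shot in target_shots):
        return False
    # condition a): target must not contain all reference shots
    if set(target_shots) >= set(reference_shots):
        return False
    positions = [ref_index[shot] for shot in target_shots]
    # condition c): same relative order as the reference
    if positions != sorted(positions):
        return False
    # condition b): at least one run of consecutive reference shots
    return any(
        positions[i + 1] == positions[i] + 1
        for i in range(len(positions) - 1)
    )
```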
Generating a stitched data stream
Systems and methods provide for receiving a request for an alternate data stream, of a plurality of individual data streams, other than an active data stream currently being displayed on a computing device, during display on the computing device of a stitched data stream comprising the plurality of individual data streams associated with a common audio timeline. The systems and methods further provide for determining a subset of the plurality of individual data streams of the stitched data stream associated with a time period of the active data stream in the common audio timeline, selecting the alternate data stream from the subset of the plurality of individual data streams, and providing the alternate data stream to the computing device, wherein the display of the active data stream on the computing device transitions to the alternate data stream on the computing device in the common audio timeline.
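A simplified sketch of the subset-then-select step, assuming each stream is described by a `(start, end)` interval on the common audio timeline (the interval representation and first-match selection policy are assumptions):

```python
def select_alternate(streams, active_id, t):
    """streams: {stream_id: (start, end)} on the common audio timeline.
    Determine the subset of streams covering time t, then select an
    alternate stream from that subset, excluding the active one."""
    subset = [
        stream_id
        for stream_id, (start, end) in streams.items()
        if start <= t < end and stream_id != active_id
    ]
    return subset[0] if subset else None
```

Because the subset is restricted to streams overlapping the active stream's time period, the handoff can stay continuous on the shared audio timeline.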
Event/object-of-interest centric timelapse video generation on camera device with the assistance of neural network input
An apparatus including an interface and a processor. The interface may be configured to receive pixel data generated by a capture device. The processor may be configured to generate video frames in response to the pixel data, perform computer vision operations on the video frames to detect objects, perform a classification of the objects detected based on characteristics of the objects, determine whether the classification of the objects corresponds to a user-defined event and generate encoded video frames from the video frames. The encoded video frames may be communicated to a cloud storage service. The encoded video frames may comprise a first sample of the video frames selected at a first rate when the user-defined event is not detected and a second sample of the video frames selected at a second rate while the user-defined event is detected. The second rate may be greater than the first rate.
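The dual-rate sampling can be sketched as a countdown that shortens while the user-defined event is active (a smaller step means more frames kept, i.e. a greater sampling rate). The step values and flag representation are illustrative assumptions:

```python
def sample_timelapse(frames, event_flags, normal_step=30, event_step=5):
    """Keep every normal_step-th frame ordinarily, and every
    event_step-th frame (a higher rate) while the event is detected."""
    sampled = []
    countdown = 0
    for frame, event_active in zip(frames, event_flags):
        if countdown <= 0:
            sampled.append(frame)
            countdown = event_step if event_active else normal_step
        countdown -= 1
    return sampled
```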
Generating Moving Thumbnails For Videos
A method of generating a moving thumbnail is disclosed. The method includes sampling video frames of a video item. The method further includes determining frame-level quality scores for the sampled video frames. The method also includes determining multiple group-level quality scores for multiple groups of the sampled video frames using the frame-level quality scores of the sampled video frames. The method further includes selecting one of the groups of the sampled video frames based on the multiple group-level quality scores. The method includes creating a moving thumbnail using a subset of the video frames that have timestamps within a range from a start timestamp to an end timestamp of the selected group.
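One plausible way to derive group-level scores from frame-level scores is a sliding window whose group score is the mean of its frame scores; the window size and mean aggregation are assumptions here, not details from the abstract:

```python
def best_group_start(frame_scores, group_size=3):
    """Score each window of group_size consecutive sampled frames by the
    mean of its frame-level scores; return the best window's start index."""
    best_index, best_score = 0, float("-inf")
    for i in range(len(frame_scores) - group_size + 1):
        group_score = sum(frame_scores[i:i + group_size]) / group_size
        if group_score > best_score:
            best_index, best_score = i, group_score
    return best_index
```

The moving thumbnail would then be cut from the frames whose timestamps fall inside the winning group's time range.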
AUDIO CONTENT SEGMENTATION METHOD AND APPARATUS
Embodiments of the present invention provide an audio content segmentation method and an apparatus. The method includes: obtaining at least one piece of first segmentation location information of audio content; sending a segmentation location message to a server, wherein the segmentation location message carries the at least one piece of first segmentation location information of the audio content and an audio identifier of the audio content; receiving a segmentation location recommendation message sent by the server, wherein the segmentation location recommendation message carries the audio identifier of the audio content and the at least one piece of third segmentation location information; and segmenting the audio content according to the at least one piece of third segmentation location information.
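A toy sketch of the client side of this exchange, with assumed message shapes (plain dicts) and assumed units (segmentation locations in seconds); the actual wire format is not specified by the abstract:

```python
def build_segmentation_message(audio_id, locations):
    """Segmentation location message carrying the audio identifier and
    the first segmentation location information, to be sent to the server."""
    return {"audio_id": audio_id, "segment_locations": list(locations)}

def apply_recommendation(samples, sample_rate, recommendation):
    """Segment the audio content at the recommended locations (seconds)
    carried in the server's segmentation location recommendation message."""
    cuts = [int(t * sample_rate)
            for t in sorted(recommendation["segment_locations"])]
    bounds = [0] + cuts + [len(samples)]
    return [samples[a:b] for a, b in zip(bounds, bounds[1:])]
```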
IMAGE PROCESSING APPARATUS, IMAGE PICKUP DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM
An image pickup device which captures sound and a moving image prevents deterioration in reproduction quality. A scene change detector detects a frame at the time of a scene change, from among a plurality of frames imaged at a predetermined frame rate, as a detection frame. A frame rate converting unit converts the frame rate of frames imaged outside a detection period to a lower frame rate. A video reproduction time setting unit sets the reproduction time when reproduction is performed at the lower frame rate as a video reproduction time. An audio reproduction time setting unit sets an audio reproduction time at constant intervals for sound recorded at constant intervals outside the detection period, and sets an audio reproduction time in synchronization with the video reproduction time corresponding to the detection frame for sound recorded in the detection period.
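The frame-rate conversion step can be illustrated as simple decimation outside the detection period, with frames inside the period kept at full rate. The index-based detection window and decimation factor are assumptions for the sketch:

```python
def convert_frame_rate(frames, detection_period, keep_every=2):
    """Keep all frames inside the detection period (full rate); outside it,
    keep only every keep_every-th frame (the lower frame rate).
    Returns (original_index, frame) pairs so reproduction times can be set."""
    start, end = detection_period
    converted = []
    for i, frame in enumerate(frames):
        in_detection = start <= i <= end
        if in_detection or i % keep_every == 0:
            converted.append((i, frame))
    return converted
```

Keeping the original indices alongside each surviving frame is what lets the audio reproduction times stay synchronized with the video reproduction times around the detection frame.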
DETERMINING NATIVE RESOLUTIONS OF VIDEO SEQUENCES
In one embodiment of the present invention, a native resolution analyzer generates a log-magnitude spectrum that elucidates sampling operations that have been performed on a scene. In operation, the native resolution analyzer performs a transform operation of a color component associated with a frame included in the scene to generate a frame spectrum. The native resolution analyzer then normalizes the magnitudes associated with the frame spectrum and logarithmically scales the normalized magnitudes to create a log-magnitude frame spectrum. This two dimensional log-magnitude frame spectrum serves as a frequency signature for the frame. More specifically, patterns in the log-magnitude spectrum reflect re-sampling operations, such as a down-sampling and subsequent up-sampling, that may have been performed on the frame. By analyzing the log-magnitude spectrum, discrepancies between the display resolution of the scene and the lowest resolution with which the scene has been processed may be detected in an automated fashion.
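The transform-normalize-log pipeline maps directly onto a 2-D FFT. This is a minimal sketch of that signature computation (the `eps` floor guarding the logarithm is an added assumption, as is operating on a single pre-extracted color component):

```python
import numpy as np

def log_magnitude_spectrum(channel, eps=1e-12):
    """Two-dimensional log-magnitude spectrum of one color component of a
    frame: FFT, normalize the magnitudes, then scale logarithmically.
    Re-sampling (e.g. down- then up-sampling) leaves periodic patterns here."""
    spectrum = np.fft.fftshift(np.fft.fft2(channel))  # center the DC bin
    magnitude = np.abs(spectrum)
    magnitude /= magnitude.max()       # normalize magnitudes to [0, 1]
    return np.log10(magnitude + eps)   # logarithmic scaling
```

Inspecting this 2-D signature for nulls or replicated lobes is what allows discrepancies between the display resolution and the lowest resolution the scene was processed at to be detected automatically.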