Patent classifications
G11B27/034
Apparatus and method for associating images from two image streams
An apparatus configured to, based on first imagery (301) of at least part of a body of a user (204), and contemporaneously captured second imagery (302) of a scene, the second imagery comprising at least a plurality of images taken over time, and based on expression-time information indicative of when a user expression of the user (204) occurs, provide a time window (303) temporally extending from a first time (t−1) prior to the time (t) of the expression-time information, to a second time (t−5) comprising a time equal to or prior to the first time (t−1), the time window (303) provided to identify at least one expression-causing image (305) from the plurality of images of the second imagery (302) that was captured in said time window, and provide for recordal of the at least one expression-causing image (305) with at least one expression-time image (306) comprising at least one image from the first imagery (301).
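The claim above can be sketched in code: given an expression detected at time t in the user-facing stream, a window extending backward from t−1 to t−5 is used to pick the scene image(s) that likely caused the expression. This is an illustrative sketch only; the class names, the fixed window offsets, and the pairing step are assumptions, not the patent's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class TimedImage:
    timestamp: float  # seconds since capture start
    data: bytes       # encoded image payload (placeholder)

def select_expression_causing_images(scene_images, expression_time,
                                     window_start_offset=5.0,
                                     window_end_offset=1.0):
    """Return scene-stream images captured in the window
    [t - start_offset, t - end_offset], i.e. shortly *before* the
    user's expression occurred (offsets here are illustrative)."""
    lo = expression_time - window_start_offset
    hi = expression_time - window_end_offset
    return [img for img in scene_images if lo <= img.timestamp <= hi]
```

A recording step would then associate the selected expression-causing image(s) with the expression-time image from the first (user-facing) stream.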
CHANGE-DEPENDENT PRE-EXPORT FOR VIDEO EDITING PROJECTS
Techniques are described for pre-exporting chunks of video content during video editing of a video editing project. For example, the chunks of the video editing project can be monitored for changes. When a change is detected to a chunk, the chunk can be pre-exported as an independent chunk that is combinable with other pre-exported chunks without encoding or re-encoding them. In addition, the monitoring and pre-exporting can be performed while the video editing project remains editable by a user of the video editing project. When the video editing project is ready to be finalized, the pre-exported chunks can be combined to generate, at least in part, a media file. The generated media file can then be output.
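A minimal sketch of the change-dependent pre-export idea: chunks are fingerprinted, only changed chunks are (re-)exported, and finalization concatenates the already-encoded chunks without re-encoding. The class name, the use of content hashing for change detection, and the `b"ENC:"` stand-in for a real codec are all assumptions for illustration.

```python
import hashlib

class PreExporter:
    def __init__(self):
        self._hashes = {}    # chunk_id -> last-seen content hash
        self._exported = {}  # chunk_id -> pre-exported (encoded) bytes

    def on_edit(self, chunk_id: str, content: bytes) -> bool:
        """Re-export a chunk only when its content actually changed.
        Returns True if a (re-)export happened."""
        h = hashlib.sha256(content).hexdigest()
        if self._hashes.get(chunk_id) == h:
            return False  # unchanged: keep the cached pre-export
        self._hashes[chunk_id] = h
        # Stand-in for encoding: a real editor would run the codec here.
        self._exported[chunk_id] = b"ENC:" + content
        return True

    def finalize(self, order):
        """Combine already-encoded chunks without re-encoding them."""
        return b"".join(self._exported[cid] for cid in order)
```

Because `on_edit` can run on every save while the project is still editable, finalization reduces to a cheap concatenation.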
DIFFERENCE ENGINE FOR MEDIA CHANGE MANAGEMENT
A universal media difference engine generates a change list specifying the edits required to create an edited revision of a media composition from a base version. The difference engine determines the format of the media composition, locates and installs a plug-in corresponding to the format, and uses the plug-in to parse the composition and generate the change list. The supported compositional formats include formats native to specific media editing applications, as well as interoperable formats. The difference engine is able to convert rich change lists expressed in native form to canonical change lists that are compatible with multiple editing applications. Timeline, mixer configuration, and scene graph composition types are supported. Content management system storage requirements are reduced by storing a base version and change lists instead of multiple revisions of the composition. A media composition recreation engine recreates an edited revision by applying a change list to a prior version.
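The core storage idea above (keep a base version plus change lists instead of full revisions) can be sketched with a toy timeline modeled as a list of clip ids. Using `difflib` as the diffing backend is an assumption for illustration; the patent's plug-in architecture and rich native change lists are not reproduced here.

```python
import difflib

def make_change_list(base, edited):
    """A minimal 'canonical change list': operations that turn the base
    timeline (a list of clip ids) into the edited revision."""
    sm = difflib.SequenceMatcher(a=base, b=edited, autojunk=False)
    changes = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag != "equal":
            changes.append((tag, i1, i2, edited[j1:j2]))
    return changes

def recreate(base, changes):
    """Recreation engine: apply a change list to a prior version."""
    result = list(base)
    # Apply right-to-left so earlier indices stay valid.
    for tag, i1, i2, payload in sorted(changes, key=lambda c: c[1],
                                       reverse=True):
        result[i1:i2] = payload
    return result
```

A content management system would then store `base` once plus one small change list per revision, rather than every full revision.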
SYSTEM AND METHOD FOR ENHANCING MULTIMEDIA CONTENT WITH VISUAL EFFECTS AUTOMATICALLY BASED ON AUDIO CHARACTERISTICS
Exemplary embodiments of the present disclosure are directed towards a system for enhancing multimedia content with visual effects based on audio characteristics. The system comprises a computing device with a multimedia content enhancing module that enables an end-user to record multimedia content using a camera, to select an audio track and combine it with the recorded multimedia content, and to send the audio track and the recorded multimedia content to a cloud server. The cloud server comprises a multimedia analyzing and visual effects retrieving module configured to receive and analyze beat characteristics of the audio track and the recorded multimedia content, categorize visual effects and filters, and deliver them to the computing device. The multimedia content enhancing module displays the categorized visual effects and filters on the computing device, enables the end-user to select and apply them to the multimedia content to create enhanced multimedia content, and enables the end-user to share and post the enhanced multimedia content from the computing device.
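One way the server-side analysis could work is sketched below: estimate tempo from detected beat onsets, then bucket effects by tempo. The bucket boundaries, effect names, and averaging-based tempo estimate are purely illustrative assumptions; the patent does not specify these details.

```python
def estimate_bpm(onset_times):
    """Estimate tempo from detected beat-onset timestamps (seconds),
    using the mean inter-onset interval (a simplifying assumption)."""
    intervals = [b - a for a, b in zip(onset_times, onset_times[1:])]
    avg = sum(intervals) / len(intervals)
    return 60.0 / avg

def categorize_effects_by_tempo(bpm):
    """Hypothetical mapping from a track's tempo to effect categories;
    boundaries and category names are made up for illustration."""
    if bpm < 90:
        return ["slow-fade", "soft-glow"]
    if bpm < 130:
        return ["pulse", "color-shift"]
    return ["strobe", "fast-cut"]
```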
Method and apparatus for interactive reassignment of character names in a video device
Systems and processes are provided for interactive reassignment of character names in an audio video program including a tuner configured for receiving and demodulating a video signal to extract the audio video program, a user input operative to receive a user request to substitute an original character name within the audio video program with an alternative character name, a memory configured to buffer the audio video program to generate a delayed audio video program, a processor configured to detect the original character name within the audio video program and to replace the original character name with the alternative character name within the delayed audio video program to generate a modified audio video program, and a loudspeaker configured to reproduce the alternative character name in response to the modified audio video program.
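The buffering-and-substitution step can be sketched with caption text standing in for the audio-video stream: live segments enter a delay buffer, and the original name is swapped before the delayed segment is reproduced. The class, the segment-count delay, and text-level replacement are assumptions; the patent operates on the audio video program itself.

```python
from collections import deque

class NameReassigner:
    """Buffers program segments so the detected original character name
    can be replaced before the delayed program is reproduced."""

    def __init__(self, original, replacement, delay_segments=3):
        self.original = original
        self.replacement = replacement
        self.delay = delay_segments
        self.buffer = deque()

    def push(self, segment):
        """Feed one live segment; returns the delayed, modified segment
        once the buffer is full, else None while the delay builds up."""
        self.buffer.append(segment)
        if len(self.buffer) > self.delay:
            delayed = self.buffer.popleft()
            return delayed.replace(self.original, self.replacement)
        return None
```

The delay gives the detector time to find the original name before that portion of the program reaches the loudspeaker.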
Information processing apparatus and information processing method
This information processing apparatus displays video content in a first display region of a display section as a first video, displays the same video content in a second display region of the display section as a second video delayed from the first video by a predetermined time, and sets a first tag input by a user into the first video and a second tag input by the user into the second video as tags for the video content.
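A sketch of how tags from the two views could be merged: a tag set on the delayed second view refers to content that played `delay` seconds earlier, so its timestamp is shifted back before both tag sets are combined onto one content timeline. The function names and tuple representation are assumptions for illustration.

```python
def normalize_tag_time(tag_time, view_delay):
    """A tag set on the delayed view refers to content that appeared
    view_delay seconds earlier, so map it back to content time."""
    return tag_time - view_delay

def collect_tags(first_view_tags, second_view_tags, delay):
    """Merge (time, label) tags from the live first view and the
    delayed second view into one sorted timeline for the content."""
    tags = list(first_view_tags)
    tags += [(normalize_tag_time(t, delay), label)
             for t, label in second_view_tags]
    return sorted(tags)
```

The delayed view lets the user tag a moment they just missed; after normalization both tags land at the same content position.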
Setting ad breakpoints in a video within a messaging system
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for setting ad breakpoints in a video. The program and method provide for accessing a video; determining plural shot boundaries for the video, each shot boundary defining a shot corresponding to a contiguous sequence of video frames that is free of cuts or transitions; and, for each shot boundary of the plural shot boundaries, performing a set of breakpoint tests on the shot boundary, each breakpoint test configured to return a respective score indicating whether the shot boundary corresponds to a breakpoint for potential insertion of an ad during playback of the video, calculating a combined score for the shot boundary based on combining each of the respective scores, and setting, in a case where the combined score meets a threshold value, the shot boundary as the breakpoint.
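The per-boundary scoring step can be sketched as follows: each breakpoint test returns a score, the scores are combined, and the boundary becomes a breakpoint if the combined score meets a threshold. Combining via a weighted mean is an assumption for illustration; the patent does not fix the combination rule here.

```python
def is_breakpoint(shot_boundary, tests, weights=None, threshold=0.5):
    """Run each breakpoint test on a shot boundary, combine the scores
    (weighted mean here; the rule is an assumption), and compare the
    combined score against a threshold."""
    scores = [test(shot_boundary) for test in tests]
    if weights is None:
        weights = [1.0] * len(scores)
    combined = sum(w * s for w, s in zip(weights, scores)) / sum(weights)
    return combined >= threshold, combined
```

Running this over every detected shot boundary yields the set of candidate ad-insertion points for the video.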