Patent classifications
G11B27/322
IMAGE AND AUDIO RECOGNITION AND SEARCH PLATFORM
The present disclosure relates to receiving video and audio from a plurality of devices, performing image recognition on the video and audio recognition on the audio, receiving an input image or input audio, and identifying video clips and audio clips containing a match to the input image or input audio.
METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR STORING AND PROVIDING VIDEO
Embodiments of the present disclosure relate to a method, a device, and a computer program product for storing and providing a video. A method for storing a video is provided, including: acquiring frame storage information in a to-be-stored video, the frame storage information including information related to storage of a plurality of frames in the video; converting the video into a plurality of data blocks based on the frame storage information; and converting the frame storage information into a streaming media index file to characterize the video in association with the plurality of data blocks. Embodiments of the present disclosure further provide a method for providing a video.
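As a rough illustration of the storage side, the sketch below groups frames into fixed-size data blocks and emits a JSON index describing them. The block size, frame-tuple layout, and index schema are all assumptions for illustration, not the patent's actual format.

```python
import json

def store_video(frames, frames_per_block=3):
    """Convert a video into data blocks plus a streaming index file.

    `frames` is a list of (frame_id, payload_bytes) tuples; the block
    size and JSON index layout are illustrative assumptions.
    """
    blocks, entries = [], []
    for i in range(0, len(frames), frames_per_block):
        chunk = frames[i:i + frames_per_block]
        block = b"".join(payload for _, payload in chunk)
        entries.append({
            "block": len(blocks),                 # position of the block
            "frames": [fid for fid, _ in chunk],  # frames it contains
            "size": len(block),                   # bytes in the block
        })
        blocks.append(block)
    # The index file characterizes the video in association with the blocks.
    return blocks, json.dumps({"blocks": entries})
```

A player could fetch only the index first, then request individual blocks on demand, which is the usual motivation for pairing chunked data with a streaming index.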
Method and Apparatus for Generating Multimedia File
A method for generating a multimedia file includes selecting a multimedia clip according to at least one received input instruction during multimedia recording, parsing the selected multimedia clip to obtain corresponding text information, generating a multimedia file according to the at least one received input instruction, and generating a file name of the multimedia file according to the text information. Because the file name is derived from text information obtained by parsing the multimedia clip, the file name reflects the content of the multimedia file.
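A minimal sketch of the naming step, assuming the parsed text is a plain transcript and applying an arbitrary five-word cap (both assumptions, not part of the claimed method):

```python
import re

def name_from_text(text, max_words=5, ext=".mp4"):
    """Build a file name from text parsed out of a multimedia clip.

    The word cap, sanitization rule, and extension are illustrative
    assumptions; a real implementation would follow platform naming rules.
    """
    words = re.findall(r"\w+", text)[:max_words]  # keep alphanumeric words
    base = "_".join(words) or "recording"         # fall back if no text found
    return base + ext
```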
Gallery of messages from individuals with a shared interest
A machine includes a processor and a memory connected to the processor. The memory stores instructions executed by the processor to receive a message and a message parameter indicative of a characteristic of the message, where the message includes a photograph or a video. A determination is made that the message parameter corresponds to a selected gallery, where the selected gallery includes a sequence of photographs or videos. The message is posted to the selected gallery in response to the determination. The selected gallery is supplied in response to a request.
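The posting logic might be sketched as follows, with exact tag equality standing in for whatever correspondence test the claims cover (an assumption):

```python
def post_to_gallery(galleries, message, message_parameter):
    """Post a message to the gallery its parameter corresponds to.

    `galleries` maps a gallery tag to its sequence of posted messages;
    exact equality between parameter and tag is an assumed matching rule.
    """
    gallery = galleries.get(message_parameter)
    if gallery is None:
        return False          # no gallery corresponds to this parameter
    gallery.append(message)   # post the photograph or video
    return True
```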
System and method for improved video operations
A method of video operations includes generating derivative byproducts related to encoded video captured of a scene, initializing a first operation based on the encoded video, and initializing a second operation different from the first operation based on the derivative byproducts.
STREAM PRODUCER FILTER VIDEO COMPOSITING
Methods and systems for providing accessibility features for generating annotations for a video content stream include providing an annotation menu with a plurality of annotation options for defining annotations for the video content stream. Each annotation option provides an annotation tool to generate an annotation that corresponds with an accessibility feature defined by the annotation option. Annotations generated using the annotation tool are used to generate an annotation layer for the video content stream. An annotated video content stream is generated by overlaying the annotation layer over the video content stream. In response to receiving a selection of the accessibility feature corresponding to an annotation option from a user, the annotated video content stream generated for that annotation option is provided for rendering at a client device of the user. The annotations included in the annotated video content stream augment the content of the video content stream.
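The overlay step can be sketched as below, with frames and annotations represented as strings keyed by frame index; real compositing would blend pixel layers, so this is only a structural analogy.

```python
def overlay_annotations(frames, annotation_layer):
    """Generate an annotated stream by overlaying an annotation layer.

    `frames` maps frame index to frame content and `annotation_layer`
    maps frame index to annotation text; string concatenation stands in
    for visual compositing (an illustrative simplification).
    """
    return {
        idx: (content + " [" + annotation_layer[idx] + "]"
              if idx in annotation_layer else content)
        for idx, content in frames.items()
    }
```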
System and method for automatically managing media content
A method, computer program product and computing device for receiving a request to load at least one new media content item on a personal media device. The size of the at least one new media content item is compared with the amount of storage space remaining on the personal media device to determine if the personal media device has sufficient available storage space. If the personal media device does not have sufficient available storage space, a relative weight associated with at least one old media content item stored on the personal media device is ascertained, the relative weight corresponding to a likelihood that the at least one old media content item will be rendered on the personal media device.
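The space check and weight-based selection might look like the sketch below. Evicting lowest-weight items first is an assumed policy; the abstract only says the weight corresponds to the likelihood of future rendering.

```python
def make_room(new_item_size, capacity, old_items):
    """Choose old media items to remove so a new item fits on a device.

    `old_items` is a list of (name, size, weight) tuples, where a lower
    weight means the item is less likely to be rendered again; evicting
    lowest-weight items first is an illustrative policy.
    """
    free = capacity - sum(size for _, size, _ in old_items)
    if free >= new_item_size:
        return []  # sufficient available storage space already
    evicted = []
    for name, size, _ in sorted(old_items, key=lambda item: item[2]):
        evicted.append(name)
        free += size
        if free >= new_item_size:
            break
    return evicted
```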
Multimedia distribution system
A multimedia file and methods of generating, distributing and using the multimedia file are described. Multimedia files in accordance with embodiments of the present invention can contain multiple video tracks, multiple audio tracks, multiple subtitle tracks, a complete index that can be used to locate each data chunk in each of these tracks and an abridged index that can enable the location of a subset of the data chunks in each track, data that can be used to generate a menu interface to access the contents of the file and ‘meta data’ concerning the contents of the file. Multimedia files in accordance with several embodiments of the present invention also include references to video tracks, audio tracks, subtitle tracks and ‘meta data’ external to the file.
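The complete and abridged indexes can be sketched as byte-offset tables over a track's chunks. Choosing the subset by keeping every Nth entry is an assumption; the abstract only says the abridged index locates a subset of the data chunks.

```python
def build_indexes(chunk_sizes, abridged_stride=4):
    """Build a complete index of every data chunk and an abridged index
    covering a subset of the chunks.

    `chunk_sizes` lists each chunk's byte length; the indexes hold
    cumulative byte offsets. Keeping every Nth offset in the abridged
    index is an illustrative assumption.
    """
    complete, offset = [], 0
    for size in chunk_sizes:
        complete.append(offset)   # offset locating this data chunk
        offset += size
    abridged = complete[::abridged_stride]
    return complete, abridged
```

An abridged index like this is small enough to load up front, allowing coarse seeking before the complete index is available.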
AUTOMATIC MEDIA CONTENT LAYERING SYSTEM
Provided are mechanisms that allow automatic media content layering. The systems and methods obtain a media content list that includes a plurality of different types of media content segment entries. Media content tracks are determined from the plurality of media content segment entries based on the type of those entries. Media content track features are determined from the media content segment entries; those features are used to adjust the media content tracks, layer multiple media content tracks, adjust the media content segments that make up the tracks, or modify other features. A media content layered object is then generated based on the media content track features and the media content tracks. An action, such as storage, may be performed on the generated media content layered object.
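The track-determination step can be sketched by grouping typed segment entries into per-type tracks and stacking them. Representing the layered object as a dict of tracks is an illustrative simplification of whatever object the claims cover.

```python
def layer_media(segment_entries):
    """Determine tracks from typed segment entries and stack them into
    a layered object.

    Each entry is a (segment_type, name) pair; representing the layered
    object as a dict of per-type tracks is an illustrative simplification.
    """
    tracks = {}
    for seg_type, name in segment_entries:
        tracks.setdefault(seg_type, []).append(name)  # one track per type
    return {"layers": tracks, "num_layers": len(tracks)}
```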
Caption timestamp predictor
An automated solution to determine suitable time ranges or timestamps for captions is described. In one example, a content file includes subtitle data with captions for display over respective timeframes of video. Audio data is extracted from the video, and the audio data is compared against a sound threshold to identify auditory timeframes in which sound is above the threshold. The subtitle data is also parsed to identify subtitle-free timeframes in the video. A series of candidate time ranges is then identified based on overlapping ranges of the auditory timeframes and the subtitle-free timeframes. In some cases, one or more of the candidate time ranges can be merged together or omitted, and a final series of time ranges or timestamps for captions is obtained. The time ranges or timestamps can be used to add additional non-verbal and contextual captions and indicators, for example, or for other purposes.
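The candidate-range step described above reduces to an interval intersection followed by a merge pass, sketched below. The timeframe representation and the merge threshold are assumptions for illustration.

```python
def candidate_caption_ranges(auditory, subtitle_free, merge_gap=0.5):
    """Intersect auditory timeframes with subtitle-free timeframes to
    find candidate caption time ranges, merging near-adjacent results.

    Timeframes are (start, end) tuples in seconds, assumed sorted and
    non-overlapping within each list; `merge_gap` is an assumed
    threshold for joining nearby candidates.
    """
    candidates = []
    for a_start, a_end in auditory:
        for s_start, s_end in subtitle_free:
            start, end = max(a_start, s_start), min(a_end, s_end)
            if start < end:
                candidates.append((start, end))  # overlapping range
    candidates.sort()
    merged = []
    for start, end in candidates:
        if merged and start - merged[-1][1] <= merge_gap:
            # Close enough to the previous range: merge them together.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

Each returned range is a span where sound is above the threshold but no subtitle is displayed, so a non-verbal or contextual caption could be placed there without colliding with existing subtitles.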