Patent classifications
G06F16/743
VIDEO PROCESSING METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
The present disclosure provides a video processing method, an apparatus, a device, and a storage medium. The method includes: after a target effect style is determined and a target video clip is determined based on presentation of a video to be processed on a timeline, establishing a binding relationship between the target effect style and the target video clip in response to an effect application trigger operation, so that the target effect style is applied to the target video clip. By establishing the binding relationship between the target effect style and the target video clip, the embodiments of the present disclosure apply effect processing only to a specific video clip of the video, thereby meeting the user's demand to process only a certain clip, increasing the flexibility of video effect processing, and further improving the user's experience of video effect processing.
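The clip-to-style binding described in this abstract can be sketched as a small lookup structure. All names here (`Clip`, `EffectStyle`, `EffectBindings`) are illustrative assumptions, not the patent's actual implementation:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Clip:
    start: float  # start position on the timeline, in seconds
    end: float    # end position on the timeline, in seconds


@dataclass(frozen=True)
class EffectStyle:
    name: str


class EffectBindings:
    """Stores which effect style is bound to which clip of the video."""

    def __init__(self):
        self._bindings = {}

    def bind(self, clip, style):
        # Binding a style to one clip leaves all other clips untouched.
        self._bindings[clip] = style

    def style_at(self, t):
        """Return the effect style active at timeline position t, if any."""
        for clip, style in self._bindings.items():
            if clip.start <= t < clip.end:
                return style
        return None


bindings = EffectBindings()
bindings.bind(Clip(2.0, 5.0), EffectStyle("sepia"))
print(bindings.style_at(3.0).name)  # inside the bound clip: the style applies
print(bindings.style_at(7.0))       # the rest of the video is unaffected
```

The key property the abstract claims — that only the bound clip receives the effect — falls out of the positional lookup in `style_at`.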
Multimedia focalization
Example implementations are directed to methods and systems for individualized multimedia navigation and control, including: receiving metadata for a piece of digital content, where the metadata comprises a primary image and text that describes the digital content; analyzing the primary image to detect one or more objects; analyzing and selecting one or more secondary images corresponding to each detected object; and generating a data structure for the digital content comprising the one or more secondary images, where the digital content is described by a preferred secondary image.
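The pipeline in this abstract — detect objects in the primary image, pick secondary images per object, emit a data structure — can be sketched as below. The object detector is a stub (a real system would use a vision model), and all function and field names are assumptions:

```python
def detect_objects(primary_image):
    # Stub: pretend a detector found these labels in the primary image.
    return ["dog", "beach"]


def select_secondary_images(obj, image_library):
    # Pick library images tagged with the detected object.
    return [img for img, tags in image_library.items() if obj in tags]


def build_focalization_record(metadata, image_library, preferred_object):
    objects = detect_objects(metadata["primary_image"])
    secondary = {obj: select_secondary_images(obj, image_library)
                 for obj in objects}
    # Per the abstract, the content is described by a preferred secondary image.
    preferred = secondary.get(preferred_object, [None])
    return {"text": metadata["text"],
            "secondary_images": secondary,
            "preferred_image": preferred[0] if preferred else None}


library = {"dog1.jpg": {"dog"}, "surf.jpg": {"beach", "sea"}}
record = build_focalization_record(
    {"primary_image": "poster.jpg", "text": "A day at the shore"},
    library, preferred_object="dog")
print(record["preferred_image"])  # → dog1.jpg
```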
Automated Generation of Personalized Content Thumbnails
A system includes a computing platform including processing hardware and a memory storing software code, a trained machine learning (ML) model, and a content thumbnail generator. The processing hardware executes the software code to receive interaction data describing interactions by a user with content thumbnails, identify, using the interaction data, an affinity by the user for at least one content thumbnail feature, and determine, using the interaction data, a predetermined business rule, or both, content for promotion to the user. The software code further provides a prediction, using the trained ML model and based on the affinity by the user, of the desirability of each of multiple candidate thumbnails for the content to the user, generates, using the content thumbnail generator and based on the prediction, a thumbnail having features of one or more of the candidate thumbnails, and displays the thumbnail to promote the content to the user.
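The affinity-then-predict flow described above can be sketched without a real ML model. The click-counting affinity measure and the stand-in scoring function below are illustrative assumptions in place of the trained model the abstract describes:

```python
def infer_affinity(interaction_data):
    # Count interactions per thumbnail feature to estimate user affinities.
    counts = {}
    for click in interaction_data:
        for feat in click["features"]:
            counts[feat] = counts.get(feat, 0) + 1
    return counts


def predict_desirability(candidate, affinity):
    # Stand-in for the trained ML model: score a candidate thumbnail by how
    # strongly the user has engaged with its features.
    return sum(affinity.get(f, 0) for f in candidate["features"])


def generate_thumbnail(candidates, affinity):
    # Pick the candidate predicted to be most desirable to this user.
    return max(candidates, key=lambda c: predict_desirability(c, affinity))


clicks = [{"features": ["close-up", "bright"]}, {"features": ["close-up"]}]
affinity = infer_affinity(clicks)
candidates = [{"id": "t1", "features": ["wide-shot"]},
              {"id": "t2", "features": ["close-up", "bright"]}]
print(generate_thumbnail(candidates, affinity)["id"])  # → t2
```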
METADATA TAG IDENTIFICATION
A method for automatic metadata tag identification for videos is described. Content features are extracted from a video into respective data structures. The extracted content features are from at least two different feature modalities. The respective data structures are encoded into a common data structure using an encoder of a recurrent neural network (RNN) model. The common data structure is decoded using a decoder of the RNN model to identify content platform metadata tags to be associated with the video on a social content platform. Decoding is based on group tag data for users of the social content platform that identifies groups of the users and corresponding group metadata tags of interest for the groups of users.
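The encode/decode data flow described above can be shown schematically. The stand-in functions below replace the RNN encoder and decoder with trivial arithmetic, purely to illustrate the shape of the pipeline; the modality names, group profiles, and tags are all assumptions:

```python
def extract_features(video):
    # Two feature modalities, e.g. visual and audio (values are illustrative).
    return {"visual": [0.9, 0.1], "audio": [0.2, 0.8]}


def encode(features):
    # Stand-in for the RNN encoder: fuse the per-modality data structures
    # into one common data structure (here, a dimension-wise average).
    vectors = list(features.values())
    return [sum(dims) / len(vectors) for dims in zip(*vectors)]


def decode(common, group_tag_data):
    # Stand-in for the RNN decoder: score each user group's profile against
    # the common structure and emit the best-matching group's metadata tags.
    def score(profile):
        return sum(a * b for a, b in zip(common, profile))
    best_group = max(group_tag_data, key=lambda g: score(g["profile"]))
    return best_group["tags"]


groups = [
    {"profile": [1.0, 0.0], "tags": ["#travel", "#scenery"]},
    {"profile": [0.0, 1.0], "tags": ["#music", "#concert"]},
]
common = encode(extract_features("clip.mp4"))
print(decode(common, groups))
```

The point of the sketch is the structure: multi-modal features funnel into one common representation, and group tag data steers the decoding.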
Media browsing user interface with intelligently selected representative media items
- Graham R. CLARKE,
- Kevin Aujoulet,
- Kevin Bessiere,
- Simon BOVET,
- Eric M. G. CIRCLAEYS,
- Lynne DEVINE,
- Alan C. DYE,
- Benedikt M. Hirmer,
- Andreas KARLSSON,
- Chelsea L. Burnette,
- Matthieu LUCAS,
- Behkish J. Manzari,
- Nicole R. Ryan,
- William A. Sorrentino, III,
- Andre SOUZA DOS SANTOS,
- Gregg SUZUKI,
- Sergey TATARCHUK,
- Justin S. Titi
The present disclosure generally relates to navigating a collection of media items. In accordance with one embodiment, in response to receiving an input, a device displays a first view of a collection of media items, including concurrently displaying a representation of a first time period and a representation of a second time period. In accordance with a determination that a current time is associated with a first recurring temporal event: the representation of the first time period includes a first representative media item and the representation of the second time period includes a second representative media item. In accordance with a determination that the current time is associated with a second recurring temporal event, the representation of the first time period includes a third representative media item and the representation of the second time period includes a fourth representative media item.
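The selection logic above — the representative media item for each time period depends on which recurring temporal event the current time matches — can be sketched as a lookup keyed on (period, event). The specific events and filenames below are illustrative assumptions:

```python
import datetime


def recurring_event(today):
    # Toy rule standing in for the patent's "recurring temporal events":
    # a birthday anniversary vs. an ordinary day.
    return "birthday" if (today.month, today.day) == (6, 1) else "ordinary"


# Representative media item per (time period, recurring temporal event).
representatives = {
    ("2023-06", "birthday"): "cake_photo.jpg",
    ("2023-06", "ordinary"): "june_landscape.jpg",
    ("2023-07", "birthday"): "party_photo.jpg",
    ("2023-07", "ordinary"): "july_beach.jpg",
}


def first_view(periods, today):
    # Display each period's representation with the event-appropriate item.
    event = recurring_event(today)
    return [representatives[(p, event)] for p in periods]


print(first_view(["2023-06", "2023-07"], datetime.date(2023, 6, 1)))
print(first_view(["2023-06", "2023-07"], datetime.date(2023, 6, 15)))
```

The same two time periods surface different representative items depending solely on which event the current time is associated with, matching the two determinations in the abstract.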
VIDEO-BASED INTERACTION IMPLEMENTATION METHOD AND APPARATUS, DEVICE AND MEDIUM
A video-based interaction implementation method includes the following: At least one video is presented in an interface; an interactive video input by a user based on a presented video is acquired, and an association relationship is established between the presented video and the interactive video; and the interactive video is presented based on the association relationship.
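The association relationship between a presented video and the interactive videos users submit can be sketched as a simple mapping; the class and method names are illustrative assumptions:

```python
class InteractionStore:
    """Maps each presented video to the interactive videos users input."""

    def __init__(self):
        self._associations = {}

    def add_interaction(self, presented_video, interactive_video):
        # Establish the association relationship described in the abstract.
        self._associations.setdefault(presented_video, []).append(interactive_video)

    def interactions_for(self, presented_video):
        # Interactive videos are presented based on this relationship.
        return self._associations.get(presented_video, [])


store = InteractionStore()
store.add_interaction("dance_challenge.mp4", "user_reply_1.mp4")
store.add_interaction("dance_challenge.mp4", "user_reply_2.mp4")
print(store.interactions_for("dance_challenge.mp4"))
```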
SYSTEMS AND METHODS FOR FLEXIBLY USING TRENDING TOPICS AS PARAMETERS FOR RECOMMENDING MEDIA ASSETS THAT ARE RELATED TO A VIEWED MEDIA ASSET
Systems and methods are provided herein for flexibly using trending topics as parameters for recommending media assets that are related to a viewed media asset. A media guidance application may determine that a user has viewed a media asset. The media guidance application may identify a plurality of attributes corresponding to the viewed media asset and determine that a respective attribute of the plurality of attributes matches a trending topic. The media guidance application may update a set of weightings corresponding to the plurality of attributes by increasing the weighting corresponding to the respective attribute, and may adjust a recommendation for a media asset different from the viewed media asset based on the updated set of weightings. The media guidance application may generate for display the recommendation of the media asset different from the viewed media asset.
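The weighting update at the core of this abstract is straightforward to sketch: boost the weight of any viewed-asset attribute that matches a trending topic, then score candidate recommendations against the updated weights. The attribute names and the boost factor are assumptions for illustration:

```python
def update_weights(weights, attributes, trending_topics, boost=2.0):
    """Increase the weighting of any attribute that matches a trending topic."""
    updated = dict(weights)
    for attr in attributes:
        if attr in trending_topics:
            updated[attr] = updated.get(attr, 1.0) * boost
    return updated


def score(candidate_attrs, weights):
    # A candidate asset scores higher when it shares highly weighted
    # attributes with the viewed asset.
    return sum(weights.get(a, 0.0) for a in candidate_attrs)


viewed = ["sci-fi", "space", "ensemble-cast"]
weights = {a: 1.0 for a in viewed}
weights = update_weights(weights, viewed, trending_topics={"space"})
print(score(["space", "documentary"], weights))        # boosted by the trend
print(score(["ensemble-cast", "comedy"], weights))     # no trending match
```

Because only the trend-matching attribute is boosted, recommendations sharing that attribute rise relative to otherwise similar candidates.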
Method and program for producing a multi-reactive video, generating metadata for the multi-reactive video, and analyzing interaction data to understand human actions
Disclosed is a multi-reactive video generating method and program that performs various conditional playbacks depending on a user's manipulation, based on a video database (e.g., a basic video) in which a general video or a plurality of image frames is stored. According to an embodiment of the inventive concept, various actions (i.e., reactions) may be applied as the multi-reactive video generation file is played with a general video or a combination of a plurality of image frames.
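Conditional playback, as described above, amounts to selecting a frame sequence from the video database based on the user's manipulation. The gesture names and frame identifiers in this sketch are illustrative assumptions:

```python
# Frame database mapping user manipulations to reaction frame sequences.
reactive_db = {
    "swipe_left":  ["frame_10", "frame_11", "frame_12"],
    "swipe_right": ["frame_20", "frame_21"],
    "press":       ["frame_30"],
}


def play(manipulation, db):
    """Return the frame sequence the multi-reactive video plays in response."""
    # Unrecognized manipulations fall back to the basic video's frames.
    return db.get(manipulation, ["frame_00"])


print(play("swipe_left", reactive_db))
print(play("shake", reactive_db))  # unrecognized input falls back to base
```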
SYSTEM AND METHOD FOR PROVIDING RELIABLE AND EFFICIENT DATA TRANSFER
Systems and methods for providing reliable and efficient data transfer involve a user browser configured to run JavaScript that permits the browser to communicate with components of a media distribution system. The user browser may request specific media content on the company website, which may inform components of the media distribution system of the request. To facilitate downloading of the requested media content, components of the media distribution system may arrange for the generation of torrent files, informing the user browser where the requested media content may be downloaded. A fake torrent file may be generated and distributed to the user browser to permit viewing of the media content before generation of a real torrent file is possible. Upon receiving the torrent files, a user may download and play the media content.
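The fake-then-real torrent handoff described above can be sketched as a state machine: serve a placeholder torrent until the real one is generated. The class, method names, and peer lists are illustrative assumptions, not the patent's actual implementation:

```python
class MediaDistributor:
    """Serves a fake torrent file until the real torrent file is generated."""

    def __init__(self):
        self._real_ready = False

    def request_content(self, title):
        if not self._real_ready:
            # A fake torrent file lets playback begin immediately,
            # before the real torrent file exists.
            return {"type": "fake", "title": title, "peers": ["cdn-seed"]}
        return {"type": "real", "title": title, "peers": ["seed-a", "seed-b"]}

    def finish_generation(self):
        # Called once the real torrent file has been generated.
        self._real_ready = True


dist = MediaDistributor()
first = dist.request_content("show.mp4")
dist.finish_generation()
second = dist.request_content("show.mp4")
print(first["type"], second["type"])
```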
IMAGE GUIDED VIDEO THUMBNAIL GENERATION FOR E-COMMERCE APPLICATIONS
Systems and methods are provided for automatically generating a thumbnail for a video on an online shopping site. The disclosed technology automatically generates a thumbnail for a video, where the thumbnail represents an item but not necessarily the content of the video. A thumbnail generator receives a video that describes the item and an ordered list of item images associated with the item and used in an item listing. The thumbnail generator extracts video frames from the video based on sampling rules and determines similarity scores for the sampled video frames. A similarity score indicates the degree of similarity between the content of a video frame and an item image. The thumbnail generator determines weighted similarity scores based on the item images and the occurrences of sampled video frames in the video. The disclosed technology generates a thumbnail for the video by selecting a sampled video frame based on the weighted similarity scores.
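The weighted similarity scoring described above can be sketched on toy feature vectors. The cosine-style similarity, the per-image weights (e.g., the first listing image counting more), and the frame-occurrence counts are all assumptions; a production system would compare learned image embeddings:

```python
def similarity(frame_vec, image_vec):
    # Cosine-like similarity between a video frame and an item image.
    dot = sum(a * b for a, b in zip(frame_vec, image_vec))
    norm = (sum(a * a for a in frame_vec) ** 0.5) \
         * (sum(b * b for b in image_vec) ** 0.5)
    return dot / norm if norm else 0.0


def pick_thumbnail(sampled_frames, item_images, image_weights, frame_counts):
    """Select the sampled frame with the highest weighted similarity score.

    image_weights: weight per item image in the ordered listing
    frame_counts:  how often near-identical frames occur in the video
    """
    def weighted_score(name, vec):
        per_image = sum(w * similarity(vec, img)
                        for img, w in zip(item_images, image_weights))
        return per_image * frame_counts.get(name, 1)

    return max(sampled_frames, key=lambda f: weighted_score(f[0], f[1]))[0]


item_images = [[1.0, 0.0], [0.5, 0.5]]  # ordered item listing images
frames = [("f1", [0.9, 0.1]), ("f2", [0.1, 0.9])]
best = pick_thumbnail(frames, item_images,
                      image_weights=[2.0, 1.0],
                      frame_counts={"f1": 3, "f2": 5})
print(best)  # → f1
```

Here "f1" wins despite occurring less often, because its strong match with the heavily weighted first listing image dominates the score.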