G06V20/47

Composite video competition

A method may include serially joining different video clips from videos of different historical competitions to form a composite video competition, the different video clips comprising an indeterminate subset of clips drawn from a larger pool of clips, wherein each clip from a historical competition has an associated partial result contribution to a final result of the historical competition. The method may further include presenting a result during the composite video competition, the result comprising a linked combination of the partial result contributions from the different video clips.
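A minimal sketch of the idea in this abstract. All names, the clip fields, and the additive scoring rule are assumptions for illustration; the claim only requires some linked combination of partial result contributions.

```python
# Hypothetical sketch: clips from different historical events each carry a
# partial result, and the running result shown during playback of the
# composite is the linked combination of those contributions.
from dataclasses import dataclass

@dataclass
class Clip:
    source_event: str    # which historical competition the clip came from
    duration_s: float    # playback length of the clip
    partial_points: int  # this clip's contribution to that event's final result

def compose_competition(pool, selected_indices):
    """Serially join an (arbitrary) subset of clips drawn from a larger pool."""
    return [pool[i] for i in selected_indices]

def running_result(composite):
    """Combination of partial contributions, presented during playback."""
    total = 0
    timeline = []
    for clip in composite:
        total += clip.partial_points
        timeline.append((clip.source_event, total))
    return timeline

pool = [
    Clip("2019 final", 12.0, 3),
    Clip("2021 semifinal", 8.0, 2),
    Clip("2022 final", 15.0, 5),
]
composite = compose_competition(pool, [0, 2])
print(running_result(composite))  # cumulative result after each clip
```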

METHOD AND SYSTEM FOR SELECTING HIGHLIGHT SEGMENTS
20230230378 · 2023-07-20 ·

Described are methods and systems for selecting a highlight segment. The computer-implemented method comprises: receiving a sequence of frames and at least one user data; via a converting module, for each frame, selecting a local neighborhood around it, said neighborhood comprising at least one frame, and converting each neighborhood into a feature vector; via a highlighting module, assigning a score to each of the feature vectors based on the user data; via a selection module, selecting at least one highlight segment based on the scoring of the feature vectors; and via an outputting module, outputting the highlight segment. The system comprises: a receiving module configured to receive a sequence of frames and at least one user data; a converting module configured to select a local neighborhood around each frame, said neighborhood comprising at least one frame, and convert each neighborhood into a feature vector; a highlighting module configured to assign a score to each of the feature vectors based on the user data; a selection module configured to select at least one highlight segment based on the scoring of the feature vectors; and an output component configured to output the highlight segment.
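A hedged sketch of the pipeline this abstract describes: per-frame neighborhoods are converted into feature vectors, scored against user data, and the best-scoring position is selected. The mean-based featurizer and dot-product scorer are illustrative assumptions, not the patented modules.

```python
# Toy stand-ins for the converting, highlighting, and selection modules.
def neighborhoods(frames, radius=1):
    """Local neighborhood (at least one frame) around each frame."""
    return [frames[max(0, i - radius): i + radius + 1] for i in range(len(frames))]

def to_feature(neighborhood):
    """Convert a neighborhood into a feature vector (here: per-channel mean)."""
    n = len(neighborhood)
    return [sum(frame[c] for frame in neighborhood) / n
            for c in range(len(neighborhood[0]))]

def score(feature, user_pref):
    """Score a feature vector against user data (here: a dot product)."""
    return sum(f * u for f, u in zip(feature, user_pref))

def select_highlight(frames, user_pref, radius=1):
    """Return the index whose neighborhood feature scores highest."""
    feats = [to_feature(nb) for nb in neighborhoods(frames, radius)]
    return max(range(len(feats)), key=lambda i: score(feats[i], user_pref))

frames = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]]  # toy 2-channel "frames"
print(select_highlight(frames, user_pref=[1.0, 1.0], radius=0))  # frame 1 wins
```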

Method and system for producing story video

A method and a system for producing a story video are provided. According to one embodiment, the method can produce a specific story video by determining a story theme suitable for the collected videos and then selecting and arranging an appropriate video for each frame of a template associated with that theme.

VIDEO PREVIEW METHOD AND APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
20230222722 · 2023-07-13 ·

This disclosure relates to a video processing method and apparatus, and a non-transitory computer-readable storage medium, and relates to the field of computer technologies. The processing method includes: presenting an avatar of at least one user; under the condition that a first user exists in the at least one user, playing a dynamic image within an area where an avatar of the first user is located, wherein the first user is a user who has posted a video content within a preset duration, and the dynamic image is a related image of the video content; and presenting the video content posted by the first user in response to an operation of a second user on the dynamic image.

Selection of video template based on computer simulation metadata
11554324 · 2023-01-17 ·

Computer game metadata is used to select a video template for delivery to a user, who populates the template with a video of the player or the game. Each template can be associated with its own unique text, audio, overlays, and the like (in other words, its own style), which depends on the metadata collected during the course of game play.
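An illustrative sketch only: the abstract ties each template's style to metadata collected during play. The rule set, field names, and template contents below are assumptions, not the disclosed mapping.

```python
# Toy rule-based selection of a template style from gameplay metadata.
def select_template(metadata):
    """Pick a template (text/audio/overlay style) from simulation metadata."""
    if metadata.get("boss_defeated"):
        return {"style": "epic", "overlay": "victory_banner", "audio": "fanfare"}
    if metadata.get("deaths", 0) > 10:
        return {"style": "comedic", "overlay": "blooper_reel", "audio": "kazoo"}
    return {"style": "neutral", "overlay": "none", "audio": "ambient"}

print(select_template({"boss_defeated": True, "deaths": 3})["style"])  # epic
```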

METHOD FOR ACTION RECOGNITION IN VIDEO AND ELECTRONIC DEVICE
20230010392 · 2023-01-12 ·

A method for action recognition in a video is described. The method includes inputting a plurality of consecutive clips divided from the video into a convolutional neural network (CNN), and obtaining a set of clip descriptors; processing the set of clip descriptors via a Bi-directional Attention mechanism, and obtaining a global representation of the video; and performing video-classification for the global representation of the video such that action recognition is achieved.
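A toy sketch of the aggregation stage only (no real CNN): per-clip descriptors are pooled into a global video representation by attention applied in both temporal directions. The recency-biased attention and the concatenation of the two pooled vectors are assumed simplifications of the Bi-directional Attention mechanism, not the patented design.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def directional_attend(descriptors, query, decay=0.5):
    """Attention pooling with a recency bias toward the sequence end, so
    running it on the reversed sequence gives a genuinely different view."""
    n = len(descriptors)
    scores = [sum(d * q for d, q in zip(desc, query)) + math.log(decay) * (n - 1 - i)
              for i, desc in enumerate(descriptors)]
    weights = softmax(scores)
    dim = len(descriptors[0])
    return [sum(w * desc[c] for w, desc in zip(weights, descriptors))
            for c in range(dim)]

def global_representation(descriptors, query):
    """Concatenate forward and backward attention pools (assumed stand-in
    for the bi-directional aggregation over clip descriptors)."""
    fwd = directional_attend(descriptors, query)
    bwd = directional_attend(list(reversed(descriptors)), query)
    return fwd + bwd

descs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # one descriptor per clip
rep = global_representation(descs, query=[1.0, 1.0])
print(len(rep))  # concatenated 4-dimensional global representation
```

The final video classification would then be a classifier applied to `rep`; it is omitted here to keep the sketch focused on the aggregation step.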

METHOD AND HOME APPLIANCE DEVICE FOR GENERATING TIME-LAPSE VIDEO

A method of generating a time-lapse video includes: identifying an image storage mode; obtaining a first image; based on identifying that the image storage mode is an emphasis mode for emphasizing one or more images, obtaining a first time difference between the first image and a stored image, and a first feature value indicating a first amount of change between the first image and the stored image; for each respective image of a first plurality of images of a first image group stored in a memory, identifying a second image from among the first plurality of images, based on a second time difference and a second feature value; generating a second image group by removing the second image from the first image group and adding the first image to the first image group; and generating the time-lapse video by using a second plurality of images of the second image group.
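A sketch of the emphasis-mode bookkeeping described above: the stored image group has a fixed capacity, and when a new image arrives, the stored image with the smallest combined time-difference and change score is removed. The scoring rule, capacity, and image encoding are assumptions for illustration.

```python
def change_score(img_a, img_b):
    """Feature value: amount of change between two images (toy: abs diff sum)."""
    return sum(abs(a - b) for a, b in zip(img_a, img_b))

def update_group(group, new_image, capacity=3):
    """group is a list of (timestamp, pixels); keep at most `capacity` images."""
    group = group + [new_image]
    if len(group) <= capacity:
        return group
    # Score each non-initial image by its time gap to, and change from,
    # its predecessor; drop the least significant one.
    def significance(i):
        (t_prev, px_prev), (t, px) = group[i - 1], group[i]
        return (t - t_prev) + change_score(px_prev, px)
    drop = min(range(1, len(group)), key=significance)
    return group[:drop] + group[drop + 1:]

group = []
stream = [(0, [0, 0]), (1, [0, 1]), (2, [0, 1]), (10, [9, 9])]
for img in stream:
    group = update_group(group, img)
print([t for t, _ in group])  # timestamps of the retained images
```

The near-duplicate frame at t=2 is the one evicted, while the large jump at t=10 is kept, matching the "emphasis" intent of keeping images that show change.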

Automatic trailer detection in multimedia content
11694726 · 2023-07-04 ·

The disclosed computer-implemented method may include accessing media segments that correspond to respective media items. At least one of the media segments may be divided into discrete video shots. The method may also include matching the discrete video shots in the media segments to corresponding video shots in the corresponding media items according to various matching factors. The method may further include generating a relative similarity score between the matched video shots in the media segments and the corresponding video shots in the media items, and training a machine learning model to automatically identify video shots in the media items according to the generated relative similarity score between matched video shots. Various other methods, systems, and computer-readable media are also disclosed.
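A rough sketch of the matching stage: each discrete shot in a trailer segment is matched to its most similar shot in the full media item, yielding a similarity score per pair. The feature vectors and cosine similarity below are illustrative assumptions, not the disclosed matching factors, and the model-training step is omitted.

```python
import math

def cosine(a, b):
    """Cosine similarity between two shot feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_shots(trailer_shots, media_shots):
    """For each trailer shot, find the best-matching media shot and its score."""
    matches = []
    for shot in trailer_shots:
        scored = [(cosine(shot, m), j) for j, m in enumerate(media_shots)]
        best_score, best_j = max(scored)
        matches.append((best_j, best_score))
    return matches

trailer = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]            # shots cut into the trailer
media = [[0.9, 0.1, 0.0], [0.0, 0.0, 1.0], [0.1, 0.9, 0.0]]  # shots in the full item
print(match_shots(trailer, media))  # (media index, similarity) per trailer shot
```

The resulting (index, score) pairs are exactly the kind of relative similarity signal the abstract describes feeding into machine-learning training.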

Image capture device with an automatic image capture capability

An image capture device may automatically capture images. An image sensor may generate visual content based on light that becomes incident thereon. A depiction of interest within the visual content may be identified, and one or more images may be generated to include one or more portions of the visual content including the depiction of interest.

Method and apparatus for generating story from plurality of images by using deep learning network

Disclosed herein are a visual story generation method and apparatus for generating a story from a plurality of images by using a deep learning network. The visual story generation method includes: extracting features from a plurality of respective images by using the first extraction unit of a deep learning network; generating the structure of a story based on the overall feature of the plurality of images by using the second extraction unit of the deep learning network; and generating the story by using outputs of the first and second extraction units.