G11B27/28

Systems and methods for creating video summaries
11710318 · 2023-07-25 ·

Video information defining video content may be accessed. Highlight moments within the video content may be identified. Flexible video segments may be determined based on the highlight moments. Individual flexible video segments may include one or more of the highlight moments and a flexible portion of the video content. The flexible portion of the video content may be characterized by a minimum segment duration, a target segment duration, and a maximum segment duration. A duration allocated to the video content may be determined. One or more of the flexible video segments may be selected based on the duration and one or more of the minimum segment duration, the target segment duration, and/or the maximum segment duration of the selected flexible video segments. A video summary including the selected flexible video segments may be generated.
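The selection step described above reads like a small packing problem: choose segments whose flexible durations fit an allocated total. A minimal greedy sketch of one possible interpretation (the `FlexibleSegment` fields, the score-ordered selection, and the leftover-time distribution are assumptions, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class FlexibleSegment:
    highlight_time: float   # seconds into the video
    min_duration: float     # segment may shrink to this
    target_duration: float  # preferred length
    max_duration: float     # segment may grow to this
    score: float            # assumed relevance of the highlight

def select_segments(segments, allocated_duration):
    """Greedily keep the highest-scoring segments whose minimum durations
    fit the allocation, then stretch each toward its target duration with
    whatever time remains. Returns (segment, chosen_duration) pairs."""
    chosen, used = [], 0.0
    for seg in sorted(segments, key=lambda s: s.score, reverse=True):
        if used + seg.min_duration <= allocated_duration:
            chosen.append(seg)
            used += seg.min_duration
    leftover = allocated_duration - used
    result = []
    for seg in chosen:
        extra = min(leftover, seg.target_duration - seg.min_duration)
        result.append((seg, seg.min_duration + extra))
        leftover -= extra
    return result
```

A production system would more likely treat this as a knapsack-style optimization over the min/target/max bounds rather than a single greedy pass.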

Video generation method and apparatus, electronic device, and computer readable medium

Disclosed are a video generation method and apparatus, an electronic device, and a computer readable medium. A specific embodiment of the method comprises: obtaining video footage and audio footage, the video footage comprising picture footage; determining music points of the audio footage, the music points being used for dividing the audio footage into a plurality of audio clips; using the video footage to generate a video clip for each audio clip to obtain a plurality of video clips, each video clip having the same duration as its corresponding audio clip; splicing the plurality of video clips according to the times at which their corresponding audio clips appear in the audio footage; and adding the audio footage as the audio track to obtain a composite video.
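The splicing logic above can be sketched in a few lines; this is a simplified interpretation (the function names, the trim-to-fit rule, and the skipping of too-short footage are assumptions, not the patented method):

```python
def split_at_music_points(audio_duration, music_points):
    """Turn detected music points into (start, end) audio clip intervals."""
    bounds = [0.0] + sorted(music_points) + [audio_duration]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

def plan_video_clips(footage_durations, audio_clips):
    """Assign one piece of source footage per audio clip, trimming it to the
    clip's duration. Footage shorter than the clip is skipped in this sketch.
    Returns (footage_index, clip_duration) pairs in splice order."""
    plan = []
    sources = list(enumerate(footage_durations))
    for start, end in audio_clips:
        need = end - start
        for idx, (i, dur) in enumerate(sources):
            if dur >= need:
                plan.append((i, need))
                sources.pop(idx)  # each footage item is used at most once
                break
    return plan
```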

SELECTIVE CONTENT INSERTION INTO AREAS OF MEDIA OBJECTS

One or more computing devices, systems, and/or methods for selective content insertion into areas of media objects are provided. For example, a media object (e.g., an image or video) is selected for composition with content, such as where a message, interactive content, a hyperlink, or another type of content is overlaid on or embedded into the media object to create a composite media object. The content is added into an area of the media object that is selectively identified to reduce occlusion and/or improve visual cohesiveness between the content and the media object (e.g., an area with a similar or complementary color, having an adequate size and sparse visual features such as a soccer player, a ball, or another entity). In this way, the content may be added into the area of the media object to create a composite media object to provide to users.
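One way to read the area-selection criterion is as a scoring pass over candidate rectangles, trading off size against the number of salient features the overlay would occlude. A toy sketch (the candidate representation and scoring weight are assumptions):

```python
def pick_insertion_area(candidates, min_area):
    """Pick the best rectangle for a content overlay. Each candidate is
    (width, height, n_features), where n_features counts salient entities
    (e.g. a detected player or ball) inside the rectangle. Larger, emptier
    areas score higher; occluding a feature carries a heavy penalty."""
    best, best_score = None, float("-inf")
    for w, h, n_features in candidates:
        area = w * h
        if area < min_area:      # too small to hold the content legibly
            continue
        score = area - 10000 * n_features
        if score > best_score:
            best, best_score = (w, h, n_features), score
    return best
```

Real systems would presumably also fold in the color-similarity criterion the abstract mentions; that is omitted here for brevity.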

GENERATION, CURATION, AND PRESENTATION OF MEDIA COLLECTIONS WITH AUTOMATED ADVERTISING
20230237535 · 2023-07-27 ·

Systems, devices, media, instructions, and methods for computer-based automated content generation, curation, and presentation are described. In one embodiment, a content collection is generated with a first continuous presentation group by associating a first content element from a first content message of a plurality of content messages with a second content element from a second content message, such that the two elements form the first continuous presentation group. Advertising element placement within the presentation order for the first media collection is determined, and adjusted to avoid interrupting the continuous presentation group. In other embodiments, various advertising patterns are used and adjusted based on curated presentation groups within content collections.
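The placement adjustment can be pictured as shifting periodic ad slots out of any presentation group they would split. A minimal sketch under assumed 1-indexed item positions (all names and the shifting rule are hypothetical, not from the publication):

```python
def place_ads(n_items, ad_period, groups):
    """Insert an ad slot after every `ad_period`-th item (1-indexed),
    shifting any slot that would split a continuous presentation group
    ((start, end) inclusive item indices) to just after that group."""
    slots = []
    for pos in range(ad_period, n_items, ad_period):
        for start, end in groups:
            if start <= pos < end:  # an ad between pos and pos+1 splits the group
                pos = end           # move it to the group boundary
                break
        if pos < n_items and pos not in slots:
            slots.append(pos)
    return slots
```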

INFORMATION PROCESSING DEVICE AND METHOD, AND PROGRAM
20230005510 · 2023-01-05 ·

The present technology relates to an information processing device, a method, and a program capable of improving the efficiency of content creation.

An information processing device includes a determination unit that, in a case where time-series display information regarding the audio signal of each of a plurality of tracks is arranged and displayed, determines a display order for the display information of the plurality of tracks, or a time position of a marker indicating a scene switch in the audio signals of the plurality of tracks, on the basis of the audio signal of each track or of audio-related information regarding each of the audio signals. The present technology can be applied to a content creation tool.

Gas detection device that visualizes gas

A gas detection device includes: a processor that visualizes a gas by performing image processing on infrared image data in an inspection region imaged by an imaging device; a display that displays an inspection image that reflects a result of the image processing; and an input interface that receives an input of supplementary information on the inspection image displayed on the display.

System and method for displaying objects of interest at an incident scene

A system and method for displaying an image of an object of interest located at an incident scene. The method includes receiving, from an image capture device, a first video stream of the incident scene, and displaying the first video stream. The method includes receiving an input indicating a pixel location in the video stream, and detecting the object of interest in the video stream based on the pixel location. The method includes determining an object class, an object identifier, and metadata for the object of interest. The metadata includes the object class, an object location, an incident identifier corresponding to the incident scene, and a time stamp. The method includes receiving an annotation input for the object of interest, and associating the annotation input and the metadata with the object identifier. The method includes storing, in a memory, the object of interest, the annotation input, and the metadata.
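The metadata and annotation flow described above maps naturally onto a small record/store pair. A sketch with hypothetical names (the abstract specifies the metadata fields but not any data structures):

```python
import time
from dataclasses import dataclass, field

@dataclass
class ObjectRecord:
    object_id: str           # the object identifier
    object_class: str        # e.g. "vehicle", "person"
    location: tuple          # pixel (x, y) where the object was indicated
    incident_id: str         # ties the object to the incident scene
    timestamp: float = field(default_factory=time.time)
    annotations: list = field(default_factory=list)

class IncidentStore:
    """In-memory stand-in for the method's object/metadata storage."""
    def __init__(self):
        self._records = {}

    def add(self, record):
        self._records[record.object_id] = record

    def annotate(self, object_id, note):
        # annotation input is associated with the object identifier
        self._records[object_id].annotations.append(note)

    def by_incident(self, incident_id):
        return [r for r in self._records.values()
                if r.incident_id == incident_id]
```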

AUTOMATED WORKFLOWS FROM MEDIA ASSET DIFFERENTIALS

The disclosed computer-implemented method may include (1) accessing a first media data object and a different, second media data object that, when played back, each render temporally sequenced content, (2) comparing first temporally sequenced content represented by the first media data object with second temporally sequenced content represented by the second media data object to identify a set of common temporal subsequences between the first media data object and the second media data object, (3) identifying a set of edits relative to the set of common temporal subsequences that describe a difference between the temporally sequenced content of the first media data object and the temporally sequenced content of the second media data object, and (4) executing a workflow relating to the first media data object and/or the second media data object based on the set of edits. Various other methods, systems, and computer-readable media are also disclosed.
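Steps (2) and (3) resemble classic sequence diffing applied to frame fingerprints. A compact sketch using longest-matching-block comparison (treating each media object as a sequence of per-frame hashes is an assumption; the patent does not specify the comparison algorithm):

```python
from difflib import SequenceMatcher

def diff_media(frames_a, frames_b):
    """Compare two media objects represented as sequences of frame
    fingerprints. Returns (common, edits): the common temporal
    subsequences as matching blocks, and the edit operations
    (insert/delete/replace) between them."""
    sm = SequenceMatcher(a=frames_a, b=frames_b, autojunk=False)
    common = [m for m in sm.get_matching_blocks() if m.size > 0]
    edits = [op for op in sm.get_opcodes() if op[0] != "equal"]
    return common, edits
```

The `edits` list is exactly the kind of differential that step (4) could dispatch a workflow on, e.g. re-rendering only the replaced ranges.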