G06F16/739

Scene and shot detection and characterization
11604935 · 2023-03-14

A method includes receiving, with a computing system, a video item. The method further includes identifying a first set of features within a first frame of the video item. The method further includes identifying, with the computing system, a second set of features within a second frame of the video item, the second frame being subsequent to the first frame. The method further includes determining, with the computing system, differences between the first set of features and the second set of features. The method further includes assigning a clip category to a clip extending between the first frame and the second frame based on the differences.
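The clip-categorization idea above can be sketched in a few lines. This is an illustrative toy, not the patented method: frames are modeled as flat intensity lists, features are a mean plus a coarse histogram, and the category threshold is an assumed parameter.

```python
# Illustrative sketch (not the patented method): assign a clip category
# based on differences between feature sets of two frames.
# Frames are modeled as flat lists of 0-255 pixel intensities.

def frame_features(frame):
    """Toy feature set: mean intensity plus a coarse 4-bin histogram."""
    n = len(frame)
    mean = sum(frame) / n
    bins = [0, 0, 0, 0]
    for p in frame:
        bins[min(p // 64, 3)] += 1
    return [mean] + [b / n for b in bins]

def feature_difference(f1, f2):
    """L1 distance between the two feature sets."""
    return sum(abs(a - b) for a, b in zip(f1, f2))

def assign_clip_category(frame_a, frame_b, cut_threshold=0.5):
    """Categorize the clip between the frames by feature difference."""
    diff = feature_difference(frame_features(frame_a), frame_features(frame_b))
    return "scene_change" if diff > cut_threshold else "same_scene"
```

A production system would use richer features (color histograms, embeddings, detected objects) and a learned rather than fixed threshold.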

AUGMENTED INTELLIGENCE BASED VIRTUAL MEETING USER EXPERIENCE IMPROVEMENT

In an approach for improving the virtual meeting user experience, a processor detects a user disengaging from a virtual meeting having at least two participants for a pre-set period of time or for a pre-set percentage of a total allotted time of a pre-scheduled virtual meeting. A processor retrieves data from a database. A processor prepares a summary that is tailored to a profile of the user and that covers a portion of the virtual meeting during which the user was disengaged. A processor detects the user reconnecting to the virtual meeting. A processor determines whether the user will review the summary before rejoining the virtual meeting. Responsive to determining the user will review the summary before rejoining the virtual meeting, a processor prompts the user with a set of default user preferences to review the summary. A processor outputs the summary to the user.
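The disengagement trigger described above has two alternative conditions: an absolute pre-set period, or a pre-set percentage of the meeting's total allotted time. A minimal sketch, with threshold values that are assumptions for illustration:

```python
# Hypothetical disengagement check: fires on either an absolute pre-set
# period or a pre-set percentage of the total allotted meeting time.
# The default thresholds (5 minutes, 10%) are illustrative assumptions.

def is_disengaged(disengaged_seconds, total_allotted_seconds,
                  period_threshold=300, percent_threshold=0.10):
    return (disengaged_seconds >= period_threshold or
            disengaged_seconds / total_allotted_seconds >= percent_threshold)
```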

METHODS AND SYSTEMS FOR ENCODING AND DECODING OF VIDEO DATA IN CONNECTION TO PERFORMING A SEARCH IN THE VIDEO DATA
20230130970 · 2023-04-27

There are provided encoding and decoding methods, and corresponding systems which are beneficial in connection to performing a search among regions of interest, ROIs, in encoded video data. In the encoded video data, there are independently decodable ROIs. These ROIs and the encoded video frames in which they are present are identified in metadata which is searched responsive to a search query. The encoded video data further embeds information which associates the ROIs with sets of coding units, CUs, that spatially overlap with the ROIs. In connection to independently decoding the ROIs found in the search, the embedded information is used to identify the sets of CUs to decode.
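The search-then-decode flow above can be sketched with two lookup structures. The names and shapes here are assumptions for illustration: `roi_index` stands in for the searchable metadata (ROI label to encoded frames), and `roi_to_cus` stands in for the embedded information associating each ROI with its spatially overlapping coding units.

```python
# Hypothetical metadata for the search/decode flow (illustrative only).
roi_index = {
    "person": [0, 3, 7],          # frames in which the ROI appears
    "vehicle": [2, 3],
}
roi_to_cus = {
    (0, "person"): {12, 13, 20},  # CU ids spatially overlapping the ROI
    (3, "person"): {14, 21},
    (7, "person"): {5},
    (2, "vehicle"): {30, 31},
    (3, "vehicle"): {32},
}

def cus_to_decode(query_roi):
    """For each frame matching the query, return only the CU set
    needed to independently decode the ROI."""
    return {frame: roi_to_cus[(frame, query_roi)]
            for frame in roi_index.get(query_roi, [])}
```

The point of the association is that a decoder can skip every CU outside the returned sets rather than decoding whole frames.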

GENERATING VISUAL DATA STORIES
20230130778 · 2023-04-27

This disclosure describes one or more embodiments of systems, non-transitory computer-readable media, and methods that intelligently and automatically analyze input data and generate visual data stories depicting graphical visualizations from data insights determined from the input data. For example, the disclosed systems automatically extract data insights utilizing an in-depth statistical analysis of dataset groups from data-attribute categories within the input data. Based on the data insights, the disclosed systems can automatically generate exportable visual data stories to visualize the data insights, provide textual or audio-based natural language summaries of the data insights, and animate such data insights in videos. In some embodiments, the disclosed systems generate a visual-data-story graph comprising nodes representing visual data stories and edges representing similarities between the visual data stories. Based on the visual-data-story graph, the disclosed systems can select a relevant visual data story to display on a graphical user interface.
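The visual-data-story graph can be sketched as nodes carrying insight-tag sets and edges weighted by similarity. Jaccard similarity and "highest summed edge weight" as the relevance criterion are assumptions for illustration; the disclosed systems may use other measures.

```python
# Sketch of a visual-data-story graph: nodes are stories (modeled as
# insight-tag sets), edges carry Jaccard similarity between stories.

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def build_story_graph(stories):
    """stories: dict of story id -> set of insight tags.
    Returns a dict of (id, id) edge -> similarity weight."""
    ids = list(stories)
    return {(u, v): jaccard(stories[u], stories[v])
            for i, u in enumerate(ids) for v in ids[i + 1:]}

def most_central_story(stories):
    """Pick the story with the highest total edge weight, one simple
    stand-in for 'most relevant to display'."""
    edges = build_story_graph(stories)
    score = {s: 0.0 for s in stories}
    for (u, v), w in edges.items():
        score[u] += w
        score[v] += w
    return max(score, key=score.get)
```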

Methods, systems, and media for presenting media content previews

Methods, systems, and media for presenting media content previews are provided. In some embodiments, the method comprises: causing a plurality of thumbnail images to be presented on a page presented on a user device, wherein each thumbnail image represents a media content item available for presentation on the user device, and wherein the user device is associated with a headset display; determining that a viewpoint of the headset display is directed to one of the thumbnail images of the plurality of thumbnail images; in response to determining that the viewpoint of the headset display is directed to the one of the thumbnail images, causing a first view of a content preview corresponding to the one of the thumbnail images to be presented on the headset display, wherein the content preview includes a second view that is different than and does not include the first view; detecting that the viewpoint of the headset display has changed in a direction toward the second view of the content preview; in response to detecting that the viewpoint of the headset display has changed in the direction toward the second view of the content preview, causing the second view of the content preview to be presented on the headset display; determining that the viewpoint of the headset display is no longer directed to the content preview; and in response to determining that the viewpoint of the headset display is no longer directed to the content preview, causing presentation of the second view of the content preview to be inhibited and causing presentation of the plurality of thumbnail images to resume on the headset display.
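The viewpoint-driven behavior above amounts to a small state machine: thumbnails, first view of a preview, second view, and back to thumbnails when the viewpoint leaves the preview. A toy sketch with illustrative names:

```python
# Toy state machine for the viewpoint-driven preview flow described
# above. State names and the event interface are assumptions.

class PreviewController:
    def __init__(self, thumbnails):
        self.thumbnails = thumbnails
        self.state = "thumbnails"   # thumbnails | first_view | second_view
        self.active = None          # thumbnail whose preview is shown

    def on_viewpoint(self, target):
        """target: a thumbnail id, 'second_view', or None when the
        viewpoint is no longer directed at the content preview."""
        if target is None:
            # inhibit the preview and resume the thumbnail grid
            self.active, self.state = None, "thumbnails"
        elif self.state == "thumbnails" and target in self.thumbnails:
            self.active, self.state = target, "first_view"
        elif self.state == "first_view" and target == "second_view":
            self.state = "second_view"
        return self.state
```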

ELECTRONIC DEVICE AND CONTROL METHOD THEREOF

An electronic device is provided. The electronic device includes a display, at least one processor, and at least one memory configured to store instructions that cause the at least one processor to obtain first information from a first still image frame that is included in a first moving image, obtain second information from the first moving image, identify at least one image function based on at least one of the first information or the second information, and control the display to display at least one function execution object for executing the at least one image function. Various other embodiments can be provided.

METHOD AND APPARATUS FOR PRESENTING SEARCH RESULTS

A method, comprising: receiving a first search query that is associated with a video file; retrieving one or more search results in response to the first search query, each of the search results corresponding to a different section in the video file; and displaying the search results on a display device, wherein displaying any of the search results includes displaying a link that points to the section of the video file, which corresponds to the search result.
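A minimal sketch of query-to-sectioned-link results. The section data and the `#t=` timestamped-link format are assumptions for illustration:

```python
# Sketch: each search result corresponds to a different section of the
# video file, and its displayed link points to that section's offset.
sections = [
    {"start": 0,   "text": "intro and agenda"},
    {"start": 95,  "text": "quarterly results"},
    {"start": 310, "text": "questions and answers"},
]

def search(query, video_url):
    """Return matching sections, each with a link into the video."""
    return [{"snippet": s["text"],
             "link": f"{video_url}#t={s['start']}"}
            for s in sections if query.lower() in s["text"]]
```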

EVENT PROGRESS DETECTION IN MEDIA ITEMS
20230164369 · 2023-05-25 ·

One or more frames sampled from a first media item of an event are analyzed to identify one or more candidate event periods within the one or more frames. For each of the one or more frames, whether a candidate event period of the one or more candidate event periods satisfies one or more conditions is determined. Responsive to determining that the candidate event period of the one or more candidate event periods satisfies the one or more conditions, the candidate event period is identified as an actual event period used to divide a time of the event. Mapping data that maps the actual event period to a timestamp associated with a respective frame of the one or more frames of the first media item is generated.
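The flow above can be sketched as follows. Everything here is an illustrative assumption: the candidate periods are per-frame readings of a period indicator (e.g. "Q1", "Q2"), the condition is that a candidate persists for a minimum run of consecutive sampled frames, and the mapping records the timestamp of the frame where the accepted period begins.

```python
# Sketch: promote candidate event periods to actual event periods and
# build mapping data from period -> timestamp of its first frame.
# The persistence condition (min_run) is an assumed example condition.

def detect_event_periods(samples, min_run=2):
    """samples: list of (timestamp, candidate_period) in frame order.
    Returns {actual_period: timestamp_of_first_frame_in_its_run}."""
    mapping, run_start, run_len, current = {}, None, 0, None
    for ts, period in samples:
        if period == current:
            run_len += 1
        else:
            current, run_start, run_len = period, ts, 1
        if run_len >= min_run and current not in mapping:
            mapping[current] = run_start
    return mapping
```

Requiring a run of frames filters out spurious single-frame readings before a candidate is accepted as an actual event period.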

System and method for identifying social trends

A method and system for identifying social trends are provided. The method includes collecting multimedia content from a plurality of data sources; gathering environmental variables related to the collected multimedia content; extracting visual elements from the collected multimedia content; generating at least one signature for each extracted visual element; generating at least one cluster of visual elements by clustering at least similar signatures generated for the extracted visual elements; correlating environmental variables related to visual elements in the at least one cluster; and determining at least one social trend by associating the correlated environmental variables with the at least one cluster.
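A rough sketch of that pipeline, with simplifying assumptions: signatures are modeled as tag sets and clustered by exact match (a stand-in for similarity-based clustering), and "correlating" keeps only environmental variables that share a value across a cluster.

```python
# Illustrative pipeline: cluster items by signature, then surface the
# environmental variables shared across each sufficiently large cluster
# as a candidate social trend. All names/structures are assumptions.
from collections import defaultdict

def cluster_by_signature(items):
    """items: list of (signature frozenset, env_vars dict)."""
    clusters = defaultdict(list)
    for sig, env in items:
        clusters[sig].append(env)
    return clusters

def find_trends(items, min_size=2):
    trends = []
    for sig, envs in cluster_by_signature(items).items():
        if len(envs) < min_size:
            continue
        # keep env variables with the same value across the whole cluster
        shared = set(envs[0].items())
        for env in envs[1:]:
            shared &= set(env.items())
        trends.append({"signature": sorted(sig), "shared_env": dict(shared)})
    return trends
```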

Event summarization facilitated by emotions/reactions of people near an event location

A method, system and computer program product for event summarization facilitated by emotions/reactions of people near an event location is disclosed. The method includes generating a query based at least in part on reaction information and at least in part on primary video metadata. Based on the query, at least one possible event summarization match for one or more events is retrieved from a database.