Patent classifications
H04N21/8405
Systems and methods for determining playback points in media assets
Systems and methods are described for determining playback points in media assets based on both a keyword and a context of a current playback point in a media asset. For example, in response to user input of a keyword (e.g., “Matt Damon”) while the user is consuming a media asset, a current playback point in the media asset is determined. Context of the media asset at the current playback point is then determined (e.g., the current playback point involves a car chase). Playback points in the media asset are determined that match both the context and the keyword and are presented to the user (e.g., playback points with Matt Damon in a car chase).
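The matching described above can be sketched as a simple filter over annotated playback points. This is an illustrative reduction, not the patented implementation; the `PlaybackPoint` structure and field names are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class PlaybackPoint:
    time_s: float       # position in the media asset, in seconds
    keywords: set       # keywords annotated at this point (e.g. cast names)
    context: str        # scene context label (e.g. "car chase")

def matching_playback_points(points, keyword, current_time):
    # Determine the context at the current playback point by taking
    # the nearest annotated point.
    current = min(points, key=lambda p: abs(p.time_s - current_time))
    context = current.context
    # Return points matching both the keyword and that context.
    return [p for p in points if keyword in p.keywords and p.context == context]
```

For example, with the user at a car-chase scene and the keyword "Matt Damon", only points that are both car chases and feature Matt Damon are returned.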
Automatically generating supercuts
Embodiments of the present technology may include systems and processes for automatically generating supercuts associated with programming content. The present technology may include receiving, at a computing device, a set of related programming content, wherein the set of related programming content includes video clips; receiving an input from a user including a command to generate a supercut and a keyword associated with the supercut; searching the set of related programming content for portions of the video clips associated with the keyword; identifying a first event within a first video clip associated with the keyword and a second event within a second video clip associated with the keyword; determining a type of event associated with each of the first and second events and linking each type of event with its respective event; determining a pre-event time period and a post-event time period for each of the first and second events based on the types of events; and generating and displaying a supercut using the first event, the second event, the pre-event time periods, and the post-event time periods. The above steps may be implemented as a computer-implemented method, a computer program product, or a device such as a television receiver, or in other types of embodiments.
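The type-dependent pre- and post-event padding can be sketched as a lookup table keyed by event type. The event types and padding values below are hypothetical placeholders, not figures from the disclosure.

```python
# Hypothetical per-event-type padding (pre_s, post_s); values are illustrative.
PADDING = {"goal": (10.0, 5.0), "interview": (2.0, 2.0)}
DEFAULT_PADDING = (3.0, 3.0)

def build_supercut(events):
    """events: list of (clip_id, event_time_s, event_type).
    Returns (clip_id, start_s, end_s) segments to concatenate into a supercut."""
    segments = []
    for clip_id, t, etype in events:
        pre, post = PADDING.get(etype, DEFAULT_PADDING)
        segments.append((clip_id, max(0.0, t - pre), t + post))
    return segments
```

Each event type gets its own lead-in and tail, so a goal keeps its build-up while an interview is clipped tightly.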
Method and system for segmenting video without tampering video data
Techniques for segmenting a video using tags, without modifying the underlying video data, are disclosed. According to one aspect of the present invention, each tag is created to define a portion of the video, wherein the tags can be modified, edited, looped, reordered, or restored to create an impression other than that of the video being played back sequentially. The tags are structured in a table within a tagging file that can be shared or published electronically, or modified or updated by others. Further, the table may be modified to include one or more conditional or commercial tags.
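The tag-table idea can be sketched as follows: the video file stays untouched, and a small table resolves to an ordered playback schedule. The field names (`start_s`, `end_s`, `loop`) are assumptions for illustration, not the format defined in the disclosure.

```python
def play_order(tag_table):
    """Resolve a tag table into an ordered list of (start_s, end_s) segments,
    honoring optional 'loop' counts and the table's stated order. The video
    data itself is never modified; only this schedule changes."""
    schedule = []
    for tag in tag_table["tags"]:
        for _ in range(tag.get("loop", 1)):
            schedule.append((tag["start_s"], tag["end_s"]))
    return schedule

# Example tagging-file table: play the chorus twice after the intro.
table = {"tags": [
    {"name": "intro", "start_s": 0, "end_s": 5},
    {"name": "chorus", "start_s": 30, "end_s": 45, "loop": 2},
]}
```

Because the schedule lives in a separate tagging file, it can be shared or edited by others without touching the source video.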
SOURCE CLASSIFICATION USING HDMI AUDIO METADATA
Methods, apparatus, systems and articles of manufacture are disclosed for source classification using HDMI audio metadata. An example apparatus includes a metadata extractor to extract values of audio encoding parameters from HDMI metadata obtained from a monitored HDMI port of a media device, the HDMI metadata corresponding to media being output from the monitored HDMI port; map the extracted values of the audio encoding parameters to a first unique encoding class (UEC) in a set of defined UECs, different ones of the set of defined UECs corresponding to different combinations of possible values of the audio encoding parameters capable of being included in the HDMI metadata; and identify a media source corresponding to the media output from the HDMI port based on one or more possible media sources mapped to the first UEC.
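The mapping from audio encoding parameters to a unique encoding class (UEC), and from a UEC to candidate sources, can be sketched as two lookups. The parameter tuples, UEC names, and source lists below are hypothetical; the real tables are defined by the classification system.

```python
# Hypothetical tables: (codec, sample_rate_hz, channels) -> UEC -> sources.
UEC_TABLE = {
    ("AC-3", 48000, 6): "UEC-1",
    ("PCM", 48000, 2): "UEC-2",
}
UEC_SOURCES = {"UEC-1": ["set-top box"], "UEC-2": ["game console", "DVD player"]}

def classify_source(codec, sample_rate, channels):
    """Map extracted HDMI audio-metadata values to a UEC, then to the
    possible media sources associated with that UEC."""
    uec = UEC_TABLE.get((codec, sample_rate, channels))
    return UEC_SOURCES.get(uec, ["unknown"])
```

A parameter combination not covered by the defined UECs yields no source identification.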
METHOD AND APPARATUS OF PLAYING VIDEO, ELECTRONIC DEVICE, AND STORAGE MEDIUM
The present disclosure provides a method and apparatus of playing a video, an electronic device, and a storage medium, which relate to the field of video processing. The method may include: receiving a video, and distinguishing a content of interest and a content of no interest in the video according to tag information added to the video, wherein the tag information is added to the video after a pre-trained machine learning model recognizes the content of interest and the content of no interest in the video; and playing the content of interest and the content of no interest at different playing speeds, wherein the playing speed for the content of interest is greater than that for the content of no interest.
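The speed selection can be sketched as a lookup over model-tagged segments. Following the abstract, content of interest plays at the higher speed; the specific speed values and tag layout are assumptions for the sketch.

```python
def speed_for(segment_tags, t, interest_speed=2.0, no_interest_speed=1.0):
    """segment_tags: list of (start_s, end_s, is_interest) derived from the
    pre-trained model's tags. Returns the playback speed at time t; per the
    method, content of interest plays at the greater speed."""
    for start, end, is_interest in segment_tags:
        if start <= t < end:
            return interest_speed if is_interest else no_interest_speed
    return no_interest_speed
```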
SYSTEMS AND METHODS TO GENERATE METADATA FOR CONTENT
Systems and methods are described herein for generating metadata for content. Upon detecting a request for a stored media asset from a first device, a server determines that metadata is needed for the media asset based on determining that (a) the server has access to insufficient metadata associated with the media asset and (b) the popularity of the media asset is sufficiently high. The server then assigns at least a time segment of the media asset to the first device for analysis. After assignment, the first device gathers frame analysis and user input data while the user is viewing the media asset, and transmits the gathered data to the server. The server then uses the frame analysis data and the user input data to generate metadata, and makes the generated metadata available to all devices requesting the media asset.
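The server-side decision and segment assignment described above can be sketched in two small functions. The threshold values and the equal-split assignment policy are assumptions for illustration; the disclosure does not fix them.

```python
def needs_metadata(metadata_coverage, popularity,
                   coverage_threshold=0.5, popularity_threshold=100):
    """Metadata is generated only when (a) existing metadata is insufficient
    and (b) the asset is sufficiently popular to justify the work."""
    return metadata_coverage < coverage_threshold and popularity >= popularity_threshold

def assign_segments(duration_s, n_devices):
    """Split the asset into equal time segments, one per requesting device,
    so each device analyzes frames only for its assigned span."""
    step = duration_s / n_devices
    return [(i * step, (i + 1) * step) for i in range(n_devices)]
```

Each device then gathers frame-analysis and user-input data only for its assigned span and returns it to the server for metadata generation.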
METHODS AND SYSTEMS FOR GENERATING MEME CONTENT
Systems and methods are described for generating meme content. A content item is tagged with one or more first tags based on metadata for the content item. The content item having the one or more first tags is received at user equipment. The content item is tagged with one or more second tags based on a user profile. A segment of the content item is identified based on the first and second tags. The identified segment is stored for use in generating meme content.
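The two-stage tagging can be sketched as an intersection test: a segment qualifies for meme generation when it carries both a metadata-derived (first) tag and a profile-derived (second) tag. The segment layout is an assumption for the sketch.

```python
def identify_meme_segment(segments, metadata_tags, profile_tags):
    """segments: list of (start_s, end_s, tags), where tags is the set of
    labels attached to that span. Returns the first segment matching at
    least one first (metadata) tag and one second (profile) tag."""
    for start, end, tags in segments:
        if tags & metadata_tags and tags & profile_tags:
            return (start, end)
    return None
```

The returned span is what would be stored on the user equipment for later meme generation.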