Patent classifications
G06F16/748
Using manifest files to determine events in content items
Systems, methods, and apparatuses are described for monitoring events in a plurality of different services. A system may monitor manifest files for one or more content items. Manifest files may contain manifest file tags indicating events and insertion opportunities. Events and/or insertion opportunities may be detected, and a switch from one content item to another content item, based on customized user priority preferences, may be caused.
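The abstract above can be illustrated with a minimal sketch, assuming HLS-style manifests where lines beginning with `#EXT-X-` may carry event or insertion-opportunity tags; the tag names, function names, and priority scheme below are assumptions for illustration, not the patent's actual implementation.

```python
# Sketch of manifest-driven event detection over HLS-style manifests.
# Tag names and the priority scheme are illustrative assumptions.

EVENT_TAG = "#EXT-X-DATERANGE"       # commonly used to signal timed events
OPPORTUNITY_TAG = "#EXT-X-CUE-OUT"   # commonly used to mark insertion points

def scan_manifest(manifest_text):
    """Return manifest tags that indicate events or insertion opportunities."""
    hits = []
    for line in manifest_text.splitlines():
        if line.startswith(EVENT_TAG):
            hits.append(("event", line))
        elif line.startswith(OPPORTUNITY_TAG):
            hits.append(("opportunity", line))
    return hits

def choose_switch(current_item, candidate_items, preferences):
    """Pick the highest-priority candidate that outranks the current item."""
    best = max(candidate_items, key=lambda item: preferences.get(item, 0))
    if preferences.get(best, 0) > preferences.get(current_item, 0):
        return best
    return current_item
```

A detected insertion opportunity would then trigger `choose_switch` against the user's priority preferences to decide whether playback moves to another content item.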
Systems and methods for presenting auxiliary video relating to an object a user is interested in when the user returns to a frame of a video in which the object is depicted
Systems and methods are described herein for a media guidance application that detects, and responds to, a user's review of video content on a media device. The media guidance application detects a rewind operation during playback of a video comprising a media asset. In response, the media guidance application determines if the playback position reached during the rewind operation occurs during a first break in the media asset and, if so, identifies objects depicted in the video at the playback position, and presents auxiliary video relating to an object at a second break in the media asset.
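The break-check logic in this abstract can be sketched as follows; the break intervals, the object-lookup callback, and all names here are illustrative assumptions rather than the patent's actual implementation.

```python
# Illustrative sketch: did a rewind land in the first break, and if so,
# which objects should get auxiliary video at the second break?

def in_break(position, breaks, index):
    """True if `position` (seconds) falls within the break at `index`."""
    start, end = breaks[index]
    return start <= position <= end

def schedule_auxiliary(position, breaks, objects_at):
    """If the rewind landed in the first break, collect objects depicted
    there and schedule auxiliary video for the start of the second break."""
    if in_break(position, breaks, 0) and len(breaks) > 1:
        return {"objects": objects_at(position), "play_at": breaks[1][0]}
    return None
```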
User interface providing supplemental and social information
Systems and methods are provided that implement techniques for providing supplemental and social information along with primary information. In one implementation, a user interface is provided with various sections. One section plays back a main item of content, while a second section displays supplemental information. A third section provides interactive tools for a user to communicate or share information with other users. For example, while a movie plays in the first section, the third section may provide a chat interface along with social network service controls to post or share user input, or to view posts from friends about the item of content.
Smart summarization, indexing, and post-processing for recorded document presentation
Systems and methods for providing summarization, indexing, and post-processing of a recorded document presentation are provided. The system accesses a structured document and recordings associated with a recorded presentation given using the structured document. The system analyzes, using machine-trained models, the structured document, audio and video recordings, and recording of operations performed during the presentation. The analyzing comprises generating a transcript of the audio recording, determining context of components of the structured document, and deriving context from the video recordings and recording of operations. Based on the analyzing, the system segments the recorded presentation into a plurality of segments and generates an index of the plurality of segments that is used for post-processing.
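The segmentation-and-indexing step can be sketched minimally, assuming the recorded operations include timestamped slide changes; the real system derives segment boundaries with machine-trained models, so the simple split below is only an illustrative stand-in.

```python
# Sketch: segment a recorded presentation at slide-change timestamps and
# index the segments by slide title for later post-processing lookup.

def segment_by_slide_changes(duration, slide_changes):
    """Split [0, duration] into segments at each slide-change timestamp."""
    bounds = [0.0] + sorted(slide_changes) + [duration]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

def build_index(segments, titles):
    """Map each slide title to its segment of the recording."""
    return {title: seg for title, seg in zip(titles, segments)}
```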
Text-driven editor for audio and video editing
The disclosed technology is a system and computer-implemented method for assembling and editing a video program from spoken words or soundbites. The disclosed technology imports source audio/video clips in any of multiple formats. Spoken audio is transcribed into searchable text. The text transcript is synchronized to the video track by timecode markers. Each spoken word corresponds to a timecode marker, which in turn corresponds to a video frame or frames. Using word processing operations and text editing functions, a user selects video segments by selecting corresponding transcribed text segments. By selecting text and arranging that text, a corresponding video program is assembled. The selected video segments are assembled on a timeline display in any chosen order by the user. The sequence of video segments may be reordered and edited, as desired, to produce a finished video program for export.
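The word-to-timecode mapping described in this abstract can be sketched as follows: each transcribed word carries a timecode, and selecting a span of text yields the in/out points of the corresponding video segment. All names and the data model are illustrative assumptions.

```python
# Sketch: a transcript as (word, start_seconds) pairs; selecting a text
# range maps directly to a video segment, and arranged selections form
# the assembled timeline.

def transcript_index(words_with_timecodes):
    """words_with_timecodes: list of (word, start_seconds) pairs."""
    return list(words_with_timecodes)

def segment_for_selection(index, first, last):
    """Return (in_point, out_point) for the selected word positions
    `first`..`last` (inclusive) in the transcript."""
    return (index[first][1], index[last][1])

def assemble_timeline(index, selections):
    """Assemble segments on a timeline in the order the text was arranged."""
    return [segment_for_selection(index, a, b) for a, b in selections]
```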
Selection and provision of digital components during display of content
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for the selection, provision, and display of one or more digital components during display of content. Methods can include identifying a plurality of digital components that can be presented on the client device. A maximum number of digital components that can be presented in a slot of a content item, and the time duration of the slot, are determined. For each digital component, a score is generated based on the duration, a position requirement, and the number of times the digital component is available for provision within the slot. A first set of digital components is selected based on the scores and provided to the client device.
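The scoring and selection step can be sketched as below. The abstract does not specify the scoring function, so a simple weighted combination of duration fit, position requirement, and availability count is assumed; the weights and field names are illustrative.

```python
# Hedged sketch of scoring digital components for a slot and selecting
# the top-scoring set. Weights and the scoring formula are assumptions.

def score(component, slot_duration):
    """Score one component; higher is better."""
    duration_fit = 1.0 - abs(component["duration"] - slot_duration) / slot_duration
    position_bonus = 1.0 if component["position_ok"] else 0.0
    return duration_fit + position_bonus + 0.1 * component["availability"]

def select_components(components, slot_duration, max_count):
    """Pick up to max_count components with the highest scores."""
    ranked = sorted(components, key=lambda c: score(c, slot_duration), reverse=True)
    return ranked[:max_count]
```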
Conditional display of hyperlinks in a video
Systems and methods are provided for dynamically displaying hyperlinks in a video based on various factors associated with a device at which the video is played and/or a user of the device. In one or more aspects, a system includes a request component configured to receive a request to play a video hosted by a media provider. The system further includes a selection component configured to select, based in part on the number of links in a set of links associated with the video, a subset of those links to provide with the video when it is played in response to the request. Graphical elements respectively representative of the links in the subset are configured to be displayed over the video when it is played in response to the request.
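One plausible reading of the subset selection is sketched below, assuming the number of overlays shown is capped so they do not clutter the player; the cap, the relevance ranking, and all names are assumptions made for illustration, not taken from the patent.

```python
# Sketch: cap the number of hyperlink overlays shown over a video,
# preferring higher-relevance links. Cap and ranking key are assumptions.

MAX_OVERLAYS = 3  # illustrative display cap

def select_links(links, relevance):
    """Choose up to MAX_OVERLAYS links, preferring higher relevance."""
    ranked = sorted(links, key=lambda link: relevance.get(link, 0), reverse=True)
    return ranked[:MAX_OVERLAYS]
```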
Synchronization and playback of related media items of different formats
A synchronized media item is generated and presented to a user via a user client. The user client receives a synchronization point including a location identifier that identifies a location within a media item linked to a location within a related media item. The user client inserts the received synchronization point into the media item at the identified location to create a synchronized media item. The user client presents the synchronized media item and the synchronization point.
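The insertion step can be sketched minimally by modeling a media item's synchronization points as an ordered list of (location, linked_location) pairs; this data model is an assumption made for illustration only.

```python
# Sketch: insert a synchronization point (a location in this media item
# linked to a location in a related media item), keeping points ordered
# by location so playback can look them up in sequence.
import bisect

def insert_sync_point(sync_points, location, linked_location):
    """Insert a (location, linked_location) pair, keeping points ordered."""
    bisect.insort(sync_points, (location, linked_location))
    return sync_points
```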
Text-driven editor for audio and video assembly
The disclosed technology is a system and computer-implemented method for assembling and editing a video program from spoken words or soundbites. The disclosed technology imports source audio/video clips in any of multiple formats. Spoken audio is transcribed into searchable text. The text transcript is synchronized to the video track by timecode markers. Each spoken word corresponds to a timecode marker, which in turn corresponds to a video frame or frames. Using word processing operations and text editing functions, a user selects video segments by selecting corresponding transcribed text segments. By selecting text and arranging that text, a corresponding video program is assembled. The selected video segments are assembled on a timeline display in any chosen order by the user. The sequence of video segments may be reordered and edited, as desired, to produce a finished video program for export.