Patent classifications
H04N21/8405
Processing segments of closed-caption text using external sources
Particular embodiments provide supplemental content that may be related to video content that a user is watching. A segment of closed-caption text is determined from the closed-captions for the video content. A first set of information, such as terms, may be extracted from the segment of closed-caption text. Particular embodiments use an external source that can be determined from a set of external sources. To determine the supplemental content, particular embodiments may extract a second set of information from the external source. Because the external source may be more robust and include more text than the segment of closed-caption text, the second set of information may include terms that better represent the segment of closed-caption text. Particular embodiments thus use the second set of information to determine supplemental content for the video content, and can provide the supplemental content to a user watching the video content.
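The extraction-and-expansion flow described in this abstract can be sketched in a few lines of Python. This is a minimal illustration rather than the patent's method: the stopword-based term extraction, the top-N cutoff, and the fetch_external_text callable are assumptions introduced here.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "that"}

def extract_terms(text, top_n=10):
    """Return the most frequent non-stopword terms in a piece of text."""
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    return [term for term, _ in Counter(words).most_common(top_n)]

def supplemental_terms(caption_segment, fetch_external_text):
    # First set of information: terms taken from the caption segment itself.
    first_terms = extract_terms(caption_segment)
    # External source selected using the first set (selection is application-specific);
    # fetch_external_text is a placeholder callable supplied by the caller.
    external_text = fetch_external_text(first_terms)
    # Second set of information: terms from the richer external text, which
    # then drives the search for supplemental content.
    return extract_terms(external_text, top_n=25)
```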
System and method of setting selection for the presentation of AV content
A method of selecting settings for presenting AV content comprises storing a plurality of settings for presenting AV content, storing a plurality of characteristic features corresponding to AV content, storing a plurality of values defining a strength of association between a respective stored characteristic feature and a respective stored setting, obtaining one or more characteristic features from a currently delivered AV content, determining a cumulative strength of association between respective stored settings and respective stored characteristic features corresponding to the or each characteristic feature obtained from the currently delivered AV content, and selecting the stored setting having the greatest cumulative strength of association.
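As a concrete reading of the selection step, the sketch below keeps the stored values as a mapping of (characteristic feature, setting) pairs to strengths, sums the strengths for the features obtained from the current content, and picks the setting with the largest total. The example features and presets are invented for illustration.

```python
from collections import defaultdict

def select_setting(stored_values, settings, obtained_features):
    """Pick the stored setting with the greatest cumulative strength of association."""
    cumulative = defaultdict(float)
    for setting in settings:
        for feature in obtained_features:
            cumulative[setting] += stored_values.get((feature, setting), 0.0)
    return max(settings, key=lambda s: cumulative[s])

# Example: genre and loudness features vote for a picture/sound preset.
stored = {
    ("genre:sport", "vivid"): 0.9,
    ("genre:sport", "cinema"): 0.1,
    ("loud_audio", "vivid"): 0.4,
    ("genre:drama", "cinema"): 0.8,
}
print(select_setting(stored, ["vivid", "cinema"], ["genre:sport", "loud_audio"]))  # -> vivid
```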
APPROXIMATE TEMPLATE MATCHING FOR NATURAL LANGUAGE QUERIES
Systems and methods provide a media guidance application that recognizes a plurality of natural language search queries for identifying a set of search results. For example, a user may want to determine when the Yankees are playing their next baseball game. The user may structure the query in multiple ways, such as “When are the Yankees playing?”, “What time is the Yankees game?”, or “When is the next Yankees baseball game?” The user would expect the same result, a description of when the Yankees are playing, regardless of how the query is structured. The systems and methods enable a user to use a plurality of search queries when searching for items or information to get the desired results.
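One way to realize this behavior is to normalize each query and score it against stored templates with a fuzzy string match, as in the sketch below. The templates, the entity placeholder, and the 0.6 acceptance threshold are illustrative assumptions, not details taken from the patent.

```python
from difflib import SequenceMatcher

TEMPLATES = {
    "when are the <team> playing": "next_game_time",
    "what time is the <team> game": "next_game_time",
    "when is the next <team> game": "next_game_time",
}

def normalize(query, entities):
    q = query.lower().strip("?!. ")
    for entity in entities:
        q = q.replace(entity.lower(), "<team>")
    return q

def match_intent(query, entities, threshold=0.6):
    """Return the intent of the best approximately matching template, or None."""
    q = normalize(query, entities)
    best_intent, best_score = None, 0.0
    for template, intent in TEMPLATES.items():
        score = SequenceMatcher(None, q, template).ratio()
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None

# All three phrasings resolve to the same intent.
for q in ("When are the Yankees playing?",
          "What time is the Yankees game?",
          "When is the next Yankees baseball game?"):
    print(q, "->", match_intent(q, ["Yankees"]))
```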
JOINT HETEROGENEOUS LANGUAGE-VISION EMBEDDINGS FOR VIDEO TAGGING AND SEARCH
Systems, methods and articles of manufacture for modeling a joint language-visual space. A textual query to be evaluated relative to a video library is received from a requesting entity. The video library contains a plurality of instances of video content. One or more instances of video content from the video library that correspond to the textual query are determined, by analyzing the textual query using a data model that includes a soft-attention neural network module that is jointly trained with a language Long Short-term Memory (LSTM) neural network module and a video LSTM neural network module. At least an indication of the one or more instances of video content is returned to the requesting entity.
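A PyTorch-style sketch of such a model is given below: a language LSTM encodes the query, a video LSTM encodes frame features, and a soft-attention layer pools the video sequence conditioned on the query before the two are compared in the joint space. The layer sizes, the additive attention form, and the cosine-similarity scoring are assumptions for illustration; the patent does not specify them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointLanguageVideoScorer(nn.Module):
    def __init__(self, vocab_size, frame_dim, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.lang_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.video_lstm = nn.LSTM(frame_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)  # soft attention over frames

    def forward(self, query_tokens, frame_feats):
        # query_tokens: (B, T_q) word indices; frame_feats: (B, T_v, frame_dim)
        q_emb = self.word_embed(query_tokens)
        _, (q_h, _) = self.lang_lstm(q_emb)              # final hidden state of the language LSTM
        q_vec = q_h.squeeze(0)                           # (B, H)
        v_seq, _ = self.video_lstm(frame_feats)          # per-frame states, (B, T_v, H)
        q_exp = q_vec.unsqueeze(1).expand(-1, v_seq.size(1), -1)
        scores = self.attn(torch.cat([v_seq, q_exp], dim=-1)).squeeze(-1)  # (B, T_v)
        weights = F.softmax(scores, dim=-1)
        v_vec = (weights.unsqueeze(-1) * v_seq).sum(dim=1)  # attention-pooled video vector
        # Rank videos for the query by similarity in the joint space.
        return F.cosine_similarity(q_vec, v_vec, dim=-1)
```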
Searching and displaying multimedia search results
A system and method for searching and displaying multimedia search results is disclosed herein. An embodiment operates by determining that an interface including one or more previously saved searches is displayed, each of the previously saved searches corresponding to a set of one or more search terms. An updated plurality of search results is received for each of the previously saved searches from a remote server. A grouping of the plurality of search results for each of a plurality of different search terms is displayed across a plurality of individual time periods.
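A small Python sketch of the grouping step is shown below, assuming each refreshed result carries the search term it matched and an air date; the weekly bucketing and the field names are illustrative choices, not details from the patent.

```python
from collections import defaultdict
from datetime import date

def group_results(results):
    """Group search results by search term and by week-long time period."""
    grouped = defaultdict(list)
    for item in results:
        week = item["air_date"].isocalendar()[:2]   # (ISO year, ISO week number)
        grouped[(item["term"], week)].append(item["title"])
    return dict(grouped)

# Results refreshed from the remote server for two previously saved searches.
updated = [
    {"term": "baseball", "title": "Yankees vs. Red Sox", "air_date": date(2024, 6, 3)},
    {"term": "baseball", "title": "Mets vs. Braves", "air_date": date(2024, 6, 12)},
    {"term": "cooking", "title": "Weeknight Dinners", "air_date": date(2024, 6, 4)},
]
for (term, week), titles in group_results(updated).items():
    print(term, week, titles)
```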
Enhancing video content with personalized extrinsic data
Disclosed are various embodiments that relate to enhancing video content with personalized extrinsic data. A video content feature is rendered on a display. A user interface is rendered on top of the video content feature on the display. The user interface presents cast member indicia, where the cast member indicia correspond to respective cast members in the video content feature. In response to receiving a selection of one of the cast member indicia from a user, the user interface is updated to present additional information about the corresponding cast member. The additional information is personalized based at least in part on profile data associated with the user.
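The personalization step could be as simple as splitting a cast member's credits by the viewer's watch history, as in the hypothetical sketch below; the filmography and profile fields are assumptions introduced here rather than structures described in the patent.

```python
def personalized_cast_info(cast_member, filmography, profile):
    """Return cast member details with the titles the user has already seen called out."""
    watched = set(profile.get("watched_titles", []))
    return {
        "name": cast_member,
        "seen_in": [t for t in filmography if t in watched],
        "other_titles": [t for t in filmography if t not in watched],
    }

profile = {"watched_titles": ["Movie A", "Series B"]}
print(personalized_cast_info("Jane Doe", ["Movie A", "Movie C", "Series B"], profile))
```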