Patent classifications
H04N21/4524
METHODS AND SYSTEMS FOR GENERATING AND PROVIDING PROGRAM GUIDES AND CONTENT
Systems and methods for identifying, assembling, and transmitting content are described in the illustrative context of electronic program guides and program channels. A first system causes an interactive interstitial to be presented on a remote first device of a user in conjunction with a scheduled program. The first system determines whether a second device of the user is available to receive an interstitial interaction request. At least partly in response to determining that the second device is available, the interstitial interaction request is presented via a client hosted on the second device. At least partly in response to determining that the user has provided an interaction via the second device, the interaction is stored in memory. Optionally, an interstitial is composed based at least in part on the user interaction, and the composed interstitial may then be displayed via the first device of the user in conjunction with a scheduled program.
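The two-device flow above can be sketched in Python. This is a minimal illustration, not the patent's implementation; the device API, prompt text, and composition step are all assumptions.

```python
# Sketch of the claimed flow: present an interstitial on a first device,
# check whether a second device is available, collect an interaction there,
# store it, and compose a new interstitial for display on the first device.

class Device:
    """Stand-in for a user device; names and behavior are illustrative."""
    def __init__(self, available=True, response=None):
        self.available = available
        self.response = response
        self.shown = []           # interstitials presented on this device

    def is_available(self):
        return self.available

    def present_interstitial(self, interstitial):
        self.shown.append(interstitial)

    def request_interaction(self, prompt):
        # A real client would render `prompt` and await user input.
        return self.response

def compose_interstitial(interaction):
    # Placeholder composition based at least in part on the user interaction.
    return f"interstitial-for:{interaction}"

def run_interstitial_flow(first_device, second_device, interaction_store):
    first_device.present_interstitial("scheduled-interstitial")
    if not second_device.is_available():
        return None                                     # second device unavailable
    interaction = second_device.request_interaction("Pick a product color")
    if interaction is None:
        return None                                     # no interaction provided
    interaction_store.append(interaction)               # store the interaction
    composed = compose_interstitial(interaction)
    first_device.present_interstitial(composed)         # optional re-display
    return composed
```

A usage pass might pair a television (first device) with a phone (second device) and append each interaction to a persistent store.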
Merging Permissions and Content Access
Aspects of the disclosure relate to determining that a wireless device associated with one user account is proximate to a computing device associated with another user account. In response to determining the proximity of the two devices, one or more of the devices may receive merged access to permissions and/or content associated with the two user accounts. In response to determining that the wireless device is not proximate to the computing device, the devices may no longer receive merged access to permissions and/or content associated with the two user accounts.
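The merge-on-proximity behavior can be sketched as a simple set union. The set model and function name are assumptions for illustration; the disclosure does not specify how permissions are represented.

```python
# Proximity-gated permission merging, sketched with Python sets.

def effective_permissions(own_permissions, other_permissions, devices_proximate):
    """Return the permissions a device sees.

    While the wireless device is proximate to the computing device, the
    devices receive the union of both accounts' permissions; when proximity
    ends, each device falls back to its own account's permissions.
    """
    if devices_proximate:
        return own_permissions | other_permissions   # merged access granted
    return own_permissions                           # merged access revoked

user_a = {"live_tv", "dvr"}
user_b = {"premium_movies"}

merged = effective_permissions(user_a, user_b, devices_proximate=True)
solo = effective_permissions(user_a, user_b, devices_proximate=False)
```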
PROVIDING VISUAL GUIDANCE FOR PRESENTING VISUAL CONTENT IN A VENUE
Systems, methods, and computer program products can provide visual guidance on presenting content on a media surface of a venue. These systems, methods, and computer program products can operate by mapping visual content onto a media surface of the venue and extracting a key feature unique to the visual content and/or the media surface. Thereafter, they can retrieve an experiment metric corresponding to the visual content and/or the media surface and can determine a viewer location metric and/or a media surface metric based on the experiment metric and the key feature. They can then utilize the viewer location metric and/or the media surface metric to provide a user with a hint of the corresponding attribute.
SYSTEMS AND METHODS FOR VIEWING-SESSION CONTINUITY
The present disclosure is generally directed to media systems configured to receive and play media assets. In particular, methods and systems are provided for improved media asset session continuity across such media systems. Systems and methods are provided herein for continuing media asset sessions across media systems or media devices in a way designed to minimize manual intervention, for example, by determining a likelihood (e.g., a probability) of a user requesting media session continuation of an ongoing media asset or a segment thereof across two or more devices.
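One way to read the likelihood-based approach is as a score compared against a threshold, with manual intervention requested only in ambiguous cases. The features and weights below are invented for illustration; the disclosure does not specify a scoring model.

```python
# Hypothetical continuation-likelihood score for a media session.

def continuation_likelihood(same_user_profile, second_device_nearby,
                            minutes_remaining, historically_continues):
    """Combine simple signals into a likelihood-like score in [0, 1]."""
    score = 0.0
    if same_user_profile:
        score += 0.4          # same profile active on both devices
    if second_device_nearby:
        score += 0.3          # candidate device detected nearby
    if minutes_remaining > 5:
        score += 0.1          # enough of the asset left to be worth resuming
    if historically_continues:
        score += 0.2          # user has continued sessions before
    return score

def should_continue_session(score, threshold=0.6):
    """Continue (or offer to continue) without manual intervention
    when the score clears the threshold."""
    return score >= threshold
```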
Methods and apparatus to calibrate audience measurement ratings based on return path data
Methods and apparatus to calibrate media ratings based on return path data are disclosed. An apparatus includes a processor and memory including instructions that, when executed, cause the processor to: determine an initial rating for the media provided in a first geographic area based on return path data (RPD) tuning information obtained from RPD devices in subscriber households in the first geographic area; determine a first panelist rating for the media provided in a second geographic area based on first panel tuning information obtained from first metering devices in a first subset of panelist households in the second geographic area; determine a nonsubscriber calibration factor based on the first panelist rating; and determine a final rating for the media in the first geographic area by modifying the initial rating based on the nonsubscriber calibration factor.
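The abstract does not give the calibration formula, so the multiplicative adjustment below is an illustrative assumption: the panel, which covers both subscriber and nonsubscriber households, yields a ratio that scales the RPD-only rating.

```python
# Hypothetical calibration of an RPD-based rating with a panel-derived factor.

def nonsubscriber_calibration_factor(panelist_rating_all_households,
                                     panelist_rating_subscriber_households):
    """Ratio capturing viewing in households the RPD devices cannot see
    (panelist ratings come from the second geographic area)."""
    return panelist_rating_all_households / panelist_rating_subscriber_households

def final_rating(initial_rpd_rating, calibration_factor):
    """Modify the first-area RPD rating by the nonsubscriber factor."""
    return initial_rpd_rating * calibration_factor

factor = nonsubscriber_calibration_factor(12.0, 10.0)   # panel sees 20% more viewing
rating = final_rating(8.5, factor)
```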
VIDEO PROCESSING DEVICE AND MANIFEST FILE FOR VIDEO STREAMING
One aspect of this disclosure relates to a video processing device comprising a processor for processing a manifest file for video streaming for a user. The manifest file comprises at least a plurality of positions defined for a scene that are associated with pre-rendered omnidirectional or volumetric video segments stored on a server system. The manifest file may also contain a plurality of resource locators for retrieving omnidirectional or volumetric video segments from the server system. Each resource locator may be associated with a position defined for the scene. The video processing device may be configured to associate a position of the user with a first position for the scene in the manifest file to retrieve a first omnidirectional or volumetric video segment associated with the first position using a first resource locator from the manifest file.
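A position-to-locator manifest of this kind might look like the sketch below. The field names, URLs, and nearest-position matching rule are assumptions for illustration (a real manifest format such as an MPEG-DASH MPD would use XML).

```python
# Illustrative manifest: scene positions mapped to segment resource locators.
import math

manifest = {
    "scene": "lobby",
    "positions": [
        {"xyz": (0.0, 0.0, 0.0), "url": "https://cdn.example.com/seg_p0.mp4"},
        {"xyz": (2.0, 0.0, 0.0), "url": "https://cdn.example.com/seg_p1.mp4"},
        {"xyz": (0.0, 0.0, 3.0), "url": "https://cdn.example.com/seg_p2.mp4"},
    ],
}

def segment_url_for_user(manifest, user_xyz):
    """Associate the user's position with the nearest defined scene position
    and return that position's resource locator."""
    nearest = min(manifest["positions"],
                  key=lambda p: math.dist(p["xyz"], user_xyz))
    return nearest["url"]
```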
Internet enabled video media content stream
Aspects of the subject disclosure may include, for example, a method that includes receiving weblink information, receiving media content, inserting the weblink information into a plurality of digital frames of the media content, providing the plurality of digital frames as a video stream to a consumer media device, displaying, by the consumer media device, the video stream, receiving, by the consumer media device, a user input to pause the video stream during the displaying so that a current digital frame of the plurality of digital frames is displayed, receiving, by the consumer media device, a user input indicating a selection of a portion of the current digital frame, determining selected weblink information for the portion of the current digital frame, and providing to the consumer media device a connection to a website associated with the selected weblink information. Other embodiments are disclosed.
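The step of resolving a selected portion of the paused frame to weblink information can be sketched as a point-in-region lookup. The rectangle-based frame metadata is an assumption; the disclosure does not specify how regions are encoded in a frame.

```python
# Resolve a user's selection on a paused frame to the tagged weblink.

def weblink_for_selection(frame_weblinks, x, y):
    """Return the weblink whose region contains the selected point (x, y),
    or None if the selection falls outside every tagged region.
    Each entry: {"region": (x0, y0, x1, y1), "url": ...}."""
    for link in frame_weblinks:
        x0, y0, x1, y1 = link["region"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return link["url"]
    return None

# Hypothetical metadata carried with the current digital frame.
current_frame_links = [
    {"region": (0, 0, 100, 50), "url": "https://example.com/jacket"},
    {"region": (200, 120, 320, 240), "url": "https://example.com/watch"},
]
```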
METHODS, ARTICLES OF MANUFACTURE, AND APPARATUS TO EDIT TUNING DATA COLLECTED VIA AUTOMATED CONTENT RECOGNITION
Methods, apparatus, systems, and articles of manufacture are disclosed for editing tuning data collected via automated content recognition. Examples include determining whether a time conflict exists between first tuning data corresponding to a first tuning event and second tuning data corresponding to a second tuning event. Examples also include, in response to determining that the time conflict exists, creating a third tuning event based on the first tuning data, the second tuning data, and one or more criteria. Examples also include modifying at least one of the first tuning event or the second tuning event based on the third tuning event. Examples also include crediting a media presentation by a presentation device based on edited tuning data, the edited tuning data including the modified first tuning event, the modified second tuning event, and the third tuning event.
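A sketch of the conflict-editing step follows. The criterion used here, crediting the overlap to the later-starting event's station, is one invented example of the "one or more criteria", not the patent's rule.

```python
# When two ACR tuning events overlap in time, carve the overlap into a
# third event and trim the originals so each interval is credited once.

def resolve_time_conflict(first, second):
    """Each event is a dict with 'start', 'end' (e.g. epoch seconds) and
    'station'. Returns (modified_first, modified_second, third_event),
    with third_event None when no time conflict exists."""
    overlap_start = max(first["start"], second["start"])
    overlap_end = min(first["end"], second["end"])
    if overlap_start >= overlap_end:
        return first, second, None            # no time conflict
    # Illustrative criterion: attribute the overlap to the later starter.
    later = second if second["start"] >= first["start"] else first
    third = {"start": overlap_start, "end": overlap_end,
             "station": later["station"]}
    modified_first = dict(first, end=min(first["end"], overlap_start))
    modified_second = dict(second, start=max(second["start"], overlap_end))
    return modified_first, modified_second, third
```

The edited tuning data (both trimmed events plus the third event) would then feed the crediting step.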
Augmented reality content recommendation
Methods and systems are described herein for providing streamlined access to media assets of interest to a user. The method includes determining that a supplemental viewing device, through which a user views a field of view, is directed at a first field of view. The method further involves detecting that the supplemental viewing device is now directed at a second field of view, and determining that a media consumption device is within the second field of view. A first media asset of interest to the user that is available for consumption via the media consumption device is identified, and the supplemental viewing device generates a visual indication in the second field of view. The visual indication indicates that the first media asset is available for consumption via the media consumption device, and the visual indication tracks a location of the media consumption device in the second field of view.
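The "within the second field of view" determination can be sketched geometrically. The 2D model below, a view direction plus a half-angle cone, is an assumption for illustration; a real supplemental viewing device would work in 3D with pose tracking.

```python
# Decide whether a media consumption device lies within the supplemental
# viewing device's current field of view (2D sketch).
import math

def in_field_of_view(viewer_xy, view_dir_deg, half_angle_deg, device_xy):
    """True when the bearing from viewer to device is within
    half_angle_deg of the viewing direction."""
    dx = device_xy[0] - viewer_xy[0]
    dy = device_xy[1] - viewer_xy[1]
    bearing = math.degrees(math.atan2(dy, dx))
    delta = (bearing - view_dir_deg + 180) % 360 - 180   # signed angle diff
    return abs(delta) <= half_angle_deg
```

When the check returns True, the device could overlay a visual indication anchored to the tracked device location.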
Guide voice output control system and guide voice output control method
A guide voice output control system includes a voice output control unit that outputs a guide voice in response to a trigger and executes interaction-related processing comprising a reception stage for receiving voice, a recognition stage for recognizing the voice, and an output stage for outputting voice based on a recognition result. When the trigger is generated during execution of the processing, the voice output control unit controls the output of the guide voice according to the current processing stage, dynamically controlling the output according to whether the stage is one in which outputting the guide voice would affect neither the accuracy of voice recognition nor the user's difficulty in listening.
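The stage-aware gating can be sketched as a small state check. The stage names mirror the abstract, but the rule that only an idle stage permits immediate output, and the deferral queue, are illustrative assumptions.

```python
# Gate guide-voice output on the current interaction-processing stage.
from enum import Enum

class Stage(Enum):
    IDLE = "idle"
    RECEPTION = "reception"       # receiving the user's voice
    RECOGNITION = "recognition"   # recognizing the received voice
    OUTPUT = "output"             # speaking the recognition result

def may_output_guide_voice(stage):
    """Allow the guide voice only in stages where it would neither degrade
    recognition accuracy nor make the response harder to hear."""
    return stage is Stage.IDLE

def on_trigger(stage, pending_queue):
    """Handle a guide-voice trigger raised during processing."""
    if may_output_guide_voice(stage):
        return "guide-voice-played"
    pending_queue.append("guide-voice")   # defer until a permitting stage
    return "guide-voice-deferred"
```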