H04N21/4394

Methods and apparatus to detect spillover
11716495 · 2023-08-01

Methods and apparatus to detect spillover are disclosed. An example apparatus includes at least one memory, instructions in the apparatus, and processor circuitry to execute the instructions to: identify a quantity of first durations of loudness in an audio signal of media; calculate a ratio of the quantity of the first durations of loudness to a quantity of second durations of loudness in the audio signal of the media, the quantity of the second durations of loudness including the quantity of the first durations of loudness; and in response to a detection of the audio signal being spillover, store data denoting the media as un-usable to credit a media exposure when the ratio does not satisfy a loudness ratio threshold, the storing of the data to improve an accuracy of media exposure credits by not crediting spillover media.
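The loudness-ratio test described in this abstract can be sketched as follows; the function name, argument names, and the 0.5 threshold are illustrative assumptions, not values taken from the patent:

```python
def is_usable_for_crediting(first_count, second_count, loudness_ratio_threshold=0.5):
    """Decide whether media audio may be credited as an exposure.

    first_count: quantity of first durations of loudness (a subset of the second).
    second_count: quantity of second durations of loudness (includes the first).
    """
    if second_count == 0:
        return False  # no loudness durations observed; nothing to credit
    ratio = first_count / second_count
    # A ratio that fails the threshold indicates spillover, so the media
    # is marked un-usable for crediting a media exposure.
    return ratio >= loudness_ratio_threshold
```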

Methods and systems for recommending content in context of a conversation

A media guidance application may monitor a conversation among users, and identify keywords in the conversation, without the use of wakewords. The keywords are used to search for media content that is relevant to the on-going conversation. Accordingly, the media guidance application presents relevant content to the users, during the conversation, to more actively engage the users. A conversation monitoring window may be used to present conversation information as well as relevant content. A listening mode may be used to manage when the media guidance application processes speech from a conversation. The media guidance application may access user profiles for keywords, select content types, select content sources, and determine relevancy of media content, to provide content in context of a conversation.
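The keyword-matching step can be illustrated with a minimal sketch; the stop-word list, catalog layout, and overlap scoring are all assumptions, since the abstract does not specify a ranking method:

```python
def recommend_from_conversation(utterances, catalog,
                                stop_words=frozenset({"the", "a", "to", "and", "is", "did", "you", "see"})):
    """Extract keywords from conversation text (no wakeword gating) and
    rank catalog items by keyword overlap with their descriptions."""
    keywords = set()
    for utterance in utterances:
        for word in utterance.lower().split():
            if word not in stop_words:
                keywords.add(word)
    # Score each catalog item (title -> description) by keyword overlap.
    scored = []
    for title, description in catalog.items():
        overlap = keywords & set(description.lower().split())
        if overlap:
            scored.append((len(overlap), title))
    return [title for _, title in sorted(scored, reverse=True)]
```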

Methods and apparatus to identify media based on watermarks across different audio streams and/or different watermarking techniques

Example apparatus disclosed herein are to detect a first watermark embedded in an audio stream associated with media, the first watermark embedded and detected based on a first watermarking technique; and detect a second watermark embedded in the audio stream, the second watermark embedded and detected based on a second watermarking technique. Disclosed example apparatus are also to assign the first watermark to a first monitoring track and to a second monitoring track, the first monitoring track limited to watermarks embedded in the audio stream based on the first watermarking technique, the second monitoring track limited to watermarks embedded in the audio stream based on any of the first or second watermarking techniques; group the first and second watermarks to form a media detection event when the second watermark is assigned to the second monitoring track; and cause transmission of the media detection event to a data collection facility.
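The dual-track assignment can be sketched as below; the tuple layout and technique labels "A" and "B" are assumptions standing in for the first and second watermarking techniques:

```python
def assign_tracks(watermarks):
    """Assign watermarks to two monitoring tracks and group them into a
    media detection event.

    Track 1 is limited to watermarks embedded via technique "A"; track 2
    accepts watermarks from either technique, which is what allows marks
    from both techniques to be grouped into one detection event.
    """
    track1, track2 = [], []
    for payload, technique in watermarks:
        if technique == "A":
            track1.append((payload, technique))
        track2.append((payload, technique))  # track 2 accepts any technique
    # Form a detection event once track 2 holds both techniques.
    techniques_seen = {t for _, t in track2}
    event = track2 if {"A", "B"} <= techniques_seen else None
    return track1, track2, event
```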

ADAPTIVE VOLUME CONTROL FOR MEDIA OUTPUT DEVICES AND SYSTEMS
20230239541 · 2023-07-27

Various arrangements for performing dynamic volume control are provided. Audio characteristics of audio content being output to a user may be identified. Adjustments made to an audio volume setting by the user while the audio content is being output to the user can be monitored. A machine learning model can be trained based on the adjustments made to the audio volume setting by the user that are mapped with the audio characteristics of the audio content. After the machine learning model is trained, the audio volume setting can be adjusted based at least in part on the trained machine learning model analyzing audio content.

DISPLAY SYSTEM AND METHOD

A system for obtaining content for display to a user of a head-mountable display device, HMD, the system comprising one or more audio detection units operable to capture audio in the environment of the user, a motion prediction unit operable to predict motion of the HMD in dependence upon the captured audio, and a content obtaining unit operable to obtain content for display in dependence upon the predicted motion of the HMD.

METHOD AND DATA PROCESSING APPARATUS

A method of generating an emotion descriptor icon includes receiving input content comprising video information, and performing analysis on the input content to produce information representing the video information with respect to a plurality of characteristics. The method also includes determining, based on a comparison of the information representing the video information at a temporal position in the video information and a set of information items respectively representing an emotion state, a relative likelihood of association between the input content and at least some of a plurality of emotion states, selecting an emotion state based on the outcome of the determination, and outputting an emotion descriptor icon selected from an emotion descriptor icon set comprising a plurality of emotion descriptor icons. The outputted emotion descriptor icon is associated with the selected emotion state.
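The likelihood comparison and icon selection can be sketched as follows; the feature vectors, the emotion profiles, and the use of negative squared distance as a likelihood proxy are all illustrative assumptions:

```python
def select_emotion_icon(frame_features, emotion_profiles, icon_set):
    """Pick the emotion state whose profile vector best matches the
    feature vector at a temporal position, then return its icon."""
    def likelihood(profile):
        # Closer profiles get higher (less negative) likelihood.
        return -sum((f - p) ** 2 for f, p in zip(frame_features, profile))
    best_state = max(emotion_profiles, key=lambda s: likelihood(emotion_profiles[s]))
    return icon_set[best_state]
```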

SYSTEM AND METHOD FOR CAPTION VALIDATION AND SYNC ERROR CORRECTION
20230028897 · 2023-01-26

Disclosed is a system and method for validating and correcting sync errors in the captions of a media asset comprising a caption file, wherein each caption has a start time and an end time. The system decodes the caption file using a caption decoder to generate a format-agnostic XML file, a transcriber engine extracts an audio track and transcribes the audio to generate a transcript, and a caption analyser identifies matching sets of words in the transcript, assigns a match score, and classifies each caption as one of MATCHING and UNDETECTED based on the match score. The caption analyser determines a sync offset for each caption classified as MATCHING, and the system uses a prediction engine to predict the sync offset of the captions classified as UNDETECTED.
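The caption analyser's classification step can be sketched as below; the word-overlap scoring and the 0.5 threshold are assumptions, as the abstract does not define how the match score is computed:

```python
def classify_captions(captions, transcript_words, match_threshold=0.5):
    """Classify each caption as MATCHING or UNDETECTED.

    captions: iterable of (start, end, text) tuples.
    transcript_words: words produced by the transcriber engine.
    The score is the fraction of caption words found in the transcript.
    """
    transcript = {w.lower() for w in transcript_words}
    results = []
    for start, end, text in captions:
        words = text.lower().split()
        score = sum(w in transcript for w in words) / len(words)
        label = "MATCHING" if score >= match_threshold else "UNDETECTED"
        results.append((start, end, label))
    return results
```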

Methods and apparatus to identify and credit media using ratios of media characteristics

Apparatus, systems, articles of manufacture, and methods to identify and credit media using ratios of media characteristics are disclosed herein. Example apparatus to identify media include at least one memory, instructions, and at least one processor to execute the instructions to: determine a first ratio based on a first time interval and a second time interval of a monitored media signal; determine a second ratio based on the second time interval and a third time interval of the monitored media signal; generate a first ratio signature based on the first and second ratios; and initiate transmission of the first ratio signature to a recipient that is to compare the first ratio signature with a second ratio signature to identify the media.
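A minimal sketch of building a signature from consecutive interval ratios follows; the rounding tolerance is an assumption. Ratios of intervals are scale-invariant, so the same media yields the same signature even if the measured intervals are uniformly scaled:

```python
def ratio_signature(intervals):
    """Build a signature from ratios of consecutive time intervals of a
    monitored media signal (e.g. first/second, second/third, ...)."""
    return tuple(round(intervals[i] / intervals[i + 1], 2)
                 for i in range(len(intervals) - 1))

def matches(signature_a, signature_b):
    """The recipient compares a monitored signature with a reference one."""
    return signature_a == signature_b
```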

ELECTRONIC DEVICE FOR PERFORMING SYNCHRONIZATION OF VIDEO DATA AND AUDIO DATA, AND CONTROL METHOD THEREFOR

An electronic device for use with an external electronic device includes a touchscreen display, at least one speaker, and at least one processor. The at least one processor may obtain a user input for outputting video data of a first medium while audio data of the first medium is output through the at least one speaker, identify a point of time when the audio data is output through the at least one speaker, based on the obtained user input, determine a point of time when the video data is to be output through the touchscreen display or an external electronic device, by a delay time calculated at least based on the identified point of time, and control the touchscreen display or the external electronic device such that the video data is output through the touchscreen display or the external electronic device at the determined point of time.
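The delay computation can be sketched as follows; the parameter names and the millisecond units are assumptions, since the abstract only states that a delay time is calculated from the identified audio output point of time:

```python
def video_presentation_plan(audio_started_at_ms, video_latency_ms, now_ms):
    """Plan when, and from which media position, to output the video.

    The audio of the medium has been playing since audio_started_at_ms.
    The video path (touchscreen or external display) adds video_latency_ms,
    so the video must target the media position the audio will have
    reached once that latency elapses.
    """
    display_at_ms = now_ms + video_latency_ms               # earliest achievable display time
    media_position_ms = display_at_ms - audio_started_at_ms  # frame matching the audio at that time
    return display_at_ms, media_position_ms
```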

Systems and methods for modifying playback of a media asset in response to a verbal command unrelated to playback of the media asset

Systems and methods are described for translating a non-playback command into a playback modification that encourages the recipient of the command to act on it. A media guidance application may detect a command from a first user to a second user. The media guidance application may compare the command to a set of playback operation commands for a media asset that is currently being played back. The media guidance application may determine that the command is not contained within the set of playback operation commands. In response to determining that the command is not contained within the set of commands, the media guidance application may determine whether the second user executes a desired outcome of the command. In response to determining that the second user does not execute the desired outcome of the command, the media guidance application may determine whether to modify playback of the media asset.
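The decision flow in this abstract can be sketched as a small function; the callback-based interface and the choice of pausing as the playback modification are assumptions:

```python
def handle_command(command, playback_commands, recipient_executed, modify_playback):
    """Decide how to respond to a command overheard between two users.

    playback_commands: set of recognised playback operation commands.
    recipient_executed: callable returning True if the second user carried
        out the desired outcome of the command.
    modify_playback: callable applying a playback modification (e.g. pause)
        to encourage the recipient to comply.
    """
    if command in playback_commands:
        return "playback_command"   # ordinary playback control; handle as usual
    if recipient_executed(command):
        return "no_action"          # the recipient already complied
    modify_playback()
    return "playback_modified"
```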