Patent classifications
H04N21/4394
SYSTEMS AND METHODS FOR RECOMMENDING CONTENT USING PROGRESS BARS
During playback of a content item, a media signature corresponding to a first portion of the content item is identified. A number of media signatures representing portions of a plurality of other content items may have been previously identified and stored. Each stored media signature may also include an identifier of an associated content item and a timestamp corresponding to a position in the associated content item at which the signature is located. If it is determined that the identified media signature matches a stored media signature, a progress bar is generated for display comprising an identifier of the content item associated with the matching stored media signature, and a progress indicator corresponding to a timestamp associated with the stored media signature.
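The signature-matching and progress-indicator steps described above can be sketched minimally in Python. The store format, signature type, and all names here are illustrative assumptions, not the patent's actual implementation:

```python
def match_signature(identified_sig, stored_signatures):
    """Return the (content identifier, timestamp) pair for a stored
    signature matching the one identified during playback, else None."""
    return stored_signatures.get(identified_sig)

def progress_fraction(timestamp, duration):
    """Position of the progress indicator as a fraction of total runtime."""
    return max(0.0, min(1.0, timestamp / duration))

# Illustrative store: signature -> (content identifier, timestamp in seconds)
stored = {
    "sig-abc": ("movie-42", 1800.0),  # signature located 30 min into movie-42
}

match = match_signature("sig-abc", stored)
content_id, ts = match
frac = progress_fraction(ts, 7200.0)  # assume a 2-hour runtime
```

A real system would derive signatures from audio/video fingerprints and tolerate near-matches; exact dictionary lookup stands in for that here.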
INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD
An information processing system for obtaining an audio content file for video data providing video content representing a sport event, including: a receiver configured to receive a data stream including the video data; a preference data obtainer configured to obtain preference data, wherein the preference data indicate a selected competitor participating in the sport event; a category identifier obtainer configured to obtain a category identifier from a machine learning algorithm into which the video data is input, wherein the machine learning algorithm is trained to classify a scene represented in the video content into a category of a predetermined set of categories associated with the sport event, wherein the category identifier indicates the category into which the scene is classified; an audio content file obtainer configured to obtain, based on the obtained category identifier and the obtained preference data, the audio content file from a prestored set of audio content files, wherein the audio content file provides audio content associated with the category of the scene and the preference data; and a synchronizer configured to synchronize the audio content and the video content for synchronized playback of the scene by a media player configured to play back the video content and the audio content file.
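The audio content file obtainer's selection step, keyed on the classified category and the preferred competitor, can be sketched as a lookup with a neutral fallback. Category names, file names, and the library format are illustrative assumptions:

```python
def select_audio_file(category_id, competitor, audio_library):
    """Pick a prestored audio clip for the classified scene category and
    the viewer's preferred competitor, falling back to a neutral clip
    keyed on the category alone."""
    specific = audio_library.get((category_id, competitor))
    if specific is not None:
        return specific
    return audio_library.get((category_id, None))

# Illustrative prestored set: (category, competitor-or-None) -> audio file
library = {
    ("goal", "team_a"): "cheer_team_a.wav",   # competitor-specific clip
    ("goal", None): "generic_goal.wav",       # neutral fallback
}

preferred = select_audio_file("goal", "team_a", library)
fallback = select_audio_file("goal", "team_b", library)
```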
Audio improvement using closed caption data
Methods and systems are described herein for improving audio for hearing impaired content consumers. An example method may comprise determining a content asset. Closed caption data associated with the content asset may be determined. At least a portion of the closed caption data may be determined based on a user setting associated with a hearing impairment. Compensating audio comprising a frequency translation associated with at least the portion of the closed caption data may be generated. The content asset may be caused to be output with audio content comprising the compensating audio and the original audio.
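One way to read the abstract is that closed caption timing tells the system where speech occurs, so compensating audio can be generated for just those portions. A hedged sketch, with the cue format `(start_s, end_s, text)` and the per-sample mask both illustrative assumptions:

```python
def speech_segments(captions):
    """Time ranges covered by non-empty caption cues; cues are assumed
    to be (start_seconds, end_seconds, text) tuples."""
    return [(s, e) for s, e, text in captions if text.strip()]

def compensation_mask(segments, duration_s, rate):
    """Per-sample flags marking where compensating audio (e.g. a
    frequency-translated speech band) would be mixed with the original
    audio: 1.0 = apply compensation, 0.0 = leave untouched."""
    mask = [0.0] * int(duration_s * rate)
    for s, e in segments:
        for i in range(int(s * rate), min(int(e * rate), len(mask))):
            mask[i] = 1.0
    return mask

cues = [(0.2, 0.5, "Hello there"), (0.6, 0.7, "")]  # second cue is empty
segments = speech_segments(cues)
mask = compensation_mask(segments, 1.0, 10)  # tiny 10 Hz rate for clarity
```

The actual frequency translation (shifting speech energy into a band the listener can hear) is a DSP step omitted here.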
Method and apparatus for identifying a single user requesting conflicting content and resolving said conflict
Systems and methods for automatically determining when a single party is playing or requesting conflicting content on two different devices, and resolving the conflict accordingly. Systems automatically identify when a single user is playing back a content stream on one device and then requests another content stream on another device. If the two content streams conflict, the conflict is automatically resolved in a number of ways, including by automatically pausing or redirecting one of the content streams. Conflict identification may also be carried out with the assistance of an added state flag that indicates a device or stream that has audio priority in a conflict. Thus, for example, when one user requests two conflicting content streams and only one stream carries the audio-priority state flag, the audio of the flagged stream may be played while the other stream is muted.
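The mute-based resolution using an audio-priority flag can be sketched as follows; the stream dictionaries and field names are illustrative, not the patent's data model:

```python
def resolve_conflict(streams):
    """Given conflicting streams requested by the same user, keep audio on
    the stream carrying the audio-priority flag and mute the rest. Falls
    back to the first stream if none is flagged. Pausing/redirecting, the
    other resolutions mentioned above, are not modeled here."""
    flagged = [s for s in streams if s.get("audio_priority")]
    keep = flagged[0] if flagged else streams[0]
    return [{**s, "muted": s is not keep} for s in streams]

streams = [
    {"id": "tv", "audio_priority": False},
    {"id": "tablet", "audio_priority": True},  # flagged stream keeps audio
]
resolved = resolve_conflict(streams)
```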
Target character video clip playing method, system and apparatus, and storage medium
Provided are a target character video clip playing method, system and apparatus, and a storage medium. The method comprises: performing target character recognition on an entire video using image recognition technology, locating a plurality of video clips containing the target character, and obtaining a first playing time period set corresponding to the video clips; obtaining, according to the audio clips marked for each character within the entire video, a second playing time period set corresponding to each character's audio clips; merging the time periods included in the two playing time period sets to obtain a combined playing time period set for the target character; and playing the target character's video according to the ordering of the playing timelines within the combined set.
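The time-period merging step is essentially an interval-union pass over the image-based and audio-based sets. A minimal sketch, with periods as (start, end) second pairs (an illustrative format):

```python
def merge_periods(periods):
    """Union of playing time periods: sort by start, then coalesce any
    period that overlaps or touches the previous merged period."""
    merged = []
    for start, end in sorted(periods):
        if merged and start <= merged[-1][1]:
            # Overlaps the last merged period: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

image_periods = [(0, 5), (10, 12)]   # clips where the character is seen
audio_periods = [(3, 8)]             # clips where the character is heard
combined = merge_periods(image_periods + audio_periods)
```

The merged list is already in playback order, matching the final step of playing clips by their position on the timeline.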
Methods and apparatus to detect commercial advertisements associated with media presentations
Methods and apparatus to detect commercial advertisements associated with media presentations are disclosed. An example method involves receiving a video frame and detecting a change in box-formatting between the video frame and a subsequent video frame. A transition between the video frame and the subsequent video frame is indicated as a commercial advertisement transition based on the detected change in box-formatting.
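One plausible reading of "box-formatting" is letterbox bars: counting near-black rows at the frame edges and flagging a transition when that count changes. The frame representation (lists of luma rows) and the threshold are illustrative assumptions:

```python
def box_format(frame, threshold=16):
    """Count near-black rows at the top and bottom of a luma frame
    (a list of rows of pixel values): a rough letterbox-bar proxy."""
    def is_black(row):
        return max(row) < threshold
    top = 0
    while top < len(frame) and is_black(frame[top]):
        top += 1
    bottom = 0
    while bottom < len(frame) - top and is_black(frame[-1 - bottom]):
        bottom += 1
    return (top, bottom)

def is_commercial_transition(frame_a, frame_b):
    """Flag a transition when box-formatting differs between frames."""
    return box_format(frame_a) != box_format(frame_b)

black = [0, 0, 0, 0]
bright = [120, 130, 125, 128]
letterboxed = [black, bright, bright, black]   # bars top and bottom
full_frame = [bright, bright, bright, bright]  # no bars
transition = is_commercial_transition(letterboxed, full_frame)
```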
Methods and apparatus to reduce false crediting of exposure to video-on-demand media assets
Methods, apparatus, systems and articles of manufacture are disclosed to reduce false crediting of exposure to video-on-demand media assets. Example apparatus disclosed herein include a signature matcher to compare a sequence of monitored media signatures to sequences of reference signatures representative of corresponding reference media assets, the sequence of monitored media signatures included in monitoring data reported by a media device meter, the sequences of reference signatures stored in a library of reference signatures. Disclosed example apparatus also include a matched assets counter to determine a count of ones of the reference media assets represented by corresponding ones of the sequences of reference signatures determined to match the sequence of monitored media signatures. Disclosed examples further include a credit determiner to determine whether to credit media exposure to a first one of the reference media assets based on the count.
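The matcher/counter/credit-determiner pipeline can be sketched as contiguous subsequence matching followed by an ambiguity check. Signature values, the library format, and the exact crediting rule are illustrative assumptions:

```python
def count_matching_assets(monitored, reference_library):
    """Count reference assets whose signature sequence contains the
    monitored signature sequence as a contiguous run."""
    m, n = tuple(monitored), len(monitored)
    count = 0
    for asset, ref_seq in reference_library.items():
        ref = tuple(ref_seq)
        if any(ref[i:i + n] == m for i in range(len(ref) - n + 1)):
            count += 1
    return count

def should_credit(match_count):
    """Credit exposure only when exactly one asset matches; withholding
    credit on an ambiguous multi-asset match is one way to reduce the
    false crediting the abstract describes."""
    return match_count == 1

library = {
    "assetA": ["s1", "s2", "s3", "s4"],
    "assetB": ["s9", "s2", "s3", "s7"],
}
ambiguous = count_matching_assets(["s2", "s3"], library)  # in both assets
unique = count_matching_assets(["s3", "s4"], library)     # assetA only
```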
ADJUSTING AUDIO AND NON-AUDIO FEATURES BASED ON NOISE METRICS AND SPEECH INTELLIGIBILITY METRICS
Some implementations involve determining a noise metric and/or a speech intelligibility metric and determining a compensation process corresponding to the noise metric and/or the speech intelligibility metric. The compensation process may involve altering a processing of audio data and/or applying a non-audio-based compensation method. In some examples, altering the processing of the audio data does not involve applying a broadband gain increase to the audio signals. Some examples involve applying the compensation process in an audio environment. Other examples involve determining compensation metadata corresponding to the compensation process and transmitting an encoded content stream that includes encoded compensation metadata, encoded video data and encoded audio data from a first device to one or more other devices.
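The metric-to-compensation mapping might look like the dispatch below. Thresholds, process names, and the return shape are all illustrative assumptions; note that both audio-side options here avoid a broadband gain increase, as the description specifies:

```python
def choose_compensation(noise_db, intelligibility):
    """Map a noise metric and/or a speech intelligibility metric to a
    compensation process: an audio-processing change, a non-audio-based
    method (e.g. enabling captions), or neither."""
    if intelligibility is not None and intelligibility < 0.5:
        # Poor intelligibility: emphasize the speech band and add a
        # non-audio-based aid rather than raising overall level.
        return {"audio": "speech_band_emphasis", "non_audio": "enable_captions"}
    if noise_db is not None and noise_db > 60.0:
        # Noisy environment: compress dynamics instead of broadband gain.
        return {"audio": "dynamic_range_compression", "non_audio": None}
    return {"audio": None, "non_audio": None}

noisy = choose_compensation(72.0, 0.8)
unintelligible = choose_compensation(45.0, 0.3)
quiet = choose_compensation(40.0, 0.9)
```

In the encoder-side variant described above, the chosen process would be serialized as compensation metadata and sent alongside the encoded audio and video rather than applied locally.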
Methods and apparatus to determine an audience composition based on voice recognition
Methods, apparatus, systems and articles of manufacture are disclosed. An example apparatus includes a controller to cause a people meter to emit a prompt for input of audience identification information at a first time and determine a first audience count based on the input, an audio detector to determine a second audience count based on signatures generated from audio data captured in the media environment, and a comparator to cause the people meter to not emit the prompt for at least a first time period after the first time when the first audience count is equal to the second audience count.
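The comparator's prompt-suppression logic can be sketched as a simple rule: skip the prompt for a hold period after a prompt whose count the audio-based count corroborated. The hold duration and parameter names are illustrative:

```python
def should_prompt(meter_count, audio_count, last_prompt_s, now_s,
                  hold_s=3600.0):
    """Suppress the people-meter prompt for hold_s seconds after a prompt
    whose audience count matched the count derived from audio signatures;
    otherwise prompt as usual."""
    counts_agree = meter_count == audio_count
    within_hold = (now_s - last_prompt_s) < hold_s
    return not (counts_agree and within_hold)

# Counts agree and we are inside the hold period: no prompt.
suppressed = should_prompt(3, 3, last_prompt_s=0.0, now_s=600.0)
# Counts disagree: prompt even inside the hold period.
prompted = should_prompt(3, 2, last_prompt_s=0.0, now_s=600.0)
```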
Implementation method and system of real-time subtitle in live broadcast and device
The present disclosure describes techniques for synchronizing subtitles in a live broadcast. The disclosed techniques comprise obtaining a source signal and a simultaneous interpretation signal in a live broadcast; performing voice recognition on the simultaneous interpretation signal in real time to obtain corresponding translation text; delaying the simultaneous interpretation signal to obtain a first delayed signal; delaying the source signal to obtain a second delayed signal; obtaining proofreading results of the first delayed signal and the corresponding translation text; determining proofread subtitles based on the proofreading results; and sending the proofread subtitles and the second delayed signal to a live display interface.
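The signal-delaying steps amount to holding frames back long enough for recognition and proofreading to catch up. A minimal fixed-delay buffer sketch; the frame granularity and class name are illustrative assumptions:

```python
from collections import deque

class DelayLine:
    """Fixed-length delay buffer: each push returns the frame that was
    pushed delay_frames earlier (None until the buffer has filled)."""

    def __init__(self, delay_frames):
        self.buf = deque([None] * delay_frames)

    def push(self, frame):
        self.buf.append(frame)
        return self.buf.popleft()

# Hold the source signal back two frames while subtitles are proofread.
video_delay = DelayLine(2)
out1 = video_delay.push("frame-1")  # buffer still filling
out2 = video_delay.push("frame-2")  # buffer still filling
out3 = video_delay.push("frame-3")  # earliest frame emerges
```

In the described system, one such delay would produce the first delayed (interpretation) signal for proofreading and another the second delayed (source) signal sent to the display alongside the proofread subtitles.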