Patent classifications
H04N21/242
Systems and methods for establishing a virtual shared experience for media playback
Aspects disclosed herein relate to systems and methods for enhanced media consumption. In one aspect, a media platform is provided that allows users to provide commentary while consuming media. The content provided by a user may be saved and associated with a specific portion of the media. The saved commentary may be presented to other users as they consume the same media file.
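A minimal sketch of the commentary store this abstract describes, assuming commentary is keyed by a media identifier and a playback offset (class and method names are hypothetical, not from the patent):

```python
from collections import defaultdict

class CommentaryStore:
    """Saves user commentary against a (media_id, offset) position and
    replays it to later viewers of the same media file."""

    def __init__(self):
        # media_id -> list of (offset_seconds, user, text), kept sorted
        self._comments = defaultdict(list)

    def add(self, media_id, offset_seconds, user, text):
        entries = self._comments[media_id]
        entries.append((offset_seconds, user, text))
        entries.sort(key=lambda e: e[0])

    def comments_between(self, media_id, start, end):
        """Commentary to surface while playback moves from start to end."""
        return [e for e in self._comments[media_id] if start <= e[0] < end]

store = CommentaryStore()
store.add("movie-42", 75.0, "alice", "Great scene!")
store.add("movie-42", 12.5, "bob", "Watch the background here.")
print(store.comments_between("movie-42", 10.0, 20.0))
```

Later viewers of `movie-42` would query `comments_between` with their current playback window to see earlier users' saved commentary.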
Method and system for dynamic image content replacement in a video stream
The present invention relates to a method for dynamic image content replacement in a video stream, comprising: generating a set of key image data (K) comprising a sequence of at least two different key images (K1, K2); periodically displaying said set of key image data (K) on a physical display; generating at least a first original video stream (O1) of a scene which includes said physical display by recording said scene with a camera, wherein said at least one video stream (O1) comprises key video frames (FK1, FK2) captured synchronously with the displaying of each of said at least two different key images (K1, K2) on said physical display; generating a mask area (MA), corresponding to an active area of said physical display visible in said key video frames, from differential images (ΔFK) obtained from consecutive key video frames (FK1, FK2); generating at least one alternative video stream (V) by inserting alternative image content (I) into the mask area (MA) of an original video stream; and broadcasting at least said at least one alternative video stream.
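The mask-generation step can be sketched on toy grayscale frames: pixels that change between two consecutive key video frames (because the display alternated key images K1 and K2) form the mask area, into which alternative content is then composited. The threshold value and frame layout below are illustrative assumptions, not taken from the patent:

```python
def mask_from_key_frames(fk1, fk2, threshold=32):
    """Mask area (MA): pixels where consecutive key video frames differ,
    i.e. where the physical display showed alternating key images."""
    return [[abs(a - b) > threshold for a, b in zip(r1, r2)]
            for r1, r2 in zip(fk1, fk2)]

def insert_content(original, mask, alternative):
    """Alternative video frame: alternative image content replaces the
    original pixels inside the mask area."""
    return [[alt if m else org
             for org, m, alt in zip(ro, rm, ra)]
            for ro, rm, ra in zip(original, mask, alternative)]

# Toy 2x3 grayscale frames: the physical display occupies the middle column.
fk1 = [[10, 200, 10], [10, 200, 10]]   # captured while K1 was shown
fk2 = [[10,  50, 10], [10,  50, 10]]   # captured while K2 was shown
mask = mask_from_key_frames(fk1, fk2)
frame = insert_content([[10, 120, 10], [10, 120, 10]], mask,
                       [[99, 99, 99], [99, 99, 99]])
print(frame)  # display pixels replaced with 99
```

A production system would additionally handle camera motion and lighting, but the differential-image principle is the same.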
Providing alternative live media content
Techniques are described for providing alternative media content to a client device along with primary media content.
Synchronizing independent media and data streams using media stream synchronization points
A messaging channel is embedded directly into a media stream. Messages delivered via the embedded messaging channel are extracted at a client media player. According to a variant embodiment, and in lieu of embedding all of the message data in the media stream, only a coordination index is injected, and the message data is sent separately and merged into the media stream downstream (at the client media player) based on the coordination index. In one example embodiment, multiple data streams (each potentially with different content intended for a particular “type” or class of user) are transmitted alongside the video stream in which the coordination index (e.g., a sequence number) has been injected into a video frame. Based on a user's service level, a particular one of the multiple data streams is released when the sequence number appears in the video frame, and the data in that stream is associated with the media.
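The variant embodiment can be sketched as a lookup at the client media player: when the coordination index (a sequence number) appears in a video frame, the player releases the message from whichever separately delivered data stream matches the user's service level. The stream keys and message payloads below are illustrative assumptions:

```python
def release_message(frame_sequence_number, data_streams, service_level):
    """Release the data-stream message matching the coordination index
    (sequence number) found in the video frame, selecting the stream
    intended for the viewer's service level."""
    stream = data_streams.get(service_level, {})
    return stream.get(frame_sequence_number)

# Two parallel data streams carrying different content per user class,
# each keyed by the coordination index injected into the video frames.
data_streams = {
    "premium": {101: "extended stats overlay"},
    "basic":   {101: "score update"},
}
print(release_message(101, data_streams, "basic"))    # score update
print(release_message(101, data_streams, "premium"))  # extended stats overlay
```

Because only the index travels inside the media stream, the bulk message data can be delivered and updated independently of the video.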
Implementation method and system of real-time subtitle in live broadcast and device
The present disclosure describes techniques for synchronizing subtitles in a live broadcast. The disclosed techniques comprise obtaining a source signal and a simultaneous interpretation signal in a live broadcast; performing voice recognition on the simultaneous interpretation signal in real-time to obtain corresponding translation text; delaying the simultaneous interpretation signal to obtain a first delayed signal; delaying the source signal to obtain a second delayed signal; obtaining proofreading results of the first delayed signal and the corresponding translation text; determining proofread subtitles based on the proofreading results; and sending the proofread subtitles and the second delayed signal to a live display interface.
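The delay-and-pair flow can be modeled as a toy synchronous pipeline: recognition and proofreading of the interpretation signal take a fixed number of ticks, so the source signal is buffered by the same amount and each proofread subtitle leaves the pipeline together with its matching delayed source sample. All names and the delay value are illustrative assumptions:

```python
from collections import deque

def live_subtitle_pipeline(source, interpretation, recognize, proofread,
                           processing_delay=2):
    """Toy model of the abstract's flow: subtitles derived from the
    interpretation signal are paired with the correspondingly delayed
    source signal before being sent to the display."""
    pending = deque()     # subtitles being recognized/proofread
    src_buffer = deque()  # the delayed source signal
    display = []
    for src, interp in zip(source, interpretation):
        pending.append(proofread(recognize(interp)))
        src_buffer.append(src)
        if len(src_buffer) > processing_delay:
            # The subtitle finishes proofreading exactly when its
            # matching source sample emerges from the delay buffer.
            display.append((src_buffer.popleft(), pending.popleft()))
    return display

subs = live_subtitle_pipeline(
    ["frame1", "frame2", "frame3", "frame4"],
    ["audio1", "audio2", "audio3", "audio4"],
    recognize=lambda a: a.replace("audio", "text"),
    proofread=lambda t: t.upper(),
)
print(subs)  # [('frame1', 'TEXT1'), ('frame2', 'TEXT2')]
```

The trailing samples still inside the buffers correspond to the inherent latency a real live system would flush as the broadcast continues.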
System and method for automatic synchronization of video with music, and gaming applications related thereto
A computer system including a server having a processor and a memory, the memory having a video database and a music database, the video database storing at least one video file having a plurality of video file markers, and the music database storing at least one music file having a plurality of music file markers, wherein the server receives and decodes encoded data from computer readable code, identifies and retrieves from the music database a music file based on the decoded data, synchronizes the retrieved music file with one of the video files by aligning the video file markers of the video file with the music file markers for the retrieved music file to produce a synchronized video-music file, and transmits the synchronized video-music file to a display, wherein the video file markers are generated for each video file and the music file markers are generated for each music file.
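Aligning video file markers with music file markers amounts to building a time map between the two files. A minimal sketch, assuming markers are index-matched timestamp lists and alignment is done by piecewise-linear interpolation between marker pairs (an assumption; the patent does not specify the interpolation):

```python
import bisect

def sync_music_to_video(video_markers, music_markers):
    """Returns a function mapping a video timestamp to the music timestamp
    that keeps each music file marker aligned with the corresponding
    video file marker."""
    assert len(video_markers) == len(music_markers) >= 2

    def music_time_for(video_t):
        # Locate the surrounding pair of video markers, then interpolate.
        i = max(1, min(bisect.bisect_right(video_markers, video_t),
                       len(video_markers) - 1))
        v0, v1 = video_markers[i - 1], video_markers[i]
        m0, m1 = music_markers[i - 1], music_markers[i]
        frac = (video_t - v0) / (v1 - v0)
        return m0 + frac * (m1 - m0)

    return music_time_for

# Video markers at 0s/4s/8s must line up with music markers at 0s/2s/6s.
tmap = sync_music_to_video([0.0, 4.0, 8.0], [0.0, 2.0, 6.0])
print(tmap(2.0))  # midway between the first marker pair
print(tmap(6.0))  # midway between the second marker pair
```

Producing the synchronized video-music file would then resample or time-stretch the music according to this map.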
Precision timing for broadcast network
The present aspects relate to techniques for timing synchronization of audio and video (AV) data in a network. In particular, the techniques allow an AV master to distribute AV data encoded with one or more time markers to a plurality of processing nodes. The one or more time markers may be indexed to a precision time protocol (PTP) time stamp used as a time reference. In one technique, the nodes extract the time markers to determine an offset value that is applied to a phase-locked loop (PLL) to synchronize AV data packets at a distribution node or a processing node. In another technique, the distribution node or the processing node determines the worst-case path, which corresponds to a system offset value. The distribution node then reports the system offset value to the AV master, which in turn adjusts the phase based on the report.
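The offset arithmetic in the second technique can be sketched simply: each path's offset is the difference between a marker's reference PTP stamp and the PTP time at which the marked packet actually arrived, and the worst-case path is the largest such offset. The timestamps below are illustrative, and the real mechanism would feed the per-node offset into a PLL rather than just computing it:

```python
def node_offset(marker_ptp_time, local_arrival_ptp_time):
    """Offset a processing node derives from an extracted time marker:
    how far behind the PTP reference the marked AV packet arrived."""
    return local_arrival_ptp_time - marker_ptp_time

def system_offset(path_offsets):
    """The worst-case path determined at the distribution node: the
    largest per-path offset, reported back to the AV master so it can
    adjust the phase."""
    return max(path_offsets)

# Three paths from the AV master; the slowest path dominates alignment.
offsets = [node_offset(1000.0, t) for t in (1003.5, 1002.0, 1007.25)]
print(offsets)
print(system_offset(offsets))
```

Reporting only the worst-case value keeps the master's phase adjustment conservative enough for every downstream path.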