Patent classifications
H04N21/4307
Using non-audio data embedded in an audio signal
Embodiments included herein generally relate to measuring a latency of a playback device. For example, a method includes: determining a first latency of a playback device; determining a second latency of the playback device; comparing the second latency to the first latency to determine whether an event occurred at the playback device; and in response to detecting a latency change between the second latency and the first latency indicating the occurrence of the event, adjusting a timing of a data stream provided to the playback device based on the latency change.
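The comparison-and-adjustment method above can be sketched in a few lines. This is a minimal illustration, not the patented implementation; the threshold value, function names, and millisecond units are all assumptions.

```python
# Sketch of latency-change event detection and stream-timing adjustment.
# Assumption: a latency delta above a threshold indicates an event at the
# playback device (e.g. a receiver switching audio-processing modes).

EVENT_THRESHOLD_MS = 5.0  # hypothetical sensitivity

def detect_event(first_latency_ms: float, second_latency_ms: float,
                 threshold_ms: float = EVENT_THRESHOLD_MS) -> bool:
    """Compare two latency measurements; a large delta implies an event."""
    return abs(second_latency_ms - first_latency_ms) > threshold_ms

def adjust_stream_timing(stream_offset_ms: float, first_latency_ms: float,
                         second_latency_ms: float) -> float:
    """Shift the data-stream timing by the observed latency change,
    but only when the change indicates that an event occurred."""
    if detect_event(first_latency_ms, second_latency_ms):
        return stream_offset_ms + (second_latency_ms - first_latency_ms)
    return stream_offset_ms
```

With these assumptions, a 20 ms latency jump shifts the stream timing by 20 ms, while a 1 ms jitter leaves it untouched.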
Media playback system with virtual line-in
Example systems and techniques disclosed herein facilitate interoperability between different media playback systems referred to herein as a virtual line-in (VLI) media playback system and a native playback system. When a VLI session is created by a VLI sender, a first native playback device can join a VLI group as a VLI receiver. As a VLI receiver, the first native playback device receives audio content and playback commands from the VLI sender to facilitate synchronous playback with other VLI receivers. At the same time, this native playback device can concurrently operate as a native domain group coordinator of a native domain synchrony group. As the native domain group coordinator, the native playback device translates VLI domain audio, control, and timing signals into the native domain and distributes such signals to native domain group members. In this way, the native domain group members can synchronize their playback with the VLI group.
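The dual-role device described above — VLI receiver and native-domain group coordinator at once — can be sketched as follows. The class, the offset-based timing translation, and the member interface are assumptions for illustration only.

```python
# Sketch of a native playback device acting as VLI receiver and as
# native-domain group coordinator. Assumption: translating VLI timing
# into the native domain reduces to applying a measured clock offset.

class NativeGroupCoordinator:
    def __init__(self, members, vli_to_native_offset):
        self.members = members          # native-domain group members (sinks)
        self.offset = vli_to_native_offset

    def on_vli_frame(self, audio, vli_play_at):
        """Receive audio + timing from the VLI sender, translate the
        timestamp into the native domain, and distribute to members."""
        native_play_at = vli_play_at + self.offset
        for member in self.members:
            member.append((audio, native_play_at))
        return native_play_at
```

A member here is any object with an `append` method (a plain list works for testing); a real device would queue the frame for synchronized output.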
Enhanced immersive digital media
This disclosure describes systems, methods, and devices related to immersive digital media. A method may include receiving, at a first device, first volumetric data, and second volumetric data including a first volumetric time slice of a first volumetric media stream. The method may include determining that the first volumetric time slice includes a first portion and a second portion, the first portion representing a first object and including an amount of the second volumetric data. The method may include determining that the first volumetric data represents the first object. The method may include generating a second volumetric time slice including the first volumetric data and the second portion of the first volumetric time slice, and generating a second volumetric media stream including the second volumetric time slice. The method may include sending the second volumetric media stream for presentation at a third device.
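The core slice-substitution step — replacing the portion of a time slice that represents an already-known object with previously received volumetric data, while keeping the rest of the slice — can be sketched as below. The data types and field names are illustrative assumptions.

```python
# Sketch of volumetric time-slice substitution. Assumption: a slice is
# modeled as two opaque byte portions, one representing the first object
# and one holding the remainder of the slice.
from dataclasses import dataclass

@dataclass
class TimeSlice:
    object_portion: bytes   # first portion: represents the first object
    rest: bytes             # second portion of the time slice

def substitute_object(cached_object: bytes, slice_in: TimeSlice) -> TimeSlice:
    """Build the second time slice: already-held first volumetric data
    replaces the object portion; the second portion is carried over."""
    return TimeSlice(object_portion=cached_object, rest=slice_in.rest)
```

The benefit suggested by the abstract is that the (potentially large) object portion need not travel with every slice once the receiver already holds data representing that object.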
System and method for synchronized streaming of a video-wall
A system is disclosed for processing and streaming real-time graphics by a video-server for synchronized output via secondary-network connected display adapters to multiple displays arranged as a video-wall. This system enables the video-server to leverage performance advantages afforded by advanced GPUs, combined with low-cost Smart displays or System-on-Chip devices to deliver advanced real-time video-wall capabilities over the network while offering flexibility in the selection of network display adapters and still achieving synchronized output of multiple sub-image streams to selected end-point displays. This has applications generally in the field of real-time multiple-display graphics distribution as well as specific applications in the field of network video-walls. A method and computer readable medium are also disclosed that operate in accordance with the system.
System and method for compiling user-generated videos
A system and method operate within a computer network environment to compile videos into a compilation, where each video is programmatically inserted into the compilation and the resulting video compilation plays alongside an audio track, preferably sourced using a unique identifier for the audio track. The system includes a solution stack comprising a remote service system and at least one client, which may be operable to generate at least one video to be associated with an audio track section, with such section determined by selected start/end times, programmatically identified, or programmatically associated based on selected metadata. The system then compiles at least one user-generated video into an audiovisual set, which may be presented as a social post, and further into a video compilation, which may include additional filler content, to play alongside a section or the entirety of an audio track.
Multiple Device Content Management
The description relates to cooperatively controlling devices based upon their location and pose. One example can receive first sensor data associated with a first device and second sensor data associated with a second device. This example can analyze the first sensor data and the second sensor data to determine relative locations and poses of the first device relative to the second device and can supply the locations and poses to enable content to be collectively presented across the devices based upon the relative locations and poses.
CONTROL AND SYNCHRONIZATION IN VIDEO PRODUCTION
Index values are periodically incremented at a central or primary video production control system and a video production control system node. Messages that include or otherwise indicate current index values generated by the primary video production control system are transmitted to the video production control system node, and the node reflects the messages back to the primary video production control system. The primary video production control system determines a round-trip message time, associated with the video production control system node, based on the transmitted messages and reflections of the messages received from the node. A further message that includes or otherwise indicates an instruction for execution of a task at a target execution time, is transmitted by the primary video production control system to the node at a time, in advance of the target execution time, that is based on the round-trip message time.
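The round-trip measurement and advance scheduling described above resemble classic clock-compensation schemes (e.g. NTP-style round-trip estimation). A minimal sketch, with function names, the half-RTT one-way estimate, and the safety margin all being assumptions:

```python
# Sketch of round-trip measurement via reflected index messages, and of
# choosing a send time for a task instruction ahead of its target
# execution time. Assumption: one-way delay is approximated as RTT / 2.
import time

def measure_rtt(index: int, reflect) -> float:
    """Send an index value to the node; the node reflects it back.
    Returns the round-trip time in seconds."""
    t0 = time.monotonic()
    assert reflect(index) == index  # node echoes the index unchanged
    return time.monotonic() - t0

def send_time_for(target_execution_time: float, rtt: float,
                  margin: float = 0.0) -> float:
    """Transmit the instruction early enough to cover the one-way
    delay to the node, plus an optional safety margin."""
    return target_execution_time - rtt / 2.0 - margin
```

In practice `reflect` would be a network round trip; here any echo callable (e.g. `lambda i: i`) stands in for the node.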
COORDINATED PRIMARY MEDIA STREAM WITH AGGREGATE SUPPLEMENTAL MEDIA STREAM
Providing a coordinated primary media stream with an aggregate supplemental media stream is disclosed. A request is received from an end user device associated with a user for supplemental media. Metadata and a broadcast time associated with a primary media stream transmitted to the end user device are determined. Based on the metadata and the broadcast time, first supplemental media from a first account of the user on a first platform and second supplemental media from a second account of the user on a second platform are determined. The first supplemental media and the second supplemental media are merged into an aggregate supplemental media stream. The aggregate supplemental media stream is streamed in synchronization with the primary media stream.
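The merge step — combining supplemental media from two platform accounts into one aggregate stream ordered for playback against the primary stream — can be sketched with a timestamp merge. Representing items as `(timestamp, payload)` tuples is an assumption for illustration.

```python
# Sketch of merging two supplemental media feeds into one aggregate
# stream ordered by timestamp, so it can be streamed in synchronization
# with the primary media stream. Each feed is assumed already sorted.
import heapq

def merge_supplemental(first_feed, second_feed):
    """Merge two timestamp-sorted lists of (timestamp, payload) items
    into a single aggregate stream ordered by timestamp."""
    return list(heapq.merge(first_feed, second_feed))
```

`heapq.merge` streams lazily, which suits feeds of unbounded length; the `list(...)` here is only for demonstration.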
MEDIA CONTENT DISPLAY SYNCHRONIZATION ON MULTIPLE DEVICES
A method for displaying media content on devices respectively linked to media players from a group of media players. The method includes the acts of: transmitting configuration data to the media players; transmitting data corresponding to the media content to at least one of the media players; transmitting to the media players data corresponding to a multicast address and to an entry port; selecting, by the server, a master media player among the at least one media player that received the data corresponding to the media content; and sending, by the master media player, a multicast media stream using the multicast address and the entry port, the multicast media stream being obtained by the master media player from the data corresponding to the media content.
Method and system for synchronizing playback of independent audio and video streams through a network
In a method and system for synchronizing an audio signal and a video signal, a source device receives an audio-video signal comprising the audio signal and the video signal. The video signal has a video time stamp. The source device communicates the audio signal to a first sink device through a wireless network with a second time stamp and communicates the video signal to a second sink device with the video time stamp. The second sink device generates a synchronization (synch) signal and communicates the synch signal to the first sink device. The first sink device compares the synch signal to the second time stamp, adjusts the relative playback timing of the audio signal in response to the comparison, and generates an audible signal from the audio signal.
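The comparison step at the first sink device — synch signal versus the audio's second time stamp — reduces to computing a timing correction and applying it to the audio playback time. A minimal sketch, with names and sign convention assumed:

```python
# Sketch of the audio-sink timing adjustment. Assumption: the correction
# is the difference between the video sink's synch-signal timestamp and
# the audio's second time stamp; positive means the audio should play later.

def timing_correction(synch_signal_ts: float, audio_time_stamp: float) -> float:
    """Compare the synch signal to the second time stamp."""
    return synch_signal_ts - audio_time_stamp

def adjusted_play_time(nominal_play_time: float, synch_signal_ts: float,
                       audio_time_stamp: float) -> float:
    """Adjust the relative playback timing of the audio signal."""
    return nominal_play_time + timing_correction(synch_signal_ts, audio_time_stamp)
```

Repeating this per synch signal keeps the independently delivered audio stream aligned with the video sink over time.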