Patent classification: H04N21/4307
Electronic devices and corresponding methods utilizing ultra-wideband communication signals for user interface enhancement
One or more processors of an electronic device detect a communication device in electronic communication with a content presentation companion device that operates as a primary display for the electronic device and includes a first ultra-wideband component. The one or more processors determine, with a second ultra-wideband tag component carried by the electronic device, a distance between the electronic device and the content presentation companion device using an ultra-wideband ranging process. The one or more processors dynamically enhance an audio performance characteristic of the content presentation companion device as a function of the distance between the electronic device and the content presentation companion device.
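The two-step flow the abstract describes — a UWB ranging exchange yielding a distance, then an audio characteristic adjusted as a function of that distance — can be sketched as below. This is a minimal illustration, not the patented method: the function names, the inverse-square gain law, and the clamping limits are all assumptions introduced for the example.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def estimate_distance(tof_seconds: float) -> float:
    """UWB ranging in essence: distance = speed of light x one-way time of flight."""
    return tof_seconds * SPEED_OF_LIGHT

def volume_gain_for_distance(distance_m: float,
                             reference_m: float = 1.0,
                             max_gain_db: float = 12.0) -> float:
    """Boost the companion device's output as the user moves away,
    following the inverse-square law (+6 dB per doubling of distance),
    clamped to a maximum boost."""
    if distance_m <= reference_m:
        return 0.0
    gain_db = 20.0 * math.log10(distance_m / reference_m)
    return min(gain_db, max_gain_db)
```

Here loudness is used as the "audio performance characteristic", but the same distance input could drive equalization or spatialization instead.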
SYNCHRONIZATION OF A VSYNC SIGNAL OF A FIRST DEVICE TO A VSYNC SIGNAL OF A SECOND DEVICE
A method is disclosed that includes setting, at a server, a server VSYNC signal to a server VSYNC frequency, where the server VSYNC signal corresponds to the generation of video frames during frame periods at the server VSYNC frequency. The method includes setting, at a client, a client VSYNC signal to a client VSYNC frequency. The method includes sending compressed video frames from the server to the client over a network using the server VSYNC signal, wherein the compressed video frames are based on the generated video frames. The method includes decoding and displaying, at the client, the compressed video frames. The method includes analyzing the timing of one or more client operations to adjust the relative timing between the server VSYNC signal and the client VSYNC signal as the client receives the compressed video frames.
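The adjustment step — analyzing client-side timing to tune the relative phase of the two VSYNC signals — resembles a phase-locked-loop correction. A minimal sketch under that assumption (the function names, the target-offset model, and the gain constant are illustrative, not from the patent):

```python
def vsync_phase_error(arrival_ts: float, vsync_period: float,
                      target_offset: float) -> float:
    """Where in the client VSYNC cycle a compressed frame arrived,
    relative to the desired offset; wrapped into (-period/2, period/2]
    so corrections always take the shorter direction."""
    phase = arrival_ts % vsync_period
    err = phase - target_offset
    if err > vsync_period / 2:
        err -= vsync_period
    elif err <= -vsync_period / 2:
        err += vsync_period
    return err

def adjust_client_vsync(errors, vsync_period, gain=0.1):
    """Average recent phase errors and return a small timing correction
    to apply to the client VSYNC signal."""
    avg = sum(errors) / len(errors)
    return -gain * avg
```

Applying only a fraction of the averaged error each frame keeps the client VSYNC stable against network jitter while it slowly converges on the server's timing.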
TEXT TAGGING AND GRAPHICAL ENHANCEMENT
Systems and methods for text tagging and graphical enhancement of subtitles in an audio-visual media display are disclosed. A media asset associated with an audio-visual display that includes one or more speaking characters may be received by a text tagging and graphical enhancement system. A set of sounds from the audio-visual display corresponding to speech by one of the speaking characters is identified. The set of sounds corresponding to the identified speaking character may be analyzed to identify one or more vocal parameters, each vocal parameter measuring an element of one of the sounds. A display of subtitles synchronized to the speech of the identified speaking character within the audio-visual display may be generated. The appearance of the subtitles may be modified based on the identified vocal parameters for each of the corresponding sounds.
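The final step — modifying subtitle appearance based on measured vocal parameters — could look like the sketch below, assuming loudness and pitch as the two parameters. The thresholds, color choices, and mapping are invented for illustration; the patent does not specify them.

```python
def subtitle_style(loudness_db: float, pitch_hz: float) -> dict:
    """Map vocal parameters of a sound to subtitle appearance: louder
    speech renders larger (and bold when shouting), and a high-pitched
    voice shifts the subtitle color warmer."""
    base_size = 24
    # grow font size with loudness above a -20 dBFS baseline, capped at +12 pt
    size = base_size + max(0, min(12, int((loudness_db + 20) * 0.5)))
    style = {"font_size": size}
    if loudness_db > -6:          # treat anything above -6 dBFS as shouting
        style["weight"] = "bold"
    style["color"] = "#ffcc00" if pitch_hz > 300 else "#ffffff"
    return style
```

A renderer would call this per subtitle cue, using the parameters measured from the sound that cue is synchronized to.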
Methods and systems for performing and recording live music near live with no latency
Exemplary methods include a processor executing instructions stored in a memory for generating an electronic count-in, binding it to a first performance to generate a master clock, and transmitting a first musician's first performance and first timing information to a network caching, storage, timing and mixing module. The first musician's first performance may be recorded locally at full resolution and transmitted to a full resolution media server, and the first timing information may be transmitted to the master clock. The first musician's first performance is transmitted to a sound device of a second musician; the second musician creates a second performance and transmits it, along with second timing information, to the network caching, storage, timing and mixing module. The first and second performances are mixed along with the first and the second timing information to generate a first mixed audio, which can be transmitted to a sound device of a third musician.
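The mixing step — combining two performances using their timing information against the shared master clock — can be sketched as follows. The function name, the list-of-samples representation, and the padding approach are assumptions for illustration; a real implementation would operate on audio buffers.

```python
def align_and_mix(track_a, start_a, track_b, start_b, sample_rate=48000):
    """Align two performances on the shared master clock by padding the
    later-starting track with silence, then sum sample-by-sample into
    one mix. start_a/start_b are seconds on the master clock."""
    earliest = min(start_a, start_b)
    pad_a = int(round((start_a - earliest) * sample_rate))
    pad_b = int(round((start_b - earliest) * sample_rate))
    a = [0.0] * pad_a + list(track_a)
    b = [0.0] * pad_b + list(track_b)
    n = max(len(a), len(b))
    a += [0.0] * (n - len(a))
    b += [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]
```

Because each performance carries its own timing information, the mix is sample-accurate regardless of when the cached streams actually arrived at the module.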
External Stream Representation Properties
A mechanism for processing video data is disclosed. An essential property (EssentialProperty) of a Dynamic Adaptive Streaming over Hypertext Transfer Protocol (DASH) representation is determined. The EssentialProperty indicates that the representation is an external stream representation (ESR). A conversion is performed between visual media data and a media presentation based on the ESR.
COMMUNICATION APPARATUS, METHOD FOR CONTROLLING COMMUNICATION APPARATUS, AND STORAGE MEDIUM
A communication apparatus is provided and receives a communication packet, acquires a timestamp of a reception of the communication packet, analyzes a type of the received communication packet, transfers the received communication packet to a predetermined memory based on information indicating an analyzed type of the communication packet, and associates the analyzed type of the communication packet with the acquired timestamp.
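The described pipeline — timestamp on reception, analyze the type field, transfer to a type-specific memory while keeping type and timestamp associated — can be sketched as below. The one-byte type field, the type codes, and the queue-per-type layout are assumptions made for the example.

```python
import time

# Hypothetical type codes for the analysis step.
PTP_EVENT, PTP_GENERAL = 0x01, 0x02

def receive_packet(packet: bytes, queues: dict, timestamp=None):
    """Acquire a reception timestamp, analyze the packet's type from its
    first byte, and transfer the packet to the memory (queue) reserved
    for that type, associating the type with the timestamp."""
    ts = time.monotonic() if timestamp is None else timestamp
    pkt_type = packet[0] if packet else None
    queues.setdefault(pkt_type, []).append((ts, packet))
    return pkt_type, ts
```

Capturing the timestamp before any parsing happens is the point: the analysis and transfer can take variable time, but the stored timestamp still reflects the actual reception instant.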
IMAGE SYNCHRONIZATION METHOD AND APPARATUS, AND DEVICE AND COMPUTER STORAGE MEDIUM
Provided are an image synchronization method and apparatus, a device, and a computer storage medium. The method includes: acquiring a plurality of groups of candidate images collected by a plurality of image collection apparatuses for a target object, together with time parameters corresponding to each of the candidate images; selecting a candidate image from each group of candidate images to serve as an image to be analyzed, and constructing a group of images to be analyzed on the basis of the plurality of selected images; and determining the group of images to be analyzed as a synchronization image group corresponding to the target object in response to the time parameters of all the images to be analyzed satisfying the preset synchronization condition.
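One plausible reading of the selection step is: pick one candidate per camera so their capture timestamps are as close together as possible, then accept the group only if the spread satisfies the synchronization condition. A minimal sketch under that assumption (exhaustive search; the patent does not specify the search strategy or the condition):

```python
from itertools import product

def select_sync_group(groups, max_spread):
    """groups: per-camera lists of candidate capture timestamps.
    Pick one timestamp per camera minimizing max-min spread; return the
    chosen combination if the spread meets the sync condition, else None."""
    best = None
    for combo in product(*groups):
        spread = max(combo) - min(combo)
        if best is None or spread < best[0]:
            best = (spread, combo)
    if best is not None and best[0] <= max_spread:
        return best[1]
    return None
```

Brute force is fine for a handful of cameras with few candidates each; with many cameras, a sweep over sorted timestamps would scale better.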
Dynamic delay equalization for media transport
Systems and methods of the present disclosure provide for dynamic delay equalization of related media signals in a media transport system. Methods include receiving a plurality of related media signals, transporting the related media signals along different media paths, calculating uncorrected propagation delays for the media paths, and delaying each of the related media signals by an amount related to the difference between the longest propagation delay (of the uncorrected propagation delays) and the uncorrected propagation delay of the related media signal/media path. Calculating the uncorrected propagation delays and delaying the related media signals may be performed in response to a change to the propagation delay of at least one of the related media signals/media paths. Additionally or alternatively, calculating the uncorrected propagation delays and delaying the related media signals may be performed while transporting the related media signals.
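The core calculation is stated directly in the abstract: each related signal is delayed by the difference between the longest uncorrected propagation delay and its own path's delay. A minimal sketch (the function name and the dict representation of paths are illustrative):

```python
def equalization_delays(propagation_delays_ms):
    """Given uncorrected propagation delays per media path (ms), return
    the added delay for each path so all related signals arrive aligned:
    longest delay minus that path's own delay."""
    longest = max(propagation_delays_ms.values())
    return {path: longest - d for path, d in propagation_delays_ms.items()}
```

Re-running this whenever a path's propagation delay changes gives the "dynamic" part of the equalization: the slowest path always gets zero added delay, and every other path is padded up to match it.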
SYNCHRONIZED PLAYBACK OF MEDIA CONTENT
The subject technology provides for synchronized playback of different media content streams. The disclosed techniques may include determining, while certain audio content is being outputted, whether a triggering event has occurred at a media device. Responsive to a determination that the triggering event has occurred, audio information including identification information and a current output status of the audio content may be obtained, and a visual content stream for visual content corresponding to the audio content may be obtained. At the media device, the visual content stream may be processed based on the audio information to determine a starting time point indicating a time point within the visual content from which to start outputting the visual content. The visual content may be outputted such that the output of the visual content begins at the starting time point and is synchronized in time with the audio content.
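The starting-time-point computation can be sketched as follows: project the audio's reported playback position forward to the moment the first video frame will actually appear. The function name, parameters, and the simple linear projection are assumptions for illustration.

```python
def visual_start_point(audio_position_s, status_time, now, startup_latency_s=0.0):
    """Given the audio's playback position (seconds) as reported at
    status_time, return the time point within the visual content from
    which to start output so video and audio come up synchronized.
    startup_latency_s covers stream fetch/decode time before first frame."""
    return audio_position_s + (now - status_time) + startup_latency_s
```

For example, if the audio reported position 12.0 s half a second ago and the video pipeline needs 0.25 s to show its first frame, the visual content should begin at 12.75 s.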
CONTENT PRESENTATION
The invention relates to the field of content presentation.
A system and a computer-readable medium comprising program instructions are described. The instructions provide for: display of the visual content parts by a display device synchronously with reproduction of the audio content parts by a reproducing device, based on the content synchronization information; display of at least one visual content part not synchronized with the audio content part currently being reproduced, without interrupting reproduction of that audio content part, in response to at least one corresponding action by the user during interaction with the input device interface; and a subsequent return to synchronized display of the visual content parts with reproduction of the audio content parts, either in response to at least one corresponding action by the user during interaction with the input device interface or automatically upon the occurrence of a predetermined event.