Patent classifications
H04N21/2368
Method, apparatus and systems for audio decoding and encoding
An audio processing system (100) accepts an audio bitstream having one of a plurality of predefined audio frame rates. The system comprises a front-end component (110), which receives a variable number of quantized spectral components, corresponding to one audio frame in any of the predefined audio frame rates, and performs an inverse quantization according to predetermined, frequency-dependent quantization levels. The front-end component may be agnostic of the audio frame rate. The audio processing system further comprises a frequency-domain processing stage (120) and a sample rate converter (130), which provide a reconstructed audio signal sampled at a target sampling frequency independent of the audio frame rate. By its frame-rate adaptability, the system can be configured to operate frame-synchronously in parallel with a video processing system that accepts plural video frame rates.
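The frame-rate adaptability described above hinges on a simple relation: at a fixed target sampling frequency, each predefined audio frame rate implies a different number of samples per frame. The sketch below illustrates that relation only; the 48 kHz target rate and the listed frame rates are assumptions for illustration, not values taken from the abstract.

```python
# Illustrative sketch: samples covered by one audio frame at a fixed
# target sampling frequency, for several candidate frame rates.
# The specific numbers are assumptions, not from the patent.
TARGET_SAMPLE_RATE = 48_000  # Hz (assumed target sampling frequency)

def samples_per_frame(frame_rate: float) -> float:
    """Number of audio samples spanned by one frame at the target rate."""
    return TARGET_SAMPLE_RATE / frame_rate

for rate in (24.0, 25.0, 30.0, 50.0, 60.0):
    print(f"{rate:5.1f} fps -> {samples_per_frame(rate):7.1f} samples/frame")
```

A front-end that is "agnostic of the audio frame rate" would simply accept whatever variable number of quantized spectral components corresponds to one such frame.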
DYNAMIC DELAY EQUALIZATION FOR MEDIA TRANSPORT
Systems and methods of the present disclosure provide for dynamic delay equalization of related media signals in a media transport system. Methods include receiving a plurality of related media signals, transporting the related media signals along different media paths, calculating uncorrected propagation delays for the media paths, and delaying each of the related media signals by an amount related to the difference between the longest propagation delay (of the uncorrected propagation delays) and the uncorrected propagation delay of the related media signal/media path. Calculating the uncorrected propagation delays and delaying the related media signals may be performed in response to a change to the propagation delay of at least one of the related media signals/media paths. Additionally or alternatively, calculating the uncorrected propagation delays and delaying the related media signals may be performed while transporting the related media signals.
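The core delay-equalization rule in this abstract reduces to: delay each signal by the difference between the longest uncorrected propagation delay and that signal's own delay. A minimal sketch of that calculation, with invented path names and millisecond values:

```python
def equalization_delays(propagation_delays):
    """Compute the added delay per media path so all related signals
    align: longest uncorrected delay minus each path's own delay."""
    longest = max(propagation_delays.values())
    return {path: longest - delay for path, delay in propagation_delays.items()}

# Hypothetical uncorrected propagation delays, in milliseconds.
uncorrected = {"audio": 12.0, "video": 45.0, "metadata": 3.0}
print(equalization_delays(uncorrected))
```

The path carrying the longest delay receives zero added delay; every other path is padded up to match it. Recomputing this mapping whenever any path's delay changes gives the "in response to a change" behavior described above.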
Audio and Video Synchronization
Concepts and technologies disclosed herein are directed to audio and video synchronization. According to one aspect disclosed herein, an audio-video (“AV”) synchronization system can simultaneously capture samples of a pre-encode media stream and a post-encode media stream. The pre-encode media stream can include AV content prior to being encoded. The post-encode media stream can include the AV content after being encoded. The AV synchronization system can align a pre-encode video component of the pre-encode media stream with a post-encode video component and can determine a video offset therebetween. The AV synchronization system can align a pre-encode audio component of the pre-encode media stream with a post-encode audio component of the post-encode media stream and can determine an audio offset therebetween. The AV synchronization system can then compare the video offset and the audio offset to determine if the post-encode media stream is synchronized with the pre-encode media stream.
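The comparison step above can be sketched with a toy alignment: find the lag that best matches each post-encode component to its pre-encode counterpart, then compare the two lags. The dot-product scoring and the sample sequences below are assumptions for illustration; the patent does not specify the alignment method.

```python
def best_offset(reference, delayed, max_lag):
    """Lag (in samples) that best aligns `delayed` with `reference`,
    scored by a plain dot-product correlation. Illustrative only."""
    def score(lag):
        return sum(a * b for a, b in zip(reference, delayed[lag:]))
    return max(range(max_lag + 1), key=score)

pre = [1, 2, 3, 0, 0, 0, 0, 0]    # pre-encode component (invented)
post = [0, 0, 0, 1, 2, 3, 0, 0]   # post-encode component, delayed 3 samples

video_offset = best_offset(pre, post, max_lag=4)
audio_offset = best_offset(pre, post, max_lag=4)
in_sync = (video_offset == audio_offset)  # equal offsets -> A/V synchronized
```

Here both components happen to share the same offset, so the post-encode stream would be judged synchronized; differing offsets would indicate A/V drift introduced by encoding.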
METHOD, SYSTEM AND SERVER FOR LIVE STREAMING AUDIO-VIDEO FILE
A method for live streaming an audio-video file is disclosed, in which an original audio-video file is obtained; an audio frame and a video frame are read from the original audio-video file; the video frame is transcoded into video frames with different code rates; the video frames with different code rates are synthesized respectively with the audio frame into audio-video files with different code rates; the audio frames and the video frames are extracted from the audio-video files with different code rates respectively to form respective video streams; and different video streams are pushed.
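One common way to realize the transcode-to-multiple-code-rates step is to issue one transcode job per target bitrate. The sketch below only builds ffmpeg command lines (the `-i`, `-c:v`, `-b:v`, and `-c:a copy` options are real ffmpeg flags; the filenames and bitrates are invented) and does not model the synthesis or stream-pushing steps.

```python
def transcode_commands(source, bitrates_kbps):
    """Build one ffmpeg command per target video code rate, copying the
    audio frames unchanged (hypothetical filenames/bitrates)."""
    commands = []
    for kbps in bitrates_kbps:
        output = f"stream_{kbps}k.mp4"
        commands.append(["ffmpeg", "-i", source,
                         "-c:v", "libx264", "-b:v", f"{kbps}k",
                         "-c:a", "copy", output])
    return commands

for cmd in transcode_commands("original.mp4", [800, 1500, 3000]):
    print(" ".join(cmd))
```

Each resulting file pairs the same audio frames with video frames at a different code rate, matching the synthesis step described above.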
SIGNAL PROCESSING DEVICE, AUDIO-VIDEO DISPLAY DEVICE AND PROCESSING METHOD
A signal processing device is disclosed, which includes a plurality of channel receivers, a plurality of time code processors in one-to-one correspondence with the channel receivers, a timing generator, a signal processor and a transmitter, wherein each channel receiver is configured to parse an audio-video signal which has a data format defined by the SDI protocol and includes a time code that characterizes time information. Each time code processor is configured to extract the time code from a parsed audio-video signal obtained by a corresponding channel receiver, and form first frame image data including a frame time code. The signal processor is configured to form an absolute frame output image based on multiple channels of the first frame image data, the frame time codes therein, and an internal clock signal generated by the timing generator. The transmitter is configured to transmit the absolute frame output image for display.
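Combining multiple channels into an "absolute frame output image" keyed by frame time codes can be pictured as buffering each channel's frames by time code and emitting a combined frame once every channel has contributed. This is a toy sketch under assumed simplifications (channels numbered 0..n-1, time codes as strings); SDI parsing and the internal clock are not modeled.

```python
from collections import defaultdict

class AbsoluteFrameAssembler:
    """Toy sketch: gather one frame per channel for each frame time code,
    and emit the combined (absolute) frame once all channels arrive."""

    def __init__(self, num_channels):
        self.num_channels = num_channels
        self.pending = defaultdict(dict)  # time code -> {channel: frame}

    def push(self, channel, timecode, frame):
        bucket = self.pending[timecode]
        bucket[channel] = frame
        if len(bucket) == self.num_channels:
            del self.pending[timecode]
            return [bucket[c] for c in range(self.num_channels)]
        return None  # still waiting on other channels

assembler = AbsoluteFrameAssembler(num_channels=2)
assembler.push(0, "01:00:00:00", "frame-A")            # incomplete -> None
combined = assembler.push(1, "01:00:00:00", "frame-B") # complete -> both frames
```

Matching on the frame time code rather than arrival order is what lets channels arriving at different times still form a coherent output image.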
LOW LATENCY WIRELESS VIRTUAL REALITY SYSTEMS AND METHODS
Virtual Reality (VR) systems, apparatuses and methods of processing data are provided which include predicting, at a server, a user viewpoint of a next frame of video data based on received user feedback information sensed at a client, rendering a portion of the next frame using the prediction, encoding the portion, formatting the encoded portion into packets and transmitting the video data. At a client, the encoded and packetized A/V data is received and depacketized. The portion of video data and corresponding audio data is decoded and controlled to be displayed and aurally provided in synchronization. Latency may be minimized by utilizing handshaking between hardware components and/or software components such as a 3D server engine, one or more client processors, a video encoder, a server NIC, a video decoder, a client NIC, and a 3D client engine.
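The viewpoint-prediction step can be illustrated with the simplest possible predictor: linear extrapolation from the two most recent sensor samples. The yaw/pitch representation and the linear model are assumptions for illustration; the abstract does not specify the prediction method.

```python
def predict_viewpoint(prev, curr, dt_ahead, dt_between):
    """Linearly extrapolate the next viewpoint (e.g. yaw, pitch in
    degrees) from the two most recent samples. Purely illustrative:
    real predictors would typically filter noisy sensor input."""
    scale = dt_ahead / dt_between
    return tuple(c + (c - p) * scale for p, c in zip(prev, curr))

# Two head-orientation samples 10 ms apart; predict 10 ms ahead.
predicted = predict_viewpoint(prev=(0.0, 0.0), curr=(1.0, 2.0),
                              dt_ahead=10.0, dt_between=10.0)
```

The server would render (and encode) only the portion of the next frame visible from the predicted viewpoint, which is what keeps the transmitted data small and the latency low.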
Cloud server, control equipment and method for audio and video synchronization
A method in control equipment for synchronizing audio and video acquires an identifier of the playback equipment. Audio format information and video format information for the playback equipment are acquired, and first time delay data corresponding to the audio data and the video data is acquired from the cloud server according to the identifier, the audio format information and the video format information. A first delay time between the audio data and the video data is then set according to the first time delay data acquired from the cloud server, implementing audio and video synchronization.
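The cloud-server step above amounts to a lookup keyed on (device identifier, audio format, video format). The table contents, key names and millisecond values below are invented for illustration; only the lookup shape comes from the abstract.

```python
# Hypothetical cloud-side table: (device id, audio format, video format)
# -> first time delay data in milliseconds. All entries are invented.
DELAY_TABLE = {
    ("tv-123", "ac3", "h264"): 120,
    ("tv-123", "aac", "h265"): 80,
}

def first_time_delay(device_id, audio_fmt, video_fmt, default=0):
    """Return the delay data for this device/format combination,
    falling back to a default when no entry is known."""
    return DELAY_TABLE.get((device_id, audio_fmt, video_fmt), default)

delay_ms = first_time_delay("tv-123", "ac3", "h264")
```

The control equipment would then apply `delay_ms` between the audio and video data paths to bring the two into synchronization.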