Patent classifications
G10L21/055
Method, apparatus and systems for audio decoding and encoding
An audio processing system (100) accepts an audio bitstream having one of a plurality of predefined audio frame rates. The system comprises a front-end component (110), which receives a variable number of quantized spectral components, corresponding to one audio frame in any of the predefined audio frame rates, and performs an inverse quantization according to predetermined, frequency-dependent quantization levels. The front-end component may be agnostic of the audio frame rate. The audio processing system further comprises a frequency-domain processing stage (120) and a sample rate converter (130), which provide a reconstructed audio signal sampled at a target sampling frequency independent of the audio frame rate. By its frame-rate adaptability, the system can be configured to operate frame-synchronously in parallel with a video processing system that accepts plural video frame rates.
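As a rough sketch of the decoding flow this abstract describes (frame-rate-agnostic inverse quantization followed by a sample rate converter with a fixed output rate), the Python below is illustrative only: the step-size table, the use of an inverse FFT as the frequency-domain stage, and the 48 kHz target are assumptions, not details taken from the patent.

```python
import numpy as np
from math import gcd
from scipy.signal import resample_poly

def dequantize(indices: np.ndarray, step_sizes: np.ndarray) -> np.ndarray:
    """Inverse quantization with predetermined, frequency-dependent step sizes.

    Only the number of spectral components in the frame matters here, so this
    front-end stage can stay agnostic of the audio frame rate.
    """
    return indices.astype(float) * step_sizes[: len(indices)]

def decode_frame(indices, step_sizes, native_rate_hz, target_rate_hz=48_000):
    spectrum = dequantize(indices, step_sizes)
    samples = np.fft.irfft(spectrum)      # stand-in for the frequency-domain stage
    # Sample rate converter: the output rate is fixed regardless of frame rate.
    g = gcd(int(target_rate_hz), int(native_rate_hz))
    return resample_poly(samples, target_rate_hz // g, native_rate_hz // g)

# Example: a 513-bin frame decoded from a 44.1 kHz stream to 48 kHz output.
pcm = decode_frame(np.arange(513), np.ones(513), native_rate_hz=44_100)
```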
Systems and methods for generating composite media using distributed networks
Distributed systems and methods for generating composite media include receiving a media context that defines the media to be generated, the media context including a definition of a sequence of media segment specifications and an identification of a set of remote devices. For each media segment specification, a reference segment may be generated and transmitted to at least one remote device. A media segment may be received from each of the remote devices, the media segment having been recorded by a camera. Verified media segments may replace the corresponding reference segments. The media segments may be aggregated and an updated sequence of media segments may be defined. An instance of the media context that includes a subset of the updated sequence of media segments may then be generated.
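A minimal sketch of that flow, assuming the media context and segments can be modeled as plain Python values; `MediaContext`, `record_on_device`, and `verify` are invented names for illustration, not terms from the patent.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MediaContext:
    segment_specs: List[str]   # ordered sequence of media segment specifications
    remote_devices: List[str]  # identification of the set of remote devices

def generate_composite(ctx: MediaContext,
                       record_on_device: Callable[[str, str], str],
                       verify: Callable[[str], bool]) -> List[str]:
    # Start from reference segments, one per specification.
    sequence = [f"reference:{spec}" for spec in ctx.segment_specs]
    for i, spec in enumerate(ctx.segment_specs):
        device = ctx.remote_devices[i % len(ctx.remote_devices)]
        recorded = record_on_device(device, spec)  # camera-recorded segment
        if verify(recorded):
            sequence[i] = recorded                 # replace the reference segment
    return sequence                                # updated sequence of segments

ctx = MediaContext(["intro", "chorus"], ["phone-a", "phone-b"])
print(generate_composite(ctx,
                         record_on_device=lambda d, s: f"{d}:{s}.mp4",
                         verify=lambda seg: True))
```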
SYNCHRONIZATION OF DIGITAL ALGORITHMIC STATE DATA WITH AUDIO TRACE SIGNALS
Embodiments disclosed herein provide systems, methods, and computer readable media for synchronizing algorithmic state data with audio trace signals. In a particular embodiment, a method provides processing digital audio data linearly in the time domain using a digital audio processing algorithm. The method further provides determining the digital algorithmic state data comprising an output state of the digital audio processing algorithm at each of a plurality of time instances during the processing of the digital audio data. Also, the method provides aligning the digital algorithmic state data in the time domain with a trace of the digital audio data.
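The following sketch is illustrative rather than the patented implementation: it shows one way an algorithm's output state can be sampled at regular time instances while the audio is processed linearly in the time domain, so the state data aligns with the audio trace by timestamp. The gain-smoothing algorithm and the 10 ms state period are assumptions.

```python
import numpy as np

def process_with_state_trace(audio: np.ndarray, sample_rate: int,
                             state_period_s: float = 0.01):
    """audio: float samples in [-1, 1], processed one sample at a time."""
    out = np.empty_like(audio)
    gain, states = 1.0, []
    step = int(state_period_s * sample_rate)
    for n, x in enumerate(audio):
        # A simple smoothed-gain algorithm whose internal state we trace.
        gain = 0.999 * gain + 0.001 * min(1.0, 1.0 / (abs(x) + 1e-9))
        out[n] = gain * x
        if n % step == 0:
            # (time in seconds, output state) pairs align the algorithmic
            # state data with the audio trace in the time domain.
            states.append((n / sample_rate, gain))
    return out, states
```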
ANIMATION SYNTHESIS SYSTEM AND LIP ANIMATION SYNTHESIS METHOD
An animation synthesis system is provided. The animation synthesis system includes a display; a storage configured to store a language model database, a phonetic-symbol lip-motion matching database, and a lip motion synthesis database; and a processor electronically connected to the storage and the display. The processor includes a speech conversion module, a phonetic-symbol lip-motion matching module, and a lip motion synthesis module. A lip animation synthesis method is also provided.
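A toy sketch of the three-module pipeline, with the three databases mocked as Python dictionaries; every name, phoneme, and value here is invented for illustration.

```python
# Mock databases: phonetic symbol -> lip shape, lip shape -> keyframe values.
PHONETIC_LIP_DB = {"M": "closed", "AA": "open_wide", "UW": "rounded"}
LIP_SYNTHESIS_DB = {"closed": [0.0], "open_wide": [0.9], "rounded": [0.4]}

def speech_to_phonemes(text: str) -> list[str]:
    """Stand-in for the speech conversion module backed by the language model
    database; a real module would operate on recognized speech."""
    return {"ma": ["M", "AA"]}.get(text, [])

def synthesize_lip_animation(text: str) -> list[list[float]]:
    keyframes = []
    for phoneme in speech_to_phonemes(text):
        shape = PHONETIC_LIP_DB[phoneme]           # phonetic-symbol lip-motion matching
        keyframes.append(LIP_SYNTHESIS_DB[shape])  # lip motion synthesis
    return keyframes

print(synthesize_lip_animation("ma"))  # [[0.0], [0.9]] -> drives the display
```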
Text-driven editor for audio and video assembly
The disclosed technology is a system and computer-implemented method for assembling and editing a video program from spoken words or soundbites. The disclosed technology imports source audio/video clips in any of multiple formats. Spoken audio is transcribed into searchable text. The text transcript is synchronized to the video track by timecode markers. Each spoken word corresponds to a timecode marker, which in turn corresponds to one or more video frames. Using word-processing operations and text-editing functions, a user selects video segments by selecting the corresponding transcribed text segments. By selecting and arranging text, the corresponding video program is assembled. The selected video segments are assembled on a timeline display in any order the user chooses. The sequence of video segments may be reordered and edited, as desired, to produce a finished video program for export.
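One way to picture the word-to-timecode mapping: a hypothetical sketch in which each transcribed word carries its start and end timecodes, so selecting a run of text selects the matching video span. The `Word` layout and helper function are assumptions, not the product's internal format.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start_tc: float  # timecode in seconds; maps to video frames at a known fps
    end_tc: float

def span_for_selection(transcript: list[Word], first: int, last: int):
    """Selecting transcript words first..last selects the matching video span."""
    return transcript[first].start_tc, transcript[last].end_tc

transcript = [Word("breaking", 0.0, 0.4), Word("news", 0.4, 0.8),
              Word("tonight", 0.8, 1.3)]
# Assemble a timeline by arranging text selections in any order, then export.
timeline = [span_for_selection(transcript, 1, 2),
            span_for_selection(transcript, 0, 0)]
print(timeline)  # [(0.4, 1.3), (0.0, 0.4)]
```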
APPARATUS AND METHOD FOR GENERATING VISUAL CONTENT FROM AN AUDIO SIGNAL
An apparatus and method for generating visual content from an audio signal are described. The method includes receiving (310) audio content, processing (320) the audio content to separate it into a first and a second portion, converting (330) the second portion into visual content, delaying (340) the first portion based on a time relationship between the audio content and the visual content, the delay accounting for the time needed to process the first portion and convert the second portion, and providing (350) the visual content and audio content for reproduction. The apparatus includes a source separation module (210) that separates the received audio content into the first and second portions, a converter module (220) that converts the second portion into visual content, and a synchronization module (230) that delays the first portion based on a time relationship between the audio content and the visual content.
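A minimal sketch of the three-module synchronization, assuming a crude low/high spectral split for the source-separation step, per-block RMS as the "visual content", and a fixed, known conversion latency; all three are illustrative stand-ins, not the patented design.

```python
import numpy as np

def generate_visuals(audio: np.ndarray, sample_rate: int,
                     conversion_delay_s: float = 0.1):
    # Source separation module (210): split into a first (passed-through)
    # and a second (visualized) portion -- here, a crude low/high split.
    spectrum = np.fft.rfft(audio)
    low = spectrum.copy()
    low[len(low) // 4:] = 0
    first = np.fft.irfft(low, n=len(audio))
    second = audio - first
    # Converter module (220): second portion -> visual content (RMS per block).
    block = 1024
    visuals = [float(np.sqrt(np.mean(second[i:i + block] ** 2)))
               for i in range(0, len(second) - block + 1, block)]
    # Synchronization module (230): delay the first portion so the reproduced
    # audio lines up with the generated visual content.
    delay_samples = int(conversion_delay_s * sample_rate)
    first_delayed = np.concatenate([np.zeros(delay_samples), first])
    return first_delayed, visuals
```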
Audiovisual capture and sharing framework with coordinated, user-selectable audio and video effects filters
Coordinated audio and video filter pairs are applied to enhance artistic and emotional content of audiovisual performances. Such filter pairs, when applied in audio and video processing pipelines of an audiovisual application hosted on a portable computing device (such as a mobile phone or media player, a computing pad or tablet, a game controller or a personal digital assistant or book reader) can allow user selection of effects that enhance both audio and video coordinated therewith. Coordinated audio and video are captured, filtered and rendered at the portable computing device using camera and microphone interfaces, using digital signal processing software executable on a processor and using storage, speaker and display devices of, or interoperable with, the device. By providing audiovisual capture and personalization on an intimate handheld device, social interactions and postings of a type made popular by modern social networking platforms can now be extended to audiovisual content.
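As a hypothetical illustration of coordinated filter pairs, the sketch below registers each user-selectable effect as one audio filter plus one video filter, so a single selection drives both the audio and video processing pipelines together; the effect name and filters are invented.

```python
from typing import Callable, Dict, Tuple

AudioFilter = Callable[[list[float]], list[float]]
VideoFilter = Callable[[list[int]], list[int]]

EFFECTS: Dict[str, Tuple[AudioFilter, VideoFilter]] = {
    # "vintage": soften the audio and darken the video frames in tandem.
    "vintage": (lambda samples: [0.8 * s for s in samples],
                lambda pixels: [max(0, p - 30) for p in pixels]),
}

def apply_effect(name: str, audio: list[float], frames: list[int]):
    audio_fx, video_fx = EFFECTS[name]        # one selection, two paired filters
    return audio_fx(audio), video_fx(frames)  # applied in both pipelines

print(apply_effect("vintage", [1.0, 0.5], [200, 40]))
```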