Patent classifications
H04S5/00
Dynamic audio upmixer parameters for simulating natural spatial variations
A system and method for creating natural spatial variations in an audio output. At least one parameter in a set of mixer tuning parameters is dynamically modified over time, within a predetermined range defined by a set of modification control parameters. The resulting set of mixer tuning parameters, including the at least one dynamically modified parameter, is applied to a mixer, which uses it to create natural spatial variations in the audio output played at one or more loudspeakers.
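The bounded, time-varying parameter drift described above could be sketched as a clamped random walk. This is a minimal illustration; the function name, step size, and range values are assumptions, not taken from the patent:

```python
import random

def modulate_parameter(base, lo, hi, step, n, seed=0):
    """Random-walk one mixer tuning parameter, clamped to [lo, hi].

    base  -- nominal parameter value (e.g. a surround-channel gain)
    lo/hi -- the predetermined range set by the modification control parameters
    step  -- maximum per-frame drift
    """
    rng = random.Random(seed)
    value = base
    trajectory = []
    for _ in range(n):
        value += rng.uniform(-step, step)   # small random drift per frame
        value = max(lo, min(hi, value))     # stay within the control range
        trajectory.append(value)
    return trajectory

traj = modulate_parameter(base=0.5, lo=0.3, hi=0.7, step=0.02, n=100)
```

Feeding each frame's value to the mixer, rather than a fixed constant, is what produces the "natural" variation while the control parameters keep it inside a safe range.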
Audio processing in adaptive intermediate spatial format
Systems, methods, and computer program products for audio processing based on an Adaptive Intermediate Spatial Format (AISF) are described. AISF is an extension of ISF that allows the spatial resolution around an ISF ring to be adjusted dynamically according to the content of incoming audio objects. An AISF encoder device adaptively warps each ISF ring during ISF encoding to adjust the angular distance between objects, increasing the uniformity of the energy distribution around the ring. At an AISF decoder device, the matrices that decode sound positions to the output speakers take the warping performed at the encoder into account, so that the true positions of the sound sources are reproduced.
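One way to picture the ring warping is to re-position objects at the quantiles of their energy distribution, so that equal energy falls in equal angular spans. This is a hypothetical sketch, not the patented algorithm; the decoder would invert the same map to recover true positions:

```python
def warp_angles(angles_deg, energies):
    """Warp object angles so cumulative energy is uniform around the ring.

    Each object is moved to the midpoint of its own energy mass within
    the cumulative distribution, spread over the full 360-degree ring.
    """
    order = sorted(range(len(angles_deg)), key=lambda i: angles_deg[i])
    total = sum(energies)
    warped = [0.0] * len(angles_deg)
    cum = 0.0
    for i in order:
        cum += energies[i] / 2          # midpoint of this object's energy mass
        warped[i] = 360.0 * cum / total
        cum += energies[i] / 2
    return warped

# Four equal-energy objects clustered in a 30-degree arc spread out uniformly.
warped = warp_angles([0.0, 10.0, 20.0, 30.0], [1.0, 1.0, 1.0, 1.0])
```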
Spatializing audio data based on analysis of incoming audio data
A system for spatializing audio data is provided. The system analyzes audio data to identify when to generate spatialized audio data. It can receive incoming audio data comprising a plurality of channel-based audio signals as well as object-based audio, and it analyzes the audio data and/or metadata associated with it to determine when to generate spatialized audio data. The system can identify one or more categories associated with the audio data (e.g., stereo, mono, game effect, etc.) and use the category to determine whether or not to spatialize the audio data.
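The category-driven decision could look like the dispatch below. The category names and the metadata layout are illustrative assumptions, not taken from any real API:

```python
def should_spatialize(metadata):
    """Decide whether to spatialize audio based on its categories (sketch).

    Category names here are illustrative placeholders.
    """
    flat_categories = {"stereo", "mono", "system_sound"}      # leave as-is
    spatial_categories = {"game_effect", "object", "surround"}  # spatialize
    cats = set(metadata.get("categories", []))
    if cats & spatial_categories:
        return True
    if cats and cats <= flat_categories:
        return False
    # No decisive category: spatialize multichannel content, pass stereo through.
    return metadata.get("channels", 2) > 2
```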
Acoustic signal processing device, acoustic signal processing method, and program for determining a steering coefficient which depends on angle between sound source and microphone
An acoustic signal processing device calculates the signal waveform that a microphone receives when at least one of a sound source and the microphone is moving. The device includes a coefficient calculation unit and a recording signal calculation unit. The coefficient calculation unit models a steering coefficient g_{k,m}, representing how much of the amplitude of the sound-source signal emitted at the m-th discrete time (m an integer between 1 and M, where M is the length of the source signal) is transferred to the amplitude of the signal that the microphone receives at the k-th discrete time (k an integer between 1 and K, where K is the length of the recording signal), using an N-order Fourier series expansion, where N is an integer of 1 or more. The recording signal calculation unit then calculates the received signal waveform using the modeled steering coefficient g_{k,m}.
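The two units could be sketched as follows: a truncated Fourier series models the time-varying transfer g_{k,m}, and the received signal is the sum of source samples weighted by it. The delay window and coefficient layout are illustrative assumptions, not the patent's model:

```python
import math

def steering_coefficient(k, m, K, M, coeffs):
    """g_{k,m}: amplitude transfer from source time m to mic time k,
    modeled as a truncated (N-order) Fourier series in the normalized
    emission time. A purely illustrative model.
    """
    t = m / M                               # normalized emission time in [0, 1)
    a0, pairs = coeffs                      # pairs = [(a_n, b_n), ...], N terms
    g = a0 / 2
    for n, (an, bn) in enumerate(pairs, start=1):
        g += an * math.cos(2 * math.pi * n * t) + bn * math.sin(2 * math.pi * n * t)
    # Only recent past emissions contribute (crude propagation-delay window).
    return g if 0 <= k - m <= 2 else 0.0

def received_signal(x, K, coeffs):
    """Recording signal y[k] = sum_m g_{k,m} * x[m]."""
    M = len(x)
    return [sum(steering_coefficient(k, m, K, M, coeffs) * x[m]
                for m in range(M)) for k in range(K)]

# An impulse through a constant (order-0) coefficient smears over the window.
y = received_signal([1.0, 0.0, 0.0], K=4, coeffs=(2.0, []))
```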
Systems and methods for processing audio signals based on user device parameters
In various applications, the system provides a method for processing audio signals that includes: receiving a request for audio content; receiving an identifier encoded in a personal audio device that comprises a transducer for playing audio; retrieving at least one parameter associated with the identifier; and processing the audio content using at least the request, the identifier, and the at least one parameter, where the processing is customized for the personal audio device based on that parameter. In various applications the parameter may be one or more of the following: associated with a specification of the personal audio device; acoustic metrics of the transducer; related to control of equalization; or related to permission to enable proprietary sonic processing for enhanced acoustic reception of streaming content. The identifier may likewise be associated with such permission, stored in a chip on the personal audio device, or retrieved from a server in a network, among other things. In various applications, the personal audio device comprises ear buds and the identifier is stored in a non-volatile memory of the ear buds.
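The identifier-to-parameter lookup and customized processing could be sketched as below. The registry contents, device ID, and the reduction of an EQ to a single broadband gain are all assumptions for illustration:

```python
# Hypothetical device-parameter registry keyed by the device identifier.
DEVICE_PARAMS = {
    "earbud-x1": {"eq_gains": [1.0, 2.0, 3.0], "proprietary_ok": True},
}

def process_audio(samples, device_id):
    """Customize processing using parameters tied to the device identifier.

    Sketch: a 3-band EQ collapsed to its average broadband gain,
    applied only when the identifier is known to the registry.
    """
    params = DEVICE_PARAMS.get(device_id)
    if params is None:
        return list(samples)                # unknown device: pass through
    gain = sum(params["eq_gains"]) / len(params["eq_gains"])
    return [s * gain for s in samples]
```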
Method and system for surround sound processing in a headset
An audio headset may receive a plurality of audio signals corresponding to a plurality of surround sound channels. The headset may determine, via its audio processing circuitry, the context and/or content of the audio signals. The audio processing circuitry may process the audio signals to generate stereo signals carrying one or more virtual surround channels, where the processing comprises automatically controlling, based on the context and content of the audio signals, a simulated acoustic environment for the virtual surround channels.
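At its simplest, context-controlled virtualization amounts to folding the surround channels into a stereo pair with context-dependent gains. The gain tables and context names below are invented for illustration; a real headset would use HRTF-based virtualization rather than plain gains:

```python
# Illustrative per-context downmix gains; the values are assumptions.
CONTEXT_GAINS = {
    "movie": {"C": 0.7, "L": 1.0, "R": 1.0, "Ls": 0.8, "Rs": 0.8},
    "music": {"C": 0.5, "L": 1.0, "R": 1.0, "Ls": 0.6, "Rs": 0.6},
}

def virtualize(channels, context):
    """Fold surround channels into a stereo pair; the simulated acoustic
    environment (here reduced to gains) is selected by the content context."""
    g = CONTEXT_GAINS[context]
    n = len(next(iter(channels.values())))
    left = [g["L"] * channels["L"][i] + g["C"] * channels["C"][i]
            + g["Ls"] * channels["Ls"][i] for i in range(n)]
    right = [g["R"] * channels["R"][i] + g["C"] * channels["C"][i]
             + g["Rs"] * channels["Rs"][i] for i in range(n)]
    return left, right

ch = {"C": [1.0], "L": [0.0], "R": [0.0], "Ls": [0.0], "Rs": [0.0]}
left, right = virtualize(ch, "movie")
```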
Method and apparatus for adaptive control of decorrelation filters
An audio signal processing method and apparatus for adaptively adjusting a decorrelator. The method comprises obtaining a control parameter and calculating the mean and variation of that parameter. The ratio of the variation to the mean is then calculated, and a decorrelation parameter is derived from this ratio. The decorrelation parameter is then provided to a decorrelator.
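The variation-to-mean ratio is a coefficient of variation, and one plausible reading is to clamp it into a usable decorrelation range. The specific mapping below is an assumption, not the patented formula:

```python
def decorrelation_parameter(control_values, d_min=0.0, d_max=1.0):
    """Map the variation/mean ratio of a control parameter to a
    decorrelation amount (illustrative mapping, not the patent's).
    """
    n = len(control_values)
    mean = sum(control_values) / n
    variation = (sum((v - mean) ** 2 for v in control_values) / n) ** 0.5
    if mean == 0:
        return d_max                         # degenerate case: fully decorrelate
    ratio = variation / mean                 # coefficient of variation
    return max(d_min, min(d_max, ratio))     # clamp into the usable range
```

A steady control parameter thus yields no decorrelation, while a strongly fluctuating one drives the decorrelator harder.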
Determining corrections to be applied to a multichannel audio signal, associated coding and decoding
A method and device for determining a set of corrections to be applied to a multichannel audio signal, in which the set of corrections is determined from information representative of the spatial image of an original multichannel signal and information representative of the spatial image of the same signal after it has been coded and then decoded.
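If the spatial image is summarized as per-channel energies, one natural correction is a per-channel gain restoring the original energy after coding. This is a simplified sketch under that assumption, not the method of the patent:

```python
def correction_gains(original_energies, decoded_energies, eps=1e-12):
    """Per-channel gains that restore the original spatial image
    (here: the channel energy distribution) after coding/decoding.
    """
    return [(o / max(d, eps)) ** 0.5
            for o, d in zip(original_energies, decoded_energies)]

# A channel whose energy dropped from 4.0 to 1.0 needs a gain of 2.0.
gains = correction_gains([4.0, 1.0], [1.0, 1.0])
```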
System and method for adaptive audio signal generation, coding and rendering
Embodiments are described for an adaptive audio system that processes audio data comprising a number of independent monophonic audio streams. One or more of the streams has associated metadata specifying whether the stream is channel-based or object-based. Channel-based streams carry rendering information encoded by channel name, while object-based streams carry location information encoded in the associated metadata. A codec packages the independent audio streams into a single serial bitstream that contains all of the audio data. This configuration allows sound to be rendered according to an allocentric frame of reference, in which the rendering location of a sound is based on the characteristics of the playback environment (e.g., room size, shape, etc.) so as to correspond to the mixer's intent. The object position metadata contains the allocentric frame-of-reference information required to play the sound correctly using the available speaker positions in a room set up to play the adaptive audio content.
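The channel-versus-object dispatch can be sketched as below: a channel-based stream routes straight to the named speaker, while an object-based stream is panned from its position metadata. The stream layout and the toy sine-law pan between two speakers are assumptions for illustration:

```python
import math

def render_stream(stream):
    """Route a mono stream either by channel name or by panning an
    object position across the available speakers (toy sine-law pan).

    Returns a dict of speaker-name -> gain.
    """
    if stream["type"] == "channel":
        return {stream["name"]: 1.0}        # gain straight to the named speaker
    # Object-based: pan between L and R from a normalized x position in [-1, 1].
    theta = (stream["position"]["x"] + 1) * math.pi / 4   # maps to [0, pi/2]
    return {"L": math.cos(theta), "R": math.sin(theta)}

channel_gains = render_stream({"type": "channel", "name": "L"})
object_gains = render_stream({"type": "object", "position": {"x": 0.0}})
```

The sine-law pan keeps the summed power constant (cos² + sin² = 1), so an object sliding across positions does not change loudness.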