Patent classifications
H04S7/307
Time-varying always-on compensation for tonally balanced 3D-audio rendering
A system reduces sound coloration caused by rendering of a 3D audio signal. The system renders the 3D audio signal, which includes a plurality of channels, from an input audio signal. Input spectral data defining spectral information of the input audio signal is computed, and 3D spectral data defining spectral information of a single-channel representation of the 3D audio signal is computed. The system generates a tonal balance filter based on the input spectral data and the 3D spectral data. The tonal balance filter, when applied to the 3D audio signal, reduces the sound coloration caused by the rendering. The tonal balance filter is applied to the 3D audio signal to generate an output audio signal, and the output audio signal is presented via a speaker array.
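The tonal balance filter described above can be sketched as a per-frequency gain: the ratio of the input signal's magnitude spectrum to that of a single-channel downmix of the rendered 3D channels. The following is a minimal NumPy sketch, assuming a simple sum downmix and a single FFT frame; the function name and the ratio rule are illustrative, not the patented method:

```python
import numpy as np

def tonal_balance_filter(input_sig, rendered_channels, n_fft=512, eps=1e-12):
    """Per-frequency compensation gains (hypothetical sketch):
    ratio of the input spectrum to the spectrum of a mono downmix
    of the rendered 3D channels."""
    downmix = np.sum(rendered_channels, axis=0)          # single-channel representation
    input_spec = np.abs(np.fft.rfft(input_sig, n_fft))   # input spectral data
    render_spec = np.abs(np.fft.rfft(downmix, n_fft))    # 3D spectral data
    return input_spec / (render_spec + eps)

# A rendering that doubles the amplitude should be compensated by gains of ~0.5
rng = np.random.default_rng(0)
x = rng.standard_normal(512)
rendered = np.stack([x, x])      # two identical channels -> downmix is 2x the input
gains = tonal_balance_filter(x, rendered)
```

Applying these gains to each rendered channel's spectrum would restore the input's overall tonal balance while preserving the spatial rendering.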
Method and system for audio critical listening and evaluation
Disclosed herein is a method of constructing and utilizing a sound engineering evaluation and comparison process that allows for improved finished results. The method entails utilizing a high-pass filter for listening evaluation of recorded music or sounds, including consistency checks on low-frequency mixing, as a tool for implementing changes in relation to the filtered results in order to accommodate sensitivities of the human ear (with the optional inclusion of a comparison method to provide possible further enhanced results and to avoid biases). In this manner, a method that facilitates sound engineering mixing adjustments providing such accommodations is provided, yielding improved sound recordings for distribution within on-line or recorded-product frameworks.
PERCEPTUAL BASS EXTENSION WITH LOUDNESS MANAGEMENT AND ARTIFICIAL INTELLIGENCE (AI)
One embodiment provides a computer-implemented method that includes implementing a customizable compressor for at least one sidechain processing associated with a loudspeaker. Machine learning is applied to automatically tune one or more parameters of the at least one sidechain processing. One or more channels are extracted, including a low-frequency effects (LFE) channel, for nonlinear signal synthesis. A proportional power-sum-based mix-in of an LFE sidechain channel is applied into a non-LFE sidechain. The LFE sidechain channel is maintained within a specified threshold of being level, before and after nonlinear signal synthesis.
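The "proportional power-sum-based mix-in" can be illustrated with a toy sketch in which the LFE sidechain's power, scaled by a proportion, is added to the non-LFE sidechain's power before converting back to amplitude, so the compressor's detector tracks combined energy. This is an assumed interpretation for illustration, not the claimed algorithm:

```python
import numpy as np

def power_sum_mix_in(non_lfe, lfe, proportion=0.5):
    """Mix an LFE sidechain into a non-LFE sidechain in the power domain:
    powers are summed (LFE scaled by `proportion`), then converted back
    to amplitude. Hypothetical sketch of the abstract's mix-in step."""
    return np.sqrt(non_lfe ** 2 + proportion * lfe ** 2)

# Equal-level inputs with proportion=1 combine to sqrt(2) x the level,
# as expected for a power (energy) sum rather than an amplitude sum
mixed = power_sum_mix_in(np.ones(4), np.ones(4), proportion=1.0)
```

A power sum avoids the over-triggering that a straight amplitude sum would cause when the LFE and non-LFE sidechains carry correlated content.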
Controlling audio processing
An apparatus comprising means for: receiving an indication of a form factor of a device; selecting an audio component in dependence on the received indication of the form factor of the device and in dependence on an assigned location of a graphical user interface object associated with the audio component, wherein the graphical user interface object is assigned to at least a first display area when the device is in a first form factor and is assigned to at least a second display area when the device is in a second form factor; controlling audio processing of audio associated with the selected audio component; and providing at least the processed audio for audio output by one or more loudspeakers.
PLAYBACK DEVICE CALIBRATION
A first subwoofer may be configured to output multimedia content in synchrony with at least one other playback device and a second subwoofer. The first subwoofer may, based on a received indication of an acoustic characteristic of the at least one other playback device, determine a crossover frequency between (i) the first subwoofer and the second subwoofer and (ii) the at least one other playback device. After determining the crossover frequency, the first subwoofer may output a first tone set and a second tone set in synchrony with the second subwoofer and the at least one other playback device, and after outputting the first tone set and the second tone set, receive, from a controller device, an indication of a selected one of the first tone set or the second tone set. Based on the selected tone set, the first subwoofer may adjust a phase setting of the first subwoofer.
METHOD AND APPARATUS FOR AUDIO PROCESSING
An apparatus and method of loudspeaker equalization. The method combines default tunings (generated based on a default listening environment) and room tunings (generated based on an end user listening environment) to result in combined tunings that account for differences between the end user listening environment and the default listening environment.
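Combining default tunings with room tunings can be as simple as summing per-band gains in decibels, so the room correction rides on top of the default correction. The dB-sum rule and band layout below are assumptions for illustration, not necessarily the claimed combination:

```python
import numpy as np

def combine_tunings(default_db, room_db):
    """Combine per-band default tunings (for the default listening
    environment) with per-band room tunings (for the end-user
    environment) by summing gains in dB; the dB-sum rule is an
    assumption of this sketch."""
    return np.asarray(default_db, dtype=float) + np.asarray(room_db, dtype=float)

# Default tuning boosts bass by 3 dB; this room needs 2 dB less bass and
# 2 dB more treble, so the combined tuning accounts for the difference
combined = combine_tunings([3.0, 0.0, -1.0], [-2.0, 0.0, 2.0])
```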
MULTI-CHANNEL DECOMPOSITION AND HARMONIC SYNTHESIS
In one example in accordance with the present disclosure, a system is described. The system includes a decompose device to decompose a multi-channel audio stream into at least a first portion and a second portion. A synthesis device of the system independently synthesizes harmonics in each of the first portion and the second portion using different harmonic models. An audio generator of the system combines synthesized harmonics from the first portion and the second portion with the multi-channel audio stream to generate a synthesized audio output.
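The decompose/synthesize/combine pipeline can be sketched as follows, assuming the stream is decomposed by channel index and the two "harmonic models" are a squarer (even-order harmonics) and a cuber (odd-order harmonics); both choices are illustrative stand-ins for the models in the disclosure:

```python
import numpy as np

def synthesize_harmonics(stream, split=1):
    """Decompose a multi-channel stream (channels x samples) into a first
    portion and a second portion, synthesize harmonics in each with a
    different nonlinear model, and mix the harmonics back into the
    original stream."""
    first, second = stream[:split], stream[split:]
    even = first ** 2     # squarer: even-order harmonics for the first portion
    odd = second ** 3     # cuber: odd-order harmonics for the second portion
    return stream + np.vstack([even, odd])

out = synthesize_harmonics(np.full((2, 4), 2.0))   # rows become 2+4=6 and 2+8=10
```

Using independent models per portion lets, for example, a center/dialog portion receive gentler harmonic enrichment than an effects portion.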
Audio cancellation for voice recognition
An audio cancellation system includes a voice enabled computing system that is connected to an audio output device using a wired or wireless communication network. The voice enabled computing system can provide media content to a user and receive a voice command from the user. The connection between the voice enabled computing system and the audio output device introduces a time delay between the media content being generated at the voice enabled computing system and the media content being reproduced at the audio output device. The system operates to determine a calibration value adapted for the voice enabled computing system and the audio output device. The system uses the calibration value to filter the user's voice command from a recording of ambient sound that includes the media content, without requiring significant use of memory and computing resources.
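The calibration value described above is essentially the playback delay. One standard way to measure such a delay (used here for illustration, not necessarily the claimed method) is cross-correlation between the generated media content and the microphone capture, after which the delay-aligned reference can be subtracted:

```python
import numpy as np

def estimate_delay(reference, recording):
    """Estimate the playback delay (in samples) between media content as
    generated and as captured, via cross-correlation. A standard
    calibration technique, assumed here for illustration."""
    corr = np.correlate(recording, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

def cancel_media(recording, reference, delay):
    """Subtract the delay-aligned reference from the recording, leaving
    the user's voice (simplified: unit gain, no room response)."""
    return recording - np.roll(reference, delay)

rng = np.random.default_rng(1)
ref = rng.standard_normal(256)
rec = np.roll(ref, 10)            # media arrives 10 samples late
delay = estimate_delay(ref, rec)  # expected calibration value: 10 samples
```

In practice the subtraction would be an adaptive filter rather than a unit-gain copy, but the calibration step fixes the alignment so that filter can stay small and cheap.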
Frequency Routing Based on Orientation
Systems, methods, and apparatus for frequency routing based on orientation are disclosed. An example method includes receiving, by a playback device, an audio data stream. The example method includes determining, by the playback device, an orientation of the playback device. The example method includes routing, by the playback device, a first set of frequencies in the audio data stream to at least one of a plurality of speaker drivers when the playback device is in a first orientation. The example method includes routing, by the playback device, a second set of frequencies in the audio data stream to the at least one of the plurality of speaker drivers when the playback device is in a second orientation, wherein the first set of frequencies is different from the second set of frequencies.
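The routing step can be sketched as a crossover whose band-to-driver assignment depends on orientation. The one-pole split, the cutoff constant, and the driver names below are all assumptions of this sketch:

```python
import numpy as np

def one_pole_split(x, alpha=0.1):
    """Split a signal into complementary low and high bands with a
    one-pole smoother (alpha sets the cutoff); low + high == x."""
    low = np.empty_like(x)
    acc = 0.0
    for n, sample in enumerate(x):
        acc += alpha * (sample - acc)
        low[n] = acc
    return low, x - low

def route(x, orientation):
    """Route frequency bands to drivers based on orientation. Assumed
    policy: vertical -> woofer gets lows, tweeter gets highs;
    horizontal -> both drivers get the full band."""
    low, high = one_pole_split(x)
    if orientation == "vertical":
        return {"woofer": low, "tweeter": high}
    return {"woofer": x, "tweeter": x}

rng = np.random.default_rng(2)
sig = rng.standard_normal(64)
bands = route(sig, "vertical")
```

Because the split is complementary, the two driver feeds always sum back to the original signal regardless of orientation.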
METHOD FOR BI-PHASIC SEPARATION AND RE-INTEGRATION ON MOBILE MEDIA DEVICES
Disclosed is a method of presenting audio information to a user. The method comprises receiving samples of a waveform from a media handling component and initializing a biquad filter with a set of one or more coefficients, corresponding to a set of one or more stages of the biquad filter, for both a real component of the samples and an imaginary component of the samples. The biquad filter is implemented on a media processing component of a mobile media device. The method further comprises applying the biquad filter to the samples of the waveform to generate an output for presentation to the user, the output comprising a processed rendering of the real component of the samples and the imaginary component of the samples.
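A single biquad stage (Direct Form I here; the structure is an assumption) applied separately to the real and imaginary components, per the method above; cascading the stage function would implement the "one or more stages". Coefficient normalization with a0 = 1 is also assumed:

```python
import numpy as np

def biquad(x, b, a):
    """One Direct Form I biquad stage (a0 normalized to 1):
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    b0, b1, b2 = b
    _, a1, a2 = a
    y = np.zeros(len(x))
    x1 = x2 = y1 = y2 = 0.0
    for n, x0 in enumerate(x):
        y0 = b0 * x0 + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        y[n] = y0
        x2, x1 = x1, x0          # shift the input delay line
        y2, y1 = y1, y0          # shift the output delay line
    return y

def biquad_complex(samples, b, a):
    """Apply the stage to the real and imaginary components independently,
    as the method describes, then recombine."""
    samples = np.asarray(samples)
    return biquad(samples.real, b, a) + 1j * biquad(samples.imag, b, a)
```

With pass-through coefficients b = (1, 0, 0), a = (1, 0, 0), the stage is an identity, which makes a convenient sanity check before loading real filter designs.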