Patent classifications
H04S7/307
Playback Device Configuration
Examples described herein involve configuring a playback device based on distortion, such as that caused by a barrier. One implementation may involve causing the playback device to play audio content according to an existing playback configuration, determining an existing frequency response of the playback device in a given system, and determining whether the difference between that existing frequency response and a predetermined frequency response for the playback device exceeds a predetermined distortion threshold. If the difference exceeds the threshold, the existing playback configuration is changed to an updated playback configuration, and the playback device plays audio content according to the updated configuration.
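The threshold comparison described in this abstract can be sketched as follows. The abstract does not specify how the difference between responses is computed, so the mean-absolute-deviation metric, the dB units, and all names here are illustrative assumptions:

```python
def needs_reconfiguration(measured_db, reference_db, threshold_db):
    """Return True when the measured frequency response deviates from the
    predetermined (reference) response by more than the distortion threshold.
    Hypothetical metric: mean absolute deviation in dB across measured bins."""
    deviations = [abs(m - r) for m, r in zip(measured_db, reference_db)]
    return sum(deviations) / len(deviations) > threshold_db

# Flat reference vs. a response with a mid-band dip (e.g., caused by a barrier).
reference = [0.0] * 8
measured = [0.0, -1.0, -6.0, -9.0, -7.0, -2.0, 0.0, 0.0]

if needs_reconfiguration(measured, reference, threshold_db=2.0):
    playback_config = "updated"   # switch to the updated playback configuration
else:
    playback_config = "existing"  # keep the existing playback configuration
```

A real implementation would derive the measured response from microphone captures during playback; here the responses are given directly to keep the comparison step visible.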
CALL ENVIRONMENT GENERATION METHOD, CALL ENVIRONMENT GENERATION APPARATUS, AND PROGRAM
Provided is a technique to generate a call environment that prevents call contents from being heard by anyone other than the person speaking on the phone when call voice is output from a speaker. Speakers installed in an automobile are denoted SP_1, …, SP_N; a first filter coefficient used to generate an input signal for a speaker SP_n is denoted F_n(ω); and a second filter coefficient, different from the first, used to generate an input signal for the speaker SP_n is denoted F̃_n(ω). A call environment generation method includes: an acoustic signal generation step of generating, upon detecting a start signal of a call, a call-time acoustic signal obtained by adjusting the volume of an acoustic signal to be reproduced during the call using a predetermined volume value; a first local signal generation step of generating a sound signal S_n as an input signal for the speaker SP_n from a voice signal of the call by using the first filter coefficient F_n(ω); and a second local signal generation step of generating an acoustic signal A_n as an input signal for the speaker SP_n from the call-time acoustic signal by using the second filter coefficient F̃_n(ω).
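The two filtering steps can be sketched as frequency-domain filtering per speaker: each speaker input combines the voice signal shaped by F_n(ω) and the call-time acoustic (masking) signal shaped by F̃_n(ω). The per-bin multiplication and all names are illustrative assumptions; the abstract does not specify the filter implementation:

```python
def apply_filter(coeffs, spectrum):
    """Frequency-domain filtering: multiply each spectral bin of a signal
    by the corresponding filter coefficient."""
    return [c * x for c, x in zip(coeffs, spectrum)]

def speaker_inputs(voice_spec, masking_spec, F, F_tilde):
    """For each speaker SP_n, build its input as the sum of the filtered
    voice signal S_n = F_n * voice and the filtered call-time acoustic
    signal A_n = F~_n * masking."""
    inputs = []
    for F_n, Ft_n in zip(F, F_tilde):
        S_n = apply_filter(F_n, voice_spec)     # first local signal generation
        A_n = apply_filter(Ft_n, masking_spec)  # second local signal generation
        inputs.append([s + a for s, a in zip(S_n, A_n)])
    return inputs
```

In the patented method the two coefficient sets would be designed so that the voice is intelligible only near the caller while the masking signal covers it elsewhere; this sketch only shows the signal-assembly step.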
MODIFYING AUDIO DATA TRANSMITTED TO A RECEIVING DEVICE TO ACCOUNT FOR ACOUSTIC PARAMETERS OF A USER OF THE RECEIVING DEVICE
A communication system provides audio content to one or more client devices capable of playing spatialized audio content. For example, the communication system receives audio content from a client device and transmits the audio content to other client devices to be played for users. The communication system dynamically modifies audio content transmitted to different client devices based on a payload including audio parameters (e.g., local area acoustic properties, an audiogram for a user, a head related transfer function for a user, etc.) received from a client device.
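One way audio parameters such as an audiogram could drive the modification is a per-band gain derived from the user's hearing thresholds. The half-gain heuristic and the cap below are assumptions borrowed from common hearing-aid fitting practice, not details stated in the abstract:

```python
def audiogram_gains(audiogram_db, max_boost_db=20.0):
    """Derive a per-band boost (dB) from a user's audiogram (hearing-loss
    thresholds in dB HL per band). Half-gain rule, capped at a maximum
    boost; both are assumed example policies."""
    return [min(loss / 2.0, max_boost_db) for loss in audiogram_db]

def apply_band_gains(band_levels_db, gains_db):
    """Shift each band level of the transmitted audio by its derived gain,
    as the communication system would before sending to the client device."""
    return [lvl + g for lvl, g in zip(band_levels_db, gains_db)]
```

The same structure would accommodate the other payload parameters (local-area acoustics, HRTF) as additional per-band or per-direction transforms applied server-side.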
DYNAMIC AUDIO EQUALIZATION
Methods and systems for performing automatic speed-based audio control. One method includes receiving, with an electronic control unit included in a vehicle, a speed of the vehicle and receiving, with the electronic control unit, an audio signal. The method also includes accessing, with the electronic control unit, a plurality of equalization curves based on the speed of the vehicle, each of the plurality of equalization curves associated with the speed of the vehicle and each of the plurality of equalization curves defining a gain adjustment for one of a plurality of frequencies, and, for each curve of the plurality of equalization curves, applying the gain adjustment defined by the curve to one of the plurality of frequencies of the audio signal.
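The speed-to-curves lookup and per-frequency gain application can be sketched as below. The table values, the nearest-entry selection, and the km/h units are illustrative assumptions; a production unit would likely interpolate between calibrated curves:

```python
# Hypothetical table: vehicle speed (km/h) -> equalization curves, each
# curve entry being a (frequency Hz, gain dB) pair.
EQ_TABLE = {
    0:   [(60, 0.0), (1000, 0.0), (8000, 0.0)],
    60:  [(60, 3.0), (1000, 0.0), (8000, 1.0)],
    120: [(60, 6.0), (1000, 1.0), (8000, 3.0)],
}

def curves_for_speed(speed_kmh):
    """Access the plurality of equalization curves associated with the
    current speed (nearest table entry in this sketch)."""
    return EQ_TABLE[min(EQ_TABLE, key=lambda s: abs(s - speed_kmh))]

def apply_curves(band_gains_db, curves):
    """Apply each curve's gain adjustment to the matching frequency band
    of the audio signal's per-band gain map."""
    out = dict(band_gains_db)
    for freq, gain in curves:
        out[freq] = out.get(freq, 0.0) + gain
    return out
```

At highway speed this boosts the low band most, which matches the usual motivation: road and wind noise mask bass first.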
Systems and methods of adjusting bass levels of multi-channel audio signals
Systems and methods for adjusting bass levels of a multi-channel audio signal include, among other features, (i) receiving the multi-channel signal via a playback device; (ii) separating, from the multi-channel signal, low-frequency signals comprising frequencies less than a threshold frequency; (iii) determining electrical energies of the low-frequency signals; (iv) determining a first energy by summing the electrical energies of the low-frequency signals; (v) consolidating the low-frequency signals into a consolidated low-frequency signal; (vi) determining a second energy by determining an electrical energy of the consolidated low-frequency signal; (vii) generating a gain-adjusted low-frequency signal by adjusting a gain of the consolidated low-frequency signal based on both (a) the first energy and (b) the second energy; (viii) generating a gain-adjusted multi-channel signal by mixing the gain-adjusted low-frequency signal back into the multi-channel signal; and (ix) using the gain-adjusted multi-channel signal to play back gain-adjusted multi-channel audio content via the playback device.
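Steps (iii) through (vii) of the abstract amount to an energy-matching gain: measure the combined energy of the separated low-frequency channels, consolidate them, and scale the consolidated signal so no bass energy is lost (or gained) when channels partially cancel or reinforce. This sketch assumes the lowpass separation of step (ii) has already happened:

```python
def energy(signal):
    """Electrical energy of a discrete signal: sum of squared samples."""
    return sum(s * s for s in signal)

def consolidate_bass(low_freq_channels):
    """Sum per-channel energies (first energy), mix the channels into one
    consolidated signal, measure its energy (second energy), then scale so
    the consolidated signal carries the same total energy as the originals."""
    first_energy = sum(energy(ch) for ch in low_freq_channels)
    consolidated = [sum(samples) for samples in zip(*low_freq_channels)]
    second_energy = energy(consolidated)
    gain = (first_energy / second_energy) ** 0.5 if second_energy else 0.0
    return [gain * s for s in consolidated]
```

When the channels are in phase the sum overshoots and the gain attenuates; when they are out of phase the sum undershoots and the gain compensates, which is why the abstract bases the gain on both energies.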
System and method for data augmentation for multi-microphone signal processing
A method, computer program product, and computing system for receiving a signal from each microphone of a plurality of microphones, thus defining a plurality of signals. One or more inter-microphone gain-based augmentations may be performed on the plurality of signals, thus defining one or more inter-microphone gain-augmented signals.
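An inter-microphone gain-based augmentation can be sketched as applying an independent random gain to each microphone's signal, simulating mismatched microphone sensitivities in training data. The ±6 dB range and the uniform distribution are assumed hyperparameters, not values from the abstract:

```python
import random

def gain_augment(mic_signals, max_gain_db=6.0, rng=None):
    """Apply an independent random gain (drawn uniformly in dB) to each
    microphone's signal, defining the gain-augmented signals."""
    rng = rng or random.Random(0)  # seeded for reproducibility in this sketch
    augmented = []
    for signal in mic_signals:
        gain_db = rng.uniform(-max_gain_db, max_gain_db)
        gain = 10.0 ** (gain_db / 20.0)  # convert dB to linear amplitude
        augmented.append([gain * s for s in signal])
    return augmented
```

Because each microphone gets one scalar gain, relative sample values within a channel are preserved while inter-channel level differences are perturbed, which is the point of the augmentation.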
AUTOMATIC LOUDSPEAKER ROOM EQUALIZATION BASED ON SOUND FIELD ESTIMATION WITH ARTIFICIAL INTELLIGENCE MODELS
One embodiment provides a computer-implemented method that includes acquiring, via at least one microphone, sound pressure data from a loudspeaker in a room. The sound pressure data is input into an artificial intelligence (AI) model. The AI model automatically estimates, without user interaction, at least one of energy average (EA) in a listening area or total sound power (TSP) produced by the loudspeaker. The AI model is trained prior to automatically estimating the at least one of the EA in the listening area or the TSP produced by the loudspeaker.
Emphasis for audio spatialization
Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a first input audio signal is received. The first input audio signal is processed to generate a first output audio signal. The first output audio signal is presented via one or more speakers associated with the wearable head device. Processing the first input audio signal comprises applying a pre-emphasis filter to the first input audio signal; adjusting a gain of the first input audio signal; and applying a de-emphasis filter to the first input audio signal. Applying the pre-emphasis filter to the first input audio signal comprises attenuating a low frequency component of the first input audio signal. Applying the de-emphasis filter to the first input audio signal comprises attenuating a high frequency component of the first input audio signal.
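The pre-emphasis/gain/de-emphasis chain can be sketched with a matched pair of first-order filters: a differencing filter that attenuates lows, and its one-pole inverse that attenuates highs. The filter order and the 0.95 coefficient are assumed example values; the abstract does not specify the filter design:

```python
def pre_emphasis(x, a=0.95):
    """First-order pre-emphasis, y[n] = x[n] - a*x[n-1]: attenuates the
    low-frequency component of the input."""
    y, prev = [], 0.0
    for s in x:
        y.append(s - a * prev)
        prev = s
    return y

def de_emphasis(x, a=0.95):
    """Matching one-pole de-emphasis, y[n] = x[n] + a*y[n-1]: attenuates
    the high-frequency component, undoing the pre-emphasis tilt."""
    y, prev = [], 0.0
    for s in x:
        prev = s + a * prev
        y.append(prev)
    return y

def process(x, gain=0.5, a=0.95):
    """Pre-emphasize, adjust gain, then de-emphasize, as in the abstract."""
    return de_emphasis([gain * s for s in pre_emphasis(x, a)], a)
```

With this matched pair, de-emphasis exactly inverts pre-emphasis, so the chain at unity gain passes the signal through unchanged; the gain stage in between then operates on the spectrally tilted signal.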
Systems and methods for providing augmented audio
A system for providing augmented audio to users in a vehicle, comprising: a plurality of speakers disposed in a perimeter of a cabin of the vehicle; and a controller configured to receive a first audio signal and a second audio signal, to drive the plurality of speakers in accordance with a first array configuration such that a first bass content of the first audio signal is produced in a first listening zone within the vehicle cabin, and to drive the plurality of speakers in accordance with a second array configuration such that a second bass content of the second audio signal is produced in a second listening zone within the vehicle cabin, wherein in the first listening zone a magnitude of the first bass content is greater than a magnitude of the second bass content and in the second listening zone the magnitude of the second bass content is greater than the magnitude of the first bass content.
ARRAY AUGMENTATION FOR AUDIO PLAYBACK DEVICES
Systems and methods for providing augmented arrays for audio playback are disclosed. An example playback device includes a first transducer configured to output audio along a first acoustic axis and a second transducer configured to output audio along a second acoustic axis. The playback device is configured to receive a source stream of audio content including at least a first input channel and a second input channel. The device plays back first audio output via the first transducer based on the first input channel and directed along the first acoustic axis, and plays back second audio output via the second transducer based on the second input channel and directed along the second acoustic axis, wherein the second audio output at least partially cancels the first audio output along a first spatial region offset from the first acoustic axis.
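The partial cancellation along the offset spatial region can be illustrated with a single-frequency superposition model: at a listening point, the two transducer contributions add coherently, and the relative phase determines reinforcement or cancellation. The amplitudes and phases below are illustrative assumptions, not device parameters:

```python
import cmath

def combined_magnitude(a1, a2, phase2_rad):
    """Magnitude of two coherent transducer contributions arriving at a
    listening point, the second with a relative phase (single-frequency
    model; a real device shapes this per frequency and direction)."""
    return abs(a1 + a2 * cmath.exp(1j * phase2_rad))

# Along the first acoustic axis the second output arrives roughly in phase
# and reinforces; in the offset region it arrives roughly out of phase and
# partially cancels the first output:
on_axis = combined_magnitude(1.0, 0.6, 0.0)        # 1.0 + 0.6 = 1.6
off_axis = combined_magnitude(1.0, 0.6, cmath.pi)  # 1.0 - 0.6 = 0.4
```

Driving the second transducer from the second input channel with appropriate delay and inversion is what lets the array steer where each channel's output is reduced.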