Patent classifications
H03G5/16
Passive sub-audible room path learning with noise modeling
Frequency domain compensation is provided for spectral impairment resulting from the audio path characteristics of a given audio device in a given listening space. Selected segments of an audio stream are recorded at a listener position to measure degradation in the audio path and to update compensation filter characteristics of the audio device. Recorded transmitted and received audio sequences are time-aligned and compared in the frequency domain. The difference between the aligned transmitted and received sequences represents the frequency domain degradation along the acoustic path due to the speaker, the physical attributes of the room, and noise. A dynamically updated noise model is determined for adjusting the compensation filter characteristics of the audio device, which can be updated during use of the device. A compensation curve is derived which can adapt the equalization of the audio device passively during normal usage.
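The core comparison described above can be sketched as follows: take magnitude spectra of the aligned transmitted and received segments, express their ratio in dB as the path degradation, and invert it to get a compensation curve. This is a minimal illustration of the idea, not the patented implementation; the function name and the toy uniform-attenuation check are assumptions.

```python
import numpy as np

def compensation_curve(transmitted, received, sample_rate):
    """Estimate a frequency-domain compensation curve (dB) from
    time-aligned transmitted and received audio segments."""
    eps = 1e-12                                     # avoid log of zero
    tx = np.abs(np.fft.rfft(transmitted))           # transmitted magnitude spectrum
    rx = np.abs(np.fft.rfft(received))              # received magnitude spectrum
    degradation_db = 20.0 * np.log10((rx + eps) / (tx + eps))
    freqs = np.fft.rfftfreq(len(transmitted), d=1.0 / sample_rate)
    return freqs, -degradation_db                   # compensation inverts the loss

# Toy check: an acoustic path that attenuates uniformly by ~6 dB
rng = np.random.default_rng(0)
tx_seg = rng.standard_normal(1024)
freqs, comp_db = compensation_curve(tx_seg, 0.5 * tx_seg, 48000)
```

With the received signal at half the transmitted amplitude, the derived curve is a flat +6.02 dB boost across all bins.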
TRANSFORMING AUDIO CONTENT FOR SUBJECTIVE FIDELITY
A method or apparatus for delivering audio programming such as music to listeners may include identifying, capturing and applying a listener's audiometric profile to transform audio content so that the listener hears the content similarly to how the content was originally heard by a creative producer of the content. An audio testing tool may be implemented as a software application to identify and capture the listener's audiometric profile. A signal processor may operate an algorithm for processing source audio content, obtaining the identity and audiometric reference profile of the creative producer from metadata associated with the content. The signal processor may then provide audio output based on a difference between the listener's and creative producer's audiometric profiles.
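The "difference between profiles" step can be illustrated as a per-band subtraction: where the listener's hearing threshold is higher than the producer's reference, the content is boosted by the difference. The band keys, dB-HL threshold values, and function name below are illustrative assumptions, not from the patent.

```python
def profile_correction_db(listener, producer):
    """Per-band correction (dB): extra gain the listener needs in each
    band relative to the creative producer's reference hearing."""
    return {band: listener[band] - producer[band] for band in listener}

# Listener's threshold at 4 kHz is 10 dB worse than the producer's,
# so content would be boosted 10 dB in that band before playback.
correction = profile_correction_db({"1k": 5, "4k": 30}, {"1k": 5, "4k": 20})
```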
SENSOR ARRANGEMENT HAVING AN OPTIMIZED GROUP DELAY AND SIGNAL PROCESSING METHOD
In various embodiments, a circuit arrangement is provided. The circuit arrangement includes a sensor set up to provide an analogue signal, an analogue/digital converter set up to receive the analogue signal and to provide a first signal, and a first filter set up to receive a signal based on the first signal and to provide a second signal. The first filter is set up such that the second signal passes without amplification, or substantially without amplification, in a frequency range of approximately 20 Hz to approximately 10 kHz, and has a gain greater than 0 dB at least above a predefined frequency which is greater than approximately 20 kHz.
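A filter meeting that specification behaves like a first-order high-shelf boost: near-unity gain through the audio band and gain above 0 dB well past 20 kHz. The sketch below evaluates the analytic magnitude response of such a shelf, H(s) = (1 + s/ω_zero)/(1 + s/ω_pole) with ω_zero < ω_pole; the corner frequencies are assumed values for illustration, not taken from the patent.

```python
import math

def shelf_gain_db(f_hz, f_zero=40e3, f_pole=80e3):
    """Magnitude response (dB) of a first-order high-shelf boost.
    |H(j2*pi*f)|^2 = (1 + (f/f_zero)^2) / (1 + (f/f_pole)^2)."""
    mag_sq = (1.0 + (f_hz / f_zero) ** 2) / (1.0 + (f_hz / f_pole) ** 2)
    return 10.0 * math.log10(mag_sq)
```

With these corners, the gain is within ~0.2 dB of unity at 10 kHz, positive above 20 kHz, and levels off at 20·log10(f_pole/f_zero) ≈ 6 dB at very high frequencies.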
Customized automated audio tuning
An example method of operation may include identifying, in a particular room environment, a number of speakers and one or more microphones on a network controlled by a controller and amplifier, providing test signals to play sequentially from each amplifier channel of the amplifier and the speakers, monitoring the test signals from the one or more microphones simultaneously to detect operational speakers and amplifier channels, providing additional test signals to the speakers to determine tuning parameters, detecting the additional test signals at the one or more microphones controlled by the controller, and automatically establishing a background noise level and noise spectrum of the room environment based on the detected additional test signals.
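The final step above, establishing a background noise level and noise spectrum from microphone captures, can be sketched as frame-averaged spectral analysis plus an overall RMS level. The frame size, windowing, and reference level below are assumptions for illustration, not details from the patent.

```python
import numpy as np

def noise_level_and_spectrum(mic, sample_rate, frame=1024):
    """Background noise level (dB re full scale, RMS) and an averaged
    magnitude spectrum from microphone samples captured in the room."""
    n = len(mic) // frame
    frames = mic[: n * frame].reshape(n, frame) * np.hanning(frame)
    spectrum = np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)  # avg over frames
    level_db = 20.0 * np.log10(np.sqrt(np.mean(mic ** 2)) + 1e-12)
    return level_db, np.fft.rfftfreq(frame, d=1.0 / sample_rate), spectrum

rng = np.random.default_rng(1)
room = 0.1 * rng.standard_normal(48000)             # one second of "room tone"
level_db, freqs, spectrum = noise_level_and_spectrum(room, 48000)
```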
Volume leveler controller and controlling method
A volume leveler controller and controlling method are disclosed. In one embodiment, a volume leveler controller includes an audio content classifier for identifying the content type of an audio signal in real time; and an adjusting unit for adjusting a volume leveler in a continuous manner based on the content type as identified. The adjusting unit may be configured to positively correlate the dynamic gain of the volume leveler with informative content types of the audio signal, and negatively correlate the dynamic gain of the volume leveler with interfering content types of the audio signal.
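The positive/negative correlation of the dynamic gain with content-type confidences can be sketched as a simple continuous adjustment rule. The linear form and the `depth` tuning constant are assumptions for illustration; the patent only specifies the direction of correlation, not a formula.

```python
def leveler_gain(base_gain, informative_conf, interfering_conf, depth=0.5):
    """Continuous dynamic-gain adjustment: scale up with the classifier's
    confidence in informative content (e.g. speech, music) and down with
    its confidence in interfering content (e.g. background noise)."""
    return base_gain * (1.0 + depth * (informative_conf - interfering_conf))
```

Fully informative content raises the gain, fully interfering content lowers it, and equal confidences leave it unchanged.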
Method and apparatus for playing audio, and computer-readable storage medium
The present application relates to the field of audio technology, and provides a method, a device, and an apparatus for playing audio, and a computer-readable storage medium. The method for playing audio includes: obtaining an ambient atmospheric pressure value and audio data to be played; obtaining multiple target frequency points contained in the audio data to be played when the ambient atmospheric pressure value meets a preset condition, and determining equal-loudness multiples corresponding to the target frequency points according to the ambient atmospheric pressure value and a preset calibration atmospheric pressure value; and sending the audio data to be played and the equal-loudness multiples of the target frequency points to a power amplifying module, such that the power amplifying module amplifies the audio data to be played according to the equal-loudness multiples corresponding to the target frequency points.
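The pressure-to-loudness step can be illustrated as a mapping from the ratio of the calibration pressure to the ambient pressure to per-frequency amplification multiples. The patent abstract does not publish the mapping, so the linear pressure-ratio model, the `sensitivity` constant, and the function name below are all illustrative assumptions.

```python
def equal_loudness_multiples(freqs_hz, ambient_hpa, calibration_hpa=1013.25,
                             sensitivity=0.5):
    """Per-frequency amplification multiples versus ambient atmospheric
    pressure, relative to a preset calibration pressure (sea level)."""
    ratio = calibration_hpa / ambient_hpa           # thinner air -> larger ratio
    return {f: 1.0 + sensitivity * (ratio - 1.0) for f in freqs_hz}
```

At the calibration pressure all multiples are 1.0 (no change); at half that pressure, playback at each target frequency point is boosted.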
MULTICHANNEL AUDIO ENHANCEMENT, DECODING, AND RENDERING IN RESPONSE TO FEEDBACK
In some embodiments, a method performs at least one of enhancement, decoding, or rendering of a multichannel audio signal in response to compression feedback or feedback from a smart amplifier. For example, the compression feedback may be indicative of an amount of compression applied to each of multiple frequency bands of the audio signal or of an enhanced audio signal generated in response thereto. The enhancement (e.g., bass enhancement) may include dynamic routing of audio content of the input audio signal between channels of an enhanced audio signal generated in response thereto. The enhancement and compression may be performed on a per speaker class basis. Other aspects are systems (e.g., programmed processors) and devices (e.g., devices having physically-limited bass reproduction capabilities, such as, for example, a notebook or laptop computer, tablet, soundbar, mobile phone, or other device with small speakers) configured to perform any embodiment of the method.
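One way to picture the feedback-driven dynamic routing is: when the smart amplifier reports heavy compression on a channel's low band, move that low-band content to the channel reporting the least compression. This is a deliberately simplified sketch; the channel model, feedback units, and routing rule are assumptions, not the patented method.

```python
def route_low_band(low_band_levels, compression_db):
    """Dynamic routing sketch: sum the low-band content of all channels
    into the channel whose feedback reports the least compression."""
    target = min(range(len(compression_db)), key=compression_db.__getitem__)
    routed = [0.0] * len(low_band_levels)
    routed[target] = sum(low_band_levels)           # consolidate bass there
    return routed
```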
Systems and methods for equalizing audio for playback on an electronic device
Embodiments are provided for receiving a request to output audio at a first speaker and a second speaker of an electronic device, determining that the electronic device is oriented in a portrait orientation or a landscape orientation, identifying, based on the determined orientation, a first equalization setting for the first speaker and a second equalization setting for the second speaker, providing, for output at the first speaker, a first audio signal with the first equalization setting, and providing, for output at the second speaker, a second audio signal with the second equalization setting.
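At its core this is a lookup from the detected orientation to a pair of per-speaker equalization settings, which can be sketched as below. The setting identifiers are placeholders, not values from the patent.

```python
def select_eq(orientation):
    """Map the device's detected orientation to equalization settings
    for its first and second speakers."""
    table = {
        "portrait":  {"first_speaker": "eq_portrait_a",
                      "second_speaker": "eq_portrait_b"},
        "landscape": {"first_speaker": "eq_landscape_a",
                      "second_speaker": "eq_landscape_b"},
    }
    return table[orientation]
```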
Validation of audio calibration using multi-dimensional motion check
Examples described herein involve validating motion of a microphone during calibration of a playback device. An example implementation involves a mobile device detecting, via one or more microphones, audio signals emitted from one or more playback devices as part of a calibration process. After the one or more playback devices emit the audio signals, the mobile device determines whether the detected audio signals indicate that sufficient horizontal translation of the mobile device occurred during the calibration process. When the detected audio signals indicate that insufficient horizontal translation occurred, the mobile device displays a prompt to move the mobile device more while the one or more playback devices emit one or more additional audio signals as part of the calibration process. When the detected audio signals indicate that sufficient horizontal translation occurred, the mobile device calibrates the one or more playback devices with a calibration based on the detected audio signals.
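The sufficiency decision can be sketched as a geometric check: if the microphone positions inferred from the detected audio span too small a horizontal distance, prompt the user to keep moving; otherwise proceed to calibration. How positions are inferred from the audio, and the 1 m threshold, are assumptions for illustration.

```python
def sufficient_translation(positions, min_span_m=1.0):
    """Return True when the (x, y) microphone positions inferred during
    the calibration process span enough horizontal distance."""
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    span = ((max(xs) - min(xs)) ** 2 + (max(ys) - min(ys)) ** 2) ** 0.5
    return span >= min_span_m
```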