H04R2227/007

PLAYBACK DEVICE CALIBRATION
20220360928 · 2022-11-10 ·

Systems and methods for calibrating a playback device include (i) outputting first audio content; (ii) capturing audio data representing reflections of the first audio content within a room in which the playback device is located; (iii) based on the captured audio data, determining an acoustic response of the room; (iv) connecting to a database comprising a plurality of sets of stored audio calibration settings, each set associated with a respective stored acoustic room response of a plurality of stored acoustic room responses; (v) querying the database for a stored acoustic room response that corresponds to the determined acoustic response of the room in which the playback device is located; and (vi) applying to the playback device a particular set of stored audio calibration settings associated with the stored acoustic room response that corresponds to the determined acoustic response of the room in which the playback device is located.
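The database-matching step described above can be sketched as a nearest-neighbor lookup over stored room responses. Everything below is a minimal illustration, not the patented implementation: the database contents, the band representation, and the settings names are all hypothetical.

```python
import math

# Hypothetical database: each entry pairs a stored acoustic room response
# (magnitude per frequency band) with calibration settings tuned for it.
CALIBRATION_DB = [
    {"response": [0.9, 1.1, 1.0, 0.8], "settings": {"bass": -2, "treble": 1}},
    {"response": [1.4, 1.2, 0.7, 0.5], "settings": {"bass": -4, "treble": 3}},
]

def query_calibration(measured_response):
    """Return the stored settings whose room response is closest
    (Euclidean distance) to the measured acoustic response."""
    def distance(entry):
        return math.dist(entry["response"], measured_response)
    return min(CALIBRATION_DB, key=distance)["settings"]

print(query_calibration([1.3, 1.2, 0.8, 0.6]))  # closest to the second entry
```

A real system would use a richer response representation and a perceptually weighted distance, but the query shape is the same.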

Audio processing device, system, use and method in which one of a plurality of coding schemes for distributing pulses to an electrode array is selected based on characteristics of incoming sound

The invention relates to a hearing aid, in particular a cochlear implant, comprising a) at least one input transducer for capturing incoming sound and for generating electric audio signals which represent frequency bands of the incoming sound, b) a sound processor which is configured to analyze and to process the electric audio signals, c) a transmitter that sends the processed electric audio signals, d) a receiver/stimulator, which receives the processed electric audio signals from the transmitter and converts the processed electric audio signals into electric pulses, e) an electrode array embedded in the cochlea comprising a number of electrodes for stimulating the cochlear nerve with said electric pulses, and f) a control unit configured to control the distribution of said electric pulses to the number of said electrodes. The control unit is configured to distribute said electric pulses to the number of said electrodes by applying one out of a plurality of different coding schemes, wherein the applied coding scheme is selected according to characteristics of the incoming sound.
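The selection logic in the last sentence could be sketched as a rule over simple sound features. The features, thresholds, and scheme names below are all illustrative assumptions; the patent does not name specific schemes.

```python
def select_coding_scheme(band_energies, zero_crossing_rate):
    """Pick one of several pulse-distribution coding schemes from simple
    characteristics of the incoming sound (all thresholds illustrative)."""
    total = sum(band_energies)
    if total < 0.01:                      # near silence: low-rate scheme
        return "low-rate"
    if zero_crossing_rate > 0.3:          # noise-like / fricative content
        return "high-rate-wide"
    low = sum(band_energies[: len(band_energies) // 2])
    if low / total > 0.6:                 # energy concentrated in low bands
        return "fine-structure"           # e.g. voiced speech
    return "envelope"                     # default envelope-based scheme
```

For example, a frame dominated by low-band energy with a low zero-crossing rate would select the "fine-structure" scheme.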

Determination of Room Reverberation for Signal Enhancement

A hearing prosthesis arrangement is described for a hearing assisted patient. A microphone senses an acoustic environment around the hearing assisted patient and generates a corresponding microphone output signal. An audio signal processor processes the microphone output signal and produces a corresponding prosthesis stimulation signal to the patient for audio perception. The audio signal processor includes a dereverberation process that measures a dedicated reverberation reference signal produced in the acoustic environment to determine reverberation characteristics of the acoustic environment, and reduces reverberation effects in the hearing prosthesis stimulation signal based on the reverberation characteristics.
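One way to derive a reverberation characteristic from a dedicated reference signal is to fit the decay slope of its energy envelope and extrapolate to -60 dB (the classic RT60 measure). The sketch below assumes a linear decay in dB and illustrative parameter names; it is not the patent's specific method.

```python
def estimate_rt60(envelope_db, frame_rate_hz):
    """Estimate reverberation time (RT60) from the decaying energy envelope
    (in dB) of a reference signal: least-squares fit of the decay slope,
    then extrapolate to -60 dB. Simplistic linear fit; illustrative only."""
    n = len(envelope_db)
    xs = [i / frame_rate_hz for i in range(n)]
    mean_x = sum(xs) / n
    mean_y = sum(envelope_db) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, envelope_db)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return -60.0 / slope  # seconds to decay by 60 dB

# A linear -20 dB/s decay sampled at 10 Hz gives RT60 = 3 s.
print(estimate_rt60([0, -2, -4, -6, -8], 10))  # → 3.0
```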

METHODS FOR HEARING-ASSIST SYSTEMS IN VARIOUS VENUES
20170339496 · 2017-11-23 ·

A hearing-assist system for use in a venue in which ambient sounds contain dialogue as well as other components. The system comprises circuitry inserted in a signal path between a program source feed of a program occurring in the venue and a hearing-assist unit worn by a user in that venue, which reduces psychoacoustic conflict and interference between sound that is ambient in the venue and sound heard by the user via the hearing-assist unit.

Crowd sourced audio data for venue equalization

Mobile devices may capture audio signals indicative of test audio received by an audio capture device of the mobile device, and send the captured audio and a designation of the zone of the venue in which it was captured to a sound processor to determine equalization settings for speakers of that zone. An audio filtering device may receive the captured audio signals from the mobile devices; compare each of the captured audio signals with the test signal to determine an associated reliability of each of the captured audio signals; combine the captured audio signals into zone audio data; and transmit the zone audio data and associated reliability to a sound processor configured to determine equalization settings for the zone based on the captured audio signals and the test signal.
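The reliability-weighted combination step could be sketched as follows, using normalized correlation against the known test signal as the reliability score. The function name and the choice of correlation as the metric are assumptions for illustration.

```python
def combine_zone_audio(captured, test_signal):
    """Weight each mobile capture by its similarity to the known test signal
    (normalized correlation as the reliability score), then combine the
    captures into a single zone estimate by weighted average."""
    def reliability(sig):
        dot = sum(a * b for a, b in zip(sig, test_signal))
        norm = (sum(a * a for a in sig) * sum(b * b for b in test_signal)) ** 0.5
        return max(0.0, dot / norm) if norm else 0.0
    weights = [reliability(sig) for sig in captured]
    total = sum(weights) or 1.0
    zone = [sum(w * sig[i] for w, sig in zip(weights, captured)) / total
            for i in range(len(test_signal))]
    return zone, weights
```

A capture that does not resemble the test signal gets weight near zero, so an unreliable phone (e.g. one in a pocket) contributes little to the zone audio data.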

Spatial audio correction
09794710 · 2017-10-17 ·

Example techniques may involve performing aspects of a spatial calibration. An example implementation may include detecting a trigger condition that initiates calibration of a media playback system including multiple audio drivers that form multiple sound axes, each sound axis corresponding to a respective channel of multi-channel audio content. The implementation may also include causing the multiple audio drivers to emit calibration audio that is divided into constituent frames, the multiple sound axes emitting calibration audio during respective slots of each constituent frame. The implementation may further include recording the emitted calibration audio. The implementation may additionally include causing delays for each sound axis of the multiple sound axes to be determined, the determined delay for each sound axis being based on the slots of recorded calibration audio corresponding to that sound axis, and causing the multiple sound axes to be calibrated.
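The per-axis delay determination can be illustrated as a cross-correlation search: find the lag at which the recording best matches the slot emitted by one sound axis. This is a generic sketch of that idea, not the patented procedure, and the signal values are toy data.

```python
def axis_delay(recorded, reference):
    """Find the lag (in samples) at which the recorded calibration audio
    best matches the reference slot emitted by one sound axis, by
    maximizing cross-correlation over candidate lags."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(recorded) - len(reference) + 1):
        score = sum(r * recorded[lag + i] for i, r in enumerate(reference))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Reference slot [1, 2, 1] appears delayed by 3 samples in the recording:
print(axis_delay([0, 0, 0, 1, 2, 1, 0], [1, 2, 1]))  # → 3
```

Repeating this per slot yields a delay per sound axis, which the system can then compensate when calibrating the axes.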

Techniques for using computer vision to alter operation of speaker(s) and/or microphone(s) of device

In one aspect, a first device includes at least one processor and storage accessible to the at least one processor. The storage includes instructions that may be executable by the processor to receive input from a camera and identify a second device based on the input from the camera. The second device may include at least one speaker and at least one microphone. The instructions may also be executable to identify a current location of the second device within an environment based on the input from the camera and to identify a current location of an object within the environment that is different from the second device. The instructions may then be executable to provide a command to alter operation of the at least one speaker and/or the at least one microphone based on the current location of the second device and the current location of the object.
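The final decision step could be reduced to a rule over the two camera-derived locations. The command names, 2-D coordinates, and distance threshold below are all illustrative assumptions; the patent claims only that operation is altered based on the two locations.

```python
def speaker_command(device_pos, object_pos, threshold=1.0):
    """Decide how to alter the second device's speaker/microphone given
    camera-derived positions (2-D coordinates; threshold in meters).
    Command names are illustrative placeholders."""
    dx = object_pos[0] - device_pos[0]
    dy = object_pos[1] - device_pos[1]
    distance = (dx * dx + dy * dy) ** 0.5
    if distance < threshold:
        return "mute-microphone"      # object very close: avoid feedback/echo
    return "beam-toward-object"       # otherwise steer output toward it
```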

Playback Device Calibration Based on Representation Spectral Characteristics
20170286052 · 2017-10-05 ·

A computing device may maintain a database of representative spectral characteristics. The computing device may also receive particular spectral data associated with a particular playback environment corresponding to a particular playback device. Based on the particular spectral data, the computing device may identify one of the representative spectral characteristics from the database that substantially matches the particular spectral data, and then identify, in the database, an audio processing algorithm based on a) the identified representative spectral characteristic and b) at least one characteristic of the particular playback device. The computing device may then transmit, to the particular playback device, data indicating the identified audio processing algorithm.
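The two-step lookup (match the spectral data to a representative characteristic, then select an algorithm for that characteristic and device) could look roughly like this. The table contents, cluster labels, device model strings, and algorithm names are hypothetical.

```python
import math

# Hypothetical representative spectral characteristics (per-band magnitudes)
REPRESENTATIVE = {
    "bright-room": [1.0, 1.2, 1.5],
    "dead-room": [1.0, 0.8, 0.5],
}

# Hypothetical table keyed by (representative characteristic, device model)
ALGORITHM_DB = {
    ("bright-room", "model-A"): "eq-cut-high",
    ("bright-room", "model-B"): "eq-cut-high-wide",
    ("dead-room", "model-A"): "eq-boost-high",
}

def pick_algorithm(spectral_data, device_model):
    """Match submitted spectral data to the closest representative
    characteristic, then look up the processing algorithm for that
    characteristic and the reporting device's model."""
    label = min(REPRESENTATIVE,
                key=lambda k: math.dist(REPRESENTATIVE[k], spectral_data))
    return ALGORITHM_DB[(label, device_model)]

print(pick_algorithm([1.0, 1.1, 1.4], "model-A"))  # → "eq-cut-high"
```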

Dynamically providing to a person feedback pertaining to utterances spoken or sung by the person

Arrangements described herein relate to detecting, in real time, utterances spoken or sung by a first person as the utterances are spoken or sung, and comparing, in real time, the detected utterances spoken or sung by the first person to at least a stored sample of utterances spoken or sung by the first person. Based, at least in part, on the comparing of the detected utterances spoken or sung by the first person to at least the stored sample of utterances spoken or sung by the first person, a key indicator that indicates at least one characteristic of the detected utterances spoken or sung by the first person can be generated. Feedback indicating the at least one characteristic of the detected utterances spoken or sung by the first person can be communicated to the first person or a second person.
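One simple instance of a "key indicator" would compare the detected pitch contour against the stored sample and report the fraction of frames that are on pitch. The metric, tolerance, and semitone representation below are assumptions for illustration, not the patent's definition.

```python
def key_indicator(detected_pitches, stored_pitches, tolerance=1.0):
    """Compare a detected pitch contour (in semitones) against a stored
    sample and return the fraction of frames within tolerance, as a
    simple on-pitch indicator (names and metric are illustrative)."""
    pairs = list(zip(detected_pitches, stored_pitches))
    if not pairs:
        return 0.0
    hits = sum(abs(d - s) <= tolerance for d, s in pairs)
    return hits / len(pairs)
```

The resulting score (0.0 to 1.0) could then be surfaced as real-time feedback to the singer or to a listener.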

Compensation for Speaker Nonlinearities

A first signal may be received indicative of audio to be played by a speaker. A second signal may be received which comprises (i) a voice input received by a microphone and (ii) at least a portion of the audio played by the speaker at a same time that the microphone receives the voice input. Based on the first signal, nonlinearities output by the speaker which played the audio may be determined. At least the nonlinearities from the second signal may be removed to output a third signal comprising substantially the voice input received at the microphone.
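The abstract does not specify the nonlinearity model. The sketch below assumes a simple memoryless cubic term (coefficient a3 is illustrative) to show the general idea: estimate the speaker's nonlinear contribution from the first signal and subtract it from the second to recover the voice input.

```python
def recover_voice(mic_signal, playback_signal, a3=0.1):
    """Subtract the speaker's estimated contribution from the microphone
    signal. The speaker is modeled with a simple cubic nonlinearity
    (y = x + a3 * x**3); the model and coefficient are assumptions."""
    def speaker_model(x):
        return x + a3 * x ** 3
    return [m - speaker_model(p) for m, p in zip(mic_signal, playback_signal)]
```

For instance, if the microphone picks up the voice samples [0.5, -0.5] on top of playback [1.0, 2.0] rendered through the cubic model, subtracting the modeled playback recovers the voice. Real systems would also need to estimate the model coefficients and align the signals in time.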