H04R2225/43

Individualized own voice detection in a hearing prosthesis
11477587 · 2022-10-18

Presented herein are techniques for training a hearing prosthesis to classify/categorize received sound signals as either including a recipient's own voice (i.e., the voice or speech of the recipient of the hearing prosthesis) or external voice (i.e., the voice or speech of one or more persons other than the recipient). The techniques presented herein use the captured voice (speech) of the recipient to train the hearing prosthesis to perform the classification of the sound signals as including the recipient's own voice or external voice.
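The patent does not disclose a specific classifier, but the training idea can be illustrated with a minimal sketch: extract per-frame features from captured recipient speech and from external speech, then classify new frames by nearest centroid. The feature choices (RMS level, zero-crossing rate) and class names here are our assumptions, not the patented method.

```python
import numpy as np

def frame_features(frames):
    """Map each audio frame to a 2-D feature vector: RMS level and
    zero-crossing rate (both hypothetical stand-in features)."""
    feats = []
    for f in frames:
        f = np.asarray(f, dtype=float)
        rms = np.sqrt(np.mean(f ** 2))
        zcr = np.mean(np.abs(np.diff(np.sign(f)))) / 2.0
        feats.append([rms, zcr])
    return np.array(feats)

class OwnVoiceClassifier:
    """Nearest-centroid classifier over frame features: one centroid
    trained from the recipient's own voice, one from external voice."""

    def train(self, own_frames, external_frames):
        self.own_centroid = frame_features(own_frames).mean(axis=0)
        self.ext_centroid = frame_features(external_frames).mean(axis=0)

    def classify(self, frame):
        x = frame_features([frame])[0]
        d_own = np.linalg.norm(x - self.own_centroid)
        d_ext = np.linalg.norm(x - self.ext_centroid)
        return "own" if d_own < d_ext else "external"

# Toy training data: "own voice" frames are loud tones (low zero-crossing
# rate); "external" frames are quiet noise (high zero-crossing rate).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 800, endpoint=False)
own = [0.8 * np.sin(2 * np.pi * 120 * t) for _ in range(5)]
ext = [0.05 * rng.standard_normal(800) for _ in range(5)]

clf = OwnVoiceClassifier()
clf.train(own, ext)
```

A real prosthesis would use far richer features (e.g. spectral or bone-conduction cues), but the train-then-classify structure is the same.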

Audio analysis and processing system

An audio analysis and processing system includes a processor configured with an audio array input thread connected to a plurality of audio input channels, each corresponding to an audio input sensor. Each audio input sensor may be positionally related to the positions of the other audio input sensors, and a source input thread may be configured to connect to a microphone audio input channel. An audio output thread may be configured to connect to a speaker output channel, and a beamformer thread may be responsive to the audio array input thread. A beam analysis and selection thread may be connected to an output of the beamformer thread, and a mixer thread may have a first input connected to an output of the source input thread, a second input connected to an output of the beam analysis and selection thread, and an output connected to the audio output thread. The audio input channel may be connected to a personal communication device, as may the microphone audio input channel. The processor may include a line output thread configured to connect to an audio output channel, and an audio information interface may be provided to carry signals representing audio to the processor.
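The thread topology described above can be sketched with standard queues: an audio-array input feeds a beamformer, the resulting beams are analysed and one selected, and a mixer combines the selected beam with the separate source (microphone) input before handing the result to the output. The "beamformer" and "analysis" bodies below are trivial stand-ins of our own; only the thread/queue wiring reflects the description.

```python
import queue
import threading

def beamformer(in_q, out_q):
    # Stand-in beamformer: forms two beams as weighted sums of the channels.
    for channels in iter(in_q.get, None):
        beams = [sum(channels), channels[0] - channels[-1]]
        out_q.put(beams)
    out_q.put(None)  # propagate end-of-stream

def beam_select(in_q, out_q):
    # Stand-in analysis/selection: pick the beam with the largest magnitude.
    for beams in iter(in_q.get, None):
        out_q.put(max(beams, key=abs))
    out_q.put(None)

def mixer(src_q, sel_q, out_q):
    # Mix each source-input sample with the selected beam (simple average).
    for src in iter(src_q.get, None):
        beam = sel_q.get()
        out_q.put((src + beam) / 2)
    sel_q.get()  # consume the beam stream's end-of-stream marker
    out_q.put(None)

array_q, beam_q, sel_q, src_q, out_q = (queue.Queue() for _ in range(5))
threads = [
    threading.Thread(target=beamformer, args=(array_q, beam_q)),
    threading.Thread(target=beam_select, args=(beam_q, sel_q)),
    threading.Thread(target=mixer, args=(src_q, sel_q, out_q)),
]
for th in threads:
    th.start()

# Feed two multichannel frames plus one source-input sample per frame.
array_q.put([1.0, 2.0, 3.0]); src_q.put(0.0)
array_q.put([4.0, 0.0, -4.0]); src_q.put(2.0)
array_q.put(None); src_q.put(None)

mixed = list(iter(out_q.get, None))
for th in threads:
    th.join()
```

Blocking queues keep the stages in lock-step, which is why the output order is deterministic despite the threading.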

Identifying information and associated individuals

A hearing aid system for identifying individuals may include a wearable camera, a microphone, and at least one processor. The processor may be programmed to receive a plurality of images captured by the wearable camera; receive audio signals representative of sounds captured by the microphone; and identify a first audio signal, from among the received audio signals, representative of a voice of a first individual. The processor may transcribe and store, in a memory, text corresponding to speech associated with the voice of the first individual and determine whether the first individual is a recognized individual. If the first individual is a recognized individual, the processor may associate an identifier of the first recognized individual with the stored text corresponding to the speech associated with the voice of the first individual.
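The recognize-then-associate step can be sketched as a lookup: if a voice matches a known individual, the stored transcript gets that individual's identifier; otherwise it is stored without one. The voiceprint keys, names, and data structures below are illustrative assumptions; real systems would match embeddings derived from the audio and imagery.

```python
# Hypothetical database of recognized individuals keyed by a voice "print".
recognized = {"voiceprint-A": "Alice", "voiceprint-B": "Bob"}

memory = []  # stored (speaker identifier, transcript) records

def process_segment(voiceprint, transcript):
    """Store the transcript; attach an identifier if the voice is recognized,
    or None if the individual is not recognized."""
    identifier = recognized.get(voiceprint)
    memory.append({"speaker": identifier, "text": transcript})
    return identifier

process_segment("voiceprint-A", "See you at noon.")
process_segment("voiceprint-X", "Unknown speaker here.")
```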

Loudspeaker system provided with dynamic speech equalization

A method for speech equalization comprises the steps of receiving an input audio signal, processing said input audio signal in dependence on frequency, and providing an equalized electric audio signal according to an equalization function, wherein said equalization function comprises at least an actuator part configured to dynamically apply a compensation filter to the received input signal and to dynamically apply a transparent filter to the received input signal. The method further comprises transmitting an output signal perceivable by a user as sound representative of said input audio signal or a processed version thereof.
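One plausible reading of the dynamic behaviour is that the actuator switches between the two filters depending on whether the input currently looks like speech. The sketch below assumes exactly that, with a crude RMS-threshold speech detector and a first-difference high-frequency boost as the compensation filter; both are our illustrative stand-ins, not the patented design.

```python
import numpy as np

def is_speech(frame, threshold=0.1):
    """Hypothetical detector: treat frames above an RMS threshold as speech."""
    return np.sqrt(np.mean(np.asarray(frame, dtype=float) ** 2)) > threshold

def compensation_filter(frame):
    """Crude stand-in for a speech-intelligibility compensation: emphasise
    high frequencies by adding a first-difference component."""
    f = np.asarray(frame, dtype=float)
    return f + 0.5 * np.concatenate(([0.0], np.diff(f)))

def transparent_filter(frame):
    """Identity ("transparent") path: pass the signal through unchanged."""
    return np.asarray(frame, dtype=float)

def equalize(frame):
    """Dynamically apply the compensation or the transparent filter."""
    return compensation_filter(frame) if is_speech(frame) else transparent_filter(frame)

loud = 0.5 * np.sin(2 * np.pi * 5 * np.arange(100) / 100)   # speech-like level
quiet = 0.01 * np.sin(2 * np.pi * 5 * np.arange(100) / 100)  # below threshold
```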

Binaural hearing system having two hearing instruments to be worn in or on the ear of the user, and method of operating such a hearing system

A binaural hearing system for assisting a hearing of a user includes two hearing instruments each to be worn in or on an ear of the user. An audio signal is modified in each of the two hearing instruments by way of a programmable signal processor of the respective hearing instrument by executing a plurality of software modules of firmware of the hearing system and is output by an output transducer of the respective hearing instrument. The executed software modules of the firmware are distributed asymmetrically on the two hearing instruments, so that at least one of the software modules of the firmware is selectively executed in one of the two hearing instruments.
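The asymmetric distribution can be illustrated as a partitioning problem: assign each firmware module to exactly one of the two instruments so that neither side is overloaded. The module names, load estimates, and greedy balancing scheme below are our own illustrative assumptions, not the patented distribution method.

```python
modules = {  # hypothetical module -> estimated processing load units
    "noise_reduction": 5,
    "own_voice_detection": 3,
    "beamforming": 4,
    "feedback_cancellation": 2,
}

def distribute(modules):
    """Greedy partition: assign each module (heaviest first) to the
    currently lighter instrument, so every module executes on one side only."""
    load = {"left": 0, "right": 0}
    assignment = {}
    for name, cost in sorted(modules.items(), key=lambda kv: -kv[1]):
        side = min(load, key=load.get)  # ties resolve to "left"
        assignment[name] = side
        load[side] += cost
    return assignment, load

assignment, load = distribute(modules)
```

Each module appears on exactly one side, matching the "selectively executed in one of the two hearing instruments" property.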

Methods to assist verbal communication for both listeners and speakers
20230122715 · 2023-04-20

Methods implemented in a system utilizing computing programs for a speaker and a listener in conversation are provided. Aspects include (i) a reminder provisioner for the speaker, triggered according to the speed, pitch, or volume of the speaker's speech, (ii) a speech training provisioner for the speaker, and (iii) an application which records and plays back conversations that are difficult to understand.
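The reminder provisioner in aspect (i) can be sketched as range checks on the measured speech metrics. The metric names, comfort ranges, and message format below are illustrative assumptions, not values from the application.

```python
LIMITS = {  # hypothetical comfort ranges per metric
    "speed_wpm": (100, 180),   # speaking rate, words per minute
    "pitch_hz": (80, 300),     # fundamental frequency
    "volume_db": (50, 75),     # sound pressure level
}

def reminders(measurements):
    """Return a reminder message for each measured metric that falls
    outside its configured range; an empty list means no reminder fires."""
    notes = []
    for metric, value in measurements.items():
        lo, hi = LIMITS[metric]
        if value < lo:
            notes.append(f"{metric} too low ({value})")
        elif value > hi:
            notes.append(f"{metric} too high ({value})")
    return notes
```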

Polyphonic Pitch Enhancement in a Cochlear Implant
20220323756 · 2022-10-13

A cochlear implant system for processing polyphonic pitch includes an electrode array for implanting in a cochlea of a patient. The electrode array includes a first set of electrodes, where each electrode of the first set is for implanting on a first region of the cochlea. The electrode array also includes a second set of electrodes, where each electrode of the second set is for implanting on a second region of the cochlea. The system also includes a sound processor configured to capture a sound signal having polyphonic pitch. For each electrode of the first set and the second set, the sound processor generates at least two different modulated frequency signals from the sound signal, such that each of the modulated frequency signals corresponds to a different pitch in the sound signal. The sound processor stimulates the electrode by simultaneously applying the at least two different modulated frequency signals.
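The per-electrode stimulation idea can be sketched numerically: generate one amplitude-modulated envelope per pitch present in the signal and apply them simultaneously by summing. The sample rate, window length, pitches, and unit carrier below are illustrative assumptions, not parameters from the application.

```python
import numpy as np

fs = 1000.0                      # envelope sample rate (Hz), illustrative
t = np.arange(0, 0.1, 1 / fs)    # 100 ms stimulation window

def modulated_signal(pitch_hz, carrier):
    """Amplitude-modulate a carrier envelope at the pitch rate, so the
    electrode's stimulation envelope conveys that pitch."""
    return carrier * (0.5 + 0.5 * np.sin(2 * np.pi * pitch_hz * t))

carrier = np.ones_like(t)                    # stand-in for the band envelope
sig_low = modulated_signal(110.0, carrier)   # e.g. a pitch near A2
sig_high = modulated_signal(165.0, carrier)  # e.g. a pitch near E3

# Simultaneous application of the two modulated signals on one electrode.
electrode_drive = sig_low + sig_high
```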

Methods for a voice processing system

Methods for a voice processing system comprising P microphone units (102A . . . 102D) and a central unit (104) are disclosed. Each microphone unit is linked to a person and derives a source localisation signal from N microphone signals. The source localisation signal is used to control an adaptive beamforming process to obtain a beamformed audio signal. The microphone unit is further configured to derive metadata from the N microphone signals, such as the direction the sound is coming from. Packages with the metadata and the beamformed audio signal are transmitted to the central unit. The central unit processes the metadata to determine which parts of the P beamformed audio signals comprise speech from a person that is linked to another microphone unit. By removing said parts from the audio signals before transcription, the quality of the transcription is improved. The transcriptions are displayed on a remote device.
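The central unit's cleanup step can be sketched as a filter over metadata-tagged audio chunks: before transcribing one unit's beamformed audio, drop every chunk whose speech the metadata attributes to a person linked to a different microphone unit. The chunk data model below is our own illustration.

```python
def clean_for_transcription(unit_owner, chunks):
    """chunks: list of (attributed_speaker, audio) pairs from one microphone
    unit, where attributed_speaker comes from the transmitted metadata.
    Keep only chunks spoken by the unit's own linked person."""
    return [audio for speaker, audio in chunks if speaker == unit_owner]

# Hypothetical stream from the unit linked to "alice"; "bob" is linked to
# another microphone unit, so his chunk is removed before transcription.
chunks_unit_a = [
    ("alice", "audio-1"),
    ("bob", "audio-2"),
    ("alice", "audio-3"),
]
kept = clean_for_transcription("alice", chunks_unit_a)
```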

Feature extraction in hearing prostheses

Presented herein are techniques for extracting features from sound signals received at a hearing prosthesis at least partially based on an environmental classification of the sound signals. More specifically, one or more sound signals are received at a hearing prosthesis and are converted into stimulation control signals for use in delivering stimulation to a recipient of the hearing prosthesis. The hearing prosthesis determines an environmental classification of the sound environment associated with the one or more sound signals and is configured to use the environmental classification in the determination of a feature-based adjustment for incorporation into the stimulation control signals.
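The classify-then-adjust flow can be sketched as a two-stage lookup: a classifier maps signal statistics to an environment label, and the label selects a feature-based adjustment to fold into the stimulation control signals. The environment classes, statistics, thresholds, and gain values below are all illustrative assumptions.

```python
ADJUSTMENTS = {  # hypothetical per-environment gain adjustments (dB)
    "speech": 3.0,
    "speech_in_noise": 6.0,
    "music": 0.0,
    "quiet": -2.0,
}

def classify_environment(snr_db, modulation_depth):
    """Toy environment classifier over two hypothetical signal statistics:
    estimated SNR and envelope modulation depth."""
    if modulation_depth < 0.2:          # weak envelope modulation
        return "quiet" if snr_db > 20 else "music"
    return "speech" if snr_db > 10 else "speech_in_noise"

def stimulation_adjustment(snr_db, modulation_depth):
    """Return the environment label and the adjustment to incorporate
    into the stimulation control signals."""
    env = classify_environment(snr_db, modulation_depth)
    return env, ADJUSTMENTS[env]
```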

Hearing aid comprising a noise reduction system

A hearing aid comprises a) a multitude of M input transducers, each providing an electric input signal representative of environment sound in a time-frequency representation (k, l) and each comprising varying amounts of target (s) and noise (v) signal components; and b) a signal processor configured to process said multitude of electric input signals, the signal processor comprising a beamformer filter configured to receive said M electric input signals and to provide a spatially filtered signal, and a post-filter configured to receive said spatially filtered signal and to provide an estimate Ŝ(k,l) of a target signal representing said target signal components from the target sound source. The signal processor is configured to provide estimates of the power spectral densities λ_s(k,l) of said target signal components in dependence on inter-frequency-bin relationships between the spectral components, enforced by properties of the electric input signals across at least some of said frequency bins.
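A minimal numerical sketch of the beamformer-plus-post-filter chain (not the patented PSD estimator): two input channels in a single frequency bin, a fixed sum beamformer as the spatial filter, and a Wiener-style post-filter whose gain is driven by an estimate of the target PSD λ_s obtained by subtracting an assumed-known residual noise PSD from the beamformed signal's PSD. All signal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 2000                                  # time frames l within one bin k
s = 2.0 * rng.standard_normal(L)          # target component, PSD ~ 4
v1 = rng.standard_normal(L)               # noise at mic 1, PSD ~ 1
v2 = rng.standard_normal(L)               # noise at mic 2, PSD ~ 1
x1, x2 = s + v1, s + v2                   # M = 2 electric input signals

y = 0.5 * (x1 + x2)                       # spatially filtered (beamformed) signal
psd_y = np.mean(np.abs(y) ** 2)           # PSD of the beamformer output
psd_noise = 0.5                           # residual noise PSD after averaging 2 mics

lambda_s = max(psd_y - psd_noise, 0.0)    # target PSD estimate lambda_s(k, l)
wiener_gain = lambda_s / (lambda_s + psd_noise)  # post-filter gain
s_hat = wiener_gain * y                   # target estimate S-hat(k, l)
```

With independent target and noise, averaging two microphones halves the noise PSD, which is why `psd_noise = 0.5` is the residual after the sum beamformer.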