H04R25/50

Listening device with automatic mode change capabilities

A hearing aid includes a casing configured to fit behind an ear of a user and against a side of the user's head. The hearing aid further includes a first proximity sensor associated with the casing and configured to generate a first signal that is proportional to a proximity of the casing to the ear, and a processor coupled to the first proximity sensor and configured to select an operating mode from a plurality of operating modes in response to the first signal.
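
The abstract's proximity-driven mode selection might be sketched as follows. This is a minimal illustration only: the threshold values, mode names, and normalized sensor scale are assumptions, not anything disclosed in the patent.

```python
# Hypothetical sketch of selecting an operating mode from a proximity signal.
# Thresholds and mode names are illustrative assumptions.

WORN_THRESHOLD = 0.8  # normalized proximity above which the casing counts as seated

def select_operating_mode(proximity_signal: float) -> str:
    """Pick an operating mode in response to the first proximity sensor's signal."""
    if proximity_signal >= WORN_THRESHOLD:
        return "amplification"   # casing seated against the ear: normal use
    elif proximity_signal > 0.2:
        return "standby"         # near the ear but not seated
    return "power_save"          # removed from the ear

print(select_operating_mode(0.95))  # amplification
print(select_operating_mode(0.05))  # power_save
```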

Sound signal modelling based on recorded object sound
11140495 · 2021-10-05

A hearing device configured to be worn by a user, includes: a first input transducer for providing an input signal; a first processing unit configured for processing the input signal according to a first sound signal model; and an acoustic output transducer coupled to an output of the first processing unit for conversion of an output signal from the first processing unit into an audio output signal; wherein the hearing device is configured to obtain an input signal comprising a first signal part and a second signal part, the first signal part corresponding at least partly to a first object signal recorded by a recording unit; and wherein the hearing device is also configured to apply a first set of parameter values of a second sound signal model to the first sound signal model, and process the input signal according to the first sound signal model.
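
The step of applying one model's parameter values to another before processing could be sketched like this. The model structure, parameter names, and values are assumptions for illustration; the patent does not specify them.

```python
# Hypothetical sketch: apply a first set of parameter values taken from a
# second sound-signal model to the first model, then process the input with
# the first model. The linear gain/offset model is an illustrative assumption.

class SoundSignalModel:
    def __init__(self, gain: float = 1.0, offset: float = 0.0):
        self.gain = gain
        self.offset = offset

    def process(self, samples):
        return [self.gain * s + self.offset for s in samples]

first_model = SoundSignalModel()
second_model_params = {"gain": 1.5, "offset": 0.1}  # values from the second model

# Transfer the second model's parameter values onto the first model.
for name, value in second_model_params.items():
    setattr(first_model, name, value)

out = first_model.process([0.0, 1.0])  # input signal processed by the first model
```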

SOUND-OUTPUT DEVICE

The present application discloses a sound-output device, including a vibration speaker configured to generate a bone-conducted sound wave; and an air-conducted speaker configured to generate an air-conducted sound wave. The sound-output device further comprises a signal processing module configured to generate a control signal, wherein, the vibration speaker includes a vibration assembly electrically connected to the signal processing module to receive the control signal, and generate the bone-conducted sound wave based on the control signal, and the air-conducted speaker includes a housing coupled to the vibration assembly to generate the air-conducted sound wave based on the bone-conducted sound wave.

SYSTEMS AND METHODS FOR PROCESSING AUDIO SIGNALS BASED ON USER DEVICE PARAMETERS
20210289307 · 2021-09-16

In various applications, the system provides a method for processing audio signals, including: receiving a request for audio content; receiving an identifier encoded in a personal audio device comprising a transducer for playing audio; retrieving at least one parameter associated with the identifier; and processing the audio content using at least the request, the identifier and the at least one parameter, wherein the processing is customized for the personal audio device based on the at least one parameter associated with the identifier. In various applications the parameter is one or more of: associated with a specification of the personal audio device; associated with acoustic metrics of the transducer; related to control of equalization; related to permission to enable proprietary sonic processing for enhanced acoustic reception of streaming content; stored in a chip on the personal audio device; or retrieved from a server in a network, among other things. In some applications the parameter relates to acoustic metrics of the transducer and the identifier is associated with permission to enable proprietary sonic processing for enhanced acoustic reception of streaming content. In various applications, the personal audio device comprises ear buds and the identifier is stored in a non-volatile memory of the ear buds.
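
The identifier-to-parameter lookup and device-customized processing might be sketched as below. The parameter table, identifier strings, and the single-band gain stand-in for equalization are all illustrative assumptions.

```python
# Hypothetical sketch of customizing audio processing per personal audio
# device: an identifier retrieved from the device keys into a parameter table.
# The table contents and the simplified one-band EQ are assumptions.

DEVICE_PARAMS = {
    "earbud-A1": {"eq_gain_db": 3.0, "proprietary_dsp": True},
}

def process_audio(samples, device_id):
    """Process requested audio content using the device's stored parameters."""
    params = DEVICE_PARAMS.get(device_id)
    if params is None:
        return samples  # unknown identifier: pass the audio through unchanged
    gain = 10 ** (params["eq_gain_db"] / 20)  # dB to linear gain
    return [s * gain for s in samples]

customized = process_audio([1.0, -0.5], "earbud-A1")
passthrough = process_audio([1.0, -0.5], "unknown-device")
```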

Selectively Collecting and Storing Sensor Data of a Hearing System

A method for collecting and storing sensor data (56, 64) of a hearing system (10) comprises: receiving the sensor data (56, 64) of at least one sensor (20, 32, 34) of a hearing device (12) of the hearing system (10), wherein the hearing device (12) is worn by a user; detecting a situation (72) of interest by classifying at least a part of the sensor data (56, 64) with a classifier (61) implemented in the hearing system (10); collecting the sensor data (56, 64), when the hearing system (10) is in a situation (72) of interest; and sending the collected sensor data (76) to a storage system (54) in data communication with the hearing system (10).
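
The classifier-gated collection loop described above can be sketched in a few lines. The rolling-buffer size, the threshold classifier, and the in-memory stand-in for the remote storage system are illustrative assumptions.

```python
from collections import deque

# Hypothetical sketch of selectively collecting sensor data: samples are
# buffered, and only when a classifier flags a situation of interest is the
# collected data sent to storage. Buffer size and classifier are assumptions.

class SensorCollector:
    def __init__(self, classifier, storage):
        self.classifier = classifier     # maps a sensor sample to True/False
        self.storage = storage           # stand-in for the remote storage system
        self.buffer = deque(maxlen=256)  # rolling window of recent sensor data

    def on_sample(self, sample):
        self.buffer.append(sample)
        if self.classifier(sample):                  # situation of interest?
            self.storage.append(list(self.buffer))   # send the collected data
            self.buffer.clear()

storage = []
collector = SensorCollector(lambda s: s["level_db"] > 80, storage)
collector.on_sample({"level_db": 60})  # buffered only
collector.on_sample({"level_db": 85})  # triggers collection of the buffer
```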

HEARING DEVICE
20210306778 · 2021-09-30

A hearing device comprising a behind-the-ear module and a tube element extending from the behind-the-ear module is disclosed. The behind-the-ear module comprises a signal processor for processing received audio signals into a signal modified to compensate for a user's hearing impairment, and an antenna configured for emission and reception of electromagnetic radiation at a first frequency. The hearing device further comprises at least one electrically conducting element, wherein a first section of the at least one electrically conducting element extends into the tube element, and at least one decoupling element, the at least one decoupling element being configured to electrically decouple the first section and the behind-the-ear module at the first frequency while maintaining an electrical connection between the first section and the behind-the-ear module at second frequencies.

IDENTIFYING INFORMATION AND ASSOCIATED INDIVIDUALS
20210258703 · 2021-08-19

A hearing aid system for identifying individuals may include a wearable camera, a microphone, and at least one processor. The processor may be programmed to receive a plurality of images captured by the wearable camera; receive audio signals representative of sounds captured by the microphone; and identify a first audio signal, from among the received audio signals, representative of a voice of a first individual. The processor may transcribe and store, in a memory, text corresponding to speech associated with the voice of the first individual and determine whether the first individual is a recognized individual. If the first individual is a recognized individual, the processor may associate an identifier of the first recognized individual with the stored text corresponding to the speech associated with the voice of the first individual.
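
The transcribe-then-associate flow might look like the sketch below. Recognizing an individual via a voiceprint lookup, the identifier strings, and the in-memory store are all illustrative assumptions.

```python
# Hypothetical sketch: store transcribed text and, if the speaker is a
# recognized individual, associate their identifier with the stored text.
# The voiceprint table and record format are assumptions.

KNOWN_VOICEPRINTS = {"vp-001": "Alice"}

def store_speech(voiceprint_id: str, transcript: str, memory: list) -> None:
    speaker = KNOWN_VOICEPRINTS.get(voiceprint_id)  # recognized individual?
    record = {"text": transcript}
    if speaker is not None:
        record["speaker"] = speaker  # associate identifier with stored text
    memory.append(record)

log = []
store_speech("vp-001", "see you at noon", log)   # recognized speaker
store_speech("vp-999", "who was that?", log)     # unrecognized speaker
```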

USING VOICE AND VISUAL SIGNATURES TO IDENTIFY OBJECTS
20210233539 · 2021-07-29

Disclosed is a system for identifying sound-emanating objects in an environment of a user. The system may comprise at least one memory device and at least one processor programmed to: receive a plurality of images captured by a wearable camera; analyze at least one of the received plurality of images to determine one or more visual characteristics associated with at least one sound-emanating object; identify, within a database and in view of the one or more visual characteristics, the at least one sound-emanating object and determine a degree of certainty of identification; receive audio signals acquired by a wearable microphone; analyze the received audio signals to determine a voiceprint of the at least one sound-emanating object; identify the at least one sound-emanating object based on the determined voiceprint; and initiate at least one action based on an identity of the at least one sound-emanating object.
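
One way to read the two-stage identification (visual lookup with a certainty score, voiceprint as confirmation) is sketched below. The databases, keys, and the certainty threshold are illustrative assumptions; the patent does not prescribe this decision rule.

```python
# Hypothetical sketch: a visual lookup yields a candidate object and a degree
# of certainty; if certainty is too low, fall back to a voiceprint match.
# Both lookup tables and the threshold are assumptions.

VISUAL_DB = {"red-kettle": ("kettle", 0.7)}        # visual signature -> (object, certainty)
VOICEPRINT_DB = {"whistle-spectrum": "kettle"}     # voiceprint -> object

def identify(visual_key, audio_key, certainty_floor=0.9):
    candidate, certainty = VISUAL_DB.get(visual_key, (None, 0.0))
    if certainty >= certainty_floor:
        return candidate                 # images alone are conclusive
    return VOICEPRINT_DB.get(audio_key)  # confirm via the determined voiceprint
```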

SELECTIVE INPUT FOR A HEARING AID BASED ON IMAGE DATA
20210235201 · 2021-07-29

Disclosed is a hearing aid system for selectively conditioning audio signals. The system may comprise a processor programmed to: receive a plurality of images captured by a wearable camera, wherein the plurality of images depict objects in an environment of a user; receive audio signals acquired by a wearable microphone, wherein the audio signals are representative of sounds emanating from the objects; analyze the plurality of images to identify at least one sound-emanating object in the environment of the user; retrieve, from a database, information about the at least one identified sound-emanating object; cause, based on the retrieved information, selective conditioning of at least one audio signal received by the wearable microphone from a region associated with the at least one sound-emanating object; and cause transmission of the at least one conditioned audio signal to a hearing interface device configured to provide sounds to an ear of the user.
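
Selective conditioning of the region associated with an identified object might be sketched as below. Representing a region by a direction-of-arrival angle, the boost values, the tolerance, and the object database are all illustrative assumptions.

```python
# Hypothetical sketch: boost only the audio signal whose direction of arrival
# falls in the region associated with the identified sound-emanating object.
# Angles, gains, and the object database are assumptions.

OBJECT_DB = {"doorbell": {"boost_db": 6.0}}  # retrieved object information

def condition(signals, identified_object, object_angle_deg, tolerance_deg=15.0):
    """signals: list of (direction_of_arrival_deg, samples) pairs."""
    info = OBJECT_DB.get(identified_object, {"boost_db": 0.0})
    gain = 10 ** (info["boost_db"] / 20)  # dB to linear gain
    out = []
    for angle, samples in signals:
        if abs(angle - object_angle_deg) <= tolerance_deg:
            out.append((angle, [s * gain for s in samples]))  # condition region
        else:
            out.append((angle, samples))                      # leave untouched
    return out

conditioned = condition([(10.0, [1.0]), (90.0, [1.0])], "doorbell", 0.0)
```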

DIFFERENTIAL AMPLIFICATION RELATIVE TO VOICE OF SPEAKERPHONE USER

A system may include a wearable camera configured to capture images and a microphone configured to capture sounds. The system may also include a processor programmed to receive the images; identify a representation of one or more individuals in the images; receive from the microphone a first audio signal associated with a voice; determine, based on analysis of the images, that the first audio signal is not associated with a voice of any of the one or more individuals; receive from the microphone a second audio signal associated with a voice; determine, based on analysis of the images, that the second audio signal is associated with a voice of one of the one or more individuals; and cause a first amplification of the first audio signal and a second amplification of the second audio signal. The first amplification may differ from the second amplification in at least one aspect.
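
The core of the differential-amplification idea, reduced to a toy sketch: a voice matched to a visible individual gets one gain, an off-camera voice another. The gain values and the boolean matching test stand in for the image analysis and are assumptions.

```python
# Hypothetical sketch of differential amplification: apply a second (larger)
# amplification to a voice associated with an individual visible in the
# images, and a first (smaller) amplification otherwise. Gains are assumptions.

ON_CAMERA_GAIN = 2.0   # second amplification: speaker visible in the images
OFF_CAMERA_GAIN = 0.5  # first amplification: no matching visible individual

def amplify(samples, voice_matches_visible_individual: bool):
    gain = ON_CAMERA_GAIN if voice_matches_visible_individual else OFF_CAMERA_GAIN
    return [s * gain for s in samples]

visible = amplify([1.0], True)     # second audio signal
off_camera = amplify([1.0], False) # first audio signal
```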