H04R2430/03

Hearing device comprising a keyword detector and an own voice detector and/or a transmitter

A hearing device, e.g. a hearing aid, is configured to be arranged at least partly on a user's head or at least partly implanted in a user's head. The hearing device comprises a) at least one input transducer for picking up an input sound signal from the environment and providing at least one electric input signal representing said input sound signal; b) a signal processor providing a processed signal based on one or more of said at least one electric input signals; c) an output unit for converting said processed signal or a signal originating therefrom to stimuli perceivable by said user as sound; d) a keyword spotting system comprising d1) a keyword detector configured to detect a limited number of predefined keywords or phrases or sounds in said at least one electric input signal or in a signal derived therefrom, and to provide a keyword indicator of whether or not, or with what probability, said keywords or phrases or sounds are detected, and d2) an own voice detector for providing an own voice indicator estimating whether or not, or with what probability, a given input sound signal originates from the voice of the user of the hearing device. The hearing device further comprises e) a controller configured to provide an own-voice-keyword indicator of whether or not or with what probability a given one of said keywords or phrases or sounds is currently detected and spoken by said user, said own-voice-keyword indicator being dependent on said keyword indicator and said own voice indicator.
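The controller's combination of the two indicators might be sketched as follows. This is a minimal illustration only: the product combination rule, the independence assumption, and the threshold value are assumptions, not the patent's method.

```python
# Hedged sketch: fuse a keyword-detector probability with an
# own-voice-detector probability into an own-voice-keyword indicator.
# The product rule and the 0.5 threshold are illustrative assumptions.

def own_voice_keyword_indicator(p_keyword: float, p_own_voice: float,
                                threshold: float = 0.5) -> tuple[float, bool]:
    """Return (probability, decision) that a keyword was spoken by the user."""
    p_combined = p_keyword * p_own_voice  # assumes independent detectors
    return p_combined, p_combined >= threshold

# Keyword heard with high confidence, but probably not in the user's own voice:
p, detected = own_voice_keyword_indicator(p_keyword=0.9, p_own_voice=0.2)
```

With this gating, a keyword spoken by a bystander (high keyword probability, low own-voice probability) does not trigger the device.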

Multi-frequency sensing system with improved smart glasses and devices
11544036 · 2023-01-03

The systems and methods described relate to the concept that smart devices can be used to sense various types of phenomena, such as sound, blue-light exposure, and RF and microwave radiation, and, in real time, to analyze, report, and/or control outputs (e.g., displays or speakers). The systems are configurable and use standard computing devices, such as wearable electronics (e.g., smart glasses), tablet computers, and mobile phones, to measure various frequency bands across multiple points, allowing a single user to visualize and/or adjust environmental conditions.

Playback device self-calibration
11540073 · 2022-12-27

Examples described herein involve configuring a playback device based on distortion, such as that caused by a barrier. One implementation may involve causing the playback device to play audio content according to an existing playback configuration, determining an existing frequency response of the playback device in a given system, and determining whether a difference between the existing frequency response of the playback device in the given system and a predetermined frequency response for the playback device is greater than a predetermined distortion threshold. If it is determined that the difference between the existing frequency response of the playback device and the predetermined frequency response for the playback device is greater than the predetermined distortion threshold, then the existing playback configuration of the playback device is changed to an updated playback configuration of the playback device and the playback device plays audio content according to the updated playback configuration.
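The distortion test described above might be sketched as a per-band comparison. The band layout, dB values, and the mean-absolute-deviation metric with a 1 dB threshold are hypothetical; the patent does not specify them.

```python
# Hedged sketch: decide whether a playback device needs a new playback
# configuration by comparing its measured frequency response against a
# predetermined one. The metric and threshold are illustrative assumptions.

def needs_recalibration(existing_response, target_response, threshold_db):
    """Return True if the mean absolute per-band deviation (dB) exceeds
    the predetermined distortion threshold."""
    diffs = [abs(e - t) for e, t in zip(existing_response, target_response)]
    return sum(diffs) / len(diffs) > threshold_db

existing = [0.0, -6.0, -1.0, 0.5]   # measured dB per band (a barrier dips one band)
target   = [0.0,  0.0,  0.0, 0.0]   # predetermined flat response
recalibrate = needs_recalibration(existing, target, threshold_db=1.0)
```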

Technologies for multi-randomized audio-visual entrainment
11536965 · 2022-12-27

Technologies that stimulate the mammalian central nervous system are described. A method includes: (a) causing a pair of glasses to be worn by a user, where the pair of glasses hosts/interacts with a processor, a first light source, a second light source, a first sound source, and a second sound source; (b) causing the processor to read sets of parameters that include information regarding base frequencies, variability, frequency ranges, and time ranges, where the frequency ranges are positively and negatively off the base frequencies; and (c) causing the processor to request the light sources to (i) flash light to the visual fields (left/right) of each eye according to selected frequencies for a duration of time and to (ii) pulse sound to each ear according to selected frequencies for a duration of time such that an audio-visual entrainment (AVE) occurs that causes neurons and glia to respond dynamically to the AVE. The frequencies are randomly selected from a frequency range and the duration of stimulation time is randomly selected from a time range.
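The random selection in steps (b)–(c) might be sketched as below. All parameter values (a 10 Hz base frequency, a ±2 Hz offset range, 30–120 s durations) are illustrative assumptions, not values from the patent.

```python
import random

# Hedged sketch: draw a stimulation frequency from a range positively and
# negatively off a base frequency, and a stimulation duration from a time
# range. All numeric parameters are hypothetical.

def select_stimulus(base_hz, offset_hz, t_min_s, t_max_s, rng=random):
    """Pick a frequency within +/- offset_hz of base_hz and a duration."""
    freq = rng.uniform(base_hz - offset_hz, base_hz + offset_hz)
    duration = rng.uniform(t_min_s, t_max_s)
    return freq, duration

freq, duration = select_stimulus(base_hz=10.0, offset_hz=2.0,
                                 t_min_s=30.0, t_max_s=120.0)
```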

Audio component adjustment based on location

In one aspect, a device may include at least one processor and storage accessible to the at least one processor. The storage may include instructions executable by the at least one processor to identify at least one characteristic associated with audio as sensed at a first location, with the audio being produced at a second location that is different from the first location. The instructions may also be executable to, based on the at least one identified characteristic, adjust a first volume level of a first component of the audio in a first frequency and/or first frequency band but not a second volume level of a second component of the audio in a second frequency and/or second frequency band of the audio.
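The selective adjustment described above (one band's volume changed, the other untouched) might be sketched as follows. The two-band layout and dB values are hypothetical, and a real device would filter the audio signal itself rather than scale precomputed band levels.

```python
# Hedged sketch: adjust the volume level of one frequency band's
# component of the audio but not another band's. Bands and gains are
# illustrative assumptions.

def adjust_band_levels(band_levels_db, band_to_adjust, gain_db):
    """Return new per-band levels with gain applied to a single band."""
    return [lvl + gain_db if i == band_to_adjust else lvl
            for i, lvl in enumerate(band_levels_db)]

# Boost the low band by 3 dB; the high band is left unchanged:
levels = adjust_band_levels([-10.0, -12.0], band_to_adjust=0, gain_db=3.0)
```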

CONTENT AND ENVIRONMENTALLY AWARE ENVIRONMENTAL NOISE COMPENSATION

Some implementations involve receiving, by a control system and via an interface system, a content stream that includes audio data, determining a content type corresponding to the content stream and determining, based at least in part on the content type, a noise compensation method. Some examples involve performing the noise compensation method on the audio data to produce noise-compensated audio data, rendering the noise-compensated audio data for reproduction via a set of audio reproduction transducers of the audio environment, to produce rendered audio signals, and providing the rendered audio signals to at least some audio reproduction transducers of the audio environment.
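The content-type-dependent selection might be sketched as a simple dispatch. The content-type labels and method names here are hypothetical placeholders, not terms from the patent.

```python
# Hedged sketch: choose a noise compensation method based at least in
# part on the detected content type. Labels and method names are
# illustrative assumptions.

def choose_noise_compensation(content_type: str) -> str:
    methods = {
        "speech": "dialogue-protective compensation",
        "music": "timbre-preserving compensation",
    }
    return methods.get(content_type, "default compensation")

method = choose_noise_compensation("music")
```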

Method and an audio processing unit for detecting a tone

A method for detecting a prominent tone of an input audio signal includes establishing a first analysis audio signal based on the input audio signal and establishing a second analysis audio signal based on the input audio signal, wherein at least one of the first and second analysis audio signals is established by applying an analysis audio filter to the input audio signal; comparing the first analysis audio signal and the second analysis audio signal to obtain an energy level contrast; and determining a representation of the prominent tone by converting the energy level contrast with a contrast-to-frequency mapping function.
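One concrete instantiation of this idea can be sketched with two very simple analysis filters: the identity (the signal itself) and a first difference (a crude high-pass). For a pure tone, the energy ratio between the two is a known function of frequency, which serves as the contrast-to-frequency mapping. These particular filters and the arcsine mapping are assumptions for illustration; the patent does not specify them.

```python
import math

# Hedged sketch: estimate a prominent tone's frequency from the energy
# contrast between the signal (first analysis filter: identity) and its
# first difference (second analysis filter: high-pass). For a sinusoid,
# E_diff / E_signal = 4 * sin^2(pi * f / fs), which is inverted below.

def detect_tone(samples, fs):
    e_sig = sum(s * s for s in samples[1:])                          # signal energy
    e_hp = sum((a - b) ** 2 for a, b in zip(samples[1:], samples[:-1]))  # diff energy
    ratio = e_hp / e_sig                                             # energy level contrast
    # contrast-to-frequency mapping for a pure tone:
    return fs / math.pi * math.asin(min(math.sqrt(ratio) / 2.0, 1.0))

fs = 8000.0
tone = [math.sin(2 * math.pi * 440.0 * n / fs) for n in range(800)]
freq = detect_tone(tone, fs)   # close to 440 Hz
```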

Method and apparatus for output signal equalization between microphones
11528556 · 2022-12-13

A method, apparatus and computer program product provide an improved filter calibration procedure to reliably equalize the long term spectrum of the audio signals captured by first and second microphones that are at different locations relative to a sound source and/or are of different types. In the context of a method, the signals captured by the first and second microphones are analyzed. The method also determines one or more quality measures based on the analysis. In an instance in which the one or more quality measures satisfy a predefined condition, the method determines a frequency response of the signals captured by the first and second microphones. The method also determines a difference between the frequency response of the signals captured by the first and second microphones and processes the signals captured by the first microphone for filtering relative to the signals captured by the second microphone based upon the difference.
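The difference-and-filter step might be sketched per frequency band as below. The band layout, dB values, and the all-or-nothing quality gate are simplified assumptions; a real implementation would smooth and regularize the gains.

```python
# Hedged sketch: derive per-band gains that filter mic 1 toward mic 2's
# long-term spectrum, gated by a quality condition. Values are
# illustrative assumptions.

def equalization_gains_db(spectrum1_db, spectrum2_db, quality_ok=True):
    """Per-band gains (dB) to apply to mic 1 so its long-term spectrum
    matches mic 2; identity (0 dB) if the quality gate fails."""
    if not quality_ok:
        return [0.0] * len(spectrum1_db)
    return [s2 - s1 for s1, s2 in zip(spectrum1_db, spectrum2_db)]

gains = equalization_gains_db([-3.0, -6.0, -9.0], [-3.0, -4.0, -12.0])
```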

APPARATUS, METHOD AND COMPUTER-READABLE STORAGE MEDIUM FOR MIXING COLLECTED SOUND SIGNALS OF MICROPHONES
20220394382 · 2022-12-08

An apparatus comprising: one or more processors; and one or more memory devices configured to store one or more computer programs executable by the one or more processors. The one or more programs, when executed by the one or more processors, cause the apparatus to function as: a setting unit configured to set an angle section at a single sound collection position selected by a user; an analysis unit configured to convert each of M collected sound signals into a frequency component; a beamforming unit configured to multiply the M frequency components obtained through conversion by the analysis unit by respective beamforming matrices to generate a plurality of two-channel acoustic signals; and a signal generation unit configured to synthesize the acoustic signals per channel and output an acoustic signal for every channel.
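The beamforming unit's core operation, for one frequency bin, might be sketched as a complex matrix-vector product. The 2×M weight matrix and the microphone values are hypothetical; real weights would come from steering-vector design per bin.

```python
# Hedged sketch: multiply the M per-microphone frequency components for
# one bin by a 2-row beamforming matrix to obtain a two-channel output.
# Weights and inputs are illustrative assumptions.

def beamform(freq_components, bf_matrix):
    """freq_components: list of M complex values for one frequency bin.
    bf_matrix: 2 rows of M complex weights. Returns [left, right]."""
    return [sum(w * x for w, x in zip(row, freq_components))
            for row in bf_matrix]

mics = [1 + 0j, 0 + 1j]                 # M = 2 microphones, one bin each
weights = [[0.5, 0.5], [0.5, -0.5]]     # hypothetical left/right steering
left, right = beamform(mics, weights)
```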

Methods and systems for assessing insertion position of hearing instrument

A speaker of a hearing instrument generates a sound that includes a range of frequencies. Furthermore, a microphone of the hearing instrument measures an acoustic response to the sound. A processing system classifies, based on the acoustic response to the sound, a depth of insertion of an in-ear assembly of the hearing instrument into an ear canal of a user. Additionally, the processing system generates an indication based on the depth of insertion of the in-ear assembly of the hearing instrument into the ear canal of the user.
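The classification step might be sketched as a threshold on the measured response: a deeply inserted in-ear assembly seals the canal and raises the low-frequency level. The single low-band feature, the thresholds, and the class labels below are illustrative assumptions; the patent's classifier is not specified here.

```python
# Hedged sketch: classify insertion depth from the acoustic response.
# Feature, thresholds, and labels are hypothetical.

def classify_insertion_depth(low_band_db):
    """Map the measured low-frequency response level to a depth class."""
    if low_band_db >= 10.0:
        return "fully inserted"
    if low_band_db >= 3.0:
        return "partially inserted"
    return "not inserted"

label = classify_insertion_depth(low_band_db=12.0)
```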