H04R2430/25

Wearable communication enhancement device
09848260 · 2017-12-19

Embodiments disclosed herein may include a wearable apparatus including a frame having a memory and processor associated therewith. The apparatus may include a camera associated with the frame and in communication with the processor, the camera configured to track an eye of a wearer. The apparatus may also include at least one microphone associated with the frame. The at least one microphone may be configured to receive a directional instruction from the processor. The directional instruction may be based upon an adaptive beamforming analysis performed in response to an eye movement detected by the camera. The apparatus may also include a speaker associated with the frame configured to provide an audio signal received at the at least one microphone to the wearer.
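The core idea of steering the microphone array toward the wearer's gaze can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the linear array geometry, the far-field assumption, and the function names (`steering_delays`, `delay_and_sum_weights`) are all hypothetical.

```python
import numpy as np

def steering_delays(gaze_angle_deg, mic_positions_m, c=343.0):
    """Far-field arrival-delay offsets (s) toward the gaze direction
    for a linear microphone array along the frame."""
    theta = np.deg2rad(gaze_angle_deg)
    return mic_positions_m * np.sin(theta) / c

def delay_and_sum_weights(gaze_angle_deg, mic_positions_m, freq_hz, c=343.0):
    """Narrowband weights that phase-align the array toward the gaze angle."""
    tau = steering_delays(gaze_angle_deg, mic_positions_m, c)
    return np.exp(-2j * np.pi * freq_hz * tau) / len(mic_positions_m)

# Hypothetical example: 4 mics spaced 2 cm apart on the frame,
# the tracked gaze pointing 30 degrees to the wearer's right.
mics = np.arange(4) * 0.02
w = delay_and_sum_weights(30.0, mics, freq_hz=1000.0)
```

A real device would recompute these weights each time the camera reports a new gaze direction, per frequency band.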

HEARING AID COMPRISING A BEAM FORMER FILTERING UNIT COMPRISING A SMOOTHING UNIT

A hearing aid comprises a resulting beamformer (Y) for providing a resulting beamformed signal Y_BF based on first and second electric input signals IN₁ and IN₂, first and second sets of complex, frequency-dependent weighting parameters W₁₁(k), W₁₂(k) and W₂₁(k), W₂₂(k), and a resulting complex, frequency-dependent adaptation parameter β(k). β(k) may be determined as ⟨C₂*·C₁⟩/(⟨|C₂|²⟩ + c), where * denotes complex conjugation, ⟨·⟩ denotes the statistical expectation operator, and c is a constant. The adaptive beamformer filtering unit (BFU) comprises a smoothing unit implementing the statistical expectation operator by smoothing the complex expression C₂*·C₁ and the real expression |C₂|² over time. Alternatively, β(k) may be determined from the following expression

β = (w_C1^H · C_v · w_C2) / (w_C2^H · C_v · w_C2),

where w_C1 and w_C2 are the beamformer weights representing the first (C₁) and second (C₂) beamformers, respectively, C_v is a noise covariance matrix, and H denotes Hermitian transposition. Corresponding methods of operating a hearing aid, and a hearing aid in which β(k) is smoothed using adaptive covariance smoothing, are disclosed.
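The alternative expression for β maps directly onto a few lines of NumPy. A minimal sketch, assuming 2-element complex weight vectors and a Hermitian noise covariance matrix; the names `beta` and `smooth` are illustrative, and the constant-α recursive average merely stands in for the patent's smoothing unit:

```python
import numpy as np

def beta(w_c1, w_c2, C_v):
    """beta = (w_C1^H C_v w_C2) / (w_C2^H C_v w_C2)."""
    num = w_c1.conj() @ C_v @ w_c2
    den = w_c2.conj() @ C_v @ w_c2
    return num / den

def smooth(prev, new, alpha=0.9):
    """First-order recursive average standing in for the expectation <.>."""
    return alpha * prev + (1.0 - alpha) * new
```

Note that when the two beamformers share the same weights, β reduces to 1, which is a quick sanity check on the expression.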

Multichannel Head-Trackable Microphone
20170347193 · 2017-11-30

A production-workflow-optimized multichannel virtual reality microphone that has its own rendering software, allowing recording, rendering, and playback of immersive, head-trackable positional audio for 360 video, gaming, and virtual reality applications. The multichannel microphone, used to record multiple binaural sound perspectives, has eight microphones coupled to a rotatable disc frictionally mounted on the outside of a truncated spherical shell; an internal, detachable clamp for attaching the shell to a vertical pole stand; and easily accessible microphone output connections configured as four stereo microphone pairs spaced closely to an average set of human ears. The microphone output connections are located on an internal support member and are accessible through upper and lower lids and a door. Four small baffles simulating the pinna of the human ear reside on the disc, separating the paired microphones.

SYSTEM AND METHOD FOR SPEECH ENHANCEMENT USING A COHERENT TO DIFFUSE SOUND RATIO
20170330580 · 2017-11-16

Embodiments of the present disclosure may include a system and method for speech enhancement using the coherent-to-diffuse sound ratio. Embodiments may include receiving an audio signal at one or more microphones and controlling one or more adaptive filters of a beamformer using a coherent-to-diffuse ratio ("CDR").
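To make the CDR concrete: for a two-microphone pair, an ideal diffuse field has a known sinc-shaped spatial coherence, while a single coherent (direct-path) source has coherence of magnitude one. A minimal sketch of a CDR estimate under the simplifying assumption of a broadside source (so the coherent-field coherence Γs = 1); the function names and the non-negativity clamp are illustrative choices, not the patent's estimator:

```python
import numpy as np

def diffuse_coherence(f_hz, d_m, c=343.0):
    """Spatial coherence of an ideal diffuse field at mic spacing d_m."""
    return np.sinc(2.0 * f_hz * d_m / c)  # np.sinc(x) = sin(pi*x)/(pi*x)

def cdr_estimate(gamma_x, gamma_n, eps=1e-12):
    """CDR from the measured coherence gamma_x, assuming Gamma_s = 1:
    gamma_x = (CDR + gamma_n)/(CDR + 1)  =>  CDR = (gamma_n - gamma_x)/(gamma_x - 1)."""
    return np.maximum((gamma_n - gamma_x) / (gamma_x - 1.0 - eps), 0.0)
```

A high CDR (mostly direct speech) could then slow or freeze the beamformer's adaptive filters, while a low CDR (mostly diffuse noise or reverberation) lets them adapt, which is one plausible way to use the ratio for control.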

Selective amplification of speaker of interest
11496842 · 2022-11-08

A system may include a camera configured to capture images from an environment of a user and a microphone configured to capture sounds from an environment of the user. The system may also include a processor programmed to: receive the images; identify a representation of a first individual and a representation of a second individual in the images; receive, from the microphone, a first audio signal associated with a voice of the first individual and a second audio signal associated with a voice of the second individual; detect an amplification criteria indicative of a voice amplification priority between the first individual and the second individual; selectively amplify the first audio signal relative to the second audio signal when the amplification criteria indicates that the first individual has voice amplification priority over the second individual; and cause transmission of the selectively amplified first audio signal to a hearing interface device.
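The selective-amplification step itself is simple once the two voices have been separated into distinct audio signals. A minimal sketch, with an assumed fixed boost in decibels; the function name, the mixing strategy, and the 12 dB default are hypothetical, not taken from the patent:

```python
import numpy as np

def selectively_amplify(first, second, first_has_priority, gain_db=12.0):
    """Boost the prioritized speaker's signal relative to the other
    before mixing for the hearing interface device."""
    g = 10.0 ** (gain_db / 20.0)
    return g * first + second if first_has_priority else first + g * second
```

In the patented system the priority decision would come from the amplification criteria (e.g., who the user is looking at), detected from the camera images.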

ELECTRONIC DEVICE AND REVERBERATION REMOVAL METHOD THEREFOR

Provided are an electronic device and a reverberation removal method therefor. The electronic device comprises: a plurality of microphone units for receiving a user's voice; a reverberation removal unit for removing a reverberation component of the user's voice received from the plurality of microphone units so as to acquire an original component of the user's voice; a reverberation information acquisition unit for acquiring information on the intensity of the reverberation component of the user's voice; and a post-processing unit for additionally removing a reverberation component from the original component acquired from the reverberation removal unit on the basis of the information on the intensity of the reverberation component.
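One common way to realize such an intensity-guided post-processing stage is spectral subtraction: subtract an amount proportional to the estimated reverberation intensity from the dereverberated magnitude spectrum while keeping the phase. This is a generic sketch of that idea, not the patent's specific post-processor; the function name and the spectral floor are assumptions:

```python
import numpy as np

def post_process(dereverb_spec, reverb_intensity, floor=0.1):
    """Subtract a residual proportional to the measured reverberation
    intensity from the dereverberated magnitude spectrum, keeping phase.
    A spectral floor prevents over-subtraction artifacts."""
    mag = np.abs(dereverb_spec)
    cleaned = np.maximum(mag - reverb_intensity, floor * mag)
    return cleaned * np.exp(1j * np.angle(dereverb_spec))
```

The stronger the measured reverberation component, the more is subtracted, which matches the role the abstract assigns to the reverberation information acquisition unit.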

System and methods thereof for processing sound beams
09788108 · 2017-10-10

A system and method for processing sounds are provided. The sound processing system comprises a sound sensing unit including a plurality of microphones, each microphone providing a non-manipulated sound signal; a beam synthesizer including a plurality of filters, wherein each filter corresponds to at least one parameter for generating at least one sound beam; a sound analyzer connected to the sound sensing unit and to the beam synthesizer, wherein the sound analyzer is configured to generate at least one manipulated sound signal responsive to the plurality of filters and to the non-manipulated sound signals provided by at least two of the microphones.
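The beam synthesizer described above is essentially a filter-and-sum beamformer: each microphone's non-manipulated signal passes through its own filter, and the filtered signals are summed into one manipulated sound beam. A minimal time-domain sketch (the function name and array shapes are illustrative):

```python
import numpy as np

def filter_and_sum(mic_signals, beam_filters):
    """Apply each per-microphone beam filter (FIR taps) and sum the
    outputs, yielding one manipulated (beamformed) signal.
    mic_signals: (n_mics, n_samples); beam_filters: (n_mics, n_taps)."""
    n_out = mic_signals.shape[1] + beam_filters.shape[1] - 1
    out = np.zeros(n_out)
    for x, h in zip(mic_signals, beam_filters):
        out += np.convolve(x, h)
    return out
```

Different filter sets correspond to different beam parameters, so the same microphone signals can be reused to synthesize multiple sound beams.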

Ear-mountable listening device having a ring-shaped microphone array for beamforming

An ear-mountable listening device includes an adaptive phased array of microphones, a speaker, and electronics. The microphones are physically arranged into a ring pattern to capture sounds emanating from an environment. Each of the microphones is configured to output one of a plurality of first audio signals that is representative of the sounds captured by a respective one of the microphones. The speaker is arranged to emit audio into an ear. The electronics are coupled to the adaptive phased array and the speaker and include logic that when executed causes the ear-mountable listening device to receive a user input identifying a first sound for cancelling or amplifying, steer a null or a lobe of the adaptive phased array based upon the user input, and generate a second audio signal that drives the speaker based upon a combination of one or more of the first audio signals.
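Steering a null is the simplest to show with a two-element sub-array: choose the second weight so the two microphone signals cancel exactly for a plane wave arriving from the unwanted direction. A narrowband sketch under far-field assumptions; the names and the two-element simplification are illustrative, not the device's ring-array processing:

```python
import numpy as np

def null_weights(null_angle_deg, d_m, f_hz, c=343.0):
    """Two-element weights placing a spatial null at the given angle."""
    tau = d_m * np.sin(np.deg2rad(null_angle_deg)) / c
    return np.array([1.0, -np.exp(2j * np.pi * f_hz * tau)])

def array_response(w, angle_deg, d_m, f_hz, c=343.0):
    """Response of the two-element pair to a plane wave from angle_deg."""
    tau = d_m * np.sin(np.deg2rad(angle_deg)) / c
    return w @ np.array([1.0, np.exp(-2j * np.pi * f_hz * tau)])
```

The full ring array gives the device more degrees of freedom, so it can steer a null toward the identified sound while keeping a lobe toward everything else (or vice versa for amplification).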

Position directed acoustic array and beamforming methods

Methods and systems are provided for receiving desired sounds. The system includes a position sensor configured to determine an occupant position of an occupant engaging in speech within a defined space and transmit the speaking occupant position. A plurality of microphones are configured to receive sound from within the defined space and transmit audio signals corresponding to the received sound. A processor, in communication with the position sensor and the microphones, is configured to receive the speaking occupant position and the audio signals, apply a beamformer to the audio signals to direct a microphone beam toward the occupant position, and generate a beamformer output signal.
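Because the occupant position is known from the sensor rather than estimated acoustically, the beamformer can compute near-field steering delays geometrically. A minimal sketch of that step plus an integer-sample delay-and-sum; the function names, the sample-rounding, and the averaging are illustrative assumptions:

```python
import numpy as np

def occupant_delays(occupant_pos, mic_positions, c=343.0):
    """Relative arrival delays (s) of the occupant's speech at each mic,
    from the sensed position (near-field, geometric path lengths)."""
    dists = np.linalg.norm(mic_positions - occupant_pos, axis=1)
    return (dists - dists.min()) / c

def delay_and_sum(frames, delays, fs):
    """Advance each later-arriving mic signal by its (rounded) sample
    delay and average, directing the beam at the occupant."""
    n = frames.shape[1]
    out = np.zeros(n)
    for x, tau in zip(frames, delays):
        k = int(round(tau * fs))
        out[:n - k] += x[k:]
    return out / len(frames)
```

Each time the position sensor reports that a different occupant is speaking, the delays are recomputed and the beam follows.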

Earphone signal processing method and system, and earphone

An earphone signal processing method includes: acquiring a signal picked up by a first microphone of an earphone at a position close to the mouth outside an ear canal, a signal picked up by a second microphone of the earphone at a position away from the mouth outside the ear canal, and a signal picked up by a third microphone located in a cavity formed by the earphone and the ear canal; performing dual-microphone noise reduction on the signals picked up by the first and second microphones to obtain a first intermediate signal; performing dual-microphone noise reduction on the signals picked up by the second and third microphones to obtain a second intermediate signal; fusing the first and second intermediate signals to obtain a fused voice signal; and outputting the fused voice signal.
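The two dual-microphone stages and the fusion step can be sketched generically. Here the noise reduction is shown as spectral magnitude subtraction with the farther mic as the noise reference, and the fusion as a fixed weighted average; both are stand-ins chosen for illustration, since the abstract does not specify the algorithms:

```python
import numpy as np

def dual_mic_reduce(primary_spec, reference_spec, alpha=1.0, floor=0.05):
    """Magnitude subtraction using the second mic's spectrum as a noise
    reference; keeps the primary mic's phase, with a spectral floor."""
    mag = np.abs(primary_spec)
    est = np.maximum(mag - alpha * np.abs(reference_spec), floor * mag)
    return est * np.exp(1j * np.angle(primary_spec))

def fuse(first_intermediate, second_intermediate, w=0.5):
    """Fuse the two intermediate signals with a fixed weight; a real
    system would likely make w frequency- and SNR-dependent."""
    return w * first_intermediate + (1.0 - w) * second_intermediate
```

The in-canal third microphone is well shielded from external noise but band-limited, which is why fusing its intermediate signal with the outer pair's result can outperform either stage alone.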