Patent classification: H04R2430/21
FIRST-ORDER DIFFERENTIAL MICROPHONE ARRAY WITH STEERABLE BEAMFORMER
A first-order differential microphone array (FODMA) with a steerable beamformer is constructed by specifying a target beampattern for the FODMA at a steering angle θ and then decomposing the target beampattern into a first sub-beampattern and a second sub-beampattern based on the steering angle θ. A first sub-beamformer and a second sub-beamformer are generated to each filter signals from microphones of the FODMA, wherein the first sub-beamformer is associated with the first sub-beampattern and the second sub-beamformer with the second sub-beampattern. The steerable beamformer is then generated from the first and second sub-beamformers. Decomposing the target beampattern includes dividing it into the sum of a first-order cosine (cardioid) first sub-beampattern and a first-order sinusoidal (dipole) second sub-beampattern.
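The cosine/sine split described above follows from the angle-difference identity cos(θ − θ_s) = cos θ_s cos θ + sin θ_s sin θ. A minimal numerical sketch, with illustrative coefficients a0, a1 for a steered cardioid and a hypothetical function name:

```python
import numpy as np

def decompose_beampattern(a0, a1, theta_s):
    """Split a first-order target beampattern B(theta) = a0 + a1*cos(theta - theta_s)
    into a cosine (cardioid-family) part and a sine (dipole) part, using
    cos(theta - theta_s) = cos(theta_s)*cos(theta) + sin(theta_s)*sin(theta)."""
    cos_part = lambda th: a0 + a1 * np.cos(theta_s) * np.cos(th)   # first sub-beampattern
    sin_part = lambda th: a1 * np.sin(theta_s) * np.sin(th)        # second sub-beampattern
    return cos_part, sin_part

theta_s = np.deg2rad(40.0)     # steering angle (illustrative)
a0, a1 = 0.5, 0.5              # cardioid coefficients
cos_p, sin_p = decompose_beampattern(a0, a1, theta_s)

theta = np.linspace(0.0, 2.0 * np.pi, 361)
target = a0 + a1 * np.cos(theta - theta_s)
# the two sub-beampatterns sum back to the steered target pattern
assert np.allclose(cos_p(theta) + sin_p(theta), target)
```

Each sub-beampattern is then realized by its own fixed sub-beamformer, so steering only rescales the two outputs by cos θ_s and sin θ_s rather than redesigning the filters.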
Earloop microphone
A headset implements an earloop microphone and includes a housing. An earloop of the headset secures the headset to an ear of a user. A first microphone is acoustically coupled to a first opening in the housing. A second microphone is acoustically coupled to a second opening in the housing. A third microphone is acoustically coupled to a third opening in the earloop.
APPARATUS AND METHOD FOR ESTIMATING DIRECTION OF SOUND BY USING ACOUSTIC SENSOR
Provided is a direction estimating apparatus using an acoustic sensor, the direction estimating apparatus including a non-directional acoustic sensor, a plurality of directional acoustic sensors provided adjacent to the non-directional acoustic sensor, and a processor configured to obtain a first output signal from the non-directional acoustic sensor and a plurality of second output signals from the plurality of directional acoustic sensors, and to estimate a direction of a sound source within an error range of −5 degrees to +5 degrees by comparing magnitudes of the second output signals and phase information between the first output signal and one of the second output signals.
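One plausible reading of the magnitude-plus-phase comparison can be sketched under strong simplifying assumptions: two ideal dipole sensors oriented at 0° and 90° whose outputs are normalized by the omnidirectional sensor. The sensor geometry and function name are assumptions, not from the abstract:

```python
import numpy as np

def estimate_direction_deg(theta_deg):
    """Sketch under simplifying assumptions: two dipole sensors oriented at
    0 and 90 degrees output cos(theta) and sin(theta) times the source
    amplitude; dividing by the omnidirectional sensor output removes the
    unknown amplitude, and atan2 recovers the full 0-360 degree direction."""
    theta = np.deg2rad(theta_deg)
    amp = 1.0                     # source amplitude (unknown in practice)
    omni = amp                    # non-directional sensor output
    d0 = amp * np.cos(theta)      # dipole oriented at 0 degrees
    d90 = amp * np.sin(theta)     # dipole oriented at 90 degrees
    return np.rad2deg(np.arctan2(d90 / omni, d0 / omni)) % 360.0
```

In this idealized, noise-free model the estimate is exact; the ±5 degree error range of the claim accounts for real sensor mismatch and noise.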
Direction finding system using MEMS sound sensors
Provided is a Direction Finding Acoustic Sensor comprising a first sound sensor and a second sound sensor, where the first and second sound sensors are generally maintained in reflectional symmetry about an axis of symmetry. A digital device in data communication with both sound sensors receives a signal P_L from the first sensor and a signal P_R from the second sensor, each based on the displacement of the respective sensor. The digital device evaluates the difference between α_1·P_L and α_2·P_R relative to their sum, and provides an angle θ_S corresponding to the result. Typically, the Direction Finding Acoustic Sensor communicates the determined θ_S using some appropriate reference frame, such as the axis of symmetry. The Direction Finding Acoustic Sensor is capable of providing an unambiguous direction within an angle of ±(90°−θ_off) of the axis of symmetry.
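The difference-over-sum evaluation can be illustrated with an assumed cosine response for each sensor, tilted ±θ_off from the axis of symmetry; under that assumption the ratio reduces to tan(θ_off)·tan(θ), so the angle follows from an arctangent. The response model and function name are assumptions, not the patent's exact mechanism:

```python
import numpy as np

def direction_from_pair(theta_deg, theta_off_deg=30.0, alpha1=1.0, alpha2=1.0):
    """Sketch assuming each sensor has a cosine response tilted +/-theta_off
    from the axis of symmetry: P_L = cos(theta_off - theta),
    P_R = cos(theta_off + theta). For matched gains alpha1 = alpha2,
    (a1*P_L - a2*P_R) / (a1*P_L + a2*P_R) = tan(theta_off) * tan(theta),
    so theta follows from a single arctangent."""
    th, off = np.deg2rad(theta_deg), np.deg2rad(theta_off_deg)
    p_l = np.cos(off - th)                        # first sensor signal
    p_r = np.cos(off + th)                        # second sensor signal
    ratio = (alpha1 * p_l - alpha2 * p_r) / (alpha1 * p_l + alpha2 * p_r)
    return np.rad2deg(np.arctan(ratio / np.tan(off)))
```

The single-valued arctangent is consistent with the stated unambiguous range of ±(90°−θ_off) about the axis of symmetry.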
METHOD FOR OPERATING A BINAURAL HEARING SYSTEM AND BINAURAL HEARING SYSTEM
In a method for operating a binaural hearing system having a first hearing aid and a second hearing aid, the first hearing aid generates a first reference signal from a sound signal using a first reference microphone, and the second hearing aid generates a second reference signal from the sound signal using a second reference microphone. Both reference signals are used to derive a first binaural beamformer signal. For at least a number of frequency bands, the first reference signal is used to derive a first phase. For those frequency bands, a first output signal is derived from the first binaural beamformer signal and the first phase.
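One plausible reading of combining the beamformer signal with the locally derived phase, sketched on complex per-band values (e.g. STFT bins); the function name is hypothetical:

```python
import numpy as np

def combine_magnitude_phase(beamformer_bands, reference_bands):
    """Per frequency band, keep the magnitude of the binaural beamformer
    signal but re-impose the phase of the local reference microphone,
    which preserves the wearer's own interaural phase cues while
    retaining the beamformer's noise reduction. Inputs are complex
    per-band spectra of equal shape."""
    phase = np.angle(reference_bands)            # first phase, per band
    return np.abs(beamformer_bands) * np.exp(1j * phase)
```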
System and method for modifying signals to determine an incidence angle of an acoustic wave
Systems and methods for virtually coupled resonators to determine an incidence angle of an acoustic wave are described herein. In one example, a system includes a processor and first and second transducers in communication with the processor. The first transducer produces a first signal in response to detecting an acoustic wave, while the second transducer produces a second signal in response to detecting the acoustic wave. The system may also include a memory in communication with the processor and having machine-readable instructions that cause the processor to modify the first signal and the second signal using a virtual resonator mapping function to generate a modified first signal and a modified second signal. The virtual resonator mapping function changes the first signal and the second signal to be representative of signals produced by transducers located within a hypothetical chamber of a hypothetical resonator.
Acquisition of spatialized sound data
A data-processing method for determining at least one spatial coordinate of a sound source emitting a sound signal, in a three-dimensional space, includes the following steps: obtaining at least one first signal and one second signal from the sound signal, collected according to separate directivities by a first sensor and a second sensor; deducing from the first and second signals an expression of at least one first spatial coordinate of the sound source, the expression comprising an uncertainty; determining additional information relating to the first spatial coordinate of the sound source, from a comparison between the respective features of the signals collected by the first and second sensors; and determining the first spatial coordinate of the sound source on the basis of the expression and the additional information.
Directional audio capture
Systems and methods for improving the performance of a directional audio capture system are provided. An example method includes correlating phase plots of at least two audio inputs captured by at least two microphones. The method can further include generating, based on the correlation, estimates of salience at different directional angles to localize the direction of a sound source. Based on these estimates, cues, including attenuation levels, are provided to the directional audio capture system. The rate of change of the attenuation levels is controlled by attack and release time constants to avoid sound artifacts. The method also includes determining a mode based on the absence or presence of one or more peaks in the salience estimates, and configuring the directional audio capture system based on the determined mode.
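The attack/release control of the attenuation levels can be sketched as a one-pole smoother with asymmetric coefficients; the coefficient values and function name are illustrative, not from the patent:

```python
def smooth_attenuation(targets, attack=0.3, release=0.05, start=1.0):
    """Sketch of attack/release gain smoothing: the applied gain moves toward
    the target gain with a fast coefficient when attenuating (attack) and a
    slow one when recovering (release), avoiding audible pumping artifacts.
    Coefficients are per-frame smoothing factors, values illustrative."""
    gain, out = start, []
    for target in targets:
        coeff = attack if target < gain else release  # attenuate fast, recover slowly
        gain += coeff * (target - gain)               # one-pole smoother step
        out.append(gain)
    return out
```

With these settings the gain drops quickly when a noise source appears and ramps back gradually once it stops, which is the usual motivation for separate attack and release constants.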
MICROPHONE DEVICE WITH INTEGRATED PRESSURE SENSOR
A microphone device comprises a microphone die including a first microphone motor and a second microphone motor, an acoustic integrated circuit structured to process signals produced by the first microphone motor and the second microphone motor, and a sensor die stacked on top of the acoustic integrated circuit, wherein the sensor die comprises a pressure sensor. Another microphone device comprises a microphone die including a first microphone motor and a second microphone motor, and an integrated circuit die. The integrated circuit die comprises an acoustic integrated circuit structured to process signals produced by the first microphone motor and the second microphone motor, a pressure sensor, and a pressure integrated circuit structured to process signals produced by the pressure sensor.
Linear differential microphone arrays with steerable beamformers
An Nth-order linear differential microphone array (LDMA) with a steerable beamformer may be constructed by specifying a target beampattern for the LDMA at a steering angle θ. An Nth-order polynomial associated with the target beampattern may then be generated. A relationship between the nulls of the polynomial and the steering angle θ is determined, and a value of one of the nulls is then determined based on N−1 assigned values for the other nulls and that relationship. The steerable beamformer may be generated based on the determined null value and the N−1 assigned null values. The N−1 assigned null values may be associated with the N−1 nulls of the polynomial that are of less than Nth order, and the determined null value may be associated with the null of the polynomial that is of Nth order.
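One way to make the null/steering-angle relationship concrete, offered as an assumption rather than the patent's exact derivation, is to require a beampattern extremum at the steering angle. Writing the beampattern as a polynomial B(x) = Π(x − x_n) in x = cos θ, the condition dB/dx = 0 at x_s = cos θ_s is linear in the one remaining null:

```python
import numpy as np

def solve_remaining_null(assigned_nulls, theta_s_deg):
    """With B(x) = (x - x_N) * P(x), where P collects the N-1 assigned nulls,
    requiring dB/dx = 0 at x_s = cos(theta_s) gives
    P(x_s) + (x_s - x_N) * P'(x_s) = 0, so x_N = x_s + P(x_s) / P'(x_s).
    This steering condition is an illustrative assumption."""
    x_s = np.cos(np.deg2rad(theta_s_deg))
    xs = np.asarray(assigned_nulls, dtype=float)
    p = np.prod(x_s - xs)                       # P(x_s)
    # P'(x_s) = sum_k prod_{n != k} (x_s - x_n)
    dp = sum(np.prod(np.delete(x_s - xs, k)) for k in range(len(xs)))
    return x_s + p / dp                         # the determined Nth null
```

For N = 2 this reduces to x_2 = 2·cos θ_s − x_1: one null is assigned freely and the other is fixed by the steering angle, mirroring the abstract's N−1 assigned values plus one determined value.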