Microphone Placement in Open Ear Hearing Assistance Devices
20210044888 · 2021-02-11
Inventors
- Andrew Todd Sabin (Chicago, IL, US)
- Ryan C. Struzik (Hopkinton, MA, US)
- Aric J. Wax (Watertown, MA, US)
- Daniel M. Gauger, JR. (Berlin, MA, US)
CPC classification
H04R1/028 (ELECTRICITY)
H04R2430/25 (ELECTRICITY)
H04R2205/022 (ELECTRICITY)
H04R1/1066 (ELECTRICITY)
International classification
Abstract
A head-worn acoustic device includes at least one acoustic transducer disposed such that, in a head-worn state, the transducer is in an open-ear configuration in which an ear canal of a user of the head-worn acoustic device is unobstructed. The acoustic device also includes at least one microphone configured to capture audio that is processed and played back through the transducer, and an amplifier circuit configured to process signals representing the audio captured using the microphone and generate driver signals for the transducer. The transducer and the microphone are disposed on the head-worn acoustic device such that, in the head-worn state, a lobe of a radiation pattern of the at least one acoustic transducer is directed towards the ear canal of the user, and the at least one microphone is positioned in an acoustic null in a radiation pattern of the at least one acoustic transducer.
Claims
1. A head-worn acoustic device comprising: at least one acoustic transducer disposed such that, in a head-worn state, the at least one acoustic transducer is in an open-ear configuration in which an ear canal of a user of the head-worn acoustic device is unobstructed; at least two microphones configured to capture audio that is processed and played back through the at least one acoustic transducer; and an amplifier circuit configured to process signals representing the audio captured using the at least two microphones and generate driver signals for the at least one acoustic transducer, wherein the at least one acoustic transducer and the at least two microphones are disposed on the head-worn acoustic device such that, in the head-worn state, a lobe of a radiation pattern of the at least one acoustic transducer is directed towards the ear canal of the user, a feedforward microphone of the at least two microphones is positioned in an acoustic null in the radiation pattern of the at least one acoustic transducer, and a feedback microphone of the at least two microphones is positioned such that an amount of coupling between the at least one acoustic transducer and the feedback microphone is substantially equal to an amount of coupling between the at least one acoustic transducer and an ear of the user.
2. The acoustic device of claim 1, wherein the at least one acoustic transducer and at least one microphone of the at least two microphones are disposed along a temple of an eyeglass frame.
3. The acoustic device of claim 2, wherein the at least one microphone is disposed on a front portion of the eyeglass frame.
4. The acoustic device of claim 2, wherein the at least one microphone is a portion of an array of multiple microphones disposed along the temple of the eyeglass frame.
5. The acoustic device of claim 4, further comprising one or more processing devices configured to implement a beamforming process based on audio captured using the multiple microphones of the array.
6. The acoustic device of claim 5, wherein the beamforming process is configured to preferentially capture audio from a gaze-direction of the user.
7. The acoustic device of claim 1, wherein the at least one acoustic transducer and the at least two microphones are disposed on an open-ear headphone.
8. The acoustic device of claim 7, wherein the at least two microphones are a portion of an array of multiple microphones disposed on the open-ear headphone.
9. The acoustic device of claim 8, further comprising one or more processing devices configured to implement a beamforming process based on audio captured using the multiple microphones of the array.
10. The acoustic device of claim 1, wherein a power ratio of (i) a portion of output of the at least one acoustic transducer radiated towards the ear canal of the user and (ii) a portion of output of the at least one acoustic transducer radiated towards the feedforward microphone is at least 1 dB.
11. The acoustic device of claim 1, wherein the at least one acoustic transducer is a part of an array of acoustic transducers.
12. The acoustic device of claim 1, wherein in the head-worn state, a physical separation exists between the at least one acoustic transducer and the ear canal of the user.
13. The acoustic device of claim 1, wherein in the head-worn state, a physical separation exists between the at least one acoustic transducer and a concha or pinna of the user.
14. The acoustic device of claim 1, wherein the at least one acoustic transducer comprises an acoustic dipole.
15. A head-worn acoustic device comprising: at least one acoustic transducer disposed such that, in a head-worn state, the at least one acoustic transducer is in an open-ear configuration in which an ear canal of a user of the head-worn acoustic device is at least partially unobstructed; at least two microphones configured to capture audio that is processed and played back through the at least one acoustic transducer, wherein a feedforward microphone of the at least two microphones is positioned in an acoustic null in a radiation pattern of the at least one acoustic transducer, and wherein a feedback microphone of the at least two microphones is positioned such that an amount of coupling between the at least one acoustic transducer and the feedback microphone is substantially equal to an amount of coupling between the at least one acoustic transducer and an ear of the user; an amplifier circuit configured to process signals representing the audio captured using a first subset of the at least two microphones to generate a first signal for the at least one acoustic transducer; and an echo cancellation circuit configured to process the signals representing the audio captured using a second subset of the at least two microphones to generate a second signal for the at least one acoustic transducer, wherein a combination of the first signal and second signal reduces coupling between the at least one acoustic transducer and the at least two microphones by at least 3 dB.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0014] This document describes technology for facilitating the capture of audio signals in open-ear acoustic devices, and for delivering the captured (and amplified) audio to a user's ears such that the coupling between microphones and acoustic transducers is not significant, and the output of the acoustic transducers is low enough not to reach other people in the vicinity of the user. This document also describes feedforward and feedback noise reduction processes that allow for reducing the effect of audio coming from directions outside of one or more target directions. Such noise reduction, particularly in portions of the speech band, can result in at least 5 dB of improvement in signal-to-noise ratio (SNR), which in turn can improve speech perception/intelligibility even for users who do not have hearing loss. When combined with the directional capture of audio using microphone arrays, the technology described herein can allow a user to select the target direction from which audio is to be emphasized. For example, the target direction can be the direction at which a user is looking, referred to herein as the look direction or gaze direction of the user.
[0016] The frame 20 includes an electronics module 70 and other components for controlling the audio eyeglasses 10 according to particular implementations. In some cases, separate or duplicate sets of the electronics module 70 are included in portions of the frame, e.g., in each of the respective arms 40 of the frame 20. However, certain components described herein can also be present in singular form. Also, while the electronics module 70 is disposed in the arms 40 of the frame 20, in some implementations, at least portions of the electronics module 70 may be disposed elsewhere in the frame (e.g., in a portion of the frontal region 30 such as the bridge 60).
[0018] In some implementations, the electronics module 70 includes one or more electroacoustic transducers 80 disposed such that, in a head-worn state of the corresponding device, the one or more electroacoustic transducers 80 are in an open-ear configuration. This refers to a configuration in which there exists a physical separation between an ear canal of a user and the corresponding acoustic transducer such that the acoustic transducer (and/or other portions of the corresponding device) does not fully occlude the ear canal from the environment. For example, referring back to
[0019] In some implementations, each transducer 80 can be used as a dipole loudspeaker with an acoustic driver or radiator that emits front-side acoustic radiation from its front side, and emits rear-side acoustic radiation from its rear side. The dipole loudspeaker can be built into the frame 20 of the audio eyeglasses 10. In some implementations, an acoustic channel defined within the housing of the eyeglasses 10 (e.g. within the arms 40) can direct the front-side acoustic radiation and another acoustic channel can direct the rear-side acoustic radiation. A plurality of sound-conducting vents (openings) in the housing allow sound to leave the housing. Openings in the eyeglass frame 20 can be aligned with these vents, so that the sound also leaves the frame 20. In some implementations, the distance between the sound-conducting openings defines an effective length of an acoustic dipole of the loudspeaker. The effective length may be considered to be the distance between the two openings that contribute most to the emitted radiation at any particular frequency. The housing and its openings can be constructed and arranged such that the effective dipole length is frequency dependent. In certain cases, the transducer 80 (e.g., loudspeaker dipole transducer) is able to achieve a higher ratio of (i) sound pressure delivered to the ear to (ii) spilled sound, as compared to an off-ear headphone not having this feature. Exemplary dipole transducers are shown and described in U.S. patent application Ser. No. 16/151,541, filed Oct. 4, 2018; and Ser. No. 16/408,179, filed May 9, 2019.
[0020] The electronics module 70 can also include an array 75 of one or more microphones. In some implementations, the microphones in the array 75 can be used to capture audio preferentially from a particular direction. For example, each of the microphones in the array 75 can be inherently directional, capturing audio from a particular direction. In other examples, the audio captured by the array can be processed (e.g., using a smart antenna or beamforming process) to emphasize the audio captured from a particular direction. In some implementations, the microphone array 75 captures ambient audio preferentially from a first direction (e.g., as compared to at least a second direction that is different from the first direction). For example, the microphone array 75 can be configured to capture/emphasize audio preferentially from the front of the frame 20 along a direction parallel to the two arms 40. In some cases, this allows for preferential capture of audio from a direction that coincides with the gaze direction of the user of the audio eyeglasses 10. In implementations where the captured audio is played back through the one or more acoustic transducers 80 (possibly with some amplification), this can allow a user to change a direction of gaze to better hear the sounds coming from that direction, as compared to, for example, sounds coming from other directions. In some implementations, to facilitate such amplification, the electronics module 70 includes an amplifier circuit 86 that processes signals representing the audio captured using the microphones of the array 75, and generates driver signals for the one or more acoustic transducers 80. In some cases, this can improve the user's perception of speech in noisy environments. For example, even a 5-10 dB improvement in the ratio of power from a particular direction to the power from other directions can improve perception of speech, particularly when the improvement is within the speech band (e.g., in the 300-1500 Hz frequency band) of the audio spectrum.
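The directional-capture idea described above can be illustrated with a minimal delay-and-sum beamformer sketch. This is not the patent's implementation; the array geometry, sample rate, and function names below are assumptions made purely for illustration.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
FS = 16000              # Hz; illustrative sample rate, not specified in the patent


def delay_and_sum(mic_signals, mic_positions, look_direction, fs=FS):
    """Steer the array toward `look_direction` (a vector pointing at the
    target source) by delaying each channel so that a plane wave arriving
    from that direction adds coherently, then averaging the channels."""
    look = np.asarray(look_direction, dtype=float)
    look /= np.linalg.norm(look)
    n = len(mic_signals[0])
    out = np.zeros(n)
    for sig, pos in zip(mic_signals, mic_positions):
        # A mic farther along `look` hears the wavefront earlier; delay its
        # channel by the corresponding number of samples to align it.
        shift = int(round(fs * np.dot(np.asarray(pos, float), look) / SPEED_OF_SOUND))
        out += np.roll(np.asarray(sig, float), shift)[:n]
    return out / len(mic_signals)
```

With the look direction set along the temples (the gaze direction), signals arriving from the front sum coherently while off-axis arrivals are misaligned and partially cancel.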
[0021] The multiple microphones can be disposed in the corresponding device in various ways. For the example device (audio eyeglasses 10) of
[0022] In some implementations, the locations of the microphones in the array 75 and the locations of the one or more acoustic transducers 80 can be jointly determined to implement an acoustics package that provides for directional audio delivery and capture in open-ear acoustic devices. For example, the locations of the transducers 80 and the microphones in the array 75 can be determined such that the transducers 80 satisfactorily deliver audio towards the ear of the user, without directing audio towards a microphone over a target or threshold amount. For example, the one or more acoustic transducers 80 and the multiple microphones of the array 75 can be disposed on a head-worn acoustic device (e.g., the audio eyeglasses 10) such that, in the head-worn state, a mainlobe of a radiation pattern of a directional acoustic transducer is directed towards the ear canal of the user, while a power ratio of (i) a portion of output of the one or more acoustic transducers radiated towards the ear canal of the user and (ii) a portion of output of the one or more acoustic transducers radiated towards a microphone of the array 75 satisfies a threshold condition. For example, a threshold condition can dictate that the above-referenced power ratio is at least 10 dB. In some implementations, the locations of the transducers 80 and the microphones of the array 75 can be determined while accounting for the directionality of the transducers, and/or the microphones, and/or the corresponding arrays.
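The threshold condition above amounts to a simple power-ratio check in dB, sketched below. The function names are illustrative assumptions; the 10 dB default reflects the example threshold given in the text.

```python
import math


def power_ratio_db(p_toward_ear, p_toward_mic):
    """Ratio, in dB, of transducer output power radiated toward the ear
    canal to output power radiated toward a microphone location."""
    return 10.0 * math.log10(p_toward_ear / p_toward_mic)


def placement_satisfies(p_toward_ear, p_toward_mic, threshold_db=10.0):
    """Check the threshold condition, e.g. a ratio of at least 10 dB."""
    return power_ratio_db(p_toward_ear, p_toward_mic) >= threshold_db
```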
[0023] In some implementations, the locations of the microphones of the array 75 are determined first, and the locations of the acoustic transducers 80 are then determined to achieve the target performances discussed above. For example, once the locations associated with the microphone array 75 are determined, the locations of the one or more acoustic transducers 80 are then determined such that the transducers 80 satisfactorily deliver audio towards the ear of the user, without directing audio towards a microphone of the array 75 over the target or threshold amount. Where a dipole transducer is used, the microphone(s) may be located in or near an acoustic null in a radiation pattern of the dipole transducer. In some cases, the microphone is positioned in a region in which acoustic energy radiated from a first radiating surface of the transducer destructively interferes with acoustic energy radiated from a second radiating surface of the transducer.
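The destructive-interference condition described above can be illustrated with a minimal two-monopole dipole model: the front and rear radiating surfaces are treated as point sources in antiphase. The spacing, frequency, and function names are assumptions for this sketch, not values from the patent.

```python
import numpy as np


def dipole_pressure(point, front_src, rear_src, k):
    """Complex pressure at `point` from two antiphase monopoles, a simple
    model of a dipole transducer's front and rear radiating surfaces.
    `k` is the acoustic wavenumber (2*pi*f / c)."""
    p = np.asarray(point, dtype=float)
    r_front = np.linalg.norm(p - front_src)
    r_rear = np.linalg.norm(p - rear_src)
    # Opposite phase: the rear contribution subtracts from the front one.
    return np.exp(-1j * k * r_front) / r_front - np.exp(-1j * k * r_rear) / r_rear
```

Any point equidistant from the two radiating surfaces lies on the null plane, where the two contributions cancel, making such locations natural candidates for microphone placement.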
[0024] In some implementations, the electronics module 70 includes a controller 82 that coordinates and controls various portions of the electronics module 70. The controller 82 can include one or more processing devices that, in communication with one or more non-transitory machine-readable storage devices, execute various operations of the electronics module 70. In some implementations, the controller 82 implements an active noise reduction (ANR) engine 84 that generates driver signals for reducing the effect of audio signals that are considered noise. For example, in a particular use-case scenario, the audio captured from a particular direction (e.g., the gaze direction of a user) can be considered a signal of interest, and the audio captured from other directions can be considered noise. The ANR engine 84 can be configured to generate one or more driver signals that have phases substantially inverted with respect to the phases of the noise signal, such that the driver signals generated by the ANR engine 84 destructively interfere with the noise signal (based on the principle of superposition) to reduce the effects of the noise.
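The phase-inversion principle described above can be sketched as follows. The gain parameter and function names are illustrative assumptions; a practical ANR engine would use adaptive filtering to track the noise path rather than a simple sign flip.

```python
import numpy as np


def anti_noise(noise, gain=1.0):
    """Driver signal with substantially inverted phase relative to the
    captured noise, so that playback superposes destructively with it."""
    return -gain * np.asarray(noise, dtype=float)


def residual_power_db(noise, driver):
    """Residual noise power after superposition, relative to the original
    noise power, in dB (more negative means more cancellation)."""
    noise = np.asarray(noise, dtype=float)
    residual = noise + driver
    return 10.0 * np.log10(np.mean(residual**2) / np.mean(noise**2))
```

Even an imperfect inversion helps: a 10% amplitude error still leaves the residual about 20 dB below the original noise.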
[0025] In some implementations, the ANR engine 84 can include multiple noise reduction pathways such as a feedback path and a feedforward path (generally referred to as ANR pathways, or ANR signal paths) that require the use of microphones to capture corresponding reference signals. In some implementations, one or more microphones of the array 75 can be used as a microphone for an ANR signal path, and in such cases, the placement of the corresponding microphones can be governed by whether the microphones are used for capturing reference audio for a feedforward path or a feedback path. However, to facilitate an understanding of such placements, a description of an ANR engine 84 is provided first.
[0026] Various signal flow topologies can be implemented in the ANR engine to enable functionalities such as echo cancellation, feedback noise cancellation, feedforward noise cancellation, etc. For example, as shown in the example block diagram of an ANR engine 84 in
[0027] In some implementations, the feedforward microphone 202 and/or the feedback microphone 204 can be included in the microphone array 75. In such cases, the locations for the feedforward microphone 202 and/or the feedback microphone 204 may be determined first, before determining the locations for the one or more transducers 80. For example, the feedback microphone 204 can be disposed on the device at a location such that in a head-worn state of the device, the feedback microphone 204 is located close to the ear of the user. This can result in a high degree of coherence between what the user actually hears and what the microphone captures. Referring back to
[0028] In some implementations, the performance of an open ear device can be further improved by implementing an echo canceler (or echo cancellation circuit) that reduces the effects of any output of the transducer 80 as picked up by a microphone such as the feedback microphone 204. For example, a reference microphone 208 can be used for picking up a different version of a signal that is also picked up or captured by the feedback microphone 204. Based on the two versions of the signal, an echo cancellation circuit (K.sub.echo) 220 can generate an additional signal, which, when combined with the output of the feedback compensator 216, further reduces the effect of coupling between the transducer 80 and the microphones. While the echo cancellation circuit shown in the example of
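As one illustration of such an echo canceler, a normalized LMS (NLMS) adaptive filter can estimate the transducer-to-microphone coupling from the reference signal and subtract the estimated echo. The patent does not specify this particular algorithm; the tap count and step size below are arbitrary choices for the sketch.

```python
import numpy as np


def nlms_echo_canceler(reference, mic, taps=16, mu=0.5, eps=1e-8):
    """Adaptive FIR echo canceler (NLMS).  `reference` is the signal driving
    the transducer; `mic` is a microphone pickup containing an echo of it.
    Returns the microphone signal with the estimated echo removed."""
    w = np.zeros(taps)    # adaptive estimate of the echo path
    buf = np.zeros(taps)  # most recent reference samples
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        echo_est = w @ buf
        e = mic[n] - echo_est                     # residual after cancellation
        w += mu * e * buf / (buf @ buf + eps)     # normalized LMS update
        out[n] = e
    return out
```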
[0029] Referring back to
[0030] The power source 100 to the transducer 80 can be provided locally (e.g., with a battery in each of the temple regions of the frame 20), or a single battery can transfer power via wiring that passes through the frame 20 or is otherwise transferred from one temple to the other. The power source 100 can be used to control operation of the transducer 80, according to various implementations.
[0031] The controller 82 can include conventional hardware and/or software components for executing program instructions or code according to processes described herein. For example, controller 82 may include one or more processing devices, memory, communications pathways between components, and/or one or more logic engines for executing program code. Controller 82 can be coupled with other components in the electronics module 70 via any conventional wireless and/or hardwired connection which allows controller 82 to send/receive signals to/from those components and control operation thereof.
[0032] Referring back to
[0034] While a distinction has sometimes been made between feedback and feedforward microphones, in acoustic devices such as open ear acoustic devices, a feedforward microphone could capture some amount of the transducer signal and thus exhibit feedback behavior. Therefore, the one or more microphones and their respective locations can be thought of more generally as being more or less able to capture either environmental sound signals or transducer sound signals coherent with the ear canal. Microphone locations corresponding to ratios close to unity (or approximately 0 dB) in the heat map may be better suited for accurately capturing the environmental sound signal at the ear canal, at the expense of stability of the ANR system, and vice versa. Nonetheless, for a specific transducer and microphone system configuration, the ANR engine can be designed to account for those tradeoffs generally without making a rigid distinction between feedback and feedforward paths.
[0035] The functionality described herein, or portions thereof, and its various modifications (hereinafter the functions) can be implemented, at least in part, via a computer program product, e.g., a computer program tangibly embodied in an information carrier, such as one or more non-transitory machine-readable media or storage device, for execution by, or to control the operation of, one or more data processing apparatus, e.g., a programmable processor, a computer, multiple computers, and/or programmable logic components.
[0036] A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a network.
[0037] Actions associated with implementing all or part of the functions can be performed by one or more programmable processors executing one or more computer programs to perform the functions of the calibration process. All or part of the functions can be implemented as special-purpose logic circuitry, e.g., an FPGA (field-programmable gate array) and/or an ASIC (application-specific integrated circuit). In some implementations, at least a portion of the functions may also be executed on a floating point or fixed point digital signal processor (DSP) such as the Super Harvard Architecture Single-Chip Computer (SHARC) developed by Analog Devices Inc.
[0038] Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Components of a computer include a processor for executing instructions and one or more memory devices for storing instructions and data.
[0039] Elements of different implementations described herein may be combined to form other embodiments not specifically set forth above. Elements may be left out of the structures described herein without adversely affecting their operation. Furthermore, various separate elements may be combined into one or more individual elements to perform the functions described herein.