Acoustic device
10390129 · 2019-08-20
Inventors
- Roman N. Litovsky (Newton, MA, US)
- Bojan Rip (Newton, MA, US)
- Joseph M. Geiger (Clinton, MA, US)
- Chester Smith Williams (Lexington, MA, US)
- Pelham Norville (Framingham, MA, US)
- Brandon Westley (Hopkinton, MA, US)
Abstract
An acoustic device that has a neck loop that is constructed and arranged to be worn around the neck. The neck loop includes a housing with a first acoustic waveguide having a first sound outlet opening, and a second acoustic waveguide having a second sound outlet opening. There is a first open-backed acoustic driver acoustically coupled to the first waveguide and a second open-backed acoustic driver acoustically coupled to the second waveguide.
Claims
1. An audio device comprising: a housing comprising a central portion, right and left legs depending from the central portion, a first acoustic waveguide having a first sound outlet opening, and a second acoustic waveguide having a second sound outlet opening; a right acoustic transducer carried by the right leg of the housing and acoustically coupled to the first waveguide and not the second waveguide, wherein the right acoustic transducer is constructed and arranged to radiate sound outwardly from the housing via the first sound outlet opening; a left acoustic transducer carried by the left leg of the housing and acoustically coupled to the second waveguide and not the first waveguide, wherein the left acoustic transducer is constructed and arranged to radiate sound outwardly from the housing via the second sound outlet opening; wherein the first sound outlet opening is located in the left leg of the housing proximate the left acoustic transducer and the second sound outlet opening is located in the right leg of the housing proximate the right acoustic transducer; and a controller that controls the relative phases of the first and second acoustic transducers.
2. The audio device of claim 1, wherein the first sound outlet opening is proximate to a first end of the first acoustic waveguide, and the second sound outlet opening is proximate to a first end of the second acoustic waveguide.
3. The audio device of claim 2, wherein the first acoustic transducer is proximate to a second end of the first acoustic waveguide, and the second acoustic transducer is proximate to a second end of the second acoustic waveguide.
4. The audio device of claim 1, wherein the housing is configured to be worn around a user's neck.
5. The audio device of claim 1, wherein the controller establishes two operational modes comprising: a first operational mode wherein the first and second acoustic transducers are out of phase in a first frequency range, in phase in a second frequency range, and out of phase in a third frequency range; and a second operational mode wherein the first and second acoustic transducers are out of phase in the first frequency range, and in phase in the second and third frequency ranges.
6. The audio device of claim 5, wherein the controller enables the first operational mode in response to the user speaking.
7. The audio device of claim 5, wherein the controller enables the second operational mode in response to a person other than the user speaking.
8. The audio device of claim 5, wherein the first frequency range is below the resonant frequency of the first and second waveguides.
9. The audio device of claim 5, wherein the controller is further configured to apply a first equalization scheme to audio signals output via the first and second transducers during the first operational mode, and apply a second equalization scheme to audio signals output via the first and second transducers during the second operational mode.
10. The audio device of claim 1, further comprising a microphone configured to receive voice signals from at least one of: the user and a person other than the user.
11. The audio device of claim 10, further comprising a wireless communication module for wirelessly transmitting the voice signals to a translation engine.
12. The audio device of claim 11, wherein the translation engine translates the voice signals to another language.
13. A computer-implemented method of controlling an audio device to assist with oral communication between a device user and another person, wherein the audio device comprises a housing comprising a first acoustic waveguide having a first sound outlet opening, and a second acoustic waveguide having a second sound outlet opening, and first and second acoustic transducers, wherein the first acoustic transducer is acoustically coupled to the first waveguide and not the second waveguide, wherein the first acoustic transducer is constructed and arranged to radiate sound outwardly from the housing via the first sound outlet opening, and the second acoustic transducer is acoustically coupled to the second waveguide and not the first waveguide, wherein the second acoustic transducer is constructed and arranged to radiate sound outwardly from the housing via the second sound outlet opening, wherein the first sound outlet opening is located proximate the second acoustic transducer and the second sound outlet opening is located proximate the first acoustic transducer, the method comprising: receiving a voice signal associated with the user; generating a first audio signal that is based on the received user's voice signal; outputting the first audio signal from the first and second acoustic transducers, wherein the first and second acoustic transducers are operated out of phase in a first frequency range, in phase in a second frequency range, and out of phase in a third frequency range; receiving a voice signal associated with the other person; generating a second audio signal that is based on the received other person's voice; and outputting the second audio signal from the first and second acoustic transducers, wherein the first and second acoustic transducers are operated out of phase in the first frequency range, and in phase in the second and third frequency ranges.
14. The method of claim 13, further comprising obtaining a translation of the received user's voice signal from the user's language into a different language, and wherein the first audio signal is based on the translation.
15. The method of claim 13, further comprising obtaining a translation of the received other person's voice signal from the other person's language into the user's language, and wherein the second audio signal is based on the translation.
16. The method of claim 13, further comprising wirelessly transmitting the received user's voice signal to a secondary device, and using information from the secondary device to generate the first audio signal.
17. The method of claim 13, further comprising wirelessly transmitting the received other person's voice signal to a secondary device, and using information from the secondary device to generate the second audio signal.
18. The method of claim 13, further comprising applying a first equalization scheme to the first audio signal, and applying a second equalization scheme to the second audio signal.
19. A machine-readable storage device having encoded thereon computer readable instructions for causing one or more processors to perform operations comprising: receiving a voice signal associated with a user of an audio device; generating a first audio signal that is based on the received user's voice signal; outputting the first audio signal from first and second acoustic transducers supported by a housing of the audio device, the housing comprising a first acoustic waveguide having a first sound outlet opening, and a second acoustic waveguide having a second sound outlet opening, wherein the first acoustic transducer is acoustically coupled to the first waveguide and not the second waveguide, wherein the first acoustic transducer is constructed and arranged to radiate sound outwardly from the housing via the first sound outlet opening, wherein the second acoustic transducer is acoustically coupled to the second waveguide and not the first waveguide, wherein the second acoustic transducer is constructed and arranged to radiate sound outwardly from the housing via the second sound outlet opening, wherein the first sound outlet opening is located proximate the second acoustic transducer and the second sound outlet opening is located proximate the first acoustic transducer, wherein the first and second acoustic transducers are operated out of phase in a first frequency range, in phase in a second frequency range, and out of phase in a third frequency range; receiving a voice signal associated with a person other than the user; generating a second audio signal that is based on the received other person's voice; and outputting the second audio signal from the first and second acoustic transducers, wherein the first and second acoustic transducers are operated out of phase in the first frequency range, and in phase in the second and third frequency ranges.
20. The machine-readable storage device of claim 19, wherein the operations further comprise: obtaining a translation of the received user's voice signal from the user's language into a different language, and wherein the first audio signal is based on the translation; and obtaining a translation of the received other person's voice signal from the other person's language into the user's language, and wherein the second audio signal is based on the translation.
21. An audio device comprising: a housing comprising a first acoustic waveguide having a first sound outlet opening, and a second acoustic waveguide having a second sound outlet opening; a first acoustic transducer acoustically coupled to the first waveguide and not the second waveguide, wherein the first acoustic transducer is constructed and arranged to radiate sound outwardly from the housing via the first sound outlet opening; a second acoustic transducer acoustically coupled to the second waveguide and not the first waveguide, wherein the second acoustic transducer is constructed and arranged to radiate sound outwardly from the housing via the second sound outlet opening; wherein the first sound outlet opening is located proximate the second acoustic transducer and the second sound outlet opening is located proximate the first acoustic transducer; and a controller that controls the relative phases of the first and second acoustic transducers, wherein the controller establishes two operational modes comprising: a first operational mode wherein the first and second acoustic transducers are out of phase in a first frequency range, in phase in a second frequency range, and out of phase in a third frequency range; and a second operational mode wherein the first and second acoustic transducers are out of phase in the first frequency range, and in phase in the second and third frequency ranges.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(15) The acoustic device directs high-quality sound to the ears without direct contact with the ears, and without blocking ambient sounds. The acoustic device is unobtrusive, and can be worn under clothing (provided the clothing is sufficiently acoustically transparent) or on top of it.
(16) In one aspect, the acoustic device is constructed and arranged to be worn around the neck. The acoustic device has a neck loop that includes a housing. The neck loop has a horseshoe-like shape, with two legs that sit over the top of the torso on either side of the neck, and a curved central portion that sits behind the neck. The device has two acoustic drivers, one on each leg of the housing. The drivers are located below the expected locations of the ears of the user, with their acoustic axes pointed at the ears. The acoustic device also has two waveguides within the housing, each one having an exit below an ear, close to a driver. The rear side of one driver is acoustically coupled to the entrance of one waveguide, and the rear side of the other driver is acoustically coupled to the entrance of the other waveguide. Each waveguide has one end, with the driver that feeds it, located below one ear (left or right), and its other end (the open end) located below the other ear (right or left), respectively.
(17) A non-limiting example of the acoustic device is shown in the drawings. This is but one of many possible examples that could illustrate the subject acoustic device. The scope of the invention is not limited by the example; rather, the example supports it.
(18) Acoustic device 10 is shown in the drawings.
(19) Neck loop 12 comprises housing 13, which is in essence an elongated (stiff or flexible), mostly hollow plastic tube (except for the sound inlet and outlet openings), with closed distal ends 27 and 28. Housing 13 is divided internally by integral wall (septum) 102. Two internal waveguides are defined by the external walls of the housing and the septum. Housing 13 should be stiff enough that the sound is not substantially degraded as it travels through the waveguides. In the present non-limiting example, where the lateral distance D between the ends 27 and 28 of right and left neck loop legs 20 and 22 is less than the width of a typical human neck, the neck loop also needs to be sufficiently flexible that ends 27 and 28 can be spread apart when device 10 is donned and doffed, yet return to the resting shape shown in the drawings. One of many possible materials with suitable physical properties is polyurethane; other materials could be used. The device could also be constructed in other manners. For example, the device housing could be made of multiple separate portions coupled together, for example using fasteners and/or adhesives. And the neck loop legs need not be arranged such that they must be spread apart when the device is placed behind the neck with the legs draped over the upper chest.
(20) Housing 13 carries right and left acoustic drivers 14 and 16. The drivers are located at the top surface 30 of housing 13, below the expected location of the ears E.
(21) Located close to and just posteriorly of the drivers, in the top exterior wall 30 of housing 13, are waveguide outlets 40 and 50. Outlet 50 is the outlet for waveguide 110, which has its entrance at the back of right-side driver 14. Outlet 40 is the outlet for waveguide 160, which has its entrance at the back of left-side driver 16.
(22) Acoustic device 10 includes right and left button socks or partial housing covers 60 and 62; button socks are sleeves that can define or support aspects of the device's user interface, such as volume buttons 68, power button 74, control button 76, and openings 72 that expose the microphone. When present, the microphone allows the device to be used to conduct phone calls (like a headset). Other buttons, sliders and similar controls can be included as desired. The user interface may be configured and positioned to permit ease of operation by the user. Individual buttons may be uniquely shaped and positioned to permit identification without viewing the buttons. Electronics covers are located below the button socks. Printed circuit boards that carry the hardware that is necessary for the functionality of acoustic device 10, and a battery, are located below the covers.
(23) Housing 13 includes two waveguides, 110 and 160.
(24) The first part of waveguide 110 is shown in the drawings.
(25) In one non-limiting example, each waveguide has a generally consistent cross-sectional area along its entire length, including the generally annular outlet opening, of about 2 cm². In one non-limiting example, each waveguide has an overall length in the range of about 22-44 cm; very close to 43 cm in one specific example. In one non-limiting example, the waveguides are sufficiently long to establish resonance at about 150 Hz. More generally, the main dimensions of the acoustic device (e.g., waveguide length and cross-sectional area) are dictated primarily by human ergonomics, while proper acoustic response and functionality are ensured by proper audio signal processing. Other waveguide arrangements, shapes, sizes, and lengths are contemplated within the scope of the present disclosure.
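As a rough sanity check on these dimensions, a tube driven at one (closed) end and open at the other behaves approximately as a quarter-wave resonator. The sketch below is an idealized estimate only: it ignores end corrections and the annular outlet geometry, which shift the real resonance (the text cites roughly 150 Hz for these dimensions, somewhat below the idealized figure).

```python
# Idealized quarter-wave resonance estimate: f = c / (4 * L).
# End corrections and the actual outlet geometry lower the real
# resonance relative to this figure.

SPEED_OF_SOUND = 343.0  # m/s, dry air at roughly 20 degrees C

def quarter_wave_resonance(length_m: float) -> float:
    """Fundamental resonance (Hz) of an idealized quarter-wave tube."""
    return SPEED_OF_SOUND / (4.0 * length_m)

print(round(quarter_wave_resonance(0.43)))  # ~199 Hz for a 43 cm waveguide
```

The gap between this idealized 199 Hz and the stated 150 Hz is consistent with the paragraph's point that signal processing, not exact tube geometry, ensures the final acoustic response.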
(26) An exemplary but non-limiting arrangement of the electronics for the acoustic device is shown in the drawings.
(28) In some cases it is desirable to optimize the sound performance of the acoustic device to provide a better experience for the wearer and/or for a person near the wearer who may be communicating with the wearer. For example, where the wearer of the acoustic device is communicating with a person who speaks another language, the acoustic device can provide the wearer with a translation of the other person's speech, and provide the other person with a translation of the wearer's speech. The acoustic device is thus adapted to alternately radiate sound in the near field for the wearer and in the far field for a person close to the wearer (e.g., a person standing in front of the wearer). In the acoustic device, a controller changes the acoustic radiation pattern to produce the preferred sound for both cases. This can be achieved by changing the relative phase of the acoustic transducers and applying different equalization schemes when outputting sound for the wearer versus when outputting sound for another person near the wearer.
(29) For the wearer, the sound field around each ear is important; far field radiation makes no difference to the wearer, but for others close by it is best if the far field radiation is suppressed. For a person listening while standing in front of the wearer, the far field sound is important. It also helps such a listener if this far field sound has an isotropic acoustic radiation pattern and broad spatial coverage, as would be the case if the sound were coming from a human mouth.
(30) Both the near field sound for the wearer and the far field sound for a person close to the wearer can be created by the two acoustic transducers. With the construction described herein (i.e., an acoustic device with an acoustic transducer on each side, each acoustic transducer connected to an outlet on the opposite side of the acoustic device via a waveguide), phase differences between the transducers can be used to create two modes of operation. In a first private mode, which may be used, for example, when the acoustic device is translating another person's speech for the wearer of the acoustic device, both transducers are driven out of phase for a first range of frequencies below the waveguide resonant frequency, in phase for a second range of frequencies above the waveguide resonant frequency, and out of phase for a third range of frequencies further above the waveguide resonant frequency. In one non-limiting example where the waveguide resonant frequency is approximately 250 Hz, the relative phase of the acoustic transducers could be controlled as shown in Table 1 below.
(31) TABLE 1: Private Mode Transducer Operation

    Frequency       Transducer A    Transducer B
    <250 Hz              +               -
    250-750 Hz           +               +
    >750 Hz              +               -
(32) As shown, below about 250 Hz, the transducers are driven out of phase. As previously described, when the transducers are driven out of phase, the two acoustic signals received by each ear are virtually in phase below the waveguide resonance frequency. This ensures that the low frequency radiation from each transducer and its corresponding same-side waveguide outlet are in phase and do not cancel each other. At the same time, the radiation from opposite-side transducers and corresponding waveguides is out of phase, which reduces sound spillage from the acoustic device at these frequencies. Between about 250 and about 750 Hz, the transducers are driven in phase to increase SPL at the ears of the wearer. Above about 750 Hz, per Table 1, the transducers are again driven out of phase.
(33) The above frequency ranges will vary depending on the waveguide resonant frequency and the desired application. In the case where the acoustic device is being used for translation, the relative phases of the transducers shown above enable effective sound output at the ears of the wearer.
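The private-mode scheme of Table 1 can be sketched as a per-band polarity rule. This is a simplified model only: a real controller would realize the transitions with crossover and all-pass filtering rather than a hard per-frequency lookup, and the 250 Hz / 750 Hz band edges follow the non-limiting example in the text.

```python
# Private-mode relative drive polarity per frequency band (Table 1).
# Band edges follow the 250 Hz / 750 Hz example; actual values depend
# on the waveguide resonant frequency.

WAVEGUIDE_RESONANCE_HZ = 250.0  # example value from the text
UPPER_BAND_EDGE_HZ = 750.0

def private_mode_polarity(freq_hz: float) -> tuple:
    """Return (transducer_a, transducer_b) drive polarity at freq_hz."""
    if freq_hz < WAVEGUIDE_RESONANCE_HZ:
        return (+1, -1)   # out of phase: reduces low-frequency spillage
    if freq_hz <= UPPER_BAND_EDGE_HZ:
        return (+1, +1)   # in phase: increases SPL at the wearer's ears
    return (+1, -1)       # out of phase again above the mid band

print(private_mode_polarity(100))   # (1, -1)
print(private_mode_polarity(500))   # (1, 1)
print(private_mode_polarity(1000))  # (1, -1)
```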
(35) In a second out loud mode, which may be used, for example, when the acoustic device is translating the wearer's speech for another person, both transducers are driven out of phase for a first range of frequencies below the waveguide resonant frequency and in phase for all frequencies at and above the waveguide resonant frequency. In one non-limiting example where the waveguide resonant frequency is approximately 250 Hz, the relative phase of the acoustic transducers could be controlled as shown in Table 2 below.
(36) TABLE 2: Out Loud Mode Transducer Operation

    Frequency       Transducer A    Transducer B
    <250 Hz              +               -
    >=250 Hz             +               +
(37) As shown, below about 250 Hz, the transducers are driven out of phase, which produces the effect described above for the private mode. At frequencies at and above about 250 Hz, the transducers are driven in phase. By designing the waveguides to have a resonant frequency close to the speech band (which typically starts at around 300 Hz), the waveguides are particularly effective for outputting sound in the speech band to both the wearer of the acoustic device and a person near the acoustic device. At frequencies greater than the waveguide resonant frequency, the radiation from the waveguide outlet dominates the transducer output, resulting in higher spillage from the acoustic device. In the out loud mode, by operating the transducers in phase for all frequencies in the speech band, the acoustic device maximizes this spillage effect, thereby improving the sound output for a person near the acoustic device.
(38) The above frequency ranges will vary depending on the waveguide resonant frequency and the desired application. In the case where the acoustic device is being used for translation, the relative phases of the transducers shown above enable effective sound output for a person near the wearer of the acoustic device.
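The out-loud scheme of Table 2 can be sketched the same way. Again this is a simplified per-band model with the example 250 Hz resonance, not the filtering a real controller would use.

```python
# Out-loud-mode relative drive polarity (Table 2): out of phase only
# below the waveguide resonance; in phase at and above it, which
# maximizes far-field output in the speech band. 250 Hz follows the
# non-limiting example in the text.

WAVEGUIDE_RESONANCE_HZ = 250.0

def out_loud_mode_polarity(freq_hz: float) -> tuple:
    """Return (transducer_a, transducer_b) drive polarity at freq_hz."""
    if freq_hz < WAVEGUIDE_RESONANCE_HZ:
        return (+1, -1)  # out of phase below resonance
    return (+1, +1)      # in phase in and above the speech band

print(out_loud_mode_polarity(100))  # (1, -1)
print(out_loud_mode_polarity(400))  # (1, 1)
```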
(39) This acoustic design thus achieves an audio system in which the phase difference between the two transducers can either provide sound to the wearer (with lower spillage to the far field), or provide sound to both the wearer and the far field with isotropic directivity at lower frequencies.
(41) The selection of the mode can be done automatically by one or more microphones (either on board the acoustic device or in a connected device) that detect where the sound is coming from (i.e., the wearer or another person), by an application residing in a smartphone connected to the acoustic device via a wired or wireless connection, based on the content of the speech (language recognition), or by manipulation of a user interface, for example.
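The mode-selection decision reduces to a simple mapping from the detected speaker to a mode. The sketch below assumes that some upstream mechanism (microphone direction-of-arrival, language recognition on a paired phone, or a manual control, as listed above) has already decided whether the wearer is speaking; the `Mode` names are illustrative, not from the text.

```python
# Sketch of automatic mode selection: "out loud" when the wearer
# speaks (the other person hears the translation), "private" when
# someone else speaks (the wearer hears the translation). Speaker
# detection itself is assumed to happen upstream.

from enum import Enum

class Mode(Enum):
    PRIVATE = "private"     # near-field playback for the wearer
    OUT_LOUD = "out_loud"   # far-field playback for the other person

def select_mode(speaker_is_wearer: bool) -> Mode:
    return Mode.OUT_LOUD if speaker_is_wearer else Mode.PRIVATE

print(select_mode(True).value)   # out_loud
print(select_mode(False).value)  # private
```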
(42) As described above, transitioning the transducers to a different relative phase can be accomplished through all-pass filters having limited phase-change slopes, which provide gradual (rather than abrupt) phase changes to minimize any impact on sound reproduction.
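A first-order digital all-pass section is one common building block for such gradual phase shaping; the sketch below is a generic illustration of the filter class, not the specific filters of this device. Its transfer function H(z) = (a + z⁻¹) / (1 + a·z⁻¹) has unity magnitude at every frequency while its phase varies with frequency according to the coefficient a.

```python
# Generic first-order all-pass section, difference equation
#   y[n] = a*x[n] + x[n-1] - a*y[n-1]
# Magnitude response is unity at all frequencies; only phase is
# shaped. Ramping the coefficient over time yields a gradual phase
# transition rather than an abrupt polarity flip.

def allpass_first_order(x: list, a: float) -> list:
    """Filter sequence x through a first-order all-pass with coeff a."""
    y = []
    x_prev = 0.0
    y_prev = 0.0
    for sample in x:
        out = a * sample + x_prev - a * y_prev
        y.append(out)
        x_prev, y_prev = sample, out
    return y
```

Because the filter is all-pass, the energy of its impulse response equals the input energy, which gives a quick numerical check that the section passes sound unattenuated.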
(43) The controller element is represented in the block diagram.
(44) When processes are represented or implied in the block diagram, the steps may be performed by one element or a plurality of elements. The steps may be performed together or at different times. The elements that perform the activities may be physically the same or proximate one another, or may be physically separate. One element may perform the actions of more than one block. Audio signals may be encoded or not, and may be transmitted in either digital or analog form. Conventional audio signal processing equipment and operations are in some cases omitted from the drawing.
(45) A method 90 of controlling an acoustic device to assist with oral communication between a device user and another person is set forth in the drawings.
(46) In step 94, a (second) speech signal that originates from the other person's voice is received. A translation of the received other person's speech from the other person's language into the user's language is then obtained, step 95. A second audio signal that is based on this received translation is provided to the transducers, step 96. In the example described above, the translation can be played by the transducers out of phase for a first range of frequencies below the waveguide resonant frequency, in phase for a second range of frequencies above the waveguide resonant frequency, and out of phase for a third range of frequencies further above the waveguide resonant frequency. This allows the wearer of the acoustic device to hear the translation, while reducing spillage at least at some frequencies for the person communicating with the wearer.
(47) Method 90 operates such that the wearer of the acoustic device can speak normally; the speech is detected and translated into a selected language (typically, the language of the other person with whom the user is speaking). The acoustic device then plays the translation such that it can be heard by the person with whom the user is speaking. Then, when the other person speaks, the speech is detected and translated into the wearer's language. The acoustic device then plays this translation such that it can be heard by the wearer, but is less audible to the other person (or to third parties in the same vicinity). The device thus allows relatively private, translated communications between two people who do not speak the same language.
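The alternating flow just described can be sketched as a single dispatch per utterance. The `translate` and `play` callables and the direction and mode names below are illustrative placeholders, not APIs from the text; in the device, translation would typically be obtained via a wireless link to a translation engine, and `play` would apply the phase and equalization scheme of the selected mode.

```python
# Sketch of the method-90 control flow: route each utterance to the
# translation direction and playback mode suited to its listener.
# `translate` and `play` are injected stand-ins for the device's
# translation link and audio output stages.

def handle_utterance(speech: str, speaker_is_wearer: bool,
                     translate, play) -> None:
    if speaker_is_wearer:
        # Wearer spoke: translate for the other person and play out
        # loud (transducers in phase across the speech band).
        play(translate(speech, direction="wearer_to_other"),
             mode="out_loud")
    else:
        # Other person spoke: translate for the wearer and play
        # privately (reduced far-field spillage).
        play(translate(speech, direction="other_to_wearer"),
             mode="private")
```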
(48) Embodiments of the systems and methods described above comprise computer components and computer-implemented steps that will be apparent to those skilled in the art. For example, it should be understood by one of skill in the art that the computer-implemented steps may be stored as computer-executable instructions on a computer-readable medium such as, for example, floppy disks, hard disks, optical disks, flash ROMs, nonvolatile ROM, and RAM. Furthermore, it should be understood by one of skill in the art that the computer-executable instructions may be executed on a variety of processors such as, for example, microprocessors, digital signal processors, gate arrays, etc. For ease of exposition, not every step or element of the systems and methods described above is described herein as part of a computer system, but those skilled in the art will recognize that each step or element may have a corresponding computer system or software component. Such computer system and/or software components are therefore enabled by describing their corresponding steps or elements (that is, their functionality), and are within the scope of the disclosure.
(49) A number of implementations have been described. Nevertheless, it will be understood that additional modifications may be made without departing from the scope of the inventive concepts described herein, and, accordingly, other embodiments are within the scope of the following claims.