Hearing device providing virtual sound
11805364 · 2023-10-31
CPC classification
H04S2420/01 · H04R5/04 · H04R5/027 (Section H: Electricity)
International classification
H04R5/027 · H04R5/04 (Section H: Electricity)
Abstract
Disclosed is a method and a hearing device for audio transmission. The hearing device is configured to be worn by a user and comprises a first earphone comprising a first speaker, a second earphone comprising a second speaker, and a virtual sound processing unit connected to the first earphone and the second earphone. The virtual sound processing unit is configured for receiving and processing an audio sound signal to generate a virtual audio sound signal. The virtual audio sound signal is forwarded to the first and second speakers, so that the virtual audio sound appears to the user as audio sound coming from two virtual speakers in front of the user. The hearing device further comprises a first primary microphone for capturing surrounding sounds to provide a first surrounding sound signal based on a first primary input signal from the first primary microphone; the first primary microphone is arranged in the first earphone to provide a first rear facing sensitivity pattern towards the rear direction. The hearing device further comprises a first secondary microphone for capturing surrounding sounds to provide a second surrounding sound signal based on a first secondary input signal from the first secondary microphone; the first secondary microphone is arranged in the second earphone to provide a second rear facing sensitivity pattern towards the rear direction. The hearing device is configured for transmitting the first surrounding sound signal to the first speaker and the second surrounding sound signal to the second speaker. Thereby the user receives the surrounding sound from the rear direction, while the surrounding sound from the front direction is attenuated compared to the surrounding sound from the rear direction.
Claims
1. A hearing device for audio transmission configured to be worn by a user, the hearing device comprising: a first earphone comprising a first speaker; a second earphone comprising a second speaker; a virtual sound processing unit connected to the first earphone and the second earphone, the virtual sound processing unit being configured for receiving and processing an audio sound signal for generating a virtual audio sound signal, wherein the virtual audio sound signal is forwarded to the first and second speakers, where the virtual audio sound appears to the user as audio sound coming from two virtual speakers in front of the user, wherein the two virtual speakers are created at angles relative to a look direction of the user; a first primary microphone for capturing surrounding sounds to provide a first surrounding sound signal, the first primary microphone being arranged in the first earphone for providing a first rear facing sensitivity pattern towards a rear direction; a first secondary microphone for capturing surrounding sounds to provide a second surrounding sound signal, the first secondary microphone being arranged in the second earphone for providing a second rear facing sensitivity pattern towards the rear direction; wherein the hearing device is configured for: transmitting the first surrounding sound signal to the first speaker; and transmitting the second surrounding sound signal to the second speaker; a second primary microphone for capturing surrounding sounds, the second primary microphone being arranged in the first earphone; a second secondary microphone for capturing surrounding sounds, the second secondary microphone being arranged in the second earphone; a first beamformer configured for providing the first surrounding sound signal, where the first surrounding sound signal is based on a first primary input signal from the first primary microphone and a second primary input signal from the second primary microphone, for providing the first rear facing sensitivity pattern towards the rear direction; and a second beamformer configured for providing the second surrounding sound signal, where the second surrounding sound signal is based on a first secondary input signal from the first secondary microphone and a second secondary input signal from the second secondary microphone, for providing the second rear facing sensitivity pattern towards the rear direction; wherein the virtual sound processing unit is configured for generating the virtual audio sound signal forwarded to the first and second speakers by: applying a first left head-related transfer function to a left channel stereo audio sound signal of the received audio sound signal in the first earphone; applying a first right head-related transfer function to a right channel stereo audio sound signal of the received audio sound signal in the first earphone; applying a second left head-related transfer function to the left channel stereo audio sound signal of the received audio sound signal in the second earphone; and applying a second right head-related transfer function to the right channel stereo audio sound signal of the received audio sound signal in the second earphone; and wherein the surrounding sound, for the user, captured from a front direction is attenuated compared to the surrounding sound captured from the rear direction by having a higher directional sensitivity in the rear direction than in the front direction, such that a volume of the surrounding sound in the front direction is smaller than a volume of the surrounding sound in the rear direction.
2. The hearing device according to claim 1, wherein the hearing device comprises a head tracking sensor comprising an accelerometer, a magnetometer and a gyroscope.
3. The hearing device according to claim 2, wherein the hearing device is configured for compensating for the user's fast/natural head movements measured as the look direction by the head tracking sensor, by providing that the two virtual speakers appear to be in a steady position in space.
4. The hearing device according to claim 3, wherein the hearing device compensates for the user's fast/natural head movements by ensuring a latency of the virtual speakers of less than about 50 ms.
5. The hearing device according to claim 2, wherein the hearing device is configured for providing a rubber band effect to the virtual speakers for providing that the angles of the virtual speakers gradually shift, when the user performs real turns other than fast/natural head movements.
6. The hearing device according to claim 5, wherein the hearing device provides the rubber band effect by applying a time constant to the head tracking sensor of about 5-10 seconds.
7. The hearing device according to claim 6, wherein the hearing device comprises a high pass filter for filtering out environment noise, including frequencies below 500 Hz.
8. The hearing device according to claim 7, wherein the first primary microphone and/or the first secondary microphone is/are an omnidirectional microphone or a directional microphone.
9. The hearing device according to claim 1, wherein the hearing device further comprises: a third primary microphone and a fourth primary microphone for capturing surrounding sounds; the third primary microphone and the fourth primary microphone being arranged in the first earphone; a third secondary microphone and a fourth secondary microphone for capturing surrounding sounds; the third secondary microphone and the fourth secondary microphone being arranged in the second earphone; wherein the first surrounding sound signal provided by the first beamformer is further based on a third primary input signal from the third primary microphone and a fourth primary input signal from the fourth primary microphone, for providing the first rear facing sensitivity pattern towards the rear direction; and wherein the second surrounding sound signal provided by the second beamformer is further based on a third secondary input signal from the third secondary microphone and a fourth secondary input signal from the fourth secondary microphone, for providing the second rear facing sensitivity pattern towards the rear direction.
10. The hearing device according to claim 9, wherein the first primary microphone and/or the second primary microphone and/or the third primary microphone and/or the fourth primary microphone point rearwards for providing the first rear facing sensitivity pattern towards the rear direction.
11. The hearing device according to claim 9, wherein the first primary microphone and/or the second primary microphone and/or the third primary microphone and/or the fourth primary microphone are arranged with a distance in a horizontal direction in the first earphone.
12. The hearing device according to claim 1, wherein the hearing device is configured to be connected with an electronic device, wherein the audio sound signals are transmitted from the electronic device, and wherein the audio sound signals and/or the surrounding sound signals are configured to be set/controlled by the user via a user interface.
13. The hearing device according to claim 1, wherein the hearing device is configured to change modes selected from at least one of traffic awareness mode, hear-through mode, a noise cancellation mode, or an audio-only mode.
14. The hearing device according to claim 1, wherein the surrounding sound captured from the rear direction has the higher directional sensitivity of about 3-5 dB.
15. A method in a hearing device for audio transmission, where the hearing device is configured to be worn by a user, the method comprises: receiving an audio sound signal in a virtual sound processing unit; processing the audio sound signal in the virtual sound processing unit for generating a virtual audio sound signal; forwarding the virtual audio sound signal to a first speaker and a second speaker, the first and the second speaker being connected to the virtual sound processing unit, where the virtual audio sound appears to the user as audio sound coming from two virtual speakers in front of the user, wherein the two virtual speakers are created at angles relative to a look direction of the user; wherein the method further comprises: capturing surrounding sounds by a first primary microphone to provide a first surrounding sound signal based on a first primary input signal from the first primary microphone; the first primary microphone being arranged in a first earphone for providing a first rear facing sensitivity pattern towards a rear direction; capturing surrounding sounds by a first secondary microphone to provide a second surrounding sound signal based on a first secondary input signal from the first secondary microphone; the first secondary microphone being arranged in a second earphone for providing a second rear facing sensitivity pattern towards the rear direction; wherein the method comprises: transmitting the first surrounding sound signal to the first speaker; transmitting the second surrounding sound signal to the second speaker; and attenuating, for the user, the surrounding sound captured from a front direction compared to the surrounding sound captured from the rear direction by having a higher directional sensitivity in the rear direction than in the front direction, such that a volume of the surrounding sound in the front direction is smaller than a volume of the surrounding sound in the rear direction.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The above and other features and advantages will become readily apparent to those skilled in the art by the following detailed description of exemplary embodiments thereof with reference to the attached drawings, in which:
DETAILED DESCRIPTION
(9) Various embodiments are described hereinafter with reference to the figures. Like reference numerals refer to like elements throughout; like elements will, thus, not be described in detail with respect to the description of each figure. It should also be noted that the figures are only intended to facilitate the description of the embodiments. They are not intended as an exhaustive description of the claimed invention or as a limitation on its scope. In addition, an illustrated embodiment need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular embodiment is not necessarily limited to that embodiment and can be practiced in any other embodiment, even if not so illustrated or explicitly described.
(10) Throughout, the same reference numerals are used for identical or corresponding parts.
(16) In the prior art example in
(17) Furthermore, in the prior art example
(19) The hearing device 2 may further comprise a head tracking sensor 28 comprising an accelerometer, a magnetometer and a gyroscope, for tracking the user's head movements.
(20) The hearing device may further comprise a headband 30 connecting the first earphone 6 and the second earphone 10.
(23) The hearing device 2 may further comprise a second primary microphone 32 for capturing surrounding sounds. The second primary microphone 32 is arranged in the first earphone 6.
(24) The hearing device 2 may comprise a first beamformer configured for providing the first surrounding sound signal, where the first surrounding sound signal is based on the first primary input signal from the first primary microphone 16 and a second primary input signal from the second primary microphone 32, for providing the first rear facing sensitivity pattern towards the rear direction “REAR”.
(25) The hearing device may further comprise a third primary microphone 34 and a fourth primary microphone 36 for capturing surrounding sounds. The third primary microphone 34 and the fourth primary microphone 36 are arranged in the first earphone 6.
(26) The first surrounding sound signal provided by the first beamformer is further based on a third primary input signal from the third primary microphone 34 and a fourth primary input signal from the fourth primary microphone 36, for providing the first rear facing sensitivity pattern towards the rear direction “REAR”.
(27) The first primary microphone 16 and/or the second primary microphone 32 and/or the third primary microphone 34 and/or the fourth primary microphone 36 point rearwards “REAR” for providing the first rear facing sensitivity pattern towards the rear direction.
(28) The first primary microphone 16 and/or the second primary microphone 32 and/or the third primary microphone 34 and/or the fourth primary microphone 36 are arranged with a distance in a horizontal direction in the first earphone 6.
(30) The hearing device 2 may further comprise a second secondary microphone 38 for capturing surrounding sounds. The second secondary microphone 38 is arranged in the second earphone 10.
(31) The hearing device 2 may comprise a second beamformer configured for providing the second surrounding sound signal, where the second surrounding sound signal is based on the first secondary input signal from the first secondary microphone 18 and a second secondary input signal from the second secondary microphone 38, for providing the second rear facing sensitivity pattern towards the rear direction “REAR”.
(32) The hearing device may further comprise a third secondary microphone 40 and a fourth secondary microphone 42 for capturing surrounding sounds. The third secondary microphone 40 and the fourth secondary microphone 42 are arranged in the second earphone 10.
(33) The second surrounding sound signal provided by the second beamformer is further based on a third secondary input signal from the third secondary microphone 40 and a fourth secondary input signal from the fourth secondary microphone 42, for providing the second rear facing sensitivity pattern towards the rear direction “REAR”.
(34) The first secondary microphone 18 and/or the second secondary microphone 38 and/or the third secondary microphone 40 and/or the fourth secondary microphone 42 point rearwards “REAR” for providing the second rear facing sensitivity pattern towards the rear direction.
(35) The first secondary microphone 18 and/or the second secondary microphone 38 and/or the third secondary microphone 40 and/or the fourth secondary microphone 42 are arranged with a distance in a horizontal direction in the second earphone 10.
(38) S.sub.L is the left channel stereo audio input, such as left channel stereo music input. S.sub.R is the right channel stereo audio input, such as right channel stereo music input.
(39) HRIR in
(40) HRTFs for left and right ear, expressed above as HRIRs, describe the filtering of a sound source (x(t)) before it is perceived at the left and right ears as xL(t) and xR(t), respectively.
(41) The stereo audio has two audio channels sR(t) and sL(t). The two virtual sound speakers may be created at angles +θ.sub.0 and −θ.sub.0 relative to the look direction, e.g. at −30 degrees and +30 degrees, by convolving the corresponding four head-related transfer functions (HRTFs) with sR(t) and sL(t).
(42) θ.sub.L and θ.sub.R are the angles to the left and right virtual speaker respectively, thus HRIR θ.sub.L is the left ear Head-Related Impulse Response for the left virtual speaker, see
(43) The output signals from HRIR θ.sub.R and HRIR θ.sub.L are added together at a virtual sound processing unit 14 and provided to a first calibration filter hcal1, which provides the virtual audio sound signal 56.
(44) h.sub.1, h.sub.2, h.sub.3, h.sub.4 are the beamforming filters for each microphone input. Four microphones are shown in
(45) Thus, h1 is a first primary beamforming filter for the first primary input signal 46 from the first primary microphone 16. h2 is a second primary beamforming filter for the second primary input signal 48 from the second primary microphone 32. h3 is a third primary beamforming filter for the third primary input signal 50 from the third primary microphone 34. h4 is a fourth primary beamforming filter for the fourth primary input signal 52 from the fourth primary microphone 36.
(46) The output signals from the beamforming filters h1, h2, h3 and h4 are added together at an adder 54 for the first beamformer and provided to a second calibration filter hcal2, which provides the first surrounding sound signal 58.
(47) The first h1, second h2, third h3 and fourth h4 primary beamforming filters provide the first beamformer. The first beamformer is configured for providing the first surrounding sound signal 58, where the first surrounding sound signal 58 is based on the first primary input signal 46 from the first primary microphone 16, the second primary input signal 48 from the second primary microphone 32, the third primary input signal 50 from the third primary microphone 34 and the fourth primary input signal 52 from the fourth primary microphone 36. The first surrounding sound signal 58 provides the first rear facing sensitivity pattern towards the rear direction.
(48) The virtual audio sound signal 56 and the first surrounding sound signal 58 are added together at 60 and the combined signal 62 is provided to the first speaker 8.
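The signal chain for the first earphone described in paragraphs (43)-(48) can be sketched numerically as follows. This is a minimal illustration only: the function name and all filter coefficients (the HRIRs, the beamforming filters h1-h4 and the calibration filters hcal1, hcal2) are placeholders, not values from the disclosure.

```python
# Sketch of the first-earphone chain: HRIR convolution and summation
# (virtual audio path) plus beamforming and summation at adder 54
# (surround path), each followed by a calibration filter, then mixed
# into the combined signal fed to the first speaker.
import numpy as np

def first_earphone_output(s_l, s_r, mics, hrir_theta_l, hrir_theta_r,
                          h_filters, h_cal1, h_cal2):
    """s_l, s_r: stereo channels; mics: four microphone input signals;
    hrir_theta_l/r: left-ear HRIRs for the left/right virtual speakers;
    h_filters: beamforming filters h1..h4; h_cal1/h_cal2: calibration filters."""
    n = len(s_l)
    # Virtual audio path: convolve each stereo channel with its HRIR, add,
    # then apply the first calibration filter (virtual audio sound signal 56).
    virtual = (np.convolve(s_l, hrir_theta_l)[:n]
               + np.convolve(s_r, hrir_theta_r)[:n])
    virtual = np.convolve(virtual, h_cal1)[:n]
    # Surround path: filter each microphone input, add them (adder 54),
    # then apply the second calibration filter (surrounding sound signal 58).
    surround = sum(np.convolve(m, h)[:n] for m, h in zip(mics, h_filters))
    surround = np.convolve(surround, h_cal2)[:n]
    # Combined signal 62 provided to the first speaker.
    return virtual + surround
```

With unit-impulse filters the chain passes the left stereo channel through unchanged, which is a convenient sanity check; in practice each filter would be a measured or designed impulse response.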
(50) S′.sub.L is the left channel stereo audio input, such as left channel stereo music input. S′.sub.R is the right channel stereo audio input, such as right channel stereo music input.
(51) HRIR′ in
(52) The stereo audio has two audio channels sR(t) and sL(t). The two virtual sound speakers may be created at angles +θ.sub.0 and −θ.sub.0 relative to the look direction, e.g. at −30 degrees and +30 degrees, by convolving the corresponding four head-related transfer functions (HRTFs) with sR(t) and sL(t).
(53) θ.sub.L and θ.sub.R are the angles to the left and right virtual speaker respectively, thus HRIR′ θ.sub.L is the right ear Head-Related Impulse Response for the left virtual speaker, see
(54) The output signals from HRIR′ θ.sub.R and HRIR′ θ.sub.L are added together at a virtual sound processing unit 14′ and provided to a first calibration filter h′cal1, which provides the virtual audio sound signal 56′.
(55) h′.sub.1, h′.sub.2, h′.sub.3, h′.sub.4 are the beamforming filters for each microphone input. Four microphones are shown in
(56) Thus, h′1 is a first secondary beamforming filter for the first secondary input signal 64 from the first secondary microphone 18. h′2 is a second secondary beamforming filter for the second secondary input signal 66 from the second secondary microphone 38. h′3 is a third secondary beamforming filter for the third secondary input signal 68 from the third secondary microphone 40. h′4 is a fourth secondary beamforming filter for the fourth secondary input signal 70 from the fourth secondary microphone 42.
(57) The output signals from the beamforming filters h′1, h′2, h′3 and h′4 are added together at an adder 54′ for the second beamformer and provided to a second calibration filter h′cal2, which provides the second surrounding sound signal 72.
(58) The first h′1, second h′2, third h′3 and fourth h′4 secondary beamforming filters provide the second beamformer. The second beamformer is configured for providing the second surrounding sound signal 72, where the second surrounding sound signal 72 is based on the first secondary input signal 64 from the first secondary microphone 18, the second secondary input signal 66 from the second secondary microphone 38, the third secondary input signal 68 from the third secondary microphone 40 and the fourth secondary input signal 70 from the fourth secondary microphone 42. The second surrounding sound signal 72 provides the second rear facing sensitivity pattern towards the rear direction.
(59) The virtual audio sound signal 56′ and the second surrounding sound signal 72 are added together at 60′ and the combined signal 62′ is provided to the second speaker 12.
(62) The audio sound from an external device (not shown) may be stereo music. The stereo music has two audio channels sR(t) and sL(t). The two virtual sound speakers 20 may be created at angles +θ.sub.0 and −θ.sub.0 relative to the look direction or head direction 78, e.g. at −30 degrees and +30 degrees, by convolving the corresponding four head-related transfer functions (HRTFs) with sR(t) and sL(t).
(63) The angles θ.sub.L and θ.sub.R are the angles relative to the head direction 78 (θ.sub.T) to the two virtual speakers 20, left virtual speaker L and right virtual speaker R, respectively.
θ.sub.L(n)=θ.sub.C(n)−θ.sub.T(n)+30°
θ.sub.R(n)=θ.sub.C(n)−θ.sub.T(n)−30°
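The two equations above can be collected into a small helper. The angles θ.sub.C and θ.sub.T and the ±30° offset are taken from the text; the function name and the idea of making the offset a parameter are ours.

```python
# Angles to the left (L) and right (R) virtual speakers, relative to the
# head direction, per theta_L(n) = theta_C(n) - theta_T(n) + 30 deg and
# theta_R(n) = theta_C(n) - theta_T(n) - 30 deg.
def virtual_speaker_angles(theta_c, theta_t, offset_deg=30.0):
    delta = theta_c - theta_t          # virtual-scene rotation vs. head direction
    return delta + offset_deg, delta - offset_deg
```

For example, with the head aligned to the virtual scene (θ.sub.C = θ.sub.T) the speakers sit at +30° and −30°; turning the head right by 40° while θ.sub.C is 10° puts them at 0° and −60°.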
(64) In some embodiments, the hearing device 2 is configured for providing a rubber band effect to the virtual speakers 20, providing that the virtual speakers 20 gradually shift position when the user 4 performs real turns other than fast/natural head movements. The hearing device 2 may provide the rubber band effect by applying a time constant of about 5-10 seconds to the head tracking sensor 28. The rubber band effect may be provided by applying a time constant to the angle θ.sub.T.
(65) The following difference equation adds the “rubber band” effect to the estimation of the angles:
θ.sub.C(n)=θ.sub.C(n−1)−α(θ.sub.C(n−1)−θ.sub.T(n−1)), 0<α<1
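The difference equation is a first-order recursion: each update moves θ.sub.C a fraction α of the way towards θ.sub.T. The sketch below shows one update step and the resulting drift; the choice of α = 0.3 and 20 iterations is purely illustrative (in practice α would be derived from the 5-10 second time constant and the sensor update rate).

```python
# One step of theta_C(n) = theta_C(n-1) - alpha * (theta_C(n-1) - theta_T(n-1)),
# with 0 < alpha < 1. Small alpha gives a slow, smooth "rubber band" drift.
def rubber_band_step(theta_c_prev, theta_t_prev, alpha):
    return theta_c_prev - alpha * (theta_c_prev - theta_t_prev)

# If the user holds a turned head direction theta_T, theta_C drifts
# toward it, so the virtual speakers gradually re-centre in front.
theta_c, theta_t = 0.0, 90.0
for _ in range(20):
    theta_c = rubber_band_step(theta_c, theta_t, alpha=0.3)
```

A single step with α = 0.5 halves the error; after many steps θ.sub.C converges to θ.sub.T, while fast head movements (handled by the low-latency path of claim 4) are left uncompensated by this slow loop.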
(67) Although particular features have been shown and described, it will be understood that they are not intended to limit the claimed invention, and it will be obvious to those skilled in the art that various changes and modifications may be made without departing from the scope of the claimed invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The claimed invention is intended to cover all alternatives, modifications and equivalents.
LIST OF REFERENCES
(68)
2 hearing device
4 user
6 first earphone
8 first speaker
10 second earphone
12 second speaker
14, 14′ virtual sound processing unit
16 first primary microphone
18 first secondary microphone
20 virtual speakers
22 audio sound
24 surrounding sounds from rear direction
26 surrounding sounds from front direction
28 head tracking sensor
30 headband
32 second primary microphone
34 third primary microphone
36 fourth primary microphone
38 second secondary microphone
40 third secondary microphone
42 fourth secondary microphone
S.sub.L, S′.sub.L left channel stereo audio input
S.sub.R, S′.sub.R right channel stereo audio input
θ.sub.L angle to the left virtual speaker relative to head direction 78
θ.sub.R angle to the right virtual speaker relative to head direction 78
HRIR θ.sub.L left ear Head-Related Impulse Response for the left virtual speaker
HRIR θ.sub.R left ear Head-Related Impulse Response for the right virtual speaker
h1 first primary beamforming filter
46 first primary input signal
h2 second primary beamforming filter
48 second primary input signal
h3 third primary beamforming filter
50 third primary input signal
h4 fourth primary beamforming filter
52 fourth primary input signal
54 adder for first beamformer
54′ adder for second beamformer
hcal1, h′cal1 first calibration filter
56, 56′ virtual audio sound signal
hcal2, h′cal2 second calibration filter
58 first surrounding sound signal
60, 60′ adder for virtual audio sound signal 56, 56′ and first/second surrounding sound signal 58/72
62, 62′ combined signal
HRIR′ θ.sub.L right ear Head-Related Impulse Response for the left virtual speaker
HRIR′ θ.sub.R right ear Head-Related Impulse Response for the right virtual speaker
h′1 first secondary beamforming filter
64 first secondary input signal
h′2 second secondary beamforming filter
66 second secondary input signal
h′3 third secondary beamforming filter
68 third secondary input signal
h′4 fourth secondary beamforming filter
70 fourth secondary input signal
72 second surrounding sound signal
θ.sub.C angle between the reference direction 74 and the center line 76
74 reference direction
76 center line
78 head direction of user
θ.sub.T angle between the head direction 78 of the user 4 and the reference direction 74
600 method in a hearing device for audio transmission
602 step of receiving an audio sound signal in a virtual sound processing unit
604 step of processing the audio sound signal in the virtual sound processing unit for generating a virtual audio sound signal
606 step of forwarding the virtual audio sound signal to a first speaker and a second speaker, the first and the second speaker being connected to the virtual sound processing unit, where the virtual audio sound appears to the user as audio sound coming from two virtual speakers in front of the user
608 step of capturing surrounding sounds by a first primary microphone to provide a first surrounding sound signal based on a first primary input signal from the first primary microphone; the first primary microphone being arranged in the first earphone for providing a first rear facing sensitivity pattern towards the rear direction
610 step of capturing surrounding sounds by a first secondary microphone to provide a second surrounding sound signal based on a first secondary input signal from the first secondary microphone; the first secondary microphone being arranged in the second earphone for providing a second rear facing sensitivity pattern towards the rear direction
612 step of transmitting the first surrounding sound signal to the first speaker
614 step of transmitting the second surrounding sound signal to the second speaker