Hearing system and method of its operation for providing audio data with directivity
20230031093 · 2023-02-02
CPC classification
H04R25/40 (ELECTRICITY)
Abstract
The disclosure relates to a method of operating a hearing system comprising an ear unit wearable at an ear of a user, an output transducer included in the ear unit, and a detector arrangement comprising a plurality of spatially separated sound detectors and configured to provide audio data representative of the detected sound.
Claims
1. A method of operating a hearing system, the hearing system comprising an ear unit configured to be worn at an ear of a user, an output transducer included in the ear unit and configured to stimulate the user's hearing, and a detector arrangement comprising a plurality of spatially separated sound detectors and configured to provide audio data representative of detected sound, characterized by providing, in a control data provision step, control data based on orientation data generated by a handheld device configured to be held at a hand of the user during changing a spatial orientation of the handheld device, the orientation data indicative of the spatial orientation of the handheld device; and providing, in a directivity provision step, the audio data with a directivity depending on the control data.
2. The method of claim 1, characterized by determining, in a direction determining step, a selected direction by comparing the orientation data with reference data, wherein, in the directivity provision step, the directivity of the audio data is provided corresponding to the selected direction.
3. The method of claim 2, characterized in that the direction determining step is performed in the control data provision step, wherein the control data is provided such that the control data is indicative of the selected direction.
4. The method of claim 2, characterized in that the direction determining step is performed after the control data provision step, wherein the control data is provided such that it includes the orientation data compared with the reference data.
5. The method of claim 2, characterized in that said orientation data is generated by the handheld device at a second time, wherein said reference data is indicative of orientation data generated by the handheld device at a first time.
6. The method of claim 2, characterized in that said reference data is indicative of a relation between the orientation data and a spatial orientation of the detector arrangement.
7. The method of claim 2, characterized by determining, in an initialization step, the reference data based on the orientation data generated at an initial time.
8. The method of claim 7, characterized in that the initialization step is initiated via a user interface.
9. The method of claim 1, characterized by determining, based on the orientation data, a spatial orientation of the handheld device relative to a predefined plane, wherein the directivity provision step is performed depending on the spatial orientation of the handheld device relative to the predefined plane.
10. The method of claim 9, characterized in that the predefined plane corresponds to a plane in which the directivity of the audio data is provided.
11. The method of claim 1, characterized in that, in the directivity provision step, the directivity of the audio data is continuously changed at a continuous change of the orientation data.
12. The method of claim 1, characterized in that, in the directivity provision step, the directivity of the audio data is unaltered when a change of the orientation data is determined to be below a threshold.
13. A computer-readable medium storing instructions that, when executed by a processing unit, cause the processing unit to perform the method according to claim 1.
14. A hearing system comprising an ear unit configured to be worn at an ear of a user; an output transducer included in the ear unit and configured to stimulate the user's hearing; a detector arrangement comprising a plurality of spatially separated sound detectors and configured to provide audio data representative of detected sound; characterized by a communication port configured to receive control data from a handheld device configured to be held at a hand of the user during changing a spatial orientation of the handheld device, the control data based on orientation data generated by the handheld device, the orientation data indicative of the spatial orientation of the handheld device; and a processing unit configured to provide the audio data with a directivity depending on the control data.
15. The hearing system of claim 14, characterized in that the hearing system further comprises a computer-readable medium storing instructions that, when executed by a processor included in the handheld device, cause the processor to provide the control data.
16. The hearing system of claim 14, characterized in that the hearing system comprises the handheld device, wherein the handheld device includes a processor configured to provide the control data.
17. The hearing system of claim 14, characterized in that at least one sound detector of the detector arrangement is included in a remote device, the remote device configured to transmit the audio data representative of the detected sound to the ear unit from a position remote from the ear unit.
18. The hearing system of claim 17, characterized in that the remote device comprises a visible orientation characteristic allowing the user to align the spatial orientation of the handheld device with the orientation characteristic.
19. The hearing system of claim 17, characterized in that the remote device comprises a support configured to be stationarily placed on a plane.
20. The hearing system of claim 14, characterized in that at least one sound detector of the detector arrangement is included in the ear unit.
21-23. (canceled)
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0037] Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. The drawings illustrate various embodiments and are a part of the specification. The illustrated embodiments are merely examples and do not limit the scope of the disclosure. Throughout the drawings, identical or similar reference numbers designate identical or similar elements. In the drawings:
DETAILED DESCRIPTION OF THE DRAWINGS
[0047] Different types of hearing device 111 can also be distinguished by the position at which they are worn at the ear. Some hearing devices, such as behind-the-ear (BTE) hearing aids and receiver-in-the-canal (RIC) hearing aids, typically comprise an earpiece configured to be at least partially inserted into an ear canal of the ear, and an additional housing configured to be worn at a wearing position outside the ear canal, in particular behind the ear of the user. Some other hearing devices, as for instance earbuds, earphones, in-the-ear (ITE) hearing aids, invisible-in-the-canal (IIC) hearing aids, and completely-in-the-canal (CIC) hearing aids, commonly comprise such an earpiece to be worn at least partially inside the ear canal without an additional housing for wearing at the different ear position. Some other hearing devices, such as over-ear headphones or headsets, can be configured to be worn at the ear entirely outside the ear canal.
[0048] In the example as shown, hearing device 111 is a binaural device comprising a left ear unit 112 to be worn at a left ear of the user, and a right ear unit 113 to be worn at a right ear of the user. Each ear unit 112, 113 includes a processor 116 communicatively coupled to an output transducer 115. Output transducer 115 may be implemented by any suitable audio output device, for instance a loudspeaker or a receiver of a hearing device or an output electrode of a cochlear implant system. Processor 116 is configured to provide an audio output signal to output transducer 115. The audio output signal may be amplified by a power amplifier included in the respective ear unit 112, 113, which is not shown in
[0049] Ear units 112, 113 further include a communication port 118 configured to receive audio data via a respective wireless communication link 152, 153. Audio data communication port 118 is communicatively coupled to processor 116 via a signal channel in order to supply processor 116 with a signal D containing the received audio data. Alternatively, a plurality of signal channels may be provided for supplying distinct audio data separately to processor 116, for instance audio data associated with sound detected by different sound detectors. Wireless link 152, 153 may be a radio frequency link, for example an analog frequency modulation (FM) link or a digital link. The FM link and/or digital link may be implemented as disclosed in further detail in patent application publication No. WO 2008/098590, the disclosure of which is herewith incorporated by reference. Wireless link 152, 153 may also be established via a Bluetooth protocol.
[0050] In some implementations, ear units 112, 113 further include a microphone or a plurality of spatially separated sound detectors configured to detect sound at the ear level and to provide audio data representative of the detected sound to processor 116. Hearing device 111 may include additional or alternative components as may serve a particular implementation.
[0051] Hearing system 101 further comprises a remote device 121 configured to be operated remotely from the user, in particular independently from any movement of the user. More particularly, remote device 121 can be a stationary device configured to be operated at a stationary position in an environment of moving sound sources such as, for instance, speaking individuals. Remote device 121 comprises a detector arrangement 122 including at least two spatially separated sound detectors 123, 124, 125. For instance, each sound detector 123-125 may be implemented as a microphone. Detector arrangement 122 may then be implemented as a microphone array. Sound detectors 123-125 are configured to detect sound 103 at different spatial positions, allowing sound components arriving from different directions to be distinguished at the spatial positions. Each of sound detectors 123-125 comprises a dedicated signal channel delivering a respective audio signal A1, A2, A3 containing audio data representative of sound 103 detected at the respective spatial position. The audio data in signals A1-A3 thus contains information about the direction from which the sound represented by the audio data has been detected by sound detectors 123-125. The audio data in signals A1-A3 is unmixed. Signals A1-A3 are thus considered "raw" audio signals.
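The directional information carried by spatially separated detectors can be illustrated with a short sketch. The following Python snippet is an editorial illustration, not part of the disclosure: it estimates the time difference of arrival of a sound between two detector positions by cross-correlation. Inter-channel delays of this kind are what encode the direction of sound 103 in raw signals A1-A3.

```python
import numpy as np

def tdoa_samples(sig_a, sig_b):
    """Estimate the time difference of arrival (in samples) between two
    detector signals via cross-correlation. A positive lag means the
    sound reached detector A before detector B."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

# Synthetic example: the same pulse arrives 5 samples later at detector B.
pulse = np.hanning(32)
sig_a = np.zeros(256)
sig_b = np.zeros(256)
sig_a[100:132] = pulse
sig_b[105:137] = pulse
print(tdoa_samples(sig_a, sig_b))  # → 5
```

Given the detector spacing and the speed of sound, such a lag can be converted to an angle of incidence, which is the basis of the direction-dependent processing described next.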
[0052] Remote device 121 further comprises a processor 126. Processor 126 comprises a DSP. Processor 126 is communicatively coupled to sound detectors 123-125 via the separate signal channels such that the audio data in each of signals A1-A3 can be separately supplied to processor 126. Processor 126 is configured to process the audio data received via audio signals A1-A3 in order to provide the audio data with a directivity. The directivity may correspond to any direction from which sound has been detected by sound detectors 123-125. As a result, the sound detected from this direction may be predominantly represented in the audio data after the signal processing performed by processor 126. In particular, processor 126 can be configured to perform an acoustic beamforming to provide the audio data representative of an acoustic beam formed in this direction. To this end, processor 126 can be configured to perform an appropriate mixing of the audio data in raw audio signals A1-A3 to produce the processed audio data. Processor 126 comprises an output signal channel on which an output signal B containing the audio data provided with the directivity can be delivered. A processing unit of hearing system 101 comprises processor 126 of remote device 121. The processing unit may further comprise processor 116 of ear units 112, 113.
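One common way to realize the acoustic beamforming described above is delay-and-sum mixing of the raw channels. The sketch below is a minimal, hedged illustration (integer sample delays only; the disclosure does not prescribe this particular algorithm):

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Mix raw channel signals into one beamformed signal: delay each
    channel by a whole number of samples, then average. Channels whose
    delays compensate the arrival-time differences of a chosen direction
    add coherently; sound from other directions adds incoherently."""
    out = np.zeros(len(channels[0]))
    for sig, d in zip(channels, delays):
        out += np.roll(sig, d)
    return out / len(channels)

# A pulse reaching three detectors with offsets of 0, 3 and 6 samples:
pulse = np.hanning(16)
chans = []
for off in (0, 3, 6):
    s = np.zeros(128)
    s[40 + off:56 + off] = pulse
    chans.append(s)

aligned = delay_and_sum(chans, delays=[0, -3, -6])  # steered toward the source
unsteered = delay_and_sum(chans, delays=[0, 0, 0])
print(aligned.max() > unsteered.max())  # → True
```

Steering the beam to a different direction amounts to choosing a different delay set, which is one way a processor could map control data to a directivity.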
[0053] Remote device 121 further comprises a communication port 128 configured to send audio data to hearing device 111 via the respective communication link 152, 153. Audio data communication port 128 is communicatively coupled to processor 126 via the output channel delivering output signal B. The audio data processed by processor 126 can thus be supplied from processor 126 to communication port 128. Communication port 128 is configured to send the processed audio data to communication port 118 of ear units 112, 113 via the respective communication link 152, 153. After receipt, the audio data received by communication port 118 is supplied to processor 116 as a signal D via an input signal channel.
[0054] Remote device 121 further comprises a communication port 127 configured to receive control data from a handheld device 131 via a communication link 155. Communication link 155 is a wireless link. Control data communication link 155 is established separate from audio data communication link 152, 153. Control data can thus be transmitted via communication link 155 independently from audio data transmitted via communication link 152, 153. Control data communication port 127 is communicatively coupled to processor 126 via a control signal channel delivering a control signal C containing the control data to processor 126. Processor 126 is configured to provide the audio data received via audio signals A1-A3 with a directivity depending on the control data.
[0055] In some implementations, communication port 127 is configured to establish communication link 155 with handheld device 131 via a Bluetooth protocol. In those implementations, communication link 155 is referred to as a Bluetooth link. Bluetooth link 155 allows the transmission of control data to remote device 121 to be implemented in a reliable and convenient way, in particular by exploiting a communication port of handheld device 131 conforming to the Bluetooth standard, which may be implemented by default in handheld device 131.
[0056] Handheld device 131 is configured to be held at a hand of the user during changing a spatial orientation of the handheld device. In some implementations, hearing system 101 further comprises handheld device 131 providing the control data. For instance, handheld device 131 may be a separate unit specifically dedicated to solely control an operation of hearing system 101, such as a remote control, or may be configured to also provide further functionalities unrelated to an operation of hearing system 101, such as a smartphone or a tablet. In some other implementations, hearing system 101 further comprises a computer-readable medium 143 storing instructions that, when executed by a processor included in the handheld device, cause the processor to provide the control data. In particular, the computer-readable medium 143 can be implemented as a database in a cloud 141. A program 144 enabling the processor of a handheld device to provide the control data may thus be downloaded from database 143. In this way, a user may apply a handheld device currently employed by the user for different purposes, in particular a smartphone or a tablet, to also operate hearing system 101.
[0057] Handheld device 131 comprises an orientation sensor 132 configured to generate orientation data indicative of a spatial orientation. Orientation sensor 132 can include an inertial sensor, in particular a motion sensor, for instance an accelerometer, and/or a rotation sensor, for instance a gyroscope and/or an accelerometer. Orientation sensor 132 can also comprise an optical detector such as a camera. For instance, the optical detector can be employed as a motion sensor and/or a rotation sensor by generating optical detection data over time and evaluating variations of the optical detection data. Orientation sensor 132 can also include a magnetometer, in particular an electronic compass, configured to measure the direction of an ambient magnetic field. The orientation data can comprise information of a spatial orientation of handheld device 131 relative to a reference frame 105 and/or a previous orientation of handheld device 131. Reference frame 105 can be the earth's reference frame. Reference frame 105 can be selected to correspond to a predetermined spatial orientation of handheld device 131.
[0058] In particular, the orientation data can indicate changes of the spatial orientation caused by a rotation of handheld device 131, for instance by a rotation around a z-axis in a plane formed by an x-axis and a y-axis of reference frame 105 as schematically indicated by a dashed circular arrow 104. Circular arrow 104 extends in a rotation plane defined by a normal vector pointing in the direction of the z-axis. Rotation plane 104 may thus be spanned by the x-axis and y-axis. The rotation plane may be selected to extend in parallel to a plane in which the directivity of the audio data received via audio signals A1-A3 is provided. In particular, a plane comprising the direction in which the acoustic beam is formed may be selected to correspond to rotation plane 104. In some implementations, rotation plane 104 may be selected to be substantially parallel to the floor and/or normal to the gravitational force. To this end, orientation sensor 132, for instance an accelerometer, can be configured to detect the direction of the gravitational force.
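Assuming, purely for illustration, that the orientation data yields the in-plane components (x, y) of a pointing vector of handheld device 131, the azimuth within rotation plane 104 could be derived as follows (the axis conventions are assumptions, not taken from the disclosure):

```python
import math

def azimuth_in_plane(x, y):
    """Map the in-plane components (x, y) of the device's pointing vector
    to an azimuth angle in degrees within the rotation plane, measured
    counterclockwise from the x-axis of the reference frame."""
    return math.degrees(math.atan2(y, x)) % 360.0

print(azimuth_in_plane(1.0, 0.0))  # pointing along x → 0.0
print(azimuth_in_plane(0.0, 1.0))  # pointing along y → 90.0
```

A rotation of the device around the z-axis then shows up as a continuous change of this azimuth, which can in turn drive a continuous change of the directivity.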
[0059] The orientation data generated by handheld device 131 can thus be provided independently of a spatial orientation of remote device 121, allowing the directivity of the audio data representing the sound detected by sound detectors 123-125 to be adjusted in dependence on the orientation data while remote device 121 remains stationary. Furthermore, the orientation data can be generated independently of a spatial orientation of hearing device 111 when worn at the user's ear, and therefore independently of a momentary orientation of the user's head. Thus, by rotating handheld device 131, the user can adjust the directivity in a convenient and reliable way, avoiding the unintentional changes of the directivity that would occur if the orientation data were sensitive to head movements. In this context, it has been found that head rotations are often spontaneous, imprecise and of a short-term nature, such that orientation data based on manual rotations of a handheld device is more adequate for a controlled adjustment of the directivity of remotely detected sound in a user-friendly way.
[0060] Handheld device 131 further comprises a processor 136 communicatively coupled to orientation sensor 132, and a communication port 137 communicatively coupled to processor 136. Processor 136 is configured to provide control data based on the orientation data generated by orientation sensor 132 to communication port 137. Communication port 137 is configured to send the control data to communication port 127 of remote device 121 via control data communication link 155. In some implementations, processor 136 is configured to determine a selected direction from the orientation data and to provide the control data such that the control data is indicative of the selected direction. The selected direction can correspond to a direction selected by the user by adjusting a spatial orientation of handheld device 131. The directivity of the audio data can thus be provided corresponding to the selected direction. In some implementations, processor 136 is configured to provide the control data such that the control data includes the orientation data. The selected direction may then be determined by processor 126 of remote device 121 and/or by processor 116 of hearing device 111 after transmission of the control data from handheld device 131. The processing unit of hearing system 101 may further comprise processor 136 of handheld device 131.
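The two variants described above (control data indicative of an already-resolved selected direction versus control data forwarding the raw orientation data) could be distinguished in the payload, for example, as sketched below; the JSON encoding and field names are purely illustrative assumptions, not part of the disclosure:

```python
import json

def make_control_data(orientation, selected_direction=None):
    """Build a control-data payload. If the handheld device has already
    resolved a selected direction, send that; otherwise forward the raw
    orientation data so the receiving processor can resolve it."""
    if selected_direction is not None:
        payload = {"type": "direction", "deg": float(selected_direction)}
    else:
        payload = {"type": "orientation", "data": orientation}
    return json.dumps(payload).encode("utf-8")

# Handheld forwards raw orientation data for the remote side to resolve:
print(json.loads(make_control_data({"yaw_deg": 42.0}))["type"])  # → orientation
# Handheld has already determined the selected direction:
print(json.loads(make_control_data({}, selected_direction=90.0))["deg"])  # → 90.0
```

Either payload can be carried over the control data link, leaving the choice of where the direction determining step runs to the implementation.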
[0061] In some implementations, processor 136 is configured, based on the generated orientation data, to determine a spatial orientation of handheld device 131 relative to a predefined plane. The predefined plane may correspond to rotation plane 104. Rotation plane 104 may be any plane in which the handheld device is rotatable. A change of the directivity of the audio data may be controlled in the directivity provision step depending on the control data based on the orientation data generated during and/or after the rotation. For instance, as described above, rotation plane 104 may be predefined to extend in parallel to a plane comprising the direction in which the acoustic beam is formed and/or may be selected to be substantially parallel to the floor and/or normal to the gravitational force. In particular, rotations of handheld device 131 in the direction of the z-axis of reference frame 105, which may imply rotations around the x-axis and/or y-axis and/or linear combinations thereof, can provoke a spatial orientation of handheld device 131 deviating from rotation plane 104.
[0062] Processor 136 can be further configured to evaluate, based on the spatial orientation relative to rotation plane 104, an orientation criterion of handheld device 131. For instance, the orientation criterion may be determined to be fulfilled when a screen and/or user interface of handheld device 131 faces in an upward direction substantially parallel to rotation plane 104, in particular opposite to the gravitational force. To illustrate, such a condition may be fulfilled when handheld device 131 is placed on a table and/or floor with the screen and/or user interface facing up. The orientation criterion may be determined not to be fulfilled when the spatial orientation of handheld device 131 strongly deviates from this position relative to rotation plane 104, for instance when the screen and/or user interface of handheld device 131 faces downward. In a case in which the orientation criterion is determined to be fulfilled, the audio data may be provided with a directivity depending on the control data, as described above. In a case in which the orientation criterion is determined not to be fulfilled, a different operation can be activated by processor 136. The different operation may comprise disabling the provision of the directivity of the audio data depending on the control data and/or activating an automated provision of the directivity of the audio data and/or muting the reproduction of the audio data representing the sound detected by remote device 121.
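As a hedged sketch of such an orientation criterion: with the device at rest, an accelerometer measures the gravity vector, whose component along the device z-axis indicates whether the screen faces upward. The device-frame axis convention (+z out of the screen) and the threshold below are assumptions for illustration only.

```python
def screen_facing_up(accel_xyz, tolerance=0.5):
    """Evaluate an orientation criterion from a resting accelerometer
    reading: gravity along a strongly positive device z-axis (assumed to
    point out of the screen) indicates the screen faces upward."""
    ax, ay, az = accel_xyz
    norm = (ax * ax + ay * ay + az * az) ** 0.5
    return norm > 0.0 and az / norm > tolerance

print(screen_facing_up((0.0, 0.0, 9.81)))   # flat on a table, screen up → True
print(screen_facing_up((0.0, 0.0, -9.81)))  # screen down → False (e.g. mute)
```

The boolean result could then gate the directivity provision, or trigger the different operation (automated directivity, muting) when the criterion fails.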
[0063] Handheld device 131 further comprises a user interface 133 communicatively coupled to processor 136. Processor 136 is configured, depending on a user command received via user interface 133, to initiate an initialization step. In the initialization step, reference data based on the orientation data generated at an initial time can be determined by processor 136. The reference data can thus be representative of the orientation data during a placement of handheld device 131 at an initial spatial orientation at the initial time, in particular relative to a placement of remote device 121 at a default spatial orientation. The reference data can be employed to determine the selected direction by comparing the orientation data generated at a later time with the reference data.
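The initialization step and the subsequent comparison of orientation data with reference data might be sketched as follows; this simplifies the orientation data to a single yaw angle in degrees and is not the disclosed implementation:

```python
class DirectionSelector:
    """Determine a selected direction by comparing current orientation
    data with reference data captured in an initialization step."""

    def __init__(self):
        self.reference_yaw = None

    def initialize(self, yaw_now):
        # Initialization step: store the orientation data at the initial time
        # as the reference data.
        self.reference_yaw = yaw_now

    def selected_direction(self, yaw_now):
        if self.reference_yaw is None:
            raise RuntimeError("initialization step not yet performed")
        # The selected direction is the rotation relative to the reference.
        return (yaw_now - self.reference_yaw) % 360.0

sel = DirectionSelector()
sel.initialize(30.0)                  # e.g. handheld aligned with the remote device
print(sel.selected_direction(120.0))  # → 90.0
```

A rotation of the handheld device after initialization thus maps directly to a selected direction for the directivity, regardless of the absolute orientation at the initial time.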
[0064] Handheld device 131 further comprises a communication port 134 configured to communicate with cloud 141 via a cloud communication link 159, for instance an internet link. Communication port 134 is communicatively coupled to processor 136. Program 144 containing instructions for providing the control data based on the orientation data can thus be downloaded by processor 136 from database 143. Processor 136 may include a memory for non-transitory installation and/or storage of program 144.
[0066] Hearing device 211 comprises a left ear unit 212 and a right ear unit 213. The audio data contained in audio signals B1-B3 can be transmitted from communication port 128 of remote device 221 to communication port 118 of the respective ear unit 212, 213 via audio data communication link 152, 153. Communication port 118 is communicatively coupled to processor 116 of the respective ear unit 212, 213 via a plurality of signal channels configured to supply processor 116 with separate audio signals D1-D3 containing the received audio data corresponding to separate audio signals B1-B3. Processor 116 is configured to process the audio data received via audio signals D1-D3 in order to provide the audio data with a directivity, as described above in conjunction with remote device 121. The processing of the audio data by processor 116 can be performed differently in each ear unit 212, 213 in order to exploit the binaural configuration of hearing device 211.
[0067] Ear units 212, 213 further comprise a communication port 217 configured to receive the control data from handheld device 131 via a respective wireless communication link 256, 257. Communication link 256, 257 can be established between communication port 137 of handheld device 131 and communication port 217 of ear units 212, 213, corresponding to communication link 155 described above. The control data based on the orientation data generated by handheld device 131 can thus be received by communication port 217 via communication link 256, 257. Control data communication port 217 is communicatively coupled with processor 116 via a control signal channel supplying processor 116 with control signal C containing the control data. The directivity of the audio data can thus be provided by processor 116 at the ear level depending on the control data.
[0068] In some implementations, the control data based on the orientation data generated by handheld device 131 can additionally be received by communication port 127 of remote device 221 via communication link 155. Processor 126 of remote device 221 may then be configured to provide an initial processing of raw audio signals A1-A3 in order to provide pre-processed audio data in audio signals B1-B3 depending on the control data. For instance, a signal-to-noise ratio (SNR) may be improved in audio signals B1-B3, in particular by a preliminary mixing of the audio data, before transmission to ear units 212, 213. The pre-processed audio data received via audio signals D1-D3 may then be further processed by processor 116 of ear units 212, 213 in order to provide the audio data with the directivity at the ear level.
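The SNR benefit of such a preliminary mixing can be illustrated numerically: averaging channels whose signal component is aligned but whose noise is independent reduces the noise power roughly by the number of channels. This is a generic signal-processing observation, not a claim about the specific pre-processing used in the disclosure.

```python
import numpy as np

def snr_db(observed, clean):
    """Signal-to-noise ratio (dB) of an observed signal vs. its clean part."""
    noise = observed - clean
    return 10 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0.0, 8.0 * np.pi, 1000))
# Three channels: identical (already aligned) signal, independent noise.
channels = [clean + rng.normal(0.0, 1.0, 1000) for _ in range(3)]

premixed = np.mean(channels, axis=0)  # coherent signal, incoherent noise
print(snr_db(premixed, clean) > snr_db(channels[0], clean))  # → True
```

For three channels the expected improvement is about 10·log10(3) ≈ 4.8 dB, leaving the final directivity processing at the ear level to work on cleaner input.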
[0070] Processor 116 of left ear unit 312 is communicatively coupled to first sound detector 123 via a first signal channel delivering the audio data in audio signal A1. Processor 116 of right ear unit 313 is communicatively coupled to second sound detector 124 via a second signal channel delivering the audio data in audio signal A2. Ear units 312, 313 are configured to exchange audio data via an audio data communication link 352. Each ear unit 312, 313 comprises a communication port 317 configured to send and receive audio data to and from the communication port 317 of the other ear unit 312, 313 via communication link 352. Processor 116 of each ear unit 312, 313 is communicatively coupled to the respective communication port 317 via a respective signal channel. An audio signal E1 representative of audio data in audio signal A1 can thus be received by processor 116 of right ear unit 313 from processor 116 of left ear unit 312 via communication link 352. An audio signal E2 representative of audio data in audio signal A2 can be received by processor 116 of left ear unit 312 from processor 116 of right ear unit 313 via communication link 352. Audio data contained in audio signal A1 and in audio signal A2 can thus be received by processor 116 of each ear unit 312, 313 via a separate audio channel. Processor 116 of each ear unit 312, 313 is configured to provide the received audio data with a directivity, in particular by performing a binaural acoustic beamforming, depending on the control data received from handheld device 131 via the respective communication link 256, 257.
[0072] Processor 116 of left ear unit 412 is communicatively coupled to sound detectors 123-125 via the separate signal channels such that the audio data in each of signals A1-A3 can be separately supplied to processor 116 of left ear unit 412. Processor 116 of right ear unit 413 is communicatively coupled to sound detectors 423-425 via separate signal channels such that audio data in audio signals A4-A6 representing sound detected by sound detectors 423-425 can be separately supplied to processor 116 of right ear unit 413. Processor 116 of left ear unit 412 is configured to process the audio data received via audio signals A1-A3 in order to provide the audio data with a directivity. Processor 116 of right ear unit 413 is configured to process the audio data received via audio signals A4-A6 in order to provide the audio data with a directivity. The directivity of the audio data is provided depending on the control data received via communication link 256, 257 from handheld device 131.
[0073] In some implementations, ear units 412, 413 are configured to exchange audio data via audio data communication link 352. The audio data in signals A1-A3 and the audio data in signals A4-A6 may then be exchanged between processor 116 of left ear unit 412 and processor 116 of right ear unit 413. Processor 116 of each ear unit 412, 413 may thus be configured to receive the audio data in signals A1-A6 via a respective separate channel and to provide the received audio data with a directivity, in particular by performing binaural acoustic beamforming. In particular, sound detectors 123-125 of first detector arrangement 122 and sound detectors 423-425 of second detector arrangement 422 may jointly form a detector arrangement for providing audio data representative of the detected sound. The audio data can then be provided with a directivity by processor 116 of each ear unit 412, 413 depending on the control data received from handheld device 131.
[0075] Remote device 521 further comprises a detector arrangement 522 including a plurality of spatially separated sound detectors 523, 524, 525, 526. Sound detectors 523-526 each comprise a sound detection surface 533, 534, 535, 536. Sound detection surfaces 533-536 are provided on top face 532 of housing 531. In this way, sound impinging from various directions on top face 532 can be detected. Sound detection surfaces 533-536 are oriented in an opposite direction with respect to bottom face 538. The support provided at bottom face 538 allows sound detection surfaces 533-536 to be positioned in a reproducible way at a defined distance from the plane on which remote device 521 is disposed. Each sound detection surface 533-536 may be implemented as a membrane excitable to vibration by impinging sound. Sound detection surfaces 533-536 are spaced apart in a circular arrangement.
[0076] Housing 531 comprises at least one visible orientation characteristic 528, 529. In the illustrated example, two orientation characteristics 528, 529 are schematically indicated. Orientation characteristic 528, 529 can indicate a default spatial orientation of remote device 521. Orientation characteristic 528, 529 can thus allow the user to align a spatial orientation of handheld device 131 with a default spatial orientation of remote device 521. Orientation characteristic 528, 529 may be provided by a visual marker, for instance an arrow, indicating a default direction, for instance a front direction, of remote device 521. Orientation characteristic 528, 529 may also be provided by a shape of housing 531, in particular an asymmetric shape, allowing the default direction of remote device 521 to be identified. Orientation characteristic 528, 529 may also be provided by a light emitter or another visible feature provided at housing 531.
[0077] The user can position remote device 521 in such a way that orientation characteristic 528, 529 is aligned with his position. A default spatial orientation of remote device 521 can be defined by the alignment. For instance, the user may choose that a particular orientation characteristic 528, 529 points in a front direction relative to his body in order to position remote device 521 in the default spatial orientation. The user may then rotate handheld device 131 to align handheld device 131 with orientation characteristic 528, 529. For instance, the user may choose to align a front direction of handheld device 131, which may be defined by a direction pointing away from a front face of handheld device 131, with the default spatial orientation of remote device 521 such that the front direction of handheld device 131 points toward a particular orientation characteristic 528, 529. Relating the spatial orientation of remote device 521 and the spatial orientation of handheld device 131 in such a way can be exploited to also relate the direction of the sound detected by remote device 521 to the orientation data generated by handheld device 131. The user may thus select a preferred directivity of the audio data representing the detected sound by choosing an appropriate spatial orientation of handheld device 131.
[0078] In some implementations, after aligning handheld device 131 and remote device 521 with respect to their spatial orientation, the user may initiate an initialization step via a user interface. For instance, a user interface 527 provided on remote device 521 and/or user interface 133 of handheld device 131 may be configured to take instructions from the user to initiate the initialization step. In the initialization step, reference data can be determined based on orientation data generated by handheld device 131 at an initial time relative to the placement of remote device 521 at the default spatial orientation. The reference data can then be employed to relate the orientation data generated by handheld device 131 at a later time to the default spatial orientation of remote device 521. A selected direction for the directivity of the audio data can thus be determined by comparing the orientation data generated by handheld device 131 with the reference data.
[0079] In some other implementations, reference data may be employed representing orientation data generated by the handheld device at a first time. The reference data can then be compared with orientation data generated by the handheld device at a second time in order to determine the selected direction.
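While the disclosure does not specify an implementation, the comparison of later orientation data with reference data captured at a first time may be sketched as follows. This is a minimal sketch assuming the orientation data reduces to a yaw angle in the rotation plane; the helper names and the data format are hypothetical:

```python
def capture_reference(orientation_yaw_deg):
    """Store the handheld device's yaw at an initial time as reference data.
    Hypothetical helper; the disclosure leaves the data format open."""
    return {"yaw_deg": orientation_yaw_deg}

def selected_direction(reference, current_yaw_deg):
    """Selected direction = current orientation relative to the reference
    orientation, wrapped into [0, 360) degrees."""
    return (current_yaw_deg - reference["yaw_deg"]) % 360.0

# Initialization: device aligned with the remote device's default orientation.
ref = capture_reference(30.0)
# Later, the device has been rotated by 90 degrees.
direction = selected_direction(ref, 120.0)  # -> 90.0
```

The modulo wrap ensures that a rotation across the 0°/360° boundary still yields a valid selected direction.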
[0080] In some implementations, the user may select orientation characteristic 528, 529 in order to indicate his spatial position to remote device 521. For instance, a plurality of orientation characteristics 528, 529 can be circularly arranged around a center of remote device 521. The user may select a corresponding orientation characteristic 528, 529 via user interface 527. Orientation characteristics 528, 529 may also be configured to be directly manipulated by the user. For instance, orientation characteristics 528, 529 can be implemented as push buttons such that the user can indicate a selected orientation characteristic 528, 529 by pushing it.
[0081]
[0082]
[0083] In a first scenario illustrated in
[0084] In a second scenario illustrated in
[0085] In some implementations, the alignment of the front direction of handheld device 731 and the direction in which user 771 faces remote device 721, as illustrated in
[0086] In some implementations, a spatial orientation of handheld device 731 relative to a predefined plane is determined, wherein the audio data is provided with a directivity depending on the control data in dependence of the spatial orientation of the handheld device relative to the predefined plane. In particular, an orientation criterion of the determined spatial orientation relative to the predefined plane may be evaluated. The predefined plane may be provided as rotation plane 104. The orientation criterion may be determined to be fulfilled when handheld device 731 points in an upward direction away from table surface 761. In this case, the audio data may be provided with a directivity depending on the control data. The orientation criterion may be determined not to be fulfilled when handheld device 731 points in a transverse direction and/or in a downward direction toward table surface 761. In this case, the audio data may not be provided with a directivity depending on the control data. Instead, a different operation may be activated, for instance disabling the forming of beam 751 in a direction depending on the control data and/or activating an automated steering of beam 751 and/or muting the reproduction of the sound detected by remote device 721 and/or performing another operation of providing the audio data. Thus, user 771 can be enabled to control several functionalities of hearing system 701 in a convenient way. More particularly, the user can change the spatial orientation of the handheld device relative to the predefined plane by a manual gesture, such as manually flipping or tilting handheld device 731 with respect to the spatial orientation relative to predefined plane 104, which can be carried out rather effortlessly and may be easily remembered by user 771.
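The evaluation of the orientation criterion described above may be sketched as follows; this assumes the orientation data yields a pitch angle relative to the predefined plane, and the threshold value and operation labels are illustrative assumptions rather than taken from the disclosure:

```python
def orientation_criterion_fulfilled(pitch_deg, up_threshold_deg=30.0):
    """Treat the device as 'pointing upward' away from the predefined plane
    when its pitch above that plane exceeds an illustrative threshold."""
    return pitch_deg > up_threshold_deg

def handle_orientation(pitch_deg, control_data):
    """Dispatch between control-data-dependent beam steering and a fallback
    operation, depending on the orientation criterion."""
    if orientation_criterion_fulfilled(pitch_deg):
        # Criterion fulfilled: steer the beam according to the control data.
        return ("steer_beam", control_data["selected_direction"])
    # Transverse or downward orientation: activate a different operation,
    # e.g. automated steering of the beam.
    return ("automated_steering", None)
```

A usage example: `handle_orientation(60.0, {"selected_direction": 45.0})` selects beam steering toward 45°, while a negative pitch falls back to automated steering.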
[0087]
[0088] In a first scenario illustrated in
[0089] In a second scenario illustrated in
[0090] In some implementations, the alignment of the front direction of handheld device 731 and the direction in which the user 771 faces with the front side of his body, as illustrated in
[0091] In some implementations, the audio data is provided with a directivity depending on the control data depending on whether the spatial orientation of handheld device 731 is within a certain range relative to a predefined plane. The predefined plane may be provided as rotation plane 104. Rotation plane 104 may be defined as a plane parallel to the ground plane. Thus, by changing the spatial orientation of the handheld device relative to the ground plane by a manual gesture, such as manually flipping or tilting handheld device 731 relative to the direction of the gravitational force, user 771 can be enabled to turn on and/or to turn off a functionality of hearing system 801 in which the audio data is provided with a directivity depending on the control data. When the functionality is turned off, a different operation of hearing system 801 may be activated instead, as described above.
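Deciding whether the device points away from the ground can be inferred from an accelerometer, since the reading of a device at rest is dominated by gravity. The sign convention below (face-up reading approximately +g on the device z-axis) follows a common smartphone sensor convention and is an assumption of this sketch:

```python
G = 9.81  # gravitational acceleration in m/s^2

def device_face_up(accel_z):
    """True when the z-axis accelerometer reading indicates the device face
    points away from the ground. Using half of g as the cut-off makes the
    decision robust against small tilts and sensor noise."""
    return accel_z > 0.5 * G

def directivity_enabled(accel_z):
    # Face up: provide the audio data with a control-data-dependent directivity.
    # Face down or transverse: a different operation may be activated instead.
    return device_face_up(accel_z)
```

With this convention, flipping the device face down changes the sign of the z-axis reading and thereby toggles the functionality.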
[0092]
[0093] In the examples illustrated in
[0094] Spatial orientations 736-738 can be characterized by differing alignments of handheld device 731 relative to the z-axis of reference frame 105. In spatial orientation 736 illustrated in
[0095] The audio data may be provided with a directivity depending on the control data depending on whether a particular spatial orientation 736-738 relative to predefined plane 104 is determined based on the orientation data. The particular spatial orientation may be predefined relative to predefined plane 104. The provision of the audio data with a directivity depending on the control data may be disabled when a spatial orientation 736-738 deviating from the predefined spatial orientation relative to predefined plane 104 is determined. Instead, a different operation of hearing system 701, 801 may be performed, as described above.
[0096] This can allow a user of the hearing system to manually activate and/or deactivate the provision of the audio data with a directivity depending on the control data and/or the different operation by a manual gesture involving handheld device 731. In particular, the manual gesture can involve a change of the spatial orientation of handheld device 731 relative to predefined plane 104. For instance, the manual gesture may involve tilting handheld device 731 from spatial orientation 736 to spatial orientation 737 and/or vice versa. The manual gesture may also involve tilting handheld device 731 from spatial orientation 737 to spatial orientation 738 and/or vice versa. The manual gesture may also involve flipping handheld device 731 from spatial orientation 736 to spatial orientation 738 and/or vice versa.
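The mapping from a measured tilt onto the three illustrated spatial orientations may be sketched as follows. This assumes orientations 736, 737, and 738 correspond to upward, transverse, and downward alignments relative to predefined plane 104, respectively; the boundary angles are illustrative assumptions, not taken from the disclosure:

```python
def classify_orientation(tilt_deg):
    """Map the handheld device's tilt from the z-axis of the reference frame
    onto one of three spatial orientations. Boundary angles are illustrative."""
    if tilt_deg < 45.0:
        return 736  # pointing upward, away from the predefined plane
    if tilt_deg <= 135.0:
        return 737  # transverse, roughly within the predefined plane
    return 738      # pointing downward, toward the predefined plane
```

A tilting gesture then shows up as a transition between adjacent return values (736 to 737, or 737 to 738), while a flipping gesture transitions directly between 736 and 738.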
[0097]
[0098]
[0099] At 912, control data is determined based on the orientation data by processor 136 included in handheld device 131, 731. In some implementations, the determined control data includes the generated orientation data. In particular, the control data may substantially correspond to the orientation data. In some other implementations, the control data is determined from the orientation data such that the control data is indicative of a selected direction. The selected direction may indicate a direction selected by the user for providing the directivity of the audio data.
[0100] At 913, the control data is transmitted by handheld device 131, 731 to remote device 121, 521, 621, 721 and/or to hearing device 111, 211, 311, 411, 711, 811 via control data communication link 155, 256, 257. At 914, the control data is received by remote device 121, 521, 621, 721 and/or hearing device 111, 211, 311, 411, 711, 811. The method including operations 911-914 may be implemented in place of control data provision step 901. The method may also be implemented independently of hearing system 101, 201, 701, with the exception of receiving, at operation 914, the control data from handheld device 131, 731 by hearing device 111, 211, 311, 411, 711, 811 and/or by remote device 121, 521, 621, 721.
[0101]
[0102]
[0103] The method including operations 921-924 and/or operations 931-934 may be implemented as a direction determining step. In some implementations, the direction determining step is performed by handheld device 131, 731 such that the selected direction can be included in the control data. In some implementations, the direction determining step is at least partially performed by hearing device 111, 211, 311, 411, 711, 811 and/or remote device 121, 521, 621, 721, in particular the determining of the selected direction at 924, 934 and/or the comparison at 923, 933. The audio data provided at operation 902 can thus be provided with a directivity corresponding to the selected direction. As a result, sound detected from the selected direction may be predominantly represented in the audio data.
[0104] In other implementations, the selected direction may be determined at operation 924, 934 based on the orientation data provided at operation 922, 931 without the comparison with the reference data at 923, 933. For instance, the orientation data may be provided at 922, 931 such that the orientation data is indicative of the spatial orientation of handheld device 131, 731 relative to a predefined reference frame, such as the earth's reference frame, and/or the spatial orientation of handheld device 131, 731 relative to hearing device 111, 211, 311, 411, 711, 811 and/or remote device 121, 521, 621, 721. Thus, a comparison with reference data, as provided at operation 921, 932, may not be required for determining the selected direction.
[0105]
[0106] In other implementations, the reference data relating the orientation data to a spatial orientation of hearing device 111, 211, 311, 411, 711, 811 and/or remote device 121, 521, 621, 721 may be determined automatically and/or independently from a user interaction such that the initialization step including operations 941-945 may not be required. The reference data can be provided by orientation data indicative of the spatial orientation of the detector arrangement. The ear unit and/or the remote device may be configured to generate the orientation data indicative of the spatial orientation of the detector arrangement. The reference data may then be generated by a sensor, in particular an inertial sensor, provided at a fixed position relative to at least one sound detector of the detector arrangement. For instance, hearing device 111, 211, 311, 411, 711, 811 and/or remote device 121, 521, 621, 721 may be provided with an orientation sensor configured to provide orientation data indicative of the spatial orientation of hearing device 111, 211, 311, 411, 711, 811 and/or remote device 121, 521, 621, 721. The orientation data indicative of the spatial orientation of hearing device 111, 211, 311, 411, 711, 811 and/or remote device 121, 521, 621, 721 may then be employed as the reference data.
[0107]
[0108]
[0109] In a case in which no change of the orientation data has been determined, no change of the directivity provided in the audio data is controlled at 964. In some implementations, in a case in which a change of the orientation data has been determined, a corresponding change of the directivity provided in the audio data is controlled at 965. In this way, the directivity of the audio data may be continuously changed at operation 965 during a continuous change of the orientation data. In some other implementations, in a case in which a change of the orientation data has been determined, it is determined at 962 whether the change of the orientation data is above a threshold. In a case in which the change of the orientation data is below the threshold, operation 964 is performed such that the directivity provided in the audio data is not changed. In a case in which the change of the orientation data is above the threshold, operation 965 is performed such that the directivity provided in the audio data is changed accordingly. In this way, the directivity of the audio data may be gradually changed at operation 965 during a continuous change of the orientation data. The amount of the gradual change may be adjusted by setting the threshold in operation 962 accordingly. The method comprising operations 961-965 may be included in the directivity provision step performed at operation 902.
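The threshold-gated update of the directivity described above may be sketched as follows; the threshold value is illustrative, and the function names are hypothetical:

```python
def update_directivity(prev_direction, new_direction, threshold_deg=10.0):
    """Change the beam direction only when the orientation change exceeds a
    threshold; otherwise keep the current directivity unchanged."""
    # Smallest signed angular difference, wrapped into (-180, 180] degrees,
    # so that a change across the 0/360 boundary is measured correctly.
    delta = (new_direction - prev_direction + 180.0) % 360.0 - 180.0
    if abs(delta) < threshold_deg:
        return prev_direction      # keep the current directivity
    return new_direction           # follow the changed orientation
```

Raising the threshold makes the directivity change in coarser steps during a continuous rotation of the handheld device; a threshold of zero reproduces the continuous-update behavior.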
[0110]
[0111]
[0112] At 985, the audio data is collected from the different signal channels by a processing unit, in particular processor 126 included in remote device 121, 521, 721 and/or processor 116 included in hearing device 111, 211, 711. At 986, the collected audio data is provided with a directivity by the processing unit, in particular by performing an acoustic beam forming. The directivity can be provided depending on control data corresponding to operation 902. In particular, the directivity can correspond to a selected direction controlled by the control data such that the sound detected from the selected direction is predominantly represented in the audio data. The acoustic beam can thus be formed in the selected direction. Providing the directivity in the audio data may comprise any of operations 961-965 of the method illustrated in
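The acoustic beam forming described above may be sketched as a simple delay-and-sum beamformer over the collected signal channels. The linear array geometry, sample rate, and nearest-sample delay approximation are assumptions of this sketch, not taken from the disclosure:

```python
import math

def delay_and_sum(channels, mic_x, steer_deg, fs=16000, c=343.0):
    """Minimal delay-and-sum beamformer sketch for a linear microphone array
    along the x-axis, using nearest-sample delays.
    channels: list of equal-length sample sequences, one per microphone;
    mic_x: microphone x-coordinates in metres;
    steer_deg: selected direction, measured from the array axis."""
    n = len(channels[0])
    out = [0.0] * n
    for ch, x in zip(channels, mic_x):
        # Plane-wave delay for a source in the steering direction.
        tau = x * math.cos(math.radians(steer_deg)) / c
        shift = round(tau * fs)  # nearest-sample approximation
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                out[i] += ch[j]
    return [v / len(channels) for v in out]
```

Sound arriving from the selected direction is time-aligned across the channels and sums coherently, so it is predominantly represented in the output, while sound from other directions sums incoherently and is attenuated.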
[0113] While the principles of the disclosure have been described above in connection with specific devices, systems and methods, it is to be clearly understood that this description is made only by way of example and not as limitation on the scope of the invention. The above described preferred embodiments are intended to illustrate the principles of the invention, but not to limit the scope of the invention. Various other embodiments and modifications to those preferred embodiments may be made by those skilled in the art without departing from the scope of the present invention that is solely defined by the claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processing unit, processor or controller or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.