Hearing aid system containing at least one hearing aid instrument worn on the user's head, and method for operating such a hearing aid system

11665486 · 2023-05-30

Abstract

A hearing aid system assists a user's ability to hear. The system has a hearing aid instrument worn on the user's head. A sound signal from the user's surroundings is recorded and converted into input audio signals by two input transducers. The input audio signals are processed in a signal processing step for generating an output audio signal, which is output by an output transducer. The input audio signals or audio signals derived therefrom by pre-processing are direction-dependently damped by an adaptive beamformer according to the stipulation of a variable directivity with a directional strength to generate a directed audio signal. The directivity is varied with a specified adaptation speed such that the energy content of the directed audio signal is minimized. The adaptation speed and/or the directional strength are variably set on a basis of an analysis of the input audio signals or of the pre-processed audio signals.

Claims

1. A method for operating a hearing aid system for assisting a user's ability to hear, the hearing aid system having at least one hearing aid instrument worn on a head of the user, which comprises the steps of: recording a sound signal from surroundings of the user and converting the sound signal into input audio signals by means of at least two input transducers of the hearing aid system; processing the input audio signals in a signal processing step to generate an output audio signal, wherein in the signal processing step, the input audio signals or pre-processed audio signals derived therefrom by pre-processing are direction-dependently damped by means of a first adaptive beamformer according to a stipulation of a first variable directivity with a directional strength in order to generate a first directed audio signal; varying a directivity of the first adaptive beamformer with a first adaptation speed in such a way during an adaptation step that an energy content of the first directed audio signal is minimized, wherein the first adaptation speed and/or the directional strength are variably set on a basis of an analysis of the input audio signals or of the pre-processed audio signals; applying a second adaptive beamformer with a second variable directivity to the input audio signals or the pre-processed audio signals to generate a second directed audio signal for purposes of setting the first adaptation speed and/or the directional strength for the first adaptive beamformer; setting a second variable directivity of the second adaptive beamformer with a second adaptation speed in such a way that an energy content of the second directed audio signal is minimized, wherein the second adaptation speed does not drop below the first adaptation speed and at least intermittently exceeds the latter; and outputting the output audio signal by means of an output transducer of the hearing aid instrument.

2. The method according to claim 1, which further comprises setting the first adaptation speed and/or the directional strength in dependence on a time stability of the input audio signals or of the pre-processed audio signals.

3. The method according to claim 1, which further comprises variably setting the first adaptation speed and/or the directional strength in dependence on a change in the second variable directivity.

4. The method according to claim 1, which further comprises setting the first adaptation speed and/or the directional strength in dependence on a deviation of the second variable directivity from the first variable directivity.

5. The method according to claim 1, wherein: the first variable directivity is frequency dependent such that different frequency components of the input audio signals or of the pre-processed audio signals are individually direction-dependently damped in each case; the first adaptation speed and/or the directional strength are specified for the first adaptive beamformer as a frequency-dependent variable; a noise component, emanating from a noise source, in the input audio signals or in the pre-processed audio signals is identified for the purposes of setting the first adaptation speed and/or the directional strength; an interference frequency range corresponding to the noise component is ascertained; and the first adaptation speed and/or the directional strength are uniformly specified in the interference frequency range.

6. The method according to claim 1, which further comprises wearing the at least one hearing aid instrument in or on an ear of the user.

7. A hearing aid system for assisting a user's ability to hear, the hearing aid system comprising: at least one hearing aid instrument worn on a head of the user and having an output transducer set up to output an output audio signal; at least two input transducers set up to record a sound signal from surroundings of the user and convert the sound signal into input audio signals; a signal processor set up to process the input audio signals to generate the output audio signal, said signal processor having a first adaptive beamformer set up to direction-dependently damp the input audio signals, or pre-processed audio signals derived therefrom by pre-processing, according to a stipulation of a first variable directivity with a directional strength in order to generate a first directed audio signal, and to vary the first variable directivity with a first adaptation speed in such a way that an energy content of the first directed audio signal is minimized; an adaptivity controller set up to variably set the first adaptation speed and/or the directional strength on a basis of an analysis of the input audio signals or the pre-processed audio signals, said adaptivity controller further containing a second adaptive beamformer with a second variable directivity, to which the input audio signals or the pre-processed audio signals are fed, said second adaptive beamformer being set up to generate a second directed audio signal and to set the second variable directivity with a second adaptation speed in such a way that an energy content of the second directed audio signal is minimized, and the second adaptation speed does not drop below the first adaptation speed and at least intermittently exceeds the latter.

8. The hearing aid system according to claim 7, wherein said adaptivity controller is set up to set the first adaptation speed and/or the directional strength depending on a time stability of the input audio signals or of the pre-processed audio signals.

9. The hearing aid system according to claim 7, wherein said adaptivity controller is set up to variably set the first adaptation speed and/or the directional strength depending on a change in the second variable directivity.

10. The hearing aid system according to claim 7, wherein said adaptivity controller is set up to set the first adaptation speed and/or the directional strength depending on a deviation of the second variable directivity from the first variable directivity.

11. The hearing aid system according to claim 7, wherein: the first variable directivity is frequency dependent such that different frequency components of the input audio signals or of the pre-processed audio signals are direction-dependently damped in different ways; said adaptivity controller is set up to: specify the first adaptation speed and/or the directional strength as a frequency-dependent variable; identify a noise component, emanating from a noise source, in the input audio signals or in the pre-processed audio signals for purposes of setting the first adaptation speed and/or the directional strength; ascertain an interference frequency range corresponding to the noise component; and uniformly specify the first adaptation speed and/or the directional strength in the interference frequency range.

Description

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING

(1) FIG. 1 is a schematic illustration of a hearing aid system formed of a single hearing aid instrument and being in a form of a hearing aid that is wearable behind an ear of a user;

(2) FIGS. 2 to 4 are circuit block diagrams each showing a structure of the signal processing of the hearing aid system of FIG. 1 in three alternative embodiments;

(3) FIG. 5 is a circuit block diagram showing, in an illustration as per FIGS. 2 to 4, a functional unit, referred to as an adaptivity controller, of the signal processing of the hearing aid system in a further embodiment; and

(4) FIG. 6 is an illustration as per FIG. 1, of an alternative embodiment of the hearing aid system in which the latter contains a hearing aid instrument in the form of a behind-the-ear hearing aid and a control program implemented on a smartphone (“hearing app”).

DETAILED DESCRIPTION OF THE INVENTION

(5) Parts and variables corresponding to one another are always provided with the same reference signs in all figures.

(6) Referring now to the figures of the drawings in detail and first, particularly to FIG. 1 thereof, there is shown a hearing aid system 2 which consists in this case of a single hearing aid 4, i.e., a hearing aid instrument set up to assist the ability of a hearing-impaired user to hear. In the example illustrated here, the hearing aid 4 is a BTE hearing aid, which is able to be worn behind an ear of a user.

(7) Optionally, in a further embodiment of the invention, the hearing aid system 2 contains a second hearing aid, not expressly illustrated, which serves to supply the second ear of the user and which corresponds in terms of its setup to the hearing aid 4 illustrated in FIG. 1 in particular.

(8) Within a housing 5, the hearing aid 4 contains two microphones 6 as acousto-electric input transducers and a receiver 8 as an electro-acoustic output transducer. The hearing aid 4 furthermore contains a battery 10 and signal processing in the form of a signal processor 12. Preferably, the signal processor 12 contains both a programmable subunit (e.g., a microprocessor) and a non-programmable subunit (e.g., an ASIC).

(9) The signal processor 12 is fed with a supply voltage U from the battery 10.

(10) During normal operation of the hearing aid 4, the microphones 6 each record airborne sound from the surroundings of the hearing aid 4. The microphones 6 each convert the sound into an (input) audio signal I1 and I2, respectively, which contains information about the recorded sound. Within the hearing aid 4, the input audio signals I1, I2 are fed to the signal processor 12, which modifies these input audio signals I1, I2 to assist the ability of the user to hear.

(11) The signal processor 12 outputs an output audio signal O, which contains information about the processed and hence modified sound, to the receiver 8.

(12) The receiver 8 converts the output audio signal O into modified airborne sound. This modified airborne sound is transferred into the auditory canal of the user via a sound channel 14, which connects the receiver 8 to a tip 16 of the housing 5, and via a flexible sound tube (not explicitly shown), which connects the tip 16 to an earpiece inserted into the auditory canal of the user.

(13) The structure of the signal processing is illustrated in more detail in FIG. 2. From this, it is evident that the signal processing of the hearing aid system 2 is organized in two functional constituent parts, specifically a signal processing unit 18 and a signal analysis unit 20. The signal processing unit 18 serves to generate the output audio signal O from the input audio signals I1, I2 of the microphones 6 or, in this case, from audio signals I1′, I2′ derived from pre-processing, which have consequently been pre-processed. In the case mentioned first, the input audio signals I1, I2 of the microphones 6 are directly fed to the signal processing unit 18. In the latter case, illustrated in FIG. 2 in exemplary fashion, the input audio signals I1, I2 of the microphones 6 are initially fed to a pre-processing unit 22, which then derives the pre-processed audio signals I1′, I2′ therefrom and supplies these to the signal processing unit 18.

(14) In the pre-processing unit 22, the input audio signals I1, I2 are preferably superposed on one another with a time offset to form the pre-processed audio signals I1′, I2′, in such a way that the two pre-processed audio signals I1′, I2′ correspond to a cardioid signal or an anti-cardioid signal.
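The time-offset superposition performed by the pre-processing unit 22 can be sketched as a delay-and-subtract step. The following is an illustrative Python sketch only; the function name, the whole-sample delay, and the block-based handling are our assumptions and not part of the patent (a real design would match the delay to the microphone spacing):

```python
import numpy as np

def cardioid_pair(i1, i2, delay=1):
    """Sketch of the pre-processing unit's time-offset superposition.

    i1, i2 : blocks of the two microphone input signals I1, I2
    delay  : time offset in whole samples (illustrative simplification)
    Returns I1' (cardioid, rear null) and I2' (anti-cardioid, front null).
    """
    i1 = np.asarray(i1, dtype=float)
    i2 = np.asarray(i2, dtype=float)
    i2_delayed = np.concatenate([np.zeros(delay), i2[:-delay]])
    i1_delayed = np.concatenate([np.zeros(delay), i1[:-delay]])
    return i1 - i2_delayed, i2 - i1_delayed
```

A sound arriving from the front reaches the front microphone one delay earlier; the anti-cardioid branch then cancels it exactly, which is the intended front null.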

(15) The signal processing unit 18 contains a number of signal processing processes 24, which successively process the input audio signals I1, I2 or—in the example as per FIG. 2—the internal audio signals I1′, I2′ and modify these in the process in order to generate the output audio signal O and hence compensate for the hearing loss of the user.

(16) The signal processing processes 24 are optionally implemented in any combination in the form of (non-programmable) hardware circuits and/or in the form of software modules (firmware) in the signal processor 12. By way of example, at least one of the signal processing processes 24 is formed by a hardware circuit, at least one further one of the signal processing processes 24 is formed by a software module and yet another one of the signal processing processes 24 is formed by a combination of hardware and software constituent parts. By way of example, the signal processing processes 24 comprise:

(17) a process for suppressing noise and/or feedback,

(18) a process for dynamic compression, and

(19) a process for frequency-dependent amplification on the basis of audiogram data,

(20) etc.

(21) Here, at least one signal processing parameter P is assigned in each case to at least one of these signal processing processes 24 (as a rule, to all signal processing processes 24 or at least to most signal processing processes 24). The or each signal processing parameter P is a one-dimensional variable (binary variable, natural number, floating-point number, etc.) or a multi-dimensional variable (array, function, etc.), the value of which parameterizes (i.e., influences) the functionality of the respectively assigned signal processing process 24. In this case, signal processing parameters P can activate or deactivate the respectively assigned signal processing process 24, can continuously or incrementally amplify or weaken the effect of the respectively assigned signal processing process 24, can define time constants for the respective signal processing process 24, etc.

(22) By way of example, the signal processing parameters P comprise

(23) a) the aforementioned audiogram data or frequency-specific gain factors derived therefrom, for a process for frequency-dependent amplification,

(24) b) a characteristic for a process for dynamic compression,

(25) c) a control variable for continuously setting the strength of a process for noise and/or feedback suppression,

(26) d) etc.

(27) In any case, some of the signal processing parameters P are made available to the signal processing processes 24 from a parameterization unit 26.

(28) Moreover, the signal processing processes 24 comprise a first adaptive beamformer 28—illustrated in more detail in FIG. 2—which is set up to direction-dependently damp the input audio signals I1, I2 (or, as illustrated in FIG. 2, the pre-processed audio signals I1′, I2′) according to the stipulation of a variable (first) directivity and to thus generate a first directed audio signal R1. The beamformer 28 generates the audio signal R1 by virtue of superposing the two fed audio signals I1′, I2′ (i.e., a cardioid signal and an anti-cardioid signal in the example as per FIG. 2), which are weighted by means of a first weighting factor a1:
R1 = I1′ − a1 · I2′ with a1 ∈ [−1; 1]  Eq. 1

(29) Here, the weighting factor a1 determines a notch direction in which—as seen relative to the head of the user—the direction-dependent damping of the beamformer 28 has a (local) maximum. Consequently, the weighting factor a1 represents a measure for the notch direction of the beamformer 28 and is therefore conceptually equated to this notch direction below. To adapt the directivity, the weighting factor a1 is varied in a closed-loop control method by the beamformer 28 in an adaptation step such that the energy content of the directed audio signal R1 is minimized (this self-regulation of the beamformer 28 is illustrated schematically in FIG. 2 by returning the audio signal R1 to the beamformer 28). What the described energy minimization achieves is that noise from a spatial region behind the head of the user is suppressed to the best possible extent. The directed audio signal R1 output by the beamformer 28 is processed further by the further signal processing processes 24, as a result of which the output audio signal O is generated. The beamformer 28 is preferably formed by a software module.
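The energy-minimizing adaptation of the weighting factor a1 can be sketched as a normalized gradient step on the output energy. This is an illustrative Python sketch under our own assumptions (an NLMS-style update rule, block-wise processing, and all function and parameter names); the patent does not prescribe a particular update rule:

```python
import numpy as np

def adapt_beamformer(i1p, i2p, a1, v1=0.01, eps=1e-8):
    """One adaptation step of the sketched first beamformer.

    i1p, i2p : blocks of the pre-processed (cardioid / anti-cardioid)
               audio signals I1', I2'
    a1       : current weighting factor in [-1, 1]
    v1       : adaptation speed (illustrative step size)
    Returns the directed block R1 and the updated weighting factor.
    """
    r1 = i1p - a1 * i2p  # Eq. 1: R1 = I1' - a1 * I2'
    # d/da1 of mean(R1^2) is -2*mean(R1*I2'); step against the gradient,
    # normalized by the power of I2' (NLMS-style) so the step stays stable.
    grad = -2.0 * np.mean(r1 * i2p)
    a1 = np.clip(a1 - v1 * grad / (np.mean(i2p * i2p) + eps), -1.0, 1.0)
    return r1, a1
```

Repeated over successive blocks, a1 converges toward the value that nulls the dominant noise component, which is exactly the energy minimization described above.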

(30) A first adaptation speed v1 is variably specified for the beamformer 28 as signal processing parameters P. This adaptation speed v1 is determined in the signal analysis unit 20 by a functional unit denoted adaptivity controller 30, which is preferably implemented in software.

(31) In the embodiment illustrated in FIG. 2, the adaptivity controller 30 contains a second adaptive beamformer 32 and an evaluation module 34.

(32) In respect of structure and function, the second adaptive beamformer 32 preferably corresponds to the first adaptive beamformer 28. Consequently, in the manner described above, the second adaptive beamformer 32 is set up to direction-dependently damp the input audio signals I1, I2 (or, as illustrated in FIG. 2, the pre-processed audio signals I1′, I2′) according to the stipulation of a (second) variable directivity and to thus generate a second directed signal R2. The directivity of the beamformer 32 preferably has a notch direction which is characterized by a variable weighting factor a2. The weighting factor a2 (and hence the notch direction) is varied by the beamformer 32 with an adaptation speed v2 in such a way that the energy content of the directed audio signal R2 is minimized.

(33) In contrast to the beamformer 28, the beamformer 32 does not serve to generate the output audio signal O output to the user but only serves to analyze the noise background underlying the input signals I1, I2. Therefore, the audio signal R2 is not processed further but only returned to the beamformer 32 for the purposes of self-regulation. Instead, the beamformer 32 outputs as analysis result the weighting factor a2 which indicates the notch direction (and hence indirectly the arrangement of the most dominant noise sources in the surroundings of the user) to the evaluation module 34.

(34) In the evaluation module 34, the time stability (or—expressed conversely—the time variability) of the weighting factor a2 and hence of the noise background is evaluated in the embodiment as per FIG. 2, for example by virtue of forming a sliding temporal root mean square value over the first time derivative of the weighting factor a2. The evaluation module 34 varies the adaptation speed v1 for the first adaptive beamformer 28 on the basis of this variable. In a simple but expedient embodiment variant, the evaluation module 34 varies the adaptation speed v1 in binary fashion here, between a comparatively low base value and a value that has been increased in relation thereto. Here, the evaluation module 34 sets the adaptation speed v1 to the base value if and for as long as the above-described mean value does not exceed a specified threshold (which indicates that the noise background is not changeable or only weakly changeable). Consequently, the first beamformer 28 only adapts slowly in this case, as a result of which artifacts as a consequence of the adaptation are largely avoided. Otherwise, i.e., if and for as long as the mean value exceeds the threshold on account of a significant change in the noise background and the weighting factor a2, the adaptation speed v1 is increased relative to the base value such that the first adaptive beamformer 28 can quickly adapt to the altered hearing situation (in particular without perceivable delay).
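The binary switching of the adaptation speed v1 described in the paragraph above can be sketched as follows. The window length, threshold, and speed values are illustrative assumptions, not values from the patent:

```python
import numpy as np

def select_adaptation_speed(a2_history, threshold=0.05, window=16,
                            v_base=0.005, v_fast=0.05):
    """Binary setting of the first beamformer's adaptation speed v1.

    a2_history : recent values of the second beamformer's weighting
                 factor a2 (most recent last)
    The time stability of a2 is measured as a sliding RMS over its first
    difference; only a changeable noise background raises v1 above the
    comparatively low base value.
    """
    a2 = np.asarray(a2_history, dtype=float)[-(window + 1):]
    d = np.diff(a2)                # discrete first time derivative of a2
    rms = np.sqrt(np.mean(d * d))  # sliding root mean square value
    return v_fast if rms > threshold else v_base
```

A constant a2 history thus keeps v1 at the base value (slow, artifact-free adaptation), while a strongly fluctuating a2 switches v1 to the increased value.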

(35) To analyze the noise background with great precision, the second adaptive beamformer 32 has a quickly adapting embodiment. Here, the adaptation speed v2 is chosen (preferably as a constant) in such a way that it never drops below the variable adaptation speed v1 of the first adaptive beamformer 28 (v2≥v1).

(36) In addition or as an alternative to the adaptation speed v1, a directional strength s of the first adaptive beamformer 28 is preferably also variable. Here, the variation in the directional strength s is realized, for example, by virtue of the weighted sum as per Eq. 1 being mixed at different levels with an omnidirectional audio signal A which is derived from the input audio signals I1, I2 (and which is optionally supplied to the beamformer 28 as per FIG. 2 as an additional input variable). Here, the directional strength s is reduced by the evaluation module 34 in relation to a specified base value if and for as long as a significant changeability of the noise background is determined—in particular on the basis of the threshold being exceeded described above.
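The mixing of the weighted sum as per Eq. 1 with the omnidirectional audio signal A can be realized, for example, as a simple crossfade. This is one plausible sketch (the mixing law and parameter names are our assumptions):

```python
import numpy as np

def apply_directional_strength(r1, omni, s):
    """Mix the directed signal R1 with the omnidirectional signal A.

    s is the directional strength in [0, 1]: s = 1 passes the fully
    directed signal, while a reduced s blends in more of the
    omnidirectional signal and thereby softens the beamformer.
    """
    return s * np.asarray(r1) + (1.0 - s) * np.asarray(omni)
```

Reducing s toward 0 when the noise background is changeable thus trades directional noise suppression for a more natural, less aggressively steered output.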

(37) As can further be gathered from FIG. 2, the signal analysis unit 20 optionally comprises a classifier 36 in addition to the adaptivity controller 30 and preferably in addition to further functions for sound analysis not explicitly illustrated here, said classifier, in a manner conventional per se, analyzing the current hearing situation by analyzing the input audio signals I1, I2 (or the pre-processed audio signals I1′, I2′ as illustrated in FIG. 2) in view of their similarity to a plurality of typical hearing situation classes (such as, e.g., “speech”, “speech with background noise” or “music”) and outputting a corresponding classification signal K.

(38) The classification signal K is supplied firstly to the parameterization unit 26, which, in a manner conventional per se, makes a selection between different hearing programs, i.e., different parameter sets of the signal processing parameters P which are each optimized for one of the typical hearing situation classes, depending on the classification signal K.

(39) Secondly, the classification signal K is also supplied to the evaluation module 34 of the adaptivity controller 30 and influences the determination of the adaptation speed v1 and/or the directional strength s there. By way of example, the values between which the adaptation speed v1 and/or the directional strength s are varied are altered in turn on the basis of the classification signal K.

(40) FIG. 3 illustrates an alternative embodiment of the hearing aid system 2. In contrast to the embodiment as per FIG. 2, the weighting factor a1 of the beamformer 28 is supplied to the evaluation module 34 in the embodiment as per FIG. 3, in addition to the weighting factor a2 of the beamformer 32. Here, the evaluation module 34 analyzes the changeability of the noise background underlying the input audio signals I1, I2 and the audio signals I1′, I2′ by virtue of comparing the weighting factors a1 and a2. A great deviation of the quickly changeable weighting factor a2 from the weighting factor a1, which changes slowly in the base state, is considered an indication here for a substantial change in the noise background. Accordingly, the evaluation module 34 increases the adaptation speed v1 and/or reduces the directional strength s if and for as long as the difference between the weighting factors a1 and a2 exceeds a specified threshold.
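The deviation-based control of the FIG. 3 embodiment can be sketched in a few lines. The threshold and the concrete speed and strength values are illustrative assumptions:

```python
def control_from_deviation(a1, a2, threshold=0.2,
                           v_base=0.005, v_fast=0.05,
                           s_base=1.0, s_reduced=0.5):
    """Set v1 and the directional strength s from the deviation |a2 - a1|.

    A large gap between the quickly adapting second beamformer's weighting
    factor a2 and the slowly adapting a1 indicates a substantially changed
    noise background: v1 is raised and s lowered for as long as it persists.
    """
    if abs(a2 - a1) > threshold:
        return v_fast, s_reduced
    return v_base, s_base
```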

(41) FIG. 4 illustrates a further embodiment of the hearing aid system 2. In contrast to the embodiments as per FIGS. 2 and 3, the adaptivity controller 30 in this case comprises, in addition to the second adaptive beamformer 32, at least one further adaptive beamformer 38 which generates a further directed audio signal R3 and, on account of an energy minimization of this audio signal R3, varies an associated further weighting factor a3 (as a measure for a changeable notch direction of the beamformer 38).

(42) In an expedient embodiment variant, an adaptation speed v3 that is assigned to the beamformer 38 and preferably specified to be constant has a value below the adaptation speed v2 and, in particular, corresponding exactly or approximately to the base value of the adaptation speed v1. In this case, the further adaptive beamformer 38 consequently has a slowly adapting embodiment in comparison with the second adaptive beamformer 32, with both beamformers 32 and 38 setting the respective weighting factor a2 and a3, respectively, preferably independently of one another (coupling of the beamformers 32 and 38, as indicated in FIG. 4 on the basis of the supply of the weighting factor a2 to the beamformer 38, is preferably not provided in this embodiment variant). The changeability of the noise background underlying the input audio signals I1, I2 and the pre-processed audio signals I1′, I2′ is determined here by the evaluation module 34 in a manner analogous to the exemplary embodiment as per FIG. 3 on the basis of the deviations between the weighting factors a2 and a3 of the beamformers 32 and 38.

(43) In an alternative embodiment variant, the adaptation speeds v2 and v3 of the beamformers 32 and 38 are chosen to be exactly the same or approximately the same such that both beamformers 32 and 38 adapt quickly. In this case, the beamformers 32 and 38 are preferably coupled to one another—as indicated in FIG. 4—such that a different setting of the weighting factors a2 and a3 is forced. This coupling ensures that the beamformers 32 and 38 adjust to different dominant noise sources in the surroundings of the user. In this case, the changeability of the noise background underlying the input audio signals I1, I2 and the pre-processed audio signals I1′, I2′ is determined here by the evaluation module 34 in a manner preferably analogous to the exemplary embodiment as per FIG. 2 on the basis of the time stability of the weighting factors a2 and a3. Here, in particular, the adaptation speed v1 is increased and/or the directional strength s is lowered if the condition for increasing the adaptation speed v1 and/or reducing the directional strength s is satisfied for at least one of the weighting factors a2 and a3.

(44) The classifier 36 is optionally also present in the exemplary embodiments as per FIGS. 3 and 4 and not also illustrated in these figures purely for reasons of clarity.

(45) Preferably, the signal processing in the signal processing unit 18 is implemented in frequency-resolved fashion in a plurality of frequency channels (e.g., 64 frequency channels). In this case, preferably even before being supplied to the pre-processing unit 22, the input audio signals I1, I2 are respectively split into frequency components by an analysis filter bank (not explicitly illustrated in FIGS. 2 to 4), the frequency components being processed individually in each case in the frequency channels and subsequently being merged in a synthesis filter bank (likewise not explicitly illustrated in FIGS. 2 to 4) to form the output audio signal O.
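The split into frequency channels and the subsequent merge can be sketched with a minimal DFT-based filter bank. The patent does not fix a filter-bank type; the block length, the plain real FFT without overlap, and the function names are our simplifying assumptions:

```python
import numpy as np

def analysis_filter_bank(x, n_channels=64):
    """Split a signal into n_channels frequency components, block-wise.

    Each block of 2*(n_channels - 1) real samples yields n_channels
    complex frequency bins via a real FFT.
    """
    block = 2 * (n_channels - 1)
    n_blocks = len(x) // block
    frames = np.asarray(x)[:n_blocks * block].reshape(n_blocks, block)
    return np.fft.rfft(frames, axis=1)  # shape: (n_blocks, n_channels)

def synthesis_filter_bank(bins):
    """Merge the per-channel components back into a time signal."""
    return np.fft.irfft(bins, axis=1).ravel()
```

With 64 channels, each 126-sample block maps to 64 complex bins and back without loss, so processing can act on each channel individually before the synthesis step.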

(46) In this case, the first adaptive beamformer 28 is set up to direction-dependently damp, in each case on an individual basis, the frequency components of the input audio signals I1, I2 or of the pre-processed audio signals I1′, I2′ carried in the frequency channels. Consequently, the directivity of the beamformer 28 and the associated notch direction or the weighting factor a1 also have a frequency dependence. Preferably, the weighting factor a1 and/or the directional strength s are each specified as a vector, which has an associated individual value for each frequency channel. Moreover, the directivity of the beamformer 28 is preferably also adapted on an individual basis for each frequency channel. Consequently, the adaptation speed v1 is also preferably specified as a vector, which has an associated individual value for each frequency channel.

(47) To prevent a noise originating from a certain sound source being perceivably distorted by the beamformer 28 as a consequence of a frequency-specific different adaptation of the directivity, the adaptivity controller 30 is preferably set up to couple frequency channels, which carry essential frequency components of a dominant noise, in respect of the adaptation speed v1 and/or the directional strength s. Expressed differently, the adaptivity controller 30 specifies the adaptation speed v1 and/or the directional strength s in uniform fashion (i.e., with the same value) for those frequency channels which carry essential frequency components of a dominant noise.

(48) For this purpose, the second adaptive beamformer 32 (and optionally the third adaptive beamformer 38) are preferably also designed analogous to the beamformer 28, in such a way that they direction-dependently damp, in each case on an individual basis, the frequency components of the input audio signals I1, I2 or of the pre-processed audio signals I1′, I2′ carried in the frequency channels. Consequently, the noise background is analyzed in frequency-resolved fashion by the second adaptive beamformer 32 (and optionally the third adaptive beamformer 38).

(49) To ascertain the spectral composition of one or more dominant noises, the directed audio signal R2 (and R3 respectively) output by the second adaptive beamformer 32 (and optionally the third adaptive beamformer 38) is inverted in an inverter member 40 and subsequently multiplied by the omnidirectional audio signal A in a multiplier member 42. This signal processing is shown in FIG. 5 in exemplary fashion for an embodiment of the adaptivity controller 30 which, in a manner analogous to FIG. 4, comprises both the second adaptive beamformer 32 and the third adaptive beamformer 38. By multiplying the omnidirectional audio signal A with the inverted, directed audio signal R2 (or R3), an audio signal R2′ (or R3′) arises, in which precisely the dominant noise, which was selectively filtered out by the second adaptive beamformer 32 (or optionally the third adaptive beamformer 38), is selectively amplified. The audio signal R2′ (or optionally R3′) is now fed to the evaluation module 34, which analyzes the spectral composition of the audio signal R2′ (or optionally R3′) and ascertains an interference frequency range corresponding to the respective noise. The frequency channels located in this interference frequency range are coupled here by the evaluation module 34 in respect of the adaptivity of the first adaptive beamformer 28 by virtue of the evaluation module 34 uniformly specifying the values of the adaptation speed v1 and/or the directional strength s corresponding to these frequency channels. By way of example, the adaptation speed v1 is increased in relation to the base value for all coupled frequency channels and/or the directional strength s is reduced in relation to the base value for all coupled frequency channels if and for as long as it emerges (from the evaluation of the weighting factor a2 or the weighting factors a2 and a3 undertaken by the evaluation module 34 as per FIG. 2 or 4) that the condition for increasing the adaptation speed v1 or reducing the directional strength s is satisfied for at least one of the coupled frequency channels.
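The channel coupling described above can be sketched as follows. Reading the "inversion" in the inverter member 40 as a per-channel spectral reciprocal is our assumption (one plausible interpretation), as are the relative threshold and all names:

```python
import numpy as np

def interference_mask(r2_mag, omni_mag, eps=1e-8, rel=0.5):
    """Ascertain the interference frequency range per channel.

    r2_mag, omni_mag : per-channel magnitudes of the directed signal R2
    and of the omnidirectional signal A. R2' = A * (1/R2) is large exactly
    where the second beamformer damped a dominant noise; channels above a
    relative threshold of the peak belong to the interference range.
    """
    r2p = np.asarray(omni_mag) / (np.asarray(r2_mag) + eps)
    return r2p >= rel * r2p.max()

def couple_channels(v1_per_channel, mask, v_coupled):
    """Uniformly specify v1 in the coupled interference channels."""
    v1 = np.array(v1_per_channel, dtype=float)
    v1[mask] = v_coupled
    return v1
```

All channels carrying essential components of the dominant noise then receive one common v1 value, which avoids perceivable distortion from frequency-specific differences in the adaptation.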

(50) FIG. 6 shows a further embodiment of the hearing aid system 2, in which the latter comprises control software in addition to the hearing aid 4 (or two hearing aids of this type for supplying the two ears of the user). This control software is referred to as hearing app 44 below. The hearing app 44 is installed on a smartphone 46 in the example illustrated in FIG. 6. Here, the smartphone 46 itself is not part of the hearing aid system 2. Rather, the smartphone 46 is only used as a resource for memory and computing power by the hearing app 44.

(51) The hearing aid 4 and the hearing app 44 exchange data via a wireless data transmission link 48 during the operation of the hearing aid system 2. By way of example, the data transmission link 48 is based on the Bluetooth standard. In this case, the hearing app 44 accesses a Bluetooth transceiver of the smartphone 46 in order to receive data from the hearing aid 4 and in order to transmit data to the latter. In turn, the hearing aid 4 contains a Bluetooth transceiver (not explicitly illustrated) in order to transmit data to the hearing app 44 and to receive data from this app.

(52) In the embodiment as per FIG. 6, parts of the signal processing shown in FIGS. 2 to 5 (e.g., the adaptivity controller 30) are not implemented in the signal processor 12 of the hearing aid 4 but instead in the hearing app 44.

(53) The invention becomes particularly clear on the basis of the above-described exemplary embodiments although it is equally not restricted to these exemplary embodiments. Rather, further embodiments of the invention can be derived by a person skilled in the art from the claims and the above description. In particular, the individual features of the hearing aid system and of the associated operating method respectively explained on the basis of the various exemplary embodiments can also be combined differently with one another within the scope of the claims without leaving the scope of the invention.

(54) The following is a summary list of reference numerals and the corresponding structure used in the above description of the invention: 2 Hearing aid system 4 Hearing aid 5 Housing 6 Microphone 8 Receiver 10 Battery 12 Signal processor 14 Sound channel 16 Tip 18 Signal processing unit 20 Signal analysis unit 22 Pre-processing unit 24 Signal processing process 26 Parameterization unit 28 (First adaptive) beamformer 30 Adaptivity controller 32 (Second adaptive) beamformer 34 Evaluation module 36 Classifier 38 (Third adaptive) beamformer 40 Inverter member 42 Multiplier member 44 Hearing app 46 Smartphone 48 Data transmission link a1 (First) weighting factor a2 (Second) weighting factor a3 (Third) weighting factor s Directional strength v1 (First) adaptation speed v2 (Second) adaptation speed v3 (Third) adaptation speed A (Omnidirectional) audio signal I1, I2 Input audio signal I1′, I2′ (Internal) audio signal K Classification signal O Output audio signal P Signal processing parameter R1 (First directed) audio signal R2 (Second directed) audio signal R3 (Third directed) audio signal U Supply voltage