Hearing device comprising a directional system

09800981 · 2017-10-24

Abstract

The application relates to a hearing device comprising an input unit for providing first and second electric input signals representing sound signals, and a beamformer filter for frequency-dependent directional filtering of the electric input signals, the output of said beamformer filter providing a resulting beamformed output signal. The application further relates to a method of providing a directional signal. The object of the present application is to create a directional signal. The problem is solved in that the beamformer filter comprises a directional unit for providing respective first and second beamformed signals from weighted combinations of the electric input signals, an equalization unit for equalizing a phase (and possibly an amplitude) of the beamformed signals and providing first and second equalized beamformed signals, and a beamformer output unit for providing the resulting beamformed output signal from the first and second equalized beamformed signals. This has the advantage of creating a directional signal in which the phase of the individual components is preserved, thereby introducing no phase distortions. The invention may e.g. be used in hearing aids, headsets, ear phones, active ear protection systems, and combinations thereof.

Claims

1. A hearing device comprising an input that provides first and second electric input signals (I.sub.1, I.sub.2) representing sound signals, and a beamformer filter that frequency-dependent directionally filters the electric input signals, and outputs a resulting beamformed output signal, the beamformer filter comprising a directional filter that provides respective first and second beamformed signals from weighted combinations of the electric input signals, wherein the first and second beamformed signals are an omni-directional signal and a directional signal with a maximum gain in a rear direction, respectively, a rear direction being defined relative to a target sound source, an equalizer that equalizes a phase of at least one of the beamformed signals and provides at least first and/or second equalized beamformed signals, and a beamformer output that provides the resulting beamformed output signal from the first and second equalized beamformed signals, wherein the equalizer is configured to compensate the beamformed signals for phase differences imposed by the directional filter, the beamformer output comprises an adaptive filter configured to filter the second equalized beamformed signal and to provide a modified second equalized beamformed signal, and a subtraction unit for subtracting the modified second equalized beamformed signal from the first equalized beamformed signal thereby providing the resulting beamformed output signal, and the adaptive filter is configured to provide the resulting beamformed output signal in accordance with a predefined rule or criterion.

2. A hearing device according to claim 1 wherein the equalizer is configured to compensate the beamformed signals for phase differences imposed by the input unit.

3. A hearing device according to claim 1 wherein the beamformer output is configured to optimize a property of the resulting beamformed output signal.

4. A hearing device according to claim 1 wherein the beamformer output is configured to provide the resulting beamformed output signal in accordance with a predefined rule or criterion.

5. A hearing device according to claim 4 wherein the predefined rule or criterion comprises minimizing the energy, amplitude or amplitude fluctuations of the resulting beamformed output signal.

6. A hearing device according to claim 1 wherein the adaptive filter is configured to use a first order LMS or NLMS algorithm to fade between an omni-directional and a directional mode.

7. A hearing device according to claim 1 wherein the first beamformed signal is an enhanced omni-directional signal created by adding said first and second electric input signals.

8. A hearing device according to claim 1 wherein the first beamformed signal is an enhanced omni-directional signal created by a delay and sum beamformer, the enhanced omni-directional signal being substantially omni-directional at relatively low frequencies and slightly directional at relatively high frequencies.

9. A hearing device according to claim 1 comprising a TF-conversion unit for providing a time-frequency representation of time-variant input signals.

10. A hearing device according to claim 1 wherein said input provides more than two electric input signals.

11. A hearing device according to claim 1 wherein the equalizer is configured to compensate the beamformed signals for phase and amplitude differences imposed by the input unit and/or the directional unit.

12. A hearing device according to claim 1 wherein the equalization is only performed on the second beamformed signal.

13. A hearing device according to claim 1 comprising a hearing aid, a headset, an active ear protection system, or combinations thereof.

14. A hearing device according to claim 1, wherein the predefined rule or criterion comprises minimizing the signal from one specific direction.

15. A hearing device according to claim 1, wherein the predefined rule or criterion comprises sweeping a zero of the angle dependent characteristics of the resulting beamformed output signal over predefined angles or over a predefined range of angles.

16. A method of operating a hearing device comprising first and second input transducers for converting an input sound to respective first and second electric input signals, a beamformer filter for frequency-dependent directionally filtering the electric input signals, and outputting a resulting beamformed output signal, the method comprising: directionally filtering to provide respective first and second beamformed signals from weighted combinations of said electric input signals wherein the first and second beamformed signals are an omni-directional signal and a directional signal with a maximum gain in a rear direction, respectively, a rear direction being defined relative to a target sound source; equalizing a phase of at least one of said beamformed signals and providing first and second equalized beamformed signals; adaptively filtering the second equalized beamformed signal and providing a modified second equalized beamformed signal; subtracting the modified second equalized beamformed signal from the first equalized beamformed signal thereby providing a resulting beamformed output signal; and providing the resulting beamformed output signal in accordance with a predefined rule or criterion, wherein the beamformed signals are compensated for phase differences imposed by the directional filtering.

17. A data processing system comprising: a processor; and memory having stored thereon program code that when executed causes the processor to perform the method of claim 16.

Description

BRIEF DESCRIPTION OF DRAWINGS

(1) The disclosure will be explained more fully below in connection with a preferred embodiment and with reference to the drawings in which:

(2) FIG. 1 shows three embodiments (FIG. 1A, 1B, 1C) of a hearing device according to the present disclosure,

(3) FIG. 2 shows four embodiments (FIG. 2A, 2B, 2C, 2D) of a hearing device according to the present disclosure comprising two or more audio inputs and a beamformer filter,

(4) FIG. 3 shows two embodiments (FIG. 3A, 3B) of a hearing device comprising first and second input transducers and a beamformer filter according to the present disclosure,

(5) FIG. 4 shows a schematic visualization of the functionality of an embodiment of a beamforming algorithm according to the present disclosure,

(6) FIG. 5 shows an exemplary application scenario of an embodiment of a hearing assistance system according to the present disclosure, FIG. 5A illustrating a user, a binaural hearing aid system and an auxiliary device comprising a user interface for the system, and FIG. 5B illustrating the auxiliary device running an APP for initialization of the directional system, and

(7) FIGS. 6A-6B illustrate a definition of the terms front and rear relative to a user of a hearing device, FIG. 6A showing an ear and a hearing device and the location of the front and rear microphones, and FIG. 6B showing a user's head wearing left and right hearing devices at left and right ears.

(8) The figures are schematic and simplified for clarity, and they just show details which are essential to the understanding of the disclosure, while other details are left out. Throughout, the same reference signs are used for identical or corresponding parts.

(9) Further scope of applicability of the present disclosure will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the disclosure, are given by way of illustration only. Other embodiments may become apparent to those skilled in the art from the following detailed description.

DETAILED DESCRIPTION OF EMBODIMENTS

(10) FIG. 1 shows three embodiments (FIG. 1A, 1B, 1C) of a hearing device according to the present disclosure. The hearing device (HAD), e.g. a hearing aid, comprises a forward or signal path from an input unit (IU; (M1, M2)) to an output unit (OU; SP), the forward path comprising a beamformer filter (BF) and a processing unit (HA-DSP). The input unit (IU in FIG. 1A) may comprise an input transducer, e.g. a microphone unit (such as M1, M2 in FIG. 1B, 1C, preferably having an omni-directional gain characteristic), and/or a receiver of an audio signal, e.g. a wireless receiver. The output unit (OU in FIG. 1A) may comprise an output transducer, e.g. a receiver or loudspeaker (such as SP in FIG. 1B, 1C) for converting an electric signal to an acoustic signal, and/or a transmitter (e.g. a wireless transmitter) for forwarding the resulting signal to another device for further analysis and/or presentation. The output unit may alternatively (or additionally) comprise a vibrator of a bone anchored hearing aid and/or a multi-electrode stimulation arrangement of a cochlear implant type hearing aid for providing a mechanical vibration of bony tissue and electrical stimulation of the cochlear nerve, respectively.

(11) In the embodiment of FIG. 1A, the input unit (IU) picks up or receives a signal constituted by or representative of an acoustic signal from the environment (Sound input x) of the hearing device and converts (or propagates) it to a number of electric input signals (I.sub.1, I.sub.2, . . . , I.sub.M, where M is the number of input signals, e.g. two or more). In an embodiment, the input unit comprises a microphone array comprising a multitude of microphones (e.g. more than two). The beamformer filter (BF) is configured to perform frequency-dependent directional filtering of the electric input signals (I.sub.1, I.sub.2, . . . , I.sub.M). The output of the beamformer filter (BF) is a resulting beamformed output signal (RBFS), e.g. being optimized to comprise a relatively large (target) signal (S) component and a relatively small noise (N) component (e.g. to have a relatively large gain in a direction of the target signal and to comprise a minimum of noise). The (optional) processing unit (HA-DSP) is configured to process the beamformed signal (RBFS) (or a signal derived therefrom) and to provide an enhanced output signal (EOUT). In an embodiment, wherein the hearing device comprises a hearing instrument, the processing unit (HA-DSP) is configured to apply a frequency dependent gain to the input signal (here RBFS), e.g. to adjust the input signal to the impaired hearing of a user. The output unit (OU) is configured to propagate or convert the enhanced output signal (EOUT) to an output stimulus u perceptible by the user as sound (preferably representative of the acoustic input signal).

(12) The embodiment of a hearing device of FIG. 1B is similar to the embodiment of FIG. 1A. The only difference is that the input unit (IU) is embodied in first and second (preferably matched) microphones (M1, M2) for converting their respective versions of an input sound (x.sub.1, x.sub.2) present at their respective locations to respective first and second electric input signals (I.sub.1, I.sub.2), whereas the output unit (OU) is embodied in a loudspeaker (SP) providing acoustic output u.

(13) The embodiment of a hearing device of FIG. 1C is similar to the embodiment of FIG. 1B. The only difference is that each of the microphone paths of the hearing device of FIG. 1C comprises an analysis filter bank (A-FB) for converting a time variant input signal to a number of time-frequency signals (as indicated by the bold line out of analysis filter bank (A-FB)), wherein the time domain signals (I.sub.1, I.sub.2) are represented in the frequency domain as time variant signals (IF.sub.1, IF.sub.2) in a number of frequency bands (e.g. 16 bands). In the embodiment of FIG. 1C, the further signal processing is assumed to be performed in the frequency domain (cf. beamformer filter (BF) and signal processing unit (HA-DSP) and corresponding output signals RBFSF and EOUTF, respectively (bold lines)). The hearing device of FIG. 1C further comprises a synthesis filter bank (S-FB) for converting the time-frequency signals EOUTF to time variant output signal EOUT which is fed to speaker (SP) and converted to an acoustic output sound signal (Acoustic output u).
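The analysis/synthesis filter-bank pair described above can be sketched with a short-time Fourier transform. The following is a minimal stand-in (not the patent's actual filter bank), using `scipy.signal.stft`/`istft` with 32-sample segments to obtain a band-split time-frequency representation and resynthesize the time signal; the signal length and segment size are illustrative:

```python
import numpy as np
from scipy.signal import stft, istft

# Minimal analysis/synthesis filter-bank sketch (STFT stand-in, not the
# patent's actual filter bank): split a time signal into frequency bands
# and resynthesize it.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)              # time-domain input signal

f, t, X = stft(x, nperseg=32)              # analysis: 17 frequency bands
# ... frequency-domain processing (beamforming, gain) would go here ...
_, x_rec = istft(X, nperseg=32)            # synthesis: back to time domain

print(np.max(np.abs(x_rec[:len(x)] - x)))  # ~0: near-perfect reconstruction
```

With the default Hann window and 50% overlap the COLA constraint is satisfied, so the round trip is essentially lossless.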

(14) Apart from the mentioned features, the hearing device of FIG. 1 may further comprise other functionality, such as a feedback estimation and/or cancellation system (for reducing or cancelling acoustic or mechanical feedback leaked via an ‘external’ feedback path from output to input transducer of the hearing device). Typically, the signal processing is performed on digital signals. In such case the hearing device comprises appropriate analogue-to-digital (AD) and possibly digital-to-analogue (DA) converters (e.g. forming part of the input and possibly output units (e.g. transducers)). Alternatively, the signal processing (or a part thereof) is performed in the analogue domain. The forward path of the hearing device comprises (optional) signal processing (‘HA-DSP’ in FIG. 1) e.g. adapted to adjust the signal to the impaired hearing of a user.

(15) FIGS. 2A, 2B, 2C and 2D show four embodiments of a hearing device according to the present disclosure comprising two or more audio inputs and a beamformer filter.

(16) FIGS. 2A, 2B, and 2C may represent more specific embodiments of the hearing devices illustrated in FIGS. 1A, 1B and 1C, respectively.

(17) FIG. 2A illustrates an embodiment, wherein (as in FIG. 1A) the input unit (IU) provides a multitude of electric input signals (I.sub.1, I.sub.2, . . . , I.sub.M), which are fed to the beamformer filter (BF, solid enclosure). The beamformer filter (BF) comprises a directional unit (DIR) for providing respective beamformed signals (ID.sub.1, ID.sub.2, . . . , ID.sub.D, where D is the number of beamformers, D≥2), from weighted combinations of the electric input signals (I.sub.1, I.sub.2, . . . , I.sub.M). The beamformer filter (BF) further comprises an equalization unit (EQU) for equalizing a phase of the beamformed signals (ID.sub.1, ID.sub.2, . . . , ID.sub.D) and providing respective equalized beamformed signals (IDE.sub.1, IDE.sub.2, . . . , IDE.sub.D). The beamformer filter (BF) comprises a beamformer output unit (BOU) for providing the resulting beamformed output signal (RBFS) from the equalized beamformed signals (IDE.sub.1, IDE.sub.2, . . . , IDE.sub.D).

(18) FIGS. 2B and 2C illustrate embodiments of a hearing device, wherein (as in FIGS. 1B and 1C, respectively) the input unit (IU) is embodied in first and second (preferably matched) microphones (M1, M2) providing first and second electric input signals (I.sub.1, I.sub.2; IF.sub.1, IF.sub.2). The beamformer filter (BF) comprises a directional unit (DIR) for providing respective first and second beamformed signals (ID.sub.1, ID.sub.2) from weighted combinations of electric input signals (I.sub.1, I.sub.2; IF.sub.1, IF.sub.2), e.g. an omni-directional signal and a directional signal or two directional signals of different direction. The beamformer filter (BF) further comprises an equalization unit (EQU) for equalizing phase (incl. group delay, and optionally amplitude) of the beamformed signals (ID.sub.1, ID.sub.2) and providing first and second equalized beamformed signals (IDE.sub.1, IDE.sub.2). An example of an equalization unit is described in connection with FIG. 3. The beamformer filter further comprises a beamformer output unit (BOU), here comprising an adaptive filter (AF) for filtering the second equalized beamformed signal (IDE.sub.2) and providing a modified second equalized beamformed signal (IDEM.sub.2), and a subtraction unit (‘+’) for subtracting the modified second equalized beamformed signal (IDEM.sub.2) from the first equalized beamformed signal (IDE.sub.1) thereby providing a resulting beamformed output signal (RBFS). The adaptive filter (AF) is e.g. configured to optimize (e.g. minimize the energy of) the resulting beamformed output signal (RBFS).

(19) The embodiment of a hearing device of FIG. 2C is identical to the embodiment of FIG. 2B apart from the processing being performed in the (time-)frequency domain in FIG. 2C. Each of the microphone paths of FIG. 2C comprises an analysis filter bank (A-FB) for converting time domain signals (I.sub.1, I.sub.2) to frequency domain signals (IF.sub.1, IF.sub.2) as indicated by bold lines in FIG. 2C. The resulting beamformed output signal (RBFS) is indicated in FIG. 2C to be a (time-)frequency domain signal. The signal may be converted to the time domain by a synthesis filter bank and may be further processed before (as indicated in FIG. 1C) or after being converted to the time domain.

(20) FIG. 2D shows an embodiment of a hearing device according to the present disclosure comprising two or more (here M) audio inputs and a beamformer filter (BF), wherein the beamformer filter comprises a directional filter unit (DIR) providing first and second beamformed (frequency domain) signals (ID.sub.1, ID.sub.2) from weighted combinations of electric (frequency domain) input signals (IF.sub.1, . . . , IF.sub.M). The directional filter unit (DIR) is configured to determine, or (as indicated in FIG. 2D) to receive an input indicative of (T-DIR), the direction to or location of the target signal (such direction may be assumed to be fixed, e.g. as a front direction relative to the user, or be configurable via a user interface, see e.g. FIG. 5). The directional filter unit (DIR) comprises first (TI-BF) and second (TC-BF) beamformers for generating the first and second beamformed signals (ID.sub.1, ID.sub.2), respectively. The first beamformer (TI-BF) of the embodiment of FIG. 2D is a target including beamformer configured to attenuate or apply gain to signals from all directions substantially equally (providing signal ID.sub.1). The second beamformer (TC-BF) is a target cancelling beamformer configured to attenuate (preferably cancel) signals from the direction of the target signal (providing signal ID.sub.2). The other parts of the embodiment of FIG. 2D resemble those of the embodiment of FIG. 2C (and FIG. 2B). In an embodiment, the target including beamformer comprises an enhanced omni-directional beamformer.

(21) FIG. 3 shows two embodiments (FIG. 3A, 3B) of a hearing device comprising first and second input transducers and a beamformer filter according to the present disclosure.

(22) FIG. 3A shows an embodiment as in FIG. 2A. Additionally, the embodiment of FIG. 3A comprises a control unit (CONT) for controlling the equalization unit (EQU).

(23) The aim of the equalization unit (EQU) is to remove the phase difference between the beamformed signals (ID.sub.1, ID.sub.2, . . . , ID.sub.D) (possibly) introduced by the input unit (IU) and/or the directional unit (DIR) (e.g. by determining an inverse transfer function and applying it to the relevant signals to equalize the phases of the beamformed signals, cf. e.g. FIG. 3B). An aim of the ‘cleaning’ of the introduced phase changes is further to simplify the interpretation of the different beamformed signals and hence to improve their use in providing the resulting beamformed signal.

(24) Phase differences (generally frequency dependent) may e.g. be introduced in the beamformed signals depending on the geometric configuration of the input transducers, e.g. the distance between two microphones, or the mutual position of units of a microphone array. Likewise, phase differences may e.g. be introduced in the beamformed signals due to mismatched input transducers (i.e. input transducers having different gain characteristics, e.g. having non-ideal (and different) omni-directional characteristics). The geometrical influence on phase differences is typically stationary (e.g. determined by fixed locations of microphones on a hearing device) and may be determined in advance of the use of the hearing device. Likewise, phase differences may e.g. be introduced in the beamformed signals due to sound field modifying effects, e.g. shadowing effects from the user, e.g. an ear or a hat located close to the input unit of the hearing device and modifying the impinging sound field. Such sound field modifying effects are typically dynamic, in the sense that they are not predictable and have to be estimated during use of the hearing device. In FIG. 3A, such information related to the configuration of the input unit is provided to the control unit (CONT) by signal IUconf.

(25) Another possible source of introduction of phase differences in the beamformed signals is the individual beamformers (providing respective beamformed signals ID.sub.n) of the directional unit (DIR). Different beamformers may introduce different (frequency dependent) phase ‘distortions’ (leading to the introduction of phase differences between the beamformed signals (ID.sub.1, ID.sub.2, . . . , ID.sub.D)). Examples of different beamformers (formed as (possibly complex) weightings of the input signals) are: Omni-directional, Enhanced omni-directional, Front cardioid (target aiming), Rear cardioid (target cancelling), and Dipole.

(26) Equalization of the mentioned (unintentionally introduced) phase differences may be performed as exemplified in the following. In general, if two microphones have a distance that results in a time delay d (where d has the unit of samples and is used to synchronize the microphones for signals from the look direction), the enhanced omni (ID.sub.1) signal is calculated as I.sub.2+I.sub.1 (where I.sub.1=Im.sub.1*z.sup.−d). The rear cardioid (ID.sub.2) signal is calculated as I.sub.2−I.sub.1 (where I.sub.1=Im.sub.1*z.sup.−d). So the transfer function difference of ID.sub.2 relative to ID.sub.1 is: (1−z.sup.−d)/(1+z.sup.−d). It is assumed that the two input signals I.sub.1 and I.sub.2 are perfectly amplitude-matched for signals coming from the front (by the Mic matching block in FIG. 3B). However, if the individual microphones (M.sub.1, M.sub.2) are not perfectly omni-directional, there will be a mismatch to the rear direction. This mismatch to the rear direction can be estimated by the Mic matching block. If the signal I.sub.1 is mismatched by a factor ‘mm’ for sounds from the back, the transfer function difference between ID.sub.1 and ID.sub.2 for sounds from the back becomes (1−mm*z.sup.−d)/(1+mm*z.sup.−d). To compensate for this, we apply the inverse transfer function, which is (mm+z.sup.−d)/(mm−z.sup.−d). After this compensation, the signals IDE.sub.1 and IDE.sub.2 are phase (and amplitude) equalized for signals from the rear direction.
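The compensation idea can be checked numerically. The following is a minimal sketch under the stated assumptions, with illustrative values d = 1 sample and mismatch mm = 0.9; for simplicity it applies the compensation as the direct algebraic reciprocal of the transfer-function difference:

```python
import numpy as np

d, mm = 1, 0.9                   # illustrative delay (samples) and rear mismatch
w = np.linspace(0.05, 3.0, 200)  # angular frequencies
zd = np.exp(-1j * w * d)         # z^-d evaluated on the unit circle, z = e^{jw}

H_diff = (1 - mm * zd) / (1 + mm * zd)  # ID2 relative to ID1, rear direction
H_inv = (1 + mm * zd) / (1 - mm * zd)   # algebraic reciprocal (compensation)

# After compensation the two paths are phase- (and amplitude-) equalized:
print(np.max(np.abs(H_diff * H_inv - 1)))   # ~0
```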

(27) The phase error introduced by the beamformer is compensated by applying the inverse transfer function. The geometrical configuration is taken into account by the delay d, the sum and difference operations in the beamformers are compensated by the corresponding sums/differences in the inverse transfer function. The mismatch mm is also included and compensated in the inverse transfer function.

(28) Based on the current input unit configuration (signal IUconf) and the currently chosen configuration of beamformers (signal BFcont), the control unit generates control input EQcont for setting parameters of the equalizer unit (determining a transfer function of the EQU unit that inverses the phase changes applied to the sound input by the input unit (IU) and the directional unit (DIR), in other words to implement a currently relevant phase correction for application to the beamformed signals ID.sub.n to provide phase equalized beamformed signals IDE.sub.n). The same inverse transfer function as explained above applies here. All compensations are preferably applied at the same time.

(29) The beamformer output unit (BOU) determines the resulting beamformed signal (RFBS) from the equalized input signals according to a predefined rule or criterion. This information is embodied in control signal RBFcont, which is fed from the control unit (CONT) to the beamformer output unit (BOU). A predefined rule or criterion can in general be to optimize a property of the resulting beamformed output signal. More specifically, a predefined rule or criterion can e.g. be to minimize the energy of the resulting beamformed output signal (RBFS) (or to minimize the magnitude). A predefined rule or criterion may e.g. comprise minimizing amplitude fluctuations of the resulting beamformed output signal. Other rules or criteria may be implemented to provide a specific resulting beamformed output signal for a given application or sound environment. Other rules may be implemented, that are partly or completely independent of the resulting beamformed signal, e.g. put a static beamformer null towards a specified direction or sweep the beamformer null over a predefined range of angles.
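Claim 6 and the minimum-energy criterion above suggest an (N)LMS adaptation. The following single-tap NLMS sketch is illustrative (the function name, step size, and the synthetic scenario are assumptions, not from the patent): the adaptive filter scales the target-cancelling signal and the subtraction unit removes it from the target-including signal, minimizing the energy of the output.

```python
import numpy as np

def nlms_beamformer_output(ide1, ide2, mu=0.1, eps=1e-2):
    """Single-tap NLMS sketch of the beamformer output unit (BOU):
    adaptively scale the target-cancelling signal ide2 and subtract it
    from ide1 so that the output energy is minimized."""
    w = 0.0
    out = np.empty_like(ide1)
    for n in range(len(ide1)):
        out[n] = ide1[n] - w * ide2[n]                     # subtraction unit
        w += mu * ide2[n] * out[n] / (ide2[n] ** 2 + eps)  # NLMS update
    return out

# Illustrative scenario: target s reaches only the target-including path,
# noise v leaks into it with gain 0.5 and is isolated in the cancelling path.
rng = np.random.default_rng(1)
v = rng.standard_normal(4000)
s = 0.1 * np.sin(2 * np.pi * np.arange(4000) / 32)
out = nlms_beamformer_output(s + 0.5 * v, v)
print(np.mean(out[-1000:] ** 2), np.mean((s + 0.5 * v)[-1000:] ** 2))
```

After convergence the filter weight approaches the leakage gain, so the output energy drops toward that of the target alone.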

(30) FIG. 3B illustrates the embodiment of a hearing device as shown in FIG. 2B in more detail. The first and second input transducers (M.sub.1, M.sub.2 in FIG. 2B) are denoted Front and Rear (omni-directional) microphones (the Front microphone being e.g. located in front of the Rear microphone on a (BTE-)part of a hearing device, when the (BTE-)part is worn, the BTE-part being adapted to be worn behind an ear of a user, front and rear being defined with respect to a direction indicated by the user's nose). This definition is illustrated in FIGS. 6A-6B. As an alternative to this assumption of the signal source of interest to the user being located in front of the user, other fixed directions may be assumed, e.g. to the right or left of the user (e.g. in a situation where the user is in the front seat of a car). Further alternatively, the location of the currently ‘interesting’ sound signal source may be dynamically determined.

(31) The input unit (IU) of FIG. 3B comprises the section Microphone configuration, comprising the sub-sections Microphone synchronization and Mic matching.

(32) The beamformer filter (BF) of FIG. 3B comprises sections Directional signal creation (DIR), Equalization (EQU) (phase and amplitude correction), and Adaptive algorithm (BOU).

(33) The directional microphone system, comprising a microphone array and a directional algorithm (e.g. microphones M.sub.1, M.sub.2 and directional unit DIR of the embodiment of FIG. 2B), is in FIG. 3B embodied in the sections denoted Microphone synchronization, Mic matching, and Directional signal creation (DIR), respectively. The Microphone synchronization section comprises first (Front) and second (Rear) omni-directional microphones providing electric input signals (Im.sub.1, Im.sub.2). The Microphone synchronization section further comprises a delay unit (Delay) for introducing a delay in one of the microphone paths (here in the path of the Front microphone) to provide that one microphone signal (Im.sub.1) is delayed (providing delayed Front signal Im.sub.1d) relative to the other (Im.sub.2), e.g. to compensate for a difference in propagation delay of the acoustic signal corresponding to the physical distance d (e.g. 10 mm) between the Front and the Rear microphones, i.e. to compensate for a geometrical configuration of the array. The Mic matching section comprises a microphone matching unit (Mic matching) for matching the Front and the Rear microphones (ideally to equalize their angle and frequency dependent gain characteristics/transfer functions). The Mic matching block ideally matches the amplitude (gain characteristics) only for signals from the look direction. The reason is that signals from the look direction are (ideally) cancelled in the target cancelling beamformer. The better the amplitude match for the look direction, the better the cancelling. In an embodiment, the Mic matching block detects the absolute level of the two microphone signals (Im.sub.1d, Im.sub.2) and attenuates the stronger of the two, providing respective matched microphone signals (I.sub.1, I.sub.2). This is only one possible way to match the signals. In another embodiment, gain/attenuation is applied to only one of the two signals (always the same one). In still another embodiment, the Mic matching block is configured to compensate for the mismatch by keeping the amplitude of the sum signal (ID.sub.1) constant. The Mic matching section (output of input unit IU) provides electric input signals (I.sub.1, I.sub.2) to the beamformer filter (BF). The Microphone synchronization and Mic matching sections together represent the Microphone configuration of the hearing device (and constitute in this embodiment input unit IU). The Directional signal creation section receives matched microphone signals (I.sub.1, I.sub.2) as input signals and provides directional (e.g. including omni-directional) signals (ID.sub.1, ID.sub.2) as output signals. In the Directional signal creation section, the delayed and microphone matched signal of the Front microphone path (signal I.sub.1) is subtracted from the microphone matched signal of the Rear microphone path (signal I.sub.2) in the sum unit ‘+’ (denoted Rear Cardioid) of the lower branch to provide directional signal ID.sub.2 representing a rear cardioid signal. Further, the microphone matched signal of the Rear microphone path (signal I.sub.2) is added to the delayed and microphone matched signal of the Front microphone path (signal I.sub.1) in the sum unit ‘+’ (here denoted Double Omni (cf. Enhanced omni-directional)) of the upper branch to provide directional signal ID.sub.1 representing an ‘enhanced omni-directional’ signal.
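The Double Omni / Rear Cardioid construction above can be sketched numerically. This is a minimal sketch assuming an idealized free field and an exact 1-sample inter-microphone delay (all variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)                 # source from the look (front) direction

front = x                                    # Front mic picks up the wavefront first
rear = np.concatenate(([0.0], x[:-1]))       # Rear mic: 1-sample propagation delay
front_d = np.concatenate(([0.0], front[:-1]))  # Delay unit synchronizes front to rear

ID1 = rear + front_d                         # 'Double Omni' (enhanced omni)
ID2 = rear - front_d                         # 'Rear Cardioid' (target cancelling)

print(np.max(np.abs(ID2)))                   # ~0: look-direction signal cancelled
```

The target-cancelling output is (ideally) zero for the look direction, while the Double Omni output carries the look-direction signal at twice the single-microphone amplitude.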

(34) The equalization unit (EQU) of the embodiment of FIG. 2 is in FIG. 3B embodied in the section denoted Equalization (EQU), having directional (e.g. omni-directional) signals (ID.sub.1, ID.sub.2) as input signals and providing equalized signals (IDE.sub.1, IDE.sub.2) as output signals. The aim is to provide two directional signals (IDE.sub.1, IDE.sub.2) that have exactly (or substantially) the same phase over all frequencies (for signals from one specific direction that is not the look direction, e.g. the rear direction or from 90°, etc.).

(35) The DoubleOmni signal (ID.sub.1) is the sum of the two matched microphone signals (I.sub.1, I.sub.2) and the RearCardioid signal (ID.sub.2) is the difference between the two matched microphone signals (I.sub.1, I.sub.2). The phase compensation of the sum operation (I.sub.2+I.sub.1) for the DoubleOmni signal (ID.sub.1) is included in the ID.sub.2 path (cf. Amplitude Correction below). Signal ID.sub.1 is passed to the amplitude correction unit (cf. below). The differentiator operation (I.sub.2−I.sub.1) for the RearCardioid signal is compensated by an integrator operation. Using the Z-transform, this can be formulated as follows: the differentiator can be represented as 1−z.sup.−1, and the integrator is therefore 1/(1−z.sup.−1). The summation (I.sub.2+I.sub.1) in the DoubleOmni signal can be represented as 1+z.sup.−1, but to keep the DoubleOmni signal (ID.sub.1) as natural as possible, this is not compensated. To equalize the phase between the DoubleOmni and RearCardioid signals, the same summation is applied to the RearCardioid signal (1+z.sup.−1). The complete transfer function for the RearCardioid signal is provided by the combination of the two mentioned transfer functions: (1+z.sup.−1)/(1−0.998*z.sup.−1) (optionally (1+mm.sup.−1*z.sup.−1)/(1−mm.sup.−1*z.sup.−1) to compensate for a possible microphone mismatch, cf. above). This corresponds exactly to the RearCardioid equalization filter in the block diagram of FIG. 3B. The 0.998 factor is used to provide a stable filter. The output of the second (rightmost) sum unit ‘+’ in the lower branch of the equalization unit (EQU) is passed as equalized signal IDx.sub.2 to the amplitude correction unit (cf. Amplitude Correction below). Note that, in principle, these calculations are only true for a signal coming from one specific direction that generates a signal delay of exactly 1 sample between the front and rear microphone signals. That is also the direction in which the signals are perfectly subtracted.
However, assuming that we have perfect omni-directional signals, it can be shown, that no matter what the signal delay d is in the formula (1+z.sup.−d)/(1−z.sup.−d), the resulting phase difference introduced by this transfer function is always 90° (or π/2) over all frequencies. In other words, for perfect omni-directional signals, the required phase compensation does not depend on the array size or the direction of the incoming signal. Changing the delay d will however have a frequency dependent influence on the amplitude response.
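The constant 90° phase property stated above can be checked numerically. The following Python sketch (illustrative only, not part of the disclosure; the function name is hypothetical) evaluates H(z)=(1+z.sup.−d)/(1−z.sup.−d) on the unit circle for several delays d:

```python
import numpy as np

# Numerical check of the statement above: the transfer function
# H(z) = (1 + z^-d) / (1 - z^-d), evaluated on the unit circle
# z = exp(j*2*pi*f), has a constant +/-90 degree phase for every
# delay d and every frequency (away from its poles and zeros).
def phase_of_equalizer(f, d):
    z = np.exp(2j * np.pi * f)
    h = (1 + z**(-d)) / (1 - z**(-d))
    return np.angle(h)

for d in (1, 2, 4):
    # avoid f = 0 and f = 1/(2d), where H has a pole/zero
    f = np.linspace(0.01, 0.49 / d, 50)
    assert np.allclose(np.abs(phase_of_equalizer(f, d)), np.pi / 2)
```

Analytically, H reduces to −j·cot(π·f·d), which is purely imaginary, so only its amplitude (not its phase) depends on d.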

(36) In the equalization unit (EQU), the amplitude of the DoubleOmni signal (ID.sub.1) is equalized to the amplitude of the input signal (ID.sub.2) by multiplication with a factor of 0.5 (unit ‘½’ in the Amplitude Correction unit in FIG. 3B), thereby providing the (first) phase- and amplitude-equalized OmniDirectional signal IDE.sub.1. This correction of the amplitude of the DoubleOmni signal (ID.sub.1) might as well form part of the DIR-block (in which case no equalization of the DoubleOmni signal (ID.sub.1) would be performed by the equalization unit (EQU)). A corresponding correction (multiplication with a factor of 0.5) of the RearCardioid signal (ID.sub.2) might also form part of the DIR-block (in which case this part of the amplitude correction would not be performed in the equalization unit (EQU)). The (phase-)equalized RearCardioid signal (IDx.sub.2) is also equalized in amplitude (cf. unit Amplitude Correction in FIG. 3B), thereby providing the (second) phase- and amplitude-equalized RearCardioid signal IDE.sub.2. A part of the amplitude equalization is performed elsewhere in the EQU block (and/or in the DIR and/or Mic Matching units); e.g. the integrator that is part of the EQU block will also amplify the low frequencies. However, this part of the EQU block only equalizes the amplitude for signals with exactly 1 sample delay.
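As an illustration of the equalization path of paragraphs (34)-(36), the following Python sketch (assumptions, not part of the disclosure: ideal omni microphones, a rear source arriving with exactly 1 sample of inter-microphone delay, and variable names mirroring the text) forms the DoubleOmni and RearCardioid signals and applies the (1+z.sup.−1)/(1−0.998*z.sup.−1) equalization filter as a difference equation. The common 0.5 factor of the Amplitude Correction unit is omitted, since it scales both branches equally:

```python
import numpy as np

# Synthetic rear-incidence scenario: the rear microphone (i2) receives
# the tone first, the front microphone (i1) one sample later.
n = np.arange(6000)
s = np.sin(2 * np.pi * 0.1 * n)            # test tone from the rear
i2 = s                                      # rear microphone signal
i1 = np.concatenate(([0.0], s[:-1]))        # front microphone, 1 sample later

id1 = i1 + i2                               # DoubleOmni (not compensated)
id2 = i2 - i1                               # RearCardioid (a differentiator here)

# y[n] = x[n] + x[n-1] + 0.998*y[n-1]  <=>  (1 + z^-1)/(1 - 0.998*z^-1)
ide2 = np.zeros_like(id2)
for k in range(1, len(id2)):
    ide2[k] = id2[k] + id2[k - 1] + 0.998 * ide2[k - 1]

# Once the slow pole at 0.998 has settled, the equalized RearCardioid
# matches the DoubleOmni signal in both phase and amplitude.
assert np.allclose(ide2[-500:], id1[-500:], atol=0.02)
```

The small tolerance covers the residual effect of using 0.998 instead of 1 in the integrator pole, which (as noted above) is there only to keep the filter stable.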

(37) The amplitude equalization for a signal that has a specific delay d is simply given by the quotient of the two transfer functions (one with delay 1 and one with delay d):
Amplitude correction=[(1+z.sup.−d)/(1−z.sup.−d)]/[(1+z.sup.−1)/(1−z.sup.−1)].

(38) For perfect omni-directional microphones, it can be shown that this expression is purely real (no phase shift) and can be simplified to:
Amplitude correction=tan(π*f)/tan(π*f*d),
where f is the normalized frequency and d is the delay. Note that this corresponds to a frequency-dependent gain correction.
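The claim that the quotient of paragraph (37) is purely real and reduces to tan(π*f)/tan(π*f*d) can be checked numerically with the following sketch (illustrative only; the function name is an assumption):

```python
import numpy as np

# Quotient of the two transfer functions of paragraph (37):
# one with delay d and one with delay 1, on the unit circle.
def quotient(f, d):
    z = np.exp(2j * np.pi * f)
    h_d = (1 + z**(-d)) / (1 - z**(-d))
    h_1 = (1 + z**(-1)) / (1 - z**(-1))
    return h_d / h_1

for d in (2, 3):
    f = np.linspace(0.02, 0.45 / d, 40)
    q = quotient(f, d)
    # purely real (no phase shift) ...
    assert np.allclose(q.imag, 0.0, atol=1e-9)
    # ... and equal to the closed-form gain correction of paragraph (38)
    assert np.allclose(q.real, np.tan(np.pi * f) / np.tan(np.pi * f * d))
```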

(39) The adaptive filter (AF) and subtraction unit ‘+’ of the embodiment of FIG. 2B are in FIG. 3B embodied in the units denoted LMS and ‘+’, respectively, in the Amplitude correction, adaptive algorithm (BOU) section. LMS is short for Least Mean Square(s), a commonly used algorithm in adaptive filters (other adaptive algorithms, e.g. NLMS, RLS, etc., may be used, however). If the LMS filter comprises more than one coefficient, a delay element (Del in FIG. 3B) is inserted into the upper signal path (delaying the signal IDE.sub.1 to match the delay introduced by the LMS block). The adaptive filter (denoted LMS in FIG. 3B) and the sum unit ‘+’ subtract a modified version IDEm.sub.2 of the equalized RearCardioid signal IDE.sub.2 from the (optionally delayed) equalized OmniDirectional signal IDE.sub.1 to create a signal RBFS with the smallest possible energy. The energy is reduced by attenuating all signals except the signals coming from the front. The output signal RBFS represents a FrontCardioid signal determined by subtracting a modified (equalized in phase and amplitude) RearCardioid signal from the (amplitude-equalized) OmniDirectional signal.

(40) The task of the adaptive filter (LMS) (and the subtraction unit ‘+’) is to minimize the expected value of the squared magnitude of the output signal RBFS (E[ABS(RBFS).sup.2]). According to this rule or criterion, it is for example an ‘advantage’ to attenuate (filter out) time-frequency units (TFU(k,m), where k and m are frequency and time indices, respectively) of the rear signal that have large magnitudes where the corresponding time-frequency units of the front signal do not. This is beneficial because, if TFU(front)=LOW and TFU(rear)=HIGH, it may be concluded that the signal content of the rear signal is noise. Otherwise, i.e. if not filtered out, these contributions from the rear signal would increase E[ABS(RBFS).sup.2].
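A minimal one-tap LMS sketch of this criterion (illustrative only; the synthetic signals, step size, and the use of broadband scalars rather than per-band time-frequency units are assumptions) is:

```python
import numpy as np

# Adapt a scalar A so that E[|RBFS|^2] is minimized, with
# RBFS = IDE1 - A*IDE2. Here IDE1 carries the front target plus
# rear noise, and IDE2 carries the rear noise only, so the
# optimal A is 1 (it cancels the rear noise exactly).
rng = np.random.default_rng(0)
target = rng.standard_normal(20000)        # front signal (to be kept)
noise = rng.standard_normal(20000)         # rear noise (to be removed)
ide1 = target + noise
ide2 = noise

mu = 0.002                                 # LMS step size
a = 0.0
for x1, x2 in zip(ide1, ide2):
    rbfs = x1 - a * x2                     # beamformer output sample
    a += mu * rbfs * x2                    # stochastic gradient step

assert abs(a - 1.0) < 0.3                  # converged near the optimum
```

The update `a += mu * rbfs * x2` is the standard stochastic-gradient descent step on the squared error; in the system of FIG. 3B the corresponding adaptation would run per frequency band.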

(41) FIG. 4 shows a schematic visualization of the functionality of an embodiment of a beamforming algorithm according to the present disclosure (as exemplified in FIG. 3B). The individual plots of FIG. 4 illustrate the angle-dependent gain or attenuation of the signal in question (front and rear directions being represented in the plots as vertical up and vertical down directions, respectively, corresponding to the definition outlined in FIG. 6B). A circular plot indicates an equal gain or attenuation irrespective of the angle (termed ‘omni-directional’). The algorithm preferably fades to the configuration with the lowest level while keeping the front response unchanged. It can fade from ‘Enhanced Omni’ (termed Omni in the top part of FIG. 4) to dipole directionality (termed Dipole in the lower part of FIG. 4) over a number of intermediate directional characteristics (in FIG. 4, two are shown, termed Front Omni and Front Cardioid), or vice versa (from Dipole to Enhanced Omni). In very quiet situations, or if wind noise is present, it will immediately fade to Enhanced Omni. If there is a lot of noise in the rear direction, it will fade to the best possible directionality mode, depending on the surrounding noise. At the same time, the system transfer function towards the front direction is not changed when fading from Enhanced Omni to one of the ‘true’ directional modes, meaning that there is no low-frequency (LF) roll-off. An advantage thereof is that the proposed solution makes the fading almost inaudible and offers sufficient loudness even in directional mode. Further, the choice of the correct directionality does not, as is usual, depend on a classification system, but on a simple first-order LMS algorithm, which will always find the best possible solution.

(42) In the illustration of FIG. 4, the adaptive algorithm (LMS, cf. FIG. 3B) is very simple and implements the following formula: RBFS=Output=Omni−A*RearCardioid, where A is a scalar factor that varies, for example, between 0 and 2. In an embodiment, A is a complex constant. In an embodiment, A is defined for each frequency band (A.sub.i, i=1, 2, . . . , N.sub.FB, where N.sub.FB is the number of frequency bands). FIG. 4 schematically shows four situations corresponding to four different values of A (from top to bottom): A=0, A=0.1, A=1, A=2. For each value of A, the two input signals (Omni (=IDE.sub.1 in FIG. 3B) and A*RearCardioid (=A*IDE.sub.2 in FIG. 3B)) and the resulting signal (Output (=RBFS in FIG. 3B)) are schematically shown. It is seen that, with increasing values of A, the resulting Output changes from an omni-directional signal (Omni) for A=0 to a dipole signal (Dipole) for A=2. The intermediate values represented in FIG. 4, A=0.1 and A=1, result in a slightly front-dominated omni-directional signal (FrontOmni) and a front cardioid signal (FrontCardioid), respectively.
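The four situations of FIG. 4 can be reproduced with ideal first-order free-field patterns (an assumption for illustration, not part of the disclosure), modelling the rear cardioid as (1−cos θ)/2 with θ=0 toward the front:

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 361)          # 0 = front, pi = rear
omni = np.ones_like(theta)                          # omni-directional gain
rear_cardioid = (1.0 - np.cos(theta)) / 2.0         # null at front, max at rear

def output(a):
    # Output = Omni - A*RearCardioid, as in the formula above
    return omni - a * rear_cardioid

assert np.allclose(output(0.0), 1.0)                # A=0: Enhanced Omni
assert abs(output(1.0)[180]) < 1e-12                # A=1: FrontCardioid, null at rear
assert np.allclose(output(2.0), np.cos(theta))      # A=2: Dipole, nulls at +/-90 deg

# The front response is 1 for every A: fading leaves the look
# direction (and hence the low frequencies toward the front) untouched.
for a in (0.0, 0.1, 1.0, 2.0):
    assert abs(output(a)[0] - 1.0) < 1e-12
```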

(43) The LMS algorithm adapts the factor A so that the output energy (E[ABS(Output).sup.2]) is as small as possible. Normally, this means that the null in the output polar plot is directed towards the loudest noise source. An advantage of the present algorithm is that it allows a fading to Omni mode to reduce specific directional noise (e.g. wind noise).

(44) FIG. 5 shows an exemplary application scenario of an embodiment of a hearing assistance system according to the present disclosure.

(45) FIG. 5A shows an embodiment of a binaural hearing assistance system, e.g. a binaural hearing aid system, comprising left (second) and right (first) hearing devices (HAD.sub.l, HAD.sub.r) in communication with a portable (handheld) auxiliary device (AD) functioning as a user interface (UI) for the binaural hearing aid system. In an embodiment, the binaural hearing aid system comprises the auxiliary device AD (and the user interface UI). The user interface UI of the auxiliary device AD is shown in FIG. 5B. The user interface comprises a display (e.g. a touch-sensitive display) showing a user of the hearing assistance system and a number of predefined locations of the target sound source relative to the user. Via the display of the user interface (under the heading Beamformer initialization), the user U is instructed to: Drag source symbol to relevant position of current target signal source. Press START to make the chosen direction active (in the beamforming filter).

(46) These instructions prompt the user to locate the source symbol in the direction relative to the user where the target sound source is expected to be located (e.g. in front of the user (φ.sub.s=0°), or at an angle different from the front, e.g. φ.sub.s=−45° or φ.sub.s=+45°), and to press START to initiate the use of the chosen direction as the ‘look direction’ of a target-aiming beamformer.

(47) Hence, the user is encouraged to choose a location for a current target sound source by dragging a sound source symbol (circular icon with a grey shaded inner ring) to its approximate location relative to the user (e.g. if deviating from the front direction, which is assumed as default). The ‘Beamformer initialization’ is e.g. implemented as an APP of the auxiliary device AD (e.g. a SmartPhone). Preferably, when the procedure is initiated (by pressing START), the chosen location (e.g. angle, and possibly distance, relative to the user) is communicated to the left and right hearing devices for use in choosing an appropriate corresponding (possibly predetermined) set of filter weights, or for calculating such weights. In the embodiment of FIG. 5, the auxiliary device AD comprising the user interface UI is adapted for being held in a hand of the user (U), and is hence convenient for displaying a current location of a target sound source.

(48) In an embodiment, communication between the hearing device and the auxiliary device is in the base band (audio frequency range, e.g. between 0 and 20 kHz). Preferably, however, communication between the hearing device and the auxiliary device is based on some sort of modulation at frequencies above 100 kHz. Preferably, the frequencies used to establish a communication link between the hearing device and the auxiliary device are below 70 GHz, e.g. located in a range from 50 MHz to 70 GHz, e.g. above 300 MHz, e.g. in an ISM range above 300 MHz, e.g. in the 900 MHz range, in the 2.4 GHz range, in the 5.8 GHz range, or in the 60 GHz range (ISM=Industrial, Scientific and Medical, such standardized ranges being e.g. defined by the International Telecommunication Union, ITU). In an embodiment, the wireless link is based on a standardized or proprietary technology. In an embodiment, the wireless link is based on Bluetooth technology (e.g. Bluetooth Low Energy technology) or a related technology.

(49) In the embodiment of FIG. 5A, wireless links denoted IA-WL (e.g. an inductive link between the left and right hearing devices) and WL-RF (e.g. RF links (e.g. Bluetooth) between the auxiliary device AD and the left (HAD.sub.l) and right (HAD.sub.r) hearing devices, respectively) are indicated (and implemented in the devices by corresponding antenna and transceiver circuitry, indicated in FIG. 5A in the left and right hearing devices as RF-IA-Rx/Tx-l and RF-IA-Rx/Tx-r, respectively).

(50) In an embodiment, the auxiliary device AD is or comprises an audio gateway device adapted for receiving a multitude of audio signals (e.g. from an entertainment device, e.g. a TV or a music player, from a telephone apparatus, e.g. a mobile telephone, or from a computer, e.g. a PC) and adapted for allowing the selection of an appropriate one of the received audio signals (and/or a combination of signals) for transmission to the hearing device(s). In an embodiment, the auxiliary device is or comprises a remote control for controlling functionality and operation of the hearing device(s). In an embodiment, the auxiliary device AD is or comprises a cellular telephone, e.g. a SmartPhone, or a similar device. In an embodiment, the function of a remote control is implemented in a SmartPhone, the SmartPhone possibly running an APP allowing the user to control the functionality of the audio processing device via the SmartPhone (the hearing device(s) comprising an appropriate wireless interface to the SmartPhone, e.g. based on Bluetooth (e.g. Bluetooth Low Energy) or some other standardized or proprietary scheme).

(51) In the present context, a SmartPhone may comprise (A) a cellular telephone comprising a microphone, a speaker, and a (wireless) interface to the public switched telephone network (PSTN), COMBINED with (B) a personal computer comprising a processor, a memory, an operating system (OS), a user interface (e.g. a keyboard and display, e.g. integrated in a touch-sensitive display) and a wireless data interface (including a Web browser), allowing a user to download and execute application programs (APPs) implementing specific functional features (e.g. displaying information retrieved from the Internet, remotely controlling another device, combining information from various sensors of the SmartPhone (e.g. camera, scanner, GPS, microphone, etc.) and/or external sensors to provide special features, etc.).

(52) FIGS. 6A-6B illustrate a possible definition of the terms front (front) and rear (rear) relative to a user (U) of a hearing device (HAD). FIG. 6A shows an ear (ear (pinna)) and a hearing device (HAD) operationally mounted at the ear of the user. The hearing device (HAD) comprises a BTE part (HAD (BTE)) adapted for being located behind an ear of the user, an ITE part (HAD (ITE)) adapted for being located in an ear canal of the user, and a connecting element (HAD (Con)) for electrically and/or mechanically and/or acoustically connecting the BTE and ITE parts. The locations of the front and rear microphones (M.sub.1 and M.sub.2, respectively) on the BTE part (HAD (BTE)) of the hearing device are indicated, together with arrows indicating the front and rear directions relative to the user. FIG. 6B shows a user's head wearing left and right hearing devices at the left and right ears. Other definitions of preferred directions may be used. Likewise, other configurations (partitions) of hearing devices may be used. Further, other types of hearing devices, e.g. comprising vibrational stimulation of the user's skull or electrical stimulation of the user's cochlear nerve, may be used.

(53) The invention is defined by the features of the independent claim(s). Preferred embodiments are defined in the dependent claims. Any reference numerals in the claims are intended to be non-limiting for their scope.

(54) Some preferred embodiments have been shown in the foregoing, but it should be stressed that the invention is not limited to these, but may be embodied in other ways within the subject-matter defined in the following claims and equivalents thereof.
