METHOD AND DEVICE FOR FREQUENCY-SELECTIVE PROCESSING OF AN AUDIO SIGNAL WITH LOW LATENCY

20220386042 · 2022-12-01


    Abstract

    A method for processing an input audio signal includes using a first analytical filter bank to divide the input audio signal in a first frequency splitting process into a plurality of first frequency bands. The first frequency bands of a first subgroup are divided in a further frequency splitting process by a further analytical filter bank into a plurality of frequency subbands. The divided input audio signal is frequency-selectively processed or amplified. The divided and processed input audio signal is then combined again into an output audio signal. A prediction is applied to the first frequency bands of the first subgroup and/or the frequency subbands derived therefrom, to compensate for latency differences between the first frequency bands and the frequency subbands as a result of the or each further frequency splitting process. A device or hearing aid for carrying out the method is also provided.

    Claims

    1. A method for processing an input audio signal or an input audio signal in a hearing aid, the method comprising: using a first analytical filter bank to divide the input audio signal into a plurality of first frequency bands in a first frequency splitting process; using at least one further analytical filter bank to divide a first subgroup of the first frequency bands into a plurality of frequency subbands in at least one further frequency splitting process; frequency-selectively processing or amplifying the input audio signal divided into the first frequency bands or the frequency subbands; applying a prediction to at least one of the first frequency bands of the first subgroup or the frequency subbands derived from the first frequency bands of the first subgroup, to compensate for latency differences between the first frequency bands and the frequency subbands as a result of at least one of the frequency splitting processes; and then recombining the input audio signal, divided into the first frequency bands or the frequency subbands and frequency-selectively processed, into an output audio signal.

    2. The method according to claim 1, which further comprises forming the first subgroup of the first frequency bands of a plurality of the first frequency bands having respective center frequencies being immediately adjacent and including a lowest first frequency band.

    3. The method according to claim 1, which further comprises subjecting a second subgroup of the first frequency bands to the frequency-selective processing without any further frequency splitting process.

    4. The method according to claim 1, which further comprises providing the first frequency bands with a consistent first bandwidth.

    5. The method according to claim 1, which further comprises providing the prediction applied to the first frequency bands of the first subgroup or to the frequency subbands as a non-linear prediction.

    6. The method according to claim 1, which further comprises providing the prediction applied to the first frequency bands of the first subgroup or to the frequency subbands to be adaptive during the signal processing.

    7. The method according to claim 1, which further comprises analyzing the input audio signal for a presence of voiced speech in at least one of the first frequency bands or frequency subbands, and only performing at least one of the frequency splitting processes upon detecting the presence of voiced speech in the input audio signal.

    8. The method according to claim 1, which further comprises ascertaining an accuracy of the prediction, and only performing at least one of the frequency splitting processes in at least one of the first frequency bands or frequency subbands, upon the accuracy of the prediction satisfying a predefined criterion.

    9. A device for processing an input audio signal or an input audio signal in a hearing aid, the device comprising: a first analytical filter bank configured to divide the input audio signal into a plurality of first frequency bands in a first frequency splitting process; at least one further analytical filter bank disposed downstream of said first analytical filter bank and configured to divide the first frequency bands of a first subgroup of the first frequency bands into a plurality of frequency subbands in at least one further frequency splitting process; a signal processing unit for frequency-selective processing or amplification of the input audio signal having been divided into the first frequency bands or the frequency subbands; at least one predictor configured to apply a prediction to at least one of the first frequency bands of the first subgroup or the frequency subbands derived from the first frequency bands of the first subgroup, to compensate for latency differences between the first frequency bands and the frequency subbands as a result of at least one of the frequency splitting processes; and a synthetic filter bank apparatus disposed downstream of said signal processing unit and configured to combine the input audio signal having been divided into the first frequency bands or the frequency subbands and frequency-selectively processed into an output audio signal.

    10. The device according to claim 9, wherein the first subgroup of the first frequency bands is formed of a plurality of the first frequency bands having respective center frequencies being immediately adjacent and including a lowest first frequency band.

    11. The device according to claim 9, wherein said signal processing unit directly receives a second subgroup of the first frequency bands to subject the second subgroup of the first frequency bands to the frequency-selective processing without any further frequency splitting process.

    12. The device according to claim 9, wherein the first frequency bands have a consistent first bandwidth.

    13. The device according to claim 9, wherein said at least one predictor is a non-linear predictor.

    14. The device according to claim 9, wherein said at least one predictor is an adaptive predictor during the signal processing.

    15. The device according to claim 9, which further comprises: a speech detection module configured to analyze the input audio signal for a presence of voiced speech; and a switching apparatus configured to only activate at least one of said analytical filter banks upon said speech detection module detecting the presence of voiced speech in the input audio signal.

    16. The device according to claim 9, which further comprises a switching apparatus configured to activate and deactivate at least one further analytical filter bank depending on an accuracy of the prediction.

    Description

    BRIEF DESCRIPTION OF THE FIGURES

    [0057] FIG. 1 is a diagrammatic, longitudinal-sectional view of a hearing aid in the form of a hearing device that can be worn behind the ear of a user;

    [0058] FIG. 2 is a schematic block diagram of the structure of the signal processing of the hearing aid of FIG. 1; and

    [0059] FIGS. 3 and 4 are schematic block diagrams, in accordance with FIG. 2, of two alternative embodiments of the hearing aid.

    DETAILED DESCRIPTION OF THE INVENTION

    [0060] Referring now in detail to the figures of the drawings, in which parts and values that correspond to one another are always given the same reference signs, and first, particularly, to FIG. 1 thereof, there is seen, as an example for a device according to the invention for processing an audio signal, a hearing device 2, i.e. a hearing aid configured to assist the hearing capacity of a user with impaired hearing. In the example illustrated herein, the hearing device 2 is a BTE hearing device that can be worn behind an ear of a user.

    [0061] The hearing device 2 includes at least one microphone 6 as an input transducer and one earphone 8 as an output transducer inside a housing 4. The hearing device 2 further includes a battery 10 and an (in particular digital) signal processor 12. Preferably, the signal processor 12 includes both a programmable subunit (a microprocessor, for example) and a non-programmable subunit (an ASIC, for example).

    [0062] The signal processor 12 is supplied with an electrical supply voltage U from the battery 10.

    [0063] In normal operation of the hearing device 2, the microphone 6 receives airborne sound from the environment of the hearing device 2. The microphone 6 converts the sound into an (input) audio signal I which contains information about the received sound. The input audio signal I is supplied to the signal processor 12 inside the hearing device 2, which modifies this input audio signal I to assist the hearing capacity of the user.

    [0064] The signal processor 12 outputs an output audio signal O, which contains information about the processed and thereby modified sound, to the earphone 8.

    [0065] The earphone 8 converts the output audio signal O into modified airborne sound. This modified airborne sound is transferred into the auditory canal of the user through a sound channel 14 that connects the earphone 8 to a tip 16 of the housing 4, as well as through a flexible sound tube (not shown explicitly) that connects the tip 16 to an earpiece inserted into the auditory canal of the user.

    [0066] The functional structure of the signal processor 12 is illustrated in more detail in FIG. 2.

    [0067] In a manner not shown in detail, the input audio signal I recorded by the microphone 6 is first digitized by an analog-digital converter integrated into the signal processor 12 or upstream of the signal processor 12. The digitized input audio signal I is first supplied inside the signal processor 12 to an analytical filter bank apparatus 20 which, in the example illustrated in FIG. 2, includes a first analytical filter bank 22 and a second analytical filter bank 24 downstream thereof.

    [0068] By using the first analytical filter bank 22, the input audio signal I is divided in a first frequency splitting process into a plurality of first frequency bands 26, i.e. first frequency channels, each of which carries a partial band signal of the input audio signal I. For simplicity, just four first frequency bands 26 are illustrated in FIG. 2. In a useful practical implementation of the invention, the first analytical filter bank 22 divides the input audio signal I into, for example, 32 first frequency bands 26. The frequency bands 26 have a consistent (first) bandwidth of, for example, 500 Hz, and a consistent spectral spacing of 250 Hz.
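
    The first splitting stage can be sketched in deliberately simplified form. The patent does not specify the filter bank design (practical low-latency hearing aids typically use polyphase or weighted overlap-add structures), so the toy sketch below splits a block of signal into uniform bands by partitioning DFT bins; the function name and all parameters are illustrative assumptions, chosen only so that the band signals sum exactly back to the input:

```python
import numpy as np

def split_into_bands(x, n_bands):
    """Split signal x into n_bands uniform frequency bands.

    Illustrative DFT-domain split: each band keeps a contiguous
    block of FFT bins and zeroes the rest, so the band signals
    sum back to the original signal (perfect reconstruction).
    """
    X = np.fft.rfft(x)
    edges = np.linspace(0, len(X), n_bands + 1, dtype=int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        Xb = np.zeros_like(X)
        Xb[lo:hi] = X[lo:hi]
        bands.append(np.fft.irfft(Xb, n=len(x)))
    return bands

fs = 16000
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
bands = split_into_bands(x, 32)          # 32 first frequency bands, as in the text
resynth = np.sum(bands, axis=0)
print(float(np.max(np.abs(resynth - x))) < 1e-9)  # reconstruction is exact
```

    Because each DFT bin is assigned to exactly one band, summing the band signals reproduces the input, mirroring the analysis/synthesis round trip of the filter bank apparatus described in the text.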

    [0069] The second analytical filter bank 24 only acts on a (first) subgroup 28 of the frequency bands 26, covering a low-frequency part of the sound spectrum, for example a range from 0 to 3 kHz. The subgroup 28 in this case includes a number of adjacent frequency bands 26 that contain the lowest (i.e. lowest-frequency) first frequency band 26. In the example illustrated in FIG. 2, the subgroup 28 includes the bottom two of the total of four frequency bands 26. In the practical implementation, the subgroup 28 includes, for example, the bottom 12 of the total of 32 first frequency bands 26.

    [0070] Each first frequency band 26 of the subgroup 28 is split by the second analytical filter bank 24 in a second frequency splitting process into multiple (for example, into two, according to FIG. 2) second frequency bands 30. The frequency bands 30 have a consistent (second) bandwidth of, for example, 125 Hz, and a consistent spectral spacing of 62.5 Hz.

    [0071] A second subgroup 32 of the frequency bands 26, formed of the high-frequency frequency bands 26 not belonging to the subgroup 28, bypasses the second analytical filter bank 24 and is thus not subjected to a second (and finer) frequency splitting process.

    [0072] The respective partial band signals of the high-frequency frequency bands 26 of the subgroup 32, as well as of the frequency bands 30, are processed (i.e. signal-modified) in a signal processing unit 34. In the course of this processing, the respective partial band signal of each of the frequency bands 26 of the subgroup 32, and of each of the frequency bands 30, is in particular amplified in accordance with an individual (i.e. predefined and frequency-specific) amplification factor. For the purposes of efficient signal processing, the signal processing unit 34 in the example according to FIG. 2 includes two subunits 36 and 38, for the high-frequency frequency bands 26 of the subgroup 32 and for the frequency bands 30 respectively, wherein the subunits 36 and 38 are each specifically configured for the different bandwidths of the supplied frequency bands 26 or 30.
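
    The frequency-selective amplification in the signal processing unit can likewise be sketched. The gain values below are hypothetical; the point is only that each band carries its own predefined, frequency-specific amplification factor:

```python
import numpy as np

def amplify_bands(band_signals, gains_db):
    """Apply an individual (frequency-specific) gain to each band.

    band_signals: list of arrays, one per frequency band
    gains_db: per-band gain in decibels (hypothetical values)
    """
    assert len(band_signals) == len(gains_db)
    return [10.0 ** (g / 20.0) * b for b, g in zip(band_signals, gains_db)]

# two toy bands: +6 dB on the first, 0 dB (unchanged) on the second
bands_in = [np.ones(4), np.ones(4)]
out = amplify_bands(bands_in, [6.0, 0.0])
print(round(float(out[0][0]), 3))  # 10**(6/20) ≈ 1.995
```

    After this per-band amplification, the processed band signals would be handed to the synthetic filter bank apparatus for recombination.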

    [0073] The processed partial band signals of the high-frequency frequency bands 26 of the subgroup 32 and the frequency bands 30 are combined to form the output audio signal O by a synthetic filter bank apparatus 40. The synthetic filter bank apparatus 40 is configured with mirror symmetry to the analytical filter bank apparatus 20. It accordingly includes a second synthetic filter bank 42 that combines the second frequency bands 30 again to form the first frequency bands 26 of the subgroup 28, as well as a first synthetic filter bank 44 that combines the first frequency channels 26 of the first subgroup 28 and of the subgroup 32 to form the output audio signal O.

    [0074] Through the finer frequency splitting process performed by using the analytical filter bank 24, a latency difference occurs between the low-frequency partial band signals of the frequency bands 26 of the subgroup 28 as compared with the high-frequency partial band signals of the frequency bands 26 of the subgroup 32, which would, in the absence of further measures, lead to a distortion of the output audio signal O.

    [0075] In order to compensate for this latency difference (i.e. to eliminate it entirely or at least to reduce it) a predictor 46 is connected into the signal path of the frequency bands 30. The predictor 46 is preferably configured as a non-linear predictor that is continuously adaptable during operation of the hearing device 2, in particular as a Hammerstein model. The predictor 46 has parameters specifically adapted to each supplied frequency band 30.
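
    The text prefers an adaptive non-linear (e.g. Hammerstein-type) predictor; as a simpler stand-in, the sketch below uses an adaptive linear forward predictor with a normalized LMS update, predicting a low-frequency partial band signal a few samples ahead so as to offset the extra filter bank latency. The order, lookahead, and step size are illustrative assumptions:

```python
import numpy as np

def nlms_predictor(x, order=8, lookahead=4, mu=0.5, eps=1e-8):
    """Adaptive linear forward predictor (NLMS sketch).

    Predicts x[n + lookahead] from the `order` most recent samples,
    adapting the coefficients continuously during operation. This is
    a linear stand-in for the adaptive predictor in the text.
    Returns the prediction signal and the prediction error signal.
    """
    w = np.zeros(order)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(order, len(x) - lookahead):
        u = x[n - order:n][::-1]                        # most recent sample first
        y[n + lookahead] = w @ u                        # predict `lookahead` ahead
        e[n + lookahead] = x[n + lookahead] - y[n + lookahead]
        w += mu * e[n + lookahead] * u / (u @ u + eps)  # NLMS coefficient update
    return y, e

fs = 16000
t = np.arange(4000) / fs
x = np.sin(2 * np.pi * 200 * t)    # toy low-frequency partial band signal
y, e = nlms_predictor(x)
print(round(float(np.max(np.abs(e[-500:]))), 4))  # small after convergence
```

    For a narrowband partial band signal such as this, the predictor converges quickly, and its lookahead directly compensates a fixed latency difference of a few samples.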

    [0076] In contrast to the method known from European Patent Application EP 3 197 181 A1, corresponding to U.S. Pat. No. 10,142,741, the prediction in the hearing device 2 takes place in the frequency domain, and is applied exclusively to the more finely frequency-divided low-frequency part of the sound spectrum. In the frequency domain, i.e. between the first analytical filter bank 22 and the first synthetic filter bank 44, the predictor can, however, be disposed at various positions. In the example according to FIG. 2, the predictor 46 is connected between the second analytical filter bank 24 and the subunit 38 of the signal processing unit 34. In FIG. 2, furthermore, three alternative positions are given for the predictor (indicated therein with the reference sign 46′), namely:

    [0077] between the first analytical filter bank 22 and the second analytical filter bank 24,

    [0078] between the subunit 38 of the signal processing unit 34 and the second synthetic filter bank 42, and

    [0079] between the second synthetic filter bank 42 and the first synthetic filter bank 44.

    [0080] In one possible modification of the embodiment illustrated in FIG. 2, the hearing device 2 contains multiple predictors 46, 46′ connected in series, disposed in particular at a plurality of the positions given in FIG. 2, each compensating for a part of the latency difference described above.

    [0081] The output audio signal O output from the first synthetic filter bank 44 is converted back into an analog signal by a digital-analog converter (not shown in more detail) that is integrated into the signal processor 12 or is downstream of the signal processor 12, and is supplied to the earphone 8 for output to the user of the hearing device 2.

    [0082] An alternative embodiment of the hearing device 2 is illustrated in FIG. 3, in which the analytical filter bank apparatus 20 and the synthetic filter bank apparatus 40 are each built with three stages. In addition to the first analytical filter bank 22 and the second analytical filter bank 24, the analytical filter bank apparatus 20 in this embodiment includes a third analytical filter bank 50 that acts on a (first) subgroup 52 of the frequency bands 30. The subgroup 52 in turn covers a low-frequency part of the sound spectrum that is covered altogether by the frequency bands 30. In the example according to FIG. 3, the subgroup 52 includes the bottom two of the total of four frequency bands 30; in the practical implementation, the subgroup 52 includes, for example, the bottom six of the total of 12 second frequency bands 30.

    [0083] Each second frequency band 30 of the subgroup 52 is split by the third analytical filter bank 50, in a third and yet finer frequency splitting process, into multiple (for example, into two, according to FIG. 3) third frequency bands 54. The frequency bands 54 have a consistent (third) bandwidth of, for example, 62.5 Hz, and a consistent spectral spacing of 31.25 Hz.

    [0084] A second subgroup 56 of the frequency bands 30, formed of the high-frequency frequency bands 30 not belonging to the subgroup 52, bypasses the third analytical filter bank 50 and is thus not subjected to a third frequency splitting process.

    [0085] In the embodiment of the hearing device 2 according to FIG. 3, the subunit 38 of the signal processing unit 34 only processes the respective partial band signals of the high-frequency frequency bands 30 of the second subgroup 56. For the processing, in particular the frequency-selective amplification, of the partial band signals of the frequency bands 54, the signal processing unit 34 also includes, according to FIG. 3, a further subunit 58 that is configured for the bandwidth of the frequency bands 54.

    [0086] The synthetic filter bank apparatus 40 is, also in the embodiment according to FIG. 3, configured with mirror symmetry to the analytical filter bank apparatus 20. In addition to the first synthetic filter bank 44 and the second synthetic filter bank 42 it therefore also includes a third synthetic filter bank 60 that combines the third frequency bands 54 back into the second frequency bands 30 of the subgroup 52 after the signal processing.

    [0087] In the embodiment of the hearing device 2 according to FIG. 3, the predictor 46 only acts on the respective partial band signals of the high-frequency frequency bands 30 of the second subgroup 56. In order to predict the partial band signals of the low-frequency second frequency bands 30 of the first subgroup 52 and of the third frequency bands 54, the hearing device 2 according to FIG. 3 includes a further predictor 62. The predictor 62 is preferably of the same type as the predictor 46, but is configured in such a way that it compensates for the latency difference, relative to the partial band signals of the high-frequency first frequency bands 26 of the subgroup 32, that is caused by the second and third frequency splitting processes.

    [0088] The predictor 62 can also be disposed at different positions between the second analytical filter bank 24 and the second synthetic filter bank 42. Furthermore, in variants of the embodiment according to FIG. 3, multiple predictors 62 can also be connected in series, each of which compensates for a part of the latency difference. In a further variant embodiment, the predictors 46 and 62 are connected in series with one another. In this case the predictor 46 is disposed between the first analytical filter bank 22 and the second analytical filter bank 24, or between the second synthetic filter bank 42 and the first synthetic filter bank 44. In these cases, the predictor 62 is configured in such a way that it only compensates for the latency difference caused by the third frequency splitting process.

    [0089] FIG. 4 shows a further embodiment of the hearing device 2, which corresponds substantially to the embodiment according to FIG. 2. In contrast to the embodiment according to FIG. 2, the second frequency splitting process of the low-frequency first frequency bands 26 of the subgroup 28 however only takes place when the input audio signal I contains voiced speech (i.e. voice sounds that are spoken or sung).

    [0090] A speech detection module 64 is implemented for this purpose in the signal processor 12. The speech detection module 64 detects the presence of voiced speech by analyzing the input audio signal I that has been divided into the first frequency bands 26, and in particular the low-frequency parts thereof. In the illustrated example, the frequency bands 26 of the subgroup 28 are supplied as the input variable to the speech detection module 64. The speech detection module 64 detects the presence of voiced speech in this case in particular through the presence of a marked fundamental frequency and/or the occurrence of dominant frequencies (formants) that are characteristic of voiced speech. When voiced speech is detected in the input audio signal I, the speech detection module 64 outputs a control signal S1.
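
    A minimal sketch of such a voiced-speech test, assuming an autocorrelation-based search for a marked fundamental frequency; the threshold and pitch range are hypothetical tuning values, and a real detector would also examine the formant structure:

```python
import numpy as np

def is_voiced(frame, fs, f0_range=(80.0, 400.0), threshold=0.5):
    """Crude voiced-speech test: a marked fundamental frequency shows
    up as a strong peak of the normalized autocorrelation within the
    typical pitch-lag range. All tuning values are hypothetical.
    """
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:
        return False                       # silent frame
    ac = ac / ac[0]                        # normalize to lag 0
    lo = int(fs / f0_range[1])             # shortest pitch period
    hi = int(fs / f0_range[0])             # longest pitch period
    return bool(np.max(ac[lo:hi]) > threshold)

fs = 8000
t = np.arange(2048) / fs
voiced = np.sin(2 * np.pi * 200 * t)       # periodic, pitch-like signal
rng = np.random.default_rng(0)
noise = rng.standard_normal(2048)          # unvoiced-like signal
print(is_voiced(voiced, fs), is_voiced(noise, fs))  # True False
```

    A detection result like this would drive the control signal S1 that operates the signal switch 66.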

    [0091] In order to only perform the second frequency splitting process when voiced speech has been detected, a signal switch 66 is connected in the signal path of the frequency bands 26 of the subgroup 28, and passes the partial band signals of the frequency bands 26 of the subgroup 28 either to the second analytical filter bank 24 or to the subunit 36 of the signal processing unit 34 depending on the control signal S1. When the control signal S1 is applied (and thus when voiced speech has been detected in the input audio signal I), the signal switch 66 passes the partial band signals of the frequency bands 26 of the subgroup 28 to the second analytical filter bank 24. In this case, the function of the hearing device 2 of FIG. 4 corresponds to the embodiment illustrated in FIG. 2. If, on the other hand, the control signal S1 is not applied to the signal switch 66 (meaning that voiced speech is not detected in the input audio signal I by the speech detection module 64), the signal switch 66 instead passes the partial band signals of the frequency bands 26 of the subgroup 28 directly to the subunit 36 of the signal processing unit 34. In this case the partial band signals of all the first frequency bands 26 are processed without any further frequency splitting process, and amplified, in particular in a frequency-specific manner. Prediction likewise does not occur in this case.

    [0092] In an alternative variant embodiment of the hearing device 2 of FIG. 4, the second frequency splitting process is activated not depending on the detection of voiced speech, but depending on the accuracy (reliability) of the prediction. In this case the predictor 46 outputs a magnitude Q that is characteristic of the accuracy of the prediction, in particular the quantity known as the "predictor gain", which is given in decibels by the ratio of the variance σx² of the input signal of the predictor 46 to the variance σe² of the prediction error:

    [00001] Q [dB] = 10 · log₁₀(σx² / σe²)

    [0093] If—as shown in FIG. 4—a plurality of partial band signals is supplied to the predictor, the characteristic magnitude Q is calculated from the mean value, the minimum value or the maximum value of the individual, band-specific, predictor gains. Alternatively, the predictor gain of a partial band signal chosen as a reference is employed as the characteristic magnitude Q. In all of these cases, the value of the characteristic magnitude Q becomes higher the more accurately the predictor 46 can predict the profile of the supplied partial band signals.
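
    The predictor gain and its combination across bands follow directly from the definitions above; the toy signals here are illustrative:

```python
import numpy as np

def predictor_gain_db(x, e):
    """Predictor gain Q: variance of the predictor input over the
    variance of the prediction error, expressed in decibels."""
    return 10.0 * np.log10(np.var(x) / np.var(e))

def combined_q(band_gains_db, mode="mean"):
    """Combine band-specific predictor gains into one magnitude Q,
    as in the text: mean, minimum or maximum of the individual gains."""
    return {"mean": np.mean, "min": np.min, "max": np.max}[mode](band_gains_db)

x = np.array([1.0, -1.0, 1.0, -1.0] * 64)   # toy predictor input
e = 0.1 * x                                  # toy prediction error
q = predictor_gain_db(x, e)
print(round(float(q), 1))  # 20.0 (variance ratio of 100)
```

    The higher Q, the more accurately the predictor tracks the supplied partial band signals, matching the statement in the text.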

    [0094] The characteristic magnitude Q is compared with a predefined threshold value by an evaluation module 68 implemented in the signal processor 12 (and shown in FIG. 4 with a dashed line). As long as the characteristic magnitude Q remains above the threshold value, the evaluation module 68 outputs a control signal S2 that is supplied to the signal switch 66 instead of the control signal S1.

    [0095] When the control signal S2 is applied (and thus when the accuracy of the prediction is sufficient), the signal switch 66 passes the partial band signals of the frequency bands 26 of the subgroup 28 to the second analytical filter bank 24. In this case, the function of the hearing device 2 of FIG. 4 again corresponds to the embodiment illustrated in FIG. 2. If, on the other hand, the control signal S2 is not applied to the signal switch 66 (meaning that the prediction does not show sufficient accuracy), the signal switch 66 instead passes the partial band signals of the frequency bands 26 of the subgroup 28 directly to the subunit 36 of the signal processing unit 34 for a predefined period of time. In this case the partial band signals of all the first frequency bands 26 are again processed without any further frequency splitting process, and amplified, in particular in a frequency-specific manner. Prediction, again, does not occur in this case. Once the predefined period of time has elapsed, the second frequency splitting process, and therefore also the prediction, is reactivated in order to check the accuracy of the prediction again by using the evaluation module 68.
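
    The switching behavior described above (bypassing the second splitting for a predefined period when the prediction accuracy is insufficient, then re-trying) can be sketched as a small state machine; the threshold and hold-off length are hypothetical values:

```python
class SplitSwitch:
    """Sketch of the signal switch 66: the finer splitting (and the
    prediction) stays active while the predictor gain Q exceeds a
    threshold; when Q drops below it, splitting is bypassed for a
    predefined number of frames and then re-tried."""

    def __init__(self, q_threshold=10.0, holdoff_frames=100):
        self.q_threshold = q_threshold      # hypothetical threshold in dB
        self.holdoff_frames = holdoff_frames
        self.bypass_counter = 0

    def split_active(self, q_db):
        if self.bypass_counter > 0:
            self.bypass_counter -= 1
            return False    # bypass: no second splitting, no prediction
        if q_db < self.q_threshold:
            self.bypass_counter = self.holdoff_frames - 1
            return False    # accuracy too low: start bypass period
        return True         # split and predict as in FIG. 2

sw = SplitSwitch(q_threshold=10.0, holdoff_frames=3)
print([sw.split_active(q) for q in [15, 5, 15, 15, 15, 15]])
# [True, False, False, False, True, True]
```

    After the hold-off period the splitting is re-activated even though Q was low, which mirrors the re-check of the prediction accuracy described in the text.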

    [0096] The speech detection module 64 is not provided in the variant embodiment described above. The signal switch 66 is, accordingly, only controlled by the control signal S2.

    [0097] In a further variant embodiment of the hearing device 2 according to FIG. 4, both the speech detection module 64 and the evaluation module 68 are provided. The signal switch 66 is operated in this case both by the control signal S1 as well as by the control signal S2. The control signals S1 and S2 are preferably combined in this case with AND logic, so that the second frequency splitting process is only activated by the signal switch 66 when both the control signal S1 and the control signal S2 are present, in other words when the presence of voiced speech is detected in the input audio signal I, and when the prediction has sufficient accuracy.

    [0098] In a further variant of the hearing device 2 according to FIG. 4 not shown in more detail, the characteristic magnitude Q is calculated separately for each frequency band 26 of the first subgroup 28, in particular through a determination of the respective band-specific predictor gain, and compared with a respective band-specific threshold value. The evaluation module 68 in this case outputs the control signal S2 in a band-specific manner only for the frequency band 26 or the frequency bands 26 for which the band-specific predictor gain exceeds the respectively assigned threshold value. The signal switch 66 accordingly activates the second frequency splitting process selectively only for the frequency band 26 or the frequency bands 26 concerned. In this variant of the hearing device, the predictor 46 is preferably disposed between the signal switch 66 and the second analytical filter bank 24.

    [0099] In a further variant of the hearing device 2 according to FIG. 4, not illustrated in more detail, the control signal S1 is generated in a band-specific manner if the presence of voiced speech is detected in the respective frequency band 26 by the speech detection module 64. In this case again, the signal switch 66 accordingly activates the second frequency splitting process selectively only for the frequency band 26 or the frequency bands 26 concerned.

    [0100] In a further variant of the hearing device 2 according to FIG. 4, not illustrated in more detail, the predictor 46 is only switched out of the signal path connecting the microphone 6 to the earphone 8 when the value of the characteristic magnitude Q falls below the threshold value, but continues to run in the background of the signal processing (without the prediction in this case having any effect on the output audio signal O). This is, for example, achieved in that the signal switch 66 is connected, with mirror symmetry to the illustration according to FIG. 4, between the second synthetic filter bank 42 and the first synthetic filter bank 44. In this case, the predictor 46 also outputs the characteristic magnitude Q continuously when the characteristic magnitude Q does not exceed the threshold value. Switching the signal switch back after the predefined period of time has elapsed, to check the accuracy of the prediction as described for the exemplary embodiment according to FIG. 4, is not necessary in this case, and is therefore also not provided.

    [0101] The components of the signal processor 12 illustrated in FIGS. 2 to 4, namely the analytical filter bank apparatus 20 with the analytical filter banks 22, 24 and, if relevant, 50, the signal processing unit 34 with the subunits 36, 38 and, if relevant, 58, the synthetic filter bank apparatus 40 with the synthetic filter banks 42, 44 and, if relevant, 60, the predictor 46 and, if relevant, the predictor 62, as well as, if relevant, the speech detection module 64, the signal switch 66 and the evaluation module 68, are preferably implemented as software modules that run in the signal processor 12 when the hearing device 2 is operating. Alternatively, one or a plurality of these components are formed by non-programmable electronic circuits.

    [0102] The invention is made particularly clear by the exemplary embodiments described above, but is not restricted to these exemplary embodiments. Rather, further embodiments of the invention can be derived from the claims and the above description. In particular, the individual features of the invention described with reference to the exemplary embodiments can also be combined in other ways within the context of the claims without leaving the scope of the invention.

    [0103] The following is a summary list of reference numerals and the corresponding structure used in the above description of the invention.

    LIST OF REFERENCE SIGNS

    [0104] 2 Hearing device
    [0105] 4 Housing
    [0106] 6 Microphone
    [0107] 8 Earphone
    [0108] 10 Battery
    [0109] 12 Signal processor
    [0110] 14 Sound channel
    [0111] 16 Tip
    [0112] 20 Analytical filter bank apparatus
    [0113] 22 (First) analytical filter bank
    [0114] 24 (Second) analytical filter bank
    [0115] 26 (First) frequency band
    [0116] 28 (First) subgroup
    [0117] 30 (Second) frequency band
    [0118] 32 (Second) subgroup
    [0119] 34 Signal processing unit
    [0120] 36 Subunit
    [0121] 38 Subunit
    [0122] 40 Synthetic filter bank apparatus
    [0123] 42 (Second) synthetic filter bank
    [0124] 44 (First) synthetic filter bank
    [0125] 46 Predictor
    [0126] 46′ Predictor (alternative position)
    [0127] 50 (Third) analytical filter bank
    [0128] 52 (First) subgroup
    [0129] 54 (Third) frequency band
    [0130] 56 (Second) subgroup
    [0131] 58 Subunit
    [0132] 60 (Third) synthetic filter bank
    [0133] 62 Predictor
    [0134] 64 Speech detection module
    [0135] 66 Signal switch
    [0136] 68 Evaluation module
    [0137] I Input audio signal
    [0138] F Error
    [0139] O Output audio signal
    [0140] S1 Control signal
    [0141] S2 Control signal
    [0142] U Supply voltage