ROBUST VOICE ACTIVITY DETECTOR SYSTEM FOR USE WITH AN EARPHONE

20230280965 · 2023-09-07

Abstract

An electronic device or method for adjusting a gain on a voice operated control system can include one or more processors and a memory having computer instructions. The instructions, when executed by the one or more processors, cause the one or more processors to perform the operations of receiving a first microphone signal, receiving a second microphone signal, updating a slow time weighted ratio of the filtered first and second signals, and updating a fast time weighted ratio of the filtered first and second signals. The one or more processors can further perform the operations of calculating an absolute difference between the fast time weighted ratio and the slow time weighted ratio, comparing the absolute difference with a threshold, and increasing the gain when the absolute difference is greater than the threshold. Other embodiments are disclosed.

Claims

1. A device comprising: a first microphone configured to generate a first microphone signal; a second microphone configured to generate a second microphone signal; memory configured to store instructions; a processor configured to execute the instructions to perform operations, the operations comprising: receiving the first microphone signal; receiving the second microphone signal; generating a modified first microphone signal by applying a first filter to the first microphone signal; generating a modified second microphone signal by applying a second filter to the second microphone signal; determining if a voice activity is in an active mode by analyzing at least one of a first fast time weighted average of the modified first microphone signal, a second fast time weighted average of the modified second microphone signal, a first slow time weighted average of the modified first microphone signal, a second slow time weighted average of the modified second microphone signal, a first ratio of the first fast and the second fast time weighted averages, a second ratio of the first slow and the second slow time weighted averages, or a difference of the first and second ratios, or a combination thereof; and updating a VOX gain if the voice activity is in the active mode.

2. The device of claim 1, wherein the operation of determining if a voice activity is in an active mode uses at least one of a singular value decomposition, a neural net system, a bounded probability value or a combination thereof.

3. The device according to claim 1, wherein the first filter is a bandpass filter.

4. The device according to claim 3, wherein the bandpass filter is configured to pass frequencies primarily between 100 Hz and 2 kHz.

5. The device according to claim 1, wherein the second filter is a bandpass filter.

6. The device according to claim 5, wherein the bandpass filter is configured to pass frequencies primarily between 100 Hz and 2 kHz.

7. The device of claim 1, wherein the operation of determining if a voice activity is in an active mode uses the difference of the first and second ratios and compares the difference to a threshold value.

8. The device according to claim 7, wherein the voice activity is in an active mode if the difference is greater than the threshold.

9. A device comprising: a first microphone configured to generate a first microphone signal; a second microphone configured to generate a second microphone signal; memory configured to store instructions; a processor configured to execute the instructions to perform operations, the operations comprising: receiving the first microphone signal; receiving the second microphone signal; generating a modified first microphone signal by applying a first filter to the first microphone signal; generating a modified second microphone signal by applying a second filter to the second microphone signal; generating a voice activity status by analyzing at least one of a first fast time weighted average of the modified first microphone signal, a second fast time weighted average of the modified second microphone signal, a first slow time weighted average of the modified first microphone signal, a second slow time weighted average of the modified second microphone signal, a first ratio of the first fast and the second fast time weighted averages, a second ratio of the first slow and the second slow time weighted averages, or a difference of the first and second ratios, or a combination thereof; and storing the voice activity status.

10. The device of claim 9, wherein the operation of generating a voice activity status uses at least one of a singular value decomposition, a neural net system, a bounded probability value or a combination thereof.

11. The device according to claim 9, wherein the first filter is a bandpass filter.

12. The device according to claim 11, wherein the bandpass filter is configured to pass frequencies primarily between 100 Hz and 2 kHz.

13. The device according to claim 9, wherein the second filter is a bandpass filter.

14. The device according to claim 13, wherein the bandpass filter is configured to pass frequencies primarily between 100 Hz and 2 kHz.

15. The device of claim 9, wherein the operation of generating a voice activity status uses the difference of the first and second ratios and compares the difference to a threshold value.

16. The device according to claim 9, wherein the voice activity status can be a binary value.

17. The device according to claim 9, wherein the voice activity status can be a probability value from 0% to 100%.

18. (canceled)

19. (canceled)

20. (canceled)

21. (canceled)

22. (canceled)

23. (canceled)

24. (canceled)

25. (canceled)

26. (canceled)

27. (canceled)

28. (canceled)

29. (canceled)

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] The invention may be understood from the following detailed description when read in connection with the accompanying drawing. It is emphasized, according to common practice, that various features of the drawings may not be drawn to scale. On the contrary, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. Moreover, in the drawing, common numerical references are used to represent like features. Included in the drawing are the following figures:

[0008] FIG. 1 is a block diagram of a voice activity detector system in accordance with an embodiment;

[0009] FIG. 2 is a flow chart of a method for determining voice activity status in accordance with an embodiment herein; and

[0010] FIG. 3 is an overview block diagram of a system for determining voice activity status in accordance with an embodiment herein.

DETAILED DESCRIPTION OF THE INVENTION

[0011] A new method and system is presented to robustly determine voice activity using, typically, two microphones mounted in a small earpiece. The determined voice activity status can be used to control the gain on a voice operated control system to gate the level of a signal directed to a second voice receiving system. This voice receiving system can be a voice communication system (e.g., a radio or telephone system), a voice recording system, a speech-to-text system, or a voice machine-control system. The gain of the voice operated control system is typically set to zero when no voice activity is detected, and set to unity otherwise. The overall data rate in a voice communication system can therefore be adjusted, and large data rate reductions are possible, thereby increasing the number of voice communication channels and/or increasing the voice quality for each voice communication channel. The voice activity status can also be used to adjust the power used in a wireless voice communication system, thereby extending the battery life of the system.

[0012] FIG. 1 describes an exemplary overview of a system 10 of an embodiment in accordance with the present invention. In one exemplary embodiment, two microphones in an earphone are used to determine voice activity of the earphone wearer. The first microphone 12 is an ambient sound microphone, detecting sound in the ambient environment of the earphone wearer. The second microphone 13 is an ear-canal microphone, detecting sound in the ear canal of the earphone wearer. In a preferred embodiment, the earphone incorporates a sound isolating or partially sound isolating barrier to isolate the ear canal microphone from the ambient environment, where this barrier can be an inflatable balloon or a foam plug. The microphones 12 and 13 can serve as inputs to a voice activity detector or VAD 14. The voice activity detector 14 can provide a VAD status to a gain update module 15. The gain update module 15 and the ear canal microphone 13 can provide inputs to an amplifier 16. The output from the gain update module 15 serves as the voice controller gain input signal to the amplifier 16. The amplifier 16 can provide an output signal 17 which can be used by a speaker 18, for example.

[0013] FIG. 2 describes a method 20 for determining the status of user voice activity of the earphone system in FIG. 1. The first and second microphone signals are received at 21 and 22, are band pass filtered (or “band limited”), and power estimates of these filtered first and second microphone signals are calculated at 23 and 24, respectively. In the preferred embodiment, the band pass filtering is accomplished by a weighted FFT operation, familiar to those skilled in the art, with the signal power estimated between approximately 100 Hz and 2 kHz.


P_1(t)=W*FFT(M_1(t))


P_2(t)=W*FFT(M_2(t))

[0014] Where

[0015] P_1(t) is the weighted power estimate of signal microphone 1 at time t.

[0016] W is a frequency weighting vector.

[0017] FFT( ) is a Fast Fourier Transform operation.

[0018] M_1(t) is the signal from the first microphone at time t.

[0019] M_2(t) is the signal from the second microphone at time t.
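The weighted-FFT power estimate above can be sketched in plain Python. This is a minimal illustration, not the patented implementation: it uses a naive DFT rather than an FFT, and the 0/1 weighting mask W, frame length, and sample rate are assumptions chosen for the example.

```python
import math

def band_power(frame, sample_rate, f_lo=100.0, f_hi=2000.0):
    """Weighted power estimate P(t) = W * FFT(M(t)).

    Naive DFT sketch: the weighting vector W is a 0/1 mask that keeps
    only bins between f_lo and f_hi (the 100 Hz to 2 kHz band in the text).
    """
    n = len(frame)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * sample_rate / n
        if not (f_lo <= freq <= f_hi):
            continue  # W = 0 outside the pass band
        re = sum(frame[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = -sum(frame[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        power += (re * re + im * im) / n  # accumulate in-band bin power
    return power
```

A 500 Hz tone (in band) produces a large estimate, while a 3 kHz tone (out of band) produces essentially zero.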

[0020] Fast-time weighted averages of the two band pass filtered power estimates are calculated at 25 and 26, respectively, with a fast time constant, which in the preferred embodiment is equal to 45 ms.


AV_M1_fast(t)=a*AV_M1_fast(t−1)+(1−a)*P_1(t)


AV_M2_fast(t)=a*AV_M2_fast(t−1)+(1−a)*P_2(t)

[0021] Where

[0022] AV_M1_fast(t) is the fast time weighted average of the first band pass filtered microphone signal.

[0023] AV_M2_fast(t) is the fast time weighted average of the second band pass filtered microphone signal.

[0024] a is a fast time weighting coefficient.

[0025] Slow-time weighted averages of the two band pass filtered power estimates are calculated at 27 and 28, respectively, with a slow time constant, which in the preferred embodiment is equal to 500 ms.


AV_M1_slow(t)=b*AV_M1_slow(t−1)+(1−b)*P_1(t)


AV_M2_slow(t)=b*AV_M2_slow(t−1)+(1−b)*P_2(t)

[0026] Where

[0027] AV_M1_slow(t) is the slow time weighted average of the first band pass filtered microphone signal.

[0028] AV_M2_slow(t) is the slow time weighted average of the second band pass filtered microphone signal.

[0029] b is a slow time weighting coefficient, where b&gt;a.
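The fast and slow averaging steps are the same leaky-integrator update with different coefficients. The sketch below assumes a hypothetical 10 ms frame period and uses the standard coeff = exp(−T/τ) relation (not stated in the text) to derive a and b from the 45 ms and 500 ms time constants:

```python
import math

def ewma_update(prev, x, coeff):
    """One leaky-integrator step: y(t) = coeff * y(t-1) + (1 - coeff) * x(t)."""
    return coeff * prev + (1.0 - coeff) * x

def coeff_for(tau_ms, frame_ms):
    """Smoothing coefficient for a given time constant and frame period
    (assumed relation: coeff = exp(-T / tau))."""
    return math.exp(-frame_ms / tau_ms)

# Hypothetical 10 ms frame period; 45 ms fast and 500 ms slow time constants.
a = coeff_for(45.0, 10.0)   # fast coefficient
b = coeff_for(500.0, 10.0)  # slow coefficient (b > a: slower decay)
```

Because the coefficient multiplies the previous average, a longer time constant needs a coefficient closer to one, which is why b exceeds a.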

[0030] The ratio of the two fast time weighted power estimates is calculated at 30 (i.e., the fast weighted power of the second microphone divided by the fast weighted power of the first microphone).


ratio_fast(t)=AV_M2_fast(t)/AV_M1_fast(t)

[0031] The ratio of the two slow time weighted power estimates is calculated at 29 (i.e., the slow weighted power of the second microphone divided by the slow weighted power of the first microphone).


ratio_slow(t)=AV_M2_slow(t)/AV_M1_slow(t)

[0032] The absolute difference of the two above ratio values is then calculated at 31.


diff(t)=abs(ratio_fast(t)−ratio_slow(t))

[0033] Note that, in one embodiment, the updating of the slow time weighted ratio is of the first filtered signal and the second filtered signal, where these are the slow weighted powers of the first and second microphone signals. Similarly, the updating of the fast time weighted ratio is of the first filtered signal and the second filtered signal, where these are the fast weighted powers of the first and second microphone signals. As noted above, the absolute difference between the fast time weighted ratio and the slow time weighted ratio is calculated to provide a value.

[0034] This value is then compared with a threshold at 32; if the value diff(t) is greater than this threshold, voice activity is determined to be currently in an active mode at 33, and the VOX gain value is updated at 34, in this example increased (up to a maximum value of unity).
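Steps 25 through 34 of FIG. 2 can be combined into a single per-frame update. The sketch below is illustrative only: the coefficients a and b, the threshold, and the gain step are hypothetical values, and the decay of the gain back toward zero when inactive is an assumption (the text only describes increasing the gain):

```python
def vad_step(state, p1, p2, a=0.8, b=0.98, threshold=0.05, gain_step=0.1):
    """One per-frame update of the FIG. 2 detector (hypothetical constants).

    state: dict holding the four running averages and the current VOX gain.
    p1, p2: band-limited power estimates of mic 1 (ambient) and mic 2 (ear canal).
    """
    # Fast and slow leaky-integrator averages of each power estimate.
    state["m1_fast"] = a * state["m1_fast"] + (1 - a) * p1
    state["m2_fast"] = a * state["m2_fast"] + (1 - a) * p2
    state["m1_slow"] = b * state["m1_slow"] + (1 - b) * p1
    state["m2_slow"] = b * state["m2_slow"] + (1 - b) * p2

    # Ratios (second mic over first mic) and their absolute difference.
    ratio_fast = state["m2_fast"] / state["m1_fast"]
    ratio_slow = state["m2_slow"] / state["m1_slow"]
    diff = abs(ratio_fast - ratio_slow)

    active = diff > threshold
    if active:
        state["gain"] = min(1.0, state["gain"] + gain_step)  # ramp toward unity
    else:
        # Assumption: decay toward zero when inactive (not specified in the text).
        state["gain"] = max(0.0, state["gain"] - gain_step)
    return active, state["gain"]
```

During steady ambient noise both ratios track each other and diff stays near zero; a sudden rise in ear-canal power (the wearer speaking) moves the fast ratio before the slow one, pushing diff over the threshold.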

[0035] In one exemplary embodiment the threshold value is fixed.

[0036] In a second embodiment the threshold value is dependent on the slow weighted level AV_M1_slow.

[0037] In a third embodiment the threshold value is set to be equal to the time averaged value of the diff(t), for example calculated according to the following:


threshold(t)=c*threshold(t−1)+(1−c)*diff(t)

[0038] where c is a time smoothing coefficient such that the time smoothing is a leaky integrator type with a smoothing time of approximately 500 ms.
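A sketch of this adaptive threshold follows, with a hypothetical c chosen to give roughly the 500 ms smoothing time at an assumed 10 ms frame period:

```python
def update_threshold(prev_threshold, diff, c=0.98):
    """Leaky-integrator threshold update:
    threshold(t) = c * threshold(t-1) + (1 - c) * diff(t).

    At an assumed 10 ms frame period, c = 0.98 gives a smoothing time of
    about -10 ms / ln(0.98) ~ 500 ms, matching the text.
    """
    return c * prev_threshold + (1.0 - c) * diff
```

Repeated updates converge toward the long-run average of diff(t), so brief voice bursts exceed the threshold while the threshold itself slowly adapts to the background level.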

[0039] FIG. 3 is a diagram showing a high level overview of a system 35 to detect voice activity status. The system 35 uses a signal analysis system 41 to analyze at least two audio input signals (e.g., 37, 38, 39, and 40) to determine a voice activity status at 42, where this status can be a binary value (e.g., 0=“no voice activity” and 1=“voice activity”) or a bounded probability value (e.g., between 0% and 100%) representing the likelihood of voice activity. The signal analysis system 41 can utilize the previously described method, where two input signals from a single earphone are used to determine a voice activity status. Alternative methods for determining voice activity can include algorithms using Singular Value Decomposition (SVD) or neural net systems.

[0040] Although the invention is illustrated and described herein with reference to specific embodiments, the invention is not intended to be limited to the details shown. Rather, various modifications may be made in the details within the scope and range of equivalents of the claims and without departing from the embodiments claimed.