Method for converting vibration to voice frequency wirelessly

11699428 ยท 2023-07-11

Assignee

Inventors

CPC classification

International classification

Abstract

The present application discloses a system for converting vibration to voice frequency wirelessly and a method thereof. By sensing a first vibration variation data and a voice frequency variation data of a vocal vibration part in a first sensing period, a voice-frequency reference data is obtained from the voice frequency variation data and the first vibration variation data. A second vibration variation data is then sensed in a second sensing period and converted to a voice-frequency output signal, and the voice-frequency output signal is output as a voice signal corresponding to the voice frequency variation data. Thus, the present application provides a voice signal close to a human voice.

Claims

1. A method for converting vibration to voice frequency wirelessly with an intelligence learning capability, comprising steps of: sensing a throat part in a first sensing period by using a vibration sensor of a sound collecting device to generate a first vibration variation data, and sensing a mouth part in said first sensing period by using a voice frequency sensor of said sound collecting device to generate a voice frequency variation data; transmitting said first vibration variation data and said voice frequency variation data to a computing device by a wireless interface; said computing device executing a voice frequency and vibration conversion program and converting said first vibration variation data and said voice frequency variation data to two corresponding features, namely a voice-frequency corresponding feature and a vibration corresponding feature, based on the same format; and said computing device executing an artificial intelligence program for matching voice and vibration according to said two corresponding features of said voice frequency variation data and said first vibration variation data and producing a corresponding voice-frequency reference data, said artificial intelligence program including an artificial intelligence algorithm; wherein said voice-frequency corresponding feature and said vibration corresponding feature are converted based on the same format by said artificial intelligence algorithm learning said voice-frequency corresponding feature and said vibration corresponding feature, and said voice-frequency reference data is produced by said artificial intelligence algorithm learning the correspondence between said voice-frequency corresponding feature and said vibration corresponding feature.

2. The method for converting vibration to voice frequency wirelessly of claim 1, wherein said artificial intelligence algorithm is a deep neural network (DNN).

3. The method for converting vibration to voice frequency wirelessly of claim 1, wherein said voice-frequency corresponding feature and said vibration corresponding feature are the signal processing results for the log power spectrum, the Mel-frequency cepstrum (MFC), or the linear predictive coding (LPC) spectrum.

4. The method for converting vibration to voice frequency wirelessly of claim 1, wherein said vibration sensor is an accelerometer sensor or a piezoelectric sensor.

5. A system for converting vibration to voice frequency wirelessly with an intelligence learning capability, comprising: a sound collecting device, including: a vibration sensor, sensing a vibration variation data of a throat part in a sensing period; a voice frequency sensor, sensing a voice frequency variation data of said throat part in said sensing period; and a first wireless transmission unit, connected to said vibration sensor and said voice frequency sensor; a computing device, including: a second wireless transmission unit, connected to said first wireless transmission unit wirelessly; a processing unit, connected electrically to said second wireless transmission unit; and a storage unit, storing an artificial-intelligence program and a voice frequency and vibration conversion program, said artificial intelligence program including an artificial intelligence algorithm, said processing unit receiving said vibration variation data and said voice frequency variation data via said first wireless transmission unit and said second wireless transmission unit, said processing unit executing said voice frequency and vibration conversion program for converting said vibration variation data and said voice frequency variation data to two corresponding features, namely a voice-frequency corresponding feature and a vibration corresponding feature, based on the same format by said artificial intelligence algorithm learning said voice-frequency corresponding feature and said vibration corresponding feature, and said processing unit producing a learned voice-frequency reference data according to said two corresponding features of said vibration variation data and said voice frequency variation data; wherein said learned voice-frequency reference data is produced by said artificial intelligence algorithm learning the correspondence between said voice-frequency corresponding feature and said vibration corresponding feature.

6. The system for converting vibration to voice frequency wirelessly of claim 5, wherein said artificial intelligence algorithm is a deep neural network (DNN).

7. The system for converting vibration to voice frequency wirelessly of claim 5, wherein said voice-frequency corresponding feature and said vibration corresponding feature are the signal processing results for the log power spectrum, the Mel-frequency cepstrum (MFC), or the linear predictive coding (LPC) spectrum.

8. The system for converting vibration to voice frequency wirelessly of claim 5, wherein said vibration sensor is an accelerometer sensor or a piezoelectric sensor.

9. A method for converting vibration to voice frequency wirelessly, comprising steps of: sensing a throat part in a sensing period using a vibration sensor and producing a vibration variation data; transmitting said vibration variation data to a computing device; said computing device executing a voice frequency and vibration conversion program and converting said vibration variation data to a vibration corresponding feature; said computing device executing an artificial intelligence program for converting said vibration corresponding feature of said vibration variation data to a voice-frequency mapping signal with a reference sound-field feature according to a learned voice-frequency reference data prestored in a storage unit, said artificial intelligence program including an artificial intelligence algorithm, wherein said voice-frequency reference data and said vibration corresponding feature are converted based on the same format by said artificial intelligence algorithm learning said vibration corresponding feature, and said vibration corresponding feature of said vibration variation data is converted to said voice-frequency mapping signal by said artificial intelligence algorithm learning the correspondence between said voice-frequency corresponding feature and said vibration corresponding feature according to said learned voice-frequency reference data; and said computing device executing said voice frequency and vibration conversion program for converting inversely said voice-frequency mapping signal of said vibration corresponding feature to a voice-frequency output signal.

10. The method for converting vibration to voice frequency wirelessly of claim 9, wherein said artificial intelligence algorithm is a deep neural network (DNN).

11. The method for converting vibration to voice frequency wirelessly of claim 9, wherein said vibration corresponding feature and said voice-frequency reference data are the signal processing results for the log power spectrum, the Mel-frequency cepstrum (MFC), or the linear predictive coding (LPC) spectrum.

12. The method for converting vibration to voice frequency wirelessly of claim 9, wherein said vibration sensor is an accelerometer sensor or a piezoelectric sensor.

13. A system for converting vibration to voice frequency wirelessly, comprising: a sound collecting device, including: a vibration sensor, sensing a vibration variation data of a throat part in a sensing period; and a first wireless transmission unit, connected to said vibration sensor; a computing device, including: a second wireless transmission unit, connected to said first wireless transmission unit wirelessly; a processing unit, connected electrically to said second wireless transmission unit; and a storage unit, storing an artificial-intelligence application program and a voice frequency and vibration conversion program, said processing unit receiving said vibration variation data via said first wireless transmission unit and said second wireless transmission unit, said processing unit executing said voice frequency and vibration conversion program for converting said vibration variation data to a corresponding feature, said processing unit executing said artificial intelligence application program for converting said corresponding feature of said vibration variation data to a voice-frequency mapping signal with a reference sound-field feature according to a learned voice-frequency reference data prestored in said storage unit, and said processing unit executing said voice frequency and vibration conversion program for converting said voice-frequency mapping signal of said corresponding feature to a voice-frequency output signal in an outputable format; wherein said artificial intelligence application program includes an artificial intelligence algorithm, said voice-frequency reference data and said vibration corresponding feature are converted based on the same format by said artificial intelligence algorithm learning said voice-frequency corresponding feature and said vibration corresponding feature, and said vibration corresponding feature of said vibration variation data is converted to said voice-frequency mapping signal by said artificial intelligence algorithm learning the correspondence between said voice-frequency corresponding feature and said vibration corresponding feature according to said learned voice-frequency reference data.

14. The system for converting vibration to voice frequency wirelessly of claim 13, further comprising an output device, connected to said computing device, receiving said voice-frequency output signal in an outputable format, and outputting a voice signal according to said voice-frequency output signal in an outputable format.

15. The system for converting vibration to voice frequency wirelessly of claim 13, wherein said artificial intelligence algorithm is a deep neural network (DNN).

16. The system for converting vibration to voice frequency wirelessly of claim 13, wherein said vibration corresponding feature and said voice-frequency reference data are the signal processing results for the log power spectrum, the Mel-frequency cepstrum (MFC), or the linear predictive coding (LPC) spectrum.

17. The system for converting vibration to voice frequency wirelessly of claim 13, wherein said vibration sensor is an accelerometer sensor or a piezoelectric sensor.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 shows a flowchart according to an embodiment of the present application;

(2) FIG. 2A shows a schematic diagram of sensing voice frequency and vibration simultaneously according to an embodiment of the present application;

(3) FIG. 2B shows a schematic diagram of calculating to give voice-frequency reference data according to an embodiment of the present application;

(4) FIG. 3 shows a flowchart according to another embodiment of the present application;

(5) FIG. 4A shows a schematic diagram of sensing vibration according to another embodiment of the present application;

(6) FIG. 4B shows a schematic diagram of converting vibration to voice frequency according to another embodiment of the present application; and

(7) FIG. 4C shows a schematic diagram of outputting voice frequency according to another embodiment of the present application.

DETAILED DESCRIPTION

(8) Since current vibration-based sound collecting mechanisms are unable to provide output signals with the expected quality, the present application provides a system for converting vibration to voice frequency wirelessly and a method thereof to solve the problem.

(9) First, please refer to FIG. 1, which shows a flowchart according to an embodiment of the present application. As shown in the figure, the method for converting vibration to voice frequency wirelessly according to the present application comprises steps of: Step S10: Sensing a throat part in a first sensing period using a vibration sensor of a sound collecting device to generate a first vibration variation data, and sensing a mouth part in the first sensing period using a voice frequency sensor of the sound collecting device to generate a voice frequency variation data; Step S20: Transmitting the first vibration variation data and the voice frequency variation data to a computing device; Step S25: The computing device executing a voice frequency and vibration conversion program and converting the first vibration variation data and the voice frequency variation data to corresponding features; and Step S30: The computing device executing an application program for matching the first vibration variation data with the voice frequency variation data to generate a corresponding voice-frequency reference data.

(10) Please refer to FIG. 2A and FIG. 2B, which show a schematic diagram of sensing voice frequency and vibration simultaneously in the first sensing period and a schematic diagram of calculating to give voice-frequency reference data according to an embodiment of the present application. As shown in the figures, the system for converting vibration to voice frequency wirelessly 1 comprises a sound collecting device 10 and a computing device 20. The sound collecting device 10 includes a vibration sensor 12, a voice frequency sensor 14, and a first wireless transmission unit 16. The computing device 20 includes a processing unit 22, a storage unit 24, and a second wireless transmission unit 26. The storage unit 24 stores an application program P. The first wireless transmission unit 16 is connected to the second wireless transmission unit 26.

(11) In the step S10, as shown in FIG. 2A, a user U wears the sound collecting device 10 at a throat part T by hanging or using a neck strap or a neck ring. When the user U gives off sound, the throat part T generates vibration V1 correspondingly. The vibration V1 is conducted to the mouth part M and gives off the sound W. The vibration sensor 12 in the sound collecting device 10 senses a first vibration variation data S.sub.V1 of the vibration V1 generated by the throat part T in a first sensing period Pd1. Meanwhile, the voice frequency sensor 14 of the sound collecting device 10 senses the sound W emitted from the mouth part M in the first sensing period Pd1 and produces a voice frequency variation data S.sub.W correspondingly. Next, in the step S20, as shown in FIG. 2A, the sound collecting device 10 transmits the first vibration variation data S.sub.V1 and the voice frequency variation data S.sub.W to the computing device 20 via the wireless transmission interface (such as Bluetooth, Wi-Fi, ZigBee, or LoRa) formed by the first wireless transmission unit 16 and the second wireless transmission unit 26. In particular, the processing unit 22 stores the first vibration variation data S.sub.V1 and the voice frequency variation data S.sub.W in the storage unit 24 temporarily.
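
The wireless transfer in the steps S10 and S20 amounts to serializing one sensing window per period and sending it over the link. The following is a minimal sketch of such a frame; the field order, sample width, and header layout are illustrative assumptions and are not disclosed in the application:

```python
import struct

# Hypothetical frame layout for one sensing window sent over the wireless
# interface (Bluetooth, Wi-Fi, ZigBee, or LoRa, as named in the text).
def pack_window(period_id, vib_samples, audio_samples):
    """Serialize a sensing window: a period tag, the two sample counts,
    then the vibration and voice-frequency samples as signed 16-bit ints."""
    header = struct.pack("<IHH", period_id, len(vib_samples), len(audio_samples))
    body = struct.pack(f"<{len(vib_samples)}h", *vib_samples)
    body += struct.pack(f"<{len(audio_samples)}h", *audio_samples)
    return header + body

def unpack_window(frame):
    """Recover (period_id, vibration samples, voice-frequency samples)."""
    period_id, n_vib, n_aud = struct.unpack_from("<IHH", frame)
    vib = struct.unpack_from(f"<{n_vib}h", frame, 8)
    aud = struct.unpack_from(f"<{n_aud}h", frame, 8 + 2 * n_vib)
    return period_id, list(vib), list(aud)
```

On the receiving side, the computing device would unpack each frame and buffer the two sample streams in the storage unit before feature extraction.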

(12) In the step S25, as shown in FIG. 2B, the computing device 20 uses the processing unit 22 to load the application program P from the storage unit 24 to calculate the first vibration variation data S.sub.V1 and the voice frequency variation data S.sub.W for producing voice-frequency reference data REF. The application program P includes a voice frequency and vibration conversion program P1 and an artificial intelligence module P2. The voice frequency and vibration conversion program P1 includes a Fourier transform module ST and an audio conversion module WT. The Fourier transform module ST performs Fourier transform for converting the first vibration variation data S.sub.V1 to a first vibration corresponding feature VF1. The audio conversion module WT converts the voice frequency variation data S.sub.W to a voice-frequency corresponding feature WF. According to the present embodiment, the voice-frequency corresponding feature WF and the vibration corresponding feature VF1 are the log power spectrum (LPS). Besides, the voice-frequency corresponding feature WF and the vibration corresponding feature VF1 can further be the signal processing results for the Mel-frequency cepstrum (MFC) or the linear predictive coding (LPC) spectrum.
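
The log-power-spectrum feature of paragraph (12) can be sketched as follows. The frame length, hop size, and sampling rate are assumed values chosen for illustration, not parameters disclosed in the application:

```python
import numpy as np

def log_power_spectrum(signal, frame_len=256, hop=128):
    """Frame the signal, apply a Hann window, and take the log power of
    each FFT frame. Assumed framing: 256-sample frames, 50% overlap."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        spectrum = np.fft.rfft(frame)
        frames.append(np.log(np.abs(spectrum) ** 2 + 1e-10))  # eps avoids log(0)
    return np.array(frames)  # shape: (num_frames, frame_len // 2 + 1)

# Example: features of a 1 kHz tone sampled at an assumed 8 kHz rate.
t = np.arange(8000) / 8000.0
vib = np.sin(2 * np.pi * 1000 * t)
VF = log_power_spectrum(vib)
```

The same routine would serve both the vibration data and the voice frequency data, which is what puts the two corresponding features into the same format, as the text requires.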

(13) In the step S30, as shown in FIG. 2B, the artificial intelligence module P2 runs one or more artificial intelligence algorithms AI, for example, a deep neural network (DNN). Based on the same format, the artificial intelligence algorithm AI learns the correspondence between the voice-frequency corresponding feature WF and the first vibration corresponding feature VF1, namely, the weighting relation between the two, for producing the voice-frequency reference data REF correspondingly. In other words, the weighting relation between the voice-frequency corresponding feature WF and the first vibration corresponding feature VF1 is adopted as the voice-frequency reference data REF.
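
The learning of paragraph (13) can be sketched with a tiny fully connected network written in numpy; the feature matrices VF1 and WF here are synthetic stand-ins, and the layer sizes, learning rate, and iteration count are assumptions. A real DNN would be trained with a deep-learning framework:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two corresponding features (same format,
# e.g. LPS frames): 200 frames of 16 feature bins each.
VF1 = rng.normal(size=(200, 16))                    # vibration corresponding feature
WF = np.tanh(VF1 @ rng.normal(size=(16, 16)))       # voice-frequency corresponding feature

# One hidden layer; the trained weights play the role of the reference data REF.
W1 = rng.normal(scale=0.1, size=(16, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 16)); b2 = np.zeros(16)

mse0 = float(np.mean((np.tanh(VF1 @ W1 + b1) @ W2 + b2 - WF) ** 2))

lr = 0.05
for _ in range(500):
    h = np.tanh(VF1 @ W1 + b1)                      # hidden activation
    err = (h @ W2 + b2) - WF                        # prediction error
    # backpropagate the mean-squared error
    gW2 = h.T @ err / len(VF1); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)                # tanh derivative
    gW1 = VF1.T @ dh / len(VF1); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

REF = (W1, b1, W2, b2)  # learned weighting relation between the two features
mse = float(np.mean((np.tanh(VF1 @ W1 + b1) @ W2 + b2 - WF) ** 2))
```

After training, applying REF to a new vibration corresponding feature yields an estimate of the matching voice-frequency corresponding feature, which is the role the reference data plays in the second sensing period.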

(14) The method for converting vibration to voice frequency wirelessly as described above uses the computing device to execute the artificial-intelligence application program. By using the artificial intelligence algorithm, the corresponding weighting relation between the voice-frequency corresponding feature and the first vibration corresponding feature can be learned. The weighting relation can be used as the reference for the artificial intelligence algorithm to convert the vibration variation data to voice-frequency output data. In the method for converting vibration to voice frequency wirelessly according to the following embodiment, the received vibration variation data is converted to the corresponding voice-frequency output signal by using the artificial intelligence algorithm with reference to the learned voice-frequency reference data. The details will be described as follows.

(15) Please refer to FIG. 3, which shows a flowchart according to another embodiment of the present application. As shown in the figure, the method for converting vibration to voice frequency wirelessly according to the present application comprises steps of: Step S40: Sensing the throat part in a second sensing period using the vibration sensor to generate a second vibration variation data; Step S42: Transmitting the second vibration variation data to the computing device through a wireless interface; Step S45: The computing device executing the voice frequency and vibration conversion program and converting the second vibration variation data to the corresponding feature; and Step S50: The computing device executing the application program for converting the second vibration variation data to a voice-frequency output signal with a reference sound-field feature according to the voice-frequency reference data prestored in a storage unit.

(16) In the step S40, as shown in FIG. 4A, the vibration sensor 12 of the sound collecting device 10 senses the vibration V2 from the throat part T in the second sensing period Pd2 and generates a second vibration variation data S.sub.V2. In the step S42, as shown in FIG. 4A, the second vibration variation data S.sub.V2 is transmitted to the computing device 20 via the wireless transmission interface formed by the first wireless transmission unit 16 and the second wireless transmission unit 26. Furthermore, the processing unit 22 stores the second vibration variation data S.sub.V2 received by the computing device 20 in the storage unit 24.

(17) In the step S45, as shown in FIG. 4B, the processing unit 22 loads and executes the application program P stored in the storage unit 24. In addition, the processing unit 22 reads the second vibration variation data S.sub.V2 for calculation in the application program P. The Fourier transform module ST executed by the processing unit 22 transforms the second vibration variation data S.sub.V2 to a corresponding feature, namely, a second vibration corresponding feature VF2. According to the present embodiment, the second vibration corresponding feature VF2 is the log power spectrum (LPS). Besides, the second vibration corresponding feature VF2 can further be the signal processing result for the Mel-frequency cepstrum (MFC) or the linear predictive coding (LPC) spectrum. Next, in the step S50, as shown in FIG. 4B, the processing unit 22 converts the second vibration corresponding feature VF2 to a voice-frequency mapping signal WI according to the artificial intelligence algorithm AI and the voice-frequency reference data REF prestored in the corresponding memory RAM of the processing unit 22. By using an inverse Fourier transform module IFT, the voice-frequency mapping signal WI can be converted to a voice-frequency output signal WO in an outputable format for subsequent outputting to an output device 30 such as a loudspeaker or an earphone. As shown in FIG. 4C, the voice-frequency output signal WO in an outputable format is output to the output device 30 by the computing device 20, thus outputting the output signal OUT close to a human voice.
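
The inverse step of paragraph (17) can be sketched as follows. The log power spectrum discards phase, so reconstructing a waveform requires a phase estimate; in this round-trip illustration the phases from the analysis frames are simply reused, and the frame length and hop are assumed values:

```python
import numpy as np

FRAME, HOP = 256, 128  # assumed framing parameters (50% overlap)

def lps_to_waveform(lps, phases):
    """Invert log-power-spectrum frames to a waveform by inverse FFT and
    overlap-add. The LPS carries no phase, so phases must be supplied."""
    mag = np.sqrt(np.exp(lps))                       # undo log power -> magnitude
    out = np.zeros(HOP * (len(lps) - 1) + FRAME)
    for i, (m, p) in enumerate(zip(mag, phases)):
        out[i * HOP:i * HOP + FRAME] += np.fft.irfft(m * np.exp(1j * p), n=FRAME)
    return out

# Round trip: analyze a test tone, then invert using the analysis phases.
sig = np.sin(2 * np.pi * 440 * np.arange(2048) / 8000.0)
win = np.hanning(FRAME)
frames = [sig[s:s + FRAME] * win for s in range(0, len(sig) - FRAME + 1, HOP)]
spectra = np.fft.rfft(frames, axis=-1)
lps = np.log(np.abs(spectra) ** 2 + 1e-12)
wav = lps_to_waveform(lps, np.angle(spectra))
```

In the application's setting the LPS frames fed to this inverse step would be the voice-frequency mapping signal WI produced by the artificial intelligence algorithm, and the phase would have to come from an estimate rather than from the original voice signal.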

(18) Accordingly, the voice-frequency output signal WO according to the present application corresponds to the voice frequency variation data S.sub.W extracted in the step S10. In other words, the computing device 20 according to the present application calculates the voice-frequency reference data according to the first vibration variation data S.sub.V1 and the voice frequency variation data S.sub.W acquired in the step S10. The voice-frequency reference data is then referred to by the computing device 20 for converting the second vibration variation data S.sub.V2 acquired subsequently to the voice-frequency output signal WO, which is an output signal OUT close to the human voice. Thereby, for the applications of converting the vibration signals from the throat part to audio signals, the present application can provide less-distorted audio signals.

(19) To sum up, the present application provides a system for converting vibration to voice frequency wirelessly and a method thereof. The computing device according to the present application calculates the first vibration variation data and the voice frequency variation data sensed by the sound collecting device in the first sensing period and produces the corresponding voice-frequency reference data, which is used for training the computing device. Next, the second vibration variation data sensed in the second sensing period can be converted to the voice-frequency output signal corresponding to the voice frequency variation data. Thereby, an output signal close to the human voice can be provided.