Linear prediction analysis device, method, program, and storage medium
An autocorrelation calculation unit 21 calculates an autocorrelation R_O(i) from an input signal. A prediction coefficient calculation unit 23 performs linear prediction analysis by using a modified autocorrelation R′_O(i) obtained by multiplying the autocorrelation R_O(i) by a coefficient w_O(i). Here, for at least some orders i, the coefficient w_O(i) corresponding to the order i is in a monotonically increasing relationship with a value that is negatively correlated with the fundamental frequency of the input signal of the current frame or a past frame.
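As a rough illustration of the scheme this abstract describes, the sketch below computes an autocorrelation, multiplies it by a lag window whose coefficients grow as the pitch period grows (the period in samples, fs/f0, is a value negatively correlated with the fundamental frequency), and then runs Levinson-Durbin on the modified autocorrelation. The Gaussian window shape, the alpha parameter, and all function names are assumptions made for illustration; the patent does not prescribe them.

    import numpy as np

    def levinson_durbin(r, order):
        # Solve for the prediction coefficients from the autocorrelation r.
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for m in range(1, order + 1):
            acc = r[m]
            for j in range(1, m):
                acc += a[j] * r[m - j]
            k = -acc / err
            new_a = a.copy()
            for j in range(1, m):
                new_a[j] = a[j] + k * a[m - j]
            new_a[m] = k
            a = new_a
            err *= (1.0 - k * k)
        return a, err

    def modified_lpc(x, fs, f0, order=16, alpha=0.5):
        # Plain autocorrelation R_O(i), i = 0..order, over one analysis frame.
        r = np.array([np.dot(x[:len(x) - i], x[i:]) for i in range(order + 1)])
        # The pitch period fs / f0 is negatively correlated with f0; this
        # Gaussian lag window (an assumed form) widens as the period grows,
        # so each w_O(i) increases monotonically with that value.
        period = fs / f0
        w = np.exp(-0.5 * (alpha * np.arange(order + 1) / period) ** 2)
        a, _ = levinson_durbin(r * w, order)
        return a  # a[0] = 1.0; a[1:] are the prediction coefficients

Under such a window, a low-pitched frame (long period) leaves the autocorrelation almost untouched, while a high-pitched frame damps the higher lags more strongly, which is one way to realize the monotonic relationship the abstract states.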
Voice modification detection using physical models of speech production
A computer may train a single-class machine learning model using normal speech recordings. The machine learning model, or any other model, may estimate the normal range of parameters of a physical speech production model based on the normal speech recordings. For example, the computer may use a source-filter model of speech production, in which voiced speech is represented by a pulse train, unvoiced speech by random noise, and a combination of the pulse train and the random noise is passed through an auto-regressive filter that emulates the human vocal tract. The computer leverages the fact that intentional modification of a human voice introduces errors into the source-filter model or any other physical model of speech production. The computer may identify anomalies in the physical model to generate a voice modification score for an audio signal. The voice modification score may indicate the degree of abnormality of human voice in the audio signal.
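A minimal sketch of the pipeline this abstract outlines, assuming per-frame LPC coefficients stand in for the parameters of the source-filter model and a one-class SVM for the single-class machine learning model; the file names, frame sizes, and scoring convention are illustrative, not from the patent.

    import numpy as np
    import librosa
    from sklearn.svm import OneClassSVM

    def frame_features(y, sr, order=12, frame_len=1024, hop=512):
        # One LPC coefficient vector per frame, as a crude stand-in for the
        # vocal-tract (auto-regressive filter) parameters.
        feats = []
        for start in range(0, len(y) - frame_len, hop):
            a = librosa.lpc(y[start:start + frame_len], order=order)
            feats.append(a[1:])  # drop the leading 1.0
        return np.array(feats)

    # Train on normal speech only, then score a suspect recording; both
    # file names are placeholders.
    y_norm, sr = librosa.load("normal_speech.wav", sr=16000)
    model = OneClassSVM(nu=0.05, kernel="rbf").fit(frame_features(y_norm, sr))

    y_test, _ = librosa.load("suspect_speech.wav", sr=16000)
    scores = model.decision_function(frame_features(y_test, sr))
    voice_modification_score = -scores.mean()  # higher = more abnormal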
System and method for dialog modeling
Disclosed herein are systems, computer-implemented methods, and computer-readable media for dialog modeling. The method includes receiving spoken dialogs annotated to indicate dialog acts and task/subtask information, parsing the spoken dialogs with a hierarchical, parse-based dialog model which operates incrementally from left to right and which only analyzes a preceding dialog context to generate parsed spoken dialogs, and constructing a functional task structure of the parsed spoken dialogs. The method can further either interpret user utterances with the functional task structure of the parsed spoken dialogs or plan system responses to user utterances with the functional task structure of the parsed spoken dialogs. The parse-based dialog model can be a shift-reduce model, a start-complete model, or a connection path model.
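The shift-reduce variant of the parse-based model might be sketched as follows; the Node structure, the action tuples, and the toy oracle are assumptions standing in for the trained model, which, as in the abstract, consults only the preceding dialog context (here, the stack) at each step.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        label: str  # dialog act or task/subtask label
        children: list = field(default_factory=list)

    def shift_reduce_parse(dialog_acts, oracle):
        # Incremental left-to-right parse: each step either shifts the next
        # annotated dialog act onto the stack or reduces the top k items
        # under a task/subtask label, yielding a functional task structure.
        stack, buffer = [], list(dialog_acts)
        while buffer or len(stack) > 1:
            action = oracle(stack, buffer)
            if action[0] == "shift":
                stack.append(Node(buffer.pop(0)))
            else:  # ("reduce", k, subtask_label)
                _, k, label = action
                stack[-k:] = [Node(label, stack[-k:])]
        return stack[0]

    # Toy oracle: shift everything, then reduce once under a single task;
    # a real oracle would be a classifier over the stack context.
    toy = lambda st, buf: ("shift",) if buf else ("reduce", len(st), "task")
    tree = shift_reduce_parse(["greet", "ask-date", "confirm"], toy)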
SELF-SUPERVISED PITCH ESTIMATION
Example embodiments relate to techniques for training artificial neural networks or other machine-learning encoders to accurately predict the pitch of input audio samples in a semitone or otherwise logarithmically scaled pitch space. An example method may include generating two training samples from a sample of audio data by applying two different pitch shifts to it. This can be done by converting the sample of audio data into the frequency domain and then shifting the transformed data. These known shifts are then compared to the pitches predicted by applying the two training samples to the encoder. The encoder is then updated based on the comparison, such that the accuracy of the relative pitch output by the encoder improves. One or more audio samples, labeled with absolute pitch values, can then be used to calibrate the relative pitch values generated by the trained encoder.
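One plausible training step matching this description, written with PyTorch: two random semitone shifts are applied to the same sample, and the encoder is penalized when the difference between its two pitch predictions deviates from the known difference between the shifts. The shift range, the loss form, and the shift_fn interface are assumptions.

    import torch

    def relative_pitch_loss(encoder, x, shift_fn):
        # shift_fn(x, s) is assumed to pitch-shift x by s semitones, e.g.
        # by scaling frequencies after a transform into the frequency domain.
        s1, s2 = torch.empty(2).uniform_(-12.0, 12.0)
        p1 = encoder(shift_fn(x, s1.item()))  # predicted pitch, semitone scale
        p2 = encoder(shift_fn(x, s2.item()))
        # Only relative pitch is supervised here; absolute pitch is calibrated
        # afterwards with a few labeled samples, as the abstract describes.
        return torch.abs((p1 - p2) - (s1 - s2))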
SIGNAL TRANSFORMATION BASED ON UNIQUE KEY-BASED NETWORK GUIDANCE AND CONDITIONING
A method comprises receiving input audio and target audio having a target audio characteristic. The method includes estimating key parameters that represent the target audio characteristic based on one or more of the target audio and the input audio. The method further comprises configuring a neural network, trained to be configured by the key parameters, with the key parameters to cause the neural network to perform a signal transformation of the input audio, to produce output audio having an output audio characteristic corresponding to and that matches the target audio characteristic.
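A minimal sketch of key-based conditioning, assuming FiLM-style scale-and-shift modulation (the abstract does not name a mechanism) and treating the input and target audio as fixed-dimension feature vectors for brevity.

    import torch
    import torch.nn as nn

    class KeyConditionedTransform(nn.Module):
        def __init__(self, dim=256, key_dim=64):
            super().__init__()
            # Estimates key parameters that represent the target audio
            # characteristic, then uses them to modulate the transform.
            self.key_net = nn.Sequential(nn.Linear(dim, key_dim), nn.ReLU(),
                                         nn.Linear(key_dim, 2 * dim))
            self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
            self.head = nn.Linear(dim, dim)

        def forward(self, input_audio, target_audio):
            scale, shift = self.key_net(target_audio).chunk(2, dim=-1)
            h = self.body(input_audio) * scale + shift  # key-guided conditioning
            return self.head(h)

    # e.g.: out = KeyConditionedTransform()(torch.randn(8, 256), torch.randn(8, 256))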
TECHNOLOGIES FOR AUTHENTICATING A SPEAKER USING VOICE BIOMETRICS
Technologies for authenticating a speaker in a voice authentication system using voice biometrics include a speech collection computing device and a speech authentication computing device. The speech collection computing device is configured to collect a speech signal from a speaker and transmit the speech signal to the speech authentication computing device. The speech authentication computing device is configured to compute a speech signal feature vector for the received speech signal, retrieve a speech signal classifier associated with the speaker, and feed the speech signal feature vector to the retrieved speech signal classifier. Additionally, the speech authentication computing device is configured to determine whether the speaker is an authorized speaker based on an output of the retrieved speech signal classifier. Additional embodiments are described herein.
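A compact sketch of the authentication path, assuming mean MFCCs as the speech signal feature vector and a per-speaker probabilistic SVM as the stored classifier; neither choice is specified by the abstract.

    import librosa
    from sklearn.svm import SVC

    def speech_feature_vector(y, sr):
        # Mean MFCC vector; the abstract does not name the features.
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1)

    # One classifier per enrolled speaker, keyed by claimed identity,
    # e.g. classifiers["alice"] = SVC(probability=True).fit(X, labels)
    # where label 1 marks the enrolled speaker's own recordings.
    classifiers = {}

    def authenticate(y, sr, claimed_speaker, threshold=0.8):
        clf = classifiers[claimed_speaker]  # retrieve the stored classifier
        feat = speech_feature_vector(y, sr).reshape(1, -1)
        return clf.predict_proba(feat)[0, 1] >= threshold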