Patent classifications
G10L15/06
SPEECH RECOGNITION APPARATUS, CONTROL METHOD, AND NON-TRANSITORY STORAGE MEDIUM
A speech recognition apparatus (2000) includes a first model (10) and a second model (20). The first model (10) is trained on data in which an audio frame is the input and the correct answer is compressed character string data acquired by encoding the character string data represented by that audio frame. The second model (20) is the learned decoder (44) of an autoencoder (40) composed of an encoder (42), which converts input character string data into compressed character string data, and the decoder (44), which converts the compressed character string data output by the encoder back into character string data. The speech recognition apparatus (2000) inputs an audio frame to the first model (10), inputs the compressed character string data output from the first model (10) to the second model (20), and thereby generates character string data corresponding to the audio frame.
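A minimal sketch of this two-model arrangement, with assumed shapes and module names (none taken from the patent): an autoencoder learns to compress character strings, its decoder is reused as the second model, and the first model maps an audio frame to the same compressed representation.

```python
import torch
import torch.nn as nn

VOCAB, SEQ_LEN, FRAME_DIM, CODE_DIM = 64, 8, 80, 32   # illustrative sizes only

class StringAutoencoder(nn.Module):
    """Encoder: character ids -> compressed code; decoder: code -> character logits."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Embedding(VOCAB, 16), nn.Flatten(),          # (B, SEQ_LEN*16)
            nn.Linear(SEQ_LEN * 16, CODE_DIM))
        self.decoder = nn.Linear(CODE_DIM, SEQ_LEN * VOCAB)

    def forward(self, char_ids):                            # char_ids: (B, SEQ_LEN)
        code = self.encoder(char_ids)                       # compressed character string data
        logits = self.decoder(code).view(-1, SEQ_LEN, VOCAB)
        return logits, code

# "First model": audio frame -> the same compressed representation.
first_model = nn.Sequential(nn.Linear(FRAME_DIM, 128), nn.ReLU(), nn.Linear(128, CODE_DIM))

def recognize(audio_frame, trained_autoencoder):
    code = first_model(audio_frame)                         # (B, CODE_DIM)
    logits = trained_autoencoder.decoder(code).view(-1, SEQ_LEN, VOCAB)
    return logits.argmax(-1)                                # character ids

char_ids = recognize(torch.randn(1, FRAME_DIM), StringAutoencoder())   # untrained demo pass
```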
MULTIMODAL SPEECH RECOGNITION METHOD AND SYSTEM, AND COMPUTER-READABLE STORAGE MEDIUM
The disclosure provides a multimodal speech recognition method and system, and a computer-readable storage medium. The method includes: calculating a first logarithmic mel-frequency spectral coefficient and a second logarithmic mel-frequency spectral coefficient when a target millimeter-wave signal and a target audio signal both contain speech information corresponding to a target user; inputting the first and second logarithmic mel-frequency spectral coefficients into a fusion network to determine a target fusion feature, where the fusion network includes at least a calibration module and a mapping module, the calibration module is configured to perform mutual feature calibration between the target audio signal and the target millimeter-wave signal, and the mapping module is configured to fuse the calibrated millimeter-wave feature and the calibrated audio feature; and inputting the target fusion feature into a semantic feature network to determine a speech recognition result corresponding to the target user. The disclosure can implement high-accuracy speech recognition.
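A hedged sketch of the fusion step only: two log-mel style feature streams are mutually calibrated (here with simple cross-gating, an assumption on our part) and then mapped to a single fused feature. Dimensions and layer choices are illustrative, not the patent's.

```python
import torch
import torch.nn as nn

class FusionNetwork(nn.Module):
    def __init__(self, dim=40):
        super().__init__()
        self.calib_audio = nn.Linear(dim, dim)    # calibration: mmWave features gate the audio
        self.calib_mmwave = nn.Linear(dim, dim)   # calibration: audio features gate the mmWave
        self.mapping = nn.Linear(2 * dim, dim)    # mapping module: fuse the calibrated features

    def forward(self, audio_logmel, mmwave_logmel):
        audio_cal = audio_logmel * torch.sigmoid(self.calib_audio(mmwave_logmel))
        mmwave_cal = mmwave_logmel * torch.sigmoid(self.calib_mmwave(audio_logmel))
        return self.mapping(torch.cat([audio_cal, mmwave_cal], dim=-1))  # target fusion feature

fusion = FusionNetwork()
fused = fusion(torch.randn(1, 100, 40), torch.randn(1, 100, 40))   # (batch, frames, mel bins)
```

The fused feature would then be passed to a semantic feature network to produce the recognition result.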
SPEECH RECOGNITION IN A VEHICLE
An audio sample including speech and ambient sounds is transmitted to a vehicle computer. Recorded audio, including the audio sample as broadcast and recorded by the vehicle computer, is received from the vehicle computer, along with speech recognized from the recorded audio. The recognized speech and the text of the speech are input to a machine learning program that outputs whether the recognized speech matches the text. When the output from the machine learning program indicates that the recognized speech does not match the text, the recognized speech and the text are included in a training dataset for the machine learning program.
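A rough sketch of the feedback loop above; the function names and the simple string comparison stand in for the machine learning program and the vehicle-computer interface, and none of them come from the patent itself.

```python
training_dataset = []   # (recognized speech, text) pairs the program judged as mismatches

def matches_text(recognized: str, text: str) -> bool:
    # Placeholder for the machine learning program that outputs match / no-match;
    # a real system would use a learned classifier over the pair.
    return recognized.strip().lower() == text.strip().lower()

def check_vehicle_recognition(recognized_speech: str, reference_text: str) -> None:
    # recognized_speech: what the vehicle computer recognized from the audio sample
    # it broadcast and recorded; reference_text: the text of the spoken sample.
    if not matches_text(recognized_speech, reference_text):
        training_dataset.append((recognized_speech, reference_text))   # grow the training set

check_vehicle_recognition("turn on the radio", "turn on the radio please")
```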
Synthetic speech processing
A speech-processing system receives input data representing text. A first encoder processes segments of the text to determine embedding data representing the text, and a second encoder processes corresponding audio data to determine prosodic data corresponding to the text. The embedding data and the prosodic data are processed to create output data including a representation of speech corresponding to the text and prosody.
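An illustrative-only sketch of the two-encoder arrangement: one encoder embeds text segments, a second encodes reference audio into prosodic features, and a decoder consumes both. Module choices and sizes are assumptions, not the patent's architecture.

```python
import torch
import torch.nn as nn

class SyntheticSpeechModel(nn.Module):
    def __init__(self, vocab=100, text_dim=64, prosody_dim=16, mel_dim=80):
        super().__init__()
        self.text_encoder = nn.Sequential(nn.Embedding(vocab, text_dim),
                                          nn.GRU(text_dim, text_dim, batch_first=True))
        self.prosody_encoder = nn.GRU(mel_dim, prosody_dim, batch_first=True)
        self.decoder = nn.Linear(text_dim + prosody_dim, mel_dim)

    def forward(self, token_ids, reference_mels):
        text_emb, _ = self.text_encoder(token_ids)          # embedding data per text segment
        _, prosody = self.prosody_encoder(reference_mels)   # prosodic data (final hidden state)
        prosody = prosody[-1].unsqueeze(1).expand(-1, text_emb.size(1), -1)
        return self.decoder(torch.cat([text_emb, prosody], dim=-1))   # frames of output speech

model = SyntheticSpeechModel()
frames = model(torch.randint(0, 100, (2, 12)), torch.randn(2, 30, 80))   # (2, 12, 80)
```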
Tracking specialized concepts, topics, and activities in conversations
Embodiments are directed to organizing conversation information. A tracker vocabulary may be provided to a universal model to predict a generalized vocabulary associated with the tracker vocabulary. A tracker model may be generated based on the portions of the universal model activated by the tracker vocabulary such that a remainder of the universal model may be excluded from the tracker model. Portions of a conversation stream may be provided to the tracker model. A match score may be generated based on the track model and the portions of the conversation stream such that the match score predicts if the portions of the conversation stream may be in the generalized vocabulary predicted for the tracker vocabulary. Tracker metrics may be collected based on the portions of the conversation and the match scores such that the tracker metrics may be included in reports or notifications.
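A loose sketch, under assumed data structures, of how a tracker model could score conversation segments against a generalized vocabulary. The toy hash-based embedding is a placeholder for the activated portion of the universal model.

```python
import numpy as np

def embed(phrase: str, dim: int = 16) -> np.ndarray:
    # Placeholder embedding; a real system would reuse the universal model's activations here.
    rng = np.random.default_rng(abs(hash(phrase)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def match_score(segment: str, generalized_vocabulary: list[str]) -> float:
    # Highest similarity between the conversation segment and any generalized-vocabulary term.
    seg = embed(segment)
    return max(float(seg @ embed(term)) for term in generalized_vocabulary)

# Tracker metrics: (segment, score) pairs that could feed reports or notifications.
tracker_metrics = [(seg, match_score(seg, ["pricing", "renewal", "discount"]))
                   for seg in ["what does the plan cost", "see you next week"]]
```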
Method for training speech recognition model, method and system for speech recognition
Disclosed are a method for training a speech recognition model, and a method and a system for speech recognition. The disclosure relates to the field of speech recognition and includes: inputting an audio training sample into the acoustic encoder to encode acoustic features of the audio training sample and determine an acoustic encoded state vector; inputting a preset vocabulary into the language predictor to determine a text prediction vector; inputting the text prediction vector into the text mapping layer to obtain a text output probability distribution; calculating a first loss function according to a target text sequence corresponding to the audio training sample and the text output probability distribution; inputting the text prediction vector and the acoustic encoded state vector into the joint network to calculate a second loss function; and performing iterative optimization according to the first loss function and the second loss function.
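A transducer-style sketch of the components named above (acoustic encoder, language predictor, text mapping layer, joint network); layer types, shapes, and the choice of losses are assumptions made for illustration, not the patent's exact formulation.

```python
import torch
import torch.nn as nn

class TransducerLikeModel(nn.Module):
    def __init__(self, feat_dim=80, vocab=32, hidden=64):
        super().__init__()
        self.acoustic_encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.language_predictor = nn.Embedding(vocab, hidden)
        self.text_mapping = nn.Linear(hidden, vocab)    # -> text output probability distribution
        self.joint = nn.Linear(hidden, vocab)           # joint network (additive combination)

    def forward(self, feats, tokens):
        acoustic, _ = self.acoustic_encoder(feats)       # acoustic encoded state vectors (B, T, H)
        text_pred = self.language_predictor(tokens)      # text prediction vectors (B, U, H)
        text_logits = self.text_mapping(text_pred)       # first branch
        joint_logits = self.joint(acoustic.unsqueeze(2) + text_pred.unsqueeze(1))  # (B, T, U, V)
        return text_logits, joint_logits

model = TransducerLikeModel()
text_logits, joint_logits = model(torch.randn(2, 50, 80), torch.randint(0, 32, (2, 10)))
# First loss: cross-entropy between text_logits and the target text sequence.
# Second loss: a transducer-type loss over joint_logits (e.g. torchaudio.functional.rnnt_loss).
# Training then iteratively optimizes a combination of the two losses.
```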
Systems and methods for response selection in multi-party conversations with dynamic topic tracking
Embodiments described herein provide a dynamic topic tracking mechanism that tracks how conversation topics change from one utterance to another and uses the tracking information to rank candidate responses. A pre-trained language model may be used for response selection in multi-party conversations, following two steps: (1) topic-based pre-training that embeds topic information into the language model with self-supervised learning, and (2) multi-task learning on the pre-trained model by jointly training the response selection and dynamic topic prediction and disentanglement tasks.
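A hedged sketch of step (2), the multi-task objective: a shared encoder is fine-tuned jointly on response selection and a topic prediction task. The tiny encoder and heads are stand-ins, not the actual pre-trained language model.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Embedding(1000, 64), nn.Flatten(), nn.LazyLinear(64))
selection_head = nn.Linear(64, 1)     # scores a (context, candidate response) pair
topic_head = nn.Linear(64, 8)         # predicts the utterance's topic for tracking

def multitask_loss(pair_ids, selection_label, topic_label):
    h = encoder(pair_ids)                                   # shared representation of the pair
    sel_loss = nn.functional.binary_cross_entropy_with_logits(
        selection_head(h).squeeze(-1), selection_label)     # response selection task
    topic_loss = nn.functional.cross_entropy(topic_head(h), topic_label)   # topic prediction task
    return sel_loss + topic_loss                            # joint training objective

loss = multitask_loss(torch.randint(0, 1000, (4, 20)),
                      torch.ones(4), torch.randint(0, 8, (4,)))
```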
Emitting word timings with end-to-end models
A method includes receiving a training example that includes audio data representing a spoken utterance and a ground truth transcription. For each word in the spoken utterance, the method also includes inserting a placeholder symbol before the respective word identifying a respective ground truth alignment for a beginning and an end of the respective word, determining a beginning word piece and an ending word piece, and generating a first constrained alignment for the beginning word piece and a second constrained alignment for the ending word piece. The first constrained alignment is aligned with the ground truth alignment for the beginning of the respective word and the second constrained alignment is aligned with the ground truth alignment for the ending of the respective word. The method also includes constraining an attention head of a second pass decoder by applying the first and second constrained alignments.
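An illustrative data-preparation sketch under assumed inputs: ground-truth word alignments plus a word-piece tokenizer (here a trivial stand-in) yield a placeholder-augmented token sequence and, per word, constrained alignments for its beginning and ending word pieces.

```python
PLACEHOLDER = "<w>"   # placeholder symbol inserted before each word

def word_pieces(word):
    # Stand-in tokenizer: split each word into two halves as "word pieces".
    mid = max(1, len(word) // 2)
    return [word[:mid], word[mid:]] if word[mid:] else [word[:mid]]

def build_constrained_alignments(words_with_times):
    tokens, constraints = [], []
    for word, begin_time, end_time in words_with_times:   # ground-truth word alignments
        tokens.append(PLACEHOLDER)
        pieces = word_pieces(word)
        tokens.extend(pieces)
        constraints.append({"begin_piece": pieces[0], "begin_align": begin_time,
                            "end_piece": pieces[-1], "end_align": end_time})
    return tokens, constraints

tokens, constraints = build_constrained_alignments([("hello", 0.12, 0.45), ("world", 0.50, 0.90)])
# `constraints` would then be used to restrict an attention head of the second-pass decoder.
```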
User-specific acoustic models
Systems and processes for providing user-specific acoustic models are provided. In accordance with one example, a method includes, at an electronic device having one or more processors, receiving a plurality of speech inputs, each of the speech inputs associated with a same user of the electronic device; providing each of the plurality of speech inputs to a user-independent acoustic model, the user-independent acoustic model providing a plurality of speech results based on the plurality of speech inputs; initiating a user-specific acoustic model on the electronic device; and adjusting the user-specific acoustic model based on the plurality of speech inputs and the plurality of speech results.
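A rough sketch of the adaptation flow with stand-in models: the user-independent model produces speech results for each user speech input, and the user-specific model on the device is adjusted toward those results. The linear models and MSE objective are assumptions for illustration only.

```python
import torch
import torch.nn as nn

user_independent = nn.Linear(40, 40)    # stand-in for the user-independent acoustic model
user_specific = nn.Linear(40, 40)       # user-specific model initiated on the device
optimizer = torch.optim.SGD(user_specific.parameters(), lr=0.01)

speech_inputs = [torch.randn(1, 40) for _ in range(8)]              # one feature vector per input
speech_results = [user_independent(x).detach() for x in speech_inputs]

for x, target in zip(speech_inputs, speech_results):                # adjust the user-specific model
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(user_specific(x), target)
    loss.backward()
    optimizer.step()
```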