Patent classifications
G10L15/063
Natural language processing
Example embodiments provide techniques for configuring a natural-language processing system to perform a new function given at least one sample invocation of the function. The training data consisting of the sample invocation may be augmented by determining which subset of available training data most closely resembles the sample invocation and/or function. The effect of re-training a component with this augmented training data may be determined, and an annotator may review any annotations corresponding to the invocation if the effect is large.
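The data-selection step described above might be sketched as follows — a minimal illustration (function and variable names are hypothetical, not from the patent) that ranks a pool of available training utterances by token-set similarity to the sample invocation:

```python
# Illustrative sketch: select the training utterances most similar to a
# sample invocation, here using Jaccard overlap of token sets.

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two utterances."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def augment(sample: str, pool: list[str], k: int = 2) -> list[str]:
    """Return the k pool utterances that most closely resemble the sample."""
    return sorted(pool, key=lambda u: jaccard(sample, u), reverse=True)[:k]

pool = [
    "play some jazz music",
    "turn off the kitchen lights",
    "play relaxing music for sleeping",
]
print(augment("play music", pool))
# The selected subset would then be used to re-train the component, with
# human review triggered when the measured effect of re-training is large.
```

A production system would likely use learned embeddings rather than token overlap, but the selection logic is the same.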
Model training system for custom speech-to-text models
- Vivek Govindan ,
- Varun Sembium Varadarajan ,
- Christian Egon Berkhoff Dossow ,
- Himalay Mohanlal Joriwal ,
- Sai Madhuri Bhavirisetty ,
- Abhinav Kumar ,
- Orestis Lykouropoulos ,
- Akshay Nalwaya ,
- Rahul Gupta ,
- Sravan Babu Bodapati ,
- Liangwei Guo ,
- Julian E. S. Salazar ,
- Yibin Wang ,
- K P N V D S Siva Rama ,
- Calvin Xuan Li ,
- Mohit Narendra Gupta ,
- Asem Rustum ,
- Katrin Kirchhoff ,
- Pu Zhao
A transcription service may receive a request from a developer to build a custom speech-to-text model for a specific domain of speech. The custom speech-to-text model for the specific domain may replace a general speech-to-text model or add to a set of one or more speech-to-text models available for transcribing speech. The transcription service may receive a training data set and instructions representing tasks. The transcription service may determine respective schedules for executing the instructions based at least in part on dependencies between the tasks. The transcription service may execute the instructions according to the respective schedules to train a speech-to-text model for a specific domain using the training data set. The transcription service may deploy the trained speech-to-text model as part of a network-accessible service for an end user to convert audio in the specific domain into text.
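The dependency-driven scheduling step might be sketched as a topological sort over the task graph — an illustrative example (task names are hypothetical, not from the patent) using Python's standard-library `graphlib`:

```python
from graphlib import TopologicalSorter

# Illustrative task graph: each task maps to the set of tasks it
# depends on. A valid execution schedule is any topological order.
deps = {
    "validate_data": set(),
    "preprocess":    {"validate_data"},
    "train_model":   {"preprocess"},
    "evaluate":      {"train_model"},
    "deploy":        {"evaluate"},
}
schedule = list(TopologicalSorter(deps).static_order())
print(schedule)
```

Tasks with no mutual dependencies could instead be dispatched in parallel via `TopologicalSorter.get_ready()`.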
Method and apparatus for predicting customer satisfaction from a conversation
A method and an apparatus for predicting satisfaction of a customer pursuant to a call between the customer and an agent, in which the method comprises receiving a transcribed text of the call, dividing the transcribed text into a plurality of phases of a conversation, extracting at least one call feature for each of the plurality of phases, receiving call metadata, extracting metadata features from the call metadata, combining the call features and the metadata features, and generating an output, using a trained machine learning (ML) model, based on the combined features, indicating whether the customer is satisfied or not. The ML model is trained to generate an output indicating whether the customer is satisfied or not, based on an input of the combined features.
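The feature-combination step might look like the following minimal sketch, where per-phase call features and metadata features are concatenated and scored by a classifier (all feature definitions, weights, and names here are illustrative stand-ins, not the patent's trained model):

```python
import math

def extract_phase_features(phases: list[str]) -> list[float]:
    # Illustrative per-phase feature: scaled word count of each phase.
    return [len(p.split()) / 10.0 for p in phases]

def predict_satisfaction(phases: list[str], metadata: dict) -> bool:
    # Combine call features with metadata features into one vector.
    features = extract_phase_features(phases) + [
        metadata["hold_time_min"] / 10.0,   # metadata feature
        metadata["transfers"],              # metadata feature
    ]
    weights = [0.8, 0.5, 0.3, -1.2, -0.9]   # stand-in for learned weights
    score = 1 / (1 + math.exp(-sum(w * f for w, f in zip(weights, features))))
    return score >= 0.5  # True -> satisfied

result = predict_satisfaction(
    ["hello thanks for calling", "my order arrived broken", "glad we fixed it"],
    {"hold_time_min": 1.0, "transfers": 0},
)
print(result)
```

The fixed-weight logistic function stands in for the trained ML model the abstract describes; in practice the weights would be learned from labeled calls.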
Systems and methods for generating labeled data to facilitate configuration of network microphone devices
Systems and methods for generating training data are described herein. Pieces of metadata captured by a plurality of networked sensor systems can be captured, where each piece of metadata is associated with a specific set of sensor data captured by one of the plurality of networked sensor systems and includes a set of characteristics for the specific set of captured sensor data. A probabilistic model can be generated based on the received metadata and simulations can be performed based upon a training corpus by generating multiple scenarios, and, for each scenario, a scenario specific version of a particular annotated sample is generated by performing a simulation using the particular annotated sample. The scenario specific versions of annotated samples from the training corpus can be stored as a training data set on the at least one network device.
Attention-based joint acoustic and text on-device end-to-end model
A method includes receiving a training example for a listen-attend-spell (LAS) decoder of a two-pass streaming neural network model and determining whether the training example corresponds to a supervised audio-text pair or an unpaired text sequence. When the training example corresponds to an unpaired text sequence, the method also includes determining a cross entropy loss based on a log probability associated with a context vector of the training example. The method also includes updating the LAS decoder and the context vector based on the determined cross entropy loss.
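For the unpaired-text branch, the cross entropy computation reduces to summing negative log probabilities of the target tokens. A minimal sketch (the probabilities are illustrative; in the actual model they would come from the LAS decoder conditioned on the context vector):

```python
import math

def cross_entropy(target_probs: list[float]) -> float:
    """Sum of -log p over the target token probabilities."""
    return -sum(math.log(p) for p in target_probs)

# Illustrative per-token probabilities for an unpaired text sequence.
loss = cross_entropy([0.9, 0.8, 0.95])
print(round(loss, 4))
```

The decoder and the context vector would then be updated by backpropagating this loss.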
MULTIPLE PITCH EXTRACTION BY STRENGTH CALCULATION FROM EXTREMA
An apparatus includes a function module, a strength module, and a filter module. The function module compares an input signal, which has a component, to a first delayed version of the input signal and a second delayed version of the input signal to produce a multi-dimensional model. The strength module calculates a strength of each extremum from a plurality of extrema of the multi-dimensional model based on a value of at least one opposite extremum of the multi-dimensional model. The strength module then identifies a first extremum from the plurality of extrema, which is associated with a pitch of the component of the input signal, that has the strength greater than the strength of the remaining extrema. The filter module extracts the pitch of the component from the input signal based on the strength of the first extremum.
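The comparison of a signal against delayed versions of itself is, in its simplest form, an autocorrelation; a minimal sketch of picking the extremum associated with the pitch (the strength weighting by opposite extrema is simplified to a plain maximum here, and all names are illustrative):

```python
import math

def autocorr_pitch(signal: list[float], rate: int, min_lag: int = 20) -> float:
    """Estimate pitch by finding the strongest autocorrelation peak.

    min_lag skips trivially small lags (i.e. it caps the highest
    pitch that can be detected at rate / min_lag Hz).
    """
    best_lag, best_val = min_lag, float("-inf")
    for lag in range(min_lag, len(signal) // 2):
        # Compare the signal against a delayed version of itself.
        val = sum(signal[i] * signal[i + lag] for i in range(len(signal) - lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return rate / best_lag  # pitch in Hz

rate = 8000
tone = [math.sin(2 * math.pi * 200 * n / rate) for n in range(400)]
print(round(autocorr_pitch(tone, rate)))
```

The patent's apparatus goes further — producing a multi-dimensional model from two delay comparisons and weighting each extremum's strength by opposite extrema, which lets it separate multiple simultaneous pitches — but the peak-picking idea is the same.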
OBFUSCATING TRAINING DATA
Examples disclosed herein involve obfuscating training data. An example method includes computing a sequence of acoustic features from audio data of training data, the training data comprising the audio data and a corresponding text transcript; mapping the acoustic features to acoustic model states to generate annotated feature vectors, the annotated feature vectors comprising the acoustic features and corresponding context from the text transcript; and providing a randomized sequence of the annotated feature vectors as obfuscated training data to the audio analysis system.
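The core of the method — pairing features with transcript context and shuffling the sequence — might be sketched as follows (names and data are illustrative):

```python
import random

def obfuscate(features: list, contexts: list, seed: int = 0) -> list:
    """Pair each acoustic feature vector with its transcript context,
    then emit the pairs in randomized order so the original utterance
    ordering cannot be recovered from the sequence alone."""
    annotated = list(zip(features, contexts))  # annotated feature vectors
    random.Random(seed).shuffle(annotated)     # randomized sequence
    return annotated

feats = [[0.1], [0.2], [0.3], [0.4]]
ctxs = ["h", "e", "l", "o"]
shuffled = obfuscate(feats, ctxs)
print(shuffled)
```

Each shuffled pair still carries enough local context for acoustic model training, while the utterance-level ordering is destroyed.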
Contextual biasing of neural language models using metadata from a natural language understanding component and embedded recent history
Techniques for implementing a chatbot that utilizes context embeddings are described. An exemplary method includes determining a next turn by: applying a language model to the utterance to determine a probability of a sequence of words, generating a context embedding for the utterance based at least on one or more of: a dialog act as defined by a chatbot definition of the chatbot, a topic vector identifying a domain of the chatbot, a previous chatbot response, and one or more slot options; performing neural language model rescoring using the determined probability of a sequence of words as a word embedding and the generated context embedding to predict a hypothesis; determining at least a name of a slot and type to be fulfilled based at least in part on the hypothesis and the chatbot definition; and determining a next turn based at least in part on the chatbot definition, any previous state, and the name of the slot and type to be fulfilled.
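The rescoring step can be illustrated with a simple log-linear interpolation of first-pass scores and a context-aware language model score — a sketch only, with toy scores and hypothetical names (the patent's rescorer is a neural model conditioned on the context embedding):

```python
def rescore(hypotheses, lm_score, weight: float = 0.4):
    """Pick the best hypothesis by interpolating the recognizer's
    log-probability with a language model score.

    hypotheses: list of (text, first_pass_logprob) pairs.
    """
    return max(hypotheses,
               key=lambda h: (1 - weight) * h[1] + weight * lm_score(h[0]))

# Stand-in LM that prefers hypotheses matching the chatbot's domain.
def toy_lm(text: str) -> float:
    return 0.0 if "flight" in text else -5.0

hyps = [("book a fight to boston", -1.0), ("book a flight to boston", -1.5)]
best, _ = rescore(hyps, toy_lm)
print(best)
```

Note how the context-biased LM overrules the acoustically preferred but out-of-domain first-pass hypothesis.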
MINIMUM WORD ERROR RATE TRAINING FOR ATTENTION-BASED SEQUENCE-TO-SEQUENCE MODELS
Methods, systems, and apparatus, including computer programs encoded on computer-readable storage media, for speech recognition using attention-based sequence-to-sequence models. In some implementations, audio data indicating acoustic characteristics of an utterance is received. A sequence of feature vectors indicative of the acoustic characteristics of the utterance is generated. The sequence of feature vectors is processed using a speech recognition model that has been trained using a loss function that uses a set of speech recognition hypothesis samples, the speech recognition model including an encoder, an attention module, and a decoder. The encoder and decoder each include one or more recurrent neural network layers. A sequence of output vectors representing distributions over a predetermined set of linguistic units is obtained. A transcription for the utterance is obtained based on the sequence of output vectors. Data indicating the transcription of the utterance is provided.
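The loss function over a set of hypothesis samples is, in minimum word error rate (MWER) training, the expected number of word errors under the model's (renormalized) hypothesis probabilities. A minimal sketch with illustrative data:

```python
def edit_distance(a: list[str], b: list[str]) -> int:
    """Word-level Levenshtein distance via dynamic programming."""
    d = [[i + j if 0 in (i, j) else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a)][len(b)]

def mwer_loss(samples, reference):
    """Expected word errors over sampled hypotheses.

    samples: list of (hypothesis_words, model_probability) pairs.
    """
    z = sum(p for _, p in samples)  # renormalize over the sample set
    return sum(p / z * edit_distance(h, reference) for h, p in samples)

ref = "the cat sat".split()
samples = [("the cat sat".split(), 0.6), ("a cat sat".split(), 0.4)]
print(mwer_loss(samples, ref))
```

Training minimizes this expectation, so probability mass shifts away from hypotheses with more word errors.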
EXTERNAL LANGUAGE MODEL INFORMATION INTEGRATED INTO NEURAL TRANSDUCER MODEL
A computer-implemented method for training a neural transducer is provided, including: using audio data and transcription data of the audio data as input data, obtaining outputs from a trained language model and a seed neural transducer, respectively; combining the outputs to obtain a supervisory output; and updating parameters of another neural transducer in training so that its output is close to the supervisory output. The neural transducer can be a Recurrent Neural Network Transducer (RNN-T).
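The combination step might be sketched as a weighted interpolation of the two output distributions to form the supervisory target (weights and probabilities are illustrative; the student-update step toward this target is omitted):

```python
def combine(lm_probs: list[float], seed_probs: list[float],
            weight: float = 0.5) -> list[float]:
    """Interpolate the language model's and seed transducer's output
    distributions, renormalizing to a valid probability distribution."""
    mixed = [weight * a + (1 - weight) * b
             for a, b in zip(lm_probs, seed_probs)]
    total = sum(mixed)
    return [m / total for m in mixed]

# Illustrative per-label probabilities from the two teacher models.
supervisory = combine([0.7, 0.2, 0.1], [0.5, 0.3, 0.2])
print(supervisory)
```

The transducer in training would then minimize a divergence (e.g. cross entropy) between its own output and this supervisory distribution.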