Patent classifications
G10L2015/081
Dynamic language model
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for speech recognition. One of the methods includes receiving a base language model for speech recognition including a first word sequence having a base probability value; receiving a voice search query associated with a query context; determining that a customized language model is to be used when the query context satisfies one or more criteria associated with the customized language model; obtaining the customized language model, the customized language model including the first word sequence having an adjusted probability value being the base probability value adjusted according to the query context; and converting the voice search query to a text search query based on one or more probabilities, each of the probabilities corresponding to a word sequence in a group of one or more word sequences, the group including the first word sequence having the adjusted probability value.
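The abstract above describes swapping in a customized language model, with adjusted word-sequence probabilities, when the query context satisfies certain criteria. A minimal sketch of that flow, with toy probabilities and entirely illustrative names (`BASE_LM`, `should_customize`, `customize` are assumptions, not the patent's implementation):

```python
# Toy base language model: word sequences mapped to probabilities.
BASE_LM = {
    ("pizza", "near", "me"): 0.002,
    ("pizza", "hut"): 0.004,
}

def should_customize(context, criteria):
    """Use the customized LM only when the query context satisfies the
    criteria associated with that customized model."""
    return all(context.get(k) == v for k, v in criteria.items())

def customize(base_lm, context, boost=10.0):
    """Copy the base LM, adjusting the probability of word sequences that
    mention terms drawn from the query context (e.g. a local search)."""
    adjusted = dict(base_lm)
    terms = context.get("context_terms", set())
    for seq, p in base_lm.items():
        if any(w in terms for w in seq):
            adjusted[seq] = min(1.0, p * boost)
    return adjusted

context = {"query_type": "local_search", "context_terms": {"near", "me"}}
criteria = {"query_type": "local_search"}
lm = customize(BASE_LM, context) if should_customize(context, criteria) else BASE_LM
```

The converted text query would then be chosen using the adjusted probabilities rather than the base ones.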
System and method for combining geographic metadata in automatic speech recognition language and acoustic models
Disclosed herein are systems, methods, and computer-readable storage media for a speech recognition application for directory assistance that is based on a user's spoken search query. The spoken search query is received by a portable device, and the portable device then determines its present location. Upon determining the location of the portable device, that information is incorporated into a local language model that is used to process the search query. Finally, the portable device outputs the results of the search query based on the local language model.
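The core step here is selecting a location-specific language model from the device's position. A toy sketch under assumed names (`LOCAL_MODELS`, `region_for` are illustrative; a real system would reverse-geocode the coordinates):

```python
# Location-keyed local language models (toy contents).
LOCAL_MODELS = {
    "seattle": {("pike", "place", "market"): 0.01},
    "new_york": {("times", "square"): 0.01},
}

def region_for(lat, lon):
    # Toy region lookup standing in for reverse geocoding.
    return "seattle" if lon < -100 else "new_york"

def local_model_for(lat, lon):
    """Pick the local LM for the device's present location."""
    return LOCAL_MODELS.get(region_for(lat, lon), {})

model = local_model_for(47.6, -122.3)
```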
ADVERSARIAL LANGUAGE IMITATION WITH CONSTRAINED EXEMPLARS
Generally discussed herein are devices, systems, and methods for generating a phrase that is confusing to a language classifier (LC). A method can include determining, by the LC, a first classification score (CS) of a prompt indicating whether the prompt is of a first class or a second class; predicting, based on the prompt and by a pre-trained language model (PLM), likely next words and a corresponding probability for each of the likely next words; determining, by the LC, a second CS for each of the likely next words; determining, by an adversarial classifier, respective scores for each of the likely next words, the respective scores determined based on the first CS of the prompt, the second CS of the likely next words, and the probabilities of the likely next words; and selecting, by the adversarial classifier, a next word of the likely next words based on the respective scores.
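The selection loop can be sketched as follows. The scoring formula (fluency times classifier-score shift) and all names are assumptions for illustration, not the patent's actual combination rule; the LC and PLM are toy stand-ins:

```python
def lc_score(text):
    """Toy language classifier: score toward class 1, keyed on a word list."""
    bad = {"awful", "terrible"}
    return 0.9 if set(text.lower().split()) & bad else 0.1

def plm_next_words(prompt):
    """Toy pre-trained LM: likely next words with probabilities."""
    return {"awful": 0.2, "nice": 0.5, "terrible": 0.3}

def adversarial_select(prompt):
    """Score each candidate next word from the prompt's CS, the
    candidate's CS, and the PLM probability; return the best one."""
    first_cs = lc_score(prompt)
    scores = {}
    for word, p in plm_next_words(prompt).items():
        second_cs = lc_score(prompt + " " + word)
        # Favour words that are fluent (high p) yet shift the classifier.
        scores[word] = p * abs(second_cs - first_cs)
    return max(scores, key=scores.get)
```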
Scalable entities and patterns mining pipeline to improve automatic speech recognition
A computing system obtains features that have been extracted from an acoustic signal, where the acoustic signal comprises spoken words uttered by a user. The computing system performs automatic speech recognition (ASR) based upon the features and a language model (LM) generated based upon expanded pattern data. The expanded pattern data includes a name of an entity and a search term, where the entity belongs to a segment identified in a knowledge base. The search term has been included in queries for entities belonging to the segment. The computing system identifies a sequence of words corresponding to the features based upon results of the ASR. The computing system transmits computer-readable text to a search engine, where the text includes the sequence of words.
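The "expanded pattern data" above crosses entity names from a knowledge-base segment with search terms seen in queries for that segment. A minimal sketch, with assumed names and toy data:

```python
# Toy knowledge-base segment and observed query terms for that segment.
SEGMENT_ENTITIES = {"restaurants": ["joes diner", "thai palace"]}
SEGMENT_TERMS = {"restaurants": ["near me", "hours", "menu"]}

def expand_patterns(segment):
    """Cross each entity name in the segment with each search term
    observed in queries for that segment, yielding LM training phrases."""
    return [f"{entity} {term}"
            for entity in SEGMENT_ENTITIES[segment]
            for term in SEGMENT_TERMS[segment]]

phrases = expand_patterns("restaurants")
```

The resulting phrases would feed language-model training so the ASR favours plausible entity-plus-term queries.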
METHOD AND DEVICE FOR WAKING UP VIA SPEECH BASED ON ARTIFICIAL INTELLIGENCE
A method and a device for waking up via speech based on artificial intelligence are provided in the present disclosure. The method includes: clustering phones to select garbage phones that represent the phones; constructing an alternative wake-up word approximating a preset wake-up word according to the preset wake-up word; constructing a decoding network from the garbage phones, the alternative wake-up word, and the preset wake-up word; and waking up via speech by using the decoding network. Because the data size of the garbage phones is significantly smaller than that of garbage words, the prior-art problem of a garbage-word model occupying too much space is solved. Meanwhile, since a word is composed of several phones, the garbage phones are more likely than garbage words to cover all words. Thus, wake-up accuracy is improved and the probability of false wake-up is reduced.
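The wake-up decision in such a network amounts to comparing the wake-word path against the best filler path built from the small garbage-phone set. A toy sketch with assumed scores and names (`GARBAGE_PHONES`, `wake_up` are illustrative):

```python
# Cluster representatives standing in for all non-wake-word phones.
GARBAGE_PHONES = ["g1", "g2", "g3"]

def path_score(phone_scores, path):
    """Average per-phone acoustic score along a path (toy values)."""
    return sum(phone_scores.get(p, -5.0) for p in path) / len(path)

def wake_up(phone_scores, wake_word_phones, threshold=-1.0):
    """Fire only when the wake-word path clearly beats the best
    garbage-phone path and clears an absolute threshold."""
    wake = path_score(phone_scores, wake_word_phones)
    garbage = max(path_score(phone_scores, [g]) for g in GARBAGE_PHONES)
    return wake > garbage and wake > threshold
```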
Decoder for searching a path according to a signal sequence, decoding method, and computer program product
According to an embodiment, a decoder searches a finite state transducer and outputs an output symbol string corresponding to a signal that is input, or to a feature sequence of a signal that is input. The decoder includes a token operating unit and a duplication eliminator. The token operating unit is configured to, every time the signal or the feature is input, propagate each of a plurality of tokens, each of which is assigned the state at the head of a path being searched, according to the finite state transducer. The duplication eliminator is configured to eliminate duplication among two or more tokens which have the same state assigned thereto and whose previously-passed transitions are assigned the same input symbol.
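Token propagation with that duplicate-elimination rule can be sketched as below; the `Token` structure and transition encoding are assumptions for illustration, keeping the best-scoring token per (state, input symbol) pair:

```python
from dataclasses import dataclass

@dataclass
class Token:
    state: int        # FST state at the head of this token's path
    score: float      # accumulated path cost (lower is better)
    last_input: str   # input symbol on the transition just passed
    output: tuple = ()  # output symbols emitted so far

def propagate(tokens, symbol, transitions):
    """Advance every token over arcs matching the input symbol.
    transitions: dict (state, input) -> list of (next_state, out, cost)."""
    new = []
    for t in tokens:
        for nxt, out, cost in transitions.get((t.state, symbol), []):
            new.append(Token(nxt, t.score + cost, symbol,
                             t.output + ((out,) if out else ())))
    # Duplicate elimination: same state and same input symbol on the
    # previously-passed transition -> keep only the lowest-cost token.
    best = {}
    for t in new:
        key = (t.state, t.last_input)
        if key not in best or t.score < best[key].score:
            best[key] = t
    return list(best.values())
```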
Speaker dependent voiced sound pattern template mapping
Various implementations disclosed herein include a training module configured to produce a set of segment templates from a concurrent segmentation of a plurality of vocalization instances of a VSP vocalized by a particular speaker, who is identifiable by a corresponding set of vocal characteristics. Each segment template provides a stochastic characterization of how each of one or more portions of a VSP is vocalized by the particular speaker in accordance with the corresponding set of vocal characteristics. Additionally, in various implementations, the training module includes systems, methods and/or devices configured to produce a set of VSP segment maps that each provide a quantitative characterization of how respective segments of the plurality of vocalization instances vary in relation to a corresponding one of a set of segment templates.
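A segment template's "stochastic characterization" can be pictured, under strong simplifying assumptions, as per-segment statistics across the concurrently segmented vocalization instances; this one-feature-per-segment sketch is illustrative only:

```python
import statistics

def segment_templates(instances):
    """instances: list of equally-segmented feature lists, one list per
    vocalization instance of the same VSP by the same speaker. Returns a
    (mean, stdev) template per segment position."""
    n_segments = len(instances[0])
    templates = []
    for i in range(n_segments):
        values = [inst[i] for inst in instances]
        templates.append((statistics.mean(values), statistics.pstdev(values)))
    return templates
```

A VSP segment map would then quantify how far each instance's segment sits from its template, e.g. in standard deviations.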
Stable Output Streaming Speech Translation System
A computer-implemented method includes receiving speech data representative of speech in a first language. The speech data is divided into chunks, each chunk comprising multiple temporally consecutive frames of acoustic information. Each temporally consecutive chunk is processed using beam search on each frame to identify candidate language tokens representing a second language different from the first language. One or more best candidate language tokens are selected for each chunk as it is processed. The selected best candidate language token or tokens for each chunk are committed as a prefix for the next temporally consecutive chunk.
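The stabilizing idea, committing each chunk's best tokens as an immutable prefix for the next chunk, can be sketched as follows; the beam search is replaced by a toy argmax over pre-scored hypotheses, and all names are assumptions:

```python
def best_tokens_for_chunk(chunk, prefix):
    """Toy stand-in for per-chunk beam search: each chunk is a list of
    (tokens, score) hypotheses conditioned on the committed prefix."""
    tokens, _score = max(chunk, key=lambda hyp: hyp[1])
    return tokens

def streaming_translate(chunks):
    """Process chunks in temporal order, committing each chunk's best
    tokens as a prefix that is never revised later."""
    prefix = []
    for chunk in chunks:
        prefix.extend(best_tokens_for_chunk(chunk, prefix))
    return prefix
```

Committing the prefix trades some accuracy for stable streaming output: earlier text never flickers as later audio arrives.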
Improving custom keyword spotting system accuracy with text-to-speech-based data augmentation
The present disclosure provides methods and apparatus for optimizing a keyword spotting system. A set of utterance texts including a given keyword may be generated. A set of speech signals corresponding to the set of utterance texts may be synthesized. An acoustic model in the keyword spotting system may be optimized with at least a part of the speech signals in the set of speech signals and the utterance texts corresponding to those speech signals.
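The first step, generating keyword-bearing utterance texts for TTS synthesis, can be sketched with templates; the template list and function names are illustrative assumptions:

```python
import random

# Assumed carrier templates; the keyword is slotted into each.
TEMPLATES = ["hey {kw}", "{kw} please", "okay {kw} what time is it"]

def make_utterance_texts(keyword, n=3, seed=0):
    """Generate n utterance texts that each contain the given keyword."""
    rng = random.Random(seed)
    return [rng.choice(TEMPLATES).format(kw=keyword) for _ in range(n)]

texts = make_utterance_texts("computer")
# Each text would then be passed to a TTS engine, and the resulting
# (synthesized speech, text) pairs used to fine-tune the KWS acoustic model.
```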
System and method for learning alternate pronunciations for speech recognition
A system and method for learning alternate pronunciations for speech recognition is disclosed. Through pronunciation learning, alternative name pronunciations that have not previously been covered in a general pronunciation dictionary may be covered. In an embodiment, the detection of phone-level and syllable-level mispronunciations in words and sentences may be based on acoustic models trained with Hidden Markov Models. Mispronunciations may be detected by comparing the likelihood of the potential state of the target pronunciation unit with a pre-determined threshold through a series of tests. It is also within the scope of an embodiment to detect accents.
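The threshold test at the heart of that detection can be sketched in a few lines; the log-likelihood values are toy numbers standing in for HMM acoustic-model scores, and the names are assumptions:

```python
def detect_mispronunciations(phone_logliks, threshold=-2.0):
    """phone_logliks: list of (phone, log-likelihood of the target
    pronunciation unit under the acoustic model). A phone whose
    likelihood falls below the threshold is flagged as mispronounced."""
    return [phone for phone, ll in phone_logliks if ll < threshold]

# Toy alignment scores for the word "cat": the vowel scores poorly.
flags = detect_mispronunciations([("k", -0.5), ("ae", -3.1), ("t", -0.8)])
```

Flagged units could then seed learning of an alternate pronunciation entry for the dictionary.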