G10L15/144

Method and apparatus for improving speech recognition processing performance

The feature-space Maximum Mutual Information (fMMI) method requires multiplying feature vectors by a very large matrix. That matrix is subdivided into block sub-matrices, whose elements are quantized to a small set of values and compressed by replacing each quantized element with a 1- or 2-bit index into those values. Multiplication with the compressed sub-matrices requires far fewer multiply/accumulate operations than standard matrix computation and additionally obviates decompressing the sub-matrices before use.
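
The compressed multiply can be sketched in a few lines: a minimal Python illustration assuming uniform quantization to four levels (2-bit indices) and a per-block codebook. The function names, block shapes, and uniform quantizer are invented for the sketch, not taken from the patent. Vector entries whose matrix elements share a quantized value are summed first, so each output row needs only one multiply per codebook entry.

```python
import numpy as np

def quantize_block(block, bits=2):
    """Quantize a block sub-matrix to 2**bits representative values.

    Each element is replaced by a small integer index into a per-block
    codebook, so a 2-bit index stands in for a full-width float.
    """
    levels = 2 ** bits
    lo, hi = block.min(), block.max()
    codebook = np.linspace(lo, hi, levels)      # uniform levels (a simplification)
    indices = np.abs(block[..., None] - codebook).argmin(axis=-1)
    return indices.astype(np.uint8), codebook

def fast_block_matvec(indices, codebook, x):
    """Multiply the compressed block by x without decompressing it.

    Vector entries whose matrix elements share a quantized value are
    summed first (adds only); one multiply per codebook entry then
    finishes each output row, instead of one multiply per element.
    """
    rows, _ = indices.shape
    y = np.zeros(rows)
    for r in range(rows):
        sums = np.bincount(indices[r], weights=x, minlength=len(codebook))
        y[r] = sums @ codebook                  # few multiply/accumulates per row
    return y

rng = np.random.default_rng(0)
block = rng.normal(size=(8, 16))                # toy block sub-matrix
x = rng.normal(size=16)
idx, cb = quantize_block(block)
y_fast = fast_block_matvec(idx, cb, x)
y_decompressed = cb[idx] @ x                    # decompress-then-multiply baseline
```

With 2-bit quantization, each 16-element row costs 4 multiplies instead of 16, and the grouped-sum product matches the decompress-then-multiply product exactly, so no separate decompression pass is needed.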

AUTOMATIC SPEECH RECOGNITION FOR DISFLUENT SPEECH
20170236511 · 2017-08-17 ·

A system and method of processing disfluent speech at an automatic speech recognition (ASR) system includes: receiving speech from a speaker via a microphone; determining that the received speech includes disfluent speech; accessing a disfluent speech grammar or acoustic model in response to that determination; and processing the received speech using the disfluent speech grammar.
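
An illustrative selection step for the claim above: a crude disfluency test over recognized tokens chooses which grammar to apply. The filler list, detector, and grammar names are assumptions for the sketch, not from the patent.

```python
# hypothetical filler-word inventory used by the toy detector
FILLERS = {"um", "uh", "er"}

def is_disfluent(tokens):
    """Flag filler words or immediate word repetitions as disfluency."""
    repeated = any(a == b for a, b in zip(tokens, tokens[1:]))
    return repeated or any(t in FILLERS for t in tokens)

def pick_grammar(tokens):
    """Access the disfluent speech grammar only when disfluency is detected."""
    return "disfluent_grammar" if is_disfluent(tokens) else "standard_grammar"

g_disfluent = pick_grammar(["um", "call", "call", "mom"])
g_standard = pick_grammar(["call", "mom"])
```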

REALTIME ASSESSMENT OF TTS QUALITY USING SINGLE ENDED AUDIO QUALITY MEASUREMENT
20170278506 · 2017-09-28 ·

A system and method of regulating speech output by a text-to-speech (TTS) system includes: evaluating speech that has been converted from text using an initial speech quality test before presentation to a user; applying a classification test to the evaluated speech if the evaluated speech falls below a threshold based on the initial speech quality test; generating an abnormal speech classification for the evaluated speech; and applying a corrective action to the evaluated speech based on the abnormal speech classification.
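
A minimal sketch of the gating logic in the claim: an initial quality score gates a classification test, whose label then selects a corrective action. The threshold, labels, and actions here are invented for illustration.

```python
# hypothetical single-ended quality threshold (e.g. a MOS-like score)
QUALITY_THRESHOLD = 3.0

# illustrative mapping from abnormal speech classification to correction
CORRECTIONS = {
    "clipped": "re-synthesize with lower gain",
    "garbled": "fall back to a recorded prompt",
}

def regulate_tts(audio_score, classify):
    """Return None when speech passes the initial quality test; otherwise
    run the classification test and look up a corrective action."""
    if audio_score >= QUALITY_THRESHOLD:
        return None                      # present the speech unchanged
    label = classify()                   # abnormal speech classification
    return CORRECTIONS.get(label, "regenerate speech")

action = regulate_tts(2.1, classify=lambda: "garbled")
passed = regulate_tts(4.2, classify=lambda: "garbled")
```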

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, COMPUTER PROGRAM PRODUCT, AND RECOGNITION SYSTEM
20170263242 · 2017-09-14 ·

An information processing device includes a phonetic converting unit, an HMM converting unit, and a searching unit. The phonetic converting unit converts a phonetic symbol sequence into a hidden Markov model (HMM) state sequence in which states of an HMM are aligned. The HMM converting unit converts the HMM state sequence into a score vector sequence indicating the degree of similarity to a specific pronunciation, using a similarity matrix that defines the similarity between the states of the HMM. The searching unit searches the paths included in a search network for the path having a better score for the score vector sequence than the other paths, and outputs the phonetic symbol sequence corresponding to the retrieved path.
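
The conversion from an HMM state sequence to a score vector sequence can be sketched as a row lookup in the similarity matrix: each state is replaced by its row, giving a soft score against every state in the inventory. The state inventory and similarity values below are invented for illustration.

```python
import numpy as np

# hypothetical 4-state inventory and similarity matrix S,
# where S[i, j] is the similarity between HMM states i and j
states = {"a": 0, "k": 1, "s": 2, "sil": 3}
S = np.array([
    [1.0, 0.2, 0.1, 0.0],
    [0.2, 1.0, 0.3, 0.0],
    [0.1, 0.3, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

def to_score_vectors(hmm_state_sequence):
    """Replace each HMM state by its similarity-matrix row, yielding a
    score vector sequence over the whole state inventory."""
    return np.stack([S[states[s]] for s in hmm_state_sequence])

scores = to_score_vectors(["sil", "a", "k", "sil"])
```

A search network can then score any candidate path against these vectors, rewarding states that are merely similar (not identical) to the observed ones.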

Method and apparatus for recognition of sound events based on convolutional neural network

Provided is a sound event recognition method that may improve sound event recognition performance by using correlations between different sound signal feature parameters with a neural network. In detail, the method may extract a sound signal feature parameter from a sound signal including a sound event, and recognize the sound event included in the sound signal by applying a convolutional neural network (CNN) trained using the sound signal feature parameter.
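
The core CNN operation that exploits correlations between neighboring feature parameters and frames is a 2-D convolution over the time-by-feature map; a minimal sketch under invented shapes and kernel values follows (a full trained CNN is out of scope here).

```python
import numpy as np

def conv2d_valid(feat, kernel):
    """Single 2-D convolution (valid mode) over a time x feature map:
    each output is a weighted sum of a local patch, which is how a CNN
    captures correlations between nearby feature parameters."""
    th, tw = kernel.shape
    H, W = feat.shape
    out = np.zeros((H - th + 1, W - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(feat[i:i + th, j:j + tw] * kernel)
    return out

feat = np.arange(12.0).reshape(3, 4)    # 3 frames x 4 feature parameters (toy)
edge = np.array([[1.0, -1.0]])          # responds to change across adjacent parameters
resp = conv2d_valid(feat, edge)
```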

METHOD AND APPARATUS FOR EVALUATING USER INTENTION UNDERSTANDING SATISFACTION, ELECTRONIC DEVICE AND STORAGE MEDIUM
20210383802 · 2021-12-09 ·

A method and apparatus for generating a user intention understanding satisfaction evaluation model, a method and apparatus for evaluating user intention understanding satisfaction, an electronic device, and a storage medium are provided, relating to intelligent voice recognition and knowledge graphs. The method for generating the evaluation model comprises: acquiring a plurality of sets of intention understanding data, at least one set of which comprises a plurality of sequences corresponding to multi-round behaviors of an intelligent device in multi-round man-machine interactions; and learning the plurality of sets of intention understanding data through a first machine learning model to obtain the user intention understanding satisfaction evaluation model. The resulting model is configured to evaluate the user intention understanding satisfaction of the intelligent device in the multi-round man-machine interactions according to the plurality of sequences corresponding to those interactions.

METHODS AND SYSTEMS FOR PREDICTING NON-DEFAULT ACTIONS AGAINST UNSTRUCTURED UTTERANCES
20220148580 · 2022-05-12 ·

A method to adaptively predict non-default actions against unstructured utterances by an automated assistant operating in a computing system is provided. The method includes: extracting voice features on receiving an input utterance from at least one speaker by an automatic speech recognition (ASR) device; identifying the input utterance as an unstructured utterance based on the extracted voice features and a mapping, drawn by the ASR device, between the input utterance and one or more default actions; and obtaining at least one probable action to be performed in response to the unstructured utterance through a dynamic Bayesian network (DBN). The method further includes providing the at least one probable action obtained by the DBN to the speaker in order of the posterior probability of each action.
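
A toy posterior ranking, standing in for the dynamic Bayesian network: P(action | features) ∝ P(features | action) · P(action). The actions, priors, and likelihoods are illustrative inventions, and a real DBN would factor these over time; only the "order of the posterior probability" step is shown.

```python
# hypothetical action priors and feature likelihoods (not from the patent)
priors = {"play_music": 0.5, "set_alarm": 0.3, "call_contact": 0.2}
likelihoods = {            # P(observed voice-feature cluster | action)
    "play_music": 0.2,
    "set_alarm": 0.6,
    "call_contact": 0.4,
}

def rank_actions(priors, likelihoods):
    """Return actions sorted by descending posterior probability,
    mirroring the ordered presentation of probable actions to the speaker."""
    joint = {a: priors[a] * likelihoods[a] for a in priors}
    z = sum(joint.values())                       # normalizing constant
    posterior = {a: p / z for a, p in joint.items()}
    return sorted(posterior, key=posterior.get, reverse=True), posterior

order, post = rank_actions(priors, likelihoods)
```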

SPEECH EMOTION DETECTION METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM

A speech emotion detection system may obtain to-be-detected speech data. The system may generate speech frames by applying framing processing to the to-be-detected speech data. The system may extract speech features corresponding to the speech frames to form a speech feature matrix corresponding to the to-be-detected speech data. The system may input the speech feature matrix to an emotion state probability detection model and generate, based on the speech feature matrix and that model, an emotion state probability matrix corresponding to the to-be-detected speech data. The system may input the emotion state probability matrix and the speech feature matrix to an emotion state transition model and generate an emotion state sequence based on the emotion state probability matrix, the speech feature matrix, and the emotion state transition model. The system may determine an emotion state based on the emotion state sequence.
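
One common realization of the transition-model stage is Viterbi decoding over the per-frame emotion state probability matrix; the abstract does not fix a specific algorithm, so the sketch below is a hedged illustration with invented probabilities and a sticky transition matrix that smooths a one-frame flicker.

```python
import numpy as np

def decode_emotion_sequence(prob_matrix, trans_matrix):
    """Viterbi-style decode: combine per-frame emotion probabilities
    with an emotion state transition matrix to find the most likely
    emotion state sequence."""
    n_frames, n_states = prob_matrix.shape
    logp = np.log(prob_matrix + 1e-12)
    logt = np.log(trans_matrix + 1e-12)
    score = logp[0].copy()
    back = np.zeros((n_frames, n_states), dtype=int)
    for t in range(1, n_frames):
        cand = score[:, None] + logt        # score of arriving from each state
        back[t] = cand.argmax(axis=0)       # best predecessor per state
        score = cand.max(axis=0) + logp[t]
    path = [int(score.argmax())]            # trace back the best path
    for t in range(n_frames - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# toy example: 3 emotions, 4 frames; frame 2 weakly favors emotion 1
probs = np.array([
    [0.8, 0.1, 0.1],
    [0.6, 0.3, 0.1],
    [0.4, 0.5, 0.1],
    [0.7, 0.2, 0.1],
])
trans = np.array([                          # sticky: states tend to persist
    [0.9, 0.05, 0.05],
    [0.05, 0.9, 0.05],
    [0.05, 0.05, 0.9],
])
seq = decode_emotion_sequence(probs, trans)
```

Frame 2 on its own would be labeled emotion 1, but the transition model keeps the sequence on emotion 0, which is the smoothing effect the two-stage design buys.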

SYSTEMS AND METHODS FOR FAST FILTERING OF AUDIO KEYWORD SEARCH
20220020361 · 2022-01-20 ·

An audio keyword searcher is arranged to: identify a voice segment of a received audio signal; identify, by an automatic speech recognition engine, one or more phonemes included in the voice segment; and output the one or more phonemes from the automatic speech recognition engine to a keyword filter, which detects whether the voice segment includes any of one or more first keywords of a first keyword list. If detected, the one or more phonemes included in the voice segment are output to a decoder; if not detected, they are not output to the decoder. If the one or more phonemes are output to the decoder, the searcher generates a word lattice associated with the voice segment, searches the word lattice for one or more second keywords, and determines whether the voice segment includes the one or more second keywords.
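
A minimal sketch of the two-stage filter: a cheap phoneme-level keyword check gates which voice segments reach the expensive decoder and lattice search. The keyword list, pronunciations, and function names are illustrative, not from the patent.

```python
# hypothetical first keyword list with phoneme-sequence pronunciations
FIRST_KEYWORDS = {
    "alert": ("ah", "l", "er", "t"),
    "cancel": ("k", "ae", "n", "s", "ah", "l"),
}

def contains_first_keyword(phonemes):
    """Stage 1: scan the segment's phoneme sequence for any first-list
    keyword pronunciation; only matching segments go on to the decoder."""
    seq = tuple(phonemes)
    for pron in FIRST_KEYWORDS.values():
        n = len(pron)
        if any(seq[i:i + n] == pron for i in range(len(seq) - n + 1)):
            return True
    return False

def search_segment(phonemes, decode):
    """Forward to the decoder only when stage 1 fires; otherwise drop the
    segment without generating a lattice or running the keyword search."""
    if contains_first_keyword(phonemes):
        return decode(phonemes)     # stage 2: word lattice + second-list search
    return None

hits = search_segment(["sil", "k", "ae", "n", "s", "ah", "l"],
                      decode=lambda p: ["cancel"])
misses = search_segment(["sil", "hh", "ay"], decode=lambda p: ["cancel"])
```

Most segments fail the cheap first-stage check, so the costly lattice machinery runs only on a small fraction of the audio, which is the "fast filtering" in the title.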