Patent classification: G10L2015/027
METHOD AND SYSTEM FOR SPEECH EMOTION RECOGNITION
A method for speech emotion recognition that enriches speech-to-text communications between users in speech chat sessions. The method implements a speech emotion recognition model that converts emotions observed in speech samples into visual emotion content in the transcribed text by: generating a data set of speech samples labeled with a plurality of emotion classes; extracting a set of acoustic features from the speech samples of each emotion class; generating a machine learning (ML) model based on the acoustic features and the data set; training the ML model on acoustic features extracted from speech samples during speech chat sessions; predicting the emotion content of observed speech with the trained ML model; generating enriched text based on the predicted emotion content; and presenting the enriched text in the speech-to-text communications between users in the chat session as visual notice of the emotion observed in the speech sample.
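As a hedged illustration of the pipeline in this abstract, the toy sketch below stands a nearest-centroid classifier over two invented acoustic features (mean pitch, mean energy) in for the trained ML model, and an emoji in for the visual emotion content. All names, features, and numbers are assumptions for illustration, not details from the patent.

```python
import math

CENTROIDS = {  # per-class feature centroids, as if learned from a labeled data set
    "happy": (220.0, 0.8),
    "sad": (140.0, 0.3),
    "neutral": (180.0, 0.5),
}
EMOTION_MARKUP = {"happy": "😊", "sad": "😢", "neutral": ""}

def predict_emotion(features):
    """Return the emotion class whose centroid is nearest to the features."""
    return min(CENTROIDS, key=lambda c: math.dist(features, CENTROIDS[c]))

def enrich_text(transcript, features):
    """Append a visual marker for the predicted emotion to the transcript."""
    marker = EMOTION_MARKUP[predict_emotion(features)]
    return f"{transcript} {marker}".strip()

print(enrich_text("good to hear from you", (230.0, 0.9)))  # nearest centroid: happy
```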
METHOD AND APPARATUS FOR SPEECH RECOGNITION, AND STORAGE MEDIUM
A method and apparatus for speech recognition, and a storage medium, are proposed. The solution includes: obtaining audio data to be recognized; decoding the audio data to obtain a first syllable of a to-be-converted word, where the first syllable is a combination of at least one phoneme corresponding to the to-be-converted word; obtaining the sentence to which the to-be-converted word belongs and a converted word in that sentence, and obtaining a second syllable of the converted word; encoding the first syllable and the second syllable to generate first encoding information of the first syllable; and decoding the first encoding information to obtain the text corresponding to the to-be-converted word.
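A minimal sketch of the context step described above, assuming pinyin-like syllables: a decoded syllable is mapped to text by letting an already-converted word in the same sentence disambiguate between homophones. The lexicon and bigram counts are invented for illustration; the patent's actual encoder/decoder is a neural model, not a count table.

```python
HOMOPHONES = {"shi4": ["市", "是", "事"]}           # words sharing one syllable
BIGRAM_COUNT = {("城", "市"): 9, ("城", "是"): 1}    # (converted word, candidate) counts

def convert_syllable(syllable, prev_converted_word):
    """Pick the candidate word that best fits the already-converted context."""
    candidates = HOMOPHONES[syllable]
    return max(candidates,
               key=lambda w: BIGRAM_COUNT.get((prev_converted_word, w), 0))

print(convert_syllable("shi4", "城"))  # → 市 (as in 城市, "city")
```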
Electronic device for recognizing abbreviated content name and control method thereof
An electronic device secures diversity of user utterances with respect to a content name when a user searches for content through a display device by voice. A method by the electronic device includes: receiving input of a user voice; acquiring a keyword related to content included in the user voice, and acquiring at least one modified keyword based on the keyword; acquiring a plurality of search results corresponding to the keyword and the at least one modified keyword; comparing the keyword and the modified keyword with the plurality of search results and acquiring a content name corresponding to the keyword; and updating a database of content names based on the keyword, the modified keyword, and the acquired content name.
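The abstract leaves the keyword-modification and comparison rules unspecified, so the sketch below makes assumptions: variants are generated by joining words and taking initials (covering abbreviated names), and `difflib` string similarity stands in for comparing search results. The catalog is a hypothetical content database.

```python
import difflib

CATALOG = ["Game of Thrones", "The Good Doctor", "Gotham"]  # hypothetical DB

def modified_keywords(keyword):
    """Generate assumed variants of the spoken keyword."""
    words = keyword.split()
    variants = {keyword, "".join(words)}
    if len(words) > 1:
        variants.add("".join(w[0] for w in words))  # abbreviation, e.g. "got"
    return variants

def resolve_content_name(keyword):
    """Score every catalog entry against the keyword and its variants."""
    def score(title):
        return max(difflib.SequenceMatcher(None, v.lower(), title.lower()).ratio()
                   for v in modified_keywords(keyword))
    return max(CATALOG, key=score)

print(resolve_content_name("game of thrones"))
```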
Method, apparatus, device and computer readable storage medium for recognizing and decoding voice based on streaming attention model
A method, apparatus, device, and computer readable storage medium for recognizing and decoding a voice based on a streaming attention model are provided. The method may include generating a plurality of acoustic paths for decoding the voice using the streaming attention model, and then merging those acoustic paths having identical last syllables to obtain a plurality of merged acoustic paths. The method may further include selecting a preset number of acoustic paths from the plurality of merged acoustic paths as retained candidate acoustic paths. Embodiments of the present disclosure rest on the idea that the acoustic score of the current voice fragment is affected only by the immediately preceding voice fragment and is independent of earlier voice history, and accordingly merge candidate acoustic paths whose last syllables are identical.
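The merge-then-prune rule above can be sketched in a few lines. This is a toy beam-search step, not the patent's implementation: candidate paths ending in the same syllable are merged (keeping the best score), then the list is cut to a preset beam width. Scores are treated as log-probabilities, so higher is better.

```python
def merge_and_prune(paths, beam_width):
    """Merge acoustic paths with identical last syllables, then prune."""
    best = {}
    for syllables, score in paths:
        last = syllables[-1]
        if last not in best or score > best[last][1]:
            best[last] = (syllables, score)
    merged = sorted(best.values(), key=lambda p: p[1], reverse=True)
    return merged[:beam_width]

paths = [(("ni", "hao"), -1.2), (("nii", "hao"), -2.5), (("ni", "hou"), -1.9)]
print(merge_and_prune(paths, 2))  # the two "hao" paths collapse into one
```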
Clockwork Hierarchal Variational Encoder
A method for providing a frame-based mel spectral representation of speech includes receiving a text utterance having at least one word, and selecting a mel spectral embedding for the text utterance. Each word in the text utterance has at least one syllable and each syllable has at least one phoneme. For each phoneme, using the selected mel spectral embedding, the method also includes: predicting a duration of the corresponding phoneme by encoding linguistic features of the corresponding phoneme with a corresponding syllable embedding for the syllable that includes the corresponding phoneme; and generating a plurality of fixed-length predicted mel-frequency spectrogram frames based on the predicted duration for the corresponding phoneme. Each fixed-length predicted mel-frequency spectrogram frame represents mel-spectral information of the corresponding phoneme.
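One concrete detail worth illustrating is the duration-to-frames step: a predicted phoneme duration is quantized into a whole number of fixed-length spectrogram frames. The 12.5 ms frame length and the example durations below are assumptions for illustration, not values from the patent.

```python
import math

FRAME_MS = 12.5  # assumed fixed mel-spectrogram frame length

def frames_per_phoneme(durations_ms):
    """Map predicted phoneme durations to counts of fixed-length frames."""
    return [math.ceil(d / FRAME_MS) for d in durations_ms]

print(frames_per_phoneme([50.0, 80.0, 30.0]))  # → [4, 7, 3]
```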
METHOD FOR SEMANTIC RECOGNITION, ELECTRONIC DEVICE, AND STORAGE MEDIUM
This disclosure describes a method for semantic recognition, an electronic device, and a storage medium. The detailed solution includes: obtaining a speech recognition result of a speech to be processed, in which the speech recognition result includes a newly added recognition result fragment and a historical recognition result fragment; obtaining a semantic vector of each historical object in the historical recognition result fragment, and obtaining a semantic vector of each newly added object by inputting the semantic vector of each historical object and each newly added object in the newly added recognition result fragment into a streaming semantic coding layer; and obtaining a semantic recognition result of the speech by inputting the semantic vector of each historical object and the semantic vector of each newly added object into a streaming semantic vector fusion layer and a semantic understanding multi-task layer that are sequentially arranged.
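The key streaming idea, that historical semantic vectors are reused so only the newly added fragment is encoded, can be sketched with a toy cache. The hash-based "embedding" below is a deterministic stand-in for the patent's streaming semantic coding layer; all names are illustrative.

```python
class StreamingEncoder:
    def __init__(self):
        self.history = {}  # object -> cached semantic vector

    def _encode(self, obj):
        """Toy stand-in for a neural encoder: a deterministic 2-d vector."""
        h = hash(obj) & 0xFFFF
        return (h % 97 / 97.0, h % 89 / 89.0)

    def update(self, objects):
        """Encode only objects not seen before; reuse cached history."""
        fresh = {o: self._encode(o) for o in objects if o not in self.history}
        self.history.update(fresh)
        return fresh

enc = StreamingEncoder()
enc.update(["turn", "on"])                   # historical fragment
fresh = enc.update(["on", "the", "light"])   # only "the" and "light" are new
print(sorted(fresh))
```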
ARTIFICIAL INTELLIGENCE-BASED WAKEUP WORD DETECTION METHOD AND APPARATUS, DEVICE, AND MEDIUM
This application discloses an artificial intelligence-based (AI-based) wakeup word detection method performed by a computing device. The method includes: constructing, by using a preset pronunciation dictionary, at least one syllable combination sequence for self-defined wakeup word text inputted by a user; obtaining to-be-recognized speech data, and extracting speech features of speech frames in the speech data; inputting the speech features into a pre-constructed deep neural network (DNN) model to output posterior probability vectors of the speech features corresponding to syllable identifiers; determining a target probability vector from the posterior probability vectors according to the syllable combination sequence; and calculating a confidence according to the target probability vector, and determining that the speech frames include the wakeup word text when the confidence is greater than or equal to a threshold.
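A hedged sketch of the confidence step: for each syllable in the wakeup sequence, take the best per-frame posterior, then combine across syllables. The geometric mean used here is one common choice, not necessarily the patent's formula, and the posterior numbers are invented.

```python
POSTERIORS = [  # one dict per frame: {syllable_id: probability}
    {"xiao": 0.9, "ai": 0.05},
    {"xiao": 0.2, "ai": 0.7},
]

def confidence(syllable_sequence, posteriors):
    """Geometric mean of each target syllable's best frame posterior."""
    best = [max(f.get(s, 0.0) for f in posteriors) for s in syllable_sequence]
    prod = 1.0
    for p in best:
        prod *= p
    return prod ** (1.0 / len(best))

c = confidence(["xiao", "ai"], POSTERIORS)
print(round(c, 3), c >= 0.6)  # wake only if confidence meets the threshold
```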
Speech Synthesis Prosody Using A BERT Model
A method for generating a prosodic representation includes receiving a text utterance having one or more words. Each word has at least one syllable having at least one phoneme. The method also includes generating, using a Bidirectional Encoder Representations from Transformers (BERT) model, a sequence of wordpiece embeddings and selecting an utterance embedding for the text utterance, the utterance embedding representing an intended prosody. Each wordpiece embedding is associated with one of the one or more words of the text utterance. For each syllable, using the selected utterance embedding and a prosody model that incorporates the BERT model, the method also includes generating a corresponding prosodic syllable embedding for the syllable based on the wordpiece embedding associated with the word that includes the syllable and predicting a duration of the syllable by encoding linguistic features of each phoneme of the syllable with the corresponding prosodic syllable embedding for the syllable.
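One detail of the method above, associating each syllable with the wordpiece embedding of its word, can be sketched as follows. The wordpieces, syllable split, and 2-d vectors are all invented; in the patent a BERT model produces the embeddings, and mean-pooling the pieces of a word is an assumption here, not the claimed mechanism.

```python
WORDPIECES = {"hello": ["hel", "##lo"], "world": ["world"]}
WP_VECTORS = {"hel": (1.0, 0.0), "##lo": (0.0, 1.0), "world": (0.5, 0.5)}
SYLLABLES = {"hello": ["hel", "lo"], "world": ["world"]}

def syllable_embeddings(word):
    """Give every syllable of a word its word-level wordpiece embedding."""
    pieces = [WP_VECTORS[p] for p in WORDPIECES[word]]
    avg = tuple(sum(dim) / len(pieces) for dim in zip(*pieces))  # mean-pool
    return {s: avg for s in SYLLABLES[word]}

print(syllable_embeddings("hello"))  # both syllables share the word's vector
```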
Method and system for building speech recognizer, and speech recognition method and system
A method and system for building a speech recognizer, and a speech recognition method and system, are proposed. The method for building a speech recognizer includes: reading and parsing each grammar file, and building a network for each grammar; reading an acoustic syllable mapping relationship table, and expanding each grammar network into a syllable network; performing a merge minimization operation on each syllable network to form a sound element decoding network; and forming the speech recognizer from the sound element decoding network and a language model. The technical solutions of the present disclosure exhibit strong extensibility, support an N-Gram language model and a class model, are flexible to use, and are suited to an embedded recognizer in a vehicle-mounted environment.
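A toy sketch of the expansion and merging steps, under assumptions: a tiny lexicon plays the role of the acoustic syllable mapping table, grammar word sequences are expanded into syllable sequences, and a prefix trie stands in for the merge-minimization operation (a real recognizer would minimize a full WFST, not just shared prefixes).

```python
LEXICON = {"play": ["p", "lay"], "pause": ["p", "ause"], "stop": ["stop"]}

def expand(grammar_sentences):
    """Expand grammar word sequences into syllable sequences via the lexicon."""
    return [[s for w in sent for s in LEXICON[w]] for sent in grammar_sentences]

def build_trie(sequences):
    """Merge syllable sequences by shared prefix into a nested-dict trie."""
    root = {}
    for seq in sequences:
        node = root
        for syl in seq:
            node = node.setdefault(syl, {})
    return root

trie = build_trie(expand([["play"], ["pause"], ["stop"]]))
print(sorted(trie))  # "play" and "pause" share the initial "p" arc
```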
METHOD AND DEVICE FOR PROCESSING VOICE INPUT OF USER
A method, performed by an electronic device, of processing a voice input of a user. The method includes: obtaining a first audio signal from a first user voice input; obtaining a second audio signal from a second user voice input received after the first audio signal; identifying whether the second audio signal is an audio signal for correcting the first audio signal; when it is, obtaining, from the second audio signal, at least one of one or more corrected words or one or more corrected syllables; identifying, based on the corrected words or syllables, at least one corrected audio signal for the first audio signal; and processing the at least one corrected audio signal.
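A hedged text-level sketch of the correction flow: detect a correction phrase in the second utterance's transcript and substitute the closest-matching word in the first. The trigger phrase and the `difflib`-based matching rule are assumptions for illustration; the patent operates on audio signals, not only transcripts.

```python
import difflib

def apply_correction(first_text, second_text, trigger="I said"):
    """If the second utterance is a correction, patch the first transcript."""
    if trigger not in second_text:
        return first_text  # not a correction; keep the first result as-is
    corrected = second_text.split(trigger, 1)[1].strip()
    words = first_text.split()
    target = max(words, key=lambda w: difflib.SequenceMatcher(
        None, w.lower(), corrected.lower()).ratio())
    return " ".join(corrected if w == target else w for w in words)

print(apply_correction("call Jon at noon", "no, I said John"))
```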