Patent classifications
G10L2015/027
VOCAL COMMAND RECOGNITION
A method to detect a vocal command, the method including: receiving audio data from a transducer configured to convert audio into an electric signal, and analyzing the audio data using a first neural network. The method also includes detecting a keyword from the audio data using the first neural network on an edge device, the first neural network being trained to recognize the keyword. The method further includes activating a second neural network after the keyword is identified by the first neural network and analyzing the audio data using the second neural network, the second neural network being trained to recognize a set of vocal commands. The method may also include detecting the vocal command using the second neural network.
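A minimal Python sketch of this two-stage pattern, with stub classes standing in for the trained networks (the class names, interfaces, and thresholds below are invented for illustration; the actual patent does not specify them):

```python
# Two-stage pipeline: a small always-on keyword spotter gates a larger
# command recognizer, so the expensive model runs only after the wake word.
import numpy as np

class TinyKeywordNet:
    """Stand-in for the small first-stage network (assumed interface)."""
    def predict_keyword_prob(self, frame: np.ndarray) -> float:
        # Placeholder heuristic; a real system would use a trained model.
        return float(frame.std() > 0.5)

class CommandNet:
    """Stand-in for the larger second-stage command recognizer."""
    COMMANDS = ["lights_on", "lights_off", "volume_up"]
    def predict_command(self, audio: np.ndarray) -> str:
        return self.COMMANDS[int(audio.sum()) % len(self.COMMANDS)]

def run_pipeline(frames, keyword_threshold=0.5):
    spotter, commander = TinyKeywordNet(), CommandNet()
    for i, frame in enumerate(frames):
        if spotter.predict_keyword_prob(frame) >= keyword_threshold:
            # Keyword detected: activate the second network on later audio.
            rest = frames[i + 1:]
            return commander.predict_command(np.concatenate(rest) if rest else frame)
    return None

rng = np.random.default_rng(0)
audio = [rng.standard_normal(160) * (1.0 if i == 2 else 0.1) for i in range(5)]
print(run_pipeline(audio))
```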
Transportation vehicle control with phoneme generation
A transportation vehicle having a navigation system and an operating system connected to the navigation system for data transmission via a bus system. The transportation vehicle has a microphone and includes a phoneme generation module for generating phonemes from the acoustic voice signal output by the microphone, wherein the phonemes are part of a predefined selection of exclusively monosyllabic phonemes, and a phoneme-to-grapheme module for generating inputs to operate the transportation vehicle based on the monosyllabic phonemes generated by the phoneme generation module.
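A small sketch of the phoneme-to-grapheme idea: each monosyllabic phoneme spells exactly one grapheme, for example when entering a navigation destination letter by letter. The phoneme inventory and mapping below are invented for illustration:

```python
# Hypothetical phoneme-to-grapheme table; each entry is one monosyllabic
# phoneme from the predefined selection, mapped to a single grapheme.
PHONEME_TO_GRAPHEME = {
    "be": "b", "eh": "e", "er": "r", "el": "l",
    "ih": "i", "en": "n", "ah": "a", "de": "d",
}

def phonemes_to_input(phonemes):
    """Convert a recognized phoneme sequence into an operating input."""
    unknown = [p for p in phonemes if p not in PHONEME_TO_GRAPHEME]
    if unknown:
        raise ValueError(f"phonemes outside the predefined selection: {unknown}")
    return "".join(PHONEME_TO_GRAPHEME[p] for p in phonemes)

print(phonemes_to_input(["be", "eh", "er", "el", "ih", "en"]))  # -> "berlin"
```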
METHOD FOR ANIMATION SYNTHESIS, ELECTRONIC DEVICE AND STORAGE MEDIUM
A method for animation synthesis includes: obtaining an audio stream to be processed and a syllable sequence, wherein both the audio stream and the syllable sequence correspond to the same text and each syllable in the syllable sequence is the pinyin of a character of the text; obtaining a phoneme information sequence of the audio stream by performing phoneme detection on the audio stream, wherein each piece of phoneme information in the phoneme information sequence comprises a phoneme category and a pronunciation time period; determining a pronunciation time period corresponding to each syllable in the syllable sequence based on the syllable sequence and the phoneme categories and pronunciation time periods in the phoneme information sequence; and generating an animation video corresponding to the audio stream based on the pronunciation time period corresponding to each syllable in the syllable sequence and an animation frame sequence corresponding to each syllable.
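A sketch of the alignment step: given detected phonemes with time periods and the pinyin syllable sequence, assign each syllable the span of its phonemes. The greedy consumption and the toy syllable-to-phoneme-count table are assumptions for illustration; real systems would use a pronunciation lexicon:

```python
# Align a pinyin syllable sequence to detected phoneme time periods.
from dataclasses import dataclass

@dataclass
class PhonemeInfo:
    category: str   # phoneme label
    start: float    # seconds
    end: float

def syllable_time_periods(syllables, phoneme_seq, phonemes_per_syllable):
    """Greedily consume phonemes so each syllable gets a (start, end) span."""
    spans, idx = [], 0
    for syl in syllables:
        n = phonemes_per_syllable[syl]
        chunk = phoneme_seq[idx: idx + n]
        spans.append((syl, chunk[0].start, chunk[-1].end))
        idx += n
    return spans

phones = [PhonemeInfo("n", 0.00, 0.08), PhonemeInfo("i", 0.08, 0.20),
          PhonemeInfo("h", 0.20, 0.26), PhonemeInfo("ao", 0.26, 0.45)]
print(syllable_time_periods(["ni", "hao"], phones, {"ni": 2, "hao": 2}))
```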
System and method of generating effects during live recitations of stories
One aspect of this disclosure relates to presentation of a first effect on one or more presentation devices during an oral recitation of a first story. The first effect is associated with a first trigger point, first content, and/or the first story, the first trigger point being one or more specific syllables from a word and/or phrase in the first story. A first transmission point associated with the first effect can be determined based on a latency of a presentation device and a user speaking profile, the first transmission point being one or more specific syllables from a word and/or phrase before the first trigger point in the first story. Control signals carrying instructions to present the first content at the first trigger point are transmitted to the presentation device when a user recites the first transmission point, such that the first content is presented at the first trigger point.
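A back-of-the-envelope sketch of how a transmission point could be placed ahead of a trigger point; the latency and speaking-rate values are invented, and the syllable list is a toy stand-in for a user speaking profile:

```python
# Send the control signal enough syllables early that, given device latency
# and the user's speaking rate, the content lands exactly on the trigger.
import math

def transmission_offset_syllables(latency_s, syllables_per_second):
    """How many syllables ahead of the trigger the signal must be sent."""
    return math.ceil(latency_s * syllables_per_second)

story_syllables = ["once", "up", "on", "a", "time", "the", "drag", "on", "roared"]
trigger_index = story_syllables.index("roared")
offset = transmission_offset_syllables(latency_s=0.6, syllables_per_second=4.0)
transmission_index = max(0, trigger_index - offset)
print(story_syllables[transmission_index])  # syllable at which to transmit
```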
WIRELESS COMMUNICATION DEVICE USING VOICE RECOGNITION AND VOICE SYNTHESIS
Disclosed is a wireless communication device including a voice recognition portion configured to convert a voice signal input through a microphone into a syllable information stream using voice recognition, an encoding portion configured to encode the syllable information stream to generate digital transmission data, a transmission portion configured to modulate the digital transmission data into a transmission signal and transmit the transmission signal through an antenna, a reception portion configured to demodulate a reception signal received through the antenna into digital reception data and output the digital reception data, a decoding portion configured to decode the digital reception data to regenerate the syllable information stream, and a voice synthesis portion configured to convert the syllable information stream into a voice signal using voice synthesis and output the voice signal through a speaker.
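A minimal sketch of the encoding/decoding portions, assuming a toy syllable inventory (the inventory and one-byte-per-syllable coding are invented for illustration; modulation and the radio path are out of scope here):

```python
# Transmit recognized syllables as compact indices instead of waveform
# samples; the receiving side re-synthesizes speech from the same stream.
SYLLABLES = ["ga", "na", "da", "ra", "ma", "ba", "sa", "a"]  # toy inventory
INDEX = {s: i for i, s in enumerate(SYLLABLES)}

def encode(syllable_stream):            # encoding portion
    return bytes(INDEX[s] for s in syllable_stream)

def decode(payload):                    # decoding portion
    return [SYLLABLES[b] for b in payload]

tx = encode(["a", "na", "sa"])          # handed to the modulator in the patent
assert decode(tx) == ["a", "na", "sa"]
print(len(tx), "bytes on air instead of raw PCM audio")
```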
Method, device and storage medium for speech recognition
Disclosed are a method, device and readable storage medium for speech recognition. The method includes: determining speech features of speech data by performing feature extraction on the speech data; determining syllable data corresponding to each of the speech features based on a plurality of feature extraction layers and a softmax function layer included in an acoustic model, where the acoustic model is configured to convert the speech features into the syllable data; determining text data corresponding to the speech data based on a language model, a pronouncing dictionary and the syllable data, where the pronouncing dictionary is configured to convert the syllable data into the text data, and the language model is configured to evaluate the text data; and outputting the text data.
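A sketch of this decode chain with all tables invented for illustration: a softmax over per-frame syllable logits stands in for the acoustic model, a tiny pronouncing dictionary maps syllable sequences to candidate text, and a toy language-model score picks among candidates:

```python
import numpy as np

SYLLABLES = ["hel", "lo", "low"]
DICTIONARY = {("hel", "lo"): "hello", ("hel", "low"): "hell low"}  # invented
LM_SCORE = {"hello": 0.9, "hell low": 0.1}                         # invented

def frame_to_syllable(logits):
    """Softmax layer of the acoustic model: pick the most probable syllable."""
    e = np.exp(logits - logits.max())
    probs = e / e.sum()
    return SYLLABLES[int(np.argmax(probs))]

frames = np.array([[2.0, 0.1, 0.1], [0.1, 1.5, 1.2]])  # stand-in logits
syllables = tuple(frame_to_syllable(f) for f in frames)
candidates = [text for syls, text in DICTIONARY.items() if syls == syllables]
print(max(candidates, key=lambda t: LM_SCORE[t]))  # language model evaluates
```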
Clockwork hierarchal variational encoder
A method of providing a frame-based mel spectral representation of speech includes receiving a text utterance having at least one word and selecting a mel spectral embedding for the text utterance. Each word has at least one syllable and each syllable has at least one phoneme. For each phoneme, the method further includes using the selected mel spectral embedding to: (i) predict a duration of the corresponding phoneme based on corresponding linguistic features associated with the word that includes the corresponding phoneme and corresponding linguistic features associated with the syllable that includes the corresponding phoneme; and (ii) generate a plurality of fixed-length predicted mel-frequency spectrogram frames based on the predicted duration for the corresponding phoneme. Each fixed-length predicted mel-frequency spectrogram frame represents mel-spectral information of the corresponding phoneme.
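A sketch of the frame-generation step only: a predicted per-phoneme duration is quantized into a whole number of fixed-length mel frames. The duration values, hop size, and random mel content are placeholders; in the patent, duration is predicted from the mel spectral embedding plus word- and syllable-level linguistic features:

```python
import numpy as np

FRAME_HOP_S = 0.0125   # 12.5 ms per fixed-length frame (a common TTS choice)
N_MELS = 80

def frames_for_phoneme(predicted_duration_s, rng):
    """Convert a predicted duration into fixed-length mel spectrogram frames."""
    n_frames = max(1, round(predicted_duration_s / FRAME_HOP_S))
    return rng.standard_normal((n_frames, N_MELS))  # placeholder mel content

rng = np.random.default_rng(0)
utterance = [("h", 0.05), ("ə", 0.04), ("l", 0.06), ("oʊ", 0.11)]  # (phoneme, s)
mel = np.concatenate([frames_for_phoneme(d, rng) for _, d in utterance])
print(mel.shape)  # (total_frames, 80)
```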
Method and system for speech emotion recognition
Systems and methods enrich speech-to-text communications between users in speech chat sessions, using a speech emotion recognition model to detect observed emotions in speech samples and enrich the text with visual emotion content. The method may include generating a data set of speech samples with labels of a plurality of emotion classes, selecting a set of acoustic features from each of the emotion classes, generating a machine learning (ML) model based on the acoustic features and data set, applying a set of rules based on the selected set of acoustic features and data set, computing the number of rules that have been satisfied, and presenting the enriched text in speech-to-text communications between users in the chat session for visual notice of an observed emotion in the speech sample.
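A sketch of the rule-counting step: threshold rules over acoustic features vote for an emotion class, and the text is enriched with a visual marker when enough rules fire. All feature names, thresholds, and markers are invented for illustration:

```python
# Count how many per-emotion rules a feature vector satisfies, then enrich.
RULES = {
    "angry": [lambda f: f["energy"] > 0.7, lambda f: f["pitch_var"] > 0.5],
    "sad":   [lambda f: f["energy"] < 0.3, lambda f: f["speech_rate"] < 0.4],
}
EMOJI = {"angry": "😠", "sad": "😢"}

def enrich(text, features, min_rules=2):
    for emotion, rules in RULES.items():
        if sum(rule(features) for rule in rules) >= min_rules:
            return f"{text} {EMOJI[emotion]}"   # visual emotion content
    return text

print(enrich("fine.", {"energy": 0.1, "pitch_var": 0.2, "speech_rate": 0.3}))
```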
PREDICTING PRONUNCIATIONS WITH WORD STRESS
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating word pronunciations. One of the methods includes determining, by one or more computers, spelling data that indicates the spelling of a word, providing the spelling data as input to a trained recurrent neural network, the recurrent neural network having been trained to indicate characteristics of word pronunciations based at least on data indicating the spelling of words, receiving output indicating a stress pattern for pronunciation of the word generated by the trained recurrent neural network in response to providing the spelling data as input, using the output of the trained recurrent neural network to generate pronunciation data indicating the stress pattern for a pronunciation of the word, and providing, by the one or more computers, the pronunciation data to a text-to-speech system or an automatic speech recognition system.
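A minimal sketch of the spelling-to-stress idea, assuming PyTorch: a character-level recurrent network emits a stress label per character of the spelling. The architecture, sizes, and label set are assumptions, and the network here is untrained, whereas the patent's network is trained on pronunciation data:

```python
import torch
import torch.nn as nn

class StressRNN(nn.Module):
    """Character-level RNN mapping a spelling to per-character stress logits."""
    def __init__(self, n_chars=28, n_stress=3, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_chars, 32)
        self.rnn = nn.GRU(32, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_stress)  # none / primary / secondary

    def forward(self, char_ids):
        h, _ = self.rnn(self.embed(char_ids))
        return self.out(h)

word = "before"
ids = torch.tensor([[ord(c) - ord("a") for c in word]])
logits = StressRNN()(ids)
print(logits.argmax(dim=-1))  # predicted stress pattern over the spelling
```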
Sound sample verification for generating sound detection model
A method for verifying at least one sound sample to be used in generating a sound detection model in an electronic device includes receiving a first sound sample; extracting a first acoustic feature from the first sound sample; receiving a second sound sample; extracting a second acoustic feature from the second sound sample; and determining whether the second acoustic feature is similar to the first acoustic feature.
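A sketch of the verification step under stated assumptions: a simple spectral feature (mean log magnitude spectrum) stands in for the extracted acoustic feature, and cosine similarity against an invented threshold stands in for the similarity determination; real systems would typically use MFCCs and a tuned measure:

```python
import numpy as np

def acoustic_feature(samples, n_fft=256):
    """One feature vector per sound sample: mean log magnitude spectrum."""
    frames = samples[: len(samples) // n_fft * n_fft].reshape(-1, n_fft)
    spectrum = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(spectrum).mean(axis=0)

def is_similar(feat_a, feat_b, threshold=0.95):
    """Accept the second sample only if its feature resembles the first."""
    cos = feat_a @ feat_b / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b))
    return cos >= threshold

rng = np.random.default_rng(1)
first, second = rng.standard_normal(4096), rng.standard_normal(4096)
print(is_similar(acoustic_feature(first), acoustic_feature(second)))
```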