
Clockwork hierarchical variational encoder

A method for providing a frame-based mel spectral representation of speech includes receiving a text utterance having at least one word and selecting a mel spectral embedding for the text utterance. Each word in the text utterance has at least one syllable, and each syllable has at least one phoneme. For each phoneme, using the selected mel spectral embedding, the method also includes: predicting a duration of the corresponding phoneme by encoding linguistic features of the corresponding phoneme with a corresponding syllable embedding for the syllable that includes the corresponding phoneme; and generating a plurality of fixed-length predicted mel-frequency spectrogram frames based on the predicted duration for the corresponding phoneme. Each fixed-length predicted mel-frequency spectrogram frame represents mel-spectral information of the corresponding phoneme.
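The duration-to-frames step above can be sketched as follows. This is a minimal illustration, not the patented method: the 12.5 ms frame hop and 80 mel bins are common TTS defaults assumed here, and the frame contents are placeholders.

```python
# Hypothetical sketch: turning a predicted phoneme duration into a number of
# fixed-length mel-spectrogram frames. Hop size and mel-bin count are assumed.
import math

FRAME_HOP_MS = 12.5   # assumed fixed frame step, not from the patent
N_MELS = 80           # assumed number of mel bins per frame

def frames_for_phoneme(predicted_duration_ms: float) -> int:
    """Number of fixed-length frames needed to cover the predicted duration."""
    return max(1, math.ceil(predicted_duration_ms / FRAME_HOP_MS))

def render_phoneme(predicted_duration_ms: float) -> list:
    """Placeholder decoder: one zero-filled mel vector per frame."""
    n = frames_for_phoneme(predicted_duration_ms)
    return [[0.0] * N_MELS for _ in range(n)]
```

A 100 ms phoneme thus maps to eight 12.5 ms frames, each carrying one mel vector.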

SYSTEM FOR DETECTING FRAUDULENT ELECTRONIC COMMUNICATIONS IMPERSONATION, INSIDER THREATS AND ATTACKS
20170251006 · 2017-08-31

A system for detecting fraudulent emails from entities impersonating legitimate senders, intended to cause recipients to unknowingly conduct unauthorized transactions, for example, transferring funds or divulging sensitive information. The system monitors emails sent from and received at the protected domain to detect suspected fraudulent emails. The emails are monitored for, among other aspects, linguistic variations, changes in normal patterns of email communication, and new or unfamiliar source domains. Suspicious emails can be held and flagged for later review, discarded, or passed through with an alert raised indicating that a review is needed.
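The source-domain check described above can be illustrated with a small sketch. The thresholds, domain lists, and the lookalike heuristic below are assumptions for illustration; the patent does not specify a particular similarity measure.

```python
# Illustrative sketch: flag a message whose sender domain is unfamiliar, and
# hold one that closely resembles the protected domain (a common
# impersonation tactic). All names and thresholds here are assumed.
from difflib import SequenceMatcher

PROTECTED_DOMAIN = "example.com"            # hypothetical protected domain
KNOWN_DOMAINS = {"example.com", "partner.org"}

def classify(sender_domain: str) -> str:
    """Return 'pass', 'review', or 'hold' for a sender domain."""
    if sender_domain in KNOWN_DOMAINS:
        return "pass"
    similarity = SequenceMatcher(None, sender_domain, PROTECTED_DOMAIN).ratio()
    if similarity > 0.8:       # lookalike of the protected domain: hold it
        return "hold"
    return "review"            # merely unfamiliar: flag for later review
```

For instance, "examp1e.com" (digit 1 for the letter l) is similar enough to the protected domain to be held, while an unrelated new domain is only flagged for review.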

Method and apparatus for electronically synthesizing acoustic waveforms representing a series of words based on syllable-defining beats
09747892 · 2017-08-29

Speech is modeled as a cognitively driven sensory-motor activity in which the form of speech is the result of categorization processes that any given subject recreates by focusing on creating sound patterns represented by syllables. These syllables are combined in characteristic patterns to form words, which are, in turn, combined in characteristic patterns to form utterances. A speech recognition process first identifies syllables in an electronic waveform representing ongoing speech. The pattern of syllables is then deconstructed into a standard form that is used to identify words. The words are then concatenated to identify an utterance. Similarly, a speech synthesis process converts written words into patterns of syllables. The pattern of syllables is then processed to produce the characteristic rhythmic sound of naturally spoken words. The words are then assembled into an utterance, which is also processed to produce natural-sounding speech.

Method and computer system for performing audio search on a social networking platform

Methods and computer systems for audio search on a social networking platform are disclosed. The method includes: while running a social networking application, receiving a first audio input from a user of the computer system, the first audio input including one or more search keywords; generating a first audio confusion network from the first audio input; determining whether the first audio confusion network matches at least one of one or more second audio confusion networks, wherein a respective second audio confusion network was generated from a corresponding second audio input associated with a chat session of which the user is a participant; and identifying a second audio input corresponding to the at least one second audio confusion network that matches the first audio confusion network, wherein the identified second audio input includes the one or more search keywords that are included in the first audio input.
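The matching step can be sketched with a deliberately simplified representation. Real confusion networks are word lattices with aligned arcs; here, as an assumption for illustration, a network is an ordered list of slots, each mapping hypothesis words to posterior probabilities, and two networks "match" when every query slot shares a sufficiently probable word with some candidate slot.

```python
# Assumed toy representation of an audio confusion network: an ordered list of
# slots, each a dict of {hypothesis word: posterior probability}. Real systems
# perform lattice alignment; this sketch only checks shared probable words.
ConfusionNetwork = list

def probable_words(slot: dict, min_prob: float) -> set:
    return {w for w, p in slot.items() if p >= min_prob}

def networks_match(query: ConfusionNetwork, candidate: ConfusionNetwork,
                   min_prob: float = 0.3) -> bool:
    """True when every query slot shares a probable word with some candidate slot."""
    for q_slot in query:
        q_words = probable_words(q_slot, min_prob)
        if not any(q_words & probable_words(c_slot, min_prob)
                   for c_slot in candidate):
            return False
    return True
```

A chat session's stored audio would match a spoken query when the query's keyword hypotheses appear among the session network's probable words.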

Voice wake-up detection from syllable and frequency characteristic

A voice wake-up apparatus for an electronic device includes a voice activity detection circuit, a storage circuit, and a smart detection circuit. The voice activity detection circuit receives an input sound signal and detects a voice activity section of the input sound signal. The storage circuit stores a predetermined voice sample. The smart detection circuit receives the input sound signal and performs time-domain and frequency-domain detection on the voice activity section to generate a syllable and frequency characteristic detection result, compares that result with the predetermined voice sample, and, when the result matches the predetermined voice sample, generates a wake-up signal to a processing circuit of the electronic device to wake up the processing circuit.
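The comparison step can be sketched under a strong simplifying assumption: model the "syllable and frequency characteristic detection result" as a (syllable count, dominant frequency) pair. The representation and tolerance below are illustrative, not taken from the patent.

```python
# Hedged sketch: the detection result and the stored voice sample are each
# assumed to be a (syllable_count, dominant_frequency_hz) pair; the frequency
# tolerance is an arbitrary illustrative value.
def matches_sample(detected, sample, freq_tol_hz: float = 50.0) -> bool:
    syllables_ok = detected[0] == sample[0]
    freq_ok = abs(detected[1] - sample[1]) <= freq_tol_hz
    return syllables_ok and freq_ok

def wake_signal(detected, sample) -> bool:
    """Emit a wake-up signal only when both characteristics match."""
    return matches_sample(detected, sample)
```

Gating the wake-up on both a time-domain cue (syllable count) and a frequency-domain cue makes accidental wake-ups from non-speech noise less likely.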

Phonetic patterns for fuzzy matching in natural language processing

A token is extracted from a natural language input. A phonetic pattern is computed corresponding to the token, the phonetic pattern including a sound pattern that represents a part of the token when the token is spoken. New data is created from data of the phonetic pattern, the new data including a syllable sequence corresponding to the phonetic pattern. A state of a data storage device is changed by storing the new data in a matrix of syllable sequences corresponding to the token. An option corresponding to the token is selected by executing a fuzzy matching algorithm using a processor and a memory, the selection being based on a syllable sequence in the matrix.
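Fuzzy matching over syllable sequences can be sketched as follows. The syllable splitter below is a naive vowel-group heuristic standing in for a real phonetic model, and the scoring via sequence similarity is an assumption; the patent does not prescribe either.

```python
# Illustrative sketch: split tokens into rough "syllables" (naive vowel-group
# heuristic, NOT a real phonetic model) and pick the option whose syllable
# sequence is most similar to the token's.
import re
from difflib import SequenceMatcher

def syllables(token: str) -> list:
    # Each match is a consonant cluster plus a vowel group, with any trailing
    # consonants attached at the end of the word.
    found = re.findall(r"[^aeiou]*[aeiou]+(?:[^aeiou]*$)?", token.lower())
    return found or [token]

def best_option(token: str, options: list) -> str:
    """Return the option whose syllable sequence best matches the token's."""
    target = syllables(token)
    return max(options,
               key=lambda o: SequenceMatcher(None, target, syllables(o)).ratio())
```

Comparing syllable sequences rather than raw characters lets phonetically close spellings score highly even when they differ letter by letter.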

CREATIVE WORK SYSTEMS AND METHODS THEREOF
20220230626 · 2022-07-21

A computer-implemented method for measuring the cognitive load of a user creating a creative work in a creative work system may include: generating at least one verbal statement capable of provoking at least one verbal response from the user; prompting the user to vocally interact with the creative work system by vocalizing the at least one generated verbal statement to the user via an audio interface of the creative work system; obtaining the at least one verbal response from the user via the audio interface; and determining the cognitive load of the user based on the at least one verbal response obtained from the user, wherein generating the at least one verbal statement is based on at least one predicted verbal response suitable for determining the cognitive load of the user.

VOICE CONTROL METHOD AND APPARATUS, CHIP, EARPHONES, AND SYSTEM
20220230657 · 2022-07-21

A voice control method and apparatus, a chip, earphones, and a system. The method includes: recognizing (001) whether a voice signal includes a keyword and, in response to the voice signal including the keyword, executing (001a) an instruction corresponding to the keyword or sending the instruction. Before recognizing whether the voice signal includes the keyword, the method determines (002) whether the voice signal is from a target user and, in response to the voice signal being from the target user, starts to recognize (001) whether the voice signal includes the keyword; alternatively, during recognition, the method determines (002) whether the voice signal is from the target user and, in response to the voice signal being from a non-target user, stops recognizing (003a) whether the voice signal includes the keyword. The voice control method reduces the power consumption of voice control and improves battery life.
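The first ordering described above (speaker check before keyword spotting) can be sketched with stand-in callables. The verifier and spotter functions below are hypothetical placeholders, not a real chip API.

```python
# Hedged sketch of speaker-gated keyword spotting: verify the speaker first so
# that the (costlier) keyword recognizer never runs for non-target users.
# Both callables are stand-ins for hardware/model components.
from typing import Callable, Optional

def handle_voice(signal: bytes,
                 is_target_user: Callable[[bytes], bool],
                 spot_keyword: Callable[[bytes], Optional[str]]) -> Optional[str]:
    """Return the spotted keyword's instruction, or None when gated out."""
    if not is_target_user(signal):     # step (002): speaker check first
        return None                    # non-target speaker: skip recognition
    return spot_keyword(signal)        # step (001): keyword spotting
```

Running the cheap speaker check first is what yields the claimed power saving: the keyword recognizer only runs on signals that pass the gate.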

Filtering model training method and speech recognition method
11211052 · 2021-12-28

A filtering model training method includes obtaining N original syllables, obtaining N recognized syllables, and obtaining N syllable distances based on the N original syllables and the N recognized syllables, where the N syllable distances are in a one-to-one correspondence with N syllable pairs, the N original syllables and the N recognized syllables form the N syllable pairs, each syllable pair includes an original syllable and a recognized syllable that correspond to each other, and each syllable distance is used to indicate a similarity between an original syllable and a recognized syllable that are included in a corresponding syllable pair.
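One plausible realization of such a syllable distance, offered purely as a sketch, is a normalized edit distance over the phones of each syllable pair; the patent does not fix a particular metric, so the normalization below is an assumption.

```python
# Illustrative syllable distance: Levenshtein edit distance over phone
# sequences, normalized by the longer sequence so 0.0 means identical and
# 1.0 means completely different. The metric choice is an assumption.
def levenshtein(a: list, b: list) -> int:
    """Classic dynamic-programming edit distance with a rolling row."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev = dp[0]
        dp[0] = i
        for j, y in enumerate(b, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # deletion
                        dp[j - 1] + 1,      # insertion
                        prev + (x != y))    # substitution (or match)
            prev = cur
    return dp[-1]

def syllable_distance(original: list, recognized: list) -> float:
    """Similarity indicator for one (original, recognized) syllable pair."""
    if not original and not recognized:
        return 0.0
    return levenshtein(original, recognized) / max(len(original), len(recognized))
```

Computing this over all N pairs yields the N syllable distances used to train the filtering model.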

Artificial intelligence device for learning deidentified speech signal and method therefor
11211047 · 2021-12-28

An artificial intelligence device for learning a de-identified speech signal includes a memory configured to store a speech recognition model, a microphone configured to acquire an original speech signal, and a processor configured to perform de-identification with respect to the acquired original speech signal and perform speech recognition with respect to the de-identified speech signal through the speech recognition model.
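The pipeline ordering (de-identify first, then recognize only the de-identified signal) can be sketched as follows. The de-identification shown, amplitude normalization plus dithering, is purely illustrative; real systems might alter pitch or other speaker characteristics, and the recognizer is a stand-in.

```python
# Hedged sketch of the processing order: the original signal is de-identified
# before recognition, so the raw voice never reaches the model. The specific
# transform and the recognizer stub are assumptions for illustration.
import random

def deidentify(samples: list, noise: float = 0.01, seed: int = 0) -> list:
    """Normalize amplitude and add small dither (illustrative transform)."""
    rng = random.Random(seed)
    peak = max((abs(s) for s in samples), default=1.0) or 1.0
    return [s / peak + rng.uniform(-noise, noise) for s in samples]

def recognize(samples: list) -> str:
    # Stand-in for the stored speech recognition model.
    return "transcript"

def process(original: list) -> str:
    """Recognition runs only on the de-identified signal."""
    return recognize(deidentify(original))
```

Because `recognize` only ever sees `deidentify`'s output, any learning performed on that signal avoids storing the speaker's identifiable voice.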