G10L2015/025

Techniques for language independent wake-up word detection
11545146 · 2023-01-03

A user device configured to perform wake-up word detection in a target language. The user device comprises: at least one microphone (430) configured to obtain acoustic information from the environment of the user device; at least one computer-readable medium (435) storing an acoustic model (150) trained on a corpus of training data (105) in a source language different from the target language, and storing a first sequence of speech units obtained by providing the acoustic model (150) with acoustic features (110) derived from audio of the user speaking a wake-up word in the target language; and at least one processor (415, 425) coupled to the at least one computer-readable medium (435) and programmed to: receive, from the at least one microphone (430), acoustic input from the user speaking in the target language while the user device is operating in a low-power mode; apply acoustic features derived from the acoustic input to the acoustic model (150) to obtain a second sequence of speech units corresponding to the acoustic input; determine whether the user spoke the wake-up word, at least in part by comparing the first sequence of speech units to the second sequence of speech units; and exit the low-power mode if it is determined that the user spoke the wake-up word.
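The comparison step, matching an enrolled sequence of speech units against a sequence decoded from live audio, can be sketched as an edit-distance test. The unit labels, the 25% distance threshold, and the helper names below are illustrative assumptions, not the patented implementation:

```python
# Minimal sketch of the matching step: the enrolled wake-up word is stored
# as a sequence of speech units produced by a source-language acoustic
# model, and incoming audio is matched against it by edit distance.

def edit_distance(a, b):
    """Levenshtein distance between two speech-unit sequences."""
    dp = list(range(len(b) + 1))
    for i, ua in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, ub in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ua != ub))  # substitution
    return dp[-1]

def is_wake_word(enrolled, observed, max_relative_distance=0.25):
    """Decide whether the observed unit sequence matches the enrolled one."""
    return edit_distance(enrolled, observed) <= max_relative_distance * len(enrolled)

enrolled = ["HH", "EH", "L", "OW", "K", "AO", "R"]  # invented enrollment
print(is_wake_word(enrolled, ["HH", "EH", "L", "OW", "K", "AA", "R"]))  # True
print(is_wake_word(enrolled, ["G", "UH", "D", "B", "AY"]))              # False
```

A relative threshold keeps the tolerance proportional to the length of the enrolled wake-up word.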

TRACKING ARTICULATORY AND PROSODIC DEVELOPMENT IN CHILDREN

Systems, devices, and methods for tracking articulatory and prosodic development in children are disclosed. Human speech in a given language can be divided into phonemes: sounds or groups of sounds perceived by speakers of the language to have a common linguistic function (e.g., consonant sounds, vowel sounds). In an exemplary aspect, a normative model can be generated for the production characteristics of each phoneme in a given language using a database of normative speech samples. One or more speech samples of a human subject can be analyzed to identify the phonemes used by the human subject and measured against the normative model. Based on this analysis, a normed score of the articulation accuracy, duration, rhythm, volume, and/or other production characteristics is generated for each phoneme of the human subject's speech sample.
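As a minimal sketch of the scoring idea, assuming a normative model that stores a mean and standard deviation per phoneme (here for duration only; all statistics and measurements are invented):

```python
# Illustrative sketch of scoring a subject's phoneme productions against
# a normative model. A real system would derive these statistics from a
# database of normative speech samples.

NORMS = {  # phoneme -> (mean duration in ms, standard deviation)
    "s": (120.0, 20.0),
    "ae": (150.0, 25.0),
    "t": (80.0, 15.0),
}

def normed_scores(measured):
    """Return a z-score per phoneme: 0 means exactly at the norm."""
    scores = {}
    for phoneme, duration in measured.items():
        mean, std = NORMS[phoneme]
        scores[phoneme] = (duration - mean) / std
    return scores

sample = {"s": 160.0, "ae": 150.0, "t": 95.0}  # durations from one utterance
print(normed_scores(sample))  # {'s': 2.0, 'ae': 0.0, 't': 1.0}
```

The same z-scoring would apply per production characteristic (rhythm, volume, etc.), one normative distribution each.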

SELECTIVE FINE-TUNING OF SPEECH

Speech conveyed over a network, such as during an electronic conference, may be difficult to understand if the recipient has difficulty understanding the speech of users having a particular speech attribute. Other recipients, however, may have no difficulty understanding the speech. As provided herein, speech provided by a user may have phonemes comprising accents or other speech patterns that, if removed, make the speech more readily understood by a particular user. Such alterations are provided only to the users that require them, such as by a server or a specific user's communication device, without affecting the speech concurrently presented to other users.

Transportation vehicle control with phoneme generation
11530930 · 2022-12-20

A transportation vehicle having a navigation system and an operating system connected to the navigation system for data transmission via a bus system. The transportation vehicle has a microphone and includes a phoneme generation module for generating phonemes from an acoustic voice signal or from the output signal of the microphone, the generated phonemes being part of a predefined selection of exclusively monosyllabic phonemes, and a phoneme-to-grapheme module for generating inputs to operate the transportation vehicle based on the monosyllabic phonemes generated by the phoneme generation module.
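A toy sketch of the two modules, assuming an invented monosyllabic inventory and command table (the patent specifies neither):

```python
# Sketch: recognized phonemes are filtered against a predefined selection
# of exclusively monosyllabic phonemes, then a phoneme-to-grapheme step
# maps the surviving sequence to an operating input for the vehicle.

MONOSYLLABIC_INVENTORY = {"nav", "home", "zoom", "in", "out"}  # invented

COMMANDS = {  # grapheme sequence -> operating input (invented)
    ("nav", "home"): "navigation:set_destination_home",
    ("zoom", "in"): "map:zoom_in",
    ("zoom", "out"): "map:zoom_out",
}

def filter_phonemes(raw_phonemes):
    """Keep only phonemes from the predefined monosyllabic selection."""
    return tuple(p for p in raw_phonemes if p in MONOSYLLABIC_INVENTORY)

def phonemes_to_input(raw_phonemes):
    """Map a recognized phoneme sequence to an operating input, if any."""
    return COMMANDS.get(filter_phonemes(raw_phonemes))

print(phonemes_to_input(["uh", "zoom", "in"]))  # map:zoom_in
```

Restricting recognition to a small monosyllabic inventory keeps the matching step cheap enough for an in-vehicle module.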

Reading progress estimation based on phonetic fuzzy matching and confidence interval

An example method for identifying a reading location in a text source as a user reads the text source aloud includes determining phoneme data of the text source, the text source comprising a sequence of words; receiving audio data comprising a spoken word associated with the text source; comparing, by a processing device, the phoneme data of the text source and phoneme data of the audio data; and identifying a location in the sequence of words based on the comparison of the phoneme data.
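The matching loop might look like the following sketch, where the pronunciation lexicon, the positional similarity measure, and the 0.6 threshold are all illustrative assumptions:

```python
# Sketch of the matching step: each word of the text source is expanded
# into a phoneme sequence, and phonemes heard in the audio are fuzzily
# matched against the next expected word to advance the reading location.

LEXICON = {  # tiny invented pronunciation lexicon
    "the": ["DH", "AH"],
    "cat": ["K", "AE", "T"],
    "sat": ["S", "AE", "T"],
}

def similarity(expected, heard):
    """Fraction of aligned positions that agree (a crude fuzzy match)."""
    matches = sum(e == h for e, h in zip(expected, heard))
    return matches / max(len(expected), len(heard))

def advance_location(words, location, heard_phonemes, threshold=0.6):
    """Advance the reading location if the heard phonemes match the next word."""
    if location < len(words):
        expected = LEXICON[words[location]]
        if similarity(expected, heard_phonemes) >= threshold:
            return location + 1
    return location

words = ["the", "cat", "sat"]
loc = advance_location(words, 0, ["DH", "AH"])        # exact match for "the"
loc = advance_location(words, loc, ["K", "AE", "D"])  # close enough to "cat"
print(loc)  # 2
```

The fuzzy threshold is what tolerates imperfect pronunciations while reading aloud; the confidence-interval aspect of the title would sit on top of this per-word score.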

Method and apparatus for generating hint words for automated speech recognition
11527234 · 2022-12-13

Systems and methods for determining hint words that improve the accuracy of automated speech recognition (ASR) systems. Hint words are determined in the context of a user issuing voice commands to a voice interface system. Candidate terms are initially taken from the most frequently occurring terms in operation of the voice interface system; for example, the terms that arise most frequently in electronic search queries or received commands. Certain of these terms are selected as hint words, and the selected hint words are then transmitted to an ASR system to assist in the translation of speech to text.
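A minimal sketch of the selection step, assuming a plain frequency count over a log of past queries (the log contents and cutoff are invented):

```python
# Sketch: count terms from past queries/commands and pick the most
# frequent as hint words; a real system would then transmit these to
# the ASR service alongside each recognition request.

from collections import Counter

def select_hint_words(queries, top_n=3):
    """Pick the most frequently occurring terms as ASR hint words."""
    counts = Counter(term for q in queries for term in q.lower().split())
    return [term for term, _ in counts.most_common(top_n)]

queries = [
    "play jazz music",
    "play rock music",
    "pause music",
    "play the news",
]
print(select_hint_words(queries)[:2])  # ['play', 'music']
```

In practice a stop-word filter and per-user weighting would refine the raw counts before the top terms are sent as hints.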

SYSTEMS AND METHODS FOR GENERATING LOCALE-SPECIFIC PHONETIC SPELLING VARIATIONS

Systems and methods for generating phonetic spelling variations of a given word based on locale-specific pronunciations. A phoneme-letter density model may be configured to identify a phoneme sequence corresponding to an input word, and to identify all character sequences that may correspond to an input phoneme sequence, along with their respective probabilities. A phoneme-phoneme error model may be configured to identify locale-specific alternative phoneme sequences that may correspond to a given phoneme sequence, and their respective probabilities. Using these two models, a processing system may be configured to generate, for a given input word, a list of alternative character sequences that may correspond to the input word based on locale-specific pronunciations, and/or a probability distribution representing how likely each alternative character sequence is to correspond to the input word.
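Chaining the two models amounts to marginalizing over the alternative phoneme sequences. The probability tables below are invented for illustration:

```python
# Toy sketch of chaining the two models: the phoneme-phoneme error model
# proposes locale-specific alternative phoneme sequences with probabilities,
# and the phoneme-letter density model maps each phoneme sequence to
# candidate spellings with probabilities.

ERROR_MODEL = {  # P(alternative sequence | canonical sequence), one locale
    ("T", "AH"): {("T", "AH"): 0.7, ("D", "AH"): 0.3},
}

DENSITY_MODEL = {  # P(character sequence | phoneme sequence)
    ("T", "AH"): {"ta": 0.6, "tu": 0.4},
    ("D", "AH"): {"da": 0.8, "duh": 0.2},
}

def spelling_variants(canonical):
    """Distribution over spellings, marginalized over phoneme alternatives."""
    dist = {}
    for alt, p_alt in ERROR_MODEL[canonical].items():
        for spelling, p_sp in DENSITY_MODEL[alt].items():
            dist[spelling] = dist.get(spelling, 0.0) + p_alt * p_sp
    return {s: round(p, 2) for s, p in sorted(dist.items(), key=lambda kv: -kv[1])}

print(spelling_variants(("T", "AH")))
# {'ta': 0.42, 'tu': 0.28, 'da': 0.24, 'duh': 0.06}
```

Each output probability is simply P(alternative phonemes) × P(spelling | phonemes), summed over all alternatives that yield the same spelling.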

SYSTEM AND METHOD FOR POSTHUMOUS DYNAMIC SPEECH SYNTHESIS USING NEURAL NETWORKS AND DEEP LEARNING
20220383850 · 2022-12-01

A system and method for posthumous dynamic speech synthesis digitally clones the original voice of a deceased user, allowing an operational user to remember the original user post mortem. The system utilizes a neural network and deep learning to digitally duplicate the vocal frequency, personality, and characteristics of the deceased user's original voice. This systematic approach to dynamic speech synthesis involves several stages of compression, coding, decoding, and training on the speech patterns of the original voice. The data processing of the original voice includes audio sampling and a Lossy-Lossless method of dual compression. Additionally, the voice data is compressed to generate a Mel spectrogram. A voice codec converts the spectrogram into a PNG file, which is synthesized into the cloned voice. After the algorithmic operations, coding, and decoding of the voice data, the subsequently generated cloned voice is implemented into a physical media outlet for consumption by the operational user.

Assessing Reading Ability Through Grapheme-Phoneme Correspondence Analysis
20220383895 · 2022-12-01

A computing device translates a spoken word into a corresponding ordered set of spoken phonemes and analyzes correctness of the spoken word relative to a target word. The analyzing includes attempting to locate each of the spoken phonemes in an ordered set of grapheme-phoneme correspondences (GPCs) describing the target word, and determining whether or not the ordered set of spoken phonemes comprises the same number of phonemes as the ordered set of GPCs. The analyzing also includes comparing the order of the ordered set of spoken phonemes against the order of the ordered set of GPCs. The computing device generates a report, based on the analyzing, that identifies at least one of the GPCs in the ordered set of GPCs as having been incorrectly applied in decoding the target word.
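A minimal sketch of the analysis, assuming a positional comparison of spoken phonemes against the target word's ordered GPCs (the report format is an assumption):

```python
# Sketch: the target word is represented as an ordered list of
# grapheme-phoneme correspondences (GPCs), and the spoken phonemes are
# checked for count and position; mismatched GPCs are reported as
# having been incorrectly applied when decoding the word.

def analyze(spoken_phonemes, target_gpcs):
    """Compare spoken phonemes against a target word's ordered GPCs."""
    target_phonemes = [phoneme for _, phoneme in target_gpcs]
    report = {
        "same_count": len(spoken_phonemes) == len(target_phonemes),
        "incorrect_gpcs": [],
    }
    for i, (grapheme, phoneme) in enumerate(target_gpcs):
        if i >= len(spoken_phonemes) or spoken_phonemes[i] != phoneme:
            report["incorrect_gpcs"].append((grapheme, phoneme))
    return report

# Target word "ship": GPCs ("sh" -> "SH", "i" -> "IH", "p" -> "P")
target = [("sh", "SH"), ("i", "IH"), ("p", "P")]
print(analyze(["S", "IH", "P"], target))
# {'same_count': True, 'incorrect_gpcs': [('sh', 'SH')]}
```

Here the reader produced /S/ for the "sh" grapheme, so that GPC is flagged as incorrectly applied even though the phoneme counts match.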

Collaborative content management

A technique manages collaborative web sessions (CWS). The technique receives graphical content of a CWS. The technique translates a set of portions of the graphical content into text output. The technique provides the text output to a set of text application services. The set of text application services associate the text output with the CWS.