G10L15/14

Device and method for supporting creation of reception history, non-transitory computer readable recording medium

The present invention makes it possible to efficiently create an appropriate dialogue history. This device (1) for supporting the creation of a dialogue history is provided with: a dialogue utterance focus point information store (19) which, for utterance data representing utterances, stores dialogue scene data indicating the dialogue scene of each utterance, the utterance type of each utterance, and utterance focus point information for each utterance; and an input/output interface (20) which, for each dialogue scene indicated by the dialogue scene data stored in the dialogue utterance focus point information store (19), causes a display device to display one or more of the utterances, utterance types, and utterance focus point information. Based on an operation input to the input/output interface (20), the dialogue utterance focus point information store (19) adds, modifies, or deletes one or more of the dialogue scene data, the utterance type, and the utterance focus point information.

Spoken words analyzer
11636835 · 2023-04-25

A lyrics analyzer generates tags and explicitness indicators for a set of tracks. These tags may indicate the genre, mood, occasion, or other features of each track. The lyrics analyzer does so by generating an n-dimensional vector relating to a set of topics extracted from the lyrics and then using those vectors to train a classifier to determine whether each tag applies to each track. The lyrics analyzer may also generate playlists for a user based on a single seed song by comparing the lyrics vector or the lyrics and acoustics vectors of the seed song to other songs to select songs that closely match the seed song. Such a playlist generator may also take into account the tags generated for each track.
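
The playlist-by-seed step above can be illustrated with cosine similarity over topic vectors: each track has an n-dimensional lyrics vector, and candidates are ranked by how closely their vectors match the seed song's. The vectors and track names below are invented for the example; a real system would derive them from the lyrics.

```python
import math

def cosine(a, b):
    # Cosine similarity between two same-length topic vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def playlist(seed_vector, tracks, k=2):
    # Rank candidate tracks by similarity of their lyrics vectors to the seed
    # and keep the k closest matches.
    ranked = sorted(tracks.items(),
                    key=lambda kv: cosine(seed_vector, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

seed_vec = [0.9, 0.1, 0.0]
library = {
    "track_a": [0.8, 0.2, 0.0],
    "track_b": [0.0, 0.1, 0.9],
    "track_c": [0.7, 0.3, 0.1],
}
print(playlist(seed_vec, library))  # tracks closest to the seed come first
```

A combined lyrics-and-acoustics comparison, as the abstract mentions, could concatenate the two vectors before the same similarity ranking.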

Joint endpointing and automatic speech recognition

A method includes receiving audio data of an utterance and processing the audio data to obtain, as output from a speech recognition model configured to jointly perform speech decoding and endpointing of utterances: partial speech recognition results for the utterance; and an endpoint indication indicating when the utterance has ended. While processing the audio data, the method also includes detecting, based on the endpoint indication, the end of the utterance. In response to detecting the end of the utterance, the method also includes terminating the processing of any subsequent audio data received after the end of the utterance was detected.
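
The control flow described above can be sketched as a streaming loop: a model jointly emits partial recognition results and an endpoint flag, and the caller stops processing audio as soon as the endpoint is signaled. The model below is a stand-in stub, not a real joint speech recognizer.

```python
def fake_joint_model(frame):
    # Stand-in for a model that jointly decodes and endpoints: returns a
    # partial transcript fragment and whether the utterance has ended.
    text, is_endpoint = frame
    return text, is_endpoint

def recognize(audio_frames):
    transcript = []
    for frame in audio_frames:
        partial, endpointed = fake_joint_model(frame)
        transcript.append(partial)
        if endpointed:
            # End of utterance detected: terminate processing of any
            # subsequent audio data.
            break
    return " ".join(transcript)

frames = [("hello", False), ("world", True), ("ignored", False)]
print(recognize(frames))  # frames after the endpoint are never processed
```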

Method and device for evaluating quality of content, electronic equipment, and storage medium

Text content is determined. The text content is input to a content classifying model. The content classifying model is adapted to determine a probability of the text content belonging to a category. An evaluated value of quality of the text content is determined according to the probability of the category and a weight of the category. The weight represents importance of the category.
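
The scoring rule above reduces to a weighted sum: the classifier's per-category probabilities are combined with per-category weights, where each weight represents the importance of its category. The categories and numbers below are invented for illustration; the abstract does not specify them.

```python
def quality_score(probabilities, weights):
    # Evaluated value of quality: sum over categories of the probability of
    # the text belonging to that category times the category's weight.
    return sum(probabilities[c] * weights[c] for c in probabilities)

# Hypothetical classifier output and importance weights.
probs = {"informative": 0.7, "clickbait": 0.2, "spam": 0.1}
wts = {"informative": 1.0, "clickbait": -0.5, "spam": -1.0}
print(quality_score(probs, wts))  # 0.7 - 0.1 - 0.1 = 0.5
```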

Token-wise training for attention based end-to-end speech recognition
11636848 · 2023-04-25

A method of attention-based end-to-end (A-E2E) automatic speech recognition (ASR) training includes performing cross-entropy training of a model based on one or more input features of a speech signal, determining a posterior probability vector at the time of the first wrong token among one or more output tokens of the cross-entropy-trained model, and determining a loss of the first wrong token at that time based on the determined posterior probability vector. The method further includes determining a total loss of a training set based on the determined loss of the first wrong token, and updating the cross-entropy-trained model based on the determined total loss of the training set.
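
The per-utterance step above can be sketched as follows: find the first output token that disagrees with the reference, and take the cross-entropy (negative log-probability of the correct token) from the posterior vector at that position as the utterance's loss. The token alphabet and posterior values are made up for the example.

```python
import math

def first_wrong_token_loss(posteriors, hypothesis, reference):
    # posteriors[t] maps each token to its posterior probability at step t.
    for t, (hyp, ref) in enumerate(zip(hypothesis, reference)):
        if hyp != ref:
            # Loss at the time of the first wrong token:
            # -log p(correct token) under the model's posterior.
            return -math.log(posteriors[t][ref])
    return 0.0  # no wrong token in this utterance

posteriors = [
    {"a": 0.9, "b": 0.1},
    {"a": 0.3, "b": 0.7},  # model prefers "b" here; reference says "a"
]
print(first_wrong_token_loss(posteriors, ["a", "b"], ["a", "a"]))
```

The total training-set loss in the abstract would then aggregate these per-utterance losses before the model update.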

Voice data processing based on deep learning

The present application relates to deep-learning-based voice data processing. Voice data to be detected is converted into target text data by a voice recognition model. Keyword text corresponding to a predetermined target voice keyword is then matched against the target text data, and the matching result determines whether the voice data to be detected includes the target voice keyword. Because the voice recognition model is obtained by deep learning from a voice recognition training data set, it yields high-precision target text data, which improves the accuracy of the subsequent matching and thereby addresses the problem of low accuracy in keyword detection on voice data.
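
The matching stage described above can be sketched in two steps: transcribe the voice data, then search the resulting target text for the keyword text. The `fake_transcribe` stub below stands in for the deep-learning voice recognition model, which is not reproduced here; its input format is an assumption for the example.

```python
def fake_transcribe(voice_data):
    # Stand-in for the voice recognition model's target text output; a real
    # system would run the trained model on the audio.
    return voice_data["transcript"]

def contains_keyword(voice_data, keyword_text):
    target_text = fake_transcribe(voice_data)
    # The matching result decides whether the voice data to be detected
    # includes the target voice keyword.
    return keyword_text in target_text

sample = {"transcript": "please reset my account password"}
print(contains_keyword(sample, "password"))  # True
```

Since matching happens in text space, its accuracy hinges directly on the transcription quality, which is the abstract's point about the deep-learned recognition model.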