G10L25/54

Automatic synthesis of translated speech using speaker-specific phonemes

An embodiment includes converting an original audio signal to an original text string, the original audio signal being from a recording of the original text string spoken by a specific person in a source language. The embodiment generates a translated text string by translating the original text string from the source language to a target language, including translation of a word from the source language to the target language. The embodiment assembles a standard phoneme sequence from a set of standard phonemes, where the standard phoneme sequence includes a standard pronunciation of the translated word. The embodiment also associates a custom phoneme with a standard phoneme of the standard phoneme sequence, where the custom phoneme includes the specific person's pronunciation of a sound in the translated word. The embodiment synthesizes the translated text string to a translated audio signal including the translated word pronounced using the custom phoneme.
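The phoneme-substitution step can be sketched as follows. This is a minimal illustration, not the patent's implementation: the toy lexicon, the `OW_speaker42` identifier, and all function names are assumptions made for the example.

```python
# Toy standard phoneme lexicon; a real system would use a full
# pronunciation dictionary for the target language.
STANDARD_LEXICON = {
    "hello": ["HH", "AH", "L", "OW"],
}

# Speaker-specific phonemes recorded for this person (assumed data):
# maps a standard phoneme to that speaker's own rendering of the sound.
CUSTOM_PHONEMES = {
    "OW": "OW_speaker42",
}

def assemble_phoneme_sequence(translated_word, lexicon, custom):
    """Build the standard phoneme sequence for a translated word, then
    associate a custom phoneme wherever the speaker has one recorded."""
    return [custom.get(std, std) for std in lexicon[translated_word]]

sequence = assemble_phoneme_sequence("hello", STANDARD_LEXICON, CUSTOM_PHONEMES)
```

The synthesizer would then render `sequence`, so the translated word is pronounced with the speaker's own sound where available and the standard phoneme elsewhere.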

Personal Voice-Based Information Retrieval System
20180007201 · 2018-01-04 ·

The present invention relates to a system for retrieving information from a network such as the Internet. A user creates a user-defined record in a database that identifies an information source, such as a web site, containing information of interest to the user. This record identifies the location of the information source and also contains a recognition grammar based upon a speech command assigned by the user. Upon receiving from the user a speech command described within the recognition grammar, a network interface system accesses the information source and retrieves the information requested by the user.
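The record lookup can be sketched as below. This is an assumed simplification: the recognition grammar is reduced to a set of accepted phrases, and the record fields and example URL are illustrative, not taken from the patent.

```python
# A user-defined record maps a recognition grammar (simplified here to
# a set of accepted spoken phrases) to an information source location.
records = [
    {
        "grammar": {"check weather", "weather report"},  # assumed phrases
        "source": "https://example.com/weather",         # assumed URL
    },
]

def match_command(command, records):
    """Return the information source whose grammar covers the spoken
    command, or None if no user-defined record matches."""
    for record in records:
        if command in record["grammar"]:
            return record["source"]
    return None
```

A network interface component would then fetch the returned location on the user's behalf.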

Recommending Results In Multiple Languages For Search Queries Based On User Profile
20230237098 · 2023-07-27 ·

Systems and methods for a media guidance application that generates results in multiple languages for search queries. In particular, the media guidance application resolves multiple language barriers by taking automatic and manual user language settings and applying those settings to a variety of potential search results.
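One way to read the described filtering is sketched below; the union of automatic and manual language settings, the result fields, and the function name are all assumptions for illustration.

```python
def recommend_multilingual(results, auto_langs, manual_langs):
    """Keep search results whose language matches either the
    automatically detected or the manually configured user languages."""
    allowed = set(auto_langs) | set(manual_langs)
    return [r for r in results if r["lang"] in allowed]

results = [
    {"title": "Noticias", "lang": "es"},
    {"title": "News", "lang": "en"},
    {"title": "Nouvelles", "lang": "fr"},
]
```

Calling `recommend_multilingual(results, ["en"], ["es"])` would surface the English and Spanish results while filtering out the French one.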

Terminal control method, terminal and non-transitory computer readable storage medium
11568888 · 2023-01-31 ·

A terminal control method, a terminal and a non-transitory computer-readable storage medium are provided. The terminal control method includes: receiving, by a microphone, a detection audio signal emitted from a speaker and having a frequency within a pre-set detection frequency range; acquiring actual audio parameters of the detection audio signal when being received by the microphone, and original audio parameters of the detection audio signal when being emitted from the speaker; determining a relative state between the microphone and the speaker according to the actual audio parameters and the original audio parameters; determining a terminal control operation to be performed, according to the relative state and a pre-set correspondence between relative states and terminal control operations; and performing the determined terminal control operation on a terminal where the microphone is located.
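The state-determination step can be sketched as below. The use of amplitude as the audio parameter, the threshold value, and the state/operation names are assumptions; the patent speaks only of "audio parameters" and a pre-set correspondence.

```python
# Pre-set correspondence between relative states and control operations
# (names assumed for illustration).
OPERATIONS = {"covered": "mute", "uncovered": "unmute"}

def relative_state(actual_amplitude, original_amplitude, threshold=0.5):
    """Classify the microphone/speaker relative state by comparing the
    received amplitude of the detection tone to its emitted amplitude."""
    ratio = actual_amplitude / original_amplitude
    return "covered" if ratio < threshold else "uncovered"

def control_operation(actual_amplitude, original_amplitude):
    """Map the determined relative state to the terminal operation."""
    return OPERATIONS[relative_state(actual_amplitude, original_amplitude)]
```

A strongly attenuated tone (e.g. a hand over the device) maps to one operation, an unobstructed tone to another.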

MACHINE LEARNING MODELS FOR AUTOMATED PROCESSING OF AUDIO WAVEFORM DATABASE ENTRIES
20230238019 · 2023-07-27 ·

A computer system includes memory hardware and processor hardware configured to execute stored instructions. The instructions include training a machine learning model with historical feature vector inputs, including multiple audio data entries and multiple claims data entries, to generate a condition likelihood output indicative of a specified condition associated with one of multiple historical database entities. The instructions include, for each of a set of multiple database entities, generating a feature vector input according to audio data and claims data associated with the entity, processing the feature vector input with the machine learning model to generate the condition likelihood output, and assigning the database entity to an identified condition subset in response to determining that the condition likelihood output is greater than a specified likelihood threshold. The instructions include transforming a user interface to display the condition likelihood output associated with the database entity.
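The scoring loop can be sketched as follows. The feature construction, the stand-in scoring function, and the threshold value are all assumptions; a real system would use the trained model in place of `score`.

```python
def build_features(audio, claims):
    """Assumed feature vector: mean audio value and claim count."""
    return [sum(audio) / len(audio), len(claims)]

def score(features):
    """Stand-in for the trained model's condition likelihood output."""
    return min(1.0, features[1] / 10)

def identify(entities, threshold=0.5):
    """Assign each entity whose likelihood exceeds the threshold to the
    identified-condition subset, keeping the output for display."""
    subset = []
    for name, audio, claims in entities:
        likelihood = score(build_features(audio, claims))
        if likelihood > threshold:
            subset.append((name, likelihood))
    return subset

entities = [
    ("entity_a", [0.1, 0.2], ["claim"] * 6),  # toy audio + claims data
    ("entity_b", [0.3], ["claim"] * 2),
]
```

Here `identify(entities)` keeps only the entity whose likelihood clears the threshold, together with the likelihood value a user interface would display.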
