Patent classifications
G10L15/16
Training method of a speaker identification model based on a first language and a second language
A method of training a speaker identification model that receives voice data as an input and outputs speaker identification information for identifying a speaker of an utterance included in the voice data is provided. The training method includes: performing voice quality conversion on first voice data of a first speaker to generate second voice data of a second speaker; and training the speaker identification model using, as training data, the first voice data and the second voice data.
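A minimal sketch of this training flow, assuming a PyTorch setup; `convert_voice`, the model architecture, and all shapes are illustrative placeholders, not the patent's implementation:

```python
# Minimal sketch (not the patent's implementation): augment speaker-ID
# training data with voice-converted copies of the first speaker's audio,
# labeling the converted copies as a distinct "second speaker".
import torch
import torch.nn as nn

def convert_voice(waveform: torch.Tensor) -> torch.Tensor:
    """Hypothetical voice-quality conversion: a simple waveform
    perturbation stands in for a real conversion model."""
    return waveform * 0.9 + 0.1 * torch.roll(waveform, shifts=1, dims=-1)

class SpeakerIdModel(nn.Module):
    def __init__(self, n_speakers: int = 2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16000, 128), nn.ReLU(),
                                 nn.Linear(128, n_speakers))
    def forward(self, x):
        return self.net(x)

model = SpeakerIdModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# First voice data (speaker 0) and its converted copy (speaker 1).
first = torch.randn(8, 16000)
second = convert_voice(first)
x = torch.cat([first, second])
y = torch.cat([torch.zeros(8, dtype=torch.long),
               torch.ones(8, dtype=torch.long)])

loss = loss_fn(model(x), y)   # train on original + converted data together
loss.backward()
opt.step()
```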
System-on-a-chip incorporating artificial neural network and general-purpose processor circuitry
A circuit system and a method for analyzing audio or video input data, capable of detecting, classifying, and post-processing patterns in an input data stream. The circuit system may consist of one or more digital processors, one or more configurable spiking neural network circuits, and digital logic for the selection of two-dimensional input data. The system may use the neural network circuits to detect and classify patterns, and one or more of the digital processors to perform further detailed analyses on the input data and to signal the result of an analysis to outputs of the system.
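A minimal sketch of the two-stage dataflow the abstract describes, assuming a leaky integrate-and-fire neuron as the spiking detector and a plain Python function as the "processor" stage; all thresholds and shapes are illustrative, not taken from the patent:

```python
# Minimal sketch (assumed architecture, not the patent's circuit): a toy
# leaky integrate-and-fire detector stage feeding a "processor" stage
# that performs detailed analysis only on flagged frames.
import numpy as np

def lif_detect(frames: np.ndarray, threshold: float = 5.0,
               leak: float = 0.9) -> list[int]:
    """Spiking-style detector: integrate frame energy with leak; a
    detection fires whenever the membrane potential crosses threshold."""
    v, spikes = 0.0, []
    for t, frame in enumerate(frames):
        v = leak * v + frame.sum()
        if v > threshold:
            spikes.append(t)
            v = 0.0  # reset after firing
    return spikes

def processor_stage(frames, spike_times):
    """General-purpose stage: detailed analysis of flagged frames only."""
    return {t: float(frames[t].max()) for t in spike_times}

frames = np.abs(np.random.randn(20, 8, 8))  # 2D input data stream
spikes = lif_detect(frames)
print(processor_stage(frames, spikes))      # results signaled to outputs
```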
Method and apparatus for generating speech
A speech generation method and apparatus are disclosed. The speech generation method includes: obtaining, by a processor, a linguistic feature and a prosodic feature from an input text; determining, by the processor, a first candidate speech element through a cost calculation and a Viterbi search based on the linguistic feature and the prosodic feature; generating, at a speech element generator implemented at the processor, a second candidate speech element based on the linguistic feature or the prosodic feature and the first candidate speech element; and outputting, by the processor, output speech by concatenating the second candidate speech element and a speech sequence determined through the Viterbi search.
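A minimal sketch of the cost-based Viterbi search over candidate speech elements, assuming NumPy; the target costs, join cost, and candidate counts are toy placeholders rather than the patent's cost functions:

```python
# Minimal sketch (assumed unit-selection flow, not the patent's method):
# Viterbi search over candidate speech elements using target and join costs.
import numpy as np

def viterbi_select(target_costs, join_cost):
    """target_costs: one array per text position giving each candidate's
    target cost; join_cost(i, j) -> cost of concatenating candidates."""
    n = len(target_costs)
    best = target_costs[0].copy()
    back = []
    for t in range(1, n):
        trans = np.array([[best[i] + join_cost(i, j)
                           for i in range(len(best))]
                          for j in range(len(target_costs[t]))])
        back.append(trans.argmin(axis=1))       # best predecessor per state
        best = trans.min(axis=1) + target_costs[t]
    # backtrack the lowest-cost candidate sequence
    path = [int(best.argmin())]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return path[::-1]

rng = np.random.default_rng(0)
costs = [rng.random(3) for _ in range(5)]       # 5 positions, 3 candidates
print(viterbi_select(costs, lambda i, j: abs(i - j) * 0.1))
```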
Methods and systems for pushing audiovisual playlist based on text-attentional convolutional neural network
In some embodiments, methods and systems for pushing audiovisual playlists based on a text-attentional convolutional neural network include a local voice interactive terminal, a dialog system server, and a playlist recommendation engine, where the dialog system server and the playlist recommendation engine are each connected to the local voice interactive terminal. In some embodiments, the local voice interactive terminal includes a microphone array, a host computer connected to the microphone array, and a voice synthesis chip board connected to the microphone array. In some embodiments, the playlist recommendation engine obtains rating data from a rating predictor built on the neural network; the host computer parses the data into recommended playlist information; and the voice terminal synthesizes the results and pushes them to the user in the form of voice.
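A minimal sketch of a rating predictor in the spirit of this abstract, assuming PyTorch; the "text-attentional" part is reduced to a single soft-attention pooling layer, and the vocabulary, dimensions, and data are illustrative assumptions:

```python
# Minimal sketch (assumed design, not the patent's system): a tiny
# convolutional rating predictor over text embeddings that scores items,
# from which a recommended playlist ordering is derived.
import torch
import torch.nn as nn

class RatingPredictor(nn.Module):
    def __init__(self, vocab=1000, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.conv = nn.Conv1d(dim, 16, kernel_size=3, padding=1)
        self.attn = nn.Linear(16, 1)   # soft attention over positions
        self.out = nn.Linear(16, 1)    # predicted rating

    def forward(self, tokens):                        # (batch, seq)
        h = self.emb(tokens).transpose(1, 2)          # (batch, dim, seq)
        h = torch.relu(self.conv(h)).transpose(1, 2)  # (batch, seq, 16)
        w = torch.softmax(self.attn(h), dim=1)        # attention weights
        pooled = (w * h).sum(dim=1)                   # weighted pooling
        return self.out(pooled).squeeze(-1)

model = RatingPredictor()
ratings = model(torch.randint(0, 1000, (4, 20)))  # 4 items, 20 tokens each
playlist = ratings.argsort(descending=True)       # highest-rated first
print(playlist.tolist())
```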
Preventing audio delay-induced miscommunication in audio/video conferences
Embodiments for delay-induced miscommunication reduction are provided. An embodiment may include capturing data streams transmitted between participants in an A/V exchange; translating, on a sender device prior to transmission to a recipient device, an audio stream within the data streams to text; timestamping, on the sender device prior to transmission to the recipient device, each word in the translated audio stream; transmitting the audio stream and the sender-side translated and timestamped audio stream to the recipient device; translating, on the recipient device, the transmitted audio stream to text; timestamping, on the recipient device, each word in the translated audio stream; determining that a lag exists in the A/V exchange based on a comparison of the timestamps for corresponding words in the sender-side and recipient-side translated and timestamped audio streams; and generating a true transcript of the intended exchange between the participants based on the comparison.
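A minimal sketch of the sender/recipient timestamp comparison, assuming word-level (word, timestamp) pairs are available from both sides; the 0.5 s lag threshold and the sample data are arbitrary illustrative choices:

```python
# Minimal sketch (assumed logic, not the patent's implementation): compare
# per-word timestamps from sender-side and recipient-side transcriptions
# to flag words affected by lag.
LAG_THRESHOLD_S = 0.5  # illustrative threshold, not from the patent

def detect_lag(sender_words, recipient_words):
    """Each argument: list of (word, timestamp_seconds) pairs. Returns
    words whose recipient-side timestamp trails the sender-side one by
    more than the threshold, i.e. where miscommunication is likely."""
    lagged = []
    for (w_s, t_s), (w_r, t_r) in zip(sender_words, recipient_words):
        if w_s == w_r and (t_r - t_s) > LAG_THRESHOLD_S:
            lagged.append((w_s, t_r - t_s))
    return lagged

sender = [("can", 0.0), ("you", 0.2), ("hear", 0.4), ("me", 0.6)]
recipient = [("can", 0.1), ("you", 0.3), ("hear", 1.2), ("me", 1.5)]
print(detect_lag(sender, recipient))  # flags "hear" and "me" as lagged
```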
Emotionally-aware conversational response generation method and apparatus
Techniques for generating conversational responses for a conversational user interface are disclosed. In one embodiment, a method is disclosed comprising: obtaining user input from a user via a conversational user interface; using the user input to obtain a user emotion and a user intent; obtaining candidate probabilities for a fragment of a response to the user input using the obtained user emotion, the obtained user intent, and the user input; generating the response to the user input using the candidate probabilities obtained for the fragment to select a candidate for the fragment of the response; and communicating the response to the user via the conversational user interface.
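A minimal sketch of fragment-level candidate selection conditioned on emotion and intent; the candidate tables, response slots, and probabilities below are toy assumptions, not the patent's learned model:

```python
# Minimal sketch (assumed mechanics, not the patent's model): build a
# response fragment by fragment, sampling each fragment from candidates
# whose probabilities are conditioned on detected emotion and intent.
import random

CANDIDATES = {
    "opener": {("sad", "help"): {"I'm sorry to hear that.": 0.7,
                                 "Okay.": 0.3},
               ("happy", "help"): {"Great, happy to help!": 0.8,
                                   "Okay.": 0.2}},
    "body": {("sad", "help"): {"Let's fix this together.": 0.6,
                               "Here is what to do.": 0.4},
             ("happy", "help"): {"Here is what to do.": 1.0}},
}

def generate_response(emotion: str, intent: str, seed: int = 0) -> str:
    rng = random.Random(seed)
    fragments = []
    for slot in ("opener", "body"):
        table = CANDIDATES[slot][(emotion, intent)]
        texts, probs = zip(*table.items())
        # sample a candidate for this fragment by its probability
        fragments.append(rng.choices(texts, weights=probs, k=1)[0])
    return " ".join(fragments)

print(generate_response("sad", "help"))
```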