Patent classifications
G10L25/63
APPARATUS AND METHOD FOR SPEECH-EMOTION RECOGNITION WITH QUANTIFIED EMOTIONAL STATES
A method for training a speech-emotion recognition classifier under a continuous self-updating and re-trainable ASER machine learning model, wherein the training data is generated by: obtaining an utterance of a human speech source; processing the utterance in an emotion evaluation and rating process with normalization; extracting the features of the utterance; quantifying the feature attributes of the extracted features by labelling, tagging, and weighting the feature attributes, with their values assigned under measurable scales; and hashing the quantified feature attributes in a feature attribute hashing process to obtain hash values for creating a feature vector space. The run-time speech-emotion recognition comprises: extracting the features of an utterance; the trained recognition classifier recognizing the emotions and levels of intensity of the utterance units; and computing a quantified emotional state of the utterance by fusing the recognized emotions and levels of intensity with the quantified extracted feature attributes by their respective weightings.
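The quantification-and-hashing step of this abstract can be illustrated with a minimal sketch. The attribute names, scales, weights, and vector dimension below are all hypothetical (the patent does not specify them); the sketch only shows the general idea of hashing labelled, weighted attribute values into a fixed-size feature vector space.

```python
import hashlib

# Hypothetical quantified feature attributes for one utterance:
# each carries a label, a value on a measurable (normalized) scale, and a weight.
attributes = [
    ("pitch_mean", 0.72, 1.0),
    ("energy_rms", 0.41, 0.8),
    ("speech_rate", 0.55, 0.5),
]

DIM = 16  # assumed size of the feature vector space

def hash_attribute(label: str) -> int:
    """Map an attribute label to a stable vector index via a hash value."""
    digest = hashlib.md5(label.encode("utf-8")).hexdigest()
    return int(digest, 16) % DIM

def build_feature_vector(attrs) -> list:
    """Accumulate weighted attribute values at their hashed indices."""
    vec = [0.0] * DIM
    for label, value, weight in attrs:
        vec[hash_attribute(label)] += weight * value
    return vec

vector = build_feature_vector(attributes)
```

Accumulating at hashed indices (rather than storing one slot per label) is what lets the vector space stay fixed-size as new attributes appear, at the cost of occasional collisions.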
ARTIFICIAL INTELLIGENCE SYSTEM TRAINED BY ROBOTIC PROCESS AUTOMATION SYSTEM AUTOMATICALLY CONTROLLING VEHICLE FOR USER
A system for transportation includes a vehicle having a user interface, and a robotic process automation system wherein a set of data is captured for each user in a set of users as each user interacts with the user interface, and wherein an artificial intelligence system is trained using the set of data to interact with the vehicle to automatically undertake actions with the vehicle on behalf of the user.
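The training loop described here — capture each user's interface interactions, then train a system to act on the user's behalf — can be sketched very simply. The log format and the most-frequent-action "policy" below are stand-in assumptions, not the patent's actual model.

```python
from collections import Counter, defaultdict

# Hypothetical interaction log captured by an RPA layer observing the
# vehicle's user interface: (user, context, action) per interaction.
log = [
    ("alice", "morning", "set_temp_20"),
    ("alice", "morning", "set_temp_20"),
    ("alice", "evening", "set_temp_22"),
    ("bob", "morning", "open_window"),
]

def train_policy(records):
    """Learn, per (user, context), the most frequent action — a trivial
    stand-in for an AI system trained on the captured interaction data."""
    counts = defaultdict(Counter)
    for user, context, action in records:
        counts[(user, context)][action] += 1
    return {key: c.most_common(1)[0][0] for key, c in counts.items()}

policy = train_policy(log)
# The trained system can now undertake the action on the user's behalf:
policy[("alice", "morning")]  # -> "set_temp_20"
```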
THREE DIFFERENT NEURAL NETWORKS TO OPTIMIZE THE STATE OF THE VEHICLE USING SOCIAL DATA
A method of optimizing an operating state of a vehicle includes classifying, using a first neural network of a hybrid neural network, social media data sourced from a plurality of social media sources as affecting a transportation system. The method further includes predicting, using a second neural network of the hybrid neural network, one or more effects of the classified social media data on the transportation system. The method further includes optimizing, using a third neural network of the hybrid neural network, a state of at least one vehicle of the transportation system, wherein the optimizing addresses an influence of the predicted one or more effects on the at least one vehicle.
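The classify → predict → optimize pipeline of the hybrid network can be sketched with three plain functions standing in for the three neural networks. The keyword classifier, effect scores, and state adjustments are illustrative assumptions only; the point is the staged data flow, not the models.

```python
# Stand-ins for the three networks of the hybrid model.

def classify(post: str) -> bool:
    """First network: does this social media post affect the transportation system?"""
    keywords = ("accident", "closed", "parade", "flood")
    return any(k in post.lower() for k in keywords)

def predict_effect(post: str) -> float:
    """Second network: predicted congestion effect (0..1) of a relevant post."""
    return 0.8 if "accident" in post.lower() else 0.3

def optimize_state(effects) -> dict:
    """Third network: adjust a vehicle's state to counter the predicted effects."""
    load = max(effects, default=0.0)
    return {"reroute": load > 0.5, "speed_factor": 1.0 - 0.5 * load}

posts = ["Accident on I-90 eastbound", "Great coffee downtown"]
relevant = [p for p in posts if classify(p)]          # first network
effects = [predict_effect(p) for p in relevant]       # second network
state = optimize_state(effects)                       # third network
```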
EMOTION TAG ASSIGNING SYSTEM, METHOD, AND PROGRAM
Provided are an emotion tag assigning system, method, and program for assigning, to a content, an emotion tag indicating an emotion of a user in execution of an event using the content.
An emotion tag assigning method includes a step of detecting, by a voice detector, voice data indicating a voice uttered by a person who participates in an event using a content during execution of the event; a step of recognizing, by an emotion recognizer, an emotion of the person based on the voice data; a step of acquiring, by a processor, emotion information indicating the recognized emotion of the person during the execution of the event using the content; and a step of assigning, by the emotion recognizer, an emotion rank calculated from the acquired emotion information to the content as an emotion tag.
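The aggregation step — turning per-segment recognized emotions into an "emotion rank" attached to the content — can be sketched as follows. The confidence-weighted majority used here is an assumption; the patent does not define how the rank is calculated.

```python
from collections import Counter

# Hypothetical per-segment recognizer output for one event:
# (emotion, confidence) for each detected voice segment.
segments = [("joy", 0.9), ("joy", 0.7), ("surprise", 0.6), ("neutral", 0.5)]

def emotion_rank(recognized):
    """Aggregate segment-level emotions into one ranked emotion for the content."""
    scores = Counter()
    for emotion, confidence in recognized:
        scores[emotion] += confidence
    return scores.most_common(1)[0][0]

content_tags = {}

def assign_tag(content_id: str, recognized):
    """Attach the aggregated emotion rank to the content as an emotion tag."""
    content_tags[content_id] = emotion_rank(recognized)

assign_tag("lesson-42", segments)
# content_tags["lesson-42"] -> "joy"
```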
SYSTEMS AND METHODS FOR MULTI-AGENT CONVERSATIONS
A first input is received from a user input device. Based on the first input, a list of candidate intents is generated, and a plurality of agents is initialized. Each agent of the plurality of agents corresponds to a respective candidate intent. Each agent then provides a different response to the first input in accordance with its respective corresponding intent. A second input is then received that responds to one or more of the agents. Based on the agents to which the second input is responsive, the list of candidate intents is refined and, based on the refined list, one or more agents are deactivated.
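The refinement turn described above — one agent per candidate intent, then deactivating the agents the user's second input did not address — can be sketched briefly. The intent names and the simple set-membership test for "responsive" are illustrative assumptions.

```python
# Hypothetical dialogue state after the first input.
candidate_intents = ["play_music", "play_podcast", "set_alarm"]

# One agent is initialized per candidate intent.
agents = {intent: f"agent_{intent}" for intent in candidate_intents}

def refine(intents, responsive_agents):
    """Keep only intents whose agents the second input responded to;
    agents for the remaining intents are deactivated."""
    return [i for i in intents if agents[i] in responsive_agents]

# Suppose the user's second input responds only to the music agent:
remaining = refine(candidate_intents, {"agent_play_music"})
active_agents = {i: agents[i] for i in remaining}
```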