Patent classifications
G06N3/008
Efficient transferring of human experiences to robots and other autonomous machines
A mechanism is described for facilitating transferring of human experiences to autonomous machines. A method of embodiments, as described herein, includes facilitating sensing, by one or more sensors, one or more inputs relating to a user, and evaluating the one or more inputs to capture one or more behavior traits of the user. The method may further include training a neural network model based on the one or more behavior traits, and applying the trained neural network model to a computing device to facilitate the computing device to adopt the one or more behavior traits to behave as the user.
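The pipeline in this abstract (sense inputs, distill behavior traits, transfer them to a device) can be sketched as follows. This is a hedged illustration only: `capture_traits`, `DevicePolicy`, and the trait names are hypothetical and not taken from the patent, and a simple per-trait average stands in for the neural network model.

```python
# Illustrative sketch: capture behavior traits from sensor readings and
# transfer them to a device policy. A per-trait average stands in for the
# trained neural network model described in the abstract.

from statistics import mean

def capture_traits(sensor_readings):
    """Reduce raw sensor readings to a trait vector (here, per-trait averages)."""
    return {key: mean(values) for key, values in sensor_readings.items()}

class DevicePolicy:
    """A computing device that adopts captured traits as operating parameters."""
    def __init__(self):
        self.traits = {}

    def adopt(self, traits):
        self.traits = dict(traits)

    def act(self, trait_name, default=0.0):
        return self.traits.get(trait_name, default)

# Hypothetical sensor data: the user's walking speed and pause durations.
readings = {"walk_speed": [1.1, 0.9, 1.0], "pause_s": [2.0, 3.0]}
policy = DevicePolicy()
policy.adopt(capture_traits(readings))
```

After `adopt`, the device reproduces the user's averaged traits when acting, which is the sense in which it "behaves as the user" here.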
PROCESS ASSEMBLY LINE WITH ROBOTIC PROCESS AUTOMATION
In an example embodiment, a novel “process assembly line” solution is provided that organizes software robots in a manner that, once configured, allows them to be duplicated and cloned into multiple scenarios, including in organizational structures where one software robot is triggered or called by another software robot. Instead of designing software robots that each bundle multiple functionalities to handle complex scenarios, the process assembly line can utilize software robots with single functions. This aids developers in building robust software robots and reduces potential errors.
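The assembly-line idea above can be sketched as single-function robots chained so that each one triggers the next. The `Robot` class and the three example steps are illustrative assumptions, not the patent's actual design.

```python
# Sketch of a "process assembly line": each software robot performs exactly
# one function, and robots are chained so that one triggers the next.

class Robot:
    def __init__(self, name, step):
        self.name = name
        self.step = step          # the single function this robot performs
        self.next_robot = None    # robot triggered after this one completes

    def then(self, robot):
        """Chain another robot after this one; returns it for fluent chaining."""
        self.next_robot = robot
        return robot

    def run(self, payload):
        result = self.step(payload)
        if self.next_robot is not None:
            return self.next_robot.run(result)
        return result

# Three single-function robots assembled into one line.
extract = Robot("extract", lambda doc: doc["amount"])
validate = Robot("validate", lambda amt: max(amt, 0))
render = Robot("render", lambda amt: f"${amt:.2f}")
extract.then(validate).then(render)
```

Because each robot has one function, any of them can be duplicated into another line unchanged, which is the reuse property the abstract emphasizes.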
Neural network modules
Methods, apparatus, and computer readable media related to combining and/or training one or more neural network modules based on version identifier(s) assigned to the neural network module(s). Some implementations are directed to using version identifiers of neural network modules in determining whether and/or how to combine multiple neural network modules to generate a combined neural network model for use by a robot and/or other apparatus. Some implementations are additionally or alternatively directed to assigning a version identifier to an endpoint of a neural network module based on one or more other neural network modules to which the neural network module is joined during training of the neural network module.
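One way to picture the version-identifier check described above is as a compatibility test on the endpoints being joined. The patent does not spell out its scheme; this sketch assumes two modules may combine only when the output endpoint of one carries the same version ID as the input endpoint of the other.

```python
# Hedged sketch: version identifiers on neural network module endpoints decide
# whether two modules can be combined into one model. The matching rule
# (exact endpoint version equality) is an assumption for illustration.

class Module:
    def __init__(self, name, input_version, output_version):
        self.name = name
        self.input_version = input_version    # version ID of the input endpoint
        self.output_version = output_version  # version ID of the output endpoint

def can_combine(upstream, downstream):
    """Modules combine only when the joined endpoints share a version ID."""
    return upstream.output_version == downstream.input_version

vision = Module("vision", input_version="raw.v1", output_version="feat.v3")
grasp = Module("grasp", input_version="feat.v3", output_version="cmd.v1")
grasp_old = Module("grasp_old", input_version="feat.v2", output_version="cmd.v1")
```

Under this rule, `vision` combines with `grasp` but not with `grasp_old`, whose input endpoint was trained against an older feature version.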
Controlling of device based on user recognition utilizing vision and speech features
An artificial intelligence-based control method is disclosed. In an artificial intelligence-based control method according to an exemplary embodiment of the present disclosure, when a user approaches within a set sensing range of a device, the device may capture a user image and predict whether the user has an intent to use the device by using motion features included in the captured image. An AI control method of the present disclosure may be associated with an artificial intelligent module, an unmanned aerial vehicle (UAV), a robot, an augmented reality (AR) device, a virtual reality (VR) device, a 5G service-related device, etc.
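The intent-prediction step above can be illustrated with a toy rule over motion features. The features (approach speed, facing direction) and the threshold are hypothetical; the patent's actual method learns this prediction from captured images.

```python
# Illustrative stand-in for intent prediction from motion features: a user
# who slows to a stop while facing the device is predicted to intend to use it.
# The 0.5 m/s threshold and both feature names are assumptions.

def predict_intent(approach_speed_mps, facing_device):
    """Return True if motion features suggest the user intends to use the device."""
    slowing_to_stop = approach_speed_mps < 0.5
    return slowing_to_stop and facing_device
```

A learned model would replace this hand-written rule, but the input/output contract (motion features in, intent prediction out) is the same.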
Smart autonomous machines utilizing cloud, error corrections, and predictions
A mechanism is described for facilitating smart collection of data and smart management of autonomous machines. A method of embodiments, as described herein, includes detecting one or more sets of data from one or more sources over one or more networks, and combining a first computation directed to be performed locally at a local computing device with a second computation directed to be performed remotely at a remote computing device in communication with the local computing device over the one or more networks, wherein the first computation consumes low power and the second computation consumes high power.
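The local/remote split described above amounts to a dispatcher that routes low-power work on-device and defers high-power work to a remote machine. The cost estimate, budget, and function names below are illustrative assumptions.

```python
# Sketch of the local/remote computation split: cheap computations run on the
# local device, expensive ones are sent to a remote device over the network.
# The cost units and the budget threshold are hypothetical.

def run_local(task):
    return f"local:{task}"

def run_remote(task):
    return f"remote:{task}"

def dispatch(task, estimated_cost, budget=10):
    """Route low-power work locally and high-power work remotely."""
    if estimated_cost <= budget:
        return run_local(task)
    return run_remote(task)
```

In a real system the dispatcher would also weigh network latency and battery state, but the core decision is this one comparison.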
Robot that Concurrently Learns Recognition and Synthesis while Developing a Motor
Traditionally, learning speech synthesis and speech recognition were investigated as two separate tasks. This separation hinders incremental development for concurrent synthesis and recognition, where partially-learned synthesis and partially-learned recognition must help each other throughout lifelong learning. This invention is a paradigm shift: we treat synthesis and recognition as two intertwined aspects of a lifelong learning robot. Furthermore, in contrast to existing recognition or synthesis systems, babies do not need their mothers to directly supervise their vocal tracts at every moment during learning. We argue that self-generated non-symbolic states/actions at a fine-grained time level serve such a learner as necessary temporal contexts. Here, we approach a new and challenging problem: how to enable an autonomous learning system to develop an artificial motor for generating temporally-dense (e.g., frame-wise) actions on the fly, without a human handcrafting a set of symbolic states. Here the artificial motor corresponds to a combination of a multiplicity of robotic effectors, including, but not limited to, speaking, singing, dancing, riding a bike, swimming, and driving a car. The self-generated states/actions are Muscles-like, High-dimensional, Temporally-dense and Globally-smooth (MHTG), so that these states/actions are directly attended for concurrent synthesis and recognition at each time frame. Human teachers are relieved from supervising the learner's motor ends. The Candid Covariance-free Incremental (CCI) Principal Component Analysis (PCA) is applied to develop such an artificial speaking motor, where PCA features drive the motor. Since each life must develop normally, each Developmental Network-2 (DN-2) reaches the same network (maximum likelihood, ML) regardless of randomly initialized weights, where the ML network is not just a function approximator but rather an emergent Turing Machine.
The machine-synthesized sounds are evaluated by both the neural network and humans with recognition experiments. Our experimental results showed learning-to-synthesize and learning-to-recognize-through-synthesis for phonemes. This invention is a key step toward our goal of closing the great gap toward fully autonomous machine learning directly from the physical world.
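The CCIPCA component mentioned above can be sketched for the first principal component. This is a minimal sketch only: it omits the amnesic parameters and the deflation step used to extract further components, and initializes the estimate with the first sample.

```python
# Minimal sketch of Candid Covariance-free Incremental PCA (CCIPCA) for the
# first principal component, using plain lists. Amnesic averaging and
# multi-component deflation are omitted for brevity.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return dot(a, a) ** 0.5

def ccipca_first_component(samples):
    v = list(samples[0])                  # initialize with the first sample
    for n, x in enumerate(samples[1:], start=2):
        u = [c / norm(v) for c in v]      # current unit-length estimate
        w = dot(x, u)                     # projection of x onto the estimate
        # Incremental, covariance-free update of the (unnormalized) component.
        v = [((n - 1) / n) * vi + (1.0 / n) * w * xi
             for vi, xi in zip(v, x)]
    return [c / norm(v) for c in v]
```

For data concentrated along one axis, the returned direction converges to that axis (up to sign), without ever forming the covariance matrix.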
Robotic control using profiles
Techniques for robotic control using profiles are disclosed. Cognitive state data for an individual is obtained. A cognitive state profile for the individual is learned using the cognitive state data that was obtained. Further cognitive state data for the individual is collected. The further cognitive state data is compared with the cognitive state profile. Stimuli are provided by a robot to the individual based on the comparing. The robot can be a smart toy. The cognitive state data can include facial image data for the individual. The further cognitive state data can include audio data for the individual. The audio data can be voice data. The voice data augments the cognitive state data. Cognitive state data for the individual is obtained using another robot. The cognitive state profile is updated based on input from either of the robots.
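The learn/compare/stimulate loop above can be sketched with a profile held as a running average over cognitive state readings. The feature name, the deviation threshold, and the stimulus labels are illustrative assumptions, not the patent's specification.

```python
# Hedged sketch of the profile loop: learn a cognitive state profile as an
# average over observations, compare further readings against it, and choose
# a stimulus for the robot to provide when a reading deviates.

def learn_profile(observations):
    """Average each cognitive state feature across the collected observations."""
    keys = observations[0].keys()
    return {k: sum(o[k] for o in observations) / len(observations) for k in keys}

def choose_stimulus(profile, reading, threshold=0.3):
    """Return a calming stimulus when a reading deviates from the profile."""
    deviation = max(abs(reading[k] - profile[k]) for k in profile)
    return "calming_audio" if deviation > threshold else "none"

# Hypothetical arousal readings from facial image data.
profile = learn_profile([{"arousal": 0.2}, {"arousal": 0.4}])
```

Updating the profile as further data arrives (including from a second robot) would amount to folding new observations into the same average.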
CONVERSATION METHOD, CONVERSATION SYSTEM, CONVERSATION APPARATUS, AND PROGRAM
An object is to give the user the impression that the system has sufficient dialogue capabilities. A humanoid robot (50) presents a first system speech to elicit information regarding a user's experience with a subject contained in a dialogue. A microphone (11) accepts a first user speech spoken by a user (101) after the first system speech has been spoken. When the first user speech is a speech that contains information regarding the user's experience, the humanoid robot (50) presents a second system speech to elicit information regarding the user's evaluation of the user's experience. The microphone (11) accepts a second user speech spoken by the user (101) after the second system speech has been spoken. When the second user speech is a speech that contains the user's positive or negative evaluation, the humanoid robot (50) presents a third system speech that sympathizes with the positive or negative evaluation.
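The three-turn flow above (elicit an experience, elicit an evaluation, then sympathize) can be sketched as a small state machine. Simple keyword spotting stands in for the real speech understanding, and all keywords and replies are illustrative.

```python
# Sketch of the dialogue flow as a state machine: the system asks about an
# experience, then asks for an evaluation, then responds with sympathy.
# Keyword spotting is a stand-in for actual speech recognition/understanding.

def classify(user_speech):
    """Crudely classify a user speech by keyword (illustrative only)."""
    if any(w in user_speech for w in ("love", "great", "hate", "awful")):
        return "evaluation"
    if any(w in user_speech for w in ("went", "tried", "visited")):
        return "experience"
    return "other"

def next_system_speech(state, user_speech):
    """Advance the dialogue state and pick the next system speech."""
    kind = classify(user_speech)
    if state == "ask_experience" and kind == "experience":
        return "ask_evaluation", "How did you like it?"
    if state == "ask_evaluation" and kind == "evaluation":
        return "done", "I feel the same way!"
    return state, "Tell me more."
```

The final sympathizing speech is only produced once both the experience and the evaluation have actually been elicited, matching the conditional structure of the abstract.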
Artificial intelligence learning method and operating method of robot using the same
Disclosed are an artificial intelligence learning method and an operating method of a robot using the same. An on-screen label is generated based on image data acquired through a camera, an off-screen label is generated based on data acquired through other sensors, and the on-screen label and the off-screen label are used in learning for action recognition, thereby raising action recognition performance and recognizing a user's action even in situations in which the user leaves the camera's view.
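The on-screen/off-screen labeling above can be illustrated as building one training example from two sources, falling back to the off-screen label when the camera sees nothing. The dictionary keys and fallback rule are assumptions for illustration.

```python
# Illustrative sketch: combine an on-screen label (from the camera) with an
# off-screen label (from other sensors) into one training example, so the
# action can still be labeled when the user is out of the camera's view.

def make_training_example(image_frame, sensor_frame):
    on_screen = image_frame.get("detected_action")     # None if user is out of view
    off_screen = sensor_frame.get("inferred_action")   # e.g., from a microphone
    label = on_screen if on_screen is not None else off_screen
    return {"features": sensor_frame["signal"], "label": label}
```

Training on examples labeled this way is what lets the recognizer keep working after the user leaves the frame: the off-screen sensor features carry labels learned jointly with the camera's.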