Patent classifications
G05B2219/39546
Automatic robot perception programming by imitation learning
Apparatus, systems, methods, and articles of manufacture for automatic robot perception programming by imitation learning are disclosed. An example apparatus includes a percept mapper to identify a first percept and a second percept from data gathered from a demonstration of a task and an entropy encoder to calculate a first saliency of the first percept and a second saliency of the second percept. The example apparatus also includes a trajectory mapper to map a trajectory based on the first percept and the second percept, the first percept skewed based on the first saliency, the second percept skewed based on the second saliency. In addition, the example apparatus includes a probabilistic encoder to determine a plurality of variations of the trajectory and create a collection of trajectories including the trajectory and the variations of the trajectory. The example apparatus also includes an assemble network to imitate an action based on a first simulated signal from a first neural network of a first modality and a second simulated signal from a second neural network of a second modality, the action representative of a perceptual skill.
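The entropy-based saliency weighting described in the abstract can be illustrated with a minimal sketch. This is not the patented apparatus; it only assumes that each percept carries a discrete probability distribution, uses Shannon entropy as the saliency measure, and skews a blended trajectory by normalized saliency weights. All function names are hypothetical.

```python
import math

def entropy(probs):
    # Shannon entropy of a discrete distribution, used here as a
    # stand-in saliency measure for a percept.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def saliency_weights(percept_dists):
    # Normalize per-percept entropies into weights that sum to 1.
    ents = [entropy(d) for d in percept_dists]
    total = sum(ents) or 1.0
    return [e / total for e in ents]

def map_trajectory(waypoints_per_percept, weights):
    # Blend each percept's suggested waypoints, skewed by its saliency.
    n = len(waypoints_per_percept[0])
    return [sum(w * wp[t] for w, wp in zip(weights, waypoints_per_percept))
            for t in range(n)]
```

A uniform distribution has higher entropy than a peaked one, so under this toy measure the more uncertain percept receives the larger weight.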
Method and device for training manipulation skills of a robot system
A method of training a robot system for manipulation of objects, the robot system being able to perform a set of skills, wherein each skill is learned as a skill model, the method comprising: receiving physical input from a human trainer, in the form of a set of kinesthetic demonstrations, regarding the skill to be learned by the robot; determining for the skill model a set of task parameters, including determining for each task parameter of the set whether it is an attached task parameter, which is related to an object being part of a kinesthetic demonstration, or a free task parameter, which is not related to a physical object; obtaining data for each task parameter of the set of task parameters from the set of kinesthetic demonstrations; and training the skill model with the set of task parameters and the data obtained for each task parameter.
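The attached/free task-parameter distinction can be sketched as a small data structure. This is an illustrative reading of the abstract, not the patented method: the demonstration format (a dict keyed by object or parameter names) and all identifiers are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskParameter:
    # One task parameter of a skill model (names are illustrative).
    name: str
    attached_object: Optional[str] = None  # demo object if attached, None if free

    @property
    def is_attached(self) -> bool:
        return self.attached_object is not None

def collect_parameter_data(demonstrations, parameters):
    # Gather per-parameter values from each kinesthetic demonstration.
    # Each demonstration maps object names (for attached parameters) or
    # parameter names (for free parameters) to recorded values.
    data = {p.name: [] for p in parameters}
    for demo in demonstrations:
        for p in parameters:
            key = p.attached_object if p.is_attached else p.name
            data[p.name].append(demo[key])
    return data
```

The collected per-parameter data is what a skill model would then be trained on.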
SYSTEM AND METHOD FOR DETERMINING A GRASPING HAND MODEL
Method for determining a grasping hand model suitable for grasping an object by receiving an image including at least one object; obtaining an object model estimating a pose and shape of the object from the image of the object; selecting a grasp class from a set of grasp classes by means of a neural network with a cross-entropy loss, thus obtaining a set of parameters defining a coarse grasping hand model; refining the coarse grasping hand model by minimizing loss functions referring to the parameters of the hand model, for obtaining an operable grasping hand model, while minimizing the distance between the fingers of the hand model and the surface of the object and preventing interpenetration; and obtaining a mesh of the hand represented by the refined set of parameters.
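The two-stage structure (coarse class selection, then refinement against distance and interpenetration losses) can be sketched in a toy 1-D form. This is not the patented method: the argmax stands in for a trained classifier, and the refinement minimizes an assumed loss of squared finger-to-surface distance plus a heavier interpenetration penalty.

```python
def select_grasp_class(logits):
    # Most likely grasp class (argmax over classifier scores); a network
    # trained with a cross-entropy loss would produce these logits.
    return max(range(len(logits)), key=lambda i: logits[i])

def refine_finger(finger, surface, steps=500, lr=0.01, penalty=10.0):
    # Gradient descent on a toy loss: (finger - surface)^2, with an extra
    # penalty * (surface - finger)^2 term when the finger is below the
    # surface (interpenetration).
    x = finger
    for _ in range(steps):
        grad = 2.0 * (x - surface)
        if x < surface:  # interpenetrating: stronger restoring gradient
            grad += 2.0 * penalty * (x - surface)
        x -= lr * grad
    return x
```

In both the outside and interpenetrating cases the finger converges to the object surface, which is the qualitative behavior the refinement stage targets.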
Machine learning control of object handovers
A robotic control system directs a robot to take an object from a human grasp by obtaining an image of a human hand holding an object, estimating the pose of the human hand and the object, and determining a grasp pose for the robot that will not interfere with the human hand. In at least one example, a depth camera is used to obtain a point cloud of the human hand holding the object. The point cloud is provided to a deep network that is trained to generate a grasp pose for a robotic gripper that can take the object from the human's hand without pinching or touching the human's fingers.
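The constraint of grasping without touching the human's fingers can be illustrated with a purely geometric stand-in for the deep network. This sketch assumes the point cloud has already been segmented into hand and object points; the heuristic of picking the object point farthest from the hand is an assumption for illustration only.

```python
def centroid(points):
    # Mean of a set of 3-D points.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def grasp_point_away_from_hand(object_points, hand_points):
    # Choose the object point farthest from the hand centroid, so a
    # gripper approaching there is least likely to pinch or touch the
    # human's fingers. (Geometric stand-in for the trained network.)
    hc = centroid(hand_points)
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, hc))
    return max(object_points, key=dist2)
```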
DEVICE AND METHOD FOR CONTROLLING A ROBOT
A method for controlling a robot. The method includes performing demonstrations of handling an object and obtaining descriptor images of the object, for the demonstrations, from the point of view of the robot; selecting a set of feature points, wherein the feature points are selected by searching an optimum of an objective function which rewards selected feature points being visible in the descriptor images; training a robot control model using the demonstrations; and controlling the robot for a control scene with the object by determining a descriptor image of the object, locating the selected set of feature points in the descriptor image of the object, determining Euclidean coordinates of the located feature points, estimating a pose from the determined Euclidean coordinates, and controlling the robot to handle the object by means of the robot control model with the estimated pose.
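The visibility-rewarding objective can be sketched with a greedy stand-in for the optimization. This is an assumption-laden simplification: each descriptor image is modeled as a set of visible feature-point ids, and the objective is reduced to counting the images in which a candidate is visible.

```python
def select_feature_points(candidates, descriptor_images, k):
    # Greedy stand-in for optimizing the visibility objective: score each
    # candidate feature point by how many descriptor images it is visible
    # in, and keep the top k.
    def visibility(c):
        return sum(1 for img in descriptor_images if c in img)
    return sorted(candidates, key=visibility, reverse=True)[:k]
```

A real objective would be optimized jointly over the set of feature points, but the counting version shows why widely visible points are preferred.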
Skill template distribution for robotic demonstration learning
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributing skill templates for robotic demonstration learning. One of the methods includes receiving, from a user device by a skill template distribution system, a selection of an available skill template. The skill template distribution system provides a skill template, wherein the skill template comprises information representing a state machine of one or more tasks, and wherein the skill template specifies which of the one or more tasks are demonstration subtasks requiring local demonstration data. The skill template distribution system trains a machine learning model for each demonstration subtask using local demonstration data to generate learned parameter values.
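A skill template as described (a state machine over tasks, some flagged as demonstration subtasks) can be sketched as a small data structure. Field and class names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Subtask:
    name: str
    requires_demonstration: bool = False  # needs local demonstration data

@dataclass
class SkillTemplate:
    # State machine over subtasks (names and layout are illustrative).
    subtasks: list
    transitions: dict = field(default_factory=dict)  # subtask -> next subtask

    def demonstration_subtasks(self):
        # The subtasks whose models must be refined with local demonstrations.
        return [t for t in self.subtasks if t.requires_demonstration]
```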
Distributed robotic demonstration learning
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributed robotic demonstration learning. One of the methods includes receiving a skill template to be trained to cause a robot to perform a particular skill having a plurality of subtasks. One or more demonstration subtasks defined by the skill template are identified, wherein each demonstration subtask is an action to be refined using local demonstration data. An online execution system uploads sets of local demonstration data to a cloud-based training system. The cloud-based training system generates respective trained model parameters for each set of local demonstration data. The skill template is executed on the robot using the trained model parameters generated by the cloud-based training system.
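The upload/train round trip can be sketched in a few lines. This is only the shape of the workflow, not the patented system: `train_fn` stands in for whatever training the cloud system actually performs, and the data format is assumed.

```python
def train_in_cloud(demonstration_sets, train_fn):
    # Sketch of the round trip: the execution system uploads each set of
    # local demonstration data, and the cloud trainer returns respective
    # trained model parameters per set.
    return {name: train_fn(demos) for name, demos in demonstration_sets.items()}
```

For example, with a toy "trainer" that averages demonstrated values:

```python
def mean_params(demos):
    return sum(demos) / len(demos)

params = train_in_cloud({"insert": [1.0, 2.0, 3.0], "screw": [4.0, 6.0]}, mean_params)
# one trained parameter set per uploaded demonstration set
```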
Systems, Methods, and Computer-Readable Media for Task-Oriented Motion Mapping on Machines, Robots, Agents and Virtual Embodiments Thereof Using Body Role Division
Systems, methods, and computer-readable media are disclosed for task-oriented motion mapping on an agent using body role division. One method includes: receiving task demonstration information of a particular task; receiving a set of instructions for the particular task; receiving a configuration of an agent to perform the particular task, the configuration of the agent including a plurality of joints, each joint belonging to one or more of a configurational group, a positional group, and an orientational group; mapping the configurational group of the agent based on the task demonstration information; changing values in the orientational group based on one or more of the task demonstration information and the set of instructions; changing values in the positional group based on the set of instructions; and producing a task-oriented motion mapping based on the mapped configurational group, the changed values in the orientational group, and the changed values in the positional group.
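The body-role division over joint groups can be sketched as a per-joint dispatch. This is an illustrative reading of the abstract with an assumed data format (dicts of joint names); in particular, having orientational joints fall back from instructions to the demonstration is an assumption, since the abstract allows either source.

```python
def map_motion(joint_groups, demonstration, instructions):
    # Assign each joint a value according to its role groups:
    #  - configurational joints copy the demonstration,
    #  - orientational joints use instructions when given, else the demo,
    #  - positional joints follow the instructions.
    motion = {}
    for joint, groups in joint_groups.items():
        if "configurational" in groups:
            motion[joint] = demonstration[joint]
        if "orientational" in groups:
            motion[joint] = instructions.get(joint, demonstration[joint])
        if "positional" in groups:
            motion[joint] = instructions[joint]
    return motion
```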