Patent classifications
G05B2219/39546
DISTRIBUTED ROBOTIC DEMONSTRATION LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributed robotic demonstration learning. One of the methods includes receiving a skill template to be trained to cause a robot to perform a particular skill having a plurality of subtasks. One or more demonstration subtasks defined by the skill template are identified, wherein each demonstration subtask is an action to be refined using local demonstration data. An online execution system uploads sets of local demonstration data to a cloud-based training system. The cloud-based training system generates respective trained model parameters for each set of local demonstration data. The skill template is executed on the robot using the trained model parameters generated by the cloud-based training system.
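The upload-train-execute flow described in this abstract might be sketched as follows. This is a minimal illustration only; the patent publishes no code, and all names (`SkillTemplate`, `train_skill_template`, the trainer callback) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SkillTemplate:
    """Hypothetical skill template: ordered subtasks, some marked for demonstration."""
    subtasks: list       # subtask names, in execution order
    demo_subtasks: set   # names of subtasks to be refined using local demonstration data

def train_skill_template(template, local_demos, train_remotely):
    """For each demonstration subtask, upload its set of local demonstration
    data to a (here, simulated) cloud trainer and collect the respective
    trained model parameters it returns."""
    trained_params = {}
    for name in template.subtasks:
        if name in template.demo_subtasks:
            trained_params[name] = train_remotely(name, local_demos[name])
    return trained_params
```

A robot would then execute the template with `trained_params` substituted into the demonstration subtasks, while non-demonstration subtasks run unchanged.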
SKILL TEMPLATE DISTRIBUTION FOR ROBOTIC DEMONSTRATION LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributing skill templates for robotic demonstration learning. One of the methods includes receiving, by a skill template distribution system from a user device, a selection of an available skill template. The skill template distribution system provides the skill template, wherein the skill template comprises information representing a state machine of one or more tasks, and wherein the skill template specifies which of the one or more tasks are demonstration subtasks requiring local demonstration data. The skill template distribution system trains a machine learning model for each demonstration subtask using local demonstration data to generate learned parameter values.
Robot teaching by human demonstration
A method for teaching a robot to perform an operation based on human demonstration with images from a camera. The method includes a teaching phase where a 2D or 3D camera detects a human hand grasping and moving a workpiece, and images of the hand and workpiece are analyzed to determine a robot gripper pose and positions which equate to the pose and positions of the hand and corresponding pose and positions of the workpiece. Robot programming commands are then generated from the computed gripper pose and position relative to the workpiece pose and position. In a replay phase, the camera identifies workpiece pose and position, and the programming commands cause the robot to move the gripper to pick, move and place the workpiece as demonstrated. A teleoperation mode is also disclosed, where camera images of a human hand are used to control movement of the robot in real time.
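The teach/replay relationship described above reduces to frame composition: store the gripper pose relative to the workpiece during teaching, then compose it with the newly observed workpiece pose at replay. A minimal sketch with 4x4 homogeneous transforms (function names are illustrative, not from the patent):

```python
import numpy as np

def relative_pose(T_world_hand, T_world_workpiece):
    """Teaching phase: express the demonstrated hand/gripper pose in the
    workpiece frame, T_workpiece_gripper = inv(T_world_workpiece) @ T_world_hand."""
    return np.linalg.inv(T_world_workpiece) @ T_world_hand

def replay_pose(T_world_workpiece_new, T_workpiece_gripper):
    """Replay phase: compose the camera-observed workpiece pose with the
    stored relative gripper pose to get the commanded gripper pose in world frame."""
    return T_world_workpiece_new @ T_workpiece_gripper
```

Because only the relative transform is stored, the replay works even when the workpiece is observed at a new pose.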
ONLINE AUGMENTATION OF LEARNED GRASPING
Systems and methods for online augmentation for learned grasping are provided. In one embodiment, a method is provided that includes identifying an action from a discrete action space. The method includes identifying a second set of grasps of the agent utilizing a transition model based on the action and at least one contact parameter. The at least one contact parameter defines allowed states of contact for the agent. The method includes applying a reward function to evaluate each grasp of the second set of grasps based on a set of contact forces within a friction cone that minimizes a difference between an actual net wrench on the object and a predetermined net wrench. The reward function is optimized online using a lookahead tree. The method includes selecting a next grasp from the second set. The method includes causing the agent to execute the next grasp.
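The reward described in this abstract can be illustrated with a toy version: contact forces must lie inside their Coulomb friction cones, and the reward penalizes the gap between the net wrench the contacts produce and a predetermined target wrench. This is a simplified sketch, not the patent's actual formulation (which optimizes the reward online with a lookahead tree):

```python
import numpy as np

def in_friction_cone(force, normal, mu):
    """A contact force is inside the Coulomb friction cone when its tangential
    component is at most mu times its (non-negative) normal component."""
    f_n = float(np.dot(force, normal))
    f_t = np.linalg.norm(force - f_n * normal)
    return f_n >= 0 and f_t <= mu * f_n

def grasp_reward(contact_forces, contact_points, normals, mu, target_wrench):
    """Toy reward: -inf if any contact force leaves its friction cone, otherwise
    the negative norm of (actual net wrench - predetermined target wrench)."""
    net = np.zeros(6)
    for f, p, n in zip(contact_forces, contact_points, normals):
        if not in_friction_cone(f, n, mu):
            return -np.inf
        net += np.concatenate([f, np.cross(p, f)])  # force + torque about origin
    return -np.linalg.norm(net - target_wrench)
```

In the patented method this evaluation would score each candidate grasp in the second set produced by the transition model, with the best-scoring grasp selected for execution.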
METHOD AND DEVICE FOR TRAINING MANIPULATION SKILLS OF A ROBOT SYSTEM
A method of training a robot system for manipulation of objects, the robot system being able to perform a set of skills, wherein each skill is learned as a skill model, the method comprising: receiving, as physical input from a human trainer, a set of kinesthetic demonstrations of the skill to be learned by the robot; determining for the skill model a set of task parameters, including determining, for each task parameter of the set, whether it is an attached task parameter, which is related to an object that is part of the kinesthetic demonstrations, or a free task parameter, which is not related to a physical object; obtaining data for each task parameter of the set of task parameters from the set of kinesthetic demonstrations; and training the skill model with the set of task parameters and the data obtained for each task parameter.
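The attached-versus-free split of task parameters can be pictured as a simple partition: a parameter is attached when it references an object observed in the demonstrations, free otherwise. A minimal sketch under that assumption (names are illustrative, not from the patent):

```python
def split_task_parameters(task_params, demo_object_ids):
    """Partition a skill model's task parameters into 'attached' parameters
    (tied to an object that is part of the kinesthetic demonstrations) and
    'free' parameters (not tied to any physical object)."""
    attached, free = {}, {}
    for name, object_ref in task_params.items():
        (attached if object_ref in demo_object_ids else free)[name] = object_ref
    return attached, free
```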
Link-sequence mapping device, link-sequence mapping method, and program
Provided is a link-sequence mapping device capable of automatically mapping a model link-sequence to a link sequence of an arbitrarily defined robot. The link-sequence mapping device (1) is equipped with: a reception unit (11) for receiving model link-sequence information indicating the positions of respective links included in a model link-sequence; an identification unit (14) for identifying, by using the model link-sequence information, coordinate values of predetermined multiple positions in the model link-sequence; a calculation unit (15) for calculating robot link-sequence information, that is, information about the positions of respective links included in a robot link-sequence, such that objective functions corresponding to the respective distances between the identified multiple positions and corresponding multiple positions in the robot link-sequence are reduced; and an output unit (17) for outputting information about angles of respective joints in the robot link-sequence which are determined in accordance with the calculated robot link-sequence information.
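The calculation unit's role, reducing objective functions on the distances between model-link positions and robot-link positions, can be illustrated with a planar toy version: a simple chain's joint angles are adjusted by naive coordinate descent until its link positions approach the model's. This is only a sketch of the idea, not the patented method:

```python
import numpy as np

def fk_positions(angles, link_lengths):
    """Forward kinematics of a planar chain: 2-D position of each link end,
    accumulating relative joint angles along the chain."""
    pos, theta, out = np.zeros(2), 0.0, []
    for a, l in zip(angles, link_lengths):
        theta += a
        pos = pos + l * np.array([np.cos(theta), np.sin(theta)])
        out.append(pos.copy())
    return np.array(out)

def map_link_sequence(target_positions, link_lengths, iters=200, step=0.1):
    """Naive coordinate descent: nudge each joint angle whenever that reduces
    the sum of squared distances between the robot's link positions and the
    model link-sequence positions; shrink the step over time."""
    angles = np.zeros(len(link_lengths))
    def cost(a):
        return np.sum((fk_positions(a, link_lengths) - target_positions) ** 2)
    for _ in range(iters):
        for i in range(len(angles)):
            for delta in (step, -step):
                trial = angles.copy()
                trial[i] += delta
                if cost(trial) < cost(angles):
                    angles = trial
        step *= 0.95
    return angles
```

The returned joint angles correspond to the output unit's role in the abstract: joint angles determined from the calculated robot link-sequence information.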
System and method for optimizing body and object interactions
Systems and methods for optimizing body and object interactions are provided. Based on obtained contact pressure maps and coefficient of friction (COF) maps at a contact interface where at least a portion of a body is in physical contact with a surface of an object, friction force maps can be determined, which can be used to optimize body and object interactions.
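The map combination described above is pointwise: at each cell of the contact interface, the available friction force is the normal force (pressure times cell area) scaled by the local coefficient of friction. A minimal sketch under that reading of the abstract:

```python
import numpy as np

def friction_force_map(pressure_map, cof_map, cell_area):
    """Per-cell available friction force at the contact interface:
    pressure [Pa] x cell area [m^2] gives normal force [N], scaled
    elementwise by the local coefficient of friction."""
    return pressure_map * cof_map * cell_area
```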