Patent classifications
G05B2219/39244
Method and device for training manipulation skills of a robot system
A method of training a robot system for manipulation of objects, the robot system being able to perform a set of skills, wherein each skill is learned as a skill model, the method comprising: receiving a set of kinesthetic demonstrations from a human trainer regarding the skill to be learned by the robot; determining for the skill model a set of task parameters, including determining, for each task parameter of the set, whether it is an attached task parameter, which is related to an object that is part of a kinesthetic demonstration, or a free task parameter, which is not related to a physical object; obtaining data for each task parameter of the set of task parameters from the set of kinesthetic demonstrations; and training the skill model with the set of task parameters and the data obtained for each task parameter.
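The attached/free split of task parameters described above can be sketched as follows; the class and field names are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch: task parameters are "attached" when tied to an object
# appearing in a kinesthetic demonstration, otherwise "free".
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskParameter:
    name: str
    attached_object: Optional[str] = None  # None => free task parameter

    @property
    def is_attached(self) -> bool:
        return self.attached_object is not None

def partition_task_parameters(params):
    """Split a skill's task parameters into attached and free subsets."""
    attached = [p for p in params if p.is_attached]
    free = [p for p in params if not p.is_attached]
    return attached, free

params = [
    TaskParameter("grasp_frame", attached_object="peg"),
    TaskParameter("approach_direction"),   # free: not tied to a physical object
]
attached, free = partition_task_parameters(params)
```

A training step would then extract, per demonstration, the data associated with each attached parameter's object and the values of the free parameters.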
Integrating sensor streams for robotic demonstration learning
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for integrating sensor streams for robotic demonstration learning. One of the methods includes selecting, by a learning system for a robot, a base update rate for combining multiple sensor streams into a task state representation. The learning system repeatedly generates the task state representation at the base update rate, including generating, during each time period defined by the base update rate, the task state representation from the most recently updated sensor data processed by a plurality of neural networks. The learning system repeatedly uses the task state representations to generate commands for the robot at the base update rate.
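The combining step above can be sketched minimally: each asynchronous stream keeps only its most recently processed sample, and the controller snapshots all streams once per base-rate tick. The names (`SensorStream`, `task_state`) are assumptions for illustration.

```python
import time

class SensorStream:
    """Holds the most recent processed output of one sensor's network."""
    def __init__(self, name):
        self.name = name
        self.latest = None        # most recently updated processed sample
        self.timestamp = 0.0      # when that sample arrived

    def push(self, value):
        self.latest = value
        self.timestamp = time.monotonic()

def task_state(streams):
    """Combine the most recent sample from every stream into one state dict."""
    return {s.name: s.latest for s in streams}

camera, force = SensorStream("camera"), SensorStream("force")
camera.push([0.1, 0.2])
force.push(3.5)
state = task_state([camera, force])
# A controller would call task_state() once per base-rate tick and use the
# result to generate the next robot command.
```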
LEARNING TO ACQUIRE AND ADAPT CONTACT-RICH MANIPULATION SKILLS WITH MOTION PRIMITIVES
A computer-implemented method comprising: receiving data representing a successful trajectory for an insertion task in which a robot inserts a connector into a receptacle; and performing a parameter optimization process for the robot to perform the insertion task. The parameter optimization includes defining an objective function that measures the similarity of a current trajectory, generated with a current set of parameters, to the successful trajectory, and repeatedly modifying the current set of parameters and evaluating the modified set according to the objective function until a final set of parameters is generated.
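The modify-and-evaluate loop above can be sketched with simple random search; the linear trajectory parameterization and sum-of-squares objective are illustrative assumptions, not the patent's actual motion primitives.

```python
import random

random.seed(0)

reference = [0.0, 0.5, 1.0, 1.5]     # samples of the "successful" trajectory

def rollout(params):
    """Hypothetical trajectory generator: a linear ramp with slope/offset."""
    slope, offset = params
    return [slope * t + offset for t in range(len(reference))]

def objective(params):
    """Dissimilarity (sum of squared errors) to the successful trajectory."""
    return sum((a - b) ** 2 for a, b in zip(rollout(params), reference))

best = (0.0, 0.0)
for _ in range(2000):
    # Perturb the current parameters and keep any improvement.
    candidate = (best[0] + random.gauss(0, 0.1), best[1] + random.gauss(0, 0.1))
    if objective(candidate) < objective(best):
        best = candidate
```

A real implementation would minimize over motion-primitive parameters with a proper optimizer, but the accept-if-closer structure is the same.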
User feedback for robotic demonstration learning
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for providing user feedback for robotic demonstration learning. One of the methods includes initiating a local demonstration learning process to collect respective local demonstration data for each of one or more demonstration subtasks defined by a skill template to be executed by a robot. Local demonstration data is repeatedly collected for each of the one or more demonstration subtasks of the skill template while a user manipulates a robot to perform each of the one or more demonstration subtasks defined by the skill template. A respective progress value for each of the one or more demonstration subtasks defined by the skill template is maintained. A user interface presentation is generated that presents a suggested demonstration to be performed by the user based on a respective progress value for each demonstration subtask.
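One plausible reading of the progress-driven suggestion above: track a progress value per subtask and surface the least-complete one. The threshold and names here are assumptions for illustration.

```python
def suggest_next_demonstration(progress):
    """Return the subtask with the lowest progress value, or None if all done."""
    pending = {k: v for k, v in progress.items() if v < 1.0}
    if not pending:
        return None
    return min(pending, key=pending.get)

# Progress values maintained per demonstration subtask of the skill template:
progress = {"approach": 1.0, "grasp": 0.4, "insert": 0.7}

# The UI presentation would suggest this subtask for the next demonstration.
suggestion = suggest_next_demonstration(progress)
```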
METHOD FOR CONTROLLING A ROBOT AND ROBOT CONTROLLER
A method for controlling a robot. The method includes providing demonstrations for performing each of a plurality of skills; training, from the demonstrations, a robot trajectory model for each skill, each trajectory model being a hidden semi-Markov model having one or more initial states and one or more final states; training, from the demonstrations, a precondition model for each skill comprising, for each initial state, a probability distribution of robot configurations before executing the skill, and a final condition model for each skill comprising, for each final state, a probability distribution of robot configurations after executing the skill; receiving a description of a task, the task including performing the skills of the plurality of skills in sequence and/or branches; generating a composed robot trajectory model; and controlling the robot according to the composed robot trajectory model to execute the task.
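The precondition / final-condition idea can be sketched in one dimension: each skill carries a Gaussian over robot configurations before execution and another after, and composition checks each skill's precondition against the predicted post-state of the previous skill. The 1-D simplification, names, and threshold are illustrative assumptions; the patent's models are full hidden semi-Markov models.

```python
import math

def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

class Skill:
    def __init__(self, name, pre_mean, pre_var, final_mean, final_var):
        self.name = name
        self.pre = (pre_mean, pre_var)        # precondition model
        self.final = (final_mean, final_var)  # final-condition model

    def precondition_likelihood(self, config):
        return gaussian_pdf(config, *self.pre)

def compose(skills, start_config, threshold=1e-3):
    """Chain skills, checking each precondition against the predicted state."""
    config = start_config
    for skill in skills:
        if skill.precondition_likelihood(config) < threshold:
            raise ValueError(f"precondition of {skill.name} not met")
        config = skill.final[0]               # predict post-execution config
    return config

reach = Skill("reach", pre_mean=0.0, pre_var=0.1, final_mean=1.0, final_var=0.05)
insert = Skill("insert", pre_mean=1.0, pre_var=0.1, final_mean=2.0, final_var=0.05)
end = compose([reach, insert], start_config=0.05)
```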
ROBOT PLANNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for controlling robotic movements. One of the methods includes receiving, for a robot, an initial plan specifying a path and a local trajectory; receiving an updated observation of an environment of the robot; generating an initial modified local trajectory for the robot based on the updated observation of the environment of the robot; repeatedly following the initial modified local trajectory for the robot while generating a modified global path for the robot, comprising: obtaining data representing a workspace footprint for the robot, the workspace footprint defining a volume for a workspace of the robot, and generating the modified global path to avoid causing the robot to cross a boundary of the volume defined by the workspace footprint; and causing the robot to follow the modified global path for the robot.
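The workspace-footprint constraint above reduces to a containment check on replanned waypoints. Representing the footprint as an axis-aligned box is an illustrative assumption; a real footprint volume could be any shape.

```python
def inside_footprint(point, lo, hi):
    """True if a 3-D point lies within the axis-aligned footprint volume."""
    return all(l <= p <= h for p, l, h in zip(point, lo, hi))

def validate_path(path, lo, hi):
    """A modified global path is valid only if no waypoint crosses the boundary."""
    return all(inside_footprint(p, lo, hi) for p in path)

lo, hi = (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)
ok = validate_path([(0.2, 0.2, 0.1), (0.8, 0.5, 0.9)], lo, hi)
bad = validate_path([(0.2, 0.2, 0.1), (1.3, 0.5, 0.9)], lo, hi)
```

A planner would reject (or re-optimize) any candidate global path for which this check fails, keeping the robot inside its workspace volume.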
Mixed reality assisted spatial programming of robotic systems
A computer-based system and method are disclosed for spatial programming of a robotic device. A mixed reality tool may select an object related to one or more interactive tasks for the robotic device. A spatial location of the object may be determined, including Cartesian coordinates and orientation coordinates of the object. An application program may be executed to operate the robotic device using the spatial location. Based on initial parameters, execution of the one or more tasks by the robotic device on the object related to a skill set may be simulated in a mixed reality environment.
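The spatial location described above pairs Cartesian position with orientation. A minimal sketch, assuming a roll/pitch/yaw orientation representation and hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class ObjectPose:
    """Cartesian coordinates plus orientation coordinates of a selected object."""
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

# Pose of an object selected via the mixed reality tool (values illustrative).
pose = ObjectPose(x=0.4, y=-0.1, z=0.25, roll=0.0, pitch=0.0, yaw=1.57)
# An application program would consume this pose to operate the robotic device.
```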
Systems, devices, and methods for grasping by multi-purpose robots
Systems, devices, and methods for training and operating (semi-)autonomous robots to complete multiple different work objectives are described. A robot control system stores a library of reusable work primitives each corresponding to a respective basic sub-task or sub-action that the robot is operative to autonomously perform. A work objective is analyzed to determine a sequence (i.e., a combination and/or permutation) of reusable work primitives that, when executed by the robot, will complete the work objective. The robot executes the sequence of reusable work primitives to complete the work objective. The reusable work primitives may include one or more reusable grasp primitives that enable a robot's end effector to grasp objects. Simulated instances of real physical robots may be trained in simulated environments to develop control instructions that, once uploaded to the real physical robots, enable such real physical robots to autonomously perform reusable work primitives.
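The primitive-library-plus-sequencing structure above can be sketched as follows; the particular primitives, state representation, and plan are illustrative assumptions.

```python
# A library of reusable work primitives, each a basic sub-action mapping
# robot state to robot state.
PRIMITIVES = {
    "move_to": lambda state, target: {**state, "pose": target},
    "grasp":   lambda state, obj: {**state, "holding": obj},
    "release": lambda state, _: {**state, "holding": None},
}

def execute_sequence(sequence, state):
    """Run a planned (primitive, argument) sequence to complete a work objective."""
    for name, arg in sequence:
        state = PRIMITIVES[name](state, arg)
    return state

# A "pick and place" work objective decomposed into reusable primitives:
plan = [("move_to", "bin"), ("grasp", "bolt"),
        ("move_to", "table"), ("release", None)]
final = execute_sequence(plan, {"pose": "home", "holding": None})
```

New work objectives reuse the same library; only the planned sequence changes.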
SIMULATED LOCAL DEMONSTRATION DATA FOR ROBOTIC DEMONSTRATION LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using simulated local demonstration data for robotic demonstration learning. One of the methods includes receiving perceptual data of a workcell of a robot to be configured to execute a task according to a skill template, wherein the skill template specifies one or more subtasks required to perform the skill, wherein at least one of the subtasks is a demonstration subtask that relies on learning visual characteristics of the workcell. A virtual model is generated of a portion of the workcell. A training system generates simulated local demonstration data from the virtual model of the portion of the workcell and tunes a base control policy for the demonstration subtask using the simulated local demonstration data generated from the virtual model of the portion of the workcell.
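The tuning step above can be sketched with a toy setup: demonstrations are synthesized from a stand-in for the workcell's virtual model, then a base control policy is nudged toward them. The linear "policy", data model, and learning rate are illustrative assumptions.

```python
import random

random.seed(0)

def simulate_demonstrations(n, true_gain=2.0, noise=0.05):
    """Stand-in for demonstration data rendered from the virtual workcell model."""
    xs = [random.uniform(-1, 1) for _ in range(n)]
    return [(x, true_gain * x + random.gauss(0, noise)) for x in xs]

def tune_policy(base_gain, demos, lr=0.1, epochs=50):
    """Nudge the base policy's gain toward the simulated demonstrations."""
    gain = base_gain
    for _ in range(epochs):
        for x, y in demos:
            gain += lr * (y - gain * x) * x   # gradient step on squared error
    return gain

# The base control policy (gain 1.0) is tuned using only simulated data.
tuned = tune_policy(base_gain=1.0, demos=simulate_demonstrations(100))
```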