Patent classifications
G05B2219/40391
SKILL TEMPLATES FOR ROBOTIC DEMONSTRATION LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using skill templates for robotic demonstration learning. One of the methods includes receiving a skill template for a task to be performed by a robot, wherein the skill template defines a state machine having a plurality of subtasks and one or more respective transition conditions between one or more of the subtasks. Local demonstration data for a demonstration subtask of the skill template is received, where the local demonstration data is generated from a user demonstrating how to perform the demonstration subtask with the robot. A machine learning model is refined for the demonstration subtask and the skill template is executed on the robot, causing the robot to transition through the state machine defined by the skill template to perform the task.
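The state-machine structure described in this abstract can be sketched as follows. This is a minimal illustration; the class names, fields, and the `observe` callback are assumptions, not the patent's data model:

```python
from dataclasses import dataclass

@dataclass
class Subtask:
    name: str
    is_demonstration: bool = False  # True if it requires local demonstration data

@dataclass
class SkillTemplate:
    subtasks: dict     # subtask name -> Subtask
    transitions: dict  # (subtask name, condition) -> next subtask name

    def execute(self, start, observe):
        """Walk the state machine; observe(state) yields the transition condition."""
        state, visited = start, [start]
        while True:
            condition = observe(state)
            nxt = self.transitions.get((state, condition))
            if nxt is None:
                return visited
            state = nxt
            visited.append(state)

# Example: a two-subtask insertion skill where only "insert" would be
# refined from user demonstrations.
template = SkillTemplate(
    subtasks={
        "move_to_socket": Subtask("move_to_socket"),
        "insert": Subtask("insert", is_demonstration=True),
    },
    transitions={("move_to_socket", "reached"): "insert"},
)
path = template.execute("move_to_socket", lambda s: "reached")
print(path)  # ['move_to_socket', 'insert']
```

Execution ends when no transition condition matches the current subtask, which here stands in for task completion.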
DISTRIBUTED ROBOTIC DEMONSTRATION LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributed robotic demonstration learning. One of the methods includes receiving a skill template to be trained to cause a robot to perform a particular skill having a plurality of subtasks. One or more demonstration subtasks defined by the skill template are identified, wherein each demonstration subtask is an action to be refined using local demonstration data. An online execution system uploads sets of local demonstration data to a cloud-based training system. The cloud-based training system generates respective trained model parameters for each set of local demonstration data. The skill template is executed on the robot using the trained model parameters generated by the cloud-based training system.
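The upload-and-train flow might look like the following sketch, with an in-process stand-in for the cloud-based training system. The `train_on` function and its averaging "training" are placeholders for illustration, not the patent's method:

```python
def train_on(demonstrations):
    # Placeholder for cloud-side model fitting: average the demonstrated
    # action vectors to stand in for learned model parameters.
    n = len(demonstrations)
    dims = len(demonstrations[0])
    return [sum(d[i] for d in demonstrations) / n for i in range(dims)]

def cloud_train(uploaded_sets):
    # One set of trained parameters per uploaded demonstration set,
    # keyed by the demonstration subtask it belongs to.
    return {subtask: train_on(demos) for subtask, demos in uploaded_sets.items()}

# The online execution system uploads one demonstration set per
# demonstration subtask (toy 2-dimensional action vectors).
uploaded = {
    "grasp": [[0.0, 1.0], [1.0, 1.0]],
    "insert": [[0.5, 0.5]],
}
params = cloud_train(uploaded)
print(params["grasp"])  # [0.5, 1.0]
```

The returned parameters would then be downloaded by the online execution system and used when the skill template is executed on the robot.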
SKILL TEMPLATE DISTRIBUTION FOR ROBOTIC DEMONSTRATION LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributing skill templates for robotic demonstration learning. One of the methods includes receiving, from a user device by a skill template distribution system, a selection of an available skill template. The skill template distribution system provides the selected skill template, wherein the skill template comprises information representing a state machine of one or more tasks, and wherein the skill template specifies which of the one or more tasks are demonstration subtasks requiring local demonstration data. The skill template distribution system trains a machine learning model for each demonstration subtask using local demonstration data to generate learned parameter values.
USER FEEDBACK FOR ROBOTIC DEMONSTRATION LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for providing user feedback for robotic demonstration learning. One of the methods includes initiating a local demonstration learning process to collect respective local demonstration data for each of one or more demonstration subtasks defined by a skill template to be executed by a robot. Local demonstration data is repeatedly collected for each of the one or more demonstration subtasks of the skill template while a user manipulates a robot to perform each of the one or more demonstration subtasks defined by the skill template. A respective progress value for each of the one or more demonstration subtasks defined by the skill template is maintained. A user interface presentation is generated that presents a suggested demonstration to be performed by the user based on a respective progress value for each demonstration subtask.
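The per-subtask progress values and the suggested-demonstration presentation could be tracked as in this sketch. The fixed demonstration quota and the "suggest the least-covered subtask" policy are assumptions for illustration:

```python
REQUIRED_DEMOS = 5  # assumed number of demonstrations needed per subtask

def progress(counts):
    """Progress value in [0, 1] for each demonstration subtask."""
    return {t: min(1.0, n / REQUIRED_DEMOS) for t, n in counts.items()}

def suggest_next(counts):
    # Suggest demonstrating the subtask with the lowest progress value.
    p = progress(counts)
    return min(p, key=p.get)

# Demonstrations collected so far for each demonstration subtask.
counts = {"grasp": 5, "insert": 2, "release": 4}
print(progress(counts)["insert"])  # 0.4
print(suggest_next(counts))        # insert
```

A user interface presentation would render these progress values and surface the suggested subtask to the user after each collected demonstration.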
SIMULATED LOCAL DEMONSTRATION DATA FOR ROBOTIC DEMONSTRATION LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using simulated local demonstration data for robotic demonstration learning. One of the methods includes receiving perceptual data of a workcell of a robot to be configured to execute a task according to a skill template, wherein the skill template specifies one or more subtasks required to perform the skill, wherein at least one of the subtasks is a demonstration subtask that relies on learning visual characteristics of the workcell. A virtual model is generated of a portion of the workcell. A training system generates simulated local demonstration data from the virtual model of the portion of the workcell and tunes a base control policy for the demonstration subtask using the simulated local demonstration data generated from the virtual model of the portion of the workcell.
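Generating simulated local demonstration data from a virtual model and tuning a base control policy with it might be sketched as follows, in one dimension. The jitter model and the single-parameter "policy" are illustrative assumptions, not the patent's approach:

```python
import random

def simulate_demonstrations(target, n, jitter=0.05, seed=0):
    # Perturb the virtual model's target position to produce n
    # simulated demonstrations of reaching it.
    rng = random.Random(seed)
    return [target + rng.uniform(-jitter, jitter) for _ in range(n)]

def tune_base_policy(base_param, demos, lr=0.5):
    # Move the base policy's parameter toward the mean of the
    # simulated demonstration data.
    mean = sum(demos) / len(demos)
    return base_param + lr * (mean - base_param)

demos = simulate_demonstrations(target=1.0, n=100)
tuned = tune_base_policy(base_param=0.0, demos=demos)
print(round(tuned, 1))  # 0.5
```

The point of the sketch is the data flow: perceptual data yields a virtual model, the virtual model yields simulated demonstrations, and the simulated demonstrations tune the base control policy before any real demonstrations are collected.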
INTEGRATING SENSOR STREAMS FOR ROBOTIC DEMONSTRATION LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for integrating sensor streams for robotic demonstration learning. One of the methods includes selecting, by a learning system for a robot, a base update rate for combining multiple sensor streams into a task state representation. The learning system repeatedly generates the task state representation at the base update rate, including generating, during each time period defined by the base update rate, the task state representation from the most recently updated sensor data processed by a plurality of neural networks. The learning system repeatedly uses the task state representations to generate commands for the robot at the base update rate.
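The base-update-rate mechanism can be sketched as follows: at every tick, the task state takes the most recently updated sample from each asynchronous stream, regardless of that stream's native rate. Stream names, rates, and the millisecond timestamps are illustrative assumptions:

```python
def latest_before(stream, t_ms):
    """Most recent (timestamp_ms, value) sample at or before t_ms."""
    candidates = [v for ts, v in stream if ts <= t_ms]
    return candidates[-1] if candidates else None

def task_states(streams, base_period_ms, duration_ms):
    # One task state representation per base-rate tick.
    states = []
    for t in range(base_period_ms, duration_ms + 1, base_period_ms):
        states.append({name: latest_before(s, t) for name, s in streams.items()})
    return states

streams = {
    "camera_10hz": [(100, "img0"), (200, "img1")],    # slow stream
    "force_100hz": [(i * 10, i) for i in range(21)],  # fast stream
}
# Base update rate of 20 Hz -> one task state every 50 ms.
states = task_states(streams, base_period_ms=50, duration_ms=200)
print(states[-1])  # {'camera_10hz': 'img1', 'force_100hz': 20}
```

Note that the slow camera stream simply repeats its latest value across several ticks, while the fast force stream is effectively downsampled; that decoupling is what a fixed base update rate buys.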
INERTIAL MEASUREMENT UNITS FOR TELEOPERATION OF ROBOTS
Provided are systems and methods for using inertial measurement units in the teleoperation of robots.
Systems, devices, articles, and methods for using trained robots
Robotic systems, methods of operation of robotic systems, and storage media including processor-executable instructions are disclosed herein. The system may include a robot, at least one processor in communication with the robot, and an operator interface in communication with the robot and the at least one processor. The method may include executing a first set of autonomous robot control instructions, which causes a robot to autonomously perform at least one task in an autonomous mode, and generating a second set of autonomous robot control instructions from the first set of autonomous robot control instructions and a first set of environmental sensor data received from a sensor. Execution of the second set of autonomous robot control instructions causes the robot to autonomously perform the at least one task. The method may include producing at least one signal that represents the second set of autonomous robot control instructions.
System(s) and method(s) of using imitation learning in training and refining robotic control policies
Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human, so that the human can proactively intervene in performance of the robotic task.
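The predict-failure-and-prompt loop can be sketched as follows, with a toy confidence measure standing in for the policy's failure prediction. The threshold, the lambda policies, and the simulated human are illustrative assumptions:

```python
def run_episode(policy, states, human, threshold=0.5):
    """Execute one episode, collecting human corrections on low-confidence states."""
    corrections = []
    for s in states:
        action, confidence = policy(s)
        if confidence < threshold:
            # Predicted failure: prompt the human to intervene and
            # record the correction for refining the policy.
            action = human(s)
            corrections.append((s, action))
        # ...execute `action` on the robot...
    return corrections

# Toy policy: confident only on states it was trained on.
known = {"free_space"}
policy = lambda s: ("move", 0.9 if s in known else 0.1)
human = lambda s: "nudge_left"

corrections = run_episode(policy, ["free_space", "near_contact"], human)
print(corrections)  # [('near_contact', 'nudge_left')]
```

The collected `(state, action)` corrections would then be folded back into training, which is the refinement step the abstract describes.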
Robot Teaching Device and Work Teaching Method
A motion is generated with which a robot can perform work within its movable range without interfering with surrounding structures. The robot teaching device is a work teaching device that teaches work to a robot that holds and moves a held object. The device includes a teaching pose measurement unit configured to measure and/or calculate a teaching pose, which is the pose of the held object during the teaching work, and a robot operation generation unit configured to generate a joint displacement sequence of the robot such that a held object of the same type as the one whose teaching pose was measured assumes the same pose as the teaching pose.
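Mapping measured teaching poses to a joint displacement sequence can be sketched with analytic inverse kinematics for an assumed planar two-link arm; the link lengths and the 2-D position-only "pose" are simplifications for illustration:

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Joint angles (q1, q2) placing a two-link arm's tip at (x, y)."""
    d2 = x * x + y * y
    cos_q2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    q2 = math.acos(max(-1.0, min(1.0, cos_q2)))  # elbow-down solution
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

def joint_sequence(teaching_poses):
    """Joint displacements so the held object tracks the taught poses."""
    return [two_link_ik(x, y) for x, y in teaching_poses]

seq = joint_sequence([(1.0, 1.0), (2.0, 0.0)])
print([tuple(round(q, 3) for q in qs) for qs in seq])  # [(0.0, 1.571), (0.0, 0.0)]
```

A full implementation would use 6-DOF poses and check each solution against the robot's movable range and surrounding structures before accepting it into the sequence, per the abstract.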