G05B2219/39244

Collaborative multi-robot tasks using action primitives

Various aspects of methods, systems, and use cases include techniques for training or using a model to control a robot. A method may include identifying a set of action primitives applicable to a set of robots, receiving information corresponding to a task (e.g., a collaborative task), and determining at least one action primitive based on the received information. The method may include training a model to control operations of at least one robot of the set of robots using the received information and the at least one action primitive.
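The steps above (a shared primitive set, task information, primitive selection) can be sketched as follows. This is a minimal illustration, not the patented method; the primitive names, the `num_robots` field, and the selection rule are all invented for the example.

```python
# Minimal sketch of an action-primitive library for a multi-robot task.
# All primitive names and the task schema are illustrative assumptions.

from dataclasses import dataclass


@dataclass(frozen=True)
class ActionPrimitive:
    name: str
    required_robots: int  # how many robots the primitive coordinates


# A primitive set applicable to the robot team; "handover" and "co_lift"
# stand in for collaborative primitives that require two robots.
PRIMITIVES = [
    ActionPrimitive("reach", 1),
    ActionPrimitive("grasp", 1),
    ActionPrimitive("handover", 2),
    ActionPrimitive("co_lift", 2),
]


def applicable_primitives(task_info: dict) -> list[ActionPrimitive]:
    """Determine primitives whose robot requirement fits the received task info."""
    available = task_info["num_robots"]
    return [p for p in PRIMITIVES if p.required_robots <= available]


# A collaborative task with two robots admits all four primitives;
# a single-robot task would admit only "reach" and "grasp".
task = {"name": "assemble_bracket", "num_robots": 2}
selected = applicable_primitives(task)
```

A training loop as described in the abstract would then condition the control model on both the task information and the selected primitives.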

System and method for controlling a robot using constrained dynamic movement primitives

A controller for controlling an operation of a robot to execute a task is provided. The controller comprises a memory configured to store a set of dynamic movement primitives (DMPs) associated with the task. The set of DMPs comprises at least two dynamical systems: a function representing point-attractor dynamics and a forcing function corresponding to a learned demonstration of the task. The controller comprises a processor configured to transform the set of DMPs into a set of constrained DMPs (CDMPs) by determining a perturbation function associated with the forcing function. The perturbation function is associated with a set of operational constraints. The processor is further configured to solve a non-linear optimization problem for the set of CDMPs based on the set of operational constraints and to generate, based on the solution, a control input for controlling the robot to execute the task.
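The two dynamical systems named above (point-attractor dynamics plus a forcing function, here with an additive perturbation hook) can be sketched with a standard one-dimensional DMP rollout. This is a simplified illustration: the gains, the canonical system, and the `perturb` argument standing in for the constraint-enforcing perturbation function are assumptions, not the patent's formulation, and the non-linear optimization step is omitted.

```python
# One-dimensional DMP rollout (Euler integration).
# Transformation system:  tau * z' = alpha * (beta * (g - y) - z) + f(x) + p
# Canonical system:       tau * x' = -alpha_x * x
# `perturb` is a stand-in for the learned perturbation function that the
# CDMP formulation uses to enforce operational constraints.

def rollout_dmp(y0, g, T=2.0, dt=0.001, alpha=25.0, beta=6.25, tau=1.0,
                forcing=lambda x: 0.0, perturb=lambda y, f: 0.0):
    """Integrate one DMP dimension from y0 toward goal g; return the trajectory."""
    alpha_x = 4.0           # canonical-system decay rate (assumed)
    y, z, x = y0, 0.0, 1.0  # position, scaled velocity, phase variable
    traj = [y]
    for _ in range(int(T / dt)):
        f = forcing(x)
        zdot = (alpha * (beta * (g - y) - z) + f + perturb(y, f)) / tau
        ydot = z / tau
        xdot = -alpha_x * x / tau
        y, z, x = y + ydot * dt, z + zdot * dt, x + xdot * dt
        traj.append(y)
    return traj


# With zero forcing and zero perturbation, the point attractor alone
# drives the state from the start to the goal.
traj = rollout_dmp(y0=0.0, g=1.0)
```

The choice beta = alpha / 4 makes the attractor critically damped, which is the usual convention in the DMP literature.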

Device and Method for Natural Language Controlled Industrial Assembly Robotics

A computer-implemented method of determining actions for controlling a robot, in particular an assembly robot, includes (i) receiving a first input and a second input, wherein the first input is a sentence describing an action to be carried out by the robot and the second input is an image of a current state of the robot's environment, (ii) feeding the first input into a first machine learning model and the second input into a second machine learning model, wherein the first and second machine learning models are configured to determine tokens for their respective inputs, and (iii) feeding the tokens into a third machine learning model, wherein the third machine learning model produces two outputs: the first output is a switch for incorporating specialized skill networks, and the second output is a set of actions.
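The two-encoder, one-head arrangement described above can be sketched with stand-in functions in place of the three learned models. Everything here (the tokenization rules, the brightness heuristic, the action names, the switch condition) is invented for illustration; only the data flow — language tokens plus vision tokens into a head that emits a switch and actions — follows the abstract.

```python
# Stand-in "models" illustrating the three-model data flow.

def language_encoder(sentence: str) -> list[str]:
    """First model: turn the instruction sentence into tokens."""
    return sentence.lower().split()


def vision_encoder(image: list[list[int]]) -> list[str]:
    """Second model: turn an image into coarse state tokens (here: brightness)."""
    flat = [px for row in image for px in row]
    return ["bright" if sum(flat) / len(flat) > 127 else "dark"]


def policy_head(tokens: list[str]) -> tuple[bool, list[str]]:
    """Third model: outputs (skill-network switch, action sequence).

    The switch routes precision steps to a specialized skill network,
    mirroring the abstract's first output; the condition is invented.
    """
    use_skill_net = "insert" in tokens
    actions = ["move_to_part", "grasp"]
    actions.append("fine_insert" if use_skill_net else "place")
    return use_skill_net, actions


tokens = (language_encoder("Insert the pin into the housing")
          + vision_encoder([[200, 210], [190, 205]]))
switch, actions = policy_head(tokens)
```

In a real system each function would be a trained network and the tokens would be embedding vectors rather than strings.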

System and method for learning sequences in robotic tasks for generalization to new tasks

A robotic controller is provided for generating sequences of movement primitives for sequential tasks of a robot having a manipulator. The controller includes at least one control processor and memory circuitry storing a dictionary of movement primitives, a pretrained learning module, and a graph-search-based planning module having instructions stored thereon. The controller performs steps of: acquiring a planned task provided by an interface device operated by a user, wherein the planned task is represented by an initial state and a goal state with respect to an object; generating a planning graph by searching for a feasible path of the object for the novel task using the graph-search-based planning module and selecting movement primitives from the dictionary via the pretrained learning module, wherein the pretrained learning module has been trained on demonstration tasks; parameterizing the feasible path represented by the movement primitives as dynamic movement primitives (DMPs) using the initial state and goal state; and implementing the parameterized feasible path as a trajectory, using the manipulator of the robot, by tracking and following the parameterized path for the planned task.
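The graph-search step — finding a feasible path from the initial state to the goal state by chaining primitives from a dictionary — can be sketched as a breadth-first search over object states. The state names and primitive effects are invented; the actual system searches a richer planning graph and hands the resulting sequence to the DMP parameterization step.

```python
# Toy graph search over object states, with edges drawn from a
# primitive dictionary. States and primitives are illustrative.

from collections import deque

# primitive name -> (precondition state, resulting state)
DICTIONARY = {
    "slide": ("on_table", "at_edge"),
    "pivot": ("at_edge", "upright"),
    "lift":  ("upright", "in_air"),
    "place": ("in_air", "on_fixture"),
}


def plan(initial: str, goal: str) -> list[str]:
    """Breadth-first search returning a primitive sequence from initial to goal."""
    queue = deque([(initial, [])])
    seen = {initial}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for name, (src, dst) in DICTIONARY.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [name]))
    return []  # no feasible path


sequence = plan("on_table", "on_fixture")
```

Each primitive in the returned sequence would then be parameterized as a DMP with the appropriate start and goal states before execution on the manipulator.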

Robotic demonstration learning

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using simulated local demonstration data for robotic demonstration learning. One of the methods includes receiving perceptual data of a workcell of a robot to be configured to execute a task according to a skill template, wherein the skill template specifies one or more subtasks required to perform the skill, wherein at least one of the subtasks is a demonstration subtask that relies on learning visual characteristics of the workcell. A virtual model is generated of a portion of the workcell. A training system generates simulated local demonstration data from the virtual model of the portion of the workcell and tunes a base control policy for the demonstration subtask using the simulated local demonstration data generated from the virtual model of the portion of the workcell.
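The tuning step — generating simulated local demonstration data from a virtual model and adjusting a base policy with it — can be sketched with a one-parameter policy. The virtual model, the scalar gain policy, and the gradient-descent update are all assumptions chosen for brevity; the patent describes neither the policy class nor the training rule.

```python
# Sketch: tune a base control policy on simulated demonstrations
# produced by a virtual model of (part of) the workcell.

import random


def virtual_model(n=200, demo_gain=2.0, seed=0):
    """Simulated local demonstrations: (observation, demonstrated action) pairs."""
    rng = random.Random(seed)
    return [(obs := rng.uniform(-1.0, 1.0), demo_gain * obs) for _ in range(n)]


def tune(base_gain, demos, lr=0.1, epochs=20):
    """Nudge a scalar policy a = gain * obs toward the demonstrated actions."""
    gain = base_gain
    for _ in range(epochs):
        for obs, action in demos:
            err = gain * obs - action   # policy output vs. demonstration
            gain -= lr * err * obs      # squared-error gradient step
    return gain


# The base policy (gain 0.5) is tuned toward the demonstrated behavior.
tuned = tune(base_gain=0.5, demos=virtual_model())
```

The same structure scales to the abstract's setting by replacing the scalar gain with the parameters of the base control policy and the pairs with perceptual observations from the virtual workcell model.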