Patent classifications
G05B2219/39244
Mixed Reality Assisted Spatial Programming of Robotic Systems
A computer-based system and method are disclosed for spatial programming of a robotic device. A mixed reality tool may select an object related to one or more interactive tasks for the robotic device. A spatial location of the object, including Cartesian coordinates and orientation coordinates of the object, may be determined. An application program may be executed to operate the robotic device using the spatial location. Based on initial parameters, execution of the one or more tasks by the robotic device on the object, related to a skill set, may be simulated in a mixed reality environment.
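The "spatial location" described above combines Cartesian coordinates with orientation coordinates. A minimal sketch of such a pose record and how it might parameterize robot commands (all names, the roll/pitch/yaw convention, and the command strings are illustrative assumptions, not taken from the patent):

```python
from dataclasses import dataclass

# Hypothetical pose record: Cartesian coordinates plus orientation
# coordinates (here expressed as roll/pitch/yaw angles in radians).
@dataclass
class SpatialLocation:
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

def plan_motion(pose: SpatialLocation) -> list[str]:
    """Turn a spatial location into an ordered pair of illustrative
    robot commands: translate first, then orient."""
    return [
        f"MOVE {pose.x:.3f} {pose.y:.3f} {pose.z:.3f}",
        f"ORIENT {pose.roll:.3f} {pose.pitch:.3f} {pose.yaw:.3f}",
    ]
```

In a mixed reality workflow, a pose like this could be filled in from the tool's object selection and then handed to the application program that operates the device.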
Systems, devices, and methods for multi-purpose robots
Systems, devices, and methods for training and operating (semi-)autonomous robots to complete multiple different work objectives are described. A robot control system stores a library of reusable work primitives each corresponding to a respective basic sub-task or sub-action that the robot is operative to autonomously perform. A work objective is analyzed to determine a sequence (i.e., a combination and/or permutation) of reusable work primitives that, when executed by the robot, will complete the work objective. The robot executes the sequence of reusable work primitives to complete the work objective. The reusable work primitives may include one or more reusable grasp primitives that enable(s) a robot's end effector to grasp objects. Simulated instances of real physical robots may be trained in simulated environments to develop control instructions that, once uploaded to the real physical robots, enable such real physical robots to autonomously perform reusable work primitives.
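The library-of-primitives architecture in this abstract can be illustrated with a short sketch. The primitive names, the logging mechanism, and the pick-and-place decomposition are all assumptions for illustration, not the patented implementation:

```python
from typing import Callable

log: list[str] = []

# Reusable work primitives: each is a basic sub-action the robot can
# autonomously perform. One of them is a reusable grasp primitive.
def grasp(target: str) -> None:
    log.append(f"grasp({target})")

def move(dest: str) -> None:
    log.append(f"move({dest})")

def release(target: str) -> None:
    log.append(f"release({target})")

PRIMITIVE_LIBRARY: dict[str, Callable[[str], None]] = {
    "grasp": grasp,
    "move": move,
    "release": release,
}

def execute_sequence(sequence: list[tuple[str, str]]) -> None:
    """Execute a sequence (combination/permutation) of reusable work
    primitives to complete a work objective."""
    for name, arg in sequence:
        PRIMITIVE_LIBRARY[name](arg)

# A hypothetical "move the cup to the shelf" objective, decomposed:
execute_sequence([("grasp", "cup"), ("move", "shelf"), ("release", "cup")])
```

The point of the decomposition is reuse: the same `grasp` primitive serves many different work objectives once it exists in the library.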
System and Method for Controlling a Robot using Constrained Dynamic Movement Primitives
A controller for controlling an operation of a robot to execute a task is provided. The controller comprises a memory configured to store a set of dynamic movement primitives (DMPs) associated with the task. The set of DMPs comprises at least two dynamical systems: a function representing point attractor dynamics and a forcing function corresponding to a learned demonstration of the task. The controller comprises a processor configured to transform the set of DMPs into a set of constrained DMPs (CDMPs) by determining a perturbation function associated with the forcing function. The perturbation function is associated with a set of operational constraints. The processor is further configured to solve a non-linear optimization problem for the set of CDMPs based on the set of operational constraints and to generate, based on the solution, a control input for controlling the robot to execute the task.
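The two dynamical systems named in the abstract (point-attractor dynamics plus a forcing term) can be sketched numerically. This is a generic DMP transformation system under standard assumptions, with the forcing function set to zero and illustrative gains and Euler integration; it is not the patented constrained formulation:

```python
def integrate_dmp(x0, g, tau=1.0, dt=0.01, steps=1000,
                  alpha=25.0, beta=25.0 / 4.0, forcing=lambda s: 0.0):
    """Integrate a one-dimensional DMP: a point attractor toward goal g,
    modulated by a forcing function of the canonical phase s.
    Gains alpha/beta chosen for critical damping (an assumption)."""
    x, v, s = x0, 0.0, 1.0
    for _ in range(steps):
        s += dt * (-2.0 * s) / tau                              # canonical system decays
        v += dt * (alpha * (beta * (g - x) - v) + forcing(s)) / tau
        x += dt * v / tau
    return x

# With a zero forcing term, the point-attractor dynamics alone pull the
# state to the goal; a learned forcing term would shape the path taken.
final = integrate_dmp(x0=0.0, g=1.0)
```

The patented CDMP step would additionally perturb the forcing function so that the resulting trajectory respects operational constraints, found by solving a non-linear optimization problem.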
System and Method for Learning Sequences in Robotic Tasks for Generalization to New Tasks
A robotic controller is provided for generating sequences of movement primitives for sequential tasks of a robot having a manipulator. The controller includes at least one control processor and memory circuitry storing a dictionary including the movement primitives, a pretrained learning module, and a graph-search based planning module having instructions stored thereon. The controller performs steps of: acquiring a planned task provided by an interface device operated by a user, wherein the planned task is represented by an initial state and a goal state with respect to an object; generating a planning graph by searching a feasible path of the object for the novel task using the graph-search based planning module and selecting movement primitives from the dictionary in the pretrained learning module, wherein the pretrained learning module has been trained based on demonstration tasks; parameterizing the feasible path represented by the movement primitives as dynamic movement primitives (DMPs) using the initial state and goal state; and implementing the parameterized feasible path as a trajectory according to the selected movement primitives, using the manipulator of the robot, by tracking and following the parameterized feasible path for the planned task.
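The graph-search planning step can be sketched as a breadth-first search over object states, where each edge is a movement primitive drawn from a dictionary. The state names and primitive dictionary below are assumptions for illustration, not the patent's dictionary:

```python
from collections import deque

# Hypothetical movement-primitive dictionary:
# primitive name -> (state it applies from, state it produces)
PRIMITIVES = {
    "lift":  ("on_table", "in_air"),
    "carry": ("in_air", "over_bin"),
    "place": ("over_bin", "in_bin"),
}

def plan(initial, goal):
    """Search a feasible path of the object from initial state to goal
    state; return the sequence of primitive names, or None if no path."""
    queue = deque([(initial, [])])
    seen = {initial}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for name, (src, dst) in PRIMITIVES.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [name]))
    return None
```

In the abstract's pipeline, the resulting primitive sequence would then be parameterized as DMPs using the initial and goal states and tracked by the manipulator.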
LEARNING DEVICE, LEARNING METHOD, AND RECORDING MEDIUM
A learning device 1X mainly includes an optimization problem calculation means 51X and an executable state set learning means 52X. The optimization problem calculation means 51X calculates a function value serving as a solution to an optimization problem that uses an evaluation function for evaluating reachability to a target state, based on an abstract system model and a detailed system model of a system in which a robot operates. The executable state set learning means 52X learns, based on the function value, an executable state set for an action of the robot to be executed by a controller.
ROBOTIC DEMONSTRATION LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using simulated local demonstration data for robotic demonstration learning. One of the methods includes receiving perceptual data of a workcell of a robot to be configured to execute a task according to a skill template, wherein the skill template specifies one or more subtasks required to perform the skill, wherein at least one of the subtasks is a demonstration subtask that relies on learning visual characteristics of the workcell. A virtual model is generated of a portion of the workcell. A training system generates simulated local demonstration data from the virtual model of the portion of the workcell and tunes a base control policy for the demonstration subtask using the simulated local demonstration data generated from the virtual model of the portion of the workcell.
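The simulated-local-demonstration idea can be sketched loosely: perturb a virtual model of part of the workcell to generate demonstration data, then tune a base policy on it. Every structure below (the model dictionary, the perturbation range, the naive "tuning") is an assumption for illustration only:

```python
import random

def simulate_demonstrations(virtual_model, n=5, seed=0):
    """Generate simulated local demonstration data by perturbing the
    object pose in a virtual model of a portion of the workcell."""
    rng = random.Random(seed)
    x, y = virtual_model["object_xy"]
    demos = []
    for _ in range(n):
        demos.append({
            "obs": (x + rng.uniform(-0.01, 0.01),
                    y + rng.uniform(-0.01, 0.01)),
            "action": "grasp",
        })
    return demos

def tune_policy(base_policy, demos):
    """Naive stand-in for tuning: bias the base control policy toward
    the mean demonstrated observation."""
    xs = [d["obs"][0] for d in demos]
    ys = [d["obs"][1] for d in demos]
    base_policy["target"] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return base_policy
```

The appeal of the approach in the abstract is that the demonstration subtask learns local visual characteristics of the specific workcell without requiring new real-world demonstrations there.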
Systems, methods, and computer program products for automating tasks
Systems, methods, and computer program products for automating tasks are described. A multi-step framework enables a gradient towards task automation. An agent performs a task while sensors collect data. The data are used to generate a script that characterizes the discrete actions executed by the agent in the performance of the task. The script is used by a robot teleoperation system to control a robot to perform the task. The robot teleoperation system maps the script into an ordered set of action commands that the robot is operative to auto-complete to enable the robot to semi-autonomously perform the task. The ordered set of action commands is converted into an automation program that may be accessed by an autonomous robot and executed to cause the autonomous robot to autonomously perform the task. In training, simulated instances of the robot may perform simulated instances of the task in simulated environments.
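The script-to-commands stage of the gradient described above can be sketched as a simple mapping from discrete sensed actions to an ordered set of robot commands. The script schema and command vocabulary here are hypothetical:

```python
# Hypothetical script generated from sensor data: discrete actions the
# human agent executed while performing the task.
SCRIPT = [
    {"action": "reach", "target": "handle"},
    {"action": "grasp", "target": "handle"},
    {"action": "pull",  "target": "door"},
]

# Illustrative mapping from scripted actions to robot action commands
# that the teleoperation system can auto-complete.
COMMAND_MAP = {
    "reach": "MOVE_TO",
    "grasp": "CLOSE_GRIPPER_AT",
    "pull":  "APPLY_FORCE",
}

def to_action_commands(script):
    """Map each discrete scripted action to an ordered robot command;
    the ordered set can later be converted into an automation program."""
    return [f"{COMMAND_MAP[step['action']]} {step['target']}" for step in script]
```

In the framework's terms, this ordered set is the intermediate rung between human performance of the task and a fully autonomous automation program.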
A System, Method and Storage Medium for Production System Automatic Control
An example system includes: a production system skill library with a plurality of skill blocks describing and encapsulating the realization part of the skills involved in the production process; a unified execution engine with a plurality of skill function blocks describing and encapsulating the interface part of the skills involved in the production process, configured to receive a production procedure programmed by a user based on the skill function blocks and to successively start each skill function block in the production procedure to call at least one corresponding skill block; and device agents for controlling devices in the production system. Each device agent provides a unified interface to control the corresponding device to perform operations according to operation instructions from the unified execution engine or the skill block.
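The interface/realization split in this abstract can be sketched compactly: skill function blocks expose the interface, skill blocks in the library hold the realization, and the execution engine successively starts each function block in the programmed procedure. All class and skill names below are assumptions:

```python
# Skill library: realization part of each skill. Here a "device agent"
# is modeled as a plain list that records the operations it performs.
SKILL_LIBRARY = {
    "pick":  lambda device_agent, part: device_agent.append(f"pick {part}"),
    "place": lambda device_agent, part: device_agent.append(f"place {part}"),
}

class SkillFunctionBlock:
    """Interface part of a skill: when started, calls the corresponding
    skill block in the library."""
    def __init__(self, skill: str, arg: str):
        self.skill, self.arg = skill, arg

    def start(self, device_agent):
        SKILL_LIBRARY[self.skill](device_agent, self.arg)

def run_procedure(procedure, device_agent):
    """Execution engine: successively start each skill function block in
    the user-programmed production procedure."""
    for block in procedure:
        block.start(device_agent)

trace: list[str] = []
run_procedure([SkillFunctionBlock("pick", "gear"),
               SkillFunctionBlock("place", "gear")], trace)
```

Keeping the interface and realization parts separate is what lets a user program the procedure purely in terms of skill function blocks, without touching device-level details.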
System(s) and method(s) of using imitation learning in training and refining robotic control policies
Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human, so that the human can proactively intervene in performance of the robotic task.
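The intervention trigger in the abstract (predict failure, then prompt a human) can be sketched as below. The failure-probability threshold, the scoring function, and the substitution of a single corrective action are all illustrative assumptions, not the patented method:

```python
def run_with_interventions(actions, failure_prob, threshold=0.5,
                           human_action="corrective_action"):
    """Execute a sequence of policy actions, swapping in a human
    correction whenever the policy predicts the robot will fail.

    failure_prob: callable mapping an action to a predicted probability
    of failure (stand-in for the policy's own failure estimate).
    """
    executed, interventions = [], 0
    for action in actions:
        if failure_prob(action) > threshold:
            executed.append(human_action)   # human intervenes here
            interventions += 1
        else:
            executed.append(action)
    return executed, interventions
```

The recorded interventions are exactly the data the abstract proposes to feed back into refining the control policy.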