Patent classifications
B25J9/1661
TEMPLATE ROBOTIC CONTROL PLANS
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using template robotic control plans. One of the methods comprises obtaining a template robotic control plan that is configurable for a plurality of different robotics applications, wherein the template robotic control plan comprises data defining (i) an adaptation procedure and (ii) a set of one or more open parameters; obtaining a user input defining a respective value or range of values for each open parameter in the set of open parameters, wherein the user input characterizes a specific robotics application for which the template robotic control plan can be configured; and executing, using the obtained values for the set of open parameters, the adaptation procedure to generate a specific robotic control plan from the template robotic control plan.
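The abstract's flow of declaring open parameters, collecting user values, and running an adaptation procedure can be sketched minimally in Python. The `TemplatePlan` class, the parameter ranges, and the step strings are all hypothetical illustrations, not the patent's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical sketch: a template plan carries open parameters (with
# allowed ranges) and an adaptation procedure that binds user-supplied
# values into a specific plan.
@dataclass
class TemplatePlan:
    open_params: dict   # parameter name -> (lo, hi) allowed range
    steps: list         # step strings with {placeholders}

    def adapt(self, values):
        """Adaptation procedure: validate the user input against the
        declared ranges, then substitute it into each plan step."""
        for name, v in values.items():
            lo, hi = self.open_params[name]
            if not (lo <= v <= hi):
                raise ValueError(f"{name}={v} outside [{lo}, {hi}]")
        return [step.format(**values) for step in self.steps]

template = TemplatePlan(
    open_params={"grip_force": (5.0, 40.0), "speed": (0.1, 1.0)},
    steps=["move(speed={speed})", "grasp(force={grip_force})"],
)
plan = template.adapt({"grip_force": 12.0, "speed": 0.5})
```

The same template could be re-adapted with different values for a different robotics application, which is the point of keeping the parameters open.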
MULTI-DIRECTIONAL THREE-DIMENSIONAL PRINTING WITH A DYNAMIC SUPPORTING BASE
A computer-implemented dynamic supporting base creation method that interacts with a three-dimensional (3D) printer that prints an object, the method including providing physical support for the object, via a first robotic gripper, during 3D printing using a printing head of the 3D printer, and transferring the object to a second robotic gripper to provide physical support at a different location on the object.
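The gripper-to-gripper transfer described above can be sketched as an ordered handoff schedule. The function and gripper names below are hypothetical illustrations, assuming the key constraint is that the object stays supported throughout:

```python
# Hypothetical sketch: schedule a handoff so the printed object is
# supported by at least one gripper at all times while support moves
# to a new location on the object.
def handoff_sequence(current_gripper, next_gripper, new_location):
    # The second gripper grasps before the first releases, so the
    # physical support is never interrupted during printing.
    return [
        (next_gripper, "grasp", new_location),
        (current_gripper, "release", None),
    ]

seq = handoff_sequence("gripper_A", "gripper_B", (0.1, 0.0, 0.25))
```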
ROBOTIC PROCESS AUTOMATION SYSTEM FOR MANAGING HUMAN AND ROBOTIC TASKS
Improved techniques for combining human tasks and robotic tasks in an organized manner to define an automation workflow process. A workflow process platform can assist a developer in creating an automation workflow process, and/or manage performance of an automation workflow process. The improved techniques enable a Robotic Process Automation (RPA) system to programmatically combine various robotic tasks with human actions into an interrelated workflow of both human tasks and automated tasks.
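A workflow mixing the two task types can be sketched as a dispatcher that runs robotic tasks immediately and routes human tasks to a person's work queue. The task names and executor interface below are hypothetical, not the patent's API:

```python
# Hypothetical sketch: interleave robotic and human tasks in one
# automation workflow, dispatching each to the right executor in order.
def run_workflow(tasks, robot_exec, human_queue):
    results = []
    for kind, name in tasks:
        if kind == "robot":
            results.append(robot_exec(name))   # run immediately
        else:
            human_queue.append(name)           # route to a person
            results.append(f"pending:{name}")
    return results

queue = []
out = run_workflow(
    [("robot", "extract_invoice"),
     ("human", "approve_payment"),
     ("robot", "archive_record")],
    robot_exec=lambda n: f"done:{n}",
    human_queue=queue,
)
```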
MACHINE-LEARNABLE ROBOTIC CONTROL PLANS
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using learnable robotic control plans. One of the methods comprises obtaining a learnable robotic control plan comprising data defining a state machine that includes a plurality of states and a plurality of transitions between states, wherein: one or more states are learnable states, and each learnable state comprises data defining (i) one or more learnable parameters of the learnable state and (ii) a machine learning procedure for automatically learning a respective value for each learnable parameter of the learnable state; and processing the learnable robotic control plan to generate a specific robotic control plan, comprising: obtaining data characterizing a robotic execution environment; and for each learnable state, executing, using the obtained data, the respective machine learning procedures defined by the learnable state to generate a respective value for each learnable parameter of the learnable state.
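The learnable-state idea can be sketched as a state object that carries its own learning procedure and fills in parameter values from environment data. The class, the toy "learning procedure", and the parameter name are hypothetical illustrations:

```python
# Hypothetical sketch: a state machine state whose learnable parameters
# are resolved by a learning procedure run against environment data.
class LearnableState:
    def __init__(self, name, params, learn):
        self.name = name
        self.params = dict(params)   # learnable parameter -> value (None until learned)
        self.learn = learn           # procedure: (param, env_data) -> value

    def resolve(self, env_data):
        """Execute the state's learning procedure for each learnable
        parameter, producing a concrete value."""
        for p in self.params:
            self.params[p] = self.learn(p, env_data)
        return self.params

# Toy stand-in for a machine learning procedure: read the value
# straight from measured environment data.
learn = lambda p, env: env[p]
state = LearnableState("grasp", {"approach_height": None}, learn)
resolved = state.resolve({"approach_height": 0.12})
```

Running `resolve` over every learnable state is what turns the learnable plan into a specific robotic control plan.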
Splitting transformers for robotics planning
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for optimizing a plan for one or more robots using a process definition graph. One of the methods includes receiving a process definition graph for a robot, the process definition graph having a plurality of action nodes. One or more of the action nodes are motion nodes that represent a motion to be taken by the robot from a respective start location to an end location. It is determined that a motion node satisfies one or more splitting criteria, and in response to determining that the motion node satisfies the one or more splitting criteria, the process definition graph is modified. Modifying the process definition graph includes splitting the motion node into two or more separate motion nodes whose respective paths can be scheduled independently.
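The splitting transformer can be sketched on a single motion node. Path length over a threshold is used here as one possible splitting criterion, and the midpoint split is an illustrative choice; both are assumptions, not the patent's actual criteria:

```python
# Hypothetical sketch: split a motion node whose straight-line path is
# longer than a threshold into two independently schedulable segments.
def split_if_needed(node, max_len):
    (x0, y0), (x1, y1) = node["start"], node["end"]
    length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    if length <= max_len:
        return [node]                      # criterion not met: keep node
    mid = ((x0 + x1) / 2, (y0 + y1) / 2)   # split at the midpoint
    return [{"start": node["start"], "end": mid},
            {"start": mid, "end": node["end"]}]

parts = split_if_needed({"start": (0, 0), "end": (4, 0)}, max_len=3.0)
```

Because each resulting segment has its own start and end, a scheduler can place the two halves independently, which is the benefit the abstract describes.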
Automatic robot perception programming by imitation learning
Apparatus, systems, methods, and articles of manufacture for automatic robot perception programming by imitation learning are disclosed. An example apparatus includes a percept mapper to identify a first percept and a second percept from data gathered from a demonstration of a task and an entropy encoder to calculate a first saliency of the first percept and a second saliency of the second percept. The example apparatus also includes a trajectory mapper to map a trajectory based on the first percept and the second percept, the first percept skewed based on the first saliency, the second percept skewed based on the second saliency. In addition, the example apparatus includes a probabilistic encoder to determine a plurality of variations of the trajectory and create a collection of trajectories including the trajectory and the variations of the trajectory. The example apparatus also includes an assemble network to imitate an action based on a first simulated signal from a first neural network of a first modality and a second simulated signal from a second neural network of a second modality, the action representative of a perceptual skill.
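The saliency-based skewing of percepts can be sketched as a weighted combination: the more salient percept pulls the mapped trajectory target harder. The function, 2-D percepts, and saliency values below are hypothetical illustrations of that one step, not the full apparatus:

```python
# Hypothetical sketch: skew two percepts by their saliencies before
# mapping a trajectory target, so the more informative percept dominates.
def skewed_target(percept_a, percept_b, saliency_a, saliency_b):
    total = saliency_a + saliency_b
    wa, wb = saliency_a / total, saliency_b / total
    return tuple(wa * a + wb * b for a, b in zip(percept_a, percept_b))

# Percept B is three times as salient as percept A, so the target
# lands three quarters of the way toward B.
target = skewed_target((0.0, 0.0), (1.0, 1.0), saliency_a=1.0, saliency_b=3.0)
```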
Virtual teach and repeat mobile manipulation system
A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
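The relative-transform step can be sketched in 2-D: given keypoints matched by descriptor between the task image and the teaching image, estimate the transform (reduced here to a plain translation, an assumption) and shift a taught behavior parameter by it. All names and coordinates are illustrative:

```python
# Hypothetical sketch: estimate a relative transform (a 2-D translation
# here) between teaching-image and task-image keypoints matched by
# descriptor, then update a taught behavior parameter with it.
def relative_translation(task_pts, teach_pts):
    n = len(task_pts)
    dx = sum(t[0] - s[0] for s, t in zip(teach_pts, task_pts)) / n
    dy = sum(t[1] - s[1] for s, t in zip(teach_pts, task_pts)) / n
    return dx, dy

def update_behavior(taught_goal, transform):
    dx, dy = transform
    return (taught_goal[0] + dx, taught_goal[1] + dy)

tf = relative_translation([(1.0, 2.0), (3.0, 4.0)],   # task image
                          [(0.0, 2.0), (2.0, 4.0)])   # teaching image
goal = update_behavior((5.0, 5.0), tf)
```

A full system would fit a rigid transform (rotation plus translation) rather than a translation alone, but the parameter-update pattern is the same.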
Conveyance robot system, method of controlling a conveyance robot and non-transitory computer readable storage medium storing a robot control program
A conveyance robot system according to the present disclosure includes an intrusion detection sensor that detects an intrusion of an object into the arm opening, and a distance sensor that measures a clearance distance between an arm entry/exit surface and a shelf in which the object is stored, the arm entry/exit surface being the surface of the conveyance robot, among the surfaces constituting the safety cover, in which the arm opening is provided. The distance sensor is oriented horizontally and disposed at a fixed height corresponding to the part of the shelf to be measured.
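The two sensor readings can be sketched as a simple gate on arm motion: the arm extends only if no intrusion is detected and the measured clearance is large enough. The function name and the minimum-clearance value are hypothetical:

```python
# Hypothetical sketch: gate arm extension on the intrusion detection
# sensor and the measured clearance distance to the shelf.
def arm_may_extend(intrusion_detected, clearance_m, min_clearance_m=0.05):
    if intrusion_detected:
        return False                       # object in the arm opening
    return clearance_m >= min_clearance_m  # enough room to the shelf

ok = arm_may_extend(False, clearance_m=0.08)
blocked = arm_may_extend(True, clearance_m=0.08)
```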
Measuring device
A user can easily create a robot program. A measuring device includes a position determination processing part and an output part. The position determination processing part determines a holding position, to be held by a robot hand, of a workpiece placed in a work space, and determines, based on a result of measurement made by a measuring part and on holding information, coordinates of a fixed via point having any single attribute, the fixed via point being one of an approach position of the robot hand for holding at the holding position, the holding position itself, and a retreat position after holding. The output part outputs, to a robot controller, the coordinates of the fixed via point determined by the position determination processing part, together with attribute information showing the attribute of the fixed via point.
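The three via-point attributes can be sketched as offsets from the measured holding position. The vertical offsets and the attribute tags below are hypothetical illustrations of what the position determination processing part would emit to the robot controller:

```python
# Hypothetical sketch: derive the fixed via points (approach, hold,
# retreat) from a measured holding position, tagging each with its
# attribute for the robot controller.
def via_points(hold_xyz, approach_offset=0.10, retreat_offset=0.15):
    x, y, z = hold_xyz
    return [
        {"attr": "approach", "xyz": (x, y, z + approach_offset)},
        {"attr": "hold",     "xyz": (x, y, z)},
        {"attr": "retreat",  "xyz": (x, y, z + retreat_offset)},
    ]

pts = via_points((0.4, 0.1, 0.02))
```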
Method and device for training manipulation skills of a robot system
A method of training a robot system for manipulation of objects, the robot system being able to perform a set of skills, wherein each skill is learned as a skill model, the method comprising: receiving physical input from a human trainer, in the form of a set of kinesthetic demonstrations of the skill to be learned by the robot; determining for the skill model a set of task parameters, including determining, for each task parameter of the set, whether it is an attached task parameter, which is related to an object that is part of a kinesthetic demonstration, or a free task parameter, which is not related to a physical object; obtaining data for each task parameter of the set of task parameters from the set of kinesthetic demonstrations; and training the skill model with the set of task parameters and the data obtained for each task parameter.
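The attached-versus-free distinction can be sketched as a classification pass over the skill model's task parameters against the objects observed in the demonstrations. The parameter names and object set below are hypothetical:

```python
# Hypothetical sketch: classify each task parameter as "attached" (tied
# to an object seen in the kinesthetic demonstration) or "free" (not
# tied to any physical object), before collecting its demonstration data.
def classify_params(params, demo_objects):
    return {p: ("attached" if p in demo_objects else "free")
            for p in params}

kinds = classify_params(
    ["peg_pose", "hole_pose", "insertion_speed"],
    demo_objects={"peg_pose", "hole_pose"},
)
```

Attached parameters would then be read relative to their object's frame in each demonstration, while free parameters are learned directly.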