G05B2219/40514

METHOD AND SYSTEM FOR PREDICTING A COLLISION FREE POSTURE OF A KINEMATIC SYSTEM
20220366660 · 2022-11-17 ·

A system and a method predict a collision free posture of a kinematic system. The method includes: receiving a 3D virtual environment, receiving a 3D representation of the kinematic system and a set of 3D postures defined for the 3D virtual kinematic system, receiving a target task to be performed by the kinematic system with respect to the surrounding environment, and receiving a prescribed location within the 3D virtual environment. The prescribed location defines a position at which the 3D virtual kinematic system has to be placed within the 3D virtual environment. A collision free detection function (CFD) is applied to a set of input data containing the 3D virtual environment, the target task, the prescribed location and the set of postures. The CFD function outputs a set of collision free postures enabling the kinematic system to perform the target task when located at the prescribed location.
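
The collision-free detection step described above can be read as a filter over candidate postures: place the kinematic system at the prescribed location in each stored posture, discard postures that collide with the 3D environment, and keep those from which the target task can be performed. A minimal sketch under that reading; the helper callables place_at, in_collision, and can_perform_task are hypothetical and not named in the abstract.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Posture:
    joint_values: List[float]  # one value per joint of the kinematic system


def collision_free_postures(
    environment,                 # 3D virtual environment (e.g., a mesh scene)
    postures: List[Posture],     # set of 3D postures defined for the kinematic system
    prescribed_location,         # position at which the system must be placed
    target_task,                 # task to be performed from that location
    place_at: Callable,          # hypothetical: pose the system in the scene at the location
    in_collision: Callable,      # hypothetical: True if the posed system intersects the scene
    can_perform_task: Callable,  # hypothetical: True if the task is feasible from the posture
) -> List[Posture]:
    """Return the subset of postures that are collision free and allow the target task."""
    feasible = []
    for posture in postures:
        posed_system = place_at(posture, prescribed_location)
        if in_collision(posed_system, environment):
            continue
        if can_perform_task(posed_system, target_task):
            feasible.append(posture)
    return feasible
```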

Machine learning methods and apparatus for automated robotic placement of secured object in appropriate location

Training and/or use of a machine learning model for placement of an object secured by an end effector of a robot. A trained machine learning model can be used to process: (1) a current image, captured by a vision component of a robot, that captures an end effector securing an object; (2) a candidate end effector action that defines a candidate motion of the end effector; and (3) a target placement input that indicates a target placement location for the object. Based on the processing, a prediction can be generated that indicates likelihood of successful placement of the object in the target placement location with application of the motion defined by the candidate end effector action. At many iterations, the candidate end effector action with the highest probability is selected and control commands are provided to cause the end effector to move in conformance with the corresponding end effector action. When at least one release criterion is satisfied, control commands can be provided to cause the end effector to release the object, thereby leading to the object being placed in the target placement location.
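
The control loop described above can be summarized as: sample candidate end effector motions, score each with the trained model given the current image and the target placement input, execute the best-scoring motion, and release once a release criterion holds. A minimal sketch under those assumptions; the model, robot interface, and candidate sampler are hypothetical stand-ins, not APIs taken from the abstract.

```python
from typing import Callable, List, Sequence


def placement_control_loop(
    predict_success: Callable[[object, Sequence[float], object], float],  # trained model
    get_camera_image: Callable[[], object],             # vision component (hypothetical)
    sample_candidate_actions: Callable[[], List[Sequence[float]]],  # candidate motions (hypothetical)
    apply_action: Callable[[Sequence[float]], None],    # move the end effector (hypothetical)
    release_object: Callable[[], None],                 # open the gripper (hypothetical)
    release_criterion_met: Callable[[], bool],          # release check (hypothetical)
    target_placement,                                   # target placement location input
    max_iterations: int = 100,
) -> None:
    for _ in range(max_iterations):
        if release_criterion_met():
            release_object()
            return
        image = get_camera_image()
        # Score every candidate end effector motion with the trained model and
        # keep the one with the highest predicted placement-success probability.
        best_action = max(
            sample_candidate_actions(),
            key=lambda action: predict_success(image, action, target_placement),
        )
        apply_action(best_action)
```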

AUTOMATION SYSTEM AND METHOD FOR HANDLING PRODUCTS
20220314433 · 2022-10-06 ·

The invention relates to a method for handling products (17) using an automation system, and to an automation system (10). The products are captured by means of an imaging sensor (18) of a control device (12) of the automation system and are handled by means of a handling mechanism (13) of a handling device (11) of the automation system, the control device processing sensor image data from the imaging sensor and controlling the handling device as specified by training data sets contained in a data memory (21) of the control device. The training data sets comprise training image data and/or geometric data and control instructions associated therewith; they are generated, as a statistical model, exclusively from geometric data of products contained in the training image data, by means of a computer using a computer program product executed thereon, and are then transmitted to the control device.
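
The essential data flow here is that product geometry alone is turned offline into training data sets, each pairing geometric data with associated control instructions, which are then transmitted to the control device. The abstract does not specify the statistical model, so the sketch below uses a hypothetical centroid-grasp heuristic purely as a stand-in.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Point = Tuple[float, float, float]


@dataclass
class TrainingDataSet:
    geometric_data: List[Point]      # e.g., outline points of a product
    control_instructions: Dict       # handling instructions associated with that geometry


def generate_training_data_sets(product_geometries: List[List[Point]]) -> List[TrainingDataSet]:
    """Build training data sets from product geometry only (no real sensor images)."""
    data_sets = []
    for outline in product_geometries:
        # Hypothetical stand-in for the statistical model: grasp at the centroid,
        # approaching from above.
        cx = sum(p[0] for p in outline) / len(outline)
        cy = sum(p[1] for p in outline) / len(outline)
        cz = sum(p[2] for p in outline) / len(outline)
        instructions = {"grasp_point": (cx, cy, cz), "approach": "top_down"}
        data_sets.append(TrainingDataSet(outline, instructions))
    # The resulting data sets would then be transmitted to the data memory of
    # the control device for use at run time.
    return data_sets
```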

DEVICE AND METHOD FOR CONTROLLING A ROBOTIC DEVICE
20220161424 · 2022-05-26 ·

A device and a method for controlling a robotic device, including a control model. The control model includes a robot trajectory model, which for the pickup includes a hidden semi-Markov model with one or multiple initial states, a precondition model, which for each initial state of the robot trajectory model includes a probability distribution of robot configurations before the pickup is carried out, and an object pickup model, which for a depth image outputs a plurality of pickup robot configurations having a respective associated probability of success.
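
One way to read the three sub-models is: the object pickup model proposes pickup robot configurations with success probabilities from the depth image, the precondition model scores how well the current robot configuration matches the distribution attached to each initial state, and the hidden semi-Markov trajectory model is then started from the chosen initial state. A minimal sketch of combining the first two scores, assuming Gaussian precondition distributions and an additive log-score rule; neither assumption is fixed by the abstract.

```python
import numpy as np


def gaussian_log_density(x: np.ndarray, mean: np.ndarray, cov: np.ndarray) -> float:
    """Log density of a multivariate Gaussian, used here as the precondition model."""
    diff = x - mean
    k = len(mean)
    return float(
        -0.5 * (diff @ np.linalg.solve(cov, diff)
                + np.log(np.linalg.det(cov))
                + k * np.log(2.0 * np.pi))
    )


def select_pickup(current_config, pickup_candidates, preconditions):
    """Pick a pickup configuration and an initial state of the trajectory model.

    current_config: current robot configuration (np.ndarray)
    pickup_candidates: (pickup_config, success_probability) pairs from the
        object pickup model applied to the depth image
    preconditions: (mean, cov) pairs, one per initial state of the hidden
        semi-Markov trajectory model
    """
    best = None
    for pickup_config, p_success in pickup_candidates:
        for state_idx, (mean, cov) in enumerate(preconditions):
            # Hypothetical combination rule: pickup success plus precondition fit.
            score = np.log(max(p_success, 1e-12)) + gaussian_log_density(
                current_config, mean, cov
            )
            if best is None or score > best[0]:
                best = (score, pickup_config, state_idx)
    return best[1], best[2]
```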

Transformer-Based Meta-Imitation Learning Of Robots

A training system for a robot includes: a model having a transformer architecture and configured to determine how to actuate at least one of arms and an end effector of the robot; a training dataset including sets of demonstrations for the robot to perform training tasks, respectively; and a training module configured to: meta-train a policy of the model using first ones of the sets of demonstrations for first ones of the training tasks, respectively; and optimize the policy of the model using second ones of the sets of demonstrations for second ones of the training tasks, respectively, where the sets of demonstrations for the training tasks each include more than one demonstration and fewer than a first predetermined number of demonstrations.
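
The two-phase procedure amounts to: meta-train the transformer policy across one group of training tasks, each with a small set of demonstrations, and then optimize the same policy on a second group of tasks. A minimal sketch of that outer structure; the policy, per-demonstration imitation loss, and optimizer update are hypothetical callables, and the abstract does not name the specific meta-learning algorithm.

```python
from typing import Callable, Dict, List


def behavior_cloning_loss(policy, demonstrations: List, loss_fn: Callable) -> float:
    """Average imitation loss of the policy over a set of demonstrations."""
    losses = [loss_fn(policy, demo) for demo in demonstrations]
    return sum(losses) / len(losses)


def train_meta_imitation(
    policy,                             # transformer policy (hypothetical object)
    meta_train_tasks: Dict[str, List],  # task name -> demonstrations (first group of tasks)
    optimize_tasks: Dict[str, List],    # task name -> demonstrations (second group of tasks)
    loss_fn: Callable,                  # hypothetical per-demonstration imitation loss
    update: Callable,                   # hypothetical optimizer step: (policy, loss) -> policy
    meta_epochs: int = 10,
    optimize_epochs: int = 5,
):
    # Phase 1: meta-train the policy across the first group of training tasks.
    for _ in range(meta_epochs):
        for demos in meta_train_tasks.values():
            policy = update(policy, behavior_cloning_loss(policy, demos, loss_fn))
    # Phase 2: optimize the same policy on the second group of training tasks.
    for _ in range(optimize_epochs):
        for demos in optimize_tasks.values():
            policy = update(policy, behavior_cloning_loss(policy, demos, loss_fn))
    return policy
```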

Simulating multiple robots in virtual environments
11813748 · 2023-11-14 ·

Implementations are provided for operably coupling multiple robot controllers to a single virtual environment, e.g., to generate training examples for training machine learning model(s). In various implementations, a virtual environment may be simulated that includes an interactive object and a plurality of robot avatars that are controlled independently and contemporaneously by a corresponding plurality of robot controllers that are external from the virtual environment. Sensor data generated from a perspective of each robot avatar of the plurality of robot avatars may be provided to a corresponding robot controller. Joint commands that cause actuation of one or more joints of each robot avatar may be received from the corresponding robot controller. Joint(s) of each robot avatar may be actuated pursuant to corresponding joint commands. The actuating may cause two or more of the robot avatars to act upon the interactive object in the virtual environment.
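
The coupling can be pictured as a per-step loop over robot avatars: render sensor data from each avatar's own perspective, hand it to that avatar's external controller, read back joint commands, and apply them in the shared virtual environment. A minimal sketch, assuming a hypothetical controller interface with send_sensor_data and get_joint_commands methods; the actual controller protocol is not specified in the abstract.

```python
from typing import Callable, Dict, Protocol, Sequence


class RobotController(Protocol):
    """External robot controller coupled to one avatar (hypothetical interface)."""

    def send_sensor_data(self, sensor_data: object) -> None: ...
    def get_joint_commands(self) -> Sequence[float]: ...


def step_virtual_environment(
    environment,                              # shared scene containing the interactive object
    avatars: Dict[str, object],               # avatar name -> simulated robot avatar
    controllers: Dict[str, RobotController],  # avatar name -> external controller
    render_from: Callable,                    # hypothetical: sensor data from an avatar's perspective
    actuate: Callable,                        # hypothetical: apply joint commands to an avatar
) -> None:
    """Advance one step with every avatar controlled independently and contemporaneously."""
    for name, avatar in avatars.items():
        controller = controllers[name]
        # Each controller only ever sees the scene from its own avatar's sensors.
        controller.send_sensor_data(render_from(environment, avatar))
        joint_commands = controller.get_joint_commands()
        actuate(environment, avatar, joint_commands)
```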

SIMULATING MULTIPLE ROBOTS IN VIRTUAL ENVIRONMENTS
20220111517 · 2022-04-14 ·

Implementations are provided for operably coupling multiple robot controllers to a single virtual environment, e.g., to generate training examples for training machine learning model(s). In various implementations, a virtual environment may be simulated that includes an interactive object and a plurality of robot avatars that are controlled independently and contemporaneously by a corresponding plurality of robot controllers that are external from the virtual environment. Sensor data generated from a perspective of each robot avatar of the plurality of robot avatars may be provided to a corresponding robot controller. Joint commands that cause actuation of one or more joints of each robot avatar may be received from the corresponding robot controller. Joint(s) of each robot avatar may be actuated pursuant to corresponding joint commands. The actuating may cause two or more of the robot avatars to act upon the interactive object in the virtual environment.

MACHINE LEARNING METHODS AND APPARATUS FOR AUTOMATED ROBOTIC PLACEMENT OF SECURED OBJECT IN APPROPRIATE LOCATION

Training and/or use of a machine learning model for placement of an object secured by an end effector of a robot. A trained machine learning model can be used to process: (1) a current image, captured by a vision component of a robot, that captures an end effector securing an object; (2) a candidate end effector action that defines a candidate motion of the end effector; and (3) a target placement input that indicates a target placement location for the object. Based on the processing, a prediction can be generated that indicates likelihood of successful placement of the object in the target placement location with application of the motion defined by the candidate end effector action. At many iterations, the candidate end effector action with the highest probability is selected and control commands are provided to cause the end effector to move in conformance with the corresponding end effector action. When at least one release criterion is satisfied, control commands can be provided to cause the end effector to release the object, thereby leading to the object being placed in the target placement location.

MACHINE LEARNING METHODS AND APPARATUS FOR ROBOTIC MANIPULATION AND THAT UTILIZE MULTI-TASK DOMAIN ADAPTATION

Implementations are directed to training a machine learning model that, once trained, is used in performance of robotic grasping and/or other manipulation task(s) by a robot. The model can be trained using simulated training examples that are based on simulated data that is based on simulated robot(s) attempting simulated manipulations of various simulated objects. At least portions of the model can also be trained based on real training examples that are based on data from real-world physical robots attempting manipulations of various objects. The simulated training examples can be utilized to train the model to predict an output that can be utilized in a particular task, and the real training examples, used to adapt at least a portion of the model to the real-world domain, can be tailored to a distinct task. In some implementations, domain-adversarial similarity losses are determined during training and utilized to regularize at least portion(s) of the model.
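
Domain-adversarial losses of this kind are commonly implemented with a gradient-reversal layer and a domain classifier applied to shared features from the simulated and real domains; whether the patent uses exactly this construction is not stated in the abstract, so the following is a generic DANN-style sketch in PyTorch.

```python
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) the gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def domain_adversarial_loss(features_sim, features_real, domain_classifier, lambd=1.0):
    """Domain-confusion term: the reversed gradient pushes the feature extractor
    to make simulated and real features indistinguishable to the domain classifier."""
    feats = torch.cat([features_sim, features_real], dim=0)
    labels = torch.cat([
        torch.zeros(len(features_sim), dtype=torch.long),  # 0 = simulated domain
        torch.ones(len(features_real), dtype=torch.long),  # 1 = real domain
    ])
    reversed_feats = GradientReversal.apply(feats, lambd)
    logits = domain_classifier(reversed_feats)
    return nn.functional.cross_entropy(logits, labels)
```

In training, this term would typically be added to the task loss so that the shared feature layers are regularized toward domain-invariant representations while the task head is trained as usual.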