G05B2219/39298

Machine learning methods and apparatus for automated robotic placement of secured object in appropriate location

Training and/or use of a machine learning model for placement of an object secured by an end effector of a robot. A trained machine learning model can be used to process: (1) a current image, captured by a vision component of a robot, that captures an end effector securing an object; (2) a candidate end effector action that defines a candidate motion of the end effector; and (3) a target placement input that indicates a target placement location for the object. Based on the processing, a prediction can be generated that indicates the likelihood of successful placement of the object in the target placement location with application of the motion defined by the candidate end effector action. At each of multiple iterations, the candidate end effector action with the highest predicted probability is selected, and control commands are provided to cause the end effector to move in conformance with the selected action. When at least one release criterion is satisfied, control commands can be provided to cause the end effector to release the object, thereby leading to the object being placed in the target placement location.
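The iterative selection loop described above can be sketched as follows. The `predict_success` function is a hypothetical stand-in for the trained model (a toy distance heuristic, not the patented network), and `select_action` shows the greedy choice of the highest-probability candidate action.

```python
def predict_success(image, action, target):
    """Hypothetical stand-in for the trained model: returns the
    predicted likelihood that applying `action` places the secured
    object at `target` (toy heuristic based on distance to target)."""
    dx, dy = action
    tx, ty = target
    return 1.0 / (1.0 + abs(tx - dx) + abs(ty - dy))

def select_action(image, candidate_actions, target):
    """Greedy step: pick the candidate end effector action with the
    highest predicted probability of successful placement."""
    return max(candidate_actions,
               key=lambda a: predict_success(image, a, target))

# One iteration of the control loop with three candidate motions.
candidates = [(0.0, 0.0), (0.5, 0.2), (1.0, 1.0)]
best = select_action(None, candidates, target=(1.0, 1.0))
```

In the patented system this loop repeats each control cycle with a fresh camera image until the release criterion fires.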

Method and system for estimating the trajectory of an object on a map

A method is disclosed for estimating a trajectory of an object on a map given a sequence of traces for the moving object. Each trace includes information defining a position of the object measured at a given time, as well as information defining an area of accuracy around the measured position. The method processes pairs of successive traces, corresponding to two positions successive in time in the sequence of measured positions for the moving object. For each trace of a pair of successive traces, the method defines road segments on the map within the area of accuracy of the trace. For each road segment within the area of accuracy of the first trace of a pair and each road segment within the area of accuracy of the second trace, the method determines at least one candidate path between the two road segments. A neural network and a neural graph model are used to compute the most probable sequence of candidate paths to estimate the trajectory of the object on the map.
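Choosing the most probable sequence of candidate paths across successive traces is a classic dynamic-programming problem. The sketch below uses a Viterbi-style search, with an arbitrary `score` table standing in for the neural network's path probabilities (path names and scores are illustrative, not from the patent).

```python
def most_probable_path_sequence(candidate_steps, score):
    """Viterbi-style dynamic programming: candidate_steps[i] is the
    list of candidate paths for the i-th pair of successive traces,
    and score(prev, path) is the contribution of following `path`
    after `prev` (prev is None for the first step)."""
    best = {p: score(None, p) for p in candidate_steps[0]}
    back = []
    for step in candidate_steps[1:]:
        new_best, ptr = {}, {}
        for p in step:
            prev, s = max(((q, best[q] + score(q, p)) for q in best),
                          key=lambda t: t[1])
            new_best[p], ptr[p] = s, prev
        best = new_best
        back.append(ptr)
    # Backtrack from the highest-scoring final candidate path.
    seq = [max(best, key=best.get)]
    for ptr in reversed(back):
        seq.append(ptr[seq[-1]])
    return list(reversed(seq))

# Toy scores standing in for the neural network's output.
scores = {(None, 'a'): 0.1, (None, 'b'): 0.3,
          ('a', 'c'): 0.5, ('a', 'd'): 0.2,
          ('b', 'c'): 0.1, ('b', 'd'): 0.4}
trajectory = most_probable_path_sequence(
    [['a', 'b'], ['c', 'd']], lambda q, p: scores[(q, p)])
```

The backtracked sequence is the estimated trajectory over the road network.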

TASK AND PROCESS MINING BY ROBOTIC PROCESS AUTOMATIONS ACROSS A COMPUTING ENVIRONMENT

Disclosed herein is a method implemented by a task mining engine. The task mining engine is stored as processor executable code on a memory. The processor executable code is executed by a processor that is communicatively coupled to the memory. The method includes receiving recorded user tasks identifying user activity with respect to a computing environment and clustering the recorded user tasks into steps by processing and scoring each recorded user task. The method also includes extracting step sequences that identify similar combinations or repeated combinations of the steps to mimic the user activity.
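A minimal sketch of the clustering step, assuming a pluggable similarity score over recorded events; the greedy strategy, threshold, and event format here are illustrative, not the patented scoring scheme.

```python
def cluster_into_steps(events, similarity, threshold=0.5):
    """Greedy clustering: each recorded user task joins the first
    existing step whose representative event scores as similar
    enough, otherwise it starts a new step."""
    steps = []
    for event in events:
        for step in steps:
            if similarity(step[0], event) >= threshold:
                step.append(event)
                break
        else:
            steps.append([event])
    return steps

# Toy events: (application, action); same application => similar.
events = [("mail", "open"), ("mail", "reply"), ("sheet", "edit")]
steps = cluster_into_steps(
    events, lambda a, b: 1.0 if a[0] == b[0] else 0.0)
```

Step sequences that recur across recordings can then be extracted from `steps` to mimic the user activity.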

Transfer between Tasks in Different Domains

A system for trajectory imitation for robotic manipulators is provided. The system includes an interface configured to receive a plurality of task descriptions, wherein the interface is configured to communicate with a real-world robot; a memory to store computer-executable programs including a robot simulator, a training module, and a transfer module; and a processor in connection with the memory. The processor is configured to perform training using the training module, for the task descriptions on the robot simulator, to produce a plurality of source policies with subgoals for the task descriptions. The processor performs training using the training module, for the task descriptions on the real-world robot, to produce a plurality of target policies with subgoals for the task descriptions, and updates the parameters of the transfer module from corresponding trajectories with the subgoals for the robot simulator and the real-world robot.
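One way to picture the transfer module's parameter update is as fitting a map from simulator trajectories to real-robot trajectories sampled at corresponding subgoals. The sketch below fits a scalar linear map by stochastic gradient descent; the linear form, hyperparameters, and data are assumptions for illustration, not the patented method.

```python
def fit_transfer(sim_points, real_points, lr=0.1, epochs=200):
    """Fit real ~= scale * sim + offset by SGD over paired
    trajectory samples taken at corresponding subgoals."""
    scale, offset = 1.0, 0.0
    for _ in range(epochs):
        for s, t in zip(sim_points, real_points):
            err = scale * s + offset - t
            scale -= lr * err * s
            offset -= lr * err
    return scale, offset

# Toy pairing where the real robot is rescaled and offset: t = 2s + 1.
scale, offset = fit_transfer([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
```

Once fitted, such a map lets policies trained in the simulator be carried over to the real-world robot.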

NEURAL NETWORKS TO GENERATE ROBOTIC TASK DEMONSTRATIONS

A technique for training a neural network, including generating a plurality of input vectors based on a first plurality of task demonstrations associated with a first robot performing a first task in a simulated environment, wherein each input vector included in the plurality of input vectors specifies a sequence of poses of an end-effector of the first robot, and training the neural network to generate a plurality of output vectors based on the plurality of input vectors. Another technique for generating a task demonstration, including generating a simulated environment that includes a robot and at least one object, causing the robot to at least partially perform a task associated with the at least one object within the simulated environment based on a first output vector generated by a trained neural network, and recording demonstration data of the robot at least partially performing the task within the simulated environment.
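The input vectors described above are fixed-length encodings of end-effector pose sequences. A minimal sketch of such an encoding follows; the padding scheme and the simplified (x, y, z) pose format are assumptions.

```python
def demonstration_to_vector(poses, length):
    """Flatten a sequence of end-effector poses (here simplified to
    (x, y, z) positions) into a fixed-length input vector, padding
    short demonstrations by repeating the final pose."""
    padded = list(poses) + [poses[-1]] * max(0, length - len(poses))
    return [coord for pose in padded[:length] for coord in pose]

# A two-pose demonstration encoded as a three-pose input vector.
vec = demonstration_to_vector([(0, 0, 0), (1, 0, 0)], length=3)
```

Vectors like these form the training set from which the network learns to generate new demonstration trajectories.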

SELF-LEARNING INTELLIGENT DRIVING DEVICE
20220055211 · 2022-02-24 ·

A self-learning intelligent driving device including: a first neural network module for performing a corresponding action evaluation operation on an input image to generate at least one set of trajectory coordinates; a switching unit controlled by a switching signal, where when the switching signal is active, data received at a first port is sent to a second port, and when the switching signal is inactive, data received at the first port is sent to a third port; a second neural network module for performing a corresponding image evaluation operation on the at least one set of trajectory coordinates when the switching signal is active to generate at least one simulated trajectory image; and a driving unit having a robotic arm for generating at least one corresponding motion trajectory according to the at least one set of trajectory coordinates when the switching signal is inactive.
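The switching unit's routing behavior can be sketched directly from the claim language; the class and port names below are illustrative.

```python
class SwitchingUnit:
    """Routes data received at the first port: to the second port
    when the switching signal is active (toward the second neural
    network module), otherwise to the third port (toward the
    driving unit and its robotic arm)."""
    def __init__(self):
        self.second_port = []
        self.third_port = []

    def first_port(self, data, signal_active):
        if signal_active:
            self.second_port.append(data)
        else:
            self.third_port.append(data)

unit = SwitchingUnit()
unit.first_port("trajectory-coords-1", signal_active=True)
unit.first_port("trajectory-coords-2", signal_active=False)
```

Active routing feeds the self-learning loop (simulated trajectory images), while inactive routing drives the robotic arm directly.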

MACHINE LEARNING DEVICE THAT PERFORMS LEARNING USING SIMULATION RESULT, MACHINE SYSTEM, MANUFACTURING SYSTEM, AND MACHINE LEARNING METHOD
20170285584 · 2017-10-05 ·

A machine learning device that learns a control command for a machine by machine learning, including a machine learning unit that performs the machine learning to output the control command; a simulator that performs a simulation of a work operation of the machine based on the control command; and a first determination unit that determines the control command based on an execution result of the simulation by the simulator.
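The interplay of the machine learning unit, simulator, and first determination unit can be sketched as a propose, simulate, accept loop; all functions below are toy stand-ins for the patented components.

```python
def select_control_command(candidates, simulate, acceptable):
    """Propose control commands, run each through the simulator,
    and return the first command whose simulated work-operation
    result the determination unit accepts."""
    for command in candidates:
        if acceptable(simulate(command)):
            return command
    return None

# Toy setup: the simulator doubles the command; results >= 6 pass.
chosen = select_control_command(
    [1, 2, 3, 4], simulate=lambda c: 2 * c, acceptable=lambda r: r >= 6)
```

Only commands vetted by the simulation reach the physical machine, which is the point of learning from simulation results.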

ROBOTIC ACTIVITY DECOMPOSITION
20220048191 · 2022-02-17 ·

Provided are systems and methods for decomposing learned robotic activities into smaller sub-activities that can be used independently. In one example, a method may include storing simulation data comprising an activity of a robot during a training simulation performed via a robotic simulator, decomposing the activity into a plurality of sub-activities that are performed by the robot during the training simulation based on changes in behavior of the robot identified within the simulation data, and generating and storing, in storage, a plurality of programs for executing the plurality of sub-activities, respectively.
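Segmenting the simulation trace at behavior changes can be sketched as follows; the `changed` predicate stands in for whatever behavior-change detector the system uses, and the labeled trace is illustrative.

```python
def decompose_activity(states, changed):
    """Split a recorded simulation trace into sub-activities at the
    points where the robot's behavior changes."""
    segments, current = [], [states[0]]
    for prev, cur in zip(states, states[1:]):
        if changed(prev, cur):
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    return segments

# Toy trace labeled by behavior; a label change starts a new segment.
trace = ["move", "move", "grip", "grip", "move"]
subactivities = decompose_activity(trace, lambda a, b: a != b)
```

Each resulting segment can then be compiled into an independent program for reuse.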

ROBOT LEARNING VIA HUMAN-DEMONSTRATION OF TASKS WITH FORCE AND POSITION OBJECTIVES

A system for demonstrating a task to a robot includes a glove, sensors, and a controller. The sensors measure task characteristics while a human operator wears the glove and demonstrates the task. The task characteristics include a pose, joint angle configuration, and distributed force of the glove. The controller receives the task characteristics and uses machine learning logic to learn and record the demonstrated task as a task application file. The controller transmits control signals to the robot to cause the robot to automatically perform the demonstrated task. A method includes measuring the task characteristics using the glove, transmitting the task characteristics to the controller, processing the task characteristics using the machine learning logic, generating the control signals, and transmitting the control signals to the robot to cause the robot to automatically execute the task.
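The measured task characteristics (glove pose, joint angle configuration, distributed force) must end up in a task application file. The sketch below serializes sampled frames as JSON, which is an assumption for illustration; the patent does not specify a file format.

```python
import json

def write_task_application_file(samples):
    """Serialize measured task characteristics into a task
    application file (JSON chosen here for illustration). Each
    sample is (pose, joint_angles, distributed_force)."""
    return json.dumps({"frames": [
        {"pose": list(pose), "joints": list(joints), "force": force}
        for pose, joints, force in samples
    ]})

# One sampled frame from a human demonstration with the glove.
file_text = write_task_application_file(
    [((0.0, 0.1, 0.2), (10.0, 20.0, 30.0), 1.5)])
```

The controller can later replay such a file, converting each frame back into control signals for the robot.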