DEVICE AND METHOD FOR TRAINING A NEURAL NETWORK FOR CONTROLLING A ROBOT FOR AN INSERTING TASK
20220335295 · 2022-10-20

A method for training a neural network to derive, from a force and a moment exerted on an object when it is pressed onto a plane in which an insertion for the object is located, a movement vector to insert the object into the insertion. The method includes, for a plurality of positions in which the object, or the part of the object held by the robot, touches the plane in which the insertion is located: controlling the robot to move to the position; controlling the robot to press the object onto the plane; measuring the force and moment experienced by the object; scaling the pair of force and moment by a number chosen randomly between zero and a predetermined positive maximum number; and labelling the scaled pair with a movement vector between the position and the insertion. The neural network is then trained using the labelled pairs of force and moment.
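A minimal sketch of the data-generation step this abstract describes, in Python. All names, shapes, and constants (`make_labelled_pair`, the 3-component force/moment, `max_scale`, the 2-D plane coordinates) are assumptions for illustration; the patent does not fix an API, and the actual network training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_labelled_pair(force, moment, position, insertion, max_scale=2.0):
    # Scale the measured (force, moment) pair by a random factor between
    # zero and max_scale, and label the scaled pair with the movement
    # vector from the touch position to the insertion.
    s = rng.uniform(0.0, max_scale)
    x = np.concatenate([s * np.asarray(force), s * np.asarray(moment)])
    y = np.asarray(insertion) - np.asarray(position)
    return x, y

# Collect a small labelled dataset from simulated touch positions.
insertion = np.array([0.5, 0.5])
dataset = []
for _ in range(100):
    position = rng.uniform(0.0, 1.0, size=2)
    force = rng.normal(size=3)   # stand-in for a measured contact force
    moment = rng.normal(size=3)  # stand-in for a measured contact moment
    dataset.append(make_labelled_pair(force, moment, position, insertion))
```

The random scaling makes the labels invariant to how hard the robot presses, so the network can only rely on the direction of the contact wrench, not its magnitude.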

DEVICE AND METHOD FOR CONTROLLING A ROBOT TO INSERT AN OBJECT INTO AN INSERTION
20220331964 · 2022-10-20

A method for controlling a robot to insert an object into an insertion. The method includes: controlling the robot to hold the object; generating an estimate of a target position for inserting the object into the insertion; controlling the robot to move to the estimated target position; taking a camera image using a camera mounted on the robot after the robot has moved to the estimated target position; feeding the camera image into a neural network trained to derive, from camera images, movement vectors that specify movements from the positions at which the images were taken to insert objects into insertions; and controlling the robot to move according to the movement vector derived by the neural network from the camera image.

DEVICE AND METHOD FOR TRAINING A NEURAL NETWORK FOR CONTROLLING A ROBOT FOR AN INSERTING TASK
20220335710 · 2022-10-20

A method for training a neural network to derive, from an image of a camera mounted on a robot, a movement vector for the robot to insert an object into an insertion. The method includes: controlling the robot to hold the object; bringing the robot into a target position in which the object is inserted in the insertion; for each of a plurality of positions different from the target position, controlling the robot to move away from the target position to that position, taking a camera image with the camera, and labelling the camera image with a movement vector for moving back from the position to the target position; and training the neural network using the labelled camera images.
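The "move away and label with the way back" data collection above can be sketched as follows. The camera function, the target pose, the perturbation range, and the 3-D pose representation are all illustrative assumptions, not from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

def take_camera_image(pose):
    # Hypothetical stand-in for the wrist-mounted camera.
    return np.full((8, 8), pose.sum())

# Target pose in which the object is inserted (values are illustrative).
target = np.array([0.2, 0.3, 0.1])

labelled_images = []
for _ in range(50):
    offset = rng.uniform(-0.05, 0.05, size=3)  # move away from the target
    pose = target + offset
    image = take_camera_image(pose)
    label = target - pose  # movement vector back to the target position
    labelled_images.append((image, label))
```

Starting from the inserted pose means every sample is labelled exactly, with no separate pose-estimation step.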

DEVICE AND METHOD FOR TRAINING A NEURAL NETWORK FOR CONTROLLING A ROBOT FOR AN INSERTING TASK
20220335622 · 2022-10-20

A method for training a neural network to derive, from an image of a camera mounted on a robot, a movement vector to insert an object into an insertion. The method includes, for a plurality of positions in which the object held by the robot touches a plane in which the insertion is located: controlling the robot to move to the position; taking a camera image with the camera; and labelling the camera image with a movement vector between the position and the insertion in the plane. The neural network is then trained using the labelled camera images.

COMPOSITIONAL GENERALIZATION FOR REINFORCEMENT LEARNING
20230107460 · 2023-04-06

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for controlling an agent interacting with an environment to perform a task. In one aspect, one of the methods comprises receiving an observation; processing the observation using a recurrent encoder neural network configured to receive as input the observation and to generate as output an encoder representation of the observation that comprises a respective feature vector for each of a plurality of spatially distinct portions of the observation, wherein each respective feature vector has a plurality of dimensions; for each of a plurality of subschema recurrent neural networks: generating a respective attention weight for each of the plurality of dimensions, generating an attended encoder representation, and updating the subschema hidden state using at least the attended encoder representation; and selecting an action using the updated subschema hidden states of the plurality of subschema recurrent neural networks.
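One subschema update step might look like the sketch below: per-dimension attention weights are derived from the subschema's hidden state, applied to the spatial feature vectors, and the pooled result drives a simple recurrent update. The weight matrices, the tanh cell, and the sum pooling are all assumptions; the abstract does not specify them.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(z - z.max())
    return e / e.sum()

def subschema_step(encoder_rep, hidden, w_attn, w_update):
    # encoder_rep: (P, D) — one D-dim feature vector per spatial portion.
    # hidden:      (H,)   — this subschema's hidden state.
    # w_attn:      (H, D) — maps the hidden state to per-dimension logits.
    # w_update:    (D+H, H) — recurrent update weights.
    attn = softmax(hidden @ w_attn)              # one weight per feature dimension
    attended = (encoder_rep * attn).sum(axis=0)  # attended encoder representation
    new_hidden = np.tanh(np.concatenate([attended, hidden]) @ w_update)
    return new_hidden
```

Because each subschema attends over feature *dimensions* rather than spatial locations, different subschemas can specialize to different factors of the observation.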

MACHINE LEARNING CONTROL OF OBJECT HANDOVERS
20230202031 · 2023-06-29

A robotic control system directs a robot to take an object from a human grasp by obtaining an image of a human hand holding an object, estimating the pose of the human hand and the object, and determining a grasp pose for the robot that will not interfere with the human hand. In at least one example, a depth camera is used to obtain a point cloud of the human hand holding the object. The point cloud is provided to a deep network that is trained to generate a grasp pose for a robotic gripper that can take the object from the human's hand without pinching or touching the human's fingers.

NEURAL NETWORKS TO GENERATE ROBOTIC TASK DEMONSTRATIONS

A technique for training a neural network, including generating a plurality of input vectors based on a first plurality of task demonstrations associated with a first robot performing a first task in a simulated environment, wherein each input vector included in the plurality of input vectors specifies a sequence of poses of an end-effector of the first robot, and training the neural network to generate a plurality of output vectors based on the plurality of input vectors. Another technique for generating a task demonstration, including generating a simulated environment that includes a robot and at least one object, causing the robot to at least partially perform a task associated with the at least one object within the simulated environment based on a first output vector generated by a trained neural network, and recording demonstration data of the robot at least partially performing the task within the simulated environment.
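The "sequence of end-effector poses per input vector" encoding can be sketched as below. The choice of 7 numbers per pose (position plus quaternion), the sequence length, and the function name are assumptions for illustration; the training of the network itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def demonstration_to_input_vector(pose_sequence):
    # Flatten a sequence of end-effector poses (T poses, 7 numbers each:
    # xyz position + unit quaternion) into one fixed-length input vector.
    return np.asarray(pose_sequence).reshape(-1)

# Build input vectors from a batch of simulated task demonstrations.
demos = [rng.normal(size=(10, 7)) for _ in range(5)]  # 10 poses per demo
input_vectors = [demonstration_to_input_vector(d) for d in demos]
```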
Robot controller that controls robot, learned model, method of controlling robot, and storage medium
11679496 · 2023-06-20

A robot controller that controls a robot by automatically obtaining a controller capable of suitably controlling a wide range of robots. An image is acquired from an image capturing apparatus that photographs an environment including the robot. The robot is driven based on an output result obtained by inputting the image to a neural network. The neural network is updated according to a reward that is generated when a plurality of virtual images, photographed while changing an environmental condition of a virtual environment (generated by virtualizing the environment) and a state of a virtual robot, are input to the neural network and the policy of the virtual robot output from the neural network satisfies a predetermined condition.
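A toy version of the domain-randomized, reward-gated update loop might look like the following. Here a simple random-search update stands in for whatever learning rule the patent uses, and the renderer, the policy, and the "predetermined condition" on the policy output are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def render_virtual_image(env_condition, robot_state):
    # Hypothetical renderer of the virtualized environment.
    return env_condition * np.ones(16) + robot_state

def policy(weights, image):
    # Minimal stand-in for the neural network's policy output.
    return float(np.tanh(weights @ image))

# Random-search update: keep a perturbed weight vector only when its
# reward (how well the policy output satisfies the assumed condition
# of staying near zero) improves on the best seen so far.
weights = rng.normal(size=16) * 0.1
best_reward = -np.inf
for _ in range(50):
    cond = rng.uniform(0.5, 1.5)    # randomized environmental condition
    state = rng.uniform(-0.1, 0.1)  # randomized virtual-robot state
    image = render_virtual_image(cond, state)
    candidate = weights + 0.01 * rng.normal(size=16)
    reward = 1.0 - abs(policy(candidate, image))
    if reward > best_reward:
        best_reward = reward
        weights = candidate
```

Randomizing the environmental condition at every step is what lets a policy trained purely in the virtual environment transfer to the photographed real one.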

Robot control device for issuing motion command to robot on the basis of motion sequence of basic motions
11673266 · 2023-06-13

This control device for controlling the motion of a robot comprises a first processing part and a command part. The first processing part takes as inputs a first state of the robot and a second state to which the robot transitions from the first state, and outputs at least one basic motion, selected from a plurality of basic motions the robot can be instructed to perform, for transitioning from the first state to the second state, together with the order in which the basic motions are to be performed. Prescribed operating parameters are set for each of the basic motions. The command part issues motion commands to the robot on the basis of the output of the first processing part.
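A minimal sketch of the two parts described above. The motion vocabulary, the rule base mapping state pairs to motion sequences, and the per-motion parameters are all invented for illustration; the patent specifies only the structure, not these contents.

```python
def plan_basic_motions(first_state, second_state):
    # First processing part: map a (current, goal) state pair to an
    # ordered list of basic motions. The rules below are illustrative.
    motions = []
    if second_state["grasped"] and not first_state["grasped"]:
        motions += ["approach", "grasp"]
    if first_state["position"] != second_state["position"]:
        motions.append("move")
    if first_state["grasped"] and not second_state["grasped"]:
        motions.append("release")
    return motions

# Prescribed operating parameters for each basic motion (assumed values).
MOTION_PARAMS = {
    "approach": {"speed": 0.1},
    "grasp": {"force": 5.0},
    "move": {"speed": 0.5},
    "release": {"force": 0.0},
}

def issue_commands(first_state, second_state):
    # Command part: pair each planned motion with its operating parameters.
    return [(m, MOTION_PARAMS[m])
            for m in plan_basic_motions(first_state, second_state)]
```

For example, going from an ungrasped object at position A to a grasped object at position B yields the ordered sequence approach, grasp, move.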