G05B19/423

TEACHING DEVICE FOR PERFORMING ROBOT TEACHING OPERATIONS AND TEACHING METHOD
20190160662 · 2019-05-30

A teaching device for performing teaching operations of a robot includes a selection unit which, during or after a teaching operation of the robot, moves among a plurality of lines of a program of the robot and selects a single line; an error calculation unit which calculates, after the robot has been moved by hand-guiding or jog-feeding to a teaching point already taught in the selected line, a position error between the teaching point and the position of the robot after movement; and an instruction unit which issues an instruction to re-teach the teaching point when the position error is within a predetermined range.
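The error check described above can be sketched as follows. This is an illustrative assumption, not code from the patent: the robot is moved back to an already-taught point, the Euclidean error between the taught and current positions is computed, and re-teaching is prompted only when the error lies within a configurable range. All names and thresholds are hypothetical.

```python
import math

def position_error(taught, current):
    """Euclidean distance between a taught point and the current robot position."""
    return math.dist(taught, current)

def should_prompt_reteach(taught, current, lower=0.001, upper=0.050):
    """Prompt re-teaching only when the error lies within [lower, upper] metres:
    below the range the point is effectively unchanged; above it the discrepancy
    is more likely a gross positioning mistake than an intended correction."""
    err = position_error(taught, current)
    return lower <= err <= upper
```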

HUMAN SKILL LEARNING BY INVERSE REINFORCEMENT LEARNING
20240201677 · 2024-06-20

A method for teaching a robot to perform an operation including human demonstration using inverse reinforcement learning and a reinforcement learning reward function. A demonstrator performs an operation with contact force and workpiece motion data recorded. The demonstration data is used to train an encoder neural network which captures the human skill, defining a Gaussian distribution of probabilities for a set of states and actions. Encoder and decoder neural networks are then used in live robotic operations, where the decoder is used by a robot controller to compute actions based on force and motion state data from the robot. After each operation, the reward function is computed, with a Kullback-Leibler divergence term which rewards a small difference between human demonstration and robot operation probability curves, and a completion term which rewards a successful operation by the robot. The decoder is trained using reinforcement learning to maximize the reward function.
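The reward structure described above (a Kullback-Leibler divergence term rewarding closeness between the human-demonstration and robot-operation distributions, plus a completion term) can be sketched for univariate Gaussians as below. The function names, weights, and bonus value are assumptions for illustration, not values from the patent.

```python
import math

def kl_gaussian(mu_p, sigma_p, mu_q, sigma_q):
    """KL(P || Q) for two univariate Gaussians with means mu and std devs sigma."""
    return (math.log(sigma_q / sigma_p)
            + (sigma_p**2 + (mu_p - mu_q)**2) / (2 * sigma_q**2)
            - 0.5)

def reward(demo, robot, success, kl_weight=1.0, completion_bonus=10.0):
    """A smaller divergence between the demonstration and robot distributions
    yields a larger reward; a successful operation adds a fixed bonus."""
    kl = kl_gaussian(*demo, *robot)
    return -kl_weight * kl + (completion_bonus if success else 0.0)
```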

POWER TOOL OPERATION RECORDING AND PLAYBACK

Systems and methods of operating power tools. The method includes receiving a command to start a recording mode at a first electronic processor of a first power tool, and receiving, at the first electronic processor, a measured parameter from a sensor of the first power tool while a first motor of the first power tool is operating. The method also includes generating a recorded motor parameter by recording the measured parameter on a first memory of the first power tool when the first power tool operates in the recording mode, and transmitting, with a first transceiver of the first power tool, the recorded motor parameter. The method further includes receiving the recorded motor parameter at an external device, transmitting the recorded motor parameter to a second power tool via the external device, and receiving the recorded motor parameter via a second transceiver of the second power tool.
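The record-and-transfer flow above can be sketched minimally: one tool records a motor parameter to memory while in recording mode, and an external device relays the recording to a second tool. The class and method names below are illustrative assumptions standing in for the processors and transceivers in the abstract.

```python
from dataclasses import dataclass, field

@dataclass
class PowerTool:
    memory: list = field(default_factory=list)  # stand-in for the tool's memory
    recording: bool = False

    def start_recording(self):
        self.recording = True

    def sample(self, measured_rpm):
        # Record the measured parameter only while in recording mode.
        if self.recording:
            self.memory.append(measured_rpm)

    def transmit(self):
        # Stand-in for the tool's transceiver sending the recorded parameter.
        return list(self.memory)

    def receive(self, recorded):
        self.memory = list(recorded)

def transfer_via_external_device(source, target):
    """The external device relays the recording from one tool to another."""
    target.receive(source.transmit())
```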

SYSTEM FOR TESTING AND TRAINING ROBOT CONTROL
20240189993 · 2024-06-13

A method for training and/or testing a robot control module. The method includes generating an instruction specified by a robot control module configured for robot training and/or testing, the instruction indicating how a human-driven robot task is to be performed when training and/or testing the robot control module; providing the instruction to a mixed reality device worn by a human data collector, the mixed reality device rendering the instruction in a manner that shows the human data collector how to perform the human-driven robot task; collecting performance data and environmental data in response to the human data collector attempting to perform the human-driven robot task using the data collection device; receiving feedback data in response to the human data collector attempting to perform the human-driven robot task specified by the instruction; and updating the robot control module using the feedback data and the collected performance and environmental data.

CONTROLLED RESISTANCE IN BACKDRIVABLE JOINTS

A computer-assisted system includes a manipulator arm including a joint, an actuator mechanism configured to drive the joint, and a controller including a computer processor. The controller is communicatively coupled to the manipulator arm and configured with a first control mode and a second control mode. In each of the first control mode and the second control mode, the controller commands the actuator mechanism to allow an external articulation to reconfigure the manipulator arm by backdriving the joint. The first control mode is distinguished from the second control mode at least by the controller being configured to, in the first control mode, command the actuator mechanism to provide a first speed-dependent resistance in response to the joint being backdriven at a first backdriven speed above a first speed threshold. The first speed-dependent resistance opposes the joint being backdriven.
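The first control mode described above can be sketched as a simple speed-dependent damping law: when the backdriven joint's speed exceeds a threshold, the actuator commands a resisting torque that grows with the excess speed and opposes the motion. The gain and threshold values below are illustrative assumptions, not values from the patent.

```python
def resistance_torque(joint_speed, speed_threshold=0.5, damping_gain=2.0):
    """Return the commanded resisting torque for a backdriven joint.
    Below the speed threshold the joint backdrives freely (zero resistance);
    above it, resistance grows linearly with the excess speed and its sign
    opposes the direction of motion."""
    excess = abs(joint_speed) - speed_threshold
    if excess <= 0.0:
        return 0.0
    direction = 1.0 if joint_speed > 0 else -1.0
    return -direction * damping_gain * excess
```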

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
20240219895 · 2024-07-04

Provided are a device and a method for presenting an evaluation score of teaching data, and the teaching data that is still needed, to a user in an easy-to-understand manner in a configuration that performs learning processing using teaching data. A teaching data execution unit generates, as learning data, a camera-captured image corresponding to movement of a robot by a user operation based on the teaching data, together with movement position information of the robot. A learning processing unit executes machine learning on the learning data generated by the teaching data execution unit and generates, as learning result data, a teaching data set including an image and a robot behavior rule. A feedback information generation unit evaluates the teaching data by inputting the learning data and the learning result data, and generates and outputs numerical and visual feedback information based on the evaluation result.

Method and means for handling an object
10300602 · 2019-05-28

A method for handling an object comprises the steps:
a) connecting the object (1) with a manipulator (5) and with an input tool (7) by means of which a direction (d⃗) within an internal coordinate system (K) relating to the input tool (7) can be entered;
d) initiating a test movement of the manipulator (5) on the basis of a direction (r⃗) known in the external coordinate system (K);
e) determining the direction (r⃗) of a movement of the input tool (7) in the internal coordinate system (K) resulting from the test movement of the manipulator (5);
f) determining a coordinate transformation (T) which transforms the direction of the resulting movement (r⃗) in the internal coordinate system into the known direction (r⃗) in the external coordinate system;
g) detecting an internal direction (d⃗) within the internal coordinate system (K) entered by a user using the input tool (7);
h) applying the coordinate transformation (T) to the detected internal direction (d⃗) in order to obtain an external direction (d⃗); and
i) controlling a movement of the manipulator (5) on the basis of the external direction (d⃗).
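Steps d) through i) can be sketched in two dimensions: a test movement with a known external direction produces an observed internal direction, the rotation mapping the observed direction onto the known one serves as the transformation T, and T is then applied to any user-entered internal direction. This is a simplified illustration; the patent does not limit T to a planar rotation, and all function names are assumptions.

```python
import math

def rotation_between(internal_dir, external_dir):
    """Angle of the rotation T taking the observed internal direction onto
    the known external direction (both given as (x, y) vectors)."""
    a = math.atan2(external_dir[1], external_dir[0])
    b = math.atan2(internal_dir[1], internal_dir[0])
    return a - b

def apply_rotation(angle, direction):
    """Apply T to a user-entered internal direction, yielding the external
    direction used to command the manipulator."""
    c, s = math.cos(angle), math.sin(angle)
    x, y = direction
    return (c * x - s * y, s * x + c * y)
```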
