ROBOT LEARNING VIA HUMAN-DEMONSTRATION OF TASKS WITH FORCE AND POSITION OBJECTIVES
20170249561 · 2017-08-31
CPC Classification
B25J9/1633 (PERFORMING OPERATIONS; TRANSPORTING)
G06K7/10366 (PHYSICS)
B25J9/1664 (PERFORMING OPERATIONS; TRANSPORTING)
G06N3/008 (PHYSICS)
Abstract
A system for demonstrating a task to a robot includes a glove, sensors, and a controller. The sensors measure task characteristics while a human operator wears the glove and demonstrates the task. The task characteristics include a pose, joint angle configuration, and distributed force of the glove. The controller receives the task characteristics and uses machine learning logic to learn and record the demonstrated task as a task application file. The controller transmits control signals to the robot to cause the robot to automatically perform the demonstrated task. A method includes measuring the task characteristics using the glove, transmitting the task characteristics to the controller, processing the task characteristics using the machine learning logic, generating the control signals, and transmitting the control signals to the robot to cause the robot to automatically execute the task.
Claims
1. A system for demonstrating a task having force and position objectives to a robot, the system comprising: a glove; a plurality of sensors configured to collectively measure a set of task characteristics while a human operator wears the glove and demonstrates the task, wherein the set of task characteristics includes a pose, a joint angle configuration, and a distributed force of the glove; and a controller in communication with the sensors that is programmed to: receive the measured task characteristics from the sensors; and apply machine learning logic to the received measured task characteristics to thereby learn and record the demonstrated task as a task application file.
2. The system of claim 1, wherein the controller is further programmed to generate a set of control signals using the task application file, and to transmit the set of control signals to the robot to thereby cause the robot to automatically perform the demonstrated task.
3. The system of claim 1, wherein the glove includes a palm and a plurality of fingers, and wherein the sensors that measure the distributed force of the glove include a plurality of force sensors arranged on the fingers and palm of the glove.
4. The system of claim 3, wherein the plurality of force sensors are piezo-resistive sensors.
5. The system of claim 3, wherein the plurality of fingers includes four fingers and an opposable thumb.
6. The system of claim 1, wherein the sensors that measure the joint angle configuration of the glove include a plurality of flexible conductive sensors each having a variable resistance corresponding to a different joint angle of the glove.
7. The system of claim 1, wherein the sensors that measure the pose of the glove include an inertial sensor.
8. The system of claim 1, wherein the sensors that measure the pose of the glove include a magnetic sensor.
9. The system of claim 1, wherein the sensors that measure the pose of the glove include an RFID device.
10. The system of claim 1, further comprising a camera operable for detecting a position of a target in the form of the operator, the operator's hands, or an object, wherein the controller is programmed to receive the detected position as part of the set of task characteristics.
11. The system of claim 1, wherein the controller is programmed with kinematics information of an end-effector of the robot and kinematics information of the glove, and is operable for calculating relative positions and orientations of the end-effector using the kinematics information of the end-effector and of the glove.
12. A method for demonstrating a task having force and position objectives to a robot using a glove on which is positioned a plurality of sensors configured to collectively measure a set of task characteristics, including a pose, a joint angle configuration, and a distributed force of the glove, the method comprising: measuring the set of task characteristics using the glove while a human operator wears the glove and demonstrates the task; transmitting the task characteristics to a controller; and processing the task characteristics via the controller using machine learning logic to thereby learn and record the demonstrated task as a task application file.
13. The method of claim 12, further comprising: generating a set of control signals via the controller using the task application file; and transmitting the set of control signals from the controller to the robot to thereby cause the robot to automatically perform the demonstrated task.
14. The method of claim 12, wherein processing the task characteristics using machine learning logic includes generating task primitives defining core steps of the demonstrated task.
15. The method of claim 12, wherein a camera is used in measuring the set of task characteristics, and wherein the task characteristics include a relative position between the human operator or the glove and a point in a workspace.
16. The method of claim 12, wherein processing the task characteristics via the controller using machine learning logic to thereby learn and record the demonstrated task includes translating the demonstrated task into machine-readable and executable code using kinematics information describing kinematics of the glove.
17. The method of claim 12, wherein the glove includes a palm and a plurality of fingers, the sensors include a plurality of piezo-resistive force sensors arranged on the fingers and palm, and measuring the set of task characteristics includes measuring the distributed force using the piezo-resistive force sensors.
18. The method of claim 12, wherein the sensors include a plurality of flexible conductive sensors each having a variable resistance corresponding to a different joint angle of the glove, and wherein measuring the set of task characteristics includes measuring the joint angle configuration via the flexible conductive sensors.
19. The method of claim 12, wherein measuring the set of task characteristics includes measuring the pose of the glove via an inertial or a magnetic sensor.
20. The method of claim 12, wherein measuring the set of task characteristics includes measuring the pose of the glove via an RFID device.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0014] Referring to the drawings, wherein like reference numbers correspond to like or similar components throughout the several figures, a glove 10 is shown schematically.
[0016] Unlike conventional methodologies that use vision systems to determine position and teach pendants to drive a robot during a given task demonstration, the present approach allows the human operator 50 to perform a dexterous task directly, i.e., with the human operator 50 acting alone, without any involvement of the robot 70 in the demonstration.
[0017] To address this challenge, the human operator 50 directly performs the task, with the demonstrated task having both force and position objectives as noted above. To accomplish the desired ends, the glove 10 may be equipped with a plurality of different sensors, including at least a palm pose sensor 20, joint configuration sensors 30, and an array of force sensors 40, all of which are arranged on the palm 17, fingers 12, and thumb 12T.
[0018] The task characteristics may include a distributed force (arrow F.sub.10) on the glove 10 as determined using the array of force sensors 40, as well as a palm pose (arrow O.sub.17) determined via the palm pose sensor 20 and a joint angle configuration (arrow J.sub.12) determined using the various joint configuration sensors 30. The first controller 60, which may be programmed with kinematics data (K.sub.10) describing the kinematics of the glove 10, may process the task characteristics and output a task application file (TAF) (arrow 85) to a second controller (C2) 80 prior to the control of the robot 70, as described in more detail below. While first and second controllers 60 and 80 are described herein, a single controller or more than two controllers may be used in other embodiments.
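As a minimal sketch of how one sample of these task characteristics might be organized in software (the patent does not specify a data layout, so all names below are illustrative assumptions):

```python
# Illustrative data structure for one sample of the measured task
# characteristics; field names are assumptions, not from the patent.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskCharacteristics:
    t: float                   # sample time, seconds
    palm_pose: List[float]     # O_17: e.g. [x, y, z, roll, pitch, yaw] of the palm
    joint_angles: List[float]  # J_12: one bending angle per glove joint, radians
    forces: List[float]        # F_10: one reading per force sensor in the array

@dataclass
class Demonstration:
    samples: List[TaskCharacteristics] = field(default_factory=list)

    def add(self, sample: TaskCharacteristics) -> None:
        self.samples.append(sample)
```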
[0021] In an example embodiment, the joint configuration sensors 30 may be embodied as individual resolvers positioned at each joint, or as flexible strips embedded in or connected to the material of the glove 10. The joint configuration sensors 30 determine a bending angle of each joint and output the individual joint angles (arrow J.sub.12) to the first controller 60.
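Where the flexible strips are used, each strip's variable resistance must be converted to a joint angle. The sketch below assumes a simple voltage-divider readout (flex strip above a fixed resistor to ground) and a roughly linear resistance-to-angle relationship between two calibration points; a real sensor would need a measured calibration curve:

```python
# Illustrative resistance-to-angle conversion for a flex strip; assumes a
# voltage divider with the strip on the high side and linear behavior
# between a flat and a fully-bent calibration point.
def divider_resistance(v_out: float, v_in: float = 3.3, r_fixed: float = 10_000.0) -> float:
    """Resistance of the flex strip from the divider's measured output voltage."""
    return r_fixed * (v_in - v_out) / v_out

def resistance_to_angle(r: float, r_flat: float, r_bent: float,
                        angle_bent_deg: float = 90.0) -> float:
    """Linear interpolation between the flat (0 deg) and bent calibration points."""
    return angle_bent_deg * (r - r_flat) / (r_bent - r_flat)

# Example: a strip reading 25 kOhm, calibrated at 10 kOhm flat / 30 kOhm at 90 deg.
angle = resistance_to_angle(25_000.0, r_flat=10_000.0, r_bent=30_000.0)
print(f"estimated joint angle: {angle:.1f} deg")  # -> 67.5 deg
```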
[0024] Optionally, the system 25 may include a camera 38 operable for detecting a target during demonstration of the task, such as the position of the human operator 50, the operator's hands, or an assembled or other object held by or proximate to the operator 50. The camera 38 outputs the detected position as a position signal (arrow P.sub.50), which may be received by the first controller 60 as part of the measured task characteristics. A machine vision module (MVM) of the first controller 60 can determine the position of the human operator 50 from the received position signal (arrow P.sub.50), e.g., by receiving an image file and processing it using known image processing algorithms, and can likewise determine a relative position of the glove 10 with respect to the human operator 50.
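The image-processing details are left open above; the following sketch covers only the downstream bookkeeping, computing the glove's position relative to the operator from two hypothetical camera-frame detections:

```python
# Illustrative relative-position computation; the detections themselves
# (the machine vision step) are abstracted away.
import numpy as np

def relative_position(p_operator_cam: np.ndarray, p_glove_cam: np.ndarray) -> np.ndarray:
    """Glove position expressed relative to the operator, both in the camera frame."""
    return p_glove_cam - p_operator_cam

p50 = np.array([0.4, 1.1, 2.0])  # operator position in the camera frame (metres)
p10 = np.array([0.7, 0.9, 1.8])  # glove position in the camera frame
print(relative_position(p50, p10))  # -> [ 0.3 -0.2 -0.2]
```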
[0025] The first controller 60 can thereafter apply conventional machine learning techniques to the measured task characteristics using a machine learning (ML) logic module of the first controller 60 to thereby learn and record the demonstrated task as the task application file 85. The second controller 80 is programmed to receive the task application file 85 from the first controller 60 as machine-readable instructions, and to ultimately execute the task application file 85 and thereby control an operation of the robot 70.
[0026] The respective first and second controllers 60 and 80 may include such common elements as a processor (P) and memory (M), the latter including tangible, non-transitory memory devices or media such as read-only memory, random access memory, optical memory, flash memory, electrically-programmable read-only memory, and the like. The first and second controllers 60 and 80 may also include any required logic circuitry, including but not limited to proportional-integral-derivative control logic, a high-speed clock, analog-to-digital circuitry, digital-to-analog circuitry, a digital signal processor, and the necessary input/output devices and other signal conditioning and/or buffer circuitry. The term “module” as used herein, including the machine vision module (MVM) and the machine learning (ML) logic module, encompasses all hardware and software needed to perform the designated tasks.
[0027] Kinematics information (K.sub.72) of the end-effector 72 and kinematics information (K.sub.10) of the glove 10 may be stored in the memory (M), such that the first controller 60 is able to calculate the relative positions and orientations between the human operator 50 and/or the glove 10 and a point in the workspace in which the task demonstration is taking place. As used herein, the term “kinematics” refers to the calibrated and thus known size, relative positions, configuration, motion trajectories, and range-of-motion limitations of a given device or object. Thus, by knowing precisely how the glove 10 is constructed and moves, and how the end-effector 72 likewise moves, the first controller 60 can translate the motion of the glove 10 into motion of the end-effector 72, and thereby compile the required machine-executable instructions.
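Under the assumption that this glove-to-robot translation reduces to a fixed calibration transform between the recorded palm pose and the commanded end-effector pose (one simple reading of the step above, not the patent's stated method), the retargeting might look like:

```python
# Minimal retargeting sketch: poses are 4x4 homogeneous matrices, and T_cal
# is an assumed fixed calibration transform from a prior registration step.
import numpy as np

def retarget_pose(T_palm: np.ndarray, T_cal: np.ndarray) -> np.ndarray:
    """Commanded end-effector pose for one recorded palm pose."""
    return T_cal @ T_palm

T_cal = np.eye(4)
T_cal[:3, 3] = [0.0, 0.0, 0.15]  # e.g. a 15 cm tool offset (illustrative)

T_palm = np.eye(4)
T_palm[:3, 3] = [0.5, 0.2, 0.3]  # one recorded palm pose

print(retarget_pose(T_palm, T_cal)[:3, 3])  # -> [0.5  0.2  0.45]
```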
[0028] With respect to machine learning in general, this term refers herein to the types of artificial intelligence that are well known in the art. Thus, the first controller 60 is programmed with the requisite data analysis logic for iteratively learning from and adapting to dynamic input data. For instance, the first controller 60 can perform such example operations as pattern detection and recognition, e.g., using supervised or unsupervised learning, Bayesian algorithms, clustering algorithms, decision tree algorithms, or neural networks. Ultimately, the machine learning (ML) logic module outputs the task application file 85, i.e., a computer-readable program or code that is executable by the robot 70 using the second controller 80. The second controller 80 then outputs control signals (arrow CC.sub.70) to the robot 70 to thereby cause the robot 70 to perform the demonstrated task as set forth in the task application file 85.
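The disclosure leaves the particular learning technique open. As one deliberately simple, concrete stand-in for the pattern-detection step only, a demonstration could be segmented into "free motion" and "in contact" primitives by thresholding total grip force:

```python
# Toy segmentation of a demonstration into primitives by force threshold;
# a stand-in for the clustering/pattern-detection step, not the patent's method.
import numpy as np

def segment_primitives(total_force: np.ndarray, contact_threshold: float = 2.0):
    """Return (start, end, label) runs over the force trace."""
    in_contact = total_force > contact_threshold
    segments, start = [], 0
    for i in range(1, len(in_contact) + 1):
        if i == len(in_contact) or in_contact[i] != in_contact[start]:
            segments.append((start, i, "contact" if in_contact[start] else "free"))
            start = i
    return segments

force = np.array([0.1, 0.2, 3.5, 4.0, 3.8, 0.3, 0.1])
print(segment_primitives(force))
# -> [(0, 2, 'free'), (2, 5, 'contact'), (5, 7, 'free')]
```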
[0030] Step S104 includes measuring the task characteristics (TC) using the glove 10 while the human operator 50 wears the glove 10 and demonstrates the task. The sensors 20, 30, and 40 collectively measure the task characteristics (TC) and transmit the signals describing them, i.e., the distributed force (arrow F.sub.10), the palm pose (arrow O.sub.17), and the joint angle configuration (arrow J.sub.12), to the first controller 60. The method 100 continues with step S106.
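A sketch of the step-S104 sampling loop follows, with the hardware reads and the link to the first controller 60 stubbed out; read_sensors() and send_to_controller() are hypothetical placeholders, not interfaces from the patent:

```python
# Illustrative fixed-rate sampling loop for the demonstration phase.
import time

def read_sensors(t: float) -> dict:
    """Hypothetical hardware read; returns one task-characteristics sample."""
    return {"t": t, "forces": [], "palm_pose": [], "joint_angles": []}

def send_to_controller(sample: dict) -> None:
    """Hypothetical link to the first controller; could be serial, BLE, etc."""
    pass

def demonstrate(rate_hz: float = 100.0, duration_s: float = 5.0) -> None:
    period = 1.0 / rate_hz
    t0 = time.monotonic()
    while (t := time.monotonic() - t0) < duration_s:
        send_to_controller(read_sensors(t))
        time.sleep(period)
```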
[0031] At step S106, the first controller 60 may determine if the demonstration of the task is complete. Various approaches may be taken to implement step S106, including detecting a home position or a calibrated gesture or position of the glove 10, or detecting depression of a button (not shown) informing the first controller 60 that the demonstration of the task is complete. The method 100 then proceeds to step S108, which may be optionally informed by data collected at step S107.
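A hedged sketch of the first of those completion tests: declaring the demonstration finished once the palm pose has stayed within a small tolerance of a calibrated home pose for a hold period. Thresholds are illustrative, not from the patent:

```python
# Illustrative home-position completion test for step S106.
import numpy as np

def demo_complete(pose_history: list, home: np.ndarray,
                  tol: float = 0.02, hold_samples: int = 50) -> bool:
    """True if the last `hold_samples` poses are all within `tol` of home."""
    if len(pose_history) < hold_samples:
        return False
    recent = np.asarray(pose_history[-hold_samples:])
    return bool(np.all(np.linalg.norm(recent - home, axis=1) < tol))
```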
[0032] Optional step S107 includes using the camera 38 described above to detect the position of the human operator 50, the operator's hands, or an object during the demonstration, with the resulting position signal (arrow P.sub.50) received by the first controller 60 as part of the measured task characteristics.
[0033] Step S108 includes learning the demonstrated task from steps S102-S106. This entails processing the received task characteristics, during or after completion of the demonstration, via the machine learning (ML) logic module, for instance by generating task primitives defining the core steps of the demonstrated task.
[0034] Step S110 includes translating the demonstrated task from step S108 into the task application file 85. Step S110 may include using the kinematics information (K.sub.10) and (K.sub.72) to translate the task as performed by the human operator 50 into machine-readable and executable code suitable for the end-effector 72.
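The disclosure does not fix a format for the task application file 85; purely as an assumed example, the learned primitives could be serialized to JSON as follows:

```python
# Illustrative serialization of learned primitives into a task application
# file; the JSON schema is an assumption for the sake of a concrete example.
import json

primitives = [
    {"type": "move",  "ee_pose": [0.5, 0.2, 0.45, 0, 0, 0]},
    {"type": "grasp", "grip_force_n": 3.8},
    {"type": "move",  "ee_pose": [0.3, 0.2, 0.45, 0, 0, 0]},
    {"type": "release"},
]

with open("task_application_file.json", "w") as f:
    json.dump({"task": "demo", "primitives": primitives}, f, indent=2)
```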
[0035] At step S112, the second controller 80 receives the task application file 85 from the first controller 60 and executes a control action with respect to the robot 70.
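Continuing the assumed JSON format from the previous sketch, the second controller's replay of the file might reduce to a loop of this shape, with command_pose(), close_gripper_to_force(), and open_gripper() as hypothetical stand-ins for the robot 70's actual drive interface:

```python
# Illustrative replay loop for step S112; robot interface calls are stubs.
import json

def command_pose(pose):        print("move to", pose)      # position objective
def close_gripper_to_force(f): print("grip at", f, "N")    # force objective
def open_gripper():            print("release")

def execute_taf(path: str) -> None:
    with open(path) as f:
        taf = json.load(f)
    for p in taf["primitives"]:
        if p["type"] == "move":
            command_pose(p["ee_pose"])
        elif p["type"] == "grasp":
            close_gripper_to_force(p["grip_force_n"])
        elif p["type"] == "release":
            open_gripper()

execute_taf("task_application_file.json")
```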
[0036] While the best modes for carrying out the present disclosure have been described in detail, those familiar with the art to which this disclosure pertains will recognize various alternative designs and embodiments may exist that fall within the scope of the appended claims.