ROBOT TRAJECTORY LEARNING BY DEMONSTRATION WITH PROBE SENSOR
20180348744 · 2018-12-06
Inventors
CPC classification (all under PHYSICS)
G05B2219/36312; G05B19/427; G05B19/421; G05B2219/36473; G05B19/423; G05B19/4148
International classification (all under PHYSICS)
G05B19/421; G05B19/427; G05B19/423
Abstract
A robot learning system for trajectory learning of a robot (RB) having a robot arm between a base and a tool center point (TCP). A user interface allows the user to control the robot arm in order to follow a desired trajectory during a real-time learning session. A probe sensor (PS) is mounted on the TCP during the learning session. The probe sensor (PS) measures a distance parameter (Z) indicative of the distance between the TCP and a surface forming the trajectory to be followed, and an orientation parameter (X, Y) indicative of the orientation of the TCP relative to the surface forming the trajectory to be followed. These distance and orientation data are provided as feedback to the controller of the robot (CTL) during the real-time learning session, thereby allowing the robot controller software to assist the user in following a desired trajectory in a continuous manner. Especially, the probe sensor (PS) may have a displaceable tip (TP) that follows a surface and has a neutral or center position, where the robot controller software controls the robot movements to seek the neutral or center position irrespective of the user's control inputs. Data (DT) is logged during the learning session, so as to allow later control of the robot (RB) in response to the data (DT) logged during the learning session.
Claims
1. A robot learning system for trajectory learning of an associated robot comprising a robot arm between a base and a tool center point, by demonstration from a user, the system comprising: a user interface configured for connection to a controller of the robot, so as to allow the user to control the robot arm in order to follow a desired trajectory during a real-time learning session, wherein the user interface comprises at least a first control element for being operated by the user, and being configured to control position in space of the tool center point of the robot, a probe sensor configured to be mounted on the tool center point during the real-time learning session, wherein the probe sensor is configured to measure a distance parameter indicative of a distance between the tool center point and a surface forming the trajectory to be followed and an orientation parameter indicative of an orientation of the tool center point relative to the surface forming the trajectory to be followed, and wherein the probe sensor is configured to continuously generate one or more signals corresponding to said distance and orientation parameters, and wherein said one or more signals are provided as feedback to the controller of the robot during the real-time learning session, and a processor configured to log data in response to the user's operation of the at least first control element during the real-time learning session, or to continuously log data at a predetermined sample rate, so as to allow later control of the robot in response to the data logged during the learning session.
2-15. (canceled)
16. The robot learning system according to claim 1, wherein the probe sensor comprises a longitudinally displaceable rod connected to the tip at one end and connected to a base of the probe sensor at the opposite end, so as to allow sensing of a distance between the base of the probe sensor and the tip by a distance sensor.
17. The robot learning system according to claim 16, wherein the longitudinally displaceable rod and the base of the probe sensor are connected at a joint, so as to allow multidirectional movement of the longitudinally displaceable rod in relation to the base of the probe sensor, wherein a first angle sensor is configured to sense an angle between the base of the probe sensor and the displaceable rod.
18. The robot learning system according to claim 17, wherein the first angle sensor is configured to sense an angle in a first direction between the base of the probe sensor and the longitudinally displaceable rod, and wherein a second angle sensor is configured to sense an angle in a second direction between the base of the probe sensor and the longitudinally displaceable rod, wherein said first and second directions are different.
19. The robot learning system according to claim 1, wherein the probe sensor has a neutral or center position of its tip relative to its base.
20. The robot learning system according to claim 19, wherein the tip of the probe sensor is resiliently connected to its base, so that the tip will return to the neutral or center position after being forced away from the neutral or center position.
21. The robot learning system according to claim 1, wherein the processor or robot controller is programmed to continuously calculate a transformation of the robot coordinates in response to data representing said signal from the probe sensor during the real-time learning session.
22. The robot learning system according to claim 21, wherein the processor or robot controller is programmed to control the robot in response to a combination of input from the user interface and data representing the signal from the probe sensor, during the real-time learning session.
23. The robot learning system according to claim 22, wherein the robot controller is programmed to move the robot in response to feedback from the probe sensor, during the real-time learning session, so as to minimize a deviation between an actual position of the tip of the probe sensor and a neutral or center position of the tip of the probe sensor.
24. The robot learning system according to claim 22, wherein the robot controller is programmed to move the robot so as to help the user in continuously controlling the robot to ensure that the tip of the probe sensor is in contact with a surface of an object to be followed.
25. The robot learning system according to claim 1, wherein the user interface comprises a first control element comprising a first joystick configured for operation by the user's one hand for control of position in space of the tool center point of the robot, wherein the user interface comprises a second control element comprising a second joystick for tilting or rotating the tool center point of the robot, wherein the second joystick is configured for simultaneous operation by the user's second hand.
26. The robot learning system according to claim 1, wherein the processor is configured to control the robot in response to the data logged during the learning session, and wherein the processor is programmed to calculate a transformation of the robot coordinates in response to an input regarding physical properties and further in response to known properties of the probe sensor.
27. A robot system comprising a robot comprising a robot arm with a plurality of moveable arm elements arranged between a base and a tool center point, wherein the tool center point is configured for mounting of a tool, a robot controller configured to control movement of the robot, and a robot learning system according to claim 1.
28. A method for controlling a robot during trajectory learning of the robot by demonstration from a user to make the robot follow a desired trajectory during a real-time learning session, the method comprising: receiving distance input indicative of a distance between the tool center point (TCP) of the robot and a surface forming the trajectory to be followed during the real-time learning session, receiving orientation input indicative of an orientation of the tool center point relative to the surface forming the trajectory to be followed during the real-time learning session, continuously controlling the robot in response to user input, the distance input and the orientation input during the real-time learning session, and logging data in response to the user's operation of at least a first control element during the learning session so as to allow later control of the robot in response to the data logged during the learning session.
29. A computer program product having instructions which, when executed, cause a computing device or system comprising a processor to perform the method according to claim 28.
Description
BRIEF DESCRIPTION OF THE FIGURES
[0049] The invention will now be described in more detail with regard to the accompanying figures.
[0060] The figures illustrate specific ways of implementing the present invention and are not to be construed as being limiting to other possible embodiments falling within the scope of the attached claim set.
DETAILED DESCRIPTION OF THE INVENTION
[0062] In this example, the robot RB has an arm formed by five jointed arm elements between a base and a tool center point TCP. The controller of the robot, CTL, serves to control the actuators moving the arm elements. A probe sensor PS with a probe tip PT is mounted on the TCP of the robot RB, so as to continuously sense the distance and orientation between the probe tip PT and the TCP during the real-time trajectory learning session. This allows the probe tip to stay in contact with an object during trajectory following, e.g. along a welding trace. A signal PTL_IN is generated by the probe sensor in response to the sensed distance and orientation.
[0063] The user interface comprises two three-axis joysticks J1, J2, both mounted on the robot arm, namely on the TCP and on an elbow between two arm elements, a suitable distance (e.g. 30-80 cm) away from each other to allow the user to comfortably operate both joysticks J1, J2 simultaneously. A specific example of a low-cost joystick is the Apem 3140SAL600, which is based on Hall-effect sensing technology.
[0067] Key elements of the system are the two three-axis joysticks J1, J2, mounted in this example on a six-axis industrial robot body, and a preferably three-axis probe sensor PS. In the shown example the position-controlling joystick J1 is mounted near the sixth joint of the robot, i.e. near the TCP. The orientation-controlling joystick J2 is mounted near the robot elbow, at roughly a human shoulder width away, e.g. around 45-80 centimeters in the case of a large industrial robot. The three-axis probe sensor PS is mounted on the TCP.
[0073] All the axes preferably have spring designs, so that each axis returns to its center position when no force is applied to the probe sensor PS. The probe tip PT is preferably designed for the actual workpiece and must therefore be easily replaceable. E.g., the probe tip PT may be formed as a small ball, so that it can easily slide on a surface.
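As an illustration of how the three spring-centered probe axes can be interpreted, the sketch below converts two joint angles and a linear rod extension into a Cartesian offset of the probe tip relative to the probe base. This is an assumed kinematic model for illustration only, not the patented construction; the function name and parameters are hypothetical.

```python
import math

def probe_tip_offset(angle_x, angle_y, extension, rod_length):
    """Illustrative model: map the two angle-axis readings (radians) and the
    linear extension of the displaceable rod (metres) to a Cartesian offset
    of the probe tip relative to the probe sensor base."""
    effective = rod_length + extension  # rod length plus linear travel
    # Tilt of the rod about the joint in two different directions.
    dx = effective * math.sin(angle_x)
    dy = effective * math.sin(angle_y)
    dz = effective * math.cos(angle_x) * math.cos(angle_y)
    return (dx, dy, dz)
```

With the axes in their neutral positions (both angles zero, no extension), the tip sits straight below the base at the rod length, which matches the spring-centered behavior described above.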
[0074] Equation (1) describes the transformation defined in h{1, . . . , n}, where n is the number of records. Each transformation is calculated using the rotation part and the position part in a homogeneous coordinate description. Equation (2) describes the transformation from the robot base to the end of the last joint for the i'th record; this equation is the input to the robot controller for moving the robot around the scene. Equation (3) describes the transformation from the robot base to the tip of the probe sensor for the i'th record. During the learning process, the two transformations on the right-hand side of equation (3) are recorded and used to calculate the desired path during execution, as described in equation (4):

  iT_6Joint^Probe  (1)

  iT_Base^6Joint  (2)

  iT_Base^Probe = iT_Base^6Joint · iT_6Joint^Probe  (3)

  iT_Base^6Joint,exec = iT_Base^6Joint · iT_6Joint^Probe · (T_6Joint^Tool)^(-1)  (4)

Here iT_A^B denotes the homogeneous transformation from frame A to frame B for the i'th record, and T_6Joint^Tool is the fixed transformation from the last joint to the mounted process tool.
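The composition of homogeneous transformations used in equation (3) can be sketched as follows. This is a minimal illustrative implementation, assuming 4x4 matrices as nested lists; the helper names are hypothetical.

```python
def matmul4(a, b):
    """Multiply two 4x4 homogeneous transformation matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def make_transform(rotation, position):
    """Build a 4x4 homogeneous matrix from a 3x3 rotation part and a
    3-element position part, as used in the equations above."""
    return ([list(rotation[i]) + [position[i]] for i in range(3)]
            + [[0.0, 0.0, 0.0, 1.0]])

def base_to_probe(T_base_6joint, T_6joint_probe):
    """Equation (3): base-to-probe is base-to-sixth-joint composed with
    sixth-joint-to-probe."""
    return matmul4(T_base_6joint, T_6joint_probe)
```

During learning, the two factor transformations are the quantities recorded per sample; the composed result gives the probe tip pose relative to the robot base.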
[0076] The learning part, to the left, comprises a main loop that runs while learning is active: #1: Loop start. #2: Read, via an analog-to-digital converter, data from the position joystick J1, the rotation joystick J2, and the probe sensor PS. #3: Calculate the transformation of equation (1) from the probe sensor PS readings, using a feedback-assisted algorithm that keeps the three probe sensor axes close to their center positions during learning, thereby avoiding the sensor's outer mechanical limits. Calculate the transformations according to equations (1) and (2) based thereon, and save them. #4: Determine the probe sensor correction. #5: Calculate the new transformation from equation (2) with the adjustment determined in state #4, send it to the robot controller CTL, and return to state #1.
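The learning states #1-#5 above can be sketched as a loop. This is a hedged outline, not the patented implementation: read_adc, solve_probe_transform, centering_correction and send_to_controller are hypothetical stand-ins for the hardware and controller I/O.

```python
def learning_loop(read_adc, solve_probe_transform, centering_correction,
                  send_to_controller, active):
    """Sketch of the learning main loop; all callables are injected stubs."""
    log = []
    while active():                                   # #1: loop start
        j1, j2, probe = read_adc()                    # #2: joysticks + probe axes
        T_probe = solve_probe_transform(probe)        # #3: equation (1) transform
        log.append((T_probe, j1, j2))                 #     save for later execution
        correction = centering_correction(probe)      # #4: keep sensor near center
        send_to_controller(j1, j2, correction)        # #5: command robot, repeat
    return log
```

The returned log corresponds to the recorded transformations replayed by the execution part.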
[0077] The execution part, to the right, comprises two states in a main loop that runs until a (user) stop STP. #6: Calculate equation (4), send it to the robot controller CTL, and return to state #6, as long as recorded transformations remain.
[0079] A real-time learning session may begin with the robot being controlled so that the tip of the probe sensor mounted on its TCP starts at one end of the desired trajectory to be followed. In a continuous real-time manner, the robot is then controlled to follow the trajectory: the user simultaneously controls the position in space and the orientation of the TCP, e.g. by operating joysticks, while at the same time the probe sensor provides input to the robot controller regarding the distance and orientation deviation between a neutral position of the probe sensor tip and its actual position. This is performed until the end E_T2 of the desired trajectory has been reached. During this controlling, data from the robot controller is logged with a suitable temporal and spatial precision in response to the user's operation of the user interface. Finally, a transformation is calculated in response to the data logged from the probe sensor, so as to adapt the robot movement to a process tool with a given length.
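One way to realize logging "with a suitable temporal and spatial precision" is a predicate that records a sample when either a distance or a time threshold is exceeded. The threshold values below are assumptions for illustration, not figures from this description.

```python
def should_log(prev_pos, cur_pos, last_time, now, min_dist=0.001, max_dt=0.01):
    """Illustrative logging criterion: record a new sample when the TCP has
    moved more than min_dist metres since the last logged sample, or more
    than max_dt seconds have elapsed (continuous sampling fallback)."""
    moved = sum((c - p) ** 2 for c, p in zip(cur_pos, prev_pos)) ** 0.5
    return moved >= min_dist or (now - last_time) >= max_dt
```

Combining a spatial and a temporal criterion keeps the log dense on fast or curved sections while still capturing dwell periods at the predetermined sample rate.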
[0086] This method can be programmed as a control algorithm forming part of a robot controller software package, or it can be fully or partly implemented in a dedicated computing device.
[0089] Each potentiometer (X, Y, Z) in the probe sensor has a defined center position, meaning that the two angle potentiometers (X, Y) are in their center positions, and the linear potentiometer (Z) is partly pressed in, e.g. halfway between its two outer positions. During the real-time learning process, the trajectory generator tries to keep the probe sensor at this center position, and therefore adds this centering correction to the rest of the control system.
[0090] The trajectory generator measures the current probe sensor position and calculates a new relative robot position as output. The system forces the robot to move in a direction such that the next sensor position will be closer to the neutral or center position of the probe sensor. This behavior prevents the probe sensor position limits from being exceeded. Furthermore, this behavior forms the primary control loop and helps the user control the robot during the real-time continuous trajectory learning session, during which the system logs or records the positions and/or control signals at a predetermined sample rate.
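The centering behavior just described can be sketched as a proportional step toward the sensor's neutral position. This is a minimal sketch under assumptions: the gain value and function name are illustrative, not taken from this description.

```python
def centering_step(probe_reading, center_position, gain=0.5):
    """Proportional correction: a relative robot move that brings the next
    probe reading closer to the sensor's neutral/center position.
    gain is an assumed tuning value in (0, 1]."""
    return tuple(gain * (c - r) for r, c in zip(probe_reading, center_position))
```

Applied every control cycle, the step shrinks the deviation geometrically toward zero, which is the "primary control loop" behavior that keeps the sensor away from its mechanical limits.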
[0092] To sum up: the invention provides a robot learning system for trajectory learning of a robot RB having a robot arm between a base and a tool center point TCP. A user interface allows the user to control the robot arm in order to follow a desired trajectory during a real-time learning session. A probe sensor PS is mounted on the TCP during the learning session. The probe sensor PS measures a distance parameter Z indicative of the distance between the TCP and a surface forming the trajectory to be followed, and an orientation parameter X, Y indicative of the orientation of the TCP relative to the surface forming the trajectory to be followed. These distance and orientation data are provided as feedback to the controller of the robot CTL during the real-time learning session, thereby allowing the robot controller software to assist the user in following a desired trajectory in a continuous manner. Especially, the probe sensor PS may have a displaceable tip TP that follows a surface and has a neutral or center position, where the robot controller software controls the robot movements to seek the neutral or center position irrespective of the user's control inputs. Data DT is logged during the learning session, so as to allow later control of the robot RB in response to the data DT logged during the learning session.
[0093] Although the present invention has been described in connection with the specified embodiments, it should not be construed as being in any way limited to the presented examples. The scope of the present invention is to be interpreted in the light of the accompanying claim set. In the context of the claims, the terms "including" or "includes" do not exclude other possible elements or steps. Also, the mentioning of references such as "a" or "an" etc. should not be construed as excluding a plurality. The use of reference signs in the claims with respect to elements indicated in the figures shall likewise not be construed as limiting the scope of the invention. Furthermore, individual features mentioned in different claims may possibly be advantageously combined, and the mentioning of these features in different claims does not exclude that a combination of features is possible and advantageous.