Intuitive motion coordinate system for controlling an industrial robot
09958862 · 2018-05-01
Assignee
Inventors
Cpc classification
Y10S901/14
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
Y10S901/28
B25J9/161
PERFORMING OPERATIONS; TRANSPORTING
G05B19/427
PHYSICS
Y10S901/30
Y10S901/06
International classification
G05B19/427
PHYSICS
Abstract
A method and apparatus for controlling an industrial robot relative to an intuitive motion coordinate system. The current 3D position of a touch-screen teach pendant relative to the robot is sensed, and an operator-centric frame of reference is developed relative to the robot-centric frame of reference. A simulacra of the robot is generated, oriented so as to correspond with an operator view of the robot from the current position of the pendant, and displayed on the pendant. A motion-control construct, generated and displayed on the pendant, is adapted to receive jog commands from the operator indicative of a respective incremental movement of the simulacra in the operator-centric frame of reference. Each jog command is transformed from the operator-centric frame of reference to the robot-centric frame of reference, and the robot is moved in accordance with the transformed jog command. Movement of the pendant relative to the robot is sensed and, in response, the displayed simulacra is reoriented to correspond to the new position of the pendant relative to the robot as viewed by the operator.
Claims
1. A method for controlling a multi-axis robot using an operator interface adapted to interact with an operator, the operator interface comprising a teach pendant having an input element, a display element, and a sensor element adapted to sense a current 3D position of the pendant, the method comprising the steps of: [1] generating a simulacra of the robot for display on the display element; [2] determining a current 3D position of the robot relative to a robot-centric first frame of reference; [3] using the sensor element to sense the current 3D position of the pendant relative to the first frame of reference; [4] developing an operator-centric second frame of reference corresponding to the current 3D position of the pendant with respect to the first frame of reference; [5] orienting the simulacra in the second frame of reference to correspond with an operator view of the robot from the current 3D position of the pendant; [6] displaying the oriented simulacra on the display element; [7] receiving from the operator via the input element a jog command indicative of a respective incremental movement of the simulacra in the second frame of reference; [8] transforming the jog command from the second frame of reference to the first frame of reference; and [9] moving the robot in the first frame of reference in accordance with the transformed jog command.
2. The method of claim 1 further comprising the steps of: [10] displaying on the display element a motion-control construct representing a movement of the robot; wherein step [7] is further characterized as: [7] receiving from the operator via the motion-control construct displayed on the display element a jog command indicative of a respective incremental movement of the simulacra in the second frame of reference.
3. The method of claim 2 further comprising the steps of: [11] sensing a change in the current 3D position of the pendant relative to the first frame of reference; [12] translating the second frame of reference to correspond to the current 3D position of the pendant with respect to the first frame of reference; and [13] returning to step [5].
4. A method for controlling a multi-axis robot using an operator interface adapted to interact with an operator, the operator interface comprising a teach pendant having an input element, a display element, and a sensor element adapted to sense a current 3D position of the pendant, the method comprising the steps of: [1] generating a simulacra of the robot for display on the display element; [2] determining a current 3D position of the robot relative to a robot-centric first frame of reference; [3] using the sensor element to sense the current 3D position of the pendant relative to the first frame of reference; [4] developing an operator-centric second frame of reference corresponding to the current 3D position of the pendant with respect to the first frame of reference; [5] orienting the simulacra in the second frame of reference to correspond with an operator view of the robot from the current 3D position of the pendant; [6] displaying the oriented simulacra on the display element; [7] displaying on the display element a motion-control construct suggestive of a movement of the simulacra; [8] receiving from the operator via the motion-control construct displayed on the display element a jog command indicative of a respective incremental movement of the simulacra in the second frame of reference; [9] transforming the jog command from the second frame of reference to the first frame of reference; [10] moving the robot in the first frame of reference in accordance with the transformed jog command; [11] sensing a change in the current 3D position of the pendant relative to the first frame of reference; [12] translating the second frame of reference to correspond to the current 3D position of the pendant with respect to the first frame of reference; and [13] returning to step [5].
5. A computer-implemented method for controlling a multi-axis robot using an operator interface adapted to interact with an operator, the operator interface comprising a teach pendant having an input element, a display element, and a sensor element adapted to sense a current 3D position of the pendant, the method comprising the steps of: [1] determining a current 3D position of the robot relative to a robot-centric first frame of reference; [2] using the sensor element to sense the current 3D position of the pendant relative to the first frame of reference; [3] developing an operator-centric second frame of reference corresponding to the current 3D position of the pendant with respect to the first frame of reference; [4] displaying on the display element a motion-control construct suggestive of a movement of the robot; [5] receiving from the operator via the motion-control construct a jog command indicative of a respective incremental movement of the robot in the second frame of reference; [6] transforming the jog command from the second frame of reference to the first frame of reference; and [7] moving the robot in the first frame of reference in accordance with the transformed jog command.
6. The method of claim 5 further comprising the steps of: [4.1] generating a simulacra of the robot for display on the display element; [4.2] orienting the simulacra in the second frame of reference to correspond with an operator view of the robot from the current 3D position of the pendant; and [4.3] displaying the oriented simulacra on the display element in association with the displayed motion-control construct.
7. The method of claim 6 further comprising the steps of: [8] sensing a change in the current 3D position of the pendant relative to the first frame of reference; [9] translating the second frame of reference to correspond to the current 3D position of the pendant with respect to the first frame of reference; and [10] returning to step [4].
8. A method for using a computer to develop a simulacra of a multi-axis robot for display on a display screen integrated into a teach pendant adapted for use by an operator to control the robot, the method comprising the steps of: [1] generating a simulacra of the robot for display on the display screen; [2] determining a current 3D position of the robot relative to a robot-centric first frame of reference; [3] sensing a current 3D position of the pendant relative to the first frame of reference; [4] developing an operator-centric second frame of reference corresponding to the current 3D position of the pendant with respect to the first frame of reference; [5] orienting the simulacra in the second frame of reference to correspond with an operator view of the robot from the current 3D position of the pendant; and [6] displaying the oriented simulacra on the display screen.
9. The method of claim 8 further comprising the steps of: [7] sensing a change in the current 3D position of the pendant relative to the first frame of reference; [8] translating the second frame of reference to correspond to the current 3D position of the pendant with respect to the first frame of reference; and [9] returning to step [5].
10. Apparatus configured to perform the method according to any preceding claim.
11. A computer readable medium including executable instructions which, when executed by a computer, cause the computer to perform a method according to any one of claims 1 to 9.
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
(1) Our invention may be more fully understood by a description of certain preferred embodiments in conjunction with the attached drawings in which:
(8) In the drawings, similar elements will be similarly numbered whenever possible. However, this practice is simply for convenience of reference and to avoid unnecessary proliferation of numbers, and is not intended to imply or suggest that our invention requires identity in either function or structure in the several embodiments.
DETAILED DESCRIPTION OF THE INVENTION
(9) In accordance with our invention, we have developed a new operator coordinate frame of reference adapted to facilitate intuitive control of the relative motions of all axes of a multi-axis robot, as follows:
(10) Operator Coordinate Frame:
(11) In general, our new operator frame defines the direction of motion of the robot tool plate 106 in operator understandable terms, including, e.g., left, right, up, down, in and out. In accordance with our invention, these directions of motion will always be relative to the operator's current view of the robot, as the operator moves around the robot 100 in its workspace 108. For example, let us assume that the robot is in its home position and that the operator is standing directly in front of the robot, as shown in
θ_view = HEADING_current − HEADING_zero  [Eq. 1]

Of course, known position/orientation sensing means other than a compass sensor could be used. As will be clear to those skilled in this art, the position of the operator and, in particular, the teach pendant relative to the robot frame may be determined using any of several known 3D position location systems, including, e.g., the Global Positioning System (GPS) or, if higher precision is desired, one of the known Ultra-Wide-Band (UWB) systems. One such UWB system known to us is commercially available from DecaWave, Ltd. (Dublin, Ireland). Thus, for example, using only one single-chip UWB transceiver (as shown in U.S. Pat. No. 8,437,432, incorporated herein by reference), and at least two independently-operating, fixed UWB base stations, the teach pendant/operator can be precisely located within the workcell using known triangulation techniques. Indeed, we recognize that, in addition to determining the relative position of the teach pendant with respect to the robot frame, the angular orientation of the pendant itself can be determined using a plurality of independently-operating sensors located at respective spaced-apart positions within (or on) the pendant. We will demonstrate hereinafter how such an enhanced position/orientation sensing system can be exploited to great advantage.

Transform Operator Frame to User Frame:

As has been noted, our operator frame can be transformed into any of the conventional frame formats. For example, our operator frame can be represented mathematically in terms of the viewing angle, θ_view, relative to the user frame, as follows:
X_Operator = [Cos(θ_view) Sin(θ_view) 0]  [Eq. 2]
Z_Operator = [0 0 1]  [Eq. 3]
Y_Operator = Z_Operator × X_Operator  [Eq. 4]

where × denotes the vector cross-product, or
Y_Operator = [−Sin(θ_view) Cos(θ_view) 0]  [Eq. 5]

where (X_Operator, Y_Operator, Z_Operator) denotes the user frame representing the current operator frame. In this example, it should be noted that the origin of this user frame coincides with the origin of the robot frame. Thus, assuming that the robot is mounted on the floor and the operator is walking around on the same plane as the floor, then the 4×4 homogeneous transformation matrix for the transformation from the user frame representing the operator frame to the robot frame can be written as:
(12)
    RobotT^Operator = | Cos(θ_view)  −Sin(θ_view)  0  0 |
                      | Sin(θ_view)   Cos(θ_view)  0  0 |
                      | 0             0            1  0 |
                      | 0             0            0  1 |   [Eq. 6]
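As a concrete illustration of Eqs. 2 through 5, the operator frame axes and the corresponding homogeneous transform can be sketched in code as follows (a minimal sketch; the function names are ours, not part of the patent):

```python
import numpy as np

def operator_frame(theta_view: float) -> np.ndarray:
    """Build the operator/user frame axes of Eqs. 2-5 from the
    viewing angle theta_view (radians), as a 3x3 rotation matrix
    whose columns are X_Operator, Y_Operator, Z_Operator."""
    x_op = np.array([np.cos(theta_view), np.sin(theta_view), 0.0])  # Eq. 2
    z_op = np.array([0.0, 0.0, 1.0])                                # Eq. 3
    y_op = np.cross(z_op, x_op)  # Eq. 4; equals [-sin, cos, 0], Eq. 5
    return np.column_stack([x_op, y_op, z_op])

def operator_to_robot_transform(theta_view: float) -> np.ndarray:
    """4x4 homogeneous transform from the operator's user frame to the
    robot frame; origins coincide, so the translation part is zero."""
    T = np.eye(4)
    T[:3, :3] = operator_frame(theta_view)
    return T
```

For an operator standing directly in front of the robot (θ_view = 0), the transform reduces to the identity, so operator directions coincide with robot directions.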
(13)
    Jog Left    Jog in the negative Y_Operator direction
    Jog Right   Jog in the positive Y_Operator direction
    Jog Down    Jog in the negative Z_Operator direction
    Jog Up      Jog in the positive Z_Operator direction
    Jog In      Jog in the negative X_Operator direction
    Jog Out     Jog in the positive X_Operator direction
Δ_robot = (RobotT^Operator) · Δ_operator  [Eq. 7]

where: Δ_robot is the incremental motion in the robot frame expressed as a 4×4 homogeneous transform; RobotT^Operator is the 4×4 homogeneous transformation matrix representing the operator's user frame; Δ_operator is the incremental motion in the operator's user frame expressed as a 4×4 homogeneous transform; and · represents a matrix multiplication operation.
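The jog table and Eq. 7 can together be sketched in code as follows (an illustrative sketch; the dictionary and function names are our own, and the jog directions follow the table above):

```python
import numpy as np

# Operator-frame jog directions from the jog table: (axis index, sign),
# where axes 0, 1, 2 are X_Operator, Y_Operator, Z_Operator.
JOG_DIRECTIONS = {
    "left":  (1, -1.0), "right": (1, +1.0),
    "down":  (2, -1.0), "up":    (2, +1.0),
    "in":    (0, -1.0), "out":   (0, +1.0),
}

def jog_in_robot_frame(command: str, step: float,
                       robot_T_operator: np.ndarray) -> np.ndarray:
    """Apply Eq. 7: express an incremental operator-frame jog as a
    4x4 homogeneous transform in the robot frame."""
    axis, sign = JOG_DIRECTIONS[command]
    delta_operator = np.eye(4)             # pure translation, no rotation
    delta_operator[axis, 3] = sign * step  # incremental motion, operator frame
    return robot_T_operator @ delta_operator  # Eq. 7
```

For example, with the operator directly in front of the robot (θ_view = 0, so RobotT^Operator is the identity), a "jog right" of 10 mm yields a +10 mm translation along the robot's own Y axis.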
(14) Robot Hand Frame:
(15) Our new robot hand frame is an operator-understandable coordinate frame for commanding the motion of the robot's TCP relative to the operator frame using a 3D simulacra of the robot. In this method, a 3D simulacra of the robot is displayed on the teach pendant. In this mode, the operator moves the simulated TCP by touching the anchors in the 3D simulation as shown in
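One way such anchor-based jogging might be wired up is sketched below. The patent does not specify this detail, so the quantization scheme and all names here are our own illustrative assumptions, not part of the disclosed method:

```python
def anchor_drag_to_jogs(axis: str, drag_pixels: float,
                        pixels_per_step: float = 20.0) -> list:
    """Hypothetical sketch: quantize a touch drag along a simulacra
    anchor axis ('x', 'y', or 'z') into discrete operator-frame jog
    commands, one command per pixels_per_step of drag distance."""
    positive = {"x": "out", "y": "right", "z": "up"}
    negative = {"x": "in",  "y": "left",  "z": "down"}
    name = positive[axis] if drag_pixels >= 0 else negative[axis]
    n_steps = int(abs(drag_pixels) // pixels_per_step)
    return [name] * n_steps
```

Each emitted command could then be transformed into the robot frame via Eq. 7 before being issued to the motion controller.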
(16) Robot Axis Frame:
(17) Our new robot axis frame is another operator-understandable coordinate frame for commanding the motion of the individual joints of a robot using a 3D simulacra of the robot. In this method, a 3D simulacra of the robot is displayed on the pendant. The operator moves the robot's joints by touching the anchors in the 3D simulation as shown in
(18) Although we have described our invention in the context of particular embodiments, one of ordinary skill in this art will readily realize that many modifications may be made in such embodiments to adapt them to specific implementations.
(19) Thus it is apparent that we have provided an improved method and apparatus for robot programming that encompasses the capabilities of the most prevalent method, i.e., the Teach Pendant Based method (see above), while simplifying it by using a new operator-oriented coordinate frame of reference for commanding the motion of the robot. In particular, we submit that our new operator-oriented coordinate frame of reference is more intuitive to the operator and thus substantially reduces the need for the operator to understand geometric coordinate frames and their respective directions. Further, we submit that our method and apparatus provide performance generally comparable to the best prior art techniques, but more efficiently than known implementations of such techniques.