METHOD AND APPARATUS FOR METROLOGY-IN-THE-LOOP ROBOT CONTROL
20230075352 · 2023-03-09
Assignee
Inventors
- Mitchell R. WOODSIDE (Rolla, MO, US)
- Douglas A. BRISTOW (Rolla, MO, US)
- Robert G. LANDERS (Rolla, MO, US)
CPC classification
B25J9/1628
PERFORMING OPERATIONS; TRANSPORTING
B25J13/089
PERFORMING OPERATIONS; TRANSPORTING
G01S17/66
PHYSICS
International classification
G01S17/66
PHYSICS
G01B11/00
PHYSICS
Abstract
In an industrial robot, an external high-precision metrology tracking system, such as a laser tracker system, is used to directly measure the robot's kinematic errors, and corrections are applied during processing so that the end effector of the robot may be accurately positioned and a tool or other object carried by the end effector can carry out a designated function, such as machining a workpiece or another operation requiring that the end effector be accurately positioned with respect to the workpiece.
Claims
1.-14. (canceled)
15. Apparatus for controlling an industrial robot, the latter having an immovable base, a plurality of links supported by the base, a movable joint between the base and a most proximate link and between each of the adjacent links, one of the links constituting a most distal link with respect to the base, an end effector carried by the most distal link, each of the joints generating a robot measurement signal corresponding to the position and orientation of the end effector as the end effector is moved by the robot to a desired position and orientation, the industrial robot having a robot control system for controlling movement of the end effector to its desired position and orientation, wherein said apparatus comprises: a. a metrology tracking system for determining an actual position and orientation of the end effector as it moves toward its desired position and orientation; b. the metrology tracking system having a tracker and a sensor, the sensor being carried by the end effector for communicating with the tracker; c. the metrology tracking system generating a tracker measurement signal corresponding to the actual position and orientation of the end effector as the end effector moves toward its desired position and orientation and supplying the tracker measurement signal to a computer; d. the computer being configured to receive the robot measurement signal from the robot control system, the robot measurement signal corresponding to the position and orientation of the end effector as determined by the robot control system; and e. the computer being further configured to generate a correction command and to communicate the correction command to the robot control system for correcting the position and orientation of the end effector to better match the actual position and orientation of the end effector as determined by the tracker measurement signal as the end effector moves toward its desired position, thereby resulting in more accurate positioning and orienting of the end effector when in its desired position and orientation.
16. The apparatus as set forth in claim 15 wherein the metrology tracking system comprises a laser tracker having a six degree of freedom laser sensor target carried by the end effector, the tracker being a laser tracker having a laser configured to emit a laser signal to the laser sensor target, the latter having a retro reflector therewithin for reflecting the laser signal back to the laser tracker thereby to establish a position and orientation of the end effector as the latter is moved toward its desired position and orientation.
17. The apparatus as set forth in claim 16 wherein the tracker measurement signal is a laser tracker measurement signal that is communicated to the computer.
18. The apparatus as set forth in claim 17 wherein the computer receives a robot measurement signal, as determined by the robot control system, to construct a kinematic end effector position and orientation measurement signal, the computer being configured to utilize the laser tracker measurement signal to construct an actual end effector position and orientation measurement signal and to generate the correction command which is transmitted to the robot control system whereby the correction command is employed by the robot control system such that the kinematic end effector position and orientation, as determined by the robot control system, is corrected to better agree with the actual position and orientation of the end effector as determined by the laser tracker.
19. A method of controlling an industrial robot, the latter having an immovable base, a plurality of links, a first movable joint between the base and a most proximate link and other movable joints between each of the adjacent links, one of the links constituting a most distal link with respect to the base, an end effector carried by the most distal link, each of the joints generating a robot measurement signal corresponding to the position and orientation of the end effector as the end effector is moved by the robot to a desired position and orientation, the industrial robot having a robot control system for controlling movement of the end effector to its desired position and orientation, said method comprising the steps of: f. utilizing a metrology tracking system to determine the actual position and orientation of the end effector as the latter is moved toward its desired position and orientation; g. utilizing the metrology tracking system to generate a tracker measurement signal corresponding to the actual position and orientation of the end effector as the latter is moved toward its desired position; h. supplying the tracker measurement signal to a computer; and i. the computer receiving a robot measurement signal as determined by the robot control system, the computer constructing an end effector kinematic position and orientation signal using the robot measurement signal, comparing the tracker measurement signal and the end effector kinematic position and orientation signal, and generating an incremental correction command in response to the difference between the tracker measurement signal and the kinematic position and orientation signal, with the command being transmitted to the robot control system, whereby the robot control system corrects the end effector location so as to better agree with the tracker measurement signal.
20. The method of claim 19 wherein the metrology tracking system is a laser tracker system having a six degree of freedom laser sensor target carried by the end effector and a laser tracker, and wherein the method includes emitting a laser beam from the laser tracker which is reflected back to the laser tracker to determine the actual position and orientation of the end effector.
21. The method of claim 20 further comprising the step of the laser tracker generating a tracker measurement signal and transmitting the tracker measurement signal to the computer.
22. The method of claim 19 wherein the step of the computer constructing the kinematic position and orientation signal of the end effector further comprises matching the robot measurement signal to the tracker measurement signal, computing a kinematic error measurement, computing a kinematic error estimate using a Kinematic Error Observer (KEO) algorithm, and computing a rounded incremental correction using a Kinematic Error Controller (KEC) algorithm.
23. The method of claim 19 wherein the robot controller has a robot clock and the laser tracker has a laser tracker clock, each of the clocks generating a respective clock signal, the method further comprising identifying an average relative time delay between the robot controller clock signal and the laser tracker clock signal.
24. The method of claim 19 further comprising matching the robot measurement signal to the tracker measurement signal using a lookup table to correct for the average relative time delay therebetween.
25. The method of claim 22 wherein the kinematic error measurement is determined by a relative transformation between a matched set of robot and tracker measurements and is computed by Equation (11) set forth herein.
26. The method of claim 22 wherein the step of computing the kinematic error estimate comprises using the Kinematic Error Observer (KEO) algorithm and Equations (12) and (13) set forth herein.
27. The method of claim 22 further comprising the step of computing the incremental correction using the Kinematic Error Controller (KEC) algorithm and Equations (14)-(18) set forth herein.
28. The method of claim 27 further comprising modifying the incremental correction to create the rounded incremental correction to account for the resolution of the robot controller using the rounding equations set forth herein.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present teachings in any way. Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.
DETAILED DESCRIPTION
[0019] The following description is merely exemplary in nature and is in no way intended to limit the present teachings, application, or uses. Throughout this specification, like reference numerals will be used to refer to like elements. Additionally, the embodiments disclosed below are not intended to be exhaustive or to limit the invention to the precise forms disclosed in the following detailed description. Rather, the embodiments are chosen and described so that others skilled in the art can utilize their teachings. As well, it should be understood that the drawings are intended to illustrate and plainly disclose presently envisioned embodiments to one of skill in the art, but are not intended to be manufacturing level drawings or renditions of final products and may include simplified conceptual views to facilitate understanding or explanation. As well, the relative size and arrangement of the components may differ from that shown and still operate within the spirit of the invention.
[0020] As used herein, the word “exemplary” or “illustrative” means “serving as an example, instance, or illustration.” Any implementation described herein as “exemplary” or “illustrative” is not necessarily to be construed as preferred or advantageous over other implementations. All the implementations described below are exemplary implementations provided to enable persons skilled in the art to practice the disclosure and are not intended to limit the scope of the appended claims.
[0021] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a”, “an”, and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises”, “comprising”, “including”, and “having” are inclusive and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps can be employed.
[0022] When an element, object, device, apparatus, component, region or section, etc., is referred to as being “on”, “engaged to or with”, “connected to or with”, or “coupled to or with” another element, object, device, apparatus, component, region or section, etc., it can be directly on, engaged, connected or coupled to or with the other element, object, device, apparatus, component, region or section, etc., or intervening elements, objects, devices, apparatuses, components, regions or sections, etc., can be present. In contrast, when an element, object, device, apparatus, component, region or section, etc., is referred to as being “directly on”, “directly engaged to”, “directly connected to”, or “directly coupled to” another element, object, device, apparatus, component, region or section, etc., there may be no intervening elements, objects, devices, apparatuses, components, regions or sections, etc., present. Other words used to describe the relationship between elements, objects, devices, apparatuses, components, regions or sections, etc., should be interpreted in a like fashion (e.g., “between” versus “directly between”, “adjacent” versus “directly adjacent”, etc.).
[0023] As used herein the phrase “operably connected to” will be understood to mean two or more elements, objects, devices, apparatuses, components, etc., that are directly or indirectly connected to each other in an operational and/or cooperative manner such that operation or function of at least one of the elements, objects, devices, apparatuses, components, etc., imparts or causes operation or function of at least one other of the elements, objects, devices, apparatuses, components, etc. Such imparting or causing of operation or function can be unilateral or bilateral.
[0024] As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. For example, A and/or B includes A alone, or B alone, or both A and B.
[0025] Although the terms first, second, third, etc. can be used herein to describe various elements, objects, devices, apparatuses, components, regions or sections, etc., these elements, objects, devices, apparatuses, components, regions or sections, etc., should not be limited by these terms. These terms may be used only to distinguish one element, object, device, apparatus, component, region or section, etc., from another element, object, device, apparatus, component, region or section, etc., and do not necessarily imply a sequence or order unless clearly indicated by the context.
[0026] Moreover, it will be understood that various directions such as “upper”, “lower”, “bottom”, “top”, “left”, “right”, “first”, “second” and so forth are made only with respect to explanation in conjunction with the drawings, and that components may be oriented differently, for instance, during transportation and manufacturing as well as operation. Because many varying and different embodiments may be made within the scope of the concept(s) taught herein, and because many modifications may be made in the embodiments described herein, it is to be understood that the details herein are to be interpreted as illustrative and non-limiting.
[0027] The apparatuses/systems and methods described herein can be implemented at least in part by one or more computer program products comprising one or more non-transitory, tangible, computer-readable mediums storing computer programs with instructions that may be performed by one or more processors. The computer programs may include processor executable instructions and/or instructions that may be translated or otherwise interpreted by a processor such that the processor may perform the instructions. The computer programs can also include stored data. Non-limiting examples of the non-transitory, tangible, computer readable medium are nonvolatile memory, magnetic storage, and optical storage.
[0028] As used herein, the term module can refer to, be part of, or include an application specific integrated circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor (shared, dedicated, or group) that performs instructions included in code, including for example, execution of executable code instructions and/or interpretation/translation of uncompiled code; other suitable hardware components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip. The term module can include memory (shared, dedicated, or group) that stores code executed by the processor.
[0029] The term code, as used herein, can include software, firmware, and/or microcode, and can refer to one or more programs, routines, functions, classes, and/or objects. The term shared, as used herein, means that some or all code from multiple modules can be executed using a single (shared) processor. In addition, some or all code from multiple modules can be stored by a single (shared) memory. The term group, as used above, means that some or all code from a single module can be executed using a group of processors. In addition, some or all code from a single module can be stored using a group of memories.
[0030] The nomenclature used in this disclosure is as follows.
TABLE-US-00001 Nomenclature
T_r^b: kinematic location of the 6DoF sensor with respect to the robot's base frame
R_r^b: rotation matrix of the kinematic location of the 6DoF sensor with respect to the robot's base frame
p_r^b: position of the kinematic location of the 6DoF sensor with respect to the robot's base frame
T_n^b(·): equation that converts a robot measurement into a homogeneous transformation
r: robot measurement
T_r^n: transformation to the 6DoF sensor location with respect to the robot's tool flange
T_m^b: measurement of the 6DoF sensor with respect to the robot's base frame
R_m^b: rotation matrix of the 6DoF sensor measurement with respect to the robot's base frame
p_m^b: position of the 6DoF sensor measurement with respect to the robot's base frame
T_lt^b: transformation of the laser tracker with respect to the robot's base frame
T_m^lt(·): equation that converts a tracker measurement into a homogeneous transformation matrix
s: tracker measurement
k: control iteration
e: kinematic error measurement
e_p: translational component of the kinematic error measurement
e_r: rotational component of the kinematic error measurement
f_r(·): equation that converts a rotation matrix into its axis-angle representation
f_θ(·): equation that converts an axis-angle orientation into the robot manufacturer's orientation representation
ê: kinematic error estimate
Δt: time difference between the current and previous control iteration
L: observer gain matrix
Δe_p: corrected positional kinematic error
Δe_r: corrected rotational kinematic error
Δp: translational incremental correction
Δr: rotational incremental correction in axis-angle representation
Δθ: rotational incremental correction in the manufacturer's orientation representation
K_p: translational feedback gain matrix
K_r: rotational feedback gain matrix
Δp̃: rounded translational incremental correction
Δθ̃: rounded rotational incremental correction in the manufacturer's orientation representation
η_p: residual of the translational incremental correction
η_θ: residual of the rotational incremental correction
p_u: total translational incremental correction
R_u: total rotational incremental correction as a rotation matrix
[0031] In the present disclosure, the topology, theory, and operation of a control system used to correct a robot's kinematic error are described. The control system, referred to as the Kinematic Error Control System, comprises several subsystems, each containing several components that facilitate its operation. These subsystems are a robot control system, a metrology tracking system, and an external control system on which the Kinematic Error Control System is implemented. A table showing the various components in relation to their respective subsystems is provided in Table 1, and a signal diagram of the signals transmitted between the components is shown in the accompanying figure.
TABLE-US-00002 TABLE 1 Components and Subsystems of the Kinematic Error Control System
Component | Subsystem
Robot | Robot Control System
Robot Controller | Robot Control System
Laser Tracker | Metrology Tracking System
6DoF Sensor | Metrology Tracking System
PC | External Control System
[0032] The robot control system has two components: the robot and the robot controller. The robot is the mechanical system that performs the physical operation. The robot contains encoders and servo motors used to both measure and move each of its joints. The robot controller contains the servo drives and the robot manufacturer's proprietary trajectory controller, which are used to both regulate and control the robot through a desired motion. The proprietary trajectory controller utilizes the forward kinematic model of the robot to convert the encoder (joint) measurements into a kinematic position and orientation of its tool flange for use in its control algorithm. In subsequent discussion, the joint or kinematic position and orientation measurements will be referred to as robot measurements. In addition to the servo drives and trajectory controller, the robot controller contains the network interfaces used to communicate with the external control system as well as the software used to adjust its trajectory based on corrections transmitted from the external control system.
[0033] In this specific case, the metrology tracking system has two components: the 6 DoF sensor and the laser tracker. The 6 DoF sensor is fixed to an end effector which is attached to the robot's tool flange. The 6 DoF sensor houses several orientation sensors and a retro reflector, which are used to measure its orientation and position, respectively. More specifically, the position of the 6 DoF sensor is measured by the laser tracker, and the orientation of the 6 DoF sensor is measured by the sensor itself and transmitted to the tracker. The laser tracker houses a gimbaled laser displacement sensor that emits a laser beam which is reflected by the 6 DoF sensor's retro reflector back to the tracker. The azimuth and elevation of the beam, determined by the laser tracker's encoders, and the distance of the beam are used to determine the 6 DoF sensor's position. Position and orientation measurements collected by the laser tracker and 6 DoF sensor, respectively, are combined through a proprietary method to create a single measurement of the position and orientation of the 6 DoF sensor, and hence the actual position and orientation of the end effector. In subsequent discussion, this measurement will be referred to as the tracker measurement. Additionally, the laser tracker contains the interface used to transmit the tracker measurements to the external control system.
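By way of a non-limiting illustration, the tracker's position measurement described above amounts to a spherical-to-Cartesian conversion of the beam's azimuth, elevation, and distance. The angle convention in this sketch is an assumption; actual conventions are manufacturer-specific.

```python
import numpy as np

def tracker_position(azimuth, elevation, distance):
    """Convert the beam's azimuth/elevation (radians) and distance into a
    Cartesian position in the tracker's measurement frame. The convention
    (elevation measured up from the xy-plane) is assumed for illustration;
    real trackers use manufacturer-specific conventions."""
    x = distance * np.cos(elevation) * np.cos(azimuth)
    y = distance * np.cos(elevation) * np.sin(azimuth)
    z = distance * np.sin(elevation)
    return np.array([x, y, z])
```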
[0034] The external control system comprises a computer (PC) containing the network interfaces used to receive the transmitted robot and tracker measurements from the robot controller and laser tracker, respectively. The robot controller and laser tracker may be unsynchronized, that is, measurements are sampled and transmitted independently without using a shared clock signal between the robot controller and laser tracker. At runtime, the robot measurement is matched to the tracker measurement, the matched set of measurements is used to compute a kinematic error measurement, a kinematic error estimate is computed from the kinematic error measurement, and a rounded incremental correction of the end effector's position and orientation is computed from the kinematic error estimate. The incremental correction command is then transmitted to the robot controller, where it is used to correct the position and orientation of the robot's end effector.
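The runtime pipeline just described (measure the error, estimate it, apply incremental corrections) can be sketched as a positions-only toy loop. The plant model, the gains, and the simple first-order observer below are assumptions for illustration only, not the patented equations.

```python
import numpy as np

def run_loop(true_offset, iterations=50, dt=0.01, L_gain=20.0, Kp=0.8):
    """Toy positions-only correction loop: a constant kinematic offset is
    measured each iteration, smoothed by an assumed first-order observer,
    and driven toward zero by accumulating incremental corrections."""
    e_hat = np.zeros(3)   # kinematic error estimate
    p_u = np.zeros(3)     # total correction applied so far
    for _ in range(iterations):
        e = true_offset - p_u                      # tracker-vs-kinematic error
        e_hat = e_hat + dt * L_gain * (e - e_hat)  # observer update (assumed form)
        p_u = p_u + Kp * e_hat                     # apply incremental correction
    return true_offset - p_u                       # residual kinematic error
```

With these assumed gains the residual error contracts geometrically, illustrating why the end effector settles near its desired pose.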
[0035] If the robot measurements are described using joint measurements, the robot and tracker measurements will be defined in different spatial domains. In this case, the robot measurements describe the position of its joints as coordinates in joint space while the tracker measurements describe the position and orientation coordinates of its tool flange in Euclidean space. These measurements must be converted into the same spatial domain to compute the kinematic error measurement. In the present disclosure, Euclidean space is used. Additionally, there are many ways to represent both the position and orientation of a 3D object in Euclidean space. In the field of robotics, it is common to represent a 3D object as a homogeneous transformation matrix that defines the position and orientation of a frame with respect to another frame. The position is represented in Cartesian coordinates and the orientation is represented as a rotation matrix describing the projection of the axes of one frame with respect to the axes of another. This representation is intuitive and provides a set of mathematical operators that can be used to determine the relative relationship of various frames. Further discussion describes how the robot and tracker measurements are converted into Euclidean space (if applicable) and represented as homogeneous transformation matrices with respect to the same frame. A graphic depiction of the transformative relationships between the frames used to define the kinematic (robot) and actual (tracker) position and orientation of the 6 DoF sensor, equivalently the position and orientation of the end effector, with respect to the robot's base frame is shown in the accompanying figure.
[0036] Referring now to
[0037] The robot measurements are represented by a single vector, r, and are described by either a set of joint positions, r = [q_1 q_2 . . . q_n]^T, for each of the robot's joints in joint space (where n denotes the last joint), or a kinematic position (x_r, y_r, z_r) in Cartesian coordinates and orientation (α_r, β_r, γ_r), in an orientation representation defined by the robot manufacturer, of the robot's tool flange in Euclidean space, r = [x_r y_r z_r α_r β_r γ_r]^T. In the case that the robot measurement is described by joint positions, the robot's forward kinematic equations, from its forward kinematic model, are used to convert the robot measurement into a homogeneous transformation of the frame defining its tool flange with respect to the robot's base frame. In the case that the robot measurement is described by the kinematic position and orientation of the robot's tool flange, the orientation of the robot measurement is converted into a rotation matrix to construct an equivalent homogeneous transformation to the one produced by the kinematic equations. In both cases, an additional transformation that defines the translation and rotation of the 6 DoF sensor with respect to the robot's tool flange is applied in order to construct the kinematic position and orientation of the 6 DoF sensor,

T_r^b = [R_r^b p_r^b; 0 1] = T_n^b(r) T_r^n (1)

where p_r^b is the kinematic position and R_r^b is the kinematic orientation (represented as a rotation matrix) of the 6 DoF sensor relative to the robot's base frame, T_n^b(·) is the equation that converts the robot measurements, r, into a homogeneous transformation, and T_r^n is the transformation to the 6 DoF sensor with respect to the robot's tool flange. The transformation T_r^n is identified using standard techniques commonly understood by those skilled in the art.
[0038] The tracker measurements are taken with respect to the laser tracker's measurement frame and represented by a single vector, s = [x_s y_s z_s α_s β_s γ_s]^T, of its position (x_s, y_s, z_s) and orientation (α_s, β_s, γ_s) in an orientation representation defined by the laser tracker manufacturer. The measurements are converted into a homogeneous transformation matrix and transformed into the robot's coordinate system by,

T_s^b = [R_s^b p_s^b; 0 1] = T_m^b T_s^m(s) (2)

where p_s^b is the measured (actual) position and R_s^b is the measured (actual) orientation (represented as a rotation matrix) of the 6 DoF sensor, T_s^m(·) is the equation that converts the tracker measurements, s, into a homogeneous transformation matrix, and T_m^b is the transformation of the laser tracker's measurement frame with respect to the robot's base frame.
[0039] The transformation T_m^b is identified using standard techniques commonly understood by those skilled in the art.
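As a non-limiting sketch, Equations (1) and (2) are products of homogeneous transformations. The numeric poses below are invented purely for illustration; they are chosen so that the kinematic pose and the tracker-measured pose coincide, i.e., zero kinematic error.

```python
import numpy as np

def rot_z(angle):
    """Example rotation about z only; a real controller composes a full 3D rotation."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def to_T(R, p):
    """Pack a rotation matrix R and position p into a 4x4 homogeneous transformation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

# Equation (1): kinematic pose of the sensor in the robot base frame.
T_n_b = to_T(rot_z(0.3), [1.0, 0.0, 0.5])    # flange pose from the robot measurement r
T_r_n = to_T(np.eye(3), [0.0, 0.0, 0.1])     # fixed flange-to-sensor transformation
T_r_b = T_n_b @ T_r_n

# Equation (2): actual sensor pose re-expressed in the robot base frame.
T_m_b = to_T(np.eye(3), [2.0, 1.0, 0.0])     # tracker measurement frame w.r.t. base
T_s_m = to_T(rot_z(0.3), [-1.0, -1.0, 0.6])  # sensor pose measured by the tracker
T_s_b = T_m_b @ T_s_m
```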
[0040] As mentioned in [0034], the robot and tracker measurements may be unsynchronized. Lack of synchronization will result in both a relative time delay between the two clock signals and jitter in each clock signal's timing. Each of these issues is addressed independently in the algorithmic procedure discussed below.
[0041] Before runtime, the relative time delay between the clock signals is determined using an identification procedure run once prior to operation of the Kinematic Error Control System. The relative time delay identification procedure is conducted as follows:
[0042] 1. Generate an oscillating motion command for the robot.
[0043] 2. While the robot is in motion, record the T_r^b and T_s^b data streams and plot the recorded positions in time as shown in the accompanying figure.
[0046] Referring to
[0047] At system startup (Step 1), the following variables, defined further in the disclosure, are initialized to the given values,
η_p[0] = 0 (4)
η_θ[0] = 0 (5)
p_u[0] = 0 (6)
R_u[0] = I (7)
[0048] At runtime, the robot and tracker measurements, r and s, are transmitted to the external control system independently. Once received, each measurement is given a timestamp, t_r and t_s, using the clock signal of the PC, and the measurements are converted (Steps 2.1.A and 2.1.B) into the same spatial domain (if applicable) and representation using Equations (1) and (2), respectively. After conversion, the leading measurements, identified by Equation (3) from the steps in [0040], are stored in a lookup table of sufficient size (constructed using a Last In, First Out (LIFO) buffer). Now, the effects of the relative time delay, discussed in [0039], are compensated for by matching (Step 2.2) the robot measurements to the tracker measurements, producing the set of (matched) measurements, (T_r^b[k], T_s^b[k], t_k[k]), for the k-th iteration of the Kinematic Error Control System, referred to as the control iteration, by: [0049] 1. Compute the delayed timestamp, t̃, by subtracting the average relative time delay, E(δ_r), from the current timestamp of the lagging measurement, identified by Equation (3) from the steps in [0040], by,
T̃ = f_int((T_1, t_1), (T_2, t_2), t̃) (9) where f_int(. . .): ℝ^(4×4) → ℝ^(4×4) is the homogeneous transformation interpolation function defined in the appendix. [0052] 4. Match the lagging and interpolated leading measurements for the k-th control iteration by,
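The delay-compensation matching of Step 2.2 can be sketched as follows. This positions-only sketch uses linear interpolation and a simple buffer standing in for the lookup table; the rotational part of f_int (e.g., slerp) and all variable names are assumptions for illustration.

```python
import numpy as np
from collections import deque

def match_measurements(robot_buf, t_s, mean_delay):
    """Shift the lagging tracker timestamp by the identified average delay,
    then interpolate the buffered (leading) robot poses at the delayed time.
    robot_buf holds (4x4 pose, timestamp) pairs in time order."""
    t_tilde = t_s - mean_delay                 # delayed timestamp of the lagging stream
    samples = list(robot_buf)
    for (T1, t1), (T2, t2) in zip(samples, samples[1:]):
        if t1 <= t_tilde <= t2:                # bracketing pair found
            w = (t_tilde - t1) / (t2 - t1)
            T = np.eye(4)
            # positions-only stand-in for f_int in Equation (9); a full
            # version would also interpolate the rotation (e.g., via slerp)
            T[:3, 3] = (1.0 - w) * T1[:3, 3] + w * T2[:3, 3]
            return T
    return None                                # delayed time falls outside the buffer
```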
[0053] The kinematic error measurement, that is, the relative transformation between the matched robot and tracker measurements, is taken with respect to the robot's base frame and is computed (Step 3) by,
where e_p and e_r are the translational and rotational kinematic errors, and the function f_r(·), defined in the appendix, converts the resultant rotation matrix of R_r^b and R_m^b into its axis-angle representation.
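The Step 3 error measurement can be sketched as below: the translational part as a position difference and the rotational part as the axis-angle form (f_r) of the relative rotation between the measured and kinematic orientations. The exact rotation-matrix product order is truncated in the text above, so the order used here is an assumption.

```python
import numpy as np

def axis_angle(R):
    """f_r: convert a rotation matrix into its axis-angle vector (theta * unit axis)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * axis / (2.0 * np.sin(theta))

def kinematic_error(T_r, T_s):
    """Kinematic error between matched kinematic (T_r) and measured (T_s)
    poses, both in the robot base frame: e_p translational, e_r rotational."""
    e_p = T_s[:3, 3] - T_r[:3, 3]
    e_r = axis_angle(T_s[:3, :3] @ T_r[:3, :3].T)  # assumed product order
    return e_p, e_r
```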
[0054] The clock signal jitter, discussed in [0039], will corrupt the signal produced from Equation (11) with effects analogous to measurement noise (referred to as timing noise in the disclosure of our U.S. Provisional Patent Application No. 62/982,166). Compensation for jitter is accomplished by using the Kinematic Error Observer (KEO) algorithm (Step 4). The algorithm is as follows: [0055] 1. Find the time difference between the current and previous control iteration,
Δt[k] = t_k[k] − t_k[k−1] (12) [0056] 2. Compute the kinematic error estimate,
The estimate computed in Equation (13) is then used in the Kinematic Error Controller (KEC) algorithm to produce an incremental correction to be sent to and executed by the robot controller.
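Equation (13) is not reproduced in this excerpt. As an illustration only, the sketch below assumes a simple first-order observer of the form ê[k] = ê[k−1] + Δt[k]·L·(e[k] − ê[k−1]), which is a common structure for filtering timing noise with gain matrix L, but is not necessarily the patent's exact Equation (13).

```python
import numpy as np

def keo_update(e_meas, e_hat_prev, t_now, t_prev, L):
    """Assumed first-order Kinematic Error Observer update.

    e_meas: kinematic error measurement from Equation (11)
    e_hat_prev: estimate saved from the previous control iteration
    L: observer gain matrix (e.g., diag(5, 5, 5) per the tuning in [0090])
    """
    dt = t_now - t_prev  # Equation (12): time between control iterations
    return e_hat_prev + dt * (L @ (e_meas - e_hat_prev))
```

With L = 5·I and Δt = 0.1 s, a unit step in the measured error moves the estimate halfway toward the measurement in one iteration.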
[0057] The KEC algorithm computes a rounded incremental correction (Step 5) from the kinematic error estimate to be applied to the robot during the timestep of the control iteration. Computation of the rounded incremental correction is performed in three parts. In the first part, translational and rotational incremental corrections are computed, and the rotational incremental correction is converted into the orientation representation of the robot controller as follows: [0058] 1. Compute the corrected kinematic error by,
Δe.sub.p[k]=ê.sub.p[k]−p.sub.u[k−1] (14)
Δe.sub.r[k]=R.sub.u[k−1].sup.T ƒ.sub.r.sup.−1(ê.sub.r[k]) (15) [0059] 2. Compute the translational and rotational incremental corrections by,
Δp[k]=K.sub.p(Δe.sub.p[k]) (16)
Δr[k]=K.sub.rƒ.sub.r(Δe.sub.r[k]) (17) [0060] where K.sub.p and K.sub.r are the translational and rotational feedback gain matrices used to adjust the convergence dynamics of the KEC, the function ƒ.sub.r.sup.−1(·) converts the axis-angle representation of the kinematic error estimate back into its equivalent rotation matrix, and p.sub.u[k−1] and R.sub.u[k−1] are the total incremental corrections computed from the previous control iteration. [0061] 3. Convert the orientation representation of the incremental correction into the robot manufacturer's specific orientation representation by,
Δθ[k]=ƒ.sub.θ(Δr[k]) (18) where ƒ.sub.θ(·) is a function that converts the axis-angle representation of an orientation into the robot controller's required orientation representation for incremental corrections. The exact form of the ƒ.sub.θ(·) function depends on the orientation representation used by the robot controller and can be found using standard techniques commonly understood by those skilled in the art.
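As an illustration of one possible ƒ.sub.θ(·), the sketch below converts an axis-angle vector to Z-Y-X (yaw-pitch-roll) Euler angles via Rodrigues' formula. The Z-Y-X target representation is an assumption chosen for illustration; the actual representation depends on the robot controller.

```python
import numpy as np

def axis_angle_to_matrix(r):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def f_theta_zyx(r):
    """One possible f_theta: axis-angle -> Z-Y-X Euler angles
    (yaw, pitch, roll), an illustrative target representation only."""
    R = axis_angle_to_matrix(r)
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return np.array([yaw, pitch, roll])
```

For example, a pure rotation of π/2 about the z-axis maps to a yaw of π/2 with zero pitch and roll.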
The robot controller has finite resolution of its internal variables, causing a received incremental correction to be rounded to the controller's resolution. Consequently, correction information smaller than the resolution is lost, which results in long-term degradation in the accuracy of the Kinematic Error Control System. The second part of the KEC algorithm addresses the degradation caused by the robot controller's resolution as follows: [0062] 4. Round the incremental correction to the resolution of the robot controller by,
Δ{tilde over (p)}[k]=round(Δp[k]+η.sub.p[k−1], δ.sub.p) (19)
Δ{tilde over (θ)}[k]=round(Δθ[k]+η.sub.θ[k−1], δ.sub.θ) (20) where δ.sub.p and δ.sub.θ are the translational and rotational resolutions of the robot controller, respectively, and η.sub.p[k−1] and η.sub.θ[k−1] are the translational and rotational rounding residuals of the previous incremental correction, respectively.
Before completing the KEC algorithm and transmitting the rounded incremental correction to the robot controller, both the rounding residuals and total incremental correction at the current control iteration must be computed and saved for the next control iteration. Computation of these variables in the third part of the KEC algorithm is performed as follows: [0063] 5. Compute new rounding residuals for the next control iteration by,
η.sub.p[k]=Δp[k]−Δ{tilde over (p)}[k], (21)
η.sub.θ[k]=Δθ[k]−Δ{tilde over (θ)}[k]. (22) [0064] 6. Compute the total incremental correction for the next control iteration by,
p.sub.u[k]=p.sub.u[k−1]+Δ{tilde over (p)}[k], (23)
R.sub.u[k]=ƒ.sub.θ.sup.−1(Δ{tilde over (θ)}[k])R.sub.u[k−1], (24)
where the function ƒ.sub.θ.sup.−1(·) converts the manufacturer's orientation representation back into its equivalent rotation matrix.
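Steps 4-6 of the KEC algorithm can be sketched as follows for the translational channel (Equations (19), (21), and (23)); the rotational channel is handled analogously with the manufacturer's orientation representation. The function and variable names are illustrative.

```python
import numpy as np

def kec_round_step(dp, eta_prev, p_u_prev, res):
    """One rounding step of the KEC algorithm, translational channel only.

    dp: incremental correction from Equation (16)
    eta_prev: rounding residual carried from the previous iteration
    p_u_prev: previous total incremental correction
    res: controller resolution (e.g., 1 um per Table 2)
    """
    # Eq. (19): add the carried residual, then quantize to the resolution
    dp_tilde = np.round((dp + eta_prev) / res) * res
    # Eq. (21): residual of this iteration, saved for the next one
    eta = dp - dp_tilde
    # Eq. (23): accumulate the total incremental correction
    p_u = p_u_prev + dp_tilde
    return dp_tilde, eta, p_u
```

The residual carry ensures that sub-resolution corrections are not permanently lost: with a resolution of 1 unit, a repeated 0.4-unit correction transmits 0 on the first iteration but 1 on the second, once the accumulated residual exceeds the rounding threshold.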
[0065] Once the KEC algorithm is completed, the rounded incremental corrections, Δ{tilde over (p)}[k] and Δ{tilde over (θ)}[k], are transmitted (Step 6) to the robot controller for execution, the control iteration is incremented, and the next set of matched robot and tracker measurements are used to compute a new kinematic error measurement (Step 3). Control iterations are conducted indefinitely, continually correcting the robot's kinematic error, until the program on the PC is terminated or the desired motion has completed.
[0066] An outline of the above procedure is summarized below: [0067] 1. System Startup [0068] 1.1. Set the relative time delay measured from the procedure described in [0040] and the trigger parameter according to Equation (3). [0069] 1.2. Initialize system variables using Equations (4)-(7). [0070] 2. Measurement Preparation and Matching of Robot Measurements to Tracker Measurements [0071] 2.1.A. Convert the robot measurement, r, into a homogeneous transformation matrix using Equation (1) and add it to the lookup table if determined by Equation (3) to be the leading measurement. [0072] 2.1.B. Convert the tracker measurement, s, into a homogeneous transformation matrix using Equation (2) and add it to the lookup table if determined by Equation (3) to be the leading measurement. [0073] 2.2. Match robot measurements to tracker measurements by comparing the timestamps of the leading measurements in the lookup table to the delayed timestamp of the lagging measurement, and perform interpolation using the procedure in [0043] and Equations (8)-(10). [0074] 3. Compute the kinematic error measurement using Equation (11). [0075] 4. Compute the kinematic error estimate with the KEO algorithm. [0076] 4.1. Compute the time difference between control iterations using Equation (12). [0077] 4.2. Compute the kinematic error estimate using Equation (13). [0078] 4.3. Save the kinematic error estimate for the next control iteration. [0079] 5. Compute the rounded incremental path correction with the KEC algorithm. [0080] 5.1. Compute the corrected kinematic error using Equations (14) and (15). [0081] 5.2. Calculate the incremental correction using Equations (16) and (17). [0082] 5.3. Convert the rotational incremental correction to the manufacturer's orientation representation using Equation (18). [0083] 5.4. Round the incremental correction using Equations (19) and (20). [0084] 5.5. Compute the rounding residuals and save them for the next control iteration using Equations (21) and (22). [0085] 5.6. Compute the total incremental correction and save it for the next iteration using Equations (23) and (24).
[0086] 6. Transmit rounded incremental corrections to robot for execution. [0087] 7. Start next control iteration at step 3.
[0088] The description herein is merely exemplary in nature and, thus, variations that do not depart from the gist of that which is described are intended to be within the scope of the teachings. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions can be provided by alternative embodiments without departing from the scope of the disclosure. Such variations and alternative combinations of elements and/or functions are not to be regarded as a departure from the spirit and scope of the teachings.
[0089] Experimental results presented further in this disclosure were obtained using the hardware listed in Table 2.
TABLE 2. Specifications of Components in Experimental System.

Equipment         Model           Manufacturer              Specification
Robot             MH180           Yaskawa Motoman Robotics  6 axes, ±0.2 mm repeatability
Robot Controller  DX200           Yaskawa Motoman Robotics  δ.sub.p = 1 μm, δ.sub.θ = 100 μdeg
Laser Tracker     Radian          Automated Precision Inc.  10 μm + 5 μm/m
6DoF Sensor       STS             Automated Precision Inc.  ±2 arcsec
PC                Precision 5820  Dell                      Windows 10, Intel Xeon W-2125, 4 GHz
[0090] Before further evaluation of the performance of the Kinematic Error Control System could be conducted, suitable values for the KEO gain matrix, L, and the KEC feedback gain matrices, K.sub.p and K.sub.r, were selected. The gain matrices were selected by commanding the robot to a single position, initializing the Kinematic Error Control System, and correcting the static kinematic errors at the commanded position. After several iterations, the final tuning of the system resulted in observer and feedback gains of L=diag(5,5,5) and K.sub.p=K.sub.r=diag(5×10.sup.−3, 5×10.sup.−3, 5×10.sup.−3), respectively, and a stable overdamped response with a settling time of 8.758 s.
[0091] In an additional experiment conducted for the present disclosure, the KEO algorithm's sensitivity was evaluated in both an open loop and closed loop configuration. This was done to ensure that sufficient measurement noise and jitter were filtered from the kinematic error measurement such that the residual measurement noise and jitter in the kinematic error estimate were not amplified significantly by the feedback gains in the KEC algorithm. To conduct this experiment, the robot was commanded to a single position and samples of the kinematic error estimate were measured both with (closed-loop) and without (open-loop) applying a correction with the KEC algorithm. Once the experiments were conducted, the steady state kinematic error was removed from both sets of measurements and the standard deviation was computed. The results of this experiment, provided in Table 3, show that there was an increase in the standard deviation, equivalently the noise, in the kinematic error estimate. However, when compared to the accuracy of the laser tracker in Table 2 and the process variation shown in subsequent experiments, the residual noise and jitter in the kinematic error estimate will not inhibit the Kinematic Error Control System's ability to both measure and correct the robot's kinematic error.
TABLE 3. Standard Deviation of Spatial Estimated Kinematic Error Measurement in Open- and Closed-Loop System Configurations.

             X (μm)  Y (μm)  Z (μm)  R.sub.x (μrad)  R.sub.y (μrad)  R.sub.z (μrad)
Open-Loop    1.8     2.0     1.8     107.9           117.1           36.1
Closed-Loop  1.8     2.2     2.2     119.8           130.0           39.6
[0092] In an additional experiment conducted for this disclosure, the dynamic performance of the Kinematic Error Control System was evaluated for a series of linear, constant velocity motions of the end effector. The static kinematic error in the robot's nominal forward kinematic model is dependent on the position of its joints; therefore, increasing the commanded velocity of the industrial robot's end effector will increase the rate of change of the kinematic error that the Kinematic Error Control System will attempt to correct. In this series of experiments the robot's end effector traversed 1 m in the Y-axis of the robot's base frame at constant velocities ranging from 10 mm/s to 100 mm/s. Since the evaluated constant velocities were only performed in the Y-axis of the robot's base frame, only the corrected positional kinematic errors were evaluated in these experiments. The results of these experiments are shown in the accompanying figures.
[0093] To provide a single metric for the increase in the robot's corrected kinematic error at each velocity, the spatial components of the corrected positional kinematic error were filtered independently using a zero-phase 6th order Butterworth filter with cutoff frequencies ranging between 0.1 Hz and 0.5 Hz. These aggressive cutoff frequencies were selected to capture the general trends of the corrected positional kinematic errors, especially those in the Y-axis, which were heavily corrupted by noise and not as easily observed. Once each component of the corrected positional kinematic error was filtered, the resultant magnitude was computed and its average was taken. This procedure was repeated for each constant velocity experiment. The average magnitudes of the filtered corrected positional kinematic errors as functions of end effector velocity are shown in the accompanying figures.
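The filtering and averaging procedure described in [0093] can be sketched with SciPy's zero-phase filtering. The sampling rate `fs` and cutoff `fc` are assumed parameters (the disclosure gives only the cutoff range of 0.1-0.5 Hz), and second-order sections are used for numerical robustness at such low cutoffs.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def avg_filtered_error_magnitude(err_xyz, fs, fc):
    """Mean magnitude of the low-pass-filtered positional kinematic error.

    err_xyz: (N, 3) array of corrected positional errors sampled at fs Hz
    fc: cutoff frequency in Hz (0.1-0.5 Hz per [0093])
    """
    # Zero-phase 6th-order Butterworth, applied per spatial component
    sos = butter(6, fc / (fs / 2), output='sos')
    filtered = np.column_stack(
        [sosfiltfilt(sos, err_xyz[:, i]) for i in range(3)])
    # Resultant magnitude per sample, then its average
    return np.mean(np.linalg.norm(filtered, axis=1))
```

`sosfiltfilt` runs the filter forward and backward, giving zero phase distortion, which matches the "zero-phase" filtering described in the experiment.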
[0094] Process forces acting on the robot's end effector will cause highly nonlinear deflections, referred to as external disturbances, of the arm due to the varying stiffness of the robot's structure. More importantly, these external disturbances are due to the deformation of the robot's links and are unobservable by the robot's control system (which can only measure deviations in its joints). Thus, these external disturbances can only be corrected by the Kinematic Error Control System.
[0095] An additional experiment was conducted for the present disclosure to evaluate the performance of the Kinematic Error Control System when subjected to an external disturbance. In this experiment the robot was commanded to a single position, the Kinematic Error Control System was initialized, and the static kinematic errors at the commanded position were corrected. Once the static kinematic errors were corrected, a 45 lb. weight was applied to the end effector to emulate a single un-modeled process force acting on the end effector. The corrected positional and rotational kinematic error responses, respectively, of the described experiment are shown in the accompanying figures.
Additional Disclosure Regarding the Interpolation of a Homogenous Transformation Matrix
[0096] The function, ƒ.sub.int( . . . ): ℝ.sup.4×4→ℝ.sup.4×4, that produces an interpolation of a homogeneous transformation between two pairs of homogeneous transformations and corresponding timestamps, (T.sub.1, t.sub.1) and (T.sub.2, t.sub.2), at a specified timestamp, {tilde over (t)}, is defined as,
where the interpolations of the rotation matrix, {tilde over (R)}, and position vector, {tilde over (p)}, are respectively defined as,
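The bodies of Equations (25)-(27) are not reproduced in this excerpt; a standard construction, sketched below as an assumption, interpolates the position linearly and the rotation by scaling the relative axis-angle (spherical linear interpolation), which is consistent with the axis-angle machinery defined in this appendix.

```python
import numpy as np

def slerp_R(R1, R2, a):
    """Interpolate between rotations by scaling the relative axis-angle by a."""
    dR = R1.T @ R2  # relative rotation from R1 to R2
    theta = np.arccos(np.clip((np.trace(dR) - 1) / 2, -1.0, 1.0))
    if theta < 1e-12:
        return R1.copy()
    k = np.array([dR[2, 1] - dR[1, 2],
                  dR[0, 2] - dR[2, 0],
                  dR[1, 0] - dR[0, 1]]) / (2 * np.sin(theta))
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    # Rodrigues' formula for the scaled relative rotation
    Rrel = np.eye(3) + np.sin(a * theta) * K + (1 - np.cos(a * theta)) * (K @ K)
    return R1 @ Rrel

def f_int(T1, t1, T2, t2, t_tilde):
    """Interpolate a homogeneous transform at timestamp t_tilde (f_int sketch)."""
    a = (t_tilde - t1) / (t2 - t1)  # normalized interpolation parameter
    T = np.eye(4)
    T[:3, :3] = slerp_R(T1[:3, :3], T2[:3, :3], a)
    T[:3, 3] = (1 - a) * T1[:3, 3] + a * T2[:3, 3]
    return T
```

Halfway between the identity and a transform rotated π/2 about z and translated 1 m in x, this yields a π/4 rotation and a 0.5 m translation.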
Additional Disclosure Regarding the Axis Angle Representation of a Rotation Matrix
[0097] The axis-angle representation of a rotation matrix provides a more intuitive way to visualize and scale an orientation in Euclidean space. Essentially, this representation describes any orientation by a single vector that defines a single rotation about an arbitrary axis in ℝ.sup.3. The elements of the resultant vector define the coordinates of the arbitrary axis, while the vector's magnitude defines the rotation about this axis. Consider a generalized rotation matrix,
The single rotation about the arbitrary axis is calculated from Equation (28) by,
and the arbitrary axis is calculated from Equations (28) and (29) by,
Together, Equations (29) and (30) can be combined into a single vector,
which is the axis-angle representation, r, of a generalized rotation matrix, R.
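The procedure above corresponds to the standard axis-angle extraction: the angle from the trace of R and the axis from its skew-symmetric part. A minimal sketch (with the Equation (28) bodies not reproduced here, the standard formulas are assumed):

```python
import numpy as np

def f_r(R):
    """Axis-angle vector of a rotation matrix (Equations (29)-(31) style)."""
    # Eq. (29): rotation angle from the trace, clipped for numerical safety
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)  # no rotation: zero vector
    # Eq. (30): rotation axis from the skew-symmetric part of R
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2 * np.sin(theta))
    # Eq. (31): axis scaled by the angle gives the axis-angle vector r
    return theta * axis
```

For a rotation of π/2 about the z-axis, this returns the vector (0, 0, π/2), whose direction is the rotation axis and whose magnitude is the rotation angle. Note that the extraction is singular near θ = π, where sin θ vanishes; a production implementation would handle that case separately.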