Method for Simultaneous Robot Kinematic and Hand-Eye Calibration
20240042598 · 2024-02-08
Inventors
CPC classification
G05B2219/39057
PHYSICS
B25J9/1602
PERFORMING OPERATIONS; TRANSPORTING
International classification
B25J13/08
PERFORMING OPERATIONS; TRANSPORTING
Abstract
The present disclosure provides a method for simultaneously performing a robot's kinematic calibration and hand-eye calibration. One exemplary method comprises acquiring a plurality of point clouds of a calibration fixture and formulating this simultaneous calibration problem as an optimization problem based on the collection of point clouds of the same fixture.
Claims
1. A method for performing robot kinematic calibration and hand-eye calibration simultaneously, the robot having a 3D sensor attached to its end, said method comprising: determining the nominal values of the calibration parameters; acquiring a plurality of point clouds of a calibration fixture and recording the joint angles of said robot; formulating said simultaneous calibration problem as an optimization problem, said optimization problem having a cost function that minimizes the difference between the pose transformation from the pose of said robot's end at a first location to the pose of said 3D sensor when said robot's end is at a second location through the pose of said 3D sensor when said robot's end is at said first location and the pose transformation from the pose of said robot's end at said first location to the pose of said 3D sensor when said robot's end is at said second location through the pose of said robot's end at said second location; calculating parameters of said cost function; setting bounds for the design variables of said cost function; solving said optimization problem; and validating the calibration results.
2. The method according to claim 1 wherein said calibration fixture comprises three flat surfaces placed approximately perpendicular to each other.
3. The method according to claim 1 wherein said 3D sensor may be any one of a passive stereo 3D sensor, an active stereo 3D sensor, a structured light 3D sensor, or a time-of-flight 3D sensor.
4. The method according to claim 1 wherein determining the nominal values of said calibration parameters comprises the steps of: determining the nominal Denavit-Hartenberg parameters of said robot by measuring said robot's elements corresponding to said parameters or by measuring said robot's elements corresponding to said parameters using a 3D digital model of said robot; selecting a representation of said 3D sensor's pose with respect to said robot's end; determining the nominal values of the Euler angles and the position of said 3D sensor's pose with respect to said robot's end by measuring the elements on said robot and said 3D sensor corresponding to said parameters or by measuring the elements of said robot and said 3D sensor corresponding to said parameters using a 3D digital model of said robot and said 3D sensor; and, converting the Euler angles and the position of said 3D sensor's pose to said selected representation if it does not use Euler angles.
5. The method according to claim 1 wherein the step of acquiring point clouds of said calibration fixture and recording the corresponding robot joint angles comprises the steps of: placing said fixture in front of said robot; commanding said robot to move said 3D sensor to a plurality of poses over said fixture; and, commanding said 3D sensor to capture one or more point clouds of said fixture at each of said poses and recording the telemetry data of said robot including its joint angles at each of said poses.
6. The method according to claim 1 wherein the step of calculating parameters of said cost function comprises the steps of: calculating the poses of a plurality of point clouds of said fixture in the coordinate frame of said 3D sensor; and calculating the relative pose between each pair of said point clouds of said fixture.
7. The method according to claim 1 wherein the step of setting bounds for the design variables comprises setting an upper limit and a lower limit for each element of said design variables.
8. The method according to claim 1 wherein the step of solving said optimization problem may be done by using gradient-free methods such as particle swarm optimization and genetic algorithms.
9. The method according to claim 1 wherein the step of validating said calibration results comprises commanding said robot to move to a variety of poses and comparing the actual poses and the desired poses.
10. The method according to claim 6 wherein the step of calculating the pose of a point cloud of said fixture in the coordinate frame of said 3D sensor comprises the steps of: segmenting said point cloud into three portions with each portion corresponding to one of the three planes of said fixture; fitting a plane to each of said three portions and calculating the normal of each of said three planes; determining the correspondence of said three planes with the three faces of said fixture; calculating the position of the pose of said point cloud; and calculating the orientation of the pose of said point cloud, comprising: setting the z axis of said orientation to be a first normal of said three normals; setting the x axis of said orientation to be the cross product of said first normal and a second normal of the remaining two normals; and setting the y axis of said orientation to be the cross product of said z axis and said x axis in accordance with the right-hand rule.
11. The method according to claim 10 wherein the step of calculating the position of the pose of said point cloud is by setting the position to be the intersection point of said three planes of said fixture.
Description
BRIEF DESCRIPTION OF DRAWINGS
[0021] Embodiments will now be described, by way of example only, with reference to the drawings, in which:
DETAILED DESCRIPTION OF THE INVENTION
[0026] Various embodiments and aspects of the disclosure will be described with reference to details discussed below. The following description and drawings are illustrative of the disclosure and are not to be construed as limiting the disclosure. The drawings are not necessarily to scale. Numerous specific details are described to provide a thorough understanding of various embodiments of the present disclosure. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments of the present disclosure.
[0027] As used herein, the terms, comprises and comprising are to be construed as being inclusive and open ended, and not exclusive. Specifically, when used in this specification including claims, the terms, comprises and comprising and variations thereof mean the specified features, steps or components are included. These terms are not to be interpreted to exclude the presence of other features, steps, or components.
[0028] As used herein, the term exemplary means serving as an example, instance, or illustration, and should not be construed as preferred or advantageous over other configurations disclosed herein.
[0029] As used herein, the terms about and approximately, when used in conjunction with ranges of dimensions of particles, compositions of mixtures or other physical properties or characteristics, are meant to cover slight variations that may exist in the upper and lower limits of the ranges of dimensions so as to not exclude embodiments where on average most of the dimensions are satisfied but where statistically dimensions may exist outside this region. It is not the intention to exclude embodiments such as these from the present disclosure.
[0030] As used herein, the term position and orientation refers to an object's coordinates with respect to a fixed point together with its alignment (or bearing) with respect to a fixed axis. For example, the position and orientation of a robot arm might be the coordinates of a point on the robot arm's end with respect to a coordinate frame attached to the robot arm's base. The term pose is used interchangeably as a short form for position and orientation.
[0031] As used herein, the term robot arm's end refers to the end of the robot arm's last segment where robot actuators and sensors are usually mounted. The term robot arm's hand is used interchangeably as another term for robot arm's end in the context in which a sensor is mounted to the robot arm's end.
[0032] The present disclosure relates to a method for simultaneously calibrating a robot arm's kinematics and the hand-eye transformation of a 3D sensor mounted on the robot arm's end. As required, preferred embodiments of the invention will be disclosed, by way of examples only, with reference to drawings. It should be understood that the invention can be embodied in many various and alternative forms. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the embodiments described herein. Also, the description is not to be considered as limiting the scope of the embodiments described herein.
[0033] Referring to
[0034] One aspect of this disclosure provides a formulation of this dual-calibration problem as an optimization problem, said formulation comprising:
[0035] Modelling the robot arm's kinematics.
[0036] Modelling the relative pose of the 3D sensor at two locations for observing said calibration fixture.
[0037] Formulating a cost function as a function of the calibration parameters.
[0038] Specifying the design variables of this cost function.
[0039] In an embodiment, the step of modelling the kinematics of a six-axis serial robot arm comprises adopting the Denavit-Hartenberg (D-H) model to parameterize the robot arm's geometric structure. According to this model, the pose of the robot arm's end relative to its base can be expressed as a function of the robot arm's six joint angles as follows:

$$A(\theta) = T_{01}(\theta_1)\,T_{12}(\theta_2)\,T_{23}(\theta_3)\,T_{34}(\theta_4)\,T_{45}(\theta_5)\,T_{56}(\theta_6)$$

[0040] where $T_{i-1,i}(\theta_i) \in SE(3)$ is a homogeneous matrix denoting the relative transformation between link frames $\{i-1\}$ and $\{i\}$, and $\theta = [\theta_1\ \theta_2\ \theta_3\ \theta_4\ \theta_5\ \theta_6]^T$ with $\theta_i$ denoting the joint angle between link frames $\{i-1\}$ and $\{i\}$. The homogeneous transformation matrix between link frames $\{i-1\}$ and $\{i\}$ takes the standard D-H form:

$$T_{i-1,i}(\theta_i) = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

[0041] where $\alpha_i$, $a_i$, and $d_i$ are the D-H parameters associated with the robot arm's ith link.
[0042] The D-H parameters can be denoted by a vector $u = [\alpha_1, \ldots, \alpha_6,\ a_1, \ldots, a_6,\ d_1, \ldots, d_6,\ \theta_1, \ldots, \theta_6]^T$. This vector can be modelled as a summation of two terms: its nominal value $\bar{u}$ and a kinematic error $\Delta u$. The nominal value may be obtained from the robot arm's specification or manual measurements.
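As an illustration of the kinematic model above, the forward kinematics $A(\theta)$ can be sketched in a few lines of numpy. This is not code from the disclosure; the function names and the `(theta_offset, d, a, alpha)` parameter ordering are illustrative assumptions, and the commanded joint angle is added to each link's nominal $\theta$ offset.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard D-H homogeneous transform between consecutive link frames."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Pose A(theta) of the arm's end relative to its base.

    dh_params: one (theta_offset, d, a, alpha) tuple per link; the commanded
    joint angle q_i is added to the link's theta offset.
    """
    A = np.eye(4)
    for q, (theta0, d, a, alpha) in zip(joint_angles, dh_params):
        A = A @ dh_transform(theta0 + q, d, a, alpha)
    return A
```

The nominal D-H vector $\bar{u}$ supplies the `dh_params` entries; adding the error vector $\Delta u$ to them yields the calibrated model.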
[0043] Referring to
[0044] The kinematic error term $\Delta u = [\Delta\alpha_1, \ldots, \Delta\alpha_6,\ \Delta a_1, \ldots, \Delta a_6,\ \Delta d_1, \ldots, \Delta d_6,\ \Delta\theta_1, \ldots, \Delta\theta_6]^T$ represents uncertainty and inaccuracy in the robot arm's forward kinematic model; calibrating a robot arm's kinematics means accurately determining the value of this vector.
[0045] In an embodiment, the step of determining the relative pose of said 3D sensor at two locations for observing said calibration fixture comprises calculating the pose of each point cloud with respect to said 3D sensor and then calculating the relative pose of said 3D sensor from the poses of the two point clouds. Calculating the pose of a 3D point cloud of said three-plane calibration fixture with respect to said 3D sensor comprises the following sub-steps. First, the 3D point cloud is segmented into three portions, with each portion corresponding to one of the three planes. Then, each portion of the point cloud is fitted to a plane, and the normal vector of each plane is calculated. The normal vectors are denoted by $n_1$, $n_2$, and $n_3$, and the next step is to determine the correspondence between the three normal vectors and the three faces 1, 2, and 3 of the calibration fixture. A few methods can be used for finding this correspondence.
[0046] Estimate the hand-eye transformation based on manual measurements and use the robot arm's telemetry to calculate the pose of the 3D sensor with respect to the robot arm's base at each observation location. This allows all point clouds to be transformed into the robot arm's base frame so that the correspondence of the three faces can be determined in each point cloud.
[0047] If the 3D sensor provides both 3D and 2D imaging functions, place a unique artificial marker on each plane and use the imaging function to identify each plane before calculating the normal vectors.
[0048] Add distinctive physical features to the three planes. For example, cut off a small portion of a unique shape from each plane; the planes can then be identified purely from the point cloud of each plane.
[0049] Once the normal vector is determined for each plane, the next step is to calculate the pose of the point cloud relative to the 3D sensor. The pose consists of two components: the position and the orientation of the point cloud's coordinate frame expressed in the 3D sensor's frame.
[0050] The position element can be selected to be the intersection of the three planes, denoted by $p \in \mathbb{R}^3$. It can be calculated by solving the following equation:

$$\begin{bmatrix} n_1^T \\ n_2^T \\ n_3^T \end{bmatrix} p = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}$$

[0051] where $b_i$ denotes the distance from the origin of the 3D sensor's coordinate frame to the ith plane, which can be calculated by projecting a point on the ith plane onto the unit normal vector $n_i$.
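As a sketch, this intersection point can be computed by stacking the three unit normals into a matrix and solving the resulting 3x3 linear system. The code below is illustrative, not from the disclosure; it assumes the normals are unit length and the three planes are mutually non-parallel.

```python
import numpy as np

def fixture_position(normals, points_on_planes):
    """Intersection p of the three planes n_i . p = b_i.

    b_i is obtained by projecting a known point on the i-th plane onto its
    unit normal n_i, giving the signed distance from the sensor origin.
    """
    N = np.asarray(normals, dtype=float)          # 3x3, rows are n_1, n_2, n_3
    b = np.array([n @ q for n, q in zip(N, points_on_planes)])
    return np.linalg.solve(N, b)                  # unique if planes intersect in a point
```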
[0052] The next step is to determine the orientation of the coordinate frame of the three-plane calibration fixture in the 3D sensor's frame, which comprises the following steps. The Z-axis of this coordinate frame is set to $n_1$, the X-axis is the cross product of $n_1$ and $n_3$, and the Y-axis is the cross product of the Z-axis and the X-axis. These three axes represent the orientation of the point cloud relative to the 3D sensor's coordinate frame, and it can be denoted by a rotational matrix $Q \in SO(3)$. The pose of the calibration fixture with respect to the 3D sensor's frame at the ith observation can be expressed by the following homogeneous matrix:

$$B_i = \begin{bmatrix} Q_i & p_i \\ 0 & 1 \end{bmatrix}$$

[0053] where $Q_i \in SO(3)$ is a rotational matrix and $p_i \in \mathbb{R}^3$ is a translation vector. The steps described above are repeated for all the point clouds, each taken at a different pose around the calibration fixture. Once the poses of a plurality of point clouds are computed, the pose of the 3D sensor at the jth observation relative to that at the ith observation is then expressed as

$$B_{ij} = B_i B_j^{-1} = \begin{bmatrix} Q_{ij} & p_{ij} \\ 0 & 1 \end{bmatrix}$$

[0054] where $Q_{ij} \in SO(3)$ is a rotational matrix and $p_{ij} \in \mathbb{R}^3$ is a translation vector.
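The construction of the fixture pose from the plane normals and the intersection point, and the relative sensor pose between two observations, can be sketched as follows. This is illustrative numpy code, not from the disclosure; the function names and the convention that the relative pose is formed as $B_i B_j^{-1}$ (valid for a stationary fixture) are assumptions.

```python
import numpy as np

def fixture_pose(n1, n3, p):
    """Homogeneous pose B_i of the fixture in the sensor frame.

    Per paragraph [0052]: z = n1, x = n1 x n3, y = z x x, position p.
    """
    z = np.asarray(n1, float) / np.linalg.norm(n1)
    x = np.cross(z, n3)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                      # right-hand rule
    B = np.eye(4)
    B[:3, :3] = np.column_stack([x, y, z])  # rotation Q_i in SO(3)
    B[:3, 3] = p
    return B

def relative_sensor_pose(B_i, B_j):
    """Pose B_ij of the sensor at observation j relative to observation i,
    derived from the fixture poses B_i and B_j (the fixture is stationary)."""
    return B_i @ np.linalg.inv(B_j)
```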
[0055] In an additional embodiment, the iterative closest point method can be used to calculate the transformation between two point clouds, which can subsequently determine the relative pose of the 3D sensor at these two observations.
[0056] In an embodiment, the step of formulating a cost function for the calibration problem is done as follows. Referring to
$$A_{ij}X = XB_{ij}$$

[0057] where $i,j \in \{1, 2, \ldots, N\}$ for a total of N measurements and $A_{ij} = A_i^{-1}A_j$ denotes the pose of the robot's hand at the jth observation relative to that at the ith observation. The robot arm's pose $A_i$ at the ith observation location is a function of the robot arm's joint angles $\theta_i$ at the ith observation and the robot arm's D-H parameter vector, which consists of its nominal value and an error vector $\Delta u$, and can be expressed as:

$$A_i = \begin{bmatrix} K_i & s_i \\ 0 & 1 \end{bmatrix}$$

[0058] where $K_i \in SO(3)$ is a rotational matrix and $s_i \in \mathbb{R}^3$ is a translation vector.
[0059] Subsequently, the relative pose $A_{ij}$ is also a function of the joint angles $\theta_i$ and $\theta_j$ as well as the D-H parameters, and it can be expressed as follows:

$$A_{ij} = A_i^{-1}A_j = \begin{bmatrix} K_{ij} & s_{ij} \\ 0 & 1 \end{bmatrix}$$

[0060] where $K_{ij} \in SO(3)$ is a rotational matrix and $s_{ij} \in \mathbb{R}^3$ is a translation vector.
[0061] The pose of the 3D sensor relative to the robot arm's hand, denoted by $X \in SE(3)$, can be written in the following form:

$$X = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$$

[0062] where $R \in SO(3)$ is a rotational matrix and $t \in \mathbb{R}^3$ is a translation vector.
[0063] This dual-calibration problem is to find $X \in SE(3)$ and $\Delta u$ that satisfy the following equation

$$A_{ij}X = XB_{ij}$$

[0064] for all $i,j \in \{1, 2, \ldots, N\}$, $i > j$. After separating the rotational and translational elements of this equation, the following cost function is defined for this optimization problem:

$$g(X, \Delta u) = \sum_{\substack{i,j \in \{1,2,\ldots,N\} \\ i > j}} \left( \left\| K_{ij}R - RQ_{ij} \right\|^2 + \lambda \left\| (K_{ij} - I)t - Rp_{ij} + s_{ij} \right\|^2 \right)$$

[0065] where $\lambda > 0$ is a user-specified weight. The dual-calibration problem is to determine the values of $R \in SO(3)$, $t \in \mathbb{R}^3$, and $\Delta u$ that minimize $g(X, \Delta u)$.
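Evaluating this cost for a candidate $(R, t)$ can be sketched as below. This is illustrative code, not the disclosed implementation; it assumes the relative poses $A_{ij}$ (from forward kinematics with the current D-H estimate) and $B_{ij}$ (from the point clouds) have already been computed as 4x4 homogeneous matrices and paired up for all $i > j$.

```python
import numpy as np

def calibration_cost(R, t, A_rel, B_rel, lam=1.0):
    """Cost g summing the rotational and translational residuals of
    A_ij X = X B_ij over all measurement pairs.

    A_rel, B_rel: matched lists of 4x4 relative poses A_ij and B_ij;
    lam is the user-specified weight (lambda > 0).
    """
    g = 0.0
    for A_ij, B_ij in zip(A_rel, B_rel):
        K, s = A_ij[:3, :3], A_ij[:3, 3]
        Q, p = B_ij[:3, :3], B_ij[:3, 3]
        g += np.linalg.norm(K @ R - R @ Q) ** 2                    # rotational residual
        g += lam * np.linalg.norm((K - np.eye(3)) @ t - R @ p + s) ** 2  # translational residual
    return g
```

With a perfect hand-eye transform $X$ and exact measurements, both residuals vanish and the cost is zero, which is what the optimizer drives toward.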
[0066] In one embodiment, the step of specifying the design variables of this optimization problem comprises parameterizing $R \in SO(3)$, $t \in \mathbb{R}^3$, and $\Delta u$. There are a variety of ways to parameterize the rotational matrix $R \in SO(3)$. In a preferred embodiment, this rotational matrix is represented by Euler angles. The Euler angles, denoted by $\varphi = [\varphi_x\ \varphi_y\ \varphi_z]^T$, can be modelled as a summation of two terms: a nominal value $\bar{\varphi}$ and an error term $\Delta\varphi = [\Delta\varphi_x\ \Delta\varphi_y\ \Delta\varphi_z]^T$. Similarly, the translational vector $t \in \mathbb{R}^3$ can be modelled as a summation of two terms as well: a nominal value $\bar{t}$ and an error term $\Delta t$.

[0067] With these notations, the dual-calibration problem is to solve the following optimization problem:

$$\min_{\Delta\varphi,\ \Delta t,\ \Delta u} g(X, \Delta u)$$

[0068] where $X = \begin{bmatrix} R(\bar{\varphi} + \Delta\varphi) & \bar{t} + \Delta t \\ 0 & 1 \end{bmatrix}$ and $u = \bar{u} + \Delta u$.

[0069] When solving this optimization problem, it may be desirable to define empirical bounds on the design variables $\Delta\varphi$, $\Delta t$, and $\Delta u$ to reduce their search space.
[0070] Referring to
[0071] Determining the nominal values of the calibration parameters.
[0072] Acquiring point clouds of the calibration fixture and recording the joint angles of the robot arm.
[0073] Calculating parameters of the cost function.
[0074] Setting bounds for the design variables of the cost function.
[0075] Solving the optimization problem.
[0076] Validating the robot arm's positioning accuracy after the calibration.
[0077] In an embodiment, the step of determining the nominal values of the calibration parameters comprises the following sub-steps. The first step is determining the nominal values of the D-H parameters of the given robot arm. For a given robot arm, the nominal value of these parameters, denoted by $\bar{u}$, can be found from its specification or measured manually. The second step is selecting a representation of the 3D sensor's pose with respect to the robot arm's end. In a preferred embodiment, Euler angles and a vector in $\mathbb{R}^3$ are used to represent the rotational and translational elements of this pose. The third step is determining the nominal values of the Euler angles and the translational vector, denoted by $\bar{\varphi}$ and $\bar{t}$ respectively.
[0078] In an embodiment, the step of acquiring point clouds of said three-plane calibration fixture and recording the corresponding robot joint angles comprises the following sub-steps. First, placing said fixture in front of said robot arm at a proper pose. Then controlling the robot arm to move the 3D sensor to a plurality of poses over the fixture. At each observation location, commanding the 3D sensor to capture one or more point clouds of said fixture. At each observation location, reading telemetry data including the robot arm's joint angles from said robot arm's controller and saving the data to a readable medium. The number of poses to capture point clouds should be greater than the number of calibration parameters. It is preferred that the poses are distributed throughout the work envelope of the robot arm, and it is desired that said 3D sensor has a complete view of the calibration fixture at each observation location.
[0079] In an embodiment, the step of calculating parameters of the cost function comprises the following sub-steps. First, calculate the matrices $B_i$ for one point cloud of each observation. Second, calculate $B_{ij}$ for each pair of point clouds.
[0080] In an embodiment, the step of setting bounds for the design variables involves setting an upper limit and a lower limit for each element of the design variables $\Delta\varphi$, $\Delta t$, and $\Delta u$.
[0081] In an embodiment, the step of solving this optimization problem is done by using the particle swarm optimization (PSO) method. The PSO method does not require gradient information of the cost function.
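A minimal box-bounded particle swarm optimizer might look like the sketch below. This is a generic textbook PSO, not the disclosed implementation; all parameter values are illustrative, and in practice a tuned PSO or genetic-algorithm library would be applied to the calibration cost with the bounds from paragraph [0080].

```python
import numpy as np

def pso_minimize(f, lower, upper, n_particles=40, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5):
    """Gradient-free particle swarm minimization of f over box bounds."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    x = rng.uniform(lower, upper, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                                     # particle velocities
    pbest, pbest_f = x.copy(), np.array([f(xi) for xi in x])
    gbest = pbest[np.argmin(pbest_f)].copy()                 # swarm-best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lower, upper)                     # enforce the bounds
        fx = np.array([f(xi) for xi in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()
```

For the calibration problem, `f` would unpack the design vector into $\Delta\varphi$, $\Delta t$, and $\Delta u$, rebuild $X$ and the $A_{ij}$ matrices, and return $g(X, \Delta u)$.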
[0082] In an additional embodiment, a method based on the genetic algorithms can be used to solve this optimization problem.
[0083] In an embodiment, the step of validating the calibration results involves commanding the robot arm to move to a variety of poses within its work envelope and comparing the actual poses with the desired poses.
[0084] The specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms.