METHOD FOR ROBOT ASSISTED MULTI-VIEW 3D SCANNING MEASUREMENT BASED ON PATH PLANNING
20230339112 · 2023-10-26
Inventors
- Chun Yin (Chengdu, CN)
- Yan GAO (Chengdu, CN)
- Zhongbao YAN (Chengdu, CN)
- Kai CHEN (Chengdu, CN)
- Yuhua CHENG (Chengdu, CN)
- Xutong Tan (Chengdu, CN)
- Junyang LIU (Chengdu, CN)
Cpc classification
- B25J 9/1664 (Performing Operations; Transporting)
- G05B 2219/39057 (Physics)
Abstract
Robot assisted multi-view 3D scanning measurement based on path planning includes: firstly, establishing a virtual simulation platform to complete the setting of measurement poses and measurement paths and to perform path evaluations of the measurement paths; then, completing a preliminary hand-eye calibration based on the properties of the Kronecker product, and optimizing the preliminary hand-eye calibration by establishing a reprojection error cost function as the fitness function of a particle swarm optimization algorithm; lastly, moving the robot to the measurement poses of the planned measurement paths, obtaining a single-view point cloud of the measured object and transforming it from the camera coordinate system to the robot base coordinate system, based on the optimized hand-eye matrix, to obtain a registered single-view point cloud. When registered single-view point clouds of all measurement poses are obtained, the points under the robot base coordinate system form a complete point cloud of the measured object.
Claims
1. A method for robot assisted multi-view 3D scanning measurement based on path planning, comprising:
(1) establishing a virtual simulation platform:
1.1) based on a real measurement environment, establishing a virtual simulation platform on a computer with an open source robot operating system (ROS), and importing the description file of a robot into the virtual simulation platform to create a robot which is the same as the robot of the real measurement environment; at the same time, on the virtual simulation platform, installing a structured light measuring equipment and a depth camera, which are the same as those of the real measurement environment, on the fixture of the robot's end flange;
1.2) simulating a measured object to obtain a simulated object, placing the simulated object right in front of the robot and the structured light measuring equipment, and determining multiple measurement surfaces of the simulated object and multiple measurement paths on each measurement surface for image taking of the simulated object by the structured light measuring equipment, where the i.sup.th measurement path of the k.sup.th measurement surface is denoted by .sub.kS.sub.i, k = 1, 2, ..., K, i = 1, 2, ..., M.sub.k, K is the number of measurement surfaces, M.sub.k is the number of measurement paths of the k.sup.th measurement surface, and the j.sup.th measurement point, namely measurement pose, of the i.sup.th measurement path of the k.sup.th measurement surface is denoted by .sub.kP.sub.ij, j = 1, 2, ..., N.sub.k, N.sub.k is the number of measurement poses of a measurement path of the k.sup.th measurement surface;
where the view field of the structured light measuring equipment is a rectangle with a length of m cm and a width of n cm, the outer rectangle of the k.sup.th measurement surface is .sub.ka in length and .sub.kb in width, and the measurement paths are vertical strips along the length, which satisfy the following constraints: any two adjacent measurement paths have a strip of common area, the number M.sub.k of measurement paths of the k.sup.th measurement surface is greater than or equal to .sub.ka/m, and the number N.sub.k of measurement poses of a measurement path of the k.sup.th measurement surface is greater than or equal to .sub.kb/n; the j.sup.th measurement pose .sub.kP.sub.ij of the i.sup.th measurement path of the k.sup.th measurement surface is obtained according to the constraints;
(2) planning measurement paths on the virtual simulation platform:
2.1) on the virtual simulation platform, for the i.sup.th measurement path of the k.sup.th measurement surface, dragging the end of the robot to a measurement pose .sub.kP.sub.ij, where the measurement pose .sub.kP.sub.ij is (.sub.kPx.sub.ij, .sub.kPy.sub.ij, .sub.kPz.sub.ij, .sub.kRx.sub.ij, .sub.kRy.sub.ij, .sub.kRz.sub.ij), (.sub.kPx.sub.ij, .sub.kPy.sub.ij, .sub.kPz.sub.ij) is the position coordinate of the end of the robot, and (.sub.kRx.sub.ij, .sub.kRy.sub.ij, .sub.kRz.sub.ij) is the attitude coordinate of the end of the robot;
2.2) taking an RGB-D image of the simulated object at the measurement pose .sub.kP.sub.ij by the depth camera and converting the RGB-D image into a grayscale image, then filtering out the background of the RGB-D image by using threshold segmentation and extracting the contour from the RGB-D image to obtain a minimum circumscribed rectangle of the simulated object, then traversing the depths of all pixels of the RGB-D image within the minimum circumscribed rectangle to find the shortest distance d.sub.min between the simulated object and the structured light measuring equipment and recording the pixel coordinate (u,v) that corresponds to the shortest distance d.sub.min on the RGB-D image, then judging whether the shortest distance d.sub.min satisfies the measurement range of the structured light measuring equipment: d.sub.min ∈ [D-δ, D+δ], where D is the focal length of the structured light measuring equipment for camera calibration and δ is the allowable measurement error range of the structured light measuring equipment; if yes, then going to step 2.4), otherwise going to step 2.3);
2.3) obtaining a corresponding spatial coordinate (x.sub.(u,v), y.sub.(u,v), z.sub.(u,v)) of the pixel coordinate (u,v) through a coordinate transformation, then determining a spatial line according to the spatial coordinate (x.sub.(u,v), y.sub.(u,v), z.sub.(u,v)) and the position coordinate (.sub.kPx.sub.ij, .sub.kPy.sub.ij, .sub.kPz.sub.ij) of the end of the robot:
Description
BRIEF DESCRIPTION OF THE DRAWING
[0064] The above and other objectives, features and advantages of the present invention will be more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0077] Hereinafter, preferred embodiments of the present invention will be described with reference to the accompanying drawings. It should be noted that the similar modules are designated by similar reference numerals although they are illustrated in different drawings. Also, in the following description, a detailed description of known functions and configurations incorporated herein will be omitted when it may obscure the subject matter of the present invention.
[0078] In one embodiment of the present invention, as shown in
Step S1: Establishing a Virtual Simulation Platform
Step S1.1: Creating a Robot and Installing a Structured Light Measuring Equipment and a Depth Camera
[0079] Based on a real measurement environment, establishing a virtual simulation platform on a computer with an open source robot operating system (ROS), and importing the description file of a robot into the virtual simulation platform to create a robot which is the same as the robot of the real measurement environment; at the same time, on the virtual simulation platform, installing a structured light measuring equipment and a depth camera, which are the same as those of the real measurement environment, on the fixture of the robot's end flange.
[0080] Where the description file of a robot is a URDF (Unified Robot Description Format) file, which includes the parameters of links, joints, kinematics and dynamics, visualization models, and collision detection models.
Step S1.2: Determining Measurement Paths
[0082] Simulating a measured object to obtain a simulated object, and placing the simulated object right in front of the robot and the structured light measuring equipment, determining multiple measurement surfaces of the simulated object and multiple measurement paths on each measurement surface for image taking of the simulated object by the structured light measuring equipment, where the i.sup.th measurement path of the k.sup.th measurement surface is denoted by .sub.kS.sub.i, k = 1, 2, ..., K, i = 1, 2, ..., M.sub.k, K is the number of measurement surfaces, M.sub.k is the number of measurement paths of the k.sup.th measurement surface, and the j.sup.th measurement point, namely measurement pose, of the i.sup.th measurement path of the k.sup.th measurement surface is denoted by .sub.kP.sub.ij, j = 1, 2, ..., N.sub.k, N.sub.k is the number of measurement poses of a measurement path of the k.sup.th measurement surface.
[0083] Where the view field of the structured light measuring equipment is a rectangle with a length of m cm and a width of n cm, the outer rectangle of the k.sup.th measurement surface is .sub.ka in length and .sub.kb in width, the measurement paths are vertical strips along the length, which satisfy the following constraints: any two adjacent measurement paths have a strip of common area, the number M.sub.k of measurement paths of the k.sup.th measurement surface is greater than or equal to .sub.ka/m, the number N.sub.k of measurement poses of a measurement path of the k.sup.th measurement surface is greater than or equal to .sub.kb/n, the j.sup.th measurement pose .sub.kP.sub.ij of the i.sup.th measurement path of the k.sup.th measurement surface is obtained according to the constraints. As shown in
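As an illustration of the coverage constraints above, the following Python sketch computes path and pose counts that satisfy M.sub.k ≥ .sub.ka/m and N.sub.k ≥ .sub.kb/n. The overlap fraction is a hypothetical parameter; the text only requires that adjacent paths share a common strip.

```python
import math

def plan_surface_paths(a_k, b_k, m, n, overlap=0.2):
    """Count vertical-strip measurement paths over one measurement surface.

    a_k, b_k : length/width (cm) of the outer rectangle of the k-th surface.
    m, n     : length/width (cm) of the structured-light view field.
    overlap  : fraction of view-field overlap between adjacent strips
               (illustrative assumption, not a value from the text).
    Returns (M_k, N_k): number of paths and poses per path; because the
    effective stride is <= the view field, M_k >= a_k/m and N_k >= b_k/n.
    """
    step_x = m * (1.0 - overlap)           # stride between adjacent paths
    step_y = n * (1.0 - overlap)           # stride between adjacent poses
    M_k = max(1, math.ceil(a_k / step_x))
    N_k = max(1, math.ceil(b_k / step_y))
    return M_k, N_k
```

For a 60 cm × 45 cm surface and a 20 cm × 15 cm view field, this yields 4 paths of 4 poses each, versus the lower bounds of 3 and 3 without overlap.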
Step S2: Planning Measurement Paths on the Virtual Simulation Platform
Step S2.1: Dragging the End of the Robot to a Measurement Pose
[0084] On the virtual simulation platform, for the i.sup.th measurement path of the k.sup.th measurement surface, dragging the end of the robot to a measurement pose .sub.kP.sub.ij, where the measurement pose .sub.kP.sub.ij is (.sub.kPx.sub.ij, .sub.kPy.sub.ij, .sub.kPz.sub.ij, .sub.kRx.sub.ij, .sub.kRy.sub.ij, .sub.kRz.sub.ij), (.sub.kPx.sub.ij, .sub.kPy.sub.ij, .sub.kPz.sub.ij) is the position coordinate of the end of the robot, and (.sub.kRx.sub.ij, .sub.kRy.sub.ij, .sub.kRz.sub.ij) is the attitude coordinate of the end of the robot.
Step S2.2: Detecting and Judging the Shortest Distance
[0085] Taking an RGB-D image of the simulated object at the measurement pose .sub.kP.sub.ij by the depth camera and converting the RGB-D image into a grayscale image, then filtering out the background of the RGB-D image by using threshold segmentation and extracting the contour from the RGB-D image to obtain a minimum circumscribed rectangle of the simulated object, then traversing the depths of all pixels of the RGB-D image within the minimum circumscribed rectangle to find the shortest distance d.sub.min between the simulated object and the structured light measuring equipment and recording the pixel coordinate (u,v) that corresponds to the shortest distance d.sub.min on the RGB-D image, then judging whether the shortest distance d.sub.min satisfies the measurement range of the structured light measuring equipment: d.sub.min ∈ [D-δ, D+δ], where D is the focal length of the structured light measuring equipment for camera calibration and δ is the allowable measurement error range of the structured light measuring equipment; if yes, then going to Step S2.4, otherwise going to Step S2.3.
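A minimal NumPy sketch of the depth test above, assuming background pixels have already been zeroed by the threshold segmentation; the grayscale conversion and contour extraction are simplified to an axis-aligned bounding box of the foreground mask.

```python
import numpy as np

def shortest_distance_check(depth, D, delta, background=0.0):
    """Find the pixel nearest to the sensor inside the object's bounding
    rectangle and test it against the working range [D - delta, D + delta].

    depth: 2-D array of per-pixel depths; `background` marks filtered-out
    pixels (a simplification of the segmentation described in the text).
    Returns (d_min, (u, v), in_range).
    """
    mask = depth > background
    rows, cols = np.nonzero(mask)
    # minimum circumscribed (axis-aligned) rectangle of the object
    r0, r1 = rows.min(), rows.max()
    c0, c1 = cols.min(), cols.max()
    roi = depth[r0:r1 + 1, c0:c1 + 1].copy()
    roi[roi <= background] = np.inf        # ignore background inside the box
    idx = np.unravel_index(np.argmin(roi), roi.shape)
    d_min = float(roi[idx])
    u, v = r0 + idx[0], c0 + idx[1]
    return d_min, (u, v), (D - delta) <= d_min <= (D + delta)
```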
Step S2.3: Adjusting Measurement Pose
[0086] Obtaining a corresponding spatial coordinate (x.sub.(u,v), y.sub.(u,v), z.sub.(u,v)) of the pixel coordinate (u,v) through a coordinate transformation, then determining a spatial line according to the spatial coordinate (x.sub.(u,v), y.sub.(u,v), z.sub.(u,v)) and the position coordinate (.sub.kPx.sub.ij, .sub.kPy.sub.ij, .sub.kPz.sub.ij) of the end of the robot:
[0087] Where (x,y,z) is a spatial coordinate on the spatial line.
[0088] Then, finding a position coordinate from the position coordinate (.sub.kPx.sub.ij, .sub.kPy.sub.ij, .sub.kPz.sub.ij) on and along the spatial line to obtain a position coordinate (.sub.kp̃x.sub.ij, .sub.kp̃y.sub.ij, .sub.kp̃z.sub.ij), which satisfies the following measurement conditions:
[0089] Combining the position coordinate (.sub.kp̃x.sub.ij, .sub.kp̃y.sub.ij, .sub.kp̃z.sub.ij) with the attitude coordinate (.sub.kRx.sub.ij, .sub.kRy.sub.ij, .sub.kRz.sub.ij) to form a measurement pose (.sub.kp̃x.sub.ij, .sub.kp̃y.sub.ij, .sub.kp̃z.sub.ij, .sub.kRx.sub.ij, .sub.kRy.sub.ij, .sub.kRz.sub.ij) and updating the measurement pose .sub.kP.sub.ij with the measurement pose (.sub.kp̃x.sub.ij, .sub.kp̃y.sub.ij, .sub.kp̃z.sub.ij, .sub.kRx.sub.ij, .sub.kRy.sub.ij, .sub.kRz.sub.ij).
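The position adjustment of Step S2.3 can be sketched as follows, under the assumption that moving the end of the robot along the spatial line by (d.sub.min - D) toward the nearest object point brings that point to the calibrated distance D.

```python
import numpy as np

def adjust_position(p_end, p_closest, d_min, D):
    """Move the end-effector along the line through p_end and p_closest.

    p_end     : current position (kPx_ij, kPy_ij, kPz_ij) of the robot end.
    p_closest : spatial point (x_(u,v), y_(u,v), z_(u,v)) at distance d_min.
    Sketch only: assumes d_min is measured along this line, so translating
    the end by (d_min - D) toward p_closest makes the new distance equal D.
    """
    p_end = np.asarray(p_end, float)
    p_closest = np.asarray(p_closest, float)
    direction = p_closest - p_end
    direction /= np.linalg.norm(direction)
    return p_end + (d_min - D) * direction
```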
Step S2.4: Repeating Steps S2.1- S2.3, Until All Measurement Poses of the Measurement Path are Judged
[0090] Recording the measurement pose .sub.kP.sub.ij, then going to Step S2.1 for judging the next measurement pose .sub.kP.sub.i(j+1), until the shortest distances of all measurement poses of the measurement path .sub.kS.sub.i are judged; then going to Step S2.5.
Step S2.5: Evaluating the Measurement Path .sub.kS.sub.i
[0091] Step S2.5.1: executing the measurement path .sub.kS.sub.i on the virtual simulation platform, where the robot will move continuously from the measurement pose .sub.kP.sub.i1 to the measurement pose .sub.kP.sub.iN.sub.k, then sampling and recording the position .sub.kṖ.sub.it at equal time intervals during the movement; all positions .sub.kṖ.sub.it, t = 0, 1, ..., T form a motion path .sub.kṠ.sub.i, where t is the serial number of a sampling and T is the number of samplings.
[0092] Calculating the shortest distance l.sub.1 between the starting position (.sub.kPx.sub.i1, .sub.kPy.sub.i1, .sub.kPz.sub.i1) and the ending position (.sub.kPx.sub.iN.sub.k, .sub.kPy.sub.iN.sub.k, .sub.kPz.sub.iN.sub.k) of the measurement path .sub.kS.sub.i:
[0093] Calculating the motion distance l.sub.2 between the starting position (.sub.kṖx.sub.i1, .sub.kṖy.sub.i1, .sub.kṖz.sub.i1) and the ending position (.sub.kṖx.sub.iT, .sub.kṖy.sub.iT, .sub.kṖz.sub.iT) of the motion path .sub.kṠ.sub.i:
[0094] Then obtaining a distance difference L, where L = l.sub.2 - l.sub.1, and creating an evaluation function f.sub.1(L), where f.sub.1(L) = (δ.sub.1 - L) / δ.sub.1, 0 ≤ L ≤ δ.sub.1, δ.sub.1 is a maximum error threshold.
[0095] Step S2.5.2: finding a position .sub.kṖ.sub.iw which has the shortest distance to the barycenter of the simulated object from the motion path .sub.kṠ.sub.i, where the shortest distance is denoted by l.sub.3, then creating an evaluation function f.sub.2(l.sub.3), where f.sub.2(l.sub.3) = (l.sub.3 - δ.sub.2) / l.sub.3, δ.sub.2 is the shortest distance which guarantees no collision between the structured light measuring equipment and the simulated object.
[0096] Step S2.5.3: creating an overall evaluation function .sub.kF.sub.i for the measurement path .sub.kS.sub.i, where .sub.kF.sub.i = (0.5f.sub.1(L) + 0.5f.sub.2(l.sub.3)) × 100, then evaluating the measurement path .sub.kS.sub.i according to the overall evaluation function .sub.kF.sub.i; if the overall evaluation function .sub.kF.sub.i > g, the measurement path .sub.kS.sub.i has passed the evaluation, then going to Step S2.7, otherwise going to Step S2.6, where g is a threshold which is determined by the measurement operator according to an actual measurement scenario, and 0 < g < 100.
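Steps S2.5.1 to S2.5.3 can be sketched as one evaluation routine. The straight-line distance l.sub.1, the traveled distance l.sub.2 (taken here as the arc length of the sampled path, an assumption), the clearance l.sub.3, and the combined 0-100 score are computed from the sampled positions; clamping L at zero is an added safeguard to respect 0 ≤ L ≤ δ.sub.1.

```python
import numpy as np

def evaluate_path(samples, object_center, delta1, delta2):
    """Score a simulated motion path as in Steps S2.5.1-S2.5.3.

    samples       : (T+1, 3) array of sampled positions along the motion path.
    object_center : barycenter of the simulated object.
    delta1        : maximum allowed excess of traveled over straight-line distance.
    delta2        : minimum safe clearance (no-collision distance).
    Returns (F, f1, f2) with F on a 0-100 scale.
    """
    samples = np.asarray(samples, float)
    l1 = np.linalg.norm(samples[-1] - samples[0])   # straight-line distance
    seg = np.diff(samples, axis=0)
    l2 = np.linalg.norm(seg, axis=1).sum()          # traveled (arc-length) distance
    L = max(0.0, l2 - l1)
    f1 = (delta1 - L) / delta1                      # path-straightness score
    l3 = np.linalg.norm(samples - np.asarray(object_center, float), axis=1).min()
    f2 = (l3 - delta2) / l3                         # clearance score
    F = (0.5 * f1 + 0.5 * f2) * 100.0
    return F, f1, f2
```

A perfectly straight path one unit away from the object with δ.sub.2 = 0.5 scores F = 75 under these definitions.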
Step S2.6: Correcting the Measurement Path
[0097] Traversing all measurement poses of the measurement path .sub.kS.sub.i to find a measurement pose .sub.kP.sub.is which has the shortest distance to the position .sub.kṖ.sub.iw, then moving the robot to the measurement pose .sub.kP.sub.is, dragging the end of the robot to manually increase the shortest distance d.sub.min between the simulated object and the structured light measuring equipment, where the increase of the shortest distance d.sub.min needs to satisfy the measurement range of the structured light measuring equipment: d.sub.min ∈ [D-δ, D+δ]; replacing the measurement pose .sub.kP.sub.is with the measurement pose after the increase of the shortest distance d.sub.min to complete the correction of the measurement path .sub.kS.sub.i, then going to Step S2.7.
Step S2.7: Repeating Step S2.1 - Step S2.6 for Each Measurement Path, Then Sending the Planned Measurement Paths to the Robot of Real Measurement Environment
[0098] For each measurement path of each measurement surface, performing Step S2.1 to Step S2.6 to complete the simulation of planning of the measurement paths, converting the planned measurement paths into communication messages that can be recognized by robot and sending the communication messages from the virtual simulation platform to the robot of real measurement environment.
Step S3: Measuring in Real Measurement Environment
[0099] As shown in
Step S3.1: Establishing a Hand-Eye Calibration Equation
[0100] In the real measurement environment, as shown in
[0101] Obtaining the rotation matrix R.sub.ci′ and the translation vector T.sub.ci′ of the calibration board relative to the binocular camera of the structured light measuring equipment in each calibration board image according to the calibration method of Zhang, and then combining the rotation matrix R.sub.ci′ and the translation vector T.sub.ci′ into an extrinsic parameter matrix H.sub.ci′, i′ = 1, 2, ..., n′; meanwhile, obtaining the rotation matrix R.sub.gi′ and the translation vector T.sub.gi′ of the robot's end flange relative to the base of the robot according to the measurement pose, then combining the rotation matrix R.sub.gi′ and the translation vector T.sub.gi′ into a robot pose matrix H.sub.gi′, i′ = 1, 2, ..., n′, where:
[0102] For the reason that the relative pose relation between the base of the robot and the checkerboard (calibration board) is constant and the relative pose relation between the binocular camera and the robot’s end flange is constant, combining the coordinate transformation relation between the extrinsic parameter matrix of the binocular camera and the robot pose matrix, we can obtain the following relation for any two measurement poses:
[0103] Namely:
[0104] Performing a matrix transformation, namely multiplying the front of left side by matrix
and the rear of right side by matrix
we can obtain the following equation:
[0105] Namely, establishing a hand-eye calibration equation based on the extrinsic parameter matrices H.sub.cu′, H.sub.cv′ and robot pose matrices H.sub.gu′, H.sub.gv′ of any two measurement poses:
where u′, v′ are serial numbers of any two measurement poses, u′ ≠ v′ and:
where R.sub.gu′,v′ is the rotation matrix of the matrix H.sub.gu′,v′, T.sub.gu′,v′ is the translation vector of the matrix H.sub.gu′,v′, R.sub.cg is the rotation matrix of the hand-eye matrix H.sub.cg, T.sub.cg is the translation vector of the hand-eye matrix H.sub.cg, R.sub.cu′,v′ is the rotation matrix of the matrix H.sub.cu′,v′, and T.sub.cu′,v′ is the translation vector of the matrix H.sub.cu′,v′.
[0106] Letting
then the hand-eye calibration equation can be expressed as:
such hand-eye calibration equations can be established from the n′ sets of calibration data.
Step S3.2: Calculating the Hand-Eye Matrix by Using a Singular Value Decomposition
[0107] Based on the properties of the Kronecker product, transforming the hand-eye calibration equation into a least squares problem, and calculating the hand-eye matrix H.sub.cg by using a singular value decomposition.
[0108] Based on the properties of the Kronecker product, expanding the hand-eye calibration equation AX = XB, a homogeneous equation can be obtained:
[0109] Namely Formula 1 and formula 2:
[0110] Where R.sub.cu′,v′, R.sub.cg, R.sub.gu′,v′ are rotation matrices; they belong to the special orthogonal group SO(3), which is closed under multiplication.
[0111] Performing the following transformations on Formula 1:
[0112] Then we obtain:
[0113] Where I is the identity matrix, ⊗ is the Kronecker product operator, and vec is the vectorization operator.
[0114] So a linear equation Cx = 0 is obtained; thus we can transform the linear equation into a least squares problem by using singular value decomposition (SVD). For:
we can obtain the following least squares problem:
[0115] Then we obtain:
[0116] Now the primal problem is converted into:
[0117] Where
[0118] Since there is a constraint ||y||.sub.2 = 1, the minimal solution is y = [0 0 ... 1].sup.T; namely, the solution of Cx = 0 is the last column of the matrix V:
[0119] Therefore, the calculation of the hand-eye matrix H.sub.cg in the present invention is: [0120] Firstly, establishing a linear equation set: (R.sub.gu′,v′ ⊗ I - I ⊗ R.sup.T.sub.cu′,v′) · vec(R.sub.cg) = 0, where I is the identity matrix, ⊗ is the Kronecker product operator, and vec is the vectorization operator; [0121] stacking the matrices (R.sub.gu′,v′ ⊗ I - I ⊗ R.sup.T.sub.cu′,v′) of all pairs of measurement poses by column to obtain a matrix R; [0122] performing a singular value decomposition on the matrix R to obtain the matrix V, namely the right singular matrix of the matrix R, and taking out the 9 elements of the last column of the matrix V to revert to a matrix
[0124] So the rotation matrix R.sub.cg can be calculated from Formula 1. In order to guarantee unit orthogonality, the rotation matrix R.sub.cg is orthogonalized by adopting Rodrigues' rotation formula, so as to eliminate the influence of measurement noise. In Rodrigues' rotation formula, Σ.sub.R can reflect the quality of the calibration to a certain extent: for a good calibration, all elements of Σ.sub.R should be very close, or even completely equal.
[0125] Lastly, through above-mentioned Formula 2, we can obtain:
[0126] Placing all matrices (R.sub.gu′,v′ - I) of all pairs of measurement poses by column to obtain a matrix R.sub.g, placing all matrices (R.sub.cgT.sub.cu′,v′ - T.sub.gu′,v′) of all pairs of measurement poses by column to obtain a matrix T′, then calculating the translation vector of the hand-eye matrix H.sub.cg: T.sub.cg = R.sub.g.sup.-1T′.
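The closed-form solution of Step S3.2 can be sketched with NumPy. Note that NumPy's row-major `reshape` matches the vec convention under which (R.sub.g ⊗ I - I ⊗ R.sup.T.sub.c) · vec(R.sub.cg) = 0 expresses R.sub.g R.sub.cg = R.sub.cg R.sub.c; the SVD-based projection onto SO(3) used here stands in for the Rodrigues-formula orthogonalization described in [0124].

```python
import numpy as np

def hand_eye_svd(R_g_list, T_g_list, R_c_list, T_c_list):
    """Closed-form hand-eye calibration AX = XB via the Kronecker product.

    Inputs are relative motions for pairs of poses: R_g/T_g on the robot
    (gripper) side, R_c/T_c on the camera side.  Solves
    (R_g kron I - I kron R_c^T) vec(R_cg) = 0 by SVD, projects R_cg onto
    SO(3), then solves (R_g - I) T_cg = R_cg T_c - T_g by least squares.
    A sketch of the method, not the exact patented implementation.
    """
    I3 = np.eye(3)
    C = np.vstack([np.kron(Rg, I3) - np.kron(I3, Rc.T)
                   for Rg, Rc in zip(R_g_list, R_c_list)])
    _, _, Vt = np.linalg.svd(C)
    R = Vt[-1].reshape(3, 3)             # last right-singular vector -> R_cg up to scale
    U, _, Wt = np.linalg.svd(R)          # project onto the nearest orthogonal matrix
    R_cg = U @ Wt
    if np.linalg.det(R_cg) < 0:          # fix the overall sign ambiguity
        R_cg = -R_cg
    A = np.vstack([Rg - I3 for Rg in R_g_list])
    b = np.concatenate([R_cg @ Tc - Tg
                        for Tc, Tg in zip(T_c_list, T_g_list)])
    T_cg, *_ = np.linalg.lstsq(A, b, rcond=None)
    return R_cg, T_cg
```

At least two relative motions with non-parallel rotation axes are needed for a unique solution.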
[0127] Thus the hand-eye matrix H.sub.cg, namely the rotation and translation matrix from the binocular camera coordinate system to the robot's end flange coordinate system, is calculated and taken as the initial position of the subsequent particle swarm optimization algorithm.
Step S3.3: Based on Minimizing a Reprojection Error, Optimizing the Hand-Eye Matrix by Using a Particle Swarm Optimization Algorithm
Step S3.3.1: Creating a Particle Swarm and Taking the Hand-Eye Matrix as Initial Positions of All Particles
[0128] Creating a particle swarm with population size of K′, where the position and the velocity of the k′.sup.th particle are denoted by p.sub.k′ and v.sub.k′, respectively, then initializing the positions p.sub.k′ and the velocities v.sub.k′ of all K′ particles:
where k′ = 1, 2, ..., K′, v.sub.min and v.sub.max are the lower threshold and the upper threshold of the velocity at each iteration of a particle, and rand() is a random function;
Step S3.3.2: Establishing a Reprojection Error Cost Function as the Fitness Function of the Particle Swarm Optimization Algorithm
where s.sub.i′ is the scale factor of the i′.sup.th measurement pose, K* is the intrinsic parameter matrix of the binocular camera, exp[·].sub.3×4 is an operator choosing the 3×4 submatrix in the upper left corner of the matrix in the square bracket, || ||.sub.2 is the 2-norm operator, and P.sub.j′ is the position of the corner of the checkerboard of any non-i′.sup.th measurement pose in the camera coordinate system.
[0129] P = [x, y, z].sup.T is the position of a corner of the checkerboard (calibration board) in the world coordinate system W, B is the robot base coordinate system, and P.sub.j′ is the position of the corner of the checkerboard of the j′.sup.th measurement pose in the camera coordinate system. Their relationship can be expressed as:
[0130] As shown in
[0131] We can obtain the following expression of the position P.sub.i′j′:
where
[0132] We can project the position P.sub.i′j′ to the image from the i′.sup.th measurement pose by using the calibration parameters of the binocular camera and obtain an image coordinate
Namely:
where s.sub.i′ is the scale factor of the i′.sup.th measurement pose, K* is the intrinsic parameter matrix of the binocular camera, and exp[·].sub.3×4 is an operator choosing the 3×4 submatrix in the upper left corner of the matrix in the square bracket.
[0133] Denoting the reprojection error of the position P.sub.i′j′ as err, the reprojection error of the position P.sub.i′j′ is:
[0134] Substituting the expression of the position P.sub.i′j′, we can obtain:
[0135] Generalizing to the n′ calibration board images, we can obtain a reprojection error cost function:
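A sketch of the reprojection error for a single corner and a single pair of poses. The transformation chain H.sub.cg.sup.-1 H.sub.gi′.sup.-1 H.sub.gj′ H.sub.cg used here is an assumption consistent with a camera-to-flange hand-eye matrix; the scale factor s.sub.i′ appears as the perspective division.

```python
import numpy as np

def reprojection_error(K, H_g_i, H_g_j, H_cg, P_j, uv_obs):
    """Reprojection error of one checkerboard corner across two poses.

    P_j is the corner in the camera frame at pose j'.  The chain
    inv(H_cg) @ inv(H_g_i) @ H_g_j @ H_cg maps it into the camera frame at
    pose i' (assumed chain), where it is projected with the intrinsic
    matrix K and compared with the observed pixel uv_obs.
    """
    H = np.linalg.inv(H_cg) @ np.linalg.inv(H_g_i) @ H_g_j @ H_cg
    P_i = (H @ np.append(P_j, 1.0))[:3]     # corner in camera frame at pose i'
    uv = K @ P_i
    uv = uv[:2] / uv[2]                     # perspective division (scale s_i')
    return float(np.linalg.norm(uv - uv_obs))
```

Summing this error over all corners and all pose pairs yields the cost err* that the particle swarm minimizes.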
[0136] Taking the reprojection error cost function err* as the fitness function φ(p.sub.k′) of the particle swarm optimization algorithm to perform an iteration, we can obtain the optimized hand-eye matrix
[0137] Step S3.3.3: according to the fitness function φ(p.sub.k′), finding the position of the current individual extremum min
of each particle and taking it as the historical optimal position
finding the position of the global extremum min {φ(p.sub.k′), k′ = 1, 2,..., K′}, namely the position of the particle of the minimum value of fitness function and taking it as the global optimal position g* :
[0138] Step S3.3.4: updating the velocity v.sub.k′ and position p.sub.k′ of each particle:
where ω is an inertia factor, and c.sub.1 and c.sub.2 are acceleration factors;
[0139] returning to Step S3.3.3 until a termination condition is reached;
[0140] Step S3.3.5: taking the global optimal position g* as the optimized hand-eye matrix, which is denoted by
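Steps S3.3.1 to S3.3.5 follow the standard particle swarm update v ← ωv + c.sub.1r.sub.1(p* - p) + c.sub.2r.sub.2(g* - p). The following is a generic minimizer sketch with illustrative hyperparameter values not taken from the patent; for the hand-eye refinement, the fitness would be the reprojection cost err* evaluated over the flattened hand-eye parameters, and all particles start at the closed-form solution.

```python
import numpy as np

def pso_minimize(fitness, x0, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, v_max=0.1, seed=0):
    """Minimal particle swarm optimizer (sketch).

    All particles start at x0, as in Step S3.3.1; velocities are random in
    [-v_max, v_max].  w, c1, c2, v_max are illustrative choices.
    Returns (global best position, its fitness).
    """
    rng = np.random.default_rng(seed)
    dim = len(x0)
    p = np.tile(np.asarray(x0, float), (n_particles, 1))
    v = rng.uniform(-v_max, v_max, (n_particles, dim))
    pbest = p.copy()                                   # historical optima p*
    pbest_f = np.array([fitness(x) for x in p])
    g = pbest[np.argmin(pbest_f)].copy()               # global optimum g*
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - p) + c2 * r2 * (g - p)
        v = np.clip(v, -v_max, v_max)                  # velocity thresholds
        p = p + v
        f = np.array([fitness(x) for x in p])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = p[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(fitness(g))
```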
Step S3.4: Performing a 3D Measurement
[0141] Step S3.4.1: adjusting the binocular camera of the structured light measuring equipment so that it can clearly capture the measured object, keeping the left and right cameras at the same horizontal position with a certain distance between them, and completing the binocular calibration; adjusting the robot so that it can carry the structured light measuring equipment to perform the 3D measurement and guarantee that the full view of the measured object will be captured;
[0142] Step S3.4.2: after receiving the planned measurement paths from the virtual simulation platform, moving the robot in turn to the measurement poses of the planned measurement paths;
[0143] Step S3.4.3: as shown in
from the coordinate system of the robot's end flange to the robot base coordinate system according to the measurement pose .sub.kP.sub.ij, then obtaining a rigid body transformation matrix:
then obtaining a single-view point cloud .sub.kI.sub.ij of the measured object through the binocular camera of the structured light measuring equipment and transforming it from the camera coordinate system to the robot base coordinate system to obtain a registered single-view point cloud
when registered single-view point clouds of all measurement poses are obtained, the points under the robot base coordinate system form a complete point cloud of the measured object, and the 3D measurement is completed.
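Step S3.4.3 reduces to one matrix chain per point: p.sub.base = H.sub.base,flange · H.sub.cg · p.sub.cam in homogeneous coordinates. A sketch, assuming the optimized hand-eye matrix and the flange-to-base transform are given as 4×4 homogeneous matrices:

```python
import numpy as np

def register_cloud(cloud_cam, H_base_flange, H_cg):
    """Transform a single-view point cloud from the camera frame to the
    robot base frame.

    cloud_cam     : (N, 3) points in the camera frame.
    H_base_flange : rigid transform built from the measurement pose kP_ij.
    H_cg          : optimized hand-eye matrix (camera to end flange).
    Registered clouds from all poses can simply be concatenated to form
    the complete point cloud of the measured object.
    """
    cloud = np.asarray(cloud_cam, float)
    ones = np.ones((cloud.shape[0], 1))
    homog = np.hstack([cloud, ones])                  # (N, 4) homogeneous points
    return (H_base_flange @ H_cg @ homog.T).T[:, :3]
```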
Example 1
[0144] The measured object is a 600 mm×450 mm×200 mm cuboid. Firstly, establishing a 1:1 virtual simulation platform, according to a real measurement environment, on a computer on which an Ubuntu operating system and an open source robot operating system (ROS) have been installed. In the example, the virtual simulation platform is shown in
[0145] In the real measurement environment, the checkerboard (calibration board) is 8 × 11 with square size of 15 mm.
[0146] In the example, firstly, operating the six-DOF robot to carry the binocular camera of the structured light measuring equipment to 9 different measurement poses and taking 9 calibration board images with resolutions of 3000 × 4096. Then, according to the calibration method of Zhang, calculating the 9 extrinsic parameter matrices H.sub.ci′, i′ = 1, 2, ..., 9 of the 9 calibration board images, and recording the pose information of the six-DOF robot and the position P.sub.i′ of the corner in the camera coordinate system. Meanwhile, calculating 9 robot pose matrices H.sub.gi′, i′ = 1, 2, ..., 9. Thus 36 hand-eye calibration equations to be solved are established.
[0147] Then, based on the properties of Kronecker product, transforming the hand-eye calibration equation into a least squares problem, and calculating the hand-eye matrix H.sub.cg by using a singular value decomposition. In the example, the hand-eye matrix H.sub.cg is:
[0148] Where the rotation matrix of the hand-eye matrix H.sub.cg is the matrix of 3 × 3 of the left upper corner, namely:
the translation vector of the hand-eye matrix H.sub.cg is the column vector of 3×1 of the right upper corner, namely:
[0150] The calculated hand-eye matrix H.sub.cg is taken as the initial position of the subsequent particle swarm optimization algorithm. In the example, the positions of all K′ particles in the initial population are the calculated hand-eye matrix H.sub.cg.
[0151] In the example, the optimized hand-eye matrix
is:
[0152] As shown in
[0153] After the optimized hand-eye matrix
is obtained, the six-DOF robot carries the structured light measuring equipment to a measurement pose according to the planned measurement paths to take images of the measured object, and then a single-view point cloud of the measured object is obtained. The measurement poses and corresponding single-view point clouds on the 1.sup.st measurement surface (k=1) of the measured object are shown in
Example 2
[0154] In the example, the measured object is a complex mechanical component. As shown in
[0155] While illustrative embodiments of the invention have been described above, it is, of course, understood that various modifications will be apparent to those of ordinary skill in the art. Such modifications are within the spirit and scope of the invention, which is limited and defined only by the appended claims.