METHOD FOR ROBOT ASSISTED MULTI-VIEW 3D SCANNING MEASUREMENT BASED ON PATH PLANNING

20230339112 · 2023-10-26


Abstract

Robot assisted multi-view 3D scanning measurement based on path planning includes: firstly, establishing a virtual simulation platform to set measurement poses and measurement paths and to evaluate the measurement paths; then, completing a preliminary hand-eye calibration based on the properties of the Kronecker product, and optimizing the preliminary hand-eye calibration by establishing a reprojection error cost function as the fitness function of a particle swarm optimization algorithm; lastly, moving the robot to the measurement poses of the planned measurement paths, obtaining a single-view point cloud of the measured object and transforming it from the camera coordinate system to the robot base coordinate system, based on the optimized hand-eye matrix, to obtain a registered single-view point cloud. When the registered single-view point clouds of all measurement poses have been obtained, the points under the robot base coordinate system form a complete point cloud of the measured object.

Claims

1. A method for robot assisted multi-view 3D scanning measurement based on path planning, comprising:

(1) establishing a virtual simulation platform

1.1) based on a real measurement environment, establishing a virtual simulation platform on a computer with an open source robot operating system (ROS), and importing the description file of a robot into the virtual simulation platform to create a robot which is the same as the robot of the real measurement environment; at the same time, on the virtual simulation platform, installing a structured light measuring equipment and a depth camera, which are the same as those of the real measurement environment, on the fixture of the robot's end flange;

1.2) simulating a measured object to obtain a simulated object, placing the simulated object right in front of the robot and the structured light measuring equipment, and determining multiple measurement surfaces of the simulated object and multiple measurement paths on each measurement surface for image taking of the simulated object by the structured light measuring equipment, where the $i$th measurement path of the $k$th measurement surface is denoted by ${}_kS_i$, $k = 1, 2, \ldots, K$, $i = 1, 2, \ldots, M_k$, $K$ is the number of measurement surfaces, $M_k$ is the number of measurement paths of the $k$th measurement surface, and the $j$th measurement point, namely measurement pose, of the $i$th measurement path of the $k$th measurement surface is denoted by ${}_kP_{ij}$, $j = 1, 2, \ldots, N_k$, where $N_k$ is the number of measurement poses of a measurement path of the $k$th measurement surface; where the view field of the structured light measuring equipment is a rectangle with a length of $m$ cm and a width of $n$ cm, the outer rectangle of the $k$th measurement surface is ${}_ka$ in length and ${}_kb$ in width, and the measurement paths are vertical strips along the length, which satisfy the following constraints: any two adjacent measurement paths have a strip of common area, the number $M_k$ of measurement paths of the $k$th measurement surface is greater than or equal to ${}_ka/m$, and the number $N_k$ of measurement poses of a measurement path of the $k$th measurement surface is greater than or equal to ${}_kb/n$; the $j$th measurement pose ${}_kP_{ij}$ of the $i$th measurement path of the $k$th measurement surface is obtained according to the constraints;

(2) planning measurement paths on the virtual simulation platform

2.1) on the virtual simulation platform, for the $i$th measurement path of the $k$th measurement surface, dragging the end of the robot to a measurement pose ${}_kP_{ij}$, where the measurement pose ${}_kP_{ij}$ is $({}_kPx_{ij}, {}_kPy_{ij}, {}_kPz_{ij}, {}_kRx_{ij}, {}_kRy_{ij}, {}_kRz_{ij})$, $({}_kPx_{ij}, {}_kPy_{ij}, {}_kPz_{ij})$ is the position coordinate of the end of the robot, and $({}_kRx_{ij}, {}_kRy_{ij}, {}_kRz_{ij})$ is the attitude coordinate of the end of the robot;

2.2) taking an RGB-D image of the simulated object at the measurement pose ${}_kP_{ij}$ by the depth camera and converting the RGB-D image into a grayscale image, then filtering out the background of the RGB-D image by using threshold segmentation and extracting the contour from the RGB-D image to obtain a minimum circumscribed rectangle of the simulated object, then traversing the depths of all pixels of the RGB-D image within the minimum circumscribed rectangle to find the shortest distance $d_{\min}$ between the simulated object and the structured light measuring equipment and recording the pixel coordinate $(u, v)$ that corresponds to the shortest distance $d_{\min}$ on the RGB-D image, then judging whether the shortest distance $d_{\min}$ satisfies the measurement range of the structured light measuring equipment: $d_{\min} \in [D - \delta, D + \delta]$, where $D$ is the focal length of the structured light measuring equipment for camera calibration and $\delta$ is the allowable measurement error range of the structured light measuring equipment; if yes, then going to step 2.4), otherwise going to step 2.3);

2.3) obtaining the corresponding spatial coordinate $(x_{(u,v)}, y_{(u,v)}, z_{(u,v)})$ of the pixel coordinate $(u, v)$ through a coordinate transformation, then determining a spatial line according to the spatial coordinate $(x_{(u,v)}, y_{(u,v)}, z_{(u,v)})$ and the position coordinate $({}_kPx_{ij}, {}_kPy_{ij}, {}_kPz_{ij})$ of the end of the robot:

$$\frac{x - {}_kPx_{ij}}{x_{(u,v)} - {}_kPx_{ij}} = \frac{y - {}_kPy_{ij}}{y_{(u,v)} - {}_kPy_{ij}} = \frac{z - {}_kPz_{ij}}{z_{(u,v)} - {}_kPz_{ij}}$$

where $(x, y, z)$ is a spatial coordinate on the spatial line; then, starting from the position coordinate $({}_kPx_{ij}, {}_kPy_{ij}, {}_kPz_{ij})$ and searching along the spatial line, finding a position coordinate $({}_k\tilde{p}x_{ij}, {}_k\tilde{p}y_{ij}, {}_k\tilde{p}z_{ij})$ which satisfies the following measurement conditions:

$$\tilde{d}_{\min} = \sqrt{({}_k\tilde{p}x_{ij} - x_{(u,v)})^2 + ({}_k\tilde{p}y_{ij} - y_{(u,v)})^2 + ({}_k\tilde{p}z_{ij} - z_{(u,v)})^2}, \qquad \tilde{d}_{\min} \in [D - \delta,\ D + \delta]$$

combining the position coordinate $({}_k\tilde{p}x_{ij}, {}_k\tilde{p}y_{ij}, {}_k\tilde{p}z_{ij})$ with the attitude coordinate $({}_kRx_{ij}, {}_kRy_{ij}, {}_kRz_{ij})$ to form a measurement pose $({}_k\tilde{p}x_{ij}, {}_k\tilde{p}y_{ij}, {}_k\tilde{p}z_{ij}, {}_kRx_{ij}, {}_kRy_{ij}, {}_kRz_{ij})$ and updating the measurement pose ${}_kP_{ij}$ with this measurement pose;

2.4) recording the measurement pose ${}_kP_{ij}$, then going to step 2.1) for judging the next measurement pose ${}_kP_{i(j+1)}$ until the shortest distances of all measurement poses of the measurement path ${}_kS_i$ have been judged, then going to step 2.5);

2.5) evaluating the measurement path ${}_kS_i$:

2.5.1) executing the measurement path ${}_kS_i$ on the virtual simulation platform, where the robot moves continuously from the measurement pose ${}_kP_{i1}$ to the measurement pose ${}_kP_{iN_k}$, then sampling and recording the positions ${}_k\dot{P}_{it}$ at equal time intervals during the movement, all positions ${}_k\dot{P}_{it}$, $t = 0, 1, \ldots, T$, forming a motion path ${}_k\dot{S}_i$, where $t$ is the serial number of a sample and $T$ is the number of samples; calculating the shortest distance $l_1$ between the starting position $({}_kPx_{i1}, {}_kPy_{i1}, {}_kPz_{i1})$ and the ending position $({}_kPx_{iN_k}, {}_kPy_{iN_k}, {}_kPz_{iN_k})$ of the measurement path ${}_kS_i$:

$$l_1 = \sqrt{({}_kPx_{i1} - {}_kPx_{iN_k})^2 + ({}_kPy_{i1} - {}_kPy_{iN_k})^2 + ({}_kPz_{i1} - {}_kPz_{iN_k})^2}$$

calculating the motion distance $l_2$ between the starting position $({}_k\dot{P}x_{i1}, {}_k\dot{P}y_{i1}, {}_k\dot{P}z_{i1})$ and the ending position $({}_k\dot{P}x_{iT}, {}_k\dot{P}y_{iT}, {}_k\dot{P}z_{iT})$ of the motion path ${}_k\dot{S}_i$:

$$l_2 = \sum_{t=1}^{T-1} \sqrt{({}_k\dot{P}x_{i(t+1)} - {}_k\dot{P}x_{it})^2 + ({}_k\dot{P}y_{i(t+1)} - {}_k\dot{P}y_{it})^2 + ({}_k\dot{P}z_{i(t+1)} - {}_k\dot{P}z_{it})^2}$$

then obtaining a distance difference $L = l_2 - l_1$ and creating an evaluation function $f_1(L) = (\delta_1 - L)/\delta_1$, $0 \le L \le \delta_1$, where $\delta_1$ is a maximum error threshold;

2.5.2) finding the position ${}_k\dot{P}_{iw}$ of the motion path ${}_k\dot{S}_i$ which has the shortest distance to the barycenter of the simulated object, where the shortest distance is denoted by $l_3$, then creating an evaluation function $f_2(l_3) = (l_3 - \delta_2)/l_3$, where $\delta_2$ is the shortest distance which guarantees no collision between the structured light measuring equipment and the simulated object;

2.5.3) creating an overall evaluation function ${}_kF_i$ for the measurement path ${}_kS_i$, where ${}_kF_i = (0.5 f_1(L) + 0.5 f_2(l_3)) \times 100$, then evaluating the measurement path ${}_kS_i$ according to the overall evaluation function ${}_kF_i$: if ${}_kF_i > g$, the measurement path ${}_kS_i$ has passed the evaluation, then going to step 2.7), otherwise going to step 2.6), where $g$ is a threshold determined by the measurement operator according to the actual measurement scenario, and $0 < g < 100$;

2.6) traversing all measurement poses of the measurement path ${}_kS_i$ to find the measurement pose ${}_kP_{is}$ which has the shortest distance to the position ${}_k\dot{P}_{iw}$, then moving the robot to the measurement pose ${}_kP_{is}$ and dragging the end of the robot to manually increase the shortest distance $d_{\min}$ between the simulated object and the structured light measuring equipment, where the increased shortest distance $d_{\min}$ needs to satisfy the measurement range of the structured light measuring equipment, $d_{\min} \in [D - \delta, D + \delta]$; replacing the measurement pose ${}_kP_{is}$ with the measurement pose after the increase of the shortest distance $d_{\min}$ to complete the correction of the measurement path ${}_kS_i$, then going to step 2.7);

2.7) for each measurement path of each measurement surface, performing step 2.1) to step 2.6) to complete the simulated planning of the measurement paths, converting the planned measurement paths into communication messages that can be recognized by the robot, and sending the communication messages from the virtual simulation platform to the robot of the real measurement environment;

(3) measuring in the real measurement environment

3.1) establishing a hand-eye calibration equation in the real measurement environment: installing a structured light measuring equipment on the fixture of a robot's end flange, operating the robot to move to the $i'$th measurement pose and taking an image of a corner of a checkerboard, namely a calibration board, to obtain a calibration board image, then recording the pose information of the robot and the position $P_{i'}$ of the corner in the camera coordinate system; for $n'$ measurement poses, obtaining $n'$ calibration board images and their corresponding pose information of the robot and positions $P_{i'}$ of the corner in the camera coordinate system; obtaining the rotation matrix $R_{ci'}$ and the translation vector $T_{ci'}$ of the calibration board relative to the binocular camera of the structured light measuring equipment in each calibration board image according to the calibration method of Zhang, and then combining the rotation matrix $R_{ci'}$ and the translation vector $T_{ci'}$ into an extrinsic parameter matrix $H_{ci'}$, $i' = 1, 2, \ldots, n'$; meanwhile, obtaining the rotation matrix $R_{gi'}$ and the translation vector $T_{gi'}$ of the robot's end flange relative to the base of the robot according to the measurement pose, then combining the rotation matrix $R_{gi'}$ and the translation vector $T_{gi'}$ into a robot pose matrix $H_{gi'}$, $i' = 1, 2, \ldots, n'$, where:

$$H_{ci'} = \begin{bmatrix} R_{ci'} & T_{ci'} \\ 0 & 1 \end{bmatrix}, \qquad H_{gi'} = \begin{bmatrix} R_{gi'} & T_{gi'} \\ 0 & 1 \end{bmatrix};$$

establishing a hand-eye calibration equation based on the extrinsic parameter matrices $H_{cu'}$, $H_{cv'}$ and robot pose matrices $H_{gu'}$, $H_{gv'}$ of any two measurement poses:

$$H_{gv'}^{-1} H_{gu'} H_{cg} = H_{cg} H_{cv'} H_{cu'}^{-1}$$

where $u'$, $v'$ are the serial numbers of any two measurement poses, $u' \ne v'$, and:

$$H_{gv'}^{-1} H_{gu'} = \begin{bmatrix} R_{gu',v'} & T_{gu',v'} \\ 0 & 1 \end{bmatrix}, \qquad H_{cg} = \begin{bmatrix} R_{cg} & T_{cg} \\ 0 & 1 \end{bmatrix}, \qquad H_{cv'} H_{cu'}^{-1} = \begin{bmatrix} R_{cu',v'} & T_{cu',v'} \\ 0 & 1 \end{bmatrix};$$

where $R_{gu',v'}$ is the rotation matrix of the matrix $H_{gv'}^{-1} H_{gu'}$, $T_{gu',v'}$ is the translation vector of the matrix $H_{gv'}^{-1} H_{gu'}$, $R_{cg}$ is the rotation matrix of the hand-eye matrix $H_{cg}$, $T_{cg}$ is the translation vector of the hand-eye matrix $H_{cg}$, $R_{cu',v'}$ is the rotation matrix of the matrix $H_{cv'} H_{cu'}^{-1}$, and $T_{cu',v'}$ is the translation vector of the matrix $H_{cv'} H_{cu'}^{-1}$;

3.2) based on the properties of the Kronecker product, transforming the hand-eye calibration equation into a least squares problem, and calculating the hand-eye matrix $H_{cg}$ by using a singular value decomposition: establishing a linear equation set $(R_{gu',v'} \otimes I - I \otimes R_{cu',v'}^T)\,\mathrm{vec}(R_{cg}) = 0$, where $I$ is a unit matrix, $\otimes$ is the Kronecker product operator, and $\mathrm{vec}$ is the vectorization operator; stacking the matrices $(R_{gu',v'} \otimes I - I \otimes R_{cu',v'}^T)$ of all pairs of measurement poses by column to obtain a matrix $R$; performing a singular value decomposition on the matrix $R$ to obtain the matrix $V$, namely the right singular matrix of the matrix $R$, and taking the 9 elements of the last column of the matrix $V$ to form a $3 \times 3$ matrix $\bar{R}_{cg}$; performing a singular value decomposition on the matrix $\bar{R}_{cg}$, i.e. $\bar{R}_{cg} = U_R \Sigma_R V_R^T$, where $U_R$ is the left singular matrix of the matrix $\bar{R}_{cg}$, $\Sigma_R$ is the singular value matrix of the matrix $\bar{R}_{cg}$, and $V_R$ is the right singular matrix of the matrix $\bar{R}_{cg}$, then obtaining the rotation matrix $R_{cg}$ of the hand-eye matrix $H_{cg}$: $R_{cg} = U_R V_R^T$; stacking the matrices $(R_{gu',v'} - I)$ of all pairs of measurement poses by column to obtain a matrix $R_g$, stacking the vectors $(R_{cg} T_{cu',v'} - T_{gu',v'})$ of all pairs of measurement poses by column to obtain a matrix $T'$, then calculating the translation vector of the hand-eye matrix $H_{cg}$: $T_{cg} = R_g^{-1} T'$;

3.3) based on minimizing a reprojection error, optimizing the hand-eye matrix by using a particle swarm optimization algorithm:

3.3.1) creating a particle swarm with a population size of $K'$, where the position and the velocity of the $k'$th particle are denoted by $p_{k'}$ and $v_{k'}$, respectively, then initializing the positions $p_{k'}$ and the velocities $v_{k'}$ of all $K'$ particles:

$$p_{k'} = H_{cg}, \qquad v_{k'} = \mathrm{rand}(v_{\min}, v_{\max})$$

where $k' = 1, 2, \ldots, K'$, $v_{\min}$ and $v_{\max}$ are the lower threshold and the upper threshold of the velocity at each iteration of a particle, and $\mathrm{rand}()$ is the rand function;

3.3.2) establishing a reprojection error cost function as the fitness function of the particle swarm optimization algorithm:

$$\varphi(p_{k'}) = \sum_{i'=1,\ j' \ne i'}^{n'} \left\| P_{i'} - \frac{1}{s_{i'}} K^* \exp\!\left[ p_{k'} H_{gj'} H_{gi'}^{-1} p_{k'}^{-1} \right]_{3 \times 4} \begin{bmatrix} P_{j'} \\ 1 \end{bmatrix} \right\|_2^2$$

where $s_{i'}$ is the scale factor of the $i'$th measurement pose, $K^*$ is the intrinsic parameter matrix of the binocular camera, $\exp[\cdot]_{3 \times 4}$ is an operator choosing the $3 \times 4$ submatrix in the upper left corner of the matrix in the square brackets, $\|\cdot\|_2$ is the 2-norm operator, and $P_{j'}$ is the position of the corner of the checkerboard of any non-$i'$th measurement pose in the camera coordinate system;

3.3.3) according to the fitness function $\varphi(p_{k'})$, finding the position of the current individual extremum $\min\{\varphi(p_{k'}), \varphi(p_{k'}^*)\}$ of each particle and taking it as the historical optimal position $p_{k'}^*$, and finding the position of the global extremum $\min\{\varphi(p_{k'}^*),\ k' = 1, 2, \ldots, K'\}$, namely the position of the particle with the minimum value of the fitness function, and taking it as the global optimal position $g^*$:

$$p_{k'}^* \leftarrow \min\{\varphi(p_{k'}), \varphi(p_{k'}^*)\}, \qquad g^* \leftarrow \min\{\varphi(p_{k'}^*),\ k' = 1, 2, \ldots, K'\}$$

3.3.4) updating the velocity $v_{k'}$ and position $p_{k'}$ of each particle:

$$v_{k'} = \omega v_{k'} + c_1\,\mathrm{rand}(0,1) \times (p_{k'}^* - p_{k'}) + c_2\,\mathrm{rand}(0,1) \times (g^* - p_{k'}), \qquad p_{k'} = p_{k'} + v_{k'}$$

where $\omega$ is an inertia factor and $c_1$, $c_2$ are acceleration factors; returning to step 3.3.3) until a termination condition is reached;

3.3.5) taking the global optimal position $g^*$ as the optimized hand-eye matrix, which is denoted by $H_{cg}^{revised}$;

3.4) performing a 3D measurement

3.4.1) adjusting the binocular camera of the structured light measuring equipment so that it can clearly capture the measured object, keeping the left and right cameras at the same horizontal position with a certain distance between them, and completing the binocular calibration; adjusting the robot so that it can carry the structured light measuring equipment to perform the 3D measurement and guarantee that the full view of the measured object will be taken;

3.4.2) after receiving the planned measurement paths from the virtual simulation platform, moving the robot in turn to the measurement poses of the planned measurement paths;

3.4.3) for the measurement pose ${}_kP_{ij}$, calculating a pose matrix ${}_kH_{ij}^g$ from the coordinate system of the robot's end flange to the robot base coordinate system according to the measurement pose ${}_kP_{ij}$, then obtaining a rigid body transformation matrix:

$${}_kH_{ij}^{pcl} = {}_kH_{ij}^g \cdot H_{cg}^{revised}$$

then obtaining a single-view point cloud ${}_kI_{ij}$ of the measured object through the binocular camera of the structured light measuring equipment and transforming it from the camera coordinate system to the robot base coordinate system to obtain a registered single-view point cloud ${}_kI_{ij}^{homo}$:

$${}_kI_{ij}^{homo} = {}_kH_{ij}^{pcl} \cdot {}_kI_{ij}$$

when the registered single-view point clouds of all measurement poses are obtained, the points under the robot base coordinate system form a complete point cloud of the measured object, and the 3D measurement is completed.

Description

BRIEF DESCRIPTION OF THE DRAWING

[0064] The above and other objectives, features and advantages of the present invention will be more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:

[0065] FIG. 1 is a flow diagram of a method for robot assisted multi-view 3D scanning measurement based on path planning in accordance with the present invention;

[0066] FIG. 2 is a diagram of determining measurement paths and measurement poses;

[0067] FIG. 3 is a flow diagram of measuring in real measurement environment in accordance with the present invention;

[0068] FIG. 4 is a diagram of establishing a hand-eye calibration equation;

[0069] FIG. 5 is a diagram of registering a single-view point cloud;

[0070] FIG. 6 is a diagram of a virtual simulation platform;

[0071] FIG. 7 is a comparison chart of the position distribution of calibration origins after and before optimization;

[0072] FIG. 8 is a diagram of the measurement poses and corresponding single-view point clouds on one measurement surface of the measured object in example 1;

[0073] FIG. 9 is a diagram of the setting of different measurement surfaces of the measured object and corresponding registered single-view point clouds of measurement surfaces in example 1;

[0074] FIG. 10 is an illustration of the complete point cloud of the measured object in example 1;

[0075] FIG. 11(A) is an illustration of the complete point cloud of another measured object in example 2 without optimizing the hand-eye matrix;

[0076] FIG. 11(B) is an illustration of the complete point cloud of another measured object in example 2 with the optimized hand-eye matrix.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

[0077] Hereinafter, preferred embodiments of the present invention will be described with reference to the accompanying drawings. It should be noted that the similar modules are designated by similar reference numerals although they are illustrated in different drawings. Also, in the following description, a detailed description of known functions and configurations incorporated herein will be omitted when it may obscure the subject matter of the present invention.

[0078] In one embodiment of the present invention, as shown in FIG. 1, a method for robot assisted multi-view 3D scanning measurement based on path planning in accordance with the present invention comprises the following steps:

Step S1: Establishing a Virtual Simulation Platform

Step S1.1: Creating a Robot and Installing a Structured Light Measuring Equipment and a Depth Camera

[0079] Based on a real measurement environment, establishing a virtual simulation platform on a computer with an open source robot operating system (ROS), and importing the description file of a robot into the virtual simulation platform to create a robot which is the same as the robot of the real measurement environment; at the same time, on the virtual simulation platform, installing a structured light measuring equipment and a depth camera, which are the same as those of the real measurement environment, on the fixture of the robot's end flange.

[0080] Where the description file of a robot is a URDF (Unified Robot Description Format) file, which includes the parameters of links, joints, kinematics and dynamics, visualization models, and collision detection models.

Step S1.2: Determining Measurement Paths

[0081] FIG. 2 is a diagram of determining measurement paths and measurement poses. In one embodiment of the present invention, as shown in FIG. 2, in order to obtain more complete and more accurate 3D shape characteristics of the measured object and to guarantee that adjacent point clouds have an overlapping area, we need to perform a simulation on the measured object:

[0082] Simulating a measured object to obtain a simulated object, placing the simulated object right in front of the robot and the structured light measuring equipment, and determining multiple measurement surfaces of the simulated object and multiple measurement paths on each measurement surface for image taking of the simulated object by the structured light measuring equipment, where the $i$th measurement path of the $k$th measurement surface is denoted by ${}_kS_i$, $k = 1, 2, \ldots, K$, $i = 1, 2, \ldots, M_k$, $K$ is the number of measurement surfaces, $M_k$ is the number of measurement paths of the $k$th measurement surface, and the $j$th measurement point, namely measurement pose, of the $i$th measurement path of the $k$th measurement surface is denoted by ${}_kP_{ij}$, $j = 1, 2, \ldots, N_k$, where $N_k$ is the number of measurement poses of a measurement path of the $k$th measurement surface.

[0083] Where the view field of the structured light measuring equipment is a rectangle with a length of $m$ cm and a width of $n$ cm, the outer rectangle of the $k$th measurement surface is ${}_ka$ in length and ${}_kb$ in width, and the measurement paths are vertical strips along the length, which satisfy the following constraints: any two adjacent measurement paths have a strip of common area, the number $M_k$ of measurement paths of the $k$th measurement surface is greater than or equal to ${}_ka/m$, and the number $N_k$ of measurement poses of a measurement path of the $k$th measurement surface is greater than or equal to ${}_kb/n$; the $j$th measurement pose ${}_kP_{ij}$ of the $i$th measurement path of the $k$th measurement surface is obtained according to these constraints. As shown in FIG. 2, overlapping areas thus exist in both the horizontal direction and the vertical direction.
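For illustration only (not part of the patent disclosure), the following minimal Python sketch lays out pose centers over one measurement surface under the constraints above; the overlap ratio and the grid layout are assumptions.

```python
# Illustrative sketch: cover an a x b surface with an m x n field of view so
# that neighbouring views overlap both along and across the strips.
import math

def plan_surface_poses(a, b, m, n, overlap=0.2):
    """Return (path index, pose index, x, y) view centers for one surface."""
    step_x = m * (1.0 - overlap)          # stride along the length
    step_y = n * (1.0 - overlap)          # stride along the width
    num_paths = max(math.ceil(a / step_x), math.ceil(a / m))   # M_k >= a/m
    num_poses = max(math.ceil(b / step_y), math.ceil(b / n))   # N_k >= b/n
    poses = []
    for i in range(num_paths):            # one vertical strip per path
        x = min(i * step_x + m / 2.0, a - m / 2.0)
        for j in range(num_poses):
            y = min(j * step_y + n / 2.0, b - n / 2.0)
            poses.append((i, j, x, y))
    return poses

# Example: 60 cm x 45 cm surface, 20 cm x 15 cm field of view.
for i, j, x, y in plan_surface_poses(60, 45, 20, 15):
    print(f"path {i}, pose {j}: center=({x:.1f}, {y:.1f}) cm")
```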

Step S2: Planning Measurement Paths on the Virtual Simulation Platform

Step S2.1: Dragging the End of the Robot to a Measurement Pose

[0084] On the virtual simulation platform, for the $i$th measurement path of the $k$th measurement surface, dragging the end of the robot to a measurement pose ${}_kP_{ij}$, where the measurement pose ${}_kP_{ij}$ is $({}_kPx_{ij}, {}_kPy_{ij}, {}_kPz_{ij}, {}_kRx_{ij}, {}_kRy_{ij}, {}_kRz_{ij})$, $({}_kPx_{ij}, {}_kPy_{ij}, {}_kPz_{ij})$ is the position coordinate of the end of the robot, and $({}_kRx_{ij}, {}_kRy_{ij}, {}_kRz_{ij})$ is the attitude coordinate of the end of the robot.

Step S2.2: Detecting and Judging the Shortest Distance

[0085] Taking an RGB-D image of the simulated object at the measurement pose ${}_kP_{ij}$ by the depth camera and converting the RGB-D image into a grayscale image, then filtering out the background of the RGB-D image by using threshold segmentation and extracting the contour from the RGB-D image to obtain a minimum circumscribed rectangle of the simulated object, then traversing the depths of all pixels of the RGB-D image within the minimum circumscribed rectangle to find the shortest distance $d_{\min}$ between the simulated object and the structured light measuring equipment and recording the pixel coordinate $(u, v)$ that corresponds to the shortest distance $d_{\min}$ on the RGB-D image, then judging whether the shortest distance $d_{\min}$ satisfies the measurement range of the structured light measuring equipment: $d_{\min} \in [D - \delta, D + \delta]$, where $D$ is the focal length of the structured light measuring equipment for camera calibration and $\delta$ is the allowable measurement error range of the structured light measuring equipment; if yes, then going to Step S2.4, otherwise going to Step S2.3.
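A minimal sketch of this step in Python with OpenCV, for illustration; the threshold value and the axis-aligned bounding rectangle (standing in for the minimum circumscribed rectangle) are assumptions, not the patent's implementation.

```python
# Illustrative sketch of step S2.2: segment the object, bound it with a
# rectangle, and scan the depth map inside the rectangle for d_min.
import cv2
import numpy as np

def shortest_distance(rgb, depth, background_thresh=200):
    """Return (d_min, (u, v)): the shortest object-to-sensor distance found in
    the object's bounding rectangle, and the pixel where it occurs.

    rgb   : H x W x 3 uint8 color image from the depth camera
    depth : H x W float32 depth map aligned with the color image
    """
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    # Threshold segmentation filters out the (bright) background.
    _, mask = cv2.threshold(gray, background_thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no object contour found")
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    roi = depth[y:y + h, x:x + w].astype(np.float64)
    roi[roi <= 0] = np.inf                      # ignore invalid depth pixels
    v_off, u_off = np.unravel_index(np.argmin(roi), roi.shape)
    return roi[v_off, u_off], (x + u_off, y + v_off)

def in_measurement_range(d_min, D, delta):
    """Step S2.2's test: d_min must lie in [D - delta, D + delta]."""
    return (D - delta) <= d_min <= (D + delta)
```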

Step S2.3: Adjusting Measurement Pose

[0086] Obtaining the corresponding spatial coordinate $(x_{(u,v)}, y_{(u,v)}, z_{(u,v)})$ of the pixel coordinate $(u, v)$ through a coordinate transformation, then determining a spatial line according to the spatial coordinate $(x_{(u,v)}, y_{(u,v)}, z_{(u,v)})$ and the position coordinate $({}_kPx_{ij}, {}_kPy_{ij}, {}_kPz_{ij})$ of the end of the robot:

[00024]
$$\frac{x - {}_kPx_{ij}}{x_{(u,v)} - {}_kPx_{ij}} = \frac{y - {}_kPy_{ij}}{y_{(u,v)} - {}_kPy_{ij}} = \frac{z - {}_kPz_{ij}}{z_{(u,v)} - {}_kPz_{ij}}$$

[0087] Where (x,y,z) is a spatial coordinate on the spatial line.

[0088] Then, starting from the position coordinate $({}_kPx_{ij}, {}_kPy_{ij}, {}_kPz_{ij})$ and searching along the spatial line, finding a position coordinate $({}_k\tilde{p}x_{ij}, {}_k\tilde{p}y_{ij}, {}_k\tilde{p}z_{ij})$ which satisfies the following measurement conditions:

[00025]
$$\tilde{d}_{\min} = \sqrt{({}_k\tilde{p}x_{ij} - x_{(u,v)})^2 + ({}_k\tilde{p}y_{ij} - y_{(u,v)})^2 + ({}_k\tilde{p}z_{ij} - z_{(u,v)})^2}, \qquad \tilde{d}_{\min} \in [D - \delta,\ D + \delta]$$

[0089] Combining the position coordinate $({}_k\tilde{p}x_{ij}, {}_k\tilde{p}y_{ij}, {}_k\tilde{p}z_{ij})$ with the attitude coordinate $({}_kRx_{ij}, {}_kRy_{ij}, {}_kRz_{ij})$ to form a measurement pose $({}_k\tilde{p}x_{ij}, {}_k\tilde{p}y_{ij}, {}_k\tilde{p}z_{ij}, {}_kRx_{ij}, {}_kRy_{ij}, {}_kRz_{ij})$ and updating the measurement pose ${}_kP_{ij}$ with this measurement pose.
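For illustration, a small geometric sketch of this adjustment: the tool position is slid along the line through the closest object point until the distance equals the calibrated working distance $D$ (setting the new distance exactly to $D$ is one admissible choice inside $[D - \delta, D + \delta]$).

```python
# Illustrative sketch of step S2.3; names are hypothetical, pure geometry.
import numpy as np

def adjust_pose(p_end, p_obj, D):
    """Return a new end position on the line p_obj -> p_end whose distance to
    p_obj equals the calibrated working distance D."""
    p_end, p_obj = np.asarray(p_end, float), np.asarray(p_obj, float)
    direction = p_end - p_obj
    direction /= np.linalg.norm(direction)   # unit vector away from the object
    return p_obj + D * direction             # new distance to p_obj is exactly D

p_new = adjust_pose(p_end=[0.2, 0.1, 0.5], p_obj=[0.2, 0.1, 0.2], D=0.35)
print(p_new)   # -> [0.2, 0.1, 0.55]: 0.35 m from the object along the same line
```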

Step S2.4: Repeating Steps S2.1-S2.3, Until All Measurement Poses of the Measurement Path are Judged

[0090] Recording the measurement pose ${}_kP_{ij}$, then going to Step S2.1 for judging the next measurement pose ${}_kP_{i(j+1)}$ until the shortest distances of all measurement poses of the measurement path ${}_kS_i$ have been judged, then going to Step S2.5.

Step S2.5: Evaluating the Measurement Path ${}_kS_i$

[0091] Step S2.5.1: executing the measurement path ${}_kS_i$ on the virtual simulation platform, where the robot moves continuously from the measurement pose ${}_kP_{i1}$ to the measurement pose ${}_kP_{iN_k}$, then sampling and recording the positions ${}_k\dot{P}_{it}$ at equal time intervals during the movement, all positions ${}_k\dot{P}_{it}$, $t = 0, 1, \ldots, T$, forming a motion path ${}_k\dot{S}_i$, where $t$ is the serial number of a sample and $T$ is the number of samples.

[0092] Calculating the shortest distance $l_1$ between the starting position $({}_kPx_{i1}, {}_kPy_{i1}, {}_kPz_{i1})$ and the ending position $({}_kPx_{iN_k}, {}_kPy_{iN_k}, {}_kPz_{iN_k})$ of the measurement path ${}_kS_i$:

[00026]
$$l_1 = \sqrt{({}_kPx_{i1} - {}_kPx_{iN_k})^2 + ({}_kPy_{i1} - {}_kPy_{iN_k})^2 + ({}_kPz_{i1} - {}_kPz_{iN_k})^2}$$

[0093] Calculating the motion distance $l_2$ between the starting position $({}_k\dot{P}x_{i1}, {}_k\dot{P}y_{i1}, {}_k\dot{P}z_{i1})$ and the ending position $({}_k\dot{P}x_{iT}, {}_k\dot{P}y_{iT}, {}_k\dot{P}z_{iT})$ of the motion path ${}_k\dot{S}_i$:

[00027]
$$l_2 = \sum_{t=1}^{T-1} \sqrt{({}_k\dot{P}x_{i(t+1)} - {}_k\dot{P}x_{it})^2 + ({}_k\dot{P}y_{i(t+1)} - {}_k\dot{P}y_{it})^2 + ({}_k\dot{P}z_{i(t+1)} - {}_k\dot{P}z_{it})^2}$$

[0094] Then obtaining a distance difference $L = l_2 - l_1$ and creating an evaluation function $f_1(L) = (\delta_1 - L)/\delta_1$, $0 \le L \le \delta_1$, where $\delta_1$ is a maximum error threshold.

[0095] Step S2.5.2: finding the position ${}_k\dot{P}_{iw}$ of the motion path ${}_k\dot{S}_i$ which has the shortest distance to the barycenter of the simulated object, where the shortest distance is denoted by $l_3$, then creating an evaluation function $f_2(l_3) = (l_3 - \delta_2)/l_3$, where $\delta_2$ is the shortest distance which guarantees no collision between the structured light measuring equipment and the simulated object.

[0096] Step S2.5.3: creating an overall evaluation function ${}_kF_i$ for the measurement path ${}_kS_i$, where ${}_kF_i = (0.5 f_1(L) + 0.5 f_2(l_3)) \times 100$, then evaluating the measurement path ${}_kS_i$ according to the overall evaluation function ${}_kF_i$: if ${}_kF_i > g$, the measurement path ${}_kS_i$ has passed the evaluation, then going to Step S2.7, otherwise going to Step S2.6, where $g$ is a threshold determined by the measurement operator according to the actual measurement scenario, and $0 < g < 100$.
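For illustration, a compact Python sketch of the whole evaluation of Steps S2.5.1 to S2.5.3; the sampled positions and thresholds in the usage example are invented.

```python
# Illustrative sketch of the path evaluation f1, f2 and kF_i.
import numpy as np

def evaluate_path(planned, sampled, barycenter, delta1, delta2):
    """Score a measurement path out of 100.

    planned    : N x 3 planned measurement positions (start ... end)
    sampled    : T x 3 positions sampled at equal time intervals during execution
    barycenter : (3,) barycenter of the simulated object
    delta1     : maximum allowed excess motion distance (delta_1)
    delta2     : shortest collision-free distance to the object (delta_2)
    """
    planned, sampled = np.asarray(planned, float), np.asarray(sampled, float)
    l1 = np.linalg.norm(planned[-1] - planned[0])                # straight-line length
    l2 = np.linalg.norm(np.diff(sampled, axis=0), axis=1).sum()  # executed length
    L = l2 - l1
    f1 = (delta1 - L) / delta1                                   # smoothness term
    l3 = np.linalg.norm(sampled - barycenter, axis=1).min()      # closest approach
    f2 = (l3 - delta2) / l3                                      # clearance term
    return (0.5 * f1 + 0.5 * f2) * 100.0

score = evaluate_path(planned=[[0, 0, 0.5], [0, 0.4, 0.5]],
                      sampled=[[0, 0, 0.5], [0, 0.2, 0.51], [0, 0.4, 0.5]],
                      barycenter=[0, 0.2, 0.0], delta1=0.05, delta2=0.3)
print(f"path score: {score:.1f} (pass if score > g)")
```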

Step S2.6: Correcting the Measurement Path

[0097] Traversing all measurement poses of the measurement path ${}_kS_i$ to find the measurement pose ${}_kP_{is}$ which has the shortest distance to the position ${}_k\dot{P}_{iw}$, then moving the robot to the measurement pose ${}_kP_{is}$ and dragging the end of the robot to manually increase the shortest distance $d_{\min}$ between the simulated object and the structured light measuring equipment, where the increased shortest distance $d_{\min}$ needs to satisfy the measurement range of the structured light measuring equipment, $d_{\min} \in [D - \delta, D + \delta]$; replacing the measurement pose ${}_kP_{is}$ with the measurement pose after the increase of the shortest distance $d_{\min}$ to complete the correction of the measurement path ${}_kS_i$, then going to Step S2.7.

Step S2.7: Repeating Step S2.1 - Step S2.6 for Each Measurement Path, Then Sending the Planned Measurement Paths to the Robot of Real Measurement Environment

[0098] For each measurement path of each measurement surface, performing Step S2.1 to Step S2.6 to complete the simulated planning of the measurement paths, converting the planned measurement paths into communication messages that can be recognized by the robot, and sending the communication messages from the virtual simulation platform to the robot of the real measurement environment.

Step S3: Measuring in Real Measurement Environment

[0099] As shown in FIG. 3, the measuring in real measurement environment comprises the following steps:

Step S3.1: Establishing a Hand-Eye Calibration Equation

[0100] In the real measurement environment, as shown in FIG. 4, installing a structured light measuring equipment on the fixture of a robot's end flange, operating the robot to move to the $i'$th measurement pose and taking an image of a corner of a checkerboard, namely a calibration board, to obtain a calibration board image, then recording the pose information of the robot and the position $P_{i'}$ of the corner in the camera coordinate system; for $n'$ measurement poses, obtaining $n'$ calibration board images and their corresponding pose information of the robot and positions $P_{i'}$ of the corner in the camera coordinate system.

[0101] Obtaining the rotation matrix $R_{ci'}$ and the translation vector $T_{ci'}$ of the calibration board relative to the binocular camera of the structured light measuring equipment in each calibration board image according to the calibration method of Zhang, and then combining the rotation matrix $R_{ci'}$ and the translation vector $T_{ci'}$ into an extrinsic parameter matrix $H_{ci'}$, $i' = 1, 2, \ldots, n'$; meanwhile, obtaining the rotation matrix $R_{gi'}$ and the translation vector $T_{gi'}$ of the robot's end flange relative to the base of the robot according to the measurement pose, then combining the rotation matrix $R_{gi'}$ and the translation vector $T_{gi'}$ into a robot pose matrix $H_{gi'}$, $i' = 1, 2, \ldots, n'$, where:

[00028]
$$H_{ci'} = \begin{bmatrix} R_{ci'} & T_{ci'} \\ 0 & 1 \end{bmatrix}, \qquad H_{gi'} = \begin{bmatrix} R_{gi'} & T_{gi'} \\ 0 & 1 \end{bmatrix}$$
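For illustration, assembling these $4 \times 4$ matrices in Python; the xyz Euler convention for the robot's attitude coordinates is an assumption.

```python
# Illustrative sketch: build H_ci' and H_gi' from rotation + translation.
import numpy as np
from scipy.spatial.transform import Rotation

def homogeneous(R, T):
    """Stack a 3x3 rotation R and a 3-vector translation T into a 4x4 matrix."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = np.ravel(T)
    return H

def robot_pose_matrix(x, y, z, rx, ry, rz):
    """H_gi': end flange relative to the robot base, from a recorded pose.
    rx, ry, rz are in radians; the 'xyz' Euler order is assumed here."""
    R = Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()
    return homogeneous(R, [x, y, z])
```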

[0102] Since the relative pose relation between the base of the robot and the checkerboard (calibration board) is constant and the relative pose relation between the binocular camera and the robot's end flange is constant, combining the coordinate transformation relation between the extrinsic parameter matrix of the binocular camera and the robot pose matrix, we can obtain the following relation for any two measurement poses:

[00029]
$$H_{gu'} H_{cg} H_{cu'} = H_{BW} = H_{gv'} H_{cg} H_{cv'}$$

[0103] Namely:

[00030]
$$H_{gu'} H_{cg} H_{cu'} = H_{gv'} H_{cg} H_{cv'}$$

[0104] Performing a matrix transformation, namely left-multiplying both sides by the matrix $H_{gv'}^{-1}$ and right-multiplying both sides by the matrix $H_{cu'}^{-1}$, we can obtain the following equation:

[00033]
$$H_{gv'}^{-1} H_{gu'} H_{cg} = H_{cg} H_{cv'} H_{cu'}^{-1}$$

[0105] Namely, establishing a hand-eye calibration equation based on the extrinsic parameter matrices $H_{cu'}$, $H_{cv'}$ and robot pose matrices $H_{gu'}$, $H_{gv'}$ of any two measurement poses:

[00034]
$$H_{gv'}^{-1} H_{gu'} H_{cg} = H_{cg} H_{cv'} H_{cu'}^{-1}$$

where $u'$, $v'$ are the serial numbers of any two measurement poses, $u' \ne v'$, and:

[00035]
$$H_{gv'}^{-1} H_{gu'} = \begin{bmatrix} R_{gu',v'} & T_{gu',v'} \\ 0 & 1 \end{bmatrix}, \qquad H_{cg} = \begin{bmatrix} R_{cg} & T_{cg} \\ 0 & 1 \end{bmatrix}, \qquad H_{cv'} H_{cu'}^{-1} = \begin{bmatrix} R_{cu',v'} & T_{cu',v'} \\ 0 & 1 \end{bmatrix};$$

where $R_{gu',v'}$ and $T_{gu',v'}$ are the rotation matrix and the translation vector of the matrix $H_{gv'}^{-1} H_{gu'}$, $R_{cg}$ and $T_{cg}$ are the rotation matrix and the translation vector of the hand-eye matrix $H_{cg}$, and $R_{cu',v'}$ and $T_{cu',v'}$ are the rotation matrix and the translation vector of the matrix $H_{cv'} H_{cu'}^{-1}$.

[0106] Letting

[00040]
$$A = H_{gv'}^{-1} H_{gu'}, \qquad B = H_{cv'} H_{cu'}^{-1}, \qquad X = H_{cg},$$

the hand-eye calibration equation can be expressed as:

[00041]
$$AX = XB$$

so that $C_{n'}^2$ hand-eye calibration equations can be established from the $n'$ sets of calibration data.
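For illustration, enumerating the $C_{n'}^2$ pose pairs and forming $A$ and $B$ for each pair might look as follows in Python.

```python
# Illustrative sketch: one AX = XB equation per unordered pose pair.
from itertools import combinations
import numpy as np

def calibration_pairs(H_g, H_c):
    """Yield (A, B) for each pose pair, with A = H_gv'^-1 H_gu' and
    B = H_cv' H_cu'^-1; H_g, H_c are lists of 4x4 matrices."""
    for u, v in combinations(range(len(H_g)), 2):
        A = np.linalg.inv(H_g[v]) @ H_g[u]
        B = H_c[v] @ np.linalg.inv(H_c[u])
        yield A, B
```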

Step S3.2: Calculating the Hand-Eye Matrix by Using a Singular Value Decomposition

[0107] Based on the properties of the Kronecker product, transforming the hand-eye calibration equation into a least squares problem, and calculating the hand-eye matrix $H_{cg}$ by using a singular value decomposition.

[0108] Based on the properties of the Kronecker product, expanding the hand-eye calibration equation $AX = XB$, a homogeneous equation can be obtained:

[00043]
$$AX = XB \;\Rightarrow\; H_{gv'}^{-1} H_{gu'} H_{cg} = H_{cg} H_{cv'} H_{cu'}^{-1} \;\Rightarrow\; \begin{bmatrix} R_{gu',v'} & T_{gu',v'} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R_{cg} & T_{cg} \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R_{cg} & T_{cg} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R_{cu',v'} & T_{cu',v'} \\ 0 & 1 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} R_{gu',v'} R_{cg} & R_{gu',v'} T_{cg} + T_{gu',v'} \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R_{cg} R_{cu',v'} & R_{cg} T_{cu',v'} + T_{cg} \\ 0 & 1 \end{bmatrix}$$

[0109] Namely, Formula 1 and Formula 2:

[00044]
$$R_{gu',v'} R_{cg} = R_{cg} R_{cu',v'} \qquad \text{(Formula 1)}$$
$$R_{gu',v'} T_{cg} + T_{gu',v'} = R_{cg} T_{cu',v'} + T_{cg} \qquad \text{(Formula 2)}$$

[0110] Where $R_{cu',v'}$, $R_{cg}$, $R_{gu',v'}$ are rotation matrices; they belong to the special orthogonal group, which is closed under multiplication.

[0111] Performing the following transformations on Formula 1:

[00045]
$$R_{gu',v'} R_{cg} I = I R_{cg} R_{cu',v'}$$
$$\operatorname{vec}(R_{gu',v'} R_{cg} I) = \operatorname{vec}(I R_{cg} R_{cu',v'})$$
$$(R_{gu',v'} \otimes I)\operatorname{vec}(R_{cg}) = (I \otimes R_{cu',v'}^T)\operatorname{vec}(R_{cg})$$
$$(R_{gu',v'} \otimes I - I \otimes R_{cu',v'}^T)\operatorname{vec}(R_{cg}) = 0$$

[0112] We then obtain:

[00046]
$$(R_{gu',v'} \otimes I - I \otimes R_{cu',v'}^T)\operatorname{vec}(R_{cg}) = 0$$

[0113] Where $I$ is a unit matrix, $\otimes$ is the Kronecker product operator, and $\operatorname{vec}$ is the vectorization operator.

[0114] So a linear equation of the form $Cx = 0$ is obtained; thus we can transform the linear equation into a least squares problem by using singular value decomposition (SVD). For:

[00047]
$$R_{gu',v'} \otimes I - I \otimes R_{cu',v'}^T = U \Sigma V^T, \qquad U^T U = V^T V = I, \qquad \Sigma = \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_9), \quad \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_9 \ge 0$$

we can obtain the following least squares problem:

[00048]
$$\min \left\| (R_{gu',v'} \otimes I - I \otimes R_{cu',v'}^T)\operatorname{vec}(R_{cg}) \right\|_2^2 = \min \left\| U \Sigma V^T \operatorname{vec}(R_{cg}) \right\|_2^2 = \min \left\| \Sigma V^T \operatorname{vec}(R_{cg}) \right\|_2^2 \quad \text{s.t.}\ \left\| \operatorname{vec}(R_{cg}) \right\|_2 = 1$$

[0115] We then obtain:

[00049]
$$\min \left\| \Sigma V^T \operatorname{vec}(R_{cg}) \right\|_2^2 \quad \text{s.t.}\ \left\| \operatorname{vec}(R_{cg}) \right\|_2 = 1$$

[0116] Now the primal problem is converted into:

[00050]
$$\min \left( V^T \operatorname{vec}(R_{cg}) \right)^T \Sigma^T \Sigma \left( V^T \operatorname{vec}(R_{cg}) \right) = \min\, y^T \Sigma^T \Sigma y$$

[0117] Where $y = V^T \operatorname{vec}(R_{cg})$ and

[00051]
$$y^T \Sigma^T \Sigma y = \sigma_1^2 y_1^2 + \sigma_2^2 y_2^2 + \cdots + \sigma_9^2 y_9^2, \qquad \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_9 \ge 0.$$

[0118] Since there is the constraint $\|y\|_2 = 1$, the minimizing solution is $y = [0\ 0\ \cdots\ 1]^T$; namely, the solution of $Cx = 0$ is the last column of the matrix $V$:

[00053]
$$\operatorname{vec}(R_{cg}) = V y = \begin{bmatrix} v_1 & v_2 & \cdots & v_9 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix} = v_9$$

[0119] Therefore, the calculation of the hand-eye matrix $H_{cg}$ in the present invention is: [0120] firstly, establishing a linear equation set $(R_{gu',v'} \otimes I - I \otimes R_{cu',v'}^T)\,\operatorname{vec}(R_{cg}) = 0$, where $I$ is a unit matrix, $\otimes$ is the Kronecker product operator, and $\operatorname{vec}$ is the vectorization operator; [0121] stacking the matrices $(R_{gu',v'} \otimes I - I \otimes R_{cu',v'}^T)$ of all pairs of measurement poses by column to obtain a matrix $R$; [0122] performing a singular value decomposition on the matrix $R$ to obtain the matrix $V$, namely the right singular matrix of the matrix $R$, and taking the 9 elements of the last column of the matrix $V$ to form a $3 \times 3$ matrix $\bar{R}_{cg}$; [0123] performing a singular value decomposition on the matrix $\bar{R}_{cg}$, i.e. $\bar{R}_{cg} = U_R \Sigma_R V_R^T$, where $U_R$ is the left singular matrix of the matrix $\bar{R}_{cg}$, $\Sigma_R$ is the singular value matrix of the matrix $\bar{R}_{cg}$, and $V_R$ is the right singular matrix of the matrix $\bar{R}_{cg}$, then obtaining the rotation matrix $R_{cg}$ of the hand-eye matrix $H_{cg}$: $R_{cg} = U_R V_R^T$.

[0124] So the rotation matrix $R_{cg}$ can be calculated from Formula 1. In order to guarantee unit orthogonality, the rotation matrix $R_{cg}$ is orthogonalized by adopting Rodrigues' rotation formula, so as to eliminate the influence of measurement noise. In this procedure, $\Sigma_R$ can reflect the quality of the calibration to a certain extent: for a good calibration, all elements of $\Sigma_R$ should be very close, or even completely equal.

[0125] Lastly, through the above-mentioned Formula 2, we can obtain:

[00056]
$$R_{gu',v'} T_{cg} + T_{gu',v'} = R_{cg} T_{cu',v'} + T_{cg} \;\Rightarrow\; (R_{gu',v'} - I) T_{cg} = R_{cg} T_{cu',v'} - T_{gu',v'}$$

[0126] Stacking the matrices $(R_{gu',v'} - I)$ of all pairs of measurement poses by column to obtain a matrix $R_g$, and stacking the vectors $(R_{cg} T_{cu',v'} - T_{gu',v'})$ of all pairs of measurement poses by column to obtain a matrix $T'$, then calculating the translation vector of the hand-eye matrix $H_{cg}$: $T_{cg} = R_g^{-1} T'$.
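Putting paragraphs [0120] to [0126] together, a condensed Python sketch of the SVD-based solve (using NumPy; the sign correction for a proper rotation is an added safeguard, and a least squares solve stands in for $R_g^{-1}$ since the stacked system is overdetermined):

```python
# Illustrative sketch of step S3.2: hand-eye solve from (A, B) pose pairs.
import numpy as np

def solve_hand_eye(pairs):
    AB = [(A, B) for A, B in pairs]     # from calibration_pairs(...) above
    I = np.eye(3)
    rows, rg_rows, t_rows = [], [], []
    for A, B in AB:
        Rg, Rc = A[:3, :3], B[:3, :3]
        rows.append(np.kron(Rg, I) - np.kron(I, Rc.T))   # Rg (x) I - I (x) Rc^T
    C = np.vstack(rows)                                  # stack all 9x9 blocks
    _, _, Vt = np.linalg.svd(C)
    R_cg = Vt[-1].reshape(3, 3)          # 9 elements of V's last column
    U, _, VtR = np.linalg.svd(R_cg)      # orthogonalize: R_cg = U_R V_R^T
    R_cg = U @ VtR
    if np.linalg.det(R_cg) < 0:          # added safeguard: proper rotation
        R_cg = -R_cg
    for A, B in AB:
        rg_rows.append(A[:3, :3] - np.eye(3))            # (Rg - I)
        t_rows.append(R_cg @ B[:3, 3] - A[:3, 3])        # (R_cg T_c - T_g)
    T_cg, *_ = np.linalg.lstsq(np.vstack(rg_rows), np.hstack(t_rows), rcond=None)
    H_cg = np.eye(4)
    H_cg[:3, :3], H_cg[:3, 3] = R_cg, T_cg
    return H_cg

# Usage (illustrative): H_cg = solve_hand_eye(calibration_pairs(H_g_list, H_c_list))
```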

[0127] Thus, the rotation and translation from the binocular camera coordinate system to the robot's end flange coordinate system, namely the hand-eye matrix $H_{cg}$, is calculated and taken as the initial position of the subsequent particle swarm optimization algorithm.

Step S3.3: Based on Minimizing a Reprojection Error, Optimizing the Hand-Eye Matrix by Using a Particle Swarm Optimization Algorithm

Step S3.3.1: Creating a Particle Swarm and Taking the Hand-Eye Matrix as Initial Positions of All Particles

[0128] Creating a particle swarm with population size of K′, where the position and the velocity of the k′.sup.th particle are denoted by p.sub.k′ and v.sub.k′, respectively, then initializing the positions p.sub.k′ and the velocities v.sub.k′ of all K′ particles:

[00057]
$$p_{k'} = H_{cg}$$

[00058]
$$v_{k'} = \operatorname{rand}(v_{\min}, v_{\max})$$

where $k' = 1, 2, \ldots, K'$, $v_{\min}$ and $v_{\max}$ are the lower threshold and the upper threshold of the velocity at each iteration of a particle, and $\operatorname{rand}()$ is the rand function;

Step S3.3.2: Establishing a Reprojection Error Cost Function as the Fitness Function of the Particle Swarm Optimization Algorithm

[00059]
$$\varphi(p_{k'}) = \sum_{i'=1,\ j' \ne i'}^{n'} \left\| P_{i'} - \frac{1}{s_{i'}} K^* \exp\!\left[ p_{k'} H_{gj'} H_{gi'}^{-1} p_{k'}^{-1} \right]_{3 \times 4} \begin{bmatrix} P_{j'} \\ 1 \end{bmatrix} \right\|_2^2$$

where $s_{i'}$ is the scale factor of the $i'$th measurement pose, $K^*$ is the intrinsic parameter matrix of the binocular camera, $\exp[\cdot]_{3 \times 4}$ is an operator choosing the $3 \times 4$ submatrix in the upper left corner of the matrix in the square brackets, $\|\cdot\|_2$ is the 2-norm operator, and $P_{j'}$ is the position of the corner of the checkerboard of any non-$i'$th measurement pose in the camera coordinate system.
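For illustration, the fitness function might be evaluated as follows in Python; the homogeneous corner coordinate and the input containers are assumptions.

```python
# Illustrative sketch of the reprojection-error fitness of step S3.3.2;
# exp[.]_{3x4} is realized by slicing the upper-left 3x4 block.
import numpy as np

def fitness(X, H_g, P_cam, K_star, s):
    """Total reprojection error of a candidate hand-eye matrix X.

    H_g   : list of 4x4 robot pose matrices H_gi'
    P_cam : list of 3-vectors P_i' (corner position per pose, camera frame)
    K_star: 3x3 intrinsic matrix; s: list of scale factors s_i'
    """
    X_inv = np.linalg.inv(X)
    err = 0.0
    for i in range(len(H_g)):
        for j in range(len(H_g)):
            if j == i:
                continue
            M = (X @ H_g[j] @ np.linalg.inv(H_g[i]) @ X_inv)[:3, :4]
            P_j_h = np.append(P_cam[j], 1.0)        # homogeneous corner [P_j'; 1]
            proj = (K_star @ (M @ P_j_h)) / s[i]    # reproject into view i'
            err += np.sum((P_cam[i] - proj) ** 2)   # squared 2-norm
    return err
```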

[0129] $P = [x, y, z]^T$ is the position of a corner of the checkerboard (calibration board) in the world coordinate system $W$, $B$ is the robot base coordinate system, and $P_{j'}$ is the position of the corner of the checkerboard of the $j'$th measurement pose in the camera coordinate system. Their relationship can be expressed as:

[00060]
$$\begin{bmatrix} P_{j'} \\ 1 \end{bmatrix} = H_{cj'} \begin{bmatrix} P \\ 1 \end{bmatrix}$$

[0130] As shown in FIG. 2, $P_{i'}$ is the position of the corner of the checkerboard of the $i'$th measurement pose in the camera coordinate system. Thus we can transform the position $P_{j'}$ into the position $P_{i'j'}$ under the $i'$th measurement pose, according to the following hand-eye relationship:

[00061]
$$H_{gi'} X H_{ci'} = H_{gj'} X H_{cj'} \;\Rightarrow\; H_{ci'} H_{cj'}^{-1} = X H_{gj'} H_{gi'}^{-1} X^{-1}$$

[0131] We can obtain the following expression of the position $P_{i'j'}$:

[00062]
$$\begin{bmatrix} P_{i'j'} \\ 1 \end{bmatrix} = H_{ci'} H_{cj'}^{-1} \begin{bmatrix} P_{j'} \\ 1 \end{bmatrix} = X H_{gj'} H_{gi'}^{-1} X^{-1} \begin{bmatrix} P_{j'} \\ 1 \end{bmatrix}$$

where $X = H_{cg}$.

[0132] We can project the position $P_{i'j'}$ to the image of the $i'$th measurement pose by using the calibration parameters of the binocular camera and obtain an image coordinate $P^*_{i'j'}$, namely:

[00065]
$$P^*_{i'j'} = \frac{1}{s_{i'}} K^* P_{i'j'} \;\Rightarrow\; P^*_{i'j'} = \frac{1}{s_{i'}} K^* \exp\!\left[ X B X^{-1} \right]_{3 \times 4} \begin{bmatrix} P_{j'} \\ 1 \end{bmatrix}$$

where $B = H_{gj'} H_{gi'}^{-1}$, $s_{i'}$ is the scale factor of the $i'$th measurement pose, $K^*$ is the intrinsic parameter matrix of the binocular camera, and $\exp[\cdot]_{3 \times 4}$ is an operator choosing the $3 \times 4$ submatrix in the upper left corner of the matrix in the square brackets.

[0133] Denoting the reprojection error of the position $P_{i'j'}$ as $err$:

[00066]
$$err = \left\| P_{i'} - P^*_{i'j'} \right\|_2^2$$

[0134] Substituting the expression of the position $P_{i'j'}$, we can obtain:

[00067]
$$err = \left\| P_{i'} - \frac{1}{s_{i'}} K^* \exp\!\left[ X B X^{-1} \right]_{3 \times 4} \begin{bmatrix} P_{j'} \\ 1 \end{bmatrix} \right\|_2^2$$

[0135] Generalizing to the $n'$ calibration board images, we can obtain a reprojection error cost function:

[00068]
$$err^* = \sum_{i'=1,\ j' \ne i'}^{n'} \left\| P_{i'} - \frac{1}{s_{i'}} K^* \exp\!\left[ X B X^{-1} \right]_{3 \times 4} \begin{bmatrix} P_{j'} \\ 1 \end{bmatrix} \right\|_2^2$$

[0136] Taking the reprojection error cost function $err^*$ as the fitness function $\varphi(p_{k'})$ of the particle swarm optimization algorithm and performing the iteration, we can obtain the optimized hand-eye matrix $H_{cg}^{revised}$.

[0137] Step S3.3.3: according to the fitness function $\varphi(p_{k'})$, finding the position of the current individual extremum $\min\{\varphi(p_{k'}), \varphi(p_{k'}^*)\}$ of each particle and taking it as the historical optimal position $p_{k'}^*$, and finding the position of the global extremum $\min\{\varphi(p_{k'}^*),\ k' = 1, 2, \ldots, K'\}$, namely the position of the particle with the minimum value of the fitness function, and taking it as the global optimal position $g^*$:

[00072]
$$p_{k'}^* \leftarrow \min\{\varphi(p_{k'}), \varphi(p_{k'}^*)\}, \qquad g^* \leftarrow \min\{\varphi(p_{k'}^*),\ k' = 1, 2, \ldots, K'\}$$

[0138] Step S3.3.4: updating the velocity $v_{k'}$ and position $p_{k'}$ of each particle:

[00073]
$$v_{k'} = \omega v_{k'} + c_1\,\operatorname{rand}(0,1) \times (p_{k'}^* - p_{k'}) + c_2\,\operatorname{rand}(0,1) \times (g^* - p_{k'}), \qquad p_{k'} = p_{k'} + v_{k'}$$

where $\omega$ is an inertia factor and $c_1$, $c_2$ are acceleration factors;

[0139] returning to Step S3.3.3, until a termination condition is reached;

[0140] Step S3.3.5: taking the global optimal position $g^*$ as the optimized hand-eye matrix, which is denoted by $H_{cg}^{revised}$.
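For illustration, a self-contained Python sketch of the PSO loop of Steps S3.3.1 to S3.3.5 over a flattened copy of the hand-eye matrix; the swarm size, bounds and hyper-parameters are invented, and a production version would re-orthogonalize the rotation part after each update.

```python
# Illustrative sketch: PSO over the 16 entries of the 4x4 hand-eye matrix.
import numpy as np

def pso_optimize(H_cg, cost, K=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                 v_min=-1e-3, v_max=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    dim = 16
    p = np.tile(H_cg.reshape(-1), (K, 1))            # all particles start at H_cg
    v = rng.uniform(v_min, v_max, (K, dim))          # random initial velocities
    p_best = p.copy()
    p_best_val = np.array([cost(x.reshape(4, 4)) for x in p])
    g_best = p_best[np.argmin(p_best_val)].copy()    # global optimum g*
    g_best_val = p_best_val.min()
    for _ in range(iters):
        r1, r2 = rng.random((K, 1)), rng.random((K, 1))
        v = w * v + c1 * r1 * (p_best - p) + c2 * r2 * (g_best - p)
        p = p + v
        vals = np.array([cost(x.reshape(4, 4)) for x in p])
        improved = vals < p_best_val
        p_best[improved], p_best_val[improved] = p[improved], vals[improved]
        if vals.min() < g_best_val:
            g_best, g_best_val = p[np.argmin(vals)].copy(), vals.min()
    return g_best.reshape(4, 4)                      # H_cg^revised

# Usage (illustrative), with the fitness sketch above:
# H_rev = pso_optimize(H_cg, lambda X: fitness(X, H_g, P_cam, K_star, s))
```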

Step S3.4: Performing a 3D Measurement

[0141] Step S3.4.1: adjusting the binocular camera of the structured light measuring equipment so that it can clearly capture the measured object, keeping the left and right cameras at the same horizontal position with a certain distance between them, and completing the binocular calibration; adjusting the robot so that it can carry the structured light measuring equipment to perform the 3D measurement and guarantee that the full view of the measured object will be taken;

[0142] Step S3.4.2: after receiving the planned measurement paths from the virtual simulation platform, moving the robot in turn to the measurement poses of the planned measurement paths;

[0143] Step S3.4.3: as shown in FIG. 5, for the measurement pose ${}_kP_{ij}$, calculating a pose matrix ${}_kH_{ij}^g$ from the coordinate system of the robot's end flange to the robot base coordinate system according to the measurement pose ${}_kP_{ij}$, then obtaining a rigid body transformation matrix:

[00076]
$${}_kH_{ij}^{pcl} = {}_kH_{ij}^g \cdot H_{cg}^{revised}$$

then obtaining a single-view point cloud ${}_kI_{ij}$ of the measured object through the binocular camera of the structured light measuring equipment and transforming it from the camera coordinate system to the robot base coordinate system to obtain a registered single-view point cloud ${}_kI_{ij}^{homo}$:

[00078]
$${}_kI_{ij}^{homo} = {}_kH_{ij}^{pcl} \cdot {}_kI_{ij}$$

when the registered single-view point clouds of all measurement poses are obtained, the points under the robot base coordinate system form a complete point cloud of the measured object, and the 3D measurement is completed.
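For illustration, the registration of one single-view cloud in Python:

```python
# Illustrative sketch of step S3.4.3: move a camera-frame cloud into the base frame.
import numpy as np

def register_cloud(cloud_cam, H_g, H_cg_revised):
    """cloud_cam: N x 3 points in the camera frame -> N x 3 points in the base frame."""
    H_pcl = H_g @ H_cg_revised                   # rigid transform kH_ij^pcl
    homo = np.hstack([cloud_cam, np.ones((len(cloud_cam), 1))])  # N x 4
    return (H_pcl @ homo.T).T[:, :3]

# Accumulating all registered views yields the complete point cloud, e.g.:
# complete = np.vstack([register_cloud(c, Hg, H_rev) for c, Hg in views])
```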

Example 1

[0144] The measured object is a 600 mm × 450 mm × 200 mm cuboid. Firstly, a 1:1 virtual simulation platform is established, according to the real measurement environment, on a computer on which an Ubuntu operating system and the open source robot operating system (ROS) have been installed. In the example, the virtual simulation platform is shown in FIG. 6. A structured light measuring equipment and a depth camera, which are the same as those of the real measurement environment, are installed on the fixture of the robot's end flange. The robot is a six degrees of freedom (DOF) robot. Then, measurement paths are planned on the virtual simulation platform according to the method described in Step S2. After one measurement surface is completed, another measurement surface is selected and processed according to the method described in Step S2, until all measurement paths of all measurement surfaces are planned. The messages of the planned measurement paths are sent to the six DOF robot of the real measurement environment.

[0145] In the real measurement environment, the checkerboard (calibration board) has 8 × 11 squares with a square size of 15 mm.

[0146] In the example, firstly, the six DOF robot is operated to carry the binocular camera of the structured light measuring equipment to 9 different measurement poses, and 9 calibration board images with resolutions of 3000 × 4096 are taken. Then, according to the calibration method of Zhang, the 9 extrinsic parameter matrices $H_{ci'}$, $i' = 1, 2, \ldots, 9$, of the 9 calibration board images are calculated, and the pose information of the six DOF robot and the positions $P_{i'}$ of the corner in the camera coordinate system are recorded. Meanwhile, the 9 robot pose matrices $H_{gi'}$, $i' = 1, 2, \ldots, 9$, are calculated. Thus 36 hand-eye calibration equations to be solved are established.
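As a quick sanity check (illustrative): 9 poses yield $C_9^2 = 36$ unordered pose pairs, hence 36 hand-eye calibration equations.

```python
# 9 measurement poses -> C(9, 2) pose pairs, one AX = XB equation each.
from math import comb
print(comb(9, 2))   # -> 36
```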

[0147] Then, based on the properties of the Kronecker product, the hand-eye calibration equations are transformed into a least squares problem, and the hand-eye matrix $H_{cg}$ is calculated by using a singular value decomposition. In the example, the hand-eye matrix $H_{cg}$ is:

[00079]
$$H_{cg} = \begin{bmatrix} R_{cg} & T_{cg} \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0.0429661 & 0.9988763 & 0.0200075 & 20.1263 \\ -0.9990677 & 0.0428734 & 0.0050404 & 166.462 \\ 0.0041772 & -0.0201986 & 0.9997872 & 361.342 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

[0148] Where the rotation matrix of the hand-eye matrix $H_{cg}$ is the $3 \times 3$ matrix in the upper left corner, namely:

[00080]
$$R_{cg} = \begin{bmatrix} 0.0429661 & 0.9988763 & 0.0200075 \\ -0.9990677 & 0.0428734 & 0.0050404 \\ 0.0041772 & -0.0201986 & 0.9997872 \end{bmatrix}$$

and the translation vector of the hand-eye matrix $H_{cg}$ is the $3 \times 1$ column vector in the upper right corner, namely:


[00081]
$$T_{cg} = \begin{bmatrix} 20.1263 \\ 166.462 \\ 361.342 \end{bmatrix}$$

[0150] The calculated hand-eye matrix $H_{cg}$ is taken as the initial position of the subsequent particle swarm optimization algorithm. In the example, all positions of the $K'$ particles in the initial population are set to the calculated hand-eye matrix $H_{cg}$.

[0151] In the example, the optimized hand-eye matrix $H_{cg}^{revised}$ is:

[00083]
$$H_{cg}^{revised} = \begin{bmatrix} 0.043376 & 0.998763 & 0.0203564 & 19.6059 \\ -0.998952 & 0.0452435 & -0.00692214 & 162.861 \\ -0.00599257 & -0.0206492 & 0.9997669 & 350.159 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

[0152] As shown in FIG. 7, after optimizing the hand-eye matrix $H_{cg}$ through the particle swarm optimization algorithm, we can calculate the position of the calibration origin under the robot base coordinate system. Comparing the position distributions of the calibration origins of the $n'$ measurement poses, we can see that the calibration origins (black dots) after optimization are relatively concentrated and almost overlap, whereas the calibration origins (hollow dots) before optimization are relatively dispersed. The comparison demonstrates the improvement of the hand-eye calibration, and the effect is verified in the subsequent 3D measurement.

[0153] After the optimized hand-eye matrix $H_{cg}^{revised}$ is obtained, the six DOF robot carries the structured light measuring equipment to each measurement pose according to the planned measurement paths to take images of the measured object, and a single-view point cloud of the measured object is obtained at each pose. The measurement poses and corresponding single-view point clouds on the 1st measurement surface ($k = 1$) of the measured object are shown in FIG. 8, and the registered single-view point clouds of the measurement surfaces in the robot base coordinate system are shown in FIG. 9. From FIG. 8 and FIG. 9, we can see that the single-view point clouds of the same measurement surface can be registered. As shown in FIG. 10, a complete point cloud of the measured object is obtained through the present invention.

Example 2

[0154] In the example, the measured object is a complex mechanical component. As shown in FIGS. 11(A) and 11(B), the complete point cloud obtained after optimizing the hand-eye matrix embodies more shape characteristics, which means that it represents more abundant surface information and is beneficial to performing subsequent point cloud processing with high efficiency.

[0155] While illustrative embodiments of the invention have been described above, it is, of course, understood that various modifications will be apparent to those of ordinary skill in the art. Such modifications are within the spirit and scope of the invention, which is limited and defined only by the appended claims.