Six degree-of-freedom (DOF) measuring system and method
10907953 · 2021-02-02
Assignee
Inventors
CPC classification
G05B19/402
PHYSICS
G01S17/42
PHYSICS
International classification
Abstract
This invention discloses a six degree-of-freedom (DOF) measuring system and method. The measuring system comprises a tracking measurement device and a target. The tracking measurement device includes a processor, a camera and two rotation optical components connected with the processor respectively, which are arranged in sequence. The camera boresight and the optical axes of the two rotation optical components are coaxial, and each rotation optical component can rotate independently. The target, which contains at least three markers with known distance constraints, is mounted on the tested object. In addition, at least one marker does not coincide with the midpoint of the line connecting any two of the remaining markers. Compared with the prior art, this invention realizes dynamic real-time measurement of six DOF of the tested object. During measurement, only the target is fixed on the object. The target has a simple structure, has little influence on the actual operation of the tested object, and is easy to use. Meanwhile, the calculation process of the 6 DOF measurement is simplified, and the real-time performance and reliability of the measurement method are improved.
Claims
1. A 6 DOF measuring system comprising: a tracking measurement device comprising: a processor; a camera connected to the processor; two rotation optical components connected to the processor, wherein the camera and the two rotation optical components are arranged in sequence, the boresight of the camera and the optical axes of the two rotation optical components remain coaxial, and each rotation optical component can rotate independently; a target mounted on a tested object, which contains at least three markers with known distance constraints, wherein at least one marker does not coincide with the midpoint of the line connecting any two of the remaining markers; the processor is configured to perform a 6 DOF measuring method, the 6 DOF measuring method comprising: 1) establishing a world coordinate system O.sub.w-X.sub.wY.sub.wZ.sub.w, a camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c, a target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t and a tested object coordinate system O.sub.a-X.sub.aY.sub.aZ.sub.a; 2) calibrating the camera to obtain the internal and external parameters of the camera; 3) controlling the rotation optical components by the processor to adjust the camera boresight so that the target falls into the camera FOV; 4) acquiring, by the camera, images which include at least three markers; 5) carrying out image processing to obtain the image coordinates (X.sub.i, Y.sub.i) (i=1 . . . n) of the markers; 6) obtaining, according to a reverse ray tracing method and the rotation angles of the rotation optical components, the exiting point K.sub.i (i=1 . . . n) and the emergent beam vector S.sub.i (i=1 . . . n) of the incident beam in the rotation optical components; 7) obtaining the three-dimensional coordinates of the markers according to the following equation;
8) a normal vector of the plane where the markers are located is solved by {right arrow over (N)}={right arrow over (P.sub.1P.sub.2)}×{right arrow over (P.sub.2P.sub.3)}, where P.sub.1 is the first marker, P.sub.2 is the second marker, P.sub.3 is the third marker, and {right arrow over (N)} is the normal vector of the plane where the above three markers are located; taking P.sub.2 as the origin and the orientation vectors of {right arrow over (P.sub.1P.sub.2)}, {right arrow over (N)}×{right arrow over (P.sub.1P.sub.2)} and {right arrow over (N)} as the coordinate directions, a coordinate system is established, and the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t coincides with this coordinate system; 9) an attitude matrix R of the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t relative to the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c is solved by a directional cosine matrix; 10) according to the image coordinates of the three markers and based on a pinhole imaging model, the translation matrix T of the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t relative to the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c is solved; 11) the attitude matrix of the tested object relative to the world coordinate system O.sub.w-X.sub.wY.sub.wZ.sub.w is derived as follows:
M=M.sub.1M.sub.2M.sub.3, where M.sub.1 is the conversion matrix of the camera relative to the world coordinate system O.sub.w-X.sub.wY.sub.wZ.sub.w, M.sub.2 is the conversion matrix of the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t relative to the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c, and M.sub.3 is the conversion matrix of the tested object coordinate system O.sub.a-X.sub.aY.sub.aZ.sub.a relative to the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t.
2. The 6 DOF measuring system of claim 1, wherein at least one of the two rotation optical components comprises: a transmission unit; a driving unit connected to the transmission unit and the processor; an angle measuring unit mounted on the transmission unit; and a prism mounted on the transmission unit, wherein the cross section of the prism is a right trapezoid.
3. The 6 DOF measuring system of claim 1, wherein the target comprises: a sleeve; and marking rings which include marking shafts and rings, wherein the rings are nested on the outer diameter of the sleeve, the marking shafts are radially distributed on an outer side of each ring, and the markers are set at the hanging ends of the marking shafts.
4. The 6 DOF measuring system of claim 1 wherein a filter is arranged between the camera and at least one of the rotation optical components.
5. The 6 DOF measuring system of claim 1 wherein the markers are spherical infrared light emitting diodes (LED).
6. The 6 DOF measuring system of claim 1, wherein an optical scanning field formed by the two rotation optical components is not less than the camera FOV, and the combined imaging FOV formed by the camera and the two rotation optical components is not less than the camera imaging FOV.
7. A 6 DOF measuring method comprising: 1) establishing a world coordinate system O.sub.w-X.sub.wY.sub.wZ.sub.w, a camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c, a target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t and a tested object coordinate system O.sub.a-X.sub.aY.sub.aZ.sub.a; 2) calibrating a camera to obtain the internal and external parameters of the camera; 3) controlling rotation optical components by a processor to adjust the camera boresight so that a target falls into the camera FOV; 4) acquiring, by the camera, images which include at least three markers; 5) carrying out image processing to obtain the image coordinates (X.sub.i, Y.sub.i) (i=1 . . . n) of the markers; 6) obtaining, according to a reverse ray tracing method and rotation angles of the rotation optical components, the exiting point K.sub.i (i=1 . . . n) and the emergent beam vector S.sub.i (i=1 . . . n) of the incident beam in the rotation optical components; 7) obtaining the three-dimensional coordinates of the markers according to the following equation:
8) a normal vector of the plane where the markers are located is solved by {right arrow over (N)}={right arrow over (P.sub.1P.sub.2)}×{right arrow over (P.sub.2P.sub.3)}, where P.sub.1 is the first marker, P.sub.2 is the second marker, P.sub.3 is the third marker, and {right arrow over (N)} is the normal vector of the plane where the above three markers are located; taking P.sub.2 as the origin and the orientation vectors of {right arrow over (P.sub.1P.sub.2)}, {right arrow over (N)}×{right arrow over (P.sub.1P.sub.2)} and {right arrow over (N)} as the coordinate directions, a coordinate system is established, and the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t coincides with this coordinate system; 9) an attitude matrix R of the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t relative to the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c is solved by a directional cosine matrix; 10) according to the image coordinates of the three markers and based on a pinhole imaging model, a translation matrix T of the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t relative to the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c is solved; 11) the attitude matrix of the tested object relative to the world coordinate system O.sub.w-X.sub.wY.sub.wZ.sub.w is derived as follows:
M=M.sub.1M.sub.2M.sub.3, where M.sub.1 is a conversion matrix of the camera relative to the world coordinate system O.sub.w-X.sub.wY.sub.wZ.sub.w, M.sub.2 is a conversion matrix of the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t relative to the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c, and M.sub.3 is a conversion matrix of the tested object coordinate system O.sub.a-X.sub.aY.sub.aZ.sub.a relative to the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t; and 12) when the tested object is moving, controlling the rotation optical components by a visual tracking algorithm to adjust the camera boresight so that the target is kept within the camera FOV, and repeating steps 4) to 11).
8. The method of claim 7, wherein the visual tracking algorithm in step 12) is as follows: under the condition that the rotation angles of the rotation optical components are known, the line between any marker and the focus F of the camera is taken as an incident beam, and an emergent beam vector of the rotation optical components is obtained by a vector ray tracing method; under the condition that the emergent beam vector is known, a reverse lookup-table method is used to solve the rotation angles of the rotation optical components.
9. The method of claim 8, wherein the steps to make a lookup table for the reverse lookup-table method comprise: deciding the rotation step angle θ.sub.st of a prism, wherein every revolution of the prism can be divided into 360/θ.sub.st steps; and calculating emergent beam vectors at different angles according to the following equation to establish the lookup table of the relationship between the rotation angles and the unit vector of the emergent beam;
10. The method of claim 8, wherein the steps of the reverse lookup-table method are: according to the unit vector of the emergent beam of the current rotation optical components, searching the lookup table for the recorded emergent beam closest to the target emergent beam, wherein the error between them is given by,
δ={square root over ((X.sub.rp−x.sub.rp).sup.2+(Y.sub.rp−y.sub.rp).sup.2+(Z.sub.rp−z.sub.rp).sup.2)} where X.sub.rp, Y.sub.rp and Z.sub.rp are the unit vector coordinates of the emergent beam recorded in the table, and x.sub.rp, y.sub.rp and z.sub.rp are the unit vector coordinates of the emergent beam of a prism; then the related rotation angles can be determined according to the closest recorded emergent beam.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Referring now to the drawings, exemplary embodiments are shown which should not be construed to be limiting regarding the scope of the disclosure, and wherein the elements are numbered alike in several FIGURES:
REFERENCE NUMBERS
(23) 100, tracking measurement device; 101, processor; 102, camera; 1021, camera photosensitive chip; 1022, camera lens; 103, filter; 104, driving unit; 105, transmission unit; 106, angle measuring unit; 107, prism; 108, camera boresight; 109, adjusted camera boresight; 1080, camera imaging FOV; 1081, combined imaging FOV; 1082, optical scanning field; 200, target; 201, marker ring; 2011, marker; 2011A, first marker; 2011B, second marker; 2011C, third marker; 2012, marking shaft; 2013, sleeve ring; 202, sleeve I; 203, fixed support; 300, tested object; 301, sleeve II; 302, needle; 400, hole-based target; 401, top marker I; 402, pillar I; 403, sleeve III; 405, plug pin; 500, shaft-based target; 501, top marker II; 502, pillar II; 503, sleeve IV; 504, scroll chuck; 600, flexible multi-hinge; 601, base I; 602, rotary column I; 603, damper; 604, hinge rod; 605, probe; 700, scanner; 701, base II; 702, rotary column II; 703, scanning head.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
(24) This invention is described in detail below with the drawings and specific embodiments. Based on the technical scheme of this invention, detailed implementation methods and specific operation procedures are given in the following embodiments. However, the protection range of this invention is not limited to these embodiments.
(25) This embodiment provides a 6 DOF measurement system and method. As shown in
(26) As shown in
The transmission unit 105 can adopt gear transmission, worm gear transmission, synchronous belt transmission or chain transmission. The prism 107 can use optical transmission materials with different refractive indices, such as any one or a combination of optical glass, optical crystals, optical plastics (such as PC, PMMA, etc.), ceramics and metals. The driving unit 104 can use servo motors, stepping motors or torque motors.
(28) As shown in
(29) As shown in
(30) 1) Establish the world coordinate system O.sub.w-X.sub.wY.sub.wZ.sub.w, camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c, target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t and tested object coordinate system O.sub.a-X.sub.aY.sub.aZ.sub.a.
(31) 2) A suitable calibration method is used to calibrate the camera. The calibration methods include direct linear method, Tsai two-step calibration method, Zhang's camera calibration method or neural network calibration method. The internal and external parameters of the camera 102 are recorded, and the camera 102 is kept still during the measurement process.
(32) 3) The rotation optical components are controlled by processor 101 to adjust the camera boresight so that the target 200 falls into the camera FOV.
(33) 4) Camera 102 acquires images which include the first marker 2011A, the second marker 2011B, and the third marker 2011C, then the images are transmitted to processor 101.
(34) 5) Processor 101 acquires the images and then performs image processing. The image coordinates of the first marker 2011A, the second marker 2011B and the third marker 2011C are (0.462, 0), (0, 0), (0, 0.616).
(35) 6) As shown in
(36) 7) According to the following equation, the three-dimensional coordinates of the marker 2011 can be obtained.
(37) [X.sub.pi Y.sub.pi Z.sub.pi].sup.T=[X.sub.ki Y.sub.ki Z.sub.ki].sup.T+λ.sub.i[X.sub.si Y.sub.si Z.sub.si].sup.T, (X.sub.pi−X.sub.pj).sup.2+(Y.sub.pi−Y.sub.pj).sup.2+(Z.sub.pi−Z.sub.pj).sup.2=D.sub.ij.sup.2 (i≠j)
(38) Where X.sub.pi, Y.sub.pi and Z.sub.pi are the three-dimensional coordinates of the marker i in the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c in the measurement system; X.sub.ki, Y.sub.ki and Z.sub.ki are the three-dimensional coordinates of K.sub.i in the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c in the measurement system; X.sub.si, Y.sub.si and Z.sub.si are the three-dimensional coordinates of S.sub.i in the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c in the measurement system, i=1 . . . 3; X.sub.pj, Y.sub.pj and Z.sub.pj are the three-dimensional coordinates of the marker j in the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c in the measurement system, j=1 . . . 3; D.sub.ij is the distance between the marker i and the marker j; and λ.sub.i is the proportional coefficient of the equation i.
(39) In the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c, the three-dimensional coordinates of the first marker 2011A, the second marker 2011B and the third marker 2011C are (22.36, 49.63, 1008.2), (50.5, 46.3, 998.3) and (49.32, 10.0651, 981.3).
(40) 8) The normal vector of the plane where the three markers 2011 are located can be solved by the following formula:
{right arrow over (N)}={right arrow over (P.sub.1P.sub.2)}×{right arrow over (P.sub.2P.sub.3)}
(41) Where P.sub.1 is the first marker, P.sub.2 is the second marker, P.sub.3 is the third marker, and {right arrow over (N)} is the normal vector of the plane where the first marker, the second marker and the third marker are located. The normal vector {right arrow over (N)} is (301.8, 489.1, 1021.6). Then, the vector perpendicular to {right arrow over (P.sub.1P.sub.2)} in the plane where the three markers are located, namely vector {right arrow over (N)}×{right arrow over (P.sub.1P.sub.2)}, is obtained. Taking P.sub.2 as the origin and the orientation vectors of {right arrow over (P.sub.1P.sub.2)}, {right arrow over (N)}×{right arrow over (P.sub.1P.sub.2)} and {right arrow over (N)} as the coordinate directions, the coordinate system is established, and the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t coincides with this coordinate system.
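The frame construction of step 8 can be sketched as follows. This is a minimal illustration under the right-handed convention described above (x along P.sub.1P.sub.2, z along the plane normal); the function name is the author's own.

```python
import numpy as np

def target_axes(P1, P2, P3):
    """Unit axes of the target frame: x along P1P2, z along the plane
    normal N = P1P2 x P2P3, y = z x x (equivalently, the in-plane
    direction perpendicular to P1P2). Origin is P2."""
    P1, P2, P3 = (np.asarray(p, float) for p in (P1, P2, P3))
    v12 = P2 - P1            # vector P1P2
    v23 = P3 - P2            # vector P2P3
    n = np.cross(v12, v23)   # plane normal N
    x = v12 / np.linalg.norm(v12)
    z = n / np.linalg.norm(n)
    y = np.cross(z, x)       # completes a right-handed orthonormal frame
    return x, y, z
```

By construction the three axes are orthonormal and right-handed, which is what lets the directional cosine matrix of step 9 be a proper rotation.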
(42) 9) The attitude matrix R of the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t relative to the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c is solved by the directional cosine matrix.
(43) Preferably, the directional cosine matrix is
(44) R=[[i·I, j·I, k·I], [i·J, j·J, k·J], [i·K, j·K, k·K]]
where i, j and k are the unit directional vectors of the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t, and I, J and K are the unit directional vectors of the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c. The value of each element in the direction cosine matrix is the cosine value of the angle between the two unit directional vectors.
(45)
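The directional cosine matrix of step 9 can be formed directly from dot products, since for unit vectors the dot product equals the cosine of the included angle. A minimal sketch (function name is the author's own):

```python
import numpy as np

def direction_cosine_matrix(target_axes, camera_axes):
    """Each entry is the cosine of the angle between one target axis
    (i, j, k) and one camera axis (I, J, K); for unit vectors this is
    simply their dot product."""
    return np.array([[float(np.dot(c, t)) for t in target_axes]
                     for c in camera_axes])
```

With the camera axes taken as the standard basis, the columns of R are the target axes expressed in camera coordinates.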
(46) 10) According to the image coordinates of the first marker 2011A, the second marker 2011B, and the third marker 2011C and based on the pinhole imaging model, the translation matrix T of the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t relative to camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c can be solved.
T=[50.5 46.3 998.3]
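The depth recovery behind step 10 can be illustrated with a simplified pinhole-model sketch. This is an assumption-laden toy, not the patent's solver: it supposes the two markers lie at nearly the same depth, so that a segment of true length D imaged with length d sits at depth Z ≈ f·D/d; the function name and the two-marker simplification are the author's own.

```python
import numpy as np

def translation_from_pinhole(img_pts, world_dist, f):
    """img_pts: image coordinates of markers P1 and P2 (same units as f);
    world_dist: known distance between them; f: focal length.
    Returns the estimated camera-frame position of the target origin P2:
    Z = f * D / d (similar triangles), then X = Z*x/f, Y = Z*y/f."""
    p1, p2 = np.asarray(img_pts[0], float), np.asarray(img_pts[1], float)
    d = np.linalg.norm(p1 - p2)   # imaged length of the segment P1P2
    Z = f * world_dist / d        # depth from similar triangles
    x, y = p2                     # P2 is the target origin
    return np.array([Z * x / f, Z * y / f, Z])
```

When the markers sit at different depths, as in the embodiment above, this estimate is only approximate and the full ray-parameter solution of step 7 is preferred.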
(47) 11) The attitude matrix of the object 300 relative to the world coordinate system O.sub.w-X.sub.wY.sub.wZ.sub.w can be derived according to the following equation.
M=M.sub.1M.sub.2M.sub.3
where M.sub.1 is the conversion matrix of the camera 102 relative to the world coordinate system O.sub.w-X.sub.wY.sub.wZ.sub.w,
(48) M.sub.2 is the pose conversion matrix of the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t relative to the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c, which is composed of the attitude matrix R and the translation matrix T; and
M.sub.3 is the pose conversion matrix of the tested object coordinate system O.sub.a-X.sub.aY.sub.aZ.sub.a relative to the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t. Considering that the world coordinate system O.sub.w-X.sub.wY.sub.wZ.sub.w coincides with the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c and the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t coincides with the tested object coordinate system O.sub.a-X.sub.aY.sub.aZ.sub.a, the conversion matrix of the tested object 300 relative to the world coordinate system O.sub.w-X.sub.wY.sub.wZ.sub.w can be calculated as follows:
(49)
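The chain M=M.sub.1M.sub.2M.sub.3 of step 11 can be sketched with 4×4 homogeneous transforms. A minimal sketch (function names are the author's own); in this embodiment M.sub.1 and M.sub.3 are identities, so the object pose in the world frame reduces to the target pose in the camera frame.

```python
import numpy as np

def pose_matrix(R, T):
    """4x4 homogeneous transform from a 3x3 attitude matrix R and a
    translation vector T."""
    M = np.eye(4)
    M[:3, :3] = np.asarray(R, float)
    M[:3, 3] = np.asarray(T, float)
    return M

def object_pose_in_world(M1, R, T, M3):
    """M = M1 * M2 * M3, where M2 is the target pose in the camera frame."""
    return M1 @ pose_matrix(R, T) @ M3
```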
(50) 12) When the tested object is moving, the rotation optical components are controlled by the visual tracking algorithm to adjust the boresight 108 of camera 102 so that the target 200 is kept within the camera FOV, then repeat the steps 4 to 11.
(51) Preferably, the principle of the visual tracking algorithm is as follows: first, the line connecting any marker 2011 in the imaging coordinate system with the focus F of camera 102 is taken as the incident beam; then, since the rotation angles of the rotation optical components are known, the emergent beam vector of the rotation optical components is obtained through the vector ray tracing method; finally, under the condition that the emergent beam vector is known, the rotation angles of the rotation optical components are acquired through a reverse lookup-table method.
(52) The steps to make a lookup table for the reverse lookup-table method are as follows: first, the rotation step angle θ.sub.st of the prism 107 is decided; in this embodiment, θ.sub.st=0.1°. Every revolution of the prism 107 can be divided into 360/θ.sub.st steps. Then emergent beam vectors at different angles are calculated according to the following equation, and the lookup table of the relationship between the rotation angles and the unit vector of the emergent beam is established.
(53) A.sub.out=(n.sub.1/n.sub.2)A.sub.in+[(n.sub.1/n.sub.2)cos θ.sub.i−cos θ.sub.t]N, where cos θ.sub.i=−A.sub.in·N and cos θ.sub.t={square root over (1−(n.sub.1/n.sub.2).sup.2(1−cos.sup.2 θ.sub.i))}
(54) Where A.sub.in is the unit vector of the incident beam, A.sub.out is the unit vector of the refracted beam, N is the normal vector of the refracting surface, n.sub.1 is the refractive index of the incident medium and n.sub.2 is the refractive index of the emergent medium.
(55)
where θ.sub.ri is the rotation angle of the ith prism 107, and α is the wedge angle of the prism 107.
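The vector form of Snell's law used in the ray tracing above can be sketched directly. This is a standard formulation, with the surface normal N oriented against the incident ray (A.sub.in·N < 0); the function name is the author's own.

```python
import numpy as np

def refract(A_in, N, n1, n2):
    """Vector Snell's law: A_in and N are unit vectors, N oriented so
    that A_in . N < 0. Returns the unit refracted direction, or None
    on total internal reflection."""
    mu = n1 / n2
    cos_i = -float(np.dot(A_in, N))
    sin_t2 = mu**2 * (1.0 - cos_i**2)
    if sin_t2 > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin_t2)
    return mu * np.asarray(A_in, float) + (mu * cos_i - cos_t) * np.asarray(N, float)
```

Tracing a beam through one prism is then two such refractions, at the inclined face and the flat face, with the face normals rotated by the prism angle θ.sub.ri.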
(56) The steps of this reverse lookup-table method are:
(57) Taking the unit vector of the current emergent beam of the rotation optical components as the target emergent beam, the lookup table is then searched to find the recorded emergent beam closest to the target emergent beam; the error between them is given by,
δ={square root over ((X.sub.rp−x.sub.rp).sup.2+(Y.sub.rp−y.sub.rp).sup.2+(Z.sub.rp−z.sub.rp).sup.2)}
where X.sub.rp, Y.sub.rp and Z.sub.rp are the coordinates of the unit vector of the emergent beam recorded in the lookup table, and x.sub.rp, y.sub.rp and z.sub.rp are the coordinates of the unit vector of the current emergent beam of prism 107. Then the rotation angles can be determined according to the found emergent beam.
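The table construction and reverse lookup above can be sketched as follows. This sketch uses a toy emergent-beam model in place of the patent's two-prism ray trace (the `toy_emergent` function is a common Risley-prism approximation supplied only so the example runs; it is NOT the patent's equation), and a brute-force nearest-neighbor search over the tabulated angles.

```python
import numpy as np

def build_lookup_table(emergent_fn, step_deg):
    """Tabulate (theta1, theta2) -> unit emergent vector over one full
    revolution of each prism, in steps of step_deg."""
    angles = np.arange(0.0, 360.0, step_deg)
    return [((t1, t2), emergent_fn(t1, t2)) for t1 in angles for t2 in angles]

def reverse_lookup(table, target_vec):
    """Return the tabulated prism angles whose emergent beam minimises the
    Euclidean error delta between recorded and target unit vectors."""
    target_vec = np.asarray(target_vec, float)
    best, best_err = None, np.inf
    for angles, v in table:
        err = float(np.sqrt(np.sum((np.asarray(v) - target_vec) ** 2)))
        if err < best_err:
            best, best_err = angles, err
    return best, best_err

def toy_emergent(t1, t2):
    """Stand-in for the two-prism trace: deviation magnitude set by the
    prism angle difference, azimuth by the mean angle."""
    d = np.radians(5.0) * np.cos(np.radians(t1 - t2) / 2.0)
    a = np.radians((t1 + t2) / 2.0)
    return np.array([np.sin(d) * np.cos(a), np.sin(d) * np.sin(a), np.cos(d)])
```

In practice the table is indexed or sorted so the search is far cheaper than a linear scan, and the step angle θ.sub.st bounds the residual pointing error.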
(58) The working principles of this embodiment are as follows: the pose matrix of target 200 in the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c is obtained by the 6 DOF measurement method; then, according to the pose conversion matrix of the camera coordinate system O.sub.c-X.sub.cY.sub.cZ.sub.c relative to the world coordinate system O.sub.w-X.sub.wY.sub.wZ.sub.w and the pose conversion matrix of the tested object coordinate system O.sub.a-X.sub.aY.sub.aZ.sub.a relative to the target coordinate system O.sub.t-X.sub.tY.sub.tZ.sub.t, the pose conversion matrix of the tested object 300 relative to the world coordinate system O.sub.w-X.sub.wY.sub.wZ.sub.w is obtained. The driving unit 104 is controlled by processor 101 through the visual tracking algorithm. Then, the driving unit 104 drives the prism 107 through the transmission unit 105 to adjust the boresight of camera 102, so as to track the target 200 and realize real-time dynamic measurement of the tested object 300.
(59) Accuracy Analysis
(60) In order to meet the demand of high accuracy of this 6 DOF measurement system, the following conditions should be satisfied: 1) light outside the working band is filtered out by filter 103; 2) camera lens 1022 is of good quality and the image distortion can be corrected; 3) the systematic error of the camera is removed through an error compensation method; 4) the image of the marker 2011 on the camera photosensitive chip 1021 should completely fill an area with a radius of 1000 pixels.
(61)
(62) Therefore, the measurement accuracy can be determined by the following equation:
(63)
(64) Where dD is the machining error of the marker 2011. Assuming that N=1000, the pixel size is 4.4 μm×4.4 μm, dD=0.001 mm and f=16 mm, it can be calculated that dS=3.6 μm, which indicates that the measurement accuracy can reach 3.6 μm. As the measurement distance increases, the measurement accuracy decreases.
(65) The measurement method and tracking strategy of the system in this embodiment are mainly realized by the combination of the processor, the driving unit, the transmission unit, the angle measuring unit and the prisms. The machining and assembly errors of the prisms mainly affect the pointing accuracy of the camera boresight. Assuming that the wedge angle errors of the two prisms are Δα.sub.1 and Δα.sub.2, the actual wedge angles of the two prisms are α+Δα.sub.1 and α+Δα.sub.2, respectively; assuming that the refractive index errors of the prisms are Δn.sub.1 and Δn.sub.2, the actual refractive indices of the two prisms are n+Δn.sub.1 and n+Δn.sub.2. Providing that α=20° and n=1.517, the relationship of the maximum pointing error δ.sub.max within the pointing range of the camera boresight varying with the wedge angle error Δα and the refractive index error Δn is shown in
(66) Prism assembly errors directly affect the camera boresight pointing accuracy. According to the error form, assembly errors of the prism can be divided into the prism tilt errors and the bearing tilt errors, as shown in
(67) In different embodiments, the target may also have the following forms. 1. As shown in
(68) The preferred specific embodiments of this invention have been described in detail above. Those of ordinary skill in the related field can make many modifications and changes according to the concept of this invention without creative work. Therefore, any technical scheme that can be obtained through logical analysis, reasoning or limited experiments on the basis of the prior art in accordance with this invention shall fall within the protection scope determined by the claims.