A CALIBRATION METHOD FOR FRINGE PROJECTION SYSTEMS BASED ON PLANE MIRRORS

Abstract

A calibration method for fringe projection systems based on plane mirrors. Firstly, two mirrors are placed behind the tested object. Through the reflection of the mirrors, the camera can image the measured object from the front and two other perspectives, so as to obtain 360-degree two-dimensional information of the measured object. The projector projects three sets of phase-shifting fringe patterns with frequencies of 1, 8, and 64. The camera captures the fringe images, and an absolute phase map with a frequency of 64 is obtained by using the phase-shifting method and the temporal phase unwrapping algorithm. By using the calibration parameters between the projector and the camera, the absolute phase map can be converted into three-dimensional information of the measured object. Then, the mirror calibration is realized by capturing a set of 3D feature point pairs, so that the 3D information from different perspectives is transformed into a unified world coordinate system. The calibration method does not need to artificially fix a feature pattern on the plane mirrors; the camera only needs to capture a set of 3D feature point pairs to directly realize the mirror calibration, which avoids the loss of measurement accuracy and realizes high-precision panoramic three-dimensional measurement.

Claims

1. A calibration method for fringe projection systems based on plane mirrors is characterized by the following steps: Step 1: In a traditional measurement system based on fringe projection profilometry (FPP) consisting of a camera and a projector, a left mirror and a right mirror are placed behind the measured object; the angle between the two mirrors is about 120 degrees; through the reflection of mirrors, the camera images the measured object from the perspective of the front, left and right mirrors, so as to obtain 360-degree two-dimensional information of the measured object; Step 2: The projector projects three sets of phase-shifting fringe patterns to the measured object; the frequencies of three sets of phase-shifting fringe patterns are 1, 8, and 64; the projected fringe patterns are captured synchronously by the camera, and the three sets of phase-shifted fringe images are collected as the intensity map; Step 3: For the intensity maps of three sets of phase-shifting fringe images collected in Step 2, the phase-shifting method is used to calculate the wrapped phase maps with different frequencies; then, a multi-frequency temporal phase unwrapping algorithm is used to perform phase unwrapping on three sets of obtained wrapped phase maps in turn, and finally an absolute phase map with a frequency of 64 is obtained; by using the calibration parameters between the projector and the camera, the absolute phase map can be converted into three-dimensional information of the measured object; Step 4: The mirror calibration is realized by capturing a set of 3D feature point pairs so that the 3D information from different perspectives is transformed into a unified world coordinate system, so as to achieve high-precision panoramic 3D measurement.

2. The method according to claim 1 is characterized by step 2: Any set of phase-shifting fringe patterns projected by the projector is represented as: $I_n^p(x^p,y^p)=128+127\cos\left[\frac{2\pi f x^p}{W}-\frac{2\pi n}{N}\right]$ where I.sub.n.sup.p(x.sup.p,y.sup.p) is the phase-shifting fringe pattern projected by the projector, n represents the phase-shifting index of the phase-shifting fringe patterns, n=0, 1, 2, . . . , N−1, N represents the total number of phase-shifting steps, (x.sup.p,y.sup.p) is the pixel coordinates of the projector plane, W is the horizontal resolution of the projector, f is the frequency of the phase-shifting fringe patterns; the projector projects three sets of phase-shifting fringe patterns to the measured object; the frequencies of three sets of phase-shifting fringe patterns are 1, 8, and 64; the fringe patterns within each group have the same frequency; the projected fringe patterns are captured synchronously by the camera, and the three sets of phase-shifting fringe images are collected as the intensity map, which is represented as: $I_n(x,y)=A(x,y)+B(x,y)\cos\left[\Phi(x,y)-\frac{2\pi n}{N}\right]$ where I.sub.n(x,y) is the intensity map of the corresponding phase-shifting fringe image, (x,y) is the pixel coordinates of the camera plane, A(x,y) is the background intensity, B(x,y) is the modulation of the fringes, Φ(x,y) is the phase to be calculated.

3. The method according to claim 1 is characterized by the following specific process in step 3: (1) The wrapped phase map of the measured object is obtained, that is, the projected fringe patterns are captured synchronously by the camera in Step 2, and the intensity maps of the obtained phase-shifting fringe images can be substituted into the following formula to obtain the wrapped phase map φ(x,y): $\varphi(x,y)=\arctan\frac{\sum_{n=0}^{N-1}I_n(x,y)\sin(2\pi n/N)}{\sum_{n=0}^{N-1}I_n(x,y)\cos(2\pi n/N)}$ due to the truncation effect of the arctan function, the obtained phase map φ(x,y) is a wrapped phase with a range of [0,2π]; its relation to the absolute phase map Φ(x,y) is as follows: $\Phi(x,y)=\varphi(x,y)+2\pi k(x,y)$ where k(x,y) is the periodic order of the phase, an integer in the range [0,f−1]; f is the frequency of the fringe pattern; the range of the absolute phase map with unit frequency is [0,2π], so the wrapped phase map with unit frequency is also the absolute phase map; (2) The high-frequency absolute phase map of the measured object is obtained, and the absolute phase map is obtained by the multi-frequency temporal phase unwrapping algorithm using the wrapped phase maps with different frequencies, that is, the absolute phase map with unit frequency is used to assist the unwrapping of the absolute phase map with a frequency of 8, and the absolute phase map with a frequency of 8 is used to assist the unwrapping of the absolute phase map with a frequency of 64, as follows: $k_h(x,y)=\mathrm{Round}\left(\frac{(f_h/f_l)\,\Phi_l(x,y)-\varphi_h(x,y)}{2\pi}\right)$, $\Phi_h(x,y)=\varphi_h(x,y)+2\pi k_h(x,y)$ where f.sub.h is the frequency of the high-frequency fringe images, f.sub.l is the frequency of the low-frequency fringe images, φ.sub.h(x,y) is the wrapped phase of the high-frequency fringe images, k.sub.h(x,y) is the periodic order of the phase of the high-frequency fringe images, Φ.sub.h(x,y) and Φ.sub.l(x,y) are the absolute phases of the high-frequency and low-frequency fringe images respectively, and Round is the rounding function; (3) The absolute phase map is converted into the horizontal pixel coordinates of the projector to obtain the corresponding relationship between the camera and the projector; that is, when the absolute phase map Φ.sub.64(x,y) with a frequency of 64 is obtained, the relationship between each pixel of the camera and the corresponding pixel in the projector can be described as follows: $\Phi_{64}(x,y)=\frac{2\pi f x^p}{W}$ where W represents the horizontal resolution of the projector, f is the frequency of the absolute phase map, which is 64, x.sup.p is the horizontal pixel coordinate of the projector, and (x,y) and x.sup.p represent the pixel-by-pixel correspondence between the camera and the projector; (4) After obtaining the corresponding relationship between the camera and the projector, the calibration parameters between the projector and the camera are used to obtain the 3D information of the measured object; through system calibration, the calibration parameters of the projector and camera are obtained, and the specific formula is as follows: $P_c=A_cM_c=\begin{pmatrix}p_{11}^c&p_{12}^c&p_{13}^c&p_{14}^c\\p_{21}^c&p_{22}^c&p_{23}^c&p_{24}^c\\p_{31}^c&p_{32}^c&p_{33}^c&p_{34}^c\end{pmatrix}$, $P_p=A_pM_p=\begin{pmatrix}p_{11}^p&p_{12}^p&p_{13}^p&p_{14}^p\\p_{21}^p&p_{22}^p&p_{23}^p&p_{24}^p\\p_{31}^p&p_{32}^p&p_{33}^p&p_{34}^p\end{pmatrix}$ where P.sub.c and P.sub.p are the calibration parameters obtained through system calibration, A.sub.c and A.sub.p are the corresponding internal parameters, M.sub.c and M.sub.p are the corresponding external parameters, and p.sub.ij.sup.c and p.sub.ij.sup.p are the corresponding calibration elements in P.sub.c and P.sub.p respectively; therefore, the corresponding 3D information can be obtained by the following formula: $\begin{pmatrix}x^w\\y^w\\z^w\end{pmatrix}=\begin{pmatrix}p_{11}^c-p_{31}^cx&p_{12}^c-p_{32}^cx&p_{13}^c-p_{33}^cx\\p_{21}^c-p_{31}^cy&p_{22}^c-p_{32}^cy&p_{23}^c-p_{33}^cy\\p_{11}^p-p_{31}^px^p&p_{12}^p-p_{32}^px^p&p_{13}^p-p_{33}^px^p\end{pmatrix}^{-1}\begin{pmatrix}p_{34}^cx-p_{14}^c\\p_{34}^cy-p_{24}^c\\p_{34}^px^p-p_{14}^p\end{pmatrix}$ where (x,y) is the pixel coordinate of the camera, x.sup.p is the horizontal pixel coordinate of the projector, and (x.sup.w,y.sup.w,z.sup.w) is the 3D information corresponding to (x,y) and x.sup.p, that is, the 3D information of the measured object.

4. The method according to claim 1 is characterized by the following mirror calibration process in step 4: (1) The 3D imaging model of the mirror is expressed as: $\begin{bmatrix}I-2n^r(n^r)^T&2d_w^rn^r\\0&1\end{bmatrix}\begin{bmatrix}X^r\\1\end{bmatrix}=\begin{bmatrix}X^o\\1\end{bmatrix},\quad D^r=\begin{bmatrix}I-2n^r(n^r)^T&2d_w^rn^r\\0&1\end{bmatrix}$ where I is the identity matrix, X.sup.o(x.sup.o,y.sup.o,z.sup.o) and X.sup.r(x.sup.r,y.sup.r,z.sup.r) are a pair of actual and virtual points, and D.sup.r is the reflection matrix of the mirror, so the calibration parameters of the mirror are n.sup.r and d.sub.w.sup.r; n.sup.r is the normal vector of the mirror, d.sub.w.sup.r is the distance from the origin of the world coordinate system to the mirror plane; (2) N.sup.r 3D feature point pairs, including actual points X.sup.o(x.sup.o,y.sup.o,z.sup.o) and virtual points X.sup.r(x.sup.r,y.sup.r,z.sup.r), are observed by the camera to solve the reflection matrix of the mirror; in other words, step 2 and step 3 are used to achieve 3D measurement of the measured object, so as to obtain the 3D information of the 3D feature point pairs; since the actual point X.sup.o(x.sup.o,y.sup.o,z.sup.o) and the virtual point X.sup.r(x.sup.r,y.sup.r,z.sup.r) are a pair of reflection points, due to the reflection of the mirror, the normal vector {right arrow over (n.sup.r)} is parallel to {right arrow over (X.sup.oX.sup.r)}, that is, {right arrow over (n.sup.r)}×{right arrow over (X.sup.oX.sup.r)}=0; taking n.sup.r as (a.sup.r,b.sup.r,c.sup.r), the initial estimation of n.sup.r can be obtained by the following formula: $\begin{bmatrix}y^r-y^o&x^o-x^r&0\\z^r-z^o&0&x^o-x^r\\0&z^r-z^o&y^o-y^r\end{bmatrix}\begin{bmatrix}a^r\\b^r\\c^r\end{bmatrix}=0$ where (x.sup.o,y.sup.o,z.sup.o) is the 3D coordinate of the actual point X.sup.o in the world coordinate system, (x.sup.r,y.sup.r,z.sup.r) is the 3D coordinate of the virtual point X.sup.r in the world coordinate system, and (a.sup.r,b.sup.r,c.sup.r) is the 3D coordinate of the normal vector n.sup.r of the mirror in the world coordinate system; a more accurate solution is obtained by the SVD algorithm; the last column vector of the V matrix obtained by SVD is the initial estimate of the normal vector n.sup.r, namely (a.sub.0.sup.r,b.sub.0.sup.r,c.sub.0.sup.r); since the actual point X.sup.o(x.sup.o,y.sup.o,z.sup.o) and the virtual point X.sup.r(x.sup.r,y.sup.r,z.sup.r) are symmetric about the mirror, their midpoints must satisfy the plane equation a.sub.0.sup.rx+b.sub.0.sup.ry+c.sub.0.sup.rz+d.sub.w.sup.r=0 of the mirror, and the initial estimate of d.sub.w.sup.r can be obtained by the following formula: $d_w^r=-\frac{1}{N^r}\sum_{i=1}^{N^r}\left(a_0^r\frac{x_i^o+x_i^r}{2}+b_0^r\frac{y_i^o+y_i^r}{2}+c_0^r\frac{z_i^o+z_i^r}{2}\right)$ where N.sup.r is the number of 3D feature point pairs; (3) After obtaining the initial estimates of n.sup.r and d.sub.w.sup.r, the 3D imaging model of the mirror is rewritten into the following formula based on the Levenberg-Marquardt algorithm: $g_1(G)=[1-2(a^r)^2]x^r-2a^rb^ry^r-2a^rc^rz^r+2a^rd_w^r-x^o$, $g_2(G)=-2a^rb^rx^r+[1-2(b^r)^2]y^r-2b^rc^rz^r+2b^rd_w^r-y^o$, $g_3(G)=-2a^rc^rx^r-2b^rc^ry^r+[1-2(c^r)^2]z^r+2c^rd_w^r-z^o$, $R(G)=\sum_{i=1}^{N^r}g_1^2(G)+g_2^2(G)+g_3^2(G)$ where G={a.sup.r,b.sup.r,c.sup.r,d.sub.w.sup.r}, g.sub.1(G), g.sub.2(G) and g.sub.3(G) represent the residuals of each equation and R(G) represents the sum of squares of the total residuals of the 3D imaging model of the mirror; the minimization of R(G) is a nonlinear least-squares problem, which is solved by the Levenberg-Marquardt algorithm; the bundle adjustment method is introduced to avoid, as far as possible, the low-precision problems caused by low-quality mirrors; according to the bundle adjustment method, R(G) is rewritten into the following formula: $R(G)=\sum_{i=1}^{N^r}g_1^2(G,X_i^r)+g_2^2(G,X_i^r)+g_3^2(G,X_i^r)$; the minimization of R(G) is still a nonlinear least-squares problem, which is solved by the Levenberg-Marquardt algorithm, so that the high-precision calibration parameters n.sup.r and d.sub.w.sup.r are obtained, and the accurate calibration of the reflection matrix is realized.

5. The method according to claim 1 is characterized by step 4: Using the mirror calibration parameters, the 3D information from different perspectives is transformed into a unified world coordinate system based on the reflection matrix of the mirrors, so as to realize high-precision panoramic 3D measurement; that is, 3D measurement of the measured object is achieved through step 2 and step 3; virtual 3D measurement results are obtained from the perspectives of the left mirror and the right mirror, and real 3D measurement results are obtained from the perspective of the real camera; the mirror calibration parameters n.sup.r and d.sub.w.sup.r are used to obtain the high-precision 3D imaging model of the mirror; and according to the high-precision 3D imaging model of the mirror, the virtual 3D measurement results X.sup.r(x.sup.r,y.sup.r,z.sup.r) obtained from the perspectives of the left mirror and the right mirror are respectively converted to the real 3D measurement results X.sup.o(x.sup.o,y.sup.o,z.sup.o) in the world coordinate system, so as to realize high-precision panoramic 3D measurement.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] FIG. 1 is the flow chart of the calibration method of the fringe projection system based on the mirrors of the present invention.

[0012] FIG. 2 is the schematic diagram of the fringe projection measurement system (3D measurement system) with mirrors of the present invention.

[0013] FIG. 3 is a set of 3D feature point pairs of the circle calibration plane: (a) a posture data of the circle calibration plane which can provide 15 feature point pairs. (b) 3D data of virtual points. (c) 3D data of actual points.

[0014] FIG. 4 shows the 3D measurement results of the standard ceramic ball: (a) the 3D measurement results obtained from the perspective of the left mirror. (b) 3D measurement results obtained from the perspective of the real camera. (c) 3D measurement results obtained from the perspective of the right mirror. (d) is the distribution of the errors of (a). (e) is the distribution of the errors of (b). (f) is the distribution of the errors of (c). (g) full-surface 3D measurement results observed from the perspective of the left mirror. (h) full-surface 3D measurement results observed from the perspective of the real camera. (i) full-surface 3D measurement results observed from the perspective of the right mirror. (j) is the distribution of the errors of (g). (k) is the distribution of the errors of (h). (l) is the distribution of the errors of (i).

[0015] FIG. 5 shows the measurement results of Voltaire model: (a) full-surface 3D reconstruction results of Voltaire model. (b) is the full-surface 3D reconstruction result of the left perspective of (a). (c) is the full-surface 3D reconstruction result of the back perspective of (a). (d) is the full-surface 3D reconstruction result of the right perspective of (a).

DESCRIPTION OF THE PREFERRED EMBODIMENT

[0016] Referring to FIG. 1, the calibration method of the fringe projection system based on the mirrors of the invention comprises the following five steps:

[0017] Step 1: The fringe projection measurement system with mirrors is built to obtain the 360-degree 2D information of the measured object. In a traditional measurement system based on fringe projection profilometry (FPP) consisting of a camera and a projector, a left mirror and a right mirror are placed behind the measured object. The angle between the two mirrors is about 120 degrees, as shown in FIG. 2. Through the reflection of mirrors, the camera images the measured object from the perspective of the front, left and right mirrors, so as to obtain 360-degree two-dimensional information of the measured object.

[0018] Step 2: The phase-shifting fringe images of the measured object are obtained. The projector projects three sets of phase-shifting fringe patterns to the measured object. The frequencies of the three sets of phase-shifting fringe patterns are 1, 8, and 64. Any set of phase-shifting fringe patterns projected by the projector is represented as:

[00001] $I_n^p(x^p, y^p) = 128 + 127\cos\left[\frac{2\pi f x^p}{W} - \frac{2\pi n}{N}\right]$

where I.sub.n.sup.p(x.sup.p,y.sup.p) is the phase-shifting fringe pattern projected by the projector, n represents the phase-shifting index of the phase-shifting fringe patterns, n=0, 1, 2, . . . , N−1, N represents the total number of phase-shifting steps, (x.sup.p,y.sup.p) is the pixel coordinates of the projector plane, W is the horizontal resolution of the projector, f is the frequency of the phase-shifting fringe patterns. The projector projects three sets of phase-shifting fringe patterns to the measured object. The frequencies of three sets of phase-shifting fringe patterns are 1, 8, and 64. The fringe patterns within each group have the same frequency. The projected fringe patterns are captured synchronously by the camera, and the three sets of phase-shifting fringe images are collected as the intensity map, which is represented as:

[00002] $I_n(x, y) = A(x, y) + B(x, y)\cos\left[\Phi(x, y) - \frac{2\pi n}{N}\right]$

[0019] where I.sub.n(x,y) is the intensity map of the corresponding phase-shifting fringe image, (x,y) is the pixel coordinates of the camera plane, A(x,y) is the background intensity, B(x,y) is the modulation of the fringes, Φ(x,y) is the phase to be calculated.
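For illustration, the two intensity models above can be simulated in a few lines of NumPy. This is a minimal sketch, not part of the claimed method; the function name, the example resolution, and the array layout are our own assumptions.

```python
import numpy as np

def fringe_patterns(width, height, freq, steps):
    """One set of N-step phase-shifting fringe patterns:
    I_n(x, y) = 128 + 127 * cos(2*pi*f*x/W - 2*pi*n/N)."""
    x = np.tile(np.arange(width), (height, 1))  # projector column index x^p
    return np.stack([
        128 + 127 * np.cos(2 * np.pi * freq * x / width - 2 * np.pi * n / steps)
        for n in range(steps)
    ])

# Three sets with frequencies 1, 8 and 64, as projected in Step 2
# (nine phase-shifting steps per set; 640x480 is an assumed example resolution)
sets = {f: fringe_patterns(640, 480, f, 9) for f in (1, 8, 64)}
```

The 128 + 127 scaling keeps every sample inside the 8-bit range [1, 255] of a typical projector.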

[0020] Step 3: Fringe projection profilometry is used to obtain the 3D information of the measured object. For the intensity maps of three sets of phase-shifting fringe images collected in Step 2, the phase-shifting method is used to calculate the wrapped phase maps with different frequencies. Then, a multi-frequency temporal phase unwrapping algorithm is used to perform phase unwrapping on three sets of obtained wrapped phase maps in turn, and finally an absolute phase map with a frequency of 64 is obtained. By using the calibration parameters between the projector and the camera, the absolute phase map can be converted into three-dimensional information of the measured object. The specific process is as follows.

[0021] (1) The wrapped phase map of the measured object is obtained; that is, the projected fringe patterns are captured synchronously by the camera in Step 2, and the intensity maps of the obtained phase-shifting fringe images are substituted into the following formula to obtain the wrapped phase map φ(x,y):

[00003] $\varphi(x, y) = \arctan\frac{\sum_{n=0}^{N-1} I_n(x, y)\sin(2\pi n/N)}{\sum_{n=0}^{N-1} I_n(x, y)\cos(2\pi n/N)}$

Due to the truncation effect of the function arctan, the obtained phase map φ(x,y) is a wrapped phase with a range of [0, 2π], its relation to the absolute phase map Φ(x,y) is as follows:

[00004] $\Phi(x, y) = \varphi(x, y) + 2\pi k(x, y)$

where k(x,y) is the periodic order of the phase, an integer in the range [0,f−1], and f is the frequency of the fringe pattern. The range of the absolute phase map with unit frequency is [0,2π], so the wrapped phase map with unit frequency is also the absolute phase map.
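The N-step phase retrieval above can be written directly in NumPy. A minimal sketch (names ours): arctan2 of the two quadrature sums is used, and the result is shifted into the full 2π range stated above.

```python
import numpy as np

def wrapped_phase(I):
    """Wrapped phase from an (N, H, W) stack of phase-shifting images:
    phi = arctan( sum_n I_n sin(2*pi*n/N) / sum_n I_n cos(2*pi*n/N) )."""
    N = I.shape[0]
    n = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(I * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(I * np.cos(2 * np.pi * n / N), axis=0)
    return np.mod(np.arctan2(num, den), 2 * np.pi)  # wrapped into [0, 2*pi)
```

The quadrature sums cancel the unknown background A(x,y) and modulation B(x,y), which is why the same function works for all three fringe frequencies.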

[0022] (2) The high-frequency absolute phase map of the measured object is obtained. The absolute phase map is obtained by the multi-frequency temporal phase unwrapping algorithm using the wrapped phase maps with different frequencies; that is, the absolute phase map with unit frequency is used to assist the unwrapping of the absolute phase map with a frequency of 8, and the absolute phase map with a frequency of 8 is used to assist the unwrapping of the absolute phase map with a frequency of 64, as follows:

[00005] $k_h(x, y) = \mathrm{Round}\left(\frac{(f_h/f_l)\,\Phi_l(x, y) - \varphi_h(x, y)}{2\pi}\right), \qquad \Phi_h(x, y) = \varphi_h(x, y) + 2\pi k_h(x, y)$

where f.sub.h is the frequency of the high-frequency fringe images, f.sub.l is the frequency of the low-frequency fringe images, φ.sub.h(x,y) is the wrapped phase of the high-frequency fringe images, k.sub.h(x,y) is the periodic order of the phase of the high-frequency fringe images, Φ.sub.h(x,y) and Φ.sub.l(x,y) are the absolute phases of the high-frequency and low-frequency fringe images respectively, and Round is the rounding function.
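The two formulas above amount to a three-line function applied twice in cascade (frequency 1 → 8 → 64). A minimal sketch (function name ours):

```python
import numpy as np

def unwrap_temporal(phi_h, Phi_l, f_h, f_l):
    """Unwrap the high-frequency wrapped phase phi_h using the
    low-frequency absolute phase Phi_l (multi-frequency TPU)."""
    k = np.round(((f_h / f_l) * Phi_l - phi_h) / (2 * np.pi))  # periodic order k_h
    return phi_h + 2 * np.pi * k

# Cascade: the unit-frequency wrapped phase is already absolute, so
#   Phi_8  = unwrap_temporal(phi_8,  phi_1, 8, 1)
#   Phi_64 = unwrap_temporal(phi_64, Phi_8, 64, 8)
```

Rounding absorbs the phase noise of the scaled low-frequency estimate, which is why the frequency ratio per stage (here 8) is kept moderate.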

[0023] (3) The absolute phase map is converted into the horizontal pixel coordinates of the projector to obtain the corresponding relationship between the camera and the projector. That is, when the absolute phase map Φ.sub.64 (x,y) with a frequency of 64 is obtained, the relationship between each pixel of the camera and the corresponding pixel in the projector can be described as follows:

[00006] $\Phi_{64}(x, y) = \frac{2\pi f x^p}{W}$

where W represents the horizontal resolution of the projector, f is the frequency of the absolute phase map, which is 64, x.sup.p is the horizontal pixel coordinates of the projector, (x,y) and x.sup.p represent the pixel-by-pixel correspondence between the camera and the projector;

[0024] (4) After obtaining the corresponding relationship between the camera and the projector, the calibration parameters between the projector and the camera are used to obtain the 3D information of the measured object. Through system calibration, the calibration parameters of the projector and camera are obtained, and the specific formula is as follows:

[00007] $P_c = A_c M_c = \begin{pmatrix} p_{11}^c & p_{12}^c & p_{13}^c & p_{14}^c \\ p_{21}^c & p_{22}^c & p_{23}^c & p_{24}^c \\ p_{31}^c & p_{32}^c & p_{33}^c & p_{34}^c \end{pmatrix}, \qquad P_p = A_p M_p = \begin{pmatrix} p_{11}^p & p_{12}^p & p_{13}^p & p_{14}^p \\ p_{21}^p & p_{22}^p & p_{23}^p & p_{24}^p \\ p_{31}^p & p_{32}^p & p_{33}^p & p_{34}^p \end{pmatrix}$

where P.sub.c and P.sub.p are the calibration parameters obtained through system calibration, A.sub.c and A.sub.p are the corresponding internal parameters, M.sub.c and M.sub.p are the corresponding external parameters, p.sub.ij.sup.c and p.sub.ij.sup.p are the corresponding calibration elements in P.sub.c and P.sub.p respectively. Therefore, the corresponding 3D information can be obtained by the following formula:

[00008] $\begin{pmatrix} x^w \\ y^w \\ z^w \end{pmatrix} = \begin{pmatrix} p_{11}^c - p_{31}^c x & p_{12}^c - p_{32}^c x & p_{13}^c - p_{33}^c x \\ p_{21}^c - p_{31}^c y & p_{22}^c - p_{32}^c y & p_{23}^c - p_{33}^c y \\ p_{11}^p - p_{31}^p x^p & p_{12}^p - p_{32}^p x^p & p_{13}^p - p_{33}^p x^p \end{pmatrix}^{-1} \begin{pmatrix} p_{34}^c x - p_{14}^c \\ p_{34}^c y - p_{24}^c \\ p_{34}^p x^p - p_{14}^p \end{pmatrix}$

where (x,y) is the pixel coordinate of the camera, x.sup.p is the horizontal pixel coordinate of the projector, and (x.sup.w,y.sup.w,z.sup.w) is the 3D information corresponding to (x,y) and x.sup.p, that is, the 3D information of the measured object.
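Per pixel, the reconstruction above is a single 3×3 linear solve. A minimal sketch, assuming the 3×4 matrices Pc and Pp are already known from system calibration (function and variable names ours):

```python
import numpy as np

def reconstruct_point(Pc, Pp, x, y, xp):
    """Triangulate (x_w, y_w, z_w) from camera pixel (x, y) and projector
    column x_p, given 3x4 calibration matrices Pc (camera) and Pp (projector)."""
    A = np.array([
        [Pc[0, 0] - Pc[2, 0] * x,  Pc[0, 1] - Pc[2, 1] * x,  Pc[0, 2] - Pc[2, 2] * x],
        [Pc[1, 0] - Pc[2, 0] * y,  Pc[1, 1] - Pc[2, 1] * y,  Pc[1, 2] - Pc[2, 2] * y],
        [Pp[0, 0] - Pp[2, 0] * xp, Pp[0, 1] - Pp[2, 1] * xp, Pp[0, 2] - Pp[2, 2] * xp],
    ])
    b = np.array([Pc[2, 3] * x - Pc[0, 3],
                  Pc[2, 3] * y - Pc[1, 3],
                  Pp[2, 3] * xp - Pp[0, 3]])
    return np.linalg.solve(A, b)  # (x_w, y_w, z_w)

# The projector column comes from the absolute phase of sub-step (3):
#   xp = Phi_64 * W / (2 * np.pi * 64)
```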

[0025] Step 4: The calibration of the mirror includes four sub-steps. In sub-step 1, the 3D imaging model of the mirror is presented. Based on the 3D imaging model, the virtual 3D measurement results X.sup.r(x.sup.r,y.sup.r,z.sup.r) obtained from the perspectives of the left mirror and the right mirror can be transformed into the real 3D measurement results X.sup.o(x.sup.o,y.sup.o,z.sup.o) in the world coordinate system, so as to realize the panoramic 3D measurement. In order to obtain the 3D imaging model, sub-step 2 presents a robust, efficient and high-precision calibration method of plane mirrors. In this method, N.sup.r 3D feature point pairs including actual points X.sup.o(x.sup.o,y.sup.o,z.sup.o) and virtual points X.sup.r(x.sup.r,y.sup.r,z.sup.r) are captured by the camera, which are used to obtain the initial estimates n.sup.r and d.sub.w.sup.r of the reflection matrix; then the Levenberg-Marquardt algorithm with bundle adjustment is used to obtain the accurate estimates of n.sup.r and d.sub.w.sup.r of the reflection matrix, so as to realize high-precision mirror calibration and high-precision panoramic 3D measurement. Sub-step 3 describes how to obtain the initial estimates of n.sup.r and d.sub.w.sup.r. Sub-step 4 describes how to use the Levenberg-Marquardt algorithm with bundle adjustment to obtain the accurate estimates of n.sup.r and d.sub.w.sup.r of the reflection matrix. The specific process is as follows.

[0026] 1. The 3D imaging model of the mirror is presented. Based on the reflection model of the mirror proposed by Mariottini et al., the 3D imaging model of the mirror can be expressed as:

[00009] $\begin{bmatrix} I - 2n^r (n^r)^T & 2 d_w^r n^r \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X^r \\ 1 \end{bmatrix} = \begin{bmatrix} X^o \\ 1 \end{bmatrix}, \qquad D^r = \begin{bmatrix} I - 2n^r (n^r)^T & 2 d_w^r n^r \\ 0 & 1 \end{bmatrix}$

where I is the identity matrix, X.sup.o(x.sup.o,y.sup.o,z.sup.o) and X.sup.r(x.sup.r,y.sup.r,z.sup.r) are a group of actual points and virtual points, D.sup.r is the reflection matrix of the mirror, so the calibration parameters of the mirror are n.sup.r and d.sub.w.sup.r, n.sup.r is the normal vector of the mirror, d.sub.w.sup.r is the distance from the origin of the world coordinate system to the mirror plane. From this formula, it can be found that the 3D point cloud data obtained from the mirror can be converted to the actual world coordinate system. Based on the 3D imaging model of the mirror, if n.sup.r and d.sub.w.sup.r are known, the 3D data of virtual points obtained from the mirror can be converted into 3D data of real points in the world coordinate system. Therefore, the key technology to realize panoramic 3D measurement is to calculate the accurate n.sup.r and d.sub.w.sup.r.
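The reflection matrix can be assembled directly from n.sup.r and d.sub.w.sup.r. A minimal sketch following the model above, assuming n.sup.r is a unit vector (function names ours):

```python
import numpy as np

def reflection_matrix(n_r, d_w):
    """4x4 mirror reflection matrix D^r built from the unit normal n^r
    and the distance d_w^r, as in the 3D imaging model above."""
    n = np.asarray(n_r, float).reshape(3, 1)
    D = np.eye(4)
    D[:3, :3] = np.eye(3) - 2 * n @ n.T   # Householder reflection I - 2 n n^T
    D[:3, 3] = (2 * d_w * n).ravel()      # translation term 2 d_w^r n^r
    return D

def mirror_to_real(D, X_r):
    """Convert a virtual point X^r seen in the mirror to the real point X^o."""
    return (D @ np.append(X_r, 1.0))[:3]
```

Since reflecting twice is the identity, D.sup.r is an involution (D·D = I), which is a quick sanity check on any calibrated mirror.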

[0027] 2. The mirror is calibrated. From the 3D imaging model of the mirror, it can be found that the mirror calibration can be realized by capturing a set of 3D feature point pairs by the camera. In the invention, the whole calibration process of the proposed calibration method of plane mirror includes two steps: initial estimation of the reflection matrix and accurate calibration of the reflection matrix using the Levenberg-Marquardt algorithm with bundle adjustment, so as to realize high-precision panoramic 3D measurement.

[0028] 3. The initial estimates of n.sup.r and d.sub.w.sup.r are obtained. Firstly, N.sup.r 3D feature point pairs, including actual points X.sup.o(x.sup.o,y.sup.o,z.sup.o) and virtual points X.sup.r(x.sup.r,y.sup.r,z.sup.r), are observed by the camera to solve the reflection matrix of the mirror. In other words, step 2 and step 3 are used to achieve 3D measurement of the measured object, so as to obtain the 3D information of the 3D feature point pairs. Since the actual point X.sup.o(x.sup.o,y.sup.o,z.sup.o) and the virtual point X.sup.r(x.sup.r,y.sup.r,z.sup.r) are a pair of reflection points, due to the reflection of the mirror, the normal vector {right arrow over (n.sup.r)} is parallel to {right arrow over (X.sup.oX.sup.r)}, that is, {right arrow over (n.sup.r)}×{right arrow over (X.sup.oX.sup.r)}=0. Taking n.sup.r as (a.sup.r,b.sup.r,c.sup.r), the initial estimation of n.sup.r can be obtained by the following formula:

[00010] $\begin{bmatrix} y^r - y^o & x^o - x^r & 0 \\ z^r - z^o & 0 & x^o - x^r \\ 0 & z^r - z^o & y^o - y^r \end{bmatrix} \begin{bmatrix} a^r \\ b^r \\ c^r \end{bmatrix} = 0$

where (x.sup.o,y.sup.o,z.sup.o) is the 3D coordinate of the actual point X.sup.o in the world coordinate system, (x.sup.r,y.sup.r,z.sup.r) is the 3D coordinate of the virtual point X.sup.r in the world coordinate system, and (a.sup.r,b.sup.r,c.sup.r) is the 3D coordinate of the normal vector n.sup.r of the mirror in the world coordinate system. Solving the above formula for the normal vector n.sup.r is a least-squares problem. A more accurate solution is obtained by the SVD algorithm. The last column vector of the V matrix obtained by SVD is the initial estimate of the normal vector n.sup.r, namely (a.sub.0.sup.r,b.sub.0.sup.r,c.sub.0.sup.r).

[0029] Since the actual point X.sup.o(x.sup.o,y.sup.o,z.sup.o) and the virtual point X.sup.r(x.sup.r,y.sup.r,z.sup.r) are symmetric about the mirror, their midpoints must satisfy the plane equation a.sub.0.sup.rx+b.sub.0.sup.ry+c.sub.0.sup.rz+d.sub.w.sup.r=0 of the mirror, and the initial estimate of d.sub.w.sup.r can be obtained by the following formula:

[00011] $d_w^r = -\frac{1}{N^r}\sum_{i=1}^{N^r}\left( a_0^r\,\frac{x_i^o + x_i^r}{2} + b_0^r\,\frac{y_i^o + y_i^r}{2} + c_0^r\,\frac{z_i^o + z_i^r}{2} \right)$

where N.sup.r is the number of 3D feature point pairs.
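The two initial estimates above reduce to one SVD and one mean. A minimal sketch, assuming X_o and X_r are (N^r, 3) arrays of the measured actual/virtual point pairs (names ours):

```python
import numpy as np

def initial_mirror_estimate(X_o, X_r):
    """Initial estimate of the mirror normal n^r (SVD null vector of the
    cross-product constraint) and distance d_w^r (midpoint plane constraint)."""
    d = X_r - X_o                       # direction of each actual -> virtual segment
    zeros = np.zeros(len(d))
    # stack the three rows of n x (X^r - X^o) = 0 for every point pair
    M = np.concatenate([
        np.stack([d[:, 1], -d[:, 0], zeros], axis=1),
        np.stack([d[:, 2], zeros, -d[:, 0]], axis=1),
        np.stack([zeros, d[:, 2], -d[:, 1]], axis=1),
    ])
    _, _, Vt = np.linalg.svd(M)
    n = Vt[-1]                          # unit null vector = normal estimate
    mid = (X_o + X_r) / 2
    d_w = -np.mean(mid @ n)             # midpoints satisfy n . X + d_w = 0
    return n, d_w
```

The SVD returns the normal only up to sign; the sign can be fixed afterwards, e.g. so that the normal points from the mirror toward the camera.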

[0030] 4. The Levenberg-Marquardt algorithm with bundle adjustment is used to calibrate the reflection matrix accurately. After obtaining the initial estimates of n.sup.r and d.sub.w.sup.r, the 3D imaging model of the mirror is rewritten into the following formula based on the Levenberg-Marquardt algorithm:

[00012] $g_1(G) = [1 - 2(a^r)^2]x^r - 2a^rb^ry^r - 2a^rc^rz^r + 2a^rd_w^r - x^o$, $g_2(G) = -2a^rb^rx^r + [1 - 2(b^r)^2]y^r - 2b^rc^rz^r + 2b^rd_w^r - y^o$, $g_3(G) = -2a^rc^rx^r - 2b^rc^ry^r + [1 - 2(c^r)^2]z^r + 2c^rd_w^r - z^o$, $R(G) = \sum_{i=1}^{N^r} g_1^2(G) + g_2^2(G) + g_3^2(G)$

where G={a.sup.r,b.sup.r,c.sup.r,d.sub.w.sup.r}, g.sub.1(G), g.sub.2(G) and g.sub.3(G) represent the residuals of each equation, and R(G) represents the sum of squares of the total residuals of the 3D imaging model of the mirror. The minimization of R(G) is a nonlinear least-squares problem, which is solved by the Levenberg-Marquardt algorithm. (The Levenberg-Marquardt algorithm is a standard algorithm; to use it, the problem to be solved needs to be written in a fixed form. The invention rewrites the 3D imaging model of the mirror into the above four formulas, which can then be solved through the standard solution procedure of the Levenberg-Marquardt algorithm.) There are two key factors (X.sup.o and X.sup.r) that affect the accuracy of the final optimization results. It is well known that fringe projection profilometry can realize high-precision 3D measurement of the measured object; in this system, the measurement accuracy of the actual point X.sup.o(x.sup.o,y.sup.o,z.sup.o) is about 30 μm. Therefore, the influence of the second factor X.sup.r should be mainly considered. In the above calibration process, the virtual point X.sup.r(x.sup.r,y.sup.r,z.sup.r) is always regarded as known input data, but the imperfect flatness and uneven reflection coefficient of the mirror lead to low 3D measurement accuracy of the virtual point X.sup.r(x.sup.r,y.sup.r,z.sup.r), which introduces a systematic error into the calibration of the mirror and thus yields calibration results with low accuracy. By further improving the manufacturing quality of the mirror, this disadvantage can be overcome to a certain extent, but the cost is high. Therefore, the bundle adjustment method is introduced to avoid, as far as possible, the low-accuracy problem caused by low-quality mirrors. According to bundle adjustment, the above formula R(G) can be rewritten as the following formula:

[00013] $R(G) = \sum_{i=1}^{N^r} g_1^2(G, X_i^r) + g_2^2(G, X_i^r) + g_3^2(G, X_i^r)$

[0031] Although the total number of variables has increased from 4 to 4+3N.sup.r, the minimization of R(G) is still a nonlinear least-squares problem, which is solved by the Levenberg-Marquardt algorithm, so that the high-precision calibration parameters n.sup.r and d.sub.w.sup.r are obtained, and the accurate calibration of the reflection matrix is realized.
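For orientation, the core refinement over G={a.sup.r,b.sup.r,c.sup.r,d.sub.w.sup.r} can be sketched with SciPy's Levenberg-Marquardt solver. This is an illustrative sketch of the first form of R(G) only; the bundle-adjustment variant additionally treats the 3N.sup.r virtual-point coordinates as unknowns, which this sketch omits, and the final normalization of the normal vector is our own pragmatic choice since the g residuals assume a unit normal.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_mirror(X_o, X_r, n0, d0):
    """Levenberg-Marquardt refinement of G = (a, b, c, d_w) by minimizing
    the stacked residuals g1, g2, g3 over all N^r point pairs."""
    def residuals(G):
        n, d_w = G[:3], G[3]
        R = np.eye(3) - 2.0 * np.outer(n, n)          # I - 2 n n^T
        # g_k per pair: reflected virtual point plus 2 d_w n, minus actual point
        return ((X_r @ R.T + 2.0 * d_w * n) - X_o).ravel()
    sol = least_squares(residuals, np.append(n0, d0), method='lm')
    n = sol.x[:3]
    norm = np.linalg.norm(n)
    return n / norm, sol.x[3] / norm                  # re-normalize (n, d_w)
```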

[0032] Step 5: The 3D measurement of the measured object is performed through steps 2 and 3. The virtual 3D measurement results from the perspectives of the left mirror and the right mirror are obtained, and the real 3D measurement results from the perspective of the real camera are obtained. The high-precision 3D imaging model of the mirror is built using the mirror calibration parameters n.sup.r and d.sub.w.sup.r obtained in step 4. The virtual 3D measurement results X.sup.r(x.sup.r,y.sup.r,z.sup.r) obtained from the perspectives of the left mirror and the right mirror are transformed into the real 3D measurement results X.sup.o(x.sup.o,y.sup.o,z.sup.o) in the world coordinate system by using the 3D imaging model of the mirror, so as to realize high-precision panoramic 3D measurement.
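The virtual-to-real transformation in Step 5 can be written compactly as a 4×4 homogeneous reflection (Householder) matrix built from the calibrated plane parameters. The sketch below assumes the plane n·x + d = 0 with unit normal n; function names are illustrative.

```python
import numpy as np

def reflection_matrix(n, d):
    """4x4 homogeneous reflection across the mirror plane n . x + d = 0
    (n is normalized inside).  Applying it to a virtual point X^r
    recovers the corresponding real point X^o."""
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    H = np.eye(4)
    H[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)   # linear part
    H[:3, 3] = -2.0 * d * n                        # translation part
    return H

def to_world(Xr, n, d):
    """Map virtual 3D points (shape (N, 3)) into the world frame."""
    H = reflection_matrix(n, d)
    Xh = np.hstack([Xr, np.ones((len(Xr), 1))])    # homogeneous coords
    return (Xh @ H.T)[:, :3]

# Mirror plane z = 1: the virtual point (0, 0, 2) maps back to the origin.
Xw = to_world(np.array([[0.0, 0.0, 2.0]]), [0.0, 0.0, 1.0], -1.0)
```

Because a reflection is an involution, the same matrix maps real points to virtual ones, which is convenient for consistency checks.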

Example

[0033] In order to verify the effectiveness of the method of the invention, a camera (acA2440-75 um, Basler), a projector (LightCrafter 4500PRO, TI), two front-surface mirrors (30 cm×30 cm), and a computer are used to construct a 3D measuring device based on the calibration method for fringe projection systems based on plane mirrors. The acquisition speed of the device for 3D measurement of objects is 25 frames per second. As described in step 1, in the traditional measurement system based on fringe projection profilometry (FPP) composed of a camera and a projector, two mirrors are placed behind the measured object, and the angle between the two mirrors is about 120 degrees. Through the reflection of the mirrors, the camera can image the measured object from the front and from two other perspectives (the left mirror and the right mirror), so as to obtain 360-degree 2D information of the measured object. As described in step 2, three groups of nine-step phase-shifting fringe patterns with different frequencies are projected and collected; the frequencies of the three groups are 1, 8, and 64, respectively. As described in step 3, three groups of wrapped phase maps with different frequencies are calculated by the phase-shifting method. Then, the multi-frequency temporal phase unwrapping algorithm is used to unwrap the three groups of wrapped phase maps in turn, and finally the absolute phase map with a frequency of 64 is obtained. Using the calibration parameters between the projector and the camera, the absolute phase map is transformed into the 3D information of the measured object. As described in step 4, the mirror calibration is realized by capturing a set of 3D feature point pairs, as shown in FIG. 3.
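The phase-recovery pipeline of steps 2 and 3 can be sketched as follows: an N-step phase-shifting formula recovers the wrapped phase from the fringe stack, and the multi-frequency temporal algorithm picks each pixel's fringe order from the scaled lower-frequency phase. Sign and shift conventions vary between setups, so this is one common convention, not necessarily the patent's; function names are illustrative.

```python
import numpy as np

def wrapped_phase(I):
    """N-step phase shifting: I is a stack of N fringe images
    I_k = A + B*cos(phi - 2*pi*k/N).  Returns the wrapped phase of phi
    in (-pi, pi] via the standard least-squares (arctangent) formula."""
    N = len(I)
    k = np.arange(N).reshape(-1, *([1] * (I.ndim - 1)))  # broadcast axis
    num = np.sum(I * np.sin(2 * np.pi * k / N), axis=0)
    den = np.sum(I * np.cos(2 * np.pi * k / N), axis=0)
    return np.arctan2(num, den)

def unwrap_temporal(phi_low, phi_high, ratio):
    """Temporal phase unwrapping: phi_low is an already-absolute
    lower-frequency phase, phi_high the wrapped higher-frequency phase,
    and ratio their frequency ratio (e.g. 8 for 1 -> 8 or 8 -> 64).
    The scaled low phase selects the integer fringe order per pixel."""
    k = np.round((ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Synthetic 9-step stack along one scanline.
phi = np.linspace(-3.0, 3.0, 50)
k9 = np.arange(9).reshape(-1, 1)
I = 0.5 + 0.4 * np.cos(phi - 2 * np.pi * k9 / 9)
phi_rec = wrapped_phase(I)
```

Chaining `unwrap_temporal` twice (1 → 8, then 8 → 64) reproduces the three-frequency scheme of the embodiment.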
In the process of mirror calibration, the proposed method captures multiple postures of a high-precision circular calibration board (6 postures are used in this experiment), and each posture provides 15 feature point pairs. These feature point pairs are then used, in turn, for the initial estimation of the reflection matrix and for its accurate calibration using the Levenberg-Marquardt algorithm with bundle adjustment. In order to quantitatively analyze the robustness of the proposed calibration method, the calibration residuals of the different steps are calculated, as shown in Table 1. From the comparison results in Table 1, it can be found that the proposed method provides relatively accurate initial guesses of n.sup.r and d.sub.w.sup.r, with RMS errors of 0.0704 mm (left mirror) and 0.0611 mm (right mirror). Based on these estimates, the calibration residuals are further reduced to 0.0578 mm and 0.0534 mm by the Levenberg-Marquardt algorithm, which proves its effectiveness. However, the low-precision 3D measurement of the virtual points introduces systematic errors into the calibration process, which reduces the reliability of the calibration results. Therefore, the bundle adjustment method is introduced into the calibration to obtain high-precision mirror calibration results, as shown in Table 1.

TABLE 1  Comparison of calibration residuals at different steps

                                  Levenberg-Marquardt   Levenberg-Marquardt algorithm
RMS (mm)       Initial estimate   algorithm             with bundle adjustment
Left mirror    0.0704             0.0578                1.7911 × 10^−5
Right mirror   0.0611             0.0534                1.1588 × 10^−5

[0034] In order to further evaluate the accuracy of the proposed method, the proposed system is used to measure a standard ceramic ball with a diameter of 50.8 mm. The 3D measurement results of a single perspective and of the whole surface are shown in FIG. 4. FIG. 4(a) shows the 3D measurement results obtained from the perspective of the left mirror, FIG. 4(b) those from the perspective of the real camera, and FIG. 4(c) those from the perspective of the right mirror. FIG. 4(g) shows the full-surface 3D measurement results observed from the perspective of the left mirror, FIG. 4(h) those observed from the perspective of the real camera, and FIG. 4(i) those observed from the perspective of the right mirror. For the single-perspective 3D measurement results, spherical fitting yields RMS measurement errors of 27.577 μm, 47.531 μm, and 44.791 μm, respectively, and the accuracy of the full-surface measurement result is 65.122 μm. FIG. 4(d), FIG. 4(e), and FIG. 4(f) show the error distributions of FIG. 4(a), FIG. 4(b), and FIG. 4(c), respectively; FIG. 4(j), FIG. 4(k), and FIG. 4(l) show the error distributions of FIG. 4(g), FIG. 4(h), and FIG. 4(i), respectively. The results verify that the proposed method can achieve high-precision panoramic 3D profile measurement.
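The spherical fitting used in the accuracy evaluation can be done with a standard linear least-squares sphere fit; the text does not specify the fitting algorithm, so the algebraic formulation below is an assumption, with illustrative function names.

```python
import numpy as np

def fit_sphere(P):
    """Linear least-squares sphere fit.  |p - c|^2 = r^2 is rewritten as
    p.p = 2 c.p + (r^2 - |c|^2), which is linear in the unknowns
    (c_x, c_y, c_z, r^2 - |c|^2).  Returns centre c, radius r, and the
    RMS of the radial residuals |p - c| - r (the reported error metric)."""
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = np.sum(P * P, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c = sol[:3]
    r = np.sqrt(sol[3] + c @ c)
    rms = np.sqrt(np.mean((np.linalg.norm(P - c, axis=1) - r) ** 2))
    return c, r, rms

# Synthetic check: exact points on a 25.4 mm-radius sphere (50.8 mm ball).
rng = np.random.default_rng(1)
v = rng.normal(size=(200, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
P = np.array([1.0, 2.0, 3.0]) + 25.4 * v
c, r, rms = fit_sphere(P)
```

The algebraic fit is a common closed-form choice; a geometric (orthogonal-distance) refinement could follow it when the points are noisy.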

[0035] Finally, the Voltaire model is measured, and the corresponding full-surface 3D reconstruction results are shown in FIG. 5. The corresponding results from three different perspectives are also given to illustrate the reliability of the proposed method. FIG. 5(a) is the full-surface 3D reconstruction result of the Voltaire model; FIG. 5(b), FIG. 5(c), and FIG. 5(d) show the result of FIG. 5(a) viewed from the left, real-camera, and right perspectives, respectively. The experimental results on these different objects show that the method can realize high-precision panoramic 3D profile measurement of objects with complex surfaces.