A DEEP LEARNING-BASED TEMPORAL PHASE UNWRAPPING METHOD FOR FRINGE PROJECTION PROFILOMETRY
20210356258 · 2021-11-18
Inventors
- Qian Chen (Nanjing, CN)
- Chao Zuo (Nanjing, CN)
- Shijie Feng (Nanjing, CN)
- Yuzhen Zhang (Nanjing, CN)
- Guohua Gu (Nanjing, CN)
CPC classification
G06N3/049
PHYSICS
Abstract
The invention discloses a deep learning-based temporal phase unwrapping method for fringe projection profilometry. First, four sets of three-step phase-shifting fringe patterns with different frequencies (1, 8, 32, and 64) are projected onto the tested objects. The three-step phase-shifting fringe images acquired by the camera are processed with a three-step phase-shifting algorithm to obtain the wrapped phase maps. Then, a multi-frequency temporal phase unwrapping (MF-TPU) algorithm is used to unwrap the wrapped phase maps and obtain the fringe order map of the high-frequency phase with 64 periods. A residual convolutional neural network is built; its input data are the wrapped phase maps with frequencies of 1 and 64, and its output data are the fringe order map of the high-frequency phase with 64 periods. Finally, a training dataset and a validation dataset are built to train and validate the network, and the network makes predictions on a test dataset to output the fringe order map of the high-frequency phase with 64 periods. The invention exploits deep learning to unwrap a wrapped phase map with a frequency of 64 using only a wrapped phase map with a frequency of 1, obtaining an absolute phase map with fewer phase errors and higher accuracy.
Claims
1. A deep learning-based temporal phase unwrapping method for fringe projection profilometry is characterized in that the specific steps are as follows: step one, four sets of three-step phase-shifting fringe patterns with different frequencies (1, 8, 32, and 64) are projected onto the tested objects; the projected fringe patterns are captured by the camera simultaneously to acquire four sets of three-step phase-shifting fringe images; step two, the three-step phase-shifting fringe images acquired by the camera are processed to obtain the wrapped phase maps using a three-step phase-shifting algorithm; step three, a multi-frequency temporal phase unwrapping (MF-TPU) algorithm is used to unwrap the four wrapped phase maps successively to obtain a fringe order map and an absolute phase map of the high-frequency phase with 64 periods; step four, a residual convolutional neural network is built to implement phase unwrapping; steps one to three are repeatedly performed to obtain multiple sets of data, which are divided into a training dataset, a validation dataset, and a test dataset; the training dataset is used to train the residual convolutional neural network; the validation dataset is used to verify the performance of the trained network; step five, the residual convolutional neural network after training and validation makes predictions on the test dataset to realize the precision evaluation of the network and output the fringe order map of the high-frequency phase with 64 periods.
2. According to claim 1, a deep learning-based temporal phase unwrapping method for fringe projection profilometry is characterized by step one wherein four sets of three-step phase-shifting fringe patterns with different frequencies are projected onto the tested objects; each set of patterns contains three fringe patterns with the same frequency and different initial phases; any set of three-step phase-shifting fringe patterns projected by the projector can be represented as:
$$I_1^p(x^p, y^p) = 128 + 127\cos[2\pi f x^p/W]$$
$$I_2^p(x^p, y^p) = 128 + 127\cos[2\pi f x^p/W + 2\pi/3]$$
$$I_3^p(x^p, y^p) = 128 + 127\cos[2\pi f x^p/W + 4\pi/3]$$
where $I_1^p(x^p, y^p)$, $I_2^p(x^p, y^p)$, and $I_3^p(x^p, y^p)$ are the three-step phase-shifting fringe patterns projected by the projector; $(x^p, y^p)$ is the pixel coordinate of the projector; $W$ is the horizontal resolution of the projector; $f$ is the frequency of the phase-shifting fringe patterns; a DLP projector is used to project four sets of three-step phase-shifting fringe patterns onto the tested objects; the frequencies of the four sets of three-step phase-shifting fringe patterns are 1, 8, 32, and 64, respectively; each set of three fringe patterns has the same frequency; the projected fringe patterns are captured by the camera simultaneously; the acquired three-step phase-shifting fringe images are represented as:
$$I_1(x, y) = A(x, y) + B(x, y)\cos[\Phi(x, y)]$$
$$I_2(x, y) = A(x, y) + B(x, y)\cos[\Phi(x, y) + 2\pi/3]$$
$$I_3(x, y) = A(x, y) + B(x, y)\cos[\Phi(x, y) + 4\pi/3]$$
where $I_1(x, y)$, $I_2(x, y)$, and $I_3(x, y)$ are the three-step phase-shifting fringe images; $(x, y)$ is the pixel coordinate of the camera; $A(x, y)$ is the average intensity; $B(x, y)$ is the intensity modulation; $\Phi(x, y)$ is the phase distribution of the measured object.
3. According to claim 2, a deep learning-based temporal phase unwrapping method for fringe projection profilometry is characterized by step two wherein the wrapped phase $\varphi(x, y)$ can be obtained as:
$$\varphi(x, y) = \arctan\frac{\sqrt{3}\,[I_3(x, y) - I_2(x, y)]}{2I_1(x, y) - I_2(x, y) - I_3(x, y)}$$
due to the truncation effect of the arctangent function, the obtained phase $\varphi(x, y)$ is wrapped within the range $[0, 2\pi]$, and its relationship with the phase distribution $\Phi(x, y)$ is:
$$\Phi(x, y) = \varphi(x, y) + 2\pi k(x, y)$$
where $k(x, y)$ represents the fringe order of $\Phi(x, y)$, and its value ranges from 0 to $N-1$; $N$ is the period number of the fringe patterns (i.e., $N = f$).
4. According to claim 1, a deep learning-based temporal phase unwrapping method for fringe projection profilometry is characterized by step three wherein the distribution range of the absolute phase map with unit frequency is $[0, 2\pi]$, so the wrapped phase map with unit frequency is an absolute phase map; by using a multi-frequency temporal phase unwrapping (MF-TPU) algorithm, an absolute phase map with a frequency of 8 can be unwrapped with the aid of the absolute phase map with unit frequency; an absolute phase map with a frequency of 32 can be unwrapped with the aid of the absolute phase map with a frequency of 8; an absolute phase map with a frequency of 64 can be unwrapped with the aid of the absolute phase map with a frequency of 32; the absolute phase map can be calculated by the following formulas:
$$k_h(x, y) = \mathrm{Round}\left[\frac{(f_h/f_l)\,\Phi_l(x, y) - \varphi_h(x, y)}{2\pi}\right]$$
$$\Phi_h(x, y) = \varphi_h(x, y) + 2\pi k_h(x, y)$$
where $f_h$ and $f_l$ are the frequencies of the high-frequency and low-frequency fringe images; $\varphi_h(x, y)$ is the wrapped phase map of the high-frequency fringe images; $k_h(x, y)$ is its fringe order map; $\Phi_h(x, y)$ and $\Phi_l(x, y)$ are the absolute phase maps of the high-frequency and low-frequency fringe images; and Round( ) is the rounding operation.
5. According to claim 2, a deep learning-based temporal phase unwrapping method for fringe projection profilometry is characterized by step four wherein a residual convolutional neural network is built, consisting of six modules, including convolutional layers, pooling layers, concatenate layers, residual blocks, and upsampling blocks; after the network is built, steps one to three are repeatedly performed to obtain multiple sets of data, which are divided into a training dataset, a validation dataset, and a test dataset; for the residual convolutional neural network, the input data are set to be the wrapped phase maps with frequencies of 1 and 64, and the output data are set to be the fringe order map of the high-frequency phase with a frequency of 64; to monitor the accuracy of the trained neural network on data that it has never seen before, a validation dataset is created that is separate from the training scenarios; before training the residual convolutional neural network, the acquired data is preprocessed; because the fringe image obtained by the camera contains both the background and the tested object, the background is removed by the following equation:
$$M(x, y) = \frac{1}{3}\sqrt{3\,[I_2(x, y) - I_3(x, y)]^2 + [2I_1(x, y) - I_2(x, y) - I_3(x, y)]^2}$$
6. According to claim 1, a deep learning-based temporal phase unwrapping method for fringe projection profilometry is characterized by step five wherein the residual convolutional neural network predicts the output data based on the input data in the test dataset; by comparing the real output data in the test dataset with the output data predicted by the network, the comparison results are used to evaluate the accuracy of the network.
Description
DESCRIPTION OF THE PREFERRED EMBODIMENT
[0014] The invention provides a deep learning-based temporal phase unwrapping method for fringe projection profilometry. The steps of the invention are as follows: step one, four sets of three-step phase-shifting fringe patterns with different frequencies are projected onto the tested objects. Each set of patterns contains three fringe patterns with the same frequency and different initial phases. Any set of three-step phase-shifting fringe patterns projected by the projector can be represented as:
$$I_1^p(x^p, y^p) = 128 + 127\cos[2\pi f x^p/W]$$
$$I_2^p(x^p, y^p) = 128 + 127\cos[2\pi f x^p/W + 2\pi/3]$$
$$I_3^p(x^p, y^p) = 128 + 127\cos[2\pi f x^p/W + 4\pi/3]$$
where $I_1^p(x^p, y^p)$, $I_2^p(x^p, y^p)$, and $I_3^p(x^p, y^p)$ are the three-step phase-shifting fringe patterns projected by the projector. $(x^p, y^p)$ is the pixel coordinate of the projector. $W$ is the horizontal resolution of the projector. $f$ is the frequency of the phase-shifting fringe patterns. A DLP projector is used to project four sets of three-step phase-shifting fringe patterns onto the tested objects. The frequencies of the four sets of three-step phase-shifting fringe patterns are 1, 8, 32, and 64, respectively. Each set of three fringe patterns has the same frequency. The projected fringe patterns are captured by the camera simultaneously. The acquired three-step phase-shifting fringe images are represented as:
$$I_1(x, y) = A(x, y) + B(x, y)\cos[\Phi(x, y)]$$
$$I_2(x, y) = A(x, y) + B(x, y)\cos[\Phi(x, y) + 2\pi/3]$$
$$I_3(x, y) = A(x, y) + B(x, y)\cos[\Phi(x, y) + 4\pi/3]$$
where $I_1(x, y)$, $I_2(x, y)$, and $I_3(x, y)$ are the three-step phase-shifting fringe images. $(x, y)$ is the pixel coordinate of the camera. $A(x, y)$ is the average intensity. $B(x, y)$ is the intensity modulation. $\Phi(x, y)$ is the phase distribution of the measured object.
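For illustration, a minimal NumPy sketch of the pattern generation in step one is given below; the function name is an assumption, and the 912×1140 resolution is taken from the LightCrafter 4500Pro used in the implementation example:

```python
import numpy as np

def make_phase_shifting_patterns(freqs=(1, 8, 32, 64), width=912, height=1140):
    """Generate four sets of three-step phase-shifting fringe patterns,
    I_n(x, y) = 128 + 127*cos(2*pi*f*x/W + 2*pi*(n-1)/3), n = 1, 2, 3."""
    x = np.arange(width)  # projector pixel column x^p
    patterns = {}
    for f in freqs:
        shots = []
        for n in range(3):  # phase shifts 0, 2*pi/3, 4*pi/3
            row = 128 + 127 * np.cos(2 * np.pi * f * x / width + 2 * np.pi * n / 3)
            shots.append(np.tile(row, (height, 1)).astype(np.uint8))
        patterns[f] = shots
    return patterns
```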
[0015] Step two, the wrapped phase $\varphi(x, y)$ can be obtained as:
$$\varphi(x, y) = \arctan\frac{\sqrt{3}\,[I_3(x, y) - I_2(x, y)]}{2I_1(x, y) - I_2(x, y) - I_3(x, y)}$$
Due to the truncation effect of the arctangent function, the obtained phase $\varphi(x, y)$ is wrapped within the range $[0, 2\pi]$, and its relationship with $\Phi(x, y)$ is:
$$\Phi(x, y) = \varphi(x, y) + 2\pi k(x, y)$$
where $k(x, y)$ represents the fringe order of $\Phi(x, y)$, and its value ranges from 0 to $N-1$. $N$ is the period number of the fringe patterns (i.e., $N = f$).
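A minimal sketch of this three-step computation follows (the function name is an assumption; `np.arctan2` natively returns values in $(-\pi, \pi]$, which are shifted here into $[0, 2\pi)$ to match the range used above):

```python
import numpy as np

def three_step_wrapped_phase(I1, I2, I3):
    """Three-step phase-shifting algorithm: recover the wrapped phase
    phi(x, y) from the three fringe images (float arrays)."""
    phi = np.arctan2(np.sqrt(3.0) * (I3 - I2), 2.0 * I1 - I2 - I3)
    return np.mod(phi, 2.0 * np.pi)  # wrap into [0, 2*pi)
```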
[0016] Step three, the distribution range of the absolute phase map with unit frequency is $[0, 2\pi]$, so the wrapped phase map with unit frequency is itself an absolute phase map. By using a multi-frequency temporal phase unwrapping (MF-TPU) algorithm, an absolute phase map with a frequency of 8 can be unwrapped with the aid of the absolute phase map with unit frequency; an absolute phase map with a frequency of 32 can be unwrapped with the aid of the absolute phase map with a frequency of 8; and an absolute phase map with a frequency of 64 can be unwrapped with the aid of the absolute phase map with a frequency of 32. The absolute phase map can be calculated by the following formulas:
$$k_h(x, y) = \mathrm{Round}\left[\frac{(f_h/f_l)\,\Phi_l(x, y) - \varphi_h(x, y)}{2\pi}\right]$$
$$\Phi_h(x, y) = \varphi_h(x, y) + 2\pi k_h(x, y)$$
where $f_h$ is the frequency of the high-frequency fringe images; $f_l$ is the frequency of the low-frequency fringe images; $\varphi_h(x, y)$ is the wrapped phase map of the high-frequency fringe images; $k_h(x, y)$ is the fringe order map of the high-frequency fringe images; $\Phi_h(x, y)$ is the absolute phase map of the high-frequency fringe images; $\Phi_l(x, y)$ is the absolute phase map of the low-frequency fringe images; and Round( ) is the rounding operation. In principle, the MF-TPU algorithm could obtain the absolute phase by directly using the unit-frequency absolute phase to unwrap the wrapped phase with a frequency of 64. However, due to the non-negligible noise and other error sources in actual measurement, the MF-TPU algorithm cannot reliably unwrap the high-frequency wrapped phase map with a frequency of 64 using only the low-frequency wrapped phase map with a frequency of 1; the result contains a large number of error points. Therefore, the MF-TPU algorithm generally uses multiple sets of wrapped phase maps with different frequencies to sequentially unwrap the high-frequency wrapped phase map, finally obtaining the absolute phase with a frequency of 64. As a result, the MF-TPU algorithm consumes considerable time and cannot achieve fast, high-precision 3D measurements based on fringe projection profilometry.
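These formulas translate directly into the following sketch of one MF-TPU step, chained through the 1 → 8 → 32 → 64 frequency sequence (function and variable names are illustrative assumptions):

```python
import numpy as np

def mf_tpu_step(phi_h, Phi_l, f_h, f_l):
    """One MF-TPU step: unwrap the high-frequency wrapped phase phi_h
    using the already-unwrapped low-frequency absolute phase Phi_l."""
    k_h = np.round(((f_h / f_l) * Phi_l - phi_h) / (2.0 * np.pi))
    Phi_h = phi_h + 2.0 * np.pi * k_h
    return Phi_h, k_h

def mf_tpu(phi):
    """Chain 1 -> 8 -> 32 -> 64; phi maps frequency -> wrapped phase map.
    The unit-frequency wrapped phase is already an absolute phase."""
    Phi = phi[1]
    for f_l, f_h in [(1, 8), (8, 32), (32, 64)]:
        Phi, k = mf_tpu_step(phi[f_h], Phi, f_h, f_l)
    return Phi, k  # absolute phase and fringe order map at frequency 64
```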
[0017] Step four, a residual convolutional neural network is built to implement phase unwrapping. Steps one to three are repeatedly performed to obtain multiple sets of data, which are divided into a training dataset, a validation dataset, and a test dataset. The training dataset is used to train the residual convolutional neural network, and the validation dataset is used to verify the performance of the trained network.
[0018] The network consists of six modules, including convolutional layers, pooling layers, concatenate layers, residual blocks, and upsampling blocks. Although these modules are existing components, the innovation of the invention lies in how the existing modules are combined into a network model that enables phase unwrapping.
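A heavily simplified Keras sketch of such a network is given below; it only illustrates how these module types can be wired together to map the two wrapped phase maps (frequencies 1 and 64) to a fringe order map. The layer counts, filter widths, kernel sizes, and the 640×480 input size are placeholder assumptions, not the patent's exact six-module architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def residual_block(x, filters, wd=1e-4):
    # L2 regularization in each convolution layer of the residual block,
    # as stated in the text (the 3x3 kernel size is an assumption).
    y = layers.Conv2D(filters, 3, padding="same", activation="relu",
                      kernel_regularizer=regularizers.l2(wd))(x)
    y = layers.Conv2D(filters, 3, padding="same",
                      kernel_regularizer=regularizers.l2(wd))(y)
    return layers.Activation("relu")(layers.Add()([x, y]))

def upsampling_block(x, filters, wd=1e-4):
    x = layers.UpSampling2D(2)(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu",
                         kernel_regularizer=regularizers.l2(wd))(x)

inp = layers.Input(shape=(480, 640, 2))        # wrapped phases, f = 1 and f = 64
c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
p1 = layers.MaxPooling2D(2)(c1)                # pooling layer
r1 = residual_block(p1, 32)                    # residual block
u1 = upsampling_block(r1, 32)                  # upsampling block
m1 = layers.Concatenate()([c1, u1])            # concatenate layer (skip link)
out = layers.Conv2D(1, 3, padding="same")(m1)  # fringe order map k(x, y)

model = tf.keras.Model(inp, out)
```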
Before training the residual convolutional neural network, the acquired data is preprocessed. Because the fringe image obtained by the camera contains both the background and the tested object, the background is removed by the following equation:
$$M(x, y) = \frac{1}{3}\sqrt{3\,[I_2(x, y) - I_3(x, y)]^2 + [2I_1(x, y) - I_2(x, y) - I_3(x, y)]^2}$$
where $M(x, y)$ is the intensity modulation in actual measurement. The modulation at points belonging to the background is much smaller than the modulation at points on the measured objects, so the background can be removed by setting a threshold value. The data after the background removal operation is used as the dataset of the residual convolutional neural network for training. In the network configuration, the loss function is set as the mean square error (MSE), the optimizer is Adam, the mini-batch size is 2, and the number of training epochs is set to 500. To avoid over-fitting, a common problem of deep neural networks, L2 regularization is adopted in each convolution layer of the residual blocks and upsampling blocks rather than in all convolution layers of the proposed network, which enhances the generalization ability of the network. The training dataset is used to train the residual convolutional neural network. The validation dataset is used to verify the performance of the trained network.
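The preprocessing and training configuration described above might be sketched as follows; the threshold value of 8.0 is an assumption (the patent only states that a threshold is set), and the `train` helper simply applies the stated MSE/Adam/mini-batch-2/500-epoch settings to a Keras model such as the placeholder above:

```python
import numpy as np

def remove_background(phi, I1, I2, I3, threshold=8.0):
    """Zero out background pixels whose intensity modulation M(x, y)
    falls below a threshold (the value 8.0 is an assumed setting)."""
    M = np.sqrt(3.0 * (I2 - I3) ** 2 + (2.0 * I1 - I2 - I3) ** 2) / 3.0
    return np.where(M > threshold, phi, 0.0)

def train(model, train_x, train_y, val_x, val_y):
    """Training configuration stated in the text: MSE loss, Adam
    optimizer, mini-batch size 2, 500 training epochs."""
    model.compile(optimizer="adam", loss="mse")
    model.fit(train_x, train_y, batch_size=2, epochs=500,
              validation_data=(val_x, val_y))
```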
[0019] Step five, the residual convolutional neural network predicts the output data based on the input data in the test dataset. By comparing the real output data in the test dataset with the output data predicted by the network, the comparison results are used to evaluate the accuracy of the network. Due to the non-negligible noise and other error sources in actual measurement, the MF-TPU algorithm cannot unwrap the high-frequency wrapped phase map with a frequency of 64 using only the low-frequency wrapped phase map with a frequency of 1; the result contains a large number of error points. The invention instead uses a deep learning approach to achieve temporal phase unwrapping: the residual convolutional neural network exploits the low-frequency wrapped phase map with a frequency of 1 to unwrap the high-frequency wrapped phase map with a frequency of 64, and the resulting absolute phase map has fewer phase errors and higher accuracy than that of the MF-TPU algorithm.
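One plausible way to carry out this comparison (the patent does not prescribe a specific metric) is a pixel-wise check of the rounded predicted fringe order against the MF-TPU ground truth:

```python
import numpy as np

def fringe_order_accuracy(k_pred, k_true):
    """Fraction of pixels whose rounded predicted fringe order equals
    the MF-TPU ground-truth fringe order."""
    return float(np.mean(np.round(k_pred) == k_true))
```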
Example of Implementation
[0020] To verify the actual performance of the proposed method, a monochrome camera (Basler acA640-750um with a resolution of 640×480), a DLP projector (LightCrafter 4500Pro with a resolution of 912×1140), and a computer are used to construct a 3D measurement system based on the deep learning-based temporal phase unwrapping method for fringe projection profilometry.