DEEP LEARNING-BASED DIGITAL HOLOGRAPHIC CONTINUOUS PHASE NOISE REDUCTION METHOD FOR MICROSTRUCTURE MEASUREMENT
20240361727 · 2024-10-31
Assignee
Inventors
- Benyong Chen (Zhejiang, CN)
- Jianjun TANG (Zhejiang, CN)
- Liping Yan (Zhejiang, CN)
- Liu HUANG (Zhejiang, CN)
CPC classification
G03H1/0866
PHYSICS
International classification
G03H1/08
PHYSICS
G01B11/25
PHYSICS
Abstract
A deep learning-based digital holographic continuous phase noise reduction method for microstructure measurement is provided. A MEMS microstructure is simulated to generate an object phase image through generation of random matrix superposition, noise in a digital holographic continuous phase map is simultaneously simulated to generate a noise grayscale image, and a simulation data set is thus created. An end-to-end convolutional neural network is designed and trained to obtain a trained convolutional neural network. A holographic interference pattern of an object under measurement is collected by photographing, and after spectrum extraction, angular spectrum diffraction, phase unwrapping, and distortion compensation, a continuous phase map containing only the object phase and noise is obtained and input into the trained convolutional neural network to obtain an object phase map. The simulation data set is created accurately in the disclosure, thereby avoiding the difficulty of collecting a large amount of experimental data.
Claims
1. A deep learning-based digital holographic continuous phase noise reduction method for microstructure measurement, wherein: step one: simulating a MEMS microstructure to generate an object phase image through generation of random matrix superposition, simultaneously simulating noise in a digital holographic continuous phase map to generate a noise grayscale image, adding the object phase image and the noise grayscale image as input data, treating the object phase image as a label to create a simulation data set, designing an end-to-end convolutional neural network combined with a subspace projection method, and inputting the simulation data set into the convolutional neural network to train the convolutional neural network to obtain a trained convolutional neural network; step two: collecting a holographic interference pattern of an object under measurement by photographing, obtaining object light field complex amplitude U containing information of an object to be measured through image processing, and extracting and wrapping phase information in the object light field complex amplitude U between (−π, π] to obtain a wrapped phase map φ_0; step three: performing an unwrapping operation on the wrapped phase map φ_0 to obtain a continuous phase map containing phase distortion, using Zernike polynomial fitting to remove the phase distortion, and obtaining a continuous phase map containing only an object phase and a noise phase; and step four: inputting the continuous phase map into the trained convolutional neural network and outputting a noise-reduced object phase map.
2. The deep learning-based digital holographic continuous phase noise reduction method for microstructure measurement according to claim 1, wherein: the step one specifically is: 1.1) generating a plurality of step-like structure images as the object phase image by generating non-overlapping random rectangles; 1.2) for each object phase image generated in step 1.1), generating the noise grayscale image of a same size based on two noise model algorithms, Brown and Perlin, and setting a standard deviation of the noise to be normalized to a range of 0.05 to 0.26 rad during generation; and 1.3) adding the object phase image generated by simulation and the noise grayscale image to obtain the continuous phase map containing noise, treating the continuous phase map as the input data of the convolutional neural network, treating the object phase image generated by simulation without being added with noise as a learning label of the convolutional neural network, creating the simulation data set, and then training the convolutional neural network to obtain the trained convolutional neural network.
3. The deep learning-based digital holographic continuous phase noise reduction method for microstructure measurement according to claim 2, wherein: the step 1.1) specifically is: generating a grayscale image of MEMS through matlab first, generating 8 to 64 rectangles in the grayscale image according to the following method, setting overlapping portions among the rectangles and portions outside the rectangles to zero in the grayscale image, and obtaining a grayscale image containing a plurality of non-overlapping graphics as a phase grayscale image simulating a MEMS chip surface structure; randomly selecting coordinates in the grayscale image as a vertex of a lower left corner of a rectangle, then randomly generating two random integers limited to a predetermined range as a length and a width, and establishing a filled rectangle; and finally using a mean filter with a window size of 3×3 on the phase grayscale image to obtain the object phase image.
4. The deep learning-based digital holographic continuous phase noise reduction method for microstructure measurement according to claim 1, wherein: the convolutional neural network specifically comprises a first convolution module, a plurality of consecutive basic convolution layers, a subspace projection layer SSA, a second convolutional module, and an additive layer connected in sequence, the first convolution module receives the continuous phase map input to the convolutional neural network, output of the first convolution module is input into the plurality of consecutive basic convolution layers, output of the plurality of consecutive basic convolution layers and the output of the first convolution module are both input to the subspace projection layer SSA for processing, output of the subspace projection layer SSA is input into the second convolution module, and output of the second convolution module and the continuous phase map input to the convolutional neural network are added through the additive layer as output of the convolutional neural network.
5. The deep learning-based digital holographic continuous phase noise reduction method for microstructure measurement according to claim 4, wherein: each basic convolution layer is mainly formed by two consecutive first convolution modules and one additive layer connected in sequence, and input of the basic convolution layer is processed by the two consecutive first convolution modules and then is added to the input of the basic convolution layer itself through the additive layer to act as the output of the basic convolution layer.
6. The deep learning-based digital holographic continuous phase noise reduction method for microstructure measurement according to claim 4, wherein: the subspace projection layer SSA comprises a convolution regularization module, a convolution operation, the additive layer, basis vector processing operations Basic Vectors, and a projection operation Projection, the output of the plurality of consecutive basic convolution layers and the output of the first convolution module are spliced first and then input into the convolution regularization module and the convolution operation respectively, output of the convolution regularization module and output of the convolution operation are added through the additive layer, and a result is then input into the basis vector processing operations Basic Vectors, output of the basis vector processing operations Basic Vectors and the output of the first convolution module are input to the projection operation Projection together, and the projection operation Projection uses the output of the basic vector processing operations Basic Vectors to perform weighted optimization on the output of the first convolution module to obtain the final noise-reduced object phase map, the convolution regularization module is mainly formed by a first convolution operation, a first BatchNormal batch regularization operation, a first activation function, a second convolution operation, a second BatchNormal batch regularization operation, and a second activation function connected in sequence.
7. The deep learning-based digital holographic continuous phase noise reduction method for microstructure measurement according to claim 1, wherein: the step two specifically is: 2.1) recording a holographic interference pattern of the object to be measured by using a CCD photosensitive electronic imaging device, obtaining a spectrogram through Fourier transform, extracting a positive first-order spectrum in the spectrogram, reconstructing a hologram using inverse Fourier transform, and diffracting the reconstructed hologram through an angular spectrum diffraction method to obtain the object light field complex amplitude containing the information of the object to be measured; and 2.2) extracting and wrapping an exponential term in the object light field complex amplitude U between (−π, π] to obtain the wrapped phase map.
8. The deep learning-based digital holographic continuous phase noise reduction method for microstructure measurement according to claim 1, wherein: the step three specifically is: 3.1) unwrapping the wrapped phase map to obtain the continuous phase map, wherein a phase of the object to be measured, a distortion phase, and phase noise are usually comprised; and 3.2) performing the Zernike polynomial fitting on a continuous phase map φ_c to obtain a Zernike coefficient of the distortion phase, calculating the distortion phase φ_a through the Zernike coefficient obtained by fitting, and finally subtracting the distortion phase φ_a from the unwrapped phase φ_c to obtain a phase image containing the object to be measured and noise.
9. The deep learning-based digital holographic continuous phase noise reduction method for microstructure measurement according to claim 1, wherein: the step four specifically is: for the trained convolutional neural network model, for each specific continuous phase map to be measured, obtaining a noise-reduced object phase map:
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0040] Table 1 shows Zernike polynomials used in the embodiments.
DESCRIPTION OF THE EMBODIMENTS
[0043] The disclosure is further described in detail in combination with accompanying figures and embodiments.
[0044] The embodiments of the disclosure are shown in the flow chart of (a) of
[0045] In step one, a MEMS microstructure is simulated to generate an object phase image through generation of random matrix superposition, noise in a digital holographic continuous phase map is simultaneously simulated to generate a noise grayscale image, the object phase image and the noise grayscale image are added to form the input data, the noise-free object phase image serves as the label, and a large simulation data set is thus created. An end-to-end convolutional neural network combined with a subspace projection method is designed, and the simulation data set is input into the convolutional neural network to train the convolutional neural network to obtain a trained convolutional neural network that achieves the noise reduction task.
[0046] Specific implementation is as follows:
[0047] 1.1) In the specific implementation, a size of input data of the convolutional neural network is set to M×M. Several grayscale images of M×M pixel size simulating MEMS are first generated through matlab, and 8 to 64 rectangles in each grayscale image are generated according to the following method. The parts inside the rectangles are set to 1 in the grayscale image, and the parts overlapping between the rectangles and the parts outside the rectangles are set to zero, so that a grayscale image containing multiple non-overlapping graphics is obtained. This is equivalent to keeping only the regions covered by exactly one rectangle, and a grayscale image containing only 0 and 1 is obtained as a phase grayscale image simulating a surface structure of a MEMS chip. This image is multiplied by a random number in the range [0, π] to give each image a different phase value:
[0048] Coordinates are first randomly selected in a matrix of M×M pixel size, and the coordinates are treated as a vertex of a lower left corner of a rectangle. Two random integers limited to a certain range are then generated as a length and a width, and a filled rectangle is thus obtained.
[0049] Finally, a mean filter with a window size of 3×3 is used on the phase grayscale image to reduce the edge gradient and obtain the object phase image, so as to improve the learning ability of the convolutional neural network on the data set.
[0050] 1.2) For each object phase image generated in step 1.1), noise grayscale images of the same M×M size are simulated according to two noise model algorithms, Brown and Perlin, which simulate random noise forms in nature, and the standard deviation of the normalized noise is set in the range of 0.05 to 0.26 rad during generation.
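The patent's exact Brown and Perlin generators are not reproduced here; as a rough stand-in, spatially correlated noise with a 1/f-type power spectrum (β = 2 gives Brown-like noise) can be synthesized spectrally and normalized to a target standard deviation. The function name and defaults are illustrative:

```python
import numpy as np

def spectral_noise(M=256, beta=2.0, std=0.1, seed=None):
    """Generate a correlated noise grayscale image by spectral shaping.

    White spectral phases are combined with a radial 1/f^(beta/2)
    amplitude envelope; beta=2 approximates Brown noise. The output is
    normalized to a chosen standard deviation (in rad).
    """
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(M)
    fx, fy = np.meshgrid(f, f)
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                        # avoid division by zero
    amplitude = radius ** (-beta / 2.0)
    amplitude[0, 0] = 0.0                     # remove the DC term
    phase = rng.uniform(0.0, 2.0 * np.pi, (M, M))
    field = np.fft.ifft2(amplitude * np.exp(1j * phase)).real
    return field / field.std() * std          # normalize to std rad
```

Drawing `std` uniformly from 0.05 to 0.26 rad per image would reproduce the normalization range stated above.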
[0051] 1.3) The object phase image generated by simulation and the noise grayscale image are added to obtain a continuous phase map containing noise, the continuous phase map is treated as the input data of the convolutional neural network, the object phase image generated by simulation without being added with noise is treated as a learning label of the convolutional neural network, a simulation data set containing forty thousand pairs of data (twenty thousand pairs of data containing Brown noise and twenty thousand pairs containing Perlin noise) is created, and the convolutional neural network is then trained to obtain the trained convolutional neural network.
[0052] When the convolutional neural network is trained, the initial training parameters are set as follows: the learning rate is 0.0001, the optimizer is Adam, the loss function is a root mean square error function, and the learning rate decay function is a cosine annealing function. The data set is iteratively trained for 20 epochs.
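The cosine annealing decay mentioned above can be sketched as a simple schedule function (the minimum learning rate and the step granularity are assumptions; only the initial rate of 0.0001 comes from the text):

```python
import math

def cosine_annealed_lr(step, total_steps, lr_max=1e-4, lr_min=0.0):
    """Cosine-annealing schedule: lr_max at step 0, decaying smoothly
    to lr_min at total_steps."""
    return lr_min + 0.5 * (lr_max - lr_min) * (
        1.0 + math.cos(math.pi * step / total_steps))
```

Calling this once per optimizer step (or once per epoch) yields the decaying learning rate used during the 20 training iterations.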
[0053] (b) of
[0054] The convolutional neural network specifically includes an input layer, a first convolution module, a plurality of consecutive basic convolution layers, a subspace projection layer SSA, a second convolution module, an additive layer, and an output layer connected in sequence. The first convolution module receives the continuous phase map input to the convolutional neural network, and output of the first convolution module is input into the plurality of consecutive basic convolution layers. Output of the plurality of consecutive basic convolution layers and the output of the first convolution module are both input to the subspace projection layer SSA for processing. Output of the subspace projection layer SSA is input into the second convolution module. Output of the second convolution module and the continuous phase map input to the convolutional neural network are added through the additive layer as output of the convolutional neural network.
[0055] Each basic convolution layer is mainly formed by two consecutive first convolution modules and one additive layer connected in sequence, and input of the basic convolution layer is processed by the two consecutive first convolution modules and then is added to the input of the basic convolution layer itself through the additive layer to act as the output of the basic convolution layer.
[0056] The first convolution module is mainly formed by a convolution operation and an activation function connected in sequence.
[0057] The second convolution module is mainly formed by a first convolution operation, an activation function, and a second convolution operation connected in sequence.
[0058] The subspace projection layer SSA includes a convolution regularization module, a convolution operation, the additive layer, basis vector processing operations Basic Vectors, and a projection operation Projection. The output of the plurality of consecutive basic convolution layers and the output of the first convolution module are spliced first and then input into the convolution regularization module and the convolution operation respectively. Output of the convolution regularization module and output of the convolution operation are added through the additive layer, and a result is then input into the basis vector processing operations Basic Vectors to obtain a basic vector. Output X2 of the basis vector processing operations Basic Vectors and the output X1 of the first convolution module are input to the projection operation Projection together. The projection operation Projection uses the output of the basic vector processing operations Basic Vectors to perform weighted optimization on the output X1 of the first convolution module to obtain the final noise-reduced object phase map.
[0059] The convolution regularization module is mainly formed by a first convolution operation, a first BatchNormal batch regularization operation, a first activation function, a second convolution operation, a second BatchNormal batch regularization operation, and a second activation function connected in sequence.
[0060] For the input phase map containing noise, features are first extracted through a 7×7 convolution kernel and expanded to 32 channels. The features are then extracted through 19 convolution modules with a residual structure, and a subspace projection module is then used to separate the features. Finally, two convolution layers integrate the features into a one-channel grayscale image representing the noise phase, which is added to the original input image to output a filtered phase map. A BasicConvLayer residual module is formed by two convolution layers. The first convolution layer uses an ordinary 3×3 convolution kernel, the second convolution layer uses a 3×3 dilated convolution kernel with a dilation rate of 2, the activation functions all use LeakyReLU, and the negative semi-axis slope is 0.2.
[0061] In the subspace projection module, the input low-dimensional feature X1 and high-dimensional feature X2 are first merged and spliced along their channel dimensions. Feature extraction is performed through two basic convolution layers with 3×3 convolution kernels and residual connections, and the feature map is then mapped to k channels, where k is the subspace dimension.
[0062] The feature map on each channel is expanded into a one-dimensional vector, and k vectors of size M², where M is the size of the feature map, are obtained; this set of basic vectors Basic Vectors is denoted as V. The orthogonal projection matrix of the signal subspace is then P = V(VᵀV)⁻¹Vᵀ, where P is the orthogonal projection matrix of the signal subspace and V represents the basic vectors Basic Vectors.
[0063] Finally, the low-dimensional feature map X1 is reconstructed in the signal subspace as Y = P·X1, where Y is the final noise-reduced object phase map, which is reconstructed and transformed into a feature map of the same dimension as X1 and is treated as the output of the subspace projection module to the next layer of the convolutional neural network.
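Under these definitions the projection itself is ordinary linear algebra. The following NumPy sketch (the function name is illustrative) applies P = V(VᵀV)⁻¹Vᵀ to a flattened feature map without forming P explicitly:

```python
import numpy as np

def subspace_project(x1, V):
    """Orthogonally project flattened features x1 onto span(V).

    x1: array of shape (M*M,), the flattened low-dimensional feature map.
    V:  array of shape (M*M, k), the k basis vectors (Basic Vectors).
    Equivalent to Y = V (V^T V)^{-1} V^T x1, the orthogonal projection
    onto the signal subspace.
    """
    coeffs = np.linalg.solve(V.T @ V, V.T @ x1)  # least-squares coefficients
    return V @ coeffs                            # reconstruction in span(V)
```

Because P is an orthogonal projector, applying the function twice gives the same result as applying it once, and any vector already in the subspace passes through unchanged.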
[0064] In step two, a holographic interference pattern of an object under measurement is collected by photographing, object light field complex amplitude U(x, y) of size M×M containing information of an object to be measured is obtained through image processing, and phase information in the complex amplitude U(x, y) is extracted and wrapped between (−π, π] to obtain a wrapped phase map φ_0. Specific implementation is as follows:
[0065] 2.1) A holographic interference pattern of the object to be measured is recorded by using a CCD photosensitive electronic imaging device, a spectrogram is obtained through Fourier transform, a positive first-order spectrum in the spectrogram is extracted, a hologram is reconstructed using inverse Fourier transform, and the reconstructed hologram is diffracted through an angular spectrum diffraction method to obtain the object light field complex amplitude containing the information of the object to be measured. The specific expressions are:
U(x, y) = A(x, y)·exp[iφ(x, y)], where U is the complex amplitude of the object light field, (x, y) is the coordinate point on the two-dimensional plane, i represents the imaginary unit, i = √(−1), A is the amplitude of the light field, and φ is the phase information, including a phase φ_o of the object to be measured, a distortion phase φ_a, and phase noise φ_e.
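A minimal sketch of the angular spectrum diffraction step in NumPy follows (the function name and sampling parameters are assumptions; evanescent components are simply suppressed):

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Propagate complex field u0 over distance z (angular spectrum method).

    u0 is an M x N complex field sampled at pixel pitch dx. The field is
    multiplied in the Fourier domain by the free-space transfer function
    exp(i*kz*z); spatial frequencies with (lam*fx)^2 + (lam*fy)^2 > 1
    (evanescent waves) are set to zero.
    """
    M, N = u0.shape
    fx = np.fft.fftfreq(N, d=dx)
    fy = np.fft.fftfreq(M, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)      # transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```

For band-limited fields the method is invertible: propagating by z and then by −z recovers the original complex amplitude.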
[0066] 2.2) An exponential term in the object light field complex amplitude U is extracted and wrapped between (−π, π] to obtain the wrapped phase map. The specific expression is: φ_0 = arctan{Im[U]/Re[U]}, where φ_0 is the wrapped phase map, arctan{·} is an arctangent function, Im[·] is an operation of taking an imaginary part, and Re[·] is an operation of taking a real part.
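The wrapping operation can be sketched with the quadrant-aware arctangent; folding the −π endpoint to +π reflects the (−π, π] convention stated above (the helper name is illustrative):

```python
import numpy as np

def wrapped_phase(U):
    """Compute phi_0 = arctan(Im[U] / Re[U]) wrapped to (-pi, pi].

    np.arctan2 resolves the quadrant from the signs of Im and Re; its
    -pi endpoint is folded to +pi so the result lies in (-pi, pi].
    """
    phi = np.arctan2(U.imag, U.real)
    return np.where(phi == -np.pi, np.pi, phi)
```

For a unit-amplitude field U = exp(iφ) with φ inside the principal interval, the function returns φ exactly; phases outside the interval come back shifted by a multiple of 2π.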
[0067] (a) of
[0068] In step three, an unwrapping operation is performed on the wrapped phase map φ_0 to obtain a continuous phase map containing phase distortion, Zernike polynomial fitting is then used to remove the phase distortion, and a continuous phase map containing only an object phase and a noise phase is obtained. Specific implementation is as follows:
[0069] 3.1) The wrapped phase map is unwrapped using a least squares method to obtain the continuous phase map, which usually comprises a phase of the object to be measured, a distortion phase, and phase noise. The specific expression is: φ_c(x, y) = unwrap[φ_0(x, y)] = φ_o(x, y) + φ_a(x, y) + φ_e(x, y), where φ_c is the unwrapped phase, a continuous surface, unwrap[·] is the unwrapping operation, and φ_o(x, y), φ_a(x, y), and φ_e(x, y) are the continuous object phase, distortion phase, and noise phase respectively.
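The text only states that a least squares method is used; one common realization is the DCT-based least-squares (Poisson) unwrapper of Ghiglia and Romero, sketched here as an assumption about the concrete algorithm:

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    """Wrap phase values into the principal interval."""
    return np.angle(np.exp(1j * p))

def ls_unwrap(psi):
    """Least-squares phase unwrapping via a DCT Poisson solve
    (Ghiglia-Romero); returns the continuous phase up to a constant."""
    M, N = psi.shape
    dx = wrap(np.diff(psi, axis=1))          # wrapped column differences
    dy = wrap(np.diff(psi, axis=0))          # wrapped row differences
    rho = np.zeros_like(psi)                 # divergence of wrapped gradient
    rho[:, :-1] += dx; rho[:, 1:] -= dx
    rho[:-1, :] += dy; rho[1:, :] -= dy
    c = dctn(rho, norm='ortho')
    j, i = np.mgrid[0:M, 0:N]
    denom = 2.0 * (np.cos(np.pi * i / N) + np.cos(np.pi * j / M) - 2.0)
    denom[0, 0] = 1.0                        # avoid dividing the DC term
    c /= denom
    c[0, 0] = 0.0                            # fix the free additive constant
    return idctn(c, norm='ortho')
```

When the true phase gradients are smaller than π per pixel, this recovers the continuous surface exactly up to an additive constant.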
[0070] 3.2) The Zernike polynomial fitting is performed on the continuous phase map φ_c to obtain Zernike coefficients of the distortion phase, the distortion phase φ_a is calculated through the Zernike coefficients obtained by fitting, and the distortion phase φ_a is finally subtracted from the unwrapped phase φ_c to obtain a phase containing the object to be measured and noise. The specific expression is: φ = φ_c − φ_a = φ_o + φ_e, where φ is the continuous phase map containing noise that needs to be input into the convolutional neural network model for noise reduction.
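A sketch of the Zernike fitting and subtraction using the Table 1 basis follows; sampling the polynomials on the unit square and fitting the full map are simplifying assumptions (in practice the fit may be restricted to flat background regions so that the object phase is not absorbed into the fit):

```python
import numpy as np

def zernike_basis(M):
    """First 10 Zernike polynomials of Table 1 (Cartesian form),
    sampled on the square [-1, 1] x [-1, 1]."""
    y, x = np.meshgrid(np.linspace(-1, 1, M), np.linspace(-1, 1, M),
                       indexing='ij')
    s3, s6, s8 = np.sqrt(3), np.sqrt(6), np.sqrt(8)
    return np.stack([
        np.ones_like(x),                         # Z0 translation
        2 * x,                                   # Z1 x-axis tilt
        2 * y,                                   # Z2 y-axis tilt
        s3 * (2 * x**2 + 2 * y**2 - 1),          # Z3 defocus
        s6 * (2 * x * y),                        # Z4 y-axis astigmatism
        s6 * (x**2 - y**2),                      # Z5 x-axis astigmatism
        s8 * (3 * x**2 * y + 3 * y**3 - 2 * y),  # Z6 y-axis coma
        s8 * (3 * x**3 + 3 * x * y**2 - 2 * x),  # Z7 x-axis coma
        s8 * (3 * x**2 * y - y**3),              # Z8 y-axis cloverleaf
        s8 * (x**3 - 3 * x * y**2),              # Z9 x-axis cloverleaf
    ])

def remove_distortion(phi_c):
    """Fit the distortion phase phi_a with the Zernike basis by least
    squares and subtract it: phi = phi_c - phi_a."""
    M = phi_c.shape[0]
    Z = zernike_basis(M).reshape(10, -1).T       # design matrix, (M*M, 10)
    coeff, *_ = np.linalg.lstsq(Z, phi_c.ravel(), rcond=None)
    phi_a = (Z @ coeff).reshape(M, M)            # fitted distortion phase
    return phi_c - phi_a
```

If the continuous phase map consists purely of the fitted aberration terms, the residual after subtraction is zero.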
TABLE 1
Polynomial   Cartesian form             Aberration type
Z_0          1                          translation
Z_1          2x                         x-axis tilt
Z_2          2y                         y-axis tilt
Z_3          √3(2x² + 2y² − 1)          defocus
Z_4          √6(2xy)                    y-axis astigmatism
Z_5          √6(x² − y²)                x-axis astigmatism
Z_6          √8(3x²y + 3y³ − 2y)        y-axis coma
Z_7          √8(3x³ + 3xy² − 2x)        x-axis coma
Z_8          √8(3x²y − y³)              y-axis cloverleaf aberration
Z_9          √8(x³ − 3xy²)              x-axis cloverleaf aberration
[0071] The above Table 1 shows the Zernike polynomials in the Cartesian coordinate system used in this embodiment, and (d) of
[0072] In step four, the continuous phase map is input into the trained convolutional neural network, and a network outputs a noise-reduced object phase map. Specific implementation is as follows:
[0073] For the trained convolutional neural network model, it is treated as a function mapping relationship, and for each specific continuous phase map φ to be measured, a noise-reduced object phase map is obtained: Y = F(φ), wherein F(·) represents the trained convolutional neural network model with specific network parameters, φ is the continuous phase map input to the convolutional neural network, and Y is the noise-reduced object phase map output by the convolutional neural network after processing the input data. Only the object phase information is left, and a shape measurement value may be obtained by converting it into height data.
[0075] In view of the problem that it is difficult for phase filtering algorithms in the related art to handle the complex noise present in digital holographic continuous phases, in the disclosure a deep convolutional neural network combined with the subspace projection method is designed, the two noise models, Brown and Perlin, are used to simulate the noise in the digital holographic continuous phase, and a large number of data sets are produced to train the designed convolutional neural network. The purpose of efficiently filtering out phase noise in digital holographic experiments is achieved, and the accuracy of digital holographic phase measurement is significantly improved.