METHOD FOR DETERMINING AN IMAGING QUALITY OF AN OPTICAL SYSTEM WHEN ILLUMINATED BY ILLUMINATION LIGHT WITHIN AN ENTRANCE PUPIL TO BE MEASURED
20220381643 · 2022-12-01
Inventors
- Klaus Gwosch (Aalen, DE)
- Markus Koch (Neu-Ulm, DE)
- Lars Stoppe (Jena, DE)
- Manuel Decker (Jena, DE)
- Lukas Fischer (Leinfelden-Echterdingen, DE)
CPC classification
G03F7/70666
PHYSICS
International classification
Abstract
To determine an imaging quality of an optical system when illuminated by illumination light within an entrance pupil or exit pupil, a test structure is initially arranged in an object plane of the optical system and an illumination angle distribution for illuminating the test structure with the illumination light is specified. The test structure is illuminated at different distance positions relative to the object plane. An intensity of the illumination light is measured in an image plane of the optical system, the illumination light having been guided by the optical system when imaging the test structure at each distance position. An aerial image measured in this way is compared with a simulated aerial image, and fit parameters of a function set for describing the simulated aerial image are adapted until the difference is minimized. A wavefront of the optical system is determined on the basis of the minimized difference.
Claims
1. A method, comprising: a) illuminating a test structure in an object plane of an optical system with a specified illumination angle distribution at different distance positions of the test structure relative to the object plane; b) measuring an intensity of the illumination light in an image plane of the optical system using a spatially resolving detection device to determine a measured aerial image of the test structure, the illumination light having been guided by the optical system when imaging the test structure at each distance position; c) comparing the measured aerial image with a simulated aerial image and adapting fit parameters of a function set to describe the simulated aerial image until a difference between the measured aerial image and the simulated aerial image has been reduced to a desired value; d) determining a wavefront of the optical system based on the result of the reduced difference between the measured and the simulated aerial image, the specified illumination angle distribution corresponding to a first subaperture within a pupil to be measured; e) repeating a) through d) using a second subaperture which is shifted relative to the first subaperture; and f) determining the wavefront of the optical system by combining the results obtained for the measured subapertures over the entire pupil to be measured.
2. The method of claim 1, wherein the desired value is a minimum value.
3. The method of claim 1, comprising using the subapertures to scan the pupil.
4. The method of claim 1, further comprising eliminating a test structure contribution to an influence on the wavefront by the test structure to determine a test structure-independent imaging quality of the optical system.
5. The method of claim 4, wherein: d) comprises determining the test structure contribution for exactly one specified subaperture; and e) comprises using the test structure contribution for the exactly one specified subaperture to determine the test structure-independent imaging quality of the optical system for the second subaperture.
6. The method of claim 4, wherein: a linear system of equations is solved for determining the imaging quality while eliminating the test structure contribution; and the linear system of equations comprises data of the wavefront determination prior to the elimination of the test structure contribution, contributions of the test structure, and a transformation matrix.
7. The method of claim 6, wherein: a dependence of at least one parameter on a respective coordinate in the solution space to be determined is described by a decomposition into basis functions; and the at least one parameter comprises a member selected from the group consisting of the data of the wavefront determination prior to the elimination of the test structure contribution, the contributions of the test structure, and the transformation matrix.
8. The method of claim 1, wherein the test structure comprises a pinhole.
9. The method of claim 1, wherein the test structure comprises a pinhole comprising an elliptical edge.
10. The method of claim 1, wherein: the pupil to be measured comprises an elliptical edge; and within the determination of the wavefront, there is a representation of a pupil function for the at least sectional description of the pupil to be measured on a coordinate grid that is equidistant in mutually perpendicular pupil coordinates, with parameterized basis functions that are scaled in accordance with a principal axis ratio of the elliptical edge of the pupil.
11. The method of claim 1, wherein: the pupil to be determined comprises an elliptical edge; and there is within the determination of the wavefront, a representation of a pupil function for the at least sectional description of the pupil to be determined on a coordinate grid that is scaled in mutually perpendicular pupil coordinates in accordance with a principal axis ratio of the elliptical edge of the pupil and parameterized basis functions that are scaled uniformly.
12. The method of claim 1, further comprising using a metrology system comprising an imaging optical unit to image the test structure toward a spatially resolving detection device, wherein the imaging quality of the imaging optical unit is to be determined.
13. The method of claim 12, comprising using the subapertures to scan the pupil.
14. The method of claim 12, further comprising eliminating a test structure contribution to an influence on the wavefront by the test structure to determine a test structure-independent imaging quality of the optical system.
15. The method of claim 12, further comprising eliminating a test structure contribution to an influence on the wavefront by the test structure to determine a test structure-independent imaging quality of the optical system.
16. The method of claim 1, further comprising using a metrology system which comprises: an illumination optical unit configured to illuminate a test structure in an object plane in which the test structure is present; a spatially resolving detection device; an imaging optical unit configured to image the test structure toward the detection device in an image plane; and a stop comprising an aperture having an elliptical edge, wherein the stop is in an illumination pupil plane of the imaging optical unit and/or in an entrance pupil of the imaging optical unit.
17. The method of claim 16, comprising using the subapertures to scan the pupil.
18. The method of claim 16, further comprising eliminating a test structure contribution to an influence on the wavefront by the test structure to determine a test structure-independent imaging quality of the optical system.
19. The method of claim 16, further comprising eliminating a test structure contribution to an influence on the wavefront by the test structure to determine a test structure-independent imaging quality of the optical system.
20. A metrology system, comprising: an illumination optical unit configured to illuminate a test structure in an object plane in which the test structure is present; a spatially resolving detection device; an imaging optical unit configured to image the test structure toward the detection device in an image plane; and a stop comprising an aperture having an elliptical edge, wherein the stop is in an illumination pupil plane of the imaging optical unit and/or in an entrance pupil of the imaging optical unit.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0029] Exemplary embodiments of the disclosure are explained in more detail below with reference to the drawings.
EXEMPLARY EMBODIMENTS
[0046] In order to facilitate the representation of positional relationships, a Cartesian xyz-coordinate system is used hereinafter.
[0048] The illumination light 1 is reflected at the object 5. A plane of incidence of the illumination light 1 is parallel to the yz-plane in the case of central illumination (kx=0; cf. the following description).
[0049] The EUV illumination light 1 is produced by an EUV light source 6. The light source 6 can be a laser plasma source (LPP; laser produced plasma) or a discharge source (DPP; discharge produced plasma). In principle, a synchrotron-based light source may also be used, for example a free electron laser (FEL). A used wavelength of the EUV light source may range between 5 nm and 30 nm. In principle, in the case of a variant of the metrology system 2, a light source for another used light wavelength may also be used instead of the light source 6, for example a light source for a used wavelength of 193 nm.
[0050] Depending on the embodiment of the metrology system 2, the latter can be used for a reflective or else for a transmissive object 5. One example of a transmissive object is a pinhole aperture.
[0051] An illumination optical unit 7 of the metrology system 2 is arranged between the light source 6 and the object 5. The illumination optical unit 7 serves for the illumination of the object 5 to be examined with a defined illumination intensity distribution over the object field 3 and at the same time with a defined illumination angle distribution with which the field points of the object field 3 are illuminated. This illumination angle distribution is also referred to hereinafter as illumination subaperture.
[0052] The illumination subaperture is delimited by way of a sigma subaperture stop 8 of the illumination optical unit 7, which is arranged in an illumination optical unit pupil plane 9. Alternatively or in addition, a corresponding subaperture stop may also be present in the imaging optical unit of the metrology system 2, which is yet to be described below. The sigma subaperture stop 8 restricts a beam of illumination light 1 incident thereon at its edge. Alternatively or in addition, the sigma subaperture stop 8 and/or the stop in the imaging optical unit can also shadow the illumination light beam from the inside, that is to say act as an obscuration stop. A corresponding stop can have an inner stop body that accordingly shadows the beam on the inside, the stop body being connected to an outer stop support body by way of a plurality of webs, for example by way of four webs.
[0053] The sigma subaperture stop 8 is displaceable by way of a displacement drive 8a in the illumination optical unit pupil plane 9, that is to say parallel to the xy-plane, in a defined fashion.
[0055] After reflection at the object 5, the illumination and imaging light 1 enters an imaging optical unit or projection optical unit 13 of the metrology system 2. In a manner analogous to the illumination subaperture, there is a projection optical unit subaperture which is specified by an NA subaperture stop 11a in the entrance pupil 11 of the projection optical unit 13.
[0056] The imaging optical unit 13 to be measured serves for imaging the object 5 towards a spatially resolving detection device 14 of the metrology system 2. The detection device 14 is designed for example as a CCD detector. A CMOS detector can also be used. The detection device 14 is arranged in an image plane 15 of the projection optical unit 13.
[0057] The detection device 14 is signal connected to a digital image processing device 17. A pixel spatial resolution of the detection device 14 in the xy-plane can be specified in such a way that it is inversely proportional to the numerical aperture of the entrance pupil 11 to be measured in the coordinate directions x and y (NA.sub.x, NA.sub.y). In the direction of the x-coordinate, this pixel spatial resolution is regularly less than λ/(2NA.sub.x), and, in the direction of the y-coordinate, it is regularly less than λ/(2NA.sub.y). In this case, λ is the wavelength of the illumination light 1. The pixel spatial resolution of the detection device 14 can also be implemented with square pixel dimensions, independently of NA.sub.x, NA.sub.y.
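The pixel-pitch bound above can be illustrated with a short numerical sketch. The wavelength and the anamorphic numerical apertures are assumed example values, and max_pixel_pitch is a hypothetical helper, not part of the described metrology system:

```python
# Nyquist-type bound on the detector pixel pitch, as described above: the
# pixel spatial resolution should stay below lambda/(2*NA) in each direction.
# Wavelength and numerical apertures are assumed example values.

def max_pixel_pitch(wavelength_nm, na):
    """Upper bound lambda/(2*NA) on the pixel pitch, in nm."""
    return wavelength_nm / (2.0 * na)

wavelength = 13.5             # nm, EUV used wavelength (within the 5-30 nm range)
na_x, na_y = 0.0825, 0.0625   # assumed anamorphic entrance-pupil apertures

print(max_pixel_pitch(wavelength, na_x))  # bound in the x-direction
print(max_pixel_pitch(wavelength, na_y))  # bound in the y-direction
```

Because the bound scales with 1/NA, the direction with the larger numerical aperture demands the finer pixel pitch.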
[0058] A spatial resolution of the detection device 14 can be increased or reduced by resampling. A detection device with pixels with different dimensions in the x- and y-direction is also possible.
[0059] The object 5 is carried by an object holder or a holder 18. The holder 18 can be displaced by a displacement drive 19 on the one hand parallel to the xy plane and on the other hand perpendicularly to this plane, that is to say in the z direction. The displacement drive 19, as also the entire operation of the metrology system 2, is controlled by a central control device 20, which, in a way that is not represented any more specifically, is in signaling connection with the components to be controlled.
[0060] The optical set-up of the metrology system 2 serves for the most exact possible emulation of an illumination and an imaging in the course of a projection exposure of the object 5 during the projection-lithographic production of semiconductor components.
[0062] The number of focal planes z.sub.m can be between two and twenty, for example between ten and fifteen. In this case, there is a total displacement in the z-direction over several Rayleigh units (one Rayleigh unit corresponds to λ/NA.sup.2).
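A minimal sketch of such a focus stack, assuming example values for the wavelength, the numerical aperture, the number of planes and the span in Rayleigh units (the function name and the symmetric spacing are illustrative assumptions):

```python
import numpy as np

# Sketch of the focus stack for the aerial-image measurement: m distance
# positions z_m spread symmetrically over a few Rayleigh units RU = lambda/NA^2.
# All numerical values here are assumed examples.

def focus_positions(wavelength, na, n_planes=13, n_rayleigh=4.0):
    ru = wavelength / na**2           # one Rayleigh unit
    half = 0.5 * n_rayleigh * ru      # total span: n_rayleigh Rayleigh units
    return np.linspace(-half, half, n_planes)

z = focus_positions(13.5e-9, 0.0825, n_planes=13, n_rayleigh=4.0)
print(len(z))       # number of focal planes
print(z[0], z[-1])  # symmetric extreme defocus positions
```

The aerial image I(x, y, z_m) is then recorded at each of these z positions.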
[0067] The pinhole of the test structure 5 may be elliptical. The principal axes of the pinhole can have approximately the same size as the Airy disk of the projection optical unit 13, that is to say 2.44 λ/NA.sub.x in the direction of the x-coordinate and 2.44 λ/NA.sub.y in the direction of the y-coordinate.
[0068] The test structure 5 may have a single pinhole or else a plurality of pinholes, such as a periodic array of pinholes. Other test structures are possible, for example as described in US 2015/0355052 A1.
[0071] The entrance pupil 11 and the subapertures 10.sub.i are represented in angle space, that is to say in the pupil coordinates kx (corresponding to the x spatial coordinate) and ky (corresponding to the y spatial coordinate). On account of the oblique illumination, a center of the entrance pupil 11 is at kx=0 and at ky≠0. The centers of the various subapertures 10.sub.i, that is to say the relative positions of the respective chief rays, are labeled by triangles.
[0073] When the entrance pupil 11 is scanned, the subapertures 10.sub.1 to 10.sub.5 sweep over a chief ray azimuth angle φ of approximately 75° in this embodiment.
[0075] As an alternative to a single-line scan with a constant chief ray polar angle θ, other scan variants over the pupil can also be used.
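The single-line scan described above can be sketched as follows. The polar angle, the centering of the sweep and the function name are illustrative assumptions; the 75° azimuth span and the count of five subapertures follow the text:

```python
import numpy as np

# Sketch of a single-line subaperture scan: the chief rays of n subapertures
# lie on an arc of constant polar angle theta, sweeping an azimuth range of
# about 75 degrees. The concrete angles are assumed example values.

def subaperture_centers(theta_deg, phi_span_deg=75.0, n=5):
    """Chief-ray pupil coordinates (kx, ky) for n subapertures on one arc."""
    r = np.sin(np.radians(theta_deg))        # radial pupil coordinate of the arc
    # center the azimuth sweep around the +ky axis (oblique illumination)
    phis = np.radians(90.0 + np.linspace(-0.5, 0.5, n) * phi_span_deg)
    return np.stack([r * np.cos(phis), r * np.sin(phis)], axis=1)

centers = subaperture_centers(theta_deg=6.0, n=5)
print(centers.shape)  # (5, 2): one (kx, ky) pair per subaperture
```

The middle subaperture of such an arc lies at kx=0, matching the central-illumination case mentioned above.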
[0083] When determining the wavefront measurement data over the entire pupil to be measured, the wavefronts determined for the individual, mutually overlapping subapertures are combined with the aid of a shift-rotation method.
[0084] Examples of such a shift-rotation method can be found in the specialist article by D. Su et al., "Absolute surface figure testing by shift-rotation method using Zernike polynomials", Optics Letters, Vol. 37, No. 15, 3198-3200, 2012, https://doi.org/10.1364/OL.37.003198, and in DE 10 2013 226 668 A1.
[0085] If the measurement data (m pixel values) of all n (n=5 in the depicted example) subapertures are combined as a vector M, the following system of equations can be constructed:

M=T.sub.W·W+T.sub.P·P  (1)

[0086] M: measurement data of the wavefront measurement;
[0087] W: wavefront points to be determined of the projection optical unit 13;
[0088] P: pinhole contribution;
[0089] T=(T.sub.W, T.sub.P): combined transformation matrix;
[0090] T.sub.W: submatrix of T acting on the wavefront points W;
[0091] T.sub.P: submatrix of T acting on the pinhole contribution P.
[0094] The system of equations can be solved using conventional methods for solving linear systems of equations; in this way, the wavefront aberration W to be measured of the projection optical unit and the component P of the wavefront aberrations caused by the pinhole can be determined.
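The separation step can be sketched as a least-squares solve of the stacked system M = T_W·W + T_P·P. All dimensions and matrices below are synthetic toy values, not the patent's actual transformation matrices:

```python
import numpy as np

# Minimal sketch of the separation: stack T = [T_W | T_P] and solve
# M = T_W*W + T_P*P for the optical-unit wavefront W and the pinhole
# contribution P in a least-squares sense. Toy, noise-free data.

rng = np.random.default_rng(0)
m, nw, npin = 40, 6, 3              # measurements, wavefront dofs, pinhole dofs
T_W = rng.normal(size=(m, nw))      # maps wavefront points to measurements
T_P = rng.normal(size=(m, npin))    # maps pinhole contribution to measurements
W_true = rng.normal(size=nw)
P_true = rng.normal(size=npin)
M = T_W @ W_true + T_P @ P_true     # synthetic measurement vector

T = np.hstack([T_W, T_P])           # combined transformation matrix
x, *_ = np.linalg.lstsq(T, M, rcond=None)
W_hat, P_hat = x[:nw], x[nw:]
print(np.allclose(W_hat, W_true), np.allclose(P_hat, P_true))  # True True
```

With noise-free data and a full-column-rank combined matrix, both contributions are recovered exactly; in practice the overlap of the subapertures provides the redundancy that makes the system well conditioned.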
[0096] Zernike polynomials can be fitted to the determined wavefront aberrations W of the projection optical unit in the region of the elliptical pupil to be measured and of the pinhole P, and hence it is possible to determine the Zernike spectrum.
[0097] The method from the application example can also be used to improve the wavefront measurement on a circular entrance pupil of an optical unit to be measured, rather than on the elliptical entrance pupil 11, since the contributions of the pinhole and of the projection optical unit can be separated.
[0098] In phase retrieval, the measured aerial image I(x,y,z.sub.m) is compared with a simulated aerial image I.sub.sim and fit parameters of a function set for describing the simulated aerial image are adapted until a difference between the measured aerial image and the simulated aerial image has been minimized.
[0099] The wavefront of the optical system is determined within the phase retrieval on the basis of the minimized difference between the measured and the simulated aerial image.
[0100] The phase retrieval difference minimization can be optimized with the aid of various methods. These include projection methods, which are also known as error reduction algorithms, Gerchberg-Saxton methods or IFTA methods. The use of conventional iterative optimization methods is also possible. By way of example, such methods include gradient descent, least squares, damped least squares, genetic search methods, the simplex method, Chambolle-Pock optimization and backpropagation methods. Direct inversion methods can also be used. Examples thereof include the extended Nijboer-Zernike decomposition or else a machine learning-based method on the basis of, for example, previous results stored in a database. If aberrations of the optical system are expected as a matter of principle within the entrance pupil to be measured, a sufficiently densely sampled database can be generated via simulation. The retrieval can then be implemented via a search in this database. Within the scope of machine learning, a network can be trained with the aid of a precalculated aberration data set.
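A minimal sketch of the projection (error reduction / Gerchberg-Saxton) approach mentioned above: alternate between pupil space and image space, enforcing the known pupil support in one domain and the measured amplitude in the other. The pupil geometry, the defocus-like aberration and the grid are toy assumptions, not the patent's optical data:

```python
import numpy as np

# Error-reduction (Gerchberg-Saxton/IFTA-type) sketch on synthetic data.
n = 64
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
support = (x**2 + y**2) <= 0.6**2                        # circular pupil support
phase_true = 0.8 * (2 * x**2 + 2 * y**2 - 1) * support   # defocus-like aberration
pupil_true = support * np.exp(1j * phase_true)
amp_meas = np.abs(np.fft.fft2(pupil_true))               # "measured" amplitude

pupil = support.astype(complex)                          # start from a flat phase
err0 = np.linalg.norm(np.abs(np.fft.fft2(pupil)) - amp_meas)
for _ in range(200):
    field = np.fft.fft2(pupil)
    field = amp_meas * np.exp(1j * np.angle(field))      # image-amplitude constraint
    pupil = np.fft.ifft2(field)
    pupil = support * np.exp(1j * np.angle(pupil))       # pupil-support constraint

err = np.linalg.norm(np.abs(np.fft.fft2(pupil)) - amp_meas)
print(err < err0)  # the image-domain residual decreases over the iterations
```

The error-reduction iteration is guaranteed not to increase the image-domain residual; in practice it is combined with a focus stack, as above, to resolve phase ambiguities.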
[0101] For the parametric capture and determination of the imaging aberrations of the optical system, these imaging aberrations, that is to say for example the phase distribution over the pupil, are described by way of a decomposition into basis functions.
[0102] What is important for an accurate determination of the imaging quality is that the basis functions are able to describe the expected imaging aberrations well. What is to be taken into account here is that an elliptical pupil to be measured is scanned using circular subapertures. In this case, regions of the wavefronts determined via phase retrieval overlap. To be able to use this to calculate the entire elliptical entrance pupil to be measured, it can be desirable for the basis of a function decomposition for the individual wavefronts to be chosen in such a way that it is describable by way of a shift/rotation.
[0103] Zernike polynomials are suitable as basis functions as a matter of principle. Bhatia-Wolf polynomials, Bessel functions, solutions to the Laplace equation, orthogonalized, locally distributed, narrow exponential functions and/or Gaussian functions (optionally distributed on a grid), orthogonalized, locally distributed spline polynomials (optionally distributed on a grid) and orthogonalized mixtures of basis functions were found to be advantageous in respect of the describability of a shift/rotation.
[0104] In this case, the orthogonalization of the functions improves a robustness of the optimization and a comparability of the results. A partial orthogonalization of the basis functions is also possible.
[0105] A mixture of the possible basis functions listed above may also be particularly suitable, for example a combination of Zernike polynomials and orthogonalized, locally distributed, narrow exponential functions. To this end, a small number of Zernike polynomials, for example 9 to 16 Zernike polynomials, are used to describe the conventional imaging aberrations. Additionally, localized peak functions, for example in the form of an exponential function or a Gaussian function, are used to be able to describe local deviations. In this case, the exponential functions are partially orthogonalized with respect to the Zernike functions. A partial orthogonalization of a function set F with respect to another function set G is understood to mean that each element of F is converted with the aid of a method such that it is subsequently orthogonal to all elements of G. By way of example, this can be implemented using the orthogonalization step of the Gram-Schmidt orthogonalization method. The difference from complete orthogonalization is that the elements in F and G need not necessarily be orthogonal amongst themselves.
[0106] By way of example, such an orthogonalization can be implemented using the Gram-Schmidt orthogonalization method (D. Malacara, "Optical Shop Testing", Wiley-Interscience, 1992; http://de.wikipedia.org/wiki/Schmidtsches_Orthonormalisierungsverfahren).
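The partial orthogonalization described above can be sketched as follows, with low-order polynomials standing in for a few Zernike terms (set G) and Gaussians for the localized peak functions (set F); the 1-D grid and widths are toy assumptions. For numerical robustness the sketch projects against an orthonormal basis of span(G) obtained via QR, rather than subtracting non-orthogonal elements sequentially:

```python
import numpy as np

# Partial orthogonalization: make each element of F orthogonal to every
# element of G, without orthogonalizing F internally. Toy 1-D example.
x = np.linspace(-1, 1, 201)
G = np.stack([np.ones_like(x), x, 2 * x**2 - 1], axis=1)   # "Zernike-like" set G
F = np.stack([np.exp(-((x - c) / 0.15)**2) for c in (-0.5, 0.0, 0.5)],
             axis=1)                                       # localized peak functions

Q, _ = np.linalg.qr(G)           # orthonormal basis for span(G)
F_perp = F - Q @ (Q.T @ F)       # Gram-Schmidt projection step against all of G

print(np.max(np.abs(G.T @ F_perp)) < 1e-10)        # orthogonal to every g in G
print(np.allclose(F_perp.T @ F_perp, np.eye(3)))   # F itself is not orthonormal
```

The second check illustrates the definition above: the converted elements of F are orthogonal to G, but need not be orthogonal amongst themselves.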
[0109] In addition to the image plane 15, in which the detection device 14 is arranged, a pupil plane 25 of the projection optical unit 13, in which the pupil 11 lies, plays a role below.
[0110] The following relationship can be constructed for the intensity I(x,y,z) measured by the detection device 14:
I=abs(H.sup.pupil_image(H.sup.object_pupil(E.sub.object)·E.sub.pupil)).sup.2+N  (2)
[0111] In this case, H.sup.object_pupil is an optical transfer function between the object plane 4 and the pupil 11 in the pupil plane 25;
[0112] H.sup.pupil_image is an optical transfer function between the pupil 11 and the image plane 15;
[0113] E.sub.object is a complex amplitude (amplitude and phase) of the test object 5;
[0114] E.sub.pupil is a system transfer function in the form of a complex pupil amplitude, that is to say the desired wave function of the optical system; and
[0115] N is a contribution which describes, inter alia, the noise in the detection device 14.
[0116] Within the scope of the phase retrieval, the wave function E.sub.pupil is back-calculated from the measured intensity value I.
[0117] In this case, a forward simulation of the imaging of the test object 5 by the projection optical unit 13 is implemented and a difference between a simulation parameterized in the aberrations, that is to say in the imaging aberrations, and the measurement results I is minimized.
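A minimal forward simulation in the spirit of relationship (2): propagate the complex object amplitude E_object to the pupil (here simply via a Fourier transform), multiply by the complex pupil function E_pupil, propagate to the image plane and take the squared modulus. The grids, the pinhole size and the defocus-like pupil phase are toy assumptions; the noise contribution N is omitted:

```python
import numpy as np

# Toy forward model: I = |H_pupil_image(H_object_pupil(E_object) * E_pupil)|^2
n = 128
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
E_object = (x**2 + y**2 <= 0.05**2).astype(complex)      # small pinhole object

ky, kx = np.mgrid[-1:1:1j * n, -1:1:1j * n]
support = (kx**2 + ky**2) <= 0.5**2                      # pupil support
E_pupil = support * np.exp(1j * 0.5 * (2 * (kx**2 + ky**2) - 1))  # aberrated pupil

spectrum = np.fft.fftshift(np.fft.fft2(E_object))        # object -> pupil transfer
image = np.fft.ifft2(np.fft.ifftshift(spectrum * E_pupil))  # pupil -> image transfer
I = np.abs(image)**2                                     # modeled intensity
print(I.shape)
```

Parameterizing E_pupil in basis-function coefficients and minimizing the difference between I and the measured stack yields the phase retrieval described above.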
[0118] If an anamorphic projection optical unit 13 is used, the simulation is to be adapted in accordance with the anamorphic setup. A simulation formulation based on Fourier transforms lends itself to the realization of a fast and exact simulation.
[0119] The elliptically shaped pupil 11 of the projection optical unit can be parameterized via the following variants:
[0120] Firstly, the pupil function can be represented on a square grid together with an elliptical apodization and a parameterization of the pupil function by way of compressed Zernike polynomials, that is to say Zernike polynomials that are scaled differently in the x- and y-directions.
[0122] A variant of the representation of the pupil function is implemented on a non-square pupil grid, that is to say one in which the scaling differs in the kx- and ky-directions. The scaling of the grids, that is to say the grid widths in kx and ky, is coupled to the absolute values of the associated numerical apertures NAx, NAy of the elliptical pupil 11. In respect of the pixels, this representation then has a circular apodization and a parameterization of the pupil function by way of conventional Zernike polynomials, and not by way of compressed Zernike polynomials. Within the scope of the simulation, the different grid widths in kx and ky are then taken into account in the scaling of the Fourier transform. In this case, either an adapted zero padding or a chirp Z-transform with correspondingly adapted scaling parameters can be used. The pupil grid widths in kx and ky can be chosen in such a way that the pupil function is maximally sampled and numerically has the highest information density.
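The non-square pupil grid variant can be sketched as follows: the grid spacings in kx and ky are coupled to the numerical apertures, so that the elliptical pupil fills a circular region in pixel coordinates and conventional (unscaled) Zernike polynomials can be used. The NA values and grid size are assumed example values:

```python
import numpy as np

# Non-square pupil grid: grid widths in kx and ky are coupled to NAx and NAy,
# so the elliptical pupil edge becomes the unit circle in normalized pixels.
n = 65
na_x, na_y = 0.0825, 0.0625           # assumed anamorphic numerical apertures
kx = np.linspace(-na_x, na_x, n)      # grid width coupled to NAx
ky = np.linspace(-na_y, na_y, n)      # grid width coupled to NAy
KX, KY = np.meshgrid(kx, ky)

# normalized pupil coordinates: the elliptical edge maps onto the unit circle
rho = np.hypot(KX / na_x, KY / na_y)
apodization = rho <= 1.0              # circular apodization in pixel space
print(apodization[n // 2, 0], apodization[0, 0])  # True False
```

On this grid, a pupil phase can be expanded in conventional Zernike polynomials of rho and the azimuth, while the anamorphic scaling is carried entirely by the grid widths, to be accounted for in the scaling of the Fourier transform.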
[0124] The scaling factor scal.sub.x/y of the chirp Z-transform between a given pixel grid of the detection device 14 and the x- and y-grid of the pupil representation depends on the following variables.
[0125] Here:
[0126] λ is the wavelength of the illumination light 1;
[0127] dx (dy) is the pixel dimension and
[0128] NA.sub.x/y is the numerical aperture of the pupil 11 in the x- and y-directions.
[0129] Then, different scalings arise in the x- and y-directions depending on the different numerical apertures NA.sub.x, NA.sub.y of the pupil 11.
[0130] As a rule, the following applies: dx=dy. However, the pixel dimensions of the detection device 14 in the x- and y-directions may, in principle, also be chosen to be different.
[0131] A further variant of the calculation lies in the use of a so-called error reduction algorithm, either with a conventional FFT and use of an elliptical apodization matrix or with the chirp Z-transform, adapted scaling parameters and the use of a circular apodization matrix. As a result, it is then possible in turn to alternate between pupil space and image space, the corresponding restrictions being implemented in the respective space (like in the case of the conventional IFTA algorithm, also referred to as Gerchberg-Saxton algorithm).
[0132] Using the representation variants for the pupil function explained above, it is possible on the one hand to represent the entire entrance pupil 11 to be measured, or else the subapertures 10.sub.i.
[0133] The measurement above was implemented with round subapertures 10.sub.i. In principle, the measurement can also be carried out using elliptically bounded subapertures. This can likewise be used to determine the aberrations over an elliptical entrance pupil. In this case, measurements can be carried out directly using an elliptical stop at the location of the stops 8 and 11a, respectively.