METHOD FOR SIMULATING ILLUMINATION AND IMAGING PROPERTIES OF AN OPTICAL PRODUCTION SYSTEM WHEN ILLUMINATING AND IMAGING AN OBJECT BY MEANS OF AN OPTICAL MEASUREMENT SYSTEM
20240402613 · 2024-12-05
Inventors
CPC classification
G03F7/70508
PHYSICS
G03F7/70666
PHYSICS
G03F7/70133
PHYSICS
G03F7/7085
PHYSICS
G03F1/22
PHYSICS
International classification
Abstract
A metrology system having an optical measurement system serves to simulate illumination and imaging properties of an optical production system when an object is illuminated and imaged. The optical measurement system has an illumination optical unit serving to illuminate the object and having a pupil stop in the region of an illumination pupil in a pupil plane, and an imaging optical unit for imaging the object in an image plane. At least one pupil stop for specifying a plurality of measurement illumination settings created by displacing the pupil stop in the pupil plane is provided within the scope of the simulation method. Measurement aerial images are recorded in the image plane for various displacement positions of the object perpendicular to the object plane with the various measurement illumination settings. The various measurement illumination settings are specified by displacing the pupil stop. A complex mask transfer function is reconstructed from the recorded measurement aerial images. A 3-D aerial image of the optical production system is determined from the reconstructed mask transfer function and a given illumination setting of the optical production system as the result of the simulation method. The reconstruction includes the fact that profiles of stop edges of the at least one pupil stop which effectively act to specify the respective measurement illumination setting are changed in a manner going beyond a pure displacement of the stop edge when the respective measurement illumination setting is specified on the basis of the displacement position of the pupil stop. This results in an improvement of the simulation method.
Claims
1. A method for simulating illumination and imaging properties of an optical production system when an object is illuminated and imaged, wherein the simulation is implemented by use of an optical measurement system of a metrology system, wherein the optical measurement system comprises an illumination optical unit for illuminating the object, having a pupil stop of the illumination optical unit in the region of an illumination pupil in a pupil plane, and an imaging optical unit for imaging the object in an image plane, wherein the object is displaceable perpendicular to an object plane, including the following steps: providing at least one pupil stop for specifying a plurality of measurement illumination settings created by displacing the pupil stop in the pupil plane, recording measurement aerial images in the image plane for various displacement positions of the object perpendicular to the object plane with the various measurement illumination settings, wherein the various measurement illumination settings are specified by displacing the pupil stop, reconstructing a complex mask transfer function from the recorded measurement aerial images, and determining a 3-D aerial image of the optical production system from the reconstructed mask transfer function and a given illumination setting of the optical production system as the result of the simulation method, wherein the reconstruction includes the fact that profiles of stop edges of the at least one pupil stop which effectively act to specify the respective measurement illumination setting are changed in a manner going beyond a pure displacement of the stop edge when the respective measurement illumination setting is specified on the basis of the displacement position of the pupil stop.
2. The method of claim 1, wherein shadowing effects on account of a finite thickness of a main body of the pupil stop are included in a determination of a change in the profiles of the stop edges of the at least one pupil stop when the measurement illumination settings are specified.
3. The method of claim 1, wherein shadowing effects on account of a chief ray angle of an illumination of the object in the optical production system of greater than 4° are included in a determination of a change in the profiles of the stop edges of the at least one pupil stop when the measurement illumination settings are specified.
4. The method of claim 1, wherein there is a field-dependent determination of a change in the profiles of the stop edges of the at least one pupil stop when the measurement illumination settings are specified.
5. The method of claim 1, wherein a field-dependence of imaging properties of an imaging optical unit of the optical production system is included in a determination of a change in the profiles of the stop edges of the at least one pupil stop when the measurement illumination settings are specified.
6. The method of claim 1, wherein at least one of the following correction terms is included when the mask transfer function is reconstructed: a calculated aerial image for the associated defocus value and an associated field height, created by simulating an image by use of the imaging optical unit of the optical production system with the inclusion of reconstructed spectra of the object, and/or a calculated aerial image for the associated defocus value, created by simulating an image by use of the measurement imaging optical unit with the inclusion of the reconstructed spectra.
7. The method of claim 1, wherein the recording of the measurement aerial images utilizes a pupil stop whose stop shape is optimized with the aid of the following method steps: specifying a starting stop shape of the pupil stop as an initial design candidate for the simulation, modifying the starting stop shape to give rise to a modification stop shape which is different from the most recently predefined stop shape, checking at least one fabrication boundary condition with regard to fabrication of the modification stop shape and repeating the modifying and checking steps until the checking reveals compliance with the fabrication boundary condition, ascertaining a match quality between the illumination and imaging properties of the optical production system and the illumination and imaging properties of the optical measurement system as soon as the fabrication boundary conditions are complied with, repeating the modifying, checking and ascertaining steps until the match quality attains a predefined optimization criterion, which is checked by way of a query step, and fabricating a target stop shape resulting from the attaining of the optimization criterion as an optimized pupil stop shape after attaining the optimization criterion.
8. The method of claim 7, wherein the stop edge is optimized separately for a plurality of field regions and, in particular, for a plurality of field heights, with the result that this gives rise to a plurality of pupil stops which can each be used for simulating the properties of the optical production system in the corresponding field region.
9. A metrology system for carrying out a method of claim 1, wherein the optical measurement system comprises an illumination optical unit serving to illuminate the object and having a pupil stop in the region of an illumination pupil in a pupil plane, and an imaging optical unit for imaging the object in the image plane.
10. The metrology system of claim 9, wherein the optical measurement system comprises a displacement drive for displacing the pupil stop in at least one displacement direction in the pupil plane, wherein the optical measurement system comprises an object holder which is displaceable perpendicular to an object plane by actuator.
11. The metrology system of claim 9, wherein the optical measurement system comprises a displacement drive for displacing, in at least one displacement direction in a pupil plane of the imaging optical unit, an imaging pupil stop arranged in the region of a pupil of the imaging optical unit.
12. The metrology system of claim 9, comprising a selection apparatus for selecting at least one pupil stop from a plurality of pupil stops.
13. The metrology system of claim 10, wherein the optical measurement system comprises a displacement drive for displacing, in at least one displacement direction in a pupil plane of the imaging optical unit, an imaging pupil stop arranged in the region of a pupil of the imaging optical unit.
14. The metrology system of claim 10, comprising a selection apparatus for selecting at least one pupil stop from a plurality of pupil stops.
15. The metrology system of claim 9, wherein shadowing effects on account of a finite thickness of a main body of the pupil stop are included in a determination of a change in the profiles of the stop edges of the at least one pupil stop when the measurement illumination settings are specified.
16. The metrology system of claim 9, wherein shadowing effects on account of a chief ray angle of an illumination of the object in the optical production system of greater than 4° are included in a determination of a change in the profiles of the stop edges of the at least one pupil stop when the measurement illumination settings are specified.
17. The method of claim 2, wherein shadowing effects on account of a chief ray angle of an illumination of the object in the optical production system of greater than 4° are included in a determination of a change in the profiles of the stop edges of the at least one pupil stop when the measurement illumination settings are specified.
18. The method of claim 2, wherein there is a field-dependent determination of a change in the profiles of the stop edges of the at least one pupil stop when the measurement illumination settings are specified.
19. The method of claim 2, wherein a field-dependence of imaging properties of an imaging optical unit of the optical production system is included in a determination of a change in the profiles of the stop edges of the at least one pupil stop when the measurement illumination settings are specified.
20. The method of claim 2, wherein at least one of the following correction terms is included when the mask transfer function is reconstructed: a calculated aerial image for the associated defocus value and an associated field height, created by simulating an image by use of the imaging optical unit of the optical production system with the inclusion of reconstructed spectra of the object, and/or a calculated aerial image for the associated defocus value, created by simulating an image by use of the measurement imaging optical unit with the inclusion of the reconstructed spectra.
Description
BRIEF DESCRIPTION OF DRAWINGS
[0028] Exemplary embodiments of the invention are explained in greater detail below with reference to the drawings.
DETAILED DESCRIPTION
[0056] In order to facilitate the representation of positional relationships, a Cartesian xyz-coordinate system will be used hereinafter. In
[0057] In a view that corresponds to a meridional section,
[0058] An example of the test structure 5 is depicted in a plan view in
[0059] The metrology system 2 is used to analyze a three-dimensional (3-D) aerial image (aerial image metrology system). One application is the simulation of an aerial image of a lithography mask, as the aerial image would also appear in an optical production system of a projection exposure apparatus used for production, for example in a scanner. To this end, an imaging quality of the metrology system 2 itself, in particular, can be measured and optionally adjusted. Consequently, the analysis of the aerial image can serve to determine the imaging quality of a projection optical unit of the metrology system 2, or else to determine the imaging quality of, in particular, projection optical units within a projection exposure apparatus. Metrology systems are known from DE 10 2019 208 552 A1, from WO 2016/012 426 A1, from US 2013/0063716 A1 (cf. FIG. 3 therein), from DE 102 20 815 A1 (cf. FIG. 9 therein), from DE 102 20 816 A1 (cf. FIG. 2 therein) and from US 2013/0083321 A1.
[0060] The illumination light 1 is reflected and diffracted at the test structure 5. A plane of incidence of the illumination light 1 is parallel to the yz-plane in the case of the central, initial illumination.
[0061] The EUV illumination light 1 is produced by an EUV light source 8. The light source 8 can be a laser plasma source (LPP; laser produced plasma) or a discharge source (DPP; discharge produced plasma). In principle, a synchrotron-based light source can also be used, e.g. a free electron laser (FEL). A used wavelength of the EUV light source can range between 5 nm and 30 nm. In principle, in one variant of the metrology system 2, a light source for another used light wavelength can also be used instead of the light source 8, for example a light source for a used wavelength of 193 nm.
[0062] An illumination optical unit 9 of the metrology system 2 is arranged between the light source 8 and the test structure 5. The illumination optical unit 9 serves for the illumination of the test structure 5 to be examined, with a defined illumination intensity distribution over the object field 3 and at the same time with a defined illumination angle distribution with which the field points of the object field 3 are illuminated. Such an illumination angle distribution is also referred to as illumination setting.
[0063] The respective illumination angle distribution of the illumination light 1 is specified by way of a pupil stop 10, which is arranged in an illumination optical unit pupil plane 11. The pupil stop 10 is also referred to as a sigma stop.
[0066] Further variants of pupil stops 10 with a central passage pole I of increasingly larger radius are shown in
[0071] Corresponding annular illumination settings can be realized using the embodiments of the pupil stops 10 according to
[0089] Measured from the x-coordinate of the pupil stop 10 of
[0097] The pupil stop 10 of the illumination optical unit 9 is embodied as a stop which is displaceable in driven fashion and which is arranged in front of the object plane 4 in an illumination light beam path 15 of the illumination light 1. A drive unit used for the driven displacement of the pupil stop 10 is depicted at 16 in
[0098] With the aid of the displacement drive 16, it is possible to displace the selected pupil stop 10 along the pupil coordinates k.sub.x and k.sub.y in the pupil plane 11.
[0099] The displacement drive 16 may also include a stop interchange unit, by use of which a specific pupil stop 10 is replaced with another, specific pupil stop 10. To this end, the stop interchange unit may take the respective selected pupil stop from a stop storage unit and return the replaced stop to this stop storage unit.
[0100] The test structure 5 is held by an object holder 17 of the metrology system 2. The object holder 17 cooperates with an object displacement drive 18 for displacing the test structure 5, in particular along the z-coordinate.
[0101] Following reflection at the test structure 5, the electromagnetic field of the illumination light 1 has a distribution 19 which is depicted in
[0102] The illumination light 1 reflected by the test structure 5 enters an imaging optical unit or projection optical unit 20 of the metrology system 2.
[0103] A diffraction spectrum 21 arises in a pupil plane of the projection optical unit 20 on account of the periodicity of the test structure 5 (cf.
[0104] The 0th order of diffraction of the test structure 5 is present centrally in the diffraction spectrum 21. Moreover,
[0105] The orders of diffraction of the diffraction spectrum 21 depicted in
[0106] The imaging pupil stop 23 is operatively connected to a displacement drive 25, the function of which corresponds to that of the displacement drive 16 for the sigma stop 10.
[0109] The pupils 24 (cf.
[0110] The intensity distribution in the exit pupil 26 finds contributions firstly from the images of the −1st, 0th and +1st orders of diffraction and secondly from an imaging contribution of the optical system, specifically of the projection optical unit 20. This imaging contribution which is elucidated in
[0111] The projection optical unit 20 images the test structure 5 towards a spatially resolving detection device 27 of the metrology system 2. The detection device 27 is in the form of a camera, in particular a charge-coupled device (CCD) camera or complementary metal-oxide-semiconductor (CMOS) camera.
[0112] The projection optical unit 20 is embodied as a magnifying optical unit. A magnification factor of the projection optical unit 20 may be greater than 10, may be greater than 50, may be greater than 100 and may even be greater still. As a rule, this magnification factor is less than 1000.
[0113] In a manner corresponding to
[0115] The following procedure is carried out to simulate the illumination and imaging properties of the optical production system when illuminating and imaging the object, using the example of the test structure 5, by use of the optical measurement system of the metrology system 2:
[0116] Firstly, at least one pupil stop 10 and, for instance, a plurality of pupil stops 10 each with different stop edge shapes are provided for the purpose of specifying correspondingly different measurement illumination settings. This is implemented by providing pupil stops 10, for example in the style of the pupil stops 10 of
[0117] Then, a target pupil stop with a target stop edge shape is specified proceeding from an illumination setting of the optical production system to be simulated. The target pupil stop can be an arrangement of a plurality or multiplicity of individual pupil spots or stop spots. In this case, the intensity of individual illumination spots or pupil spots generally differs between the individual spots.
[0120] The target pupil stop 36 can be specified by way of a definition of appropriate stop aperture contours, especially continuous stop aperture contours. Such stop aperture contours can be described by polygonal chains, for example.
[0121] These continuous openings are then approximated by a finite number of pupil spots 37 within the openings. These spots are depicted in
[0122] For the specific example in
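The approximation of continuous stop aperture contours by a finite grid of pupil spots, as described above, can be sketched as follows. This is a minimal illustration and not the implementation of the document: the polygonal aperture, the grid pitch, and the even-odd point-in-polygon test are all assumptions.

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does the horizontal ray from (x, y) cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def sample_pupil_spots(poly, pitch):
    """Approximate a stop aperture contour (polygonal chain) by grid spots."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    spots = []
    x = min(xs)
    while x <= max(xs):
        y = min(ys)
        while y <= max(ys):
            if point_in_polygon(x, y, poly):
                spots.append((round(x, 6), round(y, 6)))
            y += pitch
        x += pitch
    return spots

# Hypothetical square aperture in normalized pupil coordinates
square = [(-0.3, -0.3), (0.3, -0.3), (0.3, 0.3), (-0.3, 0.3)]
spots = sample_pupil_spots(square, pitch=0.1)
```

A real stop aperture would typically be described by several such polygonal chains (one per pole of the illumination setting), each sampled on the same pupil grid.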
[0124] Proceeding from this target pupil stop 36, at least one pupil stop 10 is then selected from the provided plurality of pupil stops 10 by use of an algorithm which qualifies deviations between the respective stop edge shape of the provided pupil stops 10 and the target stop edge shape of the target pupil stop 36. To this end, the pupil stop 10 currently under examination during the selection (also referred to as pupil stop to be qualified below) can in turn be decomposed within its stop edge into a plurality of pupil spots 38 arranged in grid-like fashion and represented by circles in
[0125] The scope of qualification comprises determining the similarity between the target illumination pupil (also denoted T below) and the possible measurement stops 10 (also denoted M below). For instance, this can be implemented by calculating an overlap function Q.
[0126] Here, A is a function for (approximately) calculating the area. The first term corresponds to the normalized area of the overlap between measurement stop and target illumination pupil. The second and third terms correspond to the normalized difference area between the measurement stop and the target illumination pupil, and vice versa. The difference area is intended to refer to the area contained only in the first pupil and not in the second.
[0127] The operators ∩, ∪ and \ correspond to the intersection (∩), union (∪) and relative complement (\) operators from set theory. In this case, the intersection M.sub.1∩M.sub.2 of the sets/areas M.sub.1 and M.sub.2 is intended to mean the set/area which is contained both in M.sub.1 and in M.sub.2, i.e. corresponds to the overlap area of M.sub.1 and M.sub.2. The union M.sub.1∪M.sub.2 of the sets/areas M.sub.1 and M.sub.2 describes the set/area which is contained in M.sub.1 or M.sub.2, i.e. corresponds to the overall area covered by M.sub.1 or M.sub.2. The relative complement M.sub.1\M.sub.2 of the sets/areas M.sub.1 and M.sub.2 describes the set/area which is covered by M.sub.1 but not contained in M.sub.2.
[0128] For instance, the area function A can be implemented as counting illumination spots in the pupil. To this end, target illumination pupil and measurement pupil are scanned using the same grid. Typically, the grid corresponds to the pupil facet grid in the scanner on which the target illumination pupil is sampled (cf.
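The spot-counting area function A and an overlap measure of the kind described above can be sketched as follows. Since the document does not reproduce the formula for Q, the specific form (overlap minus the two difference areas, normalized by the union area) is an assumption.

```python
def overlap_quality(measure_spots, target_spots):
    """Similarity Q between measurement stop M and target illumination pupil T,
    with areas approximated by counting spots on a common grid.
    Normalizing by the union area is an assumption, not the document's formula."""
    M, T = set(measure_spots), set(target_spots)
    A = len  # area function: count illumination spots in the pupil
    union = A(M | T)
    if union == 0:
        return 0.0
    # normalized overlap area minus the two normalized difference areas
    return (A(M & T) - A(M - T) - A(T - M)) / union

# Hypothetical spot sets sampled on the same pupil grid
target = {(i, j) for i in range(-3, 4) for j in range(-3, 4)}
measurement = {(i, j) for i in range(-3, 4) for j in range(-3, 4)}
q_same = overlap_quality(measurement, target)      # identical pupils
q_disjoint = overlap_quality({(10, 10)}, target)   # no overlap
```

With this normalization, identical pupils give Q = 1 and fully disjoint pupils give Q = −1, so the stop with the largest Q would be selected.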
[0129] Thus, the selection of the pupil stop 10 encompasses a comparison between the poses of pupil spots 37 of the target stop edge shape and the poses of pupil spots 38 of the provided pupil stops 10.
[0130] Moreover, a plurality of defocus values z.sub.m (cf.
[0131] Moreover, a plurality of measurement positions (k.sub.x, k.sub.y) of the selected pupil stop 10 are specified within the scope of the simulation method.
[0132] Now, measurement aerial images I.sub.meas({right arrow over (r)}, z.sub.n, {right arrow over (q)}.sub.m) are recorded in the image plane 29 at the spatial coordinates {right arrow over (r)}=x, y, in the style of intensity distributions 31 according to
[0133] The sequence in
[0135] In comparison with the imaging pupil stop 23,
[0136] In comparison with the centered position according to
[0137] An alternative sequence of measurement positions (k.sub.x, k.sub.y) of the pupil stop 10 is depicted in
[0140] Relative to the imaging pupil stop 23,
[0141] Relative to the imaging pupil stop 23,
[0142] Relative to the imaging pupil stop 23,
[0143] The completed sequence of measurement positions (k.sub.x, k.sub.y) is shown in
[0145] The selection of the respective measurement position sequence, or optionally a subset therefrom, is implemented on the basis of the arrangement of individual structures of the test structure 5 and/or on the basis of the illumination setting of the optical production system to be simulated. For instance, the measurement position sequence can be selected in a manner analogous to the stop selection algorithm (see above), with all stop positions of a sequence being taken into account and the sequence being selected for which the overlap of the measurement sequence with the target illumination pupil is maximal.
[0146] The poses of the pupil stop 10 which differ from the center position in terms of the relative pose with respect to the imaging pupil stop 23 are also referred to as offset measurement positions. Within the scope of a measurement position sequence, two to ten such offset measurement positions can be homed in on, this typically being two to five offset measurement positions, for example three or four offset measurement positions. The offset measurement positions can be arranged uniformly distributed in the circumferential direction. To reduce the measurement time, it is also possible to use only a subset, e.g. every second measurement position, from the measurement schemes (
[0147] The specified defocus values z.sub.m are all measured with the aid of the respective measurement position sequence. In an alternative, it is possible that the entire respective measurement position sequence is used only for one defocus value or for individual defocus values z.sub.m, with the measurement aerial images being recorded for fewer measurement positions of the pupil stop relative to the imaging pupil stop 23 in the case of other defocus values z.sub.m. In extreme cases, it is possible for instance to home in on the entire measurement position sequence and record a respective measurement aerial image there for only one defocus value z.sub.m, whereas the measurement aerial image I.sub.meas is only recorded at one respective measurement position, in particular for the centered pupil stop 10, in the case of the other specified defocus values z.sub.m.
[0148] For instance, the following defocus value/measurement position combinations can be recorded: A central defocus value z.sub.m and a plurality of measurement positions (k.sub.x, k.sub.y) of the pupil stop 10, i.e., in particular, a centered measurement position and a plurality of offset measurement positions, and defocus values z.sub.min, z.sub.max maximally offset from the central defocus value on both sides, with exactly one central measurement position (k.sub.x, k.sub.y) of the pupil stop 10 being adopted at these positions z.sub.min, z.sub.max.
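A measurement position sequence of the kind described above, i.e. a centered position plus a number of offset positions distributed uniformly in the circumferential direction, can be generated as in the following sketch; the offset radius and the counts are placeholder values, not values from the document.

```python
import math

def measurement_positions(n_offsets, radius):
    """Centered measurement position plus n_offsets offset positions spaced
    uniformly in the circumferential direction at the given pupil-plane radius."""
    positions = [(0.0, 0.0)]  # centered measurement position (k_x, k_y)
    for i in range(n_offsets):
        phi = 2.0 * math.pi * i / n_offsets
        positions.append((radius * math.cos(phi), radius * math.sin(phi)))
    return positions

# e.g. four offset measurement positions at a hypothetical radius of 0.5
seq = measurement_positions(n_offsets=4, radius=0.5)

# a reduced scheme: keep the center, then every second offset position
reduced = [seq[0]] + seq[1::2]
```

Such a sequence would then be combined with the defocus values z.sub.m, e.g. the full sequence at the central defocus value and only the centered position at the extreme defocus values.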
[0149] Then, a complex mask transfer function is reconstructed from the totality of measurement aerial images recorded with the selected pupil stop 10. A similar reconstruction step is also described in DE 10 2019 215 800 A1.
[0150] The reconstruction is implemented within the scope of a modelled description, in which a function σ({right arrow over (p)}, {right arrow over (q)}) reproducing the illumination directions {right arrow over (p)} passed through the pupil stop 10 is used to describe the projection optical unit 20 of the metrology system 2 with the illumination setting specified by the pupil stop 10. In contrast to the reconstruction according to DE 10 2019 215 800 A1, for example, a change in the illumination light distribution σ on account of a change in the measurement position of the pupil stop 10 is not limited to a description by way of a pure displacement vector; instead, the description of the illumination light distribution σ includes a chief ray-dependent change in effective edge contours of the pupil stop 10, dependent on the displacement position thereof. Thus, the illumination light distribution depends on, firstly, the pupil coordinate {right arrow over (p)}, which describes a basic shape of pupil stop edge contours, and, secondly, on a chief ray illumination direction {right arrow over (q)}. The field dependence of the illumination light distribution considered in this way is illustrated in more detail with the aid of the figures described below:
[0152] By way of x.sup.1 and x.sup.2,
[0153] The following variables are depicted schematically in
[0154] In particular, it should be observed that the shapes of the intensity spots 34 of the respective EUV illumination pupil BP.sub.x1 also vary depending on the direction of the illumination chief ray CRA.sub.i. This shape dependence of the intensity spots 34 on the chief ray angle CRA.sub.i can be traced back to optical system aberrations.
[0155] The pupil coordinates σ.sub.x, σ.sub.y used in the description of
[0157] θ denotes an azimuth angle between a perpendicular to the object field coordinate x, once again through the object field point of incidence of the chief ray CRA, and a projection line of the chief ray CRA in the xy-plane.
[0158] The orientation of the chief ray CRA with respect to the object field 3 can be described exactly by way of the two angles α and θ.
[0159] In general, stop shadowing effects predominantly come to bear at angles α of greater than 4° and θ of up to 100°.
[0160] The angle θ specifies a deviation of the chief ray from a meridional trajectory (parallel to the yz-plane).
[0161] The metrology system 2 measures a portion of the reticle, i.e. of the test structure 5, in the form of an aerial image. A chief ray CRA of the production system is simulated with the aid of the optical measurement system of the metrology system 2, and so a variation of a contour of the pupil stop 10 in the respective measurement position arises as a function of a simulated chief ray angle CRA.sub.i; this is a consequence of the inclination of the respective production system chief ray angle CRA.sub.i, i.e. of the respective chief ray illumination direction.
[0164] In addition,
[0165] The pupil stop 10 according to
[0166] Further,
[0167] The finite thickness of the stop main body 41, inter alia, causes a slight angular space offset of the contours of the corresponding illumination pupil when the chief ray CRA.sub.i is varied, as depicted schematically in
[0168] In general, the fact that the intensities, contours and polarization properties in the illumination pupil vary when the illumination directions of the chief rays CRA.sub.i are varied also applies to the illumination pupil of the metrology system 2. In principle, the respective imaging optical units of the metrology system 2 and optical production system to be simulated also have a field variation. The upshot is that, in the optical production system, an exit pupil of the imaging optical unit, which images the object field or illumination field in the image field, regularly varies as a function of the field coordinate. This exit pupil variation is independent of the respective optical production system illumination pupil. In a manner similar to the situation presented above, contour, polarization and intensity variations arise in the metrology system 2 for different illumination directions of the chief rays CRA.sub.i.
[0172] An effective edge of an aperture stop 42 for a first chief ray angle CRA1 is depicted in
[0173] Accordingly,
[0174] A variation in the field point leads to a change in the aperture edge, the intensity distribution (apodization), and the phase and polarization effects; this can be described by way of a Jones pupil J_production_system_1/2 or J_metrology_system_1/2 for the two field and chief ray variants CRA1 and CRA2 in the optical systems.
[0175] A person skilled in the art finds details in this respect and, in particular, in respect of the Jones formalism in M. Totzeck, P. Gräupner, T. Heil, A. Göhnermeier, O. Dittmann, D. Krähmer, V. Kamenov, J. Ruoff, and D. Flagello, "How to describe polarization influence on imaging", Proc. SPIE 5754, 23-37 (2005), and in the textbook "Field Guide to Polarization" by E. Collett, SPIE Press Book, 2005.
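In the Jones formalism referenced above, a polarization element is a 2×2 complex matrix acting on a two-component transverse field vector. The following is a minimal generic illustration of that calculus, not the Jones pupils J_production_system or J_metrology_system themselves, which are not reproduced in the document; the polarizer angle is an arbitrary example value.

```python
import math

def linear_polarizer(theta):
    """Jones matrix of an ideal linear polarizer at angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c * c, c * s],
            [c * s, s * s]]

def apply(jones, field):
    """Apply a 2x2 Jones matrix to an (Ex, Ey) field vector."""
    return [jones[0][0] * field[0] + jones[0][1] * field[1],
            jones[1][0] * field[0] + jones[1][1] * field[1]]

def intensity(field):
    """Total intensity |Ex|^2 + |Ey|^2."""
    return sum(abs(e) ** 2 for e in field)

# Horizontally polarized unit field through a polarizer at 30 degrees:
E_in = [1.0 + 0j, 0.0 + 0j]
E_out = apply(linear_polarizer(math.radians(30)), E_in)
# Malus's law: transmitted intensity equals cos^2(30 deg) = 0.75
```

A field- and chief-ray-dependent Jones pupil would assign such a matrix to every pupil coordinate, which is exactly the dependence J(field, CRA) discussed above.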
[0176] Thus, as a rule, the following dependencies have to be considered when simulating aerial image properties of the optical production system using the metrology system 2, especially in the sub-nanometer range, i.e. on a picometer scale:
[0177] a field dependence of an optical production system illumination pupil;
[0178] a chief ray angle dependence of the metrology system illumination pupil;
[0179] a field dependence of an exit pupil of the optical production system imaging optical unit;
[0180] a chief ray angle dependence of an exit pupil of the imaging optical unit 20 in the metrology system 2.
[0181] These effects can be considered in the simulation method described herein, both during the reconstruction and during the forward propagation, as described below.
[0182] In a manner corresponding to the explanations given above in relation to
[0183] The finite thickness of the stop main body 41 leads to differences in the effective stop contours of the respective pupil stop 10 in the measurement positions according to
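The angular-space offset caused by the finite thickness of the stop main body can be estimated with simple geometry: a ray inclined by an angle θ that grazes a stop edge of thickness t is vignetted as if the edge were shifted laterally by roughly t·tan θ, and dividing by the pupil radius converts this to normalized pupil coordinates. This is an illustrative geometric-shadow estimate; all numbers below are assumptions, not values from the document.

```python
import math

def edge_offset_pupil(thickness_mm, ray_angle_deg, pupil_radius_mm):
    """Approximate shift of the effective stop edge, in normalized pupil
    coordinates, caused by a finite stop thickness (simple geometric-shadow
    model; an illustrative estimate only)."""
    lateral_shift_mm = thickness_mm * math.tan(math.radians(ray_angle_deg))
    return lateral_shift_mm / pupil_radius_mm

# Hypothetical numbers: 0.2 mm thick stop main body, 6 degree chief ray,
# 5 mm pupil radius
delta_sigma = edge_offset_pupil(0.2, 6.0, 5.0)
```

Even for these modest numbers the edge contour shifts by a few thousandths of the pupil radius, which is why the reconstruction uses displacement-dependent effective edge contours rather than a pure displacement of a fixed edge.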
[0184] In
[0185] For typical measurement sequences by the metrology system 2, a plurality of focus stacks (variation of z.sub.m, cf.
[0186] The (known) effective aperture contours 39 can be taken into account when reconstructing the complex mask spectrum. To this end, the effective aperture contours 39 can for instance be ascertained by optical simulations (e.g. by way of ray tracing) of the illumination and projection optical unit. Further, it is also possible to measure these directly by way of a pupil image. For instance, Bertrand optics can be used to this end.
[0187] Each illumination direction produces a complex-valued field distribution m({right arrow over (r)}, {right arrow over (p)}) (cf. the field distribution 19 in
[0188] Here,
is the curtailment by the numerical aperture of the imaging optical unit 20, i.e. by the imaging pupil stop 23, and
is the wavefront error caused by a defocus z (displacement by the object holder 17). Now, for this curtailment by the aperture stop 23, the effective stop edge 39 is set in accordance with the explanations relating to
[0189] The propagated spectrum (cf.
[0190] In this case, {right arrow over (r)} is the xy-position of the intensity measurement, i.e. the respective pixel of the camera 27.
[0191] {right arrow over (q)} is the illumination direction and {right arrow over (p)} is the pupil coordinate. An illumination direction {right arrow over (q)} corresponds to a center of a stop aperture of the respective pupil stop 10 in the respective measurement position.
[0192] For ({right arrow over (p)}, {right arrow over (q)}), the effective stop edges of the pupil stops 10 at the different measurement positions are used in accordance with the explanations given above, especially in the context of
[0193] The object now is to determine the mask spectrum M({right arrow over (k)}, {right arrow over (p)}). In this case, {right arrow over (k)} is the pupil coordinates in the entrance pupil 24 of the projection optical unit 20 and {right arrow over (p)} is the illumination direction.
[0194] The Fourier transform of the respective mask spectrum yields the associated mask transfer function.
[0195] The reconstructed spectra can then be used to calculate the aerial image for any other illumination setting σ.sub.target({right arrow over (p)}) and any defocus z.sub.target. This is also referred to as forward propagation.
[0196] The determination of M({right arrow over (k)}, {right arrow over (p)}) can be formulated as an optimization problem: Sought are the spectra M({right arrow over (k)}, {right arrow over (p)}) for which there is a minimum deviation F between the simulated aerial images and the aerial images I.sub.meas measured at the defocus positions z.sub.1, z.sub.2 . . . z.sub.N and the illumination directions {right arrow over (q)}.sub.1, {right arrow over (q)}.sub.2 . . . {right arrow over (q)}.sub.M. The following optimization problem should be solved:
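The optimization problem can be sketched in a simplified scalar 1-D model. The grid size, the hard pupil cutoff, and the quadratic defocus phase below are illustrative assumptions, not the exact formulation of Equations (1)-(3):

```python
import numpy as np

# Toy 1-D stand-in for the focus-stack reconstruction: one illumination
# direction, a hard effective stop edge P(k), and a quadratic defocus phase.
N = 64
k = np.fft.fftfreq(N)                        # pupil coordinate (1-D stand-in)
P = (np.abs(k) < 0.3).astype(float)          # effective stop edge (NA cutoff)
z_list = [-2.0, 0.0, 2.0]                    # defocus positions z_1 .. z_N

def aerial_image(M, z):
    """I_sim(x, z) = |IFFT[M(k) * P(k) * exp(i*pi*z*k^2)]|^2."""
    field = np.fft.ifft(M * P * np.exp(1j * np.pi * z * k**2))
    return np.abs(field) ** 2

# Synthetic "measured" focus stack generated from a known spectrum M_true.
rng = np.random.default_rng(0)
M_true = (rng.normal(size=N) + 1j * rng.normal(size=N)) * P
I_meas = [aerial_image(M_true, z) for z in z_list]

def F(M):
    """Deviation between simulated and measured aerial images over the stack."""
    return sum(float(np.sum((aerial_image(M, z) - I) ** 2))
               for z, I in zip(z_list, I_meas))
```

In practice, F would be minimized over M with a nonlinear least-squares or gradient-based solver; at the ground-truth spectrum the residual vanishes.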
[0197] A separate spectrum is reconstructed for each illumination direction {right arrow over (p)}.
[0198] A simulated aerial image I.sub.sim for the target illumination setting σ.sub.target and the target defocus z.sub.target can be calculated using the reconstructed directionally dependent spectrum:
[0199] The target illumination setting σ.sub.target now also depends on the field position x.sup.m of an intensity measurement position in the object field 3. This dependence corresponds to the variation depicted in
[0200] Thus, an actual intensity distribution BP.sub.xm in the optical production system illumination pupil BP is used for each intensity measurement position. This intensity distribution can be ascertained from an optical simulation of the optical production system or from a measurement of an optical production system illumination unit. This actual intensity distribution in the optical production system illumination pupil is assumed to be known.
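This forward propagation can be sketched as an Abbe-type incoherent sum over illumination directions, one reconstructed spectrum per direction. The 1-D grid, the field-independent pupil edge, and the dictionary-based weights below are illustrative simplifications:

```python
import numpy as np

# Illustrative 1-D Abbe-type forward propagation: incoherent sum of the
# per-direction images, weighted by the target illumination setting.
N = 64
k = np.fft.fftfreq(N)
NA = 0.3                                     # exit-pupil edge (field-independent here)

def image_for_direction(M, q, z):
    """Image contribution of one illumination direction q at defocus z."""
    P = (np.abs(k + q) < NA).astype(float)   # spectrum shifted by q, clipped at NA
    field = np.fft.ifft(M * P * np.exp(1j * np.pi * z * (k + q) ** 2))
    return np.abs(field) ** 2

def forward_propagate(spectra, sigma_target, z_target):
    """spectra: dict q -> reconstructed spectrum M(k) for that direction;
    sigma_target: dict q -> weight of q in the target illumination pupil."""
    total = np.zeros(N)
    for q, w in sigma_target.items():
        total += w * image_for_direction(spectra[q], q, z_target)
    return total / sum(sigma_target.values())

# Example: a dipole-like target setting with two symmetric directions.
rng = np.random.default_rng(1)
M = rng.normal(size=N) + 1j * rng.normal(size=N)
I_sim = forward_propagate({-0.1: M, 0.1: M}, {-0.1: 0.5, 0.1: 0.5}, z_target=1.0)
```

In the full method, the weights would come from the known field-dependent intensity distribution BP.sub.xm rather than a fixed dictionary.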
[0201] A dependence of an exit pupil contour of the optical production system imaging optical unit on, firstly, the field position and/or on the chief ray angle can be taken into account by way of a field-dependent transfer function of the optical production system imaging optical unit:
[0202] Here, NA.sub.Scanner({right arrow over (k)}, x.sup.m) is the description of the stop edge 39.sub.m of the optical production system aperture stop 42, dependent on the field pose x.sup.m (cf. also the above description in relation to
[0203] The reconstruction thus includes that profiles of edge contours of the pupil stops 10 at the respective measurement position, i.e. of measurement illumination settings specified by stop contours 39 of the pupil stop 10, change in a manner which is dependent on a respective displacement position of the pupil stop 10 and which goes beyond a pure displacement of the edge contours.
[0204] Equation (4) then allows comparison between the simulated aerial image I.sub.sim and the respectively measured aerial image I.sub.meas, and this can be used to reconstruct the mask spectrum M and, accordingly, the complex mask transfer function.
[0205] From Equation (4), the 3-D aerial image can be calculated with the aid of the reconstructed mask transfer function M and the illumination setting .sub.target of the optical production system. In this way, it is possible to ascertain what the aerial image of the test structure 5 would look like if it were imaged by the optical production system.
[0206] As an alternative to the method described in the previous section, a correction approach analogous to DE 10 2019 206 651 A1 is also possible in place of a completely synthetic calculation of the images by propagating the reconstructed mask spectrum. To this end, a correction term is calculated:
[0207] The two terms I.sub.sim({right arrow over (r)}, z, x.sup.m) and I.sub.sim ({right arrow over (r)}, z, {right arrow over (q)}) correspond to those in Equations (4) and (2) above. A precondition for this is that measurements are performed for the same focus poses and illumination stop positions as for the target settings.
[0208] The correction term corresponds to the difference in the aerial images produced by the CRA/field dependence. Thus, the CRA/field dependence can be corrected as follows:
[0209] In this correction approach, systematic/constant errors in the mask spectrum reconstruction, i.e. the same deviations in I.sub.sim({right arrow over (r)}, z, x.sup.m) and I.sub.sim({right arrow over (r)}, z, {right arrow over (q)}), compensate one another and do not make an unwanted contribution to the final image. As a result, effects/properties contained in the measurement data can be retained even if they are not taken into account in the imaging model of the object reconstruction. An example is 3-D mask effects in a reconstruction with a simple imaging model without explicit consideration of 3-D mask effects. In the limit case of a reconstruction with a negligible residue, i.e. F.fwdarw.0 in Equation (3), both methods (propagation and correction approach) are equivalent.
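The correction approach reduces to a single array operation; the function and array names below are illustrative placeholders:

```python
import numpy as np

# Correction approach: the CRA/field-dependent difference of two simulated
# images corrects the measured image directly.
def correct_cra_field(I_meas, I_sim_xm, I_sim_q):
    """I_target = I_meas + (I_sim(r, z, x^m) - I_sim(r, z, q)).
    Systematic errors common to both simulated terms cancel in the difference."""
    return I_meas + (I_sim_xm - I_sim_q)

# Example with small 1-D arrays:
I_corr = correct_cra_field(np.array([1.0, 2.0, 3.0]),
                           np.array([1.1, 2.0, 2.9]),   # I_sim at field point x^m
                           np.array([1.0, 2.0, 3.0]))   # I_sim at measured setting q
```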
[0210] For the pupil stop 10 of the metrology system 2, the simulation method can make use either of stop basic shapes corresponding to those already explained above, especially in the context of
[0211] In a predefining step 45, firstly a starting stop shape of the sigma stop 10, 10.sup.dc, is selected as an initial design candidate for the simulation.
[0212] In the context of the optimization, this starting stop shape 10.sup.dc is modified in a modifying step 46, such that a modification stop shape 10.sup.dcnew that is slightly changed in regard to its boundary shape arises in a producing step 47.
[0213] In a checking step 48, a check is then made to establish whether this modification stop shape 10.sup.dcnew satisfies at least one fabrication boundary condition with regard to the fabrication of this modification stop shape 10.sup.dcnew. If the checking step 48 reveals that at least one marginal check portion of the modification stop shape 10.sup.dcnew does not satisfy the fabrication boundary conditions (decision N of the checking step 48), the modification step 46 and the producing step 47 are repeated on the basis of the last valid design candidate. This is done until the checking step 48 for a modification stop shape 10.sup.dcnew then given reveals compliance with the predefined fabrication boundary conditions (decision Y of the checking step 48).
[0214] An ascertaining step 49 then involves communicating the match quality between the illumination and imaging properties of the optical production system and the illumination and imaging properties of the optical measurement system.
[0215] A value of at least one merit function is calculated in the context of this match quality ascertainment. Said merit function is influenced by a comparison of optical illumination and imaging parameters between a pupil overlap region of an illumination pupil and an imaging pupil of the optical production system, on the one hand, and a corresponding pupil overlap region of an illumination pupil with a used stop shape of the sigma stop 8 and an imaging pupil with a used NA aperture stop 11 of the optical measurement system, on the other hand.
[0216]
[0217] A center Z.sub.Ar of the exit pupil AP lies at Cartesian coordinates σ.sub.x.sup.i, σ.sub.y.sup.i. Instead of Cartesian coordinates σ.sub.x, σ.sub.y, it is also possible to choose polar coordinates, likewise illustrated in
[0218] In the context of ascertaining the match quality with the aid of such a pupil overlap region A.sub.r,φ, the overlap at various support points σ.sub.x.sup.i, σ.sub.y.sup.i that are scanned is assessed. The following assessment terms are used in this case:
[0219] D here is a term describing a simple summation of the intensities I(σ.sub.x, σ.sub.y) over the respective pupil overlap region A.sub.r,φ. This D term (according to Equation (8)) correlates with an image dimension CD (critical dimension), i.e. a width of a structure along a predefined direction.
[0220] Reference is made to U.S. Pat. No. 9,176,390 B in the context of the definition of the parameter CD.
[0221] The T term (according to Equation (9)) represents an integral over the overlap region A, said integral being weighted with the distance value σ.sub.r. For this formulation of the T term, it is assumed for simplification that the exit pupil has no apodization. This T term correlates with the telecentricity imaging parameter. This can include a sensitivity of an object structure offset as a function of a defocus position of a substrate onto which the object is imaged.
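A minimal numerical sketch of the D and T assessment terms on a sampled pupil grid follows; the grid, the example regions, and in particular the radial distance weight used for T are assumptions for illustration:

```python
import numpy as np

# Sampled pupil grid in sigma coordinates; grid and regions are illustrative.
n = 128
sx, sy = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))

def d_term(I, region):
    """D: plain sum of I(sigma_x, sigma_y) over the overlap region (tracks CD)."""
    return float(np.sum(I[region]))

def t_term(I, region):
    """T: distance-weighted sum over the region (tracks telecentricity);
    the radial weight hypot(sigma_x, sigma_y) is an assumption."""
    return float(np.sum((I * np.hypot(sx, sy))[region]))

# Example: circular unapodized illumination pupil; overlap region = small
# off-axis disc (a stand-in for one scanned support point).
I_pupil = (sx**2 + sy**2 <= 0.8**2).astype(float)
A = (sx - 0.3)**2 + sy**2 <= 0.1**2
D_val, T_val = d_term(I_pupil, A), t_term(I_pupil, A)
```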
[0222] For a given stop shape of the pupil stop 10, the following optimization rules are applied when ascertaining the match quality for all possible overlap regions A.sub.r,φ:
[0223] In this case, dc stands for the respective design candidates, i.e. for the currently considered stop shape of the pupil stop 10. t stands for the target illumination pupil of the optical production system, i.e. in particular of a projection exposure apparatus in the form of a scanner.
[0224] The optimization rules in accordance with Equations (10) and (11) are not attained as a rule. When ascertaining the match quality, the stop shape of the design candidate dc is varied until the optimization rules (10), (11) yield minimum values.
[0225] Besides the optimization variables D and T, further variables correlated with further illumination and/or imaging parameters can also be used when ascertaining the match quality. One example of such a variable is:
[0226] This HV term correlates with an imaging variable HV asymmetry, which quantifies a difference in the critical dimensions (CDs) along a vertical and along a horizontal dimension. The HV term may be of interest depending on the structures to be imaged on the object 5; for example in the case of horizontal or vertical lines to be imaged, in particular having the same periodicity and the same target CD, or else in the case of so-called contact holes, i.e. structures having an xy-aspect ratio in the region of 1. An HV asymmetry can then be understood as the difference between the two CDs, i.e. CD.sub.h-CD.sub.v in the case of horizontal (h) and vertical (v) lines or CD.sub.x-CD.sub.y in the case of contact holes having extents in the x- and y-directions.
[0227] Ascertaining the HV term according to Equation (12) above involves calculating the difference between two D terms according to Equation (8) at the location of two defined overlap regions A.sub.r,φ and B.sub.r,φ, which are rotated with respect to one another by 90° about the coordinate origin Z.sub.B (cf.
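The HV term can be illustrated on the same kind of sampled pupil grid; the elliptical example pupil and the region geometry below are assumptions (a rotationally symmetric pupil would give HV = 0):

```python
import numpy as np

# HV term as the difference of two D terms over overlap regions rotated by
# 90 degrees about the pupil center.
n = 128
sx, sy = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
I_pupil = np.exp(-(2.0 * sx**2 + sy**2))     # asymmetric (elliptical) pupil

A = (sx - 0.3)**2 + sy**2 <= 0.1**2          # region on the horizontal axis
B = sx**2 + (sy - 0.3)**2 <= 0.1**2          # A rotated by 90 deg about origin

HV = float(np.sum(I_pupil[A]) - np.sum(I_pupil[B]))
# The example pupil is narrower in x, so the horizontal region sees less
# intensity and HV is negative here.
```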
[0228] For the HV term, too, there is then a corresponding optimization rule:
[0229] After the comparison calculation has been carried out, the overlap regions A.sub.r,φ used cover the entire illumination pupil of, on the one hand, the optical production system and, on the other hand, the illumination optical unit 9 of the metrology system 2.
[0230]
[0231]
[0232] The illumination setting to be simulated (cf.
[0233]
[0234] A merit function E can be used during the match quality ascertainment since, in general, the match rules according to Equations (10), (11) and (13) do not all become 0 at the same time. This merit function can be written as weighted error minimization in the usual way as:
[0235] I here denotes the stop shape of the pupil stop 10.sup.dcnew which is intended to be assessed by use of the merit function. I.sup.t denotes the target illumination pupil of the optical production system, this being the intended target of optimization. D and T denote the assessment terms discussed above in the context of Equations (10) and (11). In addition, the merit function E can for example also be extended by the assessment term HV (cf. Equations (12) and (13)).
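A hedged sketch of such a weighted merit function follows; the quadratic penalty form and the weights are assumptions consistent with "weighted error minimization":

```python
import numpy as np

# Weighted merit function for one design candidate: quadratic penalties on the
# deviations of the candidate's assessment terms from the target pupil's
# values, optionally summed over several overlap regions (array inputs).
def merit(D_dc, D_t, T_dc, T_t, w_D=1.0, w_T=1.0):
    D_dc, D_t, T_dc, T_t = map(np.asarray, (D_dc, D_t, T_dc, T_t))
    return float(w_D * np.sum((D_dc - D_t) ** 2)
                 + w_T * np.sum((T_dc - T_t) ** 2))

E_val = merit(1.0, 1.2, 0.5, 0.5)                              # one overlap region
E_vec = merit([1.0, 0.9], [1.0, 1.0], [0.2, 0.2], [0.2, 0.3])  # two regions
```

An HV penalty term could be appended in the same quadratic form.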
[0236] The merit function E can additionally be extended by the requirement for a minimum transmission of the pupil stop 10.sup.dcnew.
[0237] Besides the target illumination pupil of the optical production system, the ascertaining step 49 can also be influenced by a pupil transfer function of the optical production system and a pupil transfer function of the optical measurement system of the metrology system 2.
[0238] For this purpose, the D term defined above in the context of Equation (8) can be written as follows:
[0239] P here is an apodization function, i.e. an energetic proportion of the pupil transfer function.
[0240] An apodization of the exit pupil can then be taken into account by this means.
[0241] In the course of the ascertaining step 49, compliance with an optimization criterion is queried in an optimization query step 50. One example of such an optimization criterion is the Boltzmann criterion of simulated annealing:
[0242] In this case, r is a uniformly distributed random number from the interval [0,1[ (the exact numerical value 1 is thus excluded from this interval) and β is a control parameter that increases further and further in the course of the simulated annealing optimization. E(dc.sub.new) and E(dc) are the merit function values that arose for the stop shapes of the pupil stop 10 during the last and during the preceding optimization step.
[0243] Insofar as the Boltzmann criterion is satisfied, i.e. the optimization has not yet concluded (decision Y in the query step 50), the current stop shape 10.sup.dcnew is set as initial stop shape 10.sup.dc for the next modification, which is effected in a predefining step 51. The control parameter β is also increased in the predefining step 51. The optimization criterion is thus intensified in the context of the predefining step 51. Afterwards, the method continues with the modifying step 46 and steps 47 to 50 are repeated until the optimization query step 50 reveals that either the Boltzmann criterion is no longer satisfied or the control parameter β is greater than a predefined value (query result N in the query step 50).
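The optimization loop of steps 45 to 52 can be sketched as standard simulated annealing; the toy 1-D "stop shape" vector, the step size, and the acceptance form exp(-β·ΔE) > r are illustrative assumptions:

```python
import numpy as np

# Simulated-annealing skeleton for the stop-shape optimization loop.
rng = np.random.default_rng(2)

def optimize_stop(E, shape0, beta0=1.0, beta_growth=1.05, n_steps=500):
    shape = shape0.copy()                    # initial design candidate (step 45)
    best = shape0.copy()
    beta = beta0
    for _ in range(n_steps):
        candidate = shape + 0.05 * rng.normal(size=shape.size)   # steps 46/47
        # A real implementation would re-modify candidates violating the
        # fabrication boundary conditions here (checking step 48).
        dE = E(candidate) - E(shape)
        if dE < 0 or np.exp(-beta * dE) > rng.random():  # Boltzmann criterion
            shape = candidate
            if E(shape) < E(best):
                best = shape.copy()
        beta *= beta_growth                  # intensify the criterion (step 51)
    return best                              # candidate to fabricate (step 52)

# Toy merit: match a reference shape (stand-in for the target pupil match).
target = np.linspace(0.0, 1.0, 8)
E = lambda s: float(np.sum((s - target) ** 2))
best = optimize_stop(E, np.zeros(8))
```

As β grows, worsening candidates are accepted less and less often, so the search turns from exploratory to effectively greedy.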
[0244] If, therefore, the optimization criterion has then been attained in the optimization query step 50 (query result N), the pupil stop 10 with the target stop shape that occurred with the smallest merit function value E in the optimization is fabricated in a fabricating step 52.
[0245]
[0246] Exactly one pupil stop 10, which was optimized as explained above in the context of
[0247] Individually optimized stops can be designed for a given selection of field points x.sup.i, e.g. three field points (left x-field edge, field center, right x-field edge), and so the contributions of, firstly, the optical production system and, secondly, the metrology system 2 which are valid for this field point x.sup.i or for this illumination direction of the respective chief ray CRAi are taken into account more completely, and there is an accordingly improved simulation of the optical production system aerial image. This specification of individual optimized stop edges can be implemented accordingly for the pupil stop 10, for the aperture stops 23 and also for both stops 10, 23.
[0248] For instance, the three pupil stops 10 and optionally three aperture stops 23 optimized thus are then available for the metrology system 2. In a manner fitting to the field point x.sup.m to be measured in the object field 3, it is then possible to use the corresponding stop or the corresponding stop pair of pupil stop and aperture stop.
[0249] If there is no coincidence between a field point x.sup.m to be measured and a specified field point for which the respective stop was optimized in respect of its edge, then it is possible to use a compensation rule. An example of such a compensation rule is:
[0250] Here, A(x, y) denotes the aerial image measured by the metrology system 2.
[0251]
[0252] In a variant of the simulation method, it is also possible to use a plurality of different pupil stops 10 to specify the various measurement positions (k.sub.x, k.sub.y).
[0253] To prepare the simulation method, it is possible to record an aerial image stack in order to make sure which z-pose of the object plane 4 supplies an optimally sharp image in the image plane 29 (zero of the z-pose).
[0254] z-increments which are used in Equation (2) when determining the aerial image I.sub.sim may differ from the defocus values z.sub.m that are specified within the scope of the simulation method.
[0255] Pixel sizes of the recorded measurement aerial images I.sub.meas may be re-sampled for the purpose of matching to a desired pixel resolution.
[0256] A plurality of k.sub.x, k.sub.y positions of the imaging pupil stop 23 can also be set by way of the displacement drive 25 in a simulation method.
[0257] When reconstructing the mask transfer function, it is accordingly possible to take account of imaging aberrations of the optical measurement system, in particular imaging aberrations of the imaging optical unit 20 of the metrology system 2.
[0258] The determination of the 3-D aerial image I.sub.meas and/or the calculation of the simulated aerial image I.sub.sim may be carried out using a different illumination chief ray angle to that of the reconstruction of the mask transfer function.
[0259] For selecting the respective pupil stop 10 from the provided plurality of pupil stops 10 with in each case different stop edge shapes and/or stop edge orientations, the metrology system 2 has a selection apparatus not depicted in detail in the drawing. This selection apparatus has a stop storage unit, in which the plurality of pupil stops 10 with different stop edge shapes and/or stop edge orientations are stored in each case for the purpose of specifying correspondingly different measurement illumination settings.
[0260] In the selection step of the simulation method, the last pupil stop inserted is firstly removed from its use location in the pupil plane 11 and supplied to the stop storage unit in the selection apparatus with the aid of an actuator system of the selection apparatus, in particular with the aid of a robotic actuator system. Subsequently, the pupil stop 10 selected according to the simulation method is selected from the stop storage unit and inserted in the use position in the pupil plane 11 with the aid of the robotic actuator system.
[0261] In principle, the problem presented above and also the solution can be applied analogously to take account of machine-individual properties in the aerial image emulation. For instance, the EUV illumination pupils differ from machine to machine, depending on the light source used, in particular the EUV light source used. In particular, the combination of a stop emulating the ideal system and the simulation-type method incorporating the machine-individual component is attractive.
[0262] For instance, this requires that the machine-individual properties are known, for example by way of a qualification or, in the case of the EUV source types, by way of the appropriate numerical models. Machine-individual portions of the metrology system 2 can also be considered in similar fashion.
[0263] In general, the above-described approaches allow specific properties of the optical production system to be simulated, for example crosstalk effects between different illumination channels of a fly's eye integrator system in the illumination optical unit of the optical production component and/or crosstalk effects between various x-coordinate-dependent intensity correction stops in the illumination optical unit of the optical production system. For instance, it is possible to simulate insertion depths of appropriate stop fingers which correct the illumination intensity in x-coordinate-dependent fashion, and the influence thereof on the aerial image.
[0264] In some implementations, the calculations and processing of data (e.g., performing simulation) described in this document can be performed by one or more computers that include one or more data processors configured to execute one or more programs that include a plurality of instructions according to the principles described above. Each data processor can include one or more processor cores, and each processor core can include logic circuitry for processing data. For example, a data processor can include an arithmetic and logic unit (ALU), a control unit, and various registers. Each data processor can include cache memory. Each data processor can include a system-on-chip (SoC) that includes multiple processor cores, random access memory, graphics processing units, one or more controllers, and one or more communication modules. Each data processor can include millions or billions of transistors.
[0265] The methods described in this document can be carried out using one or more computers, which can include one or more data processors for processing data, one or more storage devices for storing data, and/or one or more computer programs including instructions that when executed by the one or more computers cause the one or more computers to carry out the processes. The one or more computers can include one or more input devices, such as a keyboard, a mouse, a touchpad, and/or a voice command input module, and one or more output devices, such as a display, and/or an audio speaker.
[0266] In some implementations, the one or more computing devices can include digital electronic circuitry, computer hardware, firmware, software, or any combination of the above. The features related to processing of data can be implemented in a computer program product tangibly embodied in an information carrier, e.g., in a machine-readable storage device, for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described implementations. Alternatively or in addition, the program instructions can be encoded on a propagated signal that is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a programmable processor.
[0267] A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
[0268] For example, the one or more computers can be configured to be suitable for the execution of a computer program and can include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only storage area or a random access storage area or both. Elements of a computer system include one or more processors for executing instructions and one or more storage area devices for storing instructions and data. Generally, a computer system will also include, or be operatively coupled to receive data from, or transfer data to, or both, one or more machine-readable storage media, such as hard drives, magnetic disks, solid state drives, magneto-optical disks, or optical disks. Machine-readable storage media suitable for embodying computer program instructions and data include various forms of non-volatile storage area, including by way of example, semiconductor storage devices, e.g., EPROM, EEPROM, flash storage devices, and solid state drives; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM, DVD-ROM, and/or Blu-ray discs.
[0269] In some implementations, the processes described above can be implemented using software for execution on one or more mobile computing devices, one or more local computing devices, and/or one or more remote computing devices (which can be, e.g., cloud computing devices). For instance, the software forms procedures in one or more computer programs that execute on one or more programmed or programmable computer systems, either in the mobile computing devices, local computing devices, or remote computing systems (which may be of various architectures such as distributed, client/server, grid, or cloud), each including at least one processor, at least one data storage system (including volatile and non-volatile memory and/or storage elements), at least one wired or wireless input device or port, and at least one wired or wireless output device or port.
[0270] In some implementations, the software may be provided on a medium, such as CD-ROM, DVD-ROM, Blu-ray disc, a solid state drive, or a hard drive, readable by a general or special purpose programmable computer or delivered (encoded in a propagated signal) over a network to the computer where it is executed. The functions can be performed on a special purpose computer, or using special-purpose hardware, such as coprocessors. The software can be implemented in a distributed manner in which different parts of the computation specified by the software are performed by different computers. Each such computer program is preferably stored on or downloaded to a storage media or device (e.g., solid state memory or media, or magnetic or optical media) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer system to perform the procedures described herein. The inventive system can also be considered to be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer system to operate in a specific and predefined manner to perform the functions described herein.
[0271] The embodiments of the present invention that are described in this specification and the optional features and properties respectively mentioned in this regard should also be understood to be disclosed in all combinations with one another. In particular, in the present case, the description of a feature comprised by an embodiment, unless explicitly explained to the contrary, should also not be understood such that the feature is essential or indispensable for the function of the embodiment.