Method for three-dimensionally measuring a 3D aerial image of a lithography mask
10634886 · 2020-04-28
Assignee
Inventors
- Ulrich Matejka (Jena, DE)
- Christoph Husemann (Jena, DE)
- Johannes Ruoff (Aalen, DE)
- Sascha Perlitz (Jena, DE)
- Hans-Jürgen Mann (Oberkochen, DE)
CPC classification
G02B27/0988
PHYSICS
G02B17/0828
PHYSICS
G02B21/361
PHYSICS
G02B17/0848
PHYSICS
International classification
G02B21/36
PHYSICS
G02B27/09
PHYSICS
Abstract
In a method for three-dimensionally measuring a 3D aerial image in the region around an image plane during the imaging of a lithography mask, which is arranged in an object plane, a selectable imaging scale ratio in mutually perpendicular directions (x, y) is taken into account. For this purpose, an electromagnetic wavefront of imaging light is reconstructed after interaction thereof with the lithography mask. An influencing variable that corresponds to the imaging scale ratio is included. Finally, the 3D aerial image measured with the inclusion of the influencing variable is output. This results in a measuring method with which lithography masks that are optimized for being used with an anamorphic projection optical unit during projection exposure can also be measured.
Claims
1. A method for three-dimensionally measuring a 3D aerial image in the region around an image plane during the imaging of a lithography mask, which is arranged in an object plane, while taking into account a selectable imaging scale ratio in mutually perpendicular directions, with the following steps: performing a manipulation of a pupil of an imaging optical unit used for the imaging of the lithography mask, inclusion of an influencing variable in the imaging of the lithography mask, in which the influencing variable corresponds to the imaging scale ratio, performing a digital simulation step using information from the pupil manipulation step and from the inclusion step to generate a 3D aerial image, and output of the 3D aerial image measured with the inclusion of the influencing variable.
2. The method according to claim 1, wherein the measurement is carried out with a measuring optical unit, the imaging scale of which is the same in mutually perpendicular directions, the inclusion of the influencing variable being performed by converting the data of a measured 2D imaging-light intensity distribution in the region of a plane corresponding to the image plane.
3. The method according to claim 1, wherein the inclusion of the influencing variable corresponding to the imaging scale ratio is performed by a digital simulation of the imaging with the imaging scale ratio.
4. A metrology system for three-dimensionally measuring a 3D aerial image in the region around an image plane during the imaging of a lithography mask, which is arranged in an object plane, in which the metrology system is configured to take into account a selectable imaging scale ratio in mutually perpendicular directions with the following steps: performing a manipulation of a pupil of an imaging optical unit used for the imaging of the lithography mask, inclusion of an influencing variable in the imaging of the lithography mask, in which the influencing variable corresponds to the imaging scale ratio, performing a digital simulation step using information from the pupil manipulation step and from the inclusion step to generate a 3D aerial image, and output of the 3D aerial image measured with the inclusion of the influencing variable; wherein the metrology system comprises: an illumination optical unit for illuminating the lithography mask to be examined, and an imaging optical unit for imaging the object towards a spatially resolving detection device.
5. The metrology system of claim 4, in which the metrology system is configured to carry out the following steps to prepare the output of the 3D aerial image: measuring a 2D imaging-light intensity distribution in the region of a plane corresponding to the image plane, displacing the lithography mask perpendicularly to the object plane by a predetermined displacement, and repeating the measuring and displacing steps until a sufficient number of 2D imaging-light intensity distributions to reproduce a 3D aerial image are measured.
6. The metrology system of claim 5, comprising a measuring optical unit configured to carry out the measurement, the imaging scale of which is the same in mutually perpendicular directions, the inclusion of the influencing variable being performed by converting the data of the measured 2D imaging-light intensity distribution.
Description
BRIEF DESCRIPTION OF DRAWINGS
(1) An exemplary embodiment of the invention is explained in greater detail below with reference to the drawing.
DETAILED DESCRIPTION
(18) A Cartesian xyz-coordinate system is used below to facilitate the illustration of positional relationships.
(20) The illumination light 1 is reflected at the object 5. The plane of incidence of the illumination light 1 lies parallel to the yz plane.
(21) The EUV illumination light 1 is produced by an EUV light source 6. The light source 6 may be a laser plasma source (LPP; laser produced plasma) or a discharge source (DPP; discharge produced plasma). In principle, a synchrotron-based light source may also be used, for example a free electron laser (FEL). A used wavelength of the EUV light source may lie in the range between 5 nm and 30 nm. In principle, in the case of a variant of the metrology system 2, a light source for another used wavelength may also be used instead of the light source 6, for example a light source for a used wavelength of 193 nm.
(22) Depending on the configuration of the metrology system 2, it may be used for a reflecting object 5 or for a transmitting object 5. An example of a transmitting object is a phase mask.
(23) An illumination optical unit 7 of the metrology system 2 is arranged between the light source 6 and the object 5. The illumination optical unit 7 serves for the illumination of the object 5 to be examined with a defined illumination intensity distribution over the object field 3 and at the same time with a defined illumination angle distribution, with which the field points of the object field 3 are illuminated.
(25) The six illumination poles 9 lie within an elliptical outer edge contour 10, which is indicated in a dashed manner in the drawing.
(26) The elliptical edge contour 10 is produced by an illumination aperture stop 11 of the illumination optical unit 7, which marginally delimits a beam of the illumination light 1 that is incident on the illumination aperture stop 11. Correspondingly, in a stop plane extending parallel to the xy plane, the illumination aperture stop 11 has in the two mutually perpendicular directions x and y two stop diameters that differ from one another by at least 10%, in the present case by 100%, the corresponding equivalents of which are denoted in the drawing.
(27) The metrology system 2 is designed for the examination of anamorphic masks with different structure scaling factors in x and y. Such masks are suitable for producing semiconductor elements by use of anamorphic projection apparatuses.
(28) On the reticle side, a numerical aperture of the illumination and imaging light 1 may be 0.125 in the xz plane and 0.0625 in the yz plane.
(30) After reflection at the object 5, the illumination and imaging light 1 enters an imaging optical unit or projection optical unit 13 of the metrology system 2.
(31) The imaging optical unit 13 comprises an imaging aperture stop 15 arranged downstream of the object 5 in the beam path.
(32) It is also possible to dispense with the imaging aperture stop 15 in the metrology system 2.
(33) The imaging aperture stop 15 has an elliptical edge contour 16 with an x/y semiaxis ratio of 2:1. Therefore, in a stop plane extending parallel to the xy plane, the imaging aperture stop 15 has in two mutually perpendicular directions x, y two stop diameters that differ from one another by at least 10%, which are in turn denoted Bx and By.
(34) The imaging aperture stop 15 also has the greater stop diameter Bx perpendicular to the plane of incidence yz of the illumination and imaging light 1 on the object 5. Also in the case of the imaging aperture stop 15, the diameter Bx is twice the diameter By.
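Such a stop with a 2:1 diameter ratio can also be modelled digitally, which is relevant for the digital aperture stop discussed further below. The following sketch (an illustration only; the grid size, the pixel diameters and the function name are assumptions, not part of the patent disclosure) builds a boolean transmission mask of an elliptical stop:

```python
import numpy as np

def elliptical_stop(n: int, bx: float, by: float) -> np.ndarray:
    """Boolean transmission mask of an elliptical aperture stop with
    stop diameters bx and by (in pixels) on an n x n grid."""
    # centred pixel coordinates
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    # inside the ellipse with semiaxes bx/2 and by/2
    return (x / (bx / 2.0)) ** 2 + (y / (by / 2.0)) ** 2 <= 1.0

# 2:1 x/y diameter ratio, as for the imaging aperture stop 15
mask = elliptical_stop(256, bx=200, by=100)
```

The greater diameter Bx lies along x, matching the orientation described for the imaging aperture stop 15.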
(35) The detection device 14 is in signaling connection with a digital image processing device 17.
(36) The object 5 is carried by an object holder 18. This object holder can be displaced by a displacement drive 19 on the one hand parallel to the xy plane and on the other hand perpendicularly to this plane, that is to say in the z direction. The displacement drive 19, as also the entire operation of the metrology system 2, is controlled by a central control device 20, which, in a way that is not represented any more specifically, is in signaling connection with the components to be controlled.
(37) The optical set-up of the metrology system 2 serves for the most exact possible emulation of an illumination and an imaging in the course of a projection exposure of the object 5 during the projection-lithographic production of semiconductor devices.
(39) The projection optical unit 21, which is part of a projection exposure apparatus that is not otherwise represented, is of an anamorphic configuration, and therefore has a different imaging scale in the xz plane than in the yz plane. An object-side numerical aperture of the projection optical unit 21 is 0.125 in the xz plane and 0.0625 in the yz plane. An image-side numerical aperture of the projection optical unit 21 is 0.5 both for the xz plane and for the yz plane. This gives an imaging scale of 4 in the xz plane and an imaging scale of 8 in the yz plane, that is to say a reduction factor on the one hand of 4 and on the other hand of 8.
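The relationship between the object-side and image-side numerical apertures and the resulting imaging scales can be sketched as follows (an illustrative calculation; the function name is arbitrary and not from the patent):

```python
def imaging_scale(na_object: float, na_image: float) -> float:
    """Reduction factor of a projection optical unit, obtained from the
    ratio of its image-side to object-side numerical aperture."""
    return na_image / na_object

# Anamorphic projection optical unit 21: object-side NA 0.125 (xz plane)
# and 0.0625 (yz plane), image-side NA 0.5 in both planes.
beta_x = imaging_scale(0.125, 0.5)   # reduction factor 4 in the xz plane
beta_y = imaging_scale(0.0625, 0.5)  # reduction factor 8 in the yz plane
```

The resulting imaging scale ratio beta_y/beta_x = 2 is exactly the 2:1 ratio that the measuring method takes into account.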
(40) During the projection exposure, the projection optical unit 21 projects an image of the object field 3 into an image field 23 in an image plane 24, in which a wafer 25 is arranged.
(41) As a difference from the projection optical unit 21 of the projection exposure apparatus, the projection optical unit 13 of the metrology system 2 is not anamorphic, but instead has the same magnifying imaging scale βMS of more than 100, for example of 500 or of 850, both in the xz plane and in the yz plane. The projection optical unit 13 of the metrology system is therefore isomorphic.
(43) A central axis, from which the chief-ray angle CRA is measured and which is perpendicular to the object plane 4, is denoted in the drawing.
(44) Data that can be used to deduce an imaging behavior of the structure of the object 5 that is illuminated in the object field 3 by the projection optical unit 21 in the region of the image plane 24 are generated during the 3D aerial-image measurement. For this purpose, the metrology system 2 is used, the imaging scale ratio of 2:1 of the projection optical unit 21 in the two mutually perpendicular directions y and x, that is to say in the two mutually perpendicular planes yz and xz, being taken into account by using a metrology system projection optical unit 13 that is not anamorphic.
(45) The method for 3D aerial image measurement is explained below.
(46) First, the object 5 to be measured, that is to say the lithography mask to be measured, is provided in a step 27. Then, the intensity distribution of the imaging light 1 is measured in the region of an image plane 14a, in which the detection device 14 of the metrology system 2 is arranged. This takes place in a measuring step 28. In the measuring step 28, the detection device 14 detects a 2D imaging-light intensity distribution within a detection field, into which an image of the object field 3 is projected by the projection optical unit 13 of the metrology system. The measured intensity distribution is then in each case stored and passed on to the digital image processing device 17.
(47) Then the lithography mask 5 is displaced with the aid of the displacement drive 19 perpendicularly to the object plane 4 by a predetermined displacement Δz. This takes place in a displacement step 29.
(48) The measuring step 28 and the displacement step 29 are then repeated by carrying out a repetition step 30 as often as is needed until a sufficient number of 2D imaging-light intensity distributions to reproduce a 3D aerial image are measured by use of the detection device 14. By repeating the measuring step 28 and the displacement step 29 for different z positions of the object 5, the 2D imaging-light intensity distribution is therefore measured for example at five, seven, nine or eleven positions, each lying Δz apart, the object 5 lying exactly in the object plane 4 in the case of a midway displacement step 29.
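The measuring/displacement/repetition loop of steps 28 to 30 can be sketched as follows. This is a schematic illustration only: `acquire_2d_intensity` and `displace_mask_z` are hypothetical placeholders for the detection device 14 and the displacement drive 19.

```python
def measure_focus_stack(acquire_2d_intensity, displace_mask_z,
                        n_planes: int, dz: float):
    """Collect n_planes 2D imaging-light intensity distributions, the
    mask being displaced by dz between successive measurements
    (steps 28 to 30). The stack is centred so that the middle plane
    corresponds to the mask lying exactly in the object plane."""
    stack = []
    # move to the lowest z position of the stack
    displace_mask_z(-(n_planes // 2) * dz)
    for i in range(n_planes):
        stack.append(acquire_2d_intensity())  # measuring step 28
        if i < n_planes - 1:
            displace_mask_z(dz)               # displacement step 29
    return stack
```

With an odd n_planes (five, seven, nine or eleven, as in the text), the middle measurement is taken with the object in the object plane.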
(49) In the case of this measuring method, the third dimension of the 3D aerial image, to be specific the z dimension, is made accessible to the measurement by z displacement of the object 5. Since the 3D aerial image is intended to emulate an anamorphic imaging, to be specific an imaging by the lithographic projection optical unit 21, in the region of the image plane 14a each displacement step 29 leads to a defocusing in the z direction. Defocusing values on the one hand in the xz plane and on the other hand in the yz plane differ from one another on account of the xz/yz imaging scale ratio of the lithographic projection optical unit 21 to be emulated. The difference between the imaging scale ratios on the one hand of the isomorphic projection optical unit 13 of the metrology system and on the other hand of the anamorphic projection optical unit 21 of the projection exposure apparatus to be emulated is taken into account in the measuring method by including an influencing variable that corresponds to the ratio of the imaging scales of the lithographic projection optical unit 21. This takes place in an inclusion step 31, which is represented in greater detail in the flow diagram of the drawing.
(50) The measurement is carried out with a measuring optical unit of the metrology system 2, the imaging scale of which is the same in mutually perpendicular directions (xz/yz). The inclusion step 31 is performed exclusively by converting the data of the measured 2D imaging-light intensity distribution. This conversion is carried out by the digital image processing device 17.
(51) When carrying out the inclusion step 31, first the data records of the measuring steps 28 are referred to, that is to say the various measured 2D imaging-light intensity distributions at the various z positions of the object 5 that were measured in the course of the previously repeated sequence of measuring step 28 and displacement step 29 and stored in a memory of the digital image processing device 17. This takes place in a reference step 32.
(52) In preparation for the inclusion, an electromagnetic wavefront of the imaging light 1 after interaction of the imaging light 1 with the object 5 is reconstructed in a reconstruction step 33 from the data used for reference in this way. This reconstruction takes place in particular in the region of the image plane 14a of the metrology system 2. In the reconstruction step 33, a phase reconstruction of the electromagnetic wavefront of the imaging light 1 may be performed. In particular, the phase and amplitude of a 3D object spectrum and the partially coherent superimposition thereof are reconstructed. A polarization-dependent reconstruction does not take place.
(53) Various methods of phase reconstruction that are already known from the literature may be used for carrying out the reconstruction step 33. These include methods that use various 2D imaging-light intensity distribution sequences produced by correspondingly carrying out the series of steps 28 to 30 repeatedly, part of the optical system of the metrology system 2 being changed in each of these sequences, which is also known as diversification. Steps 28 to 30 may therefore represent part of the phase reconstruction and be used in the reconstruction of the wavefront in step 33.
(54) In the case of a variant of the phase reconstruction, a defocusing diversification takes place. This has already been discussed above by explaining steps 28 to 30.
(55) Algorithms that may be used here are, for example: the Transport of Intensity Equation, iterative Fourier transform algorithms (IFTA, e.g. Gerchberg-Saxton) or methods of optimization, for example by use of backpropagation. The Transport of Intensity Equation (TIE) algorithm is described in the technical article "Critical assessment of the transport of intensity equation as a phase recovery technique in optical lithography", Aamod Shanker, Martin Sczyrba, Brid Connolly, Franklin Kalk, Andy Neureuther and Laura Waller, Proc. SPIE 9052, Optical Microlithography XXVII, 90521D (Mar. 31, 2014), DOI:10.1117/12.2048278. The Gerchberg-Saxton algorithm is described in Fienup, J. R. (Aug. 1, 1982), "Phase retrieval algorithms: a comparison", Applied Optics 21(15): 2758-2769, DOI:10.1364/AO.21.002758. The backpropagation method of optimization is described in "General framework for quantitative three-dimensional reconstruction from arbitrary detection geometries in TEM", Wouter Van den Broek and Christoph T. Koch, Phys. Rev. B 87, 184108, published May 13, 2013.
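By way of illustration, a minimal Gerchberg-Saxton iteration (one of the IFTA variants mentioned above) can be sketched as follows. This is a schematic textbook example, not the implementation used in the metrology system; it alternately enforces measured amplitudes in the image and Fourier planes while keeping the current phase estimate.

```python
import numpy as np

def gerchberg_saxton(amp_image, amp_fourier, n_iter=50):
    """Minimal Gerchberg-Saxton phase retrieval: find a phase consistent
    with measured amplitudes in the image plane (amp_image) and in the
    Fourier plane (amp_fourier). Returns the image-plane phase."""
    field = amp_image.astype(complex)  # start with zero phase
    for _ in range(n_iter):
        spectrum = np.fft.fft2(field)
        # enforce the measured Fourier-plane amplitude, keep the phase
        spectrum = amp_fourier * np.exp(1j * np.angle(spectrum))
        field = np.fft.ifft2(spectrum)
        # enforce the measured image-plane amplitude, keep the phase
        field = amp_image * np.exp(1j * np.angle(field))
    return np.angle(field)
```

For mutually consistent amplitude pairs the iteration converges towards a phase that reproduces both measurements.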
(56) A further variant for an algorithm that can be used in the phase reconstruction is Stokes polarimetry. This algorithm is described for example in Optics Express, Jun. 2, 2014; 22(11):14031-40; DOI: 10.1364/OE.22.014031, All-digital wavefront sensing for structured light beams, Dudley A, Milione G, Alfano R R, and Forbes A.
(57) When using a phase reconstruction, it is also possible to dispense with the elliptical imaging aperture stop 15. The optical effect of the aperture stop can also be brought about digitally.
(58) As an alternative to a defocusing diversification, an illumination direction diversification can also be carried out for carrying out the reconstruction step 33. An example of this is Fourier ptychography. This algorithm is described in the technical article Wide-field, high-resolution Fourier ptychographic microscopy, Guoan Zheng et al., Nature Photonics, Advance online publication 28 Jul. 2013, DOI:10.1038/NPHOTON.2013.187.
(59) This involves measuring a 2D imaging-light intensity distribution for each illumination direction and calculating back to the phase and amplitude of the electromagnetic wavefront by use of an algorithm. The algorithms IFTA or backpropagation can in turn be used here.
(60) A further possibility for carrying out the reconstruction step 33 is a general pupil manipulation, as is used for example in Spatial Light Interference Microscopy (SLIM, cf. the technical article Wang et al., Optics Express, 2011, volume 19, no. 2, page 1017). Here, four images are recorded for example, each with a different phase-shifting mask, which is arranged in a detection pupil, that is to say for example in the pupil plane 8a of the projection optical unit 13 of the metrology system 2.
(61) In principle, the phase reconstruction of the electromagnetic wavefront may also be performed without such a diversification. Examples of this are methods of interferometry and digital holography. In interferometry, a reference beam is needed. In digital holography, for example, a grating is introduced into the detection pupil. The individual orders of diffraction are then brought to a state of interference on the detector. By way of example, these methods of interferometry and digital holography are described in U. Schnars and W. Jüptner (2005), Digital Holography, Springer, and in Wen, Han; Andrew G. Gomella; Ajay Patel; Susanna K. Lynch; Nicole Y. Morgan; Stasia A. Anderson; Eric E. Bennett; Xianghui Xiao; Chian Liu; Douglas E. Wolfe (2013), "Subnanoradian X-ray phase-contrast imaging using a far-field interferometer of nanometric phase gratings", Nature Communications 4, 2659, DOI:10.1038/ncomms3659.
(62) For a given illumination setting, for which the imaging function of the lithographic projection optical unit 21 is intended to be emulated by the metrology system 2, a phase reconstruction can be realized by fine sampling of the illumination pupil used with this illumination setting, for example of the intensity distribution shown in the drawing.
(63) After the reconstruction step 33, a digital simulation of the imaging is performed with the imaging scale ratio of the lithographic projection optical unit 21. This is performed in a digital simulation step 35.
(64) The electromagnetic wavefront calculated in the reconstruction step 33 is thereby manipulated in the same way as it would be manipulated in the propagation by a corresponding anamorphic system. This may take place by using a digital elliptical imaging aperture stop corresponding to the imaging aperture stop 15 explained above. At the same time, it must be ensured by the digital manipulation that, on the image side, as also in the case of the lithographic projection optical unit 21, the numerical aperture in the xz plane is equal to the numerical aperture in the yz plane. Such a digital manipulation may be performed by a digital cylindrical lens or by adding an astigmatic wavefront. The addition of an astigmatic wavefront may be performed by addition of a contribution of a Zernike polynomial Z5. Zernike polynomials Zi (i=1, 2, . . . ) are known for example in the Fringe notation from the mathematical and optical literature. An example of this notation is provided by the Code V Manual, version 10.4, pages C-6 ff.
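The addition of such an astigmatic contribution can be sketched as follows (a minimal illustration; the sampling grid and coefficient normalization are assumptions). In Cartesian pupil coordinates, the Fringe Zernike Z5 term ρ²·cos(2θ) reduces to x² − y²:

```python
import numpy as np

def add_astigmatism_z5(wavefront: np.ndarray, c5: float) -> np.ndarray:
    """Add a Fringe Zernike Z5 contribution (primary astigmatism,
    rho^2 * cos(2*theta)) with coefficient c5, in the same units as the
    wavefront, sampled on the unit pupil [-1, 1] x [-1, 1]."""
    n = wavefront.shape[0]
    y, x = np.mgrid[:n, :n] * (2.0 / (n - 1)) - 1.0  # pupil coordinates
    # rho^2 * cos(2*theta) = x^2 - y^2 in Cartesian coordinates
    return wavefront + c5 * (x ** 2 - y ** 2)
```

The sign of c5 determines whether the x or the y section of the wavefront is advanced, which is what emulates the different defocus behaviour in the xz and yz planes.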
(65) The resultant astigmatic wavefront can then be calculated in each propagation plane.
(66) Correspondingly, the resultant 3D aerial image can then, with the inclusion of the influencing variable, be output in an output step 36.
(67) The phase reconstruction may include a Fourier transformation step, with which a complex, that is to say phase-including, amplitude distribution is calculated from a calculated phase. After digital astigmatism manipulation, it is then possible to calculate back into the image field with the aid of an inverse Fourier transformation.
(68) In the course of the phase reconstruction, a three-dimensional (3D) Fourier transformation may also take place.
(69) Alternatively, an intensity Fourier transformation of the 2D imaging-light intensity distributions determined in the sequence of steps 28 to 30 may be carried out to carry out the reconstruction step 33, for which purpose these intensity distributions are provided in advance with periodic boundary conditions by use of known mathematical techniques. In this connection, reference is made to WO 2008/025433 A2 and DE 10 2007 009 661 A1.
(70) The inclusion step 31 is then performed by selecting the xy directional components of the generated intensity Fourier transformations while taking into account the xy imaging scale ratio of the lithographic projection optical unit 21. A Fourier image is therefore composed, the x component of which was recorded during a displacement by a first increment Δz1 with a sequence of method steps 28 to 30, and the y component of which is provided by using Fourier components of the intensity distributions of a sequence that were recorded with an increment Δz2. For directional components that form an angle α with the x axis of between 0° and 90°, Fourier-transformed 2D intensity data that were recorded with an intermediate increment Δzi are used. The respective increment Δzi scales with the angle α between the direction considered in each case of the Fourier component and the x axis.
(71) The function Δzi(α) can be varied between the increment Δz1 for the x axis and the increment Δz2 for the y axis linearly or by use of an appropriately selected matching function, for example by use of a quadratic function, a sine function or a sin² function.
(72) The measurements of the 2D imaging-light intensity distributions at the increments Δzi do not all have to be carried out in reality; if a measurement for a z value between two measurements carried out in reality is needed, an interpolation between these two 2D imaging-light intensity distributions can also be carried out. This interpolation may be performed for example with the aid of a nearest-neighbour, linear, bicubic or spline interpolation function. The interpolation may take place in the Fourier domain, but also in the spatial domain.
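A linear variant of this interpolation between two measured planes can be sketched as follows (illustrative only; the function name and signature are not from the patent). It works identically on spatial-domain intensity distributions and on Fourier-domain arrays:

```python
import numpy as np

def interpolate_slice(stack, z_positions, z):
    """Linearly interpolate a 2D imaging-light intensity distribution at
    an arbitrary z between the measured planes of a focus stack.
    z_positions must be sorted in ascending order."""
    z_positions = np.asarray(z_positions, dtype=float)
    if z <= z_positions[0]:
        return stack[0]
    if z >= z_positions[-1]:
        return stack[-1]
    i = int(np.searchsorted(z_positions, z)) - 1
    t = (z - z_positions[i]) / (z_positions[i + 1] - z_positions[i])
    # weighted mean of the two neighbouring measured distributions
    return (1.0 - t) * stack[i] + t * stack[i + 1]
```

Higher-order variants (bicubic, spline) would replace the two-point weighted mean with a fit through more neighbouring planes.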
(73) An imaging with the metrology system 2 may be carried out with an elliptical imaging aperture stop 15, but alternatively also with an oval or rectangular stop. If no phase reconstruction is carried out, it is necessary to use an imaging aperture stop with an x/y aspect ratio that corresponds to the ratio of the imaging scale in the x and y directions of an imaging optical unit to be emulated or to be reconstructed, that is to say has for example an aspect or diameter ratio in the range between 10:1 and 1.1:1.
(74) The Fourier image thus manipulated and composed of the various directional components is then transformed back by use of an inverse Fourier transformation, so that the desired 3D aerial image is obtained.
(75) The resultant image intensity distribution may then also be distorted by software, in particular be scaled differently in the x direction than in the y direction, in order to reproduce an anamorphism produced by the lithographic projection optical unit 21.
(76) Steps 28 to 30 are therefore not mandatory. After the providing step 27, a reconstruction of the wavefront may also be performed in the reconstruction step 33 by one of the variants described above.
(77) A method for three-dimensionally measuring a 3D aerial image in the region around the image plane 24 during the imaging of the lithography mask 5, which is arranged in the object plane 4, while taking into account a selectable imaging scale ratio of an imaging optical unit to be emulated or to be reconstructed by using intensity reconstruction of an electromagnetic wavefront of the imaging light 1, is explained in still more detail below.
(78) This involves first measuring a stack of 2D imaging-light intensity distributions, respectively differing by a z displacement of the test structure, in the region of the plane 14a with the detection device 14 by repeating steps 28 to 30. This takes place with the imaging aperture stop 15 in use.
(84) A progressive defocusing can be seen in the successively measured 2D imaging-light intensity distributions of the focus stack.
(85) To achieve an intensity reconstruction of the 3D aerial image of the imaging optical unit to be emulated with a predetermined imaging scale ratio different from 1, a conversion of the measured focus stack, with its multiplicity of 2D imaging-light intensity distributions, is carried out as explained below.
(86) By way of example, a magnification scale of the imaging optical unit 21 to be emulated of βx in the x direction and of βy in the y direction is assumed. The imaging optical unit 13 of the metrology system 2 has an isomorphic magnification factor βMS of 850.
(87) The displacement Δz of the test structure or the lithography mask 5 is also referred to below as zLM.
(88) With the aid of selected 2D imaging-light intensity distributions for specific displacements Δz, 2D intensity Fourier transforms are first generated.
(89) A new synthetic result image is then produced from these intensity Fourier transforms. For this purpose, directional components of the first-generated 2D intensity Fourier transforms are selected, taking into account the imaging scale ratio of the lithographic projection optical unit 21. A displacement zi of a 2D imaging-light intensity distribution respectively selected for this purpose scales here with the alignment of the directional components. The following procedure is followed for this: The intensities and phases (that is to say real and imaginary components) of the Fourier image that was recorded in the plane zi = zLM/βx² are used on the x axis.
(90) The intensities and phases of the Fourier image that was recorded in the plane zi = zLM/βy² are used on the y axis.
(91) The intensities and phases of a Fourier image that was recorded in a defocusing plane zi between zLM/βx² and zLM/βy² are used for all of the pixels in between. The function for the interpolating calculation of the defocusing is intended to be continuous and advantageously continuously differentiable and advantageously monotonic from 0° to 90°.
(92) Two examples of an assignment of respective z displacement positions to the directional components, that is to say the various angles α, are given below:
zi = zLM · 1/(βx + (βy − βx) · sin²α)² (= example assignment function 1)
zi = zLM · (1/βx + (1/βy − 1/βx) · sin²α)² (= example assignment function 2)
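The two example assignment functions can be sketched as follows (a minimal illustration; the function names are arbitrary, the angle α is in radians, and the sample values βx = 4, βy = 8 are taken from the projection optical unit 21 described above):

```python
import math

def zi_assignment_1(z_lm, beta_x, beta_y, alpha):
    """Example assignment function 1: defocus plane zi for the Fourier
    component whose direction forms the angle alpha with the x axis."""
    return z_lm / (beta_x + (beta_y - beta_x) * math.sin(alpha) ** 2) ** 2

def zi_assignment_2(z_lm, beta_x, beta_y, alpha):
    """Example assignment function 2: interpolates 1/beta instead of
    beta between the x and y axes."""
    return z_lm * (1.0 / beta_x
                   + (1.0 / beta_y - 1.0 / beta_x)
                   * math.sin(alpha) ** 2) ** 2
```

Both functions reproduce the required end points zi = zLM/βx² at α = 0° and zi = zLM/βy² at α = 90°, and both are continuous and monotonic in between, as demanded above.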
(94) A further example of an assignment function in the manner of the example assignment functions 1 and 2 described above is the mean value of these two example assignment functions.
(95) A focus stack with very many images and a very small increment is needed for this calculation. In practice, however, usually fewer images are measured (for example to save measuring time) and a greater increment is chosen. In this case, the images between the various measured images can be interpolated. The interpolation may be performed in the image domain (that is to say before the Fourier transformation) or in the Fourier domain (after the Fourier transformation). Depending on what accuracy is required, nearest neighbour, linear, bicubic, spline or some other method comes into consideration as the method of interpolation.
(96) Advantageously, the overall focal region is chosen to be of such a size that it is only necessary to interpolate and not extrapolate between the focal planes.
(97) A numerical realization of a directional component selection corresponding to one of these example assignment functions is illustrated by digital selection functions in the drawing.
(98) On the basis of the four measured 2D imaging-light intensity distributions, a result image for zLM=100 nm is calculated by way of example.
(99) For this purpose, a 2D intensity Fourier transform is first generated for each of the measured 2D imaging-light intensity distributions.
(100) A digital selection function is then applied to each of these intensity Fourier transforms.
(103) Therefore, a selection of predetermined angle sectors of the 2D intensity Fourier transforms takes place.
(104) Numerically, the intensity Fourier transforms selected in this way are composed into a synthetic intensity Fourier transform, which is transformed back by use of an inverse Fourier transformation.
(105) This yields a synthetic raw image as the result image for the displacement zLM=100 nm.
(106) The calculation explained above can be carried out correspondingly for further displacements zLM, so that the complete 3D aerial image is obtained.
(107) The method described above therefore provides an intensity reconstruction of the 3D aerial image of the imaging optical unit to be emulated with the predetermined imaging scale ratio.
(108) As an alternative to a digital selection function, which can only assume the values 0 and 1, as explained above, it is also possible to use a selection function with a soft, continuous transition between 0 and 1.
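A selection function with soft transitions can be sketched, for example, with linear ramps at the sector edges (an illustrative construction; the grid size, angles and transition width are arbitrary assumptions):

```python
import numpy as np

def angular_weight(n, alpha0, alpha1, soft=0.1):
    """Weight mask in the 2D Fourier domain that is 1 inside the angle
    sector [alpha0, alpha1] (radians, measured from the x axis) and
    falls off linearly to 0 over a transition width `soft` instead of
    jumping between 0 and 1."""
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    # direction angle of each Fourier pixel, folded into [0, pi/2]
    alpha = np.arctan2(np.abs(y), np.abs(x))
    ramp_in = np.clip((alpha - (alpha0 - soft)) / soft, 0.0, 1.0)
    ramp_out = np.clip(((alpha1 + soft) - alpha) / soft, 0.0, 1.0)
    return ramp_in * ramp_out

w = angular_weight(64, 0.0, 0.3, soft=0.1)
```

Smoother profiles (e.g. a sin² ramp) could replace the linear clip without changing the principle; the soft overlap avoids hard seams when the weighted Fourier transforms are summed into the synthetic Fourier image.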
(109) The reconstruction method was described above with a configuration in which a distortion step with the imaging scale ratio of the lithographic projection optical unit 21 represents the last method step. It is alternatively possible to distort the 2D imaging-light intensity distributions first measured in measuring step 28 with the imaging scale ratio of the lithographic projection optical unit 21 and then carry out the other reconstruction steps for measuring the 3D aerial image, in particular the Fourier transformation, the selection of the directional components, the addition of the directional components and the inverse Fourier transformation.