Projection exposure method and projection exposure apparatus for microlithography
09817316 · 2017-11-14
Assignee
Inventors
- Boris Bittner (Roth, DE)
- Norbert Wabra (Werneck, DE)
- Martin von Hodenberg (Oberkochen, DE)
- Sonja Schneider (Oberkochen, DE)
CPC classification
G03F7/70525
PHYSICS
G03F7/70083
PHYSICS
International classification
Abstract
A projection exposure method for exposing a radiation-sensitive substrate with at least one image of a pattern includes providing the pattern between an illumination system and a projection lens of a projection exposure apparatus so that the pattern is arranged in the region of an object plane of the projection lens and can be imaged via the projection lens into an image plane of the projection lens. The image plane is optically conjugate with respect to the object plane, and imaging-relevant properties of the pattern can be characterized by pattern data. The method also includes illuminating an illumination region of the pattern with an illumination radiation provided by the illumination system in accordance with an illumination setting which is specific to a use case and which can be characterized by illumination setting data.
Claims
1. A method of using a projection exposure apparatus comprising an illumination system and a projection lens, the method comprising: illuminating an illumination region of a pattern with illumination radiation provided by the illumination system in accordance with an illumination setting specific to a use case, the illumination setting being characterized by illumination setting data and imaging-relevant properties of the pattern being characterized by pattern data; determining use case data specific to the use case, the use case data comprising pattern data and/or illumination setting data; using the use case data to determine imaging specification data; controlling optical components of the projection lens based on the imaging specification data to adapt imaging behavior of the projection lens to the use case; and using the projection lens adapted to the use case to image the pattern onto a substrate, wherein the method further comprises determining imaging specification data with regard to the use case data so that imaging specification data of at least two field points differ with respect to at least one imaging specification.
2. The method of claim 1, further comprising determining at least a portion of the pattern data and/or of the illumination setting data by at least one measurement to acquire data about the projection lens.
3. The method of claim 1, wherein: the pattern data comprise at least core region structure data containing quantitative information about the structure of core regions; the core regions comprise regions having the smallest line spacing and/or the smallest periodicity length of a group of mutually parallel lines in the pattern; and lines of the core region form the core region structure.
4. The method of claim 1, wherein the pattern data comprise core region structure orientation data representing an orientation of lines of a core region structure of the pattern.
5. The method of claim 4, further comprising deriving the core region structure orientation data from illumination setting data containing information about the orientation of poles of a dipole illumination set on the illumination system.
6. The method of claim 4, wherein the pattern data comprise, in addition to the core region orientation data, one or more data selected from the group consisting of: core region structure position data representing a position of lines of a core region structure within the pattern; peripheral region structure orientation data representing an orientation of lines of a peripheral structure of the pattern; and peripheral region structure position data representing a position of lines of a peripheral structure within the pattern.
7. The method of claim 1, wherein: an imaging specification S_k has a specification ratio between an imaging specification S_k(FP_i) for a first field point in the core region and a corresponding imaging specification S_k(FP_j) for a second field point in the peripheral region that deviates from one; and
max(S_k(FP_i)/S_k(FP_j), S_k(FP_j)/S_k(FP_i)) ≥ 1.5.
8. The method of claim 7, wherein the imaging specification S_k is described by an odd-order Zernike coefficient or a linear combination of odd-order Zernike coefficients.
9. The method of claim 1, wherein: the imaging specification data comprise at least one structure data selected from the group consisting of core region structure orientation data, core region structure position data, peripheral structure orientation data, and peripheral structure position data; the at least one structure data is such that at least one of the following holds: there are two field points whose imaging specification data differ in at least one aspect; and there is one field point at which the wavefront specification for a wavefront for an n-th-order wavefront expansion function differs in at least one aspect from the wavefront rotated by an angle of 90°/n.
10. The method of claim 1, further comprising: determining at least one subset of use case data and/or imaging specification data for a use case via an extrinsic data acquisition; and communicating the at least one subset of use case data and/or imaging specification data to a control unit of the projection lens, wherein acquiring the extrinsic data comprises at least one measure selected from the group consisting of: interrogation of the user via a user interface; retrieval from a memory accessible to a control unit of the projection exposure apparatus; determination from information concerning settings on the illumination system of the projection exposure apparatus; and determination from information about the mask to be exposed.
11. The method of claim 1, further comprising determining at least one subset of use case data for a use case via intrinsic data acquisition by at least one measurement on or in the projection lens, and communicating the at least one subset of use case data for the use case to a control unit of the projection lens.
12. The method of claim 1, wherein determining use case data comprises automatedly determining projection radiation data representing at least one property of the projection radiation passing from an object plane of the projection lens in a direction of an image plane of the projection lens.
13. The method of claim 12, wherein determining projection radiation data comprises measuring a wavefront of the projection radiation at at least one field point.
14. The method of claim 1, wherein: use case classification data are stored in a memory accessible to a control unit of the projection exposure apparatus; the use case classification data comprises, for a multiplicity of combinations of an illumination setting and a mask structure in the illumination region, corresponding intensity distribution data for at least one reference surface lying between the object plane and the image plane; and projection radiation data are determined using the use case classification data.
15. The method of claim 1, wherein a pattern recognition method and/or a feature extraction method are/is used when determining pattern data and/or illumination setting data from the projection radiation data.
16. The method of claim 1, further comprising comparing first use case data of a first projection exposure with second use case data of a directly succeeding second projection exposure to generate use case comparison data.
17. The method of claim 16, further comprising generating a use case change signal indicating the change of the use case in a manner dependent on the use case comparison data.
18. A method of using a projection exposure apparatus comprising an illumination system and a projection lens, the method comprising: illuminating an illumination region of a pattern with illumination radiation provided by the illumination system in accordance with an illumination setting specific to a use case, the illumination setting being characterized by illumination setting data and imaging-relevant properties of the pattern being characterized by pattern data; determining use case data specific to the use case, the use case data comprising pattern data and/or illumination setting data; using the use case data to determine imaging specification data; controlling optical components of the projection lens based on the imaging specification data to adapt imaging behavior of the projection lens to the use case; and using the projection lens adapted to the use case to image the pattern onto a substrate, wherein: determining use case data comprises automatedly determining projection radiation data representing at least one property of the projection radiation passing from an object plane of the projection lens in a direction of an image plane of the projection lens; determining projection radiation data comprises determining intensity distribution data representing a two-dimensional distribution of radiation intensity of the projection radiation at at least one reference surface lying between the object plane and the image plane in a projection beam path; and for determining intensity distribution data, a two-dimensional temperature distribution at the reference surface is measured in a spatially resolved manner.
19. The method of claim 18, wherein at least one optical surface of an optical element in the beam path of the projection lens is used as reference surface.
20. The method of claim 19, wherein the optical element is a mirror, and the optical surface is a mirror surface.
21. The method of claim 18, further comprising acquiring the two-dimensional temperature distribution via at least one thermal imaging camera or at least one temperature sensor.
22. The method of claim 18, further comprising determining intensity distribution data at a reference surface which lies at or in proximity to a pupil plane of the projection lens, and using the intensity distribution data to determine illumination setting data.
23. A method of using a projection exposure apparatus comprising an illumination system and a projection lens, the method comprising: illuminating an illumination region of a pattern with illumination radiation provided by the illumination system in accordance with an illumination setting specific to a use case, the illumination setting being characterized by illumination setting data and imaging-relevant properties of the pattern being characterized by pattern data; determining use case data specific to the use case, the use case data comprising pattern data and/or illumination setting data; using the use case data to determine imaging specification data; controlling optical components of the projection lens based on the imaging specification data to adapt imaging behavior of the projection lens to the use case; and using the projection lens adapted to the use case to image the pattern onto a substrate, wherein: determining use case data comprises automatedly determining projection radiation data representing at least one property of the projection radiation passing from an object plane of the projection lens in a direction of an image plane of the projection lens; determining projection radiation data comprises determining intensity distribution data representing a two-dimensional distribution of radiation intensity of the projection radiation at at least one reference surface lying between the object plane and the image plane in a projection beam path; and the method further comprises determining intensity distribution data at a reference surface which lies at or in proximity to a field plane of the projection lens, and using the intensity distribution data to determine pattern data.
24. The method of claim 23, wherein at least one optical surface of an optical element in the beam path of the projection lens is used as reference surface.
25. The method of claim 24, wherein the optical element is a mirror, and the optical surface is a mirror surface.
26. A method of using a projection exposure apparatus comprising an illumination system and a projection lens, the method comprising: illuminating an illumination region of a pattern with illumination radiation provided by the illumination system in accordance with an illumination setting specific to a use case, the illumination setting being characterized by illumination setting data and imaging-relevant properties of the pattern being characterized by pattern data; determining use case data specific to the use case, the use case data comprising pattern data and/or illumination setting data; using the use case data to determine imaging specification data; controlling optical components of the projection lens based on the imaging specification data to adapt imaging behavior of the projection lens to the use case; and using the projection lens adapted to the use case to image the pattern onto a substrate, wherein: determining use case data comprises automatedly determining projection radiation data representing at least one property of the projection radiation passing from an object plane of the projection lens in a direction of an image plane of the projection lens; determining projection radiation data comprises determining intensity distribution data representing a two-dimensional distribution of radiation intensity of the projection radiation at at least one reference surface lying between the object plane and the image plane in a projection beam path; a first reference surface is arranged at or in proximity to a pupil plane of the projection lens; a second reference surface is arranged at or in proximity to a field plane of the projection lens; and the method further comprises: determining first intensity distribution data at the first reference surface; determining second intensity distribution data at the second reference surface; and using the first and the second intensity distribution data to determine the pattern data and/or illumination setting data.
27. The method of claim 26, wherein at least one optical surface of an optical element in the beam path of the projection lens is used as reference surface.
28. The method of claim 27, wherein the optical element is a mirror, and the optical surface is a mirror surface.
29. An apparatus, comprising: an illumination system configured to generate an illumination radiation directed onto a mask; a projection lens configured to generate an image of a pattern in a region of an image plane of the projection lens; and a system configured to adapt imaging behavior of the projection lens to a specific use case, the system comprising: units configured to determine use case data specific to the use case; units configured to determine imaging specification data using the use case data; and a control unit assigned to the projection lens and configured to control optical components of the projection lens based on the imaging specification data to adapt the imaging behavior of the projection lens to the use case, wherein the apparatus is a projection exposure apparatus, wherein the units configured to determine use case data comprise an intrinsic data acquisition system which comprises a unit configured to acquire intrinsic data, which unit is connected to at least one measuring and acquiring unit selected from the group consisting of: a wavefront measuring system configured to measure a wavefront of projection radiation passing in the projection lens from the mask to a substrate to be exposed; a reticle measuring system configured to measure structures of the pattern of the mask; and a unit configured to determine intensity distribution data representing a two-dimensional distribution of radiation intensity of the projection radiation at at least one reference surface lying between an object plane of the projection lens and an image plane of the projection lens in a projection beam path; and wherein the unit configured to determine intensity distribution data comprises at least one element selected from the group consisting of: a temperature measuring system configured to: i) acquire local temperatures in an illuminated region in a spatially resolved manner at an optical surface of an optical element that serves as reference surface; and ii) determine therefrom a two-dimensional temperature distribution at the optical surface; and a thermal imaging camera configured for spatially resolved recording of a two-dimensional heat distribution at an optical surface of an optical element of the projection lens.
30. The apparatus of claim 29, wherein the optical element having the optical surface comprises a mirror.
31. The apparatus of claim 29, wherein the system for intrinsic data acquisition comprises: a first unit configured to determine first intensity distribution data at a first reference surface lying at or in proximity to a pupil plane of the projection lens; and a second unit configured to determine second intensity distribution data at a second reference surface lying at or in proximity to a field plane of the projection lens.
32. The apparatus of claim 31, wherein a subaperture ratio is in the range of 0.7 to 1 at the first reference surface, and a subaperture ratio is in the range of 0 to 0.3 at the second reference surface.
33. The apparatus of claim 29, wherein the units configured to determine use case data comprise a system to acquire extrinsic data which comprises a unit to acquire extrinsic data, which is connected to at least one unit selected from the group consisting of: a user interface with a computer terminal configured to allow a user to input data; a memory connected to the control unit and in which is stored at least one portion of the use case data in the form of information about the illumination system, the projection lens and/or the reticle; a data line by which information concerning settings on the illumination system of the projection exposure apparatus is retrievable; and a reticle data acquisition unit configured to read in information concerning the reticle.
34. The apparatus of claim 29, wherein a controllable optical component of the projection lens is a mirror arranged at or in proximity to a pupil surface and to which is assigned a deformation manipulator comprising a multiplicity of independently driveable actuators which allow the mirror surface of the mirror to be deformed one-dimensionally or two-dimensionally.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
(5) The projection exposure apparatus is operated with the radiation of a primary radiation source RS. An illumination system ILL serves for receiving the radiation of the primary radiation source and for shaping illumination radiation directed onto the pattern. The projection lens PO serves for imaging the structure of the pattern onto the light-sensitive substrate W.
(6) The primary radiation source RS can be, inter alia, a laser plasma source or a gas discharge source or a synchrotron-based radiation source. Such radiation sources generate radiation RAD in the extreme ultraviolet range (EUV range), in particular having wavelengths of between 5 nm and 15 nm. In order that the illumination system and the projection lens can operate in this wavelength range, they are constructed with components that are reflective to EUV radiation.
(7) The radiation RAD emerging from the radiation source RS is collected via a collector C and guided into the illumination system ILL. The illumination system includes a mixing unit MIX, a telescope optical unit TO and a field forming mirror FFM. The illumination system shapes the radiation and thus illuminates an illumination field situated in the object plane OS of the projection lens PO or in proximity thereto. In this case, the form and size of the illumination field determine the form and size of the effectively used object field OF in the object plane OS. The illumination field is generally in the shape of a slot having a high aspect ratio between width and height.
(8) During operation of the apparatus, a reflective mask M is arranged in the object plane OS. The projection lens PO here has six mirrors M1 to M6 and images the pattern of the mask into the image plane on a reducing scale, the substrate to be exposed, e.g. a semiconductor wafer, being arranged in the image plane.
(9) The mixing unit MIX substantially consists of two facet mirrors FAC1, FAC2. The first facet mirror FAC1 is arranged in a plane of the illumination system that is optically conjugate with respect to the object plane OS. It is therefore also designated as a field facet mirror. The second facet mirror FAC2 is arranged in a pupil plane of the illumination system that is optically conjugate with respect to a pupil plane of the projection lens. It is therefore also designated as a pupil facet mirror.
(10) With the aid of the pupil facet mirror FAC2 and the imaging optical assembly disposed downstream in the beam path and including the telescope optical unit TO and the field forming mirror FFM operated with grazing incidence, the individual mirroring facets (individual mirrors) of the first facet mirror FAC1 are imaged into the object field.
(11) The spatial (local) illumination intensity distribution at the field facet mirror FAC1 determines the local illumination intensity distribution in the object field. The spatial (local) illumination intensity distribution at the pupil facet mirror FAC2 determines the illumination angle intensity distribution in the object field OF.
(12) A unit RST for holding and manipulating the mask M (reticle) is arranged such that the pattern PAT arranged on the mask lies in the object plane OS of the projection lens PO, the object plane here also being designated as the reticle plane. The mask is movable in this plane for scanner operation in a scanning direction (y-direction) perpendicular to the reference axis AX of the projection lens (z-direction) with the aid of a scan drive SCM.
(13) The substrate W to be exposed is held by a unit WST including a scanner drive SCW in order to move the substrate synchronously with the mask M perpendicularly to the reference axis AX in a scanning direction (y-direction). Depending on the design of the projection lens PO, these movements of mask and substrate can be effected parallel or antiparallel to one another.
(14) The unit WST, which is also designated as “wafer stage”, and the unit RST, which is also designated as “reticle stage”, are part of a scanner unit controlled via a scanning control unit, which in the case of the embodiment is integrated into the central control unit CU of the projection exposure apparatus.
(15) All optical components of the projection exposure apparatus WSC are accommodated in an evacuatable housing H. The projection exposure apparatus is operated under vacuum.
(16) EUV projection exposure apparatuses having a similar basic construction are known e.g. from WO 2009/100856 A1 or WO 2010/049020 A1, the disclosure of which is incorporated by reference in the content of this description.
(17) The mask M has a mask substrate MS having a structured, macroscopically substantially planar front side. A multilayer arrangement containing e.g. alternating layers of molybdenum and silicon or ruthenium and silicon, the multilayer arrangement being reflective to EUV radiation, is situated on the front side. The pattern PAT of the mask is formed by one or a plurality of structured absorber layers, which can consist e.g. of a tantalum-based absorber material. In the operating state illustrated, with the mask incorporated, the front side faces the illumination system ILL and the projection lens PO in such a way that the illumination radiation ILR provided by the illumination system impinges on the pattern obliquely, i.e. at an angle different from zero with respect to the surface normal of the front side, is altered by the pattern and is then radiated as projection radiation PR obliquely into the projection lens PO.
(18) The projection exposure apparatus is equipped with a system for adapting the imaging behaviour of the projection lens PO to a specific use case or user case, which system enables the projection lens to automatically adapt its imaging properties, within certain limits predefined by the design, to a specific use case such that imaging properties which are optimized for the specific use case are achieved. For this purpose, provision is made of units for determining use case data specific to the use case, which serve for internal and external data acquisition for a specific use case. With the aid of the use case data, imaging specification data specifying the imaging behaviour of the projection lens that is expedient for the use case are then determined by corresponding units. The projection lens is equipped with controllable optical components which can be driven on the basis of the imaging specification data with the aid of the control unit CU assigned to the projection lens, in order to alter the imaging properties of the projection lens and to adapt them to the use case (cf.
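The data flow just described (use case data in, imaging specification data out, manipulator control at the end) can be sketched as follows. All names and data structures here are hypothetical illustrations of the described flow, not the actual control software of the apparatus; the numeric specification values are made up.

```python
# Illustrative sketch of the adaptation flow: merge use case data from
# several acquisition sources, then derive per-field-point imaging
# specifications. All identifiers and values are hypothetical.

def determine_use_case_data(intrinsic_sources, extrinsic_sources):
    """Merge pattern data and illumination setting data from all sources.

    Each source is a callable returning a dict of use case data.
    """
    use_case = {}
    for source in list(intrinsic_sources) + list(extrinsic_sources):
        use_case.update(source())
    return use_case

def determine_imaging_specification(use_case):
    """Map use case data to per-field-point imaging specifications.

    Core-region field points receive tighter specifications than
    peripheral ones, so that specifications of at least two field
    points differ (cf. claim 1).
    """
    specs = {}
    for field_point in use_case.get("field_points", []):
        region = use_case["region_of"][field_point]
        specs[field_point] = {
            "wavefront_rms_nm": 0.2 if region == "core" else 0.5,
        }
    return specs
```

A caller would hand the merged dictionary to the specification step and pass the result on to the manipulator control.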
(19) Information about a specific use case can be determined, in principle, by intrinsic data acquisition and/or by extrinsic data acquisition. The intrinsic data acquisition substantially uses data which can be determined on components of the projection exposure apparatus directly or indirectly with the aid of suitable measuring systems. Via extrinsic data acquisition, by way of example, data can be interrogated by a user via a user interface. It is also possible to retrieve a portion of relevant data from corresponding memories containing information about the illumination system, the projection lens and/or the reticle provided for the use case.
(20) Some of these systems are explained on the basis of the example of the projection exposure apparatus in
(21) A system for intrinsic data acquisition typically has a unit IDA for intrinsic data acquisition, to which one or a plurality of measuring and acquiring units can be connected. A wavefront measuring system WMS is designed to perform a measurement of the wavefront of the projection radiation which passes in the projection lens from the mask to the substrate to be exposed. A spatially resolving measurement for a plurality of field points is preferably provided. By way of example, it is possible to provide wavefront measuring systems of the type described in U.S. Pat. No. 7,333,216 A1 or U.S. Pat. No. 6,650,399 A1, the disclosure content of which in this respect is incorporated by reference in the content of this description.
(22) Alternatively or additionally, it is possible to provide a reticle measuring system RMS designed to measure the structures of the pattern of the mask e.g. optically and to acquire e.g. information about the core region structures, in particular the local position thereof within the pattern and information concerning the orientation of the lines of the core region structure.
(23) A temperature measuring system MTS is designed to acquire the local mirror temperatures in the illuminated region on the second mirror M2 in a spatially resolved manner and to determine therefrom a two-dimensional temperature distribution at this mirror surface serving as a reference surface. The temperature measuring system can contain e.g. a multiplicity of temperature sensors arranged in a two-dimensional distribution in the mirror substrate in proximity to the multilayer reflection layer.
(24) Alternatively or additionally, provision can be made of at least one thermal imaging camera for the spatially resolving recording of the heat distribution at a mirror surface. In the case of the example illustrated, the heat distribution at the concave mirror surface of the sixth mirror M6 is acquired in a spatially resolving manner via the thermal imaging camera MWBK6, and the heat distribution at the concave mirror surface of the fourth mirror M4 is acquired in a spatially resolving manner via the thermal imaging camera MWBK4.
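How a measured temperature distribution can stand in for an intensity distribution may be sketched under a simple assumption: in a steady state, the local temperature rise above ambient is roughly proportional to the locally absorbed radiation power. The following first-order model is only an illustration of that assumption, not the calibration actually used in the apparatus.

```python
import numpy as np

def relative_intensity_from_temperature(temperature_map, ambient):
    """Estimate a relative intensity distribution from a 2D temperature map.

    Assumes steady state with temperature rise proportional to locally
    absorbed power (a simplification). Returns values normalized to the
    peak, so only the shape of the distribution is meaningful.
    """
    rise = np.clip(np.asarray(temperature_map, dtype=float) - ambient, 0.0, None)
    peak = rise.max()
    return rise / peak if peak > 0 else rise
```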
(25) If the mirror used for measurement is a mirror mounted in proximity to or in a pupil plane of the projection lens (that is to say “near-pupil”), then relevant information about the illumination setting chosen can be derived therefrom. In the case of a mirror arranged near-field, it is possible, if appropriate, to derive information about the position of core region structures within the pattern.
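As a sketch of how illumination setting information can be derived from a near-pupil intensity map, the orientation of a dipole setting can be read off from the second moments of the pupil energy distribution. This is a hypothetical minimal classifier, not the evaluation method of the apparatus.

```python
import numpy as np

def dipole_orientation(pupil_intensity):
    """Classify a dipole setting as 'x' or 'y' from a near-pupil intensity map.

    pupil_intensity: 2D array; the pupil center is taken as the array center.
    An x-dipole (poles on the x axis) has a larger energy-weighted <x^2>
    than <y^2>, and vice versa for a y-dipole.
    """
    n_rows, n_cols = pupil_intensity.shape
    y, x = np.indices((n_rows, n_cols))
    x = x - (n_cols - 1) / 2.0   # centered pupil coordinates
    y = y - (n_rows - 1) / 2.0
    total = pupil_intensity.sum()
    mx2 = (pupil_intensity * x**2).sum() / total
    my2 = (pupil_intensity * y**2).sum() / total
    return "x" if mx2 > my2 else "y"
```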
(26) In the case of the example in
(27) In the example in
(28) In order to obtain information about the illumination setting respectively set, the system for intrinsic data acquisition also acquires the mirror positions of the first facet mirror FAC1 (field facet mirror) and of the second facet mirror FAC2 (pupil facet mirror), in order to determine in particular the local illumination intensity distribution in the object field (at the reticle).
(29) A system for extrinsic data acquisition has a unit EDA for extrinsic data acquisition to which in turn one or a plurality of components can be connected. A computer terminal CT serves for the inputting of data by a user. A reticle data acquisition device RBCS can contain for example a reticle bar code scanner and/or an RFID receiver in order to obtain relevant information about the reticle, in particular about the pattern applied to the reticle, from a reticle database with the aid of the acquired identification data of the reticle. A connected database D allows access to stored use case data.
(30) The unit IDA for intrinsic data acquisition and the unit EDA for extrinsic data acquisition are connected to the control unit CU of the projection exposure apparatus and the central data memory DS connected thereto and can exchange data with them.
(31) The projection lens includes a manipulation system having a multiplicity of manipulators which make it possible to alter the imaging properties of the projection lens in a defined manner on the basis of control signals of the control unit CU. In this case, the term “manipulator” denotes, inter alia, optomechanical units designed for actively acting—on the basis of corresponding control signals—on individual optical elements or groups of optical elements in order to alter the optical effect thereof. In general, manipulators are set such that metrologically acquired imaging aberrations can be reduced in a targeted manner. Manipulators are often also provided in order for example to displace, to tilt and/or to deform the mask or the substrate. A manipulator can be designed e.g. for a decentration of an optical element along a reference axis or perpendicular thereto or for a tilting of an optical element. In this case, the manipulators bring about rigid-body movements of optical elements. It is also possible locally or globally to heat or cool an optical element with the aid of a thermal manipulator and/or to introduce a deformation of an optical element. For this purpose, a manipulator contains one or a plurality of actuating elements or actuators, the present actuating value of which can be changed or adjusted on the basis of control signals of the control system.
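The targeted reduction of metrologically acquired imaging aberrations by manipulators is commonly formulated as a least-squares problem over a sensitivity matrix. The following generic sketch illustrates that standard formulation under that assumption; it is not the manipulation algorithm of the apparatus itself, and the stroke-limit handling is deliberately simplistic.

```python
import numpy as np

def manipulator_setpoints(sensitivities, aberrations, max_stroke=None):
    """Least-squares actuator settings that best cancel measured aberrations.

    sensitivities: (n_aberrations, n_actuators) matrix; column j holds the
        change of each aberration coefficient per unit travel of actuator j.
    aberrations:   (n_aberrations,) measured coefficients (e.g. Zernikes).
    max_stroke:    optional hard limit clipping each actuator travel
        (clipping after solving is a simplification; a real system would
        solve a constrained problem).
    """
    x, *_ = np.linalg.lstsq(sensitivities, -aberrations, rcond=None)
    if max_stroke is not None:
        x = np.clip(x, -max_stroke, max_stroke)
    return x
```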
(32)
(33) In the exemplary embodiment in
(34) In the case of the second mirror M2 arranged near-pupil, the subaperture ratio SAR is in the range of between 0.7 and 1, namely approximately 0.8. In the case of the first mirror M1 arranged near-field, the subaperture ratio is in the range of between 0 and 0.4, namely approximately 0.3.
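The subaperture ratio values quoted above are consistent with a definition commonly used in the literature, SAR = |h_m| / (|h_m| + |h_c|), with marginal ray height h_m and chief ray height h_c at the surface. The text itself does not spell out the formula, so this is an assumption; the sketch below merely illustrates that definition.

```python
def subaperture_ratio(marginal_ray_height, chief_ray_height):
    """Subaperture ratio SAR at an optical surface (assumed definition).

    SAR = |h_m| / (|h_m| + |h_c|): SAR -> 1 at a pupil plane (chief ray
    height h_c = 0, all field points' footprints coincide) and SAR -> 0
    at a field plane (marginal ray height h_m = 0, each field point
    illuminates only a small subaperture).
    """
    hm, hc = abs(marginal_ray_height), abs(chief_ray_height)
    return hm / (hm + hc)
```

With h_m four times h_c this gives 0.8, matching the value quoted for the near-pupil mirror M2.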
(35) A first thermal imaging camera MWBK1 is arranged in relation to the first mirror M1 such that its two-dimensional image field acquires the entire illuminated region of the first mirror, such that it is thereby possible to record a two-dimensional temperature distribution on the first mirror. A second thermal imaging camera MWBK2 is arranged correspondingly relative to the mirror surface of the second mirror M2 in order to acquire the entire illuminated region of the pupil mirror.
(36) An explanation is given below of how, on the basis of the two-dimensional intensity distributions on a near-field mirror and a near-pupil mirror that are acquired via these units, it is possible to obtain information about the lateral distribution of core region structures and peripheral structures in the pattern PAT of the mask M.
(37) Possible operation of the projection exposure apparatus is explained generally in connection with the flow charts in
(38) In the optional step S1, use case data are input by an operator via the computer terminal CT. Use case data can also be read out from the database D with access to stored use case data. If use case data are input via the computer terminal CT, they can be stored in the central data memory DS.
(39) In the optional step S2, a reticle identifier for a reticle to be incorporated, or for an already incorporated reticle, is read from the reticle, e.g. with the aid of a bar code scanner or an RFID receiver, and corresponding reticle information is read out from the database D. By way of example, information about the position and orientation of core region structures can be stored here. If information is acquired by this route, it is stored in the central data memory DS.
(40) If a reticle measuring system RMS is present and used, in the optional step S3 information for example about position and orientation of core region structures and peripheral structures can be determined with the aid of the reticle measuring system via the measurement of the pattern.
(41) In the optional steps S4 and S5, information about the illumination setting presently set on the illumination system can be acquired by corresponding sensors at the first facet mirror FAC1 and respectively at the second facet mirror FAC2. Corresponding illumination setting data can be stored in the data memory DS.
(42) All of steps S1 to S5 of the external data acquisition are optional. One or a plurality of the devices provided for this purpose may be present on a projection exposure apparatus and used. Other embodiments have different systems for external data acquisition, or none.
(43) This upstream data acquisition is ended at the point in time DAK1 COMP. On the basis of the information present up to then, the control unit CU in step S6 calculates use case data UCD and stores them in the central data memory DS. On the basis of the data, the control unit CU in step S7 calculates provisional imaging specification data ASD and stores them in the central data memory DS. The projection exposure apparatus is thus prepared for a lens start at the point in time PREP.
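The merging of the optional sources from steps S1 to S5 into one use case data record can be sketched as follows (a minimal illustration with hypothetical field names; sources that are absent are passed as None and skipped, and later sources override earlier ones):

```python
def merge_use_case_data(*sources):
    """Merge optional external data sources (steps S1-S5) into a single
    use case data record. Missing sources are passed as None and skipped;
    later sources override earlier ones on key conflicts."""
    ucd = {}
    for src in sources:
        if src:
            ucd.update(src)
    return ucd

# Hypothetical inputs: operator entry (S1), reticle database record (S2),
# and illumination setting sensors (S4/S5); the reticle measuring system
# (S3) is absent in this example.
operator_input = {"customer": "fab-1", "layer": "metal-2"}
reticle_db = {"core_regions": [(0, 0, 10, 10)], "layer": "metal-2a"}
illumination = {"setting": "x-dipole"}

ucd = merge_use_case_data(operator_input, reticle_db, None, illumination)
```

The override order mirrors the idea that a direct reticle measurement or database record is more specific than a manual operator entry.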
(44) If the reticle is incorporated in the predefined position and the illumination system is put into operation, the pattern of the mask is illuminated and the radiation modified by the pattern passes as projection radiation through the projection lens to the substrate W. In this phase, information about the present use case can be acquired with the aid of units of the internal data acquisition (cf.
(45) In step S13, the control unit CU calculates control data and specifications (imaging specification data) for the use case whose special nature was characterized by external and internal data acquisition in the previous steps. After the corresponding data have been stored in the central data memory DS (step S14), the control unit determines, on the basis of these use case data, a lens model LM adapted thereto (step S15). The control parameters of the lens model are stored in the central data memory DS in step S16. In step S17, the lens model is then activated by the control unit CU (CU ACT), such that in step S18 the projection exposure apparatus can be operated after a trigger (TRIGG) in step S19 on the basis of the lens model which was adapted to the specific reticle and specific properties of the illumination using use case data.
(46) The operation of the projection exposure apparatus is continued with the imaging specification data determined in this way. In this case, the term “lens model” substantially denotes a projection exposure apparatus control algorithm. The lens model calculates how the manipulators are to be readjusted on the basis of a measurement (or a projection), and communicates corresponding control commands to the manipulator control unit. The imaging specification data determine how the “lens model” controls the projection exposure apparatus in order to achieve the desired imaging specifications.
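One control step of such a lens model can be sketched very roughly as follows (hypothetical names and scalar sensitivities, not the patent's algorithm): a manipulator correction is commanded only where a measured aberration exceeds the tolerance set by the imaging specification data.

```python
def lens_model_step(measured, tolerances, sensitivities):
    """One simplified control step of a 'lens model': for each aberration
    channel, command a manipulator correction only if the measured value
    exceeds the tolerance from the imaging specification data.
    command = -measured / sensitivity drives the aberration toward zero."""
    commands = {}
    for name, value in measured.items():
        if abs(value) > tolerances.get(name, 0.0):
            commands[name] = -value / sensitivities[name]
    return commands

measured = {"Z7": 2.0, "Z9": 0.1}       # measured aberrations (arbitrary units)
tolerances = {"Z7": 0.5, "Z9": 0.5}     # imaging specification data
sensitivities = {"Z7": 4.0, "Z9": 1.0}  # aberration change per unit actuation

cmds = lens_model_step(measured, tolerances, sensitivities)
```

Here Z9 stays within tolerance and receives no command, while Z7 is driven back toward zero; a real lens model would solve a coupled optimization over many manipulators and field points.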
(47) With reference to
(48) The use case data to be acquired for this substantially consist of pattern data and illumination setting data. Pattern data describe the reticle to be imaged, or the pattern thereof, with regard to the question of what imaging quality is desired at what field points. The imaging quality required where dense lines have to be imaged is higher than that required where less dense lines with a larger spacing have to be imaged. Accordingly, it is considered to be expedient for the pattern data to contain information about those regions where high imaging quality is involved (core region structure data) and about those regions where an imaging quality that is not as high is sufficient (peripheral structure data). The core region structure data include core region structure orientation data representing an orientation of lines of a core region structure of the pattern, and core region structure position data representing a position of lines of a core region structure within a pattern. Correspondingly, peripheral structure data include so-called peripheral structure orientation data representing the orientations of lines of the pattern which do not belong to the core region structures within the pattern, and peripheral structure position data representing the positions of such lines within the pattern.
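The pattern data structure described above can be sketched as follows (a minimal illustration with hypothetical class names; the position/orientation encoding is an assumption, not the patent's data format):

```python
from dataclasses import dataclass

@dataclass
class StructureData:
    """Lines of a pattern region: position within the pattern (x, y)
    and orientation of the lines in degrees."""
    x: float
    y: float
    orientation_deg: float

@dataclass
class PatternData:
    """Imaging-relevant pattern description: core region structures
    (high imaging quality required) and peripheral structures (relaxed)."""
    core: list
    peripheral: list

    def required_quality(self, x, y, radius=1.0):
        # High quality is required wherever a core region structure lies nearby.
        for s in self.core:
            if (s.x - x) ** 2 + (s.y - y) ** 2 <= radius ** 2:
                return "high"
        return "relaxed"

pat = PatternData(
    core=[StructureData(0.0, 0.0, 90.0)],
    peripheral=[StructureData(5.0, 5.0, 0.0)],
)
q1 = pat.required_quality(0.2, 0.1)  # near a core region structure
q2 = pat.required_quality(5.0, 5.0)  # only a peripheral structure nearby
```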
(49) If detailed direct information about the reticle to be imaged is not present or cannot be determined, fundamental properties of the pattern to be imaged can in general be deduced with the aid of illumination setting data. Illumination setting data include, in particular, information about the set illumination setting, that is to say about the illumination intensity distribution in the pupil plane of the illumination system, which is also designated as the illumination pupil. Examples include x-dipole, y-dipole, quadrupole or a freeform pupil, i.e. an illumination intensity distribution which has little or no symmetry and can be asymmetrical, for example.
(50)
(51) In the case of
(52)
(53)
(54) The intensity distributions acquired with the aid of the thermal imaging cameras MWBK1, MWBK2 are converted into corresponding core region structure data and peripheral structure data using an image processing mechanism. For this purpose it is possible to use methods of pattern recognition and/or feature extraction in the evaluation algorithms. The acquired intensity distributions can also be compared with standard intensity distributions stored in the form of use case classification data in a memory of the evaluation unit.
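The comparison with stored standard intensity distributions can be sketched as a nearest-match search (a deliberately simple illustration, assuming distributions are given as small pixel grids; the sum-of-squared-differences metric is an assumption, not the patent's evaluation algorithm):

```python
def classify_intensity_distribution(acquired, standards):
    """Match an acquired two-dimensional intensity distribution against
    stored standard distributions (use case classification data) by the
    smallest sum of squared pixel differences."""
    best, best_err = None, float("inf")
    for label, ref in standards.items():
        err = sum(
            (a - r) ** 2
            for row_a, row_r in zip(acquired, ref)
            for a, r in zip(row_a, row_r)
        )
        if err < best_err:
            best, best_err = label, err
    return best

# Hypothetical 3x3 standard pupil maps stored as use case classification data.
standards = {
    "x-dipole": [[1, 0, 1], [0, 0, 0], [1, 0, 1]],
    "annular":  [[0, 1, 0], [1, 0, 1], [0, 1, 0]],
}
acquired = [[1, 0, 1], [0, 0, 0], [1, 0, 0]]  # noisy x-dipole-like map
label = classify_intensity_distribution(acquired, standards)
```

A production system would use the pattern recognition and feature extraction methods mentioned above rather than raw pixel differences, but the idea of scoring against stored standards is the same.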
(55) From the use case data thus determined, imaging specification data are then determined in a next step. This takes account of the fact that the specifications applicable at those field points at which core region structures do not have to be imaged can be less stringent (e.g. in the form of larger permissible Zernike coefficients) than those applicable at those field points at which core region structures have to be imaged.
(56) Even at those field points at which core region structures are imaged, not all of the Zernikes have to be small. By way of example, it may suffice if those Zernikes (or combinations of Zernikes, such as cos 30°×Z7+sin 30°×Z8) are small which are oriented in the direction of the densest lines to be imaged of the core region structure. Other Zernikes and other orientations of the Zernikes (such as e.g. cos 60°×Z7+sin 60°×Z8) may indeed be large.
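The oriented combination named above can be written out directly. The sketch below (function name hypothetical) evaluates cos θ×Z7 + sin θ×Z8 and shows that a coma aberration oriented along 30° is large when probed along that direction and vanishes when probed 90° away:

```python
import math

def oriented_coma(z7: float, z8: float, angle_deg: float) -> float:
    """Combination cos(theta)*Z7 + sin(theta)*Z8: the coma contribution
    oriented along a given direction, e.g. along the densest lines to be
    imaged of a core region structure."""
    theta = math.radians(angle_deg)
    return math.cos(theta) * z7 + math.sin(theta) * z8

# Pure coma oriented along 30 degrees:
z7, z8 = math.cos(math.radians(30)), math.sin(math.radians(30))
along = oriented_coma(z7, z8, 30.0)   # probed along the line direction
cross = oriented_coma(z7, z8, 120.0)  # probed 90 degrees away
```

This is why a specification can tolerate large Zernike values in one orientation while holding the orientation aligned with the core region structure tight.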
(57) Preferred embodiments have been explained on the basis of EUV projection exposure apparatuses and methods. However, the use possibilities are not restricted thereto. The disclosure can also be used in the case of projection exposure apparatuses and methods which use radiation from the deep ultraviolet range (DUV) or vacuum ultraviolet range (VUV), e.g. at wavelengths of less than 260 nm, in particular at approximately 193 nm. The illumination system and the projection lens can then contain refractive optical elements, in particular lens elements, if appropriate in combination with one or a plurality of mirrors. The documents WO 2004/019128 A2 and WO 2005/111689 A2 disclose by way of example projection exposure apparatuses including catadioptric projection lenses having three imaging lens parts and correspondingly two intermediate images, wherein in the second lens part a concave mirror is arranged in the region of a pupil plane.