METHOD OF PERFORMING METROLOGY ON A MICROFABRICATION PATTERN

20230298854 · 2023-09-21


    Abstract

    A method includes generating, by a SEM, sets of frames corresponding to regions of a microfabrication pattern, for each set of frames, estimating feature data representing edge positions, linewidths, or centerline positions of one or more features of each region of the pattern, and computing a preliminary estimate of a roughness parameter from the feature data. The roughness parameter is indicative of a line edge roughness, a linewidth roughness, or a pattern placement roughness of the one or more features. The method further includes fitting a model equation to the preliminary estimates of the roughness parameter using a model parameter dependent on the number of frames of each set of frames, the model equation relating the model parameter to the roughness parameter; and computing a final estimate of the roughness parameter as an asymptotic value of the fitted model equation.

    Claims

    1. A method comprising: generating, by a scanning electron microscope, a first set of frames of a first region of a microfabrication pattern, a second set of frames of a second region of the microfabrication pattern, and a third set of frames of a third region of the microfabrication pattern, wherein a number of frames of the first set, the second set and the third set are different; estimating, using the first set, the second set, and the third set, first feature data corresponding to the first set, second feature data corresponding to the second set, and third feature data corresponding to the third set, wherein the first feature data, the second feature data, and the third feature data represent edge positions, linewidths, or centerline positions of features of the microfabrication pattern; computing a first preliminary estimate, a second preliminary estimate, and a third preliminary estimate of a roughness parameter from the first feature data, the second feature data, and the third feature data, wherein the roughness parameter is indicative of a line edge roughness, a linewidth roughness, or a pattern placement roughness of the features of the microfabrication pattern; fitting a model equation to the first preliminary estimate, the second preliminary estimate, and the third preliminary estimate using a model parameter that is dependent on the number of frames of the first set, the second set, or the third set, the model equation relating the model parameter to the roughness parameter; and computing a final estimate of the roughness parameter as an asymptotic value of the model equation.

    2. The method according to claim 1, wherein the roughness parameter is one or more standard deviations of an edge position of the microfabrication pattern.

    3. The method according to claim 1, wherein the roughness parameter is one or more standard deviations of a linewidth of the microfabrication pattern.

    4. The method according to claim 1, wherein the roughness parameter is one or more standard deviations of a centerline position of the microfabrication pattern.

    5. The method according to claim 1, wherein the first preliminary estimate, the second preliminary estimate, and the third preliminary estimate of the roughness parameter are noise-unbiased preliminary estimates of the roughness parameter.

    6. The method according to claim 5, further comprising: computing a spatial frequency density representation from the first feature data, and estimating a noise floor of the spatial frequency density representation, wherein computing the first preliminary estimate comprises computing the first preliminary estimate using the spatial frequency density representation and the noise floor.

    7. The method according to claim 6, wherein computing the first preliminary estimate further comprises: computing a noise-unbiased spatial frequency density representation by subtracting the noise floor from the spatial frequency density representation; and integrating the noise-unbiased spatial frequency density representation.

    8. The method according to claim 6, wherein computing the first preliminary estimate further comprises: integrating the spatial frequency density representation to obtain a noise-biased estimate of the roughness parameter; and subtracting the noise floor from the noise-biased estimate.

    9. The method according to claim 6, wherein the spatial frequency density representation is a Fourier spectrum or a power spectral density.

    10. The method according to claim 1, further comprising, generating a first composite image as a pixel-wise average of the first set of frames, a second composite image as a pixel-wise average of the second set of frames, and a third composite image as a pixel-wise average of the third set of frames, wherein estimating the first feature data, the second feature data, and the third feature data comprises estimating the first feature data using the first composite image, estimating the second feature data using the second composite image, and estimating the third feature data using the third composite image.

    11. The method according to claim 1, wherein the model parameter is equal to the number of frames of the first set, the second set, or the third set.

    12. The method according to claim 1, wherein the model parameter is an average signal-to-noise ratio for the first set, the second set, or the third set.

    13. The method according to claim 1, wherein the model equation is an exponential function.

    14. The method according to claim 13, wherein the model equation is Y=a(1−b·e^(c·x)), wherein Y is the roughness parameter, x is the model parameter, and a, b, c are fitting parameters.

    15. The method according to claim 1, wherein the number of frames of the first set is less than 8, and wherein the numbers of frames of the second set and the third set are each equal to or greater than 8.

    16. The method according to claim 1, wherein the first region, the second region, and the third region are different regions of the microfabrication pattern.

    17. The method according to claim 1, wherein the microfabrication pattern is a resist pattern formed on a substrate.

    18. The method according to claim 1, wherein the microfabrication pattern is an etched pattern formed on a substrate.

    19. A non-transitory computer-readable media storing instructions that, when executed by a computing device, cause the computing device to perform functions comprising: generating, by a scanning electron microscope, a first set of frames of a first region of a microfabrication pattern, a second set of frames of a second region of the microfabrication pattern, and a third set of frames of a third region of the microfabrication pattern, wherein a number of frames of the first set, the second set and the third set are different; estimating, using the first set, the second set, and the third set, first feature data corresponding to the first set, second feature data corresponding to the second set, and third feature data corresponding to the third set, wherein the first feature data, the second feature data, and the third feature data represent edge positions, linewidths, or centerline positions of features of the microfabrication pattern; computing a first preliminary estimate, a second preliminary estimate, and a third preliminary estimate of a roughness parameter from the first feature data, the second feature data, and the third feature data, wherein the roughness parameter is indicative of a line edge roughness, a linewidth roughness, or a pattern placement roughness of the features of the microfabrication pattern; fitting a model equation to the first preliminary estimate, the second preliminary estimate, and the third preliminary estimate using a model parameter that is dependent on the number of frames of the first set, the second set, or the third set, the model equation relating the model parameter to the roughness parameter; and computing a final estimate of the roughness parameter as an asymptotic value of the model equation.

    20. A computing device comprising: a processor; and a computer readable medium storing instructions that, when executed by the processor, cause the computing device to perform functions comprising: generating, by a scanning electron microscope, a first set of frames of a first region of a microfabrication pattern, a second set of frames of a second region of the microfabrication pattern, and a third set of frames of a third region of the microfabrication pattern, wherein a number of frames of the first set, the second set and the third set are different; estimating, using the first set, the second set, and the third set, first feature data corresponding to the first set, second feature data corresponding to the second set, and third feature data corresponding to the third set, wherein the first feature data, the second feature data, and the third feature data represent edge positions, linewidths, or centerline positions of features of the microfabrication pattern; computing a first preliminary estimate, a second preliminary estimate, and a third preliminary estimate of a roughness parameter from the first feature data, the second feature data, and the third feature data, wherein the roughness parameter is indicative of a line edge roughness, a linewidth roughness, or a pattern placement roughness of the features of the microfabrication pattern; fitting a model equation to the first preliminary estimate, the second preliminary estimate, and the third preliminary estimate using a model parameter that is dependent on the number of frames of the first set, the second set, or the third set, the model equation relating the model parameter to the roughness parameter; and computing a final estimate of the roughness parameter as an asymptotic value of the model equation.

    Description

    BRIEF DESCRIPTION OF THE FIGURES

    [0068] The above, as well as additional, features will be better understood through the following illustrative and non-limiting detailed description of example embodiments, with reference to the appended drawings.

    [0069] FIG. 1 is a schematic illustration of a metrology system, according to an example.

    [0070] FIG. 2 is a flow chart of a method for performing metrology, according to an example.

    [0071] FIG. 3 shows fitting a model equation, according to an example.

    [0072] FIG. 4 shows a power spectral density of feature data, according to an example.

    [0073] FIG. 5 shows SEM images, according to an example.

    [0074] FIG. 6 shows the results of metrology performed on the patterns shown in FIG. 5, according to an example.

    [0075] All the figures are schematic, not necessarily to scale, and generally only show parts which are necessary to elucidate example embodiments, wherein other parts may be omitted or merely suggested.

    DETAILED DESCRIPTION

    [0076] Example embodiments will now be described more fully hereinafter with reference to the accompanying drawings. That which is encompassed by the claims may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided by way of example. Furthermore, like numbers refer to the same or similar elements or components throughout.

    [0077] A metrology system 100 and method 200 for performing metrology on a microfabrication pattern will now be described in detail with reference to FIG. 1 and FIG. 2.

    [0078] FIG. 1 schematically illustrates a metrology system 100 comprising a SEM 110 and a computing device 120. The SEM 110 may be a conventional SEM, for instance a critical dimension scanning electron microscope (CDSEM). The SEM 110 may by way of example comprise a vacuum chamber for receiving a sample comprising the microfabrication pattern 130 to be characterized, and an electron source for forming an electron beam. The SEM 110 may further comprise focusing lenses and scanning coils for focusing and scanning the electron beam across the sample in a region of interest, e.g. in a raster scan pattern. An electron detector of the SEM 110 may generate a pixel value (e.g. grayscale) corresponding to the number of back-scattered electrons detected at each scanning position to generate a frame (i.e. a digital image frame). A set of frames may be generated of a same region and combined into a composite image by performing frame integration/averaging of the set of frames.

    [0079] The microfabrication pattern 130 may for instance be a resist pattern formed on a substrate, or an etched pattern formed on a substrate, e.g. etched via extreme ultraviolet lithography (EUVL), such as high-NA EUVL. However, the methods described herein may also be used for performing metrology on patterns formed with non-EUVL lithography techniques (e.g. deep ultraviolet), as well as other types of microfabrication patterns.

    [0080] To facilitate understanding, the patterns will in the following be assumed to be line-shaped, i.e. a pattern of one or more line-shaped features (e.g. protruding material lines or trenches in a layer on the substrate) extending in parallel along a substrate. However, the method may also be used to perform metrology on pattern features of other shapes, such as rounded features (e.g. pillars or via holes). In either case, the pattern may be a regular pattern (e.g. a regular line pattern). Thus, each of the regions of which SEM images/frames are generated may include a same or corresponding pattern (e.g. lines with corresponding orientation, line width, and spacing).

    [0081] Each SEM scan line (and thus each row of pixels of each frame) may conveniently be oriented across and transverse to the features (e.g. lines) of the pattern, wherein each frame/image may be generated such that each column of pixels of the frame/image is oriented along a longitudinal dimension of the features and each row of pixels is oriented across and transverse to the features, e.g. as shown in FIG. 5. In other words, each row of pixels of a frame/image may correspond to a respective scan line across the features of the imaged region. Optionally, the computing device 120 may apply a rotational transform to each frame/image as a pre-processing step to orient the imaged features along the column dimension Y if a different scan line orientation has been used. The orientation of the pattern with respect to the frame/image dimensions is however arbitrary and the processing to be described below may also be applied to frames/images with features oriented along the Y-dimension.

    [0082] The data analysis of the present method (e.g. estimation of feature data and roughness parameters) may be implemented by the computing device 120. The computing device 120 may comprise processing circuitry configured to implement the method. The computing device 120 may for instance comprise one or more processors 122 and the operations of the method may be implemented using software instructions which may be stored on a computer readable media 124 (e.g. on a non-transitory computer readable storage medium) to be executed by the one or more processors 122 of the computing device 120. The computing device 120 may for example be a personal computer (e.g. a laptop or desktop computer). The computing device 120 may alternatively comprise dedicated circuitry configured to implement the method, such as one or more integrated circuits, one or more application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs).

    [0083] In FIG. 1 the computing device 120 is shown to be separate from the SEM 110 and coupled to the SEM 110 over a communications interface (e.g. a serial or parallel interface or a network) to receive data (e.g. frames/images) from the SEM 110, and optionally transmit control parameters to the SEM 110 (e.g. parameters for setting pixel size, beam current, frame/image resolution, the number of frames of integration, coordinates for the region to be scanned, etc.) The computing device 120 may however also form part of the SEM 110, such that the computing device 120 of the SEM 110 may compute and output the final estimate of the roughness parameter to an external device, e.g. a computer or a display device connected to the SEM 110.

    [0084] FIG. 2 depicts a flow chart of the method 200. In step S202, the SEM 110 generates a number of sets of frames S.sub.i={F.sub.i,1, F.sub.i,2, . . . , F.sub.i,nbr.sub.i} of the microfabrication pattern 130. A set of frames S.sub.i may for brevity in the following also be referred to as a frame set S.sub.i. Each frame set S.sub.i comprises or consists of a respective number of frames nbr.sub.i, which is different for each frame set S.sub.i. The number of frame sets S.sub.i may generally be selected in dependence on the number of data points needed to ensure a stable fitting of a model equation (described below) and may by way of example be 3, 4, 5 or greater. According to the method, at least 3 frame sets S.sub.1, S.sub.2, S.sub.3, . . . are generated, i.e. a first frame set S.sub.1 of nbr.sub.1 frames, a second frame set S.sub.2 of nbr.sub.2 frames, and a third frame set S.sub.3 of nbr.sub.3 frames, wherein nbr.sub.1<nbr.sub.2<nbr.sub.3.

    [0085] The frames F.sub.i,j of each frame set S.sub.i are generated or acquired from, and thus represent or image, a same region R.sub.i of the microfabrication pattern 130 under investigation. Each frame set S.sub.i may represent a different region R.sub.i of the pattern 130.

    [0086] While the numbers of frames nbr.sub.i of the frame sets S.sub.i are different (and optionally also the regions R.sub.i each frame set depicts), it is to be understood that other acquisition parameters may be kept constant between the frame sets S.sub.i. Variations in parameters other than the number of frames/SNR between the frame sets could otherwise complicate data analysis and render the metrology more challenging. SEM acquisition parameters which may be kept the same among the frame sets S.sub.i include acceleration voltage, probe current, pixel size, and magnification.

    [0087] In step S204, each frame set S.sub.i is received by the computing device 120, and for each frame set S.sub.i (e.g. each of the first frame set S.sub.1, the second frame set S.sub.2, and the third frame set S.sub.3) the computing device 120 estimates feature data and computes a preliminary estimate of a roughness parameter from the feature data (step S206 and sub-steps S206a, S206b). A preliminary estimate of a roughness parameter may for brevity in the following also be referred to as a preliminary roughness estimate.

    [0088] The estimation of the feature data may comprise estimating feature data representing one or more of edge positions, linewidths, or centerline positions of each feature of the region R.sub.i represented in the frames F.sub.i,j of the frame set S.sub.i.

    [0089] The feature data may comprise edge data representing, for each feature in the region R.sub.i, a position of the edge (left and/or right) of the feature for each scan line or pixel row position along the feature. For a line-shaped feature, the edge position may be indicated as a deviation from a nominal edge position, e.g. an average position of the feature edge.

    [0090] The feature data may additionally or alternatively comprise linewidth data representing, for each feature in the region R.sub.i, a linewidth (a difference between left and right edge positions) of the feature for each scan line or pixel row position along the feature.

    [0091] The feature data may additionally or alternatively comprise centerline data representing, for each feature in the region R.sub.i, a centerline position (a midpoint between left and right edge positions) of the feature for each scan line or pixel row position along the feature. For a line-shaped feature, the centerline position may be indicated as a deviation from a nominal centerline position, e.g. an average centerline position of the feature.

    [0092] Any suitable conventional image processing technique for edge detection may be used for estimating the feature data.

    [0093] From the edge data, the computing device 120 may compute a preliminary estimate of the line edge roughness (LER). From the linewidth data, the computing device 120 may compute a preliminary estimate of the line width roughness (LWR). From the centerline data, the computing device 120 may compute a preliminary estimate of the pattern placement roughness (PPR). More specifically, the computing device 120 may compute a noise-unbiased preliminary roughness estimate (i.e. uLER, uLWR or uPPR), as will be described in further detail below.

    [0094] In step S208, the method 200 proceeds by the computing device 120 fitting a model equation to the preliminary estimates of the roughness parameter (e.g. uLER, uLWR or uPPR) using a model parameter dependent on the number of frames of each set of frames. The model equation may be:


    Y=a(1−b·e^(c·x))   (Eq. 1)

    [0095] where Y is the roughness parameter (e.g. uLER, uLWR or uPPR), x is the model parameter, and a, b, c are fitting parameters. The model parameter x may be the number of frames nbr.sub.i of the respective frame sets S.sub.i, or a respective average SNR for the respective frame sets S.sub.i, or some other derived parameter increasing with nbr.sub.i.
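By way of illustration only, the fit of Eq. 1 can be sketched in a few lines of numerical code. The data values below are hypothetical, and the fitting strategy (a grid search over the asymptote a combined with a log-linear fit of b and c) is merely one possible approach; a general-purpose nonlinear least-squares routine would serve equally well.

```python
import numpy as np

def fit_model(x, y):
    # Fit Y = a * (1 - b * exp(c * x)) (Eq. 1).
    # Strategy (one possibility): grid-search the asymptote a, and for each
    # candidate log-linearize the remainder: log(1 - y/a) = log(b) + c*x.
    best = None
    for a in np.linspace(y.max() * 1.001, y.max() * 2.0, 2000):
        r = 1.0 - y / a                          # positive, since a > max(y)
        c, log_b = np.polyfit(x, np.log(r), 1)   # slope c, intercept log(b)
        pred = a * (1.0 - np.exp(log_b + c * x))
        sse = float(np.sum((y - pred) ** 2))
        if best is None or sse < best[0]:
            best = (sse, a, np.exp(log_b), c)
    return best[1], best[2], best[3]             # a, b, c

# Hypothetical preliminary uLWR estimates (nm) vs. number of frames:
frames = np.array([4.0, 8.0, 16.0, 32.0, 64.0])
ulwr = np.array([3.1, 3.9, 4.4, 4.6, 4.7])
a, b, c = fit_model(frames, ulwr)
final_estimate = a   # asymptotic value of Eq. 1 as x grows (c < 0)
```

Since c comes out negative, the exponential term vanishes for large x and the fitted a directly plays the role of the asymptotic (final) roughness estimate of step S210.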

    [0096] An average SNR for a respective frame set S.sub.i may be estimated in various manners. An average SNR may be estimated for each frame F.sub.i,j of the frame set S.sub.i, wherein the average SNRs of the frames F.sub.i,j in turn may be averaged to obtain the average SNR for the frame set S.sub.i. An average SNR may also be estimated by generating a composite image I.sub.i by frame integration over the set of frames F.sub.i,j (see Eq. 3 below) and subsequently estimating an average SNR of the image I.sub.i. An average SNR of an image I.sub.i (and correspondingly for a frame F.sub.i,j) may be estimated as a ratio of a difference between a (grayscale) maximum (P.sub.max) and minimum (P.sub.min) intensity of an average pixel row of the rows of pixels of the image I.sub.i, and a grayscale noise level. A grayscale noise level may be estimated from a non-feature-edge portion of the image I.sub.i (e.g. the 1σ grayscale noise). Eq. 2 provides an example equation for computing an average SNR:

    [00003] Average SNR=(P.sub.max−P.sub.min)/Grayscale noise (1σ)   (Eq. 2)

    [0097] In case the method involves processing supersets {S.sub.i}.sub.m of frame sets S.sub.i with a same number of frames (discussed below), the average pixel row may be computed over the rows of pixels of each image I.sub.i associated with the superset {S.sub.i}.sub.m, and the grayscale noise level may be computed as the average of the 1σ grayscale noise of each image I.sub.i associated with the superset {S.sub.i}.sub.m, so as to yield an average SNR for the superset {S.sub.i}.sub.m.
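In code, the SNR estimate of Eq. 2 reduces to a few array operations. The sketch below is illustrative only; in particular, the choice of feature-free region used for the grayscale noise estimate is an assumption left to the user.

```python
import numpy as np

def average_snr(image, noise_region):
    # Eq. 2: ratio of the max-min intensity span of the average pixel row
    # to the 1-sigma grayscale noise of a feature-free part of the image.
    avg_row = image.mean(axis=0)              # average over scan lines (rows)
    signal = avg_row.max() - avg_row.min()    # P_max - P_min
    noise = noise_region.std()                # 1-sigma grayscale noise
    return signal / noise

# Toy example: 20 identical scan lines alternating between two gray levels,
# plus a synthetic noise patch with unit standard deviation.
pattern = np.tile([60.0, 160.0], 50)
image = np.tile(pattern, (20, 1))
noise_patch = np.array([-1.0, 1.0] * 8)
snr = average_snr(image, noise_patch)        # -> 100.0
```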

    [0098] A model equation of the form exemplified by Eq. 1 enables an accurate estimation of the roughness parameter based on only three data points. However, it is contemplated that other types of model equations may be used, such as other exponential functions or a power law. An example of an alternative exponential form is Y=a−b·c^x, with the same parameter definitions as provided in connection with Eq. 1.

    [0099] After performing the model fit, the computing device 120 in step S210 computes a final estimate of the roughness parameter as an asymptotic value of the fitted model equation. The asymptotic value may be obtained by extrapolating the fitted model equation Y(x). The asymptotic value may for example be computed as the value of Y(x) at the x at which the slope of Y(x) (i.e. the first-order derivative dY/dx of the model equation) becomes smaller than a predetermined threshold. The asymptotic value may also be obtained by plotting Y(x) and identifying a value of x at which Y(x) approaches a plateau value.
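Assuming fitted parameters a, b, c of Eq. 1 with c < 0, the threshold criterion on the slope has a closed form, since dY/dx = −a·b·c·e^(c·x) decays in magnitude with x. The sketch below (the threshold value is chosen arbitrarily for illustration) is one possible way to evaluate it.

```python
import numpy as np

def asymptotic_value(a, b, c, slope_tol=1e-3):
    # Slope of Eq. 1: dY/dx = -a*b*c*exp(c*x); with c < 0 its magnitude
    # falls below slope_tol at x* = ln(slope_tol / |a*b*c|) / c.
    x_star = np.log(slope_tol / abs(a * b * c)) / c
    return a * (1.0 - b * np.exp(c * x_star))
```

For a = 5, b = 0.8, c = −0.2 this evaluates to 4.995, already within 0.1% of the true asymptote a = 5.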

    [0100] FIG. 3 schematically depicts the result of fitting the model equation Y to a data set of 5 data points, i.e. 5 preliminary estimates of a roughness parameter (e.g. uLWR) computed for 5 frame sets S.sub.i as a function of average SNR for the respective frame sets S.sub.i. The dashed circle represents an example of the final estimate of the roughness parameter (e.g. uLWR).

    [0101] For improved robustness of the estimation of the feature data, the computing device 120 may, rather than estimating feature data from the individual frames F.sub.i,j of each frame set S.sub.i, estimate the feature data from a composite image I.sub.i generated by performing frame integration on the frame set S.sub.i (S205). That is, the SEM 110 or the computing device 120 may, for each frame set S.sub.i, generate a respective composite image I.sub.i as a pixel-wise average of the respective frames F.sub.i,j of each respective frame set S.sub.i, i.e.

    [00004] I.sub.i=(Σ.sub.j F.sub.i,j)/nbr.sub.i   (Eq. 3)

    [0102] where the summation is to be understood as a pixel-wise summation of corresponding pixels of the frames F.sub.i,j. The image I.sub.i and the frames F.sub.i,j of each frame set S.sub.i may thus have the same dimensions. The computing device 120 may for each image I.sub.i estimate one or more of the aforementioned types of feature data and compute a respective preliminary roughness estimate for each image I.sub.i. If the composite images are generated by the SEM 110, the computing device 120 may receive the composite images I.sub.i from the SEM 110 (e.g. as an alternative to receiving the individual frames of the frame sets S.sub.i).
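Eq. 3 is a plain pixel-wise mean; a minimal sketch, assuming the frames are stacked into an array of shape (nbr_i, rows, columns):

```python
import numpy as np

def composite_image(frames):
    # Eq. 3: pixel-wise average over the nbr_i frames of a frame set;
    # the composite has the same height/width as each individual frame.
    frames = np.asarray(frames, dtype=float)
    return frames.mean(axis=0)

# Two 1x2 toy frames average pixel-wise to [[1.0, 3.0]].
avg = composite_image([[[0.0, 2.0]], [[2.0, 4.0]]])
```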

    [0103] A noise-unbiased estimate of a roughness parameter (e.g. uLWR, uLER or uPPR) for an image I.sub.i (generated from frame set S.sub.i) may be determined through an unbiasing procedure in which a (one-dimensional) power spectral density (PSD) is used to determine the SEM noise floor in the image I.sub.i. The noise floor may subsequently be subtracted from the (noise-biased) PSD to obtain the unbiased PSD. The unbiased roughness estimate may be obtained by integrating the unbiased PSD. Alternatively, the (noise-biased) PSD may be integrated to obtain a noise-biased roughness estimate, wherein the unbiased roughness estimate may be obtained by subtracting the noise floor. The noise floor may be estimated from the high-frequency portion of the PSD. FIG. 4 illustrates an example log-log plot of the PSD as a function of the spatial frequency of feature data (e.g. edge data, linewidth data, or centerline data) for a feature, e.g. estimated from an image I.sub.i. The solid line represents the biased PSD. The dash-dotted line represents the noise floor of the SEM frames/image. The dashed line represents the noise-unbiased PSD. The integral of the unbiased PSD corresponds to an estimate of the variance σ.sup.2 of the feature data. uLWR, uLER or uPPR may then be computed as 3 times the square root of the integral value (3√σ.sup.2=3σ).
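The unbiasing procedure can be sketched as follows. The choice of taking the noise floor as the mean of the top quarter of the frequency axis, and the particular FFT-based PSD normalization, are assumptions made for illustration; the document leaves these details open.

```python
import numpy as np

def unbiased_3sigma(feature_data):
    # feature_data: 1-D array of edge/linewidth/centerline values along
    # the feature. The PSD is normalized so that sum(psd) == variance.
    x = np.asarray(feature_data, dtype=float)
    n = len(x)
    x = x - x.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2 / n**2
    psd[1:-1] *= 2.0                           # one-sided PSD (n assumed even)
    floor = psd[3 * len(psd) // 4:].mean()     # high-frequency noise floor
    unbiased = np.clip(psd - floor, 0.0, None) # subtract floor, keep >= 0
    return 3.0 * np.sqrt(unbiased.sum())       # 3-sigma roughness estimate
```

For a pure sinusoidal edge deviation of amplitude A over an integer number of periods, the estimate reduces to 3·A/√2, since the noise floor is then zero.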

    [0104] In case of multiple features, an improved significance level of the preliminary estimate may be provided by estimating feature data for each individual feature k in the image I.sub.i, computing a respective PSD (PSD.sub.k) for the feature data relating to each individual feature k, and thereafter computing a common/average PSD for the feature data estimated from the image I.sub.i as an average of the respective PSDs, e.g.

    [00005] PSD=(Σ.sub.k PSD.sub.k)/(number of features in image I.sub.i)   (Eq. 4)

    [0105] The preliminary noise-unbiased estimate of the roughness parameter for each respective set of frames may subsequently be computed from the average PSD and the noise floor thereof, by integrating the resulting unbiased average PSD.
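The averaging of Eq. 4 is again an element-wise mean, applied to the biased per-feature PSDs before any noise-floor subtraction. A minimal sketch, assuming every feature's PSD is sampled on the same frequency grid:

```python
import numpy as np

def averaged_psd(per_feature_psds):
    # Eq. 4: common PSD as the element-wise mean of the per-feature PSDs,
    # computed before noise-floor estimation and subtraction.
    return np.mean(np.asarray(per_feature_psds, dtype=float), axis=0)

# Two toy per-feature PSDs average bin-by-bin to [2.0, 4.0].
common = averaged_psd([[1.0, 3.0], [3.0, 5.0]])
```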

    [0106] While a PSD is a convenient tool for characterizing a feature roughness, it is contemplated that also other types of spatial frequency density representations may be used, such as the Fourier spectrum.

    [0107] Further techniques for unbiased roughness estimates include the approach based on the height-height correlation function (HHCF), as described by Constantoudis et al. in “Toward a complete description of linewidth roughness: a comparison of different methods for vertical and spatial LER and LWR analysis and CD variation” (Proceedings of SPIE Vol. 5375, pages 967-977).

    [0108] As discussed above, unbiased roughness estimates may be computed based on feature data (edge position, linewidth, or centerline position) estimated from the respective composite images I.sub.i. Unbiased roughness estimates may however also be computed based on feature data estimated from the individual frames F.sub.i,j of a frame set S.sub.i. An example of such a technique is described by Villarrubia et al in “Unbiased estimation of linewidth Roughness” (Proceedings of SPIE Vol. 5752, Metrology, Inspection, and Process Control for Microlithography XIX, 10 May 2005). Another example would be to compute a respective PSD for a feature k in each frame F.sub.i,j of a frame set S.sub.i, and then compute an average PSD for the feature k by averaging the respective PSDs. In line with the above discussion, this may be repeated for each feature in the frame set S.sub.i, wherein the average PSDs for the features k then may be averaged to compute the PSD for the frame set S.sub.i.

    [0109] As may be appreciated from the above, the significance level of the roughness estimates (preliminary and final) is dependent on the amount of feature data. One approach for increasing the amount of feature data is to, for each data point to be used in the model fit (e.g. step S208), generate a number of (e.g. two or more) frame sets S.sub.i with a same number of frames nbr.sub.i, each representing a different respective region of the pattern. Accordingly, the SEM may generate a number of supersets {S.sub.i}.sub.m of frame sets S.sub.i,m (wherein the additional index m is introduced to distinguish between frame sets of different supersets), wherein each frame set S.sub.i,m of a superset {S.sub.i}.sub.m has a same number of frames nbr.sub.i,m. The methods and data processing described above may accordingly be applied in a corresponding manner to each frame of each frame set S.sub.i,m of {S.sub.i}.sub.m, and/or each image I.sub.i,m generated from the frame set S.sub.i,m (and thus associated with {S.sub.i}.sub.m).

    [0110] For example, a number of supersets {S.sub.i}.sub.m of frame sets S.sub.i,m may be generated by the SEM 110 and received by the computing device 120 (S202-S204). An image may be generated from each frame set S.sub.i,m of each superset {S.sub.i}.sub.m (S205). Feature data may be estimated for each feature in each image (S206a) and a preliminary roughness estimate may be computed from the feature data estimated from each image associated with the superset {S.sub.i}.sub.m (S206b). The model equation may subsequently be fitted to the respective preliminary roughness estimates for the supersets {S.sub.i}.sub.m and the model parameter (e.g. the number of frames nbr.sub.i,m or the average SNR).

    [0111] If a PSD-based approach is used, an average PSD may be computed for each superset {S.sub.i}.sub.m. For example, a respective PSD may be computed for each feature of each image associated with the superset {S.sub.i}.sub.m and an average PSD for each image I.sub.i,m may be computed by averaging the respective PSDs computed for the features of the image I.sub.i,m. Once an average PSD has been computed for each image I.sub.i,m associated with the superset {S.sub.i}.sub.m, a final (biased) PSD may be computed as an average over the average PSDs for each image I.sub.i,m associated with the superset {S.sub.i}.sub.m. A single preliminary roughness estimate may then be computed from the final average PSD for the superset {S.sub.i}.sub.m and a noise floor thereof, using either of the approaches described above.

    [0112] FIG. 5 shows example SEM images of wafers with patterned thin resist films, more specifically, chemically amplified resists (CAR), coated on a spin-on-glass (SOG) and an organic underlayer (UL). The X-axis and the Y-axis indicate the pixel column and pixel row directions, respectively.

    [0113] The film thicknesses (FT) of the resist films were 30 nm, 25 nm, 20 nm, and 15 nm. The films were exposed in an ASML full-field NXE:3400 scanner to print vertical 1:3 lines and spaces at a pitch of 32 nm. Subsequently, the wafers received a post-exposure bake of 90° C. for 60 seconds and were developed with a 2.38% tetramethyl-ammonium hydroxide (TMAH) solution.

    [0114] The SEM images were obtained with a Hitachi CG-6300 CDSEM. Although the SEM images in the different rows differ, the trend of a reduced overall image contrast as the FT is reduced may be readily seen for both types of underlayers.

    [0115] For each wafer, 50 images were generated (84K, 2048×2048 pixels) at the best dose best focus condition (BD-BF) at a variable number of CDSEM frames of integration (4, 8, 16, 32, or 64). For a given wafer, this corresponds to 5 supersets {S.sub.i}.sub.m=1,2, . . . , 5, each comprising 50 frame sets S.sub.i,m (i=1, 2, . . . , 50) of nbr.sub.i,m frames F.sub.i,j,m each (nbr.sub.i,1=4, nbr.sub.i,2=8, nbr.sub.i,3=16, nbr.sub.i,4=32, nbr.sub.i,5=64), and generating a composite image I.sub.i,m for each frame set S.sub.i,m.

    [0116] For each wafer, and for each frame number (i.e. each superset {S.sub.i}.sub.m=1,2, . . . , 5), the unbiased LWR (3σ) was determined for the set of 50 images I.sub.i,m associated with the respective superset, and the average SNR was estimated using Eq. 2.

    [0117] FIG. 6 shows a plot of the resulting data points for each of the wafers, wherein a model equation of the form shown in Eq. 1 has been fitted to the data points associated with each wafer and superset. The fact that the estimated unbiased LWR changes with SNR is a clear metrology artifact that ideally is avoided. It may be observed that in the limit of small SNR the unbiased LWR is underestimated, while at larger SNR the unbiased LWR asymptotically approaches a stable realistic value. Plots of unbiased LER versus SNR and unbiased PPR versus SNR show a corresponding relationship. This applies also to plots of unbiased LER/LWR/PPR versus number of frames of integration.

    [0118] While some embodiments have been illustrated and described in detail in the appended drawings and the foregoing description, such illustration and description are to be considered illustrative and not restrictive. Other variations to the disclosed embodiments can be understood and effected in practicing the claims, from a study of the drawings, the disclosure, and the appended claims. The mere fact that certain measures or features are recited in mutually different dependent claims does not indicate that a combination of these measures or features cannot be used. Any reference signs in the claims should not be construed as limiting the scope.