METHOD OF PERFORMING TOMOGRAPHIC IMAGING IN A CHARGED-PARTICLE MICROSCOPE
20190348254 · 2019-11-14
Assignee
Inventors
CPC classification
H01J2237/24495
ELECTRICITY
H01J37/20
ELECTRICITY
International classification
Abstract
A method of performing sub-surface imaging of a specimen in a charged-particle microscope of a scanning transmission type, comprising the following steps: Providing a beam of charged particles that is directed from a source along a particle-optical axis through an illuminator so as to irradiate the specimen; Providing a detector for detecting a flux of charged particles traversing the specimen; Causing said beam to follow a scan path across a surface of said specimen, and recording an output of said detector as a function of scan position, thereby acquiring a scanned charged-particle image I of the specimen; Repeating this procedure for different members n of an integer sequence, by choosing a value P.sub.n of a variable beam parameter P and acquiring an associated scanned image I.sub.n, thereby compiling a measurement set M={(I.sub.n, P.sub.n)}; Using computer processing apparatus to automatically deconvolve the measurement set M and spatially resolve it into a result set representing depth-resolved imagery of the specimen,
wherein: Said variable beam parameter P is focus position along said particle-optical axis; Said scanned image I is an integrated vector field (iVF) image, obtained by: Embodying said detector to comprise a plurality of detection segments; Combining signals from different detection segments so as to produce a vector output from the detector at each scan position, and compiling these data to yield a vector field; Mathematically processing said vector field by subjecting it to a two-dimensional integration operation.
Claims
1. A method for imaging a specimen using a charged-particle microscope, comprising: scanning the specimen with a charged particle beam at a plurality of focal depths, wherein each of the plurality of focal depths includes a plurality of scan positions; generating a vector field at each of the plurality of focal depths based on signals received from different segments of a detector at each scan position of the focal depth; two-dimensionally integrating the vector field at each of the plurality of focal depths to form an integrated vector field (iVF) image at each of the plurality of focal depths; and deconvolving the iVF images at the plurality of focal depths into a set of depth-resolved images of the specimen.
2. The method of claim 1, wherein the vector field represents an electrostatic potential field gradient in the specimen.
3. The method of claim 1, wherein the iVF image represents an electrostatic potential field in the specimen.
4. The method of claim 1, wherein the specimen is non-magnetic.
5. The method of claim 1, wherein each of the plurality of iVF images is a linear sum of the set of depth-resolved images.
6. The method of claim 1, wherein the deconvolution is a linear transformation.
7. The method of claim 1, wherein the deconvolution is performed using a Source Separation algorithm.
8. The method of claim 7, wherein said Source Separation algorithm is selected from the group consisting of Independent Component Analysis, Principal Component Analysis, Non-Negative Matrix Factorization, and combinations and hybrids thereof.
9. The method of claim 1, wherein the detector detects a flux of charged particles transmitted through the specimen.
10. The method of claim 1, wherein the charged-particle microscope is a transmission electron microscope.
11. A charged-particle microscope for imaging a specimen, comprising: a source for directing a charged particle beam towards the specimen; deflectors for scanning the specimen at a focal depth; a segmented detector for receiving charged particles transmitted through the specimen; and a controller configured to: scan the specimen with the charged particle beam at a plurality of focal depths, wherein each of the plurality of focal depths includes a plurality of scan positions; generate a vector field at each of the plurality of focal depths based on signals received from different segments of the detector at each scan position of the focal depth; two-dimensionally integrate the vector field at each of the plurality of focal depths to form an integrated vector field (iVF) image at each of the plurality of focal depths; and deconvolve the iVF images at the plurality of focal depths into a set of depth-resolved images of the specimen.
12. The charged-particle microscope of claim 11, wherein the vector field represents an electrostatic potential field gradient in the specimen.
13. The charged-particle microscope of claim 11, wherein the iVF image represents an electrostatic potential field in the specimen.
14. The charged-particle microscope of claim 11, wherein the specimen is non-magnetic.
15. The charged-particle microscope of claim 11, wherein each of the plurality of iVF images is a linear sum of the set of depth-resolved images.
16. The charged-particle microscope of claim 11, wherein the deconvolution is a linear transformation.
17. The charged-particle microscope of claim 11, wherein the deconvolution is performed using a Source Separation algorithm.
18. The charged-particle microscope of claim 17, wherein said Source Separation algorithm is selected from the group consisting of Independent Component Analysis, Principal Component Analysis, Non-Negative Matrix Factorization, and combinations and hybrids thereof.
Description
[0083] The invention will now be elucidated in more detail on the basis of exemplary embodiments and the accompanying schematic drawings, in which:
EMBODIMENT 1
[0088] The specimen S is held on a specimen holder H. As here illustrated, part of this holder H (inside enclosure E) is mounted in a cradle A′ that can be positioned/moved in multiple degrees of freedom by a positioning device (stage) A; for example, the cradle A′ may (inter alia) be displaceable in the X, Y and Z directions (see the depicted Cartesian coordinate system), and may be rotated about a longitudinal axis parallel to X. Such movement allows different parts of the specimen S to be irradiated/imaged/inspected by the electron beam traveling along axis B′ (and/or allows scanning motion to be performed as an alternative to beam scanning [using deflector(s) D], and/or allows selected parts of the specimen S to be machined by a (non-depicted) focused ion beam, for example).
[0089] The (focused) electron beam B traveling along axis B′ will interact with the specimen S in such a manner as to cause various types of stimulated radiation to emanate from the specimen S, including (for example) secondary electrons, backscattered electrons, X-rays and optical radiation (cathodoluminescence). If desired, one or more of these radiation types can be detected with the aid of sensor 22, which might be a combined scintillator/photomultiplier or EDX (Energy-Dispersive X-Ray Spectroscopy) module, for instance; in such a case, an image could be constructed using basically the same principle as in a SEM. However, of principal importance in a (S)TEM, one can instead/supplementally study electrons that traverse (pass through) the specimen S, emerge (emanate) from it and continue to propagate (substantially, though generally with some deflection/scattering) along axis B′. Such a transmitted electron flux enters an imaging system (combined objective/projection lens) 24, which will generally comprise a variety of electrostatic/magnetic lenses, deflectors, correctors (such as stigmators), etc. In normal (non-scanning) TEM mode, this imaging system 24 can focus the transmitted electron flux onto a fluorescent screen 26, which, if desired, can be retracted/withdrawn (as schematically indicated by arrows 26′) so as to get it out of the way of axis B′. An image (or diffractogram) of (part of) the specimen S will be formed by imaging system 24 on screen 26, and this may be viewed through viewing port 28 located in a suitable part of a wall of enclosure E. The retraction mechanism for screen 26 may, for example, be mechanical and/or electrical in nature, and is not depicted here.
[0090] As an alternative to viewing an image on screen 26, one can instead make use of the fact that the depth of focus of the electron flux emerging from imaging system 24 is generally quite large (e.g. of the order of 1 meter). Consequently, various types of sensing device/analysis apparatus can be used downstream of screen 26, such as: [0091] TEM camera 30. At camera 30, the electron flux can form a static image (or diffractogram) that can be processed by controller 10 and displayed on a display device (not depicted), such as a flat panel display, for example. When not required, camera 30 can be retracted/withdrawn (as schematically indicated by arrows 30′) so as to get it out of the way of axis B′. [0092] STEM detector 32. An output from detector 32 can be recorded as a function of (X,Y) scanning position of the beam B on the specimen S, and an image can be constructed that is a map of output from detector 32 as a function of X,Y. Typically, detector 32 will have a much higher acquisition rate (e.g. 10⁶ points per second) than camera 30 (e.g. 10² images per second). In conventional tools, detector 32 can comprise a single pixel with a diameter of e.g. 20 mm, as opposed to the matrix of pixels characteristically present in camera 30; however, in the context of the present invention, detector 32 will have a different structure (see below), so as to allow iVF imaging to be performed. Once again, when not required, detector 32 can be retracted/withdrawn (as schematically indicated by arrows 32′) so as to get it out of the way of axis B′ (although such retraction would not be a necessity in the case of a donut-shaped annular dark field detector 32, for example; in such a detector, a central hole would allow beam passage when the detector was not in use). [0093] As an alternative to imaging using camera 30 or detector 32, one can also invoke spectroscopic apparatus 34, which could be an EELS module, for example.
It should be noted that the order/location of items 30, 32 and 34 is not strict, and many possible variations are conceivable. For example, spectroscopic apparatus 34 can also be integrated into the imaging system 24.
[0094] Note that the controller/computer processor 10 is connected to various illustrated components via control lines (buses) 10′. This controller 10 can provide a variety of functions, such as synchronizing actions, providing setpoints, processing signals, performing calculations, and displaying messages/information on a display device (not depicted). Needless to say, the (schematically depicted) controller 10 may be (partially) inside or outside the enclosure E, and may have a unitary or composite structure, as desired. The skilled artisan will understand that the interior of the enclosure E does not have to be kept at a strict vacuum; for example, in a so-called Environmental (S)TEM, a background atmosphere of a given gas is deliberately introduced/maintained within the enclosure E. The skilled artisan will also understand that, in practice, it may be advantageous to confine the volume of enclosure E so that, where possible, it essentially hugs the axis B′, taking the form of a small tube (e.g. of the order of 1 cm in diameter) through which the employed electron beam passes, but widening out to accommodate structures such as the source 4, specimen holder H, screen 26, camera 30, detector 32, spectroscopic apparatus 34, etc.
[0095] In the context of the current invention, the following specific points deserve further elucidation: [0096] The detector 32 is embodied as a segmented detector, which, for example, may be a quadrant sensor, pixelated CMOS/CCD/SSPM detector, or PSD, for instance. Specific embodiments of such detectors are shown in plan view in
EMBODIMENT 2
[0110] A further explanation will now be given regarding some of the mathematical techniques that can be used to obtain an iVF image as employed in the present invention.
[0111] Integrating Gradient Fields. As set forth above, a measured vector field Ẽ(x, y) = (Ẽ_x(x, y), Ẽ_y(x, y))^T can (for example) be derived at each coordinate point (x, y) from detector segment differences using, for a quadrant detector, expressions of the form:

Ẽ_x = (S_1 + S_4) − (S_2 + S_3)  (1)

Ẽ_y = (S_1 + S_2) − (S_3 + S_4)  (2)

where, for simplicity, spatial indexing (x, y) in the scalar fields Ẽ_x, Ẽ_y and the segment signals S_i, i = 1, …, 4, has been omitted, and where superscript T denotes the transpose of a matrix; the exact pairing of the segment signals depends on the chosen segment numbering.
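By way of illustration only, the segment-difference construction of the vector field can be sketched as follows (a minimal NumPy sketch; the quadrant numbering, the sign convention and the random stand-in segment signals are assumptions for illustration, not part of the disclosure):

```python
import numpy as np

# Stand-in quadrant-detector data: S[i] holds the signal of segment i+1
# recorded at every (x, y) scan position (shape: ny x nx).
rng = np.random.default_rng(0)
ny, nx = 64, 64
S = rng.random((4, ny, nx))

# Difference signals approximating the measured field components
# (cf. the expressions above); the pairing assumes segments numbered
# counter-clockwise starting from the upper-right quadrant.
Ex = (S[0] + S[3]) - (S[1] + S[2])
Ey = (S[0] + S[1]) - (S[2] + S[3])

# The vector field E~(x, y) = (Ex, Ey)^T: one 2-vector per scan position
E = np.stack([Ex, Ey])   # shape (2, ny, nx)
```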
It is known from the theory of (S)TEM contrast formation that Ẽ is a measurement of the actual electric field E in an area of interest of the imaged specimen. This measurement is inevitably corrupted by noise and distortions caused by imperfections in optics, detectors, electronics, etc. From basic electromagnetism, it is known that the electrostatic potential function φ(x, y) [also referred to below as the potential map] is related to the electric field by:

E = −∇φ  (3)

The goal here is to obtain the potential map at each scanned location of the specimen. But the measured electric field in its noisy form Ẽ will most likely not be integrable, i.e. cannot be derived from a smooth potential function by the gradient operator. The search for an estimate φ̂ of the potential map given the noisy measurements Ẽ can be formulated as a fitting problem, resulting in the functional minimization of objective function J defined as:

J(φ) = ∫∫ ‖∇φ + Ẽ‖² dx dy = ∫∫ (φ_x + Ẽ_x)² + (φ_y + Ẽ_y)² dx dy  (4)

where φ_x = ∂φ/∂x and φ_y = ∂φ/∂y. One is essentially looking for the closest fit to the measurements, in the least squares sense, of gradient fields derived from smooth potential functions φ.
[0112] To be at the sought minimum of J one must satisfy the Euler-Lagrange equation:

∂F/∂φ − (d/dx) ∂F/∂φ_x − (d/dy) ∂F/∂φ_y = 0  (5)

where F denotes the integrand of (4), which can be expanded to:

(d/dx)(φ_x + Ẽ_x) + (d/dy)(φ_y + Ẽ_y) = 0  (6)

finally resulting in:

∇²φ = −∇·Ẽ  (7)

which is the Poisson equation that one needs to solve to obtain φ̂.
Poisson Solvers
[0113] Using finite differences for the derivatives in (7) one obtains:

(φ_{i+1,j} − 2φ_{i,j} + φ_{i−1,j})/Δ² + (φ_{i,j+1} − 2φ_{i,j} + φ_{i,j−1})/Δ² = −(Ẽx_{i+1,j} − Ẽx_{i−1,j})/(2Δ) − (Ẽy_{i,j+1} − Ẽy_{i,j−1})/(2Δ)  (8)

where Δ is the so-called grid step size (assumed here to be equal in the x and y directions). The right side quantity in (8) is known from measurements and will be lumped together in a term ρ_{i,j} to simplify notation:

ρ_{i,j} = −(Ẽx_{i+1,j} − Ẽx_{i−1,j})/(2Δ) − (Ẽy_{i,j+1} − Ẽy_{i,j−1})/(2Δ)  (9)

which, after rearranging, results in:

φ_{i−1,j} + φ_{i,j−1} − 4φ_{i,j} + φ_{i,j+1} + φ_{i+1,j} = Δ²ρ_{i,j}  (10)

for i = 2, …, N−1 and j = 2, …, M−1, with (N, M) the dimensions of the image to be reconstructed.
The system in (10) leads to the matrix formulation:
Lφ = ρ  (11)

where φ and ρ represent the vector form of the potential map and measurements, respectively (the size of these vectors is N·M, which is the size of the image). The so-called Laplacian matrix L is of dimensions (N·M) × (N·M) but is highly sparse and has a special form called tridiagonal with fringes for the discretization scheme used above. So-called Dirichlet and Neumann boundary conditions are commonly used to fix the values of φ̂ at the edges of the potential map.
[0114] The linear system of (11) tends to be very large for typical (S)TEM images, and will generally be solved using numerical methods, such as the bi-conjugate gradient method. Similar approaches have previously been used in topography reconstruction problems, as discussed, for example, in the journal article by Ruggero Pintus, Simona Podda and Massimo Vanzi, 14th European Microscopy Congress, Aachen, Germany, pp. 597-598, Springer (2008). One should note that other forms of discretization of the derivatives can be used in the previously described approach, and that the overall technique is conventionally known as the Poisson solver method. A specific example of such a method is the so-called multi-grid Poisson solver, which is optimized to numerically solve the Poisson equation starting from a coarse mesh/grid and proceeding to a finer mesh/grid, thus increasing integration speed.
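The Poisson-solver integration described above can be sketched numerically as follows (an illustrative NumPy/SciPy sketch only; unit grid step, Dirichlet boundaries with φ = 0 at the edges, and the stabilized bi-conjugate gradient variant are assumptions):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

def integrate_poisson(Ex, Ey):
    """Recover the potential map phi from measured field components
    Ex, Ey (2-D arrays) by solving laplacian(phi) = -div(E~)."""
    ny, nx = Ex.shape
    # Central-difference divergence of the measured field (right side of (8))
    div = np.zeros_like(Ex, dtype=float)
    div[:, 1:-1] = (Ex[:, 2:] - Ex[:, :-2]) / 2.0
    div[1:-1, :] += (Ey[2:, :] - Ey[:-2, :]) / 2.0
    rho = -div
    # Sparse 5-point Laplacian ("tridiagonal with fringes")
    Dxx = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], (nx, nx))
    Dyy = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], (ny, ny))
    L = sp.kron(sp.eye(ny), Dxx) + sp.kron(Dyy, sp.eye(nx))
    # Bi-conjugate gradient (stabilized) solve of L phi = rho
    phi, info = bicgstab(L.tocsr(), rho.ravel(), atol=1e-12)
    return phi.reshape(ny, nx)
```

In practice a multi-grid solver (as mentioned above) would be preferred for large images; the direct sparse solve here is only meant to make the discretization concrete.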
Basis Function Reconstruction
[0115] Another approach to solving (7) is to use the so-called Frankot-Chellappa algorithm, which was previously employed for depth reconstruction from photometric stereo images. Adapting this method to the current problem, one can reconstruct the potential map by projecting the derivatives onto the space of integrable Fourier basis functions. In practice, this is done by applying the Fourier Transform FT(·) to both sides of (7) to obtain:

(ω_x² + ω_y²) FT(φ) = √(−1) (ω_x FT(Ẽ_x) + ω_y FT(Ẽ_y))  (12)

from which φ̂ can be obtained by Inverse Fourier Transform (IFT):

φ̂ = IFT[ √(−1) (ω_x FT(Ẽ_x) + ω_y FT(Ẽ_y)) / (ω_x² + ω_y²) ]  (13)

[0116] The forward and inverse transforms can be implemented using the so-called Discrete Fourier Transform (DFT), in which case the assumed boundary conditions are periodic. Alternatively, one can use the so-called Discrete Sine Transform (DST), which corresponds to the use of the Dirichlet boundary condition (φ = 0 at the boundary). One can also use the so-called Discrete Cosine Transform (DCT), corresponding to the use of the Neumann boundary conditions (∇φ·n = 0 at the boundary, n being the normal vector at the given boundary location).
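The DFT-based variant can be sketched as follows (an illustrative NumPy sketch; periodic boundaries are implied by the DFT, and the handling of the zero-frequency term, where the mean of the potential map is unrecoverable, is an assumption):

```python
import numpy as np

def integrate_fourier(Ex, Ey):
    """Basis-function integration of Eqs. (12)-(13):
    (wx^2 + wy^2) FT(phi) = i (wx FT(Ex) + wy FT(Ey))."""
    ny, nx = Ex.shape
    wx = 2.0 * np.pi * np.fft.fftfreq(nx)[None, :]
    wy = 2.0 * np.pi * np.fft.fftfreq(ny)[:, None]
    denom = wx**2 + wy**2
    denom[0, 0] = 1.0                       # avoid division by zero at DC
    num = 1j * (wx * np.fft.fft2(Ex) + wy * np.fft.fft2(Ey))
    Fphi = num / denom
    Fphi[0, 0] = 0.0                        # mean of phi is unrecoverable
    return np.real(np.fft.ifft2(Fphi))
```

A DST or DCT in place of the DFT would implement the Dirichlet or Neumann boundary conditions mentioned above.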
Generalizations and Improved Solutions
[0117] While working generally well, the Poisson solver and Basis Function techniques can be enhanced further by methods that take into account sharp discontinuities in the data (outliers). For that purpose, the objective function J can be modified to incorporate a different residual error R (in (4), the residual error was R(v) = v²). One can for example use exponents of less than two, including so-called Lp norm-based objective functions. In an iteratively re-weighted scheme, this leads to:

J(φ) = ∫∫ w_x(φ_x^(k−1)) (φ_x + Ẽ_x)² + w_y(φ_y^(k−1)) (φ_y + Ẽ_y)² dx dy  (15)

where the weight functions depend on the residuals:

w_x(φ_x^(k−1)) = R(φ_x^(k−1), Ẽ_x) and w_y(φ_y^(k−1)) = R(φ_y^(k−1), Ẽ_y)  (15a)

at iteration k−1.
[0119] It can be shown that, for the problem of depth reconstruction from photometric stereo images, the use of such anisotropic weights, which can be either binary or continuous, leads to improved results in the depth map recovery process.
[0120] In another approach, one can also apply a diffusion tensor D to the vector fields ∇φ and Ẽ, with the aim of smoothing the data while preserving discontinuities during the process of solving for φ̂, resulting in the modification of (4) into:

J(φ) = ∫∫ ‖D(∇φ) + D(Ẽ)‖² dx dy  (16)
[0121] Finally, regularization techniques can be used to restrict the solution space. This is generally done by adding penalty functions in the formulation of the objective criterion J, such as follows:

J(φ) = ∫∫ [‖∇φ + Ẽ‖² + R(φ)] dx dy  (17)

[0122] The regularization function R(φ) can be used to impose a variety of constraints on φ for the purpose of stabilizing the convergence of the iterative solution. It can also be used to incorporate into the optimization process prior knowledge about the sought potential field or other specimen/imaging conditions.
Position Sensitive Detector (PSD)
[0123] Using a Position Sensitive Detector (PSD) and measuring a thin, non-magnetic specimen, one obtains (by definition) the vector field image components as components of the center of mass (COM) of the electron intensity distribution I_D(k, r_p) at the detector plane:

I_x^COM(r_p) = ∫ k_x I_D(k, r_p) d²k

I_y^COM(r_p) = ∫ k_y I_D(k, r_p) d²k  (18)

where r_p represents the position of the probe (focused electron beam) impinging upon the specimen, and k = (k_x, k_y) are coordinates in the detector plane. The full vector field image can then be formed as:

I^COM(r_p) = I_x^COM(r_p)·x_0 + I_y^COM(r_p)·y_0  (19)

where x_0 and y_0 are unit vectors in two perpendicular directions.
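For a pixelated detector, the COM components of (18)-(19) at a single probe position can be sketched as follows (an illustrative NumPy sketch; the pixel-unit detector coordinates centered on the optical axis and the normalization by total intensity are assumptions):

```python
import numpy as np

def center_of_mass(I_D):
    """COM of one detector frame I_D (shape ny x nx), i.e. one sample
    of the vector field image at a single probe position r_p."""
    ny, nx = I_D.shape
    kx = np.arange(nx) - nx // 2       # detector-plane coordinates (pixels)
    ky = np.arange(ny) - ny // 2
    total = I_D.sum()
    Ix = (kx[None, :] * I_D).sum() / total   # I_x^COM
    Iy = (ky[:, None] * I_D).sum() / total   # I_y^COM
    return np.array([Ix, Iy])          # (I_x^COM, I_y^COM) at this r_p
```

Repeating this over all probe positions r_p yields the full vector field image of (19).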
[0124] The electron intensity distribution at the detector is given by:

I_D(k, r_p) = |FT{ψ_in(r − r_p) e^(iφ(r))}(k)|²  (20)

where ψ_in(r − r_p) is the impinging electron wave (i.e. the probe) illuminating the specimen at position r_p, and e^(iφ(r)) is the transmission function of the specimen. The phase φ(r) is proportional to the specimen's inner electrostatic potential field. Imaging φ(r) is the ultimate goal of any electron microscopy imaging technique. Expression (19) can be re-written (up to constant prefactors) as:

I^COM(r_p) = [|ψ_in|² ⋆ E](r_p)  (21)

where E(r) = −∇φ(r) is the inner electric field of the specimen (the negative gradient of its electrostatic potential field) and the operator ⋆ denotes cross-correlation. It is evident that the obtained vector field image I^COM(r_p) directly represents the inner electric field E(r) of the specimen. Its components are set forth in (18) above. Next, an integration step in accordance with the current invention is performed, as follows:

I^iCOM(r_p) = ∫_l I^COM(r) · dr  (22)

using any arbitrary path l. This arbitrary path is allowed because, in the case of non-magnetic specimens, the only field is the electric field, which is a conservative vector field. Numerically this can be performed in many ways (see above). Analytically it can be worked out by introducing (21) into (22), yielding (up to constant prefactors):

I^iCOM(r_p) = [|ψ_in|² ⋆ φ](r_p)  (23)

[0125] It is clear that, with this proposed integration step, one obtains a scalar field image that directly represents φ(r), as already alluded to above.
EMBODIMENT 3
[0126] The linearity assumptions in image formation elucidated above can be represented in the model:

Q = A I  (24)

in which:

[0127] I = (I_1, I_2, …, I_N)^T is the set of iVF images acquired by varying focus value;

[0128] Q = (Q_1, Q_2, …, Q_N)^T is a set of source images that are statistically de-correlated and that represent information coming from different depth layers (levels);

[0129] A = (a_1, a_2, …, a_N)^T is a square matrix transforming the original images into so-called principal components.
[0130] PCA decomposition obtains the factorization in equation (24) by finding a set of orthogonal components, starting with a search for the one with the highest variance.
[0131] The first step consists in finding the component with the highest variance, e.g. by minimizing the least-squares reconstruction criterion:

J(a_1) = E{‖I − a_1 a_1^T I‖²}, subject to ‖a_1‖ = 1  (25)
[0132] The next step is to subtract the found component from the original images, and to find the next layer with highest variance.
[0133] At iteration 1 < k ≤ N, we find the kth row a_k of the matrix A by solving the analogous criterion on the residual images I^(k−1), i.e. the original images with the previously found components subtracted:

J(a_k) = E{‖I^(k−1) − a_k a_k^T I^(k−1)‖²}, subject to ‖a_k‖ = 1  (26)
[0134] It can be shown (see, for example, literature references [1] and [3] referred to above) that successive layer separation can be achieved by using so-called Eigenvector Decomposition (EVD) of the covariance matrix Σ_I of the acquired images:

Σ_I = E{I I^T} = E D E^T  (27)

in which:

[0135] E is the orthogonal matrix of eigenvectors of Σ_I;

[0136] D = diag(d_1, …, d_N) is the diagonal matrix of Eigenvalues.

The principal components can then be obtained as:

Q = E^T I  (28)

The Eigenvalues are directly related to the variance of the different components:

d_i = (σ(Q_i))² = var(Q_i)  (29)
[0137] In cases in which noise plays a significant role, the components with lower weights (Eigenvalues) may be dominated by noise. In such a situation, the inventive method can be limited to the K (K < N) most significant components. The choice to reduce the dimensionality of the image data can be based on the cumulative energy and its ratio to the total energy:

r_K = (Σ_{i=1}^{K} d_i) / (Σ_{i=1}^{N} d_i)  (30)

One can choose a limit for the number of employed layers K based on a suitable threshold value t. A common approach in PCA dimensionality reduction is to select the lowest K for which one obtains r_K ≥ t. A typical value for t is 0.9 (selecting components that represent 90% of the total energy).
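The EVD-based decomposition of (27)-(29) and the energy-threshold selection of K can be sketched as follows (an illustrative NumPy sketch; the flattened image layout and the sample-covariance estimate are assumptions):

```python
import numpy as np

def pca_layers(I, t=0.9):
    """I: array of shape (N, S) - N focal-series iVF images, each
    flattened to S pixels. Returns the K strongest principal
    components, their eigenvalues, and K."""
    Ic = I - I.mean(axis=1, keepdims=True)     # center each image
    cov = Ic @ Ic.T / Ic.shape[1]              # N x N covariance, Eq. (27)
    d, E = np.linalg.eigh(cov)                 # eigh returns ascending order
    order = np.argsort(d)[::-1]                # sort descending by variance
    d, E = d[order], E[:, order]
    r = np.cumsum(d) / np.sum(d)               # cumulative energy ratio
    K = int(np.searchsorted(r, t) + 1)         # smallest K with r_K >= t
    Q = E.T @ Ic                               # principal components, Eq. (28)
    return Q[:K], d[:K], K
```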
[0138] Noise effects can be minimized by recombining several depth layers with a suitable weighting scheme. Additionally, re-weighting and recombination of layers can be useful to obtain an image contrast similar to the original images. In the previously described PCA decomposition, the strongest component (in terms of variance) is commonly associated with the background (matrix) material. Adding this component to depth layers enhances the visual appearance and information content of the obtained image. One can achieve the effect of boosting deeper-lying layers, reducing noise, and rendering proper contrast by re-scaling the independent components by their variances and reconstructing the highest-energy image using the rescaled components, as follows:
Q = E D E^T I  (31)
The skilled artisan will appreciate that other choices for the linear weighting of depth layers can also be used.
EMBODIMENT 4
[0139] As an alternative to the PCA decomposition set forth above, one can also employ an SS approach based on ICA. In ICA, one assumes a linear model similar to (24). The main difference with PCA is that one minimizes a higher-order statistical independence criterion (higher than the second-order statistics in PCA), such as so-called Mutual Information (MI):

MI(Q_1, …, Q_N) = Σ_{i=1}^{N} H(Q_i) − H(Q)

with marginal entropies computed as:

H(Q_i) = −Σ_k P(q_k) log P(q_k)

and the joint entropy:

H(Q) = −Σ P(Q) log P(Q)

in which: [0140] P(Q) is the probability distribution of the imaging quantity Q; [0141] q_k is a possible value for said imaging quantity; and [0142] S is the total number of scanned sites on the specimen (e.g. in the case of rectangular images, this is the product of height and width).
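The Mutual Information criterion above can be estimated from image histograms over the S scanned sites; the following is an illustrative sketch (the restriction to two layers and the histogram bin count are assumptions):

```python
import numpy as np

def mutual_information(q1, q2, bins=32):
    """Plug-in MI estimate for two flattened layers q1, q2:
    MI = H(Q1) + H(Q2) - H(Q1, Q2), entropies from a 2-D histogram."""
    joint, _, _ = np.histogram2d(q1, q2, bins=bins)
    P = joint / joint.sum()                    # joint probability P(Q1, Q2)
    P1, P2 = P.sum(axis=1), P.sum(axis=0)      # marginal distributions

    def H(p):
        p = p[p > 0]                           # 0 log 0 := 0
        return -np.sum(p * np.log(p))

    return H(P1) + H(P2) - H(P.ravel())
```

An ICA-based separation would seek the un-mixing that drives such a criterion toward zero across all layer pairs.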
[0143] Other criteria, such as the so-called Infomax and Negentropy, can also be optimized in ICA decomposition. Iterative methods, such as FastICA, can be employed to efficiently perform the associated depth layer separation task. Adding more constraints to the factorization task can lead to more accurate reconstruction. If one adds the condition that the sources (layers) render positive signals and that the mixing matrix is also positive, one moves closer to the real physical processes underlying image formation. A layer separation method based on such assumptions may use the so-called Non-Negative Matrix Decomposition (NNMD) technique with iterative algorithms.
[0144] For more information, see, for example, literature references [1] and [2] cited above.