Method for determining an image recording aberration

20200311886 · 2020-10-01

    Abstract

    A method for determining and correcting an image recording aberration of an image recording device includes recording a first image using an image recording device. The first image represents a first region of an object. The method also includes recording second images using the image recording device. The second images represent mutually different partial regions of the first region. Each partial region is smaller than the first region. The method further includes determining at least one value of an image recording aberration of the image recording device on the basis of the first image and the second images. Related devices are disclosed.

    Claims

    1. A method, comprising: using an image recording device to record a first image representing a first region of an object; using the image recording device to record second images representing mutually different partial regions of the first region of the object, each of the partial regions being smaller than the first region; and determining a value of an image recording aberration of the image recording device on the basis of the first image and the second images.

    2. The method of claim 1, wherein: determining the value of the image recording aberration of the image recording device comprises determining first intermediate values via image values of the first image and of the second images at corresponding image positions being computed with one another, corresponding image positions representing the same location of the object in the first image and the second images; and the value of the image recording aberration of the image recording device is determined on the basis of the first intermediate values.

    3. The method of claim 2, further comprising determining a first assignment, by which corresponding image positions in the first image and the second images are determinable via correlation of the first image with the second images, wherein the first intermediate values are determined by image values of the first image and of the second images at corresponding image positions determined via the first assignment being computed with one another.

    4. The method of claim 2, wherein computing the image values of the first image and of the second images comprises determining a deviation between image values of the first image and of the second images at corresponding image positions.

    5. The method of claim 1, further comprising generating a third image on the basis of the second images, wherein the third image represents the first region of the object, and the value of the image recording aberration is determined on the basis of the third image.

    6. The method of claim 5, wherein generating the third image comprises: combining the second images so that exactly one location of the object is assigned to each pixel of the third image; and/or correlating the second images.

    7. The method of claim 5, wherein: determining the value of the image recording aberration of the image recording device comprises determining second intermediate values by image values of the first image and of the third image at corresponding image positions being computed with one another; corresponding image positions represent a same location of the object in the first image and the third image; and the value of the image recording aberration of the image recording device is determined on the basis of the second intermediate values.

    8. The method of claim 7, further comprising determining a second assignment, by which corresponding image positions in the first image and the third image are determinable by correlation of the first image with the third image, wherein the second intermediate values are determined by image values of the first image and of the third image at corresponding image positions determined via the second assignment being computed with one another.

    9. The method of claim 7, wherein computing the image values of the first image and of the third image comprises determining a deviation between image values of the first image and of the third image at determined corresponding image positions.

    10. The method of claim 1, wherein the first image is recorded with a first magnification, the second images with second magnifications, and each second magnification is greater than the first magnification.

    11. The method of claim 10, wherein a ratio of the smallest second magnification to the first magnification is at least two.

    12. The method of claim 1, wherein the first image is recorded with a first field of view size, the second images are recorded with second field of view sizes, and each of the second field of view sizes is smaller than the first field of view size.

    13. The method of claim 12, wherein a ratio of the first field of view size to the largest second field of view size is at most two.

    14. The method of claim 1, wherein: the partial regions partly overlap one another; and/or the partial regions together cover the first region.

    15. The method of claim 1, wherein an optical axis of the image recording device passes through the partial regions during recording of the second images.

    16. The method of claim 1, further comprising correcting the first image on the basis of the value of the image recording aberration.

    17. The method of claim 1, further comprising: recording a fourth image using the image recording device; and correcting the fourth image on the basis of the value of the image recording aberration determined.

    18. The method of claim 1, further comprising determining an operating parameter of the image recording device on the basis of the value of the image recording aberration so that the image recording aberration is reduced in comparison with the situation during the recording of the first image.

    19. A system comprising: an image recording device; one or more processing devices; and one or more machine-readable hardware storage devices comprising instructions that are executable by the one or more processing devices to perform the method of claim 1.

    20. One or more machine-readable hardware storage devices comprising instructions that are executable by one or more processing devices to perform the method of claim 1.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0058] Embodiments of the disclosure are explained in greater detail below with reference to figures, in which:

    [0059] FIG. 1 shows a schematic illustration of a first region of an object, a first image of which is recorded using an image recording device;

    [0060] FIGS. 2A to 2D show a schematic illustration of partial regions of the object, second images of which are recorded using the image recording device;

    [0061] FIG. 3 shows a schematic illustration for elucidating a first assignment between the first image and the second images;

    [0062] FIG. 4 shows a schematic illustration of a third image generated by combination of the second images;

    [0063] FIG. 5 shows a schematic illustration for elucidating a second assignment between the first image and the third image;

    [0064] FIG. 6 shows a schematic illustration of an image recording device in the form of a light microscope;

    [0065] FIG. 7 shows a schematic illustration of a further image recording device in the form of a particle beam system;

    [0066] FIG. 8 shows a schematic illustration of a further image recording device in the form of a further particle beam system;

    [0067] FIG. 9 shows a schematic illustration for elucidating an image recording aberration in a first image;

    [0068] FIG. 10 shows a schematic illustration for elucidating an image recording aberration in a second image; and

    [0069] FIG. 11 shows a schematic illustration of an image recording aberration as determined from the first image shown in FIG. 9 and the second image shown in FIG. 10.

    DETAILED DESCRIPTION

    [0070] One embodiment of a method for determining an image recording aberration is described below. The method includes recording a first image using an image recording device, wherein the first image represents a first region of an object.

    [0071] FIG. 1 shows a schematic illustration of an object 1. The object 1 has a rectangular first region 3. The first image is an imaging of the first region 3. The point of intersection of two dashed lines represents an optical axis 5 of the image recording device used to record the first image. The optical axis 5 extends perpendicularly to the image plane in FIG. 1 and passes through the first region 3 during the recording of the first image.

    [0072] The method furthermore includes recording second images using the image recording device, wherein the second images represent mutually different partial regions of the first region 3, wherein each of the partial regions is smaller than the first region 3.

    [0073] FIGS. 2A to 2D show a schematic illustration of partial regions 7, 9, 11, 13 of the object 1, a second image of each of which is recorded using the image recording device.

    [0074] FIG. 2A shows a first partial region 7 of the object 1, a second image of which is recorded using the image recording device. During the recording of the second image representing the first partial region 7, the optical axis 5 passes through the first partial region 7. The first partial region 7 is delimited by the rectangle illustrated in a highlighted manner. The first partial region 7 contains a part of the first region 3 and is thus a partial region of the first region 3. The first partial region 7 is smaller than the first region 3. This is illustrated by the first partial region 7 having a smaller area than the first region 3.

    [0075] FIG. 2B shows a second partial region 9 of the object 1, a further second image of which is recorded using the image recording device. During the recording of the second image representing the second partial region 9, the optical axis 5 passes through the second partial region 9. The second partial region 9 is delimited by the rectangle illustrated in a highlighted manner. The second partial region 9 contains a part of the first region 3 and is thus a partial region of the first region 3. The second partial region 9 is smaller than the first region 3. This is illustrated by the second partial region 9 having a smaller area than the first region 3.

    [0076] FIG. 2C shows a third partial region 11 of the object 1, a further second image of which is recorded using the image recording device. During the recording of the second image representing the third partial region 11, the optical axis 5 passes through the third partial region 11. The third partial region 11 is delimited by the rectangle illustrated in a highlighted manner. The third partial region 11 contains a part of the first region 3 and is thus a partial region of the first region 3. The third partial region 11 is smaller than the first region 3. This is illustrated by the third partial region 11 having a smaller area than the first region 3.

    [0077] FIG. 2D shows a fourth partial region 13 of the object 1, a further second image of which is recorded using the image recording device. During the recording of the second image representing the fourth partial region 13, the optical axis 5 passes through the fourth partial region 13. The fourth partial region 13 is delimited by the rectangle illustrated in a highlighted manner. The fourth partial region 13 contains a part of the first region 3 and is thus a partial region of the first region 3. The fourth partial region 13 is smaller than the first region 3. This is illustrated by the fourth partial region 13 having a smaller area than the first region 3.

    [0078] The first to fourth partial regions 7, 9, 11, 13 are mutually different partial regions of the first region 3. The first to fourth partial regions 7, 9, 11, 13 have regions overlapping in pairs. Together the first to fourth partial regions 7, 9, 11, 13 cover the first region 3 (and regions beyond that) of the object 1.

    [0079] In accordance with this embodiment, the method furthermore includes determining a first assignment, by which corresponding image positions in the first image and the second images are determinable. FIG. 3 is a schematic illustration for elucidating the first assignment.

    [0080] FIG. 3 shows the first image 15 representing the first region 3. Furthermore, FIG. 3 shows that one of the second images 17 which represents the first partial region 7. The second image 17 which represents the first partial region 7 is used hereinafter in a manner representative of every other second image from among the second images 17. A frame 19 shown by dashed lines indicates the position of the first partial region 7 with respect to the first region 3 represented by the first image 15.

    [0081] Corresponding image positions in the first image 15 and the second image 17 are determinable using the first assignment 21, which is symbolized by two arrows. Corresponding image positions represent the same location of the object 1 in different images. An image position 23 contained in the first image 15 and an image position 24 contained in the second image 17, each of the image positions being highlighted by a cross, represent the same location 25 of the object 1 (cf. FIG. 1). An image position 27 contained in the first image 15 and an image position 28 contained in the second image 17, each of which image positions are highlighted by a circle, represent another same location 29 of the object 1 (cf. FIG. 1).

    [0082] The first assignment can be determined for example by applying a correlation between the first image 15 and the second image 17. In association with FIG. 3, an explanation has been given of the determination of the first assignment with reference to the first image 15 and the second image 17 which represents the first partial region 7. The first assignment between the first image 15 and the further second images 17 is furthermore determined in the same way.
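    By way of example, the correlation between the first image 15 and a second image 17 can be sketched as template matching by normalized cross-correlation. The function name `locate_patch`, the brute-force search and the array shapes below are illustrative assumptions, not part of the disclosed method; in practice the second image would first be rescaled to the magnification of the first image.

```python
import numpy as np

def locate_patch(first_image, patch):
    """Slide `patch` over `first_image` and return the top-left offset
    (y, x) with the highest normalized cross-correlation score.

    Brute-force sketch of the correlation used to determine the first
    assignment between the first image and one second image.
    """
    H, W = first_image.shape
    h, w = patch.shape
    p = patch - patch.mean()
    p_norm = np.linalg.norm(p)
    best, best_pos = -np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            win = first_image[y:y + h, x:x + w]
            win = win - win.mean()
            denom = np.linalg.norm(win) * p_norm
            if denom == 0:
                continue  # flat window: correlation undefined, skip
            score = float((win * p).sum() / denom)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

    Once the offset is known, every image position of the second image maps to a corresponding image position of the first image by a simple translation, which is the first assignment in this sketch.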

    [0083] As a result of determining the first assignment, corresponding image positions in the first image 15 and the second images are known. Image recording aberrations are more highly pronounced in the first image 15 than in the second images. By virtue of this difference, it is possible to determine a value of an image recording aberration of the image recording device on the basis of the first image 15 and the second images 17. One example for the determination of the image recording aberration is elucidated later with reference to FIGS. 9 to 11.

    [0084] For this purpose, the image values of the first image 15 and of the second images 17 can be computed with one another, in particular at corresponding image positions 23, 24 and 27, 28, respectively, determined using the first assignment. First intermediate values can be determined from this computation, which first intermediate values can in turn be used for determining the value of the image recording aberration. By way of example, the deviation between image values at corresponding image positions 23, 24 and 27, 28, respectively, is determined and the value of the image recording aberration is determined on the basis thereof.
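    By way of example, the computation of the first intermediate values as deviations at corresponding image positions can be sketched as follows, assuming the first assignment has already been reduced to a pixel offset and the second image has been rescaled to the first image's magnification (the function name and the offset representation are hypothetical):

```python
import numpy as np

def intermediate_values(first_image, second_image_ds, offset):
    """Per-pixel deviation between image values of the first image and
    of a second image at corresponding image positions.

    `second_image_ds` is the second image already downscaled to the
    first image's magnification; `offset = (y, x)` is the assignment
    determined by correlation.
    """
    y, x = offset
    h, w = second_image_ds.shape
    window = first_image[y:y + h, x:x + w]
    return window - second_image_ds  # first intermediate values
```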

    [0085] A further embodiment of a method for determining an image recording aberration is described with reference to FIGS. 4 and 5. The method includes recording the first image 15 and recording the second images 17 as explained in association with FIGS. 1 and 2A to 2D.

    [0086] The method furthermore includes generating a third image 31 using the second images 17, the third image being illustrated by way of example in FIG. 4. For comparison with the first image 15, the first region 3 represented by the first image 15 is illustrated by a dash-dotted rectangle in FIG. 4. For comparison with the second images 17, the partial regions 7, 9, 11, 13 represented by the second images 17 are illustrated by dotted rectangles.

    [0087] The third image 31 represents the first region 3 of the object 1. The third image 31 is generated for example by the second images 17 being combined as illustrated in FIG. 4. Corresponding image regions of the second images 17 are superimposed in each case such that each pixel of the third image 31 is assigned exactly one location of the first region 3. By way of example, corresponding image regions of the second images 17 are identified by correlation of the second images 17. Consequently, the second images 17 can be combined such that corresponding image regions of the second images 17 overlap one another.
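    By way of example, this combination step can be sketched as follows, assuming the offset of each second image within the first region is already known from the correlation; overlapping pixels are averaged here, and the function name and data layout are illustrative assumptions:

```python
import numpy as np

def combine_patches(patches, offsets, shape):
    """Combine the second images into a third image: place each patch
    at its (y, x) offset and average where patches overlap, so that
    each pixel of the result is assigned exactly one object location.
    """
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for patch, (y, x) in zip(patches, offsets):
        h, w = patch.shape
        acc[y:y + h, x:x + w] += patch
        cnt[y:y + h, x:x + w] += 1
    with np.errstate(invalid="ignore"):
        out = acc / cnt  # pixels covered by no patch become NaN
    return out
```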

    [0088] Image values of the third image 31 at pixels which correspond to corresponding image regions of the second images 17 can be determined in various ways. By way of example, the image values at the pixels are determined by averaging the image values at corresponding image regions of the second images 17. Alternatively, the image values of the third image 31 at the pixels can be taken over from one of the second images 17 at corresponding image positions.

    [0089] The third image 31 is therefore an image of the first region 3 of the object 1, but has smaller image recording aberrations compared with the first image 15 since the third image 31 is generated from the second images 17.

    [0090] In accordance with this embodiment, the value of the image recording aberration of the image recording device is determined on the basis of the third image 31. By way of example, the value of the image recording aberration is determined by a comparison of the first image 15 with the third image 31. This can be carried out for example by a second assignment being determined, by which corresponding image positions in the first image 15 and the third image 31 are determinable, and by image values of the first image 15 and of the third image 31 at corresponding image positions being computed with one another. This is explained in association with FIG. 5.

    [0091] FIG. 5 shows a schematic illustration for elucidating the second assignment. FIG. 5 shows the first image 15 and the third image 31. Corresponding image positions in the first image 15 and the third image 31 are determinable using the second assignment 33, which is symbolized by two arrows. An image position 23 contained in the first image and an image position 35 contained in the third image 31, each of the image positions being highlighted by a cross, represent the same location 25 of the object 1 (cf. FIG. 1). An image position 27 contained in the first image and an image position 37 contained in the third image 31, each of which image positions are highlighted by a circle, represent the further same location 29 of the object 1 (cf. FIG. 1). The second assignment 33 can be determined for example by applying a correlation between the first image 15 and the third image 31.

    [0092] As a result of determining the second assignment 33, corresponding image positions 23, 35 and 27, 37, respectively, in the first image 15 and the third image 31 are known. Image recording aberrations are more highly pronounced in the first image 15 than in the third image 31. By virtue of this difference, it is possible to determine a value of an image recording aberration of the image recording device on the basis of the first image 15 and the third image 31. For this purpose, the image values of the first image 15 and of the third image 31 can be computed with one another, in particular at corresponding image positions 23, 35 and 27, 37, respectively, determined using the second assignment 33. Second intermediate values can be determined from this computation, which second intermediate values can in turn be used for determining the value of the image recording aberration. By way of example, the deviation between image values at corresponding image positions 23, 35 and 27, 37, respectively, is determined and the value of the image recording aberration is determined on the basis thereof.

    [0093] FIG. 9 shows a schematic illustration for elucidating an image recording aberration in a first image 15. The first image 15 was recorded with a small magnification and therefore shows a large region 3 of the object 1 (cf. FIG. 1). The image distortion shown in FIG. 9 is a so-called barrel distortion. The barrel distortion serves as an illustrative example of the image recording aberration. However, the explanations in respect of the meaning, determination and application of the image recording aberration also hold true for other types of image recording aberrations.

    [0094] A grid illustrated by dashed lines represents image positions I, such as would be imaged by an ideal image recording of object locations arranged in the form of a grid. In the present case, ideal imaging means an object magnification that is constant in the vertical and horizontal directions. In the description, reference is explicitly made to the image positions IP1, IP2, IP3 and IP4 of the image positions I. These image positions are highlighted by circular areas.

    [0095] A distorted grid illustrated by solid lines represents image positions B1 such as would be imaged by a real image recording, carried out using an image recording device 51, 101, 102, of the object locations which are arranged in the form of a grid and which would be imaged onto the image positions I during ideal imaging. In the description, reference is made explicitly to the image positions B1P1, B1P2, B1P3 and B1P4 of the image positions B1. These image positions are highlighted by circular areas.

    [0096] A first object location is imaged onto the image position IP1 during the ideal imaging and is imaged onto the image position B1P1 during the real imaging. A second object location is imaged onto the image position IP2 during the ideal imaging and is imaged onto the image position B1P2 during the real imaging. A third object location is imaged onto the image position IP3 during the ideal imaging and is imaged onto the image position B1P3 during the real imaging. A fourth object location is imaged onto the image position IP4 during the ideal imaging and is imaged onto the image position B1P4 during the real imaging.

    [0097] The image positions IP1 and B1P1 are at a distance from one another. The image positions IP2 and B1P2 are at a distance from one another. The image positions IP3 and B1P3 are at a distance from one another. The image positions IP4 and B1P4 are at a distance from one another. That means that the image recording exhibits aberrations. The distances are not constant. That means that the real image recording is subject to a distortion aberration. The distance between an image position produced by the ideal image recording and an image position produced by the real image recording increases with increasing distance from the image center, which is caused by the increasing distance between the object location imaged onto the image positions and the optical axis of the image recording device.

    [0098] FIG. 10 shows a schematic illustration for elucidating an image recording aberration in a second image 17. The second image 17 represents a partial region 7 of the object 1 (cf. FIG. 2A), wherein the partial region 7 is smaller than the first region 3 represented by the first image 15 in FIG. 9. The second image 17 was recorded with a magnification that is larger than the magnification with which the first image 15 shown in FIG. 9 was recorded. Therefore, the second image 17 represents a smaller partial region 7 of the object 1 in comparison with FIG. 9.

    [0099] A grid illustrated by dashed lines represents image positions I such as would be imaged by an ideal recording of object locations, wherein the object locations are identical to those which are imaged by the ideal imaging onto the grid illustrated using dashed lines in FIG. 9. The distance between the lines of the grid in FIG. 10 is greater than in FIG. 9 on account of the larger magnification of the second image 17. Accordingly, the image positions IP1 in FIGS. 9 and 10 both represent the first object location; the image positions IP2 in FIGS. 9 and 10 both represent the second object location; the image positions IP3 in FIGS. 9 and 10 both represent the third object location; and the image positions IP4 in FIGS. 9 and 10 both represent the fourth object location.

    [0100] A distorted grid illustrated by dash-dotted lines represents image positions B2 such as would be imaged by a real image recording, carried out using the image recording device 51, 101, 102, of the object locations which are arranged in the form of a grid and which would be imaged onto the image positions I during ideal imaging. In the description, reference is made explicitly to the image positions B2P1, B2P2, B2P3 and B2P4 of the image positions B2. These image positions are highlighted by circular areas.

    [0101] The first object location is imaged onto the image position IP1 during the ideal imaging and is imaged onto the image position B2P1 during the real imaging. The second object location is imaged onto the image position IP2 during the ideal imaging and is imaged onto the image position B2P2 during the real imaging. The third object location is imaged onto the image position IP3 during the ideal imaging and is imaged onto the image position B2P3 during the real imaging. The fourth object location is imaged onto the image position IP4 during the ideal imaging and is imaged onto the image position B2P4 during the real imaging.

    [0102] FIG. 10 furthermore shows, using solid lines, a part of the grid which represents the image positions B1 in FIG. 9, wherein the grid was adapted to the magnification of the second image 17. The image position B1P1 in the first image 15 and the image position B2P1 in the second image 17 both represent the first object location and are therefore corresponding image positions. The image position B1P2 in the first image 15 and the image position B2P2 in the second image 17 both represent the second object location and are therefore corresponding image positions. The image position B1P3 in the first image 15 and the image position B2P3 in the second image 17 both represent the third object location and are therefore corresponding image positions. The image position B1P4 in the first image 15 and the image position B2P4 in the second image 17 both represent the fourth object location and are therefore corresponding image positions.

    [0103] In FIG. 10, the image positions IP1 and B2P1 are at a distance from one another, but the distance is smaller than the distance between the image positions IP1 and B1P1. The image positions IP2 and B2P2 are at a distance from one another, but the distance is smaller than the distance between the image positions IP2 and B1P2. The image positions IP3 and B2P3 are at a distance from one another, but the distance is smaller than the distance between the image positions IP3 and B1P3. The image positions IP4 and B2P4 are at a distance from one another, but the distance is smaller than the distance between the image positions IP4 and B1P4. The fact that the distances between the ideal and real image positions in the second image 17 are smaller than in the first image 15 is owing to the fact, for example, that the image positions in the first image 15 are comparatively further away from the optical axis of the image recording device than in the second image 17 and are thus affected by the distortion to a greater extent. A further reason is the higher magnification of the second image 17 with approximately identically manifested distortion in the first and second images.

    [0104] FIG. 11 shows a schematic illustration of the image recording aberration such as is determined from the first image 15 shown in FIG. 9 and the second image 17 shown in FIG. 10. The image positions B1P1 and B2P1, which are corresponding image positions, are identified using known methods in the first image 15 and the second image 17 (or a third image composed of the second images, see FIG. 4). This is carried out using (local) correlation methods, for example. The further corresponding image positions B1P2 and B2P2, B1P3 and B2P3, B1P4 and B2P4 are determined in an analogous manner.

    [0105] The distance and a displacement direction between the corresponding image positions are determined on the basis of the corresponding image positions. In FIG. 11, FP1 represents a displacement vector having a length and a direction and indicating the distance and the displacement direction between the corresponding image positions B1P1 and B2P1. FP2 represents a displacement vector indicating the distance and the displacement direction between the corresponding image positions B1P2 and B2P2. FP3 represents a displacement vector indicating the distance and the displacement direction between the corresponding image positions B1P3 and B2P3. FP4 represents a displacement vector indicating the distance and the displacement direction between the corresponding image positions B1P4 and B2P4.
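    By way of example, the displacement vectors can be computed directly from the corresponding image positions. The coordinate values below are hypothetical; B1 stands for positions measured in the distorted first image 15 and B2 for the corresponding positions of the second image 17 after rescaling to the first image's magnification:

```python
import numpy as np

# Hypothetical corresponding image positions (pixel coordinates of the
# first image): B1 from the distorted first image, B2 from the second
# image rescaled to the first image's magnification.
B1 = np.array([[10.0, 10.0], [10.0, 90.0], [90.0, 10.0], [90.0, 90.0]])
B2 = np.array([[12.5, 12.5], [12.5, 87.5], [87.5, 12.5], [87.5, 87.5]])

# Displacement vectors FP1..FP4: direction and length of the shift
# between each pair of corresponding image positions.
F = B2 - B1
lengths = np.linalg.norm(F, axis=1)
mean_aberration = lengths.mean()  # simple scalar metric of the aberration
```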

    [0106] The displacement vectors FP1, FP2, FP3 and FP4 indicate the image recording aberration, wherein the image positions of the second image 17 (or respectively of the third image, cf. FIG. 4) are regarded as a valid approximation of the ideal image recording. Accordingly, the image recording aberration represents the difference between the real image position and the ideal image position of an object location for a multiplicity of object locations.

    [0107] The image recording aberration which was determined in the form of the displacement vectors FP1, FP2, FP3 and FP4 can be used for correcting the image positions of the first image 15 in FIG. 9. Referring to FIG. 9, the image position B1P1 can be displaced by the displacement vector FP1 scaled to the magnification of the first image 15. The image recording aberration concerning the imaging of the first object location in the first image 15 is reduced as a result. The further image positions B1P2, B1P3, B1P4 (generally the image positions B1) are changed in a corresponding manner and the image recording aberration is thereby reduced.

    [0108] The image recording aberration that was determined in the form of the displacement vectors FP1, FP2, FP3 and FP4 is a general correction rule that can be used for the image correction of an arbitrary image which is recorded with the same magnification as the first image 15. By way of example, a new image (of a different object) can be recorded with a magnification the same as or comparable to that of the first image 15. The image recording aberration determined as above on the basis of the first image 15 and the second image 17 can be stored in a memory. For the correction of the newly recorded image, the image recording aberration is read from the memory in which the image recording aberration is stored, and is applied to the newly recorded image in the manner as was described with reference to FIG. 9.
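    By way of example, applying a stored correction rule to a newly recorded image can be sketched as a resampling step. The dense displacement field format and the function name are assumptions; a field of this kind could be obtained by interpolating the displacement vectors FP1, FP2, FP3 and FP4 over the image, and a real implementation would interpolate image values rather than use nearest neighbours:

```python
import numpy as np

def correct_image(image, disp_field):
    """Correct a recorded image: each output pixel takes its value from
    the input image at the position shifted by the stored displacement
    field (shape (H, W, 2), last axis = (dy, dx)).

    Nearest-neighbour resampling sketch.
    """
    H, W = image.shape
    out = np.zeros_like(image)
    for y in range(H):
        for x in range(W):
            sy = int(round(y + disp_field[y, x, 0]))
            sx = int(round(x + disp_field[y, x, 1]))
            if 0 <= sy < H and 0 <= sx < W:
                out[y, x] = image[sy, sx]
    return out
```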

    [0109] The image recording aberration that was determined in the form of the displacement vectors FP1, FP2, FP3 and FP4 can be the optimization target of an optimization algorithm which changes the operating parameters of the image recording device which influence the image recording such that the optimization target becomes better. By way of example, the optimization algorithm implements an iterative method in which the operating parameters of the image recording device which influence the image recording are changed in each iteration. In each iteration, a new image (with a magnification that substantially corresponds to the magnification of the first image 15) is recorded and the image recording aberration is determined once again. In this case, the method changes the operating parameters such that the image recording aberration is reduced or a metric based thereon (for example the average value of the lengths of the displacement vectors FP1, FP2, FP3 and FP4 or the like) is optimized.
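    By way of example, such an iterative optimization can be sketched as a simple coordinate search over one operating parameter. The callables `record_images` and `measure_aberration` stand in for the device control and for the aberration determination described above; the step-halving strategy is an illustrative assumption, not the disclosed algorithm:

```python
def tune_operating_parameter(record_images, measure_aberration,
                             param, step=0.1, iterations=20):
    """Iteratively adjust one operating parameter so that the measured
    aberration metric (e.g. the mean length of the displacement
    vectors) decreases.
    """
    best = measure_aberration(record_images(param))
    for _ in range(iterations):
        for candidate in (param + step, param - step):
            metric = measure_aberration(record_images(candidate))
            if metric < best:
                best, param = metric, candidate
                break
        else:
            step /= 2  # no improvement in either direction: refine
    return param, best
```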

    [0110] The methods described herein can be carried out using a multiplicity of different image recording devices. Examples of such image recording devices are described in association with FIGS. 6 to 8.

    [0111] FIG. 6 shows a simplified schematic illustration of a light microscope 51. The light microscope 51 includes an imaging device 53 configured to image an object plane 55 into an image plane 57. The imaging device 53 includes, for example, one lens or a plurality of lenses, which together form an objective. The imaging device 53 has the optical axis 5.

    [0112] The light microscope 51 furthermore includes an image sensor 59, the detection area 61 of which is arranged in the image plane 57. The image sensor 59 is configured to record images.

    [0113] In the methods described herein, the first image 15 can be recorded with a first magnification and the second images 17 can be recorded with second magnifications, wherein each of the second magnifications is greater than the first magnification.

    [0114] In association with the imaging device 53, the magnification can be defined as the ratio between the size of an image field and the size of a field of view. By way of example, the first image 15 is recorded with the field of view 63 and the image field 65. The image field 65 extends over the entire detection area 61.

    [0115] The second images 17 can be recorded with a field of view 67 and the image field 65. Since the field of view 63 with which the first image 15 is recorded is larger than the field of view 67 with which the second images 17 are recorded, and both the first image 15 and the second images 17 are recorded with the image field 65, the magnification with which the second images 17 are recorded is greater than the magnification with which the first image 15 is recorded.

    [0116] However, the first image 15 and the second images 17 can also be recorded with the same magnification, but with different field of view sizes. By way of example, the first image 15 is recorded with the field of view 63 and the image field 65. The second images 17 are recorded with the field of view 67 and an image field 69. The ratio of the image field 65 to the field of view 63 is equal to the ratio of the image field 69 to the field of view 67, such that the first image and the second images are recorded with the same magnification. However, the field of view 67 with which the second images 17 are recorded is smaller than the field of view 63 with which the first image 15 is recorded. Moreover, the image field 69 with which the second images 17 are recorded is smaller than the image field 65 with which the first image 15 is recorded.
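The magnification relationships of paragraphs [0114] to [0116] can be sketched numerically. The field-of-view and image-field sizes below are illustrative values, not values from the disclosure.

```python
def magnification(image_field_size, field_of_view_size):
    """Magnification defined as the ratio of the size of the image field
    to the size of the field of view, as in paragraph [0114]."""
    return image_field_size / field_of_view_size

# Case of [0114]/[0115]: same image field 65, but a smaller field of
# view 67 for the second images -> higher magnification.
m_first = magnification(image_field_size=10.0, field_of_view_size=2.0)
m_second = magnification(image_field_size=10.0, field_of_view_size=0.5)

# Case of [0116]: image field 69 and field of view 67 scaled together
# -> the same magnification despite the smaller field of view.
m_equal = magnification(image_field_size=2.5, field_of_view_size=0.5)
```

With these illustrative sizes, the second images of [0115] are recorded with a higher magnification than the first image, whereas in the case of [0116] the magnifications are equal.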

    [0117] The methods described herein can furthermore be carried out with the particle beam systems described with reference to FIGS. 7 and 8.

    [0118] FIG. 7 shows, in a perspective and schematically simplified illustration, a particle beam system 101 including an electron beam microscope 103 having a particle-optical axis 105.

    [0119] The electron beam microscope 103 is configured to generate a primary electron beam 119, which is emitted along the particle-optical axis 105 of the electron beam microscope 103, and to direct the primary electron beam 119 onto an object 113.

    [0120] For the purpose of generating the primary electron beam 119, the electron beam microscope 103 includes an electron source 121, which is illustrated schematically by a cathode 123 and a suppressor electrode 125, and an extractor electrode 126 arranged at a distance therefrom. Furthermore, the electron beam microscope 103 includes an acceleration electrode 127, which transitions into a beam tube 129 and passes through a condenser arrangement 131, which is illustrated schematically by a toroidal coil 133 and a yoke 135. After passing through the condenser arrangement 131, the primary electron beam 119 passes through a pinhole stop 137 and a central hole 139 in a secondary particle detector (for example a secondary electron detector) 141, whereupon the primary electron beam 119 enters an objective lens 143 of the electron beam microscope 103. The objective lens 143 includes a magnetic lens 145 and an electrostatic lens 147 for focusing the primary electron beam 119. The magnetic lens 145 includes a toroidal coil 149, an inner pole shoe 151 and an outer pole shoe 153. The electrostatic lens 147 is formed by a lower end 155 of the beam tube 129, the inner lower end of the outer pole shoe 153, and a toroidal electrode 159 tapering conically towards the object 113.

    [0121] Although not illustrated in FIG. 7, the electron beam microscope 103 furthermore includes a deflector device for deflecting the primary electron beam 119 in directions that are orthogonal to the particle-optical axis 105.

    [0122] The particle beam system 101 furthermore includes a controller 177, which controls the operation of the particle beam system 101. In particular, the controller 177 controls the operation of the electron beam microscope 103. The controller 177 receives from the secondary particle detector 141 a signal representing the detected secondary particles which are generated by the interaction of the object 113 with the primary electron beam 119 and are detected by the secondary particle detector 141. The controller 177 can furthermore include an image processing device and be connected to an image reproduction device (not illustrated). Instead of being arranged within the electron beam microscope 103, the secondary particle detector 141 can also be arranged within a vacuum chamber, which includes the object 113, but outside the electron beam microscope 103.

    [0123] FIG. 8 shows, in a perspective and schematically simplified illustration, a particle beam system 102 including an ion beam system 107 having a particle-optical axis 109 and the electron beam microscope 103 described with reference to FIG. 7.

    [0124] The particle-optical axes 105 and 109 of the electron beam microscope 103 and of the ion beam system 107 intersect at a location 111 within a common working region at an angle which can have values of, for example, 45° to 55° or approximately 90°, such that an object 113 to be analysed and/or to be processed and having a surface 115 in a region of the location 111 can be imaged or processed using an ion beam 117 emitted along the particle-optical axis 109 of the ion beam system 107 and can additionally be analysed using an electron beam 119 emitted along the particle-optical axis 105 of the electron beam microscope 103. A mount 116 indicated schematically is provided for mounting the object 113, which mount can set the object 113 with regard to distance from and orientation with respect to the electron beam microscope 103 and the ion beam system 107.

    [0125] The ion beam system 107 includes an ion source 163 having an extraction electrode 165, a condenser 167, a stop 169, deflection electrodes 171 and a focusing lens 173 for generating the ion beam 117 emerging from a housing 175 of the ion beam system 107. The longitudinal axis of the mount 116 is inclined with respect to the vertical by an angle which in this example corresponds to the angle between the particle-optical axes 105 and 109. However, the longitudinal axis of the mount and the vertical do not have to coincide with the particle-optical axes 105 and 109, and the angle formed by them also does not have to correspond to the angle between the particle-optical axes 105 and 109.

    [0126] The particle beam system 102 furthermore includes a controller 277, which controls the operation of the particle beam system 102. In particular, the controller 277 controls the operation of the electron beam microscope 103 and of the ion beam system 107. The particle beam system 102 can furthermore include a detector for backscattered ions or secondary ions (not shown).