DEVICE AND METHOD FOR MEASURING HEIGHT PROFILES ON AN OBJECT

20220268569 · 2022-08-25

Abstract

An optical device for sensing a surface profile of an object surface of an object by means of interferometric distance measurement, including a beam splitter for splitting a light beam of a light source into first and second sub-beams, a beam divider for dividing each sub-beam into a reference beam and a measuring beam, and a mirror for reflecting the two reference beams. Each measuring beam is directed onto a measuring area on the object surface for reflection and after reflection is directed as an object beam to the beam divider; each reference beam is reflected by the mirror and directed as a mirror beam to the beam divider; the object and mirror beams interfere and are each fed as an evaluation beam to a detector unit for evaluation. A corresponding system further includes a light source for generating a monochromatic light beam, the detector unit, and a signal evaluation unit for determining the surface profile.

Claims

1.-18. (canceled)

19. An optical device for sensing a surface profile of an object surface of an object by means of interferometric distance measurement, comprising a beam splitter for splitting a light beam of a light source into a first sub-beam and a second sub-beam; a beam divider for dividing each sub-beam of the light source into a reference beam and a measuring beam; a mirror for reflecting the two reference beams, which is designed to create a phase difference between the reflected beams; a detector unit for detecting evaluation beams; and a signal evaluation unit for evaluating the detected evaluation beams and determining the surface profile; wherein each measuring beam is directed to a measurement area on the object surface for reflection and after reflection is directed as an object beam to the beam divider; each reference beam is reflected by the mirror and directed as a mirror beam to the beam divider; and the object beam and the mirror beam each interfere after impinging on the beam divider and are each fed as an evaluation beam to the detector unit for evaluation; the signal evaluation unit is designed to determine the intensities of the two evaluation beams; and the signal evaluation unit is designed to determine, by using the intensities of the two evaluation beams, a height profile of the surface for which the influence of the reflectance of the surface on the measured intensities is compensated by calculation, and thereby to determine a correct profile both for object surfaces with less than 100% reflection and for object surfaces with different reflection coefficients within an area to be measured.

20. The device according to claim 19, wherein the signal evaluation unit is designed to determine the phase difference of the two evaluation beams and to determine the height profile of the surface from the intensities and the phase difference of the two evaluation beams.

21. A system for sensing the surface profile of an object surface of an object by means of interferometric distance measurement, comprising a light source for generating a monochromatic light beam; and the optical device according to claim 19.

22. The device according to claim 19, wherein the mirror has two parallel mirror segments, the mirror surfaces of which are offset in the direction of their surface normal.

23. The device according to claim 19, wherein the beam divider and the mirror are integrated in an optical element.

24. The device according to claim 23, wherein the optical element is a prism formed from two glass blocks resting against each other, and two mirror surfaces or mirrors are applied onto one side of one glass block.

25. The device according to claim 19, wherein the beam divider comprises a semi-transparent mirror which allows a part of an incident light beam to pass through as a reference beam and reflects another part of an impinging light beam as a measuring beam towards the object surface.

26. The system according to claim 21, wherein the light source is a laser.

27. The system according to claim 21, wherein a plurality of light sources for generating a monochromatic light beam are provided, the light beams of which have different wavelengths, no wavelength of a light beam being an integral multiple of the wavelength of another light beam, and the light beams of which are bundled in a beam coupler.

28. The device according to claim 19, wherein the detector unit comprises a time delay integration camera.

29. The device according to claim 19, further comprising an optical element for splitting the bundled light beams, arranged in such a manner that the splitting of the light beams takes place before they impinge on the detector unit.

30. The device according to claim 19, wherein a lens is arranged between the optical device and the object surface of the object for directing the measuring beams to a measuring area on the object surface.

31. The device according to claim 30, wherein the lens is an interchangeable lens with different magnifications or a fixed lens.

32. The device according to claim 19, wherein a drive unit is provided for generating a relative movement between the optical device and the object.

33. A method for sensing the surface profile of an object surface of an object by means of interferometric distance measurement, comprising the following steps: emitting a monochromatic light beam in the direction of an optical device by means of a light source; splitting the light beam from the light source into a first sub-beam and a second sub-beam by means of a beam splitter; dividing each sub-beam of the light source into a reference beam and a measuring beam by means of a beam divider; directing the reference beams to a mirror; reflecting the reference beams at the mirror and forming mirror beams which have a phase shift; directing the mirror beams to the beam divider; reflecting the mirror beams at the beam divider; directing the measuring beams onto a measuring area on the object surface; reflecting the measuring beams at the object surface and forming object beams; directing the object beams to the beam divider; interfering the mirror beams with the object beams and forming evaluation beams, wherein the evaluation beams have a phase difference; detecting the evaluation beams by means of a detector unit; evaluating the detected evaluation beams and determining the surface profile of the object surface by means of a signal evaluation unit; and using the two measured intensity values of the evaluation beams, which were both reflected from the same location on the object surface, in order to compensate for the influence of the reflectance of the light beams on the object surface when determining the surface profile and thereby to determine a correct profile both for object surfaces with less than 100% reflection and for object surfaces with different reflection coefficients within an area to be measured.

34. The method according to claim 33, with the following steps: evaluating the detected evaluation beams and determining the phase difference; and compensating for the influence of the reflectance of the light beams on the object surface by means of the determined phase difference when determining the surface profile, wherein from the determined reflection coefficients of the object surface a monochromatic inspection image is created for each wavelength, wherein the reflection coefficients are calculated from the previously determined height profile and the measured signal intensity of at least one of the beams.

35. A method for optical inspection of an object surface for defects, wherein a defect evaluation is based on an inspection image determined by means of the method according to claim 34 and on a subsequent back-calculation to the reflection of the light at the object surface, wherein the reflection r.sub.xy is obtained by:

r.sub.xy=(i.sub.xy−d.sub.xy)/(q.sub.xy*M.sub.xy.sup.max*(1+cos [2π/(λ.sub.x/2)*Δz.sub.y]))

wherein i.sub.xy=measured value at wavelength x at a pixel y of a sensor; d.sub.xy=dark signal value without excitation light; q.sub.xy=irradiated light intensity of the measurement; M.sub.xy.sup.max=maximum value of the transfer function at wavelength x at pixel y of the sensor; and Δz.sub.y=z.sub.mirror,z−z.sub.sample, wherein z.sub.mirror,z=distance from the beam divider to the reference mirror and z.sub.sample=distance from the beam divider to the object surface.

Description

[0074] Hereinafter, an exemplary embodiment of the invention is described with reference to the accompanying figures. In the figures:

[0075] FIG. 1 shows a schematic illustration of a system according to the invention for interferometric distance measurement with three laser light sources;

[0076] FIG. 2 shows a schematic illustration of method steps for setting up the system and for interferometric distance measurement;

[0077] FIG. 3 shows a detailed illustration of the path lengths covered by a first sub-beam of light and the associated directions in an enlarged detail of FIG. 1;

[0078] FIG. 4 shows a detailed illustration of the path lengths covered by a second sub-beam of light and the associated directions in an enlarged detail of FIG. 1;

[0079] FIG. 5 shows a schematic illustration of a first embodiment of the system according to FIG. 1; and

[0080] FIG. 6 shows a schematic illustration of a second embodiment of the system according to FIG. 1.

[0081] FIG. 1 shows an embodiment of the system 1 according to the invention with an optical device 2, at least one light source 7, a detector unit 8 and a signal evaluation unit 9. The optical device 2 comprises a beam splitter 3, a beam divider 4 and a mirror 5. The mirror 5 has two mirror segments 6, which in turn are in the form of two mirrors 60, 70. The beam splitter 3 is preferably formed as a beam shaping optics 80. The beam divider 4 is preferably a partially transparent mirror 90 with preferably a 50% partial transparency. The at least one light source 7 preferably comprises three laser sources 110, 120, 130 as shown here.

[0082] In the following, the structure of the system 1 and the beam path in the system 1 are described using the example of a wafer as a measurement object. In the system 1, wafers 20 or other flat objects whose surface profile is to be sensed are moved in succession relative to a camera (detector unit 8).

[0083] In the present preferred embodiment, the camera is stationary and the objects are passed through below the camera. For this purpose, a holding and transport unit 10 supporting and moving the wafers 20 is preferably provided. The movement is preferably in a direction perpendicular to the surface normal of the object surface 30 of the wafer 20. In another exemplary embodiment, the camera can be moved. In an alternative exemplary embodiment, the relative movement can be divided between the camera and the object such that, for example, the camera preferably performs the movement in an axial direction while the object is preferably movable in the direction perpendicular thereto. The movement between camera and object is preferably continuous.

[0084] The exemplary embodiment in FIG. 1 uses three lasers as light sources 7 and two TDI multi-channel line scan cameras (TDI=time delay integration) as sensors or detectors of the detector unit 8. However, a different number of light sources can also be used. Likewise, a broader-band light source in combination with narrow-band filters can be used as an alternative.

[0085] Similarly, instead of TDI sensors, simple multi-channel line sensors (without TDI process) or area scan cameras or a set of multiple line scan cameras can be used.

[0086] In FIG. 1, the beam offsets generated when the beams pass through a plane-parallel plate are omitted to simplify the illustration. In the exemplary embodiment in FIG. 1, the light from the three laser sources 110, 120 and 130 is combined by a fiber coupler 100. In the fiber coupler 100, the three fibers 170, 180, 190 fed by the laser sources are spliced so that the light of the laser sources is emitted in a common, multicolor light beam 84. The transmission via a light guide serves only to decouple the lasers from the actual receiving system. Instead of light guide transmission, the laser beams can also be directed directly into the receiving unit via suitable optics, which reduces losses but requires more adjustment effort. Likewise, the combination of the three beams from the three laser modules can also be done via dichroic mirrors.

[0087] With the beam shaping optics 80, the light beam 84 is shaped into a parallel light bundle consisting of the two bundle halves, so-called sub-beams, 410 and 420 with cross-sections adapted to the receiving surface. The beam halves 410 and 420 are arranged around the beam center axis 400. They do not necessarily have to be adjacent to each other, i.e., a central region between them can remain unused. Splitting is preferably but not necessarily symmetrical. In this case, all wavelength components are evenly distributed over the entire cross-section and in particular between the bundle halves (sub-beams) 410 and 420. The light bundle (consisting of the sub-beams 410, 420) is directed by the 50% partially transparent mirror 90 in part as a sub-beam (measuring beams 430 and 440) onto the wafer surface 40 to be measured via a lens 50. The other 50% of the light beam (sub-beams 410, 420) pass through the partially transparent mirror 90 as sub-beams (reference beams) 470 and 480.

[0088] The reference beams 470 and 480 impinge on the two reference mirrors 60 and 70 where they are reflected. The resulting partial bundles (mirror beams 490 and 500) are again 50% reflected at the partially transparent mirror 90.

[0089] The partial beams directed onto the wafer surface 40, measuring beams 430 and 440, are focused by the lens 50 in a common imaging area. Due to the reflection at the wafer surface 40, they are reflected back through the lens 50 as partial bundles (object beams) 450 and 460. In this case, the first object beam 450 comprises the reflected light from the first measuring beam 430 and the second object beam 460 comprises the reflected light from the second measuring beam 440. Half of the light from each of the object beams 450 and 460 passes through the partially transparent mirror 90. The light originating from the first object beam 450 then interferes with the light from the first mirror beam 490 reflected at the partially transparent mirror 90 in a light bundle to form the first evaluation beam 520.

[0090] The intensity of the first evaluation beam 520 is now modulated by the interference of the light waves according to the difference in distance between the partially transparent mirror 90 and the mirror 60 on the one hand and the partially transparent mirror 90 and the wafer surface 40 on the other hand. The intensity modulation is performed independently for all three included wavelengths.

[0091] Similarly, an interferometry signal is generated from the light of the second object beam 460. After passing through the partially transparent mirror 90, the light originating from the second object beam 460 interferes with the light of the second mirror beam 500 reflected at the partially transparent mirror 90 in a light bundle, the second evaluation beam 510.

[0092] The intensity of the second evaluation beam 510 is now modulated by the interference of the light waves according to the difference in distance between the partially transparent mirror 90 and the mirror 70 on the one hand and the partially transparent mirror 90 and the wafer surface 40 on the other hand. The intensity modulation is also performed independently for all three included wavelengths. FIGS. 3 and 4 show enlarged details of FIG. 1 to illustrate the formation of interference and the path differences that must be taken into account.

[0093] The two evaluation beams 510 and 520 differ in intensity in the three wavelengths used. The intensity difference of each wavelength considered by itself is determined exclusively by the phase offset generated at the mirrors 60 and 70. In the illustrated exemplary embodiment, the phase offset is generated by a different distance of the two mirrors 60 and 70 from the partially transparent mirror 90. However, the phase offset can also be generated in other ways, e.g. by coating one of the mirror segments 6 or the partial mirrors (e.g., mirror 70) with a transparent coating.

[0094] The second evaluation beam 510 is directed by mirrors 260 as a second detection beam 540 onto a prism 270. The prism 270 is used for spectrally splitting the light from the second detection beam 540 into the three wavelength components λ.sub.1, λ.sub.2 and λ.sub.3. Another suitable dispersive element, such as a grating, can be used in place of the prism 270 without loss of generality. The spectral components split in this manner from the second detection beam 540 are focused by a first tube optics 200 onto a TDI multi-channel line scan camera 210. When using TDI multi-channel line scan cameras with color filters permanently installed in front of the sensor blocks, the spectral splitting of the beam can also be omitted.

[0095] By recording the light intensity impinging on each pixel of the line scan camera 210, three signals i.sub.1,1, i.sub.2,1 and i.sub.3,1 are thus generated, the intensity of which is modulated by the profile height at the respective location of the wafer surface 40 and can thus be used for interferometric determination of the surface profile. Here, the first index represents the wavelength and the second index represents the camera 210 providing the signal.

[0096] Since mirror 260 blocks only the path of the second evaluation beam 510, the first evaluation beam 520, which is intensity-modulated by interference, impinges unobstructed as the first detection beam 530 onto prism 250. Prism 250 is used for spectral splitting of the light from the first detection beam 530 into the three wavelength components λ.sub.1, λ.sub.2 and λ.sub.3. Another suitable dispersive element, such as, for example, a grating can also be used instead of prism 250 without loss of generality. The spectral components thus split from the first detection beam 530 are focused by a second tube optics 230 onto a TDI multi-channel line scan camera 220. By recording the light intensity impinging on each pixel of the line scan camera 220, three signals i.sub.1,2, i.sub.2,2 and i.sub.3,2 are thus generated, the intensity of which is modulated by the surface profile of the wafer surface 40 and can thus be used for interferometric determination of the surface profile.

[0097] FIG. 5 shows an alternative embodiment of the system according to the invention in which the divider mirror 90 and the two mirrors 60, 70 (reference mirrors) are replaced by a prism 600 consisting of two blocks adhesively bonded together, a partially reflecting layer being provided on the adhesively bonded 45° surface. This layer forms the partially transparent mirror 610 which splits the two sub-beams 410, 420, as does the mirror 90 according to the embodiment of FIG. 1. The details and beam paths of the individual beams are described in FIG. 1 and are no longer shown in FIG. 5 for clarity.

[0098] The two mirrors 60, 70 present according to the embodiment of FIG. 1, which serve as reference mirrors, are provided on the outer surface of the prism 600. For this purpose, for example, mirror coatings can be vapor-deposited onto the outer surface or mirror surfaces can be applied or glued on. By appropriate construction of the prism 600, the necessary offset can be created on the outer surface so that the mirrors 60, 70 have the same offset as shown in FIG. 1.

[0099] After the mirror beams 490, 500 and the object beams 450, 460 interfere in the prism 600 to form the evaluation beams 520 and 510, these beams are bundled by means of a collimator optics 620 and directed to an optical element 700. The optical element 700 preferably has a rectangular or square base area. On the two sides facing the collimator optics 620, mirror coatings or mirrors 730, 740 are arranged to reflect impinging beams. The mirrors 730, 740 can also be formed as independent mirrors or reflective elements; however, they must then be aligned or adjusted.

[0100] Impinging beams are deflected at the optical element 700. At mirrors 730, 740, the two evaluation beams 520 and 510 are separated from each other and directed to cameras 210, 220. At mirror 730, the evaluation beam 520 is reflected into a first detection beam 530, which is directed to the first camera 210 by means of a deflection mirror 710. At mirror 740, the second evaluation beam 510 is directed to a deflection mirror 720 as a second detection beam 540 so that the second detection beam 540 impinges on the second camera 220. The signals recorded by the cameras 210 and 220 are processed in the signal evaluation unit 9 so that the profile of the wafer surface 40 of the wafer 20 is sensed.

[0101] In the embodiment shown here, the detection beams 530, 540 are also focused such that they impinge on the cameras 210, 220 at a single point.

[0102] FIG. 6 shows an alternative embodiment of the system 1 according to the invention, which corresponds to the structure according to the embodiment shown in FIG. 5. The two embodiments of FIGS. 5 and 6 differ only in the optical element that deflects the two evaluation beams 510, 520 into the detection beams 530, 540. According to FIG. 6, a flatter optical element 750 is used, so that no right-angled beam guidance results, as is provided in the embodiment according to FIG. 5. According to FIG. 5, the evaluation beams are deflected by 90° to the two deflection mirrors 710, 720, which are arranged at an angle of 45°, so that the detection beams 530, 540, after impinging on them, are directed to the two cameras 210, 220 at an angle of 90°.

[0103] According to FIG. 6, a deflection other than 90° is effected by means of the optical element 750 and the two mirrors 730, 740 due to the arrangement of the mirrors 730, 740 on the diamond-shaped optical element's 750 side surfaces which are not arranged at right angles. The two deflection mirrors 710 and 720 are arranged accordingly so that the detection beams are directed, preferably focused, to a point on the respective camera 210, 220. This allows a much more compact measurement setup, so that the system can be made smaller and more compact and the accuracy of the measurements can be increased.

[0104] Of course, the prism 600 of FIGS. 5 and 6 can preferably also be used in an embodiment of the invention according to FIG. 1, replacing the partially transparent mirror 90 and the mirrors 60, 70. Also, the optical device 2 of FIG. 1 can be used in the embodiments according to FIGS. 5, 6.

[0105] Likewise, the detection unit according to FIG. 1, comprising the cameras 210, 220, the prisms 250, 270, the tube optics 200, 230 and the mirrors 260, can be replaced by the detection unit according to FIGS. 5, 6, so that the collimator optics 620, the optical element 700 or 750, the mirrors 710, 720, 730 and 740 and the cameras 210, 220 are used. The same applies vice versa.

Signal Evaluation

[0106] The following explanation of the calculation of the surface profile from the obtained signals i.sub.x, r.sub.x and q.sub.x with x=1, 2, 3 for the three wavelengths λ.sub.1, λ.sub.2 and λ.sub.3 is performed only for one pixel of the camera sensors of the line scan cameras 210 and 220, respectively. It is understood that this calculation can be performed for each pixel of the cameras 210, 220. It is thus possible according to the sensor size and arrangement to determine a plurality of height points of the profile simultaneously.

[0107] For a line scan camera available today with, e.g., 16384 points per line, this means 16384 height values for each readout cycle of the cameras. Furthermore, in the arrangement described here, the line scan cameras 210, 220 can be moved continuously relative to the wafer 20. In accordance with the clock speed of the cameras, a corresponding number of lines with height information is obtained per unit time. Thus, at a clock rate of the proposed multi-channel TDI line scan cameras of, e.g., 100 kHz, more than 1600 million height values per second are obtained. Such cameras are offered by different manufacturers (e.g., by Vieworks and by Teledyne DALSA). The use of such TDI multi-channel line scan cameras is a particularly suitable variant, since very high measurement speeds can be achieved. These cameras contain a plurality of (usually 4) TDI blocks in one camera, which can be operated and read out simultaneously. When using such cameras, the wavelengths used that belong to one line on the wafer surface are recorded one after the other. This means that while the first TDI block records the line area at λ.sub.1, the second TDI block determines the signal at λ.sub.2 and the third TDI block the signal at λ.sub.3.
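The throughput figure quoted above follows directly from the line length and the clock rate. As a quick check (values as stated in the text; a minimal sketch, not part of the patent):

```python
# Throughput check for the TDI line scan arrangement described above.
pixels_per_line = 16_384     # height values per readout cycle
line_rate_hz = 100_000       # camera clock rate of 100 kHz
height_values_per_second = pixels_per_line * line_rate_hz
print(height_values_per_second)  # 1638400000, i.e. more than 1600 million per second
```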

[0108] In principle, this temporal offset is irrelevant for the calculation presented below. It is only necessary that the signal images obtained are assigned and evaluated in a phased manner (corresponding to the spatial offset of the TDI blocks). With this arrangement, high signal quality (correspondingly high-resolution and robust measurement) can be particularly well combined with high speed. Alternative arrangements are explained below.

[0109] The measurement procedure and its preparation is illustrated in FIG. 2. For a correct determination of the height values, the measurement must be prepared by means of a dark signal measurement and a determination of the transfer function of the optics and sensor system.

[0110] In the dark signal measurement, the signal d is measured with the light source switched off at each camera pixel y of the two line scan cameras 210 and 220. This determines the so-called dark noise of the camera, which represents an offset for each further measurement and is subtracted from the signal. This is done for both line scan cameras (sensor arrangements) 210 (index z=1) and 220 (index z=2).


d.sub.xyz Signal value read out at wavelength x (λ.sub.x) at pixel y of sensor z  (5)

[0111] To determine the optical and electrical transfer function of the arrangement (system 1), a bright signal measurement h is performed with a known planar object. For this purpose, the wafer surface 40 is replaced by a flat reference piece with known reflection properties. Since the signal h at each sensor pixel is uniquely determined by the intensity of the light source (signal value q), the transfer function M, the reflectance of the reference piece r, the path difference lz of the two interfering light beams (first object beam 450 and first mirror beam 490) (impinging on sensor (line scan camera) 220, z=2) or, respectively, second mirror beam 500 and second object beam 460 (impinging on sensor (line scan camera) 210, z=1) and the dark signal d, the transfer function M can be determined for each wavelength x, for each pixel y and for both sensors z as a function of the path difference, provided the values h, q and r are known. The transfer function is generally different for each wavelength x, each pixel y, and each camera z. It is determined by the sensitivity of the individual pixels, by the illumination, material properties, coatings and aberrations of the optics.

[0112] To check the output intensity of each laser light source 110, 120, 130 and include it in the calculation as a correction or reference value, the signals q.sub.1, q.sub.2 and q.sub.3 of the monitor diodes usually installed in each laser module (on the side of the laser facing away from the output) 140, 150, 160 can be used directly.

[0113] The signal h of the bright reference measurement is:


h.sub.xyz,href(l.sub.z)=q.sub.xy,href*M.sub.xyz(l.sub.z)*r.sub.x,ref+d.sub.xyz  (6)

[0114] Wherein:
[0115] h.sub.xyz,href(l.sub.z) Measured value of the reference measurement at wavelength x at pixel y of sensor z (interferometer, line scan cameras 210 [z=1] and 220 [z=2]) as a function of path difference l.sub.z.
[0116] q.sub.xy,href Irradiated light intensity of the reference measurement at wavelength x at pixel y.
[0117] M.sub.xyz(l.sub.z) Transfer function at wavelength x at pixel y of sensor z as a function of path difference l.sub.z.
[0118] r.sub.x,ref Reflection coefficient of the reference measurement (known material) at wavelength x.
[0119] d.sub.xyz Dark signal value read out (no excitation light) at wavelength x at pixel y of sensor z.

[0120] Note: The dark signal values are “wavelength-dependent” despite “dark=no light” because different sensor pixels are used for the different wavelengths—they can have different dark count values.

[0121] Here, the argument l.sub.z denotes the path difference between the two interfering beams, thus, first object beam 450 and first mirror beam 490, impinging on line scan camera 220, z=2, or, respectively, second object beam 460 and second mirror beam 500, impinging on line scan camera 210, z=1.
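Equation (6) is a simple forward model of the bright reference signal. The following sketch evaluates it with purely illustrative numbers (none of these values appear in the patent):

```python
def bright_reference_signal(q_href, M, r_ref, d):
    """Forward model per Eq. (6): h = q_href * M(l_z) * r_ref + d.
    All arguments are per-wavelength, per-pixel scalar values."""
    return q_href * M * r_ref + d

# Illustrative only: irradiated intensity, transfer-function value,
# known reference reflectance, and dark signal.
h = bright_reference_signal(q_href=1000.0, M=0.8, r_ref=0.9, d=12.0)
print(h)  # 732.0
```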

[0122] FIG. 3 shows the ratios for the light bundle pair of first object beam 450 and first mirror beam 490. As can be seen from this, the path difference l.sub.2 is just 2× (z.sub.mirror2−z.sub.sample) since the path difference z.sub.differenz cancels itself out because each partial bundle (first measuring beam 430 and second partial beam 420) runs through it exactly once. The distance z.sub.mirror2 of the reference mirror 60 carries the index 2 here since the two reference mirrors 60, 70, without loss of generality, have a different distance from the divider mirror 90 of the beam divider 4 for establishing the phase offset according to the invention.

[0123] Analogous ratios apply for the light bundle pair of second object beam 460 and second mirror beam 500, as shown in FIG. 4. Here, both partial bundles (first partial bundle comprising second sub-beam 420 and second object beam 460; second partial bundle comprising second reference beam 480 and second mirror beam 500) which interfere later, run through the path difference z.sub.differenz exactly 2 times, which in turn cancels it out, and the path difference l1 to be taken into account is just equal to 2× (z.sub.mirror1−z.sub.sample).

[0124] To determine the distance z.sub.sample of the wafer surface 40 from the sensor arrangement (which corresponds to the optical device 2), slightly different methods can be used to define and use the transfer function. The selected method is used equally for both sensors (line scan cameras 210 and 220).

[0125] On the one hand, the transfer function M(l) can be divided into a non-interfering factor M.sup.max and the interference effect. The factor M.sup.max is obtained by determining the transfer function M(l) only at its maxima. The intensity modulation due to the interference can then be determined directly from the values of the measurement run. This first approach is described further in the following.

[0126] On the other hand, the intensity modulation due to the interference can be included in the transfer function M(l). M(l) is then determined in the reference measurement for the entire working range as a function of the path difference of the two bundles (first and second sub-beam) and the distance to the wafer surface. The distance difference to the wafer surface is determined in the measurement run by comparing the transfer function values with those of the bright signal measurement carried out as reference. This comparison is carried out in each case for the triplet of wavelengths x=1, 2, 3 and for both sensors z=1, 2, and searches for the distance matching all three wavelengths and both sensors.

[0127] For the sensors (line scan cameras 210 and 220), the following transfer functions apply (where the λ.sub.x/2 in the denominator results from the difference z.sub.mirror−z.sub.sample being run through twice):

[00006] M.sub.xyz(l.sub.z)=M.sub.xyz.sup.max*(1+cos [2π/(λ.sub.x/2)*(z.sub.mirror,z−z.sub.sample)])  (7)

[0128] Wherein: [0129] M.sub.xyz(l.sub.z) Transfer function at wavelength x at pixel y of sensor z as a function of the path difference l.sub.z. [0130] M.sub.xyz.sup.max Maximum value of the transfer function at wavelength x at pixel y of sensor z → constructive interference, i.e. path difference l.sub.z equal to an integer multiple of the wavelength λ.sub.x. [0131] z.sub.mirror,z Distance of the divider mirror 90 from the mirror 60 or 70, respectively (cf. FIG. 4). [0132] z.sub.sample Distance of the divider mirror 90 from the wafer surface 40 (cf. FIG. 4).
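Equation (7) maps directly onto a small numerical helper. The sketch below is illustrative (function and argument names are not from the specification; all lengths must simply share one unit):

```python
import numpy as np

def transfer_function(m_max, z_mirror, z_sample, wavelength):
    """Transfer function M_xyz(l_z) per equation (7).

    The lambda/2 in the cosine argument reflects the fact that the beams
    run through the difference (z_mirror - z_sample) twice.
    """
    return m_max * (1.0 + np.cos(2.0 * np.pi / (wavelength / 2.0)
                                 * (z_mirror - z_sample)))
```

The function peaks whenever z_mirror − z_sample is an integer multiple of λ/2, i.e. whenever the doubly traversed path difference l.sub.z = 2(z.sub.mirror − z.sub.sample) is an integer multiple of λ.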

[0133] If the maximum signal values are determined for each of the three wavelengths x=1,2,3, M.sup.max can be determined for each wavelength:

[00007] $M_{xyz}^{\max} = \dfrac{h_{xyz,\text{ref}}\bigl(l_z \text{ for } h_{\max}(x)\bigr) - d_{xyz}}{q_{xy,\text{ref}} \cdot r_{x,\text{ref}}}$  (8)

[0134] Wherein: [0135] l.sub.z for h.sub.max(x) Path difference at which the signal at sensor z is at its maximum; thus, l.sub.z is an integer multiple of the wavelength λ.sub.x.

[0136] The determination of M.sup.max is carried out for each wavelength, e.g. by continuously changing the distance z.sub.sample of the reference piece and thus running through a full wavelength period for all three wavelengths x=1, 2, 3 and both sensors z=1, 2. In doing so, it can also be determined from the respective z.sub.sample,y positions at which the maximum value occurs how large the phase offset Δz.sub.ry=z.sub.mirror1,y−z.sub.mirror2,y of the two mirrors 60 and 70 is at each pixel y.
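The calibration described above can be sketched as follows. This is a sketch under assumed names: `h_ref` is the bright-signal sweep of the reference piece, and the per-pixel mirror offset follows from the difference of the maximum positions found for the two sensors:

```python
import numpy as np

def calibrate_m_max(z_positions, h_ref, d, q_ref, r_ref):
    """Determine M_max per equation (8) for one pixel, wavelength and sensor
    from a reference sweep of z_sample over at least one half-wavelength.

    z_positions: swept z_sample values; h_ref: bright signals recorded there;
    d: dark value; q_ref: irradiated intensity; r_ref: reflectance of the
    reference piece. Returns (m_max, z_at_max).
    """
    k = int(np.argmax(h_ref))                 # sweep index of constructive interference
    m_max = (h_ref[k] - d) / (q_ref * r_ref)  # equation (8)
    return m_max, z_positions[k]

# The phase offset of the two mirrors at a pixel y then follows as
# dz_ry = z_at_max(sensor 1) - z_at_max(sensor 2).
```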

[0137] In the measurement run with the unknown wafer surface 40 to be examined, the sensor signals i.sub.z1, i.sub.z2 and i.sub.z3 of the camera sensors 210 [z=1] and 220 [z=2] are now recorded simultaneously, as well as the output intensities of the laser light sources 110, 120, 130, by means of the built-in monitor diodes 140, 150, 160.

[0138] For the signals at the sensors, the line scan cameras 210 and 220, the following applies:

[00008] $i_{xyz} = q_{xy} \cdot M_{xyz}(l_z) \cdot r_{xy,\text{wafer}} + d_{xyz}$  (9)
$i_{xyz} = q_{xy} \cdot M_{xyz}^{\max}\left(1 + \cos\!\left[\frac{2\pi}{\lambda_x/2}\,\bigl(z_{\text{mirror},z,y} - z_{\text{sample},y}\bigr)\right]\right) \cdot r_{xy,\text{wafer}} + d_{xyz}$  (10)

[0139] Wherein: [0140] i.sub.xyz Interferometry measurement value at wavelength x at pixel y of sensor z. [0141] q.sub.xy Irradiated light intensity of the measurement at wavelength x at pixel y. [0142] M.sub.xyz.sup.max Maximum value of the transfer function at wavelength x at pixel y of sensor z → constructive interference, i.e. path difference l.sub.z equal to an integer multiple of the wavelength λ.sub.x. [0143] r.sub.xy,wafer Reflection coefficient of the wafer 20 at wavelength x at pixel y.

[0144] d.sub.xyz Dark signal value read out (no excitation light) at wavelength x at pixel y of sensor z.

[0145] With the relation Δz.sub.ry=z.sub.mirror1,y−z.sub.mirror2,y and the substitution Δz.sub.y=z.sub.mirror1,y−z.sub.sample,y, the equations for the two sensors can be simplified to:

[00009] $i_{xy1} = q_{xy} \cdot M_{xy1}^{\max}\left(1 + \cos\!\left[\frac{2\pi}{\lambda_x/2}\,\Delta z_y\right]\right) \cdot r_{xy,\text{wafer}} + d_{xy1}$  (11)
and
$i_{xy2} = q_{xy} \cdot M_{xy2}^{\max}\left(1 + \cos\!\left[\frac{2\pi}{\lambda_x/2}\,\bigl(\Delta z_y - \Delta z_{ry}\bigr)\right]\right) \cdot r_{xy,\text{wafer}} + d_{xy2}$  (12)

[0146] Since for the profile measurement of the wafer surface 40, i.e. the sensing of the surface profile of the wafer, the determination of Δz.sub.y is sufficient, and the distance differences Δz.sub.ry are known from the preliminary measurement for the determination of the maxima of the transfer functions M.sup.max, this pair of signal equations (11), (12) defines for each pixel y a system of 6 equations (with the arguments x=1, 2, 3 and z=1, 2) for 4 unknown variables: [0147] the three reflection coefficients r.sub.xy,wafer of the wafer surface 40 and the sought distance difference Δz.sub.y, which yields the profile of the wafer surface 40.

[0148] By means of a suitable selection of the wavelengths λ.sub.1, λ.sub.2 and λ.sub.3 for x=1, 2, 3, a working range of 0.5 mm with unique assignment of the two intensity measurement value triples i.sub.xy1 and i.sub.xy2 to a path difference Δz.sub.y can be established without difficulty, which is sufficient for a variety of profile measurement tasks. This is explained in the publication by K. Meiners-Hagen, R. Schödel, F. Pollinger and A. Abou-Zeid mentioned at the beginning. In the arrangement disclosed therein, for example, the wavelengths 532 nm, 632 nm and 780 nm are used and a working range of 0.6 mm with unique assignment of the distance difference is achieved.
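As a rough plausibility check on such a wavelength set (an illustration added here, not a formula from the specification), the pairwise synthetic, or beat, wavelengths Λ.sub.ij = λ.sub.i·λ.sub.j/|λ.sub.i−λ.sub.j|, which govern the attainable unambiguous range, can be computed as:

```python
from itertools import combinations

def synthetic_wavelengths(wavelengths_nm):
    """Pairwise synthetic (beat) wavelengths in nm."""
    return {(a, b): a * b / abs(a - b)
            for a, b in combinations(wavelengths_nm, 2)}

# The 532/632/780 nm set cited above; the 532/632 nm pair alone yields
# a synthetic wavelength of about 3.36 um, and combining all three
# extends the unambiguous range much further.
beats = synthetic_wavelengths([532.0, 632.0, 780.0])
```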

[0149] To simplify the mathematical evaluation, the equations can be combined:

[00010] $\dfrac{i_{xy1} - d_{xy1}}{i_{xy2} - d_{xy2}} = \dfrac{M_{xy1}^{\max}}{M_{xy2}^{\max}} \cdot \dfrac{1 + \cos\!\left[\frac{2\pi}{\lambda_x/2}\,\Delta z_y\right]}{1 + \cos\!\left[\frac{2\pi}{\lambda_x/2}\,(\Delta z_y - \Delta z_{ry})\right]}$  (13)

and with:

[00011] $P_y = \dfrac{i_{xy1} - d_{xy1}}{i_{xy2} - d_{xy2}} \cdot \dfrac{M_{xy2}^{\max}}{M_{xy1}^{\max}}$  (14)

to:

[00012] $P_y \left(1 + \cos\!\left[\frac{2\pi}{\lambda_x/2}\,(\Delta z_y - \Delta z_{ry})\right]\right) = 1 + \cos\!\left[\frac{2\pi}{\lambda_x/2}\,\Delta z_y\right]$  (15)

[0150] This reduces the task for each pixel y to the determination of Δz.sub.y from the remaining 3 equations, all of which must be satisfied simultaneously. As a result of this combination, the intensities q.sub.1y, q.sub.2y and q.sub.3y of the laser sources no longer enter the calculation directly, and the reflection coefficients r.sub.xy,wafer of the wafer surface 40 do not have to be determined explicitly.
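Equations (14) and (15) can be written down directly. The sketch below (hypothetical names; one wavelength x and one pixel y at a time) returns the residual of equation (15), which vanishes at the true Δz.sub.y:

```python
import numpy as np

def p_value(i1, i2, d1, d2, m1_max, m2_max):
    """P_y per equation (14)."""
    return (i1 - d1) / (i2 - d2) * (m2_max / m1_max)

def eq15_residual(p, dz, dz_r, wavelength):
    """Left minus right side of equation (15); zero at the true dz."""
    k = 2.0 * np.pi / (wavelength / 2.0)
    return p * (1.0 + np.cos(k * (dz - dz_r))) - (1.0 + np.cos(k * dz))
```

In practice the residuals of all three wavelengths are evaluated together, since a single wavelength only constrains Δz.sub.y modulo λ.sub.x/2.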

[0151] If Δz.sub.y is represented in units of half wavelengths, Δz.sub.y can be represented for each of the three wavelengths x as the sum of an integer part δ.sub.x and a remainder f.sub.x:

[00013] $\Delta z_{yx} = \dfrac{\lambda_x}{2}\,(\delta_x + f_x)$  (16)

[0152] The sought path difference Δz.sub.y is determined from the Δz.sub.yx by finding the triple δ.sub.x of integer parts for which the mean deviation of the associated Δz.sub.yx from their common mean value is minimal.
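The search for the integer-part triple can be sketched as a brute-force scan over the working range. This is an illustrative sketch (names are not from the specification; a real implementation would restrict the candidate set far more aggressively):

```python
import itertools
import numpy as np

def resolve_dz(fractions, wavelengths, z_max):
    """Find Δz via equation (16): brute-force the integer parts δ_x that
    make the Δz_yx = (λ_x/2)(δ_x + f_x) agree best (minimal mean deviation
    from their mean).

    fractions: fractional parts f_x in [0, 1); wavelengths: λ_x;
    z_max: upper end of the working range (same unit as wavelengths).
    """
    best, best_spread = None, np.inf
    ranges = [range(int(z_max / (lam / 2.0)) + 1) for lam in wavelengths]
    for deltas in itertools.product(*ranges):
        dz = [lam / 2.0 * (delta + f)
              for lam, delta, f in zip(wavelengths, deltas, fractions)]
        spread = np.mean(np.abs(np.array(dz) - np.mean(dz)))
        if spread < best_spread:
            best, best_spread = np.mean(dz), spread
    return best
```

Within a sufficiently small working range the minimum-spread triple is unique, which is exactly the unique-assignment condition discussed in paragraph [0148].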

[0153] Use of a Reference Profile for Height Measurement of Recurring Structures.

[0154] An important application of the height profile measurement presented here is the measurement of the heights of contact structures on wafers. Here, only the height of each structure is of interest, in order to ensure during the later insertion and contacting of the chips in the package, or during the 3-dimensional integration of multiple chips, that all contact structures actually make electrical contact and that no structure is so tall that the chip is mechanically damaged during installation. For this purpose, it should be determined in each case whether the zenith points of all contact structures lie in one plane within a sufficiently low tolerance.

[0155] In order to accomplish this task quickly and reliably, it is proposed to perform a high-resolution reference measurement on a few examples of the contact structures, to create a reference model of the contact structure from this and, for the routine measurement of similar wafers, to record only as many measurement points as are necessary to ensure that some of them lie on the top side of the contact structure.

[0156] Such a reference model can be created using the same profile measurement method described above. For this purpose, in a first embodiment of the invention, the lens 50 can be an interchangeable lens with different magnifications. The reference measurement can then be performed with a high magnification. The reference model can be generated, for example, by averaging or median calculation from multiple measured contacts. The routine measurement can be carried out at a lower magnification so that just enough measurement points are determined on the contact surface to be able to make a reliable comparison with the reference model.

[0157] In another alternative method, the lens 50 can be a fixed lens with a fixed magnification. In this case, the reference measurement can be performed by multiple measurements with a small lateral offset. This again provides a denser measurement point coverage for the measured contact. The reference model can again be generated by averaging or median calculation from multiple measured contacts.

[0158] The comparison with a reference model for the determination of the zenith points has the advantage that a lower measuring point density can be selected. One is not dependent on always meeting the zenith point itself with a measuring point. Rather, it is sufficient to use a few points, e.g. 5 to 10 points, on the contact surface. These points can be located at arbitrary places on the contact. The zenith height is then determined by fitting the model height to the measured points. This allows working with a lower measurement point density overall, and thus with a higher throughput, while still obtaining a reliable measurement of the zenith heights of the contacts.
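The zenith determination by fitting the model height to a few measured points can be sketched as a least-squares offset fit. This is an illustrative sketch under an assumed convention: the model profile is given relative to its own zenith, so the fitted vertical offset is directly the zenith height:

```python
import numpy as np

def zenith_height(model_profile, points_xy, points_z):
    """Fit the vertical offset of the reference model to measured points.

    model_profile(x, y) returns the model height relative to the model
    zenith (0 at the zenith, negative elsewhere). For a pure vertical
    offset, the least-squares solution is the mean residual, which then
    equals the zenith height of the measured contact.
    """
    model_z = np.array([model_profile(x, y) for x, y in points_xy])
    return float(np.mean(np.asarray(points_z) - model_z))
```

With e.g. 5 to 10 arbitrary points on the contact surface, no measurement point needs to coincide with the zenith itself.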

Use for Inspection

[0159] After determining the height profile of the surface, the values Δz.sub.y are known. Thus, from the intensity data i.sub.xy1 of one of the channels, the values of the reflection coefficients r.sub.xy,wafer can be determined according to equation (17):

[00014] $r_{xy,\text{wafer}} = \dfrac{i_{xy1} - d_{xy1}}{q_{xy} \cdot M_{xy1}^{\max}\left(1 + \cos\!\left[\frac{2\pi}{\lambda_x/2}\,\Delta z_y\right]\right)}$  (17)

[0160] From this, a monochromatic image at the respective wavelength can be created for each wavelength x by merging the pixel values calculated in this way for all y-locations.

[0161] To improve the image quality, the intensity values of both channels (z=1, 2) can also be used according to equation (18) and an averaged image can be created.

[00015] $r_{xy,\text{wafer}} = \dfrac{1}{2\,q_{xy}} \left( \dfrac{i_{xy1} - d_{xy1}}{M_{xy1}^{\max}\left(1 + \cos\!\left[\frac{2\pi}{\lambda_x/2}\,\Delta z_y\right]\right)} + \dfrac{i_{xy2} - d_{xy2}}{M_{xy2}^{\max}\left(1 + \cos\!\left[\frac{2\pi}{\lambda_x/2}\,(\Delta z_y - \Delta z_{ry})\right]\right)} \right)$  (18)
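Equation (18) maps directly onto array arithmetic. A sketch with hypothetical names, per wavelength x, where the per-pixel quantities may be NumPy arrays over y:

```python
import numpy as np

def reflectance_image(i1, i2, d1, d2, q, m1_max, m2_max, dz, dz_r, wavelength):
    """Averaged reflection coefficient per equation (18).

    i1, i2: intensities of the two channels; d1, d2: dark values;
    q: irradiated intensity; dz: height profile values; dz_r: mirror
    phase offset. All per-pixel quantities may be NumPy arrays.
    """
    k = 2.0 * np.pi / (wavelength / 2.0)
    t1 = (i1 - d1) / (m1_max * (1.0 + np.cos(k * dz)))
    t2 = (i2 - d2) / (m2_max * (1.0 + np.cos(k * (dz - dz_r))))
    return (t1 + t2) / (2.0 * q)
```

Pixels where one cosine term approaches −1 (destructive interference in that channel) would in practice fall back to the single-channel form analogous to equation (17) for the other channel, since the corresponding denominator vanishes.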

[0162] These images can be used for further tasks, such as searching for defects or checking the lateral alignment of the data images.

Embodiment Variants

[0163] Alternative embodiment variants are intended to be part of the invention without loss of generality.

[0164] Thus, the use of TDI technology in the invention explained here serves only to improve the signal-to-noise ratio and is not necessary for the principle of the invention. For simpler requirements, therefore, instead of the TDI multi-channel line sensor cameras (line cameras 210 and 220), an arrangement of three independent line sensor cameras or of three independent single-channel TDI line sensor cameras in each case can also be selected. The further spatial splitting required for this embodiment can be achieved by increasing the respective distance between the prisms 250 and 270 and the tube optics 230 and 200, respectively. The tube optics 230 and 200, respectively, can also be designed as 3 individual optics for this purpose.

[0165] Alternatively, conventional area scan cameras or TDI single-channel line scan cameras operated in area scan mode can be used. In such an arrangement, the clock frequency is reduced accordingly to, e.g. a clock of 1 kHz (for the described area readout mode of a TDI single-channel line scan camera) and more than 16 million height values per second are still obtained.

[0166] It is understood that when using area scan cameras or TDI single-channel line scan cameras in area readout mode, the three wavelengths x=1, 2 and 3 are imaged onto different lines of the camera sensors. The assignment to each other of the obtained signal images, which at the three wavelengths each look at the same point on the wafer surface 40, is carried out here by a spatial assignment of the areas of the camera sensor. For the evaluation shown in principle above, it is irrelevant whether the assignment is made spatially (for two-dimensional sensors) or with a time offset (for line sensors).

[0167] It should be noted that for lateral high-resolution applications (e.g. in the single-digit μm range), a lateral and rotational correction for the recorded signal images is required anyway due to the use of multiple cameras, because an adjustment of the entire arrangement to an offset of the camera of less than 1 μm is hardly achievable in a mechanical manner. Such a mathematical correction can usually be made possible by recording a reference pattern from which the exact location on the wafer viewed by each pixel can be determined.

[0168] In further embodiment variants, the number of wavelengths used can be adapted to the required working range. For particularly small working ranges, an embodiment with only one wavelength is possible; for small ranges, one with two wavelengths. With only one wavelength, the use according to the invention of the two "sensor arms" shown serves to determine the reflectance of the sample and thus, in contrast to conventional multi-wavelength interferometry arrangements, enables the measurement of profiles with changing or unknown materials. For even larger working ranges, or to improve reliability through redundancy of the measurement, an extension to more than 3 wavelengths is suitable, which can be implemented in particular with the above-mentioned multi-block TDI camera (cameras with 7 TDI blocks have already been presented).

[0169] In further embodiment variants, the combination of the 2 interferometry sensors presented here (line scan cameras 210, 220) can also be split into an embodiment with two successive measurements, or implemented as two measuring heads used in succession.

[0170] The illumination can suitably be provided by continuously radiating monochromatic light sources. Lasers are just as suitable for this purpose as broader-band beam sources combined with appropriate interference filters. The only condition is that the coherence length of the light used is sufficiently large for the working range to be implemented.

[0171] For the method according to the invention, it is sufficient that a phase offset is created between the two beams, thus between the first mirror beam 490 and the second mirror beam 500 of the “reference arm”, which is different from an integer multiple of the wavelength for all wavelengths used. This can be accomplished by spatially offsetting the mirrors 60 and 70, as shown in the exemplary embodiment. However, it is also possible to provide one mirror with one or two different coatings to create the phase offset. For example, a phase offset is created at an applied transparent layer by interference of the beams reflected at the front and back of the layer.

[0172] Likewise, instead of the beam divider 90, an adhesively bonded prism with a square cross-section can be used, which is provided with a partially mirror-coated layer on the adhesively bonded 45° surface. A partially transparent mirror is thus formed. When using such a prism, the "reference arm" can be created by vapor deposition of a mirror coating directly onto the outer surface of the prism. By applying two different layer thicknesses or different materials, the required phase offset can again be created. This arrangement results in a reduction of the interference contrast because of the clearly different dispersion in the "reference and measuring arm" of the interferometer; however, at least for simple requirements it is advantageous because of its robustness. An embodiment with an adhesively bonded prism is shown in FIGS. 5 and 6.