OPTICAL MEASUREMENT SYSTEM AND OPTICAL MEASUREMENT METHOD
20250334525 · 2025-10-30
Assignee
Inventors
- Kensaku SHIMODA (Hirakata-shi, Osaka, JP)
- Tsutomu MIZUGUCHI (Hirakata-shi, Osaka, JP)
- Hiroyuki SANO (Hirakata-shi, Osaka, JP)
CPC classification
G03H2001/005; G03H1/0443; G01B9/02047; G03H1/0866; G03H2001/0456; G03H1/08 (PHYSICS)
International classification
G01N21/95 (PHYSICS)
Abstract
An optical measurement system includes a first light source that generates near infrared rays, a silicon-based image sensor, and an optical system including a beam splitter that divides light from the first light source into first light and second light. The optical system records, with the image sensor, a first hologram resulting from modulation, with the second light, of light obtained by illumination of a sample with the first light, the second light being diverging light.
Claims
1. An optical measurement system comprising: a first light source configured to generate near infrared rays; a silicon-based image sensor; and an optical system comprising a beam splitter configured to divide light from the first light source into first light and second light, wherein the optical system is configured to record with the image sensor, a first hologram resulting from modulation with the second light, of light obtained by illumination of a sample with the first light, the second light being diverging light.
2. The optical measurement system according to claim 1, wherein the optical system is configured to generate the first hologram from transmitted light obtained by illumination of the sample with the first light, and in the optical system, a second hologram is recorded from transmitted light obtained by illumination with the first light, of a substrate instead of the sample, the substrate being included in the sample and not being an object to be measured.
3. The optical measurement system according to claim 1, wherein the optical system is configured to generate the first hologram from reflected light obtained by illumination of the sample with the first light, and in the optical system, a second hologram is recorded from reflected light obtained by illumination with the first light, of a reference plane instead of the sample.
4. The optical measurement system according to claim 1, further comprising: a second light source configured to generate visible light; and a processing apparatus, wherein the optical system is switchable between a first configuration in which the first hologram is generated from transmitted light obtained by illumination of the sample with the first light and a second configuration in which the first hologram is generated from reflected light obtained by illumination of the sample with the first light, and the processing apparatus is configured to measure an internal structure of the sample based on the first hologram recorded when the first light source is combined with the first configuration of the optical system, and measure a surface geometry of the sample based on the first hologram recorded when the second light source is combined with the second configuration of the optical system.
5. The optical measurement system according to claim 1, wherein the optical system is an off-axis holography optical system.
6. The optical measurement system according to claim 1, wherein the optical system comprises a restriction mechanism configured to restrict a size of a range where the sample is illuminated with the first light such that a component corresponding to the first light is not superimposed on a component other than the component corresponding to the first light in a spatial frequency domain of a hologram recorded with the image sensor.
7. An optical measurement method using an optical system comprising a beam splitter configured to divide light from a first light source configured to generate near infrared rays into first light and second light, the optical measurement method comprising: recording with a silicon-based image sensor, a first hologram resulting from modulation with the second light, of light obtained by illumination of a sample with the first light, the second light being diverging light; and recording with the image sensor, a second hologram resulting from modulation of the first light with the second light while there is no sample.
8. An optical measurement system comprising: a light source; an optical system comprising a beam splitter configured to divide light from the light source into first light and second light; an image sensor configured to record a hologram generated by the optical system; and a processing apparatus configured to calculate an amplitude phase distribution at a sample plane based on a first hologram and a second hologram, the sample plane being a plane of interest regarding a sample, the first hologram resulting from modulation with the second light, of light obtained by illumination of the sample with the first light, the second hologram resulting from modulation of the first light with the second light while there is no sample, wherein the optical system comprises a mechanism configured to change a manner of illumination with the first light, and the processing apparatus is configured to calculate a composed amplitude phase distribution by summing the amplitude phase distribution calculated for each manner of illumination with the first light, with the amplitude phase distribution being maintained as a complex number.
9. The optical measurement system according to claim 8, wherein the mechanism is configured to change an angle of illumination with the first light.
10. The optical measurement system according to claim 9, wherein the mechanism changes an azimuth angle while an angle of incidence of the first light is constant.
11. The optical measurement system according to claim 8, wherein the optical system comprises a restriction mechanism configured to restrict a size of a range where the sample is illuminated with the first light such that a component corresponding to the first light is not superimposed on a component other than the component corresponding to the first light in a spatial frequency domain of a hologram recorded with the image sensor.
12. The optical measurement system according to claim 8, wherein the processing apparatus is configured to provide a user interface screen configured to receive setting of the number of manners of illumination with the first light.
13. An optical measurement method using an optical system comprising a beam splitter configured to divide light from a light source into first light and second light, the optical measurement method comprising: recording with an image sensor, a first hologram resulting from modulation with the second light, of light obtained by illumination of a sample with the first light; recording with the image sensor, a second hologram resulting from modulation of the first light with the second light while there is no sample; changing a manner of illumination with the first light; calculating an amplitude phase distribution at a sample plane which is a plane of interest regarding the sample, based on the first hologram and the second hologram for each manner of illumination with the first light; and calculating a composed amplitude phase distribution by summing amplitude phase distributions calculated for respective manners of illumination with the first light, with the amplitude phase distributions being maintained as complex numbers.
14. The optical measurement method according to claim 7, further comprising generating the first hologram from transmitted light obtained by illumination of the sample with the first light, wherein a second hologram is recorded from transmitted light obtained by illumination with the first light, of a substrate instead of the sample, the substrate being included in the sample and not being an object to be measured.
15. The optical measurement method according to claim 7, further comprising generating the first hologram from reflected light obtained by illumination of the sample with the first light, wherein a second hologram is recorded from reflected light obtained by illumination with the first light, of a reference plane instead of the sample.
16. The optical measurement method according to claim 7, further comprising: measuring an internal structure of the sample based on the first hologram recorded when the first light source is combined with a first configuration in which the first hologram is generated from transmitted light obtained by illumination of the sample with the first light; and measuring a surface geometry of the sample based on the first hologram recorded when the second light source is combined with a second configuration in which the first hologram is generated from reflected light obtained by illumination of the sample with the first light.
17. The optical measurement method according to claim 13, wherein the changing the manner of illumination with the first light comprises changing an angle of illumination with the first light.
18. The optical measurement method according to claim 13, wherein the changing the manner of illumination with the first light comprises changing an azimuth angle while an angle of incidence of the first light is constant.
19. The optical measurement method according to claim 13, further comprising: restricting a size of a range where the sample is illuminated with the first light such that a component corresponding to the first light is not superimposed on a component other than the component corresponding to the first light in a spatial frequency domain of a hologram recorded with the image sensor.
20. The optical measurement method according to claim 13, further comprising: providing a user interface screen configured to receive setting of the number of manners of illumination with the first light.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
[0043] An embodiment of the present invention will be described in detail with reference to the drawings. The same or corresponding elements in the drawings have the same reference characters allotted and description thereof will not be repeated.
<A. Optical Measurement System>
[0044] Initially, an optical measurement system according to the present embodiment makes use of digital holography where diverging light such as a point light source is used as a reference beam. In the present embodiment, an exemplary configuration based on lensless digital holography where there is no lens between a sample and an image sensor will be described.
[0045] In the description below, an optical measurement system where an off-axis holography optical system is adopted will mainly be described. A transmission optical system will be illustrated in the first embodiment, and a reflection optical system in the second embodiment. The present embodiment may encompass both the first embodiment and the second embodiment.
[0046] The optical measurement system according to the present embodiment measures a surface geometry and an internal structure of a sample. Furthermore, the optical measurement system according to the present embodiment can also measure an index of refraction of the sample. While the optical measurement system can measure any sample, it can be used, for example, for inspection of a semiconductor surface, measurement of the thickness or the distribution of indices of refraction of a film product, evaluation of the surface roughness or undulation of a precisely machined surface, and observation of a biological cell or evaluation of its shape.
B. First Embodiment: Transmission Optical System
(b1: Optical System)
[0047]
[0048] The optical system shown in
[0049] The optical system shown in
[0050] A processing apparatus 100 measures a surface geometry, an internal structure, and the like of sample S based on off-axis hologram ILR and off-axis hologram IOR.
[0051] Referring to
[0052] Light source 10 is implemented by a laser or the like, and generates coherent light. In the optical measurement system according to the present embodiment, the wavelength band of light generated by light source 10 may differ depending on what is measured (measurement of the surface geometry or measurement of the internal structure).
[0053] More specifically, in measurement of the surface geometry of sample S, light source 10 that generates visible light may be employed. Specifically, light source 10 that generates light having a component in at least a part of a wavelength range from 380 to 780 nm is employed. For example, a visible light source having a peak wavelength at 532 nm may be employed.
[0054] In measurement of the internal structure of sample S, on the other hand, light source 10 that generates near infrared rays may be employed. Specifically, light source 10 that generates light having a component in at least a part of a wavelength range from 1000 to 1200 nm is employed. For example, a near infrared light source having a peak wavelength at 1030 nm may be employed.
[0055] The optical measurement system according to the present embodiment is configured such that a type of light source 10 can freely be changed.
[0056] Image sensor D records a hologram generated by the optical system shown in
[0057] Beam expander BE expands a cross-sectional diameter of light from light source 10 to a predetermined size. Beam splitter BS1 divides light expanded by beam expander BE into two light beams. One light beam divided by beam splitter BS1 corresponds to in-line reference beam L (first light) and the other light beam corresponds to off-axis reference beam R (second light).
[0058] In-line reference beam L is reflected by mirror M2 and guided to beam splitter BS2. Furthermore, in-line reference beam L passes through half mirror HM2 of beam splitter BS2 and is guided to image sensor D. Objective lens MO and pinhole P are arranged between mirror M2 and beam splitter BS2. In-line reference beam L is condensed by objective lens MO and narrowed in cross-sectional diameter by pinhole P.
[0059] Pinhole P corresponds to the position of a point light source of in-line reference beam L. Objective lens MO and pinhole P implement the point light source of in-line reference beam L.
[0060] Off-axis reference beam R is reflected by mirror M1 and guided to beam splitter BS2. Furthermore, off-axis reference beam R is reflected by half mirror HM2 of beam splitter BS2 and guided to image sensor D. Mask A1 and lens L1 are arranged between mirror M1 and beam splitter BS2. Off-axis reference beam R passes through mask A1 and it is thereafter condensed by lens L1. A light condensation point FP1 which is a position of condensation of light corresponds to the position of the point light source of off-axis reference beam R.
[0061] Mask A1 is provided with an opening pattern SP1 in an area through which off-axis reference beam R passes. An image corresponding to opening pattern SP1 in mask A1 is formed on image sensor D. A size of opening pattern SP1 in mask A1 is determined such that an area outside a surface of beam splitter BS2 on a side of image sensor D is not illuminated with off-axis reference beam R that passes through mask A1.
[0062] By thus determining the size of opening pattern SP1 in mask A1, generation of noise due to undue interference is suppressed.
[0063] Off-axis reference beam R is adjusted such that in-line reference beam L can be recorded as a hologram.
[0064] In-line reference beam L and off-axis reference beam R are superimposed on each other by beam splitter BS2 arranged in a stage preceding image sensor D through optical paths as described above. In other words, image sensor D obtains off-axis hologram ILR resulting from modulation of in-line reference beam L with off-axis reference beam R which is diverging light.
[0065] Beam splitter BS2 is preferably formed in a cubic shape so as to facilitate arrangement thereof in the stage preceding image sensor D. The point light source of in-line reference beam L and the point light source of off-axis reference beam R are arranged in optical proximity to each other owing to beam splitter BS2.
[0066] Referring to
[0067] Measurement optical system 30 includes a mechanism that changes a manner of illumination with illumination light and a mechanism that restricts an illuminated range. More specifically, measurement optical system 30 includes a movable mirror MM, lenses L2, L31, and L32, and a mask A2.
[0068] Sample S to be measured is arranged between measurement optical system 30 and beam splitter BS2.
[0069] When a distance necessary for measurement optical system 30 is longer than a distance necessary for objective lens MO and pinhole P in the optical system shown in
[0070] Light outputted from one side of beam splitter BS1 is used as illumination light Q (first light) for illumination of sample S.
[0071] The optical measurement system according to the present embodiment includes the mechanism that changes the manner of illumination with illumination light Q.
[0072] Off-axis reference beam R (second light) outputted from the other side of beam splitter BS1 is guided to image sensor D through an optical path the same as in
[0073] Object beam O (that is, light that has passed through sample S) obtained by illumination of sample S with illumination light Q passes through half mirror HM2 of beam splitter BS2 and is guided to image sensor D. Lenses L31 and L32, mask A2, and lens L2 are arranged in this order between movable mirror MM and beam splitter BS2.
[0074]
[0075] Illumination light Q is condensed by lenses L31 and L32 and passes through mask A2. Illumination light Q that passes through mask A2 is further condensed by lens L2 and forms an image on sample S.
[0076] Mask A2 corresponds to a restriction mechanism that restricts a range of illumination of sample S with illumination light Q to a predetermined range. Such a mask A2 that an opening pattern SP2 corresponding to a predetermined range is formed in a shielding member may be employed as an exemplary restriction mechanism. Illumination light Q passes through an area corresponding to opening pattern SP2.
[0077] An image of opening pattern SP2 in mask A2 passes through lens L2 and is formed on sample S. In other words, of light with which mask A2 is illuminated, only light in a portion corresponding to opening pattern SP2 passes through mask A2. The range of illumination of sample S with illumination light Q that passes through mask A2 can thus be restricted. By restriction of the range of illumination with illumination light Q, unwanted light can be reduced and measurement accuracy can be enhanced.
[0078] Since the range of illumination may vary depending on the thickness of sample S in optical measurement system 1, opening pattern SP2 in mask A2 is changed, or the position of lens L2 that forms an image of illumination light Q on sample S is changed, as necessary to address such variation.
[0079] The configuration for illumination of mask A2 with illumination light Q is not limited to the configuration shown in
[0080]
[0081] Referring to
[0082] Illumination light Q incident on movable mirror MM is reflected in a direction corresponding to an angle (orientation) of movable mirror MM and incident on image forming optical system 20. Illumination light Q then propagates in a direction of reflection by movable mirror MM, passes through opening pattern SP2 in mask A2, and forms an image on sample S in a shape the same as opening pattern SP2.
[0083]
[0084] The optical system arranged in stages preceding and subsequent to mask A2 is not limited to the optical system shown in
[0085] When movable mirror MM and mask A2 are arranged in optical proximity, lenses L31 and L32 do not have to be provided.
(b2: Measurement Processing)
[0086] Basic processing for measuring a geometry of sample S in optical measurement system 1 according to the first embodiment will now be described. In the description below, the light receiving surface of image sensor D is defined as the recording surface, and the intersection between the recording surface and the central optical axis of beam splitter BS2 is defined as the origin. The direction of the optical axis is defined as the z axis, and two axes orthogonal to the z axis are defined as the x axis and the y axis, respectively. In other words, the optical axis is perpendicular to the recording surface of image sensor D, and the x axis and the y axis are parallel to the recording surface of image sensor D; the same applies to the other embodiments.
[0087] Distributions of object beam O, off-axis reference beam R, and in-line reference beam L at the recording surface of image sensor D can be expressed in general expressions such as expressions (1) to (3) below.
[0088] In-line reference beam L, object beam O, and off-axis reference beam R are beams having the same angular frequency ω and coherent to one another. Off-axis hologram ILR recorded in the optical system shown in
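The expressions (1) to (5) are not reproduced in this text. In notation standard for off-axis digital holography, and consistent with the surrounding description, they would take roughly the following form (a hedged reconstruction, not the original equations; L_0, O_0, R_0 denote amplitude distributions and φ_L, φ_O, φ_R phase distributions at the recording surface):

```latex
L(x, y) = L_0(x, y)\,\exp\!\bigl(i\,\phi_L(x, y)\bigr) \tag{1}
O(x, y) = O_0(x, y)\,\exp\!\bigl(i\,\phi_O(x, y)\bigr) \tag{2}
R(x, y) = R_0(x, y)\,\exp\!\bigl(i\,\phi_R(x, y)\bigr) \tag{3}
I_{LR}(x, y) = |L|^2 + |R|^2 + L R^{*} + L^{*} R \tag{4}
I_{OR}(x, y) = |O|^2 + |R|^2 + O R^{*} + O^{*} R \tag{5}
```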
[0089] Since off-axis hologram ILR is invariable regardless of a state of object beam O, it should only be recorded at least once.
[0090] In the expressions (4) and (5), the first term on the right side corresponds to a light intensity component of object beam O or in-line reference beam L, the second term on the right side corresponds to a light intensity component of off-axis reference beam R, the third term on the right side corresponds to a direct image component produced as a result of modulation of object beam O with off-axis reference beam R, and the fourth term on the right side corresponds to a conjugate image component. As a result of application of a bandpass filter to the expressions (4) and (5) to extract the direct image component in the third term, a complex amplitude off-axis hologram JLR which is a record of in-line reference beam L and a complex amplitude off-axis hologram JOR which is a record of object beam O are calculated as in an expression (6) and an expression (7) below, respectively.
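As a numerically checkable sketch of this band-pass extraction (a 1-D toy model with hypothetical parameters, not the patent's optical system), the direct image component O R* can be isolated from a recorded intensity pattern by keeping only the spatial-frequency bins around the carrier of the off-axis reference beam:

```python
import cmath
import math

N = 128       # samples across the recording surface (toy value)
F_R = 16      # carrier frequency of the off-axis reference beam (assumed)
HALF_W = 8    # half-width of the band-pass window around the carrier

# Hypothetical object beam: unit amplitude, slowly varying phase.
O = [cmath.exp(0.5j * math.sin(2 * math.pi * n / N)) for n in range(N)]
# Off-axis reference beam modelled as a tilted plane wave.
R = [cmath.exp(2j * math.pi * F_R * n / N) for n in range(N)]

# Recorded intensity hologram: |O + R|^2 = |O|^2 + |R|^2 + O R* + O* R.
I = [abs(O[n] + R[n]) ** 2 for n in range(N)]

def dft(x):
    n_pts = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_pts)
                for n in range(n_pts)) for k in range(n_pts)]

def idft(X):
    n_pts = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / n_pts)
                for k in range(n_pts)) / n_pts for n in range(n_pts)]

S = dft(I)

# The direct image component O R* sits around spatial frequency -F_R;
# keep a window of bins around it and zero everything else.
keep = {(-F_R + d) % N for d in range(-HALF_W, HALF_W + 1)}
S_f = [S[k] if k in keep else 0 for k in range(N)]

# Inverse transform yields the complex amplitude hologram J ~ O R*.
J = idft(S_f)
```

Dividing J by the corresponding record of the reference beam then eliminates R, which mirrors the division of expression (7) by expression (6) described below.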
[0091] As a result of division of the expression (7) by the expression (6), a component of off-axis reference beam R is eliminated and a complex amplitude in-line hologram JOL with in-line reference beam L being defined as the reference is calculated as in an expression (8) below.
[0092] A component of in-line reference beam L can be eliminated by multiplying complex amplitude in-line hologram JOL shown in the expression (8) by in-line reference beam L. A method described in WO2020/045584 (PTL 3) can be adopted as a method of calculating in-line reference beam L. Through processing above, an object beam hologram U as shown in an expression (9) below is obtained.
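The expressions (6) to (9) are likewise not reproduced in this text. Based on the operations described (band-pass extraction of the direct image terms, division to remove R, and elimination of L), they would plausibly read:

```latex
J_{LR}(x, y) = L\,R^{*} \tag{6}
J_{OR}(x, y) = O\,R^{*} \tag{7}
J_{OL}(x, y) = \frac{J_{OR}}{J_{LR}} = \frac{O}{L} \tag{8}
U(x, y) = J_{OL}\cdot L = O(x, y) \tag{9}
```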
[0093] When object beam hologram U includes a frequency component that does not satisfy a sampling theorem, correction processing as below is applied to generate a hologram including information from which a state of a plane of interest (which is also referred to as a sample plane below) can be reconstructed, the plane being distant by a predetermined distance from the recording surface. A hologram including information from which the state of the sample plane can be reconstructed is defined as a reconstruction object beam hologram U.sub.. When object beam hologram U satisfies the sampling theorem, object beam hologram U is adopted as it is as reconstruction object beam hologram U.sub..
[0094] By way of example of correction processing, before elimination of in-line reference beam L, the number of sampling points that form an image outputted from image sensor D may be increased by interpolation. Alternatively, a pitch between pixels of image sensor D may be subdivided by application of a division and superimposition step disclosed in WO2020/045584 (PTL 3). By using the division and superimposition step, an amount of computation can be reduced.
[0095] By diffraction calculation by plane wave expansion onto reconstruction object beam hologram U.sub., a distribution of optical waves at any sample plane can be reconstructed. A hologram resulting from propagation over a distance d of reconstruction object beam hologram U.sub. (at a sample plane distant from the recording surface by distance d) by plane wave expansion is denoted as U.sub.d.
[0096] Hologram U.sub.d can be generalized as in an expression (10) below, where d represents a distance from the light receiving surface (recording surface) of image sensor D to a position at which reconstruction is desired, within which M media (m=1, 2, . . . , M) are included, dm represents a distance of each medium, and nm represents an index of refraction of each medium. k.sub.zm in the expression is calculated in accordance with an expression (11).
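Expressions (10) and (11) are not reproduced in this text. From the definitions given (distances d_m, refractive indices n_m, transmission coefficients T_{m,m+1}), a standard plane-wave-expansion form consistent with them would be (writing Û_Σ for the spatial spectrum of the reconstruction object beam hologram, whose subscript is rendered incompletely as "U.sub." in this text):

```latex
U_d(x, y) = \iint \hat{U}_{\Sigma}(k_x, k_y)
  \left[\prod_{m=1}^{M} T_{m,m+1}(k_x, k_y)\; e^{\,i\,k_{zm} d_m}\right]
  e^{\,i\,(k_x x + k_y y)}\; dk_x\, dk_y \tag{10}
k_{zm} = \sqrt{\left(\frac{2\pi n_m}{\lambda}\right)^{2} - k_x^{2} - k_y^{2}} \tag{11}
```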
[0097] When there are a plurality of media, a boundary surface between the media is assumed as being in parallel to the recording surface. A transmission coefficient at the time of incidence from a medium m into a medium m+1 is expressed as T.sub.m, m+1 (k.sub.x, k.sub.y). T.sub.M, M+1 (k.sub.x, k.sub.y) is regarded as being always 1.
[0098] For example, in the case of propagation only through air by distance d, a condition of M=1, d.sub.1=d, and n.sub.1=1 is set.
[0099] When the transmission coefficient at the time of incidence from medium m into medium m+1 can be regarded as being substantially even without depending on wave numbers k.sub.x and k.sub.y, calculation may be simplified with T.sub.m, m+1 being defined as T.sub.m, m+1=1.
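The single-medium case (M=1, n=1, T=1) of this propagation can be sketched numerically. The following 1-D angular-spectrum toy model (all sizes and the pixel pitch are illustrative assumptions, not the patent's parameters) propagates a field over a distance d and back, which should recover the original field:

```python
import cmath
import math

N = 64
WAVELENGTH = 1.030e-6   # 1030 nm near-infrared source (from the description)
PITCH = 5e-6            # assumed sensor pixel pitch
K0 = 2 * math.pi / WAVELENGTH

def dft(x):
    n_pts = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_pts)
                for n in range(n_pts)) for k in range(n_pts)]

def idft(X):
    n_pts = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / n_pts)
                for k in range(n_pts)) / n_pts for n in range(n_pts)]

def propagate(u, d):
    """Plane wave expansion over distance d through air (M=1, n=1, T=1)."""
    X = dft(u)
    out = []
    for k in range(N):
        kk = k if k < N // 2 else k - N        # signed frequency index
        kx = 2 * math.pi * kk / (N * PITCH)    # angular spatial frequency
        kz2 = K0 * K0 - kx * kx
        # Evanescent components (kz^2 < 0) are suppressed.
        h = cmath.exp(1j * math.sqrt(kz2) * d) if kz2 > 0 else 0
        out.append(X[k] * h)
    return idft(out)

# Toy field: a Gaussian amplitude spot on the recording surface.
u0 = [complex(math.exp(-((n - N // 2) / 8.0) ** 2)) for n in range(N)]
u_back = propagate(propagate(u0, 1e-3), -1e-3)  # forward 1 mm, then back
```

Because propagation only applies a unit-magnitude phase factor to each propagating plane-wave component, the forward and backward steps cancel exactly, which is a convenient self-check for an implementation of expression (10).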
(b3: Mechanism That Changes Manner of Illumination with Illumination Light)
[0100] A mechanism that changes a manner of illumination with illumination light Q will now be described.
[0101] The optical measurement system according to the present embodiment records a plurality of off-axis holograms I.sub.R (which will be an object beam hologram or an illumination light hologram depending on whether or not there is sample S) by changing a manner of illumination with illumination light Q. For example, an illumination angle may be adopted as the manner of illumination with illumination light Q.
[0102]
[0103] Referring to
[0104] As shown in
[0105] When the illumination angle is changed, as shown in
[0106] Then, as shown in
[0107] The illumination angle is changed to i angles (i being an integer not smaller than 2) (illumination light Q.sub.i) and a plurality of off-axis holograms I.sub.ORi are recorded. Reconstruction object beam holograms U.sub. i are calculated from respective off-axis holograms I.sub.ORi. An optical wave distribution obtained by propagation of reconstruction object beam holograms U.sub. i to the sample plane by plane wave expansion (a distance of propagation being identical) is defined as an object beam distribution U.sub.Si. Furthermore, an amplitude phase distribution U.sub.Pi at the sample plane is calculated by dividing object beam distribution U.sub.Si of a complex amplitude by illumination light distribution Q.sub.Si at the sample plane generated by illumination light Q.sub.i.
[0108] Thus, amplitude phase distribution U.sub.Pi at the sample plane is calculated based on the object beam hologram (first hologram), resulting from modulation with off-axis reference beam R of object beam O obtained by illumination of sample S with illumination light Q, and the illumination light hologram (second hologram), resulting from modulation of illumination light Q with off-axis reference beam R in the absence of sample S. Amplitude phase distribution U.sub.Pi is obtained by adding the amount of phase shift caused by sample S to the phase distribution of illumination light Q.sub.i.
[0109] Finally, a composed amplitude phase distribution U.sub.SA is calculated by summing amplitude phase distribution U.sub.Pi calculated for each manner (illumination angle) of illumination with illumination light, with the amplitude phase distribution being maintained as a complex number, in accordance with an expression (12) below.
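Expression (12) is not reproduced in this text. From the description above (division of the object beam distribution by the illumination light distribution, then complex summation over the manners of illumination), it would plausibly read:

```latex
U_{Pi} = \frac{U_{Si}}{Q_{Si}}, \qquad
U_{SA}(x, y) = \sum_{i} U_{Pi}(x, y) \tag{12}
```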
[0110] As shown in the expression (12) above, amplitude phase distribution U.sub.Pi is summed, with the amplitude phase distribution being maintained as the complex number, without being converted to intensity (absolute value), so that it can be handled as information having directivity such as a vector. Consequently, if phase components at positions other than the sample plane can be regarded as being sufficiently random, they can be expected to cancel each other and converge to zero which is an average value. Consequently, influence by diffracted light caused by scattering at object B can be suppressed. The SN ratio of the reconstructed image can thus be improved.
[0111] When amplitude phase distribution U.sub.Pi is converted to intensity (absolute value) and then summed, the intensity constantly has a positive value, and hence influence by diffracted light caused by scattering at object B remains.
[0112] In order to regard the phase components at positions other than the sample plane as sufficiently being random, a sufficiently large number of manners (for example, illumination angles) of illumination with illumination light Q are preferably set. The SN ratio is in proportion to a square root of the number of manners of illumination with illumination light Q (for example, measurement at four illumination angles is twice as high in SN ratio as measurement at a single illumination angle).
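The cancellation argument above can be illustrated with a toy calculation (all values assumed, not from the patent): a phasor that is identical across illuminations survives complex summation at full strength, phasors with random phases average toward zero, and summing intensities instead would leave the noise term at full strength:

```python
import cmath
import math
import random

random.seed(0)      # deterministic toy experiment
N_ILLUM = 400       # number of illumination manners (assumed)

# Component that is the same for every illumination (the sample plane):
signal = sum(cmath.exp(0.3j) for _ in range(N_ILLUM)) / N_ILLUM

# Component whose phase is assumed random per illumination (scattering):
noise = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
            for _ in range(N_ILLUM)) / N_ILLUM

# Summing intensities (absolute values squared) instead keeps the noise:
noise_intensity = sum(abs(cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))) ** 2
                      for _ in range(N_ILLUM)) / N_ILLUM
```

The magnitude of `noise` scales as roughly 1/sqrt(N_ILLUM) (about 0.05 here), consistent with the square-root dependence of the SN ratio stated below, while `noise_intensity` stays at 1.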
[0113] A range within which the illumination angle can be changed is optically determined by a size of sample S, a resolution of image sensor D, or a width of a field of view of image sensor D. The illumination angle is changed within an optically allowable range. Change in illumination angle means change in azimuth angle while an angle of incidence of illumination light Q is constant.
[0114] As will be described later, the number of illumination angles to be changed is typically determined by required quality and an allowable time period for processing.
[0115] When the coordinate of an image reconstructed from reconstruction object beam hologram U.sub. i is displaced by disturbance, the spatial resolution (in the xy plane) may decrease, although the phase resolution does not. In this case, summing is preferably performed after the displacement of the reconstructed image or sample S is corrected. More specifically, the coordinate at which the correlation between object beams is maximum is calculated, and with that coordinate as the reference, the displacement of the reconstructed image or sample S is obtained. In other words, the displacement from the reference coordinate is calculated and corrected, and then the summing processing is performed.
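A minimal sketch of this registration step (1-D, real-valued, with hypothetical profiles) finds the displacement at which the cross-correlation between a reference reconstruction and a displaced one is maximum:

```python
def best_shift(ref, moved):
    """Return the circular shift s maximizing sum(ref[n] * moved[n + s])."""
    n_pts = len(ref)
    best_val, best_s = None, 0
    for s in range(-n_pts // 2, n_pts // 2):
        c = sum(ref[n] * moved[(n + s) % n_pts] for n in range(n_pts))
        if best_val is None or c > best_val:
            best_val, best_s = c, s
    return best_s

# Toy reconstructed profile and a copy displaced by 3 samples.
ref = [0.0] * 32
for n in range(10, 16):
    ref[n] = 1.0
moved = [ref[(n - 3) % 32] for n in range(32)]

shift = best_shift(ref, moved)  # displacement to correct before summing
```

For complex-valued object beams, the magnitude of the complex correlation would be maximized instead; the displaced reconstruction is then shifted back by the estimated amount before the summation of expression (12).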
(b4: Mechanism that Restricts Range of Illumination)
[0116] A mechanism that restricts a range of illumination will now be described.
[0117] In order to apply the bandpass filter to the expressions (4) and (5) to extract the direct image component in the third term, the direct image component should not be superimposed on the light intensity component or the conjugate image component in the spatial frequency band. In the present embodiment, the restriction mechanism such as mask A2 restricts the range of illumination light Q to prevent deterioration of the image due to superimposition of spatial frequency bands.
[0118] While a degree of freedom of an illumination method is maintained by formation on sample S, of an image of opening pattern SP2 in mask A2 arranged at a position distant from sample S, a spatial frequency bandwidth included in interference fringes is appropriately controlled to efficiently make use of the spatial frequency bandwidth within which image sensor D can make recording.
[0119] In optical measurement system 1, mask A1 and lens L1 implement the point light source of off-axis reference beam R. A spatial frequency f of interference fringes at any point on the recording surface can be expressed with an angle of incidence θ.sub.O of object beam O at that point and an angle of incidence θ.sub.R of off-axis reference beam R, as shown in an expression (13) below.
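Expression (13) is not reproduced in this text. For two beams incident at angles θ_O and θ_R, the standard interference fringe frequency consistent with the description would be:

```latex
f = \frac{\left|\sin\theta_O - \sin\theta_R\right|}{\lambda} \tag{13}
```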
[0120] f=|sin θ.sub.O-sin θ.sub.R|/λ . . . (13)
[0121] Referring to
[0122] In contrast, referring to
[0123] In other words, by providing off-axis reference beam R from the point light source, an angle formed between a light beam (object beam O) generated from any point of sample S and a light beam generated from the point light source of off-axis reference beam R can substantially be constant at any point on the recording surface.
[0124] When object beam O is regarded as a set of wave sources located on the same z plane where the point light source of off-axis reference beam R is located, relation shown in an expression (14) below is approximately satisfied between a position (x.sub.s, y.sub.s) of the wave source on the z plane and a corresponding spatial frequency (u.sub.s, v.sub.s).
[0125] z.sub.L in the expression represents a distance in the direction of the z axis from the point light source of off-axis reference beam R to the recording surface and λ represents a wavelength. The spatial frequency in the x direction is denoted as u and the spatial frequency in the y direction is denoted as v.
[0126] As shown in the expression (14), it can be seen that the position of the wave source (object beam O) on the z plane and the spatial frequency (a coordinate of a spectral component) satisfy approximately linear relation. Therefore, spread of the spatial frequency band of the direct image component can be controlled by restriction of an area where the wave source (object beam O) is present. The spatial frequency band can thus efficiently be made use of.
[0127] The area where the wave source is present means a range where sample S is illuminated. In other words, the range of illumination can be restricted by optimization of opening pattern SP2 in mask A2 and thus the spatial frequency band can appropriately be controlled. Since mask A2 serves to simply restrict the range of illumination of sample S, it does not cause distortion of a reconstructed image so long as a complex amplitude of object beam O is correctly recorded.
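As a worked example of how the illuminated area bounds the spectral band: assuming expression (14) takes the common linear form u.sub.s ≈ x.sub.s/(λ·z.sub.L) for a spherical reference wave (an assumed form, since the expression itself is not reproduced here), the bandwidth occupied by the direct image component scales directly with the opening in mask A2.

```python
# Hedged numeric sketch: assuming the linear relation of expression (14)
# has the common form u_s = x_s / (wavelength * z_L). All values are
# illustrative, not from the source.
wavelength = 850e-9   # assumed near-infrared wavelength, metres
z_L = 50e-3           # assumed point-source-to-recording-surface distance

def direct_image_bandwidth(illuminated_width):
    """Spatial-frequency band (cycles/m) occupied by wave sources spread
    over illuminated_width on the sample plane."""
    return illuminated_width / (wavelength * z_L)

w_full = direct_image_bandwidth(2e-3)   # 2 mm opening in mask A2
w_half = direct_image_bandwidth(1e-3)   # 1 mm opening
```

Halving the opening halves the occupied band, which is the lever used to keep the third term clear of the other terms.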
[0128] A method of determining a size of opening pattern SP2 in mask A2 will now be described.
[0129] As described above, off-axis hologram I.sub.R recorded in the optical system shown in
[0130] A coordinate of the origin of image sensor D is expressed as (0, 0, 0) and a coordinate of the center of sample S is expressed as (0, 0, z.sub.L).
[0131]
[0132] A bandwidth W.sub.x in the u direction of the conjugate image component (the fourth term) and a bandwidth W.sub.y in the v direction thereof can also approximately be expressed as in the expression (15) above. Central frequency u.sub.c in the u direction and central frequency v.sub.c in the v direction of the conjugate image component (the fourth term) are expressed, with signs in the expression (16) above being inverted.
[0133] A bandwidth of a component calculated by combining the light intensity component in the first term and the light intensity component in the second term spreads over a size twice as large as that expressed in the expression (15) with the origin being defined as the center.
[0134] Relation above can be shown as in
[0135]
[0136] Referring to
[0137] Referring to
[0138] In order to extract only the third term (direct image component) including information on object beam O from a spectrum in the Fourier space shown in
[0139] On the other hand, since the spatial frequency band of image sensor D is limited, it is not preferable to set excessively high central frequencies u.sub.c and v.sub.c. Therefore, in order to efficiently make use of the spatial frequency band of image sensor D, the component in the third term should be brought closer to the limit up to which it is not superimposed on components in other terms (the first term, the second term, and the fourth term).
[0140] In order to arrange the bands as being proximate to one another, the spatial frequency bandwidth is restricted to be kept within an appropriate range. When the off-axis reference beam is diverging light (point light source), relation in the expression (5) above is satisfied. Therefore, by restriction of the range of illumination with illumination light Q, the spatial frequency bandwidth of each component can be restricted to be kept within an appropriate range.
[0141] The size of the range of illumination of sample S with illumination light Q (that is, opening pattern SP2 in mask A2 serving as the restriction mechanism) is thus determined such that the component (the third term) corresponding to illumination light Q is not superimposed on components (the first term and the second term) other than the component corresponding to illumination light Q in the Fourier space (spatial frequency domain) of the hologram recorded with image sensor D.
[0142] By restriction of the spatial frequency bandwidth of each component to be kept within the appropriate range, the spatial frequency band of image sensor D can efficiently be made use of and noise caused by superimposition of the spatial frequency band can also be suppressed.
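The extraction of the third term can be illustrated with a generic off-axis demodulation sketch: transform the hologram, keep only a window around the central frequency (u.sub.c, v.sub.c) of the direct image component, and shift that window to zero frequency. Frequencies are given in pixel units and the rectangular window is illustrative; this is not the patent's exact implementation.

```python
import numpy as np

def extract_direct_image(hologram, uc, vc, half_width):
    """Bandpass-filter an off-axis hologram: keep only the spectral
    region of half-size half_width around the carrier (uc, vc), given
    as pixel frequency indices, and re-centre it at zero frequency,
    suppressing the intensity terms and the conjugate image."""
    spec = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = spec.shape
    cy, cx = ny // 2 + vc, nx // 2 + uc   # carrier position after fftshift
    mask = np.zeros_like(spec)
    mask[cy - half_width:cy + half_width, cx - half_width:cx + half_width] = 1
    # Shift the selected band back to the origin (removes the carrier)
    picked = np.roll(spec * mask, shift=(-vc, -uc), axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(picked))
```

The window half-width plays the role of the bandwidth limit discussed above: it must be large enough to pass the direct image component but small enough to exclude the other terms.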
[0143] Though application to the off-axis holography optical system has been described, mask A2 described above is also effective in an optical system adapted to another type of holography in which diverging light (that is, a point light source or a light source that can be regarded as a point light source) is employed as the reference beam, for restriction of the spatial frequency bandwidth of the direct image component to within the range within which recording with image sensor D can be made.
[0144] Mask A2 used in the optical measurement system according to the present embodiment may be similar in outer geometry to a field stop used in an optical microscope. The field stop, however, is used for the purpose of suppression of stray light caused by impingement of needless light (light out of the field of view) on a wall in an optical path. Though the noise level can be lowered by the field stop, the stray light itself is small in the first place. Therefore, unless detection of weak light is aimed at, no great problem will arise even without taking positive measures.
[0145] In contrast, restriction of the range of illumination in digital holography where diverging light is adopted as the reference beam is effective for restriction of the spatial frequency bandwidth included in interference fringes to be kept within the range within which recording with image sensor D can be made. Mask A2 used in the optical measurement system according to the present embodiment is used for this purpose.
[0146] Mask A2 used in the optical measurement system according to the present embodiment thus exhibits an effect different from the effect exhibited by the field stop used in the optical microscope.
[0147] Though use of mask A2 provided with opening pattern SP2 of a predetermined size as an exemplary restriction mechanism is illustrated in the description above, without being limited as such, the restriction mechanism may be implemented by any optical element.
[0148] For example, the size of the opening pattern (a cross-sectional area within which illumination light passes) may freely be changed with the use of an optical element capable of controlling a transmittance of light such as a polarization mirror or liquid crystal. As the size of the opening pattern can freely be changed, change in distance between sample S and image sensor D or change in position of the point light source of the off-axis reference beam can readily be addressed.
(b5: Illumination Light Profile)
[0149] The optical measurement system according to the present embodiment requires an illumination light profile for calculation of composed amplitude phase distribution U.sub.SA.
[0150] For example, change in distance from the recording surface to the sample plane is also assumed in the case of measurement with sample S successively being replaced. In this case, by diffraction calculation, an illumination light profile at another distance can be calculated based on an illumination light profile at the sample plane distant from the recording surface by a certain distance. Therefore, it is not necessary to record the illumination light profile each time the distance from the recording surface to the sample plane changes.
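The diffraction calculation mentioned here can be sketched with the plane wave expansion (angular spectrum) method: decompose the recorded profile into plane waves, apply the free-space phase factor for the extra distance, and recompose. The pixel pitch and wavelength parameters are assumptions of the sketch, not values from the source.

```python
import numpy as np

def propagate(field, dz, wavelength, dx):
    """Propagate a sampled complex field by distance dz using plane wave
    expansion (the angular spectrum method). dx is the pixel pitch."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    # Longitudinal wave number; evanescent components are cut off
    arg = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the transfer function is unitary over the propagating band, a profile recorded at one sample-plane distance can be re-propagated to any other distance, which is why the profile need not be recorded anew each time the distance changes.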
[0151] When sample S includes a substrate which is not an object to be measured, an illumination light profile of illumination light that passes through the substrate can be calculated by performing, on the illumination light profile, calculation of propagation within a medium by plane wave expansion. In this case, when the approximate thickness and index of refraction of the substrate are already known, a sample consisting only of the substrate (a sample different from sample S to be measured) does not have to be prepared, and the illumination light profile does not have to be recorded.
[0152] A layer of sample S other than the substrate can be measured with the use of the profile of illumination light that passes through the substrate. When recording only in connection with the substrate can be made, by recording of the profile of illumination light that passes through the substrate while the substrate alone is arranged, calculation of propagation within the medium in the substrate can also be omitted.
(b6: Measurement of Surface Geometry of Sample)
[0153] A method of measuring the surface geometry of sample S with optical measurement system 1 according to the first embodiment will now be described. An amount of phase shift caused by the sample is used for measurement of the surface geometry of sample S. In this case, visible light is preferably used as illumination light Q.
[0154] In the optical measurement system according to the present embodiment, the illumination light profile recorded while no sample S is arranged is used to subtract phase distribution φ.sub.Q of illumination light from the phase distribution of composed amplitude phase distribution U.sub.SA to thereby calculate an amount of phase shift Δφ caused by sample S.
[0155] Optical measurement system 1 according to the first embodiment measures the surface geometry of sample S with the use of a relational expression of amount of phase shift Δφ caused by sample S and a thickness d of sample S shown in an expression (17) below.
[0156] k.sub.z1 in the expression represents a wave number in the z direction in sample S, k.sub.z2 represents a wave number in the z direction in the medium where sample S is present, φ.sub.c represents a phase correction term, and λ represents a light source wavelength. Wave numbers k.sub.z1 and k.sub.z2 can be calculated in accordance with an expression (18) and an expression (19) below, respectively.
[0157] n.sub.1 in the expression represents an index of refraction of a medium where sample S is present and n.sub.2 represents an index of refraction of sample S. For example, when sample S is present in vacuum, an index of refraction n.sub.1 is set to n.sub.1=1.
[0158] Since wave number k.sub.x in the x direction and wave number k.sub.y in the y direction in the expression represent amounts of phase shift per unit length in the x direction and the y direction, they can be calculated by differentiation of phase distribution φ.sub.Q of illumination light Q at the sample plane as shown in an expression (20) and an expression (21) below.
[0159] Phase correction term φ.sub.c in the expression (17) is used for correction of phase shift due to a complex transmittance when the transmittance attains a complex value for such a reason as light absorption by sample S. When phase shift due to the complex transmittance can be regarded as being even in the entire sample S for such a reason that sample S is entirely of the same material, phase correction term φ.sub.c may be omitted.
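Under the definitions above, and with the caveat that expressions (17) to (19) themselves are not reproduced in this text, a plausible reading is Δφ = (k.sub.z1 − k.sub.z2)·d with each k.sub.z computed from the corresponding index of refraction. The sketch below assumes that reading; the pairing of k_z1 with the sample index n_2 and of k_z2 with the medium index n_1, and all numeric values, are assumptions.

```python
import numpy as np

# Hedged sketch of expressions (17)-(19): assumed forms
#   delta_phi = (k_z1 - k_z2) * d          (phase correction omitted)
#   k_z       = sqrt((2*pi*n/wavelength)**2 - kx**2 - ky**2)
wavelength = 633e-9   # assumed visible-light source, metres

def kz(n, kx, ky):
    """Longitudinal wave number for refractive index n and transverse
    wave numbers kx, ky (expressions (18)/(19), assumed form)."""
    return np.sqrt((2 * np.pi * n / wavelength)**2 - kx**2 - ky**2)

def thickness(delta_phi, n_sample, n_medium, kx=0.0, ky=0.0):
    """Thickness d of sample S from the measured amount of phase shift."""
    return delta_phi / (kz(n_sample, kx, ky) - kz(n_medium, kx, ky))
```

For example, at normal incidence a glass-like sample (n = 1.5) in vacuum (n = 1) producing one full fringe of phase shift (2π) corresponds to a thickness of two wavelengths.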
[0160] When the coordinate of the point light source of illumination light is displaced by disturbance, illumination light Q.sub.i may be corrected by translation of pixels on image sensor D. An amount of translation is typically determined to maximize correlation between object beam distribution U.sub.S and illumination light Q.sub.i.
[0161] When a shape of a wave front of illumination light is smooth, an amount of information may be reduced by using a low-pass filter or polynomial approximation.
[0162]
[0163] Referring to
[0164] In succession, processing for obtaining an illumination light profile is performed. More specifically, the optical system shown in
[0165] In succession, movable mirror MM is driven to set any one of a plurality of illumination angles (step S8).
[0166] Coherent light is then generated from light source 10 and processing apparatus 100 obtains illumination light hologram Q.sub.i (x, y) (second hologram) recorded with image sensor D (step S10).
[0167] Processing apparatus 100 thus records with image sensor D, a hologram resulting from modulation with off-axis reference beam R, of light obtained by illumination with illumination light Q while there is no sample S. Alternatively, processing apparatus 100 records with image sensor D, a hologram of transmitted light obtained by illumination with illumination light Q, of only a substrate instead of sample S, the substrate being included in sample S and not being an object to be measured. Processing apparatus 100 calculates illumination light profile Q.sub.i(x, y) from illumination light hologram Q.sub.i (x, y) (step S12).
[0168] Processing in steps S8 to S12 is repeated as many times as the number (N) of predetermined illumination angles (1≤i≤N).
[0169] Processing for obtaining an amplitude phase distribution of sample S is then performed. More specifically, sample S is arranged at a predetermined position of the optical system shown in
[0170] In succession, processing apparatus 100 calculates reconstruction object beam hologram U.sub. i (x, y) from object beam hologram U.sub.i(x, y) (step S20). Processing apparatus 100 then has reconstruction object beam hologram U.sub. i (x, y) and corresponding illumination light profile Q.sub.i(x, y) propagate to a position of the sample plane by plane wave expansion and calculates object beam distribution U.sub.Si(x, y) and illumination light distribution Q.sub.Si (x, y) at the sample plane (step S22). Furthermore, processing apparatus 100 calculates amplitude phase distribution U.sub.Pi (x, y) at the sample plane by dividing object beam distribution U.sub.Si(x, y) by corresponding illumination light distribution Q.sub.Si (x, y) (step S24).
[0171] Processing in steps S16 to S24 is repeated as many times as the number (N) of predetermined illumination angles (1≤i≤N).
[0172] Processing apparatus 100 then calculates the surface geometry of sample S. More specifically, processing apparatus 100 calculates composed amplitude phase distribution U.sub.SA (x, y) by summing amplitude phase distribution U.sub.Pi (x, y), with the amplitude phase distribution being maintained as a complex number (step S26).
[0173] In succession, processing apparatus 100 calculates an amount of phase shift Δφ(x, y) from an angle of deviation of composed amplitude phase distribution U.sub.SA (x, y) at the sample plane (step S28). Processing apparatus 100 then calculates a thickness d(x, y) of sample S based on amount of phase shift Δφ(x, y) (step S30). A relational expression shown in the expression (17) above is used for calculation of thickness d of sample S.
[0174] Finally, processing apparatus 100 calculates a geometric profile of sample S by aggregating thicknesses d(x, y) at coordinates in the sample plane (step S32).
[0175] The surface geometry of sample S can be calculated through processing as above.
[0176] An index of refraction and a profile of the index of refraction of sample S can also be measured. In this case, an index of refraction n.sub.2 (x, y) of sample S is calculated in step S30 and the profile of the index of refraction of sample S is calculated by aggregating indices of refraction n.sub.2 (x, y) at coordinates in the sample plane in step S32.
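Steps S24 to S28 reduce to a short complex-arithmetic pipeline: divide each object beam distribution by its illumination distribution, sum the results as complex numbers, and take the angle of deviation. A minimal sketch, assuming the distributions are 2-D complex numpy arrays; names are illustrative.

```python
import numpy as np

def compose_phase(object_fields, illumination_fields):
    """Outline of steps S24-S28: divide each object beam distribution
    U_Si by its illumination light distribution Q_Si, sum the amplitude
    phase distributions U_Pi with each maintained as a complex number,
    and take the angle of deviation as the amount of phase shift."""
    u_sa = np.zeros_like(object_fields[0])
    for u_s, q_s in zip(object_fields, illumination_fields):
        u_sa = u_sa + u_s / q_s      # amplitude phase distribution U_Pi
    return np.angle(u_sa)            # amount of phase shift per pixel
```

Summing in the complex domain, rather than averaging phases directly, is what suppresses the diffracted-light noise discussed in the exemplary measurements.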
(b7: Measurement of Internal Structure of Sample)
[0177] A method of measuring the internal structure of sample S with optical measurement system 1 according to the first embodiment will now be described. Near infrared rays are preferably used as illumination light Q in measurement of the internal structure of sample S. With the use of near infrared rays as illumination light Q, object beam hologram U obtained by illumination of sample S with illumination light Q shows the internal structure of sample S. By setting the sample plane at any position with respect to sample S (a position distant by any distance from the recording surface), the internal structure of sample S can be measured.
[0178]
[0179] The procedure shown in the flowchart in
[0180] Specifically, processing apparatus 100 calculates composed amplitude phase distribution U.sub.SA (x, y) by summing amplitude phase distribution U.sub.Pi (x, y), with the amplitude phase distribution being maintained as a complex number (step S26) and images calculated composed amplitude phase distribution U.sub.SA (x, y), to thereby visualize the internal structure of sample S.
C. Second Embodiment: Reflection Optical System
(c1: Optical System)
[0181]
[0182] Since the optical system shown in
[0183] The optical system shown in
[0184] The optical system shown in
[0185] Light outputted from one side of beam splitter BS1 is used as illumination light Q for illumination of sample S.
[0186] More specifically, illumination light Q divided by beam splitter BS1 is reflected by mirror M2, thereafter passes through a measurement optical system 32, and is guided to beam splitter BS2. Illumination light Q is further reflected by half mirror HM2 of beam splitter BS2 and sample S is illuminated therewith. Object beam O obtained by illumination of sample S with illumination light Q (that is, light reflected by sample S) passes through half mirror HM2 of beam splitter BS2 and is guided to image sensor D.
[0187] Measurement optical system 32 includes movable mirror MM, lens L3, mask A2, and a lens L4. As in the first embodiment, illumination light Q reflected by movable mirror MM is condensed by lens L3 and passes through mask A2. Illumination light Q that passes through mask A2 is further condensed by lens L4 and an image thereof is formed on sample S. In other words, an image of opening pattern SP2 in mask A2 passes through lens L4 and is formed on sample S. The range within which sample S is illuminated with illumination light Q that passes through mask A2 can thus be restricted. By restriction of the range of illumination with illumination light Q, unwanted light can be reduced and measurement accuracy can be enhanced. As movable mirror MM rotates, the manner of illumination with illumination light Q is changed.
[0188] Since the range of illumination may vary depending on the thickness of sample S also in optical measurement system 2, for addressing such variation, opening pattern SP2 in mask A2 is changed or a position of lens L4 for formation of an image of illumination light Q on sample S is changed as necessary.
[0189] When mirror M2 and mask A2 are arranged in optical proximity, lens L3 does not have to be provided.
(c2: Illumination Light Profile)
[0190] An illumination light profile in optical measurement system 2 will now be described. In a reflection optical system adopted in optical measurement system 2, a reference plane is arranged at a position where sample S is to be arranged (sample position) and reflected light from the reference plane is used as illumination light Q. The reference plane is preferably flat, and, for example, an optical flat can be employed. In other words, the optical system shown in
[0191] Since an illumination light distribution Q.sub.S at the sample plane different in distance can be calculated by propagation of recorded illumination light Q, illumination light Q does not have to be recorded each time the distance from the recording surface changes as in the first embodiment (transmission optical system). When the coordinate of the point light source of illumination light is displaced due to disturbance, illumination light Q may be corrected by translation of pixels on image sensor D.
[0192] A plurality of illumination light profiles may be recorded while the reference plane is translated in the x direction and the y direction for the purpose of elimination of an error in geometry included in the reference plane, and an average value of the plurality of recorded illumination profiles may be adopted as the illumination light profile to actually be used.
(c3: Measurement of Surface Geometry of Sample)
[0193] A method of measuring the surface geometry of sample S with optical measurement system 2 according to the second embodiment will now be described.
[0194] An amount of phase shift caused by sample S is used for measurement of the surface geometry of sample S. In this case, visible light is preferably used as illumination light Q.
[0195] Relation between amount of phase shift Δφ caused by sample S and a height h of sample S is as shown in an expression (22) below.
[0196] k.sub.x in the expression represents a wave number in the x direction, k.sub.y represents a wave number in the y direction, and φ.sub.c represents a phase correction term.
[0197] Wave numbers k.sub.x and k.sub.y can be calculated in accordance with the expression (20) and the expression (21) above. When phase shift due to a complex reflectivity can be regarded as being even in the entire sample S for such a reason that sample S is entirely of the same material, phase correction term φ.sub.c may be omitted.
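Since expression (22) itself is not reproduced in this text, the following sketch assumes the common reflection-geometry relation Δφ = 2·k.sub.z·h (the reflected beam traverses the height difference twice), with the phase correction term omitted; the wavelength value and the normal-incidence defaults are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of a reflection-geometry height relation, assumed as
#   delta_phi = 2 * k_z * h,  k_z = sqrt(k**2 - kx**2 - ky**2)
wavelength = 633e-9   # assumed visible-light source, metres

def height(delta_phi, kx=0.0, ky=0.0):
    """Height h of sample S from the measured amount of phase shift."""
    k = 2 * np.pi / wavelength
    k_z = np.sqrt(k**2 - kx**2 - ky**2)
    return delta_phi / (2 * k_z)
```

At normal incidence a full fringe of phase shift (2π) then corresponds to a step height of half a wavelength, the familiar reflection-interferometry scale factor.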
[0198] In measurement of a geometry (in-plane profile) over the entire sample plane, an optical system where illumination light Q is parallel light may be adopted. When illumination light Q is a spherical wave, focus displacement with respect to the sample plane may be detected as a false geometry in a concave or convex shape. Such a false geometry is caused by the fact that illumination light is a spherical wave. Therefore, in measurement of the geometry (in-plane profile) over the entire sample plane, the optical system in which illumination light Q is parallel light is preferably adopted.
[0199]
[0200] Referring to
[0201] In succession, processing for obtaining an illumination light profile is performed. More specifically, the optical system shown in
[0202] In succession, movable mirror MM is driven to set any one of a plurality of illumination angles (step S58). Coherent light is then generated from light source 10 and processing apparatus 100 obtains illumination light hologram Q.sub.i (x, y) (second hologram) recorded with image sensor D (step S60).
[0203] Processing apparatus 100 calculates illumination light profile Q.sub.i(x, y) from illumination light hologram Q.sub.i (x, y) (step S62).
[0204] Processing in steps S58 to S62 is repeated as many times as the number (N) of predetermined illumination angles (1≤i≤N).
[0205] Processing for obtaining an amplitude phase distribution of sample S is then performed. More specifically, sample S is arranged at a predetermined position of the optical system shown in
[0206] In succession, processing apparatus 100 calculates reconstruction object beam hologram U.sub. i (x, y) from object beam hologram U.sub.i(x, y) (step S70). Processing apparatus 100 then has reconstruction object beam hologram U.sub. i (x, y) and corresponding illumination light profile Q.sub.i(x, y) propagate to a position of the sample plane by plane wave expansion and calculates object beam distribution U.sub.Si(x, y) and illumination light distribution Q.sub.Si (x, y) at the sample plane (step S72). Furthermore, processing apparatus 100 calculates amplitude phase distribution U.sub.Pi (x, y) at the sample plane by dividing object beam distribution U.sub.Si(x, y) by corresponding illumination light distribution Q.sub.Si (x, y) (step S74).
[0207] Processing in steps S66 to S74 is repeated as many times as the number (N) of predetermined illumination angles (1≤i≤N).
[0208] Processing apparatus 100 then calculates the surface geometry of sample S. More specifically, processing apparatus 100 calculates composed amplitude phase distribution U.sub.SA (x, y) by summing amplitude phase distribution U.sub.Pi (x, y), with the amplitude phase distribution being maintained as a complex number (step S76). In succession, processing apparatus 100 calculates amount of phase shift Δφ(x, y) from an angle of deviation of composed amplitude phase distribution U.sub.SA (x, y) at the sample plane (step S78). Processing apparatus 100 then calculates a height h(x, y) of sample S based on amount of phase shift Δφ(x, y) (step S80). A relational expression shown in the expression (22) above is used for calculation of height h of sample S. Finally, processing apparatus 100 calculates a geometric profile of sample S by aggregating heights h(x, y) at coordinates on the sample plane (step S82).
[0209] The surface geometry of sample S can be calculated through processing as above.
(c4: Measurement of Internal Structure of Sample)
[0210] In measurement of the internal structure of sample S, near infrared rays are preferably used as illumination light Q. By using near infrared rays as illumination light Q, object beam hologram U obtained by illumination of sample S with illumination light Q shows the internal structure of sample S. The internal structure of sample S can be measured by setting the sample plane at any position (a position distant by any distance from the recording surface) with respect to sample S.
[0211]
[0212] A procedure shown in the flowchart in
[0213] Specifically, processing apparatus 100 calculates composed amplitude phase distribution U.sub.SA (x, y) by summing amplitude phase distribution U.sub.Pi (x, y) with the amplitude phase distribution being maintained as a complex number (step S76) and images calculated composed amplitude phase distribution U.sub.SA (x, y), to thereby visualize the internal structure of sample S.
D. Processing Apparatus 100
(d1: Exemplary Hardware Configuration)
[0214]
[0215] Processor 102 is typically a computing processing unit such as a central processing unit (CPU) or a graphics processing unit (GPU), and it reads one program or a plurality of programs stored in storage 110 on main memory 104 and executes the same. Main memory 104 is a volatile memory such as a dynamic random access memory (DRAM) or a static random access memory (SRAM), and functions as a working memory for execution of a program by processor 102.
[0216] Input device 106 includes a keyboard, a mouse, and the like and accepts an operation from a user. Display 108 provides output of a result of execution of a program by processor 102 to a user.
[0217] Storage 110 is implemented by a non-volatile memory such as a hard disk or a flash memory, and various programs and data are stored therein. More specifically, an operating system (OS) 112, a measurement program 114, hologram data 116, and a measurement result 118 are held in storage 110.
[0218] Operating system 112 provides an environment where processor 102 executes a program. Measurement program 114 implements an optical measurement method according to the present embodiment by being executed by processor 102. Hologram data 116 corresponds to image data outputted from image sensor D. Measurement result 118 includes a measurement result obtained by execution of measurement program 114.
[0219] Interface 120 mediates data transmission between processing apparatus 100 and image sensor D. Network interface 122 mediates data transmission between processing apparatus 100 and an external server apparatus.
[0220] Medium drive 124 reads necessary data from a recording medium 126 (for example, an optical disc) where a program to be executed by processor 102 is stored and has the data stored in storage 110. Measurement program 114 or the like executed in processing apparatus 100 may be installed through recording medium 126 or downloaded from a server apparatus through network interface 122 or the like.
[0221] Measurement program 114 may perform processing by calling a necessary module out of program modules provided as a part of operating system 112 in a predetermined sequence and at predetermined timing. In such a case, measurement program 114 not including the modules is also encompassed in the technical scope of the present invention. Measurement program 114 may be provided as being incorporated as a part of another program.
[0222] All or some of functions provided by execution of a program by processor 102 of processing apparatus 100 may be implemented by a hard-wired logic circuit (for example, a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC)).
[0223] Since the exemplary hardware configuration of processing apparatus 100 is also similar in another embodiment, detailed description thereof will not be repeated.
(d2: Exemplary Functional Configuration)
[0224]
[0225] Referring to
[0226] Off-axis hologram obtaining module 150 has a hologram recorded with image sensor D as off-axis hologram I.sub.LR while the optical system in recording of the in-line reference beam is set up.
[0227] Illumination light hologram obtaining module 152 obtains illumination light hologram Q.sub.i (x, y) recorded with image sensor D while the optical system in recording of the object beam is set up.
[0228] Object beam hologram obtaining module 154 obtains object beam hologram U.sub.i(x, y) recorded with image sensor D while the optical system in recording of the object beam is set up.
[0229] Illumination light hologram obtaining module 152 and object beam hologram obtaining module 154 are both configured to record a detection signal from image sensor D, and any one of them is activated in response to a status signal set manually or automatically.
[0230] Hologram reconstruction module 156 calculates a reconstruction illumination light hologram (illumination light profile Q.sub.i(x, y)) from illumination light hologram Q.sub.i (x, y) obtained by illumination light hologram obtaining module 152 and calculates reconstruction object beam hologram U.sub. i (x, y) from object beam hologram U.sub.i(x, y) obtained by object beam hologram obtaining module 154.
[0231] Hologram reconstruction module 156 has illumination light profile Q.sub.i(x, y) and reconstruction object beam hologram U.sub. i (x, y) propagate to the position of the sample plane by plane wave expansion and calculates illumination light distribution Q.sub.Si (x, y) and object beam distribution U.sub.Si(x, y) at the sample plane.
[0232] Amplitude phase distribution calculation module 158 calculates amplitude phase distribution U.sub.Pi (x, y) at the sample plane by dividing object beam distribution U.sub.Si (x, y) by corresponding illumination light distribution Q.sub.Si (x, y).
[0233] Summing module 160 calculates composed amplitude phase distribution U.sub.SA by summing amplitude phase distribution U.sub.Pi, with the amplitude phase distribution being maintained as the complex number.
[0234] Object beam phase calculation module 162 calculates amount of phase shift Δφ(x, y) from the angle of deviation of composed amplitude phase distribution U.sub.SA (x, y).
[0235] Object geometry calculation module 164 calculates information (a thickness, an index of refraction, or the like) for specifying the surface geometry of sample S based on amount of phase shift Δφ(x, y). Object geometry calculation module 164 outputs a result of calculation as information on the geometry of sample S.
[0236] Imaging module 166 images the intensity distribution and/or the phase distribution of composed amplitude phase distribution U.sub.SA (x, y).
[0237] Illumination angle control module 168 determines the type and the number of illumination angles to be realized by movable mirror MM, in accordance with setting. Illumination angle control module 168 has movable mirror MM driven so as to realize a target illumination angle in coordination with hologram reconstruction module 156 or the like.
(d3: User Interface)
[0238] An exemplary user interface involved with setting of the illumination angle realized by the optical measurement system according to the present embodiment will now be described.
[0240] User interface screen 170 includes an input field 172 where the number of illumination angles is to be inputted, a display field 174 where a degree of improvement in SN ratio is to be shown, and a display field 176 where time required for measurement is to be shown.
[0241] Input field 172 receives the setting of the number of illumination angles to be varied (i described above). Processing apparatus 100 calculates the degree of improvement in SN ratio based on the set number of illumination angles and shows the degree in display field 174; it also calculates the time required for measurement and shows the time in display field 176.
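As a hypothetical illustration of how the values in display fields 174 and 176 might be derived: if the noise in each illumination angle's hologram is statistically independent, coherent summation over N angles improves the SN ratio roughly by √N, while measurement time grows linearly with N. The per-angle acquisition time below is an assumed parameter, not a value from the disclosure.

```python
import math

def estimate_display_values(num_angles, time_per_angle_s=0.5):
    """Return (SN-ratio improvement in dB, total measurement time in seconds).

    Assumes independent noise between illumination angles, so coherent
    summation improves the SN ratio by sqrt(N); time scales linearly with N.
    """
    sn_gain_db = 20 * math.log10(math.sqrt(num_angles))
    total_time_s = num_angles * time_per_angle_s
    return sn_gain_db, total_time_s
```

This √N-versus-N trade-off is exactly the quality/tact-time balance the user weighs in paragraph [0242].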
[0242] The user sets the number of illumination angles in consideration of the quality (SN ratio) of a result of measurement and the allowable tact time shown on user interface screen 170. An appropriate number of illumination angles may be suggested when the required quality and the allowable tact time are inputted.
[0243] The number of illumination angles may be set through user interface screen 170 as shown in
E. Exemplary Measurement
[0244] Exemplary measurement with the optical measurement system according to the present embodiment will now be described.
[0248] In exemplary measurement shown in
[0249] In contrast, when amplitude phase distributions U.sub.Pi are summed with the amplitude phase distributions being maintained as complex numbers, the influence of diffracted light included in the illumination light can be suppressed, and it can be seen that not only the geometry of the pattern but also soiling attached to the surface of the test target is clearly visualized.
[0252] In exemplary measurement shown in
[0253] As shown in
<F. Combined Configuration>
[0254] In an example where a silicon wafer is assumed as sample S, for measurement of both the surface geometry and the internal structure, a combined configuration which is a combination of reflection optical measurement system 2 (see
[0255] A combined configuration of the optical measurement system according to the present embodiment will now be described. Referring to
[0256] Referring to
[0257] Referring to
[0258] As shown in
<G. Another Embodiment>
[0259] As described above, processing for obtaining off-axis hologram ILR (processing in steps S2 and S4 in
[0260] When off-axis hologram ILR is not obtained, calculation processing in accordance with the expression (8) above is not performed, and complex amplitude off-axis hologram J.sub.OR shown in the expression (7) is adopted as it is as object beam hologram U(x, y).
[0261] Alternatively, the component of off-axis reference beam R (=R.sub.0exp(iφ.sub.R)) may be eliminated from complex amplitude off-axis hologram J.sub.OR shown in the expression (7), and the resultant complex amplitude off-axis hologram J.sub.OR may be adopted as object beam hologram U(x, y). To eliminate off-axis reference beam R, complex amplitude off-axis hologram J.sub.OR shown in the expression (7) need only be divided by the complex conjugate of off-axis reference beam R. The distribution of off-axis reference beam R is calculated from an analytical solution of a spherical wave based on the physical arrangement of the point light source of off-axis reference beam R.
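The elimination of reference beam R described above can be sketched as follows. The paraxial (Fresnel) spherical-wave model, unit reference amplitude, on-axis point source, and geometry parameters are illustrative assumptions; the patent specifies only that R is obtained analytically from the point source's physical arrangement.

```python
import numpy as np

def remove_reference_beam(j_or, wavelength, dx, src_z):
    """Divide a hologram by the conjugate of a modeled spherical reference wave R.

    j_or: complex amplitude off-axis hologram (2-D array, sampling pitch dx).
    src_z: distance from the point light source of R to the sensor plane.
    """
    ny, nx = j_or.shape
    x = (np.arange(nx) - nx / 2) * dx
    y = (np.arange(ny) - ny / 2) * dx
    X, Y = np.meshgrid(x, y)
    k = 2 * np.pi / wavelength
    # Paraxial phase of a diverging spherical wave from an on-axis point source.
    r_phase = k * (X ** 2 + Y ** 2) / (2 * src_z)
    R = np.exp(1j * r_phase)           # unit-amplitude model of R
    return j_or / np.conj(R)           # U(x, y) = J_OR / R*
```

Since the hologram carries the reference as a multiplicative factor R*, the pointwise division leaves only the object beam component.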
[0262] When off-axis hologram ILR is not obtained, in addition to the expression (8) above, the expressions (4), (6), and (9) are not used.
<H. Modification>
[0263] The optical systems described above are by way of example, and depending on required specifications or restriction imposed by a space or the like, any optically equivalent modification can be made. For example, a single lens may be replaced with a lens assembly, or any reflection member may be employed instead of a mirror.
[0264] Though an implementation in which processing apparatus 100 performs the computing processing involved with measurement of the surface geometry and/or the internal structure of sample S is exemplified in the description above, any form of implementation can be adopted without being limited as such. For example, a computing resource on a cloud may perform a part or the entirety of the processing for which processing apparatus 100 is responsible.
[0265] Though an example in which sample S such as a silicon wafer is measured is mainly described above, sample S to be measured is not limited. Specifically, processing for calculating composed amplitude phase distribution U.sub.SA by summing amplitude phase distribution U.sub.Pi with the amplitude phase distribution being maintained as a complex number as described above can be applied to measurement of any sample S for which improvement in SN ratio is required.
<I. Summary>
[0266] The optical measurement system according to the present embodiment can select near infrared rays or visible light as light for illumination of a sample in accordance with the type of the sample and the purpose of measurement. Since the optical measurement system has light reception sensitivity not only to visible light but also to near infrared rays by adoption of silicon-based image sensor D, the image sensor does not need to be replaced when the illumination light is switched. By adoption of such a combination of the light source and the image sensor, the surface geometry and the internal structure of a sample such as a silicon wafer can be measured.
[0267] The optical measurement system according to the present embodiment measures the surface geometry and the internal structure of the sample with the use of information on a phase of an object beam. Therefore, the optical measurement system can measure the sample on the nm order, without the depth resolution (the resolution along the z axis) being restricted by a depth of focus. At this time, the internal structure of the sample can be observed at a high resolution by diffraction calculation in consideration of an index of refraction of a medium. Consequently, a defect on the nm order in the sample can also be detected.
[0268] Since the optical measurement system according to the present embodiment can reconstruct an image at any distance from an optical wave distribution obtained by digital holography, a vertical scanning mechanism or the like is not required.
[0269] The optical measurement system according to the present embodiment calculates the composed amplitude phase distribution by summing the amplitude phase distributions calculated for the respective manners (illumination angles) of illumination with illumination light, with each amplitude phase distribution being maintained as a complex number. By summing the amplitude phase distributions as complex numbers while the manner (illumination angle) of illumination is varied, noise can be reduced and the SN ratio can be improved.
[0270] In an optical measurement apparatus according to the present embodiment, a range where a sample is illuminated with illumination light is restricted to a predetermined range, so that superimposition of a component containing information on an object beam on a light intensity component and a conjugate optical component in a Fourier space (spatial frequency domain) can be avoided. Consequently, noise due to superimposition between components can be suppressed and more accurate measurement can be realized.
[0271] It should be understood that the embodiments disclosed herein are illustrative and non-restrictive in every respect. The scope of the present invention is defined by the terms of the claims rather than the description above and is intended to include any modifications within the scope and meaning equivalent to the terms of the claims.
REFERENCE SIGNS LIST
[0272] 1, 2 optical measurement system; 10 light source; 20 image forming optical system; 30, 32 measurement optical system; 100 processing apparatus; 102 processor; 104 main memory; 106 input device; 108 display; 110 storage; 112 operating system; 114 measurement program; 116 hologram data; 118 measurement result; 120 interface; 122 network interface; 124 medium drive; 126 recording medium; 150 off-axis hologram obtaining module; 152 illumination light hologram obtaining module; 154 object beam hologram obtaining module; 156 hologram reconstruction module; 158 amplitude phase distribution calculation module; 160 summing module; 162 object beam phase calculation module; 164 object geometry calculation module; 166 imaging module; 168 illumination angle control module; 170 user interface screen; 172 input field; 174, 176 display field; A1, A2 mask; BE beam expander; BS1, BS2 beam splitter; D image sensor; L in-line reference beam; L1, L2, L3, LA, L21, L22, L31, L32 lens; M1 mirror; MO objective lens; P pinhole; S sample; SP1, SP2 opening pattern