Optical system phase acquisition method and optical system evaluation method

10365164 · 2019-07-30

Abstract

When an optical system is illuminated with illumination light fluxes emitted from respective input image points, an interference image generated by superimposing output light fluxes output from the optical system and a reference light flux coherent with the output light fluxes is imaged to acquire interference image data collectively including information of the interference image about all input image points. Diffractive optical light propagation simulation is then performed to a position where the reconstructed light fluxes corresponding to the respective output light fluxes are separated from one another, and a phase distribution associated with only the light emitted from a single input image point is acquired there. By performing this simulation for each input image point, a phase distribution on an exit pupil plane is acquired for every input image point.

Claims

1. An optical system phase acquisition method of calculating a broad sense phase distribution sj(u, v) by a calculating process by a processing apparatus (Up) in an optical system that comprises an optical system (Ot) configured of one or more image forming optical elements, a plurality of input image points (Psj) (j=1, 2, . . . applies similarly hereafter), and a phase defining plane (T) located at a prescribed relative position with respect to an output side of the optical system (Ot), the broad sense phase distribution sj(u, v) being information associated with a phase distribution derived from the input image points (Psj) on the phase defining plane (T), the method comprising: a step (a1) of arranging an imaging element (Uf) on the output side of the optical system (Ot); a step (a2) of arranging the plurality of input image points (Psj) on an input side of the optical system (Ot); a step (a3) of illuminating the optical system (Ot) with input light fluxes (Fsj) emitted from the plurality of input image points (Psj), and illuminating the imaging element (Uf) with output light fluxes (Foj) outputted from the optical system (Ot); a step (a4) of acquiring interference image data (Df) that collectively includes information of an interference image about the plurality of input image points (Psj) by imaging the interference image by the imaging element (Uf), the interference image being generated by superimposing a reference light flux (Fr) coherent with the output light fluxes (Foj) on the output light fluxes (Foj); a step (a5) of the imaging element (Uf) transmitting the interference image data (Df) to the processing apparatus (Up); a step (b1) of performing, by the processing apparatus (Up), diffractive optical light propagation simulation to a separation reconstruction surface (Sg) which is at a position where the output light fluxes (Foj) separate into respective light fluxes, reconstructing the output light fluxes (Foj) in the separation reconstruction surface (Sg) based on the interference image data (Df), and thereafter calculating a broad sense phase distribution gj(u, v) in the separation reconstruction surface (Sg) associated with only light emitted from one input image point (Psj) by the calculating process; and a step (b2) of performing, by the processing apparatus (Up), diffractive optical light propagation simulation based on the broad sense phase distribution gj(u, v) to calculate a broad sense phase distribution sj(u, v) belonging to the input image point (Psj) on the phase defining plane (T) by the calculating process.

2. The optical system phase acquisition method according to claim 1, further comprising: a step (a6) of arranging a relay optical system (Oq) between the optical system (Ot) and the imaging element (Uf), wherein the step (a3) is a step in which the output light fluxes (Foj) outputted from the optical system (Ot) are inputted to the relay optical system (Oq) and the imaging element (Uf) is illuminated with a relay output light flux outputted from the relay optical system (Oq), and the step (a4) is a step in which the imaging element (Uf) images the interference image generated by superimposing the reference light flux (Fr) on the relay output light flux.

3. An optical system evaluation method of evaluating the optical system (Ot) by the calculating process performed by the processing apparatus (Up), the method using the optical system phase acquisition method that uses the input image points and the processing apparatus according to claim 1, this method comprising: after the broad sense phase distribution sj(u, v) on the phase defining plane (T) with respect to the input image point (Psj) is calculated by the processing apparatus (Up) performing the step (b2), a step (c1) of calculating an optical path length extending from a position of arbitrarily provided coordinates (U, V) on an aberration defining virtual plane to an ideal output image point conjugate to the input image point (Psj), and calculating an optical path length aberration distribution H(U, V) on the aberration defining virtual plane, based on the broad sense phase distribution sj(u, v) by the calculating process by the processing apparatus (Up); and a step (c2) of evaluating the optical system (Ot) based on the optical path length aberration distribution H(U, V) by the calculating process by the processing apparatus (Up).

Description

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

(1) FIG. 1 is a block diagram of a configuration associated with a technique of an optical system phase acquisition method of the present invention;

(2) FIG. 2 is a schematic diagram of a concept associated with the technique of the optical system phase acquisition method of this invention;

(3) FIG. 3 is a schematic diagram of the configuration associated with the technique of the optical system phase acquisition method of this invention;

(4) FIG. 4 is a schematic diagram of the configuration associated with the technique of the optical system phase acquisition method of this invention;

(5) FIG. 5 is a schematic diagram of the concept associated with the technique of the optical system phase acquisition method of this invention; and

(6) FIGS. 6a to 6d are schematic diagrams of the concept associated with the technique of the optical system phase acquisition method of this invention.

DETAILED DESCRIPTION OF THE INVENTION

(7) In the description of this invention, image formation denotes a phenomenon in which, with respect to an input image that is a real image or a virtual image at a finite distance or at infinity, an optical system generates an output image that is a real image or a virtual image at a finite distance or at infinity.

(8) The term conjugate, when, for example, A and B are described as being conjugate to each other as a general term in the field of geometric optics, means that an image of A is formed on B or an image of B is formed on A as a result of the function of an optical device, such as a lens, having an imaging function, at least on the basis of paraxial theory. Here, A and B are images that encompass, as a target, an isolated point image, a set of point images, and an image having an extent in which point images are distributed in a continuous manner.

(9) Here, as a general term in the field of geometric optics, point image or image point (i.e., image) encompasses any of: a point that actually emits light therefrom; a point toward which light is converged and that causes a bright point to be shown on a screen when the screen is disposed thereat; a point toward which light is seen as if to be converged (but a screen is not allowed to be disposed thereat because the point is inside an optical system); and a point from which light is seen as if to be emitted (but a screen is not allowed to be disposed thereat because the point is inside an optical system). Point image or image point (i.e. image) is used without distinguishing the above-mentioned points one from another.

(10) Taking, as an example, a general camera lens in which an aperture stop typically exists inside the lens, an image of the aperture stop viewed through the lens when the lens is viewed from the light entering side is called an entrance pupil, an image of the aperture stop viewed through the lens when the lens is viewed from the light exiting side is called an exit pupil, and a ray (usually meridional ray) that travels toward the center of the entrance pupil or that travels out from the center of the exit pupil is called a principal ray.

(11) Further, rays other than the principal ray are called marginal rays in a broad sense.

(12) However, an optical system that uses light having directivity, such as laser light, has no necessity to extract a bundle of rays by an aperture stop, and may therefore have no aperture stop in many cases. In such a case, the pupils and the principal ray are defined according to the form in which light exists in the optical system.

(13) Typically, a central ray of directional distribution of light in a bundle of rays emitted from the emission point is considered the principal ray. The entrance pupil is considered to be located at a position where the principal ray entering the optical system or an extension thereof intersects the optical axis, and the exit pupil is considered to be located at a position where the principal ray exiting the optical system or an extension thereof intersects the optical axis.

(14) However, in a precise sense, it is conceivable that the principal ray as defined above and the optical axis do not intersect each other, for example, as a result of adjustment error, and the two may merely be skew lines in some cases.

(15) However, such a phenomenon has no relation to the essence and is not worth discussing here. In the description below, it is therefore assumed that such a phenomenon does not occur, or that the principal ray and the optical axis intersect each other at the position where they are closest to each other.

(16) Moreover, when paying attention to two adjacent partial optical systems A and B in the optical system where B is provided immediately downstream of A to be adjacent thereto, an exit pupil of A serves as an entrance pupil of B (in a manner similar to a manner in which an output image of A serves as an input image of B). Further, in the first place, all of the entrance pupils and the exit pupils of arbitrarily-defined partial optical systems in the optical system should be conjugate to one another (all of the entrance pupils and the exit pupils of the arbitrarily-defined partial optical systems are an image of the aperture stop where the aperture stop is provided, or are conjugate to one another even when no aperture stop is provided). For this reason, the entrance pupil and the exit pupil are simply referred to as pupil in the absence of the necessity to particularly distinguish one from the other.

(17) Unless particularly contextually limited, the appellation of rays is considered to be used in a broad sense. For example, a ray coinciding with an optical axis is considered to be one type of meridional ray, and the meridional ray is considered to be one type of skew ray.

(18) In the description of some embodiments of the invention and the drawings, the optical axis of the optical system is referred to as z axis. However, in a case where the optical axis is bent by a reflector, a direction in which a ray that has been along the original z axis is reflected to travel is also referred to as z axis, and no coordinate axis is newly provided therefor.

(19) First, referring to FIG. 1 which is a block diagram of a configuration associated with a technique of an optical system phase acquisition method of the present invention, an embodiment for carrying out the optical system phase acquisition method will be described.

(20) Positions of input image points (Ps1, Ps2, . . . ) to be actually input to an optical system (Ot) are assumed to be appropriately determined depending on purposes, such as calculating the OTF as evaluation of the optical system as described above.

(21) When determining the positions, even if the optical system (Ot) as a target for phase acquisition is designed axisymmetrically, it is preferable to determine the positions such that image points are arranged with respect to all the four quadrants around the axis without assuming axisymmetry.

(22) This is because phase distribution is affected by machining error and assembly error of a refractive surface or a reflective surface in the optical system (Ot).

(23) In the drawings, although all the input image points (Ps1, Ps2, . . . ) are arranged on the y axis for ease of drawing, they may be arranged on a plane perpendicular to the z axis, such as an xy plane, or may be arranged three-dimensionally.

(24) The input image points (Ps1, Ps2, . . . ) can be realized by a secondary light emitting point generated by applying a light flux from a primary coherence light source, such as a laser, to, for example, a mask plate having pin holes therethrough or a lens array in which lenses are arrayed two-dimensionally.

(25) Then, illumination light fluxes (Fs1, Fs2, . . . ) from the input image points (Ps1, Ps2, . . . ) are input to the optical system (Ot) to generate output light fluxes (Fo1, Fo2, . . . ), and the output light flux (Fo1) is applied to an imaging surface (Sf) of an imaging element (Uf) such as a CCD or a CMOS image sensor.

(26) In this embodiment, an object in the holography technology corresponds to the optical system (Ot), illumination light corresponds to the illumination light fluxes (Fs1, Fs2, . . . ), and object light corresponds to the output light fluxes (Fo1, Fo2, . . . ).

(27) In the drawings, the illumination light flux (Fs2) and the output light flux (Fo2) are shown by dashed lines so that they can be easily distinguished from the illumination light flux (Fs1) and the output light flux (Fo1).

(28) A reference light flux (Fr) which is generated by, for example, dividing the light flux from the primary coherence light source and is coherent with the output light fluxes (Fo1, Fo2, . . . ) is applied to the imaging surface (Sf) so as to be superimposed with the output light fluxes (Fo1, Fo2, . . . ).

(29) Consequently, since an interference image including interference fringes generated by interference between the output light fluxes (Fo1, Fo2, . . . ) and the reference light flux (Fr) is generated on the imaging surface (Sf), the interference image is imaged by the imaging element (Uf), and a processing apparatus (Up) receives and stores interference image data (Df) thus obtained.

(30) The interference image data (Df) is the interference image on the imaging surface (Sf), that is, a multi-gradation bright-and-dark pattern of interference fringes, but it can be converted into a phase distribution of the light electric field on the imaging surface (Sf). The conversion method is known as the Fourier-transform method (reference: Takeda, M., et al.: Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry, J. Opt. Soc. Am., Vol. 72, No. 1 (1982), pp. 156-160).
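As a rough sketch of this conversion, the Fourier-transform method for an off-axis interferogram can be implemented by isolating the +1st-order sideband in the frequency domain, shifting it back to remove the carrier, and taking the argument of the inverse transform. The function below is only an illustration of the cited technique; the window half-width and the `carrier_px` sideband-offset input are assumptions that would have to be calibrated for a real setup.

```python
import numpy as np

def fringe_to_phase(interferogram, carrier_px):
    """Recover the wrapped phase of the object field from an off-axis
    interferogram by the Fourier-transform method (Takeda et al., 1982).

    interferogram : 2-D array of fringe intensities
    carrier_px    : (row, col) offset of the +1st-order sideband from the
                    spectrum center, in pixels (a hypothetical calibration
                    input set by the off-axis tilt)
    """
    F = np.fft.fftshift(np.fft.fft2(interferogram))
    rows, cols = interferogram.shape
    cr, cc = rows // 2 + carrier_px[0], cols // 2 + carrier_px[1]

    # Window out the +1st-order sideband around the carrier frequency
    # (assumed window half-width; must exclude the 0th and -1st orders).
    half = min(rows, cols) // 8
    side = np.zeros_like(F)
    side[cr - half:cr + half, cc - half:cc + half] = \
        F[cr - half:cr + half, cc - half:cc + half]

    # Shift the sideband back to the origin to remove the carrier, then
    # invert; the argument of the result is the wrapped phase.
    side = np.roll(side, (-carrier_px[0], -carrier_px[1]), axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(side))
    return np.angle(field)
```

The recovered phase is wrapped into (-π, π]; the phase unwrapping discussed later must still be applied to it.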

(31) However, the phase distribution acquired by this conversion method is associated with the superimposed light electric fields formed on the imaging surface (Sf) by the fluxes emitted from all the input image points (Ps1, Ps2, . . . ), so it cannot be used for calculating the OTF as it is. It must therefore be separated into the light electric fields formed by the individual input image points (Ps1, Ps2, . . . ), from which the phase distributions s1(u, v), s2(u, v), and . . . on a phase defining plane (T) are then obtained.

(32) Here, the phase defining plane (T) is a plane determining a phase when the phase is acquired in this invention, and the position, shape (plane, sphere, or the like), and so on (if the phase defining plane (T) has a plane shape, inclination to the optical axis is included, and if it has a sphere shape, the radius, the center position, and so on are included) can be arbitrarily determined according to the object of the phase acquisition.

(33) However, the exit pupil of the optical system (Ot), whose position and shape can be considered unaffected by, and unchanging with, the position of the input image point, is usually adopted as the phase defining plane (T). In the following description in this specification, therefore, the embodiment will be described bearing in mind the case where the phase defining plane (T) is taken at the exit pupil of the optical system (Ot).

(34) The separation into light electric fields formed by each of the independent input image points (Ps1, Ps2, . . . ) will be described with reference to FIG. 2 which is a schematic diagram of a concept associated with the technique of the optical system phase acquisition method of this invention.

(35) When the optical system (Ot) shown in FIG. 1 forms a real image as the output image of the input image points (Ps1, Ps2, . . . ), the reconstructed output light fluxes (Fo1, Fo2, . . . ) should separate if, as shown in FIG. 2, diffractive optical light propagation simulation is performed in which light is propagated a sufficient distance in the light traveling direction. Conversely, when the optical system (Ot) forms a virtual image as the output image, the reconstructed output light fluxes (Fo1, Fo2, . . . ) should separate if the simulation propagates light a sufficient distance in the direction opposite to the light traveling direction.

(36) Thus, in the first stage, diffractive optical light propagation simulation is performed in which, with the phase distribution determined by the Fourier-transform method as the initial state, light is propagated from the imaging surface (Sf) to a separation reconstruction surface (Sg): a plane perpendicular to the optical axis, away from the imaging surface (Sf) by the distance required for the reconstructed output light fluxes (Fo1, Fo2, . . . ) to separate. The phase distribution g1(u, v) of the light electric field in the light flux cross-section portion (Go1) of the reconstructed output light flux (Fo1) on the separation reconstruction surface (Sg) is then calculated.
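The diffractive optical light propagation simulation used in both stages can be realized numerically by, for example, the angular-spectrum method. The sketch below assumes a uniformly sampled complex field and monochromatic light; the function name and parameters are illustrative, and a negative dz propagates backward, as is needed when the optical system forms a virtual image.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, dz):
    """Propagate a sampled complex field by a distance dz (same units as
    wavelength and pitch) with the angular-spectrum method; dz may be
    negative for backward propagation. pitch is the (assumed uniform)
    sampling interval of the grid."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Longitudinal spatial frequency; evanescent components are suppressed.
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the transfer function has unit modulus for all propagating components, propagating forward and then backward by the same distance reproduces the original field, which is a convenient self-check.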

(37) In the second stage, diffractive optical light propagation simulation in which light of the phase distribution g1(u, v) on the separation reconstruction surface (Sg) is propagated to the phase defining plane (T) is performed, and the phase distribution s1(u, v) is calculated.

(38) Then, when the process comprising the two stages including the first and second stages is applied to each of the reconstructed output light fluxes (Fo1, Fo2, . . . ), the phase distributions s1(u, v), s2(u, v), and . . . can be separated and calculated.

(39) Naturally, when the imaging surface (Sf) and the phase defining plane (T) coincide with each other, in the second stage, diffractive optical light propagation simulation in which light is returned to the imaging surface (Sf) as the phase defining plane (T) may be performed.

(40) In the above description, the diffractive optical light propagation simulation is performed with the phase distribution determined by the Fourier-transform method as the initial state, propagating light from the imaging surface (Sf) to the separation reconstruction surface (Sg). Alternatively, with the same simulation contents, diffractive optical light propagation simulation may be performed in which the reference light flux (Fr), amplitude-modulated by the bright-and-dark pattern of the interference image data (Df) acting as an optical filter, is propagated from the imaging surface (Sf) to the separation reconstruction surface (Sg), and the phase distribution g1(u, v) may be calculated in this way.

(41) Although the phase usually falls within a range of 0 to 2π, if this range is forced on a phase distributed over a plane, a portion where the phase value changes abruptly appears as a line even when the phase actually changes smoothly.

(42) Since the phase represents a degree of relative delay/advance of a wave motion at a certain point with respect to a wave motion at a point determined as a reference, a rapid change in phase value is not essential.

(43) Since the phase distribution of a light wave on a plane derives from the optical path length of the path along which the light propagated from a point light source to the plane, it is correct to allow changes that exceed the range of 0 to 2π and to apply a correction that removes the artificial rapid changes.

(44) Such correction processing is called phase unwrapping. Since a change in phase distribution on the phase defining plane (T) is smooth, phase unwrapping correction processing can be relatively easily applied to the phase distribution s1(u, v).
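Because the phase distribution on the phase defining plane (T) changes smoothly, even a simple row-by-row unwrapping sketch like the following can serve as an illustration; it assumes noise-free data with per-sample phase steps below π, and real data generally calls for a more robust method such as quality-guided or least-squares unwrapping.

```python
import numpy as np

def unwrap_rows_then_column(wrapped):
    """Minimal 2-D phase-unwrapping sketch: unwrap the first column to
    stitch the rows together, then unwrap along each row. Assumes a smooth,
    noise-free phase map such as s1(u, v) on the phase defining plane."""
    out = np.asarray(wrapped, dtype=float).copy()
    out[:, 0] = np.unwrap(out[:, 0])   # make the row anchors consistent
    out = np.unwrap(out, axis=1)       # remove 2*pi jumps along each row
    return out
```

For a smooth surface the result differs from the true phase only by a global multiple of 2π, which is immaterial since the phase is relative.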

(45) FIG. 1 will be briefly supplemented below. Although FIG. 1 illustrates that the output light flux (Fo1) and the reference light flux (Fr) are directly applied to the imaging element (Uf) from different directions to be superimposed, in practice they are usually superimposed by being multiplexed with a beam splitter.

(46) Although FIG. 1 further illustrates that the illumination light flux (Fs1) and the reference light flux (Fr) are independent, since they should be coherent, they are usually generated from a common light source.

(47) The reference light flux (Fr) may be generated by applying a spatial frequency filter which removes components other than a spatial DC component to the illumination light flux.

(48) The processing apparatus (Up) can be realized by a computer including an interface and a CPU configured to receive the interference image data (Df) from the imaging element (Uf), a nonvolatile memory which stores the interference image data (Df), the OS, and the processing programs required for calculation, and a volatile memory into which the OS and the processing programs required for calculation are loaded and which stores data required for execution of processing and calculation. The processing apparatus (Up) can read out the stored interference image data (Df) and execute calculation based on the digital holographic imaging technology.

(49) The calculation contents in the digital holographic imaging are as described as the contents of simulation as described above.

(50) Here, a specific configuration for achieving acquisition of the interference image data (Df) collectively including information of interference images about the input image points (Ps1, Ps2, . . . ), as described above will be described with reference to FIG. 3 which is a schematic diagram of the configuration associated with the technique of the optical system phase acquisition method of this invention.

(51) A light source beam (As) from a primary coherence light source (Us) such as a helium-neon laser is divided into a beam (Ai) for generating an illumination light flux and a beam (Ar) for generating a reference light flux by a beam splitter (BS1) for beam division.

(52) A reference light flux generating unit is constituted of a mirror (Mr) and a beam expander (BEr). The beam (Ar) for generating a reference light flux is reflected by the mirror (Mr) and then input to the beam expander (BEr) constituted of a condenser lens (Lrf) and a collimator lens (Lrc), and a reference light flux (Fr) is generated as a parallel light flux in which a beam is enlarged to have a necessary thickness.

(53) If a pin hole opening (Ua) is installed so as to coincide with the light-collecting point of the condenser lens (Lrf), the beam expander (BEr) can function as a spatial frequency filter that removes components other than the spatial DC component; optical noise caused by dust and the like adhering to the surfaces of optical elements in the optical path up to the pin hole opening (Ua) is thereby removed, so the reference light flux (Fr) can be cleaned up.

(54) On the other hand, an illumination light flux generating unit is constituted of a mirror (Mi) and a beam expander (BEi). The beam (Ai) for generating an illumination light flux is reflected by the mirror (Mi) and then input to the beam expander (BEi) constituted of a condenser lens (Lif) and a collimator lens (Lic), and a primary illumination light flux (Fi) is generated as a parallel light flux in which a beam is enlarged to have a necessary thickness.

(55) A pin hole opening similar to the pin hole opening (Ua) may be installed with respect to the beam expander (BEi), but it is omitted in this drawing.

(56) The illumination light flux generating unit further includes a lens array (Lm) and a mask plate (Pm) and, if necessary, includes a light flux conversion optical system (Oc).

(57) The primary illumination light flux (Fi) enters the lens array (Lm) and forms light-collecting regions at a focusing portion of each lens of the lens array (Lm).

(58) Pin holes are opened in the mask plate (Pm), and the lens array (Lm) and the mask plate (Pm) are combined such that a region centering on each of the pin holes is illuminated by the light-collecting region formed by the lens array (Lm).

(59) The pin holes of the mask plate (Pm) form a set of input image points, and since the light fluxes from them are telecentric, the light flux conversion optical system (Oc) is provided if necessary. Matching is performed such that each principal ray travels toward the entrance pupil of the optical system (Ot) as the target of phase acquisition, and the illumination light fluxes (Fs1, Fs2, . . . ) collectively enter the optical system (Ot).

(60) Accordingly, the virtual images of the pin holes of the mask plate (Pm) formed by the light flux conversion optical system (Oc) become the input image points (Ps1, Ps2, . . . ).

(61) The output light fluxes (Fo1, Fo2, . . . ) generated by the optical system (Ot) by receiving the illumination light fluxes (Fs1, Fs2, . . . ) are reflected by a beam splitter (BS2) and applied to the imaging surface (Sf) of the imaging element (Uf).

(62) Here, it is assumed that the angle of the beam splitter (BS2) is adjusted such that the optical axis of the optical system (Ot) is vertical to the imaging surface (Sf).

(63) The reference light flux (Fr) is transmitted through the beam splitter (BS2) and similarly applied to an imaging surface of the imaging element (Uf) while being superimposed with the output light fluxes (Fo1, Fo2, . . . ), and an interference image is formed on the imaging surface of the imaging element (Uf) and imaged. Thus, the interference image data (Df) collectively including information of interference images about the input image points (Ps1, Ps2, . . . ) can be acquired.

(64) However, in this drawing, it is assumed that there is provided a so-called off-axis type in which an optical axis of the reference light flux (Fr) is set to incline with respect to the imaging surface of the imaging element (Uf) without being vertical thereto so as not to be coaxial with the optical axes of the output light fluxes (Fo1, Fo2, . . . ).

(65) Corresponding to the fact that +1st-, 0th-, and −1st-order diffraction light is generated by a sinusoidal amplitude diffraction grating, in holography (including digital holographic imaging) a reconstructed image likewise contains three kinds of images: the +1st-order image (normal image), the 0th-order image (transmitted light), and the −1st-order image (conjugate image).

(66) When the off-axis type is not provided (in the case of an inline type), the light fluxes forming those three kinds of images are all output in the same direction, so that cumbersome noise is superimposed on the normal image.

(67) The objective of providing the off-axis type is to separate the directions of the light fluxes forming those three kinds of images and thereby avoid the problem of cumbersome noise being superimposed on the normal image.

(68) However, if the off-axis type is adopted, the interference fringes of the interference image become fine, so an imaging element (Uf) with a fine pixel size and a large number of pixels must be used, and the off-axis type thus has the drawback of increasing the load of the computing process.

(69) In order to avoid this drawback, it is necessary to adopt the inline type while still avoiding the problem that cumbersome noise is superimposed on the normal image. In this respect, many kinds of proposals have been made.

(70) For example, there is a method of imaging interference images in which the phase of the reference light flux (Fr) is shifted, and reconstructing an image by calculation using the data thereof (reference: Yamaguchi, I., et al.: Phase-shifting digital holography, Optics Letters, Vol. 22, No. 16, Aug. 15, 1997, pp. 1268-1270).
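As an illustration of the cited phase-shifting approach, the standard four-step combination recovers the complex object field from four inline interferograms taken with reference phase shifts of 0, π/2, π, and 3π/2. A unit-amplitude reference and exact quarter-wave steps are assumptions of this sketch, not requirements stated in the source.

```python
import numpy as np

def four_step_field(i0, i1, i2, i3):
    """Recover the complex object field (up to a real scale factor) from
    four inline interferograms recorded with reference phase shifts of
    0, pi/2, pi, and 3*pi/2. For a unit-amplitude reference the result
    equals 4 times the object field."""
    # (i0 - i2) isolates the cosine quadrature, (i1 - i3) the sine quadrature.
    return (i0 - i2) + 1j * (i1 - i3)
```

With the complex field on the imaging surface recovered this way, the same propagation and separation calculations described above can be applied.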

(71) This method can also be applied in the present optical system phase acquisition method; to shift the phase of the reference light flux (Fr), the mirror (Mr) is modified so as to be movable by a fine-movement mechanism such as a piezo element.

(72) Although the size of the optical system (Ot) as a target of acquisition of a phase, that is, the thickness of a cross section vertical to an optical axis of each of the output light fluxes (Fo1, Fo2, . . . ) or a space occupied by the entirety of the output light fluxes (Fo1, Fo2, . . . ) differs from one to another, the dimension of the imaging surface (Sf) of the imaging element (Uf) which is available is limited.

(73) Accordingly, in some cases, the dimension of the imaging surface (Sf) may be insufficient with respect to a necessary size, and particularly when the interference image data (Df) collectively including information of interference images about the input image points (Ps1, Ps2, . . . ) is acquired, such a circumstance is likely to occur.

(74) Avoidance of this problem will be described with reference to FIG. 4 which is a schematic diagram of the configuration associated with the technique of the optical system phase acquisition method of this invention.

(75) In the configuration of this drawing, with respect to the configuration described in FIG. 3, a relay optical system (Oq) that acts to reduce the thickness of the light flux group constituted of the entirety of the output light fluxes (Fo1, Fo2, . . . ) is inserted in the post stage of the optical system (Ot), that is, between the optical system (Ot) and the imaging element (Uf).

(76) The output light fluxes (Fo1, Fo2, . . . ) generated by the optical system (Ot) by receiving the illumination light fluxes (Fs1, Fs2, . . . ) enter the relay optical system (Oq) disposed coaxially with the optical axis of the optical system (Ot), and relay output light fluxes (Fq1, Fq2, . . . ) are collectively output from the relay optical system (Oq).

(77) The relay output light fluxes (Fq1, Fq2, . . . ) are reflected by the beam splitter (BS2) and applied onto the imaging surface (Sf) of the imaging element (Uf) without exceeding its dimension, where they are superimposed with the reference light flux (Fr) transmitted through the beam splitter (BS2), and an interference image is thus imaged.

(78) Since the configuration of the relay optical system (Oq) is required to be determined according to the configuration of the optical system (Ot), how to achieve the configuration of the relay optical system (Oq) cannot be stated unconditionally; however, it is usually preferable to design it such that a reduced image conjugate to the exit pupil of the optical system (Ot) is formed on the imaging surface (Sf).

(79) However, since the beam splitter (BS2) is required to be present between the relay optical system (Oq) and the imaging surface (Sf), the relay optical system (Oq) should be designed as a retrofocus optical system, and in this case, the relay optical system (Oq) is required to be realized by a combined lens constituted of a plurality of lenses.

(80) There will be described the method of acquiring the phase distributions s1(u, v), s2(u, v), and . . . , belonging to each of the input image points (Ps1, Ps2, . . . ) on the phase defining plane (T) of the optical system (Ot), from the interference image data (Df) acquired through the relay optical system (Oq) and collectively including information of interference images about the input image points (Ps1, Ps2, . . . ).

(81) Hereinabove, there has been described the case where the phase distributions s1(u, v), s2(u, v), and . . . belonging to each of the input image points (Ps1, Ps2, . . . ) on the phase defining plane (T) of the optical system (Ot) are acquired from the interference image data (Df) acquired not through the relay optical system (Oq) and collectively including information of interference images about the input image points (Ps1, Ps2, . . . ). In that case, diffractive optical light propagation simulation is performed in which light is propagated from the imaging surface (Sf) to the separation reconstruction surface (Sg), that is, a plane vertical to the optical axis and away from the imaging surface (Sf) by the distance required for the output light fluxes (Fo1, Fo2, . . . ) to be reconstructed to separate from one another, and the phase distribution g1(u, v) of the light electric field in the light flux cross-section portion (Go1) of the reconstructed output light flux (Fo1) on the separation reconstruction surface (Sg) is calculated.

(82) Also in the case where the interference image data (Df) is acquired through the relay optical system (Oq), a separation reconstruction surface exists, that is, a plane vertical to the optical axis at a suitable position where the light fluxes to be reconstructed are observed separately. Paying attention to a certain light flux (set to be a j-th light flux), if the phase distribution of the light flux on the separation reconstruction surface is calculated, the direction of a ray at an arbitrary point on the separation reconstruction surface can be determined.

(83) Accordingly, ray tracing simulation in which light travels in a direction opposite to the original light traveling direction is performed, based on design data of the relay optical system (Oq), from the separation reconstruction surface in the output image space of the relay optical system (Oq) to the phase defining plane (T) of the optical system (Ot) in the input image space of the relay optical system (Oq). Thereby the coordinates (u, v) of the arrival point on the phase defining plane (T) can be determined, and, at the same time, the phase distribution sj(u, v) at the arrival point can be determined by subtracting, from the phase on the separation reconstruction surface, the phase obtained by multiplying the optical path length from the separation reconstruction surface to the phase defining plane (T) by the wavenumber. Thus, the phase distribution on the phase defining plane (T) can be acquired.
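The phase subtraction in the step above can be sketched numerically. The wavelength, per-ray phases, and optical path lengths below are made-up sample values; a real implementation would obtain the phases from the reconstruction and the optical path lengths from ray tracing against the design data of the relay optical system (Oq).

```python
import numpy as np

# Sketch: phase at the arrival point on the phase defining plane (T)
# equals the phase on the separation reconstruction surface minus the
# wavenumber times the backward-traced optical path length.

wavelength = 633e-9                  # [m], assumed illustrative value
k_wave = 2.0 * np.pi / wavelength    # wavenumber

# Per-ray sample data (assumed, one entry per traced ray).
phase_on_sg = np.array([0.3, 1.1, 2.4])           # [rad] on surface (Sg)
opl_sg_to_t = np.array([0.0520, 0.0521, 0.0523])  # [m] traced OPL to (T)

# Phase distribution values sj at the arrival points on (T).
s_on_t = phase_on_sg - k_wave * opl_sg_to_t
```

The arrival coordinates (u, v) would come from the same backward ray trace; only the phase bookkeeping is shown here.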

(84) The phase distributions s1(u, v), s2(u, v), and . . . can be acquired by applying this operation to all light fluxes observed separately by reconstruction.

(85) As is easily seen from the description above, since the ray tracing simulation is performed back through light propagation in the relay optical system (Oq), the relay optical system (Oq) may have aberration as designed.

(86) However, if a lens included in the relay optical system (Oq) has defects such as an error in a curvature radius of a refractive surface and surface sag, or if there are eccentricity and an error in a distance between lenses, since an error occurs in phase distribution to be acquired, careful processing is required.

(87) In an optical system, when, in addition to aberration as designed, there are errors in the profiles of the refractive or reflective surfaces of the image forming optical elements and defects such as eccentricity, surface distance error, and assembly error such as tilt, their influence on performance must be evaluated, for example, by inspecting how finely the optical system can resolve image information to input (read) or output (write) the information. For this purpose, a resolution test chart, in which a pattern of a metallized film is formed on a glass substrate by applying a lithography technique, is sometimes used. However, in some cases, a numerical value serving as an index of the performance associated with the resolution needs to be calculated to perform quantitative evaluation.

(88) For example, in a case of an electrical circuit amplifier, in order to examine how gain of an output signal or a phase delay changes as a frequency of an input signal of a sine wave increases, a transfer function based on Fourier transform calculation is evaluated.

(89) In a case of an image forming optical system, in order to examine how a contrast of an output image and a phase shift change as a spatial frequency of an input image increases, an optical transfer function similarly based on Fourier transform calculation, that is, OTF (reference: M. Born, E. Wolf (translated by Toru KUSAKAWA and Hideshi YOKOTA): Principles of Optics II, published by the Tokai University, Dec. 20, 1980, fourth issue, 9.5) is evaluated.

(90) However, in the case of the electrical circuit amplifier, unless waveform distortion due to nonlinear response is considered, monotonous repetition of a sinusoidal waveform simply continues, so there is only one transfer function. In the case of an optical system, by contrast, the resolution changes depending on the place on the screen; therefore, when a plurality of places desired to be evaluated are present on the screen, OTF is required to be calculated for each place.

(91) Hereinafter, a calculation method for OTF evaluation in an optical system to which the optical system phase acquisition method of this invention, which is one of optical system evaluation methods, is applied will be described.

(92) If the places on the screen which are desired to be evaluated are determined, the above optical system of FIG. 3 or FIG. 4, constituted of the lens array (Lm) and the mask plate (Pm), or of the light flux conversion optical system (Oc) instead of the lens array (Lm) and the mask plate (Pm), realizes the input image points (Ps1, Ps2, . . . ) at the portions corresponding to the places on the screen which are desired to be evaluated. These input image points are input to the optical system (Ot) to be evaluated, whereby the phase distributions s1(u, v), s2(u, v), and . . . belonging to each of the input image points (Ps1, Ps2, . . . ) on the phase defining plane (T) are acquired.

(93) Hereinafter, as described above, the suffix j denotes any one of the numbers assigned as suffixes to identify a portion related to each input image point with respect to the entire input image points (Ps1, Ps2, . . . ), and a case where OTF near the input image point (Psj) is evaluated will be described.

(94) In order to acquire OTF, an optical path length aberration distribution is required to be obtained based on a phase distribution.

(95) Here, as shown in FIG. 5, which is a schematic diagram of a concept associated with the technique of the optical system phase acquisition method of this invention, consider a ray emitted from an input image point (Psj) that passes through the optical system (Ot) to reach and intersect an arbitrarily provided aberration defining virtual plane (K) at a point (B) at coordinates (U, V). With respect to the sum ℓ(U, V)+ℓo(U, V) of the optical path length ℓ(U, V) extending to the point (B) and the virtual optical path length ℓo(U, V) of a virtual ray extending from the point (B) to an ideal output image point (Qoj) conjugate to the input image point (Psj), an optical path length aberration H(U, V) is defined by the difference from that of the main ray, that is, the following formula (1):
H(U,V)=ℓ(U,V)+ℓo(U,V)−(ℓp+ℓop).

(96) ℓp is the optical path length extending to the point (Bp) at which the main ray emitted from the input image point (Psj) reaches and intersects the aberration defining virtual plane (K). ℓop is the virtual optical path length of the virtual main ray from the point (Bp) to the ideal output image point (Qoj).

(97) The term "ideal" in the ideal output image point (Qoj) means that the position is that predicted by aplanatic paraxial theory.

(98) In FIG. 5, for facilitating understanding of a main ray, the input image point (Psj) and the ideal output image point (Qoj) are illustrated so as to be located on y and Y axes. However, this invention is not limited thereto in actual application.

(99) Although FIG. 5 illustrates the case where the aberration defining virtual plane (K) is a plane vertical to the optical axis located behind the phase defining plane (T), the aberration defining virtual plane (K) may be the same as the phase defining plane (T), may be a plane vertical to the virtual main ray, or may be a spherical surface centered on the ideal output image point (Qoj). In the last case, in particular, the optical path length aberration is referred to as wavefront aberration.

(100) When the aberration defining virtual plane (K) is set to be the same as the phase defining plane (T), the optical path length ℓ(U, V) can be directly obtained from the phase distribution sj(u, v) on the phase defining plane (T) by dividing the phase by the wavenumber, as described above.

(101) However, when the aberration defining virtual plane (K) is not the same as the phase defining plane (T), a phase distribution on the aberration defining virtual plane (K) is first acquired, as described above, by diffractive optical light propagation simulation from the phase defining plane (T) to the aberration defining virtual plane (K), and the optical path length ℓ(U, V) is then obtained from it.

(102) Hereinafter, for ease of handling, the aberration defining virtual plane (K) is set to be the same as the phase defining plane (T), and, as described above, the case where the phase defining plane (T) is taken with respect to an exit pupil of the optical system (Ot) will be described.

(103) A pupil function represented by the following formula (2) including the optical path length aberration distribution H(U, V) is defined:
G(U,V)=E(U,V)exp{2πiH(U,V)/λ}.

(104) Here, E(U, V) is a function whose value is 0 at positions corresponding to the outside of the exit pupil. When an illumination distribution or a transmittance distribution is present in the pupil, E(U, V) can include it as a distribution of amplitude; when a phase filter or the like is present, E(U, V) can include it as a function having a distribution of phase shift. i is the imaginary unit, and λ is the wavelength.
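The pupil function of formula (2) can be sketched numerically as follows. The grid size, pupil radius, wavelength, and the defocus-like aberration H are illustrative assumptions, not values prescribed by the method.

```python
import numpy as np

# Sketch of formula (2): G(U, V) = E(U, V) * exp{2*pi*i*H(U, V)/lambda}.

wavelength = 550e-9   # [m], assumed
radius = 1.0e-3       # [m], assumed exit-pupil radius

n = 256
U, V = np.meshgrid(np.linspace(-radius, radius, n),
                   np.linspace(-radius, radius, n))

# E(U, V): 1 inside the exit pupil, 0 outside (uniform transmittance).
E = (U**2 + V**2 <= radius**2).astype(float)

# H(U, V): an assumed optical path length aberration, here a small
# defocus-like term proportional to (U^2 + V^2).
H = 0.1 * wavelength * (U**2 + V**2) / radius**2

# Pupil function of formula (2).
G = E * np.exp(2j * np.pi * H / wavelength)
```

Inside the pupil |G| = 1 (a pure phase factor) and outside it G = 0, which is the behavior formula (2) describes for a uniformly transmitting pupil.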

(105) In the input image spatial coordinates, an input image on a plane in which the z coordinate is constant is considered, and accordingly an output image on the plane conjugate to that plane is considered. The spatial frequency in the X direction on the output image plane (X, Y) is written as m, and the spatial frequency in the Y direction is written as n. In this case, OTF is represented as a function of m and n.

(106) When an input image set is coherent, OTF can be calculated by the following formula (3):
OTF(m,n)=G(λLm,λLn).

(107) L represents the length of the path extending from the point (Bp) to the ideal output image point (Qoj) along the virtual main ray.

(108) When the input image set is noncoherent, OTF can be calculated by autocorrelation integration of the following formula (4):
OTF(m,n)=∫∫G(λL(m′+m),λL(n′+n))·G̃(λLm′,λLn′)dm′dn′.

(109) Here, the integration region is from −∞ to +∞, and the mark ˜ (tilde) represents the complex conjugate.
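The autocorrelation of formula (4) can be sketched numerically. For brevity the example assumes an unaberrated circular pupil (H = 0) on an arbitrary grid and evaluates the autocorrelation with FFTs; these choices are illustrative, not part of the claimed method.

```python
import numpy as np

# Sketch of formula (4): the noncoherent OTF as the normalized
# autocorrelation of the pupil function G.

n = 256
u, v = np.meshgrid(np.linspace(-2, 2, n), np.linspace(-2, 2, n))
G = (u**2 + v**2 <= 1.0).astype(complex)   # unaberrated unit pupil

# Autocorrelation via FFT; the pupil occupies only the central part of
# the grid, so circular wrap-around is negligible here.
spectrum = np.fft.fft2(G)
autocorr = np.fft.fftshift(np.fft.ifft2(np.abs(spectrum) ** 2))

# Normalize so that OTF(0, 0) = 1.
otf = autocorr / autocorr[n // 2, n // 2]
```

With an aberrated pupil one would substitute G = E·exp{2πiH/λ} before the same autocorrelation; the OTF then falls off faster with spatial frequency, which is the behavior paragraph (110) relies on.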

(110) In an optical system, when the aberration and the defects are present, OTF falls off more rapidly with increasing spatial frequencies m, n as their degree worsens. Therefore, this invention can be applied to evaluation of the optical system (Ot).

(111) The influence of the aberration and the defects on the performance of the optical system becomes larger toward the periphery of the screen; therefore, it is preferable that the input image points (Ps1, Ps2, . . . ) be set at positions around the optical axis corresponding to the maximum field angle according to specifications, and that OTF be evaluated with respect to each position.

(112) Hereinabove, the case where OTF is obtained and evaluated has been described as an example in which the optical system (Ot) is evaluated based on the optical path length aberration distribution H(U, V). Alternatively, the optical path length aberration distribution H(U, V) related to the input image point (Psj) obtained by the formula (1) may be subjected to Zernike polynomial expanding (reference: M. Born, E. Wolf (translated by Toru KUSAKAWA and Hideshi YOKOTA): Principles of Optics II, published by the Tokai University, Dec. 20, 1980, fourth issue, 9.2.1, Appendix VII), and the optical system (Ot) may be evaluated by the magnitudes of the expanding coefficients.

(113) Zernike polynomials are defined inside a unit circle, that is, a circle whose radius is 1. When the coordinates (ξ, η) there are represented by the polar coordinates (ρ, θ) connected by the following formula (5):
ξ=ρ cos θ
η=ρ sin θ,
an optical path length aberration H(ρ, θ) is represented by a sum of values obtained by multiplying each Zernike polynomial Znm(ρ, θ) by the corresponding Zernike expanding coefficient Anm, as shown in FIG. 6a, which is a schematic diagram of a concept associated with the technique of the optical system phase acquisition method of this invention.

(114) The Zernike polynomial Znm(ρ, θ) is represented by the product of a radial function Rnm(ρ) and an argument function fm(θ), as shown in FIG. 6b. Here, the radial function Rnm(ρ) is determined by the method shown in FIG. 6c, and the argument function fm(θ) is as shown in FIG. 6d.

(115) However, as described in FIG. 6a, the main order n takes values from 0 to a suitably determined upper limit N, and only auxiliary orders m whose difference from the main order n is even are handled.

(116) In the calculation shown in FIG. 6c, the value of (n+m)/2−s may become a negative integer, whose factorial cannot be calculated. However, when the reciprocal form 1/(J−1)!=J/J! of the generally known property of the factorial, J!=J·(J−1)!, is applied to the sequence J=0, 1, 2, . . . , it can be interpreted in an extended manner that the reciprocal of the factorial of a negative integer is 0. Therefore, the coefficient of the power term of ρ is set to be 0 for any s at which the value of (n+m)/2−s becomes negative.
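A minimal sketch of the radial and argument functions described above, using the standard factorial formula. To keep it short, the radial function takes m as |m|, so the summation range never produces a negative factorial and the extended convention of paragraph (116) is not needed; the sign convention for cos/sin follows the table of formula (6).

```python
from math import factorial, cos, sin

def radial(n, m, rho):
    """Zernike radial function Rnm(rho) for |m| <= n with n - |m| even,
    from the standard factorial formula (cf. FIG. 6c)."""
    m = abs(m)
    total = 0.0
    for s in range((n - m) // 2 + 1):
        coeff = ((-1) ** s * factorial(n - s)
                 / (factorial(s)
                    * factorial((n + m) // 2 - s)
                    * factorial((n - m) // 2 - s)))
        total += coeff * rho ** (n - 2 * s)
    return total

def zernike(n, m, rho, theta):
    """Znm(rho, theta) = Rnm(rho) * fm(theta): cosine for m >= 0,
    sine for m < 0 (cf. FIG. 6b and 6d)."""
    if m >= 0:
        return radial(n, m, rho) * cos(m * theta)
    return radial(n, -m, rho) * sin(-m * theta)
```

For example, zernike(2, 0, ρ, θ) reproduces 2ρ²−1 (k=5 in formula (6)) and zernike(3, 1, ρ, 0) reproduces 3ρ³−2ρ (k=8).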

(117) A specific form of the Zernike polynomial Znm(ρ, θ) in the range n=0, 1, . . . , and 10 of the main order n is as represented by the following formula (6).

(118) TABLE-US-00001
 k   n    m   Znm(ρ, θ)
 1   0    0   1
 2   1    1   ρ cos θ
 3   1   −1   ρ sin θ
 4   2    2   ρ² cos 2θ
 5   2    0   2ρ² − 1
 6   2   −2   ρ² sin 2θ
 7   3    3   ρ³ cos 3θ
 8   3    1   (3ρ³ − 2ρ) cos θ
 9   3   −1   (3ρ³ − 2ρ) sin θ
10   3   −3   ρ³ sin 3θ
11   4    4   ρ⁴ cos 4θ
12   4    2   (4ρ⁴ − 3ρ²) cos 2θ
13   4    0   6ρ⁴ − 6ρ² + 1
14   4   −2   (4ρ⁴ − 3ρ²) sin 2θ
15   4   −4   ρ⁴ sin 4θ
16   5    5   ρ⁵ cos 5θ
17   5    3   (5ρ⁵ − 4ρ³) cos 3θ
18   5    1   (10ρ⁵ − 12ρ³ + 3ρ) cos θ
19   5   −1   (10ρ⁵ − 12ρ³ + 3ρ) sin θ
20   5   −3   (5ρ⁵ − 4ρ³) sin 3θ
21   5   −5   ρ⁵ sin 5θ
22   6    6   ρ⁶ cos 6θ
23   6    4   (6ρ⁶ − 5ρ⁴) cos 4θ
24   6    2   (15ρ⁶ − 20ρ⁴ + 6ρ²) cos 2θ
25   6    0   20ρ⁶ − 30ρ⁴ + 12ρ² − 1
26   6   −2   (15ρ⁶ − 20ρ⁴ + 6ρ²) sin 2θ
27   6   −4   (6ρ⁶ − 5ρ⁴) sin 4θ
28   6   −6   ρ⁶ sin 6θ
29   7    7   ρ⁷ cos 7θ
30   7    5   (7ρ⁷ − 6ρ⁵) cos 5θ
31   7    3   (21ρ⁷ − 30ρ⁵ + 10ρ³) cos 3θ
32   7    1   (35ρ⁷ − 60ρ⁵ + 30ρ³ − 4ρ) cos θ
33   7   −1   (35ρ⁷ − 60ρ⁵ + 30ρ³ − 4ρ) sin θ
34   7   −3   (21ρ⁷ − 30ρ⁵ + 10ρ³) sin 3θ
35   7   −5   (7ρ⁷ − 6ρ⁵) sin 5θ
36   7   −7   ρ⁷ sin 7θ
37   8    8   ρ⁸ cos 8θ
38   8    6   (8ρ⁸ − 7ρ⁶) cos 6θ
39   8    4   (28ρ⁸ − 42ρ⁶ + 15ρ⁴) cos 4θ
40   8    2   (56ρ⁸ − 105ρ⁶ + 60ρ⁴ − 10ρ²) cos 2θ
41   8    0   70ρ⁸ − 140ρ⁶ + 90ρ⁴ − 20ρ² + 1
42   8   −2   (56ρ⁸ − 105ρ⁶ + 60ρ⁴ − 10ρ²) sin 2θ
43   8   −4   (28ρ⁸ − 42ρ⁶ + 15ρ⁴) sin 4θ
44   8   −6   (8ρ⁸ − 7ρ⁶) sin 6θ
45   8   −8   ρ⁸ sin 8θ
46   9    9   ρ⁹ cos 9θ
47   9    7   (9ρ⁹ − 8ρ⁷) cos 7θ
48   9    5   (36ρ⁹ − 56ρ⁷ + 21ρ⁵) cos 5θ
49   9    3   (84ρ⁹ − 168ρ⁷ + 105ρ⁵ − 20ρ³) cos 3θ
50   9    1   (126ρ⁹ − 280ρ⁷ + 210ρ⁵ − 60ρ³ + 5ρ) cos θ
51   9   −1   (126ρ⁹ − 280ρ⁷ + 210ρ⁵ − 60ρ³ + 5ρ) sin θ
52   9   −3   (84ρ⁹ − 168ρ⁷ + 105ρ⁵ − 20ρ³) sin 3θ
53   9   −5   (36ρ⁹ − 56ρ⁷ + 21ρ⁵) sin 5θ
54   9   −7   (9ρ⁹ − 8ρ⁷) sin 7θ
55   9   −9   ρ⁹ sin 9θ
56  10   10   ρ¹⁰ cos 10θ
57  10    8   (10ρ¹⁰ − 9ρ⁸) cos 8θ
58  10    6   (45ρ¹⁰ − 72ρ⁸ + 28ρ⁶) cos 6θ
59  10    4   (120ρ¹⁰ − 252ρ⁸ + 168ρ⁶ − 35ρ⁴) cos 4θ
60  10    2   (210ρ¹⁰ − 504ρ⁸ + 420ρ⁶ − 140ρ⁴ + 15ρ²) cos 2θ
61  10    0   252ρ¹⁰ − 630ρ⁸ + 560ρ⁶ − 210ρ⁴ + 30ρ² − 1
62  10   −2   (210ρ¹⁰ − 504ρ⁸ + 420ρ⁶ − 140ρ⁴ + 15ρ²) sin 2θ
63  10   −4   (120ρ¹⁰ − 252ρ⁸ + 168ρ⁶ − 35ρ⁴) sin 4θ
64  10   −6   (45ρ¹⁰ − 72ρ⁸ + 28ρ⁶) sin 6θ
65  10   −8   (10ρ¹⁰ − 9ρ⁸) sin 8θ
66  10  −10   ρ¹⁰ sin 10θ

(119) Here, ρⁿ denotes the n-th power of ρ; for example, ρ² means the square of ρ. k is a mere serial number.

(120) A method of determining the Zernike expanding coefficient Anm related to the optical path length aberration distribution H(U, V) will be described.

(121) However, as described above, since the Zernike polynomials are defined only inside the unit circle, it is necessary to determine in advance a coordinate conversion function configured to convert the coordinates (U, V) on the aberration defining virtual plane (K) into the coordinates (ξ, η), which are connected to the polar coordinates (ρ, θ) by the formula (5) and satisfy ξ²+η²≤1.

(122) In the following description, for ease of explanation, when the Zernike polynomial expanding of the optical path length aberration distribution H(U, V) is described, it is premised, even if not explicitly referred to, that the coordinates (U, V) and the coordinates (ξ, η) are matched by applying the coordinate conversion function.

(123) As is already known, the Zernike polynomials have the feature of being orthogonal to each other in the unit circle; that is, the integration over the unit circle of the product of two Zernike polynomials Znm(ρ, θ) and Zn′m′(ρ, θ) is 0 when the pairs of orders n, m and n′, m′ are not the same.

(124) When the pairs of orders n, m and n′, m′ are the same as each other, the integration of the product, that is, the square integration value Snm, is Snm=π/(n+1) when the auxiliary order m is 0, and Snm=π/{2(n+1)} when the auxiliary order m is not 0.

(125) The Zernike expanding coefficient Anm can be determined by using the orthogonality.

(126) Namely, when any Zernike polynomial of the formula (6), for example, the k-th Zernike polynomial in the serial number, is selected, the product of the value of the optical path length aberration H(U, V) desired to be expanded at the coordinates (ξ, η) and the value of the k-th Zernike polynomial Znm(ρ, θ) at the same coordinates (using the coordinate conversion (ξ, η)→(ρ, θ) based on the formula (5)) is subjected to numerical integration over the unit circle, that is, the range ξ²+η²≤1. The Zernike expanding coefficient Anm corresponding to the k-th Zernike polynomial Znm(ρ, θ) can be obtained by dividing the calculated integrated value by the square integration value Snm corresponding to the k-th Zernike polynomial Znm(ρ, θ).
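The orthogonality-based extraction just described can be sketched as follows. The aberration is synthetic, chosen as a pure defocus term H = 0.7·Z20 with Z20 = 2ρ²−1 (k=5 in formula (6)), so the recovered coefficient should come back as 0.7; grid density and the Riemann-sum integration are illustrative assumptions.

```python
import numpy as np

# Sketch of paragraph (126): Anm = (integral of H * Znm over the unit
# circle) / Snm, evaluated by a simple Riemann sum on a Cartesian grid.

n_grid = 1001
xi, eta = np.meshgrid(np.linspace(-1, 1, n_grid),
                      np.linspace(-1, 1, n_grid))
inside = xi**2 + eta**2 <= 1.0          # unit-circle mask
rho2 = xi**2 + eta**2

z20 = 2.0 * rho2 - 1.0                  # Zernike polynomial n = 2, m = 0
H = 0.7 * z20                           # synthetic aberration data

dA = (2.0 / (n_grid - 1)) ** 2          # area element of the grid
integral = np.sum(H[inside] * z20[inside]) * dA

S20 = np.pi / (2 + 1)                   # square integration value, m = 0
A20 = integral / S20
```

The small residual error comes only from the jagged sampling of the circle boundary; a finer grid (or a quadrature adapted to the disk) reduces it.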

(127) For example, if the Zernike polynomial expanding is performed in the range n=0, 1, . . . , and 8 of the main order, attention is paid to the 45 Zernike polynomials corresponding to the serial numbers k=1, 2, . . . , and 45 of the formula (6), and the above acquisition of the Zernike expanding coefficient Anm corresponding to the k-th Zernike polynomial Znm(ρ, θ) may be applied to k=1 to 45.

(128) Since the Zernike polynomial of each k corresponds to a classified type of aberration, the magnitude of the Zernike expanding coefficient Anm for a certain k represents the magnitude of the aberration of the classification corresponding to that k, and thus the optical system (Ot) can be evaluated by the magnitudes of the Zernike expanding coefficients.

(129) There is also a method of obtaining the expanding coefficients by solving equations, instead of the above method using numerical integration.

(130) FIG. 6a is abbreviated as the following formula (7):
H(ρ,θ)=Σk{Ak·Zk(ρ,θ)}.

(131) Here, the sum is performed over the serial number k instead of over the pair of orders n, m, and the explicitly represented coordinates are changed to (ρ, θ), premising that the coordinate conversion (ξ, η)→(ρ, θ) is used.

(132) Similarly to the above, if expanding is performed by the 45 Zernike polynomials corresponding to the serial numbers k=1, 2, . . . , and 45, the values of the 45 Zernike expanding coefficients Ak have to be determined. Suitable 45 coordinates (ρi, θi), i=1, 2, . . . , and 45, holding the optical path length aberration H(U, V) desired to be expanded are selected, the value of the aberration at each of those coordinates is extracted from the data of the optical path length aberration H(U, V) and set on the left side of the formula (7), and the Zernike polynomials Zk(ρ, θ) on the right side are calculated at the coordinates (ρi, θi). Thereby the following formula (8), comprising 45 formulae according to a linear combination of the 45 unknowns Ak, k=1, 2, . . . , and 45, is obtained:
H(ρ1,θ1)=Σk{Ak·Zk(ρ1,θ1)}
H(ρ2,θ2)=Σk{Ak·Zk(ρ2,θ2)}
. . .
H(ρi,θi)=Σk{Ak·Zk(ρi,θi)}
. . . .

(133) Since this is a simultaneous linear equation in which the number of the unknowns and the number of the formulae are the same, it can be solved, and thus, all values of the 45 Zernike expanding coefficients Ak can be determined.

(134) Alternatively, more than 45 coordinates (i, i), for example, the coordinates with the number four to five times are selected, and the 45 Zernike expanding coefficients Ak may be determined by a least-squares method (reference: Chapter 8 of Shinban suchikeisan hando-bukku (New Edition of Numerical Calculation Handbook), edited by Yutaka OHNO and Kazuo ISODA, Ohmsha, Sep. 1, 1990, first edition, first issue).

(135) This will be hereinafter briefly described.

(136) For example, 180 coordinates (ρi, θi) are taken; the number is four times 45, the maximum number of k. The number of the coordinates is referred to as I for the sake of simplicity.

(137) In order to determine the unknown coefficients Ak on the right side of the formula (7), when the measured values H(ρi, θi) of H(ρ, θ) at the coordinates (ρi, θi) corresponding to i=1, 2, . . . , and I are available, the error (the square sum of the errors) from their true values is minimized. In accordance with the teaching of the least-squares method, the following formula (9), corresponding to each k of k=1 to 45 and comprising 45 formulae, is to be solved as a simultaneous linear equation:
Σj[Σi{Zj(ρi,θi)·Zk(ρi,θi)}]·Aj=Σi{H(ρi,θi)·Zk(ρi,θi)}
(k=1 to 45).

(138) Here, Σi represents the sum over i=1, 2, . . . , and I, and Σj represents the sum over j=1, 2, . . . , and 45.

(139) Although the formula (9) is somewhat hard to understand, the inside of [ ] on the left side is a numerical value depending on k and is the coefficient multiplied to Aj; thus, the left side is a linear combination of the 45 values Aj. The right side is also a numerical value depending on k, and 45 such formulae are present.

(140) When this is visually represented, with the mark ■ standing for a numerical value depending on k, the formula (9) is found to be as follows:

(141) k=1: ■·A1+■·A2+ . . . +■·A45=■
k=2: ■·A1+■·A2+ . . . +■·A45=■
. . .
k=45: ■·A1+■·A2+ . . . +■·A45=■.

(142) Namely, since the formula (9) is a simultaneous linear equation including the 45 unknowns Aj and comprising 45 formulae, Aj, j=1, 2, . . . , and 45, can be determined by solving this simultaneous linear equation.
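The least-squares determination above can be sketched numerically. To keep the example short it uses a basis of only 3 polynomials (k=1, 2, 5 of formula (6)) instead of 45, and 12 sample points instead of 180; np.linalg.lstsq solves the same normal equations as formula (9). The sample points and "measured" aberration are synthetic.

```python
import numpy as np

# Sketch of the least-squares fit of Zernike expanding coefficients.
rng = np.random.default_rng(0)

# Sample points (rho_i, theta_i) drawn uniformly inside the unit circle.
I = 12
rho = np.sqrt(rng.uniform(0.0, 1.0, I))
theta = rng.uniform(0.0, 2.0 * np.pi, I)

# Basis: Z1 = 1, Z2 = rho*cos(theta), Z5 = 2*rho^2 - 1 (formula (6)).
Z = np.column_stack([np.ones(I),
                     rho * np.cos(theta),
                     2.0 * rho**2 - 1.0])

# Synthetic "measured" aberration built from known coefficients.
A_true = np.array([0.3, -1.2, 0.5])
H = Z @ A_true

# Least-squares solution (equivalent to solving the normal equations
# of formula (9) for this reduced basis).
A_fit, *_ = np.linalg.lstsq(Z, H, rcond=None)
```

With noise-free data and more sample points than coefficients, the fit reproduces the generating coefficients; with noisy measured data, the same call returns the minimum-square-error estimate, which is the point of paragraph (137).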

(143) In the above description, there has been described the example in which, when the Zernike polynomial expanding in the range of the main order n=0, 1, . . . , and 8 is adopted, the 45 serial numbers k=1, 2, . . . , and 45 of the formula (6) are handled; however, the number of main orders to be adopted may be determined according to the accuracy desired to be achieved.

(144) For example, when it is determined that the main order n is up to 6, the serial number k may be up to 28. When it is determined that the main order n is up to 10, the serial number k may be up to 66.
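The serial-number limits quoted above follow from counting, for each main order n up to N, the n+1 auxiliary orders m with n−m even, which sums to (N+1)(N+2)/2; this closed form is an added observation consistent with the counts in the text, and can be checked directly:

```python
def term_count(N):
    """Number of Zernike terms through main order N: for each n there
    are n + 1 admissible auxiliary orders m (n - m even), and the sum
    of (n + 1) for n = 0..N is (N + 1)(N + 2)/2."""
    return (N + 1) * (N + 2) // 2
```

For N = 6, 8, and 10 this gives 28, 45, and 66, matching the serial-number limits quoted in paragraphs (127) and (144).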

(145) Hereinabove, as an embodiment of the optical system evaluation method of this invention, the method applied to evaluation using OTF and Zernike polynomial expanding has been described. However, since acquisition of the optical path length aberration distribution H(U, V) of the optical system based on this invention is equal to observation of the optical system with an interferometer, this invention is applicable to all optical system evaluation techniques using the interferometer.

(146) The invention is applicable in industries that make effective use of a method of calculating OTF and Zernike expanding coefficients, for example, thereby efficiently acquiring a phase distribution of light with respect to each input image point. Such a phase distribution is useful when an optical system is evaluated, for example by a lens inspection device, including cases where the optical system includes aberration as designed, errors in the profiles of the refractive or reflective surfaces of the image forming optical elements, and defects such as eccentricity, surface distance error, and assembly error such as tilt.