Method and system for determining information about a transparent optical element comprising a lens portion and a plane parallel portion
09658129 · 2017-05-23
Assignee
Inventors
- Xavier Colonna De Lega (Middlefield, CT)
- Martin F. Fay (Middletown, CT)
- Peter J. de Groot (Middletown, CT)
CPC classification
International classification
Abstract
A method for determining information about a transparent optical element including a lens portion and a plane parallel portion, the lens portion having at least one curved surface and the plane parallel portion having opposing first and second surfaces, includes: directing measurement light to the transparent optical element; detecting measurement light reflected from at least one location on the first surface of the plane parallel portion; detecting measurement light reflected from the second surface of the plane parallel portion at a location corresponding to the at least one location on the first surface; determining, based on the detected light, information about the plane parallel portion; and evaluating the transparent optical element based on the information about the plane parallel portion.
Claims
1. A method for determining information about a transparent optical element comprising a lens portion and a plane parallel portion, the lens portion comprising at least one curved surface and the plane parallel portion comprising opposing first and second surfaces, the method comprising: positioning the transparent optical element relative to an optical instrument comprising an objective lens having an optical axis, the first surface of the transparent optical element facing the objective lens; directing measurement light through the objective lens to the transparent optical element while scanning a relative position between the objective lens and the transparent optical element along the optical axis; during the scanning with the first surface facing the objective lens, detecting measurement light reflected from at least one location on the first surface of the plane parallel portion; during the scanning with the first surface facing the objective lens, detecting measurement light reflected from the second surface of the plane parallel portion at a location corresponding to the at least one location on the first surface; determining, based on the detected light, information about the plane parallel portion, the information comprising a physical thickness profile, an optical thickness profile of the plane parallel portion, a height profile of the first surface of the plane parallel portion and a height profile of the second surface of the plane parallel portion, and/or information about a refractive index of a material forming the transparent optical element; and evaluating the transparent optical element based on the information about the plane parallel portion.
2. The method of claim 1, wherein the information about the refractive index comprises information about variations of a refractive index between different locations of the plane parallel portion.
3. The method of claim 1, wherein the information about the refractive index comprises information about birefringence of the material forming the transparent optical element.
4. The method of claim 1, further comprising detecting measurement light reflected from a reference feature on a fixture that supports the transparent optical element and determining, based on the detected light from the reference feature, information about the reference feature.
5. The method of claim 4, wherein the measurement light reflected from the reference feature is reflected from a location corresponding to the at least one location on the first surface of the plane parallel portion.
6. The method of claim 1, wherein the measurement light is detected for a first polarization and, thereafter, a second polarization different from the first polarization.
7. The method of claim 1, wherein evaluating the transparent optical element comprises inferring information about a dimensional or optical property of the lens portion based on the information about the plane parallel portion.
8. The method of claim 1, wherein the plane parallel portion is a tilt control interlock of the transparent optical element.
9. The method of claim 1, wherein the lens portion comprises a second curved surface opposite the first curved surface.
10. The method of claim 1, wherein the information about the lens portion comprises information about birefringence of a material forming the lens portion.
11. The method of claim 1, wherein evaluating the transparent optical element comprises determining whether the transparent optical element meets a specification requirement based on the information about the plane parallel portion.
12. A method of forming an optical assembly, comprising: determining information about the transparent optical element using the method of claim 1, where the transparent optical element is a lens; and securing the lens relative to one or more other lenses in a barrel to form the optical assembly.
13. A system for determining information about a transparent optical element comprising a lens portion and a plane parallel portion, the lens portion comprising at least one curved surface and the plane parallel portion comprising opposing first and second surfaces, the system comprising: a fixture for supporting the transparent optical element; an optical instrument comprising a light source, a detector, and optical elements including an objective lens having an optical axis, the optical elements being arranged to direct light from the light source towards the transparent optical element when the transparent optical element is supported by the fixture with the first surface facing the objective lens and direct light reflected from the transparent optical element to the detector; and an electronic controller in communication with the detector, the electronic controller being programmed to cause the system to scan a relative position between the objective lens and the transparent optical element along the optical axis with the first surface facing the objective lens, to cause the detector to detect light reflected from the first surface and the second surface during the scanning, and to determine information about the plane parallel portion based on light detected from corresponding locations of the first and second surfaces of the plane parallel portion, the information comprising a physical thickness profile, an optical thickness profile of the plane parallel portion, a height profile of the first surface of the plane parallel portion and a height profile of the second surface of the plane parallel portion, and/or information about a refractive index of a material forming the transparent optical element.
14. The system of claim 13, wherein the optical instrument is an optical areal surface topography instrument.
15. The system of claim 13, wherein the fixture comprises a reference feature located in a path of the light from the optical instrument.
16. The system of claim 15, wherein the reference feature is a planar reflector.
17. The system of claim 15, wherein the fixture comprises a stand which positions the transparent optical element a distance from the reference feature.
18. The system of claim 13, wherein the optical instrument comprises a polarization module configured to polarize light from the light source.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(26) Referring to
(27) An optical metrology instrument 201 is used to evaluate some of the optical properties of the lens 200, including in particular the refractive index uniformity and residual stress birefringence, as well as dimensional features such as the thickness of the lens, including but not limited to the thickness, T, in the figure as a function of the coordinates x, y (see the Cartesian coordinate system shown in
(28) In general, optical metrology instrument 201 can be one of a variety of different instruments capable of performing an areal surface topography measurement of lens 200. Example instruments include coherence scanning interferometry (CSI) microscopes (such as disclosed, e.g., in P. de Groot, Coherence Scanning Interferometry, in Optical Measurement of Surface Topography, edited by R. Leach, chapt. 9, pp. 187-208, (Springer Verlag, Berlin, 2011)), imaging confocal microscopes (such as disclosed, e.g., in R. Artigas, Imaging Confocal Microscopy, in Optical Measurement of Surface Topography, edited by R. Leach, chapt. 11, pp. 237-286, (Springer Berlin Heidelberg, 2011)), structured illumination microscopes (such as disclosed, e.g., in X. M. Colonna de Lega, Non-contact surface characterization using modulated illumination, U.S. Patent (2014)), focus sensing instruments (such as disclosed, e.g., in F. Helmli, Focus Variation Instruments, in Optical Measurement of Surface Topography, edited by R. Leach, chapt. 7, pp. 131-166, (Springer Berlin Heidelberg, 2011)), or wavelength-tuned Fourier transform phase shifting interferometry (FTPSI) systems (such as disclosed, e.g., in L. L. Deck, Fourier-Transform Phase-Shifting Interferometry, Applied Optics 42 (13), 2354-2365 (2003)).
(29) Referring to
(30) In the embodiment of the
(31) After reflecting from the test and reference surfaces, the test and reference light are recombined by beam splitter 320 to form combined light 332, which is transmitted by beam splitter 312 and relay lens 336 to form an optical interference pattern on an electronic detector 334 (for example, a multi-element CCD or CMOS detector). The intensity profile of the optical interference pattern across the detector is measured by different elements of the detector and stored in an electronic processor 301 (e.g., a standalone or networked computer, or a processor integrated with other components of the system) for analysis. Relay lens 336 images different points in a focal plane of the objective 306 to corresponding points on detector 334.
(32) A field stop 338 positioned between relay optics 308 and 310 defines the area of test surface 324 illuminated by test light 322. After reflection from the lens 200 and reference surface, combined light 332 forms a secondary image of the source at pupil plane 314 of the objective lens.
(33) Optionally, polarization elements 340, 342, 344, and 346 define the polarization state of the test and reference light being directed to the respective test and reference surfaces, and that of the combined light being directed to the detector. Depending on the embodiment, each polarization element can be a polarizer (e.g., a linear polarizer), a retardation plate (e.g., a half or quarter wave plate), or a similar optic that affects the polarization state of an incident beam. Furthermore, in some embodiments one or more of the polarization elements can be absent. Moreover, depending on the embodiment, beam splitter 312 can be a polarizing beam splitter or a non-polarizing beam splitter. In general, because of the presence of polarization elements 340, 342 and/or 346, the state of polarization of test light 322 at test surface 324 can be function of the azimuthal position of the light in pupil plane 314.
(34) In the presently described embodiment, source 302 provides illumination over a broad band of wavelengths (e.g., an emission spectrum having a full width at half maximum of more than 20 nm, of more than 50 nm, or preferably, even more than 100 nm). For example, source 302 can be a white light emitting diode (LED), a filament of a halogen bulb, an arc lamp such as a Xenon arc lamp, or a so-called supercontinuum source that uses non-linear effects in optical materials to generate very broad source spectra (>200 nm). The broad band of wavelengths corresponds to a limited coherence length. A translation stage 350 adjusts the relative optical path length between the test and reference light to produce an optical interference signal at each of the detector elements. For example, in the embodiment of
(35) Referring back to
(36) Metrology information for the upper surface 211 of the lens 200 is derived from the reflection of light in air (signal S1 in the figure). Correspondingly, metrology information for the lower surface 212 of the lens 200 is derived from the reflection of light within the lens material (signal S2 in the figure).
(37) Considering the specific example of a CSI microscope system such as system 300, the relative distance T between the upper and lower surfaces 211 and 212 at a specific coordinate x, y will be given by
T={tilde over (T)}/n.sub.G  (1)
where {tilde over (T)} is the apparent or measured optical thickness as determined by CSI microscopy or by wavelength-tuned FTPSI using coherence information, and n.sub.G at low NA (e.g., 0.06 or less) is the group-velocity index of refraction (at high NA, e.g., 0.2 or more, the value n.sub.G could change because of the obliquity effect, resulting in an effective group-velocity index of refraction). Conversely, signal S2 will appear to originate at a higher z location when using confocal, structured illumination or focus sensing. The physical thickness in this case is given by
T=n{circumflex over (T)}  (2)
where {circumflex over (T)} is the apparent or measured optical thickness as determined by confocal or related focus-sensitive instruments, and n is the phase-velocity index of refraction.
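The two instrument families invert the apparent thickness in opposite directions. As a minimal illustration of Eqs. (1) and (2) (not part of the patent; function names are invented for this sketch):

```python
def physical_thickness_csi(optical_thickness, n_group):
    """Eq. (1): a CSI microscope reports an optical thickness n_G*T,
    so recover the physical thickness by dividing by the group index."""
    return optical_thickness / n_group

def physical_thickness_confocal(apparent_thickness, n_phase):
    """Eq. (2): focus-based instruments see the lower surface at the
    foreshortened apparent depth T/n, so multiply by the phase index."""
    return n_phase * apparent_thickness
```

For a 2 mm part with n_G = 1.5, a CSI instrument would report roughly 3 mm of optical thickness, which the first helper converts back to 2 mm.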
(38) The apparent thickness map {tilde over (T)}(x,y) or {circumflex over (T)}(x,y) provides information about the mean value and uniformity of the physical thickness T(x,y) as well as the optical properties of the lens 200 as exemplified by the index of refraction n.sub.G(x,y) or n(x,y). In some cases, the composite uniformity and mean value of both of these properties, dimensional and optical, are sufficient for process control in the manufacture of the lens 200.
(39) If desired, additional information such as the thickness map T(x,y) or the optical refractive index n(x,y) obtained by other means, such as by contact profilometry (as disclosed, e.g., in P. Scott, Recent Developments in the Measurement of Aspheric Surfaces by Contact Stylus Instrumentation, Proc. SPIE 4927, 199-207 (2002)), may supplement the measurements performed by the optical metrology instrument 201, allowing for separation and independent evaluation of the effects of the refractive index from the physical thickness.
(40) While the foregoing lens characterization relies on height profile information about surfaces 211 and 212 alone, lens characterization may utilize other information too. For example, in some implementations, a specialized reference fixture is included to provide additional optical information. Referring to
(41) Fixture 400 includes support structures 410 and reflective upper surface 420. Lens 200 rests on support structures 410, which position the lens a distance T.sub.air from reflective surface 420. Support structures 410 may be composed of multiple pillars or walls on opposing sides of lens 200, or may be a single cylindrical support separating an inner portion 422 from an outer portion 421 of reflective surface 420. Fixture 400 may be tailored specifically for lens 200, and may be replaced with another fixture when a different shaped lens is measured.
(42)
(43) In a first step, depicted in
(44) In a second step, depicted in
(45) The metrology information is combined to create maps of the thickness and refractive index distribution between the upper and lower parallel surfaces of the lens element. For coherence scanning interferometers and comparable interferometric instruments, after acquiring the apparent height information z.sub.1, . . . , z.sub.4, the physical and optical thickness maps are, respectively:
T(x,y)=z.sub.1(x,y)−z.sub.2(x,y)+z.sub.3(x,y)−z.sub.4(x,y)  (3)
{tilde over (T)}(x,y)=z.sub.1(x,y)−z.sub.2(x,y).  (4)
(46) The map of the group-velocity refractive index is then
n.sub.G(x,y)={tilde over (T)}(x,y)/T(x,y).  (5)
(47) When the metrology system relies on confocal, structured illumination or focus sensing surface profiling, Eqs. (4) and (5) become
{circumflex over (T)}(x,y)=z.sub.1(x,y)−z.sub.2(x,y),  (6)
n=T/{circumflex over (T)}.  (7)
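Under the sign conventions of Eqs. (3)-(5), the map computation can be sketched with NumPy. This is a hypothetical helper, not the patent's implementation; the role assigned to each z map is inferred from the fixture description:

```python
import numpy as np

def thickness_and_index_maps(z1, z2, z3, z4):
    """Combine four apparent height maps from the fixture measurement.

    Assumed roles: z1 = upper lens surface (in air), z2 = lower lens
    surface (seen through the glass), z3 = fixture reflector seen through
    the lens, z4 = fixture reflector seen directly in air.
    """
    T = z1 - z2 + z3 - z4        # Eq. (3): physical thickness map
    T_opt = z1 - z2              # Eq. (4): apparent optical thickness map
    n_G = T_opt / T              # Eq. (5): group-velocity index map
    return T, T_opt, n_G
```

Because the maps are element-wise, the same call yields per-pixel uniformity information across the plane parallel portion.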
(48) The thickness map provides information about the mean thickness of the lens as well as possible tilt between the two sides of the lens, based on variations in the measured thickness from one side of the lens to the other. The refractive index map provides information about possible refractive index gradients across the lens area.
(49) As an optional additional step, knowing the nominal refractive dispersion properties of the material in the lens, it is often possible to transform the group index to the phase index:
n=Transform(n.sub.G).(8)
(50) In some cases, the transform may be as simple as an additive constant. For instance, using the standard relation between group and phase index, the additive constant is
(51) n.sub.G−n=k.sub.0(dn/dk)|.sub.k=k.sub.0
where n(k) is the nominal refractive index of the material (as stated by the manufacturer or measured through some other means), expressed as a function of wavenumber, and k.sub.0 is the centroid wavenumber of the spectral band used for the measurement. Other transforms are possible, such as a lookup table or polynomial function. Transform polynomials can be created by fitting data points of measured group index values (using the instrument) as a function of the known refractive index of test samples.
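The additive-constant transform of Eq. (8) can be illustrated numerically. The central-difference step and function names below are illustrative assumptions, not from the patent:

```python
def group_to_phase_offset(n_of_k, k0, dk=1e-4):
    """Additive constant n_G - n = k0 * (dn/dk) at the centroid wavenumber
    k0, estimated by a central difference on the nominal dispersion n(k)."""
    dndk = (n_of_k(k0 + dk) - n_of_k(k0 - dk)) / (2.0 * dk)
    return k0 * dndk

def phase_index_from_group(n_G, n_of_k, k0):
    """Eq. (8) in the additive-constant case: n = n_G - k0 * dn/dk."""
    return n_G - group_to_phase_offset(n_of_k, k0)
```

Any callable dispersion curve works here, e.g. one built from manufacturer Sellmeier coefficients.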
(52) Additional measurements may also be made in order to improve the accuracy of the process. For example, referring to
(53) In some embodiments, the measurement is repeated for different configurations of the instrument such that data collection is performed with substantially different spectral distributions, for example, a first spectral distribution centered between 400 nm and 490 nm, a second spectral distribution centered between 490 nm and 590 nm and a third spectral distribution centered between 590 nm and 700 nm. Each spectral distribution provides an independent measurement of the optical properties of the lens material. The multiple measured values of group-velocity index or phase-velocity index can then be combined to derive an estimate of the variation of the material optical properties with wavelength (or dispersion), which can be used to verify that the material is within tolerances and/or for controlling the manufacturing process. In the case where the instrument measures group index (e.g., a coherence scanning interferometer), the estimate of dispersion is further used to compute an estimate of the refractive index, for example using the product of the first-order derivative and the centroid wavenumber. In some embodiments, the multiple spectral distributions are present concurrently while the instrument collects the data resulting from the scanning data acquisition. The multiple spectral bands are separated at the detector, for example using a color sensitive device (a CCD or CMOS camera equipped with color filters). Alternatively, the returning light is spectrally separated by dichroic optical elements that reflect or transmit specific spectral components toward multiple monochrome sensors. A minimum of two spectral bands is required to estimate the dispersion property of the material.
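The two-band dispersion estimate described above can be sketched as follows. Treating the measured group-index slope as a stand-in for dn/dk is an assumption made purely for illustration:

```python
def estimate_dispersion(n_G_bands, k_centroids):
    """Finite-difference slope of the measured group index across two
    spectral bands (the stated minimum for a dispersion estimate)."""
    (g1, g2), (k1, k2) = n_G_bands, k_centroids
    return (g2 - g1) / (k2 - k1)

def phase_index_estimate(n_G, k0, dndk):
    """Refractive-index estimate using the product of the first-order
    derivative and the centroid wavenumber, as described in the text."""
    return n_G - k0 * dndk
```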
(54) While the foregoing measurements may be performed using polarized or unpolarized light, it is possible to glean additional information about lens 200 using polarized light. For example, referring to
(55) The presence of stress birefringence in a sample may be monitored by observing its effects in the plane-parallel areas of the sample. Here, the measurement process outlined in flowchart 500 or flowchart 700 is performed at least twice, where each complete data acquisition cycle is performed for a different polarization state of the illumination light used by the metrology system. The polarization state of the optical measurement instrument may be manipulated using conventional polarizers and/or waveplates.
(56) For example, as shown in flowchart 800, a first measurement is performed with the illumination light linearly polarized along the x direction and repeated with illumination light linearly polarized along the y direction. In some embodiments, the polarization directions are aligned with respect to datum features on lens 200; for example, where the lens is injection-molded, the datum features may correspond to the gate where the injected material enters the mold cavity.
(57) The multiple refractive index maps collected are then combined to provide a quantitative measurement of birefringence present in the lens material. For example, in step 870, a birefringence effect is calculated from the measurements. In step 880, a mean refractive index is calculated from the measurements. Birefringence may be, for example, expressed as the difference of optical paths through the lens, as shown in step 870 of flowchart 800. Here the cumulative effect of birefringence through the lens is calculated as
B(x,y)=[n.sub.2(x,y)−n.sub.1(x,y)]T  (9)
while the mean index (as shown in step 880) is
n(x,y)=0.5[n.sub.2(x,y)+n.sub.1(x,y)].(10)
(58) Birefringence can similarly be expressed as the difference of optical path per unit length of propagation within the material. The phase-velocity refractive indices n.sub.1,2 correspond to the two polarization orientations. For process control, these indices are adequately represented by the group index measurements that follow, for example, from CSI microscope measurements. Further, for some process control situations, a measurement of optical thickness variation
B(x,y)={tilde over (T)}.sub.2(x,y)−{tilde over (T)}.sub.1(x,y)  (11)
or
B(x,y)={circumflex over (T)}.sub.2(x,y)−{circumflex over (T)}.sub.1(x,y)  (12)
using the simpler configuration of
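The birefringence bookkeeping of Eqs. (9)-(12) is compact enough to sketch directly; names are illustrative and inputs may be scalars or per-pixel arrays:

```python
def birefringence_maps(n1, n2, T):
    """Eq. (9): cumulative optical-path difference through thickness T,
    and Eq. (10): mean index of the two polarization states."""
    B = (n2 - n1) * T
    n_mean = 0.5 * (n2 + n1)
    return B, n_mean

def birefringence_from_optical_thickness(T_opt_pol1, T_opt_pol2):
    """Eqs. (11)/(12): difference of apparent optical thickness measured
    at the two polarizations, usable directly for process control."""
    return T_opt_pol2 - T_opt_pol1
```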
(59) While the foregoing embodiments involve measurements characterizing the inactive portion (e.g., plane parallel portion) of the lens and inferring information about the lens generally from those characterizations, other implementations are also possible. For example, measurements directly characterizing the active portion of the lens can also be performed.
(60) Referring to
(61) The inactive portion 910 is composed of a series of planar, annular surfaces with step features offsetting inner and outer planar surfaces on each side of lens 900. In general, the surfaces of inactive portion 910 may, for example, include features formed on the sample to aid in the alignment and fixturing of the lens in a final assembly, and/or to facilitate measurement of the relative alignment of lens features. In this case, the upper side of inactive portion 910 includes planar surfaces 912 and 916. A step 914 separates surfaces 912 and 916. Step 914 meets surface 912 at edge 914o and surface 916 at edge 914i. Surface 916 meets upper convex surface 921 at edge 918.
(62) The lower side of inactive portion 910 includes planar surfaces 911 and 917. A step 915 separates surfaces 911 and 917. Step 915 meets surface 911 at edge 915o and surface 917 at edge 915i. Surface 917 meets lower concave surface 922 at edge 919.
(63) Optical metrology instrument 201 is used to evaluate some of the dimensional features of lens 900, including (but not limited to) the apex-to-apex thickness T.sub.Apex and the relative x, y lateral offsets (referred to a common axis z) of surface feature locations, including (but not limited to) apex centers and alignment surface features. These evaluations are performed by measuring the upper surface profile to determine 3D apex location and relative 3D location and topography of other surface features. These measurements serve as indicators of the overall dimensional properties of the lens.
(64) During operation, optical instrument 201 looks down at the sample along an observation direction parallel to the z-axis shown in
(65) Metrology information for apex 923 is derived from the reflection of light in air (signal S.sub.UA in
(66) Considering the specific example of a CSI microscope system such as that shown in
(67) S.sub.LF is a non-interferometric intensity signal which may be analyzed to determine the location of lower surface edge features 915. Referring to
T.sub.BF=T.sub.feature/n  (13)
(68) For this computation, thickness and index may be assumed nominal values or previously measured by some other means, e.g., using the same instrument or a caliper. Depending on the required accuracy for a given application, it can further be beneficial to compensate for the effect of spherical aberration induced by refraction through the lens material, and compute a corrected value for T.sub.BF, e.g., using the formula:
(69)
where NA refers to the numerical aperture of the optical instrument.
(70) The lateral location of the upper surface apex C.sub.UA is given by the x, y ordinates of P.sub.UA. The location of other features of interest can be defined in other ways, for example as the center of measured edge positions, indicated as C.sub.UF and C.sub.LF in
XY.sub.Feature=C.sub.UF−C.sub.LF  (14)
(71) Similarly, the upper surface apex-to-feature lateral distance XY.sub.UAF can be computed as:
XY.sub.UAF=C.sub.UA−C.sub.UF  (15)
(72) In some cases, XY.sub.Feature is sufficient for process control in the manufacture of the lenses, for example as a measure of the lateral alignment of the mold halves. Similarly, XY.sub.UAF along with relative apex height H.sub.UA may be sufficient for identifying issues with lens formation, for example if these deviate from dimensions expected from the upper surface mold half.
(73) It may be desired to explicitly measure dimensional properties between the upper surface apex and the lower surface apex, such as apex thickness T.sub.Apex indicated in
XY.sub.LAF=C.sub.LA−C.sub.LF  (16)
Note that H.sub.LA is negative for the particular geometry depicted in
(74) In some cases this second measurement can provide an independent measurement of XY.sub.Feature.
(75) In some embodiments, metrology information from measuring the lens first with one surface facing the instrument, and then the other, is combined according to flowchart 1400 shown in
(76) For measurement of lower surface features, metrology instrument 201 and lens 900 are moved relative to each other so that a lower feature of interest, such as edge 915o, is at a best focus position (step 1425). This location may be determined using nominal or measured values of T.sub.feature and n. In this position, the instrument measures an intensity profile for the lower feature (step 1430). Using information from the intensity profile, the system computes (in step 1435) an inter-feature lateral offset XY.sub.Feature.
(77) Next, lens 900 is flipped and positioned with its lower surface facing instrument 201 (step 1440). In this position, a height profile is measured in the region of lower apex 924, and a lower apex location, P.sub.LA, is computed (step 1445). The system then, in step 1450, measures a height profile and an intensity profile for one or more features on the lower surface (e.g., edge 915). With this measurement, the system computes a lower apex height, H.sub.LA, and a lower apex-to-feature lateral distance XY.sub.LAF (step 1455).
(78) In step 1460, apex thickness T.sub.Apex can be computed as:
T.sub.Apex=H.sub.UA+T.sub.feature+H.sub.LA(17)
(79) Finally, in step 1465, inter-apex lateral distance XY.sub.Apex corresponds to the lateral distance between C.sub.UA and C.sub.LA and can be computed according to the following, where superscripts indicate whether parameters are obtained from the upper surface measurement or the lower surface measurement:
XY.sub.Apex=XY.sub.UAF.sup.upper+(XY.sub.Feature).sup.upper−XY.sub.LAF.sup.lower  (18)
(80) If the lower-surface measurement provides an independent measurement of the inter-feature lateral distance XY.sub.Feature, the following expressions can optionally be used to potentially reduce statistical variability:
XY.sub.Feature=0.5[XY.sub.Feature.sup.upper+XY.sub.Feature.sup.lower](19)
XY.sub.Apex=XY.sub.UAF.sup.upper+XY.sub.Feature−XY.sub.LAF.sup.lower  (20)
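Equations (17), (19), and (20) chain the per-orientation measurements into final quantities. A sketch under the sign convention reconstructed above, with scalars standing in for per-axis vector components (names are illustrative):

```python
def apex_thickness(H_UA, T_feature, H_LA):
    """Eq. (17): apex-to-apex thickness from the two surface measurements."""
    return H_UA + T_feature + H_LA

def averaged_feature_offset(xy_feature_upper, xy_feature_lower):
    """Eq. (19): average the two independent inter-feature measurements
    to reduce statistical variability."""
    return 0.5 * (xy_feature_upper + xy_feature_lower)

def inter_apex_offset(xy_UAF_upper, xy_feature, xy_LAF_lower):
    """Eq. (20): chain apex-to-feature, feature-to-feature, and
    feature-to-apex offsets into the inter-apex lateral distance."""
    return xy_UAF_upper + xy_feature - xy_LAF_lower
```

Note that H.sub.LA enters Eq. (17) with its sign, so a negative lower apex height reduces the computed thickness, consistent with the geometry discussed in the text.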
(81) In some embodiments, as discussed previously with respect to
(82) In certain embodiments, the information regarding x, y spatial variations in areas including the features of interest may be exploited to more accurately determine dimensional features. For example, this information could include maps of refractive index n(x,y), thickness T(x,y), and surface topography S.sub.UA(x,y) and S.sub.LA(x,y).
(83) Referring to
T.sub.Apex=f.sub.ApexZ(H.sub.UA,T.sub.feature,H.sub.LA,W)(21)
(84) Lateral distances XY.sub.Feature and XY.sub.Apex can be expressed as:
XY.sub.Feature=f.sub.FeatureXY(C.sub.UF,C.sub.LF,W)(22)
XY.sub.Apex=f.sub.ApexXY(XY.sub.UAF,XY.sub.Feature,XY.sub.LAF,W)(23)
(86) Due to refractive effects, there will be a lateral shift L between the apparent and actual lateral location of the position of interest, given approximately by:
L=T sin(θ.sub.refr)  (24)
where sin(θ.sub.refr) and sin(θ.sub.tilt) are related via Snell's law:
sin(θ.sub.refr)=sin(θ.sub.tilt)/n.  (25)
(87) Thus, L is given by:
L=T sin(θ.sub.tilt)/n.  (26)
(88) In
(89) Local tilt θ.sub.tilt will have some azimuthal orientation φ.sub.tilt in the XY plane. As shown in
Δx=L·cos(φ.sub.tilt)  (27)
Δy=L·sin(φ.sub.tilt)  (28)
(90) In general, index n, thickness T, tilt θ.sub.tilt and azimuthal orientation φ.sub.tilt will depend on lateral location (x,y), so L will also generally be a function of (x,y). Refraction correction can be applied to each measured edge point, following which the collection of corrected edge points can be analyzed as desired to generate a corrected location for the feature of interest.
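Applied per edge point, Eqs. (26)-(28) give the refraction correction. Whether the shift is added or subtracted depends on the coordinate convention; subtracting it, as below, is an assumption made for this illustrative sketch:

```python
import math

def refraction_shift(T, n, theta_tilt, phi_tilt):
    """Lateral shift of an apparent edge point viewed through a tilted
    plane-parallel region: Eq. (26) for the magnitude L, and
    Eqs. (27)-(28) for its x and y components."""
    L = T * math.sin(theta_tilt) / n
    return L * math.cos(phi_tilt), L * math.sin(phi_tilt)

def correct_edge_point(x, y, T, n, theta_tilt, phi_tilt):
    """Remove the refractive shift from a measured edge coordinate."""
    dx, dy = refraction_shift(T, n, theta_tilt, phi_tilt)
    return x - dx, y - dy
```

Because T, n, and the tilt angles vary with (x,y), the correction would be evaluated per pixel in practice.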
(91) Referring to
(92) Next, in step 1650, the lens is flipped so that the lower surface faces the optical instrument (step 1650) and a height profile of the lower apex region is acquired (step 1655). The system computes the lower apex location, P.sub.LA, from this height profile. The system then measures a height profile and an intensity profile for the lower surface feature of interest (step 1660). H.sub.LA, the lower surface apex height, and XY.sub.LAF, the lower apex-to-feature lateral distance, are then computed (step 1665) from the information acquired from steps 1655 and 1660. Using this value for H.sub.LA, along with values for H.sub.UA and T.sub.feature, the system computes apex thickness, T.sub.Apex (step 1670). Using XY.sub.UAF, XY.sub.LAF, XY.sub.Feature, and W, the system also computes a value for the inter-apex lateral offset, XY.sub.Apex (step 1675).
(93) In some embodiments, the sample is measured at two or more azimuthal orientations relative to the optical instrument. By obtaining independent measurements of dimensional properties of the lens at different azimuthal orientations, the system can combine these independent measurements so as to reduce systematic error in the final reported dimensional properties.
(94) Examples of sources of systematic error include misalignment between the optical axis and the scan axis, lateral or axial misalignment of the illumination, and bias in sample tilt.
(95) In some cases, systematic error has a component that is independent of sample orientation. For example, the reported lateral distance between two particular features may be biased by some offset in instrument coordinates (x.sub.bias, y.sub.bias). This bias can depend on the particular sample features being measured. In such cases, systematic error in measured lateral distance can be reduced by combining measurements with the sample at an azimuthal orientation θ.sub.0 relative to the instrument as well as with the sample at an azimuthal orientation θ.sub.180 relative to the instrument, where θ.sub.180 is offset by 180° relative to θ.sub.0. As depicted in
(96) Referring to
(97) Specific steps are as follows. First, the lens is positioned with its upper surface facing the optical instrument and at azimuthal orientation .sub.0 (step 1805). In this orientation, the system performs a sequence of height and intensity profile measurements and computes values for XY.sub.UAF.sup.0 and XY.sub.Feature.sup.0 (step 1810).
(98) For the next measurement sequence, the lens is positioned with its upper surface facing the optical instrument and at azimuthal orientation .sub.180 (step 1815). In this orientation, the system performs a sequence of height and intensity profile measurements and computes values for XY.sub.UAF.sup.180 and XY.sub.Feature.sup.180 (step 1820).
(99) For the subsequent measurement sequence, the lens is positioned with its lower surface facing the optical instrument and at azimuthal orientation θ.sub.0 (step 1825). In this orientation, the system performs a sequence of height and intensity profile measurements and computes a value for XY.sub.LAF.sup.0 (step 1830).
(100) For the final measurement sequence, the lens is positioned with its lower surface facing the optical instrument and at azimuthal orientation θ.sub.180 (step 1835). In this orientation, the system performs a sequence of height and intensity profile measurements and computes a value for XY.sub.LAF.sup.180 (step 1840). The order in which measurements are made at these relative orientations is not critical, and can be governed by what is most convenient.
(101) The computed values are next used to compute constituent inter-apex lateral distances for each orientation, XY.sub.Apex.sup.0 and XY.sub.Apex.sup.180 (step 1845). Finally, using these constituent values, the system computes an inter-apex lateral distance, XY.sub.Apex.sup.final (step 1850), and final apex-to-feature lateral distances, XY.sub.UAF.sup.final and XY.sub.LAF.sup.final (step 1855).
(102) Final reported lateral distances XY.sup.final are computed by combining the corresponding lateral distances measured at θ.sub.0 and θ.sub.180, respectively XY.sup.0 and XY.sup.180:
XY.sup.final = f.sub.Combine(XY.sup.0, XY.sup.180)   (29)
(103) The preceding equation can be applied to lateral distances of interest, including those discussed previously such as inter-feature lateral distance XY.sub.Feature, apex-to-feature lateral distances XY.sub.UAF and XY.sub.LAF, and inter-apex lateral distance XY.sub.Apex. If constituent measurements of lateral distances are all in the sample's frame of reference, i.e., relative to sample coordinates (x.sub.sample, y.sub.sample), in some cases the combining function may be as simple as the arithmetic mean of the constituent measurements. Alternatively, or additionally, some operations may map tool-reference-frame results to the sample reference frame in a single step. Other operations can account for previously determined residual tool bias.
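The simplest case described above, an arithmetic-mean combining function applied after mapping both measurements into sample coordinates, can be sketched as follows. The numerical values and the specific rotation convention are illustrative assumptions, not taken from the specification; the sketch shows how a fixed instrument-frame bias (x.sub.bias, y.sub.bias) enters the two constituent measurements with opposite signs in the sample frame and therefore cancels in the mean.

```python
import numpy as np

def to_sample_frame(xy_instr, azimuth_deg):
    """Rotate an instrument-frame lateral vector (x, y) into the sample
    frame, given the sample's azimuthal orientation on the instrument."""
    a = np.radians(azimuth_deg)
    rot = np.array([[np.cos(a), np.sin(a)],
                    [-np.sin(a), np.cos(a)]])
    return rot @ np.asarray(xy_instr, dtype=float)

def f_combine(xy_0, xy_180):
    """Arithmetic-mean combining function of Eq. (29): both constituent
    measurements are mapped to sample coordinates first, so a fixed
    instrument-frame bias cancels in the mean."""
    return 0.5 * (to_sample_frame(xy_0, 0.0) + to_sample_frame(xy_180, 180.0))

# Hypothetical numbers: a true sample-frame distance of (10, 0) um with a
# fixed instrument bias of (0.3, -0.1) um present in both measurements.
true_xy = np.array([10.0, 0.0])
bias = np.array([0.3, -0.1])
meas_0 = true_xy + bias      # measured with sample at orientation 0 deg
meas_180 = -true_xy + bias   # same distance, sample rotated by 180 deg
result = f_combine(meas_0, meas_180)  # bias cancels; result is (10, 0)
```

The bias-cancellation property holds for any fixed offset, which is why combining the two azimuthal orientations reduces the orientation-independent component of systematic error.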
(104) As noted previously, another potential source of measurement error is material birefringence in the sample. In some cases, measurement error can be reduced by combining measurements obtained with the instrument in a variety of polarization states, for example using a polarizer and/or waveplate. This can further be combined with variations in relative azimuthal orientation of the sample relative to the instrument.
(105) The apparatus and methods described above allow for evaluation of in-process transparent samples, in particular lenses that include curved active surface areas as well as plane-parallel areas serving as surrogates for determining the dimensional and optical properties of the samples. Transparent samples include lenses, such as molded lenses that are part of multi-lens assemblies, e.g., for digital cameras. Such lens assemblies are used extensively in cameras for mobile devices, such as cell phones, smart phones, and tablet computers, among other examples.
(106) In some embodiments, the foregoing methods may be applied to measuring a mold for the lens. For example, referring to
(107) The flowcharts in
(108) For those lenses out of specification, the lenses are rejected (step 2040) and the system reports the corresponding molding sites as being outside process control targets (step 2050). For those lenses that meet specification, the lenses are sorted into thickness bins (step 2060) and the corresponding molding sites are reported as being within process control targets (step 2070).
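The accept/reject and binning logic of steps 2040 through 2070 can be sketched as follows. The tolerance, bin width, and return convention are hypothetical choices for illustration; the specification does not prescribe particular values.

```python
def disposition(thickness_um, nominal_um, tol_um, bin_width_um):
    """Hypothetical pass/fail and thickness-binning step: lenses whose
    thickness deviates from nominal by more than the tolerance are
    rejected and the molding site flagged; in-spec lenses receive a
    bin index based on their deviation from nominal."""
    deviation = thickness_um - nominal_um
    if abs(deviation) > tol_um:
        # Reject lens; report molding site outside process control targets.
        return ("reject", None)
    # Accept lens; sort into a thickness bin and report site in control.
    bin_index = round(deviation / bin_width_um)
    return ("accept", bin_index)
```

For example, with a 100 um nominal thickness, a 1 um tolerance, and 0.5 um bins, a 100.4 um lens is accepted into bin 1 while a 102.0 um lens is rejected.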
(109) Flowchart 2001 in
(110) For those lenses out of specification, the lenses are rejected (step 2041) and the system reports the corresponding molding sites as being outside process control targets (step 2051). For those lenses that meet specification, the lenses are sorted into thickness bins (step 2061) and the corresponding molding sites are reported as being within process control targets (step 2071).
(111) The measurement techniques can also be used to characterize the molding process used to make lenses. Implementations for characterizing the molding process are shown in the flow charts in
(112) In the process shown in flowchart 2100 in
(113) In the process shown in flowchart 2101 in
(114) Although the foregoing flowcharts depict birefringence and thickness measurements as processes separate from apex and feature measurements, in some embodiments both sets of measurements may be combined in order to, for example, improve lens characterization and/or lens molding.
(115) While certain implementations have been described, other implementations are also possible. For example, while lens 200 and lens 900 are both meniscus lenses, more generally other types of lenses may be characterized using the disclosed techniques including, for example, convex-convex lenses, concave-concave lenses, plano-convex lenses, and plano-concave lenses. Lens surfaces may be aspherical. In some embodiments, lens surfaces may include points of inflection where the concavity of the surface changes. An example of such a surface is surface 132 in
(116) Moreover, a variety of alignment features in addition to those illustrated above may be used. For example, while the planar surfaces in lenses 200 and 900 are annular surfaces, other geometries are possible. Discrete features, such as raised portions on a surface, depressions, or simply marks on a surface, may be used as features in the measurements described above.
(117) While this specification is generally centered on the metrology of optical components, a related class of application is the metrology of the molds that are used to manufacture injection-molded lenses. In this case, a mold exhibits all the features also found on a lens, namely an active optical surface and one or more location, centration or alignment datums. The metrology steps described for one side of a lens are then readily applicable. For instance, the instrument is used to measure the centration and height of the apex of the optical surface with respect to the mechanical datums. Other metrology steps include the characterization of steps between outer datums, as well as the angle of steep conical centration datums.
(118) In certain embodiments, such as where the part under test is larger than the field of view of the optical instrument, measurements of different regions of the part may be stitched together to provide measurements of the entire part. Exemplary techniques for stitching measurements are disclosed in J. Roth and P. de Groot, Wide-field scanning white light interferometry of rough surfaces, Proc. ASPE Spring Topical Meeting on Advances in Surface Metrology, 57-60 (1997).
(119) In some implementations, additional corrections may be applied to improve measurement accuracy. For example, corrections for the phase change on reflection properties of the surfaces may be applied. See, e.g., P. de Groot, J. Biegen, J. Clark, X. Colonna de Lega and D. Grigg, Optical Interferometry for Measurement of the Geometric Dimensions of Industrial Parts, Applied Optics 41 (19), 3853-3860 (2002).
(120) In certain implementations, the part may be measured from more than one viewing angle, or from both sides. See, e.g., P. de Groot, J. Biegen, J. Clark, X. Colonna de Lega and D. Grigg, Optical Interferometry for Measurement of the Geometric Dimensions of Industrial Parts, Applied Optics 41 (19), 3853-3860 (2002).
(121) The results of the measurements may be combined with other measurements, including, for example, stylus measurements of aspheric form, as disclosed, e.g., in P. Scott, Recent Developments in the Measurement of Aspheric Surfaces by Contact Stylus Instrumentation, 4927, 199-207 (2002).
(122) A variety of data processing methods may be applied. For instance, methods adapted to measuring multiple surfaces using coherence scanning interferometry may be used. See, e.g., P. J. de Groot and X. Colonna de Lega, Transparent film profiling and analysis by interference microscopy, Proc. SPIE 7064, 706401-1 to 706401-6 (2008).
(123) The computations associated with the measurements and analysis described above can be implemented in computer programs using standard programming techniques following the method and figures described herein. Program code is applied to input data to perform the functions described herein and generate output information. The output information may be applied to one or more output devices such as a display monitor. Each program may be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, the programs can be implemented in assembly or machine language, if desired. In any case, the language can be a compiled or interpreted language. Moreover, the program can run on dedicated integrated circuits preprogrammed for that purpose.
(124) Each such computer program is preferably stored on a storage medium or device (e.g., ROM, optical disc or magnetic disc) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer to perform the procedures described herein. The computer program can also reside in cache or main memory during program execution. The calibration method can also be implemented, at least in part, as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
(125) Other embodiments are in the following claims.