Optical evaluation of lenses and lens molds
09599534 · 2017-03-21
Inventors
- Martin F. Fay (Middletown, CT, US)
- Xavier Colonna De Lega (Middlefield, CT)
- Peter J. de Groot (Middletown, CT)
Abstract
A method for determining information about an object including a curved portion and a planar portion, the curved portion having a first curved surface having an apex and defining an axis of the object, includes: directing measurement light to the object; detecting measurement light reflected from the first curved surface of the curved portion; detecting measurement light reflected from at least one other surface of the object; and determining, based on the detected light, information about the apex of the first curved surface of the curved portion.
Claims
1. A method for determining information about an object comprising a curved portion and a planar portion, the curved portion comprising a first curved surface having an apex and defining an optical axis of the object, the method comprising: directing measurement light to the object; detecting measurement light reflected from the first curved surface of the curved portion; detecting measurement light reflected from at least one other surface of the object; and determining, based on the detected light from the first curved surface and from the at least one other surface, information about the apex of the first curved surface of the curved portion.
2. The method of claim 1, wherein the object is a transparent optical element.
3. The method of claim 2, wherein the transparent optical element is a lens element.
4. The method of claim 1, wherein the object is a portion of a mold for an optical element.
5. The method of claim 1, wherein the curved portion comprises a second curved surface opposite the first curved surface, the second curved surface having an apex, and the information about the apex of the first curved surface comprises a thickness of the object between the apex of the first surface and the apex of the second surface measured along the optical axis.
6. The method of claim 1, wherein the curved portion comprises a second curved surface opposite the first curved surface, the second curved surface having an apex, and the information about the apex of the first curved surface comprises a lateral offset between the apex of the first surface and the apex of the second surface measured in a plane orthogonal to the optical axis.
7. The method of claim 1, wherein the measurement light is directed to the object by an optical instrument and the first curved surface faces the optical instrument when reflecting the measurement light.
8. The method of claim 7, wherein determining the information about the apex of the first curved surface comprises determining a location of the apex.
9. The method of claim 8, wherein the at least one other surface comprises another surface facing the optical instrument and determining the information about the apex of the first curved surface further comprises determining a lateral offset measured in a plane orthogonal to the optical axis between the apex and a feature of interest on the at least one other surface.
10. The method of claim 9, wherein the at least one other surface comprises a surface facing away from the optical instrument and determining the information about the apex of the first curved surface further comprises determining a lateral offset measured in a plane orthogonal to the optical axis between a feature on the surface facing away from the optical instrument and the feature of interest on the other surface facing the optical instrument.
11. The method of claim 10, wherein the curved portion comprises a second curved surface opposite the first curved surface and determining the information about the apex of the first curved surface comprises determining a location of the apex of the second curved surface.
12. The method of claim 11, wherein determining the information about the apex of the first curved surface comprises determining a thickness of the curved portion measured along the optical axis based on the locations of the apexes of the first and second curved surfaces.
13. The method of claim 11, wherein determining the information about the apex of the first curved surface comprises determining a lateral offset between the apex of the first surface and the apex of the second surface measured in a plane orthogonal to the optical axis based on: (i) the lateral offset between the apex of the first curved surface and the feature of interest on the other surface facing the optical instrument; (ii) the lateral offset between the feature of interest on the other surface facing the optical instrument and the feature of interest on the surface facing away from the optical instrument; and (iii) the lateral offset between the apex of the second curved surface and the feature of interest on the surface facing away from the optical instrument.
14. The method of claim 1, wherein determining information about the apex of the first curved surface comprises determining information about a tilt of at least one surface of the planar portion and accounting for the tilt when determining the information about the apex of the first surface.
15. The method of claim 1, further comprising adjusting an azimuthal orientation of the object with respect to an optical instrument used to direct the measurement light to the object after detecting the measurement light, and repeating the detection of measurement light from the first curved surface and from the at least one other surface after the azimuthal orientation adjustment.
16. The method of claim 1, further comprising changing a polarization state of the measurement light after detecting the measurement light, and repeating the detection of measurement light from the first curved surface and from the at least one other surface after the polarization state change.
17. The method of claim 16, further comprising determining information about a birefringence of the object based on the detected measurement light before and after the polarization state change.
18. A method of forming an optical assembly, comprising: determining information about the object using the method of claim 1, where the object is a lens; and securing the lens relative to one or more other lenses in a barrel to form the optical assembly.
19. The method of claim 18, further comprising securing the optical assembly relative to a sensor to provide a module for a digital camera.
20. A system for determining information about an object comprising a curved portion and a planar portion, the curved portion comprising a first curved surface having an apex and defining an optical axis of the object, the system comprising: a fixture for supporting the object; an optical instrument comprising a light source, a detector, and optical elements arranged to direct light from the light source towards the object when the object is supported by the fixture and direct light reflected from the object to the detector; and an electronic controller in communication with the detector, the electronic controller being programmed to determine information about the apex of the first surface based on light detected from the first curved surface and from at least one other surface of the object.
21. The system of claim 20, wherein the optical instrument is an optical areal surface topography instrument.
22. The system of claim 20, wherein the fixture comprises an actuator configured to reorient the object with respect to the optical instrument.
23. The system of claim 20, wherein the optical instrument comprises a polarization module configured to polarize light from the light source.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(26) Referring to
(27) An optical metrology instrument 201 is used to evaluate some of the optical properties of the lens 200, including in particular the refractive index uniformity and residual stress birefringence, as well as dimensional features such as the thickness of the lens, including but not limited to the thickness, T, in the figure as a function of the coordinates x, y (see the Cartesian coordinate system shown in
(28) In general, optical metrology instrument 201 can be one of a variety of different instruments capable of performing an areal surface topography measurement of lens 200. Example instruments include coherence scanning interferometry (CSI) microscopes (such as disclosed, e.g., in P. de Groot, Coherence Scanning Interferometry, in Optical Measurement of Surface Topography, edited by R. Leach, chapt. 9, pp. 187-208 (Springer Verlag, Berlin, 2011)), imaging confocal microscopes (such as disclosed, e.g., in R. Artigas, Imaging Confocal Microscopy, in Optical Measurement of Surface Topography, edited by R. Leach, chapt. 11, pp. 237-286 (Springer Berlin Heidelberg, 2011)), structured illumination microscopes (such as disclosed, e.g., in X. M. Colonna de Lega, Non-contact surface characterization using modulated illumination, U.S. Pat. No. 8,649,027 (2014)), focus sensing instruments (such as disclosed, e.g., in F. Helmli, Focus Variation Instruments, in Optical Measurement of Surface Topography, edited by R. Leach, chapt. 7, pp. 131-166 (Springer Berlin Heidelberg, 2011)), or wavelength-tuned Fourier transform phase shifting interferometry (FTPSI) systems (such as disclosed, e.g., in L. L. Deck, Fourier-Transform Phase-Shifting Interferometry, Applied Optics 42 (13), 2354-2365 (2003)).
(29) Referring to
(30) In the embodiment of the
(31) After reflecting from the test and reference surfaces, the test and reference light are recombined by beam splitter 320 to form combined light 332, which is transmitted by beam splitter 312 and relay lens 336 to form an optical interference pattern on an electronic detector 334 (for example, a multi-element CCD or CMOS detector). The intensity profile of the optical interference pattern across the detector is measured by different elements of the detector and stored in an electronic processor 301 (e.g., a standalone or networked computer, or a processor integrated with other components of the system) for analysis. Relay lens 336 images different points in a focal plane of the objective 306 to corresponding points on detector 334.
(32) A field stop 338 positioned between relay optics 308 and 310 defines the area of test surface 324 illuminated by test light 322. After reflection from the lens 200 and the reference surface, combined light 332 forms a secondary image of the source at pupil plane 314 of the objective lens.
(33) Optionally, polarization elements 340, 342, 344, and 346 define the polarization state of the test and reference light being directed to the respective test and reference surfaces, and that of the combined light being directed to the detector. Depending on the embodiment, each polarization element can be a polarizer (e.g., a linear polarizer), a retardation plate (e.g., a half or quarter wave plate), or a similar optic that affects the polarization state of an incident beam. Furthermore, in some embodiments one or more of the polarization elements can be absent. Moreover, depending on the embodiment, beam splitter 312 can be a polarizing beam splitter or a non-polarizing beam splitter. In general, because of the presence of polarization elements 340, 342, and/or 346, the state of polarization of test light 322 at test surface 324 can be a function of the azimuthal position of the light in pupil plane 314.
(34) In the presently described embodiment, source 302 provides illumination over a broad band of wavelengths (e.g., an emission spectrum having a full-width, half-maximum of more than 20 nm, of more than 50 nm, or, preferably, even more than 100 nm). For example, source 302 can be a white light emitting diode (LED), a filament of a halogen bulb, an arc lamp such as a Xenon arc lamp, or a so-called supercontinuum source that uses non-linear effects in optical materials to generate very broad source spectra (>200 nm). The broad band of wavelengths corresponds to a limited coherence length. A translation stage 350 adjusts the relative optical path length between the test and reference light to produce an optical interference signal at each of the detector elements. For example, in the embodiment of
(35) Referring back to
(36) Metrology information for the upper surface 211 of the lens 200 is derived from the reflection of light in air (signal S1 in the figure). Similarly, metrology information for the lower surface 212 of the lens 200 is derived from the reflection of light within the lens material (signal S2 in the figure).
(37) Considering the specific example of a CSI microscope system such as system 300, the relative distance T between the upper and lower surfaces 211 and 212 at a specific coordinate x, y will be given by
T=T̃/n.sub.G  (1)
where T̃ is the apparent or measured optical thickness as determined by CSI microscopy or by wavelength-tuned FTPSI using coherence information, and n.sub.G at low NA (e.g., 0.06 or less) is the group-velocity index of refraction (at high NA, e.g., 0.2 or more, the value of n.sub.G can change because of the obliquity effect, resulting in an effective group-velocity index of refraction). Conversely, signal S2 will appear to originate at a higher z location when using confocal, structured illumination, or focus sensing. The physical thickness in this case is given by
T=nT̂  (2)
where T̂ is the apparent or measured optical thickness as determined by confocal or related focus-sensitive instruments, and n is the phase-velocity index of refraction.
(38) The thickness map T̃(x, y) or T̂(x, y) provides information about the mean value and uniformity of the physical thickness T(x, y), as well as about the optical properties of the lens 200 as exemplified by the index of refraction n.sub.G(x, y) or n(x, y). In some cases, the composite uniformity and mean value of both of these properties, dimensional and optical, are sufficient for process control in the manufacture of the lens 200.
(39) If desired, additional information such as the thickness map T(x, y) or the optical refractive index n(x, y) obtained by other means, such as by contact profilometry (as disclosed, e.g., in P. Scott, Recent Developments in the Measurement of Aspheric Surfaces by Contact Stylus Instrumentation, Proc. SPIE 4927, 199-207 (2002)), may supplement the measurements performed by the optical metrology instrument 201, allowing for separation and independent evaluation of the effects of the refractive index from the physical thickness.
(40) While the foregoing lens characterization relies on height profile information about surfaces 211 and 212 alone, lens characterization may utilize other information too. For example, in some implementations, a specialized reference fixture is included to provide additional optical information. Referring to
(41) Fixture 400 includes support structures 410 and reflective upper surface 420. Lens 200 rests on support structures 410, which position the lens a distance T.sub.air from reflective surface 420. Support structures 410 may be composed of multiple pillars or walls on opposing sides of lens 200, or may be a single cylindrical support separating an inner portion 422 from an outer portion 421 of reflective surface 420. Fixture 400 may be tailored specifically for lens 200, and may be replaced with another fixture when a different shaped lens is measured.
(42)
(43) In a first step, depicted in
(44) In a second step, depicted in
(45) The metrology information is combined to create maps of the thickness and refractive index distribution between the upper and lower parallel surfaces of the lens element. For coherence scanning interferometers and comparable interferometric instruments, after acquiring the apparent height information z.sub.1, . . . , z.sub.4, the physical and optical thickness maps are, respectively:
T(x,y)=z.sub.1(x,y)-z.sub.2(x,y)+z.sub.3(x,y)-z.sub.4(x,y)  (3)
T̃(x,y)=z.sub.1(x,y)-z.sub.2(x,y).  (4)
(46) The map of the group-velocity refractive index is then
n.sub.G(x,y)=T̃(x,y)/T(x,y)  (5)
(47) When the metrology system relies on confocal, structured illumination or focus sensing surface profiling, Eqs. (4) and (5) become
T̂(x,y)=z.sub.1(x,y)-z.sub.2(x,y),  (6)
n=T/T̂.  (7)
(48) The thickness map provides information about the mean thickness of the lens as well as possible tilt between the two sides of the lens, based on variations in the measured thickness from one side of the lens to the other. The refractive index map provides information about possible refractive index gradients across the lens area.
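As an illustration, the map computations of Eqs. (3)-(5) can be sketched numerically as follows. This is a minimal sketch assuming NumPy height maps; the assignment of z.sub.1, . . . , z.sub.4 to the lens and fixture surfaces in the comments (upper lens surface, apparent lower surface, fixture seen through the lens, fixture seen in air) is an assumption here, and the function name is illustrative.

```python
import numpy as np

def thickness_and_index_maps(z1, z2, z3, z4):
    """Eqs. (3)-(5): physical thickness, optical thickness, and
    group-velocity index maps from the four apparent-height maps."""
    T_phys = z1 - z2 + z3 - z4   # Eq. (3): physical thickness map
    T_opt = z1 - z2              # Eq. (4): apparent (optical) thickness map
    n_G = T_opt / T_phys         # Eq. (5): group-velocity index map
    return T_phys, T_opt, n_G

# Synthetic check: a uniform 1.0 mm slab of group index 1.5 resting
# 0.2 mm above the reflective fixture surface (assumed geometry).
shape = (4, 4)
z1 = np.full(shape, 1.0)    # upper lens surface
z2 = z1 - 1.5               # apparent lower surface, shifted by n_G*T
z3 = np.full(shape, -0.7)   # fixture surface seen through the lens
z4 = np.full(shape, -0.2)   # fixture surface seen in air
T_phys, T_opt, n_G = thickness_and_index_maps(z1, z2, z3, z4)
```

For these synthetic maps the sketch recovers T = 1.0, T̃ = 1.5, and n.sub.G = 1.5 everywhere.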
(49) As an optional additional step, knowing the nominal refractive dispersion properties of the material in the lens, it is often possible to transform the group index to the phase index:
n=Transform(n.sub.G).(8)
(50) In some cases, the transform may be as simple as an additive constant. For instance, the additive constant is
(51) Δn=-k.sub.0(dn/dk)|.sub.k=k.sub.0
where n(k) is the nominal refractive index of the material (as stated by the manufacturer or measured through some other means), expressed as a function of wavenumber, and k.sub.0 is the centroid wavenumber of the spectral band used for the measurement. Other transforms are possible, such as a lookup table or a polynomial function. Transform polynomials can be created by fitting data points of measured group-index values (using the instrument) as a function of the known refractive index of test samples.
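The additive-constant transform can be illustrated with a short numerical sketch based on the standard relation n.sub.G(k) = n(k) + k(dn/dk), which gives n = n.sub.G - k.sub.0(dn/dk) at the centroid wavenumber. The dispersion model and its coefficients below are hypothetical.

```python
import numpy as np

def group_to_phase_offset(n_of_k, k0, dk=1e-4):
    """Additive constant for converting group index to phase index,
    using n_G(k) = n(k) + k*(dn/dk), so n = n_G - k0*(dn/dk)|k0."""
    # Central-difference estimate of the dispersion dn/dk at k0
    dndk = (n_of_k(k0 + dk) - n_of_k(k0 - dk)) / (2.0 * dk)
    return -k0 * dndk

# Toy dispersion model with hypothetical coefficients, k in rad/um:
def n_model(k):
    return 1.50 + 0.01 * k

k0 = 2.0 * np.pi / 0.55   # centroid wavenumber of a 550 nm band (rad/um)
delta = group_to_phase_offset(n_model, k0)
# For this linear model the constant is exactly -0.01 * k0
```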
(52) Additional measurements may also be made in order to improve the accuracy of the process. For example, referring to
(53) In some embodiments, the measurement is repeated for different configurations of the instrument such that data collection is performed with substantially different spectral distributions, for example, a first spectral distribution centered between 400 nm and 490 nm, a second spectral distribution centered between 490 nm and 590 nm, and a third spectral distribution centered between 590 nm and 700 nm. Each spectral distribution provides an independent measurement of the optical properties of the lens material. The multiple measured values of group-velocity index or phase-velocity index can then be combined to derive an estimate of the variation of the material's optical properties with wavelength (or dispersion), which can be used to verify that the material is within tolerances and/or for controlling the manufacturing process. In the case where the instrument measures group index (e.g., a coherence scanning interferometer), the estimate of dispersion is further used to compute an estimate of the refractive index, for example using the product of the first-order derivative and the centroid wavenumber. In some embodiments, the multiple spectral distributions are present concurrently while the instrument collects the data resulting from the scanning data acquisition. The multiple spectral bands are separated at the detector, for example using a color-sensitive device (a CCD or CMOS camera equipped with color filters). Alternatively, light returning from the sample is spatially separated by dichroic optical elements that reflect or transmit specific spectral components toward multiple monochrome sensors. A minimum of two spectral bands is required to estimate the dispersion property of the material.
(54) While the foregoing measurements may be performed using polarized or unpolarized light, it is possible to glean additional information about lens 200 using polarized light. For example, referring to
(55) The presence of stress birefringence in a sample may be monitored by observing its effects in the plane-parallel areas of the sample. Here, the measurement process outlined in flowchart 500 or flowchart 700 is performed at least twice, with each complete data acquisition cycle performed for a different polarization state of the illumination light used by the metrology system. The polarization state of the optical measurement instrument may be manipulated using conventional polarizers and/or waveplates.
(56) For example, as shown in flowchart 800, a first measurement is performed with the illumination light linearly polarized along the x direction and repeated with illumination light linearly polarized along the y direction. In some embodiments, the polarization directions are aligned with respect to datum features on lens 200; for example, where the lens is an injection-molded lens, the datum features may correspond to the gate where the injected material enters the mold cavity.
(57) The multiple refractive index maps collected are then combined to provide a quantitative measurement of birefringence present in the lens material. For example, in step 870, a birefringence effect is calculated from the measurements. In step 880, a mean refractive index is calculated from the measurements. Birefringence may be, for example, expressed as the difference of optical paths through the lens, as shown in step 870 of flowchart 800. Here the cumulative effect of birefringence through the lens is calculated as
B(x,y)=[n.sub.2(x,y)-n.sub.1(x,y)]T(x,y)  (9)
while the mean index (as shown in step 880) is
n̄(x,y)=[n.sub.1(x,y)+n.sub.2(x,y)]/2.  (10)
(58) Birefringence can similarly be expressed as the difference of optical path per unit length of propagation within the material. The phase-velocity refractive indices n.sub.1,2 correspond to the two polarization orientations. For process control, these indices are adequately represented by the group index measurements that follow, for example, from CSI microscope measurements. Further, for some process control situations, a measurement of optical thickness variation
B(x,y)=T̃.sub.2(x,y)-T̃.sub.1(x,y)  (11)
or
B(x,y)=T̂.sub.2(x,y)-T̂.sub.1(x,y)  (12)
using the simpler configuration of
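A sketch of the birefringence computation follows, reading Eq. (9) as the cumulative optical-path difference (n.sub.2 - n.sub.1)T and the mean index as the average of the two polarization indices; both readings, and the numerical values, are assumptions for illustration.

```python
import numpy as np

def birefringence_maps(n1, n2, T):
    """Eq. (9) read as a cumulative optical-path difference (n2 - n1)*T,
    together with the mean-index map of Eq. (10)."""
    B = (n2 - n1) * T            # optical-path difference through the part
    n_mean = 0.5 * (n1 + n2)     # mean of the two polarization indices
    return B, n_mean

# Uniform example (hypothetical values): x- and y-polarization indices
# through a 1.0 mm plane-parallel region.
n1 = np.full((4, 4), 1.5000)
n2 = np.full((4, 4), 1.5004)
B, n_mean = birefringence_maps(n1, n2, T=1.0)
```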
(59) While the foregoing embodiments involve measurements characterizing the inactive portion (e.g., plane parallel portion) of the lens and inferring information about the lens generally from those characterizations, other implementations are also possible. For example, measurements directly characterizing the active portion of the lens can also be performed.
(60) Referring to
(61) The inactive portion is composed of a series of planar, annular surfaces with step features offsetting inner and outer planar surfaces on each side of lens 900. In general, the surfaces of the inactive portion may, for example, include features formed on the sample to aid in the alignment and fixturing of the lens in a final assembly, and/or to facilitate measurement of the relative alignment of lens features. In this case, the upper side of the inactive portion includes planar surfaces 912 and 916. A step 914 separates surfaces 912 and 916. Step 914 meets surface 912 at edge 914o and surface 916 at edge 914i. Surface 916 meets upper convex surface 921 at edge 918.
(62) The lower side of the inactive portion includes planar surfaces 911 and 917. A step 915 separates surfaces 911 and 917. Step 915 meets surface 911 at edge 915o and surface 917 at edge 915i. Surface 917 meets lower concave surface 922 at edge 919.
(63) Optical metrology instrument 201 is used to evaluate some of the dimensional features of lens 900, including (but not limited to) the apex-to-apex thickness T.sub.Apex and the relative x, y lateral offsets (referred to a common z axis) of surface feature locations, including (but not limited to) apex centers and alignment surface features. These evaluations are performed by measuring the upper surface profile to determine the 3D apex location and the relative 3D location and topography of other surface features. These measurements serve as indicators of the overall dimensional properties of the lens.
(64) During operation, optical instrument 201 looks down at the sample along an observation direction parallel to the z-axis shown in
(65) Metrology information for apex 923 is derived from the reflection of light in air (signal S.sub.UA in
(66) Considering the specific example of a CSI microscope system such as that shown in
(67) S.sub.LF is a non-interferometric intensity signal which may be analyzed to determine the location of lower surface edge features 915o. Referring to
T.sub.BF=T.sub.feature/n(13)
(68) For this computation, the thickness and index may be assumed to have nominal values or may be previously measured by some other means, e.g., using the same instrument or a caliper. Depending on the required accuracy for a given application, it can further be beneficial to compensate for the effect of spherical aberration induced by refraction through the lens material and to compute a corrected value for T.sub.BF, e.g., using the formula:
(69)
where NA refers to the numerical aperture of the optical instrument.
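The best-focus relation of Eq. (13) can be sketched as follows; the low-NA form is used here, without the NA-dependent spherical-aberration correction, and the numerical values are hypothetical.

```python
def best_focus_depth(T_feature, n):
    """Eq. (13): apparent depth T_BF at which a lower-surface feature
    comes into best focus when viewed through material of phase index n."""
    return T_feature / n

# Example (hypothetical values): a feature under 0.8 mm of n = 1.53
# material appears at an apparent depth of about 0.523 mm.
T_BF = best_focus_depth(0.8, 1.53)
```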
(70) The lateral location of the upper surface apex C.sub.UA is given by the x, y coordinates of P.sub.UA. The location of other features of interest can be defined in other ways, for example as the center of measured edge positions, indicated as C.sub.UF and C.sub.LF in
XY.sub.Feature=C.sub.UF-C.sub.LF  (14)
(71) Similarly, the upper surface apex-to-feature lateral distance XY.sub.UAF can be computed as:
XY.sub.UAF=C.sub.UA-C.sub.UF  (15)
(72) In some cases, XY.sub.Feature is sufficient for process control in the manufacture of the lenses, for example as a measure of the lateral alignment of the mold halves. Similarly, XY.sub.UAF along with relative apex height H.sub.UA may be sufficient for identifying issues with lens formation, for example if these deviate from dimensions expected from the upper surface mold half.
(73) It may be desired to explicitly measure dimensional properties between the upper surface apex and the lower surface apex, such as apex thickness T.sub.Apex indicated in
XY.sub.LAF=C.sub.LA-C.sub.LF  (16)
Note that H.sub.LA is negative for the particular geometry depicted in
(74) In some cases this second measurement can provide an independent measurement of XY.sub.Feature.
(75) In some embodiments, metrology information from measuring the lens first with one surface facing the instrument, and then the other, is combined according to flowchart 1400 shown in
(76) For measurement of lower surface features, metrology instrument 201 and lens 900 are moved relative to each other so that a lower feature of interest, such as edge 915o, is at a best focus position (step 1425). This location may be determined using nominal or measured values of T.sub.feature and n. In this position, the instrument measures an intensity profile for the lower feature (step 1430). Using information from the intensity profile, the system computes (in step 1435) an inter-feature lateral offset XY.sub.Feature.
(77) Next, lens 900 is flipped and positioned with its lower surface facing instrument 201 (step 1440). In this position, a height profile is measured in the region of lower apex 924, and a lower apex location, P.sub.LA, is computed (step 1445). The system then, in step 1450, measures a height profile and an intensity profile for one or more features on the lower surface (e.g., edge 915o). With this measurement, the system computes a lower apex height, H.sub.LA, and a lower apex-to-feature lateral distance XY.sub.LAF (step 1455).
(78) In step 1460, apex thickness T.sub.Apex can be computed as:
T.sub.Apex=H.sub.UA-T.sub.feature+H.sub.LA  (17)
(79) Finally, in step 1465, the inter-apex lateral distance XY.sub.Apex corresponds to the lateral distance between C.sub.UA and C.sub.LA and can be computed according to the following, where superscripts indicate whether parameters are obtained from the upper surface measurement or the lower surface measurement:
XY.sub.Apex=XY.sub.UAF.sup.upper+(XY.sub.Feature).sup.upper-XY.sub.LAF.sup.lower  (18)
(80) If the lower-surface measurement provides an independent measurement of the inter-feature lateral distance XY.sub.Feature, the following expressions can optionally be used to potentially reduce statistical variability:
XY.sub.Feature=0.5[XY.sub.Feature.sup.upper+XY.sub.Feature.sup.lower](19)
XY.sub.Apex=XY.sub.UAF.sup.upper+XY.sub.Feature-XY.sub.LAF.sup.lower  (20)
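The lateral-offset bookkeeping of Eqs. (14)-(16) and (18) can be checked with a short numerical sketch. The feature-center coordinates below are hypothetical, and for simplicity all quantities are expressed in a single common frame.

```python
import numpy as np

# Hypothetical feature-center coordinates, all expressed in a common frame
# for this illustration (in practice the instrument reports lower-surface
# quantities in its own, flipped frame).
C_UA = np.array([0.010, -0.005])   # upper apex center
C_UF = np.array([0.000,  0.000])   # upper feature center
C_LF = np.array([0.004,  0.002])   # lower feature center
C_LA = np.array([0.012, -0.001])   # lower apex center

XY_UAF = C_UA - C_UF        # Eq. (15): upper apex-to-feature offset
XY_Feature = C_UF - C_LF    # Eq. (14): inter-feature offset
XY_LAF = C_LA - C_LF        # Eq. (16): lower apex-to-feature offset

# Eq. (18): the inter-apex offset from the three constituent offsets
XY_Apex = XY_UAF + XY_Feature - XY_LAF
assert np.allclose(XY_Apex, C_UA - C_LA)   # matches the direct difference
```

The assertion confirms that the three constituent offsets telescope to the direct apex-to-apex difference, which is the algebraic content of Eq. (18).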
(81) In some embodiments, as discussed previously with respect to
(82) In certain embodiments, the information regarding x, y spatial variations in areas including the features of interest may be exploited to more accurately determine dimensional features. For example, this information could include maps of refractive index n(x, y), thickness T(x, y), and surface topography S.sub.UA(x, y) and S.sub.LA(x, y).
(83) Referring to
T.sub.Apex=f.sub.ApexZ(H.sub.UA,T.sub.feature,H.sub.LA,W)(21)
(84) Lateral distances XY.sub.Feature and XY.sub.Apex can be expressed as:
XY.sub.Feature=f.sub.FeatureXY(C.sub.UF,C.sub.LF,W)(22)
XY.sub.Apex=f.sub.ApexXY(XY.sub.UAF,XY.sub.Feature,XY.sub.LAF,W)(23)
(85)
(86) Due to refractive effects, there will be a lateral shift L between the apparent and actual lateral location of the position of interest, given approximately by:
L=T sin(θ.sub.refr)  (24)
where sin(θ.sub.refr) and sin(θ.sub.tilt) are related via Snell's law:
sin(θ.sub.refr)=sin(θ.sub.tilt)/n.  (25)
Thus, L is given by:
L=T sin(θ.sub.tilt)/n.  (26)
(87) In
(88) Local tilt θ.sub.tilt will have some azimuthal orientation φ.sub.tilt in the XY plane. As shown in
Δx=L·cos(φ.sub.tilt)  (27)
Δy=L·sin(φ.sub.tilt)  (28)
(89) In general, the index n, thickness T, tilt θ.sub.tilt, and azimuthal orientation φ.sub.tilt will depend on the lateral location (x, y), so L will also generally be a function of (x, y). The refraction correction can be applied to each measured edge point, after which the collection of corrected edge points can be analyzed as desired to generate a corrected location for the feature of interest.
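The per-point refraction correction of Eqs. (26)-(28) can be sketched as follows; the sign convention used to remove the shift, and the numerical values, are assumptions for illustration.

```python
import numpy as np

def corrected_edge_point(x_app, y_app, T, n, theta_tilt, phi_tilt):
    """Remove the refractive lateral shift of Eqs. (26)-(28) from an
    apparent edge location seen through tilted plane-parallel material."""
    L = T * np.sin(theta_tilt) / n     # Eq. (26): magnitude of the shift
    dx = L * np.cos(phi_tilt)          # Eq. (27): x component
    dy = L * np.sin(phi_tilt)          # Eq. (28): y component
    # Assumed sign convention: the apparent point is displaced by (+dx, +dy)
    # relative to the true point, so the correction subtracts the shift.
    return x_app - dx, y_app - dy

# Hypothetical numbers: 1.0 mm of n = 1.5 material tilted by 1 degree,
# with the tilt azimuth along +x.
x_c, y_c = corrected_edge_point(1.000, 2.000, T=1.0, n=1.5,
                                theta_tilt=np.deg2rad(1.0),
                                phi_tilt=np.deg2rad(0.0))
```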
(90) Referring to
(91) Next, the lens is flipped so that the lower surface faces the optical instrument (step 1650) and a height profile of the lower apex region is acquired (step 1655). The system computes the lower apex location, P.sub.LA, from this height profile. The system then measures a height profile and an intensity profile for the lower surface feature of interest (step 1660). H.sub.LA, the lower surface apex height, and XY.sub.LAF, the lower apex-to-feature lateral distance, are then computed (step 1665) from the information acquired in steps 1655 and 1660. Using this value for H.sub.LA, along with values for H.sub.UA and T.sub.feature, the system computes the apex thickness, T.sub.Apex (step 1670). Using XY.sub.UAF, XY.sub.LAF, XY.sub.Feature, and W, the system also computes a value for the inter-apex lateral offset, XY.sub.Apex (step 1675).
(92) In some embodiments, the sample is measured at two or more azimuthal orientations relative to the optical instrument. By obtaining independent measurements of dimensional properties of the lens at different azimuthal orientations, the system can combine these independent measurements so as to reduce systematic error in the final reported dimensional properties.
(93) Examples of sources of systematic error include misalignment between the optical axis and the scan axis, lateral or axial misalignment of the illumination, and bias in sample tilt.
(94) In some cases, systematic error has a component that is independent of sample orientation. For example, the reported lateral distance between two particular features may be biased by some offset in instrument coordinates (x.sub.bias, y.sub.bias). This bias can depend on the particular sample features being measured. In such cases, systematic error in measured lateral distance can be reduced by combining measurements taken with the sample at an azimuthal orientation θ.sub.0 relative to the instrument with measurements taken at an azimuthal orientation θ.sub.180 relative to the instrument, where θ.sub.180 is offset by 180° relative to θ.sub.0. As depicted in
(95) Referring to
(96) Specific steps are as follows. First, the lens is positioned with its upper surface facing the optical instrument and at azimuthal orientation θ.sub.0 (step 1805). In this orientation, the system performs a sequence of height and intensity profile measurements and computes values for XY.sub.UAF.sup.0 and XY.sub.Feature.sup.0 (step 1810).
(97) For the next measurement sequence, the lens is positioned with its upper surface facing the optical instrument and at azimuthal orientation θ.sub.180 (step 1815). In this orientation, the system performs a sequence of height and intensity profile measurements and computes values for XY.sub.UAF.sup.180 and XY.sub.Feature.sup.180 (step 1820).
(98) For the subsequent measurement sequence, the lens is positioned with its lower surface facing the optical instrument and at azimuthal orientation θ.sub.0 (step 1825). In this orientation, the system performs a sequence of height and intensity profile measurements and computes a value for XY.sub.LAF.sup.0 (step 1830).
(99) For the final measurement sequence, the lens is positioned with its lower surface facing the optical instrument and at azimuthal orientation θ.sub.180 (step 1835). In this orientation, the system performs a sequence of height and intensity profile measurements and computes a value for XY.sub.LAF.sup.180 (step 1840). The order in which measurements are made at these relative orientations is not critical, and can be governed by what is most convenient.
(100) The computed values are next used to compute constituent inter-apex lateral distances for each orientation, XY.sub.Apex.sup.0 and XY.sub.Apex.sup.180 (step 1845). Finally, using these constituent values, the system computes an inter-apex lateral distance, XY.sub.Apex.sup.final (step 1850), and final apex-to-feature lateral distances, XY.sub.UAF.sup.final and XY.sub.LAF.sup.final (step 1855).
(101) Final reported lateral distances XY.sup.final are computed by combining the corresponding lateral distances measured at θ.sub.0 and θ.sub.180, respectively XY.sup.0 and XY.sup.180:
XY.sup.final = f.sub.combine(XY.sup.0, XY.sup.180)  (29)
(102) The preceding equation can be applied to any lateral distance of interest, including those discussed previously, such as the inter-feature lateral distance XY.sub.Feature, the apex-to-feature lateral distances XY.sub.UAF and XY.sub.LAF, and the inter-apex lateral distance XY.sub.Apex. If the constituent measurements of lateral distances are all in the sample's frame of reference, i.e., relative to sample coordinates (x.sub.sample, y.sub.sample), the combining function may in some cases be as simple as the arithmetic mean of the constituent measurements. Alternatively, or additionally, some operations may map tool-reference-frame results to the sample reference frame in a single step. Other operations can account for previously determined remnant tool bias.
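A minimal numeric sketch of Eq. (29) with f.sub.combine taken as the arithmetic mean, showing how a fixed, orientation-independent bias (x.sub.bias, y.sub.bias) cancels when the 0° and 180° measurements are combined in the sample frame. The simulated measurement model and all numeric values are made up for the demonstration.

```python
import math

# Simulated tool-frame measurement: the sample's true lateral offset is
# rotated by the sample's azimuthal orientation, then corrupted by a fixed,
# orientation-independent instrument bias. Model and numbers are illustrative.

def measure(true_xy, bias, theta_deg):
    t = math.radians(theta_deg)
    return (true_xy[0] * math.cos(t) - true_xy[1] * math.sin(t) + bias[0],
            true_xy[0] * math.sin(t) + true_xy[1] * math.cos(t) + bias[1])

def combine_0_180(m0, m180):
    """f_combine as an arithmetic mean in sample coordinates: a 180 degree
    rotation negates both components, so mapping the 180 degree result back
    and averaging gives (m0 - m180) / 2, in which the fixed bias cancels."""
    return ((m0[0] - m180[0]) / 2.0, (m0[1] - m180[1]) / 2.0)
```

With a true offset of (0.010, -0.004) and a bias of (0.002, 0.001), the 0° measurement returns (0.012, -0.003), the 180° measurement returns approximately (-0.008, 0.005), and the combination recovers the true offset to floating-point precision.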
(103) As noted previously, another potential source of measurement error is material birefringence in the sample. In some cases, measurement error can be reduced by combining measurements obtained with the instrument in a variety of polarization states, for example using a polarizer and/or waveplate. This can further be combined with variations in relative azimuthal orientation of the sample relative to the instrument.
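One simple way to combine measurements across polarization states is a per-pixel average of the height profiles. The text does not prescribe the combining rule, so the plain mean below is an assumption for illustration.

```python
def combine_polarizations(height_maps):
    """Per-pixel arithmetic mean of height profiles acquired at different
    polarizer/waveplate settings. A plain mean is an assumption; the
    specification leaves the combining rule open."""
    n = len(height_maps)
    return [sum(px) / n for px in zip(*height_maps)]
```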
(104) The apparatus and methods described above allow for evaluation of in-process transparent samples, in particular lenses having curved active surface areas as well as plane-parallel areas that serve as surrogates for determining the dimensional and optical properties of the samples. Transparent samples include lenses, such as molded lenses that are part of multi-lens assemblies, e.g., for digital cameras. Such lens assemblies are extensively used in cameras for mobile devices, such as cell phones, smart phones, and tablet computers, among other examples.
(105) In some embodiments, the foregoing methods may be applied to measuring a mold for the lens. For example, referring to
(106) The flowcharts in
(107) For those lenses out of specification, the lenses are rejected (step 2040) and the system reports the corresponding molding sites as being outside process control targets (step 2050). For those lenses that meet specification, the lenses are sorted into thickness bins (step 2060) and the corresponding molding sites are reported as being within process control targets (step 2070).
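The reject-or-bin disposition of steps 2040 through 2070 might look like the following sketch. The specification window and bin width are made-up values for illustration; the patent does not give numeric limits.

```python
def disposition(t_apex, spec_lo=0.290, spec_hi=0.310, bin_width=0.005):
    """Reject lenses whose measured apex thickness falls outside the spec
    window; otherwise assign a thickness bin. Limits and bin width are
    hypothetical, not taken from the specification."""
    if not (spec_lo <= t_apex <= spec_hi):
        return "reject"  # corresponding molding site flagged out of control
    return f"bin-{int((t_apex - spec_lo) // bin_width)}"
```

A lens at 0.285 would be rejected, while lenses at 0.293 and 0.302 would land in the first and third bins, respectively.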
(108) Flowchart 2001 in
(109) For those lenses out of specification, the lenses are rejected (step 2041) and the system reports the corresponding molding sites as being outside process control targets (step 2051). For those lenses that meet specification, the lenses are sorted into thickness bins (step 2061) and the corresponding molding sites are reported as being within process control targets (step 2071).
(110) The measurement techniques can also be used to characterize the molding process used to make lenses. Implementations for characterizing the molding process are shown in the flow charts in
(111) In the process shown in flowchart 2100 in
(112) In the process shown in flowchart 2101 in
(113) Although the foregoing flowcharts depict separate processes for birefringence and thickness measurements than for apex and feature measurements, in some embodiments both these sets of measurements may be combined in order to, for example, improve lens characterization and/or lens molding.
(114) While certain implementations have been described, other implementations are also possible. For example, while lens 200 and lens 900 are both meniscus lenses, more generally other types of lenses may be characterized using the disclosed techniques including, for example, convex-convex lenses, concave-concave lenses, plano-convex lenses, and plano-concave lenses. Lens surfaces may be aspherical. In some embodiments, lens surfaces may include points of inflection where the concavity of the surface changes. An example of such a surface is surface 132 in
(115) Moreover, a variety of alignment features in addition to those illustrated above may be used. For example, while the planar surfaces in lenses 200 and 900 are annular surfaces, other geometries are possible. Discrete features, such as raised portions on a surface, depressions, or simply marks on a surface, may be used as features in the measurements described above.
(116) While this specification is generally centered on the metrology of optical components, a related class of application is the metrology of the molds that are used to manufacture injection-molded lenses. In this case, a mold exhibits all the features also found on a lens, namely an active optical surface and one or more location, centration or alignment datums. The metrology steps described for one side of a lens are then readily applicable. For instance, the instrument is used to measure the centration and height of the apex of the optical surface with respect to the mechanical datums. Other metrology steps include the characterization of steps between outer datums, as well as the angle of steep conical centration datums.
(117) In certain embodiments, such as where the part under test is larger than the field of view of the optical instrument, measurements of different regions of the part may be stitched together to provide measurements of the entire part. Exemplary techniques for stitching measurements are disclosed in J. Roth and P. de Groot, Wide-field scanning white light interferometry of rough surfaces, Proc. ASPE Spring Topical Meeting on Advances in Surface Metrology, 57-60 (1997).
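The core idea of stitching can be illustrated in one dimension: two overlapping height profiles are registered by matching their overlap means, then blended across the overlap. Real wide-field stitching, as in the cited Roth and de Groot paper, also solves for tilt and lateral registration; this piston-only sketch assumes those are already corrected.

```python
def stitch_1d(left, right, overlap):
    """Stitch two overlapping 1-D height profiles (lists of heights).
    left[-overlap:] and right[:overlap] are assumed to view the same
    surface region; shift `right` so the overlap means agree, then
    average the two profiles across the overlap."""
    shift = sum(left[-overlap:]) / overlap - sum(right[:overlap]) / overlap
    shifted = [z + shift for z in right]
    blended = [(a + b) / 2.0 for a, b in zip(left[-overlap:], shifted[:overlap])]
    return left[:-overlap] + blended + shifted[overlap:]
```

For example, stitching [0, 1, 2, 3] and [12, 13, 14, 15] with an overlap of two samples removes the 10-unit piston mismatch and yields the continuous profile [0, 1, 2, 3, 4, 5].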
(118) In some implementations, additional corrections may be applied to improve measurement accuracy. For example, corrections for the phase change on reflection properties of the surfaces may be applied. See, e.g., P. de Groot, J. Biegen, J. Clark, X. Colonna de Lega and D. Grigg, Optical Interferometry for Measurement of the Geometric Dimensions of Industrial Parts, Applied Optics 41(19), 3853-3860 (2002).
(119) In certain implementations, the part may be measured from more than one viewing angle, or from both sides. See, e.g., P. de Groot, J. Biegen, J. Clark, X. Colonna de Lega and D. Grigg, Optical Interferometry for Measurement of the Geometric Dimensions of Industrial Parts, Applied Optics 41(19), 3853-3860 (2002).
(120) The results of the measurements may be combined with other measurements, including, for example, stylus measurements of aspheric form, such as disclosed, e.g., in P. Scott, Recent Developments in the Measurement of Aspheric Surfaces by Contact Stylus Instrumentation, Proc. SPIE 4927, 199-207 (2002).
(121) A variety of data processing methods may be applied. For instance, methods adapted to measuring multiple surfaces using coherence scanning interferometry may be used. See, e.g., P. J. de Groot and X. Colonna de Lega, Transparent film profiling and analysis by interference microscopy, Proc. SPIE 7064, 706401-1 to 706401-6 (2008).
(122) The computations associated with the measurements and analysis described above can be implemented in computer programs using standard programming techniques following the method and figures described herein. Program code is applied to input data to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, such as a display monitor. Each program may be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, the programs can be implemented in assembly or machine language, if desired. In any case, the language can be a compiled or interpreted language. Moreover, the program can run on dedicated integrated circuits preprogrammed for that purpose.
(123) Each such computer program is preferably stored on a storage medium or device (e.g., ROM, optical disc, or magnetic disc) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage medium or device is read by the computer to perform the procedures described herein. The computer program can also reside in cache or main memory during program execution. The calibration method can also be implemented, at least in part, as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
(124) Other embodiments are in the following claims.