INTERFEROMETER IN OPTOELECTRONIC PACKAGE
20260110642 · 2026-04-23
Abstract
An optoelectronic package for measuring distance, tip, and tilt of an object relative to a detector. The optoelectronic package comprises a carrier; one or more photodiode elements located on the carrier and having a center opening; a vertical-cavity surface-emitting laser (VCSEL) located in the center opening and directing light rays toward the object; and an interference-generating optical element positioned between the photodiode element and the object. Light from the VCSEL passes through the optical element, creating a ring of measured light which reflects off the surface of the object and combines with a ring of reflected reference light to produce interference fringes of varying light intensity on the photodiode elements, which correspond to displacement and angular tip and tilt of the surface of the object. These variations are interpreted by a computational element to produce values of distance, tip, and tilt of the object.
Claims
1. An optoelectronic package for measuring the surface of an object, the optoelectronic package comprising: a carrier; a segmented ring-patterned photodiode element located on the carrier and having a center opening, the photodiode element being adapted to identify varying light intensity based on optical interference; a vertical-cavity surface-emitting laser (VCSEL) located in the center opening of the photodiode element and directing light rays toward the object; and a lens positioned between the VCSEL and the object, the lens adapted to receive the light rays from the VCSEL and create a ring of measured light which reflects off the surface of the object and the measured light is adapted to combine with a ring of reference light to produce optical interference on the photodiode element permitting the photodiode element to identify varying light intensity from the optical interference, wherein variations in light intensity correspond to displacement, angular tip and tilt, or a combination thereof of the surface of the object.
2. The optoelectronic package according to claim 1 further comprising a computer system configured to engage with one or more components of the optoelectronic package.
3. The optoelectronic package according to claim 1 further comprising a thermoelectrically cooled configuration adapted to reduce thermal noise in each photodiode element and remove heat generated by the VCSEL.
4. The optoelectronic package according to claim 1 wherein the VCSEL is a microelectromechanical system (MEMS) tunable VCSEL.
5. The optoelectronic package according to claim 1 wherein the photodiode element has multiple concentric arc sections separated into three or more quadrants by gaps and separated from each other by open concentric rings.
6. The optoelectronic package according to claim 1 wherein the object is a motion system within a semiconductor processing tool.
7. The optoelectronic package according to claim 1 wherein the object is a semiconductor wafer.
8. The optoelectronic package according to claim 1 further comprising a black mask located on at least one of the lens and the photodiode element to absorb or trap light and thereby minimize stray light and its adverse effects.
9. An optoelectronic package for measuring the surface of an object, the optoelectronic package comprising: a carrier; a focal plane array located on the carrier and having a center point, the focal plane array being adapted to identify varying light intensity based on optical interference; a vertical-cavity surface-emitting laser (VCSEL) located at the center point of the focal plane array and directing light rays toward the object; and a lens positioned between the VCSEL and the object, the lens adapted to receive the light rays from the VCSEL and create a ring of measured light which reflects off the surface of the object and the measured light is adapted to combine with a ring of reference light to produce optical interference on the focal plane array permitting the focal plane array to identify varying light intensity from the optical interference, wherein variations in light intensity correspond to displacement, angular tip and tilt, or a combination thereof of the surface of the object.
10. The optoelectronic package according to claim 9 further comprising a computer system configured to engage with one or more components of the optoelectronic package.
11. The optoelectronic package according to claim 9 further comprising a thermoelectrically cooled configuration to reduce thermal noise in each photodiode element and remove heat generated by the VCSEL.
12. The optoelectronic package according to claim 9 wherein the VCSEL is a microelectromechanical system (MEMS) tunable VCSEL.
13. The optoelectronic package according to claim 9 wherein the photodiode element has multiple concentric arc sections separated into three or more quadrants by gaps and separated from each other by open concentric rings.
14. The optoelectronic package according to claim 9 wherein the object is a motion system within a semiconductor processing tool.
15. The optoelectronic package according to claim 9 wherein the object is a semiconductor wafer.
16. The optoelectronic package according to claim 9 further comprising a black mask located on at least one of the lens and the photodiode element to absorb or trap light and thereby minimize stray light and its adverse effects.
17. An optical detection system comprising: at least one light source; an optical assembly configured to direct light from the at least one light source to a target; a photodiode element arrangement positioned to receive light reflected or transmitted from the target, wherein the photodiode element arrangement comprises a focal plane array; a black mask disposed over the focal plane array and configured to define one or more apertures through which light is received by the focal plane array; processing electronics operatively coupled to the photodiode element arrangement and configured to generate output signals indicative of the received light; and a computer system operatively coupled to the processing electronics and configured to process the output signals to determine at least one characteristic of the target and to output a result indicative of the at least one characteristic of the target.
Description
BRIEF DESCRIPTION OF THE DRAWING
[0025] The disclosure is best understood from the following detailed description when read in connection with the accompanying drawing.
DETAILED DESCRIPTION OF THE DISCLOSURE
[0054] In this specification and in the claims that follow, reference will be made to a number of terms which shall be defined to have the following meanings ascribed to them. The term substantially, as used in this document, is a descriptive term that denotes approximation and means considerable in extent or largely but not wholly that which is specified and is intended to avoid a strict numerical boundary to the specified parameter. Directional terms as used in this disclosure (for example up, down, right, left, front, back, top, bottom) are made only with reference to the figures as drawn and are not intended to imply absolute orientation.
[0055] The term about means those amounts, sizes, formulations, parameters, and other quantities and characteristics are not and need not be exact, but may be approximate and/or larger or smaller, as desired, reflecting tolerances, conversion factors, rounding off, measurement error and the like, and other factors known to those of skill in the art. When a value is described to be about or about equal to a certain number, the value is within 10% of the number. For example, a value that is about 10 refers to a value between 9 and 11, inclusive. When the term about is used in describing a value or an end-point of a range, the disclosure should be understood to include the specific value or end-point. Whether or not a numerical value or end-point of a range in the specification recites about, the numerical value or end-point of a range is intended to include two embodiments: one modified by about and one not modified by about. It will be further understood that the end-points of each of the ranges are significant both in relation to the other end-point and independently of the other end-point.
[0056] The term about further references all terms in the range unless otherwise stated. For example, about 1, 2, or 3 is equivalent to about 1, about 2, or about 3, and further comprises from about 1-3, from about 1-2, and from about 2-3. Specific and preferred values disclosed for components and steps, and ranges thereof, are for illustration only; they do not exclude other defined values or other values within defined ranges. The components and method steps of the disclosure include those having any value or any combination of the values, specific values, more specific values, and preferred values described.
[0057] The indefinite article a or an and its corresponding definite article the as used in this disclosure means at least one, or one or more, unless specified otherwise. Include, includes, including, have, has, having, comprise, comprises, comprising, or like terms mean encompassing but not limited to, that is, inclusive and not exclusive.
A. The Optical Detector System
[0058] The ability to transmit data wirelessly provides tremendous utility. Wireless transmission uses one or more frequencies of electromagnetic signals, such as optical wavelengths, to send information. Optical wavelengths may include, but are not limited to, infrared wavelengths, visible light wavelengths, ultraviolet wavelengths, and so forth. Optical wavelengths may move from one location to another in free space, including the atmosphere, a vacuum, and so forth.
[0059] Optical detector systems use an incoming beam with a beam shape that is typically (although not necessarily) circular in cross section, presenting a circular pattern (or spot) of light on the detector array. (A non-spot beam shape is one whose cross section, where it impinges upon the detector array, is non-circular.) The combined characteristics of the detector array and spot produce information about how much the output of the detector array changes in response to a change in the position of the light incident on the detector array. For example, the information describes how the amplitude of an output signal from the photodiode elements in the array changes as the spot moves across the detector array.
[0060] The accuracy of the information is affected by several factors. One factor is how much of the incoming beam of light that impinges on the detector array produces output. The portion of the beam that impinges on photodiode elements in the array produces output. The portion of light that impinges on gaps between or among the photodiode elements does not. For example, if the spot of light falls entirely within a gap between photodiode elements, no output is produced.
[0061] The optical detector system provides output that is indicative of a relative position of an incoming beam of light relative to the detector array as well as distance of the incoming beam of light relative to the detector array. This output may then be used to operate one or more devices to provide active tracking of a beam of incoming light. The system may be used in a variety of applications including, but not limited to, intersatellite communications, communications between a satellite and ground station, communications between a satellite and user terminals, between vehicles, between terrestrial stations, and the like. For example, the system may be used in terrestrial applications, mobile applications, and so forth. Some of the applications are described in U.S. Pat. No. 11,424,827, mentioned above, which is incorporated by reference in this document.
[0062] Conventional optical detector systems use a single element as discussed above.
[0064] The optical detector system 100 according to the present disclosure includes a photodiode element 102 having an improved geometrical pattern or array of detectors. The array combines a center quadrant (or segmented PSD) with radial wedges (a 1D PSD) that extend outward from the center quadrant to the periphery of the photodiode element 102. Several embodiments of the optical detector system 100 are disclosed.
[0065] In certain embodiments, the photodiode elements 102 may be implemented as focal plane arrays (FPAs). An FPA is an arrangement of multiple photodiode elements, typically organized in a one-dimensional (linear) or two-dimensional (matrix) configuration, that are positioned at the focal plane of an optical system. This configuration enables the simultaneous detection of light at multiple spatial locations, thereby facilitating the acquisition of spatially resolved optical information across the detector surface. The use of FPAs as photodiode elements 102 can be particularly advantageous in applications requiring high spatial resolution or parallel detection of optical signals.
[0066] Such FPA photodiode elements 102 may comprise a plurality of individual photodiode elements, each capable of generating an electrical signal in response to incident light. These photodiode elements can be fabricated using semiconductor processes similar to those used for single-element photodiodes but arranged in a regular grid or linear array to form the FPA. The electrical signals from each element of the array can be read out individually or in groups, depending on the desired imaging or detection modality.
[0067] In certain embodiments, the focal plane array may be configured as a one-dimensional linear array, suitable for line-scanning applications or for detecting the position of a light beam along a single axis. Alternatively, the FPA may be a two-dimensional matrix, enabling full-field imaging or the detection of complex spatial light patterns. The choice between linear and matrix configurations may be determined by the specific requirements of the optical system and the nature of the signals to be detected.
[0068] The integration of photodiode element FPAs as element 102 allows for enhanced functionality, such as the ability to perform spatially resolved measurements of irradiance, as shown in the detector images of the accompanying drawing.
[0069] Focal plane arrays used as photodiode elements 102 may be fabricated from various semiconductor materials, such as silicon, indium gallium arsenide, or other materials suitable for the desired wavelength range. The array may include integrated readout circuitry, such as multiplexers or amplifiers, to facilitate the efficient extraction and processing of signals from the individual detector elements. In some embodiments, the FPA may be cooled or otherwise optimized to reduce noise and enhance sensitivity, depending on the application requirements.
[0070] The use of FPAs as photodiode elements 102 also enables advanced signal processing techniques, such as pixel binning, region-of-interest selection, or real-time image analysis. These capabilities can be leveraged to improve the signal-to-noise ratio, increase dynamic range, or enable adaptive measurement strategies. The system may further include a processor or computer system, such as that illustrated in the accompanying drawing.
[0071] Such embodiments where photodiode elements 102 comprise FPAs provide a flexible and robust platform for capturing spatially resolved optical data, supporting a wide range of measurement and imaging applications. The versatility of FPAs allows the system to be readily adapted to different optical configurations and detection requirements, whether for high-resolution imaging, beam profiling, or other advanced optical analyses. This adaptability ensures that the photodiode element arrangement can be optimized for the specific needs of the system, while maintaining compatibility with the other components and functionalities described herein.
[0074] The width, length, and number of the individual radial wedge sections 120 can be optimized to accommodate a small spot beam so that there is no positional ambiguity. Therefore, the photodiode element 102 of the optical detector system 100 avoids the ambiguity found in existing position sensing detectors when small beam diameters are used. Further, the radial wedge sections 120 can be electrically configured to provide both a radial distance and an angular position to vastly improve guidance when the optical detector system 100 is used for beam steering. The optical detector system 100 can support a simple optical window or specific lensing can be used to manipulate an incoming beam into a unique output so as to fall onto the radial wedge sections 120 or the inner quadrant sections 110 to provide a unique photoelectric displacement output. The optical detector system 100 can also minimize the size of the center of the array, without sacrificing wide-field accuracy, and can better accommodate blind spots.
[0076] The goal of the optical detector system 100 is to position the spot 140 precisely at the center of the photodiode element 102 (the position at which the spot 140 is shown in the accompanying drawing).
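Centering the spot on a quadrant detector is conventionally done by combining the four quadrant photocurrents into normalized x and y error signals. The sketch below uses the standard quad-cell formulas; the quadrant numbering convention and the function name are illustrative assumptions, not taken from the patent.

```python
def quad_cell_errors(q1, q2, q3, q4):
    """Normalized x/y error signals from four quadrant photocurrents.

    Quadrants are numbered counterclockwise from the upper right
    (an assumed convention; the patent does not fix a numbering).
    A centered spot yields (0, 0).
    """
    total = q1 + q2 + q3 + q4
    if total == 0:
        raise ValueError("no light on the detector")
    ex = ((q1 + q4) - (q2 + q3)) / total  # +x: more light on right half
    ey = ((q1 + q2) - (q3 + q4)) / total  # +y: more light on top half
    return ex, ey
```

A beam-steering loop would drive both error signals toward zero, at which point the spot sits at the center of the photodiode element.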
[0078] The optical detector system 100 requires flip chip for connectivity (i) between the four quadrant sections 110 and their corresponding anode bond pads 118; and (ii) between the radial wedge sections 120 and their corresponding anode bond pads 128. Therefore, the optical detector system 100 is preferably back-side illuminated. Flip chip, also known as controlled collapse chip connection or its abbreviation, C4, is a method for interconnecting dies such as semiconductor devices, integrated circuit chips, integrated passive devices, and microelectromechanical systems (MEMS), to external circuitry with solder bumps that have been deposited onto the chip pads. The solder bumps are deposited on the chip pads on the top side of the wafer during the final wafer processing step. In order to mount the chip to external circuitry (e.g., a circuit board or another chip or wafer), it is flipped over so that its top side faces down, and aligned so that its pads align with matching pads on the external circuit, and then the solder is reflowed to complete the interconnect. The flip chip connectivity is in contrast to wire bonding, in which the chip is mounted upright and fine wires are welded onto the chip pads and lead frame contacts to interconnect the chip pads to external circuitry.
[0079] A back-illuminated sensor, also known as a backside illumination (BI) sensor, is a type of digital image sensor that uses a novel arrangement of the imaging elements to increase the amount of light captured and thereby improve low-light performance. A traditional, front-illuminated sensor is constructed in a fashion similar to the human eye, with a lens at the front and photodiode elements at the back. This traditional orientation of the sensor places the active matrix of the sensor (a matrix of individual picture elements) on its front surface and simplifies manufacturing. The matrix and its wiring reflect some of the light, however, and thus the photocathode layer can only receive the remainder of the incoming light; the reflection reduces the signal that is available to be captured.
[0080] A back-illuminated sensor contains the same elements as the front-illuminated sensor, but arranges the wiring behind the photocathode layer by flipping the silicon wafer during manufacturing and then thinning its reverse side so that light can strike the photocathode layer without passing through the wiring layer. This change can improve the chance of an input photon being captured from about 60% to over 90%. The greatest difference is realized when pixel size is small, because the light capture area gained in moving the wiring from the top (light incident) to bottom surface is proportionately smaller for a larger pixel.
[0083] Another variation of the embodiment of the optical detector system 100 reduces the detector area of the radial wedge sections 120.
[0084] More specifically, the number and/or geometry of each of the radial wedge sections 120 can be reduced to reduce the detector area. Rather than twenty-four discrete and independent radial wedge sections 120 each separated by a gap 122, there may be only twelve discrete and independent radial wedge sections 120 each separated by a larger gap 122. Rather than having a pie shape, each radial wedge section 120 may have a substantially rectangular shape separated from adjacent radial wedge sections 120 by gaps 122 that have both a relatively large area and a substantially rectangular shape themselves. Each radial wedge section 120 may have a diamond shape. Each radial wedge section 120 may be configured as a sparse or relatively thin line detector. Although the radial wedge sections 120 may have a pie shape, the sections may not extend from a narrower head proximate the center of the photodiode element to a wider foot proximate the periphery of the photodiode element, i.e., the radial wedge sections 120 may extend instead from a narrower head somewhat removed from the center of the photodiode element to a wider foot proximate the periphery of the photodiode element. Similarly, other shapes (e.g., diamond and line) may not extend fully from the center to the periphery of the photodiode element.
[0085] The result of wire bonding the design variations outlined above for reducing the detector area in each of the radial wedge sections 120 would be that the parallel capacitance in the outer radial wedge sections 120 can be made greater than (e.g., pie-shaped detectors), less than (e.g., thin line detectors), or equal to (e.g., diamond-shaped detectors) the capacitance of the inner quadrant sections 110 depending on application requirements.
[0088] More generally, lateral effect sensors use a detector longitudinally to share charge in a ratio of geometric proportion consistent with the gradient of electrical-resistivity uniformity and/or geometric shape. A photon may fall between the two anodes, resulting in a shared charge. The sheet resistance between a spot at location x causes charge to flow to the contact of least resistance. With a flux of photons, a statistical probability based upon diffusion conditions causes the ratio of charge collected at one anode to be directly proportional to the distance between the two anodes. For sensors where the resistivity is non-uniform (gradient), or the geometry is not linear (as in a wedge or pie-shape), this must be taken into account, but can be measured by transmission line measurement test structures or modeled with accurate coefficients for material and electrical properties and geometric dimensions.
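For a uniform-resistivity 1-D lateral-effect sensor, the charge ratio described above reduces to a simple closed form: the spot position is proportional to the normalized difference of the two anode charges. The sketch below assumes uniform sheet resistance (the text notes that gradient resistivity or wedge geometries require correction factors); the function name and sign convention are illustrative.

```python
def lateral_effect_position(q_a, q_b, length):
    """Estimate spot position along a 1-D lateral-effect sensor,
    measured from the sensor center (positive toward anode B).

    Assumes uniform sheet resistance, so collected charge divides in
    inverse proportion to the resistance (hence distance) to each anode:
    x = (L/2) * (Qb - Qa) / (Qa + Qb).
    """
    total = q_a + q_b
    if total == 0:
        raise ValueError("no charge collected")
    return 0.5 * length * (q_b - q_a) / total
```

Equal charges at both anodes place the spot at the center (x = 0); a 3:1 split places it three quarters of the way toward the favored anode.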
[0090] The photodiode element 102 in each embodiment of the optical detector system 100 provides an output signal that is indicative of light incident upon its active area. For example, light incident on an active portion of a photodiode element may produce an output current that is proportionate to the power of the incident light. As disclosed above, the individual inner quadrant sections 110 of the photodiode element 102 are separated from one another by a gap 112 and the individual radial wedge sections 120 of the photodiode element 102 are separated from one another by a gap 122. The gap 112, 122 may have a width of about 1 µm, about 10 µm, about 20 µm, about 30 µm, about 40 µm, about 50 µm, about 100 µm, about 1,000 µm (1 mm), or in the range between 1 and 1,000 µm, between 10 and 100 µm, between 20 and 50 µm, between 20 and 40 µm, between 20 and 30 µm, between 30 and 50 µm, or between 40 and 50 µm. The output signals may be processed by a computer apparatus that includes a processor, database, and stored instructions to configure the processor to process data in accordance with the methods of the disclosure.
B. The VCSEL
[0092] Surrounding the active region 66 are additional layers, often acting as electrical conductors, that facilitate current injection and serve as an upper DBR 68. Like its bottom counterpart, the upper DBR 68 ensures that emitted light remains within the cavity, contributing to the high efficiency and precise wavelength control of the laser. Thus, the laser resonator consists of two DBR mirrors 64, 68 parallel to the wafer surface with the active region 66 in between. The planar DBR mirrors 64, 68 consist of layers with alternating high and low refractive indices. Each layer has a thickness of a quarter of the laser wavelength in the material, yielding intensity reflectivities above 99%. High reflectivity mirrors are required in VCSELs to balance the short axial length of the gain region.
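The quarter-wave condition stated above fixes each DBR layer's physical thickness at the laser wavelength divided by four times that layer's refractive index. The sketch below evaluates this for an 850 nm GaAs/AlAs mirror; the index values are approximate textbook figures, not device data from the patent.

```python
def quarter_wave_thickness(wavelength_nm, refractive_index):
    """Physical thickness of a quarter-wave DBR layer: lambda / (4 * n)."""
    return wavelength_nm / (4.0 * refractive_index)

# Illustrative values for an 850 nm GaAs/AlAs mirror (approximate
# refractive indices, assumed for this example):
t_gaas = quarter_wave_thickness(850.0, 3.52)  # high-index layer, ~60 nm
t_alas = quarter_wave_thickness(850.0, 2.95)  # low-index layer, ~72 nm
```

Because the optical path of each layer is exactly a quarter wavelength, reflections from successive interfaces add in phase, which is why a modest number of pairs can exceed 99% reflectivity.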
[0093] In common VCSELs the upper and lower mirrors 64, 68 are doped as p-type and n-type materials, forming a diode junction. In more complex structures, the p-type and n-type regions may be embedded between the mirrors 64, 68, requiring a more complex semiconductor process to make electrical contact to the active region 66, but eliminating electrical power loss in the DBR structure.
[0094] VCSELs function by injecting an electrical current into the semiconductor structure through an upper metal contact 70 and a lower metal contact 72. This current causes electrons and holes to recombine within the active region 66 of the semiconductor, releasing photons in the process. These photons are confined within the laser cavity, formed by the highly reflective DBRs 64, 68, where they undergo resonance and intensity amplification. As a result, coherent laser light is emitted perpendicular to the surface through the upper DBR 68. The power of the emitted light can be controlled by adjusting the injected current.
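The statement that emitted power is controlled by the injected current is commonly modeled with an idealized light-current (L-I) curve: zero output below a threshold current, then a linear rise set by the slope efficiency. The parameter values below are illustrative assumptions, not measurements of any device in the patent.

```python
def vcsel_output_power(current_mA, threshold_mA=1.0, slope_W_per_A=0.5):
    """Idealized VCSEL L-I curve: no lasing below threshold, then output
    power rising linearly with drive current at the slope efficiency.
    Default parameter values are illustrative, not device data."""
    if current_mA <= threshold_mA:
        return 0.0
    return slope_W_per_A * (current_mA - threshold_mA) * 1e-3  # watts
```

Under these assumed parameters, a 3 mA drive current would yield about 1 mW of optical output.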
[0095] VCSELs for wavelengths from 650 nm to 1300 nm are typically based on gallium arsenide (GaAs) wafers with DBRs formed from GaAs and aluminum gallium arsenide (AlxGa(1-x)As). The GaAs-AlGaAs system is favored for constructing VCSELs because the lattice constant of the material does not vary strongly as the composition is changed, permitting multiple lattice-matched epitaxial layers to be grown on a GaAs substrate. The refractive index of AlGaAs does vary relatively strongly as the Al fraction is increased, however, minimizing the number of layers required to form an efficient Bragg mirror compared to other candidate material systems. Furthermore, at high aluminum concentrations, an oxide can be formed from AlGaAs, and this oxide can be used to restrict the current in a VCSEL, enabling very low threshold currents.
[0096] The larger output aperture of VCSELs, compared to most edge-emitting lasers, produces a lower divergence angle of the output beam. The small active region, compared to edge-emitting lasers, reduces the threshold current of VCSELs, resulting in low power consumption. The low threshold current also permits high intrinsic modulation bandwidths in VCSELs. The wavelength of VCSELs may be tuned, within the gain band of the active region, by adjusting the thickness of the reflector layers.
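The inverse relationship between aperture size and divergence noted above follows from Gaussian beam optics: the far-field half-angle is the wavelength divided by pi times the beam waist radius. The sketch below evaluates this standard formula; the example waist value is an assumption for illustration, not a figure from the patent.

```python
import math

def gaussian_divergence_rad(wavelength_nm, waist_radius_um):
    """Far-field half-angle divergence of a Gaussian beam,
    theta = lambda / (pi * w0).  A larger emitting aperture (larger
    waist) gives a lower divergence, as the text notes for VCSELs."""
    wavelength_m = wavelength_nm * 1e-9
    waist_m = waist_radius_um * 1e-6
    return wavelength_m / (math.pi * waist_m)
```

For an assumed 850 nm VCSEL with a 4 µm waist radius, this gives roughly 0.07 rad (about 4 degrees), far narrower than a typical edge emitter's fast axis.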
[0097] VCSELs are known for their high power conversion efficiency, leading to less energy waste and lower operating costs. They produce a circular, low divergence beam which simplifies coupling to optical fibers and facilitates array formation. VCSELs can modulate directly at very high frequencies, often exceeding several tens of GHz. VCSELs can be tested and characterized on-wafer before they are cleaved into individual chips, reducing production costs. A suitable VCSEL 60 is commercially available as Model V00140 from Vixar Inc. of Plymouth, Minnesota.
[0098] As an alternative to the VCSEL 60, the light source may be a microelectromechanical system (MEMS) tunable VCSEL. A MEMS tunable VCSEL is a compact, high-speed laser light source whose wavelength is tunable over a wide range. The operating principle of a silicon-MEMS tunable VCSEL is as follows: when a voltage is applied to the upper and lower layers of the Si-MEMS substrate, static electricity occurs and attracts a thin film of silicon on the upper layer toward the lower layer. As a result, the optical resonator becomes longer and the laser oscillation wavelength increases accordingly. With this mechanism, wavelengths can be swept continuously, which is particularly useful for optical measurements.
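The tuning mechanism described above, in which a longer resonator produces a longer oscillation wavelength, can be illustrated with the basic Fabry-Perot resonance condition, lambda = 2nL/m. This is an idealized model (real tunable VCSELs also involve field penetration into the DBRs); the numeric values are assumptions for illustration.

```python
def resonant_wavelength_nm(cavity_length_nm, mode_order, effective_index=1.0):
    """Resonant wavelength of an idealized Fabry-Perot cavity,
    lambda = 2 * n * L / m.  A MEMS actuator that lengthens the cavity
    increases the oscillation wavelength proportionally."""
    if mode_order < 1:
        raise ValueError("mode order must be a positive integer")
    return 2.0 * effective_index * cavity_length_nm / mode_order
```

With an assumed air gap of 1275 nm operating on the third longitudinal mode, the resonance sits at 850 nm; pulling the MEMS membrane to shorten the gap sweeps the wavelength downward.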
[0099] MEMS tunable VCSELs are commercially available, for example, from Thorlabs, Inc. of Newton, New Jersey and Yokogawa Corporation of America of Houston, Texas. A tunable wavelength laser such as a MEMS tunable VCSEL enables the sensor to measure semi-transparent material via a variation in optical coherence tomography (OCT) systems requiring superior sensitivity. The MEMS tunable VCSEL may include an active power control that maintains constant output power over the lifetime of the laser.
C. The Interferometer
[0100] The principles of interference are easy to understand and begin when two or more light waves interact. Add the heights and depths of the separate waves where they interact, and the result is the interference pattern. Two specific kinds of interference define a spectrum of possibilities. Total constructive interference happens when the peaks and troughs of identical waves perfectly coincide. The result is a larger wave equal in size to the sum of the heights (and depths) of the merging waves at each point where they intersect (i.e., the brightness of the resulting beam is the sum of brightnesses of the interacting beams). Total destructive interference is the exact opposite. When the peaks of one wave meet and exactly match the troughs of identical waves, they cancel each other out and no wave results (i.e., there is no light). Of course, in nature, two or more light waves are rarely identical, and the peaks and troughs of one wave will rarely perfectly meet the peaks or troughs of another wave. Regardless, no matter how they differ, when the waves intersect, the result is always the sum of the heights and depths of the waves wherever they intersect. This means that the alignment of the waves as they interact dictates the resulting interference pattern.
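The wave-addition rule described above has a standard closed form for two coherent beams: the combined intensity is the sum of the individual intensities plus a cross term that depends on their relative phase. The sketch below evaluates that formula and recovers both limiting cases named in the text.

```python
import math

def interference_intensity(i1, i2, phase_rad):
    """Intensity of two superposed coherent beams:
    I = I1 + I2 + 2 * sqrt(I1 * I2) * cos(delta_phi).
    phase_rad = 0 gives total constructive interference;
    phase_rad = pi gives total destructive interference."""
    return i1 + i2 + 2.0 * math.sqrt(i1 * i2) * math.cos(phase_rad)
```

For two identical unit-intensity beams, a zero phase difference yields four times the single-beam intensity, while a half-wave (pi) phase difference yields zero, matching the total constructive and destructive cases above.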
[0101] Interferometers are investigative tools used in many fields of science and engineering. Pioneered in the mid- to late-1800s, they are called interferometers because they work by merging sources of light to create an interference pattern, which can be measured and analyzed: hence interfere-meter or interferometer. The interference patterns generated by interferometers contain information about the object being studied. They are often used to make very small measurements that cannot be achieved any other way. Despite their different designs and the various ways in which they are used, all interferometers have one thing in common: they superimpose beams of light to generate an interference pattern.
[0102] The Michelson interferometer is a common configuration for optical interferometry and was invented by the American physicist Albert Abraham Michelson. Using a beamsplitter, a light source is split into two beams. Each of those light beams is reflected back toward the beamsplitter which then combines their amplitudes using the superposition principle. The resulting interference pattern that is not directed back toward the source is typically directed to some type of photoelectric detector or camera that records the interference pattern. For different applications of the interferometer, the two light paths can have different lengths or incorporate optical elements or even materials under test.
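In a Michelson-type configuration, displacement measurement reduces to fringe counting: because the measurement arm is traversed twice, each full fringe cycle corresponds to a half-wavelength of mirror motion. The sketch below applies that standard relation; the numeric example is an assumption for illustration.

```python
def displacement_from_fringes(fringe_count, wavelength_nm):
    """Mirror displacement in a Michelson-type interferometer.

    Each fringe corresponds to lambda/2 of mirror motion, because the
    beam traverses the measurement arm twice (out and back), so a full
    wavelength of path change needs only half a wavelength of motion."""
    return fringe_count * wavelength_nm / 2.0
```

For example, counting 100 fringes with an assumed 850 nm source corresponds to 42.5 µm of mirror displacement.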
[0103] The Mach-Zehnder interferometer is a device used to determine the relative phase shift variations between two collimated beams derived by splitting light from a single source. The interferometer has been used, among other things, to measure phase shifts between the two beams caused by a sample or a change in length of one of the paths. The apparatus is named after the physicists Ludwig Mach (the son of Ernst Mach) and Ludwig Zehnder; Zehnder's proposal in an 1891 article was refined by Mach in an 1892 article. Mach-Zehnder interferometry with electrons as well as with light has been demonstrated. The versatility of the Mach-Zehnder configuration has led to its being used in a range of research efforts, especially in fundamental quantum mechanics. The Mach-Zehnder interferometer is a highly configurable instrument. In contrast to the well-known Michelson interferometer, each of the well-separated light paths is traversed only once.
[0104] A Mirau interferometer works on the same basic principle as a Michelson interferometer. The difference between the two is the physical location of the reference beam. The reference arm of a Mirau interferometer is located within a microscope objective assembly. It is named after André Henri Mirau, who obtained U.S. Pat. No. 2,612,074 on the concept in 1952.
[0105] A schematic of a Mirau interferometer 1 is shown in
[0106] More generally, the incident light beam T travels two paths. The first path TUVWXYZ passes through the lens 6 then through the beam splitter 2, 3, 4, reflects off the object 8, passes back through the beam splitter 2, 3, 4 then through the lens 6 (at the point Y), and returns as part of the beam Z to the objective or viewing device. The incident light beam T also travels a second path TUVWXYZ with successive reflections off the beam splitter 2, 3, 4 and the mirror 5 before returning to the viewing device as part of the beam Z.
[0107] A Cartesian coordinate system (x, y, z) is a coordinate system that specifies each point uniquely in three-dimensional space by three Cartesian numerical coordinates, which are the signed distances to the point from three fixed, mutually perpendicular directed lines, measured in the same unit of length. Each reference line is called a coordinate axis or just an axis of the system, and the point where they meet is its origin, usually at the ordered triplet (0, 0, 0). The coordinates can also be defined as the positions of the perpendicular projections of the point onto the three axes, expressed as signed distances from the origin. The coordinate measured from the y-axis parallel to the x-axis is called the abscissa, and the other coordinate in the x-y plane is called the ordinate. The z-axis extends vertically from the horizontal x-y plane. The coordinate system is illustrated in
[0108] By changing the z position of the object 8, interference images are acquired at a sequence of path (phase) differences: 0, λ/4, λ/2, and 3λ/4. These interference maps are functions of background intensity, fringe modulation, and phase. Three such images provide enough information to solve for the topographic image of the object 8. The Mirau interferometer 1 also makes it possible to determine, with high precision and without contacting the object 8, the relative position of the object 8.
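One common way to solve interference maps acquired at these path differences is the textbook four-step phase-shifting algorithm. The disclosure does not specify which algorithm its computational element uses, so the following is an assumed sketch, not the claimed method:

```python
import math

# Standard four-step phase-shifting algorithm (a common textbook method).
# Each frame I_k = A + B*cos(phi + k*pi/2), corresponding to path
# differences of 0, lambda/4, lambda/2, and 3*lambda/4, where A is the
# background intensity, B the fringe modulation, and phi the phase.
def four_step_phase(i0, i1, i2, i3):
    # i3 - i1 = 2*B*sin(phi); i0 - i2 = 2*B*cos(phi); background A cancels.
    return math.atan2(i3 - i1, i0 - i2)   # phase wrapped to (-pi, pi]

# Synthetic frames for a known phase, with background A=2 and modulation B=1.
phi_true = 0.7
frames = [2 + math.cos(phi_true + k * math.pi / 2) for k in range(4)]
phi = four_step_phase(*frames)            # recovers phi_true
```

Because the differences cancel the unknown background and modulation, the phase, and hence the local surface height, is recovered independently of illumination variations across the field.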
D. The Optoelectronic Package
[0109]
[0110]
[0111] As also illustrated in
[0112] The photodiode element 102 shown in
[0113] On the LCC 302 is positioned the photodiode element 102 including the VCSEL 60 as illustrated in
[0114] In summary, the optoelectronic package 300 includes the photodiode element 102 having multiple concentric arc sections 121, a laser diode (preferably the VCSEL 60) located centrally within the photodiode element 102, and the interferometer (preferably, the Mirau interferometer 1) and functions to measure the surface of the object 310. As shown in
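As a purely hypothetical sketch of how a computational element might derive tip and tilt from a segmented ring photodiode such as the photodiode element 102, opposing arc sections can be compared for intensity imbalance. The four-segment grouping, axis mapping, and normalization below are illustrative assumptions, not details recited in this disclosure:

```python
# Hypothetical tip/tilt estimation from four grouped arc sections of a
# segmented ring photodiode. If opposing sections report different mean
# fringe intensities, the normalized imbalance along each axis serves as
# a proxy signal for angular misalignment of the object surface.
# Segment names and axis assignments are assumptions for illustration.
def tip_tilt_signal(north, south, east, west):
    total = north + south + east + west
    if total == 0:
        return 0.0, 0.0                   # no light: no usable signal
    tip = (north - south) / total         # normalized imbalance about one axis
    tilt = (east - west) / total          # normalized imbalance about the other
    return tip, tilt

# A brighter "north" section suggests the surface is tipped toward it.
tip, tilt = tip_tilt_signal(1.2, 0.8, 1.0, 1.0)
```

In practice the mapping from these normalized imbalances to microradian angles would require calibration against known tilts; the sketch only shows the differencing structure such a computation might take.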
[0115] Extremely precise measurement of both surface displacement (at submicron levels) and angular displacement (at microradian levels) is important in many industries, but none more so than semiconductor wafer processing. Traditionally, these measurements have been difficult to integrate into wafer processing equipment because of the size and complexity of measurement systems. Further, conventional systems typically offer only a displacement measurement. The optoelectronic package 300 offers a precise measurement with three degrees of freedom (displacement, tip, and tilt) and can be embedded into the extremely small spaces often found in semiconductor wafer processing equipment.
[0116] The optoelectronic package 300 may also include a black mask 308 which can be provided in one or more layers as shown in
[0117]
[0118] Also illustrated in
[0119]
[0120]
E. The Computer System
[0121]
[0122] This disclosure contemplates any suitable number of computer systems 200. This disclosure contemplates the computer system 200 taking any suitable physical form. As an example and not by way of limitation, the computer system 200 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, or a combination of two or more of these devices. Where appropriate, the computer system 200 may include one or more computer systems 200; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 200 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated in this document. As an example and not by way of limitation, the one or more computer systems 200 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated in this document. The one or more computer systems 200 may perform at different times or at different locations one or more steps of one or more methods described or illustrated in this document, where appropriate.
[0123] In particular embodiments, the computer system 200 includes a processor 202, memory 204, storage 206, an input/output (I/O) interface 208, a communication interface 210, and a bus 212. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.
[0124] In particular embodiments, the processor 202 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, the processor 202 may retrieve (or fetch) the instructions from an internal register, an internal cache, the memory 204, or the storage 206; decode and execute them; and then write one or more results to an internal register, an internal cache, the memory 204, or the storage 206. In particular embodiments, the processor 202 may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates the processor 202 including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, the processor 202 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in the memory 204 or the storage 206, and the instruction caches may speed up retrieval of those instructions by the processor 202. Data in the data caches may be copies of data in the memory 204 or the storage 206 for instructions executing at the processor 202 to operate on; the results of previous instructions executed at the processor 202 for access by subsequent instructions executing at the processor 202 or for writing to the memory 204 or the storage 206; or other suitable data. The data caches may speed up read or write operations by the processor 202. The TLBs may speed up virtual-address translation for the processor 202. In particular embodiments, the processor 202 may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates the processor 202 including any suitable number of any suitable internal registers, where appropriate. 
Where appropriate, the processor 202 may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors 202. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor.
[0125] In particular embodiments, the memory 204 includes main memory for storing instructions for the processor 202 to execute or data for the processor 202 to operate on. As an example and not by way of limitation, the computer system 200 may load instructions from the storage 206 or another source (such as, for example, another computer system 200) to the memory 204. The processor 202 may then load the instructions from the memory 204 to an internal register or internal cache. To execute the instructions, the processor 202 may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, the processor 202 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. The processor 202 may then write one or more of those results to the memory 204. In particular embodiments, the processor 202 executes only instructions in one or more internal registers or internal caches or in the memory 204 (as opposed to the storage 206 or elsewhere) and operates only on data in one or more internal registers or internal caches or in the memory 204 (as opposed to the storage 206 or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple the processor 202 to the memory 204. The bus 212 may include one or more memory buses, as described below. In particular embodiments, one or more memory management units (MMUs) reside between the processor 202 and the memory 204 and facilitate accesses to the memory 204 requested by the processor 202. In particular embodiments, the memory 204 includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. 
The memory 204 may include one or more memories 204, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory.
[0126] In particular embodiments, the storage 206 includes mass storage for data or instructions. As an example and not by way of limitation, the storage 206 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. The storage 206 may include removable or non-removable (or fixed) media, where appropriate. The storage 206 may be internal or external to the computer system 200, where appropriate. In particular embodiments, the storage 206 is non-volatile, solid-state memory. In particular embodiments, the storage 206 includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates the storage 206 taking any suitable physical form. The storage 206 may include one or more storage control units facilitating communication between the processor 202 and the storage 206, where appropriate. Where appropriate, the storage 206 may include one or more storages 206. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage.
[0127] In particular embodiments, the I/O interface 208 includes hardware, software, or both, providing one or more interfaces for communication between the computer system 200 and one or more I/O devices. The computer system 200 may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and the computer system 200. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces 208 for them. Where appropriate, the I/O interface 208 may include one or more device or software drivers enabling the processor 202 to drive one or more of these I/O devices. The I/O interface 208 may include one or more I/O interfaces 208, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface.
[0128] In particular embodiments, the communication interface 210 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between the computer system 200 and one or more other computer systems 200 or one or more networks. As an example and not by way of limitation, the communication interface 210 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface 210 for it. As an example and not by way of limitation, the computer system 200 may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, the computer system 200 may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination of two or more of these. The computer system 200 may include any suitable communication interface 210 for any of these networks, where appropriate. The communication interface 210 may include one or more communication interfaces 210, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface.
[0129] In particular embodiments, the bus 212 includes hardware, software, or both coupling components of the computer system 200 to each other. As an example and not by way of limitation, the bus 212 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. The bus 212 may include one or more buses 212, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect.
[0130] In this document, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate.
[0131] This disclosure contemplates one or more computer-readable storage media implementing any suitable storage. In particular embodiments, a computer-readable storage medium implements one or more portions of the processor 202 (such as, for example, one or more internal registers or caches), one or more portions of the memory 204, one or more portions of the storage 206, or a combination of these, where appropriate. In particular embodiments, a computer-readable storage medium implements RAM or ROM. In particular embodiments, a computer-readable storage medium implements volatile or persistent memory. In particular embodiments, one or more computer-readable storage media embody software. In this document, reference to software may encompass one or more applications, bytecode, one or more computer programs, one or more executables, one or more instructions, logic, machine code, one or more scripts, or source code, and vice versa, where appropriate. In particular embodiments, software includes one or more application programming interfaces (APIs). This disclosure contemplates any suitable software written or otherwise expressed in any suitable programming language or combination of programming languages. In particular embodiments, software is expressed as source code or object code. In particular embodiments, software is expressed in a higher-level programming language, such as, for example, C, Perl, or a suitable extension thereof. In particular embodiments, software is expressed in a lower-level programming language, such as assembly language (or machine code). In particular embodiments, software is expressed in C++, C#, Python, Java, JavaScript, Solidity, Vyper, Golang, Simplicity, or Rholang. In particular embodiments, software is expressed in Hyper Text Markup Language (HTML), Extensible Markup Language (XML), JavaScript Object Notation (JSON) or other suitable markup language.
[0132] Although illustrated and described above with reference to certain specific embodiments and examples, the present disclosure is nevertheless not intended to be limited to the details shown. Rather, various modifications may be made in the details within the scope and range of equivalents of the claims and without departing from the spirit of the disclosure.