VOLUMETRIC METAOPTICS FOR MULTI-DIMENSIONAL WAVEFRONT SENSING

20260063474 · 2026-03-05


    Abstract

    Methods and devices enabling simultaneous sorting of light based on its wavelength, polarization, and direction of propagation are disclosed. The disclosed device maps different combinations of input light properties to different corresponding pixels on an underlying image sensor array, allowing for compressed sensing of multiple light properties simultaneously. The described devices can be designed using advanced inverse design and topology optimization techniques, including adjoint-based optimization and level-set methods. Exemplary performance results show smooth, predictable behavior for input states between the explicitly optimized states, allowing the device to interpolate and classify a continuum of input light properties.

    Claims

    1. A device for multi-dimensional wavefront sensing, comprising: a three-dimensional (3D) structure configured to receive incident light; and a sensor array disposed underneath the 3D structure, the sensor array comprising multiple pixels; wherein the 3D structure is configured to focus different combinations of wavelength, polarization, and propagation direction of the incident light onto different corresponding pixels or combinations of pixels of the sensor array.

    2. The device of claim 1, wherein the 3D structure is configured to maintain wavelength and polarization demultiplexing functionalities across a range of incident angles within an acceptance cone.

    3. The device of claim 2, wherein the 3D structure comprises a stack of multiple layers.

    4. The device of claim 3, wherein each layer of the stack comprises a first material with a first refractive index and a second material with a second refractive index being different from the first refractive index.

    5. The device of claim 4, wherein the first material comprises silicon dioxide (SiO2) and the second material comprises titanium dioxide (TiO2).

    6. The device of claim 1, wherein the 3D structure has dimensions of 3 μm×3 μm×4 μm.

    7. The device of claim 1, wherein the sensor array comprises a 3×3 array of pixels.

    8. The device of claim 1, wherein the 3D structure is configured to perform wavelength demultiplexing, polarization sorting, and angle-dependent focusing simultaneously.

    9. The device of claim 8, wherein the wavelength demultiplexing is configured to separate at least two distinct wavelengths.

    10. The device of claim 8, wherein the polarization sorting is configured to separate at least two polarization states, including orthogonal linear states.

    11. The device of claim 8, wherein the angle-dependent focusing is configured to focus at least five different incident angles to five different locations on the sensor array.

    12. An optical system for multi-dimensional wavefront sensing, comprising: an array of devices, each device comprising: a three-dimensional (3D) structure configured to receive incident light; and a sensor array disposed underneath the 3D structure, the sensor array comprising multiple pixels; wherein each 3D structure is configured to focus different combinations of wavelength, polarization, and propagation direction of the incident light onto different pixels or combinations of pixels of a corresponding portion of the sensor array.

    13. The optical system of claim 12, wherein each 3D structure in the array is identical.

    14. The optical system of claim 12, wherein at least some of the 3D structures in the array have different configurations.

    15. The optical system of claim 12, wherein the system is configured to perform light field imaging.

    16. The optical system of claim 15, further comprising a tunable bandpass filter coupled to the array of devices to enable wavelength-dependent light field imaging.

    17. The optical system of claim 12, configured to perform laser beam profiling, the laser beam profiling including classification of wavelength, polarization, and incident angle of the laser beam.

    18. The system of claim 12, wherein each 3D structure is optimized using topology optimization based on adjoint-based inverse design.

    19.-28. (canceled)

    Description

    DESCRIPTION OF THE DRAWINGS

    [0019] FIG. 1A shows an exemplary optical system according to an embodiment of the present disclosure.

    [0020] FIG. 1B shows exemplary fields at the focal plane according to an embodiment of the present disclosure.

    [0021] FIG. 2 shows exemplary transmissions to each pixel for each input state for the embodiment of FIG. 1A.

    [0022] FIGS. 3A-3B demonstrate exemplary performances of the disclosed methods and devices.

    [0023] FIG. 4 demonstrates the dependence of device performance on pixel distribution.

    [0024] FIG. 5 shows an exemplary block diagram representing an exemplary optimization method according to the teachings of the present disclosure.

    [0025] FIG. 6 shows exemplary convergence plots in accordance with embodiments of the present disclosure.

    [0026] FIG. 7 shows an exemplary three-dimensional structure designed based on layers according to an embodiment of the present disclosure.

    DETAILED DESCRIPTION

    [0027] Two-dimensional (2D) image sensors are the most common detectors for light. A central optical engineering task is how to best extract the information from the incident optical field using an optical system and a planar image sensor. For example, a black-and-white camera maximizes the amount of spatial information by performing a one-to-one mapping between the direction of propagation and a pixel on the image sensor, but spectral and polarization information is lost. A line scan multispectral camera maps only one spatial coordinate to a direction on the image sensor, while the other direction records the spectrum of the spatial pixel. Various other mappings are used in cameras to image color, polarization, light field, etc.

    [0028] Since the information capacity of a 2D image sensor is finite, getting more information about some degrees of freedom for light comes at the expense of information in other degrees of freedom. Also, in general purpose systems with trivial mapping implementations that use conventional optical components like lenses, gratings, and prisms, a single pixel on the image sensor detects a specific combination of single degrees of freedom. For example, a camera may detect the combination of a specific direction, wavelength band, and polarization. Thus, if S spatial directions, W wavelength bands, and P polarizations need to be resolved, then S×W×P pixels are needed, where P is at most 4 to fully classify polarization through the Stokes parameters.
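The pixel-budget arithmetic above can be made concrete with a short sketch. The specific counts below are illustrative (they match the nine-state example described elsewhere in the disclosure), and the helper function name is my own:

```python
# Naive sensing: one sensor pixel per (direction, wavelength band, polarization)
# combination, as with conventional lenses, gratings, and prisms.
def naive_pixel_count(s_directions: int, w_bands: int, p_polarizations: int) -> int:
    """Pixels needed when each pixel detects one fixed combination of properties."""
    return s_directions * w_bands * p_polarizations

# Example: 5 directions, 2 wavelength bands, 2 polarizations.
naive = naive_pixel_count(5, 2, 2)   # 5 * 2 * 2 = 20 pixels

# The compressed mapping instead classifies the same 5 + 2 + 2 = 9 states
# with a 3x3 sensor (9 pixels) by reading out intensity ratios across pixels.
compressed = 3 * 3
```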

    [0029] In some applications, there is prior knowledge about the input light field, in which case it is possible to use mappings that more efficiently utilize the pixels of the sensor. According to embodiments of the present disclosure, an inverse designed volumetric meta-optic device can efficiently map different combinations of wavelengths, directions and polarizations into combinations of pixels on an image sensor. The compressed information can fully classify properties of the incident fields under certain approximations, namely that the wavefronts are monochromatic, locally linear in phase, and linearly polarized. The disclosed devices are based on metaoptics, which describes materials patterned with subwavelength resolution that impart customized transformations to incident light.

    [0030] FIG. 1A shows an exemplary optical system (100A) according to an embodiment of the present disclosure. Optical system (100A) comprises a three-dimensional (3D) structure (101) and a sensor array (102) disposed underneath the 3D structure (101). Sensor array (102) includes multiple pixels (104). As an example, the sensor array shown is a 3×3 array including 9 pixels. An incident planewave (103) is input to 3D structure (101). Subsequent to traversing and undergoing scattering within 3D structure (101), the incident light, depending upon its properties, is focused onto different pixels (104) of the sensor array (102). In accordance with the teachings of the present disclosure, optical system (100A) can classify the incident light based on some desired fundamental properties of the light. In an embodiment, the fundamental properties may be propagation direction, wavelength, and polarization, although other optical functionalities may also be envisaged. Other embodiments with a number of pixels larger or smaller than 9 may also be implemented.

    [0031] According to the teachings of the present disclosure, the 3D structure (101) shown in FIG. 1A may be composed of silicon dioxide (SiO2) and titanium dioxide (TiO2). In this configuration, silicon dioxide may function as the background material, providing a medium against which light scattering occurs. The design of the structure may implement materials other than SiO2 and TiO2. In some embodiments, the bottom of 3D structure (101) of FIG. 1A may be 1.5 μm above sensor array (102). According to the teachings of the present disclosure, to attain the desired functionalities, the 3D structure (101) is fabricated in a specific configuration that enables it to execute said functionalities. The interstices within the structure that are not occupied by the background material (i.e., SiO2) may be filled with a secondary material, such as TiO2.

    [0032] Continuing with the optical system (100A) of FIG. 1A, and in order to demonstrate the performance of the disclosed methods and devices, an optical system that can classify incident light based on the fundamental properties of propagation direction, wavelength, and polarization over a 3×3 grid of imaging sensors was simulated and analyzed by the inventors. This amount of multi-functionality at both high efficiency and compactness has not been obtained to this degree in the past.

    [0033] For the purpose of demonstration, the exemplary optical system, similar to what is shown in FIG. 1A, was designed for three functionalities: polarization splitting of two linearly polarized states, wavelength splitting of two distinct wavelengths, and a basic form of imaging in which five planewave components are focused to five different locations. This totals nine states to be classified, and in practice the 3D structure can be placed above a 3×3 grid of pixels such as CMOS or CCD arrays. Although the 3D structure is designed for a discrete set of states, it can be used to interpolate states on a continuum by analyzing the ratio of intensities among all the pixels.

    [0034] The polarization and wavelength splitting functionalities are designed to work for all of the assumed incident angles, which include a normally incident case and non-normal incidences tilted 5° from normal (polar angle θ=5°) with azimuthal angles φ∈{0°, 90°, 180°, 270°}. The two design wavelengths are 532 nm and 620 nm. The two polarizations are linear and orthogonal, with one polarized in the xz-plane and the other in the yz-plane. The device is a 3 μm×3 μm×4 μm stack of 20 layers. Each 200 nm layer is comprised of titanium dioxide (TiO2, n=2.4) and silicon dioxide (SiO2, n=1.5), with a minimum feature size of 50 nm. The background material is assumed to be SiO2. FIG. 1B shows examples of the fields at the focal plane under a 620 nm xz-polarized planewave excitation angled at (θ, φ)=(5°, 90°). Since this 5° polar angle is in SiO2 with n=1.5, these planewave components couple to 7.5° planewave components in air. On the left side of FIG. 1B, light focusing to different pixels depending on the state of the input light is shown. The right-hand side of the same figure shows the output for a 620 nm, xz-polarized planewave input incident at an angle (θ, φ)=(5°, 90°).

    [0035] Continuing with the same exemplary design as described above, the design of the 3D structure was initially optimized in a narrow wavelength range around 532 nm and 620 nm. However, it was observed that the behaviors of the functionalities are preserved and predictable when excited at states in between the states the device was optimized for. For example, as the input wavelength is continuously shifted from 532 nm to 620 nm, the ratio of the 620 nm pixel transmission to the 532 nm pixel transmission monotonically increases, while the imaging and polarization functions remain efficient. This occurs, again as mentioned above, despite the device only being optimized in a narrow wavelength range around 532 nm and 620 nm. Similar behavior was observed in the imaging functionality when excited at input angles within the 5° cone that were not explicitly optimized for. The read-out from these pixels can thus be used to infer the state of the light incident on the device so long as the input is assumed to be a monochromatic planewave. It was found that this behavior does not depend on the chosen mapping of functionality to specific pixels in the focal plane, based on analyzing the behavior of devices optimized for 20 unique pixel distributions.

    [0036] The performance of the exemplary embodiment described above can be quantified in two ways. First, the performance of the device under the excitation beams that were assumed when optimizing the device can be checked. These are referred to as training modes. Second, the device may be studied at different input angles, wavelengths, and polarization states to analyze its ability to classify states. These are referred to as validation modes.

    [0037] According to the teachings of the present disclosure, the disclosed methods may be implemented to co-optimize 20 different training modes featuring unique combinations of wavelength, polarization, and angle of incidence. The transmission to each pixel for each input state is shown in FIG. 2. The input angle, polarization state, and wavelength are represented by the y-axis. The x-axis represents the fraction of power scattered to different regions. For each state, the transmission to the three correct pixels (201), the transmission to the six incorrect pixels (202), and the transmission elsewhere (203) (i.e., oblique scattering, back-scattering, etc.) is shown. On average, the transmission to the correct pixels totals 47.7%, the transmission to incorrect pixels totals 16.8%, and the transmission elsewhere totals 35.5%. The imaging pixels tend to be more susceptible to error in the sense that more power is erroneously transmitted to them than to the wavelength- or polarization-sorting pixels on average. A possible reason for this is that the different angled input states have higher correlation than the different spectral and polarization input states due to the small device aperture, giving rise to more cross-talk between imaging pixels. Undesired scattering to incorrect pixels could be explicitly minimized as part of the optimization in future tests. The most straightforward way to do this would be to incorporate the minimization of transmission to these pixels as figures of merit (FoMs) in the overall optimization.

    [0038] The design process optimizes the performance of the device only for the training modes, and it is unclear what the output of the device will be if the excitation parameters are continuously varied. The performance of the device at input states between the designed input states is quantified by simulating the performance at wavelengths in between 532 nm and 620 nm, at a large set of angles within a 10° input angle cone, and at various polarization states. FIGS. 3A-3B demonstrate the performance of the disclosed methods. FIG. 3A, panels (a-i), plots the transmission to the different pixels in the focal plane for θ from 0° to 10° and φ from 0° to 360°. The top-down, left-right ordering of these plots is matched to the ordering of the pixels depicted in FIG. 1B. The wavelength demultiplexing plots of FIG. 3A panels (a, g) are averaged in polarization. The polarization sorting plots shown in FIG. 3A panels (c, i) are averaged in wavelength. The imaging plots of FIG. 3A panels (b, d-f, h) are averaged in both polarization and wavelength. FIG. 3B panels (j, k) represent horizontal and vertical traces of the surface plots of FIG. 3A panels (a-i) along the dashed lines, respectively. FIG. 3B panels (j, k) show various curves representing transmission to different pixels: curves (314J, 314K) correspond to the 532 nm pixel, curves (313J, 313K) correspond to the 620 nm pixel, curves (312J, 312K) correspond to the xz-polarization pixel, curves (315J, 315K) correspond to the yz-polarization pixel, curves (311J, 311K) represent the on-axis imaging pixels, and curves (316J, 316K) are the off-axis imaging pixels. With reference to FIG. 3B panel (l), curves (314L, 313L) represent the transmission to the 532 nm and 620 nm pixels, respectively. Curve (312L) is the transmission to the correct polarization pixel given the polarization of the input source, and curve (320L) is the transmission to the incorrect polarization pixel. In FIG. 3B panel (l), all traces are averaged over all evaluated input angles within a 5° cone.

    [0039] Referring to FIG. 3B panels (j, k), in order to obtain the presented plots, the transmission values are averaged across wavelength, polarization, or both, depending on the functionality. For the wavelength sorting functionality, the values are averaged across polarization; for the polarization sorting functionality, the values are averaged across wavelength; and for the imaging functionality, the values are averaged across both wavelength and polarization. The wavelength and polarization demultiplexing functionalities show a more uniform response across (θ, φ) values than the imaging functionality, which means these functions are not highly dependent on incident angle. The imaging functionality is, by design, sensitive to the incident angle. The variation in azimuth for the polarization and spectral pixels is expected because the device is not symmetric, which is required since the desired output fields are not symmetric. However, these deviations are non-ideal, as deviations from a uniform angular response can cause uncertainties in angle classification to cascade into uncertainties in spectral and polarization properties.

    [0040] Curves (313L, 314L) of FIG. 3B show that as the wavelength shifts from 532 nm to 620 nm, the transmission through the 532 nm pixel smoothly decreases, and the transmission through the 620 nm pixel smoothly increases. These transmission values are averaged over both polarizations and across all simulated incident angles (φ∈[0°, 360°], θ∈[0°, 5°]). As expected, the transmission to the 532 nm pixel is maximized for 532 nm wavelengths, and likewise for the 620 nm pixel. The traces smoothly vary between these two wavelengths, indicating that the power is predictably redistributed between the pixels in these cases. As stated previously, curve (312L) represents the average transmission to the correct polarization pixel (for example, xz-polarized input being focused to the xz-polarization pixel), and curve (320L) represents the average transmission to the incorrect polarization pixel (for example, xz-polarized input being focused to the yz-polarization pixel). These transmission values are averaged over wavelength and are averaged over incident angle in the same way curves (312J, 312K, 314J, 314K) are. While there is a drop in efficiency at the non-optimized wavelengths, there remains a high contrast between the curves (312L, 320L) at all wavelengths.

    [0041] Continuing with FIG. 3B, the imaging functionality transmission values vary smoothly with respect to incident angles (θ, φ), but in a less intuitive way than they would with a lens. Rather than the focus spot continuously shifting across the focal plane as (θ, φ) varies, the focused spots instead brighten or dim while remaining approximately centered in the individual pixels. As an example, when (θ, φ)=(2.5°, 45°), the light is primarily scattered to the centers of the (0°, 0°), (5°, 0°), and (5°, 90°) pixels, as well as to the relevant wavelength and polarization pixels. In this specific case, 532 nm light is not focused to the 620 nm pixel, which is the pixel the incident planewave is oriented towards, and as such is where light would be focused if the device were replaced with a lens. Instead, the wavelength sorting and polarization sorting functionalities are mostly preserved under all excitation angles within the acceptance cone.

    [0042] The effect of altering the polarization state is predictable, since the polarization state of any input excitation can be described by two orthogonal linearly polarized states with a relative amplitude and phase shift between the orthogonal components. As an example, a simple experiment involving a linearly polarized source input into a Wollaston prism would show that, as the source is rotated, the output of the Wollaston prism fluidly shifts from one linearly polarized output beam to the other. This behavior is observed in the polarizing functionality of the exemplary design discussed above, where the ratio between the polarizer pixel transmissions can be used to infer the relative power of the two orthogonal linear polarization states. However, it is desirable that the device performance for the wavelength and imaging functionalities not be adversely affected by polarization states that were not explicitly optimized for. The efficiency of these functionalities may be altered as the cross-polarized output of one beam interferes with the parallel-polarized output of the orthogonally polarized input beam. The amount of cross-polarization is defined by integrating the ratio |Ex|^2/|Ey|^2 under yz-polarized illumination over the output plane for all incident angles. The highest amount of cross-polarization is -11.9 dB at 620 nm and -13.6 dB at 532 nm. To verify that the various functionalities are not strongly affected, the results of all functionalities at all possible input polarization states were analyzed by sweeping the relative amplitude and phase of the orthogonal input components, and it was observed that interference effects can alter the transmission to the various pixels by a few percent. According to the teachings of the present disclosure, if a particular design seeks to further minimize cross-polarization, then cross-polarization can be explicitly minimized during the optimization. Doing so will consume some design degrees of freedom, which may detract from the efficiency of the various functionalities, but the optimization will not require more time, since the number of electromagnetic simulations per iteration will be unchanged.
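The cross-polarization metric defined above can be sketched numerically. This is a schematic calculation on synthetic field data (the arrays and their values are illustrative, not the disclosed simulation results):

```python
import numpy as np

def cross_polarization_db(ex: np.ndarray, ey: np.ndarray) -> float:
    """Cross-polarization ratio in dB for yz-polarized illumination:
    |Ex|^2 integrated (here summed on a grid) over the output plane,
    divided by the integrated co-polarized power |Ey|^2."""
    power_cross = np.sum(np.abs(ex) ** 2)
    power_co = np.sum(np.abs(ey) ** 2)
    return 10.0 * np.log10(power_cross / power_co)

# Synthetic example: a cross-polarized field 100x weaker in power.
ey = np.ones((64, 64), dtype=complex)          # co-polarized output field
ex = 0.1 * np.ones((64, 64), dtype=complex)    # cross-polarized output field
ratio_db = cross_polarization_db(ex, ey)       # 0.01 power ratio -> -20 dB
```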

    [0043] Based on the previously detailed results, embodiments such as the one shown in FIG. 1A could be used to analyze the state of incident light at wavelengths between 532 nm and 620 nm and at incident angles within an approximately 5° cone. Additionally, the relative amplitude of the xz-polarized and yz-polarized components can be classified. The classification is done by measuring the ratio of intensity between pixels and using a look-up table to approximate the state of light. In the case of classifying wavelength and imaging angle, the behavior of smoothly transitioning between states is non-trivial and is not exhibited in typical optical devices such as gratings and lenses. These components tend to spatially shift a beam in the focal plane as the wavelength is varied (grating) or the incident angle is varied (lens), whereas the disclosed device only alters the ratio of transmission to the various pixels, with very little spatial shift of the beams. In the case of classifying polarization, this behavior tends to occur naturally in other optical devices such as birefringent prisms, because any polarization state can be decomposed into a coherent sum of two orthogonally polarized states.
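A minimal sketch of the look-up-table classification described above: measured pixel intensities are normalized and matched to the nearest stored pattern. The table entries and their values are hypothetical placeholders, not measured data from the disclosed device:

```python
import numpy as np

# Hypothetical lookup table mapping known input states to normalized
# pixel-intensity patterns (illustrative 3-pixel patterns, not measurements).
lookup = {
    "532nm": np.array([0.8, 0.1, 0.1]),
    "620nm": np.array([0.1, 0.8, 0.1]),
    "xz-pol": np.array([0.1, 0.1, 0.8]),
}

def classify(pixel_intensities: np.ndarray) -> str:
    """Return the lookup-table state whose normalized pattern is closest
    (Euclidean distance) to the normalized measured pixel intensities."""
    v = pixel_intensities / pixel_intensities.sum()
    return min(lookup, key=lambda k: np.linalg.norm(lookup[k] / lookup[k].sum() - v))

# A measurement dominated by the first pixel classifies as the 532 nm state.
state = classify(np.array([0.75, 0.15, 0.10]))
```

Interpolating a continuum of states would extend this by inverting the smooth intensity-ratio curves rather than snapping to the nearest discrete entry.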

    [0044] While classifying the wavelength and input angle states without ambiguity is not significantly affected by the incident polarization state, it does require that the incident light be a monochromatic planewave. The planewave assumption is a common assumption employed in Shack-Hartmann sensors and plenoptic sensors, and the assumption of monochromaticity can be satisfied with color filters, or if the light fundamentally comes from a narrow-band source such as a laser. Thus, there are numerous applications in which the disclosed methods and devices can be useful. As an example, the 3D structures designed based on the disclosed teachings can be tiled across a sensor array to enhance the functionality of an image sensor. Such an array could be used to accurately classify the properties of a laser beam, including all fundamental properties of wavelength, polarization, and incident angle within the device's acceptance cone. The angle-dependent nature of the device is similar in principle to angle-sensitive CMOS pixels, and could be used for light field imaging, since the incident wavefront angle can be computationally determined using the relative intensity of the imaging pixels. If coupled with a device such as a tunable bandpass filter, then a wavelength-dependent light field image can be obtained, with the added functionality of measuring the relative intensity of the two orthogonal linear polarization components for basic polarimetry. The disclosed methods and devices open up new degrees of freedom in computational imaging applications.

    [0045] Referring back to the embodiment of FIG. 1A, according to embodiments of the present disclosure, the qualitative behavior of the device does not depend on the specific assignment of functionalities to pixels. This is further clarified in what follows.

    [0046] There could be a large number of pixel permutations. Twenty different pixel distributions were considered and analyzed. Pixel distributions were chosen randomly, but were chosen to be sufficiently different from the original device by satisfying two criteria: 1) all pixels had to be moved from their original location; 2) no pixel distribution could be a 90°, 180°, or 270° rotation of the original, thus ensuring that devices were not similar by any rotational symmetry. Each device was optimized until the device was more than 15% binary according to the following equation, where B is the binarization, N is the total number of permittivity points in the design region, ε_i is the permittivity at the i-th point, and ε_min and ε_max are the smallest and largest values of permittivity allowed in the design region.

    [00001]

$$B = \frac{1}{N}\sum_i \left|\frac{\varepsilon_i - \varepsilon_{\mathrm{mid}}}{\varepsilon_{\mathrm{max}} - \varepsilon_{\mathrm{mid}}}\right| \quad (1)$$

$$\varepsilon_{\mathrm{mid}} = \frac{\varepsilon_{\mathrm{min}} + \varepsilon_{\mathrm{max}}}{2} \quad (2)$$
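The binarization metric of Eqs. (1)-(2) is straightforward to compute; a minimal sketch (array values are illustrative, using the SiO2/TiO2 refractive indices from the disclosure squared into permittivities):

```python
import numpy as np

def binarization(eps: np.ndarray, eps_min: float, eps_max: float) -> float:
    """Binarization metric B of Eq. (1): mean normalized distance of each
    permittivity sample from the midpoint eps_mid of Eq. (2). A fully binary
    device (only eps_min and eps_max present) gives B = 1; an all-midpoint
    grayscale device gives B = 0."""
    eps_mid = 0.5 * (eps_min + eps_max)                       # Eq. (2)
    return float(np.mean(np.abs((eps - eps_mid) / (eps_max - eps_mid))))

eps_min, eps_max = 1.5**2, 2.4**2       # SiO2 and TiO2 permittivities
b_binary = binarization(np.array([eps_min, eps_max, eps_min]), eps_min, eps_max)
b_gray = binarization(np.full(10, 0.5 * (eps_min + eps_max)), eps_min, eps_max)
```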

    The angular response of each functionality (the plots in FIG. 3A, panels (a-i)) was compared to one another using the overlap integral in Eq. 3 below, integrated over all simulated (θ, φ) points. This equation quantifies the similarity in the angular transmission profile, evaluating to zero if the responses are completely dissimilar and one if the responses are the same. It is equivalent to a cross-correlation with zero lag, normalized such that the maximum possible value is 1.

    [00002]

$$O(T_1, T_2) = \left[\frac{\left|\int T_1 T_2 \, d\theta \, d\phi\right|^2}{\int \left|T_1\right|^2 d\theta \, d\phi \int \left|T_2\right|^2 d\theta \, d\phi}\right]^{1/2} \quad (3)$$
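On a discrete (θ, φ) grid, the integrals of Eq. (3) reduce to sums. A minimal sketch (the grid size and random test data are illustrative):

```python
import numpy as np

def overlap(t1: np.ndarray, t2: np.ndarray) -> float:
    """Normalized overlap of Eq. (3) between two angular transmission maps
    sampled on the same (theta, phi) grid: 1 for identical (or merely
    rescaled) responses, approaching 0 for dissimilar ones."""
    num = np.abs(np.sum(t1 * t2)) ** 2
    den = np.sum(np.abs(t1) ** 2) * np.sum(np.abs(t2) ** 2)
    return float(np.sqrt(num / den))

# Example transmission map on an 18 x 36 (theta, phi) grid.
t = np.random.default_rng(0).random((18, 36))
same = overlap(t, t)            # identical maps -> 1.0
scaled = overlap(t, 3.0 * t)    # normalization removes overall scale -> 1.0
```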

    For each functionality, the overlap integral was evaluated for all combinations of devices. Each result is plotted as a scatter point in FIG. 4, where the x-axis represents the nine different functionalities and the y-axis represents the computed overlap integral. As can be seen from the scatter plots, the average overlap integral is greater than 96.9% for all functionalities, suggesting that the arbitrary choice of mapping functionalities to specific pixels has little effect on the behavior of the device.

    [0047] With further reference to FIG. 1A, the 3D structure may be designed using topology optimization, aided by adjoint-based inverse design. The multi-objective optimization may implement techniques to approach a nearly binary solution during the continuous permittivity design phase. A level-set approach may be used to impose fabrication constraints. Through various simulations, the inventors found that the designed 3D structures function at many more input conditions than those that were explicitly optimized for. This is an important feature of the disclosed method because it eases the computational burden of optimizing many different merit functions.

    [0048] The goal of photonic topology optimization is to find a refractive index distribution that maximizes an electromagnetic figure-of-merit (FoM). According to the teachings of the present disclosure, for a highly multi-functional structure, the associated optimization is multi-objective. Each objective is a mapping of an input to a FoM: the input is a planewave with a specific angle, polarization, and wavelength; the FoM is the power transmission through the desired pixel. The general procedure for this optimization is shown in FIG. 5. It consists of three main steps: first, the FoMs and their associated gradients are computed, step (51); second, the gradients are combined with a weighted average, step (52); third, the device is updated in accordance with the averaged gradient using either a density-based optimization, step (53), of a continuous permittivity, or a level-set optimization, step (54), of a discrete permittivity.
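The gradient-combination step (52) can be sketched as follows. The disclosure only states that a weighted average is used; the softmax-style weighting that favors the worst-performing objectives is my assumption, shown as one common heuristic:

```python
import numpy as np

def combined_gradient(gradients: list, foms: np.ndarray) -> np.ndarray:
    """Weighted average of per-objective gradients (step 52 of FIG. 5).
    ASSUMPTION: weight each objective by exp(-FoM) so low-performing
    objectives dominate the update; the disclosed weighting may differ."""
    weights = np.exp(-foms)
    weights /= weights.sum()
    return sum(w * g for w, g in zip(weights, gradients))

# Two toy objectives over a 4x4 design region with equal FoMs.
g1, g2 = np.ones((4, 4)), -np.ones((4, 4))
grad = combined_gradient([g1, g2], np.array([0.5, 0.5]))
# Equal FoMs -> equal weights, so these opposing gradients cancel.
```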

    [0049] The individual FoM gradients are evaluated at every point in the design region using the adjoint method, which entails combining the electric fields in the design region for a forward, step (55), and an adjoint simulation, step (56) to compute the desired gradient. In this case, the forward case simulates the device under the assumed planewave excitation, and the adjoint case simulates a dipole (with a particular phase and amplitude based on the forward simulation) placed at the center of the desired pixel. This choice of sources optimizes the device to focus light to the location of the dipole. However, we record the performance of the device as power transmission through the desired pixel rather than intensity at a point, since power transmission better represents the signal a sensor pixel would record.
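Combining the forward and adjoint fields into a gradient can be sketched schematically. The Re(E_fwd · E_adj) form below is the standard adjoint-method expression up to constant prefactors; the field shapes and values are illustrative, and units and constants are omitted:

```python
import numpy as np

def adjoint_gradient(e_forward: np.ndarray, e_adjoint: np.ndarray) -> np.ndarray:
    """Schematic adjoint-method permittivity gradient: the real part of the
    dot product of the forward and adjoint electric fields at every design
    voxel. Inputs are (nx, ny, nz, 3) complex vector fields; one forward
    plus one adjoint simulation yields the gradient at all voxels at once."""
    return np.real(np.sum(e_forward * e_adjoint, axis=-1))

# Toy fields on a 2x2x2 design region.
rng = np.random.default_rng(1)
e_f = rng.random((2, 2, 2, 3)) + 1j * rng.random((2, 2, 2, 3))
e_a = rng.random((2, 2, 2, 3)) + 1j * rng.random((2, 2, 2, 3))
grad = adjoint_gradient(e_f, e_a)   # one gradient value per design voxel
```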

    [0050] The optimization is performed in two phases: a density-based phase, and a level-set phase. Each phase has a unique update procedure. In the density-based optimization the permittivity of the device is modelled as a grid of grayscale permittivity values between the permittivity of the two material boundaries. This permittivity representation is effectively fictitious (unless an effective index material can be reliably fabricated), and the goal is to converge to a binary device that performs well and is comprised of only two materials.

    [0051] While the density-based optimization can converge to a fully binary solution, it is faster to terminate the optimization early and force each device voxel to its nearest material boundary. This thresholding step reduces the device performance, which is recovered by further optimizing the device with a level-set optimization. Level-set optimization models the device boundaries as the zero-level contour of a level-set function (φ(x, y)=0), and thus benefits from describing inherently binary structures. Empirically, the final device performance is dependent on the initial seed, hence the need for the preceding density-based optimization that converges to a near-binary solution. Here, level-set techniques are used to simultaneously optimize device performance and ensure the final device obeys fabrication constraints.

    [0052] With reference to step (53) of FIG. 5, the term level-set optimization stems from treating the device boundaries as the zero-level contour of a level-set function φ(x, y). The level-set function (LSF) is perturbed in accordance with the gradient, which has the effect of perturbing the zero-level contour of the LSF and the associated device boundaries to increase performance. A signed-distance function is used to define the LSF, where the value of φ(x, y) is proportional to the signed distance from the device boundary. The gradients of the electromagnetic FoM and a fabrication penalty function are combined and used to perturb the LSF, which has the effect of perturbing the boundary of the device such that the electromagnetic FoM is increased and the fabrication penalty term is decreased. The fabrication penalty term includes both a minimum radius of curvature constraint and a minimum gap size constraint of 60 nm. The LSF is then recomputed to ensure it remains a signed-distance function. This process is repeated until the FoM has converged (recovering the performance lost from the binarization step) and the fabrication penalty term is minimized.
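The perturb-and-redistance cycle can be illustrated as follows, assuming a uniform grid and using SciPy's Euclidean distance transform as a simple stand-in for a proper redistancing scheme (e.g., solving the eikonal equation); all names are illustrative:

```python
import numpy as np
from scipy import ndimage

def redistance(phi):
    """Rebuild phi as an approximate signed-distance function while
    preserving the sign (material assignment) of every voxel."""
    inside = phi < 0
    d_out = ndimage.distance_transform_edt(~inside)
    d_in = ndimage.distance_transform_edt(inside)
    return np.where(inside, -d_in, d_out)

def level_set_step(phi, velocity, dt=0.1):
    """Perturb the LSF along the combined gradient ('velocity'), which
    moves the zero contour and hence the device boundaries, then
    redistance so phi remains a signed-distance function."""
    return redistance(phi - dt * velocity)

# Toy example: a circular inclusion on a 32x32 grid.
y, x = np.mgrid[:32, :32]
phi0 = np.sqrt((x - 16.0) ** 2 + (y - 16.0) ** 2) - 8.0  # zero contour: circle
phi1 = level_set_step(phi0, velocity=np.ones_like(phi0))
```

A uniform positive velocity uniformly shrinks phi, so the zero contour (and the inclusion) expands slightly; a spatially varying velocity derived from the gradients reshapes the boundary locally.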

    [0053] According to the teachings of the present disclosure, fabrication constraints are enforced during the level-set optimization. To that end, a multi-dimensional fabrication penalty function is first analytically computed over the entire device region. In an embodiment, this function includes two types of fabrication constraints: one limits the radius of curvature of the device boundaries, and the other limits the smallest gap size of the device. The terms are integrated over the entire design region, yielding a real number (the fabrication penalty term) that we wish to minimize. The gradient of this fabrication penalty term is computed over the device region using a finite-difference approximation and is subsequently combined with the gradient of the electromagnetic FoM that is computed through the adjoint method, as indicated by step (56) of FIG. 5. The level-set function is perturbed in the direction of this combined gradient, resulting in a shifting of the material boundaries that co-optimizes the electromagnetic FoM and the fabrication penalty term.
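The finite-difference penalty gradient and its combination with the FoM gradient can be sketched as below; the squared discrete Laplacian is only a toy stand-in for the disclosure's analytic radius-of-curvature and gap-size terms, and all names are illustrative:

```python
import numpy as np

def laplacian(phi):
    # Periodic discrete Laplacian, used here as a simple curvature proxy.
    return (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
            + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)

def fab_penalty(phi):
    """Toy fabrication penalty: squared curvature proxy integrated
    (summed) over the design region, yielding a single real number."""
    return float(np.sum(laplacian(phi) ** 2))

def fab_penalty_grad_fd(phi, h=1e-5):
    """Central finite-difference gradient of the penalty over the
    device region, as described for step (56)."""
    grad = np.zeros_like(phi)
    for idx in np.ndindex(phi.shape):
        phi[idx] += h
        f_plus = fab_penalty(phi)
        phi[idx] -= 2.0 * h
        f_minus = fab_penalty(phi)
        phi[idx] += h  # restore the perturbed entry
        grad[idx] = (f_plus - f_minus) / (2.0 * h)
    return grad

def combined_gradient(g_fom, g_penalty, weight=0.5):
    # Ascend the electromagnetic FoM while descending the penalty.
    return g_fom - weight * g_penalty

phi = np.arange(25, dtype=float).reshape(5, 5) / 25.0
g_pen = fab_penalty_grad_fd(phi.copy())
g_total = combined_gradient(np.ones_like(phi), g_pen)
```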

    [0054] FIG. 6 shows exemplary convergence plots in accordance with embodiments of the present disclosure. Curves (610, 620) represent the average device FoM and device binarization, respectively. The FoM being plotted is the power transmission through the appropriate pixel. In some embodiments this is different from the FoM that is used to define the adjoint source, which is the intensity at the center of the appropriate pixel.

    [0055] The density-based optimization is considered converged when the binarization, which is defined by Eq. 1, is nearly 100%. At various points, the binarization is forced to increase by passing the permittivity through a sigmoidal function and changing the device permittivity to the output of this function. The FoM is then allowed to recover before repeating this discrete push in binarization. At some iteration, e.g. iteration 512 in the case shown in FIG. 6, the optimization switches to a level-set optimization, which recovers some of the performance that was lost during the final phases of the density optimization. During the level-set optimization, the binarization is recorded as approximately 90% because the boundaries of the device are smoothed out, yielding a continuous permittivity value. This can intuitively be thought of as the level-set function passing through a simulation mesh voxel, which is modelled in the FDTD simulation as a dielectric volume average of the two materials. Therefore, the density optimization is overly restrictive at binarization values above 90%, since it does not allow for this type of border smoothing, instead modelling the device as a discretized grid of binary voxels. This explains why the level-set optimization is able to recover substantially more performance than what is lost at the initial conversion from density-based to level-set optimization, recovering to approximately the same average FoM value that the density-based optimization had when it was 90% binary.
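One common form of the binarization metric and the sigmoidal push is sketched below; the disclosure's Eq. 1 and exact projection may differ in detail, so this is an assumed formulation:

```python
import numpy as np

def binarization(rho):
    """Binarization metric for a normalized density rho in [0, 1]:
    0 for a uniform gray device, 1 when every voxel is exactly 0 or 1.
    (A common choice; the disclosure's Eq. 1 may differ.)"""
    return float(np.mean(np.abs(2.0 * rho - 1.0)))

def sigmoid_push(rho, beta=8.0, eta=0.5):
    """Push the density toward binary with a standard tanh projection
    centered at eta; larger beta gives a harder push."""
    num = np.tanh(beta * eta) + np.tanh(beta * (rho - eta))
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return num / den

rng = np.random.default_rng(0)
rho = rng.uniform(0.05, 0.95, size=(16, 16))
pushed = sigmoid_push(rho)  # strictly more binary than rho
```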

    [0056] With further reference to FIG. 6, during the convergence process, the FoM may be permitted to decrease during the density phase of the optimization. In other words, a hard constraint is imposed that forces binarization to increase, thus forcing the design to tend towards a binary device while either maximizing or minimally sacrificing the FoM using the gradient information from the adjoint-retrieval process.

    [0057] FIG. 7 shows an exemplary 3D structure (730) according to an embodiment of the present disclosure. Such structure is built based on, for example, a 20-layer design. Constituent layers (740) are also shown. In an embodiment, the shaded region includes TiO2, and the white area includes SiO2. Layer 1 is the bottom layer nearest to a sensor array that may be disposed underneath 3D structure (730). In another embodiment, each layer (740) may measure 33 μm.

    [0058] In what follows, a breakdown/summary of the optimization algorithm in accordance with the teachings of the present disclosure is provided:

    1. Initial Density-Based Optimization:

    [0059] The device is initially modeled as a grid of continuous permittivity values between those of the two constituent materials (e.g., TiO2 and SiO2).
    [0060] Multiple figure-of-merit (FoM) functions are defined, corresponding to different desired optical functionalities (e.g., wavelength sorting, polarization sorting, and angle-dependent focusing).
    [0061] The optimization uses the adjoint method to efficiently compute gradients of the FoMs with respect to the permittivity distribution.
    [0062] A multi-objective optimization is performed, combining the gradients of different FoMs using a weighted average.
    [0063] The permittivity distribution is updated iteratively based on the combined gradient.

    2. Binarization Process:

    [0064] Throughout the density-based optimization, the algorithm periodically pushes the design towards a more binary state.
    [0065] This is performed by passing the current permittivity distribution through a sigmoidal function.
    [0066] After each binarization step, the optimization continues to allow the FoM to recover. During this recovery, binarization is prevented from decreasing using the method of step (54) of FIG. 5.

    3. Transition to Level-Set Optimization:

    [0067] Once the density-based optimization reaches a near-binary state (e.g., around 90% binary), the optimization switches to a level-set approach.
    [0068] The device structure is now represented by a level-set function (LSF), where the device boundaries are defined by the zero-level contour of the LSF.

    4. Level-Set Optimization:

    [0069] The LSF is initialized based on the final state of the density-based optimization.
    [0070] The optimization now updates the LSF, which in turn modifies the device boundaries.
    [0071] This phase incorporates both electromagnetic performance and fabrication constraints.
    [0072] Two types of fabrication constraints are included:
    [0073] Minimum radius of curvature for device boundaries
    [0074] Minimum gap size between features (e.g., 60 nm)
    [0075] A fabrication penalty term is computed analytically over the entire design region.
    [0076] The gradient of this penalty term is combined with the electromagnetic FoM gradient.
    [0077] The LSF is perturbed based on this combined gradient, optimizing both performance and manufacturability.

    5. Iterative Refinement:

    [0078] The level-set optimization continues iteratively until the FoM converges and the fabrication penalty term is minimized.
    [0079] After each update, the LSF is recomputed to ensure it remains a signed-distance function.
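The density phase of the flow summarized above can be exercised end-to-end on a toy problem in which the "FoM gradient" simply rewards closeness to a known binary layout; the level-set phase is omitted, and all names, parameters, and the surrogate objective are illustrative rather than the disclosure's actual electromagnetic FoM:

```python
import numpy as np

rng = np.random.default_rng(0)
target = (rng.uniform(size=(12, 12)) > 0.5).astype(float)  # toy binary layout

def fom_grad(rho):
    # Stands in for the adjoint gradient: ascend toward the target layout.
    return -2.0 * (rho - target)

def sigmoid_push(rho, beta=8.0, eta=0.5):
    # Periodic binarization push via a tanh projection (step 2 above).
    num = np.tanh(beta * eta) + np.tanh(beta * (rho - eta))
    return num / (np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta)))

rho = np.full((12, 12), 0.5)          # start from a uniform gray device
for i in range(200):                  # density phase (steps 1-2 above)
    rho = np.clip(rho + 0.05 * fom_grad(rho), 0.0, 1.0)
    if i % 50 == 49:
        rho = sigmoid_push(rho)
rho = np.where(rho < 0.5, 0.0, 1.0)   # threshold to the nearest material
```

On this convex surrogate the density phase recovers the target layout exactly after thresholding; the real electromagnetic objective is non-convex, which is why the level-set refinement phase is still needed.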

    [0080] The disclosed methods and devices provide the following benefits:
    [0081] 1) The simultaneous sorting of wavelength, polarization, and incident angle is an advancement over existing art, which is essentially based on sorting only wavelength and polarization. This enhancement allows for predictable control of all fundamental properties of light within a single component.
    [0082] 2) The device performance was found to behave well for inputs whose parameters lie between the states the device was trained on. This well-behaved interpolating behavior holds for wavelength, polarization, and incident angle. It has not been disclosed in prior art, and the results suggest a promising avenue towards increasing the generalizability and computational feasibility of inverse design.
    [0083] 3) The importance of mapping specific functions to specific pixels in the focal plane was studied and showed a remarkable insensitivity to this mapping. Previous work in this field involved only four pixels per volumetric device, so very few permutations were available. The present work involves nine pixels and studied 20 non-symmetric permutations.

    [0084] The scale of the disclosed optimization method is also substantially larger than in prior works. In addition to the increased number of outputs/pixels, the increased functional complexity required a thicker device to realize reasonable efficiencies, adding to the total number of optimized points within the design volume. At this larger scale, it was empirically found that standard methods of incorporating fabrication constraints failed. Most notably, the binarization constraint that restricts the ultimate design to only two materials, typically enforced through the application of a sigmoidal function to the device gradients, failed: it is a soft constraint that operates by scaling gradients and does not strictly enforce binarization at finite sigmoid strengths. To address this, a method that incorporates a hard constraint on binarization is provided. The method identifies the direction of steepest ascent that simultaneously increases binarization by a certain user-set, non-zero amount. This iterative update strategy ensures that the design ultimately converges to a binary design.
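One way to realize such a hard constraint (an assumed formulation, not necessarily the disclosure's exact method) is to project out the component of the FoM gradient that opposes binarization and add back just enough of the binarization gradient to guarantee a minimum linearized increase:

```python
import numpy as np

def ascent_with_binarization(g_fom, g_bin, delta=1e-2):
    """Update direction d that follows the FoM gradient g_fom subject to
    a hard constraint dot(b, d) >= delta, where b is the normalized
    binarization gradient: the linearized binarization metric must
    increase by at least the user-set amount delta."""
    b = g_bin / (np.linalg.norm(g_bin) + 1e-12)
    if np.dot(b, g_fom) >= delta:
        return g_fom  # already feasible: pure steepest ascent
    # Remove the component of g_fom that opposes binarization, then add
    # just enough of b to meet the minimum-increase requirement.
    tangent = g_fom - np.dot(b, g_fom) * b
    return tangent + delta * b

rng = np.random.default_rng(2)
g_fom = rng.standard_normal(64)
g_bin = rng.standard_normal(64)
d = ascent_with_binarization(g_fom, g_bin, delta=0.01)
```

Unlike scaling gradients through a sigmoid, this construction guarantees a non-zero increase in (linearized) binarization at every iteration, so the design cannot stall in a grayscale state.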