SPATIAL MODE PROCESSING FOR HIGH-RESOLUTION IMAGING

20240242495 · 2024-07-18

Abstract

Optical imaging includes: configuring a spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in a set of target spatial modes; receiving a set of output optical signals from the spatial mode sorter during a detection interval of time; processing information based at least in part on the set of output optical signals received in the detection interval of time; and providing an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing. During the detection interval of time, a total number of the output optical signals is greater than two and less than ten.

Claims

1. A method for optical imaging, the method comprising: configuring a spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in a set of target spatial modes; receiving a set of output optical signals from the spatial mode sorter during a detection interval of time; processing information based at least in part on the set of output optical signals received in the detection interval of time; and providing an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing; wherein during the detection interval of time, a total number of the output optical signals is greater than two and less than ten.

2. The method of claim 1, further comprising: receiving a second set of output optical signals from the spatial mode sorter during a second detection interval of time; processing information based at least in part on the second set of output optical signals received in the second detection interval of time; and providing a second estimated measurement for discriminating among the first set of two or more predetermined target images based at least in part on information derived from the processing.

3. The method of claim 1, further comprising: receiving a second set of output optical signals from the spatial mode sorter during a second detection interval of time; processing information based at least in part on the second set of output optical signals received in the second detection interval of time; and providing a second estimated measurement for discriminating among a second set of two or more predetermined target images based at least in part on information derived from the processing.

4. The method of claim 1, wherein the processing includes: determining, based at least in part on the set of output optical signals, information that is dependent on a second moment of a transverse spatial distribution of the input optical signal; and performing a statistical analysis of the determined information based on a decision rule that provides a discrimination among the two or more predetermined target images.

5. The method of claim 4, wherein the determined information further comprises information that is dependent on a first moment of a spatial distribution of the input optical signal.

6. The method of claim 4, wherein the statistical analysis includes additional information obtained by prior measurement or prior estimation.

7. The method of claim 4, wherein the decision rule comprises a comparison between the determined information and a set of second moments of the transverse spatial distributions of each of the predetermined target images.

8. The method of claim 4, wherein the determined information is dependent on a third moment of a transverse spatial distribution of the input optical signal.

9. The method of claim 6, wherein the decision rule comprises a comparison between the determined information and a set of third moments of the transverse spatial distributions of each of the predetermined target images.

10. The method of claim 1, wherein the set of target spatial modes includes: a zero-order radially symmetric spatial mode, and two first-order spatial modes that represent transverse spatial distributions along orthogonal axes.

11. The method of claim 1, wherein a subset of the set of target spatial modes are Hermite-Gaussian modes.

12. The method of claim 1, wherein a subset of the set of target spatial modes are distorted Hermite-Gaussian modes.

13. The method of claim 1, wherein a subset of the set of target spatial modes are matched to the spatial mode of a point spread function of an imaging system.

14. The method of claim 1, wherein the set of target spatial modes is modified to compensate for misalignment of the spatial mode sorter with respect to the received input optical signal.

15. The method of claim 1, wherein the set of target spatial modes is modified to compensate for optical aberrations distorting the received input optical signal.

16. The method of claim 1, further comprising spatially aligning the spatial mode sorter to compensate for changes in a spatial or angular position of the received optical signal.

17. The method of claim 1, wherein the two or more predetermined target images represent images of different types of vehicles.

18. The method of claim 1, wherein the two or more predetermined target images represent images of different celestial bodies.

19. The method of claim 1, wherein the two or more predetermined target images represent images of different biological structures.

20. The method of claim 11, wherein the processing includes assigning classification labels to an input optical signal from a set of two or more predetermined classification labels.

21. One or more non-transitory computer-readable media, having instructions stored thereon that, when executed by a computer system, cause the computer system to perform operations comprising: configuring a spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in a set of target spatial modes; receiving a set of output optical signals from the spatial mode sorter during a detection interval of time; processing information based at least in part on the set of output optical signals received in the detection interval of time; and providing an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing; wherein during the detection interval of time, a total number of the output optical signals is greater than two and less than ten.

22. An apparatus for imaging a distribution of one or more optical sources, the apparatus comprising: a spatial mode sorter that is configurable based on a set of target spatial modes onto which an input optical signal is projected; and a control module configured to: configure the spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in the set of target spatial modes; receive a set of output optical signals from the spatial mode sorter during a detection interval of time; process information based at least in part on the set of output optical signals received in the detection interval of time; and provide an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing; wherein during the detection interval of time, a total number of the output optical signals is greater than two and less than ten.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0032] The disclosure is best understood from the following detailed description when read in conjunction with the accompanying drawing. It is emphasized that, according to common practice, the various features of the drawing are not to scale. On the contrary, the dimensions of the various features are arbitrarily expanded or reduced for clarity.

[0033] FIG. 1 is a schematic diagram of an example system for spatial mode processing.

[0034] FIG. 2 is a schematic diagram illustrating direct imaging and spatial mode sorting.

[0035] FIG. 3 is a set of images including subsets (a, b, c, d) of images corresponding to four pairs of objects imaged with direct imaging, before and after each image is convolved with a Gaussian point spread function.

[0036] FIG. 4 is a set of four plots (a, b, c, d) comparing direct imaging and spatial mode sorting by plotting a scaling factor related to the success probability of object discrimination.

[0037] FIG. 5 is a plot showing the maximum number of objects that can be distinguished at a given threshold error rate with a 2D Gaussian aperture.

[0038] FIG. 6A is a schematic diagram of an implementation of a spatial mode sorter system.

[0039] FIG. 6B is a schematic diagram illustrating the sorting of a first spatial mode.

[0040] FIG. 6C is a schematic diagram illustrating the sorting of a second spatial mode.

[0041] FIG. 6D is a schematic diagram illustrating the sorting of a third spatial mode.

[0042] FIG. 7 is a schematic diagram of a second implementation of a spatial mode sorter.

[0043] FIG. 8 is a schematic diagram of a third implementation of a spatial mode sorter.

[0044] FIG. 9 shows a flowchart for an example spatial mode sorting procedure.

DETAILED DESCRIPTION

[0045] Object discrimination is at the heart of decision making in medical diagnostics, extrasolar astronomy, and autonomous sensing. For incoherent imaging with large standoff distances, small objects, and/or aperture-limited imaging systems, the physical principle of diffraction impedes accurate discrimination between spatially distinct objects. A classic heuristic criterion, attributed to Rayleigh, holds that two objects cannot be discriminated when their distinguishing features exhibit length scales smaller than the width of the system point spread function (PSF). More quantitatively, for hypothesis tests between such sub-Rayleigh objects, the probability of correct identification degrades as the PSF more severely perturbs the measured images.

[0046] A paradigm shift for sub-Rayleigh imaging has emerged from the calculation of task-specific error bounds that optimize over all measurements permitted by quantum mechanics. These quantum limits revealed that direct measurements of the optical intensity profile are responsible for the catastrophic degree of error implied by the Rayleigh criterion, whereas alternative measurements yield far lower error than direct imaging for many tasks. Quantum limits, and quantum-optimal measurements that achieve them, were found for specific hypothesis tests including one-vs-two point source discrimination and exoplanet detection. However, no general results exist that broadly apply to real-world object discrimination settings.

[0047] Referring to FIG. 1, an example of a system 100 for spatial mode processing includes an optical imaging system 102 that includes an optical processing module 104 (e.g., including an optical front-end and a processing module implemented on a special-purpose or general-purpose processor) for receiving an optical input 103 and producing measurement information 105 as output. The optical processing module 104 receives image information 106 to configure the optical imaging system 102 to discriminate among different images. For example, a set of predetermined target images 108 can be stored in a storage system 110. Between a series of detection intervals of time, the optical processing module 104 is, in some implementations, able to configure a configurable spatial mode sorter 112 to provide separate output optical signals for each spatial mode in a set of target spatial modes, as described in more detail herein. In some implementations, the spatial mode sorter 112 is initially configured and then used in a single detection interval to provide information for discriminating (e.g., for binary discrimination between two predetermined target images). Examples of some aspects of this part of the procedure (e.g., spatial-mode sorting) are described in more detail below.

[0048] For hypothesis tests between any two incoherent, quasi-monochromatic 2D objects in the sub-Rayleigh regime, examples are described herein for techniques to 1) compute the quantum Chernoff bound on asymptotic discrimination error, 2) quantify the sub-optimal error rate of direct imaging, and 3) identify a quantum-optimal measurement whose linear-optical design does not depend on the object models. The results of prophetic examples included herein extend to M-ary discrimination: the same object-independent measurement is quantum-optimal for any database of M>2 objects.

[0049] Without intending to be bound by theory, for describing some examples, we let $H_j$, $j \in \{1, \ldots, M\}$, denote a hypothesis corresponding to one of $M$ candidate objects. Under $H_j$, the quantum state $\rho_j$ on Hilbert space $\mathcal{H}$ describes one temporal mode of the quasi-monochromatic optical field collected by an imaging system. Many naturally occurring incoherent sources exhibit a small mean photon flux $\epsilon \ll 1$ per temporal mode such that multi-photon detection within the optical coherence time is vanishingly rare. In this case, a weak-source approximation uses the Fock expansion $\rho_j = (1-\epsilon)\lvert 0\rangle\langle 0\rvert + \epsilon\,\nu_j + O(\epsilon^2)$, where $\lvert 0\rangle\langle 0\rvert$ is the quantum vacuum state and the single-photon state $\nu_j$ carries all of the spatial information about the object under $H_j$. Since $\nu_j$ is restricted to single-photon (unary) excitation, its infinite-dimensional spatial-mode structure can be mapped to a Hilbert space $\mathcal{H}^{(1)}$.

[0050] Let an imaging system with a 2D coherent PSF $\psi(\vec{x})$ relate object- and image-plane position vectors $\vec{x}_{\mathrm{obj}} = \{x_{\mathrm{obj}}, y_{\mathrm{obj}}\}$ and $\vec{x} = \mu\,\vec{x}_{\mathrm{obj}}$ by the transverse magnification $\mu$. We model the spatial irradiance of the object under $H_j$ by a normalized radiant exitance profile $m_j(\vec{x}_{\mathrm{obj}})$. The state of the collected optical field on $\mathcal{H}^{(1)}$ is then

[00001] $\displaystyle \nu_j = \int_{-\infty}^{\infty} \frac{1}{\mu^2}\, m_j\!\left(\frac{\vec{x}}{\mu}\right) \lvert \psi_{\vec{x}} \rangle \langle \psi_{\vec{x}} \rvert \, d^2\vec{x}, \quad (1)$

where the pure state $\lvert \psi_{\vec{x}} \rangle = \int_{-\infty}^{\infty} \psi(\vec{a} - \vec{x})\, \lvert \vec{a} \rangle \, d^2\vec{a}$ encodes the effect of the aperture and $\lvert \vec{x} \rangle$ is a single-photon eigenket at image-plane position $\vec{x}$. In a basis of eigenvectors $\lvert \phi_m \rangle = \int_{-\infty}^{\infty} \phi_m(\vec{x})\, \lvert \vec{x} \rangle \, d^2\vec{x}$ on $\mathcal{H}^{(1)}$ set by orthogonal 2D functions $\phi_m(\vec{x})$, the density matrix

[00002] $\displaystyle \nu_j = \sum_{m,n=0}^{\infty} d_{j,m,n}\, \lvert \phi_m \rangle \langle \phi_n \rvert \quad (2)$

has elements $d_{j,m,n} = \int_{-\infty}^{\infty} \frac{1}{\mu^2}\, m_j(\vec{x}/\mu)\, c_{m,n}(\vec{x})\, d^2\vec{x}$, where $c_{m,n}(\vec{x}) = \langle \phi_m \vert \psi_{\vec{x}} \rangle \langle \psi_{\vec{x}} \vert \phi_n \rangle$.

[0051] Consider a binary hypothesis test between objects $m_1(\vec{x}_{\mathrm{obj}})$ and $m_2(\vec{x}_{\mathrm{obj}})$ with equal prior probabilities. To make a decision $Z \in \{1, 2\}$, a receiver measures the state $\rho_1^{\otimes \mathcal{M}}$ or $\rho_2^{\otimes \mathcal{M}}$ acquired over $\mathcal{M}$ temporal modes and then applies a pre-determined decision rule to the outcome(s). If the conditional probability of deciding $H_j$ under true hypothesis $H_j$ is $P_{\mathcal{M}}(Z = j \mid H_j)$, the average error probability $P_{\mathrm{err},\mathcal{M}} = [P_{\mathcal{M}}(Z = 1 \mid H_2) + P_{\mathcal{M}}(Z = 2 \mid H_1)]/2$ is a symmetric performance metric for the measurement/decision rule scheme.

[0052] Optimizing over all such schemes, the quantum-limited minimum average error $P_{\mathrm{err,min},\mathcal{M}} \propto e^{-\xi_Q \mathcal{M}}$ follows an exponential decay when $\mathcal{M} \gg 1$, where the quantum Chernoff exponent (QCE) $\xi_Q$ quantifies how efficiently each additional copy of the received state $\rho_j$ suppresses the minimum error. We later show that the quantum limit can be written as $P_{\mathrm{err,min},\mathcal{M}} \propto e^{-\xi_Q^{(1)} N}$, where $N = \epsilon \mathcal{M}$ is the average photon number of $\rho_j^{\otimes \mathcal{M}}$ and where the per-photon QCE

[00003] $\displaystyle \xi_Q^{(1)} = -\log\!\left[\min_{0 \le s \le 1} \mathrm{Tr}\!\left(\nu_1^{\,s}\, \nu_2^{\,1-s}\right)\right] \quad (3)$

obeys $\xi_Q \approx \epsilon\, \xi_Q^{(1)}$ with weak-source sub-Rayleigh objects.
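
The per-photon QCE of Eq. (3) can be evaluated numerically for any pair of single-photon density matrices. The sketch below, using hypothetical 2×2 toy states, takes fractional matrix powers by eigendecomposition and grid-searches over $s$; it illustrates the definition and is not the patent's implementation.

```python
import numpy as np

def mat_power(rho, s):
    """Fractional power of a Hermitian PSD density matrix via eigendecomposition."""
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)   # guard against tiny negative eigenvalues
    return (v * w**s) @ v.conj().T

def qce(rho1, rho2, grid=1001):
    """Per-photon QCE of Eq. (3): -log min_s Tr(rho1^s rho2^(1-s))."""
    ss = np.linspace(0.0, 1.0, grid)
    traces = [np.trace(mat_power(rho1, s) @ mat_power(rho2, 1 - s)).real
              for s in ss]
    return -np.log(min(traces))

# Two hypothetical single-photon states in a two-mode basis.
rho1 = np.array([[0.9, 0.0], [0.0, 0.1]])
rho2 = np.array([[0.7, 0.0], [0.0, 0.3]])
print(qce(rho1, rho2))
```

Identical states give a vanishing exponent (no discrimination is possible), while any pair of distinct states yields a strictly positive exponent.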

[0053] The most general description of a measurement, a positive operator-valued measure (POVM), consists of a set of positive semi-definite operators $\{\Pi_z\}_z$ on $\mathcal{H}$, linked to measurement outcomes $\{z\}$ on an outcome space $\mathcal{Z}$, that resolve the identity operator as $\sum_{z \in \mathcal{Z}} \Pi_z = I$. For a particular measurement performed on $\rho_j^{\otimes \mathcal{M}}$, the minimum average error probability among all decision rules goes as $P_{\mathrm{err,min,Meas},\mathcal{M}} \propto e^{-\xi_{\mathrm{Meas}} \mathcal{M}}$, where $\xi_{\mathrm{Meas}}$ is the Chernoff exponent (CE) for the chosen measurement. The quantum and classical statistics are related by the achievable quantum Chernoff bound $\xi_{\mathrm{Meas}} \le \xi_Q$; that is, the QCE automatically optimizes over the CEs of all POVMs on $\mathcal{H}^{\otimes \mathcal{M}}$. Under the weak-source approximation, we show that the minimal error of any measurement that uses temporally-resolved photon counting goes as $P_{\mathrm{err,min,Meas},\mathcal{M}} \propto e^{-\xi_{\mathrm{Meas}}^{(1)} N}$, where $\xi_{\mathrm{Meas}} \approx \epsilon\, \xi_{\mathrm{Meas}}^{(1)}$ in the sub-Rayleigh regime and where

[00004] $\displaystyle \xi_{\mathrm{Meas}}^{(1)} = -\log\!\left[\min_{0 \le s \le 1} \sum_{z \in \mathcal{Z}^{(1)}} P(z \mid \nu_1)^s\, P(z \mid \nu_2)^{1-s}\right] \quad (4)$

is the per-photon CE, which depends on probabilities $P(z \mid \nu_j) = \mathrm{Tr}(\Pi_z^{(1)} \nu_j)$ of outcomes, in a single-photon subspace $\mathcal{Z}^{(1)}$, of the reduced POVM $\{\Pi_z^{(1)}\}_{z \in \mathcal{Z}^{(1)}}$ on $\mathcal{H}^{(1)}$.
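
Given the outcome probabilities $P(z \mid \nu_j)$ of a particular measurement, the per-photon CE of Eq. (4) reduces to a one-dimensional minimization over $s$. A minimal sketch with hypothetical three-outcome distributions:

```python
import numpy as np

def chernoff_exponent(p, q, grid=1001):
    """Per-photon CE of Eq. (4): -log min_s sum_z P(z|v1)^s P(z|v2)^(1-s)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    ss = np.linspace(0.0, 1.0, grid)
    return -np.log(min(np.sum(p**s * q**(1 - s)) for s in ss))

# Hypothetical outcome statistics of some three-outcome measurement.
p1 = [0.5, 0.3, 0.2]   # P(z | v_1)
p2 = [0.2, 0.3, 0.5]   # P(z | v_2)
print(chernoff_exponent(p1, p2))
```

As with the QCE, identical distributions give a zero exponent; the exponent grows as the two outcome distributions become more distinguishable.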

[0054] A measurement whose CE matches the QCE ($\xi_{\mathrm{Meas}}^{(1)} = \xi_Q^{(1)}$) is considered to be quantum-optimal for the given hypothesis test. Conversely, a relative gap ($\xi_{\mathrm{Meas}}^{(1)} < \xi_Q^{(1)}$) indicates a fundamental sub-optimality in the measurement that cannot be remedied by data post-processing.

[0055] Our goals are twofold: compute the QCE $\xi_Q^{(1)}$ for generalized sub-Rayleigh object discrimination and find a universally optimal measurement for which $\xi_{\mathrm{Meas}}^{(1)} = \xi_Q^{(1)}$. As a first step, for an arbitrary object $m_2(\vec{x}_{\mathrm{obj}})$, if object $m_1(\vec{x}_{\mathrm{obj}})$ is a single point source at position $\vec{x}_{1,\mathrm{obj}} = \vec{x}_1/\mu$, we find that the QCE is exactly

[00005] $\displaystyle \xi_Q^{(1)} = -\log\!\left[\int_{-\infty}^{\infty} \frac{1}{\mu^2}\, m_2\!\left(\frac{\vec{x} - \vec{x}_1}{\mu}\right) \left\lvert \Gamma(\vec{x}) \right\rvert^2 d^2\vec{x}\right], \quad (5)$

where $\Gamma(\vec{x}) = \langle \psi_{\vec{0}} \vert \psi_{\vec{x}} \rangle$ is the 2D autocorrelation of the PSF and $\vec{0}$ denotes the origin of the image-plane coordinate system. In this case, $\xi_{\mathrm{BSPADE}}^{(1)} = \xi_Q^{(1)}$ is achieved by a 2D binary spatial mode demultiplexing (BSPADE) device that passively couples the PSF-matched spatial mode (i.e., $\lvert \psi_{\vec{0}} \rangle$) to one shot-noise-limited photon-counting detector and all other light to a second identical detector. As an example, for discriminating one-vs-two point sources with a 2D Gaussian PSF $\psi(\vec{x}) = (2\pi\sigma^2)^{-1/2} \exp(-(x^2 + y^2)/4\sigma^2)$, where $d$ is the source separation under $H_2$, we confirm that the BSPADE CE enjoys a quadratic ($d^2$) scaling advantage when $d \ll \sigma$ over the CE of idealized 2D direct imaging (an infinite spatial bandwidth, unity fill factor, unity quantum efficiency photon-counting detector array).
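
As a concrete check of Eq. (5), the sketch below evaluates the QCE for the one-vs-two point source example with the Gaussian PSF above, whose autocorrelation works out to $\Gamma(\vec{x}) = \exp(-\lvert\vec{x}\rvert^2/8\sigma^2)$. For two sources at $\pm d/2$ the integral over $m_2$ collapses to an average of $\lvert\Gamma\rvert^2$ at the two source positions, giving $d^2/(16\sigma^2)$ and exhibiting the quadratic small-$d$ scaling; the closed form is derived here for illustration.

```python
import numpy as np

sigma = 1.0   # PSF width (arbitrary units); magnification taken as 1

def gamma(x):
    """Autocorrelation of the 2D Gaussian PSF along one axis:
    Gamma(x) = exp(-x^2 / (8 sigma^2))."""
    return np.exp(-x**2 / (8 * sigma**2))

def qce_one_vs_two(d):
    """Eq. (5) with H1 = one point source at the origin and H2 = two point
    sources at +/- d/2 on the x axis: the integral over m_2 reduces to an
    average of |Gamma|^2 at the two source positions."""
    return -np.log(0.5 * (abs(gamma(+d / 2))**2 + abs(gamma(-d / 2))**2))

for d in (0.05, 0.1, 0.2):
    # second column matches the d^2/(16 sigma^2) quadratic scaling
    print(d, qce_one_vs_two(d), d**2 / (16 * sigma**2))
```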

[0056] We now generalize to arbitrary $m_1(\vec{x}_{\mathrm{obj}})$ and $m_2(\vec{x}_{\mathrm{obj}})$, with applications in bioimaging, astronomy, and computer vision. We focus on the sub-Rayleigh limit $\theta \ll 1$, where $\theta = \mu\Delta/\sigma$ quantifies the geometric ratio between the magnified spatial extent of the object(s) $\Delta$ and the PSF width $\sigma$.

[0057] We also define $\tilde{m}_j(\vec{x}_{\mathrm{obj}}) = \Delta^2\, m_j(\Delta \vec{x}_{\mathrm{obj}})$, $\tilde{\psi}(\vec{x}) = \sigma\, \psi(\sigma \vec{x})$, and $\tilde{\Gamma}(\vec{x}) = \Gamma(\sigma \vec{x})$ as non-dimensionalized representations of the object(s), the coherent PSF, and the PSF autocorrelation function, respectively, to isolate the influence of diffraction (i.e., $\theta$) from that of the object and aperture. In some implementations, the objects' 2D centroids coincide at a location known to the receiver either from prior knowledge or a preliminary measurement, such that the task is object identification, not localization, and the PSF $\psi(\vec{x})$ is even in $x$ and $y$, as with a circularly symmetric aperture.

[0058] To derive the generalized QCE, we represent $\nu_1$ and $\nu_2$ [Eq. (2)] in a basis of PSF-adapted (PAD) eigenvectors $\lvert \tilde{\phi}_m \rangle = \int_{-\infty}^{\infty} \tilde{\phi}_m(\vec{x})\, \lvert \vec{x} \rangle \, d^2\vec{x}$ on $\mathcal{H}^{(1)}$ via Gram-Schmidt orthogonalization of the 2D Cartesian derivatives of the non-dimensionalized PSF $\tilde{\psi}(\vec{x})$. For a 2D Gaussian PSF, the PAD basis functions $\tilde{\phi}_m(\vec{x})$ are Hermite-Gauss modes. After expanding $\nu_1$ and $\nu_2$ in powers of $\theta \ll 1$ and truncating to finite dimensions, we use operator perturbation theory to find

[00006] $\displaystyle \xi_Q^{(1)} = \max_{0 \le s \le 1} \left[ \left( s\, m_{1,x^2} + (1-s)\, m_{2,x^2} - m_{1,x^2}^{\,s}\, m_{2,x^2}^{\,1-s} \right) \gamma_{x^2} + \left( s\, m_{1,y^2} + (1-s)\, m_{2,y^2} - m_{1,y^2}^{\,s}\, m_{2,y^2}^{\,1-s} \right) \gamma_{y^2} \right] \theta^2 + O(\theta^3), \quad (6)$

where $m_{j,x^k y^l} = \int_{-\infty}^{\infty} x_{\mathrm{obj}}^k\, y_{\mathrm{obj}}^l\, \tilde{m}_j(\vec{x}_{\mathrm{obj}})\, d^2\vec{x}_{\mathrm{obj}}$ are spatial moments of the non-dimensionalized object models and $\gamma_{x^k y^l} = -\left[\mathrm{Re}\!\left(\partial^{k+l} \tilde{\Gamma}(\vec{x})/\partial x^k \partial y^l\right)\right]_{\vec{x} = \vec{0}}$ are derivatives of the PSF autocorrelation function. The QCE of Eq. (6) represents the quantum limit for discrimination between any two incoherent objects in the sub-Rayleigh limit.
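
Eq. (6) depends on the candidate objects only through their second moments, so it can be evaluated with a one-dimensional grid search over $s$. The sketch below uses hypothetical moments for a vertical-vs-horizontal ellipse pair (the $x$ and $y$ moments swap between hypotheses) and takes $\gamma_{x^2} = \gamma_{y^2} = 1/4$, the value implied for a Gaussian PSF by $\tilde{\Gamma}(\vec{x}) = \exp(-\lvert\vec{x}\rvert^2/8)$ under the sign convention used here (an assumption of this illustration).

```python
import numpy as np

def qce_eq6(m1x2, m2x2, m1y2, m2y2, gx2, gy2, theta, grid=1001):
    """Lowest-order QCE of Eq. (6) from second moments of the two object
    models and the PSF-autocorrelation coefficients gamma_{x^2}, gamma_{y^2}."""
    s = np.linspace(0.0, 1.0, grid)
    fx = s * m1x2 + (1 - s) * m2x2 - m1x2**s * m2x2**(1 - s)
    fy = s * m1y2 + (1 - s) * m2y2 - m1y2**s * m2y2**(1 - s)
    return float(np.max(fx * gx2 + fy * gy2)) * theta**2

# Hypothetical ellipse moments that swap x <-> y between the hypotheses.
xi = qce_eq6(m1x2=0.25, m2x2=0.05, m1y2=0.05, m2y2=0.25,
             gx2=0.25, gy2=0.25, theta=0.1)
print(xi)
```

By the AM-GM inequality each bracketed term is non-negative, so the exponent is zero exactly when the two objects share the same second moments.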

[0059] The CE for direct imaging with a zeroless PSF that is separable in $x$ and $y$ is given by

[00007] $\displaystyle \xi_{\mathrm{Direct}}^{(1)} = (1/32)\left(\mathcal{I}_x + \mathcal{I}_y\right) \theta^4 + O(\theta^5), \quad (7)$

with $\mathcal{I}_a = \left(m_{1,a^2} - m_{2,a^2}\right)^2 \int_{-\infty}^{\infty} \kappa_{a^2}(\vec{x})^2 / \lvert \tilde{\psi}(\vec{x}) \rvert^2\, d^2\vec{x}$ for $a \in \{x, y\}$, where $\kappa_{x^k y^l}(\vec{x}) = \partial^{k+l} \lvert \tilde{\psi}(\vec{x}) \rvert^2 / \partial x^k \partial y^l$ are derivatives of the incoherent PSF. Eqs. (6) and (7) reveal a quadratic scaling sub-optimality in direct imaging, $\xi_{\mathrm{Direct}}^{(1)} \propto \theta^4$ vs. $\xi_Q^{(1)} \propto \theta^2$, for all binary discrimination tasks. Alternatively, a TriSPADE measurement sorts the collected light between the PSF-matched spatial mode and the first-order PAD-basis modes in two perpendicular dimensions, using only linear optics and photodetectors to implement a POVM $\Pi_0 = \lvert \tilde{\phi}_0 \rangle \langle \tilde{\phi}_0 \rvert$, $\Pi_1 = \lvert \tilde{\phi}_1 \rangle \langle \tilde{\phi}_1 \rvert$, and $\Pi_2 = \lvert \tilde{\phi}_2 \rangle \langle \tilde{\phi}_2 \rvert$ that does not depend on the candidate object models. The resulting CE $\xi_{\mathrm{TriSPADE}}^{(1)}$ achieves the QCE when $\theta \ll 1$.
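
The $O(\theta^2)$ gap between Eqs. (6) and (7) can be made concrete: at fixed object moments, halving $\theta$ cuts the direct-imaging exponent by a factor of 16 but the quantum limit by only a factor of 4, so the ratio of the two exponents shrinks as $\theta^2$. A minimal sketch with hypothetical coefficients:

```python
def xi_q(c_q, theta):
    """Lowest-order quantum limit of Eq. (6): xi ~ c_q * theta^2."""
    return c_q * theta**2

def xi_direct(Ix, Iy, theta):
    """Lowest-order direct-imaging CE of Eq. (7): (1/32)(Ix + Iy) theta^4."""
    return (Ix + Iy) * theta**4 / 32

# Hypothetical coefficients; only the theta-scaling matters here.
c_q, Ix, Iy = 0.02, 0.5, 0.5
for theta in (0.4, 0.2, 0.1):
    print(theta, xi_direct(Ix, Iy, theta) / xi_q(c_q, theta))
```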

[0060] FIG. 2 shows an example of a direct imaging system 200 that collects incoming light and images the light onto a detector 202. In contrast, a spatial mode sorting system 210 collects incoming light and a spatial mode sorter 212 projects it onto a first spatial mode 214A that is detected by a first detector 213A, a second spatial mode 214B that is detected by a second detector 213B, and a third spatial mode 214C that is detected by a third detector 213C.
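
The sorter in FIG. 2 physically computes projections of the incoming field onto the target modes, with each detector reading out one squared overlap. A 1D numerical sketch (Hermite-Gauss modes with width parameter 1 and hypothetical superposition amplitudes):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# PSF-matched fundamental mode and first-order Hermite-Gauss mode
hg0 = (2.0 * np.pi)**-0.25 * np.exp(-x**2 / 4.0)
hg1 = x * hg0   # already normalized: integral of x^2 * hg0^2 equals 1

field = 0.8 * hg0 + 0.6 * hg1   # hypothetical input superposition

# detected power in each sorted output = |<mode|field>|^2
powers = [(np.sum(m * field) * dx)**2 for m in (hg0, hg1)]
print(powers)   # ~[0.64, 0.36]
```

Because the modes are orthonormal, the detected powers recover the squared amplitudes of the superposition, which is exactly the information the three detectors 213A-213C provide.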

[0061] The upper two images of FIG. 3a show the object irradiance of a vertical and horizontal ellipse while the lower two images of FIG. 3a show their Gaussian point spread function convolved image-plane intensity profiles. The upper two images of FIG. 3b show the object irradiance of a filled and hollow pore while the lower two images of FIG. 3b show their Gaussian point spread function convolved image-plane intensity profiles. The upper two images of FIG. 3c show the object irradiance of two possible exoplanet detection scenarios while the lower two images of FIG. 3c show their Gaussian point spread function convolved image-plane intensity profiles. The upper two images of FIG. 3d show the object irradiance of two different QR codes while the lower two images of FIG. 3d show their Gaussian point spread function convolved image-plane intensity profiles.

[0062] To illustrate our results, in FIGS. 4a-4d we numerically evaluate $\xi_Q^{(1)}$, $\xi_{\mathrm{Direct}}^{(1)}$, and $\xi_{\mathrm{TriSPADE}}^{(1)}$ for the examples depicted in FIGS. 3a-3d, respectively. Thick lines represent the analytical lowest-order (in $\theta$) results for $\xi_Q^{(1)}$ (solid) and $\xi_{\mathrm{Direct}}^{(1)}$ (dashed). Thin lines represent numerical results for $\xi_Q^{(1)}$ (solid), $\xi_{\mathrm{Direct}}^{(1)}$ (dotted), and $\xi_{\mathrm{TriSPADE}}^{(1)}$ (dashed). A misalignment of $\sigma/10$ is used for the lower TriSPADE CE in FIG. 4c. The lowest-order behavior of the QCE in $\theta$ [Eq. (6)] is an excellent approximation for both the full QCE and the TriSPADE CE throughout the sub-Rayleigh regime ($\theta < 1$), and the results clearly exhibit the expected $O(\theta^2)$ scaling gap. We also find TriSPADE to be robust to optical misalignment; a mode sorter that is misaligned from the mutual object centroid retains the quadratic scaling advantage over direct imaging. These results suggest that TriSPADE can perform a wide range of sub-Rayleigh hypothesis tests with substantially less error than conventional imaging methods.

[0063] We now extend our analysis to $M > 2$ equiprobable objects, such as a database of QR codes. The M-ary QCE $\xi_{Q,M}^{(1)} = \min_{i \ne j} \xi_{Q,i,j}^{(1)}$, which characterizes the quantum-limited asymptotic error for discriminating $M$ states, is found by minimizing the pairwise QCEs $\xi_{Q,i,j}^{(1)}$ over each pair of states $\{\nu_i, \nu_j\}$. The similarly defined M-ary CE $\xi_{\mathrm{Meas},M}^{(1)} = \min_{i \ne j} \xi_{\mathrm{Meas},i,j}^{(1)}$ obeys the multiple quantum Chernoff bound $\xi_{\mathrm{Meas},M}^{(1)} \le \xi_{Q,M}^{(1)}$. We have shown that $\xi_{\mathrm{TriSPADE},i,j}^{(1)} = \xi_{Q,i,j}^{(1)}$ for any two states when $\theta \ll 1$. Therefore, the TriSPADE POVM, which does not depend on the candidate states, will simultaneously achieve the QCE for all pairs of states in a database. It follows that $\xi_{\mathrm{TriSPADE},M}^{(1)} = \xi_{Q,M}^{(1)}$. We conclude that TriSPADE is a quantum-optimal measurement for any M-object database in the sub-Rayleigh limit.
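
The M-ary exponent is simply the minimum over the pairwise exponents, so the hardest-to-distinguish pair dominates the asymptotic error. A sketch that computes pairwise classical CEs from hypothetical outcome distributions for $M = 3$ candidate objects:

```python
import numpy as np
from itertools import combinations

def chernoff_exponent(p, q, grid=1001):
    """Pairwise per-photon CE: -log min_s sum_z p(z)^s q(z)^(1-s)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    ss = np.linspace(0.0, 1.0, grid)
    return -np.log(min(np.sum(p**s * q**(1 - s)) for s in ss))

def mary_exponent(dists):
    """M-ary CE: minimum of the pairwise exponents over all object pairs."""
    return min(chernoff_exponent(dists[i], dists[j])
               for i, j in combinations(range(len(dists)), 2))

# Hypothetical TriSPADE outcome distributions for three candidate objects.
dists = [np.array([0.90, 0.06, 0.04]),
         np.array([0.85, 0.10, 0.05]),
         np.array([0.80, 0.10, 0.10])]
print(mary_exponent(dists))
```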

[0064] Finally, in FIG. 5 we show how many objects can be distinguished to a desired accuracy with a conventional or quantum-optimal measurement. The inset of FIG. 5 shows the error probability vs. mean detected photon number. We find that TriSPADE resolves more objects than direct imaging when $\theta < \sqrt{2}/\left(\sqrt{m_{x^2,\max}} + \sqrt{m_{x^2,\min}}\right)$, regardless of the threshold error rate $P_{\mathrm{Thresh}}^{(1)}$. As the threshold is relaxed, meaning more photons are available and/or more error can be tolerated (inset), the gap between TriSPADE and direct imaging grows to over two orders of magnitude for small $\theta$. We conclude that TriSPADE significantly increases the complexity of distinguishable sub-Rayleigh object databases without compromising performance.

[0065] The examples described herein show that a realizable optical receiver could substantially enhance decision-making accuracy for super-resolution biological, astronomical, and terrestrial imaging.

[0066] The spatial mode sorting may be performed with various optical configurations, as discussed below.

[0067] FIG. 6A shows an example of a spatial mode sorting system 600. The incoming beam 602 reflects off a spatial light modulator 604 containing five independently controlled regions that modifies the intensity or phase of the incoming beam 602. A mirror 606 reflects the incoming beam 602 after it has interacted with one or more of the regions of the spatial light modulator 604. The incoming beam 602 is then sent to a detector 608, such as an EMCCD or CMOS camera. The information produced by the detector is then sent to a processor 610, such as an FPGA, which can then control the intensity and phase of future incoming light 602 after reflecting from the spatial light modulator 604.

[0068] FIG. 6B shows the spatial mode sorting system 600 with an incoming beam containing a first mode 620A and sorting it to a first region of the detector image 622A.

[0069] FIG. 6C shows the spatial mode sorting system 600 with an incoming beam containing a second mode 620B and sorting it to a second region of the detector image 622B.

[0070] FIG. 6D shows the spatial mode sorting system 600 with an incoming beam containing a third mode 620C and sorting it to a third region of the detector image 622C.

[0071] If a superposition of the modes 620A, 620B, and 620C is received in the beam 602, the ratio of the spot intensities on the resulting detector image can be used to infer the relative strength of the modes in the received beam 602.
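
The inference of relative mode strengths described in paragraph [0071] can be sketched as a normalization of the integrated counts in each spot region. The numbers are hypothetical; real detector data would also need background subtraction and crosstalk calibration.

```python
import numpy as np

def mode_strengths(spot_counts):
    """Relative modal powers inferred from the integrated intensity of each
    sorted spot on the detector (one spot per target mode)."""
    counts = np.asarray(spot_counts, dtype=float)
    return counts / counts.sum()

# Hypothetical integrated counts in regions 622A, 622B, and 622C.
print(mode_strengths([820, 130, 50]))   # -> [0.82, 0.13, 0.05]
```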

[0072] FIG. 7 shows a second example of a spatial mode sorting system 700. A first spatial light modulator 701 reflects and modifies the intensity or phase of the incoming beam 710. A second spatial light modulator 702, a third spatial light modulator 703, a fourth spatial light modulator 704, and a fifth spatial light modulator 705 further reflect and modify the intensity or phase of the incoming beam 710.

[0073] FIG. 8 shows a third example of a spatial mode sorting system 800. A first spatial light modulator 801 transmits and modifies the intensity or phase of the incoming beam 810. A second spatial light modulator 802, a third spatial light modulator 803, a fourth spatial light modulator 804, and a fifth spatial light modulator 805 further transmit and modify the intensity or phase of the incoming beam 810.

[0074] FIG. 9 shows a flowchart for an example spatial mode sorting procedure 900 for discriminating among a first set of predetermined target images. The procedure 900 includes configuring (902) a spatial mode sorter to provide, in response to receiving (904) an input optical signal, a separate output optical signal for each spatial mode in a set of target spatial modes. The procedure 900 includes processing (906) information based at least in part on the set of output optical signals received in the detection interval of time. The procedure 900 includes providing (908) an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing. The procedure 900 may be performed when, during the detection interval of time, a total number of the output optical signals is greater than two and less than ten. The procedure 900 may be iterated multiple times until a goal is reached (e.g., until no further imaging is allowed), with each iteration providing (908) an estimated measurement for discriminating among a first set of two or more predetermined target images. The procedure 900 may be iterated multiple times, providing (908) a plurality of estimated measurements for discriminating among a plurality of sets of two or more predetermined target images.
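
A control-loop sketch of procedure 900 under stated assumptions: the sorter and detector classes below are hypothetical stand-ins for real hardware interfaces, and the nearest-reference decision rule is a toy example rather than the statistical analysis recited in the claims.

```python
class StubSorter:
    """Hypothetical sorter interface; a real device would program an SLM."""
    default_modes = ("HG00", "HG10", "HG01")
    def configure(self, target_spatial_modes):      # step 902
        self.modes = tuple(target_spatial_modes)

class StubDetector:
    """Hypothetical detector returning one count stream per sorted mode."""
    def read_interval(self):                        # step 904
        return [[5, 7], [1, 0], [0, 2]]             # three output signals

def nearest_image(stats, target_images):
    """Toy decision rule: pick the target whose reference statistics are
    closest in squared error."""
    return min(target_images,
               key=lambda t: sum((a - b)**2 for a, b in zip(stats, t[1])))[0]

def run_procedure(sorter, detector, target_images, max_intervals=3):
    sorter.configure(sorter.default_modes)          # configure (902)
    estimates = []
    for _ in range(max_intervals):
        outputs = detector.read_interval()          # receive (904)
        assert 2 < len(outputs) < 10                # claimed signal count range
        stats = [sum(o) for o in outputs]           # process (906)
        estimates.append(nearest_image(stats, target_images))  # provide (908)
    return estimates

targets = [("vertical ellipse", [12, 1, 2]), ("horizontal ellipse", [12, 2, 1])]
print(run_procedure(StubSorter(), StubDetector(), targets))
```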

[0075] The techniques described above for controlling and configuring a spatial mode sorting system can be implemented using software for execution on a computer system. For example, the software can define procedures in one or more computer programs that execute on one or more programmed or programmable computer systems (e.g., desktop, distributed, client/server computer systems) each including at least one processor, at least one data storage system (e.g., including volatile and non-volatile memory and/or storage elements), at least one input device (e.g., keyboard and mouse) or port, and at least one output device (e.g., monitor) or port. The software may form one or more modules of a larger program.

[0076] The software may be provided on a non-transitory medium such as a computer-readable storage medium (e.g., solid state memory or media, or magnetic or optical media) readable by a general or special purpose programmable computer system, or delivered over a communication medium (e.g., encoded in a propagated signal) such as a network to a computer system where it is stored in a non-transitory medium and executed. Each such computer program can be used to configure and operate the computer system when the non-transitory medium is read by the computer system to perform the procedures of the software.

[0077] While the disclosure has been described in connection with certain embodiments, it is to be understood that the disclosure is not to be limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.