Spatial mode processing for high-resolution imaging
12573193 · 2026-03-10
Assignee
Inventors
- Michael Grace (Tucson, AZ, US)
- Saikat Guha (Tucson, AZ, US)
- Mark Neifeld (Tucson, AZ, US)
- Amit Ashok (Tucson, AZ, US)
CPC classification
G06V10/92
PHYSICS
G06V10/60
PHYSICS
International classification
G06V10/88
PHYSICS
G06V10/60
PHYSICS
Abstract
Optical imaging includes: configuring a spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in a set of target spatial modes; receiving a set of output optical signals from the spatial mode sorter during a detection interval of time; processing information based at least in part on the set of output optical signals received in the detection interval of time; and providing an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing. During the detection interval of time, a total number of the output optical signals is greater than two and less than ten.
Claims
1. A method for optical imaging, the method comprising: configuring a spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in a set of target spatial modes; receiving a set of output optical signals from the spatial mode sorter during a detection interval of time; processing information based at least in part on the set of output optical signals received in the detection interval of time; and providing an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing; wherein during the detection interval of time, a total number of the output optical signals is greater than two and less than ten.
2. The method of claim 1, further comprising: receiving a second set of output optical signals from the spatial mode sorter during a second detection interval of time; processing information based at least in part on the second set of output optical signals received in the second detection interval of time; and providing a second estimated measurement for discriminating among the first set of two or more predetermined target images based at least in part on information derived from the processing.
3. The method of claim 1, further comprising: receiving a second set of output optical signals from the spatial mode sorter during a second detection interval of time; processing information based at least in part on the second set of output optical signals received in the second detection interval of time; and providing a second estimated measurement for discriminating among a second set of two or more predetermined target images based at least in part on information derived from the processing.
4. The method of claim 1, wherein the processing includes: determining, based at least in part on the set of output optical signals, information that is dependent on a second moment of a transverse spatial distribution of the input optical signal; and performing a statistical analysis of the determined information based on a decision rule that provides a discrimination among the two or more predetermined target images.
5. The method of claim 4, wherein the determined information further comprises information that is dependent on a first moment of a spatial distribution of the input optical signal.
6. The method of claim 4, wherein the statistical analysis includes additional information obtained by prior measurement or prior estimation.
7. The method of claim 4, wherein the decision rule comprises a comparison between the determined information and a set of second moments of the transverse spatial distributions of each of the predetermined target images.
8. The method of claim 4, wherein the determined information is dependent on a third moment of a transverse spatial distribution of the input optical signal.
9. The method of claim 8, wherein the decision rule comprises a comparison between the determined information and a set of third moments of the transverse spatial distributions of each of the predetermined target images.
10. The method of claim 1, wherein the set of target spatial modes includes: a zero-order radially symmetric spatial mode, and two first-order spatial modes that represent transverse spatial distributions along orthogonal axes.
11. The method of claim 1, wherein a subset of the set of target spatial modes are Hermite-Gaussian modes.
12. The method of claim 1, wherein a subset of the set of target spatial modes are distorted Hermite-Gaussian modes.
13. The method of claim 1, wherein a subset of the set of target spatial modes are matched to the spatial mode of a point spread function of an imaging system.
14. The method of claim 1, wherein the set of target spatial modes is modified to compensate for misalignment of the spatial mode sorter with respect to the received input optical signal.
15. The method of claim 1, wherein the set of target spatial modes is modified to compensate for optical aberrations distorting the received input optical signal.
16. The method of claim 1, further comprising spatially aligning the spatial mode sorter to compensate for changes in a spatial or angular position of the received optical signal.
17. The method of claim 1, wherein the two or more predetermined target images represent images of different types of vehicles.
18. The method of claim 1, wherein the two or more predetermined target images represent images of different celestial bodies.
19. The method of claim 1, wherein the two or more predetermined target images represent images of different biological structures.
20. The method of claim 11, wherein the processing includes assigning classification labels to an input optical signal from a set of two or more predetermined classification labels.
21. One or more non-transitory computer-readable media, having instructions stored thereon that, when executed by a computer system, cause the computer system to perform operations comprising: configuring a spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in a set of target spatial modes; receiving a set of output optical signals from the spatial mode sorter during a detection interval of time; processing information based at least in part on the set of output optical signals received in the detection interval of time; and providing an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing; wherein during the detection interval of time, a total number of the output optical signals is greater than two and less than ten.
22. An apparatus for imaging a distribution of one or more optical sources, the apparatus comprising: a spatial mode sorter that is configurable based on a set of target spatial modes onto which an input optical signal is projected; and a control module configured to: configure the spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in the set of target spatial modes; receive a set of output optical signals from the spatial mode sorter during a detection interval of time; process information based at least in part on the set of output optical signals received in the detection interval of time; and provide an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing; wherein during the detection interval of time, a total number of the output optical signals is greater than two and less than ten.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The disclosure is best understood from the following detailed description when read in conjunction with the accompanying drawing. It is emphasized that, according to common practice, the various features of the drawing are not to scale. On the contrary, the dimensions of the various features are arbitrarily expanded or reduced for clarity.
DETAILED DESCRIPTION
(14) Object discrimination is at the heart of decision making in medical diagnostics, extrasolar astronomy, and autonomous sensing. For incoherent imaging with large standoff distances, small objects, and/or aperture-limited imaging systems, the physical principle of diffraction impedes accurate discrimination between spatially distinct objects. A classic heuristic criterion, attributed to Rayleigh, holds that two objects cannot be discriminated when their distinguishing features exhibit length scales smaller than the width of the system point spread function (PSF). More quantitatively, for hypothesis tests between such sub-Rayleigh objects, the probability of correct identification degrades as the PSF more severely perturbs the measured images.
(15) A paradigm shift for sub-Rayleigh imaging has emerged from the calculation of task-specific error bounds that optimize over all measurements permitted by quantum mechanics. These quantum limits revealed that direct measurements of the optical intensity profile are responsible for the catastrophic degree of error implied by the Rayleigh criterion, whereas alternative measurements yield far lower error than direct imaging for many tasks. Quantum limits, and quantum-optimal measurements that achieve them, were found for specific hypothesis tests including one-vs-two point source discrimination and exoplanet detection. However, no general results exist that broadly apply to real-world object discrimination settings.
(16) Referring to
(17) For hypothesis tests between any two incoherent, quasi-monochromatic 2D objects in the sub-Rayleigh regime, examples are described herein for techniques to 1) compute the quantum Chernoff bound on asymptotic discrimination error, 2) quantify the sub-optimal error rate of direct imaging, and 3) identify a quantum-optimal measurement whose linear-optical design does not depend on the object models. The results of prophetic examples included herein extend to M-ary discrimination: the same object-independent measurement is quantum-optimal for any database of M>2 objects.
(18) Without intending to be bound by theory, for describing some examples, we let $H_j$, $j \in [1, M]$, denote a hypothesis corresponding to one of $M$ candidate objects. Under $H_j$, the quantum state $\rho_j$ on a Hilbert space $\mathcal{H}$ describes one temporal mode of the quasi-monochromatic optical field collected by an imaging system. Many naturally occurring incoherent sources exhibit a small mean photon flux $\epsilon \ll 1$ per temporal mode, such that multi-photon detection within the optical coherence time is vanishingly rare. In this case, a weak-source approximation uses the Fock expansion $\rho_j = (1-\epsilon)\,|0\rangle\langle 0| + \epsilon\,\eta_j + O(\epsilon^2)$, where $|0\rangle\langle 0|$ is the quantum vacuum state and the single-photon state $\eta_j$ carries all of the spatial information about the object under $H_j$. Since $\eta_j$ is restricted to single-photon (unary) excitation, its infinite-dimensional spatial-mode structure can be mapped to a Hilbert space $\mathcal{H}^{(1)}$.
(19) Let an imaging system with a 2D coherent PSF $\psi(\vec{x})$ relate object- and image-plane position vectors $\vec{x}_{\mathrm{obj}}=\{x_{\mathrm{obj}}, y_{\mathrm{obj}}\}$ and $\vec{x}=\mathcal{M}\vec{x}_{\mathrm{obj}}$ by the transverse magnification $\mathcal{M}$. We model the spatial irradiance of the object under $H_j$ by a normalized radiant exitance profile $m_j(\vec{x}_{\mathrm{obj}})$. The state of the collected optical field on $\mathcal{H}^{(1)}$ is then

(20) $\eta_j = \int_{\mathbb{R}^2} \mathcal{M}^{-2}\, m_j(\vec{x}/\mathcal{M})\, |\psi_{\vec{x}}\rangle\langle\psi_{\vec{x}}|\, d^2\vec{x},$

where the pure state

(21) $|\psi_{\vec{x}}\rangle = \int_{\mathbb{R}^2} \psi(\vec{x}'-\vec{x})\, |\vec{x}'\rangle\, d^2\vec{x}'$

encodes the effect of the aperture and $|\vec{x}\rangle$ is a single-photon eigenket at image-plane position $\vec{x}$. In a basis of eigenvectors

(22) $|\phi_m\rangle = \int_{\mathbb{R}^2} \phi_m(\vec{x})\, |\vec{x}\rangle\, d^2\vec{x}$

set by orthogonal 2D functions $\phi_m(\vec{x})$, the density matrix

(23) $\eta_j = \sum_{m,n} d_{j,m,n}\, |\phi_m\rangle\langle\phi_n|$

has elements $d_{j,m,n} = \int_{\mathbb{R}^2} \mathcal{M}^{-2}\, m_j(\vec{x}/\mathcal{M})\, c_{m,n}(\vec{x})\, d^2\vec{x}$, where $c_{m,n}(\vec{x}) = \langle\phi_m|\psi_{\vec{x}}\rangle\langle\psi_{\vec{x}}|\phi_n\rangle$.
(24) Consider a binary hypothesis test between objects $m_1(\vec{x}_{\mathrm{obj}})$ and $m_2(\vec{x}_{\mathrm{obj}})$ with equal prior probabilities. To make a decision $Z \in [1,2]$, a receiver measures the state $\rho_1^{\otimes N}$ or $\rho_2^{\otimes N}$ acquired over $N$ temporal modes and then applies a pre-determined decision rule to the outcome(s). If the conditional probability of deciding $H_i$ under true hypothesis $H_j$ is $P_N(Z=i|H_j)$, the average error probability $P_{\mathrm{err},N} = [P_N(Z=1|H_2) + P_N(Z=2|H_1)]/2$ is a symmetric performance metric for the measurement/decision-rule scheme.
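By way of a non-limiting numerical illustration: for a single observation with equal priors, the average error probability above is minimized by the maximum-likelihood rule and reduces to $P_{\mathrm{err}} = \frac{1}{2}\sum_z \min[P(z|H_1), P(z|H_2)]$. The sketch below assumes this standard identity; the function name and example outcome distributions are hypothetical.

```python
import numpy as np

def average_error(p1, p2):
    """Minimum average error probability for an equal-prior binary
    hypothesis test on one observation, achieved by the maximum-
    likelihood rule: P_err = (1/2) * sum_z min(p1(z), p2(z))."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    return 0.5 * np.minimum(p1, p2).sum()

# Hypothetical outcome distributions over three detector outcomes.
p_H1 = [0.70, 0.20, 0.10]
p_H2 = [0.10, 0.30, 0.60]
print(average_error(p_H1, p_H2))  # 0.5*(0.1+0.2+0.1) = 0.2
```

Identical distributions give the chance level 0.5, and disjoint supports give error 0, as expected.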
(25) Optimizing over all such schemes, the quantum-limited minimum average error $P_{\mathrm{err,min},N} \sim e^{-N\xi_Q}$ follows an exponential decay when $N \gg 1$, where the quantum Chernoff exponent (QCE) $\xi_Q$ quantifies how efficiently each additional copy of the received state $\rho_j$ suppresses the minimum error. We later show that the quantum limit can be written as $P_{\mathrm{err,min},N} \sim e^{-N\epsilon\,\xi_Q^{(1)}}$, where the per-photon QCE

(26) $\xi_Q^{(1)} = -\ln\Big[\min_{0\le s\le 1} \mathrm{Tr}\big(\eta_1^{\,s}\,\eta_2^{\,1-s}\big)\Big]$

obeys $\xi_Q \approx \epsilon\,\xi_Q^{(1)}$ for weak-source sub-Rayleigh objects.
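As a non-limiting illustration, a per-photon QCE of the standard quantum Chernoff form $-\ln\min_{0\le s\le 1}\mathrm{Tr}(\eta_1^s\,\eta_2^{1-s})$ can be evaluated numerically for small density matrices; the sketch below uses a simple grid search over $s$, and the function names are illustrative only.

```python
import numpy as np

def frac_power(rho, s):
    # Fractional power of a positive semi-definite matrix via eigendecomposition.
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)
    return (v * w**s) @ v.conj().T

def qce(rho1, rho2, num=1001):
    # Quantum Chernoff exponent: -ln min_{0<=s<=1} Tr(rho1^s rho2^(1-s)),
    # located here by a dense grid search over s.
    ss = np.linspace(0.0, 1.0, num)
    vals = [np.trace(frac_power(rho1, s) @ frac_power(rho2, 1 - s)).real
            for s in ss]
    return -np.log(min(vals))

# Example: two pure single-photon states with overlap 1/sqrt(2);
# for pure states the exponent is -ln |<a|b>|^2 = ln 2.
a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0]) / np.sqrt(2)
print(qce(np.outer(a, a), np.outer(b, b)))  # ~0.6931
```

Identical states give a zero exponent (no discrimination power per copy), which is a useful sanity check.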
(27) The most general description of a measurement, a positive operator-valued measure (POVM), consists of a set of positive semi-definite operators $\{\Pi_z\}_{z\in\mathcal{Z}}$ on $\mathcal{H}$, linked to measurement outcomes $\{z\}$ on an outcome space $\mathcal{Z}$, that resolve the identity operator as $\sum_z \Pi_z = I$. For a particular measurement performed on $\rho_j^{\otimes N}$, the minimum average error probability among all decision rules goes as $P_{\mathrm{err,min,Meas},N} \sim e^{-N\xi_{\mathrm{Meas}}}$, where $\xi_{\mathrm{Meas}}$ is the Chernoff exponent (CE) for the chosen measurement. The quantum and classical statistics are related by the achievable quantum Chernoff bound $\xi_{\mathrm{Meas}} \le \xi_Q$; that is, the QCE automatically optimizes over the CEs of all POVMs on $\mathcal{H}^{\otimes N}$. Under the weak-source approximation, we show that the minimal error of any measurement that uses temporally-resolved photon counting goes as $P_{\mathrm{err,min,Meas},N} \sim e^{-N\epsilon\,\xi_{\mathrm{Meas}}^{(1)}}$, where

(28) $\xi_{\mathrm{Meas}}^{(1)} = -\ln\Big[\min_{0\le s\le 1} \sum_z P(z|\eta_1)^s\, P(z|\eta_2)^{1-s}\Big]$

is the per-photon CE, which depends on the probabilities $P(z|\eta_j) = \mathrm{Tr}(\Pi_z^{(1)}\eta_j)$ of outcomes, in the single-photon subspace $\mathcal{H}^{(1)}$, of the reduced POVM $\{\Pi_z^{(1)}\}$ on $\mathcal{H}^{(1)}$.
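As a non-limiting illustration, a per-photon CE of the standard classical Chernoff form depends only on the single-photon outcome probabilities and can be evaluated by a grid search over $s$; the function name and example distributions below are hypothetical.

```python
import numpy as np

def chernoff_exponent(p1, p2, num=1001):
    # Classical Chernoff exponent for two outcome distributions:
    # -ln min_{0<=s<=1} sum_z p1(z)^s * p2(z)^(1-s), via grid search over s.
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    ss = np.linspace(0.0, 1.0, num)
    vals = [np.sum(p1**s * p2**(1 - s)) for s in ss]
    return -np.log(min(vals))

# Symmetric example: by symmetry the minimizing s is 1/2, giving the
# Bhattacharyya value -ln sum_z sqrt(p1 p2) = -ln 0.6 here.
print(chernoff_exponent([0.9, 0.1], [0.1, 0.9]))  # ~0.5108
```

Identical distributions give a zero exponent, consistent with no discrimination power per detected photon.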
(29) A measurement whose CE matches the QCE ($\xi_{\mathrm{Meas}}^{(1)} = \xi_Q^{(1)}$) is considered quantum-optimal for the given hypothesis test. Conversely, a relative gap ($\xi_{\mathrm{Meas}}^{(1)} < \xi_Q^{(1)}$) indicates a fundamental sub-optimality in the measurement that cannot be remedied by data post-processing.
(30) Our goals are twofold: compute the QCE $\xi_Q^{(1)}$ for generalized sub-Rayleigh object discrimination and find a universally optimal measurement for which $\xi_{\mathrm{Meas}}^{(1)} = \xi_Q^{(1)}$. As a first step, for an arbitrary object $m_2(\vec{x}_{\mathrm{obj}})$ and an object $m_1(\vec{x}_{\mathrm{obj}})$ that is a single point source at position $\vec{x}_{1,\mathrm{obj}} = \vec{x}_1/\mathcal{M}$, we find that the QCE is exactly

(31)

where $\Gamma(\vec{x}) = \langle\psi_{\vec{0}}|\psi_{\vec{x}}\rangle$ is the 2D autocorrelation of the PSF and $\vec{0}$ denotes the origin of the image-plane coordinate system. In this case, $\xi_{\mathrm{BSPADE}}^{(1)} = \xi_Q^{(1)}$ is achieved by a 2D binary spatial mode demultiplexing (BSPADE) device that passively couples the PSF-matched spatial mode (i.e., $|\psi_{\vec{0}}\rangle$) to one shot-noise-limited photon-counting detector and all other light to a second identical detector. As an example, for discriminating one-vs-two point sources with a 2D Gaussian PSF $\psi(\vec{x}) = (2\pi\sigma^2)^{-1/2}\exp[-(x^2+y^2)/4\sigma^2]$, where $d$ is the source separation under $H_2$, we confirm that the BSPADE CE enjoys a quadratic ($\propto d^2$) scaling advantage for $d \ll \sigma$ over the CE of idealized 2D direct imaging (an infinite spatial bandwidth, unity fill factor, unity quantum efficiency photon-counting detector array).
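As a non-limiting numerical illustration: for the Gaussian amplitude PSF above, the PSF autocorrelation evaluates in closed form to $\Gamma(d)=\exp(-d^2/8\sigma^2)$, so a point source displaced by $d$ couples out of the PSF-matched BSPADE mode with probability $1-|\Gamma(d)|^2 \approx d^2/4\sigma^2$, the quadratic small-$d$ scaling noted in the text. The brute-force quadrature below checks the closed form; the grid extents and step sizes are illustrative choices.

```python
import numpy as np

sigma, d = 1.0, 0.3  # PSF width and source displacement (arbitrary units)

# Sample the 2D Gaussian amplitude PSF psi(x,y) = (2*pi*sigma^2)^(-1/2)
# * exp(-(x^2+y^2)/(4*sigma^2)) on a grid wide enough that it decays to ~0.
x = np.linspace(-8, 8, 801)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
psi = lambda X, Y: (2*np.pi*sigma**2)**-0.5 * np.exp(-(X**2 + Y**2)/(4*sigma**2))

# PSF autocorrelation Gamma(d) = <psi_0|psi_d> by brute-force quadrature;
# for this PSF the closed form is exp(-d^2/(8*sigma^2)).
gamma = np.sum(psi(X, Y) * psi(X - d, Y)) * dx**2
print(gamma, np.exp(-d**2/(8*sigma**2)))

# For d << sigma the photon leaves the PSF-matched mode with probability
# 1 - |Gamma|^2 = 1 - exp(-d^2/(4*sigma^2)) ~ d^2/(4*sigma^2).
```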
(32) We now generalize to arbitrary $m_1(\vec{x}_{\mathrm{obj}})$ and $m_2(\vec{x}_{\mathrm{obj}})$, with applications in bioimaging, astronomy, and computer vision. We focus on the sub-Rayleigh limit $\gamma \ll 1$, where $\gamma$ quantifies the geometric ratio between the magnified spatial extent of the object(s) and the PSF width $\sigma$.

(33) We also define $\tilde{m}_j(\vec{x}_{\mathrm{obj}})$, $\tilde{\psi}(\vec{x})$, and $\tilde{\Gamma}(\vec{x})$ as non-dimensionalized representations of the object(s), the coherent PSF, and the PSF autocorrelation function, respectively, to isolate the influence of diffraction (i.e., $\gamma$) from that of the object and aperture. In some implementations, the objects' 2D centroids coincide at a location known to the receiver, either from prior knowledge or a preliminary measurement, such that the task is object identification rather than localization, and the PSF $\psi(\vec{x})$ is even in $x$ and $y$, as with a circularly symmetric aperture.
(34) To derive the generalized QCE, we represent $\eta_1$ and $\eta_2$ [Eq. (2)] in a basis of PSF-adapted (PAD) eigenvectors $|\tilde{\phi}_m\rangle = \int_{\mathbb{R}^2} \tilde{\phi}_m(\vec{x})\,|\vec{x}\rangle\, d^2\vec{x}$ on $\mathcal{H}^{(1)}$ via Gram-Schmidt orthogonalization of the 2D Cartesian derivatives of the non-dimensionalized PSF $\tilde{\psi}(\vec{x})$. For a 2D Gaussian PSF, the PAD basis functions $\tilde{\phi}_m(\vec{x})$ are Hermite-Gauss functions. After expanding $\eta_1$ and $\eta_2$ in powers of $\gamma \ll 1$ and truncating to finite dimensions, we use operator perturbation theory to find
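The PAD construction can be sketched in one dimension: for a Gaussian PSF, Gram-Schmidt orthonormalization of $\{\psi, \partial_x\psi, \partial_x^2\psi\}$ yields the first three Hermite-Gauss modes. The grid parameters and helper names below are illustrative, and a 1D slice is used in place of the full 2D construction.

```python
import numpy as np

sigma = 1.0
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

# 1D Gaussian amplitude PSF and its first two analytic derivatives.
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))
dpsi = -x / (2 * sigma**2) * psi
d2psi = (x**2 / (4 * sigma**4) - 1 / (2 * sigma**2)) * psi

def gram_schmidt(funcs, dx):
    # Orthonormalize sampled functions under the L2 inner product.
    basis = []
    for f in funcs:
        g = f.astype(float).copy()
        for b in basis:
            g -= (np.sum(b * g) * dx) * b
        g /= np.sqrt(np.sum(g**2) * dx)
        basis.append(g)
    return basis

phi0, phi1, phi2 = gram_schmidt([psi, dpsi, d2psi], dx)

# phi1 matches the first-order Hermite-Gauss mode ~ x * psi (up to sign).
hg1 = x * psi / np.sqrt(np.sum((x * psi)**2) * dx)
print(abs(np.sum(phi1 * hg1) * dx))  # ~1.0
```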
(35)
where $m_{j,x}$
(36) The CE for direct imaging with a zeroless PSF that is separable in $x$ and $y$ is given by
(37)
with $\Delta_a = (m_{1,a} - m_{2,a})$. A quantum-optimal measurement is given by a three-channel spatial mode demultiplexing (TriSPADE) POVM with elements $\Pi_0 = |\tilde{\phi}_0\rangle\langle\tilde{\phi}_0|$, $\Pi_1 = |\tilde{\phi}_1\rangle\langle\tilde{\phi}_1|$, and $\Pi_2 = |\tilde{\phi}_2\rangle\langle\tilde{\phi}_2|$, that does not depend on the candidate object models. The resulting CE $\xi_{\mathrm{TriSPADE}}^{(1)}$ achieves the QCE when $\gamma \ll 1$.
(39) The upper two images of
(40) To illustrate our results, in
(41) We now extend our analysis to $M>2$ equiprobable objects, such as a database of QR codes. The M-ary QCE $\xi_{Q,M}^{(1)} = \min_{i \ne j} \xi_{Q,i,j}^{(1)}$, which characterizes the quantum-limited asymptotic error for discriminating $M$ states, is found by minimizing the pairwise QCEs $\xi_{Q,i,j}^{(1)}$ over each pair of states $\{\eta_i, \eta_j\}$. The similarly defined M-ary CE $\xi_{\mathrm{Meas},M}^{(1)} = \min_{i \ne j} \xi_{\mathrm{Meas},i,j}^{(1)}$ obeys the multiple quantum Chernoff bound $\xi_{\mathrm{Meas},M}^{(1)} \le \xi_{Q,M}^{(1)}$. We have shown that $\xi_{\mathrm{TriSPADE},i,j}^{(1)} = \xi_{Q,i,j}^{(1)}$ for any two states when $\gamma \ll 1$. Therefore, the TriSPADE POVM, which does not depend on the candidate states, will simultaneously achieve the QCE for all pairs of states in a database. It follows that $\xi_{\mathrm{TriSPADE},M}^{(1)} = \xi_{Q,M}^{(1)}$. We conclude that TriSPADE is a quantum-optimal measurement for any M-object database in the sub-Rayleigh limit.
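As a non-limiting illustration, the M-ary extension reduces to pairwise exponents: the hardest-to-distinguish pair dominates the asymptotic error. The sketch below computes a minimum over pairwise classical Chernoff exponents from outcome distributions; the function names and example data are hypothetical.

```python
import numpy as np
from itertools import combinations

def pairwise_ce(p1, p2, num=1001):
    # Chernoff exponent between two outcome distributions (grid search over s).
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    ss = np.linspace(0.0, 1.0, num)
    return -np.log(min(np.sum(p1**s * p2**(1 - s)) for s in ss))

def m_ary_ce(dists):
    # M-ary Chernoff exponent: minimize the pairwise exponent over all
    # pairs of hypotheses, since the worst pair dominates the error.
    return min(pairwise_ce(p, q) for p, q in combinations(dists, 2))

# Three hypothetical single-photon outcome distributions.
db = [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.4, 0.4, 0.2]]
print(m_ary_ce(db))
```

For $M=2$ the definition reduces to the pairwise exponent, and any database containing two identical distributions has a zero M-ary exponent.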
(42) Finally, in
(43) The examples described herein show that a realizable optical receiver could substantially enhance decision-making accuracy for super-resolution biological, astronomical, and terrestrial imaging.
(44) The spatial mode sorting may be performed with various optical configurations, as discussed below.
(49) If a superposition of the modes 620A, 620B, and 620C is received in the beam 602, the ratio of the spot intensities on the resulting detector image can be used to infer the relative strength of the modes in the received beam 602.
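The intensity-ratio inference above can be sketched as a linear unmixing problem when each mode maps to a distinct detector spot with some crosstalk. The crosstalk matrix and mode powers below are hypothetical placeholders, not measured values for the depicted sorter.

```python
import numpy as np

# Hypothetical crosstalk matrix for a three-mode sorter: entry [i, j] is
# the fraction of power in input mode j that lands on detector spot i.
A = np.array([[0.96, 0.03, 0.02],
              [0.02, 0.95, 0.03],
              [0.02, 0.02, 0.95]])

true_powers = np.array([0.5, 0.3, 0.2])   # relative strengths of modes 620A/B/C
spots = A @ true_powers                   # simulated spot intensities

# Recover the mode powers from the spot readings by least squares, then
# renormalize to relative strengths.
est, *_ = np.linalg.lstsq(A, spots, rcond=None)
est = est / est.sum()
print(np.round(est, 3))                   # recovers [0.5, 0.3, 0.2]
```

With a well-conditioned crosstalk matrix the inversion is exact; in practice shot noise on the spot intensities would motivate a non-negative or weighted least-squares variant.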
(53) The techniques described above for controlling and configuring a spatial mode sorting system can be implemented using software for execution on a computer system. For example, the software can define procedures in one or more computer programs that execute on one or more programmed or programmable computer systems (e.g., desktop, distributed, client/server computer systems) each including at least one processor, at least one data storage system (e.g., including volatile and non-volatile memory and/or storage elements), at least one input device (e.g., keyboard and mouse) or port, and at least one output device (e.g., monitor) or port. The software may form one or more modules of a larger program.
(54) The software may be provided on a non-transitory medium such as a computer-readable storage medium (e.g., solid state memory or media, or magnetic or optical media) readable by a general or special purpose programmable computer system, or delivered over a communication medium (e.g., encoded in a propagated signal), such as a network, to a computer system where it is stored in a non-transitory medium and executed. Each such computer program can be used to configure and operate the computer system when the non-transitory medium is read by the computer system to perform the procedures of the software.
(55) While the disclosure has been described in connection with certain embodiments, it is to be understood that the disclosure is not to be limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.