System and Method for Detection, Characterization, and Imaging of a Stellar Occultation
20170227351 · 2017-08-10
Inventors
CPC classification
G02B23/00
PHYSICS
G01P13/00
PHYSICS
G01P15/00
PHYSICS
International classification
G01P15/00
PHYSICS
Abstract
An asteroid characterization and imaging system comprising at least one light collecting aperture positioned to collect intensity time history data and a data analysis unit configured to detect an occultation event and process said intensity time history data. Embodiments according to the present invention include a method of detecting, characterizing and imaging a near-Earth object comprising collecting intensity time history data by at least one light collecting aperture positioned to observe a star, detecting a stellar occultation event, recording said intensity time history data, processing said intensity time history data, predicting at least one of a set of object characteristics, and imaging said near-Earth celestial object.
Claims
1. An asteroid characterization and imaging system comprising: at least one light collecting aperture positioned to collect intensity time history data; and a data analysis unit configured to detect an occultation event and process said intensity time history data.
2. The system of claim 1 further comprising: a data receiving station wherein said data receiving station connectively communicates with said at least one light collecting aperture.
3. The system of claim 1 wherein said data analysis unit comprises: a communications module; a memory; a central processing unit; an imaging module; and a characteristic calculation module wherein said imaging module processes said intensity time history data to produce an image of said asteroid.
4. The system of claim 3 wherein said characteristic calculation module processes said intensity time history data to calculate asteroid characteristic data.
5. The system of claim 4 wherein said asteroid characteristic data comprises at least one of: a velocity calculation of said asteroid; a size of said asteroid; a trajectory of said asteroid; and a distance between said asteroid and said at least one light collecting aperture.
6. A method of detecting, characterizing and imaging a near-Earth celestial object comprising: collecting intensity time history data by at least one light collecting aperture positioned to observe a star; detecting a stellar occultation event; recording said intensity time history data; processing said intensity time history data; predicting at least one of a set of object characteristics; and imaging said near-Earth celestial object.
7. The method of claim 6 wherein said set of object characteristics comprises: a velocity calculation of said near-Earth celestial object; a size of said near-Earth celestial object; a trajectory of said near-Earth celestial object; and a distance between said near-Earth celestial object and said at least one light collecting aperture.
8. The method of claim 6 wherein said at least one light collecting aperture is positioned on a surface of Earth.
9. The method of claim 6 wherein said at least one light collecting aperture is positioned in a geosynchronous orbit of Earth.
10. The method of claim 6 wherein said imaging of said near-Earth celestial object further comprises: inputting said intensity time history data into a shadow function; applying a phase retrieval algorithm to said shadow function to produce an unresolved image; and applying a silhouette function to said unresolved image to produce a sharpened silhouette image of said near-Earth celestial object.
11. The method of claim 10 further comprising applying a signal-to-noise ratio to said silhouette image.
12. The method of claim 6 wherein said at least one light collecting aperture is positioned with a plane normal to a line of sight to said star.
13. The method of claim 6 wherein said light collecting aperture continuously collects said intensity time history data.
14. The method of claim 6 wherein said at least one light collecting aperture collects said intensity time history data at scheduled intervals.
15. The method of claim 6 wherein said intensity time history data is stored in a memory.
16. The method of claim 7 wherein said set of object characteristics is stored in a memory.
17. The method of claim 10 wherein said sharpened silhouette image is stored in a memory.
18. A method of imaging a near-Earth celestial object comprising: collecting light intensity data as a function of time of a stellar occultation event; processing said light intensity data as a function of time to generate a shadow function; applying a phase retrieval algorithm to said shadow function to generate an unresolved image; applying a silhouette function to said unresolved image.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] These and other features and advantages of the present invention will be better understood by reading the following Detailed Description, taken together with the Drawings wherein:
[0009]
[0010]
[0011]
[0012]
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0013] The embodiment of
[0014] In accordance with one embodiment of the invention, observations made during a stellar occultation by an asteroid include detection of the asteroid, determination of the asteroid's size and shape, imaging of the asteroid, and prediction of trajectory. These characteristics are important to observe because they allow identification of asteroids or any other celestial object, especially near-Earth objects. If the orbit of a near-Earth object crosses Earth's orbit, the resulting impact event could be catastrophic. Therefore, detection, characterization and identification of the orbital paths of near-Earth objects such as near-Earth asteroids may allow prediction of which objects pose a danger to the Earth and of the magnitude of that danger. Once identified and characterized, these near-Earth objects may, in the future, be mined for mineral resources.
[0015] Known methods allow detection of large near-Earth objects, between 10,000 meters and 100,000 meters, which cast a sharp shadow when occulting. These near-Earth objects are within the radiative near-field or Fresnel zone, with Fresnel number >1. However, smaller near-Earth objects, such as those with diameters between 40 meters and 140 meters, do not create these defined shadows when occulting a star. These smaller near-Earth objects cast shadows near or within the far field or Fraunhofer region at a distance of one astronomical unit (1 AU). Therefore, known methods cannot efficiently detect and characterize these types of objects.
[0016] In one embodiment of the invention, shown in
[0017] According to this embodiment of the invention, the star 107 is being occulted by an asteroid 105. Each of the light collecting apertures 103 continuously records intensity time history data. This intensity time history data represents the intensity of light reaching the light collecting aperture from the star as a function of time. By continuously recording the intensity of light from the star, the light collecting apertures measure the relative intensities before, during and after the occultation of the star. In an embodiment of the invention, each light collecting aperture includes a reasonably high bandwidth (at least 10 MHz) Avalanche Photodiode run in Geiger mode, or a high bandwidth, high sensitivity photomultiplier, to measure the light intensity. In this embodiment, the light is passed through a band-limited filter centered at 0.5 μm wavelength before its intensity is recorded. The intensity time history data is communicated to a receiving station 109. The receiving station 109 collects the recorded intensity time history data to characterize, image and track the asteroid 105.
[0018] A second embodiment of the present invention, shown in
[0019]
[0020] While
[0021]
[0022] As in
[0023] In one or more embodiments of the invention, the light collecting apertures may send a signal to the receiving station when a stellar occultation is detected prior to sending the intensity time history data. In an embodiment of the invention, the light collecting apertures collect intensity time history data but do not record it until an occultation event is detected. In one or more embodiments of the invention, the light collecting apertures record the intensity time history data at programmed intervals. In an embodiment of the invention, the programmed intervals coincide with anticipated stellar occultation events.
[0024]
[0025] The data analysis module 400 may include a communications module 410, an internal memory 420, an imaging module 430, a size calculation module 440, a distance calculation module 450, a velocity calculation module 460, and a trajectory calculation module 470.
[0026] The communications module 410 may be configured to receive intensity time history data from the light collecting apertures. In an embodiment of the invention the communications module 410 may be further configured to receive a signal from the light collecting aperture indicating that an occultation event has been detected. In one or more embodiments of the invention the communication module 410 is further configured to send intensity time history data, object characteristic, or imaging data of an occulting near-Earth object. In an embodiment of the invention, the communications module 410 may send intensity time history data, object characteristic, or imaging data of an occulting near-Earth object to a database or other data repository. In an embodiment of the invention the communications module 410 may be configured to send a signal to the light collecting apertures to begin collecting intensity time history data. In an embodiment of the invention, the communications module 410 may be configured to send a signal to the light collecting apertures to end collecting intensity time history data. In an embodiment of the invention, the communications module 410 may be configured to send an instruction to the light collecting apertures including a schedule of occultation events indicating start and end times for collecting intensity time history data.
[0027] The internal memory 420 of the data analysis module 400 may be configured to store intensity time history data of an occultation event received by the communications module. The internal memory 420 may be further configured to store processing data that may need to be accessed by the individual calculation modules including the imaging module, size calculation module 440, distance calculation module 450, velocity calculation module 460, and trajectory calculation module 470. The size of the occulting near-Earth object may be determined by the size calculation module 440. The distance of the occulting near-Earth object may be determined by the distance calculation module 450. The velocity of the occulting near-Earth object may be determined by the velocity calculation module 460. In an embodiment of the invention, the diameter of the array projected onto the direction of motion of the shadow of the occulting near-Earth object factoring in a time delay is used to calculate the velocity of the object. The trajectory of the occulting near-Earth object may be determined by the trajectory calculation module 470. In one or more embodiments of the invention, the trajectory of the occulting near-Earth object may be used to determine future occultation events.
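As a hedged illustration of the velocity calculation described above, the apparent shadow velocity follows from the array baseline projected onto the shadow's direction of motion and the measured arrival delay between apertures. The function name and the numbers below are assumptions for illustration, not values from the disclosure.

```python
def shadow_velocity(projected_baseline_m, time_delay_s):
    """Apparent shadow velocity (m/s): the aperture separation projected
    onto the shadow's direction of motion, divided by the delay between
    the shadow's arrival at each aperture."""
    return projected_baseline_m / time_delay_s

# Illustrative numbers: 500 m projected baseline, 0.025 s measured delay.
v = shadow_velocity(500.0, 0.025)  # 20000.0 m/s
```

In practice the delay would be estimated by cross-correlating the two apertures' intensity time histories.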
[0028] Returning to
[0029] Note that for a circular array, as in
[0030] Using just this data, a number of significant parameters can be estimated. The intensity distribution is heavily diffracted, but some well known diffraction algorithms allow estimation of the size, which may be performed by the size calculation module 440, and the distance, which may be performed by the distance calculation module 450, of the near-Earth object from the extent of the relatively deep shadow and the width of the striations that immediately surround it.
[0031] In one or more embodiments of the invention the size, distance, velocity, and trajectory of the occulting near-Earth object may be analyzed to indicate whether the occulting near-Earth object poses an impact threat to Earth. In an embodiment of the invention, the size, distance, velocity, and trajectory of the occulting near-Earth object may be sent via the communications module to a central impact alert system.
[0032] The imaging module 430 may be configured to run an algorithm to create an image of the occulting near-Earth object using intensity time history data. In one or more embodiments of the invention, the imaging module 430 creates an actual silhouette of the near-Earth object in the form of a pixellated image. In an exemplary embodiment of the invention, this pixellated image has roughly 10 to 20 pixels on a side. This implies that the shadow pattern must be sampled with a spatial resolution corresponding to 10-20 measurements across the cross-track direction, and that time history measurements (representing intensity variation in the along-track direction) must be taken during time intervals of at most 0.05 to 0.1 of the duration of the passage of the near-Earth object across the array. In one or more embodiments of the invention, the imaging module 430 characterizes the accuracy of the image using a signal-to-noise ratio.
[0033] The imaging algorithm is detailed below. In an embodiment of the invention, the algorithm for creating a sharp silhouette image of the occulting near-Earth object includes producing a shadow function, applying a phase retrieval algorithm to determine the complex field amplitude at position vector x=(x,y) on the plane normal to the line-of-sight at the array location, and applying an inversion technique using a silhouette function.
[0034] Shadow Function
[0035] The shadow function, the shadow pattern intensity relative to that of the unobstructed star, is the square of the magnitude of the complex field amplitude, U(x), at position vector x=(x,y) on the plane normal to the line-of-sight at the array location. This may be represented as:
[0036] The complex field amplitude U(x,y) at coordinates (x,y) on the plane normal to the line-of-sight at the array location is:
where λ is the mid-band wavelength,
[0037] If the star is treated as a point source, the complex field amplitude at position vector x=(x,y) is given by:
The imaging module 430 may calculate U(x) at position vector x=(x,y) on the plane normal to the line-of-sight at the array location and use it to determine a silhouette function.
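A minimal numerical sketch of the shadow function for a point-source star can be written with a standard paraxial Fresnel transfer function; this is not the disclosure's exact formulation, and the wavelength, distance, object size, and grid below are illustrative assumptions chosen so the propagation is adequately sampled.

```python
import numpy as np

def shadow_pattern(silhouette, dx, wavelength, z):
    """|U(x,y)|^2 on the array plane a distance z behind an opaque silhouette.

    silhouette : 2-D array, 1 inside the opaque object, 0 outside.
    dx         : grid spacing (m); wavelength and z in metres.
    """
    n = silhouette.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Paraxial Fresnel transfer function (unit magnitude).
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    # Field just past the object: unit plane wave with the silhouette blocked.
    U0 = 1.0 - silhouette.astype(complex)
    U = np.fft.ifft2(np.fft.fft2(U0) * H)
    return np.abs(U)**2

# Illustrative case: 60 m diameter object, 0.5 um light, ~0.05 AU distance.
n, dx = 512, 4.0
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
sil = (X**2 + Y**2 < 30.0**2).astype(float)
I = shadow_pattern(sil, dx, 0.5e-6, 7.5e9)
```

Far from the silhouette the relative intensity returns to unity, with diffraction striations on the Fresnel scale sqrt(λz) (about 61 m for these assumed numbers) bordering the shadow interior.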
[0038] In an alternate embodiment of the invention, if the light source is treated as an extended incoherent source, then the total shadow pattern is given by the convolution integral
where B(λ,θ′) is the normalized spectral radiance of the light from the source transmitted by a suitable gray band-limited filter with band-pass width, Δλ, and center band wavelength
[0039] The integral over the wavelength tends to smooth the fluctuations in
reducing the contrast of the striations bordering the shadow interior. To maintain an acceptable level of contrast, the spectral resolution of the filter, R.sub.j=
[0040] In this form, the imaging module 430 is able to calculate the silhouette function using a phase retrieval algorithm.
[0041] Phase Retrieval Algorithm
[0042] The phase retrieval problem occurs widely in astronomy, crystallography and electron microscopy. In these applications, the images (intensity distributions) being reconstructed are typically some object, say a planet or molecular structure, with a dark background, thus having finite support. The objective is to reconstruct the image, typically pixellated, when only the magnitude of its Fourier transform is known. This is possible when the dark background furnishes sufficiently many image-domain constraints, i.e. the restriction that the background pixels are black. For the most part, phase retrieval is formulated to address discretized data, and the objective is to determine a pixellated image given the modulus of the (discrete) Fourier transform.
[0043] In one embodiment of the invention, the imaging module 430 calculates |U(x,y)|.sup.2, where U(x,y) is the Fourier transform of a known function,
multiplied by Γ(θ.sub.x,θ.sub.y), or the silhouette function. The imaging module 430 exploits some special features that differ from the typical phase retrieval situation. For example, there are more image domain constraints here than in the traditional problem: All background pixels in Γ(θ.sub.x,θ.sub.y) are uniformly unity, while pixels within the silhouette are zero. Since all pixels are constrained, this leads to faster, more reliable convergence and much greater tolerance of measurement noise in the modulus data. Once U(x,y) is determined, one can invert
to obtain:
[0044] Borrowing the notations of Equations
and
we are given the data: J(x,y)=|Ũ(x,y)|.sup.2 where Ũ(x,y) is the counterpart of U(x,y) in
In phase retrieval applications, Ũ(x,y) is the optical coherence, not the complex field amplitude. Also far-field radiation can be assumed, in which case the exponential,
in
is approximately unity. Then Ũ(x,y) is given by the Fourier transform relation:
where {hacek over (Γ)}(θ.sub.x,θ.sub.y), the counterpart of the term
is the image intensity distribution, not the silhouette function. In phase retrieval, {hacek over (Γ)}(θ.sub.x,θ.sub.y) is a function that may assume any real, non-negative values. Given this relation for Ũ(x,y), the challenge is to compute {hacek over (Γ)}(θ.sub.x,θ.sub.y) given knowledge of the data, J(x,y)=|Ũ(x,y)|.sup.2.
[0045] Since the phase of Ũ(x,y) is not available, a solution cannot exist unless there are additional constraints on {hacek over (Γ)}(θ.sub.x,θ.sub.y). These usually take the form of constraints on the value of {hacek over (Γ)}(θ.sub.x,θ.sub.y), (usually that it vanishes), over some region in the (θ.sub.x,θ.sub.y) plane. Many methods of phase retrieval are based on known methods that proceed by imposing a sequence of projections designed to converge to satisfaction of both the image-domain constraints (zero values of {hacek over (Γ)}(θ.sub.x,θ.sub.y) in a specified region), and the Fourier domain constraints (the specified values of J(x,y)=|Ũ(x,y)|.sup.2). This involves starting with an estimate of {hacek over (Γ)}(θ.sub.x,θ.sub.y), nulling its values in the specified region, taking its Fourier transform, correcting the modulus in conformity with J(x,y)=|Ũ(x,y)|.sup.2, taking the inverse transform to obtain the next estimate of {hacek over (Γ)}(θ.sub.x,θ.sub.y) and repeating the process until (or if) acceptable convergence is attained. The algorithm, however, is very susceptible to local minima and frequently stagnates before convergence.
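The projection sequence of paragraph [0045] can be sketched as follows. This is a hedged illustration, not the patent's exact procedure: the random initialization, iteration count, and the use of clipping to [0, 1] to impose the constraint that every pixel lies between the silhouette value (zero) and the background value (unity) are all assumptions.

```python
import numpy as np

def retrieve(modulus, n_iter=500, seed=0):
    """Alternate Fourier-modulus and image-domain projections."""
    rng = np.random.default_rng(seed)
    g = rng.random(modulus.shape)                  # random initial image estimate
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = modulus * np.exp(1j * np.angle(G))     # impose the measured modulus
        g = np.real(np.fft.ifft2(G))               # back to the image domain
        g = np.clip(g, 0.0, 1.0)                   # pixels constrained to [0, 1]
    return g

# Round trip on a synthetic binary target. Because translations and 180-degree
# rotations leave the modulus unchanged, the recovered image may be a shifted
# or flipped copy of the target.
true_img = np.zeros((32, 32)); true_img[12:20, 10:22] = 1.0
modulus = np.abs(np.fft.fft2(true_img))
est = retrieve(modulus)
```

As the text notes, this basic error-reduction style iteration is prone to stagnation; the relaxed constraint step of paragraph [0046] is the usual remedy.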
[0046] In another embodiment of the invention, the image domain constraint imposition is modified through known methods so that instead of rigidly setting the values of {hacek over (Γ)}(θ.sub.x,θ.sub.y) in the constraint region to zero, a step is taken towards imposing the constraint based on the previous iterate. This offers enormous improvement with very infrequent incidence of stagnation.
[0047] The two methods above assume that the image domain constraint region is known. This work suggests that a priori knowledge of the (θ.sub.x,θ.sub.y) plane region wherein {hacek over (Γ)}(θ.sub.x,θ.sub.y)=0 may not be necessary. Successful phase retrieval appears possible as long as it is known a priori that the constraint region encompasses at least half the total extent of the image (half the number of pixels in the discrete case).
[0048] If the light source is treated as an extended incoherent source, then the Fourier transform pair may be defined as:
[0049] Then, taking the Fourier transform of both sides of the above equation and employing the convolution theorem, results in:
[0050] Solving for |U.sub.ps(
[0051] where in most instances one may employ a Gaussian approximation for F.sub.u[B(
[0052] The restriction on the spectral resolution, imposed to preserve the contrast in the shadow pattern, may sometimes unduly impair the signal-to-noise ratio. In an embodiment of the invention, a battery of narrow band filters with contiguous, non-overlapping pass bands that span a wide wavelength range is used with the light collecting apertures. Each filter band has an associated photodetector and a high spectral resolution. To be specific, suppose that there are N.sub.C such wavelength channels, all of equal width, Δλ.sub.C, spanning the visible band, Δλ.sub.V=λ.sub.V2−λ.sub.V1. The mid-band wavelength of channel k is:
for k=1,2, . . . ,N.sub.C. Then
can be written:
[0053] Each channel may be processed independently by the imaging module 430 in a manner analogous to
and then the results may be combined to
achieve much improved signal-to-noise ratio while preserving adequate spectral resolution and contrast.
[0054] It should be noted that even when all constraints are satisfied, there can be ambiguities in the solution. The trivial ambiguities are those transformations of the image that do not affect the Fourier transform modulus. These comprise translations of the image in the picture plane and 180° rotations. Nontrivial ambiguities arise when the image is itself the convolution of two or more other images. This relatively rare phenomenon gives rise to multiple possible solutions.
[0055] Silhouette Function
[0056] Generally, the imaging module 430 scans the pixels in the silhouette plane using a simple raster scan or some other scanning algorithm and for each pixel, changes the value (from unity to zero or vice versa), computes the modified shadow pattern, and then determines the cross correlation with the shadow pattern data to generate an image of the near-Earth object.
[0057] Silhouette function calculation can be seen to be very similar to phase retrieval: data on the modulus of a quantity, U(x,y), is known, which is related via Fourier transformation to a known function
multiplied by the function of interest, Γ(θ.sub.x,θ.sub.y). Provided that the detection array is appropriately sized, a region of sufficient extent wherein Γ(θ.sub.x,θ.sub.y) vanishes is known. Additionally the silhouette function only assumes the values zero or unity. The silhouette function may be defined by:
[0058] In one or more embodiments, the silhouette function is represented as a pixellated image in the (x.sub.a,y.sub.a) plane, where each matrix element is either zero or unity. In principle, the search space is finite, and there is a finite-step algorithm that may converge to a solution for which all constraints are satisfied. With respect to ambiguities, note that by the conditions of the problem, the silhouette is known to be surrounded by uniform intensity, so translations may be eliminated by shifting the silhouette pattern in the picture frame. There is also a one-to-one relation between the orientation of the shadow pattern and the silhouette so that rotational ambiguity is not possible. Further, nontrivial ambiguities are unlikely because the convolution of silhouette functions with other silhouette functions must have values greater than unity.
[0059] Let {hacek over (Γ)}ε.sup.M×M and Jε.sup.M×M be the matrices of the trial values of the image and of the specified Fourier transform modulus, and denote the discrete Fourier transform by F( . . . ). The error metric is defined by:
[0060] The imaging module 430 conducts a raster scan of the M×M image plane. At each pixel, the color (0 or 1) is changed and the error metric is computed. If the error is reduced from the previous step, the new color is retained; otherwise the color change is rescinded. The raster scan is performed iteratively until the image is sharp enough to meet a threshold or is perfectly reproduced, except for a translation in the image plane.
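The raster scan of paragraph [0060] can be sketched as a pixel-flip search against the error metric of paragraph [0059]. This is a toy illustration on a small grid; the sweep count, starting image, and Frobenius-norm form of the metric are assumptions consistent with, but not specified by, the text.

```python
import numpy as np

def error_metric(trial, J):
    """|| |DFT(trial)| - J ||, the mismatch to the measured Fourier modulus."""
    return np.linalg.norm(np.abs(np.fft.fft2(trial)) - J)

def raster_scan(trial, J, n_sweeps=5):
    trial = trial.copy()
    M = trial.shape[0]
    for _ in range(n_sweeps):
        changed = False
        for i in range(M):
            for j in range(M):
                e0 = error_metric(trial, J)
                trial[i, j] = 1.0 - trial[i, j]      # change the pixel's color
                if error_metric(trial, J) < e0:
                    changed = True                    # keep the improvement
                else:
                    trial[i, j] = 1.0 - trial[i, j]  # rescind the change
        if not changed:
            break                                     # no flip helped: converged
    return trial

# Toy run: search for an 8x8 binary pattern from its Fourier modulus.
truth = np.zeros((8, 8)); truth[2:5, 3:7] = 1.0
J = np.abs(np.fft.fft2(truth))
est = raster_scan(np.zeros((8, 8)), J)
```

Because the modulus is unchanged by translation, the result may be a shifted copy of the true pattern rather than an exact reproduction.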
[0061] From
the shadow intensity distribution is:
where
In other words, this is the reverse color version of the sharp shadow. The introduction of
can be carried out at once to yield:
Ψ(x,y)=|U|.sup.2
U=i−F∫.sub.−∞.sup.∞dφ.sub.x∫.sub.−∞.sup.∞dφ.sub.y
where the previous estimate of the near-Earth object radius, a, is used to define non-dimensional variables:
and where F is the corresponding Fresnel number.
[0062] With the above expressions, relating
Ψ(x,y)=|U|.sup.2
U=i−F∫.sub.−∞.sup.∞dφ.sub.x∫.sub.−∞.sup.∞dφ.sub.y
assuming
where: E(ξ)=C(ξ)+iS(ξ) and F′ is the corresponding Fresnel number. This calculation is not repeated for every coarse resolution pixel for which
[0063] In an embodiment of the invention, the imaging module 430 may use this method of mapping
and retain the value change if there is improvement, or rescind the change if not.
[0064] However, in another embodiment of the invention, the imaging module uses a procedure that avoids any errors in the construction of the shadow pattern by directly comparing results with the raw data. To do this we take one further processing step: For given estimate of
[0065] Alignment Method
[0066] In another embodiment of the invention, the time history data may be aligned by using the travel velocity to transform times to distances normal to the direction of travel so that the phase retrieval may be applied directly without a complete reconstruction of the shadow function. This is the shadow pattern, i.e. the normalized intensity distribution over the plane parallel to the plane of the array that moves with the shadow. Using just this data, a number of significant parameters may be estimated. The intensity distribution is heavily diffracted, but the relations and some well known diffraction results allow the data analysis module to estimate the size and distance of the near-Earth Object from the extent of a relatively deep shadow and the width of the striations that immediately surround it.
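The alignment step of paragraph [0066] amounts to using the shadow velocity to convert each aperture's sample times into along-track distances, after removing each aperture's arrival delay. The function below is a hedged sketch; the names, the sign convention, and the example numbers are assumptions.

```python
import numpy as np

def align(times_s, delays_s, velocity_mps):
    """Along-track distance (m) of each intensity sample, per aperture.

    times_s  : 1-D array of sample times (s).
    delays_s : 1-D array, one shadow-arrival delay per aperture (s).
    Returns an (n_apertures, n_samples) array of distances, so each row is
    a spatial cut through the moving shadow pattern.
    """
    t = times_s[None, :] - np.asarray(delays_s)[:, None]
    return velocity_mps * t

# Two apertures, 0.01 s relative delay, 20 km/s shadow velocity.
x = align(np.linspace(0.0, 0.1, 5), [0.0, 0.01], 20000.0)
```

After this transformation the phase retrieval can operate directly on the spatially registered data, as the text describes.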
[0067] First, let us consider the information on the near-Earth object distance implied by the shadow function. Referring to
recall that (φ.sub.x,φ.sub.y) are coordinates on a plane very close to the occulting object divided by
θ.sub.x=x.sub.a/
[0068] In terms of
may be written:
[0069] While the field amplitude depends on the geometry of the silhouette function, the only other parameter is λ
where C( . . . ) and S( . . . ) are the Fresnel integrals. Equations
and
make it clear that the distance to the near-Earth object can be estimated from the width of the striations.
[0070] In an exemplary embodiment of the invention, the width of the first lobe in
that is above unit intensity is, in terms of the non-dimensional argument, 1.275. The width of the portion of the striations in an exemplary embodiment of the invention is approximately 130 m. Setting
gives:
[0071] In an embodiment of the invention, the image may show an intensity hole. In an exemplary embodiment of the invention, the diameter roughly 106 pixels, or approximately 2120 m on the average. The imaging module 430 then applies:
where a denotes the average radius of the near-Earth object. Setting W=2120 m and solving for a yields a=20.1 m. Using these values and
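The distance arithmetic of paragraph [0070] can be illustrated in a few lines, assuming the non-dimensional striation argument is that of the Fresnel integrals, ξ = x·sqrt(2/(λz)); that choice of scaling is an assumption, while the lobe width 1.275 and the 130 m striation width are the document's example values.

```python
import math

lam = 0.5e-6      # assumed mid-band wavelength (m)
lobe_xi = 1.275   # width of the first above-unity lobe, non-dimensional
lobe_m = 130.0    # measured width of that lobe on the ground (m)

# sqrt(lam * z / 2) is the length scale of the striations; match the two widths.
fresnel_scale = lobe_m / lobe_xi
z = 2.0 * fresnel_scale**2 / lam    # estimated distance to the occulter (m)
z_au = z / 1.496e11                  # the same distance in astronomical units
```

Under these assumptions z comes out near 4×10^10 m (roughly 0.28 AU), consistent with the near-Earth-object regime the document discusses.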
[0072] In an embodiment of the invention, the imaging module 430 uses a procedure in the construction of the shadow pattern by directly comparing results with the raw data. To do this the imaging module takes one further processing step: Given estimate of .sup.N.sup.
.sup.21×21 of the complementary silhouette. Each of the N.sub.a rows contains the computed time history of a light collecting aperture, and each time history is a sampled data string M.sub.S elements long. Let M.sub.true be the corresponding data matrix. A correlation coefficient, between the true and the estimated time history data may be defined as:
where ⊙ denotes the Hadamard product. The summations are the squares of vector 2-norms induced in the space of N.sub.a×M.sub.S matrices. Given the above equations, Cor(.sup.21×21; for each element, the imaging module 430 changes the value, and computes M(
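The correlation coefficient of paragraph [0072] can be sketched with the Hadamard (elementwise) product, as in the text. The normalization by the product of the squared 2-norms is an assumption consistent with the description ("squares of vector 2-norms"), not a formula quoted from the disclosure.

```python
import numpy as np

def correlation(M_true, M_est):
    """Normalized correlation between true and estimated time-history matrices."""
    num = np.sum(M_true * M_est)                        # Hadamard product, summed
    den = np.sqrt(np.sum(M_true**2) * np.sum(M_est**2))
    return num / den

a = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.isclose(correlation(a, a), 1.0)  # identical histories correlate fully
```

In the pixel-flip search, a candidate value change is retained when this coefficient increases and rescinded otherwise.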
[0073] In summary, using the Alignment Method, data analysis module may use the aligned data and the shadow function alone to provide estimates of the apparent velocity of the near-Earth object, its approximate size and distance and Fresnel number of the shadow.
[0074] Photon Arrival Rates
[0075] In an exemplary embodiment of the invention each light collecting aperture will measure intensity in the visible to near infrared range. Even with very narrow band filtering of collected light, the response time of even fast Avalanche Photodiode detectors is much larger than the correlation time of the light. With this "slow detector" condition, the arrival time statistics of photon detections are fully characterized by the average number of photon arrivals per unit time. Further, the standard deviation of the number of photons entering a detector over a given time period is equal to the square root of the product of the average rate and the time interval. The average number of photon detections divided by the standard deviation is the signal-to-noise ratio of the measured average intensity over the measurement interval.
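The counting relations of paragraph [0075] reduce to a few lines: for Poisson arrivals at rate r over an interval T, the mean count is rT, its standard deviation is sqrt(rT), and their quotient, sqrt(rT), is the signal-to-noise ratio. The rate and interval values below are illustrative assumptions.

```python
import math

def photon_snr(rate_hz, interval_s):
    """Signal-to-noise ratio of a Poisson photon count: mean / std = sqrt(r*T)."""
    mean = rate_hz * interval_s
    sigma = math.sqrt(mean)
    return mean / sigma

# 1 MHz average arrival rate measured over a 1 ms interval.
snr = photon_snr(1.0e6, 1.0e-3)  # sqrt(1000) ~ 31.6
```

This is why longer integration intervals and brighter occulted stars both improve the fidelity of the recorded shadow pattern.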
[0076] As the basis for our estimate of average photon detection rate, the Planck radiation law for the spectral radiance per unit wavelength is used:
where λ is the wavelength, and T.sub.* is the photospheric temperature of the occulted star. Integrating over the appropriate solid angle, the radiance from a unit area of surface is πB.sub.λ(T.sub.*) (W·m.sup.−3). Then, dividing by hν, where ν=c/λ, the number of photons emitted per unit surface, per unit wavelength, per second is:
[0077] If n.sub.λ(T.sub.*) denotes the number of photons received per second, per square meter per unit wavelength then, from
we have:
where R.sub.* and d.sub.* are the stellar radius and distance, respectively.
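The photon-rate estimate of paragraphs [0076]-[0077] can be evaluated numerically as below, assuming standard SI values for the Planck constants and a trapezoidal integral over the band. The example star (a Sun-like photosphere at 10 light-years) is an illustrative assumption, not a case from the disclosure.

```python
import math

h, c, k = 6.62607e-34, 2.99792458e8, 1.380649e-23  # SI Planck constants

def planck_B(lam, T):
    """Spectral radiance per unit wavelength, W sr^-1 m^-2 m^-1."""
    return 2.0 * h * c**2 / lam**5 / math.expm1(h * c / (lam * k * T))

def photon_rate(T, R_star, d_star, lam1, lam2, n=200):
    """Photons s^-1 m^-2 received in the band [lam1, lam2]."""
    total, dlam = 0.0, (lam2 - lam1) / n
    for i in range(n + 1):
        lam = lam1 + i * dlam
        w = 0.5 if i in (0, n) else 1.0   # trapezoidal weights
        # Emitted per unit stellar surface, per unit wavelength: pi*B/(h*nu).
        n_emit = math.pi * planck_B(lam, T) * lam / (h * c)
        total += w * n_emit * dlam
    # Dilute by the solid-angle factor (R_star / d_star)^2.
    return total * (R_star / d_star) ** 2

ly = 9.4607e15
rate = photon_rate(5800.0, 6.96e8, 10.0 * ly, 0.4e-6, 0.7e-6)
```

For these assumed parameters the rate is a few billion photons per second per square meter across the visible band, which sets the photon budget available to each aperture.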
[0078] In one or more embodiments of the invention, the light is passed through the j.sup.th grey band-limited filter discussed above, that admits light in the wavelength range λε[λ.sub.Ck−1,λ.sub.Ck]. Then, n.sub.Δλ.sub.
where j=1, . . . , N.sub.C.
[0079] It is convenient to express this in terms of the stellar apparent magnitude, m.sub.*. In terms of solar parameters, this is given by:
m.sub.⊙=Solar magnitude=−26.73
d.sub.⊙=Solar distance=1.58×10.sup.−5 lyr
L.sub.⊙=Solar luminosity=3.846×10.sup.26 W
where, using B.sub.λT.sub.* the stellar luminosity is:
[0080] Applying
may be put in the form:
[0081] Substituting this into m.sub.* and solving for n.sub.Δλ.sub.
where:
and where L.sub.⊙, d.sub.⊙, and m.sub.⊙ are the solar luminosity, distance and apparent magnitude, respectively.
[0082] In one or more embodiments of the invention, the photon arrival processes within the various non-overlapping wavelength channels are mutually statistically independent Poisson processes. Thus the average number of photons, denoted by n.sub.Δλ received per second, per square meter in the entire wavelength band, λε[λ.sub.V1, λ.sub.V2], spanned by the N.sub.C channels is, by virtue of
where:
and where:
[0083] Note that intensity measurements determine only the magnitude, not the phase of U (x,y) However, given that U(x,y) is essentially the Fourier transform of Γ(θ.sub.x,θ.sub.y), and given the constraints on the form of Γ(θ.sub.x,θ.sub.y) the phase can be determined by phase retrieval algorithms.
[0084] Moreover, the second central moment of the number of photon arrivals received per second, per square meter in the entire wavelength band is simply n.sub.Δλ. In an exemplary embodiment of the invention, the average arrival rate is a weak function of the temperature. In an embodiment of the invention, n.sub.Δλ increases slightly for the cooler stars, reflecting the fact that for a given radiance, the number of photons increases with wavelength. Hotter stars are relatively rare, while nearly 90% of stars are M-class. In an exemplary embodiment of the invention, when these results are averaged over the numerical distribution of spectral classes, the result is close to the case T.sub.*=3000 K, which is near the high end of temperatures for M stars. Thus, in this exemplary embodiment, it is reasonable to approximate all stars with the result for T.sub.*=3000 K and
[0085] Signal-to-Noise Ratio
[0086] To characterize the accuracy with which the silhouette may be determined, the imaging module 430, in one or more embodiments of the invention, determines a signal-to-noise ratio that is reasonably convenient to calculate, yet is consistent with the phase retrieval algorithm and reflects the critical factors determining the accuracy of the silhouette function. In an embodiment of the invention, the imaging module 430 characterizes the error in the image of the near-Earth object as a function of the levels of noise in the intensity measurements.
[0087] In an alternate embodiment of the invention, the imaging module 430 uses intensity measurement statistics as a measure of silhouette estimation accuracy. As described above, the silhouette reconstruction method scans the pixels in the silhouette plane by some algorithm and, for each pixel, changes the value (from unity to zero or vice versa), computes the modified shadow pattern, and then determines the cross correlation with the shadow pattern data. If the cross-correlation increases, the value change is accepted; otherwise it is rescinded. This is a convex optimization problem, and as the measurement noise approaches zero, the silhouette errors also vanish. Therefore, the imaging module 430 may, in an embodiment of the invention, use the measurement noise statistics to define a signal-to-noise ratio, taking care that the “signal” part of the signal-to-noise ratio encompasses only that fraction of the intensity measurements that conveys the most pertinent information on the silhouette function.
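The scan-flip-and-score loop described above can be sketched in a toy one-dimensional form. Here a simple moving-average blur stands in for the Fresnel shadow computation, and the silhouette, array size, and scoring helper are all illustrative assumptions rather than the patent's actual forward model:

```python
import numpy as np

rng = np.random.default_rng(1)


def shadow(sil):
    # Hypothetical stand-in forward model: a moving-average blur of the binary
    # silhouette. The actual system computes a Fresnel diffraction pattern here.
    return np.convolve(sil, np.ones(5) / 5.0, mode="same")


def ncc(a, b):
    # Normalized cross-correlation used as the accept/reject score.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


true_sil = np.zeros(40)
true_sil[15:25] = 1.0                # ground-truth binary silhouette
data = shadow(true_sil)              # noiseless "measured" shadow pattern

est = rng.integers(0, 2, size=40).astype(float)  # random initial guess
score = ncc(shadow(est), data)
for _ in range(10):                  # a few scans over the silhouette pixels
    for i in range(est.size):
        est[i] = 1.0 - est[i]        # tentatively flip one pixel value
        new_score = ncc(shadow(est), data)
        if new_score > score:
            score = new_score        # the flip improved the fit: accept it
        else:
            est[i] = 1.0 - est[i]    # otherwise rescind it
```

Because each accepted flip strictly increases the score, the loop monotonically improves the estimate; with noiseless data the recovered `est` clears the pixels far outside the true silhouette, illustrating the claim that silhouette errors shrink as measurement noise vanishes.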
[0088] In this embodiment of the invention, the imaging module 430 characterizes the silhouette accuracy via the accuracy of the shadow pattern measurements relevant to silhouette information. In an exemplary embodiment of the invention, the imaging module 430 determines that there will be N.sub.pix=20 pixels on a side of the final pixelated image of the asteroid. In this embodiment of the invention, the number of apertures distributed across the cross-track direction must be approximately N.sub.pix. The end-to-end extent of the array may be at least as big as the width of the shadow region, W, pertaining to the mid-band wavelength, where:
a=asteroid radius
The array need not be much larger than this because the Fresnel integral fluctuations far from this shadow region contain little information about the silhouette.
[0089] In an exemplary embodiment of the invention, at 0.28 AU a 40 m diameter asteroid has a shadow width of over 2 km. A typical value for the asteroid velocity perpendicular to the line of sight, V, is approximately 10 km/s, so the entire duration of the shadow passage is roughly W/V≈0.2 s. Further, for each pixel, the intensity measurement is the total number of photodetection events recorded by the photodetector in time period Δy/V, where Δy=W/N.sub.pix is the extent of a pixel in the observation plane.
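The timing arithmetic in this paragraph can be worked through directly:

```python
# Worked numbers from the exemplary embodiment: a ~2 km shadow crossing the
# array at ~10 km/s gives a ~0.2 s event, and each of the N_pix = 20
# cross-track pixels is integrated for dy/V seconds.
W = 2000.0      # shadow width, m
V = 10_000.0    # transverse velocity of the shadow, m/s
N_pix = 20      # pixels on a side of the final image

duration = W / V  # total shadow passage time, s (0.2 s)
dy = W / N_pix    # extent of one pixel in the observation plane, m (100 m)
dwell = dy / V    # per-pixel integration time, s (0.01 s)
```

The 10 ms per-pixel dwell time is the quantity that enters the photon-count and noise-variance expressions below.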
[0090] In this embodiment of the invention, each aperture is equipped with a bank of N.sub.C non-overlapping narrow band filters with associated photodetectors. The shadow pattern produced by light centered at each filter wavelength is a scaled version of the pattern at the mid-band wavelength, so the measurements from the separate wavelength channels may be rescaled and combined.
[0091] In one or more embodiments of the invention, for each pixel, the data alignment process merely adds up all the photodetector measurements that contribute to the estimated intensity associated with that pixel. The pixel measurement is a sum of independent Poisson processes corresponding to all the light collecting apertures and all the narrow band filter channels in each light collecting aperture whose along-track paths pass through the pixel during occultation. The main factor contributing to both the signal and the noise in the signal-to-noise ratio is the average of this quantity, evaluated for the un-occulted star; this average is proportional to n.sub.Δλ and to a dimensionless factor ρ(y.sub.pix) that is a function of the cross-track coordinate only.
This function has a continuous limit as N.sub.C→∞ and Δy→0, and is well approximated by the limit. In an embodiment of the invention, ρ(y.sub.pix) is evaluated as a function of y.sub.pix divided by the aperture array width for N.sub.C=N.sub.pix=200, and for various values of the reciprocal of the spectral resolution. In this embodiment of the invention, ρ(y.sub.pix) is somewhat above unity for a region within the aperture array, but then decreases abruptly near the array edges. Thus it is desirable to make the cross-track extent of the aperture array somewhat larger than W. Under this condition, a conservative estimate of ρ(y.sub.pix) is unity.
The true “signal” is not the un-occulted average itself, but rather the occultation-induced fluctuations in the shadow intensity distribution.
[0098] In this embodiment of the invention, the contrast arising from the fluctuations in the intensity is what contributes information about the silhouette. The estimated level of contrast may be expressed as a fraction, κ, of the un-occulted average photodetections given by the above equation. To estimate κ, the imaging module 430 considers that for the range of Fresnel numbers above ˜0.1, the principal fluctuations take the form of nested striations enclosing the deep shadow region of the diffracted silhouette. The extent of the dark spot enclosed by the striations indicates the overall size of the silhouette. The fluctuation scale of the striations themselves, combined with the estimated size, indicates the distance and Fresnel number. Generally, if the system has a signal-to-noise ratio sufficient to accurately resolve the striations, that will suffice to determine the silhouette accurately. These striations approximate the knife edge shadow pattern for a point source given by:
I(ξ)=½{[½+C(ξ)].sup.2+[½+S(ξ)].sup.2}
where ξ is distance along the axis perpendicular to the striation, and C( . . . ) and S( . . . ) are the Fresnel integrals.
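The knife-edge pattern can be evaluated directly with the standard Fresnel integrals; the following sketch uses `scipy.special.fresnel` (note that SciPy returns S before C):

```python
import numpy as np
from scipy.special import fresnel

# Knife-edge diffraction pattern for a point source:
#   I(xi) = 0.5 * ((0.5 + C(xi))^2 + (0.5 + S(xi))^2)
# normalized so the intensity tends to 1 far from the edge on the lit side.
def knife_edge_intensity(xi):
    S, C = fresnel(xi)  # SciPy returns (S, C) in that order
    return 0.5 * ((0.5 + C) ** 2 + (0.5 + S) ** 2)


xi = np.linspace(-5, 5, 1001)
I = knife_edge_intensity(xi)  # fringes on the lit side, smooth decay in shadow
```

The fringes on the lit side of the edge are the striations referred to above; the value I(0)=0.25 at the geometric shadow boundary and the oscillation about unity at large positive ξ are standard properties of this pattern.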
[0099] In summary, using the above estimate of n.sub.Δλ, and the above estimates of ρ and κ, the imaging module 430 may calculate:
Average contrast measurement=ηκA.sub.TN.sub.0(Δy/V)10.sup.−0.4m.sub.*
where η is the photodetector quantum efficiency, A.sub.T is the total light collecting area, and:
N.sub.0=1.46×10.sup.10
[0100] In an embodiment of the invention, the imaging module 430 considers the noise statistics. The variance of the noise due to photon arrival statistics is ηA.sub.Tn.sub.Δλρ(y.sub.pix)Δy/V. To this the photodetector dark count must be added, which is denoted by n.sub.DC, measured in average number of photons per second. The noise variance due to dark count for each pixel measurement and all wavelength channels is n.sub.DCN.sub.C(Δy/V)ρ(y.sub.pix). Noise due to the star and the dark count are independent, so the total noise variance is the sum:
σ.sup.2=ηA.sub.Tn.sub.Δλρ(y.sub.pix)(Δy/V)+n.sub.DCN.sub.C(Δy/V)ρ(y.sub.pix)
[0101] The signal-to-noise ratio (SNR) of each pixel measurement of the shadow pattern is κ times the expression for the average number of un-occulted photodetections for a pixel centered at y.sub.pix, divided by the square root of the total noise variance. Therefore the signal-to-noise ratio of each pixel calculated by the imaging module 430 is:
SNR=κηA.sub.Tn.sub.Δλρ(y.sub.pix)(Δy/V)/[ηA.sub.Tn.sub.Δλρ(y.sub.pix)(Δy/V)+n.sub.DCN.sub.C(Δy/V)ρ(y.sub.pix)].sup.1/2
[0102] In an exemplary embodiment of the invention, at 0.3 AU distance, 0.4 m telescope diameter, quantum efficiency of 0.5, ten wavelength channels, n.sub.DC=4 phot/s, and a 12.sup.th magnitude star, the signal-to-noise ratio is approximately 9.1 for a 20×20 pixel silhouette image.
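A rough numerical sketch of this SNR budget follows. The quantum efficiency, dark count, channel count, and stellar magnitude are taken from the exemplary embodiment; the contrast fraction κ, the total collecting area A.sub.T, and the per-pixel dwell time Δy/V are illustrative assumptions, so the result indicates only the order of magnitude, not the quoted 9.1:

```python
import math

# Rough sketch of the per-pixel SNR budget from [0100]-[0102]. kappa, A_T, and
# the dwell time are illustrative assumptions, not values from the patent.
eta = 0.5               # quantum efficiency (exemplary embodiment)
m_star = 12.0           # stellar apparent magnitude
N0 = 1.46e10            # photons/s/m^2 at magnitude 0 (T* = 3000 K)
n_dl = N0 * 10 ** (-0.4 * m_star)  # un-occulted photon rate per m^2

A_T = 20 * math.pi * 0.2 ** 2  # assumed: 20 apertures of 0.4 m diameter, m^2
dwell = 0.01                   # assumed per-pixel dwell time dy/V, s
kappa = 0.2                    # assumed contrast fraction
n_DC, N_C = 4.0, 10            # dark count rate (phot/s) and channel count

star_counts = eta * A_T * n_dl * dwell  # mean un-occulted photodetections/pixel
dark_var = n_DC * N_C * dwell           # dark-count variance contribution
snr = kappa * star_counts / math.sqrt(star_counts + dark_var)
```

With these assumptions the dark-count term is negligible next to the photon noise, and the SNR comes out near ten, in line with the specification's quoted value of approximately 9.1 for its own parameter choices.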
[0103] While the principles of the invention have been described herein, it is to be understood by those skilled in the art that this description is made only by way of example and not as a limitation as to the scope of the invention. Further embodiments are contemplated within the scope of the present invention in addition to the exemplary embodiments shown and described herein. Modifications and substitutions by one of ordinary skill in the art are considered to be within the scope of the present invention, which is not to be limited except by the following claims.