System and Method for Detection, Characterization, and Imaging of a Stellar Occultation

20170227351 · 2017-08-10

    Abstract

    An asteroid characterization and imaging system comprising at least one light collecting aperture positioned to collect intensity time history data and a data analysis unit configured to detect an occultation event and process said intensity time history data. Embodiments according to the present invention include a method of detecting, characterizing and imaging a near-Earth object comprising collecting intensity time history data by at least one light collecting aperture positioned to observe a star, detecting a stellar occultation event, recording said intensity time history data, processing said intensity time history data, predicting at least one of a set of object characteristics, and imaging said near-Earth celestial object.

    Claims

    1. An asteroid characterization and imaging system comprising: at least one light collecting aperture positioned to collect intensity time history data; and a data analysis unit configured to detect an occultation event and process said intensity time history data.

    2. The system of claim 1 further comprising: a data receiving station wherein said data receiving station connectively communicates with said at least one light collecting aperture.

    3. The system of claim 1 wherein said data analysis unit comprises: a communications module; a memory; a central processing unit; an imaging module; and a characteristic calculation module wherein said imaging module processes said intensity time history data to produce an image of said asteroid.

    4. The system of claim 3 wherein said characteristic calculation module processes said intensity time history data to calculate asteroid characteristic data.

    5. The system of claim 4 wherein said asteroid characteristic data comprises at least one of: a velocity calculation of said asteroid; a size of said asteroid; a trajectory of said asteroid; and a distance between said asteroid and said at least one light collecting aperture.

    6. A method of detecting, characterizing and imaging a near-Earth celestial object comprising: collecting intensity time history data by at least one light collecting aperture positioned to observe a star; detecting a stellar occultation event; recording said intensity time history data; processing said intensity time history data; predicting at least one of a set of object characteristics; and imaging said near-Earth celestial object.

    7. The method of claim 6 wherein said set of object characteristics comprises: a velocity calculation of said near-Earth celestial object; a size of said near-Earth celestial object; a trajectory of said near-Earth celestial object; and a distance between said near-Earth celestial object and said at least one light collecting aperture.

    8. The method of claim 6 wherein said at least one light collecting aperture is positioned on a surface of Earth.

    9. The method of claim 6 wherein said at least one light collecting aperture is positioned in a geosynchronous orbit of Earth.

    10. The method of claim 6 wherein said imaging of said near-Earth celestial object further comprises: inputting said intensity time history data into a shadow function; applying a phase retrieval algorithm to said shadow function to produce an unresolved image; applying a silhouette function to said unresolved image to produce a silhouette image of said near-Earth celestial object; and sharpening said silhouette image to produce a sharpened silhouette image.

    11. The method of claim 10 further comprising applying a signal-to-noise ratio to said silhouette image.

    12. The method of claim 6 wherein said at least one light collecting aperture is positioned with a plane normal to a line of sight to said star.

    13. The method of claim 6 wherein said light collecting aperture continuously collects said intensity time history data.

    14. The method of claim 6 wherein said light collecting aperture collects said intensity time history data at scheduled intervals.

    15. The method of claim 6 wherein said intensity time history data is stored in a memory.

    16. The method of claim 7 wherein said set of object characteristics is stored in a memory.

    17. The method of claim 10 wherein said sharpened silhouette image is stored in a memory.

    18. A method of imaging a near-Earth celestial object comprising: collecting light intensity data as a function of time of a stellar occultation event; processing said light intensity data as a function of time to generate a shadow function; applying a phase retrieval algorithm to said shadow function to generate an unresolved image; applying a silhouette function to said unresolved image.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0008] These and other features and advantages of the present invention will be better understood by reading the following Detailed Description, taken together with the Drawings wherein:

    [0009] FIG. 1 is an elevated system view of an embodiment of the invention exemplifying an asteroid occultation detection and characterization system.

    [0010] FIG. 2 is a detailed view illustrating an alternate embodiment of the present invention.

    [0011] FIG. 3 is a flow chart of the methodology of an embodiment of the present invention.

    [0012] FIG. 4 is a detailed view of the imaging module according to an embodiment of the present invention.

    DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

    [0013] The embodiment of FIG. 1 provides an elevated system view exemplifying asteroid occultation detection and characterization in accordance with an embodiment of the invention. An occultation occurs when a celestial object is hidden by another intervening celestial object passing between the first object and the observer. There are many types of occultations, including occultation by moons, occultation by planets, and occultation by asteroids. In a stellar occultation by an asteroid, a star is obscured from the viewpoint of the observer by an asteroid. This happens because the asteroid's trajectory passes in front of the star, temporarily blocking its light from the observer's view.

    [0014] In accordance with one embodiment of the invention, observations made during a stellar occultation by an asteroid include detection of the asteroid, determination of the asteroid's size and shape, imaging of the asteroid, and prediction of its trajectory. These characteristics are important to observe because they allow identification of asteroids or any other celestial objects, especially near-Earth objects. If the orbit of a near-Earth object crosses Earth's orbit, the resulting impact event could be catastrophic. Therefore, detection, characterization, and identification of the orbital paths of near-Earth objects like near-Earth asteroids may allow prediction of which objects pose a danger to the Earth and the magnitude of that danger. Once identified and characterized, these near-Earth objects may, in the future, be mined for mineral resources.

    [0015] Known methods allow detection of large near-Earth objects, between 10,000 meters and 100,000 meters in diameter, which cast a sharp shadow when occulting. These near-Earth objects are within the radiative near field, or Fresnel zone, with Fresnel number > 1. However, smaller near-Earth objects, such as those with diameters between 40 meters and 140 meters, do not create such defined shadows when occulting a star. These smaller near-Earth objects cast shadows near or within the far field, or Fraunhofer region, at a distance of one astronomical unit (1 AU). Therefore, known methods cannot efficiently detect and characterize these types of objects.
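    The near-field/far-field distinction above can be checked with a short calculation of the Fresnel number N = a²/(λz), where a is the object radius, λ the wavelength, and z the distance. This sketch is illustrative only; the function name and the 0.5 μm observing wavelength are assumptions, not from the specification:

```python
# Fresnel number N = a^2 / (lambda * z) for an occulter of radius a at
# distance z, observed at wavelength lam. N > 1: sharp geometric shadow
# (Fresnel zone); N < 1: heavily diffracted (Fraunhofer regime).

AU = 1.496e11  # meters

def fresnel_number(diameter_m, wavelength_m=0.5e-6, distance_m=AU):
    a = diameter_m / 2.0
    return a * a / (wavelength_m * distance_m)

# A large (10 km) near-Earth object casts a sharp shadow at 1 AU:
print(fresnel_number(10_000))   # ~334, N > 1
# A 100 m object at 1 AU is in the diffraction-dominated regime:
print(fresnel_number(100))      # ~0.033, N < 1
```

This matches the paragraph above: the 40–140 m objects of interest fall below N = 1 at 1 AU, so their shadows are dominated by diffraction rather than geometry.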

    [0016] In one embodiment of the invention, shown in FIG. 1, an array 101 of light collecting apertures 103 is positioned, each aperture with its plane normal to the line of sight to a star 107. In this embodiment, the array 101 is a circular arrangement in which the light collecting apertures 103 are evenly spaced 30 degrees apart along the circumference of a circle; however, the array 101 may take on any general arrangement of the light collecting apertures 103. Additionally, this embodiment illustrates twelve (12) light collecting apertures positioned in the array, though the array may have any number of apertures.

    [0017] According to this embodiment of the invention, the star 107 is being occulted by an asteroid 105. Each of the light collecting apertures 103 continuously records intensity time history data. This intensity time history data represents the intensity of light reaching the light collecting aperture from the star as a function of time. By continuously recording the intensity of light from the star, the light collecting apertures measure the relative intensities before, during, and after the occultation of the star. In an embodiment of the invention, each light collecting aperture includes a reasonably high-bandwidth (at least 10 MHz) avalanche photodiode run in Geiger mode, or a high-bandwidth, high-sensitivity photomultiplier, to measure the light intensity. In this embodiment, the light is passed through a band-limited filter centered at a 0.5 μm wavelength before its intensity is recorded. The intensity time history data is communicated to a receiving station 109. The receiving station 109 collects the recorded intensity time history data to characterize, image, and track the asteroid 105.

    [0018] A second embodiment of the present invention, shown in FIG. 2, illustrates a telescope, or other orbital light-receiving aperture with a photo intensity sensor and memory, as the light collecting aperture. Earth based light collecting apertures are restricted to collecting intensity time history data during night hours and preferably when the skies are clear. In this case, the telescope may collect intensity time history data without time and weather constraints.

    [0019] FIG. 2 shows telescopes 203 positioned in geosynchronous orbit around Earth 200. While this embodiment depicts the telescopes 203 in orbit around Earth 200, the telescopes 203 or other light collecting apertures may be positioned anywhere in space where they may capture stellar occultation events. The telescopes 203 are positioned in an array 201 and are focused on a star 207 that is occulted by an asteroid 205. Each of the telescopes 203 in the array 201 is equipped with a photo intensity detector 211 and records intensity time history data while the asteroid passes between the array and the star. The intensity time history data is communicated to a receiving station 209 on Earth where it is used to characterize, image, and track the asteroid 205.

    [0020] While FIG. 2 shows that the receiving station 209 is located on Earth 200, in another embodiment of the invention the receiving station 209 may be on a space station, a satellite, a cubesat, a space outpost, space colony, on the Moon, or on another planet. In another embodiment of the invention, the individual telescopes 203 or other light collecting aperture communicate the intensity time history data to each other prior to transmitting to the receiving station 209.

    [0021] FIG. 3 is a flow chart of the methodology of an embodiment of the present invention. In this embodiment, an asteroid occults a star causing its shadow to pass over the plane occupied by the array 301. The x and y axes define the local coordinate system of the array. A basic assumption here is that the array diameter encompasses most of the cross-section of the territory swept through by the asteroid shadow. Each light collecting aperture continuously collects and records the intensity of the light it receives as a function of time 302. This data of the intensity of light as a function of time is called intensity time history data. Since the intensity of the star before occultation is measured, the intensity time history data comprises the time histories of the intensities normalized by the non-occulted stellar intensity as well as the time history of the intensities during the stellar occultation. The intensity time history data is wirelessly sent to a receiving station 303 where it is collected and stored by a data analysis module at the receiving station 304. The data analysis module analyzes the intensity time history data to characterize and image the asteroid by running several calculations and algorithms 305. In this embodiment, these algorithms are used to produce a shadow function on the plane normal to the line-of-sight at the location of the array. This shadow function represents a pattern of intensity. From the shadow function and other analysis performed by the data analysis module, the asteroid's characteristics are determined and a sharp silhouette image of the asteroid is created 306.
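    The normalization step described above (dividing the recorded intensities by the non-occulted stellar intensity) can be sketched as follows; the sampling length, baseline window, and use of a median estimator are hypothetical choices for illustration:

```python
import numpy as np

def normalize_intensity(intensity, baseline_samples=1000):
    """Normalize an intensity time history by the non-occulted stellar
    intensity, estimated here as the median of samples recorded before
    the occultation event."""
    baseline = np.median(intensity[:baseline_samples])
    return intensity / baseline

# Synthetic example: steady stellar flux with a 40% dip during the event.
raw = np.full(5000, 2.0)        # detector counts, arbitrary scale
raw[2000:3000] *= 0.6           # shadow passage
norm = normalize_intensity(raw)
# norm is ~1.0 outside the occultation and ~0.6 inside it
```

Normalizing this way removes the unknown detector gain and stellar brightness, leaving the dimensionless shadow pattern the later algorithms operate on.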

    [0022] As in FIG. 3, the intensity time history data may be sent wirelessly or through a wired connection to the receiving station. In an embodiment of the invention, one or more light collecting apertures may act as a receiving station such that the intensity time history data is communicated to at least one light collecting aperture from the other light collecting apertures, where it is used to characterize and image the asteroid. In another embodiment of the invention, each light collecting aperture sends the intensity time history data to the others, where they collectively process the data to characterize the asteroid. In this embodiment of the invention, the intensity time history data is sent upon completion of a stellar occultation event. In an alternate embodiment of the invention, the light collecting apertures continuously transmit intensity time history data to the receiving station, without distinguishing whether a stellar occultation event has occurred.

    [0023] In one or more embodiments of the invention, the light collecting apertures may send a signal to the receiving station when a stellar occultation is detected prior to sending the intensity time history data. In an embodiment of the invention, the light collecting apertures collect intensity time history data but do not record it until an occultation event is detected. In one or more embodiments of the invention, the light collecting apertures record the intensity time history data at programmed intervals. In an embodiment of the invention, the programmed intervals coincide with anticipated stellar occultation events.

    [0024] FIG. 4 shows a detailed view of the data analysis module 400 in accordance with one or more embodiments of the present invention. In this embodiment of the invention, the data analysis module 400 is a processing unit configured to collect, record, and analyze intensity time history data. In one or more embodiments of the invention, the data analysis module 400 may be a computer capable of processing the intensity time history data and having a screen for displaying the image of the occulting near-Earth object. In an alternate embodiment of the invention, the data analysis module 400 is a server or other data processing unit with a memory. In an embodiment of the invention, the data analysis module 400 is connected to a printer or other peripheral imaging device. In one or more embodiments of the invention, the data analysis module 400 is connected to a database 480 or other data repository for storage of intensity time history data, object characteristic, or imaging data for each occultation event processed by the receiving station.

    [0025] The data analysis module 400 may include a communications module 410, an internal memory 420, an imaging module 430, a size calculation module 440, a distance calculation module 450, a velocity calculation module 460, and a trajectory calculation module 470.

    [0026] The communications module 410 may be configured to receive intensity time history data from the light collecting apertures. In an embodiment of the invention the communications module 410 may be further configured to receive a signal from the light collecting aperture indicating that an occultation event has been detected. In one or more embodiments of the invention the communication module 410 is further configured to send intensity time history data, object characteristic, or imaging data of an occulting near-Earth object. In an embodiment of the invention, the communications module 410 may send intensity time history data, object characteristic, or imaging data of an occulting near-Earth object to a database or other data repository. In an embodiment of the invention the communications module 410 may be configured to send a signal to the light collecting apertures to begin collecting intensity time history data. In an embodiment of the invention, the communications module 410 may be configured to send a signal to the light collecting apertures to end collecting intensity time history data. In an embodiment of the invention, the communications module 410 may be configured to send an instruction to the light collecting apertures including a schedule of occultation events indicating start and end times for collecting intensity time history data.

    [0027] The internal memory 420 of the data analysis module 400 may be configured to store intensity time history data of an occultation event received by the communications module. The internal memory 420 may be further configured to store processing data that may need to be accessed by the individual calculation modules including the imaging module, size calculation module 440, distance calculation module 450, velocity calculation module 460, and trajectory calculation module 470. The size of the occulting near-Earth object may be determined by the size calculation module 440. The distance of the occulting near-Earth object may be determined by the distance calculation module 450. The velocity of the occulting near-Earth object may be determined by the velocity calculation module 460. In an embodiment of the invention, the diameter of the array projected onto the direction of motion of the shadow of the occulting near-Earth object factoring in a time delay is used to calculate the velocity of the object. The trajectory of the occulting near-Earth object may be determined by the trajectory calculation module 470. In one or more embodiments of the invention, the trajectory of the occulting near-Earth object may be used to determine future occultation events.
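    The velocity calculation mentioned above (the array diameter projected onto the shadow's direction of motion, divided by the inter-aperture time delay) reduces to a one-line estimate. The numbers below are illustrative, not from the specification:

```python
import math

def shadow_velocity(array_diameter_m, direction_angle_rad, delay_s):
    """Estimate the shadow (and hence object) speed from the time delay
    between two diametrically opposed apertures: the array diameter
    projected onto the direction of motion, divided by the delay.
    direction_angle_rad is the angle between that diameter and the
    shadow's direction of motion."""
    projected = array_diameter_m * math.cos(direction_angle_rad)
    return projected / delay_s

# 5000 m array, shadow track 5 degrees off the pair's diameter, 0.25 s delay:
v = shadow_velocity(5000.0, math.radians(5), 0.25)
# v ≈ 19,924 m/s
```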

    [0028] Returning to FIG. 1, a circular array of twelve light collecting apertures, 2500 m in radius, with its plane normal to the line of sight to the star, is positioned on Earth's surface. The light collecting apertures are evenly spaced at 30 degrees of separation on the perimeter of the circular array. In this embodiment of the invention, each light collecting aperture records the intensity of light from a star, passed through a band-limited filter centered at a 0.5 μm wavelength. When an asteroid occults the star, its shadow passes over the plane occupied by the array. The (X,Y) axes define the local coordinate system of the array. It is assumed that the array diameter encompasses most of the cross-section of the territory swept through by the asteroid shadow.

    [0029] Note that for a circular array, as in FIG. 1, if the intensity time history data collected by two diametrically opposite light collecting apertures are nearly identical except for a time delay, the diameter connecting them is approximately parallel to the direction of motion of the shadow. The light collecting aperture with the temporal lead establishes the sense of direction along the diameter. The temporal cross-correlations of each of the six pairs of opposed apertures, along with the maximum correlation and corresponding time delay, may be computed so that the diameter connecting the pair with the largest correlation maximum determines an estimate of the direction of motion. In FIG. 1, the light collecting apertures with the largest maximum cross-correlation give an initial estimate of direction to within approximately 30 degrees. To refine the estimate, Lagrange interpolation along the Y axis is used to provide a nearly continuous map of the shadow distribution as a function of time and distance, i.e., distance in the (X,Y) plane normal to the preliminary estimate of the direction of motion. Then the cross-correlations of all diametrically opposed pairs of the light collecting apertures on the array periphery may be calculated as before, but using a finer increment of angular direction. Again, the diameter corresponding to the largest maximum cross-correlation indicates the direction. In one example, iterating this method yields a direction of motion of right-to-left along a line 5° clockwise from the line connecting the center of the array and a light collecting aperture. These calculations may be used by the trajectory calculation module to determine the trajectory of the near-Earth object.
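    The opposed-pair cross-correlation search described above can be sketched with NumPy. The aperture ordering convention (aperture i diametrically opposite aperture i+6) and the synthetic signal are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def shadow_direction_pair(histories):
    """Given intensity time histories for 12 apertures ordered around the
    circle (aperture i diametrically opposite aperture i+6), return the
    opposed pair with the largest cross-correlation maximum and the delay
    (in samples) by which aperture i leads aperture i+6."""
    best = None
    n = len(histories[0])
    for i in range(6):
        a = histories[i] - histories[i].mean()
        b = histories[i + 6] - histories[i + 6].mean()
        cc = np.correlate(a, b, mode="full")
        j = int(np.argmax(cc))
        delay = (n - 1) - j          # positive: aperture i sees the event first
        if best is None or cc[j] > best[2]:
            best = (i, delay, cc[j])
    return best[0], best[1]

# Synthetic check: the same Gaussian dip reaches aperture 9 forty samples
# after aperture 3; the remaining apertures record only noise.
rng = np.random.default_rng(0)
h = [rng.normal(0.0, 0.01, 600) for _ in range(12)]
dip = -0.5 * np.exp(-0.5 * ((np.arange(600) - 250) / 15.0) ** 2)
h[3] = h[3] + dip
h[9] = h[9] + np.roll(dip, 40)
pair, delay = shadow_direction_pair(h)
# pair == 3 and delay ≈ 40 (noise may shift the peak by a sample or two)
```

The winning diameter gives the coarse direction estimate; the Lagrange-interpolation refinement described above would then repeat this search on interpolated histories at a finer angular increment.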

    [0030] Using just this data, a number of significant parameters can be estimated. The intensity distribution is heavily diffracted, but some well-known diffraction algorithms allow estimation of the size, which may be performed by the size calculation module 440, and the distance, which may be performed by the distance calculation module 450, of the near-Earth object from the extent of the relatively deep shadow and the width of the striations that immediately surround it.

    [0031] In one or more embodiments of the invention the size, distance, velocity, and trajectory of the occulting near-Earth object may be analyzed to indicate whether the occulting near-Earth object poses an impact threat to Earth. In an embodiment of the invention, the size, distance, velocity, and trajectory of the occulting near-Earth object may be sent via the communications module to a central impact alert system.

    [0032] The imaging module 430 may be configured to run an algorithm to create an image of the occulting near-Earth object using intensity time history data. In one or more embodiments of the invention, the imaging module 430 creates an actual silhouette of the near-Earth object in the form of a pixellated image. In an exemplary embodiment of the invention, this pixellated image has roughly 10 to 20 pixels on a side. This embodiment implies the shadow pattern needs to have a spatial resolution corresponding to 10-20 measurements across the cross-track direction and time history measurements (representing intensity variation in the along-track direction) during time intervals at most 0.1 to 0.05 of the duration of the passage of the near-Earth object across the array. In one or more embodiments of the invention, the imaging module 430 clarifies the accuracy of the image using a signal-to-noise ratio.

    [0033] The imaging algorithm is detailed below. In an embodiment of the invention, the algorithm for creating a sharp silhouette image of the occulting near-Earth object includes producing a shadow function, applying a phase retrieval algorithm to determine the complex field amplitude at position vector x = (x,y) on the plane normal to the line-of-sight at the array location, and applying an inversion technique using a silhouette function.

    [0034] Shadow Function

    [0035] The shadow function, the shadow pattern intensity relative to that of the unobstructed star, is the square of the magnitude of the complex field amplitude, U(x), at position vector x = (x,y) on the plane normal to the line-of-sight at the array location. This may be represented as:

    [00001] I(x) = |U(x)|².

    [0036] The complex field amplitude U(x,y) at coordinates (x,y) on the plane normal to the line-of-sight at the array location is:

    [00002] U(x,y) = (z̄/λ) ∫∫ dθ_x dθ_y [Γ(θ_x, θ_y) e^(iπ(z̄/λ)(θ_x² + θ_y²))] e^(−2πi(xθ_x + yθ_y)/λ)

    where λ is the mid-band wavelength, z̄ is the distance from the observers to the occulting near-Earth object, and (θ_x, θ_y) are coordinates on a plane very close to the occulting near-Earth object, divided by z̄.

    [0037] If the star is treated as a point source, the complex field amplitude at position vector x=(x,y) is given by:

    [00003] U(x) = U_ps(λ, z̄, x) = (z̄/λ) ∫ d²θ [Γ(θ) e^(iπ(z̄/λ)|θ|²)] e^(−2πi x·θ/λ).

    The imaging module 430 may calculate U(x) at position vector x=(x,y) on the plane normal to the line-of-sight at the array location and use it to determine a silhouette function.
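    For intuition about the shadow functions defined above, a one-dimensional analogue is easy to compute exactly: the Fresnel shadow of an opaque strip, obtained from the slit diffraction integral via Babinet's principle. This is a simplified stand-in for the two-dimensional formulation above, and it assumes SciPy's Fresnel integrals are available; the geometry values are illustrative:

```python
import numpy as np
from scipy.special import fresnel  # returns (S(u), C(u))

def strip_shadow(x, half_width, wavelength, z):
    """1-D Fresnel shadow of an opaque strip of given half-width at
    distance z, illuminated by a point source at infinity. Returns the
    intensity normalized to the unobstructed stellar intensity."""
    s = np.sqrt(2.0 / (wavelength * z))
    s1, c1 = fresnel(s * (-half_width - x))
    s2, c2 = fresnel(s * (half_width - x))
    u_slit = np.exp(-1j * np.pi / 4) / np.sqrt(2) * ((c2 - c1) + 1j * (s2 - s1))
    return np.abs(1.0 - u_slit) ** 2        # Babinet: strip = free - slit

x = np.linspace(-2000.0, 2000.0, 1001)       # meters across the shadow track
I_small = strip_shadow(x, 50.0, 0.5e-6, 1.496e11)    # 100 m object at 1 AU
I_large = strip_shadow(x, 5000.0, 0.5e-6, 1.496e11)  # 10 km object at 1 AU
# I_small dips only to ~0.6 at center (heavily diffracted), while I_large
# drops to nearly zero, matching the Fresnel-number discussion above.
```

The shallow, striation-bordered dip for the 100 m object is exactly the regime in which the phase retrieval machinery below is needed to recover a silhouette.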

    [0038] In an alternate embodiment of the invention, if the light source is treated as an extended incoherent source, then the total shadow pattern is given by the convolution integral

    [00004] I_tot(x) = ∫_{λ̄−Δλ/2}^{λ̄+Δλ/2} dλ ∫ d²θ′ B(λ, θ′) |U_ps(x − z̄θ′)|²

    where B(λ, θ′) is the normalized spectral radiance of the light from the source transmitted by a suitable gray band-limited filter with band-pass width Δλ and center-band wavelength λ̄, at the look-angle position vector θ′. Given the intensity pattern I_tot(x), the imaging module may use I_tot(x) and U(x) to compute the silhouette function.

    [0039] The integral over the wavelength tends to smooth the fluctuations in

    [00005] |U_ps(x − z̄θ′)|²,

    reducing the contrast of the striations bordering the shadow interior. To maintain an acceptable level of contrast, the spectral resolution of the filter, R = λ̄/Δλ, must be approximately 10. If the spectral radiance of the filtered source is slowly varying over the filter wave-band, then I_tot(x) may be approximated by:

    [00006] I_tot(x) ≈ Δλ ∫ d²θ′ B(λ̄, θ′) |U_ps(λ̄, x − z̄θ′)|²

    [0040] In this form, the imaging module 430 is able to calculate the silhouette function using a phase retrieval algorithm.

    [0041] Phase Retrieval Algorithm

    [0042] The phase retrieval problem occurs widely in astronomy, crystallography and electron microscopy. In these applications, the images (intensity distributions) being reconstructed are typically some object, say a planet or molecular structure, with a dark background, thus having finite support. The objective is to reconstruct the image, typically pixellated, when only the magnitude of its Fourier transform is known. This is possible when the dark background furnishes sufficiently many image-domain constraints, i.e. the restriction that the background pixels are black. For the most part, phase retrieval is formulated to address discretized data, and the object is to determine a pixellated image given the modulus of the (discrete) Fourier transform.

    [0043] In one embodiment of the invention, the imaging module 430 calculates |U(x,y)|², while U(x,y) is the Fourier transform of a known function,

    [00007] e^(iπ(z̄/λ)(θ_x² + θ_y²))

    multiplied by Γ(θ_x, θ_y), or the silhouette function. The imaging module 430 exploits some special features that differ from the typical phase retrieval situation. For example, there are more image-domain constraints here than in the traditional problem: all background pixels in Γ(θ_x, θ_y) are uniformly unity, while pixels within the silhouette are zero. Since all pixels are constrained, this leads to faster, more reliable convergence and much greater tolerance of measurement noise in the modulus data. Once U(x,y) is determined, one can invert

    [00008] I(x) = |U(x)|²

    to obtain:

    [00009] Γ(θ_x, θ_y) = (1/(λz̄)) e^(−iπ(z̄/λ)(θ_x² + θ_y²)) ∫∫ dx dy U(x) e^(2πi(xθ_x + yθ_y)/λ)

    [0044] Borrowing the notation of the equations I(x) = |U(x)|² and U(x,y) = (z̄/λ) ∫∫ dθ_x dθ_y [Γ(θ_x, θ_y) e^(iπ(z̄/λ)(θ_x² + θ_y²))] e^(−2πi(xθ_x + yθ_y)/λ), we are given the data J(x,y) = |Ũ(x,y)|², where Ũ(x,y) is the counterpart of U(x,y). In phase retrieval applications, Ũ(x,y) is the optical coherence, not the complex field amplitude. Far-field radiation can also be assumed, in which case the exponential exp(iπ(z̄/λ)(θ_x² + θ_y²)) is approximately unity. Then Ũ(x,y) is given by the Fourier transform relation:

    [00015] Ũ(x,y) = (z̄/λ) ∫∫ dθ_x dθ_y Γ̆(θ_x, θ_y) e^(−2πi(xθ_x + yθ_y)/λ)

    where Γ̆(θ_x, θ_y), the counterpart of the silhouette function Γ(θ_x, θ_y), is the image intensity distribution, not the silhouette function. In phase retrieval, Γ̆(θ_x, θ_y) is a function that may assume any real, non-negative values. Given this relation for Ũ(x,y), the challenge is to compute Γ̆(θ_x, θ_y) from the data, J(x,y) = |Ũ(x,y)|².

    [0045] Since the phase of Ũ(x,y) is not available, a solution cannot exist unless there are additional constraints on Γ̆(θ_x, θ_y). These usually take the form of constraints on the value of Γ̆(θ_x, θ_y) (usually that it vanishes) over some region in the (θ_x, θ_y) plane. Many methods of phase retrieval are based on known methods that proceed by imposing a sequence of projections designed to converge to satisfaction of both the image-domain constraints (zero values of Γ̆(θ_x, θ_y) in a specified region) and the Fourier-domain constraints (the specified values of J(x,y) = |Ũ(x,y)|²). This involves starting with an estimate of Γ̆(θ_x, θ_y), nulling its values in the specified region, taking its Fourier transform, correcting the modulus in conformity with J(x,y) = |Ũ(x,y)|², taking the inverse transform to obtain the next estimate of Γ̆(θ_x, θ_y), and repeating the process until (or if) acceptable convergence is attained. The algorithm, however, is very susceptible to local minima and frequently stagnates before convergence.
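    The projection iteration described above is essentially the classical error-reduction scheme. The following minimal sketch uses the conventional dark-background support constraint (non-negative image, zero outside a support mask) rather than the patent's all-pixel binary constraint, so it illustrates the generic loop only:

```python
import numpy as np

def error_reduction(modulus, support, n_iter=200, seed=0):
    """Alternating-projection (error-reduction) phase retrieval: impose
    the Fourier-domain modulus |F{g}| = modulus and the image-domain
    constraints (g real, non-negative, zero outside `support`) in turn.
    `modulus` is the known Fourier magnitude, i.e. sqrt of the measured
    J(x,y); `support` is a boolean mask where the image may be nonzero."""
    rng = np.random.default_rng(seed)
    g = np.fft.ifft2(modulus * np.exp(2j * np.pi * rng.random(modulus.shape))).real
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = modulus * np.exp(1j * np.angle(G))             # Fourier-domain constraint
        g = np.fft.ifft2(G).real
        g = np.where(support, np.clip(g, 0.0, None), 0.0)  # image-domain constraint
    return g

# Toy run: attempt to recover a small bright patch from its Fourier modulus.
truth = np.zeros((64, 64))
truth[28:36, 30:38] = 1.0
support = np.zeros((64, 64), dtype=bool)
support[24:40, 26:42] = True            # loose support around the object
rec = error_reduction(np.abs(np.fft.fft2(truth)), support)
```

As the paragraph above notes, this basic loop is prone to stagnation in local minima; relaxed constraint-stepping variants of the image-domain update mitigate that.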

    [0046] In another embodiment of the invention, the image domain constraint imposition is modified through known methods so that instead of rigidly setting the values of Γ̆(θ_x, θ_y) in the constraint region to zero, a step is taken toward imposing the constraint based on the previous iterate. This offers enormous improvement, with very infrequent incidence of stagnation.

    [0047] The two methods above assume that the image domain constraint region is known. This work suggests that a priori knowledge of the (θ_x, θ_y) plane region wherein Γ̆(θ_x, θ_y) = 0 may not be necessary. Successful phase retrieval appears possible as long as it is known a priori that the constraint region encompasses at least half the total extent of the image (half the number of pixels in the discrete case).

    [0048] If the light source is treated as an extended incoherent source, then the Fourier transform pair may be defined as:

    [00017] $$F_{\vec u}[f(\vec x)] = \int_{-\infty}^{\infty} d^2x\, f(\vec x)\, e^{2\pi i\,\vec u\cdot\vec x}, \qquad F_{\vec x}^{-1}[F(\vec u)] = \int_{-\infty}^{\infty} d^2u\, F(\vec u)\, e^{-2\pi i\,\vec u\cdot\vec x}.$$

    [0049] Then, taking the Fourier transform of both sides of the above equation and employing the convolution theorem, results in:

    [00018] $$F_{\vec u}\left[I_{tot}(\vec x)\right] = \frac{\Delta\lambda}{\bar z^{\,2}}\, F_{\vec u}\!\left[B(\bar\lambda,\vec\phi/\bar z)\right]\, F_{\vec u}\!\left[\left|U_{ps}(\bar\lambda,\vec x)\right|^2\right]$$

    [0050] Solving for |U_ps(λ̄,x)| in the above equation and inverting the relationship for Γ(θ_x,θ_y) gives:

    [00019] $$\left|U_{ps}(\bar\lambda,\vec x)\right| = \sqrt{F_{\vec x}^{-1}\!\left[\frac{\bar z^{\,2}}{\Delta\lambda}\,\frac{F_{\vec u}[I_{tot}(\vec x)]}{F_{\vec u}[B(\bar\lambda,\vec\phi/\bar z)]}\right]}\,, \qquad \Gamma(\vec\theta) = \frac{1}{\bar\lambda\bar z}\, e^{-i\pi\frac{\bar z}{\bar\lambda}|\vec\theta|^2} \int d^2x\, U_{ps}(\bar\lambda,\vec x)\, e^{2\pi i\,\vec x\cdot\vec\theta/\bar\lambda}$$

    [0051] where in most instances one may employ a Gaussian approximation for F_u[B(λ̄,φ/z̄)], so that the denominator of |U_ps(λ̄,x)| has no zeros. |U_ps(λ̄,x)| gives the magnitude of the field amplitude in terms of the recorded intensity. When combined with image-domain constraints on the silhouette (pixels are either zero or unity), this information can yield the complete field amplitude (both magnitude and phase) via the phase retrieval algorithm. The second equation resulting from the inversion above shows that the silhouette is then determined by the inverse Fourier transform of U_ps(λ̄,x).
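    The Fourier-domain division of paragraphs [0049]–[0051] can be sketched in one dimension. The sizes, the top-hat "shadow" profile, and the Gaussian blur width below are assumed for illustration; periodic convolution via the FFT keeps the algebra exact, and the Gaussian's transform has no zeros, so the division is well defined.

```python
import numpy as np

n = 256
x = np.arange(n)

# Assumed "true" squared field magnitude: a top-hat shadow profile.
u2_true = np.ones(n)
u2_true[100:140] = 0.2

# Narrow normalized Gaussian standing in for B (finite source size).
sigma = 1.5
b = np.exp(-0.5 * ((x - n / 2) / sigma) ** 2)
b /= b.sum()

# Forward model: periodic convolution, as in [00018].
blurred = np.real(np.fft.ifft(np.fft.fft(u2_true) * np.fft.fft(b)))

# Inversion: divide in the Fourier domain, as in [00019].
u2_rec = np.real(np.fft.ifft(np.fft.fft(blurred) / np.fft.fft(b)))

print(float(np.max(np.abs(u2_rec - u2_true))))
```

    With a Gaussian kernel this narrow the deconvolution recovers the profile to floating-point accuracy; in practice noise limits how small the denominator may be allowed to become.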

    [0052] The restriction on the spectral resolution, while preserving the contrast in the shadow pattern, may sometimes unduly impair the signal-to-noise ratio. In an embodiment of the invention, a battery of narrow band filters with contiguous, non-overlapping pass bands that span a wide wavelength range is used at the light collecting apertures. Each filter band has an associated photodetector and a high spectral resolution. To be specific, suppose that there are N_C such wavelength channels, all of equal width, Δλ_C, spanning the visible band, Δλ_V = λ_V2 − λ_V1. The mid-band wavelength of channel k is:


    λ_Ck = λ_V1 + ½(2k−1)Δλ_C


    for k = 1, 2, . . . , N_C. Then

    [00020] $$I_{tot}(\vec x) \approx \Delta\lambda \int d^2\theta\; B(\bar\lambda,\vec\theta)\left|U_{ps}(\bar\lambda,\vec x-\bar z\,\vec\theta)\right|^2$$

    can be written:

    [00021] $$I_{tot}(\vec x) = \sum_{k=1}^{N_C} I_k(\bar\lambda_{Ck},\vec x)$$

    where each channel contributes:

    [00022] $$I_k(\bar\lambda_{Ck},\vec x) = \Delta\lambda_C \int d^2\theta\; B(\bar\lambda_{Ck},\vec\theta)\left|U_{ps}(\bar\lambda_{Ck},\vec x-\bar z\,\vec\theta)\right|^2.$$
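    The channel layout may be illustrated numerically. The band edges and channel count below are assumed for demonstration only; channel k then spans [λ_V1+(k−1)Δλ_C, λ_V1+kΔλ_C] with mid-band wavelength λ_Ck = λ_V1 + ½(2k−1)Δλ_C.

```python
# Hypothetical visible band split into equal-width channels.
lam_v1, lam_v2 = 400e-9, 700e-9       # assumed band edges, m
n_c = 30                              # assumed number of channels
dlam_c = (lam_v2 - lam_v1) / n_c      # equal channel width

# Mid-band wavelength of each channel, per the formula above.
centers = [lam_v1 + (2 * k - 1) * dlam_c / 2 for k in range(1, n_c + 1)]

print(centers[0], centers[-1])
```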

    [0053] Each channel may be processed independently by the imaging module 430 in a manner analogous to

    [00023] $$\left|U_{ps}(\bar\lambda,\vec x)\right| = \sqrt{F_{\vec x}^{-1}\!\left[\frac{\bar z^{\,2}}{\Delta\lambda}\,\frac{F_{\vec u}[I_{tot}(\vec x)]}{F_{\vec u}[B(\bar\lambda,\vec\phi/\bar z)]}\right]}$$

    and then the results may be combined via

    [00024] $$\Gamma(\vec\theta) = \frac{1}{\bar\lambda\bar z}\, e^{-i\pi\frac{\bar z}{\bar\lambda}|\vec\theta|^2} \int d^2x\, U_{ps}(\bar\lambda,\vec x)\, e^{2\pi i\,\vec x\cdot\vec\theta/\bar\lambda}$$

    to achieve a much improved signal-to-noise ratio while preserving adequate spectral resolution and contrast.

    [0054] It should be noted that even when all constraints are satisfied, there can be ambiguities in the solution. The trivial ambiguities are those transformations of the image that do not affect the Fourier transform modulus. These comprise translations of the image in the picture plane and 180° rotations. Nontrivial ambiguities arise when the image is itself the convolution of two or more other images. This relatively rare phenomenon gives rise to multiple possible solutions.

    [0055] Silhouette Function

    [0056] Generally, the imaging module 430 scans the pixels in the silhouette plane using a simple raster scan or some other scanning algorithm and for each pixel, changes the value (from unity to zero or vice versa), computes the modified shadow pattern, and then determines the cross correlation with the shadow pattern data to generate an image of the near-Earth object.

    [0057] Silhouette function calculation can be seen to be very similar to phase retrieval: data on the modulus of a quantity, U(x,y), is known, which is related via Fourier transformation to a known function

    [00025] $$\exp\left(i\pi\,\frac{\bar z}{\lambda}\left(\theta_x^2+\theta_y^2\right)\right)$$

    multiplied by the function of interest, Γ(θ_x,θ_y). Provided that the detection array is appropriately sized, a region of sufficient extent wherein Γ(θ_x,θ_y) vanishes is known. Additionally, the silhouette function only assumes the values zero or unity. The silhouette function may be defined by:

    [00026] $$\Gamma(\theta_x,\theta_y) = \begin{cases} 0, & \text{if } \vec\theta \text{ is inside the asteroid profile} \\ 1, & \text{otherwise.} \end{cases}$$

    [0058] In one or more embodiments, the silhouette function is represented as a pixellated image in the (x_a,y_a) plane, where each matrix element is either zero or unity. In principle, the search space is finite, and there is a finite-step algorithm that may converge to a solution for which all constraints are satisfied. With respect to ambiguities, note that by the conditions of the problem, the silhouette is known to be surrounded by uniform intensity, so translations may be eliminated by shifting the silhouette pattern in the picture frame. There is also a one-to-one relation between the orientation of the shadow pattern and the silhouette, so that rotational ambiguity is not possible. Further, nontrivial ambiguities are unlikely because the convolution of silhouette functions with other silhouette functions must have values greater than unity.

    [0059] Let Γ̌ ∈ ℝ^(M×M) and J ∈ ℝ^(M×M) be the matrices of the trial values of the image and of the specified Fourier transform modulus, respectively, and denote the discrete Fourier transform by F(·). The error metric is defined by:

    [00027] $$\mathcal{E} = \frac{1}{2}\sum_{m=1}^{M}\sum_{n=1}^{M}\left(J_{mn}-\left|F_{mn}(\check\Gamma)\right|\right)^2.$$

    [0060] The imaging module conducts a raster scan of the M×M image plane. At each pixel, the color (0 or 1) is changed and the error metric is computed. If the error is reduced from the previous step, the new color is retained; otherwise the color change is rescinded. The raster scan is performed iteratively until the image is sharp enough to meet a threshold or is perfectly reproduced, except for a translation in the image plane.
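    The raster-scan descent of paragraph [0060] can be sketched directly. The image size, the random binary test image, and the number of passes below are assumed for illustration; each accepted flip strictly reduces the error metric of [00027], so the final error can never exceed the starting error.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 8
truth = (rng.random((M, M)) < 0.3).astype(float)   # assumed test silhouette
J = np.abs(np.fft.fft2(truth))                     # specified modulus data

def err(img):
    # Error metric of [00027]: half the summed squared modulus mismatch.
    return 0.5 * float(np.sum((J - np.abs(np.fft.fft2(img))) ** 2))

est = np.zeros((M, M))
e = err(est)
for _ in range(5):                    # a few full raster passes
    for i in range(M):
        for j in range(M):
            est[i, j] = 1.0 - est[i, j]   # flip the pixel "color"
            e_new = err(est)
            if e_new < e:
                e = e_new                 # retain the new color
            else:
                est[i, j] = 1.0 - est[i, j]  # rescind the change

print(e)
```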

    [0061] From

    [00028] $$U(x,y) = \frac{\bar z}{\lambda}\int d\theta_x\, d\theta_y \left[\Gamma(\theta_x,\theta_y)\, e^{i\pi\frac{\bar z}{\lambda}(\theta_x^2+\theta_y^2)}\right] e^{-2\pi i\frac{1}{\lambda}(x\theta_x+y\theta_y)},$$

    the shadow intensity distribution is:

    [00029] $$\Psi(x,y) = \left|\frac{1}{\lambda\bar z}\int dx_a\, dy_a \left[\left(1-\bar\Gamma(x_a,y_a)\right) e^{i\frac{\pi}{\lambda\bar z}(x_a^2+y_a^2)}\right] e^{-2\pi i\frac{1}{\lambda\bar z}(xx_a+yy_a)}\right|^2$$

    where Γ̄(x_a,y_a) is the complementary silhouette function:

    [00030] $$\bar\Gamma(x_a,y_a) = \begin{cases} 1, & \text{if } (x_a,y_a) \text{ is inside the asteroid profile} \\ 0, & \text{otherwise.} \end{cases}$$

    In other words, this is the reverse-color version of the sharp shadow. The introduction of Γ̄(x_a,y_a) is convenient because the integration over the unity integrand in

    [00031] $$\Psi(x,y) = \left|\frac{1}{\lambda\bar z}\int dx_a\, dy_a \left[\left(1-\bar\Gamma(x_a,y_a)\right) e^{i\frac{\pi}{\lambda\bar z}(x_a^2+y_a^2)}\right] e^{-2\pi i\frac{1}{\lambda\bar z}(xx_a+yy_a)}\right|^2$$

    can be carried out at once to yield:


    Ψ(x,y) = |U|²

    $$U = i - F\int_{-\infty}^{\infty} d\phi_x \int_{-\infty}^{\infty} d\phi_y\; \bar\Gamma(\phi_x,\phi_y)\, e^{i\pi F\left[(\phi_x-w_x)^2+(\phi_y-w_y)^2\right]}$$

    where the previous estimate of the near-Earth object radius, a, is used to define non-dimensional variables:

    [00032] $$\phi_x = x_a/a, \quad \phi_y = y_a/a; \qquad w_x = x/a, \quad w_y = y/a$$

    and where F is the corresponding Fresnel number.

    [0062] With the above expressions, relating Γ̄(φ_x,φ_y) to Ψ(x,y) may be performed using a data processing device such as a computer or server. In an embodiment of the invention, these quantities are represented by 294×294 matrices. This gives a digital framework for fine-resolution rendering of the shadow pattern and the complementary silhouette function. For approximating the silhouette, a pixellated image is defined whose pixels are squares several fine-resolution pixels on a side. In this specific case, 21 by 21 low-resolution pixels are used to define the complementary silhouette, each comprising a 14 by 14 square of fine-resolution pixels. The shadow function is the square of the magnitude of a field amplitude that is built up from a combination of the field amplitudes of the individual coarse-resolution pixel squares. These amplitude distributions are computed using the analytical formula for the integral in U of


    Ψ(x,y) = |U|²

    $$U = i - F\int_{-\infty}^{\infty} d\phi_x \int_{-\infty}^{\infty} d\phi_y\; \bar\Gamma(\phi_x,\phi_y)\, e^{i\pi F\left[(\phi_x-w_x)^2+(\phi_y-w_y)^2\right]}$$

    assuming Γ̄(φ_x,φ_y) is unity only over the single coarse resolution pixel. For example, the amplitude for a single coarse resolution pixel of half-width β fine-resolution pixels, centered at the origin, is:

    [00033] $$U(w_x,w_y) = i - \frac{1}{2}\left[E\!\left(\sqrt{2F}\,(\beta-w_x)\right)+E\!\left(\sqrt{2F}\,(\beta+w_x)\right)\right]\left[E\!\left(\sqrt{2F}\,(\beta-w_y)\right)+E\!\left(\sqrt{2F}\,(\beta+w_y)\right)\right]$$

    where E(ξ) = C(ξ) + iS(ξ), with C and S the Fresnel integrals, and F is the corresponding Fresnel number. This calculation is not repeated for every coarse resolution pixel for which Γ̄(φ_x,φ_y) is unity; rather, the imaging module 430 computes U(w_x,w_y) over a double-sized domain to obtain a 588×588 matrix, and then uses appropriately shifted versions of this matrix to accumulate the total field amplitude for all coarse resolution pixels having Γ̄(φ_x,φ_y)=1. Taking the square of the magnitude of the total field amplitude gives the matrix of values of Ψ(x,y) at all the fine-resolution pixel locations.
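    The single-pixel amplitude [00033] may be evaluated directly from the Fresnel integrals. The sketch below assumes scipy is available (note that scipy.special.fresnel returns the pair (S(ξ), C(ξ))), treats β as the pixel half-width, and uses illustrative arguments only.

```python
import numpy as np
from scipy.special import fresnel  # returns (S(xi), C(xi))

def pixel_amplitude(wx, wy, beta, f_num):
    """Field amplitude behind one opaque square pixel of half-width beta
    (non-dimensional units), per [00033], with E(xi) = C(xi) + i*S(xi)."""
    def E(xi):
        s, c = fresnel(xi)
        return c + 1j * s
    r = np.sqrt(2.0 * f_num)
    return 1j - 0.5 * (E(r * (beta - wx)) + E(r * (beta + wx))) * \
                      (E(r * (beta - wy)) + E(r * (beta + wy)))

# Far from the pixel the un-occulted intensity |U|^2 approaches unity;
# directly behind it, at high Fresnel number, the field is deeply shadowed.
print(abs(pixel_amplitude(50.0, 0.0, 1.0, 10.0)) ** 2)
print(abs(pixel_amplitude(0.0, 0.0, 1.0, 50.0)))
```

    Shifted copies of such an amplitude map, summed over the unity-valued coarse pixels, reproduce the accumulation procedure described above.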

    [0063] In an embodiment of the invention, the imaging module 430 may use this method of mapping Γ̄(φ_x,φ_y) into Ψ(x,y) to implement the simple algorithm described above for phase retrieval: scan the coarse resolution pixels; at each pixel, change the value of Γ̄(φ_x,φ_y) and evaluate a mean square error between the estimated shadow pattern and the data, analogous to criterion

    [00034] $$\mathcal{E} = \frac{1}{2}\sum_{m=1}^{M}\sum_{n=1}^{M}\left(J_{mn}-\left|F_{mn}(\tilde\Gamma)\right|\right)^2,$$

    and retain the value change if there is improvement, or rescind the change if not.

    [0064] However, in another embodiment of the invention, the imaging module uses a procedure that avoids any errors in the construction of the shadow pattern by directly comparing results with the raw data. To do this, one further processing step is taken: for a given estimate of Γ̄(φ_x,φ_y), the shadow pattern computed as above is used to synthesize the intensity time histories of the various apertures that would be observed if Ψ(x,y) were the true shadow pattern. This is described further below under The Alignment Method.

    [0065] Alignment Method

    [0066] In another embodiment of the invention, the time history data may be aligned by using the travel velocity to transform times to distances normal to the direction of travel, so that the phase retrieval may be applied directly without a complete reconstruction of the shadow function. This is the shadow pattern, i.e., the normalized intensity distribution over the plane parallel to the plane of the array that moves with the shadow. Using just this data, a number of significant parameters may be estimated. The intensity distribution is heavily diffracted, but the relations above and some well known diffraction results allow the data analysis module to estimate the size and distance of the near-Earth object from the extent of a relatively deep shadow and the width of the striations that immediately surround it.

    [0067] First, let us consider the information on the near-Earth object distance implied by the shadow function. Referring to

    [00035] $$U(x,y) = \frac{\bar z}{\lambda}\int d\theta_x\, d\theta_y \left[\Gamma(\theta_x,\theta_y)\, e^{i\pi\frac{\bar z}{\lambda}(\theta_x^2+\theta_y^2)}\right] e^{-2\pi i\frac{1}{\lambda}(x\theta_x+y\theta_y)},$$

    recall that (θ_x,θ_y) are coordinates on a plane very close to the occulting object, divided by z̄. Suppose that (x_a,y_a) are the corresponding Cartesian coordinates on this plane, so that:


    θ_x = x_a/z̄, θ_y = y_a/z̄

    [0068] In terms of (x_a,y_a),

    [00036] $$U(x,y) = \frac{\bar z}{\lambda}\int d\theta_x\, d\theta_y \left[\Gamma(\theta_x,\theta_y)\, e^{i\pi\frac{\bar z}{\lambda}(\theta_x^2+\theta_y^2)}\right] e^{-2\pi i\frac{1}{\lambda}(x\theta_x+y\theta_y)}$$

    may be written:

    [00037] $$U(x,y) = \frac{1}{\lambda\bar z}\int dx_a\, dy_a \left[\Gamma(x_a,y_a)\, e^{i\frac{\pi}{\lambda\bar z}(x_a^2+y_a^2)}\right] e^{-2\pi i\frac{1}{\lambda\bar z}(xx_a+yy_a)}.$$

    [0069] While the field amplitude depends on the geometry of the silhouette function, the only other parameter is λz̄. Furthermore, the edges of the silhouette are adjoined by linear bands of intensity maxima. When the width of the silhouette is somewhat larger than the width of these striations, the intensity pattern in the vicinity of an edge asymptotically approaches that of an infinite straight-edge shadow, where light is blocked by an opaque half-plane, y<0. This intensity distribution takes the form:

    [00038] $$\Psi(y) = \frac{1}{2}\left|C\!\left(\sqrt{\tfrac{2}{\lambda\bar z}}\;y\right)+iS\!\left(\sqrt{\tfrac{2}{\lambda\bar z}}\;y\right)+\frac{1}{2}(1+i)\right|^2$$

    where C(·) and S(·) are the Fresnel integrals. Equations

    [00039] $$U(x,y) = \frac{1}{\lambda\bar z}\int dx_a\, dy_a \left[\Gamma(x_a,y_a)\, e^{i\frac{\pi}{\lambda\bar z}(x_a^2+y_a^2)}\right] e^{-2\pi i\frac{1}{\lambda\bar z}(xx_a+yy_a)}$$

    and

    [00040] $$\Psi(y) = \frac{1}{2}\left|C\!\left(\sqrt{\tfrac{2}{\lambda\bar z}}\;y\right)+iS\!\left(\sqrt{\tfrac{2}{\lambda\bar z}}\;y\right)+\frac{1}{2}(1+i)\right|^2$$

    make it clear that the distance to the near-Earth object can be estimated from the width of the striations.

    [0070] In an exemplary embodiment of the invention, the width of the first lobe in

    [00041] $$\Psi(y) = \frac{1}{2}\left|C\!\left(\sqrt{\tfrac{2}{\lambda\bar z}}\;y\right)+iS\!\left(\sqrt{\tfrac{2}{\lambda\bar z}}\;y\right)+\frac{1}{2}(1+i)\right|^2$$

    that is above unit intensity is, in terms of the non-dimensional argument, 1.275. The measured width of this portion of the striations in an exemplary embodiment of the invention is approximately 130 m. Setting

    [00042] $$\sqrt{\frac{2}{\lambda\bar z}}\;\Delta y \approx 1.275; \qquad \Delta y = 130\ \mathrm{m}$$

    gives: z̄ = 4.19×10¹⁰ m = 0.280 AU.
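    The distance estimate of paragraph [0070] can be checked by solving [00042] for z̄. The mid-band wavelength is not stated in the text, so a representative visible-band value of 500 nm is assumed here for illustration.

```python
# Solve sqrt(2/(lam*z)) * dy = 1.275 for z, per [00042].
lam = 500e-9          # assumed mid-band wavelength, m
dy = 130.0            # first-lobe striation width from the text, m

z = 2.0 * dy ** 2 / (1.275 ** 2 * lam)
au = 1.496e11         # meters per astronomical unit
print(z, z / au)      # close to the text's 4.19e10 m = 0.280 AU
```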

    [0071] In an embodiment of the invention, the image may show an intensity hole. In an exemplary embodiment of the invention, the diameter is roughly 106 pixels, or approximately 2120 m on average. The imaging module 430 then applies:

    [00043] $$W = \text{width of deep shadow} = 2\left(a+\frac{\lambda\bar z}{a}\right)$$

    where a denotes the average radius of the near-Earth object. Setting W = 2120 m and solving for a yields a = 20.1 m. Using these values and z̄ = 4.19×10¹⁰ m = 0.280 AU, the Fresnel number is:

    [00044] $$F = \frac{a^2}{\lambda\bar z} = 0.0193.$$
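    The radius and Fresnel number of paragraph [0071] follow from [00043] and [00044]. The mid-band wavelength is again assumed to be 500 nm; [00043] is a quadratic in a, and the smaller root corresponds to the 20 m object.

```python
import math

lam = 500e-9          # assumed mid-band wavelength, m
z = 4.19e10           # distance estimate from [0070], m
W = 2120.0            # width of the deep shadow, m

# W = 2*(a + lam*z/a)  =>  a**2 - (W/2)*a + lam*z = 0; take the small root.
lz = lam * z
a = (W / 2.0 - math.sqrt((W / 2.0) ** 2 - 4.0 * lz)) / 2.0
fresnel_number = a ** 2 / lz          # per [00044]
print(a, fresnel_number)
```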

    [0072] In an embodiment of the invention, the imaging module 430 uses a procedure in the construction of the shadow pattern that directly compares results with the raw data. To do this the imaging module takes one further processing step: given an estimate of Γ̄(φ_x,φ_y), the imaging module 430 uses the shadow pattern computed as above to synthesize the intensity time histories of the various light collecting apertures that would be observed if Ψ(x,y) were the true shadow pattern. To compare the computed time histories with the basic data, the imaging module 430 may use a mean-square criterion or a correlation coefficient between the computed histories and the time history data. In an exemplary embodiment of the invention, M(Γ̄) ∈ ℝ^(N_a×M_S) denotes the matrix of computed time histories corresponding to an estimate, Γ̄ ∈ ℝ^(21×21), of the complementary silhouette. Each of the N_a rows contains the computed time history of a light collecting aperture, and each time history is a sampled data string M_S elements long. Let M_true be the corresponding data matrix. A correlation coefficient between the true and the estimated time history data may be defined as:

    [00045] $$\mathrm{Cor}(\bar\Gamma) = \frac{\displaystyle\sum_{m=1}^{N_a}\sum_{n=1}^{M_S}\left(M(\bar\Gamma)\odot M_{true}\right)_{m,n}}{\sqrt{\left[\displaystyle\sum_{p=1}^{N_a}\sum_{q=1}^{M_S}\left(M(\bar\Gamma)\odot M(\bar\Gamma)\right)_{p,q}\right]\left[\displaystyle\sum_{r=1}^{N_a}\sum_{s=1}^{M_S}\left(M_{true}\odot M_{true}\right)_{r,s}\right]}}$$

    where ⊙ denotes the Hadamard product. The summations in the denominator are the squares of the vector 2-norms induced on the space of N_a×M_S matrices. Given the above, Cor(Γ̄) ∈ [0,1], and Cor(Γ̄) = 1 if and only if M(Γ̄) = M_true. The imaging module 430 raster scans the elements of Γ̄ ∈ ℝ^(21×21); for each element, the imaging module 430 changes the value and computes M(Γ̄) and Cor(Γ̄). If Cor(Γ̄) is larger than its previous value, the imaging module 430 retains the new value (0 or 1); otherwise, it restores the old value. In an embodiment of the invention, the imaging module 430 assigns unity value to the coarse resolution pixel that is at the center of the image in order to help center the image.
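    The correlation coefficient [00045] reduces to a few lines of numpy. The aperture count, sample length, and random stand-in time histories below are assumed for illustration only.

```python
import numpy as np

def cor(m_est, m_true):
    """Correlation coefficient of [00045]: the summed Hadamard product,
    normalized by the square root of the product of the two summed
    squared (Frobenius) norms; lies in [0, 1] for nonnegative intensities."""
    num = np.sum(m_est * m_true)      # elementwise (Hadamard) product, summed
    den = np.sqrt(np.sum(m_est ** 2) * np.sum(m_true ** 2))
    return num / den

rng = np.random.default_rng(0)
n_a, m_s = 5, 40                      # assumed apertures x samples
m_true = rng.random((n_a, m_s))       # stand-in intensity time histories
m_est = rng.random((n_a, m_s))

print(cor(m_true, m_true), cor(m_est, m_true))
```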

    [0073] In summary, using the Alignment Method, the data analysis module may use the aligned data and the shadow function alone to provide estimates of the apparent velocity of the near-Earth object, its approximate size and distance, and the Fresnel number of the shadow.

    [0074] Photon Arrival Rates

    [0075] In an exemplary embodiment of the invention, each light collecting aperture will measure intensity in the visible to near-infrared range. Even with very narrow band filtering of the collected light, the response time of even fast avalanche photodiode detectors is much larger than the correlation time of the light. Under this "slow detector" condition, the arrival time statistics of photon detections are fully characterized by the average number of photon arrivals per unit time. Further, the standard deviation of the number of photons entering a detector over a given time period is equal to the square root of the average arrival rate multiplied by the time interval. The average number of photon detections divided by the standard deviation is the signal-to-noise ratio of the measured average intensity over the measurement interval.
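    The Poisson statistics of paragraph [0075] can be checked by simulation. The arrival rate and measurement interval below are assumed values; for a Poisson count the standard deviation equals the square root of the mean, so the signal-to-noise ratio of the measured intensity is √(rate × interval).

```python
import numpy as np

rng = np.random.default_rng(2)

rate = 5.0e4                   # assumed mean photon detections per second
dt = 0.2                       # assumed measurement interval, s
mean_counts = rate * dt

# Simulate many independent measurement intervals.
counts = rng.poisson(mean_counts, size=200_000)

# For Poisson arrivals, std ~ sqrt(mean), so SNR = mean/std ~ sqrt(mean).
snr_est = counts.mean() / counts.std()
print(snr_est, np.sqrt(mean_counts))
```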

    [0076] As the basis for our estimate of average photon detection rate, the Planck radiation law for the spectral radiance per unit wavelength is used:

    [00046] $$B_\lambda(T_*) = \frac{2hc^2}{\lambda^5}\,\frac{1}{e^{hc/\lambda k T_*}-1}\quad (\mathrm{W\,sr^{-1}\,m^{-3}})$$

    h = 6.626×10⁻³⁴ J·s, k = 1.38×10⁻²³ J·K⁻¹, c = 2.998×10⁸ m·s⁻¹

    where λ is the wavelength and T_* is the photospheric temperature of the occulted star. Integrating over the appropriate solid angle, the radiance from a unit area of surface is πB_λ(T_*) (W·m⁻³). Then, dividing by hν, where ν = c/λ, the number of photons emitted per unit surface, per unit wavelength, per second is:

    [00047] $$N_\lambda(T_*) = \frac{2\pi c}{\lambda^4}\,\frac{1}{e^{hc/\lambda k T_*}-1}.$$

    [0077] If n_λ(T_*) denotes the number of photons received per second, per square meter, per unit wavelength, then from

    [00048] $$N_\lambda(T_*) = \frac{2\pi c}{\lambda^4}\,\frac{1}{e^{hc/\lambda k T_*}-1},$$

    we have:

    [00049] $$n_\lambda(T_*) = \frac{2\pi c}{\lambda^4}\left(\frac{R_*}{d_*}\right)^2 \frac{1}{e^{hc/\lambda k T_*}-1}\quad (\mathrm{s^{-1}\,m^{-3}})$$

    where R_* and d_* are the stellar radius and distance, respectively.

    [0078] In one or more embodiments of the invention, the light is passed through the j-th grey band-limited filter discussed above, which admits light in the wavelength range λ ∈ [λ_Cj−1, λ_Cj]. Then n_Δλ_C(T_*, j), the average number of photons received per second, per square meter in λ ∈ [λ_Cj−1, λ_Cj], is:

    [00050] $$n_{\Delta\lambda_C}(T_*,j) = 2\pi c\left(\frac{R_*}{d_*}\right)^2 \int_{\bar\lambda_{Cj-1}}^{\bar\lambda_{Cj-1}+\Delta\lambda_C} d\lambda\;\frac{1}{\lambda^4}\,\frac{1}{e^{hc/\lambda k T_*}-1}$$

    where j = 1, . . . , N_C.

    [0079] It is convenient to express this in terms of the stellar apparent magnitude, m_*. In terms of solar parameters, this is given by:

    [00051] $$m_* = m_\odot - 2.5\log_{10}\left(\frac{L_*}{L_\odot}\left(\frac{d_\odot}{d_*}\right)^2\right)$$

    m_⊙ = solar apparent magnitude = −26.73
    d_⊙ = solar distance = 1.58×10⁻⁵ lyr
    L_⊙ = solar luminosity = 3.846×10²⁶ W
    where, using B_λ(T_*), the stellar luminosity is:

    [00052] $$L_* = 4\pi R_*^2 \times \pi\!\int_0^\infty B_\lambda\, d\lambda = 8\pi^2 R_*^2\, hc^2 \int_0^\infty d\lambda\;\frac{1}{\lambda^5}\,\frac{1}{e^{hc/\lambda k T_*}-1}$$

    [0080] Applying

    [00053] $$n_{\Delta\lambda_C}(T_*,j) = 2\pi c\left(\frac{R_*}{d_*}\right)^2 \int_{\bar\lambda_{Cj-1}}^{\bar\lambda_{Cj-1}+\Delta\lambda_C} d\lambda\;\frac{1}{\lambda^4}\,\frac{1}{e^{hc/\lambda k T_*}-1}, \qquad L_* = 8\pi^2 R_*^2\, hc^2 \int_0^\infty d\lambda\;\frac{1}{\lambda^5}\,\frac{1}{e^{hc/\lambda k T_*}-1},$$

    the ratio L_*/d_*² may be put in the form:

    [00054] $$\frac{L_*}{d_*^2} = 4\pi hc\left(\frac{\displaystyle\int_0^\infty d\lambda\;\frac{1}{\lambda^5}\,\frac{1}{e^{hc/\lambda kT_*}-1}}{\displaystyle\int_{\bar\lambda_{Cj-1}}^{\bar\lambda_{Cj-1}+\Delta\lambda_C} d\lambda\;\frac{1}{\lambda^4}\,\frac{1}{e^{hc/\lambda kT_*}-1}}\right) n_{\Delta\lambda_C}$$

    [0081] Substituting this into the expression for m_* and solving for n_Δλ_C, we get

    [00055] $$n_{\Delta\lambda_C}(T_*,j) = \frac{1}{4\pi hc}\,\frac{L_\odot}{d_\odot^2}\,\Psi_j(\bar\lambda_{Cj},T_*)\,10^{0.4(m_\odot-m_*)}$$

    where:

    [00056] $$\Psi_j(\bar\lambda_{Cj},T_*) = \frac{\displaystyle\int_{\bar\lambda_{Cj-1}}^{\bar\lambda_{Cj-1}+\Delta\lambda_C} d\lambda\;\frac{1}{\lambda^4}\,\frac{1}{e^{hc/\lambda kT_*}-1}}{\displaystyle\int_0^\infty d\lambda\;\frac{1}{\lambda^5}\,\frac{1}{e^{hc/\lambda kT_*}-1}}$$

    and where L_⊙, d_⊙, and m_⊙ are the solar luminosity, distance, and apparent magnitude, respectively.

    [0082] In one or more embodiments of the invention, the photon arrival processes within the various non-overlapping wavelength channels are mutually statistically independent Poisson processes. Thus the average number of photons, denoted n_Δλ, received per second, per square meter in the entire wavelength band, λ ∈ [λ_V1, λ_V2], spanned by the N_C channels is, by virtue of [00055] and [00056]:

    [00057] $$n_{\Delta\lambda} = \sum_{j=1}^{N_C} n_{\Delta\lambda_C}(T_*,j) = \frac{1}{4\pi hc}\,\frac{L_\odot}{d_\odot^2}\,\Psi(\bar\lambda,T_*)\,10^{0.4(m_\odot-m_*)}$$

    where:

    [00058] $$\Psi(\bar\lambda,T_*) = \frac{\displaystyle\int_{\bar\lambda-\Delta\lambda/2}^{\bar\lambda+\Delta\lambda/2} d\lambda\;\frac{1}{\lambda^4}\,\frac{1}{e^{hc/\lambda kT_*}-1}}{\displaystyle\int_0^\infty d\lambda\;\frac{1}{\lambda^5}\,\frac{1}{e^{hc/\lambda kT_*}-1}}$$

    and where λ̄ = ½(λ_V1+λ_V2) and Δλ = λ_V2 − λ_V1.
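    The band fraction Ψ of [00058] may be evaluated by direct quadrature of the Planck photon spectrum. The band edges, stellar temperature, and the truncation of the semi-infinite denominator integral below are assumed for illustration.

```python
import numpy as np

H, K, C = 6.626e-34, 1.38e-23, 2.998e8   # constants from [00046]

def trap(y, x):
    # Simple trapezoid rule.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def psi_band(lam_lo, lam_hi, t_star):
    """Psi of [00058]: band integral of lam^-4/(exp(hc/(lam k T))-1)
    over the full integral of lam^-5/(exp(hc/(lam k T))-1).  The
    semi-infinite denominator is truncated at 100 um, where the
    integrand is negligible for stellar temperatures."""
    lam_n = np.linspace(lam_lo, lam_hi, 20_001)
    lam_d = np.linspace(50e-9, 100e-6, 200_001)
    num = trap(lam_n ** -4.0 / np.expm1(H * C / (lam_n * K * t_star)), lam_n)
    den = trap(lam_d ** -5.0 / np.expm1(H * C / (lam_d * K * t_star)), lam_d)
    return num / den

psi_vis = psi_band(400e-9, 700e-9, 3000.0)   # assumed visible band, M star
psi_wide = psi_band(400e-9, 800e-9, 3000.0)  # widening the band raises Psi
print(psi_vis, psi_wide)
```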

    [0083] Note that intensity measurements determine only the magnitude, not the phase, of U(x,y). However, given that U(x,y) is essentially the Fourier transform of Γ(θ_x,θ_y), and given the constraints on the form of Γ(θ_x,θ_y), the phase can be determined by phase retrieval algorithms.

    [0084] Moreover, the second central moment of the number of photon arrivals received per second, per square meter in the entire wavelength band is simply n_Δλ. In an exemplary embodiment of the invention, the average arrival rate is a weak function of the temperature. In an embodiment of the invention, n_Δλ increases slightly for the cooler stars, reflecting the fact that, for a given radiance, the number of photons increases with wavelength. Hotter stars are relatively rare, while nearly 90% of stars are M-class. In an exemplary embodiment of the invention, when these results are averaged over the numerical distribution of spectral classes, the result is close to the case T_* = 3000 K, which is near the high end of temperatures for M stars. Thus, in this exemplary embodiment, it is reasonable to approximate all stars with the result for T_* = 3000 K and

    [00059] $$n_{\Delta\lambda} \approx N_0 \times 10^{-0.4\,m_*}, \qquad N_0 \approx 1.46\times 10^{10}.$$
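    The approximation [00059] makes photon budgets straightforward. The magnitude, quantum efficiency, aperture area, and sample interval below are assumed illustration values, not values taken from the patent.

```python
# Photon arrival rate from apparent magnitude, per [00059].
n0 = 1.46e10                  # photons s^-1 m^-2 at m* = 0
m_star = 10.0                 # assumed stellar apparent magnitude
eta = 0.3                     # assumed detector quantum efficiency
area = 0.05                   # assumed aperture area, m^2
dt = 1e-3                     # assumed sample interval, s

rate = n0 * 10 ** (-0.4 * m_star)     # photons s^-1 m^-2
counts = eta * area * rate * dt       # expected detections per sample
snr = counts ** 0.5                   # Poisson: SNR = sqrt(mean)
print(rate, counts, snr)
```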

    [0085] Signal-to-Noise Ratio

    [0086] To characterize the accuracy with which the silhouette may be determined, the imaging module 430, in one or more embodiments of the invention, determines a signal-to-noise ratio that is reasonably convenient to calculate, yet is consistent with the phase retrieval algorithm and relates to the critical factors determining the accuracy of the silhouette function. In an embodiment of the invention, the imaging module 430 characterizes the error in the image of the near-Earth object as a function of the levels of noise in the intensity measurements.

    [0087] In an alternate embodiment of the invention, the imaging module 430 uses intensity measurement statistics as a measure of silhouette estimation accuracy. As described above, the silhouette reconstruction method scans the pixels in the silhouette plane by some algorithm and, for each pixel, changes the value (from unity to zero or vice versa), computes the modified shadow pattern, and then determines the cross correlation with the shadow pattern data. If the cross-correlation increases, the value change is accepted; otherwise it is rescinded. This is a convex optimization problem, and as the measurement noise approaches zero, the silhouette errors also vanish. Therefore, the imaging module 430 may, in an embodiment of the invention, use the measurement noise statistics to define a signal-to-noise ratio, taking care that the "signal" part of the signal-to-noise ratio encompasses only that fraction of the intensity measurements that conveys the most pertinent information on the silhouette function.

    [0088] In this embodiment of the invention, the imaging module 430 characterizes the silhouette accuracy via the accuracy of the shadow pattern measurements relevant to silhouette information. In an exemplary embodiment of the invention, the final pixellated image of the asteroid is N_pix = 20 pixels on a side. In this embodiment of the invention, the number of apertures distributed across the cross-track direction must be approximately N_pix. The end-to-end extent of the array must be at least as large as the width of the shadow region, W, pertaining to the mid-band wavelength, λ̄, given by:

    [00060] $$W = 2\left(a+\frac{\bar\lambda\bar z}{a}\right)$$

    a = asteroid radius
    λ̄ = mid-band wavelength
    z̄ = distance of asteroid from the array
    The array need not be much larger than this, because the Fresnel integral fluctuations far from this shadow region contain little information about the silhouette.

    [0089] In an exemplary embodiment of the invention, at 0.28 AU a 40 m diameter asteroid has a shadow width of over 2 km. A typical value for the asteroid velocity perpendicular to the line of sight, V, is approximately 10 km/s, so the entire duration of the shadow passage is roughly W/V ≈ 0.2 s. Further, for each pixel, the intensity measurement is the total number of photodetection events recorded by the photodetector in time period Δy/V, where Δy = W/N_pix, the extent of a pixel in the observation plane.

    [0090] In this embodiment of the invention, each aperture is equipped with a bank of N_C non-overlapping narrow band filters with associated photodetectors. Therefore,

    [00061] $$U_{ps}(\bar\lambda_{Ck},\bar z,\vec x) = U_{ps}\!\left(\bar\lambda,\;\frac{\bar\lambda_{Ck}}{\bar\lambda}\,\bar z,\;\frac{\bar\lambda}{\bar\lambda_{Ck}}\,\vec x\right).$$

    This relation states that the shadow pattern produced by light centered at wavelength λ_Ck (over a wavelength band Δλ_C) is the same as that produced by light centered at λ̄ (with the same bandwidth), but with the distance to the source altered by λ_Ck/λ̄, and with a size that is dilated or contracted by the factor λ_Ck/λ̄. Both sides of the equation pertain to the same silhouette at distance z̄. Since, physically, z̄ is fixed in this embodiment, the individual filters associated with each light collecting aperture sample a fan of locations on the shadow pattern observation plane. Thus, if a light collecting aperture moves along the along-track axis, the x-axis in this embodiment of the invention, such that the cross-track coordinate is a constant y = ζ, then the N_C filters actually sample the shadow pattern at the y-coordinates:

    [00062] $$y_k = \zeta\left(1+\frac{k}{N_C}\,\frac{\Delta\lambda}{\bar\lambda}\right), \qquad k = -\tfrac{1}{2}N_C,\, \ldots,\, -1,\, 0,\, 1,\, \ldots,\, \tfrac{1}{2}N_C.$$

    [0091] In one or more embodiments of the invention, for each pixel, the data alignment process merely adds up all the photodetector measurements that contribute to the estimated intensity associated with that pixel. The pixel measurement is a sum of independent Poisson processes corresponding to all the light collecting apertures and all the narrow band filter channels in each light collecting aperture whose along-track paths pass through the pixel during the occultation. The main factor contributing to both the signal and the noise in the signal-to-noise ratio is the average of this quantity, evaluated for the un-occulted star. This is a function of the cross-track coordinate only. Using

    [00063] $$n_{\Delta\lambda_C}(T_*,j) = \frac{1}{4\pi hc}\,\frac{L_\odot}{d_\odot^2}\,\Psi_j(\bar\lambda_{Cj},T_*)\,10^{0.4(m_\odot-m_*)} \quad\text{and}\quad n_{\Delta\lambda} = \sum_{j=1}^{N_C} n_{\Delta\lambda_C}(T_*,j) = \frac{1}{4\pi hc}\,\frac{L_\odot}{d_\odot^2}\,\Psi(\bar\lambda,T_*)\,10^{0.4(m_\odot-m_*)},$$

    we have:

    [00064] $$\left(\begin{array}{c}\text{Average no. of (un-occulted) photodetections}\\ \text{for a pixel centered at } y_{pix}\end{array}\right) \approx \eta\, A_T\, n_{\Delta\lambda}\, \rho(y_{pix})\, \Delta y / V$$

    [0092] η = quantum efficiency of the detector (≈0.1–0.5)
    [0093] A_T = telescope aperture area
    [0094] ρ(y_pix) = number of tracks passing through [y_pix − ½Δy, y_pix + ½Δy) divided by N_C
    where:

    [00065] \rho(y_{pix}) = \frac{1}{N_C} \sum_{m=-\frac{1}{2}N_{pix}}^{\frac{1}{2}N_{pix}} \; \sum_{k=-\frac{1}{2}N_C}^{\frac{1}{2}N_C} \begin{cases} 1, & m\,\Delta y\left(1 + \frac{k}{N_C}\,\frac{\Delta\lambda}{\bar{\lambda}}\right) \in \left[y_{pix} - \tfrac{1}{2}\Delta y,\; y_{pix} + \tfrac{1}{2}\Delta y\right) \\ 0, & \text{otherwise.} \end{cases}

    This function has a continuous limit as N.sub.C→∞ and Δy→0, and is well approximated by that limit. In an embodiment of the invention, ρ(y.sub.pix) may be evaluated as a function of y.sub.pix divided by the aperture array width for N.sub.C=N.sub.pix=200, and for various values of the reciprocal of the spectral resolution,

    [00066] \frac{1}{R} = \frac{\Delta\lambda}{\bar{\lambda}}.

    In this embodiment of the invention, ρ(y.sub.pix) is somewhat above unity for a region within the aperture array, but then decreases abruptly. Thus it is desirable to make the cross-track extent of the aperture array somewhat larger than W by the factor

    [00067] 2 \Big/ \left(1 - \frac{\Delta\lambda}{2\,\bar{\lambda}}\right).

    Under this condition, a conservative estimate of ρ(y.sub.pix) is unity.
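The behavior of ρ(y.sub.pix) described above can be checked by evaluating equation [00065] directly. The sketch below is not from the patent; it uses N.sub.C=N.sub.pix=200 and 1/R=0.1 from the text, with an arbitrary unit pixel spacing.

```python
# Numerical check (not from the patent) of the track-density function rho of
# equation [00065].  N_C = N_pix = 200 and 1/R = 0.1 follow the text; the
# pixel spacing dy = 1 is an arbitrary unit choice.

def rho(y_pix, n_c=200, n_pix=200, inv_r=0.1, dy=1.0):
    """Fraction (over N_C) of dilated channel tracks m*dy*(1 + (k/N_C)*(dlam/lam))
    falling in the pixel bin [y_pix - dy/2, y_pix + dy/2)."""
    count = 0
    for m in range(-n_pix // 2, n_pix // 2 + 1):
        for k in range(-n_c // 2, n_c // 2 + 1):
            y = m * dy * (1.0 + (k / n_c) * inv_r)
            if y_pix - 0.5 * dy <= y < y_pix + 0.5 * dy:
                count += 1
    return count / n_c

# Well inside the array, rho is slightly above unity; beyond the nominal
# half-width (100*dy) it falls off abruptly, as the text describes.
inside = rho(50.0)    # slightly above 1
outside = rho(103.0)  # well below 1
```

This confirms the conservative estimate ρ(y.sub.pix)≈1 used in the SNR budget, provided the aperture array is oversized as described above.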
    The true “signal” is not the quantity given by the expression above,

    [00068] \left(\text{Average no. of un-occulted photodetections for a pixel centered at } y_{pix}\right) = \eta\,A_T\,n_{\Delta\lambda}\,\rho(y_{pix})\,\Delta y / V,

    but rather the contrast in the shadow intensity distribution.

    [0098] In this embodiment of the invention, the contrast arising from the fluctuations in the intensity is what contributes information about the silhouette. The estimated level of contrast may be expressed as a fraction, κ, of the un-occulted average photodetections given by the above equation. To estimate κ, the imaging module 430 considers that, for the range of Fresnel numbers above ˜0.1, the principal fluctuations take the form of nested striations enclosing the deep shadow region of the diffracted silhouette. The extent of the dark spot enclosed by the striations indicates the overall size of the silhouette. The fluctuation scale of the striations themselves, combined with the estimated size, indicates the distance and Fresnel number. Generally, if the system has sufficient signal-to-noise ratio to model the striations accurately, that will suffice to determine the silhouette accurately. These striations approximate the knife-edge shadow pattern for a point source given by:

    [00069] \left| U(\xi) \right|^2 = \frac{1}{2}\left| C\!\left(\sqrt{\frac{2}{\bar{z}\,\bar{\lambda}}}\,\xi\right) + i\,S\!\left(\sqrt{\frac{2}{\bar{z}\,\bar{\lambda}}}\,\xi\right) + \frac{1}{2}(1+i) \right|^2

    where ξ is distance along the axis perpendicular to the striation, and C(·) and S(·) are the Fresnel integrals.
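The knife-edge pattern of equation [00069] can be evaluated numerically, as a sketch (not part of the patent); the simple trapezoid quadrature for the Fresnel integrals and the unit choices for z̄ and λ̄ are illustrative assumptions.

```python
# Sketch (not from the patent): evaluate the knife-edge intensity pattern of
# equation [00069] using pure-Python numerical Fresnel integrals,
# C(w) = int_0^w cos(pi t^2 / 2) dt and S(w) = int_0^w sin(pi t^2 / 2) dt.
import math

def fresnel_cs(w, n=20000):
    """Fresnel integrals C(w), S(w) by the composite trapezoid rule."""
    if w == 0.0:
        return 0.0, 0.0
    h = w / n
    c = s = 0.0
    for i in range(n + 1):
        t = i * h
        weight = 0.5 if i in (0, n) else 1.0
        phase = 0.5 * math.pi * t * t
        c += weight * math.cos(phase)
        s += weight * math.sin(phase)
    return c * h, s * h

def knife_edge_intensity(xi, z_bar, lam_bar):
    """|U(xi)|^2 = (1/2)|C(w) + i S(w) + (1+i)/2|^2 with w = xi*sqrt(2/(z*lam))."""
    w = xi * math.sqrt(2.0 / (z_bar * lam_bar))
    c, s = fresnel_cs(w)
    return 0.5 * ((c + 0.5) ** 2 + (s + 0.5) ** 2)

# At the geometric shadow edge (xi = 0) the intensity is exactly 1/4 of the
# unobstructed level; deep in the lit region it oscillates about 1, and deep
# in the shadow it falls toward 0 -- these are the striations described above.
i_edge = knife_edge_intensity(0.0, 1.0, 1.0)   # 0.25
i_lit = knife_edge_intensity(10.0, 1.0, 1.0)   # close to 1
i_dark = knife_edge_intensity(-10.0, 1.0, 1.0) # close to 0
```

The damped oscillation of this curve about unity on the lit side is what the imaging module fits when it models the striations.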

    [0100] In summary, using the above estimate of n.sub.Δλ and the above estimates of ρ and κ, the imaging module 430 may calculate:

    [0099] In summary, using the above estimate of n.sub.Δλ and the above estimates of ρ and κ, the imaging module 430 may calculate:

    \text{Average contrast measurement} = \eta\,\kappa\,A_T\,N_0\,(\Delta y / V)\,10^{-0.4\,m_*}, \qquad N_0 = 1.46\times 10^{10}.

    [0100] In an embodiment of the invention, the imaging module 430 considers the noise statistics. The variance of the noise due to photon arrival statistics is ηA.sub.Tn.sub.Δλρ(y.sub.pix)Δy/V. To this must be added the photodetector dark count, denoted n.sub.DC and measured as an average number of photons per second. The noise variance due to dark count, for each pixel measurement and all wavelength channels, is n.sub.DCN.sub.C(Δy/V)ρ(y.sub.pix). Noise due to the star and the dark count are independent, so:

    [00070] \left(\text{Variance of noise in one pixel measurement}\right) = \left(\eta\,A_T\,n_{\Delta\lambda} + N_C\,n_{DC}\right)\rho(y_{pix})\,(\Delta y / V).

    [0101] The signal-to-noise ratio (SNR) of each pixel measurement of the shadow pattern is κ times the expression [00064] for the average number of un-occulted photodetections for a pixel centered at y.sub.pix, divided by the square root of the noise variance in expression [00070].

    Therefore the signal-to-noise ratio of each pixel calculated by the imaging module 430 is:

    [00072] \mathrm{SNR} = \frac{\eta\,\kappa\,A_T\,n_{\Delta\lambda}\,\rho(y_{pix})\,\Delta y / V}{\sqrt{\left(\eta\,A_T\,n_{\Delta\lambda} + N_C\,n_{DC}\right)\rho(y_{pix})\,\left(\Delta y / V\right)}}, \qquad n_{\Delta\lambda} \approx N_0 \times 10^{-0.4\,m_*},\; N_0 \approx 1.46\times 10^{10},\; \rho(y_{pix}) \approx 1,\; \kappa \approx 0.6.

    [0102] In an exemplary embodiment of the invention, at 0.3 AU distance, 0.4 m telescope diameter, quantum efficiency of 0.5, ten wavelength channels, n.sub.DC=4 phot/s, and a 12.sup.th magnitude star, the signal-to-noise ratio is approximately 9.1 for a 20×20 pixel silhouette image.
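As a rough consistency check on the quoted figure, equation [00072] can be evaluated with this example's parameters. This sketch is not from the patent; in particular, the per-pixel dwell time Δy/V is not stated in the text, and the value used here is an assumption chosen for illustration.

```python
# Sketch (not from the patent): evaluate the per-pixel SNR of equation [00072]
# with the parameters of this exemplary embodiment.  The per-pixel dwell time
# dy_over_v is NOT given in the text; the value below is an assumption.
import math

eta = 0.5                 # detector quantum efficiency
kappa = 0.6               # contrast fraction (from the text)
a_t = math.pi * 0.2**2    # aperture area of a 0.4 m diameter telescope (m^2)
n_c = 10                  # number of wavelength channels
n_dc = 4.0                # dark count rate (photons/s)
m_star = 12.0             # stellar magnitude
n_0 = 1.46e10             # photon-rate constant N_0 from the text
rho = 1.0                 # conservative track-density estimate
dy_over_v = 0.016         # ASSUMED per-pixel dwell time (s)

n_dl = n_0 * 10 ** (-0.4 * m_star)  # un-occulted photon rate n_dlambda
signal = eta * kappa * a_t * n_dl * rho * dy_over_v
noise = math.sqrt((eta * a_t * n_dl + n_c * n_dc) * rho * dy_over_v)
snr = signal / noise
# With a ~16 ms dwell per pixel this lands near the quoted SNR of ~9.1
```

Note that the SNR scales as the square root of the assumed dwell time, so the quoted value implicitly fixes Δy/V once the occultation geometry is known.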

    [0103] While the principles of the invention have been described herein, it is to be understood by those skilled in the art that this description is made only by way of example and not as a limitation as to the scope of the invention. Further embodiments are contemplated within the scope of the present invention in addition to the exemplary embodiments shown and described herein. Modifications and substitutions by one of ordinary skill in the art are considered to be within the scope of the present invention, which is not to be limited except by the following claims.