Asynchronous pulse detection through sequential time sampling of optically spread signals
09568583 · 2017-02-14
Assignee
Inventors
CPC classification
F41G7/2253
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
F41G7/226
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
F41G7/2293
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
F41G7/26
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
International classification
F41G7/26
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
F41G7/00
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
F41G7/22
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
Abstract
A method to spread laser photon energy over separate pixels to improve the likelihood that the total sensing time of all the pixels together includes the laser pulse. The optical signal is spread over a number of pixels, N, on a converter array by means of various optical components. The N pixels are read out sequentially in time with each sub-interval short enough that the integration of background photons competing with the laser pulse is reduced. Likewise, the pixel read times may be staggered such that laser pulse energy will be detected by at least one pixel during the required pulse interval. The arrangement of the N pixels may be by converter array column, row, two dimensional array sub-window, or any combination of sub-windows depending on the optical path of the laser signal and the capability of the ROIC control.
Claims
1. A system for asynchronous pulse detection comprising: a converter array; an optical system configured to pass laser energy at a predetermined band of wavelengths to said converter array, wherein said optical system is further configured to spread said predetermined band of wavelengths across a predetermined area; and a read out integrated circuit (ROIC) that scans said predetermined area of said array at an exposure rate that matches a predetermined pulse repetition frequency (PRF) to thereby decode said predetermined band of wavelengths spread by said optical system across said predetermined area of said array.
2. The system of claim 1 wherein said optical system comprises a defocusing element that spreads said predetermined band of wavelengths across a predetermined area of said array.
3. The system of claim 1 further comprising a seeker system that uses said decoded predetermined band of wavelengths to derive control information capable of being used to steer an ordnance to a target.
4. The system of claim 1 wherein said optical system spreads said predetermined band of wavelengths over a predetermined number of separate pixels such that a total sensing time of said separate pixels includes said predetermined band of wavelengths.
5. The system of claim 4 wherein said predetermined number of separate pixels are read out sequentially.
6. The system of claim 5 wherein said read out times of said separate pixels are staggered.
7. A method of detecting laser energy comprising: filtering the laser energy having a predetermined bandwidth; spreading said predetermined bandwidth across a predetermined area of an array; and scanning said predetermined area of said array at an exposure rate that matches a predetermined pulse repetition frequency (PRF) to thereby decode said predetermined band of wavelengths that has been spread across said predetermined area of said array.
8. The method of claim 7 further comprising using said decoded predetermined band of wavelengths to derive control information capable of being used to steer an ordnance to a target.
9. The method of claim 7 further comprising: capturing image energy focused on said array; and using said captured image energy to derive control information capable of being used to steer an ordnance to a target.
10. The method of claim 9 further comprising locating a laser spot of said decoded predetermined band of wavelengths on said image.
11. The method of claim 7 wherein said predetermined area comprises a predetermined number of separate pixels such that a total sensing time of said separate pixels includes said predetermined band of wavelengths.
12. The method of claim 11 further comprising reading out said predetermined number of separate pixels sequentially.
13. The method of claim 12 further comprising staggering said reading out of said separate pixels.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The accompanying drawings are presented to aid in the description of embodiments of the invention and are provided solely for illustration of the embodiments and not limitation thereof.
(12) In the accompanying figures and following detailed description of the disclosed embodiments, the various elements illustrated in the figures are provided with three-digit reference numbers. The leftmost digit of each reference number corresponds to the figure in which its element is first illustrated.
DETAILED DESCRIPTION
(13) Aspects of the invention are disclosed in the following description and related drawings directed to specific embodiments of the invention. Alternate embodiments may be devised without departing from the scope of the invention. Additionally, well-known elements of the invention will not be described in detail or will be omitted so as not to obscure the relevant details of the invention.
(14) The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term "embodiments of the invention" does not require that all embodiments of the invention include the discussed feature, advantage or mode of operation.
(15) The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of embodiments of the invention. As used herein, the singular forms "a," "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
(16) Further, many embodiments are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, the sequence of actions described herein can be considered to be embodied entirely within any form of computer readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein. Thus, the various aspects of the invention may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the embodiments described herein, the corresponding form of any such embodiments may be described herein as, for example, logic configured to perform the described action.
(18) Continuing with
(19) Turning now to an overview of the disclosed embodiments, an important performance parameter for seeker systems, particularly multi-mode systems, is how quickly, reliably and efficiently the seeker system detects, decodes and localizes the energy it receives in its FOV. The seeker system must be sensitive to both background scene energy and reflected laser energy. Because the arrival time of a reflected laser energy pulse is not known to the seeker system, the PRF of the laser pulse energy is asynchronous to the seeker system's internal clock. Thus, the disclosed embodiments provide configurations and methodologies to sense the asynchronous laser pulse while receiving the background scene photons. Further, the disclosed embodiments take advantage of the capability to merge two uniquely different types of seeker functionality (e.g., semi-active laser-based and passive ambient energy-based imaging) into a single, dual-mode seeker, using only an FPA as the active sensor to achieve both modes of operation. Known methods of detecting, decoding and localizing energy within a seeker's field of view typically require expensive and complicated systems to compensate for the likelihood of not detecting, decoding or localizing a received pulse even when the received pulse actually matches the seeker's pre-loaded PRF. Because the pulses are typically ten to twenty nanoseconds wide with a pre-loaded PRF of ten to twenty hertz, conventional imagers use an integration process to detect and decode the pulse. Using an integration process precludes the use of a camera having a relatively long exposure time because a long exposure time would increase the likelihood of capturing several pulses each time the imager opens the shutter. Also, because every shutter cycle has an exposure time when the shutter is open and a dark time when the shutter is closed, conventional integration processes increase the likelihood that a pulse will be missed during the dark time.
(20) One way to improve the detection, decoding and localization of a seeker system is to provide the seeker system with the capability of processing more than one type of energy to identify a target, for example, radar, laser and/or imaging energy. As described previously herein, a seeker system capable of processing more than one type of energy for target acquisition is known generally as a multi-mode seeker. Multi-mode seeker systems have the advantage of being robust and reliable and may be operated over a range of environments and conditions. However, combining more than one target acquisition mode into a single seeker typically adds redundancy. For example, conventional multi-mode implementations require two disparate sensor systems, with each sensor system having its own antenna and/or lens, along with separate processing paths. This increases the number of parts, thereby increasing cost. Cost control is critical for single-use weapons that may sit on a shelf for 10 years then be used one time. More parts also increase the probability of a part malfunctioning or not performing the way it is expected to perform.
(21) Accordingly, the present disclosure recognizes that providing configurations and methodologies to asynchronously detect and decode the arrival time of a reflected laser energy pulse has the potential to improve reliability and performance. The present disclosure further recognizes that multi-tasking components/functionality of a multi-mode seeker so that one component (e.g., sensor, lens) can operate in both modes has the potential to control costs and improve reliability and performance. For example, the converter array (which may be implemented as a focal-plane-array (FPA)) of a seeker system converts reflected energy in the seeker's FOV into electrical signals that can then be read out, processed and/or stored. Using only a single, conventional FPA as the converter array of the primary optical component in more than one mode would potentially reduce the complexity and cost, and improve the reliability of multi-mode seeker systems.
(22) The design challenges of using only the converter array output to detect, decode and localize the laser spot in a seeker's FOV include challenges associated with the digital imager, the exposure gap, avoiding ambient confusion and avoiding designator confusion. Conventional digital imagers, as previously described, are inherently sampled-data, integrate-and-dump systems. The imager accumulates or integrates all of the received energy across the entire exposure time, effectively low-pass filtering the signals and blending multiple pulses arriving at different times into a single image. Given that two or more designators can be active in the same target area, the sample time resolution of conventional digital imagers is typically insufficient to reconstruct all the incoming pulses. This typically requires expensive and complicated systems to compensate for a higher likelihood of not detecting, decoding or localizing a received pulse even when the received pulse actually matches the seeker's pre-loaded PRF. Using an integration process precludes the use of a camera having a relatively long exposure time because a long exposure time would increase the likelihood of capturing several pulses each time the imager opens the shutter. Imager exposure gaps, or exposure windows, typically span the pulse repetition interval of the predetermined PRF and so cannot distinguish constant light sources from designator pulses. Moreover, sub-interval exposure windows cannot be made to cover 100% of a pulse interval because of the minimum time needed to complete a frame, capture it and initialize the imager for the next frame. In other words, the dead time (also known as the dark time of the imager) between exposure windows, measured in microseconds, is wider than typical designator pulse widths of 10 to 100 nanoseconds. Background clutter levels may potentially be reduced by decreasing the exposure time, but this increases the probability that a laser pulse will be missed altogether.
Ambient confusion occurs when the imager has difficulty distinguishing between ambient light features and designator energy. Reflected energy depends on the reflection geometry at the target: acute angles between the light source and the imager yield higher reflected energy, while obtuse angles yield lower reflected energy. Solar glint, or specular reflection off background clutter, is also a difficult problem with respect to relative energy. For example, a top-down attack with the sun over the shoulder of the weapon and a ground-based designator with an almost 90 degree reflection angle is the worst geometry for engagement/designation with respect to received laser energy, so a clear day at noon is the most challenging condition. Finally, so that multiple designators can operate simultaneously in the same target area, a single converter array design must reliably distinguish its assigned designator from other, confuser designators operating in the same target area.
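The exposure-gap problem described above can be sketched numerically. In the following Python fragment, the frame time and dead time are assumed illustration values, not figures from the disclosure:

```python
# Sketch of the dead-time problem: the fraction of each shutter cycle during
# which the shutter is closed equals the chance that an asynchronous pulse
# (uniformly distributed arrival phase) is missed. Values below are assumed.

def miss_probability(dead_time_s, frame_time_s):
    """Probability an asynchronous pulse arrives while the shutter is closed."""
    return dead_time_s / frame_time_s

p_miss = miss_probability(dead_time_s=5e-6, frame_time_s=100e-6)  # 5% per cycle
```

Because a designator pulse of tens of nanoseconds is far narrower than a microsecond-scale dead time, a pulse arriving in the gap is lost entirely rather than partially captured.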
(24) In alternative configurations of the disclosed embodiments (best shown in
(25) Thus, the disclosed embodiments allow two uniquely different types of seeker functionality to be merged into a single, dual-mode seeker, using only an FPA as the active sensor to achieve both modes of operation. Additionally, the disclosed embodiments provide a means of sampling target-reflected laser pulses at sufficient sample rates to decode any pulse-modulated data transmitted from a laser target designator. This allows for a channel of communications between the weapon receiving the transmission and the laser target designator. The weapon and designator can thus be paired to work in unison, and mission-critical information, such as designator-determined target position (GPS coordinates) and velocity, can be transmitted to the weapon in flight.
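As one hedged illustration of such a communications channel: the disclosure leaves the modulation scheme open, so the sketch below assumes a simple pulse-position scheme in which a "1" stretches the nominal pulse interval by a small delta. The timing values and function name are illustrative assumptions.

```python
# Hypothetical decoder for pulse-position data carried on designator pulses:
# each inter-pulse interval maps to bit 0 (nominal) or bit 1 (nominal + delta).

def decode_intervals(arrival_times_s, nominal_s, delta_s):
    """Classify each inter-pulse gap as the nearer of nominal vs. nominal+delta."""
    bits = []
    for first, second in zip(arrival_times_s, arrival_times_s[1:]):
        gap = second - first
        bits.append(1 if abs(gap - (nominal_s + delta_s)) < abs(gap - nominal_s) else 0)
    return bits

# 20 Hz nominal PRF (50 ms interval); a "1" adds 0.5 ms to the interval
arrivals = [0.0, 0.0500, 0.1005, 0.1505, 0.2010]
bits = decode_intervals(arrivals, nominal_s=0.050, delta_s=0.0005)  # [0, 1, 0, 1]
```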
(26) With reference now to the accompanying illustrations,
(27) Thus, the seeker system 104a of
(29) The imager 214a is configured and arranged to control the exposure timing of each pixel in the FPA 217a. The pixels can either be shuttered (or exposed) at the same time, creating a snapshot mode, or the pixels may be scanned, creating a pixel-by-pixel rolling mode. The seeker system 104a further includes an optical system 216 (shown in
(30) Because the laser light energy 106 has a photon density much higher than that of the ambient light energy 107, the amount of time necessary to accumulate a single laser pulse's photons is typically measured in nanoseconds (e.g., 10-200 nanoseconds), while the amount of time necessary to accumulate ambient light energy photons to the same photon count as the laser light energy is measured in microseconds. The actual exposure time depends on the total field-of-view of each lens, but typically falls in this time range under full sunlight conditions. This exposure difference means that the dominant photons captured by the FPA 217a will be from the laser light energy 106 when the exposure time of each pixel is in the nanosecond range. However, as the exposure times are lengthened, the ambient light energy photons will dominate the resulting image, with some blurring due to the laser pulse and ambient light at the laser wavelength.
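The exposure-time trade described above can be illustrated with a small calculation. The photon arrival rates below are assumed values chosen only to show the effect; they are not taken from the disclosure:

```python
# Illustrative photon-count comparison: a nanosecond-scale exposure containing
# the pulse is dominated by laser photons, a long exposure by background photons.

LASER_RATE = 1e12     # assumed photons/s at a pixel while the pulse is present
AMBIENT_RATE = 1e8    # assumed background photons/s at a pixel
PULSE_WIDTH = 20e-9   # 20 ns laser pulse

def photon_counts(exposure_s):
    """(laser, background) photon counts for one exposure containing the pulse."""
    laser = LASER_RATE * min(exposure_s, PULSE_WIDTH)  # pulse lasts only 20 ns
    background = AMBIENT_RATE * exposure_s
    return laser, background

short = photon_counts(50e-9)   # nanosecond-scale exposure: laser dominates
long_exp = photon_counts(1e-3) # millisecond-scale exposure: background dominates
```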
(31) If the defocused sub-window 310 of a high pixel density FPA (e.g., a 1280×1024=1.3M pixel array) is scanned so that the scan time across the array is matched to the PRF of the laser, then incoming laser pulses will be digitized at a very high sample rate because the pulse will arrive at all pixels at roughly the same time (hence the reason for the defocusing lens). This would allow the system to optionally transmit data via pulse modulation on the laser beam to be received and decoded by the system. The exposure overlap ensures that very narrow time pulses are not missed between pixel readouts. It should be noted that the optical communication channel described herein is additional capability but is not necessary for the disclosed embodiments to decode the PRF of the laser designator. A system for receiving additional data encoded on a laser beam is disclosed in U.S. Pat. No. 8,344,302, the entire disclosure of which is incorporated herein by reference.
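The scan-rate argument above can be sketched as a back-of-the-envelope calculation. The array size comes from the example in the text; the PRF value is an assumed illustration:

```python
# Effective sample rate when the pixels of the defocused region are read
# sequentially across exactly one pulse repetition interval: the spread optical
# signal is sampled once per pixel per interval.

n_pixels = 1280 * 1024            # 1.3M pixel array from the example above
prf_hz = 20.0                     # assumed designator PRF
pulse_interval_s = 1.0 / prf_hz   # 50 ms between pulses
t_int_s = pulse_interval_s / n_pixels   # per-pixel read sub-interval (~38 ns)
sample_rate_hz = n_pixels * prf_hz      # effective sample rate (~26 MHz)
```

Note that the per-pixel sub-interval lands in the same tens-of-nanoseconds range as the pulse width itself, which is what makes the high-rate digitization of the pulse train possible.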
(32) The defocused window 310 may be switched to snapshot mode and exposed long enough so that ambient light dominates the image. Under this configuration, the resultant image can be combined with the image from sub-window 312 of the FPA 217a to recover pixel resolution of the entire array via image processing super-resolution techniques. Ideally the two images would not be perfectly co-aligned but offset by half a pixel distance to maximize the super resolution results, but this is not necessary for the super-resolution techniques to work. Because the asynchronous pulse arrival times can now be predicted by the scanning sub-window 310, the longer duration exposures of both sub-windows 310, 312 can be optimized to either enhance the laser spot (by centering the sub-window 312 exposure time on the laser pulses and shortening the total exposure) or eliminate the laser spot entirely (by shuttering out of sync with the laser pulses).
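A rough one-dimensional sketch of the half-pixel-offset idea in the preceding paragraph: two low-resolution samplings of the same scene, offset by half a pixel, interleave into one signal at double the sampling density. A practical two-dimensional pipeline would also register and deconvolve the images; the sample data here are assumed:

```python
# Merge two half-pixel-offset sample sets into a double-density signal.

def interleave(samples_a, samples_b):
    """Interleave two equal-length sample lists element by element."""
    merged = []
    for a, b in zip(samples_a, samples_b):
        merged.extend([a, b])
    return merged

scene_a = [10, 20, 30]   # samples at pixel centers 0, 1, 2
scene_b = [15, 25, 35]   # same scene sampled at 0.5, 1.5, 2.5
hi_res = interleave(scene_a, scene_b)   # [10, 15, 20, 25, 30, 35]
```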
(33) Sub-window 312 images the laser spot in order to spatially localize the spot and determine bearing angles to the spot. Effectively, the defocused sub-window 310 cannot spatially resolve the laser pulses but can temporally resolve them. Conversely, sub-window 312 cannot temporally resolve the laser pulses but is capable of spatially resolving the laser pulses. Both sub-windows 310, 312 can spatially resolve ambient light images given sufficient exposure times.
(36) Ω=solid angle, typically one pixel, (IFOV)^2 [SR, steradians].
(37) IFOV=Incremental field of view angle subtended by one element of a FPA [radians].
(38) R=Range
(39) Pulse=laser pulse energy in Joules [J], typically 100 mJ, typical pulse duration 20 ns.
(40) L=scene radiance [W/(m^2 SR] SR=steradian.
(41) L.sub.S=typical scene radiance of entire SWIR spectral band (900 nm to 1700 nm typical).
(42) L.sub.L=typical scene radiance filtered by a narrow bandpass laser line filter (e.g., 1024 nm ± 2 nm).
(43) t=time (sec).
(44) t.sub.int=integration time of one detector element (pixel).
(45) t.sub.i=ith sequential pixel read out.
(46) N=number of pixels (total).
(47) M=number of pixels (total) in the alternative configuration.
(48) A.sub.s=aperture area of optical path for SWIR scene.
(49) A.sub.L=aperture area of optical path for laser signal.
(50) k=generic constant not derived here.
(51) SWIR=Short Wave Infrared (900 nm to 1700 nm, typically).
(52) Bsig=Background signal received from the scene.
(53) Lsig=Laser signal received from the scene.
(54) HOE=Holographic Optical Element.
(55) The optical system 216a shown in
Bsig=A.sub.S·Ω·(L.sub.S·t)·k
(56) Where k is a general constant related to atmospheric transmission, target and background reflectivity, optical assembly transmission, among other factors. The derivation of k will be familiar to anyone skilled in the art and is not discussed further.
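The background-signal relation Bsig=A_S·Ω·(L_S·t)·k can be evaluated numerically using the symbol definitions above, with Ω=(IFOV)^2. Every input value below is an illustrative assumption, not a figure from the disclosure:

```python
# Numerical sketch of Bsig = A_S * Omega * (L_S * t) * k for one pixel.

def background_signal(a_s_m2, ifov_rad, l_s, t_int_s, k):
    """Background energy collected by one pixel in one integration time [J]."""
    omega_sr = ifov_rad ** 2              # solid angle of one pixel [sr]
    return a_s_m2 * omega_sr * (l_s * t_int_s) * k

bsig_j = background_signal(
    a_s_m2=1e-4,    # assumed 1 cm^2 scene-path aperture area
    ifov_rad=1e-3,  # assumed 1 mrad pixel IFOV
    l_s=10.0,       # assumed SWIR scene radiance [W/(m^2*sr)]
    t_int_s=1e-6,   # assumed 1 us pixel integration time
    k=0.5,          # assumed lumped transmission/reflectivity constant
)
```

The units check out: m^2 · sr · W/(m^2·sr) · s = J, i.e., received background energy per pixel per integration.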
(57) Assume that the optical system 216a will also receive a laser energy pulse reflected from the target, Lsig, which is determined by the laser optical path aperture A.sub.L.
(58)
(59) Assume that the background scene is the dominant noise term as compared to the sensor internal noise. Thus, the requirement for a minimum signal to noise ratio is determined by,
(60)
(61) Thus, the laser signal 106 is spread over a number of pixels, and each pixel, with integration time t.sub.int, is sampled sequentially as the ROIC reads out the FPA data. The laser energy is spread evenly over the total number of pixels. The minimum and maximum integration time available to a pixel is determined by several outside constraints, such as sensor frame rate, FPA size, ROIC timing functions, and others.
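The sequential, staggered read-out just described can be sketched as follows. N pixels share the spread laser energy, each integrating for t_int, with start times staggered so that the union of exposure windows spans the pulse interval. The pixel count, t_int, and overlap are assumed illustration values:

```python
# Staggered exposure windows: consecutive windows overlap, so no instant inside
# the covered span falls between exposures and at least one pixel sees the pulse.

def staggered_windows(n_pixels, t_int_s, overlap_s):
    """(start, stop) exposure times for N reads staggered by t_int - overlap."""
    step = t_int_s - overlap_s
    return [(i * step, i * step + t_int_s) for i in range(n_pixels)]

def pixels_seeing_pulse(windows, t_pulse_s):
    """Indices of every pixel whose exposure window contains the pulse time."""
    return [j for j, (start, stop) in enumerate(windows) if start <= t_pulse_s < stop]

windows = staggered_windows(n_pixels=1000, t_int_s=50e-6, overlap_s=1e-6)
hit = pixels_seeing_pulse(windows, t_pulse_s=12.34e-3)  # asynchronous arrival
```

Because consecutive windows overlap by the chosen margin, an asynchronous pulse anywhere in the covered span is integrated by at least one pixel, which is the coverage property the disclosure relies on.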
(62) As shown in
(63)
(64) Thus, the total number of pixels N is given by,
(65)
or by,
(66)
(67) For the configuration shown in
(68)
(69) Thus, L.sub.S is the scene radiance of the entire imager spectral band and t.sub.int is the integration time of one detector element. The laser pulse, producing the desired SNR in that pixel, will be detected in the jth pixel to be read out sequentially and the laser pulse time offset from the beginning t.sub.0 readout time is,
t.sub.pulse=t.sub.0+(j·t.sub.int)
Thus, the arrival time of the asynchronous laser pulse is detected and decoded for further processing. In the dual mode seeker implementation of the disclosed embodiments, further processing includes spatial and temporal resolution and further integration of image energy and laser energy.
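The arrival-time decode t_pulse = t_0 + j·t_int can be sketched directly: scan the sequential pixel reads for the first sample that rises above the background level. The sample values, threshold, and t_int below are assumed for illustration:

```python
# Decode the asynchronous pulse arrival time from sequential pixel reads.

def decode_pulse_time(samples, t0_s, t_int_s, threshold):
    """Return t_0 + j*t_int for the first pixel j exceeding threshold, else None."""
    for j, value in enumerate(samples):
        if value > threshold:
            return t0_s + j * t_int_s
    return None  # no pulse seen in this read-out pass

# eight sequential pixel reads; the spread pulse lands during pixel j = 5
reads = [3, 2, 4, 3, 2, 95, 4, 3]
t_pulse_s = decode_pulse_time(reads, t0_s=0.0, t_int_s=40e-9, threshold=10)
```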
(70) The optical system 216 (shown in
(71)
(72) The total number of pixels M is given by the equation,
(73)
Thus, the total number of pixels M is given by the equation,
(74)
where L.sub.L is the scene radiance filtered by a narrow bandpass laser line filter, and t.sub.int is the integration time of one detector element. The laser pulse producing the desired SNR in that pixel will be detected by the jth pixel to be read out sequentially, and the laser pulse time offset from the beginning t.sub.0 readout time is,
t.sub.pulse=t.sub.0+(j·t.sub.int)
Thus, the arrival time of the asynchronous laser pulse is detected and decoded for further processing. In the dual mode seeker implementation of the disclosed embodiments, further processing includes spatial and temporal resolution and further integration of image energy and laser energy.
(75) For the embodiment illustrated in
(76)
(77) The laser pulse producing the desired SNR in that pixel will be detected in the jth pixel to be read out sequentially, and the laser pulse time offset from the beginning t.sub.0 readout time is
t.sub.pulse=t.sub.0+(j·t.sub.int)
Thus, the arrival time of the asynchronous laser pulse is detected and decoded for further processing. In the dual mode seeker implementation of the disclosed embodiments, further processing includes spatial and temporal resolution and further integration of image energy and laser energy.
(78) Accordingly, it can be seen from the foregoing disclosure and the accompanying illustrations that one or more embodiments may provide some advantages. For example, the disclosed embodiments allow for the merging of two uniquely different types of seeker functionality into a single, dual-mode seeker, using only an FPA as the active sensor to achieve both modes of operation. The disclosed embodiments also provide a method of spreading laser photon energy over many separate pixels to improve the likelihood that the total sensing time of all the pixels together includes the laser pulse. The optical signal is spread over a number of pixels, N, on an FPA by means of optical components (e.g., a defocusing lens or a diffractive optical element). The N pixels are read out sequentially in time with each sub-interval short enough that the integration of background photons competing with the laser pulse is reduced. Likewise, the pixel read times are staggered such that laser pulse energy will be detected by at least one pixel during the required pulse interval. The arrangement of the N pixels may be by FPA column, row, two dimensional array sub-window, or any combination of sub-windows depending on the optical path of the laser signal and the capability of the ROIC control.
(79) Those of skill in the relevant arts will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
(80) Those of skill in the relevant arts will also appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments.
(81) Finally, the methods, sequences and/or algorithms described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. Accordingly, the disclosed embodiments can include a computer readable media embodying a method for performing the disclosed and claimed embodiments. Accordingly, the invention is not limited to illustrated examples and any means for performing the functionality described herein are included in the disclosed embodiments. Furthermore, although elements of the disclosed embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. Additionally, while various embodiments have been described, it is to be understood that aspects of the embodiments may include only some aspects of the described embodiments. Accordingly, the disclosed embodiments are not to be seen as limited by the foregoing description, but are only limited by the scope of the appended claims.