Method for detecting and classifying events of a scene that include firing threat type events
09953226 · 2018-04-24
Assignee
Inventors
CPC classification
G06V20/52
PHYSICS
G01J1/0242
PHYSICS
G01J1/4228
PHYSICS
G01J5/0275
PHYSICS
F41G3/147
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
G01J5/025
PHYSICS
International classification
F41G3/14
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
Abstract
The subject of the invention is a method for detecting and classifying events of a scene by means of a single-pupil imaging system equipped with a VisNIR detector in the 0.6 μm-1.1 μm band and with an SWIR detector, which comprises steps of acquiring synchronized VisNIR and SWIR successive 2D images, of displaying the VisNIR images, and of processing these images, which consists in: comparing the SWIR images so as to determine, for each pixel, the variation in illumination from one SWIR image to another and the peak value of these SWIR illuminations, if this variation in SWIR illumination is greater than a threshold, then an event associated with said pixel is detected and: its date, its temporal shape and its duration are determined, in the VisNIR images, the coordinates are determined of the corresponding pixel for which: the variation in the illumination from one VisNIR image to another and the peak value of these VisNIR illuminations are calculated, and these variations in SWIR and VisNIR illumination and their peak values are compared so as to estimate a temperature of the event, the distance of the corresponding point of the scene is estimated so as to calculate the intensity of the event on the basis of the SWIR and VisNIR illuminations and on the basis of this distance, the total energy of the event is estimated on the basis of its temporal shape and of its intensity, the event is classified as a function of its duration, its temperature, its intensity and its energy, the previous steps are repeated for another pixel of the SWIR images.
Claims
1. A method for detecting and classifying events of firing threats type of a scene, the method comprising: providing a single-pupil imaging system mounted on a mobile platform and equipped with several detectors, including a detector configured to operate in the 0.6 μm-1.1 μm wavelength band that comprises a Visible Near Infra Red (VisNIR) detector and a detector configured to operate in the 0.9 μm-1.7 μm wavelength band that comprises a Short Wave Infra Red (SWIR) detector, associated with a detection processing unit, which is configured to implement a step of acquiring successive 2D images of the scene which arise from the VisNIR detector that comprise VisNIR images, and successive 2D images of the scene which arise from the SWIR detector and are synchronized with the VisNIR images that comprise SWIR images, a step of displaying the VisNIR images, and a step of processing the VisNIR and SWIR images via the detection processing unit, wherein the step of processing the images comprises the following sub-steps: comparing the successive SWIR images so as to determine, for each pixel (x.sub.1, y.sub.1) and neighboring pixels, a variation in illumination from one SWIR image to another SWIR image and determine a peak value e.sub.1(t) of SWIR illuminations, if the variation in SWIR illumination is greater than a predetermined illumination threshold, then the variation in SWIR illumination during which said threshold is exceeded is designated as a signature of an event i and an event is associated with said pixel (x.sub.1, y.sub.1) or with a barycenter of the pixels considered, and the step of processing the images further comprises the sub-steps of: determining a date t.sub.i of the event i, determining a temporal shape and a duration of the signature of the event, determining the coordinates of a pixel (x.sub.2, y.sub.2) and of neighboring pixels corresponding to the pixel (x.sub.1, y.sub.1) or to a barycenter in the VisNIR images synchronized with the SWIR images, and 
for the pixel (x.sub.2, y.sub.2) and the neighboring pixels: calculating a variation in the illumination from one VisNIR image to another VisNIR image and calculate a peak value e.sub.2(t) of VisNIR illuminations, comparing these variations in SWIR and VisNIR illumination, and comparing the peak values e.sub.1(t) and e.sub.2(t) so as to estimate a temperature of the event on a basis of a predetermined lookup table, if the scene is a daytime scene and the temperature is greater than a temperature threshold, then the event is a false alarm and the previous steps from the comparing step are repeated with another pixel of the SWIR images, otherwise: estimating a distance Ri of a corresponding point of the real-time scene on a basis of measurements of angular speeds of elements of the scene, of a speed of the platform, and of the VisNIR images, calculating an intensity I.sub.i of the event i on a basis of the SWIR and VisNIR illuminations of the pixel and on a basis of the distance R.sub.i, estimating a total energy E.sub.i of the event on a basis of a temporal shape of the signature and of the intensity I.sub.i, classifying the event i as a function of its duration Δt.sub.i, its temperature T.sub.i, its intensity I.sub.i and its energy E.sub.i, estimating an effectiveness range Pi of the event i on a basis of its classification, comparing the effectiveness range Pi with a distance Ri, if the distance Ri is less than the effectiveness range Pi, then if possible, triggering a suitable retaliation in real time.
2. The method for classifying events of a scene as claimed in claim 1, wherein when an event is associated with adjacent pixels, then a luminance of a set of these events is determined, and this set of events is also classified as a function of its luminance.
3. The method for classifying events of a scene as claimed in claim 1, wherein the event is inlaid on the VisNIR image displayed.
4. The method for classifying events of a scene as claimed in claim 3, wherein the distance associated with the event is displayed on the VisNIR image displayed.
5. The method for classifying events of a scene as claimed in claim 1, wherein the retaliation is triggered automatically or manually by an operator.
6. A non-transitory computer program product, said non-transitory computer program product comprising code instructions configured to perform the steps of the method as claimed in claim 1, when said non-transitory computer program is executed on a computer.
7. A system for detecting and classifying events of a scene, which comprises: a system for single-pupil imaging of the scene, mounted on a mobile platform and equipped with several detectors, including a Visible Near Infra Red (VisNIR) detector and a Short Wave Infra Red (SWIR) detector, a detection processing unit linked to the detectors, means for estimating the distance between points of the scene and the imaging system, an events management system linked to the detection processing unit and configured so as to be triggered, on the basis of the classified event and of its distance, a display device linked to the detection processing unit, the detection processing unit being configured to compare the successive SWIR images so as to determine, for each pixel (x.sub.1, y.sub.1) and neighboring pixels, a variation in illumination from one SWIR image to another SWIR image and determine a peak value e.sub.1(t) of SWIR illuminations, if the variation in SWIR illumination is greater than a predetermined illumination threshold, then the variation in SWIR illumination during which said threshold is exceeded is designated as a signature of an event i and an event is associated with said pixel (x.sub.1, y.sub.1) or with a barycenter of the pixels considered, and the detection processing unit configured to: determine a date t.sub.i of the event i, determine a temporal shape and a duration of the signature of the event, determine the coordinates of a pixel (x.sub.2, y.sub.2) and of neighboring pixels corresponding to the pixel (x.sub.1, y.sub.1) or to a barycenter in the VisNIR images synchronized with the SWIR images, and for the pixel (x.sub.2, y.sub.2) and the neighboring pixels: the detection processing unit is further configured to calculate a variation in the illumination from one VisNIR image to another VisNIR image and calculate a peak value e.sub.2(t) of VisNIR illuminations, comparing these variations in SWIR and VisNIR illumination, and comparing the peak values 
e.sub.1(t) and e.sub.2(t) so as to estimate a temperature of the event on a basis of a predetermined lookup table, if the scene is a daytime scene and the temperature is greater than a temperature threshold, then the event is a false alarm and the detection processing unit is configured to repeat the comparison with another pixel of the SWIR images, otherwise: the detection processing unit is further configured to estimate a distance Ri of a corresponding point of the real-time scene on a basis of measurements of angular speeds of elements of the scene, of a speed of the platform, and of the VisNIR images, the detection processing unit is further configured to calculate an intensity I.sub.i of the event i on a basis of the SWIR and VisNIR illuminations of the pixel and on a basis of the distance R.sub.i, the detection processing unit is further configured to estimate a total energy E.sub.i of the event on a basis of a temporal shape of the signature and of the intensity I.sub.i, the detection processing unit is further configured to classify the event i as a function of its duration Δt.sub.i, its temperature T.sub.i, its intensity I.sub.i and its energy E.sub.i, the detection processing unit is further configured to estimate an effectiveness range Pi of the event i on a basis of its classification, the detection processing unit is further configured to compare the effectiveness range Pi with a distance Ri, if the distance Ri is less than the effectiveness range Pi, then if possible, the detection processing unit is further configured to trigger a suitable retaliation in real time.
8. A system for detecting and classifying events of a scene, which comprises: a system for single-pupil imaging of the scene, mounted on a mobile platform and equipped with several detectors, including a Visible Near Infra Red (VisNIR) detector and a Short Wave Infra Red (SWIR) detector, a detection processing unit linked to the detectors, means for estimating a distance between points of the scene and the imaging system, an events management system linked to the detection processing unit and configured to be triggered, on a basis of a classified event and of its distance, a display device linked to the detection processing unit, wherein the detection processing unit comprises means for implementing the detection and classification that comprises a non-transitory computer program comprising code instructions configured to perform the steps of the method as claimed in claim 1, when said non-transitory computer program is executed on a computer.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Other characteristics and advantages of the invention will become apparent on reading the detailed description which follows, given by way of nonlimiting example and with reference to the appended drawings in which:
(7) Across the figures, the same elements are labeled with the same references.
DETAILED DESCRIPTION
(8) The detection and classification method according to the invention is implemented by means of a single-pupil imaging system depicted in
(9) The SWIR matrix detector 21, which is for example based on InGaAs, typically has a matrix format of 320 rows × 256 columns at a pitch of 30 μm with an image rate of 400 Hz, or 640 rows × 512 columns at a pitch of 15 μm with an image rate of 300 Hz, which corresponds to the current state of the art. This detector typically has a spatial resolution of 4 mrad per pixel in the 640 × 512 format. Larger formats can be envisaged. However, the image-rate and temporal-response requirements for the classification of threats, which are for the most part very brief (duration generally <100 ms), demand acquisition rates equal to or greater than 200 Hz, or even 1000 Hz if it is desired to classify small calibers. The detector integrates its readout and multiplexing circuit, termed ROIC (standing for Read Out Integrated Circuit).
(10) The pixels are furnished with one or more suitable filters such as:
(11) A single filter over the whole matrix of pixels 211 which simultaneously allows through a narrow band around 1.064 μm and a wide band above 1.4 μm up to the detector's cutoff wavelength, for example 1.7 μm or beyond.
(12) Several different filters, each suitable for a pixel, for example on a 2×2 tiling, with:
(13) A sub-band for the detection of start-of-blast signatures (1.4-1.7 μm)
(14) Narrow bands tuned to the laser spectral lines (1.06 μm, 1.5 μm, ...)
(15) The readout circuit or ROIC is linked to proximity electronics (or E-prox) 210 of the detector; the ROIC and/or the E-prox integrate particular functions such as for example:
(16) A high-rate readout, above 200 Hz or with a period and integration time of less than 5 ms, with no dead time; the integration time is equal to or very close to the image frame acquisition period.
(17) A high dynamic range (HDR mode) that can be achieved in various ways: dual-slope, root or logarithmic nonlinear response, or reset before saturation with countdown, for example.
(18) A CMOS silicon detector is for example used as visible detector 22. It has a high spatial resolution (for example 1 mrad per pixel) and operates at a rate of 25 Hz, or more generally between 25 Hz and 100 Hz. A 4T or 5T CMOS matrix 221 (with 4 or 5 transistors in the pixel) with low noise, amplification and column-wise parallel analog-digital conversion, such as an s-CMOS (scientific CMOS), may be cited as an example of such a detector. The CMOS detector 221 integrates its readout and multiplexing circuit (or ROIC), which is linked to the proximity electronics (or E-prox) 220 of the detector. This proximity electronics, associated with the ROIC, carries out all the operations of analog-digital conversion and restoration of the signal (NUCs: Non-Uniformity Corrections) so as to make it possible to utilize the acquired images with maximum performance in daytime and at nighttime. It is linked to a display device 40 so as to provide a scene-viewing function for an operator. The visible image 42 is displayed, but not necessarily the SWIR image 41.
(19) These detectors 22, 21 are linked to a unit 50 for processing the visible 42 and SWIR 41 images obtained respectively by the two detectors, and which is able to implement the following steps described in conjunction with
(20) All the threats are characterized by very brief optical signatures which are therefore very difficult for an operator to discern by eye when looking at a screen or even when viewing directly, on account of the spectral emission region; they must therefore be detected through automatic processing.
(21) The temporal profiles of the signatures, that is to say their duration, and possibly rise time and descent profile, constitute a first attribute, which is obtained in the following manner.
(22) An event is firstly detected by detection means 51 and certain of its characteristics are estimated in the following manner. Images of the scene to be observed, termed SWIR images 41, are acquired successively by the SWIR detector. Images of this scene, termed VisNIR images 42, are acquired simultaneously by the visible detector, these SWIR and VisNIR images being temporally synchronized with each other by means of electronics 30 for driving and synchronizing the detectors, and then stored in memory for classification purposes. This synchronization may result from acquiring images at the same rate for both detectors, but the rates are generally different as mentioned previously.
(23) The SWIR images 41 are compared with one another to determine, for each pixel (x.sub.i1, y.sub.i1), the variation in SWIR illumination. The illumination e.sub.i1 is given by the signal integrated over the pixel on which the image of the threat is formed. If this variation in illumination is greater than a predetermined threshold, or one adapted according to the signatures of the spatio-temporal backgrounds, then it is considered to represent the SWIR signature of an event: an event i is detected as shown in
(24) Its duration Δt.sub.i in seconds is also determined, that is to say the duration for which this variation in illumination is greater than this threshold. This duration Δt.sub.i therefore constitutes the first attribute for classifying the event (step A).
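The detection sub-steps described above (thresholding the frame-to-frame SWIR variation, then recording the event's date, peak value and duration) can be sketched as follows in Python with NumPy. The function name, the dictionary layout and the simple first-to-last-crossing duration estimate are illustrative choices, not the patent's implementation:

```python
import numpy as np

def detect_events(swir_frames, threshold, frame_period):
    """Flag pixels whose frame-to-frame SWIR illumination variation
    exceeds a threshold; return date, peak value and duration per event.

    swir_frames: array of shape (T, H, W), illumination calibrated in W/m^2.
    frame_period: SWIR frame acquisition period in seconds.
    """
    diffs = np.abs(np.diff(swir_frames, axis=0))   # frame-to-frame variation
    exceed = diffs > threshold                     # boolean detection mask
    events = []
    for y, x in zip(*np.where(exceed.any(axis=0))):
        hits = np.where(exceed[:, y, x])[0]        # frames above threshold
        events.append({
            "pixel": (int(x), int(y)),
            "date": hits[0] * frame_period,                      # t_i
            "duration": (hits[-1] - hits[0] + 1) * frame_period,  # Delta-t_i
            "peak": float(swir_frames[:, y, x].max()),            # e_1 peak
        })
    return events
```

A 400 Hz detector as described above would use `frame_period=0.0025`; neighboring-pixel grouping and barycenter computation, mentioned in the claims, are omitted from this sketch.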
(25) On the basis of this event detection carried out in the SWIR region, it will be possible to measure on the VisNIR images 42 the flux level collected in a synchronous manner in the same viewing field. The following is carried out.
(26) Means 53 are used to determine the coordinates of the pixel (x.sub.i2, y.sub.i2) corresponding to this event in the VisNIR images 42 synchronized with said SWIR images. When the dimensions of the SWIR and VisNIR images are the same, the coordinates of the pixel in the visible images are identical to those of the SWIR images; we have x.sub.i1=x.sub.i2 and y.sub.i1=y.sub.i2. If the VisNIR images are more resolved than the SWIR images because the spatial resolution of the VisNIR detector is greater than that of the SWIR detector as in the example of
(27) The ratio e.sub.i2/e.sub.i1 is calculated for this pixel. This ratio of VisNIR illumination to SWIR illumination makes it possible to estimate a temperature T.sub.i (in K) of the event (step B), via the means 54 for calculating the attributes. This temperature is the second attribute of the optical signature. For this purpose a predetermined table is used which makes it possible to establish a correspondence between these ratios of illuminations and the corresponding black body or gray body temperature, using for example Planck's law, the contribution of which is integrated for the two spectral bands, SWIR and VisNIR, as a function of temperature. The digital signal arising from the two channels is calibrated in W/m.sup.2 to provide the measurements in this unit. Alternatively, the detectors can be calibrated by measuring the signal that they deliver when sighting a calibration black body in the laboratory.
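The lookup table described above can be sketched by integrating Planck's law over the two spectral bands given in the patent (0.6-1.1 μm and 0.9-1.7 μm). The grid sizes and function names are illustrative assumptions, and a real system would also fold in the detectors' spectral responses and radiometric calibration:

```python
import numpy as np

# Physical constants (SI): Planck, speed of light, Boltzmann
H, C, KB = 6.62607e-34, 2.99792e8, 1.38065e-23

def band_radiance(temp_k, lo_um, hi_um, n=2000):
    """Black-body radiance integrated over a wavelength band (in microns)."""
    lam = np.linspace(lo_um * 1e-6, hi_um * 1e-6, n)
    planck = (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * temp_k))
    # trapezoidal integration over the band
    return float(np.sum(0.5 * (planck[:-1] + planck[1:]) * np.diff(lam)))

def build_ratio_table(t_min=500.0, t_max=7000.0, n=200):
    """Tabulate the VisNIR/SWIR band ratio as a function of temperature."""
    temps = np.linspace(t_min, t_max, n)
    ratios = np.array([band_radiance(t, 0.6, 1.1) / band_radiance(t, 0.9, 1.7)
                       for t in temps])
    return ratios, temps  # the ratio increases monotonically with T

def estimate_temperature(e2_over_e1, table):
    """Invert the lookup table: illumination ratio -> temperature (K)."""
    ratios, temps = table
    return float(np.interp(e2_over_e1, ratios, temps))
```

Because a hotter black body emits relatively more in the shorter-wavelength band, the ratio e.sub.i2/e.sub.i1 grows with temperature, which is what makes the inversion by interpolation well defined.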
(28) This temperature is typically used to reject a source of false alarms consisting of the Sun or its modulation or its scattering by clouds, or else its reflection by mirrors, metallic contours (such as those of windows or signposts) or else by reflections on the water. Indeed, on a mobile platform these solar reflections are liable to generate spatio-temporal signatures in the SWIR region that are very close to those of the munitions sought. The Sun or its reflections, the black body temperature of which is around 5800K, will generate an intense signature at shorter wavelength (in the visible and the near IR) that is much stronger than those associated with the pyrotechnic signatures of the start of blasts having a black body temperature of less than 2000 K. The detection of intense signatures in the visible or near IR region thus makes it possible to neutralize possible detections in the SWIR on the pixels covering the same instantaneous viewing field. The flux level collected on the visible detector makes it possible, via the temperature, to validate or to reject the detection of the event: for a daytime scene, if the temperature is greater than a threshold (5000 K for example), then this event is a false alarm, and if it is not, then the event is validated (step C). This validated event can furthermore be inlaid on the visible image for the operator's attention.
(29) This temperature attribute is advantageously determined by the processing unit in parallel with the first attribute (duration).
(30) All materiel from which these threats originate is characterized by a lethality or effectiveness range P beyond which the threat is no longer effective. This range is on the order of 100 m for an RPG or a short-range anti-tank rocket, and from 500 m to 8 km for anti-tank missiles, depending on their type, or for shell rounds, depending on their caliber and their charge.
(31) According to the invention, the processing unit comprises means 52 for estimating the distance R.sub.i of a point of the visible image 42 (object point of the scene-imaging system).
(32) The imaging system is installed aboard a platform, for example terrestrial. By ascertaining the elevation and bearing directions in the frame of reference of the VisNIR detector and by ascertaining its position in terms of height and orientation with respect to the platform, or better still with respect to the ground (by utilizing the information regarding the relative position of the platform with respect to the ground), it is possible to estimate the distance of the points of the image from the ground as a function of their apparent elevation by assuming a horizontal plane ground, or better still by utilizing a digital terrain model (DTM) charted for example by the GPS position of the vehicle and by data arising from the images delivered by the visible detector, providing a horizon profile, or by the location of landmarks in the field of the image. When the platform is fixed on an infrastructure, it is possible, during its installation, to pair all the points of the visible image of the terrestrial scene with its distance. When the platform is mobile, the angular velocities of the characteristic points of the scene can be measured between successive VisNIR images with good precision, on account of the angular resolution of the imaging system. This field of angular velocities throughout the scene is called the optical flow. It makes it possible to measure the rotation or rotations of the field and the direction of the velocity vector of the platform (after derotation of the field), for which direction the optical flow is zero. The wide coverage of the angular field of the panoramic imaging system makes it possible to ensure that the direction of the platform's velocity vector is in the observation field and coincides with a pixel. 
By ascertaining the velocity of the platform (information delivered by its driving and propulsion systems, by an inertial unit, or possibly measured by utilizing the short-range ground optical flow), the measured angular velocity of the elements of the scene, and the measured angular deviation between the pixel coinciding with the velocity vector of the platform and the direction associated with the image of a scene element whose angular velocity is estimated, the distance R of that element may be estimated.
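A minimal sketch of this range estimation for a moving platform, assuming a static scene point and the standard relation between translational optical flow and range (the apparent angular speed of a point at angular deviation α from the velocity vector is ω = v·sin(α)/R); the function name is illustrative and the derotation of the field, mentioned above, is assumed already done:

```python
import math

def distance_from_optical_flow(v_platform, omega, alpha):
    """Range (m) to a static scene point from translational optical flow.

    v_platform: platform speed in m/s.
    omega: measured angular speed of the point in rad/s (after derotation).
    alpha: angular deviation (rad) between the point's direction and the
           pixel coinciding with the platform's velocity vector.
    """
    if omega <= 0:
        raise ValueError("angular speed must be positive")
    return v_platform * math.sin(alpha) / omega
```

For example, a point seen abeam (α = π/2) moving at 10 mrad/s from a platform travelling at 10 m/s would be estimated at 1 km.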
(33) As shown in
(34) On the basis of this distance and of the visible illumination obtained for this event, the means 54 are used to calculate its intensity I.sub.i (in W/sr), this being the third attribute (step D). Indeed, it is recalled that the amplitude of the SWIR illumination depends on the distance R through a 1/R.sup.2 law and on the atmospheric attenuation T.sub.atm which will afford a transmission coefficient which depends on the distance R. We have:
I.sub.i = (1/T.sub.atm)·e.sub.i·R.sub.i.sup.2.
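This relation, combining the 1/R.sup.2 law and the atmospheric transmission, transcribes directly (hypothetical function name; T.sub.atm is the dimensionless path transmission coefficient):

```python
def event_intensity(e_i, r_i, t_atm):
    """Intensity I_i (W/sr) from the measured illumination e_i (W/m^2),
    the estimated range r_i (m) and the atmospheric transmission t_atm
    along the line of sight: I = e * R^2 / T_atm."""
    if not 0.0 < t_atm <= 1.0:
        raise ValueError("transmission must lie in (0, 1]")
    return e_i * r_i**2 / t_atm
```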
(35) In the SWIR images 41, the optical signatures of the sources are either resolved (extended over several pixels) or unresolved (the image of the source is formed on a single pixel) or, in an intermediate situation, are hardly extended with respect to the instantaneous field of view defined by the size of the pixel, the focal length of the optic and its MTF (Modulation Transfer Function) in band 1.
(36) When the signature is spatially resolved as shown in the example of
L.sub.i = e.sub.i1/[T.sub.atm·(IFOV).sup.2],
with IFOV being the solid angle of the instantaneous field of view of the detector. Alternatively, the mean luminance can be given by integrating the signal e.sub.i1 collected by the adjacent pixels on which the image of the source is formed, divided by the solid angle Ω in steradians at which the resolved event is seen. Since the illumination received is dependent on the distance R (1/R.sup.2 law), it does not constitute an attribute for classifying the threat. Only the barycenter of this signature has significance, by allowing angular location in terms of elevation and bearing in the frame of reference of the SWIR detector, or in the frame of reference of the platform by knowing the motion of the former with respect to the chassis if the imaging system is mounted on a member articulated to the platform (turret, pan-and-tilt platform). This event is then classified as resolved as a function of its duration, its temperature, its intensity and its luminance.
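The per-pixel luminance formula for a resolved source transcribes as follows (illustrative function name; the IFOV is taken as the pixel's angular subtense in radians, so IFOV² approximates its solid angle):

```python
def event_luminance(e_i1, t_atm, ifov):
    """Mean luminance L_i (W/sr/m^2) of a spatially resolved source from
    the illumination e_i1 (W/m^2) on one pixel, the atmospheric
    transmission t_atm and the pixel IFOV (rad): L = e / (T_atm * IFOV^2)."""
    return e_i1 / (t_atm * ifov**2)
```

As the surrounding text notes, luminance (unlike illumination) does not depend on range, which is what makes it usable as a classification attribute.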
(37) Finally, the intensity of the event, or its luminance together with its angular extent, combined with its duration, make it possible to estimate the energy E.sub.i (in J) of the event (step F), which can be linked to an estimation of the range of the threat and of its munition. These quantities are calculated by the means 54.
(38) When the event i is resolved we have: E.sub.i = L.sub.i·Ω·R.sub.i.sup.2·4π·Δt.sub.i.
(39) When the event is unresolved we have: E.sub.i = I.sub.i·4π·Δt.sub.i.
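Both energy expressions can be wrapped in one helper; the 4π factor assumes isotropic emission, as in the formulas above, and the keyword-argument interface is an illustrative choice:

```python
import math

def event_energy(duration, intensity=None, luminance=None,
                 solid_angle=None, r=None):
    """Total energy E_i (J) over the event duration (s), assuming
    isotropic emission over 4*pi steradians.

    Unresolved event: E = I * 4*pi * dt.
    Resolved event:   E = L * Omega * R^2 * 4*pi * dt,
    where Omega is the solid angle subtending the source and R the range.
    """
    if intensity is not None:
        return intensity * 4 * math.pi * duration
    return luminance * solid_angle * r**2 * 4 * math.pi * duration
```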
(40) These four (if signature unresolved) or five (if signature resolved) attributes make it possible to robustly classify the threat with the means 55. When the event has been validated on the basis of its temperature or, rather, has not been considered to be a false alarm, it is then classified on the basis of its duration, its temperature, its intensity, its energy and possibly its luminance and predetermined classes as shown in the table of
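A classification step of this kind can be sketched as a walk through a class table. The class names and every threshold below are invented placeholders, not values from the patent (whose actual classes are given in a figure not reproduced here):

```python
# Illustrative class table: (name, max duration s, max temp K, max intensity W/sr)
# All bounds are hypothetical placeholders for demonstration only.
CLASSES = [
    ("sniper shot",   0.01, 1600.0, 5e2),
    ("small caliber", 0.05, 1800.0, 1e3),
    ("RPG / rocket",  0.10, 2000.0, 1e4),
]

def classify(duration, temperature, intensity):
    """Return the first class whose bounds contain all three attributes."""
    for name, d_max, t_max, i_max in CLASSES:
        if duration <= d_max and temperature <= t_max and intensity <= i_max:
            return name
    return "unclassified"
```

A full implementation would also test the energy attribute and, for resolved signatures, the luminance, as the text above specifies.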
(41) By associating the classification of the threat and the estimation of the distance and energy of the event, it is possible to decide whether or not the imaging system or the observer is situated in the threat lethality region P and thus to undertake, if necessary and possible, a suitable retaliation in real time. This retaliation can be triggered automatically by the means 56 of the processing unit as shown in
(42) This detection and classification method can in particular be implemented on the basis of a computer program product, this computer program comprising code instructions making it possible to perform the steps of the method. It is recorded on a medium readable by the computer, which is also used to synchronize the SWIR and visible images. The medium can be electronic, magnetic, optical, electromagnetic, or a dissemination medium such as infrared. Such media are, for example, semiconductor memories (Random Access Memory (RAM), Read-Only Memory (ROM)), tapes, magnetic or optical diskettes or disks (Compact Disk-Read Only Memory (CD-ROM), Compact Disk-Read/Write (CD-R/W) and DVD).