SMALL UNMANNED AERIAL SYSTEMS DETECTION AND CLASSIFICATION USING MULTI-MODAL DEEP NEURAL NETWORKS
20230039196 · 2023-02-09
CPC classification
B64U2201/102
PERFORMING OPERATIONS; TRANSPORTING
B64C39/024
PERFORMING OPERATIONS; TRANSPORTING
G06N3/0442
PHYSICS
International classification
G05D1/10
PHYSICS
G05D1/00
PHYSICS
Abstract
Provided is a detection and classification system and method for small unmanned aircraft systems (sUAS). The system and method detect and classify multiple simultaneous heterogeneous RC transmitters/sUAS downlinks from their RF signatures using Object Detection Deep Convolutional Neural Networks (DCNNs). In addition to passive RF, the method may also utilize Electro-Optic/Infrared (EO/IR), radar, and acoustic sensors, with a fusion of the individual sensor classifications. The result is detection and classification, with Identification Friend or Foe (IFF), of individual sUAS in a swarm; a multi-modal approach for high-confidence classification and decision; and implementation on a low C-SWaP (cost, size, weight and power) NVIDIA Jetson TX2 embedded AI platform.
Claims
1. A small unmanned aerial systems (sUAS) detection and classification system comprising: an RF sensor configured to monitor RF frequencies in an environment in which at least one sUAS is being operated and determine one or more RF spectrograms based on the monitored RF frequencies; an optical sensor configured to capture at least one of electro-optic information or infrared (IR) information about the at least one sUAS; a radar sensor configured to measure radar information and determine one or more radar spectrograms from the measured radar information for the environment in which at least one sUAS is being operated; a sound sensor configured to record acoustical information in the environment in which at least one sUAS is being operated and generate one or more acoustic spectrograms based on the recorded acoustical information; and at least one deep convolutional neural network (DCNN) coupled with one or more of the RF sensor, the optical sensor, the radar sensor, and the sound sensor, the DCNN configured to identify and/or classify the at least one sUAS based on one or more of the one or more RF spectrograms, the electro-optic information or infrared (IR) information, the one or more radar spectrograms, and the one or more acoustic spectrograms.
2. The sUAS detection and classification system of claim 1, wherein the system is configured to: collect at least one frequency hop sequence of an RF signal corresponding to at least one particular sUAS.
3. The sUAS detection and classification system of claim 2, wherein the system is configured to: train at least one long short-term memory (LSTM) recurrent neural network (RNN) based on the collected at least one frequency hop sequence to enable prediction of subsequent transmission of the at least one frequency hop sequence.
4. The sUAS detection and classification system of claim 1, further comprising: an ultra wideband (UWB) omnidirectional antenna coupled to the RF sensor.
5. The sUAS detection and classification system of claim 1, further comprising: the at least one deep convolutional neural network (DCNN) comprising a Siamese DCNN (SDCNN).
6. The sUAS detection and classification system of claim 5, wherein the SDCNN comprises: a first DCNN configured to receive a first input and output a first weighted data; a second DCNN configured to receive a second input and output a second weighted data, wherein the first and second DCNN share input weighting factors; a distance module configured to compute a Euclidean distance between the output first and second weighted data; and a hinge embedding loss (HEL) computation module configured to measure the degree of similarity of the first and second inputs from the Euclidean distance and determine a gradient of the loss with respect to DCNN parameters of the SDCNN.
7. The sUAS detection and classification system of claim 6, wherein the SDCNN is further configured to propagate the gradient of the loss backwards using a back propagation algorithm to update the SDCNN parameters such that the HEL is a first value for inputs from a same RF transmitter and a second value for inputs from different transmitters, wherein the first value is less than the second value.
8. A method for small unmanned aerial systems (sUAS) detection and classification comprising: monitoring RF frequencies in an environment in which at least one sUAS is being operated and determining one or more RF spectrograms based on the monitored RF frequencies; capturing at least one of electro-optic information or infrared (IR) information about the at least one sUAS; measuring radar information and determining one or more radar spectrograms from the measured radar information for the environment in which at least one sUAS is being operated; recording acoustical information in the environment in which at least one sUAS is being operated and generating one or more acoustic spectrograms based on the recorded acoustical information; and identifying and/or classifying the at least one sUAS using at least one deep convolutional neural network (DCNN) coupled with one or more of an RF sensor, an optical sensor, a radar sensor, and a sound sensor, the DCNN configured based on one or more of the one or more RF spectrograms, the electro-optic information or infrared (IR) information, the one or more radar spectrograms, and the one or more acoustic spectrograms.
9. The method for sUAS detection and classification of claim 8, further comprising: collecting at least one frequency hop sequence of an RF signal corresponding to at least one particular sUAS.
10. The method for sUAS detection and classification of claim 9, further comprising: training at least one LSTM RNN based on the collected at least one frequency hop sequence to enable prediction of subsequent transmission of the at least one frequency hop sequence.
11. The method for sUAS detection and classification of claim 8, further comprising: the at least one deep convolutional neural network (DCNN) comprising a Siamese DCNN (SDCNN).
12. The method for sUAS detection and classification of claim 11, wherein the SDCNN comprises: a first DCNN configured to receive a first input and output a first weighted data; a second DCNN configured to receive a second input and output a second weighted data, wherein the first and second DCNN share input weighting factors; a distance module configured to compute a Euclidean distance between the output first and second weighted data; and a hinge embedding loss (HEL) computation module configured to measure the degree of similarity of the first and second inputs from the Euclidean distance and determine a gradient of the loss with respect to DCNN parameters of the SDCNN.
13. The method for sUAS detection and classification of claim 12, wherein the SDCNN is further configured to propagate the gradient of the loss backwards using a back propagation algorithm to update the SDCNN parameters such that the HEL is a first value for inputs from a same RF transmitter and a second value for inputs from different transmitters, wherein the first value is less than the second value.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The detailed description of the drawings particularly refers to the accompanying figures.
DETAILED DESCRIPTION
[0017] The embodiments of the present invention described herein are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Rather, the embodiments selected for description have been chosen to enable one skilled in the art to practice the invention.
[0018] The present disclosure provides systems and methods for detecting and classifying an sUAS based on monitored uplink and downlink radio frequency (RF) signals, radar and acoustic spectrograms derived from monitored radar and acoustical/sound data, and Electro-Optic/Infrared (EO/IR) images captured using EO/IR devices such as cameras, as just a few examples. Detection and classification of an sUAS may further include the determination of hostile/non-hostile intent of the sUAS, the prediction of frequency hopping (FH) sequences of FH Spread Spectrum (FHSS) RC transmitters, and specific emitter identification (SEI), which can be used to differentiate between multiple sUAS of the same type.
[0020] Furthermore, the system 100 may include radar devices (not explicitly shown in the figures).
[0023] In further aspects, a background class may be recorded for each of the bands (e.g., the four bands 433 MHz, 915 MHz, 2.4 GHz and 5.8 GHz). Variable signal-to-noise ratios (SNR) may also be simulated by attenuating the pixel values by a factor between 0.1 and 1.0. In one example, approximately 70% of the spectrograms may be used to train the DCNN and the remaining approximately 30% may be used for testing. In one example, training and testing may be accomplished using an NVIDIA DIGITS/CAFFE Deep Learning (DL) framework, which is browser driven and allows one to easily change DCNN hyperparameters and visualize the accuracy and loss curves during training. In one example, using such a framework may result in classification accuracies of 100% for the 433 and 915 MHz bands, 99.8% for the 2.4 GHz band and 99.3% for the 5.8 GHz band. Similar high classification scores may be obtained on a dataset generated from limited look windows distributed over the one-second waveform to simulate the scan and jam cycles of counter-measure jammers (e.g., CREW jammers).
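By way of illustration, the following is a minimal Python/NumPy sketch of the SNR attenuation and approximate 70/30 train/test split described above; the function names and data layout are illustrative assumptions, not part of the disclosed implementation.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def attenuate(spectrogram: np.ndarray) -> np.ndarray:
    # Simulate a variable SNR by scaling spectrogram pixel values
    # by a random factor between 0.1 and 1.0, as described above.
    return spectrogram * rng.uniform(0.1, 1.0)

def train_test_split(spectrograms: list, train_fraction: float = 0.7):
    # Shuffle, then reserve ~70% of the spectrograms for training and
    # the remaining ~30% for testing; whether test data is also
    # attenuated is an implementation choice not specified above.
    order = rng.permutation(len(spectrograms))
    cut = int(train_fraction * len(spectrograms))
    train = [attenuate(spectrograms[i]) for i in order[:cut]]
    test = [spectrograms[i] for i in order[cut:]]
    return train, test
```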
[0026] In further aspects, it is noted that an object detection DCNN may also be applied to an EO simulation input in the form of RC flight simulator video frames from a source such as a flight simulator (e.g., RealFlight 7.5 simulator). Here, different types of aircraft may be “flown” in this experiment while capturing video of the flight simulator screen to an MPEG-4 file. In further aspects, a time period, such as 90 seconds of 30 fps video, may be recorded for each aircraft with a number of frames per video extracted (e.g., 2,700 frames). The method also includes labeling, either manually by drawing a bounding box around the aircraft in each frame, or automatically by running the frames through an ImageNet pre-trained YOLOv2 model (which has aircraft as one of its categories, as an example) and then overwriting the generic “aircraft” label with the actual type of aircraft being flown by modifying the YOLOv2 C code. The YOLOv2 Object Detection DCNN may be trained on the frames and annotations of all selected training aircraft. In further implementations of object detection, pixel coordinates of a detected aircraft and the confidence level are continuously displayed as the aircraft is tracked frame-by-frame, and this procedure is applied to detect and classify sUAS in the present systems and methods. There are a number of quadcopter models that may be input or are resident in flight simulators, and video captured from such flight simulators can be used to initially train an EO Object Detection DCNN. Real-time video of quadcopters in flight may then be further used for training the DCNN. The tracking of bounding box pixel coordinates may be used to control the rotation of a video camera (Slew to Cue) mounted on a servo motor driven platform so that an aircraft is always in view at the center of the video frame. An Object Detection DCNN may also be trained on the MS-COCO database to detect aircraft in the DIGITS/CAFFE DL framework. This alternative (backup) method can be used in inference mode on an NVIDIA Jetson TX2 embedded GPU platform (e.g., 406).
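As a hedged illustration of the Slew-to-Cue tracking step, the Python sketch below converts a detected bounding box into proportional pan/tilt corrections for a servo-mounted camera; the interface and gain value are assumptions for illustration only, not the disclosed control law.

```python
def slew_to_cue(bbox, frame_width, frame_height, gain=0.05):
    # bbox: (x_min, y_min, x_max, y_max) pixel coordinates reported by
    # the object detection DCNN for the tracked aircraft.
    cx = 0.5 * (bbox[0] + bbox[2])          # bounding-box center, x
    cy = 0.5 * (bbox[1] + bbox[3])          # bounding-box center, y
    pan_error = cx - frame_width / 2.0      # >0: target right of center
    tilt_error = cy - frame_height / 2.0    # >0: target below center
    # Proportional corrections (degrees) that recenter the aircraft.
    return gain * pan_error, -gain * tilt_error

# Example: a 60x40 box near the right edge of a 1280x720 frame.
pan, tilt = slew_to_cue((1100, 300, 1160, 340), 1280, 720)
```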
[0028] For detection and classification of frequency hops, DJI Mavic Pro uplink and background noise can be visualized. It is also noted that YOLOv2 software (and updated versions up to YOLOv7) is open source C code that uses the OpenCV and NVIDIA CUDA (Compute Unified Device Architecture) GPU libraries. Gr-fosphor uses the OpenCL library for parallel FFT computing and OpenGL for rendering the graphics, with both libraries running on the GPU (e.g., 406). The training set can be expanded to include all the different types of RC transmitters for sUASs referred to above. In addition, a method for extracting the frequency hopping sequences from FHSS RC transmitters may use the RF-labeled output of the YOLOv2 software. A Python script processes the sequential labeled sub-spectrogram frames (24 per 1-second I/Q recording) and outputs time-sorted hop sequences, which may be used to train a long short-term memory (LSTM) Recurrent Neural Network (RNN) for frequency hop prediction.
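The sketch below shows one way such time-sorted hop sequences could be assembled from the labeled sub-spectrogram frames and fed to an LSTM next-hop predictor; the detection record format and layer sizes are assumptions, written in Python/PyTorch rather than the original script.

```python
import torch
import torch.nn as nn

def hop_sequence(detections):
    # detections: hypothetical (frame_index, center_frequency_mhz) pairs
    # from the RF-labeled YOLOv2 output (24 sub-spectrogram frames per
    # 1-second I/Q recording); returns the time-sorted hop frequencies.
    return [freq for _, freq in sorted(detections)]

class HopPredictor(nn.Module):
    # Minimal LSTM RNN that predicts the next hop frequency from the
    # preceding hops; the hidden size is an illustrative choice.
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, hops):            # hops: (batch, seq_len, 1)
        out, _ = self.lstm(hops)
        return self.head(out[:, -1])    # predicted next frequency
```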
[0030] In yet a further aspect, another part of the system and method focuses on specific emitter identification (SEI), or RF fingerprinting. To accomplish this fingerprinting, a Siamese DCNN (SDCNN) may be trained on the nonlinearities (due to power amplifiers, manufacturing defects, etc.) of radio transmitters, according to some aspects.
[0031] As shown in the figures, the SDCNN comprises a first DCNN that receives a first input and outputs first weighted data, and a second DCNN that receives a second input and outputs second weighted data, the first and second DCNNs sharing weighting factors. A distance module computes the Euclidean distance between the two outputs, which is forward propagated to the hinge embedding loss (HEL) module 708.
[0032] Following the forward propagation, the gradient of the loss with respect to the DCNN parameters is computed and propagated backwards using a back propagation algorithm to update the DCNN parameters in such a way as to make the HEL small for inputs from the same transmitter and large for inputs from different transmitters. That is, the hinge embedding loss module 708 is configured to measure the degree of similarity of the first and second inputs from the Euclidean distance, determine a gradient of the loss with respect to the DCNN parameters of the SDCNN, and ensure that the loss is small for same transmitters and large for different transmitters.
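For concreteness, the sketch below mirrors the forward pass and loss of claim 6 in Python/PyTorch: twin DCNNs with shared weights, a Euclidean distance, and a hinge embedding loss that is small for same-transmitter pairs and large for different-transmitter pairs. The backbone layers and sizes are illustrative assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class SiameseRF(nn.Module):
    # Twin DCNN; both inputs pass through the SAME backbone, so the
    # two branches share weighting factors.
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.LazyLinear(64),
        )

    def forward(self, x1, x2):
        return self.backbone(x1), self.backbone(x2)

model = SiameseRF()
x1 = torch.randn(8, 1, 64, 64)     # first input spectrograms
x2 = torch.randn(8, 1, 64, 64)     # second input spectrograms
y = torch.tensor([1, -1, 1, -1, 1, -1, 1, -1])  # +1: same transmitter

e1, e2 = model(x1, x2)
d = torch.norm(e1 - e2, dim=1)     # Euclidean distance module
loss = nn.HingeEmbeddingLoss(margin=1.0)(d, y)  # small if same, large if different
loss.backward()                    # gradient propagates through both twins
```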
[0034] Method 900 begins with monitoring RF frequencies in an environment in which at least one sUAS is being operated and determining one or more RF spectrograms based on the monitored RF frequencies, as shown at block 902. In block 904, method 900 includes capturing at least one of electro-optic information or infrared (IR) information about the at least one sUAS.
[0035] Next, method 900 includes measuring radar information and determining one or more radar spectrograms from the measured radar information for the environment in which at least one sUAS is being operated, as shown at block 906. In block 908, method 900 includes recording/capturing acoustical information in the environment in which at least one sUAS is being operated and generating one or more acoustic spectrograms based on the recorded acoustical information. Finally, method 900 includes identifying and/or classifying the at least one sUAS using at least one deep convolutional neural network (DCNN) coupled with one or more of the RF sensor, the optical sensor, the radar sensor, and the sound sensor, the DCNN configured based on one or more of the one or more RF spectrograms, the electro-optic information or infrared (IR) information, the one or more radar spectrograms, and the one or more acoustic spectrograms, as shown in block 910.
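One simple way to fuse the individual sensor classifications into a single decision is a weighted average of per-sensor class probabilities, sketched below in Python/NumPy; the fusion rule and weights are illustrative assumptions, as the disclosure does not prescribe a specific fusion rule here.

```python
import numpy as np

def fuse_classifications(sensor_probs, weights=None):
    # sensor_probs: (n_sensors, n_classes) class-probability vectors,
    # e.g., one row each for the RF, EO/IR, radar, and acoustic DCNNs.
    probs = np.asarray(sensor_probs, dtype=float)
    w = np.ones(len(probs)) if weights is None else np.asarray(weights, float)
    fused = (w[:, None] * probs).sum(axis=0) / w.sum()
    return int(np.argmax(fused)), fused   # (class index, fused probabilities)
```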
[0036] Although the invention has been described in detail with reference to certain preferred embodiments, variations and modifications exist within the spirit and scope of the invention as described and defined in the following claims.