RELATIVE POSITION DETERMINATION METHOD FOR MULTIPLE UNMANNED AERIAL, MARINE AND LAND VEHICLES
20240338031 · 2024-10-10
Assignee
- Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi (Ankara, TR)
- GEBZE TEKNIK UNIVERSITESI (Kocaeli, TR)
Inventors
CPC classification
G05D1/686
PHYSICS
G06V20/56
PHYSICS
International classification
G05D1/686
PHYSICS
Abstract
A camera-based, direct-observation-based relative position determination method for multiple unmanned aerial, naval and ground vehicles is provided. The method calculates the relative position between the relevant vehicles in multiple UAV, UNV and UGV systems.
Claims
1-4. (canceled)
5. A relative position determination method for multiple unmanned aerial/ground/naval vehicles, based on a camera and direct observation in multiple unmanned aerial/ground/naval vehicle systems, and characterized by comprising the following process steps; a. performing, by a target unmanned aerial (UAV.sub.T)/ground/naval vehicle, a predefined movement at different relative poses, b. collecting data by recording, at an observer unmanned aerial (UAV.sub.O)/ground/naval vehicle, the trajectory on the image plane corresponding to the predefined movement performed by the target unmanned aerial (UAV.sub.T)/ground/naval vehicle at different relative poses, so that trajectory and ground truth data are recorded for many different relative pose values, c. training a deep learning model using the collected data, d. starting the predefined movement of the target (collaborator/friend) unmanned aerial (UAV.sub.T)/ground/naval vehicle, e. capturing an image at the observer unmanned aerial (UAV.sub.O)/ground/naval vehicle by means of an image capture unit and checking whether a target unmanned aerial/ground/naval vehicle is present in the captured image, f. determining bounding box information if the target unmanned aerial (UAV.sub.T)/ground/naval vehicle is detected in the captured image, g. returning to step e if the target unmanned aerial (UAV.sub.T)/ground/naval vehicle is not detected in the captured image, h. tracking the target with image tracking algorithms using the bounding box information, i. extracting features between consecutive points using the bounding box information, preferably taking into account the relationship between the positions of the center point of the bounding box in the current and previous images, and j. calculating the 6-DOF relative pose between the observer unmanned aerial (UAV.sub.O)/ground/naval vehicle and the target unmanned aerial (UAV.sub.T)/ground/naval vehicle by providing the extracted features as input to the deep learning model.
6. The method according to claim 5, wherein said bounding box information is corner point, width and height information.
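Steps h-i of claim 5, combined with the bounding box representation of claim 6 (corner point, width and height), can be sketched as follows. This is a minimal illustration, not the claimed implementation; the function names are hypothetical, and the feature is simply the displacement of the bounding-box center between consecutive images, as claim 5 suggests preferably.

```python
import numpy as np

def bbox_center(bbox):
    """Center of a bounding box given as (corner x, corner y, width, height),
    the representation of claim 6."""
    x, y, w, h = bbox
    return np.array([x + w / 2.0, y + h / 2.0])

def trajectory_features(bboxes):
    """Step i of claim 5: features from the displacement of the bounding-box
    center between consecutive images. Returns (n-1, 2) displacement vectors."""
    centers = np.array([bbox_center(b) for b in bboxes])
    return np.diff(centers, axis=0)
```

The resulting displacement sequence is what step j feeds to the deep learning model.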
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0034] The reference characters used in the FIGS. are as follows: [0035] 1: unmanned aerial vehicle (UAV). [0036] 2: Overlapping eight trajectories (OET). [0037] 3: Discriminative, rotated second eight trajectories (SET).
[0038] In this detailed description, the preferred embodiments of the inventive relative positioning method for multiple unmanned aerial vehicles are described by way of example only, for the purpose of clarifying the subject matter.
[0039] An image-based method is developed to calculate the relative positions between multiple unmanned aerial, naval and ground vehicles so that they can operate together in the same environment. The developed method solves the relative pose transformation between vehicles in the collaborative visual simultaneous localization and mapping problem in multiple UAV, UNV and UGV systems.
[0040] In a multi-UAV system, the UAVs are divided into two groups, observer UAVs (UAV.sub.O) and target UAVs (UAV.sub.T), in the relative position calculation method between UAVs. The target UAV (UAV.sub.T) performs a predefined movement (e.g. drawing an eight-shaped trajectory), while the observer UAV (UAV.sub.O) watches it through an on-board camera. In the case where the motion (H(t)) of the UAV.sub.T is known (see Equation 1), certain parameters of the relative position between the UAV.sub.T and the UAV.sub.O can be calculated (see the FIGS.).
In the simplest case, where the UAV.sub.O is stationary (hovering) at a certain position in the air and visually tracks the movement of the UAV.sub.T, and assuming that the distance between the UAVs is large, the UAV.sub.O observes the UAV.sub.T as a point on the ray connecting the instantaneous position of the UAV.sub.T and the camera center of the UAV.sub.O. The trajectory (P(t)) on the camera plane is formed as the projection of the three-dimensional motion (H(t)) of the UAV.sub.T. As specified in Equation 2, this projection depends on the camera calibration matrix (K), the relative pose between the UAVs ([R.sub.TG, T.sub.TG]) and the three-dimensional motion (H(t)) of the UAV.sub.T. For example, at different ([R.sub.TG, T.sub.TG]) values, when the UAV.sub.T draws eight-shaped trajectories of the same size, the projections of these trajectories on the camera will be different (see the FIGS.).
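The projection relationship of Equation 2 can be sketched as a standard pinhole projection. This is a minimal illustration under the usual pinhole model, assuming the motion H(t) is given as a sequence of 3-D points and [R, T] is the relative pose of the target frame in the observer camera frame; the function name is hypothetical.

```python
import numpy as np

def project_trajectory(H, K, R, T):
    """Project a 3-D trajectory H(t) (N x 3 array of points) onto the
    image plane: P(t) ~ K (R H(t) + T). Each point is transformed into
    the observer camera frame and projected with the intrinsics K."""
    cam = (R @ H.T).T + T              # points in the observer camera frame
    pix = (K @ cam.T).T                # homogeneous pixel coordinates
    return pix[:, :2] / pix[:, 2:3]    # perspective divide -> N x 2
```

Running this with the same H(t) but different (R, T) values produces different image-plane trajectories, which is exactly the cue the method exploits.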
[0041] The movement of the UAV.sub.T is locally known up to an error (ε.sub.0) that depends on its own positioning system. For example, if the UAV.sub.T uses the Visual Simultaneous Localization and Mapping (Visual SLAM) method as its positioning system, ε.sub.0 varies and accumulates over time. Ideally, the conversion function from the trajectory in the image plane to the three-dimensional pose, defined in Equation 3, would be realized analytically. However, the characteristics of ε.sub.0 may make it analytically impossible to perform this conversion. The proposed method is based on realizing this conversion with a data-driven method.
[0042] In the training phase, both the UAV.sub.T and the UAV.sub.O use GNSS as the positioning system, so that the ground truth value of the relative position between the two UAVs is known. While the UAV.sub.T performs its predefined motion (H(t)), the UAV.sub.O records the trajectory (P(t)) on the image plane corresponding to this motion. With the data collected in this way, a deep learning model is trained to estimate the conversion function f(P(t)) (see Equation 3) between each H(t) and P(t).
[0043] The training phase is carried out in the following three sub-stages: [0044] 1. Data collection: At this stage, the UAV.sub.T performs its predefined motion (H(t)) at different relative poses, while the UAV.sub.O records the trajectory (P(t)) on the image plane corresponding to this motion. In this way, both trajectory data and ground truth data are recorded for many different relative pose values. [0045] 2. Offline Deep Learning Model Training: Using the data collected in Stage 1, the deep learning model used in the relative pose estimation method for multiple unmanned aerial, naval and ground vehicles is trained. The model trained at this stage is used as the base model for all configurations (different platforms, different cameras, etc.). [0046] 3. Online Deep Learning Model Training: In this phase, data is collected in a known environment with the UAV and camera configuration to be used in the mission, and the model parameters trained in Stage 2 are fine-tuned for the current configuration. The amount of data collected at this stage may be less than the amount collected in Stage 1.
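The Stage 2 supervised regression above can be sketched as follows. The patent trains a deep learning model; here, to keep the example self-contained, a plain linear regressor trained by gradient descent on mean-squared error stands in for it, with flattened image-plane trajectories as inputs and GNSS-derived 6-DOF poses as targets. All names and shapes are illustrative assumptions.

```python
import numpy as np

def train_pose_regressor(trajs, poses_gt, epochs=200, lr=1e-2):
    """Stage 2 (offline) training sketch: fit a linear map from a
    flattened image-plane trajectory P(t) to the 6-DOF relative pose.
    trajs: (n_samples, n_points, 2); poses_gt: (n_samples, 6) from GNSS."""
    X = trajs.reshape(len(trajs), -1)        # flatten each trajectory
    W = np.zeros((X.shape[1], 6))
    b = np.zeros(6)
    for _ in range(epochs):
        pred = X @ W + b
        err = pred - poses_gt                # residual vs ground truth
        W -= lr * X.T @ err / len(X)         # gradient step on MSE loss
        b -= lr * err.mean(axis=0)
    return W, b
```

Stage 3 would repeat the same loop, starting from the Stage 2 parameters, on the smaller configuration-specific dataset.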
[0047] In the proposed method, no prior parametric calibration (e.g. camera calibration) is required. During a training flight in the test area, the aircraft perform certain predefined movements and collect data to learn the configuration between themselves. Using this data, a deep learning model is trained to generate 6-degree-of-freedom position information between the air vehicles. The air vehicles then perform their missions using this model. Thus, the proposed method is generalizable to different types of unmanned aerial vehicles and different flight configurations.
[0048] The predefined movement (trajectory) to be performed by the UAV.sub.T can be configured so that it is easily distinguished at any distance and angle on the camera of the UAV.sub.O, and can be any trajectory (size, geometry, etc.). In some cases, at two different relative poses ([R.sub.TG, T.sub.TG]) between the UAVs, the projection of the trajectory drawn by the UAV.sub.T on the UAV.sub.O's camera may be the same. The predefined movement (trajectory) to be performed by the UAV.sub.T is selected so as to eliminate such ambiguity. For example, when an eight-shaped trajectory is considered, the projection on the UAV.sub.O's camera of the eight shapes drawn by the UAV.sub.T at two different yaw angles would be the same. To prevent this situation, drawing a second, rotated eight shape after the first eight shape can eliminate this uncertainty (see the FIGS.).
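The ambiguity-breaking motion described above can be sketched as a figure-eight followed by a second figure-eight rotated about the vertical (yaw) axis. This is only an illustration of the idea; the lemniscate parameterization, the `size` and `yaw2` parameters and the function name are assumptions, not the patent's specific trajectory.

```python
import numpy as np

def predefined_motion(n=100, size=1.0, yaw2=np.pi / 4):
    """Generate H(t): a figure-eight in the horizontal plane, followed by
    the same figure-eight rotated in yaw by `yaw2` to break the yaw-angle
    ambiguity. Returns (2n, 3) waypoints."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    # Lemniscate-like figure-eight in the horizontal (z = 0) plane
    eight = np.stack([size * np.sin(t),
                      size * np.sin(t) * np.cos(t),
                      np.zeros(n)], axis=1)
    c, s = np.cos(yaw2), np.sin(yaw2)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    second = eight @ Rz.T          # same eight, rotated about the yaw axis
    return np.vstack([eight, second])
```

Because the two eights differ by a known yaw rotation, their joint projection disambiguates relative poses that a single eight would leave indistinguishable.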
[0049] In the relative pose determination method, the flowchart of the relative pose calculation algorithm, which runs on the UAV.sub.O using the previously trained deep learning model while the UAV.sub.T performs its defined movement during the mission, is shown in the FIGS.
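The mission-time loop (steps e-j of claim 5) can be sketched as below. `detect`, `track` and `model` are placeholders for the air-vehicle detector, the image tracker and the trained deep learning model; the patent does not name concrete algorithms for any of them, so this is an illustrative skeleton only.

```python
import numpy as np

def relative_pose_loop(frames, detect, track, model):
    """Sketch of steps e-j of claim 5, running on the observer vehicle:
    detect the target, track it, accumulate center-displacement features,
    then feed the features to the trained model for the 6-DOF pose."""
    features, prev_center, bbox = [], None, None
    for frame in frames:
        if bbox is None:
            bbox = detect(frame)          # steps e/f: look for the target
            if bbox is None:
                continue                  # step g: no target, next frame
        else:
            bbox = track(frame, bbox)     # step h: follow the target
        x, y, w, h = bbox
        center = np.array([x + w / 2.0, y + h / 2.0])
        if prev_center is not None:       # step i: consecutive-frame features
            features.append(center - prev_center)
        prev_center = center
    return model(np.array(features))      # step j: 6-DOF relative pose
```

Any detector/tracker pair producing (corner, width, height) boxes per claim 6 fits this interface.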
[0054] The proposed method can be used in two different ways, depending on whether there is communication between the UAV.sub.O and the UAV.sub.T. When there is communication between the UAVs, the UAV.sub.T can share the type and start/finish times of the movement with the UAV.sub.O. In addition, the data shared between the UAVs facilitates the detection of the UAV.sub.T in the Air Vehicle Detection Block. With communication, the difficulty of the problem and the complexity of the method are somewhat reduced. However, it is considered important that the method also works when the communication system fails, e.g. due to jamming in the environment. In the absence of communication between the UAVs, the observer UAV must itself detect the type of pattern performed by the target UAV and the start/end of the movement. In this case, the complexity of the problem and of the method to be used is higher.
[0055] In the proposed method, the UAV.sub.T performs the predefined movement (H(t)) at certain times. This movement can be performed periodically at a certain frequency, or in case of a specific need (for example, when the positioning accuracy of the aircraft falls below a certain value).
[0056] In the proposed method, all air vehicles used during the mission have the same configuration (same processor and sensor units). Thus, all air vehicles can fulfil the role of both UAV.sub.O and UAV.sub.T. In this way, it is considered that the proposed method can be used in any type of UAV team/swarm with two or more members.
REFERENCES
[0057] [1] Kaustav Chakraborty, Martin Deegan, Purva Kulkarni, Christine Searle and Yuanxin Zhong, JORB-SLAM: A Jointly Optimized Multi-Robot Visual SLAM, 2020. [0058] [2] T. Krajnik, M. Nitsche, J. Faigl, P. Vanek, M. Saska, A Practical Multirobot Localization System, Journal of Intelligent & Robotic Systems, vol. 76, no. 3-4, pp. 539-562, 2014. [0059] [3] Viktor Walter, Nicolas Staub, Antonio Franchi, Martin Saska, UVDAR System for Visual Relative Localization with Application to Leader-Follower Formations of Multirotor UAVs, IEEE Robotics and Automation Letters, vol. 4, no. 3, pp. 2637-2644, 2019. [0060] [4] Matous Vrba, Martin Saska, Marker-Less Micro Aerial Vehicle Detection and Localization Using Convolutional Neural Networks, IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 2459-2466, 2020. [0061] [5] Roberto Opromolla, Giancarmine Fasano and Domenico Accardo, A Vision-Based Approach to UAV Detection and Tracking in Cooperative Applications, Sensors (Switzerland), vol. 18, no. 10, p. 3391, 2018. [0062] [6] Patrik Schmuck and Margarita Chli, Multi-UAV Collaborative Monocular SLAM, in IEEE International Conference on Robotics and Automation (ICRA), Singapore, 2017. [0063] [7] Pierre-Yves Lajoie, Benjamin Ramtoula, Yun Chang, Luca Carlone, Giovanni Beltrame, DOOR-SLAM: Distributed, Online, and Outlier Resilient SLAM for Robotic Teams, IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1656-1663, 2020. [0064] [8] Hui Zhang, Xieyuanli Chen, Huimin Lu and Junhao Xiao, Distributed and Collaborative Monocular Simultaneous Localization and Mapping for Multi-robot System in Large-scale Environments, International Journal of Advanced Robotic Systems, vol. 15, no. 3, pp. 1-20, 2018.