RELATIVE POSITION DETERMINATION METHOD FOR MULTIPLE UNMANNED AERIAL, MARINE AND LAND VEHICLES

20240338031 · 2024-10-10

Abstract

A camera-based, direct-observation relative position determination method for multiple unmanned aerial, naval and ground vehicles is provided. The method calculates the relative position between the relevant vehicles in multiple UAV, UNV and UGV systems.

Claims

1-4. (canceled)

5. A relative position determination method for multiple unmanned aerial/ground/naval vehicles based on a camera and direct observation in multiple unmanned aerial/ground/naval vehicle systems, characterized by comprising the following process steps: a. performing a predefined movement at different relative poses by a target unmanned aerial vehicle (UAV.sub.T)/ground/naval vehicle, b. collecting data by recording, on the observer unmanned aerial vehicle (UAV.sub.O)/ground/naval vehicle, the trajectory on the image plane corresponding to the predefined movement performed by the target unmanned aerial vehicle (UAV.sub.T)/ground/naval vehicle at different relative poses, so that trajectory and ground truth data are recorded for many different relative pose values, c. training a deep learning model using the collected data, d. starting the predefined movement of the target (collaborator/friend) unmanned aerial vehicle (UAV.sub.T)/ground/naval vehicle, e. capturing an image by an observer unmanned aerial vehicle (UAV.sub.O)/ground/naval vehicle by means of an image capture unit and checking whether a target unmanned aerial/ground/naval vehicle is present in the captured image, f. determining the bounding box information if the target unmanned aerial vehicle (UAV.sub.T)/ground/naval vehicle is detected in the captured image, g. returning to step e if the target unmanned aerial vehicle (UAV.sub.T)/ground/naval vehicle is not detected in the captured image, h. tracking the target using image tracking algorithms and the bounding box information, i. extracting features between consecutive points using the bounding box information, preferably taking into account the relationships between the positions of the center point of the bounding box in the current and previous images, and j. calculating the 6-DOF relative position between the observer aerial (UAV.sub.O)/ground/naval vehicle and the target aerial (UAV.sub.T)/ground/naval vehicle by providing the extracted features as input to the deep learning model.

6. The method according to claim 5, wherein said bounding box information is corner point, width and height information.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0029] FIG. 1 is a representative view of the multi-UAV movement geometry of the invention.

[0030] FIGS. 2A-2B are representative views of the image of the UAV.sub.T's trajectory on the UAV.sub.O's camera for two different relative positions of the invention.

[0031] FIG. 3 is a representative view of resolving (preventing) ambiguous situations of the invention by means of a special trajectory.

[0032] FIG. 4 is a representative view of the observer UAV (UAV.sub.O) relative pose calculation flowchart of the invention.

[0033] FIG. 5 is a representative view of the flow chart of the observer UAV (UAV.sub.O) air vehicle detection block of the invention.

DETAILED DESCRIPTION OF THE EMBODIMENTS

[0034] The reference characters used in the FIGS. are as follows: [0035] 1: unmanned aerial vehicle (UAV). [0036] 2: Overlapping eight trajectories (OET). [0037] 3: Discriminative, rotated second eight trajectories (SET).

[0038] In this detailed description, the preferred embodiments of the inventive relative positioning method for multiple unmanned aerial vehicles are described by means of examples only for clarifying the subject matter.

[0039] An image-based method is developed to calculate the relative positions between multiple unmanned aerial, naval and ground vehicles so that they can operate together in the same environment. The developed method is used to solve the relative pose transformation between vehicles in the collaborative visual simultaneous localization and mapping problem in multiple UAV, UNV and UGV systems.

[0040] In a multi-UAV system, the UAVs are divided into two groups in the relative position calculation method: observer UAVs (UAV.sub.O) and target UAVs (UAV.sub.T). The target UAV (UAV.sub.T) performs a predefined movement (e.g. drawing an eight-shaped trajectory), while the observer UAV (UAV.sub.O) observes it through its onboard camera. In the case where the motion (H(t)) of the UAV.sub.T is known (see Equation 1), certain parameters of the relative position between the UAV.sub.T and the UAV.sub.O can be calculated (see FIG. 1).

[00001] $H(t) = \begin{bmatrix} R_0 & T_0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R(t) & T(t) \\ 0 & 1 \end{bmatrix} + \epsilon_0(t)$ (Equation 1)
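The following is a minimal numerical sketch of Equation 1, assuming 4x4 homogeneous transforms for the fixed initial pose and the time-varying predefined motion; the function and variable names are illustrative and not taken from the specification.

```python
# Minimal sketch of Equation 1: composing the fixed initial pose [R0, T0] of the
# target vehicle with its time-varying predefined motion [R(t), T(t)], plus a
# positioning-error term eps0(t). All names are illustrative assumptions.
import numpy as np

def homogeneous(R: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = T
    return H

def target_motion(R0, T0, R_t, T_t, eps0_t):
    """H(t) = [R0 T0; 0 1] @ [R(t) T(t); 0 1] + eps0(t)  (Equation 1)."""
    return homogeneous(R0, T0) @ homogeneous(R_t, T_t) + eps0_t

# Example: identity initial pose, a 1 m forward step at time t, zero positioning error.
H_t = target_motion(np.eye(3), np.zeros(3), np.eye(3),
                    np.array([1.0, 0.0, 0.0]), np.zeros((4, 4)))
```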

In the simplest case, where the UAV.sub.O hovers (is suspended) at a fixed position in the air and visually tracks the movement of the UAV.sub.T, and assuming that the distance between the UAVs is large, the UAV.sub.O observes the movement of the UAV.sub.T as a point on the ray connecting the instantaneous position of the UAV.sub.T and the camera center of the UAV.sub.O. The trajectory (P(t)) on the camera plane is formed as the projection of the three-dimensional motion (H(t)) of the UAV.sub.T. As specified in Equation 2, this projection depends on the camera calibration matrix (K), the relative position between the UAVs ([R.sub.TG, T.sub.TG]) and the three-dimensional motion (H(t)) of the UAV.sub.T. For example, at different ([R.sub.TG, T.sub.TG]) values, when the UAV.sub.T draws eight-shaped trajectories of the same size, the projections of these trajectories on the camera will be different (see FIGS. 2A-2B). The recovery of the relative position between the UAVs ([R.sub.TG, T.sub.TG]) is in principle obtained by back-projecting this trajectory into three-dimensional space (see Equation 3).

[00002] $P(t) = \pi(K, R_{TG}, T_{TG}, H(t))$ (Equation 2) $\quad [R_{TG}, T_{TG}] = f(P(t))$ (Equation 3)
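The following is a hedged sketch of the projection in Equation 2 under a simple pinhole camera model; the calibration matrix, relative pose and trajectory values are illustrative assumptions, not values from the specification.

```python
# Sketch of Equation 2: projecting the target's 3-D trajectory onto the observer's
# image plane through the camera calibration matrix K and the relative pose
# [R_TG, T_TG]. Numeric values are illustrative only.
import numpy as np

def project_trajectory(K, R_TG, T_TG, points_3d):
    """Return P(t): pixel coordinates of 3-D trajectory points (N x 3) under a pinhole model."""
    cam = (R_TG @ points_3d.T).T + T_TG          # transform into the observer camera frame
    uvw = (K @ cam.T).T                          # apply the calibration matrix
    return uvw[:, :2] / uvw[:, 2:3]              # perspective division -> N x 2 pixel coordinates

# Example: a coarse eight-shaped trajectory sampled in front of the camera at 30 m depth.
t = np.linspace(0, 2 * np.pi, 100)
eight = np.stack([np.sin(t), np.sin(t) * np.cos(t), np.full_like(t, 30.0)], axis=1)
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_t = project_trajectory(K, np.eye(3), np.zeros(3), eight)
```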

[0041] The movement of the UAV.sub.T is known locally only up to an error (ε.sub.0) that depends on its own positioning system. For example, if the UAV.sub.T uses the Visual Simultaneous Localization and Mapping (Visual SLAM) method as its positioning system, ε.sub.0 varies and accumulates over time. Ideally, the conversion function from the trajectory in the image plane to three-dimensional space, defined in Equation 3, is realized analytically. However, the characteristics of ε.sub.0 may make it analytically impossible to perform this conversion. The proposed method is based on realizing this conversion with a data-driven method.

[0042] In the training phase, both UAV.sub.T and UAV.sub.O use GNSS as the positioning system, so that the ground truth value of the relative position between both UAVs is known. While the UAV.sub.T performs its predefined motion (H(t)), the UAV.sub.O records the trajectory (P(t)) on the image plane corresponding to this motion. With the data collected in this way, a deep learning model is trained to estimate the conversion function f(P(t)) (see Equation-3) between each H(t) and P(t).

[0043] The training phase is carried out in the following three sub-stages: [0044] 1. Data collection: At this stage, the UAV.sub.T performs its predefined motion (H(t)) at different relative poses, while the UAV.sub.O records the trajectory (P(t)) on the image plane corresponding to this motion. In this way, both trajectory data and ground truth data are recorded for many different relative pose values. [0045] 2. Offline Deep Learning Model Training: Using the data collected in Stage 1, the deep learning model used in the relative pose estimation method for multiple unmanned aerial, naval and ground vehicles is trained. The model trained at this stage is used as the base model for all configurations (different platforms, different cameras, etc.). [0046] 3. Online Deep Learning Model Training: In this phase, data is collected in a known environment with the UAV and camera configuration to be used in the mission, and the model parameters trained in Stage 2 are updated specifically for the current configuration. The amount of data collected at this stage may be less than the amount of data collected in Stage 1.
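A minimal sketch of this two-stage supervised training (offline base training followed by online, configuration-specific fine-tuning) is given below; the model class, dataset contents, batch size, epoch counts and learning rates are assumptions for illustration only.

```python
# Hedged sketch of the training sub-stages: pairs of (image-plane trajectory,
# ground-truth relative pose) collected with GNSS are used to train a base model
# offline, which is then fine-tuned online on a smaller, mission-specific dataset.
import torch
from torch.utils.data import DataLoader

def train(model, dataset, epochs, lr):
    """Generic supervised loop: sequence of image-plane points -> 6-DOF relative pose."""
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for traj, pose in loader:              # traj: (B, T, 2), pose: (B, 6)
            loss = torch.nn.functional.mse_loss(model(traj), pose)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

# Stage 1 collects offline_dataset with GNSS ground truth; Stage 2 trains the base model;
# Stage 3 fine-tunes it on a smaller online_dataset with a lower learning rate, e.g.:
# base_model = train(pose_model, offline_dataset, epochs=100, lr=1e-3)
# mission_model = train(base_model, online_dataset, epochs=10, lr=1e-4)
```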

[0047] In the proposed method, no parametric calibration such as camera calibration etc. is required beforehand. During a training flight in the test area, aircraft perform certain predefined movements and collect data to learn the configuration between themselves. Using this data, a deep learning model is trained to generate 6 degrees of freedom position information between the air vehicles. Air vehicles then perform their missions using this model. Thus, the proposed method is generalizable for different types of unmanned aerial vehicles and different flight configurations.

[0048] The predefined movement (trajectory) to be performed by the UAV.sub.T can be any trajectory (size, geometry, etc.) and is configured so that it can be easily distinguished at any distance and angle on the camera of the UAV.sub.O. In some cases, at two different relative positions ([R.sub.TG, T.sub.TG]) between the UAVs, the projection of the trajectory drawn by the UAV.sub.T on the UAV.sub.O's camera may be the same. The predefined movement (trajectory) to be performed by the UAV.sub.T is therefore selected in a structure that eliminates such ambiguity. For example, when an eight-shaped trajectory is considered, the projections on the UAV.sub.O's camera of the eight shapes drawn by the UAV.sub.T at two different yaw angles would be the same. To prevent this situation, drawing a second, rotated eight shape after the first eight shape can eliminate this uncertainty (see FIG. 3).
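The following is a hedged sketch of such an ambiguity-breaking predefined movement, a figure-eight followed by a rotated second figure-eight as in FIG. 3; the trajectory size, sampling and rotation angle are illustrative assumptions.

```python
# Sketch of the ambiguity-breaking predefined movement: an eight-shaped trajectory
# followed by a second eight rotated about the vertical axis (cf. FIG. 3).
import numpy as np

def figure_eight(radius=5.0, samples=200):
    """Waypoints of a planar eight-shaped trajectory in the x-y plane."""
    t = np.linspace(0, 2 * np.pi, samples)
    return np.stack([radius * np.sin(t),
                     radius * np.sin(t) * np.cos(t),
                     np.zeros_like(t)], axis=1)

def predefined_movement(yaw_offset_deg=45.0):
    """First eight, then a second eight rotated by yaw_offset_deg to remove yaw ambiguity."""
    first = figure_eight()
    a = np.deg2rad(yaw_offset_deg)
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    second = first @ Rz.T
    return np.concatenate([first, second], axis=0)

waypoints = predefined_movement()   # waypoints flown by the target vehicle
```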

[0049] In the relative pose determination method, the flowchart of the relative pose calculation algorithm, which uses the previously trained deep learning model and runs on the UAV.sub.O while the UAV.sub.T performs its defined movement during the mission, is shown in FIG. 4. Accordingly, the UAV.sub.O performs the relative pose calculation by carrying out the following steps in order. [0050] 1. The flow chart of the Air Vehicle Detection Block is shown in FIG. 5. Accordingly, an image frame is taken from the front-facing camera and an attempt is made to detect the UAV.sub.T within this frame. This step is repeated until the UAV.sub.T is detected; once the UAV.sub.T is detected, Step 2 (Air Vehicle Tracking) is started and the bounding box information is provided. The bounding box information includes the positions, in image coordinates, of the corner points and the center of the rectangle surrounding the detected aircraft in the image. In an alternative embodiment of the invention, said bounding box information is the corner point, width and height information. Using the bounding box information, the target is tracked on the image with tracking algorithms such as mean shift or Deep SORT. [0051] 2. The Air Vehicle Tracking Block utilizes the bounding box information received from the Air Vehicle Detection Block to track the UAV on the image. It continues tracking by updating the bounding box information until the tracking is lost. If the tracking is lost, the method returns to Step 1 (Air Vehicle Detection Block). The bounding box information maintained as a result of the tracking of the UAV.sub.T is forwarded to the Feature Extraction step. [0052] 3. In the feature extraction process, the bounding box information of the UAV.sub.T is used to extract the features to be provided to the deep learning block. When extracting the features, the relationships (Spline, Bézier curve, etc.) between the positions of the center point of the bounding box in the current and previous images (consecutive points) are taken into account. Depending on the deep learning method applied in Step 4, an end-to-end model can also be created by moving the feature extraction step into the deep learning block. [0053] 4. In the proposed method, a deep learning model based on sequential data is used to calculate the relative pose transformation between aircraft by taking the extracted features as input. This model calculates the relative pose transformation between the UAVs by taking the projection (P(t)) of the motion of the UAV.sub.T on the image plane sequentially. In addition, it is also able to detect which of the predefined movement patterns the UAV.sub.T performs and the start/end moments of the movement. As soon as the movement is complete (or as soon as the relative pose can be estimated), the produced result is usable. This deep learning model may consist of a convolutional neural network (CNN) layer followed by an LSTM layer.
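The following is a hedged sketch of such a sequential model, a 1-D convolution over the bounding-box-center sequence followed by an LSTM and a 6-DOF regression head; the class name and layer sizes are assumptions, not values from the specification.

```python
# Sketch of the sequential pose-regression model in Step 4: CNN layer over the
# tracked bounding-box centers, LSTM over the resulting sequence, 6-DOF output.
import torch
import torch.nn as nn

class RelativePoseNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(in_channels=2, out_channels=32, kernel_size=5, padding=2)
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 6)        # 3 translation + 3 rotation parameters

    def forward(self, centers):                 # centers: (B, T, 2) bounding-box centers
        x = self.conv(centers.transpose(1, 2))  # (B, 32, T)
        x = torch.relu(x).transpose(1, 2)       # (B, T, 32)
        _, (h, _) = self.lstm(x)                # final hidden state summarizes the trajectory
        return self.head(h[-1])                 # (B, 6) relative pose estimate

# Example: a batch of 4 tracked trajectories, each 120 frames long.
pose = RelativePoseNet()(torch.randn(4, 120, 2))
```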

[0054] The proposed method can be used in two different ways depending on whether there is communication between the UAV.sub.O and the UAV.sub.T. In case there is communication between the UAVs, the UAV.sub.T can share the type and the starting/finishing times of the movement with the UAV.sub.O. In addition, the data shared between the UAVs facilitates the detection of the UAV.sub.T in the Air Vehicle Detection Block. In the case of communication, the difficulty of the problem and the complexity of the method are somewhat reduced. However, it is considered important that the method can also work in cases where the communication system does not function due to jamming or similar interference in the environment. In the absence of communication between the UAVs, the observer UAV also detects the type of pattern performed by the target UAV and the start/end of the movement. In this case, the complexity of the problem and of the method to be used is higher.
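As an illustration of the information that may be shared when a communication link is available, a minimal sketch of such an announcement message is given below; the field names and values are assumptions, not part of the specification.

```python
# Sketch of the optional message shared over the communication link: the target
# announces which predefined pattern it flies and when, so the observer does not
# have to infer them visually. Field names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MovementAnnouncement:
    vehicle_id: str        # identifier of the target vehicle performing the movement
    pattern: str           # which predefined trajectory is flown, e.g. "double_eight"
    start_time: float      # mission clock time at which the movement begins (s)
    end_time: float        # mission clock time at which the movement ends (s)

# Without communication, the observer's model must infer the pattern type and the
# start/end moments from the image-plane trajectory alone.
msg = MovementAnnouncement("uav_t_01", "double_eight", 120.0, 180.0)
```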

[0055] In the proposed method, the UAV.sub.T performs a predefined movement (H(t)) at certain times. This movement can be performed periodically at a certain frequency or in case of a specific need (for example, when the positioning accuracy of aircraft falls below a certain value).

[0056] In the proposed method, all air vehicles used during the mission have the same configuration (same processor and sensor units). Thus, all air vehicles can fulfil the role of both UAV.sub.O and UAV.sub.T. In this way, it is considered that the proposed method can be used in any type of UAV team/swarm with two or more members.
