UNMANNED AERIAL VEHICLE AND LOCALIZATION METHOD FOR UNMANNED AERIAL VEHICLE
20230314548 · 2023-10-05
Inventors
- Lihua Xie (Singapore, SG)
- Pham Nhat Thien Minh NGUYEN (Singapore, SG)
- Muqing CAO (Singapore, SG)
- Shenghai YUAN (Singapore, SG)
CPC classification
B64U2201/00
PERFORMING OPERATIONS; TRANSPORTING
G01S5/0264
PHYSICS
B64U50/13
PERFORMING OPERATIONS; TRANSPORTING
B64U10/14
PERFORMING OPERATIONS; TRANSPORTING
B64U2101/30
PERFORMING OPERATIONS; TRANSPORTING
G01S13/874
PHYSICS
G01S5/14
PHYSICS
B64C39/024
PERFORMING OPERATIONS; TRANSPORTING
G01S13/765
PHYSICS
International classification
B64U10/14
PERFORMING OPERATIONS; TRANSPORTING
Abstract
An unmanned aerial vehicle and a localization method for an unmanned aerial vehicle are described. In an embodiment, an unmanned aerial vehicle comprises: a first ultra-wide band node; a second ultra-wide band node; and a localization processor configured to use signals received at the first ultra-wide band node and the second ultra-wide band node from a plurality of anchor nodes located within an environment to estimate a pose state of the unmanned aerial vehicle, wherein the first ultra-wide band node and the second ultra-wide band node are arranged on the unmanned aerial vehicle at positions offset from one another.
Claims
1. An unmanned aerial vehicle comprising: a first ultra-wide band node; a second ultra-wide band node; and a localization processor configured to use signals received at the first ultra-wide band node and the second ultra-wide band node from a plurality of anchor nodes located within an environment to estimate a pose state of the unmanned aerial vehicle, wherein the first ultra-wide band node and the second ultra-wide band node are arranged on the unmanned aerial vehicle at positions offset from one another.
2. An unmanned aerial vehicle according to claim 1, further comprising an inertial measurement unit configured to detect linear acceleration and angular velocity of the unmanned aerial vehicle and wherein the localization processor is further configured to use measurements from the inertial measurement unit to predict the pose state of the unmanned aerial vehicle.
3. An unmanned aerial vehicle according to claim 2, wherein the localization processor is further configured to pre-integrate the measurements from the inertial measurement unit to obtain a set of pre-integrated inertial measurement unit measurements and to use the set of pre-integrated inertial measurement unit measurements to estimate the pose state of the unmanned aerial vehicle.
4. An unmanned aerial vehicle according to claim 2, wherein the localization processor is further configured to compare a predicted distance calculated from the pose state of the unmanned aerial vehicle determined from the inertial measurement unit measurements with distance measurements by the first ultra-wideband node and the second ultra-wideband node and thereby identify outliers in the measurements by the first ultra-wideband node and the second ultra-wideband node.
5. An unmanned aerial vehicle according to claim 2, further comprising an on-board self-localization module configured to generate on-board self-localization data.
6. An unmanned aerial vehicle according to claim 5, wherein the on-board self-localization module comprises a lidar sensor and the on-board self-localization data comprises laser scan data.
7. An unmanned aerial vehicle according to claim 5, wherein the on-board self-localization module comprises a camera and the on-board self-localization data comprises image data.
8. An unmanned aerial vehicle according to claim 5, wherein the localization processor is further configured to estimate the pose state of the unmanned aerial vehicle using measurement data comprising measurements by the first ultra-wideband node and the second ultra-wideband node, the measurements from the inertial measurement unit and the on-board self-localization data by minimizing a cost function between the measurement data and a model predicted pose state.
9. An unmanned aerial vehicle according to claim 8, wherein the on-board self-localization module comprises a lidar sensor and the on-board self-localization data comprises laser scan data and the localization processor is further configured to use the on-board self-localization data to extract lidar features from the laser scan data and match the lidar features with a local map to obtain a set of feature-map-matching coefficients, wherein the cost function depends on the set of feature-map-matching coefficients.
10. An unmanned aerial vehicle according to claim 8, wherein the on-board self-localization module comprises a camera and the on-board self-localization data comprises image data and the localization processor is further configured to use the on-board self-localization data to extract a set of visual features from the image data, wherein the cost function depends on the set of visual features.
11. An unmanned aerial vehicle according to claim 8, wherein the localization processor is further configured to estimate the pose state of the unmanned aerial vehicle using measurement data corresponding to a sliding time window.
12. A localization method for an unmanned aerial vehicle, the unmanned aerial vehicle comprising: a first ultra-wide band node; and a second ultra-wide band node, wherein the first ultra-wide band node and the second ultra-wide band node are arranged on the unmanned aerial vehicle at positions offset from one another, the method comprising using signals received at the first ultra-wideband node and the second ultra-wideband node from a plurality of anchor nodes located within an environment to estimate a pose state of the unmanned aerial vehicle.
13. A localization method according to claim 12, wherein the unmanned aerial vehicle further comprises: an inertial measurement unit configured to detect linear acceleration and angular velocity of the unmanned aerial vehicle, and the method further comprises using measurements from the inertial measurement unit to predict the pose state of the unmanned aerial vehicle.
14. A localization method according to claim 13, further comprising pre-integrating the measurements from the inertial measurement unit to obtain a set of pre-integrated inertial measurement unit measurements and using the set of pre-integrated inertial measurement unit measurements to estimate the pose state of the unmanned aerial vehicle.
15. A localization method according to claim 13, further comprising comparing the predicted distance from the pose prediction of the unmanned aerial vehicle determined from measurements from the inertial measurement unit with the distance measurements by the first ultra-wideband node and the second ultra-wideband node and thereby identifying outliers in the measurements by the first ultra-wideband node and the second ultra-wideband node.
16. A localization method according to claim 13, further comprising receiving on-board self-localization data generated by an on-board self-localization module of the unmanned aerial vehicle.
17. (canceled)
18. (canceled)
19. A localization method according to claim 16, further comprising estimating the pose state of the unmanned aerial vehicle using measurement data comprising measurements by the first ultra-wideband node and the second ultra-wideband node, the measurements from the inertial measurement unit and the on-board self-localization data by minimizing a cost function between the measurement data and a model predicted pose state.
20. A localization method according to claim 19, wherein the on-board self-localization data comprises laser scan data, and the method further comprises extracting lidar features from the laser scan data and matching the lidar features with a local map and thereby obtaining a set of feature-map-matching coefficients, wherein the cost function depends on the set of feature-map-matching coefficients.
21. A localization method according to claim 19, wherein the on-board self-localization data comprises image data, and the method further comprises extracting visual features from the image data and thereby obtaining a set of visual features, wherein the cost function depends on the set of visual features.
22. (canceled)
23. A non-transitory computer readable medium storing processor executable instructions which when executed on a processor cause the processor to carry out a method according to claim 12.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] In the following, embodiments of the present invention will be described as non-limiting examples with reference to the accompanying drawings.
DETAILED DESCRIPTION
[0046] Four ultra-wide band antenna nodes are arranged on arms extending from the body 101 of the unmanned aerial vehicle, as shown in the drawings.
[0047] In the unmanned aerial vehicle shown in the drawings, the ultra-wide band antenna nodes are arranged at positions offset from one another.
[0048] The ultra-wide band antenna nodes are configured to send and receive signals from anchor nodes located at fixed locations in the environment in which the unmanned aerial vehicle operates. By sending and receiving signals from these anchor nodes, the location and orientation of the unmanned aerial vehicle can be estimated.
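The way ranges to multiple fixed anchors constrain the vehicle's position can be illustrated with a minimal trilateration sketch in Python. This is an illustrative Gauss-Newton least-squares solver under ideal, noise-free assumptions, not the estimator described later in this disclosure (which additionally fuses IMU and on-board self-localization data); the function name and iteration limits are assumptions.

```python
import numpy as np

def trilaterate(anchors, ranges, x0=None, iters=20):
    """Estimate a 3D position from ranges to known anchor positions by
    Gauss-Newton least squares. Illustrative only: the estimator in this
    disclosure additionally fuses IMU and on-board self-localization data."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    # Start from the anchor centroid if no initial guess is given.
    x = np.mean(anchors, axis=0) if x0 is None else np.asarray(x0, float)
    for _ in range(iters):
        diffs = x - anchors                    # (N, 3) vectors to anchors
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        r = dists - ranges                     # range residuals
        J = diffs / dists[:, None]             # Jacobian of ||x - a_i||
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x
```

With four non-coplanar anchors and exact ranges, the iteration converges to the true position from a rough initial guess such as the anchor centroid.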
[0050] It is noted that the scheme shown in the drawings is exemplary; other arrangements are possible.
[0053] The flight controller 402, the memory 406 and the localization processor 408 may be implemented as a small form factor computer having a processor and a memory storing computer executable instructions which are executed on the processor to carry out the methods described in this disclosure. The memory 406 may be referred to in some contexts as computer readable storage media and/or non-transitory computer readable media. Alternatively, the computer program modules described herein may be implemented as one or more hardware modules (such as field-programmable gate array(s) or application-specific integrated circuit(s)) comprising circuitry which implements functionality equivalent to that implemented in software.
[0055] In step 502, the first ultra-wide band node 412 and the second ultra-wide band node 414 of the unmanned aerial vehicle 400 receive signals from anchor nodes. From these signals, the localization processor 408 determines the range to each of the anchor nodes.
[0056] In step 504, the localization processor 408 of the unmanned aerial vehicle 400 estimates the pose state of the unmanned aerial vehicle. The pose state comprises the location and orientation of the unmanned aerial vehicle relative to the anchor nodes in the environment.
[0058] As shown in the drawings, the measurement sets and the pose predictions $\hat{\chi}_m$ from previous stages are maintained in key frame management 640.
[0059] As described above with reference to the drawings.
[0060] After the self-localization stage, the UWB, IMU and OSL data go through multiple stages before they are fused together in the Optimization-based Sensor Fusion (OSF) algorithm. The system is synchronized by the lidar messages, whose time stamps are used to determine the time steps from time $t_w$ to $t_k$. This synchronization process is described in more detail below.
[0061] The UWB data 610 is subjected to preliminary checks 612 to eliminate bad measurements. These checks compare the measurement's signal-to-noise ratio (SNR), leading-edge-detection quality and rate-of-change with user-defined thresholds. Only measurements that meet these thresholds are passed to the buffer in a bundling 614 stage for subsequent stages. UWB measurements that arrive in the interval $(t_{m-1}, t_m]$, $w+1 \le m \le k$, are grouped into a set. Finally, this set is put through a final outlier check 616 to see whether any measurement in it is an outlier. This is done by comparing each measurement with the IMU-predicted range value, as shown in the drawings.
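The screening described above can be sketched as follows. The field names, threshold values and the 0.5 m outlier gate are hypothetical placeholders for the user-defined thresholds mentioned in the text.

```python
def passes_preliminary_checks(meas, min_snr=15.0, min_le_quality=0.5,
                              max_rate=5.0):
    """Preliminary screening of one UWB range measurement (sketch).
    `meas` is a dict with keys 'range', 'snr', 'le_quality', 'rate';
    the thresholds here are hypothetical placeholders."""
    return (meas['snr'] >= min_snr
            and meas['le_quality'] >= min_le_quality
            and abs(meas['rate']) <= max_rate)

def is_outlier(measured_range, imu_predicted_range, gate=0.5):
    """Final outlier check: reject a range that deviates from the
    IMU-predicted range by more than a (hypothetical) gate in metres."""
    return abs(measured_range - imu_predicted_range) > gate

def filter_bundle(bundle, imu_predicted):
    """Keep measurements passing both the preliminary checks and the
    IMU-predicted-range outlier check (bundling plus outlier stages)."""
    return [m for m, d in zip(bundle, imu_predicted)
            if passes_preliminary_checks(m)
            and not is_outlier(m['range'], d)]
```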
[0063] Returning now to the drawings:
[0064] For the lidar point cloud data 630, plane and edge features are extracted using the common smoothness criteria of LOAM systems in a feature extraction 632 step. The motion distortion in these features is compensated by the IMU-propagated states, and the features are transformed to the inertial frame in the transform and deskew step 634. These features are then matched with the local map 642, built from marginalized point clouds, in an association 644 step to obtain the set of feature-map-matching (FMM) coefficients, which are used to create the lidar factors in the final cost function calculated in the local optimization algorithm 626.
[0065] The feature-map-matching coefficients are calculated using the following algorithm:
TABLE-US-00001
Input:  F_m = (F_m^p, F_m^e)   (plane and edge features at time t_m)
        M_w = (M_w^p, M_w^e)   (plane and edge points of the local map)
        T̂_m                     (current pose estimate)
Output: C_m = {C_m^1, C_m^2, ...}, with C_m^i = (^m f^i, n^i, c^i)
for each f^i ∈ F_m do:
    compute ^m f^i from f^i and T̂_m;
    if f^i ∈ F_m^p then
        N = KNN(f^i, M_w^p);
        fit a plane to N to obtain the coefficients (n^i, c^i);
    else if f^i ∈ F_m^e then
        N = KNN(f^i, M_w^e);
        fit an edge to N to obtain the coefficients (n^i, c^i);
    end
end
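The plane-feature branch of the algorithm above can be sketched in Python. The brute-force nearest-neighbour search and SVD plane fit below follow standard LOAM practice (fitting a plane n·x + c = 0 to the neighbours); they are assumptions standing in for the unspecified search and fitting routines.

```python
import numpy as np

def knn(point, cloud, k=5):
    """Brute-force k-nearest-neighbour search (illustrative)."""
    d = np.linalg.norm(cloud - point, axis=1)
    return cloud[np.argsort(d)[:k]]

def fit_plane(pts):
    """Fit n^T x + c = 0 to points by SVD; returns (n, c) with ||n|| = 1."""
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]                    # normal = direction of least spread
    return n, -float(n @ centroid)

def fmm_coefficients(features, T_hat, map_planes, k=5):
    """For each plane feature, transform it with the pose estimate
    T_hat = (R, p), find its neighbours in the map plane points, and fit
    a plane to obtain the FMM coefficients (f_m, n, c)."""
    R, p = T_hat
    coeffs = []
    for f in features:
        f_m = R @ f + p
        neigh = knn(f_m, map_planes, k)
        n, c = fit_plane(neigh)
        coeffs.append((f_m, n, c))
    return coeffs
```

The resulting (f, n, c) triples feed directly into the point-to-plane lidar residual used in the cost function below.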
[0066] For the camera images 650, binary robust independent elementary features (BRIEF features) are extracted in a feature extraction and sync step 652, and the latest pose estimates are then used to triangulate these features for depth initialization in a triangulation step 645. From this, a set of visual feature coefficients is obtained, which is used to construct the visual factors in the final cost function used in the local optimization algorithm 626.
[0067] Specifically, the visual factor for feature $i$ observed in camera frames $a$ and $b$ is defined in terms of the following quantities:
[0068] ${}^{a}u^{i}$ and ${}^{b}u^{i}$ are the projected coordinates of the feature in camera frames $a$ and $b$, ${}^{a}f^{i}$ and ${}^{b}f^{i}$ are the feature's 3D coordinates w.r.t. camera frames $a$ and $b$, and $\lambda^{i}$ is the inverse depth of visual feature $i$, which is associated only with the camera frame that first observes the feature.
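A minimal sketch of the corresponding inverse-depth reprojection residual, using normalised pinhole coordinates. The function names and the (R, p) camera-to-world pose convention are assumptions.

```python
import numpy as np

def project(f):
    """pi: 3D point in a camera frame -> normalised image coordinates."""
    return f[:2] / f[2]

def back_project(u, inv_depth):
    """pi^{-1}: normalised coordinates plus inverse depth -> 3D point."""
    return np.array([u[0], u[1], 1.0]) / inv_depth

def visual_residual(u_a, u_b, inv_depth, pose_a, pose_b):
    """Residual between feature i's observation in frame b and its
    prediction from the frame-a observation, its inverse depth and the
    two camera poses. Each pose = (R, p) maps camera to world frame."""
    R_a, p_a = pose_a
    R_b, p_b = pose_b
    f_a = back_project(u_a, inv_depth)   # point in camera frame a
    f_w = R_a @ f_a + p_a                # point in world frame
    f_b = R_b.T @ (f_w - p_b)            # point in camera frame b
    return project(f_b) - u_b
```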
[0069] The cost function is then optimized to provide the estimate of the robot states in the sliding window. The latest state estimate in the sliding window is combined with the latest IMU measurements in the buffer, namely those left over from the last extraction plus those that newly arrived during the optimization process, to calculate the high-rate pose estimate. This high-rate pose estimate is also used to perform the outlier check on the UWB measurements mentioned earlier.
[0070] After the optimization has been completed, the sliding window is slid forward, the oldest state estimate and its associated measurements are abandoned, and a new time step is registered when the conditions mentioned earlier are met.
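The sliding-window bookkeeping can be sketched as follows; in the full system, the information of the dropped frame is marginalised into a prior rather than simply discarded. The class and method names are hypothetical.

```python
from collections import deque

class SlidingWindow:
    """Fixed-size sliding window of (time, state, measurements) frames.
    Registering a new time step beyond the window size drops the oldest
    frame (whose information would be marginalised in the full system)."""
    def __init__(self, size):
        self.size = size
        self.frames = deque()

    def register(self, t, state, measurements):
        """Append a new frame; return the dropped frame, if any."""
        self.frames.append((t, state, measurements))
        if len(self.frames) > self.size:
            return self.frames.popleft()
        return None
```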
[0071] The drawings illustrate the time steps $t_k$, $t_{k+1}$ and $t_{k+2}$. The UWB measurements between the UWB nodes 802 of the UAV and the anchor nodes are expressed as $\hat{d}_{k+3}^{1}$, where the superscript number indicates a label of the anchor node.
[0072] We denote all of the state estimates on the sliding window at time $t_k$ as $\hat{\mathcal{X}}_k$ as follows:

$\hat{\mathcal{X}}_k = (\hat{\mathcal{X}}_w, \ldots, \hat{\mathcal{X}}_k, \hat{\Lambda}_k)$, $\quad \hat{\mathcal{X}}_m = (\hat{q}_m, \hat{p}_m, \hat{v}_m, \hat{b}_m^{\omega}, \hat{b}_m^{a}) \in SE(3) \times \mathbb{R}^{12}$, $\quad \hat{\Lambda}_k = (\hat{\lambda}^1, \hat{\lambda}^2, \ldots, \hat{\lambda}^{N_V^k})$,

where $\hat{q}_k$ is the orientation quaternion estimate, $\hat{p}_k$ and $\hat{v}_k$ are the robot's position and velocity estimates w.r.t. the world frame, $\hat{b}_k^{\omega}$ and $\hat{b}_k^{a}$ are the estimates of the gyroscope and accelerometer biases of the IMU, and $\hat{\lambda}^1, \hat{\lambda}^2, \ldots, \hat{\lambda}^{N_V^k}$ are the inverse-depth estimates of the visual features in the sliding window.
[0073] Hence, we can construct a cost function based on the observations from the UWB, IMU and OSL measurements. The cost function can be constructed at each time step as follows:

$f(\hat{\mathcal{X}}_k) = \sum_{m=w+1}^{k} \lVert r_I(\cdot) \rVert^2_{P_I^{-1}} + \sum_{m=w+1}^{k} \sum_{i=1}^{N_U^m} \lVert r_U(\cdot) \rVert^2_{P_U^{-1}} + \sum_{m=w+1}^{k} \lVert r_L(\cdot) \rVert^2_{P_L^{-1}} + \sum_{i=1}^{N_V^k} \lVert r_V(\cdot) \rVert^2_{P_V^{-1}}$,

[0074] where $r_I(\cdot)$, $r_U(\cdot)$, $r_L(\cdot)$, $r_V(\cdot)$ are the residuals constructed from the IMU, UWB and OSL measurements, $P_I$, $P_U$, $P_L$, $P_V$ are the covariance matrices of the measurement errors, and $N_U^m$ and $N_V^k$ are the numbers of UWB and visual measurements in the corresponding intervals. The cost function above summarizes the coupling of all the states and the observations. Once the cost function has been constructed, we can use open-source non-linear solvers such as g2o or Ceres to execute the optimization.
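As a miniature stand-in for a g2o- or Ceres-style solver, the following Gauss-Newton sketch minimises a sum of covariance-weighted squared residuals, here UWB range factors plus a single prior factor standing in for the IMU and OSL factors; all names and noise values are assumptions.

```python
import numpy as np

def fuse(anchors, ranges, sigma_u, prior, sigma_p, iters=30):
    """Minimise sum ||r_U/sigma_u||^2 + ||(x - prior)/sigma_p||^2 by
    Gauss-Newton: a miniature stand-in for optimisation-based sensor
    fusion with solvers such as g2o or Ceres."""
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    x = np.asarray(prior, float).copy()
    for _ in range(iters):
        diffs = x - anchors
        dists = np.linalg.norm(diffs, axis=1)
        # Stack whitened range residuals and the prior residual.
        r = np.concatenate([(dists - ranges) / sigma_u,
                            (x - prior) / sigma_p])
        J = np.vstack([diffs / (dists[:, None] * sigma_u),
                       np.eye(3) / sigma_p])
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x += dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x
```

With a tight range noise and a loose prior, the solution is pulled almost entirely to the position explained by the ranges, illustrating how the covariance weights arbitrate between factors.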
[0075] Below, we explain in detail the process of constructing each factor in the cost function. Essentially, for each measurement, we find an observation model that links the actual measurement with the states. We then calculate the residual (loosely speaking, the difference) between the measurement and the model-predicted value. Next, we derive the covariance matrix of the measurement errors, to properly apply the weighting among the factors. Finally, to facilitate real-time optimization, the Jacobians of each factor have to be analytically derived. [0076] i. Ranging factor:
[0077] At each time step $t_m$ we have the following observation from a UAV ranging node:

$\mathcal{U}_m^i = (\hat{d}_m^i, x_m^i, y_m^i, \tau_m^i, t_{m-1}, t_m)$,

where $\hat{d}_m^i$ is the distance measurement, $x_m^i$ is the position of the UWB anchor w.r.t. the world frame, $y_m^i$ is the coordinate of the UWB ranging node in the body frame, and $\tau_m^i$, $t_{m-1}$, $t_m$ are the time stamps defined in the drawings.

[0078] The residual of a range measurement is calculated by taking the difference between the distance estimate $\hat{d}_m$ and the distance measurement $\hat{d}_m^i$:

$r_U(\hat{\mathcal{X}}_{m-1}, \hat{\mathcal{X}}_m, \mathcal{U}_m^i) = \hat{d}_m - \hat{d}_m^i$,

where $\hat{d}_m$ represents the estimated distance from the ranging node, at its world-frame position $\hat{p} + \hat{R}\,y_m^i$ interpolated at the ranging time, to the anchor position $x_m^i$. The covariance of the measurement error is chosen as $P_U \equiv \sigma_U^2$. [0079] ii. OSL factors:
[0080] For lidar factors, the residual and covariance constructed from each feature's FMM coefficients $\mathcal{L}_m^i = (f^i, n^i, c^i)$ are defined as

$r_L(\hat{\mathcal{X}}_m, \mathcal{L}_m^i) = (n^i)^T\big[\hat{R}_m\,{}^m f^i + \hat{p}_m\big] + c^i$, $\quad P_L \equiv \sigma_L^2$.
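The ranging and lidar residuals can be transcribed almost literally into Python; this sketch omits the pose interpolation over the ranging interval, and the function names are assumptions.

```python
import numpy as np

def range_residual(p_hat, R_hat, y_node, x_anchor, d_meas):
    """UWB ranging residual: predicted node-to-anchor distance minus the
    measured distance (pose interpolation over the ranging interval is
    omitted in this sketch)."""
    node_world = R_hat @ y_node + p_hat   # ranging node in world frame
    return float(np.linalg.norm(node_world - x_anchor) - d_meas)

def lidar_residual(R_hat, p_hat, f, n, c):
    """Lidar point-to-plane residual: r_L = n^T (R_hat f + p_hat) + c."""
    return float(n @ (R_hat @ f + p_hat) + c)
```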
[0081] For visual feature factors, the residual and covariance from each feature's observation are defined as

$r_V(\hat{\mathcal{X}}_a, \hat{\mathcal{X}}_b, \hat{\lambda}^i, \mathcal{V}^i) = \pi\Big({}^b\hat{f}^i\big({}^W\hat{f}^i(\pi^{-1}(\hat{\lambda}^i, {}^a u^i), \hat{p}_a, \hat{R}_a), \hat{p}_b, \hat{R}_b\big)\Big) - {}^b u^i$,

with covariance $P_V$ chosen analogously to the other factors.
iii. IMU pre-integration factors: [0082] The IMU measurements are defined as

$\check{\omega}_t = \omega_t + b_t^{\omega} + \eta_t^{\omega}$,
$\check{a}_t = a_t + R_t\,g + b_t^{a} + \eta_t^{a}$,
$\dot{b}_t^{\omega} = \eta_t^{b\omega}$,
$\dot{b}_t^{a} = \eta_t^{ba}$,

where $g = [0, 0, g]^T$ is the gravitational acceleration, $b_t^{\omega}$ and $b_t^{a}$ are respectively the biases of the gyroscope and the accelerometer, and $\eta_t^{\omega}, \eta_t^{a}, \eta_t^{b\omega}, \eta_t^{ba}$ are zero-mean Gaussian noises whose standard deviations are $\sigma_{\eta^{\omega}}, \sigma_{\eta^{a}}, \sigma_{\eta^{b\omega}}, \sigma_{\eta^{ba}}$, respectively.
[0083] Given the noisy IMU measurements $(\check{\omega}_t, \check{a}_t)$ and some nominal bias values $(\bar{b}^{\omega}, \bar{b}^{a})$, we can compute the pre-integrated IMU measurements

$\mathcal{I}_{k+1} = (\check{\alpha}_{k+1}, \check{\beta}_{k+1}, \check{\gamma}_{k+1})$,

where

$\check{\alpha}_{k+1} \triangleq \iint_{t_k}^{t_{k+1}} R(\check{\gamma}_t)(\check{a}_t - \bar{b}^{a})\,dt^2$,
$\check{\beta}_{k+1} \triangleq \int_{t_k}^{t_{k+1}} R(\check{\gamma}_t)(\check{a}_t - \bar{b}^{a})\,dt$,
$\check{\gamma}_{k+1} \triangleq \int_{t_k}^{t_{k+1}} \tfrac{1}{2}\,\check{\gamma}_t \circ (\check{\omega}_t - \bar{b}^{\omega})\,dt$,

in which $R(\check{\gamma}_t)$ is the rotation matrix of the quaternion $\check{\gamma}_t$ and $\circ$ denotes the quaternion product.
[0084] In practice, the above integration can be implemented by zero-order-hold (ZOH) or higher-order methods (Runge-Kutta methods). During the period from $t_k$ to $t_{k+1}$, we have a sequence of $m$ IMU samples $(\check{\omega}_0, \check{a}_0), (\check{\omega}_1, \check{a}_1), \ldots, (\check{\omega}_{m-1}, \check{a}_{m-1})$. Thus, if ZOH is chosen, we use the following recursive rule to update the pre-integrated measurements:

$\check{\alpha}_{n+1} = \check{\alpha}_n + \check{\beta}_n\,\Delta\tau_n + \tfrac{1}{2}\,R(\check{\gamma}_n)(\check{a}_n - \bar{b}^{a})\,\Delta\tau_n^2$,
$\check{\beta}_{n+1} = \check{\beta}_n + R(\check{\gamma}_n)(\check{a}_n - \bar{b}^{a})\,\Delta\tau_n$,
$\check{\gamma}_{n+1} = \check{\gamma}_n \circ \Delta\check{\gamma}_n$, with $\Delta\check{\gamma}_n$ the quaternion increment corresponding to the rotation vector $(\check{\omega}_n - \bar{b}^{\omega})\,\Delta\tau_n$,

where $\tau_n$ is the time stamp of the $n$-th sample, $\tau_0 = t_k$, $\tau_m = t_{k+1}$, and $\Delta\tau_n \triangleq \tau_{n+1} - \tau_n$.
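The ZOH recursion can be sketched as follows, keeping the rotation increment as a matrix via the Rodrigues formula rather than a quaternion; function names are assumptions.

```python
import numpy as np

def rot_exp(w):
    """Rodrigues formula: rotation vector -> rotation matrix."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def preintegrate(omegas, accels, dts, bw, ba):
    """Zero-order-hold pre-integration of an IMU sample sequence into
    (alpha, beta, Gamma): position-, velocity- and rotation-increments.
    The rotation is kept as a matrix here instead of a quaternion."""
    alpha = np.zeros(3)
    beta = np.zeros(3)
    Gamma = np.eye(3)
    for w, a, dt in zip(omegas, accels, dts):
        a_rot = Gamma @ (a - ba)                       # bias-corrected, rotated
        alpha = alpha + beta * dt + 0.5 * a_rot * dt * dt
        beta = beta + a_rot * dt
        Gamma = Gamma @ rot_exp((w - bw) * dt)
    return alpha, beta, Gamma
```

With zero angular rate and a constant specific force, beta grows linearly and alpha quadratically over the integration period, as expected from the continuous-time definitions.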
[0085] The residual of the IMU pre-integration factor is defined as the difference between the pre-integrated measurements $(\check{\alpha}_{k+1}, \check{\beta}_{k+1}, \check{\gamma}_{k+1})$ and their values predicted from the state estimates, and the Jacobians $A$, $B$, $C$ are computed via an iterative scheme.
[0086] The covariance of the IMU pre-integration errors is calculated based on a propagation scheme.
[0088] The methods VINS-Mono, VINS-Fusion, A-LOAM, LIO-SAM and M-LOAM are prior art methods, and the methods labelled VIRAL (horiz. lidar) and VIRAL (2 lidars) correspond to localization methods according to embodiments of the invention using a single horizontal lidar and two lidars respectively. All algorithms are run on an NUC 10 computer with a Core i7 processor. Each method is slightly modified and configured for its best performance with the dataset. Since several lidar-based methods are not designed to incorporate multiple lidar inputs, we also include experiments of VIRAL using only the horizontal lidar for a fairer comparison. The Absolute Trajectory Error (ATE) results are summarized in the drawings.
[0089] To further verify the efficacy of the localization scheme, we develop a platform consisting of a camera system, one IMU, two lidar sensors and two UWB modules. We improve the utility of the UWB sensor by using two antennas mounted at different points on the platform; thus, effectively we have a total of four UAV ranging nodes. The data rates of the stereo image, IMU, lidar 1, lidar 2 and UWB topics are 10 Hz, 400 Hz, 10 Hz, 10 Hz and 69 Hz respectively. A Leica MS60 station is used to provide ground truth for the experiment. All software is run on a small form factor computer with an Intel Core i7-8650U processor.
[0094] From the results in the drawings, it can be seen that the localization methods according to the embodiments compare favourably with the prior art methods.
[0095] It is anticipated that UAVs implementing the methods described above will find applications in aerial surveillance; automated inspection operations; and graphical modelling and structural health monitoring of 3D objects.
[0096] Whilst the foregoing description has described exemplary embodiments, it will be understood by those skilled in the art that many variations of the embodiments can be made within the scope and spirit of the present invention.