UNMANNED AERIAL VEHICLE AND LOCALIZATION METHOD FOR UNMANNED AERIAL VEHICLE

20230314548 · 2023-10-05


    Abstract

    An unmanned aerial vehicle and a localization method for an unmanned aerial vehicle are described. In an embodiment, an unmanned aerial vehicle comprises: a first ultra-wide band node; a second ultra-wide band node; and a localization processor configured to use signals received at the first ultra-wide band node and the second ultra-wide band node from a plurality of anchor nodes located within an environment to estimate a pose state of the unmanned aerial vehicle, wherein the first ultra-wide band node and the second ultra-wide band node are arranged on the unmanned aerial vehicle at positions offset from one another.

    Claims

    1. An unmanned aerial vehicle comprising: a first ultra-wide band node; a second ultra-wide band node; and a localization processor configured to use signals received at the first ultra-wide band node and the second ultra-wide band node from a plurality of anchor nodes located within an environment to estimate a pose state of the unmanned aerial vehicle, wherein the first ultra-wide band node and the second ultra-wide band node are arranged on the unmanned aerial vehicle at positions offset from one another.

    2. An unmanned aerial vehicle according to claim 1, further comprising an inertial measurement unit configured to detect linear acceleration and angular velocity of the unmanned aerial vehicle and wherein the localization processor is further configured to use measurements from the inertial measurement unit to predict the pose state of the unmanned aerial vehicle.

    3. An unmanned aerial vehicle according to claim 2, wherein the localization processor is further configured to pre-integrate the measurements from the inertial measurement unit to obtain a set of pre-integrated inertial measurement unit measurements and to use the set of pre-integrated inertial measurement unit measurements to estimate the pose state of the unmanned aerial vehicle.

    4. An unmanned aerial vehicle according to claim 2, wherein the localization processor is further configured to compare a predicted distance calculated from the pose state of the unmanned aerial vehicle determined from the inertial measurement unit measurements with distance measurements by the first ultra-wideband node and the second ultra-wideband node and thereby identify outliers in the measurements by the first ultra-wideband node and the second ultra-wideband node.

    5. An unmanned aerial vehicle according to claim 2, further comprising an on-board self-localization module configured to generate on-board self-localization data.

    6. An unmanned aerial vehicle according to claim 5, wherein the on-board self-localization module comprises a lidar sensor and the on-board self-localization data comprises laser scan data.

    7. An unmanned aerial vehicle according to claim 5, wherein the on-board self-localization module comprises a camera and the on-board self-localization data comprises image data.

    8. An unmanned aerial vehicle according to claim 5, wherein the localization processor is further configured to estimate the pose state of the unmanned aerial vehicle using measurement data comprising measurements by the first ultra-wideband node and the second ultra-wideband node, the measurements from the inertial measurement unit and the on-board self-localization data by minimizing a cost function between the measurement data and a model predicted pose state.

    9. An unmanned aerial vehicle according to claim 8, wherein the on-board self-localization module comprises a lidar sensor and the on-board self-localization data comprises laser scan data and the localization processor is further configured to use the on-board self-localization data to extract lidar features from the laser scan data and match the lidar features with a local map to obtain a set of feature-map-matching coefficients, wherein the cost function depends on the set of feature-map-matching coefficients.

    10. An unmanned aerial vehicle according to claim 8, wherein the on-board self-localization module comprises a camera and the on-board self-localization data comprises image data and the localization processor is further configured to use the on-board self-localization data to extract a set of visual features from the image data, wherein the cost function depends on the set of visual features.

    11. An unmanned aerial vehicle according to claim 8, wherein the localization processor is further configured to estimate the pose state of the unmanned aerial vehicle using measurement data corresponding to a sliding time window.

    12. A localization method for an unmanned aerial vehicle, the unmanned aerial vehicle comprising: a first ultra-wide band node; and a second ultra-wide band node, wherein the first ultra-wide band node and the second ultra-wide band node are arranged on the unmanned aerial vehicle at positions offset from one another, the method comprising using signals received at the first ultra-wideband node and the second ultra-wideband node from a plurality of anchor nodes located within an environment to estimate a pose state of the unmanned aerial vehicle.

    13. A localization method according to claim 12, wherein the unmanned aerial vehicle further comprises: an inertial measurement unit configured to detect linear acceleration and angular velocity of the unmanned aerial vehicle, and the method further comprises using measurements from the inertial measurement unit to predict the pose state of the unmanned aerial vehicle.

    14. A localization method according to claim 13, further comprising pre-integrating the measurements from the inertial measurement unit to obtain a set of pre-integrated inertial measurement unit measurements and using the set of pre-integrated inertial measurement unit measurements to estimate the pose state of the unmanned aerial vehicle.

    15. A localization method according to claim 13, further comprising comparing the predicted distance from the pose prediction of the unmanned aerial vehicle determined from measurements from the inertial measurement unit with the distance measurements by the first ultra-wideband node and the second ultra-wideband node and thereby identifying outliers in the measurements by the first ultra-wideband node and the second ultra-wideband node.

    16. A localization method according to claim 13, further comprising receiving on-board self-localization data generated by an on-board self-localization module of the unmanned aerial vehicle.

    17. (canceled)

    18. (canceled)

    19. A localization method according to claim 16, further comprising estimating the pose state of the unmanned aerial vehicle using measurement data comprising measurements by the first ultra-wideband node and the second ultra-wideband node, the measurements from the inertial measurement unit and the on-board self-localization data by minimizing a cost function between the measurement data and a model predicted pose state.

    20. A localization method according to claim 19, wherein the on-board self-localization data comprises laser scan data, and the method further comprises extracting lidar features from the laser scan data and matching the lidar features with a local map and thereby obtaining a set of feature-map-matching coefficients, wherein the cost function depends on the set of feature-map-matching coefficients.

    21. A localization method according to claim 19, wherein the on-board self-localization data comprises image data, and the method further comprises extracting visual features from the image data and thereby obtaining a set of visual features, wherein the cost function depends on the set of visual features.

    22. (canceled)

    23. A non-transitory computer readable medium storing processor executable instructions which when executed on a processor cause the processor to carry out a method according to claim 12.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0031] In the following, embodiments of the present invention will be described as non-limiting examples with reference to the accompanying drawings in which:

    [0032] FIG. 1 shows an unmanned aerial vehicle according to an embodiment of the present invention;

    [0033] FIG. 2 shows the deployment of anchor nodes for use with an unmanned aerial vehicle according to an embodiment of the present invention;

    [0034] FIG. 3 is a table showing communication slots for anchor nodes used in embodiments of the present invention;

    [0035] FIG. 4 is a block diagram showing an unmanned aerial vehicle according to an embodiment of the present invention;

    [0036] FIG. 5 is a flowchart showing a localization method for an unmanned aerial vehicle according to an embodiment of the present invention;

    [0037] FIG. 6 is a flow chart showing the workflow in a localization method for an unmanned aerial vehicle according to an embodiment of the present invention;

    [0038] FIG. 7 is a timing diagram showing a sliding window synchronization scheme used in embodiments of the present invention;

    [0039] FIG. 8 shows multiple poses of an unmanned aerial vehicle and the coupling of these poses with available observations;

    [0040] FIG. 9 is a table showing a comparison of the absolute trajectory error of localization methods according to embodiments of the present invention with other localization methods;

    [0041] FIG. 10 shows a set of ultra-wide band measurements for a test flight of an unmanned aerial vehicle according to an embodiment of the present invention;

    [0042] FIG. 11 is a table showing position estimates of the anchors in the flight tests;

    [0043] FIG. 12 is a table showing a comparison of absolute trajectory error over building inspection datasets; and

    [0044] FIG. 13A to FIG. 13D show the results of a test flight experiment using an unmanned aerial vehicle according to an embodiment of the present invention.

    DETAILED DESCRIPTION

    [0045] FIG. 1 shows an unmanned aerial vehicle according to an embodiment of the present invention. The unmanned aerial vehicle 100 comprises a body 101 from which six arms 102 extend. A rotor 104 is mounted on each of the arms 102. A landing gear 106 extends downward from the body 101. Two cameras are mounted on the body: a left camera 108 and a right camera 109, which both face forward. A vertical lidar sensor 110 and a horizontal lidar sensor 111 are mounted on the body 101. Each of the vertical lidar sensor 110 and the horizontal lidar sensor 111 comprises a laser and a receiver and operates to determine the range of objects by measuring the time taken for reflected light to return to the receiver. The vertical lidar sensor 110 is arranged to carry out ranging in vertical directions and the horizontal lidar sensor 111 is arranged to carry out ranging in horizontal directions. An inertial measurement unit (IMU) 112 is arranged on the body 101. The inertial measurement unit 112 comprises gyroscopes and accelerometers and is configured to detect linear acceleration and angular velocity of the unmanned aerial vehicle.

    [0046] Four ultra-wide band antenna nodes are arranged on arms extending from the body 101 of the unmanned aerial vehicle. As shown in FIG. 1, a first ultra-wide band antenna node 120 is arranged on an arm extending to the front left of the body 101, a second ultra-wide band antenna node 121 is arranged on an arm extending to the rear left of the body 101, a third ultra-wide band antenna node 122 is arranged on an arm extending to the rear right of the body 101 and a fourth ultra-wide band antenna node 123 is arranged on an arm extending to the front right of the body 101. The UWB signal has a frequency band from 3.1 GHz to 4.8 GHz. Ranging measurement is carried out by a two-way time-of-flight technique. The network is synchronized via a Time-Division Multiple Access technique in which ranging measurements are scheduled on a slot map.
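As context for the two-way time-of-flight technique mentioned above, the following sketch shows how a range can be recovered from a symmetric two-way exchange. It is illustrative only: the function name, the turnaround-delay handling and the example values are assumptions, not taken from this disclosure.

```python
# Illustrative sketch (not from the patent): estimating range from a
# two-way time-of-flight exchange between two UWB nodes.
C = 299_792_458.0  # speed of light, m/s

def twr_distance(t_round: float, t_reply: float) -> float:
    """Single-sided two-way ranging: the initiator measures the round-trip
    time t_round; the responder reports its known turnaround delay t_reply.
    The one-way time of flight is half the difference."""
    tof = 0.5 * (t_round - t_reply)
    return C * tof

# Example: a 10 m separation and a 1 us responder turnaround delay.
tof_one_way = 10.0 / C
d = twr_distance(2 * tof_one_way + 1e-6, 1e-6)
```

In practice a double-sided exchange is often used to suppress clock-drift error, but the geometry of the computation is the same.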

    [0047] In the unmanned aerial vehicle shown in FIG. 1, a crystal prism 130 which acts as a ground truth tracking point is mounted on top of the body 101. The crystal prism 130 is used to verify the localization scheme in experiments such as those described below with reference to FIG. 13A to FIG. 13D and is not used as part of the localization schemes. Thus, the localization schemes described herein would operate without the crystal prism 130.

    [0048] The ultra-wide band antenna nodes are configured to send signals to and receive signals from anchor nodes located at fixed locations in the environment in which the unmanned aerial vehicle operates. By sending and receiving signals from these anchor nodes, the location and orientation of the unmanned aerial vehicle can be estimated.

    [0049] FIG. 2 shows the deployment of anchor nodes in an environment for use with an unmanned aerial vehicle according to an embodiment of the present invention. As shown in FIG. 2, three ultra-wide band (UWB) ground nodes or anchor nodes are arranged in an environment 200 around a structure 210 under inspection. A first anchor node 222, a second anchor node 224 and a third anchor node 226 are arranged in the environment 200. The UAV 230 having four UWB ranging nodes 232 flies in the environment 200. A world frame 240 is defined having an x-axis 242 in the direction from the first anchor node 222 towards the second anchor node 224 and a y-axis in the direction towards the structure 210 under inspection. A z-axis is defined to point upwards by the right-hand rule. The third anchor node 226 is positioned on the −y half plane. In this scheme, by having the +x direction point from one anchor node to another, together with the preferred choice of z pointing upwards, a fixed reference frame can easily be defined that aligns with the environment in which the UAV operation is to be conducted. Additionally, the three anchor nodes are arranged substantially in a horizontal plane and the four UWB ranging nodes 232 on the UAV 230 are also arranged in a horizontal plane.

    [0050] It is noted that the scheme shown in FIG. 2 may be extended to more than three anchor nodes, however, three are shown for simplicity as this is the minimum number required for localization of the UAV.

    [0051] FIG. 3 is a table showing communication slots for anchor nodes used in embodiments of the present invention. FIG. 3 shows the slot map designed for one anchor node, for example, the first anchor node 222 having ID101. Note that the grayed slots in which ranging between the anchors is scheduled (i.e. slots 0, 7, 14) are the same for all anchors. By configuring the UWB nodes to echo the last range, all nodes in the network should be able to obtain the distances among the three nodes ID100, ID101 and ID102 after two cycles and calculate the corresponding coordinates x1, x2, y2 of anchors ID101 and ID102, assuming that the plane made by the three anchors is parallel with the x-y plane of the coordinate system. In this scheme, while ranging between the UAV-mounted nodes and the anchor nodes differs across the slot maps, ranging between the anchors is the same for all slot maps. In this case, slots 0, 7 and 14 are used to conduct ranging between the anchor nodes. By configuring the anchor nodes to echo their last range, all other nodes can receive a message containing the distance between anchors. These distances can then be used to calculate the coordinates of the ground nodes. The important issue here is to design such a set of slot maps with a minimal number of slots needed for ranging between the anchors (thus allowing more time for the UAV to range to the anchors), while still ensuring enough range measurements to efficiently calculate the coordinates of the anchors.
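The anchor coordinate calculation described above can be sketched as follows. This is an illustrative reconstruction under the stated assumptions (anchor ID100 at the origin, ID101 on the +x axis, ID102 on the −y half-plane, all anchors coplanar with the x-y plane); the function name and interface are hypothetical.

```python
import math

# Hypothetical sketch of anchor self-localization: given the three
# inter-anchor distances echoed over the network, place ID100 at the
# origin, ID101 on the +x axis and ID102 on the -y half-plane, matching
# the world-frame convention described with reference to FIG. 2.
def locate_anchors(d01: float, d02: float, d12: float):
    x1 = d01                                    # ID101 = (x1, 0)
    # Law of cosines gives the x coordinate of ID102 ...
    x2 = (d01**2 + d02**2 - d12**2) / (2 * d01)
    # ... and the Pythagorean theorem its (negative) y coordinate.
    y2 = -math.sqrt(max(d02**2 - x2**2, 0.0))   # ID102 on -y half-plane
    return (0.0, 0.0), (x1, 0.0), (x2, y2)

a0, a1, a2 = locate_anchors(10.0, 6.0, 8.0)
```

Three coplanar anchors are exactly enough to fix this frame; a fourth anchor would over-determine it and could be used for a consistency check.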

    [0052] FIG. 4 is a block diagram showing an unmanned aerial vehicle according to an embodiment of the present invention. The unmanned aerial vehicle 400 comprises a flight controller 402 which controls the rotors 404 of the unmanned aerial vehicle 400. The unmanned aerial vehicle 400 further comprises a memory 406, a localization processor 408, a first ultra-wide band node 412, a second ultra-wide band node 414, an on-board self-localization module 420 and an inertial measurement unit 430. The memory 406 may store flight plans which are executed by the flight controller 402 in order to carry out an inspection or other task. The memory may also store images, video or other data captured by sensors of the unmanned aerial vehicle during a task. The localization processor 408 executes methods according to embodiments of the present invention to determine the location of the unmanned aerial vehicle using information captured by sensors on the unmanned aerial vehicle. The first ultra-wide band node 412 and the second ultra-wide band node 414 are ultra-wide band antennas which determine the range to anchor ultra-wide band nodes arranged in the environment. The on-board self-localization module 420 comprises a lidar module 422 and a camera 424. The lidar module 422 comprises a laser and a receiver and operates to determine the range of objects by measuring the time taken for reflected light to return to the receiver. The inertial measurement unit 430 comprises gyroscopes and accelerometers and is configured to detect linear acceleration and angular velocity of the unmanned aerial vehicle 400.

    [0053] The flight controller 402, the memory 406 and the localization processor 408 may be implemented as a small form factor computer having a processor and a memory storing computer executable instructions which are executed on the processor to carry out the methods described in this disclosure. The memory 406 may be referred to in some contexts as computer readable storage media and/or non-transitory computer readable media. Alternatively, while a software implementation of the computer program modules is described herein, these may alternatively be implemented as one or more hardware modules (such as field-programmable gate array(s) or application-specific integrated circuit(s)) comprising circuitry which implements equivalent functionality to that implemented in software.

    [0054] FIG. 5 is a flowchart showing a localization method for an unmanned aerial vehicle according to an embodiment of the present invention. The method 500 shown in FIG. 5 is carried out by the unmanned aerial vehicle 400 shown in FIG. 4.

    [0055] In step 502, the first ultra-wide band node 412 and the second ultra-wide band node 414 of the unmanned aerial vehicle 400 receive signals from anchor nodes. From these signals, the localization processor 408 determines the range to each of the anchor nodes.

    [0056] In step 504, the localization processor 408 of the unmanned aerial vehicle 400 estimates the pose state of the unmanned aerial vehicle. The pose state comprises the location and orientation of the unmanned aerial vehicle relative to the anchor nodes in the environment.

    [0057] FIG. 6 is a flow chart showing the workflow in a localization method for an unmanned aerial vehicle according to an embodiment of the present invention. The method 600 shown in FIG. 6 is carried out by the unmanned aerial vehicle 400 shown in FIG. 4.

    [0058] As shown in FIG. 6, the inputs to the method are ultra-wide band (UWB) data 610, inertial measurement unit (IMU) data 620, pointclouds 630 which are generated by the lidar module 422 of the unmanned aerial vehicle 400, and images 650 which are generated by the camera 424 of the unmanned aerial vehicle 400. The method 600 also uses UWB measurements $\{\mathcal{U}_m\}$ and pose predictions $\hat{\chi}_m$ from previous stages in key frame management 640.

    [0059] As described above with reference to FIG. 2, the UWB ranging and communication 602 network is employed to gather the distances between the anchors after they have been deployed in the field. These distances are then used to calculate the positions of the anchors by using an anchor self-localization 604 algorithm. Once the anchors' positions 605 have been calculated, they are associated with the UWB measurements 603. Thus, a UWB measurement making up the UWB data 610 consists of three pieces of information: first, the position of the UWB node in the UAV's body frame; second, the UWB anchor position in the chosen world frame; and third, the distance between these two UWB nodes.

    [0060] After the self-localization stage, UWB, IMU and OSL data will go through multiple stages before they are fused together in the Optimization-based Sensor Fusion (OSF) algorithm. The system is synchronized by the lidar messages, whose time stamps are used to determine the time steps from time t.sub.w to t.sub.k. This synchronization process is described in more detail below with reference to FIG. 7.

    [0061] The UWB data 610 is subjected to some preliminary checks 612 to eliminate bad measurements. These checks include comparing the measurement's signal-to-noise ratio (SNR), leading-edge-detection quality and rate-of-change with user-defined thresholds. Only measurements that meet these thresholds are passed to the buffer in a bundling 614 stage for subsequent stages. UWB measurements that arrive in the interval $(t_{m-1}, t_m]$, $w+1 \le m \le k$, are grouped into a set $\{\mathcal{U}_m\}$. Finally, this set is put through a final outlier check 616 to see whether any measurement in it is an outlier. This is done by comparing each measurement with the IMU-predicted range value. As shown in FIG. 6, the IMU-propagated state 629 determined by the pose prediction 628 algorithm is used in the outlier check 616.
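The two-stage gating described above can be sketched as follows. The threshold values, function names and the use of a single fixed gate are assumptions for illustration, not values from this disclosure.

```python
import math

# Illustrative sketch of the two-stage UWB gating: preliminary quality
# checks, then an outlier check against the IMU-predicted range.
def passes_preliminary_checks(snr_db, leading_edge_quality, rate_of_change,
                              snr_min=10.0, le_min=0.5, roc_max=5.0):
    """Reject measurements with low SNR, poor leading-edge detection
    quality, or an implausibly fast rate of change (m/s)."""
    return (snr_db >= snr_min and leading_edge_quality >= le_min
            and abs(rate_of_change) <= roc_max)

def is_outlier(d_measured, uav_node_pos, anchor_pos, gate=0.5):
    """Compare the measured range with the range predicted from the
    IMU-propagated pose; flag it if the discrepancy exceeds the gate (m)."""
    d_predicted = math.dist(uav_node_pos, anchor_pos)
    return abs(d_measured - d_predicted) > gate

ok = passes_preliminary_checks(15.0, 0.8, 1.0)
bad = is_outlier(12.0, (0.0, 0.0, 1.0), (3.0, 4.0, 1.0))  # predicted 5 m
```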

    [0062] FIG. 7 is a timing diagram showing a sliding window synchronization scheme used in embodiments of the present invention. As shown in FIG. 7, the system is synchronized by the lidar messages 710, whose time stamps are used to determine the time steps from time $t_w$ to $t_k$. The IMU data 720, the UWB data 730 and the image data 740 are then allocated into the sliding windows according to their timing relative to the lidar messages 710. IMU data from a subsequent frame 722 is pre-integrated and used in the processing of earlier frames. IMU data from one or more frames 724 is propagated and used to compensate for motion distortion. A group of UWB data points 732 in a frame may be defined having a timestamp $\tau_m^i$. An image 742 may be defined as having a visual time delay $t_d$ with respect to the timing of the corresponding frame defined by the lidar messages 710.

    [0063] Returning now to FIG. 6, the IMU data 620 is subjected to bundling 622, which arranges the data into frames as described above with reference to FIG. 7, and to pre-integration before being input into a local optimization algorithm 626. The local optimization algorithm 626 generates a pose prediction 628.

    [0064] For the lidar point cloud data 630, the plane and edge features are extracted using the common smoothness criteria of LOAM systems in a feature extraction 632 step. The motion distortion in these features is compensated using the IMU-propagated states, and the features are then transformed to the inertial frame in the transform and deskew step 634. These features are then matched with the local map 642 built from marginalized point clouds in an association 644 step to obtain the set of feature-map-matching (FMM) coefficients $\{\mathcal{C}_m\}$, which are used to create the lidar factors in the final cost function calculated in the local optimization algorithm 626.

    [0065] The feature-map-matching coefficients are calculated using the following algorithm:

    Input: $\mathcal{F}_m = (\mathcal{F}_m^p, \mathcal{F}_m^e)$, $\mathcal{M}_w = (\mathcal{M}_w^p, \mathcal{M}_w^e)$, $\hat{T}_m$
    Output: $\mathcal{C}_m \triangleq \{\mathcal{C}_m^1, \mathcal{C}_m^2, \dots\}$, $\mathcal{C}_m^i = ({}^m f^i, n^i, c^i)$

    for each $f \in \mathcal{F}_m$ do:
        Compute ${}^m f^i$ from $f^i$ and $\hat{T}_m$;
        if $f \in \mathcal{F}_m^p$ then
            Find $\mathcal{N}_f = \mathrm{KNN}(f, \mathcal{M}_w^p)$;
            Find $\bar{n} = \arg\min_n \sum_{x \in \mathcal{N}_f} \| n^\top x + 1 \|^2$;
            Compute $s = 1 - 0.9\,\dfrac{| n^\top f + 1 |}{\| n \|\,\| f \|}$ and $g = s / \| n \|$;
            If $s > 0.1$, add $({}^m f^i, g\bar{n}, g)$ to $\mathcal{C}_m$;
        else if $f \in \mathcal{F}_m^e$ then
            Find the set $\mathcal{N}_f = \mathrm{KNN}(f, \mathcal{M}_w^e)$ and its centroid $\bar{p} = \dfrac{1}{|\mathcal{N}_f|} \sum_{x \in \mathcal{N}_f} x$;
            Compute $A \triangleq \dfrac{1}{|\mathcal{N}_f|} \sum_{x \in \mathcal{N}_f} (x - \bar{p})(x - \bar{p})^\top$;
            Find the eigenvector $v_{\max}$ corresponding to the largest eigenvalue of $A$;
            Compute $x_0 = f$, $x_1 = \bar{p} + 0.1 v_{\max}$, $x_2 = \bar{p} - 0.1 v_{\max}$, $v_{01} = x_0 - x_1$, $v_{02} = x_0 - x_2$, $v_{12} = x_1 - x_2$;
            Compute $n_1 = v_{12} \times (v_{01} \times v_{02})$, $n_1 \leftarrow n_1 / \| n_1 \|$, $n_2 = v_{12} \times n_1$;
            Compute $f_\perp = f - (n_1 n_1^\top) v_{01}$;
            Compute $c_1 = -n_1^\top f_\perp$ and $c_2 = -n_2^\top f_\perp$;
            Compute $s = 1 - 0.9\,\dfrac{\| v_{01} \times v_{02} \|}{\| v_{12} \|}$, $g = s/2$;
            If $s > 0.1$, add $({}^m f^i, g n_1, g c_1)$ and $({}^m f^i, g n_2, g c_2)$ to $\mathcal{C}_m$;

    [0066] For the camera images 650, binary robust independent elementary features (BRIEF features) are extracted in a feature extraction and sync step 652, and the latest pose estimates are then used to triangulate these features for depth initialization in a triangulation step 645. Hence, we can obtain a set of coefficients $\{\mathcal{V}_{ab}^i\}$ which can be used to construct the visual factors in the final cost function used in the local optimization algorithm 626.

    [0067] Specifically, $\mathcal{V}_{ab}^i$ is defined as:

    $$\mathcal{V}_{ab}^i \triangleq ({}^a\mathcal{Z}^i,\; {}^b\mathcal{Z}^i), \quad {}^a\mathcal{Z}^i \triangleq \pi({}^a f^i) = \frac{1}{z_a^i}(x_a^i, y_a^i, z_a^i) \triangleq \lambda^i (x_a^i, y_a^i, z_a^i), \quad {}^b\mathcal{Z}^i = \pi({}^b f^i) = \frac{1}{z_b^i}(x_b^i, y_b^i, z_b^i),$$

    [0068] where ${}^a\mathcal{Z}^i$, ${}^b\mathcal{Z}^i$ are the projected coordinates of the feature in camera frames a and b, ${}^a f^i$, ${}^b f^i$ are the feature's 3D coordinates w.r.t. camera frames a and b, and $\lambda^i$ is the inverse depth of visual feature i, which is only associated with the camera frame that first observes the feature.

    [0069] The cost function is then optimized to provide the estimate of the robot states in the sliding windows. The latest state estimate in the sliding windows is combined with the latest IMU measurements in the buffer, which are those left over from the last extraction plus those that newly arrived during the optimization process, to calculate the high-rate pose estimate. This high-rate pose estimate is also used to perform the outlier check on the UWB measurements mentioned earlier.

    [0070] After the optimization has been completed, the sliding window will be slid forward, the oldest state estimate and its associated measurements will be abandoned, and a new time step will be registered when the conditions mentioned earlier are met.

    [0071] FIG. 8 shows multiple poses of an unmanned aerial vehicle and the coupling of these poses with available observations. As shown in FIG. 8, the unmanned aerial vehicle 802 having UWB nodes 804 moves through a series of positions which are labelled k, k+1, k+2 and k+3. Three anchor UWB nodes 810 are arranged in the environment. The pose of the unmanned aerial vehicle 802 is expressed relative to axes 820. As shown in FIG. 8, the pose of the UAV 802 at position k is $\chi_k$, and the poses at positions k+1, k+2 and k+3 are $\chi_{k+1}$, $\chi_{k+2}$ and $\chi_{k+3}$ respectively. The IMU measurements between the positions are $\mathcal{I}_k$, $\mathcal{I}_{k+1}$ and $\mathcal{I}_{k+2}$. The UWB measurements between the UWB nodes 804 of the UAV and the anchor nodes are expressed as $\hat{d}_{k+3}^1$, where the superscript number indicates a label of the anchor node.

    [0072] We denote all of the state estimates on the sliding window at time $t_k$ as $\hat{\mathcal{T}}_k$:

    $$\hat{\mathcal{T}}_k \triangleq (\hat{\mathcal{X}}_w, \dots, \hat{\mathcal{X}}_k), \quad \hat{\mathcal{X}}_m = (\hat{q}_m, \hat{p}_m, \hat{v}_m, \hat{b}_m^\omega, \hat{b}_m^a) \in SE(3) \times \mathbb{R}^{12}, \quad \hat{\Lambda}_k = (\hat{\lambda}^1, \hat{\lambda}^2, \dots, \hat{\lambda}^{N_k^V}) \in \mathbb{R}^{N_k^V},$$

    where $\hat{q}_m$ is the orientation quaternion estimate, $\hat{p}_m$ and $\hat{v}_m$ are the robot's position and velocity estimates w.r.t. the world frame, $\hat{b}_m^\omega$ and $\hat{b}_m^a$ are the estimates of the gyroscope and accelerometer biases of the IMU, and $\hat{\lambda}^1, \hat{\lambda}^2, \dots, \hat{\lambda}^{N_k^V}$ are the estimates of the inverse depths of the $N_k^V$ visual features that are tracked in the sliding window.
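The sliding-window state described above can be sketched as a simple container; the class and field names below are hypothetical, chosen only to mirror the notation of the state vector.

```python
# Hypothetical sketch of the sliding-window state: one pose/velocity/bias
# block per time step, plus the inverse depths of tracked visual features.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PoseState:
    q: Tuple[float, float, float, float]  # orientation quaternion (w, x, y, z)
    p: Tuple[float, float, float]         # position in the world frame
    v: Tuple[float, float, float]         # velocity in the world frame
    b_gyro: Tuple[float, float, float]    # gyroscope bias estimate
    b_accel: Tuple[float, float, float]   # accelerometer bias estimate

@dataclass
class SlidingWindowState:
    poses: List[PoseState] = field(default_factory=list)       # X_w ... X_k
    inverse_depths: List[float] = field(default_factory=list)  # lambda^1..N

window = SlidingWindowState()
window.poses.append(PoseState((1, 0, 0, 0), (0, 0, 0), (0, 0, 0),
                              (0, 0, 0), (0, 0, 0)))
```

When the window slides forward, the oldest `PoseState` would be dropped along with its associated measurements, as described later in the text.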

    [0073] Hence, we can construct a cost function based on the observations from UWB, IMU and OSL measurements. The cost function can be constructed at each time step t.sub.k+M as follows:

    $$f(\hat{\mathcal{T}}_k, \hat{\Lambda}_k) \triangleq \sum_{m=w+1}^{k} \left\| r_{\mathcal{I}}(\hat{\mathcal{X}}_{m-1}, \hat{\mathcal{X}}_m, \mathcal{I}_m) \right\|^2_{P_{\mathcal{I}_m}^{-1}} + \sum_{m=w}^{k} \sum_{i=1}^{N_m^{\mathcal{L}}} \left\| r_{\mathcal{L}}(\hat{\mathcal{X}}_m, \mathcal{L}_m^i) \right\|^2_{P_{\mathcal{L}_m^i}^{-1}} + \sum_{m=w}^{k} \sum_{i=1}^{N_m^{\mathcal{U}}} \left\| r_{\mathcal{U}}(\hat{\mathcal{X}}_{m-1}, \hat{\mathcal{X}}_m, \mathcal{U}_m^i) \right\|^2_{P_{\mathcal{U}_m^i}^{-1}} + \sum_{i=1}^{N_k^{\mathcal{V}}} \sum_{(a,b) \in \mathcal{C}^i} \left\| r_{\mathcal{V}}(\hat{\mathcal{X}}_a, \hat{\mathcal{X}}_b, \hat{\lambda}^i, \mathcal{V}_{ab}^i) \right\|^2_{P_{\mathcal{V}_{ab}^i}^{-1}}$$

    where $r_{\mathcal{I}}(\cdot)$, $r_{\mathcal{L}}(\cdot)$, $r_{\mathcal{U}}(\cdot)$, $r_{\mathcal{V}}(\cdot)$ are the residuals constructed from IMU, lidar, UWB and

    [0074] OSL measurements, $P_{\mathcal{I}_m}$, $P_{\mathcal{L}_m^i}$, $P_{\mathcal{U}_m^i}$, $P_{\mathcal{V}_{ab}^i}$ are the covariance matrices of the measurement errors, and $N_m^{\mathcal{U}}$, $N_k^{\mathcal{V}}$ are the numbers of UWB and visual measurements in the corresponding intervals. The cost function above summarizes the coupling of all the states and the observations. Once the cost function has been constructed, we can use open-source non-linear solvers such as g2o or ceres to execute the optimization.
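To illustrate how the weighted factors combine into a single scalar cost, here is a toy evaluation. In the described system the minimization itself would be delegated to a solver such as ceres or g2o; the scalar information weights below stand in for the inverse covariance matrices P^-1, and the example residual values are invented for illustration.

```python
# Toy sketch of the weighted sum-of-squares cost assembled from factors.
def weighted_cost(factors):
    """factors: iterable of (residual_vector, info_weight) pairs, where
    info_weight plays the role of the inverse covariance P^-1 (reduced to
    a scalar here for simplicity)."""
    return sum(w * sum(r_i * r_i for r_i in r) for r, w in factors)

total = weighted_cost([
    ([0.1, -0.2], 100.0),   # e.g. an IMU pre-integration residual
    ([0.05], 400.0),        # e.g. a UWB range residual
])
```

A tightly measured factor (large information weight) thus dominates the cost, which is exactly the weighting role the covariance matrices play in the full formulation.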

    [0075] Below, we explain in detail the process of constructing each factor in the cost function. Essentially, for each measurement, we find an observation model that links the actual measurements with the states. We then calculate the residual (or, loosely speaking, the difference) between the measurement and the model-predicted value. Next, we derive the covariance matrix of the measurement errors, to properly apply the weighting among the factors. Finally, to facilitate real-time optimization, the Jacobians of each factor have to be analytically derived.

    [0076] i. Ranging factor:

    [0077] At each time step $t_k$ we have the following observation from a UAV ranging node:

    $$\mathcal{U}_m^i = (\hat{d}_m^i, x_m^i, y_m^i, \tau_m^i, t_{m-1}, t_m),$$

    where $\hat{d}_m^i$ is the distance measurement, $x_m^i$ is the position of the UWB anchor w.r.t. the world frame, $y_m^i$ is the coordinate of the UWB ranging node in the body frame, and $\tau_m^i$, $t_{m-1}$, $t_m$ are the time stamps defined in FIG. 7.

    [0078] The residual of a range measurement $\mathcal{U}_m^i$ is calculated by taking the difference between the distance estimate $\hat{d}_m$ and the distance measurement $\hat{d}_m^i$:


    $r_{\mathcal{U}}(\hat{\mathcal{X}}_{m-1},\hat{\mathcal{X}}_m,\mathcal{U}_m^i)=\hat{d}_m-\hat{d}_m^i,$

    where $\hat{d}_m$ represents the estimated distance:

    [00008]

    $$\hat{d}_m\triangleq\left\|\hat{p}_m+\hat{R}_{m-1}\,\mathrm{Exp}\!\left(\frac{\delta t_m}{\Delta t_m}\mathrm{Log}\!\left(\hat{R}_{m-1}^{-1}\hat{R}_m\right)\right)y_m-\frac{\Delta t_m^2-\delta t_m^2}{2\Delta t_m}v_{m-1}-\frac{(\Delta t_m-\delta t_m)^2}{2\Delta t_m}v_m-x_m\right\|,$$
    $$\Delta t_m=t_m-t_{m-1},\qquad\delta t_m=\tau_m^i-t_{m-1}.$$
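    As a concrete illustration, a simplified version of this residual can be sketched in Python; the sketch evaluates the node's world position directly from a single pose and the body-frame offset, omitting the on-manifold interpolation between the two poses (the function name and arguments are illustrative only):

```python
import numpy as np

def range_residual(p, R, y, x_anchor, d_meas):
    """Simplified UWB range residual r_U = d_hat - d_meas.

    p, R     : UAV position and rotation matrix (body -> world)
    y        : body-frame offset of the UWB ranging node
    x_anchor : anchor position in the world frame
    d_meas   : measured distance
    """
    node_world = p + R @ y                       # node position in the world frame
    d_hat = np.linalg.norm(node_world - x_anchor)
    return d_hat - d_meas

# Node offset 0.5 m along body x, UAV at the origin, anchor 4 m ahead:
# the true node-to-anchor distance is 3.5 m, so the residual is zero.
r = range_residual(np.zeros(3), np.eye(3), np.array([0.5, 0.0, 0.0]),
                   np.array([4.0, 0.0, 0.0]), 3.5)
```

    The offset $y_m$ is what makes the two body-mounted UWB nodes informative about orientation as well as position.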

    And the covariance of the measurement error is chosen as


    $P_{\mathcal{U}_m^i}\equiv\sigma_{\mathcal{U}}^2.$

    [0079] ii. OSL factors:

    [0080] For lidar factors, the residual and covariance constructed from each feature's FMM coefficients $\mathcal{L}_m^i=(f^i, n^i, c^i)$ are defined as


    $r_{\mathcal{L}}(\hat{\mathcal{X}}_m,\mathcal{L}_m^i)=(n^i)^{T}\!\left[\hat{R}_m f^i+\hat{p}_m\right]+c^i,\qquad P_{\mathcal{L}_m^i}=\sigma_L^2$
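    This point-to-plane residual can be illustrated with a short Python sketch (illustrative names; the plane is parameterized by its unit normal n and offset c):

```python
import numpy as np

def lidar_plane_residual(R_m, p_m, f, n, c):
    """Point-to-plane residual (n^i)^T [R_m f^i + p_m] + c^i for one lidar
    feature: f is the feature point in the body frame, (n, c) the plane
    normal and offset associated with it in the map."""
    return float(n @ (R_m @ f + p_m) + c)

# A point lying exactly on the plane z = 2 (n = [0, 0, 1], c = -2)
# gives a zero residual under an identity pose:
r = lidar_plane_residual(np.eye(3), np.zeros(3),
                         np.array([1.0, 1.0, 2.0]),
                         np.array([0.0, 0.0, 1.0]), -2.0)
```

    The residual is the signed distance of the transformed feature point from its associated plane, so it vanishes when the pose estimate is consistent with the map.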

    [0081] For visual feature factors, the residual and covariance from each feature's observation are defined as


    $$r_{\mathcal{V}}(\hat{\mathcal{X}}_a,\hat{\mathcal{X}}_b,\hat{\lambda}^i,\mathcal{V}_{ab}^i)=\pi\!\left({}^{b}\hat{f}^i\!\left({}^{W}\hat{f}^i\!\left(\pi^{-1}(\hat{\lambda}^i,{}^{a}u^i),\hat{p}_a,\hat{R}_a\right),\hat{p}_b,\hat{R}_b\right)\right)-{}^{b}u^i,\qquad P_{\mathcal{V}_{ab}^i}=\mathrm{diag}\!\left(\sigma_{\mathcal{V}_1}^2,\sigma_{\mathcal{V}_2}^2\right)$$
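    A simplified numerical sketch of this reprojection residual follows, assuming normalized (calibrated) image coordinates and an inverse-depth parameterization of the feature in frame a; the function names are illustrative, not part of the described system:

```python
import numpy as np

def pinhole_project(f_cam):
    """Pinhole projection pi(.): normalized image coordinates of a
    camera-frame point."""
    return f_cam[:2] / f_cam[2]

def visual_residual(u_a, lam, R_a, p_a, R_b, p_b, u_b):
    """Reprojection residual of one feature observed in frames a and b.

    u_a, u_b : normalized observations in frames a and b
    lam      : estimated inverse depth of the feature in frame a
    """
    f_a = np.append(u_a, 1.0) / lam      # back-projection pi^{-1}(lam, u_a)
    f_w = R_a @ f_a + p_a                # lift the point to the world frame
    f_b = R_b.T @ (f_w - p_b)            # express it in frame b
    return pinhole_project(f_b) - u_b

# With identical poses for frames a and b, the feature reprojects onto
# its own observation and the residual is zero:
r = visual_residual(np.array([0.1, 0.2]), 0.5,
                    np.eye(3), np.zeros(3), np.eye(3), np.zeros(3),
                    np.array([0.1, 0.2]))
```

    The residual couples the two poses and the feature's inverse depth, which is why each visual factor involves the state pair $(\hat{\mathcal{X}}_a,\hat{\mathcal{X}}_b)$.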

    iii. IMU pre-integration factors:

    [0082] The IMU measurements are defined as


    $$\check{\omega}_t=\omega_t+b_t^{\omega}+\eta_t^{\omega},$$
    $$\check{a}_t=a_t+R_t g+b_t^{a}+\eta_t^{a},$$
    $$\dot{b}_t^{\omega}=\eta_t^{b\omega},$$
    $$\dot{b}_t^{a}=\eta_t^{ba},$$

    where $g=[0,0,g]^{T}$ is the gravitational acceleration, $b_t^{\omega}$, $b_t^{a}$ are respectively the biases of the gyroscope and the accelerometer, and $\eta_t^{\omega}$, $\eta_t^{a}$, $\eta_t^{b\omega}$, $\eta_t^{ba}$ are zero-mean Gaussian noises, whose standard deviations are $\sigma_{\eta_\omega}$, $\sigma_{\eta_a}$, $\sigma_{\eta_{b\omega}}$, $\sigma_{\eta_{ba}}$, respectively.
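    The measurement model above can be simulated with a small Python sketch; the gravity magnitude of 9.81 m/s² and the function name are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_imu_sample(omega_true, acc_true, R_t, b_w, b_a, sig_w, sig_a):
    """One noisy IMU sample under the measurement model above:
    gyro = omega + b_w + noise, accel = acc + R_t g + b_a + noise."""
    g = np.array([0.0, 0.0, 9.81])   # assumed gravity magnitude
    gyro = omega_true + b_w + rng.normal(0.0, sig_w, 3)
    accel = acc_true + R_t @ g + b_a + rng.normal(0.0, sig_a, 3)
    return gyro, accel

# Noise-free sample of a body rotating at 0.1 rad/s with a small gyro bias:
gyro, accel = simulate_imu_sample(np.array([0.1, 0.0, 0.0]), np.zeros(3),
                                  np.eye(3), np.array([0.01, 0.0, 0.0]),
                                  np.zeros(3), 0.0, 0.0)
```

    Note that even a stationary IMU reports the gravity vector in its accelerometer channel, which the pre-integration and residual terms below must account for.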

    [0083] Given the noisy IMU measurements $(\check{\omega}_t,\check{a}_t)$ and some nominal bias values $\bar{b}_k^{a}$, $\bar{b}_k^{\omega}$, we can calculate the following tuple of IMU pre-integrations:


    $\mathcal{J}_k=(\check{\alpha}_{k+1},\check{\beta}_{k+1},\check{\gamma}_{k+1}),$


    where


    $$\check{\alpha}_{k+1}\triangleq\int_{t_k}^{t_{k+1}}\!\!\int_{t_k}^{u}{}^{k}R_s\left(\check{a}_s-\bar{b}_k^{a}\right)ds\,du,$$
    $$\check{\beta}_{k+1}\triangleq\int_{t_k}^{t_{k+1}}{}^{k}R_s\left(\check{a}_s-\bar{b}_k^{a}\right)ds,$$
    $$\check{\gamma}_{k+1}\triangleq\int_{t_k}^{t_{k+1}}\tfrac{1}{2}\,\Omega\!\left(\check{\omega}_s-\bar{b}_k^{\omega}\right){}^{k}\check{q}_s\,ds.$$

    [0084] In practice, the above integration can be implemented by zero-order-hold (ZOH) or higher-order methods (e.g. Runge-Kutta methods). During the period from $t_k$ to $t_{k+1}$, we have a sequence of m IMU samples $(\check{\omega}_0,\check{a}_0),(\check{\omega}_1,\check{a}_1),\ldots,(\check{\omega}_{m-1},\check{a}_{m-1})$. Thus, if ZOH is chosen, we use the following recursive rule to update the pre-integrated measurements:

    [00009]

    $$\check{\alpha}_{n+1}=\check{\alpha}_n+\check{\beta}_n\Delta\tau_n+\tfrac{1}{2}\,{}^{k}R_n\left(\check{a}_n-\bar{b}_k^{a}\right)\Delta\tau_n^2,$$
    $$\check{\beta}_{n+1}=\check{\beta}_n+{}^{k}R_n\left(\check{a}_n-\bar{b}_k^{a}\right)\Delta\tau_n,$$
    $$\check{\gamma}_{n+1}=\check{\gamma}_n\circ\begin{bmatrix}\cos\left(\frac{\Delta\tau_n}{2}\left\|\check{\omega}_n-\bar{b}_k^{\omega}\right\|\right)\\[4pt]\dfrac{\check{\omega}_n-\bar{b}_k^{\omega}}{\left\|\check{\omega}_n-\bar{b}_k^{\omega}\right\|}\sin\left(\frac{\Delta\tau_n}{2}\left\|\check{\omega}_n-\bar{b}_k^{\omega}\right\|\right)\end{bmatrix},$$
    $$\check{\alpha}_0=0,\quad\check{\beta}_0=0,\quad\check{\gamma}_0=[1\;\;0_{1\times3}]^{T},\quad\text{for }n=0,1,\ldots,m-1,$$

    where $\tau_n$ is the time stamp of the n-th sample, $\tau_0=t_k$, $\tau_m=t_{k+1}$, and $\Delta\tau_n\triangleq\tau_{n+1}-\tau_n$.
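    The ZOH recursion above can be sketched numerically as follows. This is a minimal illustration assuming a scalar-first quaternion convention and simple illustrative helper routines; it omits the bias Jacobians and the noise-covariance propagation discussed below:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two scalar-first quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def axis_angle_quat(v):
    """Quaternion [cos(|v|/2), (v/|v|) sin(|v|/2)], as in the update rule."""
    th = np.linalg.norm(v)
    if th < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate([[np.cos(th / 2)], np.sin(th / 2) * v / th])

def quat_to_rot(q):
    """Rotation matrix of a unit scalar-first quaternion."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def preintegrate_zoh(omega, acc, dts, b_w, b_a):
    """ZOH pre-integration of m IMU samples into (alpha, beta, gamma).

    omega, acc : (m, 3) gyroscope / accelerometer samples
    dts        : (m,) sample intervals
    b_w, b_a   : nominal gyroscope / accelerometer biases
    """
    alpha, beta = np.zeros(3), np.zeros(3)
    gamma = np.array([1.0, 0.0, 0.0, 0.0])
    for w, a, dt in zip(omega, acc, dts):
        R_n = quat_to_rot(gamma)                 # {}^k R_n from current gamma
        a_c = a - b_a                            # bias-corrected acceleration
        alpha = alpha + beta * dt + 0.5 * R_n @ a_c * dt**2
        beta = beta + R_n @ a_c * dt
        gamma = quat_mul(gamma, axis_angle_quat((w - b_w) * dt))
        gamma /= np.linalg.norm(gamma)           # keep gamma a unit quaternion
    return alpha, beta, gamma

# A body at rest sensing 1 m/s^2 along z for 1 s (100 samples at 100 Hz):
alpha, beta, gamma = preintegrate_zoh(
    np.zeros((100, 3)),                          # gyro: no rotation
    np.tile([0.0, 0.0, 1.0], (100, 1)),          # constant acceleration
    np.full(100, 0.01),
    np.zeros(3), np.zeros(3))
```

    Because the recursion depends only on the raw samples and the nominal biases, the tuple can be computed once per interval and reused across solver iterations, which is the main appeal of pre-integration.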

    [0085] The residual of the IMU preintegration factor is defined as follows

    [00010]

    $$r_{\mathcal{J}}(\hat{\mathcal{X}}_k,\hat{\mathcal{X}}_{k+1},\mathcal{J}_k)\triangleq(r_\gamma,\,r_\alpha,\,r_\beta,\,r_{b\omega},\,r_{ba}),$$
    $$r_\gamma\triangleq 2\,\mathrm{vec}\!\left(\begin{bmatrix}1\\-\tfrac{1}{2}C_\omega^{k+1}\Delta\hat{b}_k^{\omega}\end{bmatrix}\circ\check{\gamma}_{k+1}^{-1}\circ\hat{q}_k^{-1}\circ\hat{q}_{k+1}\right),$$
    $$r_\alpha\triangleq\hat{R}_k^{-1}\!\left(\hat{p}_{k+1}-\hat{p}_k-\hat{v}_k\Delta t_k+\tfrac{1}{2}g\Delta t_k^2\right)-A_\omega^{k+1}\Delta\hat{b}_k^{\omega}-A_a^{k+1}\Delta\hat{b}_k^{a}-\check{\alpha}_{k+1},$$
    $$r_\beta\triangleq\hat{R}_k^{-1}\!\left(\hat{v}_{k+1}-\hat{v}_k+g\Delta t_k\right)-B_\omega^{k+1}\Delta\hat{b}_k^{\omega}-B_a^{k+1}\Delta\hat{b}_k^{a}-\check{\beta}_{k+1},$$
    $$r_{b\omega}\triangleq\hat{b}_{k+1}^{\omega}-\hat{b}_k^{\omega},\qquad r_{ba}\triangleq\hat{b}_{k+1}^{a}-\hat{b}_k^{a},$$
    $$\text{where}\quad\Delta\hat{b}_k^{\omega}\triangleq\hat{b}_k^{\omega}-\bar{b}_k^{\omega},\quad\Delta\hat{b}_k^{a}\triangleq\hat{b}_k^{a}-\bar{b}_k^{a},\quad{}^{k}\hat{q}_{k+1}\triangleq\hat{q}_k^{-1}\circ\hat{q}_{k+1},$$

    and the Jacobians A, B, C are computed via an iterative scheme as follows:

    [00011]

    $$C_\omega^{n+1}={}^{n}R_{n+1}^{-1}\left(C_\omega^{n}-H_n\Delta\tau_n\right),\quad C_\omega^{0}=0,$$
    $$B_\omega^{n+1}=B_\omega^{n}-\tfrac{1}{2}\,{}^{k}R_n\left\lfloor\check{a}_n-\bar{b}_k^{a}\right\rfloor_{\times}C_\omega^{n}\Delta\tau_n,\quad B_a^{n+1}=B_a^{n}-{}^{k}R_n\Delta\tau_n,\quad B_\omega^{0}=0,\;B_a^{0}=0,$$
    $$A_\omega^{n+1}=A_\omega^{n}+B_\omega^{n}\Delta\tau_n-\tfrac{1}{2}\,{}^{k}R_n\left\lfloor\check{a}_n-\bar{b}_k^{a}\right\rfloor_{\times}C_\omega^{n}\Delta\tau_n^2,\quad A_a^{n+1}=A_a^{n}+B_a^{n}\Delta\tau_n-\tfrac{1}{2}\,{}^{k}R_n\Delta\tau_n^2,\quad A_\omega^{0}=0,\;A_a^{0}=0.$$

    [0086] The covariance of the IMU pre-integration errors is calculated based on a propagation scheme.

    [0087] FIG. 9 is a table showing a comparison of the absolute trajectory error of localization methods according to embodiments of the present invention with other localization methods.

    [0088] The methods VINS-Mono, VINS-Fusion, A-LOAM, LIO-SAM and M-LOAM are prior-art methods, and the methods labelled VIRAL (horiz. lidar) and VIRAL (2 lidars) correspond to localization methods according to embodiments of the invention using a single horizontal lidar and two lidars, respectively. All algorithms are run on an NUC 10 computer with an Intel Core i7 processor. Each method is slightly modified and configured for its best performance with the dataset. Since several lidar-based methods are not designed to incorporate multiple lidar inputs, we also include experiments of VIRAL using only the horizontal lidar for a fairer comparison. The Absolute Trajectory Error (ATE) results are summarized in FIG. 9. The best odometry result is highlighted in bold, and the second best is underlined. All values are in metres. From this table we can clearly see that VIRAL consistently achieves better performance than existing methods, even when only one lidar is used.

    [0089] To further verify the efficacy of the localization scheme, we develop a platform consisting of a camera system, one IMU, two LIDAR sensors, and two UWB modules. We improve the utility of the UWB sensors by using two antennas mounted at different points on the platform. Thus, effectively we have a total of four UAV ranging nodes. The data rates of the stereo image, IMU, LIDAR 1, LIDAR 2, and UWB topics are 10 Hz, 400 Hz, 10 Hz, 10 Hz, and 69 Hz, respectively. A Leica MS60 station is used to provide ground truth for the experiment. All software is run on a small-form-factor computer with an Intel Core i7-8650U processor.

    [0090] FIG. 10 shows a set of ultra-wide band measurements for a test flight of an unmanned aerial vehicle according to an embodiment of the present invention. The UWB nodes on the UAV are labelled 200.A, 200.B, 201.A and 201.B. The anchor UWB nodes are labelled 100, 101 and 102. Each of the plots shown in FIG. 10 corresponds to measurements from one of the UWB nodes on the UAV to one of the anchor nodes. From the zoomed-in part of the 200.A→101 plot, the variation of the distance is about 5 cm. We also notice some outliers in the 200.A→102 and 200.B→102 ranging measurements.

    [0091] FIG. 11 is a table showing position estimates of the anchors in the flight tests. The position estimates were determined as described above with reference to FIG. 2 and FIG. 3.

    [0092] FIG. 12 is a table showing a comparison of absolute trajectory error over building-inspection datasets. A total of four flight tests were conducted. As introduced earlier, since our work is meant to deliver a solution for UAV-based inspection applications, the UAV's trajectory differs from those in common datasets, which are often extensive in the x and y directions but have limited movement in the z direction. In our case, for inspection purposes, the UAV takes off and moves only in a vertical plane parallel to the object. Hence the movements along the x and z directions are extensive, while movement along the y direction is minimal.

    [0093] FIG. 13A to FIG. 13D show the results of a test flight experiment. The results correspond to the flight labelled bid_01 in FIG. 11 and FIG. 12. FIG. 13A shows a 3D view of the UAV trajectory as estimated by the individual OSL systems and by the sensor fusion result. FIG. 13B shows the orientation estimate from the system over time. FIG. 13C shows the velocity estimate from the system over time. FIG. 13D shows the position estimates from the OSL systems and the sensor fusion results over time. The ground-truth position was obtained using a prism mounted on the UAV as shown in FIG. 1.

    [0094] From the results in FIG. 13A to FIG. 13D, we can reaffirm the benefit of UWB-IMU-OSL fusion over the state-of-the-art methods. Moreover, as can be seen in FIG. 13A, while all of the OSL estimates exhibit significant drift over the operation range, our sensor fusion's estimation error remains consistent, which is the most important quality we are looking for. Besides the position error, although we do not have a ground truth for the orientation, during the flight test the drone's heading was kept relatively unchanged, always pointing towards the object. Hence the yaw value should be around 90 degrees. Indeed, we can see in FIG. 13B that the heading estimate stays around this value.

    [0095] It is anticipated that UAVs implementing the methods described above will find applications in aerial surveillance; automated inspection operations; and graphical modelling and structural health monitoring of 3D objects.

    [0096] Whilst the foregoing description has described exemplary embodiments, it will be understood by those skilled in the art that many variations of the embodiments can be made within the scope and spirit of the present invention.