Bionic visual navigation control system and method thereof for autonomous aerial refueling docking
11427316 · 2022-08-30
Assignee
Inventors
- Haibin Duan (Beijing, CN)
- Yongbin Sun (Beijing, CN)
- Yimin Deng (Beijing, CN)
- Long Xin (Beijing, CN)
- Han Li (Beijing, CN)
- Xiaobin Xu (Beijing, CN)
- Lun Fei (Beijing, CN)
- Mengzhen Huo (Beijing, CN)
- Lin Chen (Beijing, CN)
- Huaxin Qiu (Beijing, CN)
- Daifeng Zhang (Beijing, CN)
- Yankai Shen (Beijing, CN)
- Ning Xian (Beijing, CN)
- Chen Wei (Beijing, CN)
- Rui Zhou (Beijing, CN)
Cpc classification
G05D1/0094
PHYSICS
B64D39/06
PERFORMING OPERATIONS; TRANSPORTING
B64U2101/60
PERFORMING OPERATIONS; TRANSPORTING
G05D1/0088
PHYSICS
B64U2201/102
PERFORMING OPERATIONS; TRANSPORTING
B64C39/024
PERFORMING OPERATIONS; TRANSPORTING
International classification
G05D1/00
PHYSICS
B64D39/06
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A bionic visual navigation control system for autonomous aerial refueling docking includes: a tanker/receiver bottom layer control module, a multi-wind disturbances hose-drogue stable control module, an enable and select module, a close-range bionic vision relative navigation module, and a receiver relative position precise control module. A bionic visual navigation control method for autonomous aerial refueling docking is also provided. The present invention aims at improving the reliability, anti-interference capability and accuracy of close-range relative navigation in the autonomous aerial refueling docking stage, and designs a matching relative position precise control method with a control switch, thereby improving the accuracy of close-range navigation and control, promoting the successful realization of probe-and-drogue autonomous aerial refueling, and improving the autonomy of UAVs.
Claims
1. A bionic visual navigation control system for autonomous aerial refueling docking, comprising: a tanker/receiver bottom layer control module, a multi-wind disturbances hose-drogue stable control module, an enable and select module, a close-range bionic vision relative navigation module, and a receiver relative position control module; wherein 1) the tanker/receiver bottom layer control module comprises tanker/receiver pitch angle, roll angle, yaw angle and speed controllers; the pitch angle controller controls an elevator, wherein an inner ring of the pitch angle controller is a pitch angle speed PI (Proportional Integral) controller, and a proportional feedback of attack angle is used to stabilize the receiver; an outer ring of the pitch angle controller is a pitch angle PI controller, and an inner ring pitch angle speed command is calculated by the outer ring for pitch angle stability and control; the roll angle controller controls an aileron, wherein an inner ring of the roll angle controller is a roll angle speed PI controller; an outer ring of the roll angle controller is a roll angle PI controller, and an inner ring roll angle speed command is calculated by the outer ring for roll angle stability and control; the yaw angle controller controls a rudder, wherein a proportional feedback of yaw angle speed is used to stabilize the receiver, and a yaw angle PI controller is used for yaw angle stability and control; the speed controller controls a throttle thrust, wherein speed PI control is used to calculate a desired engine thrust for speed stability and control; in an autonomous aerial refueling docking stage, the receiver obtains a pitch angle command, a roll angle command and a speed command through the receiver relative position control module as inputs of the tanker/receiver bottom layer control module; since a straightened sideslip deviation control is adopted to eliminate lateral position differences, a yaw angle remains at zero; assume that the tanker
makes a fixed straight and level flight, a pitch angle thereof maintains a trim state value, and roll and yaw angles remain at zero; 2) the multi-wind disturbances hose-drogue stable control module comprises a hose-drogue model, multi-wind disturbances models and a drogue position stability controller; wherein the hose-drogue model comprises hose multi-rigid-body dynamics and kinematics, as well as drogue dynamics and kinematics; a hose-drogue assembly is fixed to the tanker; the established hose-drogue model is formed by a plurality of mass-concentrated links and a drogue; the multi-wind disturbances models comprise an atmospheric turbulence model, a tanker trailing vortex model and a receiver bow wave model, which establish a wind disturbance environment for a probe-and-drogue autonomous aerial refueling docking stage, so as to enhance authenticity of a simulation system; wind speed components of the three atmospheric disturbances in three directions are obtained by simulation, and the wind speeds are superimposed; the composite wind speed acts on mass centers of the hose and the drogue, thus affecting aerodynamic forces of the hose and the drogue, leading to a swing of the stable position of the hose and the drogue and difficulties in autonomous docking; the drogue position stability controller comprises a drogue lateral position PID (Proportional Integral Derivative) controller, a vertical position PID controller, and a drogue actuator distribution, wherein, for reducing the difficulties of the autonomous docking, the drogue lateral position PID controller and the vertical position PID controller are designed according to the stable position of the drogue to obtain desired active control forces along lateral and vertical directions; according to drogue aerodynamics, corresponding actuators of the drogue are used to generate actual active control forces, thereby reducing a swinging range of the drogue and reducing the difficulties of the autonomous docking; 3) the enable and select module
comprises visual enablement, visual navigation method selection, and control selection, wherein the visual enablement activates the bionic visual navigation system, obtains images in virtual reality (VR) simulation and performs visual navigation processing; the visual navigation method selection is firstly provided, comprising VR visual simulation, marker detection, and judging whether all markers are detected, wherein drogue markers in a VR image are processed with the eagle-eye marker detection; according to whether all designed markers on the drogue are detected, different close-range vision relative navigation methods are selected; the control selection determines whether to use a visual navigation signal according to calculated visual position differences between a probe and the drogue; 4) the close-range bionic vision relative navigation module comprises two situations: the drogue is completely detected, and the drogue is at a distance or partially blocked; according to a visual navigation method selection result, if the drogue is completely detected, then marker matching is performed, so as to perform accurate pose estimation; if the drogue is at a distance or partially blocked, so that the markers are not completely detected, then ellipse fitting is performed according to color information of the drogue, so as to perform pose estimation; and 5) the receiver relative position control module comprises a receiver altitude controller, a lateral deviation controller and a forward deviation controller; wherein the altitude controller uses the pitch angle control as an inner ring of the altitude controller, and the forward deviation controller uses the speed control as an inner ring of the forward deviation controller; the lateral deviation controller adopts a receiver straightened sideslip method, and uses the roll angle control as an inner ring; relative position control is achieved by feeding back relative positions between the tanker and the receiver.
2. A bionic visual navigation control method for autonomous aerial refueling docking, comprising steps of: step 1: establishing a receiver model, a hose-drogue model and multi-wind disturbances models, and setting initial states, comprising specific steps of: establishing a six-degree-of-freedom nonlinear tanker/receiver model:
Description
BRIEF DESCRIPTION OF THE DRAWINGS
SYMBOL REFERENCE
(6)
- (Δx,Δy,Δh)—relative position of tanker and receiver
- (x.sub.R,y.sub.R,h.sub.R)—receiver position
- (x.sub.T,y.sub.T,h.sub.T)—tanker position
- (Δx.sub.v,Δy.sub.v,Δh.sub.v)—drogue position obtained by bionic visual navigation
- (w.sub.u,w.sub.v,w.sub.w)—three components of superimposed wind disturbance
- a—tanker acceleration
- ω—tanker angle speed
- (x.sub.dro,y.sub.dro,z.sub.dro)—drogue position
- F.sub.s—expected lateral active control force of drogue
- F.sub.v—expected vertical active control force of drogue
- (act.sub.1,act.sub.2,act.sub.3,act.sub.4)—four opening angles of drogue's actuators
- (ϑ.sub.k1,ϑ.sub.k2)—two state angles of k-th link of hose
- a.sub.k—acceleration of k-th link of hose
- Q.sub.dro—drogue aerodynamic force
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
(7) Referring to
(8)
(9) step 1: establishing a receiver model, a hose-drogue model and multi-wind disturbances models, and setting the initial states
(10) According to equations (1) to (2), the receiver model and the hose-drogue model are respectively established. Assume that the midpoint of the line connecting two vortex centers at trailing edges of wings is the origin O.sub.v; Y.sub.v axis coincides with the center line, and the positive direction points to the right wing tip; X.sub.v axis and Z.sub.v axis are respectively parallel to the corresponding axes of the tanker-body coordinate system, thus the trailing vortex coordinate system O.sub.vX.sub.vY.sub.vZ.sub.v is established. The wind speed of the Hallock-Burnham model is v.sub.vor=[v.sub.vorx v.sub.vory v.sub.vorz], wherein based on the model's feature v.sub.vorx≈0, the other two components are calculated as follows:
(11)
(12) wherein (x.sub.rel y.sub.rel z.sub.rel) is the coordinate of a point in the trailing vortex coordinate system; Γ.sub.0 is the initial strength of the trailing vortex and Γ.sub.0=4G.sub.T/(πρV.sub.Tb.sub.T), G.sub.T is gravity of the tanker, ρ is density of the atmosphere, V.sub.T is the speed of the tanker, b.sub.T is span of the tanker; r.sub.c is the radius of the trailing vortex and r.sub.c=0.5√{square root over (−x.sub.rel/V.sub.T)}.
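The trailing-vortex components described above can be sketched in code. The following Python sketch is illustrative only, not the patented implementation: the Hallock-Burnham tangential-speed profile V(r)=Γ.sub.0·r/(2π(r²+r.sub.c²)) is resolved into y and z components for a counter-rotating vortex pair, and the vortex spacing b'=πb.sub.T/4 and the sign conventions are assumptions.

```python
import math

def hallock_burnham_wind(y_rel, z_rel, x_rel, G_T, rho, V_T, b_T):
    """Induced wind (v_vory, v_vorz) of a tanker trailing-vortex pair at a
    point of the trailing vortex coordinate system (illustrative sketch;
    vortex spacing and signs are assumptions, not the patent's equations)."""
    gamma0 = 4.0 * G_T / (math.pi * rho * V_T * b_T)  # initial vortex strength
    r_c = 0.5 * math.sqrt(max(-x_rel, 0.0) / V_T)     # core radius, grows downstream
    b_eff = math.pi * b_T / 4.0                       # assumed spacing of the vortex pair
    v_y = v_z = 0.0
    for y0, sign in ((-b_eff / 2.0, 1.0), (b_eff / 2.0, -1.0)):
        dy, dz = y_rel - y0, z_rel
        denom = 2.0 * math.pi * (dy * dy + dz * dz + r_c * r_c)
        v_y += sign * gamma0 * dz / denom   # lateral component
        v_z -= sign * gamma0 * dy / denom   # vertical component
    return v_y, v_z
```

On the centerline between the two vortices the lateral component cancels and only a downwash remains, which is the qualitative behavior the docking simulation relies on.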
(13) The receiver bow wave Rankine half-body model can be obtained by superposition of an average flow and a point source flow. Assume that the center of the point source flow is the origin O.sub.b; X.sub.b, Y.sub.b and Z.sub.b are respectively parallel to the corresponding axes of the receiver-body coordinate system, thus the bow wave coordinate system O.sub.bX.sub.bY.sub.bZ.sub.b is established. The bow wave wind speed can be expressed as v.sub.bow=[v.sub.bowx v.sub.bowy v.sub.bowz], and the specific expressions of the three components are as follows:
(14)
(15) wherein (x.sub.b y.sub.b z.sub.b) is the coordinate of a point in the bow wave coordinate system; in the polar coordinate system having the origin O.sub.b, the radial speed v.sub.r and the circumferential speed v.sub.e of the bow wave model can be expressed as:
(16) v.sub.r=U cos η+Q.sub.b/(2πr), v.sub.e=−U sin η
(17) wherein U is the average flow speed; Q.sub.b is the intensity of the point source flow, Q.sub.b=2πUb.sub.n, b.sub.n=h.sub.nose/π, h.sub.nose is the maximum radius of the receiver's airframe in the width direction; r is the polar diameter; η is the polar angle.
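As a minimal sketch of these polar-speed relations (assuming the standard two-dimensional Rankine half-body, with v.sub.r=U cos η+Q.sub.b/(2πr) and v.sub.e=−U sin η; the function and variable names are illustrative):

```python
import math

def bow_wave_polar_speeds(r, eta, U, h_nose):
    """Radial and circumferential speeds of the Rankine half-body bow-wave
    model: a uniform flow of speed U superposed with a point source of
    intensity Q_b = 2*pi*U*b_n, with b_n = h_nose/pi (2-D form assumed)."""
    b_n = h_nose / math.pi
    Q_b = 2.0 * math.pi * U * b_n
    v_r = U * math.cos(eta) + Q_b / (2.0 * math.pi * r)  # radial speed
    v_eta = -U * math.sin(eta)                           # circumferential speed
    return v_r, v_eta
```

At the stagnation point (r=b.sub.n, η=π) both speeds vanish, which is a quick consistency check on the superposition.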
(18) The initial states of the tanker/receiver are: after autonomous rendezvous, receiver altitude h.sub.R=7300 m, tanker/receiver speed V.sub.R=V.sub.T=180 m/s, initial distance between the tanker and the receiver Δx≈368 m, Δy≈0 m and Δh≈30 m, hose length l=22.86 m, number of hose links N=20, hose k-th link length l.sub.k=l/N=1.143 m, tanker weight m.sub.T=120000 kg, wingspan b.sub.T=39.88 m, U=V.sub.T, h.sub.nose=0.4 m, ρ=1.293 kg/m.sup.3. The simulation step size is set to 0.02 s and the simulation time is set to 200 s with mild turbulence intensity.
(19) step 2: controlling drogue position stability
(20) During the first 50 s of the simulation, the hose is subjected to the free stream and the tanker trailing vortex, so as to settle to a relatively stable position. After the first 50 s of the simulation, the influences of the receiver bow wave and the atmospheric turbulence are added, with the turbulence at mild intensity.
(21) According to the stable position of the drogue at the simulation time t=50 s, the lateral and vertical position PID controllers of the drogue are designed respectively, and the changing angles of the four control surfaces of the controllable drogue are obtained through the drogue actuator distribution, thereby changing the aerodynamic forces of the drogue to resist the influence of the multi-wind disturbances on the drogue position.
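The lateral and vertical position loops of this step are plain PID laws around the t=50 s stable position. A generic discrete PID sketch follows; the gains below are illustrative placeholders, not the patent's values, and only the 0.02 s step size comes from the text:

```python
class PID:
    """Discrete PID controller (sketch; gains are illustrative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def step(self, err):
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# desired active control forces from drogue position errors (illustrative gains)
lat_pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=0.02)
vert_pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=0.02)
F_s = lat_pid.step(0.5)    # lateral position error of 0.5 m
F_v = vert_pid.step(-0.3)  # vertical position error of -0.3 m
```

The two outputs F.sub.s and F.sub.v would then be passed to the drogue actuator distribution to set the four control-surface opening angles.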
(22) step 3: calculating the relative position of the tanker and the receiver, and performing visual enablement judgment
(23) Calculating the difference between the tanker position and the receiver position, so as to obtain the relative position Δx, Δy, Δh; if Δx>80 m (the distance between the drogue and the probe is about 40 m), providing the relative position precise control under the GPS signal, and executing the step 4; otherwise, activating the bionic visual navigation system, and executing the steps 5-7.
(24) step 4: performing the relative position precise control under the GPS signal
(25) The altitude controller is expressed as the equation (3), wherein altitude proportional, integral and derivative coefficients are k.sub.P.sup.h=0.45, k.sub.I.sup.h=0.1, k.sub.D.sup.h=0.65; the forward deviation controller is expressed as the equation (4), wherein the forward proportional coefficient is k.sub.P.sup.x=0.26; the lateral deviation controller is expressed as the equation (5), wherein lateral deviation proportional, integral and derivative coefficients are k.sub.P.sup.y=1.21, k.sub.I.sup.y=4, k.sub.D.sup.y=0.03, and the integral process threshold is e.sub.y.sup.thr=2 m.
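The lateral-deviation law of equation (5) with the gains stated above can be sketched as follows; reading the threshold e.sub.y.sup.thr as a gate on the integral term (a simple anti-windup) is an assumption:

```python
def lateral_deviation_command(e_y, integral, prev_e=0.0, dt=0.02,
                              kp=1.21, ki=4.0, kd=0.03, e_thr=2.0):
    """Lateral-deviation PID (sketch): the integral accumulates only while
    |e_y| < e_thr, i.e. the threshold is read as an anti-windup gate."""
    if abs(e_y) < e_thr:
        integral += e_y * dt
    deriv = (e_y - prev_e) / dt
    return kp * e_y + ki * integral + kd * deriv, integral
```

The caller holds the integral state between steps; with |e_y| beyond the 2 m threshold the command reduces to the proportional and derivative terms.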
(26) step 5: utilizing eagle-eye color vision mechanism for navigation method selection
(27) The eagle-eye color vision detection method is expressed as the equation (6), wherein the long wave, medium wave and short wave thresholds are L.sub.thr=20, M.sub.thr=20, S.sub.thr=20. Firstly, processing the image obtained in the VR scene with color segmentation by the eagle-eye color vision mechanism, and selecting the output response image of the long wave channel for binarization processing, thereby obtaining a binary image containing only the information of the red region; filling holes of the binary image containing only the red region, and eliminating noises, so as to obtain a drogue binary image; combining (making an "and" operation) the drogue binary image with the original image (the image obtained in the VR scene) to obtain a color image containing only the drogue; using the image color segmentation method based on the eagle-eye color vision mechanism for color segmentation of the color image containing only the drogue; selecting the output response image of the medium wave channel for binarization processing, thereby obtaining a binary image containing the green circular marker, and then performing blob detection to obtain the number of the green circular markers and the central pixel coordinate of the corresponding marker; selecting the output response image of the short wave channel for binarization processing, thereby obtaining a binary image containing the blue circular markers, and then performing blob detection to obtain the number of the blue circular markers and the central pixel coordinates of the corresponding markers; summing the numbers of the green marker and the blue markers to determine whether all seven markers are detected; if all the markers are detected, executing the step 6; otherwise, executing the step 7.
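This step reduces to per-channel color segmentation plus blob counting. The sketch below approximates the long-, medium- and short-wave responses by red, green and blue channel dominance; this approximation, and all names in it, are assumptions rather than the patented eagle-eye opponent model:

```python
import numpy as np

def channel_masks(img_rgb, L_thr=20, M_thr=20, S_thr=20):
    """Three-channel color segmentation sketch: a pixel belongs to a
    channel when that channel exceeds the other two by the threshold."""
    r = img_rgb[..., 0].astype(int)
    g = img_rgb[..., 1].astype(int)
    b = img_rgb[..., 2].astype(int)
    long_mask = (r - np.maximum(g, b)) > L_thr    # red drogue body
    medium_mask = (g - np.maximum(r, b)) > M_thr  # green circular marker
    short_mask = (b - np.maximum(r, g)) > S_thr   # blue circular markers
    return long_mask, medium_mask, short_mask
```

Blob detection over the medium and short masks would then yield the marker counts and central pixel coordinates used for the seven-marker check.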
(28) step 6: performing close-range bionic visual navigation when all the markers are detected
(29) If all the markers on the drogue are detected, matching the markers; according to the coordinates of all the markers detected in the step 5, performing convex hull transformation to clockwise sort the coordinates of all the markers; then determining the initial green marker; according to the coordinate of the green marker detected in the step 5, computing the squared difference between this coordinate and each of the seven sorted coordinates in turn, wherein the coordinate with the minimum squared difference is taken as the coordinate of the green marker; finally, matching the remaining six blue markers clockwise, so as to obtain the matching result of all the markers.
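The marker matching above (clockwise ordering, then anchoring on the green marker by minimum squared difference) can be sketched as below; sorting by polar angle about the centroid stands in for the convex hull transformation, which is adequate here only because the seven markers lie on the drogue rim:

```python
import math

def match_markers(points, green_pt):
    """Order the marker centres around their centroid, then rotate the
    ordering so the marker nearest the detected green centre comes first
    (sketch; the angular sort direction is an assumed convention)."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    ordered = sorted(points, key=lambda p: -math.atan2(p[1] - cy, p[0] - cx))
    # green marker = minimum squared difference to the detected green centre
    d2 = [(p[0] - green_pt[0]) ** 2 + (p[1] - green_pt[1]) ** 2 for p in ordered]
    k = d2.index(min(d2))
    return ordered[k:] + ordered[:k]
```

The returned list starts at the green marker, so the six blue markers follow in a fixed rotational order ready for correspondence with the 3-D model points.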
(30) Making the drogue's precise pose estimation by the RPnP pose estimation algorithm: 1) establishing the fourth-order polynomials of the drogue markers, wherein in the drogue coordinate system, a line connecting two markers is selected as the rotation axis, the midpoint of the line is the coordinate origin, and the direction of the line is the Z-axis direction, so as to establish the new coordinate system O.sub.aX.sub.aY.sub.aZ.sub.a; coordinates of the seven three-dimensional feature points in the original drogue coordinate system are converted into coordinates in the coordinate system O.sub.aX.sub.aY.sub.aZ.sub.a, and every three of the seven three-dimensional markers in the new coordinate system are combined, thus forming five point sets; according to the principle of triangular geometry, a fourth-order polynomial can be constructed for every three points, so the five point sets form an equation set with five fourth-order polynomials; 2) solving the coordinates of the markers in the camera coordinate system, wherein each fourth-order polynomial in the equation set is squared and the squares are summed, so as to construct an eighth-order cost function; by differentiating the eighth-order cost function and finding the zeros of the derivative of the cost function, at most four characteristic roots can be obtained.
The characteristic roots are used to calculate the coordinates of the seven three-dimensional markers in the camera coordinate system, wherein the Z.sub.c axis determined by the coordinates (P.sub.i0.sup.c,P.sub.j0.sup.c) in the camera coordinate system corresponding to the two markers forming the rotation axis in the first step of the RPnP algorithm is Z.sub.c=(P.sub.j0.sup.c−P.sub.i0.sup.c)/∥P.sub.j0.sup.c−P.sub.i0.sup.c∥; 3) solving the pose, wherein since the Z.sub.a axis of the coordinate system O.sub.aX.sub.aY.sub.aZ.sub.a corresponds to the Z.sub.c axis in the camera coordinate system O.sub.cX.sub.cY.sub.cZ.sub.c, the rotation matrix R between the two coordinate systems can be expressed as an equation of the rotation angle ξ, so that there is only one parameter ξ, wherein the equation is as follows:
(31)
(32) wherein H is an arbitrary orthogonal rotation matrix whose third column [h.sub.7 h.sub.8 h.sub.9].sup.T is equal to Z.sub.c, and rot(Z.sub.c,ξ) indicates a rotation by the angle ξ around the Z.sub.c axis; according to the principle of camera imaging, the projection of a three-dimensional feature point to the two-dimensional normalized image plane can be expressed as:
(33)
(34) wherein (u.sub.i,v.sub.i) is the pixel coordinate of the feature point, (X.sub.i,Y.sub.i,Z.sub.i) is the coordinate of the feature point in the coordinate system O.sub.aX.sub.aY.sub.aZ.sub.a, and t=[t.sub.x t.sub.y t.sub.z].sup.T is the translation vector between the coordinate system O.sub.aX.sub.aY.sub.aZ.sub.a and the camera coordinate system O.sub.cX.sub.cY.sub.cZ.sub.c. In order to solve for the parameter vector [cos ξ sin ξ t.sub.x t.sub.y t.sub.z 1].sup.T, the equation (16) is solved, wherein each feature point yields a 2×6 set of equations; the equation sets formed by all the feature points are combined and solved by the singular value decomposition method, so as to obtain the rotation matrix R and the translation vector t between the two coordinate systems.
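The final homogeneous solve of the stacked 2n×6 system can be sketched with a standard SVD null-space extraction; scaling the last entry to 1 recovers the vector [cos ξ sin ξ t.sub.x t.sub.y t.sub.z 1].sup.T (the function name and matrix layout are illustrative):

```python
import numpy as np

def solve_homogeneous(A):
    """Solve A x ~ 0 in the least-squares sense: the right singular vector
    of the smallest singular value, rescaled so its last entry equals 1."""
    _, _, Vt = np.linalg.svd(A)
    x = Vt[-1]              # null-space direction (up to sign and scale)
    return x / x[-1]        # enforce the trailing 1 of the parameter vector
```

In the RPnP context A stacks the two projection equations of every feature point; the recovered cos ξ and sin ξ fix the rotation R and the remaining entries give the translation t.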
(35) Finally, the drogue position in the camera coordinate system is obtained and converted between the camera coordinate system, the aircraft-body coordinate system and the earth-surface inertial coordinate system, so as to obtain the position difference Δx.sub.v, Δy.sub.v, Δh.sub.v between the drogue and the probe under the earth-surface inertial coordinate system.
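The frame conversions above chain two rigid transforms; a minimal sketch follows, with the rotation matrices and translation vectors assumed to be supplied by the navigation system:

```python
import numpy as np

def camera_to_inertial(p_cam, R_cb, t_cb, R_bi, t_bi):
    """Express a camera-frame point in the earth-surface inertial frame by
    chaining camera->body and body->inertial rigid transforms (sketch)."""
    p_body = R_cb @ p_cam + t_cb   # camera frame -> aircraft-body frame
    return R_bi @ p_body + t_bi    # aircraft-body frame -> inertial frame
```

Subtracting the probe's inertial position from the result gives the position difference (Δx.sub.v, Δy.sub.v, Δh.sub.v) used by the visual controllers.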
(36) step 7: performing close-range bionic visual navigation when the drogue is far away or partially blocked
(37) When the drogue is far away or the markers are partially blocked, processing the drogue with ellipse fitting and matching circumscribed rectangular pixel points; using matched vertex pixel points for RPnP precise pose measurement, thereby obtaining the drogue position in the camera coordinate system and converting between the camera coordinate system, the aircraft-body coordinate system and the earth-surface inertial coordinate system, so as to obtain the position difference Δx.sub.v, Δy.sub.v, Δh.sub.v between the drogue and the probe under the earth-surface inertial coordinate system.
(38) step 8: choosing the GPS signal or the bionic visual navigation signal for control and designing bionic visual navigation controllers
(39) When the distance in the x direction between the tanker and the receiver is less than Δx.sub.thr=80 m (which means the distance between the drogue and the probe is about 40 m), the position difference Δx.sub.v, Δy.sub.v, Δh.sub.v is obtained by the close-range bionic vision relative navigation through the steps 5-7; if Δx.sub.v>30 m, using the GPS signal for navigation control and performing the precise position control according to the step 4; if Δx.sub.v<30 m, using the bionic visual navigation signal for navigation control.
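The switching logic of this step can be sketched as a small decision function (thresholds from the text; names and the string return values are illustrative):

```python
def choose_signal(dx, dx_v, dx_thr=80.0, vis_thr=30.0):
    """Select the control source (sketch): GPS beyond the visual range,
    bionic vision once the visual forward distance drops below vis_thr."""
    if dx > dx_thr:
        return "gps"            # vision not yet enabled
    return "gps" if dx_v > vis_thr else "vision"
```
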
(40) The visual navigation precise position controllers can be expressed as the equations (7)-(9), wherein the visual navigation altitude control coefficients are k.sub.P.sup.vh=0.23, k.sub.I.sup.vh=0.04, k.sub.D.sup.vh=165; the forward deviation coefficient is k.sub.P.sup.vx=0.22; the lateral deviation coefficients are k.sub.P.sup.vy=0.23, k.sub.I.sup.vy=0.006, k.sub.D.sup.vy=0.12; and the lateral deviation integral threshold is Δy.sub.v.sup.thr=0.3 m.
(41) step 9: determining whether docking is successful, specifically comprising steps of:
(42) when the bionic visual navigation gives Δx.sub.v<0.1 m, judging whether Δy.sub.v<0.25 m and Δh.sub.v<0.25 m; if so, the docking is successful, then ending the docking process; otherwise, the docking fails, then decelerating the receiver and setting the forward deviation target to Δx.sub.v.sup.tar=30 m, in such a manner that the relative distance in the x direction between the tanker and the receiver will be controlled to 30 m and the docking process will be restarted.
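The docking decision above can be sketched as follows; treating the 0.25 m bounds as absolute-value bounds on the lateral and vertical misses is an assumption, and the return values are illustrative:

```python
def docking_check(dx_v, dy_v, dh_v):
    """Docking-success test (sketch): within 0.1 m forward distance, require
    lateral and vertical misses below 0.25 m; otherwise back off to a 30 m
    forward target and retry the approach."""
    if dx_v >= 0.1:
        return "approaching", None
    if abs(dy_v) < 0.25 and abs(dh_v) < 0.25:
        return "success", None
    return "retry", 30.0   # new forward-deviation target in metres
```
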
(43) One skilled in the art will understand that the embodiment of the present invention as shown in the drawings and described above is exemplary only and not intended to be limiting.
(44) It will thus be seen that the objects of the present invention have been fully and effectively accomplished. Its embodiments have been shown and described for the purposes of illustrating the functional and structural principles of the present invention and are subject to change without departing from such principles. Therefore, this invention includes all modifications encompassed within the spirit and scope of the following claims.