Abstract
Disclosed is an unmanned platform with bionic visual multi-source information and intelligent perception. The unmanned platform is equipped with a bionic polarization vision/inertia/laser radar combined navigation module, a deep learning object detection module and an autonomous obstacle avoidance module. The bionic polarization vision/inertia/laser radar combined navigation module is configured to position and orient the unmanned platform in real time; the deep learning object detection module is configured to sense an environment around the unmanned platform according to RGB images of a surrounding environment collected by the bionic polarization vision/inertia/laser radar combined navigation module; and the autonomous obstacle avoidance module determines whether there are any obstacles around the unmanned platform during running according to the target objects identified by the deep learning object detection module, and performs autonomous obstacle avoidance in combination with the carrier navigation and positioning information. Concealment, autonomous navigation, object detection and autonomous obstacle avoidance capabilities of the unmanned platform are thus improved.
Claims
1. An unmanned platform with bionic visual multi-source information and intelligent perception, comprising a bionic polarization vision/inertia/laser radar combined navigation module, a deep learning object detection module and an autonomous obstacle avoidance module, wherein the bionic polarization vision/inertia/laser radar combined navigation module is configured to position and orient the unmanned platform in real time; the deep learning object detection module is configured to sense an environment around the unmanned platform according to RGB images of a surrounding environment collected by the bionic polarization vision/inertia/laser radar combined navigation module; and the autonomous obstacle avoidance module determines whether there are any obstacles around the unmanned platform during running according to the target objects identified by the deep learning object detection module, and performs autonomous obstacle avoidance in combination with the carrier navigation and positioning information obtained by the bionic polarization vision/inertia/laser radar combined navigation module.
2. The unmanned platform with bionic visual multi-source information and intelligent perception according to claim 1, wherein the bionic polarization vision/inertia/laser radar combined navigation module comprises a bionic polarization camera, a laser radar sensor, an inertial sensor and a monocular vision camera sensor; the bionic polarization camera is configured to sense an atmospheric polarization state, and solve course angle information; the laser radar sensor is configured to obtain point cloud information of a surrounding environment and construct a dense map; the inertial sensor is configured to update positions and orientations of the unmanned platform; and the monocular vision camera sensor is configured to perform object detection of the surrounding environment.
3. The unmanned platform with bionic visual multi-source information and intelligent perception according to claim 2, wherein the bionic polarization vision/inertia/laser radar combined navigation module contains two application scenarios: a field environment and an unknown enclosed environment; the bionic polarization camera and the inertial sensor are used in the field environment to improve the navigation and orientation precision of the unmanned platform; and when the bionic polarization camera loses lock in an unknown enclosed environment, the laser radar sensor and the inertial sensor are used together to improve the navigation, positioning and mapping capabilities of the unmanned platform.
4. The unmanned platform with bionic visual multi-source information and intelligent perception according to claim 3, wherein the bionic polarization vision/inertia/laser radar combined navigation module is used in the field environment to improve the navigation and orientation precision of the unmanned platform, comprising the following working steps: (101) using the bionic polarization camera to acquire a sky polarization pattern, and solving to obtain course angle information of the unmanned platform in a horizontal state; and using the inertial sensor to measure a roll angle, a pitch angle and a course angle of the unmanned platform, performing inclination angle compensation for the course angle in the horizontal state to obtain course angle information of the unmanned platform in a non-horizontal state; and (102) fusing the course angle information of the unmanned platform in the horizontal state and the course angle information of the unmanned platform in the non-horizontal state obtained in the step (101) by using the cubature Kalman filter algorithm assisted by a radial basis function neural network, and outputting the optimal course angle information.
5. The unmanned platform with bionic visual multi-source information and intelligent perception according to claim 4, wherein the step (101) has the following specific method: since the course angle of the carrier in a geographic coordinate system in the horizontal state is ψ=α.sub.s−α.sub.b, wherein α.sub.b and α.sub.s are azimuth angles of the solar meridian in a carrier coordinate system and the geographic coordinate system respectively; under the condition that the local longitude and latitude information of the carrier and accurate time information at the time of observation are known, a solar elevation angle h.sub.s and a solar azimuth angle α.sub.s in the geographic coordinate system are respectively as follows: sin h.sub.s=sin λ sin δ+cos λ cos δ cos t.sub.s and sin α.sub.s=cos δ sin t.sub.s/cos h.sub.s; in the formula, δ, λ and t.sub.s represent a declination angle of the sun, the latitude of an observation point and an hour angle of the sun respectively; then, three-channel polarization images are collected through the bionic polarization camera carried by the unmanned platform, feature point sets are searched and identified based on all-sky-domain polarization data by using a feature that an E-vector in a direction of the solar meridian is perpendicular to a scattering plane, the feature points are subjected to linear fitting to solve α.sub.b, and finally the course angle ψ of the carrier in the geographic coordinate system in the horizontal state can be solved; when the unmanned platform is in the non-horizontal state, an E-vector direction will not be perpendicular to the solar meridian, but perpendicular to a solar vector, as shown in the formula below:
e.sub.p.sup.n·a.sub.OS.sup.n=0 in the formula, n represents a navigation coordinate system, b represents the carrier coordinate system, a.sub.OS.sup.n represents a sun vector in the navigation coordinate system, and e.sub.p.sup.n represents the E-vector in the navigation coordinate system; the E-vector direction in the carrier coordinate system is defined as e.sub.p.sup.b=[cos φ sin φ 0].sup.T, wherein φ represents a clockwise angle between a reference direction of the bionic polarization camera and an E-vector direction of incident light, and C.sub.b.sup.n represents a direction cosine matrix from the carrier coordinate system to the navigation coordinate system, with e.sub.p.sup.n=C.sub.b.sup.ne.sub.p.sup.b; then in the navigation coordinate system, the sun vector a.sub.OS.sup.n can be calculated from the solar azimuth angle α.sub.s and the solar elevation angle h.sub.s, as shown in the following formula: a.sub.OS.sup.n=[a.sub.OSx.sup.n a.sub.OSy.sup.n a.sub.OSz.sup.n].sup.T=[cos h.sub.s cos α.sub.s cos h.sub.s sin α.sub.s sin h.sub.s].sup.T; in the formula, a.sub.OSx.sup.n, a.sub.OSy.sup.n, a.sub.OSz.sup.n represent projections of the solar vector in the navigation system; combining the above formulas, the course angle ψ after inclination angle compensation in the non-horizontal state can be calculated: ψ=arcsin c+arctan d, wherein c and d are intermediate parameters of the solution, and in the formula, γ and θ represent the roll angle and the pitch angle of the carrier in the non-horizontal state, which are solved by the inertial sensor.
6. The unmanned platform with bionic visual multi-source information and intelligent perception according to claim 4, wherein the step (102) has the following specific method: first, the inertial sensor realizes initial alignment through the course angle information provided by the bionic polarization camera, the bionic polarization vision/inertial combination model is then established, the course angle information output by the inertial system is taken as the quantity of state, the course angle information obtained by the bionic polarization camera through the inclination angle compensation is used as the quantity measurement, and the cubature Kalman filter algorithm is adopted to conduct the data fusion; a state model in a discrete form is expressed as: x.sub.k=f(x.sub.k−1, u.sub.k−1)+w.sub.k−1 and z.sub.k=h(x.sub.k)+v.sub.k; in the formula, x.sub.k and x.sub.k−1 represent the quantity of state at moments k and k−1 respectively; u.sub.k−1 is a filter input; z.sub.k is the quantity measurement; f( ) and h( ) are a state transition equation and a measurement equation respectively; and w.sub.k−1 and v.sub.k are uncorrelated zero-mean Gaussian process noise and measurement noise respectively; a radial basis function neural network is introduced into a cubature Kalman filter framework to perform error compensation on the filter: first, a window size h is set; within one window, the cubature Kalman filter algorithm iteratively estimates the optimal course angle information for h moments, and the quantities of state within the window, the innovations of the cubature Kalman filter and the optimal estimated course angle information are input to the radial basis function neural network for training; when data of the next moment come, the quantity of state, as well as gains and innovations of the cubature Kalman filter at that moment, are input to the radial basis function neural network trained in the previous window, and the input and the output are as follows: in the formula, r represents a certain moment when the r.sup.th data within the window come, K.sub.r and I.sub.r represent gains and innovations of the cubature Kalman filter within the window respectively, p.sub.r, v.sub.r and q.sub.r represent the quantities of state of positions, velocities and attitudes of the inertial sensor within the window, z.sub.r represents the quantity measurement collected by the bionic polarization camera within the window, y.sub.r represents compensation output of the neural network, b.sub.bias represents a bias matrix of network nodes, and W represents a weight matrix of the network nodes; exp(−‖x−μ‖.sup.2/(2σ.sup.2)) represents a Gaussian kernel, wherein μ and σ represent a center and a width of the Gaussian kernel function respectively, and ‖·‖ represents a Euclidean distance; estimated values of the cubature Kalman filter algorithm at the current moment are then compensated and corrected through error values output by the radial basis function neural network; data of the next window size h, starting from the current correction moment, are used to continue training the radial basis function neural network; when data of the next moment after the end of a second window come, the estimated values of the cubature Kalman filter algorithm are compensated and corrected again through error values output by the retrained radial basis function neural network, and iteration is performed in such a way, so that the cubature Kalman filter can be optimized by the radial basis function neural network.
7. The unmanned platform with bionic visual multi-source information and intelligent perception according to claim 3, wherein the bionic polarization vision/inertia/laser radar combined navigation module is used in the unknown enclosed environment to improve the navigation, positioning and mapping capabilities of the unmanned platform, comprising the following working steps: (201) pre-integrating the inertial sensor, and performing discretization to obtain the relationship among positions, velocities and attitudes at the moments of two quantity measurement values of the inertial sensor by using a median method; (202) designing a residual term through the pre-integration of the inertial sensor in the step (201) and calculating a Jacobian of the residual term relative to variables to be optimized to obtain pre-integration relative constraints, fusing the pre-integration relative constraints with relative pose constraints of the laser radar, constructing a nonlinear optimization model, constructing a laser/inertia odometry, and solving the optimal pose; and (203) mapping the unknown enclosed environment according to landmark points obtained by the laser/inertia odometry in the step (202).
8. The unmanned platform with bionic visual multi-source information and intelligent perception according to claim 7, wherein the step (201) has the following specific method: in the step (201), navigation is performed according to the measurement information of the inertial sensor and the laser radar when the unmanned platform enters the unknown enclosed environment and the bionic polarization camera suffers a failure; first, pre-integration is carried out on the inertial sensor, and a continuous expression of a pre-integration method is as follows: in the formula, subscript b represents the carrier coordinate system, w is a world coordinate system, a time interval between two frames is Δt, p.sub.b.sub.k+1.sup.w, p.sub.b.sub.k.sup.w represent positions at moments k+1 and k in the world coordinate system respectively, v.sub.b.sub.k+1.sup.w, v.sub.b.sub.k.sup.w represent velocities at the moments k+1 and k in the world coordinate system respectively, q.sub.b.sub.k+1.sup.w, q.sub.b.sub.k.sup.w represent attitudes at the moments k+1 and k in the world coordinate system respectively, a.sub.b.sub.t and ω.sub.b.sub.t represent outputs of an accelerometer and a gyroscope at a moment t, b.sub.a.sub.t, b.sub.ω.sub.t represent zero bias of the accelerometer and the gyroscope at the moment t, n.sub.a and n.sub.ω represent random noise of the accelerometer and the gyroscope, and g.sup.w represents the gravitational acceleration in the world coordinate system; the median method is adopted to perform discretization, and the relationship between the (i+1).sup.th and the i.sup.th outputs of the inertial sensor is as follows: in the formula, p.sub.b.sub.i+1.sup.w, p.sub.b.sub.i.sup.w represent positions at the (i+1).sup.th and the i.sup.th output moments of the inertial sensor after discretization in the world coordinate system respectively, v.sub.b.sub.i+1.sup.w, v.sub.b.sub.i.sup.w represent velocities at the (i+1).sup.th and the i.sup.th output moments of the inertial sensor after discretization in the world coordinate system respectively, q.sub.b.sub.i+1.sup.w, q.sub.b.sub.i.sup.w represent attitudes at the (i+1).sup.th and the i.sup.th output moments of the inertial sensor after discretization in the world coordinate system respectively, and Δt represents a time interval between the (i+1).sup.th and the i.sup.th outputs of the inertial sensor; and acceleration ā and angular velocity ω̄ are average values of two quantity measurements, that is: in the formula, a.sub.b.sub.i, a.sub.b.sub.i+1 represent outputs of the accelerometer at the i.sup.th and the (i+1).sup.th output moments respectively, ω.sub.b.sub.i, ω.sub.b.sub.i+1 represent outputs of the gyroscope at the i.sup.th and the (i+1).sup.th output moments respectively, b.sub.a.sub.i, b.sub.a.sub.i+1 represent zero bias of the accelerometer at the i.sup.th and the (i+1).sup.th output moments respectively, and b.sub.ω.sub.i, b.sub.ω.sub.i+1 represent zero bias of the gyroscope at the i.sup.th and the (i+1).sup.th output moments respectively.
9. The unmanned platform with bionic visual multi-source information and intelligent perception according to claim 7, wherein the step (202) has the following specific method: in the step (202), defining the quantity of state to be optimized and estimated of the system within a sliding window as: in the formula, the sliding window is provided with the quantities of state pre-integrated by the inertial sensor at s moments and the quantities of state obtained by the laser radar at s moments, the laser radar has a frequency of 10 Hz, the inertial sensor has a frequency of 200 Hz, and pre-integration is carried out between every two frames of the laser radar through the inertial sensor; x.sub.1.sup.imu, x.sub.k.sup.imu, x.sub.s.sup.imu represent pre-integrated states of the inertial sensor at an initial moment (a first moment), a moment k and a moment s respectively; x.sub.1.sup.lidar, x.sub.k.sup.lidar, x.sub.s.sup.lidar represent the quantities of state of the laser radar at the initial moment, the moment k and the moment s respectively; b.sub.a.sub.k, b.sub.ω.sub.k are zero bias of the accelerometer and the gyroscope at the moment k; and p.sub.l.sub.k.sup.w, q.sub.l.sub.k.sup.w are a position and an attitude estimated by the laser radar at the moment k; after state variables are defined, the residual terms of the inertial sensor and the laser radar are designed, the Jacobian of each residual term relative to the variables to be optimized is calculated, and a nonlinear optimization function of the laser radar/inertial combined navigation is then constructed: in the formula, ‖·‖ represents a Mahalanobis distance of the vector, r.sub.B.sup.k( ) represents a measurement residual of the inertial sensor, r.sub.L.sup.k( ) represents a point cloud measurement residual of the laser radar, z.sub.k.sup.k+1 represents relative measurement variation of the inertial sensor between the moments k and k+1, and m.sub.k represents measurement information of the laser radar at the moment k; finally, a Gauss-Newton method is adopted to solve the nonlinear optimization function, so as to obtain the optimal pose estimation.
10. The unmanned platform with bionic visual multi-source information and intelligent perception according to claim 2, wherein the deep learning object detection module has the following specific method: first, the monocular vision camera sensor is used to collect RGB images of a surrounding environment, with a collection frequency of 30 Hz; a bilinear interpolation algorithm is then adopted to adjust the images to a size of 640×640, and a median filtering algorithm is adopted to smoothly denoise the images and complete the pre-processing of the images; a multi-directional blur superposition algorithm is adopted to simulate the blurred-image problem caused by a vision camera in a real environment, as shown in the following formula: in the formula, P.sub.stack.sup.R, P.sub.stack.sup.G, P.sub.stack.sup.B represent pixel values of the corresponding coordinates after superposition on the three RGB channels respectively, P.sub.j.sup.R, P.sub.j.sup.G, P.sub.j.sup.B represent pixel values of the original coordinates on the three RGB channels respectively, α.sub.j represents a transparency factor of the superposed images, and T.sub.j represents a blurring superposition displacement matrix, which is determined by a superposition direction, a step length and the number of superposed images; and an image data set simulated by the multi-directional blur superposition algorithm is input to a YOLOv7 deep learning network for training, and finally enhanced images are input to the trained YOLOv7 deep learning network to obtain object detection frames and vertex coordinates of various objects in the images.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0061] FIG. 1 shows a hardware platform of an unmanned platform with bionic visual multi-source information and intelligent perception;
[0062] FIG. 2 is a schematic principle diagram of the present disclosure;
[0063] FIG. 3 is a bionic polarization vision/inertia combined navigation framework in a field environment;
[0064] FIG. 4 shows a schematic diagram of the corresponding relationship between inertial sensor output/laser radar output and pre-integration;
[0065] FIG. 5 is a flow chart of a deep learning object detection module;
[0066] FIG. 6 shows a comparison chart, from vehicle driving experiments, of a course angle solved by a bionic polarization camera, a course angle solved by an inertial sensor, a reference course angle of a high-precision navigation and positioning system, and a course angle fused and optimized by a cubature Kalman filter;
[0067] FIG. 7 shows results of object recognition performed by a deep learning object detection module during the traveling of an unmanned platform; and
[0068] FIG. 8 shows depth information of a surrounding environment during the traveling of an unmanned platform.
DETAILED DESCRIPTIONS OF THE EMBODIMENTS
[0069] The technical solution of the present disclosure will be further described below in conjunction with the accompanying drawings.
[0070] As shown in FIG. 1, the present disclosure provides an unmanned platform with bionic visual multi-source information and intelligent perception. The unmanned platform is equipped with a bionic polarization camera, a laser radar sensor, an inertial sensor and a monocular vision camera sensor.
[0071] The bionic polarization camera is configured to sense an atmospheric polarization state, and solve course angle information; the laser radar sensor is configured to obtain point cloud information of a surrounding environment and construct a dense map; the inertial sensor is configured to update positions and orientations of the unmanned platform; and the monocular vision camera sensor is configured to perform object detection of the surrounding environment.
[0072] As shown in FIG. 2, the unmanned platform with bionic visual multi-source information and intelligent perception includes three major modules: a bionic polarization vision/inertia/laser radar combined navigation module, a deep learning object detection module and an autonomous obstacle avoidance module. The three major modules have different functions and are combined to realize the multi-source information and intelligent perception function of the unmanned platform.
[0073] The bionic polarization vision/inertia/laser radar combined navigation module is configured to position and orient the unmanned platform in real time, the deep learning object detection module is configured to sense an environment around the unmanned platform, and the autonomous obstacle avoidance module is configured to achieve autonomous obstacle avoidance of the unmanned platform by combining its own positioning information with a target object identified in the environment.
[0074] The bionic polarization vision/inertia/laser radar combined navigation module contains two application scenarios: a field environment and an unknown enclosed environment. The bionic polarization camera and the inertial sensor are combined and used in the field environment to improve the navigation and orientation precision of the unmanned platform; when the bionic polarization camera loses lock in an unknown enclosed environment, the laser radar sensor and the inertial sensor are used together to improve the navigation, positioning and mapping capabilities of the unmanned platform.
[0075] Further, working steps of the bionic polarization vision/inertia/laser radar combined navigation module in the field environment are as follows: [0076] (101) using the bionic polarization camera to acquire a sky polarization pattern, and solving to obtain course angle information of the unmanned platform in a horizontal state; and using the inertial sensor to measure a roll angle, a pitch angle and a course angle of the unmanned platform, performing inclination angle compensation for the course angle in the horizontal state to obtain course angle information of the unmanned platform in a non-horizontal state; and [0077] (102) fusing the course angle information obtained by the bionic polarization camera and the inertial sensor through the inclination angle compensation in the step (101) by using the cubature Kalman filter algorithm assisted by a radial basis function neural network, and outputting the optimal course angle information.
[0078] An overall framework of the bionic polarization vision/inertial combined navigation module in the field environment is shown in FIG. 3, where the bionic polarization camera provides the course angle information by detecting polarization information of an atmospheric polarization pattern, and the inertial sensor realizes the initial course alignment through the bionic polarization camera. The bionic polarization camera is integrated with the inertial sensor: on the one hand, the inclination angle compensation is performed on the bionic polarization camera through the pitch angle and the roll angle provided by the inertial system, to improve the solving precision of the course angle of the unmanned platform in the non-horizontal state; on the other hand, a bionic polarization vision/inertial combination model is established, the course angle information output by the inertial sensor is taken as the quantity of state, the course angle solved by the bionic polarization camera is taken as the quantity measurement, and the cubature Kalman filter algorithm is adopted to conduct the fusion. In order to overcome the optimal estimation error generated by the filter, the cubature Kalman filter assisted by a radial basis function neural network is adopted to finally output the optimal course angle information.
[0079] Further, in the step (101), it can be concluded from the distribution pattern of atmospheric polarization that the polarization pattern observed by the carrier varies with the spatial position of the sun and rotates as the carrier's course changes. By solving azimuth information of the solar meridian in a carrier coordinate system, the course angle of the carrier in a geographic coordinate system is obtained, that is: ψ=α.sub.s−α.sub.b, where α.sub.b and α.sub.s are azimuth angles of the solar meridian in the carrier coordinate system and the geographic coordinate system respectively. Under the condition that the local longitude and latitude information of the carrier and accurate time information at the time of observation are known, a solar elevation angle h.sub.s and a solar azimuth angle α.sub.s in the geographic coordinate system are respectively as follows:
[00013] sin h.sub.s=sin λ sin δ+cos λ cos δ cos t.sub.s, sin α.sub.s=cos δ sin t.sub.s/cos h.sub.s [0080] where δ, λ and t.sub.s represent a declination angle of the sun, the latitude of an observation point and an hour angle of the sun respectively.
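The two relations above can be evaluated directly. The following is a minimal sketch, not part of the disclosure: the function name and the clamping of the arcsine argument are choices of this illustration.

```python
import math

def solar_position(delta, lam, t_s):
    """Solar elevation h_s and azimuth alpha_s (radians) from the sun's
    declination delta, the observation latitude lam and the hour angle t_s,
    using the spherical-astronomy relations quoted above."""
    sin_h = (math.sin(lam) * math.sin(delta)
             + math.cos(lam) * math.cos(delta) * math.cos(t_s))
    h_s = math.asin(sin_h)
    # azimuth from sin(alpha_s) = cos(delta) sin(t_s) / cos(h_s); clamp for safety
    sin_a = math.cos(delta) * math.sin(t_s) / math.cos(h_s)
    alpha_s = math.asin(max(-1.0, min(1.0, sin_a)))
    return h_s, alpha_s
```

For example, at the equinox (δ = 0), at solar noon (t.sub.s = 0) and latitude 45°, the elevation is 45° and the azimuth is 0.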
[0081] Then, three-channel polarization images are collected through the bionic polarization camera carried by the unmanned platform, feature point sets are searched and identified based on all-sky-domain polarization data by using a feature that an E-vector in a direction of the solar meridian is perpendicular to a scattering plane, and the feature points are subjected to a linear fitting once to solve ?.sub.b. Finally, the course angle ? of the carrier in the geographic coordinate system in the horizontal state can be solved.
[0082] When the unmanned platform is in the non-horizontal state, an E-vector direction will not be perpendicular to the solar meridian, but perpendicular to a solar vector, as shown in the formula below:
e.sub.p.sup.n·a.sub.OS.sup.n=0 [0083] where n represents a navigation coordinate system, b represents the carrier coordinate system, a.sub.OS.sup.n represents a sun vector in the navigation coordinate system, and e.sub.p.sup.n represents the E-vector in the navigation coordinate system. The E-vector direction in the carrier coordinate system is defined as e.sub.p.sup.b=[cos φ sin φ 0].sup.T, where φ represents a clockwise angle between a reference direction of the bionic polarization camera and an E-vector direction of incident light, and C.sub.b.sup.n represents a direction cosine matrix from the carrier coordinate system to the navigation coordinate system, with e.sub.p.sup.n=C.sub.b.sup.ne.sub.p.sup.b. Then in the navigation coordinate system, the sun vector a.sub.OS.sup.n can be calculated from the solar azimuth angle α.sub.s and the solar elevation angle h.sub.s, as shown in the following formula:
[00014] a.sub.OS.sup.n=[a.sub.OSx.sup.n a.sub.OSy.sup.n a.sub.OSz.sup.n].sup.T=[cos h.sub.s cos α.sub.s cos h.sub.s sin α.sub.s sin h.sub.s].sup.T [0084] where a.sub.OSx.sup.n, a.sub.OSy.sup.n, a.sub.OSz.sup.n represent projections of the solar vector in the navigation system. Combining the above formulas, the course angle ψ after inclination angle compensation in the non-horizontal state can be calculated as ψ=arcsin c+arctan d, where c and d are intermediate parameters of the solution, and
[00015] [0085] in the formula, γ and θ represent the roll angle and the pitch angle of the carrier in the non-horizontal state, which are solved by the inertial sensor.
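Because the closed-form expressions for c and d appear in the original drawings, a numerical sketch is given here instead: the course angle is recovered by searching for the yaw that satisfies the constraint e.sub.p.sup.n·a.sub.OS.sup.n=0. The Z-Y-X (yaw-pitch-roll) rotation order and the east-north-up sun vector are assumptions of this sketch, and the constraint admits more than one root (the disclosed method resolves that ambiguity separately); the sketch simply returns one root.

```python
import math

def rotate_zyx(v, psi, theta, gamma):
    """Apply roll gamma (about x), pitch theta (about y), then yaw psi (about z)."""
    x, y, z = v
    y, z = (y * math.cos(gamma) - z * math.sin(gamma),
            y * math.sin(gamma) + z * math.cos(gamma))
    x, z = (x * math.cos(theta) + z * math.sin(theta),
            -x * math.sin(theta) + z * math.cos(theta))
    x, y = (x * math.cos(psi) - y * math.sin(psi),
            x * math.sin(psi) + y * math.cos(psi))
    return (x, y, z)

def course_from_constraint(phi, gamma, theta, h_s, alpha_s):
    """Brute-force search for a yaw psi with e_p^n . a_OS^n = 0, where
    e_p^b = [cos phi, sin phi, 0]^T and a_OS^n is built from the solar
    azimuth alpha_s and elevation h_s."""
    a = (math.cos(h_s) * math.cos(alpha_s),
         math.cos(h_s) * math.sin(alpha_s),
         math.sin(h_s))
    e_b = (math.cos(phi), math.sin(phi), 0.0)
    best_psi, best_res = 0.0, float("inf")
    for k in range(3600):  # 0.1-degree grid over [-180, 180)
        psi = -math.pi + k * (2.0 * math.pi / 3600.0)
        e_n = rotate_zyx(e_b, psi, theta, gamma)
        res = abs(sum(p * q for p, q in zip(e_n, a)))
        if res < best_res:
            best_psi, best_res = psi, res
    return best_psi
```

In the horizontal case (γ = θ = 0) this reduces to the meridian relation of paragraph [0079], up to the known ±90° E-vector ambiguity.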
[0086] Further, in the step (102), first, the inertial sensor realizes initial alignment through the course angle information provided by the bionic polarization camera, the bionic polarization vision/inertial combination model is then established, the course angle information output by the inertial system is taken as the quantity of state, the course angle information obtained by the bionic polarization camera through the inclination angle compensation is used as the quantity measurement, and the cubature Kalman filter algorithm is adopted to conduct the data fusion.
[0087] A state model in a discrete form is expressed as:
[00016] x.sub.k=f(x.sub.k−1, u.sub.k−1)+w.sub.k−1, z.sub.k=h(x.sub.k)+v.sub.k [0088] where x.sub.k and x.sub.k−1 represent the quantity of state at moments k and k−1 respectively; u.sub.k−1 is a filter input; z.sub.k is the quantity measurement; f( ) and h( ) are a state transition equation and a measurement equation respectively; and w.sub.k−1 and v.sub.k are uncorrelated zero-mean Gaussian process noise and measurement noise respectively; [0089] a radial basis function neural network is introduced into a cubature Kalman filter framework to perform error compensation on the filter: first, a window size h is set; within one window, the cubature Kalman filter algorithm iteratively estimates the optimal course angle information for h moments, and the quantities of state within the window, the innovations of the cubature Kalman filter and the optimal estimated course angle information are input to the radial basis function neural network for training. When data of the next moment come, the quantity of state, as well as gains and innovations of the cubature Kalman filter at that moment, are input to the radial basis function neural network trained in the previous window, and the input and the output are as follows:
[00017] [0090] where r represents a certain moment when the r.sup.th data within the window come, K.sub.r and I.sub.r represent gains and innovations of the cubature Kalman filter within the window respectively, p.sub.r, v.sub.r and q.sub.r represent the quantities of state of positions, velocities and attitudes of the inertial sensor within the window, z.sub.r represents the quantity measurement collected by the bionic polarization camera within the window; y.sub.r represents compensation output of the neural network, b.sub.bias represents a bias matrix of network nodes, W represents a weight matrix of the network nodes, and
[00018] exp(−‖x−μ‖.sup.2/(2σ.sup.2)) [0091] represents a Gaussian kernel, where μ and σ represent a center and a width of the Gaussian kernel function respectively, and ‖·‖ represents a Euclidean distance.
[0092] Estimated values of the cubature Kalman filter algorithm at the current moment are compensated and corrected through error values output by the radial basis function neural network; data of the next window size h, starting from the current correction moment, are used to continue training the radial basis function neural network. When data of the next moment after the end of a second window come, the estimated values of the cubature Kalman filter algorithm are compensated and corrected again through error values output by the retrained radial basis function neural network, and iteration is performed in such a way, so that the cubature Kalman filter can be optimized by the radial basis function neural network.
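The windowed compensation scheme above can be sketched as follows. The Gaussian-kernel layer matches paragraph [0091]; fitting the output weights by linear least squares over the window is an assumption of this sketch, not the disclosed training rule.

```python
import numpy as np

def rbf_features(x, centers, sigma):
    """Gaussian kernel phi_j(x) = exp(-||x - mu_j||^2 / (2 sigma^2))."""
    d = np.linalg.norm(centers - x, axis=1)
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

class RBFCompensator:
    """Toy radial-basis-function error compensator: y = W @ phi(x) + b_bias."""

    def __init__(self, centers, sigma):
        self.centers = np.asarray(centers, dtype=float)
        self.sigma = float(sigma)
        self.W = None
        self.b = None

    def fit(self, X, Y):
        """Least-squares fit of W and the bias over one window of filter data
        (inputs X: states/gains/innovations, targets Y: estimation errors)."""
        Phi = np.array([rbf_features(x, self.centers, self.sigma) for x in X])
        A = np.hstack([Phi, np.ones((len(X), 1))])  # append bias column
        sol, *_ = np.linalg.lstsq(A, np.asarray(Y, dtype=float), rcond=None)
        self.W, self.b = sol[:-1], sol[-1]

    def predict(self, x):
        return rbf_features(x, self.centers, self.sigma) @ self.W + self.b
```

In use, after each window of h filter steps, `fit` is called on that window's data, and `predict` supplies the correction applied to the next cubature Kalman filter estimates until the following window has been trained.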
[0093] Further, working steps of the bionic polarization vision/inertia/laser radar combined navigation module in the unknown enclosed environment are as follows: [0094] (201) pre-integrating the inertial sensor, and performing discretization to obtain the relationship among positions, velocities and attitudes at the moments of two quantity measurement values of the inertial sensor by using a median method; [0095] (202) designing a residual term through the pre-integration of the inertial sensor in the step (201) and calculating a Jacobian of the residual term relative to variables to be optimized to obtain pre-integration relative constraints, fusing the pre-integration relative constraints with relative pose constraints of the laser radar, constructing a nonlinear optimization model, constructing a laser/inertia odometry, and solving the optimal pose; and [0096] (203) mapping the unknown enclosed environment according to landmark points obtained by the laser/inertia odometry in the step (202).
[0097] Further, in the step (201), navigation is performed according to the measurement information of the inertial sensor and the laser radar when the unmanned platform enters the unknown enclosed environment and the bionic polarization camera suffers a failure. First, pre-integration is carried out on the inertial sensor, and the continuous expression of the pre-integration method is as follows:
$$p^{w}_{b_{k+1}} = p^{w}_{b_k} + v^{w}_{b_k}\,\Delta t + \iint_{t \in [k,\,k+1]} \left( R^{w}_{b_t}\left( a_{b_t} - b_{a_t} - n_a \right) - g^{w} \right) \mathrm{d}t^2$$
$$v^{w}_{b_{k+1}} = v^{w}_{b_k} + \int_{t \in [k,\,k+1]} \left( R^{w}_{b_t}\left( a_{b_t} - b_{a_t} - n_a \right) - g^{w} \right) \mathrm{d}t$$
$$q^{w}_{b_{k+1}} = q^{w}_{b_k} \otimes \int_{t \in [k,\,k+1]} \frac{1}{2}\,\Omega\!\left( \omega_{b_t} - b_{\omega_t} - n_\omega \right) q^{b_k}_{b_t}\,\mathrm{d}t$$

[0098] where b represents the carrier coordinate system, w represents the world coordinate system, and the time interval between the two frames is Δt. p^w_{b_{k+1}} and p^w_{b_k} represent the positions at the moments k+1 and k in the world coordinate system respectively; v^w_{b_{k+1}} and v^w_{b_k} represent the velocities at the moments k+1 and k in the world coordinate system respectively; q^w_{b_{k+1}} and q^w_{b_k} represent the attitudes at the moments k+1 and k in the world coordinate system respectively; a_{b_t} and ω_{b_t} represent the outputs of the accelerometer and the gyroscope at a moment t; b_{a_t} and b_{ω_t} represent the zero biases of the accelerometer and the gyroscope at the moment t; n_a and n_ω represent the random noise of the accelerometer and the gyroscope; R^w_{b_t} represents the rotation from the carrier coordinate system to the world coordinate system at the moment t; Ω(·) represents the quaternion right-multiplication matrix of the angular velocity; and g^w represents the gravitational acceleration in the world coordinate system.
[0099] The median method is adopted to perform discretization, and the relationship between the i-th and the (i+1)-th outputs of the inertial sensor is as follows:
$$p^{w}_{b_{i+1}} = p^{w}_{b_i} + v^{w}_{b_i}\,\Delta t + \frac{1}{2}\,\bar{a}\,\Delta t^2$$
$$v^{w}_{b_{i+1}} = v^{w}_{b_i} + \bar{a}\,\Delta t$$
$$q^{w}_{b_{i+1}} = q^{w}_{b_i} \otimes \begin{bmatrix} 1 \\ \frac{1}{2}\,\bar{\omega}\,\Delta t \end{bmatrix}$$

[0100] where p^w_{b_{i+1}} and p^w_{b_i} represent the positions at the (i+1)-th and the i-th output moments of the inertial sensor after discretization in the world coordinate system respectively; v^w_{b_{i+1}} and v^w_{b_i} represent the velocities at the (i+1)-th and the i-th output moments of the inertial sensor after discretization in the world coordinate system respectively; q^w_{b_{i+1}} and q^w_{b_i} represent the attitudes at the (i+1)-th and the i-th output moments of the inertial sensor after discretization in the world coordinate system respectively; and Δt represents the time interval between the (i+1)-th and the i-th outputs of the inertial sensor. The acceleration ā and the angular velocity ω̄ are the average values of the two quantity measurements, that is:
$$\bar{a} = \frac{1}{2}\left( q^{w}_{b_i}\left( a_{b_i} - b_{a_i} \right) + q^{w}_{b_{i+1}}\left( a_{b_{i+1}} - b_{a_{i+1}} \right) \right) - g^{w}$$
$$\bar{\omega} = \frac{1}{2}\left( \left( \omega_{b_i} - b_{\omega_i} \right) + \left( \omega_{b_{i+1}} - b_{\omega_{i+1}} \right) \right)$$

[0101] where a_{b_i} and a_{b_{i+1}} represent the outputs of the accelerometer at the i-th and the (i+1)-th output moments respectively; ω_{b_i} and ω_{b_{i+1}} represent the outputs of the gyroscope at the i-th and the (i+1)-th output moments respectively; b_{a_i} and b_{a_{i+1}} represent the zero biases of the accelerometer at the i-th and the (i+1)-th output moments respectively; and b_{ω_i} and b_{ω_{i+1}} represent the zero biases of the gyroscope at the i-th and the (i+1)-th output moments respectively.
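One median-method propagation step, as described above, can be sketched in code. This is a hedged illustration under standard conventions (Hamilton quaternions in [w, x, y, z] order, specific-force accelerometer model); the function names are hypothetical, and the noise terms n_a and n_ω are omitted as in deterministic propagation.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions q, r = [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q (q * [0, v] * q^-1)."""
    qv = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]

def midpoint_step(p, v, q, a_i, a_i1, w_i, w_i1, b_a, b_w, g_w, dt):
    """One median-method IMU integration step in the world frame."""
    # mean bias-free angular rate over the interval
    w_bar = 0.5 * (w_i + w_i1) - b_w
    # small-angle quaternion increment [1, w_bar * dt / 2]
    dq = np.concatenate(([1.0], 0.5 * w_bar * dt))
    q_next = quat_mul(q, dq)
    q_next /= np.linalg.norm(q_next)
    # mean bias-free acceleration rotated to world frame, gravity removed
    a_bar = 0.5 * (quat_rotate(q, a_i - b_a)
                   + quat_rotate(q_next, a_i1 - b_a)) - g_w
    p_next = p + v * dt + 0.5 * a_bar * dt ** 2
    v_next = v + a_bar * dt
    return p_next, v_next, q_next
```

For a stationary carrier with identity attitude and zero biases, the measured specific force cancels gravity and the state stays fixed, which is a quick sanity check of the sign conventions.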
[0102] Further, in the step (202), the quantity of state to be optimized and estimated of the system within a sliding window is first defined as:

$$\chi = \left[ x^{imu}_{1}, \ldots, x^{imu}_{k}, \ldots, x^{imu}_{s},\; x^{lidar}_{1}, \ldots, x^{lidar}_{k}, \ldots, x^{lidar}_{s} \right]$$
$$x^{imu}_{k} = \left[ p^{w}_{b_k},\; v^{w}_{b_k},\; q^{w}_{b_k},\; b_{a_k},\; b_{\omega_k} \right], \qquad x^{lidar}_{k} = \left[ p^{w}_{l_k},\; q^{w}_{l_k} \right]$$

[0103] where the sliding window contains the quantities of state pre-integrated by the inertial sensor at s moments and the quantities of state obtained by the laser radar at s moments; the laser radar has a frequency of 10 Hz, the inertial sensor has a frequency of 200 Hz, and pre-integration is carried out between every two frames of the laser radar through the inertial sensor. x^imu_1, x^imu_k and x^imu_s represent the pre-integrated states of the inertial sensor at the initial moment (the first moment), the moment k and the moment s respectively; x^lidar_1, x^lidar_k and x^lidar_s represent the quantities of state of the laser radar at the initial moment (the first moment), the moment k and the moment s respectively; b_{a_k} and b_{ω_k} are the zero biases of the accelerometer and the gyroscope at the moment k; and p^w_{l_k} and q^w_{l_k} are the position and the attitude estimated by the laser radar at the moment k. The corresponding relationship between the inertial sensor output/laser radar output and the pre-integration is shown in FIG. 4.
[0104] After the state variables are defined, the residual terms of the inertial sensor and the laser radar are designed, the Jacobian of the residual terms relative to the variables to be optimized is calculated, and the nonlinear optimization function of the laser radar/inertial combined navigation is then constructed:
$$\min_{\chi} \left\{ \sum_{k \in B} \left\lVert r^{k}_{B}\!\left( z^{k+1}_{k}, \chi \right) \right\rVert^{2} + \sum_{k \in L} \left\lVert r^{k}_{L}\!\left( m_{k}, \chi \right) \right\rVert^{2} \right\}$$

[0105] In the formula, ‖·‖ represents the Mahalanobis distance of the vector, r^k_B(·) represents the measurement residual of the inertial sensor, r^k_L(·) represents the point cloud measurement residual of the laser radar, z^{k+1}_k represents the relative measurement variation of the inertial sensor between the moments k and k+1, m_k represents the measurement information of the laser radar at the moment k, and B and L denote the sets of inertial pre-integration measurements and laser radar measurements within the sliding window.
[0106] Finally, a Gauss-Newton method is adopted to solve the nonlinear optimization function, so as to obtain the optimal pose estimation.
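As a sketch of this final solving step, a generic Gauss-Newton iteration on a stacked residual vector might look as follows. The residual used here is a toy example for illustration only; the actual residuals r_B and r_L and their Jacobians are specific to the pre-integration and point cloud constraints described above.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Generic Gauss-Newton: minimize ||r(x)||^2 given r(x) and its Jacobian."""
    x = x0.astype(float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        # Solve the normal equations J^T J dx = -J^T r for the update step
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x

# Toy stacked residual: r(x) = [x0^2 - 4, x0 - x1], minimized at x = [2, 2]
def residual(x):
    return np.array([x[0] ** 2 - 4.0, x[0] - x[1]])

def jacobian(x):
    return np.array([[2.0 * x[0], 0.0],
                     [1.0, -1.0]])
```

In the combined navigation problem, `residual` would stack all r_B and r_L terms over the window and `x` would be the window state χ.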
[0107] Further, in the unmanned platform with bionic visual multi-source information and intelligent perception, the deep learning object detection module operates by the following specific method: [0108] the whole process is shown in FIG. 5. First, the monocular vision camera sensor is used to collect RGB images of the surrounding environment at a collection frequency of 30 Hz; a bilinear interpolation algorithm is then adopted to adjust the images to a size of 640×640, and a median filtering algorithm is adopted to smoothly denoise the images, completing the pre-processing of the images.
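The two pre-processing operations named above (bilinear resize and median filtering) can be sketched in plain NumPy. These are textbook implementations for illustration, not the module's actual code; function names are hypothetical.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Bilinear interpolation resize for an H x W (x C) image array."""
    h, w = img.shape[:2]
    # pixel-center sample coordinates in the source image
    ys = np.clip((np.arange(out_h) + 0.5) * h / out_h - 0.5, 0, h - 1)
    xs = np.clip((np.arange(out_w) + 0.5) * w / out_w - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    if img.ndim == 3:                       # broadcast weights over channels
        wy = wy[..., None]; wx = wx[..., None]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def median_filter3(img):
    """3x3 median filter (edge-padded) for smoothing impulse noise."""
    pad_spec = [(1, 1), (1, 1)] + [(0, 0)] * (img.ndim - 2)
    padded = np.pad(img, pad_spec, mode='edge')
    stack = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(3) for dx in range(3)]
    return np.median(np.stack(stack), axis=0)
```

In practice a library call (e.g. an OpenCV resize with bilinear interpolation and a median blur) would replace these, but the arithmetic is the same.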
[0109] A multi-directional blur superposition algorithm is adopted to simulate the blurred-image problem caused by a vision camera in a real environment, as shown in the following formula:

$$P^{R}_{stack} = \sum_{j=1}^{N} \alpha_{j}\, T_{j} P^{R}_{j}, \qquad P^{G}_{stack} = \sum_{j=1}^{N} \alpha_{j}\, T_{j} P^{G}_{j}, \qquad P^{B}_{stack} = \sum_{j=1}^{N} \alpha_{j}\, T_{j} P^{B}_{j}$$

[0110] where P^R_stack, P^G_stack and P^B_stack represent the pixel values of the corresponding coordinates after superposition on the three RGB channels respectively; P^R_j, P^G_j and P^B_j represent the pixel values of the original coordinates on the three RGB channels respectively; α_j represents the transparency factor of the superposed images; and T_j represents the blurring superposition displacement matrix, which is determined by the superposition direction, the step length and the number N of superposed images. An image data set simulated by the multi-directional blur superposition algorithm is input to a YOLOv7 deep learning network for training, and finally the enhanced images are input to the trained YOLOv7 deep learning network to obtain the object detection frames and the vertex coordinates of the various objects in the images.
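A possible reading of the blur superposition formula is sketched below: n shifted copies of the image are blended with transparency factors α_j that sum to one. The circular shift (`np.roll`) stands in for the displacement T_j and is a simplification at the image border; all parameter names and defaults are assumptions.

```python
import numpy as np

def directional_blur_stack(img, n=5, step=2, direction=(0, 1), alphas=None):
    """Superpose n copies of img shifted along `direction` to simulate
    directional motion blur: P_stack = sum_j alpha_j * T_j(P_j)."""
    if alphas is None:
        alphas = np.full(n, 1.0 / n)      # equal transparency, sums to 1
    out = np.zeros(img.shape, dtype=float)
    dy, dx = direction
    for j in range(n):
        # circular shift approximates the displacement matrix T_j
        shifted = np.roll(img, (j * step * dy, j * step * dx), axis=(0, 1))
        out += alphas[j] * shifted
    return out
```

Applied per RGB channel (or to an H x W x 3 array directly, since the shift acts only on the spatial axes), this produces the blurred training images described above.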
[0111] The autonomous obstacle avoidance module determines whether there are any obstacles around the unmanned platform during running according to the objects identified by the deep learning object detection module, and performs autonomous obstacle avoidance in combination with the carrier navigation and positioning information obtained by the bionic polarization vision/inertia/laser radar combined navigation module.
[0112] Simulation Experiment: [0113] a hardware platform adopted by the unmanned platform with bionic visual multi-source information and intelligent perception is shown in FIG. 1. The carrier is an RIA-E100 unmanned vehicle, which is equipped with an Intel D435i vision camera, a VelodyneVLP-16 laser radar, an Ublox-RTK-GNSS, a SonyLucid polarization camera and a Pix-IMU inertial sensor.
[0114] The bionic polarization vision/inertial combined navigation solution is verified in the field environment by experiment: a 1,200-second driving experiment is conducted outdoors, with the experimental results shown in FIG. 6. In the figure, the broken line consisting of dots represents the course angle information obtained through solution by the inertial sensor, the dotted line represents the course angle information obtained through solution by the bionic polarization camera, the heavy line represents the course angle reference information provided by a high-precision navigation and positioning system, and the continuous line with stars represents the optimal course angle information obtained through the bionic polarization vision/inertial combination. It can be seen from FIG. 6 that the bionic polarization vision/inertial combined navigation solution adopted by the present disclosure effectively improves the precision of the course angle. Table 1 itemizes the maximum error, standard deviation, root mean square error and mean error of the bionic polarization camera, the inertial sensor and the cubature Kalman filter relative to the reference course angle, and the quantitative comparison results verify the feasibility of the bionic polarization vision/inertial combined navigation solution in the field environment.
TABLE 1 Comparison among Errors of the Bionic Polarization Vision/Inertial Combined Navigation

                                                Maximum  Standard   Root mean     Mean
                                                error    deviation  square error  error
  Course angle of the bionic polarization       5.50°    0.86°      2.65°         2.51°
  camera
  Course angle of the inertial sensor           2.67°    0.48°      1.45°         1.37°
  Course angle through the bionic polarization  1.60°    0.39°      0.48°         0.37°
  vision/inertial combination
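The four error statistics reported in Table 1 can be computed as follows. This is a straightforward sketch; here the mean error is taken as the mean absolute deviation from the reference, which is an assumption about the table's convention.

```python
import numpy as np

def heading_error_stats(estimate, reference):
    """Maximum error, standard deviation, RMSE and mean absolute error
    of an estimated course angle series against a reference (degrees)."""
    err = np.asarray(estimate, dtype=float) - np.asarray(reference, dtype=float)
    return {
        "max_error": np.max(np.abs(err)),          # worst-case deviation
        "std": np.std(err),                        # spread about the mean error
        "rmse": np.sqrt(np.mean(err ** 2)),        # root mean square error
        "mean_error": np.mean(np.abs(err)),        # mean absolute error
    }
```

Running this on each of the three course angle series against the high-precision reference would reproduce the comparison in Table 1.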
[0115] The laser radar/inertial combined navigation solution is then verified in the unknown enclosed environment through experiments, for which the indoor scenario in which the bionic polarization camera loses lock is selected.
[0116] Finally, the functions of the deep learning object detection module are verified through experiments. [0117] The experiments are carried out indoors, and the object recognition results are shown in FIG. 7. It can be seen from FIG. 7 that the surrounding environment is relatively complex: the object on the left closest to the vehicle is a chair, and there is a box at the very front. The obstacles are avoided by changing the path during traveling according to the self-positioning information and the object detection frame information, in combination with the depth information shown in FIG. 8, such that the unmanned platform realizes autonomous obstacle avoidance.
[0118] The present disclosure has the following beneficial effects: [0119] (1) the unmanned platform with bionic visual multi-source information and intelligent perception provided in the present disclosure can improve the comprehensive capabilities of the unmanned platform in complex environments; [0120] (2) the bionic polarization vision/inertia/laser radar combined navigation module provided in the present disclosure can avoid the problem that satellite navigation is susceptible to counter-reconnaissance and interference during wartime, improve the concealment and autonomous navigation capabilities of the unmanned platform in complex environments (the field environment and the unknown enclosed environment), and enhance the stability and robustness of the combined navigation system; and [0121] (3) the deep learning object detection module provided in the present disclosure can improve the intelligent perception capability of the unmanned platform in complex environments, and can improve the autonomous obstacle avoidance capability in combination with the combined navigation and positioning information.
[0122] As stated above, although the present disclosure has been illustrated and elaborated with reference to specific preferred embodiments, these embodiments cannot be construed as limitations on the present disclosure itself. Various variations can be made in form and detail without departing from the spirit and scope of the present disclosure as defined by the appended claims.