Abstract
The present invention proposes an active scene mapping method based on constraint guidance and space optimization strategies, comprising a global planning stage and a local planning stage. In the global planning stage, the next exploration goal of a robot is calculated to guide the robot to explore a scene; after the next exploration goal is determined, specific actions are generated in the local planning stage according to the next exploration goal, the position of the robot and the constructed occupancy map to drive the robot to the next exploration goal, and observation data is collected to update the information of the occupancy map. The present invention effectively avoids long-distance round trips during exploration, so that the robot weighs information gain against movement cost, strikes a balance in exploration efficiency, and improves the efficiency of active mapping.
Claims
1. An active scene mapping method based on constraint guidance and space optimization strategies, comprising a global planning stage and a local planning stage; in the global planning stage, the next exploration goal of a robot is calculated to guide the robot to explore a scene; and after the next exploration goal is determined, specific actions are generated in the local planning stage according to the next exploration goal, the position of the robot and the constructed occupancy map to drive the robot to go to the next exploration goal, and observation data is collected to update the information of the occupancy map, specifically comprising the following steps: step 1: generating a state according to the scanning data of the robot; the state comprises three parts: an occupancy map M.sub.t, a geodesic distance map D(M.sub.t, p.sub.t) and a frontier-based entropy I(M.sub.t), wherein p.sub.t indicates the position of the robot:
s(p.sub.t)=(M.sub.t, D(M.sub.t, p.sub.t), I(M.sub.t))

1.1) occupancy map; a 3D scene model is obtained by back projection of the depth with the pose of the robot according to the observation C(p.sub.t) of the robot at position p.sub.t; a 2D global map is constructed from the top-down view of the 3D scene model as an occupancy map M.sub.t; at time t, the occupancy map is expressed as M.sub.t∈[0,1].sup.X×Y×2, wherein X and Y represent the length and the width of the occupancy map, respectively; the occupancy map comprises two channels indicating the explored and the occupied regions, respectively; the grids in the occupancy map M.sub.t are classified as follows: free (explored but not occupied), occupied, and unknown (not explored); and the frontier grid F.sub.t⊂M.sub.t is the set of free grids adjacent to an unknown grid;

1.2) geodesic distance map; given the current position p.sub.t and the currently constructed occupancy map M.sub.t, a geodesic distance map D(M.sub.t, p.sub.t)∈ℝ.sup.X×Y is constructed, wherein D.sub.x,y(M.sub.t, p.sub.t) represents the geodesic distance from the position (x, y) to the position p.sub.t of the robot:
D.sub.x,y(M.sub.t, p.sub.t)=dist.sup.M.sup.t((x,y), p.sub.t)

the geodesic distance dist.sup.M.sup.t is the shortest traversal distance between two points in the occupancy map M.sub.t, and the geodesic distance dist.sup.M.sup.t is calculated by the fast marching method;

1.3) frontier-based entropy; the frontier-based entropy is introduced as a constraint to reduce the search space while the occupancy map M.sub.t is still incomplete; each frontier point f∈F.sub.t represents a potential next optimal exploration goal; based on the characteristic that frontier points aggregate over small ranges and disperse over large ranges, a frontier-based entropy I is introduced to encode the spatial distribution information of the frontier points, the encoded spatial distribution information is used as one of the inputs of the actor network in the global planning strategy to constrain the action search, and the frontier-based entropy I is defined as follows:

I.sub.x,y(M.sub.t)=|{(i,j)∈N(x,y): (i,j)∈F.sub.t}|

wherein I.sub.x,y(M.sub.t) represents the number of frontier points within the neighborhood N(x,y) centered on the position (x,y) of a frontier point in the occupancy map M.sub.t; and the spatial distribution information contained in each frontier point includes the (x,y) coordinate of the point and the statistical information of the spatial distribution of the frontier points within its neighborhood;

step 2: calculating the probability distribution of the action space of the robot according to the state input; the on-policy learning approach proximal policy optimization (PPO) is used as the policy optimizer for training, optimization and decision execution of the global planning strategy; the policy optimizer comprises an actor network and a critic network; the actor network uses a multi-layer perceptron (MLP) as an encoder for feature extraction and uses a graph neural network for feature fusion; a graph is constructed according to the frontier points given by the state s(p.sub.t), and feature extraction and feature fusion are carried out on the constructed graph to obtain the scores of the frontier points; the critic network comprises five convolutional layers, a flatten operation and three linear layers, a ReLU activation function is attached behind each convolutional layer and each linear layer, and the flatten operation is used for flattening multi-dimensional data into one-dimensional data; the critic network is used for predicting the state value V(s(p.sub.t)) of the occupancy map to indicate the critic value obtained in the current state, and the critic value, as part of the loss function, is used for training the actor network; the process of calculating the probability distribution of the action space according to the state input is specifically as follows: a graph G(F.sub.t, P.sub.t) is constructed based on the frontier grid F.sub.t and the exploration path P.sub.t={p.sub.0, . . . , p.sub.t} to represent the context information of the current scene, a correspondence between the robot and the frontier points extracted from the occupancy map M.sub.t is established in the graph G(F.sub.t, P.sub.t), and the information given by the state s(p.sub.t) is assigned to the nodes and edges of G(F.sub.t, P.sub.t); for each node n.sub.i, the node input feature feat(n.sub.i)∈ℝ.sup.5 includes: the (x,y) coordinate information in the occupancy map M.sub.t, semantic label information indicating n.sub.i∈F.sub.t or n.sub.i∈P.sub.t, historical label information indicating that n.sub.i is the current node p.sub.t or a historical exploration node n.sub.i∈{p.sub.0, . . . , p.sub.t−1}, and the frontier-based entropy I.sub.n.sub.i(M.sub.t); the node edge feature feat(l.sub.ij)∈ℝ.sup.32 is extracted by the multi-layer perceptron (MLP), wherein l.sub.ij∈ℝ.sup.1 represents the geodesic distance from node n.sub.j to node n.sub.i; the node input features and the node edge features are input into the actor network for feature extraction and feature fusion, and a set of scores of the frontier points is output; and the sampling probability π.sub.mask(f|s(p.sub.t)) of each action of the robot is calculated based on the set of scores of the frontier points;

step 3: carrying out action mask guided space alignment and selecting the next exploration goal; the robot selects the frontier point with the highest score as the next exploration goal g.sub.t+1 according to

g.sub.t+1=argmax.sub.f∈F.sub.t H(f|s(p.sub.t))

wherein H represents the global planning strategy that calculates the score of each frontier point in the current state; the next exploration goal g.sub.t+1 is thus a frontier point f selected from the frontier grid F.sub.t; based on the scores of the frontier points, an action mask strategy is introduced to solve the misalignment problem of the space metrics; the action mask strategy includes two action masks, a valid distance mask and a stuck mask, which are used for filtering actions in the action space of the global planning strategy and constraining action sampling to a valid action space;

3.1) valid distance mask; the valid distance mask is used for filtering invalid goals out of the action space of the global planning strategy; the action space is filtered according to the geodesic distance from the position p.sub.t of the robot to the next potential optimal exploration goal; a nearest threshold d.sub.near and a farthest threshold d.sub.far are set; and for a next potential optimal exploration goal beyond the threshold range [d.sub.near, d.sub.far], the sampling probability is set to 0, so that π.sub.mask(f|s(p.sub.t)) after the valid distance mask is as follows:

π.sub.mask(f|s(p.sub.t))=π(f|s(p.sub.t)) if dist.sup.M.sup.t(f, p.sub.t)∈[d.sub.near, d.sub.far], and π.sub.mask(f|s(p.sub.t))=0 otherwise,

wherein π.sub.mask(f|s(p.sub.t)) represents the action mask probability of selecting the frontier point f as the next exploration goal, dist.sup.M.sup.t(f, p.sub.t) is the geodesic distance from the position p.sub.t of the robot to the frontier point f, and π(f|s(p.sub.t)) is the original probability distribution from the encoder of the actor network;

3.2) stuck mask; the stuck mask is used for filtering out actions in the action space that cause the robot to get stuck; the max moving length l.sub.max and the max scanned-area increment c.sub.max of the robot over the last three global planning stages are calculated, wherein c(M.sub.t) represents the area of the scanned region at time t calculated according to the occupancy map M.sub.t; when the moving length l.sub.max and the area increment c.sub.max are both greater than the set thresholds, the action is considered reasonable; otherwise, the probability value corresponding to the action is set to 0, that is, π.sub.mask(a|s(p.sub.t))=0;

step 4: planning a path to the next exploration goal; the robot uses the fast marching method (FMM) to plan a moving path according to the position p.sub.t of the robot, the calculated next exploration goal g.sub.t+1 and the constructed occupancy map M.sub.t to drive the robot to go to the next exploration goal, and observation data scanned by the robot during movement is collected to update the information of the map;

step 5: making a judgment about the termination of exploration; repeating steps 1-4 to judge whether the exploration meets the termination conditions, and terminating the exploration when the scanning coverage of the robot exceeds the set threshold or the number of exploration steps of the robot exceeds the maximum set number of steps.
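By way of illustration only, the geodesic distance map of step 1.2 (also reused for path planning in step 4) can be computed with an off-the-shelf fast marching implementation. The following is a minimal sketch assuming the open-source scikit-fmm package and a boolean obstacle grid; neither the package nor the grid encoding is specified by the claim.

```python
import numpy as np
import skfmm  # third-party fast marching solver: pip install scikit-fmm

def geodesic_distance_map(occupied, robot_xy, dx=1.0):
    """Geodesic distance from robot_xy to every cell, routed around obstacles.

    occupied : bool array of shape (X, Y), True where a grid cell is occupied
    robot_xy : integer (x, y) grid position of the robot
    """
    phi = np.ones(occupied.shape)
    phi[robot_xy] = -1.0                         # zero level set around the robot cell
    phi = np.ma.MaskedArray(phi, mask=occupied)  # the marching front cannot cross obstacles
    dist = skfmm.distance(phi, dx=dx)
    return dist.filled(np.inf)                   # unreachable / occupied cells -> inf

# usage sketch
occupied = np.zeros((64, 64), dtype=bool)
occupied[20, 10:50] = True                       # a wall the front must march around
D = geodesic_distance_map(occupied, (5, 5))
```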
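Similarly, the two action masks of step 3 can be sketched as simple filters over the sampling distribution. The threshold names (d_near, d_far, l_thresh, c_thresh) and the choice to zero only the candidate goal's probability in the stuck case are illustrative assumptions, one plausible reading of the claim rather than the definitive implementation.

```python
import numpy as np

def valid_distance_mask(probs, geo_dists, d_near, d_far):
    """Step 3.1: zero the sampling probability of frontier goals whose geodesic
    distance lies outside [d_near, d_far], then renormalize."""
    masked = np.where((geo_dists >= d_near) & (geo_dists <= d_far), probs, 0.0)
    total = masked.sum()
    return masked / total if total > 0 else probs  # keep original if all filtered

def stuck_mask(probs, goal_idx, recent_moves, recent_area_gains,
               l_thresh, c_thresh):
    """Step 3.2: zero the probability of candidate action goal_idx when the
    robot made little progress over the last three global planning stages."""
    l_max = max(recent_moves[-3:])         # max moving length per stage
    c_max = max(recent_area_gains[-3:])    # max scanned-area increment per stage
    if not (l_max > l_thresh and c_max > c_thresh):
        probs = probs.copy()
        probs[goal_idx] = 0.0              # action considered stuck-inducing
    return probs
```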
Description
DESCRIPTION OF DRAWINGS
(1) FIG. 1 is an overall flow chart of an active scene mapping method based on constraint guidance and space optimization strategies of the present invention.
(2) FIG. 2 shows a network architecture of a global planning stage of the present invention.
(3) FIG. 3 shows calculation of a frontier-based entropy of the present invention.
(4) FIG. 4(a-1)-FIG. 4(a-5) show comparison of training results of different methods in scene 1; FIG. 4(a-1) shows Greedy Dist, and FIG. 4(a-2) shows Greedy Info; FIG. 4(a-3) shows Active Neural SLAM; FIG. 4(a-4) shows NeuralCoMapping; and FIG. 4(a-5) shows a method of the present invention;
(5) FIG. 4(b-1)-FIG. 4(b-5) show comparison of training results of different methods in scene 2; FIG. 4(b-1) shows Greedy Dist, and FIG. 4(b-2) shows Greedy Info; FIG. 4(b-3) shows Active Neural SLAM; FIG. 4(b-4) shows NeuralCoMapping; and FIG. 4(b-5) shows a method of the present invention;
(6) FIG. 4(c-1)-FIG. 4(c-5) show comparison of training results of different methods in scene 3; FIG. 4(c-1) shows Greedy Dist, and FIG. 4(c-2) shows Greedy Info; FIG. 4(c-3) shows Active Neural SLAM; FIG. 4(c-4) shows NeuralCoMapping; and FIG. 4(c-5) shows a method of the present invention;
(7) FIG. 4(d-1)-FIG. 4(d-5) show comparison of training results of different methods in scene 4; FIG. 4(d-1) shows Greedy Dist, and FIG. 4(d-2) shows Greedy Info; FIG. 4(d-3) shows Active Neural SLAM; FIG. 4(d-4) shows NeuralCoMapping; and FIG. 4(d-5) shows a method of the present invention;
(8) FIG. 4(e-1)-FIG. 4(e-5) show comparison of training results of different methods in scene 5; FIG. 4(e-1) shows Greedy Dist, and FIG. 4(e-2) shows Greedy Info; FIG. 4(e-3) shows Active Neural SLAM; FIG. 4(e-4) shows NeuralCoMapping; and FIG. 4(e-5) shows a method of the present invention;
(9) FIG. 5(a-1)-FIG. 5(a-5) show comparison of training results of different methods in scene 6; FIG. 5(a-1) shows Greedy Dist, and FIG. 5(a-2) shows Greedy Info; FIG. 5(a-3) shows Active Neural SLAM; FIG. 5(a-4) shows NeuralCoMapping; and FIG. 5(a-5) shows a method of the present invention;
(10) FIG. 5(b-1)-FIG. 5(b-5) show comparison of training results of different methods in scene 7; FIG. 5(b-1) shows Greedy Dist, and FIG. 5(b-2) shows Greedy Info; FIG. 5(b-3) shows Active Neural SLAM; FIG. 5(b-4) shows NeuralCoMapping; and FIG. 5(b-5) shows a method of the present invention;
(11) FIG. 5(c-1)-FIG. 5(c-5) show comparison of training results of different methods in scene 8; FIG. 5(c-1) shows Greedy Dist, and FIG. 5(c-2) shows Greedy Info; FIG. 5(c-3) shows Active Neural SLAM; FIG. 5(c-4) shows NeuralCoMapping; and FIG. 5(c-5) shows a method of the present invention;
(12) FIG. 5(d-1)-FIG. 5(d-5) show comparison of training results of different methods in scene 9; FIG. 5(d-1) shows Greedy Dist, and FIG. 5(d-2) shows Greedy Info; FIG. 5(d-3) shows Active Neural SLAM; FIG. 5(d-4) shows NeuralCoMapping; and FIG. 5(d-5) shows a method of the present invention;
(13) FIG. 5(e-1)-FIG. 5(e-5) show comparison of training results of different methods in scene 10; FIG. 5(e-1) shows Greedy Dist, and FIG. 5(e-2) shows Greedy Info; FIG. 5(e-3) shows Active Neural SLAM; FIG. 5(e-4) shows NeuralCoMapping; and FIG. 5(e-5) shows a method of the present invention.
DETAILED DESCRIPTION
(14) Specific embodiments of the present invention are further described below in combination with the accompanying drawings and the technical solution.
(15) The present embodiment runs on the IGibson simulator; 50 scenes are selected from the IGibson data set for training, and 13 scenes are selected from the IGibson and Matterport data sets for testing. Most of the training and testing scenes are large-scale scenes (with an area greater than or equal to 30 m.sup.2), though small-scale scenes are also included. The maximum number of scene exploration steps is set to n=3000, and the scene coverage for exploration termination is set to r=90%. The exploration is terminated only when the number of exploration steps exceeds n or the coverage of the explored region exceeds r.
(16) FIG. 1 is an overall flow chart of the present invention, mainly including two planning stages: the global planning stage and the local planning stage. In the global planning stage, the next exploration goal, i.e., the long-term goal, of a robot is calculated to guide the robot to explore a scene; after the long-term goal is determined, a set of specific actions is generated in the local planning stage according to the long-term goal, the position of the robot and the constructed exploration map to drive the robot to go to the long-term goal, and observation data is collected to update the information of the map. In the specific practice of the present invention, the interval of moving steps in the global planning stage is set to 25 steps, i.e., the robot updates the long-term goal every 25 steps. The global planning stage and the local planning stage alternate until the whole scene is explored completely or the number of exploration steps of the robot exceeds the maximum set number of steps. A minimal sketch of this alternating loop follows.
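The sketch below uses the 25-step global interval and the termination conditions of paragraph (15); the robot object and the four planner/map callables are placeholders supplied by the caller, not the patent's implementation.

```python
def explore(robot, global_plan, local_plan, update_map, coverage,
            max_steps=3000, target_coverage=0.90, global_interval=25):
    """Alternate global and local planning until the scene is covered
    or the step budget is exhausted."""
    occupancy_map = {}                       # placeholder initial map state
    goal = None
    for step in range(max_steps):
        if step % global_interval == 0:      # update the long-term goal every 25 steps
            goal = global_plan(occupancy_map, robot.position)
        action = local_plan(occupancy_map, robot.position, goal)  # e.g. FMM path following
        observation = robot.execute(action)
        occupancy_map = update_map(occupancy_map, observation)
        if coverage(occupancy_map) >= target_coverage:
            break                            # scene explored completely enough
    return occupancy_map
```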
(17) FIG. 2 shows a network architecture of a global planning stage of the present invention. The robot extracts frontier points according to the constructed occupancy map M, calculates the information entropy of each frontier point, constructs a state graph G, sends the state graph as the state input to the actor network, and outputs the score of each frontier point through the encoder and three graph neural networks. Then, according to the score, action mask guided action sampling is carried out to determine the long-term goal. The critic network calculates a critic value according to the occupancy map M to train the actor network.
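For illustration, a simplified PyTorch sketch of this actor/critic pair follows. The patent specifies an MLP encoder, three graph neural network layers producing per-frontier scores, and a critic with five convolutional layers, a flatten operation and three linear layers with ReLU activations; the layer widths, the dense-adjacency message-passing layer, and the input sizes here are assumptions for illustration.

```python
import torch
import torch.nn as nn

class GraphLayer(nn.Module):
    """Minimal message passing: aggregate neighbor features through a dense
    adjacency matrix (a stand-in for the edge-feature-based fusion)."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)
    def forward(self, h, adj):                  # h: (N, dim), adj: (N, N)
        agg = adj @ h                           # neighbor aggregation
        return torch.relu(self.lin(torch.cat([h, agg], dim=-1)))

class Actor(nn.Module):
    def __init__(self, node_feat=5, dim=32):   # 5-d node features, per the claim
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(node_feat, dim), nn.ReLU(),
                                     nn.Linear(dim, dim))   # MLP encoder
        self.gnn = nn.ModuleList([GraphLayer(dim) for _ in range(3)])
        self.score = nn.Linear(dim, 1)          # one score per graph node
    def forward(self, feats, adj):
        h = self.encoder(feats)
        for layer in self.gnn:
            h = layer(h, adj)
        return self.score(h).squeeze(-1)        # (N,) frontier scores

class Critic(nn.Module):
    """Five conv layers + flatten + three linear layers, ReLU throughout,
    predicting the state value V(s) from the two-channel occupancy map."""
    def __init__(self, in_ch=2):
        super().__init__()
        chans = [in_ch, 16, 32, 32, 64, 64]
        convs = []
        for i in range(5):
            convs += [nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1),
                      nn.ReLU()]
        self.conv = nn.Sequential(*convs)
        self.head = nn.Sequential(nn.LazyLinear(128), nn.ReLU(),
                                  nn.Linear(128, 64), nn.ReLU(),
                                  nn.Linear(64, 1))
    def forward(self, occ_map):                 # occ_map: (B, 2, X, Y)
        return self.head(self.conv(occ_map).flatten(1))

# usage sketch: 7 graph nodes with 5-d features on a fully connected graph
scores = Actor()(torch.randn(7, 5), torch.ones(7, 7) / 7)
value = Critic()(torch.randn(1, 2, 64, 64))     # one 64x64 two-channel map
```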
(18) FIG. 3 shows the calculation process of the frontier-based entropy: for each frontier point, the robot counts all frontier points in a neighborhood centered on the point and assigns this count as the entropy of the frontier point.
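This neighborhood count can be computed for all cells at once with a windowed sum; the following sketch uses SciPy, with the neighborhood size k as a free parameter not fixed in this description.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def frontier_entropy(frontier_mask, k=11):
    """frontier_mask: bool array (X, Y), True at frontier grid cells.
    Returns, per cell, the number of frontier cells in the k x k
    neighborhood centered on it; meaningful at frontier cells."""
    counts = uniform_filter(frontier_mask.astype(float), size=k,
                            mode="constant") * (k * k)  # window mean -> window sum
    return np.rint(counts) * frontier_mask              # keep values at frontier cells only
```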
(19) FIG. 4 and FIG. 5 show visualization results of a contrast experiment. Greedy Dist and Greedy Info are a distance-based greedy strategy and an information-gain-based greedy strategy, respectively: the point closest to the robot or the point with the greatest information gain is selected to drive the robot to explore the scene. Active Neural SLAM (ANS) takes the whole map as input and trains a convolutional neural network to learn the strategy; its output, which may be any point in the map, is used as the long-term goal to drive the robot to explore the scene. NeuralCoMapping extracts the frontier points in the map, uses the original information of the frontier points, such as their (x, y) coordinates, to construct a graph, and trains a graph neural network on that graph to learn the strategy; its output long-term goal is a frontier point in the graph. All the methods are tested in the same scenes, and the final visualization shows the mapping results and the moving path of the robot. The contrast experiment shows that the exploration strategy of the present invention performs better in terms of exploration completeness, exploration path rationality and the degree to which the robot gets stuck, which verifies the effectiveness of the method of the present invention for active mapping tasks.
(20) The quantitative verification results of the contrast experiment are shown in Table 1. The exploration area represents the total area of the regions scanned by the robot, and the coverage ratio is the ratio of the exploration area to the area of the whole scene, which measures the exploration completeness of the robot: the larger the value, the more complete the scene scanning. The exploration efficiency is the ratio of the exploration area to the number of exploration steps, i.e., the average information gain per exploration step of the robot: the larger the value, the higher the exploration efficiency and the more reasonable the exploration path planning of the robot, without large-scale backtracking or invalid actions.
(21) TABLE 1

Method                  Exploration Area (m.sup.2)   Coverage (%)   Exploration Efficiency (m.sup.2/step)
Greedy Dist             49.79                        72.57          0.02008
Greedy Info             55.17                        79.82          0.03683
ANS                     50.97                        79.35          0.02052
NeuralCoMapping         63.34                        90.37          0.04011
The present invention   67.14                        92.49          0.04189
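The two metrics of Table 1 reduce to simple ratios; the sketch below assumes a boolean explored-cell grid and a map resolution of 0.05 m per cell (cell_area = 0.0025 m.sup.2), which is an illustrative assumption, not a value taken from this description.

```python
import numpy as np

def exploration_metrics(explored_mask, scene_area, num_steps, cell_area=0.0025):
    """explored_mask: bool grid of explored cells; scene_area in m^2."""
    exploration_area = explored_mask.sum() * cell_area  # total scanned area, m^2
    coverage = 100.0 * exploration_area / scene_area    # coverage ratio, %
    efficiency = exploration_area / num_steps           # m^2 per exploration step
    return exploration_area, coverage, efficiency

# usage sketch
area, cov, eff = exploration_metrics(np.ones((170, 170), dtype=bool), 72.6, 1600)
```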