Method for drivable area detection and autonomous obstacle avoidance of unmanned haulage equipment in deep confined spaces

11880208 · 2024-01-23

Abstract

A method for drivable area detection and autonomous obstacle avoidance of unmanned haulage equipment in deep confined spaces is disclosed, which includes the following steps: acquiring 3D point cloud data of a roadway; computing a 2D image drivable area of the coal mine roadway; acquiring a 3D point cloud drivable area of the coal mine roadway; establishing a 2D grid map and a risk map, and performing autonomous obstacle avoidance path planning by using a particle swarm path planning method designed for deep confined roadways; and acquiring an optimal end point to be selected of a driving path by using a greedy strategy, and enabling an unmanned auxiliary haulage vehicle to drive according to the optimal end point and an optimal path. Images of the coal mine roadway are acquired actively by use of a binocular camera sensor device.

Claims

1. A method for drivable area detection and autonomous obstacle avoidance of unmanned haulage equipment in deep confined spaces, comprising the following steps:

S1: performing environmental perception by use of binocular vision, and detecting a drivable area of an unmanned auxiliary haulage vehicle in a deep coal mine roadway; specifically comprising:

S11: collecting, by a binocular camera, a video image of the auxiliary haulage vehicle driving in the coal mine roadway, and preprocessing a coal mine roadway image;

S12: for the preprocessed coal mine roadway image obtained in step S11, designing a stereo matching algorithm specific to a stereo matching task of the coal mine roadway to implement generation of a depth map of the coal mine roadway and computation of 3D point cloud data; specifically comprising:

S121: constructing an atrous convolution window with a specification of 5*5, and assigning different weights to different positions of the window according to a 2D Gaussian distribution function, where the weights, in sequence from left to right and top to bottom, are respectively 3, 0, 21, 0, 3, 0, 0, 0, 0, 0, 21, 0, 1, 0, 21, 0, 0, 0, 0, 0, 3, 0, 21, 0, 3;

S122: covering a left view of the coal mine roadway image with the convolution window constructed in step S121, and selecting pixel points in all coverage areas;

S123: covering a right view of the coal mine roadway image with the convolution window constructed in step S121, and selecting pixel points in all coverage areas;

S124: finding the absolute value of the gray value difference of all corresponding pixel points in the convolution window coverage areas of the left and right coal mine roadway views in steps S122 and S123, and, according to the weights of the different positions of the window in step S121, performing weighted summation to obtain a matching cost:

$$C(p,d)=\sum_{q\in N_p}\left(\left|I^{L}(q)-I^{R}(q-d)\right|\cdot w_{q}\right)$$

where p is a pixel of the coal mine roadway image, d is a disparity of the coal mine roadway image, $I^{L}(q)$ and $I^{R}(q-d)$ are window elements taking q and q−d as image centers on the left and right coal mine roadway images respectively, $w_q$ is the weight of the corresponding position of the convolution window, and $N_p$ is the 5*5 Gaussian atrous convolution window;

S125: performing matching cost aggregation according to the matching cost computation method of step S124, where the matching cost aggregation step size $d_{step}$ is adaptively changed according to the pixel luminance of the coal mine roadway image:

$$d_{step}=\frac{D_{max}-1}{G_{max}-G_{min}}\cdot g+\frac{G_{max}-G_{min}-D_{max}}{G_{max}-G_{min}}$$

where $D_{max}$ is the maximum disparity of a binocular image of the coal mine roadway, $G_{max}$ and $G_{min}$ are the maximum gray value and the minimum gray value of a gray image of the coal mine roadway respectively, and g is a gray value of the gray image of the coal mine roadway;

S126: based on the matching costs within the adaptive matching cost aggregation step size $d_{step}$ range obtained in step S125, taking the window with the minimum matching cost value as the disparity by using a winner-take-all (WTA) algorithm, and performing cyclic computation to obtain disparity maps of the coal mine roadway;

S127: based on the disparity maps of the coal mine roadway obtained in step S126, performing disparity image optimization in accordance with a left-right consistency constraint criterion:

$$D=\begin{cases}D_l & \text{if } \left|D_l-D_r\right|\leq 1\\ D_{invalid} & \text{otherwise}\end{cases}$$

where $D_l$ is the left disparity map, $D_r$ is the right disparity map, and $D_{invalid}$ marks an occlusion point where no disparity exists; and

S128: based on the disparity optimization image of the coal mine roadway obtained in step S127, performing disparity-map to 3D-data computation according to the binocular stereo vision principle, and obtaining the 3D point cloud information $(X_w, Y_w, Z_w)$ of the coal mine roadway in the advancing direction of the unmanned haulage vehicle:

$$\begin{cases}X_w(x,y)=\dfrac{b\,(x-u)}{d(x,y)}\\[4pt] Y_w(x,y)=\dfrac{b\,(y-v)}{d(x,y)}\\[4pt] Z_w(x,y)=\dfrac{b\,f}{d(x,y)}\end{cases}$$

where b is the distance between the left and right optical centers of the binocular camera, f is the focal distance of the camera, d is the disparity of the coal mine roadway image, (x, y) indicates pixel coordinates of the coal mine roadway image, (u, v) indicates the coordinates of the origin of the image coordinate system in the pixel coordinate system, and $f_x$ and $f_y$ are the focal distances of a pixel in the x and y directions of the image plane respectively;

S13: for the preprocessed coal mine roadway image obtained in step S11, designing a deep learning model for a semantic segmentation task of the coal mine roadway to implement drivable area semantic segmentation of a 2D image of the coal mine roadway; specifically comprising:

S131: making a semantic segmentation data set of the coal mine roadway in which only the drivable area of the roadway is marked, which specifically comprises: S1311: performing marking by using the labelme image marking software, and starting the labelme software; S1312: opening a coal mine roadway image folder, and selecting an image; S1313: selecting only the drivable area of the coal mine roadway with a frame, and naming the marked region "drivable area"; and S1314: repeating steps S1312 and S1313, so as to finally complete the making of the drivable area semantic segmentation data set of the 2D image of the coal mine roadway;

S132: pre-training a deeplabv3+ semantic segmentation model based on the PASCAL VOC data set;

S133: based on the pre-trained deeplabv3+ model obtained in step S132 and the drivable area semantic segmentation data set of the coal mine roadway obtained in step S131, performing pre-trained model fine-tuning;

S134: performing real-time drivable area semantic segmentation of the 2D image of the coal mine roadway according to the deep learning model fine-tuned with the data set of the coal mine roadway in step S133 to obtain a 2D image drivable area;

S14: according to the 3D point cloud data of the coal mine roadway obtained in step S12 and the 2D drivable area segmentation image of the coal mine roadway obtained in step S13, designing a 2D-image to 3D-point-cloud mapping method to implement drivable area semantic segmentation of the 3D point cloud of the coal mine roadway; specifically comprising:

S141: according to the 2D image drivable area obtained in step S134, performing drivable area image processing based on a morphological opening operation; and

S142: according to the left coal mine roadway view of the unmanned auxiliary haulage vehicle, performing segmentation for mapping the 2D image drivable area obtained in step S134 to the 3D point cloud so as to obtain the 3D point cloud drivable area of the coal mine roadway:

$$P(x,y,z)=\begin{cases}P(x,y,z) & (x,y)\in I(x,y)\\ 0 & (x,y)\notin I(x,y)\end{cases}$$

where P(x, y, z) is the 3D point cloud data of the coal mine roadway obtained in step S128, I(x, y) is the drivable area image of the coal mine roadway obtained after the morphological processing of step S141, and (x, y) indicates pixel coordinates of the left coal mine roadway view collected by the binocular camera;

S2: determining a drivable area of a deep confined roadway according to the drivable area detection information of step S1, and performing safe driving of the unmanned auxiliary haulage vehicle in the deep confined roadway by using an autonomous obstacle avoidance algorithm based on a particle swarm optimization algorithm; specifically comprising:

S21: establishing a 2D workspace grid map of the unmanned auxiliary haulage vehicle based on the drivable area detection information of step S1, and intercepting a roadway grid map with an appropriate length, where the roadway grid map comprises deformation and roof leakage of blanket nets and support materials at the top of the roadway; deformation, destruction, leakage and cracking on both sides of the deep roadway; and water puddles and other obstacles on the ground surface of the roadway;

S22: establishing a risk grid map based on the projection of the drivable area of step S1, where the risk grid map contains a safety area, a partial safety area and a complete risk area; the safety area is the part of the drivable area of step S1 in which the unmanned auxiliary haulage vehicle is allowed to drive directly; the complete risk area is an undrivable area, in which the unmanned auxiliary haulage vehicle is not allowed to drive at all; the partial safety area is the drivable area obtained by segmenting in step S1 that lies between the safety area and the undrivable area, in which there is a risk when the unmanned auxiliary haulage vehicle drives; in autonomous obstacle avoidance planning, the driving route of the vehicle should be planned in the safety area as much as possible and cannot be planned in the complete risk area; under certain conditions, the driving route of the vehicle is allowed to include the partial safety area, but should stay as far away from the complete risk area as possible; and the rule of establishment among the three kinds of areas is as follows:

S221: area risk levels are established according to the drivable area of the vehicle obtained by segmenting in step S1: the undrivable area itself is at the highest level, 5; taking a grid where the undrivable area is located as a reference point, the eight neighbor grids of the current grid are set to risk level 4; this is repeated in a similar fashion until risk level 1 is reached; the risk levels of the remaining grids that are not at risk stay at 0, that is, they are fully passable; if there is a conflict in the risk level of a grid, the grid is assessed at the highest conflicting risk level; the undrivable area itself is an absolute complete risk area in which vehicles are not allowed to drive, and the safety area and the partial safety area are both drivable areas, where a grid with risk level 0 is a safety area;

S23: intercepting a map with an appropriate length, selecting, in the map, grids allowed to serve as a local end point by taking the unmanned auxiliary haulage vehicle as the start point of the current map, and recording the grids into a table of end points to be selected in accordance with the following rule: an end point to be selected is in the last column of the local grid map; the grid is not an obstacle grid; the neighbor grids of the end point to be selected comprise at least one passable grid; and the grid is not surrounded by obstacles;

S24: performing autonomous obstacle avoidance path planning by using a particle swarm path planning method designed for the deep confined roadway;

S25: obtaining an optimal end point to be selected of a driving path by using a greedy strategy, and enabling the unmanned auxiliary haulage vehicle to drive according to the optimal end point and an optimal path; and

S26: repeating steps S21 to S25 to complete the autonomous obstacle avoidance of the unmanned auxiliary haulage vehicle in the deep confined roadway until the unmanned auxiliary haulage vehicle arrives at a task destination;

wherein in step S11, the process of collecting, by a binocular camera, a video image of the auxiliary haulage vehicle driving in the coal mine roadway, and preprocessing a coal mine roadway image comprises the following steps: S111: performing coal mine roadway image correction processing by using a Hartley image correction algorithm; S112: performing coal mine roadway image enhancement processing on the corrected image obtained in step S111 by using an image enhancement algorithm based on logarithmic Log transformation; and S113: performing image filtering processing on the enhanced image obtained in step S112 by using an image filtering algorithm based on bilateral filtering;

and wherein in step S111, the process of performing coal mine roadway image correction processing by using a Hartley image correction algorithm comprises the following steps: S1111: obtaining an epipolar constraint relationship of the left and right coal mine roadway images according to a camera calibration algorithm, and finding the epipolar points p and p′ in the left and right coal mine roadway images; S1112: computing a transformation matrix H mapping p to the infinity point $(1,0,0)^{T}$; S1113: computing a photographic transformation matrix H′ matched with the transformation matrix H, and satisfying a least square constraint, so as to minimize the following formula:

$$\min\sum_{i} J\!\left(H m_{1i},\,H' m_{2i}\right)^{2}$$

where $m_{1i}=(u_1,v_1,1)$, $m_{2i}=(u_2,v_2,1)$, J represents the cost function error, and $(u_1,v_1)$ and $(u_2,v_2)$ are a pair of matching points on the original left and right images; and S1114: allowing the transformation matrix H of step S1112 and the photographic transformation matrix H′ of step S1113 to act on the left and right coal mine roadway images respectively to obtain a corrected coal mine roadway image.
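As an illustration of the matching-cost and disparity-selection steps (S121, S124 and S126), the following is a minimal NumPy sketch, not the patented implementation: it applies the 5*5 weighted atrous window as a sum-of-absolute-differences cost and picks the disparity by winner-take-all. The image sizes, disparity range and synthetic test images are assumptions for demonstration; the adaptive aggregation step (S125) and the left-right check (S127) are omitted for brevity.

```python
import numpy as np

# 5x5 atrous window weights from step S121 (nonzero entries only at the
# sampled positions; values follow the sequence given in the claim).
W = np.array([[3, 0, 21, 0, 3],
              [0, 0,  0, 0, 0],
              [21, 0, 1, 0, 21],
              [0, 0,  0, 0, 0],
              [3, 0, 21, 0, 3]], dtype=float)

def matching_cost(left, right, x, y, d):
    """Weighted SAD cost C(p, d) over the atrous window (step S124)."""
    r = W.shape[0] // 2
    patch_l = left[y - r:y + r + 1, x - r:x + r + 1]
    patch_r = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
    return np.sum(np.abs(patch_l - patch_r) * W)

def wta_disparity(left, right, max_d=8):
    """Winner-take-all disparity selection over all candidates (step S126)."""
    h, w = left.shape
    r = W.shape[0] // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r + max_d, w - r):
            costs = [matching_cost(left, right, x, y, d) for d in range(max_d)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic test: shift a random texture by 3 px, so ground-truth disparity is 3.
rng = np.random.default_rng(0)
left = rng.random((20, 40))
right = np.roll(left, -3, axis=1)  # right view: left pixel x matches right pixel x-3
d = wta_disparity(left, right)
```

At interior pixels the cost is exactly zero at the true shift, so the WTA step recovers a disparity of 3 there.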

2. The method for drivable area detection and autonomous obstacle avoidance of unmanned haulage equipment in deep confined spaces according to claim 1, wherein in step S112, the process of performing coal mine roadway image enhancement processing on the corrected image obtained in step S111 by using an image enhancement algorithm based on logarithmic Log transformation comprises the following steps: S1121: based on an image logarithmic transformation formula, transforming the corrected coal mine roadway image obtained in step S111 by using a formula as follows:
$$s=c\cdot\log_{v+1}(1+v\cdot r)$$

where r is the input grayscale of the image, $r\in[0,1]$, s is the output grayscale of the image, c is a constant, and v is the logarithmic Log transformation intensity adjustment factor; S1122: normalizing the logarithmically transformed coal mine roadway image obtained in step S1121 to 0-255 according to the following formula:

$$g=\frac{s-s_{min}}{s_{max}-s_{min}}\cdot 255$$

where s is the unnormalized input grayscale, g is the normalized output grayscale, $s_{max}$ is the maximum grayscale of the image, and $s_{min}$ is the minimum grayscale of the image.

3. The method for drivable area detection and autonomous obstacle avoidance of unmanned haulage equipment in deep confined spaces according to claim 1, wherein in step S113, the process of performing image filtering processing on the enhanced image obtained in step S112 by using an image filtering algorithm based on bilateral filtering comprises the following steps:

S1131: for the enhanced coal mine roadway image obtained in step S112, constructing an n*n convolution template to carry out a convolution operation;

S1132: based on a spatial area and a range area, performing weight assignment on the convolution template of step S1131 according to the following formula:

$$\begin{cases}G_s(\lVert i-j\rVert)=e^{-\frac{(x_i-x_j)^2+(y_i-y_j)^2}{2\sigma^2}}\\[4pt] G_r(\lvert i-j\rvert)=e^{-\frac{(gray(x_i,y_i)-gray(x_j,y_j))^2}{2\sigma^2}}\end{cases}$$

where $G_s$ is the weight of the spatial area, $G_r$ is the weight of the range area, $(x_i,y_i)$ and $(x_j,y_j)$ are respectively the central pixel coordinates of the convolution template and the pixel coordinates within the convolution template of step S1131, $\sigma$ is a smoothing parameter, gray( ) is the pixel gray value of the image, i is the center pixel of the convolution template, and j is a pixel of the convolution template;

S1133: according to the convolution template of step S1131 and the weights of step S1132, performing traversal computation of the left and right coal mine roadway images by using the following formula to obtain a filtered image:

$$\begin{cases}I'_i=\frac{1}{w_i}\sum_{j\in S}G_s(\lVert i-j\rVert)\,G_r(\lvert i-j\rvert)\,I_j\\[4pt] w_i=\sum_{j\in S}G_s(\lVert i-j\rVert)\,G_r(\lvert i-j\rvert)\end{cases}$$

where S is the convolution template of step S1131, I is the original input image, I′ is the filtered image, and $w_i$ is a normalization factor.

4. The method for drivable area detection and autonomous obstacle avoidance of unmanned haulage equipment in deep confined spaces according to claim 1, wherein in step S24, said using a particle swarm path planning method designed for the deep confined roadway comprises the following steps:

S241: performing grid encoding and decoding on the grid map established based on drivable area detection in step S21 and on the risk grid map established in step S22 based on the projection of the drivable area of step S1; where, according to the features of the grid map, the encoding method of a collision-free local optimal path is: $X_i$ is defined to be an obstacle-free path from the current location of a coal mine robot to a specified target point, which may be represented by all grids that constitute the path, i.e.,

$$X_i=\{V_1,V_2,\ldots,V_n\}$$

where $V_1,V_2,\ldots,V_n$ represent the serial numbers of all the grids through which the path $X_i$ passes, the path contains no complete risk grid with a risk level of 5, and the serial numbers are not mutually repeated; the serial numbers of the grids are continuously arranged from top to bottom and left to right, taking the first grid at the upper left corner of the grid map as 1 and ending with the last grid at the lower right corner, that is,

$$V_n=\left(v_{n,1}-\frac{d}{2}\right)\cdot G_{row}+v_{n,2}+\frac{d}{2}$$

where $v_{n,1}$ and $v_{n,2}$ represent the x-coordinate and y-coordinate of the current grid, which are the coordinates of the center point of the grid rather than the coordinates of a vertex of the grid; d is the side length of a grid; and $G_{row}$ is the number of grids in one row of the current grid map; after path points are computed, the following formula is used during inverse solving:

$$[v_{n,1},v_{n,2}]=\left[\operatorname{ceil}\!\left(\frac{V_n}{G_{row}}\right)-\frac{d}{2},\;\operatorname{mod}(V_n-1,G_{row})+\frac{d}{2}\right]$$

so as to complete the grid encoding and decoding;

S242: processing obstacle information according to the drivable area detection information of step S1 and the grid map established in step S21, and initializing a particle swarm population; comprising the following steps:

S2421: initially establishing a square matrix of which the numbers of rows and columns are both the total number $V_{overall}$ of grids, and establishing a grid connection matrix $M_{link}$;

S24211: checking, by cyclic traversal, whether the current grid is adjacent to each of the other grids, and judging whether the adjacent grids are obstacles;

S24212: if the current grid is not adjacent to the other grid or the adjacent grid is an obstacle, setting the corresponding matrix element to 0; and if the current grid is adjacent to the other grid and the adjacent grid is not an obstacle, computing the distance between the adjacent grids:

$$Vd_{n,i}=\sqrt{(v_{n,1}-v_{i,1})^2+(v_{n,2}-v_{i,2})^2}$$

where $Vd_{n,i}$ is the grid distance between a grid n and a grid i, and $v_{\cdot,1}$ and $v_{\cdot,2}$ represent the x-coordinate and y-coordinate of the corresponding grid;

S2422: determining a start point $V_S$ of the coal mine robot and a final target point $V_E$ thereof, and placing the two at the head and tail nodes of an initial route $X_0$ through the encoding method of step S241;

S2423: randomly selecting, from the starting node $V_S$, a next grid connected to the start point according to the connection matrix $M_{link}$ established in step S2421; and

S2424: repeating step S2423 according to the connection matrix $M_{link}$ until an encoding combination of a complete path connecting $V_S$ to $V_E$ is completed, and outputting an initial path;

S243: based on the grid decoding/encoding methods of S241 and the initialized population of S242, updating the velocity and location of the particles in the particle swarm by using the following formulas:

$$v_{in}^{t+1}=\{\nu_1,\nu_2,\nu_3\}$$

the particle velocity consists of three replacement sets, where $\nu_1$, $\nu_2$ and $\nu_3$ represent three parts of the particle velocity: a self-velocity term, a cognitive term and a social term; the last two terms are determined by a cognitive factor $c_1$, the individual historical optimal solution $pbest_i^t$, a social factor $c_2$, and the global optimal solution $gbest^t$, specifically as follows:

$$\nu_1=\begin{cases}0, & \omega<R_i\\ \operatorname{rand}(X_i)=\operatorname{rand}(V_1,V_2,\ldots,V_n), & \text{otherwise}\end{cases}$$

$$V_n=\operatorname{round}(\operatorname{rand}(0,1)\cdot G_{row})\cdot G_{col}+\operatorname{round}(\operatorname{rand}(0,1)\cdot G_{col})+\frac{d}{2}$$

$$\nu_2=\begin{cases}\begin{cases}SN(pbest_{ij}^t), & pbest_{ij}^t=x_{in}^t \text{ and } j\neq n\\ 0, & \text{otherwise}\end{cases} & c_1\geq R_i\\ 0, & c_1<R_i\end{cases}$$

$$\nu_3=\begin{cases}\begin{cases}SN(gbest_{j}^t), & gbest_{j}^t=x_{in}^t \text{ and } j\neq n\\ 0, & \text{otherwise}\end{cases} & c_2\geq R_i\\ 0, & c_2<R_i\end{cases}$$

$$R_i=R_{min}+\operatorname{rand}(0,1)\cdot(R_{max}-R_{min})$$

where the self-velocity term is recorded by using a random distribution to calculate grid coordinates and the encoding method of step S241 to calculate the corresponding grids, $\omega$ is a velocity inertia weight, and $G_{col}$ is the number of grids in one column of the current grid map; the cognitive term is recorded using the serial numbers at the same positions in the set of paths $X_i$ represented by the current particle i and in the individual historical optimal solution $pbest_i^t$, part of the serial numbers being set to 0 according to a certain probability; the social term is updated by using the same strategy to obtain part of the serial numbers at the same positions in the set of paths $X_i$ represented by the current particle i and in the global optimal solution $gbest^t$; and $R_i$ represents a replacement rate;

$$x_i^{t+1}=f\!\left(\operatorname{replace}(x_i^t)\right)$$

$$\operatorname{replace}(x_i^t)=\operatorname{comb}(x_i^t,v_i^{t+1})=\operatorname{comb}(x_i^t,\{\nu_1,\nu_2,\nu_3\})$$

the position update item is the fitness value of the set of paths $X_i$ represented by the current particle i; the path $X_i$ represented by the current particle i is subjected to position replacement based on the three replacement sets in the particle velocity; f( ) is a fitness function, comb( ) is a permutation-and-combination function, and replace( ) is a replace function, indicating a replacement made between the current path $X_i$ and the particle velocity $v_i^{t+1}$; and

S244: determining whether a maximum number of iterations is reached; if so, outputting an optimal autonomous obstacle avoidance path corresponding to the current end point; otherwise, returning to S243 to continue iteration.
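A minimal sketch of the grid serial-number encoding of step S241, assuming a unit grid side length d = 1 so grid centers sit at half-integer coordinates; the inverse below is derived so that it is consistent with the forward formula (a round trip returns the original serial number).

```python
import math

def encode(v1, v2, g_row, d=1.0):
    """Grid serial number V_n from grid-center coordinates (step S241):
    V_n = (v1 - d/2) * G_row + v2 + d/2, numbered 1..N from the top-left."""
    return int((v1 - d / 2) * g_row + v2 + d / 2)

def decode(v, g_row, d=1.0):
    """Inverse solving: recover the grid-center coordinates from V_n."""
    v1 = math.ceil(v / g_row) - d / 2          # row-wise coordinate
    v2 = (v - 1) % g_row + d / 2               # position within the row
    return v1, v2

# The top-left grid center (0.5, 0.5) receives serial number 1.
first = encode(0.5, 0.5, g_row=5)
```

On a 5-grids-per-row map the serial numbers run 1..5 across the first row, 6..10 across the second, and so on; `decode` undoes `encode` exactly.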

5. The method for drivable area detection and autonomous obstacle avoidance of unmanned haulage equipment in deep confined spaces according to claim 4, wherein in step S243, the velocity inertia weight $\omega$ is:

$$\omega_t=\omega_{min}+\frac{\omega_{max}-\operatorname{rand}(\,)\cdot\omega_{min}}{2}\left(\frac{t}{T}\right)^2$$

where $\omega_{min}$ and $\omega_{max}$ are the minimum inertia weight and the maximum inertia weight respectively; t is the current number of iterations; and T is the maximum number of iterations.

6. The method for drivable area detection and autonomous obstacle avoidance of unmanned haulage equipment in deep confined spaces according to claim 4, wherein in step S243, for the drivable area detection information of the deep confined roadway established in step S1, a fitness function of a particle solution is established by taking the minimization of the total line length and the minimization of risk values as optimization objectives, where the relational expression of each solution in the search space and the objective function is:

$$f=f(X_i,D_i,R_i)=(1-w_R)\cdot\sum_{V_j=V_S}^{V_E}Vd_j+w_R\cdot\sum_{V_j=V_S}^{V_E}Vr_j$$

where f represents the fitness function, $D_i=\sum_{V_j=V_S}^{V_E}Vd_j$ represents the total length of the set of paths represented by the current particle i, $R_i=\sum_{V_j=V_S}^{V_E}Vr_j$ represents the risk degree of the path, $V_j$ is the j-th composition grid between the start point $V_S$ and the end point $V_E$ of the path, $Vr_j$ represents the risk degree of the j-th grid, and $w_R$ represents a risk factor; and the fitness function is computed by weighting the length and risk value of each grid in a path solution set $X_i$ according to the risk degree indexes, adding the obtained values, and taking the reciprocal of the obtained sum.
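The length/risk trade-off of claim 6 can be sketched as follows, assuming the per-grid lengths Vd_j and risk degrees Vr_j of a candidate path are supplied as lists; following the claim, the weighted sum is formed and its reciprocal is returned, so that shorter, lower-risk paths obtain higher fitness. The default weight value is an illustrative assumption.

```python
def fitness(path_lengths, path_risks, w_r=0.3):
    """Fitness of one candidate path per claim 6: reciprocal of the
    risk-weighted cost (1 - w_R) * sum(Vd_j) + w_R * sum(Vr_j)."""
    cost = (1 - w_r) * sum(path_lengths) + w_r * sum(path_risks)
    return 1.0 / cost

# A path through safety-area grids (risk 0) beats the same-length path
# through partial-safety grids (risk 4).
safe = fitness([1, 1, 1], [0, 0, 0])
risky = fitness([1, 1, 1], [4, 4, 4])
```

With w_R = 0.5, per-grid lengths [2, 2] and risks [1, 1], the weighted cost is 3 and the fitness is 1/3.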

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is a schematic diagram of an application of an auxiliary haulage vehicle in a method for drivable area detection and autonomous obstacle avoidance of unmanned haulage equipment in deep confined spaces according to an embodiment of the present disclosure; Reference numerals in FIG. 1: 1-body of an auxiliary haulage vehicle for coal mines; 2-standard container of the auxiliary haulage vehicle for coal mines; 3-explosion-proof vehicle lamp of the auxiliary haulage vehicle for coal mines; 4-track of the auxiliary haulage vehicle for coal mines; and 5-explosion-proof binocular camera.

(2) FIG. 2 is an example diagram of a drivable area detection and autonomous obstacle avoidance process of unmanned haulage equipment in a deep underground confined space according to the embodiment of the present disclosure;

(3) FIG. 3 is a flowchart of a method for drivable area detection and autonomous obstacle avoidance of unmanned haulage equipment in a deep confined space according to the embodiment of the present disclosure;

(4) FIG. 4 is a flowchart of a method for drivable area detection of a coal mine roadway based on binocular vision according to the embodiment of the present disclosure;

(5) FIG. 5 is a network diagram of the deep learning semantic segmentation model deeplabv3+;

(6) FIG. 6 is a left coal mine roadway image;

(7) FIG. 7 is a depth map of the coal mine roadway;

(8) FIG. 8 is a segmentation diagram of a 2D image drivable area of the coal mine roadway;

(9) FIG. 9 is a detection diagram of a 3D point cloud drivable area of the coal mine roadway; and

(10) FIG. 10 is a risk map of a deep coal mine confined roadway established according to a drivable area detection method.

DETAILED DESCRIPTION OF THE EMBODIMENTS

(11) The present disclosure will now be further described in detail with reference to the accompanying drawings.

(12) It should be noted that the terms such as upper, lower, left, right, front, rear and the like referred to in the present disclosure are used only for the clarity of description and are not intended to limit the scope of the present disclosure, and any changes and adjustments to the relative relations thereof should also be deemed to fall within the implementable scope of the present disclosure without substantially changing the technical content.

(13) FIG. 1 is a schematic diagram of an application of an auxiliary haulage vehicle in a deep underground space in the method for drivable area detection and autonomous obstacle avoidance of unmanned haulage equipment in deep confined spaces according to the present disclosure. A track 4 of the auxiliary haulage vehicle for coal mines is disposed below a body 1 of the auxiliary haulage vehicle for coal mines and carries the body of the auxiliary haulage vehicle for coal mines as it moves. An explosion-proof binocular camera 5 is disposed at the front end of the body 1 of the auxiliary haulage vehicle for coal mines and is configured to collect real-time video images of a coal mine roadway; background processing is performed on the videos of the coal mine roadway to obtain a 3D point cloud drivable area of the coal mine roadway, so as to provide roadway environment information for path planning and safe obstacle avoidance of the auxiliary haulage vehicle. An explosion-proof vehicle lamp 3 of the auxiliary haulage vehicle for coal mines is disposed on one side of the explosion-proof binocular camera 5, which not only illuminates the surrounding environment but also improves the image shooting quality.

(14) The present disclosure provides a method for drivable area detection and autonomous obstacle avoidance of unmanned haulage equipment in deep confined spaces, which, in combination with FIG. 2 and FIG. 3, contains two parts: drivable area detection of a coal mine roadway based on binocular vision and autonomous obstacle avoidance based on an improved particle swarm optimization algorithm. The method specifically includes the following steps:

(15) Step 1: Environmental Perception is Performed by Use of Binocular Vision, and a Drivable Area of an Unmanned Auxiliary Haulage Vehicle in a Deep Coal Mine Roadway is Detected.

(16) Exemplarily, in combination with FIG. 4, step 1 includes four modules: preprocessing a coal mine roadway image, performing stereo matching on the image to obtain a depth map, performing drivable area semantic segmentation of the 2D image, and mapping to a 3D point cloud drivable area; it specifically includes the following steps:

(17) Step 1-1: A Binocular Camera Collects a Video Image of the Auxiliary Haulage Vehicle Driving in the Coal Mine Roadway, and a Coal Mine Roadway Image is Preprocessed.

(18) Exemplarily, in step 1-1, the process of preprocessing a coal mine roadway image includes the following substeps: step 1-1-1: coal mine roadway image correction processing is performed by using a Hartley image correction algorithm; step 1-1-2: image enhancement processing is performed on a corrected image obtained in step 1-1-1 by using an image enhancement algorithm based on logarithmic Log transformation; and step 1-1-3: image filtering processing is performed on an enhanced image obtained in step 1-1-2 by using an image filtering algorithm based on bilateral filtering.

(19) Each substep of step 1-1 is described below.

(20) (I) Coal Mine Roadway Image Correction Processing

(21) The Hartley image correction algorithm used in step 1-1-1 specifically includes: step 1-1-1-1: an epipolar constraint relationship of the left and right coal mine roadway images is obtained according to a camera calibration algorithm, and the epipolar points p and p′ in the left and right coal mine roadway images are found; step 1-1-1-2: a transformation matrix H mapping p to the infinity point $(1,0,0)^{T}$ is computed; step 1-1-1-3: a photographic transformation matrix H′ matched with the transformation matrix H is computed, and a least square constraint is satisfied so as to minimize the following formula:

(22) $$\min\sum_{i} J\!\left(H m_{1i},\,H' m_{2i}\right)^{2}$$

where $m_{1i}=(u_1,v_1,1)$, $m_{2i}=(u_2,v_2,1)$, J represents the cost function error, and $(u_1,v_1)$ and $(u_2,v_2)$ are a pair of matching points on the original left and right images; and step 1-1-1-4: the transformation matrix H of step 1-1-1-2 and the photographic transformation matrix H′ of step 1-1-1-3 act on the left and right coal mine roadway images respectively to obtain a corrected coal mine roadway image.
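As a sketch of the epipole-to-infinity mapping in step 1-1-1-2: in Hartley's construction, such a homography can be assembled from a rotation that puts the epipole on the x-axis followed by a perspective map. This is a simplified illustration that assumes the image origin already coincides with the principal point, not the full correction algorithm.

```python
import numpy as np

def epipole_to_infinity_H(e):
    """Build a homography H that sends the epipole e = (ex, ey, 1)
    to the infinity point (1, 0, 0)^T (Hartley-style construction)."""
    ex, ey, _ = e
    # Rotate the epipole onto the positive x-axis.
    theta = np.arctan2(ey, ex)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
    f = np.hypot(ex, ey)  # rotated epipole becomes (f, 0, 1)
    # G maps (f, 0, 1) to (f, 0, 0), i.e. to infinity along the x-axis.
    G = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [-1.0 / f, 0.0, 1.0]])
    return G @ R

# Hypothetical epipole location for demonstration.
e = np.array([200.0, 100.0, 1.0])
H = epipole_to_infinity_H(e)
mapped = H @ e  # third coordinate ~ 0: the epipole is sent to infinity
```

After the mapping, epipolar lines through the (former) epipole become horizontal, which is what makes row-wise stereo matching possible.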

(23) (II) Image Enhancement Processing

(24) The image enhancement algorithm in step 1-1-2 specifically includes: step 1-1-2-1: based on an image logarithmic Log transformation formula, the corrected coal mine roadway image obtained in step 1-1-1 is transformed by using a formula as follows:

s = c·log_{v+1}(1 + v·r)

where r is an input grayscale of the image, r ∈ [0, 1], s is an output grayscale of the image, c is a constant, and v is a logarithmic Log transformation intensity adjustment factor; and step 1-1-2-2: a logarithmically transformed coal mine roadway image obtained in step 1-1-2-1 is normalized to 0-255 by using a formula as follows:

(25) g = (s − s_min)/(s_max − s_min) · 255

where s is an unnormalized input grayscale, g is a normalized output grayscale, s_max is a maximum grayscale of the image, and s_min is a minimum grayscale of the image.
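The Log transformation of step 1-1-2-1 followed by the 0-255 normalization of step 1-1-2-2 can be sketched in NumPy as follows; this is an illustrative sketch, not code from the patent, and the default values of c and v are assumptions:

```python
import numpy as np

def log_enhance(img, c=1.0, v=10.0):
    """Logarithmic enhancement s = c * log_{v+1}(1 + v*r), then rescale to 0-255.

    img: grayscale image as a float array with values in [0, 1].
    c, v: constant and intensity adjustment factor (illustrative defaults).
    """
    # log base (v+1) computed via the change-of-base identity
    s = c * np.log1p(v * img) / np.log(1.0 + v)
    # step 1-1-2-2: normalize to the 0-255 range
    g = (s - s.min()) / (s.max() - s.min()) * 255.0
    return g.astype(np.uint8)
```

Because the logarithm compresses high intensities less than a linear map, dark roadway regions gain contrast while bright lamp regions are not clipped.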

(26) (III) Image Filtering Processing

(27) The image filtering algorithm in step 1-1-3 specifically includes: step 1-1-3-1: for the enhanced coal mine roadway image obtained in step 1-1-2, an n*n convolution template is constructed to carry out a convolution operation; step 1-1-3-2: based on a spatial area and a range area, weight assignment is performed on the convolution template in step 1-1-3-1 by using formulas as follows:

(28) G_s(‖i − j‖) = exp(−((x_i − x_j)² + (y_i − y_j)²)/(2σ²))
G_r(|i − j|) = exp(−(gray(x_i, y_i) − gray(x_j, y_j))²/(2σ²))

where G_s is a weight of the spatial area, G_r is a weight of the range area, (x_i, y_i) and (x_j, y_j) are respectively the central pixel coordinates of the convolution template and the pixel coordinates within the convolution template in step 1-1-3-1, σ is a smoothing parameter, gray( ) is a pixel gray value of the image, i is the center pixel of the convolution template, and j is a pixel of the convolution template; and step 1-1-3-3: according to the convolution template in step 1-1-3-1 and the weights in step 1-1-3-2, traversal computation of the left and right coal mine roadway images is performed by using the following formulas to obtain a filtered image:

(29) I′_i = (1/w_i)·Σ_{j∈S} G_s(‖i − j‖)·G_r(|i − j|)·I_j
w_i = Σ_{j∈S} G_s(‖i − j‖)·G_r(|i − j|)

where S is the convolution template in step 1-1-3-1, I_j is the original input image, I′_i is the filtered image, and w_i is a normalization factor.
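A brute-force sketch of the bilateral filter of step 1-1-3 is shown below. This is illustrative only; the patent uses a single smoothing parameter σ, whereas separate sigma_s (spatial) and sigma_r (range) parameters are an assumption here for clarity:

```python
import numpy as np

def bilateral_filter(img, n=5, sigma_s=2.0, sigma_r=25.0):
    """Bilateral filter over an n*n template: each output pixel is a
    weighted mean with spatial weights G_s and range weights G_r."""
    h, w = img.shape
    r = n // 2
    pad = np.pad(img.astype(np.float64), r, mode='edge')
    # precompute the spatial weights G_s once for the n*n template
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    Gs = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + n, j:j + n]
            # range weights G_r depend on gray-value difference to the center
            Gr = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = Gs * Gr
            out[i, j] = (wgt * patch).sum() / wgt.sum()  # w_i normalization
    return out
```

Unlike a plain Gaussian blur, the range term G_r suppresses averaging across strong gray-value steps, which is why the filter removes salt-and-pepper noise while preserving roadway edges.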

(30) Step 1-2: For the Preprocessed Coal Mine Roadway Image Obtained in Step 1-1, a Stereo Matching Algorithm Specific to a Stereo Matching Task of the Coal Mine Roadway is Designed to Implement the Generation of a Depth Map of the Coal Mine Roadway and the Computation of 3D Point Cloud Data.

(31) The process of the generation of a depth map of the coal mine roadway and the computation of 3D point cloud data in step 1-2 specifically includes the following substeps: step 1-2-1: an atrous convolution window with a specification of 5*5 is constructed, and different weights are assigned to different positions of the window according to a 2D Gaussian distribution function, where the weights, in sequence from left to right and top to bottom, are respectively 3, 0, 21, 0, 3, 0, 0, 0, 0, 0, 21, 0, 1, 0, 21, 0, 0, 0, 0, 0, 3, 0, 21, 0, 3; step 1-2-2: a left view of the coal mine roadway image is covered with the convolution window constructed in step 1-2-1, and the pixel points in all coverage areas are selected; step 1-2-3: a right view of the coal mine roadway image is covered with the convolution window constructed in step 1-2-1, and the pixel points in all coverage areas are selected; step 1-2-4: the absolute value of the gray value difference of all corresponding pixel points in the convolution window coverage areas of the left and right coal mine roadway views in step 1-2-2 and step 1-2-3 is found, and according to the weights of the different positions of the window in step 1-2-1, weighted summation is performed on the absolute values as a matching cost by using a formula as follows:

(32) C(p, d) = Σ_{q∈N_p} |I^L(q) − I^R(q − d)|·w_q

where p is a pixel of the coal mine roadway image, d is a disparity of the coal mine roadway image, I^L(q) and I^R(q − d) are window elements taking q and q − d as image centers on the left and right coal mine roadway images respectively, w_q is the weight of the different positions of the convolution window, and N_p is the 5*5 Gaussian atrous convolution window; step 1-2-5: matching cost aggregation is performed according to the matching cost computation method in step 1-2-4, where a matching cost aggregation step size d_step is adaptively changed according to a pixel luminance of the coal mine roadway image by using a formula as follows:

(33) d_step = ((D_max − 1)/(G_max − G_min))·g + (G_max − G_min·D_max)/(G_max − G_min)

where D_max is a maximum disparity of a binocular image of the coal mine roadway, G_max and G_min are a maximum gray value and a minimum gray value of a gray image of the coal mine roadway respectively, and g is a gray value of the gray image of the coal mine roadway; step 1-2-6: based on matching costs in the adaptive matching cost aggregation step size d_step range obtained in step 1-2-5, a window with a minimum matching cost value is found as a disparity by using a winner-take-all (WTA) algorithm, and cyclic computation is performed to obtain disparity maps of the coal mine roadway; step 1-2-7: based on the disparity maps of the coal mine roadway obtained in step 1-2-6, disparity image optimization is performed in accordance with a left-right consistency constraint criterion by using a formula as follows:

(34) D = D_l if |D_l − D_r| ≤ 1; D = D_invalid otherwise

where D_l is a left disparity map, D_r is a right disparity map, and D_invalid is an occlusion point where no disparity exists; and step 1-2-8: based on a disparity optimization image of the coal mine roadway obtained in step 1-2-7, disparity-map to 3D-data computation is performed according to a binocular stereo vision principle, and 3D point cloud information (X_w, Y_w, Z_w) of the coal mine roadway in the advancing direction of the unmanned haulage vehicle is obtained by using formulas as follows:

(35) X_w(x, y) = b·f·(x − u)/(f_x·d(x, y))
Y_w(x, y) = b·f·(y − v)/(f_y·d(x, y))
Z_w(x, y) = b·f/d(x, y)

where b is a distance between the left and right optical centers of the binocular camera, f is a focal distance of the camera, d is a disparity of the coal mine roadway image, (x, y) indicates pixel coordinates of the coal mine roadway image, (u, v) indicates coordinates of the origin of the image coordinate system in the pixel coordinate system, and f_x and f_y are the focal distances of a pixel in the x and y directions of the image plane respectively. FIG. 6 is a left coal mine roadway image. FIG. 7 is a depth map of the coal mine roadway.
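The weighted SAD cost of step 1-2-4 can be sketched as follows, using the 5*5 window weights exactly as listed in step 1-2-1; this is an illustrative sketch, and the convention that the right-image window is centered at q − d is an assumption:

```python
import numpy as np

# 5*5 Gaussian atrous window weights as listed in step 1-2-1
W = np.array([[3, 0, 21, 0, 3],
              [0, 0,  0, 0, 0],
              [21, 0, 1, 0, 21],
              [0, 0,  0, 0, 0],
              [3, 0, 21, 0, 3]], dtype=np.float64)

def matching_cost(left, right, x, y, d):
    """Weighted SAD cost C(p, d) for pixel p = (y, x) at disparity d:
    sum over the window of |I_L(q) - I_R(q - d)| * w_q."""
    r = 2  # half-width of the 5*5 window
    pl = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    pr = right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(np.float64)
    return float((np.abs(pl - pr) * W).sum())
```

A WTA disparity search would then evaluate this cost over candidate disparities (stepping by the adaptive d_step of step 1-2-5) and keep the minimizer.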

(36) Step 1-3, for the Preprocessed Coal Mine Roadway Image Obtained in Step 1-1, a Deep Learning Model for a Semantic Segmentation Task of the Coal Mine Roadway is Designed to Implement the Drivable Area Semantic Segmentation of a 2D Image of the Coal Mine Roadway.

(37) The process of drivable area semantic segmentation of a 2D image of the coal mine roadway in step 1-3 specifically includes: step 1-3-1: a semantic segmentation data set of the coal mine roadway is made, which specifically includes: step 1-3-1-1: marking is performed by using labelme image marking software, and the labelme software is started; step 1-3-1-2: a coal mine roadway image folder is opened, and an image is selected; step 1-3-1-3: only the drivable area of the coal mine roadway is selected with a frame, and the drivable area is named drivable area; and step 1-3-1-4: steps 1-3-1-2 and 1-3-1-3 are repeated to finally complete the making of the drivable area semantic segmentation data set of the 2D image of the coal mine roadway.

(38) Step 1-3-2: a semantic segmentation model, the deeplabv3+ model, is pre-trained based on the PASCAL VOC data set. The PASCAL VOC data set used in this embodiment contains a total of 11,530 images in 20 types.

(39) Step 1-3-3: the transfer learning and fine-tuning training of the deeplabv3+ semantic segmentation model is performed according to a pre-trained semantic segmentation model obtained in step 1-3-2 and the drivable area semantic segmentation data set of the coal mine roadway obtained in step 1-3-1. FIG. 5 is a network diagram of the deep learning semantic segmentation model deeplabv3+.

(40) Step 1-3-4, real-time drivable area semantic segmentation of the 2D image of the coal mine roadway is performed according to the deep learning model fine-tuned with the data set of the coal mine roadway obtained in step 1-3-3 to obtain a 2D image drivable area. FIG. 8 is a segmentation diagram of the 2D image drivable area of the coal mine roadway.

(41) Step 1-4: According to the 3D Point Cloud Data of the Coal Mine Roadway Obtained in Step 1-2 and a 2D Drivable Area Segmentation Image of the Coal Mine Roadway Obtained in Step 1-3, a 2D-Image to 3D-Point-Cloud Mapping Method is Designed to Implement the Semantic Segmentation of a 3D Point Cloud of the Coal Mine Roadway.

(42) The process of semantic segmentation of a 3D point cloud of the coal mine roadway in step 1-4 specifically includes: step 1-4-1, according to the 2D image drivable area obtained in step 1-3-4, drivable area image processing is performed based on morphological opening operation, which specifically includes: step 1-4-1-1: a morphological erosion operation is performed according to the drivable area image of the coal mine roadway obtained in step 1-3-4; and step 1-4-1-2: a morphological dilation operation is performed according to a morphological erosion image obtained in step 1-4-1-1.

(43) Step 1-4-2: according to a left coal mine roadway view of the unmanned auxiliary haulage vehicle, segmentation for mapping the 2D image drivable area obtained in step 1-3-4 to the 3D point cloud is performed to obtain a 3D point cloud drivable area of the coal mine roadway by using a formula as follows:

(44) P′(x, y, z) = P(x, y, z) if (x, y) ∈ I(x, y); P′(x, y, z) = 0 if (x, y) ∉ I(x, y)

where P(x, y, z) is the 3D point cloud data of the coal mine roadway obtained in step 1-2-8, I(x, y) is the drivable area image of the coal mine roadway obtained after the morphological processing of step 1-4-1, and (x, y) is pixel coordinates of the left coal mine roadway view collected by the binocular camera. FIG. 9 is a detection diagram of the 3D point cloud drivable area of the coal mine roadway.
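Steps 1-4-1 and 1-4-2 (morphological opening of the drivable-area mask, then masking of the per-pixel point cloud) can be sketched as follows; this is an illustrative sketch, and the 3*3 structuring element size is an assumption:

```python
import numpy as np

def erode(mask):
    """3x3 morphological erosion (step 1-4-1-1)."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=0).astype(bool)
    out = np.ones((h, w), dtype=bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def dilate(mask):
    """3x3 morphological dilation (step 1-4-1-2)."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=0).astype(bool)
    out = np.zeros((h, w), dtype=bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def map_to_point_cloud(P, mask):
    """P' = P where the pixel lies in the opened drivable mask, else 0.

    P: H x W x 3 point cloud (X_w, Y_w, Z_w per pixel, from step 1-2-8).
    mask: H x W binary drivable-area segmentation from step 1-3-4.
    """
    I = dilate(erode(mask))  # morphological opening removes speckle
    return P * I[..., None]
```

Opening removes isolated misclassified pixels before the mask is lifted to 3D, so stray segmentation noise does not become phantom drivable points.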

(45) Step 2: A Drivable Area of a Deep Confined Roadway is Determined According to Drivable Area Detection Information in Step 1, and the Safe Driving of the Unmanned Auxiliary Haulage Vehicle in the Deep Confined Roadway is Performed by Using an Autonomous Obstacle Avoidance Algorithm Based on an Improved Particle Swarm Optimization Algorithm.

(46) Exemplarily, step 2 comprises four modules: establishing an environment map, initializing basic data, planning autonomous obstacle avoidance, and outputting an optimal route, which specifically includes the following substeps:

(47) Step 2-1, a 2D Workspace Grid Map of the Unmanned Auxiliary Haulage Vehicle is Established Based on the Drivable Area Detection Information in Step 1.

(48) The deep confined roadway is affected by deformation energy, kinetic energy and driving stress of coal mining for a long time, so part of the roadway shows an overall convergence trend and exhibits roof leakage at the top of the roadway, hanging-down of blanket nets, damage, leakage and cracking on both sides of the roadway, deformation of support materials, water puddles on the ground surface of the roadway, and the like. The 2D grid map is obtained based on the projection of the scene reconstruction information in step 1, and includes deformation and roof leakage of blanket nets and support materials at the top of the roadway; deformation, destruction, leakage and cracking on both sides of the deep roadway; and water puddles and other obstacles on the ground surface of the roadway.

(49) Step 2-2: A Risk Grid Map is Established According to the Projection of the Drivable Area in Step 1.

(50) The map obtained by performing safety assessment on the deep confined roadway by using an image segmentation technology in step 1 contains a safety area, a partial safety area and a complete risk area. The safety area is the part of the drivable area in step 1 in which the unmanned auxiliary haulage vehicle is allowed to drive directly; the complete risk area is an undrivable area, in which the unmanned auxiliary haulage vehicle is not allowed to drive at all; and the partial safety area is the drivable area obtained by the segmentation in step 1 that lies between the safety area and the undrivable area, in which the unmanned auxiliary haulage vehicle drives at some risk. In autonomous obstacle avoidance planning, the driving route of the vehicle should be planned in the safety area as much as possible and must not be planned in the complete risk area; under certain conditions, the driving route may be planned in the partial safety area, but should stay as far away from the complete risk area as possible. The rules for establishing the three kinds of areas are as follows: rule 2-2-1: area risk levels are established according to the drivable area of the vehicle obtained by the segmentation in step 1: the undrivable area itself is at the highest level 5; taking a grid where the undrivable area is located as a reference point, the eight neighbor grids of that grid are set to risk level 4; this is repeated in a similar fashion until risk level 1 is reached; and the risk levels of the remaining grids that are not at risk are 0, that is, they are fully passable; rule 2-2-2: if the risk levels assigned to a grid conflict, the grid is assessed at the highest of those risk levels; and rule 2-2-3: the undrivable area itself is an absolute complete risk area, in which vehicles are not allowed to drive, and the safety area and the partial safety area (risk levels 1-4) are both drivable areas, where a grid with risk level 0 is a safety area. FIG. 10 is a risk map of a deep coal mine confined roadway established according to the drivable area detection method.
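Rules 2-2-1 and 2-2-2 amount to repeatedly dilating the obstacle set by its 8-neighbourhood and assigning decreasing risk levels, with earlier (higher) levels winning conflicts. A minimal sketch, assuming a boolean obstacle grid:

```python
import numpy as np

def risk_map(obstacles):
    """Build the risk grid per rules 2-2-1/2-2-2: obstacles are level 5,
    each successive ring of 8-neighbours is one level lower, and a grid
    with conflicting levels keeps the highest one."""
    h, w = obstacles.shape
    risk = np.where(obstacles, 5, 0)
    frontier = obstacles.astype(bool)
    for level in (4, 3, 2, 1):
        grown = np.zeros((h, w), dtype=bool)
        ys, xs = np.nonzero(frontier)
        for y, x in zip(ys, xs):
            # mark the 8-neighbourhood (numpy slicing clips at the border)
            grown[max(0, y - 1):y + 2, max(0, x - 1):x + 2] = True
        ring = grown & (risk == 0)  # only unassigned grids take the new level
        risk[ring] = level
        frontier = grown
    return risk
```

Because a grid is only assigned while still at level 0, a grid reachable from two obstacles automatically keeps the higher level, which implements rule 2-2-2 without a separate conflict check.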

(51) Step 2-3: Maps with an Appropriate Length are Intercepted, and a Temporary Table of End Points to be Selected is Established Based on the Map Established Based on the Drivable Area Detection Information.

(52) To ensure the real-time performance of autonomous obstacle avoidance, maps with an appropriate length need to be continuously intercepted for planning so as to update the autonomous obstacle avoidance path during vehicle driving. After the map is intercepted, grids allowed to serve as local end points are selected in the map, taking the unmanned auxiliary haulage vehicle as the start point of the current map, and recorded into a table of end points to be selected in accordance with the following rules: rule 2-3-1: an end point to be selected must be in the last column of the local grid map; rule 2-3-2: the grid is not an obstacle grid; rule 2-3-3: the neighbor grids of the end point to be selected comprise at least one passable grid; and rule 2-3-4: the grid is not surrounded by obstacles.
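Rules 2-3-1 to 2-3-4 can be sketched as a simple scan of the last column of the intercepted grid map; this is illustrative only, and the convention that 0 marks a passable grid and nonzero an obstacle is an assumption:

```python
import numpy as np

def candidate_end_points(grid):
    """Collect endpoint candidates per rules 2-3-1..2-3-4."""
    h, w = grid.shape
    ends = []
    for y in range(h):
        x = w - 1                        # rule 2-3-1: last column only
        if grid[y, x] != 0:              # rule 2-3-2: not an obstacle grid
            continue
        nb = grid[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
        # rules 2-3-3/2-3-4: at least one passable neighbour besides itself
        if (nb == 0).sum() > 1:
            ends.append((y, x))
    return ends
```

The resulting list is the temporary table of end points to be selected, from which step 2-5 later picks the optimal one with a greedy strategy.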

(53) Step 2-4: Autonomous Obstacle Avoidance Path Planning is Performed by Using an Improved Particle Swarm Path Planning Method Designed for the Deep Confined Roadway.

(54) The improved particle swarm path planning method designed for the deep confined roadway in step 2-4 specifically includes: step 2-4-1: grid encoding and decoding is performed on the grid map established based on drivable area detection in step 2-1 and the risk grid map established in step 2-2 based on the projection of the drivable area of step 1; in consideration of the particularity of path planning in the deep confined space, according to the features of the grid map, the encoding method of a collision-free local optimal path is: X_i is defined to be an obstacle-free path from a current location of a coal mine robot to a specified target point, which may be represented by all grids that constitute the path, i.e.,

X_i = {V_1, V_2, …, V_n}

where V_1, V_2, …, V_n represent the serial numbers of all the grids through which the path X_i passes, the path contains no complete risk grid with a risk level of 5, and the serial numbers are not mutually repeated. The serial numbers of the grids are arranged continuously from top to bottom and left to right, taking the first grid at the upper left corner of the grid map as 1, until the last grid at the lower right corner is reached, that is,

(55) V_n = (v_{n,1} − d/2)·G_row + v_{n,2} + d/2

where v_{n,1} and v_{n,2} represent the x-coordinate and y-coordinate of the current grid, which are the coordinates of the center point of the grid rather than the coordinates of a vertex of the grid; d is the side length of the grid; and G_row is the number of grids in one row of the current grid map.
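The serial-number encoding above, together with the inverse solving described next in the text, can be sketched as follows. Note that this is a self-consistent variant written for illustration: the patent's forward and inverse formulas use slightly different conventions, so the decode here is simply the exact inverse of the encode:

```python
def encode(v1, v2, g_row, d=1.0):
    """Serial number of the grid whose center is (v1, v2), numbering
    row by row from 1 at the upper-left corner (d = grid side length)."""
    row = int((v1 - d / 2) / d)
    col = int((v2 - d / 2) / d)
    return row * g_row + col + 1

def decode(serial, g_row, d=1.0):
    """Inverse solving: recover the grid-center coordinates from V_n."""
    row, col = divmod(serial - 1, g_row)
    return (row * d + d / 2, col * d + d / 2)
```

A path X_i is then just a list of such serial numbers, and round-tripping through encode/decode recovers the center coordinates exactly.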

(56) After path points are computed, the following formula is used during inverse solving:

(57) [v_{n,1}, v_{n,2}] = [mod(V_n − 1, G_row) + d/2, ceil((V_n − 1)/G_row) − d/2]

so as to complete the grid encoding and decoding. Step 2-4-2: obstacle information is processed according to the drivable area detection information in step 1 and the grid map established in step 2-1, and a particle swarm population is initialized, which includes the following steps: step 2-4-2-1: a square matrix whose numbers of rows and columns are both the total number V_overall of grids is initially established as a grid connection matrix M_link; step 2-4-2-1-1: whether the current grid is adjacent to each other grid is computed by use of cyclic traversal, and whether the adjacent grids are obstacles is judged; step 2-4-2-1-2: if the current grid is not adjacent to another grid, or the adjacent grid is an obstacle, the corresponding matrix element is set to 0; if the current grid is adjacent to another grid and that grid is not an obstacle, the distance between the adjacent grids is computed by using a formula as follows:

Vd_{n,i} = √((v_{n,1} − v_{i,1})² + (v_{n,2} − v_{i,2})²)

where Vd_{n,i} is the grid distance between grid n and grid i, and v_{·,1} and v_{·,2} represent the x-coordinate and y-coordinate of a grid; step 2-4-2-2: a start point V_S of the coal mine robot and its final target point V_E are determined and placed at the head and tail nodes of an initial route X_0 through the encoding method in step 2-4-1; step 2-4-2-3: starting from the start node V_S, a next grid connected to the start point is randomly selected according to the connection matrix M_link established in step 2-4-2-1; and step 2-4-2-4: step 2-4-2-3 is repeated according to the connection matrix M_link until the encoding combination of a complete path connecting V_S to V_E is completed, and an initial path is output. Step 2-4-3: based on the grid encoding/decoding methods in step 2-4-1 and the initialized population in step 2-4-2, the velocity and location of particles in the particle swarm are updated by using a formula as follows:
v_{in}^{t+1} = {v¹, v², v³}

The improved particle velocity consists of three replacement sets. In the formula, v¹, v² and v³ represent the three parts of the particle velocity: a self-velocity term, a cognitive term and a social term; the last two are determined by a cognitive factor c_1, an individual historical optimal solution pbest_i^t, a social factor c_2, and a global optimal solution gbest^t, which is specifically as follows:

(58) v¹ = rand(X_i) = rand(V_1, V_2, …, V_n) if ω ≥ R_i, and v¹ = 0 if ω < R_i, with
V_n = round(rand(0, 1)·G_row)·G_col + round(rand(0, 1)·G_col) + d/2
v² = {SN(pbest_{ij}^t) if pbest_{ij}^t = x_{in}^t and j ≤ n; 0 otherwise} if c_1 ≥ R_i, and v² = 0 if c_1 < R_i
v³ = {SN(gbest_j^t) if gbest_j^t = x_{in}^t and j ≤ n; 0 otherwise} if c_2 ≥ R_i, and v³ = 0 if c_2 < R_i
R_i = R_min + rand(0, 1)·(R_max − R_min)

where the self-velocity term is recorded by using a random distribution to calculate grid coordinates and using the encoding method in step 2-4-1 to calculate the corresponding grids, and G_col is the number of grids in one column of the current grid map. The cognitive term is recorded using the serial numbers at the same positions in the set of paths X_i represented by the current particle i and in the individual historical optimal solution pbest_i^t, and part of the serial numbers are set to 0 according to a certain probability. The social term is updated by using the same strategy to obtain part of the serial numbers at the same positions in the set of paths X_i represented by the current particle i and in the global optimal solution gbest^t. ω is a velocity inertia weight. In order to balance global and local search capabilities and allow the algorithm to jump out of local optima as much as possible, the inertia weight ω is introduced into the self-velocity term of the particle update formula, with a value range between 0 and 1. The greater the inertia weight, the stronger the global search capability and the weaker the local search capability; conversely, the global search capability is weakened and the local search capability is enhanced. The inertia weight is designed herein as follows:

(59) ω_t = ω_min + ((ω_max − rand()·ω_min)/2)·(t/T)²

where ω_min and ω_max are a minimum inertia weight and a maximum inertia weight, t is the current number of iterations, and T is the maximum number of iterations.

(60) R_i represents a replacement rate. The particle position is then updated as:

x_i^{t+1} = f(replace(x_i^t))
replace(x_i^t) = comb(x_i^t, v_i^{t+1}) = comb(x_i^t, {v¹, v², v³})

(61) The position update term operates on the set of paths X_i represented by the current particle i, whose quality is measured by its fitness value. The path X_i represented by the current particle i is subjected to position replacement based on the three replacement sets in the particle velocity; comb() is a permutation-and-combination function; replace() is a replace function, indicating the replacement made between the current path X_i and the particle velocity v_i^{t+1}; and f() is a fitness function.

(62) The convergence of the particle swarm optimization algorithm requires the fitness function as a determination criterion; a greater fitness value of an optimization result indicates that the set of solutions represented by the particle is more preferred. Therefore, for the drivable area detection information of the deep confined roadway established in step 1, the fitness function of a particle solution is established by taking the minimization of the total line length and the minimization of risk values as optimization objectives. The relational expression between each solution in the search space and the objective function is:

(63) f = f(X_i, D_i, R_i) = 1 / ((1 − w_R)·Σ_{V_j=V_S}^{V_E} Vd_j + w_R·Σ_{V_j=V_S}^{V_E} Vr_j)

where f represents the fitness function,

(64) D_i = Σ_{V_j=V_S}^{V_E} Vd_j

represents the total length of the set of paths represented by the current particle i, and

(65) R_i = Σ_{V_j=V_S}^{V_E} Vr_j

represents the risk degree of the path, V_j is the j-th composition grid between the start point V_S and end point V_E of the path, Vr_j represents the risk degree of the j-th grid, and w_R represents a risk factor; the fitness function is computed by weighting the length and risk value of each grid in a path solution set X_i according to the risk degree indexes, adding the obtained values, and taking the reciprocal of the obtained sum. Step 2-4-4: whether the maximum number of iterations is reached is determined; if so, an optimal autonomous obstacle avoidance path corresponding to the current end point is output; otherwise, step 2-4-3 is returned to for continued iteration.
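The fitness computation described above (risk-weighted sum of per-grid lengths and risk degrees, then the reciprocal) can be sketched as follows; this is an illustrative sketch, and the dictionary lookups vd/vr and the default w_risk value are assumptions, not from the patent:

```python
def fitness(path, vd, vr, w_risk=0.3):
    """Fitness f of a path solution X_i: reciprocal of the weighted sum
    of grid lengths Vd_j and risk degrees Vr_j over the path's grids.

    path: list of grid serial numbers from V_S to V_E.
    vd, vr: maps from grid serial number to its length Vd_j and risk Vr_j.
    w_risk: the risk factor w_R trading off length against risk.
    """
    total = sum((1 - w_risk) * vd[v] + w_risk * vr[v] for v in path)
    return 1.0 / total
```

Because the reciprocal is taken, a shorter or lower-risk path yields a larger fitness value, which is the maximization criterion the particle swarm update uses when comparing pbest and gbest candidates.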

(66) Step 2-5: An Optimal End Point to be Selected of a Driving Path is Obtained by Using a Greedy Strategy, and the Unmanned Auxiliary Haulage Vehicle is Enabled to Drive According to the Optimal End Point and an Optimal Path.

(67) Step 2-6: Steps 2-1 to 2-5 are Repeated to Complete the Autonomous Obstacle Avoidance of the Unmanned Auxiliary Haulage Vehicle in the Deep Confined Roadway Until the Unmanned Auxiliary Haulage Vehicle Arrives at a Task Destination.

(68) In summary, the present disclosure provides a method for drivable area detection and autonomous obstacle avoidance of unmanned haulage equipment in deep confined spaces, which can effectively utilize the information of a deep confined roadway collected by a binocular camera and a drivable area based on safety state assessment to perform autonomous obstacle avoidance in combination with a particle swarm optimization algorithm improved for path planning, so as to realize the unmanned driving of an auxiliary haulage vehicle for coal mines in a deep confined space. The method for drivable area detection of a deep confined space based on binocular vision performs coal mine roadway image preprocessing by using a Hartley image correction algorithm, an image enhancement algorithm based on logarithmic Log transformation, and an image filtering algorithm based on bilateral filtering, so that problems such as the dim environment and heavy salt-and-pepper noise of the deep underground coal mine roadway are solved. A SAD stereo matching algorithm based on Gaussian atrous convolution with an adaptive aggregation step size is used, which improves the matching precision and matching efficiency of the left and right coal mine roadway images. A model-transfer-based method for training the semantic segmentation model of the drivable area of the coal mine roadway is used, which solves the problem of the lack of data sets of the coal mine roadway, thereby improving the robustness and generalization of the semantic segmentation model. A semantic segmentation method for the point cloud of the coal mine roadway based on 2D-image to 3D-point-cloud mapping is used, which avoids the expense of radar equipment, improves the semantic segmentation accuracy and efficiency of the 3D point cloud, and can obtain rich texture information of the coal mine roadway. Finally, the method for autonomous obstacle avoidance of the unmanned auxiliary haulage vehicle based on the improved particle swarm optimization algorithm can effectively utilize the drivable area detection information and quickly and autonomously perform obstacle avoidance, thereby improving the security and intelligence of local path planning of the vehicle.

(69) The above is only a preferred embodiment of the present disclosure, the scope of protection of the present disclosure is not limited to the above embodiment, and all technical solutions of the present disclosure under the thought of the present disclosure fall within the scope of protection of the present disclosure. It should be noted that, for a person of ordinary skill in the art, any improvements and modifications made without departing from the principle of the present disclosure should be considered to be within the scope of protection of the present disclosure.