MULTI-SENSOR FUSION-BASED SLAM METHOD AND SYSTEM

20230194306 · 2023-06-22


    Abstract

    The present invention provides a multi-sensor fusion-based Simultaneous Localization And Mapping (SLAM) mapping method and system for a server, comprising: obtaining a plurality of sensor data regarding a surrounding environment of a moving platform, the plurality of sensor data including point cloud data, image data, inertial measurement unit (IMU) data, and global navigation satellite system (GNSS) data; performing hierarchical processing on the plurality of sensor data to generate a plurality of localization information, wherein one sensor data corresponds to one localization information; obtaining target localization information of the moving platform based on the plurality of localization information; generating a high-precision local map based on the target localization information; and performing a closed-loop detection operation on the high-precision local map to obtain a high-precision global map of the moving platform. The present invention mitigates the technical problem in the related art that susceptibility to the surrounding environment leads to low mapping precision.

    Claims

    1. A multi-sensor fusion-based Simultaneous Localization and Mapping (SLAM) method for a server, comprising: obtaining a plurality of sensor data regarding a surrounding environment of a moving platform, the plurality of sensor data comprising point cloud data, image data, inertial measurement unit (IMU) data, and global navigation satellite system (GNSS) data; performing hierarchical processing on the plurality of sensor data to generate a plurality of localization information, wherein one sensor data corresponds to one localization information; obtaining target localization information of the moving platform based on the plurality of localization information; generating a high-precision local map based on the target localization information; and performing a closed-loop detection operation on the high-precision local map to obtain a high-precision global map of the moving platform.

    2. The method according to claim 1, wherein the step of obtaining the plurality of sensor data regarding the surrounding environment of the moving platform comprises: with a laser as a benchmark, calibrating position relationships among a camera, an IMU, a GNSS and the laser to obtain calibration information, wherein the laser, the camera, the IMU, and the GNSS are all sensors on the moving platform; with time of the GNSS as a benchmark, synchronizing time of the laser, time of the camera, and time of the IMU to a current time system of the GNSS; and synchronously collecting data from the laser, the camera, the IMU and the GNSS to obtain the plurality of sensor data regarding the surrounding environment of the moving platform, wherein the point cloud data is the data collected from the laser, the image data is the data collected from the camera, the IMU data is the data collected from the IMU, and the GNSS data is the data collected from the GNSS.

    3. The method according to claim 2, wherein in the step of performing hierarchical processing on the plurality of sensor data, a plurality of localization information is generated, which includes initial localization information, first localization information, and second localization information, comprising: generating the initial localization information based on the IMU data, the GNSS data, and the calibration information; generating the first localization information by using visual SLAM on the basis of the initial localization information and the image data; and generating the second localization information by using laser SLAM on the basis of the first localization information and the point cloud data.

    4. The method according to claim 3, wherein the step of obtaining target localization information of the moving platform based on the plurality of localization information comprises: extracting a keyframe matching point set of the image data and a point cloud data matching point set; generating a comprehensive localization information database based on the second localization information, the IMU data, the GNSS data, the keyframe matching point set, and the point cloud data matching point set; performing joint optimization on data sets in the comprehensive localization information database to obtain a high-precision trace of the moving platform; and using the high-precision trace as the target localization information.

    5. The method according to claim 4, wherein in the step of generating a high-precision local map based on the target localization information, the high-precision local map includes a local image map and a local point cloud three-dimensional scene map, comprising: resolving position and attitude information of the keyframe of the image data based on the high-precision trace to generate the local image map; and resolving position and attitude information of the point cloud data based on the high-precision trace to generate the local point cloud three-dimensional scene map.

    6. The method according to claim 5, wherein the step of performing a closed-loop detection operation on the high-precision local map to obtain a high-precision global map of the moving platform comprises: performing the closed-loop detection operation on the high-precision local map to obtain a local map rotation and translation matrix; constructing an image optimization pose constraint based on the local map rotation and translation matrix; correcting the high-precision trace by using the image optimization pose constraint to obtain a corrected high-precision trace; and obtaining the high-precision global map of the moving platform based on the corrected high-precision trace.

    7. A multi-sensor fusion-based SLAM mapping system for a server, comprising an obtaining module, a hierarchical processing module, a localizing module, a first generation module, and a second generation module, wherein, the obtaining module is configured to obtain a plurality of sensor data regarding a surrounding environment of a moving platform, the plurality of sensor data comprising point cloud data, image data, inertial measurement unit (IMU) data, and global navigation satellite system (GNSS) data; the hierarchical processing module is configured to perform hierarchical processing on the plurality of sensor data to generate a plurality of localization information, wherein one sensor data corresponds to one localization information; the localizing module is configured to obtain target localization information of the moving platform based on the plurality of localization information; the first generation module is configured to generate a high-precision local map based on the target localization information; and the second generation module is configured to perform a closed-loop detection operation on the high-precision local map to obtain a high-precision global map of the moving platform.

    8. The system according to claim 7, wherein the obtaining module further comprises a calibration unit, a synchronization unit, and a collection unit, wherein, the calibration unit is configured with a laser as a benchmark to calibrate position relationships among a camera, an IMU, a GNSS, and the laser to obtain calibration information, wherein the laser, the camera, the IMU and the GNSS are all sensors on the moving platform; the synchronization unit is configured with time of the GNSS as a benchmark to synchronize time of the laser, time of the camera, and time of the IMU to a current time system of the GNSS; and the collection unit is configured to synchronously collect data from the laser, the camera, the IMU, and the GNSS to obtain the plurality of sensor data regarding the surrounding environment of the moving platform, wherein the point cloud data is the data collected from the laser, the image data is the data collected from the camera, the IMU data is the data collected from the IMU, and the GNSS data is the data collected from the GNSS.

    9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable by the processor, wherein the processor, when executing the computer program, implements the steps in the method according to any one of claims 1 to 6.

    10. A computer-readable medium having nonvolatile program code executable by a processor, wherein the processor performs the method according to any one of claims 1 to 6 according to the program code.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0017] To describe the technical solutions in specific embodiments of the present invention or the prior art more clearly, the following briefly introduces the accompanying drawings required for describing the specific embodiments or the prior art. Apparently, the accompanying drawings in the following description show some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.

    [0018] FIG. 1 is a flowchart of a multi-sensor fusion-based SLAM mapping method according to an embodiment of the present invention;

    [0019] FIG. 2 is a method flowchart of obtaining a plurality of sensor data regarding a surrounding environment of a moving platform according to an embodiment of the present invention;

    [0020] FIG. 3 is a method flowchart of performing hierarchical processing on a plurality of sensor data according to an embodiment of the present invention;

    [0021] FIG. 4 is a method flowchart of performing joint optimization on a plurality of localization information according to an embodiment of the present invention;

    [0022] FIG. 5 is a method flowchart of obtaining a high-precision global map of a moving platform according to an embodiment of the present invention;

    [0023] FIG. 6 is a schematic diagram of a multi-sensor fusion-based SLAM mapping system according to an embodiment of the present invention; and

    [0024] FIG. 7 is a schematic diagram of another multi-sensor fusion-based SLAM mapping system according to an embodiment of the present invention.

    DETAILED DESCRIPTION

    [0025] The following clearly and completely describes the technical solutions of the present invention with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.

    Embodiment 1

    [0026] FIG. 1 is a flowchart of a multi-sensor fusion-based SLAM mapping method according to an embodiment of the present invention. The method is applied to a server. As shown in FIG. 1, the method specifically includes the following steps.

    [0027] Step S102: Obtain a plurality of sensor data regarding a surrounding environment of a moving platform, wherein the plurality of sensor data includes point cloud data, image data, IMU data, and GNSS data.

    [0028] Specifically, a laser collects point cloud information of a surrounding environment to obtain the point cloud data. A camera collects image information to obtain the image data. An IMU obtains the angular velocity and acceleration of the moving platform to obtain the IMU data. A GNSS obtains the absolute longitude and latitude coordinates at every moment to obtain the GNSS data.

    [0029] Step S104: Perform hierarchical processing on the plurality of sensor data to generate a plurality of localization information, wherein one sensor data corresponds to one localization information.

    [0030] Step S106: Obtain target localization information of the moving platform based on the plurality of localization information.

    [0031] Step S108: Generate a high-precision local map based on the target localization information.

    [0032] Step S110: Perform a closed-loop detection operation on the high-precision local map to obtain a high-precision global map of the moving platform.

    [0033] The present invention provides a multi-sensor fusion-based SLAM mapping method, including: obtaining a plurality of sensor data regarding a surrounding environment of a moving platform, the plurality of sensor data including point cloud data, image data, IMU data, and GNSS data; performing hierarchical processing on the plurality of sensor data to generate a plurality of localization information, wherein one sensor data corresponds to one localization information; obtaining target localization information of the moving platform based on the plurality of localization information; generating a high-precision local map based on the target localization information; and performing a closed-loop detection operation on the high-precision local map to obtain a high-precision global map of the moving platform. The present invention mitigates the technical problem in the related art that susceptibility to the surrounding environment leads to low precision and large errors.

    [0034] Optionally, as shown in FIG. 2, Step S102 includes the following steps.

    [0035] Step S1021: With a laser as a benchmark, calibrate position relationships among a camera, an IMU, a GNSS, and the laser to obtain calibration information, where the laser, the camera, the IMU, and the GNSS are all sensors on the moving platform.

    [0036] Specifically, for the calibration between the laser and the camera, a laser and a camera mounted on the same rigid body are used to face a calibration target to collect data. Features such as planar patches in the point cloud data and corner points in the image are fitted, and optimization is performed by minimizing point-to-plane distances to solve the extrinsic parameters between the camera and the laser. For the calibration between the laser and the IMU, the trace of the moving device may be obtained from the point cloud data collected by the laser, and the trace of the moving device may also be obtained from the IMU. The extrinsic parameters may be obtained by aligning the two traces. Because the accelerometer of the IMU is prone to drift, the extrinsic parameters between the two can only be approximately estimated (or measured with a ruler).
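    As an illustrative sketch supplied by the editor (not part of the claimed method), the trace-alignment step of the laser-IMU calibration above can be approximated by an SVD-based (Kabsch/Umeyama-style) rigid alignment of the two trajectories; the function name and the use of NumPy are assumptions:

```python
import numpy as np

def align_trajectories(traj_a, traj_b):
    """Estimate the rigid transform (R, t) with traj_a ≈ R @ traj_b + t,
    by the SVD-based (Kabsch) method. traj_*: (N, 3) arrays of positions."""
    ca, cb = traj_a.mean(axis=0), traj_b.mean(axis=0)
    H = (traj_b - cb).T @ (traj_a - ca)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Force a proper rotation (det = +1), guarding against reflections
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = ca - R @ cb
    return R, t
```

    With the rotational extrinsic recovered this way, the translational part remains only approximate, consistent with the accelerometer-drift caveat noted above.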

    [0037] Step S1022: With time of the GNSS as a benchmark, synchronize time of the laser, time of the camera and time of the IMU to a current time system of the GNSS.

    [0038] Step S1023: Synchronously collect data from the laser, the camera, the IMU and the GNSS to obtain the plurality of sensor data regarding the surrounding environment of the moving platform, wherein the point cloud data is the data collected from the laser, the image data is the data collected from the camera, the IMU data is the data collected from the IMU, and the GNSS data is the data collected from the GNSS.

    [0039] In the embodiments of the present invention, the plurality of localization information include initial localization information, first localization information, and second localization information.

    [0040] Optionally, as shown in FIG. 3, Step S104 includes the following steps.

    [0041] Step S1041: Generate the initial localization information based on the IMU data, the GNSS data, and the calibration information.

    [0042] Specifically, the initialization and alignment of the navigation system are first completed on the moving platform. The initial parameters of the Kalman filter are set, and then the a posteriori state estimate P0.sub.t at every moment t is solved by using the GNSS global localization information, the inertial navigation system (INS), and Kalman filtering.
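    A single predict/update cycle of the linear Kalman filter underlying the step above may be sketched as follows. This is an editor-supplied toy analogue (a 1-D position/velocity state observed by a GNSS-like position measurement), not the full navigation filter of the embodiment:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, Rm):
    """One predict/update cycle of a linear Kalman filter.
    x: state estimate, P: covariance, z: measurement vector."""
    # Predict with the motion (INS-style) model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the GNSS-style observation
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + Rm                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

    Iterating this cycle over the incoming measurements yields the a posteriori estimate at every moment t, in the spirit of P0.sub.t above.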

    [0043] Step S1042: Generate the first localization information by using visual SLAM on basis of the initial localization information and the image data.

    [0044] Specifically: a) For each keyframe image, its feature points are calculated; specific feature points include ORB feature points, Harris corner points, and the like.

    [0045] b) For two adjacent image keyframes F.sub.1 and F.sub.2, the feature points with positions corrected according to the initial localization information P0 are shown in the following formula, wherein P.sup.t represents the set of all feature points of the image frame F.sub.1, and P.sup.t-1 represents the set of all feature points of the image frame F.sub.2.

    [00001] $$\begin{cases} P^{t} = \{P_{1}^{t}, P_{2}^{t}, \ldots, P_{N}^{t}\} \in F_{1} \\ P^{t-1} = \{P_{1}^{t-1}, P_{2}^{t-1}, \ldots, P_{N}^{t-1}\} \in F_{2} \end{cases}$$

    [0046] RANSAC is used to eliminate outliers, and the calculation is further refined to obtain the feature point sets P.sup.t and P.sup.t-1 in the normalized plane of the camera.

    [0047] Because the two frames observe the same environment, the conversion relationship among these feature matching point pairs may be expressed as the following formula: ∀i, P.sub.i.sup.t=RP.sub.i.sup.t-1+T, wherein R is the pose rotation matrix of the robot, T is the translation vector of the robot, and P.sub.i.sup.t and P.sub.i.sup.t-1 are a matching pair of feature points from a moment t-1 to a moment t. A method of minimizing the reprojection error is used to solve the poses R and T, as shown in the following formula:

    $$\{R, T\} = \arg\min \sum_{i=1}^{N} \bigl\lVert P_{i}^{t} - (R\,P_{i}^{t-1} + T) \bigr\rVert^{2},$$ wherein P.sup.t represents the set of all feature points of the image frame F.sub.1, P.sup.t-1 represents the set of all feature points of the image frame F.sub.2, R is the rotation matrix, T is the translation vector, and N represents the quantity of feature point pairs.

    [0048] A rotation and translation matrix of adjacent keyframes is calculated, first localization information P1 of all keyframes is sequentially obtained, and a current optimal feature matching pair (that is, an optimal feature matching pair regarding the first localization information P1) is added to a matching database.
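    The outlier elimination and pose solving of paragraphs [0046] to [0048] can be sketched as follows. This is an editor-supplied illustration under assumptions: a closed-form SVD (Kabsch) fit standing in for the reprojection-error minimization, wrapped in a RANSAC loop over the matching pairs; all names, iteration counts, and tolerances are illustrative:

```python
import numpy as np

def fit_rigid(src, dst):
    # Least-squares rigid transform (R, T) with dst ≈ R @ src + T (Kabsch).
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:          # enforce a proper rotation
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, cd - R @ cs

def ransac_rigid(src, dst, iters=200, tol=0.05, seed=0):
    """Estimate (R, T) between matched point sets while rejecting outliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)   # minimal sample
        R, T = fit_rigid(src[idx], dst[idx])
        err = np.linalg.norm(dst - (src @ R.T + T), axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers of the best model
    R, T = fit_rigid(src[best_inliers], dst[best_inliers])
    return R, T, best_inliers
```

    Chaining the per-pair (R, T) estimates across keyframes then yields the first localization information P1, and the surviving inlier pairs are the "optimal feature matching pairs" added to the matching database.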

    [0049] Step S1043: Generate the second localization information by using laser SLAM on basis of the first localization information and the point cloud data.

    [0050] Specifically, for laser point cloud data P.sub.K of a current frame, according to the following formula, a point feature F.sub.K1, a line feature F.sub.K2, and a surface feature F.sub.K3 of the laser point cloud data may be calculated.

    [00002] $$f = \frac{1}{|p| \cdot \lVert X_{i} \rVert} \left\lVert \sum_{j \in p,\, j \neq i} (X_{i} - X_{j}) \right\rVert,$$

    wherein i is a point in P.sub.K, X.sub.i is the coordinate of the point i, p is the neighborhood point set of the point i, j is a point in p, X.sub.j is the coordinate of the point j, and f is the feature value. Thresholds M.sub.1, M.sub.2, M.sub.3, and M.sub.4 are given in advance: a point with the feature value f less than M.sub.1 belongs to the feature set F.sub.k1, a point with f greater than M.sub.2 and less than M.sub.3 belongs to F.sub.k2, and a point with f greater than M.sub.4 belongs to F.sub.k3.
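    The feature value and threshold classification above can be sketched as follows (an editor-supplied illustration: brute-force neighbor search for clarity, with k nearest neighbors standing in for the neighborhood set p; all function names are assumptions):

```python
import numpy as np

def smoothness(points, k=5):
    """Per-point feature value f = ||Σ_{j∈p}(X_i − X_j)|| / (|p|·||X_i||),
    with p taken as the k nearest neighbours of point i (brute force)."""
    n = len(points)
    f = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]            # skip the point itself
        diff = (points[i] - points[nbrs]).sum(axis=0)
        f[i] = np.linalg.norm(diff) / (k * np.linalg.norm(points[i]))
    return f

def classify(f, m1, m2, m3, m4):
    """Split points into the sets F_k1 (f < M1), F_k2 (M2 < f < M3),
    and F_k3 (f > M4) by the preset thresholds, as in the text above."""
    return f < m1, (f > m2) & (f < m3), f > m4
```

    On a straight, evenly sampled segment the neighbor differences cancel and f is near zero, while edge or corner points yield larger f; the four thresholds then partition the cloud into the three feature sets.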

    [0051] According to the first localization information P1, the feature data of each frame is converted into the coordinate system corresponding to the localization information P1. Point cloud data P.sub.t and P.sub.t-1 of two adjacent frames are obtained, and a neighborhood search is performed in F.sub.t for each feature in F.sub.t-1 to determine all candidate feature matching pairs. The rotation and translation parameters R and T of the point cloud data of the two adjacent frames are solved according to the matching pairs by using a least squares method. Specifically, the parameters may be solved by using the following formula:

    Y=RX+T, wherein Y represents the features extracted from the latter of the two adjacent data frames, X represents the features extracted from the former of the two adjacent data frames, R is the rotation matrix, and T is the translation vector.

    [0052] Next, the matching pairs are selected according to the obtained result, and the feature points F.sub.t′ are calculated again. For each feature point in the point cloud P.sub.t-1, F.sub.t is searched again for feature point pairs, and new rotation and translation matrices R and T are recalculated and updated. The rotation and translation information R.sub.t-1 and T.sub.t-1 of the two adjacent frames is thus obtained, and the current optimal feature matching pairs are added to a matching database K.
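    The alternation of paragraphs [0051] and [0052] — match, solve, re-match, re-solve — is the classic ICP pattern. An editor-supplied sketch under assumptions (brute-force nearest-neighbor matching in place of the feature-based candidate search, closed-form SVD solves, illustrative names):

```python
import numpy as np

def icp(src, dst, iters=10):
    """A few alternations of nearest-neighbour matching and the closed-form
    least-squares solve of dst ≈ R @ src + T (brute-force matching)."""
    R, T = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Re-match: nearest neighbour in dst for every transformed src point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        match = dst[d2.argmin(axis=1)]
        # Closed-form (Kabsch) update for the current matched pairs
        cs, cm = cur.mean(0), match.mean(0)
        U, _, Vt = np.linalg.svd((cur - cs).T @ (match - cm))
        dR = (U @ Vt).T
        if np.linalg.det(dR) < 0:       # enforce a proper rotation
            Vt[-1] *= -1
            dR = (U @ Vt).T
        dT = cm - dR @ cs
        cur = cur @ dR.T + dT
        R, T = dR @ R, dR @ T + dT      # accumulate the composite transform
    return R, T
```

    Convergence is only guaranteed when the initial misalignment is small, which is exactly why the embodiment seeds the laser matching with the first localization information P1.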

    [0053] Finally, according to the conversion matrices of adjacent frames, the second localization information P2 based on the laser point cloud data is obtained, and the corresponding optimal feature matching pairs are recorded. The foregoing FIG. 3 thus involves the following operations: Step S1041: Generate the initial localization information based on the IMU data, the GNSS data, and the calibration information. Step S1042: Generate the first localization information by using visual SLAM on the basis of the initial localization information and the image data. Step S1043: Generate the second localization information by using laser SLAM on the basis of the first localization information and the point cloud data. In the foregoing technical solution, a method of minimizing the reprojection error and a least squares method are combined to obtain the localization information.

    [0054] Optionally, FIG. 4 is a method flowchart of performing joint optimization on a plurality of localization information according to an embodiment of the present invention. As shown in FIG. 4, the method includes the following steps:

    [0055] Step S1061: Extract a keyframe matching point set of the image data and a point cloud data matching point set.

    [0056] Step S1062: Generate a comprehensive localization information database based on the second localization information, the IMU data, the GNSS data, the keyframe matching point set, and the point cloud data matching point set.

    [0057] Step S1063: Perform joint optimization on data sets in the comprehensive localization information database to obtain a high-precision trace of the moving platform.

    [0058] Step S1064: Use the high-precision trace as the target localization information.

    [0059] Specifically, a sliding window with a capacity of n is first constructed. Each unit of the sliding window includes camera keyframe matching-pair information or laser point cloud matching-pair information, together with IMU preintegration information. The IMU preintegration information is an observed value formed from all IMU data between two consecutive data frames through an IMU preintegration model. Next, a factor graph model is sequentially constructed for the data in the sliding window, including an IMU preintegration constraint Z.sup.imu, a laser point cloud feature matching constraint Z.sup.laser, an image keyframe matching constraint Z.sup.img, a GNSS position constraint Z.sup.gnss, and the like. A maximum a posteriori probability of the joint probability distribution is solved to obtain each state variable at each time point. The state vector that needs to be estimated is:


    S={E,N,U,V.sub.e,V.sub.n,V.sub.u,roll,pitch,yaw,ϕ.sub.x,ϕ.sub.y,ϕ.sub.z,σ.sub.x,σ.sub.y,σ.sub.z}.

    [0060] Wherein E, N, and U respectively represent the three-dimensional coordinates in the world coordinate system; V.sub.e, V.sub.n, and V.sub.u respectively represent the east, north, and up velocities; roll, pitch, and yaw represent the attitude angles; ϕ.sub.x, ϕ.sub.y, and ϕ.sub.z respectively represent the biases of the gyroscope; and σ.sub.x, σ.sub.y, and σ.sub.z respectively represent the biases of the accelerometer.

    [0061] For a state set S.sub.k={S.sub.1, S.sub.2, . . . , S.sub.k} at each moment, according to the foregoing constructed measured value set Z={Z.sup.imu, Z.sup.laser, Z.sup.img, Z.sup.gnss}, the maximum a posteriori probability of the joint probability distribution p(S.sub.k|Z.sub.k) is solved: S.sub.k′ = arg max over S.sub.k of p(S.sub.k|Z.sub.k), wherein S.sub.k′={S.sub.1′, S.sub.2′, . . . , S.sub.k′} represents the optimal estimated value of S.sub.k. The optimal state quantities are solved to obtain a high-precision trace T.
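    The MAP problem of paragraphs [0059] to [0061] reduces, for Gaussian factors, to a weighted least squares over the window states. A drastically simplified, editor-supplied analogue — linear, 1-D positions only, with GNSS-style absolute factors and preintegration-style relative factors — shows the stacking-and-solving pattern; all names and weights are illustrative assumptions:

```python
import numpy as np

def fuse_window(gnss, rel, w_gnss=1.0, w_rel=10.0):
    """Solve a tiny linear MAP problem for 1-D positions x_0..x_{k-1}:
    absolute constraints x_i ≈ gnss[i] (Z^gnss-style factors) and relative
    constraints x_{i+1} − x_i ≈ rel[i] (Z^imu/Z^laser-style factors),
    weighted and stacked into one least-squares system."""
    k = len(gnss)
    rows, rhs = [], []
    for i in range(k):                      # absolute (GNSS) factors
        r = np.zeros(k)
        r[i] = w_gnss
        rows.append(r)
        rhs.append(w_gnss * gnss[i])
    for i in range(k - 1):                  # relative (preintegration) factors
        r = np.zeros(k)
        r[i], r[i + 1] = -w_rel, w_rel
        rows.append(r)
        rhs.append(w_rel * rel[i])
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x
```

    The real system solves the same kind of stacked problem over the full 15-dimensional state S at every window position, typically with a nonlinear factor graph solver rather than one linear solve.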

    [0062] In the embodiments of the present invention, the high-precision local map includes a local image map and a local point cloud three-dimensional scene map. Optionally, step S108 further includes the following steps:

    [0063] Step S1081: Resolve position and attitude information of the keyframe of the image data based on the high-precision trace to generate the local image map.

    [0064] Step S1082: Resolve position and attitude information of the point cloud data based on the high-precision trace to generate the local point cloud three-dimensional scene map.

    [0065] Optionally, as shown in FIG. 5, Step S110 includes the following steps.

    [0066] Step S1101: Perform the closed-loop detection operation to the high-precision local map to obtain a local map rotation and translation matrix.

    [0067] Specifically, it is first determined according to the GNSS data whether there is a repetition between the current map and a previously scanned map. If the difference in longitude and latitude is within a particular threshold, it is considered that the moving platform has returned to the same position, and the two frames form a closed-loop candidate.

    [0068] Next, it is determined, according to the feature point information of the images, whether there is a repetition between the local map of the current image and a previously formed image map, to perform closed-loop image detection. Specifically, the features of each frame are used to perform a search in a dictionary to calculate a similarity. If the similarity is sufficiently high, it is considered that the moving platform has returned to a previous position, forming a closed loop.
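    The dictionary search above is commonly realized with bag-of-words descriptor histograms compared by cosine similarity. An editor-supplied sketch (the histogram representation, threshold value, and names are assumptions, not the embodiment's specific vocabulary):

```python
import numpy as np

def bow_similarity(hist_a, hist_b):
    """Cosine similarity between two bag-of-words descriptor histograms."""
    a, b = np.asarray(hist_a, float), np.asarray(hist_b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def detect_loop(current, database, threshold=0.8):
    """Return indices of past keyframes whose similarity to the current
    keyframe's histogram exceeds the threshold (loop-closure candidates)."""
    return [i for i, h in enumerate(database)
            if bow_similarity(current, h) > threshold]
```

    Candidates returned here would still be verified geometrically (by the point cloud check described next) before a loop constraint is accepted.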

    [0069] Next, it is determined, according to the laser point cloud information, whether there is a repetition between the current local point cloud map and a previously formed point cloud map, so as to perform point cloud closed-loop detection.

    [0070] Specifically, for two candidate frames of point clouds i and j, the registration error E of the point clouds is calculated, and the error function ϕ is minimized. The following formulas are used to calculate the registration error and the minimized error:


    $$E_{i,j} = X_{i} - T_{i,j} \cdot X_{j},$$

    $$\phi = \sum_{i,j} \lVert E_{i,j} \rVert_{2}^{2},$$ wherein i is a point in the to-be-registered data frame in the current map, X.sub.i is the coordinates of i, j is a point in the global-coordinate-system data frame, X.sub.j is the coordinates of j, T.sub.i,j is the rotation and translation matrix from j to i, E.sub.i,j is the registration error, and ϕ is the total registration error to be minimized.

    [0071] The determination reference is a point cloud-based Intersection over Union (IoU), that is, the ratio of corresponding points in the point cloud to the points in the point cloud overlap area when the registration error is minimal. If the IoU of a point cloud is greater than a particular percentage, the point cloud is determined to form a closed loop. In this case, T.sub.i,j is the rotation and translation matrix of the point cloud.
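    The registration error and IoU check above can be sketched as follows. This editor-supplied illustration makes assumptions: T is taken as a 4x4 homogeneous transform, and "corresponding points" are approximated by a distance-threshold match, which is a simple stand-in for the embodiment's same-name-point count:

```python
import numpy as np

def registration_error(xi, xj, T):
    """E_ij = X_i − T · X_j with T a 4x4 homogeneous transform."""
    xj_h = np.append(xj, 1.0)
    return xi - (T @ xj_h)[:3]

def cloud_iou(cloud_a, cloud_b, T, tol=0.1):
    """Fraction of points of cloud_b that land within tol of some point of
    cloud_a after applying T, over the union of both clouds — a simple
    stand-in for the point-cloud IoU test described above."""
    b_h = np.hstack([cloud_b, np.ones((len(cloud_b), 1))])
    b_t = (T @ b_h.T).T[:, :3]               # transform cloud_b by T
    d = np.linalg.norm(b_t[:, None, :] - cloud_a[None, :, :], axis=2)
    matched = int((d.min(axis=1) < tol).sum())
    union = len(cloud_a) + len(cloud_b) - matched
    return matched / union
```

    A loop would be accepted when this ratio exceeds the preset percentage, and the verified T then supplies the rotation and translation matrix for the pose constraint of Step S1102.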

    [0072] Step S1102: Construct an image optimization pose constraint based on the local map rotation and translation matrix.

    [0073] Step S1103: Correct the high-precision trace by using the image optimization pose constraint to obtain a corrected high-precision trace.

    [0074] Step S1104: Obtain the high-precision global map of the moving platform based on the corrected high-precision trace.

    [0075] In the embodiments of the present invention, accumulated errors keep increasing as time elapses, leading to reduced precision. Closed-loop detection determines whether the scene collected in the current frame is highly similar to that of a previous frame; if so, a loop is formed. The accumulation of errors can then be reduced during optimization, to generate a high-precision global map.

    [0076] Compared with the manner in the prior art, the embodiments of the present invention have at least one of the following advantages:

    (1) The case in which a single sensor fails in SLAM in a challenging feature environment is resolved; the collection and processing of the system have high stability and robustness.
    (2) Multi-sensor fusion resolves the problem of large accumulated errors in a conventional SLAM algorithm, thereby improving the mapping precision of the global map.

    Embodiment 2

    [0077] FIG. 6 is a schematic diagram of a multi-sensor fusion-based SLAM mapping system according to an embodiment of the present invention. The system is applied to a server. As shown in FIG. 6, the system includes an obtaining module 10, a hierarchical processing module 20, a localizing module 30, a first generation module 40, and a second generation module 50.

    [0078] Specifically, the obtaining module 10 is configured to obtain a plurality of sensor data regarding a surrounding environment of a moving platform, the plurality of sensor data including point cloud data, image data, IMU data, and GNSS data.

    [0079] The hierarchical processing module 20 is configured to perform hierarchical processing on the plurality of sensor data to generate a plurality of localization information, wherein one sensor data corresponds to one localization information.

    [0080] The localizing module 30 is configured to obtain target localization information of the moving platform based on the plurality of localization information.

    [0081] The first generation module 40 is configured to generate a high-precision local map based on the target localization information.

    [0082] The second generation module 50 is configured to perform a closed-loop detection operation on the high-precision local map to obtain a high-precision global map of the moving platform.

    [0083] Optionally, FIG. 7 is a schematic diagram of another multi-sensor fusion-based SLAM mapping system according to an embodiment of the present invention. As shown in FIG. 7, the obtaining module 10 includes a calibration unit 11, a synchronization unit 12, and a collection unit 13.

    [0084] Specifically, the calibration unit 11 is configured with a laser as a benchmark to calibrate position relationships among a camera, an IMU, a GNSS, and the laser to obtain calibration information, wherein the laser, the camera, the IMU and the GNSS are all sensors on the moving platform.

    [0085] The synchronization unit 12 is configured with time of the GNSS as a benchmark to synchronize time of the laser, time of the camera, and time of the IMU to a current time system of the GNSS.

    [0086] The collection unit 13 is configured to synchronously collect data from the laser, the camera, the IMU, and the GNSS to obtain the plurality of sensor data regarding the surrounding environment of the moving platform, wherein the point cloud data is the data collected from the laser, the image data is the data collected from the camera, the IMU data is the data collected from the IMU, and the GNSS data is the data collected from the GNSS.

    [0087] The embodiments of the present invention further provide an electronic device including a memory, a processor, and a computer program stored in the memory and executable by the processor; the processor, when executing the computer program, implements the steps in the method according to Embodiment 1.

    [0088] The embodiments of the present invention further provide a computer-readable medium having nonvolatile program code executable by a processor; the processor performs the method according to Embodiment 1 according to the program code.

    [0089] Finally, it should be noted that the foregoing embodiments are merely intended for describing the technical solutions of the present invention rather than limiting the present invention. Although the present invention is described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they may still make modifications to the technical solutions described in the foregoing embodiments or make equivalent replacements to some or all the technical features thereof, without departing from the scope of the technical solutions of the embodiments of the present invention.