ACTIVE PERCEPTION SYSTEM FOR DOUBLE-AXLE STEERING CAB-LESS MINING VEHICLE

20230406366 · 2023-12-21

Abstract

The present disclosure discloses an active perception system for a double-axle steering cab-less mining vehicle. The active perception system includes: an environment perception mapping module, a single-frame target detection module, a multi-frame data association module, a cloud control platform, a decision-making and planning module, and a vehicle control module. The double-axle steering autonomous driving mining vehicle can be actively switched between a forward advancing mode and a backward advancing mode according to the driving mode and the comprehensive sensor information. The system improves the working efficiency of the mining area, and solves the problems of the large perception blind area and high risk of an existing autonomous driving perception system during turning and on uphill and downhill grades.

Claims

1. An active perception system for a double-axle steering cab-less mining vehicle, comprising: an environment perception mapping module, a single-frame target detection module, a multi-frame data association module, a cloud control platform, a decision-making and planning module, and a vehicle control module, wherein the environment perception mapping module is configured to build a three-dimensional (3D) perception map and extract a region of interest (ROI) by marking an unstructured road boundary; the single-frame target detection module is configured to detect single-frame data by each sensor detection module, comprising but not limited to Euclidean clustering-based millimeter wave radar detection, deep learning algorithm-based Lidar detection and deep learning algorithm-based camera detection; the multi-frame data association module is configured to predict a risk index of each detection target by calculating an information entropy of each sensor and acquiring indicators such as a mark, speed and distance of the target, and determine a state of an obstacle after the risk index feeds back on the information entropy; the cloud control platform is configured to receive perceived environmental data and transmit the data to the decision-making and planning module; the decision-making and planning module is configured to determine a speed, slope and steering angle of the vehicle through a corresponding planning algorithm, transmit the above information to the vehicle control module for vehicle control, and feed back the above information to adaptively control a rotation direction of each sensor, so as to increase an effective perception and detection range; and the vehicle control module is configured to actively adjust the vehicle to a forward advancing mode or a backward advancing mode according to a driving mode and comprehensive information of the sensor.

2. The active perception system according to claim 1, wherein the double-axle steering autonomous driving mining vehicle has the forward advancing mode and the backward advancing mode: in the forward advancing mode, a sensor at the front of a vehicle body is taken as a main sensor, and the forward advancing mode is started when the vehicle moves from a loading area to an unloading area under a full load; and in the backward advancing mode, a sensor at the rear of the vehicle body is taken as a main sensor, and the backward advancing mode is started when the vehicle moves from the unloading area to the loading area under a no-load state.

3. The active perception system according to claim 1, comprising a forward sensor component and a backward sensor component with the same configuration, wherein each of the sensor components comprises at least a slope sensor, a global navigation satellite system/inertial measurement unit (GNSS/IMU), a Lidar, a camera, a millimeter wave radar, and a rotation angle receiving module and rotation angle control module of each sensor.

4. The active perception system according to claim 1, wherein the environment perception mapping module comprises an equipment installation and calibration module, a point cloud map generation module, and a ROI extraction module: the equipment installation and calibration module is configured to arrange two sets of Lidars, millimeter wave radars and cameras at the front and rear of a double-axle steering autonomous driving mining vehicle body and a GNSS/IMU and a slope sensor on the vehicle body, and calibrate the Lidar and the GNSS/IMU to obtain the positioning relationship between a vehicle body coordinate system and a world coordinate system; the point cloud map generation module is configured to establish a local world map with a global positioning system (GPS) of a first waypoint as an origin, determine a conversion matrix by Lidar and vehicle GNSS/IMU calibration information by inputting the GPS and Lidar data, convert GPS information of a point cloud at every moment into the world coordinate system, and perform front-end iterative closest point (ICP) matching and back-end extended Kalman filter (EKF) optimization processing on the point cloud to form a local point cloud map; and the ROI extraction module is configured to extract the unstructured road boundary based on a point cloud normal vector, distinguish a relatively vertical ground normal vector and a relatively inclined retaining wall normal vector, and extract a driving boundary through orientation features of the normal vector, so as to generate a drivable area.

5. The active perception system according to claim 1, wherein the single-frame target detection module comprises a data obtaining module, an adaptive angular rotation module, a Euclidean clustering extraction-based millimeter wave radar detection target information module, a deep learning algorithm extraction-based Lidar detection target information module, and a deep learning algorithm extraction-based camera detection target information module: the data obtaining module is configured to receive point cloud information of a ROI and speed, slope and steering angle information of the vehicle, and obtain data output by a millimeter wave radar, a Lidar and a camera; the adaptive angular rotation module is configured to determine a rotation angle of the sensor according to the steering angle information of the vehicle combined with the point cloud information of the ROI and control the rotation angle of each sensor by a rotation angle control module, and determine the rotation angle of the sensor according to an output value of a slope sensor when a vehicle body is tilted or a center of gravity shifts and compensate a visual field blind area caused by a pitch angle of the vehicle; the Euclidean clustering extraction-based millimeter wave radar detection target information module is configured to perform Euclidean clustering on a point cloud on a road after ground information is removed from the ROI, specifically, randomly selecting target points and putting the target points into a cluster if a distance between current points is less than a threshold, otherwise, selecting other points for judgment, continuing to repeat the above operations until there are no points with a distance less than the threshold, and outputting a clustering result; the deep learning algorithm extraction-based Lidar detection target information module is configured to cut an original point cloud into grids from a perspective of a top view, pillarize a 3D point cloud, represent a point cloud in 
each Pillar by a 9-dimensional vector, input the point cloud into a PointNet feature extraction network to generate a two-dimensional (2D) pseudo-image, input the pseudo-image into a 2D convolutional neural network (CNN) to further extract point cloud features, perform BoundingBox regression through a single-shot detector (SSD) head and output a position and orientation of the obstacle; and the deep learning algorithm extraction-based camera detection target information module is configured to enhance an image using Mosaic data, change a size and position of a truth value target, input the enhanced image into the network, unify the size of the image using a Focus module, extract deep semantic information and shallow spatial information of the image through an improved residual network, and accurately detect a category and position of the target under three scales through a feature pyramid network.

6. The active perception system according to claim 1, wherein the multi-frame data association module comprises a target fusion module and a risk level prediction module: the target fusion module is configured to generate a detection target list according to single-frame target detection information and fuse target features output by each sensor by a Bayesian method to obtain a consistent interpretation of the target, calculate a likelihood probability P(E|H.sub.i) of an observed feature E of the sensor when a given hypothesis H.sub.i is true, compare an observed value with a predetermined threshold th.sub.i, a number of expected targets, feature matching and quality of feature matching, and analyze the above data to determine a value of a likelihood function which represents a confidence of the target a.sub.j; and the risk level prediction module is configured to input all relevant factors into an inference machine which repeatedly infers through an algorithm and related rules in a knowledge base by using an expert system and taking a target speed, a vehicle body speed, a tag value, and a position of the target relative to a vehicle body as evaluation criteria, and consider that a detection risk level is high and actively update the confidence of the target a.sub.j if a risk score of the target output by the expert system is greater than a risk level threshold.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0038] The accompanying drawings are provided for further understanding of the embodiments of the present disclosure, and constitute a part of the specification. The accompanying drawings and the following specific implementations are intended to explain embodiments of the present disclosure, rather than to limit the embodiments of the present disclosure.

[0039] FIG. 1 is an overall architecture diagram of the active perception system;

[0040] FIG. 2 is a schematic diagram of the influence of a vehicle body pitch angle on a perception range of a sensor;

[0041] FIG. 3 is a geometric schematic diagram of Ackermann steering;

[0042] FIG. 4 is a schematic diagram of the influence of a vehicle body yaw angle on the perception range of the sensor;

[0043] FIG. 5 is a diagram of the stepper motor control system;

[0044] FIG. 6 is a flow chart of the Bayesian algorithm; and

[0045] FIG. 7 is a flow chart of the algorithm of the multi-frame target association module.

DETAILED DESCRIPTION OF THE EMBODIMENTS

[0046] During driving, the perception system can actively adjust the front and rear wheel steering according to the driving mode and the comprehensive sensor information. For example, when the fully loaded vehicle climbs uphill, the goods shift backward, the front of the vehicle becomes lighter than the rear, and the vehicle tilts backward and is prone to oversteering, so it is adaptively adjusted to rear wheel steering. When the vehicle goes downhill and the goods shift forward, the front of the vehicle becomes heavier than the rear, and the vehicle is prone to understeering, so it is adaptively adjusted to front wheel steering. An adaptive perception system for a double-axle steering autonomous driving mining vehicle is provided with the following modules.

(1) Environment Perception Mapping Module:

[0047] Step I: Equipment Installation and Calibration

[0048] Two sets of Lidars, millimeter wave radars and cameras are arranged at the front and rear of a double-axle steering autonomous driving mining vehicle body, a GNSS/IMU and a slope sensor are arranged on the vehicle body, and the Lidar and the GNSS/IMU are calibrated to obtain a positioning relationship between a vehicle body coordinate system and a world coordinate system.

[0049] Step II: Point Cloud Map Generation

[0050] A local world map is established with the GPS of a first waypoint as an origin. The GPS and Lidar data are input synchronously. The GPS position of the vehicle at every moment is converted into the world coordinate system. A conversion matrix is determined from the Lidar and vehicle GNSS/IMU calibration information, and the GPS information of the point cloud at every moment is converted into the world coordinate system. The point cloud at every moment is subjected to front-end ICP matching and back-end EKF optimization processing to form a local point cloud map.

[0051] Step III: ROI Extraction

[0052] The unstructured road boundary is extracted based on a point cloud normal vector. The ground normal vector is relatively vertical, while the retaining wall normal vector is relatively inclined. A boundary is extracted through orientation features of the normal vector, so as to generate a drivable area.
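The normal-vector test described above can be sketched as follows; the 45° split between "relatively vertical" ground normals and "relatively inclined" retaining wall normals is an illustrative threshold not specified in the source:

```python
import numpy as np

def drivable_mask(normals, wall_angle_deg=45.0):
    """Label each point as drivable ground (True) or boundary (False) from
    its unit normal: near-vertical normals -> ground, inclined normals ->
    retaining wall. The 45-degree split is an illustrative threshold."""
    up = np.array([0.0, 0.0, 1.0])
    cos_tilt = np.abs(normals @ up)            # |cos| of the angle to vertical
    return cos_tilt > np.cos(np.deg2rad(wall_angle_deg))

normals = np.array([[0.0, 0.0, 1.0],           # flat ground
                    [0.05, 0.0, 0.999],        # slightly rough ground
                    [0.9, 0.0, 0.436]])        # inclined retaining wall
mask = drivable_mask(normals / np.linalg.norm(normals, axis=1, keepdims=True))
assert mask.tolist() == [True, True, False]    # ground in, wall out
```

Points flagged True form the drivable area; the boundary between the two labels traces the unstructured road edge.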

(2) Single-Frame Target Detection Module:

[0053] Step I: Data Obtaining

[0054] Point cloud information of a ROI, steering angle information of the vehicle, and slope sensor information are received, and the data output by a millimeter wave radar, a Lidar and a camera is obtained.

[0055] Step II: Adaptive Angle Rotation

[0056] 1) When the planning module continuously outputs vehicle steering angle information, the rotation angle of the sensor is determined in combination with a comparison of the point cloud information in the drivable area, and the sensor is controlled to rotate by the corresponding angle by the sensor rotation controller.

[0057] When the matching degree between the current drivable area and the expected ROI is continuously less than the threshold, namely:

[00001]

\[
\frac{1}{n}\sum_{i=1}^{n} M_i < M_x, \quad \text{where } M_i = \frac{S_n}{S_p},
\]

where i is the current moment, M_i is the matching degree between the current drivable area and the expected ROI, M_x is the matching threshold, S_n is the area of the current drivable area, and S_p is the area of the expected ROI. If the above formula is met, the forward sensor is considered to meet the steering condition and steering can be performed.
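The steering condition above can be sketched directly; the areas and threshold below are illustrative values, not from the source:

```python
def steering_condition(S_n_list, S_p, M_x):
    """Check (1/n) * sum(M_i) < M_x with M_i = S_n / S_p: the forward
    sensor meets the steering condition when the mean matching degree over
    the last n frames falls below the threshold M_x."""
    M = [S_n / S_p for S_n in S_n_list]
    return sum(M) / len(M) < M_x

# drivable area shrinking relative to a 100 m^2 expected ROI
assert steering_condition([62.0, 55.0, 48.0], 100.0, M_x=0.6)      # mean 0.55
assert not steering_condition([80.0, 75.0, 70.0], 100.0, M_x=0.6)  # mean 0.75
```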

[0058] The steering angle derivation method based on the vehicle safety braking distance is related to the driving road adhesion coefficient and the weather conditions. Because the ground of the mining area is bumpy and the weather is uncertain, the steering angle derived by this method has large fluctuations and is not applicable. The present disclosure uses a steering angle derivation method directly based on the geometric relationship.

[0059] It is assumed that when the vehicle turns right, the left wheel steering angle is δ_l, the right wheel steering angle is δ_r, the steering angle from the sensor to the intersection of the left and right front wheel axes is θ_x, K is the ground wheelbase of the left and right front wheel axles, and L is the ground wheelbase of the front and rear wheel axles on the same side of the vehicle. The distance between the left and right front wheel axles of the vehicle and the sensor is l, and the sensor should turn at an angle of θ. According to Ackermann's vehicle steering law, it can be derived that:

[00002]

\[
\cot\delta_l = \cot\delta_r + \frac{K}{L}.
\]

[0060] As shown in FIG. 3:

[00003]

\[
H = \frac{L}{\tan\delta_r}, \quad \text{and} \quad \tan\theta_x = \frac{L}{H + x}.
\]

[0061] The steering angle θ_x from the sensor to the intersection of the left and right front wheel axes is

[00004]

\[
\tan\theta_x = \frac{L\tan\delta_r}{L + x\tan\delta_r}.
\]

[0062] The turning radius of a corresponding sensor emission source is:

[00005]

\[
R_x = \frac{L}{\sin\theta_x} = \frac{L}{\sin\!\left(\arctan\!\left(\dfrac{L\tan\delta_r}{L + x\tan\delta_r}\right)\right)},
\]

where:

[0063] A, B, C: distance from the sensor emission source to the front left and right wheel axles, distance between the rear left and right wheel axles, and position of the sensor emission source;
[0064] x: distance from the sensor emission source to the right wheel;
[0065] l: distance from the left and right front wheel axles to the sensor;
[0066] L: locomotive axle spacing;
[0067] O: instantaneous turning center of the locomotive;
[0068] H: distance from the instantaneous turning center of the locomotive to the right wheel; and
[0069] D: tangent point between the axis of the sensor and the intended track line of the corresponding wheel.
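The relations tan θ_x = L tan δ_r / (L + x tan δ_r) and R_x = L / sin θ_x can be sketched numerically; the wheel angle and distances below are illustrative values:

```python
import math

def sensor_geometry(delta_r_deg, L, x):
    """theta_x from tan(theta_x) = L*tan(delta_r) / (L + x*tan(delta_r)),
    and the turning radius of the sensor emission source R_x = L / sin(theta_x).
    Angles in degrees, lengths in metres (illustrative units)."""
    t = math.tan(math.radians(delta_r_deg))
    theta_x = math.atan(L * t / (L + x * t))
    R_x = L / math.sin(theta_x)
    return math.degrees(theta_x), R_x

theta_x, R_x = sensor_geometry(delta_r_deg=20.0, L=4.0, x=1.0)
assert 0 < theta_x < 20.0   # the sensor angle is smaller than the wheel angle
assert abs(R_x - 4.0 / math.sin(math.radians(theta_x))) < 1e-9
```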

[0070] It can be seen from FIG. 4 that:

[00006]

\[
\begin{cases}
OD \perp CD \\
OB \perp BC.
\end{cases}
\]

[0071] According to the sum theorem of the interior angles of a quadrilateral:

\[
\angle BCD + \angle BOD = \pi,
\]

and:

\[
\angle BCD + \theta = \pi,
\]

therefore,

\[
\theta = \angle BOD = \theta_x + \angle AOD = \theta_x + \angle AOC + \angle COD.
\]

[0072] According to the Pythagorean theorem:

\[
OB^2 = OA^2 - AB^2 = R_x^2 - L^2, \quad \text{and}
\]

\[
OC^2 = OB^2 + BC^2 = R_x^2 - L^2 + (L+l)^2 = R_x^2 + l^2 + 2lL,
\]

where R_x is the radius from the center of the locomotive turning clockwise to the sensor emission source. For the angle ∠COD:

[00007]

\[
\cos\angle COD = \frac{OA}{OC} = \frac{R_x}{\sqrt{R_x^2 + l^2 + 2lL}}.
\]

[0073] For the angle ∠AOC, according to the law of cosines, with AC = l:

[00008]

\[
\cos\angle AOC = \frac{OC^2 + OA^2 - AC^2}{2\,OA\,OC}
= \frac{R_x^2 + l^2 + 2lL + R_x^2 - l^2}{2R_x\sqrt{R_x^2 + l^2 + 2lL}}
= \frac{R_x^2 + lL}{R_x\sqrt{R_x^2 + l^2 + 2lL}},
\]

then the sensor steering angle is:

[00009]

\[
\theta = \theta_x + \arccos\!\left(\frac{R_x^2 + lL}{R_x\sqrt{R_x^2 + l^2 + 2lL}}\right) + \arccos\!\left(\frac{R_x}{\sqrt{R_x^2 + l^2 + 2lL}}\right).
\]
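The resulting sensor steering angle can be evaluated numerically; the geometry values below are illustrative, with angles handled in radians:

```python
import math

def sensor_steering_angle(theta_x, R_x, l, L):
    """theta = theta_x
             + arccos((R_x^2 + l*L) / (R_x * sqrt(R_x^2 + l^2 + 2*l*L)))
             + arccos(R_x / sqrt(R_x^2 + l^2 + 2*l*L)),
    with l the axle-to-sensor distance and L the wheelbase (radians)."""
    root = math.sqrt(R_x**2 + l**2 + 2 * l * L)
    angle_AOC = math.acos((R_x**2 + l * L) / (R_x * root))
    angle_COD = math.acos(R_x / root)
    return theta_x + angle_AOC + angle_COD

# illustrative numbers, using R_x = L / sin(theta_x)
theta_x = math.radians(18.0)
L, l = 4.0, 0.5
theta = sensor_steering_angle(theta_x, L / math.sin(theta_x), l, L)
assert theta > theta_x   # both arccos correction terms are non-negative
```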

[0074] The perception system changes the angle in advance to reduce the delay error. By substituting the time variable t, the function of the adaptive rotation angle of the sensor can be obtained:


\[
R_x(t)\,\theta(t) = \int V(t)\,dt,
\]

where V(t) is a speed function output by the decision-making and planning module.

[0075] It is substituted to obtain the final adaptive rotation angle function of the sensor:

[00010]

\[
\theta(t) = \frac{1}{R_x}\int V(t)\,dt = \frac{\sin\theta_x}{L}\int V(t)\,dt.
\]
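The adaptive rotation angle θ(t) = (sin θ_x / L) ∫ V(t) dt can be evaluated numerically from a planned speed profile; the profile below is an illustrative constant speed:

```python
import math

def adaptive_angle(theta_x, L, V, t_end, dt=1e-3):
    """theta(t) = (sin(theta_x) / L) * integral_0^t V(s) ds, integrated
    numerically with the trapezoidal rule; V is the planned speed profile
    output by the decision-making and planning module."""
    n = int(t_end / dt)
    integral = sum(0.5 * (V(i * dt) + V((i + 1) * dt)) * dt for i in range(n))
    return math.sin(theta_x) / L * integral

# constant 5 m/s for 2 s -> integral = 10 m
theta = adaptive_angle(theta_x=math.radians(18.0), L=4.0,
                       V=lambda t: 5.0, t_end=2.0)
assert abs(theta - math.sin(math.radians(18.0)) / 4.0 * 10.0) < 1e-6
```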

[0076] 2) When there are relatively large rocks or puddles on the road, or when the vehicle goes downhill under a full load, the vehicle tilts forward or its center of gravity shifts. The slope sensor outputs the slope value, the rotation angle of the sensor is determined according to this value, and the sensor rotation controller controls the rotation of the sensor to compensate the visual field blind area caused by the pitch angle of the vehicle.

[0077] It can be seen from FIG. 2 that θ_y is the angle by which the forward-tilt sensor of the vehicle shall be adjusted (defined as positive when adjusted upward and negative when adjusted downward), α is the forward tilt angle of the vehicle output by the slope sensor, the position perceived by the sensor decreases accordingly when the center of gravity of the vehicle sinks, and β is the included angle between the decline point perceived by the sensor caused by the decline of the center of gravity of the vehicle and the expected perceived position point:


\[
\theta_y = -\alpha - \beta.
\]

[0078] Since α is small, it can be obtained approximately:

[00011]

\[
\tan\alpha = \frac{h}{w},
\]

where h is the height variation of the forward tilt sensor of the vehicle, and w is the horizontal component of the distance between the sensor and the center of gravity of the vehicle.

[0079] According to the geometric relation, it can be obtained:

[00012]

\[
\tan\beta = \frac{h}{W}.
\]

[0080] W is the effective perceived distance of the sensor, so it can be obtained:

[00013]

\[
\theta_y = -\alpha - \arctan\!\left(\frac{w\tan\alpha}{W}\right).
\]
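The pitch-compensation formula can be sketched as follows; the tilt angle and distances are illustrative values:

```python
import math

def pitch_compensation(alpha_deg, w, W):
    """theta_y = -alpha - arctan(w * tan(alpha) / W): the angle the
    forward sensor must rotate to compensate the blind area caused by a
    forward tilt alpha (slope-sensor output). Here w is the horizontal
    sensor-to-centre-of-gravity distance and W the effective perceived
    distance; the sensor height change h = w * tan(alpha) is eliminated
    via tan(beta) = h / W. Returns degrees."""
    alpha = math.radians(alpha_deg)
    beta = math.atan(w * math.tan(alpha) / W)
    return -math.degrees(alpha + beta)

theta_y = pitch_compensation(alpha_deg=5.0, w=2.0, W=30.0)
assert -6.0 < theta_y < -5.0   # slightly more than the tilt angle itself
```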

[0081] The adaptive rotation sensor is driven by a small stepper motor. It is considered that the stepper motor control system is a second-order inertia link plus a pure hysteresis link. A diagram of the stepper motor control system is shown in FIG. 5.

[0082] In FIG. 5, N(s) is the input pulse number of the stepper motor control system, and Y(s) is the output angle of the stepper motor control system. A transfer function of the stepper motor control system is:

[00014]

\[
G(s) = \frac{K e^{-\tau s}}{s(Ts + 1)},
\]

where K is the static gain of the system, T is the system time constant, and τ is the pure lag time.

[0083] The mathematical model of the front sensor is established, and the least square method is used for system identification. The mathematical model of the time-invariant dynamic process is assumed to be:


\[
A(z^{-1})z(k) = B(z^{-1})u(k) + n(k),
\]

where u(k) and z(k) are the input and output of the system, n(k) is noise, and A(z^{-1}) and B(z^{-1}) are polynomials, where:

[00015]

\[
\begin{cases}
A(z^{-1}) = 1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_n z^{-n} \\
B(z^{-1}) = b_1 z^{-1} + b_2 z^{-2} + \cdots + b_m z^{-m}.
\end{cases}
\]

[0084] Using an observed sequence {u(k), z(k); k = 1, . . . , L}, the unknown coefficients in the polynomials A(z^{-1}) and B(z^{-1}) are estimated.

[0085] It is defined that:

[00016]

\[
\begin{cases}
\theta = [a_1, a_2, \ldots, a_n;\ b_1, b_2, \ldots, b_m]^T \\
h(k) = [-z(k-1), \ldots, -z(k-n);\ u(k-1), \ldots, u(k-m)]^T,
\end{cases}
\]

then the mathematical model of the initial time-invariant dynamic process can be converted into a standard least square format:


\[
z(k) = h^T(k)\theta + n(k), \quad k = 1, 2, \ldots, L.
\]

[0086] The above formula can also be written into a linear system of equations, and its matrix format is:


\[
Z_L = H_L\theta + N_L,
\]

where:

[00017]

\[
Z_L = [z(1), z(2), \ldots, z(L)]^T, \quad N_L = [n(1), n(2), \ldots, n(L)]^T, \quad \text{and}
\]

\[
H_L = \begin{bmatrix} h^T(1) \\ h^T(2) \\ \vdots \\ h^T(L) \end{bmatrix}
= \begin{bmatrix}
-z(0) & \cdots & -z(1-n) & u(0) & \cdots & u(1-m) \\
-z(1) & \cdots & -z(2-n) & u(1) & \cdots & u(2-m) \\
\vdots & & \vdots & \vdots & & \vdots \\
-z(L-1) & \cdots & -z(L-n) & u(L-1) & \cdots & u(L-m)
\end{bmatrix},
\]

then the least square criterion function J(θ) can be written as:


\[
J(\theta) = (Z_L - H_L\theta)^T I_L (Z_L - H_L\theta),
\]

where I_L is a weighting matrix, which is generally taken as a positive definite diagonal matrix. By minimizing J(θ), the estimated value θ̂ of the coefficient vector can be obtained:


\[
\hat{\theta} = (H_L^T H_L)^{-1} H_L^T Z_L,
\]

where θ̂ is the least squares estimate. After the input and output values of the system are obtained, the estimated values of the corresponding coefficients can be obtained in one step using the above formula.
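The one-shot estimate θ̂ = (H_L^T H_L)^{-1} H_L^T Z_L can be sketched for a first-order ARX model; the model orders n = m = 1 and the synthetic noise-free data are illustrative:

```python
import numpy as np

def arx_least_squares(z, u, n, m):
    """One-shot least-squares estimate theta_hat = (H^T H)^{-1} H^T Z for
    A(z^-1) z(k) = B(z^-1) u(k) + n(k); rows of H are
    [-z(k-1), ..., -z(k-n), u(k-1), ..., u(k-m)]."""
    k0 = max(n, m)
    H = np.array([[-z[k - i] for i in range(1, n + 1)] +
                  [u[k - j] for j in range(1, m + 1)]
                  for k in range(k0, len(z))])
    Z = np.array(z[k0:])
    return np.linalg.solve(H.T @ H, H.T @ Z)

# noise-free data generated by (1 + a1 z^-1) z(k) = b1 z^-1 u(k)
a1, b1 = -0.8, 0.5
u = np.random.default_rng(1).normal(size=200)
z = [0.0]
for k in range(1, 200):
    z.append(-a1 * z[k - 1] + b1 * u[k - 1])
theta_hat = arx_least_squares(z, u, n=1, m=1)
assert np.allclose(theta_hat, [a1, b1], atol=1e-6)  # coefficients recovered
```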

[0087] In order to establish the mathematical model of the sensor steering angle control system, the least square method is needed for system identification. Therefore, the inverse Laplace transform of the transfer function of the stepper motor control system is required. Assuming that the input of the control system is a constant value, the relationship after the inverse Laplace transform can be obtained:

[00018]

\[
y(t) = nK\left(t - \tau - T + T e^{-\frac{t-\tau}{T}}\right),
\]

where y(t) is the inverse Laplace transform of Y(s), and n is the number of input pulses of the control system. In order to separate the three undetermined coefficients K, τ and T, both sides are integrated to obtain:

[00019]

\[
\int_0^{kT_s} y(t)\,dt = \int_0^{kT_s} nK\left(t - \tau - T + T e^{-\frac{t-\tau}{T}}\right)dt.
\]

[0088] By setting \(A(kT_s) = \int_0^{kT_s} y(t)\,dt\), the above formula can be simplified as:

[00020]

\[
nK\left[\frac{1}{2}(kT_s)^2 + \frac{1}{2}\tau^2 - \tau kT_s\right] - T\,y(kT_s) = A(kT_s),
\]

where T.sub.s is the sampling period and k is a positive integer.

[0089] Through sorting-out:

[00021]

\[
\left[\frac{1}{2}n(kT_s)^2 \quad \frac{1}{2}n \quad -nkT_s \quad -y(kT_s)\right]
\begin{bmatrix} K \\ K\tau^2 \\ K\tau \\ T \end{bmatrix} = A(kT_s).
\]

[0090] By setting

[00022]

\[
X = \begin{bmatrix} K \\ K\tau^2 \\ K\tau \\ T \end{bmatrix},
\]

it is obtained that:

[00023]

\[
B = \begin{bmatrix}
\frac{1}{2}n(mT_s)^2 & \frac{1}{2}n & -nmT_s & -y(mT_s) \\
\frac{1}{2}n[(m+1)T_s]^2 & \frac{1}{2}n & -n(m+1)T_s & -y[(m+1)T_s] \\
\vdots & \vdots & \vdots & \vdots \\
\frac{1}{2}n(LT_s)^2 & \frac{1}{2}n & -nLT_s & -y(LT_s)
\end{bmatrix},
\quad
A = \begin{bmatrix} A(mT_s) \\ A[(m+1)T_s] \\ \vdots \\ A(LT_s) \end{bmatrix},
\]

with rows taken over k = m, m+1, . . . , L, so that \(BX = A\) and \(X = (B^T B)^{-1} B^T A\).

[0091] In the above formulas, the right side of the equal sign is only related to the input and output of the control system and the sampling time, which can be obtained from experiments. The left side of the equal sign is the undetermined coefficient.
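The identification procedure can be sketched numerically. The synthetic data below is generated from the example parameters K = 2.93, τ = 0.06, T = 1.77, and np.linalg.lstsq evaluates the same least-squares solution as (B^T B)^{-1} B^T A in a more stable way:

```python
import math
import numpy as np

def identify_motor(ts, y_vals, A_vals, n_pulses):
    """Least-squares identification X = (B^T B)^{-1} B^T A with rows
    [n/2*t^2, n/2, -n*t, -y(t)] and unknowns X = [K, K*tau^2, K*tau, T];
    K, tau, T are then recovered from the entries of X."""
    B = np.array([[0.5 * n_pulses * t**2, 0.5 * n_pulses, -n_pulses * t, -y]
                  for t, y in zip(ts, y_vals)])
    X = np.linalg.lstsq(B, np.asarray(A_vals), rcond=None)[0]
    K, T = X[0], X[3]
    tau = X[2] / X[0]
    return K, tau, T

# synthetic response of G(s) = K e^{-tau s}/(s(Ts+1)) to n constant pulses,
# sampled for t >= tau; A(t) uses the identity A(t) = n*K*(t-tau)^2/2 - T*y(t)
K, tau, T, n = 2.93, 0.06, 1.77, 1
ts = [0.1 * k for k in range(1, 41)]
y_vals = [n * K * (t - tau - T + T * math.exp(-(t - tau) / T)) for t in ts]
A_vals = [0.5 * n * K * (t - tau)**2 - T * y for t, y in zip(ts, y_vals)]
K_hat, tau_hat, T_hat = identify_motor(ts, y_vals, A_vals, n)
assert abs(K_hat - K) < 1e-3 and abs(tau_hat - tau) < 1e-3 and abs(T_hat - T) < 1e-3
```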

[0092] The state space model is established, and the transfer function of the stepper motor control system used by the model is:

[00024]

\[
G(s) = \frac{2.93\, e^{-0.06 s}}{1.77 s^2 + s}.
\]

[0093] The procedure for establishing the state space model (in MATLAB) is as follows:

[0094] nump = [0, 0, 2.93];
[0095] denp = [1.77, 1, 0];
[0096] Gp = tf(nump, denp);
[0097] [nump, denp] = pade(0.06, 1); % the pure delay part uses a first-order Pade approximation
[0098] G = Gp * tf(nump, denp); % reconstruct the approximate transfer function
[0099] [num, den] = tfdata(G, 'v'); % extract the numerator and denominator polynomial coefficient vectors of the approximate transfer function
[0100] [A, B, C, D] = tf2ss(num, den); % convert to the state space model

[00025]

\[
\dot{X} = \begin{bmatrix} -33.8983 & -18.8324 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} X + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} U, \qquad
Y = \begin{bmatrix} 0 & -1.6554 & 55.1789 \end{bmatrix} X.
\]
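As a cross-check, a small numpy sketch (assuming a strictly proper transfer function) reproduces the controllable-canonical realization from the Pade-combined coefficients 2.93(1 - 0.03s) / (0.0531s^3 + 1.8s^2 + s):

```python
import numpy as np

def tf_to_controllable_ss(num, den):
    """Controllable-canonical state-space realization of a strictly proper
    num(s)/den(s), mirroring the layout produced by MATLAB's tf2ss:
    A = [[-a1, ..., -an], [I, 0]], B = e1, C = normalized numerator."""
    den = np.asarray(den, dtype=float)
    num = np.pad(np.asarray(num, dtype=float), (len(den) - len(num) - 1, 0))
    num, den = num / den[0], den / den[0]          # make the denominator monic
    n = len(den) - 1
    A = np.zeros((n, n))
    A[0, :] = -den[1:]                              # companion first row
    A[1:, :-1] = np.eye(n - 1)
    B = np.zeros((n, 1)); B[0, 0] = 1.0
    C = num.reshape(1, -1)
    return A, B, C

# delay-free part combined with the first-order Pade approximation of e^{-0.06s}
A, B, C = tf_to_controllable_ss([-0.0879, 2.93], [0.0531, 1.8, 1.0, 0.0])
assert np.allclose(A[0], [-33.8983, -18.8324, 0.0], atol=1e-3)
assert np.allclose(C, [[0.0, -1.6554, 55.1789]], atol=1e-3)
```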

[0101] Step III: Euclidean clustering extraction-based millimeter wave radar detection target information

[0102] Euclidean clustering is performed on a point cloud on a road after ground information is removed from the ROI. Specifically, target points are randomly selected and the target points are put into a cluster if a distance between current points is less than a threshold, otherwise, other points are selected for judgment, the above operations are continuously repeated until there are no points with a distance less than the threshold, and a clustering result is output.
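The clustering loop described above can be sketched as follows, using 2-D points for brevity; a KD-tree would accelerate the neighbor search in practice:

```python
def euclidean_cluster(points, threshold):
    """Greedy Euclidean clustering as described: grow a cluster with every
    point within `threshold` of any point already in it, then repeat with
    the remaining points until all are assigned. O(n^2) sketch."""
    def close(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) <= threshold ** 2
    remaining = list(range(len(points)))
    clusters = []
    while remaining:
        cluster = [remaining.pop(0)]
        grew = True
        while grew:
            grew = False
            for i in remaining[:]:
                if any(close(points[i], points[j]) for j in cluster):
                    cluster.append(i); remaining.remove(i); grew = True
        clusters.append(cluster)
    return clusters

# two obstacles roughly 5 m apart, clustering threshold 1 m
pts = [(0.0, 0.0), (0.4, 0.1), (0.8, -0.2), (5.0, 5.0), (5.3, 5.2)]
clusters = euclidean_cluster(pts, threshold=1.0)
assert sorted(map(sorted, clusters)) == [[0, 1, 2], [3, 4]]
```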

[0103] Step IV: Deep learning algorithm extraction-based Lidar detection target information

[0104] First, an original point cloud is cut into grids from a perspective of a top view, a 3D point cloud is pillarized, a point cloud in each Pillar is represented by a 9-dimensional vector, and the point cloud is subjected to feature extraction by PointNet to generate a 2D pseudo-image. The pseudo-image is input into a 2D CNN to further extract point cloud features. Finally, BoundingBox regression is performed through an SSD head and a position and orientation of the obstacle are output.
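The pillarization step can be sketched as follows. The decoration (x, y, z, r, xc, yc, zc, xp, yp) is the standard PointPillars-style 9-dimensional vector (raw coordinates plus reflectance, offsets to the pillar's point centroid, and offsets to the pillar center); the grid size and point values are illustrative:

```python
import numpy as np

def pillarize(points, grid=0.16):
    """Group (x, y, z, r) points into x-y pillars and decorate each point
    with the 9-dimensional vector: (x, y, z, r, xc, yc, zc, xp, yp)."""
    pillars = {}
    for p in points:
        key = (int(p[0] // grid), int(p[1] // grid))
        pillars.setdefault(key, []).append(p)
    features = {}
    for key, pts in pillars.items():
        pts = np.asarray(pts)
        centroid = pts[:, :3].mean(axis=0)
        center = (np.asarray(key) + 0.5) * grid      # pillar x-y center
        features[key] = np.hstack([pts,              # x, y, z, r
                                   pts[:, :3] - centroid,   # xc, yc, zc
                                   pts[:, :2] - center])    # xp, yp
    return features

pts = [(0.05, 0.05, -1.2, 0.3), (0.10, 0.02, -1.1, 0.5), (1.00, 1.00, -1.0, 0.2)]
feats = pillarize(pts)
assert all(f.shape[1] == 9 for f in feats.values())  # 9-dim vector per point
assert len(feats) == 2                               # two occupied pillars
```

Each pillar's decorated points would then be fed to the PointNet feature extractor to build the 2D pseudo-image.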

[0105] Step V: Deep learning algorithm extraction-based camera detection target information

[0106] An image is enhanced using Mosaic data, and a size and position of a truth value target are changed. The image after Mosaic enhancement is input into the network, the size of the image is unified using a Focus module, deep semantic information and shallow spatial information of the image are extracted through an improved residual network DarkNet, and a category and position of the target are accurately detected under three scales through a feature pyramid network.

(3) Multi-Frame Target Association

[0107] Step I: Target Fusion

[0108] A detection target list is generated according to single-frame target detection information, and the target features output by each sensor are fused by a Bayesian method to obtain a consistent interpretation of the target. A likelihood probability P(E|H_i) of an observed feature E of the sensor is calculated when a given hypothesis H_i is true. An observed value is compared with a predetermined threshold th_i, the number of expected targets, the feature matching and the quality of feature matching, and these data are analyzed to determine the value of a likelihood function which represents the confidence of the target a_j.

[0109] Step II: Risk Level Prediction

[0110] A target speed, a vehicle body speed, a tag value, and a position of the target relative to a vehicle body are taken as evaluation criteria. First, a knowledge-based expert system is established. The expert system of the present disclosure includes a knowledge base, a global database and an inference machine. The knowledge base contains established facts, algorithms and heuristic rules, the global database is used to temporarily cache the input, intermediate and output results, and the inference machine is used to infer the risk level of the target.

[0111] The expert system inputs all relevant factors into the inference machine, which repeatedly infers the relevant target risk level through the algorithm and related rules in the knowledge base and finally outputs the optimal target risk level. A risk level threshold h_th is set. If the risk score of the target output by the expert system is greater than the threshold, the detection risk level is considered high. According to the risk score of the target, the probability a_j of the target is actively updated and fed back to step II to realize the active risk perception function of the target.

[0112] If the risk level of the target is greater than the risk threshold:


\[
h_i > h_{th},
\]

where h_i is the risk level of the target, and h_th is the risk level threshold.

[0113] The risk level of the target is input into the expert system, and the expert system actively analyzes the target with a risk level greater than the risk threshold to improve the confidence of the target:


\[
a_j = a_j + (1 + a_j)h_i,
\]

where a_j is the input confidence of the target, and h_i is the risk level improvement coefficient, which is iteratively updated by the expert system by constantly absorbing prior knowledge.
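The confidence update rule can be sketched as follows; the cap at 1.0 is an added illustrative safeguard not stated in the source, since the raw update is unbounded:

```python
def update_confidence(a_j, h_i, h_th):
    """Apply a_j <- a_j + (1 + a_j) * h_i only when the expert system's
    risk level h_i exceeds the threshold h_th; the min(1.0, ...) clamp is
    an illustrative safeguard added here, not part of the source rule."""
    if h_i > h_th:
        a_j = min(1.0, a_j + (1 + a_j) * h_i)
    return a_j

assert update_confidence(0.5, h_i=0.2, h_th=0.1) == min(1.0, 0.5 + 1.5 * 0.2)  # boosted
assert update_confidence(0.5, h_i=0.05, h_th=0.1) == 0.5                       # unchanged
```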

[0114] Further, an embodiment of the present disclosure provides a computer-readable storage medium, storing a computer program thereon. When the program is executed by a processor, the method according to the embodiments of the present disclosure is implemented.

[0115] The foregoing describes optional implementations of the embodiments of the present disclosure in detail with reference to the accompanying drawings. However, the embodiments of the present disclosure are not limited to the specific details in the foregoing implementations. Within the scope of the technical concept of the embodiments of the present disclosure, various simple variations can be made to the technical solutions in the embodiments of the present disclosure. These simple variations all fall within the protection scope of the embodiments of the present disclosure.

[0116] In addition, it should be noted that various specific technical features described in the foregoing implementations can be combined in any suitable manner, provided that there is no contradiction. To avoid unnecessary repetition, various possible combinations are not separately described in the embodiments of the present disclosure.

[0117] Those skilled in the art can understand that all or some of the steps for implementing the method in the foregoing embodiments can be completed by a program instructing relevant hardware. The program is stored in a storage medium, and includes a plurality of instructions to enable a single chip microcomputer, a chip, or a processor to perform all or some of the steps in the method described in each embodiment of the present disclosure. The foregoing storage medium includes: a USB flash disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, and the like, which can store program codes.

[0118] In addition, various different embodiments of the present disclosure can also be arbitrarily combined, provided that the combinations do not violate the idea of the embodiments of the present disclosure. The combinations should also be regarded as the content disclosed in the embodiments of the present disclosure.