METHOD FOR STOCHASTIC INSPECTIONS ON POWER GRID LINES BASED ON UNMANNED AERIAL VEHICLE-ASSISTED EDGE COMPUTING

20240353861 · 2024-10-24

Abstract

The present disclosure relates to a method for stochastic inspections on power grid lines based on unmanned aerial vehicle-assisted edge computing. In this method, stochastically distributed inspection unmanned aerial vehicles acquire video images of a target power grid area, which reduces the financial and time costs of inspections. With the assistance of a superior unmanned aerial vehicle, the goal is to minimize the energy consumption of the unmanned aerial vehicle system and extend the operating time of the unmanned aerial vehicles under the same payload conditions while processing the video image data collected by the inspection unmanned aerial vehicles. The near-far effect arising from communications between mobile unmanned aerial vehicles is eliminated by introducing non-orthogonal multiple access (NOMA), and the position coordinates, system resource allocations, and task offload decision schemes are solved by combining the deep deterministic policy gradient (DDPG) algorithm from deep reinforcement learning with a genetic algorithm.

Claims

1. A method for stochastic inspections on power grid lines based on unmanned aerial vehicle-assisted edge computing, wherein an inspection is conducted on a target power grid area including power grid equipment and power transmission lines by applying an unmanned aerial vehicle group including M inspection unmanned aerial vehicles and a superior unmanned aerial vehicle based on a central base station arranged at a fixed position; comprising the following steps: Step S1, constructing, based on a flight mode of each of the inspection unmanned aerial vehicles in the unmanned aerial vehicle group, an unmanned aerial vehicle-assisted power grid lines stochastic inspection system, wherein the inspection unmanned aerial vehicles are merely in charge of acquiring video images of the power grid equipment and the power transmission lines in the target power grid area, and the obtained video images are processed by the superior unmanned aerial vehicle or the central base station, and then entering Step S2; Step S2, acquiring, by each of the inspection unmanned aerial vehicles in the unmanned aerial vehicle group, the video images of the power grid equipment and the power transmission lines in the target power grid area based on the unmanned aerial vehicle-assisted power grid lines stochastic inspection system, and obtaining the video image data acquired by each of the inspection unmanned aerial vehicles corresponding to each time slot respectively, and then entering Step S3; Step S3, constructing, according to the video image data acquired by each of the inspection unmanned aerial vehicles corresponding to each time slot respectively, a digital twin network of the unmanned aerial vehicle-assisted power grid lines stochastic inspection system, in combination with a weight, a signal transmission power and position coordinates of each of the inspection unmanned aerial vehicles, a weight, a signal transmission power, position coordinates, and a computing capacity of the superior unmanned aerial vehicle, position coordinates of the central base station, as well as a system communication bandwidth, to fit the position coordinates of each of the inspection unmanned aerial vehicles and the superior unmanned aerial vehicle, and a resource status of the system, and then entering Step S4; Step S4, constructing, based on constraints of an offload latency and a data task processing latency for the power grid lines stochastic inspection system, an energy consumption model or a balanced energy consumption model of the unmanned aerial vehicle group corresponding to each time slot respectively, according to the digital twin network of the unmanned aerial vehicle-assisted power grid lines stochastic inspection system; further constructing an objective function for minimizing energy consumption of the unmanned aerial vehicle group corresponding to each time slot respectively or an objective function for minimizing balanced energy consumption of the unmanned aerial vehicle group corresponding to each time slot respectively, and then entering Step S5; Step S5, randomly initializing the position coordinates of the superior unmanned aerial vehicle, constructing, based on the position coordinates and the video image data of each of the inspection unmanned aerial vehicles corresponding to a t-th time slot respectively, a system status at the t-th time slot, and then entering Step S6; Step S6, solving, by adopting a deep deterministic policy gradient algorithm in deep reinforcement learning, the energy consumption model of the unmanned aerial vehicle group corresponding to each time slot respectively, based on the position coordinates of the superior unmanned aerial vehicle and the system status at the t-th time slot, according to the objective function for minimizing energy consumption of the unmanned aerial vehicle group corresponding to each time slot or the objective function for minimizing balanced energy consumption of the unmanned aerial vehicle group corresponding to each time slot respectively; obtaining an action space of the system at the t-th time slot corresponding to the system status at the t-th time slot in combination with the position coordinates of the superior unmanned aerial vehicle, wherein the action space of the system at the t-th time slot is composed of the signal transmission power of each of the inspection unmanned aerial vehicles corresponding to the t-th time slot respectively, an offload mode of each of the inspection unmanned aerial vehicles corresponding to the t-th time slot respectively regarding the superior unmanned aerial vehicle or the central base station, and the signal transmission power and an allocated CPU calculation frequency of the superior unmanned aerial vehicle corresponding to the t-th time slot, and then entering Step S7; Step S7, determining whether an iteration overflow condition is satisfied or not; if yes, entering Step S8; if no, solving and updating, by using a genetic algorithm, the position coordinates of the superior unmanned aerial vehicle, based on the system status at the t-th time slot, in combination with system resource allocations and offload decision schemes for the video image data in the action space of the system at the t-th time slot corresponding to the position coordinates of the superior unmanned aerial vehicle, and returning to Step S6; and Step S8, processing, according to the position coordinates of the superior unmanned aerial vehicle, and the system resource allocations and the offload decision schemes for the video image data in the action space of the corresponding system at the t-th time slot, the video images acquired by each of the inspection unmanned aerial vehicles corresponding to each time slot in Step S2, to offload the video image data to the superior unmanned aerial vehicle or the central base station for processing.
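The iterative procedure of Steps S5 to S8 (alternating a DDPG-based action solver with a genetic-algorithm position search) can be sketched in outline as follows. This is an illustrative Python sketch only: `ddpg_solve_stub` and `ga_update_stub` are hypothetical placeholders standing in for the trained DDPG actor of Step S6 and the genetic algorithm of Step S7, and all numeric bounds are assumptions.

```python
import random

def ddpg_solve_stub(state, l_suav):
    # Placeholder for Step S6: a trained DDPG actor network would map the
    # system status and the superior UAV position to transmit powers,
    # offload decisions, and an allocated CPU frequency.
    return {"powers": [1.0] * len(state),
            "offload": [0] * len(state),
            "suav_power": 1.0,
            "suav_freq": 1.0e9}

def ga_update_stub(state, action, l_suav):
    # Placeholder for Step S7: selection/crossover/mutation over candidate
    # superior-UAV positions; here just a small random perturbation.
    return [c + random.uniform(-0.1, 0.1) for c in l_suav]

def optimize_time_slot(system_state, max_iters=50):
    """Outline of Steps S5-S8: alternate the DDPG action solver with the
    genetic-algorithm position search until the iteration cap is hit."""
    # Step S5: randomly initialize the superior UAV position coordinates.
    l_suav = [random.uniform(0.0, 100.0) for _ in range(3)]
    action = None
    for _ in range(max_iters):                  # Step S7 overflow check
        action = ddpg_solve_stub(system_state, l_suav)         # Step S6
        l_suav = ga_update_stub(system_state, action, l_suav)  # Step S7
    # Step S8: the final position and action drive the actual offloading.
    return l_suav, action
```

The two stubs mark exactly where the learned policy and the evolutionary search plug into the per-slot loop; everything else is bookkeeping.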

2. The method for the stochastic inspections on the power grid lines based on the unmanned aerial vehicle-assisted edge computing according to claim 1, wherein Step S1 includes the following Step S11 to Step S13: Step S11, obtaining, based on a constant motion status of each of the inspection unmanned aerial vehicles within each time slot, a moving speed v_m(t), a horizontal moving direction θ_m(t), and a vertical moving direction φ_m(t) of the m-th inspection unmanned aerial vehicle corresponding to the t-th time slot, for each of the inspection unmanned aerial vehicles respectively, according to the following formulas: v_m(t) = α_1·v_m(t-1) + (1 - α_1)·v̄ + √(1 - α_1²)·Φ_m, θ_m(t) = α_2·θ_m(t-1) + (1 - α_2)·θ̄_m + √(1 - α_2²)·Ψ_m, φ_m(t) = α_3·φ_m(t-1) + (1 - α_3)·φ̄_m + √(1 - α_3²)·Ω_m, where 1 ≤ m ≤ M, v̄ represents an average moving speed of all inspection unmanned aerial vehicles, θ̄_m represents an average horizontal moving angle of the m-th inspection unmanned aerial vehicle corresponding to the (t-1)-th time slot, φ̄_m represents an average vertical moving angle of the m-th inspection unmanned aerial vehicle corresponding to the (t-1)-th time slot, v_m(t-1), θ_m(t-1) and φ_m(t-1) sequentially represent a moving speed, a horizontal moving direction, and a vertical moving direction of the m-th inspection unmanned aerial vehicle corresponding to the (t-1)-th time slot; 0 < α_1 < 1, α_1 represents a preset parameter used to adjust the impact of the moving speed of the inspection unmanned aerial vehicles corresponding to the (t-1)-th time slot on the moving speed of the inspection unmanned aerial vehicles corresponding to the t-th time slot; 0 < α_2 < 1, α_2 represents a preset parameter used to adjust the impact of the horizontal moving direction of the inspection unmanned aerial vehicles corresponding to the (t-1)-th time slot on the horizontal moving direction of the inspection unmanned aerial vehicles corresponding to the t-th time slot; 0 < α_3 < 1, α_3 represents a preset parameter used to adjust the impact of the vertical moving direction of the inspection unmanned aerial vehicles corresponding to the (t-1)-th time slot on the vertical moving direction of the inspection unmanned aerial vehicles corresponding to the t-th time slot; a preset parameter Φ_m that follows an independent Gaussian distribution represents the randomness of the moving speed of the m-th inspection unmanned aerial vehicle, a preset parameter Ψ_m that follows an independent Gaussian distribution represents the randomness of the horizontal moving direction of the m-th inspection unmanned aerial vehicle, and a preset parameter Ω_m that follows an independent Gaussian distribution represents the randomness of the vertical moving direction of the m-th inspection unmanned aerial vehicle, and then entering Step S12; Step S12, obtaining, according to a length of each time slot, the position coordinates L_m^UAV(t) = (x_m(t), y_m(t), h_m(t)) of the m-th inspection unmanned aerial vehicle corresponding to the t-th time slot, for each of the inspection unmanned aerial vehicles respectively, according to the following formulas: x_m(t) = x_m(t-1) + v_m(t-1)·cos(θ_m(t-1)), y_m(t) = y_m(t-1) + v_m(t-1)·sin(θ_m(t-1)), h_m(t) = h_m(t-1) + v_m(t-1)·sin(φ_m(t-1)), where x_m(t), y_m(t), h_m(t) represent the values for the m-th inspection unmanned aerial vehicle respectively on the coordinate axes x, y, z corresponding to the t-th time slot, and x_m(t-1), y_m(t-1), h_m(t-1) represent the values for the m-th inspection unmanned aerial vehicle respectively on the coordinate axes x, y, z corresponding to the (t-1)-th time slot, and then entering Step S13; and Step S13, constructing, according to the moving speed, the horizontal moving direction, the vertical moving direction and the position coordinates of each of the inspection unmanned aerial vehicles respectively corresponding to the t-th time slot, the unmanned aerial vehicle-assisted power grid lines stochastic inspection system, wherein the inspection unmanned aerial vehicles are merely in charge of acquiring video images of the power grid equipment and the power transmission lines in the target power grid area, and the obtained video images are processed by the superior unmanned aerial vehicle or the central base station, and then entering Step S2.
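The Gauss-Markov-style mobility update of Steps S11 and S12 can be sketched as follows. This is a minimal Python sketch; the memory parameters `a1`..`a3` and the noise scale `sigma` are assumed values, and the position update mirrors the formulas as written (no explicit slot-length factor).

```python
import math
import random

def gauss_markov_step(v_prev, theta_prev, phi_prev,
                      v_bar, theta_bar, phi_bar,
                      a1=0.8, a2=0.8, a3=0.8, sigma=1.0):
    """Step S11: one update of speed, horizontal and vertical heading.
    Each state mixes its previous value, a long-run mean, and a
    Gaussian noise term scaled by sqrt(1 - a^2)."""
    v = a1 * v_prev + (1 - a1) * v_bar + math.sqrt(1 - a1 ** 2) * random.gauss(0, sigma)
    theta = a2 * theta_prev + (1 - a2) * theta_bar + math.sqrt(1 - a2 ** 2) * random.gauss(0, sigma)
    phi = a3 * phi_prev + (1 - a3) * phi_bar + math.sqrt(1 - a3 ** 2) * random.gauss(0, sigma)
    return v, theta, phi

def position_step(x, y, h, v_prev, theta_prev, phi_prev):
    """Step S12: advance the position from the previous slot's motion."""
    return (x + v_prev * math.cos(theta_prev),
            y + v_prev * math.sin(theta_prev),
            h + v_prev * math.sin(phi_prev))
```

Setting a memory parameter close to 1 yields smooth, highly correlated trajectories; close to 0, the motion becomes nearly memoryless random drift around the mean.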

3. The method for the stochastic inspections on the power grid lines based on the unmanned aerial vehicle-assisted edge computing according to claim 1, wherein Step S3 includes the following Step S31 to Step S33: Step S31, constructing, in combination with the weight of each of the inspection unmanned aerial vehicles, the video image data acquired by each of the inspection unmanned aerial vehicles respectively corresponding to each time slot, the signal transmission power of each of the inspection unmanned aerial vehicles, the weight of the superior unmanned aerial vehicle, the CPU calculation frequency allocated to each of the inspection unmanned aerial vehicles respectively corresponding to each time slot, the signal transmission power of the superior unmanned aerial vehicle, and the position coordinates of the central base station, a real physical entity network, according to the unmanned aerial vehicle-assisted power grid lines stochastic inspection system, and then entering Step S32; Step S32, constructing, based on the real physical entity network, a digital twin model of each of the inspection unmanned aerial vehicles respectively corresponding to each time slot, according to the following formula: DT_m^UAV(t) = {W_m^UAV, D_m^UAV(t), P_m^UAV(t), L_m^UAV(t), P_max^UAV}, where DT_m^UAV(t) represents the digital twin model of the m-th inspection unmanned aerial vehicle corresponding to the t-th time slot, W_m^UAV represents a weight of the m-th inspection unmanned aerial vehicle, D_m^UAV(t) represents video image data acquired by the m-th inspection unmanned aerial vehicle corresponding to the t-th time slot, P_m^UAV(t) represents a signal transmission power of the m-th inspection unmanned aerial vehicle corresponding to the t-th time slot, L_m^UAV(t) represents position coordinates of the m-th inspection unmanned aerial vehicle corresponding to the t-th time slot, and P_max^UAV represents a maximum signal transmission power of the m-th inspection unmanned aerial vehicle; at the same time, constructing a digital twin model of the superior unmanned aerial vehicle corresponding to each time slot according to the following formula: DT^SUAV(t) = {W^SUAV, f^SUAV(t), P^SUAV(t), L^SUAV(t), P_max^SUAV, f_max^SUAV, C^SUAV}, where DT^SUAV(t) represents the digital twin model of the superior unmanned aerial vehicle corresponding to the t-th time slot, W^SUAV represents a weight of the superior unmanned aerial vehicle, f^SUAV(t) represents a CPU calculation frequency allocated to the superior unmanned aerial vehicle corresponding to the t-th time slot, P^SUAV(t) represents a signal transmission power of the superior unmanned aerial vehicle corresponding to the t-th time slot, L^SUAV(t) represents position coordinates of the superior unmanned aerial vehicle corresponding to the t-th time slot, P_max^SUAV represents a maximum signal transmission power of the superior unmanned aerial vehicle, f_max^SUAV represents a maximum CPU calculation frequency of the superior unmanned aerial vehicle, and C^SUAV represents a number of CPU cycles required by the superior unmanned aerial vehicle to process 1 bit of data; and constructing a digital twin model DT^BS of the central base station according to the following formula: DT^BS = {L^BS}, where L^BS represents the position coordinates of the central base station, and then entering Step S33; and Step S33, constructing, based on the digital twin model of each of the inspection unmanned aerial vehicles respectively corresponding to each time slot, the digital twin model of the superior unmanned aerial vehicle corresponding to each time slot, and the digital twin model of the central base station, the digital twin network of the unmanned aerial vehicle-assisted power grid lines stochastic inspection system, to fit the position coordinates of each of the inspection unmanned aerial vehicles and the superior unmanned aerial vehicle, and the resource status of the system, and then entering Step S4.
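The three digital twin tuples of Step S32 map naturally onto simple records. Below is a hypothetical Python sketch using dataclasses; the field names are editorial choices, with the corresponding claim symbols noted in comments.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class UAVTwin:               # DT_m^UAV(t) of Step S32
    weight: float            # W_m^UAV
    data_bits: float         # D_m^UAV(t), video data acquired in slot t
    tx_power: float          # P_m^UAV(t)
    position: Tuple[float, float, float]  # L_m^UAV(t) = (x, y, h)
    tx_power_max: float      # P_max^UAV

@dataclass
class SUAVTwin:              # DT^SUAV(t)
    weight: float            # W^SUAV
    cpu_freq: float          # f^SUAV(t)
    tx_power: float          # P^SUAV(t)
    position: Tuple[float, float, float]  # L^SUAV(t)
    tx_power_max: float      # P_max^SUAV
    cpu_freq_max: float      # f_max^SUAV
    cycles_per_bit: float    # C^SUAV

@dataclass
class BSTwin:                # DT^BS
    position: Tuple[float, float, float]  # L^BS
```

The digital twin network of Step S33 is then just the collection of M `UAVTwin` records plus one `SUAVTwin` and one `BSTwin`, refreshed once per time slot.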

4. The method for the stochastic inspections on the power grid lines based on the unmanned aerial vehicle-assisted edge computing according to claim 3, wherein Step S4 includes the following Step S41 to Step S42: Step S41, constructing, according to the digital twin network of the unmanned aerial vehicle-assisted power grid lines stochastic inspection system, a general latency model of the video image data acquired by each of the inspection unmanned aerial vehicles at each time slot corresponding to each offload type respectively, and then entering Step S42; and Step S42, constructing, based on the constraints of the offload latency and the data task processing latency for the power grid lines stochastic inspection system, the energy consumption model or the balanced energy consumption model of the unmanned aerial vehicle group corresponding to each time slot respectively, according to the general latency model of the video image data acquired by each of the inspection unmanned aerial vehicles at each time slot corresponding to each offload type respectively; further constructing the objective function for minimizing energy consumption of the unmanned aerial vehicle group corresponding to each time slot respectively, and then entering Step S5.

5. The method for the stochastic inspections on the power grid lines based on the unmanned aerial vehicle-assisted edge computing according to claim 4, wherein Step S41 includes the following Step S411 to Step S413: Step S411, constructing, based on a fact that the inspection unmanned aerial vehicles are merely capable of choosing one between the superior unmanned aerial vehicle and the central base station to offload the video image data within one time slot, a communication latency model transT_{m,SUAV}^UAV(t) between each of the inspection unmanned aerial vehicles and the superior unmanned aerial vehicle corresponding to each time slot, according to a fact that the inspection unmanned aerial vehicles share a common frequency spectrum to communicate with the superior unmanned aerial vehicle, that a data transmission rate between each of the inspection unmanned aerial vehicles and the superior unmanned aerial vehicle corresponding to the t-th time slot is R_m^SUAV(t), and that a data transmission rate between the superior unmanned aerial vehicle and the central base station corresponding to the t-th time slot is R^SUAV(t), in accordance with the following formula: transT_{m,SUAV}^UAV(t) = D_m^UAV(t) / R_m^SUAV(t), where transT_{m,SUAV}^UAV(t) represents a communication latency between the m-th inspection unmanned aerial vehicle and the superior unmanned aerial vehicle corresponding to the t-th time slot, and D_m^UAV(t) represents the video image data acquired by the m-th inspection unmanned aerial vehicle corresponding to the t-th time slot; and constructing a communication latency model transT_{m,BS}^SUAV(t) of the video image data acquired by each of the inspection unmanned aerial vehicles corresponding to each time slot respectively transmitted between the superior unmanned aerial vehicle and the central base station, according to the following formula: transT_{m,BS}^SUAV(t) = D_m^UAV(t) / R^SUAV(t), where transT_{m,BS}^SUAV(t) represents a communication latency of the video image data acquired by the m-th inspection unmanned aerial vehicle corresponding to the t-th time slot transmitted between the superior unmanned aerial vehicle and the central base station, and then entering Step S412; Step S412, constructing, based on a fact that the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot corresponding to a definition a_m^UAV(t) = 0 are offloaded to the superior unmanned aerial vehicle for processing, a data processing latency model comT_m^SUAV(t) at a receiving terminal of the superior unmanned aerial vehicle for the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot, according to the following formula: comT_m^SUAV(t) = D_m^UAV(t)·C^SUAV / f^SUAV(t), where C^SUAV represents the number of CPU cycles required by the superior unmanned aerial vehicle to process 1 bit of data, and f^SUAV(t) represents the CPU calculation frequency allocated to the superior unmanned aerial vehicle corresponding to the t-th time slot; and constructing, based on a fact that the superior unmanned aerial vehicle processes the video images in a non-preemptive mode in descending order of channel power gain, a queue waiting latency model queT_m^SUAV for the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot before being processed by the superior unmanned aerial vehicle, according to the following formula: queT_m^SUAV = Σ_{i=1}^{k-1} (1 - a_{σ(i)}^UAV(t))·comT_{σ(i)}^SUAV(t), with σ(k) = m, where σ(i) represents the sequence number of the inspection unmanned aerial vehicle whose video image data the superior unmanned aerial vehicle processes i-th, and k represents the sequence number of the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot waiting to be processed by the superior unmanned aerial vehicle; and then constructing a general latency model T_{m,0}(t) corresponding to offloading the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot to the superior unmanned aerial vehicle for processing according to the following formula: T_{m,0}(t) = transT_{m,SUAV}^UAV(t) + comT_m^SUAV(t) + queT_m^SUAV, and then entering Step S413; and Step S413, constructing, based on a fact that the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot corresponding to a definition a_m^UAV(t) = 1 are offloaded to the central base station for processing, a general latency model T_{m,1}(t) corresponding to offloading the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot to the central base station for processing, according to the following formula: T_{m,1}(t) = transT_{m,BS}^SUAV(t) + queT_m^SUAV, and then entering Step S42.
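The latency models of Steps S411 to S413 reduce to a few short formulas. Below is an illustrative Python sketch; function and parameter names are editorial, and the queue model assumes the task lists are already ordered by the superior unmanned aerial vehicle's processing sequence, with `k` the 1-based position of the task of interest.

```python
def trans_latency_uav_suav(d_bits, rate_suav):
    # transT_{m,SUAV}^UAV(t) = D_m^UAV(t) / R_m^SUAV(t)
    return d_bits / rate_suav

def trans_latency_suav_bs(d_bits, rate_bs):
    # transT_{m,BS}^SUAV(t) = D_m^UAV(t) / R^SUAV(t)
    return d_bits / rate_bs

def compute_latency(d_bits, cycles_per_bit, cpu_freq):
    # comT_m^SUAV(t) = D_m^UAV(t) * C^SUAV / f^SUAV(t)
    return d_bits * cycles_per_bit / cpu_freq

def queue_latency(offload_flags, compute_times, k):
    # queT_m^SUAV: sum of compute times of the earlier-queued tasks that
    # are actually processed on the superior UAV (flag a = 0).
    return sum((1 - a) * c
               for a, c in zip(offload_flags[:k - 1], compute_times[:k - 1]))

def total_latency(a, t_trans_suav, t_comp, t_queue, t_trans_bs):
    # T_{m,0}(t) when a = 0 (superior UAV); T_{m,1}(t) when a = 1 (base station)
    if a == 0:
        return t_trans_suav + t_comp + t_queue
    return t_trans_bs + t_queue
```

For example, 1 Mbit at 1000 cycles per bit on a 1 GHz core gives a compute latency of exactly one second.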

6. The method for the stochastic inspections on the power grid lines based on the unmanned aerial vehicle-assisted edge computing according to claim 5, wherein each of the inspection unmanned aerial vehicles communicates with the superior unmanned aerial vehicle by adopting a non-orthogonal multiple access (NOMA) mode, and the superior unmanned aerial vehicle communicates with the central base station by adopting an orthogonal frequency division multiple access (OFDMA) mode.

7. The method for the stochastic inspections on the power grid lines based on the unmanned aerial vehicle-assisted edge computing according to claim 5, wherein Step S42 includes Step S421 to Step S422: Step S421, constructing, by a wired power supply mode, an energy consumption model E^all(t) of the unmanned aerial vehicle group corresponding to the t-th time slot, based on the central base station, according to the following formula: E^all(t) = Σ_{m=1}^{M} [flyE_m^UAV(t) + transE_{m,SUAV}^UAV(t) + (1 - a_m^UAV(t))·comE_m^SUAV(t) + a_m^UAV(t)·(2 - a_m^UAV(t))·transE_{m,BS}^SUAV(t)] + flyE^SUAV(t), where flyE_m^UAV(t) = (W_m^UAV/2)·||L_m^UAV(t) - L_m^UAV(t-1)||², representing a flight energy consumption of the m-th inspection unmanned aerial vehicle at the t-th time slot; flyE^SUAV(t) = (W^SUAV/2)·||L^SUAV(t) - L^SUAV(t-1)||², representing a flight energy consumption of the superior unmanned aerial vehicle at the t-th time slot; comE_m^SUAV(t) = K^SUAV·(f^SUAV(t))²·C^SUAV·D_m^UAV(t), representing the energy consumed by offloading the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot to the superior unmanned aerial vehicle for processing, K^SUAV representing an effective switched capacitance corresponding to a CPU of the superior unmanned aerial vehicle; transE_{m,SUAV}^UAV(t) = transT_{m,SUAV}^UAV(t)·P_m^UAV(t), representing a transmission energy consumption of transmitting the video image data D_m^UAV(t) acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot to the superior unmanned aerial vehicle; transE_{m,BS}^SUAV(t) = transT_{m,BS}^SUAV(t)·P^SUAV(t), representing a transmission energy consumption of the data D_m^UAV(t) between the superior unmanned aerial vehicle and the central base station, and then entering Step S422; and Step S422, further constructing, based on the energy consumption model E^all(t) of the unmanned aerial vehicle group corresponding to the t-th time slot, an objective function for minimizing energy consumption of the unmanned aerial vehicle group corresponding to each time slot, according to the following formulas: min_{P_m^UAV(t), P^SUAV(t), L^SUAV(t), a_m^UAV(t), f^SUAV(t)} E^all(t), s.t. C1: a_m^UAV(t) ∈ {0, 1}, ∀m ∈ M; C2: 0 < P_m^UAV(t) ≤ P_max^UAV, ∀m ∈ M; C3: 0 < P^SUAV(t) ≤ P_max^SUAV; C4: 0 < f^SUAV(t) ≤ f_max^SUAV; C5: x_min ≤ x(t) < x_max; C6: y_min ≤ y(t) < y_max; C7: h_min ≤ h(t) < h_max; C8: R_m^SUAV(t) ≤ R^SUAV(t), ∀m ∈ M; C9: (1 - a_m^UAV(t))·T_{m,0}(t) + a_m^UAV(t)·(2 - a_m^UAV(t))·T_{m,1}(t) ≤ τ, ∀m ∈ M, where τ represents the length of one time slot, C5 to C7 represent preset motion ranges for constraining the superior unmanned aerial vehicle, C8 represents a conditional requirement for a full-duplex communication of the superior unmanned aerial vehicle, and C9 represents that the video image data D_m^UAV(t) acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot needs to be offloaded within the time slot.
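The per-slot energy model of Step S421 sums flight energy, uplink transmission energy, and either compute energy (offload flag a = 0) or relay energy (a = 1) per inspection unmanned aerial vehicle, plus the superior unmanned aerial vehicle's flight energy. A hypothetical Python sketch follows; dictionary keys are assumptions, and `kappa` plays the role of the effective switched capacitance K^SUAV.

```python
def fly_energy(weight, pos, pos_prev):
    # flyE(t) = (W / 2) * ||L(t) - L(t-1)||^2
    return 0.5 * weight * sum((a - b) ** 2 for a, b in zip(pos, pos_prev))

def total_energy(uavs, suav, kappa):
    """Sketch of E^all(t): per-UAV flight + uplink energy, plus compute
    or relay energy depending on the binary offload flag `a`, plus the
    superior UAV's own flight energy."""
    e = fly_energy(suav["weight"], suav["pos"], suav["pos_prev"])
    for u in uavs:
        t_up = u["data"] / u["rate_suav"]        # transT_{m,SUAV}^UAV(t)
        e += fly_energy(u["weight"], u["pos"], u["pos_prev"])
        e += t_up * u["tx_power"]                # transE_{m,SUAV}^UAV(t)
        if u["a"] == 0:
            # Processed on the superior UAV: comE_m^SUAV(t)
            e += kappa * suav["cpu_freq"] ** 2 * suav["cycles_per_bit"] * u["data"]
        else:
            # Relayed to the central base station: transE_{m,BS}^SUAV(t)
            e += (u["data"] / suav["rate_bs"]) * suav["tx_power"]
    return e
```

The quadratic dependence of compute energy on CPU frequency is what makes the allocated frequency f^SUAV(t) a genuine decision variable rather than something to pin at its maximum.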

8. The method for the stochastic inspections on the power grid lines based on the unmanned aerial vehicle-assisted edge computing according to claim 5, wherein Step S42 includes Step S421 to Step S422: Step S421, constructing, by a wired power supply mode, a balanced energy consumption model E_even^all(t) of the unmanned aerial vehicle group corresponding to the t-th time slot, based on the central base station, according to the following formula: E_even^all(t) = Σ_{m=1}^{M} [flyE_m^UAV(t) + transE_{m,SUAV}^UAV(t) + (1 - a_m^UAV(t))·comE_m^SUAV(t) + a_m^UAV(t)·(2 - a_m^UAV(t))·transE_{m,BS}^SUAV(t)] + flyE^SUAV(t) + ω·Σ_{m=1}^{M} Σ_{m′=1, m′≠m}^{M} |(flyE_m^UAV(t) + transE_{m,SUAV}^UAV(t)) - (flyE_{m′}^UAV(t) + transE_{m′,SUAV}^UAV(t))|, where ω represents a balanced energy consumption coefficient; flyE_m^UAV(t) = (W_m^UAV/2)·||L_m^UAV(t) - L_m^UAV(t-1)||², representing a flight energy consumption of the m-th inspection unmanned aerial vehicle at the t-th time slot; flyE^SUAV(t) = (W^SUAV/2)·||L^SUAV(t) - L^SUAV(t-1)||², representing a flight energy consumption of the superior unmanned aerial vehicle at the t-th time slot; comE_m^SUAV(t) = K^SUAV·(f^SUAV(t))²·C^SUAV·D_m^UAV(t), representing the energy consumed by offloading the video image data D_m^UAV(t) acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot to the superior unmanned aerial vehicle for processing, K^SUAV representing an effective switched capacitance corresponding to a CPU of the superior unmanned aerial vehicle; transE_{m,SUAV}^UAV(t) = transT_{m,SUAV}^UAV(t)·P_m^UAV(t), representing a transmission energy consumption of transmitting the video image data D_m^UAV(t) acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot to the superior unmanned aerial vehicle; transE_{m,BS}^SUAV(t) = transT_{m,BS}^SUAV(t)·P^SUAV(t), representing a transmission energy consumption of the data D_m^UAV(t) between the superior unmanned aerial vehicle and the central base station, and then entering Step S422; and Step S422, further constructing, based on the balanced energy consumption model E_even^all(t) of the unmanned aerial vehicle group corresponding to the t-th time slot, an objective function for minimizing balanced energy consumption of the unmanned aerial vehicle group corresponding to each time slot, according to the following formulas: min_{P_m^UAV(t), P^SUAV(t), L^SUAV(t), a_m^UAV(t), f^SUAV(t)} E_even^all(t), s.t. C1: a_m^UAV(t) ∈ {0, 1}, ∀m ∈ M; C2: 0 < P_m^UAV(t) ≤ P_max^UAV, ∀m ∈ M; C3: 0 < P^SUAV(t) ≤ P_max^SUAV; C4: 0 < f^SUAV(t) ≤ f_max^SUAV; C5: x_min ≤ x(t) < x_max; C6: y_min ≤ y(t) < y_max; C7: h_min ≤ h(t) < h_max; C8: R_m^SUAV(t) ≤ R^SUAV(t), ∀m ∈ M; C9: (1 - a_m^UAV(t))·T_{m,0}(t) + a_m^UAV(t)·(2 - a_m^UAV(t))·T_{m,1}(t) ≤ τ, ∀m ∈ M, where τ represents the length of one time slot, C5 to C7 represent preset motion ranges for constraining the superior unmanned aerial vehicle, C8 represents a conditional requirement for a full-duplex communication of the superior unmanned aerial vehicle, and C9 represents that the video image data D_m^UAV(t) acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot needs to be offloaded within the time slot.
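The balanced model of claim 8 adds a pairwise fairness penalty on top of the total energy, so that no single inspection unmanned aerial vehicle's flight-plus-transmission budget drifts far from the others. A minimal Python sketch (the name `omega` is an editorial stand-in for the balancing coefficient, whose symbol is not fixed in the source):

```python
def balanced_energy(per_uav_energy, base_energy, omega):
    """Sketch of E_even^all(t): the base (total) energy plus omega times
    the sum of absolute pairwise differences of each inspection UAV's
    flight + transmission energy."""
    penalty = sum(abs(ei - ej)
                  for i, ei in enumerate(per_uav_energy)
                  for j, ej in enumerate(per_uav_energy) if j != i)
    return base_energy + omega * penalty
```

Because each ordered pair is counted once in each direction, the penalty for two UAVs with energies 1 and 3 is |1-3| + |3-1| = 4, matching the double sum over m and m′ ≠ m.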

9. The method for the stochastic inspections on the power grid lines based on the unmanned aerial vehicle-assisted edge computing according to claim 7, wherein in Step S7, if the iteration overflow condition is not satisfied, the following Step S71 to Step S73 are performed: Step S71, randomly initializing a population K(t) at the t-th time slot, K(t) = {L_1^SUAV(t), L_2^SUAV(t), . . . , L_i^SUAV(t), . . . , L_I^SUAV(t)}, where 1 ≤ i ≤ I, I represents the number of individuals in the population K(t) at the t-th time slot, and L_i^SUAV(t) represents the i-th candidate position coordinates of the superior unmanned aerial vehicle in the population K(t) at the t-th time slot, and then entering Step S72; Step S72, obtaining, based on the system status at the t-th time slot, a fitness respectively corresponding to each of the individuals in the population K(t) at the t-th time slot, in combination with the system resource allocations and the offload decision schemes for the video image data in the action space of the system at the t-th time slot corresponding to the position coordinates of the superior unmanned aerial vehicle, according to the following formula: Fit(t)|_{L_i^SUAV(t)} = 1 / (1 + E_even^all(t)|_{L_i^SUAV(t)}), and then entering Step S73; and Step S73, determining whether the fitness corresponding to each of the individuals in the population K(t) at the t-th time slot satisfies a preset fitness threshold or not; if yes, selecting the individual with the highest fitness, that is, obtaining the position coordinates of the superior unmanned aerial vehicle corresponding to that individual and updating the position coordinates of the superior unmanned aerial vehicle, and then returning to Step S6; if no, performing selection, crossover, and mutation, based on the fitness of each of the individuals in the population K(t) at the t-th time slot, on the data in the population K(t) at the t-th time slot, updating each of the individuals in the population K(t) at the t-th time slot, and then returning to Step S72.
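The fitness function and the selection loop of Steps S71 to S73 can be sketched as follows. This is an illustrative Python sketch: crossover is omitted for brevity, the population size, mutation scale, and threshold are assumed values, and `energy_of` stands in for evaluating E_even^all(t) at a candidate superior-UAV position under the current action space.

```python
import random

def fitness(balanced_energy_value):
    # Fit(t)|_{L_i^SUAV(t)} = 1 / (1 + E_even^all(t)|_{L_i^SUAV(t)})
    return 1.0 / (1.0 + balanced_energy_value)

def ga_position_search(energy_of, bounds, pop_size=8, threshold=0.9,
                       max_gens=100, mut=0.5):
    """Sketch of Steps S71-S73: evolve candidate superior-UAV positions
    until some individual's fitness exceeds the preset threshold, or the
    generation budget runs out (returning the current best)."""
    lo, hi = bounds
    # Step S71: random initial population of I = pop_size 3-D positions.
    pop = [[random.uniform(lo, hi) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(max_gens):
        # Step S72: score every individual by its fitness.
        scored = sorted(pop, key=lambda p: fitness(energy_of(p)), reverse=True)
        best = scored[0]
        if fitness(energy_of(best)) >= threshold:
            return best                  # Step S73: accept the best individual
        # Step S73 (threshold not met): keep the fitter half and mutate.
        parents = scored[:pop_size // 2]
        pop = [[c + random.gauss(0, mut) for c in random.choice(parents)]
               for _ in range(pop_size)]
    return best
```

Lower balanced energy maps monotonically to higher fitness, so selecting by fitness is equivalent to selecting the position with the lowest E_even^all(t).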

10. The method for the stochastic inspections on the power grid lines based on the unmanned aerial vehicle-assisted edge computing according to claim 9, wherein in Step S73, the preset fitness threshold is a lower limit of the preset fitness; when the preset fitness threshold is the lower limit of the preset fitness, it is determined whether the fitness corresponding to each of the individuals in the population K(t) at the t-th time slot is greater than the lower limit of the preset fitness or not.

11. The method for the stochastic inspections on the power grid lines based on the unmanned aerial vehicle-assisted edge computing according to claim 1, wherein the iteration overflow condition in Step S7 is that a maximum preset iteration number is reached, or that a variance of the energy consumption of the unmanned aerial vehicle group corresponding to the t-th time slot in each iteration, within a preset iteration number counted from a current iteration backwards towards historical iterations, is less than a preset range of energy consumption fluctuations.

Description

BRIEF DESCRIPTION OF DRAWINGS

[0017] FIG. 1 illustrates an implementation flow chart of a method for unmanned aerial vehicle-assisted stochastic inspections on power grid lines integrated with a mobile edge computing designed in one embodiment of the present disclosure.

[0018] FIG. 2 illustrates a model diagram of an unmanned aerial vehicle-assisted power grid lines stochastic inspection system in an application implementation designed in one embodiment of the present disclosure.

[0019] FIG. 3 illustrates a schematic diagram of a digital twin network for unmanned aerial vehicle-assisted PGL stochastic inspections in an application implementation designed in one embodiment of the present disclosure.

[0020] FIG. 4 illustrates a schematic diagram of DDPG for solving system resources allocations and task offload decision schemes in an application implementation designed in one embodiment of the present disclosure.

[0021] FIG. 5 illustrates a performance chart of average balanced energy consumption of the system corresponding to different algorithm schemes in an application implementation designed in one embodiment of the present disclosure.

[0022] FIG. 6 illustrates a relationship chart between the number of inspection unmanned aerial vehicles and balanced energy consumption of the system corresponding to the different algorithm schemes in an application implementation designed in one embodiment of the present disclosure.

[0023] FIG. 7 illustrates comparisons of the balanced energy consumption of the system relative to a value D corresponding to different schemes in an application implementation designed in one embodiment of the present disclosure.

DESCRIPTION OF EMBODIMENTS

[0024] In order to further reduce the inspection costs, an unmanned aerial vehicle-assisted edge computing method for stochastic inspections on power grid lines is provided by the present disclosure. Considering the limited carrying capacity of the unmanned aerial vehicles, the energy consumption of the unmanned aerial vehicles is reduced as much as possible while utilizing the unmanned aerial vehicles to assist the power grid lines inspections, thereby extending the operation time of the unmanned aerial vehicles under the same energy consumption conditions, thus further enhancing the continuous operating abilities of the unmanned aerial vehicles and improving the inspection efficiency. Specifically, based on the information provided by the digital twin network, the objective of minimizing the balanced energy consumption of the unmanned aerial vehicle group is achieved through joint optimization of computing resources, communication resources, unmanned aerial vehicle trajectories, and task offload decisions. Considering that latency requirements in inspection scenes are stringent, that couplings between variables are relatively high, and that the digital twin network has time-varying properties (due to different positions of the unmanned aerial vehicles at different time slots), an algorithm combining a genetic algorithm with reinforcement learning (GA-DDPG) is adopted to solve the optimization problems of the above objectives. Based on trained strategies, the reinforcement learning can quickly provide action strategies, which is suitable for solving problems with the time-varying properties. Agents in the GA-DDPG reinforcement learning need to obtain comprehensive and accurate system status information, and the digital twin is embedded into the GA-DDPG algorithm in the present disclosure to construct a mapping between physical objects and virtual models, thus implementing the above objectives.
The genetic algorithm in the GA-DDPG is used to reduce dimensions of decision spaces in the reinforcement learning algorithm and accelerate the training speed of the overall algorithm.

[0025] The exemplary embodiments are more comprehensively described in combination with the accompanying drawings now. However, the exemplary embodiments can be implemented in multiple forms and should not be understood as limited to the embodiments described herein. On the contrary, the embodiments provided herein enable the present disclosure to be more comprehensive and complete, and to fully convey concepts of the exemplary embodiments to a person skilled in the art. The same reference numbers in the drawings represent the same or similar parts, so repeated descriptions of them are omitted.

[0026] The described features, structures, or properties can be combined with one or more embodiments through any suitable modes. In the following description, many specific details are provided to lead to full understandings of the embodiments of the present disclosure. However, it can be realized by a person skilled in the art that the technical solutions of the present disclosure can be practiced without one or more among these specific details, or other methods, components, materials, devices, or operations can be employed. In these situations, it is not shown or described in detail of common structures, methods, devices, implementations, materials, or operations.

[0027] The flowcharts shown in the accompanying drawings are only exemplary descriptions; they are not required to include all contents and operations or steps, and are not required to be executed in the described order. For example, some operations or steps can be decomposed, while others can be merged or partially merged, thus the actual order of execution can change according to the actual situations.

[0028] The specific implementations of the present disclosure are further described in detail in combination with the accompanying drawings of the specification.

[0029] Designed by the present disclosure is a method for stochastic inspections on power grid lines based on unmanned aerial vehicle-assisted edge computing. As illustrated in FIG. 2, based on a central base station arranged on a fixed position, by applying an unmanned aerial vehicle group including M inspection unmanned aerial vehicles (UAV) and a superior unmanned aerial vehicle (SUAV), an inspection is conducted on a target power grid area including power grid equipment and power transmission lines. Each of the inspection unmanned aerial vehicles is equipped with a high-speed image capture module. In one embodiment, as illustrated in FIG. 1, the following Step S1 to Step S8 are specifically executed.

[0030] In Step S1, based on a flight mode of each of the inspection unmanned aerial vehicles in the unmanned aerial vehicle group, an unmanned aerial vehicle-assisted power grid lines stochastic inspection system is constructed. The inspection unmanned aerial vehicles are merely in charge of acquiring video images for the power grid equipment and the power transmission lines in the target power grid area, and data are processed on obtained video images by the superior unmanned aerial vehicle or the central base station, and then Step S2 is entered.

[0031] In one embodiment, the above-mentioned Step S1 is specifically executed in the following Step S11 to Step S13.

[0032] In Step S11, based on a constant motion status of each of the inspection unmanned aerial vehicles within each time slot, a moving speed v_m(t), a horizontal moving direction θ_m(t), and a vertical moving direction φ_m(t) of the m-th inspection unmanned aerial vehicle corresponding to the t-th time slot are obtained for each of the inspection unmanned aerial vehicles respectively according to following formulas:

[00001]
v_m(t) = α_1 · v_m(t−1) + (1 − α_1) · v̄ + √(1 − α_1²) · Φ_m,
θ_m(t) = α_2 · θ_m(t−1) + (1 − α_2) · θ̄_m + √(1 − α_2²) · Ψ_m,
φ_m(t) = α_3 · φ_m(t−1) + (1 − α_3) · φ̄_m + √(1 − α_3²) · Ω_m,

[0033] where 1 ≤ m ≤ M, v̄ represents an average moving speed of all inspection unmanned aerial vehicles, θ̄_m represents an average horizontal moving angle of the m-th inspection unmanned aerial vehicle corresponding to the previous t−1 time slots, φ̄_m represents an average vertical moving angle of the m-th inspection unmanned aerial vehicle corresponding to the previous t−1 time slots, and v_m(t−1), θ_m(t−1) and φ_m(t−1) sequentially represent a moving speed, a horizontal moving direction, and a vertical moving direction of the m-th inspection unmanned aerial vehicle corresponding to the (t−1)-th time slot; 0 < α_1 < 1, α_1 represents a preset parameter used to adjust impacts of the moving speed of the inspection unmanned aerial vehicles corresponding to the (t−1)-th time slot on the moving speed corresponding to the t-th time slot; 0 < α_2 < 1, α_2 represents a preset parameter used to adjust impacts of the horizontal moving direction of the inspection unmanned aerial vehicles corresponding to the (t−1)-th time slot on the horizontal moving direction corresponding to the t-th time slot; 0 < α_3 < 1, α_3 represents a preset parameter used to adjust impacts of the vertical moving direction of the inspection unmanned aerial vehicles corresponding to the (t−1)-th time slot on the vertical moving direction corresponding to the t-th time slot; and Φ_m, Ψ_m and Ω_m are preset parameters following independent Gaussian distributions that represent the randomness of the moving speed, the horizontal moving direction, and the vertical moving direction of the m-th inspection unmanned aerial vehicle respectively, and then Step S12 is entered.

[0034] In Step S12, according to a length τ of each time slot, the position coordinates L_m^UAV(t)=(x_m(t), y_m(t), h_m(t)) of the m-th inspection unmanned aerial vehicle corresponding to the t-th time slot are obtained for each of the inspection unmanned aerial vehicles respectively according to following formulas:

[00002]
x_m(t) = x_m(t−1) + τ · v_m(t−1) · cos(θ_m(t−1)),
y_m(t) = y_m(t−1) + τ · v_m(t−1) · sin(θ_m(t−1)),
h_m(t) = h_m(t−1) + τ · v_m(t−1) · sin(φ_m(t−1)),

[0035] where x_m(t), y_m(t), h_m(t) represent the values for the m-th inspection unmanned aerial vehicle respectively on coordinate axes x, y, z corresponding to the t-th time slot, x_m(t−1), y_m(t−1), h_m(t−1) represent the values for the m-th inspection unmanned aerial vehicle respectively on coordinate axes x, y, z corresponding to the (t−1)-th time slot, and then Step S13 is entered.
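By way of illustration only (not part of the claims), the mobility update of Step S11 and the position update of Step S12 can be sketched in Python. The parameter names (a1, a2, a3, the noise scale sigma, and the slot length tau) and the placement of tau in the position update are assumptions based on the reconstructed formulas above:

```python
import math
import random

def gauss_markov_step(v, theta, phi, v_bar, theta_bar, phi_bar,
                      a1=0.8, a2=0.8, a3=0.8, sigma=0.1):
    """One Gauss-Markov update of speed and heading angles (Step S11).

    Each new value is a memory term, a pull toward the mean, and an
    independent Gaussian perturbation, mirroring formula [00001].
    """
    v_new = a1 * v + (1 - a1) * v_bar + math.sqrt(1 - a1 ** 2) * random.gauss(0.0, sigma)
    theta_new = a2 * theta + (1 - a2) * theta_bar + math.sqrt(1 - a2 ** 2) * random.gauss(0.0, sigma)
    phi_new = a3 * phi + (1 - a3) * phi_bar + math.sqrt(1 - a3 ** 2) * random.gauss(0.0, sigma)
    return v_new, theta_new, phi_new

def position_step(x, y, h, v, theta, phi, tau=1.0):
    """Position update of Step S12 over one slot of assumed length tau."""
    return (x + tau * v * math.cos(theta),
            y + tau * v * math.sin(theta),
            h + tau * v * math.sin(phi))
```

With the memory coefficients set to 1 the perturbation term vanishes and the state is carried over unchanged, which gives a quick sanity check of the formulas.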

[0036] In Step S13, according to the moving speed, the horizontal moving direction, the vertical moving direction and the position coordinates of each of the inspection unmanned aerial vehicles respectively corresponding to the t-th time slot, the unmanned aerial vehicle-assisted power grid lines stochastic inspection system is constructed. The inspection unmanned aerial vehicles are merely in charge of acquiring video images for the power grid equipment and the power transmission lines in the target power grid area, and the data are processed on the obtained video images by the superior unmanned aerial vehicle or the central base station, and then Step S2 is entered.

[0037] In Step S2, the video images are acquired for the power grid equipment and the power transmission lines in the target power grid area by each of the inspection unmanned aerial vehicles in the unmanned aerial vehicle group based on the unmanned aerial vehicle-assisted power grid lines stochastic inspection system, and the video image data acquired and obtained by each of the inspection unmanned aerial vehicles corresponding to each time slot respectively are obtained, and then Step S3 is entered.

[0038] In Step S3, according to the video image data acquired and obtained by each of the inspection unmanned aerial vehicles corresponding to each time slot respectively, in combination with a weight, a signal transmission power and position coordinates of each of the inspection unmanned aerial vehicles, a weight, a signal transmission power, position coordinates, and a computing capacity of the superior unmanned aerial vehicle, and position coordinates of the central base station, as well as a system communication bandwidth, a digital twin network of the unmanned aerial vehicle-assisted power grid lines stochastic inspection system is constructed as illustrated in FIG. 3, to fit the position coordinates of each of the inspection unmanned aerial vehicles and the superior unmanned aerial vehicle, and a resource status of the system, and then Step S4 is entered.

[0039] In one embodiment, the above-mentioned Step S3 is specifically executed in the following Step S31 to Step S33.

[0040] In Step S31, according to the unmanned aerial vehicle-assisted power grid lines stochastic inspection system, in combination with the weight of each of the inspection unmanned aerial vehicles, the video image data acquired by each of the inspection unmanned aerial vehicles respectively corresponding to each time slot, the signal transmission power of each of the inspection unmanned aerial vehicles, the weight of the superior unmanned aerial vehicle, the CPU calculation frequency allocated to each of the inspection unmanned aerial vehicles respectively corresponding to each time slot, and the signal transmission power of the superior unmanned aerial vehicle, and the position coordinates of the central base station, a real physical entity network is constructed, and then Step S32 is entered.

[0041] In Step S32, based on the real physical entity network, a digital twin model of each of the inspection unmanned aerial vehicles respectively corresponding to each time slot is constructed according to a following formula:

[00003] DT_m^UAV(t) = {W_m^UAV, D_m^UAV(t), P_m^UAV(t), L_m^UAV(t), P_max^UAV},

[0042] where DT_m^UAV(t) represents a digital twin model of the m-th inspection unmanned aerial vehicle corresponding to the t-th time slot, W_m^UAV represents a weight of the m-th inspection unmanned aerial vehicle, D_m^UAV(t) represents video image data acquired by the m-th inspection unmanned aerial vehicle corresponding to the t-th time slot, P_m^UAV(t) represents a signal transmission power of the m-th inspection unmanned aerial vehicle corresponding to the t-th time slot, L_m^UAV(t) represents position coordinates of the m-th inspection unmanned aerial vehicle corresponding to the t-th time slot, and P_max^UAV represents a maximum signal transmission power of the m-th inspection unmanned aerial vehicle.

[0043] At the same time, a digital twin model of the superior unmanned aerial vehicle corresponding to each time slot is constructed according to a following formula:

[00004] DT^SUAV(t) = {W^SUAV, f^SUAV(t), P^SUAV(t), L^SUAV(t), P_max^SUAV, f_max^SUAV, C^SUAV},

[0044] where DT^SUAV(t) represents a digital twin model of the superior unmanned aerial vehicle corresponding to the t-th time slot, W^SUAV represents a weight of the superior unmanned aerial vehicle, f^SUAV(t) represents a CPU calculation frequency allocated to the superior unmanned aerial vehicle corresponding to the t-th time slot, P^SUAV(t) represents a signal transmission power of the superior unmanned aerial vehicle corresponding to the t-th time slot, L^SUAV(t) represents position coordinates of the superior unmanned aerial vehicle corresponding to the t-th time slot, P_max^SUAV represents a maximum signal transmission power of the superior unmanned aerial vehicle, f_max^SUAV represents a maximum CPU calculation frequency of the superior unmanned aerial vehicle, and C^SUAV represents a number of CPU cycles required to process 1 bit of data by the superior unmanned aerial vehicle.

[0045] Besides, a digital twin model DT.sup.BS of the central base station is constructed, according to a following formula:

[00005] DT^BS = {L^BS},

[0046] where L^BS represents the position coordinates of the central base station, and then Step S33 is entered.

[0047] In Step S33, based on the digital twin models of each of the inspection unmanned aerial vehicles respectively corresponding to each time slot, the digital twin models of the superior unmanned aerial vehicle respectively corresponding to each time slot, and the digital twin model of the central base station, the digital twin network of the unmanned aerial vehicle-assisted power grid lines stochastic inspection system is constructed, to fit the position coordinates of each of the inspection unmanned aerial vehicles and the superior unmanned aerial vehicle, and the resource status of the system, and then Step S4 is entered.
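As an illustrative sketch only (the field names below are assumptions, not part of the disclosure), the three digital twin models of Step S32 map naturally onto plain Python dataclasses, one per entity type:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class UAVTwin:
    """DT_m^UAV(t): twin of an inspection UAV at one time slot."""
    weight: float                          # W_m^UAV
    data_bits: float                       # D_m^UAV(t), acquired video data
    tx_power: float                        # P_m^UAV(t)
    position: Tuple[float, float, float]   # L_m^UAV(t)
    tx_power_max: float                    # P_max^UAV

@dataclass
class SUAVTwin:
    """DT^SUAV(t): twin of the superior UAV at one time slot."""
    weight: float                          # W^SUAV
    cpu_freq: float                        # f^SUAV(t)
    tx_power: float                        # P^SUAV(t)
    position: Tuple[float, float, float]   # L^SUAV(t)
    tx_power_max: float                    # P_max^SUAV
    cpu_freq_max: float                    # f_max^SUAV
    cycles_per_bit: float                  # C^SUAV

@dataclass
class BSTwin:
    """DT^BS: twin of the fixed central base station."""
    position: Tuple[float, float, float]   # L^BS
```

The digital twin network of Step S33 is then simply a collection of M `UAVTwin` instances plus one `SUAVTwin` and one `BSTwin` per time slot.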

[0048] In Step S4, according to the digital twin network of the unmanned aerial vehicle-assisted power grid lines stochastic inspection system, based on constraints of an offload latency and a data task processing latency for the power grid lines stochastic inspection system, an energy consumption model or a balanced energy consumption model of the unmanned aerial vehicle group corresponding to each time slot respectively is constructed, and an objective function for minimizing the energy consumption of the unmanned aerial vehicle group corresponding to each time slot respectively, or an objective function for minimizing the balanced energy consumption of the unmanned aerial vehicle group corresponding to each time slot respectively, is further constructed, and then Step S5 is entered.

[0049] In one embodiment, the above-mentioned Step S4 is specifically executed in the following Step S41 to Step S42.

[0050] In Step S41, according to the digital twin network of the unmanned aerial vehicle-assisted power grid lines stochastic inspection system, a general latency model of the video image data acquired by each of the inspection unmanned aerial vehicles at each time slot corresponding to each offload type respectively is constructed, and then Step S42 is entered.

[0051] The above-mentioned Step S41 herein is further specifically executed in the following Step S411 to Step S413.

[0052] In Step S411, based on that the inspection unmanned aerial vehicles are merely capable of choosing one between the superior unmanned aerial vehicle and the central base station to offload the video image data, each of the inspection unmanned aerial vehicles communicates with the superior unmanned aerial vehicle by adopting a NOMA mode, that is, the inspection unmanned aerial vehicles share a common frequency spectrum to communicate with the superior unmanned aerial vehicle, while the superior unmanned aerial vehicle communicates with the central base station by adopting an OFDMA mode. A data transmission rate between the m-th inspection unmanned aerial vehicle and the superior unmanned aerial vehicle corresponding to the t-th time slot is denoted R_m^UAV(t), and

[00006] R_m^UAV(t) = B · log2( 1 + P_m^UAV(t) · H_{m,SUAV}^UAV(t) / ( Σ_{i=k+1, (k)=m}^{(M)} P_{(i)}^UAV(t) · H_{(i),SUAV}^UAV(t) + σ² ) ),

[0053] where B represents a bandwidth of a communication channel and σ² represents an additive Gaussian white noise power. H_{m,SUAV}^UAV(t) represents a channel power gain between the m-th inspection unmanned aerial vehicle and the superior unmanned aerial vehicle within a time slot t, which is defined as

[00007] H_{m,SUAV}^UAV(t) = g_0 / ‖L_m^UAV(t) − L^SUAV(t)‖²,

where g_0 represents a path loss per unit distance. A receiving terminal of the superior unmanned aerial vehicle decodes the superposed signals transmitted by the M inspection unmanned aerial vehicles by adopting a successive interference cancellation (SIC) mode, and the decoding sequence is executed in a descending order of the channel gains. Within the t-th time slot, the descending order of the channel gains can be expressed as H_{(1),SUAV}^UAV(t) ≥ H_{(2),SUAV}^UAV(t) ≥ . . . ≥ H_{(M),SUAV}^UAV(t), where (k), 1 ≤ k ≤ M, denotes the index of the inspection unmanned aerial vehicle with the k-th largest channel gain; and Σ_{i=k+1, (k)=m}^{(M)} P_{(i)}^UAV(t) · H_{(i),SUAV}^UAV(t) represents an interference of the other inspection unmanned aerial vehicles {k+1, . . . , (M)} with the data transmission rate when the m-th inspection unmanned aerial vehicle is uploading data.

[0054] Within any time slot, the superior unmanned aerial vehicle communicates with the central base station by adopting the OFDMA (orthogonal frequency division multiple access) mode. According to a Shannon formula, a data transmission rate between the superior unmanned aerial vehicle and the central base station is

[00008] R^SUAV(t) = B · log2( 1 + P^SUAV(t) · H_{SUAV}^BS(t) / σ² ),

[0055] where H_{SUAV}^BS(t) represents a channel power gain between the superior unmanned aerial vehicle and the central base station within the t-th time slot, which is defined as

[00009] H_{SUAV}^BS(t) = g_0 / ‖L^BS − L^SUAV(t)‖².
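For illustration only, the channel gains and the two rate formulas [00006] to [00009] can be sketched as follows. The helper names and the simple inverse-square gain are assumptions consistent with the reconstructed formulas; real link budgets are of course more involved:

```python
import math

def channel_gain(g0, pos_a, pos_b):
    """H = g0 / ||L_a - L_b||^2, the path-loss form used in [00007]/[00009]."""
    d2 = sum((a - b) ** 2 for a, b in zip(pos_a, pos_b))
    return g0 / d2

def noma_rates(B, powers, gains, noise):
    """Per-UAV NOMA uplink rates with SIC decoding in descending gain order.

    The UAV decoded k-th only sees interference from the not-yet-decoded
    UAVs k+1..M (those with smaller gains), matching formula [00006].
    """
    order = sorted(range(len(gains)), key=lambda i: gains[i], reverse=True)
    rates = {}
    for k, m in enumerate(order):
        interference = sum(powers[i] * gains[i] for i in order[k + 1:])
        rates[m] = B * math.log2(1 + powers[m] * gains[m] / (interference + noise))
    return rates

def ofdma_rate(B, p_suav, gain, noise):
    """Shannon rate between the superior UAV and the base station [00008]."""
    return B * math.log2(1 + p_suav * gain / noise)
```

With a single inspection UAV the interference term vanishes and `noma_rates` collapses to the plain Shannon formula, which matches the OFDMA expression.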

[0056] When the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot are offloaded to the superior unmanned aerial vehicle for processing, since the amount of the data in the processing results is relatively small, the transmission latency and the transmission energy consumption of the processing results from the superior unmanned aerial vehicle to the central base station can be ignored. When the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot are offloaded to the central base station for processing, since power is supplied to the central base station by adopting a wired mode, the computing energy consumption of the central base station can be ignored. Besides, only one offload mode can be chosen by the m-th inspection unmanned aerial vehicle within one time slot.

[0057] Further, a communication latency model transT.sub.m,SUAV.sup.UAV(t) between each of the inspection unmanned aerial vehicles and the superior unmanned aerial vehicle corresponding to each time slot is constructed according to a following formula:

[00010] transT_{m,SUAV}^UAV(t) = D_m^UAV(t) / R_m^UAV(t),

[0058] where transT_{m,SUAV}^UAV(t) represents a communication latency between the m-th inspection unmanned aerial vehicle and the superior unmanned aerial vehicle corresponding to the t-th time slot, and D_m^UAV(t) represents the video image data acquired by the m-th inspection unmanned aerial vehicle corresponding to the t-th time slot.

[0059] In addition, a communication latency model transT.sub.m,BS.sup.SUAV(t) of the video image data acquired by each of the inspection unmanned aerial vehicles corresponding to each time slot respectively transmitted between the superior unmanned aerial vehicle and the central base station is constructed, according to a following formula:

[00011] transT_{m,BS}^SUAV(t) = D_m^UAV(t) / R^SUAV(t),

[0060] where transT_{m,BS}^SUAV(t) represents a communication latency of the video image data acquired by the m-th inspection unmanned aerial vehicle corresponding to the t-th time slot transmitted between the superior unmanned aerial vehicle and the central base station; and then Step S412 is entered.

[0061] In Step S412, based on a fact that the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot corresponding to a definition a.sub.m.sup.UAV(t)=0 are offloaded to the superior unmanned aerial vehicle for processing, a data processing latency model comT.sub.m.sup.SUAV(t) at a receiving terminal of the superior unmanned aerial vehicle for the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot is constructed according to a following formula:

[00012] comT_m^SUAV(t) = D_m^UAV(t) · C^SUAV / f^SUAV(t),

[0062] where C^SUAV represents the number of CPU cycles required to process 1 bit of data by the superior unmanned aerial vehicle, and f^SUAV(t) represents the CPU calculation frequency allocated to the superior unmanned aerial vehicle corresponding to the t-th time slot.

[0063] Based on a fact that the superior unmanned aerial vehicle processes the video image data in a non-preemptive mode in a descending order of the channel power gains, a queue waiting latency model queT_m^SUAV for the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot before being processed by the superior unmanned aerial vehicle is constructed according to a following formula:

[00013] queT_m^SUAV = Σ_{i=1, (k)=m}^{k−1} (1 − a_{(i)}^UAV(t)) · comT_{(i)}^SUAV(t),

[0064] where (i) represents a sequence number of the inspection unmanned aerial vehicle whose video image data are processed i-th by the superior unmanned aerial vehicle, and k represents a sequence number of the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot waiting to be processed by the superior unmanned aerial vehicle.

[0065] Then a general latency model T.sub.m,0(t) corresponding to offloading the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot to the superior unmanned aerial vehicle for processing is constructed according to a following formula:

[00014] T_{m,0}(t) = transT_{m,SUAV}^UAV(t) + comT_m^SUAV(t) + queT_m^SUAV,

[0066] and then Step S413 is entered.

[0067] In Step S413, based on a fact that the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot corresponding to a definition a.sub.m.sup.UAV(t)=1 are offloaded to the central base station for processing, a general latency model T.sub.m,1(t) corresponding to offloading the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot to the central base station for processing is constructed according to a following formula:

[00015] T_{m,1}(t) = transT_{m,BS}^SUAV(t) + queT_m^SUAV,

[0068] and then Step S42 is entered.
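For illustration only, the two general latency models [00014] and [00015] reduce to a few arithmetic operations; the function and parameter names below are assumptions chosen to mirror the symbols in the text:

```python
def latency_suav(D, R_uav, C_suav, f_suav, queue_wait):
    """T_{m,0}: offload to the superior UAV (a_m^UAV(t) = 0), formula [00014]."""
    trans = D / R_uav            # transT_{m,SUAV}^UAV(t), upload latency
    comp = D * C_suav / f_suav   # comT_m^SUAV(t), processing latency
    return trans + comp + queue_wait

def latency_bs(D, R_suav, queue_wait):
    """T_{m,1}: relay to the central base station (a_m^UAV(t) = 1), formula [00015]."""
    return D / R_suav + queue_wait
```

Each inspection UAV's offload decision then amounts to comparing these two values subject to the slot-length constraint C9 introduced below.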

[0069] In Step S42, according to the general latency model of the video image data acquired by each of the inspection unmanned aerial vehicles at each time slot corresponding to each offload type respectively, based on the constraints of the offload latency and the data task processing latency for the power grid lines stochastic inspection system, the energy consumption model or the balanced energy consumption model of the unmanned aerial vehicle group corresponding to each time slot respectively is constructed, and the objective function for minimizing the energy consumption of the unmanned aerial vehicle group corresponding to each time slot respectively is further constructed, and then Step S5 is entered.

[0070] In one embodiment, the above-mentioned Step S42 is further designed to execute the following Step S421 to Step S422.


[0072] In Step S421, based on the central base station being powered in a wired mode, an energy consumption model E^all(t) of the unmanned aerial vehicle group corresponding to the t-th time slot is constructed according to a following formula:

[00016] E^all(t) = Σ_{m=1}^{M} [ flyE_m^UAV(t) + transE_{m,SUAV}^UAV(t) + (1 − a_m^UAV(t)) · comE_m^SUAV(t) + a_m^UAV(t) · (2 − a_m^UAV(t)) · transE_{m,BS}^SUAV(t) ] + flyE^SUAV(t),

[0073] where

[00017] flyE_m^UAV(t) = (W_m^UAV / 2) · ‖L_m^UAV(t) − L_m^UAV(t−1)‖²,

where flyE_m^UAV(t) represents a flight energy consumption of the m-th inspection unmanned aerial vehicle at the t-th time slot;

[00018] flyE^SUAV(t) = (W^SUAV / 2) · ‖L^SUAV(t) − L^SUAV(t−1)‖²,

where flyE^SUAV(t) represents a flight energy consumption of the superior unmanned aerial vehicle at the t-th time slot; comE_m^SUAV(t) = κ^SUAV · f^SUAV(t)² · C^SUAV · D_m^UAV(t), where comE_m^SUAV(t) represents an energy consumed by offloading the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot to the superior unmanned aerial vehicle for processing, and κ^SUAV represents an effective switched capacitance corresponding to a CPU of the superior unmanned aerial vehicle; transE_{m,SUAV}^UAV(t) = transT_{m,SUAV}^UAV(t) · P_m^UAV(t), where transE_{m,SUAV}^UAV(t) represents a transmission energy consumption of transmitting the video image data D_m^UAV(t) acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot to the superior unmanned aerial vehicle; transE_{m,BS}^SUAV(t) = transT_{m,BS}^SUAV(t) · P^SUAV(t), where transE_{m,BS}^SUAV(t) represents a transmission energy consumption of the data D_m^UAV(t) between the superior unmanned aerial vehicle and the central base station, and then Step S422 is entered.

[0074] In Step S422, based on the energy consumption model E^all(t) of the unmanned aerial vehicle group corresponding to the t-th time slot, an objective function

[00019] min_{P_m^UAV(t), P^SUAV(t), L^SUAV(t), a_m^UAV(t), f^SUAV(t)} E^all(t)

for minimizing the energy consumption of the unmanned aerial vehicle group corresponding to each time slot is further constructed according to the following formulas:

[00020]
min_{P_m^UAV(t), P^SUAV(t), L^SUAV(t), a_m^UAV(t), f^SUAV(t)} E^all(t)
s.t. C1: a_m^UAV(t) ∈ {0, 1}, ∀m ∈ M
C2: 0 < P_m^UAV(t) ≤ P_max^UAV, ∀m ∈ M
C3: 0 < P^SUAV(t) ≤ P_max^SUAV
C4: 0 < f^SUAV(t) ≤ f_max^SUAV
C5: x_min ≤ x(t) < x_max
C6: y_min ≤ y(t) < y_max
C7: h_min ≤ h(t) < h_max
C8: R_m^UAV(t) ≤ R^SUAV(t), ∀m ∈ M
C9: (1 − a_m^UAV(t)) · T_{m,0}(t) + a_m^UAV(t) · (2 − a_m^UAV(t)) · T_{m,1}(t) ≤ τ, ∀m ∈ M,

[0075] where C5 to C7 represent preset motion ranges for constraining the superior unmanned aerial vehicle, C8 represents a conditional requirement for a full-duplex communication of the superior unmanned aerial vehicle, and C9 represents that the video image data D_m^UAV(t) acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot needs to be offloaded and processed within the time slot of length τ.
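For illustration only, the group energy model [00016] can be evaluated as a plain sum over the M inspection UAVs; the helper names and arguments are assumptions mirroring the symbols above, and the per-term energies are passed in precomputed:

```python
def fly_energy(weight, pos_now, pos_prev):
    """flyE = (W / 2) * ||L(t) - L(t-1)||^2, formulas [00017]/[00018]."""
    d2 = sum((a - b) ** 2 for a, b in zip(pos_now, pos_prev))
    return 0.5 * weight * d2

def group_energy(a, flyE_uav, transE_uav, comE, transE_bs, flyE_suav):
    """E^all(t) of formula [00016].

    a[m] in {0, 1} selects the offload target of UAV m: the (1 - a) factor
    keeps the SUAV computing energy, and a * (2 - a) (equal to a on {0, 1})
    keeps the SUAV-to-base-station transmission energy.
    """
    total = flyE_suav
    for m in range(len(a)):
        total += (flyE_uav[m] + transE_uav[m]
                  + (1 - a[m]) * comE[m]
                  + a[m] * (2 - a[m]) * transE_bs[m])
    return total
```

Note that on the binary domain the factor a·(2 − a) is simply a, so each UAV contributes exactly one of the two processing-side energy terms.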

[0076] In another embodiment, the above-mentioned Step S42 is further designed to execute the following Step S421 to Step S422.

[0077] In Step S421, based on the central base station being powered in a wired mode, a balanced energy consumption model E_even^all(t) of the unmanned aerial vehicle group corresponding to the t-th time slot is constructed according to a following formula:

[00021] E_even^all(t) = Σ_{m=1}^{M} [ flyE_m^UAV(t) + transE_{m,SUAV}^UAV(t) + (1 − a_m^UAV(t)) · comE_m^SUAV(t) + a_m^UAV(t) · (2 − a_m^UAV(t)) · transE_{m,BS}^SUAV(t) ] + flyE^SUAV(t) + ω · Σ_{m=1}^{M} Σ_{m′=1, m′≠m}^{M} | (flyE_m^UAV(t) + transE_{m,SUAV}^UAV(t)) − (flyE_{m′}^UAV(t) + transE_{m′,SUAV}^UAV(t)) |,

[0078] where ω represents a balanced energy consumption coefficient,

[00022] $flyE_m^{UAV}(t)=\frac{W_m^{UAV}}{2}\bigl\|L_m^{UAV}(t)-L_m^{UAV}(t-1)\bigr\|^{2}$, where $flyE_m^{UAV}(t)$ represents a flight energy consumption of the m-th inspection unmanned aerial vehicle at the t-th time slot;

[00023] $flyE^{SUAV}(t)=\frac{W^{SUAV}}{2}\bigl\|L^{SUAV}(t)-L^{SUAV}(t-1)\bigr\|^{2}$

[0079] where $flyE^{SUAV}(t)$ represents a flight energy consumption of the superior unmanned aerial vehicle at the t-th time slot; $comE_m^{SUAV}(t)=\kappa^{SUAV}\bigl(f^{SUAV}(t)\bigr)^{2}C^{SUAV}D_m^{UAV}(t)$, where $comE_m^{SUAV}(t)$ represents the energy consumed by offloading the video image data acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot to the superior unmanned aerial vehicle for processing, and $\kappa^{SUAV}$ represents an effective switched capacitance corresponding to the CPU of the superior unmanned aerial vehicle; $transE_{m,SUAV}^{UAV}(t)=transT_{m,SUAV}^{UAV}(t)\,P_m^{UAV}(t)$, where $transE_{m,SUAV}^{UAV}(t)$ represents the transmission energy consumption of transmitting the video image data $D_m^{UAV}(t)$ acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot to the superior unmanned aerial vehicle; $transE_{m,BS}^{SUAV}(t)=transT_{m,BS}^{SUAV}(t)\,P^{SUAV}(t)$, where $transE_{m,BS}^{SUAV}(t)$ represents the transmission energy consumption of the data $D_m^{UAV}(t)$ between the superior unmanned aerial vehicle and the central base station, and then Step S422 is entered.
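The energy terms defined above can be sketched numerically as follows; the weights, effective capacitance, cycle count, and transmission times used here are illustrative placeholders, not values from the present disclosure, and the offload weights $(1-a_m^{UAV}(t))$ and $a_m^{UAV}(t)(2-a_m^{UAV}(t))$ are assumed to be already applied to the inputs of the total.

```python
# Numeric sketch of the per-slot energy terms; all constants are
# illustrative placeholders, not values from the disclosure.

def fly_energy(weight, pos_now, pos_prev):
    """flyE(t): half the weight times the squared displacement in the slot."""
    d2 = sum((a - b) ** 2 for a, b in zip(pos_now, pos_prev))
    return weight / 2.0 * d2

def compute_energy(kappa, f_suav, cycles_per_bit, data_bits):
    """comE_m^SUAV(t) = kappa^SUAV * (f^SUAV)^2 * C^SUAV * D_m^UAV."""
    return kappa * f_suav ** 2 * cycles_per_bit * data_bits

def trans_energy(trans_time, power):
    """transE = transmission time * transmit power."""
    return trans_time * power

def balanced_total(fly_terms, trans_terms, com_terms, bs_terms, fly_suav, eta):
    """E_even^all(t): total energy plus an eta-weighted pairwise imbalance.

    The offload weights (1 - a) and a(2 - a) are assumed to be already
    applied inside com_terms and bs_terms.
    """
    M = len(fly_terms)
    total = (sum(fly_terms) + sum(trans_terms) + sum(com_terms)
             + sum(bs_terms) + fly_suav)
    imbalance = sum(abs((fly_terms[i] + trans_terms[i])
                        - (fly_terms[j] + trans_terms[j]))
                    for i in range(M) for j in range(M) if i != j)
    return total + eta * imbalance
```

The pairwise absolute-difference term penalizes an uneven split of flight and transmission energy across the inspection unmanned aerial vehicles, which is what extends operation time under the same payload conditions.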

[0080] In Step S422, based on the balanced energy consumption model $E_{even}^{all}(t)$ of the unmanned aerial vehicle group corresponding to the t-th time slot, an objective function

[00024] $\min\limits_{P_m^{UAV}(t),\,P^{SUAV}(t),\,L^{SUAV}(t),\,a_m^{UAV}(t),\,f^{SUAV}(t)} E_{even}^{all}(t)$

for minimizing balanced energy consumption of the unmanned aerial vehicle group corresponding to each time slot is constructed according to the following formulas:

[00025] $\min\limits_{P_m^{UAV}(t),\,P^{SUAV}(t),\,L^{SUAV}(t),\,a_m^{UAV}(t),\,f^{SUAV}(t)} E_{even}^{all}(t)$
$\mathrm{s.t.}\ C1{:}\ a_m^{UAV}(t)\in\{0,1\},\ \forall m\in M$
$C2{:}\ 0<P_m^{UAV}(t)\le P_{max}^{UAV},\ \forall m\in M$
$C3{:}\ 0<P^{SUAV}(t)\le P_{max}^{SUAV}$
$C4{:}\ 0<f^{SUAV}(t)\le f_{max}^{SUAV}$
$C5{:}\ x_{min}\le x(t)<x_{max}$
$C6{:}\ y_{min}\le y(t)<y_{max}$
$C7{:}\ h_{min}\le h(t)<h_{max}$
$C8{:}\ R_m^{UAV}(t)\le R^{SUAV}(t),\ \forall m\in M$
$C9{:}\ (1-a_m^{UAV}(t))\,T_{m,0}(t)+a_m^{UAV}(t)(2-a_m^{UAV}(t))\,T_{m,1}(t)\le T,\ \forall m\in M,$
[0081] where $T$ denotes the duration of one time slot, C5 to C7 represent preset motion ranges for constraining the superior unmanned aerial vehicle, C8 represents a conditional requirement for a full-duplex communication of the superior unmanned aerial vehicle, and C9 represents that the video image data $D_m^{UAV}(t)$ acquired by the m-th inspection unmanned aerial vehicle at the t-th time slot needs to be offloaded and processed within the time slot.

[0082] In Step S5, the position coordinates of the superior unmanned aerial vehicle are randomly initialized, and based on the position coordinates and the video image data of each of the inspection unmanned aerial vehicles respectively corresponding to a t-th time slot, a system status at the t-th time slot is constructed, and then Step S6 is entered.

[0083] In Step S6, based on the position coordinates of the superior unmanned aerial vehicle and the system status at the t-th time slot, the energy consumption model of the unmanned aerial vehicle group corresponding to each time slot respectively is solved by adopting a DDPG algorithm in deep reinforcement learning, according to the objective function for minimizing energy consumption of the unmanned aerial vehicle group corresponding to each time slot respectively or the objective function for minimizing balanced energy consumption of the unmanned aerial vehicle group corresponding to each time slot respectively, and an action space of the system at the t-th time slot corresponding to the system status at the t-th time slot is obtained in combination with the position coordinates of the superior unmanned aerial vehicle. The action space of the system at the t-th time slot is composed of the signal transmission power of each of the inspection unmanned aerial vehicles corresponding to the t-th time slot respectively, an offload mode of each of the inspection unmanned aerial vehicles corresponding to the t-th time slot respectively regarding the superior unmanned aerial vehicle or the central base station, and the signal transmission power and an allocated CPU calculation frequency of the superior unmanned aerial vehicle corresponding to the t-th time slot, and then Step S7 is entered.

[0084] The above-mentioned Step S6 is specifically executed in the following operations.

[0085] Firstly, two groups of neural networks are constructed, separately named as an Actor network group and a Critic network group. The Actor network group includes two deep neural networks with the same parameters, that is, an Actor policy network with all parameters marked as $\theta^{\mu}$ and an Actor target network with all parameters marked as $\theta^{\mu'}$. The Critic network group includes two deep neural networks with the same parameters, that is, a Critic policy network with all parameters marked as $\theta^{Q}$ and a Critic target network with all parameters marked as $\theta^{Q'}$.

[0086] Then, based on the position coordinates of the superior unmanned aerial vehicle, within the t-th time slot, a current system status $s_t$ is input into the Actor policy network, an action $\mu(s_t|\theta^{\mu})$ is output, and stochastic noises $N_t$ are attached to form the action decisions $a_t$ for interacting with the environment, that is, $a_t=\mu(s_t|\theta^{\mu})+N_t$, thus obtaining the rewards $r_t$ and entering the next time slot status of the system, and at the same time, this record $\{s_t, a_t, r_t, s_{t+1}\}$ is stored in an experience playback pool.

[0087] The current system status $s_t$, the action space $a_t$, and the reward function $r_t$ are separately represented as follows:

[00026] $s_t=\{L_1^{UAV}(t), L_2^{UAV}(t), \ldots, L_m^{UAV}(t), \ldots, L_M^{UAV}(t), D_1^{UAV}(t), D_2^{UAV}(t), \ldots, D_m^{UAV}(t), \ldots, D_M^{UAV}(t)\}$.

[0088] The selectable action space based on the current system status $s_t$ is as follows:

[00027] $a_t=\{P_1^{UAV}(t), P_2^{UAV}(t), \ldots, P_m^{UAV}(t), \ldots, P_M^{UAV}(t), a_1^{UAV}(t), a_2^{UAV}(t), \ldots, a_m^{UAV}(t), \ldots, a_M^{UAV}(t), f^{SUAV}(t), P^{SUAV}(t)\}$.

[0089] Based on the current system status s.sub.t and the action decisions at the status, the obtained rewards r.sub.t are defined as:

[00028] $r_t=-E_{even}^{all}(t)-1000$, [0090] where 1000 in the reward function represents a penalty term. When the conditional requirement for the full-duplex communication of the superior unmanned aerial vehicle is not satisfied, or the data acquired by the inspection unmanned aerial vehicles within the t-th time slot are not completely offloaded within this time slot, a default penalty value of 1000 is given accordingly.
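The reward above can be sketched in a few lines; the function name and the boolean flag that bundles the C8 and C9 checks are hypothetical conveniences, not identifiers from the disclosure.

```python
# Sketch of the reward defined above: the negative balanced energy, with the
# fixed 1000 penalty applied when C8 or C9 is violated in the slot.

def reward(balanced_energy, constraints_ok):
    """r_t = -E_even^all(t) - 1000 * (constraint violated)."""
    return -balanced_energy - (0.0 if constraints_ok else 1000.0)
```

Maximizing this reward therefore minimizes the balanced energy consumption while steering the agent away from infeasible full-duplex or offloading decisions.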

[0091] In one embodiment, the DDPG algorithm in the deep reinforcement learning related to the above specific execution operations of Step S6 is executed specifically as follows, as illustrated in FIG. 4.

[0092] In S61, starting from the first time slot, the above operations are repeated until the experience playback pool is filled.

[0093] In S62, N samples are randomly selected from the experience playback pool and one of the N samples is recorded as {s.sub.i, a.sub.i, r.sub.i, s.sub.i+1}.

[0094] In S63, the status $s_{i+1}$ and the action decisions $\mu'(s_{i+1}|\theta^{\mu'})$ are input into the Critic target network, and the value Q obtained based on this status and these action decisions is output as $Q'(s_{i+1}, \mu'(s_{i+1}|\theta^{\mu'})|\theta^{Q'})$, where the action decisions $\mu'(s_{i+1}|\theta^{\mu'})$ are provided by the Actor target network based on the status $s_{i+1}$, and the target value is recorded as

[00029] $y_i=r_i+\gamma\,Q'(s_{i+1}, \mu'(s_{i+1}|\theta^{\mu'})|\theta^{Q'})$,

where $\gamma$ represents a discount factor.

[0095] In S64, the status $s_i$ and the action decisions $a_i$ are input into the Critic policy network, and the value Q obtained based on the current status and action decisions is output as $Q(s_i, a_i|\theta^{Q})$.

[0096] In S65, the following loss function is adopted to update the parameters $\theta^{Q}$ of the Critic policy network:

[00030] $L(\theta^{Q})=\frac{1}{N}\sum_i\bigl(y_i-Q(s_i,a_i|\theta^{Q})\bigr)^{2}$.

[0097] In S66, the parameters $\theta^{\mu}$ of the Actor policy network are updated by adopting a policy gradient ascent method to implement a maximization of the policy objective function $J(\theta^{\mu})$:

[00031] $\nabla_{\theta^{\mu}}J\approx\frac{1}{N}\sum_i\nabla_{a}Q(s,a|\theta^{Q})\big|_{s=s_i,\,a=\mu(s_i)}\,\nabla_{\theta^{\mu}}\mu(s|\theta^{\mu})\big|_{s_i}$, [0098] where $\mu(s|\theta^{\mu})|_{s_i}$ is the action decisions obtained by the Actor policy network based on the status $s_i$, and $\nabla_{a}Q(s,a|\theta^{Q})|_{s=s_i,\,a=\mu(s_i)}$ is the gradient, with respect to the action, of the value Q obtained by the Critic policy network based on the status $s_i$ and the action decisions $\mu(s|\theta^{\mu})|_{s_i}$.

[0099] In S67, the parameters $\theta^{\mu'}$ of the Actor target network and the parameters $\theta^{Q'}$ of the Critic target network are updated regularly by using a soft updating mode:

[00032] $\theta^{\mu'}=\tau\theta^{\mu}+(1-\tau)\theta^{\mu'},\qquad \theta^{Q'}=\tau\theta^{Q}+(1-\tau)\theta^{Q'}$,

where $\tau\in(0,1)$ represents a soft update coefficient.
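The per-batch quantities in S63 to S67 can be sketched numerically as follows; `gamma` and `tau` are assumed standard DDPG hyperparameters, and the arrays stand in for the outputs of the target and policy networks, which are not implemented here.

```python
import numpy as np

# Numeric sketch of the per-batch quantities in S63-S67: the target value
# y_i, the critic loss L(theta_Q), and the soft update; gamma and tau are
# assumed hyperparameters, not values from the disclosure.

def target_values(rewards, next_q, gamma=0.99):
    """S63: y_i = r_i + gamma * Q'(s_{i+1}, mu'(s_{i+1}))."""
    return rewards + gamma * next_q

def critic_loss(y, q):
    """S65: mean squared error over the N sampled transitions."""
    return float(np.mean((y - q) ** 2))

def soft_update(policy_params, target_params, tau=0.005):
    """S67: theta' <- tau * theta + (1 - tau) * theta'."""
    return tau * policy_params + (1 - tau) * target_params
```

Because `tau` is small, the target networks drift slowly towards the policy networks, which is what cuts the correlation between the target value Q and the evaluated value Q discussed later in this section.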

[0100] In Step S7, whether the iteration overflow condition is satisfied or not is determined; if yes, Step S8 is entered; if no, the position coordinates of the superior unmanned aerial vehicle are solved and updated by using a genetic algorithm based on the system status at the t-th time slot, in combination with the system resource allocations and the offload decision schemes for the video image data in the action space of the system at the t-th time slot corresponding to the position coordinates of the superior unmanned aerial vehicle, and Step S6 is returned.

[0101] The iteration overflow condition is that a maximum preset iteration number is reached, or that, within a preset iteration number counted from the current iteration backwards towards historical iterations, the variance of the energy consumption of the unmanned aerial vehicle group corresponding to the t-th time slot in each iteration is less than a preset range of energy consumption fluctuations.

[0102] In one embodiment, in the above-mentioned Step S7, when the iteration overflow condition is not satisfied, the following Step S71 to Step S73 are executed.

[0103] In Step S71, a population $K(t)=\{L_1^{SUAV}(t), L_2^{SUAV}(t), \ldots, L_i^{SUAV}(t), \ldots, L_I^{SUAV}(t)\}$ at the t-th time slot is randomly initialized, where $1\le i\le I$, $I$ represents a number of individuals in the population $K(t)$ at the t-th time slot, and $L_i^{SUAV}(t)$ represents the i-th position coordinates of the superior unmanned aerial vehicle in the population $K(t)$ at the t-th time slot, and then Step S72 is entered.

[0104] In practical applications, a phenotype of the position coordinates of the superior unmanned aerial vehicle is further transformed into a genotype by using a binary encoding mode, and the binary encoding method specifically lies in the following.

[0105] A range of x(t) is $[x_{min}, x_{max}]$, and this parameter is expressed by a binary code with a length of $l$, that is, this interval is divided into $2^{l}$ parts; similarly, $[y_{min}, y_{max}]$ and $[h_{min}, h_{max}]$ are each also divided into $2^{l}$ parts. The genotype corresponding to x(t) represents data in the interval $[0, x_{max}-x_{min}]$, the same as y(t) and h(t); thus the genotype of one individual can be expressed as:

[00033] $10100 \rightarrow [0, x_{max}-x_{min}],\quad 11010 \rightarrow [0, y_{max}-y_{min}],\quad 01001 \rightarrow [0, h_{max}-h_{min}]$.

[0106] In Step S72, for each of the individuals in the population K(t) at the t-th time slot respectively, based on the system status at the t-th time slot, in combination with system resource allocations and offload decision schemes for the video image data in the action space of the system at the t-th time slot corresponding to the position coordinates of the superior unmanned aerial vehicle, a fitness respectively corresponding to each of the individuals in the population K(t) at the t-th time slot is obtained according to a following formula:

[00034] $Fit(t)\big|_{L_i^{SUAV}(t)}=\frac{1}{1+E_{even}^{all}(t)\big|_{L_i^{SUAV}(t)}}$, [0107] and then Step S73 is entered.

[0108] In Step S73, whether the fitness corresponding to each of the individuals in the population K(t) at the t-th time slot satisfies a preset fitness threshold or not is determined; if yes, an individual corresponding to a highest fitness is selected, that is, the position coordinates of the superior unmanned aerial vehicle corresponding to the individual are obtained and the position coordinates of the superior unmanned aerial vehicle are updated, and then Step S6 is returned; if no, based on the fitness of each of the individuals in the population K(t) at the t-th time slot, the data in the population K(t) at the t-th time slot are selected, crossed, and mutated, and each of the individuals in the population K(t) at the t-th time slot is updated, and then Step S72 is returned. Corresponding to the binary encoding conversion operation adopted between Step S71 and Step S72, the decoding herein (the same for y(t) and h(t)) is as follows:

[00035] $x(t)=x_{min}+\Bigl(\sum_{i=1}^{l} b_i 2^{\,i-1}\Bigr)\frac{x_{max}-x_{min}}{2^{l}-1}$, [0109] where $b_i$ represents the binary number of the i-th digit.
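The decoding and the fitness evaluation above can be sketched as follows; the function names are hypothetical, and Python's `int(bits, 2)` reads the most significant bit first, which matches the formula above up to the digit-ordering convention.

```python
# Sketch of the binary decoding and the fitness evaluation used by the
# genetic algorithm; function names are illustrative placeholders.

def decode(bits, lo, hi):
    """Map an l-bit genotype back onto [lo, hi]:
    x = lo + int(bits) * (hi - lo) / (2**l - 1)."""
    return lo + int(bits, 2) * (hi - lo) / (2 ** len(bits) - 1)

def fitness(balanced_energy):
    """Fit = 1 / (1 + E_even^all): lower balanced energy, higher fitness."""
    return 1.0 / (1.0 + balanced_energy)
```

With this fitness, a candidate position of the superior unmanned aerial vehicle that yields a lower balanced energy consumption is always ranked higher, so selection, crossover, and mutation drive the population towards energy-efficient positions.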

[0110] In one embodiment, the preset fitness threshold herein is a lower limit of the preset fitness; in this case, whether the fitness corresponding to each of the individuals in the population K(t) at the t-th time slot is greater than the lower limit of the preset fitness or not is determined.

[0111] In Step S8, according to the position coordinates of the superior unmanned aerial vehicle, and the system resource allocations and the offload decision schemes for the video image data in the action space of the corresponding system at the t-th time slot, the video image data acquired and obtained by each of the inspection unmanned aerial vehicles corresponding to each time slot in Step S2 are processed, that is, offloaded to the superior unmanned aerial vehicle or the central base station for processing. The identification and the positioning of power grid system defects are executed by the superior unmanned aerial vehicle or the central base station on the video image data offloaded by the inspection unmanned aerial vehicles.

[0112] The method for stochastic inspections on power grid lines based on unmanned aerial vehicle-assisted edge computing integrated with mobile edge computing designed by the present disclosure is applied in practice. The performance comparison between different algorithm schemes under a condition of M=3 is as illustrated in FIG. 5. The Actor-Critic algorithm cannot reach a convergence status as the training times increase, because the Actor-Critic algorithm needs to synchronously update the Actor network and the Critic network during the training process, while the selection of the action decisions by the Actor network depends on the value evaluation provided by the Critic network. Considering that the Critic network itself is difficult to converge, the Actor-Critic algorithm is even more difficult to converge in some scenarios. By contrast, thanks to the dual-network structure of the Critic policy network and the Critic target network, the correlation between the target value Q and the evaluated value Q is cut off by the DQN (Deep Q-Network) and the GA-DDPG (genetic algorithm-assisted DDPG) during the training process, promoting the convergence of the Critic network. In addition, it can be seen from the figure that the DQN algorithm converges at Episode=90 and the GA-DDPG algorithm converges at Episode=200. Compared with the GA-DDPG algorithm, the DQN algorithm has a relatively fast converging rate but poor converging effects, because the DQN algorithm discretizes the continuous action spaces, reducing the breadth of the utilizable action spaces, so that the best action decisions cannot be found continuously and accurately; thus, a fluctuation phenomenon is observed in the balanced energy consumption of the system during the algorithm convergence stage.

[0113] After algorithm convergence, the balanced energy consumption results of three algorithmic schemes are compared under different settings for the number of inspection unmanned aerial vehicles, specifically including the GA-DDPG scheme, the DQN scheme, and the scheme of offloading all computing tasks to the superior unmanned aerial vehicle; the results are as illustrated in FIG. 6. It can be observed that, for the same number of inspection unmanned aerial vehicles, the balanced energy consumption of the system optimized by the GA-DDPG algorithm is lower compared with the DQN. That is because the GA-DDPG algorithm explores a continuous action space, takes precise actions, and finally obtains the optimal strategy, which significantly reduces the balanced energy consumption of the system, while the discretization of actions in the DQN algorithm may cause the algorithm to skip better actions. In addition, the balanced energy consumption of the system increases with the number of inspection unmanned aerial vehicles, and as that number increases, the gap between the balanced energy consumption of the system optimized by the GA-DDPG algorithm and by the DQN algorithm gradually widens. This is because the number of variables in the action spaces increases with the number of inspection unmanned aerial vehicles, and more variables lead to an increased probability of the DQN algorithm skipping better actions, so the optimization effects of the DQN algorithm gradually deteriorate. Finally, in the case of adopting the scheme of offloading all computing tasks to the superior unmanned aerial vehicle, when the number of inspection unmanned aerial vehicles is relatively small, the gap in effects between this scheme and the DQN and the GA-DDPG is not significant.
As the number of inspection unmanned aerial vehicles increases, the disadvantages of this scheme gradually become prominent, because the MEC server embedded in a terminal of the superior unmanned aerial vehicle cannot satisfy the growing computing needs; it is more reasonable to offload the computing tasks acquired by individual inspection unmanned aerial vehicles to the central base station at this time.

[0114] FIG. 7 illustrates comparisons between the balanced energy consumption of the system under different schemes relative to a value D when M=3 (it is assumed that the amount of data acquired by the inspection unmanned aerial vehicles at any time slot follows a Gaussian distribution with a mean value D). The blue curve represents the scheme proposed by the present disclosure; the purple curve represents the proposed scheme without optimizing the transmission power (PP) of the inspection unmanned aerial vehicles; the green curve represents the proposed scheme without optimizing the PP and the transmission power of the superior unmanned aerial vehicle (SP); and the red curve represents the proposed scheme without optimizing the PP, the SP, and the computing resources of the superior unmanned aerial vehicle (SC). The following points can be seen from the figure. Firstly, as the value D increases, the balanced energy consumption of the system under each of the above four schemes increases, because in general an increase in the value D means that the amount of tasks acquired by each of the inspection unmanned aerial vehicles at different time slots increases, resulting in the consumption of more computing and communication resources. Secondly, by jointly optimizing the PP, the SP, and the SC, the performance of the scheme proposed by the present disclosure is significantly improved and superior to the other three schemes. Finally, it can be observed that the performance gap between the blue curve and the purple curve is relatively significant, because the number of inspection unmanned aerial vehicles is more than one; therefore, optimizing the PP is equivalent to optimizing a plurality of variables, and the synchronous optimization of a plurality of variables further improves the performance of the blue curve.

[0115] The detailed descriptions of the embodiments of the present disclosure are provided in conjunction with the accompanying drawings. However, the present disclosure is not limited to the above embodiments. Within the knowledge possessed by those of ordinary skill in the art, various variations can be made without departing from the objectives of the present disclosure.