METHOD AND SYSTEM FOR DETECTING TYPICAL OBJECT OF TRANSMISSION LINE BASED ON UNMANNED AERIAL VEHICLE (UAV) FEDERATED LEARNING
20230238156 · 2023-07-27
CPC classification
B64U2101/30 (PERFORMING OPERATIONS; TRANSPORTING)
H01B7/32 (ELECTRICITY)
Abstract
A method and system for detecting a typical object of a transmission line based on UAV federated learning. The method includes: determining a detection model for a typical object of a transmission line by a YOLOv3 object detection algorithm according to a prior database for the typical object; dividing a UAV network into multiple federated learning units; acquiring pictures, taken by the UAV network, of the typical object and the tags corresponding to each picture to determine a training database; training, based on a Horovod framework and a FATE federated learning framework, each federated learning unit according to the training database and the detection model for the typical object, and determining the trained UAV network according to the trained federated learning units; and determining, by the trained UAV network, the typical object in each picture. Congestion of communication links is avoided, thereby improving detection efficiency.
Claims
1. A method for detecting a typical object of a transmission line based on unmanned aerial vehicle (UAV) federated learning, comprising: determining a detection model for a typical object of a transmission line by using a you only look once, version 3 (YOLOv3) object detection algorithm according to a prior database for the typical object of the transmission line, wherein the prior database for the typical object of the transmission line comprises a plurality of pictures of the typical object of the transmission line and a tag corresponding to each of the plurality of pictures, and the typical object of the transmission line comprises an insulator, a wire, or a pin; dividing a UAV network into multiple federated learning units, and acquiring the plurality of pictures, taken by the UAV network, of the typical object of the transmission line and the tag corresponding to each of the plurality of pictures to determine a training database; training, based on a Horovod framework and a FATE federated learning framework, each federated learning unit according to the training database and the detection model for the typical object of the transmission line, and determining the trained UAV network according to the trained federated learning unit; and determining, by the trained UAV network, the typical object in each of the plurality of pictures of the typical object of the transmission line.
2. The method for detecting the typical object of the transmission line based on UAV federated learning according to claim 1, wherein training, based on the Horovod framework and the FATE federated learning framework, each federated learning unit according to the training database and the detection model for the typical object of the transmission line and determining the trained UAV network according to the trained federated learning unit specifically comprises: distributing, based on the Horovod framework, parallel computing power of each federated learning unit according to the training database and the detection model for the typical object of the transmission line to determine a weight of each federated learning unit; aggregating, based on the FATE federated learning framework, the weights of all the federated learning units; and transferring the aggregated weights to each federated learning unit, and returning to the step of distributing, based on the Horovod framework, parallel computing power of each federated learning unit according to the training database and the detection model for the typical object of the transmission line to determine a weight of each federated learning unit until an error function Loss converges; and determining the trained UAV network.
3. The method for detecting the typical object of the transmission line based on UAV federated learning according to claim 2, wherein the error function Loss is:
Loss = λ_coord·L_1 + λ_coord·L_2 + L_3 + L_4; wherein, L_1 represents a center coordinate error; L_2 represents a width-height coordinate error; L_3 represents a confidence error; L_4 represents a classification error; and λ_coord represents a joint error coefficient.
4. A system for detecting a typical object of a transmission line based on UAV federated learning, comprising: a detection model determining module for a typical object of a transmission line, configured to determine the detection model for the typical object of the transmission line by using a you only look once, version 3 (YOLOv3) object detection algorithm according to a prior database for the typical object of the transmission line, wherein the prior database for the typical object of the transmission line comprises a plurality of pictures of the typical object of the transmission line and a tag corresponding to each of the plurality of pictures, and the typical object of the transmission line comprises an insulator, a wire, or a pin; a federated learning unit and training database determining module, configured to divide a UAV network into multiple federated learning units and determine the training database by acquiring the plurality of pictures, taken by the UAV network, of the typical object of the transmission line and the tag corresponding to each of the plurality of pictures; a trained UAV network determining module, configured to train, based on a Horovod framework and a FATE federated learning framework, each federated learning unit according to the training database and the detection model for the typical object of the transmission line, and determine the trained UAV network according to the trained federated learning units; and a detecting module, configured to determine, by the trained UAV network, the typical object in each of the plurality of pictures of the typical object of the transmission line.
5. The system for detecting the typical object of the transmission line based on UAV federated learning according to claim 4, wherein the trained UAV network determining module specifically comprises: a weight training unit for the federated learning unit, configured to distribute, based on the Horovod framework, parallel computing power of each federated learning unit according to the training database and the detection model for the typical object of the transmission line to determine a weight of each federated learning unit; a weight aggregating unit for the federated learning unit, configured to aggregate, based on the FATE federated learning framework, the weights of all the federated learning units; and an iterating unit, configured to transfer the aggregated weights to each federated learning unit and return to the weight training unit of the federated learning unit.
6. The system for detecting the typical object of the transmission line based on UAV federated learning according to claim 5, wherein an error function Loss is:
Loss = λ_coord·L_1 + λ_coord·L_2 + L_3 + L_4; wherein, L_1 represents a center coordinate error; L_2 represents a width-height coordinate error; L_3 represents a confidence error; L_4 represents a classification error; and λ_coord represents a joint error coefficient.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] To describe the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the accompanying drawings required in the embodiments are briefly introduced below. Obviously, the accompanying drawings described below are only some embodiments of the present disclosure. A person of ordinary skill in the art may further obtain other accompanying drawings based on these accompanying drawings without creative effort.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0037] The technical solutions of the embodiments of the present disclosure are clearly and completely described below with reference to the accompanying drawings. Apparently, the described embodiments are merely a part rather than all of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative efforts shall fall within the protection scope of the present disclosure.
[0038] The objective of the present disclosure is to provide a method and system for detecting a typical object of a transmission line based on UAV federated learning, which can avoid congestion of communication links and improve detection efficiency.
[0039] To make the above-mentioned objective, features, and advantages of the present disclosure clearer and more comprehensible, the present disclosure will be further described in detail below in conjunction with the accompanying drawings and specific embodiments.
[0040] The method for detecting a typical object of a transmission line based on UAV federated learning includes the following steps.
[0041] In S101, a detection model for a typical object of a transmission line is determined by using a YOLOv3 object detection algorithm according to a prior database I_0 for the typical object of the transmission line; that is, a weight P_0 of the model is determined. The prior database for the typical object of the transmission line includes a plurality of pictures of the typical object of the transmission line and tags corresponding to the plurality of pictures, and the typical object of the transmission line includes an insulator, a wire, or a pin.
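For illustration only, the following is a minimal sketch of S101 in Python; the torch.hub entry point ('ultralytics/yolov3'), the file paths, and the class list are assumptions rather than part of the disclosure:

```python
# Minimal sketch of S101: obtain an initial YOLOv3 detection model and its
# weights P_0 from a prior database I_0 of labeled transmission-line pictures.
# The torch.hub entry point and all paths/settings here are assumptions.
import torch

# Load a YOLOv3 architecture with pretrained backbone weights.
model = torch.hub.load('ultralytics/yolov3', 'yolov3', pretrained=True)

# Classes of the typical objects of the transmission line.
CLASSES = ['insulator', 'wire', 'pin']

# Fine-tuning on the prior database I_0 (pictures + tags) would follow the
# usual detection-training loop; P_0 is then the resulting state dict.
P0 = model.state_dict()
torch.save(P0, 'prior_weights_P0.pt')  # hypothetical output path
```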
[0042] In S102, a UAV network is divided into multiple federated learning units, and the plurality of pictures, taken by the UAV network, of the typical object of the transmission line and the tags corresponding to the plurality of pictures are acquired to determine a training database.
[0043] The UAV network containing N UAV nodes is divided into l federated learning units, each of which consists of a_i UAVs.
[0044] The parameters a_i, N, and l satisfy the following formula:
$$N=\sum_{i=1}^{l} a_i$$
[0045] where a_i represents the number of UAV nodes in the i-th federated learning unit.
[0046] The number M of the pictures taken by the UAVs satisfies the following formula:
$$L=\sum_{j=1}^{M} b_j$$
[0047] where L represents the total number of training tags marked in the M pictures; b_j represents the number of training tags in the j-th picture; and Tag_jk represents the k-th training tag in the j-th picture.
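The bookkeeping in these formulas can be illustrated with a short sketch; the variable names mirror the text (N, l, a_i, M, L, b_j), while the round-robin partitioning policy and the sample data are assumptions:

```python
# Sketch of S102: divide N UAV nodes into l federated learning units and
# count the training tags; a_i, N, l, M, L, b_j mirror the text. The
# round-robin partitioning policy is an assumption for illustration.
def partition_uavs(uav_ids, l):
    """Split the UAV id list into l units; unit i has a_i = len(units[i])."""
    units = [uav_ids[i::l] for i in range(l)]
    assert sum(len(u) for u in units) == len(uav_ids)  # N = sum_i a_i
    return units

def count_tags(pictures):
    """pictures[j] is the list of tags Tag_jk marked in the j-th picture."""
    b = [len(tags) for tags in pictures]   # b_j
    L = sum(b)                             # L = sum_j b_j
    M = len(pictures)
    return M, L, b

units = partition_uavs(list(range(12)), l=3)   # N=12 UAVs, l=3 units
M, L, b = count_tags([['insulator'], ['wire', 'pin'], []])
print(len(units[0]), M, L)   # a_1 = 4, M = 3, L = 3
```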
[0048] In S103, each federated learning unit is trained based on a Horovod framework and a FATE federated learning framework according to the training database and the detection model for the typical object of the transmission line, and the trained UAV network is determined according to the trained federated learning unit.
[0049] S103 specifically includes the following steps:
[0050] Parallel computing power of each federated learning unit is distributed based on the Horovod framework according to the training database and the detection model for the typical object of the transmission line to determine a weight of each federated learning unit;
[0051] The weights of all the federated learning units are aggregated based on the FATE federated learning framework; and
[0052] The aggregated weights are transferred to each federated learning unit, and the process returns to the step in which parallel computing power of each federated learning unit is distributed based on the Horovod framework according to the training database and the detection model for the typical object of the transmission line to determine a weight of each federated learning unit, until an error function Loss converges; the trained UAV network is thereby determined.
[0053] The error function Loss is:
Loss = λ_coord·L_1 + λ_coord·L_2 + L_3 + L_4;
[0054] where L_1 represents a center coordinate error; L_2 represents a width-height coordinate error; L_3 represents a confidence error; L_4 represents a classification error; and λ_coord represents a joint error coefficient.
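A minimal sketch of this training loop inside one federated learning unit is given below, using the Horovod PyTorch API; the dataset object, the optimizer settings, and the stubbed-out FATE aggregation are assumptions:

```python
# Sketch of S103 inside one federated learning unit: Horovod distributes the
# parallel training, and the unit weight is the trained state dict. The FATE
# aggregation step is stubbed out here; loss_fn is the Loss defined above.
import horovod.torch as hvd
import torch

hvd.init()  # one Horovod process per UAV node in the unit

def train_unit(model, dataset, loss_fn, epochs=1):
    sampler = torch.utils.data.distributed.DistributedSampler(
        dataset, num_replicas=hvd.size(), rank=hvd.rank())
    loader = torch.utils.data.DataLoader(dataset, sampler=sampler, batch_size=8)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    opt = hvd.DistributedOptimizer(opt, named_parameters=model.named_parameters())
    # Start every UAV from the same (initial or FATE-aggregated) weights.
    hvd.broadcast_parameters(model.state_dict(), root_rank=0)
    for _ in range(epochs):
        for images, targets in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), targets)
            loss.backward()   # gradients are ring-allreduced by Horovod
            opt.step()
    return model.state_dict()  # this unit's weight, sent to FATE aggregation
```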
[0055] As shown in the accompanying drawing, gradient synchronization among the H UAVs in the Horovod framework proceeds through the following steps (a sketch in code follows step S5):
[0056] S1: a gradient of each UAV is calculated according to training data of the UAV;
[0057] S2: a gradient vector of each UAV is sliced into H segments which are approximately equal in length (where the number of the segments H is the same as the number of the UAVs);
[0058] S3: H−1 rounds of gradient transmission and gradient addition are performed, such that on each UAV one segment of the gradient vector holds the sum of the corresponding segments of all the UAVs;
[0059] S4: the completed segment sums calculated in S3 are broadcast to the other UAVs through another H−1 rounds of gradient transmission; and
[0060] S5: the segmented gradients are merged on each UAV, and a model on the UAV is updated according to the gradients.
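The following self-contained sketch simulates steps S1 to S5 with NumPy arrays standing in for network transmission between UAVs; the segment-passing schedule shown is one standard realization of ring all-reduce, assumed here for illustration:

```python
# Simulation of S1-S5: ring all-reduce over H UAVs. Each "UAV" holds one
# gradient vector, sliced into H nearly equal segments (S2).
import numpy as np

def ring_allreduce(grads):
    """Every UAV ends up holding the sum of all gradient vectors."""
    H = len(grads)                                   # UAVs = segments
    segs = [np.array_split(g.astype(float), H) for g in grads]
    # S3: scatter-reduce, H-1 rounds of send + add; afterwards UAV i holds
    # the complete sum of segment (i+1) mod H.
    for r in range(H - 1):
        sent = [segs[i][(i - r) % H].copy() for i in range(H)]
        for i in range(H):
            segs[(i + 1) % H][(i - r) % H] += sent[i]
    # S4: allgather, H-1 rounds broadcasting the completed segment sums.
    for r in range(H - 1):
        sent = [segs[i][(i + 1 - r) % H].copy() for i in range(H)]
        for i in range(H):
            segs[(i + 1) % H][(i + 1 - r) % H] = sent[i]
    # S5: merge the segments back into a full vector on every UAV.
    return [np.concatenate(s) for s in segs]

out = ring_allreduce([np.ones(10), 2 * np.ones(10), 3 * np.ones(10)])
print(out[0][:3])  # [6. 6. 6.] on every UAV
```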
[0061] Assume that the total number of parameters of the training model is X, that the model has one Parameter Server unit, and that the model has H Worker units.
[0062] Based on this, the number of times E of information transmission required for each UAV in the Horovod framework is:
E = 2(N−1);
that is, the H−1 rounds of transmission in S3 plus the H−1 rounds in S4, with H = N.
[0063] Each round transmits one segment of about X/N parameters, so the data volume T transmitted by each UAV for completing batch interaction data transmission every time is:
$$T = 2(N-1)\cdot\frac{X}{N}$$
[0064] As N increases gradually, T approaches T′, and T′ satisfies the following formula:
$$T' = 2X$$
that is, the communication volume per UAV is bounded by 2X regardless of the number of UAVs.
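Under the reconstruction above (each of the 2(N−1) rounds moving about X/N parameters), the two quantities can be checked numerically; the parameter count X is an arbitrary example value:

```python
# Numeric check of the communication analysis: E transmissions per UAV and
# per-UAV traffic volume T (in parameters), which approaches T' = 2X.
X = 1_000_000  # total number of model parameters (example value)
for N in (2, 4, 16, 128, 1024):
    E = 2 * (N - 1)              # rounds of transmission per UAV
    T = 2 * (N - 1) * X / N      # parameters sent per UAV
    print(N, E, round(T))        # T -> 2X = 2,000,000 as N grows
```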
[0065] The modules of the FATE federated learning framework have the following functions:
[0066] FATE Flow includes a Client part and a Server part: the Client part is used by a user to submit a federated learning task to a FATE cluster, and the FATE Flow Server serves as the entry point through which the FATE cluster provides external services.
[0067] MySQL is used to store some metadata related to the federated learning task, such as creation time and a state.
[0068] EGG/ROLL provides distributed computing and storage capabilities for a training task.
[0069] A data set or a file can be sliced and distributed across different Eggs; Meta Service is responsible for managing and locating the slicing information of the file.
[0070] Federation provides the function of transmitting and receiving data for the training task. Due to the nature of federated learning, the participants exchange data several times during training.
[0071] Proxy provides a reverse proxy service and serves as the only access point of the FATE cluster to the outside (toward the other training participants).
[0072] FATE Board provides visualization of the training task for the user.
[0073] FATE Serving provides an online inference service. The user can push the trained model to this service for online inference.
[0074] A process in which a UAV federated learning unit A and a UAV federated learning unit B execute federated learning under the coordination of a collaborator C is taken as an example below (a sketch in code follows step S5):
[0075] In Step S1, shared picture data of the UAV federated learning units A and B is confirmed based on an encrypted user sample alignment technology on the premise that the UAV federated learning units A and B do not disclose their respective data, so as to combine the characteristics of these picture data for modeling;
[0076] In step S2, the collaborator C distributes a public key to the UAV federated learning units A and B to encrypt data exchanged during training;
[0077] In step S3, the UAV federated learning units A and B respectively calculate an intermediate result of the gradient and send the result to the collaborator C in encrypted form; in addition, the UAV federated learning unit B calculates a value of a loss function according to its tag data;
[0078] In step S4, the collaborator C decrypts the results sent from the UAV federated learning units A and B, calculates a total gradient value by aggregation, and sends the total gradient value back to the UAV federated learning units A and B in encrypted form; and
[0079] In step S5, the UAV federated learning units A and B update parameters of their respective models according to the gradient, and start a new round of training.
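As an illustration of steps S2 to S5, the following sketch uses the python-paillier (phe) library as the additively homomorphic encryption layer; the gradient values, the two-party additive aggregation, and the plain return of the decrypted total (rather than a re-encrypted one) are simplifying assumptions:

```python
# Sketch of steps S2-S5 between units A, B and collaborator C, using the
# python-paillier library (phe) for additively homomorphic encryption.
from phe import paillier

# S2: collaborator C distributes a public key; C keeps the private key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# S3: A and B compute intermediate gradient results and encrypt them.
grad_A = [0.12, -0.03]                       # illustrative values
grad_B = [0.05, 0.07]
enc_A = [public_key.encrypt(g) for g in grad_A]
enc_B = [public_key.encrypt(g) for g in grad_B]

# S4: C aggregates under encryption (ciphertexts add homomorphically) and
# decrypts the total gradient (returned in plain here; encrypted in the text).
total = [private_key.decrypt(a + b) for a, b in zip(enc_A, enc_B)]

# S5: A and B update their local model parameters with the total gradient.
lr = 0.1
params = [1.0, 1.0]
params = [p - lr * g for p, g in zip(params, total)]
print(total, params)
```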
[0080] The error function Loss of the YOLOv3 detection model is:
Loss = λ_coord·L_1 + λ_coord·L_2 + L_3 + L_4
[0081] where L_1 represents a center coordinate error; L_2 represents a width-height coordinate error; L_3 represents a confidence error; and L_4 represents a classification error.
[0082] The center coordinate error L_1 satisfies the following formula:
$$L_1=\sum_{p=0}^{S^2}\sum_{q=0}^{B}F_{pq}^{obj}\left[\left(x_p^q-\hat{x}_p^q\right)^2+\left(y_p^q-\hat{y}_p^q\right)^2\right]$$
[0083] where S^2 represents the number of grid cells, namely S×S; B represents the number of candidate frames per grid cell; F_{pq}^{obj} indicates that the q-th candidate frame of the p-th grid cell is responsible for an object; x_p^q and x̂_p^q respectively represent the predicted value and the actual value of the abscissa of the center point of the q-th candidate frame of the p-th grid cell; and y_p^q and ŷ_p^q respectively represent the predicted value and the actual value of the ordinate of the center point of the q-th candidate frame of the p-th grid cell.
[0084] The width-height coordinate error L_2 satisfies the following formula:
$$L_2=\sum_{p=0}^{S^2}\sum_{q=0}^{B}F_{pq}^{obj}\left[\left(\sqrt{w_p^q}-\sqrt{\hat{w}_p^q}\right)^2+\left(\sqrt{h_p^q}-\sqrt{\hat{h}_p^q}\right)^2\right]$$
[0085] where w_p^q and ŵ_p^q respectively represent the predicted value and the actual value of the width of the q-th candidate frame of the p-th grid cell; and h_p^q and ĥ_p^q respectively represent the predicted value and the actual value of the height of the q-th candidate frame of the p-th grid cell.
[0086] The confidence error L_3 satisfies the following formula:
$$L_3=\sum_{p=0}^{S^2}\sum_{q=0}^{B}F_{pq}^{obj}\left(C_p^q-\hat{C}_p^q\right)^2+\sum_{p=0}^{S^2}\sum_{q=0}^{B}F_{pq}^{noobj}\left(C_p^q-\hat{C}_p^q\right)^2$$
[0087] where F_{pq}^{noobj} indicates that the q-th candidate frame of the p-th grid cell is not responsible for an object; and C_p^q and Ĉ_p^q respectively represent the predicted value and the actual value of the confidence of the q-th candidate frame of the p-th grid cell.
[0088] The classification error L_4 satisfies the following formula:
$$L_4=\sum_{p=0}^{S^2}\sum_{q=0}^{B}F_{pq}^{obj}\left(P_p^q-\hat{P}_p^q\right)^2$$
[0089] where P_p^q and P̂_p^q respectively represent the predicted value and the actual value of the classification probability of the q-th candidate frame of the p-th grid cell.
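The four reconstructed terms can be combined into a single sketch; the dense (S·S, B, 6) tensor layout, the single classification probability per frame, and the unweighted noobj term are assumptions made for illustration:

```python
# Sketch of the reconstructed Loss: sum-of-squares terms over grid cells p
# and candidate frames q, with obj_mask playing the role of F_pq^obj.
import torch

def yolo_loss(pred, target, obj_mask, lam_coord=5.0):
    """pred, target: (S*S, B, 6) tensors holding (x, y, w, h, C, P);
    obj_mask: (S*S, B) tensor, 1 where F_pq^obj holds, else 0."""
    noobj_mask = 1.0 - obj_mask
    d = pred - target
    # L1: center coordinate error.
    L1 = (obj_mask * (d[..., 0] ** 2 + d[..., 1] ** 2)).sum()
    # L2: width-height error on square roots, as in the formula above.
    sw = pred[..., 2].clamp(min=0).sqrt() - target[..., 2].clamp(min=0).sqrt()
    sh = pred[..., 3].clamp(min=0).sqrt() - target[..., 3].clamp(min=0).sqrt()
    L2 = (obj_mask * (sw ** 2 + sh ** 2)).sum()
    # L3: confidence error over responsible and non-responsible frames
    # (YOLO often down-weights the noobj term; no weight is given in the text).
    L3 = (obj_mask * d[..., 4] ** 2).sum() + (noobj_mask * d[..., 4] ** 2).sum()
    # L4: classification error (a single probability per frame in this sketch).
    L4 = (obj_mask * d[..., 5] ** 2).sum()
    return lam_coord * L1 + lam_coord * L2 + L3 + L4

# Tiny usage example with S=2 (4 grid cells) and B=1 candidate frame.
pred = torch.rand(4, 1, 6)
target = torch.rand(4, 1, 6)
obj_mask = torch.tensor([[1.0], [0.0], [0.0], [1.0]])
print(yolo_loss(pred, target, obj_mask))
```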
[0090] In S104, the typical object in each picture of the typical object of the transmission line is determined by the trained UAV network.
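A minimal sketch of S104 follows; the ultralytics torch.hub entry point and the picture path are assumptions, and the model would first be fine-tuned as in S101 to S103 so that its classes are insulator, wire, and pin:

```python
# Sketch of S104: run the trained detector on a newly taken picture.
import torch

model = torch.hub.load('ultralytics/yolov3', 'yolov3')  # assumed entry point
results = model('uav_picture_0001.jpg')  # hypothetical picture path
results.print()  # boxes, scores, classes (insulator/wire/pin after fine-tuning)
```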
[0091] The corresponding system for detecting a typical object of a transmission line based on UAV federated learning includes:
[0092] A detection model determining module 501 for the typical object of the transmission line, configured to determine the detection model for the typical object of the transmission line by using a YOLOv3 object detection algorithm according to a prior database for the typical object of the transmission line, where the prior database for the typical object of the transmission line includes a plurality of pictures of the typical object of the transmission line and a tag corresponding to each picture, and the typical object of the transmission line includes an insulator, a wire, or a pin;
[0093] A federated learning unit and training database determining module 502 configured to divide a UAV network into multiple federated learning units and determine the training database by acquiring the plurality of pictures, taken by the UAV network, of the typical object of the transmission line and the tag corresponding to each picture;
[0094] A trained UAV network determining module 503 configured to train, based on a Horovod framework and a FATE federated learning framework, each federated learning unit according to the training database and the detection model for the typical object of the transmission line and determine the trained UAV network according to the trained federated learning unit; and
[0095] A detecting module 504 configured to determine, by the trained UAV network, the typical object in each picture of the typical object of the transmission line.
[0096] The trained UAV network determining module 503 specifically includes:
[0097] A weight training unit for the federated learning unit configured to distribute, based on the Horovod framework, parallel computing power of each federated learning unit according to the training database and the detection model for the typical object of the transmission line to determine a weight of the federated learning unit;
[0098] A weight aggregating unit for the federated learning unit configured to aggregate, based on the FATE federated learning framework, the weights of all the federated learning units; and
[0099] An iterating unit configured to transfer the aggregated weights to each federated learning unit and return to the weight training unit for the federated learning unit.
[0100] An error function Loss is:
Loss = λ_coord·L_1 + λ_coord·L_2 + L_3 + L_4;
[0101] where L_1 represents a center coordinate error; L_2 represents a width-height coordinate error; L_3 represents a confidence error; L_4 represents a classification error; and λ_coord represents a joint error coefficient.
[0102] The embodiments of the present specification are described in a progressive manner, each embodiment focuses on the difference from other embodiments, and the same and similar parts between the embodiments may refer to each other. Since the system disclosed in an embodiment corresponds to the method disclosed in another embodiment, the description is relatively simple, and reference can be made to the method description.
[0103] Specific examples are used herein to explain the principles and implementations of the present disclosure. The foregoing description of the embodiments is merely intended to help understand the method of the present disclosure and its core ideas; besides, various modifications may be made by a person of ordinary skill in the art to specific implementations and the scope of application in accordance with the ideas of the present disclosure. In conclusion, the content of the present specification shall not be construed as limitations to the present disclosure.