Patent classifications
G06N3/098
SYSTEMS AND METHODS FOR LOCALITY PRESERVING FEDERATED LEARNING
Systems and methods for locality preserving federated learning are disclosed. In one embodiment, a method for locality preserving federated learning may include: (1) receiving, at an aggregator computer program and from each of a plurality of clients, weights for each client's local machine learning model; (2) generating, by the aggregator computer program, an averaged machine learning model based on the received weights; (3) sharing, by the aggregator computer program, the averaged machine learning model with the plurality of clients; and (4) applying, by each client, a scaling factor to the averaged machine learning model to update its local machine learning model.
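The four-step flow in the abstract above (collect client weights, average, share, scale locally) can be sketched as follows. This is a minimal illustration, not the patented method: the function names and the uniform per-layer scaling factor are assumptions for the example.

```python
import numpy as np

def aggregate(client_weights):
    """Step 2: average the weight arrays reported by each client, layer by layer."""
    return [np.mean(layers, axis=0) for layers in zip(*client_weights)]

def apply_scaling(averaged_model, scale):
    """Step 4: a client blends the shared average with its own scaling factor."""
    return [scale * layer for layer in averaged_model]

# Two toy clients, each with a one-layer "model" (step 1: weights received).
clients = [[np.array([1.0, 3.0])], [np.array([3.0, 5.0])]]
avg = aggregate(clients)           # step 3 would share this with all clients
local = apply_scaling(avg, 0.9)    # a client's locality-preserving update
```

A scaling factor below 1.0 here keeps the client's update closer to zero than the raw average; in practice each client would choose its factor to preserve local behavior.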
UNCERTAINTY-AWARE FEDERATED LEARNING METHODS AND SYSTEMS IN MOBILE EDGE COMPUTING NETWORK
Uncertainty-aware federated learning methods and systems in a mobile edge computing network can include: defining an average volume of a training parameter of each user equipment under the uncertainty of the mobile edge computing network based on a federated learning framework; determining an average model size factor and the minimum and maximum numbers of aggregators for each federated learning task request; determining the number of aggregators; constructing an auxiliary graph and determining a location decision according to the auxiliary graph; determining a total cost for each federated learning task request according to the location decision; adjusting the number of aggregators according to the total cost, with the resource capacity of the mobile edge computing network as a constraint, to obtain a decision comprising aggregator placement, user equipment assignment, and the optimal number of aggregators for each federated learning task request; and optimizing the federated learning framework.
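The final adjustment step described above, choosing an aggregator count that minimizes total cost subject to resource capacity, can be sketched in simplified form. The auxiliary-graph placement itself is omitted; the cost and capacity functions below are illustrative stand-ins, not the patent's formulation.

```python
def choose_aggregators(n_min, n_max, cost_fn, capacity_fn):
    """Scan candidate aggregator counts, keep those within resource
    capacity, and return the count minimizing total cost."""
    feasible = [n for n in range(n_min, n_max + 1) if capacity_fn(n)]
    return min(feasible, key=cost_fn) if feasible else None

# Toy cost curve and capacity limit (purely illustrative).
cost = lambda n: abs(n - 3) + 0.1 * n   # cheapest near 3 aggregators
fits = lambda n: n <= 5                 # network can host at most 5
best = choose_aggregators(1, 6, cost, fits)  # → 3
```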
A METHOD FOR A DISTRIBUTED LEARNING
A computer implemented method for training a learning model by a distributed learning system that includes computing nodes. The computing nodes respectively implement the learning model and derive gradient information for updating the learning model based on training data. The method involves: encoding, by the computing nodes, the gradient information by exploiting a correlation across the gradient information from the respective computing nodes; exchanging, by the computing nodes, the encoded gradient information within the distributed learning system; determining aggregate gradient information based on the encoded gradient information from the computing nodes; and updating the learning model of the computing nodes with the aggregate gradient information, thereby training the learning model.
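The encode–exchange–aggregate–update loop described above can be sketched as follows. Coarse quantization stands in for the correlation-exploiting encoding, which the abstract does not specify; all function names and the step size are assumptions for the example.

```python
# Hypothetical sketch: each node encodes its gradient, codes are exchanged,
# and every node applies the same decoded aggregate update.
STEP = 0.5  # quantization step (illustrative)

def encode(grad):
    """Compress a gradient to integer codes (stand-in for real encoding)."""
    return [round(g / STEP) for g in grad]

def decode(code):
    return [c * STEP for c in code]

def aggregate(encoded_grads):
    """Average the decoded gradients across all nodes."""
    decoded = [decode(c) for c in encoded_grads]
    n = len(decoded)
    return [sum(col) / n for col in zip(*decoded)]

node_grads = [[0.9, -0.4], [1.1, -0.6]]          # per-node gradients
codes = [encode(g) for g in node_grads]           # encode locally
agg = aggregate(codes)                            # exchange + aggregate
model = [0.0, 0.0]
lr = 0.1
model = [w - lr * g for w, g in zip(model, agg)]  # shared update step
```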
ATTENTION NEURAL NETWORKS WITH CONDITIONAL COMPUTATION
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on a network input to generate a network output. In one aspect, one of the systems includes an attention neural network configured to perform the machine learning task, the attention neural network including one or more attention layers, each attention layer comprising an attention sub-layer and a feed-forward sub-layer. Some or all of the attention layers have a feed-forward sub-layer that applies conditional computation to the inputs to the sub-layer.
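One common form of conditional computation in a feed-forward sub-layer is top-1 expert routing: each token is sent to only one of several feed-forward experts, so only a fraction of the sub-layer's parameters are exercised per input. The sketch below shows that pattern under assumed shapes and names; the abstract does not commit to this particular gating scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def expert(x, w):
    """A tiny feed-forward sub-layer: one ReLU projection."""
    return np.maximum(x @ w, 0.0)

def conditional_ffn(x, gate_w, experts):
    """Route each token to its highest-scoring expert (top-1 gating),
    so only that expert's computation runs for the token."""
    scores = x @ gate_w                  # (tokens, n_experts)
    choice = scores.argmax(axis=-1)      # one expert index per token
    out = np.empty_like(x)
    for e, w in enumerate(experts):
        mask = choice == e
        if mask.any():                   # skip experts with no tokens
            out[mask] = expert(x[mask], w)
    return out

tokens = rng.standard_normal((4, 8))
gate_w = rng.standard_normal((8, 2))
experts = [rng.standard_normal((8, 8)) for _ in range(2)]
y = conditional_ffn(tokens, gate_w, experts)
```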
ADAPTIVE OFFLOADING OF FEDERATED LEARNING
Adaptive offloading of federated learning is performed by: partitioning, for each of a plurality of computational devices, a plurality of layers of a neural network model into a device partition and a server partition based on a computational capability attribute and a network bandwidth attribute of the computational device; training the neural network model cooperatively with each computational device through the network; and aggregating the updated weight values of the neural network model instances received from the plurality of computational devices to generate an updated neural network model.
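The per-device partitioning step can be sketched as choosing the layer index that minimizes estimated on-device compute time, server compute time, and the cost of shipping the split-point activation over the network. The cost model below is an illustrative assumption, not the patent's criterion.

```python
def partition_point(layer_costs, act_sizes, device_speed, server_speed, bandwidth):
    """Layers [:k] run on the device, [k:] on the server; the activation
    at the split boundary (act_sizes[k]) must cross the network.
    Returns the split index k with the lowest estimated total cost."""
    best_k, best_cost = 0, float("inf")
    for k in range(len(layer_costs) + 1):
        cost = (sum(layer_costs[:k]) / device_speed     # on-device compute
                + sum(layer_costs[k:]) / server_speed   # server compute
                + act_sizes[k] / bandwidth)             # activation transfer
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# A slow device and a fast server, but a large raw input (act_sizes[0]):
# running the first layer on-device shrinks what must be transmitted.
k = partition_point([4, 4, 4], [10, 2, 2, 1], 1.0, 10.0, 1.0)  # → 1
```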
COLLABORATIVE INDUSTRIAL INTEGRATED DEVELOPMENT AND EXECUTION ENVIRONMENT
A method for providing access to a development and execution (D&E) platform for the development of industrial software, including: providing, while the D&E platform is being accessed, a GUI with a development tool having process flow and code editors and an execution tool; and arranging two or more programming blocks of a process flow responsive to input from an author when the process flow editor is accessed. The two or more programming blocks, when arranged, are configured to be executed. The method further includes editing source code of the two or more programming blocks responsive to input from the author when the code editor is accessed, compiling at least one of the two or more programming blocks responsive to input from the author when the execution tool is accessed, and executing the compiled at least one programming block responsive to input from the author when the execution tool is accessed.
OPTIMIZING DEPLOYMENT OF MACHINE LEARNING WORKLOADS
A system for optimizing deployment of a machine learning workload is provided. A computer device receives information pertaining to a machine learning workload to be processed for a client device. The computer device determines a machine learning model for the workload and a processing location for the workload based, at least in part, on the information. The computer device generates a request to process the workload at the determined processing location utilizing the determined machine learning model.
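The placement decision described above, selecting a model and a processing location from workload information, can be sketched as a simple policy function. The attribute names, thresholds, model names, and locations below are all hypothetical stand-ins for whatever criteria the deployed system would use.

```python
def plan(workload):
    """Map workload attributes to a (model, location) choice (illustrative)."""
    if workload["latency_budget_ms"] < 50 and workload["input_mb"] < 1:
        # tight latency, small input: run a compact model at the edge
        return {"model": "small-onnx", "location": "edge"}
    if workload.get("sensitive_data"):
        # privacy-constrained workloads stay on premises
        return {"model": "medium", "location": "on-prem"}
    # otherwise favor accuracy with a large model in the cloud
    return {"model": "large", "location": "cloud"}

request = plan({"latency_budget_ms": 20, "input_mb": 0.5})
```

The returned dictionary corresponds to the generated processing request: it names the determined model and the determined processing location for the workload.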