Patent classifications
G06N3/098
PRIVACY-PRESERVING FEDERATED MACHINE LEARNING
A method for preserving privacy in a federated machine learning system is provided. In the method, a first computing entity in the federated learning system determines a first labeling matrix by applying a first set of labeling functions to first data points. The first labeling matrix includes a plurality of first labels. The first computing entity obtains a similarity matrix indicating similarity scores between the first data points and second data points associated with a second computing entity. The first computing entity augments the first labeling matrix by transferring labels from a second labeling matrix into the first labeling matrix using the similarity scores between the first data points and the second data points. The first computing entity trains a discriminative machine learning model associated with the first computing entity based on the augmented first labeling matrix.
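The label-transfer step described above can be sketched as follows. This is a minimal illustration, not the patented method: the function name `augment_labeling_matrix`, the nearest-neighbor transfer rule, and the `threshold` parameter are assumptions for the sketch; the abstract only specifies that labels are transferred using the similarity scores.

```python
import numpy as np

def augment_labeling_matrix(L1, L2, S, threshold=0.8):
    """Transfer labels from a second party's labeling matrix into the first.

    L1: (n1, m1) labeling matrix from the first party's labeling functions
        (0 denotes abstain, a hypothetical convention).
    L2: (n2, m2) labeling matrix from the second party.
    S:  (n1, n2) similarity scores between first and second data points.

    For each first data point, the labels of its most similar second data
    point are appended, but only when the similarity clears the threshold.
    """
    nearest = S.argmax(axis=1)               # index of most similar second point
    strong = S.max(axis=1) >= threshold      # keep only confident matches
    transferred = np.where(strong[:, None], L2[nearest], 0)
    return np.hstack([L1, transferred])      # augmented first labeling matrix
```

The augmented matrix can then be fed to whatever label model or discriminative trainer the first party uses; only the similarity scores, not the raw second-party data, cross the privacy boundary.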
GENERATING A CONFIGURATION PORTFOLIO INCLUDING A SET OF MODEL CONFIGURATIONS
This disclosure relates to implementing a configuration portfolio having a compact set of model configurations that are predicted to perform well with respect to a wide variety of input tasks. Systems described herein involve evaluating machine learning models with respect to a set of training tasks to generate a regret matrix based on accuracy of the machine learning models in connection with predicting outputs for the training tasks. The systems described herein can identify a subset of model configurations from a plurality of model configurations based on the subset of model configurations having lower associated metrics of regret with respect to the training tasks. This ensures that each model configuration within the configuration portfolio will perform reasonably well for a given input task and provides a mechanism for selecting an output model configuration using significantly fewer processing resources than conventional model selection systems.
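The regret-matrix construction and subset selection described above might look like the following sketch. The greedy min-max-regret strategy and the function names are assumptions for illustration; the abstract states only that configurations with lower regret metrics are identified.

```python
import numpy as np

def build_regret_matrix(accuracy):
    """accuracy[i, j]: accuracy of model configuration i on training task j.
    Regret is the gap to the best configuration on each task."""
    return accuracy.max(axis=0, keepdims=True) - accuracy

def select_portfolio(accuracy, k):
    """Greedily pick k configurations minimising the worst-case regret,
    over tasks, of the best configuration in the portfolio."""
    regret = build_regret_matrix(accuracy)
    chosen = []
    for _ in range(k):
        best, best_score = None, None
        for i in range(regret.shape[0]):
            if i in chosen:
                continue
            # per-task regret if configuration i joins the portfolio
            cand = np.minimum.reduce([regret[j] for j in chosen + [i]])
            score = cand.max()
            if best_score is None or score < best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen
```

At inference time, only the small portfolio needs to be evaluated against a new input task, which is the source of the claimed savings over searching the full configuration space.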
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY STORAGE MEDIUM
An information processing apparatus includes a controller configured to: acquire mobile body information including information that indicates the respective positions of a plurality of mobile bodies that are usable for machine learning; and select two or more mobile bodies to be used for machine learning based on the mobile body information.
UPPER ANALOG MEDIA ACCESS CONTROL (MAC-A) LAYER FUNCTIONS FOR ANALOG TRANSMISSION PROTOCOL STACK
A method of wireless communication by a user equipment (UE) includes generating, by an upper analog media access control (MAC-A) layer of a protocol stack, a data packet with a header and a data field. The header indicates a neural network identifier (ID) and a request ID. The data field includes gradient data for a federated learning iteration. The method also includes transferring the data packet to lower layers of the protocol stack for transmission to a network device across a wireless network.
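A packet of the kind described, with a header carrying a neural network ID and request ID followed by a data field of gradient values, could be serialized as below. The exact field widths and byte order are assumptions for the sketch; the patent does not specify a wire format.

```python
import struct

# Hypothetical header layout: 4-byte neural network ID, 4-byte request ID,
# 4-byte gradient count; the data field holds float32 gradient values for
# one federated learning iteration. Network (big-endian) byte order assumed.
HEADER = struct.Struct("!III")

def build_packet(nn_id, req_id, gradients):
    """Assemble the MAC-A data packet handed down the protocol stack."""
    header = HEADER.pack(nn_id, req_id, len(gradients))
    data = struct.pack(f"!{len(gradients)}f", *gradients)
    return header + data

def parse_packet(packet):
    """Recover the header fields and gradient data on the receiving side."""
    nn_id, req_id, n = HEADER.unpack_from(packet)
    gradients = struct.unpack_from(f"!{n}f", packet, HEADER.size)
    return nn_id, req_id, list(gradients)
```

In the described stack, `build_packet` would correspond to the upper MAC-A layer's work before the packet is transferred to lower layers for analog transmission.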
MODEL COORDINATION METHOD AND APPARATUS
A model coordination method for a first device is provided. The first device stores at least one model segment, each segment being configured to realize a part of the functions of a preset model. The method includes: determining a first model segment from the at least one model segment stored in the first device, wherein, when the first model segment is executed and a second model segment is executed by a second device, some or all of the functions of the preset model are realized; the second model segment is one of at least one model segment stored in the second device, which is likewise configured to realize a part of the functions of the preset model. A model coordination apparatus is also provided.
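The coordination described above amounts to chaining segments across devices. The following sketch treats each segment as a callable and is purely illustrative; how segments are represented, discovered, and transported between devices is not specified by the abstract.

```python
def run_preset_model(x, first_segments, second_segments):
    """Hypothetical coordination: the first device executes its stored
    segment(s), hands the intermediate result to the second device,
    and the second device's segment(s) complete the preset model's
    functions."""
    for segment in first_segments:    # executed on the first device
        x = segment(x)
    for segment in second_segments:   # executed on the second device
        x = segment(x)
    return x

# Usage: two toy segments that together realize f(x) = (x + 1) * 2.
result = run_preset_model(3, [lambda x: x + 1], [lambda x: x * 2])
```

The design point is that neither device needs the full preset model; each holds only the segment(s) realizing its part of the functions.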
MACHINE LEARNING SYSTEM AND METHOD, INTEGRATION SERVER, INFORMATION PROCESSING APPARATUS, PROGRAM, AND INFERENCE MODEL CREATION METHOD
Provided are a machine learning system and method, an integration server, an information processing apparatus, a program, and an inference model creation method capable of suppressing a variation in learning data in federated learning and suppressing a variation in the inference accuracy of a model. The integration server receives an input that designates a data search condition and transmits the designated search condition to a plurality of client terminals. Each client terminal performs searching within the medical institution system to which it belongs and transmits to the integration server a totalization result of the number of pieces of data that match the search condition. The integration server receives an input that designates the required number of pieces of learning data and distributes, to each client terminal, the number of pieces of learning data to be used for learning, based on the designated required number and on the received totalization results. The client terminal executes machine learning of a local model to be trained using the data in the medical institution system according to the designated type and number of pieces of learning data and transmits the learning result to the integration server. The integration server integrates the received learning results to update a master model.
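The distribution step, in which the integration server splits the required number of learning-data items across clients based on their reported counts, could be sketched as below. The proportional-with-cap allocation rule is an assumption for illustration; the abstract does not state the distribution formula.

```python
def distribute_learning_data(required_total, client_counts):
    """Split the required number of learning-data items across client
    terminals in proportion to how many matching items each client
    reported, capped by what each client actually holds."""
    total = sum(client_counts.values())
    alloc = {c: min(n, required_total * n // total)
             for c, n in client_counts.items()}
    # hand out any rounding remainder to clients with spare matching data
    remainder = required_total - sum(alloc.values())
    for c, n in sorted(client_counts.items(), key=lambda kv: -kv[1]):
        if remainder == 0:
            break
        extra = min(remainder, n - alloc[c])
        alloc[c] += extra
        remainder -= extra
    return alloc
```

Capping by each client's reported count is what lets the server suppress variation in learning data: no client is asked for more matching items than its medical institution system actually contains.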
Image sensor having on-chip compute circuit
In one example, an apparatus comprises: a first sensor layer, including an array of pixel cells configured to generate pixel data; and one or more semiconductor layers located beneath the first sensor layer with the one or more semiconductor layers being electrically connected to the first sensor layer via interconnects. The one or more semiconductor layers comprises on-chip compute circuits configured to receive the pixel data via the interconnects and process the pixel data, the on-chip compute circuits comprising: a machine learning (ML) model accelerator configured to implement a convolutional neural network (CNN) model to process the pixel data; a first memory to store coefficients of the CNN model and instruction codes; a second memory to store the pixel data of a frame; and a controller configured to execute the codes to control operations of the ML model accelerator, the first memory, and the second memory.
METHOD AND APPARATUS FOR TRAINING IMAGE PROCESSING MODEL
A method for training an image processing model is provided. After an augmented image is obtained, a soft label of the augmented image is obtained, and the image processing model is trained based on guidance of the soft label, to improve performance of the image processing model. In addition, according to the method, the image processing model is trained based on guidance of a soft label, with a relatively high score, selected from soft labels of the augmented image, to further improve performance of the image processing model.
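The selection of a relatively high-scoring soft label from among the soft labels of an augmented image might look like this sketch. The function name, the top-probability criterion, and the `min_top_score` fallback are assumptions; the abstract says only that a soft label with a relatively high score is selected to guide training.

```python
def select_soft_label(soft_labels, min_top_score=0.5):
    """Hypothetical selection step: among the soft labels predicted for
    an augmented image (e.g. by a teacher model under several
    augmentations), keep the one whose top class probability is highest,
    and only if it is confident enough to guide training."""
    top_scores = [max(p) for p in soft_labels]
    best = top_scores.index(max(top_scores))
    if top_scores[best] < min_top_score:
        return None  # caller falls back to the hard label
    return soft_labels[best]
```

The selected soft label would then serve as the distillation target when training the image processing model on the augmented image.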
METHOD FOR PROCESSING MODEL PARAMETERS, AND APPARATUS
Provided are a method for processing model parameters, and an apparatus. The method comprises: obtaining a model parameter set to be sharded, wherein the model parameter set comprises a multi-dimensional array corresponding to a feature embedding; obtaining attribute information for a storage system used for storing the model parameter set, wherein that storage system is different from the system on which the model corresponding to the parameter set is located when operating; and storing the model parameter set in the storage system according to the attribute information.
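Sharding a feature-embedding array across a separate storage system can be sketched as below. The round-robin row partitioning and the helper names are assumptions for illustration; the actual sharding scheme would follow the storage system's attribute information.

```python
import numpy as np

def shard_embeddings(embedding_table, num_shards):
    """Hypothetical sharding: split a (vocab_size, dim) feature-embedding
    array row-wise so each shard can live on a separate storage node,
    distinct from the system that runs the model."""
    return {s: embedding_table[s::num_shards] for s in range(num_shards)}

def lookup(shards, num_shards, row):
    """Fetch one embedding row from the shard that owns it."""
    return shards[row % num_shards][row // num_shards]
```

Keeping the embedding table off the serving system in this way lets the model process operate with a memory footprint independent of the embedding vocabulary size.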
METHOD AND APPARATUS FOR VERTICAL FEDERATED LEARNING
This disclosure relates to a method for vertical federated learning. In multiple participation nodes deployed in a multi-way tree topology, an upper-layer participation node corresponds to k lower-layer participation nodes. After the upper-layer participation node and the k lower-layer participation nodes exchange public keys with each other, the upper-layer participation node performs secure two-party joint computation with the lower-layer participation nodes with a first public key and second public keys as encryption parameters to obtain k two-party joint outputs of a federated model. Further, the upper-layer participation node aggregates the k two-party joint outputs to obtain a first joint model output corresponding to the federated model. As such, a multi-way tree topology deployment-based vertical federated learning architecture is provided, improving the equality of each participation node in a vertical federated learning process.
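The aggregation step at an upper-layer node can be sketched in plaintext form as below. This is a simplification: in the described scheme the k two-party joint outputs are produced under secure two-party computation with the exchanged public keys as encryption parameters, whereas this sketch sums unencrypted vectors.

```python
def aggregate_joint_outputs(two_party_outputs):
    """An upper-layer participation node sums the k two-party joint
    outputs from its k lower-layer nodes to form the first joint model
    output for the federated model (plaintext sketch only)."""
    joint = [0.0] * len(two_party_outputs[0])
    for out in two_party_outputs:
        joint = [a + b for a, b in zip(joint, out)]
    return joint
```

Applied recursively up the multi-way tree, each upper-layer node aggregates its k children's outputs, so no single node holds every participant's contribution.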