Patent classifications
G06N3/098
SYSTEMS AND METHODS FOR PROVIDING A MULTI-PARTY COMPUTATION SYSTEM FOR NEURAL NETWORKS
A system and method are disclosed for secure multi-party computations. The system performs operations including establishing an API for coordinating joint operations between a first access point and a second access point related to performing a secure prediction task, in which the first access point and the second access point perform private computation of first data and second data without either party having access to the other's data. The operations include storing a list of assets representing metadata about the first data and the second data, receiving a selection of the second data for use with the first data, managing authentication and authorization of communications between the first access point and the second access point, and performing the secure prediction task using the second data operating on the first data.
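The coordination flow above (asset metadata registry, authorization of communications, then the secure prediction) can be sketched as follows. All names (`Asset`, `CoordinationAPI`, `run_secure_prediction`) are illustrative assumptions, and the final step is a placeholder for where a real MPC protocol would run:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    asset_id: str
    owner: str
    description: str  # metadata only; the raw data never leaves its owner

class CoordinationAPI:
    """Coordinates a secure prediction between two access points."""

    def __init__(self):
        self.assets = {}
        self.authorized = set()  # authenticated (caller, asset_id) pairs

    def register_asset(self, asset):
        self.assets[asset.asset_id] = asset

    def list_assets(self):
        # expose only metadata about the first and second data
        return [(a.asset_id, a.description) for a in self.assets.values()]

    def authorize(self, caller, asset_id):
        self.authorized.add((caller, asset_id))

    def run_secure_prediction(self, caller, model_asset_id, data_asset_id):
        if (caller, model_asset_id) not in self.authorized:
            raise PermissionError("caller not authorized for this asset")
        # placeholder: a real system would run an MPC protocol here so
        # neither party sees the other's data
        return {"model": model_asset_id, "data": data_asset_id, "status": "ok"}
```

The point of the sketch is that only metadata crosses the API boundary; the prediction step is the one place a cryptographic protocol would be substituted in.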
TRAINING FEDERATED LEARNING MODELS
A computer system trains a federated learning model. A federated learning model is distributed to a plurality of computing nodes, each having a set of local training data comprising labeled data samples. Statistical data indicating each node's count of data samples per label is received from the computing nodes and analyzed to identify one or more nodes whose local training data underrepresents a label category beyond a threshold value. Additional data samples labeled with the underrepresented labels are provided, and the computing nodes perform training. Results of the training are received and processed to generate a trained global model. Embodiments of the present invention further include a method and program product for training a federated learning model in substantially the same manner described above.
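The label-statistics analysis step can be sketched as below; the function name and the use of a per-node share threshold are assumptions for illustration, not the patent's exact criterion:

```python
def find_underrepresented(node_label_counts, threshold):
    """Flag (node, label) pairs where a label's share of the node's local
    samples falls below `threshold`. Labels absent from a node count as 0,
    so globally known labels missing locally are also flagged."""
    all_labels = set()
    for counts in node_label_counts.values():
        all_labels.update(counts)
    flagged = []
    for node_id, counts in node_label_counts.items():
        total = sum(counts.values())
        for label in sorted(all_labels):
            if counts.get(label, 0) / total < threshold:
                flagged.append((node_id, label))
    return flagged
```

The server would then route additional samples carrying the flagged labels to the corresponding nodes before the next training round.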
METHOD OF GENERATING PRE-TRAINING MODEL, ELECTRONIC DEVICE, AND STORAGE MEDIUM
A method of generating a pre-training model, an electronic device, and a storage medium, relating to the field of artificial intelligence technology, in particular to computer vision and deep learning. The method includes: determining a performance index set corresponding to a candidate model structure set, wherein the candidate model structure set is determined from a plurality of model structures included in a search space, and the search space is a super-network-based search space; determining, from the candidate model structure set, a target model structure corresponding to each chip according to the performance index set, wherein each target model structure is a model structure meeting a performance index condition; and determining, for each chip, the target model structure corresponding to the chip as a pre-training model for that chip, wherein the chip is configured to run its corresponding pre-training model.
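The per-chip selection step can be sketched as a filter-then-pick over the measured performance indices. The function name and the metric keys are hypothetical; the patent does not specify which indices (accuracy, latency, etc.) form the condition:

```python
def select_pretraining_models(perf_index, meets_condition, score):
    """perf_index: {chip: {structure: metrics}} measured per chip.
    For each chip, keep candidate structures whose metrics satisfy
    `meets_condition`, then pick the highest-`score` one as that chip's
    pre-training model."""
    selection = {}
    for chip, results in perf_index.items():
        eligible = {s: m for s, m in results.items() if meets_condition(m)}
        if eligible:
            selection[chip] = max(eligible, key=lambda s: score(eligible[s]))
    return selection
```

Because the indices are measured per chip, the same candidate set can yield a different target structure for each hardware target.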
DATA LABELING SYSTEM AND METHOD, AND DATA LABELING MANAGER
Embodiments of this application disclose a data labeling system and method, and a data labeling manager. The system includes a data labeling manager, a labeling model storage repository, and a basic computing unit storage repository. The data labeling manager receives a data labeling request, obtains a target basic computing unit, allocates a hardware resource to the target basic computing unit, establishes a target computing unit, obtains first storage path information of basic parameter data of a first labeling model, and sends the first storage path information to the target computing unit. The target computing unit obtains the basic parameter data of the first labeling model by using the first storage path information, combines a target model inference framework and the basic parameter data of the first labeling model to obtain the first labeling model, and labels to-be-labeled data by using the first labeling model.
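The request flow can be sketched as below, with hardware allocation omitted and all names (`LabelingManager`, `handle_request`) assumed for illustration. The key idea is that the manager passes only a storage path, and the computing unit assembles the model by combining the inference framework with the parameter data it fetches:

```python
class LabelingManager:
    """Sketch: resolve a model id to a storage path, fetch the parameter
    data from that path, combine it with the inference framework, and
    label the submitted samples."""

    def __init__(self, model_paths, parameter_store, inference_framework):
        self.model_paths = model_paths          # model id -> storage path
        self.parameter_store = parameter_store  # storage path -> parameter data
        self.framework = inference_framework    # parameters -> callable model

    def handle_request(self, model_id, samples):
        path = self.model_paths[model_id]       # first storage path information
        params = self.parameter_store[path]     # unit fetches basic parameter data
        model = self.framework(params)          # framework + parameters -> model
        return [model(x) for x in samples]      # label the to-be-labeled data
```

Storing paths rather than models keeps the manager lightweight; the heavy parameter data moves only to the computing unit that needs it.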
Subject-Level Granular Differential Privacy in Federated Learning
Group-level privacy preservation is implemented within federated machine learning. An aggregation server may distribute a machine learning model to multiple users, each having a respective private dataset. A private dataset may include multiple items associated with a single group. Individual users may train the model using their local, private dataset to generate one or more parameter updates and to determine the largest number of items associated with any single group among the groups in the dataset. Parameter updates generated by the individual users may be modified by applying respective noise values to individual ones of the parameter updates according to the respective counts, ensuring differential privacy for the groups of the dataset. The aggregation server may aggregate the updates into a single set of parameter updates to update the machine learning model.
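The count-dependent noising can be sketched as follows, assuming clipping plus Laplace noise whose scale grows with the largest per-group item count; the function name, the Laplace mechanism, and the exact sensitivity formula are assumptions for illustration:

```python
import math
import random
from collections import Counter

def privatize_update(update, group_ids, clip_norm, epsilon, seed=0):
    """Clip the local update to `clip_norm`, then add per-coordinate
    Laplace noise scaled by the largest number of items any single group
    contributes, so every group in the dataset is protected."""
    max_group_count = max(Counter(group_ids).values())
    norm = math.sqrt(sum(v * v for v in update))
    factor = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [v * factor for v in update]
    scale = clip_norm * max_group_count / epsilon  # noise scale per coordinate
    rng = random.Random(seed)
    # the difference of two i.i.d. exponentials is Laplace-distributed
    return [v + rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
            for v in clipped]
```

A group contributing many items shifts the update more, so its user must add proportionally more noise; that is the "according to the respective counts" step above.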
User-level Privacy Preservation for Federated Machine Learning
User-level privacy preservation is implemented within federated machine learning. An aggregation server may distribute a machine learning model to multiple users, each having a respective private dataset. Individual users may train the model using their local, private dataset to generate one or more parameter updates. Prior to sending the generated parameter updates to the aggregation server for incorporation into the machine learning model, a user may modify the parameter updates by applying respective noise values to individual ones of the parameter updates, ensuring differential privacy for the dataset private to the user. The aggregation server may then receive the respective modified parameter updates from the multiple users and aggregate them into a single set of parameter updates to update the machine learning model. The federated machine learning may further include iteratively performing the sending, training, modifying, receiving, aggregating, and updating steps.
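One round of this scheme can be sketched end to end. The function name, the Gaussian noise, and averaging as the aggregation rule are illustrative assumptions; `local_update` stands in for the unspecified local training procedure:

```python
import random

def dp_federated_round(global_model, user_datasets, local_update,
                       noise_scale, rng):
    """One round: each user computes a local update, perturbs it with
    Gaussian noise *before* sending, and the server averages only the
    noised updates into the global model."""
    noised_updates = []
    for dataset in user_datasets:
        delta = local_update(global_model, dataset)       # local training
        noised = [d + rng.gauss(0.0, noise_scale) for d in delta]
        noised_updates.append(noised)                     # only this leaves the user
    n = len(noised_updates)
    mean_update = [sum(col) / n for col in zip(*noised_updates)]
    return [w + u for w, u in zip(global_model, mean_update)]
```

Because noise is applied on the user's side, the server never observes a raw update; iterating this function is the repeated send/train/modify/aggregate loop the abstract describes.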
DATA MIGRATION SCHEDULE PREDICTION USING MACHINE LEARNING
Various embodiments provide for one or more processor instructions and memory instructions that enable a memory sub-system to predict a schedule for migrating data between memory devices, which can be part of the memory sub-system.
DIALOG AGENTS WITH TWO-SIDED MODELING
A central learning model is deployed as a user model and as an assistant model. Sensitive information utterances from a corpus of previously stored conversation language corresponding to user queries and chat agent responses thereto are used to train the user model to become an updated user model and to train the assistant model to become an updated assistant model, respectively. The user model provides user contexts corresponding to user queries to the assistant model, and the assistant model provides assistant contexts corresponding to chat agent responses to the user model. During training, the user model does not provide plain-text queries to the assistant model and the assistant model does not provide plain-text responses to the user model. The updated assistant model may facilitate a federated training process to produce an updated central model. An updated central model may be used to provide real-time chat agent responses to live user queries.
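The context-exchange protocol can be sketched with a toy stand-in for the two models; the `ContextModel` class and its length-based "context" are purely illustrative. What the sketch preserves is the constraint that each side trains on the other's opaque context, never its plain text:

```python
class ContextModel:
    """Toy stand-in for the user/assistant models: encodes text into an
    opaque context (here, just a length feature) and trains on
    (own text, counterpart context) pairs."""

    def __init__(self):
        self.examples = []

    def encode(self, text):
        return {"length": len(text)}  # opaque context, not the plain text

    def train_on(self, own_text, counterpart_context):
        self.examples.append((own_text, counterpart_context))

def training_step(user_model, assistant_model, query, response):
    user_ctx = user_model.encode(query)
    assistant_ctx = assistant_model.encode(response)
    user_model.train_on(query, assistant_ctx)      # sees only assistant context
    assistant_model.train_on(response, user_ctx)   # sees only user context
```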
MACHINE-LEARNABLE ROBOTIC CONTROL PLANS
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using learnable robotic control plans. One of the methods comprises obtaining a learnable robotic control plan comprising data defining a state machine that includes a plurality of states and a plurality of transitions between states, wherein: one or more states are learnable states, and each learnable state comprises data defining (i) one or more learnable parameters of the learnable state and (ii) a machine learning procedure for automatically learning a respective value for each learnable parameter of the learnable state; and processing the learnable robotic control plan to generate a specific robotic control plan, comprising: obtaining data characterizing a robotic execution environment; and for each learnable state, executing, using the obtained data, the respective machine learning procedures defined by the learnable state to generate a respective value for each learnable parameter of the learnable state.
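The data structure described above, and the processing step that specializes it, can be sketched as follows. All names (`LearnableState`, `ControlPlan`, `specialize`) are assumptions, and the learning procedure is reduced to a callable from environment data to parameter values:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class LearnableState:
    name: str
    learnable_params: List[str]
    learn: Callable[[dict], dict]  # ML procedure: env data -> param values

@dataclass
class ControlPlan:
    states: Dict[str, LearnableState]
    transitions: List[Tuple[str, str]]  # (from_state, to_state)

def specialize(plan, env_data):
    """Turn a learnable plan into a specific one by running each learnable
    state's learning procedure against data from the execution environment."""
    bound = {}
    for name, state in plan.states.items():
        learned = state.learn(env_data)
        bound[name] = {p: learned[p] for p in state.learnable_params}
    return bound
```

The same plan can thus be reused across environments: only the learned parameter bindings change, while the state machine and its transitions stay fixed.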
SYSTEM AND METHODS FOR IDENTIFYING AND TROUBLESHOOTING CUSTOMER ISSUES TO PREEMPT CUSTOMER CALLS
Disclosed embodiments may include a system that may receive an interaction message associated with an interaction a user has with an application or website; the interaction message may include an error message or a repeated action message. The system may identify, using a first machine learning model, one or more issues associated with the interaction message, retrieve one or more troubleshooting steps mapped to the one or more issues, and generate a first message comprising the one or more troubleshooting steps and a feedback request on the effectiveness of the one or more troubleshooting steps. The system may transmit the first message to the user, receive feedback from the user in response to the feedback request, and determine whether the feedback is negative. When the feedback is negative, the system may transmit a second message to a representative requesting that the representative call the user.
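The end-to-end flow can be sketched with the ML model and the messaging channels stubbed out as callables; every name here is illustrative, not the patent's API:

```python
def handle_interaction(message, classify_issues, troubleshooting_map,
                       send_to_user, get_feedback, notify_representative):
    """Sketch of the disclosed flow: classify the interaction message,
    send mapped troubleshooting steps plus a feedback request, and
    escalate to a representative on negative feedback."""
    issues = classify_issues(message)                   # first ML model
    steps = [troubleshooting_map[i] for i in issues
             if i in troubleshooting_map]
    send_to_user({"steps": steps, "feedback_request": True})  # first message
    if get_feedback() == "negative":
        # second message, asking the representative to call the user
        notify_representative(
            "please call the user about: " + ", ".join(issues))
    return steps
```

Routing the escalation through the representative is what lets the system preempt the customer call the title refers to.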