Patent classifications
G06N5/00
Optimizing sparse graph neural networks for dense hardware
A computer-implemented method for computing node embeddings of a sparse graph that is an input of a sparse graph neural network is described. Each node embedding corresponds to a respective node of the sparse graph and represents feature information of the respective node and a plurality of neighboring nodes of the respective node. The method includes: receiving an adjacency matrix that represents edges of the sparse graph; receiving a weight matrix representing, for each node of the sparse graph, a level of influence of respective neighboring nodes on the node; initializing, for each node of the sparse graph, a respective node embedding; transforming the adjacency matrix into a low-bandwidth adjacency matrix, and performing the following operations at least once: generating a message propagation matrix as a product of the low-bandwidth adjacency matrix, the node embeddings of the nodes, and the weight matrix, wherein the message propagation matrix represents message propagation among the nodes of the sparse graph, and updating the node embeddings of the sparse graph by processing the message propagation matrix and the node embeddings of the nodes using an encoder neural network of the sparse graph neural network.
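The propagation step described above can be sketched in a few lines of numpy. Everything here is an illustrative stand-in: the claim does not fix the encoder architecture, the embedding width, or the bandwidth-reduction algorithm (reverse Cuthill–McKee is one common choice), so the shapes and the `encoder` function below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse graph: 4 nodes on a path. This adjacency matrix is
# already banded (low bandwidth); a general sparse graph would
# first be permuted, e.g. via reverse Cuthill-McKee (assumption).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

d = 8                        # embedding width (assumed)
E = rng.normal(size=(4, d))  # initialized node embeddings
W = rng.normal(size=(d, d))  # weight matrix: neighbor influence

def encoder(M, E):
    """Stand-in for the encoder neural network: a residual nonlinearity."""
    return np.tanh(M) + E

# Performed at least once: build the message propagation matrix
# as the product of low-bandwidth adjacency, embeddings, and weights,
# then update the embeddings through the encoder.
for _ in range(2):
    M = A @ E @ W            # message propagation matrix
    E = encoder(M, E)        # updated node embeddings

print(E.shape)  # (4, 8)
```

The payoff of the banded adjacency matrix is that `A @ E` touches only a narrow diagonal band, which maps well onto dense matrix hardware.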
Method of and server for training a machine learning algorithm for estimating uncertainty of a sequence of models
There is provided a method and server for estimating an uncertainty parameter of a sequence of computer-implemented models comprising at least one machine learning algorithm (MLA). A set of labelled digital documents is received, which is to be processed by the sequence of models. For a given model of the sequence of models, at least one of a respective set of input features, a respective set of model-specific features and a respective set of output features are received. The set of predictions output by the sequence of models is received. A second MLA is trained to estimate uncertainty of the sequence of models based on the set of labelled digital documents, and the at least one of the respective set of input features, the respective set of model-specific features, the respective set of output features, and the set of predictions.
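A minimal sketch of the second MLA's training setup, under stated assumptions: the feature widths, the use of prediction error as the uncertainty target, and the least-squares model standing in for the second MLA are all illustrative choices, not taken from the claim.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Hypothetical per-document features for the model sequence
# (widths are assumptions):
input_feats  = rng.normal(size=(n, 3))     # respective input features
model_feats  = rng.normal(size=(n, 2))     # model-specific features
output_feats = rng.normal(size=(n, 2))     # respective output features
preds        = rng.normal(size=(n, 1))     # predictions of the sequence
labels       = rng.integers(0, 2, size=n)  # labelled digital documents

# Target for the second MLA: did the sequence's prediction disagree
# with the label? (One plausible proxy for uncertainty.)
y = ((preds.ravel() > 0).astype(float) != labels).astype(float)

# Second MLA: a plain least-squares linear model as a stand-in,
# trained on the concatenation of all feature sets and predictions.
X = np.hstack([input_feats, model_feats, output_feats, preds,
               np.ones((n, 1))])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

uncertainty = X @ w  # estimated uncertainty parameter per document
print(uncertainty.shape)  # (100,)
```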
Static and dynamic non-deterministic finite automata tree structure application apparatus and method
A method includes processing a user input for generating a non-deterministic finite automata tree (NFAT) correlation policy. The user input indicates one or more of a static condition or a dynamic condition for inclusion in the NFAT correlation policy. The static condition includes a comparison between a defined entity and a first fixed parameter. The dynamic condition includes a comparison between the defined entity and a variable parameter. An applicable NFAT element is generated that includes at least one of the NFAT correlation policy generated based on a determination that the user input indicates the static condition or a NFAT template generated based on a determination that the user input indicates the dynamic condition. Event data received from a network device is processed to detect a status of a network entity associated with a communication network based on the applicable NFAT element.
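The static/dynamic distinction above can be made concrete with a small sketch. The class and method names (`StaticCondition`, `DynamicCondition`, `evaluate`) are invented for illustration; the key point is that a static condition compares against a value fixed at policy-creation time, while a dynamic condition resolves its parameter at evaluation time.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class StaticCondition:
    entity: str   # defined entity, e.g. a field of the event data
    fixed: Any    # first fixed parameter, bound when the policy is built

    def evaluate(self, event: dict) -> bool:
        return event.get(self.entity) == self.fixed

@dataclass
class DynamicCondition:
    entity: str
    variable: Callable[[], Any]  # variable parameter, resolved late

    def evaluate(self, event: dict) -> bool:
        return event.get(self.entity) == self.variable()

# A tiny correlation policy mixing both condition kinds:
threshold = {"value": 5}
policy = [
    StaticCondition("src_port", 443),
    DynamicCondition("fail_count", lambda: threshold["value"]),
]

# Event data received from a network device:
event = {"src_port": 443, "fail_count": 5}
status = all(c.evaluate(event) for c in policy)
print(status)  # True
```

Because the dynamic condition defers to a callable, the same policy tracks `threshold` as it changes, which is what makes a template (rather than a frozen policy) necessary for dynamic conditions.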
SAFETY BELT DETECTION METHOD, APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
A safety belt detection method, apparatus, computer device and computer readable storage medium are disclosed. The safety belt detection method includes the steps as follows. An image to be detected is obtained. The image to be detected is inputted into a detection network which includes a global dichotomous branch network and a grid classification branch network. A dichotomous result, which indicates whether a driver is wearing a safety belt and is output from the global dichotomous branch network, is obtained. A grid classification diagram, which indicates position information of the safety belt and is output from the grid classification branch network, is obtained based on image classification. A detection result of the safety belt, indicating whether the driver is wearing the safety belt normatively, is obtained based on the dichotomous result and the grid classification diagram.
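A sketch of the final fusion step, with assumed branch outputs: a scalar probability from the dichotomous branch and an 8x8 per-cell grid from the grid classification branch (the abstract fixes neither the grid size nor the fusion rule, so the thresholds below are illustrative).

```python
import numpy as np

# Stand-ins for the two branch outputs:
belt_prob = 0.92              # dichotomous branch: P(driver wears belt)
grid = np.zeros((8, 8))       # grid branch: per-cell belt presence
grid[2:6, 3] = 1.0            # belt cells tracing a rough band

def detect(belt_prob, grid, p_thresh=0.5, min_cells=3):
    """Fuse both branches: the belt is worn if the dichotomous branch
    says so, and worn normatively (properly positioned) only if enough
    grid cells actually trace the belt."""
    worn = belt_prob > p_thresh
    positioned = grid.sum() >= min_cells
    return bool(worn and positioned)

print(detect(belt_prob, grid))  # True
```

The design point of the dual branch is that the global result alone cannot distinguish a properly routed belt from one merely draped across the seat; the grid diagram supplies the positional evidence.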
ARTIFICIAL INTELLIGENCE (AI)-BASED MULTI-LEVEL PERSUASIVE REFERENCE FOR INDEPENDENT INSURANCE SALES AGENT
Methods and systems are provided for AI-based robotic automation for persuasive references. In one novel aspect, a robotic persuasive reference is generated based on a prospect product-service (P_PS) matrix, which is generated based on predictive analysis using a DNN model and dynamically obtained feedback. In one embodiment, the DNN model is trained with customer personal profiles against associated PS revenues for each customer data set. In one embodiment, the predictive analysis uses a decision tree classifier. In one embodiment, the computer system detects one or more predefined triggering events comprising feedback information for the robotic persuasive reference and one or more predefined lifetime events, and updates the P_PS matrix and the robotic persuasive reference accordingly. In one embodiment, the feedback information is a sentiment analysis of responses from the prospect. In another embodiment, a recency, frequency, and page browsing analysis is performed based on the one or more detected lifetime events.
Data model generation using generative adversarial networks
Methods for generating data models using a generative adversarial network can begin by receiving a data model generation request by a model optimizer from an interface. The model optimizer can provision computing resources with a data model. As a further step, a synthetic dataset for training the data model can be generated using a generative network of a generative adversarial network, the generative network trained to generate output data differing at least a predetermined amount from a reference dataset according to a similarity metric. The computing resources can train the data model using the synthetic dataset. The model optimizer can evaluate performance criteria of the data model and, based on the evaluation of the performance criteria of the data model, store the data model and metadata of the data model in a model storage. The data model can then be used to process production data.
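The generator's constraint above (output differing at least a predetermined amount from a reference dataset) can be sketched directly. The nearest-neighbor distance used as the similarity metric and the rejection-sampling loop standing in for a trained generative network are both assumptions; the claim fixes neither.

```python
import numpy as np

rng = np.random.default_rng(2)

reference = rng.normal(size=(50, 4))   # reference dataset
delta = 0.5                            # predetermined minimum difference

def similarity(x, dataset):
    """Similarity metric stand-in: distance to the nearest reference
    row (larger = more different). The claim does not fix the metric."""
    return np.min(np.linalg.norm(dataset - x, axis=1))

def generate(n):
    """Stand-in for the trained generative network: emit rows only
    when they differ from every reference row by at least delta."""
    out = []
    while len(out) < n:
        x = rng.normal(size=4)
        if similarity(x, reference) >= delta:
            out.append(x)
    return np.array(out)

synthetic = generate(20)   # synthetic dataset for training the model
print(synthetic.shape)     # (20, 4)
```

Enforcing the margin is what lets the synthetic dataset train a downstream model without reproducing (and thereby leaking) rows of the reference data.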
LEARNING SYSTEM, LEARNING DEVICE, LEARNING METHOD, AND STORAGE MEDIUM
A learning system includes processing circuitry. The processing circuitry is configured to acquire a first data distribution for a first data set out of data sets based on a first cohort, to select a second cohort that is used to update a first model out of a plurality of second cohorts on the basis of the acquired first data distribution, and to update the first model on the basis of at least part of a second data set out of data sets based on the selected second cohort.
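A sketch of the cohort-selection step: the histogram representation of the first data distribution and the L1 distance as the selection criterion are illustrative assumptions (the abstract does not fix how the second cohort is chosen from the plurality).

```python
import numpy as np

rng = np.random.default_rng(3)

# Data set based on the first cohort (toy 1-D feature):
first = rng.normal(0.0, 1.0, size=500)

# Plurality of second cohorts with their own data sets:
second_cohorts = {
    "site_a": rng.normal(0.1, 1.0, size=500),
    "site_b": rng.normal(3.0, 1.0, size=500),
}

bins = np.linspace(-5, 5, 21)

def distribution(data):
    """Acquire a data distribution as a normalized histogram."""
    h, _ = np.histogram(data, bins=bins, density=True)
    return h

first_dist = distribution(first)

# Select the second cohort whose distribution best matches the
# acquired first distribution (L1 distance, an assumed criterion);
# its data would then be used to update the first model.
selected = min(second_cohorts,
               key=lambda k: np.abs(distribution(second_cohorts[k])
                                    - first_dist).sum())
print(selected)  # site_a
```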
MULTI-CLASS CLASSIFICATION USING A DUAL MODEL
A method for receiving a full training data set including a plurality of individual training data sets, dividing the plurality of individual training data sets into N classes, where N is an integer greater than three, dividing the N classes into M full data classes and N-M partial data classes, performing training to obtain a trained fixed-size machine learning (ML) classification model and a trained in-class confidence model, outputting a first set of prediction value(s) based on the performance of training, distributing each class of the N classes of individual training data sets to a different node of a distributed machine learning system; and outputting, from the nodes of the distributed machine learning system, a second set of prediction value(s) for each class of the N classes.
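The data-partitioning steps above can be sketched as follows. The concrete values (N = 6, M = 4, 100 examples per class, partial classes keeping 20% of their data) and the per-class node assignment scheme are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

N, M = 6, 4       # N > 3 classes; M full data, N - M partial data
per_class = 100

# Full training data set, divided into N classes of individual
# training data sets (toy 2-D features, one array per class).
training = {c: rng.normal(loc=c, size=(per_class, 2)) for c in range(N)}

full_classes = list(range(M))        # M full data classes
partial_classes = list(range(M, N))  # N - M partial data classes

# Partial-data classes retain only a slice of their examples
# (assumed 20%; the claim does not fix the fraction).
for c in partial_classes:
    training[c] = training[c][: per_class // 5]

# Distribute each class to a different node of the (simulated)
# distributed machine learning system.
nodes = {f"node_{c}": training[c] for c in range(N)}

print(len(nodes), len(nodes["node_5"]))  # 6 20
```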
ACCELERATING INFERENCES PERFORMED BY ENSEMBLE MODELS OF BASE LEARNERS
A method is provided for accelerating machine learning inferences. The method uses an ensemble model run on input data. This ensemble model involves several base learners, where each of the base learners has been trained. The method first schedules tasks for execution. As a result of the task scheduling, one of the base learners is executed based on a subset of the input data. The execution of the tasks is then started to obtain respective task outcomes. An exit condition is repeatedly evaluated while executing the tasks by computing a deterministic function of the task outcomes obtained so far. The output values of this deterministic function indicate whether an inference result of the ensemble model has converged. Accordingly, the execution of the tasks can be interrupted if the exit condition evaluated last is found to be fulfilled. Eventually, an inference result of the ensemble model is estimated based on the task outcomes.
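The early-exit loop above can be sketched with a toy voting ensemble. The base learners, the "lead cannot be overturned" exit condition, and the sequential (rather than parallel) task execution are all illustrative assumptions; the claim fixes only that the exit condition is a deterministic function of the outcomes obtained so far.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy trained base learners: each votes 0/1 on the input
# (a real ensemble would hold e.g. trees; these are stand-ins).
def make_learner(bias):
    return lambda x: int(x.sum() + bias > 0)

learners = [make_learner(b) for b in rng.normal(2.0, 0.1, size=20)]
x = rng.normal(0.5, 0.1, size=4)   # subset of the input data

def converged(outcomes):
    """Deterministic exit condition: the running vote margin is so
    large that the remaining learners cannot flip the majority."""
    remaining = len(learners) - len(outcomes)
    lead = abs(2 * sum(outcomes) - len(outcomes))
    return lead > remaining

outcomes = []
for clf in learners:               # one scheduled task per learner
    outcomes.append(clf(x))        # obtain the task outcome
    if converged(outcomes):        # re-evaluate the exit condition
        break                      # interrupt the remaining tasks

# Estimate the ensemble's inference result from the outcomes so far.
result = int(sum(outcomes) * 2 >= len(outcomes))
print(result, len(outcomes))  # 1 11
```

With all 20 learners agreeing here, the majority is mathematically locked in after 11 votes, so 9 task executions are skipped without changing the result.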