G06F18/29

MEMORY-AUGMENTED GRAPH CONVOLUTIONAL NEURAL NETWORKS
20230027427 · 2023-01-26 ·

System and method for processing a graph that defines a set of nodes and a set of edges, the nodes each having an associated set of node attributes, the edges each representing a relationship that connects two respective nodes, comprising: generating a first node embedding for each node by: generating, for the node and each of a plurality of neighbor nodes, a respective first edge attribute defining a respective relationship type between the node and the neighbor node based on the node attributes of the node and the node attributes of the neighbor node; generating a first neighborhood vector that aggregates information from the generated first edge attributes and the node attributes of the neighbor nodes; and generating the first node embedding based on the node attributes of the node and the generated first neighborhood vector.
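The three generating steps of the claim can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, the use of `tanh` nonlinearities, mean aggregation, and the weight matrices `W_edge` and `W_node` are all assumptions filled in for concreteness.

```python
import numpy as np

def node_embedding(x_i, neighbors, W_edge, W_node):
    # Step 1: derive a first edge attribute per neighbor from both
    # endpoints' node attributes (a learned relationship type).
    edge_attrs = [np.tanh(W_edge @ np.concatenate([x_i, x_j])) for x_j in neighbors]
    # Step 2: aggregate the edge attributes and neighbor attributes into
    # a first neighborhood vector (mean aggregation is one choice).
    neigh_vec = np.mean(
        [np.concatenate([e, x_j]) for e, x_j in zip(edge_attrs, neighbors)], axis=0
    )
    # Step 3: combine the node's own attributes with the neighborhood vector.
    return np.tanh(W_node @ np.concatenate([x_i, neigh_vec]))
```

In a trained system the weight matrices would be learned; here they simply fix the embedding dimensions.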

ANOMALY DETECTING METHOD IN SEQUENCE OF CONTROL SEGMENT OF AUTOMATION EQUIPMENT USING GRAPH AUTOENCODER

Disclosed is a method of analyzing programmable logic controller (PLC) logic to detect whether an anomaly that deviates from a standard pattern occurs in a repeated cycle. After the operation pattern of the automation equipment and processes is modeled and patterned as a graph, an anomaly detecting model capable of detecting whether a pattern is abnormal may be constructed as a graph autoencoder model. By detecting changes in the process pattern, anomalies in the equipment and processes can be detected early.
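The core detection idea is reconstruction error: a cycle whose graph pattern deviates from the learned standard reconstructs poorly. A minimal sketch, with `encode`/`decode` standing in for the trained graph autoencoder and the thresholding rule assumed:

```python
import numpy as np

def anomaly_score(adjacency, encode, decode):
    # Reconstruction error of one cycle's operation-pattern graph under
    # the (assumed pre-trained) graph autoencoder.
    z = encode(adjacency)
    return float(np.mean((adjacency - decode(z)) ** 2))

def is_anomalous(adjacency, encode, decode, threshold):
    # A pattern that deviates from the learned standard reconstructs
    # poorly, pushing its score over the threshold.
    return anomaly_score(adjacency, encode, decode) > threshold
```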

FEATURE ENGINEERING USING INTERACTIVE LEARNING BETWEEN STRUCTURED AND UNSTRUCTURED DATA

A concept associated with a feature used in a first machine learning model can be determined, the feature extracted from a first data source. A second data source containing the concept can be identified. An additional feature can be generated by performing natural language processing on the second data source. The feature and the additional feature can be merged. A second machine learning model, which uses the merged feature, can be generated. A prediction result of the first machine learning model can be compared with a prediction result of the second machine learning model relative to ground truth data, to evaluate the effectiveness of the merged feature. Based on the evaluated effectiveness, the feature can be augmented with the merged feature in machine learning.
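The merge-and-compare loop can be sketched compactly. All names here are illustrative, and the feature representation (flat dicts) and metric are assumptions; the point is the comparison of the two models against ground truth:

```python
def merge_features(base_features, nlp_features):
    # Merge the original feature set with the NLP-derived additional features.
    merged = dict(base_features)
    merged.update(nlp_features)
    return merged

def merged_feature_gain(model_a, model_b, X_a, X_b, y_true, metric):
    # Compare both models' predictions relative to ground truth; a
    # positive gain indicates the merged feature is effective.
    return metric(y_true, model_b.predict(X_b)) - metric(y_true, model_a.predict(X_a))
```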

METHODS FOR GENERATING AND PROVIDING CAUSAL EXPLANATIONS OF ARTIFICIAL INTELLIGENCE MODELS AND DEVICES THEREOF

Methods, non-transitory computer readable media, and causal explanation computing apparatuses that assist with generating and providing causal explanations of artificial intelligence models include obtaining a dataset as an input for an artificial intelligence model, wherein the obtained dataset is filtered to a disentangled low-dimensional representation. Next, a plurality of first factors from the disentangled low-dimensional representation of the obtained data that affect an output of the artificial intelligence model is identified. Further, a generative mapping between the identified plurality of first factors in the disentangled low-dimensional representation and the output of the artificial intelligence model is determined using causal reasoning. Explanation data is generated using the determined generative mapping, wherein the generated explanation data provides a description of an operation leading to the output of the artificial intelligence model using the identified plurality of first factors. The generated explanation data is provided via a graphical user interface.
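The factor-identification step can be illustrated with a crude perturbation screen over the disentangled latent dimensions. This is only a stand-in: the patent calls for causal reasoning, whereas the sketch below ranks factors by output sensitivity, and `model`, `decoder`, and `eps` are all assumed names.

```python
import numpy as np

def influential_factors(model, decoder, z, eps=0.5, k=3):
    # Perturb each disentangled latent factor in turn and rank factors
    # by how much the model output changes (sensitivity screen, a crude
    # stand-in for the causal identification step).
    base = model(decoder(z))
    effects = []
    for i in range(len(z)):
        z_pert = z.copy()
        z_pert[i] += eps
        effects.append(abs(model(decoder(z_pert)) - base))
    return list(np.argsort(effects))[::-1][:k]
```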

Capturing network dynamics using dynamic graph representation learning

Methods and systems for dynamic network link prediction include generating a dynamic graph embedding model for capturing temporal patterns of dynamic graphs, each of the graphs being an evolved representation of the dynamic network over time. The dynamic graph embedding model is configured as a neural network including nonlinear layers that learn structural patterns in the dynamic network. Dynamic graph embedding learning by the embedding model is achieved by optimizing a loss function that includes a weighting matrix for weighting reconstruction of observed edges higher than unobserved links. Graph edges representing network links at a future time step are predicted based on parameters of the neural network tuned by optimizing the loss function.
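The weighted reconstruction loss is the key term. A minimal sketch, with the penalty ratio `beta` as an assumed hyperparameter:

```python
import numpy as np

def weighted_reconstruction_loss(A, A_hat, beta=10.0):
    # Weighting matrix B penalizes mis-reconstructing observed edges
    # (A_ij > 0) beta times more than unobserved links, so the sparse
    # graph's structure is not drowned out by the many absent edges.
    B = np.where(A > 0, beta, 1.0)
    return float(np.sum(B * (A - A_hat) ** 2))
```

Without such a weighting, a model can reach a low loss on a sparse graph by predicting no edges at all.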

COGNITIVE METHOD TO SPLIT MONOLITHIC ARCHITECTURE INTO MICROSERVICE ARCHITECTURE
20230229741 · 2023-07-20 ·

A method and related system detail a split of the architecture of a monolithic application into an architecture of a microservice application. The method receives source code for the monolithic application and maps the source code into a directed graph. The graph is split into subgraphs and optimized. The method further provides the detailing of the microservice application split based on the subgraphs.
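One simple way to split the source-code graph into subgraphs is by weakly connected components; the patent leaves the splitting and optimization criteria open, so this is just one illustrative choice.

```python
from collections import defaultdict

def split_into_subgraphs(nodes, edges):
    # Treat the directed source-code graph as undirected and take each
    # weakly connected component as a candidate microservice boundary.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, groups = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            seen.add(cur)
            stack.extend(adj[cur] - comp)
        groups.append(comp)
    return groups
```

In practice a real splitter would also weigh call frequency and data coupling rather than connectivity alone.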

Automatic monitoring and adjustment of machine learning model training

Methods and systems for training a machine learning model include training a machine learning model using training data. A status of the machine learning model's training is determined based on an accuracy curve of the machine learning model over the course of the training. Parameters of the training are adjusted based on the status. Training of the machine learning model is completed using the adjusted parameters.
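The patent does not fix a specific adjustment rule; one plausible instance is reducing the learning rate when the accuracy curve plateaus. The thresholds below (`patience`, `factor`, `min_delta`) are assumed hyperparameters.

```python
def adjust_learning_rate(accuracy_history, lr, patience=3, factor=0.5, min_delta=1e-3):
    # Determine training status from the accuracy curve: if accuracy has
    # not improved by at least min_delta over the last `patience` epochs,
    # scale the learning rate down and continue training.
    if len(accuracy_history) <= patience:
        return lr  # not enough history to judge the curve yet
    recent = accuracy_history[-(patience + 1):]
    if max(recent) - recent[0] < min_delta:
        return lr * factor
    return lr
```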

Methods and systems for predicting non-default actions against unstructured utterances

A method to adaptively predict non-default actions against unstructured utterances by an automated assistant operating in a computing system is provided. The method includes extracting voice features based on receiving an input utterance from at least one speaker by an automatic speech recognition (ASR) device, identifying the input utterance as an unstructured utterance based on the extracted voice features and a mapping between the input utterance and one or more default actions as drawn by the ASR, and obtaining at least one probable action to be performed in response to the unstructured utterance through a dynamic Bayesian network (DBN). The method further includes providing the at least one probable action obtained by the DBN to the speaker in order of the posterior probability with respect to each action.
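The final ranking step can be sketched with a single Bayes update standing in for inference over the full DBN; the action names, feature representation, and smoothing constant below are all illustrative assumptions.

```python
def action_posteriors(prior, likelihood, utterance_feature):
    # P(action | utterance) is proportional to
    # P(utterance | action) * P(action); a single Bayes update standing
    # in for inference over the full dynamic Bayesian network.
    unnorm = {a: prior[a] * likelihood[a].get(utterance_feature, 1e-9) for a in prior}
    z = sum(unnorm.values())
    return {a: p / z for a, p in unnorm.items()}

def rank_actions(posteriors):
    # Provide probable actions to the speaker in order of posterior probability.
    return sorted(posteriors, key=posteriors.get, reverse=True)
```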

Abstract meaning representation parsing with graph translation

A computer-implemented method for generating an abstract meaning representation (“AMR”) of a sentence comprises receiving, by a computing device, an input sentence and parsing the input sentence into one or more syntactic and/or semantic graphs. An input graph including a node set and an edge set is formed from the one or more syntactic and/or semantic graphs. Node representations are generated by natural language processing. The input graph is provided to a first neural network to produce an output graph having learned node representations aligned with the node representations in the input graph. The method further includes predicting, via a second neural network, node labels and predicting, via a third neural network, edge labels in the output graph. The AMR is generated based on the predicted node labels and predicted edge labels. A system and a non-transitory computer readable storage medium are also disclosed.
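The last step, assembling the AMR from the two networks' predictions, can be sketched as below. The triple representation and the example labels are assumptions; the node and edge labels are taken as given inputs rather than produced by the claimed networks.

```python
def assemble_amr(node_labels, edge_labels):
    # Combine per-node label predictions (second network) and per-edge
    # label predictions (third network) into AMR relation triples.
    return [(node_labels[src], rel, node_labels[dst])
            for (src, dst), rel in edge_labels.items()]
```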

Systems and methods for event prediction using schema networks

A system for event prediction using schema networks includes a first antecedent entity state that represents a first entity at a first time; a first consequent entity state that represents the first entity at a second time; a second antecedent entity state that represents a second entity at the first time; and a first schema factor that couples the first and second antecedent entity states to the first consequent entity state; wherein the first schema factor is configured to predict the first consequent entity state from the first and second antecedent entity states.
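The coupling performed by a schema factor can be sketched as a deterministic AND over binary entity states: each factor couples antecedent entity states at time t to one consequent entity state at t+1. This is a simplified sketch; the entity-state names are illustrative, and real schema networks reason probabilistically over such factors.

```python
def predict_consequents(schema_factors, antecedent_states):
    # Each schema factor maps a set of antecedent entity states (time t)
    # to one consequent entity state (time t+1); here a factor fires
    # iff all of its coupled antecedents are active.
    return {consequent: all(antecedent_states[a] for a in inputs)
            for consequent, inputs in schema_factors.items()}
```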