Patent classifications
G06N5/013
Static and dynamic non-deterministic finite automata tree structure application apparatus and method
A method includes processing a user input for generating a non-deterministic finite automata tree (NFAT) correlation policy. The user input indicates one or more of a static condition or a dynamic condition for inclusion in the NFAT correlation policy. The static condition includes a comparison between a defined entity and a first fixed parameter. The dynamic condition includes a comparison between the defined entity and a variable parameter. An applicable NFAT element is generated that includes at least one of the NFAT correlation policy generated based on a determination that the user input indicates the static condition or an NFAT template generated based on a determination that the user input indicates the dynamic condition. Event data received from a network device is processed to detect a status of a network entity associated with a communication network based on the applicable NFAT element.
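The static/dynamic distinction can be sketched as follows; the field names, context dictionary, and factory functions are illustrative assumptions, not from the patent. A static condition binds a fixed parameter at policy-creation time, while a dynamic condition resolves its parameter from runtime context at evaluation time.

```python
# Minimal sketch of static vs. dynamic conditions in a correlation
# policy (all names are hypothetical).

def make_static_condition(field, fixed_value):
    # Static: compare a defined entity field against a fixed parameter.
    return lambda event, context: event.get(field) == fixed_value

def make_dynamic_condition(field, variable_name):
    # Dynamic: compare the same field against a variable parameter
    # resolved from runtime context when the event is evaluated.
    return lambda event, context: event.get(field) == context.get(variable_name)

event = {"src_ip": "10.0.0.5"}
context = {"suspect_ip": "10.0.0.5"}

static = make_static_condition("src_ip", "10.0.0.5")
dynamic = make_dynamic_condition("src_ip", "suspect_ip")

print(static(event, context), dynamic(event, context))  # True True
```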
EXPLAINING A THEOREM PROVING MODEL
In an approach for explaining a theorem proving model, a processor predicts a truth value of a query through a pre-trained theorem proving model, based on the query and one or more facts and rules in a knowledge base. A processor ranks the one or more facts and rules according to the contribution, calculated with a pre-defined scoring method, that each makes to the predicted truth value of the query. A processor generates a proof of the predicted truth value, wherein the proof is one or more logical steps that demonstrate the predicted truth value in a natural language. A processor outputs the proof.
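One common pre-defined scoring method is leave-one-out contribution: how much the model's confidence drops when a fact or rule is removed. The toy "model" below just counts supporting facts and is purely a stand-in for a trained theorem-proving model; the knowledge base and query are illustrative.

```python
# Hedged sketch of ranking knowledge-base facts by contribution to a
# predicted truth value, using leave-one-out scoring.

query_support = {"bird(tweety)", "bird(x) -> flies(x)"}  # toy ground truth

def confidence(query, facts):
    # Stand-in for a theorem-proving model's confidence in the query.
    return sum(1 for f in facts if f in query_support) / len(query_support)

kb = ["bird(tweety)", "bird(x) -> flies(x)", "fish(nemo)"]

# Contribution of each fact = confidence drop when that fact is removed.
base = confidence("flies(tweety)", kb)
contrib = {f: base - confidence("flies(tweety)", [g for g in kb if g != f])
           for f in kb}
ranked = sorted(kb, key=lambda f: contrib[f], reverse=True)
print(ranked)  # the irrelevant fact ranks last
```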
SYSTEM AND METHOD FOR ENCRYPTION AND DECRYPTION USING LOGIC SYNTHESIS
A method for decrypting and/or encrypting an input message includes: providing at least five of the sixteen first order logic functions; and decrypting and/or encrypting the input message based on the at least five first order logic functions.
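The sixteen two-input Boolean functions can be indexed by their 4-bit truth tables, which is one plausible reading of "sixteen first order logic functions". The sketch below shows only the indexing scheme and XOR (truth table 0110) as one invertible choice; the patent's actual selection and composition of at least five functions is not specified here.

```python
# Illustrative only: index the 16 two-input Boolean functions by their
# 4-bit truth tables, and use XOR as one self-inverse encryption step.

def boolean_fn(index):
    # Truth-table bit for inputs (a, b) sits at row 2*a + b of `index`.
    return lambda a, b: (index >> (2 * a + b)) & 1

XOR = boolean_fn(0b0110)  # outputs 0,1,1,0 for (0,0),(0,1),(1,0),(1,1)

def xor_bits(message_bits, key_bits):
    return [XOR(m, k) for m, k in zip(message_bits, key_bits)]

msg = [1, 0, 1, 1]
key = [0, 1, 1, 0]
ct = xor_bits(msg, key)
assert xor_bits(ct, key) == msg  # XOR is its own inverse
print(ct)  # [1, 1, 0, 1]
```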
INFERENCE APPARATUS, INFERENCE METHOD, AND COMPUTER READABLE RECORDING MEDIUM
An inference apparatus 10 includes: a hypothesis candidate generation unit 11 configured to perform inference by applying inference knowledge that includes information indicating a temporal sequence to observation in which facts that have been observed are expressed using logical expressions, and thereby generate a hypothesis candidate from which the observation can be derived; and a contradiction examination unit 12 configured to determine, on the basis of the information indicating a temporal sequence, whether or not the generated hypothesis candidate includes a temporal contradiction.
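Under a simplified assumption that the temporal information takes the form of "before(x, y)" facts, a temporal contradiction reduces to a cycle in the ordering relation (x before y and, transitively, y before x). The event names and the cycle-based check below are illustrative, not the patent's mechanism.

```python
# Sketch: detect a temporal contradiction in a hypothesis candidate as
# a cycle in its "before" relation, via depth-first reachability.

def has_temporal_contradiction(before_pairs):
    graph = {}
    for x, y in before_pairs:
        graph.setdefault(x, set()).add(y)

    def reaches(start, target, seen):
        for nxt in graph.get(start, ()):
            if nxt == target or (nxt not in seen
                                 and reaches(nxt, target, seen | {nxt})):
                return True
        return False

    # A contradiction exists if any event precedes itself transitively.
    return any(reaches(x, x, {x}) for x in graph)

consistent = [("enter", "steal"), ("steal", "leave")]
contradictory = consistent + [("leave", "enter")]
print(has_temporal_contradiction(consistent),
      has_temporal_contradiction(contradictory))  # False True
```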
Systems and methods to semantically compare product configuration models
Systems and methods to semantically compare product configuration models. A method includes receiving a first configuration model and a second configuration model. The method includes generating a first order logic (FOL) representation of the first configuration model and an FOL representation of the second configuration model. The method includes performing a satisfiability modulo theories (SMT) solve for nonequivalence satisfiability on the FOL representation of the first configuration model and the FOL representation of the second configuration model. The method includes storing an indication that the first configuration model is equivalent to the second configuration model when the SMT solve for nonequivalence satisfiability is not satisfied.
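The core check can be sketched with a brute-force Boolean enumeration standing in for an SMT solver (a real pipeline would hand the FOL encodings to a solver such as Z3): the two models are equivalent exactly when no assignment makes them disagree, i.e. when nonequivalence is unsatisfiable. The example models are illustrative.

```python
# Sketch of the nonequivalence-satisfiability check over two
# configuration models, using exhaustive Boolean enumeration.

from itertools import product

def nonequivalence_satisfiable(model_a, model_b, variables):
    # Satisfiable iff some assignment makes the two models disagree.
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if model_a(assignment) != model_b(assignment):
            return True
    return False

# Two configuration models over features x, y expressed as predicates;
# model_a simplifies to x, so the models are semantically equivalent.
model_a = lambda v: v["x"] or (v["y"] and v["x"])
model_b = lambda v: v["x"]

# Not satisfiable -> store an indication that the models are equivalent.
print(nonequivalence_satisfiable(model_a, model_b, ["x", "y"]))  # False
```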
Encoding and decoding tree data structures as vector data structures
Systems, computer-implemented methods, and computer program products that can facilitate encoding a tree data structure into a vector based on a set of constraints are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a constraint former that can form a set of constraints based on a first tree data structure and a vector encoder that can encode the first tree data structure into a vector based on the set of constraints.
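One concrete way to encode a tree as a vector under a constraint set is a parent-index array with the constraint that every parent appears before its children; this is an illustrative instance, not the patent's specific constraint former or encoder.

```python
# Minimal sketch: encode a tree as a vector where vector[i] is the
# index of node i's parent (-1 for the root), with the constraint
# that each parent's index precedes its children's indices.

def encode_tree(root, children):
    order, parent_vec = [], []
    index_of = {}
    stack = [(root, -1)]
    while stack:
        node, parent_idx = stack.pop()
        index_of[node] = len(order)
        order.append(node)
        parent_vec.append(parent_idx)
        # Push children in reverse so they pop in left-to-right order.
        for child in reversed(children.get(node, [])):
            stack.append((child, index_of[node]))
    return order, parent_vec

children = {"a": ["b", "c"], "b": ["d"]}
order, vec = encode_tree("a", children)
print(order, vec)  # ['a', 'b', 'd', 'c'] [-1, 0, 1, 0]
```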
Method for performing a disjunctive proof for two relations
The present disclosure relates to a method for performing a disjunctive proof for two relations R.sub.0 and R.sub.1. The relation R.sub.0 is between an instance set X.sub.0 and a witness set W.sub.0 and defines a language L(R.sub.0) containing those elements x.sub.0∈X.sub.0 for which there exists a witness w.sub.0 that is related to x.sub.0 in accordance with R.sub.0. The relation R.sub.1 is between an instance set X.sub.1 and a witness set W.sub.1 and defines a language L(R.sub.1) containing those elements x.sub.1∈X.sub.1 for which there exists a witness w.sub.1 that is related to x.sub.1 in accordance with R.sub.1. For proving knowledge of a witness w.sub.b of at least one of instances x.sub.0 and x.sub.1, where b is 0 or 1, of the respective relations R.sub.0 and R.sub.1, the prover may generate using a bijective function a challenge from a simulated challenge c.sub.1-b.
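A classic instance of such a bijective challenge derivation in OR-proofs is XOR-splitting: given the verifier's challenge c and the simulated challenge c_{1-b}, the real challenge is c_b = c XOR c_{1-b}, and XOR with a fixed value is a bijection on fixed-length bitstrings. This is a schematic sketch of only that step, not the full protocol.

```python
# Schematic sketch of the challenge split in a disjunctive (OR) proof.

import secrets

def derive_real_challenge(c, c_simulated):
    # XOR with a fixed value is bijective, so for every verifier
    # challenge c there is exactly one valid c_b given c_{1-b}.
    return c ^ c_simulated

c = secrets.randbits(128)       # verifier's challenge
c_sim = secrets.randbits(128)   # challenge used in the simulated branch
c_real = derive_real_challenge(c, c_sim)

# The two sub-challenges always recombine to the verifier's challenge.
assert c_real ^ c_sim == c
```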
DISTRIBUTED DECOMPOSITION OF STRING-AUTOMATED REASONING USING PREDICATES
Techniques are described for efficiently distributing across multiple computing resources satisfiability modulo theories (SMT) queries expressed in propositional logic with string variables. As part of the computing-related services provided by a cloud provider network, many cloud providers also offer identity and access management services, which generally help users to control access and permissions to the services and resources (e.g., compute instances, storage resources, etc.) obtained by users via a cloud provider network. By using resource policies, for example, users can granularly control which identities are able to access specific resources associated with the users' accounts and how those identities can use the resources. The ability to efficiently distribute the analysis of SMT queries expressed in propositional logic with string variables among any number of separate computing resources (e.g., among separate processes, compute instances, containers, etc.) enables the efficient analysis of such policies.
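The decomposition idea can be sketched as a case-split on a predicate: the original query is satisfiable iff either the query conjoined with the predicate or with its negation is, and the two sub-queries can be dispatched to separate workers. The brute-force check over a tiny string domain below stands in for an SMT string solver; the query and predicate are invented for illustration.

```python
# Hedged sketch of predicate-based decomposition of a string query.

from itertools import product

DOMAIN = ["", "a", "ab", "abc"]  # toy finite string domain

def satisfiable(formula, num_vars):
    # Stand-in for an SMT solve: enumerate the toy domain.
    return any(formula(*vals) for vals in product(DOMAIN, repeat=num_vars))

query = lambda s: s.startswith("a") and len(s) >= 2
predicate = lambda s: len(s) >= 3          # the case-split predicate

# Each sub-query is independent and could run on a separate worker.
cases = [lambda s: query(s) and predicate(s),
         lambda s: query(s) and not predicate(s)]
results = [satisfiable(c, 1) for c in cases]

# The original query is satisfiable iff any sub-query is.
print(results, any(results) == satisfiable(query, 1))
```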
TRAINING A QUANTUM OPTIMIZER
Among the embodiments disclosed herein are variants of the quantum approximate optimization algorithm with different parametrization. In particular embodiments, a different objective is used: rather than looking for a state which approximately solves an optimization problem, embodiments of the disclosed technology find a quantum algorithm that will produce a state with high overlap with the optimal state (given an instance, for example, of MAX-2-SAT). In certain embodiments, a machine learning approach is used in which a "training set" of problems is selected and the parameters optimized to produce large overlap for this training set. When tested on the full problem set, the parameters that were found produced significantly larger overlap than optimized annealing times. Testing on other random instances (e.g., from 20 to 28 bits) continued to show improvement over annealing, with the improvement being most notable on the hardest problems. Embodiments of the disclosed technology can be used, for example, for near-term quantum computers with limited coherence times.
Solving based introspection to augment the training of reinforcement learning agents for control and planning on robots and autonomous vehicles
Described is a system for controlling a mobile platform. A neural network that runs on the mobile platform is trained based on a current state of the mobile platform. A Satisfiability Modulo Theories (SMT) solver capable of reasoning over non-linear activation functions is periodically queried to obtain examples of states satisfying specified constraints of the mobile platform. The neural network is then trained on the examples of states. Following training on the examples of states, the neural network selects an action to be performed by the mobile platform in its environment. Finally, the system causes the mobile platform to perform the selected action in its environment.
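The training loop can be sketched schematically: every N steps, a solver is queried for states satisfying the specified constraints, and those states are mixed into the training data. The grid-enumerating "solver" and the state representation below are placeholder assumptions for an SMT solver reasoning over the network's non-linear activations.

```python
# Schematic sketch of solver-based introspection augmenting training data.

import random

def solver_query(constraint, num_samples=4):
    # Stand-in for an SMT query: enumerate a small (position, velocity)
    # grid and return a few states satisfying the constraint.
    grid = [(x / 10, v / 10) for x in range(-10, 11) for v in range(-10, 11)]
    satisfying = [s for s in grid if constraint(s)]
    return random.sample(satisfying, min(num_samples, len(satisfying)))

safe = lambda s: abs(s[0]) < 0.5 and abs(s[1]) < 0.2   # example constraint

training_states = []
for step in range(100):
    # Ordinary experience collected by the agent (placeholder).
    training_states.append((random.uniform(-1, 1), random.uniform(-1, 1)))
    if step % 25 == 0:
        # Periodic solver introspection injects constraint-satisfying states.
        training_states.extend(solver_query(safe))

print(len(training_states))  # 100 random states + 4 solver batches of 4
```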