G06N5/042

Outcome creation based upon synthesis of history
11687807 · 2023-06-27

A method of exercising effective influence over future occurrences using knowledge synthesis is described. Techniques include influencing methods that yield actions once a proposed outcome has been assumed. This differs from methods, typically referred to as “predictive” or “prescriptive,” that use analytics to model future results from existing data and predict the most likely outcome. One or more methods of hierarchical analysis of historical data determine the events that led to an observed outcome. The outcome-based algorithms take as input a future event or state and generate the attributes that are its necessary precursors. By creating these attributes, the future can be affected. Where necessary, synthetic contributors of such attributes are also created and made to act in ways consistent with generating the assumed outcome. These contributors might be called upon, for example, to post favorable opinions, to report balmy weather, or to describe sales to a certain population demographic.
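The hierarchical walk from an assumed outcome back to its precursor attributes can be sketched as follows. This is a minimal illustration, not the patented method: the precursor map, attribute names, and synthetic-agent labels are all hypothetical.

```python
# Toy "precursor map" standing in for relations mined from historical
# data: each outcome maps to the attributes that preceded it.
PRECURSOR_MAP = {
    "high_sales": ["favorable_reviews", "good_weather", "target_demo_reach"],
    "favorable_reviews": ["positive_posts"],
}

def required_attributes(outcome):
    """Recursively collect attributes that historically preceded `outcome`."""
    attrs = []
    for pre in PRECURSOR_MAP.get(outcome, []):
        attrs.append(pre)
        attrs.extend(required_attributes(pre))
    return attrs

def synthesize_contributors(outcome):
    """Create synthetic contributors for each required precursor attribute,
    to be made to act consistently with the assumed outcome."""
    return [f"synthetic_agent<{a}>" for a in required_attributes(outcome)]
```

For example, `synthesize_contributors("high_sales")` yields one synthetic agent per precursor attribute, including those reached transitively through `favorable_reviews`.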

Prediction of NBA talent and quality from non-professional tracking data

A computing system identifies broadcast video for a plurality of games in a first league. The broadcast video includes a plurality of video frames. The computing system generates tracking data for each game from the broadcast video of the corresponding game. The computing system enriches the tracking data; the enriching includes merging play-by-play data for the game with the tracking data of the corresponding game. The computing system generates padded tracking data based on the tracking data. The computing system projects each player's performance in a second league based on the tracking data and the padded tracking data.
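The enrich → pad → project pipeline can be sketched as below. This is a simplified stand-in, assuming per-frame tracking rows and per-event play-by-play rows keyed by a shared game clock; all field names and the crude projection feature are hypothetical.

```python
def enrich(tracking, play_by_play):
    """Merge each tracking frame with the most recent play-by-play event."""
    events = sorted(play_by_play, key=lambda e: e["clock"])
    out = []
    for frame in sorted(tracking, key=lambda f: f["clock"]):
        current = None
        for e in events:
            if e["clock"] <= frame["clock"]:
                current = e["event"]
        out.append({**frame, "event": current})
    return out

def pad(frames, length):
    """Pad a frame sequence to a fixed length with zero rows."""
    zero = {k: 0 for k in frames[0]}
    return frames + [zero] * (length - len(frames))

def project(frames):
    """Stand-in projection: mean speed over real (non-padded) frames."""
    real = [f for f in frames if f.get("speed", 0)]
    return sum(f["speed"] for f in real) / len(real)

tracking = [{"clock": 1, "speed": 4.0}, {"clock": 2, "speed": 6.0}]
pbp = [{"clock": 1, "event": "shot"}]
padded = pad(enrich(tracking, pbp), 4)
```

In a real system the projection step would be a learned model over the enriched and padded sequences; the mean-speed feature here only marks where that model plugs in.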

PROCESSING LOGIC RULES IN A SPARQL QUERY ENGINE

A computer-implemented method for processing a logic rule in a graph database. The method includes: obtaining a graph database comprising at least one graph, each graph being represented by one or more adjacency matrices (R-Matrices), each adjacency matrix representing a group of tuples of the graph that share the same predicate; obtaining the logic rule concluding to a head predicate; generating a virtual adjacency matrix comprising one of the one or more adjacency matrices (R-Matrix) and an entailed data matrix (E-Matrix), the virtual adjacency matrix representing the head predicate and the entailed data matrix representing a group of tuples computed by applying the logic rule; and receiving, by the database, a query using the head predicate.
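The virtual-matrix idea can be illustrated with boolean adjacency matrices over a fixed node ordering. The rule below, `ancestor(x, z) <- parent(x, y) AND parent(y, z)`, and the matrices are illustrative choices, not taken from the patent; the E-Matrix is computed as a boolean matrix product (the join implied by the rule body), and the virtual matrix for the head predicate is the union of asserted and entailed tuples.

```python
def bool_matmul(a, b):
    """Boolean matrix product: tuples entailed via the rule's join."""
    n = len(a)
    return [[any(a[i][k] and b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def virtual_matrix(r_matrix, e_matrix):
    """Union of asserted (R-Matrix) and entailed (E-Matrix) tuples."""
    n = len(r_matrix)
    return [[r_matrix[i][j] or e_matrix[i][j] for j in range(n)]
            for i in range(n)]

# parent edges: node 0 -> 1 -> 2
parent = [[0, 1, 0],
          [0, 0, 1],
          [0, 0, 0]]
e_matrix = bool_matmul(parent, parent)        # entails (0, 2)
ancestor = virtual_matrix(parent, e_matrix)   # head predicate for queries
```

A query over the head predicate then reads the virtual matrix as if it were an ordinary adjacency matrix, without materializing the entailed tuples separately in the store.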

AIR Oracle Brain
20230170091 · 2023-06-01

Interactive and adaptable digital router for rendering diagnoses, wherein interactive multi-dimensional figurines render diagnoses, or reverse-deduce diagnoses, by querying diagnosis-related adept networked repositories for interactive and changeable approximate, similar, accurate, matching, and exact results.

SYSTEMS AND METHODS FOR FRAUD DETECTION USING GAME THEORY

Systems and methods for applying game theory to fraud detection. Rather than inspecting every transaction record, embodiments are directed to limiting incoming suspicious transaction records according to a schedule. The schedule may define time windows for various clients and transactions. These time windows filter the stream of incoming transaction records down to a subset. As a result, fraud may be detected by strategically allocating inspection resources in an optimal way rather than attempting to inspect every transaction.
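The schedule-driven filtering step can be sketched as below. The schedule structure, client names, and window granularity are all hypothetical; a real embodiment would derive the windows from a game-theoretic allocation rather than hard-code them.

```python
# Per-client inspection windows (half-open [lo, hi) intervals in seconds).
SCHEDULE = {
    "client_a": [(0, 10), (50, 60)],
    "client_b": [(20, 30)],
}

def in_window(ts, windows):
    """True if timestamp `ts` falls inside any scheduled window."""
    return any(lo <= ts < hi for lo, hi in windows)

def filter_stream(transactions):
    """Keep only records inside their client's inspection windows."""
    return [t for t in transactions
            if in_window(t["ts"], SCHEDULE.get(t["client"], []))]

stream = [
    {"client": "client_a", "ts": 5,  "amount": 900},
    {"client": "client_a", "ts": 40, "amount": 120},
    {"client": "client_b", "ts": 25, "amount": 310},
]
subset = filter_stream(stream)   # only the ts=5 and ts=25 records survive
```

Downstream fraud models then run only on `subset`, which is what lets inspection resources concentrate where the schedule says they matter most.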

Natural language understanding using voice characteristics
11348601 · 2022-05-31

A system is provided for using voice characteristics to determine a user intent corresponding to an utterance. The system processes an NLU hypothesis and voice characteristics data, using a trained model, to determine an alternate NLU hypothesis based on the voice characteristics data. The voice characteristics data may indicate a user's level of uncertainty when speaking the utterance, the user's age group, the user's sentiment when speaking the utterance, and other data.
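A toy re-ranking sketch of this idea, with the trained model replaced by a single hand-written rule: high speaker uncertainty biases the system toward a confirmation intent. All intent names, slot names, and the threshold are hypothetical.

```python
def rerank(nlu_hypothesis, voice):
    """Return an alternate NLU hypothesis when voice data warrants one."""
    if voice.get("uncertainty", 0.0) > 0.7:
        # Uncertain delivery: confirm instead of acting immediately.
        return {"intent": "ConfirmIntent", "slots": nlu_hypothesis["slots"]}
    return nlu_hypothesis

hyp = {"intent": "PlayMusic", "slots": {"song": "Yesterday"}}
voice = {"uncertainty": 0.9, "age_group": "adult", "sentiment": "neutral"}
alt = rerank(hyp, voice)   # swapped to a confirmation intent
```

In the described system a trained model would make this decision jointly from the hypothesis and the voice features, rather than from a fixed threshold.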

SAFE REINFORCEMENT LEARNING BY LOGICAL NEURAL NETWORK

A method for safe reinforcement learning receives an action and a current state of an environment. The method evaluates, using a Logical Neural Network (LNN) structure, an action safetyness logical inference based on the current state of the environment and a current action candidate from an agent. The method outputs upper and lower bounds on the action, responsive to an evaluation of the action safetyness logical inference. The method calculates a contradiction value for the action by using the upper and lower bounds. The contradiction value indicates a level of contradiction for each of a plurality of logic rules implemented by the LNN structure. The method evaluates the action with respect to safetyness based on the contradiction value. The method selectively performs the action responsive to an evaluation of the action indicating that the action is safe to perform based on the contradiction value exceeding a safetyness threshold.
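The bound-and-contradiction check can be sketched as follows. This assumes each safety rule yields (lower, upper) truth bounds for the candidate action and measures contradiction as the amount by which the lower bound overshoots the upper bound, a common LNN convention; the threshold semantics (reject when any rule's contradiction is too large) are an illustrative reading, not the claim language.

```python
def contradiction(lower, upper):
    """How far the lower truth bound overshoots the upper bound."""
    return max(0.0, lower - upper)

def is_safe(rule_bounds, threshold=0.1):
    """Reject the action if any rule's contradiction exceeds the threshold."""
    return all(contradiction(lo, hi) <= threshold for lo, hi in rule_bounds)

# Bounds per safety rule for one candidate action (values hypothetical).
bounds = [(0.9, 0.95),   # consistent rule
          (0.6, 0.40)]   # contradicted rule: lower > upper
safe = is_safe(bounds)   # large contradiction, so the action is rejected
```

The agent would only perform the action when `is_safe` holds; otherwise it falls back to an alternative action candidate.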

LEARNING LATENT STRUCTURAL RELATIONS WITH SEGMENTATION VARIATIONAL AUTOENCODERS
20220156612 · 2022-05-19

Learning disentangled representations is an important topic in machine learning for a wide range of applications. Disentangled latent variables represent interpretable semantic information and reflect separate factors of variation in data. Although generative models may learn latent representations and generate data samples, existing models may ignore the structural information among latent representations. Described in the present disclosure are embodiments that learn disentangled latent structural representations from data using decomposable variational auto-encoders, which simultaneously learn component representations and encode component relationships. Embodiments of a novel structural prior for latent representations are disclosed to capture interactions among different data components. Embodiments are applied to data segmentation and to latent relation discovery among different data components. Experiments on several datasets demonstrate the utility of the present model embodiments.

METHOD AND APPARATUS FOR GENERATING MULTI-DRONE NETWORK COOPERATIVE OPERATION PLAN BASED ON REINFORCEMENT LEARNING

The present disclosure relates to a method and apparatus for generating a multi-drone network operation plan based on reinforcement learning. The method of generating a multi-drone network operation plan based on reinforcement learning includes defining a reinforcement learning hyperparameter and training an actor neural network for each drone agent by using a multi-agent deep deterministic policy gradient (MADDPG) algorithm based on the defined hyperparameter, generating Markov game formalization information based on multi-drone network task information and generating state-action history information by using the trained actor neural network based on the formalization information, and generating a multi-drone network operation plan based on the state-action history information.
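The plan-generation stage described above can be sketched as a joint rollout of per-drone actors. Training with MADDPG is out of scope here, so the trained actor networks are replaced by stub policies over a toy waypoint task; all structures (waypoint states, the history tuples, the per-drone plan layout) are hypothetical.

```python
def stub_actor(drone_id):
    """Stand-in for a trained MADDPG actor network for one drone."""
    return lambda state: (state[drone_id] + 1) % 4   # advance to next waypoint

def rollout(actors, initial_state, steps):
    """Generate the state-action history by running all actors jointly."""
    state, history = list(initial_state), []
    for _ in range(steps):
        actions = [actor(state) for actor in actors]
        history.append((tuple(state), tuple(actions)))
        state = actions                  # next joint state = chosen waypoints
    return history

def make_plan(history):
    """Per-drone operation plan: the waypoint sequence each drone visits."""
    n = len(history[0][1])
    return [[acts[i] for _, acts in history] for i in range(n)]

actors = [stub_actor(i) for i in range(2)]
history = rollout(actors, [0, 2], steps=3)
plan = make_plan(history)                # one waypoint list per drone
```

In the described method the Markov game formalization would come from the multi-drone network task information, and each `stub_actor` would be the actor network trained with MADDPG under the defined hyperparameters.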

FIRST-ORDER LOGICAL NEURAL NETWORKS WITH BIDIRECTIONAL INFERENCE

A system for configuring and using a logical neural network including a graph syntax tree of formulae in a represented knowledgebase connected to each other via nodes representing each proposition. One neuron exists for each logical connective occurring in each formula and, additionally, one neuron for each unique proposition occurring in any formula. All neurons return pairs of values representing upper and lower bounds on truth values of their corresponding subformulae and propositions. Neurons corresponding to logical connectives accept as input the output of neurons corresponding to their operands and have activation functions configured to match the connectives' truth functions. Neurons corresponding to propositions accept as input the output of neurons established as proofs of bounds on the propositions' truth values and have activation functions configured to aggregate the tightest such bounds. Bidirectional inference permits every occurrence of each proposition in each formula to be used as a potential proof.
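Bound propagation through a single conjunction neuron can be illustrated as below, using the Łukasiewicz conjunction as the activation function, a common LNN choice; the formula and the numeric bounds are hypothetical. The forward pass computes bounds on (A AND B) from its operands, and the backward (bidirectional-inference) pass uses bounds on the conjunction and on B as a proof that tightens the bounds on A.

```python
def and_forward(a, b):
    """Upward pass: (lower, upper) bounds on (A AND B) from A and B."""
    (la, ua), (lb, ub) = a, b
    return (max(0.0, la + lb - 1.0), max(0.0, ua + ub - 1.0))

def and_backward(conj, b):
    """Downward pass: tighten bounds on A from (A AND B) and B.
    From a + b - 1 <= (a AND b): a >= L_conj + 1 - U_b when L_conj > 0,
    and a <= U_conj + 1 - L_b always."""
    (lc, uc), (lb, ub) = conj, b
    lower = max(0.0, lc + 1.0 - ub) if lc > 0.0 else 0.0
    upper = min(1.0, uc + 1.0 - lb)
    return (lower, upper)

fwd = and_forward((0.9, 1.0), (0.8, 0.9))    # bounds on the conjunction
bwd = and_backward((0.7, 1.0), (0.8, 0.9))   # tightened bounds on A
```

A proposition neuron would then aggregate the tightest bounds over all such proofs, e.g. by taking the maximum of the lower bounds and the minimum of the upper bounds it receives.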