Patent classifications
G06N7/06
SYSTEM FOR PROBABILISTIC REASONING AND DECISION MAKING ON DIGITAL TWINS
Aspects of the present disclosure provide systems, methods, and computer-readable storage media that support ontology-driven processes to create digital twins that extend the capabilities of knowledge graphs. A dataset including an ontology and domain data corresponding to a domain associated with the ontology is obtained. A knowledge graph is constructed based on the ontology, and the domain data is incorporated into the knowledge graph. The knowledge graph is exploited to derive random variables of a probabilistic graph model. The random variables may be associated with probability distributions, which may include unknown parameters. A learning process is executed to learn the unknown parameters and obtain a joint distribution of the probabilistic graph model, which may enable querying of the probabilistic graph model in both probabilistic and deterministic manners.
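The flow above (derive random variables from graph entities, learn unknown distribution parameters from domain data, then answer probabilistic and deterministic queries against the joint distribution) can be sketched minimally. The abstract does not specify the learning method; this sketch assumes a toy two-variable discrete model (a pump's `Fault` state and an `Alarm` reading, both hypothetical) whose joint distribution is learned by counting:

```python
from collections import Counter

# Toy variables derived from a hypothetical ontology: a pump's Fault state
# and its Alarm sensor reading. Domain data gives observed (fault, alarm) pairs.
observations = [
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True),
    (False, False), (False, False),
]

def learn_joint(obs):
    """Learn the joint distribution's unknown parameters by frequency counting."""
    counts = Counter(obs)
    n = len(obs)
    return {pair: c / n for pair, c in counts.items()}

def marginal_fault(joint, fault):
    """Deterministic-style query: the marginal probability P(Fault = fault)."""
    return sum(p for (f, _), p in joint.items() if f == fault)

def posterior_fault_given_alarm(joint):
    """Probabilistic query: P(Fault=True | Alarm=True) via Bayes' rule."""
    num = joint.get((True, True), 0.0)
    den = sum(p for (_, a), p in joint.items() if a)
    return num / den
```

With the sample data, the marginal fault probability is 3/8 and the posterior given an alarm rises to 2/3, illustrating how a learned joint distribution supports both query styles.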
Characterization and sorting for particle analyzers
Non-parametric transforms such as t-distributed stochastic neighbor embedding (tSNE) are used to analyze multi-parametric data such as data derived from flow cytometry or other particle analysis systems and methods. These transforms may be used for dimensionality reduction and identification of subpopulations (e.g., gating). By nature, non-parametric transforms cannot transform new observations without training a new transformation based on the entire dataset including the new observations. The described features parameterize non-parametric transforms using a neural network, thereby allowing a small training dataset to be transformed using non-parametric techniques. The training dataset may then be used to generate an accurate parametric model for assessing additional events in a manner consistent with the initial events.
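The two-stage idea (non-parametric embedding of an initial dataset, then a parametric network that reproduces it for new events) can be sketched with scikit-learn. The abstract does not specify the network architecture or feature count; the 8 parameters, layer sizes, and regressor choice below are illustrative assumptions:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Small training dataset of initial events (e.g., 8 fluorescence parameters each).
X_train = rng.normal(size=(60, 8))

# Stage 1 (non-parametric): embed the training events with tSNE.
embedding = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(X_train)

# Stage 2 (parametric): fit a neural network that maps events to embedding coords.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X_train, embedding)

# Additional events can now be placed in the same map without re-running tSNE
# on the enlarged dataset.
X_new = rng.normal(size=(10, 8))
coords = net.predict(X_new)
```

The key property is that `net.predict` assesses new events consistently with the initial embedding, which plain tSNE cannot do without retraining on all events.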
APPARATUS AND METHOD FOR FUZZING FIRMWARE
An apparatus for fuzzing firmware according to an embodiment includes an emulator that provides a user mode emulation environment for firmware installed in any Internet of Things (IoT) device, a generator that generates one or more test cases in which at least some of a plurality of pre-set mutation operators are applied to at least one of a plurality of seed files, and an executor that executes mutation-based fuzzing on the firmware in the user mode emulation environment based on the one or more test cases.
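The generator's role (apply some of a set of pre-set mutation operators to seed files to produce test cases) can be sketched as follows. The operator names and selection strategy are illustrative assumptions, not taken from the disclosure:

```python
import random

# Hypothetical pre-set mutation operators; each takes seed bytes and an RNG.
def bit_flip(data: bytes, rng: random.Random) -> bytes:
    """Flip one random bit in the input."""
    if not data:
        return data
    buf = bytearray(data)
    buf[rng.randrange(len(buf))] ^= 1 << rng.randrange(8)
    return bytes(buf)

def byte_insert(data: bytes, rng: random.Random) -> bytes:
    """Insert one random byte at a random position."""
    i = rng.randrange(len(data) + 1)
    return data[:i] + bytes([rng.randrange(256)]) + data[i:]

def truncate(data: bytes, rng: random.Random) -> bytes:
    """Keep a random proper prefix of the input."""
    if len(data) < 2:
        return data
    return data[: rng.randrange(1, len(data))]

OPERATORS = [bit_flip, byte_insert, truncate]

def generate_test_cases(seeds, count, seed=0):
    """Apply at least some of the operators to randomly chosen seed files."""
    rng = random.Random(seed)
    cases = []
    for _ in range(count):
        data = rng.choice(seeds)
        for op in rng.sample(OPERATORS, rng.randint(1, len(OPERATORS))):
            data = op(data, rng)
        cases.append(data)
    return cases
```

The executor would then feed each generated case to the firmware binary running under the user-mode emulator and watch for crashes or hangs.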
Neural Network Inference and Training Using A Universal Coordinate Rotation Digital Computer
A system and method of implementing a neural network with a non-linear activation function is disclosed. A Universal Coordinate Rotation Digital Computer (CORDIC) is used to implement the activation function. Advantageously, the CORDIC is also used during training for back propagation. Using a CORDIC, activation functions such as hyperbolic tangent and sigmoid may be implemented without the use of a multiplier. Further, the derivatives of these functions, which are needed for back propagation, can also be implemented using the CORDIC.
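The shift-add mechanics behind this can be illustrated with a software model of hyperbolic-mode CORDIC, which yields cosh and sinh (and hence tanh as their ratio; in hardware that division would itself be done with a linear-mode CORDIC rather than a divider). The iteration count and schedule below are standard textbook choices, not details from the disclosure:

```python
import math

def cordic_hyperbolic(theta, n=24):
    """Rotation-mode hyperbolic CORDIC: returns (~cosh(theta), ~sinh(theta)).

    Valid for |theta| < ~1.118; each step uses only adds and shifts in
    hardware (the multiply by 2**-k is a right shift, and 1/gain and the
    atanh(2**-k) angles are precomputed constants).
    """
    # Iteration schedule k = 1, 2, 3, 4, 4, 5, ... with k = 4, 13, 40, ...
    # repeated, as hyperbolic CORDIC requires for convergence.
    ks, k, repeat = [], 1, 4
    while len(ks) < n:
        ks.append(k)
        if k == repeat:
            ks.append(k)
            repeat = 3 * repeat + 1
        k += 1
    ks = ks[:n]

    gain = 1.0
    for k in ks:
        gain *= math.sqrt(1.0 - 2.0 ** (-2 * k))

    x, y, z = 1.0 / gain, 0.0, theta
    for k in ks:
        d = 1.0 if z >= 0 else -1.0      # rotate to drive z toward 0
        t = 2.0 ** (-k)                   # a right shift in hardware
        x, y, z = x + d * y * t, y + d * x * t, z - d * math.atanh(t)
    return x, y
```

With `x, y = cordic_hyperbolic(a)`, the activation `tanh(a)` is `y / x`, and `sigmoid(a)` follows as `(1 + tanh(a / 2)) / 2`; the same datapath can evaluate the derivatives needed for back propagation, since `tanh'(a) = 1 - tanh(a)**2`.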
System for threshold detection using learning reinforcement
Systems, computer program products, and methods are described herein for dynamically determining performance benchmarking parameters based on reinforcement learning. The present invention is configured to: implement a first distributed impact simulation model on an application; initiate a reinforcement learning algorithm on the application, wherein initiating further comprises receiving a performance assessment output for one or more application parameters; initiate an optimization policy generation engine on the performance assessment output associated with the application parameters to generate an optimization policy that encodes the performance assessment output into rewards and costs; initiate an implementation of the optimization policy on the application to maximize an aggregated reward calculated from a portion of a first set of actions; automatically generate a second distributed impact simulation model using a second set of actions to be implemented on the application parameters; and implement the second distributed impact simulation model on the application.
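The core loop (encode performance assessments into rewards and costs, then pick the action maximizing aggregated reward) can be sketched as a simple epsilon-greedy bandit. The parameter settings, metrics, and reward weighting are all illustrative assumptions; the disclosure does not specify the algorithm family:

```python
import random

def reward(assessment):
    """Encode a performance assessment into reward minus cost (weights assumed)."""
    return assessment["throughput"] - 0.5 * assessment["latency"]

def measure(action, rng):
    """Stand-in for running the distributed impact simulation on the application."""
    base = {"low": (50, 40), "mid": (80, 30), "high": (95, 70)}[action]
    t, l = base
    return {"throughput": t + rng.gauss(0, 2), "latency": l + rng.gauss(0, 2)}

def optimize(actions, episodes=200, eps=0.1, seed=0):
    """Epsilon-greedy selection of the application-parameter action that
    maximizes the aggregated (average) reward."""
    rng = random.Random(seed)
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(episodes):
        if rng.random() < eps or not all(counts.values()):
            a = rng.choice(actions)            # explore
        else:
            a = max(actions, key=lambda a: totals[a] / counts[a])  # exploit
        totals[a] += reward(measure(a, rng))
        counts[a] += 1
    return max(actions, key=lambda a: totals[a] / max(counts[a], 1))
```

The best action found by the first round of learning would then seed the second distributed impact simulation model applied to the application.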