Patent classifications
G06F21/554
System for evaluation and weighting of resource usage activity
Embodiments of the present invention provide systems and methods for evaluating and weighting resource usage activity data. The system may establish a communicable link to a user device via a user application to receive resource activity data and historical data from one or more users or systems over multiple communication channels. The system may evaluate the historical data, determine evaluation criteria based on the perceived chance of loss associated with particular metadata characteristics, and use those criteria as weighted metrics to compute an overall evaluation score for the user when the resource activity data indicates that the user has conducted resource transfers with entities or channels identified in the historical data.
Malicious enterprise behavior detection tool
Embodiments of the present disclosure provide systems, methods, and non-transitory computer storage media for identifying malicious enterprise behaviors within a large enterprise. At a high level, embodiments of the present disclosure identify sub-graphs of behaviors within an enterprise based on probabilistic and deterministic methods. For example, starting with the node or edge having the highest risk score, embodiments of the present disclosure iteratively crawl a list of neighbors associated with the nodes or edges to identify subsets of behaviors within an enterprise that indicate potentially malicious activity based on the risk scores of each connected node and edge. In another example, embodiments select a target node and traverse the connected nodes via edges until a root-cause condition is met. Based on the traversal, a sub-graph is identified indicating a malicious execution path of traversed nodes with associated insights indicating the meaning or activity of the node.
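The crawl described above can be sketched as a simple graph traversal. This is an illustrative sketch only, not the patented implementation: it seeds at the highest-risk node and iteratively walks neighbor lists, keeping connected nodes whose risk scores exceed a threshold. The node names, risk scores, and threshold are hypothetical.

```python
# Hypothetical sketch: crawl a behavior graph from the highest-risk node,
# collecting connected high-risk nodes into a suspicious sub-graph.

def crawl_risky_subgraph(graph, risk, threshold):
    """graph: node -> list of neighbors; risk: node -> score in [0, 1]."""
    start = max(risk, key=risk.get)          # seed at the highest-risk node
    if risk[start] < threshold:
        return set()
    subgraph, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nbr in graph.get(node, []):      # crawl the list of neighbors
            if nbr not in subgraph and risk.get(nbr, 0.0) >= threshold:
                subgraph.add(nbr)
                frontier.append(nbr)
    return subgraph

# Invented example behaviors within an enterprise:
graph = {
    "login": ["priv_esc", "file_read"],
    "priv_esc": ["exfil"],
    "file_read": [],
    "exfil": [],
}
risk = {"login": 0.4, "priv_esc": 0.9, "file_read": 0.2, "exfil": 0.8}
print(sorted(crawl_risky_subgraph(graph, risk, threshold=0.5)))
```

The returned sub-graph ({"priv_esc", "exfil"} here) corresponds to the connected subset of behaviors flagged as potentially malicious.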
Information security system and method for anomaly and security threat detection
A system for detecting security threats in a computing device receives a first set of signals from components of the computing device. The first set of signals includes intercommunication electrical signals between the components of the computing device and electromagnetic radiation signals propagated from the components of the computing device. The system extracts baseline features from the first set of signals. The baseline features represent a unique electrical signature of the computing device. The system extracts test features from a second set of signals received from the components of the computing device. The system determines whether there is a deviation between the test features and the baseline features. If a deviation is detected, the system determines that the computing device is associated with a particular anomaly that makes it vulnerable to unauthorized access.
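The baseline-versus-test comparison above can be illustrated with a minimal sketch. The feature choice (per-signal mean and standard deviation) and the distance tolerance are assumptions for illustration, not the patent's actual signature-extraction method.

```python
# Hypothetical sketch: extract simple statistical features from signal
# samples and flag a deviation when the test features drift too far
# from the baseline "electrical signature".
import math

def extract_features(samples):
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return (mean, math.sqrt(var))

def deviates(baseline, test, tolerance=0.5):
    # Euclidean distance between feature vectors (illustrative metric).
    return math.dist(baseline, test) > tolerance

# Invented signal samples: normal readings vs. a tampered component.
baseline = extract_features([1.0, 1.1, 0.9, 1.0])
normal   = extract_features([1.0, 1.0, 1.1, 0.9])
attack   = extract_features([3.0, 3.2, 2.9, 3.1])
```

Under these sample values, `deviates(baseline, normal)` is false while `deviates(baseline, attack)` is true, mirroring the anomaly decision in the abstract.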
Machine learning adversarial campaign mitigation on a computing device
A method for mitigating machine learning adversarial campaigns on a computing device is disclosed. The method may include deploying an original machine learning model in a model environment associated with a client device; deploying a classification monitor in the model environment to monitor classification decision outputs of the machine learning model; detecting, by the classification monitor, a campaign of adversarial classification decision outputs in the machine learning model; applying a transformation function to the machine learning model in the model environment to transform the adversarial classification decision outputs and thwart the campaign; determining a malicious attack on the client device based in part on detecting the campaign of adversarial classification decision outputs; and implementing a security action to protect the client device against the malicious attack.
Tracking malicious software movement with an event graph
A multi-endpoint event graph is used to detect malware based on malicious software moving through a network.
SYSTEM FOR ACTIVE DETECTION AND MITIGATION OF UNAUTHORIZED ACTIVITY WITHIN A TECHNOLOGY INFRASTRUCTURE
Systems, computer program products, and methods are described herein for active detection and mitigation of unauthorized activity within a technology infrastructure. The present invention is configured to continuously monitor one or more incoming messages in one or more computing devices; detect one or more assessment vectors embedded in the one or more incoming messages; initiate an isolated virtual environment; redirect the one or more incoming messages associated with the one or more assessment vectors from the one or more computing devices to the isolated virtual environment; trigger an access routine to emulate, within the isolated virtual environment, an action of accessing the one or more incoming messages; determine, based on at least the access routine, whether the one or more incoming messages are associated with malware; and display a notification to a user indicating whether the one or more incoming messages are associated with malware.
ADAPTIVE DETECTION OF SECURITY THREATS THROUGH RETRAINING OF COMPUTER-IMPLEMENTED MODELS
Adapting detection of security threats, including by retraining computer-implemented models, is disclosed. An indication is received that a natural language processing model should be retrained. A set of training samples is generated that includes at least one synthetic training sample. The natural language processing model is retrained at least in part by using the set of generated training samples. The retrained natural language processing model is used to determine a likelihood that a message poses a risk.
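The retraining loop above can be illustrated with a toy sketch. The templates, keywords, and scoring rule are invented for illustration; a real NLP model would replace the keyword counter used here.

```python
# Hypothetical sketch: generate synthetic training samples from a template,
# retrain a toy keyword-based model, and score a message's risk likelihood.
from collections import Counter

def make_synthetic_samples(template, fillers):
    return [(template.format(f), 1) for f in fillers]   # label 1 = risky

def train(samples):
    risky = Counter()
    for text, label in samples:
        if label == 1:
            risky.update(text.lower().split())
    return risky

def risk_likelihood(model, message):
    words = message.lower().split()
    hits = sum(1 for w in words if model[w] > 0)
    return hits / len(words)

# Retrain with one real sample plus synthetic phishing-style samples.
samples = [("quarterly report attached", 0)]
samples += make_synthetic_samples("urgent: verify your {} now", ["password", "account"])
model = train(samples)
```

With this toy model, `risk_likelihood(model, "please verify your password")` is high while an unrelated message scores zero, mirroring the abstract's risk-likelihood output.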
Context-based secure controller operation and malware prevention
In one implementation, a method for providing security on an externally connected controller includes launching, by the controller, a security layer that includes a whitelist of permitted processes on the controller, the whitelist including (i) signatures for processes that are authorized to be executed and (ii) context information identifying permitted controller contexts within which the processes are authorized to be executed; determining, by the security layer, a signature for a particular process to be run on the controller; determining, by the security layer, whether the particular process is permitted to be run on the controller based on a comparison of the determined signature with a verified signature for the particular process from the whitelist; identifying, by the security layer, a current context for the controller; and determining, by the security layer, whether the particular process is permitted to be run on the controller based on a comparison of the current context with one or more permitted controller contexts for the particular process from the whitelist.
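The two whitelist checks can be sketched as follows: a process must match its verified signature and be running in a permitted controller context. The process name, binary contents, and context names are hypothetical, and SHA-256 stands in for whatever signature scheme an implementation would use.

```python
# Hypothetical sketch of signature + context whitelist checks.
import hashlib

WHITELIST = {
    "brake_monitor": {
        "signature": hashlib.sha256(b"brake_monitor_v1").hexdigest(),
        "contexts": {"driving", "parked"},
    },
}

def is_permitted(name, binary, current_context):
    entry = WHITELIST.get(name)
    if entry is None:
        return False                              # process not whitelisted
    sig = hashlib.sha256(binary).hexdigest()      # determined signature
    if sig != entry["signature"]:
        return False                              # fails signature check
    return current_context in entry["contexts"]   # fails/passes context check
```

A tampered binary or an unexpected context (say, running during an OTA update when only "driving" and "parked" are permitted) is rejected even though the process name is whitelisted.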
Scoring events using noise-contrastive estimation for anomaly detection
Techniques for monitoring a computing environment for anomalous activity are presented. An example method includes receiving a request to invoke an action within the computing environment. An anomaly score is generated for the received request by applying a probabilistic model to properties of the request. The anomaly score indicates a likelihood that the properties of the request correspond to historical activity within the computing environment for a user associated with the request. The probabilistic model comprises a model trained using historical activity within the computing environment for a plurality of users, the historical activity including information identifying an action performed in the computing environment and contextual information about a historical request. Based on the generated anomaly score, one or more actions are taken to process the request, such that execution of requests having anomaly scores indicative of unexpected activity may be blocked pending confirmation.
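As a simplified stand-in for the scoring step, the sketch below scores a request against smoothed frequency counts of a user's historical (action, context) pairs rather than a noise-contrastive estimation model; the surprisal-based score, smoothing constant, and history are illustrative assumptions.

```python
# Hypothetical sketch: score requests by how surprising their (user,
# action, context) properties are relative to historical activity.
from collections import Counter
import math

class ActivityModel:
    def __init__(self, history, alpha=1.0):
        self.counts = Counter(history)       # (user, action, context) tuples
        self.total = len(history)
        self.alpha = alpha                   # Laplace smoothing

    def anomaly_score(self, user, action, context):
        c = self.counts[(user, action, context)]
        p = (c + self.alpha) / (self.total + self.alpha * (len(self.counts) + 1))
        return -math.log(p)                  # surprisal: higher = more anomalous

# Invented history: "alice" mostly reads from the office.
history = [("alice", "read", "office")] * 50 + [("alice", "delete", "office")]
model = ActivityModel(history)
common = model.anomaly_score("alice", "read", "office")
rare = model.anomaly_score("alice", "delete", "vpn")
```

A policy layer would then compare the score against a threshold and block high-scoring requests pending confirmation, as the abstract describes.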
Systems and methods for detecting an attack on a battery management system
Systems and methods for detecting and/or identifying an attack on a battery management system (BMS) or a battery system. The voltage and/or state of charge (SOC) of the BMS or battery system can be monitored, and one or more datasets can be obtained. A principal component analysis (PCA) based unsupervised k-means approach can be applied on the one or more datasets to monitor for irregularities that indicate an attack.
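The PCA-plus-k-means idea can be sketched on synthetic voltage/SOC data. This is an illustrative sketch, not the patented method: readings are projected onto their top principal component, a tiny k-means (k=2) clusters the projection, and the small cluster far from the bulk of the data marks the irregularity. All dataset values are invented.

```python
# Hypothetical sketch: PCA projection + k-means (k=2) to flag irregular
# voltage / state-of-charge readings in a battery management system.
import numpy as np

def top_pc_projection(X):
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    pc = vecs[:, np.argmax(vals)]            # top principal component
    return Xc @ pc

def kmeans_1d(x, iters=20):
    centers = np.array([x.min(), x.max()])   # init at the extremes
    for _ in range(iters):
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean()
    return labels, centers

# Synthetic readings: columns are (voltage, state of charge).
rng = np.random.default_rng(0)
readings = rng.normal([3.7, 0.8], [0.01, 0.01], (50, 2))
readings[-3:] += [0.5, -0.3]                 # injected irregularity (attack)
labels, _ = kmeans_1d(top_pc_projection(readings))
```

With these values, the three tampered readings land in one cluster and the 47 normal readings in the other, so the minority cluster flags the attack.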