Patent classifications
G06F2221/031
BOOTING AND OPERATING COMPUTING DEVICES AT DESIGNATED LOCATIONS
Aspects of the disclosure provide mechanisms for booting and operating a computing device at a target location. A method of the disclosure includes determining, at a startup process of a computing device, whether the computing device is present at a designated target location by checking, using a first near-field communication (NFC) device associated with the computing device, for the presence of a second NFC device positioned at the target location. The method further includes, in response to detecting the presence of the second NFC device, acquiring a cryptographic key from the second NFC device. The method also includes decrypting contents associated with the computing device using the cryptographic key, and performing, using the decrypted contents, a boot process for the computing device in response to determining that the computing device is at the target location.
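The flow described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the `NfcReader` class, tag identifiers, and the XOR keystream cipher are all stand-in assumptions (a real system would talk to NFC hardware and use an authenticated cipher such as AES-GCM).

```python
import hashlib

class NfcReader:
    """Hypothetical stand-in for NFC hardware: maps detectable tag IDs
    to the key bytes each tag would return when read."""
    def __init__(self, tags):
        self._tags = tags  # {tag_id: key_bytes} visible at this location

    def detect(self, tag_id):
        return tag_id in self._tags

    def read_key(self, tag_id):
        return self._tags[tag_id]

def xor_stream(data, key):
    # Toy keystream cipher (XOR with a hash-expanded key), used here only
    # so the example is self-contained and reversible.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def boot(reader, target_tag_id, encrypted_boot_image):
    """Startup process: boot only if the target location's NFC tag is present."""
    if not reader.detect(target_tag_id):
        raise RuntimeError("device is not at the designated target location")
    key = reader.read_key(target_tag_id)              # acquire cryptographic key
    contents = xor_stream(encrypted_boot_image, key)  # decrypt boot contents
    return contents                                   # hand off to boot process
```

Because the key never leaves the location-bound tag, a device stolen from the rack cannot decrypt its own boot contents elsewhere.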
Decentralized trust assessment
A decentralized trust assessment system, comprising a neural network, a trust module, and a local subsystem, wherein the trust module determines whether a plurality of inputs to the local subsystem are trustworthy. The decentralized trust assessment system provides rotorcraft and tiltrotor aircraft with airborne systems able to detect bad and spoofed data from a wide variety of data streams.
Method and system for automatically identifying and correcting security vulnerabilities in API
Systems and methods are provided to identify static security vulnerabilities in an API. The security vulnerabilities may be in an API proxy bundle, which includes a configuration of an API proxy for the API. The API proxy is executable in an API gateway of an API management platform. A search is performed for any security policy specified in the API proxy bundle. A compliance failure may be determined, which is a failure of the configuration of the API proxy to comply with a set of security rules. The API proxy bundle may be corrected to address the compliance failure.
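A rough sketch of the search-and-correct loop follows. The bundle layout and the policy names (`VerifyAPIKey`, `SpikeArrest`) are illustrative assumptions, not the platform's actual schema or rule set.

```python
# Example "set of security rules": policies the bundle must configure.
REQUIRED_POLICIES = {"VerifyAPIKey", "SpikeArrest"}

def find_compliance_failures(proxy_bundle):
    """Search the security policies specified in an API proxy bundle and
    report each required policy the configuration fails to include."""
    present = {p["type"] for p in proxy_bundle.get("policies", [])}
    return sorted(REQUIRED_POLICIES - present)

def correct_bundle(proxy_bundle):
    """Correct the bundle by attaching a default configuration for each
    missing security policy."""
    for missing in find_compliance_failures(proxy_bundle):
        proxy_bundle.setdefault("policies", []).append(
            {"type": missing, "config": "default"})
    return proxy_bundle
```

Running the scan again after correction confirms the bundle now complies with every rule in the set.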
SYSTEMS AND METHODS FOR IMPROVING ACCURACY IN RECOGNIZING AND NEUTRALIZING INJECTION ATTACKS IN COMPUTER SERVICES
Systems and methods for analyzing SQL queries for constraint violations indicative of injection attacks. Tokenizing a SQL query generates a token stream. A parse tree is constructed by iterating over lexical nodes of the token stream. The parse tree is compared to a SQL schema and access configuration for a database in order to analyze the SQL query for constraint violations. Evaluation flaws are also detected. A step-wise, bottom-up approach is employed to walk through the parse tree to detect types and to ascertain from those types whether the condition for SQL execution is static or dynamic. SQL request security engine logic refers to predetermined protective action data and takes the particular type of action specified by the predetermined protective action data. Security is further enhanced by limiting service of requests to requests of one or more specific, accepted data types. Each request is parsed into individual data elements, each an associated key-value pair. If the key of any data element of the request matches a predetermined allowed key, detection and neutralization of any injection attack in the associated value data of the data element is bypassed. A number of patterns that match information to be obscured in logs are established, and any matching information is replaced with obscured data. When recording information to the logs, any data whose key is a predetermined masked key is replaced with obscured data.
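The allowed-key bypass and masked-key logging described above can be sketched briefly. The allow/mask lists and the injection regex are toy assumptions standing in for the patent's parse-tree analysis and predetermined protective action data.

```python
import re

ALLOWED_KEYS = {"page_id"}     # keys whose values skip injection detection
MASKED_KEYS = {"password"}     # keys obscured before recording to logs
# Crude pattern for common SQL injection fragments (illustrative only).
INJECTION = re.compile(r"('|--|;|\bOR\b|\bUNION\b)", re.IGNORECASE)

def check_request(params):
    """Parse a request into key-value data elements; detect injection in
    each value unless the key matches a predetermined allowed key."""
    flagged = []
    for key, value in params.items():
        if key in ALLOWED_KEYS:
            continue                  # bypass detection for allowed keys
        if INJECTION.search(value):
            flagged.append(key)       # protective action taken downstream
    return flagged

def log_safe(params):
    """Replace values of predetermined masked keys with obscured data."""
    return {k: ("***" if k in MASKED_KEYS else v) for k, v in params.items()}
```

Note the two mechanisms are independent: a value can be flagged for neutralization yet still appear in logs unless its key is also masked.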
Permission control method and related product
Disclosed are a permission control method and a related product, relating to the technical field of mobile terminals. The method comprises: a mobile terminal (200) using a processor (110) to notify, where it is determined that an operation requested by a user is of a pre-set operation type, more than one biometric recognition module of the mobile terminal to acquire N pieces of biometric information about the user; and then the processor (110) matching the N pieces of biometric information against pre-set biometric information templates, and if all N pieces of biometric information successfully match the pre-set templates, executing the operation requested by the user.
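The all-must-match rule can be sketched as below. The sensitive-operation set is an illustrative assumption, and matching is modeled as template equality rather than real biometric scoring.

```python
# Example "pre-set operation types" that trigger multi-biometric checks.
SENSITIVE_OPERATIONS = {"payment", "unlock_private_files"}

def permit(operation, captured, templates):
    """Permit the operation only if every captured piece of biometric
    information (e.g. fingerprint AND face) matches its stored template."""
    if operation not in SENSITIVE_OPERATIONS:
        return True                       # non-sensitive: no extra checks
    if not captured:
        return False                      # sensitive ops require biometrics
    return all(templates.get(kind) == sample
               for kind, sample in captured.items())
```

Requiring N independent modalities to all succeed raises the bar over any single biometric check: spoofing one sensor is no longer enough.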
SYSTEMS AND METHODS FOR GENERATING ATTACK TACTIC PROBABILITIES FOR HISTORICAL TEXT DOCUMENTS
In one embodiment, a method includes receiving a historical text document that is associated with a breach event. The method also includes searching for an attack tactic within the historical text document using a machine learning algorithm. The method further includes generating a probability that the attack tactic exists within the historical text document, comparing the probability to a predetermined probability threshold, and categorizing the historical text document based on the probability.
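The probability-and-threshold step can be illustrated as follows. The keyword-frequency scorer is a stub standing in for the machine learning algorithm; the labels and default threshold are assumptions.

```python
def score_tactic(document, tactic):
    """Stand-in for an ML classifier: crude keyword frequency mapped to a
    probability in [0, 1]. A real system would use a trained model."""
    words = document.lower().split()
    if not words:
        return 0.0
    return min(1.0, words.count(tactic.lower()) / len(words) * 10)

def categorize(document, tactic, threshold=0.5):
    """Generate a probability that the attack tactic exists within the
    historical document, compare it to the predetermined threshold, and
    categorize the document accordingly."""
    p = score_tactic(document, tactic)
    return ("contains-tactic" if p >= threshold else "no-tactic", p)
```

Categorized documents can then be routed, e.g. into a corpus for training tactic-specific detectors.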
SECURE FINGERPRINT IMAGE SYSTEM
Herein disclosed are approaches for protecting sensitive information within a fingerprint authentication system that can be snooped and utilized to access the device, secured information, or a secured application. The approaches can utilize encryption keys and hash functions that are unique to the device in which the fingerprint authentication is being performed to protect the sensitive information that can be snooped.
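One way to realize "keys and hash functions unique to the device" is a keyed hash over the fingerprint template, sketched below. The feature encoding and key provisioning are assumptions; the point is that a snooped digest is useless on any other device.

```python
import hashlib
import hmac

def protect_template(device_key, fingerprint_features):
    """Bind a fingerprint template to this device using a device-unique
    key, so snooped data cannot authenticate on another device."""
    return hmac.new(device_key, fingerprint_features, hashlib.sha256).digest()

def verify(device_key, stored_digest, candidate_features):
    """Recompute the keyed hash for a fresh capture and compare it to the
    stored digest in constant time."""
    candidate = protect_template(device_key, candidate_features)
    return hmac.compare_digest(stored_digest, candidate)
```

`hmac.compare_digest` avoids timing side channels that a naive `==` comparison would leak.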
Protecting machine learning models from privacy attacks
This disclosure describes methods and systems for protecting machine learning models against privacy attacks. A machine learning model may be trained using a set of training data and causal relationship data. The causal relationship data may describe a subset of features in the training data that have a causal relationship with the outcome. The machine learning model may learn a function that predicts an outcome based on the training data and the causal relationship data. A predefined privacy guarantee value may be received. An amount of noise may be added to the machine learning model to make a privacy guarantee value of the machine learning model equivalent to or stronger than the predefined privacy guarantee value. The amount of noise may be added at a parameter level of the machine learning model.
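Adding noise at the parameter level can be sketched with Laplace noise, where a smaller epsilon (a stronger privacy guarantee) forces a larger noise scale. Treating the guarantee as a plain epsilon and the noise as Laplace are simplifying assumptions for illustration.

```python
import random

def add_parameter_noise(params, sensitivity, epsilon):
    """Add Laplace noise to each model parameter. The scale sensitivity /
    epsilon grows as the required privacy guarantee gets stronger
    (epsilon gets smaller)."""
    scale = sensitivity / epsilon
    noisy = []
    for w in params:
        # Laplace(0, scale) as the difference of two exponentials.
        noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
        noisy.append(w + noise)
    return noisy
```

Restricting training to causally relevant features, as the disclosure describes, can lower the sensitivity term, so less noise is needed for the same guarantee.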
Trusted service for detecting attacks on trusted execution environments
A method for providing a trusted service to a trusted execution environment running on a remote host machine includes receiving a message from the trusted execution environment and incrementing a counter of the trusted service. A response message is sent to the trusted execution environment using a value of the incremented counter.
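The counter's security value is rollback detection: a replayed or rolled-back response carries a stale counter value. A minimal sketch, with illustrative message formats (the real service would authenticate both endpoints):

```python
class TrustedCounterService:
    """Remote trusted service holding a monotonic counter."""
    def __init__(self):
        self._counter = 0

    def handle(self, message):
        """Receive a message from the trusted execution environment,
        increment the counter, and respond with the incremented value."""
        self._counter += 1
        return {"echo": message, "counter": self._counter}

class EnclaveClient:
    """Runs inside the trusted execution environment on the host machine."""
    def __init__(self, service):
        self._service = service
        self._last = 0

    def call(self, message):
        resp = self._service.handle(message)
        if resp["counter"] <= self._last:   # stale value => replay/rollback
            raise RuntimeError("rollback or replay detected")
        self._last = resp["counter"]
        return resp
```

Because the counter only moves forward, an attacker who replays an old service response cannot present a value larger than the enclave's last-seen counter.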
Mutually Distrusting Enclaves
A method for accessing one or more service processes of a service includes executing at least one service enclave and executing an enclave sandbox that wraps the at least one service enclave. The at least one service enclave provides an interface to the one or more service processes. The enclave sandbox is configured to establish an encrypted communication tunnel to the at least one service enclave interfacing with the one or more service processes, and to communicate program calls to/from the one or more service processes as encrypted communications through the encrypted communication tunnel.
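The sandbox-wraps-enclave arrangement can be sketched as below. The XOR keystream "tunnel" and the uppercase "service process" are toy assumptions; a real tunnel would be negotiated via remote attestation and use an authenticated cipher.

```python
import hashlib

def _keystream(key, n):
    """Toy counter-mode keystream, for illustration only."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key, data):
    # XOR with the keystream; applying it twice decrypts.
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

class ServiceEnclave:
    """Interfaces to the service process; sees only decrypted program calls."""
    def __init__(self, key):
        self._key = key

    def handle(self, ciphertext):
        call = encrypt(self._key, ciphertext)   # decrypt incoming call
        result = call.upper()                   # stand-in service process
        return encrypt(self._key, result)       # encrypt the response

class EnclaveSandbox:
    """Wraps the enclave; relays program calls only as encrypted traffic."""
    def __init__(self, enclave, key):
        self._enclave, self._key = enclave, key

    def call(self, program_call):
        ct = encrypt(self._key, program_call)
        return encrypt(self._key, self._enclave.handle(ct))
```

Because every call crosses the sandbox boundary only in encrypted form, the mutually distrusting sandbox never observes plaintext program calls or results.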