G06F11/1492

EMBEDDED DATA PROTECTION AND FORENSICS FOR PHYSICALLY UNSECURE REMOTE TERMINAL UNIT (RTU)

Systems and methods include a method for protecting data at a remote terminal unit (RTU) and providing audit trail information for forensics procedures. An instrumented security function (ISF) chip communicating with the RTU in a supervisory control and data acquisition (SCADA) network monitors for conditions detected at the RTU that warrant a data protection operation. Upon determining that such conditions exist, the ISF chip initiates the data protection operation. The system also provides audit trail information for forensics procedures upon detecting a threat in the vicinity of the RTU. The forensics procedure is invoked by initiating the localization services (HBL) embedded as part of the RTU's disk apparatus, triggered either by a change to the disk apparatus, such as a power disconnect, or by receiving a security signal from the NAC or from local occupancy sensors.
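A minimal sketch of the monitoring flow described above: a monitor checks observed conditions against a set of triggering conditions and initiates a data protection operation when any trigger fires. The class and function names (`IsfMonitor`, `protect_data`) and the specific trigger names are illustrative assumptions, not terms from the patent text.

```python
def protect_data(rtu_state):
    """Placeholder for the data protection operation (e.g., encrypt or wipe)."""
    rtu_state["protected"] = True
    return rtu_state

class IsfMonitor:
    def __init__(self, triggers):
        # triggers: condition names that warrant protection, e.g.
        # {"power_disconnect", "occupancy_alarm", "nac_signal"}
        self.triggers = set(triggers)

    def check(self, observed_conditions, rtu_state):
        # If any observed condition is a trigger, initiate protection.
        if self.triggers & set(observed_conditions):
            return protect_data(rtu_state)
        return rtu_state

monitor = IsfMonitor({"power_disconnect", "occupancy_alarm"})
state = monitor.check(["power_disconnect"], {"protected": False})
```

In a real RTU the check would run continuously against sensor and NAC inputs; here a single call stands in for one monitoring cycle.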

Generating Context Aware Consumable Instructions

A system, program product, and method for use with an information handling system to detect and resolve faults in a run-time environment. As faults are detected, one or more corresponding general query responses are identified and ranked against relevance criteria. At least one modified response is transformed into a command, selectively blended with context, and encoded as a context-aware instruction. The instruction is then tested, and the corresponding output is measured.
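An illustrative sketch of the rank-then-blend flow: general responses are scored against relevance criteria, and the top-ranked response is blended with run-time context into a consumable command. The keyword-overlap scoring and the string-blending step are assumptions for illustration; the abstract does not specify either.

```python
def rank_responses(responses, fault_keywords):
    # Rank candidate responses by keyword overlap with the detected fault.
    def score(resp):
        return sum(1 for kw in fault_keywords if kw in resp.lower())
    return sorted(responses, key=score, reverse=True)

def to_instruction(response, context):
    # Blend the top-ranked response with context into a command string.
    return f"{response} --host {context['host']} --service {context['service']}"

responses = ["Restart the database service", "Check network cable"]
ranked = rank_responses(responses, ["database", "restart"])
cmd = to_instruction(ranked[0], {"host": "node1", "service": "db"})
```

The resulting instruction could then be executed in a test environment and its output measured, matching the final step of the abstract.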

Transformation drift detection and remediation
10678660 · 2020-06-09

In various example embodiments, a system, computer-readable medium, and method detect and dynamically correct a transformation drift in a data pipeline, the method comprising: detecting a change in a transformation performed by an upstream subsystem of the data pipeline on a data field of an output dataset of the upstream subsystem; classifying the data field as an impacted data field; identifying, based on topology information of the data pipeline, a downstream subsystem downstream of the upstream subsystem; identifying an input dataset of the downstream subsystem including the impacted data field; and performing a corrective transformation on the impacted data field of the input dataset of the downstream subsystem.
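The steps of the method can be sketched as follows, under assumed data structures: a topology maps each upstream subsystem to its downstream consumers, drift is detected as a change in a field's transformation contract, and a corrective transformation is applied to the impacted field in each downstream input dataset. All names here are illustrative.

```python
# topology: upstream subsystem -> downstream subsystems of the pipeline
topology = {"ingest": ["enrich", "report"]}

def detect_drift(expected, observed):
    # Fields whose transformation output changed are classified as impacted.
    return [f for f in expected if expected[f] != observed.get(f)]

def remediate(upstream, impacted_fields, datasets, correct):
    # Apply the corrective transformation to impacted fields of each
    # downstream input dataset identified via the topology.
    for downstream in topology.get(upstream, []):
        for row in datasets[downstream]:
            for field in impacted_fields:
                if field in row:
                    row[field] = correct(row[field])

expected = {"amount": "cents"}      # transformation contract for the field
observed = {"amount": "dollars"}    # the transformation has drifted
impacted = detect_drift(expected, observed)

datasets = {"enrich": [{"amount": 12.5}], "report": [{"amount": 3.0}]}
remediate("ingest", impacted, datasets, correct=lambda v: round(v * 100))
```

Here the corrective transformation (dollars back to cents) is a stand-in; in practice it would be derived from the detected change.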

Global naming for inter-cluster replication

Systems for multi-cluster virtualized computing system management. A method for performing virtual entity replication between source computing clusters and target computing clusters commences upon establishing a virtual entity naming convention that is observed by both the source computing clusters and the target computing clusters. A snapshot from a source cluster is associated with a global snapshot ID before being transmitted to a target computing cluster. At some point in time, the source cluster will initiate acts to replicate a virtual entity to a particular data state that is associated with a particular named snapshot. A second replication protocol then commences. The second replication protocol includes exchanges that serve to determine whether or not the target computing cluster has a copy of a particular named snapshot as named by the global snapshot ID, and if so, to then initiate virtual entity replication at the target computing cluster using the named snapshot.
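A hedged sketch of the second replication protocol: the source asks the target whether it already holds the snapshot named by the global snapshot ID, ships the snapshot only if it does not, and then replicates the virtual entity from that shared base. The `Cluster` class and method names are assumptions for illustration.

```python
class Cluster:
    def __init__(self, name):
        self.name = name
        self.snapshots = {}  # global snapshot ID -> snapshot data

    def has_snapshot(self, global_id):
        return global_id in self.snapshots

def replicate_entity(source, target, global_id):
    # Exchange: does the target already have the named snapshot?
    shipped = not target.has_snapshot(global_id)
    if shipped:
        # Ship the snapshot before starting entity replication.
        target.snapshots[global_id] = source.snapshots[global_id]
    # Replicate the virtual entity using the (now shared) named snapshot.
    return {"base": global_id, "shipped": shipped}

src, tgt = Cluster("src"), Cluster("tgt")
src.snapshots["vm1@2024-01-01"] = b"snapshot-bytes"
first = replicate_entity(src, tgt, "vm1@2024-01-01")   # ships the snapshot
second = replicate_entity(src, tgt, "vm1@2024-01-01")  # reuses the copy
```

The payoff of the global naming convention is visible in the second call: because both clusters agree on the snapshot's name, the transfer is skipped.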

Managing a computing cluster using time interval counters

A method for processing state update requests in a distributed data processing system with a number of processing nodes includes maintaining a number of counters over a series of time intervals: a working counter indicating the current time interval, a replication counter indicating a time interval for which all associated requests are replicated at multiple processing nodes, and a persistence counter indicating a time interval for which all associated requests are stored in persistent storage. The counters are used to manage processing of the state update requests.
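An illustrative model of the three counters: the working counter advances with time, while the replication and persistence counters trail it, each advancing only once every request in an interval has reached the corresponding durability level. The invariants enforced below (persistence cannot outrun replication, which cannot outrun the working interval) are plausible assumptions, not stated in the abstract.

```python
class IntervalCounters:
    def __init__(self):
        self.working = 0      # current time interval
        self.replicated = -1  # all requests <= this interval are replicated
        self.persisted = -1   # all requests <= this interval are on disk

    def advance_working(self):
        self.working += 1

    def mark_replicated(self, interval):
        # Only intervals strictly before the working interval can complete.
        if interval == self.replicated + 1 and interval < self.working:
            self.replicated = interval

    def mark_persisted(self, interval):
        # Persistence cannot outrun replication.
        if interval == self.persisted + 1 and interval <= self.replicated:
            self.persisted = interval

c = IntervalCounters()
c.advance_working()   # working=1; interval 0 is now closed
c.mark_replicated(0)  # every interval-0 request is on multiple nodes
c.mark_persisted(0)   # every interval-0 request is in persistent storage
```

A coordinator could use these counters, for example, to acknowledge requests once their interval falls at or below the replication counter.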

TRANSFORMATION DRIFT DETECTION AND REMEDIATION
20190391887 · 2019-12-26

In various example embodiments, a system, computer-readable medium, and method detect and dynamically correct a transformation drift in a data pipeline, the method comprising: detecting a change in a transformation performed by an upstream subsystem of the data pipeline on a data field of an output dataset of the upstream subsystem; classifying the data field as an impacted data field; identifying, based on topology information of the data pipeline, a downstream subsystem downstream of the upstream subsystem; identifying an input dataset of the downstream subsystem including the impacted data field; and performing a corrective transformation on the impacted data field of the input dataset of the downstream subsystem.

Processing tasks in a processing system

A method of processing an input task in a processing system involves duplicating the input task so as to form a first task and a second task; allocating memory including a first block of memory configured to store read-write data to be accessed during the processing of the first task; a second block of memory configured to store a copy of the read-write data to be accessed during the processing of the second task; and a third block of memory configured to store read-only data to be accessed during the processing of both the first task and the second task; and processing the first task and the second task at processing logic of the processing system so as to, respectively, generate first and second outputs.
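The dual-processing scheme can be sketched under assumed semantics: the input task is duplicated, each copy gets its own read-write memory block, both share a single read-only block, and the two outputs are produced independently (e.g., so they can be compared for fault detection). The structure and function names are illustrative.

```python
import copy

def process(task, read_write, read_only):
    # Toy processing step: mutate per-task state, read shared constants.
    read_write["steps"] += 1
    return task["value"] * read_only["scale"] + read_write["steps"]

def run_duplicated(input_task, read_only_data):
    first, second = input_task, copy.deepcopy(input_task)  # duplicate the task
    rw1 = {"steps": 0}        # first block: read-write data for the first task
    rw2 = copy.deepcopy(rw1)  # second block: its own copy of the read-write data
    ro = read_only_data       # third block: read-only data shared by both tasks
    out1 = process(first, rw1, ro)
    out2 = process(second, rw2, ro)
    return out1, out2

out1, out2 = run_duplicated({"value": 7}, {"scale": 3})
```

Giving each task its own read-write block keeps the two executions independent, while the shared read-only block avoids duplicating data neither task mutates.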

Self healing fast sync any point in time replication systems using augmented Merkle trees
11921747 · 2024-03-05

Replication operations replicate data from a production site to a replica site using independent Merkle trees. The Merkle trees are augmented with a time-based value and are updated asynchronously. Synchronization is verified by comparing root hashes of the independent Merkle trees at certain points in time. The replication and Merkle trees are self-healing and trigger a resynchronization when a discrepancy is discovered.
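A minimal sketch of the verification step: each side maintains its own tree, leaf hashes are mixed with a time-based value, and the root hashes are compared at a checkpoint, with a mismatch triggering resynchronization. The specific hashing scheme and the simplified tree construction are assumptions for illustration.

```python
import hashlib

def leaf_hash(data, timestamp):
    # Augment each leaf with a time-based value before hashing.
    return hashlib.sha256(data + str(timestamp).encode()).hexdigest()

def root_hash(leaves):
    # Fold leaf hashes pairwise up to a single root (simplified Merkle tree).
    level = leaves
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate last hash on odd levels
        level = [hashlib.sha256((a + b).encode()).hexdigest()
                 for a, b in zip(level[::2], level[1::2])]
    return level[0]

def verify_and_heal(prod_leaves, replica_leaves, resync):
    if root_hash(prod_leaves) != root_hash(replica_leaves):
        resync()  # discrepancy discovered: trigger resynchronization
        return False
    return True

t = 1700000000
prod = [leaf_hash(b"block-a", t), leaf_hash(b"block-b", t)]
replica = [leaf_hash(b"block-a", t), leaf_hash(b"block-b", t)]
in_sync = verify_and_heal(prod, replica, resync=lambda: None)
```

Comparing only root hashes keeps the periodic check cheap; a full implementation would walk down mismatched subtrees to locate exactly which blocks need resynchronization.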

PREDICTIVE ANALYSIS, SCHEDULING AND OBSERVATION SYSTEM FOR USE WITH LOADING MULTIPLE FILES
20190370129 · 2019-12-05

A method for creating a common platform graphical user interface is provided. The interface may enable a user to trigger a data load job from a tool. The tool may monitor file upload events, trigger jobs, and identify lists of missing or problematic file names. The tool may run on a single thread, thereby consuming fewer system resources than a multi-threaded program performing the same capabilities. The tool may enable selection of file names using wildcard variables or keyword variables. The tool may validate a list of files received against a master file list for each data load job. The tool may receive user input relating to each data load job. The tool may generate a loop within the single thread to receive information. The tool may analyze the received information and use it to predict future metadata associated with future data load jobs.
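The validation step described above can be sketched as a single-threaded check of received file names against a master list with wildcard support: each master pattern must match at least one received file (otherwise it is reported missing), and any file matching no pattern is flagged as problematic. The function name and reporting shape are illustrative assumptions.

```python
import fnmatch

def validate_files(received, master_patterns):
    # Each master pattern (shell-style wildcards allowed) must match
    # at least one received file name; unmatched patterns are "missing".
    missing = [p for p in master_patterns
               if not any(fnmatch.fnmatch(f, p) for f in received)]
    # Received files matching no pattern are flagged as problematic.
    problematic = [f for f in received
                   if not any(fnmatch.fnmatch(f, p) for p in master_patterns)]
    return missing, problematic

received = ["sales_2024.csv", "notes.tmp"]
master = ["sales_*.csv", "inventory_*.csv"]
missing, problematic = validate_files(received, master)
```

Because the check is a pure function over two lists, it fits naturally inside the tool's single-threaded event loop, invoked once per batch of file upload events.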