Patent classifications
G06F2201/835
Virtual time test environment
A test environment apparatus having processing circuitry is provided for testing an embedded system-under-test. The processing circuitry may be configured to implement the system-under-test for interaction with external test participants via messaging and to control operation of an inner agent and an outer agent. The inner agent may be implemented within a virtual machine that also implements the system-under-test, and the outer agent may be implemented external to that virtual machine. The inner agent and the outer agent may be controlled to operate collaboratively to trigger captures of snapshots that store current states of the system-under-test at respective times. Based on the timestamp of a delayed message, the agents may also trigger a rollback of the system-under-test using a snapshot for a selected time that provides a state of the system-under-test prior to the timestamp, permitting subsequent delivery of the delayed message with the system-under-test in the rollback state.
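The rollback mechanism can be pictured in a few lines. The Python below is a minimal sketch assuming an in-memory snapshot store; VirtualSUT, capture_snapshot, and deliver are hypothetical names, not the patented implementation.

    import bisect
    import copy

    class VirtualSUT:
        """Illustrative system-under-test with snapshot/rollback (hypothetical)."""
        def __init__(self):
            self.virtual_time = 0
            self.state = {}         # current state of the system-under-test
            self._snap_times = []   # sorted capture times
            self._snapshots = {}    # time -> deep-copied state

        def capture_snapshot(self):
            # Triggered collaboratively by the inner and outer agents.
            if self.virtual_time not in self._snapshots:
                bisect.insort(self._snap_times, self.virtual_time)
            self._snapshots[self.virtual_time] = copy.deepcopy(self.state)

        def deliver(self, message, timestamp):
            if timestamp < self.virtual_time:
                # Delayed message: roll back to the latest snapshot taken
                # strictly before the message timestamp, then deliver.
                i = bisect.bisect_left(self._snap_times, timestamp) - 1
                if i < 0:
                    raise RuntimeError("no snapshot precedes the timestamp")
                t = self._snap_times[i]
                self.state = copy.deepcopy(self._snapshots[t])
                self.virtual_time = t
            self.virtual_time = max(self.virtual_time, timestamp)
            self.state[timestamp] = message   # stand-in for applying the message

    sut = VirtualSUT()
    sut.capture_snapshot()              # snapshot at virtual time 0
    sut.deliver("m1", timestamp=10)
    sut.deliver("late", timestamp=5)    # rolls back to the time-0 snapshot first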
UTILIZING A TABLESPACE TO EXPORT TO A NATIVE DATABASE RECOVERY ENVIRONMENT
Systems and methods to utilize a tablespace to export to a native database recovery environment are described. The system receives file information and script information at a source host that operates in a native database recovery environment. The file information and the script information are received from a backup host that utilizes foreign snapshot files and foreign incremental files for storing the file information. The file information includes native snapshot files and native incremental files. The script information includes one or more scripts that execute, at the source host, to perform operations comprising: mounting the directories; opening an auxiliary database; restoring a tablespace in the auxiliary database; recovering the tablespace in the auxiliary database based on the native incremental files; exporting the tablespace metadata information from the auxiliary database; recovering the tablespace in the source database based on the tablespace metadata information; and unmounting the directories.
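The scripted sequence can be illustrated as follows. This Python sketch only orders the recited operations; the mount/restore/recover steps are printed stand-ins, not a vendor's recovery API, and all names are invented.

    from contextlib import contextmanager

    @contextmanager
    def mounted(directories):
        for d in directories:
            print(f"mount {d}")            # e.g. mounting backup-host exports
        try:
            yield
        finally:
            for d in reversed(directories):
                print(f"umount {d}")       # unmounting the directories

    def recover_tablespace(tablespace, snapshot_files, incremental_files, dirs):
        with mounted(dirs):
            print("open auxiliary database")
            print(f"restore {tablespace} from {snapshot_files}")
            print(f"recover {tablespace} using {incremental_files}")
            metadata = f"{tablespace}.dmp"     # exported tablespace metadata
            print(f"export metadata to {metadata}")
            print(f"recover {tablespace} in the source database via {metadata}")

    recover_tablespace("USERS", ["full_0.bak"], ["incr_1.bak", "incr_2.bak"],
                       ["/mnt/backup"])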
DATA DISTRIBUTION CONTROL APPARATUS, DATA DISTRIBUTION CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
Data confidentiality is maintained when analysis of the operation state of a facility is entrusted to an outside party. Time-difference information on the operation of the manufacturing apparatuses (RB1), (RB2), . . . arranged on the production line (LN) is stored, for each pair of apparatuses, in an inter-apparatus time difference information storage unit (33). When a failure occurs in one of the manufacturing apparatuses (RB1), (RB2), . . . , the operation estimation unit (13) selects, from among the peripheral apparatuses upstream and downstream of the failed apparatus, one on the upstream side and one on the downstream side, estimates the operation period of each selected peripheral apparatus associated with the failure, reads the log data corresponding to the estimated operation period from the operation history storage unit (31), and transmits the read log data to the external support center (SC).
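One way to picture the operation-period estimate: shift the failure time by the stored inter-apparatus offset and slice the log accordingly. The Python below is illustrative only; the offset values, margin, and function names are assumptions.

    from datetime import datetime, timedelta

    time_offsets = {("RB1", "RB2"): 45.0, ("RB2", "RB3"): 30.0}  # unit (33), seconds

    def estimate_period(failed, neighbor, failure_time, margin=10.0):
        """Shift the failure time by the stored inter-apparatus offset."""
        if (failed, neighbor) in time_offsets:        # downstream neighbor
            dt = time_offsets[(failed, neighbor)]
        else:                                         # upstream neighbor
            dt = -time_offsets[(neighbor, failed)]
        center = failure_time + timedelta(seconds=dt)
        return (center - timedelta(seconds=margin),
                center + timedelta(seconds=margin))

    failure = datetime(2024, 5, 1, 12, 0, 0)
    start, end = estimate_period("RB2", "RB1", failure)   # RB1 is upstream of RB2
    logs = [(datetime(2024, 5, 1, 11, 59, 20), "RB1 pick"),
            (datetime(2024, 5, 1, 12, 30, 0), "RB1 idle")]
    selected = [entry for entry in logs if start <= entry[0] <= end]
    print(selected)   # only the log slice for the estimated window is sent out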
PREDICTIVE BATCH JOB FAILURE DETECTION AND REMEDIATION
Systems, methods, and computer programming products for predicting, preventing, and remediating failures of batch jobs being executed and/or queued for processing at a future scheduled time. Batch job parameters, messages, and system logs are stored in knowledge bases and/or input into AI models for analysis. Using predictive analytics and/or machine learning, batch job failures are predicted before they occur. Mappings of the processes used by each batch job, historical data from previous batch jobs, and data identifying their success or failure build an archive that can be refined over time through active-learning feedback and AI modeling to predictively recommend actions that have historically prevented or remediated failures. Recommended actions are reported to the system administrator or applied automatically. As job failures occur over time, mapping the current system log to logs for unsuccessful batch jobs makes root cause analysis simpler and more automated.
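A hedged sketch of the predict-then-recommend loop follows, assuming tabular job features and scikit-learn; the feature columns, threshold, and remediation text are invented for illustration.

    from sklearn.ensemble import RandomForestClassifier

    # historical runs: [queue_depth, input_size_gb, avg_cpu_pct], label 1 = failed
    X = [[12, 4.0, 85], [3, 0.5, 20], [15, 6.2, 90], [2, 0.3, 15]]
    y = [1, 0, 1, 0]
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    remediations = {1: "reschedule to off-peak window and raise memory limit"}

    def check_queued_job(features):
        # Predict failure before the scheduled run, then recommend an action
        # that has historically prevented or remediated similar failures.
        p_fail = model.predict_proba([features])[0][1]
        if p_fail > 0.5:
            return f"predicted failure ({p_fail:.0%}): {remediations[1]}"
        return "no action needed"

    print(check_queued_job([14, 5.0, 88]))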
System and method for measuring navigation of a computer application
Described herein is a computer implemented method comprising receiving a navigation event generated at an application; determining the navigation event is a start navigation session event and recording the event as a startpoint of a navigation session; determining an end of the navigation session has occurred and, in response, recording an endpoint of the navigation session. The method further comprises calculating a navigation session metric for the navigation session; determining a particular category of the navigation session; and determining a measure of navigation success for the navigation session based on the navigation session metric and particular category.
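The session bookkeeping could look roughly like this in Python, assuming a count-of-events metric and per-category success thresholds; both are illustrative assumptions, not the claimed metric.

    from dataclasses import dataclass, field

    THRESHOLDS = {"search": 4, "browse": 10}   # max events deemed successful

    @dataclass
    class NavigationSession:
        category: str
        events: list = field(default_factory=list)

        def record(self, event):
            self.events.append(event)

        def metric(self):
            # navigation session metric: here, simply the number of events
            return len(self.events)

        def successful(self):
            # success is judged against the threshold for this category
            return self.metric() <= THRESHOLDS[self.category]

    session = NavigationSession(category="search")
    for e in ["start", "open-page", "filter", "end"]:   # startpoint .. endpoint
        session.record(e)
    print(session.metric(), session.successful())       # 4 True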
Dynamic triggering of block-level backups based on block change thresholds and corresponding file identities
A data storage management approach is disclosed that performs backup operations flexibly, based on a dynamic scheme of monitoring block changes occurring in production data. The illustrative system monitors block changes based on certain block-change thresholds and triggers block-level backups of the changed blocks when a threshold is passed. Block changes may be monitored in reference to particular files based on a reverse lookup mechanism. The illustrative system also collects and stores historical information on block changes, which may be used for reporting and predictive analysis.
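The threshold trigger with reverse lookup might be sketched as follows; the block-to-file table, threshold value, and history list are hypothetical stand-ins.

    BLOCK_THRESHOLD = 3                       # per-file changed-block threshold
    block_to_file = {0: "db.ibd", 1: "db.ibd", 2: "db.ibd", 7: "app.log"}

    changed = {}                              # file -> set of changed blocks
    history = []                              # for reporting/predictive analysis

    def on_block_write(block):
        f = block_to_file.get(block, "<unknown>")   # reverse lookup to a file
        changed.setdefault(f, set()).add(block)
        if len(changed[f]) >= BLOCK_THRESHOLD:
            blocks = sorted(changed.pop(f))
            history.append((f, blocks))       # record for historical analysis
            print(f"backing up blocks {blocks} of {f}")

    for b in [0, 1, 0, 2]:                    # the third distinct block trips it
        on_block_write(b)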
SYSTEMS AND METHODS OF CONTINUOUS STACK TRACE COLLECTION TO MONITOR AN APPLICATION ON A SERVER AND RESOLVE AN APPLICATION INCIDENT
Systems and methods are provided for performing, at a server, a stack trace of an application at a predetermined interval to generate a plurality of stack traces, where each stack trace of the plurality is from a different point in time based on the predetermined interval. The stack trace is performed both when the application is operating normally and when the application has had a failure. The stored stack traces are indexed by timestamp. The server may determine a state of the application based on at least one of the stack traces. The server may condense data for at least one of the indexed stack traces using predetermined failure scenarios for the application. The server may generate a report based on the condensed data and the state of the application, and may transmit the report for display.
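A minimal Python sketch of interval-based collection and condensation, using the standard traceback module; the scenario strings and the report shape are assumptions, not the claimed format.

    import time
    import traceback

    traces = {}                                   # timestamp -> stack trace text

    def capture():
        # index each captured stack trace by its timestamp
        traces[time.time()] = "".join(traceback.format_stack())

    SCENARIOS = ["deadlock", "OutOfMemory"]       # predetermined failure scenarios

    def condensed_report():
        # condense: count traces matching each predetermined scenario
        hits = {s: sum(s in t for t in traces.values()) for s in SCENARIOS}
        state = "failed" if any(hits.values()) else "normal"
        return {"state": state, "matches": hits, "samples": len(traces)}

    for _ in range(3):                            # stand-in for the scheduler
        capture()
        time.sleep(0.01)                          # the predetermined interval
    print(condensed_report())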
Data backup technique for backing up data to an object storage service
A system, method, and computer program product are provided for block-based backup of a storage device to an object storage service. This includes the generation of a data object that encapsulates the data of a data extent. The data extent covers a block address range of the storage device. The data object is named with a base name that represents the logical block address (LBA) of the data extent. An identifier that deterministically identifies the recovery point the data object is associated with is appended to the base name. The base name combined with the identifier forms the data object name. The named data object is then transmitted to the object storage service for backup of the data extent. At an initial backup, the full storage device is copied; in subsequent incremental backups, only the data extents that changed are backed up.
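The naming scheme is concrete enough for a worked example. In this sketch the 16-hex-digit LBA padding, the "-rp" separator, and the extent size are assumptions; only the base-name-plus-identifier structure comes from the abstract.

    EXTENT_BLOCKS = 2048                       # blocks per data extent (assumed)

    def object_name(lba, recovery_point):
        base = f"{lba:016x}"                   # base name encodes the extent LBA
        return f"{base}-rp{recovery_point:06d}"   # appended recovery-point id

    # Initial backup copies every extent; incrementals copy only changed ones.
    changed_extents = [0, 3]                   # extent indices changed since rp 1
    for i in changed_extents:
        print(object_name(i * EXTENT_BLOCKS, recovery_point=2))
    # prints: 0000000000000000-rp000002 and 0000000000001800-rp000002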
System and method for improved fault tolerance in a network cloud environment
Described herein are systems and methods for fault tolerance in a network cloud environment. In accordance with various embodiments, the present disclosure provides an improved fault tolerance solution, and improves the fault tolerance of systems, by way of failure prediction: predicting when the underlying infrastructure will fail and using those predictions to counteract the failure by spinning up or otherwise providing new components to compensate.
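A toy reconciliation loop conveys the idea, assuming a per-node risk score as the failure predictor; the score, threshold, and provisioning step are stand-ins for real telemetry and cloud APIs.

    FAILURE_SCORE = 0.8

    def predicted_to_fail(node):
        # stand-in predictor: e.g. a trend over error rates or hardware telemetry
        return node["health_risk"] >= FAILURE_SCORE

    def reconcile(nodes):
        for node in list(nodes):
            if predicted_to_fail(node):
                nodes.remove(node)
                nodes.append({"name": node["name"] + "-replacement",
                              "health_risk": 0.0})   # spin up a new component
                print(f"replaced {node['name']} before it failed")
        return nodes

    cluster = [{"name": "worker-1", "health_risk": 0.9},
               {"name": "worker-2", "health_risk": 0.1}]
    print(reconcile(cluster))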
EVENT AND INCIDENT TIMELINES
In some examples, a non-transitory computer-readable medium stores machine-readable instructions, which, when executed by a processor, cause the processor to: identify an event of a computing device from operational data of the computing device; evaluate the event to determine if the event is a non-routine event; and store the event to a timeline if the event is a non-routine event, where the timeline includes an incident of the computing device.
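The filter can be sketched with an allowlist of routine event types; the event records and the ROUTINE set here are illustrative assumptions.

    ROUTINE = {"heartbeat", "scheduled-scan"}

    timeline = []                                 # non-routine events + incident

    def record(event):
        if event["type"] not in ROUTINE:          # keep only non-routine events
            timeline.append(event)

    for e in [{"type": "heartbeat", "t": 1},
              {"type": "disk-error", "t": 2},
              {"type": "incident", "t": 3, "id": "INC-42"}]:
        record(e)
    print(timeline)   # the disk error and the incident share one timeline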