Patent classification: G06F11/326
Test controller for cloud-based applications
A method for testing a cloud-based software application for offline operation may include generating a test user interface displaying a first result of the cloud-based software application operating in an offline mode and updating the test user interface to display a second result of the cloud-based software application operating in an online mode. Inconsistencies between the first result of the cloud-based software application operating in the offline mode and the second result of the cloud-based software application operating in the online mode may be detected based on the first result and the second result displayed in the test user interface. A runtime environment of the cloud-based software application operating in the offline mode may be modified, for example, iteratively, in order to eliminate the inconsistencies between the first result and the second result. Related systems and articles of manufacture are also provided.
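The iterative reconcile loop described above can be sketched as follows. This is an illustrative sketch only: the function names (`run_offline`, `run_online`, `adjust_runtime`) and the dictionary-shaped results are assumptions, not part of the patent.

```python
# Hypothetical sketch of the claimed test loop: run the application in
# offline and online modes, detect inconsistencies between the two
# results, and iteratively adjust the offline runtime environment until
# the results agree.

def find_inconsistencies(offline_result, online_result):
    """Return the set of keys whose values differ between the two results."""
    keys = set(offline_result) | set(online_result)
    return {k for k in keys if offline_result.get(k) != online_result.get(k)}

def reconcile(run_offline, run_online, adjust_runtime, max_iterations=10):
    """Iteratively modify the offline runtime until the results match."""
    online_result = run_online()
    for _ in range(max_iterations):
        offline_result = run_offline()
        diff = find_inconsistencies(offline_result, online_result)
        if not diff:
            return offline_result      # modes are now consistent
        adjust_runtime(diff)           # e.g. refresh cached data for the differing keys
    raise RuntimeError(f"still inconsistent after {max_iterations} iterations: {diff}")
```

In this sketch `adjust_runtime` stands in for whatever modification of the offline runtime environment (cache contents, feature flags, stored state) eliminates the observed differences.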
MULTI-LAYERED DISASTER RECOVERY MANAGER
A system includes a production server, a backup server, a telemetry analyzer, a memory, and a hardware processor. The telemetry analyzer takes snapshots of various performance metrics of the production server. The memory stores a log of previous disasters that occurred on the production server. The log includes a snapshot of the production server performance metrics from the time each disaster occurred. The memory also stores recovery scripts for each logged disaster. Each script provides instructions for resolving the linked disaster. The hardware processor uses a machine learning architecture to train an autoencoder. The trained autoencoder receives new snapshots from the telemetry analyzer and generates a reconstruction of the new snapshots. The hardware processor then determines a threshold for distinguishing between server disasters and minor anomalies. This distinction is made by comparing the threshold with the difference between the new snapshots and their reconstruction.
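The reconstruction-error thresholding step can be sketched in a few lines. Note the heavy assumptions: a real system would use a trained autoencoder, whereas here `reconstruct` is a trivial stand-in (predicting a fixed per-metric baseline) so that only the thresholding logic is demonstrated.

```python
import statistics

# Sketch of disaster-vs-anomaly thresholding on reconstruction error.
# `reconstruct` is a stand-in for the trained autoencoder: it predicts a
# fixed baseline value per metric. A real autoencoder would reconstruct
# the input from a learned compressed representation.

def reconstruct(snapshot, baseline):
    """Stand-in reconstruction: predict the baseline value for each metric."""
    return {metric: baseline[metric] for metric in snapshot}

def reconstruction_error(snapshot, reconstruction):
    """Sum of absolute per-metric differences (an assumed error measure)."""
    return sum(abs(snapshot[m] - reconstruction[m]) for m in snapshot)

def fit_threshold(normal_snapshots, baseline, k=3.0):
    """Set threshold = mean + k * stdev of errors on known-normal snapshots."""
    errors = [reconstruction_error(s, reconstruct(s, baseline))
              for s in normal_snapshots]
    return statistics.mean(errors) + k * statistics.pstdev(errors)

def classify(snapshot, baseline, threshold):
    """Large reconstruction error => disaster; small => minor anomaly/normal."""
    err = reconstruction_error(snapshot, reconstruct(snapshot, baseline))
    return "disaster" if err > threshold else "minor anomaly or normal"
```

The design intuition is the standard one for autoencoder anomaly detection: the model reconstructs normal telemetry well, so a reconstruction error far above the threshold signals a state unlike anything seen during training.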
FILE CAPTURE AND PROCESSING SYSTEM WITH IN-LINE ERROR DETECTION
A system is provided for file capture and in-line error correction. The system comprises: a controller configured to capture and process a file during a file capture sequence comprising one or more capture events, the controller being configured to: capture a file using a file capture device associated with a user device, wherein capturing the file comprises receiving a file image and a user input data field associated with the file; identify, using an in-line file analysis module, an error with a capture event during the file capture sequence, wherein the error is a discrepancy identified between the user input data field and an image-derived data field; correct the error in the file before the file capture sequence is completed and the file is exported; and export the file, wherein the file capture sequence is completed.
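The discrepancy check between the user-entered field and the image-derived field can be sketched as below. The field name (`amount`), the normalization rule, and the "trust the image-derived value" correction policy are all illustrative assumptions, not taken from the patent.

```python
# Sketch of the in-line error check: compare the user-entered data field
# with the field derived from the captured image, and resolve any
# discrepancy before the file capture sequence completes and the file
# is exported.

def normalize(value):
    """Compare fields case-insensitively, ignoring surrounding whitespace."""
    return value.strip().lower()

def detect_discrepancy(user_field, image_field):
    """An error exists when the two fields disagree after normalization."""
    return normalize(user_field) != normalize(image_field)

def capture_file(file_image, user_field, image_field):
    """Capture a file, correct any field discrepancy in-line, then export."""
    record = {"image": file_image, "amount": user_field}
    if detect_discrepancy(user_field, image_field):
        # Assumed correction policy: treat the image-derived value as
        # authoritative and overwrite the user-entered value.
        record["amount"] = image_field
        record["corrected"] = True
    return record  # export happens only after any correction
```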
SECURE CONFIGURATION CORRECTIONS USING ARTIFICIAL INTELLIGENCE
Methods and systems for detecting and responding to erroneous application configurations are presented. In one embodiment, a method is provided that includes receiving a configuration for an application and receiving execution metrics for the application. The configuration and the execution metrics may be compared to a knowledge base of reference configurations and reference execution metrics and a particular reference configuration may be identified from the knowledge base that corresponds to the configuration. The particular reference configuration may represent an erroneous configuration of the application that needs to be corrected. A configuration correction may then be identified based on the particular reference configuration.
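The knowledge-base lookup can be sketched as a nearest-reference search. The distance function (differing config keys plus metric deviation) and the knowledge-base schema are illustrative assumptions; the patent does not specify them.

```python
# Sketch of matching an observed (configuration, execution metrics) pair
# against a knowledge base of reference entries, and returning the
# correction attached to the closest matching erroneous reference.

def distance(config, metrics, reference):
    """Assumed distance: count of differing config keys plus the total
    absolute deviation of the execution metrics from the reference."""
    config_diff = sum(1 for k in config
                      if config.get(k) != reference["config"].get(k))
    metric_diff = sum(abs(metrics[m] - reference["metrics"].get(m, 0))
                      for m in metrics)
    return config_diff + metric_diff

def suggest_correction(config, metrics, knowledge_base):
    """Return the correction of the nearest reference entry."""
    best = min(knowledge_base, key=lambda ref: distance(config, metrics, ref))
    return best["correction"]
```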
Backup control method and backup control system
A backup control method is proposed to include: (A) two control units executing firmware such that the control units respectively operate in a master mode and a slave mode; (B) the control unit that operates in the master mode generating a health signal when executing the firmware; (C) a logic arithmetic unit determining, based on the health signal, whether the control unit that operates in the master mode functions normally; and (D) when the control unit that operates in the master mode is determined to not function normally, the logic arithmetic unit controlling a light emitting element to emit light, and notifying the control unit that operates in the slave mode such that the control unit which operates in the slave mode enters the master mode.
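Steps (B) through (D) amount to a watchdog check on the master's health signal. A minimal sketch, assuming a heartbeat-timestamp health signal and a staleness timeout (the class names, timeout value, and dictionary-based indicator are all hypothetical):

```python
import time

# Sketch of the backup control flow: the master periodically emits a
# health signal while executing firmware; the logic arithmetic unit
# treats a stale signal as a failure, lights the light emitting element,
# and promotes the slave control unit to master.

class ControlUnit:
    def __init__(self, mode):
        self.mode = mode  # "master" or "slave"
        self.last_health_signal = time.monotonic()

    def emit_health_signal(self):
        """Called by the firmware loop while it is running normally."""
        self.last_health_signal = time.monotonic()

def check_and_failover(master, slave, indicator, timeout=1.0, now=None):
    """Logic arithmetic unit: if the master's health signal is older than
    `timeout` seconds, light the indicator and promote the slave."""
    now = time.monotonic() if now is None else now
    if now - master.last_health_signal > timeout:
        indicator["lit"] = True   # light emitting element turned on
        slave.mode = "master"     # slave enters the master mode
        master.mode = "failed"
    return slave.mode
```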
INTERACTIVE ELECTRONIC DOCUMENTATION FOR OPERATIONAL ACTIVITIES
Various embodiments support or provide for interactive electronic documentation (or an electronic document) for operational activities associated with a system or service, such as one monitored or maintained by a system administrator or engineer. In particular, some embodiments provide for an interactive electronic document associated with a runbook, which can comprise a set of actions (e.g., list of operations, procedures, steps, and the like) to be performed with respect to a system or service in connection with an operational event, such as a system/service incident, scheduled maintenance, or a support operation.
ERROR RATE MEASURING APPARATUS AND ERROR COUNTING METHOD
An error rate measuring apparatus includes: an operation unit that sets one Codeword length and one FEC Symbol length of FEC according to a communication standard of a device under test; data division means for dividing symbol string data, obtained by converting a signal received from the device under test, into MSB data and LSB data; a data comparison unit that compares each of the divided MSB data and LSB data with error data to detect MSB errors and LSB errors for each one Codeword length, and detects FEC Symbol Errors of each of the MSB data and the LSB data at one FEC Symbol interval; and error counting means for counting the detected MSB errors and LSB errors, and counting the FEC Symbol Errors.
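The comparison-and-counting step can be sketched as below, applied once to the MSB bit stream and once to the LSB bit stream. The symbol and codeword sizes in the sketch are illustrative, not taken from any particular communication standard.

```python
# Sketch of the error counting: XOR the received bits against the
# reference pattern, tally bit errors over one Codeword, and count one
# FEC Symbol Error whenever any bit within a symbol interval differs.

def count_errors(received_bits, reference_bits, symbol_len, codeword_len):
    """Return (bit_errors, fec_symbol_errors) over one Codeword.
    Run this once on the MSB data and once on the LSB data."""
    assert len(received_bits) == len(reference_bits) == codeword_len
    bit_errors = 0
    symbol_errors = 0
    for start in range(0, codeword_len, symbol_len):
        symbol_diff = [r ^ e for r, e in zip(
            received_bits[start:start + symbol_len],
            reference_bits[start:start + symbol_len])]
        bit_errors += sum(symbol_diff)
        if any(symbol_diff):
            symbol_errors += 1  # at most one FEC Symbol Error per symbol
    return bit_errors, symbol_errors
```

This reflects the usual FEC accounting: a symbol with one flipped bit and a symbol with several flipped bits each contribute a single FEC Symbol Error, while the bit-error count records every flipped bit.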
ASSIGNMENT OF TEST CASE PRIORITIES BASED ON COMBINATORIAL TEST DESIGN MODEL ANALYSIS
A method for assigning test case priority includes analyzing, based on a set of test vectors, one or more test cases from a set of test cases on source code to determine a particular combination of attribute values associated with the one or more analyzed test cases. The method further includes generating a priority value for each attribute in the determined particular combination of attribute values. A priority value for each of the analyzed one or more test cases is generated based on the generated priority values of the particular combination of attribute values associated with the analyzed one or more test cases.
SELF-POWERED INDICATOR
A rack-mountable computer system includes: a power supply unit comprising a power storage module and a power switch module; a controller configured to generate a signal indicating status information of the rack-mountable computer system; an indicator configured to indicate the status information of the rack-mountable computer system; and a signal latch module. In a power loss event, which occurs when the rack-mountable computer system is detached from a rack or when system power supplied to the rack from an external power supply unit is lost, the power supply unit switches the power supply of the indicator from the external power supply unit to the power supply unit and continues supplying power to the indicator after the power loss event occurs. The signal latch module is configured to latch, in response to the power loss event, the signal that is generated by the controller and indicates the status information of the rack-mountable computer system.
System and method for data error notification in interconnected data production systems
An error notification system includes a plurality of data production systems in communication with a monitoring server. Each data production system has a data processor configured to receive input data from a first set of data production systems, process the input data to produce output data, and make the output data accessible to a second set of data production systems. The monitoring server is configured to monitor data transmissions between the data production systems and to identify, for each data transmission, the originating and receiving systems. The monitoring server is further configured to map data flow from each originating source system to identify all downstream data production systems. Upon identification of a data error in an originating source system, the monitoring server obtains data error information, assembles a data error notification, and transmits the data error notification to the data production systems meeting system notification criteria.
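The monitoring server's mapping-and-notification flow can be sketched as a graph traversal. The notification criteria and message format below are illustrative assumptions.

```python
from collections import defaultdict, deque

# Sketch of the monitoring server: build a flow map from observed
# (originating -> receiving) transmissions, walk it to find every system
# downstream of the error origin, and notify those systems that meet the
# notification criteria.

def build_flow_map(transmissions):
    """transmissions: iterable of (originating, receiving) system IDs."""
    flow = defaultdict(set)
    for origin, receiver in transmissions:
        flow[origin].add(receiver)
    return flow

def downstream_systems(flow, origin):
    """Breadth-first walk to every system reachable from `origin`."""
    seen, queue = set(), deque([origin])
    while queue:
        for receiver in flow[queue.popleft()]:
            if receiver not in seen:
                seen.add(receiver)
                queue.append(receiver)
    return seen

def notify(flow, origin, error_info, criteria=lambda system: True):
    """Assemble a data error notification for each qualifying downstream
    system; returns {system_id: notification}."""
    return {system: {"source": origin, "error": error_info}
            for system in downstream_systems(flow, origin)
            if criteria(system)}
```

Transitive reachability matters here: a system two or three hops away from the faulty originator still consumed derived data, so the breadth-first walk, rather than a one-hop lookup, determines who gets notified.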