ADAPTIVE READ THRESHOLD VOLTAGE TRACKING WITH BIT ERROR RATE ESTIMATION BASED ON NON-LINEAR SYNDROME WEIGHT MAPPING
Adaptive read threshold voltage tracking techniques are provided that employ bit error rate estimation based on a non-linear syndrome weight mapping. An exemplary device comprises a controller configured to determine a bit error rate for at least one of a plurality of read threshold voltages in a memory using a non-linear mapping of a syndrome weight to the bit error rate for the at least one of the plurality of read threshold voltages.
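The core idea above, estimating BER from syndrome weight through a non-linear (here piecewise-linear) mapping and picking the best read threshold, can be sketched as follows. This is a minimal illustration; the calibration table, function names, and threshold selection are assumptions, not the patented implementation.

```python
import bisect

# Hypothetical calibration table of (syndrome weight, bit error rate) pairs.
# The mapping is non-linear because syndrome weight saturates as errors grow.
_SW_TO_BER = [(0, 0.0), (50, 1e-4), (200, 1e-3), (600, 1e-2), (900, 5e-2)]

def estimate_ber(syndrome_weight: int) -> float:
    """Map a decoder syndrome weight to an estimated bit error rate by
    piecewise-linear interpolation over the calibration table."""
    weights = [w for w, _ in _SW_TO_BER]
    if syndrome_weight <= weights[0]:
        return _SW_TO_BER[0][1]
    if syndrome_weight >= weights[-1]:
        return _SW_TO_BER[-1][1]
    i = bisect.bisect_right(weights, syndrome_weight)
    (w0, b0), (w1, b1) = _SW_TO_BER[i - 1], _SW_TO_BER[i]
    frac = (syndrome_weight - w0) / (w1 - w0)
    return b0 + frac * (b1 - b0)

def best_read_threshold(sw_per_threshold: dict) -> int:
    """Pick the read threshold voltage whose estimated BER is lowest."""
    return min(sw_per_threshold, key=lambda v: estimate_ber(sw_per_threshold[v]))
```

In practice the table would be derived from device characterization data, and the mapping may be stored per page type or program/erase cycle range.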
Control system, control method, and control program
A control system includes an information processing device that communicates with a controller that controls a control target. The controller or the information processing device includes a storage device that stores, as log data, one or more SQL statements to be executed in association with the execution result of each statement. The information processing device includes a display controller that displays, on a display, an SQL statement to be corrected that has an unsuccessful execution result; an operation unit that accepts a correction operation on the SQL statement and an execution operation; and a communication interface that, upon receipt of the execution operation, sends an execution instruction for the corrected SQL statement to the controller and receives the execution result of the corrected SQL statement from the controller. The display controller then displays the execution result of the corrected SQL statement.
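The log-and-correct loop described above can be sketched as follows. The class and function names are hypothetical, and the controller is modeled by a plain `execute` callable rather than a real communication interface.

```python
class SqlLog:
    """Minimal sketch of the log store: each entry pairs an SQL statement
    with its execution result ('OK' or an error string)."""
    def __init__(self):
        self.entries = []  # list of (sql, result) tuples

    def record(self, sql, result):
        self.entries.append((sql, result))

    def failed_statements(self):
        """Statements with an unsuccessful execution result, i.e. the
        candidates the display controller would show for correction."""
        return [sql for sql, result in self.entries if result != "OK"]

def resubmit(log, corrected_sql, execute):
    """Send the corrected statement to the controller (modeled by the
    `execute` callable), log the new result, and return it."""
    result = execute(corrected_sql)
    log.record(corrected_sql, result)
    return result
```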
CONFIGURATION ASSESSMENT BASED ON INVENTORY
Systems and methods are described for facilitating operation of a plurality of computing devices. Data indicative of enumerated resources of a computing device is collected. The data is collected without dependency on write permissions to a file system of the computing device. A condition of the computing device is determined based on historical data associated with enumerated resources of other computing devices. The identified condition can be updated as updated historical data becomes available. A communication to the computing device may be sent based on the identified condition.
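One simple way to determine a condition from historical inventory data is to flag resources that are rarely seen across the fleet. The sketch below assumes this frequency-based heuristic; the function name, rarity threshold, and report shape are illustrative, not from the patent.

```python
from collections import Counter

def assess_condition(device_resources, historical_inventories, rarity=0.1):
    """Flag enumerated resources present on fewer than `rarity` of the
    historically observed devices; such outliers may indicate a
    misconfiguration. Only read-only inventory data is consulted."""
    n = len(historical_inventories)
    freq = Counter()
    for inventory in historical_inventories:
        freq.update(set(inventory))
    unusual = [r for r in device_resources if freq[r] / n < rarity]
    return {"status": "attention" if unusual else "normal",
            "unusual_resources": sorted(unusual)}
```

As updated historical data arrives, the assessment can simply be recomputed over the newer inventories.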
Addressing Storage Device Performance
Improving storage device performance, including: initiating, on a storage device, execution of a rehabilitative action from a set of rehabilitative actions that can be performed on the storage device; determining that the storage device is operating outside of a defined range of expected operating parameters after the rehabilitative action has been executed; and, responsive to determining that a higher-level rehabilitative action exists, initiating execution of the higher-level rehabilitative action.
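The escalation loop can be sketched as follows, assuming the actions are ordered from least to most invasive and each is a callable that mutates device state. All names here are illustrative.

```python
def rehabilitate(device, actions, in_range):
    """Run rehabilitative actions in escalating order. After each action,
    check whether the device is back within its expected operating
    parameters; escalate to the next (higher-level) action only if one
    exists. Returns the name of the sufficient action, or None if the
    set of actions is exhausted."""
    for action in actions:
        action(device)
        if in_range(device):
            return action.__name__
    return None
```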
Maintaining A Synchronous Replication Relationship Between Two Or More Storage Systems
Maintaining a synchronous replication relationship between two or more storage systems, including: receiving, by at least one of a plurality of storage systems across which a dataset will be synchronously replicated, timing information for at least one of the plurality of storage systems; and establishing, based on the timing information, a synchronous replication lease describing a period of time during which the synchronous replication relationship is valid, wherein a request to modify the dataset may only be acknowledged after a copy of the dataset has been modified on each of the storage systems.
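The lease semantics above, a write is acknowledged only while the lease is valid and after every member system has applied the modification, can be modeled as below. The class and function names are assumptions; each storage system is stood in for by a callable that reports whether its copy was modified.

```python
import time

class ReplicationLease:
    """Sketch of a synchronous-replication lease: a period of time during
    which the replication relationship is considered valid."""
    def __init__(self, duration_s, now=time.monotonic):
        self.now = now
        self.expires_at = now() + duration_s

    def valid(self):
        return self.now() < self.expires_at

def acknowledge_write(lease, apply_on_each):
    """`apply_on_each` holds one callable per storage system, each
    returning True once its copy of the dataset has been modified.
    The write may only be acknowledged if the lease is still valid
    and every copy was modified."""
    if not lease.valid():
        return False  # lease expired: synchrony cannot be guaranteed
    return all(apply_fn() for apply_fn in apply_on_each)
```

Injecting the clock (`now`) keeps the sketch testable and mirrors how lease expiry would be checked against each system's timing information.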
COMPUTING SYSTEMS AND METHODS FOR CREATING AND EXECUTING USER-DEFINED ANOMALY DETECTION RULES AND GENERATING NOTIFICATIONS FOR DETECTED ANOMALIES
A computing platform may be installed with software technology for creating and executing user-defined anomaly detection rules that configures the computing platform to: (1) receive, from a client device, data defining a given anomaly detection rule that has been created by a user, wherein the given anomaly detection rule comprises at least one anomaly condition that is to be applied to at least one streaming event queue, (2) store a data representation of the given anomaly detection rule in a data store, (3) convert the data representation of the given anomaly detection rule to a streaming query statement, (4) iteratively apply the streaming query statement to the at least one streaming event queue, and (5) while iteratively applying the streaming query statement, make at least one determination that the at least one anomaly condition is satisfied and then cause at least one anomaly notification to be issued to the user.
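Steps (3) through (5), converting a stored rule into a streaming query and iteratively applying it to an event queue, can be sketched as follows. The rule schema, the query dialect, and the operator set are all assumptions for illustration.

```python
def rule_to_query(rule):
    """Convert a stored rule representation into a streaming-query-style
    statement (a stand-in for a real streaming SQL dialect)."""
    return (f"SELECT * FROM {rule['queue']} "
            f"WHERE {rule['field']} {rule['op']} {rule['value']}")

def apply_rule(rule, events, notify):
    """Iteratively apply the rule's anomaly condition to a streaming
    event queue, issuing a notification for each event that satisfies
    it; returns the number of anomalies detected."""
    ops = {">": lambda a, b: a > b,
           "<": lambda a, b: a < b,
           "==": lambda a, b: a == b}
    hits = 0
    for event in events:
        if ops[rule["op"]](event[rule["field"]], rule["value"]):
            notify(event)
            hits += 1
    return hits
```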
STORAGE ERROR IDENTIFICATION/REDUCTION SYSTEM
A storage error identification/reduction system includes a storage error identification/reduction subsystem coupled to a storage subsystem including a block. The storage error identification/reduction subsystem receives first data, and writes the first data to first storage locations in the block while writing storage error identification data to second storage location(s) in the block that each are located adjacent at least one of the first storage locations, with the storage error identification data including predetermined values that are written to predetermined locations included in the second storage location(s) in the block. The storage error identification/reduction subsystem then reads the storage error identification data from the second storage location(s) and, based on the predetermined values and predetermined locations of the storage error identification data, identifies errors resulting from the reading of the storage error identification data. Based on the errors, the storage error identification/reduction subsystem determines and performs error reduction operation(s).
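The write/read-back scheme, predetermined values at predetermined locations adjacent to the data, can be illustrated with a flat layout where every data block is followed by a known marker. The marker value and layout are hypothetical.

```python
def interleave_with_markers(data_blocks, marker=0xA5):
    """Lay out first data alongside storage error identification data:
    a known marker value is written adjacent to each data block."""
    laid_out = []
    for block in data_blocks:
        laid_out.append(block)
        laid_out.append(marker)
    return laid_out

def identify_marker_errors(laid_out, marker=0xA5):
    """Read back only the marker (second) locations; any mismatch
    against the predetermined value identifies a storage error near
    that location. Returns the indices of corrupted markers."""
    return [i for i in range(1, len(laid_out), 2) if laid_out[i] != marker]
```

The identified error locations could then drive error reduction operations, e.g. rewriting or retiring the affected region.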
Troubleshooting for a distributed storage system by cluster-wide correlation analysis
A troubleshooting technique provides faster and more efficient troubleshooting of issues in a distributed system, such as a distributed storage system provided by a virtualized computing environment. The distributed system includes a plurality of hosts arranged in a cluster. The troubleshooting technique uses cluster-wide correlation analysis to identify potential causes of a particular issue in the distributed system, and executes workflows to remedy the particular issue.
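One plausible form of cluster-wide correlation analysis is to correlate each host's metric time series against a signal representing the observed issue and rank the metrics by correlation strength. The sketch below assumes Pearson correlation; the patent does not specify the statistic.

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient (no external libraries)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def rank_suspect_metrics(issue_signal, cluster_metrics):
    """Correlate every per-host metric series in the cluster against the
    issue signal and rank metrics by |correlation|, so the most likely
    causes surface first."""
    scored = [(name, pearson(issue_signal, series))
              for name, series in cluster_metrics.items()]
    return sorted(scored, key=lambda t: abs(t[1]), reverse=True)
```

The top-ranked metrics would then select which remediation workflow to execute.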
MODEL TRAINING METHOD, FAILURE DETERMINING METHOD, ELECTRONIC DEVICE, AND PROGRAM PRODUCT
Embodiments of the present disclosure relate to a model training method, a failure determining method, an electronic device, and a computer program product. The model training method includes: acquiring a plurality of disk failure data sets collected in a first time period; acquiring another disk failure data set that is collected at a predetermined time point after the first time period and that indicates failure information of at least one failed sector set; and training a failure determining model based on the plurality of disk failure data sets and the failure information, so that the probability that the failure information predicted by the trained model for the predetermined time point matches the actual failure information is greater than a first threshold probability. Using the technical solution of the present disclosure, it is possible to predict the failure information that will occur in a sector set of a disk based on the disk failure data set associated with a failed sector, so that a user or administrator of the disk can learn in advance about failure conditions that will occur in the disk's sector sets.
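As a deliberately simplified stand-in for the training step, one can build a frequency-based "model" from the historical data sets plus the later labeled data set, and predict the sector sets whose empirical failure probability exceeds a threshold. The real method would train a learned model; everything below is an illustrative assumption.

```python
def train_failure_model(history_sets, observed_failures):
    """Estimate, per sector set, an empirical failure probability from
    the data sets collected in the first time period plus the labeled
    data set collected at the later predetermined time point."""
    counts, failures = {}, {}
    for snapshot in list(history_sets) + [observed_failures]:
        for sector_set, failed in snapshot.items():
            counts[sector_set] = counts.get(sector_set, 0) + 1
            failures[sector_set] = failures.get(sector_set, 0) + int(failed)
    return {s: failures[s] / counts[s] for s in counts}

def predict_failing_sets(model, threshold=0.5):
    """Sector sets whose predicted failure probability exceeds the
    threshold probability, i.e. those to warn the administrator about."""
    return sorted(s for s, p in model.items() if p > threshold)
```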
Systems And Methods For Self-Healing And/Or Failure Analysis Of Information Handling System Storage
Systems and methods are provided that may be implemented to perform failure analysis and/or self-healing of information handling system storage. In one example, an information handling system may perform self-recovery actions to self-heal system storage issues when there is an OS boot failure due to a failure to detect a system storage drive, by determining one or more possible recovery actions based on the current system storage drive status retrieved by an embedded controller (EC) or other programmable integrated circuit of the information handling system. In another example, manufacturing quality control analysis may be performed on boot failure information that is collected at a remote server from multiple failed information handling systems.
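The first example, mapping a retrieved drive status to candidate recovery actions and attempting them in order, can be sketched as below. The status values, action names, and mapping are hypothetical.

```python
# Hypothetical mapping from a drive status (as retrieved by an embedded
# controller) to an ordered list of candidate self-recovery actions.
RECOVERY_ACTIONS = {
    "not_detected": ["reseat_link", "reset_controller", "restore_boot_order"],
    "degraded": ["run_self_test", "remap_bad_sectors"],
}

def self_heal(drive_status, try_action):
    """Attempt the recovery actions associated with the drive status in
    order, stopping at the first one that succeeds (`try_action` returns
    True). Returns the action used, or None if nothing helped or the
    status is unrecognized."""
    for action in RECOVERY_ACTIONS.get(drive_status, []):
        if try_action(action):
            return action
    return None
```

A None result would be the point at which boot failure information gets escalated, e.g. reported to the remote server for quality control analysis.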