G06F11/3086

IN-FLIGHT DETECTION OF SERVER TELEMETRY REPORT DRIFT

A first information handling system may receive a telemetry metric report from a client information handling system. The first information handling system may determine that one or more characteristics of the telemetry metric report do not match one or more predetermined telemetry metric report characteristics. The first information handling system may then perform one or more corrective actions based, at least in part, on that determination.
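The compare-then-remediate flow described above can be sketched in Python. The characteristic names (`schema_version`, `metric_count`, `interval_s`) and the corrective-action strings are illustrative assumptions, not taken from the abstract.

```python
# Sketch: detect drift between a received telemetry report and the
# predetermined report characteristics, then choose corrective actions.
# All characteristic names and actions below are hypothetical examples.

EXPECTED = {"schema_version": "2.1", "metric_count": 12, "interval_s": 60}

def detect_drift(report):
    """Return the report characteristics that do not match expectations."""
    return [key for key, expected in EXPECTED.items()
            if report.get(key) != expected]

def corrective_actions(drifted):
    """Map each drifted characteristic to a corrective action."""
    actions = {
        "schema_version": "request client agent update",
        "metric_count": "re-request full telemetry report",
        "interval_s": "push corrected collection interval to client",
    }
    return [actions[name] for name in drifted]

report = {"schema_version": "2.1", "metric_count": 9, "interval_s": 60}
drifted = detect_drift(report)
```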

Message Cloud

A method for error management is provided. The method comprises receiving a message call request regarding an error event generated by a software application. The message call request comprises a message ID associated with an error type. In response to the call request, a message cache is searched for the message ID. If the message ID is in the cache, an error message associated with the ID is returned. The error message provides a description of the error and a suggested remedial action. If the message ID is not in the cache, the error message is fetched from a message repository that contains error messages corresponding to respective message IDs. The fetched error message is loaded into the cache and returned. Message call request data is stored in a metrics repository. The message call request data comprises frequency metrics that describe how often the message ID is received.
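The cache-first lookup with repository fallback and frequency metrics can be sketched as a small class. The class name, the dict-backed repository, and the sample message ID are assumptions for illustration.

```python
class MessageService:
    """Cache-first lookup of error messages by message ID (illustrative)."""

    def __init__(self, repository):
        self.repository = repository   # message ID -> error message text
        self.cache = {}                # message cache
        self.metrics = {}              # message ID -> call frequency

    def get_message(self, message_id):
        # Record frequency metrics for the metrics repository.
        self.metrics[message_id] = self.metrics.get(message_id, 0) + 1
        if message_id not in self.cache:
            # Cache miss: fetch from the repository and load into the cache.
            self.cache[message_id] = self.repository[message_id]
        return self.cache[message_id]

svc = MessageService({"E100": "Disk full. Free space and retry."})
```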

ANALYSIS INFORMATION MANAGEMENT METHOD AND ANALYSIS INFORMATION MANAGEMENT SYSTEM
20230019010 · 2023-01-19

One mode of the analysis information management method according to the present invention is a method for managing information related to an analysis by an analyzing device, using a computer or computers, including the steps of: collecting, as comprehensive log information, work-log information related to a use of the analyzing device; calculating the number of executions of each of the types of work, including a manual analysis operation and a batch analysis operation, using at least a portion of the comprehensive log information collected in the step of collecting; presenting, to a user, information on the number of executions of each type of work obtained in the step of calculating; receiving, from the user, an input of one or more types of work selected from the types of work; and presenting, to the user, the work-log information concerning the one or more types of work received from the user.
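The counting and selection steps above can be sketched in a few lines. The log-entry layout and work-type names are hypothetical.

```python
from collections import Counter

# Hypothetical comprehensive log: each entry records a type of work.
work_log = [
    {"type": "manual analysis", "detail": "sample A"},
    {"type": "batch analysis", "detail": "plate 3"},
    {"type": "manual analysis", "detail": "sample B"},
]

# Calculating step: number of executions of each type of work.
counts = Counter(entry["type"] for entry in work_log)

# Receiving/presenting steps: given the user's selected types,
# present the matching work-log entries.
def entries_for(selected_types, log):
    return [entry for entry in log if entry["type"] in selected_types]
```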

SYSTEM EVENT ANALYSIS AND DATA MANAGEMENT

Techniques are provided for analyzing events incoming through a message broker and configuring a database schema for storing the events based on the analysis. The analysis is performed on all the attributes of the incoming events with reference to a primary identifier of an event source. The analysis determines the characteristics of the attributes, which facilitates development of the database schema based on the availability, accuracy, existence, and other characteristics of the various attributes. Analysis is supported for various event formats, such as AVRO, XML, complex JSON, etc. In some examples, the attributes of interest for database schema generation can be provided via a configuration for the respective databases, including relational, time-series, analytical, graph, etc. Also, if a given database supports direct ingestion of data through the message broker, then the ingestion specification can be generated.
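The attribute-analysis step can be sketched over already-parsed events (parsing of AVRO/XML/JSON is out of scope here). The characteristic names (`types`, `availability`, `is_primary`) are illustrative choices, not the patent's terminology.

```python
def infer_schema(events, primary_id):
    """Derive per-attribute characteristics (types seen, availability)
    from a batch of already-parsed events."""
    total = len(events)
    attrs = {}
    for event in events:
        for key, value in event.items():
            info = attrs.setdefault(key, {"types": set(), "present": 0})
            info["types"].add(type(value).__name__)
            info["present"] += 1
    return {
        key: {
            "types": sorted(info["types"]),
            # Availability: fraction of events carrying this attribute.
            "availability": info["present"] / total,
            "is_primary": key == primary_id,
        }
        for key, info in attrs.items()
    }
```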

Elastic buffer in a memory sub-system for debugging information

A processing device in a memory sub-system determines to send system state information associated with the memory device to a host system and identifies a subset of a plurality of event entries from a staging buffer based on one or more filtering factors, the plurality of event entries corresponding to events associated with the memory device. The processing device then sends the subset of the plurality of event entries as the system state information to the host system over a communication pipe having limited bandwidth.
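The filter-and-cap behavior can be sketched as follows; the filtering factors (here, exact-match fields) and the entry layout are assumptions.

```python
def select_entries(staging_buffer, filters, limit):
    """Filter event entries by one or more filtering factors, then cap the
    subset to respect a limited-bandwidth communication pipe."""
    subset = [entry for entry in staging_buffer
              if all(entry.get(k) == v for k, v in filters.items())]
    return subset[:limit]

buffer = [
    {"severity": "error", "code": 10},
    {"severity": "info", "code": 11},
    {"severity": "error", "code": 12},
]
```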

Tool for interrogating heterogeneous computing systems
11544170 · 2023-01-03

Embodiments implement a tool for interrogating heterogeneous computing systems. Environment variables of a computing device, including at least an operating system, can be detected. Script commands configured using the detected environment variables can be built, where the built script commands are customized based on the detected operating system. Structured query language (SQL) commands configured to retrieve metadata about enterprise elements associated with the computing device can be built. The SQL commands and script commands can be sequentially executed on the computing device, where the execution of the SQL commands and script commands is customized to the computing device such that device-specific database execution parameters and application execution parameters are returned. A structured language document organized according to the returned database execution parameters and application execution parameters can be generated.
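The OS-customized command-building step can be sketched like this. The `env` keys and the commands themselves are illustrative, not taken from the patent.

```python
def build_script_commands(env):
    """Build interrogation commands customized to the detected operating
    system. The env keys and commands are hypothetical examples."""
    if env["os"] == "Windows":
        return [f'dir "{env["app_dir"]}"', "systeminfo"]
    # Default to POSIX-style commands for other operating systems.
    return [f'ls -l "{env["app_dir"]}"', "uname -a"]
```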

Enhanced tracking of data flows

Disclosed are various embodiments for tracking the flow of data through a network environment. A monitor can detect that a data transaction event has occurred. Then, the monitor can identify data involved in the data transaction event. Next, a trace identifier can be assigned to the data involved in the data transaction event. Subsequently, a transaction data subset representing a subset of the data involved in the data transaction event that is subject to a common data processing event can be identified. Then, a span identifier can be assigned to the transaction data subset. Next, a correlation identifier can be linked to a combination of the span identifier and the trace identifier. Finally, a transaction event record can be written to a distributed ledger, the transaction event record comprising the span identifier and the transaction data subset.
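The identifier-assignment chain can be sketched as one function. Using `uuid4` for the identifiers and a predicate to select the common-processing subset are assumptions; the ledger write itself is reduced to returning the record.

```python
import uuid

def record_transaction(event_data, is_common):
    """Assign identifiers to a data transaction event (sketch).
    is_common selects the subset subject to a common data processing event."""
    trace_id = uuid.uuid4().hex                 # assigned to all data in the event
    subset = [d for d in event_data if is_common(d)]
    span_id = uuid.uuid4().hex                  # assigned to the subset
    correlation_id = f"{trace_id}:{span_id}"    # links the span/trace combination
    # The record written to the distributed ledger carries the span
    # identifier and the transaction data subset.
    return {
        "span_id": span_id,
        "correlation_id": correlation_id,
        "transaction_data_subset": subset,
    }
```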

DECISION IMPLEMENTATION WITH INTEGRATED DATA QUALITY MONITORING
20220414071 · 2022-12-29

Computer-implemented methods and systems include downstream execution for individual rule-based flagging of upstream data quality errors by receiving upstream data from a plurality of sources, identifying a downstream task to be executed, applying a plurality of rules to the upstream data, generating a plurality of outputs including at least one output for each of the plurality of rules applied to the upstream data, each of the plurality of outputs being associated with a corresponding rule of the plurality of rules, identifying a tagged population based on the plurality of outputs, determining that at least one of the plurality of outputs does not meet a corresponding rule threshold, and activating the downstream execution for the tagged population after at least one of (i) updating the corresponding rule threshold or (ii) overriding an error.
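The rule/threshold evaluation at the core of this flow can be sketched as below. The two rules and their thresholds are hypothetical; the tagged-population and override/activation steps are not shown.

```python
# Hypothetical rules: each maps upstream rows to a quality score in [0, 1].
rules = {
    "non_null_rate": lambda rows: sum(r["value"] is not None for r in rows) / len(rows),
    "in_range_rate": lambda rows: sum(r["value"] is not None and 0 <= r["value"] <= 100
                                      for r in rows) / len(rows),
}
thresholds = {"non_null_rate": 0.9, "in_range_rate": 0.95}

def evaluate(rows):
    """Apply every rule, then report which outputs miss their thresholds."""
    outputs = {name: rule(rows) for name, rule in rules.items()}
    failing = [name for name, value in outputs.items()
               if value < thresholds[name]]
    return outputs, failing
```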

System and method for detecting anomalies by discovering sequences in log entries
11513935 · 2022-11-29

A method for detecting an anomaly includes retrieving a log file that includes log entries, grouping the log entries into clusters of log entry types based on the number of occurrences and the average time interval, and discovering a sequence of the log entry types within each of the clusters. The sequence of the log entry types is based on a shortest path from a first one of the log entry types to a last one of the log entry types.
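The grouping step can be sketched as follows; the `(timestamp, type)` entry format and the rounding of the average interval are assumptions, and the shortest-path sequence discovery within each cluster is not shown.

```python
from collections import defaultdict

def cluster_log_entries(entries):
    """Group log entry types into clusters keyed by
    (occurrence count, rounded average time interval)."""
    by_type = defaultdict(list)
    for timestamp, entry_type in entries:
        by_type[entry_type].append(timestamp)
    clusters = defaultdict(list)
    for entry_type, times in by_type.items():
        times.sort()
        count = len(times)
        # Average interval between consecutive occurrences.
        avg_interval = round((times[-1] - times[0]) / (count - 1)) if count > 1 else 0
        clusters[(count, avg_interval)].append(entry_type)
    return dict(clusters)
```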

SYSTEM AND METHOD FOR MANAGING DATASET QUALITY IN A COMPUTING ENVIRONMENT

A system for managing dataset quality in a computing environment is disclosed. The system includes a plurality of subsystems. The plurality of subsystems includes a data receiving subsystem configured to receive a dataset. The plurality of subsystems includes a data analysis subsystem configured to compute data metrics for each field of the received dataset based on the type of the dataset. The data analysis subsystem assigns a domain label to each field of the received dataset based on the computed data metrics. Further, the data analysis subsystem compares the computed data metrics and the assigned domain label for each field of the received dataset with stored values of data metrics and domain labels for pre-processed non-anomalous datasets to determine one or more deviations. The data analysis subsystem is also configured to determine a statistical difference between the values of the received dataset and a non-anomalous historical dataset.
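The metric computation and baseline comparison can be sketched for a single numeric field. The specific metrics (`null_rate`, `distinct`, `mean`) and the relative-tolerance deviation test are illustrative assumptions.

```python
def field_metrics(values):
    """Compute simple data metrics for one numeric field (illustrative)."""
    non_null = [v for v in values if v is not None]
    return {
        "null_rate": 1 - len(non_null) / len(values),
        "distinct": len(set(non_null)),
        "mean": sum(non_null) / len(non_null) if non_null else None,
    }

def deviations(metrics, baseline, tolerance=0.1):
    """Compare computed metrics against stored values for non-anomalous
    datasets; return the metrics deviating beyond a relative tolerance."""
    return [key for key, expected in baseline.items()
            if metrics[key] is not None
            and abs(metrics[key] - expected) > tolerance * max(abs(expected), 1e-9)]
```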