PID Controller For Event Ingestion Throttling
Accepted events are processed. Each accepted event is associated with a respective ingested timestamp, a respective processing-start timestamp, and a respective processing-complete timestamp. Events are initially accepted at a target rate limit. A current error value is obtained as a difference between a target lag time and a second value. The target lag time indicates a configured maximum time within which an accepted event is to be processed. The second value is an average of the differences between the respective processing-complete timestamps and the respective ingested timestamps. A base throttle is obtained based on previous error values and the current error value. The base throttle is smaller than the target rate limit and indicates a maximum number of events per unit of time to be accepted for processing. Subsequent events are accepted at a new rate obtained from the base throttle.
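The control loop described above resembles a discrete PID update on the lag error. A minimal sketch, assuming illustrative gain values (`kp`, `ki`, `kd`) and a simple clamp to keep the base throttle below the target rate limit (none of these specifics are fixed by the abstract):

```python
def pid_throttle(target_lag, avg_lag, prev_errors, target_rate,
                 kp=0.6, ki=0.05, kd=0.1):
    """Return (new_throttle, error).

    target_lag  -- configured maximum processing lag
    avg_lag     -- average of (processing-complete - ingested) timestamps
    prev_errors -- previously observed error values
    Gains kp/ki/kd are hypothetical tuning values.
    """
    error = target_lag - avg_lag                        # current error value
    integral = sum(prev_errors) + error                 # accumulated error
    derivative = error - (prev_errors[-1] if prev_errors else 0.0)
    throttle = target_rate + kp * error + ki * integral + kd * derivative
    # The base throttle must stay at or below the target rate limit.
    return min(throttle, target_rate), error
```

With a backlog (observed lag above the target), the error is negative and the new throttle drops below the target rate, slowing acceptance until the lag recovers.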
System, method, and computer program for mitigating falsified log data provided to an AI-learning system
A system, method, and computer program product are provided for mitigating falsified log data provided to an AI-learning system. In use, suspicious data of a predicted situation is received from an artificial intelligence (AI) analysis system. Additionally, event log data associated with the predicted situation is received. Simulated log data is created based on the event log data. The simulated log data is sent to the AI analysis system. Imitation data of the predicted situation is received from the AI analysis system. The imitation data of the predicted situation is compared with the suspicious data of the predicted situation to verify the event log data. When the imitation data matches the suspicious data, at least one of the suspicious data or an originator of the suspicious data is authenticated.
System, method, and computer program for dynamic prioritization of monitoring system related alerts
As described herein, a system, method, and computer program are provided for dynamic prioritization of monitoring system related alerts. A plurality of alerts generated for a monitoring system are accessed. A first set of alert features predefined as high-level features is identified, wherein each of the high-level features is mapped to one or more alert features in a second set of alert features predefined as low-level features. The plurality of alerts are processed to determine a plurality of the most central high-level features. The plurality of alerts are grouped according to the plurality of the most central high-level features. Each group of alerts is processed to determine a plurality of the most central low-level features for the alerts in the group. A prioritized set of alerts is selected from the plurality of alerts based on the plurality of the most central low-level features.
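One plausible reading of the two-stage selection, approximating "most central" by a feature's frequency across alerts (the abstract does not define the centrality measure, and all names here are hypothetical):

```python
from collections import Counter

def prioritize(alerts, top_high=1, top_low=1):
    """alerts: list of dicts with 'high' and 'low' feature lists.
    Centrality is approximated by frequency across alerts."""
    # Stage 1: most central high-level features across all alerts.
    high_counts = Counter(f for a in alerts for f in a["high"])
    central_high = {f for f, _ in high_counts.most_common(top_high)}
    # Group alerts by the central high-level features they carry.
    groups = {}
    for a in alerts:
        for f in central_high & set(a["high"]):
            groups.setdefault(f, []).append(a)
    # Stage 2: per group, most central low-level features select alerts.
    prioritized = []
    for group in groups.values():
        low_counts = Counter(f for a in group for f in a["low"])
        central_low = {f for f, _ in low_counts.most_common(top_low)}
        prioritized += [a for a in group if central_low & set(a["low"])]
    return prioritized
```

A graph-based centrality (e.g. over a feature co-occurrence graph) would slot into the same two stages in place of the frequency counts.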
Systems and methods for performing end-to-end link-layer and IP-layer health checks between a host machine and a network virtualization device
Described are systems and methods of monitoring network health and traffic. Monitoring network health and traffic can include sending a request to a compute instance to trigger a response from the compute instance, monitoring, via a network virtualization device, communications from a virtual network interface card (VNIC) associated with the compute instance, storing information indicative of a last received packet by the VNIC, monitoring the stored information indicative of the last received packet to determine a health status of the compute instance associated with the VNIC, updating a table configured to track received responses from the compute instance, and notifying a downstream user of the health status of the compute instance.
Alarm information processing method and apparatus, system, and computer storage medium
An alarm information processing method, apparatus, system, and computer storage medium are provided. The method includes receiving alarm information generated by a first node at a first time. It is determined whether the alarm information is a root alarm, and in response to a determination that the alarm information is a root alarm, a plurality of first links comprising the first node are obtained, to generate a first link set. The plurality of first links of the first link set are searched for one or more nodes that are located before the first node and that generated alarm information within a first time range before the first time or a second time range after the first time, to obtain a second node. Alarm root source analysis is performed on the second node, and an analysis result is notified.
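A rough sketch of the link-search step, assuming links are ordered lists of node IDs (source to sink) and each alarming node carries a single alarm timestamp (both assumptions, not fixed by the abstract):

```python
def find_second_nodes(links, first_node, alarms, t1, before, after):
    """links: lists of node ids ordered source -> sink.
    alarms: {node_id: alarm_time}.
    Returns upstream nodes of first_node whose alarms fall within
    [t1 - before, t1 + after] around the first alarm time t1."""
    candidates = set()
    for link in links:
        if first_node in link:
            # Nodes located before the first node on this link.
            candidates.update(link[:link.index(first_node)])
    return {n for n in candidates
            if n in alarms and t1 - before <= alarms[n] <= t1 + after}
```

The returned nodes are the "second node" candidates on which root source analysis would then be performed.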
Methods and systems to determine baseline event-type distributions of event sources and detect changes in behavior of event sources
Automated methods and systems to determine a baseline event-type distribution of an event source and use the baseline event-type distribution to detect changes in the behavior of the event source are described. In one implementation, blocks of event messages generated by the event source are collected and an event-type distribution is computed for each block of event messages. Candidate baseline event-type distributions are determined from the event-type distributions, and the baseline event-type distribution is selected as the candidate with the largest entropy. A normal discrepancy radius of the event-type distributions is computed from the baseline event-type distribution and the event-type distributions. A block of run-time event messages generated by the event source is collected. A run-time event-type distribution is computed from the block of run-time event messages. When the run-time event-type distribution is outside the normal discrepancy radius, an alert is generated indicating abnormal behavior of the event source.
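The pipeline maps naturally onto a few small functions. In this sketch the discrepancy radius is taken as the mean plus one standard deviation of the L1 distances to the baseline; the abstract fixes neither the distance nor the radius formula, so both are illustrative assumptions:

```python
import math

def event_type_dist(block):
    """Relative frequency of each event type in a block of event messages."""
    total = len(block)
    return {t: block.count(t) / total for t in set(block)}

def entropy(dist):
    """Shannon entropy (bits) of an event-type distribution."""
    return -sum(p * math.log2(p) for p in dist.values())

def baseline(dists):
    """Baseline = candidate distribution with the largest entropy."""
    return max(dists, key=entropy)

def distance(d1, d2):
    """L1 distance between two event-type distributions (an assumption)."""
    types = set(d1) | set(d2)
    return sum(abs(d1.get(t, 0) - d2.get(t, 0)) for t in types)

def is_abnormal(runtime_dist, base, dists, k=1.0):
    """Normal discrepancy radius: mean + k*stdev of distances to baseline."""
    ds = [distance(d, base) for d in dists]
    mean = sum(ds) / len(ds)
    std = (sum((x - mean) ** 2 for x in ds) / len(ds)) ** 0.5
    return distance(runtime_dist, base) > mean + k * std
```

Choosing the maximum-entropy candidate as baseline favors the block whose event types are most evenly represented, i.e. the least degenerate view of the source's normal behavior.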
Managing Event Data in a Network
A method (100, 200, 300) for managing network event data is disclosed. The method for managing network event data comprises receiving incoming network event data, the network event data comprising notifications of network events occurring within a network (102). The method further comprises, for individual notified network events within the received network event data, identifying a category of the notified network event (104) and filtering the received network event data on the basis of co-occurrence in the network of network events in individual network categories with network events in other network categories (106). Also disclosed are a Manager (600), a System (700) and a computer program.
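One simple interpretation of cross-category co-occurrence filtering: keep an event only if an event of a different category occurs within a fixed time window of it. Both the window length and the keep/drop rule are assumptions for illustration:

```python
def filter_by_cooccurrence(events, window=5.0):
    """events: list of (timestamp, category) notifications.
    Keeps an event only when some event of a *different* category
    occurs within `window` seconds of it."""
    return [(t, cat) for t, cat in events
            if any(abs(t - t2) <= window and cat2 != cat
                   for t2, cat2 in events)]
```

The filter's effect is to retain events that participate in cross-category incidents and drop isolated single-category noise.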
Cross domain topology from machine learning
A processor retrieves alarm data associated with an operation support system. A processor filters the alarm data. A processor groups the filtered alarm data. A processor extracts cross-domain node and port information for the grouped alarm data. A processor generates a cross-domain topology of the operation support system based on the extracted cross-domain node and port information.
Data storage method, storage server, and storage medium and system
The present disclosure provides a data storage method, belonging to the field of data processing. The method is applied to a storage server in a cloud storage system. The method includes: monitoring a data transmission status of a data acquisition device; obtaining data exception information according to a monitored exceptional data transmission status, the data exception information indicating an exception time period; transmitting a first data backhaul request to the data acquisition device, the data acquisition device being configured to return first data acquired within the exception time period upon receiving the first data backhaul request; and storing the first data upon receipt.
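The exception-detection step might look like a gap scan over the timestamps of data received from the acquisition device; the `max_gap` threshold and the gap-based rule are illustrative assumptions:

```python
def detect_exception_periods(timestamps, max_gap=5.0):
    """Flag gaps in data transmission larger than max_gap as exception
    time periods; each period would drive a data backhaul request."""
    periods = []
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev > max_gap:
            periods.append((prev, cur))  # time span to re-request
    return periods
```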
Data lifecycle management
A method and technique for data lifecycle management includes identifying a fault from a monitored system. A time period window associated with the fault is defined based on a time the fault occurred. One or more metrics related to the fault that fall within the time period window are identified from the monitored system and stored in a memory. A lifespan is assigned to the one or more metrics based on the fault, and the one or more metrics are removed from the memory when their associated lifespans expire.
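A compact sketch of the window-and-lifespan rule, assuming the lifespan is anchored at the fault time (the abstract leaves the anchoring unspecified):

```python
def retain_metrics(metrics, fault_time, window, lifespan, now):
    """metrics: list of (timestamp, name, value) samples.
    Keep only metrics inside the fault's time period window, and drop
    all of them once the fault-based lifespan has expired."""
    in_window = [m for m in metrics
                 if fault_time - window <= m[0] <= fault_time + window]
    # Remove the fault's metrics once their lifespan is over.
    return [] if now > fault_time + lifespan else in_window
```

Tying retention to the fault rather than to a global policy lets severe faults keep their diagnostic metrics longer while routine data ages out quickly.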