G06F11/0781

Detecting system events based on user sentiment in social media messages

Methods and systems are disclosed herein for using anomaly detection in timeseries data of user sentiment to detect incidents in computing systems and identify events within an enterprise. An anomaly detection system may receive social media messages that include a timestamp indicating when each message was published. The system may generate sentiment identifiers for the social media messages. The sentiment identifiers and timestamps associated with the social media messages may be used to generate a timeseries dataset for each type of sentiment identifier. The timeseries datasets may be input into an anomaly detection model to determine whether an anomaly has occurred. The system may retrieve textual data from the social media messages associated with the detected anomaly and may use the text to determine a computing system or event associated with the detected anomaly.
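
The pipeline described in this abstract (sentiment identifiers → per-sentiment timeseries → anomaly model) can be sketched roughly as follows; the keyword-based sentiment identifier and the z-score anomaly test are stand-ins, since the abstract does not specify either model:

```python
from collections import Counter
from statistics import mean, stdev

# Toy sentiment identifier: the abstract does not disclose the classifier,
# so a keyword lookup stands in here.
NEGATIVE_WORDS = {"down", "broken", "outage", "error"}

def sentiment_id(text):
    """Label a message 'negative' if any keyword appears, else 'neutral'."""
    words = set(text.lower().split())
    return "negative" if words & NEGATIVE_WORDS else "neutral"

def build_timeseries(messages, bucket_seconds=3600):
    """messages: (timestamp, text) pairs. Returns per-sentiment hourly counts."""
    series = {}
    for ts, text in messages:
        bucket = ts - ts % bucket_seconds
        series.setdefault(sentiment_id(text), Counter())[bucket] += 1
    return series

def detect_anomalies(counts, threshold=3.0):
    """Flag buckets whose count exceeds the mean by `threshold` std devs."""
    values = list(counts.values())
    if len(values) < 2 or stdev(values) == 0:
        return []
    mu, sigma = mean(values), stdev(values)
    return [b for b, c in counts.items() if (c - mu) / sigma > threshold]
```

A burst of negative messages in one hour then stands out against the baseline hours, and the messages in that bucket can be retrieved for the text-analysis step.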

Learning Causal Relationships

A computer-implemented method is provided that includes learning causal relationships between two or more application micro-services and applying the learned causal relationships to dynamically localize an application fault. First micro-service error log data corresponding to selectively injected errors is collected. A learned causal graph is generated based on the collected first micro-service error log data. Second micro-service error log data corresponding to a detected application error is collected, and an ancestral matrix is built using the learned causal graph and the second micro-service error log data. The ancestral matrix is leveraged to identify the source of the error and the micro-service associated with the identified error source. A computer system and a computer program product are also provided.
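
The ancestral-matrix step can be illustrated with a small sketch; the matrix layout (`M[a][b] = 1` when service `a` is an ancestor of service `b` in the learned causal graph) and the localization rule are assumptions, as the abstract does not define them:

```python
def ancestral_matrix(dag):
    """dag: {service: [downstream services]}. Returns the ancestor relation."""
    nodes = set(dag) | {c for cs in dag.values() for c in cs}
    matrix = {a: {b: 0 for b in nodes} for a in nodes}
    def dfs(root, node):
        for child in dag.get(node, []):
            if not matrix[root][child]:
                matrix[root][child] = 1
                dfs(root, child)
    for root in nodes:
        dfs(root, root)
    return matrix

def localize_fault(matrix, erroring):
    """Pick the erroring service that is an ancestor of all other erroring ones."""
    for cand in erroring:
        if all(cand == other or matrix[cand][other] for other in erroring):
            return cand
    return None
```

With a chain `db -> api -> web`, errors observed in `api` and `web` localize to `api`, since it is the only erroring service ancestral to the rest.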

Systems and methods for escalation policy activation
11556871 · 2023-01-17

A production environment monitoring system notes when problems or issues arise in a computer-based production environment. A noted problem or issue can trigger an escalation policy that calls for notifying an individual identified in the escalation policy to ask the individual to resolve or mitigate the problem or issue. The notification sent to the individual identified in the escalation policy also includes information about one or more individuals that are knowledgeable about the problem or issue that triggered the escalation policy and that may be able to provide assistance in resolving or mitigating the problem or issue.
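
A minimal sketch of the notification flow, with illustrative field names not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class EscalationPolicy:
    on_call: str                                  # individual named in the policy
    experts: dict = field(default_factory=dict)   # issue type -> knowledgeable people

def build_notification(policy, issue):
    """Notify the on-call person, attaching who else knows this issue type."""
    helpers = policy.experts.get(issue["type"], [])
    return {
        "to": policy.on_call,
        "subject": f"[{issue['type']}] {issue['summary']}",
        "knowledgeable": helpers,
    }
```

The point of the claim is the `knowledgeable` field: the notification carries not just the assignment but pointers to people who can assist with that class of issue.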

AUTOMATED GENERATION OF PRIVACY AUDIT REPORTS FOR WEB APPLICATIONS

Various embodiments comprise systems and methods to generate privacy audit reports for web applications. In some examples, a computing system comprises a data extraction component, a risk assessment component, and an exposure component. The data extraction component crawls a web application and identifies data, data exposure points, and security policies implemented by the web application. The risk assessment component generates a risk score for the web application based on the amount of data, the data sensitivity, the amount and type of data exposure points, and the security policies. The risk assessment component generates the privacy audit report for the web application. The privacy audit report comprises the risk score, an inventory of data types, an inventory of the data exposure points, and a graphical representation of historical risk scores. The exposure component transfers the privacy audit report for delivery to an operator of the web application.
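
A hypothetical scoring formula can illustrate how the four inputs might combine; the abstract names the factors but not the weights or normalization, so every constant here is invented:

```python
def risk_score(record_count, sensitivity, exposure_points, policies):
    """Combine the four audited factors into a 0-100 score (higher = riskier).

    sensitivity is assumed pre-normalized to [0, 1]; all weights are illustrative.
    """
    volume = min(record_count / 10_000, 1.0)       # more data, more risk
    exposure = min(len(exposure_points) / 10, 1.0) # more exposure points, more risk
    mitigation = min(len(policies) / 5, 1.0)       # security policies reduce risk
    raw = 0.4 * sensitivity + 0.3 * volume + 0.3 * exposure
    return round(100 * raw * (1 - 0.5 * mitigation), 1)
```

Appending each run's score to a history list would supply the graphical representation of historical risk scores the report requires.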

Systems, methods, and apparatuses for detecting and creating operation incidents

Techniques for determining insight are described. An exemplary method includes receiving a request to provide insight into potential abnormal behavior; receiving one or more of anomaly information and event information associated with the potential abnormal behavior; evaluating the received anomaly information and/or event information to determine whether there is insight as to what is causing the potential abnormal behavior and to add to an insight at least two of: an indication of a metric involved in the abnormal behavior, a severity for the insight, an indication of a relevant event involved in the abnormal behavior, and a recommendation on how to cure the potential abnormal behavior; and providing an insight indication for the generated insight.
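
The "at least two of" structure of the claim can be sketched as follows; the field names and the severity rule are illustrative assumptions:

```python
def build_insight(anomaly=None, event=None):
    """Assemble an insight from whichever inputs are available.

    Returns None unless at least two of the claimed elements (metric,
    severity, relevant event, recommendation) can be populated.
    """
    insight = {}
    if anomaly:
        insight["metric"] = anomaly["metric"]
        insight["severity"] = "high" if anomaly["deviation"] > 3 else "low"
        insight["recommendation"] = f"Investigate metric {anomaly['metric']}"
    if event:
        insight["relevant_event"] = event["name"]
    return insight if len(insight) >= 2 else None
```

An anomaly alone already yields three elements; an event alone yields only one, so no insight indication is provided in that case.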

Computer system and method for presenting asset insights at a graphical user interface

A computing system is configured to derive insights related to asset operation and present these insights via a GUI. To these ends, the computing system (a) receives data related to the operation of assets, (b) based on this data, derives a plurality of insights related to the operation of at least a subset of the assets, (c) from the insights, defines a given subset of insights to be presented to a user, (d) defines at least one aggregated insight representative of one or more individual insights in the given subset of insights that are related to a common underlying problem, and (e) causes the user's client station to display a visualization of the given subset of insights including (i) an insights pane that provides a high-level overview of the subset of insights and (ii) a details pane that provides additional details regarding a selected one of the subset of insights.
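
Step (d), aggregating insights that share a common underlying problem, might look like this sketch (the `problem` and `asset` field names are assumptions):

```python
from collections import defaultdict

def aggregate_insights(insights):
    """Merge individual insights that share an underlying problem."""
    groups = defaultdict(list)
    for ins in insights:
        groups[ins["problem"]].append(ins)
    aggregated = []
    for problem, members in groups.items():
        if len(members) > 1:
            # One aggregated insight stands in for all related individual ones.
            aggregated.append({"problem": problem, "count": len(members),
                               "assets": sorted({m["asset"] for m in members})})
        else:
            aggregated.append(members[0])
    return aggregated
```

The insights pane would list the aggregated entries at a high level, while the details pane expands a selected entry into its member insights.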

Failure Prediction Using Informational Logs and Golden Signals

Embodiments relate to a computer platform to support processing of informational logs and corresponding performance data to detect and mitigate the occurrence of anomalous behavior. Metrics are extracted from the informational logs and correlated with performance data, and in an exemplary embodiment with golden signal metrics. A window or block of the logs is classified as a potential candidate or indicator of anomalous behavior, which in an embodiment is indicative of potential failure or service outage. A control signal is dynamically issued to an operatively coupled device associated with the window or block of logs. The control signal is configured to selectively control a state of a physical device or process controlled by software, with the control directed at mitigating or eliminating the effect(s) of the anomalous behavior.
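
The correlation-and-classification step can be sketched with a sliding-window Pearson correlation between a log-derived metric and a golden signal; the window size, threshold, and "throttle" control signal are all illustrative choices, not from the abstract:

```python
def pearson(a, b):
    """Pearson correlation of two equal-length sequences (0.0 if degenerate)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def candidate_windows(log_metric, golden_signal, size=5, threshold=0.9):
    """Classify each sliding window as a potential anomaly candidate or not."""
    flags = []
    for start in range(len(log_metric) - size + 1):
        r = pearson(log_metric[start:start + size],
                    golden_signal[start:start + size])
        flags.append((start, r >= threshold))
    return flags

def control_signal(flags):
    """Issue a mitigating control signal when any window is flagged."""
    return "throttle" if any(flagged for _, flagged in flags) else "steady"
```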

Presentation System Of Trouble Recovery Means, Presentation Method Of Trouble Recovery Means, And Presentation Program Of Trouble Recovery Means
20230229544 · 2023-07-20

A presentation system of trouble recovery means, including: an acquisition section that acquires first information, second information, and third information, the first information being generated based on error time-series information, in which operation information on a robot and information on errors of the robot are linked with time, and on error solution method information, in which past occurrences of the errors and solution methods for the errors are linked, the first information being displayed as dots along time for each error number on a graph in which time is a first axis and error number is a second axis; the second information, in which the contents of errors corresponding to the error numbers are displayed in language or numbers and the occurrence frequency of each error is displayed in degrees; and the third information, regarding recovery means for resolving the errors, being generated based on the first information and the second information; and a notification section that notifies the information acquired by the acquisition section.

Failure Prediction in a Computing System Based on Machine Learning Applied to Alert Data

An embodiment may involve persistent storage containing a machine learning trainer application configured to apply one or more learning algorithms. One or more processors may be configured to: obtain alert data from one or more computing systems; generate training vectors from the alert data, wherein elements within each of the training vectors include: results of a set of statistics applied to the alert data for a particular computing system of the one or more computing systems, and an indication of whether the particular computing system is expected to fail given its alert data; train, using the machine learning trainer application and the training vectors, a machine learning model, wherein the machine learning model is configured to predict failure of a further computing system based on operational alert data obtained from the further computing system; and deploy the machine learning model for production use.
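
The training-vector construction can be sketched as follows; the abstract does not specify the statistics set or the learning algorithm, so a simple nearest-centroid classifier stands in for the trained model:

```python
from statistics import mean

def training_vector(alerts, will_fail):
    """alerts: severity values for one system. Returns (features, label)."""
    features = [len(alerts), mean(alerts), max(alerts)]  # illustrative statistics
    return features, will_fail

def train_centroids(vectors):
    """Average the feature vectors of failing vs. healthy systems."""
    cents = {}
    for feats, label in vectors:
        sums, n = cents.setdefault(label, ([0.0] * len(feats), 0))
        cents[label] = ([s + f for s, f in zip(sums, feats)], n + 1)
    return {lab: [s / n for s in sums] for lab, (sums, n) in cents.items()}

def predict(centroids, feats):
    """Predict failure for a further system from its operational alert data."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(feats, c))
    return min(centroids, key=lambda lab: dist(centroids[lab]))
```

Deployment for production use would amount to serializing the trained model and applying `predict` to alert statistics streamed from live systems.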

AUTOMATICALLY CLASSIFYING CLOUD INFRASTRUCTURE COMPONENTS FOR PRIORITIZED MULTI-TENANT CLOUD ENVIRONMENT RESOLUTION USING ARTIFICIAL INTELLIGENCE TECHNIQUES

Methods, apparatus, and processor-readable storage media for automatically classifying cloud infrastructure components for prioritized multi-tenant cloud environment resolution using artificial intelligence techniques are provided herein. An example computer-implemented method includes obtaining historical data pertaining to a multi-tenant cloud environment; training one or more artificial intelligence techniques, using at least a portion of the obtained historical data, for classifying cloud infrastructure components for prioritizing incident-related resolution; classifying one or more cloud infrastructure components, within the multi-tenant cloud environment and associated with one or more server-related issues, into one or more of multiple resolution priority classes; and performing one or more automated actions based at least in part on the classifying of the one or more cloud infrastructure components.
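
Since the abstract does not disclose the trained model, a transparent rule-based stand-in can illustrate the resolution priority classes and the automated action that follows; all thresholds and class names are invented:

```python
def classify_component(tenants_affected, issue_severity):
    """Map a component with server-related issues to a resolution priority class."""
    score = tenants_affected * issue_severity   # multi-tenant blast radius proxy
    if score >= 50:
        return "P1-immediate"
    if score >= 10:
        return "P2-scheduled"
    return "P3-deferred"

def automated_action(priority):
    """Perform an automated action based on the classification (sketch)."""
    return {"P1-immediate": "page on-call",
            "P2-scheduled": "open ticket",
            "P3-deferred": "log only"}[priority]
```

In the claimed method these class boundaries would be learned from the historical data rather than hand-set as they are here.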