Patent classifications
G06F11/3447
Machine learning analysis of user interface design
Techniques and solutions are described for improving user interfaces, such as by analyzing user interactions with a user interface using a machine learning component. The machine learning component can be trained with user interaction data that includes an interaction identifier and a timestamp. The identifiers and timestamps can be used to determine the duration of an interaction with a user interface element, as well as patterns of interactions. Training data can be used to establish baseline or threshold values or ranges for particular user interface elements or types of user interface elements. Test data can be obtained that includes identifiers and timestamps. The time taken to complete an interaction with a user interface element, and optionally an interaction pattern, can be analyzed. If the machine learning component determines that an interaction time or pattern is abnormal, various actions can be taken, such as providing a report or user interface guidance.
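The duration analysis described above can be sketched minimally in Python. The event layout, the pairing of start/end events by identifier, and the three-sigma abnormality threshold are illustrative assumptions, not the claimed method:

```python
from statistics import mean, stdev

def durations(events):
    """events: (interaction_id, element_id, timestamp) tuples; the first
    event for an id opens the interaction, the second closes it."""
    starts, out = {}, {}
    for interaction_id, element_id, ts in events:
        if interaction_id in starts:
            out[interaction_id] = (element_id, ts - starts[interaction_id][1])
        else:
            starts[interaction_id] = (element_id, ts)
    return out

def is_abnormal(duration, baseline, k=3.0):
    # Flag a duration more than k standard deviations above the
    # baseline mean established from training data.
    return duration > mean(baseline) + k * stdev(baseline)
```

An abnormal result would then trigger an action such as generating a report or showing guidance for the slow element.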
AUTOMATIC TUNING OF A HETEROGENEOUS COMPUTING SYSTEM
The present invention provides a method of configuring program parameters during run-time of a computing program for computation in a heterogeneous computing system. A compiled program is processed in an autotuning system to optimize the parameters of an application for processing in a heterogeneous system comprising, for example, CPU and GPU cores.
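A minimal sketch of such an autotuning loop, assuming a measurable run function and a candidate parameter set (the CPU/GPU work-split parameter here is purely illustrative):

```python
def autotune(run, candidates):
    """run(params) -> measured execution time; candidates -> iterable of
    parameter dicts. Returns the fastest configuration found."""
    best, best_time = None, float("inf")
    for params in candidates:
        t = run(params)
        if t < best_time:
            best, best_time = params, t
    return best, best_time
```

In practice `run` would execute the compiled program on the heterogeneous hardware; here a synthetic cost function with a known optimum serves to exercise the loop.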
Method, Apparatus, and Device for Updating Hard Disk Prediction Model, and Medium
A method, apparatus, and device for updating a hard disk prediction model, and a storage medium. The method comprises: acquiring first sample data used to update a hard disk prediction model, and determining, according to the first sample data, a target decision tree requiring updating in the hard disk prediction model; selecting second sample data from the first sample data according to a preset selection rule; determining, according to the second sample data, a target leaf node requiring updating in the target decision tree; and splitting the target leaf node according to a splitting rule of the hard disk prediction model so as to update the target decision tree. The entire updating process is simple, and a new hard disk prediction model need not be re-established, thereby reducing the time used for updating. Moreover, the accuracy of hard disk fault prediction is improved, and user requirements are better met.
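The incremental update step can be illustrated with a toy decision tree: rather than rebuilding the model, one target leaf is split using the new sample data. The classes, the midpoint splitting rule, and the feature names are stand-in assumptions for the model's actual splitting rule:

```python
class Leaf:
    def __init__(self, label):
        self.label = label

class Split:
    def __init__(self, feature, threshold, left, right):
        self.feature, self.threshold = feature, threshold
        self.left, self.right = left, right

def split_leaf(leaf, samples, feature):
    """samples: list of (features_dict, label) with labels 0 (healthy)
    and 1 (failing). Splits at the midpoint between class means."""
    neg = [s[0][feature] for s in samples if s[1] == 0]
    pos = [s[0][feature] for s in samples if s[1] == 1]
    threshold = (sum(neg) / len(neg) + sum(pos) / len(pos)) / 2
    return Split(feature, threshold, Leaf(0), Leaf(1))

def predict(node, x):
    # Walk from the (possibly just-updated) subtree root to a leaf.
    while isinstance(node, Split):
        node = node.left if x[node.feature] <= node.threshold else node.right
    return node.label
```

Only the targeted leaf is touched, which is what keeps the update cheap relative to re-establishing the whole model.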
DISK USAGE GROWTH PREDICTION SYSTEM
Certain embodiments described herein relate to an improved disk usage growth prediction system. In some embodiments, one or more components in an information management system can determine usage status data of a given storage device, perform a validation check on the usage status data using multiple prediction models, compare validation results of the multiple prediction models to identify the best-performing prediction model, generate a disk usage growth prediction using the identified prediction model, and adjust the available space of the storage device according to the disk usage growth prediction.
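The model-selection step can be sketched as a one-step-ahead backtest over the usage history, choosing the model with the lowest mean absolute error. The model interface and error metric are assumptions for illustration:

```python
def validate(model, history):
    """Mean absolute error of one-step-ahead predictions: for each point,
    predict it from the preceding prefix of the usage history."""
    errors = [abs(model(history[:i]) - history[i]) for i in range(2, len(history))]
    return sum(errors) / len(errors)

def best_model(models, history):
    """models: dict of name -> predict(history_prefix) -> next value."""
    return min(models, key=lambda name: validate(models[name], history))
```

The winning model would then produce the growth forecast used to adjust available space.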
Control apparatus, analysis apparatus, communication system, data processing method, data transmission method, and non-transitory computer readable medium
An object of the present disclosure is to provide a control apparatus that controls a plurality of communication systems so that the plurality of communication systems can perform analysis with high accuracy. The control apparatus (30) according to the present disclosure includes a communication unit (31) and a determination unit (32). The communication unit (31) receives, from an analysis apparatus (10) configured to perform machine learning using communication logs collected from a communication apparatus in order to generate a learning model, statistical information about each of the communication logs and information about the learning model. The determination unit (32) determines an analysis apparatus (20) to which the information about the learning model is applied based on the statistical information.
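The determination step can be sketched as applying a learning model only to systems whose log statistics resemble the statistics the model was trained on. The Euclidean distance measure and the threshold are illustrative assumptions:

```python
def eligible_systems(train_stats, system_stats, max_distance):
    """train_stats: dict of metric -> value from the training logs.
    system_stats: dict of system name -> per-system metric dict.
    Returns systems close enough to receive the learning model."""
    def distance(stats):
        return sum((stats[k] - train_stats[k]) ** 2 for k in train_stats) ** 0.5
    return [name for name, stats in system_stats.items()
            if distance(stats) <= max_distance]
```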
Mathematical models of graphical user interfaces
A graph model of a graphical user interface (GUI) may be generated by processing usage data of the GUI where the usage data comprises sequences of GUI pages and actions between GUI pages. The nodes of the graph model may be determined by obtaining GUI pages from the usage data, identifying dynamic GUI elements in the GUI pages, generating canonical GUI pages by modifying the GUI pages using the dynamic GUI elements, and creating graph nodes using the canonical GUI pages. The edges of the graph may be determined by processing actions from the GUI data that were performed by users to transition from one GUI page to another GUI page. The graph model of the GUI may be used for any appropriate application, such as determining statistics relating to the GUI or statistics relating to individual users of the GUI.
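A minimal sketch of the graph construction, assuming pages are URL-like strings and that numeric identifiers are the dynamic elements to canonicalize (so `/order/123` and `/order/456` collapse to one node):

```python
import re
from collections import Counter

def canonical(page):
    # Replace dynamic numeric segments with a placeholder to produce
    # a canonical GUI page.
    return re.sub(r"\d+", "{id}", page)

def build_graph(sequences):
    """sequences: lists of (page, action) pairs, the last action unused.
    Returns edge counts keyed by (canonical_src, action, canonical_dst)."""
    edges = Counter()
    for seq in sequences:
        for (src, action), (dst, _) in zip(seq, seq[1:]):
            edges[(canonical(src), action, canonical(dst))] += 1
    return edges
```

The edge counts directly support the kinds of statistics mentioned, such as how often users take a given transition.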
Efficient Fault Prevention and Repair in Complex Systems
A method of supervising a complex system includes acquiring and storing failure data and repair resource information regarding the complex system, and identifying failure networks and structures of the complex system. Failure types associated with the failure networks of the complex system are determined. The method includes generating a plurality of failure prevention and repair (FPR) sequences, wherein each FPR sequence is associated with the failure networks and the failure types. The generated FPR sequences are analyzed to select a set of FPR sequences and associated repair resources. The method further comprises applying a selected FPR sequence to the complex system, thereby managing the complex system.
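One plausible reading of the selection step, sketched with assumed fields: rank candidate FPR sequences by prevented downtime per unit of repair resources and keep the affordable top candidates.

```python
def select_fpr(candidates, budget):
    """candidates: dicts with 'name', 'prevented_downtime', 'cost'
    (all field names are illustrative). Greedily selects by
    downtime-prevented per unit cost within the resource budget."""
    ranked = sorted(candidates,
                    key=lambda c: c["prevented_downtime"] / c["cost"],
                    reverse=True)
    chosen, spent = [], 0
    for c in ranked:
        if spent + c["cost"] <= budget:
            chosen.append(c["name"])
            spent += c["cost"]
    return chosen
```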
PREDICTION OF BUFFER POOL SIZE FOR TRANSACTION PROCESSING WORKLOADS
Techniques are described herein for prediction of a buffer pool size (BPS). Before performing BPS prediction, gathered data are used to determine whether a target workload is in a steady state. Historical utilization data gathered while the workload is in a steady state are used to predict object-specific BPS components for database objects, accessed by the target workload, that are identified for BPS analysis based on shares of the total disk I/O requests, for the workload, that are attributed to the respective objects. Preference of analysis is given to objects that are associated with larger shares of disk I/O activity. An object-specific BPS component is determined based on a coverage function that returns a percentage of the database object size (on disk) that should be available in the buffer pool for that database object. The percentage is determined using either a heuristic-based or a machine learning-based approach.
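The aggregation can be sketched as follows, using an assumed heuristic coverage function and an assumed I/O-share cutoff; the real coverage function could equally be a learned model:

```python
def predict_bps(objects, coverage, io_share_floor=0.05):
    """objects: dicts with 'size' (bytes on disk) and 'io_share' in
    [0, 1]. Objects below the floor are skipped; larger I/O shares are
    analyzed first. Returns the summed object-specific BPS components."""
    analyzed = [o for o in objects if o["io_share"] >= io_share_floor]
    analyzed.sort(key=lambda o: o["io_share"], reverse=True)
    return sum(coverage(o) * o["size"] for o in analyzed)

def heuristic_coverage(o):
    # Illustrative heuristic: cache a fraction of the object proportional
    # to its I/O share, capped at full coverage.
    return min(1.0, 2 * o["io_share"])
```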
Generation, validation and implementation of storage-orchestration strategies using virtual private array (VPA) in a dynamic manner
A data storage management layer comprises computing device(s), operatively connected to storage resources, which comprise data storage units and control units. The storage resources are in turn operatively connected to host computers. A sub-set of the storage resources is assigned to each host in order to provide storage services according to performance requirements predefined for the host, thereby generating Virtual Private Arrays (VPAs). The computing device(s) are configured to perform a method of managing the data storage system comprising: (a) implementing storage management strategies comprising rules, wherein the rules comprise conditions and actions capable of improving VPA performance in a dynamic manner; and (b) repetitively performing: (i) monitoring VPA performance to detect compliance of a VPA with the condition(s); and (ii) responsive to detecting compliance of a VPA with the condition(s), performing the action(s).
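The monitor-and-act loop can be sketched by modeling each rule as a (condition, action) pair evaluated against monitored VPA metrics; the metric names and the example rule are assumptions:

```python
def monitor(rules, metrics):
    """rules: list of (condition, action) callables over a metrics dict.
    Applies every rule whose condition holds; returns the actions taken."""
    taken = []
    for condition, action in rules:
        if condition(metrics):
            taken.append(action(metrics))
    return taken
```

For example, a rule might fire when VPA read latency exceeds its predefined target and respond by allocating additional cache to that VPA.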
RESOURCE ALLOCATION OPTIMIZATION FOR MULTI-DIMENSIONAL MACHINE LEARNING ENVIRONMENTS
Some embodiments of the present application include obtaining first data from a data feed to be provided to a plurality of machine learning models and detecting a changepoint in the first data. In response to the changepoint being detected, a first machine learning model may be executed on the first data to obtain first output datasets. A first performance score for the first machine learning model may be computed based on the first output datasets. A second machine learning model may be caused to execute on the first data based on the first performance score satisfying a first condition.
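The described flow can be sketched end to end, with a simple mean-shift test standing in for the changepoint detector; the window size, shift threshold, and model interfaces are illustrative assumptions:

```python
def changepoint(feed, window=3, shift=2.0):
    """True if the mean of the last `window` points moved by > `shift`
    relative to the preceding window."""
    if len(feed) < 2 * window:
        return False
    recent = sum(feed[-window:]) / window
    prior = sum(feed[-2 * window:-window]) / window
    return abs(recent - prior) > shift

def run_pipeline(feed, model_a, score, model_b, threshold):
    # Only run models when a changepoint is detected in the feed.
    if not changepoint(feed):
        return None
    out_a = model_a(feed)
    if score(out_a) >= threshold:   # first condition satisfied
        return model_b(feed)        # cascade to the second model
    return out_a
```

This illustrates the resource-allocation idea: the second model consumes compute only when the first model's performance score warrants it.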