Patent classification: G06F11/3466
Dynamic graphics processing unit register allocation
Systems, apparatuses, and methods for dynamic graphics processing unit (GPU) register allocation are disclosed. A GPU includes at least a plurality of compute units (CUs), a control unit, and a plurality of registers for each CU. If a new wavefront requests more registers than are currently available on the CU, the control unit spills registers associated with stack frames at the bottom of a stack, since those frames are unlikely to be used in the near future. The control unit has complete flexibility in determining how many registers to spill based on dynamic demands, and it can prefetch the upcoming necessary fills without software involvement. Effectively, the control unit manages the physical register file as a cache. This allows younger workgroups to be dynamically descheduled so that older workgroups can allocate additional registers when needed, improving fairness and forward-progress guarantees.
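The spill policy described in this abstract can be illustrated with a minimal sketch: a register file that, when a new allocation does not fit, evicts registers belonging to the bottom-of-stack frames first, treating the physical register file like a cache. All class and frame names below are invented for illustration and do not come from the patent.

```python
# Hypothetical sketch of the spill policy: when a new wavefront needs
# more registers than are free, registers of the oldest (bottom-of-stack)
# frames are spilled to memory first, since they are unlikely to be
# used in the near future.

class RegisterFile:
    def __init__(self, total_registers):
        self.total = total_registers
        self.free = total_registers
        # Stack of (frame_id, register_count); index 0 is the bottom
        # of the stack, i.e. the frames least likely to be needed soon.
        self.frames = []
        self.spilled = []  # frame_ids whose registers now live in memory

    def allocate(self, frame_id, count):
        # Spill bottom-of-stack frames until the request fits.
        while self.free < count and self.frames:
            victim_id, victim_regs = self.frames.pop(0)
            self.spilled.append(victim_id)
            self.free += victim_regs
        if self.free < count:
            return False  # cannot satisfy even after spilling everything
        self.frames.append((frame_id, count))
        self.free -= count
        return True

rf = RegisterFile(total_registers=8)
rf.allocate("frame_A", 5)
rf.allocate("frame_B", 3)        # register file is now full
ok = rf.allocate("frame_C", 4)   # forces a spill of the bottom frame
```

A real control unit would also handle the prefetched fills when a spilled frame becomes active again; that half of the mechanism is omitted here for brevity.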
Node health prediction based on failure issues experienced prior to deployment in a cloud computing system
To improve the reliability of nodes that are utilized by a cloud computing provider, information about the entire lifecycle of nodes can be collected and used to predict when nodes are likely to experience failures based at least in part on early lifecycle errors. In one aspect, a plurality of failure issues experienced by a plurality of production nodes in a cloud computing system during a pre-production phase can be identified. A subset of the plurality of failure issues can be selected based at least in part on correlation with service outages for the plurality of production nodes during a production phase. A comparison can be performed between the subset of the plurality of failure issues and a set of failure issues experienced by a pre-production node during the pre-production phase. A risk score for the pre-production node can be calculated based at least in part on the comparison.
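The scoring idea in this abstract can be sketched in a few lines: from all failure issues seen in pre-production, keep only those correlated with production outages, then score a new pre-production node by how many of its issues fall in that subset. The issue names, outage counts, and the simple overlap-fraction score below are illustrative assumptions, not details from the patent.

```python
# Hedged sketch of the node risk-scoring flow described above.

def select_outage_correlated(issue_outage_counts, min_outages=1):
    """Keep issues that co-occurred with at least `min_outages` production outages."""
    return {issue for issue, n in issue_outage_counts.items() if n >= min_outages}

def risk_score(node_issues, correlated_issues):
    """Fraction of the node's pre-production issues that historically
    correlated with production outages (0.0 = low risk, 1.0 = high risk)."""
    if not node_issues:
        return 0.0
    overlap = set(node_issues) & correlated_issues
    return len(overlap) / len(node_issues)

# Historical counts of production outages co-occurring with each issue.
history = {"disk_scan_fail": 4, "bios_mismatch": 0, "nic_flap": 2}
correlated = select_outage_correlated(history)
score = risk_score(["disk_scan_fail", "bios_mismatch"], correlated)
```

A production implementation would presumably weight issues by correlation strength rather than treating the subset as a flat set.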
Systems and methods for dynamic aggregation of data and minimization of data loss
A computer-implemented system for dynamic aggregation of data and minimization of data loss is disclosed. The system may be configured to perform instructions for: aggregating information from a plurality of networked systems by collecting a set of data from the networked systems, the set of data comprising data associated with a predetermined period of time and comprising one or more central variables that are included in data associated with more than one networked system of the plurality of networked systems and one or more associated variables that describe one or more aspects of the central variables; retrieving one or more data transformation rules based on a relational map among the central variables and the associated variables; and aggregating the set of data into one or more master data structures corresponding to the central variables based on the data transformation rules.
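The aggregation flow above can be sketched concretely: records from several networked systems share a central variable (here `customer_id`), and transformation rules map each system's field names onto a master structure keyed by that central variable. All system and field names are invented for illustration.

```python
# Illustrative sketch of aggregating per-system records into master
# data structures keyed by a central variable.

def aggregate(records, rules):
    """records: list of (system, record_dict);
    rules: {system: {source_field: master_field}} transformation rules."""
    master = {}
    for system, rec in records:
        key = rec["customer_id"]           # the central variable
        entry = master.setdefault(key, {"customer_id": key})
        for src, dst in rules.get(system, {}).items():
            if src in rec:
                entry[dst] = rec[src]      # associated variables, renamed
    return master

records = [
    ("crm",     {"customer_id": "c1", "cust_name": "Acme"}),
    ("billing", {"customer_id": "c1", "amt_due": 120}),
]
rules = {"crm": {"cust_name": "name"}, "billing": {"amt_due": "balance"}}
master = aggregate(records, rules)
```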
Optimizing host CPU usage based on virtual machine guest OS power and performance management
Techniques for optimizing CPU usage in a host system based on VM guest OS power and performance management are provided. In one embodiment, a hypervisor of the host system can capture information from a VM guest OS that pertains to a target power or performance state set by the guest OS for a vCPU of the VM. The hypervisor can then perform, based on the captured information, one or more actions that align usage of host CPU resources by the vCPU with the target power or performance state.
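A minimal sketch of the alignment step, assuming a hypothetical mapping from guest-set states to host actions: the hypervisor reads the power or performance state the guest OS set for a vCPU and chooses a corresponding action on the host CPU backing it. The state names and actions below are illustrative, not a real hypervisor API.

```python
# Hedged sketch: map a guest OS's target power/performance state for a
# vCPU to an action on the host CPU resources backing that vCPU.

GUEST_STATE_TO_HOST_ACTION = {
    "low_power":        "deschedule_vcpu",        # guest idled the vCPU
    "balanced":         "normal_scheduling",
    "high_performance": "pin_to_dedicated_core",  # guest wants max throughput
}

def align_host_cpu(guest_target_state):
    # Fall back to normal scheduling for states the hypervisor
    # does not recognize.
    return GUEST_STATE_TO_HOST_ACTION.get(guest_target_state, "normal_scheduling")
```

In practice the captured information would come from guest writes to power-management interfaces (for example, P-state or C-state requests) intercepted by the hypervisor; the table above stands in for that capture step.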
Dynamic generation of instrumentation locators from a document object model
Systems for web page or web application instrumentation. Embodiments commence upon identification of a computer-readable user interface description comprising at least some markup language conforming to a respective document object model that is codified in a computer-readable language. An injector process modifies the user interface description by inserting markup text and code into the user interface description, where the inserted code includes instrumentation code to invoke dynamic generation of instrumentation locator IDs using the hierarchical elements found in the document object model. The modified computer-readable interface description is transmitted to a user device. Log messages are emitted upon user actions taken while using the user device. The log messages comprise the instrumentation locator IDs that are formed using hierarchical elements found in the document object model.
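The locator-ID generation described above can be sketched as a walk over an element's DOM ancestry: tag names and sibling indexes are joined into a stable identifier that log messages can carry. The element representation below is invented for illustration; a real injector would operate on the browser's live document object model.

```python
# Hedged sketch: build an instrumentation locator ID from the
# hierarchical elements of a document object model.

def locator_id(path):
    """path: list of (tag, sibling_index) pairs from the root element
    down to the instrumented element."""
    return "/".join(f"{tag}[{i}]" for tag, i in path)

# e.g. <html><body>...<div>...<button> where the div is the second
# child of body and the button is the third child of the div.
lid = locator_id([("html", 0), ("body", 0), ("div", 1), ("button", 2)])
```

Because the ID is derived from the hierarchy rather than hand-assigned attributes, the injected instrumentation code can generate it dynamically for any element a user interacts with.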
Anomaly pattern detection system and method
Provided is an anomaly pattern detection system including an anomaly detection device connected to one or more servers. The anomaly detection device may include an anomaly detector configured to model input data by considering all of the input data as normal patterns, and detect an anomaly pattern from the input data based on the modeling result.
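The modeling idea above, treating all of the input data as normal and flagging deviations from the fitted model, can be sketched with a simple mean/variance profile. The z-score threshold is an illustrative choice, not a detail taken from the patent.

```python
import math

# Hedged sketch: model all observed input as normal, then flag new
# points that deviate too far from that profile.

def fit_normal_profile(values):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, math.sqrt(var)

def is_anomaly(value, profile, threshold=3.0):
    mean, std = profile
    if std == 0:
        return value != mean
    return abs(value - mean) / std > threshold

# All training data is assumed normal; no labels are required.
profile = fit_normal_profile([10, 11, 9, 10, 10, 12, 9])
```

A deployed detector would use a richer model than mean and standard deviation, but the unsupervised everything-is-normal framing is the same.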
Anomaly detection using user behavioral biometrics profiling method and apparatus
Techniques for determining anomalous user behavior in connection with an online application are disclosed. In one embodiment, a method is disclosed comprising: obtaining user behavior data in connection with a user of an application; generating feature data using the obtained user behavior data; and obtaining one or more user behavior anomaly predictions from one or more anomaly prediction models trained to output a user behavior anomaly prediction in response to the feature data. Each user behavior anomaly prediction indicates a probability that the user behavior is anomalous. A user behavior anomaly determination is made using the user behavior anomaly prediction(s).
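The prediction-then-determination flow above can be sketched with trivial stand-in models: each emits a probability that the observed behavior is anomalous, and the final determination combines them against a threshold. The feature names, the two toy models, and the averaging rule are all illustrative assumptions.

```python
# Hedged sketch: combine per-model anomaly probabilities into a single
# user behavior anomaly determination.

def typing_speed_model(features):
    # Stand-in for a trained model: very fast typing looks automated.
    return 0.9 if features["keys_per_sec"] > 15 else 0.1

def mouse_path_model(features):
    # Stand-in for a trained model: near-perfectly straight mouse
    # paths look scripted.
    return 0.8 if features["path_linearity"] > 0.95 else 0.2

def anomaly_determination(features, models, threshold=0.5):
    probs = [m(features) for m in models]
    return sum(probs) / len(probs) > threshold, probs

features = {"keys_per_sec": 20, "path_linearity": 0.99}
is_anom, probs = anomaly_determination(
    features, [typing_speed_model, mouse_path_model]
)
```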
Method and apparatus for employing machine learning solutions
A method, system and computer program product, the method comprising: obtaining computer code of an employed system comprising a plurality of components; obtaining data related to operating the plurality of components; based on the computer code and the data, identifying: a first component from the plurality of components, to be maintained; and a second component from the plurality of components, to be at least partly replaced by a machine learning component; and providing to a user an identification of the first component and the second component.
Predicting and managing requests for computing resources or other resources
Requests for computing resources and other resources can be predicted and managed. For example, a system can determine a baseline prediction indicating a number of requests for an object over a future time-period. The system can then execute a first model to generate a first set of values based on seasonality in the baseline prediction, a second model to generate a second set of values based on short-term trends in the baseline prediction, and a third model to generate a third set of values based on the baseline prediction. The system can select a most accurate model from among the three models and generate an output prediction by applying the set of values output by the most accurate model to the baseline prediction. Based on the output prediction, the system can cause an adjustment to be made to a provisioning process for the object.
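The model-selection step described above can be sketched as a backtest: each of the three candidate models produces adjustment values for the baseline forecast, and the one with the lowest error against observed demand is chosen and applied. The mean-absolute-error metric and multiplicative adjustments below are assumptions for illustration; the patent does not specify them.

```python
# Illustrative sketch: pick the most accurate of three candidate
# models and apply its values to the baseline request prediction.

def mean_abs_error(pred, actual):
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

def select_and_apply(baseline, candidates, actual):
    """candidates: {model_name: adjustment_factors}; the model whose
    adjusted prediction best matches `actual` wins."""
    best_name, best_err = None, float("inf")
    for name, factors in candidates.items():
        pred = [b * f for b, f in zip(baseline, factors)]
        err = mean_abs_error(pred, actual)
        if err < best_err:
            best_name, best_err = name, err
    factors = candidates[best_name]
    return best_name, [b * f for b, f in zip(baseline, factors)]

baseline = [100, 100, 100]              # baseline request prediction
candidates = {
    "seasonality":      [1.25, 0.75, 1.25],
    "short_term_trend": [1.0, 1.0, 1.25],
    "baseline_only":    [1.0, 1.0, 1.0],
}
actual = [98, 101, 127]                 # observed request counts
best, output = select_and_apply(baseline, candidates, actual)
```

The resulting output prediction would then drive the adjustment to the provisioning process for the object.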
Pre-migration detection and resolution of issues in migrating database systems
Implementations include: providing, by a computer-executed migration advisor executing within a run-time of a source database system, a query data set including queries processed by the source database system during production use of the source database system; providing, by the migration advisor, an object data set including data representative of database objects stored within a database of the source database system; generating, by the migration advisor, a list of query-level features and a list of object-level features, each feature in the list of query-level features and each feature in the list of object-level features being a feature that is deprecated in a target database system; resolving one or more issues represented by features of one or more of the list of query-level features and the list of object-level features; and executing migration of the database of the source database system to the target database system.
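The feature-detection step above can be sketched as a scan of production queries and database objects for features known to be deprecated in the target system, yielding the query-level and object-level lists the advisor would report. The deprecated-feature names and the object representation are made up for illustration.

```python
# Hedged sketch: build the query-level and object-level lists of
# features that are deprecated in the target database system.

DEPRECATED_QUERY_FEATURES = {"OLD_HINT", "LEGACY_JOIN_SYNTAX"}
DEPRECATED_OBJECT_FEATURES = {"row_trigger_v1"}

def find_deprecated(queries, objects):
    query_level = sorted(
        {f for q in queries for f in DEPRECATED_QUERY_FEATURES if f in q}
    )
    object_level = sorted(
        {f for o in objects
         for f in o.get("features", [])
         if f in DEPRECATED_OBJECT_FEATURES}
    )
    return query_level, object_level

queries = ["SELECT /*+ OLD_HINT */ * FROM t", "SELECT 1"]
objects = [{"name": "trg_audit", "features": ["row_trigger_v1"]}]
query_features, object_features = find_deprecated(queries, objects)
```

Resolution of the detected issues (rewriting queries, converting objects) would follow this scan, before the actual migration is executed.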