Patent classifications
G06F11/302
Optimizing distribution of heterogeneous software process workloads
A request is received to schedule a new software process. Description data associated with the new software process is retrieved. A workload resource prediction is requested and received for the new software process. A landscape directory is analyzed to determine a computing host in a managed landscape on which to load the new software process. The new software process is executed on the computing host.
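The scheduling flow above can be sketched as a small Python example. Everything here is an illustrative assumption: the function names (`predict_resources`, `choose_host`, `schedule`), the dict-based landscape directory, and the CPU/memory demand model are stand-ins, not taken from the patent.

```python
def predict_resources(description):
    """Stand-in workload resource prediction: derive a CPU/memory
    estimate from the process description data (hypothetical keys)."""
    return {"cpu": description.get("threads", 1),
            "mem": description.get("heap_mb", 256)}

def choose_host(landscape, demand):
    """Analyze the landscape directory and pick a host whose free
    capacity fits the predicted demand."""
    candidates = [
        (name, free) for name, free in landscape.items()
        if free["cpu"] >= demand["cpu"] and free["mem"] >= demand["mem"]
    ]
    if not candidates:
        return None
    # Prefer the host with the largest remaining CPU headroom.
    return max(candidates, key=lambda item: item[1]["cpu"])[0]

def schedule(landscape, description):
    """Predict resources for the new process, select a host, and
    reserve the predicted capacity on it."""
    demand = predict_resources(description)
    host = choose_host(landscape, demand)
    if host is not None:
        landscape[host]["cpu"] -= demand["cpu"]
        landscape[host]["mem"] -= demand["mem"]
    return host

landscape = {"hostA": {"cpu": 4, "mem": 2048},
             "hostB": {"cpu": 16, "mem": 8192}}
print(schedule(landscape, {"threads": 8, "heap_mb": 1024}))  # hostB
```

In a real managed landscape the prediction step would call out to a trained model and the directory would be a shared service, but the select-then-reserve shape is the same.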
Dynamically adjusting statistics collection time in a database management system
Each of one or more commit cycles may be associated with a predicted number of updates. A statistics collection time for a database table can be determined by estimating a sum of predicted updates included in one or more commit cycles. Whether the estimated sum of predicted updates is greater than a first threshold may be determined. In addition, a progress point for a first one of the commit cycles can be determined. A time to collect statistics may be selected based on the progress point of the first commit cycle.
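A minimal sketch of the decision described above, under assumed details: the threshold comparison, the 0.5 progress cutoff, and the three outcome labels are illustrative choices, not specifics from the abstract.

```python
def should_collect_now(commit_cycles, threshold, progress_cutoff=0.5):
    """Sum the predicted updates across pending commit cycles; if the
    sum exceeds the threshold, pick a statistics-collection time based
    on the first cycle's progress point: wait for a nearly finished
    cycle to commit, otherwise collect immediately."""
    predicted_total = sum(c["predicted_updates"] for c in commit_cycles)
    if predicted_total <= threshold:
        return "defer"
    first = commit_cycles[0]
    if first["progress"] >= progress_cutoff:
        return "after_first_commit"
    return "now"
```

The point of keying on the progress point is to avoid collecting statistics that a large in-flight commit will immediately invalidate.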
Embedded persistent queue
Various aspects are disclosed for distributed application management using an embedded persistent queue framework. In some aspects, task execution data is monitored from a plurality of task execution engines. A task request is identified. The task request can include a task and a Boolean predicate for task assignment. The task is assigned to a task execution engine embedded in a distributed application process if the Boolean predicate is true and a capacity of the task execution engine is sufficient to execute the task. The task is enqueued in a persistent queue. The task is retrieved from the persistent queue and executed.
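The predicate-and-capacity check can be sketched as follows. The JSON-journal queue and the `assign` signature are simplifications I am assuming for illustration; the patent's embedded queue framework would persist within the distributed application process itself.

```python
import json
from collections import deque

class PersistentQueue:
    """Toy file-backed queue standing in for the embedded persistent
    queue; every mutation rewrites a JSON journal so queued tasks
    survive a process restart."""
    def __init__(self, path):
        self.path = path
        self.items = deque()

    def _flush(self):
        with open(self.path, "w") as f:
            json.dump(list(self.items), f)

    def enqueue(self, task):
        self.items.append(task)
        self._flush()

    def dequeue(self):
        task = self.items.popleft()
        self._flush()
        return task

def assign(task, predicate, engine):
    """Assign the task only when its Boolean predicate holds AND the
    engine has enough spare capacity, then enqueue it for execution."""
    if predicate(engine) and engine["capacity"] >= task["cost"]:
        engine["queue"].enqueue(task)
        engine["capacity"] -= task["cost"]
        return True
    return False
```

Keeping the predicate separate from the capacity test mirrors the abstract: the predicate expresses *where* a task may run, while capacity expresses *whether* it can run there now.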
Orchestration for automated performance testing
Methods, systems, and devices supporting orchestration for automated performance testing are described. A server may orchestrate performance testing for software applications across multiple different test environments. The server may receive a performance test indicating an application to test and a set of test parameters. The server may determine a local or a non-local test environment for running the performance test. The server may deploy the application to the test environment, where the deploying involves deploying a first component of the performance test to a first test artifact in the test environment and deploying a second component of the performance test different from the first component to a second test artifact in the test environment. The server may execute the performance test to obtain a result set, where the executing involves executing multiple performance test components as well as orchestrating results across multiple test artifacts to obtain the result set.
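The two-step deploy/execute orchestration can be sketched like this. The artifact naming, the `run_component` callback, and the dict result set are assumptions made for the sketch, not details from the patent.

```python
def deploy(test, environment):
    """Deploy each component of the performance test to its own test
    artifact in the selected (local or non-local) environment."""
    return {f"artifact-{i}": {"component": comp, "env": environment}
            for i, comp in enumerate(test["components"])}

def execute(artifacts, run_component):
    """Execute every deployed component and orchestrate the
    per-artifact results into a single result set."""
    return {name: run_component(a["component"], a["env"])
            for name, a in artifacts.items()}
```

The key structural idea is that deployment fans the test out across artifacts, while execution fans the per-artifact results back in.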
Management of microservices failover
Embodiments described herein are generally directed to intelligent management of microservices failover. In an example, responsive to an uncorrectable hardware error associated with a processing resource of a platform on which a task of a service is being performed by a primary microservice, a failover trigger is received by a failover service. A secondary microservice is identified by the failover service that is operating in lockstep mode with the primary microservice. The secondary microservice is caused by the failover service to take over performance of the task in non-lockstep mode based on failover metadata persisted by the primary microservice. The primary microservice is caused by the failover service to be taken offline.
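The failover sequence reduces to three state transitions, sketched below with dicts standing in for the microservices. The field names (`mode`, `persisted_metadata`, `online`) are illustrative assumptions.

```python
def handle_failover(primary, secondary):
    """Failover-service reaction to an uncorrectable hardware error:
    promote the lockstep secondary to non-lockstep operation, resume
    from the metadata the primary persisted, then take the primary
    offline."""
    assert secondary["mode"] == "lockstep"  # precondition from the abstract
    secondary["mode"] = "non-lockstep"
    secondary["task_state"] = dict(primary["persisted_metadata"])
    primary["online"] = False
    return secondary
```

Dropping out of lockstep matters because the failed primary can no longer mirror execution; the persisted metadata is what lets the secondary continue the task alone.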
Productivity platform providing user specific functionality
An apparatus in one embodiment comprises at least one processing platform including a plurality of processing devices. The processing platform is configured to receive a request to deploy one or more applications of a plurality of selected applications, wherein the plurality of selected applications are selected based on a determined role of an individual within an enterprise, and to deploy the one or more applications for at least one user device responsive to the request. The processing platform is further configured to monitor execution of the one or more applications in connection with the at least one user device, to receive and analyze data corresponding to the execution of the one or more applications, and to automatically generate one or more recommendations in connection with the deployment of the one or more applications for the at least one user device based on the received and analyzed data.
Intelligently adaptive log level management of a service mesh
Systems, methods and/or computer program products dynamically managing log levels of microservices in a service mesh based on predicted error rates of calls made to the service mesh. A first AI module predicts health, status and/or failures of microservices individually or as part of microservice chains with a particular confidence level. Using health status mapped to the microservices and historical information inputted into a knowledge base (including error rates), the first AI module predicts error rates of API calls for each user profile or for the service mesh generally. A second AI module analyzes the predictions provided by the first AI module and determines whether the predictions meet threshold levels of confidence. To improve the confidence of predictions that are below threshold levels, the second AI module dynamically adjusts application logs of the microservices and/or proxies thereof to an appropriate level to capture more detailed information within the logs.
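The second module's adjustment step can be sketched as a one-notch escalation of verbosity. The level ladder, the 0.9 default threshold, and the single-step policy are assumptions for illustration, not the patent's method.

```python
LOG_LEVELS = ["ERROR", "WARN", "INFO", "DEBUG", "TRACE"]

def adjust_log_level(prediction_confidence, current_level,
                     confidence_threshold=0.9):
    """When a prediction's confidence is below the threshold, step the
    microservice's log level one notch toward more detail so future
    predictions have richer input; otherwise leave the level alone."""
    idx = LOG_LEVELS.index(current_level)
    if prediction_confidence < confidence_threshold and idx < len(LOG_LEVELS) - 1:
        return LOG_LEVELS[idx + 1]
    return current_level
```

Escalating one notch at a time keeps log volume bounded while still feeding the knowledge base more detail where predictions are weakest.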
Dynamic management of network policies between microservices within a service mesh
Systems, methods and/or computer program products optimizing network policies between microservices of a service mesh. The service mesh tracks incoming API calls of applications and, based on the historical transactions, the context of API calls, and the microservices in the microservice chain being invoked, network controls and policy configurations are set to optimize the transactions performed by the service mesh. Dimensions of the communications between microservices of the service mesh are dynamically optimized via the service mesh control plane using a policy optimizer. Optimized dimensions of service mesh transactions include automated policy adjustments to retries between microservices, circuit breaking between microservices, automated timeout adjustments between microservices, and intelligent rate limiting between microservices and/or rate limiting applied to user profiles.
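A policy optimizer of this shape can be sketched as a pure function from observed history to per-link settings. The specific cutoffs (20% error rate, 1.5x latency headroom) and field names are invented for the sketch.

```python
def tune_policy(history):
    """Derive retry, circuit-breaker, and timeout settings for one
    microservice-to-microservice link from its transaction history."""
    error_rate = history["errors"] / max(history["calls"], 1)
    p95_latency_ms = history["p95_latency_ms"]
    return {
        # Fewer retries when the downstream service is already failing,
        # so retries do not amplify the overload.
        "retries": 0 if error_rate > 0.2 else 2,
        # Trip the circuit breaker sooner on error-prone links.
        "circuit_breaker_threshold": 5 if error_rate > 0.1 else 20,
        # Give the timeout headroom above the observed tail latency.
        "timeout_ms": int(p95_latency_ms * 1.5),
    }
```

In a real mesh these values would be pushed through the control plane (e.g. as per-route configuration) rather than returned as a dict.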
Resource allocation optimization for multi-dimensional machine learning environments
Some embodiments of the present application include obtaining first data from a data feed to be provided to a plurality of machine learning models and detecting a changepoint in the first data. In response to the changepoint being detected, a first machine learning model may be executed on the first data to obtain first output datasets. A first performance score for the first machine learning model may be computed based on the first output datasets. A second machine learning model may be caused to execute on the first data based on the first performance score satisfying a first condition.
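The changepoint-then-cascade flow can be sketched as below. The deviation-based changepoint test, the score floor, and the callback signatures are all simplifications assumed for the example; real changepoint detectors are considerably more sophisticated.

```python
def detect_changepoint(series, window=3, factor=2.0, tol=1.0):
    """Flag a changepoint when the newest value deviates from the
    trailing-window mean by more than `factor` times the window's mean
    absolute deviation, plus a small absolute tolerance."""
    if len(series) <= window:
        return False
    recent = series[-window - 1:-1]
    mean = sum(recent) / window
    mad = sum(abs(x - mean) for x in recent) / window
    return abs(series[-1] - mean) > factor * mad + tol

def run_cascade(series, models, score_fn, score_floor=0.8):
    """On a detected changepoint, execute the first model on the data;
    a performance score below the floor is the condition that causes
    the next model to execute, and so on down the list."""
    if not detect_changepoint(series):
        return None
    output = None
    for model in models:
        output = model(series)
        if score_fn(output) >= score_floor:
            break
    return output
```

Gating model execution on a detected changepoint is what saves resources: when the feed is stable, no model runs at all.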
Error remediation systems and methods
A computer system is provided. The computer system includes a memory, a network interface, and at least one processor configured to monitor a user interface comprising a plurality of user interface elements; detect at least one changed element within the plurality of user interface elements; classify, in response to detecting the at least one changed element, the at least one changed element as either indicating or not indicating an error; generate, in response to classifying the at least one changed element as indicating an error, an error signature that identifies the at least one changed element; identify, using the error signature, a remediation for the error; and provide the remediation in association with the at least one changed element.
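The classify/signature/remediate pipeline can be sketched with dicts standing in for UI elements. The keyword-based classifier, the `(id, css_class)` signature, and the lookup table are illustrative assumptions; the patent does not specify these details.

```python
def classify_change(element):
    """Toy classifier: treat a changed element as indicating an error
    when it carries an error-styled CSS class or error-like text."""
    text = element.get("text", "").lower()
    return "error" in element.get("css_class", "") or "failed" in text

def make_signature(element):
    """Error signature identifying the changed element."""
    return (element.get("id"), element.get("css_class"))

# Hypothetical signature-to-remediation mapping.
REMEDIATIONS = {
    ("login-status", "error-banner"): "Reset the user's session token.",
}

def remediate(element):
    """Classify the changed element; when it indicates an error, look
    up a remediation by its signature and return it."""
    if not classify_change(element):
        return None
    return REMEDIATIONS.get(make_signature(element),
                            "Escalate: no known remediation.")
```

A production classifier would likely be a trained model rather than keyword rules, but the signature-keyed lookup is what ties a detected error to its remediation.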