G06F11/3495

Dynamic management of network policies between microservices within a service mesh

Systems, methods, and/or computer program products for optimizing network policies between microservices of a service mesh. The service mesh tracks incoming API calls of applications; based on the historical transactions, the context of the API calls, and the microservices in the microservice chain being invoked, network controls and policy configurations are set to optimize the transactions performed by the service mesh. Dimensions of the communications between microservices of the service mesh are dynamically optimized via the service mesh control plane using a policy optimizer. Optimized dimensions of service mesh transactions include automated policy adjustments to retries between microservices, circuit breaking between microservices, automated timeout adjustments between microservices, and intelligent rate limiting between microservices and/or rate limiting applied to user profiles.
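
For illustration only, a minimal Python sketch of a policy optimizer of the kind described above, deriving retry, timeout, circuit-breaking, and rate-limit settings from observed per-edge call statistics; the field names, thresholds, and heuristics are assumptions, not values from the disclosure.

```python
# Hypothetical sketch: derive retry, timeout, circuit-breaking, and rate-limit
# policies for one caller->callee edge from observed call statistics.
from dataclasses import dataclass
from statistics import quantiles


@dataclass
class CallStats:
    latencies_ms: list[float]   # recent latencies for one caller->callee edge
    error_rate: float           # fraction of failed calls
    calls_per_second: float     # observed request rate


@dataclass
class EdgePolicy:
    retries: int
    timeout_ms: int
    circuit_break_errors: float  # error-rate threshold that opens the circuit
    rate_limit_rps: int


def optimize_policy(stats: CallStats) -> EdgePolicy:
    # Timeout: a margin above the 95th-percentile latency.
    p95 = quantiles(stats.latencies_ms, n=20)[18]
    timeout_ms = int(p95 * 1.5)

    # Retries: back off when the edge is already failing often,
    # since retry storms amplify load on an unhealthy callee.
    retries = 0 if stats.error_rate > 0.2 else (1 if stats.error_rate > 0.05 else 2)

    # Circuit breaking: open earlier for edges that are already degraded.
    circuit_break_errors = 0.5 if stats.error_rate < 0.05 else 0.25

    # Rate limiting: allow modest headroom over the observed rate.
    rate_limit_rps = max(1, int(stats.calls_per_second * 1.2))

    return EdgePolicy(retries, timeout_ms, circuit_break_errors, rate_limit_rps)


if __name__ == "__main__":
    stats = CallStats(latencies_ms=[12, 15, 11, 40, 18, 22, 30, 14, 90, 16,
                                    13, 17, 25, 19, 21, 28, 35, 12, 15, 20],
                      error_rate=0.03, calls_per_second=150)
    print(optimize_policy(stats))
```

In a real mesh the resulting policy would be pushed through the control plane to the data-plane proxies rather than returned as a Python object.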

RESOURCE ALLOCATION OPTIMIZATION FOR MULTI-DIMENSIONAL MACHINE LEARNING ENVIRONMENTS

Some embodiments of the present application include obtaining first data from a data feed to be provided to a plurality of machine learning models and detecting a changepoint in the first data. In response to the changepoint being detected, a first machine learning model may be executed on the first data to obtain first output datasets. A first performance score for the first machine learning model may be computed based on the first output datasets. A second machine learning model may be caused to execute on the first data based on the first performance score satisfying a first condition.
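
A compact sketch of the described flow under simple assumptions: a naive mean-shift changepoint test, two placeholder models, and an arbitrary score floor standing in for the "first condition".

```python
# Illustrative sketch: detect a changepoint in a data feed, run a first model,
# score it, and only invoke a second model when the score satisfies a condition.
import numpy as np


def detect_changepoint(x: np.ndarray, threshold: float = 3.0) -> bool:
    # Naive mean-shift test: compare the two halves of the window.
    half = len(x) // 2
    a, b = x[:half], x[half:]
    pooled = np.sqrt((a.var() + b.var()) / 2) + 1e-9
    return abs(a.mean() - b.mean()) / pooled > threshold


def performance_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Lower mean absolute error -> higher score.
    return 1.0 / (1.0 + np.abs(y_true - y_pred).mean())


def cheap_model(x: np.ndarray) -> np.ndarray:
    return np.full_like(x, x.mean())                      # predict the window mean


def expensive_model(x: np.ndarray) -> np.ndarray:
    return np.convolve(x, np.ones(3) / 3, mode="same")    # smoothed estimate


def process_feed(window: np.ndarray, score_floor: float = 0.5) -> str:
    if not detect_changepoint(window):
        return "no changepoint: keep current allocation"
    first_out = cheap_model(window)
    score = performance_score(window, first_out)
    if score < score_floor:                  # first condition satisfied
        expensive_model(window)              # escalate to the second model
        return f"escalated to second model (first score {score:.2f})"
    return f"first model sufficient (score {score:.2f})"


if __name__ == "__main__":
    feed = np.concatenate([np.random.normal(0, 1, 50), np.random.normal(5, 1, 50)])
    print(process_feed(feed))
```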

Data sampling for model exploration utilizing a plurality of machine learning models

The disclosed embodiments provide a system for processing data. During operation, the system obtains a training dataset containing a first set of records associated with a first set of identifier (ID) values and an evaluation dataset containing a second set of records associated with a second set of ID values. Next, the system selects a random subset of ID values from the second set of ID values. The system then generates a sampled evaluation dataset comprising a first subset of records associated with the random subset of ID values in the second set of records. The system also generates a sampled training dataset comprising a second subset of records associated with the random subset of ID values in the first set of records. Finally, the system outputs the sampled training dataset and the sampled evaluation dataset for use in training and evaluating a machine learning model.
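
A minimal sketch of the ID-consistent sampling described above, assuming each record is a dict with an "id" field; the field name and sample size are illustrative.

```python
# Pick a random subset of IDs from the evaluation set, then keep only records
# with those IDs in both the training and evaluation datasets.
import random


def sample_by_ids(training, evaluation, sample_size, seed=42):
    eval_ids = {rec["id"] for rec in evaluation}
    rng = random.Random(seed)
    chosen = set(rng.sample(sorted(eval_ids), min(sample_size, len(eval_ids))))
    sampled_eval = [rec for rec in evaluation if rec["id"] in chosen]
    sampled_train = [rec for rec in training if rec["id"] in chosen]
    return sampled_train, sampled_eval


if __name__ == "__main__":
    train = [{"id": i % 5, "x": i} for i in range(20)]
    evald = [{"id": i % 5, "y": i} for i in range(10)]
    tr, ev = sample_by_ids(train, evald, sample_size=2)
    print(len(tr), len(ev))
```

Sampling by ID rather than by row keeps every record belonging to a sampled entity in both splits, which is the point of the described approach.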

Long running workflows for robotic process automation

Systems and methods for executing a robotic process automation (RPA) workflow are provided. The RPA workflow is executed by a first robot. The execution of the RPA workflow is suspended by the first robot. A current context of the RPA workflow is serialized at a time of the suspension and the current context of the RPA workflow is stored. The execution of the RPA workflow is resumed by a second robot based on a triggering condition by retrieving the current context of the RPA workflow. The first robot and the second robot may be the same robot or different robots.
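
A hypothetical sketch of the suspend/serialize/resume handoff, with an in-memory dict standing in for the durable context store an orchestrator would provide; the step names and triggering condition are invented for the example.

```python
# Suspend a workflow, serialize its context, and let a second robot resume it.
import json

CONTEXT_STORE: dict[str, str] = {}   # stand-in for durable orchestrator storage

STEPS = ["fetch_invoice", "await_approval", "post_payment"]


def run_workflow(workflow_id: str, robot: str, start_step: int = 0, variables=None):
    variables = variables or {}
    for i in range(start_step, len(STEPS)):
        step = STEPS[i]
        if step == "await_approval" and not variables.get("approved"):
            # Suspend: serialize the current context and hand off.
            CONTEXT_STORE[workflow_id] = json.dumps(
                {"next_step": i, "variables": variables, "suspended_by": robot})
            return f"{robot}: suspended before '{step}'"
        variables[step] = "done"
    return f"{robot}: completed {variables}"


def resume_workflow(workflow_id: str, robot: str, **updates):
    # Triggering condition met (e.g. approval arrived): restore and continue.
    ctx = json.loads(CONTEXT_STORE.pop(workflow_id))
    ctx["variables"].update(updates)
    return run_workflow(workflow_id, robot, ctx["next_step"], ctx["variables"])


if __name__ == "__main__":
    print(run_workflow("wf-1", "robot-A"))
    print(resume_workflow("wf-1", "robot-B", approved=True))
```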

Intelligent management of stub files in hierarchical storage

Intelligent management of stub files in hierarchical storage is provided by: in response to identifying a file to migrate from a file system to offline storage, providing metadata for the file to a machine learning engine; receiving a stub profile for the file from the machine learning engine that indicates an offset from a beginning of the file and a length from the offset for previewing the file; and migrating the portion of the file from the file system to offline storage based on the stub profile. In some embodiments, this further comprises: monitoring file system operations; in response to detecting a read operation of the portion of the file: determining a file type; providing file data to the machine learning engine; and performing a supervised learning operation based on the file type and the file data to update the machine learning engine.
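
A simplified sketch of stub creation driven by a predicted preview region; predict_stub_profile stands in for the machine learning engine, and the offsets and lengths are illustrative defaults rather than values from the disclosure.

```python
# Migrate a file to offline storage while keeping a stub that retains the
# preview region indicated by a (predicted) stub profile.
import os


def predict_stub_profile(metadata: dict) -> dict:
    # Stand-in for the ML engine: e.g. keep a larger preview for media files.
    if metadata["extension"] in {".mp4", ".mkv"}:
        return {"offset": 0, "length": 1 << 20}      # first 1 MiB
    return {"offset": 0, "length": 4096}             # first 4 KiB


def migrate_with_stub(path: str, offline_dir: str) -> str:
    meta = {"extension": os.path.splitext(path)[1], "size": os.path.getsize(path)}
    profile = predict_stub_profile(meta)

    with open(path, "rb") as f:
        f.seek(profile["offset"])
        preview = f.read(profile["length"])

    # Move the full content to offline storage, leave only the preview as a stub.
    offline_path = os.path.join(offline_dir, os.path.basename(path))
    os.replace(path, offline_path)
    with open(path, "wb") as stub:
        stub.write(preview)
    return offline_path


if __name__ == "__main__":
    os.makedirs("offline", exist_ok=True)
    with open("report.log", "wb") as f:
        f.write(b"x" * 10_000)
    print(migrate_with_stub("report.log", "offline"))
```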

TECHNOLOGY ENVIRONMENT FOR A SOFTWARE APPLICATION

A system is configured to obtain information relating to a current application environment of a software application and build a plurality of model application environments based on the obtained information. The system runs the software application using the current application environment and each of the model application environments. The system collects a plurality of performance metrics related to performance of the software application in the current application environment and each of the model application environments while running in the simulated environments. The system generates a recommendation report based on the performance metrics, wherein the recommendation report comprises a recommendation of a different technology product for at least one of the technology components used in the current application environment, wherein the different technology product is different from a current technology product used for the at least one technology component in the current application environment.
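
One possible shape of the recommendation-report step, assuming fabricated environments and a single latency metric per component; the product names and numbers are placeholders.

```python
# Compare per-component metrics in the current environment against model
# environments and recommend a different product where one performs better.

current_env = {
    "database": {"product": "ProductA", "latency_ms": 40},
    "cache": {"product": "ProductB", "latency_ms": 5},
}

model_envs = {
    "model-1": {
        "database": {"product": "ProductC", "latency_ms": 25},
        "cache": {"product": "ProductB", "latency_ms": 5},
    },
}


def recommendation_report(current, candidates):
    report = []
    for env_name, env in candidates.items():
        for component, metrics in env.items():
            cur = current[component]
            # Recommend a different product only when it measurably improves
            # the collected metric for that component.
            if metrics["product"] != cur["product"] and metrics["latency_ms"] < cur["latency_ms"]:
                report.append({
                    "component": component,
                    "current_product": cur["product"],
                    "recommended_product": metrics["product"],
                    "observed_in": env_name,
                })
    return report


if __name__ == "__main__":
    for row in recommendation_report(current_env, model_envs):
        print(row)
```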

DETERMINING AN IMPROVED TECHNOLOGY ENVIRONMENT FOR A SOFTWARE APPLICATION

A system is configured to obtain information relating to a current application environment and a plurality of model application environments of a software application. The system runs the software application using the current application environment and each of the model application environments. The system collects a plurality of performance metrics related to performance of the software application in the current application environment and each of the model application environments while running in the simulated environments. The system assigns a score to each performance metric and determines a model application environment that yielded a higher score for a performance metric as compared to the score of the performance metric in the current application environment. The system recommends at least one technology product used for a corresponding technology component associated with the performance metric in the determined model application environment.
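
A sketch of the scoring comparison under the assumption that lower is better for latency-like metrics and higher is better otherwise; the metric names, products, and values are invented.

```python
# Score each collected metric, then recommend the product from whichever model
# environment scored higher than the current environment on that metric.

def score(metric_name: str, value: float) -> float:
    # Lower is better for latency-like metrics, higher is better for throughput.
    return -value if metric_name.endswith("_ms") else value


def better_environments(current: dict, model_envs: dict) -> list[dict]:
    recs = []
    for env_name, env in model_envs.items():
        for metric, (product, value) in env.items():
            cur_product, cur_value = current[metric]
            if score(metric, value) > score(metric, cur_value):
                recs.append({"metric": metric,
                             "environment": env_name,
                             "recommend": product,
                             "replacing": cur_product})
    return recs


if __name__ == "__main__":
    current = {"query_latency_ms": ("ProductA", 40), "requests_per_s": ("ProductB", 900)}
    models = {"model-1": {"query_latency_ms": ("ProductC", 25),
                          "requests_per_s": ("ProductB", 900)}}
    print(better_environments(current, models))
```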

REDUCING THE ENVIRONMENTAL IMPACT OF DISTRIBUTED COMPUTING
20230017632 · 2023-01-19

A process includes obtaining a workload and a set of candidate computing resources and predicting amounts of carbon emissions attributable to executing the workload on different members of the set of candidate computing resources. The process also includes predicting measures of computing performance of the different members of the set of candidate computing resources in executing the workload, and computing a set of scores based on the amounts of carbon emissions and the measures of computing performance. The process also includes orchestrating the workload based on the scores.
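
A minimal sketch of combining predicted emissions and predicted performance into a single score and orchestrating the workload on the best-scoring candidate; the candidate data, normalization, and weighting are assumptions for illustration.

```python
# Carbon-aware orchestration: score each candidate resource from predicted
# emissions and predicted performance, then place the workload on the best one.

candidates = [
    {"name": "dc-east", "grid_kgco2_per_kwh": 0.45, "est_kwh": 2.0, "est_runtime_s": 600},
    {"name": "dc-west", "grid_kgco2_per_kwh": 0.12, "est_kwh": 2.4, "est_runtime_s": 720},
]


def score(c: dict, carbon_weight: float = 0.7) -> float:
    emissions = c["grid_kgco2_per_kwh"] * c["est_kwh"]    # predicted kg CO2
    performance = 1.0 / c["est_runtime_s"]                # higher is better
    # Crude normalization so both terms land on comparable scales.
    return carbon_weight * (1.0 / (1.0 + emissions)) + (1 - carbon_weight) * performance * 600


def orchestrate(workload: str, pool: list[dict]) -> str:
    best = max(pool, key=score)
    return f"schedule '{workload}' on {best['name']}"


if __name__ == "__main__":
    print(orchestrate("nightly-batch", candidates))
```

With the weighting above, the lower-carbon site wins even though it runs the workload somewhat slower; shifting carbon_weight trades that off the other way.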

GENERATING TECHNOLOGY ENVIRONMENTS FOR A SOFTWARE APPLICATION

A system is configured to obtain information relating to a current application environment of a software application including information relating to technology components and technology products being used in the current application environment. The system builds one or more model application environments for the software application. The system receives a request for a level of performance associated with a technology component and selects a technology product for the technology component that satisfies the requested level of performance, based on a performance benchmark of the technology product. The system builds one of the model application environments using the selected technology product for the technology component that satisfies the requested level of performance.
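
A sketch of benchmark-driven product selection for one technology component; the benchmark table and the "smallest qualifying benchmark" tie-break rule are fabricated for the example.

```python
# Given a requested performance level, pick a product whose benchmark meets it
# and place it into a model application environment.

BENCHMARKS = {
    "database": {"ProductA": 8_000, "ProductC": 15_000},   # e.g. queries/s
    "cache": {"ProductB": 120_000},                        # e.g. ops/s
}


def select_product(component: str, required_level: float):
    options = BENCHMARKS.get(component, {})
    qualifying = {p: b for p, b in options.items() if b >= required_level}
    if not qualifying:
        return None
    # Prefer the smallest benchmark that still satisfies the request.
    return min(qualifying, key=qualifying.get)


def build_model_environment(current_env: dict, component: str, required_level: float) -> dict:
    product = select_product(component, required_level)
    model_env = dict(current_env)
    if product is not None:
        model_env[component] = product
    return model_env


if __name__ == "__main__":
    current = {"database": "ProductA", "cache": "ProductB"}
    print(build_model_environment(current, "database", required_level=10_000))
```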

ENHANCED REDEPLOYING OF COMPUTING RESOURCES
20230224256 · 2023-07-13

Examples described herein relate to a method, a resource management system, and a non-transitory machine-readable medium for redeploying a computing resource. Data related to a performance parameter corresponding to a plurality of computing resources deployed on a plurality of host-computing nodes may be received. The performance parameter is associated with one or both of: communication between computing resources of the plurality of computing resources, or communication of the plurality of computing resources with a network device. Further, for a computing resource of the plurality of computing resources, a candidate host-computing node is determined from the plurality of host-computing nodes based on the data related to the performance parameter, and the computing resource may be redeployed on the candidate host-computing node.
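
A hypothetical sketch of choosing a candidate host from communication-related performance data (peer and network-device latency); host names, metrics, and the weighted-cost placement rule are illustrative.

```python
# Pick a candidate host-computing node for redeployment by minimizing a
# weighted communication cost built from observed latencies.

# Observed round-trip latency (ms) from each host to the peers a resource
# talks to most, plus latency to its network device (e.g. a gateway).
HOST_METRICS = {
    "host-1": {"peer_latency_ms": 8.0, "gateway_latency_ms": 2.0},
    "host-2": {"peer_latency_ms": 3.0, "gateway_latency_ms": 5.0},
    "host-3": {"peer_latency_ms": 4.0, "gateway_latency_ms": 1.5},
}


def candidate_host(metrics: dict, peer_weight: float = 0.7) -> str:
    def cost(host: str) -> float:
        m = metrics[host]
        return peer_weight * m["peer_latency_ms"] + (1 - peer_weight) * m["gateway_latency_ms"]
    return min(metrics, key=cost)


def redeploy(resource: str, current_host: str, metrics: dict) -> str:
    target = candidate_host(metrics)
    if target == current_host:
        return f"{resource} stays on {current_host}"
    return f"redeploy {resource}: {current_host} -> {target}"


if __name__ == "__main__":
    print(redeploy("vm-42", "host-1", HOST_METRICS))
```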