Patent classifications
G06F2209/5019
Cloud application scaler
A system includes a processing system and a memory system. The memory system stores instructions for identifying a cloud application in a cloud environment as a non-disposable application and monitoring a plurality of instances of the non-disposable application running in the cloud environment. The instructions, when executed by the processing system, further result in determining that a number of the instances of the non-disposable application should be modified based on one or more demand predictions by an artificial intelligence scaler, adjusting the number of the instances of the non-disposable application running in the cloud environment based on the one or more demand predictions, and modifying an allocation of one or more resources of the cloud environment associated with adjusting the number of the instances of the non-disposable application.
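A minimal sketch of the scaling decision described in this abstract, assuming a hypothetical demand prediction (a throughput number) and a fixed per-instance capacity; the function names and the capacity model are invented here for illustration, not taken from the patent:

```python
import math

def target_instance_count(predicted_demand: float,
                          capacity_per_instance: float,
                          min_instances: int = 1) -> int:
    """Map the AI scaler's demand prediction to a target instance count."""
    needed = math.ceil(predicted_demand / capacity_per_instance)
    return max(min_instances, needed)

def adjust_instances(current: int, predicted_demand: float,
                     capacity_per_instance: float) -> int:
    """Return how many instances to add (positive) or drain (negative).

    Because the application is non-disposable, surplus instances would be
    drained gracefully by a lifecycle manager (not shown) rather than killed.
    """
    return target_instance_count(predicted_demand, capacity_per_instance) - current
```

For example, with 3 instances running, a predicted demand of 950 requests/s, and 200 requests/s per instance, `adjust_instances(3, 950, 200)` returns 2, i.e. scale up by two instances.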
WORKLOAD AWARE VIRTUAL PROCESSING UNITS
A processing unit is configured differently based on an identified workload, and each configuration of the processing unit is exposed to software (e.g., to a device driver) as a different virtual processing unit. Using these techniques, a processing system is able to provide different configurations of the processing unit to support different types of workloads, thereby conserving system resources. Further, by exposing the different configurations as different virtual processing units, the processing system is able to use existing device drivers or other system infrastructure to implement the different processing unit configurations.
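The exposure mechanism this abstract describes can be sketched as a table of per-workload configurations, each surfaced to software under its own virtual device identifier; the configuration fields and naming scheme below are assumptions for illustration:

```python
# Hypothetical configurations of one physical processing unit; each is
# exposed to the driver layer as a separate virtual processing unit.
CONFIGS = {
    "inference": {"compute_units": 8,  "cache_kb": 256},
    "graphics":  {"compute_units": 16, "cache_kb": 512},
}

def virtual_units() -> dict:
    """Expose each configuration as a distinct virtual processing unit."""
    return {f"vpu-{name}": cfg for name, cfg in CONFIGS.items()}

def select_unit(workload: str) -> str:
    """Pick the virtual unit matching an identified workload type."""
    vid = f"vpu-{workload}"
    if vid not in virtual_units():
        raise ValueError(f"no configuration for workload {workload!r}")
    return vid
```

Because each configuration looks like an ordinary device, an unmodified driver can bind to `vpu-inference` or `vpu-graphics` without knowing they share one physical unit.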
Resource processing method and apparatus for mobile terminal, computer device and storage medium
A resource processing method includes: determining a current application scenario and usage data of the mobile terminal; inputting the usage data into a machine learning algorithm model corresponding to the current application scenario to obtain predicted recommendation parameters; and configuring resources of the mobile terminal based on the recommendation parameters.
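The per-scenario routing in this method can be sketched as a dispatch table from application scenario to its own model; the stub "models" and parameter names below stand in for the trained machine learning models and are invented for illustration:

```python
# One model per application scenario; each stub maps usage data to
# recommended resource parameters (all names hypothetical).
def _game_model(usage: dict) -> dict:
    return {"cpu_freq_mhz": 2400 if usage["fps_target"] > 30 else 1800}

def _reader_model(usage: dict) -> dict:
    return {"cpu_freq_mhz": 1200}

MODELS = {"game": _game_model, "reader": _reader_model}

def configure_resources(scenario: str, usage: dict) -> dict:
    """Route usage data to the current scenario's model and return the
    predicted recommendation parameters used to configure resources."""
    return MODELS[scenario](usage)
```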
Determining a future operation failure in a cloud system
Examples described relate to determining a future operation failure in a cloud system. In an example, a historical utilization of resources for performing an operation in a cloud system may be determined. A current utilization of resources in the cloud system may be determined. Based on the historical utilization of resources for performing the operation in the cloud system and the current utilization of resources in the cloud system, a determination may be made whether a future performance of the operation in the cloud system is likely to be a failure. In response to a determination that the future performance of the operation in the cloud system is likely to be a failure, an alert may be generated.
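The comparison this example describes can be sketched as follows, assuming the historical utilization is summarized as a peak figure and adding a safety headroom (the headroom parameter is an assumption, not from the source):

```python
def likely_to_fail(historical_peak: float, capacity: float,
                   current_used: float, headroom: float = 0.1) -> bool:
    """Predict whether performing the operation now would exhaust resources.

    Compares what the operation historically needed (plus headroom)
    against what is currently free in the cloud system.
    """
    available = capacity - current_used
    return available < historical_peak * (1 + headroom)

def check_operation(historical_peak: float, capacity: float,
                    current_used: float) -> str:
    """Generate an alert when a future failure is predicted."""
    if likely_to_fail(historical_peak, capacity, current_used):
        return "ALERT: operation predicted to fail"
    return "ok"
```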
Workload tenure prediction for capacity planning
Disclosed are various embodiments for automating the prediction of workload tenures in datacenter environments. In some embodiments, parameters are identified for a plurality of workloads of a software defined data center. A machine learning model is trained to determine a predicted tenure based on parameters of the workloads. A workload for the software defined data center is configured to include at least one workload parameter. The workload is processed using the trained machine learning model to determine the predicted tenure. An input to the machine learning model includes the at least one workload parameter.
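As a toy stand-in for the trained model, the tenure prediction can be sketched with a nearest-neighbour lookup over historical workload parameters; the parameter choice (vCPUs, memory) and the history values are invented for illustration:

```python
# Historical workloads: ((vcpus, memory_gb), observed tenure in days).
HISTORY = [((2, 4), 30), ((8, 32), 365), ((4, 8), 90)]

def predict_tenure(params: tuple) -> int:
    """Predict a workload's tenure from its parameters by returning the
    tenure of the closest historical workload (1-nearest-neighbour)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, tenure = min((dist(p, params), t) for p, t in HISTORY)
    return tenure
```

A capacity planner would feed the predicted tenure into reclamation or placement decisions, e.g. packing short-lived workloads onto hosts scheduled for maintenance.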
Optimizing distribution of heterogeneous software process workloads
A request is received to schedule a new software process. Description data associated with the new software process is retrieved. A workload resource prediction is requested and received for the new software process. A landscape directory is analyzed to determine a computing host in a managed landscape on which to load the new software process. The new software process is executed on the computing host.
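The landscape-directory analysis can be sketched as filtering hosts by the workload resource prediction and picking the best fit; the directory contents and the "most free CPU wins" tie-break are assumptions for illustration:

```python
# Hypothetical landscape directory: host -> free capacity (CPU cores, memory GB).
LANDSCAPE = {"hostA": (4, 16), "hostB": (16, 64), "hostC": (8, 32)}

def place(predicted_cpu: int, predicted_mem: int) -> str:
    """Pick a computing host that can fit the workload resource prediction,
    preferring the host with the most free CPU."""
    fits = {h: c for h, c in LANDSCAPE.items()
            if c[0] >= predicted_cpu and c[1] >= predicted_mem}
    if not fits:
        raise RuntimeError("no host can fit the predicted workload")
    return max(fits, key=lambda h: fits[h][0])
```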
Using predictive analytics to determine expected use patterns of vehicles to recapture under-utilized computational resources of vehicles
A distributed computing network includes one or more vehicles, each vehicle configured to act as a node in the distributed computing network, and a remote server including a processor and a memory module storing one or more non-transient processor-readable instructions that when executed by the processor cause the remote server to establish a data connection with the one or more vehicles, predict a pattern-of-use of the one or more vehicles, determine a predicted current use of the one or more vehicles, and allocate a computational task to the one or more vehicles based on the predicted pattern-of-use and the predicted current use.
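One simple way to realize the pattern-of-use prediction described above is to average historical usage by hour of day and gate allocation on both the predicted and the current use; the threshold and data shape are assumptions for illustration:

```python
def predict_pattern_of_use(hourly_history: list, hour: int) -> float:
    """Predict a vehicle's use at a given hour as the mean of its past
    usage at that hour (hourly_history: list of 24-element day records,
    each entry a 0..1 fraction of compute busy)."""
    samples = [day[hour] for day in hourly_history]
    return sum(samples) / len(samples)

def allocate(hourly_history: list, hour: int, current_use: float,
             threshold: float = 0.3) -> bool:
    """Allocate a computational task to the vehicle only when both the
    predicted pattern-of-use and the current use leave it under-utilized."""
    return (predict_pattern_of_use(hourly_history, hour) < threshold
            and current_use < threshold)
```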
Systems and methods for determining peak memory requirements in SQL processing engines with concurrent subtasks
The present invention is generally directed to systems and methods of determining and provisioning peak memory requirements in Structured Query Language processing engines. More specifically, methods may include determining or obtaining a query execution plan; gathering statistics associated with each database table; breaking the query execution plan into one or more subtasks; calculating an estimated memory usage for each subtask using the statistics; determining or obtaining a dependency graph of the one or more subtasks; based at least in part on the dependency graph, determining which subtasks can execute concurrently on a single worker node; and totaling the amount of estimated memory for each subtask that can execute concurrently on a single worker node and setting this amount of estimated memory as the estimated peak memory requirement for the specific database query.
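A simplified sketch of the peak-memory step, under the assumption (made here, not in the source) that subtasks at the same dependency depth in the plan may execute concurrently on a worker node, so the peak is the largest per-depth total:

```python
from collections import defaultdict

def peak_memory(mem: dict, deps: dict) -> int:
    """Estimate the peak memory requirement of a query.

    mem:  {subtask: estimated memory from table statistics}
    deps: {subtask: set of subtasks it depends on}
    Subtasks at the same dependency depth are treated as concurrent.
    """
    depth_cache = {}
    def depth(t):
        if t not in depth_cache:
            depth_cache[t] = 1 + max((depth(d) for d in deps.get(t, ())),
                                     default=-1)
        return depth_cache[t]
    by_depth = defaultdict(int)
    for t, m in mem.items():
        by_depth[depth(t)] += m
    return max(by_depth.values())
```

For example, two table scans feeding a join: the scans (depth 0) run concurrently, so the peak is their combined estimate rather than the join's.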
Automated orchestration of containers by assessing microservices
Performing container scaling and migration for container-based microservices is provided. A first set of features is extracted from each respective microservice of a plurality of different microservices. A number of containers required at a future point in time for each respective microservice of the plurality of different microservices is predicted using a trained forecasting model and the first set of features extracted from each respective microservice. A scaling label and a scaling value are assigned to each respective microservice of the plurality of different microservices based on a predicted change in a current number of containers corresponding to each respective microservice according to the number of containers required at the future point in time for each respective microservice. The current number of containers corresponding to each respective microservice of the plurality of different microservices is adjusted based on the scaling label and the scaling value assigned to each respective microservice.
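The label-and-value assignment described above can be sketched as follows; the label names are invented here, and the forecasting model is reduced to a given predicted container count:

```python
def scaling_decision(current: int, predicted: int) -> tuple:
    """Assign a scaling label and value from the predicted change in the
    microservice's container count."""
    if predicted > current:
        return "scale-up", predicted - current
    if predicted < current:
        return "scale-down", current - predicted
    return "no-change", 0

def apply_scaling(current: int, label: str, value: int) -> int:
    """Adjust the current container count per the label and value."""
    if label == "scale-up":
        return current + value
    if label == "scale-down":
        return current - value
    return current
```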
APPARATUS FOR MACHINE LEARNING SERVICE, METHOD FOR MACHINE LEARNING SERVICE AND PROGRAM THEREOF
The objective is to eliminate the resource design process otherwise required of the user of a machine learning service, thereby reducing the time and cost burden on the user.
A machine learning service device includes a requirement specifying functional unit (11) used to specify the task, model, throughput, and performance desired in machine learning; and a resource design unit (12) configured to predict the achievable performance at a plurality of resource settings by machine learning, using the task, model, and throughput specified via the requirement specifying functional unit, and to select a resource setting that satisfies the specified performance based on the results of the prediction.
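The selection step of the resource design unit can be sketched as picking the cheapest candidate setting whose predicted performance meets the specified requirement; the setting fields, the cost model, and the stub predictor below are assumptions for illustration:

```python
def select_setting(settings: list, predict_perf, required_perf: float) -> dict:
    """Among candidate resource settings, return the cheapest one whose
    predicted performance satisfies the specified requirement."""
    ok = [s for s in settings if predict_perf(s) >= required_perf]
    if not ok:
        raise RuntimeError("no resource setting meets the requirement")
    return min(ok, key=lambda s: s["cost"])

# Hypothetical candidates and a stub performance predictor (samples/s).
SETTINGS = [{"gpus": 1, "cost": 1}, {"gpus": 2, "cost": 2}, {"gpus": 4, "cost": 4}]
predict = lambda s: s["gpus"] * 100
```

With a required throughput of 150 samples/s, the two-GPU setting is selected: the one-GPU setting misses the requirement, and the four-GPU setting costs more.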