Patent classifications
G06F2209/5019
Resource Self-Tuning and Rebalancing
An apparatus comprises at least one processing device that includes a processor coupled to a memory. The processing device is configured to identify a plurality of resource objects associated with the processing device, to group correlated resource objects according to processing device utilization of the resource objects, and to assign a first weight to a first resource object grouping, wherein the first weight is associated with a performance impact of the first resource object grouping on the processing device. The processing device is further configured to release at least some of the first resource object grouping to provide additional resources to a second resource object grouping, wherein the first resource object grouping is selected for the releasing based on a comparison between the first weight and a second weight associated with the second resource object grouping, and wherein the releasing is performed to improve performance of the processing device.
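The weighting-and-release step can be illustrated with a minimal sketch. The grouping names, the weight values, and the fixed release fraction below are illustrative assumptions, not details from the abstract.

```python
# Sketch: release part of the lowest-weight grouping (least performance
# impact) so its resources can back a higher-weight grouping.

def rebalance(groupings, weights, release_fraction=0.5):
    """Release a fraction of the lowest-weight grouping and return
    the freed resource objects."""
    # Pick the grouping whose weight (performance impact) is smallest.
    donor = min(weights, key=weights.get)
    donor_objects = groupings[donor]
    n_release = int(len(donor_objects) * release_fraction)
    released, kept = donor_objects[:n_release], donor_objects[n_release:]
    groupings[donor] = kept
    return released

groupings = {"cache_buffers": ["b0", "b1", "b2", "b3"], "io_queues": ["q0", "q1"]}
weights = {"cache_buffers": 0.2, "io_queues": 0.9}  # higher = bigger impact
freed = rebalance(groupings, weights)
# `freed` can now provide additional resources to the higher-weight grouping.
```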
Distribution of quantities of an increased workload portion into buckets representing operations
In some examples, a computing system receives an indication of an increased workload portion to be added to a workload of a storage system, the workload comprising buckets of operations of different characteristics. The computing system computes, based on quantities of operations of the different characteristics in the workload, factor values that indicate distribution of operations of the increased workload portion to the buckets of operations of the different characteristics, and distributes, according to the factor values, the operations of the increased workload portion into the buckets of operations of the different characteristics.
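One plausible reading of the factor-value computation is a proportional split: each bucket's share of the existing workload determines its share of the new operations. The bucket names and counts below are hypothetical.

```python
# Sketch: compute factor values from current bucket sizes and distribute
# an increased workload portion proportionally across the buckets.

def distribute(buckets, extra_ops):
    """Split `extra_ops` new operations across buckets in proportion to
    each bucket's current quantity of operations."""
    total = sum(buckets.values())
    # Factor value per bucket: its share of the existing workload.
    factors = {name: count / total for name, count in buckets.items()}
    added = {name: round(extra_ops * f) for name, f in factors.items()}
    return {name: buckets[name] + added[name] for name in buckets}

buckets = {"small_reads": 600, "large_reads": 300, "writes": 100}
result = distribute(buckets, 50)
# result: {'small_reads': 630, 'large_reads': 315, 'writes': 105}
```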
Optimization of configurable distributed computing systems
The subject matter of this specification can be implemented in, among other things, a method that includes: accessing a plurality of target tasks for a computing system, the computing system comprising a plurality of resources, wherein the plurality of resources comprises a first server and a second server; accessing a plurality of configurations of the computing system, wherein each of the plurality of configurations identifies one or more resources of the plurality of resources to perform a respective target task of the plurality of target tasks; and performing, for each of the plurality of configurations, a simulation to determine a plurality of performance metrics, wherein each of the plurality of performance metrics predicts performance of at least one of the plurality of resources executing the plurality of target tasks on the computing system.
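The simulate-and-compare loop can be sketched as follows. The two-server model, the task costs, and the use of makespan as the performance metric are all illustrative assumptions.

```python
# Sketch: run a toy simulation for each candidate configuration and keep
# the configuration with the best predicted metric (here: makespan).

def simulate(config, task_costs):
    """Predict a performance metric for one configuration, which maps
    each target task to the server that will execute it."""
    load = {"server1": 0.0, "server2": 0.0}
    for task, server in config.items():
        load[server] += task_costs[task]
    return max(load.values())  # completion time of the busiest server

task_costs = {"t1": 4.0, "t2": 3.0, "t3": 2.0}
configs = [
    {"t1": "server1", "t2": "server1", "t3": "server2"},
    {"t1": "server1", "t2": "server2", "t3": "server2"},
]
best = min(configs, key=lambda c: simulate(c, task_costs))
```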
Lock scheduling using machine learning
The present approach relates to systems and methods for facilitating run time predictions for cloud-computing automated tasks, and for using the predicted run time to schedule resource locking. A predictive model may predict an automated task's run time based on historical run time to completion, and the prediction may be updated using machine learning. Resource lock schedules may be determined for a queue of automated tasks utilizing the resource, based on the predicted run times for the various types of automated tasks. The predicted run time may be used to reserve a resource for the given duration, such that the resource is not available for use by another task.
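The lock-scheduling idea can be sketched by laying out non-overlapping reservation windows from predicted durations. The task types and run times below are made up; in the described approach the predictions would come from a model updated by machine learning.

```python
# Sketch: reserve a shared resource back-to-back for a queue of automated
# tasks, each for its predicted run time, so reservations never overlap.

def schedule_locks(queue, predicted_runtime, start=0):
    """Return (task, lock_start, lock_end) windows for each task in
    queue order, using the task's predicted run time as the duration."""
    schedule, t = [], start
    for task in queue:
        duration = predicted_runtime[task]
        schedule.append((task, t, t + duration))
        t += duration  # resource stays locked until the task is predicted to finish
    return schedule

predicted_runtime = {"clone": 10, "discover": 25, "patch": 15}  # minutes
plan = schedule_locks(["discover", "clone", "patch"], predicted_runtime)
# plan: [('discover', 0, 25), ('clone', 25, 35), ('patch', 35, 50)]
```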
CONTAINER SCHEDULING METHOD AND APPARATUS, AND NON-VOLATILE COMPUTER-READABLE STORAGE MEDIUM
A container scheduling method and apparatus, and a computer-readable storage medium, which relate to the technical field of computers. The method includes: determining, according to a resource usage amount of a container set copy that has already run, a predicted resource usage amount of a container set copy to be scheduled, wherein the container set copy that has run is of the same type as the container set copy to be scheduled; determining, according to the predicted resource usage amount and a resource supply amount supported by each candidate node, a candidate node matching the container set copy to be scheduled; and scheduling the container set copy to be scheduled to the matched candidate node for running.
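The predict-then-match step can be sketched as below. The mean-usage predictor, the node capacities, and the first-fit matching rule are illustrative assumptions, not the patented method itself.

```python
# Sketch: predict a new replica's resource usage from same-type replicas
# that have already run, then pick a candidate node with enough supply.

def predict_usage(past_usages):
    """Toy predictor: mean usage of same-type replicas that have run."""
    return sum(past_usages) / len(past_usages)

def pick_node(predicted, nodes):
    """Return the first candidate node whose resource supply amount
    covers the predicted usage, or None if no node matches."""
    for name, supply in nodes.items():
        if supply >= predicted:
            return name
    return None

past = [180, 220, 200]                 # MiB used by replicas of the same type
nodes = {"node-a": 150, "node-b": 512} # free resource supply per candidate node
target = pick_node(predict_usage(past), nodes)
```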
DYNAMIC ALLOCATION OF RESOURCES IN SURGE DEMAND
Embodiments of the present disclosure provide methods, apparatus, systems, computing devices, computing entities, and/or the like for the generation of a recommendation for one or more resource transformation actions to be performed based at least in part on an optimized resource transformation scenario. The optimized resource transformation scenario can be identified based at least in part on a hybrid resource transformation scenario that can be based at least in part on a resource priority score for a residual resource and a downgrade-only resource transformation scenario. A downgrade set of the plurality of resources can be determined based at least in part on resource transformation data associated with the plurality of resources.
Distributed Processing System
A distributed processing system includes a plurality of distributed systems, transmission media connecting the plurality of distributed systems, and a control node connected to the plurality of distributed systems, wherein each of the distributed systems includes one or more distributed nodes constituting a distributed node group and a piece of electric equipment accommodating the distributed node group. Each of the distributed nodes includes interconnects to connect to any of the transmission media and/or other distributed nodes. The control node determines, based on a quantity of computational resources required for a job, which distributed systems from the plurality of distributed systems, and which distributed nodes in those distributed systems, are to execute the job; selects a connection path for data to be processed among the distributed systems; and provides information about an interconnect connection path for the distributed nodes to execute the job.
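The control node's selection step can be sketched as a greedy accumulation of node capacity against the job's resource requirement. The system layout, capacities, and greedy order are hypothetical.

```python
# Sketch: select distributed systems and nodes until their combined
# capacity covers the quantity of resources the job requires.

def select_nodes(systems, required):
    """Return (system, node) pairs whose combined capacity meets
    `required`, or None if the systems cannot satisfy the job."""
    chosen, total = [], 0
    for system, nodes in systems.items():
        for node, capacity in nodes.items():
            chosen.append((system, node))
            total += capacity
            if total >= required:
                return chosen
    return None  # job cannot be placed

systems = {"sys1": {"n1": 4, "n2": 4}, "sys2": {"n1": 8}}
assignment = select_nodes(systems, 10)
# assignment: [('sys1', 'n1'), ('sys1', 'n2'), ('sys2', 'n1')]
```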
COMMUNICATION MANAGEMENT APPARATUS AND COMMUNICATION MANAGEMENT METHOD
A communication management apparatus suppresses the occurrence of a communication anomaly in a cluster. It includes: an acquisition unit that acquires quantities of traffic of communications performed by one or more communication units operating in each of a plurality of computers constituting the cluster; a prediction unit that predicts future quantities of traffic of the communications; an identification unit that calculates, for each of the computers, a total of the future quantities of traffic of the communication units operating in the computer and identifies a first computer for which the total exceeds a threshold; and a move control unit that controls movement of a communication unit operating in the first computer to a second computer.
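The predict, total, and move pipeline can be sketched as follows. The linear-growth predictor, the traffic histories, and the heaviest-unit/least-loaded-target move policy are illustrative assumptions.

```python
# Sketch: predict per-unit traffic, total it per computer, and plan a move
# off any computer whose predicted total exceeds the threshold.

def predict(history):
    """Toy predictor: next value continues the last observed step."""
    return history[-1] + (history[-1] - history[-2])

def plan_moves(cluster, threshold):
    """Return (unit, source, target) moves for computers whose predicted
    total traffic exceeds the threshold."""
    totals = {pc: sum(predict(h) for h in units.values())
              for pc, units in cluster.items()}
    moves = []
    for pc, total in totals.items():
        if total > threshold:
            # Move the heaviest unit to the least-loaded computer.
            unit = max(cluster[pc], key=lambda u: predict(cluster[pc][u]))
            target = min(totals, key=totals.get)
            moves.append((unit, pc, target))
    return moves

cluster = {
    "pc1": {"u1": [10, 20], "u2": [30, 50]},  # traffic history per unit
    "pc2": {"u3": [5, 6]},
}
moves = plan_moves(cluster, threshold=60)
# moves: [('u2', 'pc1', 'pc2')]
```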
Accelerating large-scale image distribution
Methods and systems for deploying images to computing systems include predicting an environment for a plurality of processing nodes. Image deployment to the plurality of processing nodes is simulated to determine a subset of the plurality of processing nodes for deployment. One or more images are pre-loaded to the subset of the plurality of processing nodes in advance of a deployment time.
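The simulate-then-pre-load flow can be sketched minimally. The label-matching stand-in for the environment prediction, and the node and image names, are assumptions for illustration.

```python
# Sketch: predict which nodes a deployment will target, then pre-load the
# image to that subset ahead of the deployment time.

def predict_subset(nodes, required_label):
    """Simulate deployment: keep only the nodes predicted to match the
    deployment's environment (here, a simple label match)."""
    return [n for n, labels in nodes.items() if required_label in labels]

def preload(image, subset, cache):
    """Pre-load the image into each selected node's local image cache."""
    for node in subset:
        cache.setdefault(node, set()).add(image)
    return cache

nodes = {"n1": {"gpu"}, "n2": {"cpu"}, "n3": {"gpu"}}
cache = preload("model:v2", predict_subset(nodes, "gpu"), {})
# cache: {'n1': {'model:v2'}, 'n3': {'model:v2'}}
```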
MICROSERVICES SERVER AND STORAGE RESOURCE CONTROLLER
Aspects of the present disclosure relate to controlling resource consumption of a server and storage array. In embodiments, a request can be received by a server that is communicatively coupled to a storage array. Further, the services required to process the request can be identified. Additionally, activation of the services can be controlled based on a mapping between request-related actions and initiated services.
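The action-to-service mapping can be sketched as a lookup that activates only the services a request needs. The action names and service names are illustrative.

```python
# Sketch: identify the services a request's actions require from a mapping,
# and activate only those on top of whatever is already running.

ACTION_SERVICES = {
    "read": {"auth", "cache"},
    "write": {"auth", "journal", "replicator"},
}

def activate_for(request_actions, running):
    """Return the updated set of active services for the given actions."""
    needed = set()
    for action in request_actions:
        needed |= ACTION_SERVICES.get(action, set())
    return running | needed

active = activate_for(["write"], running={"auth"})
# active: {'auth', 'journal', 'replicator'}
```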