G06F9/5022

APPARATUSES AND METHODS FOR SCHEDULING COMPUTING RESOURCES

Apparatuses and methods for scheduling computing resources are disclosed that facilitate cooperation between resource managers in the resource layer and workload schedulers in the workload layer, so that resource managers can efficiently manage and schedule resources for horizontally and vertically scaling resources on physical hosts shared among workload schedulers to run workloads.

PREDICTIVE SCALING OF CONTAINER ORCHESTRATION PLATFORMS

Systems, methods, and computer programming products leveraging recurrent neural network architectures to proactively predict workload demand of container orchestration platforms. The platform continuously collects metric data from its clusters and trains multiple parallel neural networks with different architectures to predict future platform workload demands. At periodic intervals, the registered neural networks under consideration for controlling the scaling operations of the platform are compared against one another to identify the neural network demonstrating the highest performance and/or most accurate workload prediction strategy for scaling the orchestration platform. The selected neural network is enforced as the controller for the platform to implement the workload prediction strategy. The neural network controller enforced by the platform predictively scales up or down the number of pods within nodes of the platform and/or the number of clusters providing computational resources to the platform, in anticipation of future increased or decreased end-user demand.
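The periodic model-selection step described above can be sketched as follows. This is an illustrative sketch, not the patented implementation: the toy forecasters and the mean-absolute-error scoring metric are assumptions for illustration; the abstract does not specify how prediction accuracy is measured.

```python
# Periodically score several candidate forecasters on recent workload
# metrics and promote the most accurate one as the active scaling
# controller. The candidate names and MAE metric are assumptions.

def mean_absolute_error(predicted, actual):
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def select_controller(candidates, recent_metrics, actual_demand):
    """Return the name of the registered model with the lowest error."""
    scores = {
        name: mean_absolute_error(model(recent_metrics), actual_demand)
        for name, model in candidates.items()
    }
    return min(scores, key=scores.get)

# Two toy stand-ins for trained networks: one echoes the last observation,
# one predicts the historical mean.
candidates = {
    "last-value": lambda xs: [xs[-1]] * 3,
    "mean":       lambda xs: [sum(xs) / len(xs)] * 3,
}
observed = [10, 12, 14]   # recent demand samples collected from the clusters
actual   = [14, 14, 15]   # demand that then materialized
best = select_controller(candidates, observed, actual)
```

In a real deployment the candidates would be the registered neural networks and the winner would be installed as the scaling controller until the next comparison interval.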

Electronic device for securing usable dynamic memory and operating method thereof
11579927 · 2023-02-14

An electronic device includes an application processor and a communication processor. The communication processor includes a resource memory and is configured to monitor an occupancy rate of the resource memory, determine whether the electronic device is in an idle state, forcibly release a network connection, clear the resource memory, and reconnect the network connection.
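The control flow described above can be sketched as below. This is a hedged sketch: the 90% occupancy threshold and the callback names are illustrative assumptions, not values from the patent.

```python
# Watch the resource memory's occupancy; when the device is idle and
# occupancy crosses a threshold, drop the network connection, clear the
# memory, and reconnect. Threshold and callback names are assumptions.

def manage_resource_memory(occupancy, idle, threshold=0.9,
                           release=None, clear=None, reconnect=None):
    """Return the actions taken as a list, firing callbacks if given."""
    actions = []
    if occupancy >= threshold and idle:
        for name, cb in (("release", release), ("clear", clear),
                         ("reconnect", reconnect)):
            actions.append(name)
            if cb:
                cb()
    return actions

# High occupancy while idle triggers the full release/clear/reconnect cycle.
steps = manage_resource_memory(occupancy=0.95, idle=True)
```

Gating on the idle state is what keeps the forcible disconnect from interrupting active traffic.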

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM
20230010895 · 2023-01-12

An information processing apparatus includes: a memory; and a processor coupled to the memory and configured to: divide a job in units of computing nodes for a plurality of computing nodes; determine execution of scale-out or scale-in on the basis of a load in a case where each of the computing nodes is caused to execute a job obtained by the division; execute, in a case where determining execution of the scale-out, the scale-out according to the division of the job in units of computing nodes; and execute, in a case where determining execution of the scale-in, the scale-in according to the division of the job in units of computing nodes.
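The division and scaling decision above can be sketched in a few lines. A minimal sketch, assuming simple average-load thresholds; the actual load criterion and thresholds are not specified in the abstract.

```python
# Split a job into per-node units, then decide scale-out (add nodes) or
# scale-in (remove nodes) from the average per-node load. The 0.8/0.3
# thresholds are illustrative assumptions.

def divide_job(total_work, nodes):
    """Divide work into per-node units; the remainder goes to early nodes."""
    base, extra = divmod(total_work, nodes)
    return [base + (1 if i < extra else 0) for i in range(nodes)]

def scaling_decision(per_node_load, high=0.8, low=0.3):
    avg = sum(per_node_load) / len(per_node_load)
    if avg > high:
        return "scale-out"
    if avg < low:
        return "scale-in"
    return "no-change"

units = divide_job(10, 4)
decision = scaling_decision([0.9, 0.85, 0.95, 0.9])
```

Because the job is already divided in units of computing nodes, executing a scale-out or scale-in simply adds or removes whole per-node units.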

DIFFERENTIATED WORKLOAD TELEMETRY
20230009332 · 2023-01-12

In an approach for generating differentiated workload telemetry data, a processor associates one or more services with a workload-related, telemetry-generating event emitter. A processor performs a correlation analysis of the corresponding relationships and connections among connected resources and of current traffic into and out of the one or more services. A processor labels the domain context for each telemetry event. A processor communicates each telemetry event to a global event handler. A processor performs a real-time cross-correlation of the telemetry data with the global event handler. A processor updates a real-time differentiated workload report.
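The labeling and aggregation steps above can be sketched as a small pipeline. This is an illustrative sketch; the event field names and the per-domain counting used for the "differentiated report" are assumptions for illustration.

```python
# Tag each telemetry event with its service's domain context, forward it
# to a global handler, and keep a running per-domain report.

class GlobalEventHandler:
    def __init__(self):
        self.report = {}  # domain -> event count (differentiated view)

    def handle(self, event):
        domain = event["domain"]
        self.report[domain] = self.report.get(domain, 0) + 1

def label_and_emit(events, service_domains, handler):
    """Label domain context per event, then communicate to the handler."""
    for event in events:
        event["domain"] = service_domains.get(event["service"], "unknown")
        handler.handle(event)

handler = GlobalEventHandler()
label_and_emit(
    [{"service": "checkout"}, {"service": "search"}, {"service": "checkout"}],
    {"checkout": "payments", "search": "catalog"},
    handler,
)
```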

DISTRIBUTION OF WORKLOADS IN CLUSTER ENVIRONMENT USING SERVER WARRANTY INFORMATION

Systems and methods take into account the criticality of workloads, the warranty needs of workloads, the warranty time available, and the lifetime of a workload to provide an optimal solution that ensures servers are used to the fullest extent. The warranty health of servers is computed and categorized as critical, warning, or healthy based on the number of days remaining in warranty. Workloads are tagged as short-term or long-term workloads. Workloads are also classified based on criticality. The quarantine mode for proactive high availability of servers is divided into multiple modes, including a long-term, critical-workload quarantine mode, a critical-workload quarantine mode, and a standard quarantine mode. Servers that are in quarantine mode are assigned new workloads based upon warranty health, workload term, and workload criticality.
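The classification and placement policy above can be sketched as follows. A sketch under stated assumptions: the 30- and 90-day cut-offs and the specific admission rules are illustrative, not taken from the patent.

```python
# Classify server warranty health by days remaining, then admit a workload
# only if its term and criticality fit the server's health category.

def warranty_health(days_remaining):
    if days_remaining < 30:
        return "critical"
    if days_remaining < 90:
        return "warning"
    return "healthy"

def can_place(server_days, workload_term, workload_critical):
    """Keep critical or long-term workloads off warranty-critical servers."""
    health = warranty_health(server_days)
    if health == "critical":
        return workload_term == "short-term" and not workload_critical
    if health == "warning":
        return not workload_critical
    return True

ok  = can_place(10, "short-term", workload_critical=False)
bad = can_place(10, "long-term",  workload_critical=True)
```

The intent is that a server nearing warranty expiry still does useful work on short-lived, non-critical jobs instead of sitting fully quarantined.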

DEFERRED RECLAIMING OF SECURE GUEST RESOURCES

Deferred reclaiming of secure guest resources within a computing environment is provided, which includes initiating, by a host of the computing environment, removal of a secure guest from the computing environment while leaving one or more resources of the secure guest to be reclaimed asynchronously to the removal. The deferring also includes reclaiming the one or more secure guest resources asynchronously to the removal of the secure guest, where the one or more secure guest resources become available for reuse as they are reclaimed.
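The removal/reclaim split above can be sketched as below. A hedged sketch: the queue-based worker and the class and method names are assumptions for illustration, not the patented mechanism.

```python
# The host removes the secure guest immediately and queues its resources
# for asynchronous reclamation; each resource becomes reusable as soon as
# it is individually reclaimed, without blocking guest removal.
from collections import deque

class Host:
    def __init__(self):
        self.pending = deque()  # resources awaiting reclamation
        self.free = []          # resources already reclaimed, reusable

    def remove_guest(self, guest_resources):
        """Remove the guest now; defer reclaiming its resources."""
        self.pending.extend(guest_resources)

    def reclaim_one(self):
        """One step of the asynchronous worker: reclaim and free a resource."""
        if self.pending:
            self.free.append(self.pending.popleft())

host = Host()
host.remove_guest(["page-A", "page-B"])
host.reclaim_one()   # "page-A" is reusable before "page-B" is reclaimed
```

Decoupling the two phases is what lets guest teardown complete quickly while reclamation proceeds in the background.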

Allocation of memory access bandwidth to clients in an electronic device
11709711 · 2023-07-25

An electronic device includes a memory; a plurality of clients; at least one arbiter circuit; and a management circuit. A given client of the plurality of clients communicates a request to the management circuit requesting an allocation of memory access bandwidth for accesses of the memory by the given client. The management circuit then determines, based on the request, a set of memory access bandwidths including a respective memory access bandwidth for each of the given client and other clients of the plurality of clients that are allocated memory access bandwidth. The management circuit next configures the at least one arbiter circuit to use respective memory access bandwidths from the set of memory access bandwidths for the given client and the other clients for subsequent accesses of the memory.
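The allocation step above can be sketched in software. This is an illustrative sketch of one possible management-circuit policy: proportional scaling of requested bandwidths to fit the bus capacity is an assumption, as the patent does not specify the allocation algorithm.

```python
# On a client's request, recompute every client's share of total memory
# bandwidth in proportion to what it asked for, then hand the resulting
# set of shares to the arbiter circuit.

def allocate_bandwidth(requests, total_bw):
    """Return {client: bandwidth} scaled so the sum fits total_bw."""
    demanded = sum(requests.values())
    scale = min(1.0, total_bw / demanded) if demanded else 0.0
    return {client: req * scale for client, req in requests.items()}

# Three clients over-subscribe a 100-unit bus; shares are scaled to fit.
shares = allocate_bandwidth({"cpu": 100, "gpu": 60, "dma": 40}, total_bw=100)
```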

System and Method for Providing Dynamic Provisioning Within a Compute Environment
20230239221 · 2023-07-27

The disclosure relates to systems, methods, and computer-readable media for dynamically provisioning resources within a compute environment. The method aspect of the disclosure comprises analyzing a queue of jobs to determine an availability of compute resources for each job; determining an ability of a scheduler of the compute environment to satisfy all service level agreements (SLAs) and target service levels within a current configuration of the compute resources; determining possible resource provisioning changes to improve SLA fulfillment; determining a cost of provisioning; and, if the provisioning changes improve overall SLA delivery, re-provisioning at least one compute resource.
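The cost-benefit decision above can be sketched as follows. A minimal sketch with hypothetical helper names: the linear gain-versus-cost rule and the candidate tuples are assumptions for illustration.

```python
# Estimate SLA fulfillment under each candidate provisioning change and
# re-provision only when the gain in fulfillment outweighs the cost.

def should_reprovision(current_fulfillment, candidate_fulfillment, cost,
                       gain_per_unit=1.0):
    """Re-provision if the SLA gain is worth more than the cost."""
    gain = candidate_fulfillment - current_fulfillment
    return gain * gain_per_unit > cost

def pick_change(current_fulfillment, candidates):
    """candidates: list of (name, predicted_fulfillment, cost) tuples."""
    best = None
    for name, fulfillment, cost in candidates:
        if should_reprovision(current_fulfillment, fulfillment, cost):
            if best is None or fulfillment > best[1]:
                best = (name, fulfillment)
    return best[0] if best else None

# "add-node" raises fulfillment enough to justify its cost; "swap-os" doesn't.
choice = pick_change(0.80, [("add-node", 0.95, 0.10),
                            ("swap-os",  0.82, 0.50)])
```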

CONTROL METHOD AND APPARATUS OF CLUSTER RESOURCE, AND CLOUD COMPUTING SYSTEM
20230004439 · 2023-01-05

This disclosure relates to a control method and apparatus for cluster resources, and to a cloud computing system, in the field of computer technologies. The method includes: in the case where a to-be-controlled resource is a to-be-expanded resource, determining a binding relationship between the to-be-expanded resource and an application; adding the initialized to-be-expanded resource into a resource pool of the corresponding application having the binding relationship with the to-be-expanded resource; generating a to-be-executed data packet of a to-be-processed application according to a deployment type of the to-be-processed application; and deploying the to-be-executed data packet on the to-be-expanded resource in the resource pool of the to-be-processed application for execution.
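The expansion path above can be sketched as below. A hedged sketch: the pool structure, the packet fields, and the deployment type are all illustrative assumptions, not details from the disclosure.

```python
# Bind a new resource to its application, add it to that application's
# pool, then build and deploy a data packet per the deployment type.

def expand(resource, app, pools, deployment_type):
    pools.setdefault(app, []).append(resource)       # bind + pool the resource
    packet = {"app": app, "type": deployment_type}   # to-be-executed packet
    return {"resource": resource, "packet": packet}  # deploy onto the resource

pools = {}
deployed = expand("node-7", "web-frontend", pools, deployment_type="container")
```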