G06F9/505

System and method for scaling resources of a secondary network for disaster recovery

A system and method for scaling resources of a secondary network for disaster recovery uses a disaster recovery notification, sent from a primary resource manager of a primary network to a secondary resource manager of the secondary network, to generate a scale-up recommendation for additional resources for the secondary network. The additional resources are based on the latest resource demands of workloads on the primary network, which are included in the disaster recovery notification. A scale-up operation for the additional resources is then executed based on the scale-up recommendation from the secondary resource manager, so that the secondary network operates with the additional resources to run the workloads.
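A minimal sketch of the scale-up logic described above. All names, the notification structure, and the use of CPU cores as the resource unit are hypothetical illustrations, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class DisasterRecoveryNotification:
    # Latest per-workload resource demands on the primary network,
    # e.g. {"web-tier": 16, "db-tier": 32} in CPU cores (assumed unit).
    workload_demands: dict

def scale_up_recommendation(notification, current_capacity):
    """Recommend additional resources so the secondary network can
    absorb the primary network's workloads after a disaster."""
    required = sum(notification.workload_demands.values())
    return max(0, required - current_capacity)

# Secondary site currently has 20 cores; primary workloads need 48.
note = DisasterRecoveryNotification({"web-tier": 16, "db-tier": 32})
print(scale_up_recommendation(note, 20))  # -> 28
```

In this toy model the secondary resource manager would pass the recommendation (28 additional cores) to a scale-up operation; the patent itself does not specify the arithmetic.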

APPLICATION LIFECYCLE MANAGEMENT BASED ON REAL-TIME RESOURCE USAGE
20230010567 · 2023-01-12

Application lifecycle management based on real-time resource usage. A first plurality of resource values that quantify real-time computing resources used by a first instance of an application is determined at a first point in time. Based on the first plurality of resource values, one or more utilization values are stored in a profile that corresponds to the application. Subsequent to storing the one or more utilization values in the profile, it is determined that a second instance of the application is to be initiated. The profile is accessed, and the second instance of the application is caused to be initiated on a first computing device utilizing the one or more utilization values identified in the profile.
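The profile mechanism above can be sketched roughly as follows. The dictionary-based profile store, the metric names, and the 1.2x headroom factor are all hypothetical choices for illustration:

```python
profiles = {}  # application name -> stored utilization values

def record_usage(app, cpu_pct, mem_mb):
    # Store utilization values from a first instance in the
    # profile that corresponds to the application.
    profiles[app] = {"cpu_pct": cpu_pct, "mem_mb": mem_mb}

def initiate_instance(app, headroom=1.2):
    # Size a second instance from the profiled utilization,
    # with some headroom (assumed policy, not in the abstract).
    p = profiles[app]
    return {"app": app,
            "cpu_request": p["cpu_pct"] * headroom,
            "mem_request_mb": p["mem_mb"] * headroom}
```

For example, after `record_usage("svc", 40, 512)`, a call to `initiate_instance("svc")` would request 48% CPU and 614.4 MB for the second instance.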

AUTOMATED SERVER WORKLOAD MANAGEMENT USING MACHINE LEARNING
20230216914 · 2023-07-06

Systems and methods are disclosed for managing workload among server clusters. According to certain embodiments, the system may include a memory storing instructions and a processor. The processor may be configured to execute the instructions to determine historical behaviors of the server clusters in processing a workload. The processor may also be configured to execute the instructions to construct cost models for the server clusters based at least in part on the historical behaviors. Each cost model is configured to predict a processor utilization demand of a workload. The processor may further be configured to execute the instructions to receive a workload and determine efficiencies of processing the workload by the server clusters based at least in part on at least one of the cost models or an execution plan of the workload.
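One way to read "cost model" here is a regression fit on historical behavior. The sketch below assumes a simple one-variable least-squares model (workload size to CPU demand) and a routing rule that picks the cluster with the lowest predicted cost; the patent does not specify the model form:

```python
def fit_cost_model(history):
    """Fit a linear cost model cpu = a * size + b from historical
    (workload_size, cpu_utilization) observations via least squares."""
    n = len(history)
    sx = sum(x for x, _ in history)
    sy = sum(y for _, y in history)
    sxx = sum(x * x for x, _ in history)
    sxy = sum(x * y for x, y in history)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda size: a * size + b

def best_cluster(cost_models, workload_size):
    # Route the workload to the cluster whose cost model predicts
    # the lowest processor utilization demand.
    return min(cost_models, key=lambda c: cost_models[c](workload_size))
```

A real implementation would also weigh the workload's execution plan, per the abstract; that is omitted here.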

SYSTEMS, METHODS, AND APPARATUS FOR WORKLOAD OPTIMIZED CENTRAL PROCESSING UNITS (CPUS)

Systems, methods, and apparatus for workload optimized central processing units are disclosed herein. An example apparatus includes a workload analyzer to determine an application ratio associated with the workload, the application ratio based on an operating frequency to execute the workload, a hardware configurator to configure, before execution of the workload, at least one of (i) one or more cores of the processor circuitry based on the application ratio or (ii) uncore logic of the processor circuitry based on the application ratio, and a hardware controller to initiate the execution of the workload with the at least one of the one or more cores or the uncore logic.
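A toy rendering of the workload-analyzer step. The definition of the application ratio (workload operating frequency over base frequency) and the core-configuration policy below are assumptions for illustration only:

```python
def application_ratio(workload_freq_ghz, base_freq_ghz):
    # Assumed definition: operating frequency needed to execute the
    # workload, relative to the processor's base frequency.
    return workload_freq_ghz / base_freq_ghz

def configure_cores(total_cores, ratio):
    # Hypothetical policy applied before execution: frequency-bound
    # workloads get fewer, faster cores; throughput workloads get all
    # cores at base frequency. (The patent also covers uncore logic,
    # which this sketch omits.)
    if ratio > 1.0:
        return {"active_cores": max(1, total_cores // 2), "turbo": True}
    return {"active_cores": total_cores, "turbo": False}
```

For example, a ratio of 1.5 on an 8-core part would activate 4 cores with turbo enabled under this policy.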

Game Engine Resource Processing Method And Apparatus, And Electronic Device And Computer-Readable Storage Medium
20230214272 · 2023-07-06

The present disclosure provides a game engine resource processing method and apparatus, an electronic device, and a computer-readable storage medium. The game engine resource processing method includes: receiving, via a first preset interface of a game engine, an obtaining request for a game resource of an operation platform (S110); obtaining the requested game resource using a resource management system of the game engine, where the resource management system includes the first preset interface, resources, a resource manager, a resource loader, and a resource registry (S120); and returning the obtained game resource (S130).
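The described resource management system might be sketched as below. The class layout, caching behavior, and the stub loader are hypothetical; only the named components (preset interface, registry, loader, manager) come from the abstract:

```python
class ResourceManagementSystem:
    """Toy sketch: a preset interface backed by a resource registry,
    a resource loader, and a caching resource manager."""

    def __init__(self):
        self.registry = {}  # resource registry: name -> asset path
        self.cache = {}     # resource manager: loaded resources

    def register(self, name, path):
        self.registry[name] = path

    def _load(self, name):
        # Resource loader: a real engine would read platform-specific
        # asset data here; this stub just fabricates a record.
        return {"name": name, "path": self.registry[name]}

    def get_resource(self, name):
        # First preset interface (S110-S130): receive the obtaining
        # request, load via the manager if needed, return the resource.
        if name not in self.cache:
            self.cache[name] = self._load(name)
        return self.cache[name]
```

Repeated requests for the same resource return the cached object, which is one plausible reason for routing all requests through a single preset interface.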

Distributable and customizable load-balancing of data-associated computation via partitions and virtual processes

Methods, systems, computer-readable media, and apparatuses for determining partitions and virtual processes in a simulation are presented. A plurality of partitions of a simulated world may be determined, and each partition may correspond to a different metric for entities in the simulated world. A plurality of virtual processes for the simulated world may also be determined. The system may assign a different virtual process to each partition. An indication of the partitions may be sent to one or more partition enforcer services, and an indication of the virtual processes may be sent to a virtual process manager.
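The partition-and-assign flow above can be illustrated as follows. Equal-count partitioning by a single metric and one-to-one assignment are assumptions; the patent allows a different metric per partition:

```python
def partition_entities(entities, metric, num_partitions):
    """Split simulated-world entities into partitions by a metric
    (e.g. an x-coordinate), keeping partition sizes roughly equal."""
    ordered = sorted(entities, key=metric)
    size = -(-len(ordered) // num_partitions)  # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

def assign_virtual_processes(partition_ids, virtual_processes):
    # Assign a different virtual process to each partition (one-to-one).
    if len(virtual_processes) < len(partition_ids):
        raise ValueError("need at least one virtual process per partition")
    return dict(zip(partition_ids, virtual_processes))
```

In the full scheme, the partition list would then go to the partition enforcer services and the assignment map to the virtual process manager.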

RESOURCE CAPACITY MANAGEMENT IN COMPUTING SYSTEMS

Techniques for capacity management in computing systems are disclosed herein. In one embodiment, a method includes analyzing data representing a number of enabled users or a number of provisioned users to determine whether the analyzed data represents an anomaly based on historical data. Upon determining that the data represents an anomaly, the method can also include determining a conversion rate between a change in the number of enabled or provisioned users and a change in a number of active users of the computing service, and deriving a future value of the number of active users based on both the detected anomaly and the determined conversion rate. The method can further include allocating and provisioning an amount of computing resource in the distributed computing system in accordance with the derived future value of the number of active users.
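The two key steps (anomaly detection against history, then conversion-rate forecasting) might look like this. The z-score test and linear conversion are assumed stand-ins; the patent does not name a specific detector:

```python
def is_anomaly(history, value, z_threshold=3.0):
    """Flag a count of enabled/provisioned users as anomalous when it
    deviates from the historical mean by more than z_threshold sigmas
    (assumed detector; the patent only says 'based on historical data')."""
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / len(history)
    std = var ** 0.5
    return std > 0 and abs(value - mean) > z_threshold * std

def forecast_active_users(active_now, provisioned_delta, conversion_rate):
    # Derive the future number of active users from the anomalous jump
    # in provisioned users, scaled by the provisioned-to-active rate.
    return active_now + provisioned_delta * conversion_rate
```

For example, a jump of 200 provisioned users with a 0.4 conversion rate forecasts 80 additional active users, which then drives how much capacity to provision.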

METHOD OF SCHEDULING CACHE BUDGET IN MULTI-CORE PROCESSING DEVICE AND MULTI-CORE PROCESSING DEVICE PERFORMING THE SAME

A method is provided. The method includes: receiving a plurality of characteristic information associated with a plurality of tasks allocated to a plurality of processor cores; monitoring a task execution environment while the plurality of processor cores perform the plurality of tasks based on at least one operating condition; and allocating a plurality of cache areas of at least one cache memory to the plurality of processor cores based on the plurality of characteristic information and the task execution environment. Sizes of the plurality of cache areas are set differently for the plurality of processor cores.
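A simplified version of the allocation step, assuming the "characteristic information" reduces to a per-core working-set weight and the cache budget is expressed in ways. Both assumptions are illustrative; the patent does not fix either:

```python
def allocate_cache_ways(total_ways, core_weights):
    """Split cache ways across processor cores in proportion to each
    core's working-set weight, so cache-area sizes differ per core."""
    total_weight = sum(core_weights.values())
    return {core: max(1, round(total_ways * w / total_weight))
            for core, w in core_weights.items()}
```

In a real device this would run continuously, re-deriving the weights from the monitored task execution environment rather than from static inputs.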

USING MULTIPLE QUOTA TREES IN RESOURCE SCHEDULING

Systems, computer-implemented methods, and computer program products to facilitate using multiple quota trees in resource scheduling are provided. According to an embodiment, a system can comprise a processor that executes computer executable components stored in memory. The computer executable components comprise an evaluation component that executes admissibility of a job request based on a scope property of one or more quota trees that apply to the job request.
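The scope-based admissibility check could be sketched as below. The dictionary scope matching and flat resource limits are hypothetical simplifications of a quota tree, which in practice would be hierarchical:

```python
class QuotaTree:
    def __init__(self, scope, limits):
        self.scope = scope    # scope property, e.g. {"team": "analytics"}
        self.limits = limits  # resource limits, e.g. {"cpus": 64}

    def applies_to(self, job):
        # The tree applies when its scope property matches the job.
        return all(job.get(k) == v for k, v in self.scope.items())

    def admits(self, job, usage):
        # Admit only if current usage plus the request fits the limits.
        return all(usage.get(r, 0) + job["request"].get(r, 0) <= limit
                   for r, limit in self.limits.items())

def admissible(job, trees, usage):
    # With multiple quota trees, every tree that applies to the job
    # must admit it (assumed AND semantics).
    return all(t.admits(job, usage) for t in trees if t.applies_to(job))
```

A job outside every tree's scope is vacuously admissible here; a scheduler would typically layer a default tree underneath to avoid that.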

READINESS STATES FOR PARTITIONED INTERNAL RESOURCES OF A MEMORY CONTROLLER

Apparatus, systems, and methods are presented for controlling readiness states for partitioned internal resources of a memory controller. The controller may include at least one internal hardware resource that is partitioned so that readiness states for individual partitions of the internal hardware resource are individually controllable. The controller may determine a value for a parameter that corresponds to upcoming workload for the controller. The controller may compare the value to a set of thresholds. The controller may control the readiness states for the partitions of the internal hardware resource based on the comparison of the parameter to the set of thresholds.
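The threshold-comparison step above admits a simple sketch: the more thresholds the upcoming-workload estimate crosses, the more partitions of the internal resource are brought to a ready state. The two-state model and the specific mapping are assumptions:

```python
def readiness_states(upcoming_workload, thresholds, num_partitions):
    """Map an upcoming-workload parameter to per-partition readiness:
    each threshold crossed wakes one more partition of the resource."""
    active = sum(1 for t in sorted(thresholds) if upcoming_workload >= t)
    active = min(active, num_partitions)
    return ["ready"] * active + ["low-power"] * (num_partitions - active)
```

For example, with thresholds (10, 20, 30), a workload estimate of 25 crosses two thresholds, so two of four partitions would be ready and two would stay in a low-power state.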