
Technologies for switching network traffic in a data center

Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed.
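A minimal Python sketch of the protocol-detection-and-forward idea described above; the EtherType values, handler names, and dispatch scheme are illustrative assumptions, not taken from the patent:

```python
# Hypothetical mapping from EtherType to a forwarding pipeline name.
ETHERTYPE_HANDLERS = {
    0x0800: "ethernet_ipv4",
    0x8915: "roce",  # RDMA over Converged Ethernet
    0x8906: "fcoe",  # Fibre Channel over Ethernet
}

def detect_link_layer(frame: bytes) -> str:
    """Read the EtherType field (bytes 12-13 of an Ethernet frame)."""
    if len(frame) < 14:
        raise ValueError("frame too short")
    ethertype = int.from_bytes(frame[12:14], "big")
    return ETHERTYPE_HANDLERS.get(ethertype, "unknown")

def forward(frame: bytes) -> str:
    """Forward as a function of the detected link layer protocol."""
    proto = detect_link_layer(frame)
    # A real switch would select a per-protocol egress pipeline here.
    return f"forwarded via {proto} pipeline"
```

In hardware this dispatch would be done by the communication circuitry itself; the table lookup above only stands in for that selection.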

Normalizing target utilization rates of a cross-application table of concurrently executing applications to schedule work on a command queue of a graphics processor
11593175 · 2023-02-28

In general, embodiments are disclosed herein for tracking and allocating graphics hardware resources. In one embodiment, a software and/or firmware process constructs a cross-application command queue utilization table based on one or more specified command queue quality of service (QoS) settings, in order to track the target and current utilization rates of each command queue on the graphics hardware over a given frame and to load work onto the graphics hardware in accordance with the utilization table. Based on the constructed utilization table for a given frame, any command queues that have exceeded their respective target utilization values may be moved to an “inactive” status for the duration of the current frame. For any command queues that remain in an “active” status for the current frame, work from those command queues may be loaded onto slots of the appropriate data masters of the graphics hardware in any desired order.
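The active/inactive bookkeeping can be sketched in a few lines of Python; the table layout and queue names are illustrative assumptions:

```python
def build_frame_statuses(utilization_table):
    """Mark queues whose current utilization exceeds their QoS target
    as 'inactive' for the remainder of the current frame.

    utilization_table: dict queue_name -> (target_rate, current_rate),
    with rates expressed as fractions of the frame.
    """
    return {
        name: "inactive" if current > target else "active"
        for name, (target, current) in utilization_table.items()
    }

def schedule_active_work(utilization_table, pending_work):
    """Load work only from queues still 'active' this frame."""
    statuses = build_frame_statuses(utilization_table)
    return [
        (queue, item)
        for queue, item in pending_work
        if statuses.get(queue) == "active"
    ]
```

The abstract leaves the loading order open ("in any desired order"); the sketch simply preserves submission order.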

Infrastructure load balancing using software-defined networking controllers

A system to facilitate infrastructure management is described. The system includes one or more processors and a non-transitory machine-readable medium storing instructions that, when executed, cause the one or more processors to execute an infrastructure management controller to automatically balance utilization of infrastructure resources between a plurality of on-premise infrastructure controllers.
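One simple way such a controller could balance utilization is to shift load from the busiest on-premise controller to the least-busy one; the tolerance value and data shape below are illustrative assumptions, not details from the patent:

```python
def rebalance(controllers, tolerance=0.1):
    """controllers: dict name -> utilization (0..1). Returns a
    (source, destination) pair moving load from the busiest to the
    least-busy on-premise controller, or None if already balanced."""
    busiest = max(controllers, key=controllers.get)
    idlest = min(controllers, key=controllers.get)
    if controllers[busiest] - controllers[idlest] <= tolerance:
        return None  # spread within tolerance; no move needed
    return (busiest, idlest)
```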

WORKLOAD MIGRATION RECOMMENDATIONS IN HETEROGENEOUS WORKSPACE ENVIRONMENTS

Systems and methods for workload migration recommendations in heterogeneous workspace environments are described. In some embodiments, an Information Handling System (IHS) may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution, cause the IHS to: identify a set of workspaces launched by a set of users, among a plurality of workspaces launched by a plurality of users, where each workspace in the set of workspaces is associated with a performance or user experience metric below a threshold value; select, among a plurality of workloads executed within the set of workspaces, one or more workloads suitable for migration; and for each user of the set of users, determine whether to migrate any of the selected one or more workloads based, at least in part, upon an allocation of cloud and device resources available to the plurality of users.
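The identify/select/decide flow above can be sketched as follows; the threshold, field names, and budget model are illustrative assumptions, not taken from the patent:

```python
METRIC_THRESHOLD = 0.7  # illustrative minimum acceptable experience score

def plan_migrations(workspaces, cloud_budget):
    """workspaces: list of dicts with 'user', 'score' (performance or
    user experience metric, 0..1), and 'workloads' (each a
    (name, cloud_cost) pair). Recommends migrating workloads from
    under-performing workspaces while the shared cloud allocation
    available to the users still covers the cost."""
    recommendations = []
    for ws in workspaces:
        if ws["score"] >= METRIC_THRESHOLD:
            continue  # workspace is healthy; nothing to migrate
        for name, cost in ws["workloads"]:
            if cost <= cloud_budget:
                recommendations.append((ws["user"], name))
                cloud_budget -= cost
    return recommendations
```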

SERVICES THREAD SCHEDULING BASED UPON THREAD TRACING

One embodiment provides a method, including: producing, for each of a plurality of containers, a resource profile for each thread in each of the plurality of containers; identifying, for each of the plurality of containers and from, at least in part, the resource profiles, container dependencies between threads on a single one of the plurality of containers; determining service dependencies between threads across different ones of the plurality of containers; scheduling, based upon the container dependencies and the service dependencies, threads to cores, wherein the scheduling is based upon minimizing thread processing times; and publishing the container dependencies and the service dependencies on a registry of the node clusters.
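A minimal Python sketch of a dependency-aware thread-to-core schedule, using a greedy longest-processing-time-first heuristic to keep processing times low; the heuristic and data shapes are illustrative assumptions, not the patent's algorithm:

```python
def schedule_threads(profiles, dependencies, num_cores):
    """Greedy longest-processing-time-first assignment that prefers
    co-locating dependent threads on the same core.

    profiles: dict thread -> estimated processing time (the resource
    profile, simplified to one number)
    dependencies: iterable of (thread_a, thread_b) pairs standing in
    for the container and service dependencies."""
    core_load = [0.0] * num_cores
    placement = {}
    # Place heavier threads first to keep the overall makespan small.
    for thread in sorted(profiles, key=profiles.get, reverse=True):
        # Prefer a core already hosting a thread this one depends on.
        preferred = {placement[a] for a, b in dependencies
                     if b == thread and a in placement}
        preferred |= {placement[b] for a, b in dependencies
                      if a == thread and b in placement}
        core = min(preferred, default=None, key=lambda c: core_load[c])
        if core is None:
            core = min(range(num_cores), key=lambda c: core_load[c])
        placement[thread] = core
        core_load[core] += profiles[thread]
    return placement
```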

Method of task transition between heterogeneous processors

A method, system, and apparatus determines that one or more tasks should be relocated from a first processor to a second processor by comparing performance metrics to associated thresholds or by using other indications. To relocate the one or more tasks from the first processor to the second processor, the first processor is stalled and state information from the first processor is copied to the second processor. The second processor uses the state information and then services incoming tasks instead of the first processor.
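The stall/copy/switch sequence can be sketched as below; the latency metric, threshold value, and class layout are illustrative assumptions:

```python
LATENCY_THRESHOLD_MS = 20.0  # illustrative relocation trigger

class Processor:
    def __init__(self, name):
        self.name = name
        self.state = {}      # architectural state to be transferred
        self.stalled = False

def maybe_relocate(first, second, latency_ms):
    """Compare the performance metric to its threshold; if exceeded,
    stall the first processor, copy its state to the second, and let
    the second service incoming tasks instead."""
    if latency_ms <= LATENCY_THRESHOLD_MS:
        return first  # metric acceptable; keep servicing on first
    first.stalled = True
    second.state = dict(first.state)  # state copy
    return second
```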

Edge computing workload balancing

A set of workload criteria is determined from a workload associated with a plurality of sources. The workload is divided among a set of workload groups according to the set of workload criteria and a first workload scheduler. A set of edge computing resources is assigned to each workload group within the set according to the set of workload criteria and the set of workload groups. A portion of the workload associated with a subset of the plurality of sources is handled by a first subset of edge computing resources and a second workload scheduler, where the subset of sources is associated with a first workload group. The handling includes balancing, by the second workload scheduler, the portion of the workload among the subset of sources. The handled workload is reported to a control center.
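The divide-then-assign steps of the first scheduler can be sketched as follows; the criterion key, proportional assignment rule, and names are illustrative assumptions, not from the patent:

```python
def divide_workload(items, criteria):
    """Partition workload items into workload groups by a criterion
    key (e.g. a latency class); criteria lists the accepted keys."""
    groups = {c: [] for c in criteria}
    for item in items:
        groups[item["class"]].append(item)
    return groups

def assign_resources(groups, resources):
    """Assign edge computing resources to each workload group,
    proportionally to group size."""
    total = sum(len(v) for v in groups.values()) or 1
    assignment, start = {}, 0
    for name, items in groups.items():
        share = round(len(items) / total * len(resources))
        assignment[name] = resources[start:start + share]
        start += share
    return assignment
```

Per the abstract, a second scheduler would then balance each group's portion among its sources and report the handled workload to a control center; that step is omitted here.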

Predictive virtual machine launch-based capacity management

A host computer inventory system within a provider network detects patterns of launch requests on an individual user account basis. For a customer that cyclically submits similar launch requests, the inventory system may allocate slots in specific host computers consistent with the detected launch pattern so that future attempts to launch the virtual machines will be honored using the pre-allocated hosts.
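A toy version of the pattern detection and slot pre-allocation might look like this; the (hour, instance type) pattern key and repeat count are illustrative assumptions:

```python
from collections import Counter

def detect_cyclic_pattern(launch_log, min_repeats=3):
    """launch_log: list of (hour_of_day, instance_type) launch records
    for one user account. Returns patterns seen at least min_repeats
    times, i.e. candidates for pre-allocating host slots."""
    counts = Counter(launch_log)
    return {pattern for pattern, n in counts.items() if n >= min_repeats}

def preallocate(patterns, free_slots):
    """Reserve one free host slot per detected pattern so a future
    matching launch request can be honored from the reserved host."""
    reserved = {}
    for pattern in sorted(patterns):
        if free_slots:
            reserved[pattern] = free_slots.pop(0)
    return reserved
```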

MULTIPLE METRIC-BASED WORKLOAD BALANCING BETWEEN STORAGE RESOURCES
20220357998 · 2022-11-10

An apparatus comprises a processing device configured to determine a workload level of each storage resource in a set of two or more storage resources, the workload levels being based at least in part on a processor performance metric, a memory performance metric, and a load performance metric. The processing device is also configured to identify a performance imbalance rate for the set of two or more storage resources, and to perform workload balancing for the set of two or more storage resources responsive to (i) the performance imbalance rate for the set of two or more storage resources exceeding a designated imbalance rate threshold and (ii) at least one storage resource in the set of two or more storage resources having a workload level exceeding a designated threshold workload level.
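The two-condition trigger can be sketched directly; the metric weights and both thresholds are illustrative assumptions, not values from the patent:

```python
def workload_level(cpu, mem, load, weights=(0.4, 0.3, 0.3)):
    """Combine the processor, memory, and load performance metrics
    (each normalized to 0..1) into a single workload level."""
    return weights[0] * cpu + weights[1] * mem + weights[2] * load

def should_rebalance(levels, imbalance_threshold=0.25, level_threshold=0.7):
    """Rebalance only when (i) the spread between the busiest and
    least-busy storage resource exceeds the imbalance threshold and
    (ii) at least one resource exceeds the workload-level threshold."""
    imbalance = max(levels) - min(levels)
    return imbalance > imbalance_threshold and max(levels) > level_threshold
```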

SERVICE MANAGEMENT SYSTEM FOR SCALING SERVICES BASED ON DEPENDENCY INFORMATION IN A DISTRIBUTED DATABASE
20220357861 · 2022-11-10

A service management system manages scaling and migration of a plurality of services in a content management system. The service management system may maintain a plurality of services that are distributed across a plurality of clusters, each service serving a functionality in the content management system. Responsive to receiving a request to scale a service, the service management system may access dependency data describing dependencies among the plurality of services. Based on the dependency data, the service management system may determine a set of services to scale and determine a scaling sequence in which the set of services are to be scaled. The service management system may further determine other parameters for the scaling process, such as scaling ratios, allocation ratios, and scaling factors associated with the services, and the scaling is further based on these parameters.
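Determining the set of services and a scaling sequence from dependency data amounts to a dependency-first traversal, sketched below in Python; the dependency-dictionary shape is an illustrative assumption:

```python
def scaling_sequence(service, dependencies):
    """Return the services to scale, in an order where each service's
    dependencies are scaled before the service itself.

    dependencies: dict service -> list of services it depends on."""
    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in dependencies.get(name, []):
            visit(dep)  # scale dependencies first
        order.append(name)

    visit(service)
    return order
```

Scaling ratios, allocation ratios, and scaling factors would then be applied per service along this sequence; they are omitted from the sketch.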