G06F9/5083

LOAD DISTRIBUTION APPARATUS, LOAD DISTRIBUTION METHOD AND PROGRAM

A load distribution apparatus connected, via a network, to a plurality of relay apparatuses that relay communication performed by a terminal, and to the terminal, including: storage means configured to store relay apparatus identifiers that identify each of the plurality of relay apparatuses, installation site information that indicates installation sites of each of the plurality of relay apparatuses, and load information that indicates loads of each of the plurality of relay apparatuses; load management means configured to collect the load information from each of the plurality of relay apparatuses to store the load information in the storage means; selection means configured, when receiving a request from the terminal, to select a relay apparatus for relaying communication performed by the terminal from among the plurality of relay apparatuses based on the installation site information or the load information; and transmission means configured to transmit, to the terminal that transmits the request, a relay apparatus identifier of the relay apparatus selected by the selection means.
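The selection logic described in this abstract can be sketched in Python. The policy below (prefer relays at the terminal's installation site, then pick the least-loaded candidate) and the `Relay` fields are illustrative assumptions, not the claimed implementation:

```python
from dataclasses import dataclass

@dataclass
class Relay:
    relay_id: str   # relay apparatus identifier
    site: str       # installation site information
    load: float     # load information (e.g. utilization in [0, 1])

def select_relay(relays, terminal_site):
    """Prefer relays at the terminal's site; among candidates, pick the least loaded."""
    local = [r for r in relays if r.site == terminal_site]
    candidates = local or relays   # fall back to all relays if none are local
    return min(candidates, key=lambda r: r.load)

relays = [
    Relay("relay-a", "tokyo", 0.8),
    Relay("relay-b", "tokyo", 0.3),
    Relay("relay-c", "osaka", 0.1),
]
print(select_relay(relays, "tokyo").relay_id)  # relay-b
```

The returned `relay_id` corresponds to the identifier the transmission means would send back to the requesting terminal.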

OUTPUT DEVICE, DATA STRUCTURE, OUTPUT METHOD, AND OUTPUT PROGRAM
20180004869 · 2018-01-04

An output device 10 is provided with an output unit 11 for outputting, on the basis of job feature information indicating the features of a job of a distributed processing system, estimation model application information, which is information in a format suitable for an estimation model that estimates the amount of computer resources required for processing a task constituting the job. The estimation model application information may include word-containing information: binary information indicating whether or not a character string indicated by character string information included in the job feature information contains a prescribed word. The estimation model application information may also include numerical conversion label information: a value, serving as label information, derived by converting, with a prescribed function, the numeric value indicated by numerical information included in the job feature information.
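A minimal Python sketch of the two transformations named above. The prescribed word list and the choice of `log1p` as the prescribed numeric conversion function are assumptions for illustration:

```python
import math

PRESCRIBED_WORDS = ["join", "shuffle", "aggregate"]  # assumed vocabulary

def to_model_features(job):
    """Convert job feature information into estimation-model application information."""
    feats = {}
    text = job.get("description", "")
    # word-containing information: binary flag per prescribed word
    for w in PRESCRIBED_WORDS:
        feats[f"contains_{w}"] = 1 if w in text else 0
    # numeric conversion by a prescribed function (log1p chosen here as an example)
    feats["log_input_size"] = math.log1p(job.get("input_size", 0))
    return feats
```

A feature dictionary of this shape is what a downstream resource-estimation model would consume.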

METHOD AND DEVICE FOR PROCESSING, AT A NETWORK EQUIPMENT, A PROCESSING REQUEST FROM A TERMINAL
20180007125 · 2018-01-04

Network equipment for processing a request from a terminal configured to be connected to a network to which the network equipment can be connected is described. The network equipment includes a receiver configured to receive, from the terminal, a message forming part of the processing request, a relay agent configured to insert network identification information into the received message, and a load balancer configured to forward the received message to one of a plurality of processing units of the network equipment, depending on workload information associated with the processing units. The processing units are further configured to retrieve, based on the network identification information extracted from the received message, context information from a database unit shared between the processing units and to process the received message according to a state of the processing request, the processing request state being retrieved from the context information.
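The flow above can be sketched in Python with a dict standing in for the shared database unit. The specific state names and the least-loaded forwarding rule are assumptions for illustration:

```python
CONTEXT_DB = {}   # shared database unit (a plain dict stands in here)

def relay_agent(message, network_id):
    # insert network identification information into the received message
    message["network_id"] = network_id
    return message

def load_balancer(workloads):
    # forward to the processing unit with the lowest workload
    unit = min(workloads, key=workloads.get)
    workloads[unit] += 1
    return unit

def processing_unit(message):
    # retrieve context (and the request state) keyed by the network identification
    ctx = CONTEXT_DB.setdefault(message["network_id"], {"state": "NEW"})
    if ctx["state"] == "NEW":
        ctx["state"] = "IN_PROGRESS"
    elif ctx["state"] == "IN_PROGRESS":
        ctx["state"] = "DONE"
    return ctx["state"]
```

Because the context lives in the shared store, any processing unit can continue a request started by another.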

CENTRALIZED LOAD BALANCER WITH WEIGHTED HASH FUNCTION
20180007126 · 2018-01-04

A method, apparatus, and machine-readable storage medium are disclosed for balancing loads among a plurality of virtual machines (VMs) from a central dispatcher. The dispatcher receives data packets and maps them to VMs selected from the plurality of VMs using a weighted hash function that has an associated weighting for each VM, forwarding each packet to a VM accordingly. A load balancer decrements the weighting for a VM in response to an indication that the load on the VM exceeds a first load threshold. Weightings can correspond to a number of bins associated with each VM, and are adjusted in response to receiving invite and disinvite messages from the VMs, representing their respective loads.
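A minimal Python sketch of a bin-based weighted hash dispatcher of this kind. The use of SHA-256 as the hash and the rebuild-on-change bin layout are implementation assumptions:

```python
import hashlib

class WeightedHashDispatcher:
    def __init__(self, weights):
        self.weights = dict(weights)   # weighting = number of bins per VM
        self._rebuild()

    def _rebuild(self):
        # each VM owns a number of bins equal to its current weighting
        self.bins = [vm for vm, w in sorted(self.weights.items()) for _ in range(w)]

    def dispatch(self, packet_key):
        # weighted hash: same key always maps to the same bin, hence the same VM
        h = int(hashlib.sha256(packet_key.encode()).hexdigest(), 16)
        return self.bins[h % len(self.bins)]

    def disinvite(self, vm):
        # VM reports load above the threshold: decrement its weighting
        if self.weights[vm] > 0:
            self.weights[vm] -= 1
            self._rebuild()

    def invite(self, vm):
        # VM reports spare capacity: increment its weighting
        self.weights[vm] += 1
        self._rebuild()
```

Shrinking a VM's bin count shifts a proportional share of the hash space away from it without touching the mapping for the other VMs' keys.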

PERFORMANCE VARIABILITY REDUCTION USING AN OPPORTUNISTIC HYPERVISOR

An opportunistic hypervisor determines that a guest virtual machine of a virtualization host has voluntarily released control of a physical processor. The hypervisor uses the released processor to identify and initiate a virtualization management task which has not been completed. In response to determining that at least a portion of the task has been performed, the hypervisor enters a quiescent state, releasing the physical processor to enable resumption of the guest virtual machine.

SYSTEM AND METHOD FOR SCALING APPLICATION CONTAINERS IN CLOUD ENVIRONMENTS

A method includes polling, via a service specific manager operating on a software container in a cloud infrastructure, usage of different application resources and parameters for each service of a plurality of services provided in the cloud infrastructure to yield respective polled data for each service, collating, at the service specific manager, the respective polled data for each service to yield a collation, and based on the collation, deriving a respective weight for each service which a container manager can use to create multiple instances of a new service. The method further includes communicating the respective weight for each service to the container manager and determining, via the container manager, whether to scale up or scale down container services based on the respective weight for each service.
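The collation-and-weighting step can be sketched in Python. Averaging the polled samples and weighting CPU and memory equally are assumptions for illustration, as are the scale thresholds:

```python
def derive_weights(polled):
    """Collate polled (cpu, mem) samples per service into a single weight each."""
    weights = {}
    for svc, samples in polled.items():
        cpu = sum(s[0] for s in samples) / len(samples)
        mem = sum(s[1] for s in samples) / len(samples)
        weights[svc] = 0.5 * cpu + 0.5 * mem   # assumed equal weighting of resources
    return weights

def scale_decision(weight, up=0.75, down=0.25):
    """Container-manager side: decide scaling from a service's derived weight."""
    if weight > up:
        return "scale_up"
    if weight < down:
        return "scale_down"
    return "hold"
```

The service-specific manager would compute the weights and communicate them to the container manager, which applies the decision per service.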

METHOD AND APPARATUS FOR LOAD ESTIMATION
20180011747 · 2018-01-11

A disclosed load estimation method includes: collecting run information of a processor that is executing a predetermined program; specifying an execution status of the processor based on the collected run information; and estimating a load of the predetermined program based on a result of comparing the execution status of the processor with execution characteristics of the processor. Each of the execution characteristics is stored in association with a load level of the predetermined program.
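The comparison step can be sketched in Python as a nearest-match lookup. Representing an execution status as a tuple of run counters and using squared distance as the comparison metric are assumptions for illustration:

```python
def estimate_load(status, characteristics):
    """Return the load level whose stored execution characteristic best matches."""
    def dist(a, b):
        # squared distance between two vectors of run counters
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(characteristics, key=lambda level: dist(status, characteristics[level]))

# execution characteristics stored per load level, e.g. (instructions/ms, cache misses/ms)
CHARACTERISTICS = {"low": (10, 1), "high": (100, 50)}
```

The observed execution status is compared against each stored characteristic, and the associated load level of the closest one is reported as the estimate.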

System and method for multi-tenant implementation of graphics processing unit
11709713 · 2023-07-25

A method for graphics processing in which a graphics processing unit (GPU) resource is allocated among applications such that each application is allocated a set of time slices. Commands of draw calls are loaded into rendering command buffers in order to render an image frame for a first application. The commands are processed by the GPU resource within a first time slice allocated to the first application. The method includes determining that at least one command has not been executed at the end of the first time slice, and halting execution of commands so that the remaining one or more commands are not processed in the first time slice. A GPU configuration is preserved for the commands after processing the last executed command, and that GPU configuration is used when processing the remaining commands in a second time slice.
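The halt-and-resume behavior can be sketched in Python. Modeling commands as items with a cost, the slice budget, and the shape of the preserved configuration are all assumptions for illustration:

```python
def run_slice(commands, gpu_config, slice_budget, cost):
    """Process buffered commands until the time slice is exhausted.

    Returns the preserved GPU configuration: where to resume, and whether
    execution was halted with commands remaining.
    """
    i = gpu_config.get("next_index", 0)   # resume after the last executed command
    spent = 0
    while i < len(commands) and spent + cost(commands[i]) <= slice_budget:
        spent += cost(commands[i])        # stand-in for executing the draw command
        i += 1
    # halt: preserve configuration after the last executed command
    return {"next_index": i, "halted": i < len(commands)}
```

A first call consumes as many commands as fit in the slice; passing the preserved configuration into a later call continues with the remaining commands.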

Non-disruptive firmware upgrade of symmetric hardware accelerator systems
11709667 · 2023-07-25

In a symmetric hardware accelerator system, an initial hardware accelerator is selected for an upgrade of firmware. The initial and other hardware accelerators handle workloads that have been balanced across the hardware accelerators. Workloads are rebalanced by directing workloads having low CPU utilization to the initial hardware accelerator. A CPU fallback is conducted of the workloads of the initial hardware accelerator to the CPU. While the CPU is handling the workloads, firmware of the initial hardware accelerator is upgraded.

System and Method for Managing a Hybrid Compute Environment
20230236903 · 2023-07-27

Disclosed are systems, hybrid compute environments, methods and computer-readable media for dynamically provisioning nodes for a workload. In the hybrid compute environment, each node communicates with a first resource manager associated with the first operating system and a second resource manager associated with a second operating system. The method includes receiving an instruction to provision at least one node in the hybrid compute environment from the first operating system to the second operating system; after provisioning the second operating system, polling at least one signal from the resource manager associated with the at least one node; processing at least one signal from the second resource manager associated with the at least one node; and consuming resources associated with the at least one node having the second operating system provisioned thereon.