Patent classifications
G06F2209/5022
TECHNOLOGIES FOR SWITCHING NETWORK TRAFFIC IN A DATA CENTER
Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed.
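The classify-then-forward flow described above can be illustrated with a minimal sketch. This assumes Ethernet-style framing where an EtherType field identifies the protocol; the function names and handler table are hypothetical, not the patent's actual implementation.

```python
# Hypothetical sketch: determine the link layer protocol of a received frame
# and forward it as a function of that determination.
def link_layer_protocol(frame: bytes) -> str:
    """Classify a raw frame (illustrative only, Ethernet II assumed)."""
    if len(frame) >= 14:
        ethertype = int.from_bytes(frame[12:14], "big")
        if ethertype >= 0x0600:  # values >= 1536 denote an EtherType
            return "ethernet"
    return "unknown"

def forward(frame: bytes, handlers: dict) -> str:
    """Dispatch the frame to a protocol-specific forwarding routine."""
    proto = link_layer_protocol(frame)
    return handlers.get(proto, handlers["unknown"])(frame)
```

In a real multi-protocol switch the classification would cover each supported link layer protocol, not just Ethernet.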
Methods and apparatus to improve workload domain management in virtualized server systems using a free pool of virtualized servers
Methods, apparatus, systems, and articles of manufacture are disclosed to improve workload domain management of virtualized server systems. An example apparatus includes a resource pool handler to generate a pool of virtualized servers including a first virtualized server based on a policy, ones of the virtualized servers to be allocated to a workload domain to execute an application, a resource status analyzer to determine a health status associated with the workload domain and determine whether the health status satisfies a threshold based on the policy, and a resource allocator to allocate the first virtualized server to the workload domain to execute the application when the health status is determined to satisfy the threshold.
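The allocate-from-free-pool behavior can be sketched as follows. This is an illustrative reading only: the function name is hypothetical, and it assumes a numeric health score where a lower value indicates degradation, so the threshold is "satisfied" when health falls to or below it.

```python
# Hypothetical sketch: allocate a virtualized server from a free pool to a
# workload domain when the domain's health status satisfies a policy threshold.
def maybe_allocate(free_pool: list, domain: list, health: float, threshold: float) -> bool:
    """Move one server from the free pool into the workload domain when the
    health status satisfies the threshold; return True if allocated."""
    if health <= threshold and free_pool:
        domain.append(free_pool.pop(0))
        return True
    return False
```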
Estimating resource requests for workloads to offload to host systems in a computing environment
Provided are a computer program product, system, and method for estimating resource requests for workloads to offload to host systems in a computing environment. A calculation is made of the computational resources required to complete processing a plurality of unfinished workloads that have not completed. A determination is made of allocated resources that are not yet provisioned to workloads. The required resources are reduced by the allocated resources not yet provisioned to determine resources to provision. The resources to provision for the unfinished workloads are requested.
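The arithmetic in the abstract (required resources reduced by allocations not yet provisioned) can be sketched directly. The function name and resource keys are hypothetical.

```python
# Illustrative sketch of the estimation step: whatever the unfinished
# workloads still require, minus what is already allocated but not yet
# provisioned, is what must be requested from the host systems.
def resources_to_provision(required: dict, allocated_unprovisioned: dict) -> dict:
    """Reduce required resources by allocated-but-unprovisioned resources,
    clamping at zero per resource type."""
    return {name: max(0, required[name] - allocated_unprovisioned.get(name, 0))
            for name in required}
```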
SCALABLE SOFTWARE DEPLOYMENT ON AUTONOMOUS MOBILE ROBOTS
Various aspects related to methods, systems, and computer readable media for scalable software deployment on autonomous mobile robots are described herein. A mobile robotics system can include a storage component configured to store a containerized software package, a server in operative communication with the storage component, and an autonomous mobile robot (AMR) in operative communication with the server. The containerized software package is configured to direct the AMR to maneuver to perform at least one robotic task, monitor computational resource usage of the AMR associated with the at least one robotic task, and, responsive to a determination that computational resource usage at the AMR is or will be above a threshold, send a request to the server to perform a portion of processing tasks such that resource usage at the AMR is reduced to below the threshold or maintained below the threshold.
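The offload decision can be sketched as a simple proportion: if usage exceeds the threshold, request that the server take on enough of the processing tasks to bring local usage back under it. This is an assumed model (proportional scaling of usage with task share), not the patent's method, and the name is hypothetical.

```python
# Hypothetical sketch: what fraction of the AMR's processing tasks should be
# requested of the server so that local resource usage drops below the
# threshold, assuming usage scales proportionally with the task share kept.
def offload_fraction(usage: float, threshold: float) -> float:
    """Return 0.0 when usage is already at or below the threshold."""
    if usage <= threshold:
        return 0.0
    return (usage - threshold) / usage
```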
System and method for determining an amount of virtual machines for use with extract, transform, load (ETL) processes
In accordance with an embodiment, described herein are systems and methods for determining or allocating an amount, quantity, or number of compute instances or virtual machines for use with extract, transform, load (ETL) processes. In an example embodiment, a particular (e.g., optimal) number of virtual machines (VMs) can be determined by predicting ETL completion times for customers, using historical data. ETL processes can be simulated with an initial number of virtual machines. If the predicted duration is greater than the desired duration, the number of virtual machines can be incremented and the simulation repeated. Actual completion times from ETL processes can be fed back to update a determined number of compute instances or virtual machines. In accordance with an embodiment, the system can be used, for example, to generate alerts associated with customer service level agreements (SLAs).
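The simulate-and-increment loop in the abstract maps directly onto a short sketch. The predictor is passed in as a callable standing in for the historical-data model; the function name and cap are hypothetical.

```python
# Illustrative sketch of the VM-count determination loop: simulate the ETL
# run with an initial VM count, and increment until the predicted duration
# meets the desired duration (or a safety cap is reached).
def determine_vm_count(predict_duration, desired_duration: float,
                       start: int = 1, max_vms: int = 64) -> int:
    """predict_duration(vms) stands in for the historical-data simulation."""
    vms = start
    while vms < max_vms and predict_duration(vms) > desired_duration:
        vms += 1
    return vms
```

For example, with a predictor that models 120 minutes of work dividing evenly across VMs and a 30-minute target, the loop settles on 4 VMs.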
METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR MANAGING STORAGE SYSTEM
Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for managing a storage system. The method includes: based on respective task types of a plurality of tasks to be executed, allocating the plurality of tasks to a plurality of accelerator resources in a storage system for processing; at least for a first accelerator resource in the plurality of accelerator resources, determining a first polling interval based on an average task size of a first group of tasks allocated to the first accelerator resource; and scheduling the execution of the first group of tasks at the first accelerator resource at the first polling interval. The embodiments of the present disclosure can optimize the scheduling of the tasks to be executed on the plurality of accelerator resources, thereby optimizing system performance.
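The idea of deriving a polling interval from average task size can be sketched as a linear model: larger tasks run longer on the accelerator, so the scheduler can poll less often. The scale factor, floor, and function name are all assumptions for illustration.

```python
# Hypothetical sketch: compute the first polling interval for an accelerator
# resource from the average size of its allocated group of tasks.
def first_polling_interval(task_sizes: list, seconds_per_unit: float = 1e-6,
                           floor: float = 1e-4) -> float:
    """Scale the average task size into seconds, with a minimum interval so
    tiny tasks are not polled pathologically often."""
    avg = sum(task_sizes) / len(task_sizes)
    return max(floor, avg * seconds_per_unit)
```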
SYSTEMS AND METHODS TO TRIGGER WORKLOAD MIGRATION BETWEEN CLOUD-BASED RESOURCES AND LOCAL RESOURCES
Embodiments of systems and methods are provided to trigger migration of a workload from cloud-based resources to local resources, or vice versa. In the disclosed embodiments, an orchestration service receives telemetry data from a client system associated with a user and cloud resource usage data corresponding to the user from a plurality of cloud service providers. Before the end of each cloud computing service billing cycle, the orchestration service: uses the cloud resource usage data and/or the telemetry data to determine a cloud resource usage, which is expected for the user at the end of the cloud computing service billing cycle; generates a trigger to migrate the user's workload from cloud-based resources to local resources, or vice versa, based on the expected cloud resource usage; and initiates migration of the user's workload if a trigger is generated. As such, the orchestration service can be used to effectively manage per-user cloud resource costs.
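The before-end-of-cycle projection and trigger can be sketched with a linear extrapolation. The patent does not specify the prediction model or the "vice versa" criterion, so the extrapolation, the headroom factor, and the names here are assumptions.

```python
# Illustrative sketch: project usage to the end of the billing cycle, then
# generate a migration trigger in either direction.
def expected_cycle_usage(usage_to_date: float, days_elapsed: int,
                         cycle_days: int) -> float:
    """Linear extrapolation of cloud resource usage over the billing cycle."""
    return usage_to_date * cycle_days / days_elapsed

def migration_trigger(expected: float, budget: float, headroom: float = 0.5):
    """Return a trigger direction, or None when no migration is needed."""
    if expected > budget:
        return "to_local"            # projected overspend: pull workload back
    if expected < headroom * budget:
        return "to_cloud"            # ample budget: offload to cloud resources
    return None
```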
Information processing system and method for controlling information processing system
An information processing system includes an information processing apparatus and a management apparatus. A first processor of the information processing apparatus controls resource allocation to a first virtual machine that operates on the information processing apparatus and executes a virtual load balancer that distributes a first load to a plurality of second virtual machines. When a second load of the virtual load balancer exceeds a predetermined first threshold value, the first processor notifies the management apparatus of an occurrence of an overload. The first processor receives and executes an addition command for adding a resource allocated to the first virtual machine. A second processor of the management apparatus creates, upon being notified of the occurrence of the overload, the addition command based on resource information of the information processing apparatus and management information of the virtual load balancer. The second processor transmits the addition command to the information processing apparatus.
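The management apparatus's side of the exchange, building an addition command from the apparatus's resource information, can be sketched as below. The command shape, resource keys, and function name are hypothetical.

```python
# Hypothetical sketch: on an overload notification, create an addition
# command granting one increment of resources if the apparatus's resource
# information shows that increment is still free.
def build_addition_command(free_resources: dict, step: dict):
    """Return an addition command dict, or None if the apparatus lacks the
    free resources to satisfy the increment."""
    if all(free_resources.get(k, 0) >= v for k, v in step.items()):
        return {"action": "add", "resources": dict(step)}
    return None
```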
Dynamic application migration across storage platforms
Embodiments of the present disclosure relate to load balancing application processing between storage platforms. Input/output (I/O) workloads can be anticipated during one or more time-windows. Each I/O workload can comprise one or more I/O operations corresponding to one or more applications. Processing I/O operations of each application can be dynamically migrated to one or more storage platforms of a plurality of storage platforms based on the anticipated workload.
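The migration decision, placing each application's anticipated I/O workload on a storage platform, can be sketched as a greedy balancer. The patent does not disclose its placement policy; the greedy strategy and names below are assumptions for illustration.

```python
# Illustrative sketch: greedy placement of applications onto storage
# platforms based on anticipated I/O workload per time-window, assigning the
# heaviest application to the platform with the most remaining capacity.
def assign_applications(anticipated_load: dict, platform_capacity: dict) -> dict:
    """Map each application to a storage platform."""
    remaining = dict(platform_capacity)
    placement = {}
    for app, load in sorted(anticipated_load.items(), key=lambda kv: -kv[1]):
        target = max(remaining, key=remaining.get)
        placement[app] = target
        remaining[target] -= load
    return placement
```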
PROCESSING SYSTEM CONCURRENCY OPTIMIZATION SYSTEM
A processing system concurrency optimization system includes a processing system having first and second processing subsystems, a power system that is coupled to the first and second processing subsystems, a processing system concurrency optimization database, and a processing system concurrency optimization subsystem that is coupled to the power system and the processing system concurrency optimization database. The processing system concurrency optimization subsystem determines that a first workload has been provided for performance by the processing system, and identifies a first processing system concurrency optimization profile that is associated with the first workload in the processing system concurrency optimization database. Based on the first processing system concurrency optimization profile, the processing system concurrency optimization subsystem configures the power system to provide first power in a first power range to the first processing subsystem and second power in a second power range to the second processing subsystem.
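The profile lookup and per-subsystem power configuration described above can be sketched as a table-driven function. The profile database contents, watt values, and names are purely illustrative.

```python
# Hypothetical sketch: a concurrency optimization database mapping workloads
# to per-subsystem power ranges (watts), and a lookup that configures the
# power system accordingly.
PROFILE_DB = {
    "training":  {"cpu": (80, 120), "gpu": (200, 300)},
    "inference": {"cpu": (40, 60),  "gpu": (100, 150)},
}

def configure_power(workload: str, db=PROFILE_DB) -> dict:
    """Return the power range each processing subsystem should receive for
    the identified workload's optimization profile."""
    profile = db[workload]
    return {subsystem: {"min_w": lo, "max_w": hi}
            for subsystem, (lo, hi) in profile.items()}
```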