Patent classifications
G06F9/5033
MASTER ELECTRONIC APPARATUS, ELECTRONIC APPARATUS AND CONTROLLING METHOD THEREOF
A master electronic apparatus, an electronic apparatus, and a controlling method thereof, where the master electronic apparatus includes a communication interface and a processor. The processor receives first data and second data regarding predicted power consumption amounts corresponding to respective tasks of a first electronic apparatus and a second electronic apparatus, calculates summed-up values of the predicted power consumption amounts for respective times, and compares the summed-up values with instantaneous power amount limits for the respective times. Based on the summed-up values being smaller than the instantaneous power amount limits, the processor transmits a task approval signal to the second electronic apparatus; based on identifying a time at which a summed-up value is greater than or equal to the instantaneous power amount limit, the processor transmits, based on priorities, a control signal controlling an operation in the identified time to at least one of the first electronic apparatus and the second electronic apparatus.
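The summation-and-comparison step in this abstract can be sketched as follows. This is an illustrative assumption of the control flow only; the function name, the per-slot lists, and the fixed "defer apparatus B" priority rule are not taken from the patent.

```python
def schedule_decisions(predicted_a, predicted_b, limits, priority_a_higher=True):
    """For each time slot, approve the task if the summed predicted power
    stays below the instantaneous limit; otherwise emit a control signal
    targeting the lower-priority apparatus for that slot."""
    decisions = []
    for t, (pa, pb, limit) in enumerate(zip(predicted_a, predicted_b, limits)):
        if pa + pb < limit:
            decisions.append((t, "approve"))
        else:
            # Summed value meets or exceeds the limit: control by priority.
            target = "B" if priority_a_higher else "A"
            decisions.append((t, f"control:{target}"))
    return decisions
```

Here the second slot (3 + 5 = 8 vs. a limit of 8) triggers a control signal rather than an approval, matching the "greater than or equal to" condition in the abstract.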
System and method for cloud workload provisioning
Disclosed is a system and method for cloud workload provisioning. In one implementation, the present invention provides a system that gives the user automated guidance for the workload to be provisioned. The present invention matches the user's workload profile against a wide variety of historical data sets and makes it easy for users to choose cloud provisioning for various kinds of workloads. The system can automatically readjust a workload profile for cloud provisioning, and can also provide a manual selection option. In one embodiment, the present invention provides a system and method that derive a workload provision scaling factor using a historic data set. Furthermore, the system and method can automatically or manually readjust the provision scaling factor based on a workload profile for cloud provisioning.
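One way to derive a provision scaling factor from a historic data set, as the abstract describes, is sketched below. The peak-to-mean ratio with a headroom margin, and the profile-weighted readjustment, are heuristics assumed here for illustration, not the patent's actual mechanism.

```python
def provision_scaling_factor(historic_usage, headroom=1.2):
    """Scaling factor = (peak / mean) * headroom, so that provisioned
    capacity covers observed bursts plus a safety margin."""
    mean = sum(historic_usage) / len(historic_usage)
    peak = max(historic_usage)
    return (peak / mean) * headroom

def readjust(factor, profile_weight):
    """Automatic (or manual) readjustment: weight the derived scaling
    factor by the workload profile."""
    return factor * profile_weight
```

For a usage history of [2, 2, 2, 6] (mean 3, peak 6) the derived factor is 2 x 1.2 = 2.4, which `readjust` can then scale for a specific workload profile.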
User-Centric Widgets and Dashboards
User-centric widgets and dashboards are automatically modified to reflect a user's goals and needs.
JOB SCHEDULING MANAGEMENT
Resource utilization data for a set of system components of a computing system is collected. The resource utilization data includes performance records for a set of jobs. By analyzing the collected resource utilization data for the set of system components, a resource allocation is identified for a particular job of the set of jobs. A first execution time for the particular job is determined based on the resource allocation for the particular job and the resource utilization data for the set of system components. A location at which to execute the particular job is determined based on how the particular job has been executed at the location previously. The first execution time may be a time when the computing system achieves a resource availability threshold with respect to the resource allocation. Aspects are also directed toward performing the particular job at the first execution time.
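The "first execution time" selection can be sketched as a scan for the earliest slot that meets the availability threshold. The time-indexed utilization list and capacity model are assumptions for illustration; the patent's analysis of performance records is richer than this.

```python
def first_execution_time(utilization_by_time, capacity, allocation):
    """Return the earliest time slot whose free capacity covers the job's
    resource allocation (the availability threshold), or None if no slot
    in the observed window qualifies."""
    for t, used in enumerate(utilization_by_time):
        if capacity - used >= allocation:
            return t
    return None
```

With capacity 10 and per-slot utilization [9, 8, 4, 7], a job needing 5 units first fits at slot 2, where 10 - 4 = 6 units are free.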
CONTAINER SCHEDULING METHOD AND APPARATUS, AND NON-VOLATILE COMPUTER-READABLE STORAGE MEDIUM
A container scheduling method and apparatus, and a computer-readable storage medium, which relate to the technical field of computers. The method includes: according to a resource usage amount of a container set copy which has run, determining a predicted resource usage amount of a container set copy to be scheduled, wherein the type of the container set copy which has run is the same as the type of the container set copy to be scheduled; according to the predicted resource usage amount and a resource supply amount supported by each candidate node, determining a candidate node matching the container set copy to be scheduled; and scheduling the container set copy to be scheduled to the matched candidate node for running.
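The prediction-and-matching steps can be sketched as below. The max-of-history predictor and the first-fit node choice are illustrative assumptions; the patent does not specify these particular rules.

```python
def predict_usage(history_same_type):
    """Predict the resource usage of the copy to be scheduled from the
    observed usage of same-type copies that have already run (here,
    conservatively, their maximum)."""
    return max(history_same_type)

def match_node(predicted, node_supply):
    """Return the first candidate node whose resource supply amount
    covers the predicted usage, or None if no node matches."""
    for node, supply in node_supply.items():
        if supply >= predicted:
            return node
    return None
```

For a history of [2, 3] the prediction is 3, so a node offering 4 units matches while one offering 2 does not.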
MEMORY ALLOCATION METHOD, RELATED DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
This application provides a memory allocation method. The method includes: obtaining a computation graph corresponding to a neural network; sequentially allocating memory space to M pieces of tensor data based on a sorting result of the M pieces of tensor data, where if at least a part of the allocated memory space can be reused for one of the M pieces of tensor data, the at least a part of the memory space that can be reused for the tensor data is allocated to the tensor data, the allocated memory space is memory space that has been allocated to the M pieces of tensor data before the tensor data, the sorting result indicates a sequence of allocating memory space to the M pieces of tensor data, and the sorting result is related to information about each of the M pieces of tensor data.
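The reuse-first allocation order described above can be sketched greedily. Representing tensor lifetimes as a last-use step is an illustrative simplification of the computation-graph analysis; the sorting of the M tensors is taken as given input here.

```python
def allocate(tensors):
    """tensors: list of (name, size, end_step) in the sorted allocation
    order, where end_step is the last step at which the tensor is live.
    Reuses an already-allocated block when its size suffices and its
    previous tensor's lifetime has ended; otherwise allocates new space.
    Returns {name: offset} into one linear memory pool."""
    blocks = []        # (offset, block_size, free_after_step)
    offsets = {}
    next_offset = 0
    for step, (name, size, end) in enumerate(tensors):
        reused = False
        for i, (off, bsize, free_after) in enumerate(blocks):
            if step > free_after and bsize >= size:
                # Reuse previously allocated memory space.
                offsets[name] = off
                blocks[i] = (off, bsize, end)
                reused = True
                break
        if not reused:
            offsets[name] = next_offset
            blocks.append((next_offset, size, end))
            next_offset += size
    return offsets
```

In the test below, tensor "b" reuses "a"'s block because "a" dies at step 0, while "c" cannot reuse "b"'s still-live block and gets fresh space.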
Provisioning using pre-fetched data in serverless computing environments
A method for data provisioning a serverless computing cluster. A plurality of user defined functions (UDFs) are received for execution on worker nodes of the serverless computing cluster. For a first UDF, one or more data locations of UDF data needed to execute the first UDF are determined. At a master node of the serverless computing cluster, a plurality of worker node tickets are received, each ticket indicating a resource availability of a corresponding worker node. The one or more data locations and the plurality of worker node tickets are analyzed to determine eligible worker nodes capable of executing the first UDF. The master node transmits a pre-fetch command to one or more of the eligible worker nodes, causing each to become a provisioned worker node for the first UDF by storing pre-fetched first UDF data before the first UDF is assigned for execution.
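The master node's eligibility analysis can be sketched as intersecting the worker tickets with the UDF's data locations. The ticket and locality representations below are assumptions for illustration, not the patent's data model.

```python
def eligible_workers(tickets, data_locations, required):
    """tickets: {worker: free_resources} from worker node tickets;
    data_locations: set of workers already holding the UDF's data.
    Returns workers able to run the UDF, data-local ones first, so
    pre-fetch commands go to the best candidates."""
    eligible = [w for w, free in tickets.items() if free >= required]
    # Stable sort: workers that already hold the data sort first.
    return sorted(eligible, key=lambda w: w not in data_locations)
```

A pre-fetch command to the head of this list provisions a worker that either already holds the data or must fetch it before the UDF is assigned.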
METHOD AND SYSTEM FOR ALLOCATING GRAPHICS PROCESSING UNIT PARTITIONS FOR A COMPUTER VISION ENVIRONMENT
Techniques described herein relate to a method for allocating graphics processing unit partitions for a computer vision environment. The method includes obtaining, by a computer vision (CV) manager, an initial graphics processing unit (GPU) partition allocation request associated with a CV workload; in response to obtaining the initial GPU partition allocation request: obtaining CV workload information associated with the CV workload; obtaining first CV environment configuration information associated with the GPU partition allocation request; generating an optimal GPU partition allocation based on the first CV environment configuration information and the CV workload information using a GPU partition model; and initiating performance of the CV workload in a CV environment based on the optimal GPU partition allocation.
MONITORING ENGINE FOR MULTIPLE BLOCKCHAIN LEDGERS
Provided are systems and methods for auto-performing short-term investments on an intermittent basis via a crypto-network and returning the principal and interest before the cash is needed. As an example, the method may include installing a blockchain smart contract on a blockchain ledger with read access to content stored on the blockchain ledger, establishing a communication channel between a monitoring engine and the blockchain smart contract, configuring the monitoring engine to identify stop conditions from content stored on the blockchain ledger, monitoring the blockchain ledger for updates to its content via the communication channel, detecting a stop condition based on an update to the blockchain ledger via the monitoring engine, and, in response to the detected stop condition, transmitting a request to the crypto-exchange server to return the funds to the external data source.
DYNAMIC GPU-ENABLED VIRTUAL MACHINE PROVISIONING ACROSS CLOUD PROVIDERS
Systems and methods are provided for dynamic GPU-enabled VM provisioning across cloud service providers. An example method can include providing a VM pool that includes a GPU-optimized VM and a non-GPU-optimized VM operating in different clouds. A control plane can receive an indication that a user has submitted a machine-learning workload request, determine whether a GPU-optimized VM is available and instruct the non-GPU-optimized VM to send the workload to the GPU-optimized VM in a peer-to-peer manner. The GPU-optimized VM computes the workload and returns a result to the requesting VM. The control plane can instantiate a new GPU-optimized VM (or terminate it when the workload is complete) to dynamically maintain a desired number of available GPU-optimized VMs.
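The dispatch-and-replenish rule at the end of this abstract can be sketched as below. The counting logic and return values are assumptions for illustration; the control plane's actual instantiation API is not described at this level.

```python
def dispatch(gpu_vms_available, desired_pool_size):
    """Route a machine-learning workload to a GPU-optimized VM if one is
    available (otherwise queue it), then report how many new GPU-optimized
    VMs the control plane should instantiate to restore the desired pool."""
    if gpu_vms_available > 0:
        assigned = "gpu-vm"
        gpu_vms_available -= 1
    else:
        assigned = "queue"
    to_instantiate = max(0, desired_pool_size - gpu_vms_available)
    return assigned, to_instantiate
```

Assigning a workload consumes one available GPU VM, so the control plane instantiates replacements to keep the desired number of GPU-optimized VMs on hand.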