Patent classifications
G06F9/5083
SCALABLE SERVER-BASED WEB SCRIPTING WITH USER INPUT
Disclosed are techniques and apparatuses configured to receive an indication that a web browsing session executing on an enterprise server needs additional information, based on a request for that information being sent to a client device. The request may include an identifier of the web browsing session and an identifier of the enterprise server that initiated the session. A globally unique identifier for the web browsing session and an identifier of the enterprise server are stored in a common data store. The web browsing session may be paused when it requests additional information from a client device, and the client device may respond with the additional information. The system may provide the identifier of the enterprise server to a load balancing component so the identified web browsing session executing on that enterprise server may continue to be used.
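The pause/resume routing described in this abstract can be sketched roughly as follows. This is an illustrative Python sketch, not the patent's implementation: the in-memory dict standing in for the common data store and all function names are assumptions.

```python
import uuid

# Hypothetical "common data store": session GUID -> originating server id.
common_store = {}

def pause_session(server_id: str) -> str:
    """Record the paused session and its originating enterprise server,
    returning a globally unique identifier for the session."""
    session_id = str(uuid.uuid4())
    common_store[session_id] = server_id
    return session_id

def route_response(session_id: str, servers: dict) -> str:
    """Load-balancing step: look up the originating server so the client's
    response resumes the same paused session rather than landing on an
    arbitrary server in the pool."""
    server_id = common_store[session_id]
    return servers[server_id]

servers = {"srv-1": "10.0.0.1", "srv-2": "10.0.0.2"}
sid = pause_session("srv-2")
assert route_response(sid, servers) == "10.0.0.2"
```

The key design point the abstract hinges on is session affinity: the common store lets a stateless load balancer recover which server holds the paused session.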
CPU Resource Reservation Method and Apparatus, and Related Device Thereof
Provided are a Central Processing Unit (CPU) resource reservation method, apparatus, and device, and a computer-readable storage medium. The method includes: selecting a target working node according to a received Virtual Machine (VM) startup request; statistically obtaining the total number of virtual cores and the number of allocatable physical cores in the target working node; calculating an available CPU quota from the total number of virtual cores and the number of allocatable physical cores; and performing CPU resource reservation configuration on the target working node using the available CPU quota. With this method, CPU resources in a VM system may be reserved more flexibly and efficiently.
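The abstract does not state the exact quota formula, so the sketch below assumes one plausible reading: the allocatable physical cores are shared evenly across all virtual cores on the target working node. All names are illustrative.

```python
def available_cpu_quota(total_vcores: int, allocatable_pcores: int) -> float:
    """Physical-core share each virtual core may reserve (assumed formula)."""
    if total_vcores == 0:
        return float(allocatable_pcores)  # no VMs scheduled yet
    return allocatable_pcores / total_vcores

def reserve_cpu(node: dict) -> float:
    """Configure the target working node's CPU reservation from the quota."""
    quota = available_cpu_quota(node["total_vcores"], node["allocatable_pcores"])
    node["cpu_reservation"] = quota
    return quota

node = {"total_vcores": 16, "allocatable_pcores": 8}
assert reserve_cpu(node) == 0.5  # half a physical core per virtual core
```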
Accelerating large-scale image distribution
Methods and systems for deploying images to computing systems include predicting an environment for a plurality of processing nodes. Image deployment to the plurality of processing nodes is simulated to determine a subset of the plurality of processing nodes for deployment. One or more images are pre-loaded to the subset of the plurality of processing nodes in advance of a deployment time.
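The predict/simulate/pre-load pipeline can be sketched as below. This is a minimal assumed model, not the patented method: the "simulation" here is reduced to a capacity check over predicted free space, and all names are hypothetical.

```python
def simulate_deployment(predicted_free: dict, image_size: int, k: int) -> list:
    """Simulated deployment: keep the k nodes with the most predicted free
    capacity among those that can hold the image at all."""
    fits = [n for n, free in predicted_free.items() if free >= image_size]
    return sorted(fits, key=lambda n: -predicted_free[n])[:k]

def preload(subset: list, registry: dict, image: str) -> None:
    """Pre-load the image to the chosen subset ahead of deployment time."""
    for n in subset:
        registry.setdefault(n, set()).add(image)

predicted_free = {"n1": 40, "n2": 5, "n3": 25}  # GB, from environment prediction
subset = simulate_deployment(predicted_free, image_size=10, k=2)
assert subset == ["n1", "n3"]
```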
METHOD AND SYSTEM FOR PERFORMING DISTRIBUTED COMPUTER VISION WORKLOADS IN A COMPUTER VISION ENVIRONMENT
Techniques described herein relate to a method for managing a computer vision (CV) environment. The method includes: identifying a CV alert; generating a CV alert case associated with the CV alert; identifying nearby CV nodes of a plurality of CV nodes; transmitting the CV alert to the nearby CV nodes; and, for each of the nearby CV nodes: receiving the CV alert; determining, based on CV environment configuration information of the nearby CV node and the CV alert, whether to perform a distributed CV workload; and, when the determination is to perform the distributed CV workload: initiating performance of the distributed CV workload by the nearby CV node to generate CV data; updating the CV alert case using the CV data generated during performance of the distributed CV workload to obtain an updated CV alert case; and transmitting, by the nearby CV node to the VMS, the updated CV alert case.
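The per-node decision step can be sketched as a simple predicate. This is an assumed interpretation of "CV environment configuration information": here it is modeled as a set of supported workload types plus a capacity figure, neither of which is specified in the abstract.

```python
def should_perform(node_config: dict, alert: dict) -> bool:
    """Decide whether this nearby CV node should run the distributed CV
    workload for the alert (illustrative criteria: workload type is
    supported and the node has spare capacity for it)."""
    return (alert["type"] in node_config["supported_workloads"]
            and node_config["free_capacity"] >= alert["cost"])

alert = {"type": "intrusion", "cost": 2}
camera_node = {"supported_workloads": {"intrusion", "loitering"}, "free_capacity": 3}
assert should_perform(camera_node, alert)
```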
AUTOMATIC BACKUP DISTRIBUTION FOR CLUSTERED DATABASES
A data management platform may receive, from a user of the data management platform, a first job request to perform a backup of data from a data source to a database managed by the user. In some examples, the database may be configured as a set of database instances running on a set of computing nodes of a computing cluster. The data management platform may store a backup load indication that indicates which computing node is assigned to perform the backup of the data based on receiving the first job request. The data management platform may receive one or more second job requests subsequent to receiving the first job request and may determine a backup load for one or more computing nodes of the set of computing nodes. The data management platform may then assign one or more target computing nodes for performing the one or more second job requests.
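One natural reading of the assignment step is least-loaded scheduling over the stored backup load indications. The sketch below assumes that policy; the abstract itself does not commit to a specific rule.

```python
def assign_backup_node(backup_load: dict) -> str:
    """Assign the next backup job to the least-loaded computing node and
    update the stored backup load indication for that node."""
    target = min(backup_load, key=backup_load.get)
    backup_load[target] += 1
    return target

# Backup load per computing node, as tracked by the platform (illustrative).
load = {"node-a": 2, "node-b": 0, "node-c": 1}
assert assign_backup_node(load) == "node-b"  # second job request goes here
```

Subsequent job requests then spread across the cluster instead of piling onto the node already running the first backup.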
DYNAMIC GPU-ENABLED VIRTUAL MACHINE PROVISIONING ACROSS CLOUD PROVIDERS
Systems and methods are provided for dynamic GPU-enabled VM provisioning across cloud service providers. An example method can include providing a VM pool that includes a GPU-optimized VM and a non-GPU-optimized VM operating in different clouds. A control plane can receive an indication that a user has submitted a machine-learning workload request, determine whether a GPU-optimized VM is available and instruct the non-GPU-optimized VM to send the workload to the GPU-optimized VM in a peer-to-peer manner. The GPU-optimized VM computes the workload and returns a result to the requesting VM. The control plane can instantiate a new GPU-optimized VM (or terminate it when the workload is complete) to dynamically maintain a desired number of available GPU-optimized VMs.
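The control-plane behavior in this abstract amounts to a reconciliation loop over the VM pool. The sketch below is an assumed simplification: VMs are plain dicts, and "available" means a GPU-optimized VM that is not busy.

```python
def reconcile_gpu_pool(pool: list, desired_available: int) -> int:
    """Control-plane sketch: positive result = GPU-optimized VMs to
    instantiate, negative = surplus VMs to terminate, to hold the pool
    at the desired number of available GPU-optimized VMs."""
    available = sum(1 for vm in pool if vm["gpu"] and not vm["busy"])
    return desired_available - available

def dispatch(pool: list, workload: str):
    """Send the ML workload peer-to-peer to an available GPU-optimized VM;
    return None if the control plane must first instantiate one."""
    for vm in pool:
        if vm["gpu"] and not vm["busy"]:
            vm["busy"] = True
            vm["workload"] = workload
            return vm
    return None

pool = [{"gpu": True, "busy": False}, {"gpu": False, "busy": False}]
assert reconcile_gpu_pool(pool, desired_available=2) == 1  # one short
```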
Compute cluster preemption within a general-purpose graphics processing unit
Embodiments described herein provide techniques that enable a graphics processor to continue processing operations during the reset of a compute unit that has experienced a hardware fault. Threads and associated context state for the faulted compute unit can be migrated to another compute unit of the graphics processor, and the faulted compute unit can be reset while processing operations continue.
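The migrate-then-reset flow is a hardware mechanism, but its control logic can be simulated. The sketch below is purely illustrative: compute units are modeled as dicts, and the choice of migration target (first healthy unit) is an assumption.

```python
def handle_fault(units: dict, faulted: str) -> str:
    """Simulation sketch: migrate the faulted compute unit's threads and
    context state to a healthy unit, then mark the faulted unit as under
    reset while the rest of the processor keeps working."""
    target = next(u for u, s in units.items() if u != faulted and s["healthy"])
    units[target]["threads"].extend(units[faulted]["threads"])
    units[faulted]["threads"] = []
    units[faulted]["healthy"] = False  # reset in progress
    return target

units = {
    "cu0": {"healthy": True, "threads": ["t0", "t1"]},
    "cu1": {"healthy": True, "threads": ["t2"]},
}
assert handle_fault(units, "cu0") == "cu1"
```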
EDGE FUNCTION BURSTING
One example method includes: determining that local resources at an edge site are inadequate to support performance of a function needed by software running on the edge site; invoking a client agent; in response to invoking the client agent, receiving an execution manifest; determining, by the client agent, where to execute the function, wherein the determining comprises identifying a target execution environment for the function and is based in part on information contained in the execution manifest; and transmitting, by the client agent, the execution manifest to a server agent of the target execution environment, the execution manifest facilitating execution of the function in the target execution environment.
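The client agent's placement decision can be sketched with an assumed manifest shape. The `requirements` and `fallback_targets` fields below are invented for illustration; the abstract only says the manifest informs the decision.

```python
def pick_target_environment(manifest: dict, local_free: dict) -> str:
    """Client-agent sketch: run locally if the edge site's free resources
    meet the function's requirements; otherwise burst to the first
    fallback target named in the execution manifest."""
    req = manifest["requirements"]
    if all(local_free.get(k, 0) >= v for k, v in req.items()):
        return "local"
    return manifest["fallback_targets"][0]

manifest = {
    "requirements": {"cpu": 4, "mem_gb": 8},       # hypothetical fields
    "fallback_targets": ["cloud-east", "peer-edge"],
}
assert pick_target_environment(manifest, {"cpu": 2, "mem_gb": 16}) == "cloud-east"
```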
HYBRID COMPUTING SYSTEM MANAGEMENT
A method, a system, and a computer program product for hybrid computing system management are proposed. In the method, workload information associated with a set of application server instances running in a first computing system is obtained by a server controller in response to a scaling request, from a request controller, for changing the number of instances in the set of application server instances. The set of application server instances serves at least one application running in a second computing system. A scaling decision indicating whether to change the number of instances in the set of application server instances is determined by a predictor based on the workload information from the server controller. Based on the scaling decision, the request controller enables the second computing system to handle requests associated with the at least one application for the set of application server instances.
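The predictor's scaling decision can be sketched as a threshold rule over the workload information. The thresholds and the utilization-based criterion are assumptions; the abstract leaves the predictor unspecified.

```python
def scaling_decision(utilizations: list, high: float = 0.8, low: float = 0.3) -> int:
    """Threshold sketch of the predictor: +1 = add an instance,
    -1 = remove one, 0 = keep the current number of instances."""
    avg = sum(utilizations) / len(utilizations)
    if avg > high:
        return 1
    if avg < low:
        return -1
    return 0

# Per-instance utilization reported by the server controller (illustrative).
assert scaling_decision([0.9, 0.85]) == 1   # overloaded: scale out
assert scaling_decision([0.1, 0.2]) == -1   # idle: scale in
```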
DISTRIBUTED STORAGE SYSTEM AND VOLUME MANAGEMENT METHOD
In a distributed storage system that has a plurality of computer nodes, each having processors and a storage drive, and that provides a volume, each of the plurality of computer nodes provides a sub-volume, and the processor of each computer node manages the settings of that node's sub-volume. The volume can be configured from a plurality of sub-volumes provided by the plurality of computer nodes, and the sub-volumes include a plurality of logical storage areas formed by allocating physical storage areas of the storage drive. The plurality of computer nodes move the logical storage areas between sub-volumes that belong to the same volume and are provided by different computer nodes.
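Moving logical storage areas between sub-volumes of the same volume suggests a rebalancing step, sketched below under assumed criteria (move one area from the sub-volume holding the most areas to the one holding the fewest). The data model is illustrative, not the patent's.

```python
def rebalance(subvols: dict):
    """One rebalance step for a volume: move a logical storage area from
    the most-loaded sub-volume to the least-loaded one, or return None
    if the sub-volumes are already balanced."""
    donor = max(subvols, key=lambda n: len(subvols[n]))
    recipient = min(subvols, key=lambda n: len(subvols[n]))
    if len(subvols[donor]) - len(subvols[recipient]) <= 1:
        return None  # already balanced within one area
    area = subvols[donor].pop()
    subvols[recipient].append(area)
    return (donor, recipient, area)

# Sub-volumes of one volume, keyed by providing computer node (illustrative).
subvols = {"node1": ["a", "b", "c"], "node2": ["d"]}
assert rebalance(subvols) == ("node1", "node2", "c")
```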