G06F2209/5011

Container Orchestration System
20230222006 · 2023-07-13

The present disclosure provides a system for coordinating the distribution of resource instances (RIs), e.g., Kubernetes nodes, that belong to different infrastructure providers offering RIs at different locations. Each infrastructure provider provides one or more RIs, and the RIs provided by a given infrastructure provider can be spread over multiple locations. Several Kubernetes master nodes are deployed to manage the RIs spread among multiple infrastructure providers and multiple locations.
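The coordination described above can be pictured with a minimal sketch (not from the patent; class and field names are hypothetical): a registry records resource instances from any provider and location, and assigns each one to a master node, here simply round-robin.

```python
# Illustrative sketch: track resource instances (RIs) across providers and
# locations, and assign each RI to one of several master nodes.
from collections import defaultdict
from itertools import cycle

class OrchestrationRegistry:
    def __init__(self, masters):
        self.masters = masters
        self._assign = cycle(masters)          # round-robin master assignment
        self.by_master = defaultdict(list)     # master -> list of managed RIs

    def register(self, provider, location, instance_id):
        """Record an RI from any provider/location and pick a master for it."""
        master = next(self._assign)
        self.by_master[master].append((provider, location, instance_id))
        return master

reg = OrchestrationRegistry(["master-a", "master-b"])
m1 = reg.register("provider-1", "us-east", "node-1")
m2 = reg.register("provider-2", "eu-west", "node-2")
```

A real system would pick masters by locality or load rather than round-robin; the sketch only shows the shape of the mapping.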

Data query method, apparatus and device

A method including: obtaining resource overheads according to feature information of a received query request; dynamically adjusting a compute node in a resource pool according to the resource overheads and a compute node resource; and querying, by using the compute node, data corresponding to the query request. Because the compute nodes in the resource pool may be dynamically adjusted, they can process all received query requests, which improves their processing efficiency and resource utilization rate. The compute nodes can perform parallel processing on multiple query requests more efficiently, and the utilization rates of CPU, memory, and network bandwidth resources are increased. This achieves a better balance between overall computing resources and user query load, and improves the user experience.
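The adjustment loop can be sketched as follows. This is a toy model, not the patent's method: the cost coefficients, node capacity, and function names are all assumptions made for illustration.

```python
# Hypothetical sketch: estimate the resource overhead of each query from its
# feature information, then size the compute-node pool so total capacity
# covers the pending overhead.
def estimate_overhead(features):
    # toy cost model: scanned rows and join count drive the estimate
    return features["rows"] * 0.001 + features["joins"] * 50

def adjust_pool(pending_overheads, node_capacity=100, max_nodes=32):
    """Return the target number of compute nodes for the pending workload."""
    needed = sum(pending_overheads)
    return max(1, min(max_nodes, -(-int(needed) // node_capacity)))  # ceil

overheads = [estimate_overhead({"rows": 50_000, "joins": 2}),
             estimate_overhead({"rows": 200_000, "joins": 1})]
target_nodes = adjust_pool(overheads)
```

The pool manager would then add or remove nodes until the pool matches `target_nodes`, and route the queries to them for parallel processing.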

HARDWARE RESOURCE MANAGEMENT FOR MANAGEMENT APPLIANCES RUNNING ON A SHARED CLUSTER OF HOSTS

A method of reserving hardware resources for management appliances of a software-defined data center (SDDC) that have been deployed onto one or more hosts of a cluster of hosts includes: reserving hardware resources of the cluster for a resource pool that has been created for the management appliances, the hardware resources including at least processor resources and memory resources of the hosts; and assigning the management appliances to the resource pool created for them. The management appliances share the hardware resources of the cluster with one or more other resource pools and, after the reserving and assigning steps, are allocated at least the hardware resources that have been reserved for their resource pool.
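A minimal sketch of the reserve-and-assign steps, under assumed semantics (the class names, units, and capacity figures are illustrative, not from the patent):

```python
# Illustrative sketch: carve a CPU/memory reservation out of a cluster's
# unreserved capacity for a management-appliance resource pool, then assign
# appliances to that pool.
class ResourcePool:
    def __init__(self, name, cpu_mhz, mem_mb):
        self.name, self.cpu_mhz, self.mem_mb = name, cpu_mhz, mem_mb
        self.members = []

class Cluster:
    def __init__(self, total_cpu_mhz, total_mem_mb):
        self.free_cpu, self.free_mem = total_cpu_mhz, total_mem_mb
        self.pools = {}

    def reserve_pool(self, name, cpu_mhz, mem_mb):
        if cpu_mhz > self.free_cpu or mem_mb > self.free_mem:
            raise ValueError("insufficient unreserved capacity")
        self.free_cpu -= cpu_mhz
        self.free_mem -= mem_mb
        pool = ResourcePool(name, cpu_mhz, mem_mb)
        self.pools[name] = pool
        return pool

cluster = Cluster(total_cpu_mhz=20_000, total_mem_mb=65_536)
mgmt = cluster.reserve_pool("mgmt-appliances", cpu_mhz=4_000, mem_mb=16_384)
mgmt.members.append("vcenter-appliance")
```

Other pools share the remaining unreserved capacity, while members of the management pool are guaranteed at least the reserved amounts.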

SYSTEM AND METHOD FOR REMOTELY INTERACTING WITH CLOUD-BASED CLIENT APPLICATIONS
20230216933 · 2023-07-06

Systems and methods for enabling various devices to remotely interact with cloud-based client applications are provided. A method comprises: receiving a first request from a first client device of a user to initiate an interactive session with a cloud-based client application; reserving an application engine for executing the cloud-based client application remotely from the first client device; receiving interaction data from the first client device as the user engages with first media data associated with the cloud-based client application; modifying the cloud-based client application executing within the reserved application engine based on the interaction data received from the first client device; receiving a second request from the first client device to end the interactive session; and deallocating the reserved application engine, wherein the reserved application engine is delinked from the first client device.
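The session lifecycle reads as reserve, interact, modify, end, deallocate. A hypothetical broker sketch (names and data shapes assumed for illustration):

```python
# Illustrative sketch: a broker reserves an application engine per session,
# applies interaction data to it, and delinks/deallocates it on session end.
class EngineBroker:
    def __init__(self, engines):
        self.idle = list(engines)
        self.sessions = {}            # device_id -> reserved engine

    def start_session(self, device_id):
        engine = self.idle.pop()      # reserve an engine for this device
        self.sessions[device_id] = engine
        return engine

    def apply_interaction(self, device_id, event):
        # modify the application state in the reserved engine
        self.sessions[device_id].setdefault("events", []).append(event)

    def end_session(self, device_id):
        engine = self.sessions.pop(device_id)   # delink from the device
        self.idle.append(engine)                # deallocate back to the pool

broker = EngineBroker([{"id": "engine-1"}])
broker.start_session("device-1")
broker.apply_interaction("device-1", "click")
broker.end_session("device-1")
```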

Node recovery solution for composable and disaggregated environment

In an aspect of the disclosure, a method, a computer-readable medium, and an apparatus are provided. The apparatus may be a pod manager. The pod manager receives a request for composing a target composed-node. The pod manager employs a first set of pooled hardware resources of the computing pod to build the target composed-node. The pod manager determines to reserve a second set of pooled hardware resources of the computing pod for a backup node of the target composed-node. When the pod manager determines that the target composed-node has failed, it employs the second set of pooled hardware resources to build the backup node.
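The recovery flow above can be sketched as follows (class and resource names are assumptions, not the patent's):

```python
# Illustrative sketch: compose a node from one set of pooled resources,
# reserve a second set for its backup, and rebuild from the reserved set
# when the target node fails.
class PodManager:
    def __init__(self, pooled_resources):
        self.free = set(pooled_resources)
        self.reserved = {}            # node name -> reserved backup resources

    def compose(self, node, resources):
        self.free -= set(resources)
        return {"node": node, "resources": set(resources), "healthy": True}

    def reserve_backup(self, node, resources):
        self.free -= set(resources)
        self.reserved[node] = set(resources)

    def recover(self, composed):
        if not composed["healthy"]:
            backup = self.reserved.pop(composed["node"])
            return self.compose(composed["node"] + "-backup", backup)
        return composed

pm = PodManager({"cpu1", "cpu2", "mem1", "mem2"})
target = pm.compose("node-a", {"cpu1", "mem1"})
pm.reserve_backup("node-a", {"cpu2", "mem2"})
target["healthy"] = False             # simulate the detected failure
rebuilt = pm.recover(target)
```

Reserving the backup set up front trades idle capacity for fast recovery: no allocation has to succeed at failure time.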

Virtual computing systems and methods

A computer system (10) for providing virtual computers includes a pool facility (38) for storing a pool (40) of suspended virtual computers (42) based on at least one virtual computer template (44). A provision manager (32) provides a series (46) of virtual computers (18) in response to a series (50) of system logon requests by a user (54). The provision manager (32) includes an update facility (100), a resume facility (102) and a customization facility (104). The update facility (100) is provided for updating the at least one virtual computer template (44). The resume facility (102) is provided for resuming virtual computers from the pool (40) of suspended virtual computers (42) provided by the pool facility (38). The customization facility (104) is provided for customizing virtual computers after they are resumed from the pool (40), to provide active virtual computers.
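A minimal sketch of the resume-then-customize flow, assuming a simple dict representation for a virtual computer (names are illustrative):

```python
# Illustrative sketch: keep a pool of suspended clones built from a template;
# on logon, resume one and customize it into an active virtual computer.
class VirtualComputerPool:
    def __init__(self, template, size):
        self.suspended = [{"template": template, "state": "suspended"}
                          for _ in range(size)]

    def resume(self):
        vm = self.suspended.pop()
        vm["state"] = "resumed"
        return vm

def customize(vm, user):
    """Personalize a resumed virtual computer for the logging-on user."""
    vm.update(owner=user, state="active")
    return vm

pool = VirtualComputerPool(template="base-template", size=2)
vm = customize(pool.resume(), user="alice")
```

Resuming a pre-built suspended clone is much faster than provisioning from scratch, which is the point of keeping the pool warm.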

Work provenance in computing pools

A system and method for participating in and operating a distributed computing pool are disclosed. Computing pools combine computational resources from a plurality of computing devices over a network by splitting jobs into smaller jobs and distributing those smaller jobs to the computing devices, so that the smaller jobs can be solved in parallel with little or no overlap in the work performed. The computing devices attempt to find solutions to the smaller jobs. Solutions found are signed and submitted back to the pool. The pool uses the signature to confirm the true origin of the solution and that the solution has not been tampered with.
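The sign-and-verify step can be illustrated with an HMAC standing in for the patent's (unspecified) signature scheme; a production system would more likely use per-worker asymmetric keys so the pool need not hold signing secrets.

```python
# Illustrative sketch: a worker signs its solution with a key shared with
# the pool; on submission the pool verifies origin and integrity.
import hashlib
import hmac

def sign_solution(worker_key: bytes, solution: bytes) -> bytes:
    return hmac.new(worker_key, solution, hashlib.sha256).digest()

def verify_submission(worker_key: bytes, solution: bytes, tag: bytes) -> bool:
    expected = hmac.new(worker_key, solution, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

key = b"worker-7-shared-key"
tag = sign_solution(key, b"solution-payload")
ok = verify_submission(key, b"solution-payload", tag)
tampered = verify_submission(key, b"altered-payload", tag)
```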

Dynamic size of static SLC cache
11693700 · 2023-07-04

Apparatus and methods are disclosed, including using a memory controller to track a maximum logical saturation over the lifespan of the memory device, where logical saturation is the percentage of capacity of the memory device written with data. A portion of a pool of memory cells of the memory device is reallocated from single level cell (SLC) static cache to SLC dynamic cache storage based at least in part on a value of the maximum logical saturation, the reallocating including writing at least one electrical state to a register, in some examples.
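A toy model of the reallocation rule (the threshold, block counts, and "move half" policy are assumptions for illustration; the patent only ties the reallocation to the maximum logical saturation value):

```python
# Illustrative sketch: track the maximum logical saturation seen over the
# device's life; once it crosses a threshold, reallocate blocks from static
# SLC cache to dynamic SLC cache.
class SlcCacheManager:
    def __init__(self, static_blocks=64, dynamic_blocks=0, threshold_pct=70):
        self.static_blocks = static_blocks
        self.dynamic_blocks = dynamic_blocks
        self.threshold_pct = threshold_pct
        self.max_saturation = 0           # lifetime maximum, in percent

    def observe_saturation(self, pct):
        self.max_saturation = max(self.max_saturation, pct)
        if self.max_saturation >= self.threshold_pct and self.static_blocks:
            moved = self.static_blocks // 2   # example policy: move half
            self.static_blocks -= moved
            self.dynamic_blocks += moved

mgr = SlcCacheManager()
mgr.observe_saturation(40)    # below threshold: no change
mgr.observe_saturation(75)    # lifetime max now 75%: reallocate
```

In the actual device the reallocation is committed by writing an electrical state to a register, per the abstract; the sketch only models the decision.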

Method and system for disaster recovery of a regional cloud-based desktop fabric

A system and method for ensuring the availability of virtual desktops in a cloud-based system. The system includes a primary regional datacenter having a primary desktop pool accessible by a desktop client providing access to a desktop to a desktop user. A secondary regional datacenter includes a secondary desktop pool. A control plane orchestrates communication between the desktop client and the regional datacenters. The control plane creates a copy of the desktop from the primary regional datacenter. The control plane performs an activation procedure when a disaster event occurs. The activation procedure includes creating the desktop in the secondary desktop pool from the copy. The activation procedure also directs the desktop client to the secondary desktop pool to access the desktop from the secondary regional datacenter. A deactivation procedure directs the desktop client from the secondary desktop pool back to the primary desktop pool once availability of the desktops there is reestablished.
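A hedged sketch of the control plane's failover path (class names and identifiers are hypothetical):

```python
# Illustrative sketch: the control plane keeps a copy of each primary
# desktop; on a disaster event it creates the desktop in the secondary pool
# from the copy and redirects the client there.
class ControlPlane:
    def __init__(self):
        self.primary, self.secondary, self.copies = {}, {}, {}
        self.routes = {}   # client -> (pool name, desktop id)

    def provision(self, client, desktop_id):
        self.primary[desktop_id] = {"id": desktop_id}
        self.copies[desktop_id] = dict(self.primary[desktop_id])  # DR copy
        self.routes[client] = ("primary", desktop_id)

    def activate_disaster_recovery(self, client):
        _, desktop_id = self.routes[client]
        self.secondary[desktop_id] = dict(self.copies[desktop_id])
        self.routes[client] = ("secondary", desktop_id)

    def deactivate(self, client):
        _, desktop_id = self.routes[client]
        self.routes[client] = ("primary", desktop_id)   # fail back

cp = ControlPlane()
cp.provision("client-1", "desktop-42")
cp.activate_disaster_recovery("client-1")
```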

Routing network using global address map with adaptive main memory expansion for a plurality of home agents
11693805 · 2023-07-04

An adaptive memory expansion scheme is proposed, where one or more memory expansion capable Hosts or Accelerators can have their memory mapped to one or more memory expansion devices. The embodiments describe discovery, configuration, and mapping schemes that allow independent storage-class memory (SCM) implementations and CPU-Host implementations to match their memory expansion capabilities. As a result, a memory expansion host (e.g., a memory controller in a CPU or an Accelerator) can declare multiple logical memory expansion pools, each with a unique capacity. These logical memory pools can be matched to physical memory in the SCM cards using windows in a global address map. These windows represent shared memory for the Home Agents (HAs) (e.g., the Host) and the Slave Agents (SAs) (e.g., the memory expansion device).
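The window layout can be illustrated with a small sketch (the base address, pool names, and sizes are assumptions, not values from the patent):

```python
# Illustrative sketch: a host declares logical memory-expansion pools with
# unique capacities; each pool is matched to a half-open window in a global
# address map, laid out back to back from a base address.
def build_address_map(logical_pools, base=0x1_0000_0000):
    """logical_pools: dict of pool name -> capacity in bytes."""
    windows, cursor = {}, base
    for name, capacity in logical_pools.items():
        windows[name] = (cursor, cursor + capacity)   # [start, end) window
        cursor += capacity
    return windows

pools = {"pool-a": 4 * 2**30, "pool-b": 8 * 2**30}    # 4 GiB and 8 GiB
amap = build_address_map(pools)
```

Each window would then be backed by physical capacity on an SCM card, giving the HAs and SAs a shared view of the expanded memory.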