
Configurable NVM set to tradeoff between performance and user space
11567862 · 2023-01-31

An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to determine a set of requirements for a persistent storage media based on input from an agent, dedicate one or more banks of the persistent storage media to the agent based on the set of requirements, and configure at least one of the dedicated one or more banks of the persistent storage media at a program mode width which is narrower than a native maximum program mode width for the persistent storage media. Other embodiments are disclosed and claimed.
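A minimal sketch of the claimed logic, assuming a low-latency requirement is what drives the narrower program mode; the `Bank` structure, field names, and the 4-bit native width (e.g., QLC flash) are illustrative assumptions, not from any real NVM API.

```python
from dataclasses import dataclass
from typing import Optional

NATIVE_MAX_WIDTH_BITS = 4  # e.g., QLC flash: 4 bits programmed per cell

@dataclass
class Bank:
    bank_id: int
    owner: Optional[str] = None
    program_width_bits: int = NATIVE_MAX_WIDTH_BITS

def dedicate_banks(banks, agent, requirements):
    """Dedicate free banks to the agent; a low-latency requirement selects a
    program mode narrower than the native maximum (trading capacity for speed)."""
    width = 1 if requirements.get("low_latency") else NATIVE_MAX_WIDTH_BITS
    granted = []
    for bank in banks:
        if bank.owner is None and len(granted) < requirements["num_banks"]:
            bank.owner = agent
            bank.program_width_bits = width
            granted.append(bank)
    return granted
```

Programming fewer bits per cell is the capacity-for-performance tradeoff the title refers to: a bank configured at 1 bit per cell stores a quarter of its native capacity but programs and reads faster.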

ALLOCATING RESOURCES FOR NETWORK FUNCTION VIRTUALIZATION

Controlling allocation of resources in network function virtualization. Data defining a pool of available physical resources is maintained. Data defining one or more resource allocation rules is identified. An application request is received. Physical resources from the pool are allocated to virtual resources to implement the application request, on the basis of the maintained data, the identified data and the received application request.
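The three inputs named in the abstract (pool data, rule data, application request) can be sketched as a single allocation pass; the rule-as-predicate representation and all field names are assumptions for illustration.

```python
def allocate(pool, rules, request):
    """Map each requested virtual resource to a physical resource from the
    pool that satisfies every allocation rule; fail if none is available."""
    allocation = {}
    available = list(pool)
    for vres in request["virtual_resources"]:
        for pres in available:
            if all(rule(vres, pres) for rule in rules):
                allocation[vres] = pres
                available.remove(pres)
                break
        else:
            raise RuntimeError(f"no physical resource satisfies rules for {vres}")
    return allocation
```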

SYSTEMS AND METHODS WITH INTEGRATED MEMORY POOLING AND DIRECT SWAP CACHING
20230229498 · 2023-07-20

Systems and methods related to integrated memory pooling and direct swap caching are described. A system includes a compute node comprising a local memory and a pooled memory. The system further includes a host operating system (OS) having initial access to: (1) a first swappable range of memory addresses associated with the local memory and a non-swappable range of memory addresses associated with the local memory, and (2) a second swappable range of memory addresses associated with the pooled memory. The system further includes a data-mover offload engine configured to perform a cleanup operation, including: (1) restore a state of any memory content swapped-out from a memory location within the first swappable range of memory addresses to the pooled memory, and (2) move from the local memory any memory content swapped-in from a memory location within the second swappable range of memory addresses back out to the pooled memory.
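The cleanup operation undoes both swap directions at once, which can be modeled with memory lines exchanged in pairs; representing memory as address-to-content maps and tracking swaps as (local, pooled) address pairs are illustrative assumptions.

```python
def cleanup(local, pooled, swapped_pairs):
    """swapped_pairs: list of (local_addr, pooled_addr) whose contents were
    exchanged by direct swap caching. Restore each pair so local content is
    back in local memory and pooled content is back out in the pool."""
    for local_addr, pooled_addr in swapped_pairs:
        local[local_addr], pooled[pooled_addr] = (
            pooled[pooled_addr], local[local_addr])
    swapped_pairs.clear()
    return local, pooled
```

Because a direct swap exchanges exactly two lines, re-exchanging each recorded pair simultaneously performs both cleanup steps: the swapped-out local content is restored, and the swapped-in pooled content is moved back out.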

EDGE ARTIFICIAL INTELLIGENCE (AI) COMPUTING IN A TELECOMMUNICATIONS NETWORK

Disclosed herein is the integration, into edge nodes of a telecommunications network system, of client computer systems and server computer systems, where the server computer system includes a pool of shareable accelerators and the client computer system runs an application program that is assisted by the pool of accelerators. The edge nodes connect to user equipment, and some of the user equipment can themselves act as one of the client computer systems. In some embodiments, the accelerators are GPUs, and in other embodiments, the accelerators are artificial intelligence accelerators.
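The shareable pool at the server side can be sketched as a thread-safe checkout structure; the class and method names are illustrative, and the fallback behavior when the pool is exhausted is an assumption.

```python
import threading

class AcceleratorPool:
    """Pool of shareable accelerators at an edge node; clients (including
    user equipment acting as clients) borrow one per assisted call."""

    def __init__(self, accelerators):
        self._free = list(accelerators)
        self._lock = threading.Lock()

    def acquire(self):
        with self._lock:
            if not self._free:
                return None  # caller may queue, retry, or fall back locally
            return self._free.pop()

    def release(self, acc):
        with self._lock:
            self._free.append(acc)
```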

ON-BOARDING VIRTUAL INFRASTRUCTURE MANAGEMENT SERVER APPLIANCES TO BE MANAGED FROM THE CLOUD

A method of on-boarding a virtual infrastructure management (VIM) server appliance in which VIM software for locally managing a software-defined data center (SDDC) is installed, to enable the VIM server appliance to be centrally managed through a cloud service includes upgrading the VIM server appliance from a current version to a higher version that supports communication with agents of the cloud service, modifying configurations of the upgraded VIM server appliance according to a prescriptive configuration required by the cloud service, and deploying a gateway appliance for running the agents of the cloud service that communicate with the cloud service and the upgraded VIM server appliance.
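The three on-boarding steps (upgrade, reconfigure, deploy gateway) can be sketched in order; every version string, config key, and field name below is hypothetical.

```python
def on_board(appliance, cloud_service):
    # 1. Upgrade to a version that supports the cloud service's agents.
    if appliance["version"] < cloud_service["min_version"]:
        appliance["version"] = cloud_service["min_version"]
    # 2. Apply the prescriptive configuration required by the cloud service.
    appliance["config"].update(cloud_service["prescriptive_config"])
    # 3. Deploy a gateway appliance that runs the cloud service's agents and
    #    talks to both the cloud service and the upgraded appliance.
    gateway = {
        "agents": list(cloud_service["agents"]),
        "connects_to": [cloud_service["endpoint"], appliance["name"]],
    }
    return appliance, gateway
```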

Method, device and computer program product for shrinking storage space

Techniques for shrinking a storage space involve determining a used storage space in a storage pool allocated to a plurality of file systems, and determining a usage level of the storage pool based on the used storage space and the storage capacity of the storage pool. The techniques further involve shrinking storage space from one or more of the plurality of file systems based on the usage level of the storage pool. Such techniques may automatically shrink storage space in one or more file systems from the global level of the storage pool, determining an auto shrink strategy according to the overall state of the storage pool, thereby improving the efficiency of auto shrink and balancing system performance against space savings.
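A sketch of the global decision: compute the pool's usage level, then shrink only when the pool is underused. The 50% watermark and the shrink-all-slack policy are invented for illustration.

```python
def auto_shrink(filesystems, pool_capacity, low_watermark=0.5):
    """Shrink unused allocation from file systems when the pool-level usage
    (used space / capacity) falls below the watermark."""
    used = sum(fs["used"] for fs in filesystems)
    usage_level = used / pool_capacity
    shrunk = []
    if usage_level < low_watermark:
        for fs in filesystems:
            slack = fs["allocated"] - fs["used"]
            if slack > 0:
                fs["allocated"] -= slack  # return unused space to the pool
                shrunk.append(fs["name"])
    return usage_level, shrunk
```

Driving the decision from the pool-level usage, rather than per-filesystem thresholds, is what the abstract means by shrinking "from the global level of the storage pool".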

Controlling placement of virtualized resources based on desirability index for host pools

Various techniques for managing heat and backwards-incompatible updates in cloud-based networks are described. In an example method, a virtualized resource is identified. At least one first host may include an updated version of an element and at least one second host may include a previous version of the element. The updated version may be incompatible with the previous version. A first desirability index corresponding to the at least one first host may be less than a second desirability index corresponding to the at least one second host. The virtualized resource may be live-migrated from a source host to a target host among the at least one first host.
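The placement step can be sketched as filtering candidates to hosts running the required (updated) element version, then ranking the survivors by desirability index; the field names are assumptions.

```python
def pick_target(hosts, required_version):
    """Return the most desirable host running the required element version,
    or None if no host runs it. Hosts on the incompatible previous version
    are excluded even if their desirability index is higher."""
    candidates = [h for h in hosts if h["element_version"] == required_version]
    if not candidates:
        return None
    return max(candidates, key=lambda h: h["desirability"])
```

Note the ordering matches the abstract: compatibility with the updated element is a hard constraint, so an updated host can be chosen as the target even when its desirability index is lower than that of hosts still on the previous version.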

Systems and methods for resource-based scheduling of commands
11704058 · 2023-07-18

A system and method for scheduling commands for processing by a storage device. A command is received from an application and stored in a first queue. Information is obtained on a first set of resources managed by the storage device. A second set of resources is synchronized based on the information on the first set of resources. The second set of resources is allocated into a first pool and a second pool. A condition of the second set of resources in the first pool is determined. One of the second set of resources in the first pool is allocated to the command based on a first determination of the condition, and one of the second set of resources in the second pool is allocated to the command based on a second determination of the condition.
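A sketch of the two-pool scheme: the host keeps a synchronized mirror of the device's resources, splits it into two pools, and satisfies a command from the first pool while its condition holds, else from the second. The condition (first pool non-empty) and the split point are illustrative assumptions.

```python
from collections import deque

def make_pools(device_resources, first_pool_size):
    """Synchronize a host-side set of resources with the device's, then
    split it into a first and a second pool."""
    synced = list(device_resources)  # host-side mirror of device state
    return deque(synced[:first_pool_size]), deque(synced[first_pool_size:])

def allocate_for_command(first_pool, second_pool):
    """Allocate a resource to a queued command: from the first pool when the
    condition holds (a free resource remains there), else from the second."""
    if first_pool:
        return first_pool.popleft()
    return second_pool.popleft()
```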

Quantum computing service supporting multiple quantum computing technologies

A quantum computing service includes connections to multiple quantum hardware providers that are configured to execute quantum circuits using quantum computers based on different quantum technologies. The quantum computing service enables a customer to define a quantum algorithm/circuit in an intermediate representation and select from any of a plurality of supported quantum computing technologies to be used to execute the quantum algorithm/quantum circuit.
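The dispatch the abstract describes (intermediate representation in, customer-selected technology out) can be sketched as a provider registry; the technology names and the string-in/string-out provider interface are invented for illustration.

```python
# Registry of per-technology backends, keyed by the customer's selection.
PROVIDERS = {
    "superconducting": lambda ir: f"executed {ir} on superconducting hardware",
    "ion-trap": lambda ir: f"executed {ir} on ion-trap hardware",
}

def run_circuit(ir, technology):
    """Execute a circuit given in an intermediate representation on the
    customer-selected quantum computing technology."""
    if technology not in PROVIDERS:
        raise ValueError(f"unsupported technology: {technology}")
    return PROVIDERS[technology](ir)
```

Keeping the circuit in an intermediate representation until dispatch is what lets the same definition target any of the supported technologies.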

Systems and methods for dynamic job performance in secure multiparty computation

Disclosed herein are systems and methods for dynamic job performance in secure multiparty computation (SMPC). The method may comprise receiving an SMPC query that indicates a processing job to be performed on a data input. The method may split the data input to generate a plurality of partial data inputs, based on parameters and the query type of the SMPC query. The method may generate a plurality of jobs to perform on the plurality of partial data inputs and determine a combined result of the processing job. The method may adjust the number of worker processes in a worker pool based on at least one of: required computation, time of day, date, financial costs, power consumption, and available network bandwidth.
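The split/combine flow and the pool-sizing step can be sketched for a simple additive query; the even-split policy, the sum query, and the scaling rule's weights are all invented for illustration (the policy inputs themselves come from the abstract).

```python
def split_input(data, num_parts):
    """Split the data input into roughly equal partial data inputs."""
    k, r = divmod(len(data), num_parts)
    parts, start = [], 0
    for i in range(num_parts):
        end = start + k + (1 if i < r else 0)
        parts.append(data[start:end])
        start = end
    return parts

def run_jobs(parts):
    """One job per partial input; combine the partial results."""
    partials = [sum(p) for p in parts]
    return sum(partials)

def target_workers(required_computation, off_peak, max_workers=16):
    """Toy sizing rule: scale with required computation, halved at peak
    hours to limit financial cost and power consumption."""
    base = min(required_computation, max_workers)
    return base if off_peak else max(1, base // 2)
```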