G06F9/5033

TECHNIQUES FOR MULTI-SOURCE TO MULTI-DESTINATION WEIGHTED ROUND ROBIN ARBITRATION

Examples include techniques to arbitrate a plurality of input requests received from input clients that request data to be stored or placed in a destination. An arbiter may be arranged to grant an input request based on an assigned weight and based on an indication that the destination is ready to receive the data to be stored or placed in the destination.
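As a rough illustration of the arbitration scheme this abstract describes, the following Python sketch implements a credit-based weighted round robin grant that considers only requests whose destination reports ready. The client/destination dictionaries and the credit-refill policy are assumptions for illustration, not details from the patent.

```python
class WeightedRoundRobinArbiter:
    """Grant one input request per cycle, favoring clients by assigned
    weight and skipping requests whose destination is not ready."""

    def __init__(self, weights):
        self.weights = dict(weights)   # client -> assigned weight
        self.credits = dict(weights)   # remaining grant credits per client

    def grant(self, requests, dest_ready):
        # requests: client -> destination; dest_ready: destination -> bool
        eligible = [c for c, d in requests.items() if dest_ready.get(d, False)]
        if not eligible:
            return None                # nothing grantable this cycle
        if all(self.credits[c] <= 0 for c in eligible):
            for c in eligible:         # refill credits for an exhausted round
                self.credits[c] += self.weights[c]
        winner = max(eligible, key=lambda c: self.credits[c])
        self.credits[winner] -= 1
        return winner
```

With weights {a: 2, b: 1} and both destinations ready, grants cycle through a, a, b; a request whose destination is not ready is never granted.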

Distributed machine learning technique used for data analysis and data computation in distributed environment

A system for distributed computing allows researchers performing analysis on data having legal or policy restrictions on movement of the data to perform the analysis at multiple sites without exporting the restricted data from each site. A primary system determines which sites contain the data to be processed by a compute job and sends the compute job to the associated site. Resulting calculated data can be exported to a primary collecting and recording system. The system allows the rapid development of analysis code for the compute jobs, which can be executed on a local data set and then passed to the primary system for distributed machine learning on the multiple sites without any changes to the code of the compute job.
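A minimal sketch of the dispatch logic in this abstract: jobs move to the data, and only computed results are exported. The catalog structure and the `job` callable are illustrative assumptions.

```python
def run_distributed(job, required_dataset, site_catalog, collector):
    """Send the compute job to every site holding the required data set;
    only the computed results leave each site."""
    # site_catalog: site name -> set of data sets stored at that site
    target_sites = [site for site, datasets in site_catalog.items()
                    if required_dataset in datasets]
    for site in target_sites:
        result = job(site)                 # runs at the site; raw data stays put
        collector.append((site, result))   # export only calculated data
    return target_sites
```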

Systems and methods for virtual machine resource optimization using tree traversal techniques representing alternate configurations

Systems described herein may allow for the intelligent configuration of containers onto virtualized resources. Different configurations may be generated based on the simulation of alternate placements of containers onto nodes, where the placement of a particular container onto a particular node may serve as a root for several branches which may themselves simulate the placement of additional containers on the node (in addition to the container(s) indicated in the root). Once a set of configurations is generated, a particular configuration may be selected according to determined selection parameters and/or intelligent selection techniques.
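The tree structure this abstract describes can be sketched as a depth-first search where each tree level places one more container and each branch simulates an alternate node choice. The capacity model and the load-spread selection parameter below are assumptions for illustration.

```python
def enumerate_configs(containers, capacity):
    """Tree of alternate placements: each level places one more container
    on some node, pruning branches that exceed node capacity."""
    names = list(containers)

    def branch(i, assignment, load):
        if i == len(names):
            yield dict(assignment)     # a complete leaf configuration
            return
        c = names[i]
        for node, cap in capacity.items():
            if load[node] + containers[c] <= cap:
                assignment[c] = node
                load[node] += containers[c]
                yield from branch(i + 1, assignment, load)
                load[node] -= containers[c]   # backtrack to try siblings
                del assignment[c]

    return list(branch(0, {}, {n: 0 for n in capacity}))

def select_config(configs, containers):
    """Assumed selection parameter: minimize the spread between the
    most- and least-loaded nodes that received containers."""
    def spread(cfg):
        load = {}
        for c, n in cfg.items():
            load[n] = load.get(n, 0) + containers[c]
        return max(load.values()) - min(load.values()) if load else 0
    return min(configs, key=spread)
```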

COMPUTING RESOURCE MANAGEMENT METHOD, ELECTRONIC EQUIPMENT AND PROGRAM PRODUCT
20230100110 · 2023-03-30 ·

A technique for computing resource management involves determining a first resource request frequency based on the number of trigger signals received from a storage device during a first period. The trigger signals are generated when a data amount of modified metadata stored in the storage device reaches a threshold data amount. The technique further involves determining a second resource request frequency based on the number of trigger signals received from the storage device during a second period subsequent to the first period. The technique further involves adjusting computing resources for performing an operation of copying the modified metadata in the storage device to a storage medium based on a comparison of the first resource request frequency and the second resource request frequency. Accordingly, computing resources can be fully utilized, and the operation of copying modified metadata to a magnetic disk can be performed in a timely manner.
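The frequency comparison above can be sketched as follows; the one-worker-per-step scaling policy and the worker bounds are assumptions, since the abstract only specifies that resources are adjusted based on the comparison.

```python
def adjust_copy_workers(workers, triggers_p1, triggers_p2, period_s,
                        min_workers=1, max_workers=8):
    """Compare trigger frequencies from two consecutive periods and scale
    the metadata-copy workers up or down by one step (assumed policy)."""
    f1 = triggers_p1 / period_s   # first resource request frequency
    f2 = triggers_p2 / period_s   # second resource request frequency
    if f2 > f1:
        workers = min(max_workers, workers + 1)   # dirty metadata accruing faster
    elif f2 < f1:
        workers = max(min_workers, workers - 1)   # flush pressure easing
    return workers
```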

REGULATION OF THROTTLING OF POLLING BASED ON PROCESSOR UTILIZATIONS
20230094430 · 2023-03-30 ·

A process includes determining a first degree of throttling to apply to a polling of hardware devices by a hardware processor based on a historical total utilization of the hardware processor; and determining a second degree of throttling to apply to the polling of hardware devices by the hardware processor based on a historical polling utilization of the hardware processor. The process further includes, responsive to an upcoming hardware device polling cycle for the hardware processor and based on the first degree of throttling and the second degree of throttling, regulating whether the hardware processor bypasses the hardware device polling cycle or executes the hardware device polling cycle.
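One way to model the two-degree regulation above is shown below. The linear utilization-to-degree mapping, the limits, and the take-the-stronger combination are assumptions; the abstract only states that both degrees feed the bypass-or-execute decision.

```python
import math

def throttle_degree(util, limit):
    """Map a historical utilization to a throttle degree in [0, 1]:
    zero below the limit, rising linearly to full throttling at 100%."""
    if limit >= 1.0:
        return 0.0
    return min(1.0, max(0.0, (util - limit) / (1.0 - limit)))

def run_polling_cycle(cycle_index, total_util, polling_util,
                      total_limit=0.85, polling_limit=0.30):
    """Decide whether this polling cycle executes or is bypassed,
    combining the two degrees (assumed: take the stronger one)."""
    degree = max(throttle_degree(total_util, total_limit),
                 throttle_degree(polling_util, polling_limit))
    run_fraction = 1.0 - degree
    # Spread the executed cycles evenly among the bypassed ones.
    return (math.floor((cycle_index + 1) * run_fraction)
            > math.floor(cycle_index * run_fraction))
```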

Handling Execution of a Function

There is provided a method for handling execution of a function in a function-as-a-service (FaaS) system. According to the method, in response (200) to a trigger on a first node of the FaaS system for execution of a function, execution of the function is initiated (206) on the first node of the FaaS system if data to be accessed for the execution is stored on the first node of the FaaS system and/or execution of the function is initiated (210) on a second node of the FaaS system if the data to be accessed for the execution is stored on the second node of the FaaS system.
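The data-locality routing above can be sketched in a few lines; the ordered node-to-datasets mapping (with the triggered node first) and the `function` callable are illustrative assumptions.

```python
def handle_trigger(function, dataset, nodes):
    """On a trigger at the first node, run the function where its data
    lives: locally if this node stores the data, otherwise on the node
    that does."""
    # nodes: ordered mapping {node name: set of locally stored data sets};
    # the first entry is the node that received the trigger.
    first = next(iter(nodes))
    if dataset in nodes[first]:
        return first, function(first)        # local execution
    for node, local_data in nodes.items():
        if dataset in local_data:
            return node, function(node)      # execution moved to the data
    raise LookupError(f"no node stores dataset {dataset!r}")
```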

Optimizing clustered applications in a clustered infrastructure

This disclosure describes techniques for providing virtual resources (e.g., containers, virtual machines, etc.) of a clustered application with information regarding the cluster of physical servers on which the distributed clustered application is running. A virtual resource that supports the clustered application is executed on a physical server of the cluster of physical servers. The virtual resource may receive an indication of a database instance (or other application) running on a particular physical server of the cluster that is nearest the physical server hosting the virtual resource. The database instance may be included in a group of database instances that maintain a common data set on respective physical servers of the cluster. The virtual resource may then access the database instance on the particular physical server based at least in part on that database instance running on the physical server nearest the one hosting the virtual resource.
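The nearest-replica selection this abstract describes reduces to a minimum over a topology metric. The `Instance` record and the hop-count metric below are assumptions for illustration.

```python
from collections import namedtuple

Instance = namedtuple("Instance", "name server")

def nearest_instance(local_server, instances, hop_count):
    """Among replicas maintaining the common data set, select the one on
    the physical server with the fewest hops from `local_server`
    (hop_count is an assumed topology metric)."""
    return min(instances, key=lambda i: hop_count(local_server, i.server))
```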

SYSTEMS AND METHODS FOR WORKLOAD DISTRIBUTION ACROSS PROCESSING UNITS
20230089812 · 2023-03-23 ·

Workload distribution in a system including a non-volatile memory device is disclosed. A request is received including an address associated with a memory location of the non-volatile memory device. A hash value is calculated based on the address. A list of node values is searched, and one of the node values in the list is identified based on the hash value. A processor is identified based on the one of the node values, and the address is stored in association with the processor. The request is transmitted to the processor for accessing the memory location.
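The hash-then-search flow above resembles a hash ring: hash the address, binary-search a sorted list of node values, and map the matched node value to a processor. The SHA-256-based hash and the virtual-point count are assumptions, since the abstract does not name a hash function.

```python
import bisect
import hashlib

def _hash(key):
    """Stable 32-bit hash of an address string (assumed hash function)."""
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big")

class AddressRouter:
    """Route memory-access requests to processors by hashing the request
    address and searching a sorted list of node values."""

    def __init__(self, processors, points_per_proc=4):
        self.ring = sorted(
            (_hash(f"{p}#{i}"), p)
            for p in processors for i in range(points_per_proc))
        self.keys = [k for k, _ in self.ring]
        self.assignments = {}          # address stored with its processor

    def route(self, address):
        h = _hash(address)
        # Identify the first node value >= the hash, wrapping around.
        i = bisect.bisect_left(self.keys, h) % len(self.ring)
        proc = self.ring[i][1]
        self.assignments[address] = proc
        return proc                    # request is then sent to this processor
```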

OPTIMIZED NETWORKING THREAD ASSIGNMENT
20220350647 · 2022-11-03 ·

Some embodiments provide a method for scheduling networking threads associated with a data compute node (DCN) executing at a host computer. When a virtual networking device is instantiated for the DCN, the method assigns the virtual networking device to a particular non-uniform memory access (NUMA) node of multiple NUMA nodes associated with the DCN. Based on the assignment of the virtual networking device to the particular NUMA node, the method assigns networking threads associated with the DCN to the same particular NUMA node and provides information to the DCN regarding the particular NUMA node in order for the DCN to assign a thread associated with an application executing on the DCN to the same particular NUMA node.
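A pure-Python model of the placement decision above; in a real hypervisor the pinning would go through host scheduler APIs, and the NUMA-node-to-CPU mapping here is an assumed input.

```python
def assign_networking_threads(vnic_numa_node, networking_threads, numa_cpus):
    """Pin the DCN's networking threads to CPUs of the NUMA node the
    virtual networking device was assigned to, and return that node as
    the hint the DCN uses to co-locate its application thread."""
    cpus = numa_cpus[vnic_numa_node]
    placement = {t: cpus[i % len(cpus)]
                 for i, t in enumerate(networking_threads)}
    return placement, vnic_numa_node
```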

DECENTRALIZED ARTIFICIAL INTELLIGENCE (AI)/MACHINE LEARNING TRAINING SYSTEM

A decentralized training platform is described for training an Artificial Intelligence (AI) model where the training data (e.g., medical images) is distributed across multiple sites (nodes) and, for confidentiality, legal, or other reasons, the data at each site cannot be shared or leave the site, and so cannot be copied to a central location for training. The method comprises training a teacher model locally at each node, moving each of the teacher models to a central node, and using them to train a student model on a transfer dataset. This may be facilitated by setting up the cloud service with inter-region peering connections between the nodes so that the nodes appear as a single cluster. In one variation, the student model may be trained at each node using the multiple trained teacher models. In another variation, multiple student models are trained, where each student model is trained by the teacher model at the node where that teacher model was trained; once the plurality of student models are trained, an ensemble model is generated from the plurality of trained student models. Loss-function weighting and node undersampling for load balancing may be used to improve accuracy and time/cost efficiency.
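The teacher-student flow above can be illustrated with toy 1-D models standing in for real networks: each node fits a local "teacher", the teachers label a shared transfer dataset, and the central "student" is fit on those labels. Everything below (the bias-only models, the averaging) is an assumption chosen to keep the sketch self-contained.

```python
def train_teacher(local_data):
    """Train a teacher locally at a node; here a toy model that learns a
    single additive bias from (x, y) pairs (stand-in for a network)."""
    bias = sum(y - x for x, y in local_data) / len(local_data)
    return lambda x: x + bias

def distill_student(teachers, transfer_inputs):
    """At the central node, fit a student on the teachers' averaged
    labels over a shared (non-confidential) transfer dataset."""
    labels = [sum(t(x) for t in teachers) / len(teachers)
              for x in transfer_inputs]
    bias = sum(y - x for x, y in zip(transfer_inputs, labels)) / len(labels)
    return lambda x: x + bias
```

Only the teacher models and transfer-set labels move between nodes; the confidential site data never leaves its site.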