G06F15/17343

Managing Traffic for Endpoints in Data Center Environments to Provide Cloud Management Connectivity

Techniques for combining the functionality of fabric interconnects (FIs) and switches (e.g., Top-of-Rack (ToR) switches) into one network entity, thereby reducing the number of devices in a fabric and the complexity of communications in the fabric. By collapsing FI and ToR switch functionality into one network entity, server traffic may be forwarded directly by the ToR switch and an entire tier is eliminated from the topology hierarchy, which may improve the control, data, and management planes. Further, this disclosure describes techniques for dynamically managing the number of gateway proxies running on one or more computer clusters based on the number of managed switch domains.
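The proxy-scaling idea in the last sentence can be sketched as a simple sizing function. This is an illustrative sketch, not the disclosed implementation; the per-proxy domain capacity and the min/max bounds are assumed parameters.

```python
import math

def desired_gateway_proxies(num_switch_domains: int,
                            domains_per_proxy: int = 4,
                            min_proxies: int = 1,
                            max_proxies: int = 16) -> int:
    """Scale the gateway-proxy count with the number of managed switch
    domains, clamped to an assumed [min_proxies, max_proxies] range."""
    if num_switch_domains <= 0:
        return min_proxies
    wanted = math.ceil(num_switch_domains / domains_per_proxy)
    return max(min_proxies, min(max_proxies, wanted))
```

A cluster orchestrator could poll the managed-domain count and reconcile the running proxy count toward this target.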

SWITCH UNIT, ETHERNET NETWORK, AND METHOD FOR ACTIVATING COMPONENTS IN AN ETHERNET NETWORK

A switch unit for an Ethernet network has a switch and a microprocessor, the switch including at least three ports connected to inputs and outputs of the switch unit, with a signal detector and generator for detecting and initiating bus activity arranged in each case between the ports and the inputs and outputs of the switch unit. For each input and output, an allocation rule to the other inputs and outputs of the switch unit is stored in a memory. The switch unit is designed such that when bus activity is detected at a signal detector and generator, the inputs and outputs assigned to this input and output are read out from the memory and the associated signal detectors and generators are woken up so that they generate bus activity at their inputs and outputs.
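The wake-up mechanism above can be sketched as a lookup against stored allocation rules. A minimal sketch, not the patented hardware: the dict standing in for the memory, the port numbering, and the class interface are all assumptions.

```python
class SwitchUnit:
    """On detected bus activity at a port, read that port's allocation
    rule from 'memory' and wake the assigned detectors/generators so
    they generate bus activity on their own inputs and outputs."""

    def __init__(self, allocation):
        self.allocation = allocation   # per-port allocation rules ("memory")
        self.awake = set()             # detectors/generators currently awake

    def on_bus_activity(self, port):
        """Handle detected activity on `port`; return the ports that are
        woken and now generate bus activity themselves."""
        targets = self.allocation.get(port, set())
        self.awake |= targets
        return sorted(targets)
```

For example, a rule `{1: {2, 3}}` wakes ports 2 and 3 when activity is seen on port 1.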

Network node and method for controlling resources in a communication network

A network node (10) controls resources (22) in a network. The node (10) includes processing units (12) and assigns, for each set of resources (22), a master role to one unit (12), consisting in reserving and releasing resources (22), and the role of controlling resources (22) to two processing units (12), named controllers. A controller (12c) operates in a first mode when a master role is assigned to a processing unit (12m) and the unit (12m) is available to reserve and release resources (22). A controller (12c) operates in a second mode when no master role is assigned or when a master role is assigned to a unit (12m) which is not available to reserve and release resources (22). In the second mode, a controller (12c) maintains a list (14) of resources (22) to be released and, when a resource (22) is to be reserved, selects a resource (22) from the list (14).
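The two controller modes can be sketched as a small dispatch: delegate to the master when it is available, otherwise serve reservations from the to-be-released list. A sketch under assumed interfaces; the class names, the pool representation, and the pop-based selection policy are illustrative, not from the disclosure.

```python
class Master:
    """Processing unit holding the master role for a resource set."""
    def __init__(self, available=True):
        self.available = available

    def reserve(self, free_pool):
        # Master role: reserve a resource from the free pool.
        return free_pool.pop() if free_pool else None


class Controller:
    """First mode: an available master reserves and releases resources.
    Second mode (no master, or master unavailable): maintain a list of
    resources pending release and satisfy reservations from that list."""
    def __init__(self):
        self.to_release = []   # resources to be released (list 14)

    def reserve(self, master, free_pool):
        if master is not None and master.available:
            return master.reserve(free_pool)   # first mode
        if self.to_release:                    # second mode
            return self.to_release.pop()
        return None
```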

LEVERAGING MULTIPROCESSOR FABRIC LINK AGGREGATION

Data access patterns between at least three nodes within a single symmetric multiprocessing server may be monitored by at least one hypervisor. At the hypervisor, mappings for the data access patterns may be generated for the at least three nodes. Based upon the mappings, the hypervisor may determine that the data access patterns for at least two of the at least three nodes are outside of a bandwidth threshold. In response to determining that the data access patterns for at least two of the at least three nodes are outside of a bandwidth threshold, the hypervisor may formulate an asymmetric cabling plan. Based upon the asymmetric cabling plan, a recommendation to alter the multiprocessor fabric link aggregation may be displayed to a user through a graphical user interface.
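The threshold check at the heart of this abstract can be sketched as follows. This is an illustrative sketch only: the pairwise-bandwidth input format, the Gbps units, and the "extra link per hot pair" recommendation are assumptions, not the hypervisor's actual mapping or plan format.

```python
def asymmetric_cabling_plan(access_gbps: dict, threshold_gbps: float) -> list:
    """Given measured inter-node traffic (node-pair -> bandwidth), return
    the pairs whose traffic exceeds the threshold; each returned pair is a
    candidate for additional aggregated fabric links."""
    hot = [pair for pair, bw in access_gbps.items() if bw > threshold_gbps]
    return sorted(hot)
```

A GUI layer could then render the returned pairs as the recommendation to alter the link aggregation.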

ELECTRONIC DEVICE INCLUDING A PLURALITY OF CHIPLETS AND METHOD FOR TRANSMITTING TRANSACTION THEREOF
20250077467 · 2025-03-06

An electronic device comprising a plurality of chiplets is disclosed. The electronic device comprises a first chiplet that generates a transaction, a second chiplet that receives the transaction, and at least one third chiplet that relays the transaction, wherein the first chiplet determines a route path for the transaction that passes through the at least one third chiplet, and transmits the transaction through the determined route path for the transaction.
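The source chiplet's route determination can be sketched as a shortest-path search over the chiplet interconnect, with intermediate hops acting as relays. A sketch only: BFS is an assumed policy (the disclosure does not specify the route-selection algorithm), and the chiplet names and adjacency format are illustrative.

```python
from collections import deque

def route_path(links: dict, src: str, dst: str):
    """BFS over the chiplet topology; returns the hop sequence from the
    source chiplet to the destination (relays in between), or None."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        cur = queue.popleft()
        if cur == dst:
            path = []
            while cur is not None:      # walk predecessors back to src
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for nxt in links.get(cur, ()):
            if nxt not in prev:
                prev[nxt] = cur
                queue.append(nxt)
    return None
```

The transaction would then be forwarded hop by hop along the returned path.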

Load balancing system for the execution of applications on reconfigurable processors

A data processing system is presented in a client-server configuration for executing first and second applications that a client can offload for execution onto the data processing system. The data processing system includes a server and a pool of reconfigurable data flow resources configured to execute the first application in a first runtime context and the second application in a second runtime context. The server is configured to establish a session with the client, receive from the client first and second execution requests for the first and second applications, start the respective executions in the first and second runtime contexts in response to the requests, and balance a first load from the first execution against a second load from the second execution.
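The balancing step can be sketched as a greedy least-loaded assignment of execution requests to resource units. This is an assumed heuristic for illustration; the disclosure does not specify the balancing algorithm, and the unit names and cost figures are made up.

```python
def balance(loads: dict, requests: list) -> dict:
    """Assign each request (name, cost) to the currently least-loaded
    resource unit, updating the per-unit load as assignments are made."""
    assignment = {}
    for req, cost in requests:
        unit = min(loads, key=loads.get)   # least-loaded unit so far
        loads[unit] += cost
        assignment[req] = unit
    return assignment
```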

METHOD FOR ACCESSING SYSTEM-ON-CHIP (SOC) MEMORY FROM USER SPACE

The present disclosure is directed to a method for accessing memory. The method includes mapping an address space for the memory to an address space for a kernel space. The method includes mapping the address space for the memory to an address space for a user space using the kernel space. The method includes accessing the memory via the address space for the kernel space and the address space for the user space.
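The kernel-to-user mapping step can be illustrated with a memory-mapped region. A sketch only: a temporary file stands in for the SoC memory device node (on real hardware one would map a driver-provided node such as `/dev/mem`, which requires privileges), and the 4 KiB size is an arbitrary assumption.

```python
import mmap
import os
import tempfile

def map_and_rw(size: int = 4096, data: bytes = b"\xde\xad\xbe\xef") -> bytes:
    """Map a region into this process's (user-space) address space,
    write through the mapping, and read the bytes back."""
    fd, path = tempfile.mkstemp()          # stand-in for the memory device
    try:
        os.ftruncate(fd, size)             # size of the region to map
        with mmap.mmap(fd, size) as region:
            region[:len(data)] = data      # user-space write via the mapping
            return bytes(region[:len(data)])   # user-space read back
    finally:
        os.close(fd)
        os.unlink(path)
```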