Patent classifications
H04L49/30
SYSTEM TO TRANSMIT MESSAGES USING MULTIPLE NETWORK PATHS
A system includes reception of an instruction to send a message to a computer server, determination of a plurality of segments of the message, determination, for each of the plurality of segments, of a network path from a plurality of network paths to the computer server based on performance-related characteristics of the plurality of network paths, and assignment, for each of the plurality of segments, of the segment to a transmission queue associated with the network path determined for the segment.
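The mechanism in the abstract can be illustrated with a small sketch. The names (`Path`, `assign`, the latency-plus-backlog score) are hypothetical and not from the patent; they only show segments being routed to per-path transmission queues based on performance-related characteristics.

```python
# Hypothetical sketch: split a message into segments, choose a network
# path per segment from performance-related characteristics (here,
# measured latency plus current queue backlog), and assign each segment
# to the transmission queue of the chosen path.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Path:
    name: str
    latency_ms: float                      # performance-related characteristic
    queue: deque = field(default_factory=deque)

def segment(message: bytes, size: int) -> list:
    return [message[i:i + size] for i in range(0, len(message), size)]

def assign(message: bytes, paths: list, seg_size: int = 4) -> None:
    for seg in segment(message, seg_size):
        # score each path; lower latency and shorter backlog win
        best = min(paths, key=lambda p: p.latency_ms + len(p.queue))
        best.queue.append(seg)

paths = [Path("wifi", 20.0), Path("lte", 21.0)]
assign(b"hello multipath!", paths)         # 16 bytes -> 4 segments
```

With latencies this close, the backlog term spreads segments across both paths rather than loading one queue.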
Non-posted write transactions for a computer bus
Systems and devices can include a controller and a command queue to buffer incoming write requests into the device. The controller can receive, from a client across a link, a non-posted write request (e.g., a deferred memory write (DMWr) request) in a transaction layer packet (TLP) to the command queue; determine that the command queue can accept the DMWr request; identify, from the TLP, a successful completion (SC) message that indicates that the DMWr request was accepted into the command queue; and transmit, to the client across the link, the SC message that indicates that the DMWr request was accepted into the command queue. The controller can receive a second DMWr request in a second TLP; determine that the command queue is full; and transmit a memory request retry status (MRS) message to the client in response to the command queue being full.
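The accept/retry behavior can be modeled in a few lines. This is an illustrative sketch of the controller described above, not PCIe specification text; the class and method names are assumptions.

```python
# Illustrative model of the controller: accept a DMWr TLP while the
# command queue has room and reply SC (successful completion); reply
# MRS (memory request retry status) once the queue is full.
from collections import deque

class Controller:
    def __init__(self, depth: int):
        self.cmd_queue = deque()
        self.depth = depth                 # command queue capacity

    def receive_dmwr(self, tlp: bytes) -> str:
        if len(self.cmd_queue) >= self.depth:
            return "MRS"                   # full: client should retry later
        self.cmd_queue.append(tlp)
        return "SC"                        # request accepted into the queue

ctrl = Controller(depth=2)
replies = [ctrl.receive_dmwr(b"w1"), ctrl.receive_dmwr(b"w2"), ctrl.receive_dmwr(b"w3")]
# replies == ["SC", "SC", "MRS"]
```

The non-posted nature of the write is what makes the per-request SC/MRS reply possible: the client learns whether each write landed.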
Distributed artificial intelligence extension modules for network switches
Distributed machine learning systems and other distributed computing systems are improved by compute logic embedded in extension modules coupled directly to network switches. The compute logic performs collective actions, such as reduction operations, on gradients or other compute data processed by the nodes of the system. The reduction operations may include, for instance, summation, averaging, bitwise operations, and so forth. In this manner, the extension modules may take over some or all of the processing of the distributed system during the collective phase. An inline version of the module sits between a switch and the network. Data units carrying compute data are intercepted and processed using the compute logic, while other data units pass through the module transparently to or from the switch. Multiple modules may be connected to the switch, each coupled to a different group of nodes, and sharing intermediate results. A sidecar version is also described.
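The collective actions named above (summation, averaging) reduce element-wise over gradient vectors from a group of nodes. A minimal sketch, with an illustrative function name, of what the embedded compute logic performs:

```python
# Element-wise reduction over per-node gradient vectors, as an
# extension module might perform during the collective phase.
def reduce_gradients(contributions: list, op: str = "sum") -> list:
    # zip(*...) pairs up the i-th element from every node's vector
    totals = [sum(vals) for vals in zip(*contributions)]
    if op == "avg":
        n = len(contributions)
        return [t / n for t in totals]
    return totals

node_grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
reduce_gradients(node_grads)           # [9.0, 12.0]
reduce_gradients(node_grads, "avg")    # [3.0, 4.0]
```

In the inline arrangement, this reduction would run on intercepted data units while unrelated traffic passes through to the switch untouched.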
Quasi-Output Queue Behavior of a Packet Switching Device Achieved Using Virtual Output Queue Ordering Independently Determined for each Output Queue
In one embodiment, quasi-Output Queue behavior of a packet switching device is achieved using virtual output queue (VOQ) ordering independently determined for each particular output queue (OQ), including using maintained latency information of the VOQs of the particular OQ. In one embodiment, all packets from all VOQs with a same port-priority destination experience similar latency within a specific time window, which is similar to the packet service provided by an Output Queue switch architecture. In one embodiment, all input ports that send traffic to the same output port-priority receive bandwidth proportional to their bandwidth demand divided by total bandwidth. Prior approaches that emulate the performance of an OQ switch architecture require complex and time-consuming scheduling determinations and do not scale. Independently determining the order for sending packets from the VOQs associated with each particular OQ provides a scalable and implementable system with quasi-Output Queue behavior.
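One way to read "VOQ ordering using maintained latency information" is oldest-head-of-line-first per OQ: among the VOQs feeding a single output queue, always serve the VOQ whose head packet has waited longest. The sketch below is an assumed interpretation, not the patent's scheduler:

```python
# Assumed per-OQ scheduler: drain the VOQs of one output queue in
# head-of-line arrival-time order, so all VOQs of that OQ see similar
# latency within a time window.
from collections import deque

def serve_oq(voqs: dict) -> list:
    served = []
    while any(voqs.values()):
        # pick the non-empty VOQ whose head packet arrived earliest
        name = min((n for n in voqs if voqs[n]), key=lambda n: voqs[n][0][0])
        _, pkt = voqs[name].popleft()
        served.append(pkt)
    return served

voqs = {
    "in0": deque([(1, 10), (4, 11)]),   # (arrival_time, packet_id)
    "in1": deque([(2, 20), (3, 21)]),
}
order = serve_oq(voqs)                  # [10, 20, 21, 11]
```

Because each OQ orders only its own VOQs, no switch-wide scheduling computation is needed, which is the scalability point the abstract makes.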
Quality of service in virtual service networks
A switch in a slice-based network can be used to enforce quality of service (“QoS”). Agents can run in the switches, such as in the core of each switch. The switches can sort ingress packets into slice-specific ingress queues in a slice-based pool. The slices can have different QoS prioritizations. A switch-wide policing algorithm can move the slice-specific packets to egress interfaces. Then, one or more user-defined egress policing algorithms can prioritize which packets are sent out into the network first based on slice classifications.
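A simple strict-priority egress pass illustrates the last step. The slice names, priority values, and strict-priority policy are assumptions; the patent allows arbitrary user-defined egress policing algorithms.

```python
# Hypothetical egress policing pass: packets already sorted into
# slice-specific ingress queues are emitted in slice-priority order.
from collections import deque

def egress_order(slice_pool: dict, priority: dict) -> list:
    out = []
    # serve higher-priority slices first (larger number = higher priority)
    for slice_id in sorted(slice_pool, key=lambda s: -priority[s]):
        while slice_pool[slice_id]:
            out.append(slice_pool[slice_id].popleft())
    return out

pool = {"video": deque(["v1", "v2"]), "bulk": deque(["b1"])}
order = egress_order(pool, {"video": 10, "bulk": 1})   # ['v1', 'v2', 'b1']
```

A production policer would typically use weighted round-robin or token buckets instead of strict priority to avoid starving low-priority slices.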
Power-over-Ethernet (PoE) breakout module
Presented herein are embodiments of a power-over-Ethernet (PoE) breakout system that may be used to breakout a PoE port from a PoE information handling system into a number of breakout ports. In one or more embodiments, a PoE breakout system comprises: a PoE port for connecting to a PoE information handling system, such as a PoE switch; a plurality of breakout ports for connecting to powered devices, wherein each breakout port is configured to supply power to a powered device; and a power management module electrically coupled to the PoE port and configured to supply power to each breakout port according to a configuration that sets a power level for that breakout port. In one or more embodiments, the PoE breakout system comprises a data communications module that switches data traffic to a correct PoE breakout port according to its intended powered device.
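The power management module's job can be sketched as a budget check over the per-port configuration. The function name and the simple first-fit, all-or-nothing policy are assumptions for illustration only:

```python
# Assumed budget check: grant each breakout port its configured power
# level while the PoE input budget lasts; a port whose level no longer
# fits gets no power rather than a partial grant.
def allocate(budget_w: float, port_config_w: dict) -> dict:
    grants = {}
    remaining = budget_w
    for port, level in port_config_w.items():
        granted = level if level <= remaining else 0.0   # all-or-nothing per port
        grants[port] = granted
        remaining -= granted
    return grants

grants = allocate(60.0, {"p1": 30.0, "p2": 15.4, "p3": 30.0})
# {'p1': 30.0, 'p2': 15.4, 'p3': 0.0}
```

The configured levels would in practice correspond to PoE power classes negotiated with each powered device.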
EXCHANGE MANAGEMENT APPARATUS, EXCHANGE MANAGEMENT METHOD, AND PROGRAM
A replacement management apparatus includes a detection unit configured to detect, for both a communication apparatus to be replaced and a communication apparatus for replacement, communication speeds of physical ports used for connection, in units of communication apparatuses that are connection destinations, and a replacement determination unit configured to derive, for both the communication apparatus to be replaced and the communication apparatus for replacement, a communication capacity that is a sum of the communication speeds for each of the communication apparatuses that are connection destinations, and determine, for all of the communication apparatuses that are connection destinations, in a case in which the communication capacity of the communication apparatus for replacement is equal to or greater than the communication capacity of the communication apparatus to be replaced, that the communication apparatus to be replaced is replaceable with the communication apparatus for replacement.
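The determination described above reduces to: per connection destination, sum the port speeds into a communication capacity, then allow replacement only if the replacing apparatus matches or exceeds the old apparatus for every destination. A minimal model (function names are illustrative):

```python
# Sum port speeds per connection destination, then compare capacities
# destination by destination to decide replaceability.
def capacities(ports: list) -> dict:
    cap = {}
    for destination, speed_gbps in ports:
        cap[destination] = cap.get(destination, 0.0) + speed_gbps
    return cap

def replaceable(old_ports: list, new_ports: list) -> bool:
    old_cap, new_cap = capacities(old_ports), capacities(new_ports)
    # every destination's demand must be covered by the new apparatus
    return all(new_cap.get(dest, 0.0) >= need for dest, need in old_cap.items())

old = [("rtrA", 10.0), ("rtrA", 10.0), ("rtrB", 40.0)]
new = [("rtrA", 25.0), ("rtrB", 40.0)]
replaceable(old, new)   # True: 25 >= 20 for rtrA, 40 >= 40 for rtrB
```

Note the comparison is per destination, not in aggregate: a new apparatus with ample total capacity but a shortfall toward one neighbor is rejected.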
EXCHANGE MANAGEMENT APPARATUS, EXCHANGE MANAGEMENT METHOD, AND PROGRAM
A replacement management apparatus includes a detection section which detects a first communication speed and first setting information of a physical port used for connection for each of a communication apparatus to be replaced and a communication apparatus to be connected, and detects a second communication speed of a physical port used for connection for a replacing communication apparatus, and a generation section which generates second setting information of the physical port used for connection for each of the replacing communication apparatus and the communication apparatus to be connected based on the detected first communication speed, the detected second communication speed, and the detected first setting information.
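A rough sketch of the generation step follows. The settings schema (`vlan`, `speed_gbps`, `autoneg`) and the cap-at-the-slower-speed policy are invented for illustration; the patent does not specify the contents of the setting information.

```python
# Assumed generation step: carry the old port's settings over to the
# replacing apparatus, capping the speed setting at whichever of the
# two physical ports is slower.
def generate_settings(first_speed: float, second_speed: float,
                      first_settings: dict) -> dict:
    second = dict(first_settings)                     # inherit old configuration
    second["speed_gbps"] = min(first_speed, second_speed)
    second["autoneg"] = first_speed != second_speed   # renegotiate if speeds differ
    return second

cfg = generate_settings(10.0, 25.0, {"vlan": 100, "speed_gbps": 10.0})
# cfg == {'vlan': 100, 'speed_gbps': 10.0, 'autoneg': True}
```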
Scaling host policy via distribution
Techniques are disclosed for processing data packets and implementing policies in a software defined network (SDN) of a virtual computing environment. At least two SDN appliances are configured to disaggregate enforcement of policies of the SDN from hosts of the virtual computing environment. The hosts are implemented on servers communicatively coupled to network interfaces of the SDN appliance. The servers host a plurality of virtual machines. The servers are communicatively coupled to network interfaces of at least two top-of-rack switches (ToRs). The SDN appliance comprises a plurality of smart network interface cards (sNICs) configured to implement functionality of the SDN appliance. The sNICs have a floating network interface configured to provide a virtual port connection to an endpoint within a virtual network of the virtual computing environment.
TECHNOLOGIES FOR DYNAMIC ACCELERATOR SELECTION
Technologies for dynamic accelerator selection include a compute sled. The compute sled includes a network interface controller to communicate with a remote accelerator of an accelerator sled over a network, where the network interface controller includes a local accelerator and a compute engine. The compute engine is to obtain network telemetry data indicative of a level of bandwidth saturation of the network. The compute engine is also to determine whether to accelerate a function managed by the compute sled. The compute engine is further to determine, in response to a determination to accelerate the function, whether to offload the function to the remote accelerator of the accelerator sled based on the telemetry data. Also the compute engine is to assign, in response to a determination not to offload the function to the remote accelerator, the function to the local accelerator of the network interface controller.
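The decision chain reads as a two-step check: first whether to accelerate at all, then whether the network is too saturated to reach the remote accelerator. The threshold value and field names below are assumptions, not from the patent:

```python
# Assumed decision logic: offload to the remote accelerator sled only
# when acceleration is wanted and network bandwidth is not saturated;
# otherwise fall back to the NIC's local accelerator.
def select_accelerator(accelerate: bool, bw_saturation: float,
                       threshold: float = 0.8) -> str:
    if not accelerate:
        return "cpu"                 # run the function unaccelerated
    if bw_saturation < threshold:
        return "remote"              # accelerator sled reachable without congestion
    return "local"                   # NIC-local accelerator avoids the network

select_accelerator(True, 0.95)   # 'local'
select_accelerator(True, 0.30)   # 'remote'
```

The `bw_saturation` input stands in for the network telemetry data the compute engine obtains.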
Technologies for dynamic accelerator selection include a compute sled. The compute sled includes a network interface controller to communicate with a remote accelerator of an accelerator sled over a network, where the network interface controller includes a local accelerator and a compute engine. The compute engine is to obtain network telemetry data indicative of a level of bandwidth saturation of the network. The compute engine is also to determine whether to accelerate a function managed by the compute sled. The compute engine is further to determine, in response to a determination to accelerate the function, whether to offload the function to the remote accelerator of the accelerator sled based on the telemetry data. Also the compute engine is to assign, in response a determination not to offload the function to the remote accelerator, the function to the local accelerator of the network interface controller.