Patent classifications
H04L49/256
DIS-AGGREGATED SWITCHING AND PROTOCOL CONFIGURABLE INPUT/OUTPUT MODULE
An input/output module (IOM) for use within a network storage system mounted within a rack enclosure. The IOM includes a switching component configured to provide top-of-rack (TOR) switching for data to be routed from input connectors to data storage devices within the rack enclosure. The IOM also includes a protocol interface configured to convert a protocol of the data from an input data protocol (e.g., Ethernet, Fibre Channel or InfiniBand) to a protocol for use with the storage devices (e.g., nonvolatile memory express (NVMe) and Peripheral Component Interconnect Express (PCIe)). Among other features, the IOM allows switching to be dis-aggregated from a TOR switch and distributed throughout the data network of the rack.
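The protocol-interface role described above can be sketched as a simple dispatch: incoming frames on a fabric-side protocol are tagged for the storage-side protocol before being switched to a drive. This is a minimal illustration, not the patented implementation; the handler names and the collapse of NVMe-over-PCIe to a single `"nvme"` tag are assumptions.

```python
# Illustrative dispatch for the IOM's protocol interface. Protocol names
# come from the abstract; the mapping and frame structure are invented.

STORAGE_PROTOCOL = {
    "ethernet": "nvme",       # e.g., NVMe-over-Fabrics traffic
    "fibre_channel": "nvme",
    "infiniband": "nvme",
}

def convert(frame):
    """Tag a frame with the storage-side protocol before TOR-style
    switching routes it to a storage device in the rack."""
    out = dict(frame)
    out["protocol"] = STORAGE_PROTOCOL[frame["protocol"]]
    return out
```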
SEGMENTATION AND REASSEMBLY OF NETWORK PACKETS FOR SWITCHED FABRIC NETWORKS
Reassembly of member cells into a packet comprises: receiving an incoming member cell of a packet from a switching fabric, wherein each member cell comprises a segment of the packet and a header; generating a reassembly key using selected information from the incoming member cell header, wherein the selected information is the same for all member cells of the packet; checking a reassembly table in a content addressable memory to find an entry that includes a logic key matching the reassembly key; and using a content index in the found entry, together with a sequence number of the incoming member cell within the packet, to determine a location offset in a reassembly buffer area, and storing the incoming member cell at that location offset for reassembly of the packet.
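The reassembly flow above can be sketched with the CAM modeled as a Python dict. All field names (`src`, `pkt_id`, `seq`, `total_cells`) and the fixed cell size are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of cell reassembly: a dict stands in for the CAM
# reassembly table, and each cell is written at an offset derived from
# its sequence number, so cells may arrive in any order.

CELL_SIZE = 64  # assumed fixed segment size per member cell

reassembly_table = {}    # reassembly key -> content index (models the CAM)
reassembly_buffers = {}  # content index -> bytearray reassembly area
next_content_index = 0

def receive_cell(header, payload):
    """Store one member cell at its computed offset; return where."""
    global next_content_index
    # Key built from header fields identical across all cells of a packet.
    key = (header["src"], header["pkt_id"])
    if key not in reassembly_table:            # CAM miss: allocate entry
        reassembly_table[key] = next_content_index
        reassembly_buffers[next_content_index] = bytearray(
            header["total_cells"] * CELL_SIZE)
        next_content_index += 1
    content_index = reassembly_table[key]      # CAM hit
    offset = header["seq"] * CELL_SIZE         # location offset in buffer
    buf = reassembly_buffers[content_index]
    buf[offset:offset + len(payload)] = payload
    return content_index, offset
```

Because the offset depends only on the sequence number, each cell lands in its final buffer position immediately, regardless of arrival order.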
Weighted cost multipath routing with intra-node port weights and inter-node port weights
A technique includes determining a first set of intra-node port weights for a first switch of a first routing node; determining a set of inter-node port weights including a first inter-node port weight for routing traffic to a second routing node; determining a first inter-node weighted port group for the first switch for traffic directed to the second routing node, the group including a first total port weight, based on a first intra-node port weight and the first inter-node port weight, which is applied to a first port of the first switch, and a second total port weight, based on a second intra-node port weight and the first inter-node port weight, which is applied to a second port of the first switch; and routing traffic to an output port of the first switch based on the first inter-node weighted port group.
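One way to read the technique above is as two steps: combine each port's intra-node weight with the destination's inter-node weight into a total weight, then route flows in proportion to those totals. The product is one plausible "based on" combination; the patent does not fix the formula, and the hash-based selection below is a common WCMP idiom, not the claimed method.

```python
def build_weighted_port_group(intra_weights, inter_weight):
    """Combine per-port intra-node weights with the destination node's
    inter-node weight into total port weights (product is assumed)."""
    return {port: w * inter_weight for port, w in intra_weights.items()}

def route(flow_hash, port_group):
    """Pick an output port in proportion to its total weight,
    deterministically per flow (hash modulo the weight sum)."""
    total = sum(port_group.values())
    point = flow_hash % total
    for port, weight in sorted(port_group.items()):
        if point < weight:
            return port
        point -= weight
```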
INTERCONNECTION OF GLOBAL VIRTUAL PLANES
A network environment comprises a plurality of host machines that are coupled to each other via a network fabric comprising a plurality of switches, which in turn include a plurality of ports. Each host machine comprises one or more GPUs. A first subset of ports is associated with a first virtual plane, wherein the first virtual plane identifies a first collection of resources to be used for communicating packets from/to host machines associated with the first virtual plane. A second subset of ports is associated with a second virtual plane that is different from the first virtual plane. A first host machine and a second host machine are associated with the first virtual plane and the second virtual plane, respectively. A packet is communicated from the first host machine to the second host machine using ports from the first subset of ports and the second subset of ports.
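The plane-to-port association can be pictured as a small mapping: a packet between hosts on different planes uses the source host's plane on egress and the destination host's plane on ingress. All identifiers below (switch names, port numbers, host names) are illustrative assumptions.

```python
# Hypothetical assignment of (switch, port) pairs to virtual planes.
plane_ports = {
    "plane1": {("sw1", 1), ("sw2", 1)},   # first subset of ports
    "plane2": {("sw1", 2), ("sw2", 2)},   # second subset of ports
}
host_plane = {"hostA": "plane1", "hostC": "plane2"}

def ports_for_transfer(src_host, dst_host):
    """Return the two port subsets a packet may traverse: the source
    host's plane on egress and the destination host's plane on ingress,
    interconnecting the two virtual planes."""
    return (plane_ports[host_plane[src_host]],
            plane_ports[host_plane[dst_host]])
```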
EXPANDER DEVICE CHANNEL SWITCHING FOR A MEMORY DEVICE
A method includes receiving, by a memory device interface, a signal from a host that includes a header; decoding, by the memory device interface, the header to determine an instruction; selecting, by the memory device interface, a first channel associated with a first memory resource based on the instruction; sending, by the memory device interface, the header to a second channel associated with a second memory resource; and sending, by the memory device interface, subsequent packets of the signal to the first channel.
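The channel-switching steps can be sketched as a toy model: decode the header, pick the channel for the instruction's memory resource, forward the header on a second channel, then stream the remaining packets on the selected first channel. The opcode layout, resource names, and channel numbering are invented for illustration.

```python
# Toy model of the expander device's channel switching.
CHANNEL_BY_RESOURCE = {"dram": 0, "nvm": 1}   # first-channel selection
MIRROR_CHANNEL = 2                            # second channel for the header

def handle_signal(header, packets, send):
    """Route one host signal: header to the second channel, subsequent
    packets to the first channel chosen from the decoded instruction."""
    opcode = header["opcode"]                 # decode the instruction
    first = CHANNEL_BY_RESOURCE[opcode["resource"]]
    send(MIRROR_CHANNEL, header)              # header to the second channel
    for pkt in packets:                       # remaining packets to first
        send(first, pkt)
    return first
```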
Elastic Multi-Directional Resource Augmentation in a Switched CXL Fabric
Embodiments for communicating using a switch. A first Virtual CXL Switch (VCS) routes messages, conforming to a first CXL protocol, from a first switch port to a Resource Provisioning Unit (RPU). A second VCS routes messages, conforming to a second CXL protocol, from the RPU to a second switch port. The RPU terminates the first and second CXL protocols and translates at least some of the messages conforming to the first CXL protocol to at least some of the messages conforming to the second CXL protocol. Optionally, the first CXL protocol comprises CXL.mem, the second CXL protocol comprises CXL.cache, and the RPU manages snoop and invalidation message flows and maintains transaction order requirements.
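The RPU's terminate-and-translate role can be sketched abstractly: it consumes a message on one protocol, re-issues a translated message on the other, and records issue order. The opcode mapping below is purely an assumed placeholder, not the patented CXL.mem-to-CXL.cache translation.

```python
class RPU:
    """Sketch of a Resource Provisioning Unit: terminates messages on a
    first protocol and re-issues translated messages on a second,
    tracking transaction order with a simple FIFO."""

    def __init__(self, translation):
        self.translation = translation  # first-protocol -> second-protocol
        self.order = []                 # issue order the RPU must maintain

    def forward(self, msg):
        opcode, addr = msg
        out = (self.translation[opcode], addr)  # terminate + translate
        self.order.append(out)                  # preserve ordering
        return out
```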
Network architecture with harmonic connections
A computer network organized in a logical grid having rows and columns can include network nodes coupled according to harmonics. Each network node can be coupled to network nodes of the same row using a set of horizontal strands according to a set of horizontal harmonics. Each of the horizontal harmonics specifies a node distance along the row between adjacent connection points on the corresponding horizontal strand. Each network node can also be coupled to network nodes of the same column using a set of vertical strands according to a set of vertical harmonics. Each of the vertical harmonics specifies a node distance along the column between adjacent connection points on the corresponding vertical strand.
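The horizontal-strand wiring described above can be sketched for one row: a strand with harmonic h touches every h-th node, and each pair of adjacent connection points is linked. The wrap-around at the row's end is an assumption for illustration; the abstract does not specify boundary behavior.

```python
def strand_connections(row, harmonic):
    """Return the (node, node) links one strand makes along a row,
    with adjacent connection points `harmonic` nodes apart
    (wrap-around at the row boundary is assumed)."""
    n = len(row)
    return [(row[i], row[(i + harmonic) % n])
            for i in range(0, n, harmonic)]
```

For a six-node row with harmonic 2, the strand connects nodes 0, 2, and 4; vertical strands would be built the same way over a column.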
SELECTIVE ADAPTIVE ROUTING
Systems, devices, and methods are provided. In one example, a system is described that includes circuits to route data using a first adaptive routing technique; detect a ratio of ingress flows to egress flows is below a threshold; and in response to detecting the ratio of ingress flows to egress flows is below the threshold, switch from routing the data using the first adaptive routing technique to routing the data using a second adaptive routing technique.
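The selection logic above reduces to a small decision function: monitor the ingress/egress flow ratio and fall back from the first adaptive routing technique to the second when the ratio drops below the threshold. The technique names here are placeholders; the patent does not name the two techniques.

```python
def choose_technique(ingress_flows, egress_flows, threshold,
                     first="first adaptive", second="second adaptive"):
    """Switch routing techniques when the ingress/egress flow ratio
    falls below the threshold; otherwise keep the first technique."""
    if egress_flows and ingress_flows / egress_flows < threshold:
        return second   # ratio below threshold: switch techniques
    return first
```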
MULTI-PORT NETWORK INTERFACE CARD (NIC)
Systems, methods, and apparatuses disclosed herein can enable a computing system to connect to a network. These systems, methods, and apparatuses can connect the computing system to the network through multiple network connections. These multiple network connections represent distinct, physically separate signal pathways to improve security. This physical separation, or isolation, creates distinct boundaries between these multiple network connections, for example, to minimize resources shared among these systems, methods, and apparatuses. These distinct boundaries can reduce the vulnerability of these systems, methods, and apparatuses to, for example, attacks that exploit shared resources, data leakage, and/or unauthorized access. Moreover, these distinct boundaries can additionally enhance security by improving fault isolation, enforcing access control, and/or supporting secure communication protocols, among others.
Systems, methods, and apparatuses disclosed herein can enable a computing system to connect to a network. These systems, methods, and apparatuses can connect the computing system to the network through multiple network connections. These multiple network connections represent distinct, physically separate signal pathways to improve security. This physical separation, or isolation, from one another creates distinct boundaries between these multiple network connections, for example, to minimize shared resources these systems, methods, and apparatuses. These distinct boundaries can reduce the vulnerability of these systems, methods, and apparatuses to, for example, attacks that exploit shared sources among these systems, methods, and apparatuses, data leakage within these systems, methods, and apparatuses, and/or unauthorized access to these systems, methods, and apparatuses. Moreover, these distinct boundaries can additionally enhance security by improving fault isolation, enforcing access control, and/or supporting secure communication protocols, among others, within these systems, methods, and apparatuses.