Patent classifications
H04L49/3072
FLOW CONTROL TECHNOLOGIES
Examples described herein relate to a switch that is to receive a message identifying congestion in a second switch; drop the message; generate a pause frame; and cause transmission of the pause frame to at least one sender of packets to a congested queue in the second switch. In some examples, the message includes one or more of: a destination IP address, Differentiated Services Code Point (DSCP) value, or pause duration for the congested queue. In some examples, the DSCP value is to identify a traffic class of the congested queue. In some examples, the pause frame is consistent with Priority Flow Control (PFC) of IEEE 802.1Qbb (2011). In some examples, the switch is to: store, from the message identifying congestion in the second switch, congestion information associated with the congested queue comprising one or more of: destination internet protocol (IP) address, Differentiated Services Code Point (DSCP) value, or pause end time of the congested queue.
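The pause-frame generation described above can be sketched in code. This is a minimal illustration, assuming the IEEE 802.1Qbb PFC frame layout (reserved MAC Control multicast destination, EtherType 0x8808, opcode 0x0101, a priority-enable vector, and eight per-priority pause times in 512-bit-time quanta); the function name and the mapping from the congestion message to a single priority are hypothetical, not taken from the abstract.

```python
import struct

PFC_DMAC = bytes.fromhex("0180c2000001")   # reserved MAC Control multicast address
MACCTRL_ETHERTYPE = 0x8808                 # MAC Control EtherType
PFC_OPCODE = 0x0101                        # Priority-based Flow Control opcode

def build_pfc_frame(src_mac: bytes, priority: int, quanta: int) -> bytes:
    """Build a PFC pause frame that pauses one traffic class.

    priority would be derived from the DSCP value carried in the
    congestion message; quanta from its pause duration.
    """
    enable_vector = 1 << priority          # one bit per paused traffic class
    times = [0] * 8
    times[priority] = quanta               # pause time in 512-bit-time quanta
    body = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *times)
    return PFC_DMAC + src_mac + struct.pack("!H", MACCTRL_ETHERTYPE) + body

# Pause traffic class 3 for the maximum duration.
frame = build_pfc_frame(bytes(6), priority=3, quanta=0xFFFF)
```

The switch receiving the congestion message would send such a frame toward each sender feeding the congested queue, instead of forwarding the message itself.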
WEB SERVER SECURITY
A system (30) for protecting a server (20) from network attacks is provided. The system (30) comprises a data splitter (31) and a parameter extractor (33). The data splitter (31) is configured to receive network communications from a client (10); send network data comprising at least payload information included in the received network communications to the parameter extractor (33); and send network data comprising at least communication state information included in the received network communications to the server (20). The parameter extractor (33) is configured to apply predefined parameter extraction rules to network data received from the data splitter (31) in order to extract parameters, and to forward extracted parameters to the server (20). The system (30) is also configured to enforce unidirectional dataflow over at least part of the network connection path to the server (20) via the parameter extractor (33), such that dataflow to the server (20) over the network connection path is allowed, but dataflow in the opposite direction is not allowed for at least part of the network connection path. A server (20), data splitter (31) and parameter extractor (33) for use with the system (30) are also provided, and a corresponding method for protecting a server (20) from network attacks is provided.
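The parameter extractor's role can be illustrated with a small sketch. The rule set and names below are hypothetical; the point is that only parameters matching predefined extraction rules are forwarded to the server, so arbitrary attacker-controlled payload bytes never reach it.

```python
import re

# Hypothetical predefined extraction rules: parameter name -> pattern
# the value must match. Anything not matching a rule is discarded.
EXTRACTION_RULES = {
    "user": re.compile(r"user=([A-Za-z0-9_]{1,32})"),
    "item": re.compile(r"item=([0-9]{1,10})"),
}

def extract_parameters(payload: str) -> dict:
    """Apply the predefined rules to payload data from the data splitter.

    Only whitelisted, well-formed parameters are returned for forwarding
    to the server over the unidirectional path.
    """
    params = {}
    for name, pattern in EXTRACTION_RULES.items():
        match = pattern.search(payload)
        if match:
            params[name] = match.group(1)
    return params

safe = extract_parameters("GET /?user=alice&item=42&cmd=;rm -rf /")
```

Note that the injected `cmd` parameter is silently dropped because no extraction rule covers it; the unidirectional enforcement described in the abstract would additionally prevent any response leaking back through the extractor.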
DISTRIBUTOR NODE, AUTOMATION NETWORK AND METHOD FOR TRANSMITTING TELEGRAMS
In an automation network comprising a plurality of network segments, both fragmenting subscribers that support a fragmentation method and standard subscribers that do not support it can be present. A distributor node in the automation network has at least one input/output interface that is in communication with a network segment. A switching unit in the distributor node checks whether a subscriber in a network segment to which a telegram is to be sent supports the fragmentation method, and whether the telegram to be sent is fragmented. If the subscriber does not support the fragmentation method and the telegram to be sent is fragmented, the switching unit in the distributor node assembles the telegram fragments into the telegram and then sends the assembled telegram on to the subscriber.
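The distributor node's forwarding decision reduces to a small rule, sketched below under assumed names (the abstract does not specify an API; `send` stands in for the node's output interface):

```python
def forward_telegram(fragments, subscriber_supports_fragmentation, send):
    """Distributor-node forwarding decision (sketch).

    Fragments pass through unchanged to fragmenting subscribers;
    for standard subscribers, the node reassembles first.
    """
    if subscriber_supports_fragmentation or len(fragments) == 1:
        for fragment in fragments:
            send(fragment)                 # pass fragments through unchanged
    else:
        send(b"".join(fragments))          # assemble, then send one telegram

# Example: a fragmented telegram bound for a standard subscriber.
delivered = []
forward_telegram([b"head", b"tail"], False, delivered.append)
```

The same telegram sent to a fragmenting subscriber would arrive as two separate fragments instead of one assembled telegram.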
MULTI-STAGE SWITCHING TOPOLOGY
A novel multi-stage folded Clos network and a linecard for use in such a network are disclosed. The Clos network can consist of three stages: an access stage, a lower stage, and an upper stage. The access stage and the upper stage can include a plurality of switches or conventional access points. The lower stage can include a plurality of linecards. Each linecard can be made of two switch chips, each of which is connected to the ports of the linecard, and both of which contain the same number of ports. Each switch chip can forward information in only one direction: one is used to send in the direction from the access stage to the upper stage, and the other from the upper stage to the access stage. The lower stage can consist of a number of sub-stages, and each sub-stage can consist entirely of either conventional switches or linecards. Accordingly, compared to a conventional Clos network, the provided network can increase the throughput by any power of 2 by replacing the conventional switches used in the lower stage or sub-stages with linecards.
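The "any power of 2" throughput claim follows from each dual-chip linecard dedicating one chip per direction. A one-line model, under the simplifying assumption that each sub-stage converted to linecards independently doubles effective lower-stage throughput:

```python
def lower_stage_throughput(base_throughput: float, linecard_substages: int) -> float:
    """Effective throughput after replacing switches with dual-chip
    linecards in the given number of sub-stages (each doubles capacity,
    since one chip carries each direction)."""
    return base_throughput * (2 ** linecard_substages)

# A lower stage with base throughput 10 units and 3 converted sub-stages.
capacity = lower_stage_throughput(10, 3)
```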
RE-PURPOSING BYTE ENABLES AS CLOCK ENABLES FOR POWER SAVINGS
Systems, apparatuses, and methods for efficient data transfer in a computing system are disclosed. A source generates packets to send across a communication fabric (or fabric) to a destination. The source generates partition enable signals for the partitions of payload data. The source negates the enable signal for a particular partition when the packet type indicates that the partition should have an asserted enable signal in the packet, but the source determines that the partition contains a particular data pattern. Routing components of the fabric disable clock signals to storage elements assigned to store the particular partition. The destination inserts the particular data pattern for the particular partition in the payload data.
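The encode/decode pair implied by the abstract can be sketched in software. The 4-byte partition width and the choice of all-zero bytes as the "particular data pattern" are assumptions for illustration; in hardware the negated enables would clock-gate the fabric's storage elements rather than shorten a byte string.

```python
PARTITION = 4
ZERO = b"\x00" * PARTITION   # assumed "particular data pattern"

def encode(payload: bytes):
    """Source side: negate the enable for partitions holding the pattern,
    and do not transmit their data at all."""
    parts = [payload[i:i + PARTITION] for i in range(0, len(payload), PARTITION)]
    enables = [p != ZERO for p in parts]
    data = b"".join(p for p in parts if p != ZERO)
    return enables, data

def decode(enables, data: bytes) -> bytes:
    """Destination side: re-insert the pattern wherever the enable
    was negated, reconstructing the original payload."""
    out, pos = b"", 0
    for enabled in enables:
        if enabled:
            out += data[pos:pos + PARTITION]
            pos += PARTITION
        else:
            out += ZERO
    return out

payload = b"\x01\x02\x03\x04" + ZERO + b"\xff" * PARTITION
enables, data = encode(payload)
```

The middle partition travels as a single negated enable bit instead of four bytes, which is the source of the power savings: storage elements for that partition are never clocked.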
PACKET STORAGE BASED ON PACKET PROPERTIES
In some examples, a system on chip (SOC) comprises a network switch configured to receive a packet and to identify a flow identifier (ID) corresponding to a header of the packet. The SOC comprises a direct memory access (DMA) controller coupled to the network switch, where the DMA controller is configured to divide the packet into first and second fragments based on the flow ID and to assign a first hardware queue to the first fragment and a second hardware queue to the second fragment, and wherein the DMA controller is further configured to assign memory regions to the first and second fragments based on the first and second hardware queues. The SOC comprises a snoopy cache configured to store the first fragment to the snoopy cache or to memory based on a first cache allocation command, where the first cache allocation command is based on the memory region assigned to the first fragment, where the snoopy cache is further configured to store the second fragment to the snoopy cache or to memory based on a second cache allocation command, and where the second cache allocation command is based on the memory region assigned to the second fragment.
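The flow-ID-driven placement can be sketched as a lookup followed by a split. The per-flow configuration table, field names, and region labels below are hypothetical; the abstract specifies only that the split and the memory-region assignment both derive from the flow ID via the hardware queues.

```python
# Hypothetical per-flow configuration: where to split the packet and
# which memory region each hardware queue maps to.
FLOW_CONFIG = {
    7: {
        "split_at": 64,                       # header bytes vs. payload bytes
        "queues": (0, 1),                     # first and second hardware queues
        "regions": {0: "snoopy_cache", 1: "ddr"},
    },
}

def place_packet(flow_id: int, packet: bytes):
    """DMA-controller sketch: divide the packet per its flow ID and
    return (fragment, queue, region) placements for each piece."""
    cfg = FLOW_CONFIG[flow_id]
    first, second = packet[:cfg["split_at"]], packet[cfg["split_at"]:]
    q1, q2 = cfg["queues"]
    return [
        (first, q1, cfg["regions"][q1]),
        (second, q2, cfg["regions"][q2]),
    ]

placements = place_packet(7, bytes(100))
```

The region string would drive the cache allocation command: a fragment mapped to the cache region is stored in the snoopy cache, while the other fragment goes straight to memory.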
ACCURATE ANALYTICS, QUALITY OF SERVICE AND LOAD BALANCING FOR INTERNET PROTOCOL FRAGMENTED PACKETS IN DATA CENTER FABRICS
A network device receives a fragmented packet of an internet protocol (IP) packet. The fragmented packet is subsequently received relative to an initial fragmented packet of the IP packet and includes a first set of tuple information. The network device determines an entry of a hash table associated with the IP packet, based on the first set of tuple information and a fragment identifier (ID) within the fragmented packet. The network device retrieves a second set of tuple information associated with the fragmented packet from the hash table entry, and transmits an indication of the first and second sets of tuple information.
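The hash-table mechanism exists because only the initial fragment of an IP packet carries the transport-layer ports; later fragments carry just the IP header fields and the fragment ID. A minimal sketch, with a plain dict standing in for the device's hash table:

```python
frag_table = {}

def handle_fragment(src_ip, dst_ip, proto, frag_id, l4_ports=None):
    """Recover full tuple information for every fragment of an IP packet.

    The initial fragment (which carries l4_ports) populates the table;
    subsequent fragments, keyed by the same IP fields and fragment ID,
    look the ports back up so analytics/QoS/load balancing see the
    complete 5-tuple.
    """
    key = (src_ip, dst_ip, proto, frag_id)
    if l4_ports is not None:              # initial fragment: ports present
        frag_table[key] = l4_ports
        return l4_ports
    return frag_table.get(key)            # subsequent fragment: look up

# Initial fragment of UDP packet with fragment ID 99 arrives first.
handle_fragment("10.0.0.1", "10.0.0.2", 17, 99, (5000, 53))
ports = handle_fragment("10.0.0.1", "10.0.0.2", 17, 99)
```

A real device would also age entries out once the last fragment is seen, since fragment IDs are reused.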
Variable-length packet header vectors
Methods and network interface modules for processing packet headers are provided. The method comprises: receiving a packet comprising a header and a payload; generating, using the header, an initial packet header vector (PHV); providing the initial PHV to a pipeline comprising a plurality of processing stages; and processing the initial PHV in the pipeline, wherein the processing comprises, for a current processing stage in the plurality of processing stages: receiving, by the current processing stage, an input PHV, wherein the input PHV (i) is the initial PHV or a modified version of the initial PHV and (ii) comprises one or more flits, and applying a feature to the input PHV to generate an output PHV, including increasing an initial length of the input PHV if the initial length is not sufficient to apply the feature.
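The grow-on-demand behavior of the PHV can be sketched as a byte vector that always occupies a whole number of flits. The 64-byte flit size and the write-field stand-in for "applying a feature" are assumptions for illustration:

```python
FLIT_BYTES = 64   # assumed flit size

class PHV:
    """Variable-length packet header vector made of whole flits."""

    def __init__(self, data: bytes):
        self.data = bytearray(data)

    def flits(self) -> int:
        return (len(self.data) + FLIT_BYTES - 1) // FLIT_BYTES

    def apply_feature(self, offset: int, value: bytes) -> None:
        """Write a field into the PHV; if the current length is not
        sufficient, grow the vector by whole flits first."""
        needed = offset + len(value)
        if needed > len(self.data):
            shortfall = needed - len(self.data)
            grow = -(-shortfall // FLIT_BYTES) * FLIT_BYTES   # round up
            self.data += bytearray(grow)
        self.data[offset:offset + len(value)] = value

# A one-flit PHV; writing a field that spills past byte 64 adds a flit.
phv = PHV(bytes(64))
phv.apply_feature(60, b"\x01" * 8)
```

Each pipeline stage would receive such a vector as its input PHV and emit the (possibly longer) output PHV to the next stage.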
EFFICIENT PACKET QUEUEING FOR COMPUTER NETWORKS
A method during a first cycle includes receiving, at a first port of a device, a plurality of network packets. The method may include storing, by the device, at least some portion of a first packet of the plurality of network packets at a first address within a first record bank and, concurrent with storing the at least some portion of the first packet at the first address, storing, by the device, at least some portion of a second packet of the plurality of network packets at a second address within a second record bank different than the first record bank. The method may further include storing, by the device, the first address within the first record bank and the second address within the second record bank in a first link stash associated with the first record bank, and updating, by the device, a tail pointer to reference the second address.
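A software model can make the banking and link-stash idea concrete. The class below is a sketch under stated assumptions: addresses alternate between two banks so two packets can be written in the same cycle, and the link stash caches the most recent link so chaining records does not require an extra bank access. Names are illustrative, not from the abstract.

```python
class PacketQueue:
    """Queue of records spread across two banks, chained by a link stash."""

    def __init__(self):
        self.banks = [{}, {}]       # two record banks, written concurrently
        self.links = {}             # addr -> next addr (the queue's chain)
        self.stash = None           # (bank, addr) of the newest record
        self.head = None
        self.tail = None            # tail pointer
        self.next_addr = 0

    def enqueue(self, record: bytes) -> None:
        bank = self.next_addr % 2   # alternate banks for same-cycle writes
        addr = self.next_addr
        self.banks[bank][addr] = record
        if self.tail is None:
            self.head = addr
        else:
            self.links[self.tail] = addr   # chain previous record to this one
        self.stash = (bank, addr)          # link stash notes the latest write
        self.tail = addr                   # tail pointer references new addr
        self.next_addr += 1

# Two packets arriving in the same cycle land in different banks.
q = PacketQueue()
q.enqueue(b"pkt1")
q.enqueue(b"pkt2")
```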
Multi-processor/endpoint data splitting system
A multi-endpoint adapter device includes a splitter device that is coupled to a network port and a plurality of endpoint subsystems that are each coupled to a processing subsystem. The splitter device receives, via the network port, a first data payload, and identifies both a first data sub-payload that is included in the first data payload and that is associated with a first endpoint subsystem included in the plurality of endpoint subsystems and a second data sub-payload that is included in the first data payload and that is associated with a second endpoint subsystem included in the plurality of endpoint subsystems. The splitter device then splits the first data payload into the first data sub-payload and the second data sub-payload, and forwards both the first data sub-payload to the first endpoint subsystem and the second data sub-payload to the second endpoint subsystem.
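The splitter's behavior can be sketched with an assumed framing. The abstract does not say how sub-payloads are delimited, so the record format below (a one-byte endpoint ID and one-byte length before each sub-payload) is hypothetical:

```python
def split_payload(payload: bytes) -> dict:
    """Splitter-device sketch: identify each sub-payload in the combined
    payload and group it by target endpoint.

    Assumed framing per record: [endpoint_id:1][length:1][data:length].
    """
    sub_payloads, i = {}, 0
    while i < len(payload):
        endpoint, length = payload[i], payload[i + 1]
        sub_payloads.setdefault(endpoint, b"")
        sub_payloads[endpoint] += payload[i + 2:i + 2 + length]
        i += 2 + length
    return sub_payloads

# One payload carrying data for endpoints 1 and 2.
combined = bytes([1, 3]) + b"abc" + bytes([2, 2]) + b"xy"
routed = split_payload(combined)
```

Each entry would then be forwarded to its endpoint subsystem, and thus to the processing subsystem coupled to that endpoint.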