Patent classifications
H04L49/3072
Device and method for providing data
Embodiments of a device and a method for providing data are disclosed. In an embodiment, a device includes a processing system configured to split the data of a request into messages based on a node of the data, such that each message fits a supported size, and to provide the messages that include the data of the request to a communications interface.
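To make the splitting step concrete, here is a minimal Python sketch, assuming the data arrives as a list of serialized nodes and that splitting only ever happens at node boundaries; all names are hypothetical and not taken from the patent.

```python
def split_into_messages(nodes, max_size):
    """Pack serialized data nodes into messages that each fit max_size.

    `nodes` is a list of byte strings, one per data node; splitting happens
    only at node boundaries, mirroring the abstract's splitting "based on
    a node of the data". Hypothetical helper, not the patented method.
    """
    messages, current = [], b""
    for node in nodes:
        if len(node) > max_size:
            raise ValueError("single node exceeds supported message size")
        if len(current) + len(node) > max_size:
            messages.append(current)
            current = b""
        current += node
    if current:
        messages.append(current)
    return messages

# Example: three serialized nodes packed into messages of at most 8 bytes.
print(split_into_messages([b"abcd", b"efg", b"hijkl"], 8))
# -> [b'abcdefg', b'hijkl']
```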
Multi-stage switching topology
A novel multi-stage folded Clos network and a linecard for use in a network are disclosed. The Clos network can consist of three stages: an access stage, a lower stage, and an upper stage. The access stage and the upper stage can include a plurality of switches or conventional access points. The lower stage can include a plurality of linecards. Each linecard can be made of two switch chips, each of which is connected to the ports of the linecard and contains the same number of ports. Each switch chip can forward information in only one direction: one carries traffic from the access stage to the upper stage, and the other from the upper stage to the access stage. The lower stage can consist of a number of sub-stages, each of which can consist entirely of either conventional switches or linecards. Accordingly, compared to a conventional Clos network, the provided network can increase throughput by any power of 2 by replacing the conventional switches used in the lower stage or sub-stages with linecards.
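The linecard construction and the power-of-two throughput claim can be modeled in a few lines. The sketch below is an illustration under assumed names (SwitchChip, Linecard), not the patented design:

```python
from dataclasses import dataclass

@dataclass
class SwitchChip:
    ports: int
    direction: str  # "up" = access -> upper stage, "down" = upper -> access

@dataclass
class Linecard:
    ports: int

    def __post_init__(self):
        # One chip per forwarding direction, each wired to all linecard ports.
        self.up_chip = SwitchChip(self.ports, "up")
        self.down_chip = SwitchChip(self.ports, "down")

def throughput_multiplier(substages_with_linecards: int) -> int:
    """Each lower-stage sub-stage converted from conventional switches to
    linecards doubles throughput, so converting k sub-stages gives 2**k."""
    return 2 ** substages_with_linecards

lc = Linecard(ports=48)
print(lc.up_chip.direction, lc.down_chip.direction)  # up down
print(throughput_multiplier(3))                      # 3 sub-stages -> 8x
```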
Re-purposing byte enables as clock enables for power savings
Systems, apparatuses, and methods for efficient data transfer in a computing system are disclosed. A source generates packets to send across a communication fabric (or fabric) to a destination, and generates partition enable signals for the partitions of payload data. The source negates the enable signal for a particular partition when it determines that the packet type calls for that partition to have an asserted enable signal in the packet, but that the partition contains a particular data pattern. Routing components of the fabric disable clock signals to the storage elements assigned to store that partition. The destination re-inserts the particular data pattern for the partition into the payload data.
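A rough software analogue of the source and destination behavior, assuming 4-byte partitions and taking the "particular data pattern" to be all zeros (the abstract does not fix the pattern):

```python
KNOWN_PATTERN = b"\x00" * 4  # assumed 4-byte all-zero partitions

def encode(partitions, type_mask):
    """Source side: negate the enable for partitions holding the known
    pattern, even when the packet type (type_mask) would assert them.
    Only enabled partitions travel through the fabric; routing components
    can gate clocks to the storage for the disabled ones."""
    enables, payload = [], []
    for part, type_en in zip(partitions, type_mask):
        en = type_en and part != KNOWN_PATTERN
        enables.append(en)
        if en:
            payload.append(part)
    return enables, payload

def decode(enables, payload):
    """Destination side: re-insert the known pattern for partitions whose
    enable was negated by the source."""
    it = iter(payload)
    return [next(it) if en else KNOWN_PATTERN for en in enables]

parts = [b"\x01\x02\x03\x04", b"\x00\x00\x00\x00", b"\xff\xff\xff\xff"]
enables, payload = encode(parts, [True, True, True])
assert decode(enables, payload) == parts  # zero partition never traveled
```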
Distributor node, automation network and method for transmitting telegrams
In an automation network comprising a plurality of network segments, both fragmenting subscribers that support a fragmentation method and standard subscribers that do not support the fragmentation method can be present. A distributor node in the automation network has at least one input/output interface in communication with a network segment. A switching unit in the distributor node checks whether a subscriber in a network segment to which a telegram is to be sent supports the fragmentation method, and whether the telegram to be sent is fragmented. If the subscriber does not support the fragmentation method and the telegram to be sent is fragmented, the switching unit assembles the telegram fragments into the complete telegram and then sends the assembled telegram on to the subscriber.
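The switching-unit decision reduces to a small conditional; a sketch, with a hypothetical send callback standing in for the output interface:

```python
def forward_telegram(fragments, subscriber_supports_fragmentation, send):
    """Switching-unit logic as described above: if the subscriber cannot
    handle fragments and the telegram arrives fragmented, reassemble before
    forwarding; otherwise pass the fragments through unchanged.
    `send` is a hypothetical transmit callback for the output interface."""
    is_fragmented = len(fragments) > 1
    if is_fragmented and not subscriber_supports_fragmentation:
        send(b"".join(fragments))   # reassembled telegram for a standard subscriber
    else:
        for frag in fragments:      # fragmenting subscriber: forward as-is
            send(frag)

# A standard subscriber receives one reassembled telegram.
forward_telegram([b"part1-", b"part2"], False, lambda t: print(t))
```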
Packet storage based on packet properties
In some examples, a system on chip (SOC) comprises a network switch configured to receive a packet and to identify a flow identifier (ID) corresponding to a header of the packet. The SOC comprises a direct memory access (DMA) controller coupled to the network switch, where the DMA controller is configured to divide the packet into first and second fragments based on the flow ID, to assign a first hardware queue to the first fragment and a second hardware queue to the second fragment, and to assign memory regions to the first and second fragments based on the first and second hardware queues. The SOC comprises a snoopy cache configured to store the first fragment to the snoopy cache or to memory based on a first cache allocation command, and to store the second fragment to the snoopy cache or to memory based on a second cache allocation command, where the first and second cache allocation commands are based on the memory regions assigned to the first and second fragments, respectively.
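A toy model of the DMA controller's division-and-assignment chain (flow ID -> fragments -> hardware queues -> memory regions -> cache allocation commands); the mapping tables here are invented for illustration:

```python
# Illustrative mapping tables; a real SOC would derive these from
# configuration registers. All names are hypothetical.
FLOW_SPLIT_OFFSET = {7: 64}          # flow ID -> header/payload split point
QUEUE_TO_REGION = {0: "region_hdr", 1: "region_payload"}
REGION_TO_CACHE_CMD = {"region_hdr": "allocate", "region_payload": "bypass"}

def dma_divide(packet: bytes, flow_id: int):
    """Divide a packet into two fragments based on its flow ID, then assign
    each fragment a hardware queue, a memory region derived from that queue,
    and the cache allocation command the snoopy cache will honor."""
    split = FLOW_SPLIT_OFFSET.get(flow_id, len(packet))
    fragments = [packet[:split], packet[split:]]
    plan = []
    for queue, frag in enumerate(fragments):
        region = QUEUE_TO_REGION[queue]
        plan.append((frag, queue, region, REGION_TO_CACHE_CMD[region]))
    return plan

for frag, q, region, cmd in dma_divide(b"H" * 64 + b"P" * 128, flow_id=7):
    print(len(frag), q, region, cmd)
# 64 0 region_hdr allocate
# 128 1 region_payload bypass
```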
Zero-copy processing
In one embodiment, a system includes a peripheral device comprising a memory access interface, a data processing unit memory, a data processing unit, and packet processing circuitry. The memory access interface receives from a host device the headers of packets, while the corresponding payloads remain stored in a host memory of the host device, together with descriptors indicating the respective locations in the host memory at which those payloads are stored. The data processing unit memory stores the received headers and the descriptors without the payloads, and the data processing unit processes the received headers. Upon completion of that processing, the peripheral device fetches the payloads over the memory access interface from the respective host memory locations indicated by the descriptors. The packet processing circuitry then receives the headers and payloads and processes the packets.
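The ordering constraint (headers and descriptors first, payload fetch only after header processing) can be sketched as follows, with a dict standing in for host memory reachable over the memory access interface; class and method names are assumptions:

```python
class PeripheralDevice:
    """Sketch of the zero-copy flow above: the DPU memory holds only headers
    and descriptors; payloads stay in host memory until header processing
    completes, then are fetched via the (simulated) memory access interface."""

    def __init__(self, host_memory):
        self.host_memory = host_memory   # stands in for DMA access to the host
        self.dpu_memory = []             # headers + descriptors, no payloads

    def receive(self, header, descriptor):
        self.dpu_memory.append((header, descriptor))

    def process_and_fetch(self, process_header):
        packets = []
        for header, descriptor in self.dpu_memory:
            process_header(header)                  # DPU work happens first
            payload = self.host_memory[descriptor]  # fetch only afterwards
            packets.append((header, payload))       # to packet processing circuitry
        return packets

host_mem = {0x1000: b"payload-A", 0x2000: b"payload-B"}
dev = PeripheralDevice(host_mem)
dev.receive(b"hdr-A", 0x1000)
dev.receive(b"hdr-B", 0x2000)
print(dev.process_and_fetch(lambda h: None))
```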
Network interface device
A network interface device having an FPGA for providing an FPGA application. A first interface between a host computing device and the FPGA application is provided, allowing the FPGA application to make use of data-path operations provided by a transport engine on the network interface device, as well as to communicate with the host. The FPGA application exchanges data with the host via a memory that is mapped to a shared memory location in the host computing device, whilst the transport engine exchanges data packets with the host via a second memory. A second interface is provided to interface the FPGA application and the transport engine with the network, wherein the second interface is configured to back-pressure the transport engine.
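One way to picture the back-pressure behavior of the second interface, under assumed buffer capacity and method names:

```python
from collections import deque

class NetworkSideInterface:
    """Toy model of the second interface: it queues frames toward the FPGA
    application and back-pressures the transport engine when its buffer is
    full. Capacity and method names are illustrative only."""

    def __init__(self, capacity=4):
        self.buffer = deque()
        self.capacity = capacity

    def ready(self) -> bool:
        # The transport engine checks this before pushing; False = back-pressure.
        return len(self.buffer) < self.capacity

    def push(self, frame: bytes) -> bool:
        if not self.ready():
            return False           # engine stalls and retries later
        self.buffer.append(frame)
        return True

iface = NetworkSideInterface(capacity=2)
print([iface.push(b"f%d" % i) for i in range(3)])  # [True, True, False]
```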
Flow control for a multiple flow control unit interface
Implementations of the present disclosure are directed to systems and methods for flow control using a multiple-flit interface. A credit return field is used in a credit-based flow control system to indicate that one or more credits are being returned to a sending device from a receiving device. Based on the number of credits available, the sending device determines whether to send the packet or to wait until more credits are returned. The amount of buffer space used by the receiver to store the packet is determined by the number of transfer cycles used to receive the packet, not the number of flits comprising the packet; this is enabled by making the buffer as wide as the bus. The receiver returns credits to the sender based on the number of buffer rows used to store the received packet, not the number of flits comprising the packet.
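A minimal sketch of the row-based credit accounting, assuming a bus four flits wide so that one buffer row holds up to four flits; class names and the fixed width are assumptions:

```python
import math

BUS_WIDTH_FLITS = 4   # assumed: each buffer row is as wide as the bus

class Receiver:
    def __init__(self, rows):
        self.free_rows = rows

    def accept(self, n_flits):
        """Buffer usage is counted in rows (transfer cycles), not flits,
        because each row is bus-width wide."""
        rows = math.ceil(n_flits / BUS_WIDTH_FLITS)
        assert rows <= self.free_rows, "sender exceeded its credits"
        self.free_rows -= rows
        return rows   # credits to return once these rows drain

class Sender:
    def __init__(self, credits):
        self.credits = credits   # one credit per receiver buffer row

    def try_send(self, n_flits, receiver):
        rows = math.ceil(n_flits / BUS_WIDTH_FLITS)
        if rows > self.credits:
            return False          # wait for the credit return field
        self.credits -= rows
        receiver.accept(n_flits)
        return True

rx, tx = Receiver(rows=2), Sender(credits=2)
print(tx.try_send(5, rx))   # 5 flits occupy 2 rows, so 2 credits: sent
print(tx.try_send(1, rx))   # no credits left: must wait
```

Note that the 5-flit packet consumes two credits, matching the rows it occupies rather than its flit count.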
Web server security
A system (30) for protecting a server (20) from network attacks is provided. The system (30) comprises a data splitter (31) and a parameter extractor (33). The data splitter (31) is configured to receive network communications from a client (10); send network data comprising at least payload information included in the received network communications to the parameter extractor (33); and send network data comprising at least communication state information included in the received network communications to the server (20). The parameter extractor (33) is configured to apply predefined parameter extraction rules to network data received from the data splitter (31) in order to extract parameters, and to forward extracted parameters to the server (20). The system (30) is also configured to enforce unidirectional dataflow over at least part of the network connection path to the server (20) via the parameter extractor (33), such that dataflow to the server (20) over the network connection path is allowed, but dataflow in the opposite direction is not allowed for at least part of the network connection path. A server (20), data splitter (31) and parameter extractor (33) for use with the system (30) are also provided, and a corresponding method for protecting a server (20) from network attacks is provided.
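The splitter/extractor flow can be sketched with one-way callbacks standing in for the unidirectional links; the extraction rules and message format here are invented for illustration:

```python
import re

# Hypothetical extraction rules: parameter name -> regex over the payload.
RULES = {"user": re.compile(rb"user=(\w+)"), "id": re.compile(rb"id=(\d+)")}

def data_splitter(message, to_extractor, to_server):
    """Split a client message: the payload goes to the parameter extractor,
    communication state (here just the header portion) goes to the server.
    Both callbacks model one-way links; nothing flows back toward the
    client through the extractor path."""
    header, _, payload = message.partition(b"\r\n\r\n")
    to_server(header)
    to_extractor(payload)

def parameter_extractor(payload, to_server):
    """Apply the predefined rules and forward only the extracted parameters."""
    for name, rule in RULES.items():
        m = rule.search(payload)
        if m:
            to_server((name, m.group(1)))

data_splitter(b"POST /login HTTP/1.1\r\n\r\nuser=alice&id=42",
              lambda p: parameter_extractor(p, print),
              print)
```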