Patent classifications
H04L49/9047
SYSTEM AND METHOD FOR DATA LOSS AND DATA LATENCY MANAGEMENT IN A NETWORK-ON-CHIP WITH BUFFERED SWITCHES
A buffered switch system for end-to-end data congestion and traffic drop prevention. More specifically, and without limitation, the various aspects and embodiments of the invention relate to the management of buffered switches to avoid the trade-off among buffer sizing, latency, and traffic drop.
Distributed Contiguous Reads in a Network on a Chip Architecture
Systems and techniques for network on a chip based computer architectures and distributing data without shared pointers therein are described. A described system includes computing resources and a memory resource configured to maintain a dedicated memory region of the memory resource for distributed read operations requested by the computing resources. The computing resources can generate a first packet to fetch data from the dedicated memory region without using memory addresses of the respective data elements. The memory resource can receive the first packet, determine whether it indicates a distributed read operation, and determine that the dedicated memory region is non-empty. Further, the memory resource can fetch one or more data elements from the dedicated memory region based on the first packet indicating the distributed read operation and the dedicated memory region being non-empty, and send a packet that includes the one or more fetched data elements.
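The memory-resource logic above can be illustrated with a short sketch: a dedicated region modeled as a FIFO is drained by "distributed read" packets that carry no element addresses. The names (`DIST_READ`, the packet fields, the `count` parameter) are assumptions for illustration, not the patent's actual interfaces.

```python
from collections import deque

DIST_READ = 0x1  # assumed opcode marking a distributed read request


class MemoryResource:
    def __init__(self):
        # Dedicated memory region modeled as a FIFO of data elements.
        self.dedicated_region = deque()

    def handle_packet(self, packet):
        """Serve a distributed read without per-element addresses."""
        if packet.get("op") != DIST_READ:
            return None  # not a distributed read; handled elsewhere
        if not self.dedicated_region:
            return None  # region empty; nothing to fetch
        count = packet.get("count", 1)
        elements = [self.dedicated_region.popleft()
                    for _ in range(min(count, len(self.dedicated_region)))]
        # Response packet carries the fetched elements back to the requester.
        return {"dst": packet["src"], "data": elements}
```

Because the region is a queue, concurrent requesters each receive distinct elements without coordinating on addresses, which is the point of the distributed read.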
METHOD AND APPARATUS FOR ACCELERATING VM-TO-VM NETWORK TRAFFIC USING CPU CACHE
Methods and apparatus for accelerating VM-to-VM network traffic using CPU cache. A virtual queue manager (VQM) manages data that is to be kept in VM-VM shared data buffers in CPU cache. The VQM stores a list of VM-VM allow entries identifying data transfers between VMs that may use VM-VM cache “fast-path” forwarding. Packets are sent from VMs to the VQM for forwarding to destination VMs. Indicia in the packets (e.g., in a tag or header) are inspected to determine whether a packet is to be forwarded via the VM-VM cache fast path or via a virtual switch. The VQM tracks which VM data is already in the CPU cache domain while coordinating data movement to and from external shared memory, and also ensures coherency between the data kept in cache and the data kept in shared memory.
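The forwarding decision can be sketched as a lookup against the allow list: packets whose (source VM, destination VM) pair appears in the list take the cache fast path, and all others fall back to the virtual switch. All class and field names here are illustrative assumptions, not the patent's actual API.

```python
class VirtualQueueManager:
    def __init__(self, allow_entries):
        self.allow = set(allow_entries)   # {(src_vm, dst_vm), ...}
        self.cache_buffers = {}           # VM-VM shared buffers kept cache-resident

    def forward(self, packet):
        """Route a packet via the cache fast path or the virtual switch."""
        key = (packet["src_vm"], packet["dst_vm"])
        if key in self.allow:
            # Fast path: append to the shared buffer for this VM pair.
            self.cache_buffers.setdefault(key, []).append(packet["payload"])
            return "fast-path"
        return "virtual-switch"  # slow path via the vSwitch
```

The fast path avoids a round trip through the virtual switch, so allowed VM pairs exchange data at roughly cache rather than memory latency.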
Detecting attacks using passive network monitoring
Embodiments are directed to detecting one or more attacks in a network. One or more network flows may be monitored using one or more network monitoring computers (NMCs). If one or more file write operations are detected based on information included in one or more packets of the one or more network flows, one or more detection rules may be executed to analyze one or more portions of the one or more packets to identify file information that is associated with the one or more file write operations. One or more metrics may be provided based on the one or more detection rules and one or more of the file information, the one or more file write operations, or the like. If one or more metrics exceed one or more threshold values, one or more reports of one or more attacks may be provided.
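The metric-and-threshold step can be sketched as follows: each detection rule computes a metric over the observed file-write operations, and any metric that exceeds its threshold produces an attack report. The rule shown (counting writes with a suspicious extension, a rough ransomware indicator) and all names are invented for illustration.

```python
def detect_attacks(file_writes, rules, thresholds):
    """Return a list of attack reports for metrics above threshold."""
    reports = []
    for name, rule in rules.items():
        metric = rule(file_writes)          # e.g. writes/sec, entropy, ...
        if metric > thresholds.get(name, float("inf")):
            reports.append({"rule": name, "metric": metric})
    return reports


def encrypted_extension_writes(writes):
    """Example rule: count writes to files with a suspicious extension."""
    return sum(1 for w in writes if w["path"].endswith(".locked"))
```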
ROUTERLESS NETWORKS-ON-CHIP
The disclosed technology concerns methods, apparatus, and systems for designing and generating networks-on-chip (“NoCs”), as well as hardware architectures for implementing such NoCs. The disclosed NoCs can be used, for instance, to interconnect cores of a chip multiprocessor (aka a “multi-core processor”). In one example implementation, a wire-based routerless NoC design is disclosed that uses deterministically specified wire loops to connect the cores of the chip multiprocessor. The disclosed technology also includes network interface architectures for use in an NoC. For example, a core can be equipped with a low-area-cost interface that is deadlock-free, uses buffer sharing, and provides low latency.
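Routing on one such wire loop can be illustrated with a toy sketch: with no routers, a flit simply rides a unidirectional loop until it reaches its destination core, so its hop count is fixed by the loop's ordering. The loop membership and ordering below are invented for illustration.

```python
def hops_on_loop(loop, src, dst):
    """Number of hops from src to dst along a unidirectional wire loop."""
    i, j = loop.index(src), loop.index(dst)
    # Modular distance: the flit traverses the loop in one direction only.
    return (j - i) % len(loop)


# e.g. a 4-core loop 0 -> 1 -> 2 -> 3 -> 0
```

Because the loops are specified deterministically at design time, such distances (and hence worst-case latencies) are known statically, with no per-hop routing decisions.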
DYNAMIC BUFFER ALLOCATION
The present disclosure relates to a switch for a network, and specifically the dynamic allocation of buffer memory within the switch. A communication channel is established between the switch and a network device. The switch configures and allocates a portion of memory to a receive socket buffer for the established channel. Upon receipt of a signal from the network device, the switch allocates a second portion of memory to the receive socket buffer.
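The two-stage allocation described above can be sketched as: an initial receive socket buffer is carved out of the switch's buffer pool at channel setup, and a second portion is granted when the network device signals. The sizes and method names are illustrative assumptions.

```python
INITIAL_BYTES = 64 * 1024   # assumed first allocation at channel setup
GROW_BYTES = 256 * 1024     # assumed second allocation on signal


class Switch:
    def __init__(self, total_buffer_bytes):
        self.free = total_buffer_bytes
        self.sockets = {}  # channel id -> allocated receive-buffer bytes

    def open_channel(self, chan_id):
        """Establish a channel and allocate its initial receive buffer."""
        assert self.free >= INITIAL_BYTES, "buffer pool exhausted"
        self.free -= INITIAL_BYTES
        self.sockets[chan_id] = INITIAL_BYTES

    def on_signal(self, chan_id):
        """Peer signaled for more buffering; allocate a second portion."""
        grant = min(GROW_BYTES, self.free)
        self.free -= grant
        self.sockets[chan_id] += grant
        return grant
```

Allocating lazily on signal, rather than sizing every socket buffer for the worst case up front, is what lets a fixed pool serve many channels.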
Communication apparatus, system, rollback method, and non-transitory medium
A communication apparatus comprises a rollback control unit to create a second process to which a currently working first process is rolled back; a storage to store states shared by the first and second processes, the second process taking over the state(s) stored in the storage; a buffer; and a timing control unit that controls the timing of rollback. The rollback control unit starts event buffering to store in the buffer all event(s) received while the first process is processing and destined to the first process; upon completion of the processing of the event by the first process, the rollback control unit switches the working process from the first process to the second process, sends the event(s) stored in the buffer since the start of event buffering to the second process, and stops event buffering.
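The rollback sequence can be sketched as: events arriving while the first process finishes its in-flight work are buffered, then the roles are switched and the buffered events are replayed to the second process in arrival order. The process objects and method names below are assumptions for illustration.

```python
class RollbackController:
    def __init__(self, first, second):
        self.working, self.standby = first, second
        self.buffer = []
        self.buffering = False

    def begin_rollback(self):
        """Start capturing events destined for the working process."""
        self.buffering = True

    def deliver(self, event):
        if self.buffering:
            self.buffer.append(event)     # hold until switchover
        else:
            self.working.handle(event)

    def complete_rollback(self):
        """First process finished its in-flight event: switch over."""
        self.working, self.standby = self.standby, self.working
        for event in self.buffer:          # replay in arrival order
            self.working.handle(event)
        self.buffer.clear()
        self.buffering = False
```

Buffering until the in-flight event completes is what keeps the switchover lossless: no event is dropped and none is processed by both processes.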
SYSTEM AND METHOD FOR FACILITATING EFFICIENT PACKET FORWARDING IN A NETWORK INTERFACE CONTROLLER (NIC)
A network interface controller (NIC) capable of efficient packet forwarding is provided. The NIC can be equipped with a host interface, a packet generation logic block, and a forwarding logic block. During operation, the packet generation logic block can obtain, via the host interface, a message from the host device destined for a remote device. The packet generation logic block may generate a plurality of packets for the remote device from the message. The forwarding logic block can then send a first subset of packets of the plurality of packets based on ordered delivery. If a first condition is met, the forwarding logic block can send a second subset of packets of the plurality of packets based on unordered delivery. Furthermore, if a second condition is met, the forwarding logic block can send a third subset of packets of the plurality of packets based on ordered delivery.
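The ordered/unordered/ordered pattern can be sketched by tagging each packet of a message with its delivery mode. The subset boundaries and the conditions here are placeholders, not the patent's actual criteria.

```python
def plan_delivery(packets, cond1, cond2, head=2, tail=1):
    """Tag each packet of a message with its delivery mode."""
    # First subset: always ordered delivery.
    plan = [(p, "ordered") for p in packets[:head]]
    middle = packets[head:len(packets) - tail]
    last = packets[len(packets) - tail:]
    # Second subset: unordered only when the first condition holds.
    plan += [(p, "unordered" if cond1 else "ordered") for p in middle]
    # Third subset: ordered when the second condition holds.
    plan += [(p, "ordered" if cond2 else "unordered") for p in last]
    return plan
```

Sending the bulk of a message unordered lets packets take multiple paths for throughput, while the ordered head and tail bound how much reordering the receiver must absorb.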
Programmatically configured switches and distributed buffering across fabric interconnect
Programmable switches and routers are described herein for enabling their internal network fabric to be configured with a topology. In one implementation, a programmable switch is arranged in a network having a plurality of switches and an internal fabric. The programmable switch includes a plurality of programmable interfaces and a buffer memory component. Also, the programmable switch includes a processing component configured to establish each of the plurality of programmable interfaces to operate as either a user-facing interface or a fabric-facing interface. Based on one or more programmable interfaces being established as fabric-facing interfaces, the buffer memory component is configured to store packets received from a user-facing interface of an interconnected switch of the plurality of switches via one or more hops into the internal fabric.
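The per-interface role programming can be sketched as follows: each interface is set to be user-facing or fabric-facing, and packets arriving on fabric-facing interfaces are held in the switch's buffer memory, which is how buffering ends up distributed across the fabric. The class and method names are assumptions for illustration.

```python
USER, FABRIC = "user-facing", "fabric-facing"


class ProgrammableSwitch:
    def __init__(self, num_interfaces):
        self.roles = {i: USER for i in range(num_interfaces)}
        self.buffer = []  # buffer memory component

    def set_role(self, iface, role):
        """Program an interface as user-facing or fabric-facing."""
        assert role in (USER, FABRIC)
        self.roles[iface] = role

    def receive(self, iface, packet):
        if self.roles[iface] == FABRIC:
            # Packets arriving from the internal fabric are buffered here,
            # contributing this switch's memory to the shared fabric buffer.
            self.buffer.append(packet)
            return "buffered"
        return "forwarded"
```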