H04L49/9084

Mechanisms for preventing IPID overflow in computer networks
10630610 · 2020-04-21

In an embodiment, a computer-implemented method for preventing an Internet Protocol identifier (IPID) overflow in computer networks is disclosed. In an embodiment, the method comprises: receiving, by an edge service gateway, a first packet that requires fragmenting, and determining whether the edge service gateway is configured to prevent IPID overflow. In response to determining that the edge service gateway is configured to prevent IPID overflow, a plurality of packet fragments for the first packet is created based on, at least in part, contents of the first packet. A packet fragment, of the plurality of packet fragments, comprises an IP header, an additional header, and a portion of the first packet, wherein an additional key field in the additional header and an IPID field in the IP header of the packet fragment cumulatively store a packet sequence number for the first packet.
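The idea of cumulatively storing a sequence number across the IPID field and an additional key field can be sketched as follows; the 16-bit/16-bit split and the function names are illustrative assumptions, not taken from the claims.

```python
# Sketch: split a packet sequence number across the 16-bit IPID field
# and an assumed 16-bit key field in an additional header, so sequence
# numbers larger than 2^16 do not wrap the IPID. Field widths are
# illustrative assumptions.

def split_sequence(seq: int) -> tuple[int, int]:
    """Return (key, ipid): high and low 16 bits of a 32-bit sequence."""
    assert 0 <= seq < 2**32
    return (seq >> 16) & 0xFFFF, seq & 0xFFFF

def join_sequence(key: int, ipid: int) -> int:
    """Recombine the two header fields into the full sequence number."""
    return (key << 16) | ipid

seq = 0x0001_F2A3          # a sequence number too large for IPID alone
key, ipid = split_sequence(seq)
assert (key, ipid) == (0x0001, 0xF2A3)
assert join_sequence(key, ipid) == seq
```

Because the receiver reads both fields together, fragments of distinct packets stay distinguishable even after more than 65,536 packets share the same source, destination, and protocol.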

Packet capture ring: reliable, scalable packet capture for security applications

Embodiments are directed to a packet capture ring that provides a single network tap for packet capture and a series of processors (or appliances) for handling serialization and search request processing in a confederated and highly scalable manner. One such appliance (a primary appliance) maintains a tap port to the network. Each packet capture appliance has a locally attached repository that stores raw packets and a juxtaposed index that allows for retrieval of those packets. The primary appliance sends a single copy of encapsulated packets in opposite directions around the ring to its descendants. A designation is made across the system as to a currently designated appliance for servicing requests for indexing and storage of captured packets. This current designation shifts from appliance to appliance in the system, as a previously designated appliance has its storage capacity filled.
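The shifting designation can be modeled as a ring in which the indexing-and-storage role moves to the next appliance once the current one's repository fills; the class names and capacities below are illustrative assumptions.

```python
# Toy model of the shifting designation: when the currently designated
# appliance's repository fills, the role moves to the next appliance in
# the ring. Names and capacities are illustrative assumptions.

class Appliance:
    def __init__(self, name: str, capacity: int):
        self.name, self.capacity, self.used = name, capacity, 0

    def store(self, size: int) -> bool:
        if self.used + size > self.capacity:
            return False          # repository full; refuse the write
        self.used += size
        return True

class CaptureRing:
    def __init__(self, appliances):
        self.ring = appliances
        self.designated = 0       # index of the currently designated appliance

    def index_and_store(self, size: int) -> str:
        # Try the designated appliance; shift the designation when full.
        for _ in range(len(self.ring)):
            if self.ring[self.designated].store(size):
                return self.ring[self.designated].name
            self.designated = (self.designated + 1) % len(self.ring)
        raise RuntimeError("all repositories full")

ring = CaptureRing([Appliance("a", 2), Appliance("b", 2)])
assert [ring.index_and_store(1) for _ in range(4)] == ["a", "a", "b", "b"]
```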

Methods and apparatus for memory resource management in a network device
10594631 · 2020-03-17

Packets that are to be transmitted via a plurality of egress interfaces of a network device are stored in a memory of the network device. The packets are stored in a plurality of queues that respectively correspond to the egress interfaces. The network device determines a set of queues, from among the plurality of queues, for which packet dropping is enabled. The network device determines whether a utilization level of the memory meets a threshold. In response to determining that the utilization level of the memory meets the threshold: the network device randomly or pseudorandomly selects a first queue from the set of queues for which packet dropping is enabled, dequeues a first packet from the selected first queue, and deletes the first packet that was dequeued from the selected first queue.
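The drop policy above can be sketched in a few lines; the memory limit, queue layout, and the restriction to non-empty queues are illustrative assumptions.

```python
import random
from collections import deque

# Sketch of the policy above: when shared-memory utilization meets a
# threshold, (pseudo)randomly select a non-empty drop-enabled queue,
# dequeue its head packet, and delete it. The limit is an assumption.

MEMORY_LIMIT = 4  # total packets the shared memory may hold (assumed)

def enqueue(queues, drop_enabled, qid, packet, rng=random):
    total = sum(len(q) for q in queues.values())
    eligible = [q for q in drop_enabled if queues[q]]  # non-empty only
    if total >= MEMORY_LIMIT and eligible:
        victim = rng.choice(eligible)      # randomly chosen drop queue
        queues[victim].popleft()           # dequeue, then delete
    queues[qid].append(packet)

queues = {0: deque(), 1: deque()}
for i in range(6):
    enqueue(queues, drop_enabled={0}, qid=0, packet=i)
assert sum(len(q) for q in queues.values()) == MEMORY_LIMIT
```

Restricting victims to the drop-enabled set lets latency-sensitive or lossless queues keep their packets while best-effort queues absorb the memory pressure.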

Detecting microbursts

Examples provided herein describe a method for facilitating detection of microbursts in queues. For example, a physical processor of a computing device may dynamically determine, for each queue of a plurality of queues of a network switch, a monitoring threshold based on an amount of usage of a buffer memory by the plurality of queues. The physical processor may detect, for each queue, whether congestion exists based on whether throughput on the queue exceeds the determined monitoring threshold. The physical processor may then report information about a set of queues experiencing microbursts in the network switch based on the detection of congestion for each queue.
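One way to make the per-queue threshold depend on overall buffer usage is to shrink it as the shared buffer fills, so bursts are flagged earlier under pressure; the specific threshold formula below is an assumption for illustration.

```python
# Sketch of the per-queue microburst check: each queue's monitoring
# threshold shrinks as overall buffer usage grows. The threshold
# formula and buffer size are illustrative assumptions.

BUFFER_SIZE = 100

def monitoring_threshold(total_usage: int) -> float:
    # Assumed policy: allow each queue half of the remaining buffer.
    return max(BUFFER_SIZE - total_usage, 0) * 0.5

def detect_microbursts(queue_depths: dict) -> set:
    threshold = monitoring_threshold(sum(queue_depths.values()))
    return {q for q, depth in queue_depths.items() if depth > threshold}

depths = {"q0": 5, "q1": 40, "q2": 15}       # total usage: 60
assert detect_microbursts(depths) == {"q1"}  # threshold = 20
```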

SENDING DATA USING A PLURALITY OF CREDIT POOLS AT THE RECEIVERS
20200076742 · 2020-03-05

Examples relate to methods for sending data between a sender and a receiver coupled by a link. These methods comprise allocating a plurality of credit pools in a buffer on the receiver. These credits represent a portion of memory space in the buffer to store data received from the sender. Then, the sender allocates a number of credits from a plurality of credits to each virtual channel. A number of virtual channels from the plurality of virtual channels is mapped to the credit pools. The sender sends a data block to the receiver through a particular virtual channel when there are enough credits available in at least one of the particular virtual channel and the credit pool to which the particular virtual channel is mapped. The sender then decrements a credit counter associated with the corresponding at least one of the particular virtual channel and the credit pool.
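The send-side credit check can be sketched as follows; the pool sizes, channel names, and the "dedicated credits first, then shared pool" ordering are illustrative assumptions.

```python
# Sketch of the credit check: a data block may be sent on a virtual
# channel when the per-VC credit counter or its mapped shared credit
# pool still has credits; the counter that is used is decremented.
# Pool sizes and the VC-to-pool mapping are illustrative assumptions.

vc_credits = {"vc0": 1, "vc1": 0}
pool_credits = {"pool0": 2}
vc_to_pool = {"vc0": "pool0", "vc1": "pool0"}

def try_send(vc: str) -> bool:
    if vc_credits[vc] > 0:
        vc_credits[vc] -= 1          # spend a dedicated VC credit
        return True
    pool = vc_to_pool[vc]
    if pool_credits[pool] > 0:
        pool_credits[pool] -= 1      # fall back to the shared pool
        return True
    return False                     # no credits: sender must wait

assert try_send("vc0") is True       # uses vc0's dedicated credit
assert try_send("vc1") is True       # vc1 has none, draws on pool0
assert try_send("vc1") is True       # pool0 still has one credit
assert try_send("vc1") is False      # everything exhausted
```

Sharing a pool across several virtual channels lets bursty channels borrow buffer space without reserving worst-case memory per channel.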

Streaming media delivery system
10567453 · 2020-02-18

Streaming media, such as audio or video files, is sent via the Internet. The media are played immediately on a user's computer. Audio/video data is transmitted from the server under control of a transport mechanism. A server buffer is prefilled with a predetermined amount of the audio/video data. When the transport mechanism causes data to be sent to the user's computer, it is sent more rapidly than it is played out by the user system. Audio/video data therefore accumulates in the user buffer, so interruptions in playback, as well as temporary modem delays, are avoided.
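The buffering behavior can be illustrated with a toy simulation in which the send rate exceeds the playback rate; all rates and the single-tick stall are illustrative assumptions.

```python
# Toy illustration of the buffering described above: the server sends
# faster than the client plays out, so the client buffer grows and can
# absorb temporary stalls. Rates and units are illustrative assumptions.

SEND_RATE = 3    # units of media per tick delivered by the server
PLAY_RATE = 2    # units of media per tick consumed by playback

def simulate(ticks: int, stall_at: set) -> list:
    """Return client buffer depth per tick; delivery stalls at stall_at."""
    buffer, depths = 0, []
    for t in range(ticks):
        if t not in stall_at:
            buffer += SEND_RATE
        buffer = max(buffer - PLAY_RATE, 0)
        depths.append(buffer)
    return depths

depths = simulate(5, stall_at={3})   # one-tick network stall
assert depths == [1, 2, 3, 1, 2]
assert all(d > 0 for d in depths)    # playback never starves
```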

Real-time data communication over internet of things network

System(s) and method(s) for real-time data communication over an Internet of Things (IoT) network are described. According to the present subject matter, the system(s) implement the described method(s) for real-time data communication over the IoT network. The method includes encoding, at a source communication device, data to be exchanged between peer sub-layers of IoT entities based on a Forward Error Correction (FEC) context to generate encoded data packets, the IoT entities comprising the source communication device and a destination communication device. The method further includes identifying a time delay to be maintained for transmission of the encoded data packets from the source communication device to the destination communication device, so that data packet drops due to queue overflow at the source communication device are minimized. The method further includes transmitting the encoded data packets over the IoT network.
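The pacing step, choosing an inter-packet delay so the source queue does not overflow, can be sketched with a simple first-order model; the formula, parameter names, and thresholds are assumptions, not the patent's method.

```python
# Sketch of the pacing step: choose an inter-packet delay so the source
# queue stays bounded, given the encoded block size and the link's
# drain rate. This first-order model is an illustrative assumption.

def pacing_delay(packets_per_block: int, queue_capacity: int,
                 drain_rate: float) -> float:
    """Seconds to wait between packets so queue depth stays bounded.

    Enqueuing faster than `drain_rate` packets/second grows the queue;
    spacing packets at 1/drain_rate keeps the depth roughly constant.
    """
    if packets_per_block <= queue_capacity:
        return 0.0                  # whole block fits: no pacing needed
    return 1.0 / drain_rate

assert pacing_delay(8, 16, drain_rate=100.0) == 0.0
assert pacing_delay(64, 16, drain_rate=100.0) == 0.01
```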

METHOD OF DATA CACHING IN DELAY TOLERANT NETWORK BASED ON INFORMATION CENTRIC NETWORK, COMPUTER READABLE MEDIUM AND DEVICE FOR PERFORMING THE METHOD

Provided is a method of data caching in a delay tolerant network based on an information centric network, and a recording medium and a device for performing the same. The data caching method includes: the step of checking a remaining buffer amount and a buffer usage amount of a node; the step of caching data received from another node in the node according to a data caching policy, in case the remaining buffer amount of the node is greater than a preset remaining buffer amount threshold; the step of deleting data cached in the node from the node according to a data deletion policy, in case the buffer usage amount of the node is less than a preset buffer usage amount threshold; and the step of setting an initial Time-to-Live (TTL) value of the data received from another node, or updating a TTL of the data cached in the node, using information of the data received from another node or information of the node.
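The caching steps can be sketched as a headroom check before caching, TTL assignment or refresh on arrival, and TTL-based deletion; the capacity, thresholds, and default TTL below are illustrative assumptions, and the eviction rule shown is TTL expiry rather than the patent's specific deletion policy.

```python
# Sketch of the caching steps above: check buffer headroom before
# caching, set or refresh a TTL on cached items, and delete expired
# entries. Capacity, threshold, and TTL values are assumptions.

BUFFER_CAPACITY = 4
REMAINING_THRESHOLD = 1     # cache only if more than this space is left
DEFAULT_TTL = 60.0          # seconds (assumed initial TTL)

cache = {}                  # data name -> expiry timestamp

def maybe_cache(name: str, now: float) -> bool:
    remaining = BUFFER_CAPACITY - len(cache)
    if remaining <= REMAINING_THRESHOLD:
        return False                       # not enough headroom: skip
    # New data gets an initial TTL; known data has its TTL refreshed.
    cache[name] = now + DEFAULT_TTL
    return True

def expire(now: float) -> None:
    for name in [n for n, exp in cache.items() if exp <= now]:
        del cache[name]                    # delete per the assumed policy

assert maybe_cache("a", now=0.0) and maybe_cache("b", now=0.0)
assert maybe_cache("c", now=0.0)
assert maybe_cache("d", now=0.0) is False  # headroom at threshold: refuse
expire(now=61.0)
assert cache == {}
```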

Activating and deactivating functional units of a line card

In some implementations, a method includes analyzing an amount of data communicated by a set of network interfaces. The data communicated by the set of network interfaces is processed by a set of functional units, and a set of queues includes the data communicated by the set of network interfaces. The method also includes activating a first functional unit of the set of functional units when a first size of a first queue of the set of queues is above a first threshold. The method further includes deactivating the first functional unit of the set of functional units when the first size of the first queue of the set of queues is below a second threshold. The method further includes causing the data to be forwarded to one or more active functional units via a data interconnect coupled to the set of network interfaces and the set of functional units.
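Using two separate thresholds gives the activation rule a hysteresis band, so a functional unit does not flap on and off as the queue hovers near a single cutoff; the threshold values below are illustrative assumptions.

```python
# Sketch of the activation rule above: a functional unit turns on when
# its queue grows past a high threshold and off when the queue drains
# below a lower one. The two thresholds form a hysteresis band; the
# values are illustrative assumptions.

HIGH_THRESHOLD = 10   # activate above this queue depth
LOW_THRESHOLD = 2     # deactivate below this queue depth

def update_unit(active: bool, queue_depth: int) -> bool:
    if not active and queue_depth > HIGH_THRESHOLD:
        return True                 # backlog building: bring unit online
    if active and queue_depth < LOW_THRESHOLD:
        return False                # queue drained: power the unit down
    return active                   # inside the band: keep current state

assert update_unit(False, 5) is False      # below high threshold: stay off
assert update_unit(False, 11) is True      # burst arrives: activate
assert update_unit(True, 5) is True        # in the band: no flapping
assert update_unit(True, 1) is False       # drained: deactivate
```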

Egress flow mirroring in a network device

A packet is received at a network device. The packet is processed by the network device to determine at least one egress port via which to transmit the packet, and to perform egress classification of the packet based at least in part on information determined for the packet during processing of the packet. Egress classification includes determining whether the packet should not be transmitted by the network device. When egress classification does not determine that the packet should be blocked from transmission, a copy of the packet is generated for mirroring of the packet to a destination other than the determined at least one egress port, and the packet is enqueued in an egress queue corresponding to the determined at least one egress port. The packet is subsequently transferred to the determined at least one egress port for transmission of the packet.
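The egress path can be sketched as classify-then-copy-then-enqueue; the port numbers, the mirror flag, and the ACL-style classifier below are illustrative assumptions, not the device's actual pipeline.

```python
from collections import deque

# Sketch of the egress path described above: classify the packet after
# egress-port resolution; if classification does not block it, emit a
# mirror copy and enqueue the original on the resolved egress port.
# Ports, the mirror flag, and the classifier rule are assumptions.

egress_queues = {1: deque(), 2: deque()}
mirror_queue = deque()

def egress_classify(packet: dict) -> bool:
    """Return True if the packet should NOT be transmitted (assumed rule)."""
    return packet.get("acl_deny", False)

def process(packet: dict, egress_port: int, mirror: bool) -> None:
    if egress_classify(packet):
        return                          # blocked: no mirror copy, no enqueue
    if mirror:
        mirror_queue.append(dict(packet))     # copy for the mirror target
    egress_queues[egress_port].append(packet) # original proceeds normally

process({"id": 1}, egress_port=1, mirror=True)
process({"id": 2, "acl_deny": True}, egress_port=1, mirror=True)
assert [p["id"] for p in egress_queues[1]] == [1]
assert [p["id"] for p in mirror_queue] == [1]
```

Classifying before copying means a packet that egress classification blocks generates no mirror traffic, which keeps the mirror destination from seeing packets the device never actually sent.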