Patent classifications
H04L49/9084
METHOD AND COMPUTING DEVICE FOR MINIMIZING ACCESSES TO DATA STORAGE IN CONJUNCTION WITH MAINTAINING A B-TREE
Methods for modifying a B-tree are disclosed. According to an implementation, a computing device receives requests for updates to a B-tree and groups two or more requests destined for a particular node of the B-tree into a batch, but refrains from modifying that node until the buffer of the node above it is full (or will become full with this batch of requests). Once the buffer is full, the computing device provides the requests to that particular node. The techniques described herein may result in the computing device carrying out fewer reads from and writes to storage than existing B-tree maintenance techniques, thereby saving time and bandwidth. Reducing the number of reads and writes also saves money, particularly when the storage is controlled by a third-party SaaS provider that charges according to the number of transactions.
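The batching behavior can be illustrated with a single buffered node. This is a minimal sketch, assuming an in-memory `Node` class with an invented `storage_writes` counter standing in for actual storage accesses; none of these names come from the patent:

```python
class Node:
    """A B-tree-style node with a write buffer (illustrative sketch)."""
    def __init__(self, capacity=4):
        self.buffer = []          # pending (key, value) update requests
        self.capacity = capacity
        self.entries = {}         # materialized key/value pairs ("storage")
        self.storage_writes = 0   # counts simulated storage accesses

    def insert(self, key, value):
        # Batch the request in the buffer instead of touching storage.
        self.buffer.append((key, value))
        if len(self.buffer) >= self.capacity:
            self.flush()

    def flush(self):
        # One storage write applies the whole batch at once.
        for key, value in self.buffer:
            self.entries[key] = value
        self.buffer.clear()
        self.storage_writes += 1

node = Node(capacity=4)
for i in range(8):
    node.insert(i, i * i)
print(node.storage_writes)  # 2
```

Eight inserts trigger only two flushes, so the simulated storage is written twice instead of eight times.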
Methods and apparatus for memory resource management in a network device
A network device determines whether a utilization threshold is reached, the utilization threshold associated with memory resources of the network device, the memory resources including a shared memory and a reserved memory. The shared memory is available to any egress interface in a plurality of egress interfaces, while the reserved memory includes respective sub-pools for the exclusive use of respective egress interfaces among at least some of the plurality of egress interfaces. First packets to be transmitted are stored in the shared memory until the utilization threshold is reached, and in response to determining that the utilization threshold is reached, a second packet to be transmitted is stored in the reserved memory.
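The shared-then-reserved admission policy can be sketched as follows; the `BufferManager` class, the pool sizes, and the string return values are illustrative assumptions, not the patent's interfaces:

```python
class BufferManager:
    """Shared pool first, per-port reserved sub-pool as fallback (illustrative)."""
    def __init__(self, shared_limit, reserved_per_port, ports):
        self.shared_limit = shared_limit
        self.shared_used = 0
        self.reserved = {p: {"limit": reserved_per_port, "used": 0} for p in ports}

    def admit(self, port, size):
        # Store packets in shared memory until the utilization threshold is hit.
        if self.shared_used + size <= self.shared_limit:
            self.shared_used += size
            return "shared"
        # Threshold reached: fall back to this egress interface's reserved sub-pool.
        pool = self.reserved[port]
        if pool["used"] + size <= pool["limit"]:
            pool["used"] += size
            return "reserved"
        return "drop"
```

The reserved sub-pools guarantee that a busy interface exhausting the shared pool cannot starve the other interfaces entirely.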
Shared traffic manager
A traffic manager is shared amongst two or more egress blocks of a network device, thereby allowing traffic management resources to be shared between the egress blocks. Schedulers within a traffic manager may generate and queue read instructions for reading buffered portions of data units that are ready to be sent to the egress blocks. The traffic manager may be configured to select a read instruction for a given buffer bank from the read instruction queues based on a scoring mechanism or other selection logic. To avoid sending too much data to an egress block during a given time slot, once a data unit portion has been read from the buffer, it may be temporarily stored in a shallow read data cache. Alternatively, a single, non-bank specific controller may determine all of the read instructions and write operations that should be executed in a given time slot.
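One way to picture the selection step is a function that scores each bank's read-instruction queue and serves the winner each time slot. Here the score is simply queue length, an illustrative stand-in for the patent's scoring mechanism:

```python
def select_read_instruction(queues):
    """Pick which buffer bank's read-instruction queue to serve this time slot.

    `queues` maps a bank name to its list of pending read instructions.
    Scoring by queue length is an illustrative choice only.
    """
    candidates = {bank: q for bank, q in queues.items() if q}
    if not candidates:
        return None
    # Serve the bank with the most pending instructions.
    bank = max(candidates, key=lambda b: len(candidates[b]))
    return candidates[bank].pop(0)
```

A real device would also have to respect per-slot bandwidth limits toward each egress block, which is what the shallow read data cache in the abstract absorbs.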
System and method for supporting efficient virtual output queue (VOQ) packet flushing scheme in a networking device
A system and method can support packet switching in a network environment. The system can include an ingress buffer on a networking device, wherein the ingress buffer, which includes one or more virtual output queues, operates to store one or more incoming packets that are received at an input port on the networking device. Furthermore, the system can include a packet flush engine associated with the ingress buffer, wherein the packet flush engine operates to flush a packet that is stored in a virtual output queue in the ingress buffer and to notify one or more output schedulers that the packet has been flushed, wherein each output scheduler is associated with an output port.
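The flush-and-notify behavior might be sketched like this; the `IngressBuffer` class and its `notifications` list are hypothetical stand-ins for the packet flush engine and its messages to the output schedulers:

```python
from collections import deque

class IngressBuffer:
    """Ingress buffer with one virtual output queue per output port (illustrative)."""
    def __init__(self, output_ports):
        self.voqs = {p: deque() for p in output_ports}
        self.notifications = []   # stands in for messages to output schedulers

    def enqueue(self, port, packet):
        self.voqs[port].append(packet)

    def flush(self, port):
        # Drop everything queued for this output port, then tell its scheduler,
        # so the scheduler stops accounting for packets that no longer exist.
        flushed = len(self.voqs[port])
        self.voqs[port].clear()
        self.notifications.append((port, flushed))
        return flushed
```

The notification step matters because an output scheduler that is not told about the flush would keep granting bandwidth to packets that were already discarded.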
Traffic Management in a Network Switching System with Remote Physical Ports
A switching system includes a port extender device coupled to a central switching device. Packets processed by the central switching device are forwarded to the port extender device and enqueued in ones of a plurality of egress queues in the port extender device for transmission of the packets via the front ports of the port extender device. Respective egress queues in the port extender device have a queue depth that is less than the queue depth of corresponding egress queues in the central switching device. A flow control message indicative of congestion in a particular egress queue of the port extender device is generated and transmitted to the central switching device to control transmission of packets from the central switching device to that egress queue of the port extender device.
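The flow-control message generation can be pictured as a watermark check with hysteresis on the shallow port-extender queue; the `XOFF`/`XON` names and the thresholds are illustrative assumptions:

```python
class FlowControl:
    """Watermark-based flow control for one shallow egress queue (illustrative)."""
    def __init__(self, high, low):
        self.high, self.low = high, low
        self.paused = False

    def update(self, depth):
        # Hysteresis: signal congestion above the high watermark, signal
        # resume only after the queue drains below the low watermark.
        if not self.paused and depth >= self.high:
            self.paused = True
            return "XOFF"   # message to the central switching device: stop sending
        if self.paused and depth <= self.low:
            self.paused = False
            return "XON"    # message: resume sending to this queue
        return None
```

Hysteresis avoids a burst of alternating stop/resume messages when the queue depth hovers near a single threshold.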
EGRESS FLOW MIRRORING IN A NETWORK DEVICE
A packet is received at a network device. The packet is processed by the network device to determine at least one egress port via which to transmit the packet, and to perform egress classification of the packet based at least in part on information determined for the packet during processing of the packet. Egress classification includes determining whether the packet should be blocked from transmission by the network device. When the packet is not to be blocked, a copy of the packet is generated for mirroring of the packet to a destination other than the determined at least one egress port, and the packet is enqueued in an egress queue corresponding to the determined at least one egress port. The packet is subsequently transferred to the determined at least one egress port for transmission of the packet.
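The ordering, classification first, then mirror and enqueue only for surviving packets, can be sketched as follows; the `drop` field and function names are illustrative:

```python
def egress_process(packet, egress_port, queues, mirrored):
    """Classify, then mirror and enqueue (illustrative sketch).

    `queues` maps egress ports to their egress queues; `mirrored` collects
    copies destined for the mirror destination.
    """
    if packet.get("drop"):          # egress classification: do not transmit
        return False                # blocked packets are neither sent nor mirrored
    mirrored.append(dict(packet))   # copy generated for the mirror destination
    queues.setdefault(egress_port, []).append(packet)  # original continues on
    return True
```

Classifying before mirroring ensures the mirror destination never receives copies of packets the device itself refused to transmit.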
Fine grain traffic shaping offload for a network interface card
A network interface card with traffic shaping capabilities and methods of network traffic shaping with a network interface card are provided. The network interface card and method can shape traffic originating from one or more applications executing on a host network device. The applications can execute in a virtual machine or containerized computing environment. The network interface card and method can perform or include several traffic shaping mechanisms including, for example and without limitation, a delayed completion mechanism, a time-indexed data structure, a packet builder, and a memory manager.
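A time-indexed data structure for pacing transmissions is often realized as a timing wheel; the sketch below is a generic illustration of that idea, not the card's actual implementation:

```python
class TimingWheel:
    """Time-indexed structure for pacing packet release (illustrative sketch)."""
    def __init__(self, slots):
        self.slots = [[] for _ in range(slots)]
        self.now = 0

    def schedule(self, packet, delay):
        # Place the packet in the slot for its intended transmission time.
        self.slots[(self.now + delay) % len(self.slots)].append(packet)

    def tick(self):
        # Advance one time slot and release whatever is due.
        idx = self.now % len(self.slots)
        due = self.slots[idx]
        self.slots[idx] = []
        self.now += 1
        return due
```

Spacing packets across future slots is what produces the shaped (rate-limited) output, while delayed completions push back-pressure to the sending application.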
METHOD AND APPARATUS FOR ANALYZING COMMUNICATION QUALITY, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
A method for analyzing communication quality between virtual machines serving as transmission sources and virtual machines serving as transmission destinations on a virtual network includes identifying, by a computer, a first pair and a second pair as mirroring targets based on the temporal sequence of input queue information and the temporal sequence of output queue information. The input queue information indicates a first queue length: the number of pieces of data addressed from one of the transmission sources to one of the transmission destinations. The output queue information indicates a second queue length: the number of pieces of data included in an output queue of that transmission destination.
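Pair identification from the two queue-length time series could, for instance, use correlation as the similarity measure; this is an illustrative sketch, not the patented algorithm:

```python
def pick_mirroring_targets(input_series, output_series, threshold=0.9):
    """Pair sources and destinations whose queue-length sequences move together.

    `input_series` / `output_series` map VM names to queue-length samples.
    Pearson correlation and the threshold are illustrative choices.
    """
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (va * vb) if va and vb else 0.0

    return [(src, dst)
            for src, a in input_series.items()
            for dst, b in output_series.items()
            if corr(a, b) >= threshold]
```

Strongly correlated input and output queue lengths suggest the two VMs form a sender/receiver pair worth selecting for mirroring and deeper inspection.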
Apparatus and method for adjusting a rate at which data is transferred from a media access controller to a memory in a physical-layer circuit
A physical-layer circuit including a memory, a physical-layer device and a control circuit. The memory receives data from a media access controller (MAC) at a first rate. The MAC is separate from the physical-layer circuit. The physical-layer device receives the data from the memory and transmits the data from the physical-layer circuit to a peer device. The physical-layer device transfers the data from the memory to the peer device at a second rate. An amount of data stored in the memory is based on a difference between the first and second rates. The second rate is less than the first rate. The control circuit is connected between the memory and the physical-layer device. The control circuit monitors the amount of the data stored in the memory and, based on the amount of the data stored in the memory, transmits a frame to the MAC to decrease the first rate.
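The watermark-driven rate adjustment can be sketched as follows; `PhyBuffer` and its counters are hypothetical, with `pause_frames` standing in for the frames sent back to the MAC:

```python
class PhyBuffer:
    """PHY-side memory between the MAC and the line (illustrative sketch)."""
    def __init__(self, capacity, pause_watermark):
        self.capacity = capacity
        self.watermark = pause_watermark
        self.level = 0            # bytes currently stored
        self.pause_frames = 0     # frames sent back to the MAC

    def write_from_mac(self, n):
        # Data arrives from the MAC at the first (faster) rate.
        self.level = min(self.level + n, self.capacity)
        # The control circuit monitors the fill level; at or above the
        # watermark it asks the MAC to decrease the first rate.
        if self.level >= self.watermark:
            self.pause_frames += 1

    def drain_to_peer(self, n):
        # The physical-layer device transmits to the peer at the second rate.
        self.level = max(self.level - n, 0)
```

Because the second rate is lower than the first, the buffer fills over time; the pause frames are what keep it from overflowing.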
Method and system for storing packets for a bonded communication links
Method and system for storing packets received from bonded communication links according to the latency of the communication link that has the largest latency among all communication links of the bonded communication links. Embodiments of the present invention can be applied to bonded communication links, including wireless connections, Ethernet connections, Internet Protocol connections, asynchronous transfer mode, virtual private networks, WiFi, high-speed downlink packet access, GPRS, LTE, and X.25. The present invention presents methods comprising the steps of estimating the storage size of a queue, wherein the queue stores the one or more packets received from the bonded communication links. The storage size is based on one or more factors, including the largest latency, the bandwidth of each of the plurality of communication links, and the allowed time duration of packet storage.
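The storage-size estimate from the listed factors might look like the back-of-the-envelope calculation below; the unit choices and the cap by the allowed holding time are illustrative assumptions:

```python
def queue_storage_size(latencies_ms, bandwidths_mbps, max_hold_ms):
    """Estimate the reorder-queue size in bytes for a set of bonded links.

    Illustrative sketch: buffer enough data to cover the slowest link's
    latency (capped by the allowed packet-storage duration) at the
    aggregate bandwidth of all links.
    """
    worst_latency = max(latencies_ms)
    hold_ms = min(worst_latency, max_hold_ms)   # allowed duration caps the wait
    total_bytes_per_ms = sum(bandwidths_mbps) * 1_000_000 / 8 / 1000
    return total_bytes_per_ms * hold_ms

print(queue_storage_size([10, 50], [100, 100], 100))  # 1250000.0
```

Intuitively, packets arriving over the fast link may have to wait up to the slowest link's latency for in-order delivery, so the queue must absorb the aggregate arrival rate for that long.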