H04L49/9047

DATA CACHING METHOD AND APPARATUS, MEDIUM AND NETWORK EQUIPMENT
20240372820 · 2024-11-07

A data caching method comprises: receiving message data returned by a second device in response to a read command; dividing the message data into two paths according to a preset strategy, sending one path to a first random access memory for storage, and distributing the other path, through a first-in-first-out queue, to a double data rate synchronous dynamic random access memory for storage, wherein the sum of the working bandwidth of the first random access memory and the working bandwidth of the double data rate synchronous dynamic random access memory is greater than or equal to the receiving bandwidth of the message data; and sending the data stored in the first random access memory and the data stored in the double data rate synchronous dynamic random access memory to a first device connected to the network equipment.
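The split described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the bandwidth figures, the ratio-based "preset strategy", and all names are assumptions.

```python
from collections import deque

SRAM_BW = 40   # working bandwidth of the first RAM (Gb/s, assumed)
DDR_BW = 60    # working bandwidth of the DDR SDRAM (Gb/s, assumed)
RX_BW = 100    # receiving bandwidth of the message data (Gb/s, assumed)

# The scheme requires the two memories together to keep up with the line rate.
assert SRAM_BW + DDR_BW >= RX_BW

sram = []            # first random access memory (fast path)
ddr_fifo = deque()   # first-in-first-out queue feeding the DDR SDRAM
ddr = []             # DDR SDRAM storage

def cache_message(chunks, preset_ratio=0.4):
    """Split incoming chunks between SRAM and DDR per a preset strategy.

    Here the 'preset strategy' is assumed to be a ratio split
    proportional to each memory's share of the total bandwidth.
    """
    for i, chunk in enumerate(chunks):
        # Send roughly `preset_ratio` of the traffic to SRAM, rest to DDR.
        if (i % 10) < preset_ratio * 10:
            sram.append(chunk)
        else:
            ddr_fifo.append(chunk)   # buffered through the FIFO first
    # Drain the FIFO into DDR storage.
    while ddr_fifo:
        ddr.append(ddr_fifo.popleft())

cache_message([f"pkt{i}" for i in range(10)])
```

With a 40/60 ratio, ten chunks land four in SRAM and six in DDR; a real design would pick the ratio from the memories' actual bandwidths.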

Method and system for providing network egress fairness between applications

Methods and systems are provided to facilitate network egress fairness between applications. At an egress port of a network, an arbitrator can provide fairness-based traffic shaping to data associated with applications. The desired fairness-based traffic shaping can be provided based on bandwidth, traffic classes, or other parameters. Consequently, the egress link's bandwidth can be allocated with fairness among the applications.
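Deficit round robin is one common way to realize the bandwidth fairness this abstract describes; the patent does not mandate a specific algorithm, so the sketch below (queue contents, quantum, round count) is purely illustrative.

```python
from collections import deque

def drr_arbitrate(app_queues, quantum=100, rounds=4):
    """Serve each application's queue up to `quantum` bytes of credit per round."""
    deficits = {app: 0 for app in app_queues}
    sent = []
    for _ in range(rounds):
        for app, q in app_queues.items():
            deficits[app] += quantum
            # Dequeue packets while the application has enough credit.
            while q and q[0][1] <= deficits[app]:
                pkt, size = q.popleft()
                deficits[app] -= size
                sent.append((app, pkt))
    return sent

queues = {
    "app_a": deque([("a1", 80), ("a2", 80)]),
    "app_b": deque([("b1", 60), ("b2", 60)]),
}
order = drr_arbitrate(queues)
```

Each application drains at a rate proportional to its quantum regardless of its packet sizes, which is the fairness property an egress arbitrator wants.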

Alleviating congestion in a cable modem

A method, system and computer program product for ingress level filtering of packets is provided. The system includes a Media Access Control (MAC) and a buffer pool that includes buffers configured to store packets. The MAC includes a memory configured to store an incoming packet and an inspection engine coupled to the memory. The inspection engine is configured to parse the incoming packet to determine a priority level of the incoming packet, determine whether there is a buffer available in the buffer pool to store the incoming packet, and allocate a buffer in the buffer pool to store the incoming packet based on the priority level of the incoming packet.
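A priority-aware allocation check of this kind can be sketched as follows. The reservation policy (low-priority packets may not take the last few buffers) and all numbers are assumptions for illustration, not the patent's design.

```python
class BufferPool:
    def __init__(self, total=8):
        self.free = total
        # Assumed policy: keep some buffers in reserve so that
        # high-priority packets are never starved by low-priority ones.
        self.reserve_for_high = 2

    def allocate(self, priority):
        """Allocate a buffer based on the packet's parsed priority level."""
        needed_free = 1 if priority == "high" else 1 + self.reserve_for_high
        if self.free >= needed_free:
            self.free -= 1
            return True   # buffer allocated, packet stored
        return False      # no buffer available: packet dropped at ingress

pool = BufferPool(total=4)
results = [pool.allocate(p) for p in ["low", "low", "high", "low", "high"]]
```

In this run the fourth packet (low priority, pool nearly full) is refused while the final high-priority packet still gets the last buffer.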

In-line packet processing

A method and apparatus for in-line processing a data packet while routing the packet through a router in a system transmitting data packets between a source and a destination over a network including the router. The method includes receiving the data packet and pre-processing layer header data for the data packet as the data packet is received and prior to transferring any portion of the data packet to packet memory. The data packet is thereafter stored in the packet memory. A routing through the router is determined including a next hop index describing the next connection in the network. The data packet is retrieved from the packet memory and a new layer header for the data packet is constructed from the next hop index while the data packet is being retrieved from memory. The new layer header is coupled to the data packet prior to transfer from the router.
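The pipeline above (pre-process the header before storage, then build the new header from the next-hop index during retrieval) can be sketched like this. The route table, next-hop header bytes, and packet-memory layout are all assumed.

```python
# Next-hop indices and per-hop layer headers (both assumed for illustration).
ROUTE_TABLE = {"10.0.0.0/8": 1, "192.168.0.0/16": 2}
NEXT_HOP_HEADERS = {1: b"\x01\xaa", 2: b"\x02\xbb"}

packet_memory = {}

def receive(pkt_id, header, payload):
    # Pre-process the layer header as the packet arrives, before any
    # portion of the packet is transferred to packet memory.
    next_hop = ROUTE_TABLE[header["dst_prefix"]]
    packet_memory[pkt_id] = payload      # thereafter store the packet
    return next_hop

def transmit(pkt_id, next_hop):
    payload = packet_memory.pop(pkt_id)
    # Construct the new layer header from the next-hop index while the
    # packet is being retrieved, coupling it to the packet on the way out.
    return NEXT_HOP_HEADERS[next_hop] + payload

hop = receive(7, {"dst_prefix": "10.0.0.0/8"}, b"hello")
frame = transmit(7, hop)
```

The point of the arrangement is that header work overlaps with memory transfers in both directions, so it adds no extra pass over the packet.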

PACKET FILTERING USING BINARY SEARCH TREES
20180062998 · 2018-03-01

A packet filtering system uses linked zero-based binary search trees to filter received packets. The binary search trees may be generated from filter conditions defining filter parameters for filtering packets.
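A sorted-array binary search can stand in for the linked trees to show the idea: build a searchable structure from filter conditions, then match each packet against it in logarithmic time. The rule format (port ranges mapped to actions) is an assumption.

```python
import bisect

# Filter conditions over destination ports, assumed disjoint: (lo, hi, action).
conditions = sorted([(0, 21, "drop"), (22, 22, "allow"), (80, 443, "allow")])
lows = [lo for lo, _, _ in conditions]

def filter_packet(dst_port, default="drop"):
    """Binary-search for the candidate range, then check membership."""
    i = bisect.bisect_right(lows, dst_port) - 1
    if i >= 0:
        lo, hi, action = conditions[i]
        if lo <= dst_port <= hi:
            return action
    return default
```

A port inside a configured range returns that range's action; anything between or beyond the ranges falls through to the default.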

SELF TUNING BUFFER ALLOCATION IN A SHARED-MEMORY SWITCH

An N-port, shared-memory switch allocates a shared headroom buffer pool (Ps) for a priority group (PG). Ps is smaller than a worst case headroom buffer pool (Pw), where Pw equals the sum of worst case headrooms corresponding to each port-priority tuple (PPT) associated with the PG. Each worst case headroom comprises headroom required to buffer worst case, post-pause, traffic received on that PPT. Subject to a PPT maximum, each PPT may consume Ps as needed. Because rarely will all PPTs simultaneously experience worst case traffic, Ps may be significantly smaller than Pw, e.g., Ps<(Pw/M) where M>=2. Ps may be size-adjusted based on utilization of Ps, without halting traffic to or from the switch. If Ps utilization exceeds an upper utilization threshold, Ps may be increased, subject to a maximum threshold (Pmax). Conversely, if utilization falls below a lower utilization threshold, Ps may be decreased.
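The resize loop reduces to a small function. The abstract specifies only the direction of adjustment and the Pmax cap; the thresholds, step size, and the shrink floor below are assumptions.

```python
def tune_headroom(ps, used, p_max, upper=0.8, lower=0.3, step=64):
    """Adjust the shared headroom pool Ps based on its utilization."""
    utilization = used / ps
    if utilization > upper:
        ps = min(ps + step, p_max)   # grow, capped at Pmax
    elif utilization < lower:
        ps = max(ps - step, used)    # shrink, never below what is in use
    return ps
```

Called periodically with the pool's current occupancy, this grows Ps under pressure and reclaims memory when the pool sits mostly idle, all without pausing traffic.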

PACKET BUFFERING
20180026902 · 2018-01-25 ·

A first device, acting as a buffer server in an Ethernet network, transmits a first buffer-client query packet from a port on which its distributed buffer function is enabled, receives a first buffer-client registration packet from a second device through that port, and adds the second device to the port's distributed buffer group. When the first device detects, within a first preset time period, that the total size of packets that have entered the port but have not yet been transmitted reaches a preset first flow-splitting threshold, it forwards a packet that has entered the port but has not been transmitted to a buffer client selected from the port's distributed buffer group.
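The flow-splitting decision can be sketched as below. The threshold value and the client-selection rule (round robin here) are assumptions; the patent only requires that excess backlog be offloaded to a registered buffer client.

```python
from itertools import cycle

FLOW_SPLIT_THRESHOLD = 300   # preset flow-splitting threshold (bytes, assumed)

# Clients that registered with the buffer server for this port (assumed names).
distributed_buffer_group = cycle(["client_b", "client_c"])

def handle_packet(backlog_bytes, pkt_size):
    """Return where the packet goes and the port's new local backlog."""
    if backlog_bytes + pkt_size > FLOW_SPLIT_THRESHOLD:
        # Backlog over threshold: forward to a buffer client instead.
        return next(distributed_buffer_group), backlog_bytes
    return "local", backlog_bytes + pkt_size

dest1, backlog = handle_packet(0, 200)        # fits locally
dest2, backlog = handle_packet(backlog, 200)  # would exceed threshold: offloaded
```

The first packet stays in the port's own queue; the second would push the backlog past the threshold, so it is shipped to the next client in the group.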

Packet transmission method and apparatus
09866482 · 2018-01-09

A method of the present invention includes: determining, when packet congestion occurs or network bandwidth decreases, at least one to-be-discarded packet in a packet buffer; and discarding the at least one to-be-discarded packet so that at least one unbuffered packet enters the packet buffer to prevent discarding too many unbuffered packets, where the unbuffered packet is a packet that has not entered the packet buffer. The present invention is primarily applied in a packet buffering process.
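A minimal sketch of that discard policy: when the buffer is full, evict an already-buffered packet so the arriving one can enter, rather than tail-dropping every new arrival. Which buffered packet to discard is a policy choice; dropping the oldest here is an assumption.

```python
from collections import deque

BUFFER_CAPACITY = 3
packet_buffer = deque()

def enqueue(pkt):
    """Admit an unbuffered packet, discarding a buffered one if needed."""
    if len(packet_buffer) >= BUFFER_CAPACITY:
        packet_buffer.popleft()   # discard the to-be-discarded buffered packet
    packet_buffer.append(pkt)     # the unbuffered packet enters the buffer

for p in ["p1", "p2", "p3", "p4", "p5"]:
    enqueue(p)
```

After five arrivals into a three-slot buffer, the two oldest packets have been discarded and the three newest are buffered; no arriving packet was ever refused outright.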

Dynamic protection of shared memory and packet descriptors used by output queues in a network device

A network switch includes a buffer to store network packets and packet descriptors (PDs) used to link the packets into queues for output ports. The buffer and PDs are shared among the multiple traffic pools. The switch receives a multicast packet for queues in a given pool. The switch determines if there is unused buffer space available for packets in the given pool based on a pool dynamic threshold, if there is unused buffer space available for packets in each queue based on a queue dynamic threshold for the queue, if there are unused PDs available to the given pool based on a pool dynamic threshold for PDs, and if there are unused PDs available for each queue based on a queue dynamic threshold for PDs for the queue. The network switch admits the packet only into the queues for which all of the determining operations pass.
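The four admission checks can be sketched with the classic dynamic-threshold rule T = alpha × free; the abstract names dynamic thresholds without giving a formula, so the rule, the alpha values, and the state layout below are assumptions.

```python
def admit(pool, queue, pkt_bytes):
    """Admit a multicast packet into one queue only if all four checks pass."""
    # Buffer space: pool-level and queue-level dynamic thresholds.
    buf_ok_pool = pool["buf_used"] + pkt_bytes <= pool["alpha_buf"] * pool["buf_free"]
    buf_ok_queue = queue["buf_used"] + pkt_bytes <= queue["alpha_buf"] * pool["buf_free"]
    # Packet descriptors: pool-level and queue-level dynamic thresholds.
    pd_ok_pool = pool["pd_used"] + 1 <= pool["alpha_pd"] * pool["pd_free"]
    pd_ok_queue = queue["pd_used"] + 1 <= queue["alpha_pd"] * pool["pd_free"]
    return buf_ok_pool and buf_ok_queue and pd_ok_pool and pd_ok_queue

pool = {"buf_used": 1000, "buf_free": 4000, "alpha_buf": 1.0,
        "pd_used": 10, "pd_free": 100, "alpha_pd": 0.5}
q_light = {"buf_used": 100, "alpha_buf": 0.25, "pd_used": 2, "alpha_pd": 0.1}
q_heavy = {"buf_used": 2000, "alpha_buf": 0.25, "pd_used": 2, "alpha_pd": 0.1}
```

For a multicast packet, the switch would run `admit` once per destination queue and enqueue a copy only where every check passes: here the lightly loaded queue accepts a 500-byte packet while the heavily loaded one, already over its queue-level buffer threshold, rejects it.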