Patent classifications
H04L49/9047
Method of dynamically allocating buffers for packet data received onto a networking device
A method of dynamically allocating buffers involves receiving a packet onto an ingress circuit. The ingress circuit includes a memory that stores a free buffer list, and an allocated buffer list. Packet data of the packet is stored into a buffer. The buffer is associated with a buffer identification (ID). The buffer ID is moved from the free buffer list to the allocated buffer list once the packet data is stored in the buffer. The buffer ID is used to read the packet data from the buffer and into an egress circuit and is stored in a de-allocation buffer list in the egress circuit. A send buffer IDs command is received from a processor onto the egress circuit and instructs the egress circuit to send the buffer ID to the ingress circuit such that the buffer ID is pushed onto the free buffer list.
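The free-list/allocated-list bookkeeping described in the abstract can be sketched as below. All names (`BufferPool`, `alloc`, `dealloc`) are illustrative and not taken from the patent; this is a minimal model of the buffer-ID lifecycle, not the claimed circuit.

```python
from collections import deque

class BufferPool:
    """Minimal sketch of the free/allocated buffer-ID lists.

    A buffer ID moves from the free list to the allocated list when
    packet data is stored, and is pushed back onto the free list once
    the egress side returns it.
    """
    def __init__(self, num_buffers):
        self.free_ids = deque(range(num_buffers))   # free buffer list
        self.allocated = set()                      # allocated buffer list

    def alloc(self):
        # Move a buffer ID from the free list to the allocated list.
        buf_id = self.free_ids.popleft()
        self.allocated.add(buf_id)
        return buf_id

    def dealloc(self, buf_id):
        # Egress returns the ID; push it back onto the free list.
        self.allocated.remove(buf_id)
        self.free_ids.append(buf_id)
```

In the patent's terms, `dealloc` corresponds to the "send buffer IDs" command draining the egress de-allocation list back to the ingress free list.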
MEMORY DEVICE
A memory device is configured as a single chip to achieve routing control, bandwidth control, traffic monitoring, buffering, and access control of network functions. The memory device includes a search unit that includes a first memory unit and performs a search operation by searching, from the first memory unit, a piece of data corresponding to an input search key; a statistical information processing unit that includes a second memory unit that stores statistical information including the input search key, with which the piece of data has been successfully searched by the search unit, and an address of the piece of data in the first memory unit; and an arithmetic operation unit that updates the statistical information when the search unit successfully searches the piece of data corresponding to the input search key.
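The search-plus-statistics behavior can be sketched as follows. The dictionary layout and the use of a per-key hit counter are assumptions for illustration; the patent describes dedicated memory units, not Python structures.

```python
class SearchUnit:
    """Sketch of a search unit that records statistics on successful lookups.

    table: stands in for the first memory unit (key -> data).
    stats: stands in for the second memory unit (key -> [address, hit count]).
    """
    def __init__(self):
        self.table = {}
        self.stats = {}

    def insert(self, address, key, data):
        # Store data at a notional address in the first memory unit.
        self.table[key] = (address, data)

    def search(self, key):
        if key not in self.table:
            return None  # unsuccessful search: statistics unchanged
        address, data = self.table[key]
        # Arithmetic unit: update statistics only on a successful search.
        entry = self.stats.setdefault(key, [address, 0])
        entry[1] += 1
        return data
```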
Multiplexing device and multiplexing method
According to an embodiment, a multiplexing device includes: a packet generating unit which generates one or more third packets based on at least one of one or more first packets and a second packet; a main signal generating unit which generates from the third packets a main signal; an information generating unit which generates transmission multiplexing control information; a slot generating unit which generates a slot by combining the transmission multiplexing control information and the main signal corresponding to the information described in the transmission multiplexing control information having been generated a predetermined number of frames prior to the currently generated transmission multiplexing control information; and a time writing unit which writes a time in the second packet in the main signal included in the generated slot.
Methods and apparatus for memory resource management in a network device
A network device determines whether a utilization threshold is reached, the utilization threshold associated with memory resources of the network device, the memory resources including a shared memory and a reserved memory. Available memory in the shared memory is available for any egress interfaces in a plurality of egress interfaces, and the reserved memory includes respective sub-pools for exclusive use by respective egress interfaces among at least some of the plurality of egress interfaces. First packets to be transmitted are stored in the shared memory until the utilization threshold is reached, and in response to determining that the utilization threshold is reached, a second packet to be transmitted is stored in the reserved memory.
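The shared-then-reserved placement decision can be sketched as a single function. The function name, the 0-to-1 threshold convention, and the "drop" fallback for interfaces without a sub-pool are all assumptions, not claims from the patent.

```python
def choose_pool(shared_used, shared_capacity, threshold, egress_if, reserved_pools):
    """Decide where an arriving packet is buffered.

    Packets go to shared memory until its utilization reaches the
    threshold; after that, a packet uses its egress interface's
    reserved sub-pool if one exists.
    """
    if shared_used / shared_capacity < threshold:
        return "shared"
    if egress_if in reserved_pools:
        return f"reserved:{egress_if}"
    return "drop"  # assumed behavior when no sub-pool is reserved
```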
DISASTER RECOVERY OF MOBILE DATA CENTER VIA LOCATION-AWARE CLOUD CACHING
A method for copying first data stored at a primary data center to a secondary data center is provided. The method includes initiating a first replication task to copy the first data from the primary data center to the secondary data center. The method also includes receiving a first portion of the first data from the primary data center via a first access point, wherein a first bandwidth between the primary data center and the first access point is greater than a second bandwidth between the primary data center and the secondary data center. The method further includes storing the first portion of data in a first cache associated with the first access point. The method also includes transmitting the first portion of data from the first cache to the secondary data center. A system and non-transitory computer-readable medium are also provided.
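The staged copy can be sketched as below: the primary pushes data over its high-bandwidth link into the access-point cache, which then forwards it to the secondary data center. Plain lists stand in for the cache and the secondary store; this is a data-flow illustration only.

```python
def replicate_via_cache(chunks, cache, secondary):
    """Sketch of location-aware staged replication.

    chunks: portions of the first data sent by the primary data center.
    cache: the first cache associated with the access point (fast link).
    secondary: storage at the secondary data center (slower link).
    """
    for chunk in chunks:
        cache.append(chunk)             # store in the access-point cache
    while cache:
        secondary.append(cache.pop(0))  # drain the cache to the secondary
```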
Self tuning buffer allocation in a shared-memory switch
An N-port, shared-memory switch allocates a shared headroom buffer pool (Ps) for a priority group (PG). Ps is smaller than a worst case headroom buffer pool (Pw), where Pw equals the sum of worst case headrooms corresponding to each port-priority tuple (PPT) associated with the PG. Each worst case headroom comprises headroom required to buffer worst case, post-pause, traffic received on that PPT. Subject to a PPT maximum, each PPT may consume Ps as needed. Because rarely will all PPTs simultaneously experience worst case traffic, Ps may be significantly smaller than Pw, e.g., Ps<(Pw/M) where M>=2. Ps may be size-adjusted based on utilization of Ps, without halting traffic to or from the switch. If Ps utilization exceeds an upper utilization threshold, Ps may be increased, subject to a maximum threshold (Pmax). Conversely, if utilization falls below a lower utilization threshold, Ps may be decreased.
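The self-tuning rule in the last three sentences can be sketched as a single adjustment step. The fixed step size and the clamp to zero on shrink are assumptions; the abstract specifies only the threshold-driven grow/shrink behavior and the Pmax cap.

```python
def tune_pool(ps, utilization, lower, upper, p_max, step):
    """One adjustment step for the shared headroom pool Ps.

    Grow Ps (up to p_max) when utilization exceeds the upper threshold;
    shrink it when utilization falls below the lower threshold;
    otherwise leave it unchanged. Runs without halting traffic.
    """
    if utilization > upper:
        return min(ps + step, p_max)   # increase, capped at Pmax
    if utilization < lower:
        return max(ps - step, 0)       # decrease, floored at zero (assumed)
    return ps
```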
DEADLOCK-FREE MULTICAST ROUTING ON A DRAGONFLY NETWORK
Systems and methods are provided for managing multicast data transmission in a network having a plurality of switches arranged in a Dragonfly network topology, including: receiving a multicast transmission at an edge port of a switch and identifying the transmission as a network multicast transmission; creating an entry in a multicast table within the switch; routing the multicast transmission across the network to a plurality of destinations via a plurality of links, wherein at each of the links the multicast table is referenced to determine to which ports the multicast transmission should be forwarded; and changing, when necessary, the virtual channel used by each copy of the multicast transmission as the copy progresses through the network.
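The per-link table lookup described above can be sketched as follows. The nested-dictionary table layout `{switch: {multicast_id: [ports]}}` is an assumption for illustration; the patent's multicast table is a hardware structure within each switch.

```python
def forward_multicast(switch_tables, path, mcast_id):
    """Sketch: at each switch along the route, consult the multicast
    table to find which ports a copy of the transmission is forwarded to.

    Returns a list of (switch, ports) hops; an empty port list means the
    switch has no entry for this multicast transmission.
    """
    hops = []
    for switch in path:
        ports = switch_tables.get(switch, {}).get(mcast_id, [])
        hops.append((switch, ports))
    return hops
```

The virtual-channel change at each hop (the deadlock-avoidance step) is omitted here; in the described system it would accompany each table lookup as the copy progresses.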
CROSSBAR WITH AT-DISPATCH DYNAMIC DESTINATION SELECTION
A Dynamic Destination Selection (DDS) crossbar, system for routing a packet, and a switch are provided. An illustrative DDS crossbar includes one or more adaptive routing circuits to track destination credit and port availability at a time of dispatching a packet, group multiple destinations into super destination groups, perform dynamic destination routing within a super destination group, and use the destination credit and port availability for the super destination group at the time of receiving the packet to select an output destination for the packet.
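The at-dispatch selection within a super destination group can be sketched as below. Tie-breaking by highest credit count is an assumption; the abstract says only that destination credit and port availability at dispatch time drive the choice.

```python
def select_destination(group, credits, port_up):
    """Sketch of dynamic destination selection within a super
    destination group at the time a packet is dispatched.

    group: member destinations of one super destination group.
    credits: destination -> available credit at dispatch time.
    port_up: destination -> whether its port is currently available.
    Returns the chosen destination, or None if none is usable.
    """
    candidates = [d for d in group
                  if port_up.get(d, False) and credits.get(d, 0) > 0]
    if not candidates:
        return None
    # Assumed policy: prefer the destination with the most credit.
    return max(candidates, key=lambda d: credits[d])
```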