Patent classifications
H04L12/861
System and method for equalizing transmission delay in a network
A network device includes an antenna connected to an RF chip and a processor coupled to an Ethernet port, the RF chip, a program memory, a packet buffer memory, and a pointer buffer memory. The program memory contains instructions that, when executed by the processor, cause a plurality of packets received by the antenna and the RF chip in a first order to be stored in the packet buffer memory in that order, cause a pointer associated with each one of the plurality of packets to be stored in the pointer buffer memory, cause the pointers stored in the pointer buffer memory to be placed in a second order in accordance with a timestamp that is included with each packet, and cause the packets stored in the packet buffer memory to be passed along to the Ethernet port in accordance with the sorted pointers.
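The reordering scheme above (packets stored in arrival order, pointers sorted by timestamp) can be sketched as follows. This is a minimal illustration, not the patented implementation; the names `reorder_packets` and `PointerEntry` are hypothetical, and a heap stands in for the hardware pointer buffer.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class PointerEntry:
    """A pointer-buffer entry: sorts by timestamp, carries a buffer index."""
    timestamp: int
    buffer_index: int = field(compare=False)

def reorder_packets(packets):
    """packets: iterable of (timestamp, payload) in arrival (first) order."""
    packet_buffer = []   # packets kept in arrival order, never moved
    pointer_buffer = []  # min-heap of pointers, ordered by timestamp
    for ts, payload in packets:
        packet_buffer.append(payload)
        heapq.heappush(pointer_buffer, PointerEntry(ts, len(packet_buffer) - 1))
    # Pass packets toward the Ethernet port in timestamp (second) order
    # by following the sorted pointers rather than copying packet data.
    out = []
    while pointer_buffer:
        entry = heapq.heappop(pointer_buffer)
        out.append(packet_buffer[entry.buffer_index])
    return out
```

Only the small pointer records are sorted; the packet payloads stay where they were written, which is the point of keeping separate packet and pointer buffer memories.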
Network interface connection teaming system
A network connection teaming system includes a processing system coupled to a memory system in an IHS chassis. The memory system is operable to receive instructions that, when executed by the processing system, cause the processing system to provide an operating system (OS). At least one network interface controller (NIC) including a plurality of network connections is located in the IHS chassis and coupled to the processing system. The NIC(s) are not directly visible to an OS provided by the processing system. A NIC teaming controller is coupled between the processing system and the NIC(s). The NIC teaming controller includes a plurality of hardware connections that are configurable to team the plurality of network connections included on the NIC(s) to provide at least one teamed network connection. An OS provided by the processing system is presented the at least one teamed network connection by the NIC teaming controller.
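The teaming idea, where the OS sees one connection while the controller fans traffic out across hidden NICs, can be sketched in software. This is an illustrative model only (the patent describes hardware teaming); the class name `NICTeamingController` and the round-robin policy are assumptions.

```python
import itertools

class NICTeamingController:
    """Presents several underlying NIC connections as a single teamed
    connection; the OS never sees the individual NICs."""

    def __init__(self, nic_connections):
        self._nics = nic_connections
        self._rr = itertools.cycle(nic_connections)  # round-robin picker

    def send(self, frame):
        # The OS calls send() on the one teamed connection; the controller
        # chooses which underlying NIC connection actually transmits.
        nic = next(self._rr)
        nic.transmit(frame)
```

A usage sketch: two NIC objects with a `transmit` method can be teamed, and successive `send` calls alternate between them, sharing load without any OS involvement.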
Methods and apparatus for memory resource management in a network device
A network device determines whether a utilization threshold is reached, the utilization threshold associated with memory resources of the network device, the memory resources including a shared memory and a reserved memory. Available memory in the shared memory is available for any egress interfaces in a plurality of egress interfaces, and the reserved memory includes respective sub-pools for exclusive use by respective egress interfaces among at least some of the plurality of egress interfaces. First packets to be transmitted are stored in the shared memory until the utilization threshold is reached, and in response to determining that the utilization threshold is reached, a second packet to be transmitted is stored in the reserved memory.
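The shared-then-reserved storage policy can be sketched as a small allocator. A minimal sketch, assuming byte-granular accounting and a simple capacity threshold; `MemoryManager` and its fields are hypothetical names, not the patent's terms.

```python
class MemoryManager:
    def __init__(self, shared_capacity, reserved_per_egress):
        self.shared_capacity = shared_capacity
        self.shared_used = 0
        # Reserved sub-pools, one per egress interface, for exclusive use.
        self.reserved = {eg: {"capacity": cap, "used": 0}
                         for eg, cap in reserved_per_egress.items()}

    def store(self, egress, size):
        """Return "shared" or "reserved" for where the packet landed,
        or None if it must be dropped."""
        # Packets go to shared memory until the utilization threshold is hit.
        if self.shared_used + size <= self.shared_capacity:
            self.shared_used += size
            return "shared"
        # Past the threshold, fall back to this egress interface's sub-pool.
        pool = self.reserved.get(egress)
        if pool and pool["used"] + size <= pool["capacity"]:
            pool["used"] += size
            return "reserved"
        return None
```

The reserved sub-pool guarantees each egress interface some buffering even when a few busy interfaces have exhausted the shared region.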
METHOD AND DEVICE FOR FORWARDING DATA MESSAGES
The present application discloses a method and device for forwarding a data message. A specific embodiment of the method comprises: receiving the data message and reading a data context length value of a first row in the data message; determining whether the data context length value is less than or equal to a maximum segment size in a single transmission according to a transmission control protocol; reading data from the data message in segments in response to the data context length value being less than or equal to the maximum segment size in the single transmission according to the transmission control protocol; reading data from the data message in rows in response to the data context length value being greater than the maximum segment size in the single transmission according to the transmission control protocol; and storing the read data in a user buffer, and sending the data in the user buffer to a terminal if the data in the user buffer exceeds a preset capacity threshold. According to this embodiment, the data messages can be quickly and efficiently forwarded.
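The branch between segment-wise and row-wise reading, plus the threshold-driven flush of the user buffer, can be sketched as below. This is a hedged illustration: the MSS value, the function name `forward_message`, and the representation of the message as a list of byte-string rows are all assumptions.

```python
MSS = 1460  # assumed maximum segment size for a single TCP transmission

def forward_message(rows, send, threshold=4096):
    """rows: the data message as a list of byte strings (its rows).
    send: callable that delivers bytes to the terminal."""
    user_buffer = bytearray()
    if len(rows[0]) <= MSS:
        # First row fits in one segment: read the message in segments.
        data = b"".join(rows)
        chunks = [data[i:i + MSS] for i in range(0, len(data), MSS)]
    else:
        # First row exceeds the MSS: read the message row by row.
        chunks = rows
    for chunk in chunks:
        user_buffer.extend(chunk)
        # Send once the user buffer exceeds the preset capacity threshold.
        if len(user_buffer) > threshold:
            send(bytes(user_buffer))
            user_buffer.clear()
    if user_buffer:
        send(bytes(user_buffer))
```

Batching reads into a user buffer and flushing on a threshold amortizes the per-send cost, which is what makes the forwarding "quick and efficient" in the abstract's terms.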
CHANNEL ACCESS BASED ON UPLINK VIRTUAL QUEUES
An access point may receive, from a set of electronic devices, one or more buffer status reports that indicate that at least a subset of the electronic devices have uplink data associated with one or more access categories. In response, the access point may create a group of uplink virtual queues for one or more electronic devices in the subset based on the one or more buffer status reports, where a given uplink virtual queue corresponds to a particular access category and a given electronic device. The access point may start one or more backoff counters with a one-to-one correspondence to uplink virtual queues in the group of uplink virtual queues. When a backoff counter for the given uplink virtual queue reaches zero, the access point may transmit a trigger frame to an electronic device in the subset that corresponds to the given uplink virtual queue.
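The per-(device, access category) virtual queues and their backoff counters can be modeled as follows. A minimal sketch, assuming a synchronous `tick()` that decrements all counters once per slot; `AccessPoint`, `on_buffer_status_report`, and the contention-window parameter `cw` are hypothetical names.

```python
import random

class AccessPoint:
    def __init__(self):
        # One uplink virtual queue per (device, access category) pair,
        # each holding its own backoff counter.
        self.virtual_queues = {}

    def on_buffer_status_report(self, device, access_categories, cw=15):
        """A buffer status report says this device has uplink data for
        these access categories; create virtual queues for them."""
        for ac in access_categories:
            self.virtual_queues[(device, ac)] = random.randint(0, cw)

    def tick(self):
        """Decrement every backoff counter by one; when a counter reaches
        zero, a trigger frame would be sent to that queue's device."""
        triggered = []
        for key in list(self.virtual_queues):
            self.virtual_queues[key] -= 1
            if self.virtual_queues[key] <= 0:
                device, ac = key
                triggered.append((device, ac))  # trigger frame goes here
                del self.virtual_queues[key]
        return triggered
```

The one-to-one correspondence between backoff counters and virtual queues lets the access point contend on behalf of each device's traffic class independently.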
Adaptive interrupt coalescing for energy efficient mobile platforms
Methods and apparatus relating to adaptive interrupt coalescing for energy efficient mobile platforms are discussed herein. In one embodiment, one or more interrupts are buffered based on communication throughput. At least one of the one or more interrupts are released in response to expiration of an interrupt coalescing time period. Other embodiments are also claimed and described.
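The buffering-then-release behavior can be sketched as a small coalescer. This is an illustrative model only; the throughput-to-period policy shown here (shorter windows at higher throughput) and the names `InterruptCoalescer` and `poll` are assumptions, not the claimed embodiment.

```python
import time

class InterruptCoalescer:
    def __init__(self, base_period=0.001):
        self.base_period = base_period
        self.pending = []     # buffered (coalesced) interrupts
        self.deadline = None  # end of the current coalescing period

    def period(self, throughput_mbps):
        # Adaptive policy: higher throughput -> shorter coalescing window,
        # trading a little energy for lower latency under load.
        return self.base_period / max(throughput_mbps, 1)

    def buffer(self, interrupt, throughput_mbps):
        """Buffer an interrupt instead of delivering it immediately."""
        self.pending.append(interrupt)
        if self.deadline is None:
            self.deadline = time.monotonic() + self.period(throughput_mbps)

    def poll(self):
        """Release all buffered interrupts once the period has expired."""
        if self.deadline is not None and time.monotonic() >= self.deadline:
            released, self.pending = self.pending, []
            self.deadline = None
            return released
        return []
```

Delivering many interrupts in one batch lets the platform stay in a low-power state between batches instead of waking for every event.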
System and method for supporting efficient virtual output queue (VOQ) packet flushing scheme in a networking device
A system and method can support packet switching in a network environment. The system can include an ingress buffer on a networking device, wherein the ingress buffer, which includes one or more virtual output queues, operates to store one or more incoming packets that are received at an input port on the networking device. Furthermore, the system can include a packet flush engine, which is associated with the ingress buffer, wherein said packet flush engine operates to flush a packet that is stored in a virtual output queue in the ingress buffer, and notify one or more output schedulers that the packet is flushed, wherein each output scheduler is associated with an output port.
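The flush-and-notify flow can be sketched as below. A minimal sketch, assuming one VOQ and one scheduler callback per output port; `PacketFlushEngine` and the callback interface are hypothetical.

```python
class PacketFlushEngine:
    def __init__(self, voqs, output_schedulers):
        self.voqs = voqs                      # output port -> VOQ (packet list)
        self.schedulers = output_schedulers   # output port -> notify callback

    def flush(self, output_port, packet_id):
        """Flush one packet from the virtual output queue for this output
        port, then notify that port's output scheduler."""
        queue = self.voqs[output_port]
        queue[:] = [p for p in queue if p != packet_id]
        # Notification keeps the scheduler's credit/occupancy state
        # consistent, so it does not wait on a packet that no longer exists.
        self.schedulers[output_port](packet_id)
```

The notification step is the essential part: without it, an output scheduler could deadlock waiting to dequeue a packet that the flush engine silently removed.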
Apparatus for managing data queues in a network
An apparatus for managing data queues is disclosed. The apparatus includes at least one sensor for collecting data, a data interface for receiving data from the sensor(s) and for placing the collected data in a set of data queues, and a priority sieve for organizing the set of data queues according to data priority of a specific task. The priority sieve includes a scoreboard for identifying queue priority and a system timer for synchronization.
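The priority-sieve organization can be sketched as a set of queues served highest-priority-first. This is an illustrative model: `PrioritySieve` is a hypothetical name, the scoreboard is reduced to a task-to-priority map, and a monotonic timestamp stands in for the system timer.

```python
import time
from collections import defaultdict

class PrioritySieve:
    def __init__(self):
        self.queues = defaultdict(list)  # priority level -> FIFO of items
        self.scoreboard = {}             # task -> its queue priority

    def enqueue(self, task, priority, item):
        """Record the task's priority on the scoreboard and place the
        collected data on the matching queue, timestamped by the timer."""
        self.scoreboard[task] = priority
        self.queues[priority].append((time.monotonic(), task, item))

    def dequeue(self):
        """Serve the highest-priority non-empty queue first (FIFO within
        a priority level); return None when every queue is empty."""
        for priority in sorted(self.queues, reverse=True):
            if self.queues[priority]:
                _, task, item = self.queues[priority].pop(0)
                return task, item
        return None
```

The scoreboard lets the apparatus look up a task's priority without scanning the queues, and the timestamps support the synchronization role the abstract assigns to the system timer.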
Self-configuring computer network router
A self-configuring router includes a resource allocator that automatically assigns processors to queues, such that queue workload is distributed as evenly as possible among the processors, and the processors are as fully utilized as possible. Consequently, packets do not remain on queues longer than necessary, thereby decreasing latency of packets traversing the router, and available and expensive resources, namely the processors, are kept busy. The router automatically allocates its own resources (processors) to its own queues.
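The even-as-possible assignment of queues to processors is essentially load balancing, which can be sketched with a greedy longest-processing-time heuristic. A sketch under assumed inputs (known per-queue workloads), not the router's actual allocator; `assign_processors` is a hypothetical name.

```python
import heapq

def assign_processors(queue_workloads, num_processors):
    """Greedy LPT assignment: hand each queue (heaviest first) to the
    currently least-loaded processor, spreading workload evenly and
    keeping every processor busy."""
    # Min-heap of (total assigned load, processor id).
    heap = [(0, p) for p in range(num_processors)]
    heapq.heapify(heap)
    assignment = {}
    for q, load in sorted(queue_workloads.items(), key=lambda kv: -kv[1]):
        total, proc = heapq.heappop(heap)
        assignment[q] = proc
        heapq.heappush(heap, (total + load, proc))
    return assignment
```

Re-running the allocator as measured workloads change is what makes the router self-configuring: no operator has to rebalance queues by hand.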
Buffer control for multi-transport architectures
A system and method for automating connection management in a manner that may be transparent to any actively communicating applications operating in a Network on Terminal Architecture (NoTA). An application level entity may access another node by making a request to a high level communication structure via an interface. The high level structure may interact with a lower level structure configured to manage communication by establishing communication with another device via one or more transports. In at least one embodiment, provisions may be made to guard against data being lost when a transport fails, including storing data that is passed from a transport-independent buffer to a transport-specific buffer in case the transport fails. When a failure occurs, the stored data may readily be forwarded for sending using another transport.
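The guard against data loss on transport failure (keep a copy in the transport-independent buffer until the send succeeds) can be sketched as below. This is a simplified illustration of the NoTA provision, assuming transports are callables that raise `ConnectionError` on failure; `TransportIndependentBuffer` is a hypothetical name.

```python
class TransportIndependentBuffer:
    def __init__(self, transports):
        self.transports = transports  # transports in order of preference
        self.in_flight = []           # copies kept until a send succeeds

    def send(self, data):
        """Store a copy before handing data to a transport-specific path;
        if that transport fails, forward the stored copy via another
        transport. Return True once any transport accepts the data."""
        self.in_flight.append(data)
        for transport in self.transports:
            try:
                transport(data)
                self.in_flight.remove(data)  # delivered; drop the copy
                return True
            except ConnectionError:
                continue  # transport failed; the stored copy survives
        return False  # all transports failed; data is still buffered
```

Because the copy lives above the transport layer, the communicating applications never see the failover, matching the abstract's transparency goal.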