Patent classifications
H04L49/9005
SYSTEM AND METHOD FOR DATA LOSS AND DATA LATENCY MANAGEMENT IN A NETWORK-ON-CHIP WITH BUFFERED SWITCHES
A buffered switch system for end-to-end data congestion and traffic drop prevention. More specifically, and without limitation, the various aspects and embodiments of the invention relate to the management of buffered switches so as to avoid the trade-off between buffer sizing, latency, and traffic drop.
CONGESTION AVOIDANCE IN A NETWORK DEVICE
A network device receives a packet from a network, and determines at least one port, among a plurality of ports of the network device, via which the packet is to be transmitted. The network device also determines an amount of free buffer space in a buffer memory of the network device, and dynamically determines, based at least in part on the amount of free buffer space, respective thresholds for triggering ones of multiple traffic management operations to be performed based on the packet. Using the respective thresholds, the network device determines whether or not to trigger ones of the multiple traffic management operations with respect to the packet. The network device performs one or more of the traffic management operations determined to be triggered with respect to the packet based on the corresponding one of the respective thresholds.
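A minimal sketch of the dynamic-threshold idea the abstract describes: each traffic management operation gets a threshold computed from the currently free buffer space, and an operation fires when the queue length crosses its threshold. The specific operations and alpha factors below are hypothetical illustrations, not values from the patent.

```python
def dynamic_threshold(free_cells: int, alpha: float) -> float:
    """Dynamic-threshold rule: a queue may grow up to alpha times
    the currently free buffer space, so thresholds shrink as the
    shared buffer fills."""
    return alpha * free_cells

def select_actions(queue_len: int, free_cells: int) -> list[str]:
    """Return the (hypothetical) traffic management operations whose
    dynamically computed thresholds the current queue length meets."""
    # Hypothetical alphas: flow-control pause triggers earliest,
    # ECN marking next, tail drop only as a last resort.
    alphas = {"flow_control_pause": 0.25, "ecn_mark": 0.5, "tail_drop": 1.0}
    return [op for op, a in alphas.items()
            if queue_len >= dynamic_threshold(free_cells, a)]
```

With 1000 free cells, a queue of 600 packets triggers pausing and ECN marking but not dropping; as free space shrinks, the same queue length would also cross the drop threshold.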
Managing a Jitter Buffer Size
A method is presented for managing jitter buffer depth when receiving real-time communication. The method is performed in a receiver and comprises the steps of: determining an adaptive bitrate state of the receiver when a current capacity of a communication channel for receiving the real-time communication is below a maximum bitrate for receiving the real-time communication; and increasing a depth of a jitter buffer for receiving the real-time communication when the adaptive bitrate state is determined.
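The two steps above can be sketched as follows; the step size and cap are hypothetical parameters, not taken from the patent.

```python
def manage_jitter_depth(current_capacity_kbps: float,
                        max_bitrate_kbps: float,
                        depth_ms: int,
                        step_ms: int = 20,
                        max_depth_ms: int = 500) -> int:
    """If channel capacity is below the maximum receive bitrate, the
    receiver is in an adaptive-bitrate state; grow the jitter buffer
    depth to absorb the extra delay variation, up to a cap."""
    adaptive_state = current_capacity_kbps < max_bitrate_kbps
    if adaptive_state:
        return min(depth_ms + step_ms, max_depth_ms)
    return depth_ms
```

When capacity matches the maximum bitrate, the depth is left unchanged.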
Flexible Link Level Retry For Shared Memory Switches
A shared memory switch, and methods and systems for controlling it, are disclosed. The shared memory switch may allocate cells in a storage array to respective use cases, the use cases including input buffering, output queuing, free cell allocation, and retry buffering. A set of data packets may be stored in the cells allocated to output queuing, wherein each cell allocated to output queuing stores a respective data packet of the set of data packets. A subset of the set of data packets may be transmitted to a destination external to the shared memory switch. The cells storing the subset of data packets may be reallocated to the retry buffering use case, wherein the cells allocated to the retry buffering use case form a retry buffer.
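The key trick in this abstract is that a transmitted packet's cell is not freed but relabeled as retry buffer, so a link-level retry can resend without copying. A minimal sketch, with hypothetical use-case labels and only the output-queue/retry/free transitions modeled:

```python
from dataclasses import dataclass, field

# Hypothetical labels for the use cases named in the abstract.
FREE, OUTPUT, RETRY = "free", "output_queue", "retry_buffer"

@dataclass
class SharedMemorySwitch:
    num_cells: int
    cells: dict = field(default_factory=dict)  # cell id -> (use case, packet)

    def __post_init__(self):
        self.cells = {i: (FREE, None) for i in range(self.num_cells)}

    def enqueue(self, packet) -> int:
        """Store a packet in a free cell, allocating it to output queuing."""
        cell = next(i for i, (use, _) in self.cells.items() if use == FREE)
        self.cells[cell] = (OUTPUT, packet)
        return cell

    def transmit(self, cell: int):
        """Transmit the packet but keep it in place: reallocate the cell
        to the retry buffer instead of freeing it, so no copy is needed
        if the link layer requests a retry."""
        _, packet = self.cells[cell]
        self.cells[cell] = (RETRY, packet)
        return packet

    def ack(self, cell: int):
        """Destination acknowledged receipt; the retry copy can be freed."""
        self.cells[cell] = (FREE, None)
```

The retry buffer is thus not a separate memory region but whichever cells currently carry the retry label.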
NETWORK BUFFER CREDIT ALLOCATION
A method for dynamically allocating buffer credits between a system and a storage area network (SAN). The method includes one or more computer processors determining a forecast of a change related to a pattern of network traffic that originates from a computing system that links to a storage area network (SAN) via a network connection. The method further includes determining whether the forecast change related to the pattern of network traffic dictates a change to a buffer credit allocation associated with the network connection. The method further includes responding to determining that the forecast change related to the pattern of network traffic dictates the buffer credit allocation change by determining a value for the buffer credit allocation associated with the change. The method further includes transmitting a request to a switch of the SAN to modify a buffer credit allocation value corresponding to a port of the switch linked to the network connection.
METHOD AND APPARATUS FOR ACTIVE QUEUE MANAGEMENT FOR WIRELESS NETWORKS USING SHARED WIRELESS CHANNEL
A method of managing a queue, and a communication node, are provided. The node may maintain state information for each flow of a corresponding node, may estimate a time of arrival of each packet of each flow based on flow information that is received from other communication nodes within a collision range and that includes the number of flows and the state information, and may determine packet dropping and queue scheduling based on the estimated time of arrival (ETA).
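A minimal sketch of ETA-based dropping and scheduling, assuming the per-flow state learned from neighbours reduces to a mean inter-arrival interval (a simplification of whatever state the patent maintains):

```python
def estimate_arrivals(flow_intervals: dict[str, float],
                      now: float) -> dict[str, float]:
    """Estimate each flow's next packet arrival time from its mean
    inter-arrival interval (hypothetical per-flow state)."""
    return {flow: now + interval for flow, interval in flow_intervals.items()}

def schedule_or_drop(etas: dict[str, float],
                     deadline: float) -> tuple[list, list]:
    """Flows whose ETA beats the deadline are scheduled in ETA order;
    packets from flows arriving too late become drop candidates."""
    scheduled = sorted((f for f, t in etas.items() if t <= deadline),
                       key=lambda f: etas[f])
    dropped = [f for f, t in etas.items() if t > deadline]
    return scheduled, dropped
```

Sharing flow counts and state within the collision range is what lets each node compute these ETAs for flows it does not itself originate.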
Algorithmic changing in a streaming environment
A stream computing application may permit one job to connect to a data stream of a different job. As more jobs dynamically connect to the data stream, the connections may have a negative impact on the performance of the stream computing application. A variety of performance indicators (e.g., CPU utilization or tuple rate) may be monitored to determine if the dynamic connections are harming performance. If they are, the stream algorithm may be modified to mitigate the effects of the dynamic connections.
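The monitor-then-mitigate loop the abstract describes can be sketched as follows; the indicator names, limits, and the fallback "sampled" mode are hypothetical examples of a mitigation, not the patent's specific algorithm change.

```python
def should_mitigate(indicators: dict[str, float],
                    limits: dict[str, float]) -> bool:
    """True if any monitored performance indicator (e.g. CPU
    utilization or tuple rate deficit) exceeds its limit, signalling
    that dynamic connections are harming performance."""
    return any(indicators.get(name, 0.0) > limit
               for name, limit in limits.items())

def adjust_algorithm(mode: str, indicators: dict[str, float],
                     limits: dict[str, float]) -> str:
    """Hypothetical mitigation: fall back from the 'full' stream
    algorithm to a cheaper 'sampled' variant while indicators are
    above their limits; otherwise keep the current mode."""
    return "sampled" if should_mitigate(indicators, limits) else mode
```

In a real stream computing runtime the mitigation would modify operator behaviour rather than a mode string, but the trigger logic is the same shape.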
Allocating multiple operand data areas of a computer instruction within a program buffer
The disclosure herein provides systems, methods, and computer program products for managing a plurality of operands in a computer instruction. To manage the plurality of operands, a data buffer manager executed by a processor receives information from a caller. The information relates to the plurality of operands. The data buffer manager also compares a free data area size to a requested minimum data area size of an operand identified by the information; selects an address when the requested minimum data area size is less than or equal to the free data area size; and inserts the operand at the address.
DYNAMIC BUFFER ALLOCATION
The present disclosure relates to a switch for a network, and specifically the dynamic allocation of buffer memory within the switch. A communication channel is established between the switch and a network device. The switch configures and allocates a portion of memory to a receive socket buffer for the established channel. Upon receipt of a signal from the network device, the switch allocates a second portion of memory to the receive socket buffer.
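The two allocation steps reduce to a receive buffer that starts at an initial size and grows on a peer signal. A minimal sketch with hypothetical names; a real switch would carve both portions from a shared memory pool rather than a plain integer counter:

```python
class SwitchChannel:
    """Sketch of a switch-side communication channel whose receive
    socket buffer is grown when the network device signals."""

    def __init__(self, initial_bytes: int):
        # First portion of memory, allocated when the channel
        # is established.
        self.rcvbuf_bytes = initial_bytes

    def on_peer_signal(self, extra_bytes: int):
        """Allocate a second portion of memory to the receive socket
        buffer upon receipt of a signal from the network device."""
        self.rcvbuf_bytes += extra_bytes
```

The signal-driven second allocation is what makes the scheme dynamic: memory is committed only when the peer indicates it is needed.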