Patent classifications
H04L47/29
METHOD FOR ALLOCATING RESOURCE FOR STORING VISUALIZATION INFORMATION, APPARATUS, AND SYSTEM
A method for allocating a resource for storing visualization information, an apparatus, and a system are provided. The method includes: a first network device determines a first queue based on a constraint condition, where the first queue is a queue that needs to be visualized. Then, the first network device allocates a first storage resource to the first queue, where the first storage resource is used to store visualization information of the first queue, and the visualization information is information used to visualize the first queue. Therefore, occupation of storage resources in the first network device is reduced.
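The saving described above comes from allocating visualization storage only for queues that satisfy the constraint condition, rather than for every queue. A minimal Python sketch; `needs_visualization` and `alloc` are hypothetical stand-ins for the unspecified constraint check and allocator:

```python
def allocate_visualization_storage(queues, needs_visualization, alloc):
    """Allocate a storage resource only for queues that satisfy the
    constraint condition; skipping the non-visualized queues is what
    reduces storage occupation on the device."""
    return {q: alloc(q) for q in queues if needs_visualization(q)}

# e.g. visualize every queue except a flagged one (illustrative policy)
storage = allocate_visualization_storage(
    ["q0", "q1", "q2"],
    needs_visualization=lambda q: q != "q1",
    alloc=lambda q: bytearray(64),  # 64-byte visualization record per queue
)
```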
Data transfer with multiple threshold actions
One example may include transmitting data between a client device and a server over a first channel; determining an error rate on at least one of the first channel and a second channel not mirrored with the first channel; when the error rate crosses a first error rate threshold, mirroring the first channel and the second channel; and when the error rate is between the first error rate threshold and a second error rate threshold that is different from the first error rate threshold, determining whether to continue or discontinue the mirroring of the first channel and the second channel.
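The two-threshold behavior reads naturally as hysteresis: above the first threshold mirroring is forced on, at or below the second it can be dropped, and in between the current state is re-evaluated. A hedged sketch; the threshold values and the keep-current-state policy in the middle band are assumptions:

```python
def mirror_decision(error_rate, currently_mirroring, high=0.05, low=0.01):
    """Decide whether the first and second channels should be mirrored.
    Above `high`: mirror. At or below `low`: discontinue. Between the two
    thresholds the abstract leaves the choice open; this sketch simply
    continues whatever is already in effect (hysteresis)."""
    if error_rate > high:
        return True
    if error_rate <= low:
        return False
    return currently_mirroring

state = False
for rate in (0.002, 0.08, 0.03, 0.005):
    state = mirror_decision(rate, state)
# after the burst at 0.08, mirroring persists through 0.03 and stops at 0.005
```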
Communication Method and Communication Apparatus
A communication method includes that a first network device generates indication information based on service usage of a first network slice and a service usage threshold of the first network slice, and sends the indication information to a second network device. The second network device controls resource pre-emption based on the indication information.
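A minimal sketch of the exchange, assuming the indication simply reports whether slice usage crossed its threshold and that the second device gates pre-emption on it; the abstract does not fix the content of the indication or the direction of the pre-emption policy:

```python
def build_indication(slice_usage, usage_threshold):
    # First network device: compare service usage of the slice against
    # its configured service usage threshold.
    return {"over_threshold": slice_usage >= usage_threshold}

def allow_preemption(indication):
    # Second network device: here, a slice that exceeded its usage
    # threshold becomes eligible for pre-emption (an assumed policy).
    return indication["over_threshold"]
```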
HARDWARE-BASED PACKET FLOW PROCESSING
Techniques are disclosed for processing data packets by a hardware-based networking device configured to disaggregate processing of data packets from hosts of a virtualized computing environment. The hardware-based networking device includes a hardware-based component implementing a plurality of behavioral models indicative of packet processing graphs for data flows in the virtualized computing environment. A data packet having a source from or destination to an endpoint in a virtual network of the virtualized computing environment is received. Based on determining that the data packet is a first packet of a data flow to or from the endpoint, one of the behavioral models is mapped to the data flow. The packet is modified in accordance with the mapped behavioral model. A state of the data flow is stored. Subsequent data packets of the data flow are processed based on the stored state.
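The split between the first packet (classification) and subsequent packets (cached state) can be sketched as a flow table, assuming a 3-tuple flow key and behavioral models represented as plain callables; real implementations would key on the full 5-tuple and run in hardware:

```python
flow_table = {}  # flow key -> behavioral model chosen for that flow (the stored state)

def process(pkt, models_by_proto):
    key = (pkt["src"], pkt["dst"], pkt["proto"])
    model = flow_table.get(key)
    if model is None:
        # First packet of the flow: map one of the behavioral models to it
        # and store the mapping so later packets skip classification.
        model = models_by_proto[pkt["proto"]]
        flow_table[key] = model
    return model(pkt)  # modify the packet per the mapped model

# hypothetical model: rewrite the destination (a NAT-like step)
models = {"tcp": lambda p: {**p, "dst": "192.0.2.10"}}
out = process({"src": "10.0.0.1", "dst": "203.0.113.5", "proto": "tcp"}, models)
```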
METHOD AND APPARATUS FOR HOP-BY-HOP FLOW CONTROL
The present disclosure relates to methods and apparatuses. According to some embodiments of the disclosure, a method performed by a communication device includes: receiving, from a base station, first configuration information indicating a threshold to configure the communication device, wherein the threshold is associated with data volume; determining, based on the threshold, whether congestion happens at the communication device; and transmitting a congestion indication to a first parent node via a Backhaul Adaptation Protocol (BAP) signaling message when it is determined that congestion happens at the communication device.
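The congestion check itself reduces to comparing buffered data volume against the base-station-configured threshold; the BAP signaling toward the parent node is sketched here as a callback, with all names illustrative:

```python
def on_data_volume_update(buffered_bytes, threshold_bytes, send_bap_indication):
    """Determine congestion from the configured data-volume threshold and,
    if congested, signal the first parent node. The callback stands in
    for the BAP congestion-indication message."""
    congested = buffered_bytes > threshold_bytes
    if congested:
        send_bap_indication()
    return congested
```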
CLOUD-NATIVE WORKLOAD OPTIMIZATION
Techniques for orchestrating workloads based on policy to operate in optimal host and/or network proximity in cloud-native environments are described herein. The techniques may include receiving flow data associated with network paths between workloads hosted by a cloud-based network. Based at least in part on the flow data, the techniques may include determining that a utilization of a network path between a first workload and a second workload is greater than a relative utilization of other network paths between the first workload and other workloads. The techniques may also include determining that shortening the network path would optimize communications between the first workload and the second workload without adversely affecting communications between the first workload and the other workloads. The techniques may also include causing at least one of a redeployment or a network path re-routing to reduce the network path between the first workload and the second workload.
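At its core, the selection step amounts to finding the workload pair whose path utilization stands out, then applying a no-regression check before triggering redeployment or re-routing. A rough sketch; both the data shape and the `would_degrade_others` predicate are assumptions:

```python
def pick_relocation_candidate(path_utilization, would_degrade_others):
    """path_utilization maps (workload_a, workload_b) -> observed utilization.
    Return the highest-utilization pair if shortening its path would not
    adversely affect the other paths, else None."""
    pair = max(path_utilization, key=path_utilization.get)
    return pair if not would_degrade_others(pair) else None
```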
Method and System for Effective Use of Internal and External Memory for Packet Buffering within a Network Device
A mechanism is provided to maximize utilization of internal memory for packet queuing in network devices while making effective use of both internal and external memory to achieve high performance and high buffering scalability with minimal power utilization. Embodiments initially store packet data received by the network device in queues supported by an internal memory. If internal memory utilization crosses a predetermined threshold, a background task performs memory reclamation by determining which queued packets should be targeted for transfer to an external memory. The selected queued packets are transferred to external memory and the internal memory is freed. Once internal memory consumption drops below a threshold, the reclamation task stops.
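The reclamation loop can be sketched as follows, with `pick_victim` standing in for the unspecified policy that selects which queued packets to transfer, and a single threshold assumed for both the start and stop conditions:

```python
def reclaim_internal_memory(internal_queues, external_queues, threshold, pick_victim):
    """Background task sketch: while internal-memory consumption is above
    the threshold, move packets from a victim queue to external memory and
    free the internal copy; stop once consumption drops to the threshold."""
    def usage():
        return sum(len(q) for q in internal_queues.values())

    while usage() > threshold:
        qid = pick_victim(internal_queues)               # policy hook
        pkt = internal_queues[qid].pop(0)                # free internal memory
        external_queues.setdefault(qid, []).append(pkt)  # spill externally

# hypothetical policy: spill from the deepest queue first
deepest = lambda qs: max(qs, key=lambda k: len(qs[k]))
```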
Congestion control method and network device
A network device adds a fixed value to a congestion threshold (CT) when a first period ends. When a second period ends, the device detects whether the difference obtained by subtracting the average traffic load of a queue in the first period from the average traffic load of the queue in the second period is greater than a target increase value, and sets the CT based on the detection result, where the first period precedes the second period. The device marks a received packet when the quantity of packets buffered in the queue is greater than the CT, enqueues the marked packet, and sends the marked packet to a receiving device.
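One way to read the probe-and-check loop: raise the CT by a fixed step at the end of the first period, then keep the raise only if queue load actually grew by more than the target increase. The keep/revert policy is an assumption where the abstract says only that the CT is "set based on a detection result":

```python
def update_congestion_threshold(ct, step, avg_load_p1, avg_load_p2, target_increase):
    """End of first period: CT was raised to ct + step. End of second
    period: keep the raised CT if load grew by more than target_increase,
    otherwise fall back to the original value (assumed policy)."""
    if avg_load_p2 - avg_load_p1 > target_increase:
        return ct + step
    return ct

def maybe_mark(pkt, buffered_count, ct):
    # Mark the packet (ECN-style) when the queue holds more than CT packets;
    # the marked packet is still enqueued and forwarded to the receiver.
    if buffered_count > ct:
        pkt = {**pkt, "marked": True}
    return pkt
```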
Packet forwarding method and apparatus
A packet forwarding method to shorten a transmission latency of an elephant flow is provided. In the method, for a first packet flow used as an elephant flow, a network device may receive a plurality of packets of the first packet flow, and determine a characteristic parameter of the first packet flow based on the plurality of packets, where the characteristic parameter of the first packet flow is used to indicate a transmission latency of the first packet flow. After determining the characteristic parameter of the first packet flow, the network device determines a forwarding policy of the first packet flow based on the characteristic parameter of the first packet flow. The forwarding policy of the first packet flow is used to indicate latency sensitivity of the first packet flow.
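A toy version of the decision, with a mean per-packet latency standing in for the characteristic parameter; the abstract does not say how the parameter is actually computed, so both the statistic and the policy names are assumptions:

```python
def choose_forwarding_policy(packet_latencies, sensitivity_threshold):
    """Derive a characteristic parameter for the elephant flow from a
    plurality of sampled packets (here, mean latency: an assumption) and
    select a policy reflecting the flow's latency sensitivity."""
    characteristic = sum(packet_latencies) / len(packet_latencies)
    if characteristic > sensitivity_threshold:
        return "latency-sensitive"  # e.g. forward on a shorter path
    return "best-effort"
```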
Method and apparatus for determining packet dequeue rate
A method for determining a packet dequeue rate includes: allocating a plurality of consecutive blocks in a first memory to a first packet; storing, in the plurality of blocks, the first packet and a first length, where the first length is a length of a first packet queue obtained when the first packet is enqueued into the first packet queue; and determining, based on a first span and the stored first length, a first rate at which packets in the first packet queue are dequeued, where the first span is equal to a difference between a second time and a first time, the first time being when the first packet is enqueued into the first packet queue and the second time being when the first packet is dequeued from the first packet queue.
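Reading the abstract literally: the packets ahead of the first packet at enqueue time (the first length) all left the queue during the first span, so the dequeue rate is their ratio. A sketch, with units assumed to be packets per second:

```python
def packet_dequeue_rate(length_at_enqueue, t_enqueue, t_dequeue):
    """first span = t_dequeue - t_enqueue; the queue drained
    `length_at_enqueue` packets over that span, giving the first rate."""
    first_span = t_dequeue - t_enqueue
    return length_at_enqueue / first_span

# e.g. 500 packets ahead at enqueue, dequeued 0.25 s later -> 2000 pkt/s
rate = packet_dequeue_rate(500, t_enqueue=10.00, t_dequeue=10.25)
```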