Patent classifications
H04L49/9047
Vehicular micro clouds for on-demand vehicle queue analysis
The disclosure includes embodiments for a connected vehicle to form a vehicular micro cloud. In some embodiments, a method includes determining, by an onboard vehicle computer, that a queue is present in a roadway environment and that a vehicle that includes the onboard vehicle computer is present in the queue. The method includes causing a set of member vehicles to form a vehicular micro cloud in the roadway environment responsive to determining that the queue is present in the roadway environment, so that determining that the queue is present triggers a formation of the vehicular micro cloud, where the vehicular micro cloud includes a set of member vehicles that each share all of their unused vehicular computing resources with one another to generate a pool of vehicular computing resources that exceeds the total vehicular computing resources of any single member vehicle and is used to benefit the set of member vehicles.
PACKET FORWARDING DEVICE AND QUEUE MANAGEMENT METHOD
A packet forwarding device and a queue management method are provided. The queue management method is applicable to a plurality of priority queues each associated with a different transmission priority. The queue management method includes: allocating at least one buffer from a free buffer pool to each of the priority queues; monitoring a number of dropped packets of an observation queue of the priority queues; and increasing a number of buffers for the observation queue and decreasing a number of buffers for at least one of the priority queues which has a lower transmission priority than the observation queue, according to the number of dropped packets.
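The rebalancing rule above can be sketched as follows; the drop threshold, the one-buffer-at-a-time step, and all names are assumptions for illustration:

```python
# Illustrative sketch: when an observed priority queue drops packets, grow
# its buffer count and shrink that of a lower-priority queue.
def rebalance(buffers: dict[int, int], observed: int, dropped: int,
              threshold: int = 0) -> dict[int, int]:
    """buffers maps priority (higher number = higher priority) to buffer count."""
    if dropped <= threshold:
        return buffers
    new = dict(buffers)
    # find a lower-priority queue that still has a spare buffer to give up
    for prio in sorted(p for p in new if p < observed):
        if new[prio] > 1:
            new[prio] -= 1
            new[observed] += 1
            break
    return new

state = {3: 4, 2: 4, 1: 4}                 # priority -> allocated buffers
state = rebalance(state, observed=3, dropped=7)
# -> {3: 5, 2: 4, 1: 3}: the lowest-priority queue gives up a buffer
```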
Data packet processing method and apparatus, and device
Embodiments of the present invention disclose a data packet processing method and apparatus, and a device. The method includes: if a first data packet is received, determining a first cache queue that is in a first buffer and that is used to store the first data packet; buffering the first data packet in a second buffer if a state of the first cache queue is an invalid state, where a data amount of the first data packet is less than a capacity of the second buffer, and the state of the first cache queue is set to the invalid state when a current data amount of the first buffer reaches a capacity of the first buffer; and if a data amount of the second buffer reaches the capacity of the second buffer, sending all data packets that are in the second buffer to a control plane device.
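A rough sketch of the two-buffer scheme, with packet counts standing in for data amounts and all names assumed:

```python
# Hypothetical model: packets for a cache queue marked invalid (its primary
# buffer is full) are staged in a secondary buffer, which is flushed to the
# control plane device once it fills.
class TwoBufferForwarder:
    def __init__(self, primary_capacity: int, secondary_capacity: int):
        self.primary_capacity = primary_capacity
        self.secondary_capacity = secondary_capacity
        self.primary: list[bytes] = []
        self.secondary: list[bytes] = []
        self.sent_to_control_plane: list[bytes] = []

    def queue_invalid(self) -> bool:
        # the cache queue is invalidated once the primary buffer is full
        return len(self.primary) >= self.primary_capacity

    def receive(self, pkt: bytes) -> None:
        if not self.queue_invalid():
            self.primary.append(pkt)
            return
        self.secondary.append(pkt)
        if len(self.secondary) >= self.secondary_capacity:
            # secondary buffer full: send all its packets to the control plane
            self.sent_to_control_plane.extend(self.secondary)
            self.secondary.clear()

fwd = TwoBufferForwarder(primary_capacity=2, secondary_capacity=2)
for p in [b"p1", b"p2", b"p3", b"p4"]:
    fwd.receive(p)
# p1, p2 fill the primary; p3, p4 go to the secondary and are flushed
```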
PAGE FAULT SUPPORT FOR VIRTUAL MACHINE NETWORK ACCELERATORS
Systems and methods for supporting page faults for virtual machine network accelerators. In one implementation, a processing device may receive, at a network accelerator device of a computer system, from a network, a first incoming packet and a second incoming packet. Responsive to receiving a first notification that an attempt to store the first incoming packet at a first buffer of a plurality of buffers associated with the network accelerator device caused a page fault, the processing device may store the first incoming packet at a second buffer and append a first identifier of the first buffer to a faulty buffer data structure. Responsive to receiving a second notification indicating a resolution of the page fault, the processing device may remove the first identifier from the faulty buffer data structure. The processing device may store the second incoming packet at the first buffer. The processing device may forward, to a driver of the network accelerator device, a second identifier of the second buffer and the first identifier of the first buffer.
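The fault-handling flow can be sketched as below; the set-based "faulty buffer data structure" and all identifiers are inventions for illustration, not the patent's implementation:

```python
# Hypothetical model: a packet whose store attempt page-faults is redirected
# to a fallback buffer, and the faulty buffer id is parked until the fault
# is reported resolved.
class AcceleratorBuffers:
    def __init__(self):
        self.faulty: set[int] = set()        # the "faulty buffer data structure"
        self.stored: dict[int, bytes] = {}   # buffer id -> packet

    def store(self, packet: bytes, buffer_id: int, fallback_id: int,
              page_fault: bool) -> int:
        """Return the buffer id the packet actually landed in."""
        if page_fault:
            self.faulty.add(buffer_id)       # remember the faulting buffer
            self.stored[fallback_id] = packet
            return fallback_id
        self.stored[buffer_id] = packet
        return buffer_id

    def resolve_fault(self, buffer_id: int) -> None:
        # second notification: the page fault is resolved, buffer usable again
        self.faulty.discard(buffer_id)

bufs = AcceleratorBuffers()
used = bufs.store(b"pkt1", buffer_id=1, fallback_id=2, page_fault=True)
bufs.resolve_fault(1)
used2 = bufs.store(b"pkt2", buffer_id=1, fallback_id=3, page_fault=False)
# pkt1 landed in the fallback buffer 2; pkt2 then uses buffer 1 normally
```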
Multi-destination traffic handling optimizations in a network device
When a measure of buffer space queued for garbage collection in a network device grows beyond a certain threshold, one or more actions are taken to decrease an enqueue rate of certain classes of traffic, such as multicast traffic, whose reception may have caused and/or be likely to exacerbate garbage-collection-related performance issues. When the amount of buffer space queued for garbage collection shrinks to an acceptable level, these one or more actions may be reversed. In an embodiment, to more optimally handle multi-destination traffic, queue admission control logic for high-priority multi-destination data units, such as mirrored traffic, may be performed for each destination of the data units prior to linking the data units to a replication queue. If a high-priority multi-destination data unit is admitted to any queue, the high-priority multi-destination data unit can no longer be dropped, and is linked to a replication queue for replication.
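The threshold-and-reversal behavior described above is a hysteresis loop; a minimal sketch, with the watermarks and the 0.5× rate multiplier invented for illustration:

```python
# Hypothetical model: when buffer space pending garbage collection exceeds a
# high watermark, lower the multicast enqueue rate; restore it once the GC
# backlog drains below a low watermark (the "acceptable level").
def multicast_enqueue_rate(gc_backlog: int, high: int, low: int,
                           throttled: bool) -> tuple[float, bool]:
    """Return (rate multiplier, new throttled state)."""
    if gc_backlog > high:
        return 0.5, True                      # slow multicast admission
    if throttled and gc_backlog <= low:
        return 1.0, False                     # backlog drained: reverse action
    return (0.5, True) if throttled else (1.0, False)

rate, t = multicast_enqueue_rate(gc_backlog=900, high=800, low=200, throttled=False)
# backlog above the high watermark: throttled at 0.5
rate, t = multicast_enqueue_rate(gc_backlog=500, high=800, low=200, throttled=t)
# still above the low watermark: stays throttled
rate, t = multicast_enqueue_rate(gc_backlog=100, high=800, low=200, throttled=t)
# backlog acceptable again: full rate restored
```

The gap between the two watermarks prevents the device from oscillating between throttled and unthrottled states when the backlog hovers near a single threshold.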
Dynamic Reserve Allocation on Shared-Buffer
A network device includes multiple ports, packet processing circuitry, a memory and a reserved-memory management circuit (RMMC). The ports are to communicate packets over a network. The packet processing circuitry is to process the packets using a plurality of queues. The memory is to store a shared buffer. The RMMC is to allocate segments of the shared buffer to the queues, including allocating reserve segments of the shared buffer to selected queues that meet a reserve-allocation criterion.
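A minimal sketch of the RMMC's allocation step; the "lossless" criterion, the fixed reserve size, and all names are assumptions standing in for whatever reserve-allocation criterion the device actually applies:

```python
# Hypothetical model: queues meeting a reserve-allocation criterion get a
# dedicated reserve segment carved out of the shared buffer.
def allocate_reserves(shared_buffer: int, queues: dict[str, dict],
                      reserve_size: int) -> tuple[dict[str, int], int]:
    """Give each qualifying queue a reserve segment.

    Returns (reserve segments per queue, remaining shared buffer)."""
    reserves: dict[str, int] = {}
    remaining = shared_buffer
    for name, attrs in queues.items():
        # example criterion: the queue is marked lossless
        if attrs.get("lossless") and remaining >= reserve_size:
            reserves[name] = reserve_size
            remaining -= reserve_size
    return reserves, remaining

queues = {
    "q0": {"lossless": True},
    "q1": {"lossless": False},
    "q2": {"lossless": True},
}
reserves, shared_left = allocate_reserves(shared_buffer=1024, queues=queues,
                                          reserve_size=128)
# q0 and q2 each get a 128-unit reserve; 768 units stay in the shared pool
```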
BUFFER CONFIGURATION METHOD AND SWITCHING DEVICE
This application provides a buffer configuration method and a switching device, to ensure no packet loss, and belongs to the field of network technologies. The method includes: sending, by a first switching device, a first measurement frame to a second switching device; receiving, by the first switching device, a second measurement frame sent by the second switching device, where the second measurement frame is generated through triggering based on the first measurement frame; determining, by the first switching device, a buffer configuration parameter based on the first measurement frame and the second measurement frame; and setting, by the first switching device, a local buffer based on the buffer configuration parameter. This application is used to automatically configure a buffer of a switching device, thereby reducing buffer space while avoiding packet loss.
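One plausible reading of how the measurement-frame exchange yields a buffer configuration parameter is bandwidth-delay-product sizing; this sketch assumes that interpretation, and all names and timestamps are illustrative:

```python
# Hypothetical model: the round trip measured between the first frame (sent)
# and the second frame (received in reply) bounds how much in-flight data the
# local buffer must absorb for lossless operation: buffer = link_rate * RTT.
def buffer_config_bytes(t_sent_ns: int, t_reply_received_ns: int,
                        link_rate_bps: int) -> int:
    """Derive a buffer size from the measured round trip of the two frames."""
    rtt_s = (t_reply_received_ns - t_sent_ns) / 1e9
    return round(link_rate_bps / 8 * rtt_s)

# 10 Gb/s link, 8 microsecond round trip -> 10,000 bytes of local buffer
size = buffer_config_bytes(t_sent_ns=0, t_reply_received_ns=8_000,
                           link_rate_bps=10_000_000_000)
```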
Apparatus and buffer control method thereof in wireless communication system
A 5G or pre-5G communication system for supporting a higher data rate than a beyond-4G communication system such as LTE is provided. A method by an apparatus for controlling buffers in a wireless communication system comprises storing information related to a packet in at least one of a first buffer or a second buffer, transmitting data generated based on the packet, and, when an acknowledgement signal is received for the data, discarding the stored information.
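The store-transmit-discard rule above can be sketched as follows, with sequence numbers and all names assumed for illustration:

```python
# Hypothetical model: packet-related information is held in one of two
# buffers until the transmitted data is acknowledged, then discarded.
class DualBufferStore:
    def __init__(self):
        self.first: dict[int, bytes] = {}    # sequence number -> packet info
        self.second: dict[int, bytes] = {}

    def store(self, seq: int, info: bytes, use_second: bool = False) -> None:
        (self.second if use_second else self.first)[seq] = info

    def on_ack(self, seq: int) -> None:
        # acknowledgement received for the data: discard the stored info
        self.first.pop(seq, None)
        self.second.pop(seq, None)

store = DualBufferStore()
store.store(1, b"hdr1")
store.store(2, b"hdr2", use_second=True)
store.on_ack(1)
# only the unacknowledged seq 2 remains buffered
```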