Patent classifications
H04L47/6215
Increasing QoS throughput and efficiency through lazy byte batching
Described embodiments improve the performance of a computer network by selectively forwarding packets to bypass quality of service (QoS) processing, avoiding processing delays during critical periods of high demand; throughput and efficiency may be increased by sacrificing a small amount of QoS accuracy. QoS processing may be applied to a subset of packets of a flow or connection, referred to herein as “lazy” processing or lazy byte batching. Packets that bypass QoS processing may be immediately forwarded with the same QoS settings as packets of the flow for which QoS processing is applied, yielding substantial overhead savings with only a minimal decline in accuracy.
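The batching idea above can be illustrated with a minimal sketch: full QoS classification runs only once per batch of bytes of a flow, and intervening packets reuse the cached QoS class. The threshold, field names, and `classify` stand-in are illustrative assumptions, not details from the patent.

```python
BATCH_BYTES = 4096  # assumed batching threshold, chosen for illustration

def classify(packet):
    """Stand-in for expensive QoS processing (e.g., deep policy lookup)."""
    return "priority" if packet["port"] == 443 else "best-effort"

class LazyQoS:
    def __init__(self):
        # flow_id -> (cached QoS class, bytes seen since last classification)
        self.flows = {}

    def forward(self, packet):
        flow = packet["flow_id"]
        qos_class, batched = self.flows.get(flow, (None, BATCH_BYTES))
        batched += packet["len"]
        if batched >= BATCH_BYTES:
            # Perform real QoS processing only once per batch of bytes.
            qos_class, batched = classify(packet), 0
        self.flows[flow] = (qos_class, batched)
        return qos_class  # bypassing packets reuse the flow's cached class
```

Note how the second packet below skips classification entirely yet is forwarded with the flow's established QoS settings, which is the source of the overhead savings.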
Prioritized MSRP transmissions to reduce traffic interruptions
This technology enables prioritization of Multiple Stream Reservation Protocol (“MSRP”) transmissions in Audio Video Bridging (“AVB”) virtual local area networks (“VLANs”). An AVB switch receives a status from each listener device, associates a state with each status indicating whether the listener device is active or inactive, and stores each state in a database. For each listener device, a queue of MSRP protocol data unit (“PDU”) packets exists to be transmitted to that device. The AVB switch searches the database for listener devices with an active state, searches the queue of each active listener device for packets associated with the active state, and transmits those packets to each active listener device. Subsequently, the AVB switch searches each listener device's queue for packets associated with an inactive state and transmits those packets to each listener device.
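The two-pass transmission order can be sketched as follows. This is a simplified model, assuming one PDU queue per listener and a plain dictionary in place of the state database; class and method names are hypothetical.

```python
from collections import deque

class AvbSwitch:
    def __init__(self):
        self.states = {}   # listener_id -> "active" / "inactive"
        self.queues = {}   # listener_id -> deque of MSRP PDUs

    def report_status(self, listener, active):
        # Associate a state with the received status and store it.
        self.states[listener] = "active" if active else "inactive"
        self.queues.setdefault(listener, deque())

    def enqueue(self, listener, pdu):
        self.queues[listener].append(pdu)

    def transmit_round(self):
        sent = []
        # First pass: PDUs destined for active listeners.
        for listener, state in self.states.items():
            if state == "active":
                while self.queues[listener]:
                    sent.append((listener, self.queues[listener].popleft()))
        # Second pass: PDUs destined for inactive listeners.
        for listener, state in self.states.items():
            if state == "inactive":
                while self.queues[listener]:
                    sent.append((listener, self.queues[listener].popleft()))
        return sent
```

Active listeners drain first, so a burst of PDUs for inactive devices cannot delay traffic to devices that are actually listening.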
Distinguishing Traffic-Engineered Packets and Non-Traffic-Engineered Packets
Various embodiments provide for an indicator (termed the “Traffic Category Indicator,” or TCI) to be encoded into packets, different values of which can be used, e.g., to distinguish Traffic Engineered (TE) packets from non-TE packets. In an example embodiment, the TCI can be used, e.g., to configure a network node to implement different packet queues, on each link, for TE packets and non-TE packets. In embodiments corresponding to the DiffServ TE paradigm, a node can be configured to implement different queues within each Forwarding Class for each link, said different queues distinguished by different respective TCI values. Example benefits of the TCI include, but are not limited to, fate separation of TE and non-TE packets in a node. The TCI concept can beneficially be applied to different packet-switching technologies supporting Source Routing, such as IP, MPLS, and Ethernet.
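A minimal sketch of the per-link queue separation: each (link, forwarding class, TCI) triple selects its own queue, so TE and non-TE packets within the same Forwarding Class never share a queue. The field names and TCI encoding are illustrative assumptions.

```python
from collections import defaultdict, deque

TCI_NON_TE, TCI_TE = 0, 1  # assumed one-bit encoding for illustration

class Node:
    def __init__(self):
        # (link, forwarding_class, tci) -> dedicated packet queue
        self.queues = defaultdict(deque)

    def enqueue(self, link, packet):
        key = (link, packet["fc"], packet["tci"])
        self.queues[key].append(packet)

    def queue_depth(self, link, fc, tci):
        return len(self.queues[(link, fc, tci)])
```

Because the TCI is part of the queue key, congestion in the non-TE queue of a Forwarding Class cannot spill over into the TE queue on the same link, which is the fate-separation benefit noted above.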
Method of Managing Data Transmission for Ensuring Per-Flow Fair Bandwidth Sharing
A computer-implemented method includes receiving a data packet; identifying, from a list of virtual queues, the virtual queue to which the data packet pertains; and determining whether the size of the identified virtual queue exceeds a threshold maximum size. When the identified virtual queue's size does not exceed the threshold maximum size, the virtual queue is increased by the size of the data packet and the data packet is forwarded. The method further includes setting a virtual queue from the list of virtual queues as a target queue; determining a service capacity based on an update time interval; and increasing a credit allowance based on the service capacity. The target queue is reduced by an amount based on the credit allowance, and the credit allowance is reduced by the same amount.
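The admit-and-drain cycle can be sketched as below. This is an illustrative model only: the rate, interval, and the choice to reject a packet when the queue is over threshold are assumptions layered on the abstract's description.

```python
class VirtualQueue:
    """Per-flow virtual queue: a byte counter, not a real packet buffer."""

    def __init__(self, max_size):
        self.size = 0
        self.max_size = max_size

def admit(vq, packet_len):
    """Forward the packet only if the virtual queue is within its threshold."""
    if vq.size > vq.max_size:
        return False              # over threshold: do not forward
    vq.size += packet_len         # grow the virtual queue by the packet size
    return True

def drain(vq, credit, rate, interval):
    """Periodic update: grow the credit by the service capacity for the
    interval, then reduce the target queue and the credit by the same amount."""
    credit += rate * interval     # service capacity for this update interval
    served = min(vq.size, credit)
    vq.size -= served
    credit -= served
    return credit
```

Because each flow's virtual queue grows with its own arrivals but drains at a common service rate, flows that send faster than their fair share see their virtual queue cross the threshold first.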
Scheduling solution configuration method and apparatus, computer readable storage medium thereof, and computer device
A scheduling scheme configuration method includes performing state verification on a plurality of operation dimensions involved in generating a scheduling scheme, and, in response to one or more of the operation dimensions being abnormal, removing the one or more abnormal operation dimensions to generate a new scheduling scheme.
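The verify-remove-regenerate flow can be sketched as follows; `generate_schedule` and the abnormality predicate are hypothetical stand-ins, since the abstract does not specify how dimensions are verified or how schemes are generated.

```python
def generate_schedule(dimensions):
    """Stand-in for scheduling-scheme generation over the given dimensions."""
    return {"dims": sorted(dimensions)}

def configure(dimensions, is_abnormal):
    """Perform state verification on each operation dimension; remove any
    abnormal dimensions and generate the scheduling scheme from the rest."""
    healthy = [d for d in dimensions if not is_abnormal(d)]
    return generate_schedule(healthy)
```

The effect is that an abnormal dimension (say, an unavailable resource pool) simply drops out of the regenerated scheme instead of blocking scheduling altogether.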
Scheduling method and apparatus for a quality of service data flow
This application provides a scheduling method and an apparatus. The method includes: determining, by an application processor, a type of a to-be-sent data packet, and putting, by the application processor, the to-be-sent data packet into a quality of service (QoS) data flow corresponding to the type of the to-be-sent data packet, where the type is a GBR type or a non-GBR type; and scheduling, by the application processor, a to-be-sent data packet in a QoS data flow corresponding to the GBR type to send the data packet to a modem in the terminal in which the application processor is located, and, after determining that a data transmission rate requirement of the GBR type is met, scheduling, by the application processor, a to-be-sent data packet in a QoS data flow corresponding to the non-GBR type to send that data packet to the modem.
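The GBR-first discipline can be sketched as below, assuming a per-tick byte budget and a fixed GBR rate target; both parameters and all names are illustrative, not from the application.

```python
from collections import deque

class AppProcessorScheduler:
    def __init__(self, gbr_bytes_per_tick):
        self.gbr = deque()        # QoS data flow for GBR-type packets
        self.non_gbr = deque()    # QoS data flow for non-GBR-type packets
        self.gbr_target = gbr_bytes_per_tick

    def put(self, packet, is_gbr):
        (self.gbr if is_gbr else self.non_gbr).append(packet)

    def tick(self, budget):
        """Return the packets handed to the modem this scheduling tick."""
        sent, gbr_sent = [], 0
        # Serve the GBR flow until its rate requirement is satisfied.
        while self.gbr and gbr_sent < self.gbr_target and budget > 0:
            p = self.gbr.popleft()
            sent.append(p)
            gbr_sent += p["len"]
            budget -= p["len"]
        # Only after the GBR requirement is met (or the GBR flow is empty)
        # does the non-GBR flow get the remaining budget.
        if gbr_sent >= self.gbr_target or not self.gbr:
            while self.non_gbr and budget > 0:
                p = self.non_gbr.popleft()
                sent.append(p)
                budget -= p["len"]
        return sent
```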
SYSTEM FOR QUEUING FLOWS TO CHANNELS
A system for queuing flows to channels.
Low-Latency Delivery of In-Band Telemetry Data
A network device includes processing circuitry and a plurality of ports. The ports connect to a communication network. The processing circuitry is configured to receive, via an input port, data packets and probe packets that are addressed to a common output port, to store the data packets in a first queue and the probe packets in a second queue, both the first queue and the second queue are served by the output port, to produce telemetry data indicative of a state of the network device, based on a processing path that the data packets traverse within the network device, to schedule transmission of the data packets from the first queue at a first priority, and schedule transmission of the probe packets from the second queue at a second priority higher than the first priority, and to modify the scheduled probe packets so as to carry the telemetry data.
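A compact sketch of the scheduling side: probes and data packets share one output port, probes drain at strictly higher priority, and each probe is stamped with the device's telemetry before transmission. The priority encoding and field names are assumptions for illustration.

```python
import heapq

class OutputPort:
    PROBE, DATA = 0, 1  # lower value drains first: probes beat data

    def __init__(self):
        self.heap = []
        self.seq = 0    # tie-breaker preserving FIFO order within a priority

    def enqueue_data(self, packet):
        heapq.heappush(self.heap, (self.DATA, self.seq, packet))
        self.seq += 1

    def enqueue_probe(self, probe, telemetry):
        # Modify the probe so it carries the in-band telemetry data.
        probe = dict(probe, telemetry=telemetry)
        heapq.heappush(self.heap, (self.PROBE, self.seq, probe))
        self.seq += 1

    def transmit(self):
        return heapq.heappop(self.heap)[2]
```

Because probes never wait behind queued data packets, the telemetry they carry reaches the collector with low latency even when the data queue is congested.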
HARDWARE-IMPLEMENTED TABLES AND METHODS OF USING THE SAME FOR CLASSIFICATION AND COLLISION RESOLUTION OF DATA PACKETS
Introduced here are approaches to classifying traffic that comprises data packets. For each data packet, a classification engine implemented on a computing device can identify an appropriate class from amongst multiple classes using a lookup table implemented in a memory. The memory could be, for example, static random-access memory (SRAM) as further discussed below. Moreover, the classification engine may associate an identifier with each data packet that specifies the class into which the data packet has been assigned. For example, each data packet could have an identifier appended thereto (e.g., in the form of metadata). Then, the data packets can be placed into queues based on the identifiers. Each queue may be associated with a different identifier (and thus a different class).
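The classify-tag-enqueue pipeline can be sketched with a plain dictionary standing in for the SRAM lookup table; the keys, class names, and default class below are illustrative assumptions.

```python
from collections import deque

# Lookup table mapping (protocol, destination port) to a class identifier;
# a real implementation would hold this table in SRAM as described above.
LOOKUP = {(6, 443): "tls", (17, 53): "dns"}
DEFAULT_CLASS = "other"

# One queue per class identifier.
queues = {cls: deque() for cls in (*LOOKUP.values(), DEFAULT_CLASS)}

def classify_and_enqueue(packet):
    cls = LOOKUP.get((packet["proto"], packet["dport"]), DEFAULT_CLASS)
    packet["class_id"] = cls   # identifier appended to the packet as metadata
    queues[cls].append(packet)
    return cls
```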
Incremental data processing
Incremental data processing at a computerized device includes determining a number of data sets from a plurality of data sets, each comprising values in at least two dimensions. The device accesses priority lists for a subset of the data sets. The priority lists specify data values for an ordered number of dimension value sets. Each priority list is sequentially processed to determine the specified data values for combinations of dimension values that apply to device requirements. Processing is aborted when a data value is determined for each combination of the dimension values that apply to the device requirements. A data value is selected among the determined data values. A number of data sets is determined based on the selected data values. A network route from a source device to a target device can be determined in this manner.
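The abort-early aspect can be sketched as follows, assuming each priority list is an ordered sequence of (dimension-value combination, data value) pairs and that the selection step picks the minimum value (the abstract leaves the selection criterion open).

```python
def select_value(priority_lists, required_combos):
    """Process priority lists sequentially, aborting as soon as a data value
    has been determined for every required combination of dimension values;
    then select (here: the minimum of) the determined values."""
    found = {}
    for plist in priority_lists:               # sequential processing
        for combo, value in plist:
            if combo in required_combos and combo not in found:
                found[combo] = value
        if len(found) == len(required_combos): # every combination covered:
            break                              # abort remaining processing
    return min(found.values()) if found else None
```

In a routing setting, each combination might be a (link, metric) pair and the selected value a route cost from the source device toward the target device.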