Patent classifications
H04L47/129
SYSTEMS AND METHODS FOR ROUTING NETWORK MESSAGES
Networks and methods are provided for use in directing retry requests for content based on intervals to retry defined by network conditions. One example method generally includes receiving, from a computing device, an application programming interface (API) request for content and determining whether the API request exceeds a predefined rate limit of API requests. The method then includes, in response to the API request exceeding the predefined rate limit, calculating a retry interval for the API request based on the predefined rate limit of API requests and a number of expected API requests for an upcoming interval, appending the retry interval to a failure notice, and transmitting the failure notice to the computing device thereby indicating to the computing device to retry the API request based on the retry interval rather than immediately or rather than at another preset interval of the computing device.
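The retry-interval mechanism described above can be sketched as a fixed-window rate limiter that attaches the computed interval to the failure notice. The class name, window model, and forecast heuristic (using the last window's count as the expected load) are illustrative assumptions, not details from the patent:

```python
import time

class RateLimitedAPI:
    """Sketch: fixed-window rate limiter that, when the limit is
    exceeded, computes a retry interval from the rate limit and the
    number of expected requests in the upcoming window."""

    def __init__(self, rate_limit, window_secs=1.0):
        self.rate_limit = rate_limit      # max API requests per window
        self.window_secs = window_secs
        self.window_start = time.monotonic()
        self.count = 0                    # requests seen this window
        self.expected_next = 0            # forecast for upcoming window

    def handle(self, request):
        now = time.monotonic()
        if now - self.window_start >= self.window_secs:
            # Roll the window; naively forecast the upcoming window's
            # load from the one just finished.
            self.expected_next = self.count
            self.count = 0
            self.window_start = now
        self.count += 1
        if self.count <= self.rate_limit:
            return {"status": 200, "body": f"content for {request}"}
        # Over the limit: spread retries across however many windows the
        # expected backlog needs, and append the interval to the notice.
        backlog = self.count + self.expected_next
        windows_needed = max(1, -(-backlog // self.rate_limit))  # ceil div
        return {"status": 429,
                "retry_after": windows_needed * self.window_secs}
```

A client honoring the failure notice would wait `retry_after` seconds before retrying, rather than retrying immediately or on its own preset schedule.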
SWITCH FABRIC PACKET FLOW REORDERING
An ingress fabric endpoint coupled to a switch fabric within a network device reorders packet flows based on congestion status. In one example, the ingress fabric endpoint receives packet flows for switching across the switch fabric. The ingress fabric endpoint assigns each packet for each packet flow to a fast path or a slow path for packet switching. The ingress fabric endpoint processes, to generate a stream of cells for switching across the switch fabric, packets from the fast path and the slow path to maintain a first-in-first-out ordering of the packets within each packet flow. The ingress fabric endpoint switches a packet of a first packet flow after switching a packet of a second packet flow despite receiving the packet of the first packet flow before the packet of the second packet flow.
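The fast-path/slow-path behavior described above can be sketched with per-flow sequence numbers; the queue model and all names are illustrative, not from the patent:

```python
from collections import deque, defaultdict

class IngressEndpoint:
    """Sketch of per-flow FIFO ordering across independent fast and slow
    paths. Packets are (flow_id, seq); seq is assigned per flow on arrival."""

    def __init__(self):
        self.fast = deque()
        self.slow = deque()
        self.next_seq = defaultdict(int)   # next seq to assign, per flow
        self.next_emit = defaultdict(int)  # next seq to emit, per flow
        self.held = defaultdict(dict)      # out-of-order packets, per flow

    def receive(self, flow_id, congested):
        pkt = (flow_id, self.next_seq[flow_id])
        self.next_seq[flow_id] += 1
        (self.slow if congested else self.fast).append(pkt)

    def _try_emit(self, pkt, out):
        flow, seq = pkt
        if seq == self.next_emit[flow]:
            out.append(pkt)
            self.next_emit[flow] += 1
            # Release any held packets now in order for this flow.
            while self.next_emit[flow] in self.held[flow]:
                out.append(self.held[flow].pop(self.next_emit[flow]))
                self.next_emit[flow] += 1
        else:
            self.held[flow][seq] = pkt  # an earlier packet is still queued

    def drain(self):
        """Fast path first, then slow; flows may interleave freely, but
        each flow's packets come out in arrival order."""
        out = []
        while self.fast:
            self._try_emit(self.fast.popleft(), out)
        while self.slow:
            self._try_emit(self.slow.popleft(), out)
        return out
```

Draining the fast path first lets a later-arriving flow's packet be switched ahead of an earlier-arriving packet stuck on the slow path, while each individual flow still leaves in first-in-first-out order.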
RECEIVER-BASED PRECISION CONGESTION CONTROL
Examples described herein relate to a network agent, when operational, to: receive a packet, determine transmit rate-related information for a sender network device based at least on operational and telemetry information accumulated in the received packet, and transmit the transmit rate-related information to the sender network device. In some examples, the network agent includes a network device coupled to a server, a server, or a network device. In some examples, the operational and telemetry information comprises: telemetry information generated by at least one network device in a path from the sender network device to the network agent.
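A minimal sketch of the receiver-side rate computation, assuming HPCC-style per-hop in-band telemetry; the field names and the utilization heuristic are illustrative, not from the patent:

```python
def compute_rate_feedback(packet, link_capacity_bps):
    """Receiver-side sketch: derive a transmit rate for the sender from
    per-hop telemetry accumulated in the packet, in the spirit of
    INT-based schemes such as HPCC."""
    # Each hop appended (observed_tx_bps, queue_depth_bytes).
    worst_util = 0.0
    for tx_bps, queue_depth in packet["telemetry"]:
        # Count queued bytes (as bits) against capacity so standing
        # queues push utilization above the raw transmit rate.
        util = (tx_bps + queue_depth * 8) / link_capacity_bps
        worst_util = max(worst_util, util)
    target_util = 0.95
    if worst_util > 0:
        new_rate = packet["sender_rate"] * (target_util / worst_util)
    else:
        new_rate = link_capacity_bps * target_util
    return {"dest": packet["src"],
            "rate_bps": min(new_rate, link_capacity_bps)}
```

The agent transmits the resulting rate-related information back to the sender network device, which adjusts its transmit rate accordingly.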
ADAPTIVE ENCODING NETWORK
Systems and methods of improving the functioning of a computer system by implementing an adaptive encoding network are disclosed. In some example embodiments, a computer system transmits a new encoding assignment representing an encoding of a value with a new code to a consensus server, receives an approval of the new encoding assignment from the consensus server, and, based on the receiving of the approval of the new encoding assignment from the consensus server, applies the new encoding assignment to the value in subsequent messages to one or more machines, with the applying of the new encoding assignment comprising including the new code of the new encoding assignment in the subsequent messages in association with the value.
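The propose-then-apply flow described above can be sketched as follows; the consensus server is reduced to a stub with an `approve` method, and all names are illustrative, not from the patent:

```python
class StubConsensus:
    """Illustrative stand-in for the consensus server: approves a new
    encoding assignment only if the code is not already taken."""

    def __init__(self):
        self.taken = set()

    def approve(self, assignment):
        if assignment["code"] in self.taken:
            return False
        self.taken.add(assignment["code"])
        return True


class EncodingClient:
    """Sketch: a machine proposes a new code for a value and only starts
    using it in outgoing messages once the consensus server approves, so
    every peer decodes messages the same way."""

    def __init__(self, consensus_server):
        self.consensus = consensus_server
        self.codes = {}  # value -> approved code

    def propose(self, value, new_code):
        assignment = {"value": value, "code": new_code}
        if self.consensus.approve(assignment):
            self.codes[value] = new_code
            return True
        return False

    def encode_message(self, value):
        # Approved values carry the new code in association with the
        # value; unapproved values are sent as-is.
        if value in self.codes:
            return {"code": self.codes[value], "value": value}
        return {"value": value}
```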
Independent buffer memory for network element
Technology is described for forwarding packets from a network element to a buffer node. A packet may be received at the network element. The network element may determine that packets stored in the buffer memory exceed a defined threshold for data size. The packet may be forwarded from the network element to the buffer node in a service provider environment for storage of the packet at the buffer node. The network element may receive the packet from the buffer node.
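A minimal sketch of the threshold-based offload, assuming a buffer-node object with `put`/`get` methods (an illustrative API, not the patent's):

```python
from collections import deque

class NetworkElement:
    """Sketch: keep packets locally until a byte threshold would be
    exceeded, then forward the overflow to a remote buffer node in the
    service provider environment and pull it back on dequeue."""

    def __init__(self, buffer_node, threshold_bytes):
        self.local = deque()
        self.local_bytes = 0
        self.remote = buffer_node
        self.threshold = threshold_bytes

    def receive(self, packet):
        if self.local_bytes + len(packet) > self.threshold:
            self.remote.put(packet)          # forward to buffer node
        else:
            self.local.append(packet)
            self.local_bytes += len(packet)

    def dequeue(self):
        if self.local:
            pkt = self.local.popleft()
            self.local_bytes -= len(pkt)
            return pkt
        return self.remote.get()             # receive packet back
```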
Apparatus and method for routing data in a switch
Apparatuses, methods and storage media associated with routing data in a switch are provided. In embodiments, the switch may include route lookup circuitry to determine a first set of output ports that are available to send a data packet to a destination node. The route lookup circuitry may further select, based on respective congestion levels associated with the first set of output ports, a plurality of output ports for a second set of output ports from the first set of output ports. An input queue of the switch may buffer the data packet and route information associated with the second set of output ports. The switch may further include route selection circuitry to select a destination output port from the second set of output ports, based on updated congestion levels associated with the output ports of the second set of output ports. Other embodiments may be described and/or claimed.
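The two-stage selection can be sketched as a pair of functions, one narrowing candidates by congestion levels at lookup time and one choosing the final port by updated levels at dequeue time (all names are illustrative, not from the patent):

```python
def lookup_ports(routing_table, dest, congestion, k=2):
    """Stage 1 (route lookup): from all output ports that reach `dest`,
    keep the k least-congested as the candidate set buffered with the
    packet in the input queue."""
    return sorted(routing_table[dest], key=lambda p: congestion[p])[:k]

def select_port(candidates, updated_congestion):
    """Stage 2 (route selection): at dequeue time, pick the candidate
    that is least congested according to the fresher readings."""
    return min(candidates, key=lambda p: updated_congestion[p])
```

Splitting the decision this way lets the final choice reflect congestion that changed while the packet sat in the input queue.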
MOBILE CORE DYNAMIC TUNNEL END-POINT PROCESSING
The present technology is directed to a system and method for using cloud-based processing to co-locate one or more tunnel end points, associated with mobile user generated traffic traversing a core network, with the serving machine located on the application provider network. The described system/method involves early-stage identification of the traffic flow (i.e., at the Packet Data Network Gateway device using the Application Detection and Control function) and dynamically instantiating an end point for the aforementioned traffic flow at the server where the application request is being served. The traffic is then tunneled directly to that endpoint, thus preventing decapsulated mobile traffic from traversing the provider network.
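A simplified sketch of the early-detection and endpoint co-location logic; the detection function and server map are illustrative stand-ins for the Application Detection and Control function and provider-side orchestration, not the patent's implementation:

```python
class CoreGateway:
    """Sketch: on the first packet of a flow, identify the application
    (ADC-style detection) and instantiate a tunnel endpoint co-located
    with the serving host, so later packets tunnel straight there."""

    def __init__(self, detect_app, app_servers):
        self.detect_app = detect_app       # payload -> application name
        self.app_servers = app_servers     # application -> serving host
        self.endpoints = {}                # flow -> tunnel endpoint host

    def route(self, flow_id, payload):
        if flow_id not in self.endpoints:
            app = self.detect_app(payload)             # early-stage detection
            self.endpoints[flow_id] = self.app_servers[app]
        # Tunnel encapsulated traffic directly to the co-located endpoint.
        return {"tunnel_to": self.endpoints[flow_id], "payload": payload}
```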
INCREASING QOS THROUGHPUT AND EFFICIENCY THROUGH LAZY BYTE BATCHING
Described embodiments improve the performance of a computer network by selectively forwarding packets to bypass quality of service (QoS) processing, avoiding processing delays during critical periods of high demand; throughput and efficiency may be increased by sacrificing a small amount of QoS accuracy. QoS processing may be applied to a subset of packets of a flow or connection, referred to herein as lazy processing or lazy byte batching. Packets that bypass QoS processing may be immediately forwarded with the same QoS settings as packets of the flow for which QoS processing is applied, resulting in substantial overhead savings with only a minimal decline in accuracy.
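The lazy byte batching idea can be sketched as a wrapper that runs full QoS classification once per batch of bytes and reuses the flow's cached settings in between (the batch size, names, and classifier interface are illustrative, not from the patent):

```python
class LazyQoS:
    """Sketch: run the expensive QoS classification only once per
    `batch_bytes` of a flow; packets in between bypass the QoS engine
    and are forwarded with the flow's cached settings."""

    def __init__(self, classify, batch_bytes=16384):
        self.classify = classify        # full QoS processing (expensive)
        self.batch_bytes = batch_bytes
        self.flows = {}                 # flow -> (settings, bytes_since)

    def process(self, flow_id, packet):
        settings, since = self.flows.get(flow_id, (None, self.batch_bytes))
        since += len(packet)
        if settings is None or since >= self.batch_bytes:
            settings = self.classify(flow_id, packet)   # full QoS path
            since = 0
        self.flows[flow_id] = (settings, since)
        return settings   # packet is forwarded with these settings
```

Raising `batch_bytes` trades QoS accuracy for fewer trips through the classifier, which is the overhead-versus-accuracy trade-off the embodiments describe.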