Patent classifications
H04L47/225
BANDWIDTH CONTROL INSIDE A SHARED NETWORK INTERFACE CARD
A smart network interface card (smartNIC) may receive first traffic for a first process configured with a first bandwidth limit. The smartNIC may receive second traffic for a second process configured with a second bandwidth limit, the second bandwidth limit corresponding to a larger value between a second transmit limit and a second receive limit associated with the second process. The smartNIC may queue the received traffic associated with the first process and the second process in a scheduler, the scheduler having a first set of queues configured to store traffic from the first process, and a second set of queues configured to store traffic from the second process. The smartNIC may forward queued traffic from the first set of queues or the second set of queues, a maximum amount of forwarded first process traffic corresponding to the first bandwidth limit minus an amount of forwarded second process traffic.
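The forwarding rule in this abstract (the first process may use at most its own limit minus whatever the second process actually forwarded) can be sketched as a per-interval budget calculation. All names and byte values below are illustrative, not taken from the patent:

```python
def schedule_interval(first_queue_bytes, second_queue_bytes,
                      first_limit, second_limit):
    """Toy per-interval scheduler for two processes sharing a NIC.

    Limits are bytes per scheduling interval. The second process is
    served up to its own limit; the first process is then capped at
    first_limit minus the bytes actually forwarded for the second.
    """
    forwarded_second = min(second_queue_bytes, second_limit)
    first_cap = max(0, first_limit - forwarded_second)
    forwarded_first = min(first_queue_bytes, first_cap)
    return forwarded_first, forwarded_second
```

With `first_limit=800` and the second process forwarding 300 bytes, the first process is held to 500 bytes even if more is queued.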
Systems and methods for adjusting a congestion window value of a content delivery network
Aspects of the present disclosure involve systems, methods, computer program products, and the like, for controlling a congestion window (CWND) value of a communication session of a content delivery network (CDN). In particular, a content server may analyze a request to determine or receive an indication of the type of content being requested. The content server may then set the initial CWND based on the type of content being requested. For example, the content server may set a relatively high CWND value for requested content that is not particularly large, such as image files or text, so that the data of the content is received at the client device quickly. For larger files, or files determined to have lower urgency, the initial CWND may be set at a lower value to ensure that providing the data of the content does not congest the link between the devices.
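A minimal sketch of selecting an initial CWND from the requested content type. The content-type table, segment counts, and MSS are assumed values for illustration only, not figures from the disclosure:

```python
MSS = 1460  # bytes per TCP segment (illustrative)

# Hypothetical mapping: small, latency-sensitive content starts high;
# large or less-urgent content starts conservatively.
INITIAL_CWND_SEGMENTS = {
    "image": 40,
    "text": 40,
    "video": 10,
    "archive": 10,
}

def initial_cwnd(content_type, default_segments=10):
    """Return an initial congestion window in bytes for a request."""
    segments = INITIAL_CWND_SEGMENTS.get(content_type, default_segments)
    return segments * MSS
```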
TECHNIQUES FOR DYNAMICALLY ALLOCATING RESOURCES IN A STORAGE CLUSTER SYSTEM
Various embodiments are directed to techniques for dynamically adjusting a maximum rate of throughput for accessing data stored within a volume of storage space of a storage cluster system based on the amount of that data that is stored within that volume. An apparatus includes an access component to monitor an amount of client data stored within a volume defined within a storage device coupled to a first node, and to perform a data access command received from a client device via a network to alter the client data stored within the volume; and a policy component to limit a rate of throughput at which at least the client data within the volume is exchanged as part of performance of the data access command to a maximum rate of throughput, and to calculate the maximum rate of throughput based on the stored amount.
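The abstract ties the maximum throughput to the amount of client data stored in the volume without fixing a formula. One plausible policy, shown purely as a sketch with assumed floor and ceiling rates, scales the cap linearly with volume fullness:

```python
def max_throughput(stored_bytes, volume_bytes,
                   floor_bps=10_000_000, ceiling_bps=1_000_000_000):
    """Illustrative policy-component calculation.

    An emptier volume gets the floor rate; a fuller one gets
    proportionally more, clamped to [floor_bps, ceiling_bps].
    """
    fraction = min(1.0, stored_bytes / volume_bytes)
    return floor_bps + fraction * (ceiling_bps - floor_bps)
```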
Interworking between variable capacity optical layer and ethernet/IP/MPLS layer
Systems and methods for coordinating an optical layer and a packet layer in a network, include a Software Defined Networking (SDN) Internet Protocol (IP) application configured to implement a closed loop for analytics, recommendations, provisioning, and monitoring, of a plurality of routers in the packet layer; and a variable capacity application configured to determine optical path viability, compute excess optical margin, and recommend and cause capacity upgrades and downgrades, by communicating with a plurality of network elements in the optical layer, wherein the SDN IP application and the variable capacity application coordinate activity therebetween based on conditions in the network. The activity is coordinated based on underlying capacity changes in the optical layer and workload changes in the packet layer.
METHODS AND APPARATUS FOR PERFORMANCE SCALING WITH PARALLEL PROCESSING OF SLIDING WINDOW MANAGEMENT ON MULTI-CORE ARCHITECTURE
Methods, apparatus, and articles of manufacture have been disclosed for performance scaling with parallel processing of sliding window management on multi-core architecture. An example apparatus includes at least one memory, instructions in the apparatus, and processor circuitry to at least one of execute or instantiate the instructions to partition a packet flow into two or more sub flows based on a packet flow distribution configuration, the two or more sub flows associated respectively with two or more sliding windows that are able to slide in parallel, provide the two or more sub flows to a buffer to schedule distribution of the two or more sub flows, dequeue the two or more sub flows from the buffer to one or more hardware cores, and transmit the two or more sub flows to a destination device.
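The partitioning step (one flow split into sub-flows, each with its own sliding window that can advance on a separate core) can be sketched as a hash partition over sequence numbers. The modulo distribution below is one assumed "packet flow distribution configuration", not the patent's:

```python
from collections import defaultdict

def partition_flow(packets, num_subflows):
    """Partition (seq, payload) packets into sub-flows.

    Each sub-flow gets a disjoint slice of the sequence space, so a
    per-sub-flow sliding window can slide independently, in parallel.
    """
    subflows = defaultdict(list)
    for seq, payload in packets:
        subflows[seq % num_subflows].append((seq, payload))
    return dict(subflows)
```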
Systems and methods for intelligent throughput distribution amongst applications of a User Equipment
A method of distributing throughput intelligently amongst a plurality of applications residing at a User Equipment (UE) is provided. The method includes receiving, at the UE, recommended bit rate (RBR) information from a network node, the RBR information indicating a throughput value allocated to the UE, allocating a codec rate from the allocated throughput value to at least one voice over internet protocol (VoIP) application from the plurality of applications, and allocating, from the remaining throughput value of the allocated throughput value, a bit rate to each of a plurality of non-VoIP applications from the plurality of applications, based on a corresponding throughput requirement associated with each of the plurality of non-VoIP applications.
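The two-stage allocation (reserve the VoIP codec rate first, then share the remainder among non-VoIP apps) can be sketched as follows. The demand-proportional split and all rates in kbps are assumptions for illustration; the abstract does not specify the sharing rule:

```python
def distribute_throughput(rbr_kbps, voip_codec_kbps, app_demands_kbps):
    """Split a recommended bit rate among a UE's applications.

    The VoIP codec rate is reserved first; the remainder is divided
    among non-VoIP apps in proportion to their stated demand.
    """
    remaining = max(0, rbr_kbps - voip_codec_kbps)
    total_demand = sum(app_demands_kbps.values())
    if total_demand == 0:
        return {app: 0 for app in app_demands_kbps}
    return {app: remaining * demand / total_demand
            for app, demand in app_demands_kbps.items()}
```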
CONVERGED AVIONICS DATA NETWORK
An avionics data network includes a network switch core configured for a time-sensitive networking (TSN) schema, and first and second sets of networking end nodes communicatively coupled with the network switch core. The first set of networking end nodes includes a first subset of networking end nodes configured for a TSN schema and a second subset of networking end nodes configured for a legacy Ethernet schema. The network switch core is configured to receive, from the first set of networking end nodes, a set of data frames, determine the respective schema of the set of data frames, forward the set of data frames to a predetermined queue on an egress port based on the determined respective schema, and transmit the set of data frames to an end node having a corresponding schema.
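A minimal sketch of the switch core's schema-to-queue mapping. A real switch would classify frames by VLAN tag or EtherType; here the frame carries its schema directly, and the schema names and queue numbers are assumptions:

```python
# Hypothetical predetermined egress-queue assignment per schema.
SCHEMA_QUEUE = {"tsn": 0, "legacy_ethernet": 1}

def egress_queue(frame):
    """Pick the predetermined egress queue for a frame's schema."""
    return SCHEMA_QUEUE[frame["schema"]]
```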
System and method for reducing congestion in a network
Systems and methods of communicating in a network use rate limiting. Rate limiting units (either receive side or transmit side) can perform rate limiting in response to (a) a maximum number of bytes that can be solicited over a first period of time being exceeded, (b) a maximum number of bytes that are outstanding over a second period of time being exceeded, or (c) a maximum number of commands that are outstanding over a period of time being exceeded, as part of CMD_RXRL. The CMD_RXRL can have three components: (a) max bytes, (b) outstanding bytes, and (c) outstanding commands. TXRL contains the max-bytes component, i.e., the maximum number of bytes that can be transmitted over a third period of time to match the speed of a receive link, or of any node or link through the network/fabric.
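The three CMD_RXRL checks can be sketched as one receive-side limiter that refuses to solicit data once any threshold would be exceeded. The class, counter handling, and thresholds are illustrative; a real implementation would age the counters per time window:

```python
class RxRateLimiter:
    """Sketch of the three receive-side rate-limit checks above."""

    def __init__(self, max_solicited, max_outstanding_bytes,
                 max_outstanding_cmds):
        self.max_solicited = max_solicited            # (a) max bytes
        self.max_outstanding_bytes = max_outstanding_bytes  # (b)
        self.max_outstanding_cmds = max_outstanding_cmds    # (c)
        self.solicited = 0
        self.outstanding_bytes = 0
        self.outstanding_cmds = 0

    def solicit(self, nbytes):
        """Admit a solicitation only if all three limits still hold."""
        ok = (self.solicited + nbytes <= self.max_solicited
              and self.outstanding_bytes + nbytes <= self.max_outstanding_bytes
              and self.outstanding_cmds + 1 <= self.max_outstanding_cmds)
        if ok:
            self.solicited += nbytes
            self.outstanding_bytes += nbytes
            self.outstanding_cmds += 1
        return ok

    def complete(self, nbytes):
        """A solicited command finished; retire its outstanding state."""
        self.outstanding_bytes -= nbytes
        self.outstanding_cmds -= 1
```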
System and method for autonomous and dynamic resource allocation in storage systems
Embodiments are described for autonomously and dynamically allocating resources in a distributed network based on forecasted a-priori CPU resource utilization, rather than a manual throttle setting. A multivariate (CPU idle %, disk I/O, network, and memory) rather than single-variable approach for Probabilistic Weighted Fuzzy Time Series (PWFTS) is used for forecasting compute resources. The dynamic throttling is combined with adaptive compute change-rate detection and correction. A single-spike detection and removal mechanism is used to prevent the application of too-frequent throttling changes. Such a method can be implemented for several use cases including, but not limited to: cloud data migration, replication to a storage server, system upgrades, bandwidth throttling in storage networks, and garbage collection.
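The single-spike removal step can be sketched as follows: a sample that jumps away from both of its neighbors is treated as a transient and replaced by the neighbor mean, so the throttle is not retuned for a one-off blip. The threshold and the neighbor-mean replacement are assumptions, not the patent's mechanism:

```python
def remove_single_spikes(samples, max_jump=20.0):
    """Replace one-sample spikes in a utilization series.

    A point is a spike if it differs from BOTH neighbors by more
    than max_jump; it is replaced with the mean of its neighbors.
    """
    cleaned = list(samples)
    for i in range(1, len(samples) - 1):
        prev, cur, nxt = samples[i - 1], samples[i], samples[i + 1]
        if abs(cur - prev) > max_jump and abs(cur - nxt) > max_jump:
            cleaned[i] = (prev + nxt) / 2
    return cleaned
```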
SLIDING WINDOW PROTOCOL FOR COMMUNICATION AMONG MORE THAN TWO PARTICIPANTS
A system includes multiple nodes that communicate among one another. Each of the multiple nodes includes at least one data storage container. The system also includes a sender sliding window (SSW) that controls sending of data from at least one node of the multiple nodes to at least one other node of the multiple nodes. The system further includes a receiver sliding window (RSW) that controls receiving of the data from the at least one node of the multiple nodes at the at least one other node of the multiple nodes. At least one of the SSW or the RSW is sharable amongst more than one node of the multiple nodes.
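A minimal sketch of a sender sliding window that is sharable among more than one node: senders draw sequence numbers from one lock-protected window, so their combined in-flight data is jointly bounded. The class shape, cumulative-ack semantics, and window size are assumptions for illustration:

```python
import threading

class SharedSenderWindow:
    """Sender sliding window (SSW) usable by multiple sending nodes."""

    def __init__(self, size):
        self.size = size
        self.base = 0        # oldest unacknowledged sequence number
        self.next_seq = 0    # next sequence number to hand out
        self.lock = threading.Lock()

    def try_send(self):
        """Return a sequence number, or None if the window is full."""
        with self.lock:
            if self.next_seq - self.base < self.size:
                seq = self.next_seq
                self.next_seq += 1
                return seq
            return None

    def ack(self, seq):
        """Cumulative acknowledgement slides the window forward."""
        with self.lock:
            if seq >= self.base:
                self.base = seq + 1
```

Any node holding a reference to the same `SharedSenderWindow` instance participates in the same flow-control budget.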