Patent classifications
H04L12/727
Apparatus and method for optimized route invalidation using modified no-path DAO signaling
Example apparatus and methods for optimized route invalidation using modified No-Path Destination Advertisement Object (DAO) signaling are disclosed. In one example method, a node switching its current parent sends a regular DAO message. Using the modified signaling, a common ancestor node generates a No-Path DAO (NPDAO) message on behalf of the switching node upon receiving the refreshed DAO over the alternate path. The common ancestor node reuses the same Path Sequence from the regular DAO from which the NPDAO is generated. The common ancestor node detects the routing anomaly via a next-hop mismatch on reception of the DAO and generates the NPDAO on behalf of the target node. The No-Path DAO traverses downward/downstream along the previous path.
Heuristics for selecting nearest zone based on ICA RTT and network latency
Described embodiments provide systems and methods for zone selection for distributed services. A device records latency data measured for interactions between each of a plurality of clients and a service hosted by servers in two or more zones. The device directs network communications from each of the plurality of clients to respective servers hosting the service based on the zones assigned to each of the plurality of clients. The device assigns clients to zones based on the recorded latency data. For example, the device identifies a grouping for a client, determines whether the recorded latency data indicates that latency for clients in the grouping is increasing faster than a threshold rate, and, responsive to that determination, selects the zone that a selected set of recorded latency data indicates has the lowest latency.
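The grouping-and-rate heuristic above can be sketched roughly as follows; the class name, the half-window rate estimate, and the 5 ms threshold are illustrative assumptions, not the patented method:

```python
from collections import defaultdict
from statistics import mean

class ZoneSelector:
    """Illustrative sketch: assign each client grouping to the zone with
    the lowest recorded latency, flagging groupings whose latency is
    increasing faster than a threshold rate."""

    def __init__(self, threshold_rate_ms=5.0):
        self.threshold_rate_ms = threshold_rate_ms
        # (grouping, zone) -> list of recorded latency samples in ms
        self.samples = defaultdict(list)

    def record(self, grouping, zone, latency_ms):
        self.samples[(grouping, zone)].append(latency_ms)

    def latency_increasing(self, grouping, zone):
        s = self.samples[(grouping, zone)]
        if len(s) < 2:
            return False
        # crude rate estimate: recent-half average minus older-half average
        half = len(s) // 2
        return mean(s[half:]) - mean(s[:half]) > self.threshold_rate_ms

    def select_zone(self, grouping, zones):
        # pick the zone whose recorded latency for this grouping is lowest
        def avg(zone):
            s = self.samples[(grouping, zone)]
            return mean(s) if s else float("inf")
        return min(zones, key=avg)
```

A production selector would use more robust trend estimation and per-client overrides; the point here is only the shape of the record/assign loop.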
Diversity routing to improve delay-jitter tradeoff in uncertain network environments
Systems and methods reduce delivery delay jitter in a delivery network. A processor identifies a plurality of routes between an originating node and a destination node. Each route has a respective mean delivery delay time and a respective delivery delay jitter. The processor solves a convex optimization problem for a plurality of values of delivery delay, thereby yielding a plurality of solutions. Each solution represents a corresponding allocation of traffic among the plurality of routes. Each allocation of traffic has a corresponding mean delivery delay time and a corresponding mean delivery delay jitter. The processor selects, from the plurality of solutions, a selected solution, which has a mean delivery delay jitter less than the delivery delay jitter of any route of the plurality of routes. Traffic is automatically distributed over the plurality of routes according to the allocation of traffic that corresponds to the selected solution.
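Under one simplifying model (an assumption made here, not taken from the patent) in which a flow's aggregate delay is the allocation-weighted sum of independent per-route delays, the jitter objective sum_i (p_i * sigma_i)^2 is convex, and the delay-jitter frontier described above can be sketched with a grid search over two routes:

```python
import math

def best_allocation(routes, delay_budget, steps=1000):
    """Grid-search sketch of the convex problem for two routes.
    routes: list of (mean_delay, jitter_std) pairs.  Model assumption:
    aggregate delay = sum_i p_i * D_i with independent route delays, so
    jitter^2 = sum_i (p_i * sigma_i)^2, a convex objective."""
    (m1, s1), (m2, s2) = routes
    best = None
    for k in range(steps + 1):
        p = k / steps
        mean = p * m1 + (1 - p) * m2
        if mean > delay_budget:
            continue  # allocation violates the delivery-delay constraint
        jitter = math.sqrt((p * s1) ** 2 + ((1 - p) * s2) ** 2)
        if best is None or jitter < best[1]:
            best = ((p, 1 - p), jitter, mean)
    return best  # (allocation, jitter, mean delay)
```

Sweeping `delay_budget` traces the frontier of solutions; under this model, two equal-jitter routes (sigma = 4) split 50/50 give jitter sqrt(8) ≈ 2.83, below either single route's jitter, which is the diversity effect the abstract claims.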
Weighted cost multipath packet processing
The disclosed systems and methods provide weighted cost multipath for packet processing devices. A method includes receiving a network packet for routing through one of a number of paths of a network switch device. The method also includes selecting, via a first function applied to the network packet, a record from a plurality of records corresponding to the number of paths, wherein each of the plurality of records includes a threshold, a first routing index, and a second routing index. The method also includes determining, via a second function applied to the network packet, a routing value within a predefined range of values. The method also includes choosing, from the selected record, the first routing index or the second routing index based on whether the routing value meets the threshold of the selected record. The method also includes routing the network packet based on the chosen routing index.
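A minimal sketch of the record/threshold scheme, using CRC32 and Adler-32 as stand-ins for the two functions (the actual hash functions and record layout in the device are not specified here):

```python
import zlib

def wcmp_route(packet_bytes, records, value_range=256):
    """Sketch of weighted cost multipath selection.
    records: list of (threshold, first_index, second_index) tuples
    corresponding to the paths; layout and hashes are illustrative."""
    # first function applied to the packet selects a record
    record = records[zlib.crc32(packet_bytes) % len(records)]
    threshold, first_index, second_index = record
    # second function yields a routing value within a predefined range
    routing_value = zlib.adler32(packet_bytes) % value_range
    # the record's threshold decides which of the two routing indices wins
    return first_index if routing_value < threshold else second_index
```

Tuning the per-record thresholds skews traffic between the two indices of each record, which is what makes the multipath distribution weighted rather than uniform.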
System and method for computational transport network-on-chip (NoC)
A system and method are disclosed for performing operations on data passing through the network-on-chip to reduce latency. Data transport thereby becomes an active component in the computation, improving overall system latency, bandwidth, and/or power.
Flexible scheduling of data transfers between computing infrastructure collections for efficient resource utilization
A data delivery service of a service provider may receive respective job specifications for different data transfer jobs between computing infrastructure collections (e.g., data centers). A job specification for a data transfer job may include an amount of data to be transferred for the data transfer job, one or more destinations of data transfers for the data transfer job, and/or one or more flexibility parameters for successful transfer of the data for the data transfer job (e.g., a deadline to transfer the data, available data delivery techniques). The data delivery service may determine a schedule for performing different data transfer jobs between two or more infrastructures based on an analysis of the amount of data to be transferred for each job, the destinations of the data transfer for each job, the flexibility parameters for each job (e.g., included in the respective job specifications), and the connectivity between computing infrastructure collections.
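As an illustration only, a toy earliest-deadline-first scheduler over a single shared link shows how deadlines and data sizes might interact; the field names and single-capacity model are assumptions, far simpler than the multi-destination, multi-technique service described:

```python
def schedule_jobs(jobs, link_capacity_gb_per_hr):
    """Toy earliest-deadline-first plan over one shared link.
    jobs: dicts with 'name', 'size_gb', 'deadline_hr' (hypothetical fields).
    Returns the ordered plan with finish times, plus any jobs that would
    miss their deadline under this plan."""
    plan, late = [], []
    t = 0.0
    for job in sorted(jobs, key=lambda j: j["deadline_hr"]):
        t += job["size_gb"] / link_capacity_gb_per_hr  # transfer time
        plan.append((job["name"], t))
        if t > job["deadline_hr"]:
            late.append(job["name"])
    return plan, late
```

A real scheduler would also weigh alternative delivery techniques and per-destination connectivity; the flexibility parameters in the job specification are what give it room to reorder work like this.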
Systems and methods for managing resources in a serverless workload
Various approaches for allocating resources to an application having multiple application components, at least one of which executes one or more functions, in a serverless service architecture include: identifying multiple routing paths, each associated with the same function service provided by one or more containers or serverless execution entities; determining traffic information for each routing path and/or a cost, a response time, and/or a capacity associated with the container or serverless execution entity on each routing path; selecting one of the routing paths and its associated container or serverless execution entity; and causing a computational user of the application to access the container or serverless execution entity on the selected routing path and execute the function(s) thereon.
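A rough sketch of the selection step, assuming hypothetical per-path fields for cost, response time, capacity, and current traffic, with a simple linear score standing in for whatever policy the system actually applies:

```python
def select_routing_path(paths, w_cost=1.0, w_latency=1.0):
    """Sketch: require spare capacity on the backing container or
    serverless execution entity, then score the remaining routing paths
    by cost and response time and pick the lowest score.
    Field names and the linear scoring are illustrative assumptions."""
    eligible = [p for p in paths if p["capacity"] > p["traffic"]]
    return min(eligible,
               key=lambda p: w_cost * p["cost"] + w_latency * p["response_time"])
```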
Method and Network Node for Obtaining Target Transmission Path
A method and network node for obtaining a target transmission path, where the method includes obtaining, by a first network node in a network domain, topology information of a plurality of network nodes on each path between an ingress node and an egress node that are in the network domain, obtaining, by the first network node, a transmission delay of each path according to the topology information, where the transmission delay of each path includes a sum of physical link delays between all network nodes on each path and node residence times of all the network nodes on each path, and determining, by the first network node, the target transmission path according to the transmission delay of each path.
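The delay computation lends itself to a direct sketch: the total transmission delay of a path is the sum of physical link delays between consecutive nodes plus the residence times of all nodes on the path (the data-structure shapes below are assumptions):

```python
def path_delay(path, link_delay, residence_time):
    """Total transmission delay of one path: the sum of physical link
    delays between consecutive nodes plus the node residence times of
    all nodes on the path."""
    links = sum(link_delay[(a, b)] for a, b in zip(path, path[1:]))
    return links + sum(residence_time[n] for n in path)

def target_path(paths, link_delay, residence_time):
    # the first network node picks the path with the smallest total delay
    return min(paths, key=lambda p: path_delay(p, link_delay, residence_time))
```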
Data forwarding method and device
This application discloses a data forwarding method and device. The method includes: obtaining a first data unit sequence stream by using a first logical ingress port, where the first data unit sequence stream includes at least one first data unit; determining, according to a preconfigured mapping relationship between at least one logical ingress port and at least one logical egress port, a first logical egress port corresponding to the first logical ingress port, where the at least one logical ingress port includes the first logical ingress port; adjusting a quantity of idle units in the first data unit sequence stream, so that a rate of an adjusted first data unit sequence stream matches a rate of the first logical egress port; and sending the adjusted first data unit sequence stream by using the first logical egress port.
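A simplified model of the idle-unit adjustment (assuming the stream fills a fixed time window whose unit count scales with port rate, and appending rather than interleaving idle units, unlike a real implementation) might look like:

```python
IDLE = "IDLE"  # stand-in marker for an idle unit

def adjust_idle_units(stream, ingress_rate, egress_rate):
    """Keep every data unit, but scale the idle-unit count so the
    adjusted stream fills the same time window at the egress rate:
    idles are inserted for a faster egress port, removed for a slower one."""
    data = [u for u in stream if u != IDLE]
    target_len = round(len(stream) * egress_rate / ingress_rate)
    n_idle = max(target_len - len(data), 0)
    return data + [IDLE] * n_idle
```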