Patent classifications
H04L12/727
Data forwarding method and device
This application discloses a data forwarding method and device. The method includes: obtaining a first data unit sequence stream by using a first logical ingress port, where the first data unit sequence stream includes at least one first data unit; determining, according to a preconfigured mapping relationship between at least one logical ingress port and at least one logical egress port, a first logical egress port corresponding to the first logical ingress port, where the at least one logical ingress port includes the first logical ingress port; adjusting a quantity of idle units in the first data unit sequence stream, so that a rate of an adjusted first data unit sequence stream matches a rate of the first logical egress port; and sending the adjusted first data unit sequence stream by using the first logical egress port.
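The forwarding flow in this abstract can be sketched as a small program. All names below (forward_stream, adjust_idle_units, the IDLE marker) are illustrative assumptions, not terms from the patent; the sketch only shows the two described steps, looking up the preconfigured ingress-to-egress mapping and rate-matching by changing the idle-unit count.

```python
IDLE = None  # illustrative marker for an idle unit in the sequence stream

def adjust_idle_units(stream, target_len):
    """Insert or drop idle units so the stream length matches the egress rate.

    Data units are preserved unchanged; only IDLE padding is adjusted."""
    data_units = [u for u in stream if u is not IDLE]
    if target_len < len(data_units):
        raise ValueError("egress rate too low for the data units present")
    return data_units + [IDLE] * (target_len - len(data_units))

def forward_stream(stream, ingress_port, port_map, egress_rates):
    """Map the logical ingress port to its preconfigured logical egress
    port, then rate-match the stream before sending it out."""
    egress_port = port_map[ingress_port]  # preconfigured mapping
    adjusted = adjust_idle_units(stream, egress_rates[egress_port])
    return egress_port, adjusted
```

Modelling the egress rate as a target stream length is a simplification; a real implementation would adjust idle units continuously against a clock.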
METHOD AND APPARATUS FOR PREFERRED PATH ROUTE INFORMATION DISTRIBUTION AND MAINTENANCE
A method implemented in a domain in a multi-domain network, comprising maintaining a link state database (LSDB) comprising information describing a topology of the domain, receiving, from a network element (NE) in an area of the domain, preferred path route (PPR) information describing a PPR from a source to a destination in the area, the PPR information comprising a PPR identifier (PPR-ID) and a plurality of PPR path description elements (PPR-PDEs) each representing an element on the PPR, and constructing an end-to-end path between the source and the destination based on the PPR information.
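The path-construction step above can be sketched as walking the ordered description elements and checking each hop against the LSDB. The PPRInfo class, build_path function, and the representation of the LSDB as a set of directed links are all assumptions for illustration, not structures from the underlying specification.

```python
from dataclasses import dataclass, field

@dataclass
class PPRInfo:
    ppr_id: str                               # PPR-ID identifying the route
    pdes: list = field(default_factory=list)  # ordered PDEs, one per element on the PPR

def build_path(lsdb_links, ppr):
    """Walk the ordered PDEs and confirm each consecutive hop exists in
    the LSDB, yielding the end-to-end path from source to destination."""
    path = [ppr.pdes[0]]
    for a, b in zip(ppr.pdes, ppr.pdes[1:]):
        if (a, b) not in lsdb_links:
            raise ValueError(f"no link {a}->{b} in LSDB")
        path.append(b)
    return path
```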
NETWORK DESIGN DEVICE, NETWORK DESIGN METHOD, AND NETWORK DESIGN PROCESSING PROGRAM
With a network design apparatus, a network design method, and a network design processing program, a network configuration is designed for a network in which a transfer apparatus is disposed at each of a plurality of communication hubs and the communication hubs are connected via a link by a link portion apparatus in the transfer apparatus. In design of a network configuration, a threshold value of an inter-end delay and the number of redundant paths are calculated for each line on the basis of topology information, line information, and design parameter information. A path candidate set is calculated for each line on the basis of the threshold value of the inter-end delay and the number of redundant paths.
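The candidate-set calculation described above can be sketched as: enumerate loop-free paths, keep those whose end-to-end delay stays under the threshold, and retain the required number of redundant paths. The graph encoding and function names are assumptions for illustration only.

```python
def simple_paths(adj, src, dst, path=None):
    """Enumerate all loop-free paths from src to dst in an adjacency dict."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in adj.get(src, []):
        if nxt not in path:
            yield from simple_paths(adj, nxt, dst, path)

def candidate_set(adj, delays, src, dst, delay_threshold, num_redundant):
    """Keep paths whose inter-end delay is within the threshold,
    shortest-delay first, up to the number of redundant paths."""
    def path_delay(p):
        return sum(delays[(a, b)] for a, b in zip(p, p[1:]))
    ok = [p for p in simple_paths(adj, src, dst)
          if path_delay(p) <= delay_threshold]
    return sorted(ok, key=path_delay)[:num_redundant]
```

Exhaustive path enumeration is only viable for small topologies; a production tool would use a k-shortest-paths algorithm instead.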
Accelerating computer network policy search
Systems and methods for accelerating computer network policy searching are provided. According to one embodiment, a packet is received by a policy search engine (PSE) of a packet processing device. A set of candidate policies is identified from among multiple policies of the packet processing device by screening the multiple policies by a speculation unit of the PSE based on metadata associated with the received packet. Finally, a matching policy for the received packet is identified by a policy search processor (PSP) of the PSE by executing policy-search-specific instructions and general purpose instructions.
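The two-stage search can be sketched as a coarse screening pass followed by an exact match. The predicate shapes here (a set of required metadata tags for speculation, exact field equality for the final match) and the function names are assumptions, not details from the patent.

```python
def screen(policies, metadata):
    """Speculation stage: keep only policies whose required metadata
    tags are all present on this packet, pruning the search space."""
    return [p for p in policies if p["tags"] <= metadata["tags"]]

def match(candidates, packet):
    """Exact stage: first candidate whose fields all match the packet wins."""
    for p in candidates:
        if all(packet.get(k) == v for k, v in p["fields"].items()):
            return p["name"]
    return None
```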
COMMUNICATION SYSTEM AND METHOD FOR APPLYING QUANTUM KEY DISTRIBUTION SECURITY FOR A TIME SENSITIVE NETWORK
A method includes identifying connections between plural components of a time sensitive network (TSN) that are interconnected via a predetermined connection plan. The method also includes determining quantum key distribution (QKD) information of the components. Also, the method further includes scheduling flows for the TSN based on the QKD information of the components.
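One way to read the scheduling step is that flows are placed only on links whose endpoints both report QKD capability. The sketch below encodes that reading; the data shapes, the round-robin placement, and all names are assumptions, since the abstract does not specify how the QKD information drives the schedule.

```python
def schedule_flows(flows, connections, qkd_info):
    """Assign each TSN flow to a link from the predetermined connection
    plan whose both endpoints support QKD, round-robin across such links."""
    secure_links = [(a, b) for a, b in connections
                    if qkd_info.get(a) and qkd_info.get(b)]
    schedule = {}
    for i, flow in enumerate(flows):
        if not secure_links:
            break  # no QKD-capable link available to carry the flow
        schedule[flow] = secure_links[i % len(secure_links)]
    return schedule
```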
Adaptive routing pipelines for variable endpoint performance
A system includes determination of a respective performance level associated with each of a plurality of endpoints assigned to a first routing pipeline, determination of a slow one of the plurality of endpoints based on the respective performance levels, and, in response to the determination of the slow one of the plurality of endpoints, instantiation of a second routing pipeline and assignment of the slow one of the plurality of endpoints to the second routing pipeline, wherein the first routing pipeline is to receive messages and to route a first plurality of the messages to the plurality of endpoints other than the slow one of the plurality of endpoints, and wherein the second routing pipeline is to receive the messages and to route a second plurality of the messages to the slow one of the plurality of endpoints.
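The pipeline split can be sketched as classifying endpoints by a performance level and isolating the slow ones. The latency threshold and all names below are illustrative assumptions; the patent does not say how "slow" is measured.

```python
SLOW_MS = 100.0  # assumed latency threshold for classifying an endpoint as slow

def split_pipelines(endpoint_latency_ms):
    """Assign slow endpoints to a second pipeline so they no longer
    stall message routing for the fast endpoints on the first pipeline."""
    fast = [e for e, ms in endpoint_latency_ms.items() if ms <= SLOW_MS]
    slow = [e for e, ms in endpoint_latency_ms.items() if ms > SLOW_MS]
    return {"pipeline1": fast, "pipeline2": slow}
```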
ACCELERATING MULTI-NODE PERFORMANCE OF MACHINE LEARNING WORKLOADS
Examples described herein relate to a network interface and at least one processor that is to indicate whether data is associated with a machine learning operation or non-machine learning operation to manage traversal of the data through one or more network elements to a destination network element and cause the network interface to include an indication in a packet of whether the packet includes machine learning data or non-machine learning data. In some examples, the indication in a packet of whether the packet includes machine learning data or non-machine learning data comprises a priority level and wherein one or more higher priority levels identify machine learning data. In some examples, for machine learning data, the priority level is based on whether the data is associated with inference, training, or re-training operations. In some examples, for machine learning data, the priority level is based on whether the data is associated with real-time or time insensitive inference operations.
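The priority scheme in this abstract reduces to a small classifier: non-ML traffic lowest, ML traffic graded by operation type and time sensitivity. The numeric levels and the function name are assumptions for illustration; only the ordering (real-time inference above training) follows the text.

```python
def packet_priority(is_ml, op=None, realtime=False):
    """Return an assumed priority level for the packet indication.

    Non-ML traffic gets the lowest level; ML traffic is graded so that
    real-time inference outranks time-insensitive inference, which in
    turn outranks training and re-training."""
    if not is_ml:
        return 0
    if op == "inference":
        return 3 if realtime else 2
    return 1  # training / re-training
```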
COMMUNICATION DEVICE, COMMUNICATION RELAY SYSTEM, AND MASTER STATION APPARATUS
A communication device according to an embodiment is capable of communicating with another communication device via a first network and a second network each transmitting radio signal data by different communication methods. The communication device includes: a first communicator capable of communicating with another communication device via the first network; a second communicator capable of communicating with another communication device via the second network; a delay parameter acquirer to acquire a delay parameter of the first network; and a delay parameter reflector to reflect the delay parameter of the first network acquired by the delay parameter acquirer on a delay parameter of the second network.
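Read minimally, the reflector copies the delay parameter acquired from the first network into the second network's configuration. The sketch below shows just that; the dictionary representation and function name are assumptions, and a real device would likely apply compensation rather than a direct copy.

```python
def reflect_delay(first_net, second_net):
    """Reflect the first network's acquired delay parameter onto the
    second network's delay parameter (minimal reading: a direct copy)."""
    second_net["delay_ms"] = first_net["delay_ms"]
    return second_net
```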
Zero-overhead data initiated AnyCasting
A communication node of a multi-node communication network includes a communication interface and a controller. In embodiments, the controller is configured to: receive a data packet from a first additional communication node; determine if the data packet comprises a time-sensitive data packet; determine if a length of the data packet is less than a length threshold; transmit the data packet via a conventional routing procedure to at least one second additional communication node of the multi-node communication network if the data packet comprises a non-time-sensitive data packet or if the length of the data packet is greater than the data packet length threshold; and transmit the data packet via a packet flooding procedure to the at least one second additional communication node if the data packet comprises a time-sensitive data packet and if the length of the data packet is less than the length threshold.
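The controller's forwarding decision reduces to a two-input predicate. The function name and the particular threshold value are illustrative assumptions; the decision logic itself follows the abstract.

```python
LENGTH_THRESHOLD = 64  # bytes; assumed value for illustration

def forwarding_mode(time_sensitive, length):
    """Flood short time-sensitive packets; route everything else
    via the conventional routing procedure."""
    if time_sensitive and length < LENGTH_THRESHOLD:
        return "flood"
    return "route"
```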
USER- AND APPLICATION-BASED NETWORK TREATMENT POLICIES
Systems, methods, and devices are disclosed for personalizing quality of service for network traffic. A user priority is assigned to a specific user and an application priority is assigned to a type of application. A header including an identifier is added to a packet from a client device associated with the type of application and the specific user in order to generate a modified packet. The identifier is based on a combination of the user priority associated with the specific user and an application priority. The modified packet is forwarded end to end through a network that is personalized to the specific user and the type of application by mapping a treatment policy to the identifier.
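The identifier and its policy mapping can be sketched as packing the two priorities into one header value that every hop resolves to a treatment policy. The bit-packing encoding, function names, and policy strings are assumptions; the abstract only says the identifier combines the two priorities.

```python
def make_identifier(user_priority, app_priority):
    """Pack user and application priority into one header identifier
    (assumed encoding: user priority in the high nibble)."""
    return (user_priority << 4) | app_priority

def treatment_policy(identifier, policy_map):
    """Each network hop maps the identifier back to a treatment policy,
    falling back to best-effort for unknown identifiers."""
    return policy_map.get(identifier, "best-effort")
```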