Patent classifications
H04L47/806
Flow Tagging for Service Policy Implementation
A flow tagging technique includes tagging a data flow at a plurality of points in the data flow. For example, the data flow can be tagged at a socket and at a proxy manager API. By tagging the data flow at multiple points, it becomes possible to map network service usage activities to the appropriate initiating applications.
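The multi-point tagging idea above can be sketched as a small bookkeeping structure. This is a minimal illustrative sketch, not the patented implementation: the names `FlowTagger`, `tag_flow`, and `attribute_usage`, and the use of a (host, port) pair as flow identifier, are all assumptions.

```python
class FlowTagger:
    """Tags a data flow at several observation points (e.g. at the
    socket and at a proxy manager API) so that network service usage
    can later be mapped back to the initiating application."""

    def __init__(self):
        # flow_id -> {tag_point: application}
        self.tags = {}

    def tag_flow(self, flow_id, tag_point, application):
        # Record a tag for this flow at one observation point.
        self.tags.setdefault(flow_id, {})[tag_point] = application

    def attribute_usage(self, flow_id):
        # Map the flow back to its initiating application; tags from
        # the different observation points corroborate each other here,
        # so any recorded tag answers the question.
        points = self.tags.get(flow_id, {})
        return next(iter(points.values()), None)


tagger = FlowTagger()
flow = ("10.0.0.2", 443)
tagger.tag_flow(flow, "socket", "browser")      # first tagging point
tagger.tag_flow(flow, "proxy_api", "browser")   # second tagging point
app = tagger.attribute_usage(flow)
print(app)  # -> browser
```

Tagging the same flow at two layers is what makes the attribution robust: if one layer loses visibility (for example, traffic tunneled through the proxy), the other tag still identifies the application.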
Delaycast queue prioritization
Systems and methods are described for optimizing resource utilization in a communications network while also optimizing subscriber engagement with media content over the communications network. Requested content objects can be identified as delayable objects that can be queued for opportunistically delayed communication to both requesting and non-requesting subscribers. Queued delayed content objects are scored with an eye toward optimizing both subscriber engagement and utilization of opportunistically available communications link resources. For example, a storage manager calculates a likelihood that each subscriber will engage with the content if it is opportunistically delivered, and a scheduler calculates a priority order in which to queue each requested delayable content object. Content objects can then be multicast to the subscribers in priority order and with associated information that can be used by the subscribers to determine whether to locally store the content objects as they are opportunistically received.
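The scoring-and-queuing step can be sketched with a priority heap. The engagement-likelihood values and the particular scoring formula (likelihood divided by object size) are assumptions for illustration; the patent does not specify them.

```python
import heapq

def score(obj):
    # Assumed scoring rule: higher engagement likelihood and smaller
    # size -> higher priority for opportunistic multicast delivery.
    return obj["engagement_likelihood"] / obj["size_mb"]

def build_delaycast_queue(objects):
    """Return delayable content objects in the priority order in which
    they would be multicast to subscribers."""
    heap = [(-score(o), o["name"]) for o in objects]  # max-heap via negation
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order


queue = build_delaycast_queue([
    {"name": "news_clip", "engagement_likelihood": 0.9, "size_mb": 10},
    {"name": "movie",     "engagement_likelihood": 0.6, "size_mb": 60},
    {"name": "trailer",   "engagement_likelihood": 0.8, "size_mb": 4},
])
print(queue)  # -> ['trailer', 'news_clip', 'movie']
```

A subscriber receiving the multicast would apply a similar engagement estimate locally to decide whether a given object is worth storing.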
DISTRIBUTION OF MULTICAST INFORMATION IN A ROUTING SYSTEM
A routing system for distributing multicast routing information for a multicast service includes a plurality of routers, including a multicast source router and a plurality of multicast receiver routers, that together provide the multicast service, wherein the routers are configured to exchange multicast information associated with the multicast service, including identification of the multicast sources and the multicast receivers.
CONVEYING NETWORK-ADDRESS-TRANSLATION (NAT) RULES IN A NETWORK
In one embodiment, a first networking device associated with a switched network comprises one or more processors and one or more computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform acts comprising configuring, on the first networking device, a network-address-translation (NAT) rule indicating that a first multicast group is to be translated to a second multicast group. The acts further include, at least partly in response to the configuring of the NAT rule, storing the NAT rule at the first networking device, generating a message indicating the NAT rule, and sending the message to at least a second networking device associated with the switched network.
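The configure-store-convey sequence can be sketched as follows. The `NatDevice` class, the JSON message layout, and the example group addresses are illustrative assumptions, not the message format of the patent.

```python
import json

class NatDevice:
    """A networking device that holds multicast NAT rules and conveys
    newly configured rules to its peers in the switched network."""

    def __init__(self, name):
        self.name = name
        self.rules = {}   # first multicast group -> second multicast group
        self.outbox = []  # (peer name, message) pairs already sent

    def configure_rule(self, first_group, second_group, peers):
        # Store the NAT rule locally ...
        self.rules[first_group] = second_group
        # ... then, in response to the configuring, generate a message
        # indicating the rule and send it to the other devices.
        msg = json.dumps({"type": "NAT_RULE",
                          "from": first_group, "to": second_group})
        for peer in peers:
            peer.receive(msg)
            self.outbox.append((peer.name, msg))

    def receive(self, msg):
        rule = json.loads(msg)
        if rule["type"] == "NAT_RULE":
            self.rules[rule["from"]] = rule["to"]

    def translate(self, group):
        return self.rules.get(group, group)


a, b = NatDevice("switch-a"), NatDevice("switch-b")
a.configure_rule("239.1.1.1", "239.2.2.2", peers=[b])
translated = b.translate("239.1.1.1")
print(translated)  # -> 239.2.2.2
```

Conveying the rule means the second device can translate matching multicast traffic consistently without being configured by hand.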
Stream allocation using stream credits
Systems and methods for allocating resources are disclosed. Resources such as streams are allocated using a stream credit system. Credits are issued to clients in a manner that ensures the system operates in a safe allocation state. The credits can be used not only to allocate resources but also to throttle clients where necessary. Credits can be granted in full, in part, or in a number greater than the request. Zero or negative credits can also be issued to throttle clients.
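A minimal sketch of the credit mechanism follows. Here "safe allocation state" is interpreted simply as "outstanding credits never exceed total stream capacity"; that interpretation, and the class and method names, are assumptions.

```python
class StreamCreditManager:
    """Issues stream credits to clients, granting fully or partially,
    and throttling (zero grant) when capacity is exhausted."""

    def __init__(self, total_streams):
        self.total = total_streams
        self.granted = {}  # client -> credits currently held

    def request_credits(self, client, wanted):
        outstanding = sum(self.granted.values())
        available = self.total - outstanding
        # Grant at most what keeps the system in a safe state.
        grant = min(wanted, available)
        self.granted[client] = self.granted.get(client, 0) + grant
        return grant

    def reclaim(self, client, count):
        # Credits are returned when a client closes its streams.
        self.granted[client] = max(0, self.granted.get(client, 0) - count)


mgr = StreamCreditManager(total_streams=10)
g1 = mgr.request_credits("a", 6)  # full grant
g2 = mgr.request_credits("b", 6)  # partial grant: only 4 remain
g3 = mgr.request_credits("c", 1)  # zero grant: client is throttled
print(g1, g2, g3)  # -> 6 4 0
```

Granting more credits than requested (not shown) would let a client open future streams without another round trip to the manager.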
Channel access indication method and device to avoid unnecessary probe delay
A channel access indication method and device. The method includes: receiving, by a first communications device, a channel synchronization request sent by a second communications device, where the channel synchronization request requests the first communications device to send a synchronization frame to the second communications device, and a wake-up receiver is configured for the second communications device; and, according to the channel synchronization request and a time at which the second communications device is woken up, which the first communications device learns from preset signaling, sending, by the first communications device when the channel is idle, the synchronization frame to the woken-up second communications device, where the synchronization frame instructs the woken-up second communications device to access the channel after receiving the frame.
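The timing constraint at the heart of the scheme can be sketched as a single function: the synchronization frame is sent no earlier than the request, no earlier than the moment the second device's wake-up receiver has woken it, and no earlier than the channel becoming idle. The function name and the concrete time values are illustrative assumptions.

```python
def schedule_sync_frame(request_time, wake_time, busy_until):
    """Earliest time the first device may send the synchronization
    frame: the second device must be awake and the channel idle."""
    return max(request_time, wake_time, busy_until)


# Second device requests at t=0 and wakes at t=5; channel busy until t=3.
send_t = schedule_sync_frame(request_time=0, wake_time=5, busy_until=3)
print(send_t)  # -> 5
```

Deferring the frame until the known wake time is what avoids the unnecessary probe delay in the title: the device never has to probe a channel it cannot yet hear.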
Establishing a Multicast Flow Path Through a Network Using Multicast Resources Currently Associated with a Different Multicast Flow Path
In one embodiment, resource availability reallocation is used in establishing one or more new designated multicast flow paths with guaranteed availability of resources currently allocated and/or used by one or more designated existing multicast flow paths, which are reallocated to the new designated flow path(s). These resources typically include allocated guaranteed bandwidth of a network path between two adjacent or non-adjacent nodes of the network, and possibly forwarding/processing/memory resources of a network node. One embodiment communicates multicast control messages between nodes that identify a new multicast flow path to establish with resource availability reallocation from a designated multicast flow path. In one embodiment, a Protocol Independent Multicast-Sparse Mode (PIM-SM) Join/Prune Message identifies Pruning of one or more multicast flow paths and Joining of one or more different multicast flow paths, designating resource availability reallocation from the Pruned multicast flow path(s) to the Joined multicast flow path(s).
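The effect of a Join/Prune that designates reallocation can be sketched as a bandwidth ledger on one link: the prune frees its guaranteed bandwidth in the same step in which the join reserves, so the new path cannot lose the freed capacity to a competitor. The `LinkResources` class, the `(S,G)` path labels, and the capacity numbers are illustrative assumptions.

```python
class LinkResources:
    """Tracks guaranteed bandwidth reservations on one network link."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.reserved = {}  # flow path id -> guaranteed bandwidth

    def join_prune(self, prune_path, join_path, bandwidth):
        """Atomically prune one flow path and join another, with
        resource availability reallocation from pruned to joined."""
        freed = self.reserved.pop(prune_path, 0)
        free = self.capacity - sum(self.reserved.values())
        if bandwidth > free:
            # Reallocation failed even counting the freed resources;
            # restore the pruned reservation rather than lose it.
            self.reserved[prune_path] = freed
            raise RuntimeError("insufficient bandwidth even after prune")
        self.reserved[join_path] = bandwidth
        return freed


link = LinkResources(capacity=100)
link.reserved["(S1,G1)"] = 80
# A plain 90-unit join would fail (only 20 free); with reallocation
# from the pruned path it succeeds.
freed = link.join_prune(prune_path="(S1,G1)", join_path="(S2,G1)",
                        bandwidth=90)
print(link.reserved)  # -> {'(S2,G1)': 90}
```

Processing the prune and join as one operation is the point of the combined message: two independent messages could leave a window in which another flow grabs the freed bandwidth.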
Network recovery systems and methods
A first network device is configured with a rule preventing network traffic from travelling from the first network device to one or more other network devices. The first network device is configured to receive and distribute network traffic to the one or more other network devices. A second network device receives and distributes network traffic to the one or more other network devices. The first network device determines that the second network device has failed. In response to determining that the second network device has failed, the first network device removes the rule so that the first network device receives and distributes network traffic to the one or more other network devices.
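The standby-and-failover behaviour above can be sketched directly. The `Device` class, the `alive` flag standing in for failure detection, and the downstream names are illustrative assumptions.

```python
class Device:
    """A network device that may carry a rule blocking it from
    distributing traffic to downstream devices."""

    def __init__(self, name, blocked=False):
        self.name = name
        self.blocked = blocked  # the rule preventing traffic distribution
        self.alive = True

    def distribute(self, traffic, downstream):
        if self.blocked:
            return []  # rule in place: receives but forwards nothing
        return [(d, traffic) for d in downstream]


def on_peer_failure(standby, primary):
    """When the standby determines the primary has failed, it removes
    its blocking rule and takes over distribution."""
    if not primary.alive:
        standby.blocked = False


primary = Device("second-device", blocked=False)
standby = Device("first-device", blocked=True)

before = standby.distribute("pkt", ["r1", "r2"])
print(before)  # -> [] while the primary is healthy

primary.alive = False
on_peer_failure(standby, primary)
after = standby.distribute("pkt", ["r1", "r2"])
print(after)  # -> [('r1', 'pkt'), ('r2', 'pkt')]
```

Keeping the standby fully configured and merely blocked makes recovery a single rule removal rather than a reconfiguration.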
Scalable in-network computation for massively-parallel shared-memory processors
A network device configured to perform scalable, in-network computations is described. The network device is configured to process pull requests and/or push requests from a plurality of endpoints connected to the network. A collective communication primitive from a particular endpoint can be received at a network device. The collective communication primitive is associated with a multicast region of a shared global address space and is mapped to a plurality of participating endpoints. The network device is configured to perform an in-network computation based on information received from the participating endpoints before forwarding a response to the collective communication primitive back to one or more of the participating endpoints. The endpoints can inject pull requests (e.g., load commands) and/or push requests (e.g., store commands) into the network. A multicast capability enables tasks, such as a reduction operation, to be offloaded to hardware in the network device.
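The reduction offload can be sketched as a switch-side accumulator: the network device collects one push from each participating endpoint mapped to the multicast region, computes the reduction once, and multicasts the single result back. The class name, the use of `sum` as the reduction, and the endpoint names are illustrative assumptions.

```python
class InNetworkReducer:
    """Switch-side state for one collective: gathers a value from every
    participating endpoint, then reduces and multicasts the result."""

    def __init__(self, participants):
        self.expected = set(participants)
        self.values = {}

    def push(self, endpoint, value):
        """An endpoint injects its partial value (a push request).
        Returns the multicast reply once all contributions arrive,
        otherwise None."""
        self.values[endpoint] = value
        if set(self.values) == self.expected:
            # Compute once, in the network, instead of at every endpoint.
            result = sum(self.values.values())
            return {ep: result for ep in self.expected}
        return None


switch = InNetworkReducer(participants=["gpu0", "gpu1", "gpu2"])
assert switch.push("gpu0", 1) is None
assert switch.push("gpu1", 2) is None
reply = switch.push("gpu2", 4)
print(reply)  # -> {'gpu0': 7, 'gpu1': 7, 'gpu2': 7}
```

Offloading the sum to the switch turns an all-to-all exchange into one message per endpoint in each direction, which is what makes the scheme scale.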
Method, device, and system for transmitting multicast packet
A method, a device, and a system for transmitting a multicast packet are provided. The method includes: when a device that is in a source subnet and that is connected to a core network receives a multicast packet of a target multicast group, determining addresses of devices that are in a plurality of destination subnets corresponding to the target multicast group and that are connected to the core network; replicating the multicast packet to obtain a plurality of multicast packets whose quantity is equal to the quantity of destination subnets; separately adding outer encapsulation to each multicast packet; and forwarding each multicast packet to which the outer encapsulation is added, where the destination address in the outer encapsulation is the address of the device that is in the corresponding destination subnet and that is connected to the core network.
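The replicate-and-encapsulate step at the source-subnet edge device can be sketched as follows. The group-to-edge-device mapping, the dictionary packet layout, and the example addresses are illustrative assumptions.

```python
# Assumed control-plane state: for each target multicast group, the
# core-facing addresses of the edge devices of its destination subnets.
GROUP_EDGES = {
    "239.0.0.1": ["192.0.2.10", "192.0.2.20", "192.0.2.30"],
}

def replicate_and_encapsulate(packet, group):
    """Make one copy of the multicast packet per destination subnet and
    add an outer encapsulation whose destination address is that
    subnet's core-connected edge device."""
    edges = GROUP_EDGES[group]
    return [{"outer_dst": edge, "inner": packet} for edge in edges]


inner = {"group": "239.0.0.1", "data": "x"}
copies = replicate_and_encapsulate(inner, "239.0.0.1")
print(len(copies))             # -> 3, one per destination subnet
print(copies[0]["outer_dst"])  # -> 192.0.2.10
```

Each edge device then strips the outer encapsulation and delivers the original multicast packet within its own subnet.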