H04L12/54

Congestion notification system
09832125 · 2017-11-28

A congestion notification system includes a networking device coupling a sender device to a receiver device. The networking device is configured to detect a congestion situation. In response to detecting the congestion situation, the networking device provides a first congestion notification in a first packet received from the sender device, retrieves sender device information from the first packet, and stores that sender device information in a database. Following the sending of the first packet to the receiver device, the networking device receives a second packet that was sent from the receiver device before the receiver device received the first packet. In response to determining that the second packet includes the sender device information stored in the database, the networking device provides a second congestion notification in the second packet and then sends the second packet to the sender device.
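The mechanism above can be sketched in a few lines: mark the forward packet on congestion, remember its sender, and also mark any reverse packet addressed to a remembered sender so the sender learns of congestion one round-trip sooner. This is an illustrative sketch; the `Packet` and `NetworkingDevice` shapes are assumptions, not the patent's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    src: str                                # sender device address
    dst: str                                # destination address
    congestion_notification: bool = False

@dataclass
class NetworkingDevice:
    congested: bool = False
    sender_db: set = field(default_factory=set)

    def forward_to_receiver(self, pkt: Packet) -> Packet:
        # On congestion, mark the first packet and record its sender.
        if self.congested:
            pkt.congestion_notification = True
            self.sender_db.add(pkt.src)
        return pkt

    def forward_to_sender(self, pkt: Packet) -> Packet:
        # A reverse packet destined to a recorded sender is also marked,
        # even though it predates the marked forward packet.
        if pkt.dst in self.sender_db:
            pkt.congestion_notification = True
        return pkt
```

A forward packet from "A" and a later reverse packet toward "A" both end up carrying a congestion notification once the device is congested.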

Method for Obtaining Port Path and Apparatus
20170338976 · 2017-11-23

A method for obtaining a port path, and an apparatus, to improve network capacity. The method includes: receiving, by a controller, a request message from a first server, where the request message requests port path information, and the port path information includes a port that a logical link from the first server to a second server passes through; obtaining, by the controller, a first absolute port path (APP) and a second APP according to network topology information, where the first APP includes a port that a logical link from a root node to the first server passes through, and the second APP includes a port that a logical link from the root node to the second server passes through; obtaining, by the controller, the port path information according to the first APP and the second APP; and sending the port path information to the first server.
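One natural way to derive a server-to-server port path from two root-anchored absolute port paths is to strip their longest common prefix, walk back up from the first server to the branch point, and then walk down to the second server. The abstract does not spell out the derivation, so the following is a hypothetical sketch of that idea:

```python
def port_path(app_a, app_b):
    """Derive the port path between two servers from their absolute
    port paths (ordered port lists from the root node to each server).
    Hypothetical sketch; the patented derivation may differ."""
    # Find the longest common prefix: ports shared below the root.
    i = 0
    while i < min(len(app_a), len(app_b)) and app_a[i] == app_b[i]:
        i += 1
    # Up from server A to the branch point, then down to server B.
    return list(reversed(app_a[i:])) + list(app_b[i:])
```

For example, with APPs `["p1", "p2", "p3"]` and `["p1", "p4"]`, the logical link between the two servers traverses `["p3", "p2", "p4"]`.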

Message Attack Defense Method and Apparatus
20170338998 · 2017-11-23

The present disclosure discloses a message attack defense method and apparatus. The method includes: receiving, by a controller, a report message sent by at least one switch; respectively storing, by the controller in a switch queue corresponding to each switch, the received report message that is sent by each switch; and performing, by the controller, round-robin scheduling on the switch queue corresponding to each switch.
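The defense described above, per-switch queues drained round-robin, keeps one flooding switch from starving report messages sent by the others. A minimal sketch, with class and method names chosen for illustration:

```python
from collections import defaultdict, deque

class Controller:
    """Sketch: store each switch's report messages in its own queue,
    then drain the queues round-robin (one message per switch per round)."""
    def __init__(self):
        self.queues = defaultdict(deque)

    def receive(self, switch_id, message):
        # Respectively store the message in the queue for its switch.
        self.queues[switch_id].append(message)

    def schedule(self):
        # One round of round-robin: at most one message per switch.
        batch = []
        for switch_id, q in self.queues.items():
            if q:
                batch.append((switch_id, q.popleft()))
        return batch
```

Even if one switch floods the controller, each scheduling round still services every other switch's queue once.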

Buffer control for multi-transport architectures

A system and method for automating connection management in a manner that may be transparent to any actively communicating applications operating in a Network on Terminal Architecture (NoTA). An application level entity may access another node by making a request to a high level communication structure via an interface. The high level structure may interact with a lower level structure configured to manage communication by establishing communication with another device via one or more transports. In at least one embodiment, provisions may be made to guard against data being lost when a transport fails, including storing data that is passed from a transport-independent buffer to a transport-specific buffer in case the transport fails. When a failure occurs, the stored data may readily be forwarded for sending using another transport.
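The loss-guard provision can be sketched as a sender that retains data handed to a transport-specific buffer until some transport confirms delivery, replaying it over the next transport on failure. This is an assumption-laden sketch (transports modeled as callables returning a success flag), not the NoTA implementation:

```python
class FailoverSender:
    """Sketch: retain data passed to a transport until delivery is
    confirmed, so it can be forwarded over another transport on failure."""
    def __init__(self, transports):
        self.transports = list(transports)  # callables: data -> bool
        self.in_flight = []                 # retained, unconfirmed data

    def send(self, data):
        self.in_flight.append(data)
        for transport in self.transports:
            if transport(data):             # delivery confirmed
                self.in_flight.remove(data)
                return True
        return False                        # all failed; data is retained
```

When the first transport fails, the retained copy is readily forwarded over the second; if every transport fails, the data stays buffered rather than being lost.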

Load balancer bypass

Redirecting message flows to bypass load balancers. A destination intermediary receives a source-side message that includes a virtual address of a load balancer as a destination, and that is augmented to include a network address of a destination machine as a destination. The destination intermediary determines that a source intermediary should address subsequent network messages that originate from a source machine and that are associated with the same multi-message flow to the destination machine while bypassing the load balancer. The destination intermediary modifies the source-side message so the destination for the source-side message addresses the destination machine, and passes the modified source-side message to the destination machine. The destination intermediary receives a response from the destination machine identifying the source machine as its destination, and modifies the response so a source address identifies the virtual address of the load balancer, and dispatches the modified response to the source machine.
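The destination intermediary's two rewrites, virtual address to machine address inbound, machine address back to virtual address outbound, can be sketched as a small address-translation shim. Field and class names here are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Message:
    src: str
    dst: str
    payload: str

class DestinationIntermediary:
    """Sketch: translate between the load balancer's virtual address
    and the destination machine's network address."""
    def __init__(self, lb_vip, machine_addr):
        self.lb_vip = lb_vip
        self.machine = machine_addr

    def inbound(self, msg):
        # Rewrite the LB virtual address to the real machine address.
        if msg.dst == self.lb_vip:
            msg.dst = self.machine
        return msg

    def outbound(self, msg):
        # Responses appear to originate from the LB's virtual address,
        # so the source machine never notices the bypass.
        if msg.src == self.machine:
            msg.src = self.lb_vip
        return msg
```

After the rewrite, subsequent messages in the flow can be addressed to the machine directly, bypassing the load balancer entirely.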

METHOD, APPARATUS, SYSTEM AND MEDIA FOR TRANSMITTING MESSAGES BETWEEN NETWORKED DEVICES IN DATA COMMUNICATION WITH A LOCAL NETWORK ACCESS POINT

A method for transmitting messages between a first networked device and a second networked device via a local network provided by a local network access point is disclosed. The method involves on the first networked device, determining whether the second networked device meets local communications criteria by determining at least one of whether the second networked device is accessible via the local network at a local network address, and whether the second networked device has registered for communications via local networks. The method also involves, in response to a determination that the second networked device meets the local communications criteria, transmitting the message via the local network access point to the local network address associated with the second networked device.
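The routing decision reduces to a two-part predicate: send via the local access point when the device is reachable at a local network address or has registered for local-network communication, and fall back to a wide-area path otherwise. A minimal sketch, with data shapes assumed for illustration:

```python
def route_message(device, local_hosts, local_registry):
    """Sketch of the criteria check: local_hosts maps device IDs to
    local network addresses; local_registry holds devices registered
    for communications via local networks."""
    if device in local_hosts or device in local_registry:
        # Criterion met: transmit via the local access point.
        return ("local_ap", local_hosts.get(device, device))
    return ("wide_area", device)
```

A device present in `local_hosts` is reached at its local address; one that is merely registered still takes the local path.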

Transmission apparatus, transmission method, reception apparatus, and reception method

A transmission apparatus that includes circuitry configured to generate transport protocol selection information used for selecting a transport protocol to be used in a specific service from a plurality of transport protocols conforming to a predetermined standard, and to transmit, together with the transport protocol selection information, content provided by the specific service according to the transport protocol set in the transport protocol selection information. The plurality of transport protocols includes at least ROUTE (Real-Time Object Delivery over Unidirectional Transport) and MMT (MPEG Media Transport).
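In essence, the apparatus signals a per-service transport choice and sends the content alongside that signaling. ROUTE and MMT are the transports named in the abstract; the shape of the selection information below is an assumption for illustration:

```python
SUPPORTED = {"ROUTE", "MMT"}  # transports named in the abstract

def transmit(service_id, content, selection_info):
    """Sketch: look up the transport set for the service in the
    selection information and emit content together with it."""
    proto = selection_info[service_id]
    if proto not in SUPPORTED:
        raise ValueError(f"unsupported transport: {proto}")
    return {"service": service_id, "transport": proto,
            "selection_info": selection_info, "content": content}
```

A receiver can then read the co-transmitted selection information to know which transport the service's content uses.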

On-demand access to compute resources
11496415 · 2022-11-08

Disclosed are systems, methods, and computer-readable media for controlling and managing the identification and provisioning of resources within an on-demand center, as well as the transfer of workload to the provisioned resources. One aspect involves creating a virtual private cluster within the on-demand center for a particular workload from a local environment. A method of managing resources between a local compute environment and an on-demand environment includes detecting an event associated with the local compute environment and, based on the detected event, identifying information about the local environment, establishing communication with an on-demand compute environment, transmitting the information about the local environment to the on-demand compute environment, provisioning resources within the on-demand compute environment to substantially duplicate the local environment, and transferring workload from the local environment to the on-demand compute environment. The event can be a threshold or a triggering event within or outside of the local environment.
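The sequence of steps, detect a triggering event, ship the local environment's profile, provision a matching cluster, transfer the workload, can be sketched as follows. All field and method names are illustrative assumptions:

```python
class OnDemandCenter:
    """Sketch of an on-demand center that provisions clusters on request."""
    def __init__(self):
        self.clusters = []

    def provision(self, profile):
        # Create a virtual private cluster matching the local profile.
        cluster = {"profile": profile, "workload": []}
        self.clusters.append(cluster)
        return cluster

def burst(local_env, center, threshold):
    """Sketch of the managed flow: a load threshold is the triggering
    event; on trigger, duplicate the environment and move the workload."""
    if local_env["load"] < threshold:
        return None                          # no triggering event
    profile = local_env["profile"]           # information about the local env
    cluster = center.provision(profile)      # substantially duplicate it
    cluster["workload"], local_env["queue"] = local_env["queue"], []
    return cluster
```

Once the threshold is crossed, the local queue drains into the freshly provisioned cluster; below the threshold nothing happens.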

Sticky service sessions in a datacenter

Some embodiments provide novel inline switches that distribute data messages from source compute nodes (SCNs) to different groups of destination service compute nodes (DSCNs). In some embodiments, the inline switches are deployed in the source compute nodes' datapaths (e.g., egress datapath). The inline switches in some embodiments are service switches that (1) receive data messages from the SCNs, (2) identify service nodes in a service-node cluster for processing the data messages based on service policies that the switches implement, and (3) use tunnels to send the received data messages to their identified service nodes. Alternatively, or conjunctively, the inline service switches of some embodiments (1) identify service-node clusters for processing the data messages based on service policies that the switches implement, and (2) use tunnels to send the received data messages to the identified service-node clusters. The service-node clusters can perform the same service or can perform different services in some embodiments. This tunnel-based approach for distributing data messages to service nodes/clusters is advantageous for seamlessly implementing in a datacenter a cloud-based XaaS model (where XaaS stands for X as a service, and X stands for anything), in which any number of services are provided by service providers in the cloud.
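Session stickiness requires that every message of a flow reach the same service node. One common way to get that property, used here purely as an illustrative stand-in for the abstract's "service policies", is to hash the flow tuple and tunnel to the selected node:

```python
import hashlib

class InlineServiceSwitch:
    """Sketch: deterministically map a flow to one service node (here by
    hashing the flow tuple; the patented policy logic may differ) and
    'tunnel' to it, modeled as a tagged message."""
    def __init__(self, service_nodes):
        self.nodes = service_nodes

    def dispatch(self, flow_tuple, payload):
        digest = hashlib.sha256(repr(flow_tuple).encode()).digest()
        node = self.nodes[int.from_bytes(digest[:4], "big") % len(self.nodes)]
        return {"tunnel_to": node, "payload": payload}
```

Because the selection depends only on the flow tuple, every data message of the same flow is tunneled to the same service node, which is exactly the sticky-session property.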

Networking system having multiple components with multiple loci of control

Each switch unit in a networking system shares its local state information among the other switch units in the networking system; collectively this is referred to as the shared forwarding state. Each switch unit creates a respective set of output queues that correspond to ports on other switch units based on the shared forwarding state. A packet received on an ingress switch unit operating in accordance with a first routing protocol instance can be enqueued on an output queue in the ingress switch; the packet is subsequently processed by the egress switch unit, operating in accordance with a second routing protocol instance that corresponds to the output queue.
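The queue-building step can be sketched as each unit consuming the shared forwarding state and creating one output queue per port on every *other* unit. Data shapes are assumptions for illustration:

```python
class SwitchUnit:
    """Sketch: build output queues for remote units' ports from the
    shared forwarding state, then enqueue packets toward them."""
    def __init__(self, unit_id, local_ports):
        self.unit_id = unit_id
        self.local_ports = local_ports
        self.output_queues = {}

    def sync(self, shared_state):
        # shared_state: {unit_id: [port, ...]} gathered from all units.
        for unit, ports in shared_state.items():
            if unit == self.unit_id:
                continue                     # only remote units' ports
            for port in ports:
                self.output_queues.setdefault((unit, port), [])

    def enqueue(self, unit, port, packet):
        # Ingress-side enqueue; the egress unit drains this queue under
        # its own routing protocol instance.
        self.output_queues[(unit, port)].append(packet)
```

After `sync`, the ingress unit holds a queue per remote port, so a received packet can be staged locally and later handled by the egress unit that owns the port.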