H04L12/913

Network resource isolation method for container network and system thereof

A network resource isolation method for container networks and a system thereof, including a computation system for network resource isolation, a system using network resource isolation, and a network resource isolation system for container networks, together with methods of implementing them. The system provides container overlay networks with a resource isolation scheme that lowers the barrier to isolating network resources and improves network resource utilization.

IMPLICIT DISCOVERY CONTROLLER REGISTRATION OF NON-VOLATILE MEMORY EXPRESS (NVME) ELEMENTS IN AN NVME-OVER-FABRICS (NVME-OF) SYSTEM

Presented herein are embodiments for implicitly or indirectly registering elements of a non-volatile memory express (NVMe) entity in an NVMe-over-Fabrics (NVMe-oF) environment. In one or more embodiments, one or more interactions between an NVMe entity and a centralized storage fabric service component, such as part of the Link Layer Discovery Protocol (LLDP) process or the Multicast Domain Name System (mDNS) process, may be used by the centralized storage fabric service to extract information about the NVMe entity and automatically register it with a centralized registration datastore. In one or more embodiments, the centralized registration datastore may be used to facilitate services in the NVMe-oF system, such as discovery of NVMe entities, provisioning, and access control. In one or more embodiments, an implicitly registered NVMe entity may also subsequently explicitly register, which may include supplying additional information about the NVMe entity.
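The implicit-then-explicit registration flow described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the datastore class, the field names (`nqn`, `ip`, `port`), and the announcement format are all assumptions standing in for whatever the fabric service would extract from an LLDP or mDNS exchange.

```python
# Hypothetical sketch: a centralized storage fabric service that implicitly
# registers NVMe entities from discovery-protocol announcements, and later
# upgrades an entry when the entity explicitly registers.

class RegistrationDatastore:
    """Centralized registry of NVMe entities, keyed by NQN (assumed key)."""
    def __init__(self):
        self.entries = {}

    def implicit_register(self, announcement):
        # Extract what the announcement (e.g. an mDNS record) reveals.
        entry = {
            "nqn": announcement["nqn"],
            "address": (announcement["ip"], announcement["port"]),
            "explicit": False,          # only implicitly registered so far
        }
        self.entries[announcement["nqn"]] = entry
        return entry

    def explicit_register(self, nqn, extra):
        # A later explicit registration may supply additional details.
        entry = self.entries.setdefault(nqn, {"nqn": nqn, "address": None})
        entry.update(extra)
        entry["explicit"] = True
        return entry

ds = RegistrationDatastore()
ds.implicit_register({"nqn": "nqn.2014-08.org.example:sub1",
                      "ip": "10.0.0.5", "port": 4420})
ds.explicit_register("nqn.2014-08.org.example:sub1",
                     {"model": "example-array"})
```

The same datastore can then back discovery, provisioning, and access-control lookups, since both registration paths converge on one record per NQN.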

TUNNEL PROVISIONING WITH LINK AGGREGATION
20210126859 · 2021-04-29

A method for processing data packets in a communication network includes establishing a path for a flow of the data packets through the communication network. At a node along the path having a plurality of aggregated ports, a port is selected from among the plurality to serve as part of the path. A label is chosen responsively to the selected port. The label is attached to the data packets in the flow at a point on the path upstream from the node. Upon receiving the data packets at the node, the data packets are switched through the selected port responsively to the label.
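The mechanism above can be sketched as a label-to-port table at the aggregation node: each member port gets its own label, so the label pushed upstream pins the flow to one chosen port rather than leaving the choice to a per-packet hash. This is an illustrative model only; the port-selection rule and the label values are assumptions.

```python
# Illustrative sketch (not any vendor's API): at a node with aggregated
# ports, each port is assigned a distinct label, so the upstream label
# choice deterministically selects the member port.

class LagNode:
    def __init__(self, ports):
        # One label per aggregated port; the label numbering is arbitrary.
        self.label_to_port = {100 + i: p for i, p in enumerate(ports)}

    def select_port_and_label(self, flow_hash):
        """Pick a member port for a new path and return it with its label."""
        labels = sorted(self.label_to_port)
        label = labels[flow_hash % len(labels)]
        return self.label_to_port[label], label

    def switch(self, packet):
        """Forward a labeled packet through the port its label names."""
        return self.label_to_port[packet["label"]]

node = LagNode(ports=["eth0", "eth1", "eth2"])
port, label = node.select_port_and_label(flow_hash=7)
packet = {"label": label, "payload": b"data"}   # label attached upstream
assert node.switch(packet) == port
```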

Signaling a planned off-lining, such as for maintenance, of a node or node component supporting a label switched path (LSP), and using such signaling
11032192 · 2021-06-08

A node of an LSP may inform the ingress node of the LSP, for example via RSVP signaling, about its temporary unavailability for a certain time. In response, the ingress node can stop using any affected LSP(s) and divert the traffic to other LSPs. This provides a faster mechanism to signal a traffic shift than a traditional IGP overload, which causes considerable churn in the network because all the nodes need to recompute the SPF. It is sufficient for the ingress node to be aware of this node maintenance; it can use this information to divert the traffic to other equal-cost multipath (ECMP) LSP(s) or other available LSP(s). If no alternative LSP path exists when the ingress node receives such a message, a new LSP can be built during this time and traffic diverted smoothly (e.g., in a make-before-break manner) before the node goes offline for maintenance. Since only the ingress node is responsible for pushing the traffic onto the LSP, there is no need to tear down the LSP for such node maintenance (especially when it is of short duration). This can be used with a controller responsible for the LSP as well.
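The ingress-side behavior can be sketched as follows. The class, method names, and the callback for building a replacement LSP are invented for illustration; the real signaling would arrive via RSVP, not a Python call.

```python
# Minimal sketch of the ingress reaction to a node-maintenance message:
# divert traffic off every affected LSP, and if no alternative exists,
# build a new LSP first (make-before-break) before shifting traffic.

class Ingress:
    def __init__(self, lsps):
        self.lsps = dict(lsps)          # lsp name -> list of transit nodes
        self.active = set(lsps)

    def on_maintenance(self, node, build_new=None):
        """Handle a maintenance notice for `node` from somewhere on a path."""
        affected = {n for n, hops in self.lsps.items() if node in hops}
        alternatives = self.active - affected
        if not alternatives and build_new:
            # Make-before-break: set up a new LSP before shifting traffic.
            name, hops = build_new()
            self.lsps[name] = hops
            self.active.add(name)
            alternatives = {name}
        self.active -= affected          # stop using affected LSPs
        return affected, alternatives

ing = Ingress({"lsp-a": ["r1", "r2"], "lsp-b": ["r1", "r3"]})
affected, alt = ing.on_maintenance("r2")   # lsp-a drained onto lsp-b
```

Note that the affected LSPs are only marked inactive, not torn down, matching the abstract's point that short maintenance windows need no teardown.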

MULTI-LEVEL RESOURCE RESERVATION
20210136001 · 2021-05-06

The present disclosure is directed to a multi-level resource reservation system that obviates one or more of the problems due to limitations and disadvantages of the related art. The multi-level resource reservation system creates, or modifies existing, peer-to-peer protocol(s) to complete a continuous chain of configured ports that supports QoS feature(s), e.g., bounded latency and guaranteed jitter, for a data flow that traverses an arbitrary sequence of bridges, routers, and virtual links.
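The "continuous chain of configured ports" idea can be illustrated as an all-or-nothing reservation along a path. This is a hedged sketch: the admission rule (a simple per-port bandwidth budget) and the rollback behavior are assumptions, since the abstract does not specify them.

```python
# Sketch: reserve capacity on every port along an arbitrary sequence of
# bridges, routers, and virtual links; roll back on failure so the chain
# is either complete or not reserved at all.

def reserve_chain(ports, bandwidth):
    """Try to reserve `bandwidth` on each port along the path (all-or-nothing)."""
    granted = []
    for port in ports:
        if port["free"] >= bandwidth:
            port["free"] -= bandwidth
            granted.append(port)
        else:
            for p in granted:            # roll back partial reservations
                p["free"] += bandwidth
            return False
    return True

path = [{"name": "bridge1", "free": 100},
        {"name": "router1", "free": 40},
        {"name": "vlink1",  "free": 100}]
ok = reserve_chain(path, 30)             # succeeds: every hop can admit 30
fail = reserve_chain(path, 30)           # fails: router1 has only 10 left
```

Only when every hop in the chain admits the flow can QoS features such as bounded latency be honored end to end, which is why a partial reservation is rolled back rather than kept.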

RESOURCE RESERVATION METHOD AND APPARATUS
20210099397 · 2021-04-01

This application provides a resource reservation method and an apparatus. The method includes: receiving, by a node, a first message, where the first message carries a bandwidth value r1 of a resource requested to be reserved and a quantity Q1 of cycles occupied by the resource, the node maintains resource reservation states in K cycles, K is a positive integer, and Q1 is less than or equal to K; and updating, by the node, resource reservation states in Q1 of the K cycles based on the bandwidth value r1 and the quantity Q1. The node maintains the resource reservation states at a granularity of cycles. Therefore, the resource reservation states that the node needs to maintain do not depend on the quantity of data flows processed by the node, and the performance requirements that resource reservation places on the node can be lowered.
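The per-cycle bookkeeping described above can be sketched directly: the node keeps K bandwidth counters, one per cycle, and a reservation of bandwidth r1 over Q1 cycles decrements Q1 of them. The choice of *which* Q1 cycles to use (first fit) is an assumption for illustration; the key property from the abstract is that the state is O(K), independent of the number of flows.

```python
# Sketch of cycle-granularity reservation state: K counters, not per-flow state.

class CycleReservations:
    def __init__(self, k, capacity):
        self.free = [capacity] * k      # remaining bandwidth per cycle

    def reserve(self, r1, q1):
        """Reserve bandwidth r1 in q1 of the K cycles (first-fit choice)."""
        chosen = [i for i, f in enumerate(self.free) if f >= r1][:q1]
        if len(chosen) < q1:
            return None                  # not enough cycles can admit r1
        for i in chosen:
            self.free[i] -= r1
        return chosen

node = CycleReservations(k=4, capacity=10)
cycles = node.reserve(r1=6, q1=2)        # first message: r1=6, Q1=2
```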

Method and apparatus for processing low-latency service flow
10972398 · 2021-04-06

A method and an apparatus for processing a low-latency service flow. In the method, a first forwarding device obtains a low-latency identifier corresponding to a first service flow and, after determining that a received first data packet belongs to the first service flow, obtains a second data packet based on the first data packet and the low-latency identifier, where the second data packet includes the first data packet and the low-latency identifier. The low-latency identifier instructs a forwarding device that receives the first service flow to forward it in a low-latency forwarding mode, i.e., a mode in which fast forwarding of the first service flow is implemented under dynamic control. The first forwarding device then sends the second data packet to a second forwarding device in the low-latency forwarding mode.
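The tag-and-forward scheme can be sketched as two small functions: the first device classifies and wraps the packet, and every downstream device keys its forwarding mode off the carried identifier. All names here are hypothetical; the real identifier would live in a header field, not a dict key.

```python
# Illustrative sketch: the first forwarding device tags packets of a
# recognized flow with a low-latency identifier; downstream devices
# select their forwarding mode from the tag alone.

LOW_LATENCY = 0x1

def classify_and_tag(packet, low_latency_flows):
    """First device: wrap the packet with the identifier if its flow
    is configured for low-latency handling."""
    if packet["flow"] in low_latency_flows:
        return {"ll_id": LOW_LATENCY, "inner": packet}
    return {"ll_id": 0, "inner": packet}

def forward(tagged):
    """Any device on the path: pick the forwarding mode from the tag."""
    return "fast-path" if tagged["ll_id"] == LOW_LATENCY else "normal"

pkt = {"flow": "voip-1", "payload": b"..."}
tagged = classify_and_tag(pkt, low_latency_flows={"voip-1"})
```

Because the decision rides in the packet itself, downstream devices need no per-flow configuration of their own, matching the abstract's point that the identifier instructs subsequent devices.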

METHODS AND APPARATUS TO PROVIDE A CUSTOM INSTALLABLE OPEN VIRTUALIZATION APPLICATION FILE FOR ON-PREMISE INSTALLATION VIA THE CLOUD

Methods, apparatus, systems and articles of manufacture to provide a custom installable open virtualization application file for on-premise installation via the cloud are disclosed. An example apparatus includes a resource processor to determine a resource capacity for an agent in a private cloud network; a file manipulator to modify an open virtualization appliance (OVA) file by modifying a descriptor file of the OVA file to configure the resource capacity for the agent in the private cloud network, the OVA file being deployed in a public cloud network; and a first interface to transmit an indication of a location of the modified OVA file to a user device, the location of the modified OVA file being the same location as the original OVA file.
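The descriptor edit at the heart of this can be sketched as an XML rewrite: an OVA bundles an OVF descriptor, and changing a resource item's capacity there changes what gets deployed. The toy descriptor below omits the real OVF namespaces and element names for brevity, so treat it as an illustration of the idea rather than a valid OVF document.

```python
# Hedged sketch: rewrite the memory capacity in a simplified,
# OVF-like descriptor (real OVF uses namespaced elements).

import xml.etree.ElementTree as ET

def set_memory_capacity(descriptor_xml, megabytes):
    """Rewrite the memory Item's Capacity in a simplified descriptor."""
    root = ET.fromstring(descriptor_xml)
    for item in root.iter("Item"):
        if item.findtext("ResourceType") == "Memory":
            item.find("Capacity").text = str(megabytes)
    return ET.tostring(root, encoding="unicode")

descriptor = """<Envelope>
  <Item><ResourceType>Memory</ResourceType><Capacity>2048</Capacity></Item>
</Envelope>"""
modified = set_memory_capacity(descriptor, 8192)
```

In the described apparatus, the rewritten descriptor would be repacked into the OVA at its original location before the user device is pointed at it.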

Multi-layer LSP control method and apparatus
10992573 · 2021-04-27

Disclosed are a multi-layer LSP control method and apparatus. The method comprises: acquiring a label switched path (LSP) addition request, wherein the LSP addition request carries identifier information for identifying a layer associated group that the LSP is to be added to; and in response to the LSP addition request, adding the LSP to the layer associated group, wherein the layer associated group comprises: an upper-layer LSP and several lower-layer LSPs.
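The grouping can be sketched as a small data model: each layer-associated group holds one upper-layer LSP and any number of lower-layer LSPs, and an addition request names the group by its identifier. The dict layout and function name are assumptions; only the group structure comes from the abstract.

```python
# Minimal sketch of a layer-associated group: one upper-layer LSP plus
# its lower-layer LSPs, keyed by the identifier carried in the request.

groups = {}

def handle_add_request(group_id, lsp, layer):
    """Add an LSP to the layer-associated group named in the request."""
    group = groups.setdefault(group_id, {"upper": None, "lower": []})
    if layer == "upper":
        group["upper"] = lsp
    else:
        group["lower"].append(lsp)
    return group

handle_add_request("g1", "lsp-upper-1", "upper")
handle_add_request("g1", "lsp-lower-1", "lower")
handle_add_request("g1", "lsp-lower-2", "lower")
```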

Multi-level resource reservation
10917356 · 2021-02-09

The present disclosure is directed to a multi-level resource reservation system that obviates one or more of the problems due to limitations and disadvantages of the related art. The multi-level resource reservation system creates, or modifies existing, peer-to-peer protocol(s) to complete a continuous chain of configured ports that supports QoS feature(s), e.g., bounded latency and guaranteed jitter, for a data flow that traverses an arbitrary sequence of bridges, routers, and virtual links.