H04L47/829

GEOGRAPHIC SERVICE CLASSIFICATION AND ROUTING
20220131749 · 2022-04-28

Methods, systems, and computer programs are presented for managing resources to deliver a network service in a distributed configuration. A method includes an operation for identifying resources for delivering a network service, the resources being classified by geographic area. Further, the method includes operations for selecting service agents to configure the identified resources, each service agent managing service pools for delivering the network service across at least one geographic area, the service agents being selected to provide configurability for the service pools. The method further includes operations for sending, to the service agents, configuration rules that establish service pools for delivering the network service across the geographic areas. Service traffic information is collected from the service agents, and the resources are adjusted based on the collected service traffic information. Updated configuration rules are then sent to each affected service agent based on the adjustment.
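The flow in the abstract (classify by area, configure one agent per area, re-derive rules from collected traffic) can be sketched roughly as below. All names (`classify_by_area`, `make_rules`, the traffic-to-pool scaling) are illustrative assumptions, not taken from the patent.

```python
from collections import defaultdict

def classify_by_area(resources):
    """Group resource ids by their geographic area."""
    areas = defaultdict(list)
    for rid, area in resources:
        areas[area].append(rid)
    return dict(areas)

def make_rules(area, resource_ids, traffic=0):
    """Build configuration rules establishing a service pool for an area."""
    # Assumed scaling rule: one extra pool slot per 100 units of traffic.
    pool_size = len(resource_ids) + traffic // 100
    return {"area": area, "pool": resource_ids, "pool_size": pool_size}

def adjust(areas, traffic_by_area):
    """Re-derive per-agent rules from collected service traffic."""
    return {a: make_rules(a, ids, traffic_by_area.get(a, 0))
            for a, ids in areas.items()}
```

For example, three resources in two areas yield two service pools, and observed traffic in one area grows that area's pool on the next rule push.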

Execution of a topology

A method of executing a topology includes deriving executable logic from the topology. The method further includes executing the topology, with a lifecycle management (LCM) engine, based on the executable logic.
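A minimal way to read this claim: if the topology is a dependency graph, the "executable logic" can be a topological order, and a stand-in engine runs each node in that order. This is an illustrative interpretation, not the patent's implementation.

```python
from graphlib import TopologicalSorter

def derive_executable_logic(topology):
    """topology: {node: [dependencies]} -> ordered list of nodes to run."""
    return list(TopologicalSorter(topology).static_order())

def execute_topology(topology, run):
    """Execute the topology by invoking `run` on each node, dependencies first."""
    return [run(node) for node in derive_executable_logic(topology)]
```

Here a node's dependencies always execute before the node itself, which is the property an LCM-style engine needs when standing up layered infrastructure.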

Method for processing low-rate service data in optical transport network, apparatus, and system
11764874 · 2023-09-19

A method for processing low-rate service data, an apparatus, and a system, where the method includes: mapping low-rate service data into a newly defined low-rate data frame, where a rate of the low-rate data frame matches a rate of the low-rate service data, the data frame includes an overhead area and a payload area, the payload area is used to carry the low-rate service data, a rate of the payload area in the low-rate data frame is not less than the rate of the low-rate service data, and the rate of the low-rate service data is less than 1 Gbps; mapping the low-rate data frame into one or more slots in another data frame, where a rate of the slot is not greater than 100 Mbps; mapping the other data frame into an optical transport unit (OTU) frame; and sending the OTU frame.
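The slot arithmetic implied by the abstract's rate constraints (service rate below 1 Gbps, slot rate at most 100 Mbps) reduces to a ceiling division. The function below is a back-of-the-envelope sketch; the actual frame formats and rates are defined by the patent, not here.

```python
import math

GBPS_IN_MBPS = 1000  # work in Mbps throughout

def slots_needed(service_rate_mbps, slot_rate_mbps=100):
    """Number of slots (each <= 100 Mbps) needed to carry the low-rate frame."""
    if not 0 < service_rate_mbps < GBPS_IN_MBPS:
        raise ValueError("low-rate service data must be below 1 Gbps")
    if slot_rate_mbps > 100:
        raise ValueError("slot rate must not exceed 100 Mbps")
    # The payload-area rate must be at least the service rate, so round up.
    return math.ceil(service_rate_mbps / slot_rate_mbps)
```

A 250 Mbps service therefore occupies three 100 Mbps slots, matching the abstract's requirement that the payload-area rate be not less than the service rate.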

Building a highly-resilient system with failure independence in a disaggregated compute environment

A new approach to resiliency management is provided in a data center in which servers are constructed dynamically, on demand, based on workload requirements and a tenant's resiliency requirements, by allocating resources from resource pools. In this approach, a set of functionally-equivalent "interchangeable compute units" (ICUs) is composed of resources from pools that have been extended to include not only different resource types (CPU, memory, accelerators) but also resources of different specifications (specs) and flavors. As a workload is being processed, the health or status of the resources is monitored. Upon a performance issue or failure event, a resiliency manager can swap out a current ICU and replace it with a functionally-equivalent ICU. Preferably, individual ICUs are hosted on one of: resources of a same type, each with different specifications; or resources of a same type and specification but different flavors. The approach enables failure independence in a disaggregated environment.
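The swap logic can be sketched as follows. The `ICU` fields, the equivalence rule (same resource type, spec and flavor free to differ), and the health model are simplifying assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ICU:
    name: str
    resource_type: str   # e.g. "cpu"
    spec: str            # e.g. "8-core"
    flavor: str          # e.g. "vendor-a"
    healthy: bool = True

class ResiliencyManager:
    def __init__(self, pool):
        self.pool = list(pool)

    def equivalents(self, icu):
        """Functionally-equivalent, healthy ICUs (same type; spec/flavor may differ)."""
        return [c for c in self.pool
                if c is not icu and c.resource_type == icu.resource_type
                and c.healthy]

    def swap_if_failed(self, current):
        """On failure, replace the current ICU with an equivalent one, if any."""
        if current.healthy:
            return current
        candidates = self.equivalents(current)
        return candidates[0] if candidates else None
```

Because the replacement draws from a different spec or flavor, a fault tied to one hardware variant does not take out the replacement, which is the failure-independence property the abstract targets.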

Method for processing low-rate service data in optical transport network, apparatus, and system
11233571 · 2022-01-25

A method for processing low-rate service data, an apparatus, and a system, where the method includes: mapping low-rate service data into a newly defined low-rate data frame, where a rate of the low-rate data frame matches a rate of the low-rate service data, the data frame includes an overhead area and a payload area, the payload area is used to carry the low-rate service data, a rate of the payload area in the low-rate data frame is not less than the rate of the low-rate service data, and the rate of the low-rate service data is less than 1 Gbps; mapping the low-rate data frame into one or more slots in another data frame, where a rate of the slot is not greater than 100 Mbps; mapping the other data frame into an optical transport unit (OTU) frame; and sending the OTU frame.

PROACTIVE AUTO-SCALING
20230318988 · 2023-10-05 ·

In an approach for proactive service group based auto-scaling, a processor collects usage data generated in one or more services in a container platform. A processor predicts access situation and resource utilization of the one or more services based on the usage data. A processor constructs a dynamic correlation topology among the one or more services based on the access situation and resource utilization. A processor identifies associated services correlated with the one or more services based on the dynamic correlation topology. A processor, in response to a service request exceeding a pre-set threshold, expands the one or more services and associated services.
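The expansion step described above can be sketched as a reachability query over the correlation topology: when a service's predicted load crosses the threshold, the service and everything correlated with it expand together. The graph shape, threshold, and load numbers below are illustrative.

```python
def correlated_closure(topology, service):
    """All services reachable from `service` in the correlation graph."""
    seen, stack = set(), [service]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(topology.get(s, []))
    return seen

def services_to_expand(predicted_requests, topology, threshold):
    """Services to scale out: any over-threshold service plus its correlates."""
    expand = set()
    for service, load in predicted_requests.items():
        if load > threshold:
            expand |= correlated_closure(topology, service)
    return expand
```

Expanding the closure rather than the single hot service is what makes the scaling proactive: downstream services grow before the surge reaches them.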

REDUCING A NETWORK DEVICE UPGRADE OUTAGE USING A REDUCED HARDWARE RESOURCE SCALING

In general, embodiments relate to a method, for managing a network device, that includes accessing, by a feature agent of the network device, an allocation data structure, wherein the allocation data structure specifies a first portion of memory and a second portion of memory, identifying, using the allocation data structure, the first portion of the memory to be used during an upgrade, wherein the second portion of memory is used for storing a network device table, wherein the network device table is used by a packet transmission component while the upgrade is being performed, and upon completion of the upgrade, updating the allocation data structure to specify that the packet transmission component use a second network device table and stop using the network device table, wherein the second network device table is initially populated during the upgrade.
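A toy model of the scheme: the allocation data structure names two memory portions; the packet-transmission component keeps reading its network device table from the active portion while the upgrade populates a second table in the other portion, and then the allocation record flips. Field names and the table contents are illustrative.

```python
class AllocationDataStructure:
    def __init__(self):
        # Two memory portions; "A" initially holds the live network device table.
        self.portions = {"A": {"table": {"10.0.0.0/8": "eth0"}},
                         "B": {"table": {}}}
        self.active = "A"  # portion the packet-transmission component reads

    def lookup(self, prefix):
        """Packet-transmission path: read the currently active table."""
        return self.portions[self.active]["table"].get(prefix)

    def perform_upgrade(self, new_table):
        spare = "B" if self.active == "A" else "A"
        # During the upgrade, populate the second table in the spare portion;
        # lookups continue against the old table the whole time.
        self.portions[spare]["table"] = dict(new_table)
        # On completion, update the allocation record: use the new table,
        # stop using the old one.
        self.active = spare
```

The point of the two-portion layout is that forwarding never loses its table mid-upgrade; the switchover is a single metadata update.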

Compute express link over ethernet in composable data centers

Techniques for sending Compute Express Link (CXL) packets over Ethernet (CXL-E) in a composable data center that may include disaggregated, composable servers. The techniques may include receiving, from a first server device, a request to bind the first server device with a multiple logical device (MLD) appliance. Based at least in part on the request, a first CXL-E connection may be established for the first server device to export a computing resource to the MLD appliance. The techniques may also include receiving, from the MLD appliance, an indication that the computing resource is available, and receiving, from a second server device, a second request for the computing resource. Based at least in part on the second request, a second CXL-E connection may be established for the second server device to consume or otherwise utilize the computing resource of the first server device via the MLD appliance.
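The control flow in the abstract (bind and export, advertise availability, second server consumes) can be sketched as a small broker. The class and method names are made up for illustration; the real CXL-E exchanges are packet-level, not Python calls.

```python
class MLDAppliance:
    """Stand-in for the multiple logical device (MLD) appliance as broker."""

    def __init__(self):
        self.exports = {}

    def bind_and_export(self, server, resource, capacity_gb):
        # First CXL-E connection: the first server exports a resource.
        self.exports[resource] = {"owner": server, "capacity_gb": capacity_gb}
        return True  # indication that the resource is available

    def request(self, server, resource):
        # Second CXL-E connection: another server consumes the resource.
        if resource not in self.exports:
            return None
        return {"consumer": server, **self.exports[resource]}
```

The appliance is the only party that knows both endpoints, which mirrors how the abstract routes the second server to the first server's resource without a direct binding between them.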

Microservice placement in hybrid multi-cloud using graph matching

An embodiment of the invention may include a method, computer program product and system for deployment of microservices within a shared pool of configurable computing resources. An embodiment may include creating a dependency map for a plurality of microservices of an application deployed on the shared pool of configurable computing resources. An embodiment may include identifying attributes, with associated values, for each microservice of the plurality of microservices and identifying eligible deployment locations within the shared pool of configurable computing resources. An embodiment may include creating a bipartite graph based on the plurality of microservices and the identified eligible deployment locations. An embodiment may include applying bipartite matching to the shared pool of configurable computing resources based on the created bipartite graph. An embodiment may include, based on the applied bipartite matching, relocating one or more microservices within the shared pool of configurable computing resources.
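The matching step can be sketched with the classic augmenting-path (Kuhn's) algorithm over a services-to-locations bipartite graph. The eligibility data is illustrative, and the patent does not prescribe this particular matcher.

```python
def bipartite_match(eligible):
    """eligible: {service: [eligible locations]} -> {service: location}."""
    match_loc = {}  # location -> service currently placed there

    def try_place(service, visited):
        for loc in eligible.get(service, []):
            if loc in visited:
                continue
            visited.add(loc)
            # Place here if the location is free, or if its current
            # occupant can be relocated along an augmenting path.
            if loc not in match_loc or try_place(match_loc[loc], visited):
                match_loc[loc] = service
                return True
        return False

    for service in eligible:
        try_place(service, set())
    return {svc: loc for loc, svc in match_loc.items()}
```

Note how the augmenting path performs exactly the relocation the abstract mentions: placing a new microservice may bump an earlier one to another of its eligible locations so that both fit.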