H04L47/828

Partitioning health monitoring in a global server load balancing system

Some embodiments provide a novel method of performing health monitoring for resources associated with a global server load balancing (GSLB) system. This system is implemented by several domain name system (DNS) servers that perform DNS services for resources located at several geographically separate sites. The method identifies several different groupings of the resources and assigns the health monitoring of the different resource groups to different DNS servers. The method then configures each DNS server (1) to send health monitoring messages to the group of resources assigned to it, (2) to generate data by analyzing responses to those messages, and (3) to distribute the generated data to the other DNS servers. In some embodiments the method is performed by a set of one or more controllers.
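The partition-and-assign step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the grouping key (site), the round-robin assignment, and all names are assumptions.

```python
def assign_health_monitoring(resources, dns_servers, group_key):
    """Partition resources into groups, then assign each group to a DNS server."""
    # Group the resources, e.g. by geographic site (grouping key is an assumption).
    grouped = {}
    for r in resources:
        grouped.setdefault(group_key(r), []).append(r)
    # Spread the groups over the DNS servers round-robin.
    assignment = {s: [] for s in dns_servers}
    for i, (_, members) in enumerate(sorted(grouped.items())):
        assignment[dns_servers[i % len(dns_servers)]].extend(members)
    return assignment

resources = [
    {"name": "web-1", "site": "us-east"},
    {"name": "web-2", "site": "us-east"},
    {"name": "web-3", "site": "eu-west"},
    {"name": "web-4", "site": "ap-south"},
]
plan = assign_health_monitoring(resources, ["dns-a", "dns-b"],
                                group_key=lambda r: r["site"])
```

Each DNS server would then probe only the resources in its own slice of `plan` and share the resulting health data with its peers.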

Load-balancing establishment of connections among groups of connector servers

Techniques are described herein that are capable of load-balancing establishment of connections among groups of connector servers in a public computer network. The operations include receiving, from a connector client in a private computer network, a connection request that requests establishment of a connection between the connector client and one of the connector servers in the public computer network. A number of connections between the private computer network and each group is determined. An identified group is selected from the groups based at least in part on the number of connections between the private computer network and the identified group being less than or equal to the number of connections between the private computer network and each other group. The connection request is provided toward the identified group, which enables establishment of the connection between the connector client and a connector server in the identified group.
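The group-selection criterion above is a least-connections rule: pick a group whose connection count is less than or equal to every other group's. A minimal sketch (data shapes and names are assumptions):

```python
def select_group(connection_counts):
    """Return a group whose connection count is <= every other group's count."""
    return min(connection_counts, key=connection_counts.get)

# Connection counts between the private network and each connector-server group.
counts = {"group-1": 12, "group-2": 7, "group-3": 9}
chosen = select_group(counts)  # the connection request is routed toward this group
```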

Automated local scaling of compute instances

At a first compute instance run on a virtualization host, a local instance scaling manager is launched. The scaling manager determines, based on metrics collected at the host, that a triggering condition for redistributing one or more types of resources of the first compute instance has been met. The scaling manager causes virtualization management components to allocate a subset of the first compute instance's resources to a second compute instance at the host.
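A hedged sketch of the triggering-condition check: if the first instance's CPU utilization stays below a threshold, propose moving a share of its vCPUs to the second instance. The threshold, the fraction, and the metrics format are illustrative assumptions, not the patent's actual policy.

```python
def plan_redistribution(metrics, threshold=0.25, fraction=0.5):
    """Decide whether to move a subset of vCPUs from instance 1 to instance 2."""
    if metrics["instance1_cpu_util"] < threshold:
        # Instance 1 is underutilized: hand a share of its vCPUs to instance 2.
        return {"move_vcpus": int(metrics["instance1_vcpus"] * fraction)}
    return None  # triggering condition not met; leave the allocation alone

plan = plan_redistribution({"instance1_cpu_util": 0.10, "instance1_vcpus": 8})
```

In the described system, a plan like this would be handed to the virtualization management components, which perform the actual reallocation.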

METHOD AND APPARATUS FOR LDPC TRANSMISSION OVER A CHANNEL BONDED LINK
20230231647 · 2023-07-20 ·

Described are an overall architecture for transmission over a bonded channel system consisting of two interconnected MoCA (Multimedia over Coax Alliance) 2.0 SoCs (Systems on a Chip), and a method and apparatus for a "bonded" channel network. In a bonded channel network, the data is divided into two segments, the first of which is transported over a primary channel and the second of which is transported over a secondary channel.
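The two-segment split can be sketched as below. The even split is an assumption for illustration; a real bonding scheduler would weight the segments by per-channel capacity.

```python
def split_for_bonding(payload: bytes):
    """Divide a payload into two segments: one per bonded channel."""
    mid = len(payload) // 2
    return payload[:mid], payload[mid:]  # (primary channel, secondary channel)

primary, secondary = split_for_bonding(b"abcdefgh")
```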

Managing cloud acquisitions using distributed ledgers
11706155 · 2023-07-18 ·

Systems and methods of the disclosure include: receiving, by a cloud resource provisioning component via a cloud provisioning request application programming interface (API), a cloud resource request; storing the cloud resource request on a cryptographically-protected distributed ledger; receiving, from a first cloud provider, a first cloud resource offer responsive to the cloud resource request; and responsive to receiving, from a node of the cryptographically-protected distributed ledger, a notification of validation of the first cloud resource offer with respect to the cloud resource request, causing the first cloud provider to provision a cloud resource specified by the first cloud resource offer.
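The request/offer/validation flow above can be illustrated end to end. All class and method names here are assumptions; the `validate` check stands in for the ledger node's validation of the offer against the request.

```python
class Ledger:
    """Toy stand-in for a cryptographically-protected distributed ledger."""
    def __init__(self):
        self.entries = []

    def store(self, record):
        self.entries.append(record)

    def validate(self, offer, request):
        # Placeholder validation: the offer must satisfy the requested capacity.
        return offer["cpus"] >= request["cpus"]

def provision(request, offer, ledger):
    ledger.store(request)                      # store the request on the ledger
    if ledger.validate(offer, request):        # ledger node validates the offer
        return {"provisioned": True, "provider": offer["provider"]}
    return {"provisioned": False}

ledger = Ledger()
result = provision({"cpus": 4}, {"provider": "cloud-a", "cpus": 8}, ledger)
```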

System for source independent but source value dependent transfer monitoring

Systems, computer program products, and methods are described herein for source independent but source value dependent transfer monitoring. The invention is configured to receive a processing request to initiate a processing network session, wherein the processing network session is associated with the processing of a first activity; receive a processing interaction request to access a first resource associated with the user; extract a resource processing value associated with the first activity from the processing parameter data structure; determine whether the resource processing value is associated with triggering at least one block intervention step; block the entity input device from accessing the first resource in response to determining that the resource processing value is associated with triggering the at least one block intervention step during processing of the first activity; transmit a block notification to the entity input device; and trigger display of a success notification at an end-user application.
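The block-intervention decision reduces to comparing the extracted resource processing value against a configured trigger. A minimal sketch, assuming a simple threshold rule (the rule and all names are illustrative, not the claimed logic):

```python
def should_block(resource_processing_value, trigger_threshold):
    """Does this value trigger the block intervention step?"""
    return resource_processing_value >= trigger_threshold

def process_interaction(value, threshold):
    if should_block(value, threshold):
        # Block access to the resource and notify the entity input device.
        return {"access": "blocked", "notification": "block"}
    return {"access": "granted", "notification": "success"}

outcome = process_interaction(value=5000, threshold=1000)
```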

Load adaptation architecture framework for orchestrating and managing services in a cloud computing system

According to one aspect of the concepts and technologies disclosed herein, a cloud computing system can include a load adaptation architecture framework that performs operations for orchestrating and managing one or more services that may operate within at least one of layers 4 through 7 of the Open Systems Interconnection (“OSI”) communication model. The cloud computing system also can include a virtual resource layer. The virtual resource layer can include a virtual network function that provides, at least in part, a service. The cloud computing system also can include a hardware resource layer. The hardware resource layer can include a hardware resource that is controlled by a virtualization layer. The virtualization layer can cause the virtual network function to be instantiated on the hardware resource so that the virtual network function can be used to support the service.
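The layering relationships above can be sketched as a toy model: a virtualization layer controls a hardware resource and instantiates a virtual network function (VNF) on it so the VNF can back a service. All names here are assumptions for illustration.

```python
class VirtualizationLayer:
    """Controls a hardware resource and instantiates VNFs on it."""
    def __init__(self, hardware):
        self.hardware = hardware
        self.vnfs = []

    def instantiate(self, vnf_name):
        placement = {"vnf": vnf_name, "host": self.hardware}
        self.vnfs.append(placement)
        return placement

virt = VirtualizationLayer(hardware="server-rack-1")
placement = virt.instantiate("load-balancer-vnf")  # VNF now supports the service
```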

SYSTEMS AND METHODS FOR MULTI-CLOUD VIRTUALIZED INSTANCE DEPLOYMENT AND EXECUTION

A system may receive a first definition for a virtualized instance of a network function. The first definition may include a first set of declarations in a first format that is different than respective formats supported by different virtualized environments. The system may select a first virtualized environment to run the virtualized instance based on requirements specified within the first definition, and may generate a second definition with a second set of declarations that map the first set of declarations from the first format to a second format supported by the first virtualized environment. The system may deploy the virtualized instance to the first virtualized environment using the second set of declarations from the second definition. Deploying the virtualized instance may include configuring its operation based on some of the second set of declarations matching a configuration format supported by the first virtualized environment.
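The translation from the first definition to the second can be sketched as a per-environment key mapping. The declaration names, the mapping, and the target format are assumptions for illustration only.

```python
# Hypothetical mapping from a generic declaration format to the format
# supported by one virtualized environment ("env-a" is an assumed name).
KEY_MAP = {
    "env-a": {"cpu": "vcpu_count", "mem": "memory_mb", "image": "disk_image"},
}

def translate_definition(definition, environment):
    """Map first-format declarations to the selected environment's format."""
    mapping = KEY_MAP[environment]
    return {mapping[k]: v for k, v in definition.items() if k in mapping}

first = {"cpu": 2, "mem": 4096, "image": "nf.qcow2"}
second = translate_definition(first, "env-a")
```

The second definition is then what gets handed to the selected environment to deploy and configure the virtualized instance.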

Enhanced selection of cloud architecture profiles

This document describes modeling and simulation techniques to select a cloud architecture profile based on correlations between application workloads and resource utilization. In some aspects, a method includes obtaining infrastructure data specifying utilization of computing resources of an existing computing system. Application workload data specifying tasks performed by one or more applications running on the existing computing system is obtained. One or more models are generated based on the infrastructure data and the application workload data. The model(s) define an impact on utilization of each computing resource in response to changes in workloads of the application(s). A workload is simulated, using the model(s), on a candidate cloud architecture profile that specifies a set of computing resources. A simulated utilization of each computing resource of the candidate cloud architecture profile is determined based on the simulation. An updated cloud architecture profile is generated based on the simulated utilization.
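The model-then-simulate loop can be illustrated with the simplest possible model: per-resource utilization as a linear function of workload, fitted from the observed data, then replayed against a candidate profile's capacity. The linear model is an assumption; the document's models may be arbitrarily richer.

```python
def fit_model(workloads, utilizations):
    """Least-squares slope through the origin: utilization ~ slope * workload."""
    num = sum(w * u for w, u in zip(workloads, utilizations))
    den = sum(w * w for w in workloads)
    return num / den

def simulate(slope, workload, capacity):
    """Simulated utilization of a candidate profile as a fraction of capacity."""
    return (slope * workload) / capacity

# Fit from observed (workload, utilization) pairs on the existing system...
slope = fit_model([100, 200, 300], [10, 20, 30])
# ...then simulate a heavier workload on a candidate cloud architecture profile.
util = simulate(slope, workload=500, capacity=100)
```

A simulated utilization near or above 1.0 would indicate the candidate profile is undersized, feeding the generation of an updated profile.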

Mechanism to enforce consistent next hops in a multi-tier network

In general, the disclosure relates to a method for forwarding a packet through a multi-tier network by establishing a routing protocol session with network devices in the multi-tier network, obtaining routing protocol information from network devices of the multi-tier network, determining a group using the routing protocol information, generating an ordered group listing using network device identifiers (NDIs) for the network devices in the group, and programming the hardware of a network device of the multi-tier network using the ordered group listing. The group includes a set of the network devices of the multi-tier network.
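The ordering step is what enforces consistency: if every device derives the group listing by sorting the members' NDIs, all devices program the same next-hop order regardless of the order in which routing-protocol information arrived. Sorting as the canonical order is an assumption here, used only to illustrate the idea.

```python
def ordered_group_listing(ndis):
    """Build a canonical, order-independent listing of group members."""
    return sorted(ndis)

# Two devices learn the same group members in different arrival orders...
device_a = ordered_group_listing(["spine-3", "spine-1", "spine-2"])
device_b = ordered_group_listing(["spine-2", "spine-3", "spine-1"])
# ...but both derive the identical listing to program into hardware.
```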