H04L67/101

Method for operating a distributed application
11706316 · 2023-07-18 · ·

A method for operating a distributed application includes: transmitting, by an application frontend, an initialization request to a registration server via a communication network; selecting, by the registration server, an instance of an application backend and transmitting a fully qualified domain name of the selected instance to the application frontend; transmitting, by the application frontend, a lookup request to a domain name server; transmitting, by the domain name server, an IP address associated with the fully qualified domain name to the application frontend; transmitting, by the application frontend, application data to the transmitted IP address via a connection provided by the communication network; selecting, by a core server of the communication network, a quality service for the distributed application; applying, by the communication network, a service quality determined by the selected quality service to the connection; and operating, by the distributed application, with the applied service quality.
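The flow above can be sketched end to end with in-memory dicts standing in for the registration server, the domain name server, and the core server's quality-service catalog. All names, addresses, and QoS classes below are invented for illustration.

```python
# Stand-ins for the registration server, DNS server, and QoS catalog.
REGISTRY = {"app1": ["backend-1.example.com", "backend-2.example.com"]}
DNS = {"backend-1.example.com": "10.0.0.1",
       "backend-2.example.com": "10.0.0.2"}
QOS_CLASSES = {"video": {"max_latency_ms": 20},
               "bulk":  {"max_latency_ms": 200}}

def initialize(app_name, qos_hint):
    # Frontend sends an initialization request; the registration server
    # selects an instance and returns its fully qualified domain name.
    fqdn = REGISTRY[app_name][0]
    # Frontend resolves the FQDN to an IP address via the DNS server.
    ip = DNS[fqdn]
    # The core server selects a quality service for the application;
    # the network would then apply it to the connection.
    quality = QOS_CLASSES[qos_hint]
    # The frontend would now transmit application data to `ip` over the
    # connection operating with the applied service quality.
    return {"fqdn": fqdn, "ip": ip, "qos": quality}

conn = initialize("app1", "video")
```

In a real deployment the dictionary lookups would be network requests (an HTTP call to the registration server, a DNS query, signaling to the core server); the control flow stays the same.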

Local preference in anycast CDN routing

Embodiments herein describe a CDN where anycast routing is used to identify a load balancer for selecting a cache in the CDN to use to deliver a requested object to a user. In one embodiment, the user performs a DNS lookup to identify an anycast IP address for a plurality of load balancers in the CDN. The user can then initiate anycast routing using the anycast IP address to automatically identify the closest load balancer. Once the identified balancer selects the cache, the load balancer can close the anycast connection with the user device and use an HTTP redirect to provide the user device with a unicast path to the selected cache. The user device can then establish a unicast connection with the cache to retrieve (e.g., stream) the object.
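The balancer-side handoff can be illustrated with a small sketch: the client reaches the nearest load balancer via anycast, the balancer picks a cache, and it answers with an HTTP 302 whose Location header points at the cache's unicast address. Cache names and addresses are invented.

```python
# Stand-in cache directory: region -> unicast IP of the selected cache.
CACHES = {"east": "198.51.100.7", "west": "198.51.100.9"}

def redirect_to_cache(object_path, client_region):
    # Cache selection (trivially region-based for this sketch).
    unicast_ip = CACHES[client_region]
    # The 302 closes the anycast leg; the client then re-connects to the
    # cache over a unicast path to retrieve (e.g., stream) the object.
    return {"status": 302,
            "location": f"http://{unicast_ip}/{object_path}"}

resp = redirect_to_cache("video/movie.mp4", "west")
```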

SERVICE-AWARE GLOBAL SERVER LOAD BALANCING

Example methods and systems for service-aware global server load balancing are described. One example may involve a first load balancer receiving, from a client device, a request to access a service associated with an application deployed in at least a first cluster and a second cluster. In response to a determination that a first pool in the first cluster is associated with an unhealthy status, the first load balancer may identify a second pool implementing the service in the second cluster, the second pool being associated with a healthy status and including one or more second backend servers selectable by a second load balancer to process the request. Failure handling may be performed by interacting with the client device, or the second load balancer, to allow the client device to access the service implemented by the second pool in the second cluster.
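The failover decision can be sketched as follows: the first load balancer prefers its local pool but, when that pool is unhealthy, identifies a healthy pool for the same service in another cluster. The pool data below is invented.

```python
# Stand-in health and membership state for the service's pools.
POOLS = {
    "cluster-1": {"healthy": False,
                  "servers": ["c1-backend-a", "c1-backend-b"]},
    "cluster-2": {"healthy": True,
                  "servers": ["c2-backend-a", "c2-backend-b"]},
}

def route_request(clusters):
    # Try the local cluster's pool first, then fail over to remote pools.
    for cluster in clusters:
        pool = POOLS[cluster]
        if pool["healthy"]:
            # A second load balancer would pick among these backend
            # servers; this sketch simply takes the first one.
            return cluster, pool["servers"][0]
    raise RuntimeError("no healthy pool implements the service")

cluster, server = route_request(["cluster-1", "cluster-2"])
```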

Application link resource scaling method, apparatus, and system based on concurrent stress testing of plural application links

An application link scaling method, apparatus, and system are provided. The method includes obtaining an application link, the application link being a path formed by at least two associated applications for a service scenario; determining information of target resources required by capacity scaling for all applications in the application link; allocating respective resources to the applications according to the information of the target resources; and generating instances for the applications according to the respective resources. From the perspective of services, the method performs capacity assessment for the related applications on a link as a whole and scales the capacity of the entire link, thus fully utilizing resources and preventing the applications from running short of resources when called by other applications. This keeps the applications from becoming a vulnerability of the system, ensures the stability of the system, avoids allocating excessive resources to the applications, and reduces the waste of resources.
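A minimal sketch of per-link scaling: capacity is assessed against a target load for the whole link, so every application on the path is sized to carry that load and no upstream application starves a downstream one. The link, target throughput, and per-instance capacities are invented numbers.

```python
# The application link: a path of associated applications for one
# service scenario (names and capacities invented for illustration).
LINK = ["gateway", "order-service", "inventory-service"]
TARGET_QPS = 900                       # capacity goal for the whole link
PER_INSTANCE_QPS = {"gateway": 300,
                    "order-service": 150,
                    "inventory-service": 450}

def scale_link(link, target_qps):
    plan = {}
    for app in link:
        # Ceiling division: instances this app needs to carry the link
        # load, so every hop on the path is sized for the same target.
        plan[app] = -(-target_qps // PER_INSTANCE_QPS[app])
    return plan

plan = scale_link(LINK, TARGET_QPS)
```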

Multi-access edge computing low latency information services

A multi-access edge computing (MEC) platform may receive an indication that a user device has downloaded a MEC application client associated with a MEC application and may send, to the user device, instructions to install a device client. The device client may transmit device information associated with the user device to the MEC platform. The MEC platform may receive the device information associated with the user device and determine, based on the received device information, performance information associated with the MEC application.
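The platform-side step can be illustrated with a small sketch: the device client reports device information, and the MEC platform maps it to performance information for the MEC application. The report fields and the classification rule are invented.

```python
def determine_performance(device_info):
    # Platform-side rule (illustrative): classify the expected latency
    # tier for the MEC application from the device's radio type.
    tier = "low-latency" if device_info.get("radio") == "5G" else "standard"
    return {"app": device_info["app"], "tier": tier}

# Device information the device client would transmit to the platform.
report = {"app": "ar-nav", "radio": "5G", "cpu_cores": 8}
perf = determine_performance(report)
```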

METHODS AND SYSTEMS FOR DEVICE-SPECIFIC EVENT HANDLER GENERATION

A system for device-specific event handler generation includes a computing device configured to configure a first remote device to display an event handler graphic, receive a plurality of data via a data-reception event handler, retrieve at least a memory entry linked to the memory map index, divide a device identifier space into a first identifier set and a disjoint second identifier set, and configure the first remote device to generate a graphical view, wherein the graphical view includes at least a display element generated as a function of the at least a memory entry and a first selectable event graphic corresponding to a first selectable event handler, wherein the first selectable event handler is configured to trigger a first action if a first remote device identifier corresponding to the first remote device matches the first identifier set and a second action if the identifier matches the second identifier set.
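The identifier-space split can be sketched as follows: the device identifier space is divided into a first set and a disjoint second set, and the selectable event handler dispatches a different action depending on which set the requesting device's identifier falls in. The explicit membership sets and action names are invented for illustration.

```python
# Two disjoint partitions of the device identifier space (invented ids).
FIRST_SET  = {"dev-001", "dev-002"}
SECOND_SET = {"dev-101", "dev-102"}
assert FIRST_SET.isdisjoint(SECOND_SET)   # the sets must not overlap

def on_select_event(device_id):
    # The selectable event handler triggers a first action for devices in
    # the first identifier set and a second action for the second set.
    if device_id in FIRST_SET:
        return "first_action"
    if device_id in SECOND_SET:
        return "second_action"
    raise KeyError(f"unpartitioned device identifier: {device_id}")
```

A production system would derive the partition from the identifier space itself (e.g., ranges or a stable hash) rather than enumerating members.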