Patent classifications
H04L67/60
METHOD AND NETWORK ENTITY FOR SERVICE API PUBLISHING
The present disclosure provides a method in a first network entity for service Application Programming Interface, API, publishing. The method includes: receiving, from a second network entity, an API publish request for publishing a service API, the API publish request containing a list of identifiers of network entities that have published the service API; and transmitting, when an identifier of the first network entity is included in the list, an API publish response indicating failure of publishing of the service API to the second network entity, without creating a new resource for the service API at the first network entity or further publishing the service API to any network entity.
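The loop-prevention check described above can be sketched as follows. This is a minimal illustration, not the actual message format: the field names (`publisher_ids`, `api_name`), the dict-based request/response shapes, and the function name are all assumptions for the example.

```python
def handle_api_publish(own_id, publish_request):
    """Handle a service API publish request at a network entity.

    `publish_request` is assumed to carry the API name and the list of
    identifiers of entities that have already published this API.
    """
    publishers = publish_request.get("publisher_ids", [])
    if own_id in publishers:
        # This entity has already published the API: report failure
        # without creating a new resource or forwarding the request.
        return {"result": "failure", "reason": "loop detected"}
    # Otherwise create the resource and record this entity as a
    # publisher before propagating onward (propagation omitted here).
    resource = {"api": publish_request["api_name"],
                "publisher_ids": publishers + [own_id]}
    return {"result": "success", "resource": resource}
```

Carrying the publisher list in the request lets each entity detect a publish loop locally, with no shared state between entities.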
SERVICE PROCESSING METHOD AND APPARATUS, AND STORAGE MEDIUM
A service processing method, performed by a cloud application management server, includes: upon receiving an allocation request from a target terminal, acquiring N pieces of selection reference information corresponding to a pending edge server and related to the target terminal and to running reference information, the pending edge server being one of P edge servers connected to the cloud application management server; upon determining that the pending edge server meets a requirement of providing a running service of a target cloud application for the target terminal, determining a connection reference score corresponding to the pending edge server; storing the connection reference score and identification information about the pending edge server into a candidate set; and transmitting the candidate set to the target terminal.
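The candidate-set construction above can be sketched like this. The requirement check (free capacity vs. required capacity) and the scoring formula are illustrative placeholders only; the abstract does not fix a concrete formula, and all field names here are assumptions.

```python
def build_candidate_set(edge_servers, selection_info, running_info):
    """Build the candidate set of edge servers for a target terminal.

    `selection_info` maps server id -> assumed selection reference
    information; `running_info` carries the target cloud application's
    running requirements.
    """
    candidates = []
    for server in edge_servers:
        info = selection_info[server["id"]]
        # Requirement: enough free capacity to run the target cloud app.
        if info["free_capacity"] < running_info["required_capacity"]:
            continue
        # Placeholder connection reference score: lower latency and more
        # free capacity score higher.
        score = 1.0 / (1.0 + info["latency_ms"]) + info["free_capacity"]
        candidates.append({"id": server["id"], "score": round(score, 3)})
    return candidates
```

The server only filters and scores; the final choice among candidates is left to the target terminal, which receives the candidate set.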
MICROSERVICE CALL METHOD AND APPARATUS, DEVICE, AND MEDIUM
This application provides a microservice call method. The method includes: a network interface card receives service call requests separately generated by a plurality of first services deployed on a first device, where each of the plurality of first services corresponds to one second service, and the second service is used to process first data of a service call request generated by its corresponding first service; and the network interface card sends the service call request to the second service based on service governance logic related to the first data. In this way, the problem that contention among a plurality of proxies for a system resource causes process context switching, and in turn a sharp increase in service delay, is resolved, and application performance is improved.
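The NIC-side dispatch can be sketched as below. The rule table keyed by first service, the version-based endpoint selection, and every name in this snippet are assumptions for illustration; the patent does not specify the governance logic's concrete form.

```python
def nic_dispatch(requests, governance_rules):
    """Route service call requests from first services to their
    corresponding second services, as the NIC in the abstract would.

    `governance_rules` maps each first service to a table of second-
    service endpoints, selected by a field of the request's first data.
    """
    routed = []
    for req in requests:
        target = governance_rules[req["first_service"]]
        # Governance logic tied to the first data: e.g. pick a versioned
        # endpoint if the data carries a version tag, else the default.
        endpoint = target.get(req["data"].get("version"), target["default"])
        routed.append((req["first_service"], endpoint))
    return routed
```

Centralizing this logic on the NIC, instead of running one sidecar proxy per service, is what removes the per-proxy resource contention the abstract describes.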
METHOD AND APPARATUS FOR REAL-TIME DYNAMIC APPLICATION PROGRAMMING INTERFACE (API) TRAFFIC SHAPING AND INFRASTRUCTURE RESOURCE PROTECTION IN A MULTICLIENT NETWORK ENVIRONMENT
A method for real-time dynamic API traffic shaping and infrastructure resource protection in a multiclient network environment is provided. A traffic rules engine (TRE) applies traffic shaping only to customers that are utilizing "more than their fair share" of the currently available bandwidth, so that they cannot negatively impact the user experience of other users. The present invention takes current API traffic into consideration, allowing one or a few high-volume users to utilize most or all of the available bandwidth as long as other users do not need that bandwidth. This includes dynamically measuring and adjusting which users have traffic shaping applied to them based on the overall traffic during any given second. The solution of the present invention avoids any slowdown of customer API requests unless the maximum allowable TPS limit is close to being reached.
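The fair-share shaping behavior can be sketched with a per-second counter, shown below. The class name, the equal-split definition of "fair share", and the in-memory counters are assumptions; a real TRE would define fair share per its own rules and reset the window on a timer.

```python
from collections import defaultdict

class TrafficShaper:
    """Shape only heavy users, and only once the TPS limit is reached.

    Counters represent one one-second window; resetting them each
    second (e.g. from a timer) is omitted from this sketch.
    """

    def __init__(self, max_tps):
        self.max_tps = max_tps
        self.counts = defaultdict(int)

    def allow(self, customer):
        total = sum(self.counts.values())
        if total >= self.max_tps:
            # Limit reached: throttle only customers at or above an
            # equal split of the limit among active customers.
            fair_share = self.max_tps / max(len(self.counts), 1)
            if self.counts[customer] >= fair_share:
                return False
        self.counts[customer] += 1
        return True
```

Below the TPS limit, no one is slowed at all; at the limit, a heavy user is rejected while a light user's request still goes through, matching the abstract's stated behavior.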
FEDERATED SERVICE REGISTRIES FOR ACCESS PROVIDERS
Techniques for federated service registries are provided. A first access server determines a first plurality of services available within a local network associated with the first access server, as well as a second plurality of services available at one or more remote networks. A request for a first service is received from a client device, where the first service is not included in the first plurality of services and is included in the second plurality of services. A tunnel is established from the client device to one or more remote networks.
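The local-then-federated lookup can be sketched as follows. The registry representation (plain dicts) and the `establish_tunnel` callable are placeholders for the access server's actual registry and tunnel setup; every name here is an assumption.

```python
def resolve_service(name, local_services, remote_registries, establish_tunnel):
    """Resolve a service against the local registry first, then against
    the federated remote registries, tunnelling only when needed."""
    if name in local_services:
        return {"network": "local", "endpoint": local_services[name]}
    for network, services in remote_registries.items():
        if name in services:
            # Service exists only remotely: tunnel the client through
            # to the remote network that advertises it.
            return {"network": network,
                    "endpoint": establish_tunnel(network, name)}
    raise LookupError(f"service {name!r} not found in any registry")
```

The key property is that the tunnel is established lazily, only when a requested service is absent locally but advertised by a federated remote network.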
Dynamically coordinated service maintenance operations and adaptive service polling for microservices
Techniques are provided for a coordinated microservice system including a coordinator and multiple services, which interact with each other. Each of the services can have multiple execution instances, which run independently of each other. In operation, the current status of each instance is evaluated against one or more rules to determine whether the current status changes the topography of the services, and the topography is updated based on any changes. An execution plan is created for executing a command based on one or more predefined rules and the updated topography, where the execution plan includes one or more steps for executing the command on each instance of the service. The execution plan is executed on each instance of the service in accordance with the one or more predefined rules.
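The topography-update and plan-creation steps can be sketched as below. The health-based status rule, the predicate shape of the predefined rules, and all names are illustrative assumptions; the patent leaves the rules' concrete form open.

```python
def update_topography(topography, statuses):
    """Refresh the service topography from per-instance status reports,
    keeping only instances that currently report themselves healthy."""
    return {service: [i for i in instances if statuses.get(i) == "healthy"]
            for service, instances in topography.items()}

def create_execution_plan(command, topography, rules):
    """Create one plan step per surviving instance, filtered by the
    predefined rules (modeled here as predicates over service/instance)."""
    return [(service, instance, command)
            for service, instances in topography.items()
            for instance in instances
            if all(rule(service, instance) for rule in rules)]
```

Separating the topography refresh from plan creation means a command issued after an instance failure is automatically planned only against the instances that still exist.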
Methods and systems for transmitting information
Methods and systems for transmitting information, comprising: transmitting, by a first computing device of a first computing system, a first network function request to a decentralized network, the first network function request including first information; and transmitting, by a second computing device of a second computing system, a second network function request to the decentralized network, the second network function request including second information.