METHOD FOR PROVIDING A LOW-LATENCY, DISTRIBUTED, MULTI-USER APPLICATION THROUGH AN EDGE CLOUD PLATFORM
20210160165 · 2021-05-27
Inventors
CPC classification
H04L47/225
ELECTRICITY
H04L12/4633
ELECTRICITY
H04L67/10
ELECTRICITY
A63F13/30
HUMAN NECESSITIES
International classification
A63F13/30
HUMAN NECESSITIES
Abstract
The disclosure relates to an Edge Cloud Platform (ECP) and a method executed in the ECP, for providing a low-latency, distributed, multi-user application. The method comprises determining a first location of a first group of users requesting access to the multi-user application and deploying the multi-user application in a first Point of Presence (PoP) in a first Service Provider (SP) domain operative to serve the first group of users. The method comprises determining a second location of a second group of users requesting access to the multi-user application and deploying a proxy of the multi-user application in a second PoP in a second SP domain operative to serve the second group of users. The method comprises, upon determining that a Service Level Agreement (SLA) exists between the first and second SPs, establishing a tunnel for linking the multi-user application and the proxy of the multi-user application, thereby providing the low-latency.
Claims
1. A method for providing a low-latency, distributed, multi-user application through an Edge Cloud Platform, comprising: determining a first location of a first group of users requesting access to the multi-user application and deploying the multi-user application in a first Point of Presence (PoP) in a first Service Provider (SP) domain operative to serve the first group of users; determining a second location of a second group of users requesting access to the multi-user application and deploying a proxy of the multi-user application in a second PoP in a second SP domain operative to serve the second group of users; and upon determining that a Service Level Agreement (SLA) exists between the first and second SPs, establishing a tunnel for linking the multi-user application and the proxy of the multi-user application, thereby providing the low-latency.
2. The method of claim 1, wherein the multi-user application is a gaming application.
3. The method of claim 1, wherein the multi-user application and the proxy of the multi-user application are deployed by a traffic manager.
4. The method of claim 3, wherein the traffic manager selects a largest group of users as the first group of users.
5. The method of claim 3, wherein the traffic manager selects the first group of users, among multiple groups of users, based on network characteristics that enable achieving lower latencies for all groups of users.
6. The method of claim 5, wherein the network characteristics that enable achieving lower latencies for all groups of users are determined using data analytics of network characteristics, a number of users per location, a network status or traffic characteristics.
7. The method of claim 1, further comprising detecting changes in the groups of users and moving the multi-user application to another PoP in another SP domain.
8. The method of claim 7, wherein the changes include more or fewer users at a given location, a new group of users at a new location, or all users “dropped” at another given location, and wherein moving the multi-user application to another PoP in another SP domain reduces a cumulative latency.
9. The method of claim 1, wherein the users are connected through any one of: a cable connection, a short range wireless connection, a long range wireless connection.
10. The method of claim 1, wherein the tunnel is established using peering or transit.
11. An Edge Cloud Platform for providing a low-latency, distributed, multi-user application, comprising processing circuits and a memory, the memory containing instructions executable by the processing circuits whereby the Edge Cloud Platform is operative to: determine a first location of a first group of users requesting access to the multi-user application and deploy the multi-user application in a first Point of Presence (PoP) in a first Service Provider (SP) domain operative to serve the first group of users; determine a second location of a second group of users requesting access to the multi-user application and deploy a proxy of the multi-user application in a second PoP in a second SP domain operative to serve the second group of users; and upon determining that a Service Level Agreement (SLA) exists between the first and second SPs, establish a tunnel for linking the multi-user application and the proxy of the multi-user application, thereby providing the low-latency.
12. The Edge Cloud Platform of claim 11, wherein the multi-user application is a gaming application.
13. The Edge Cloud Platform of claim 11, wherein the multi-user application and the proxy of the multi-user application are deployed by a traffic manager.
14. The Edge Cloud Platform of claim 13, wherein the traffic manager is operative to select a largest group of users as the first group of users.
15. The Edge Cloud Platform of claim 13, wherein the traffic manager is operative to select the first group of users, among multiple groups of users, based on network characteristics that enable achieving lower latencies for all groups of users.
16. The Edge Cloud Platform of claim 15, wherein the network characteristics that enable achieving lower latencies for all groups of users are determined using data analytics of network characteristics, a number of users per location, a network status or traffic characteristics.
17. The Edge Cloud Platform of claim 11, further operative to detect changes in the groups of users and move the multi-user application to another PoP in another SP domain.
18. The Edge Cloud Platform of claim 17, wherein the changes include more or fewer users at a given location, a new group of users at a new location, or all users “dropped” at another given location, and wherein moving the multi-user application to another PoP in another SP domain reduces a cumulative latency.
19. The Edge Cloud Platform of claim 11, wherein the users are connected through any one of: a cable connection, a short range wireless connection, a long range wireless connection.
20. The Edge Cloud Platform of claim 11, wherein the tunnel is established using peering or transit.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0008]
[0009]
[0010]
[0011]
[0012]
[0013]
[0014]
[0015]
DETAILED DESCRIPTION
[0016] Various features will now be described with reference to the figures to fully convey the scope of the disclosure to those skilled in the art.
[0017] Sequences of actions or functions may be used within this disclosure. It should be recognized that some functions or actions, in some contexts, could be performed by specialized circuits, by program instructions being executed by one or more processors, or by a combination of both.
[0018] Further, a computer readable carrier or carrier wave may contain an appropriate set of computer instructions that would cause a processor to carry out the techniques described herein.
[0019] The functions/actions described herein may occur out of the order noted in the sequence of actions or simultaneously. Furthermore, in some illustrations, some blocks, functions or actions may be optional and may or may not be executed; these may be illustrated with dashed lines.
[0020] Referring to
[0021] Although the servers (data centers (DCs) 10) are deployed in different locations, some of which are intended to be closer to end users, the access networks 15 of the Internet Service Providers (ISPs) 20 are still required to reach the end users. The access network is the so-called “last mile access”.
[0022] Since the global cloud platforms operate on top of internet service providers, the connection between their GCP servers and the nodes in the internet service provider network 20 normally goes through the Internet 25 if there is no Service Level Agreement (SLA) between them. The internet connection causes delays in the north-south traffic (GCP to ISP and ISP to GCP), i.e. between the end users and the applications. For gaming applications, this type of delay needs to be reduced.
[0023]
[0024] Latency issues for gaming players can be expected due to the Internet connection 25 and due to the physical location of the players, since there is no Quality of Service (QoS) in place for the traffic paths between the end users and the gaming server deployed in the New York data center.
[0025] When users in both groups A and B play the same game at the same time (collaboratively or against each other), how can the game experience be improved, and how can all game players be provided with fair play in terms of the request-response turn-around time?
[0026] Turning to
[0027] The proposed new architecture consists of two parts, one called the “Core part” 40a and the other called the “Edge part” 40b. Both are described in detail hereafter.
[0028] The Core part 40a of the ECP owns all the data centers 10 in which the ECP computing nodes, ECP delivery nodes, control nodes, monitoring nodes and analytic nodes are deployed. Those data centers 10 reside in different geographical locations based on business needs. All the data centers 10 are connected to the Internet via secure means, such as Firewall (FW) settings. The ECP also provides direct connections to the ECP backbone network 27 (
[0029] The Edge part 40b of the ECP deploys computing nodes and delivery nodes inside different service providers' 20 networks. Those nodes are adjacent to the SP access network 15, such as a 5G access network (which supports a network slicing function). It is assumed that QoS from the edge nodes to the end users can be guaranteed. Regarding the horizontal traffic from west to east (at similar levels in the network), i.e. between different geographical locations, according to the SLA, a service provider might give an option for the ECP to peer up with other service providers 20 using techniques such as peering or transit. In addition, the concept of Point of Presence (PoP) 45 can be used to group several edge nodes within a specific service provider, thereby providing an abstraction level for the edge.
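The PoP abstraction described above can be sketched as a simple data structure grouping edge nodes within one service provider. This is an illustrative sketch only, not part of the disclosed embodiments; the class and attribute names (`EdgeNode`, `PointOfPresence`, `role`) are assumptions chosen for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeNode:
    # A computing or delivery node deployed inside a service provider's network
    name: str
    role: str  # "computing" or "delivery"

@dataclass
class PointOfPresence:
    # Groups several edge nodes within one service provider,
    # providing an abstraction level for the edge
    sp_name: str
    nodes: list = field(default_factory=list)

    def add_node(self, node: EdgeNode) -> None:
        self.nodes.append(node)

    def delivery_nodes(self) -> list:
        # Only the delivery nodes serve end-user requests directly
        return [n for n in self.nodes if n.role == "delivery"]

# Example: one PoP inside service provider SP1
pop = PointOfPresence(sp_name="SP1")
pop.add_node(EdgeNode("edge-1", "computing"))
pop.add_node(EdgeNode("edge-2", "delivery"))
```

Grouping nodes this way lets the rest of the platform reason about "a PoP in SP1" without tracking individual edge nodes.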
[0030] Still referring to
[0031] As shown in
[0032] In the case of a gaming application, a proxy is automatically generated in the corresponding service providers to make sure that fairness is maintained among the game players of the same game. The mechanism for deploying the gaming application as well as the gaming proxy is described further below.
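One possible placement heuristic consistent with the description (the application near the largest group, proxies elsewhere) can be sketched as follows. This is a minimal illustration, not the patented mechanism itself; the function name and the rule of picking the SP with the most users are assumptions for the example.

```python
def plan_deployment(users_per_sp):
    # Deploy the gaming application in the SP domain serving the most users,
    # and a gaming proxy in every other SP domain that has users.
    server_sp = max(users_per_sp, key=users_per_sp.get)
    proxy_sps = sorted(sp for sp in users_per_sp if sp != server_sp)
    return server_sp, proxy_sps

server, proxies = plan_deployment({"SP1": 120, "SP2": 40})
```

With 120 users in SP1 and 40 in SP2, the application lands in SP1 and a proxy is generated in SP2.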
[0033] Three scenarios are presented in relation to
[0034] Referring to
[0035] The second case is illustrated hereafter and occurs after both groups, A and B, have started the online game, i.e. when group C joins the game session in which groups A and B are already playing.
[0036] Referring to
[0037] The traffic performance degradation may be logged in access log files or may be acquired using other methods, such as real-time monitoring tools that collect performance data, which are consumed by an ECP analytics solution 55. Based on the outcome of the analytics solution and also based on ECP monitoring, a notification or alarm is sent to the game tenant. This triggers the tenant to reset the QoS profile (using peering between SP2 and SP1) for the gaming application.
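The alarm step above amounts to comparing observed performance data against a tenant threshold. A minimal sketch of such a check, with an assumed threshold value and an assumed action label (`reset_qos_profile`), could look like this:

```python
def evaluate_latency(samples_ms, threshold_ms=50.0):
    # Consume collected latency samples (e.g. from access logs or real-time
    # monitoring) and raise an alarm when the average exceeds the threshold,
    # prompting the tenant to reset the QoS profile.
    avg = sum(samples_ms) / len(samples_ms)
    if avg > threshold_ms:
        return {"alarm": True, "avg_ms": avg, "action": "reset_qos_profile"}
    return {"alarm": False, "avg_ms": avg, "action": None}

result = evaluate_latency([80.0, 90.0, 100.0])
```

A real analytics solution would weigh far more signals than a single average, but the trigger-on-threshold shape is the same.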
[0038] This eventually leads to case 3, described in
[0039] The location selection for the gaming server 35 and gaming proxy 37 can be done automatically based on the outcome of an ECP data analytics solution 55. The prediction of the traffic pattern (against time) can be made using Artificial Intelligence (AI)/Machine Learning (ML) algorithms, such as a reinforcement learning algorithm that takes the traffic data logs as its input to predict the traffic patterns towards the gaming server for the next epoch. Those data logs are collected by the ECP from the gaming servers deployed in different locations.
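The predict-then-place logic can be illustrated with a deliberately simple forecaster. This sketch uses a per-location mean of logged epochs as a stand-in for the reinforcement learning predictor mentioned above; the function names and the mean-based forecast are assumptions, not the disclosed algorithm.

```python
def predict_next_epoch(traffic_logs):
    # Forecast per-location request volume for the next epoch as the mean
    # of the logged epochs (a toy stand-in for an RL-based predictor).
    return {loc: sum(hist) / len(hist) for loc, hist in traffic_logs.items()}

def choose_server_location(prediction):
    # Place the gaming server where the most traffic is expected.
    return max(prediction, key=prediction.get)

# Traffic data logs collected from gaming servers in locations A and C
prediction = predict_next_epoch({"A": [100, 80], "C": [120, 140]})
best = choose_server_location(prediction)
```

Whatever model produces the forecast, the placement decision reduces to selecting the location with the highest predicted demand.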
[0040] For instance, at time t1, a gaming server is deployed in location A in SP1, and the game proxy is deployed in location C in SP2. This is because many more requests from end users go to location A than to location C.
[0041] The ECP AI/ML algorithm predicts that at time t2, more requests will go to location C instead of location A. As a result, the ECP deploys the new gaming server in location C in SP2, and a new gaming proxy in location A in SP1.
[0042] When both the new gaming server and the new proxy are ready, the old/existing gaming server and proxy redirect their client requests to the new gaming server and proxy. After all the existing gaming sessions have been successfully moved to the new gaming server, the ECP removes/deletes the old (out-of-date) gaming servers and proxies.
[0043] The new gaming server and proxy now provide good gaming performance towards the end users. Several advantages are provided by the techniques described herein. The service providers may gain traffic from gaming applications by leveraging their access network assets, especially with 5G mobile access networks. The tenants benefit from fast responses from the delivery nodes to their subscribers' requests, which may help attract more game players to the gaming service. The end users benefit from a good or improved user experience of “online gaming”.
[0044]
[0045] The multi-user application may be a gaming application. The multi-user application and the proxy of the multi-user application may be deployed by a traffic manager. The traffic manager may select a largest group of users as the first group of users. The traffic manager may select the first group of users, among multiple groups of users, based on network characteristics that enable achieving lower latencies for all groups of users. The characteristics that enable achieving lower latencies for all groups of users may be determined using data analytics of network characteristics, a number of users per location, a network status or traffic characteristics.
[0046] The method may further comprise detecting, step 76, changes in the groups of users and moving the multi-user application to another PoP in another SP domain. The changes may include more or fewer users at a given location, a new group of users at a new location, or all users “dropped” at another given location, and moving the multi-user application to another PoP in another SP domain may reduce a cumulative latency. The users may be connected through any one of: a cable connection, a short range wireless connection, a long range wireless connection. The tunnel may be established using peering or transit.
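The overall method (deploy the application in the first PoP, deploy a proxy in the second PoP, and establish the tunnel only when an SLA exists between the two SPs) can be summarized in a short sketch. This is an editorial illustration under assumed names, not the claimed implementation.

```python
def provide_low_latency_app(first_pop, second_pop, sla_exists):
    # Deploy the multi-user application in the first PoP and a proxy of it
    # in the second PoP; link them with a tunnel only when an SLA exists
    # between the two service providers.
    deployment = {"application": first_pop, "proxy": second_pop, "tunnel": None}
    if sla_exists:
        deployment["tunnel"] = (first_pop, second_pop)
    return deployment

linked = provide_low_latency_app("PoP-SP1", "PoP-SP2", sla_exists=True)
unlinked = provide_low_latency_app("PoP-SP1", "PoP-SP2", sla_exists=False)
```

Without the SLA, the application and proxy remain deployed but unlinked, so the low-latency east-west path is not available.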
[0047] Referring again to
[0048] A virtualization environment (which may go beyond what is illustrated in
[0049] A virtualization environment provides hardware comprising processing circuitry 80 and memory 85. The memory can contain instructions executable by the processing circuitry whereby functions and steps described herein may be executed to provide any of the relevant features and benefits disclosed herein.
[0050] The hardware may also include non-transitory, persistent, machine readable storage media 90 having stored therein software and/or instruction 95 executable by processing circuitry to execute functions and steps described herein.
[0051] Referring to
[0052] The multi-user application may be a gaming application. The multi-user application and the proxy of the multi-user application may be deployed by a traffic manager. The traffic manager may be operative to select a largest group of users as the first group of users. The traffic manager may be operative to select the first group of users, among multiple groups of users, based on network characteristics that enable achieving lower latencies for all groups of users. The characteristics that enable achieving lower latencies for all groups of users may be determined using data analytics of network characteristics, a number of users per location, a network status or traffic characteristics.
[0053] The Edge Cloud Platform may be further operative to detect changes in the groups of users and move the multi-user application to another PoP in another SP domain. The changes may include more or fewer users at a given location, a new group of users at a new location, or all users “dropped” at another given location, and moving the multi-user application to another PoP in another SP domain may reduce a cumulative latency. The users may be connected through any one of: a cable connection, a short range wireless connection, a long range wireless connection. The tunnel may be established using peering or transit.
[0054] Modifications will come to mind to one skilled in the art having the benefit of the teachings presented in the foregoing description and the associated drawings. Therefore, it is to be understood that modifications, such as specific forms other than those described above, are intended to be included within the scope of this disclosure. The previous description is merely illustrative and should not be considered restrictive in any way. The scope sought is given by the appended claims, rather than the preceding description, and all variations and equivalents that fall within the range of the claims are intended to be embraced therein. Although specific terms may be employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.