Network Slinghop via tapestry slingshot
11630811 · 2023-04-18
Assignee
Inventors
CPC classification
H04L67/06
ELECTRICITY
H04L67/568
ELECTRICITY
G06F16/1858
PHYSICS
H04L67/2876
ELECTRICITY
G06F15/17331
PHYSICS
H04L67/1095
ELECTRICITY
H04L67/1097
ELECTRICITY
International classification
G06F16/00
PHYSICS
G06F15/173
PHYSICS
H04L67/06
ELECTRICITY
H04L67/1095
ELECTRICITY
H04L67/1097
ELECTRICITY
H04L67/2876
ELECTRICITY
H04L67/568
ELECTRICITY
Abstract
A network system for providing a long-haul network connection between endpoint devices is disclosed. The network system includes first and second endpoint devices, first and second exchange servers, a first access point server coupled between the first endpoint device and the first exchange server, a second access point server coupled between the second endpoint device and the second exchange server, a first storage node coupled between the first exchange server and the second exchange server, and a second storage node coupled between the first exchange server and the second exchange server. The first exchange server is configured to convert first packetized traffic into a carrier file and write the carrier file to the second storage node. The second exchange server is configured to read the carrier file from the second storage node and convert the carrier file into second packetized traffic.
Claims
1. A method comprising: receiving, at a first network node coupled to a first network and a second network, a first plurality of network packets from a source node via the first network; identifying, by the first network node, a destination node of the first plurality of network packets, wherein the first network provides a first path to the destination node and the second network provides a second path to the destination node; diverting, by the first network node, the first plurality of network packets from the first network to the second network, wherein diverting the first plurality of network packets comprises (1) combining the first plurality of network packets into a first carrier file, and (2) transmitting the first carrier file to the destination node via the second path; in response to transmitting the first carrier file, receiving, by the first network node, a second carrier file from the destination node via the second network; extracting, by the first network node, a second plurality of network packets from the second carrier file; and transmitting, by the first network node, the second plurality of network packets to the source node via the first network.
2. The method of claim 1, wherein the first plurality of network packets comprise a plurality of internet protocol (IP) request packets, the second plurality of network packets comprise a plurality of IP response packets responsive to the plurality of IP request packets, and the first network comprises the Internet.
3. The method of claim 2, wherein the second network comprises at least one of an Infiniband (IB) over distance link or a high-speed Ethernet link.
4. The method of claim 1, wherein the first path comprises a plurality of network hops, and the second path consists of a single network hop.
5. The method of claim 1, wherein transmitting the first carrier file to the destination node comprises writing the first carrier file to a remote storage node associated with the destination node.
6. The method of claim 5, wherein the first carrier file is written to the remote storage node via remote direct memory access.
7. The method of claim 1, wherein receiving the second carrier file comprises retrieving the second carrier file from a local storage node associated with the first network node.
8. The method of claim 7, wherein retrieving the second carrier file comprises pulling a plurality of batches of files from the local storage node at regular intervals, wherein a first batch among the plurality of batches of files includes the second carrier file.
9. The method of claim 8, wherein retrieving the second carrier file further comprises, in response to determining that the first batch includes the second carrier file, marking the second carrier file as used.
10. The method of claim 1, further comprising performing latency analysis to determine that diverting the first plurality of network packets via the second path offers lower latency than transmitting the first plurality of network packets via the first path.
11. The method of claim 1, wherein the first network node comprises a local backbone exchange server and the destination node comprises a remote backbone exchange server.
12. The method of claim 11, further comprising performing latency analysis to automatically determine that diverting the first plurality of network packets to the second network offers lower latency than transmitting the first plurality of network packets via the first network.
13. A system comprising: a non-transitory memory; and one or more hardware processors configured to read instructions from the non-transitory memory that, when executed, cause the one or more hardware processors to carry out operations comprising: receiving a first plurality of network packets from a source node via a first network; identifying a destination node of the first plurality of network packets, the destination node being separately reachable via a first path of the first network and via a second path of a second network; determining that transmitting the first plurality of network packets via the second path has lower latency than transmitting the first plurality of network packets via the first path; combining the first plurality of network packets into a first carrier file; transmitting the first carrier file to the destination node via the second path; in response to transmitting the first carrier file, receiving a second carrier file from the destination node via the second network; extracting a second plurality of network packets from the second carrier file; and transmitting the second plurality of network packets to the source node via the first network.
14. The system of claim 13, wherein the first plurality of network packets comprise a plurality of internet protocol (IP) request packets, the second plurality of network packets comprise a plurality of IP response packets responsive to the plurality of IP request packets, and the first network comprises the Internet.
15. The system of claim 13, wherein the first path comprises a plurality of network hops, and the second path consists of a single network hop.
16. The system of claim 13, wherein transmitting the first carrier file to the destination node comprises writing the first carrier file to a remote storage node associated with the destination node via remote direct memory access.
17. The system of claim 13, wherein receiving the second carrier file comprises retrieving the second carrier file from a local storage node coupled to the second network.
18. The system of claim 17, wherein retrieving the second carrier file comprises: pulling a plurality of batches of files from the local storage node at regular intervals; determining that a first batch among the plurality of batches of files includes the second carrier file; and marking the second carrier file as used.
19. A non-transitory computer readable medium storing instructions that, when executed by one or more hardware processors, cause the one or more hardware processors to perform operations comprising: receiving a plurality of internet protocol (IP) request packets from an endpoint device via the Internet; identifying a target backbone exchange server of the plurality of IP request packets, the target backbone exchange server being located in a remote region and being reachable via the Internet; diverting the plurality of IP request packets from the Internet to a second network, wherein diverting the plurality of IP request packets comprises (1) combining the plurality of IP request packets into a first carrier file, and (2) writing the first carrier file to a remote storage node associated with the target backbone exchange server via the second network; pulling a batch of files from a local storage node; determining that the batch of files includes a second carrier file received via the second network; extracting a plurality of IP response packets from the second carrier file, wherein the plurality of IP response packets are responsive to the plurality of IP request packets; marking the second carrier file as used; and transmitting the plurality of IP response packets to the endpoint device via the Internet.
20. The non-transitory computer readable medium of claim 19, wherein the second network comprises at least one of an Infiniband (IB) over distance link or a high-speed Ethernet link.
Description
BRIEF DESCRIPTION OF DRAWINGS
(1) In order to facilitate a fuller understanding of the present disclosure, reference is now made to the accompanying drawings, in which like elements are referenced with like numerals or references. These drawings should not be construed as limiting the present disclosure, but are intended to be illustrative only.
DETAILED DESCRIPTION
(31) In the following description, numerous specific details are set forth regarding the systems, methods and media of the disclosed subject matter and the environment in which such systems, methods and media may operate, etc., in order to provide a thorough understanding of the disclosed subject matter. It will be apparent to one skilled in the art, however, that the disclosed subject matter may be practiced without such specific details, and that certain features, which are well known in the art, are not described in detail in order to avoid complication of the disclosed subject matter. In addition, it will be understood that the examples provided below are exemplary, and that it is contemplated that there are other systems, methods, and media that are within the scope of the disclosed subject matter.
(33) Traffic from GVN 1-322 to GVN 1-326 flows as follows. Traffic flows from GVN 1-322 to the access point server (SRV_AP) 1-302 via 1-P322 and on to backbone exchange server (SRV_BBX) 1-502. At this point, the slingshot mechanism on SRV_BBX/Sling node (SLN) 1-502, via its Write Queue 1-WQ502 function, converts the packetized traffic into a combined carrier file and directly writes this carrier file via path 1-W606 to a folder on the parallel file system (PFS) storage node 1-606. The Read Queue 1-RQ506 function of SRV_BBX/SLN 1-506 retrieves the carrier file from the folder on PFS 1-606 and separates it back into individual packets, which are sent to SRV_AP 1-306 via path 1-P506 and then on to GVN 1-326 via 1-P326.
(34) Traffic in the other direction flows from GVN 1-326 to GVN 1-322 along the following pathway. Traffic flows to the access point server (SRV_AP) 1-306 via 1-P326 and on to backbone exchange server (SRV_BBX)/Sling node (SLN) 1-506. At this point, the slingshot mechanism on SRV_BBX/SLN 1-506, via its Write Queue 1-WQ506 function, converts the packetized traffic into a combined carrier file and directly writes this file via path 1-W602 to a folder on the parallel file system (PFS) storage node 1-602. The Read Queue 1-RQ502 function of SRV_BBX/SLN 1-502 retrieves the carrier file from PFS 1-602 and separates it back into individual packets, which are sent to SRV_AP 1-302 via path 1-P502 and then on to GVN 1-322 via 1-P322.
(35) A QoS setting can be integrated to allow for priority scheduling of traffic by writing to a specific folder versus another folder. The Read Queues can be configured to process files in one directory first, or they can be set to check one folder more frequently than others, in effect augmenting the priority of that folder's contents.
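The folder-based priority scheme above can be sketched as follows. This is a minimal illustration, not an implementation from the specification: the folder paths and polling ratios are assumptions chosen only to show how checking one folder more frequently than another augments the priority of its contents.

```python
# Hypothetical mapping of inbound PFS folders to a polling ratio:
# a folder with ratio 1 is scanned on every cycle of the Read Queue,
# while a folder with ratio 3 is scanned only every third cycle.
PRIORITY_FOLDERS = {
    "/pfs/inbound/priority": 1,  # checked every cycle (high priority)
    "/pfs/inbound/normal": 3,    # checked every 3rd cycle (lower priority)
}

def folders_to_check(cycle: int) -> list[str]:
    """Return the folders the Read Queue should scan on this cycle."""
    return [f for f, every in PRIORITY_FOLDERS.items() if cycle % every == 0]
```

On most cycles only the priority folder is scanned; on every third cycle both are, so priority traffic is drained more often without starving the normal folder.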
(36) Each one-way communication path is powered by Slingshot as defined in U.S. Provisional Patent Application No. 62/266,060 as noted above.
(37) Together, these two nodes and their corresponding communication paths working in unison form the basis of the underlying Slinghop technology.
(39) There exists a Secure Perimeter 2-182 between the IP/Internet layer 2-822 and the BB/Backbone layer 2-832. The Secure Perimeter 2-182 can perform firewall-type operations to isolate the layers above from the layers below. Another built-in protection concerns the nature of the transport. Packets travel along path 2-TR6AP and files are written via RDMA to PFS devices via path 2-TR6BP. Files cannot natively move at the IP layer, and packets cannot be individually routed via the BB layer.
(41) At the next layer, the Transport Layer 3-L03, the Packet Size 3-PBytes starts from the original size of the data 3-D4, which is equal to Data UDP 3-D3. This layer adds the bloat of Header UDP 3-H3.
(42) At the next layer, the Internet Layer 3-L02, the body payload Data IP 3-D2 is a combination of 3-D3 and 3-H3. This layer increases 3-PBytes by Header IP 3-H2.
(43) At the Link Layer 3-L01, Frame Data 3-D1 is a combination of 3-H2 and 3-D2. This layer further increases 3-PBytes by Header Frame 3-H1 and Footer Frame 3-F1.
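The per-layer bloat described above can be totaled with a short sketch. The header sizes used here are the standard minimums (UDP 8 bytes, IPv4 20 bytes, Ethernet 14-byte header plus 4-byte frame check sequence) and are illustrative; the specification does not fix particular protocols or sizes.

```python
# Illustrative byte counts for the layered packet bloat described above.
UDP_HEADER = 8        # Header UDP (3-H3)
IP_HEADER = 20        # Header IP (3-H2)
FRAME_HEADER = 14     # Header Frame (3-H1)
FRAME_FOOTER = 4      # Footer Frame / FCS (3-F1)

def frame_size(payload_bytes: int) -> int:
    """Total on-the-wire size after each layer adds its headers."""
    data_udp = payload_bytes + UDP_HEADER        # Transport Layer (3-L03)
    data_ip = data_udp + IP_HEADER               # Internet Layer (3-L02)
    return data_ip + FRAME_HEADER + FRAME_FOOTER # Link Layer (3-L01)
```

For a 1000-byte payload, the layers add 46 bytes of bloat in total, which is the overhead that combining many packets into one carrier file amortizes.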
(45) The total packet bloat in an OSI model at the Physical OSI Layer 4-L1 is denoted by Packet Size 4-PBytes.
(47) OTT and UTI are used for descriptive purposes only to indicate the layering of relationships and interactions. At the physical layer, all types of protocols may exist at the same level or various levels. GVN 5-82 is a global virtual network at layer OTT¹ 5-TOP82 which is built upon the basic plumbing of the Base Internet 5-TOP80 that powers the ISP network connectivity 5-80.
(48) The Slinghop BB 5-88 mechanism is a second-degree UTI at layer UTI² 5-UNDER88. It utilizes the UTI¹ technology of Slingshot BB 5-86. The product of its functionality and corresponding advantages can be integrated into the flow of GVN 5-82, which is at layer OTT¹ 5-TOP82, or integrated as a segment in an internet path at the Base Internet level 5-TOP80. The Slinghop is therefore a blended bridge of GVN and Slingshot.
(49) An example of a second-degree OTT, MPFWM 5-84 at layer OTT² 5-TOP84, is noted for illustrative purposes only. In live implementations, it may or may not be integrated into the traffic flow.
(52) The path from planes 6-900 to terminal exit 6-000 begins at start 6-910 and again offers the choice of riding the train or walking, with similar performance and time advantages for those who opt to take the train. This is an analogy for the decision of whether to use Slinghop between long-distance points or to have packets travel along extended internet paths.
(55) Paths 7-P502 and 7-P506 link the Slinghop mechanism to the end-hop bridgeheads of segments P0-3 and P0-11, respectively. Two paths 7-CPT302 and 7-CPT306 between those end hops indicate each one-way Slingshot as described in
(57) Between the non-Slinghop and Slinghop-enabled paths are bracket notations which group sections of segments into blocks, in order to compare the operations of Slinghop-enabled paths with regular non-Slinghop internet paths.
(58) A set of initial segment blocks at one end of each path is Internet local 7-CPT140, which is compared with Paths to SRV_BBX/SLN 7-CPT100; for this example, each has the same Δt of X ms. The set of initial segment blocks at the other end of each path is Internet local 7-CPT240, which is compared with Paths to SRV_BBX/SLN 7-CPT200. For this example, each has the same duration Δt of Y ms.
(59) The middle segments are where the comparison can be made between Slinghop-enabled and regular internet paths. The regular internet path duration for segment Internet trans-regional 7-CPT340 is a Δt of W ms. The Slinghop-enabled segment is the sum of three time durations: Process on SRV_BBX/SLN 7-CPT110 with a Δt of less than 1 ms, plus Slingshot hop 7-CPT300 with a Δt of Z ms, plus Process on SRV_BBX/SLN 7-CPT210 with a Δt of less than 1 ms. The rationale for adding the three durations is to note not just the time and efficiency gained by the Slinghop over Slingshot hop 7-CPT300, but also to take into account the resistance due to processing time for Slinghop at either end. Even though additional time is added for combining packets into files which are saved to a remote PFS, and then for pulling the files and breaking them back into packets, the key point is that the efficiency and speed gained by Slinghop more than compensate for this with a relatively shorter end-to-end duration of time in both directions.
(61) A new pathway line of segments end-to-end is located at the bottom of this figure. Instead of two separate arrows 8-CPT302 and 8-CPT306 like on the middle line, the Slinghop segment is denoted by 8-CPT300.
(62) When Slinghop-enabled paths are compared with long-distance transfers over the regular internet, the benefit from the speed of the transparent “middle” Slinghop compensates for the small amount of time added at either marginal end of the Slinghop.
(64) Path P2030 represents many hops along the internet over a long distance (this figure is not drawn to scale). Paths P2038 and P2830 together represent the bi-directional slingshot segments which together constitute a Slinghop over the long distance.
(65) Over long distance, network Segment 9-2838 is significantly better performing and faster than internet longhaul network segment 9-2030.
(67) The tunnel TUN 10-222 is over-the-top (OTT) of the internet, and Slinghop utilizes reciprocal slingshot mechanisms over the fiber backbone.
(68) Algorithmic analysis can be applied to choose which transport type over which path is optimal. This segment and path analysis can also be utilized to tweak various settings to realize the best performance.
(70) On the straight path along the figure, IP packets transmit naturally along segments over hops from P0-1 through P0-14 and travel in both directions.
(71) A set of sequential steps is hereby described for travel in one direction from 11-A through 11-G:
(72) Step 11-A: The start of the path. Traffic travels through three segments P0-1, P0-2, and P0-3.
(73) Step 11-B: Traffic enters the Slinghop at this point 11-B; from the client perspective, the traffic will flow through segment 11-CPT306 to the hop at 11-F.
(74) Step 11-C: Transparent to the client, the traffic will be diverted from the open internet pathway at 11-B and sent to 11-C, where IP packets are captured on entry and packaged within the file to be processed by the Write Queue 11-WQ502 of backbone exchange server 11-502. The traffic is enclosed within a transport file (see
(75) Step 11-D: The file saved by the Write Queue (this can also be a direct write via RDMA) on SRV_BBX/SLN 11-502 is read from PFS 11-606 by SRV_BBX/SLN 11-506, along with other complete files during a tick of time (see Granularity of a Tick at U.S. Provisional Patent Application No. 62/296,257), in a batch according to
(76) Step 11-E: The file is retrieved by Read Queue 11-RQ506 manager on backbone exchange server (SRV_BBX) 11-506 and it is divided into individual packets. The packets re-enter the internet pathway at 11-F from 11-E via path 11-P506. They are therefore re-packetized and sent on their way.
(77) Step 11-F: Each packet re-enters and then travels across the standard internet pathway from the IP address of 11-F to 11-G via segments P0-11, P0-12, P0-13, to P0-14.
(78) Step 11-G: Each packet travels to its destination past 11-G in Internet local 11-CPT240 via the path from SRV_BBX 11-506 via path 11-CPT200.
(79) The following sequential steps are hereby described for travel in the other direction from 11-H to 11-N.
(80) Step 11-H: Return packets or packets originating from the internet in close proximity to 11-H travel via path P0-14 to P0-13 to P0-12 to P0-11 and enter the Slinghop at 11-I.
(81) Step 11-I: Packets enter Slinghop via IP address at 11-I end of segment P0-11 and are passed to SRV_BBX/SLN 11-506 where the packets are combined into the payload of a “carrier” file consisting of various packets.
(82) Step 11-J: The combined “carrier” file of various packets is saved to PFS 11-602 by Write Queue 11-WQ506 via path 11-W602 from SRV_BBX/SLN 11-506.
(83) Step 11-K: The Read Queue 11-RQ502 regularly pulls batches of files from PFS 11-602. The operation on the SRV_BBX 11-502 at step 11-K is very similar to 11-D.
(84) Step 11-L: The “carrier” file is retrieved by the Read Queue 11-RQ502 manager on SRV_BBX/SLN 11-502 and is divided into individual packets. The packets re-enter the internet pathway at 11-M from 11-L via path 11-P502. They are therefore re-packetized and sent on their way.
(85) Step 11-M: Each packet re-enters and then travels across the standard internet pathway from the IP address of 11-M to 11-N via segments P0-3, P0-2, to P0-1.
(86) Step 11-N: Each packet travels to its destination past 11-N in Internet local 11-CPT140 via the path from SRV_BBX/SLN 11-502 via path 11-CPT100.
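The Write Queue and Read Queue steps above can be sketched as a pair of inverse functions. This is a minimal illustration under stated assumptions: the length-prefix framing, function names, and 4-byte size field are not from the specification, which does not define a carrier-file format.

```python
import struct

def combine_packets(packets: list[bytes]) -> bytes:
    """Write Queue side: length-prefix each captured packet and
    concatenate them into a single carrier-file payload."""
    return b"".join(struct.pack("!I", len(p)) + p for p in packets)

def extract_packets(carrier: bytes) -> list[bytes]:
    """Read Queue side: walk the carrier payload and recover the
    individual packets so they can be re-packetized and sent on."""
    packets, offset = [], 0
    while offset < len(carrier):
        (length,) = struct.unpack_from("!I", carrier, offset)
        offset += 4
        packets.append(carrier[offset:offset + length])
        offset += length
    return packets
```

Round-tripping a list of packets through `combine_packets` and `extract_packets` returns the original packets unchanged, mirroring steps 11-C through 11-E (and 11-I through 11-L in the reverse direction).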
(87) The noted time comparison and other elements are based on
(88) From the packet's perspective, 11-CPT306 and 11-CPT302 are each just another segment between two hops, although they replace many middle hops and benefit from the efficiency of slingshot technology.
(90) As an illustration, 11-CPT306 is the Slinghop for a request from an origin before segment P0-1 to a target beyond P0-14 with a response of 11-CPT302 sent back via Slinghop.
(91) The flow corresponds to the receipt of a series of packets via paths P1-1 through P1-8 by a SRV_BBX/SLN and their subsequent buffering into a combined file by the Write Queue. The file, consisting of packets REQ1 through REQ8, is sent in one direction via a save to PFS 11-606 through Slinghop REQ Batch 11-312.
(93) From the exit point for the REQ Batch 12-312 the individualized packets follow paths P2-1 through P2-8 to their corresponding destination addresses on the Internet 12-322. Assuming that the target hosts are all live and respond, their return packets locate and re-enter the Slinghop transparent segment via paths P3-1 through P3-8 to be recombined into a file which is then transmitted via file save across path RESP Batch 12-332. Upon reaching the other end of the transparent Slinghop, the file is read and packets individually go back to their originating devices via paths P4-1 through P4-8.
(95) In some embodiments, a packet is combined, encapsulated, or otherwise joined into a carrier file which contains the slingshot payload; the slingshot file is then transported, and on the other side the contents of the payload are separated back into their individual parts, and each packet is then sent on.
(96) Elements 13-TV, 13-TX, 13-TW, 13-TY, and 13-TZ in this figure directly correlate with elements TV, TX, TW, TY, and TZ in Equation 13-1, Equation 13-2, and Equation 13-3 below.
(97) Total path latency for the internet path can be deduced from the sum of the time duration Δt of Internet local 13-CPT140 plus Internet trans-regional 13-CPT340 plus Internet local 13-CPT240.
Total Internet Path Time Δt(P004)=Δt(TX ms)+Δt(TW ms)+Δt(TY ms) Equation 13-1:
(98) Total path latency for the Slinghop-enhanced hybrid Internet+Slinghop path can be calculated as the sum of Paths to SRV_BBX 13-CPT100 plus Process on SRV_BBX 13-CPT110 plus Slingshot hop 13-CPT300 plus Process on SRV_BBX 13-CPT210 plus SRV_BBX to Internet 13-CPT200.
Total Internet-Slinghop Hybrid Path Time Δt(P504)=Δt(TX ms)+2*Δt(TV ms)+Δt(TZ ms)+Δt(TY ms) Equation 13-2
(99) The Δt 13-TX ms is equivalent for zones Internet local 13-CPT140 and Paths to SRV_BBX 13-CPT100.
(100) The Δt 13-TY ms is equivalent for zones Internet local 13-CPT240 and Paths to SRV_BBX 13-CPT200.
(101) The Process steps 13-CPT110 and 13-CPT210 both indicate the short-term RAM buffering for clumps.
(102) Slingshot Hop 13-CPT300 is a transparent hop from the client perspective.
(104) This example embodiment demonstrates the functionality of the plumbing which powers Slinghop as illustrated in
(105) The improvement of Slinghop at Slingshot hop 13-CPT300 therefore needs to include the times of the processing steps Process on SRV_BBX 13-CPT110 and Process on SRV_BBX 13-CPT210, hence 13-TV*2.
(107) Therefore, for Slinghop to be effective, it must be more efficient and have lower latency than connectivity through the open internet:
Δt{TV+TZ+TV}<Δt{TW ms}. Equation 14-1
(108) In this equation, TV is the processing time to add items to and prepare the slingshot payload; TZ is the time to send the payload by slingshot; and the second TV is the processing time to separate the payload back into its individual elements. The sum of these three parts is the total time for slingshot. TW represents the time that it would take for a packet to transit the internet via an IP path. TV at either end adds time; TZ is a speedy transport.
(109) The point is that the marginal time added by TV at either end should be compensated for by the faster time of TZ: the net total of TV+TZ+TV should still be smaller than TW.
(110) Therefore, Δt{TV+TZ+TV} must be less than Δt{TW ms}. The rationale is that the added duration of time added by the requisite processing Δt(TV) at both ends of a Slinghop must be offset by the speed advantage offered by the Slingshot section of the segment Δt(TZ) between Slinghop endpoint bridgeheads.
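The condition in Equation 14-1 reduces to a single comparison, sketched below. The function name and the example millisecond values are illustrative assumptions; only the inequality itself comes from the text.

```python
def slinghop_is_faster(tv_ms: float, tz_ms: float, tw_ms: float) -> bool:
    """Equation 14-1: Slinghop pays off when the end-point processing (TV,
    incurred at both ends) plus the slingshot transit (TZ) is still less
    than the trans-regional internet time (TW)."""
    return tv_ms + tz_ms + tv_ms < tw_ms
```

For instance, with under 1 ms of processing at each end and a 120 ms slingshot hop, Slinghop beats a 200 ms trans-regional internet path; it would not beat an 110 ms one.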
(111) There are other advantages presented by Slinghop, such as wide parallel bandwidth, which is more efficient than packetized IP traffic over distance, but these are not described by this figure.
(113) In reality, due to political/administrative boundaries, city limits, zoning, geographic features such as bodies of water, various elevation changes, and other reasons, the actual routes of pipes are rarely straight or direct. However, the additional distance caused by path deviations from the potentially most direct route in most cases does not add enough distance to have a significantly adverse effect on latency.
(114) Segments can be described as city/location pairs. For Slinghop purposes, the origin end-point of the Slinghop is represented by the IP address of a server or gateway device there, with the segment transiting over the Slinghop segment to the IP address of the server or gateway device at the target end-point city/location.
(115) Transit from one location to the other is as simple as going from origin IP address to target IP address, and for the return path the IP addresses are in reciprocal order. This single segment replaces many other IP segments over the internet and is optimized by Slinghop.
(116) PFS naming can be based on the last octet or the last two octets of an IP address. PFS naming can also include city code, region, IP address, noted world nodes, and other factors. IP address pairs denote the bridgeheads at either end of a segment.
(117) For example, traffic from 188.xxx.xxx.128 to 188.xxx.xxx.100 is sent by writing to PFS 15-600, and traffic back from 188.xxx.xxx.100 to 188.xxx.xxx.128 is sent by writing to PFS 15-628.
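The octet-based PFS naming above can be sketched as a small mapping function. The "6xx" prefix follows the PFS device numbering used in the figures (e.g. PFS 15-600 and 15-628), but the function name and the exact derivation rule are assumptions for illustration; the specification leaves the naming scheme open.

```python
def pfs_target(dest_ip: str) -> str:
    """Map a destination bridgehead IP to the PFS node to write to,
    assuming PFS names are derived from the last two digits of the
    destination's final octet (e.g. .100 -> PFS-600, .128 -> PFS-628)."""
    last_octet = int(dest_ip.rsplit(".", 1)[-1])
    return f"PFS-6{last_octet % 100:02d}"
```

Under this assumed rule, sending toward the .100 bridgehead writes to PFS-600, and the reciprocal direction toward .128 writes to PFS-628, matching the example above.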
(118) Like airline routes for roundtrips, the combination of two one-way segments constitutes a Slinghop: a transparent roundtrip integration nested into an existing IP pathway.
(122) The first boundary is GVN egress ingress point (EIP) 17-322 between the internet 17-328 and the GVN 17-028 or EPD 17-378 between a LAN 17-376 and the GVN 17-028. The next boundary is the secure perimeter 17-182. This layered security approach protects the core infrastructure upon which the GVN is predicated.
(123) The secure perimeter 17-182 boundary between the GVN and the GVN backbone protects the high-speed global network. The section of the GVN above the perimeter 17-822 has traffic flowing over the top (OTT) of the open internet via secure GVN tunnels. Under the secure perimeter 17-182, GVN connections utilize various protocols over dark fiber or other connectivity which is not directly reachable from the internet.
(124) A sling node (SLN) 17-538 can be a super computer/cluster node (SCN), a high-performance computer/cluster (HPC), or equivalent, which operates inside of (below) the secure perimeter 17-832 and can operate a true internal network with advanced features such as remote direct memory access (RDMA) to a parallel file system (PFS) 17-602 device.
(125) A global ring connects PFS storage devices via 14-P506 and 14-P504, along with other nodes not noted herein.
(126) SRV_AP 17-302 feeds and/or retrieves information to/from SRV_INFO 17-338.
(127) Secure Perimeter 17-182 is between IP traffic over Ethernet above SP 17-182 and non-IP internal Slingshot/Slinghop traffic below it.
(131) These bridgeheads are bolded to highlight their place and focus. IP addresses are noted for illustrative purposes: X.X.X.02 on SRV_BBX/SLN 20-502 and X.X.X.10 on SRV_BBX/SLN 20-510 at either end. Slinghop is therefore from Region 2 to Region 10 by IP order of X.X.X.02 to X.X.X.10, and back from Region 10 to Region 2 via IP order of X.X.X.10 to X.X.X.02.
(132) Slingshot matches the target region and maps it to a PFS drive 20-610 or 20-602 which will be accessed by the read queue on an SLN on the remote side.
(135) SRV_BBX 21-280 and SRV_BBX 21-282 are backbone exchange servers and provide the global connectivity. A SRV_BBX may be placed as one or more load-balanced servers in a region serving as global links. Access point servers (SRV_AP) 21-302, 21-304 and 21-306 in 21-RGN-A connect to SRV_BBX 21-280. The central control server (SRV_CNTRL) 21-200 serves all the devices within that region, and there may be one or more master SRV_CNTRL servers. End-point devices (EPD) 21-100 through 21-110 will connect with one or more SRV_AP servers through one or more concurrent tunnels.
(136) This figure further demonstrates multiple egress ingress points (EIP) 21-EIP420, 21-EIP400, 21-EIP430, and 21-EIP410 as added spokes to the hub and spoke model with paths to and from the open internet. This topology can offer EPD connections to an EIP in remote regions routed through the GVN. In the alternative this topology also supports EPD connections to an EIP in the same region, to an EPD in the same region, or to an EPD in a remote region. These connections are securely optimized through the GVN. This also facilitates the reaching of an EPD from the open internet with traffic entering the EIP nearest to the source and being carried via the GVN realizing the benefits of the GVN's optimization.
(140) Routing is based on writing to a destination PFS device via RDMA, the subsequent reading of the file by an SRV_BBX/SLN in the target region, and the subsequent use of the data there.
(141) The global structure of PFS connectivity is octagonal for illustrative purposes; in reality, it can take any shape. The Slinghop mechanism depends on the connection of various devices at a physical layer.
(146) This example embodiment illustrates the importance of a granularity of a tick in a practical application. It is based on FIG. 25 from U.S. Provisional Patent Application No. 62/266,060 and International Patent Application No. PCT/IB16/01867 and describes the pulling of a batch of files from a queue where new files will be continually appearing.
(149) Interval A 24-5100 and Batch Pull A 24-5200 occur at the same time. USE 24-5202 happens at the end of this interval. Interval B 24-5110 and Batch Pull B 24-5210 occur at the same time. USE 24-5212 happens at the end of this interval. Interval C 24-5120 and Batch Pull C 24-5220 occur at the same time. USE 24-5222 happens at the end of this interval.
(150) There is a problem with having one interval begin right after another has ended: there may not be enough time for a file to be marked as read, even though it has been read by a previous batch process. This problem is described in FIG. 24 from U.S. Provisional Patent Application No. 62/266,060 and International Patent Application No. PCT/IB16/01867. This is a dangerous flaw because, for example, a trade request being executed twice could cause significant financial damage and/or other unintended consequences.
(151)
(152) There is a further Delay B 24-5112 between Interval B 24-5110 and Interval C 24-5120 (and between Batch Pull B 24-5210 and Batch Pull C 24-5220).
(153) The key point is that this delay allows the current batch to evaluate and process all of the files it has pulled and, where it has utilized complete files, to mark those files as read.
(154) This delay added to the mechanism is fully dynamic and can be lengthened if more processing time is required for a batch or shortened if a batch processing is completed. The interval times can also be dynamically adjusted based on a number of factors.
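A minimal sketch of such a dynamic delay, assuming a simple lengthen/shorten policy; the function name, bounds, and scaling factor are illustrative assumptions, not values from the disclosure:

```python
def adjust_delay(current_delay, batch_finished, min_delay=0.01, max_delay=1.0, factor=2.0):
    """Shorten the inter-batch delay (R) when the previous batch completed
    within its window; lengthen it when more processing time was required.
    All times are in seconds; bounds keep the delay within a sane range."""
    if batch_finished:
        return max(min_delay, current_delay / factor)
    return min(max_delay, current_delay * factor)
```

The same pattern could be applied to the interval times themselves, driven by whatever factors the tick manager monitors.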
(155)
(156) This figure demonstrates a fixed period for each batch, with the synch stage being variable to help keep the timing accurate. The mechanism also caps the number of files that can be processed by a batch so that the expected processing time and post-processing time are not exceeded.
Δt=P+Q+R Equation 25-1
(157) Δt=Time from the start of the tick to the end of the tick
(158) P=Time for batch processing of items for this tick in the cycle.
(159) Q=Time for post batch processing.
(160) R=Time for delay to ensure no overlap between batch items and/or to set offset between ticks.
(161) The granularity of a tick has been discussed in U.S. Provisional Patent Application No. 62/296,257. The granularity of a tick is based on a number of factors. The start time of a tick is based either on completion of the last cycle called by a tick or on a scheduled fixed time interval.
(162) There are two types of cycles. A fixed time cycle is based on an estimate of the time per item to process (P) and then handle post processing (Q); a limited quantity of items is set, and this limit ensures that each cycle does not run beyond its allotted time. A variable time cycle allows all items in that batch to be processed. The delay time (R) ensures that the next cycle pulls a fresh batch and that all items in the last batch have been processed (P) and post-processed (Q). For example, for files in a queue, a list can be processed at the P stage and then the files can be marked, moved, deleted, or otherwise touched so that they do not get pulled in the next batch. The time delay (R) ensures that there is enough delay between the last item processed and post-processed in one batch and the first item to be processed in the next batch.
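For the fixed time cycle, the per-cycle item cap implied by Equation 25-1 (Δt=P+Q+R) can be sketched as follows; the function name and parameters are assumptions for illustration:

```python
def max_items_per_cycle(tick_budget, p_per_item, q_per_item, r_delay):
    """Cap the batch size so that processing (P), post processing (Q), and the
    safety delay (R) together stay inside the tick's allotted time Δt."""
    per_item_cost = p_per_item + q_per_item
    available = tick_budget - r_delay
    if available <= 0 or per_item_cost <= 0:
        return 0  # no time remains after the R delay; pull nothing this cycle
    return int(available // per_item_cost)
```

With a 100 ms tick, 2 ms processing and 1 ms post-processing per item, and a 10 ms delay, the cap works out to 30 items per cycle.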
(163) A component of this tick manager maintains historical records of runtime data for each cycle. These logged records can be used to analyze periods of under-usage and peak usage. They can also detect delays due to unprocessed items being cut off because of limited quantities per cycle. If this occurs too often, it is an indicator that more processing power is required, that the software is inefficient, or that there are other issues such as database slowdowns or the need for software refactoring.
(164) The tick can have granularity as fine as required by the application and can be set to run at a fixed interval or at various intervals based on cycle length. The granularity of a tick allows for consistent network beacon flashes, slingshot-related functionality, and other time-reliant functionality. More features and details about the granularity of a tick are demonstrated in U.S. Provisional Patent Application No. 62/296,257.
(165) This example demonstrates two batch file pulls on a backbone exchange server (SRV_BBX) with Read Queue+Process 25-RQP00 and Read Queue+Process 25-RQP10. They both pull files from the same parallel file system storage media, PFS Incoming Files 25-606. Files pulled in 25-RQP00 via path 25-RQP606 are processed, and then in post processing Post P 25-Q00 the files are marked via path 25-Q606.
(166) This is a critically important point because the next batch file pull Read Queue+Process 25-RQP10 from PFS Incoming Files 25-606 via path 25-RQP616 should only include unmarked files, that is, files not pulled and used by previous batches. Then at Post P 25-Q10 the files pulled and used are marked via path 25-Q616 so that they will not be inadvertently pulled by a subsequent batch file pull.
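One way to realize this pull-then-mark discipline is sketched below, with renaming to a ".done" suffix standing in for whatever marking scheme the system actually uses; all names here are illustrative assumptions:

```python
import os
import tempfile

def pull_batch(incoming_dir):
    """Read Queue + Process: pull only files not yet marked as read."""
    batch = []
    for name in sorted(os.listdir(incoming_dir)):
        if name.endswith(".done"):
            continue  # consumed by a previous batch; skip it
        with open(os.path.join(incoming_dir, name), "rb") as f:
            batch.append((name, f.read()))
    return batch

def mark_read(incoming_dir, names):
    """Post P: mark used files so a subsequent batch pull excludes them."""
    for name in names:
        path = os.path.join(incoming_dir, name)
        os.rename(path, path + ".done")
```

The R delay between batches gives mark_read time to complete before the next pull_batch runs, which is exactly the double-execution hazard the delay guards against.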
(167)
(168) There are three cycles noted herein: 26-T00, 26-T10 and 26-T20. Each has a different quantity: 58 in 26-QNTY00, 22 in 26-QNTY10, and 8 in 26-QNTY20.
(169) These varying quantities influence the time duration of each interval.
(170)
(171) Use Files (Batch A) 27-5206 sends files File 27-00 and File 27-02 in parallel and ends at Batch END 27-5208. Use Files (Batch B) 27-5216 sends files File 27-04 and File 27-06 and File 27-08 in parallel and ends at Batch END 27-5218. Use Files (Batch C) 27-5226 sends files File 27-10 and File 27-12 and File 27-14 in parallel and ends at Batch END 27-5228.
(172) The key point is that the pull of files is in batches which are available in parallel and therefore the use phase is in parallel as well. Files not ready in Batch Pull A 27-5200 such as File 27-04 and File 27-06 will not be pulled and used until they are available. File 27-04 and 27-06 are pulled with File 27-08 in Batch Pull B 27-5210.
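The batch semantics described, deferring files that are not yet available and using the ready ones in parallel, can be sketched as follows; the function names and the readiness predicate are assumptions for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def split_ready(files, is_ready):
    """Files not yet complete are deferred and picked up by a later batch pull."""
    ready = [f for f in files if is_ready(f)]
    deferred = [f for f in files if not is_ready(f)]
    return ready, deferred

def use_batch(files, use_fn):
    """Use all files in the batch in parallel; the batch ends (Batch END)
    only after every file has been handled."""
    if not files:
        return []
    with ThreadPoolExecutor(max_workers=len(files)) as pool:
        return list(pool.map(use_fn, files))
```

A file deferred by one split_ready call simply appears in the next batch's input once it is complete, mirroring File 27-04 and 27-06 moving from Batch Pull A to Batch Pull B.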
(173)
(174) This figure presents the algorithmic decision logic for determining whether it is faster and/or more efficient to send packets via an internet path or through a Slingshot-enhanced internet path with a Slinghop transparent segment between a pair (or more than one pair) of IP addresses. The logic does not consider reliability, loss, related congestion, or other issues beyond latency which may influence the performance of the internet path.
(175) There are inefficiencies of Δt at each bridgehead of the Slinghop transparent hop segment, weighed against the gain achieved via the utilization of Slinghop. A minimum distance is needed to compensate for the requisite processing time of the Slinghop mechanism. Measure Latency of IP 28-D10 uses Equation 13-1 to measure Total Internet Path Time Δt. Measure Latency of SL 28-D20 uses Equation 13-2 to measure Total Internet-Slinghop Hybrid Path Time Δt.
(176) Latency comparison of IP vs SL 28-D30 uses Equation 14-1: Δt{TV+TZ+TV} < Δt{TW ms}. The decision "Latency comparison of IP vs SL" 28-D30 compares the latency of the internet path 28-CPT340, measured in milliseconds as 28-TW ms, against the slingshot path 28-CPT302/28-CPT306 time, measured as the sum 28-TV + 28-TZ + 28-TV, to see which is higher.
(177) If true, that is, if the Slinghop-enhanced path (Δt{TV+TZ+TV}) has a lower latency than the internet path ("SL is Lower" 28-DP50), then "SL is most efficient transport, use" 28-D50 is the optimal path.
(178) If false, and the internet path (Δt{TW ms}) is faster than Slinghop ("IP is Lower" 28-DP40), then "IP is most efficient transport, use" 28-D40 is the optimal path.
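The decision in Equation 14-1 reduces to a single comparison; a minimal sketch, assuming latencies are supplied in milliseconds and using illustrative names:

```python
def choose_path(tv_ms, tz_ms, tw_ms):
    """Compare the Slinghop hybrid path time (TV + TZ + TV: the two bridgehead
    overheads plus the slingshot segment) against the plain internet path TW,
    per Equation 14-1. Returns which transport to use."""
    slinghop_ms = tv_ms + tz_ms + tv_ms
    return "SL" if slinghop_ms < tw_ms else "IP"
```

As the text notes, this considers latency only; a production selector would also weigh loss, congestion, and reliability.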
(179)
(180) At SRV_BBX 500, there may also be one or more Slinghop router devices between it and the internet path. In this topology, SRV_BBX is infrastructure between the internet and the backbone, utilizing two reciprocal mechanisms. The Slingshot router can act as a path enabler for Slinghop. For example, in the internet data center (IDC), there can be a series of Slinghop routers as a front face. This mechanism can be configured as Slinghop client devices. Slingshot can be administered by a service provider and Slinghop routers by clients.
(181) Some key elements have been highlighted. More elements may be present which have not been noted. Some of the elements noted are not directly influenced by, dependent on, or otherwise integrated with Slinghop but have been noted to show where in the stack those items may be placed. The hierarchy and placement of items may indicate levels, with elements near the top as high-level items and items at the bottom as lower-level items. For example, the network interface cards (NIC) S108, S308, S208, and S508 are all at a very low system level. The Operating Systems (O/S) S110, S310, S210, and S510 are above the NIC level, and within the O/S there are driver files which interface with and operate the NIC. Some elements noted (and others not noted) may be at the appropriate level relative to other elements, or they may need to be lower or higher, depending on use, context, and other factors.
(182) Other elements of GVN, Slingshot, Slinghop or other related technologies also include fabric manager, logging, AI, security, FW, secure boot manager (SBM), back channel mechanism (BCM), geographic destination (Geo-D), Resources Manager, GVN Modules, APPs, advanced smart routing (ASR), GVN Manager, Accounting, and others.
(183) Slingshot manager manages hop listener, file buffer module (receive), file buffer manager (send), hop router, file sender, and other items.