Extensible platform for orchestration of data with built-in scalability and clustering

20230052148 · 2023-02-16

    Abstract

    In a computer system, an orchestration platform includes extensible components that interact with external systems and technology. The platform scales by way of a plurality of application servers using a clustering architecture.

    Claims

    1. A computer system for data orchestration comprising: a platform core with an application stack and database stack; a platform extension, remote from the platform core; one or more probes installed on the platform extension and in communication with the platform core by way of one or more connectors; and a plurality of application servers including at least one of a portal server, web server, job server, and collector server.

    2. The system of claim 1 wherein the plurality of application servers includes at least two of a portal server, web server, job server, and collector server.

    3. The system of claim 1 wherein the plurality of application servers includes at least three of a portal server, web server, job server, and collector server.

    4. The system of claim 1 wherein the plurality of application servers includes a portal server, web server, job server, and collector server.

    5. The system of claim 1 wherein the at least one server includes a portal server in a reverse proxy configuration.

    6. The system of claim 1 wherein the at least one server includes a web server in a reverse proxy configuration.

    7. A method of orchestrating data in a computer system with a platform core and a platform extension comprising: installing one or more probes at a remote location; configuring the one or more probes to connect with the platform core, wherein the one or more probes are installed on the platform extension and in communication with the platform core by way of one or more connectors; and configuring a plurality of application servers including at least one of a portal server, web server, job server, and collector server.

    8. The method of claim 7 wherein the plurality of application servers includes at least two of a portal server, web server, job server, and collector server.

    9. The method of claim 7 wherein the plurality of application servers includes at least three of a portal server, web server, job server, and collector server.

    10. The method of claim 7 wherein the plurality of application servers includes a portal server, web server, job server, and collector server.

    11. The method of claim 7 wherein the at least one server includes a portal server in a reverse proxy configuration.

    12. The method of claim 7 wherein the at least one server includes a web server in a reverse proxy configuration.

    13. A computer system for data orchestration comprising: a platform extension, remote from a platform core; one or more probes installed on the platform extension and in communication with the platform core by way of one or more connectors; and a plurality of application servers in communication with the one or more probes, including at least one of a portal server, web server, job server, and collector server.

    14. The system of claim 13 wherein the plurality of application servers includes at least two of a portal server, web server, job server, and collector server.

    15. The system of claim 13 wherein the plurality of application servers includes at least three of a portal server, web server, job server, and collector server.

    16. The system of claim 13 wherein the plurality of application servers includes a portal server, web server, job server, and collector server.

    17. The system of claim 13 wherein the at least one server includes a portal server in a reverse proxy configuration.

    18. The system of claim 13 wherein the at least one server includes a web server in a reverse proxy configuration.

    19. The system of claim 17 wherein the at least one server further includes a web server in a reverse proxy configuration.

    20. The system of claim 17 wherein collector server nodes subscribe to various probe data receiver queues, and the queues are configured to serve data in round-robin fashion.

    Description

    SUMMARY OF FIGURES

    [0009] FIG. 1 shows details of an orchestration platform ecosystem.

    [0010] FIG. 2 shows interaction between the platform ecosystem and remote network premises with platform probes.

    [0011] FIG. 3 shows details of interactions between an access control layer and a probe and user devices.

    [0012] FIG. 4 shows details of interactions between the access control layer and the rest of the core platform ecosystem.

    [0013] FIG. 5 shows logical-deployment details of interactions between probes associated with a platform extension and the core platform.

    [0014] FIG. 6 shows physical-deployment details of interactions between an extension server and two platform servers.

    [0015] FIG. 7 shows details of platform-extending probes in various configurations.

    [0016] FIG. 8 shows details of application security for an orchestration platform.

    [0017] FIG. 9 shows details of data and storage security for an orchestration platform.

    [0018] FIG. 10 shows details of transport and network security for an orchestration platform.

    DETAILED DESCRIPTION

    [0019] An orchestration platform comprises an ecosystem that includes an application server stack and a database server stack. Additional platform extension architecture includes platform probes. Multiple instances of these probes can be installed at remote network locations. The platform probes may be controlled from an administration console of the platform.

    [0020] In one embodiment, probes come with specialized libraries. The focus of these libraries follows particular use cases such as robotics, data collection from industry standard databases, data collection and management of IP-enabled devices, and remote program and script execution.

    [0021] Extension libraries can also be injected into the probe after installation. These libraries enhance or upgrade existing probe capabilities to incorporate various technologies.

    [0022] An HTTPD server acts as the entry point to the platform. This HTTPD server may also act as a request forwarder and load balancer for the platform application server stack. In the description that follows, HTTPD refers to the Apache HyperText Transfer Protocol daemon, i.e., the Apache HTTP Server. Alternatively, nginx or another server with similar functionality may be used.

    [0023] FIG. 1 shows an orchestration-platform ecosystem 100. Platform ecosystem 102 includes an application server stack 104. Application server stack 104 includes job server 106 and Message Queue (MQ or IBM MQ) server 108. REST server 110 includes web servers 1 and 2 (112, 114). HTML Content Server 116 is also part of application server stack 104. An HTTPD server 118 includes load balancer 120 and request forwarder 122. In communication with application server stack 104 are databases 124, 126, 128, 130, 132, and 134. Databases 124-134 are selected according to use cases and may include Cassandra, MongoDB, MySQL, MariaDB, ElasticSearch, and Redis.

    [0024] FIG. 2 shows a detailed view 200 of the interaction between platform ecosystem 102 and an extension of the platform to a remote network. Compressed data in JSON or XML passes by way of connections 202 between platform ecosystem 102 and user devices 206. Similarly, compressed data in JSON or XML passes between platform 102 and remote network premises 208 and 210. Remote network premises 208, 210 each include a platform probe (212, 214). Details of probes 212, 214 will be described below.
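The compressed JSON exchange between the platform and its probes can be sketched as follows. This is a minimal illustration; the payload field names are hypothetical, and XML with a comparable compressor could be substituted for JSON with gzip.

```python
import gzip
import json

# Hypothetical probe payload; field names are illustrative only.
payload = {"probe": "probe-212", "readings": [{"metric": "cpu", "value": 0.42}]}

def pack(data: dict) -> bytes:
    """Serialize to JSON and compress before sending over the connection."""
    return gzip.compress(json.dumps(data).encode("utf-8"))

def unpack(blob: bytes) -> dict:
    """Decompress and parse a received message."""
    return json.loads(gzip.decompress(blob).decode("utf-8"))
```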

    [0025] FIG. 3 shows details 300 of the interaction between access control layer 302, which comprises REST facade 304 and WebSocket facade 306, and user devices 206. Communication between user devices 206 and access control layer 302 takes place by way of HTTP 308 and WebSocket 310. Communication with access control layer 302 includes getting instructions 312 and sending data 314 between access control layer 302 and probe 316.

    [0026] FIG. 4 shows detail 400 of the interaction between access control layer 302 and the rest of the core platform ecosystem. Access control layer 302 communicates with administrative service 402, application modeling service 404, application runtime service 406, and data collection service 410. Application modeling service 404 uses cache 412 for models, forms, and scripts. Cache 412 is used by application runtime service 406, which outputs to process engine 414 and MQ 416, as well as to instruction service 408. Job server 418 receives input from MQ 416 and communicates with instruction service 408, which in turn interacts with access control layer 302. Data collection service 410 communicates with MQ 416 and data store 420. A configuration management database (CMDB) 422 is accessible to administrative service 402, application modeling service 404, process engine 414, and job server 418.

    [0027] FIG. 5 shows detail 500 of communication between core platform 502 and platform extension 504. Application server stack 104 communicates with probes 506 located outside the platform core. The database stack supporting the core platform is supplied by databases 124, 126, 128, 130, 132, and 134. The number and choice of databases varies depending on particular use cases. Examples of possible choices are Cassandra, MongoDB, MySQL, MariaDB, ElasticSearch, and Redis.

    [0028] Probes 506 include extension technologies 508, 510, 512, 514, and 516. Examples of such technologies include Autodesk, ArcGIS, Node.js, R, and Spark. The probe exposes an environment of libraries and APIs to interact with any external systems via various techniques, e.g., an in-process client for an external system such as a database or a proprietary system, a facilitator for executing an R script in an adjacent R execution environment, or a facilitator for executing Node.js JavaScript in an adjacent Node.js runtime.

    [0029] FIG. 6 shows detail 600 of Unix platform server 602, Unix platform server 604, and extension server 606. Incoming communications using port 80 arrive at Unix platform server 602. In a typical embodiment, platform servers 602 and 604 run an open source Linux distribution such as Ubuntu, while extension server 606 may run a proprietary operating system, such as Microsoft Windows. The actual choice of operating system for each of servers 602, 604, and 606 may be changed according to particular use cases.

    [0030] Platform 602 includes a MySQL database 608, Redis database 610, an application stack 612, a load balancer, and an HTTPD server. Platform 604 supports the database stack and includes databases 616, 618, 620, 622, and 624. On extension platform server 606 reside probes 626, 628, 630, 632, and 634.

    [0031] One aspect of a probe of the present invention is that it executes scripts, for example, using programming languages such as JavaScript, Jython, or Scala. Other languages may also be used, depending on particular use cases. The scripts use the embedded libraries and APIs that the probe exposes. The probe also downloads additional libraries, such as JAR files, to add functionality. A JAR file is a Java archive file format, based on the ZIP file format, that is used for aggregating many files into one.
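The runtime injection of libraries described above can be sketched in Python, as a stand-in for the probe's JAR-loading mechanism (a JVM-based probe would load JAR files into its classpath instead; the function name here is hypothetical):

```python
import importlib.util

def load_extension(name: str, path: str):
    """Load an extension module from a file path at runtime. This is a
    Python analogue of the probe downloading a library (e.g., a JAR
    file) and adding its capabilities to the running process."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```

Once loaded, the module's functions are available to probe scripts like any embedded library.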

    [0032] Another aspect of the probe is that it establishes connection with the platform and polls for “instructions” to be executed in the probe on schedule. The instructions are posted in the platform, tagged for a probe. These instructions contain “Remote Execution Service” definitions, i.e., scripts that use probe libraries and APIs to connect to any external systems or technology and send collected data to the platform.
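The poll-and-execute cycle described above can be sketched as follows. The `fetch_instructions` and `execute` callables are placeholders for the platform API call that returns instructions tagged for this probe and for the Remote Execution Service script runner, respectively:

```python
import time

def run_probe_loop(fetch_instructions, execute, interval_s=30, max_cycles=None):
    """Poll the platform for instructions tagged for this probe and
    execute each one on schedule. `fetch_instructions` and `execute`
    are placeholders for platform-specific calls."""
    results = []
    cycle = 0
    while max_cycles is None or cycle < max_cycles:
        for instruction in fetch_instructions():
            results.append(execute(instruction))
        cycle += 1
        if max_cycles is None or cycle < max_cycles:
            time.sleep(interval_s)  # wait before the next poll
    return results
```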

    [0033] The probe is a standalone software component that sits in remote premises, in the vicinity of the target systems it is configured to connect to. It builds a conduit to the platform to execute specific connector instructions. Software connectors transfer control and data among system components. For example, MariaDB Connector/Node.js is a native Javascript driver used to connect applications developed on Node.js to MariaDB and MySQL databases. Connectors also provide services that are independent of the interacting components' functionalities. Examples of such services are persistence, invocation, messaging and transactions. These services are sometimes known as “facilities components” by middleware standards such as CORBA, DCOM and RMI.

    [0034] In an embodiment, the probe is a framework for connectors rather than an in-built connector. Hence the probe exposes an environment in which scripts execute to inter-operate with an external system. The probe can load additional client libraries on demand to connect to proprietary technologies. Because the probe is a standalone component on the customer premises, its environment, unlike the platform's, is not opaque and can be augmented with other software components. In an embodiment, all connectors in the probe are scripts that facilitate specific handling of data. Data collected and curated by the probe is sent to the platform, where it can be further manipulated in a historical context. An appropriate datastore is chosen depending on the nature of the data. In an embodiment, InfluxDB or Cassandra is chosen for time-series data, Redis for geospatial data, and MariaDB ColumnStore for large amounts of structured relational data. As new datastores are developed, they may be chosen using the same or similar criteria.
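The datastore selection described above amounts to routing data by kind; a minimal sketch, with the data-kind keys invented for illustration and the store names taken from the text:

```python
# Illustrative routing of collected data to a datastore by data kind.
DATASTORE_BY_KIND = {
    "time_series": "InfluxDB",          # Cassandra is an alternative here
    "geospatial": "Redis",
    "relational": "MariaDB ColumnStore",
}

def choose_datastore(kind: str, default: str = "Cassandra") -> str:
    """Pick a datastore for incoming probe data based on its nature."""
    return DATASTORE_BY_KIND.get(kind, default)
```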

    [0035] FIG. 7 shows detail 700 of platform 702, an embodiment of platform server 602 described above. In this embodiment, one or more probes are configured to carry out specific tasks. These specific tasks could be duplicated by one or more probes or distributed among the probes in various combinations. Illustrative examples of tasks performed by the probes include platform or technology extension, client data integration, fetching data from third-party providers, remote monitoring and management, or receiving data on-demand.

    [0036] First probe 704 is configured for and includes platform extension or technology integration 706 such as Autodesk, ArcGIS, Node.js, R, and Spark.

    [0037] Second probe 708 communicates with integrated client data 710. In an embodiment, this integrated client data 710 includes connected applications or databases via APIs. Alternatively, integrated client data 710 is a subscribed message queue that uses, for example, the Advanced Message Queuing Protocol (AMQP) or the Message Queuing Telemetry Transport (MQTT). In another embodiment, integrated client data 710 comprises web scraping or desktop applications with Robotic Process Automation (“RPA”). In this context, RPA generally refers to software robotics that automate business-process activities.
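A subscribed message queue of the kind described can be consumed through a callback; the sketch below isolates the message-handling logic from any particular broker, so it applies equally to AMQP and MQTT clients (the payload shape is hypothetical):

```python
import json

def make_on_message(sink: list):
    """Build a broker callback (e.g., one assignable to a paho-mqtt
    client's on_message hook) that decodes each JSON payload and
    appends it to the probe's collection sink."""
    def on_message(client, userdata, msg):
        sink.append(json.loads(msg.payload.decode("utf-8")))
    return on_message
```

With an MQTT client library such as paho-mqtt, this handler would be registered before subscribing to the relevant topic, so each delivered message flows into the probe's sink.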

    [0038] Third probe 712 is linked to a third-party data provider 714. In an embodiment, data retrieved from the third party is accessed by third probe 712 but not replicated in other parts of the platform.

    [0039] Fourth probe 716 is configured for remote monitoring or management of one or more resources 718. Exemplary monitored or managed resources 718 include a router, a firewall, a hub, mobile devices, laptop computers, Internet Protocol telephones, and websites. A system or network comprising different combinations of these resources is monitored or managed by fourth probe 716.

    [0040] Probe n 720 receives data on demand from data source 722. Data source 722 alternatively comprises one or more kinds of internet sockets. In an embodiment, the sockets comprise raw Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) sockets. Alternatively, source 722 comprises Simple Object Access Protocol (SOAP) or Representational State Transfer (REST) interfaces. Source 722 may also comprise custom HTTP servers.

    [0041] FIG. 8 shows detail 800 showing security aspects of platform 802. Remote devices 804, such as desktop computers, laptop computers, tablets, and mobile phones, communicate with platform 802 by way of secure hypertext transfer protocol (HTTPS) 806. Authenticated users 808 are given access to one or more accounts by way of role-based access control (RBAC) 810. In an embodiment, the available accounts are a first account 812 with applications (814, 816), a second account 818 with applications (820, 822), and additional accounts represented by account n 824. The accounts 812, 818, through 824 are accessible to authenticated users. These accounts in turn have access to platform services 828.
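The RBAC gate described above reduces to mapping an authenticated user's roles onto permitted accounts; a minimal sketch, with the role names invented for illustration (a real deployment would load grants from the platform's administrative service):

```python
# Hypothetical role-to-account grants.
GRANTS = {
    "operator": {"account-1"},
    "admin": {"account-1", "account-2"},
}

def can_access(roles: set, account: str) -> bool:
    """Return True if any of the user's roles grants the account."""
    return any(account in GRANTS.get(role, set()) for role in roles)
```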

    [0042] Applications 814, 816, 820, and 822 may receive dedicated service from RBAC 810. For example, in an embodiment application 814 has on-demand security. In another embodiment, application 820 has on-demand data classification.

    [0043] FIG. 9 shows detail 900 of security aspects of platform 902. In an embodiment, platform 902 is the same platform as platform 802 in FIG. 8. Remote devices 904, such as desktop computers, laptop computers, tablets, and mobile phones, communicate with platform 902 by way of secure hypertext transfer protocol (HTTPS) 906 to access application server stack 908. The application server stack in turn communicates with one or more databases 912.

    [0044] Data passing from application server stack 908 is protected by process 910. In an embodiment, process 910 uses SHA-2 hashing.
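A SHA-2 digest of outbound data can be computed with a standard library; note that SHA-2 is a hash family, so a sketch like the following provides integrity protection rather than confidentiality:

```python
import hashlib

def digest(data: bytes) -> str:
    """Compute a SHA-256 (SHA-2 family) digest of data leaving the
    application server stack, for integrity verification."""
    return hashlib.sha256(data).hexdigest()
```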

    [0045] FIG. 10 shows detail 1000 of security aspects of platform 1002. In an embodiment, platform 1002 is the same platform as platform 802 in FIG. 8 or the same platform as platform 902 in FIG. 9. Remote devices 1004, such as desktop computers, laptop computers, tablets, and mobile phones, communicate with platform 1002 by way of secure hypertext transfer protocol (HTTPS) 1006 to access a cross-platform web server 1008, such as an Apache HTTPD server. Load balancer 1010 and request forwarder 1012 handle communication between server 1008 and application server stack 1014. In this configuration, transport and network security is provided by HTTPS.

    [0046] The platform has a cluster architecture for dividing user requests among platform resources, such that a single user request can be handled and delivered by multiple server nodes.

    [0047] In an embodiment, the platform includes up to four application servers. In a further embodiment, these application servers are configured to be horizontally scaled. One of these servers acts as a portal server. This server serves static content, such as HTML content, and bundled libraries, such as JavaScript libraries. In a further embodiment, additional portal server nodes are added and reverse proxied behind the Apache HTTPD server, which fronts all requests. A reverse proxy is a configuration in which a server is positioned in front of web servers and forwards client requests to those web servers.
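A reverse-proxy fronting of portal server nodes in Apache HTTPD might look like the following configuration sketch; the host names, ports, and path are hypothetical, and mod_proxy, mod_proxy_balancer, and a load-balancing method module must be enabled:

```apache
# Distribute requests for portal content across two portal server nodes.
<Proxy "balancer://portal">
    BalancerMember "http://portal-node-1:8080"
    BalancerMember "http://portal-node-2:8080"
</Proxy>
ProxyPass        "/portal" "balancer://portal"
ProxyPassReverse "/portal" "balancer://portal"
```

An analogous block could front the web server nodes, with the HTTPD server remaining the single entry point for all client requests.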

    [0048] In an embodiment, the platform includes a web server. The web server serves dynamic data as a REST request and response cycle. In a further embodiment, additional web server nodes are added, and reverse proxied by the Apache HTTPD server.

    [0049] In an embodiment, the platform includes a job server. The job server executes background jobs as part of process models. In a further embodiment, additional job server nodes are added to the application server stack. Job server nodes are idempotent, such that only one node executes a given job at a time, by acquiring a lock on the persistent job store.
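The lock on the persistent job store can be modeled as a unique-key insert: whichever node inserts the claim row first owns the job. A sketch using SQLite as a stand-in for the persistent job store (table and function names are hypothetical):

```python
import sqlite3

def open_job_store() -> sqlite3.Connection:
    """Create an in-memory stand-in for the persistent job store."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE job_locks (job_id TEXT PRIMARY KEY, owner TEXT)")
    return conn

def try_acquire_job(conn: sqlite3.Connection, job_id: str, node_id: str) -> bool:
    """Attempt to claim a job. The primary key guarantees at most one
    node wins, keeping job execution idempotent across the cluster."""
    try:
        conn.execute(
            "INSERT INTO job_locks (job_id, owner) VALUES (?, ?)",
            (job_id, node_id),
        )
        conn.commit()
        return True
    except sqlite3.IntegrityError:
        return False  # another node already holds the lock
```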

    [0050] In an embodiment, the platform includes a collector server that collects data from probes. In a further embodiment, additional collector server nodes are added to the application server stack. Collector server nodes subscribe to various probe data receiver queues, and the queues serve data in a round-robin strategy. Round robin refers generally to rotating requests among servers in the order the requests are received. This strategy ensures that only one collector server node processes a given piece of data at a time.
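The round-robin distribution described above can be sketched as follows: each queued message is handed to exactly one collector node, rotating through the nodes in order.

```python
from itertools import cycle

def round_robin(messages, nodes):
    """Assign each queued message to exactly one collector node in
    rotation, so no two nodes process the same data."""
    assignments = {node: [] for node in nodes}
    for node, message in zip(cycle(nodes), messages):
        assignments[node].append(message)
    return assignments
```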

    [0051] The platform further provides semantics for distributed locking and distributed caching to manage clustered data processing environments as needed. Distributed locking is a technique that ensures that two processes cannot both access shared data at the same time. The locking protocol ensures that only one process is allowed to proceed once a lock is established. In distributed caching, user data is not stored in the individual web server's memory, but on other available resources. Cached data is accessible to an application's web servers or virtual machines. The cached data remains accessible to every server that runs the application, even when the application scales by adding or removing servers, or when servers are replaced due to upgrades or faults.
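The distributed-locking semantics described above can be approximated by a lease-based lock service. The sketch below is an in-process stand-in (a real deployment might back such a service with a shared store such as Redis or ZooKeeper); the lease prevents a failed holder from blocking others forever:

```python
import threading
import time

class LeaseLock:
    """In-process stand-in for a distributed lock service: a lock is
    held by one owner until released or until its lease expires."""

    def __init__(self):
        self._locks = {}          # name -> (owner, lease expiry)
        self._guard = threading.Lock()

    def acquire(self, name: str, owner: str, ttl_s: float = 30.0) -> bool:
        now = time.monotonic()
        with self._guard:
            holder = self._locks.get(name)
            if holder is None or holder[1] <= now:  # free, or lease expired
                self._locks[name] = (owner, now + ttl_s)
                return True
            return False

    def release(self, name: str, owner: str) -> None:
        with self._guard:
            if self._locks.get(name, (None, 0))[0] == owner:
                del self._locks[name]
```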