Patent classifications
H04L67/2866
Distributed Deep Learning System
A distributed deep learning system according to an embodiment includes M distributed processing nodes that perform deep learning of a neural network in a distributed manner, and N aggregation processing nodes that are connected to each of the M distributed processing nodes via a first communication line and a second communication line and that aggregate, via the first communication line, the distributed processing results obtained at the M distributed processing nodes. Accordingly, efficient and stable distributed deep learning processing can be realized even when a plurality of users share the distributed deep learning system at the same time.
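The abstract does not specify the aggregation algorithm, so the sketch below assumes the common case of averaging per-node gradients; the function name `aggregate` and the list-of-lists gradient representation are illustrative only.

```python
# Hypothetical sketch of the aggregation step an aggregation processing
# node might perform on results received over the "first communication
# line". Simple element-wise averaging is assumed; the patent does not
# define the actual reduction.

def aggregate(local_gradients):
    """Average per-node gradients (one list per distributed node)."""
    m = len(local_gradients)            # number of distributed nodes (M)
    dims = len(local_gradients[0])
    return [sum(g[d] for g in local_gradients) / m for d in range(dims)]

# Three distributed nodes report local gradients; the aggregation node
# averages them before returning the result over the second line.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(aggregate(grads))  # -> [3.0, 4.0]
```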
METHOD FOR PROVISIONING INSTANCE IN COMMUNICATION SYSTEM SUPPORTING MULTIPLE ARCHITECTURES
The disclosure relates to a 5G or 6G communication system supporting a higher data transmission rate. A method of operating a management service (MnS) in a communication system supporting a plurality of architectures, according to an embodiment of the present disclosure, includes identifying the architecture supported by a running first instance and the list of functions for that architecture, determining whether to use the first instance based on the result of the identification and a requirement for service provision, and terminating the first instance and generating a second instance when the MnS determines not to use the first instance.
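The decision flow described above can be sketched as follows. The `Instance` structure and the representation of a requirement as a target architecture plus a set of required functions are assumptions for illustration; the disclosure does not define concrete data formats.

```python
# Hedged sketch of the MnS decision: keep the running first instance if
# it satisfies the requirement, otherwise terminate it (elided here) and
# generate a second instance. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Instance:
    architecture: str
    functions: set

def provision(first, required_arch, required_funcs):
    """Return the instance to use for service provision."""
    if first.architecture == required_arch and required_funcs <= first.functions:
        return first                              # reuse the first instance
    # ...terminate the first instance, then generate a second instance
    return Instance(required_arch, set(required_funcs))

first = Instance("service-based", {"amf", "smf"})
assert provision(first, "service-based", {"amf"}) is first
assert provision(first, "reference-point", {"amf"}).architecture == "reference-point"
```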
PROXY SELECTION BY MONITORING QUALITY AND AVAILABLE CAPACITY
Empirical data of exit nodes are continuously monitored, and each exit node's overall performance and available capacity are calculated. The empirical data can include the number of concurrent requests currently being executed by each exit node and each exit node's disconnection chronology. Further, each exit node is tested with benchmark requests and ping messages, and its quality rate is calculated. Additionally, systems and methods are provided to select the exit node with the highest quality and available-capacity value from a particular pool to route the user request.
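A minimal sketch of that selection rule follows. The scoring formula (quality rate multiplied by the free-capacity fraction) and the dictionary fields are assumptions; the abstract only says the node with the highest quality and available-capacity value is chosen.

```python
# Illustrative exit-node selection: score = quality rate * fraction of
# capacity still available, pick the best node in the pool.

def available_capacity(node):
    """Fraction of the node's request capacity currently free."""
    return 1.0 - node["concurrent_requests"] / node["max_requests"]

def select_exit_node(pool):
    """Pick the exit node with the highest combined score."""
    return max(pool, key=lambda n: n["quality"] * available_capacity(n))

pool = [
    {"name": "a", "quality": 0.9, "concurrent_requests": 9, "max_requests": 10},
    {"name": "b", "quality": 0.7, "concurrent_requests": 1, "max_requests": 10},
]
# Node "b" wins: 0.7 * 0.9 = 0.63 beats node "a" at 0.9 * 0.1 = 0.09.
print(select_exit_node(pool)["name"])  # -> b
```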
CURATING PROXY SERVER POOLS
A system and method of forming proxy server pools is provided. The method comprises several steps, such as requesting a pool to execute the user's request and retrieving an initial group. The system checks the service history of the initial group, including whether any of its proxy servers are exclusive to existing pools. Exclusive proxy servers in the initial group are replaced with eligible proxy servers when needed, and new proxy server pools are formed. The system also records the service history of proxy servers and pools before and after the pools are created. The method can also involve predicting pool health against predefined thresholds and replacing proxy servers that fall below a threshold.
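The replacement step can be sketched as below. The service-history bookkeeping is reduced to two simple collections (a set of exclusive proxies and a list of eligible substitutes); all names are hypothetical.

```python
# Sketch of pool curation: swap out proxies in the initial group that
# are exclusive to existing pools, substituting eligible proxies.

def curate_pool(initial_group, exclusive, eligible):
    """Return a new pool with exclusive proxies replaced by eligible ones."""
    substitutes = iter(eligible)
    pool = []
    for proxy in initial_group:
        # Replace proxies locked to other pools; keep the rest as-is.
        pool.append(next(substitutes) if proxy in exclusive else proxy)
    return pool

print(curate_pool(["p1", "p2", "p3"], exclusive={"p2"}, eligible=["p9"]))
# -> ['p1', 'p9', 'p3']
```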
METHOD AND SYSTEM FOR REAL-TIME RESOURCE CONSUMPTION CONTROL IN A DISTRIBUTED COMPUTING ENVIRONMENT
The invention relates to a system for real-time resource consumption control in a distributed environment, and a corresponding method. The system comprises: a multitude of server instances (Sx) having access to shared resources, where each request for a shared resource issued by a client application (CA) is handled by one of the server instances (Sx); a global resource consumption counter (G), representing the overall resource consumption of the multitude of server instances (Sx) at a given time; and a multitude of proxy servers (Lx), each proxy server comprising:
- a receiver module (R) for receiving resource consumption requests issued by a client application (CA);
- a resource consumption decision module (Dm) for accepting or rejecting a resource consumption request;
- a queue (Q) for collecting resource consumption requests that have been locally accepted by the respective proxy server (Lx);
- a local resource consumption counter (L), representing the global resource consumption as seen by the respective proxy server (Lx), updated every time a resource consumption request is accepted by the decision module (Dm), with the updated value provided in turn as an input to the decision module (Dm); and
- a synchronization module (S) for synchronizing the global resource consumption counter (G) by interfacing with all other server instances (Sx).
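The proxy-side admission path can be sketched as follows. Threading, the synchronization module (S), and reconciliation against the global counter (G) are elided; the limit check against the local counter is an assumed policy, and all names are illustrative.

```python
# Minimal sketch of one proxy server (Lx): the decision module (Dm)
# accepts a request only while the local counter (L) stays under a
# global limit, queues accepted requests (Q), and feeds the updated
# counter value back into the next decision.

from collections import deque

class ProxyServer:
    def __init__(self, global_limit):
        self.limit = global_limit
        self.local_counter = 0      # global consumption as seen locally (L)
        self.queue = deque()        # locally accepted requests (Q)

    def receive(self, request, cost=1):
        """Receiver (R) hands the request to the decision module (Dm)."""
        if self.local_counter + cost > self.limit:
            return False            # reject: limit would be exceeded
        self.local_counter += cost  # update L; input to Dm on the next call
        self.queue.append(request)
        return True

proxy = ProxyServer(global_limit=2)
results = [proxy.receive(r) for r in ("r1", "r2", "r3")]
print(results)  # -> [True, True, False]
```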
NETWORK MAPPING IN CONTENT DELIVERY NETWORK
A computer-implemented method in a content delivery network (CDN) having multiple delivery servers. The CDN delivers content on behalf of at least one content provider. Distinct delivery servers are logically grouped into delivery server groups, and one or more CDN name servers are associated with some of the delivery server groups. Network map data are determined using network data collected by the CDN name servers associated with at least some of the delivery server groups; the network data with respect to a CDN name server relative to a resolver are based on an estimated popularity of that CDN name server for that resolver. Responsive to a client request including a hostname associated with a content provider, at least one CDN name server determines, using the network map data, at least one delivery server to process the client request.
Client-server protocol
A system including a client and a server in a client-server architecture. The client transmits requests to the server for content subject to a sorting criterion that is ultimately used to sort the results. The server identifies a directory item matching the sorting criterion from its item collection and generates an identifier for that directory item. The identifier is derived from the sorting criterion and is transmitted to the client, which uses the identifier to sort the matching items.
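The core idea (an identifier whose plain ordering encodes the sorting criterion, so the client can sort without knowing the criterion's semantics) can be sketched as below. The zero-padded key-prefix encoding is an assumption for illustration; the abstract does not specify how identifiers are generated.

```python
# Hypothetical identifier scheme: the server embeds the criterion value
# as a fixed-width, lexicographically sortable prefix, so sorting the
# identifiers as strings reproduces the criterion order on the client.

def make_identifier(item, criterion_value):
    """Server side: encode the sorting criterion into the identifier."""
    return f"{criterion_value:010d}-{item}"

# Server generates identifiers from each item's criterion value...
items = [("beta", 42), ("alpha", 7), ("gamma", 1000)]
identifiers = [make_identifier(name, value) for name, value in items]

# ...and the client sorts by identifier alone, recovering the order.
ordered = [ident.split("-", 1)[1] for ident in sorted(identifiers)]
print(ordered)  # -> ['alpha', 'beta', 'gamma']
```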
MICROSERVICES CLOUD-NATIVE ARCHITECTURE FOR UBIQUITOUS SIMULATION AS A SERVICE
A system and method for deploying software is disclosed. The system includes an architecture for deploying simulation software as a service. The architecture includes a client layer comprising an edge device, a resource manager, an update framework, a firewall, and a key management system. The architecture further includes a control layer communicatively coupled to the client layer, a portion of which is configured within a server. The control layer includes an application programming interface and one or more containers, at least one of which is a simulation processing container. The control layer further includes an orchestration node, a continuous integration tool, one or more processors, and a content delivery network module. The architecture further includes a data layer communicatively coupled to the one or more containers.
SYSTEM AND METHOD FOR INTEGRATING A TRANSACTIONAL MIDDLEWARE PLATFORM WITH A CENTRALIZED AUDIT FRAMEWORK
In accordance with an embodiment, described herein is a system and method for integrating a transactional middleware platform with a centralized audit framework for a SOA middleware platform. An audit provider in the centralized audit framework can be provided as a plug-in module to the transactional middleware platform, and registered as an internal audit service therein. The internal audit service can be advertised on an audit server, and can process audit requests from within the transactional middleware platform. One or more configuration files can be provided to the audit provider, for use in generating audit data for audit events occurring in one or more components in the transactional middleware platform. The audit provider itself can be configured to represent an audit aware component within the centralized audit framework, thereby utilizing a plurality of functionalities available in the centralized audit framework, including saving the audit data in a central data store.