H04L67/1014

Scalable proxy clusters

The invention enables high availability, high scale, high security, and disaster recovery for API computing, including capture of data traffic passing through proxies, routing of communications between clients and servers, and load balancing and/or forwarding functions. The invention inter alia provides (i) a scalable cluster of proxies configured to route communications between clients and servers without any single point of failure, (ii) proxy nodes configured for implementing the scalable cluster, (iii) efficient methods of configuring the proxy cluster, (iv) natural resiliency of clusters and/or of proxy nodes within a cluster, (v) methods for scaling clusters, (vi) configurability of clusters to span multiple servers, multiple racks, and multiple datacenters, thereby ensuring high availability and disaster recovery, and (vii) switching between proxies or between servers without loss of session.
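The cluster design described above can be sketched minimally: every proxy node holds a full copy of the routing configuration, so any node can serve any client and no node is a single point of failure; nodes can be added or removed without reconfiguring clients. All class and method names here are illustrative assumptions, not taken from the patent.

```python
import hashlib

class ProxyNode:
    """One proxy in the cluster; each node holds the full routing table,
    so the loss of any one node does not lose routing state."""
    def __init__(self, name):
        self.name = name
        self.routing_table = {}  # api_path -> backend server

    def apply_config(self, table):
        self.routing_table = dict(table)

    def route(self, api_path):
        return self.routing_table.get(api_path)

class ProxyCluster:
    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        # a joining node synchronizes configuration from any existing peer
        if self.nodes:
            node.apply_config(self.nodes[0].routing_table)
        self.nodes.append(node)

    def configure(self, table):
        # a configuration change propagates to every node in the cluster
        for n in self.nodes:
            n.apply_config(table)

    def remove_node(self, node):
        # remaining nodes still hold the full config and keep serving
        self.nodes.remove(node)

    def pick_node(self, client_id):
        # deterministic hash spreads clients across the surviving nodes
        i = int(hashlib.sha256(client_id.encode()).hexdigest(), 16) % len(self.nodes)
        return self.nodes[i]
```

A brief usage example: after configuring through the cluster, a newly joined node answers routes it never saw configured directly, and removing a node leaves routing intact.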

Technologies for providing shared memory for accelerator sleds

Technologies for providing shared memory for accelerator sleds include an accelerator sled configured to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request identifies the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
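The translate-and-route step described above reduces to a map lookup followed by a dispatch to the owning memory device. A minimal sketch, assuming the sled's memory devices can be modeled as simple key/value stores (all names are hypothetical):

```python
class SledMemoryController:
    """Translates an accelerator's logical address to a physical address
    and routes the access to the memory device that owns that address."""
    def __init__(self, address_map, devices):
        self.address_map = address_map  # logical address -> physical address
        self.devices = devices          # physical address -> memory device (modeled as a dict)

    def access(self, logical_addr, data=None):
        phys = self.address_map[logical_addr]  # map lookup: logical -> physical
        device = self.devices[phys]            # route to the owning device
        if data is None:
            return device.get(phys)            # read path
        device[phys] = data                    # write path
```

The accelerator only ever sees logical addresses; the controller alone knows which device backs each region, which is what lets regions be shared or remapped transparently.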

METHOD FOR RESPONDING TO RESOURCE REQUEST, REDIRECT SERVER, AND DECISION DELIVERY SERVER
20230038228 · 2023-02-09

Embodiments of the present disclosure disclose a method for responding to a resource request, a redirect server, and a decision delivery server. The redirect server classifies a first resource request from a client based on a first screening rule (201), and responds to the first resource request determined to be of an unprocessable type, to enable the client to send a second resource request to the decision delivery server (202). The decision delivery server determines, based on a second screening rule, whether the second resource request from the client is of a serviceable type (203), and performs proxy acceleration for the second resource request if the second resource request is determined to be of the serviceable type (204).
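The two-stage screening above can be sketched as two small functions, each parameterized by its screening rule; the function names and the dictionary-based request/response shapes are illustrative assumptions, not the disclosure's actual interfaces.

```python
def redirect_server(request, first_rule):
    """Stage 1 (steps 201-202): classify the first resource request;
    an unprocessable request is answered with a redirect that sends the
    client on to the decision delivery server."""
    if first_rule(request):
        return {"action": "serve", "by": "redirect-server"}
    return {"action": "redirect", "to": "decision-delivery-server"}

def decision_delivery_server(request, second_rule):
    """Stage 2 (steps 203-204): decide whether the second resource request
    is of a serviceable type and, if so, apply proxy acceleration."""
    if second_rule(request):
        return {"action": "proxy-accelerate"}
    return {"action": "pass-through"}
```

Splitting the rules across two servers lets the cheap first rule filter traffic early, while the decision delivery server applies the finer second rule only to requests the redirect server could not handle.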

System and method of dynamic and scalable IoT framework

A method and a system for providing one or more services to one or more user devices [202] in an IoT network in a scalable M2M (Machine to Machine) framework. The method comprises receiving a connection request from the one or more user devices [202] at a load balancer [204] of the IoT network, the connection request comprising at least a username that comprises a cluster identifier. The load balancer [204] determines a cluster identifier based on the connection request and identifies at least one target cluster from the one or more clusters [206], said target cluster being associated with the determined cluster identifier. The load balancer [204] routes the connection request to the at least one target cluster to provide the one or more services to the one or more user devices [202].
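The routing step is essentially: parse a cluster identifier out of the username, then dispatch to the matching cluster. A minimal sketch, assuming a hypothetical `"<device>@<cluster-id>"` username convention (the patent does not specify the encoding):

```python
def parse_cluster_id(username):
    # assumed convention: "<device>@<cluster-id>"; the embedded
    # cluster identifier is whatever follows the last "@"
    return username.rsplit("@", 1)[1]

class LoadBalancer:
    """Routes each connection request to the target cluster named
    inside the request's username."""
    def __init__(self, clusters):
        self.clusters = clusters  # cluster id -> cluster handler

    def route(self, connection_request):
        cid = parse_cluster_id(connection_request["username"])
        return self.clusters[cid](connection_request)
```

Because the cluster identifier travels inside the credential the device already presents, the scheme needs no extra routing metadata in the connection protocol.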

DATA UPDATE METHOD, APPARATUS, AND EMBEDDED UNIVERSAL INTEGRATED CIRCUIT CARD
20180004736 · 2018-01-04

The present invention provides a data update method, an apparatus, and an eUICC. The method is applied to an eUICC in which a management apparatus and at least one SE are disposed, where the at least one SE is configured to store an application corresponding to at least one profile. The management apparatus receives a profile enabling request, where the enabling request is used to switch a source profile to a target profile, the enabling request includes identifier information of the target profile, and the source profile is a profile that is in an enabled state before the switching. The management apparatus then updates a first correspondence to a second correspondence according to the enabling request, where the second correspondence is a correspondence between a second application set and the target profile, and the second application set includes at least one application in the first application set.
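The profile switch amounts to replacing which profile-to-application-set correspondence is active. A minimal sketch of the management apparatus's bookkeeping, with all names hypothetical and the SE storage modeled as plain sets:

```python
class EuiccManager:
    """Sketch of the eUICC management apparatus: tracks which profile is
    enabled and which application set on the SE(s) corresponds to it."""
    def __init__(self, correspondences, enabled_profile):
        self.correspondences = correspondences  # profile id -> application set
        self.enabled = enabled_profile          # source profile, enabled before switching

    def handle_enable_request(self, target_profile_id):
        # the enabling request carries the target profile's identifier;
        # switching replaces the first correspondence (source profile ->
        # first application set) with the second (target profile -> second set)
        self.enabled = target_profile_id
        return self.correspondences[target_profile_id]
```

Note that the second application set may share applications with the first, which is why the update is modeled as swapping correspondences rather than wiping and reinstalling applications.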

SYSTEMS, DEVICES AND METHODS FOR EDGE NODE COMPUTING

Some embodiments are directed to a session manager, a cloud service system, and a mobile device. The session manager may be configured for managing an edge computing resource in a mobile network, wherein the mobile network comprises edge nodes which are configurable to provide edge computing resources to mobile devices. The session manager may generate a session identifier for the mobile device and associate the session identifier with the mobile device. Later, the session manager may receive a request from the cloud service system for deployment of an edge computing resource for the mobile device on an edge node.
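The session manager's two duties above, issuing a session identifier for a mobile device and later honoring a deployment request from the cloud service system, can be sketched as follows; the class shape and the trivial first-node placement policy are assumptions for illustration.

```python
import uuid

class SessionManager:
    """Sketch: maps session identifiers to mobile devices and records
    which edge node hosts each session's edge computing resource."""
    def __init__(self):
        self.sessions = {}     # session id -> mobile device id
        self.deployments = {}  # session id -> edge node

    def open_session(self, device_id):
        # generate a session identifier and associate it with the device
        sid = str(uuid.uuid4())
        self.sessions[sid] = device_id
        return sid

    def deploy(self, session_id, edge_nodes):
        # the cloud service system requests an edge resource for the
        # device behind this session; pick a node (trivial policy here)
        node = edge_nodes[0]
        self.deployments[session_id] = node
        return node
```

The session identifier is the only handle the cloud service needs, so the device's identity and location stay inside the session manager.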

SMB2 scaleout

Systems and methods are disclosed for clients and servers operating in a scaled cluster environment. Efficiencies are introduced to the process of connecting a client to a clustered environment by providing the client with the ability to attempt a connection with multiple servers in parallel. Servers operating in the clustered environment are also capable of providing persistent storage of file handles and other state information. Ownership of the state information and persistent handles may be transferred between servers, thereby providing clients with the opportunity to move from one server to another while maintaining access to resources in the clustered environment.
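The parallel-connect optimization above can be sketched with a thread pool: fire a connection attempt at every server in the cluster and keep the first that succeeds. The `try_connect` stub and the server dictionaries stand in for real SMB connection attempts and are purely illustrative.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def try_connect(server):
    # stand-in for a real connection attempt: a reachable server
    # returns its name, an unreachable one raises
    if server["up"]:
        return server["name"]
    raise ConnectionError(server["name"])

def connect_first_available(servers):
    """Attempt connections to all cluster servers in parallel and
    return the first one that succeeds."""
    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        futures = [pool.submit(try_connect, s) for s in servers]
        for fut in as_completed(futures):
            try:
                return fut.result()
            except ConnectionError:
                continue
    raise ConnectionError("no server reachable")
```

Racing the attempts means the client pays only the latency of the fastest reachable server, instead of timing out on dead servers one at a time.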