Patent classifications
H04L67/288
SELECTIVE TRAFFIC PROCESSING IN A DISTRIBUTED CLOUD COMPUTING NETWORK
A server receives internet traffic from a client device. The server is one of multiple servers of a distributed cloud computing network, each associated with a set of server identities including a server/data center certification identity. The server processes the internet traffic at layer 3, including participating in a layer 3 DDoS protection service. If the traffic is not dropped by the layer 3 DDoS protection service, further processing is performed. The server determines whether it is permitted to process the traffic at layers 5-7, including whether it is associated with a server/data center certification identity that meets selected criteria for the destination of the internet traffic. If the server does not meet the criteria, it transmits the traffic to another of the multiple servers for processing at layers 5-7.
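The dispatch logic in this abstract can be illustrated with a minimal sketch. All names here (the `Server` class, the `certifications` field, the `looks_like_ddos` placeholder) are hypothetical assumptions, not from the patent:

```python
# Sketch of layer 3 / layers 5-7 selective traffic processing.
# All identifiers are illustrative, not from the patent.

from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    certifications: set = field(default_factory=set)  # e.g. {"PCI-DSS"}

def looks_like_ddos(packet):
    # Placeholder heuristic standing in for the layer 3 DDoS service.
    return packet.get("spoofed", False)

def handle_traffic(server, peers, packet, required_certs):
    # Layer 3 processing: the DDoS protection service may drop the packet.
    if looks_like_ddos(packet):
        return "dropped"
    # Check whether this server's certification identity meets the
    # criteria selected for the packet's destination.
    if required_certs <= server.certifications:
        return f"processed at L5-7 by {server.name}"
    # Otherwise forward to another server that does meet the criteria.
    for peer in peers:
        if required_certs <= peer.certifications:
            return f"forwarded to {peer.name}"
    return "no eligible server"
```

For example, an edge server lacking a required certification would forward the traffic to a certified peer rather than processing it at layers 5-7 itself.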
System and Method for Improving Content Fetching by Selecting Tunnel Devices
A method for fetching content from a web server to a client device is disclosed, using tunnel devices as intermediaries. A tunnel device is selected based on an attribute, such as IP geolocation. A tunnel bank server stores a list of available tunnels, each associated with values of various attribute types. Each tunnel device initiates communication with the tunnel bank server and stays connected to it, allowing the tunnel bank server to initiate a communication session. Upon receiving a client request for content with specific attribute types and values, the tunnel bank server selects a tunnel, which is used to retrieve the requested content from the web server using a standard protocol such as SOCKS, WebSocket, or HTTP Proxy. The client communicates only with a super proxy server that manages the content-fetching scheme.
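The attribute-based selection step can be sketched as a simple lookup over the tunnel bank's records. The record structure and attribute names are illustrative assumptions:

```python
# Sketch of attribute-based tunnel selection by a tunnel bank server.
# The record fields (geo, asn) are illustrative, not from the patent.

tunnel_bank = [
    {"id": "t1", "geo": "DE", "asn": "AS3320"},
    {"id": "t2", "geo": "US", "asn": "AS7922"},
    {"id": "t3", "geo": "US", "asn": "AS701"},
]

def select_tunnel(bank, **wanted_attrs):
    """Return the first available tunnel matching all requested attributes,
    or None if no connected tunnel satisfies the request."""
    for tunnel in bank:
        if all(tunnel.get(k) == v for k, v in wanted_attrs.items()):
            return tunnel
    return None
```

A client asking the super proxy for US-geolocated content would thus be served through a tunnel whose `geo` attribute is `"US"`.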
Measuring the performance of a peer-managed content distribution network
A system and method are provided for measuring the performance of a synthetic peer-managed content distribution network. Each node peers with one or more other nodes to share the content and facilitate its presentation to associated users. Each node collects session metadata for identifying the node's environment, presentation events regarding presentation of the content to users, and transfer events regarding the sharing of the content among peers. The nodes report their data to a central entity that feeds the different types of data through different ETL pipelines to obtain the performance measurements. For example, the session metadata may allow the reach of a content item to be determined, the presentation events may be used to determine the quality of experience with the content item for users, and the transfer events may be used to determine how much external bandwidth the network conserved and/or how efficiently the nodes shared the content.
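The per-type routing of node reports into separate ETL pipelines can be sketched as follows; the pipeline names mirror the three data types named in the abstract, while the report structure is an assumption:

```python
# Sketch of partitioning raw node reports by type so each type
# feeds its own ETL pipeline. Report fields are illustrative.

from collections import defaultdict

PIPELINES = ("session_metadata", "presentation_events", "transfer_events")

def route_reports(reports):
    """Bucket incoming node reports by type; unknown types are dropped."""
    buckets = defaultdict(list)
    for report in reports:
        if report.get("type") in PIPELINES:
            buckets[report["type"]].append(report)
    return buckets
```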
Efficient and flexible load-balancing for clusters of caches under latency constraint
The present technology provides a system, method, and computer-readable medium for steering a content request among a plurality of cache servers based on a multi-level assessment of content popularity. In some embodiments, three levels of popularity may be determined, comprising popular, semi-popular, and unpopular designations for the queried content. Processing of the query and delivery of the requested content depend on the popularity designation and comprise acceptance of the query at the edge cache server to which the query was originally directed, rejection of the query and redirection to a second edge cache server, or redirection of the query to the origin server to deliver the requested content. The proposed technology yields a higher hit ratio for edge cache clusters by steering requests for semi-popular content to one or more additional cache servers while forwarding requests for unpopular content to the origin server.
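The three-tier steering decision can be sketched with hit counts as a stand-in for the popularity assessment. The thresholds and counter structure are illustrative assumptions, not from the patent:

```python
# Sketch of popularity-tiered request steering at an edge cache.
# Thresholds and the hit-count popularity proxy are assumptions.

def steer_request(content_id, hit_counts, popular=100, semi=10):
    count = hit_counts.get(content_id, 0)
    if count >= popular:
        return "accept"            # popular: serve at this edge cache
    if count >= semi:
        return "redirect_peer"     # semi-popular: try a second edge cache
    return "redirect_origin"       # unpopular: forward to origin server
```

Keeping unpopular content out of the edge caches is what preserves cache space for popular and semi-popular items, raising the cluster hit ratio.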
Exit node benchmark feature
Systems and methods for effectively managing exit nodes are provided. The exemplary systems and methods use a Supernode to examine an Exit Node by sending a request to a Target and receiving the response. Information about the exit node is then stored by the Supernode. Based on the information provided by the Supernode, the Exit Nodes Database organizes the proxies according to availability and provides available exit nodes to a User Device.
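The benchmark probe can be sketched as a timed request through the exit node under test. The transport callable and result fields are illustrative assumptions:

```python
# Sketch of benchmarking one exit node via a test request to a target.
# The send_via_node callable and result fields are assumptions.

import time

def benchmark_exit_node(send_via_node, node, target_url):
    """Probe one exit node; record availability and observed latency."""
    start = time.monotonic()
    try:
        status = send_via_node(node, target_url)  # e.g. HTTP GET via node
        latency = time.monotonic() - start
        return {"node": node, "available": status == 200, "latency": latency}
    except Exception:
        # Any transport failure marks the node unavailable.
        return {"node": node, "available": False, "latency": None}
```

The resulting records are what an exit-node database could sort by availability before handing nodes to a user device.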
MITIGATING MULTIPLE AUTHENTICATIONS FOR A GEO-DISTRIBUTED SECURITY SERVICE USING AN AUTHENTICATION CACHE
Mitigating multiple authentications for a geo-distributed security service is disclosed. A request to access a web service from a client device is received. The request is redirected to a geo-distributed authentication service that includes a distributed cache for storing a user's authentication authorization. An authorization token, included in a distributed authentication cache cookie, and a uniform resource locator (URL) for the web service are returned to facilitate secure access to the web service from the client device.
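The cache-hit path that spares the user a second login can be sketched as below. The cookie name and cache layout are illustrative assumptions:

```python
# Sketch of consulting a distributed authentication cache before
# forcing a fresh login. Cookie and cache field names are assumptions.

def authorize(request_cookies, auth_cache):
    token = request_cookies.get("dist_auth_cache")
    if token and auth_cache.get(token) == "authorized":
        return "access_granted"     # reuse the cached authorization
    return "redirect_to_login"      # no valid cached token: authenticate
```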
APPARATUS, METHOD, AND STORAGE MEDIUM FOR FEDERATED LEARNING
The present disclosure relates to an apparatus, a method, and a storage medium for federated learning (FL). Various embodiments for FL are described. In an embodiment, a central processing apparatus can be configured to, for a first one of a plurality of distributed computing apparatuses: receive a report message from the first distributed computing apparatus, the report message including at least one of training data set information or device information of the first distributed computing apparatus; evaluate FL performance of the first distributed computing apparatus based on the report message of the first distributed computing apparatus; determine wireless resource requirements of the first distributed computing apparatus based on the FL performance of the first distributed computing apparatus; and notify a wireless network, via a message, of configuring wireless resources for the first distributed computing apparatus based on the wireless resource requirements.
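The evaluate-then-allocate flow at the central apparatus can be sketched as follows. The scoring formula, field names, and bandwidth tiers are illustrative assumptions, not from the disclosure:

```python
# Sketch of a central apparatus scoring an FL client's report and
# mapping the score to a wireless resource requirement.
# The formula and thresholds are illustrative assumptions.

def evaluate_fl_performance(report):
    """Combine training data-set size and device speed into a crude score."""
    samples = report.get("num_samples", 0)
    flops = report.get("device_flops", 1)
    return samples * flops

def resource_requirement(score, high=1e9):
    # A data-rich, fast client justifies more wireless resources so its
    # model updates arrive in time for aggregation.
    return "high_bandwidth" if score >= high else "default_bandwidth"
```

The requirement string stands in for the message notifying the wireless network how to configure resources for that apparatus.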