Patent classifications
H04L47/745
Internet-based proxy service to modify internet responses
A proxy server receives from a client device a request for a network resource that is hosted at an origin server for a domain. The request is received at the proxy server as a result of a DNS request for the domain resolving to the proxy server. The origin server is one of multiple origin servers that belong to different domains that resolve to the proxy server and are owned by different entities. The proxy server retrieves the requested network resource. The proxy server determines that the requested resource is an HTML page, automatically modifies the HTML page, and transmits the modified HTML page to the client device.
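The flow in this abstract (fetch the resource, detect that it is HTML, rewrite it, return it) can be sketched as follows. This is a minimal illustration, not the patented implementation; `fetch_from_origin` and the injected snippet are assumptions standing in for the real origin fetch and modification step.

```python
def fetch_from_origin(url):
    # Stand-in for the real origin-server fetch; returns (content_type, body).
    return ("text/html", "<html><head></head><body>Hello</body></html>")

def proxy_response(url, snippet='<script src="/proxy.js"></script>'):
    """Fetch a resource and, if it is an HTML page, modify it before
    returning it to the client (here: inject a script tag into <head>)."""
    content_type, body = fetch_from_origin(url)
    if content_type == "text/html":
        body = body.replace("</head>", snippet + "</head>")
    return body
```

Non-HTML responses would pass through unmodified under this sketch.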
Access path management based on path condition
Access path management is provided based on one or more path conditions in an information processing system. For example, an apparatus comprises a storage system comprising a processor coupled to a memory. The storage system is configured to communicate over a network with one or more host devices. The storage system is further configured to obtain a notification from a given one of the one or more host devices that a first path through the network between the storage system and the given host device is at least temporarily unreliable. The storage system is further configured to cause a path state change for the first path from a first state to a second state and a path state change for a second path to the first state.
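The state-swap described above can be sketched as a small state machine: on an unreliability notification, the flagged path leaves the first state and another path takes its place. The concrete state names ("active"/"standby") are assumptions for illustration.

```python
class PathManager:
    """Tracks per-path states; first path starts in the "active" state."""

    def __init__(self, paths):
        self.states = {p: ("active" if i == 0 else "standby")
                       for i, p in enumerate(paths)}

    def notify_unreliable(self, path):
        # Demote the unreliable path to the second state...
        self.states[path] = "standby"
        # ...and promote another path into the first state.
        for p, state in self.states.items():
            if p != path and state == "standby":
                self.states[p] = "active"
                break
```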
CAPACITY FORECASTING FOR HIGH-USAGE PERIODS
Examples herein include systems and methods for providing capacity forecasting for high-usage periods of a computing infrastructure. An example method can include segmenting a first portion of a data stream and generating a first core set for a forecasting model that predicts future usage of computing resources. The example method can further include segmenting a second portion of the data stream, generating a second core set, and using both core sets to forecast usage. The first core set can then be phased out after a predetermined time period has elapsed such that forecasting is based only on the second core set. The example method can further include defining at least two clusters of data and performing predictive analysis on a specific one of those clusters. Cluster-specific results can be displayed on a GUI, which can also provide a user with options for increasing or decreasing computing resources based on the predictions.
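The core-set handover above can be sketched as: forecast from both core sets during a transition window, then from the newer one alone once the first is phased out. The averaging forecast and the window length are illustrative assumptions, not the patent's model.

```python
from statistics import mean

def forecast(core_old, core_new, elapsed, phase_out_after=7):
    """Forecast usage from two core sets of observed values; the old
    core set stops contributing after `phase_out_after` time units."""
    if elapsed < phase_out_after:
        # Transition period: both core sets contribute.
        return mean(core_old + core_new)
    # Old core set phased out: forecast from the new core set only.
    return mean(core_new)
```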
System and method for predictive network congestion control
A method for predictive network congestion control may include receiving network traffic data of a network. The network traffic data may be indicative of a current level of use of the network. A predicted future level of use at the location of the network may be identified based on the received network traffic data and based on past network traffic data for the location of the network. A recommendation to alter the future level of use for the location may be generated. The recommendation may include a type of alert to transmit to devices of users in the location of the network. The recommendation may be transmitted to a network policy management server of the network.
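A minimal sketch of the prediction-plus-recommendation step: blend current and historical use into a prediction and emit an alert recommendation when it crosses a capacity threshold. The blending weight, threshold, and alert name are assumptions for illustration only.

```python
def recommend(current_use, past_uses, capacity, weight=0.5):
    """Predict future use for a location and recommend an alert type
    when the prediction exceeds the location's capacity."""
    predicted = weight * current_use + (1 - weight) * sum(past_uses) / len(past_uses)
    if predicted > capacity:
        # The recommendation names the alert to send to user devices.
        return {"predicted": predicted, "alert": "defer-nonessential-traffic"}
    return {"predicted": predicted, "alert": None}
```

A policy management server consuming this output could then push the named alert to devices in the congested location.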
DYNAMIC AND DETERMINISTIC ACCELERATION OF NETWORK SCHEDULING FOR SHARED FPGAS
A method for allocating resources of a field-programmable gate array (FPGA), the method comprising: deterministically estimating a maximum latency for executing a network service at the FPGA; determining that the maximum latency is less than or equal to a threshold latency value associated with the network service; outputting an acknowledgement indicating that the maximum latency is less than or equal to the threshold latency value; receiving confirmation that the FPGA has been selected to execute the network service within a threshold time period; and deterministically scheduling the resources of the FPGA for executing the network service in response to receiving the confirmation within the threshold time period.
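The claimed sequence reduces to a two-gate admission check: reject if the worst-case latency bound cannot be met, and schedule only if the selection confirmation arrives within the time window. The function below is a hedged sketch of that control flow; all names and numbers are illustrative.

```python
def admit(estimated_max_latency, threshold_latency,
          confirmation_delay, confirmation_window):
    """Decide whether to schedule a network service on a shared FPGA."""
    if estimated_max_latency > threshold_latency:
        return "rejected"    # cannot meet the service's latency bound
    if confirmation_delay > confirmation_window:
        return "expired"     # confirmation arrived after the threshold period
    return "scheduled"       # resources deterministically reserved
```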
Methods and apparatuses for providing internet-based proxy services
A proxy server receives, from multiple visitors of multiple client devices, a plurality of requests for actions to be performed on identified network resources belonging to a plurality of origin servers. At least some of the origin servers belong to different domains and are owned by different entities. The proxy server and the origin servers are also owned by different entities. The proxy server analyzes each request it receives to determine whether that request poses a threat and whether the visitor making the request poses a threat. The proxy server blocks requests from visitors that pose a threat, as well as requests that themselves pose a threat. The proxy server transmits to the appropriate origin server those requests that are not threats and that come from visitors that are not threats.
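The per-request decision described above can be sketched as: block if the visitor is flagged, block if the request matches a threat signature, otherwise forward to the origin. The signature and visitor sets below are illustrative stand-ins for the patent's analysis step.

```python
def dispatch(request, threat_signatures, blocked_visitors):
    """Return the proxy's decision for one request.

    request: dict with "visitor", "path", and "origin" keys (assumed shape).
    """
    if request["visitor"] in blocked_visitors:
        return "blocked:visitor"
    if any(sig in request["path"] for sig in threat_signatures):
        return "blocked:request"
    # Neither the request nor the visitor is a threat: forward it.
    return "forwarded:" + request["origin"]
```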
Dynamic throttling systems and services
A lightweight throttling mechanism allows for dynamic control of access to resources in a distributed environment. Each request received by a server of a server group is parsed to determine tokens in the request, which are compared with designated rules to determine whether to process or reject the request based on usage data associated with an aspect of the request, the token values, and the rule(s) specified for the request. The receiving of each request can be broadcast to throttling components for each server such that the global state of the system is known to each server. The system then can monitor usage and dynamically throttle requests based on real time data in a distributed environment.
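The token-and-rule check above can be sketched as follows, assuming each rule maps a token value to a request limit and that the usage counter reflects the broadcast global state. All names are illustrative.

```python
from collections import Counter

class Throttler:
    """Reject a request once the usage count for one of its tokens
    reaches the limit designated by that token's rule."""

    def __init__(self, rules):
        self.rules = rules       # e.g. {"api_key=abc": 2}
        self.usage = Counter()   # stand-in for broadcast global usage state

    def handle(self, request_tokens):
        for tok in request_tokens:
            limit = self.rules.get(tok)
            if limit is not None and self.usage[tok] >= limit:
                return "rejected"
        for tok in request_tokens:
            self.usage[tok] += 1
        return "processed"
```

In the distributed setting of the abstract, each server's throttling component would apply this check against the shared usage state rather than a local counter.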
Method and system for improving bandwidth allocation efficiency
Provided are a method and system for improving bandwidth allocation efficiency, relating to the optical communication field. In a PON system, an ONU monitors each of its TCONTs in real time and, on detecting buffer overflow on a TCONT, sends the OLT a private message reporting the overflow. According to that message, the OLT sends the ONU a private message instructing it to activate an overflow allocation mechanism. After receiving this instruction, the ONU activates the overflow allocation mechanism, calculates the actual traffic of the TCONT's buffer, and sends that actual traffic to the OLT. The OLT then dynamically allocates bandwidth to the TCONT according to the actual traffic of the TCONT's buffer.
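The final allocation step can be sketched under a simple assumption: the OLT divides the available upstream bandwidth among TCONTs in proportion to their reported actual buffer traffic. The proportional policy is illustrative, not the patent's specific mechanism.

```python
def allocate(reported_traffic, total_bandwidth):
    """Split total_bandwidth among TCONTs proportionally to the actual
    buffer traffic each one reported to the OLT."""
    total = sum(reported_traffic.values())
    return {tcont: total_bandwidth * traffic / total
            for tcont, traffic in reported_traffic.items()}
```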
RESOURCE RESERVATION MANAGEMENT DEVICE AND RESOURCE RESERVATION MANAGEMENT METHOD
[Problem] When a resource reserved in a resource sharing system becomes unavailable, the reservation is efficiently reset.
[Solution] A resource sharing system 10 shares resources 30 with a plurality of users 20 (user terminals). A resource reservation management device 42 includes: a reservation setting unit 402 that accepts a reservation request for a resource 30 from a user 20 and sets a reservation on a predetermined resource 30 in the resource sharing system 10; and a reservation changing unit 404 that resets the reservation to another resource 30 in the resource sharing system 10 when the predetermined reserved resource 30 becomes unavailable. If the other resource 30 has insufficient resource capacity, the reservation changing unit 404 resets the reservation to the other resource 30 based on a reservation changing policy defining which reservation is preferentially reset out of the reservations to be reset.
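The capacity-shortfall case can be sketched as: when the fallback resource lacks room for the displaced reservation, existing reservations are reset in the order given by the changing policy until enough capacity is freed. The policy key and reservation shape below are assumptions for illustration.

```python
def reset_reservations(displaced, fallback_free, existing, policy_key):
    """Free capacity on a fallback resource for a displaced reservation.

    existing: reservations already on the fallback, as dicts with
    "size" and "priority" keys (assumed shape). policy_key orders them
    by eviction preference (the "reservation changing policy").
    Returns the list of reset reservations, or None if impossible.
    """
    evicted = []
    for r in sorted(existing, key=policy_key):
        if fallback_free >= displaced["size"]:
            break
        fallback_free += r["size"]
        evicted.append(r)
    return evicted if fallback_free >= displaced["size"] else None
```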
User-Based Data Tiering
Techniques are provided for user-based data tiering. In an example, a computer maintains a first-in-first-out queue that logs a finite set of users that have most recently modified a file. This queue can be maintained in an extended attribute of an inode that corresponds to the file. A computer can also maintain a policy that defines how to perform storage tiering on a file based on which users have accessed the file. When a tiering operation is performed, the files specified by a corresponding tiering policy can be evaluated for which users have recently accessed them. When a user specified by the tiering policy has recently modified a file, the file can be placed in a tiering queue for tiering.
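The FIFO log and policy check above can be sketched as follows. In the abstract the log lives in an inode extended attribute; here a bounded deque stands in for it, and the policy is a plain set of user names, both illustrative assumptions.

```python
from collections import deque

class FileRecord:
    """Per-file log of the finite set of most-recent modifying users."""

    def __init__(self, max_users=4):
        self.recent_users = deque(maxlen=max_users)  # FIFO: oldest drops off

    def modified_by(self, user):
        self.recent_users.append(user)

def should_tier(record, policy_users):
    # Tier the file if any user named by the tiering policy recently modified it.
    return any(u in policy_users for u in record.recent_users)
```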