Patent classifications
H04L47/74
Systems and methods for managing streams of packets via intermediary devices
Virtual application and desktop delivery may be optimized by supplying application metadata and user intent to an intermediary device between a client and a server hosting resources for the delivery. The data packets used to deliver the virtual application or desktop may also be tagged with references to the application. By supplying the metadata and tagging packets with the metadata, an intermediary network device may provide streams of data packets at a target quality of service (QoS). In addition, the device may apply network resource allocation rules (e.g., firewalls and QoS configuration) to redirected content that the client retrieves out of band relative to a virtual channel, such as over the Internet. The network resource allocation rules may differ for different types of resources accessed. The device may also control a delivery agent on the server to modify communication sessions established through the virtual channels based on network conditions.
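A minimal sketch of the tagging-and-classification idea described above. All names, the table layout, and the QoS class labels are hypothetical illustrations, not details from the patent: the intermediary holds application metadata supplied ahead of the stream and maps each tagged packet to the QoS class registered for its application.

```python
# Hypothetical metadata table: app_id -> metadata supplied by the delivery side.
APP_METADATA = {}

def register_app(app_id, qos_class):
    """Supply application metadata to the intermediary ahead of the stream."""
    APP_METADATA[app_id] = {"qos_class": qos_class}

def classify_packet(packet):
    """Read the packet's application tag and return the QoS class to apply.
    Untagged or unknown traffic falls back to best-effort treatment."""
    meta = APP_METADATA.get(packet.get("app_id"))
    return meta["qos_class"] if meta else "best-effort"

# Usage: metadata arrives first, then tagged packets are classified.
register_app("virtual-desktop", "interactive")
tagged_class = classify_packet({"app_id": "virtual-desktop", "payload": b"..."})
untagged_class = classify_packet({"payload": b"..."})
```

The point of the sketch is only the ordering: because the metadata is supplied before the packets arrive, classification is a single table lookup per packet rather than deep inspection.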
SHARED STORAGE MODEL FOR HIGH AVAILABILITY WITHIN CLOUD ENVIRONMENTS
Techniques are provided for a high availability solution (e.g., a network attached storage (NAS) solution) with address preservation during switchover. A first virtual machine is deployed into a first domain and a second virtual machine is deployed into a second domain of a computing environment. The two virtual machines are configured as a node pair that provides clients with access to data stored within an aggregate comprising one or more storage structures within shared storage of the computing environment. A load balancer manages the logical interfaces that clients use to access the virtual machines. During switchover, the load balancer preserves the IP address that a client used to mount and access a data share of the aggregate.
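One way to picture the address-preserving switchover. The class, field names, IP, and node names below are invented for illustration: the client-facing logical IP stays constant, and switchover only changes which node the IP resolves to.

```python
class LoadBalancer:
    """Toy model: a logical IP remains stable for clients; switchover
    repoints it from the active node to its partner."""

    def __init__(self, logical_ip, primary, partner):
        self.logical_ip = logical_ip
        self.active = primary
        self.partner = partner

    def route(self, requested_ip):
        """Clients always mount via the logical IP; return the backing node."""
        if requested_ip != self.logical_ip:
            raise KeyError(requested_ip)
        return self.active

    def switchover(self):
        """Partner takes over the aggregate; the client-facing IP is preserved."""
        self.active, self.partner = self.partner, self.active

lb = LoadBalancer("10.0.0.5", primary="vm-domain-1", partner="vm-domain-2")
before = lb.route("10.0.0.5")   # served by the first virtual machine
lb.switchover()
after = lb.route("10.0.0.5")    # same IP, now served by the partner
```

Because the mount address never changes, a client's existing mount of the data share keeps working across the switchover.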
Monitoring a Communication System That is Used for Control and/or Surveillance of an Industrial Process
A computer-implemented method for monitoring a communication system includes identifying a set of signals that need to be transmitted over the communication system for proper functioning of the control and/or surveillance; for each signal from the identified set of signals, identifying one or more resources of the communication system that are needed for transmission of this signal; obtaining information that is indicative of the operational state of the identified resources; and evaluating, from the obtained information, at least one remedial activity which, when performed on at least one resource, and/or on the control and/or surveillance, is likely to improve, and/or to restore, the reliability of the control and/or surveillance.
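The four claimed steps (identify signals, identify resources per signal, obtain resource state, evaluate remedies) can be sketched as a small pipeline. The signal names, resource names, states, and remedy text are hypothetical examples, not from the patent.

```python
SIGNAL_RESOURCES = {                    # signal -> resources needed to transmit it
    "valve_cmd": ["switch_A", "link_1"],
    "temp_meas": ["link_1"],
}
RESOURCE_STATE = {"switch_A": "ok", "link_1": "degraded"}  # obtained state info

def evaluate_remedies():
    """For every signal required by the control/surveillance, flag resources
    that are not operational and propose a remedial activity for each."""
    remedies = []
    for signal, resources in SIGNAL_RESOURCES.items():
        for res in resources:
            if RESOURCE_STATE.get(res) != "ok":
                remedies.append((signal, res, "reroute or repair " + res))
    return remedies
```

Here the degraded link affects both signals, so the evaluation yields one remedial activity per affected signal, which is what lets the method rank fixes by how much control/surveillance reliability they restore.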
Flow queueing method and system
A method includes receiving a packet. The method further includes determining whether the packet is part of a responsive connection. The method further includes determining whether a responsive buffer is full in response to a determination that the packet is part of the responsive connection. The method further includes applying a responsive probability to the packet in response to a determination that the responsive buffer is full. The method further includes determining whether to drop the packet based on the responsive probability. The method further includes accepting the packet for processing in response to a determination that the responsive buffer is not full or in response to a determination not to drop the packet.
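The decision flow above can be sketched almost directly from the claim language. The buffer representation, parameter names, and the injectable `rng` hook are illustrative assumptions; the probabilistic drop resembles RED-style active queue management.

```python
import random

def handle_packet(packet, responsive_buffer, capacity, drop_prob,
                  rng=random.random):
    """Sketch of the claimed flow. `rng` is injectable so the probabilistic
    branch can be tested deterministically."""
    if packet.get("responsive"):                 # part of a responsive connection?
        if len(responsive_buffer) >= capacity:   # responsive buffer full?
            if rng() < drop_prob:                # apply responsive probability
                return "dropped"
    responsive_buffer.append(packet)             # otherwise accept for processing
    return "accepted"
```

Accepting happens on either path named in the abstract: when the responsive buffer is not full, or when the probabilistic check decides not to drop.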
USING EDGE-OPTIMIZED COMPUTE INSTANCES TO EXECUTE USER WORKLOADS AT PROVIDER SUBSTRATE EXTENSIONS
Techniques are described for enabling users of a service provider network to create and configure “application profiles” that include parameters related to execution of user workloads at provider substrate extensions. Once an application profile is created, users can request the deployment of user workloads to provider substrate extensions by requesting instance launches based on a defined application profile. The service provider network can then automate the launch and placement of the user's workload at one or more provider substrate extensions using edge-optimized compute instances (e.g., compute instances tailored for execution within provider substrate extension environments). In some embodiments, once such edge-optimized instances are deployed, the service provider network can manage the auto-resizing of the instances in terms of various types of computing resources devoted to the instances, manage the lifecycle of instances to ensure maximum capacity availability at provider substrate extension locations, and perform other instance management processes.
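A rough sketch of the profile-then-launch flow. The profile fields, region names, and function names are invented for illustration and are not the provider's API: a profile captures launch parameters once, and later deployments reference it by name.

```python
profiles = {}  # hypothetical store: profile name -> parameters

def create_application_profile(name, instance_type, regions, max_instances):
    """Record parameters related to executing a workload at the edge."""
    profiles[name] = {"instance_type": instance_type,
                      "regions": regions,
                      "max_instances": max_instances}

def launch_from_profile(name, substrate_extensions):
    """Place edge-optimized instances at provider substrate extensions
    permitted by the named profile."""
    profile = profiles[name]
    placements = [pse for pse in substrate_extensions
                  if pse["region"] in profile["regions"]]
    return [{"pse": p["id"], "instance_type": profile["instance_type"]}
            for p in placements[:profile["max_instances"]]]
```

The design point is that placement decisions (which extensions, which instance shape, how many) live in the profile, so a launch request only needs the profile name.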
Method and device for downloading resources
Method and device for downloading resources, applicable to a peer-to-peer (P2P) network. The method includes: initiating a downloading task according to a downloading request from a resource requester; acquiring data of the downloading task and writing the data to a memory; and reading the data from the memory and providing the data to the resource requester. The present invention can reduce the disk consumption caused by resource downloading.
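The memory-staged path might look like the following sketch (class and method names are hypothetical): chunks acquired for the task are written to an in-memory buffer and served to the requester from there, avoiding a write-then-read round trip through disk.

```python
import io

class DownloadTask:
    """Sketch: stage downloaded data in memory instead of on disk."""

    def __init__(self):
        self.buffer = io.BytesIO()   # the "memory" of the claimed method

    def write_chunk(self, data):
        """Acquire data for the downloading task and write it to memory."""
        self.buffer.write(data)

    def read_for_requester(self):
        """Read the data back from memory and provide it to the requester."""
        return self.buffer.getvalue()

task = DownloadTask()
for chunk in (b"part1-", b"part2"):  # data arriving, e.g. from peers
    task.write_chunk(chunk)
payload = task.read_for_requester()
```

A production version would cap the buffer and spill to disk past a threshold; the sketch only shows the memory-first path the abstract claims.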
AUTOMATED DECISION TECHNIQUES FOR CONTROLLING RESOURCE ACCESS
A durability assessment system may receive a request, from a computing system, for a durability index describing an entity. The durability assessment system may determine the durability index based on information about the entity's resource usage, such as a resource availability score or a resource allocation score. The system may compare the obtained resource availability score and resource allocation score to ranges associated with a set of durability indices and, based on the comparison, determine a durability index for the entity. The durability index may indicate the entity's ability to return accessed resources. In some cases, the durability assessment system may provide the durability index to an allocation computing system configured to determine whether to grant access to resources based on the durability index.
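A toy version of the range-based mapping. The way the two scores are combined, the thresholds, and the index labels are all invented for illustration; the abstract only requires that scores be compared against ranges associated with a set of durability indices.

```python
INDEX_RANGES = [        # (lower bound of combined score, durability index)
    (1.6, "high"),
    (1.0, "medium"),
    (0.0, "low"),
]

def durability_index(availability_score, allocation_score):
    """Combine the two scores and map the result onto the first range
    whose lower bound it meets."""
    combined = availability_score + allocation_score
    for lower, index in INDEX_RANGES:
        if combined >= lower:
            return index
    return "low"
```

An allocation system would then gate access on the returned label, e.g. granting resources only for "medium" or better.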
Orchestrating apparatus, VNFM apparatus, managing method and program
An orchestrating apparatus, comprising: a receiving part that receives virtual resource information attached to a VNF (Virtualized Network Function) from a VNFM (Virtualized Network Function Manager) that generated the VNF; a storage part that stores the virtual resource information in correspondence with the VNF; and a synchronizing part that transmits the virtual resource information corresponding to a designated VNF to a VNFM that has lost correspondence between the VNF and the virtual resource information, and causes the VNFM to restore the virtual resource information assigned to the VNF.
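The receive/store/synchronize flow can be sketched as follows. The class and method names loosely follow the abstract's vocabulary but are hypothetical; real NFV orchestration uses the standardized Or-Vnfm interface rather than direct method calls.

```python
class Orchestrator:
    """Stores virtual resource information per VNF and can push it back
    to a VNFM that has lost the mapping."""

    def __init__(self):
        self.store = {}  # vnf_id -> virtual resource information

    def receive(self, vnf_id, resource_info):
        """Receiving part: store resource info in correspondence with the VNF."""
        self.store[vnf_id] = resource_info

    def synchronize(self, vnf_id, vnfm):
        """Synchronizing part: restore the designated VNF's resource info."""
        vnfm.restore(vnf_id, self.store[vnf_id])

class Vnfm:
    """Minimal stand-in for a VNFM that lost its VNF-to-resource mapping."""

    def __init__(self):
        self.resources = {}

    def restore(self, vnf_id, resource_info):
        self.resources[vnf_id] = resource_info

orch = Orchestrator()
orch.receive("vnf-1", {"vcpus": 2, "network": "net-a"})
recovering_vnfm = Vnfm()                 # e.g. restarted after a failure
orch.synchronize("vnf-1", recovering_vnfm)
```

The orchestrator thus acts as the durable copy of the mapping, which is what makes VNFM recovery possible.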
Processing allocation in data center fleets
A method and system for allocating tasks among processing devices in a data center. The method may include receiving a request to allocate a task to one or more processing devices, the request indicating a required bandwidth for performing the task; receiving a list of predefined processing device groups connected to a host server, the list indicating which processing device groups are available for allocation of tasks and the available bandwidth of each available group; assigning the task to a processing device group whose available bandwidth is greater than or equal to the required bandwidth; and updating the list to indicate that the processing device group to which the task is assigned, and any other processing device group sharing at least one processing device with it, are unavailable. The task may be assigned to the available processing device group requiring the lowest amount of power.
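A compact sketch of the allocation rule, with hypothetical data shapes: among available groups with enough bandwidth, pick the lowest-power one, then mark it, and every group sharing a device with it, unavailable.

```python
def assign_task(groups, required_bw):
    """Assign a task to the lowest-power available group that meets the
    bandwidth requirement; update availability of overlapping groups."""
    candidates = [g for g in groups
                  if g["available"] and g["bandwidth"] >= required_bw]
    if not candidates:
        return None
    chosen = min(candidates, key=lambda g: g["power"])
    chosen["available"] = False
    for g in groups:  # groups sharing a device with the chosen one go unavailable
        if g is not chosen and set(g["devices"]) & set(chosen["devices"]):
            g["available"] = False
    return chosen["name"]

groups = [
    {"name": "g1", "devices": ["d0", "d1"], "bandwidth": 100, "power": 30, "available": True},
    {"name": "g2", "devices": ["d1", "d2"], "bandwidth": 200, "power": 20, "available": True},
    {"name": "g3", "devices": ["d3"],       "bandwidth": 150, "power": 25, "available": True},
]
```

With this data, a task needing 120 units of bandwidth goes to g2 (g1 is too slow, g2 beats g3 on power), after which g1 also becomes unavailable because it shares device d1 with g2.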
RESERVATION MECHANIC FOR NODES WITH PHASE CONSTRAINTS
A computer chip, a method, and computer program product for providing phase reservations between processing nodes. A computer chip includes a plurality of processing nodes interconnected in an on-chip data transfer network configured in a circular topology. The processing nodes include reservation mechanisms managing reservations made by processing nodes with phase constraints. The reservation policy allows the processing nodes to make a reservation, for a given phase, in any phase window, only once per reservation window. A reservation window can be a bounded amount of time for when a node is guaranteed an opportunity to transmit at least one message. The reservation policy also prevents the processing nodes from making more than one reservation in a phase window. Once a reservation is granted, the corresponding message may progress on the bus unimpeded. Requestors attempting to transmit messages are blocked until the message is transmitted.
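A simplified software model of the reservation policy. It collapses the phase-window detail into a per-window grant set and all names are illustrative; the patent describes an on-chip hardware mechanism, not Python. The invariant modeled is the one stated above: a node may reserve a given phase only once per reservation window.

```python
class ReservationManager:
    """Grants at most one reservation per (node, phase) pair within the
    current reservation window; a new window resets the quota."""

    def __init__(self):
        self.window = 0
        self.granted = set()  # (node, phase, window) triples already granted

    def request(self, node, phase):
        """Return True and grant the reservation, or False if this node
        already reserved this phase in the current reservation window."""
        key = (node, phase, self.window)
        if key in self.granted:
            return False
        self.granted.add(key)
        return True

    def next_window(self):
        """Open a new reservation window: every node gets a fresh opportunity."""
        self.window += 1
```

A granted request corresponds to the message that may then progress on the bus unimpeded; a denied request models a blocked requestor waiting for the next window.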