Patent classifications
H04L47/76
Systems and methods for managing streams of packets via intermediary devices
Virtual application and desktop delivery may be optimized by supplying application metadata and user intent to the device between a client and a server hosting resources for the delivery. The data packets used to deliver the virtual application or desktop may also be tagged with references to the application. By supplying the metadata and tagging packets with it, an intermediary network device may provide streams of data packets at the target QoS. In addition, the device may apply network resource allocation rules (e.g., firewalls and QoS configuration) for redirected content retrieved by the client out of band relative to a virtual channel, such as over the Internet. The network resource allocation rules may differ for different types of resources accessed. The device may also control a delivery agent on the server to modify communication sessions established through the virtual channels based on network conditions.
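As a rough sketch of the tagging idea described above, an intermediary could map an application tag carried in packet metadata to a QoS priority class. All names, tags, and priority values here are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

# Hypothetical rule table: application tag -> priority class (0 = highest).
QOS_RULES = {
    "interactive-desktop": 0,   # screen updates, keyboard/mouse input
    "app-streaming": 1,
    "bulk-transfer": 2,         # e.g. printing or file-copy virtual channels
}

@dataclass
class Packet:
    app_tag: str      # reference to the application, supplied as metadata
    payload: bytes

def classify(packet: Packet, default_class: int = 2) -> int:
    """Return the QoS priority class for a packet based on its application tag."""
    return QOS_RULES.get(packet.app_tag, default_class)
```

An intermediary applying such a table can prioritize interactive traffic over bulk virtual channels without inspecting payloads, since the tag travels with the packet.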
LOAD ADAPTATION ARCHITECTURE FRAMEWORK FOR ORCHESTRATING AND MANAGING SERVICES IN A CLOUD COMPUTING SYSTEM
According to one aspect of the concepts and technologies disclosed herein, a cloud computing system can include a load adaptation architecture framework that performs operations for orchestrating and managing one or more services that may operate within at least one of layers 4 through 7 of the Open Systems Interconnection (“OSI”) communication model. The cloud computing system also can include a virtual resource layer. The virtual resource layer can include a virtual network function that provides, at least in part, a service. The cloud computing system also can include a hardware resource layer. The hardware resource layer can include a hardware resource that is controlled by a virtualization layer. The virtualization layer can cause the virtual network function to be instantiated on the hardware resource so that the virtual network function can be used to support the service.
Managing devices within a vehicular communication network
A system for determining the servicing needs of a vehicle. In various embodiments, the system includes a remote server and a vehicle control module of the vehicle. The vehicle control module includes a first communication interface to enable communications with at least one vehicle device via a network fabric of the vehicle. The vehicle control module is configured to receive status data, from the vehicle device, relating to a performance status or operational status of the vehicle. The vehicle control module further includes a second communication interface that enables wireless communications with the remote server. The wireless communications include sending status data to the remote server. The remote server is configured to receive and interpret the status data to determine if the vehicle requires service, and send a response to the vehicle. When service is required, the response may cause the vehicle to provide a service indication.
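The server-side decision described above can be sketched as a simple interpretation step over reported status fields, followed by the vehicle surfacing an indication. The field names and thresholds below are assumptions for illustration only:

```python
# Hypothetical status fields: oil_life_pct, fault_codes, brake_wear_pct.
def interpret_status(status: dict) -> dict:
    """Remote-server side: decide whether the vehicle requires service."""
    needs_service = (
        status.get("oil_life_pct", 100) < 15          # low remaining oil life
        or bool(status.get("fault_codes"))            # any active fault codes
        or status.get("brake_wear_pct", 0) > 85       # heavy brake wear
    )
    return {"service_required": needs_service}

def handle_response(response: dict) -> str:
    """Vehicle-control-module side: surface a service indication when required."""
    return "SERVICE_INDICATOR_ON" if response["service_required"] else "OK"
```

The split mirrors the abstract's two interfaces: status data flows out over the wireless interface, and only the server's interpreted response drives the in-vehicle indication.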
System and method for throttling service requests having non-uniform workloads
A system that provides services to clients may receive and service requests, various ones of which may require different amounts of work. The system may determine whether it is operating in an overloaded or underloaded state based on a current work throughput rate, a target work throughput rate, a maximum request rate, or an actual request rate, and may dynamically adjust the maximum request rate in response. For example, if the maximum request rate is being exceeded, the maximum request rate may be raised or lowered, dependent on the current work throughput rate. If the target or committed work throughput rate is being exceeded, but the maximum request rate is not being exceeded, a lower maximum request rate may be proposed. Adjustments to the maximum request rate may be made using multiple incremental adjustments. Service request tokens may be added to a leaky token bucket at the maximum request rate.
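The final sentence above describes a token bucket refilled at the maximum request rate. A minimal sketch of that admission-control mechanism follows; the dynamic adjustment of the rate itself, which is the heart of the patent, is not reproduced here, and the class and parameter names are illustrative:

```python
import time

class TokenBucket:
    """Token-bucket admission control, refilled at the maximum request rate.

    Requests with non-uniform workloads can consume proportionally more
    tokens via the `cost` argument.
    """
    def __init__(self, max_request_rate: float, capacity: float):
        self.rate = max_request_rate        # tokens added per second
        self.capacity = capacity            # burst limit
        self.tokens = capacity
        self.last = time.monotonic()

    def try_acquire(self, cost: float = 1.0) -> bool:
        """Admit the request if enough tokens are available."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Raising or lowering `max_request_rate` in small increments, as the abstract suggests, then amounts to adjusting `self.rate` while the bucket keeps running.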
System and method to estimate network disruption index
Presented herein are methodologies for implementing a system and apparatus to estimate a network disruption index and undertake a mitigation action accordingly. A method includes calculating a network disruption index based on at least a disruption score associated with a service request measure, an end-of-life measure, a security incident response measure, and a return material authorization measure for respective hardware devices in a network; comparing the network disruption index to a predetermined threshold; and, when the network disruption index is above the predetermined threshold, identifying one or more of the hardware devices in the network for a mitigation action and implementing the mitigation action.
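The abstract does not specify how the four measures are combined, so the sketch below assumes a simple weighted sum per device, averaged into the index; the equal weights and the 0.5 threshold are illustrative assumptions:

```python
# Assumed equal weights across the four measures named in the abstract.
WEIGHTS = {
    "service_request": 0.25,
    "end_of_life": 0.25,
    "security_incident": 0.25,
    "rma": 0.25,                 # return material authorization
}

def disruption_index(devices: list[dict]) -> float:
    """Average the weighted disruption score across hardware devices."""
    scores = [sum(WEIGHTS[k] * d[k] for k in WEIGHTS) for d in devices]
    return sum(scores) / len(scores)

def needs_mitigation(devices: list[dict], threshold: float = 0.5) -> bool:
    """Compare the index to a predetermined threshold."""
    return disruption_index(devices) > threshold
```

When `needs_mitigation` fires, the method's next step would be ranking devices by their individual scores to pick the ones targeted for mitigation.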
Utilizing a model to manage resources of a network device and to prevent network device oversubscription by endpoint devices
A network device may receive configuration data identifying resource subscription thresholds associated with a plurality of respective endpoint devices and may receive traffic from the plurality of endpoint devices. The network device may process the traffic and the configuration data, with a resource allocation model, to determine that processing traffic associated with a first endpoint device requires allocating a resource of the network device, and may process the configuration data, with the resource allocation model, to identify the resource of the network device from a particular resource of the network device that is currently allocated to traffic associated with a second endpoint device. The network device may allocate the particular resource of the network device to the traffic associated with the first endpoint device, and may process the traffic associated with the first endpoint device with the particular resource to generate processed traffic.
BOTTLENECK STRUCTURES TO COMPUTE INCREMENTAL DIRECTIONS IN MULTIPATH MAX-MIN BANDWIDTH ALLOCATION
A processor-implemented method includes computing a bandwidth allocation for a number of flows in a number of flow groups. Pairs of nodes in a network transmit data to each other via at least one of the flows in one of the flow groups. Each of the flows traverses a path comprising a number of network links. The method also includes building a bottleneck structure graph for the flow groups. The method further includes calculating a network allocation based on the bottleneck structure graph.
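For context, the classic single-path max-min fair allocation that bottleneck structures generalize can be computed by progressive filling: repeatedly find the link offering the smallest fair share, fix the rates of all flows crossing it, and continue. This sketch omits the patent's flow groups, multipath, and bottleneck structure graph:

```python
def max_min_allocation(links: dict[str, float],
                       flows: dict[str, list[str]]) -> dict[str, float]:
    """Progressive filling for max-min fairness.

    links: link name -> capacity; flows: flow name -> list of links traversed.
    """
    alloc: dict[str, float] = {}
    remaining = dict(links)     # unallocated capacity per link
    active = dict(flows)        # flows whose rate is not yet fixed
    while active:
        # Count active flows per link, then compute each link's fair share.
        counts: dict[str, int] = {}
        for path in active.values():
            for link in path:
                counts[link] = counts.get(link, 0) + 1
        share = {link: remaining[link] / counts[link] for link in counts}
        bottleneck = min(share, key=share.get)
        rate = share[bottleneck]
        # Fix every flow crossing the bottleneck at that rate.
        for f in [f for f, path in active.items() if bottleneck in path]:
            alloc[f] = rate
            for link in active[f]:
                remaining[link] -= rate
            del active[f]
    return alloc
```

Each iteration resolves one bottleneck link; the bottleneck structure graph in the patent captures the resulting dependency ordering among links and flows explicitly, which is what enables computing incremental directions without rerunning the whole procedure.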