Patent classifications
H04L47/83
LATENCY-AWARE LOAD BALANCER FOR TOPOLOGY-SHIFTING SOFTWARE DEFINED NETWORKS
Techniques are described for performing latency-aware load balancing. In some examples, a computing device communicably coupled to a plurality of service endpoints that are in motion with respect to the computing device may receive data to be processed. The computing device may select, based at least in part on a communication latency of each of the plurality of service endpoints and a predicted compute latency of each of the plurality of service endpoints, a service endpoint out of the plurality of service endpoints to process the data. The computing device may send the data to the selected service endpoint for processing.
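The selection step described above can be sketched as minimizing the sum of measured communication latency and predicted compute latency per endpoint. This is a minimal illustration, not the claimed method; the endpoint names and latency figures are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    comm_latency_ms: float       # measured network latency to the endpoint
    predicted_compute_ms: float  # forecast processing time at the endpoint

def select_endpoint(endpoints):
    """Pick the endpoint with the lowest total expected latency."""
    return min(endpoints, key=lambda e: e.comm_latency_ms + e.predicted_compute_ms)

endpoints = [
    Endpoint("sat-1", comm_latency_ms=40.0, predicted_compute_ms=15.0),   # 55 ms total
    Endpoint("sat-2", comm_latency_ms=25.0, predicted_compute_ms=35.0),   # 60 ms total
    Endpoint("ground-1", comm_latency_ms=10.0, predicted_compute_ms=60.0) # 70 ms total
]
best = select_endpoint(endpoints)  # sat-1
```

In a topology-shifting network the latency inputs would be refreshed continuously as endpoints move; here they are static for clarity.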
Edge processing of sensor data using a neural network to reduce data traffic on a communication network
Methods, systems, and apparatuses related to edge processing of sensor data using a neural network to reduce network traffic to and/or from a server. In one approach, a cloud server processes sensor data from a vehicle using an artificial neural network (ANN). The ANN has several layers. Based on analyzing at least one characteristic of the sensor data received from the vehicle and/or a context associated with processing the sensor data, the cloud server determines to send one or more of the layers of the ANN for edge processing on the vehicle itself. In other cases, the cloud server decides to send the one or more layers to an edge server device located on a communication path between the vehicle and the cloud server. The edge processing reduces network data traffic.
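One way the split point could be chosen, sketched under the assumption that the goal is minimizing bytes sent upstream: run enough leading layers at the edge that the transmitted intermediate output is smaller than the raw sensor data. The byte counts below are hypothetical.

```python
def choose_split(layer_output_bytes, raw_input_bytes):
    """Return (k, bytes_sent): run the first k ANN layers at the edge so
    that the data sent upstream (layer k's output, or the raw sensor
    input when k == 0) is as small as possible."""
    best_k, best_bytes = 0, raw_input_bytes
    for k, out_bytes in enumerate(layer_output_bytes, start=1):
        if out_bytes < best_bytes:
            best_k, best_bytes = k, out_bytes
    return best_k, best_bytes

# Layer outputs shrink then grow; sending layer 2's output is cheapest.
split = choose_split([800, 300, 500], raw_input_bytes=1000)  # (2, 300)
```

A real system would also weigh the vehicle's compute budget, not just traffic, before shipping layers to the edge.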
Automatic configuration of logical routers on edge nodes
Some embodiments provide a method or tool for automatically configuring a logical router on one or more edge nodes of an edge cluster (e.g., in a hosting system such as a datacenter). The method of some embodiments configures the logical router on the edge nodes based on a configuration policy that dictates the selection method of the edge nodes. In some embodiments, an edge cluster includes several edge nodes (e.g., gateway machines), through which one or more logical networks connect to external networks (e.g., external logical and/or physical networks). In some embodiments, the configured logical router connects a logical network to an external network through the edge nodes.
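The policy-driven selection of edge nodes might look like the following toy dispatcher; the policy names and node attributes are illustrative assumptions, not the patent's configuration schema.

```python
def select_edge_nodes(nodes, policy, count=1):
    """Pick edge nodes to host a logical router according to a named
    configuration policy (policy names here are illustrative)."""
    if policy == "least-loaded":
        ranked = sorted(nodes, key=lambda n: n["load"])
    elif policy == "by-name":
        ranked = sorted(nodes, key=lambda n: n["name"])
    else:
        raise ValueError(f"unknown policy: {policy}")
    return [n["name"] for n in ranked[:count]]

cluster = [{"name": "gw-b", "load": 0.7}, {"name": "gw-a", "load": 0.2}]
chosen = select_edge_nodes(cluster, "least-loaded")  # ["gw-a"]
```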
Capacity optimization in an automated resource-exchange system
The current document is directed to a resource-exchange system that facilitates resource exchange and sharing among computing facilities. The currently disclosed methods and systems employ efficient, distributed-search methods and subsystems within distributed computer systems that include large numbers of geographically distributed data centers to locate resource-provider computing facilities that match the resource needs of resource-consumer computing facilities based on attribute values associated with the needed resources, the resource providers, and the resource consumers. The resource-exchange system monitors and controls resource exchanges on behalf of participants in the resource-exchange system in order to optimize resource usage within participant data centers and computing facilities. By optimizing resource usage, the resource-exchange system drives participant data centers and computing facilities towards maximum operational efficiency.
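The attribute-based matching at the core of the distributed search can be sketched as a filter over provider attributes; the data-center names and numeric attributes below are made up for illustration.

```python
def matching_providers(need, providers):
    """Return names of resource providers whose advertised numeric
    attributes satisfy every requirement in the consumer's need."""
    return [name for name, attrs in providers.items()
            if all(attrs.get(k, 0) >= v for k, v in need.items())]

providers = {
    "dc-east": {"cpus": 64, "memory_gb": 256},
    "dc-west": {"cpus": 16, "memory_gb": 512},
}
matches = matching_providers({"cpus": 32, "memory_gb": 128}, providers)  # ["dc-east"]
```

The disclosed system performs this search across many geographically distributed data centers; the sketch collapses that into a single dictionary lookup.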
TECHNOLOGIES FOR DYNAMIC ACCELERATOR SELECTION
Technologies for dynamic accelerator selection include a compute sled. The compute sled includes a network interface controller to communicate with a remote accelerator of an accelerator sled over a network, where the network interface controller includes a local accelerator and a compute engine. The compute engine is to obtain network telemetry data indicative of a level of bandwidth saturation of the network. The compute engine is also to determine whether to accelerate a function managed by the compute sled. The compute engine is further to determine, in response to a determination to accelerate the function, whether to offload the function to the remote accelerator of the accelerator sled based on the telemetry data. The compute engine is also to assign, in response to a determination not to offload the function to the remote accelerator, the function to the local accelerator of the network interface controller.
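The placement decision reduces to a small amount of branching on the telemetry: run on the CPU if acceleration is not warranted, offload to the remote accelerator if the network has headroom, otherwise fall back to the NIC-local accelerator. The saturation threshold below is an assumed parameter.

```python
def place_function(accelerate, bandwidth_saturation, threshold=0.8):
    """Decide where a function runs, given whether it should be
    accelerated and the network's current bandwidth saturation (0..1).
    The 0.8 threshold is an illustrative assumption."""
    if not accelerate:
        return "cpu"
    if bandwidth_saturation < threshold:
        return "remote"  # network has headroom for the accelerator sled
    return "local"       # saturated network: use the NIC-local accelerator

placement = place_function(accelerate=True, bandwidth_saturation=0.9)  # "local"
```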
Providing alternate resource deployment guidance for use with cloud services
The present disclosure relates to devices, methods, and computer-readable media for providing recommendations for alternate resources to use for cloud services. The devices, methods, and computer-readable media may receive a resource allocation request for a new resource of a computing system and may predict an occurrence of a capacity-related allocation failure for the resource allocation request. The devices, methods, and computer-readable media may identify alternate resources to use for the resource allocation request and may provide recommendations with the alternate resources.
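A simplified sketch of the recommendation flow: if the requested (region, SKU) pair is predicted to lack capacity, suggest other regions offering the same SKU with headroom. The region and SKU names, and the same-SKU-only restriction, are assumptions of this toy example.

```python
def recommend_alternates(request, capacity):
    """If the requested (region, sku) is predicted to lack capacity,
    return alternate (region, sku) pairs with enough free units,
    most headroom first. `capacity` maps (region, sku) -> free units."""
    region, sku, needed = request
    if capacity.get((region, sku), 0) >= needed:
        return []  # original request is expected to succeed
    alts = [(r, s) for (r, s), free in capacity.items()
            if s == sku and free >= needed]
    return sorted(alts, key=lambda key: -capacity[key])

capacity = {("east", "gpu"): 0, ("west", "gpu"): 10, ("north", "gpu"): 4}
alts = recommend_alternates(("east", "gpu", 2), capacity)
# [("west", "gpu"), ("north", "gpu")]
```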
Using multi-phase constraint programming to assign resource guarantees of consumers to hosts
“Resource guarantee” refers to a unit of a resource that is guaranteed and therefore designated to a consumer. A multi-phased constraint programming (CP) approach is used to determine assignments of resource guarantees of a set of consumers to a set of hosts in a resource system. Phase I uses CP to segregate non-split consumers from split consumers. Phase II uses CP to assign each cotenant group of non-split consumers to a respective host. Phase III uses CP to assign resource guarantees of the split consumers across the hosts, wherein resource guarantees of a single split consumer may be split across different hosts. Each phase involves execution of a CP solver based on a different CP data model. A CP data model declaratively expresses combinatorial properties of a problem in terms of constraints. CP is a form of declarative programming.
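The Phase II step can be illustrated as a constraint search: find an assignment of non-split consumers to hosts such that no host's capacity is exceeded. A production system would use a CP solver with a declarative data model; this toy brute-forces the same constraints on a tiny instance.

```python
from itertools import product

def assign_non_split(demands, hosts):
    """Phase-II-style toy: search assignments of non-split consumers
    (name -> guaranteed units) to hosts (list of capacities) so that no
    host's capacity is exceeded. A CP solver would prune this search;
    here we exhaustively enumerate candidate assignments."""
    names = list(demands)
    for combo in product(range(len(hosts)), repeat=len(names)):
        used = [0] * len(hosts)
        for name, host in zip(names, combo):
            used[host] += demands[name]
        if all(u <= cap for u, cap in zip(used, hosts)):
            return dict(zip(names, combo))  # consumer -> host index
    return None  # infeasible

result = assign_non_split({"a": 3, "b": 4, "c": 2}, [5, 5])
# {"a": 0, "b": 1, "c": 0}: host 0 carries 5 units, host 1 carries 4
```

Phases I and III (segregating split consumers, then spreading their guarantees across hosts) would add further constraints and decision variables on top of this core feasibility check.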
USING CONSTRAINT PROGRAMMING TO SET RESOURCE ALLOCATION LIMITATIONS FOR ALLOCATING RESOURCES TO CONSUMERS
Resource allocation limitations include resource limits and resource guarantees. A consumer is vulnerable to interruption by other consumers if using more resources than guaranteed. Resources are designated and/or assigned to consumers based on resource limits and resource guarantees. A constraint programming (CP) solver determines resource limits and resource guarantees that minimize vulnerability and/or vulnerability cost based on resource usage data. A CP data model includes limit elements, guarantee elements, and vulnerability elements. The CP data model further includes guarantee-vulnerability constraints, which rely on exceedance distributions generated from resource usage data for the consumers. The CP data model declaratively expresses combinatorial properties of a problem in terms of constraints. CP is a form of declarative programming.
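The exceedance distribution that feeds the guarantee-vulnerability constraints is straightforward to compute from usage history: for each candidate guarantee level, the fraction of observed samples that exceed it. The sketch below then picks the smallest guarantee whose exceedance probability stays under a vulnerability target; the target value is an illustrative assumption.

```python
def exceedance(usage_samples, level):
    """Empirical exceedance probability: P(usage > level) from history."""
    return sum(1 for u in usage_samples if u > level) / len(usage_samples)

def min_guarantee(usage_samples, max_vulnerability):
    """Smallest guarantee whose exceedance probability (the chance the
    consumer runs beyond its guarantee and becomes interruptible) is at
    most max_vulnerability. A CP solver would trade this off against
    other consumers' guarantees; this considers one consumer alone."""
    for g in sorted(set(usage_samples)):
        if exceedance(usage_samples, g) <= max_vulnerability:
            return g

history = list(range(1, 11))          # usage samples 1..10
g = min_guarantee(history, 0.2)       # 8: only samples 9 and 10 exceed it
```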
Device-Assisted Services for Protecting Network Capacity
Device Assisted Services (DAS) for protecting network capacity is provided. In some embodiments, DAS for protecting network capacity includes monitoring a network service usage activity of the communications device in network communication; classifying the network service usage activity for differential network access control for protecting network capacity; and associating the network service usage activity with a network service usage control policy based on a classification of the network service usage activity to facilitate differential network access control for protecting network capacity.
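The monitor-classify-associate pipeline can be sketched as a small policy function; the activity kinds and policy names are invented for illustration, not the disclosed classification scheme.

```python
def access_policy(activity, network_busy):
    """Toy DAS control step: classify a service usage activity, then map
    the classification to a network access control policy. Background
    traffic is deferred when network capacity needs protecting."""
    background_kinds = ("sync", "update", "prefetch")  # assumed taxonomy
    classification = "background" if activity["kind"] in background_kinds else "interactive"
    if classification == "background" and network_busy:
        return "defer"
    return "allow"

decision = access_policy({"kind": "sync"}, network_busy=True)  # "defer"
```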
METHOD AND SYSTEM FOR MANAGING SERVICE QUALITY ACCORDING TO NETWORK STATUS PREDICTIONS
Aspects of the subject disclosure may include, for example, obtaining predicted available bandwidths for an end user device, monitoring buffer occupancy of a buffer of the end user device, determining bit rates for portions of media content according to the predicted available bandwidths and according to the buffer occupancy, and adjusting bit rates for portions of media content according to the predicted available bandwidths and according to the buffer occupancy during streaming of the media content to the end user device over a wireless network. Other embodiments are disclosed.
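The bit-rate adjustment step can be sketched as picking the highest rung of a bit-rate ladder sustainable at a safety fraction of the predicted bandwidth, stepping down further when buffer occupancy is low. The ladder values, safety factor, and buffer threshold are illustrative assumptions.

```python
def choose_bitrate(ladder_kbps, predicted_bw_kbps, buffer_s,
                   low_buffer_s=5.0, safety=0.8):
    """Pick the highest bit rate sustainable at a safety fraction of the
    predicted bandwidth; halve the budget while the buffer is nearly
    drained so it can refill."""
    budget = predicted_bw_kbps * safety
    if buffer_s < low_buffer_s:
        budget *= 0.5  # be conservative while the buffer refills
    candidates = [rate for rate in ladder_kbps if rate <= budget]
    return max(candidates) if candidates else min(ladder_kbps)

ladder = [500, 1500, 3000, 6000]
rate = choose_bitrate(ladder, predicted_bw_kbps=5000, buffer_s=20.0)  # 3000
```

With the buffer nearly empty (e.g. 2 s), the same prediction yields 1500 kbps, matching the abstract's joint use of predicted bandwidth and buffer occupancy.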