Patent classifications
H04L47/782
ORCHESTRATING EDGE SERVICE WORKLOADS ACROSS EDGE HIERARCHIES
Computing resources are managed in a computing environment comprising a computing service provider and an edge computing network. The edge computing network comprises computing and storage devices configured to extend computing resources of the computing service provider to remote users of the computing service provider. The edge computing network collects capacity and usage data for computing and network resources at the edge computing network. The capacity and usage data is sent to the computing service provider. Based on the capacity and usage data, the computing service provider, using a cost function, determines a distribution of workloads pertaining to a processing pipeline that has been partitioned into the workloads. The workloads can be executed at the computing service provider or the edge computing network.
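The abstract above describes partitioning a pipeline into workloads and placing each one at the edge or at the service provider by minimizing a cost function over capacity and usage data. A minimal sketch of that idea follows; the cost terms (utilization pressure plus a per-megabyte transfer cost), the greedy placement loop, and all names (`cost`, `place_workloads`, the site dictionaries) are illustrative assumptions, not the patent's actual method.

```python
def cost(workload_cpu, site_used, site_capacity, transfer_mb, link_cost_per_mb):
    """Score a candidate site: utilization pressure plus data-transfer cost."""
    if site_used + workload_cpu > site_capacity:
        return float("inf")  # site cannot host this workload at all
    utilization = (site_used + workload_cpu) / site_capacity
    return utilization + transfer_mb * link_cost_per_mb

def place_workloads(workloads, sites):
    """Greedily assign each pipeline partition to the cheapest site.

    workloads: list of dicts with 'cpu' and 'transfer_mb'
    sites: dict name -> {'capacity', 'used', 'link_cost'}
    """
    placement = {}
    for i, w in enumerate(workloads):
        best = min(
            sites,
            key=lambda s: cost(w["cpu"], sites[s]["used"],
                               sites[s]["capacity"], w["transfer_mb"],
                               sites[s]["link_cost"]),
        )
        sites[best]["used"] += w["cpu"]  # commit capacity before placing the next one
        placement[i] = best
    return placement
```

With a small edge site (no transfer cost) and a large cloud site (transfer cost per MB), the first workload lands at the edge and later workloads spill to the cloud once edge capacity is exhausted, matching the edge-or-provider execution choice described above.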
Technologies for switching network traffic in a data center
Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed.
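The switch above determines the link layer protocol of received traffic and forwards it as a function of that determination. A rough sketch of such protocol-aware dispatch is below, using the standard IEEE 802.3 convention that bytes 12–13 of a frame hold either a length (≤ 1500, LLC framing) or an EtherType (≥ 0x0600, Ethernet II); the two-way classification and the handler-table design are simplifying assumptions, not the patent's wire handling.

```python
ETHERTYPE_OFFSET = 12  # bytes 12-13 of an Ethernet frame: length or EtherType

def classify(frame: bytes) -> str:
    """Classify a frame's link layer framing by its type/length field."""
    type_or_len = int.from_bytes(
        frame[ETHERTYPE_OFFSET:ETHERTYPE_OFFSET + 2], "big")
    if type_or_len >= 0x0600:
        return "ethernet"  # Ethernet II (DIX) framing: field is an EtherType
    return "llc"           # 802.3 framing: field is a payload length

def forward(frame: bytes, handlers: dict) -> str:
    """Dispatch the frame to the forwarding routine for its protocol."""
    return handlers[classify(frame)](frame)
```

A real multi-protocol switch would key on more than one field, but the shape is the same: classify first, then forward as a function of the classification.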
Joint resource assigning method and device for allocating resources to terminal
Provided in the embodiments of the present disclosure are a resource determination and information sending method and device, a storage medium and a processor. The resource determination method includes: receiving configuration information, the configuration information carrying indication information for indicating information of a Physical Resource Block (PRB) that supports resource assignment for a terminal with a subcarrier Resource Unit (RU) as a minimum granularity; receiving information carrying a resource assignment field, a Resource Indication Value (RIV) of a specified field in the resource assignment field being used for indicating resource information assigned to the terminal; and determining, according to the indication information and the RIV, a resource assigned to the terminal.
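The abstract above has the terminal recover its assigned resource from a Resource Indication Value (RIV) in the resource assignment field. As a point of reference, the conventional 3GPP-style RIV scheme encodes a (start, length) pair over N resource blocks as RIV = N(L−1)+S when L−1 ≤ ⌊N/2⌋, and N(N−L+1)+(N−1−S) otherwise; the decoder below implements that standard scheme. Whether this embodiment uses exactly this encoding is an assumption, since the abstract leaves the specified field's format open.

```python
def decode_riv(riv: int, n_prb: int):
    """Decode a Resource Indication Value into (start, length).

    Tries the direct encoding first; if the implied allocation would
    run past the last PRB, the wrapped (large-allocation) encoding
    must have been used, so invert that branch instead.
    """
    length = riv // n_prb + 1
    start = riv % n_prb
    if start + length > n_prb:              # direct decode is invalid
        length = n_prb - riv // n_prb + 1   # wrapped encoding branch
        start = n_prb - 1 - riv % n_prb
    return start, length
```

For example, with N = 10 PRBs, an allocation starting at PRB 2 with length 3 encodes as RIV = 10·2+2 = 22, while start 1 with length 8 falls in the wrapped branch and encodes as RIV = 38.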
Systems and methods for dynamically allocating resources based on configurable resource priority
A system described herein may provide a technique for the dynamic selection of configurable resources in an environment that includes a hierarchical or otherwise differentiated arrangement of configurable resources. The environment may include, or may be implemented by, a Distributed Resource Network (“DRN”), which may include hardware or virtual resources that may be configured, including the instantiation of containers, virtual machines, Virtualized Network Functions (“VNFs”), or the like. The DRN may be hierarchical in that some resources of the DRN may provide services to, and/or may otherwise be accessible to, a greater quantity of elements of the DRN or some other network.
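One consequence of the hierarchical DRN arrangement above is that a request can prefer the most local tier that still has capacity and fall back to progressively more global tiers. The tier names, capacity model, and first-fit policy below are illustrative assumptions used only to show that fallback shape:

```python
# Tiers ordered from most local to most global; purely illustrative names.
TIERS = ["far-edge", "edge", "regional", "core"]

def select_tier(required_units, available):
    """Return the most local tier with enough free resource units.

    available: dict tier -> free units. Returns None if no tier fits.
    """
    for tier in TIERS:
        if available.get(tier, 0) >= required_units:
            return tier
    return None
```

A priority-aware variant would weight tiers by the configurable resource priority the abstract mentions rather than by locality alone.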
VERIFICATION OF DATA PROCESSES IN A NETWORK OF COMPUTING RESOURCES
A method for managing data processes in a network of computing resources includes: receiving at least one child request routed from an intermediary device to at least one corresponding destination device, the at least one child request requesting execution of at least one corresponding child data process, each child data process executing at least a portion of a parent data process originated by an instructor device, and each child request including a destination key derived at least in part from an instructor key; storing the at least one child request in at least one storage device; modifying the at least one child request upon receiving a child request modification signal; and generating signals for communicating the child requests to one or more requesting devices.
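The abstract above ties each child request to a destination key derived at least in part from an instructor key. One common way to realize such a derivation is a keyed hash over the destination's identity; HMAC-SHA256 is used in the sketch below as an assumption, since the abstract does not name a derivation function.

```python
import hmac
import hashlib

def derive_destination_key(instructor_key: bytes, destination_id: str) -> bytes:
    """Derive a per-destination key from the instructor key.

    Deterministic, so any party holding the instructor key can later
    verify that a child request's destination key matches its destination.
    """
    return hmac.new(instructor_key, destination_id.encode(), hashlib.sha256).digest()
```

Because the derivation is deterministic, a verifier can recompute the expected destination key for a stored child request and compare it against the key the request carries, which supports the verification role named in the title.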
Automated lifecycle management with flexible scaling and dynamic resource allocation for virtualized cable data plane applications
Systems and methods to support flexible scaling and dynamic resource allocation for virtualized cable data plane applications. The system includes a head end together with a node to provide data to customer devices. An operating container includes a data plane application that provides packets of data for transmission to the node. The data plane application is instantiated with at least one of a virtual networking function and a computing resource function.
Resource unit allocation in mesh networks
Resource Unit (RU) allocation in mesh networks is provided via identifying devices engaged in wireless communication over a shared channel in a mesh network, the devices including a first Access Point (AP), a second AP in wireless communication with the first AP via a first backhaul connection, and a third AP in wireless communication with the first AP via a second backhaul connection; determining a first demand for bandwidth in the shared channel over the first backhaul connection and a second demand for bandwidth over the second backhaul connection; and assigning RUs to the first backhaul connection based on the first demand relative to a total bandwidth demand within the shared channel and to the second backhaul connection based on the second demand relative to the total bandwidth demand within the shared channel, wherein the total bandwidth demand includes the first demand and the second demand.
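The allocation above gives each backhaul connection RUs in proportion to its demand relative to the total demand in the shared channel. A minimal sketch of that proportional split follows; since RUs are discrete, leftover units after rounding down are handed to the links with the largest fractional shares (a largest-remainder choice that is an assumption here, as the abstract only requires proportionality).

```python
def assign_rus(total_rus, demands):
    """Split a channel's RUs across backhaul links in proportion to demand.

    demands: dict link -> demanded bandwidth (any consistent unit).
    """
    total_demand = sum(demands.values())
    shares = {link: total_rus * d / total_demand for link, d in demands.items()}
    alloc = {link: int(s) for link, s in shares.items()}   # floor of each share
    leftover = total_rus - sum(alloc.values())
    # Give remaining RUs to the links with the largest fractional remainder.
    for link in sorted(shares, key=lambda l: shares[l] - alloc[l], reverse=True):
        if leftover == 0:
            break
        alloc[link] += 1
        leftover -= 1
    return alloc
```

With two backhauls demanding 60 and 30 units of bandwidth and 9 RUs available, the first receives 6 RUs and the second 3, mirroring the two-thirds/one-third demand split.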
Hierarchical token buckets
Systems and methods are provided for efficient handling of user requests to access shared resources in a distributed system, which handling may include throttling access to resources on a per-resource basis. A distributed load-balancing system can be logically represented as a hierarchical token bucket cache, where a global cache contains token buckets corresponding to individual resources whose tokens can be dispensed to service hosts each maintaining a local cache with token buckets that limit the servicing of requests to access those resources. Local and global caches can be implemented with a variant of a lazy token bucket algorithm to enable limiting the amount of communication required to manage cache state. High granularity of resource management can thus enable increased throttle limits on user accounts without risking overutilization of individual resources.
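The abstract above builds its local and global caches on a variant of a lazy token bucket algorithm: tokens are only credited when a request arrives, based on elapsed time, so no background refill timer or extra communication is needed. A minimal sketch of one such lazy bucket, plus a per-resource local cache keyed by resource id, is below; the class and parameter names are illustrative, and the distributed global-cache dispensing layer is omitted.

```python
import time

class LazyTokenBucket:
    """Token bucket that refills lazily, on access, from elapsed time."""

    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate            # tokens credited per second
        self.capacity = capacity    # burst ceiling
        self.tokens = capacity
        self.now = now              # injectable clock, handy for testing
        self.last = now()

    def try_acquire(self, tokens_needed=1.0):
        t = self.now()
        # Lazy refill: credit tokens for the time since the last access.
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= tokens_needed:
            self.tokens -= tokens_needed
            return True
        return False              # throttle this request

buckets = {}  # local cache on a service host: resource id -> bucket

def throttle(resource_id, rate=5.0, capacity=5.0):
    """Admit or throttle a request against its resource's bucket."""
    bucket = buckets.setdefault(resource_id, LazyTokenBucket(rate, capacity))
    return bucket.try_acquire()
```

In the hierarchical arrangement described above, each local bucket's capacity would itself be replenished with tokens dispensed from the corresponding global bucket, which is what lets throttle limits rise without risking overutilization of any single resource.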
LOAD ADAPTATION ARCHITECTURE FRAMEWORK FOR ORCHESTRATING AND MANAGING SERVICES IN A CLOUD COMPUTING SYSTEM
According to one aspect of the concepts and technologies disclosed herein, a cloud computing system can include a load adaptation architecture framework that performs operations for orchestrating and managing one or more services that may operate within at least one of layers 4 through 7 of the Open Systems Interconnection (“OSI”) communication model. The cloud computing system also can include a virtual resource layer. The virtual resource layer can include a virtual network function that provides, at least in part, a service. The cloud computing system also can include a hardware resource layer. The hardware resource layer can include a hardware resource that is controlled by a virtualization layer. The virtualization layer can cause the virtual network function to be instantiated on the hardware resource so that the virtual network function can be used to support the service.
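The layering above can be summarized as: the virtualization layer controls the hardware resources and binds a virtual network function (VNF) to one of them so the VNF can back a service. A toy model of that relationship follows; the class names, the core-count capacity model, and the first-fit choice are all illustrative assumptions.

```python
class HardwareResource:
    """One element of the hardware resource layer."""
    def __init__(self, name, cores):
        self.name = name
        self.free_cores = cores

class VirtualizationLayer:
    """Controls hardware resources and instantiates VNFs on them."""
    def __init__(self, resources):
        self.resources = resources

    def instantiate(self, vnf_name, cores):
        # Bind the VNF to the first hardware resource with spare capacity.
        for hw in self.resources:
            if hw.free_cores >= cores:
                hw.free_cores -= cores
                return {"vnf": vnf_name, "host": hw.name}
        raise RuntimeError("no hardware resource has enough capacity")
```

A load adaptation framework would sit above this layer, deciding when to instantiate, scale, or retire VNFs as service demand changes.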