H04L47/788

Systems and methods related to resource distribution for a fleet of machines

Systems and methods related to resource distribution for a fleet of machines are disclosed. A system may include a fleet of machines, each having an associated resource capacity and a resource requirement to perform a task. The system may further include a controller having a resource requirement circuit to determine an aggregated amount of the resource requirement and an aggregated amount of the resource capacity. A resource distribution circuit may adaptively improve an aggregated resource delivery of the resource in response to the aggregated amount of the resource capacity.
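A minimal sketch of the aggregation-and-delivery idea: sum per-machine requirements and capacities, then scale delivery when aggregate capacity falls short. The proportional policy and the `Machine`/`distribute` names are illustrative assumptions, not the patent's actual circuits.

```python
from dataclasses import dataclass

@dataclass
class Machine:
    capacity: float     # resource the machine can contribute
    requirement: float  # resource the machine needs for its task

def distribute(fleet):
    """Aggregate requirement and capacity across the fleet, then scale
    each machine's delivery proportionally when capacity is short
    (a hypothetical policy for illustration)."""
    total_req = sum(m.requirement for m in fleet)
    total_cap = sum(m.capacity for m in fleet)
    scale = min(1.0, total_cap / total_req) if total_req else 0.0
    return [m.requirement * scale for m in fleet]
```

With a fleet whose aggregate capacity is 15 against an aggregate requirement of 20, every machine receives 75% of what it asked for.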

LOAD ADAPTATION ARCHITECTURE FRAMEWORK FOR ORCHESTRATING AND MANAGING SERVICES IN A CLOUD COMPUTING SYSTEM

According to one aspect of the concepts and technologies disclosed herein, a cloud computing system can include a load adaptation architecture framework that performs operations for orchestrating and managing one or more services that may operate within at least one of layers 4 through 7 of the Open Systems Interconnection (“OSI”) communication model. The cloud computing system also can include a virtual resource layer. The virtual resource layer can include a virtual network function that provides, at least in part, a service. The cloud computing system also can include a hardware resource layer. The hardware resource layer can include a hardware resource that is controlled by a virtualization layer. The virtualization layer can cause the virtual network function to be instantiated on the hardware resource so that the virtual network function can be used to support the service.
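The layering described above can be sketched as a virtualization layer that binds virtual network functions (VNFs) to hardware resources so a service is backed by an instantiated VNF. All class and method names here are illustrative assumptions, not the patent's API.

```python
class VirtualizationLayer:
    """Toy virtualization layer: places virtual network functions onto
    free hardware resources (first-fit, for illustration only)."""

    def __init__(self, hardware_resources):
        self.free = list(hardware_resources)  # unused hardware resources
        self.placements = {}                  # vnf name -> hardware resource

    def instantiate(self, vnf_name):
        """Instantiate a VNF on the next free hardware resource."""
        if not self.free:
            raise RuntimeError("no hardware resource available")
        hw = self.free.pop(0)
        self.placements[vnf_name] = hw
        return hw
```

A service would then reference `placements["firewall-vnf"]` (a hypothetical VNF name) to reach its backing hardware.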

USING EDGE-OPTIMIZED COMPUTE INSTANCES TO EXECUTE USER WORKLOADS AT PROVIDER SUBSTRATE EXTENSIONS

Techniques are described for enabling users of a service provider network to create and configure “application profiles” that include parameters related to execution of user workloads at provider substrate extensions. Once an application profile is created, users can request the deployment of user workloads to provider substrate extensions by requesting instance launches based on a defined application profile. The service provider network can then automate the launch and placement of the user's workload at one or more provider substrate extensions using edge-optimized compute instances (e.g., compute instances tailored for execution within provider substrate extension environments). In some embodiments, once such edge-optimized instances are deployed, the service provider network can manage the auto-resizing of the instances in terms of various types of computing resources devoted to the instances, manage the lifecycle of instances to ensure maximum capacity availability at provider substrate extension locations, and perform other instance management processes.
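The profile-driven launch flow can be sketched as a first-fit placement of an edge-optimized instance across substrate extension sites. The `ApplicationProfile` schema and first-fit rule are simplifying assumptions; the real service weighs many more factors.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ApplicationProfile:
    name: str
    vcpus: int
    memory_gib: int

def launch_from_profile(profile, substrate_extensions):
    """Pick the first provider substrate extension with enough free
    capacity for an instance matching the profile, and reserve that
    capacity (simplified first-fit placement)."""
    for site, free in substrate_extensions.items():
        if free["vcpus"] >= profile.vcpus and free["memory_gib"] >= profile.memory_gib:
            free["vcpus"] -= profile.vcpus
            free["memory_gib"] -= profile.memory_gib
            return site
    return None  # no extension can host the workload right now
```

A user defines the profile once, then every launch request references it instead of restating instance parameters.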

AUTOMATED DECISION TECHNIQUES FOR CONTROLLING RESOURCE ACCESS

A durability assessment system may receive a request, from a computing system, for a durability index describing an entity. The durability assessment system may determine the durability index based on information about the entity's resource usage, such as a resource availability score or a resource allocation score. The durability assessment system may compare the obtained resource availability score and resource allocation score to ranges associated with a set of durability indices. Based on the comparison, the durability assessment system may determine a durability index for the entity. The durability index may indicate an ability of the entity to return accessed resources. In some cases, the durability assessment system may provide the durability index to an allocation computing system that is configured to determine whether to grant access to resources based on the durability index.
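The score-to-ranges comparison can be sketched as a banded lookup. The band thresholds and the high/medium/low labels below are made-up values for illustration; the patent does not specify concrete ranges.

```python
def durability_index(availability, allocation, bands=None):
    """Map a resource availability score and a resource allocation score
    onto a durability index by comparing them to per-index ranges.
    Bands are checked from most to least durable."""
    if bands is None:
        bands = [  # (index, min availability score, min allocation score)
            ("high", 0.8, 0.7),
            ("medium", 0.5, 0.4),
            ("low", 0.0, 0.0),
        ]
    for index, min_avail, min_alloc in bands:
        if availability >= min_avail and allocation >= min_alloc:
            return index
    return "low"

def grant_access(index):
    """Allocation-system policy: grant resources only to sufficiently
    durable entities (an assumed policy, for illustration)."""
    return index in ("high", "medium")
```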

Processing allocation in data center fleets
11637791 · 2023-04-25

A method and system for allocating tasks among processing devices in a data center. The method may include receiving a request to allocate a task to one or more processing devices, the request indicating a required bandwidth for performing the task, and a list of predefined processing device groups connected to a host server, the list indicating which processing device groups are available for task allocation and the available bandwidth of each available group; assigning the task to a processing device group having an available bandwidth greater than or equal to the required bandwidth for performing the task; and updating the list to indicate that the processing device group to which the task is assigned, and any other processing device group sharing at least one processing device with it, is unavailable. The task may be assigned to the available processing device group needing the lowest amount of power.
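The allocate-then-invalidate step can be sketched directly: pick the lowest-power group with sufficient bandwidth, then mark it and every group sharing a device with it unavailable. The group-record schema (`devices`, `bandwidth`, `power`, `available`) is an assumed encoding of the patent's list.

```python
def assign_task(required_bandwidth, groups):
    """Assign a task to an available processing device group with enough
    bandwidth, preferring the lowest power need, then mark the chosen
    group and every group sharing a processing device with it unavailable."""
    candidates = [
        (g["power"], name) for name, g in groups.items()
        if g["available"] and g["bandwidth"] >= required_bandwidth
    ]
    if not candidates:
        return None  # nothing can host the task right now
    _, chosen = min(candidates)  # lowest power wins
    shared = groups[chosen]["devices"]
    for name, g in groups.items():
        if name == chosen or shared & g["devices"]:
            g["available"] = False
    return chosen
```

Note how invalidating overlapping groups prevents a single physical device from being double-booked by two logical groups.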

Orchestrating edge service workloads across edge hierarchies

Computing resources are managed in a computing environment comprising a computing service provider and an edge computing network. The edge computing network comprises computing and storage devices configured to extend computing resources of the computing service provider to remote users of the computing service provider. The edge computing network collects capacity and usage data for computing and network resources at the edge computing network. The capacity and usage data is sent to the computing service provider. Based on the capacity and usage data, the computing service provider, using a cost function, determines a distribution of workloads pertaining to a processing pipeline that has been partitioned into the workloads. The workloads can be executed at the computing service provider or the edge computing network.
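The cost-function-driven split can be sketched as a greedy placement of each pipeline workload at whichever tier is cheaper, subject to the edge's reported remaining capacity. The greedy rule and the `cost(name, site)` signature are simplifying assumptions about the patent's cost function.

```python
def place_workloads(workloads, edge_capacity, cost):
    """Distribute a partitioned processing pipeline between the edge
    computing network and the computing service provider ("cloud").

    workloads:     list of (name, capacity demand) pairs
    edge_capacity: remaining capacity reported by the edge network
    cost:          cost(name, site) -> float, site in {"edge", "cloud"}
    """
    placement = {}
    for name, demand in workloads:
        cheaper_at_edge = cost(name, "edge") <= cost(name, "cloud")
        if cheaper_at_edge and demand <= edge_capacity:
            placement[name] = "edge"
            edge_capacity -= demand
        else:
            placement[name] = "cloud"  # fall back when edge is costly or full
    return placement
```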

Allocating cloud resources in accordance with predicted deployment growth

The present disclosure relates to systems, methods, and computer-readable media for predicting deployment growth on one or more node clusters and selectively permitting deployment requests on a per-cluster basis. For example, systems disclosed herein may apply a tenant growth prediction system trained to output a deployment growth classification indicative of a predicted growth of deployments on a node cluster. The systems disclosed herein may further utilize the deployment growth classification to determine whether a deployment request may be permitted while maintaining a sufficiently sized capacity buffer to avoid deployment failures for existing deployments previously implemented on the node cluster. By selectively permitting or denying deployments based on a variety of factors, the systems described herein can more efficiently utilize cluster resources on a per-cluster basis without causing a significant increase in deployment failures for existing customers.
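The admission decision can be sketched as a buffer check sized by the cluster's predicted growth class: a fast-growing cluster reserves more headroom for existing tenants before accepting new deployments. The growth-class labels and buffer fractions are illustrative assumptions.

```python
def admit_deployment(request_cores, cluster):
    """Permit a deployment request only if, after granting it, the
    cluster still retains a capacity buffer sized for its predicted
    deployment growth (hypothetical buffer fractions)."""
    buffer_fraction = {"low": 0.05, "medium": 0.15, "high": 0.30}
    reserved = cluster["total_cores"] * buffer_fraction[cluster["growth_class"]]
    free = cluster["total_cores"] - cluster["used_cores"]
    return free - request_cores >= reserved
```

A cluster classified `"high"` thus rejects requests a `"low"`-growth cluster of identical utilization would accept.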

Pre-allocating resources with hierarchy-based constraints

In a resource-pooling system, predictions can be made as to when and how resources may be needed by particular processes in the system. Requests can be made preemptively to client systems to pre-allocate resources such that resources are ready to use when needed. Client systems can submit constraints on how particular resources may be used by the system. In order to efficiently evaluate these constraints, the system may be organized into a hierarchy of groups, subsystems, and processes, and the constraints may be formulated to match this hierarchy. When resources need to be allocated, constraints may be evaluated using an algorithm that traverses levels of the hierarchy to quickly identify pre-allocations that are available for a particular process based on its location in the system hierarchy.
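The hierarchy-matching algorithm can be sketched with paths: a constraint scoped to a group or subsystem covers every process beneath it, so evaluating constraints for a process reduces to prefix matching against its location in the hierarchy. The tuple-path encoding is an assumed representation.

```python
def eligible_preallocations(process_path, preallocations):
    """Collect pre-allocated resources whose constraint scope covers the
    requesting process.

    process_path:   e.g. ("groupA", "subsys1", "proc7") (illustrative)
    preallocations: list of (scope path, resource) pairs, where a scope
                    covers every process whose path it prefixes
    """
    return [
        resource
        for scope, resource in preallocations
        if process_path[:len(scope)] == scope  # scope is a prefix of the path
    ]
```

Because each check is a prefix comparison, evaluation traverses at most the depth of the hierarchy rather than the whole constraint set per level.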

Forward market renewable energy credit prediction from human behavioral data

Systems and methods for predicting forward market pricing for renewable energy credits based on human behavioral data are disclosed. An example transaction-enabling system may include a forward market circuit to access a forward energy credit market and a market forecasting circuit to automatically generate a forecast for a forward market price of an energy credit in the forward energy credit market, where the forecast is based at least in part on human behavior information collected from at least one human behavioral data source. In the example system, the energy credit may include a renewable energy credit associated with a renewable energy system, and a smart contract circuit may perform at least one of selling the renewable energy credit or purchasing the renewable energy credit on the forward energy credit market in response to the forecasted forward market price of the energy credit.
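The forecast-to-trade step can be sketched as a threshold rule: act only when the behavior-based forecast diverges enough from the current forward price. The decision rule and threshold are illustrative assumptions, not the patented method.

```python
def smart_contract_action(forecast_price, market_price, threshold=0.05):
    """Decide whether to buy, sell, or hold a renewable energy credit on
    the forward market given a behavior-based price forecast
    (a toy decision rule with a 5% default edge threshold)."""
    edge = (forecast_price - market_price) / market_price
    if edge > threshold:
        return "buy"    # forecast says the credit is underpriced today
    if edge < -threshold:
        return "sell"   # forecast says the credit is overpriced today
    return "hold"       # forecast is within the noise band
```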

Facilitating human intervention in an autonomous device
11513505 · 2022-11-29

Methods, apparatuses, systems, and computer program products for facilitating human intervention in an autonomous device are disclosed. In a particular embodiment, a method of facilitating human intervention in an autonomous device includes selecting, by a service controller, from a first plurality of human interventionists, a first set of human interventionists to respond to a request associated with an autonomous device; transmitting, by the service controller, the request to a first set of interventionist devices, each interventionist device of the first set of interventionist devices being associated with a particular human interventionist in the first set of human interventionists; and receiving, by the service controller, a first set of interventionist responses to the request from the first set of interventionist devices.
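The select/transmit/receive loop can be sketched as a fan-out over interventionist devices. The top-k-by-skill selection and the callable-device model are placeholder assumptions for the service controller's real selection and transport logic.

```python
def dispatch_request(request, interventionists, devices, k=2):
    """Select a first set of human interventionists for an autonomous
    device's request, fan the request out to their devices, and gather
    their responses. Selection is top-k by skill match (illustrative)."""
    ranked = sorted(
        interventionists,
        key=lambda h: -h["skill"].get(request["kind"], 0),
    )
    selected = ranked[:k]
    # Transmit to each selected interventionist's device and collect replies.
    responses = [devices[h["name"]](request) for h in selected]
    return [h["name"] for h in selected], responses
```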