Patent classifications
G06F2209/503
Modular Electronic Devices with Contextual Task Management and Performance
The present disclosure provides modular electronic devices that are capable of managing task performance based on a particular context of computing resources currently available from the ad hoc combination of devices.
Modular Electronic Devices with Prediction of Future Tasks and Capabilities
The present disclosure provides modular electronic devices that are capable of predicting future availability of module combinations and associated computing resources and/or capable of predicting future tasks. Based on such predictions, the module or modular electronic device can choose to schedule or delay certain tasks, alter resource negotiation behavior/strategy, or select from among various different resource providers. As an example, a modular electronic device of the present disclosure can identify one or more computing tasks to be performed; predict one or more future sets of computing resources that will be respectively available to the modular electronic device at one or more future time periods; and determine a schedule for performance of the one or more computing tasks based at least in part on the prediction of the one or more future sets of computing resources that will be respectively available at the one or more future time periods.
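The scheduling step described above can be sketched in a few lines. This is a minimal illustration, not the patent's method: the task structure, the greedy largest-first ordering, and all names are assumptions.

```python
# Hypothetical sketch of prediction-based task scheduling: each task is
# assigned to the earliest predicted future time period whose resource
# set can accommodate it, larger tasks first so they claim scarce
# capacity early. Data shapes are illustrative only.

def schedule_tasks(tasks, predicted_resources):
    """tasks: list of (task_name, required_units).
    predicted_resources: predicted available units per future period.
    Returns {task_name: period_index} for tasks that fit."""
    remaining = list(predicted_resources)
    schedule = {}
    for name, need in sorted(tasks, key=lambda t: -t[1]):
        for period, avail in enumerate(remaining):
            if avail >= need:
                remaining[period] -= need
                schedule[name] = period
                break
    return schedule

print(schedule_tasks([("sync", 2), ("render", 5)], [4, 6]))
# → {'render': 1, 'sync': 0}
```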
Systems and methods for cloud computing data processing
Systems and methods allow users to leverage multiple disparate cloud solutions, offered by disparate service providers, in a unified and cohesive manner. A system includes a task repository configured to store a plurality of task parameters, wherein the task parameters cause one or more tasks to run on cloud services when provided to the cloud services including a dedicated solution and a shared solution, wherein the task parameters include common parameters and proprietary parameters, wherein the common parameters are common to two or more disparate cloud services, and wherein the proprietary parameters are unique to one of the cloud services. The system also includes an interface configured to receive task input including the plurality of task parameters.
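The split between common and proprietary parameters can be illustrated with a small sketch; the merge function and parameter names below are assumptions, not the patent's interface.

```python
# Illustrative sketch of a task repository combining parameters common
# to disparate cloud services with parameters proprietary to one
# service (e.g., a dedicated or shared solution).

def build_task(common, proprietary, provider):
    """Merge common parameters with the proprietary parameters of the
    chosen provider, proprietary values taking precedence."""
    params = dict(common)
    params.update(proprietary.get(provider, {}))
    return params

common = {"cpu": 2, "memory_gb": 4}
proprietary = {"dedicated": {"host_group": "hg-1"},
               "shared": {"burstable": True}}
print(build_task(common, proprietary, "shared"))
# → {'cpu': 2, 'memory_gb': 4, 'burstable': True}
```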
Redistributing update resources during update campaigns
Disclosed are various embodiments for controlling the number of active updates that can occur at a given time on devices associated with tenants (e.g., organizations) and subtenants (e.g., sub-organizations) in a multi-tenant environment. In particular, each tenant and subtenant is assigned a throttle corresponding to different update parameters (e.g., a number of devices executing an active update, an amount of data to be downloaded during a campaign, a time for completing the update campaign, etc.). When an update campaign is established, the update campaign can define the different devices that are to be updated. In some situations, the number of active updates required may exceed the resources allotted to a given subtenant. When a subtenant requires more resources than it is assigned to complete the update, the subtenant can borrow resources defined by the update parameters from a subtenant peer that has a surplus.
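The borrowing mechanism can be sketched as follows; the throttle structure and field names are illustrative assumptions, not the patent's data model.

```python
# Hedged sketch of subtenant throttle borrowing: when a subtenant's
# required active updates exceed its assigned throttle, it borrows
# unused capacity from peers with a surplus.

def borrow_throttle(subtenants, name, required):
    """subtenants: {name: {'limit': int, 'active': int}}.
    Returns True if 'name' can run 'required' more active updates,
    borrowing surplus limit from peers when its own is exhausted."""
    own = subtenants[name]
    deficit = required - (own['limit'] - own['active'])
    if deficit <= 0:
        return True
    for peer, quota in subtenants.items():
        if peer == name:
            continue
        take = min(quota['limit'] - quota['active'], deficit)
        quota['limit'] -= take          # peer lends part of its throttle
        own['limit'] += take
        deficit -= take
        if deficit == 0:
            return True
    return False
```

For example, a subtenant at its limit of 5 can still start 3 updates by borrowing 3 units of throttle from a peer holding 8 unused units.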
RESOURCE SCHEDULE OPTIMIZATION
Embodiments of the present invention provide methods, computer program products, and systems for evaluating expressions. Embodiments of the present invention can be used to receive a set of program instructions to be evaluated in a virtualized environment and determine an evaluation strategy based, at least in part, on an availability of CPU resources. The CPU resources include resources impacted by the use of virtual machines and hypervisors. Embodiments of the present invention can, responsive to determining that there are sufficient CPU resources available, evaluate the set of program instructions according to the evaluation strategy using the CPU resources.
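A minimal sketch of the strategy decision, assuming invented thresholds and strategy names; the patent does not specify these values.

```python
# Illustrative choice of an evaluation strategy from the CPU resources
# left after virtualization (VM/hypervisor) overhead.

def choose_strategy(available_cpus, hypervisor_overhead):
    """Pick how to evaluate a set of program instructions based on
    effective CPU availability in the virtualized environment."""
    effective = available_cpus - hypervisor_overhead
    if effective >= 4:
        return "parallel"       # enough cores: evaluate concurrently
    if effective >= 1:
        return "sequential"     # limited cores: evaluate one at a time
    return "defer"              # insufficient resources: wait
```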
Data center capability summarization
A method for summarizing capabilities in a hierarchically arranged data center includes receiving capabilities information, wherein the capabilities information is representative of capabilities of respective nodes at a first hierarchical level in the hierarchically arranged data center, clustering nodes based on groups of capabilities information, generating a histogram that represents individual node clusters, and sending the histogram to a next higher level in the hierarchically arranged data center. Relative rankings of capabilities may be used to order a sequence of clustering operations.
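The cluster-then-histogram step can be sketched compactly; the representation of capabilities as sets is an assumption for illustration.

```python
# Sketch of capability summarization: nodes with identical capability
# groups form a cluster, and a histogram of cluster sizes is what gets
# sent to the next higher level of the data center hierarchy.

from collections import Counter

def summarize_capabilities(nodes):
    """nodes: {node_id: frozenset of capability strings}.
    Returns a histogram mapping each distinct capability group to the
    number of nodes in that cluster."""
    return Counter(nodes.values())

nodes = {"n1": frozenset({"gpu", "ssd"}),
         "n2": frozenset({"gpu", "ssd"}),
         "n3": frozenset({"ssd"})}
hist = summarize_capabilities(nodes)
print(hist[frozenset({"gpu", "ssd"})])  # → 2
```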
Enhanced availability for message services
An enhanced availability environment for facilitating a message service provided by a plurality of service elements is disclosed herein. The enhanced availability environment comprises a monitoring element and an enhanced availability element. The monitoring element monitors a first service element of the plurality of service elements for a monitored characteristic, generates monitoring information corresponding to the monitored characteristic, and communicates the monitoring information to the enhanced availability element. The enhanced availability element determines an availability of the first service element for the message service based at least in part on the monitoring information and an availability characteristic of the first service element, and communicates the availability to initiate an availability action.
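The split between the monitoring element and the enhanced availability element can be illustrated with a toy decision; the monitored characteristic (latency) and the action names are assumptions, not the patent's specifics.

```python
# Hypothetical sketch: a monitoring element reports a characteristic
# (here, response latency) for a service element; the enhanced
# availability element compares it against the element's availability
# characteristic and returns an availability action.

def determine_availability(monitored_latency_ms, max_latency_ms):
    """Decide an availability action for the first service element."""
    if monitored_latency_ms <= max_latency_ms:
        return "available"
    return "failover"   # availability action: route messages elsewhere
```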
Level two first-in-first-out transmission
A hardware state machine connected to a processor, the hardware state machine configured to receive operational codes from the processor; a multiplexer connected to the processor, the hardware state machine and a checksum circuit, the multiplexer configured to receive data from the processor; and a transmit circuit connected to the multiplexer, the transmit circuit configured to receive data from the multiplexer for transmission to a far end device, wherein the hardware state machine is further configured to, responsive to receiving one or more operational codes from the processor: cause the checksum circuit to alter a checksum value of a first data packet being transmitted by the transmit circuit; and cause the transmit circuit to preempt transmission of the first data packet and begin transmitting a second data packet once the checksum value so altered has been transmitted from the transmit circuit.
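Although the claim describes hardware, the preemption behavior can be modeled in software: corrupt the in-flight packet's checksum so the far end discards it, then immediately start the second packet. Everything below is an illustrative model, not the circuit.

```python
# Software model of the described preemption: at the preemption point
# an intentionally altered checksum byte is emitted (so the receiver
# drops the first packet), then the second packet begins at once.

def transmit_with_preempt(first_packet, second_packet, preempt_at):
    """Return the byte stream seen on the wire; if preempted mid-packet,
    emit one corrupted checksum byte, then the second packet."""
    out = []
    for i, byte in enumerate(first_packet):
        if i == preempt_at:
            out.append(0xFF ^ byte)      # altered checksum value
            out.extend(second_packet)    # begin second packet
            return out
        out.append(byte)
    out.extend(second_packet)            # no preemption occurred
    return out
```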
Interactive GUI for bin-packing virtual machine workloads based on predicted availability of compute instances and scheduled use of the compute instances
Techniques are described for optimizing the allocation of computing resources provided by a service provider network—for example, compute resources such as virtual machine (VM) instances, containers, standalone servers, and possibly other types of computing resources—among computing workloads associated with a user or group of users of the service provider network. A service provider network provides various tools and interfaces to help businesses and other organizations optimize the utilization of computing resource pools obtained by the organizations from the service provider network, including the ability to efficiently schedule use of the resources among workloads having varying resource demands, usage patterns, relative priorities, execution deadlines, or combinations thereof. A service provider network further provides various graphical user interfaces (GUIs) to help users visualize and manage the historical and scheduled uses of computing resources by users' workloads according to user preferences.
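The bin-packing idea behind these interfaces can be sketched with a first-fit-decreasing pass; the one-dimensional demand model and instance shapes are simplifying assumptions.

```python
# Sketch of packing workloads onto a pool of compute instances using
# first-fit decreasing: sort demands largest-first, place each into the
# first instance with room.

def pack_workloads(workloads, instance_capacity, instance_count):
    """workloads: list of resource demands. Returns one list of demands
    per instance, or None if the pool cannot hold them all."""
    bins = [[] for _ in range(instance_count)]
    free = [instance_capacity] * instance_count
    for demand in sorted(workloads, reverse=True):
        for i in range(instance_count):
            if free[i] >= demand:
                bins[i].append(demand)
                free[i] -= demand
                break
        else:
            return None   # no instance can fit this workload
    return bins

print(pack_workloads([3, 5, 2, 4], 8, 2))  # → [[5, 3], [4, 2]]
```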
Service or network function workload preemption
Techniques are described to provide service or network function workload preemption. In one example, a method includes identifying a network location at which a first function can be instantiated; determining whether compute resources are available at the network location to instantiate the first function; based on determining that compute resources are available, instantiating the first function; based on determining that compute resources are not available, determining whether preemption of a second function can be performed at the network location, wherein determining whether preemption of the second function can be performed is based, at least in part, on a comparison between a setup priority of the first function and a holdover priority of the second function; and, based on determining that preemption of the second function at the network location can be performed, performing preemption of the second function and instantiating the first function at the network location.
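The preemption decision in the abstract can be sketched directly; the convention that a lower number means higher priority, and the field names, are assumptions for illustration.

```python
# Minimal sketch: a new function may preempt a running one only if its
# setup priority beats the incumbent's holdover priority.

def place_function(new_fn, running_fns, capacity):
    """Try to instantiate new_fn at a network location; if resources
    are short, preempt the running function with the weakest holdover
    priority, if the priority comparison allows it."""
    used = sum(f['cpu'] for f in running_fns)
    if capacity - used >= new_fn['cpu']:
        running_fns.append(new_fn)
        return 'instantiated'
    victim = max(running_fns, key=lambda f: f['holdover_priority'])
    if new_fn['setup_priority'] < victim['holdover_priority']:
        running_fns.remove(victim)       # preempt the second function
        running_fns.append(new_fn)       # instantiate the first
        return 'preempted'
    return 'rejected'
```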