G06F9/5027

MULTI-DEVICE PROCESSING ACTIVITY ALLOCATION

Allocating processing activities among multiple computing devices can include identifying multiple computing activities of a computer-executable process and, for each computing activity identified, estimating in real time the computing resources needed. The identifying can be in response to detecting a computer-executable instruction executed by one of multiple communicatively coupled computing devices, and the computer-executable instruction can be associated with the computer-executable process. A current condition and configuration of each of the computing devices can be determined in real time. For each computing device, an effect induced by executing one or more of the computing activities can be predicted, the predicting based on each computing device's current condition and configuration and performed by a machine learning model trained using data collected from prior real-time processing of example process activities. Based on the predicting, the computing activities can be allocated in real time among the computing devices.
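
As a rough illustration only, the following Python sketch mimics the allocation loop described above. The Device and Activity types, the resource fields, and the greedy placement policy are assumptions, and the trained machine learning model is stubbed with a simple resource-pressure heuristic so the example runs end to end.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    cpu_free: float      # fraction of CPU currently idle (assumed metric)
    mem_free_mb: float   # free memory in MB (assumed metric)

@dataclass
class Activity:
    name: str
    est_cpu: float       # estimated CPU demand, fraction of a core
    est_mem_mb: float    # estimated memory demand in MB

def predict_effect(device: Device, activity: Activity) -> float:
    """Stand-in for the trained ML model that predicts the induced effect.

    A real system would use a model trained on data from prior real-time
    processing of example activities; this heuristic just scores resource
    pressure so the sketch is executable.
    """
    cpu_pressure = activity.est_cpu / max(device.cpu_free, 1e-6)
    mem_pressure = activity.est_mem_mb / max(device.mem_free_mb, 1e-6)
    return cpu_pressure + mem_pressure

def allocate(activities, devices):
    """Greedily place each activity on the device with least predicted effect."""
    plan = {}
    for act in activities:
        best = min(devices, key=lambda d: predict_effect(d, act))
        plan[act.name] = best.name
        best.cpu_free = max(best.cpu_free - act.est_cpu, 0.0)        # update condition
        best.mem_free_mb = max(best.mem_free_mb - act.est_mem_mb, 0.0)
    return plan

if __name__ == "__main__":
    devices = [Device("phone", 0.3, 512.0), Device("laptop", 0.8, 4096.0)]
    activities = [Activity("decode", 0.5, 256.0), Activity("render", 0.2, 128.0)]
    print(allocate(activities, devices))   # {'decode': 'laptop', 'render': 'laptop'}
```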

Provisioning engine hosting solution for a cloud orchestration environment

Systems and methods provide for execution of different provisioning engines within a resource provider environment. A user may submit a request to provision one or more resources using a particular provisioning engine, which may include a provisioning engine that is non-native to the resource provider environment. A control plane may evaluate and transmit requests to the provisioning engine executing within the resource provider environment. Operations associated with the provisioning engine may be executed, and their results stored within a data store, where they may be processed upon completion and made accessible.
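
The control-plane flow might be modeled, very loosely, as below. The ControlPlane class, the engine registry, and the in-memory data store are illustrative assumptions, with a placeholder function standing in for a non-native engine executing inside the provider environment.

```python
from typing import Callable, Dict

class ControlPlane:
    def __init__(self):
        self.engines: Dict[str, Callable[[dict], dict]] = {}
        self.data_store: list = []

    def register_engine(self, name: str, handler: Callable[[dict], dict]):
        """Register a provisioning engine (native or non-native)."""
        self.engines[name] = handler

    def provision(self, request: dict) -> dict:
        engine = self.engines[request["engine"]]   # evaluate the request
        result = engine(request["resources"])      # run inside the provider env
        record = {"request": request, "result": result, "status": "complete"}
        self.data_store.append(record)             # persist operation output
        return record

def external_engine(resources: dict) -> dict:
    # Placeholder for a non-native engine hosted in the provider environment.
    return {name: "provisioned" for name in resources}

cp = ControlPlane()
cp.register_engine("external-engine", external_engine)
print(cp.provision({"engine": "external-engine",
                    "resources": {"vm-1": {"cpu": 2}, "bucket-1": {}}}))
```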

Task delegation and cooperation for automated assistants

Task delegation and cooperation for automated assistants is presented. A method comprises: receiving, at a centralized support center that is in contact with a plurality of automated assistants including a first automated assistant and a second automated assistant, a request to perform a task on behalf of an individual; formulating, at the centralized support center, the task as a plurality of sub-tasks including a first sub-task and a second sub-task; delegating the first sub-task to the first automated assistant, based on a determination at the centralized support center that the first automated assistant is capable of performing the first sub-task; and delegating the second sub-task to the second automated assistant, based on a determination at the centralized support center that the second automated assistant is capable of performing the second sub-task.
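
A minimal sketch of the capability-based delegation follows, assuming each assistant advertises a plain capability list; how capability is actually determined is not specified in the abstract, so that part is invented for illustration.

```python
class Assistant:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

    def can_perform(self, sub_task: str) -> bool:
        return sub_task in self.capabilities

    def perform(self, sub_task: str) -> str:
        return f"{self.name} completed '{sub_task}'"

class SupportCenter:
    """Centralized support center that formulates and delegates sub-tasks."""
    def __init__(self, assistants):
        self.assistants = assistants

    def handle(self, sub_tasks):
        results = []
        for sub in sub_tasks:
            # Delegate each sub-task to the first assistant capable of it.
            assistant = next(a for a in self.assistants if a.can_perform(sub))
            results.append(assistant.perform(sub))
        return results

center = SupportCenter([Assistant("kitchen-bot", ["order groceries"]),
                        Assistant("calendar-bot", ["book table"])])
# The "plan dinner" task has been formulated as two sub-tasks:
print(center.handle(["order groceries", "book table"]))
```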

Software defined automation system and architecture

Embodiments of a software defined automation (SDA) system provide a reference architecture for designing, managing, and maintaining a highly available, scalable, and flexible automation system. In some embodiments, an SDA system can include a localized subsystem comprising a system controller node and multiple compute nodes. The multiple compute nodes can be communicatively coupled to the system controller node via a first communication network. The system controller node can manage the multiple compute nodes and the virtualization of a control system on a compute node via the first communication network. The virtualized control system includes virtualized control system elements connected to a virtual network, which in turn is connected to a second communication network, enabling the virtualized control system elements to control a physical control system element via the second communication network.
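
The topology can be sketched conceptually as below. The class names, the network identifiers, and the virtualize() operation are assumptions chosen to make the two-network structure concrete, not details from the patent.

```python
class ComputeNode:
    def __init__(self, name):
        self.name = name
        self.guests = []          # virtualized control system elements

class VirtualNetwork:
    """Virtual network bridged to the second (field-level) network."""
    def __init__(self, bridged_physical_network):
        self.bridge = bridged_physical_network

    def route(self, element, action):
        return f"{action} -> {element} via {self.bridge}"

class VirtualControlElement:
    def __init__(self, name, virtual_network):
        self.name = name
        self.virtual_network = virtual_network

    def command(self, physical_element, action):
        # Traffic reaches the physical element via the bridged second network.
        return self.virtual_network.route(physical_element, action)

class SystemController:
    """Manages compute nodes over the first (management) network."""
    def __init__(self, nodes):
        self.nodes = nodes

    def virtualize(self, node_name, element_name, vnet):
        node = next(n for n in self.nodes if n.name == node_name)
        element = VirtualControlElement(element_name, vnet)
        node.guests.append(element)
        return element

controller = SystemController([ComputeNode("compute-1")])
vnet = VirtualNetwork(bridged_physical_network="fieldbus-net")
plc = controller.virtualize("compute-1", "vPLC-1", vnet)
print(plc.command("valve-17", "open"))
```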

Resource determination based on resource definition data

In one example, a computer implemented method may include retrieving resource definition data corresponding to an endpoint. The resource definition data includes resource type information. Further, an API response may be obtained from the endpoint by querying the endpoint using an API call. Furthermore, the API response may be parsed and a resource model corresponding to the resource definition data may be populated using the parsed API response. The resource model may include resource information and associated metric information corresponding to a resource type in the resource type information. Further, a resource and/or metric data associated with the resource may be determined using the populated resource model. The resource may be associated with an application being executed in the endpoint.
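
A hedged sketch of this flow follows. The definition schema, the resource type and metric names, and the fake query_endpoint() call are all invented for illustration; the abstract does not specify a format.

```python
import json

RESOURCE_DEFINITION = {
    "endpoint": "https://example.invalid/api",
    "resource_types": {
        "web_server": {"metrics": ["cpu_usage", "request_rate"]},
    },
}

def query_endpoint(endpoint: str) -> str:
    # Stand-in for the real API call to the endpoint.
    return json.dumps({"web_server": [
        {"name": "web-01", "cpu_usage": 0.42, "request_rate": 118.0},
    ]})

def populate_resource_model(definition: dict) -> dict:
    response = json.loads(query_endpoint(definition["endpoint"]))  # parse response
    model = {}
    for rtype, spec in definition["resource_types"].items():
        model[rtype] = [
            {"resource": item["name"],
             "metrics": {m: item[m] for m in spec["metrics"]}}
            for item in response.get(rtype, [])
        ]
    return model

model = populate_resource_model(RESOURCE_DEFINITION)
print(model["web_server"][0])   # a resource plus its associated metric data
```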

Resource determination based on resource definition data

In one example, a computer implemented method may include retrieving resource definition data corresponding to an endpoint. The resource definition data includes adapter information and resource type information. Further, an adapter instance may be generated using the adapter information to establish communication with the endpoint. Furthermore, an API response may be obtained, via the adapter instance, from the endpoint by querying the endpoint using an API call. Further, the API response may be parsed. Further, a resource model corresponding to the resource definition data may be populated using the parsed API response. The resource model may include resource information and associated metric information corresponding to a resource type in the resource type information. Furthermore, a resource and/or metric data associated with the resource may be determined using the populated resource model. The resource may be associated with an application being executed in the endpoint.
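
This variant differs from the previous one mainly in the adapter step, which might be sketched as below. The Adapter class, its credential field, and make_adapter() are hypothetical; the query returns an already-parsed dict for brevity.

```python
class Adapter:
    """Generated from adapter information to talk to a specific endpoint."""
    def __init__(self, endpoint, credentials):
        self.endpoint = endpoint
        self.credentials = credentials

    def query(self) -> dict:
        # A real adapter would authenticate, issue the API call, and the
        # caller would parse the raw response; this stub skips straight
        # to parsed data so the sketch runs.
        return {"web_server": [{"name": "web-01", "cpu_usage": 0.42}]}

def make_adapter(definition: dict) -> Adapter:
    info = definition["adapter"]
    return Adapter(definition["endpoint"], info.get("credentials"))

definition = {
    "endpoint": "https://example.invalid/api",
    "adapter": {"credentials": "token-placeholder"},
    "resource_types": {"web_server": {"metrics": ["cpu_usage"]}},
}
adapter = make_adapter(definition)   # establish communication with the endpoint
response = adapter.query()           # API response obtained via the adapter
print(response["web_server"][0])
```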

Dynamic selection of cores for processing responses

Methods, systems, and devices for the dynamic selection of cores for processing responses are described. A memory sub-system can receive, from a host system, a read command to retrieve data. The memory sub-system can include a first core and a second core. The first core can process the read command based on receiving the read command. The first core can identify the second core for processing a read response associated with the read command. The first core can issue an internal command to retrieve the data from a memory device of the memory sub-system. The internal command can include an indication of the second core selected to process the read response.
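
A toy model of the two-core hand-off is sketched below. The core-selection policy (shortest pending queue) and the InternalCommand fields are assumptions; the abstract does not say how the second core is chosen.

```python
from dataclasses import dataclass
import queue

@dataclass
class InternalCommand:
    address: int
    response_core: int   # core selected to process the read response

class Core:
    def __init__(self, core_id):
        self.core_id = core_id
        self.inbox = queue.Queue()

    def handle_read(self, address, memory, response_cores):
        """Process the read command and select a core for the response."""
        # Pick the response core, e.g. the one with the shortest queue.
        target = min(response_cores, key=lambda c: c.inbox.qsize())
        internal = InternalCommand(address=address, response_core=target.core_id)
        data = memory[internal.address]      # retrieve data from memory device
        target.inbox.put((internal, data))   # selected core handles response
        return internal

memory = {0x10: b"hello"}
core0, core1, core2 = Core(0), Core(1), Core(2)
cmd = core0.handle_read(0x10, memory, [core1, core2])
print(f"read response routed to core {cmd.response_core}")
```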

Cross platform application flow orchestration by transmitting the application flow including a transition rule to a plurality of computation layers
11579929 · 2023-02-14

Disclosed herein are system, method, and computer program product embodiments for configuring a dynamic reassignment of an application flow across different computation layers based on various conditions. An embodiment operates by assigning a first rule of an application flow to a first computation layer of a plurality of computation layers. The embodiment assigns a second rule of the application flow to a second computation layer of the plurality of computation layers. The embodiment assigns a transition rule of the application flow to the first computation layer. The transition rule includes an action that causes the first rule of the application flow to be executed in the second computation layer of the plurality of computation layers based on a condition. The embodiment then transmits the application flow to the plurality of computation layers, thereby causing the application flow to be configured for execution.
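
One way to picture this is the encoding below, where rules are plain callables assigned to layers and the transition rule moves a rule between layers when its condition holds. The layer names and the condition callback are assumptions for illustration.

```python
class ComputationLayer:
    def __init__(self, name):
        self.name = name
        self.rules = {}

    def assign(self, rule_name, fn):
        self.rules[rule_name] = fn

    def execute(self, rule_name, ctx):
        return self.rules[rule_name](ctx)

def make_transition_rule(rule_name, src, dst, condition):
    """When condition(ctx) holds, run rule_name on dst instead of src."""
    def transition(ctx):
        layer = dst if condition(ctx) else src
        if rule_name not in layer.rules:          # move the rule across layers
            layer.assign(rule_name, src.rules[rule_name])
        return layer.name, layer.execute(rule_name, ctx)
    return transition

edge, cloud = ComputationLayer("edge"), ComputationLayer("cloud")
edge.assign("score", lambda ctx: ctx["value"] * 2)        # first rule
cloud.assign("aggregate", lambda ctx: sum(ctx["batch"]))  # second rule
run_score = make_transition_rule("score", edge, cloud,
                                 condition=lambda ctx: ctx["value"] > 100)
print(run_score({"value": 7}))     # ('edge', 14): stays on the edge layer
print(run_score({"value": 500}))   # ('cloud', 1000): transitions to cloud
```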

Scheduling artificial intelligence model partitions based on reversed computation graph

Techniques are disclosed for scheduling artificial intelligence model partitions for execution in an information processing system. For example, a method comprises the following steps. An intermediate representation of an artificial intelligence model is obtained. A reversed computation graph corresponding to a computation graph generated based on the intermediate representation is obtained. Nodes in the reversed computation graph represent functions related to the artificial intelligence model, and one or more directed edges in the reversed computation graph represent one or more dependencies between the functions. The reversed computation graph is partitioned into sequential partitions, such that the partitions are executed sequentially and functions corresponding to nodes in each partition are executed in parallel.
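
A compact sketch of the graph mechanics follows: reverse the edges, then split the reversed graph into topological levels so that the levels execute sequentially while the functions within a level run in parallel. The tiny four-node graph is made up for illustration.

```python
from collections import defaultdict

def reverse_graph(edges):
    """Flip every directed edge u -> v into v -> u."""
    rev = defaultdict(list)
    for u, v in edges:
        rev[v].append(u)
    return rev

def level_partitions(rev, nodes):
    """Group nodes into levels: level k depends only on levels < k."""
    indeg = {n: 0 for n in nodes}
    for u in rev:
        for v in rev[u]:
            indeg[v] += 1
    frontier = [n for n in nodes if indeg[n] == 0]
    partitions = []
    while frontier:
        partitions.append(frontier)          # nodes executed in parallel
        nxt = []
        for u in frontier:
            for v in rev.get(u, []):
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        frontier = nxt                       # next sequential partition
    return partitions

# Forward graph of functions: a -> b, a -> c, b -> d, c -> d
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
rev = reverse_graph(edges)
print(level_partitions(rev, ["a", "b", "c", "d"]))
# [['d'], ['b', 'c'], ['a']]: partitions run sequentially, members in parallel
```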

Function as a service (FaaS) execution distributor
11579938 · 2023-02-14

The disclosure provides an approach for distribution of functions among data centers of a cloud system that provides function-as-a-service (FaaS). For example, the disclosure provides one or more function distributors configured to receive a request for loading or executing a function, automatically determine an appropriate data center to load or execute the function, and automatically load or execute the function on the determined data center. In certain embodiments, the function distributors are further configured to determine an appropriate data center to provide storage resources for the function and configure the function to utilize the storage resources of the determined data center.
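
The distributor's decision logic might look like the sketch below. The selection heuristics (lowest latency with spare capacity for compute, first data center with object storage for storage) and the DataCenter fields are assumptions, not the disclosure's criteria.

```python
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    latency_ms: float
    free_capacity: int
    has_object_storage: bool

def pick_compute_dc(data_centers):
    """Automatically determine a data center to load/execute the function."""
    candidates = [dc for dc in data_centers if dc.free_capacity > 0]
    return min(candidates, key=lambda dc: dc.latency_ms)

def pick_storage_dc(data_centers):
    """Determine which data center provides the function's storage resources."""
    return next(dc for dc in data_centers if dc.has_object_storage)

def distribute(function_name, data_centers):
    compute = pick_compute_dc(data_centers)
    storage = pick_storage_dc(data_centers)
    # Load the function on the compute DC, configured to use the storage DC.
    return {"function": function_name,
            "execute_in": compute.name,
            "storage_in": storage.name}

dcs = [DataCenter("us-east", 12.0, 4, True),
       DataCenter("eu-west", 35.0, 9, False)]
print(distribute("resize-image", dcs))
# {'function': 'resize-image', 'execute_in': 'us-east', 'storage_in': 'us-east'}
```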