G06F2209/5013

Cross platform application flow orchestration by transmitting the application flow including a transition rule to a plurality of computation layers
11579929 · 2023-02-14

Disclosed herein are system, method, and computer program product embodiments for configuring a dynamic reassignment of an application flow across different computation layers based on various conditions. An embodiment operates by assigning a first rule of an application flow to a first computation layer of a plurality of computation layers. The embodiment assigns a second rule of the application flow to a second computation layer of the plurality of computation layers. The embodiment assigns a transition rule of the application flow to the first computation layer. The transition rule includes an action that causes the first rule of the application flow to be executed in the second computation layer of the plurality of computation layers based on a condition. The embodiment then transmits the application flow to the plurality of computation layers, thereby causing the application flow to be configured for execution.
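A minimal sketch of how such an application flow might be represented and transmitted to every layer, assuming a simple in-memory data model; the `Rule`, `TransitionRule`, and `ApplicationFlow` names, the layer identifiers, and the metric-based condition are illustrative assumptions rather than the embodiment's actual structures.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Rule:
    name: str
    layer: str                         # computation layer the rule is initially assigned to

@dataclass
class TransitionRule:
    rule_name: str                     # rule to move when the condition holds
    target_layer: str                  # layer the rule should execute in instead
    condition: Callable[[dict], bool]  # evaluated against runtime metrics at the layer

@dataclass
class ApplicationFlow:
    rules: List[Rule] = field(default_factory=list)
    transitions: List[TransitionRule] = field(default_factory=list)

    def layers(self) -> List[str]:
        return sorted({r.layer for r in self.rules})

def transmit(flow: ApplicationFlow, send: Callable[[str, ApplicationFlow], None]) -> None:
    # The whole flow, transition rules included, goes to every computation layer,
    # so each layer can reconfigure execution when a condition is met.
    for layer in flow.layers():
        send(layer, flow)

# Example: "validate_input" runs on the edge layer unless edge CPU load is high,
# in which case the transition rule moves its execution to the cloud layer.
flow = ApplicationFlow(
    rules=[Rule("validate_input", layer="edge"), Rule("aggregate", layer="cloud")],
    transitions=[TransitionRule("validate_input", "cloud",
                                condition=lambda m: m.get("edge_cpu", 0.0) > 0.8)],
)
transmit(flow, send=lambda layer, f: print(f"sending flow to {layer}"))
```

Because every layer holds the full flow, including the transition rules, a layer that observes the triggering condition can hand the affected rule over to the target layer without a fresh deployment.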

Processing REST API requests based on resource usage satisfying predetermined limits

A request manager analyzes API calls from a client to a host application for state and performance information. If current utilization of the host application's processing or memory footprint resources exceeds predetermined levels, then the incoming API call is not forwarded to the application. If current utilization of the host application's processing and memory resources does not exceed the predetermined levels, then the request manager quantifies the processing or memory resources required to report the requested information and determines whether projected utilization of the host application's processing or memory resources, inclusive of the resources required to report the requested information, exceeds the predetermined levels. If the predetermined levels are not exceeded, then the request manager forwards the API call to the application for processing.
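A hedged sketch of the described admission check, assuming simple fractional utilization metrics; the threshold values, the `estimate_cost` helper, and the metric names are hypothetical.

```python
CPU_LIMIT = 0.85      # predetermined processing utilization limit (assumed value)
MEM_LIMIT = 0.90      # predetermined memory utilization limit (assumed value)

def estimate_cost(api_call: dict) -> tuple[float, float]:
    # Placeholder for quantifying the CPU and memory needed to serve this call.
    return 0.05, 0.02

def should_forward(api_call: dict, cpu_now: float, mem_now: float) -> bool:
    # Reject immediately if the host application is already over its limits.
    if cpu_now > CPU_LIMIT or mem_now > MEM_LIMIT:
        return False
    # Otherwise project utilization including the cost of serving this request.
    cpu_cost, mem_cost = estimate_cost(api_call)
    return (cpu_now + cpu_cost) <= CPU_LIMIT and (mem_now + mem_cost) <= MEM_LIMIT

# The call is forwarded only if current and projected utilization stay under the limits.
print(should_forward({"endpoint": "/status"}, cpu_now=0.60, mem_now=0.70))  # True
print(should_forward({"endpoint": "/status"}, cpu_now=0.90, mem_now=0.70))  # False
```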

Synthesizing a resource request to obtain resource identifier based on extracted unified model, user requirement and policy requirements to allocate resources

Resource allocation problems involve identifying a resource, selecting it by certain criteria, and offering it to the requester. Identification of required resources may involve matching the type of resource, selecting based on user requirements and policy criteria, and offering the resource through an assignment system. An apparatus and a method are provided that enable identification and selection of resources. The method includes receiving a resource allocation request for the allocation of a resource, the resource allocation request specifying a set of user requirements. The method includes receiving an operator policy associated with the resource, the operator policy including one or more policy requirements. The method includes synthesizing a resource request based on the resource allocation request and the operator policy. Synthesizing the resource request based on the resource allocation request and the operator policy comprises combining the user requirements with the one or more policy requirements.
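The combination step could look roughly like the sketch below, assuming requirements are expressed as key-value constraints; the merge-by-stricter-value behaviour and all field names are assumptions for illustration.

```python
def synthesize_request(allocation_request: dict, operator_policy: dict) -> dict:
    # Combine the user's requirements with the operator's policy requirements.
    user_req = allocation_request.get("requirements", {})
    policy_req = operator_policy.get("requirements", {})
    combined = dict(user_req)
    for key, value in policy_req.items():
        # Where both sides constrain the same attribute, keep the stricter
        # (here: larger) value; otherwise adopt the policy requirement.
        combined[key] = max(combined.get(key, value), value)
    return {"resource_type": allocation_request["resource_type"],
            "requirements": combined}

# The synthesized request is what would then be used to obtain a resource identifier.
request = synthesize_request(
    {"resource_type": "compute", "requirements": {"vcpus": 4, "memory_gb": 8}},
    {"requirements": {"memory_gb": 16, "availability": 0.999}},
)
print(request)
```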

DISTRIBUTED ACCELERATOR

Systems, methods, and devices are described for coordinating a distributed accelerator. A command that includes instructions for performing a task is received. One or more sub-tasks of the task are determined to generate a set of sub-tasks. For each sub-task of the set of sub-tasks, an accelerator slice of a plurality of accelerator slices of a distributed accelerator is allocated, and sub-task instructions for performing the sub-task are determined. The sub-task instructions are transmitted to the allocated accelerator slice for each sub-task. Each allocated accelerator slice is configured to generate a corresponding response indicative of the allocated accelerator slice having completed a respective sub-task. In a further example aspect, corresponding responses are received from each allocated accelerator slice and a coordinated response indicative of the corresponding responses is generated.
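A rough sketch of the coordination path, assuming sub-tasks are derived from data chunks and accelerator slices are simulated with threads; the `run_on_slice` stand-in and the response format are assumptions, not the device interface described in the disclosure.

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_subtasks(task: dict) -> list[dict]:
    # Hypothetical decomposition: one sub-task per data chunk of the command.
    return [{"op": task["op"], "chunk": c} for c in task["chunks"]]

def run_on_slice(slice_id: int, subtask: dict) -> dict:
    # Stand-in for transmitting sub-task instructions to the allocated slice;
    # the slice replies with a response indicating completion.
    return {"slice": slice_id, "result": sum(subtask["chunk"]), "done": True}

def execute(task: dict, num_slices: int) -> dict:
    subtasks = split_into_subtasks(task)
    with ThreadPoolExecutor(max_workers=num_slices) as pool:
        responses = list(pool.map(lambda p: run_on_slice(*p), enumerate(subtasks)))
    # Coordinated response summarizing the per-slice responses.
    return {"completed": all(r["done"] for r in responses),
            "results": [r["result"] for r in responses]}

print(execute({"op": "sum", "chunks": [[1, 2], [3, 4], [5, 6]]}, num_slices=3))
```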

Autonomic caching for in-memory data grid query processing

A method, system, and computer program product for autonomic caching in an in-memory data grid (IMDG) are provided. A method for autonomic caching in an IMDG includes receiving from a client of the IMDG a request for a primary query in the IMDG. The method also includes associating the primary query with a previously requested sub-query related to the primary query. Finally, the method includes directing the sub-query concurrently with the primary query, without waiting to receive a request for the sub-query from the client. In this way, the method can proactively predict receipt of the request for a sub-query following a request for a primary query, prior to the actual receipt of the request for the sub-query.
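One way this proactive dispatch could be sketched, assuming a learned primary-to-sub-query association table; the table, the `run_query` stand-in, and the thread-pool dispatch are illustrative assumptions rather than the IMDG's actual query API.

```python
import concurrent.futures

# Learned association: primary query -> sub-query that historically follows it.
SUBQUERY_FOR = {"SELECT * FROM orders": "SELECT * FROM order_items"}

def run_query(query: str) -> str:
    return f"results of [{query}]"          # stand-in for executing a grid query

def execute_with_prefetch(primary: str) -> dict:
    subquery = SUBQUERY_FOR.get(primary)
    with concurrent.futures.ThreadPoolExecutor() as pool:
        primary_future = pool.submit(run_query, primary)
        # Dispatch the predicted sub-query concurrently, before the client asks for it.
        sub_future = pool.submit(run_query, subquery) if subquery else None
        results = {"primary": primary_future.result()}
        if sub_future:
            results["prefetched_subquery"] = sub_future.result()
    return results

print(execute_with_prefetch("SELECT * FROM orders"))
```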

METHOD AND SYSTEM FOR PERFORMING PREDICTIVE COMPOSITIONS FOR COMPOSED INFORMATION HANDLING SYSTEMS USING TELEMETRY DATA

Techniques described herein relate to a method for managing composed information handling systems. The method includes obtaining, by a system control processor manager, a composition request for a composed information handling system to perform a workflow; in response to obtaining the composition request: identifying a composed system blueprint associated with the workflow; making a first determination that there are first predictive analytics associated with the composed system blueprint; in response to the first determination: identifying a composed infrastructure associated with the composed system blueprint capable of performing the workflow based on telemetry data and the first predictive analytics; instantiating a composed information handling system using the composed infrastructure to service the composition request; and setting up telemetry services for the composed information handling system using at least one control resource set.
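A compressed sketch of that composition path, assuming a simple blueprint catalogue keyed by workflow; the telemetry headroom check, the blueprint fields, and the returned structure are hypothetical stand-ins for the system control processor manager's behaviour.

```python
# Hypothetical blueprint catalogue: workflow -> blueprint with analytics and resources.
BLUEPRINTS = {"video-transcode": {"analytics": "gpu-usage-model",
                                  "resources": ["gpu-node-pool"]}}

def compose(request: dict, telemetry: dict) -> dict:
    blueprint = BLUEPRINTS[request["workflow"]]          # blueprint for the workflow
    if blueprint.get("analytics"):                        # first determination
        # Pick infrastructure capable of the workflow, guided by telemetry data
        # and the predictive analytics tied to the blueprint.
        infrastructure = [r for r in blueprint["resources"]
                          if telemetry.get(r, {}).get("headroom", 0) > 0.2]
    else:
        infrastructure = blueprint["resources"]
    # Instantiate the composed system and enable telemetry services for it.
    return {"workflow": request["workflow"],
            "infrastructure": infrastructure,
            "telemetry_services": "enabled"}              # via a control resource set

print(compose({"workflow": "video-transcode"},
              {"gpu-node-pool": {"headroom": 0.4}}))
```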

RE-INITIATION OF MICROSERVICES UTILIZING CONTEXT INFORMATION PROVIDED VIA SERVICE CALLS
20230004427 · 2023-01-05

An apparatus comprises a processing device configured to identify, at a first microservice, a service call that is to be transmitted to a second microservice, and to modify the service call to include context information, the context information characterizing a current state of execution of one or more tasks by one of the first microservice and the second microservice. The processing device is further configured to provide, from the first microservice to the second microservice, the modified service call including the context information. The context information enables re-initiation of said one of the first microservice and the second microservice to continue execution of the one or more tasks from the current state.
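A minimal sketch of carrying execution context in a service call, assuming a JSON header on an otherwise ordinary request; the header name, the context fields, and the `resume_from_call` helper are assumptions, not the claimed message format.

```python
import json

def make_service_call(payload: dict, context: dict) -> dict:
    # Modify the outgoing call to carry context describing the current state
    # of execution (e.g. which task and step are in progress, and for whom).
    return {"payload": payload,
            "headers": {"x-execution-context": json.dumps(context)}}

def resume_from_call(call: dict) -> dict:
    # On re-initiation, the restarted microservice recovers the context and
    # continues execution of the tasks from the recorded step.
    return json.loads(call["headers"]["x-execution-context"])

call = make_service_call({"order_id": 42},
                         {"task": "fulfil-order", "step": 3, "caller": "svc-a"})
print(resume_from_call(call))   # {'task': 'fulfil-order', 'step': 3, 'caller': 'svc-a'}
```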

Data processing platform monitoring

A device may receive error data indicating that an error occurred, the error being associated with a data processing job scheduled to be performed by a data processing platform. The device may identify input data for the data processing job associated with the error and determine that the error is based on the data processing platform not receiving the input data. In addition, the device may determine a location of the input data and determine a measure of priority associated with the data processing job. Based on the location of the input data and the measure of priority, the device may perform an action to correct the error.
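The corrective step might be sketched as below, assuming the error record and job metadata expose an error type, an input location, and a priority; the field names and the two corrective actions are illustrative assumptions.

```python
def handle_error(error: dict, job: dict) -> str:
    # Confirm the failure is due to the platform not receiving its input data.
    if error["type"] != "missing_input":
        return "escalate"
    location = job["input_location"]           # where the input data should come from
    priority = job.get("priority", "low")      # measure of priority for the job
    # Choose a corrective action based on the input location and the priority.
    if priority == "high":
        return f"fetch input directly from {location} and rerun now"
    return f"schedule a retry once {location} publishes the input"

print(handle_error({"type": "missing_input"},
                   {"name": "daily-report",
                    "input_location": "s3://bucket/in",
                    "priority": "high"}))
```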

ACCESSING PURGED WORKLOADS

Examples described herein relate to a method and a system (for example, a workload controller) for accessing purged workloads. An alert indicative of an attempt to access a purged workload of workloads deployed in a workload environment may be received by the workload controller. The purged workload may include one or both of a deactivated workload or an archived workload. The workload controller may detect the attempt to access the purged workload based on port-mirrored data traffic. Further, in some examples, the workload controller may activate the purged workload based on the alert.
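A simplified sketch of the controller's reaction, assuming the port-mirrored traffic has already been reduced to per-packet metadata; the purged-workload table, packet fields, and activation step are assumptions for illustration.

```python
# Hypothetical record of purged workloads and whether each was deactivated or archived.
PURGED = {"billing-v1": "archived", "reports-v2": "deactivated"}

def inspect_mirrored_packet(packet: dict) -> dict | None:
    # Raise an alert if mirrored traffic targets a purged workload.
    target = packet.get("destination_workload")
    if target in PURGED:
        return {"workload": target, "state": PURGED[target]}
    return None

def on_alert(alert: dict) -> str:
    # Activate (restore) the purged workload so the attempted access can be served.
    return f"activating {alert['workload']} from {alert['state']} state"

alert = inspect_mirrored_packet({"destination_workload": "billing-v1", "port": 443})
if alert:
    print(on_alert(alert))
```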

Methods for Offloading a Task from a Processor to Heterogeneous Accelerators

Systems and methods are provided for offloading a task from a central processor in a radio access network (RAN) server to one or more heterogeneous accelerators. For example, a task associated with one or more operational partitions (or a service application) associated with processing data traffic in the RAN is dynamically allocated for offloading from the central processor based on workload status information. One or more accelerators are dynamically allocated for executing the task, where the accelerators may be heterogeneous and may not comprise pre-programming for executing the task. The disclosed technology further enables generating specific application programs for execution on the respective heterogeneous accelerators based on a single set of program instructions. The methods automatically generate the specific application programs by identifying common functional blocks for processing data traffic and mapping the functional blocks to the single set of program instructions to generate code native to the respective accelerators.
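A toy sketch of mapping common functional blocks to accelerator-native code, assuming a per-accelerator template table; the block names, the `NATIVE_TEMPLATES` mapping, and the generated strings are illustrative assumptions rather than real GPU or FPGA toolchain output.

```python
# Hypothetical templates mapping each common functional block to a form
# "native" to a given accelerator type.
NATIVE_TEMPLATES = {
    "gpu":  {"fft": "cuda_fft()", "decode": "cuda_decode()"},
    "fpga": {"fft": "fpga_fft_kernel", "decode": "fpga_decode_kernel"},
}

def generate_programs(task_blocks: list[str], accelerators: list[str]) -> dict:
    programs = {}
    for acc in accelerators:
        templates = NATIVE_TEMPLATES[acc]
        # Map each common functional block to the accelerator's native form,
        # so one set of program instructions yields per-accelerator programs.
        programs[acc] = [templates[block] for block in task_blocks]
    return programs

# One set of functional blocks produces a specific program for each allocated accelerator.
print(generate_programs(["decode", "fft"], accelerators=["gpu", "fpga"]))
```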