
PROVIDING AN OPTIMIZED SERVICE-BASED PIPELINE
20230102063 · 2023-03-30 ·

An optimized service-based pipeline includes a resource manager that receives a request that includes a description of a workload from a workload initiator such as an application. The resource manager identifies runtime utilization metrics of a plurality of processing resources, where the plurality of processing resources includes at least a first graphics processing unit (GPU) and a second GPU. The resource manager determines, based on the utilization metrics and one or more policies, a workload allocation recommendation for the workload. Thus, the workload initiator can determine whether placing a workload on a particular processing resource is preferable based on runtime behavior of the system and policies established for the workload.
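The allocation decision described above could be sketched as follows; the function name, the metric format, and the policy key are illustrative assumptions, not taken from the patent:

```python
def recommend_allocation(workload, metrics, policies):
    """Recommend a processing resource for a workload.

    workload: description of the workload (unused in this simplified sketch)
    metrics:  dict mapping resource name -> runtime utilization (0.0-1.0)
    policies: policy settings, e.g. a per-resource utilization cap
    """
    cap = policies.get("max_utilization", 0.8)
    # Keep only resources whose runtime utilization is under the policy cap.
    candidates = {r: u for r, u in metrics.items() if u < cap}
    if not candidates:
        return None  # no resource satisfies the policy; caller decides
    # Recommend the least-loaded eligible resource.
    return min(candidates, key=candidates.get)
```

For example, with one GPU at 90% utilization and another at 35%, the recommendation under an 80% cap would be the second GPU.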

RELATIVE DISPLACEABLE CAPACITY INTEGRATION

A system may include a memory and a processor in communication with the memory. The processor may be configured to perform operations. The operations may include analyzing a host system, detecting one or more specifications of the host system, and determining a displaceable capacity of the host system. Determining the displaceable capacity of the host system may include identifying a workload on the host system, establishing a workload priority for the workload, and defining a task priority of a task. The operations may include computing service metrics of the host system. The operations may include displacing a portion of the workload using the displaceable capacity.
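One minimal reading of "displaceable capacity" is the capacity held by workloads whose priority falls below some threshold; the rule and field names below are assumptions for illustration only:

```python
def displaceable_capacity(workloads, priority_threshold):
    """Sum the capacity used by workloads whose priority is below the
    threshold; those workloads are candidates for displacement."""
    return sum(w["capacity"] for w in workloads
               if w["priority"] < priority_threshold)

workloads = [
    {"name": "batch-report", "priority": 1, "capacity": 20},
    {"name": "web-frontend", "priority": 9, "capacity": 50},
]
# With a threshold of 5, only the low-priority batch job is displaceable.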

DETERMINING AVAILABLE MEMORY ON A MOBILE PLATFORM
20230036737 · 2023-02-02 ·

An application from a plurality of applications executing at one or more processors of a computing device may determine a plurality of memory metrics of the computing device. The application may determine information indicative of a predicted safe amount of memory available for allocation by an application from the plurality of applications based at least in part on the plurality of memory metrics. The application may adjust, based at least in part on the information indicative of the predicted safe amount of memory available for allocation by the application, one or more characteristics of the application executing at the one or more processors to adjust an amount of memory allocated by the application.

AUTO-SPLIT AND AUTO-MERGE CLUSTERS
20230032812 · 2023-02-02 ·

Methods, computer program products, and/or systems are provided that perform the following operations: identifying a first workload being processed by a first plurality of sites in a cluster; identifying, from the first plurality of sites: (i) a first site as a primary site for the first workload, and (ii) one or more secondary sites for the first workload; identifying a communication link issue between the first site and at least one of the one or more secondary sites; splitting the cluster into sub-clusters based, at least in part, on the communication link issue, wherein the first site is included in a first sub-cluster of the sub-clusters and the at least one of the one or more secondary sites is included in a sub-cluster of the sub-clusters that is different from the first sub-cluster; and instructing the first sub-cluster to locally process the first workload.
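The split step above amounts to partitioning sites into groups still connected by healthy links. A sketch using union-find over the remaining links (site names and link representation are illustrative assumptions):

```python
def split_on_link_issue(sites, links, broken):
    """Group sites into sub-clusters connected by healthy links.

    sites:  list of site names
    links:  set of frozenset({a, b}) pairs representing communication links
    broken: subset of links with identified communication issues
    """
    healthy = links - broken
    parent = {s: s for s in sites}

    def find(s):  # union-find with path halving
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s

    for link in healthy:
        a, b = tuple(link)
        parent[find(a)] = find(b)  # union the two endpoints

    clusters = {}
    for s in sites:
        clusters.setdefault(find(s), set()).add(s)
    return list(clusters.values())
```

If site A (primary) loses its links to secondary site C but keeps its link to B, the cluster splits into {A, B} and {C}, and the sub-cluster containing A processes the workload locally.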

METHOD FOR AUTOMATIC SCHEDULING OF TASKS, ELECTRONIC DEVICE EMPLOYING METHOD, AND COMPUTER READABLE STORAGE MEDIUM
20230029609 · 2023-02-02 ·

A method for the automatic scheduling of tasks obtains data processing tasks and data sources. A job queue is formed from the data processing tasks, and job tasks are extracted in order from the queue. Computing resources are distributed based on the extracted job tasks, and a result of each data processing task is obtained by a pre-trained model based on the corresponding data source. An electronic device and a computer readable storage medium applying the method are also provided.
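The queue-then-process loop could be sketched as below; the model is stood in for by an arbitrary callable, and all names are illustrative assumptions:

```python
import queue

def run_jobs(tasks, model, sources):
    """Form a job queue from data processing tasks, extract them in
    order, and apply a (hypothetical) pre-trained model to each task's
    data source."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results = []
    while not q.empty():
        job = q.get()            # extract job tasks in FIFO order
        results.append(model(sources[job]))
    return results
```

For instance, `run_jobs(["t1", "t2"], some_model, {"t1": ..., "t2": ...})` processes t1 before t2 and returns their results in that order.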

Systems and method for automating security workflows in a distributed system using encrypted task requests

Methods and systems for automating execution of a workflow by integrating security applications of a distributed system into the workflow are provided. In embodiments, a system includes an application server in a first cloud, configured to receive a trigger to execute the workflow. The workflow includes tasks to be executed in a device of a second cloud. The application server sends a request to process the task to a task queue module. The task queue module places the task request in a queue, and a worker hosted in the device of the second cloud retrieves the task request from the queue and processes the task request by invoking a plugin. The plugin interacts with a security application of the device of the second cloud to execute the task, which yields task results. The task results are provided to the application server, via the worker and the task queue module.
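The queue/worker/plugin flow can be illustrated with an in-process stand-in; here the task queue module, worker, and plugin registry are all simplified local objects, and the names are assumptions rather than the patent's components:

```python
from collections import deque

class TaskQueue:
    """Minimal stand-in for the task queue module."""
    def __init__(self):
        self._q = deque()

    def enqueue(self, task_request):
        self._q.append(task_request)

    def dequeue(self):
        return self._q.popleft() if self._q else None

def worker_step(task_queue, plugins):
    """One iteration of the worker in the second cloud: retrieve a task
    request and process it by invoking the named plugin, which stands in
    for the security application."""
    task = task_queue.dequeue()
    if task is None:
        return None
    return plugins[task["plugin"]](task["args"])

q = TaskQueue()
q.enqueue({"plugin": "scan", "args": "host-1"})
plugins = {"scan": lambda target: f"scanned {target}"}
```

In the patented system the queue decouples the application server in the first cloud from workers in the second cloud; here a single-process `deque` only models the ordering behavior.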

METHOD AND APPARATUS FOR DYNAMICALLY MANAGING SHARED MEMORY POOL
20230085979 · 2023-03-23 ·

A method and an apparatus for dynamically managing a shared memory pool are provided, to determine, based on different service scenarios, a shared memory pool mechanism applicable to a current service scenario, and then dynamically adjust the memory pool mechanism based on that determination. The method for dynamically managing a shared memory pool includes: determining a first shared memory pool mechanism, where the first shared memory pool mechanism is a fixed memory pool mechanism or a dynamic memory pool mechanism; determining a second shared memory pool mechanism suitable for a second service scenario based on the second service scenario, where the second shared memory pool mechanism is a fixed memory pool mechanism or a dynamic memory pool mechanism; and when the second shared memory pool mechanism is different from the first shared memory pool mechanism, adjusting the first shared memory pool mechanism to the second shared memory pool mechanism.
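A sketch of the adjust-only-on-change logic; the scenario field and the rule mapping steady allocation patterns to a fixed pool are assumptions made for illustration:

```python
def choose_mechanism(scenario):
    """Pick a shared-memory-pool mechanism for a service scenario.
    Assumed rule: steady allocation sizes favor a fixed pool, anything
    else a dynamic pool."""
    return "fixed" if scenario["allocation_pattern"] == "steady" else "dynamic"

class SharedMemoryPool:
    def __init__(self, mechanism):
        self.mechanism = mechanism  # "fixed" or "dynamic"

    def adapt(self, scenario):
        new = choose_mechanism(scenario)
        # Adjust only when the suitable mechanism differs from the current one.
        if new != self.mechanism:
            self.mechanism = new

pool = SharedMemoryPool("fixed")
pool.adapt({"allocation_pattern": "bursty"})  # switches to dynamic
```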

METHOD FOR DATA PROCESSING AND APPARATUS, AND ELECTRONIC DEVICE
20220342706 · 2022-10-27 ·

A method for data processing includes: receiving capability information for processing data sent by a network-side device; determining whether the capability information satisfies a preset requirement of data to be processed; and in response to the capability information satisfying the preset requirement, sending the data to be processed to the network-side device for processing.
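The capability check could be sketched as a simple comparison of advertised capability against a preset requirement; the dictionary fields and the "every requirement must be met" rule are assumptions:

```python
def maybe_offload(capability, requirement, payload, send):
    """Send the data to the network-side device only if its advertised
    capability satisfies every field of the preset requirement."""
    if all(capability.get(k, 0) >= v for k, v in requirement.items()):
        return send(payload)
    return None  # requirement not satisfied; keep processing local
```

Here `send` stands in for whatever transport delivers the data to the network-side device.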

ASSIGNING JOBS TO HETEROGENEOUS GRAPHICS PROCESSING UNITS
20230089925 · 2023-03-23 ·

Architectures and techniques for managing heterogeneous sets of physical GPUs. A GPU device manager coupled with a heterogeneous set of physical GPUs collects functionality information for one or more of the physical GPUs. Based on the collected functionality information, the GPU device manager manages at least one of the physical GPUs as multiple virtual GPUs and classifies each physical GPU as either a single physical GPU or as one or more virtual GPUs. Traffic representing processing jobs is received by at least a subset of the physical GPUs via a gateway programmed by a traffic manager. A GPU scheduler, communicatively coupled with the traffic manager and the GPU device manager, schedules a GPU application to process the received processing jobs and distributes the jobs to the scheduled GPU application.
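The classification step could look like the sketch below; the functionality fields (`supports_partitioning`, `memory_gb`) and the 8 GB-per-virtual-GPU split are illustrative assumptions:

```python
def classify_gpus(gpu_info):
    """Classify each physical GPU as a single GPU or as multiple virtual
    GPUs based on collected functionality information."""
    classified = {}
    for name, info in gpu_info.items():
        if info.get("supports_partitioning") and info["memory_gb"] >= 16:
            # Expose the card as several virtual GPUs, one per 8 GB.
            n = info["memory_gb"] // 8
            classified[name] = [f"{name}-v{i}" for i in range(n)]
        else:
            # Manage the card as a single physical GPU.
            classified[name] = [name]
    return classified
```

A scheduler could then treat each entry in the resulting lists as an independent target for job distribution.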

METHOD FOR THE DEPLOYMENT OF A SOFTWARE MODULE IN A MANUFACTURING OPERATION MANAGEMENT SYSTEM
20230082523 · 2023-03-16 ·

A software module is deployed in a MOM system without requiring the operator to know where to deploy the software module within the network of the computational resources that are addressed and/or accessed within the MOM system. A number of software modules are provided, each including a set of metadata with a number of deploy criteria. A plurality of computational resource layers are provided, with each resource layer having different computational resources and being enabled to communicate layer-specific data, which include resource availability information. A deployment instance is executed that evaluates the metadata and the layer-specific data and, depending on the evaluation, the computational resource layer and the computational resource on which the software module will be deployed are determined. The software module is then executed on the determined computational resource within the determined computational resource layer.
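The evaluation performed by the deployment instance might be sketched as matching deploy criteria against per-layer availability data; the criteria fields and first-match strategy below are assumptions for illustration:

```python
def select_resource(deploy_criteria, layers):
    """Evaluate a module's deploy criteria against layer-specific data and
    return (layer, resource) for deployment, or None if nothing fits.

    layers: dict of layer name -> dict of resource name -> availability data
    """
    for layer_name, resources in layers.items():
        for res_name, data in resources.items():
            if (data["available_cpu"] >= deploy_criteria["min_cpu"]
                    and data["available_mem"] >= deploy_criteria["min_mem"]):
                return layer_name, res_name
    return None

layers = {
    "edge":   {"plc1": {"available_cpu": 1, "available_mem": 2}},
    "server": {"srv1": {"available_cpu": 8, "available_mem": 32}},
}
```

With criteria requiring 4 CPUs and 16 GB, the edge resource is rejected and the module lands on the server layer's resource.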