Storage medium, task execution management device, and task execution management method
11556377 · 2023-01-17 ·

A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process, the process including: acquiring first multiple tasks; dividing each task in the first multiple tasks in accordance with a cache size; classifying second multiple tasks, obtained by the dividing, in accordance with a range of data to be referred to at a time of execution of each task; and determining an execution order of the tasks in each group obtained by the classifying.
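The divide-then-group flow can be sketched as follows; the cache size, the (start, end) range representation of tasks, and the block-index grouping rule are all illustrative assumptions, not the patented implementation:

```python
from collections import defaultdict

CACHE_SIZE = 64  # assumed cache capacity, in data elements

def divide(task):
    """Divide one task's (start, end) data range into cache-sized subtasks."""
    start, end = task
    return [(s, min(s + CACHE_SIZE, end)) for s in range(start, end, CACHE_SIZE)]

def classify(subtasks):
    """Group subtasks by the range of data they refer to (here: the block index)."""
    groups = defaultdict(list)
    for sub in subtasks:
        groups[sub[0] // CACHE_SIZE].append(sub)
    return groups

tasks = [(0, 100), (50, 150)]                      # first multiple tasks
subtasks = [s for t in tasks for s in divide(t)]   # second multiple tasks
groups = classify(subtasks)
for group in groups.values():
    group.sort()   # execution order within each group
```

Subtasks that touch the same cache-sized block land in the same group, so ordering within a group keeps referenced data resident in the cache.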

USER CONFIGURABLE TASK TRIGGERS

Systems and processes for user configurable task triggers are provided. In one example, at least one user input is received, including a selection of at least one condition of a plurality of conditions and a selection of at least one task of a plurality of tasks. Stored context data corresponding to an electronic device is received. A determination is made as to whether the stored context data indicates an occurrence of the at least one selected condition. In response to determining that the stored context data indicates an occurrence of the at least one selected condition, the at least one selected task associated with the at least one selected condition is performed.
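The trigger check can be sketched as a condition/task lookup; the names "arrived_home" and "turn_on_lights" are invented for illustration:

```python
# Hypothetical plurality of conditions and plurality of tasks.
conditions = {"arrived_home": lambda ctx: ctx.get("location") == "home"}
tasks = {"turn_on_lights": lambda: "lights on"}

# User input: a selected condition paired with a selected task.
selected = [("arrived_home", "turn_on_lights")]

def check_triggers(context):
    """Perform each selected task whose condition the stored context data satisfies."""
    performed = []
    for cond_name, task_name in selected:
        if conditions[cond_name](context):
            performed.append(tasks[task_name]())
    return performed
```

Calling `check_triggers({"location": "home"})` performs the associated task; any other context leaves it untriggered.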

TASK SCHEDULING METHOD AND APPARATUS
20230025917 · 2023-01-26 ·

A task scheduling method and an apparatus belonging to the field of intelligent vehicles are provided. The method may be applied to an embedded device using the AUTomotive Open System ARchitecture (AUTOSAR); the embedded device includes a memory and a processor, the memory stores an interface function, and a first software component and a second software component are deployed in the processor. In this solution, registration information of a to-be-deployed algorithm may be obtained and parsed by using the interface function, and a task in the algorithm may be scheduled and executed by using the software components.
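Assuming registration information arrives as JSON and that software components keep simple task queues (both illustrative choices, not AUTOSAR API), the parse-and-schedule flow might look like:

```python
import json

def interface_function(registration):
    """Parse the registration information of a to-be-deployed algorithm."""
    info = json.loads(registration)
    return info["algorithm"], info["tasks"]

class SoftwareComponent:
    """Minimal stand-in for a software component deployed in the processor."""
    def __init__(self, name):
        self.name = name
        self.queue = []

    def schedule(self, task):
        self.queue.append(task)

    def run(self):
        return [f"{self.name}:{task}" for task in self.queue]

registration = '{"algorithm": "lane_detect", "tasks": ["preprocess", "infer"]}'
algorithm, algo_tasks = interface_function(registration)
first_swc, second_swc = SoftwareComponent("swc1"), SoftwareComponent("swc2")
for swc, task in zip([first_swc, second_swc], algo_tasks):
    swc.schedule(task)
```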

Method for Scheduling Hardware Accelerator and Task Scheduler
20230022294 · 2023-01-26 ·

A task scheduler is connected between a central processing unit (CPU) and each hardware accelerator. The task scheduler first obtains a target task (for example, from a memory) and obtains a dependency relationship between the target task and an associated task. When it is determined, based on the dependency relationship, that a first associated task in the associated task has been executed (for example, when a prerequisite for executing the target task is that both a task 1 and a task 2 have been executed), the target task meets an execution condition, and the task scheduler schedules the related hardware accelerators to execute the target task. Based on the dependency relationship between tasks, the task scheduler schedules, through hardware scheduling, each hardware accelerator to execute each task, and each task is delivered through direct hardware access.
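The readiness rule above amounts to a topological dispatch over the dependency graph. A minimal sketch, with invented task names and a `dispatch` stand-in for direct hardware delivery (the graph is assumed acyclic):

```python
# Dependency relationships: each task maps to the associated tasks that must
# finish first. Task names are illustrative.
deps = {"target": {"task1", "task2"}, "task1": set(), "task2": {"task1"}}
accelerator_log = []

def dispatch(task):
    """Stand-in for delivering a task to a hardware accelerator by direct access."""
    accelerator_log.append(task)

def schedule_all(deps):
    done, pending = set(), set(deps)
    while pending:
        # A task meets its execution condition once all associated tasks are done.
        ready = sorted(t for t in pending if deps[t] <= done)
        for task in ready:
            dispatch(task)
            done.add(task)
            pending.remove(task)
    return accelerator_log

schedule_all(deps)
```

`target` is dispatched only after both `task1` and `task2` complete, mirroring the execution-condition check in the abstract.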

Technologies for providing efficient migration of services at a cloud edge

Technologies for providing efficient migration of services include a server device. The server device includes compute engine circuitry to execute a set of services on behalf of a terminal device and migration accelerator circuitry. The migration accelerator circuitry is to determine whether execution of the services is to be migrated from an edge station in which the present server device is located to a second edge station in which a second server device is located, determine a prioritization of the services executed by the server device, and send, in response to a determination that the services are to be migrated and as a function of the determined prioritization, data utilized by each service to the second server device of the second edge station to migrate the services. Other embodiments are also described and claimed.
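Reduced to its core, the prioritized migration step is a sort-and-send loop; the service names, priority values, and `send` stand-in below are invented for illustration:

```python
# Hypothetical prioritization of services; higher-priority data is sent first.
services = {"video_decode": 9, "telemetry": 3, "map_update": 6}
sent = []

def send(destination, service):
    """Stand-in for sending a service's data to the second server device."""
    sent.append((destination, service))

def migrate(services, destination):
    # Transmit each service's data as a function of the determined prioritization.
    for service in sorted(services, key=services.get, reverse=True):
        send(destination, service)

migrate(services, "edge-station-2")
```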

SYSTEMS AND METHODS FOR INFORMATION MANAGEMENT SYSTEM TRANSACTION SCHEDULING AS WAIT-FOR-INPUT OR PSEUDO-WAIT-FOR-INPUT
20230127920 · 2023-04-27 ·

Systems and techniques for scheduling transactions as wait-for-input (WFI) or regions as pseudo-wait-for-input (P-WFI) include receiving a transaction report. The transactions from the transaction report are filtered using a plurality of metrics to generate a list of eligible WFI transactions or regions for running as P-WFI. That list is then filtered by applying a first system benchmark, based on the total number of executed transactions, to generate a list of top-eligible WFI transactions or regions for running as P-WFI. A total number of dedicated processing regions is calculated for each of the transactions on the top-eligible list. A scheduling report with a final list of top transactions eligible for scheduling as WFI, or regions for running as P-WFI, is generated based on the total number of dedicated processing regions.
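The two-stage filter plus region calculation can be sketched as a small pipeline; the report fields, the wait-time threshold, the 25% benchmark share, and the regions-per-4000-executions formula are all invented for illustration:

```python
# Hypothetical transaction report rows: (name, executed_count, avg_wait_ms).
report = [("tx_a", 5000, 120), ("tx_b", 200, 15), ("tx_c", 9000, 300)]

# Step 1: metric filter -- treat long-waiting transactions as WFI-eligible
# (the >= 100 ms threshold is an assumed metric).
eligible = [t for t in report if t[2] >= 100]

# Step 2: benchmark filter based on the total number of executed transactions
# (the 25% share is likewise an assumed benchmark).
total_executed = sum(t[1] for t in report)
top_eligible = [t for t in eligible if t[1] >= total_executed * 0.25]

# Step 3: dedicated processing regions per top-eligible transaction
# (one region per 4000 executions, minimum one -- an assumed formula).
regions = {name: max(1, count // 4000) for name, count, _ in top_eligible}
scheduling_report = sorted(regions.items())
```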

Workflow pipeline optimization based on machine learning operation for determining wait time between successive executions of the workflow

Embodiments provide workflow pipeline optimization in a computing environment. Execution of a workflow containing dependencies between one or more subject nodes and one or more observer nodes may be dynamically optimized by determining a wait time between successive executions of the workflow for the one or more observer nodes.
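As one way to realize the "determine a wait time" step without a full machine-learning stack, an exponential moving average over observed subject-node update intervals can stand in for the learned estimate; the smoothing factor and interval values are illustrative:

```python
def update_wait_time(prev_wait, observed_interval, alpha=0.3):
    """Exponential moving average standing in for a learned wait time between
    successive executions of the workflow for an observer node."""
    return (1 - alpha) * prev_wait + alpha * observed_interval

wait = 10.0                         # initial wait time in seconds
for interval in [8.0, 8.0, 8.0]:    # observed subject-node update intervals
    wait = update_wait_time(wait, interval)
# wait drifts from 10.0 toward the observed ~8-second cadence
```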

OPTIMIZER AGNOSTIC EXPLANATION SYSTEM FOR LARGE SCALE SCHEDULES

A computer-implemented method uses an artificial intelligence (A.I.) module to explain large-scale scheduling solutions. The method includes receiving an original instance of a resource-constrained scheduling problem. The instance includes a set of tasks, a variety of resource requirements, and a variety of constraints. An optimizer process determines a schedule for the set of tasks while minimizing the makespan of the schedule. A minimal set of resource links is generated based on resource dependencies between tasks, and the resource links are added to the original instance of the scheduling problem as precedence constraints. All resource constraints are then removed from the original instance of the resource-constrained scheduling problem. A set of critical tasks is computed using a non-resource-constrained critical path. Schedules are provided with an explanation of the optimized order of the set of tasks based on the use of the non-resource-constrained critical path.
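Once resource constraints are replaced by precedence links, the critical tasks fall out of a longest-path computation over the precedence DAG. A minimal sketch with invented task durations and precedence constraints:

```python
# Hypothetical task durations and precedence constraints, with resource
# dependencies already folded in as precedence links.
durations = {"a": 3, "b": 2, "c": 4, "d": 1}
preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}

memo = {}
def earliest_finish(task):
    """Longest-path finish time of a task, ignoring resource constraints."""
    if task not in memo:
        memo[task] = durations[task] + max(
            (earliest_finish(p) for p in preds[task]), default=0)
    return memo[task]

makespan = max(earliest_finish(t) for t in durations)

def critical_path():
    """Trace the non-resource-constrained critical path back from the last task."""
    task = max(durations, key=earliest_finish)
    path = [task]
    while preds[path[-1]]:
        path.append(max(preds[path[-1]], key=earliest_finish))
    return list(reversed(path))
```

Here the chain a → c → d sets the makespan, so those are the critical tasks the explanation would highlight.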

NEURAL NETWORK PROCESSOR USING COMPRESSION AND DECOMPRESSION OF ACTIVATION DATA TO REDUCE MEMORY BANDWIDTH UTILIZATION

A deep neural network (“DNN”) module can compress and decompress neuron-generated activation data to reduce the utilization of memory bus bandwidth. The compression unit can receive an uncompressed chunk of data generated by a neuron in the DNN module. The compression unit generates a mask portion and a data portion of a compressed output chunk. The mask portion encodes the presence and location of the zero and non-zero bytes in the uncompressed chunk of data. The data portion stores truncated non-zero bytes from the uncompressed chunk of data. A decompression unit can receive a compressed chunk of data from memory in the DNN processor or memory of an application host. The decompression unit decompresses the compressed chunk of data using the mask portion and the data portion. This can reduce memory bus utilization, allow a DNN module to complete processing operations more quickly, and reduce power consumption.
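The mask/data split can be sketched as follows; the abstract additionally truncates the stored non-zero bytes, a step this sketch omits for clarity:

```python
def compress(chunk):
    """Split an uncompressed chunk into a mask portion, encoding the presence
    and location of non-zero bytes, and a data portion of the non-zero bytes."""
    mask = [1 if byte != 0 else 0 for byte in chunk]
    data = bytes(byte for byte in chunk if byte != 0)
    return mask, data

def decompress(mask, data):
    """Rebuild the original chunk from the mask and data portions."""
    out, values = [], iter(data)
    for bit in mask:
        out.append(next(values) if bit else 0)
    return bytes(out)

chunk = bytes([0, 7, 0, 0, 42, 0, 3, 0])   # sparse activation data
mask, data = compress(chunk)
```

Because activation data after ReLU-style nonlinearities is often mostly zeros, storing one mask bit per byte plus only the non-zero bytes cuts the bytes moved across the memory bus.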

TECHNIQUES FOR PROVIDING CLOUD SERVICES ON DEMAND

Techniques are disclosed for deploying a computing resource (e.g., a service) in response to user input. A computer-implemented method can include operations of receiving (e.g., by a gateway computer of a cloud-computing environment) a request comprising an identifier for a computing component of the cloud-computing environment. The computing device receiving the request may determine whether the identifier exists in a routing table that is accessible to the computing device. If so, the request may be forwarded to the computing component. If not, the device may transmit an error code (e.g., to the user device that initiated the request) indicating the computing component is unavailable and a bootstrap request to a deployment orchestrator that is configured to deploy the requested computing component. Once deployed, the computing component may be added to a routing table such that subsequent requests can be properly routed to and processed by the computing component.
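The gateway's route-or-bootstrap decision can be sketched as a table lookup; the component names, addresses, and 503 error code are illustrative assumptions:

```python
routing_table = {"identity-service": "10.0.0.5"}   # hypothetical entries
bootstrap_requests = []                             # sent to the orchestrator

def handle_request(component_id):
    """Gateway logic: forward if the identifier is in the routing table,
    otherwise return an error and request a bootstrap deployment."""
    if component_id in routing_table:
        return ("forwarded", routing_table[component_id])
    bootstrap_requests.append(component_id)
    return ("error", 503)   # component unavailable

first = handle_request("billing-service")        # not yet deployed
routing_table["billing-service"] = "10.0.0.9"    # orchestrator deployed it
second = handle_request("billing-service")       # subsequent request routes
```

The first request fails but triggers deployment; once the orchestrator adds the component to the routing table, the subsequent request is routed normally.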