G06F9/4887

Heterogeneous system on a chip scheduler

Described are techniques for scheduling tasks on a heterogeneous system on a chip (SoC). The techniques include receiving a directed acyclic graph at a meta pre-processor that is associated with a heterogeneous SoC and communicatively coupled to a scheduler, wherein the directed acyclic graph corresponds to a control flow graph of tasks associated with an application executed by the heterogeneous SoC. The techniques further include determining a rank for a respective task in the directed acyclic graph, wherein the rank is based on a priority of the respective task and a slack in the directed acyclic graph. The techniques further include providing the respective task to the scheduler for execution on the heterogeneous SoC according to the rank.
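The rank computation described above can be sketched in Python. Here slack is taken as the difference between a task's latest and earliest start times under a critical-path analysis of the DAG, and a task ranks earlier when its priority is high and its slack is low; the task names, durations, priorities, and the exact rank formula are illustrative assumptions, not the patented method.

```python
def earliest_starts(tasks, deps, durations):
    """Earliest start of each task, walking the DAG via its dependencies."""
    est = {}
    def visit(t):
        if t not in est:
            est[t] = max((visit(p) + durations[p] for p in deps.get(t, [])), default=0)
        return est[t]
    for t in tasks:
        visit(t)
    return est

def slacks(tasks, deps, durations):
    """Slack = latest start - earliest start for each task in the DAG."""
    est = earliest_starts(tasks, deps, durations)
    makespan = max(est[t] + durations[t] for t in tasks)
    succs = {t: [] for t in tasks}          # reversed edges for latest starts
    for t, ps in deps.items():
        for p in ps:
            succs[p].append(t)
    lst = {}
    def visit(t):
        if t not in lst:
            lst[t] = min((visit(s) for s in succs[t]), default=makespan) - durations[t]
        return lst[t]
    for t in tasks:
        visit(t)
    return {t: lst[t] - est[t] for t in tasks}

def rank(task, priority, slack):
    # One plausible combination: higher priority first, then lower slack.
    return (-priority[task], slack[task])

# Illustrative DAG: b and c depend on a; d depends on b and c.
tasks = ["a", "b", "c", "d"]
deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
durations = {"a": 2, "b": 4, "c": 1, "d": 3}
priority = {"a": 1, "b": 2, "c": 2, "d": 1}
s = slacks(tasks, deps, durations)
order = sorted(tasks, key=lambda t: rank(t, priority, s))
```

In this sketch the rank only decides the order in which tasks are handed to the scheduler; the scheduler itself would still have to respect DAG dependencies when dispatching.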

Estimate and control execution time of a utility command

A method, system, and computer program product to plan and schedule executions of various utility tasks of a utility command during a maintenance window. The method includes receiving a utility command. The method may also include identifying possible utility tasks used to execute the utility command. The method may also include determining preferred utility tasks. The method may also include calculating a degree of parallelism for the preferred utility tasks. The method may also include generating a utility execution plan for the utility command. The method may also include analyzing the utility execution plan against resource constraints of a time window and sub time windows of the time window. The method may also include generating a time window execution plan for each sub time window of the sub time windows. The method may also include updating the utility execution plan with the time window execution plans.
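The planning steps above can be illustrated with a minimal sketch that packs preferred utility tasks into sub time windows under a per-window resource cap; the task names, costs, first-fit policy, and the parallelism formula are assumptions for illustration, not the patented planner.

```python
def plan(tasks, sub_windows):
    """tasks: {name: resource_cost}; sub_windows: [(window_id, capacity), ...].
    Returns {window_id: [task names]}, packing largest tasks first-fit."""
    assignment = {wid: [] for wid, _ in sub_windows}
    remaining = dict(sub_windows)               # capacity left per window
    for name, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):
        for wid, _ in sub_windows:
            if remaining[wid] >= cost:
                assignment[wid].append(name)
                remaining[wid] -= cost
                break
    return assignment

def degree_of_parallelism(window_capacity, per_task_cost):
    # How many equal-cost tasks can run concurrently within one window.
    return window_capacity // per_task_cost

# Illustrative utility tasks and two sub time windows of capacity 5 each.
result = plan({"reorg": 4, "runstats": 2, "copy": 3}, [("w1", 5), ("w2", 5)])
```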

Determining optimal placements of workloads on multiple platforms as a service in response to a triggering event

A computer-implemented method, a computer program product, and a computer system for placements of workloads in a system of multiple platforms as a service. A computer detects a triggering event for modifying a matrix that pairs respective workloads with respective platforms and includes attributes of running the respective workloads on the respective platforms. The computer recalculates the attributes in the matrix in response to the triggering event being detected. The computer determines optimal placements of the respective workloads on the respective platforms based on information in the matrix. The computer places the respective workloads on the respective platforms based on the optimal placements.
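A minimal sketch of the matrix-driven placement step, assuming each (workload, platform) cell holds a single cost attribute and the optimal placement minimizes that cost; the workload and platform names and the cost model are illustrative, not taken from the patent.

```python
def recalculate(matrix, cost_fn):
    """Recompute every (workload, platform) attribute after a triggering event."""
    return {(w, p): cost_fn(w, p) for (w, p) in matrix}

def optimal_placements(matrix):
    """Pick, for each workload, the platform with the lowest cost attribute."""
    placements = {}
    for (w, p), cost in matrix.items():
        if w not in placements or cost < matrix[(w, placements[w])]:
            placements[w] = p
    return placements

# Illustrative matrix: cost of running each workload on each platform.
matrix = {("web", "paas-a"): 3.0, ("web", "paas-b"): 2.0,
          ("batch", "paas-a"): 1.5, ("batch", "paas-b"): 4.0}
best = optimal_placements(matrix)
```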

Task Processing Method and Device, and Electronic Device

A task processing method, a task processing device and an electronic device are provided, which relate to the field of cloud computing technology and big data technology, in particular to the field of task processing technology. The task processing method includes: obtaining a task processing request for a to-be-processed task, the task processing request including processing time information of the to-be-processed task and a service type of the to-be-processed task; in the case that the processing time information of the to-be-processed task meets a triggering condition, writing the to-be-processed task into a corresponding message queue in accordance with the service type of the to-be-processed task, each message queue corresponding to a respective service type; and processing the to-be-processed task in the message queue, to obtain a task processing result of the to-be-processed task.
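The queueing scheme above can be sketched with one in-memory queue per service type; the triggering condition (the task's scheduled time has arrived) and the service names are assumptions for illustration.

```python
import time
from collections import defaultdict, deque

queues = defaultdict(deque)  # one message queue per service type

def submit(task, service_type, not_before):
    """Write the task to the queue for its service type once its
    processing-time condition (assumed: scheduled time reached) is met."""
    if time.time() >= not_before:
        queues[service_type].append(task)
        return True
    return False

def process(service_type, handler):
    """Drain the queue for one service type, collecting task results."""
    results = []
    while queues[service_type]:
        results.append(handler(queues[service_type].popleft()))
    return results
```

For example, `submit({"id": 1}, "billing", 0)` enqueues immediately, and `process("billing", handler)` then returns the handler's results in arrival order.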

DEVICE, METHOD AND SYSTEM TO PROVIDE THREAD SCHEDULING HINTS TO A SOFTWARE PROCESS

Techniques and mechanisms for providing a thread scheduling hint to an operating system of a processor which comprises first cores and second cores. In an embodiment, the first cores are of a first type which corresponds to a first range of sizes, and the second cores are of a second type which corresponds to a second range of sizes smaller than the first range. A power control unit (PCU) of the processor detects that an inefficiency of a first operational mode of the processor would exist while an indication of an amount of power to be available to the processor is below a threshold. Based on the detecting, the PCU hints to an executing software process that a given core is to be included in, or omitted from, a pool of cores available for thread scheduling. The hint indicates the given core based on a relative prioritization of the first core type and the second core type.
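One possible reading of the hinting decision, sketched below: when the available power falls below the threshold, the smaller cores are prioritized and a large core is hinted out of the scheduling pool; otherwise a large core is hinted in. The core labels and the prioritization rule are assumptions, not the embodiment's actual policy.

```python
def scheduling_hint(cores, available_power, threshold):
    """cores: {core_id: "big" | "small"}. Returns (core_id, "omit" | "include"),
    the hint the PCU would pass to the executing software process."""
    big = sorted(c for c, t in cores.items() if t == "big")
    small = sorted(c for c, t in cores.items() if t == "small")
    if available_power < threshold and big:
        # Tight power budget: prioritize small cores, shed a big core.
        return (big[0], "omit")
    if big:
        # Ample power: keep the big core in the pool of schedulable cores.
        return (big[0], "include")
    return (small[0], "include") if small else (None, "include")

cores = {0: "big", 1: "big", 2: "small", 3: "small"}
```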

Efficiently Maintaining a Globally Uniform-in-Time Execution Schedule for a Dynamically Changing Set of Periodic Workload Instances
20230020580 · 2023-01-19 ·

An algorithm for efficiently maintaining a globally uniform-in-time execution schedule for a dynamically changing set of periodic workload instances is provided. At a high level, the algorithm operates by gradually adjusting execution start times in the schedule until they converge to a globally uniform state. In certain embodiments, the algorithm exhibits the property of “quick convergence,” which means that regardless of the number of periodic workload instances added or removed, the execution start times for all workload instances in the schedule will typically converge to a globally uniform state within a single cycle length from the time of the addition/removal event(s) (subject to a tunable “aggressiveness” parameter).
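The gradual-adjustment idea can be sketched as follows: each start time is moved a fraction (the "aggressiveness" parameter) of the way toward its ideal, uniformly spaced slot in the cycle. This adjustment rule is an illustrative interpretation of the abstract, not the patented algorithm; with aggressiveness 1.0 the sketch converges in a single step.

```python
def adjust(starts, cycle, aggressiveness=1.0):
    """Move each start time toward its uniformly spaced slot in the cycle.
    aggressiveness in (0, 1]: fraction of the gap closed per adjustment."""
    n = len(starts)
    ideal = [i * cycle / n for i in range(n)]      # globally uniform slots
    return [s + aggressiveness * (t - s) for s, t in zip(sorted(starts), ideal)]

# Four workload instances bunched near the start of a 12-unit cycle.
starts = [0.0, 1.0, 2.0, 9.0]
one_step = adjust(starts, cycle=12.0, aggressiveness=1.0)
```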

CRYPTO DEVICE OPERATION

Multiple work requests from different applications are queued to be processed subsequently, without interruption, by a crypto device. A prediction table is generated for each application to be processed by the crypto device. An initial credit value is determined for each incoming work request. Each work request is an entry in a queue ordered by time using respective time stamps. The next work request to be processed is selected from the entries in the queue by taking the first entry for which the credit value of the corresponding application is greater than or equal to the predicted execution time for the corresponding request type in the prediction table. The selected next work request is then processed.
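The selection rule can be sketched directly: walk the time-ordered queue and pick the first entry whose application's credit covers the predicted execution time for its request type. The application names, credit values, and prediction-table contents below are illustrative assumptions.

```python
def select_next(queue, credits, prediction):
    """queue: list of (timestamp, app, req_type) entries.
    Returns the first entry, in time-stamp order, that the application's
    credit can afford per the prediction table, or None if none qualifies."""
    for entry in sorted(queue, key=lambda e: e[0]):
        _, app, req_type = entry
        if credits[app] >= prediction[app][req_type]:
            return entry
    return None

# Illustrative state: app1 cannot yet afford its "sign" request.
queue = [(1, "app1", "sign"), (2, "app2", "verify"), (3, "app1", "verify")]
credits = {"app1": 5, "app2": 50}
prediction = {"app1": {"sign": 10, "verify": 4},
              "app2": {"verify": 20}}
chosen = select_next(queue, credits, prediction)
```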

OPERATION METHOD OF THE NON-UNIFORM MEMORY ACCESS SYSTEM

Provided is an operation method of a NUMA system, which includes: designating a page scan range including a plurality of pages; identifying a detour value for each of the plurality of pages; determining whether a detour value of a current target scan page is the same as a reference detour value; and releasing a connection of the current target scan page from the page table when determining that the detour value of the current target scan page is the same as the reference detour value.
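A minimal sketch of the scan loop described above: pages in the scan range whose detour value matches the reference value have their page-table connection released. The dictionary-based page table and the meaning assigned to the detour values here are assumptions for illustration only.

```python
def scan_and_release(page_table, detour_values, scan_range, reference):
    """Unmap every page in scan_range whose detour value equals reference.
    Returns the list of released page numbers."""
    released = []
    for page in scan_range:
        if detour_values.get(page) == reference:
            page_table.pop(page, None)   # release the page-table connection
            released.append(page)
    return released

# Illustrative state: five mapped pages with assumed detour values.
page_table = {p: f"frame{p}" for p in range(5)}
detour = {0: 1, 1: 0, 2: 1, 3: 2, 4: 0}
released = scan_and_release(page_table, detour, range(5), reference=0)
```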

Model Training Utilizing Parallel Execution of Containers
20230014399 · 2023-01-19 ·

Embodiments relate to systems and methods that create a final model by parallel training of models executed within separate containers. A master job, present within one container, performs pre-processing (e.g., noise reduction, duplicate removal) of incoming data. The master job orchestrates the training of individual models by child jobs that are executed in parallel within respective separate containers. After checking the status of completion of the child jobs (e.g., via HTTP or by reading local progress files), the master job references the trained models in order to determine a final model. This final model determination may comprise aggregating the trained models or selecting one model based upon a metric (such as an F1 score). Parallel training of models by child jobs executed within separate containers streamlines and accelerates model creation. Particular embodiments may be suited to train a model that identifies unique entities from incoming data including names and addresses.
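The orchestration pattern can be sketched with a master that runs child training jobs in parallel and then selects the best model by metric. In this sketch, threads stand in for separate containers, and the toy "model" and scoring function are illustrative stand-ins, not the embodiments' actual training or F1 computation.

```python
from concurrent.futures import ThreadPoolExecutor

def train_child(config):
    """Stand-in for a child job training one model in its own container."""
    model = {"config": config, "weight": config * 2}
    score = 1.0 / (1 + abs(config - 3))  # stand-in metric (e.g., an F1 score)
    return model, score

def master(configs):
    """Master job: run child jobs in parallel, then pick the best model.
    (Aggregating the trained models would be the alternative final step.)"""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(train_child, configs))
    return max(results, key=lambda r: r[1])[0]

best = master([1, 2, 3, 4])
```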

TASK MANAGING SYSTEM HAVING MULTIPLE TASK EXECUTION CONTROLLERS FOR TESTING-CONFIGURING VEHICLES AND METHOD THEREOF

A task managing system for testing and configuring one or more vehicles includes a plurality of task execution controllers. Each of the plurality of task execution controllers defines a set of communication nodes configured to wirelessly communicate with a set of vehicles of a plurality of vehicles. The task execution controller includes a processor configured to execute instructions stored in a non-transitory computer-readable medium to operate as a task application module configured to execute a task order on a selected vehicle from the set of vehicles by way of a selected communication node from the set of communication nodes. The task order defines one or more software-based tasks to be performed on the selected vehicle.