Patent classifications
G06F9/4887
Scheduling jobs
Provided are methods, systems, and computer program products for scheduling jobs. The method may include receiving a request for allocating resources for a first job, the job comprising information regarding a maximum amount of resources required by the job; determining a type of the first job; obtaining at least one backfill of the first job based on the determined type; allocating the maximum amount of resources from system resources to the first job; searching for a second job among waiting jobs to be allocated resources, the second job being suitable to be allocated resources unused by the first job from the maximum amount of resources allocated to the first job during the at least one backfill; and allocating the resources unused by the first job from the maximum amount of resources allocated to the first job to the second job in response to the first job running to the at least one backfill.
Signaling timeout and complete data inputs in cloud workflows
Included are a method and apparatus comprising computer code configured to cause a processor or processors to perform: obtaining an input of at least one of a task and a workflow; setting a timeout for the input of the at least one of the task and the workflow; determining whether the at least one of the task and the workflow observes a lack of data of the input for a duration equal to the timeout; determining, in response to determining that the at least one of the task and the workflow observed the lack of data of the input for the duration equal to the timeout, an unavailability of further data of the input; applying an update to the at least one of the task and the workflow based on determining the unavailability; and processing the at least one of the task and the workflow.
Automated streaming data model generation with parallel processing capability
An event stream processing (ESP) model is read that describes computational processes. (A) An event block object is received. (B) A new measurement value, a timestamp value, and a sensor identifier are extracted. (C) An in-memory data store is updated with the new measurement value, the timestamp value, and the sensor identifier. (A) through (C) are repeated until an output update time is reached. When the output update time is reached, data stored in the in-memory data store is processed and updated using data enrichment windows to define enriched data values that are output. The data enrichment windows include a gate window before each window that uses values computed by more than one window. The gate window sends a trigger to a next window when each value of the more than one window has been computed. The enrichment windows are included in the ESP model.
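The gate-window behavior can be illustrated with a small sketch (names are assumptions): a gate placed before a window that consumes values from several upstream windows waits until every upstream value has been computed, then fires a single trigger to the next window.

```python
class GateWindow:
    def __init__(self, upstream_names, trigger):
        self.pending = set(upstream_names)  # windows we still wait on
        self.trigger = trigger              # fired once all values arrive

    def on_value(self, window_name, value):
        self.pending.discard(window_name)
        if not self.pending:
            self.trigger()  # every upstream value computed: release next window

fired = []
gate = GateWindow({"avg", "stddev"}, trigger=lambda: fired.append(True))
gate.on_value("avg", 1.2)
print(fired)   # [] -- still waiting on "stddev"
gate.on_value("stddev", 0.3)
print(fired)   # [True] -- trigger sent to the next window
```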
CPU CLUSTER SHARED RESOURCE MANAGEMENT
Embodiments include an asymmetric multiprocessing (AMP) system having a first central processing unit (CPU) cluster comprising a first core type, and a second CPU cluster comprising a second core type, where the AMP system can update a thread metric for a first thread running on the first CPU cluster based at least on: a past shared resource overloaded metric of the first CPU cluster, and on-core metrics of the first thread. The on-core metrics of the first thread can indicate that the first thread contributes to contention of the same shared resource corresponding to the past shared resource overloaded metric of the first CPU cluster. The AMP system can assign the first thread to a different CPU cluster while other threads of the same thread group remain assigned to the first CPU cluster. The thread metric can include a Matrix Extension (MX) thread flag or a Bus Interface Unit (BIU) thread flag.
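A hypothetical sketch of the per-thread decision described above (the function name, cluster labels, and threshold are all assumptions): if the first cluster's shared resource was recently overloaded and a thread's on-core metrics show it contributes to that contention, only that thread is moved to another cluster while its thread group stays put.

```python
def assign_cluster(past_overloaded: bool, thread_biu_accesses: int,
                   contention_threshold: int = 1000) -> str:
    """Return which cluster the thread should run on, based on the
    cluster's past overload state and the thread's on-core metrics."""
    contributes = thread_biu_accesses >= contention_threshold
    if past_overloaded and contributes:
        return "cluster-2"  # move only this thread off the loaded cluster
    return "cluster-1"      # stay with its thread group

print(assign_cluster(True, 5000))   # cluster-2: overloaded + contributing
print(assign_cluster(True, 10))    # cluster-1: thread is not the culprit
print(assign_cluster(False, 5000))  # cluster-1: no past overload observed
```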
Systems and methods for performing concurrency analysis in simulation environments
Systems and methods analyze an executable simulation model to identify existing concurrency, determine opportunities for increasing concurrency, and develop proposed modifications for realizing the opportunities for increased concurrency. The systems and methods may label locations at the simulation model where concurrency exists, and provide information regarding the proposed modifications to increase the model's concurrency. The systems and methods may modify the simulation model if the additional concurrency is accepted. The systems and methods may operate within a higher-level programming language, and may develop the proposed modifications without lowering or translating the simulation model to a lower abstraction level. The systems and methods may also undo a modification, rolling the simulation model back to a prior design state. Accepting the proposed modifications may cause the simulation models to execute more efficiently, e.g., faster.
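One way to picture identifying existing concurrency, under an assumed representation (the dependency-dict model below is illustrative): blocks of a simulation model form a dependency graph, and blocks in the same topological level have no data dependence on each other, so they are candidates to execute concurrently.

```python
from graphlib import TopologicalSorter

def concurrent_groups(deps: dict[str, set[str]]) -> list[set[str]]:
    """Group model blocks into levels; blocks within a level can run
    concurrently because none depends on another in the same level."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    groups = []
    while ts.is_active():
        ready = set(ts.get_ready())  # all blocks whose inputs are satisfied
        groups.append(ready)
        for node in ready:
            ts.done(node)
    return groups

# gain1 and gain2 both depend only on the source -> they can run in parallel
model = {"source": set(), "gain1": {"source"}, "gain2": {"source"},
         "sum": {"gain1", "gain2"}}
print(concurrent_groups(model))
```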
METHOD FOR EXECUTING MULTITHREADED INSTRUCTIONS GROUPED INTO BLOCKS
A method for executing multithreaded instructions grouped into blocks. The method includes receiving an incoming instruction sequence using a global front end; grouping the instructions to form instruction blocks, wherein the instructions of the instruction blocks are interleaved with multiple threads; scheduling the instructions of the instruction blocks to execute in accordance with the multiple threads; and tracking execution of the multiple threads to enforce fairness in an execution pipeline.
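A toy sketch of the grouping and fairness tracking (the block size, the per-thread counters, and the min-count heuristic are assumptions, not the patent's mechanism): instructions tagged with thread ids are grouped into fixed-size blocks, and the scheduler prefers the block whose threads have executed the fewest instructions so far, so no thread starves.

```python
from collections import defaultdict

def group_into_blocks(instructions, block_size=4):
    """Split an interleaved (thread_id, op) stream into fixed-size blocks."""
    return [instructions[i:i + block_size]
            for i in range(0, len(instructions), block_size)]

def schedule(blocks):
    executed = defaultdict(int)  # per-thread executed-instruction count
    order, remaining = [], list(blocks)
    while remaining:
        # fairness: prefer the block whose threads have run the least so far
        block = min(remaining, key=lambda b: sum(executed[t] for t, _ in b))
        remaining.remove(block)
        for tid, _op in block:
            executed[tid] += 1
        order.append(block)
    return order, dict(executed)

stream = [(0, "add"), (1, "mul"), (0, "ld"), (1, "st"),
          (0, "add"), (0, "add"), (1, "br"), (1, "nop")]
blocks = group_into_blocks(stream)
_, counts = schedule(blocks)
print(counts)  # {0: 4, 1: 4}: both threads made equal progress
```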
System and method for multi-tenant implementation of graphics processing unit
A method for graphics processing, wherein a graphics processing unit (GPU) resource is allocated among applications, such that each application is allocated a set of time slices. Commands of draw calls are loaded to rendering command buffers in order to render an image frame for a first application. The commands are processed by the GPU resource within a first time slice allocated to the first application. The method includes determining that at least one command has not been executed at an end of the first time slice. The method includes halting execution of commands, wherein the remaining one or more commands are not processed in the first time slice. A GPU configuration is preserved for the commands after processing a last executed command, the GPU configuration being used when processing the remaining commands in a second time slice.
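A minimal sketch of the time-slice behavior, with assumed command/cost structures (none of these names come from the patent): commands are processed until the slice budget is exhausted; the unprocessed commands and the GPU configuration in effect after the last executed command are carried into the application's next slice.

```python
def run_slice(commands, budget, config):
    """Process (name, cost) commands that fit in `budget`; return the
    remaining commands plus the preserved configuration."""
    elapsed = 0
    for i, (name, cost) in enumerate(commands):
        if elapsed + cost > budget:
            # halt: remaining commands wait for the next time slice
            return commands[i:], config
        elapsed += cost
        if name.startswith("set_"):
            config = dict(config, **{name: True})  # config change takes effect
    return [], config

cmds = [("set_blend", 1), ("draw_a", 3), ("draw_b", 3)]
remaining, cfg = run_slice(cmds, budget=5, config={})
print(remaining)  # [('draw_b', 3)] carried into the second slice
print(cfg)        # {'set_blend': True} preserved across slices
remaining2, cfg = run_slice(remaining, budget=5, config=cfg)
print(remaining2)  # [] -- frame finished in the second slice
```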
INTELLIGENT ORCHESTRATION OF CLASSIC-QUANTUM COMPUTATIONAL GRAPHS
One example method includes: receiving a computation workflow defined by a graph that includes quantum computing nodes; receiving a catalogue of quantum computing instances that are available in a hybrid classic-quantum computation infrastructure; transforming the graph, using information from the catalogue, to create a first graph transformation in which each of the quantum computing nodes is assigned a respective candidate resource allocation that identifies candidate resources operable to execute a respective quantum algorithm associated with that quantum computing node; and optimizing the computation workflow by selecting, for each of the quantum computing nodes, a resource from the candidate resource allocation associated with that quantum computing node, the optimizing including transforming the first graph transformation to create a second graph transformation that specifies the selected resource for each node.
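The two graph transformations can be sketched as follows; the catalogue format, the qubit-capacity feasibility check, and the lowest-cost selection rule are all assumptions for illustration, not the patent's optimization criteria.

```python
catalogue = [
    {"name": "qpu-a", "qubits": 5,  "cost": 1.0},
    {"name": "qpu-b", "qubits": 27, "cost": 4.0},
    {"name": "sim-c", "qubits": 32, "cost": 0.5},
]
workflow = {"vqe": 12, "qaoa": 4}  # quantum node -> qubits required

def first_transformation(workflow, catalogue):
    """Candidate allocation: catalogue instances able to run each node."""
    return {node: [i for i in catalogue if i["qubits"] >= need]
            for node, need in workflow.items()}

def second_transformation(candidates):
    """Select one resource per node (here: the lowest-cost candidate)."""
    return {node: min(insts, key=lambda i: i["cost"])["name"]
            for node, insts in candidates.items()}

cands = first_transformation(workflow, catalogue)
print(second_transformation(cands))  # {'vqe': 'sim-c', 'qaoa': 'sim-c'}
```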
Programmable controller
A programmable controller allocates times obtained by dividing the executable time in one operation cycle to a plurality of paths, executes sequence programs of the paths within the respective allocated times, and measures extra time that is the remainder of each of the allocated times when execution of each of the sequence programs ends. Then, the programmable controller determines whether a predetermined sequence program is to be executed in the measured extra time, and instructs a sequence execution unit to execute the predetermined sequence program in the extra time in accordance with a result of the determination.
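A sketch of the extra-time mechanism under assumed numbers and names: each path gets a share of the cycle's executable time, the unused remainder of each share is pooled, and the predetermined sequence program runs only if the pooled extra time covers its estimated cost.

```python
def run_cycle(path_budgets, path_actual, extra_program_cost):
    """Pool each path's leftover time and decide whether the
    predetermined sequence program fits into it."""
    extra = 0.0
    for budget, actual in zip(path_budgets, path_actual):
        extra += max(0.0, budget - actual)   # leftover from this path
    run_extra = extra_program_cost <= extra  # does the extra program fit?
    return extra, run_extra

extra, ok = run_cycle([2.0, 2.0, 2.0], [1.2, 1.5, 2.0], extra_program_cost=1.0)
print(extra, ok)  # ~1.3 units of extra time -> the extra program runs
```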
QUANTUM CHIP CONTROLLER, QUANTUM COMPUTING PROCESSING SYSTEM AND ELECTRONIC APPARATUS
Embodiments of the present specification provide a quantum chip controller, a quantum computing processing system, and an electronic apparatus. The quantum chip controller includes: an instruction execution unit for executing a quantum instruction to generate a quantum event and its corresponding time point; and a quantum chip queue control unit including: an event queue for storing a quantum event to be executed, a time queue for storing a time point corresponding to the quantum event to be executed, and a time counter for counting time, wherein, when the time being counted in the time counter is equal to a time point in the time queue, a quantum event corresponding to the time point is read out from the event queue and is to be executed by a quantum chip, and wherein the time counter includes an enabling control section for controlling starting and pausing of counting of the time counter.
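The paired queues can be illustrated with a small sketch (class name, tick granularity, and the boolean enable flag are assumptions): quantum events and their time points are enqueued together, a counter ticks while enabled, and whenever the counter matches the head of the time queue the corresponding event is popped for the chip to execute.

```python
from collections import deque

class ChipQueueControl:
    def __init__(self):
        self.events = deque()  # quantum events awaiting execution
        self.times = deque()   # time point for each queued event
        self.counter = 0
        self.enabled = False   # enabling control: start/pause counting

    def enqueue(self, event, time_point):
        self.events.append(event)
        self.times.append(time_point)

    def tick(self):
        """Advance the counter one step (if enabled) and pop every event
        whose time point equals the new counter value."""
        fired = []
        if self.enabled:
            self.counter += 1
            while self.times and self.times[0] == self.counter:
                self.times.popleft()
                fired.append(self.events.popleft())
        return fired

ctrl = ChipQueueControl()
ctrl.enqueue("X-pulse", 2)
ctrl.enqueue("readout", 3)
ctrl.tick()          # counter paused: nothing happens
ctrl.enabled = True  # start counting
print(ctrl.tick())   # [] (counter = 1)
print(ctrl.tick())   # ['X-pulse'] (counter = 2)
print(ctrl.tick())   # ['readout'] (counter = 3)
```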