Patent classifications
G06F9/463
Rule engine optimization via parallel execution
A first graph that includes a plurality of containers is accessed. The containers each contain one or more rules that each have corresponding computer code. The containers are configured for sequential execution by a rule engine. The computer code corresponding to the one or more rules in each of the containers is electronically scanned. Based on the electronic scan, an interdependency among the rules is determined. Based on the determined interdependency, a second graph is generated. The second graph includes all of the rules of the containers, but not the containers themselves. At least some of the rules are configured for parallel execution by the rule engine.
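As a rough illustration only (the patent publishes no code), the sketch below flattens container-ordered rules into a rule-level dependency graph and groups independent rules into levels that could run in parallel. The Rule class, the read/write-based dependency heuristic, and the function names are assumptions made for this example.

```python
from collections import defaultdict

class Rule:
    def __init__(self, name, reads, writes):
        self.name = name          # rule identifier
        self.reads = set(reads)   # fields the rule's code reads
        self.writes = set(writes) # fields the rule's code writes

def build_rule_graph(containers):
    """Flatten container-ordered rules into a dependency graph of rules."""
    rules = [r for container in containers for r in container]   # drop the containers
    edges = defaultdict(set)
    for i, earlier in enumerate(rules):
        for later in rules[i + 1:]:
            # a later rule depends on an earlier one if it reads what the
            # earlier rule writes (a simple read-after-write heuristic)
            if earlier.writes & later.reads:
                edges[later.name].add(earlier.name)
    return rules, edges

def parallel_levels(rules, edges):
    """Group rules into levels; rules within one level may execute in parallel."""
    remaining = {r.name for r in rules}
    levels = []
    while remaining:
        ready = {n for n in remaining if not (edges[n] & remaining)}
        levels.append(sorted(ready))
        remaining -= ready
    return levels
```

A rule engine could then walk the returned levels in order, dispatching every rule within a level concurrently, which mirrors the second graph of rules without containers.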
Iterative learning processes for executing code of self-optimizing computation graphs based on execution policies
A method includes receiving code of an application, the code structured as a plurality of instructions in a computation graph that corresponds to operational logic of the application. The method also includes processing the code according to an iterative learning process. The iterative learning process includes determining whether to adjust an exploration rate associated with the iterative learning process based on a state of a computing environment. Additionally, the process includes executing the plurality of instructions of the computation graph according to an execution policy that indicates certain instructions to be executed in parallel. The process also includes determining an execution time for executing the plurality of instructions of the computation graph according to the execution policy and, based on the execution time and the exploration rate, adjusting the execution policy to reduce the execution time in a subsequent iteration.
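A minimal sketch of one such iteration, assuming caller-supplied callbacks for executing the graph, perturbing the policy, and sampling environment load; none of these names or thresholds come from the patent.

```python
import random
import time

def refine_policy(graph, policy, best_time, epsilon,
                  execute_fn, perturb_fn, load_fn):
    """One iteration: optionally explore a new execution policy, keep it if faster."""
    # lower the exploration rate when the computing environment is busy
    if load_fn() > 0.8:
        epsilon *= 0.5

    # explore a perturbed parallelization policy, or exploit the current best
    candidate = perturb_fn(policy) if random.random() < epsilon else policy

    start = time.perf_counter()
    execute_fn(graph, candidate)          # run the instructions per the policy
    elapsed = time.perf_counter() - start

    if elapsed < best_time:               # adopt the policy that reduced execution time
        return candidate, elapsed, epsilon
    return policy, best_time, epsilon
```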
PIPELINE MANAGER
The exemplary embodiments are related to a pipeline manager configured to manage a software development pipeline. The pipeline manager receives, via a user interface (UI), a representation of a pipeline comprising a plurality of blocks, wherein each block comprises a defined input and a defined output, executes each block of the pipeline, validates the output of each block based on the execution of the block, stores the output of each block, and updates data that defines the pipeline based on the output of each block.
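A minimal sketch of the execute-validate-store loop, assuming each block is a callable with a validation check against its defined output; the Block and PipelineManager names are illustrative, not from the patent.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Block:
    name: str
    run: Callable[[Any], Any]         # the block's work on its defined input
    validate: Callable[[Any], bool]   # checks the block's defined output contract

class PipelineManager:
    def __init__(self, blocks):
        self.blocks = blocks
        self.outputs = {}             # stored output of each block

    def execute(self, data):
        for block in self.blocks:
            result = block.run(data)
            if not block.validate(result):
                raise ValueError(f"block {block.name} produced invalid output")
            self.outputs[block.name] = result   # store the validated output
            data = result                       # update the data defining the pipeline
        return data
```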
COOPERATIVE INPUT/OUTPUT OF ADDRESS MODES FOR INTEROPERATING PROGRAMS
Aspects of the invention include creating a first file control block in a primary runtime environment with a first addressing mode and a second file control block in a secondary runtime environment with a second addressing mode, where both the first file control block and the second file control block describe a status of a first file of a caller program in the primary runtime environment. The parameters of the first file of the caller program in the primary runtime environment are passed to a target callee program in the secondary runtime environment. An anchor is added in the first file control block as a link to the second file control block. The first file control block and the second file control block are synchronized with updates to the first file in the primary runtime environment and with the passed parameters of the first file in the secondary runtime environment.
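Purely as a data-structure illustration (real file control blocks live in runtime control storage, and the field names below are invented), the sketch shows the anchor link from the primary-environment block to the secondary-environment block and a synchronization step across that link.

```python
class FileControlBlock:
    def __init__(self, amode, status):
        self.amode = amode            # addressing mode of the owning runtime
        self.status = dict(status)    # file status as seen by that runtime
        self.anchor = None            # link to the FCB in the other runtime

def link(primary_fcb, secondary_fcb):
    """Anchor the secondary-environment FCB from the primary-environment FCB."""
    primary_fcb.anchor = secondary_fcb

def synchronize(primary_fcb):
    """Propagate file status updates across the anchor in both directions."""
    secondary = primary_fcb.anchor
    if secondary is None:
        return
    # updates made in the primary runtime flow to the secondary FCB, and
    # parameters passed to the callee flow back to the primary FCB
    secondary.status.update(primary_fcb.status)
    primary_fcb.status.update(secondary.status)
```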
Extended asynchronous data mover functions compatibility indication
A method is provided that is executable by a processor of a computer. Note that the processor is communicatively coupled to a memory of the computer, and the memory stores a response block of a call command. In implementing the method, the processor defines a sub-functions field in the response block of the call command. Further, the processor indicates that a set of functions of a set of instructions is installed and available at an interface based on a corresponding sub-functions flag within the sub-functions field being set. Note that the interface is also executed on the computer and that the set of functions is represented by the corresponding sub-functions flag. The processor further indicates that the set of functions of the set of instructions is not installed based on the corresponding sub-functions flag not being set.
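A hedged sketch of how such a flag might be tested: the response-block layout, field offset, and bit numbering below are assumptions for illustration, not the documented format of the call command.

```python
def subfunction_installed(response_block: bytes, field_offset: int, bit: int) -> bool:
    """Return True when the sub-functions flag for `bit` is set in the field."""
    byte_index, bit_index = divmod(bit, 8)
    field_byte = response_block[field_offset + byte_index]
    return bool(field_byte & (0x80 >> bit_index))   # bit 0 is the leftmost bit

# Example: test whether sub-function 3 is reported as installed and available.
response = bytes([0b00010000, 0x00])
print(subfunction_installed(response, field_offset=0, bit=3))  # True
```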
Initialization of parameters for machine-learned transformer neural network architectures
An online system trains a transformer architecture by an initialization method that allows the transformer architecture to be trained without normalization layers or learning rate warmup, resulting in significant improvements in computational efficiency for transformer architectures. Specifically, an attention block included in an encoder or a decoder of the transformer architecture generates the set of attention representations by applying a key matrix to the input key, a query matrix to the input query, and a value matrix to the input value to generate an output, and applying an output matrix to the output to generate the set of attention representations. The initialization method may be performed by scaling the parameters of the value matrix and the output matrix with a factor that is inverse to the number of encoders or the number of decoders.
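A sketch of the described scaling, assuming standard attention projection matrices and a literal 1/N reading of "a factor that is inverse to" the number of encoder or decoder layers; the exact factor and the baseline standard deviation are assumptions, not the patent's prescribed values.

```python
import numpy as np

def init_attention_block(d_model, num_layers, rng=None):
    """Initialize attention projections, then scale W_v and W_o by 1/num_layers."""
    rng = rng or np.random.default_rng(0)
    std = d_model ** -0.5
    W_q = rng.normal(0.0, std, (d_model, d_model))   # query matrix
    W_k = rng.normal(0.0, std, (d_model, d_model))   # key matrix
    W_v = rng.normal(0.0, std, (d_model, d_model))   # value matrix
    W_o = rng.normal(0.0, std, (d_model, d_model))   # output matrix
    scale = 1.0 / num_layers        # factor inverse to the number of layers (assumed form)
    return W_q, W_k, W_v * scale, W_o * scale
```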
VISUAL CONFORMANCE CHECKING OF PROCESSES
Systems and methods for determining conformance of a process based on a process model of the process and an event log of an execution of the process are provided. The process model is divided into one or more control regions and reachable nodes are determined for each node in the process model. Conformance of the process is determined by comparing transitions from source activities to destination activities in the event log with the reachable nodes based on the one or more control regions.
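As an illustrative simplification (control regions are collapsed into a flat reachability check, and the model is reduced to an adjacency map), a transition in the event log conforms when its destination activity is reachable from its source activity:

```python
def reachable_nodes(adjacency):
    """Compute the set of nodes reachable from each node in the process model."""
    reach = {}
    for start in adjacency:
        seen, stack = set(), list(adjacency[start])
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(adjacency.get(node, []))
        reach[start] = seen
    return reach

def check_conformance(adjacency, event_log):
    """Flag event-log transitions whose destination is not reachable from the source."""
    reach = reachable_nodes(adjacency)
    violations = []
    for src, dst in zip(event_log, event_log[1:]):
        if dst not in reach.get(src, set()):
            violations.append((src, dst))
    return violations
```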
Thread context preservation in a multithreading computer system
According to one aspect, a computer-implemented method for thread context preservation in a configuration including a core configurable between a single thread (ST) mode and a multithreading (MT) mode is provided. The ST mode addresses a primary thread, and the MT mode addresses the primary thread and one or more secondary threads on shared resources of the core. Based on determining, by the core in the MT mode, that MT is to be disabled, switching from the MT mode to the ST mode is performed, where the primary thread of the MT mode is maintained as the primary thread of the ST mode. A thread context including program accessible register values and program counter values of the one or more secondary threads is made inaccessible to programs. Based on the switching, the program accessible register values are either cleared or retained.
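A conceptual sketch only: the real mode switch is performed in processor firmware, and the class and field names below are invented to mirror the description above.

```python
class Core:
    def __init__(self, num_threads):
        self.mode = "MT"
        # per-thread context: program-accessible registers and a program counter
        self.contexts = [{"regs": [0] * 16, "pc": 0} for _ in range(num_threads)]
        self.accessible = list(range(num_threads))   # threads programs may address

    def disable_mt(self, clear_registers=False):
        """Switch MT -> ST, keeping thread 0 as the primary thread."""
        self.mode = "ST"
        self.accessible = [0]                  # secondary contexts become inaccessible
        if clear_registers:                    # either clear or retain the register values
            for ctx in self.contexts[1:]:
                ctx["regs"] = [0] * len(ctx["regs"])
                ctx["pc"] = 0
```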
Architecture for simulation clock-based simulation of distributed systems
Systems and methods are provided for the deterministic simulation of distributed systems, such as vehicle-based processing systems. A distributed system may be represented as a plurality of subsystems or “nodelets” executing within a single process of a computing device during a simulation. The nodelets may communicate using in-process communication. A task scheduler can schedule the nodelets to execute separately in serially-occurring frames. A simulated clock may be used to mitigate the variability in timestamped data that may be caused by latency or jitter.
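A minimal sketch of the scheduling idea, assuming each nodelet exposes a step(now) callback; the SimClock class, the frame period, and the run loop are illustrative assumptions.

```python
class SimClock:
    def __init__(self, start=0.0, frame_period=0.01):
        self.now = start
        self.frame_period = frame_period

    def advance(self):
        # the clock advances deterministically per frame, independent of
        # wall-clock latency or jitter
        self.now += self.frame_period
        return self.now

def run_simulation(nodelets, clock, frames):
    """Execute each nodelet serially within every frame of the simulation."""
    for _ in range(frames):
        now = clock.advance()
        for nodelet in nodelets:      # serially-occurring execution in one process
            nodelet.step(now)         # timestamps come from the simulated clock
```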
Autonomous job queueing system for hardware accelerators
Embodiments may relate to an electronic device that includes a processor communicatively coupled with a hardware accelerator. The processor may be configured to identify, based on an indication of a priority level in a task control block (TCB), a location at which the TCB should be inserted in a queue of TCBs. The hardware accelerator may perform jobs related to the queue of TCBs in an order related to the order of TCBs within the queue. Other embodiments may be described or claimed.
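A sketch of priority-ordered insertion into the TCB queue; the TCB layout, the lower-value-is-higher-priority convention, and the list-based queue are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class TCB:
    priority: int            # lower value = higher priority (assumed convention)
    payload: Any = None      # the job the hardware accelerator will perform

def insert_tcb(queue, tcb):
    """Insert the TCB ahead of the first queued entry with a lower priority level."""
    for i, queued in enumerate(queue):
        if tcb.priority < queued.priority:
            queue.insert(i, tcb)
            return
    queue.append(tcb)        # lowest priority so far: place at the tail

# The accelerator dequeues from the head, so higher-priority jobs run first.
```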