G06F9/30058

INFERRING FUTURE VALUE FOR SPECULATIVE BRANCH RESOLUTION

Aspects of the invention include determining a first instruction in a processing pipeline, wherein the first instruction includes a compare instruction, determining a second instruction in the processing pipeline, wherein the second instruction includes a conditional branch instruction relying on the compare instruction, determining a predicted result of the compare instruction, and completing the conditional branch instruction using the predicted result prior to executing the compare instruction.
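The idea above can be sketched as follows. This is a minimal illustration, not the patented design: the predictor here is a hypothetical majority vote over past compare outcomes, and `speculative_branch` and its parameters are invented names.

```python
def speculative_branch(history, operands, taken_target, fallthrough):
    """Resolve a conditional branch with a predicted compare outcome.

    `history` is a hypothetical record of past compare results used as
    the predictor; the real compare is assumed to execute afterwards
    and trigger a flush when `mispredicted` is True.
    """
    # Predict the compare result by majority vote over past outcomes.
    predicted = history.count(True) >= history.count(False)
    # Complete the branch speculatively, before the compare executes.
    next_pc = taken_target if predicted else fallthrough
    # The compare executes later; check the speculation against it.
    actual = operands[0] == operands[1]
    mispredicted = actual != predicted
    return next_pc, mispredicted
```

With a history that leans "equal" and operands that are in fact equal, the branch target is chosen early and the later compare confirms it.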

Apparatus and method for configuring sets of interrupts

An apparatus and method are described for efficiently processing and reassigning interrupts. For example, one embodiment of an apparatus comprises: a plurality of cores; and an interrupt controller to group interrupts into a plurality of interrupt domains, each interrupt domain to have a set of one or more interrupts assigned thereto and to map the interrupts in the set to one or more of the plurality of cores.
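A software analogue of the grouping described above might look like this; the class and method names are illustrative, not taken from the patent.

```python
class InterruptController:
    """Groups interrupts into domains, each mapped to a set of cores."""

    def __init__(self):
        self.domains = {}  # domain name -> (set of irq numbers, cores)

    def create_domain(self, name, irqs, cores):
        # Assign a set of interrupts to a domain and map it to cores.
        self.domains[name] = (set(irqs), list(cores))

    def route(self, irq):
        """Return the cores an interrupt is mapped to, or [] if none."""
        for irqs, cores in self.domains.values():
            if irq in irqs:
                return cores
        return []
```

Reassigning a whole domain to different cores then becomes a single update to its entry rather than per-interrupt rerouting.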

Conditional branching control for a multi-threaded, self-scheduling reconfigurable computing fabric
11573796 · 2023-02-07

Representative apparatus, method, and system embodiments are disclosed for configurable computing. A representative system includes an interconnection network; a processor; and a plurality of configurable circuit clusters. Each configurable circuit cluster includes a plurality of configurable circuits arranged in an array; a synchronous network coupled to each configurable circuit of the array; and an asynchronous packet network coupled to each configurable circuit of the array. A representative configurable circuit includes a configurable computation circuit and a configuration memory having a first, instruction memory storing a plurality of data path configuration instructions to configure a data path of the configurable computation circuit; and a second, instruction and instruction index memory storing a plurality of spoke instructions and data path configuration instruction indices for selection of a master synchronous input, a current data path configuration instruction, and a next data path configuration instruction for a next configurable computation circuit.
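The two-memory scheme can be caricatured in software as below. This is purely illustrative; the dictionary keys and the `select_config` function are invented for the sketch, not drawn from the disclosure.

```python
def select_config(spoke_memory, instr_memory, spoke_index):
    """Use a spoke instruction to pick the current data path
    configuration instruction and the index of the next one for the
    downstream configurable circuit."""
    spoke = spoke_memory[spoke_index]
    # First memory: data path configuration instructions.
    current = instr_memory[spoke["current_index"]]
    # Second memory supplies the master input selection and the index
    # handed to the next configurable computation circuit.
    next_for_downstream = spoke["next_index"]
    return spoke["master_input"], current, next_for_downstream
```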

Compression techniques for encoding stack trace information
11614969 · 2023-03-28

Embodiments provide a thread classification method that represents stack traces in a compact form using classification signatures. Some embodiments can receive a stack trace that includes a sequence of stack frames. Some embodiments may generate, based on the sequence of stack frames, a trace signature that represents the sequence. Some embodiments may receive one or more subsequent stack traces. For each of the one or more subsequent stack traces, some embodiments may determine whether a subsequent trace signature has been generated to represent the sequence of stack frames included within the subsequent stack trace. If not, some embodiments may generate, based on the trace signature and other subsequent trace signatures that were generated based on the trace signature, the subsequent trace signature to represent the subsequent sequence of stack frames.
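A simplified take on such signatures is sketched below: each distinct frame sequence gets one signature, and a trace that extends a previously seen sequence is stored as (parent signature, suffix) rather than repeating every frame. The class name and internal layout are assumptions for the sketch.

```python
class SignatureRegistry:
    """Assigns one compact signature per distinct stack-frame sequence."""

    def __init__(self):
        self._sigs = {}   # tuple of frames -> signature id
        self._defs = {}   # signature id -> (parent signature, suffix)
        self._next = 0

    def signature_for(self, frames):
        key = tuple(frames)
        if key in self._sigs:
            return self._sigs[key]          # already represented
        # Reuse the signature of the longest previously seen prefix.
        parent, cut = None, 0
        for n in range(len(frames) - 1, 0, -1):
            if tuple(frames[:n]) in self._sigs:
                parent, cut = self._sigs[tuple(frames[:n])], n
                break
        sig = self._next
        self._next += 1
        self._sigs[key] = sig
        self._defs[sig] = (parent, tuple(frames[cut:]))
        return sig
```

Repeated traces resolve to the same signature, and an extended trace is defined in terms of its parent's signature plus only the new frames.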

Method performed by a microcontroller for managing a NOP instruction and corresponding microcontroller

Disclosed herein is a method for managing NOP instructions in a microcontroller, the method comprising duplicating all jump instructions causing a NOP instruction to form a new instruction set; inserting an internal NOP instruction into each of the jump instructions; when a jump instruction is executed, executing a subsequent instruction of the new instruction set; and executing the internal NOP instruction when an execution of the subsequent instruction is skipped.
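One reading of the scheme is that every jump that would otherwise need an explicit NOP slot is duplicated into a variant carrying an internal NOP, so the slot disappears from the instruction stream. A toy interpreter under that assumption, with invented opcode names:

```python
def transform(program):
    """Duplicate each plain jump into a jump-with-internal-NOP opcode."""
    return [("JMP_NOP", op[1]) if op[0] == "JMP" else op for op in program]

def execute(program):
    trace, pc = [], 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "JMP_NOP":
            # The instruction after the jump is skipped; the internal
            # NOP executes in its place.
            trace.append("NOP")
            pc = op[1]
        else:
            trace.append(op[0])
            pc += 1
    return trace
```

The transformed program needs no separate NOP instruction after the jump, yet the skipped slot still costs one NOP's worth of execution.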

TECHNIQUES FOR PARALLEL EXECUTION
20220342673 · 2022-10-27

Apparatuses, systems, and techniques to identify instructions for advanced execution. In at least one embodiment, a processor performs one or more instructions that have been identified by a compiler to be speculatively performed in parallel.

Branch prediction in a data processing apparatus

An apparatus comprises instruction fetch circuitry to retrieve instructions from storage and branch target storage to store entries comprising source and target addresses for branch instructions. A confidence value is stored with each entry and when a current address matches a source address in an entry, and the confidence value exceeds a confidence threshold, instruction fetch circuitry retrieves a predicted next instruction from a target address in the entry. Branch confidence update circuitry increases the confidence value of the entry on receipt of a confirmation of the target address and decreases the confidence value on receipt of a non-confirmation of the target address. When the confidence value meets a confidence lock threshold below the confidence threshold and non-confirmation of the target address is received, a locking mechanism with respect to the entry is triggered. A corresponding method is also provided.
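The confidence and locking behaviour can be modelled as below. The threshold values and increment amounts are illustrative assumptions, as are the class and attribute names.

```python
PREDICT_THRESHOLD = 3  # confidence must exceed this to use the target
LOCK_THRESHOLD = 1     # at or below this, non-confirmation triggers a lock

class BTBEntry:
    """One branch target buffer entry with a confidence counter."""

    def __init__(self, source, target):
        self.source, self.target = source, target
        self.confidence = 0
        self.locked = False

    def predict(self, pc):
        """Return the target only on a source match with high confidence."""
        if pc == self.source and self.confidence > PREDICT_THRESHOLD:
            return self.target
        return None

    def update(self, confirmed):
        if confirmed:
            self.confidence += 1       # confirmation raises confidence
        else:
            if self.confidence <= LOCK_THRESHOLD:
                self.locked = True     # lock threshold reached
            self.confidence = max(0, self.confidence - 1)
```

Note the lock threshold sits below the prediction threshold, so an entry can stop being trusted for prediction well before the locking mechanism fires.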

DEVICES AND METHODS FOR EFFICIENT EXECUTION OF RULES USING PRE-COMPILED DIRECTED ACYCLIC GRAPHS

In one aspect, a computer implemented method for translating and executing rules using a directed acyclic graph is provided. The method includes transforming a ruleset into a directed acyclic graph. The directed acyclic graph includes a plurality of nodes and a plurality of branches. The method further includes identifying similarities across the plurality of branches. The method further includes grouping branches of the directed acyclic graph based on the identified similarities. The method further includes creating a modified directed acyclic graph based on the grouping. The method further includes selecting and using a method of processing a group of the modified directed acyclic graph based on an aspect of the group.
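A minimal sketch of the grouping-and-strategy step follows, assuming "similarity" means branches that test the same field; the tuple layout and strategy names are invented for illustration.

```python
from collections import defaultdict

def group_branches(branches):
    """Group DAG branches by the field they test.

    `branches` is a list of (field, op, value, outcome) rule edges.
    """
    groups = defaultdict(list)
    for field, op, value, outcome in branches:
        groups[field].append((op, value, outcome))
    return dict(groups)

def choose_strategy(group):
    # Many equality tests on one field can collapse into a hash lookup;
    # anything else falls back to sequential evaluation.
    return "hash-lookup" if all(op == "==" for op, _, _ in group) else "sequential"
```

Each group of the modified graph can then be processed with the method suited to its shape rather than a single global evaluation strategy.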

Separate branch target buffers for different levels of calls
11481221 · 2022-10-25

A computing device (e.g., a processor) having a plurality of branch target buffers. A first branch target buffer in the plurality of branch target buffers is used in execution of a set of instructions containing a call to a subroutine. In response to the call to the subroutine, a second branch target buffer is allocated from the plurality of branch target buffers for execution of instructions in the subroutine. The second branch target buffer is cleared before the execution of the instructions in the subroutine. The execution of the instructions in the subroutine is restricted to access the second branch target buffer and blocked from accessing branch target buffers other than the second branch target buffer.
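The allocation discipline can be sketched as a stack of buffers, one per call level; the class here is a software stand-in for the hardware structures, with invented names.

```python
class CallScopedBTBs:
    """One branch target buffer (modelled as a dict) per call level."""

    def __init__(self):
        self.stack = [{}]

    def call(self):
        # A fresh buffer is allocated, cleared, on entry to a subroutine.
        self.stack.append({})

    def ret(self):
        # The subroutine's buffer is discarded on return.
        self.stack.pop()

    def record(self, source, target):
        self.stack[-1][source] = target

    def lookup(self, source):
        # Only the current level's buffer is accessible; entries made
        # by the caller are blocked from view inside the subroutine.
        return self.stack[-1].get(source)
```

Because the subroutine starts from a cleared buffer and cannot read the caller's, branch targets recorded at one call level cannot leak into another.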

Computation and prediction of linked access

An example operation includes one or more of detecting a fork in a supply-chain by a modeling node, resolving, by the modeling node, a branch prediction to determine a likely access control, generating, by the modeling node, a range of information based on a branch confidence level, and responsive to the resolution of the branch prediction, revoking access to a document or granting greater access to the document based on the range.