G06F9/3844

BRANCH PREDICTION CIRCUIT AND INSTRUCTION PROCESSING METHOD
20220350608 · 2022-11-03

A branch prediction circuit includes branch target address storage circuitry, higher order address storage circuitry, address generation circuitry, and branch instruction execution circuitry. The branch target address storage circuitry stores a first address of a branch instruction executed in the past, a lower order address of a second address of an instruction to be executed next, and information pertaining to a reference target for a higher order address of the second address and to whether or not reference is needed. The higher order address storage circuitry stores the higher order address of the second address. The address generation circuitry generates the second address when a third address of an instruction to be newly executed matches the first address. The branch instruction execution circuitry provides an instruction for speculative execution of the instruction having the second address.
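The split between lower-order bits in the target buffer and higher-order bits in a shared side table can be illustrated with a minimal Python sketch. The class and constant names (`SplitBTB`, `LOW_MASK`) and the 16-bit split are illustrative assumptions, not details from the abstract:

```python
# Sketch of the described split-storage branch predictor.
# LOW_MASK and the 16-bit split are assumed, not specified by the patent.

LOW_MASK = 0xFFFF  # lower-order bits kept in the branch target entry

class SplitBTB:
    def __init__(self):
        self.high_table = []   # higher order address storage circuitry
        self.entries = {}      # first address -> (low bits, high index, need reference)

    def record(self, branch_addr, target_addr):
        high = target_addr & ~LOW_MASK
        low = target_addr & LOW_MASK
        # Reference the higher-order storage only when the target's high
        # bits differ from the branch's own high bits.
        if high == branch_addr & ~LOW_MASK:
            self.entries[branch_addr] = (low, None, False)
        else:
            if high not in self.high_table:
                self.high_table.append(high)
            self.entries[branch_addr] = (low, self.high_table.index(high), True)

    def predict(self, fetch_addr):
        # Generate the second address when a newly executed instruction's
        # address (the third address) matches a stored first address.
        entry = self.entries.get(fetch_addr)
        if entry is None:
            return None
        low, idx, need_ref = entry
        high = self.high_table[idx] if need_ref else fetch_addr & ~LOW_MASK
        return high | low
```

Storing only the low bits per entry, plus a small index into a deduplicated high-bits table, is what lets the buffer hold more targets in the same area.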

CORE-BASED SPECULATIVE PAGE FAULT LIST

An embodiment of an integrated circuit may comprise an instruction decoder to decode one or more instructions to be executed by a core, and circuitry coupled to the instruction decoder, the circuitry to determine if a decoded instruction involves a page to be fetched, and determine one or more hints for one or more optional pages that may be fetched along with the page for the decoded instruction. Other embodiments are disclosed and claimed.
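One way to read the hint mechanism is as a function from the faulting access to a required page plus a short list of optional pages. The sketch below assumes a 4 KiB page and simple sequential or stride-based hinting; the function names and the hint policy are illustrative, not taken from the embodiment:

```python
PAGE_SIZE = 4096  # assumed page size

def page_of(addr):
    return addr // PAGE_SIZE

def fault_hints(access_addr, stride=0, n_hints=2):
    """Return the page that must be fetched for the decoded instruction,
    plus optional hint pages that may be fetched along with it.
    A known access stride steers the hints; otherwise hint sequentially."""
    base = page_of(access_addr)
    step = max(1, abs(stride) // PAGE_SIZE) if stride else 1
    direction = -1 if stride < 0 else 1
    hints = [base + direction * step * i for i in range(1, n_hints + 1)]
    return base, hints
```

The hints are optional by construction: the fault handler must map the base page, but may ignore or partially honor the hint list.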

AUTOMATIC GENERATION OF CONVERGENT DATA MAPPINGS FOR BRANCHES IN AN INTEGRATION WORKFLOW

An approach improves integration workflows by automatically generating, by a computer, convergent data mappings for branches in an integration workflow. A branch schema for each branch is generated, wherein the branch schema represents the union of all the individual node output schemas on the branch. A common output schema for a convergence point is generated, wherein the common output schema represents an intersection of all the branch schemas, and branch mappings are generated from each branch node to the common output schema.
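The union/intersection construction maps directly onto set operations. A minimal sketch, modeling each schema as a set of field names (a simplifying assumption; real schemas carry types and nesting):

```python
def branch_schema(node_schemas):
    # Union of all the individual node output schemas on the branch.
    out = set()
    for schema in node_schemas:
        out |= schema
    return out

def common_output_schema(branch_schemas):
    # Intersection of all the branch schemas at the convergence point.
    common = set(branch_schemas[0])
    for schema in branch_schemas[1:]:
        common &= schema
    return common

def branch_mapping(branch, common):
    # Map every field the branch can supply onto the common output schema.
    return {field: field for field in common if field in branch}
```

Because the common schema is an intersection, every branch can populate every field of it, so the generated mappings are total by construction.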

Branch prediction in a data processing apparatus

An apparatus comprises instruction fetch circuitry to retrieve instructions from storage and branch target storage to store entries comprising source and target addresses for branch instructions. A confidence value is stored with each entry and when a current address matches a source address in an entry, and the confidence value exceeds a confidence threshold, instruction fetch circuitry retrieves a predicted next instruction from a target address in the entry. Branch confidence update circuitry increases the confidence value of the entry on receipt of a confirmation of the target address and decreases the confidence value on receipt of a non-confirmation of the target address. When the confidence value meets a confidence lock threshold below the confidence threshold and non-confirmation of the target address is received, a locking mechanism with respect to the entry is triggered. A corresponding method is also provided.
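The interaction of the two thresholds can be sketched in Python. The concrete threshold values are assumptions (the abstract only requires the lock threshold to sit below the confidence threshold), and the entry layout is illustrative:

```python
class ConfidenceBTB:
    CONF_THRESHOLD = 3   # predict only above this (assumed value)
    LOCK_THRESHOLD = 1   # lock trigger level, below CONF_THRESHOLD (assumed)

    def __init__(self):
        self.entries = {}  # source address -> [target, confidence, locked]

    def install(self, source, target):
        self.entries[source] = [target, 0, False]

    def lookup(self, current):
        entry = self.entries.get(current)
        # Fetch from the stored target only when confidence exceeds the threshold.
        if entry and entry[1] > self.CONF_THRESHOLD:
            return entry[0]
        return None

    def update(self, source, confirmed):
        entry = self.entries.get(source)
        if entry is None or entry[2]:
            return  # locked entries are no longer updated in this sketch
        if confirmed:
            entry[1] += 1
        else:
            # Non-confirmation at or below the lock threshold triggers the
            # locking mechanism for this entry.
            if entry[1] <= self.LOCK_THRESHOLD:
                entry[2] = True
            entry[1] -= 1
```

The lock catches entries that mispredict while still at low confidence, preventing them from churning in the buffer.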

Separate branch target buffers for different levels of calls
11481221 · 2022-10-25

A computing device (e.g., a processor) having a plurality of branch target buffers. A first branch target buffer in the plurality of branch target buffers is used in execution of a set of instructions containing a call to a subroutine. In response to the call to the subroutine, a second branch target buffer is allocated from the plurality of branch target buffers for execution of instructions in the subroutine. The second branch target buffer is cleared before the execution of the instructions in the subroutine. The execution of the instructions in the subroutine is restricted to access the second branch target buffer and blocked from accessing branch target buffers other than the second branch target buffer.
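The allocate-clear-restrict sequence around a call can be modeled with a stack of buffer indices. A minimal sketch, assuming a fixed pool of buffers and modeling each buffer as a dict; the class name and pool size are illustrative:

```python
class CallLevelBTBs:
    def __init__(self, num_buffers=4):
        self.buffers = [dict() for _ in range(num_buffers)]
        self.stack = [0]                       # buffer index per call level
        self.free = list(range(1, num_buffers))

    def current(self):
        # Execution at this call level is restricted to this one buffer
        # and blocked from accessing any other.
        return self.buffers[self.stack[-1]]

    def call(self):
        idx = self.free.pop(0)    # allocate a buffer for the subroutine
        self.buffers[idx].clear() # cleared before the subroutine executes
        self.stack.append(idx)

    def ret(self):
        self.free.append(self.stack.pop())
```

Clearing on allocation plus the access restriction is what keeps branch targets trained by the caller invisible to the subroutine, and vice versa.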

METHOD, SYSTEM AND DEVICE FOR PIPELINE PROCESSING OF INSTRUCTIONS, AND COMPUTER STORAGE MEDIUM

A method, system and device for pipeline processing of instructions and a computer storage medium. The method comprises: acquiring a target instruction set (S101); acquiring a target prediction result, wherein the target prediction result is a result obtained by predicting a jump mode of the target instruction set (S102); performing pipeline processing on the target instruction set according to the target prediction result (S103); determining whether a pipeline flushing request is received (S104); and if so, correspondingly saving the target instruction set and a corresponding pipeline processing result, so as to perform pipeline processing on the target instruction set again on the basis of the pipeline processing result (S105). By means of the method, system, device, and computer-readable storage medium, a target instruction set and a corresponding pipeline processing result are correspondingly saved, so that when the target instruction set is subsequently processed again, the saved pipeline processing result can be directly used to perform pipeline processing, and the efficiency of pipeline processing of instructions can be improved.
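The save-on-flush and reuse-on-reprocess steps (S104/S105) can be sketched as a cache keyed by the target instruction set. This is one reading of the abstract; the class name, the tuple key, and the `run` callback are illustrative assumptions:

```python
class PipelineCache:
    def __init__(self):
        self.saved = {}  # target instruction set -> saved pipeline result

    def on_flush(self, instrs, partial_result):
        # S105: on a pipeline flushing request, save the target instruction
        # set together with its corresponding pipeline processing result.
        self.saved[tuple(instrs)] = partial_result

    def process(self, instrs, run):
        key = tuple(instrs)
        if key in self.saved:
            # Reuse the saved result directly instead of reprocessing
            # the instruction set from scratch.
            return self.saved[key]
        return run(instrs)
```

The efficiency claim rests on the lookup: a previously flushed instruction set skips prediction and re-execution entirely on its next pass.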

UPDATING METADATA PREDICTION TABLES USING A REPREDICTION PIPELINE

Aspects of the invention include a computer-implemented method of updating metadata prediction tables. The computer-implemented method includes establishing, in the metadata prediction tables, a prediction of how a set of instructions will resolve and identifying that the set of instructions is completed. The computer-implemented method also includes determining, upon completion of the set of instructions, whether prediction update queues (PUQs) associated with the set of instructions indicate that the set of instructions resolved in one of a plurality of proscribed manners relative to the prediction and deciding that the metadata prediction tables are candidates to be updated based on the PUQs indicating that the set of instructions resolved in one of the plurality of proscribed manners.
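A PUQ can be sketched as a queue of (tag, resolution) records that is consulted at completion time. The concrete proscribed manners listed below are illustrative assumptions; the abstract does not enumerate them:

```python
from collections import deque

# Illustrative proscribed manners of resolution; the patent does not name them.
PROSCRIBED = {"wrong_direction", "wrong_target"}

class PredictionUpdateQueue:
    def __init__(self):
        self.q = deque()

    def record(self, tag, resolution):
        # Queue how an instruction in the tagged set actually resolved.
        self.q.append((tag, resolution))

    def completed(self, tag):
        # On completion of the tagged set, decide whether the metadata
        # prediction tables are candidates for an update: did any queued
        # resolution fall in one of the proscribed manners?
        return any(t == tag and r in PROSCRIBED for t, r in self.q)
```

Deferring the check to completion is the point of the reprediction pipeline: table updates happen only for sets that demonstrably resolved badly, off the critical prediction path.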

Zero latency prefetching in caches

This invention involves a cache system in a digital data processing apparatus including: a central processing unit core; a level one instruction cache; and a level two cache. The cache lines in the second level cache are twice the size of the cache lines in the first level instruction cache. The central processing unit core requests additional program instructions when needed via a request address. Upon a miss in the level one instruction cache that causes a hit in the upper half of a level two cache line, the level two cache supplies the upper half of the level two cache line to the level one instruction cache. On a following level two cache memory cycle, the level two cache supplies the lower half of the cache line to the level one instruction cache. This cache technique thus prefetches the lower half of the level two cache line while employing fewer resources than an ordinary prefetch.
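The half-line service order can be sketched as a function returning the sequence of half-line transfers the level two cache performs. The 32/64-byte line sizes are assumptions (the abstract only fixes the 2:1 ratio):

```python
L1_LINE = 32   # assumed level one instruction cache line size
L2_LINE = 64   # level two lines are twice the level one line size

def l2_service(miss_addr, line_base):
    """Return the base addresses of the half-line transfers, in order,
    for an L1 instruction miss that hits the L2 line at line_base."""
    offset = miss_addr - line_base
    if offset >= L1_LINE:
        # Hit in the upper half: supply it first, then supply the lower
        # half on the following L2 cycle -- an implicit zero-latency prefetch.
        return [line_base + L1_LINE, line_base]
    # Hit in the lower half: ordinary single transfer of the demanded half.
    return [line_base]
```

No separate prefetch request is issued or tracked; the companion half rides along on the next cycle, which is why this costs fewer resources than an ordinary prefetch.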

Apparatuses and methods for speculative execution side channel mitigation

Methods and apparatuses relating to mitigations for speculative execution side channels are described. Speculative execution hardware and environments that utilize the mitigations are also described. For example, three indirect branch control mechanisms and their associated hardware are discussed herein: (i) indirect branch restricted speculation (IBRS) to restrict speculation of indirect branches, (ii) single thread indirect branch predictors (STIBP) to prevent indirect branch predictions from being controlled by a sibling thread, and (iii) indirect branch predictor barrier (IBPB) to prevent indirect branch predictions after the barrier from being controlled by software executed before the barrier.

Flushing a fetch queue using predecode circuitry and prediction information

A data processing apparatus is provided. It includes control flow detection prediction circuitry that performs a presence prediction of whether a block of instructions contains a control flow instruction. A fetch queue stores, in association with prediction information, a queue of indications of the instructions, and the prediction information comprises the presence prediction. An instruction cache stores fetched instructions that have been fetched according to the fetch queue. Post-fetch correction circuitry receives the fetched instructions before they reach the decode circuitry. The post-fetch correction circuitry includes analysis circuitry that causes the fetch queue to be at least partly flushed in dependence on a type of a given fetched instruction and the prediction information associated with the given fetched instruction.
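The partial-flush decision can be sketched as a scan comparing each fetched instruction's type against the presence prediction stored with its fetch-queue entry. The entry layout and type names below are illustrative assumptions:

```python
# Illustrative control flow instruction types; the patent does not list them.
CONTROL_FLOW = {"branch", "call", "ret"}

def post_fetch_correct(queue):
    """queue: list of (instr_type, presence_prediction) pairs, oldest first.
    On the first entry whose actual type contradicts its presence
    prediction, flush everything fetched after it and return the flushed
    entries (they were fetched down a wrong path)."""
    for i, (instr_type, predicted) in enumerate(queue):
        actual = instr_type in CONTROL_FLOW
        if actual != predicted:
            flushed = queue[i + 1:]
            del queue[i + 1:]   # at least partly flush the fetch queue
            return flushed
    return []
```

Catching the mismatch between fetch and decode is cheaper than a full misprediction recovery: only the queue tail is discarded, and the correct entry itself survives.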