G06F9/3848

BRANCH PREDICTION IN A COMPUTER PROCESSOR

Branch prediction in a computer processor includes: fetching an instruction, the instruction comprising an address, the address comprising a first portion of a global history vector and a global history vector pointer; performing a first branch prediction in dependence upon the first portion of the global history vector; retrieving, in dependence upon the global history vector pointer, from a rolling global history vector buffer, a second portion of the global history vector; and performing a second branch prediction in dependence upon a combination of the first portion and second portion of the global history vector.
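The two-stage scheme above can be sketched as follows. This is a minimal Python model, not the patented implementation: the history widths, table sizes, 2-bit-counter tables, and the concatenation used to combine the two history portions are all assumptions for illustration.

```python
# Hypothetical widths; the abstract does not specify them.
GHV_FIRST_BITS = 4     # bits of global history carried in the instruction address
GHV_SECOND_BITS = 8    # bits retrieved from the rolling buffer

class RollingGHVBuffer:
    """Rolling buffer holding older portions of the global history vector."""
    def __init__(self, size=16):
        self.entries = [0] * size

    def retrieve(self, pointer):
        # The pointer carried in the address selects the second portion.
        return self.entries[pointer % len(self.entries)]

def first_prediction(table, ghv_first):
    # Fast prediction using only the history bits embedded in the address.
    return table[ghv_first] >= 2          # 2-bit counter: taken if >= 2

def second_prediction(table, ghv_first, ghv_second):
    # Slower prediction using the combined (concatenated) history.
    combined = (ghv_second << GHV_FIRST_BITS) | ghv_first
    return table[combined] >= 2
```

The point of the split is latency: the first prediction needs nothing beyond the fetched address, while the second, more accurate prediction waits on the buffer read.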

Instruction sequence buffer to store branches having reliably predictable instruction sequences
09733944 · 2017-08-15

A method for outputting reliably predictable instruction sequences. The method includes tracking repetitive hits to determine a set of frequently hit instruction sequences for a microprocessor, and out of that set, identifying a branch instruction having a series of subsequent frequently executed branch instructions that form a reliably predictable instruction sequence. The reliably predictable instruction sequence is stored into a buffer. On a subsequent hit to the branch instruction, the reliably predictable instruction sequence is output from the buffer.
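The tracking-and-replay mechanism can be sketched as below. The hit threshold and the dictionary-based buffer are stand-ins invented for this sketch; the patent does not specify how "frequently hit" or "reliably predictable" are decided.

```python
HIT_THRESHOLD = 4    # invented: hits needed before a sequence is cached

class SequenceBuffer:
    def __init__(self):
        self.hits = {}        # branch PC -> repetitive-hit count
        self.buffer = {}      # branch PC -> stored instruction sequence

    def record_hit(self, branch_pc, following_sequence):
        # Track repetitive hits to find frequently hit sequences.
        self.hits[branch_pc] = self.hits.get(branch_pc, 0) + 1
        if self.hits[branch_pc] >= HIT_THRESHOLD:
            # Frequently hit: store the predictable sequence in the buffer.
            self.buffer[branch_pc] = list(following_sequence)

    def fetch(self, branch_pc):
        # On a subsequent hit, output the sequence from the buffer
        # instead of re-fetching and re-predicting it.
        return self.buffer.get(branch_pc)
```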

Secure predictors for speculative execution
11429392 · 2022-08-30

Systems and methods are disclosed for secure predictors for speculative execution. Some implementations may eliminate or mitigate side-channel attacks, such as the Spectre-class of attacks, in a processor. For example, an integrated circuit (e.g., a processor) for executing instructions includes a predictor circuit that, when operating in a first mode, uses data stored in a set of predictor entries to generate predictions. For example, the integrated circuit may be configured to: detect a security domain transition for software being executed by the integrated circuit; responsive to the security domain transition, change a mode of the predictor circuit from the first mode to a second mode and invoke a reset of the set of predictor entries, wherein the second mode prevents the use of a first subset of the predictor entries of the set of predictor entries; and, after completion of the reset, change the mode back to the first mode.
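The mode-switching behavior described above can be modelled as a small state machine. The entry layout, the size of the protected subset, and the default prediction returned in the restricted mode are assumptions of this sketch, not details from the patent.

```python
class SecurePredictor:
    """Sketch of a predictor that restricts itself across security domains."""
    def __init__(self, num_entries=8, protected=4):
        self.entries = [0] * num_entries
        self.protected = protected     # first subset, blocked in the second mode
        self.mode = 1                  # mode 1: normal operation
        self.reset_pending = False

    def on_domain_transition(self):
        # Security domain change detected: enter the restricted mode
        # and invoke a reset of the predictor entries.
        self.mode = 2
        self.reset_pending = True

    def predict(self, index):
        if self.mode == 2 and index < self.protected:
            return 0                   # restricted entries yield a default
        return self.entries[index]

    def reset_complete(self):
        # After the reset finishes, return to the first mode.
        self.entries = [0] * len(self.entries)
        self.reset_pending = False
        self.mode = 1
```

Blocking the stale entries until the reset completes is what closes the Spectre-style window: predictions trained in one security domain cannot steer speculation in the next.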

TECHNIQUES FOR RESTORING PREVIOUS VALUES TO REGISTERS OF A PROCESSOR REGISTER FILE

A technique for operating a processor includes receiving, by a history buffer, a flush tag associated with an oldest instruction to be flushed from a processor pipeline. In response to the flush tag being older than a first instruction tag that identifies a first instruction associated with a current value stored in a register of the register file and younger than a second instruction tag that identifies a second instruction associated with a previous value that was stored in the register of the register file, the history buffer transfers the previous value for the register to the register file. In response to the flush tag not being older than the first instruction tag and younger than the second instruction tag, the history buffer does not transfer the previous value for the register to the register file (as such, the register maintains the current value following a pipeline flush).
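The age comparison that decides whether the history buffer restores a value can be sketched as a single predicate. This sketch assumes instruction tags are monotonically increasing IDs, so a numerically smaller tag is older; a real design must also handle tag wraparound, which is omitted here.

```python
def restore_on_flush(flush_tag, first_tag, second_tag,
                     current_value, previous_value):
    """Decide the register value after a pipeline flush.

    first_tag  identifies the instruction that produced the current value;
    second_tag identifies the instruction that produced the previous value.
    Smaller tag = older instruction (wraparound ignored in this sketch).
    """
    older_than_first = flush_tag < first_tag
    younger_than_second = flush_tag > second_tag
    if older_than_first and younger_than_second:
        # The flush removes the producer of the current value, so the
        # history buffer transfers the previous value back to the register.
        return previous_value
    # Otherwise the register keeps its current value through the flush.
    return current_value
```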

SPECULATIVE MULTI-THREADING TRACE PREDICTION

A method for trace prediction includes using trace prediction to predict a trace specifying branch decisions. When a branch misprediction is detected, trace prediction is terminated and prediction is continued using branch prediction.

HYPERVECTOR-BASED BRANCH PREDICTION
20170322810 · 2017-11-09

Systems and methods are directed to hypervector-based branch prediction. For a branch instruction whose direction is to be predicted, a taken distance between a current hypervector and a taken hypervector and a not-taken distance between the current hypervector and a not-taken hypervector are determined, wherein the current hypervector comprises an encoding of context of the branch instruction, the taken hypervector comprises an encoding of context of taken branch instructions, and the not-taken hypervector comprises an encoding of context of not-taken branch instructions. If the taken distance is less than the not-taken distance, the branch instruction is predicted to be taken; conversely, if the not-taken distance is less than the taken distance, the branch instruction is predicted to be not-taken.
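The distance comparison can be sketched as below. Modelling hypervectors as Python integers treated as bit vectors, and using Hamming distance as the metric, are assumptions of this sketch; the abstract does not name the distance measure or say how a tie is resolved.

```python
def hamming(a, b):
    # Hamming distance between two bit-vector-encoded hypervectors.
    return bin(a ^ b).count("1")

def predict_direction(current_hv, taken_hv, not_taken_hv):
    """Predict a branch by comparing hypervector distances."""
    d_taken = hamming(current_hv, taken_hv)
    d_not_taken = hamming(current_hv, not_taken_hv)
    if d_taken < d_not_taken:
        return True       # current context is closer to taken history
    if d_not_taken < d_taken:
        return False      # current context is closer to not-taken history
    return None           # tie: the abstract leaves this case open
```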

Managed instruction cache prefetching

Disclosed is an apparatus and method to manage instruction cache prefetching from an instruction cache. A processor may comprise: a prefetch engine; a branch prediction engine to predict the outcome of a branch; and a dynamic optimizer. The dynamic optimizer may be used to identify common instruction cache misses and to insert a prefetch instruction, from the prefetch engine, into the instruction cache.
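The optimizer's role can be sketched as a miss-profiling loop. The per-address miss counter and its threshold are inventions of this sketch; the patent does not specify how a miss qualifies as "common".

```python
MISS_THRESHOLD = 2    # invented: misses at an address before it is "common"

class DynamicOptimizer:
    """Sketch: profile I-cache misses and mark targets for prefetching."""
    def __init__(self):
        self.miss_counts = {}
        self.prefetch_targets = set()

    def on_icache_miss(self, addr):
        # Identify common instruction cache misses by counting them.
        self.miss_counts[addr] = self.miss_counts.get(addr, 0) + 1
        if self.miss_counts[addr] >= MISS_THRESHOLD:
            # Common miss: a prefetch instruction would be inserted so the
            # prefetch engine fills the instruction cache ahead of fetch.
            self.prefetch_targets.add(addr)

    def should_prefetch(self, addr):
        return addr in self.prefetch_targets
```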

Flushing in a microprocessor with multi-step ahead branch predictor and a fetch target queue

A microprocessor is shown in which a branch predictor and an instruction cache are decoupled by a fetch-target queue (FTQ). The FTQ stores at least an instruction address whose branch prediction has been finished by the branch predictor. The instruction addresses queued in the FTQ are read out later as instruction-fetching addresses for the instruction cache. The instruction address that is input into the branch predictor and used for branch prediction leads the instruction-fetching address.
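The decoupling can be sketched as a simple FIFO between the two pipelines; a minimal model, with all interface names invented for this sketch:

```python
from collections import deque

class FetchTargetQueue:
    """Sketch of an FTQ decoupling the branch predictor from the I-cache."""
    def __init__(self):
        self.q = deque()

    def push_predicted(self, addr):
        # Branch prediction for `addr` has finished; queue it for fetch.
        self.q.append(addr)

    def next_fetch_address(self):
        # The instruction cache consumes addresses in prediction order,
        # lagging behind the address currently being predicted.
        return self.q.popleft() if self.q else None
```

Because the predictor pushes addresses faster than the cache consumes them, the predictor can run several steps ahead, which is what makes multi-step-ahead prediction (and the flushing problem the title refers to) possible.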

ADVANCED PROCESSOR ARCHITECTURE
20210406027 · 2021-12-30

The invention relates to a method for processing instructions out-of-order on a processor comprising an arrangement of execution units. The inventive method comprises looking up operand sources in a Register Positioning Table and setting operand input references of the instruction to be issued accordingly, checking for an Execution Unit (EXU) available for receiving a new instruction, and issuing the instruction to the available Execution Unit and entering a reference of the result register addressed by the instruction to be issued to the Execution Unit into the Register Positioning Table (RPT).
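The three issue steps named above (RPT lookup, EXU availability check, RPT update) can be sketched as follows. The table-as-dictionary representation and the stall-on-no-EXU behavior are assumptions of this sketch.

```python
class RegisterPositioningTable:
    """Sketch of an RPT: maps a result register to its producing EXU."""
    def __init__(self):
        self.table = {}    # register name -> Execution Unit reference

    def lookup(self, reg):
        # None means the operand comes from the register file itself.
        return self.table.get(reg)

    def record(self, reg, exu):
        self.table[reg] = exu

def issue(rpt, free_exus, src_regs, dst_reg):
    # 1. Look up operand sources in the RPT and set input references.
    operand_refs = [rpt.lookup(r) for r in src_regs]
    # 2. Check for an Execution Unit available to receive the instruction.
    if not free_exus:
        return None                    # stall: no EXU free this cycle
    exu = free_exus.pop()
    # 3. Enter the result register's new producer into the RPT.
    rpt.record(dst_reg, exu)
    return exu, operand_refs
```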

VIRTUAL 3-WAY DECOUPLED PREDICTION AND FETCH

A unified queue configured to perform decoupled prediction and fetch operations, and related apparatuses, systems, methods, and computer-readable media, is disclosed. The unified queue has a plurality of entries, where each entry is configured to store information associated with at least one instruction, and where the information comprises an identifier portion, a prediction information portion, and a tag information portion. The unified queue is configured to update the prediction information portion of each entry responsive to a prediction block, and to update the tag information portion of each entry responsive to a tag and TLB block. The prediction information may be updated more than once, and the unified queue is configured to take corrective action where a later prediction conflicts with an earlier prediction.
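The entry layout and the conflict handling can be sketched as below. The concrete field types and the choice of corrective action (discarding the younger entries, as a redirect would) are assumptions of this sketch, not details from the disclosure.

```python
class UnifiedQueueEntry:
    """One entry: an identifier, prediction info, and tag/TLB info."""
    def __init__(self, ident):
        self.ident = ident
        self.prediction = None   # updated in response to the prediction block
        self.tag_info = None     # updated in response to the tag and TLB block

class UnifiedQueue:
    def __init__(self):
        self.entries = []

    def allocate(self, ident):
        self.entries.append(UnifiedQueueEntry(ident))

    def update_prediction(self, ident, prediction):
        for i, e in enumerate(self.entries):
            if e.ident == ident:
                if e.prediction is not None and e.prediction != prediction:
                    # A later prediction conflicts with an earlier one:
                    # corrective action here is to drop the younger entries
                    # fetched down the now-wrong path.
                    del self.entries[i + 1:]
                e.prediction = prediction
                return

    def update_tag_info(self, ident, tag_info):
        for e in self.entries:
            if e.ident == ident:
                e.tag_info = tag_info
                return
```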