G06F9/312

Cache management operations using streaming engine

A stream of data is accessed from a memory system using a stream of addresses generated in a first mode of operating a streaming engine in response to executing a first stream instruction. A block cache management operation is performed on a cache in the memory system using a block of addresses generated in a second mode of operating the streaming engine in response to executing a second stream instruction.
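
As a rough sketch (not the patented hardware), the C fragment below models the two modes: a data-access mode that walks the generated address stream, and a block mode that drives a cache-management operation over the same generated addresses. The mode names and the invalidate-style callback are illustrative assumptions.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical operating modes of the streaming engine. */
typedef enum { MODE_DATA_ACCESS, MODE_BLOCK_CACHE_OP } se_mode_t;

/* Placeholder cache-management operation (e.g., invalidate a line). */
static void cache_op(uintptr_t addr) {
    printf("cache op on line containing 0x%lx\n", (unsigned long)addr);
}

/* Walk a linear address sequence; in data mode the addresses feed loads,
   in block mode each address drives a cache-management operation. */
static void run_stream(se_mode_t mode, uintptr_t base,
                       size_t count, size_t stride) {
    for (size_t i = 0; i < count; i++) {
        uintptr_t addr = base + i * stride;
        if (mode == MODE_DATA_ACCESS)
            printf("load from 0x%lx\n", (unsigned long)addr);
        else
            cache_op(addr);
    }
}

int main(void) {
    run_stream(MODE_DATA_ACCESS, 0x1000, 4, 8);    /* first stream instruction */
    run_stream(MODE_BLOCK_CACHE_OP, 0x1000, 4, 8); /* second stream instruction */
    return 0;
}
```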

Method and system for implementing recovery from speculative forwarding miss-predictions/errors resulting from load store reordering and optimization
10592300 · 2020-03-17

A method for forwarding data from store instructions to a corresponding load instruction in an out-of-order processor. The method includes accessing an incoming sequence of instructions; reordering the instructions in accordance with processor resources for dispatch and execution; and identifying a closest earlier store in machine order for a corresponding load: if said store has an actual age but said corresponding load does not have an actual age, then said store is earlier than said corresponding load; if said corresponding load has an actual age but said store does not have an actual age, then said corresponding load is earlier than said store; if neither said corresponding load nor said store has an actual age, then a virtual identifier table is used to determine which is earlier; and if both said corresponding load and said store have actual ages, then the actual ages are used to determine which is earlier.
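
The four-case ordering rule lends itself to a small comparison function. The C sketch below is assumption-laden: the record layout and the reduction of the virtual identifier table to a stored VID field are illustrative, not the patented design.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative access record: an actual age is assigned at dispatch; entries
   not yet dispatched carry only a virtual identifier (VID). */
typedef struct {
    bool has_actual_age;
    unsigned actual_age; /* smaller = earlier in machine order */
    unsigned vid;        /* virtual identifier, smaller = earlier (assumed) */
} access_t;

/* Returns true if the store is earlier than the load in machine order,
   following the four-case rule from the abstract. */
static bool store_is_earlier(const access_t *st, const access_t *ld) {
    if (st->has_actual_age && !ld->has_actual_age)
        return true;                        /* store has an age, load does not */
    if (ld->has_actual_age && !st->has_actual_age)
        return false;                       /* load has an age, store does not */
    if (!st->has_actual_age && !ld->has_actual_age)
        return st->vid < ld->vid;           /* neither: fall back to VID table */
    return st->actual_age < ld->actual_age; /* both: compare actual ages */
}

int main(void) {
    access_t st = { true, 5, 2 }, ld = { false, 0, 7 };
    printf("store earlier? %d\n", store_is_earlier(&st, &ld));
    return 0;
}
```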

Streaming engine with error detection, correction and restart

Disclosed embodiments relate to a streaming engine employed in, for example, a digital signal processor. A fixed data stream sequence including plural nested loops is specified by a control register. The streaming engine includes an address generator producing addresses of data elements and a stream head register storing the data elements next to be supplied as operands. The streaming engine fetches stream data ahead of use by the central processing unit core in a stream buffer. Parity bits are formed upon storage of data in the stream buffer and are stored with the corresponding data. Upon transfer to the stream head register, a second parity is calculated and compared with the stored parity. The streaming engine signals a parity fault if the parities do not match. The streaming engine preferably restarts fetching the data stream at the data element that generated the parity fault.
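
A minimal C sketch of the parity scheme, assuming even parity over 64-bit elements and a single buffer entry; the error return standing in for the fault/restart signal is illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Even parity over a 64-bit data element. */
static unsigned parity64(uint64_t x) {
    x ^= x >> 32; x ^= x >> 16; x ^= x >> 8;
    x ^= x >> 4;  x ^= x >> 2;  x ^= x >> 1;
    return (unsigned)(x & 1);
}

typedef struct { uint64_t data; unsigned parity; } buf_entry_t;

/* Parity is formed when the element enters the stream buffer... */
static void buffer_store(buf_entry_t *e, uint64_t data) {
    e->data = data;
    e->parity = parity64(data);
}

/* ...and checked again on transfer to the stream head register.
   Returns 0 on success, -1 to signal a parity fault (restart point). */
static int transfer_to_head(const buf_entry_t *e, uint64_t *head) {
    if (parity64(e->data) != e->parity)
        return -1; /* parity fault: refetch the stream at this element */
    *head = e->data;
    return 0;
}

int main(void) {
    buf_entry_t e; uint64_t head = 0;
    buffer_store(&e, 0xDEADBEEF);
    e.parity ^= 1; /* inject a fault to exercise the check */
    printf(transfer_to_head(&e, &head) ? "parity fault, restart fetch\n"
                                       : "transfer ok\n");
    return 0;
}
```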

Streaming engine with cache-like stream data storage and lifetime tracking
10592243 · 2020-03-17

A streaming engine employed in a digital data processor specifies a fixed read-only data stream defined by plural nested loops. An address generator produces addresses of data elements. A stream head register stores the data elements next to be supplied to functional units for use as operands. The streaming engine fetches stream data ahead of use by the central processing unit core in a stream buffer constructed like a cache. The stream buffer cache includes plural cache lines, each including tag bits, at least one valid bit, and data bits. Cache lines are allocated to store newly fetched stream data. Cache lines are deallocated upon consumption of the data by a central processing unit core functional unit. Instructions preferably include operand fields with a first subset of codings corresponding to registers, a stream read-only operand coding, and a stream read-and-advance operand coding.
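
The cache-like stream buffer can be sketched as an array of lines with tag, valid, and data fields. The allocation policy below (first free slot) and the line geometry are simplifying assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_LINES  8
#define LINE_BYTES 64

/* Cache-line-like slot: tag bits, a valid bit, and data bits. */
typedef struct {
    uintptr_t tag;
    bool valid;
    uint8_t data[LINE_BYTES];
} stream_line_t;

static stream_line_t lines[NUM_LINES];

/* Allocate a free line for newly fetched stream data; -1 if none free. */
static int alloc_line(uintptr_t tag, const uint8_t *bytes) {
    for (int i = 0; i < NUM_LINES; i++) {
        if (!lines[i].valid) {
            lines[i].tag = tag;
            lines[i].valid = true;
            memcpy(lines[i].data, bytes, LINE_BYTES);
            return i;
        }
    }
    return -1;
}

/* Deallocate once a CPU functional unit has consumed the data. */
static void free_line(int i) { lines[i].valid = false; }

int main(void) {
    uint8_t fetched[LINE_BYTES] = {0};
    int i = alloc_line(0x4000, fetched);
    printf("allocated line %d\n", i);
    free_line(i); /* consumed by a functional unit */
    return 0;
}
```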

Efficient store-forwarding with partitioned FIFO store-reorder queue in out-of-order processor

Technical solutions are described for executing one or more out-of-order (OoO) instructions by a processing unit. The execution includes detecting, by a load-store unit (LSU), a load-hit-store (LHS) in an out-of-order execution of the instructions, the detecting based only on effective addresses. The detecting includes determining an effective address associated with an operand of a load instruction, determining whether a store instruction entry using said effective address to store a data value is present in a store reorder queue, and indicating that an LHS has been detected based at least in part on determining that a store instruction entry using said effective address is present in the store reorder queue. In response to detecting the LHS, store forwarding is performed, forwarding data from the store instruction to the load instruction.
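
A hedged C sketch of EA-only LHS detection: scan a store reorder queue for a valid entry whose effective address matches the load's EA and forward the data on a hit. The queue layout and the youngest-first scan order are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SRQ_SIZE 16

/* Store reorder queue entry, tracked purely by effective address (EA). */
typedef struct {
    bool valid;
    uint64_t ea;
    uint64_t data;
} srq_entry_t;

static srq_entry_t srq[SRQ_SIZE];

/* Detect a load-hit-store: on a matching valid entry, forward the store
   data to the load instruction. */
static bool lhs_forward(uint64_t load_ea, uint64_t *out) {
    for (int i = SRQ_SIZE - 1; i >= 0; i--) { /* youngest-first (assumed) */
        if (srq[i].valid && srq[i].ea == load_ea) {
            *out = srq[i].data; /* store-to-load forwarding */
            return true;
        }
    }
    return false;
}

int main(void) {
    srq[3] = (srq_entry_t){ true, 0x1000, 42 };
    uint64_t v;
    if (lhs_forward(0x1000, &v))
        printf("LHS detected, forwarded %llu\n", (unsigned long long)v);
    return 0;
}
```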

Instruction to cancel outstanding cache prefetches

Techniques relate to handling outstanding cache miss prefetches. A processor pipeline recognizes that a prefetch cancelling instruction is being executed. In response, all outstanding prefetches are evaluated against a criterion set forth by the prefetch cancelling instruction in order to select qualified prefetches. The cache subsystem is then directed to cancel the qualified prefetches that fit the criterion. Upon successful cancellation of the qualified prefetches, the local cache is prevented from being updated by the qualified prefetches.
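
As an illustration only, the sketch below models evaluating outstanding prefetches against a criterion (here assumed to be an address range supplied by the cancelling instruction) and dropping the qualified ones so their fills never update the local cache.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_PF 8

/* Outstanding prefetch: still in flight, not yet in the local cache. */
typedef struct { bool active; uint64_t addr; } prefetch_t;

static prefetch_t pf[MAX_PF];

/* Criterion set forth by the (hypothetical) cancelling instruction; here,
   cancel prefetches falling in an address range given by the instruction. */
static bool qualifies(const prefetch_t *p, uint64_t lo, uint64_t hi) {
    return p->active && p->addr >= lo && p->addr < hi;
}

/* Evaluate all outstanding prefetches, cancel the qualified ones, and mark
   them so the local cache is not updated when their data returns. */
static int cancel_prefetches(uint64_t lo, uint64_t hi) {
    int cancelled = 0;
    for (int i = 0; i < MAX_PF; i++) {
        if (qualifies(&pf[i], lo, hi)) {
            pf[i].active = false; /* cache subsystem drops the fill */
            cancelled++;
        }
    }
    return cancelled;
}

int main(void) {
    pf[0] = (prefetch_t){ true, 0x2000 };
    pf[1] = (prefetch_t){ true, 0x9000 };
    printf("cancelled %d prefetches\n", cancel_prefetches(0x1000, 0x3000));
    return 0;
}
```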

Operation of a multi-slice processor with an expanded merge fetching queue

Operation of a multi-slice processor that includes a plurality of execution slices and a plurality of load/store slices, where each load/store slice includes a load miss queue and a load reorder queue, includes: receiving, at a load reorder queue, a load instruction requesting data; responsive to the data not being stored in a data cache, determining whether a previous load instruction is pending a fetch of a cache line comprising the data; if the cache line does not comprise the data, allocating an entry for the load instruction in the load miss queue; and if the cache line does comprise the data, merging, in the load reorder queue, the load instruction with an entry for the previous load instruction.
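
A simplified C model of the merge path: on a miss, a load either merges with a pending fetch of the same cache line in the load reorder queue or falls through to load-miss-queue allocation. The 64-byte line size and queue shape are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LRQ_SIZE 8
#define LINE_MASK (~(uint64_t)63) /* assumed 64-byte cache lines */

/* Load reorder queue entry pending a cache-line fetch. */
typedef struct {
    bool pending;
    uint64_t line_addr;   /* cache line being fetched */
    int merged_loads;     /* later loads merged onto this fetch */
} lrq_entry_t;

static lrq_entry_t lrq[LRQ_SIZE];

/* On a data-cache miss: merge with a pending fetch of the same line if one
   exists; otherwise report that a load-miss-queue entry must be allocated. */
static bool try_merge(uint64_t load_addr) {
    uint64_t line = load_addr & LINE_MASK;
    for (int i = 0; i < LRQ_SIZE; i++) {
        if (lrq[i].pending && lrq[i].line_addr == line) {
            lrq[i].merged_loads++; /* merge into the previous load's entry */
            return true;
        }
    }
    return false; /* caller allocates a load miss queue entry */
}

int main(void) {
    lrq[0] = (lrq_entry_t){ true, 0x1000 & LINE_MASK, 0 };
    printf("merged? %d\n", try_merge(0x1008)); /* same line: merges */
    printf("merged? %d\n", try_merge(0x5000)); /* new line: miss queue */
    return 0;
}
```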

Load-hit-load detection in an out-of-order processor

Technical solutions are described for executing one or more out-of-order instructions by a load-store unit (LSU) by detecting a load-hit-load (LHL) case based only on effective addresses (EAs). An example method includes, in response to receiving a first load instruction, creating an entry in an LHL table. Further, in response to receiving a second load instruction in the load reorder queue, and in response to a predetermined number of bits from a second EA used by the second load instruction matching the same bits from the first EA, the first EA and the second EA are compared. Further, a first thread identifier for the first load instruction is compared with a second thread identifier for the second load instruction. In response to the first EA matching the second EA, and the first thread identifier matching the second thread identifier, the method includes flushing the first load instruction.
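
The two-stage comparison (partial EA bits first, then full EA plus thread identifier) can be sketched as below; the 12-bit tag width stands in for the unspecified "predetermined number of bits".

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Load-hit-load table entry created for the first load. */
typedef struct {
    bool valid;
    uint64_t ea;
    unsigned tid; /* thread identifier */
} lhl_entry_t;

#define EA_TAG_BITS 12 /* assumed "predetermined number of bits" */
#define EA_TAG(ea) ((ea) & ((1u << EA_TAG_BITS) - 1))

/* Second load: compare the low EA bits first; only on a tag match compare
   the full EAs and thread ids. Returns true if the first load must flush. */
static bool lhl_check(const lhl_entry_t *first, uint64_t ea2, unsigned tid2) {
    if (!first->valid || EA_TAG(first->ea) != EA_TAG(ea2))
        return false;                              /* cheap partial-EA filter */
    return first->ea == ea2 && first->tid == tid2; /* full match: flush */
}

int main(void) {
    lhl_entry_t first = { true, 0xABC0, 1 };
    printf("flush first load? %d\n", lhl_check(&first, 0xABC0, 1));
    return 0;
}
```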

Function evaluation using multiple values loaded into registers by a single instruction

A technique for efficient calling of functions on a processor generates an executable program having a function call by analysing an interface for the function that defines an argument expression and an internal value used solely within the function, and an argument declaration defining an argument value to be provided to the function when the program is run. A data structure is generated including the internal value and a resolved argument value derived from the argument expression and the argument value. A single instruction is encoded in the program to utilise the data structure. When the program is executed on a processor, the single instruction causes the processor to load the argument value and internal value from the data structure into registers in the processor, prior to evaluating the function. The function can then be executed without further register loads being performed.
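
A loose C analogue of the mechanism, not the patented instruction set: resolved argument values and internal values are packed into one generated data structure, and a single bulk copy (standing in for the single load instruction) fills the registers before the function runs, so the function body needs no further register loads.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Data structure built ahead of time: a resolved argument value plus an
   internal value used solely within the function, laid out for one bulk load. */
typedef struct {
    int64_t resolved_arg;  /* derived from argument expression + declaration */
    int64_t internal_val;  /* value used solely within the function */
} call_record_t;

/* Toy register file; the "single instruction" is modelled as one copy of
   the whole record into consecutive registers before the call. */
static int64_t regs[8];

static void load_record(const call_record_t *rec) {
    memcpy(&regs[0], rec, sizeof *rec); /* one load, no further register fills */
}

/* The function then executes purely out of registers. */
static int64_t func(void) { return regs[0] + regs[1]; }

int main(void) {
    call_record_t rec = { .resolved_arg = 2 * 21, .internal_val = 100 };
    load_record(&rec);
    printf("result = %lld\n", (long long)func());
    return 0;
}
```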

Data read-write scheduler and reservation station for vector operations

The present disclosure provides a data read-write scheduler and a reservation station for vector operations. The data read-write scheduler suspends execution of an instruction by providing a read instruction cache module and a write instruction cache module and detecting conflicting instructions based on the two modules. Once the timing condition is satisfied, the suspended instructions are re-executed, thereby resolving read-after-write and write-after-read conflicts between instructions and guaranteeing that correct data are provided to the vector operation component. The subject disclosure therefore has significant value for wider adoption and application.
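
A minimal sketch of the conflict check, assuming the two cache modules reduce to queues of pending reads and writes keyed by address; suspension and re-execution of the conflicting instruction are left to the caller.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define Q_SIZE 8

/* Pending read and write instructions, tracked by target address. */
typedef struct { bool valid; uint64_t addr; } pending_t;

static pending_t read_q[Q_SIZE];   /* read instruction cache module */
static pending_t write_q[Q_SIZE];  /* write instruction cache module */

/* A new read conflicts with a pending write to the same address (RAW). */
static bool raw_conflict(uint64_t read_addr) {
    for (int i = 0; i < Q_SIZE; i++)
        if (write_q[i].valid && write_q[i].addr == read_addr)
            return true; /* suspend the read until the write retires */
    return false;
}

/* A new write conflicts with a pending read of the same address (WAR). */
static bool war_conflict(uint64_t write_addr) {
    for (int i = 0; i < Q_SIZE; i++)
        if (read_q[i].valid && read_q[i].addr == write_addr)
            return true; /* suspend the write until the read retires */
    return false;
}

int main(void) {
    write_q[0] = (pending_t){ true, 0x100 };
    printf("RAW on 0x100? %d\n", raw_conflict(0x100));
    printf("WAR on 0x200? %d\n", war_conflict(0x200));
    return 0;
}
```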