G06F9/30109

Processing Device Using Variable Stride Pattern

For certain applications, parts of the application data held in the memory of a processing device (e.g. data produced as a result of operations performed by the execution unit) are arranged in regular repeating patterns in the memory. The execution unit may therefore set up a suitable striding pattern for use by a send engine. The send engine accesses the memory at locations determined by the configured striding pattern so as to access a plurality of items of data that are arranged together in a regular pattern. In a similar manner to sends, the execution unit may set up a striding pattern for use by a receive engine. The receive engine, upon receiving a plurality of items of data, causes those items of data to be stored at locations in the memory determined in accordance with the configured striding pattern.
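
The striding behaviour can be sketched in Python as follows; this is an illustrative model only, and the function names (`gather`, `scatter`) and the flat-list memory are assumptions, not details of the disclosure.

```python
# Hypothetical model of a send engine walking memory with a configured
# striding pattern, and the matching receive-side placement.

def gather(memory, base, stride, count):
    """Send side: read `count` items starting at `base`, advancing by `stride`."""
    return [memory[base + i * stride] for i in range(count)]

def scatter(memory, items, base, stride):
    """Receive side: store items at locations given by the same pattern."""
    for i, item in enumerate(items):
        memory[base + i * stride] = item

memory = list(range(16))                          # flat memory, regular layout
sent = gather(memory, base=1, stride=4, count=4)  # items at 1, 5, 9, 13
```

The same pattern description drives both directions: the send engine reads at the strided locations, and the receive engine writes incoming items at locations generated from an equivalent configuration.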

Mapping convolution to a matrix processor unit

A system comprises a matrix processor unit that includes a first type of register, a group of a second type of registers, and a plurality of calculation units. The first type of register is configured to concurrently store values from different rows of a first matrix. At least a portion of the first type of register is logically divided into groups of elements, and each of the groups corresponds to a different row of the first matrix. Each of the second type of registers is configured to concurrently store values from a plurality of different rows of a second matrix. Each of the calculation units corresponds to one of the second type of registers and is configured to at least in part determine a corresponding element in a result matrix of convolving the second matrix with the first matrix.
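
One common way to realise such a mapping, shown here as a hedged sketch, is to pack the kernel's rows into a single wide "first-type" register and dot it against each flattened input patch (playing the role of a second-type register); the function name and pure-Python model are illustrative assumptions, not the patented hardware.

```python
# Illustrative mapping of a 2D "valid" convolution onto per-element dot
# products: one calculation unit would produce each output element.

def conv2d_valid(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    # First-type register: kernel rows packed into one flat group of elements.
    weights = [w for row in kernel for w in row]
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for r in range(oh):
        for c in range(ow):
            # Second-type register: values from several input rows at once.
            patch = [image[r + i][c + j] for i in range(kh) for j in range(kw)]
            out[r][c] = sum(w * x for w, x in zip(weights, patch))
    return out
```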

METHOD AND APPARATUS FOR PERFORMING REDUCTION OPERATIONS ON A PLURALITY OF ASSOCIATED DATA ELEMENT VALUES

Embodiments detailed herein relate to reduction operations on a plurality of data element values. In one embodiment, a processor comprises decoding circuitry to decode an instruction and execution circuitry to execute the decoded instruction. The instruction specifies a first input register containing a plurality of data element values, a first index register containing a plurality of indices, and an output register, where each index of the plurality of indices maps to one unique data element position of the first input register. The execution circuitry is to identify data element values that are associated with one another based on the indices, perform one or more reduction operations on the associated data element values based on the identification, and store results of the one or more reduction operations in the output register.
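
A plausible software model of the reduction semantics is sketched below; the convention of writing each group's result back at every member's element position is one possible interpretation, not necessarily the instruction's exact behaviour.

```python
# Hypothetical model: elements sharing an index are reduced together
# (sum by default), and each element position receives its group's result.

def indexed_reduce(values, indices, op=lambda a, b: a + b):
    groups = {}
    for v, i in zip(values, indices):
        groups[i] = op(groups[i], v) if i in groups else v
    # Broadcast each group's reduced value back to its member positions.
    return [groups[i] for i in indices]
```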

SUPPORTING LARGE-WORD OPERATIONS IN A REDUCED INSTRUCTION SET COMPUTER ("RISC") PROCESSOR

A Reduced Instruction Set Computer (“RISC”) supporting large-word operations in a computing environment is disclosed. In one implementation, in response to receiving one or more control signals from a central processing unit (“CPU”), a set of operations is executed on the state of a special purpose execution unit (“SPU”) having a plurality of SPU registers, the SPU being associated with the CPU. The word widths of one or more of the SPU registers are greater in size than the word widths of the CPU registers of the computing system, and a set of state-master bits synchronizes the state of the SPU with the state of the CPU. The results of the set of operations are stored in the plurality of CPU registers or in an alternative set of the SPU registers.
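
The large-word idea can be modelled in software by splitting a wide SPU value into CPU-width limbs with carry propagation; the widths and names here are illustrative assumptions, not the disclosed register layout.

```python
# Hypothetical model: an SPU word wider than a CPU register, represented
# as limbs of CPU word width, least-significant limb first.

WORD_BITS = 32                      # assumed CPU register width
MASK = (1 << WORD_BITS) - 1

def wide_add(a_limbs, b_limbs):
    """Add two large words limb by limb, propagating the carry."""
    out, carry = [], 0
    for a, b in zip(a_limbs, b_limbs):
        s = a + b + carry
        out.append(s & MASK)
        carry = s >> WORD_BITS
    return out, carry
```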

VECTOR PROCESSING OF DECISION TREES TO FORM INFERENCES
20230068120 · 2023-03-02 ·

A method and computer program product for performing machine learning inferences are disclosed. A set of input records to be processed by decision trees is selected, and the decision trees are run. Running the decision trees includes identifying operations to be performed as matrix elements, wherein the matrix elements correspond to the input records. Running the decision trees also includes using vector processing to process disjoint subsets of the matrix elements based on vector instructions operating on data stored in vector registers, such that the matrix elements of each subset of the disjoint subsets are processed in parallel. All leaf nodes of each decision tree involved are processed as split nodes looping to themselves until a termination condition is met. The termination condition is met if at least one of the leaf nodes has been reached for each of the decision trees involved.
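
The leaf-self-loop traversal can be sketched as follows, with each node row holding (feature, threshold, left, right) and leaf nodes pointing to themselves; the node layout is an assumption, and the scalar loop stands in for the vectorized matrix processing.

```python
# Hypothetical model: every record advances one level per iteration;
# leaf nodes "split" back to themselves, so iteration terminates once
# no record's position changes (all records have reached a leaf).

def run_tree(nodes, leaf_value, records):
    pos = [0] * len(records)          # all records start at the root
    done = False
    while not done:
        done = True
        for k, rec in enumerate(records):
            feat, thr, left, right = nodes[pos[k]]
            nxt = left if rec[feat] <= thr else right
            if nxt != pos[k]:
                done = False          # at least one record still moving
            pos[k] = nxt
    return [leaf_value[p] for p in pos]
```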

SYSTEM AND METHOD TO CONTROL THE NUMBER OF ACTIVE VECTOR LANES IN A PROCESSOR
20230161587 · 2023-05-25 ·

In one disclosed embodiment, a processor includes a first execution unit and a second execution unit, a register file, and a data path including a plurality of lanes. The data path and the register file are arranged so that writing to the register file by the first execution unit and by the second execution unit is allowed over the data path, reading from the register file by the first execution unit is allowed over the data path, and reading from the register file by the second execution unit is not allowed over the data path. The processor also includes a power control circuit configured to, when a transfer of data between the register file and either of the first and second execution units uses less than all of the lanes, power down the lanes of the data path not used for the transfer of the data.
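
The lane-gating decision can be sketched as a simple mask computation; the lane width and lane count used here are illustrative assumptions, not figures from the disclosure.

```python
# Hypothetical model: a transfer narrower than the full data path leaves
# the unused upper lanes powered down.

LANE_BITS = 64                        # assumed width of one lane
NUM_LANES = 8                         # assumed number of lanes

def active_lane_mask(transfer_bits):
    """Return a per-lane on/off list for a transfer of the given width."""
    needed = -(-transfer_bits // LANE_BITS)   # ceiling division
    return [i < needed for i in range(NUM_LANES)]
```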

CALCULATOR AND CALCULATION METHOD
20230065733 · 2023-03-02 ·

A calculator includes: registers, each including sub-registers that hold pieces of data for use in operations; an operator that executes operations on the pieces of data in parallel; and a memory configured to hold a first vector and second vectors to be compared with the first vector. Each second vector is divided into sub-vectors, and sub-vector groups, each including the corresponding sub-vectors of the second vectors, are arranged in the memory in units of sub-vector groups. Three processes are repeatedly executed: a first process of transferring one of the sub-vectors of the first vector to sub-registers of a first register among the registers; a second process of transferring, from the memory, the sub-vector group of the second vectors corresponding to the transferred sub-vector of the first vector to sub-registers of a second register; and a third process of calculating and integrating the numbers of mismatches between bit values of the sub-vectors held in the first and second registers.
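
The third process amounts to an XOR-and-popcount (Hamming distance) between sub-vectors, which can be sketched as follows; representing sub-vectors as Python integers is an assumption of this model.

```python
# Hypothetical model: mismatch counting between one sub-vector of the
# first vector and a group of sub-vectors from the second vectors,
# with per-vector totals integrated across repeated iterations.

def mismatch_counts(first_sub, group):
    """XOR each second-vector sub-word against first_sub, count set bits."""
    return [bin(first_sub ^ g).count("1") for g in group]

def integrate(totals, counts):
    """Accumulate per-vector mismatch totals across iterations."""
    return [t + c for t, c in zip(totals, counts)]
```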

SOFTWARE-DIRECTED DIVERGENT BRANCH TARGET PRIORITIZATION

Instruction set architecture extensions are disclosed that configure priority ordering of divergent branch target instructions on SIMT computing platforms, enabling tools such as compilers (e.g., under the influence of execution profilers) or human software developers to configure branch direction prioritization explicitly in code. Extensions are disclosed for simple (two-way) branch instructions as well as multi-target branches (more than two branch target instructions).

Load-store instruction for performing multiple loads, a store, and strided increment of multiple addresses

A processor has an instruction set including a load-store instruction. The load-store instruction has operands specifying, from amongst the registers in at least one register file, a respective destination for each of two load operations, a respective source for a store operation, and a pair of address registers arranged to hold three memory addresses: a respective load address for each of the two load operations and a store address for the store operation. The load-store instruction further includes three stride operands, each specifying a respective stride value for one of the two load addresses or the one store address, wherein at least some possible values of each stride operand specify the respective stride value by specifying one of a plurality of fields within a stride register in one of the register files, each field holding a different stride value.
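
A software model of the instruction's semantics might look like the following sketch; the function name and the list-based memory and address model are assumptions, and the real instruction resolves strides via stride-register fields rather than taking them as direct values.

```python
# Hypothetical model: one instruction performs two loads and one store,
# then post-increments each of the three addresses by its own stride.

def ldx2_st(memory, addrs, strides, store_value):
    """addrs = [load0, load1, store]; returns loaded pair and updated addrs."""
    ld0 = memory[addrs[0]]                     # first load
    ld1 = memory[addrs[1]]                     # second load
    memory[addrs[2]] = store_value             # the store
    new_addrs = [a + s for a, s in zip(addrs, strides)]  # strided increment
    return (ld0, ld1), new_addrs
```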