Programmable Multi-Level Data Access Address Generator
20230244599 · 2023-08-03
Assignee
Inventors
CPC classification
G06N3/10
PHYSICS
International classification
Abstract
A programmable address generator has an iteration variable generator for generation of an ordered set of iteration variables, which are re-ordered by an iteration variable selection fabric, which delivers the re-ordered iteration variables to one or more address generators. A configurator receives an instruction containing fields which provide configuration constants to the address generator, iteration variable selection fabric, and address generators. After configuration, the address generators provide addresses coupled to a memory. In one example of the invention, the address generators generate an input address, a coefficient address, and an output address for performing convolutional neural network inferences.
Claims
1. A programmable address generator comprising: an iteration variable generator outputting a plurality of ordered iteration variables, each iteration variable of a higher order incrementing after an adjacent iteration variable of a lower order increments to a bound associated with each iteration variable, thereafter the iteration variable of the lower order resetting to zero; the ordered iteration variables coupled to an iteration variable selection fabric configured to map at least one ordered iteration variable to a selection fabric iteration variable output; one or more outputs of the iteration variable selection fabric coupled to at least one address generator forming an address from the one or more outputs of the iteration variable selection fabric; a configurator operative to receive an iteration instruction comprising at least one iteration variable bound, at least one iteration variable stride, at least one iteration variable selection fabric configuration, a data size, and a start address, the configurator coupling the at least one iteration variable bound and at least one iteration variable stride to the iteration variable generator, the configurator coupling the at least one iteration variable selection fabric configuration to the iteration variable selection fabric identifying a coupling of an ordered iteration variable to a selection fabric iteration variable output, and the data size and the start address coupled to the at least one address generator indicating a start address and a data size of the address generated by the at least one address generator.
2. The programmable address generator of claim 1 where the iteration variable generator comprises a plurality of iteration counters arranged in a sequence, each iteration counter incrementing by an associated iteration counter stride for each particular iteration counter until reaching an associated iteration counter bound, thereafter incrementing an adjacent higher order iteration variable generator and resetting the particular iteration counter to zero.
3. The programmable address generator of claim 1 where the iteration variable generator comprises: a plurality of iteration variable counters, each iteration variable counter comprising: an increment input and an increment output coupled to a different iteration variable counter increment input; a stride value indicating an amount by which an associated iteration variable counter should increment when an increment input for the associated iteration variable counter is asserted; a bound value, which when met or exceeded by a count of an iteration variable counter, causes the iteration variable counter to reset to zero and assert the increment output whereby an adjacent higher order increment counter increments by an associated stride.
4. The programmable address generator of claim 3 where each iteration counter stride and each iteration counter bound is extracted from a field of an instruction received by the configurator.
5. The programmable address generator of claim 1 where the iteration variable selection fabric comprises a plurality of multiplexers, each multiplexer configured to receive a map contained in a field of an instruction received by the configurator, each multiplexer configured to couple a particular ordered iteration variable input to a particular iteration variable output of the iteration variable selection fabric.
6. The programmable address generator of claim 1 where a number of address generators is one, two, or three.
7. The programmable address generator of claim 1 where the at least one address generator computes an address derived from an offset address, the offset address equal to:
(D.sub.1*D.sub.2*D.sub.3* . . . *D.sub.n−1)*i.sub.n+(D.sub.1*D.sub.2* . . . *D.sub.n−2)*i.sub.n−1+ . . . +(D.sub.1*D.sub.2)*i.sub.3+D.sub.1*i.sub.2+i.sub.1, where D.sub.1, D.sub.2, . . . , D.sub.n−1 are associated iteration variable generator bounds, i.sub.1, i.sub.2, . . . , i.sub.n are associated iteration counts, and n is a value in a range of 2 to 8.
8. A programmable address generator comprising: an iteration variable generator; a plurality of address generators, each address generator coupled to an iteration variable selection fabric for mapping of iteration variables coupled to a respective iteration variable generator; a memory coupled to the plurality of address generators; a configurator receiving configuration instructions as a plurality of fields, the configuration instructions including an iteration variable generator configuration comprising a bound and a stride for each individual iteration variable; an iteration variable selection fabric configuration determining a mapping from an iteration variable to an address generator input.
9. The programmable address generator of claim 8 where the iteration variable generator comprises: a plurality of iteration variable counters, each iteration variable counter comprising: an increment input and an increment output coupled to a different iteration variable counter increment input; a stride value indicating an amount by which the iteration variable counter increments when an increment input for the iteration variable counter is asserted; a bound value indicating a maximum value of an associated iteration variable counter and causing the iteration variable counter to reset to zero and assert the increment output when the bound value is reached; the stride value and the bound value for each iteration variable counter provided by a value in a field in the configuration instruction received by the configurator; each iteration variable counter incrementing upon assertion of an associated increment input.
10. The programmable address generator of claim 8 where each address generator generates an address from a plurality of iteration variables, the address equal to:
(D.sub.1*D.sub.2*D.sub.3* . . . *D.sub.n−1)*i.sub.n+(D.sub.1*D.sub.2* . . . *D.sub.n−2)*i.sub.n−1+ . . . +(D.sub.1*D.sub.2)*i.sub.3+D.sub.1*i.sub.2+i.sub.1, where D.sub.1, D.sub.2, . . . , D.sub.n−1 are associated iteration variable generator bounds, and i.sub.1, i.sub.2, . . . , i.sub.n are associated iteration counts from the variable selection fabric according to the variable selection fabric map.
11. The programmable address generator of claim 8 where a first address generator is configured to generate an address using iteration variables [i.sub.7, i.sub.1, i.sub.5, i.sub.3, i.sub.2, i.sub.4] and U.
12. The programmable address generator of claim 8 where a second address generator has a first mode configured to generate an address using iteration variables [i.sub.6, i.sub.1, i.sub.3, i.sub.2] and a second mode configured to generate an address using iteration variables [i.sub.7, i.sub.6, i.sub.5, i.sub.4].
13. The programmable address generator of claim 12 where the iteration variables of the first mode and the iteration variables of the second mode are selected using a configuration of the iteration variable selection fabric from the configurator according to an instruction received by the configurator.
14. The programmable address generator of claim 8 where address generator outputs are stored for later use.
15. The programmable address generator of claim 8 where address generator outputs forming an IN tensor address, a W tensor address and an OUT tensor address are applied to the memory for performing an arithmetic operation.
16. The programmable address generator of claim 15 where the arithmetic operation is a multiply-accumulate operation.
17. The programmable address generator of claim 9 where at least one of the plurality of address generators is configured to generate an address using four iteration variables contained in a configuration instruction and provided by the configurator.
18. The programmable address generator of claim 9 where at least one of the plurality of address generators is configured to generate an address using four iteration variables and a convolution stride contained in a configuration instruction and provided by the configurator.
19. The programmable address generator of claim 9 where each address generator of at least three address generators of the plurality of address generators is configured to use four or more iteration variables received from the configurator and which were contained in a configuration instruction.
20. The programmable address generator of claim 9 where each address generator of at least three address generators of the plurality of address generators is configured to generate an address using four or more iteration variables received from the configurator, and at least one of the address generators also receives a convolution stride value.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0032]
[0033]
[0034]
[0035]
[0036]
[0037]
[0038]
[0039]
[0040]
[0041]
[0042]
[0043]
[0044]
[0045]
DETAILED DESCRIPTION OF THE INVENTION
[0046] In the present specification and figures, reference to a particular reference number indicates a reference to the same function or processing element in a description or view.
[0047]
[0048] Accessing the multiple arrays and matrices creates a particular problem of computational complexity, as the addresses of the particular data elements to be read or written must be calculated first. The computation involved in calculating the memory address of a data element is known as address arithmetic. In general, the memory address for a 1-dimensional memory array with n arrays of tensors such as the memory 202 of
mem_addr=start_addr+{(i.sub.n)(D.sub.n−1)(D.sub.n−2) . . . (D.sub.1)+(i.sub.n−1)(D.sub.n−2) . . . (D.sub.1)+ . . . +(i.sub.2)(D.sub.1)+(i.sub.1)}*data_size EQ-1
[0049] where:
[0050] mem_addr is the desired memory location;
[0051] start_addr is the starting address of the memory;
[0052] {D.sub.1 . . . D.sub.n−1} are the respective numbers of entries of each memory segment, representing the range of each associated iteration variable {i.sub.1 . . . i.sub.n−1}, and
[0053] data_size is the word length for each data entry in the memory.
[0054] For example, if a memory containing three arrays of vectors or tensors (each having associated values for input IN, coefficient W, and output O) starts at byte address 4000 hex (the start_addr), D.sub.1=D.sub.2=D.sub.3=64 data_size words (256 bytes), and each element is 4 bytes (the data_size), the third element of the first array (i.sub.3=0, i.sub.2=0, i.sub.1=2) is located at address=4000+2*4=4008 hex.
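As a software sketch of the address arithmetic above (not the claimed hardware), EQ-1 with zero-based iteration counts can be evaluated Horner-style; the function name and list layout are illustrative assumptions:

```python
def mem_addr(start_addr, bounds, indices, data_size):
    """Flat address per EQ-1, zero-based indices.

    bounds  = [D1, ..., D_{n-1}]  sizes of the inner dimensions
    indices = [i1, ..., i_n]      i1 is the innermost iteration count
    """
    acc = indices[-1]  # start with the outermost count i_n
    # Horner evaluation: ((i_n*D_{n-1} + i_{n-1})*D_{n-2} + ...)*D_1 + i_1
    for d, i in zip(reversed(bounds), reversed(indices[:-1])):
        acc = acc * d + i
    return start_addr + acc * data_size

# Worked example from the text: third element of the first array
addr = mem_addr(0x4000, [64, 64], [2, 0, 0], 4)  # 0x4000 + 2*4 = 0x4008
```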
[0055] In processing systems with an instruction set processor, a compiler will generate the necessary instructions to perform the address arithmetic.
[0056] In the example RISC instructions of
[0057] In many applications, multiple tensors need to be simultaneously accessed from memory, requiring multiple iteration loops and multiple address generators. Having dedicated hardware to simultaneously compute address arithmetic for multiple tensors can improve accessing of data in tensor processing systems, and thereby the overall performance. Efficient accessing of data also improves energy efficiency.
[0058] The present invention provides hardware calculation of memory addresses for use by a Central Processing Unit (CPU), Graphics Processing Unit (GPU), or CNN engine, and those addresses may be used instantly, or captured as “pre-fetch” addresses for later use, by saving them in a stream, a queue, or a buffer. The term prefetching is generally used in the field to indicate anticipatory fetches before the address or data is actually needed or known to be needed. The timing at which the prefetching is initiated usually affects power and performance. If prefetches are done too aggressively, there may be many superfluous fetches resulting in wasted energy. In both instant fetching and prefetching, addresses have to be computed, and the present invention facilitates the simultaneous access of multiple data streams while providing flexible programmability for various stream sizes and convolutional or index strides, where the stride indicates the number of locations to be skipped in each increment event. The present invention may be applied to many types of different sources or destinations for the fetched data, for example caches, registers, general purpose buffers and special purpose buffers.
[0059]
[0060]
[0061] D.sub.7 is the bound of i.sub.7. In the CNN context, D.sub.7 represents the number of input fmaps/output fmaps (also known as the batch size).
[0062] D.sub.6 is the bound of i.sub.6. In the CNN context, D.sub.6 represents the number of 2-D output fmaps (also known as channels).
[0063] D.sub.5 is the bound of i.sub.5. In the CNN context, D.sub.5 represents the width of the output fmap (number of activations).
[0064] D.sub.4 is the bound of i.sub.4. In the CNN context, D.sub.4 represents the height of the output fmap (also known as the number of activations).
[0065] D.sub.3 is the bound of i.sub.3. In the CNN context, D.sub.3 represents the height of the 2-D filter (weights).
[0066] D.sub.2 is the bound of i.sub.2. In the CNN context, D.sub.2 represents the width of the 2-D filter (weights).
[0067] D.sub.1 is the bound of i.sub.1. In the CNN context, D.sub.1 represents the number of 2-D input fmaps/filters (channels).
[0068] U is the convolution stride.
[0069] D.sub.4 (the height of the output fmap) is governed by the equation D.sub.4=(H−R+U)/U, where H is the height of the input fmap and R is the height of the 2D-filter (weights). R is also equal to D.sub.3.
[0070] D.sub.5 (the width of the output fmap) is governed by the equation D.sub.5=(W−S+U)/U, where W is the width of the input fmap and S is the width of the 2D-filter (weights). S is also equal to D.sub.2.
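The two output-dimension equations above can be checked with a short sketch; integer division assumes the dimensions divide evenly (no padding), which is an assumption for illustration:

```python
def output_dims(H, W, R, S, U):
    """Output fmap height D4 and width D5 per the equations above.

    H, W: input fmap height and width
    R, S: 2-D filter height and width (equal to D3 and D2)
    U:    convolution stride
    """
    D4 = (H - R + U) // U  # output height
    D5 = (W - S + U) // U  # output width
    return D4, D5
```

For example, a 32x32 input with a 3x3 filter and stride 1 yields a 30x30 output fmap.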
[0071]
[0072] In one example of the invention, only three arrays (tensors) participate in the address computation for input (IN) address 216, coefficient (W) address 214, and output (O) address 217, with the innermost iteration variable loop (i.sub.1, i.sub.2, i.sub.3) forming an output data vector O 120 for a given input tensor IN 124, and weight tensor W 126 with respect to the terms shown in
[0073] In an extension of the single vector example, each of the vectors has multiple dimensions, such as 4. In that case, the output vector is O[i.sub.7] [i.sub.6] [i.sub.5] [i.sub.4], with input vector IN[i.sub.7] [i.sub.1] [U*i.sub.5+i.sub.3][U*i.sub.4+i.sub.2] and weight vector W[i.sub.6] [i.sub.1] [i.sub.3][i.sub.2]. Although there are only 4 index variables for each vector, the 4 indices of the 3 vectors are controlled by 7 nested loops.
[0074] In this example, [i.sub.7, i.sub.6, i.sub.5, i.sub.4] are the outer iteration variables, and [i.sub.3, i.sub.2, i.sub.1] are the inner iteration variables, where i.sub.7 is the outermost iteration variable and i.sub.1 is the innermost variable of seven iteration loops.
[0075] The outer 4 iteration loops [i.sub.7, i.sub.6, i.sub.5, i.sub.4] are for each output fmap value, i.e. the convolved output, and the inner 3 iteration loops [i.sub.3, i.sub.2, i.sub.1] convolve a window of the image. The ordering of the inner iteration loops may be permuted, as may the order of the outer iteration loops [i.sub.7 . . . i.sub.4]. Alternatively, the iteration loops such as 208 need not be hierarchical and may be ordered in any manner without limitation to the scope of the invention.
[0076] Many optimizations are possible in this code sequence in the ordering of the convolutions, but the present example shows a 7-level nested loop structure for illustration only. It is understood that any number of outer loops may be added, with four shown for example purposes only. The generated addresses for the associated data values for IN, W, and O depend on the indices of the 7 loops and the bounds of the dimensions.
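The 7-level loop nest described above can be rendered directly in software; the list-of-lists layout, the bound dictionary keyed by loop level, and the function name are illustrative assumptions, not the claimed hardware:

```python
def conv7(IN, W, D, U):
    """7-level nested convolution loop from the text:
    O[i7][i6][i5][i4] += IN[i7][i1][U*i5+i3][U*i4+i2] * W[i6][i1][i3][i2]

    D is a dict of bounds {1: D1, ..., 7: D7}; U is the convolution stride.
    """
    # Output tensor O[D7][D6][D5][D4], initialized to zero
    O = [[[[0] * D[4] for _ in range(D[5])]
          for _ in range(D[6])] for _ in range(D[7])]
    for i7 in range(D[7]):            # batch
        for i6 in range(D[6]):        # output channels
            for i5 in range(D[5]):    # output width
                for i4 in range(D[4]):          # output height
                    for i3 in range(D[3]):      # filter height
                        for i2 in range(D[2]):  # filter width
                            for i1 in range(D[1]):  # input channels
                                O[i7][i6][i5][i4] += (
                                    IN[i7][i1][U*i5 + i3][U*i4 + i2]
                                    * W[i6][i1][i3][i2])
    return O
```

With all-ones input and weights, each output element equals D.sub.1*D.sub.2*D.sub.3, which gives a quick sanity check of the loop bounds.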
[0077] The computation and memory accessing associated with CNNs can be improved by providing hardware support for the 7-level nested loop. The present example of the invention provides a 7 level hardware looping structure with stride (step size) for iteration loops {i.sub.7, i.sub.6, i.sub.5, i.sub.4, i.sub.3, i.sub.2, i.sub.1} of
[0078] Address computation involves multiplications and additions, as shown in the address arithmetic equation. One basic structure is a multiply-add (MUL-ADD) unit which has three inputs A, B, C, and an output D, such that the output D=A*B+C where A, B, C, and D are integer values. Another structure is a MUL unit, which has two inputs A and B, and an output C, such that the output C=A*B, where A, B, and C are all integer values.
[0079] The block diagram of
[0080]
[0081] The iteration variables [i.sub.7,i.sub.6,i.sub.5,i.sub.4,i.sub.3,i.sub.2,i.sub.1] may be considered “ordered iteration variables” in the sense that i.sub.1 increments from 0 to its associated bound D.sub.1 and resets to zero before iteration variable i.sub.2 increments, and so on, with each iteration variable such as i.sub.2 having a “higher order” adjacent iteration variable i.sub.3 and a “lower order” adjacent iteration variable i.sub.1, except at the extremes (i.sub.1 has no lower order iteration variable, and i.sub.7 has no higher order iteration variable).
[0082] The number of iteration variables in the present examples is shown as n=7, but the range of iteration variables may be in a range from 4 to 10 or more, and each iteration variable may have a bitwidth in the range of 8 bits to 64 bits, although shown as 32 bits in the present examples.
[0083] Each iteration counter 402-1 through 402-7 increments according to an associated stride and also sends an increment signal to the next outer level iteration variable counter when it reaches its associated iteration variable bound, such that the subsequent iteration counter increments to a next value. For example, iteration counter 402-1 comprises counter 404-1, which increments through a range by stride 410-1 until i.sub.1 reaches bound D.sub.1 412-1, after which it asserts increment output signal 424-1, which is input to iteration variable counter 402-2 as increment input signal 422-2, causing i.sub.2 counter 404-2 to increment by its associated stride value and i.sub.1 counter 404-1 to reset to 0. The default stride for each of the iteration counters 402-1 through 402-7 is 1, but other strides can be supported by loading different values into 410-1 corresponding to a respective stride value. Each of the iteration variable counters 402-1 through 402-7 has an associated stride register (such as 410-1) and bound (such as 412-1, the bound D.sub.1 of iteration counter 402-1), which may be updated through a configuration process so that each CNN computation may have a corresponding set of stride values and bound values.
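The cascaded counter behavior described above can be modeled in software; this is a hypothetical model for illustration, with class and method names as assumptions:

```python
class IterationCounters:
    """Model of cascaded iteration counters: counter k advances by its
    stride; on reaching its bound it resets to zero and carries into
    counter k+1 (the next outer level)."""

    def __init__(self, bounds, strides=None):
        self.bounds = bounds
        self.strides = strides or [1] * len(bounds)  # default stride is 1
        self.counts = [0] * len(bounds)

    def tick(self):
        """Advance the innermost counter and propagate carries.
        Returns False when all counters wrap back to zero."""
        for k in range(len(self.counts)):
            self.counts[k] += self.strides[k]
            if self.counts[k] < self.bounds[k]:
                return True
            self.counts[k] = 0  # reached bound: reset, carry outward
        return False
```

With bounds [2, 3] the counters step through all six (i1, i2) combinations before wrapping, mirroring a 2-level nested loop.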
[0084] In this manner, the 7-level nested loop of
[0085]
[0086]
[0087] When performing a convolution operation, the address generator 206 generates W_addr 214 and O_addr 217 using address generator 602, and I_addr 216 using address generator 502, causing the memory 202 to generate W_data 218 based on W_addr and I_data 220 based on I_addr, and to provide the O_data 212 based on O_addr 217. The multiplier accumulator 210 multiplies the I_data 220 with W_data 218, adds that product to the value O_data 212 to generate new O_data, stores the new O_data result back to the memory using the same O_addr location, and moves on to the next set of operations associated with the next set of iteration variables.
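The multiply-accumulate dataflow above can be sketched against a flat memory; the address triples stand in for the outputs of the address generators (the function name and list-based memory are assumptions for illustration):

```python
def run_mac(mem, addr_triples):
    """One multiply-accumulate per (I_addr, W_addr, O_addr) triple,
    as in the text: O_data += I_data * W_data, with the result
    written back to the same O_addr location."""
    for i_addr, w_addr, o_addr in addr_triples:
        mem[o_addr] += mem[i_addr] * mem[w_addr]
    return mem
```

For example, accumulating the same (I, W) product twice into one output location doubles the stored sum, matching the read-modify-write behavior described.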
[0088]
[0089] In one example of the invention, a single set of iteration variables, associated iteration stride constants, start address constants, and data size constants are applied to the address generators of
[0090] 1) stored into a cache memory for a pre-fetch operation, where the addresses are saved for future use;
[0091] 2) stored into a queue of access addresses for a pre-fetch operation, where the addresses are saved for future use;
[0092] 3) stored into a wide interleaved buffer for a tensor controlled by nested loops;
[0093] 4) applied for immediate use in addressing IN and W tensors, and determining the O address for stream buffers for later use in computing the associated tensor;
[0094] 5) applied for immediate use in addressing IN and W tensors, and determining the O address for instant storage of the result to an O address;
[0095] 6) computing the IN, W, and O addresses, and loading the associated addressed tensors into respective separate stream buffers;
[0096] 7) computing the IN, W, and O addresses, loading the associated tensors simultaneously into three queues.
[0097] The address generation hardware above can be incorporated into a dedicated access processor, a CPU, a GPU, or a custom accelerator, and can work in conjunction with any access processor or unit. An access processor can be an ordinary processor such as a CPU focusing on access tasks in parallel with the main compute processor. Alternatively, an access processor can be designed with special addressing modes and buffering to stage data in the order required for computations (where 100% correctness is needed); buffers can be parallel enough to provide the high data bandwidth required for all computations of the execute processor. Different streams (tensors) can be placed in different buffers allowing parallel access; the different tensors can be simultaneously loaded, and simultaneously used by the compute processor or accelerator. An access processor can also be designed to prefetch data into caches and local memories accessible by the execute processor in an advisory fashion. In that case, not all data fetched may be used by the execute processor, and 100% correctness is not needed for the fetching processor.
[0098] The access unit or the access processor can be programmable with certain sophisticated access pattern instructions, for example for 3-dimensional tensors, 4-dimensional tensors, or 6-dimensional tensors. In the convolution loop presented above, there are three 4-dimensional tensors O, I and W corresponding to output, input and weights, respectively, and a 1-dimensional vector B (bias). Multiple instances of the hardware can be used in the access processor as described for
[0099]
[0100] In another example of the invention, each address generator of
[0101]
[0102] Returning to
[0103] As was previously described for
[0104] The architecture of
[0105]
[0106]
[0107] In one example of the invention, the index variable selection fabric is not used, and each variable i.sub.1′, i.sub.2′, etc. is equal to i.sub.1, i.sub.2, etc., respectively, and multiplexers 920A to 920G are not necessary. In another example of the invention, the configuration instruction of
[0108] Indices can be indicated by the following full mask vectors.
i.sub.7=1000000
i.sub.6=0100000
i.sub.5=0010000
i.sub.4=0001000
i.sub.3=0000100
i.sub.2=0000010
i.sub.1=0000001
[0109] Alternatively, references to an iteration variable can be encoded in 3 bits, where i.sub.1 is represented as {001} and i.sub.7 is represented as {111}.
[0110] Accordingly, when (U, i.sub.5, i.sub.3) needs to be indicated as the index, meaning that the index is U*i.sub.5+i.sub.3, the encoded scheme can use 3 bits to reference iteration variable i.sub.5, 3 bits to reference iteration variable i.sub.3, and 1 bit to indicate whether U should be used (or provide a non-zero multi-bit value for U); start addresses (StADDR-T1, StAddr-T2, StAddr-T3) are the same width as the address they specify.
[0111] Many conventional optimizations are possible to reduce the number of bits used to encode this instruction, but unencoded full vectors are also useful to reduce the time spent decoding the information later for processing. In some scenarios it will be advantageous to use encoding and fewer bits; in other scenarios it will be advantageous to forgo encoding and use the full number of bits for the unencoded information.
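The two encoding styles above, the full one-hot mask and the dense 3-bit reference, can be sketched as follows; the function names and the exact bit packing of the (U, i.sub.5, i.sub.3) field are illustrative assumptions:

```python
def onehot_mask(k, n=7):
    """Full mask vector for iteration variable i_k in an n-bit field,
    as listed above: i7 -> 1000000, i1 -> 0000001."""
    return format(1 << (k - 1), f"0{n}b")

def encode3(k):
    """Dense 3-bit encoding: i1 -> 0b001 ... i7 -> 0b111."""
    return k & 0b111

def pack_index(k_hi, k_lo, use_U):
    """Pack a (U, i_hi, i_lo) index field into 7 bits: one U flag bit
    followed by two 3-bit iteration-variable references."""
    return (use_U << 6) | (encode3(k_hi) << 3) | encode3(k_lo)
```

For example, the index U*i.sub.5+i.sub.3 packs as the 7-bit field 1 101 011 under this hypothetical layout.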
[0112] Instructions can be devised for demand loading as well as prefetching. In addition to the examples shown, other types of instructions can be created for processing by the configurator 211 coupled to the address generator 206 and iteration variable generator 207, including the specification for storage or use of the resulting addresses generated by the address generator 206, including:
[0113] 1) PREFETCH into cache a tensor controlled by indexing of 6 or more nested loops
[0114] 2) PREFETCH into queues a tensor controlled by many nested loops
[0115] 3) PREFETCH into a wide interleaved buffer a tensor controlled by many nested loops
[0116] 4) PREFETCH 3 tensors simultaneously into 3 stream buffers
[0117] 5) PREFETCH 3 tensors simultaneously into 3 queues
[0118] 6) Load 3 tensors simultaneously into 3 stream buffers
[0119] 7) Load 3 tensors simultaneously into 3 queues
[0120] Each of the embodiments of the preceding examples may be performed by processes. For example, the iteration variable generator may be operative as a sequence of iteration variables generated by respective iteration variable generation processes, a first iteration variable generator counting by an associated stride S and returning to 0 when reaching an associated bound D, and also generating a carry output to increment a subsequent stage. In this manner, the iteration variables may be generated by a series of processes which perform as shown in
[0121] In another example of the invention, the input address generator of
[0122] The present examples are provided for illustrative purposes only, and are not intended to limit the invention to only the embodiments shown. In the present description, the use of ellipses such as “i.sub.1 . . . i.sub.7”, “D.sub.1 . . . D.sub.7”, etc. are understood to refer to a set of coefficients or values such as {i.sub.1, i.sub.2, i.sub.3, i.sub.4, i.sub.5, i.sub.6, i.sub.7} and {D.sub.1, D.sub.2, D.sub.3, D.sub.4, D.sub.5, D.sub.6, D.sub.7}, respectively.