Programmable Multi-Level Data Access Address Generator
20230289287 · 2023-09-14
Assignee
Inventors
CPC classification
G06N3/10
PHYSICS
International classification
Abstract
A programmable address generator has an iteration variable generator for generation of an ordered set of iteration variables, which are re-ordered by an iteration variable selection fabric, which delivers the re-ordered iteration variables to one or more address generators. A configurator receives an instruction containing fields which provide configuration constants to the address generator, iteration variable selection fabric, and address generators. After configuration, the address generators provide addresses coupled to a memory. In one example of the invention, the address generators generate an input address, a coefficient address, and an output address for performing convolutional neural network inferences.
Claims
1-20. (canceled)
21. A multi-dimensional address processor comprising: a plurality of iteration variable processors generating a plurality of iteration variables, each iteration variable processor generating one iteration variable of said plurality of iteration variables, each iteration variable processor also receiving a corresponding stride value, each iteration variable processor comprising: an iteration index counter receiving a bound value and an increment input for incrementing an index variable, the iteration index counter outputting an index variable value and an increment output asserted after the index variable value reaches the bound value, each iteration index counter increment output coupled to a different iteration index counter increment input; an iteration variable processor output formed by multiplying an associated index variable value by an associated stride value; one or more address processors, each address processor coupled to a subset of the iteration variable processor outputs and generating a tensor address; at least one of the one or more address processors computing a tensor address by multiplying outputs of two unique iteration variable processors and adding a bound value to generate a first output; multiplying the first output by an output of a unique iteration variable processor and adding a product of two bound values to form a second output; multiplying the second output by an output of a unique iteration variable processor and adding a product of three bound values to generate a third output for computing the tensor address.
22. The multi-dimensional address processor of claim 21 where computing the tensor address comprises multiplying the third output by a data-size and adding a memory start address.
23. The multi-dimensional address processor of claim 21 where at least one said iteration variable increments by the associated stride value.
24. The multi-dimensional address processor of claim 21 where a number of address processors is three and comprises an input address processor, a coefficient address processor, and an output address processor.
25. An address generator operating on a plurality n of index variables x.sub.1 through x.sub.n and outputting an address derived from an offset address, each index variable x.sub.1 through x.sub.n having an associated stride value S.sub.1 through S.sub.n and an associated bound value D.sub.1 through D.sub.n, each index variable x.sub.1 through x.sub.n being multiplied by a respective stride value S.sub.1 through S.sub.n to generate associated iteration variables i.sub.1 through i.sub.n, at least one index variable resetting to zero after reaching the associated bound value and sending a command for a different index variable to increment, the iteration variables i.sub.1 through i.sub.n generating an offset address computed as
(D.sub.1*D.sub.2*D.sub.3* . . . *D.sub.n-1)*i.sub.n+(D.sub.1*D.sub.2* . . . *D.sub.n-2)*i.sub.n-1+ . . . +(D.sub.1*D.sub.2)*i.sub.3+D.sub.1*i.sub.2+i.sub.1.
26. The address generator of claim 25 where n=7, and the offset address is computed by a plurality of multiplier-adder (MUL-ADD) units, each MUL-ADD unit having a first input, a second input, a third input, and an output formed by adding the third input to a product of the first input with the second input; the offset address computed by: a first MUL-ADD unit having the first input coupled to i.sub.2, the second input coupled to D.sub.1, and the third input coupled to i.sub.1; a second MUL-ADD unit having the first input coupled to i.sub.3, the second input coupled to a product of D.sub.1 and D.sub.2, and the third input coupled to the output of the first MUL-ADD unit; a third MUL-ADD unit having the first input coupled to i.sub.4, the second input coupled to a product of D.sub.1 and D.sub.2 and D.sub.3, and the third input coupled to the output of the second MUL-ADD unit; a fourth MUL-ADD unit having the first input coupled to i.sub.5, the second input coupled to a product of D.sub.1 and D.sub.2 and D.sub.3 and D.sub.4, and the third input coupled to the output of the third MUL-ADD unit; a fifth MUL-ADD unit having the first input coupled to i.sub.6, the second input coupled to a product of D.sub.1 and D.sub.2 and D.sub.3 and D.sub.4 and D.sub.5, and the third input coupled to the output of the fourth MUL-ADD unit; a sixth MUL-ADD unit having the first input coupled to i.sub.7, the second input coupled to a product of D.sub.1 and D.sub.2 and D.sub.3 and D.sub.4 and D.sub.5 and D.sub.6, and the third input coupled to the output of the fifth MUL-ADD unit; the offset address coupled to the output of the sixth MUL-ADD unit.
27. The address generator of claim 25 where the address derived from an offset address comprises adding a start address to the product of the offset address and a data-size value.
28. An iteration variable generator for an iteration address comprising: a sequence of iteration loop variable generators, each iteration loop variable generator comprising: an integer counter generating an index value that counts from 0 to a bound value, the integer counter having an output comprising the index value multiplied by a stride value, thereby generating an integer-stride value, the integer counter having an increment input and an increment output such that the integer counter asserts its increment output and resets the index value to 0 when the index value reaches or exceeds the bound value, the index value incrementing when its increment input is asserted; a first iteration loop variable generator of the sequence of iteration loop variable generators having an increment input coupled to a clock generator; each subsequent iteration loop variable generator of the sequence of iteration loop variable generators having an increment input coupled to an increment output of a previous iteration loop variable generator; each iteration loop variable generator integer counter coupled to an iteration variable output, each of the integer-stride values thereby forming the iteration address.
29. The iteration variable generator of claim 28 where at least one integer counter of an iteration loop variable generator forms an output value by multiplying an associated index value by an associated stride value.
30. An address processor receiving a plurality n of iteration variables i.sub.1 through i.sub.n, each iteration variable having a corresponding bound D.sub.1 through D.sub.n; the address processor computing an address derived from an offset address, the offset address equal to:
(D.sub.1*D.sub.2*D.sub.3* . . . *D.sub.n-1)*i.sub.n+(D.sub.1*D.sub.2* . . . *D.sub.n-2)*i.sub.n-1+ . . . +(D.sub.1*D.sub.2)*i.sub.3+D.sub.1*i.sub.2+i.sub.1.
31. The address processor of claim 30 where at least one of the plurality of iteration variables increments by a stride integer which is greater than 1.
32. The address processor of claim 30 where the address derived from the offset address is generated by adding a memory start address to the product of the offset address and a data-size.
33. A Convolutional Neural Network (CNN) system comprising: a configuration interface receiving at least a convolutional stride U, a plurality of bound values D.sub.1 through D.sub.7, and associated stride values S.sub.1 through S.sub.7; a memory containing input values and coefficient values; an iteration variable generator generating a plurality of iteration variables i.sub.1 through i.sub.7, each of the plurality of iteration variables having an associated bound D.sub.1 through D.sub.7 and a stride S.sub.1 through S.sub.7; an input address generator generating an input address from the plurality of iteration variables, the input address generator coupled to the memory, the input address generator computing the input address using an input offset, where:
the input offset=(((((U*i.sub.4)+i.sub.2)*((U*i.sub.5)+i.sub.3)+D.sub.1)*i.sub.1)+D.sub.1*D.sub.2)*i.sub.7+D.sub.1*D.sub.2*D.sub.3; a coefficient address generator generating a coefficient address from the plurality of iteration variables, the coefficient address generator coupled to the memory, the coefficient address generator computing the coefficient address using a coefficient offset, where the
coefficient offset=(((i.sub.2*i.sub.3+D.sub.1)*i.sub.1)+D.sub.1*D.sub.2)*i.sub.6+D.sub.1*D.sub.2*D.sub.3; an output address generator generating an output address from the iteration variables i.sub.7, i.sub.6, i.sub.5 and i.sub.4, the output address generator coupled to the memory.
34. The CNN system of claim 33 where an output value from the memory associated with the output address is added to a product formed by an output from the memory associated with the input address and an output from the memory associated with the coefficient address and saved to a memory location corresponding to the output address.
35. The CNN system of claim 33 where the convolutional stride U, the stride values S.sub.1 through S.sub.7, the bounds D.sub.1 through D.sub.7, the data size, and the start address of the memory are provided to the configuration interface as an instruction.
36. The CNN system of claim 33 where a particular one of the iteration variables i.sub.1 through i.sub.7 individually increments from 0 to an associated bound D.sub.1 through D.sub.7, after which the iteration variable resets to 0 and a different iteration variable increments.
37. The CNN system of claim 33 where a lower numbered iteration variable increments before a comparatively higher numbered iteration variable.
38. The CNN system of claim 33 where at least one iteration variable increments by an associated stride.
39. The CNN system of claim 33 where at least one of the input address generator or the coefficient address generator forms an address by adding a start address to a product of a data-size with either the input address or the coefficient address.
40. The CNN system of claim 33 where at least one of: the output of the input address generator, the output of the coefficient address generator, and the output of the output address generator, is saved into at least one of a queue, a stream, or a buffer.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0032]
[0033]
[0034]
[0035]
[0036]
[0037]
[0038]
[0039]
[0040]
[0041]
[0042]
[0043]
[0044]
[0045]
DETAILED DESCRIPTION OF THE INVENTION
[0046] In the present specification and figures, reference to a particular reference number indicates a reference to the same function or processing element in a description or view.
[0047]
[0048] Accessing the multiple arrays and matrices creates a particular problem of computational complexity, as the addresses of the particular data elements to be read or written must first be calculated. The computation involved in calculating the memory address of a data element is known as address arithmetic. In general, the memory address for a 1-dimensional memory array with n arrays of tensors such as the memory 202 of
mem_addr=start_addr+{(i.sub.n)(D.sub.n-1)(D.sub.n-2) . . . (D.sub.1)+(i.sub.n-1)(D.sub.n-2) . . . (D.sub.1)+ . . . +(i.sub.2)(D.sub.1)+(i.sub.1)}*data_size EQ-1:
[0049] where:
[0050] mem_addr is the desired memory location;
[0051] start_addr is the starting address of the memory;
[0052] {D.sub.1 . . . D.sub.n-1} are the respective numbers of entries of each memory segment, representing the range of each associated iteration variable {i.sub.1 . . . i.sub.n-1}, and
[0053] data_size is the word length for each data entry in the memory.
[0054] For example, if a memory containing three arrays of vectors or tensors (each having associated values for input IN, coefficient W, and output O) starts at byte address 4000 hex (the start_addr), D.sub.1=D.sub.2=D.sub.3=64 words (256 bytes per segment), and each element is 4 bytes (the data_size), the third element of the first array (i.sub.3=0, i.sub.2=0, i.sub.1=2) is located at address 4000+2*4=4008 hex.
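As an illustrative sketch of EQ-1 (not part of the specification itself), the address arithmetic can be expressed as a short routine; the function name and the zero-based index convention are assumptions made here for illustration:

```python
def tensor_addr(indices, bounds, start_addr=0x4000, data_size=4):
    """Flat byte address per EQ-1 for zero-based indices.

    indices = [i1, ..., in]; bounds = [D1, ..., D(n-1)] are the entry
    counts of the lower dimensions; offset = i1 + D1*i2 + (D1*D2)*i3 + ...
    """
    offset, weight = 0, 1
    for k, i in enumerate(indices):
        offset += i * weight          # add i_k scaled by D1*...*D(k-1)
        if k < len(bounds):
            weight *= bounds[k]       # grow the weight for the next dimension
    return start_addr + offset * data_size
```

With the values of the example above, `tensor_addr([2, 0, 0], [64, 64])` yields byte address 4008 hex.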
[0055] In processing systems with an instruction set processor, a compiler will generate the necessary instructions to perform the address arithmetic.
[0056] In the example RISC instructions of
[0057] In many applications, multiple tensors need to be simultaneously accessed from memory, requiring multiple iteration loops and multiple address generators. Having dedicated hardware to simultaneously compute address arithmetic for multiple tensors can improve accessing of data in tensor processing systems, and thereby the overall performance. Efficient accessing of data also improves energy efficiency.
[0058] The present invention provides hardware calculation of memory addresses for use by a Central Processing Unit (CPU), Graphics Processing Unit (GPU), or CNN engine, and those addresses may be used instantly, or captured as “pre-fetch” addresses for later use, by saving them in a stream, a queue, or a buffer. The term prefetching is generally used in the field to indicate anticipatory fetches before the address or data is actually needed or known to be needed. The timing at which the prefetching is initiated usually affects power and performance. If prefetches are done too aggressively, there may be many superfluous fetches resulting in wasted energy. In both instant fetching and prefetching, addresses have to be computed, and the present invention facilitates the simultaneous access of multiple data streams while providing flexible programmability for various stream sizes and convolutional or index strides, where the stride indicates the number of locations to be skipped in each increment event. The present invention may be applied to many types of different sources or destinations for the fetched data, for example caches, registers, general purpose buffers and special purpose buffers.
[0059]
[0060]
[0061] D.sub.7 is the bound of i.sub.7. In the CNN context, D.sub.7 represents the number of input fmaps/output fmaps (also known as the batch size).
[0062] D.sub.6 is the bound of i.sub.6. In the CNN context, D.sub.6 represents the number of 2-D output fmaps (also known as channels).
[0063] D.sub.5 is the bound of i.sub.5. In the CNN context, D.sub.5 represents the width of the output fmap (number of activations).
[0064] D.sub.4 is the bound of i.sub.4. In the CNN context, D.sub.4 represents the height of the output fmap (also known as the number of activations).
[0065] D.sub.3 is the bound of i.sub.3. In the CNN context, D.sub.3 represents the height of the 2-D filter (weights).
[0066] D.sub.2 is the bound of i.sub.2. In the CNN context, D.sub.2 represents the width of the 2-D filter (weights).
[0067] D.sub.1 is the bound of i.sub.1. In the CNN context, D.sub.1 represents the number of 2-D input fmaps/filters (channels).
[0068] U is the convolution stride.
[0069] D.sub.4 (the height of the output fmap) is governed by the equation D.sub.4=(H−R+U)/U, where H is the height of the input fmap and R is the height of the 2D-filter (weights). R is also equal to D.sub.3.
[0070] D.sub.5 (the width of the output fmap) is governed by the equation D.sub.5=(W−S+U)/U, where W is the width of the input fmap and S is the width of the 2D-filter (weights). S is also equal to D.sub.2.
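The two relations above are the usual convolution output-size formula; a one-line sketch (the helper name is an assumption made here for illustration):

```python
def output_dim(in_dim, filt_dim, stride_u):
    """Output fmap size: D4 = (H - R + U)/U, equivalently (H - R)/U + 1."""
    return (in_dim - filt_dim + stride_u) // stride_u
```

For example, an input fmap of height H=7 with a filter of height R=3 and convolution stride U=2 yields an output fmap of height D.sub.4=3.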
[0071]
[0072] In one example of the invention, only three arrays (tensors) participate in the address computation for input (IN) address 216, coefficient (W) address 214, and output (O) address 217, with the innermost iteration variable loop (i.sub.1, i.sub.2, i.sub.3) forming an output data vector O 120 for a given input tensor IN 124, and weight tensor W 126 with respect to the terms shown in
[0073] In an extension of the single vector example, each of the vectors has multiple dimensions, such as 4. In that case, the output vector is O[i.sub.7][i.sub.6][i.sub.5][i.sub.4], with input vector IN[i.sub.7][i.sub.1][U*i.sub.5+i.sub.3][U*i.sub.4+i.sub.2] and weight vector W[i.sub.6][i.sub.1][i.sub.3][i.sub.2]. Although there are only 4 index variables for each vector, the 4 indices of the 3 vectors are controlled by 7 nested loops.
[0074] In this example, [i.sub.7, i.sub.6, i.sub.5, i.sub.4] are the outer iteration variables, and [i.sub.3, i.sub.2, i.sub.1] are the inner iteration variables, where i.sub.7 is the outermost iteration variable and i.sub.1 is the innermost variable of seven iteration loops.
[0075] The outer 4 iteration loops [i.sub.7, i.sub.6, i.sub.5, i.sub.4] are for each output fmap value, i.e. the convolved output, and the inner 3 iteration loops [i.sub.3, i.sub.2, i.sub.1] convolve a window of the image. The ordering of the inner iteration loops [i.sub.3 . . . i.sub.1] may be permuted, as may the order of the outer iteration loops [i.sub.7 . . . i.sub.4]. Alternatively, the iteration loops such as 208 may not be hierarchical and may be ordered in any manner without limitation to the scope of the invention.
[0076] Many optimizations are possible in this code sequence in the ordering of the convolutions, but the present example shows a 7-level nested loop structure for illustration only. It is understood that any number of outer loops may be added, with four shown for example purposes only. The generated addresses for the associated data values for IN, W, and O depend on the indices of the 7 loops and the bounds of the dimensions.
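The 7-level nested loop described above can be written out as a behavioral reference sketch; the dict-keyed tensors and the function name are illustrative assumptions and not the hardware implementation:

```python
from itertools import product

def conv_7_loop(IN, W, D, U):
    """O[i7][i6][i5][i4] += IN[i7][i1][U*i5+i3][U*i4+i2] * W[i6][i1][i3][i2].

    The outer loops [i7, i6, i5, i4] walk the output fmap values; the inner
    loops [i3, i2, i1] convolve a window of the image. D = (D1, ..., D7);
    IN, W, and the returned O are dicts keyed by index tuples for brevity."""
    D1, D2, D3, D4, D5, D6, D7 = D
    O = {}
    for i7, i6, i5, i4 in product(range(D7), range(D6), range(D5), range(D4)):
        acc = 0
        for i3, i2, i1 in product(range(D3), range(D2), range(D1)):
            acc += IN[(i7, i1, U * i5 + i3, U * i4 + i2)] * W[(i6, i1, i3, i2)]
        O[(i7, i6, i5, i4)] = acc
    return O
```

With all-ones inputs and a 2x2 filter over a single channel, each output value is simply the window size, which makes the loop nesting easy to spot-check.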
[0077] The computation and memory accessing associated with CNNs can be improved by providing hardware support for the 7-level nested loop. The present example of the invention provides a 7 level hardware looping structure with stride (step size) for iteration loops {i.sub.7, i.sub.6, i.sub.5, i.sub.4, i.sub.3, i.sub.2, i.sub.1} of
[0078] Address computation involves multiplications and additions, as shown in the address arithmetic equation. One basic structure is a multiply-add (MUL-ADD) unit which has three inputs A, B, C, and an output D, such that the output D=A*B+C where A, B, C, and D are integer values. Another structure is a MUL unit, which has two inputs A and B, and an output C, such that the output C=A*B, where A, B, and C are all integer values.
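As an illustrative sketch, the offset address of claims 25 and 30 can be evaluated with a chain of such MUL-ADD units; the function names here are assumptions for illustration:

```python
def mul_add(a, b, c):
    """One MUL-ADD unit: D = A*B + C (all integers)."""
    return a * b + c

def offset_from_mul_add_chain(iters, bounds):
    """iters = [i1, ..., in], bounds = [D1, ..., D(n-1)].

    Unit k takes i_(k+1) as first input, the product D1*...*Dk as second
    input, and the previous unit's output as third input, yielding
    i1 + D1*i2 + (D1*D2)*i3 + ... as in the claim-26 chain."""
    acc, weight = iters[0], 1
    for k in range(1, len(iters)):
        weight *= bounds[k - 1]       # D1, then D1*D2, then D1*D2*D3, ...
        acc = mul_add(iters[k], weight, acc)
    return acc
```

For example, with iters=[1, 2, 3] and bounds=[4, 5], the chain computes 1 + 4*2 + 20*3 = 69.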
[0079] The block diagram of
[0080]
[0081] The iteration variables [i.sub.7,i.sub.6,i.sub.5,i.sub.4,i.sub.3,i.sub.2,i.sub.1] may be considered “ordered iteration variables” in the sense that i.sub.1 increments from 0 to its associated bound D.sub.1 and resets to zero before iteration variable i.sub.2 increments, and so on, with each iteration variable such as i.sub.2 having a “higher order” adjacent iteration variable i.sub.3 and a “lower order” adjacent iteration variable i.sub.1, except at the extremes (i.sub.1 has no lower order iteration variable, and i.sub.7 has no higher order iteration variable).
[0082] The number of iteration variables in the present examples is n=7, but the number of iteration variables may range from 4 to 10 or more, and each iteration variable may have a bitwidth in the range of 8 bits to 64 bits, although shown as 32 bits in the present examples.
[0083] Each iteration counter 402-1 through 402-7 increments according to an associated stride and also sends an increment signal to the next outer level iteration variable counter when it reaches its associated iteration variable bound, such that the subsequent iteration counter increments to a next value. For example, iteration counter 402-1 comprises counter 404-1, which increments through a range by stride 410-1 until i.sub.1 reaches bound D.sub.1 412-1, after which it asserts increment output signal 424-1, which is input to iteration variable counter 402-2 as increment input signal 422-2, causing i.sub.2 counter 404-2 to increment by its stride value and i.sub.1 counter 404-1 to reset to 0. The default stride for each of the iteration counters 402-1 through 402-7 is 1, but other strides can be supported by loading different values into a stride register such as 410-1. Each of the iteration variable counters 402-1 through 402-7 has an associated stride register (such as 410-1) and bound register (such as 412-1, holding D.sub.1 for iteration counter 402-1), which may be updated through a configuration process so that each CNN computation may have a corresponding set of stride values and bound values.
[0084] In this manner, the 7-level nested loop of
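The cascaded counters behave like an odometer; a behavioral sketch follows, in which the class and method names are assumptions made here for illustration:

```python
class IterationCounter:
    """One iteration counter (e.g. 402-1): the index steps by its stride
    and wraps to 0 with a carry when it reaches its bound D."""
    def __init__(self, bound, stride=1):
        self.bound, self.stride, self.index = bound, stride, 0

    def tick(self):
        """Handle one increment event; return True when the counter wraps."""
        self.index += self.stride
        if self.index >= self.bound:
            self.index = 0
            return True
        return False

def clock(counters):
    """One clock: counter k+1 sees an increment only on counter k's carry,
    mirroring the 424-x to 422-(x+1) coupling described above."""
    for c in counters:
        if not c.tick():
            break
```

With bounds (2, 3), the pair counts (1,0), (0,1), (1,1), (0,2), and so on, exactly as two nested loops would.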
[0085]
[0086]
[0087] When performing a convolution operation, the address generator 206 generates W_addr 214 and O_addr 217 using address generator 602, and I_addr 216 using address generator 502, causing the memory 202 to generate W_data 218 based on W_addr and I_data 220 based on I_addr, and to provide the O_data 212 based on O_addr 217. The multiplier accumulator 210 multiplies the I_data 220 with the W_data 218, adds that product to the value O_data 212 to generate new O_data, stores the new O_data result back to the memory using the same O_addr location, and moves on to the next set of operations associated with the next set of iteration variables.
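The multiply-accumulate step above can be sketched over a flat memory; representing memory as a Python list is an illustrative assumption:

```python
def mac_step(mem, i_addr, w_addr, o_addr):
    """One convolution step: new O_data = O_data + I_data * W_data,
    written back to the same O_addr location (read-modify-write)."""
    mem[o_addr] += mem[i_addr] * mem[w_addr]
```

For example, with I_data=2 at i_addr, W_data=3 at w_addr, and O_data=10 at o_addr, the step leaves 16 at o_addr.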
[0088]
[0089] In one example of the invention, a single set of iteration variables, associated iteration stride constants, start address constants, and data size constants is applied to the address generators of
[0090] 1) stored into a cache memory for a pre-fetch operation, where the addresses are saved for future use;
[0091] 2) stored into a queue of access addresses for a pre-fetch operation, where the addresses are saved for future use;
[0092] 3) stored into a wide interleaved buffer for a tensor controlled by nested loops;
[0093] 4) applied for immediate use in addressing IN and W tensors, and determining the O address for stream buffers for later use in computing the associated tensor;
[0094] 5) applied for immediate use in addressing IN and W tensors, and determining the O address for instant storage of the result to an O address;
[0095] 6) computing the IN, W, and O addresses, and loading the associated addressed tensors into respective separate stream buffers;
[0096] 7) computing the IN, W, and O addresses, loading the associated tensors simultaneously into three queues.
[0097] The address generation hardware above can be incorporated into a dedicated access processor, a CPU, a GPU, or a custom accelerator, and can work in conjunction with any access processor or unit. An access processor can be an ordinary processor such as a CPU focusing on access tasks in parallel with the main compute processor. Alternatively, an access processor can be designed with special addressing modes and buffering to stage data in the order required for computations, where 100% correctness is needed; the buffers can be parallel enough to provide the high data bandwidth required for all computations of the execute processor. Different streams (tensors) can be placed in different buffers, allowing parallel access, so the different tensors can be simultaneously loaded and simultaneously used by the compute processor or accelerator. An access processor can also be designed to prefetch data into caches and local memories accessible by the execute processor in an advisory fashion; in that case, not all data fetched may be used by the execute processor, and 100% correctness is not needed for the fetching processor.
[0098] The access unit or the access processor can be programmable with certain sophisticated access pattern instructions, for example instructions for 3-dimensional, 4-dimensional, or 6-dimensional tensors. In the convolution loop presented above, there are three 4-dimensional tensors O, I and W corresponding to output, input and weights, respectively, and a 1-dimensional vector B (bias). Multiple instances of the hardware can be used in the access processor as described for
[0099]
[0100] In another example of the invention, each address generator of
[0101]
[0102] Returning to
[0103] As was previously described for
[0104] The architecture of
[0105]
[0106]
[0107] In one example of the invention, the index variable selection fabric is not used, and each variable i.sub.1′, i.sub.2′, etc. is equal to i.sub.1, i.sub.2, etc. respectively, and multiplexers 920A to 920G are not necessary. In another example of the invention, the configuration instruction of
[0108] Indices can be indicated by the following full mask vectors.
[0109] i.sub.7=1000000
[0110] i.sub.6=0100000
[0111] i.sub.5=0010000
[0112] i.sub.4=0001000
[0113] i.sub.3=0000100
[0114] i.sub.2=0000010
[0115] i.sub.1=0000001
[0116] Alternatively, references to an iteration variable can be encoded in 3 bits, where i.sub.1 is represented as {001} and i.sub.7 is represented as {111}.
[0117] Accordingly, when (U,i.sub.5,i.sub.3) need to be indicated as the index, meaning that the index is U*i.sub.5+i.sub.3, the encoded scheme can use 3 bits to reference iteration variable i.sub.5, 3 bits to reference iteration variable i.sub.3, and 1 bit to indicate whether U should be used (or provide a non-zero multi-bit value for U), and start addresses (StAddr-T1, StAddr-T2, StAddr-T3) are the same width as the address they specify.
[0118] Many conventional optimizations are possible to reduce the number of bits used to encode this instruction. But encoded full vectors are also useful to reduce the time spent to decode the information later for processing. In some scenarios it will be advantageous to use encoding and fewer bits. In other scenarios it will be advantageous to not use encoding but use the full amount of bits for the unencoded information.
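The two reference schemes, full 7-bit one-hot mask vectors versus compact 3-bit encodings, can be sketched as follows; the helper names are assumptions made here for illustration:

```python
def one_hot_mask(k):
    """Full mask vector for iteration variable i_k:
    i1 -> 0b0000001, ..., i7 -> 0b1000000."""
    return 1 << (k - 1)

def decode_one_hot(mask):
    """Recover k from a one-hot mask; wider to store, cheap to decode."""
    return mask.bit_length()

def encode_3bit(k):
    """Compact 3-bit reference: i1 -> 0b001, ..., i7 -> 0b111."""
    return k & 0b111
```

The one-hot form trades instruction width for decode time, while the 3-bit form minimizes bits at the cost of a small decoder, matching the tradeoff discussed above.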
[0119] Instructions can be devised for demand loading as well as prefetching. In addition to the examples shown, other types of instructions can be created for processing by the configurator 211 coupled to the address generator 206 and iteration variable generator 207, including the specification for storage or use of the resulting addresses generated by the address generator 206, including:
[0120] 1) PREFETCH into cache a tensor controlled by indexing of 6 or more nested loops
[0121] 2) PREFETCH into queues a tensor controlled by many nested loops
[0122] 3) PREFETCH into wide interleaved buffer a tensor controlled by many nested loops
[0123] 4) PREFETCH 3 tensors simultaneously into 3 stream buffers
[0124] 5) PREFETCH 3 tensors simultaneously into 3 queues
[0125] 6) Load 3 tensors simultaneously into 3 stream buffers
[0126] 7) Load 3 tensors simultaneously into 3 queues
[0127] Each of the embodiments of the preceding examples may be performed by processes. For example, the iteration variable generator may be operative as a sequence of iteration variables generated by respective iteration variable generation processes, a first iteration variable generation process counting by an associated stride S and returning to 0 when reaching an associated bound D, and also generating a carry output to increment a subsequent stage. In this manner, the iteration variables may be generated by a series of processes which perform as shown in
[0128] In another example of the invention, the input address generator of
[0129] The present examples are provided for illustrative purposes only, and are not intended to limit the invention to only the embodiments shown. In the present description, the use of ellipses such as “i.sub.1 . . . i.sub.7”, “D.sub.1 . . . D.sub.7”, etc are understood to refer to a set of coefficients or values such as {i.sub.1, i.sub.2, i.sub.3, i.sub.4, i.sub.5, i.sub.6, i.sub.7} and {D.sub.1, D.sub.2, D.sub.3, D.sub.4, D.sub.5, D.sub.6, D.sub.7}, respectively.