Convolution accelerator using in-memory computation

11562229 · 2023-01-24

Abstract

A method for accelerating a convolution of a kernel matrix over an input matrix for computation of an output matrix using in-memory computation involves storing in different sets of cells, in an array of cells, respective combinations of elements of the kernel matrix or of multiple kernel matrices. To perform the convolution, a sequence of input vectors from an input matrix is applied to the array. Each of the input vectors is applied to the different sets of cells in parallel for computation during the same time interval. The outputs from each of the different sets of cells generated in response to each input vector are sensed to produce a set of data representing the contributions of that input vector to multiple elements of an output matrix. The sets of data generated across the input matrix are used to produce the output matrix.

Claims

1. A method for accelerating a convolution of a kernel matrix over an input matrix, comprising: storing combinations of elements of the kernel matrix in respective sets of cells in an array of cells; applying elements of an input vector from the input matrix to the sets of cells storing the combinations of elements of the kernel matrix; sensing outputs responsive to the elements of the input vector from the sets of cells to produce data representing contributions of the input vector to multiple elements of an output matrix; applying input vectors from the input matrix in a sequence, including said first mentioned input vector, to the sets of cells storing the combinations of elements of the kernel matrix; for each input vector in the sequence, sensing outputs from the sets of cells to produce output data representing contributions to the output matrix; and combining the output data representing contributions to the output matrix for each of the input vectors in the sequence to produce the output matrix.

2. The method of claim 1, wherein the sets of cells are disposed in an array of cells including a plurality of columns of cells, and wherein each set of cells is composed of cells in a single column in the plurality of columns.

3. The method of claim 1, wherein the sets of cells comprise programmable resistance memory cells.

4. The method of claim 1, wherein the sets of cells comprise charge trapping memory cells.

5. The method of claim 1, wherein said sensing includes sensing a combined conductance of cells in each of the sets of cells in response to the input vector.

6. The method of claim 1, wherein said outputs represent, for each of the sets of cells, respective sums of products of the elements of the input vector and corresponding combinations of elements of the kernel matrix.

7. The method of claim 1, including providing the output data to digital logic, and combining the output data in the digital logic for each of the input vectors in the sequence to produce the output matrix.

8. The method of claim 1, wherein said convolution of a kernel matrix over an input matrix is a layer of a convolutional neural network.

9. A device for convolution of a kernel matrix over an input matrix, comprising: an array of memory cells storing combinations of elements of the kernel matrix in respective sets of cells in the array of cells; driver circuitry configured to apply elements of an input vector from the input matrix to the respective sets of cells; and sensing circuitry configured to sense output data from the respective sets of cells to produce data representing contributions of the input vector to multiple elements of an output matrix, wherein said output data represents, for the respective sets of cells, a sum of products of the elements of the input vector and the combinations of elements of the kernel matrix stored in the respective sets.

10. The device of claim 9, including logic coupled to the driver circuitry to apply a sequence of input vectors of the input matrix, including said first mentioned input vector, to the driver circuitry, and logic coupled to the sensing circuitry to combine the output data for the sequence of input vectors to produce elements of the output matrix.

11. The device of claim 9, wherein the array of memory cells comprises a plurality of columns, and wherein a set of cells of the respective sets of cells is composed of cells in a single column in the plurality of columns.

12. The device of claim 9, wherein the array of cells comprises programmable resistance memory cells.

13. The device of claim 9, wherein the array of cells comprises charge trapping memory cells.

14. The device of claim 9, wherein said sensing circuitry senses a combined conductance of cells in the respective sets of cells in response to the input vector.

15. The device of claim 9, wherein the array of memory cells is disposed on a first integrated circuit, and including logic disposed on circuitry outside the first integrated circuit to apply a sequence of input vectors of an input matrix to the driver circuitry, and to combine the sets of data for each of the input vectors in the sequence to produce the output matrix.

16. The device of claim 9, wherein said convolution of a kernel matrix over an input matrix is a layer of a convolutional neural network.

17. A device for convolution of a kernel matrix over an input matrix, comprising: an array of memory cells including a plurality of rows and a plurality of columns, storing combinations of elements of the kernel matrix in respective columns in the plurality of columns; driver circuitry configured to apply elements of an input vector from the input matrix to respective rows in the plurality of rows; sensing circuitry configured to sense output data from the respective columns to produce data representing contributions of said input vector to elements of an output matrix; logic coupled to the driver circuitry and the sensing circuitry to apply a sequence of input vectors of the input matrix, including said input vector, to the driver circuitry and produce output data representing contributions of input vectors in the sequence, including said input vector, to elements of the output matrix; and logic coupled to the sensing circuitry to combine the output data for the sequence of input vectors to produce elements of the output matrix, wherein said output data represents, for the respective columns, a sum of products of the elements of said input vector on the respective rows and the combinations of elements of the kernel matrix stored in the respective columns.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is an illustration of a particular stage in a convolution operation, and can be referred to for the purposes of terminology used herein.

(2) FIG. 2 is an illustration of a different stage in the convolution operation of FIG. 1.

(3) FIG. 3 is a simplified illustration of an array of memory cells configured for acceleration of a convolution operation as described herein.

(4) FIG. 4 is a block diagram of a system for accelerating a convolution as described herein.

(5) FIG. 5 illustrates an array of memory cells in which different sets of memory cells in the array store respective combinations of elements of multiple kernel matrices.

(6) FIG. 6 is a simplified system block diagram for a system for accelerating a convolution.

(7) FIG. 7 is a simplified system block diagram for an alternative embodiment of a system for accelerating a convolution.

(8) FIG. 8 is a flowchart illustrating a method for accelerating a convolution as described herein.

DETAILED DESCRIPTION

(9) A detailed description of embodiments of the present invention is provided with reference to FIGS. 1-8.

(10) FIGS. 1 and 2 illustrate stages of a convolution operation. In FIG. 1, an input matrix 10 has a height H, a width W and a depth C. In the case of a two-dimensional input matrix, the depth C can be 1. A kernel matrix 11 in this example comprises a three-dimensional filter FILTER 1. The kernel matrix 11 has a height R, a width S and a depth C. The convolution is executed to compute an output matrix 12. In this example, the output matrix 12 has a height E, a width F and a depth M. In a stage, as illustrated in FIG. 1, an element-wise multiplication of input vectors in the input matrix 10 is computed for a given stride STRIDE X and a given kernel matrix FILTER 1. In a stage, as illustrated in FIG. 2, an element-wise multiplication of input vectors in the input matrix is computed for a given stride STRIDE N and a given kernel matrix FILTER M.

(11) In a convolution, the kernel is applied, usually in a scan pattern, to the input matrix in a sequence having a horizontal stride and a vertical stride. In each particular stride, the elements of the kernel are combined with a set of elements of the input matrix in a window at the location of the stride. The results of the computation for each stride can be used to compute a single output value (e.g. 20) in the output matrix 12, or can be used in many other ways depending on the convolution functions. In the illustrated example of FIG. 1, for STRIDE X, the kernel is applied to a set of elements of the input matrix in window 15. The input vector X1:XC at the upper left corner of the input matrix includes elements from each of the layers of the input matrix. This input vector X1:XC for the purpose of STRIDE X is combined with the filter vector F1:FC from the upper left corner of the kernel matrix 11. Each of the input vectors within the window 15 is combined with a corresponding filter vector including a combination of elements of the kernel 11. Results from each of the input vectors within window 15 are combined according to the function of the convolution to produce the output value 20 in the output matrix 12.

(12) For each stride of the input kernel, a different combination of input vectors is utilized as the window for the kernel scans through the input matrix, to produce the corresponding value in the output matrix. However, each of the input vectors can be utilized in multiple strides for computation of multiple output values. For example, if the kernel matrix is a 3×3×C matrix, it can be represented by 9 kernel vectors having a length of C elements each. For a horizontal stride of 1, and a vertical stride of 1, each of the input vectors can be utilized by each of the 9 kernel vectors in 9 different strides.
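
The reuse described in this paragraph can be sketched in software (an illustrative Python sketch only; the function name and the 5×5 matrix size are assumptions, not part of the disclosure):

```python
# For a 3x3 kernel scanned with horizontal and vertical stride 1,
# enumerate the output elements whose windows cover input position (i, j).
def contributing_outputs(i, j, height, width):
    outs = []
    for de in (-1, 0, 1):
        for df in (-1, 0, 1):
            e, f = i + de, j + df          # candidate window center
            if 0 <= e < height and 0 <= f < width:
                outs.append((e, f))
    return outs

# An interior input vector is used in 9 strides:
print(len(contributing_outputs(2, 2, 5, 5)))  # 9
# An input vector at a corner is used in fewer:
print(len(contributing_outputs(0, 0, 5, 5)))  # 4
```

This is the reuse that the accelerator exploits: each input vector is applied to the array once, yet contributes to up to nine output elements.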

(13) FIG. 2 is a representation of a stage of the convolution in which kernel FILTER M is applied in stride STRIDE N to compute an output value 21 in the output matrix 12. In this example, FILTER M is applied to compute the elements in layer M of the output matrix. In stride STRIDE N the kernel 14 is applied in the window 16 of the input matrix, and the results are combined to compute output value 21 for the output matrix. Thus, this illustrates that multiple kernels, in this example M kernels, can be convolved with an input matrix 10 to compute an output matrix having multiple levels, in this example M levels, in which each of the M kernels is used to compute the output values for a corresponding one of the M levels of the output matrix 12.

(14) Thus, in a convolution, each input vector is combined with the kernel in multiple strides of the convolution, where each stride can be used to compute one element (typically) of the output matrix. In each stride of the convolution in which a given input vector is used, there is a set of elements of the kernel with which it is combined. For each stride, the set of elements of the kernel applied to the input vector is in a different location in the kernel.

(15) In a convolution accelerator as described herein, the multiple sets of elements in the kernel with which a particular input vector is combined, i.e. the sets for every stride in which it is used, are stored in different sets of cells (e.g. cells in different columns) of the array of cells. The outputs of the different sets of cells represent the contribution of the input vector to the output of a respective one of the multiple strides in which it is used in the convolution. The outputs of the different sets of cells can be sensed in parallel, and provided to logic circuitry which gathers and combines them to form the output matrix.

(16) FIG. 3 illustrates an array of memory cells “W” (e.g. 120). A set of first access lines 111, 112, 113, 114, . . . 118 (such as word lines) is disposed with the array, where each first access line is operatively coupled to memory cells in a row of the array, so that it is coupled to a corresponding memory cell in each of the multiple columns of the array in a single row. A set of second access lines 101, 102, 103, 104, . . . 109 (such as bit lines) is disposed with the array. Each second access line is operatively coupled to a corresponding set of memory cells in a single column in each of the multiple rows of the array.

(17) In this arrangement, the cells in each column that are coupled to the set of first access lines include a set of memory cells which stores a combination of elements of the kernel matrix. For this example, the kernel matrix can be characterized as a set of vectors of length C (e.g. F1-FC of FIG. 1) of filter F.sup.1, having coordinates in the R, S plane, as follows:

(18) TABLE-US-00001
F.sup.1.sub.−1,−1  F.sup.1.sub.−1,0  F.sup.1.sub.−1,1
F.sup.1.sub.0,−1   F.sup.1.sub.0,0   F.sup.1.sub.0,1
F.sup.1.sub.1,−1   F.sup.1.sub.1,0   F.sup.1.sub.1,1

(19) Thus, the kernel matrix includes nine vectors. In a convolution of horizontal and vertical stride 1 over an input matrix, each vector of the input matrix can be combined with each of the 9 vectors for the purposes of computing different values in the output matrix. Some input vectors on the edges, for example, may be combined with different numbers of the vectors, depending on the particular convolution being computed.

(20) The array of cells in FIG. 3 stores the elements of one vector of the kernel in one column. Thus, for the 3×3 kernel in this example, 9 sets of cells are used to store 9 vectors. The cells in the column operatively coupled to second access line 101 form a set of cells that stores elements of a vector F.sup.1.sub.−1,−1, of a first filter F.sup.1 at coordinate −1,−1. The cells in the column operatively coupled to second access line 102 form a set of cells that stores elements of a vector F.sup.1.sub.−1,0 of the first filter F.sup.1 at coordinate −1,0. The cells in the column operatively coupled to second access line 103 form a set of cells that stores elements of a vector F.sup.1.sub.−1,1 of the first filter F.sup.1 at coordinate −1,1. The cells in the column operatively coupled to second access line 104 form a set of cells that stores elements of a vector F.sup.1.sub.0,−1 of the first filter F.sup.1 at coordinate 0,−1. The fifth through eighth columns are not shown in FIG. 3. The last column in the array, including cells operatively coupled to second access line 109, forms a set of cells that stores elements of a vector F.sup.1.sub.1,1 of the first filter F.sup.1 at coordinate 1,1.
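
The column layout described above can be modeled as reshaping the kernel into a C × 9 weight array, one kernel vector per column (a hypothetical sketch; the function name and the column order col = 3*r + s are assumptions chosen to match the figure description):

```python
# Lay out a 3x3xC kernel, indexed kernel[r][s][c] with r, s in {0, 1, 2}
# standing for coordinates {-1, 0, 1}, as weights[c][col] with col = 3*r + s.
def kernel_to_columns(kernel):
    depth = len(kernel[0][0])
    weights = [[0] * 9 for _ in range(depth)]
    for r in range(3):
        for s in range(3):
            for c in range(depth):
                weights[c][3 * r + s] = kernel[r][s][c]
    return weights
```

Each column of `weights` then corresponds to one second access line of FIG. 3, and each row to one first access line.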

(21) To perform an in-memory computation, driver circuitry applies an input vector X1:XC to the set of first access lines 111-118 in this example, assuming a depth C of 8. Sensing circuitry is coupled to the set of second access lines 101-109 to sense, for each of the multiple sets of cells on the different second access lines, a combination of the conductances of the memory cells in their corresponding set of cells.

(22) For example, a current on each of the second access lines represents an element-wise sum-of-products of the filter vector implemented by the weights stored in the memory cells in the column, and an input vector applied on the first access lines. This element-wise sum-of-products can be computed simultaneously utilizing 9 different sense circuits in parallel for each of the 9 filter vectors.
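
Functionally, this parallel sensing is a matrix-vector product: each column current corresponds to a dot product of the input vector with that column's stored weights. A digital sketch with assumed names (a model of the arithmetic, not of the analog sensing itself):

```python
# weights[c][col] holds the kernel element stored in row c, column col;
# x[c] is the input driven on first access line c. One sum per column;
# the array computes all columns concurrently.
def sense_columns(weights, x):
    n_cols = len(weights[0])
    return [sum(x[c] * weights[c][col] for c in range(len(x)))
            for col in range(n_cols)]

print(sense_columns([[1, 2, 3], [4, 5, 6]], [2, 3]))  # [14, 19, 24]
```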

(23) For the purposes of example, FIG. 3 illustrates a portion of an output matrix 125 including 3 rows and 4 columns of elements M11 to M34. In one example, output element M22 can be equal to the sum of the element-wise products of the 9 filter vectors with a window of 9 input vectors, where the input vector at coordinate 2,2 in the input matrix is the center of the window.

(24) Thus, for a window centered at coordinate 2,2 on the input matrix, the output on second access line 101 is a partial sum used for the computation of output element M33. The output on second access line 102 is a partial sum used for the computation of output element M32. The output on second access line 103 is a partial sum used for the computation of output element M31. The output on second access line 104 is a partial sum used for the computation of output element M23, and so on. The output on the second access line 109 is a partial sum used for the computation of output element M11. Thus, the outputs of the nine second access lines represent contributions of the input vector to the computations of nine different strides.

(25) For a next window centered at coordinate 2,3, as illustrated at 126, the output on second access line 101 is a partial sum used for the computation of output element M34. The output on second access line 102 is a partial sum used for the computation of output element M33. The output on second access line 103 is a partial sum used for the computation of output element M32. The output on second access line 104 is a partial sum used for the computation of output element M24, and so on. The output on the second access line 109 is a partial sum used for the computation of output element M12.
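
The pattern in paragraphs (24) and (25) reduces to a simple rule (an illustrative sketch; the function name is hypothetical and the 1-based M indices follow FIG. 3): the column holding the kernel vector at offset (dr, ds) contributes, for the window centered at input position (i, j), to output element (i − dr, j − ds).

```python
# Output element fed by the column at kernel offset (dr, ds)
# when the window is centered at input position (i, j).
def target_output(i, j, dr, ds):
    return (i - dr, j - ds)

print(target_output(2, 2, -1, -1))  # (3, 3): second access line 101 -> M33
print(target_output(2, 2, 1, 1))    # (1, 1): second access line 109 -> M11
print(target_output(2, 3, -1, -1))  # (3, 4): next window, line 101 -> M34
```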

(26) To compute the value of an element of the output matrix, the partial sums from each of the input vectors that contribute to that value must be gathered and accumulated as the sequence of input vectors is applied to the array of cells used for the in-memory computation. This function of gathering and accumulating the partial sums can be executed using digital logic and scratchpad memory.

(27) FIG. 4 illustrates a simplified example of an array of memory cells including a number C of rows (corresponding to the number of row lines) and a number N of columns (corresponding to the number of kernel vectors). In this illustration, the array includes cells 411 and 421 in a first column, cells 412 and 422 in a second column, and cells 413 and 423 in a third column. Of course, embodiments of the array can include many rows and many columns.

(28) The memory cells can be resistive RAMs, where resistances of the memory cells represent the elements of a kernel (e.g. weights in a filter) as discussed above. Each memory cell in the array stores a weight factor W.sub.nm corresponding to an element of a filter vector, and can be represented as storing weights W.sub.11, W.sub.12, W.sub.13, W.sub.21, W.sub.22, and W.sub.23 respectively.

(29) A set of first access lines (e.g., 481, 482) is coupled to the memory cells in respective rows of the array. A set of second access lines (e.g., 491, 492, and 493) is coupled to the memory cells in respective columns of the array. The set of first access lines (e.g., 481, 482) is coupled to the row decoder/drivers 455, and the set of second access lines is coupled to the column decoder 456. Signals on the first access lines in the set of first access lines can represent inputs x1, x2 to the respective rows. In this example, the row decoder/drivers 455 assert a signal input x1 on the first access line 481 and a signal input x2 on the first access line 482, which can represent elements of an input vector.

(30) The sensing circuit 454 is coupled to respective second access lines in the set of second access lines via the column decoder 456. Current (e.g., y1, y2, y3) sensed at a particular second access line (e.g., 491, 492, 493) in the set of second access lines can represent a sum-of-products of the inputs x1, x2 and the respective weight factors W.sub.nm.

(31) Thus, in this example, the set of cells on second access line 491 produces a combined current on the second access line in response to the input vector which, upon sensing, results in a digital value y.sub.1=x.sub.1*w.sub.11+x.sub.2*w.sub.21. The digital value output from line 492 is y.sub.2=x.sub.1*w.sub.12+x.sub.2*w.sub.22. The digital value output from line 493 is y.sub.3=x.sub.1*w.sub.13+x.sub.2*w.sub.23. The sum-of-products outputs y.sub.1, y.sub.2, y.sub.3 can be stored in the data buffer 458, in an output data path.
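
A small numeric illustration of these sum-of-products outputs, using assumed example weights and inputs (integers chosen for readability; not values from the disclosure):

```python
# Rows of w hold w11..w13 and w21..w23; x holds the inputs x1, x2.
w = [[1, 2, 3],
     [4, 5, 6]]
x = [2, 3]
# y_col = x1 * w1col + x2 * w2col, as described for lines 491-493
y = [x[0] * w[0][col] + x[1] * w[1][col] for col in range(3)]
print(y)  # [14, 19, 24]
```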

(32) The output data path is coupled to gather circuits 460, which can comprise a set of multiplexers controllable to align the outputs for a given input vector in multiple groups, for combination with the outputs from other input vectors in the computation of multiple output values. The multiple groups can be applied to a set of accumulators 461 to compute the output matrix values. The output matrix values can then be written to memory 462.

(33) FIG. 5 shows an array of memory cells, like that of FIG. 3, expanded for the purposes of in-memory computation for multiple kernels (M kernels as illustrated in FIG. 1) in one array of memory cells, which can be configured to apply an input vector to the multiple kernels simultaneously. In this example, an array of memory cells “W” (e.g. 520) is illustrated. A set of first access lines 521, 522, 523, 524, . . . 528 (such as word lines) is disposed with the array, where each first access line is operatively coupled to memory cells in a row of the array so that it is coupled to a corresponding memory cell in each of the multiple columns of the array. A set of second access lines 501, 502, 503, 504, . . . 509 for a first kernel (FILTER 1) and a set of second access lines 511, 512, 513, 514, . . . 519 for an Mth kernel (FILTER M) are disposed with the array. Many second access lines for kernels FILTER 2 to FILTER M−1 are included in the array, but are not shown in the figure. Each second access line is operatively coupled to a corresponding set of memory cells in a single column in each of the multiple rows of the array in this example.

(34) In this arrangement, the cells in each column that are coupled to the set of first access lines include a set of memory cells which stores a combination of elements of the multiple kernel matrices. For this example, the kernel matrix for FILTER 1, as shown in FIG. 3, can be characterized as a set of vectors of length C (e.g. F1-FC of FIG. 1) of filter F.sup.1, having coordinates in the R,S plane, as follows:

(35) TABLE-US-00002
F.sup.1.sub.−1,−1  F.sup.1.sub.−1,0  F.sup.1.sub.−1,1
F.sup.1.sub.0,−1   F.sup.1.sub.0,0   F.sup.1.sub.0,1
F.sup.1.sub.1,−1   F.sup.1.sub.1,0   F.sup.1.sub.1,1

(36) The kernel matrix for FILTER M can be characterized as a set of vectors of length C (e.g. F1-FC of FIG. 1) of filter F.sup.M, having coordinates in the R,S plane, as follows:

(37) TABLE-US-00003
F.sup.M.sub.−1,−1  F.sup.M.sub.−1,0  F.sup.M.sub.−1,1
F.sup.M.sub.0,−1   F.sup.M.sub.0,0   F.sup.M.sub.0,1
F.sup.M.sub.1,−1   F.sup.M.sub.1,0   F.sup.M.sub.1,1

(38) Thus, each of the M kernel matrices includes nine vectors. In a convolution of horizontal and vertical stride 1 over an input matrix, each vector of the input matrix can be combined with each of the 9 vectors in each of the M kernels (9*M combinations) for the purposes of computing different values in the output matrix. Some input vectors on the edges for example may be combined with different numbers of the vectors, depending on the particular convolution being computed.

(39) The array of cells in FIG. 5 stores the elements of one vector of the kernel in one column. Thus, for the 3×3 kernels in this example, 9 sets of cells are used to store 9 vectors for each kernel. The cells on second access lines 501 to 509 store the vectors of the first filter F.sup.1 as described with reference to corresponding cells of FIG. 3. The cells in the column operatively coupled to second access line 511 form a set of cells that stores elements of a vector F.sup.M.sub.−1,−1 of an Mth filter F.sup.M at coordinate −1,−1. The cells in the column operatively coupled to second access line 512 form a set of cells that stores elements of a vector F.sup.M.sub.−1,0 of the Mth filter F.sup.M at coordinate −1,0. The cells in the column operatively coupled to second access line 513 form a set of cells that stores elements of a vector F.sup.M.sub.−1,1 of the Mth filter F.sup.M at coordinate −1,1. The cells in the column operatively coupled to second access line 514 form a set of cells that stores elements of a vector F.sup.M.sub.0,−1 of the Mth filter F.sup.M at coordinate 0,−1. The fifth through eighth columns are not shown in FIG. 5. The last column in the array, including cells operatively coupled to second access line 519, forms a set of cells that stores elements of a vector F.sup.M.sub.1,1 of the Mth filter F.sup.M at coordinate 1,1.

(40) To perform an in-memory computation, driver circuitry applies an input vector X1:XC to the set of first access lines 521-528, in this example, assuming a depth C of 8. Sensing circuitry is coupled to the set of second access lines 501-509 and to the set of second access lines 511-519 to sense, for each of the multiple sets of cells on the different second access lines, a combination of the conductances of the memory cells in their corresponding set of cells.
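
The multi-kernel layout can be modeled by widening the weight array to 9·M columns (a hypothetical sketch; the function name and the column ordering 9*m + 3*r + s are assumptions for illustration):

```python
# kernels[m][r][s][c]: element c of the kernel vector of filter m at
# coordinate (r-1, s-1). Returns weights[c][9*m + 3*r + s], so a single
# input vector applied to the rows yields 9*M partial sums at once.
def kernels_to_columns(kernels):
    n_filters = len(kernels)
    depth = len(kernels[0][0][0])
    weights = [[0] * (9 * n_filters) for _ in range(depth)]
    for m in range(n_filters):
        for r in range(3):
            for s in range(3):
                for c in range(depth):
                    weights[c][9 * m + 3 * r + s] = kernels[m][r][s][c]
    return weights
```

One application of the input vector then produces partial sums for all M filters in parallel, as described for FIG. 5.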

(41) As mentioned above, the sensed outputs can be provided to digital logic to gather and accumulate the outputs to compute the elements of the output matrix.

(42) In some embodiments, the array of cells can be expanded to store the kernel data for a plurality of convolutions, such as convolutions for multiple layers of a CNN.

(43) FIG. 6 illustrates a system incorporating in-memory computation as described herein, suitable for executing a CNN or other data processing operation that utilizes a convolution. The system includes a first integrated circuit 600 including an in-memory computation unit 601 comprising an array of cells configured for in-memory computation, such as described with reference to FIGS. 3-5. The output of the in-memory computation unit is applied to logic 602 which gathers and accumulates, or otherwise manipulates, the outputs of the in-memory computation to generate an output matrix. The output matrix is stored in local memory 603 on the first integrated circuit 600. The first integrated circuit 600 in this example is coupled to a host 610, which can be a data processing system configured for machine learning or another system that utilizes complex convolution operations. For example, the host 610 can be an image processor. The host 610 can be coupled to a large-scale memory 620, such as DRAM or other high-speed or high-capacity memory. The host can include computer programs that implement or support logic coupled to the driver circuitry and sensing circuitry, via, for example, addressing and command sequences applied to the computation unit 601, to apply a sequence of input vectors of the input matrix, from the DRAM or other source of the input matrix data, to the driver circuitry. In coordination with the sequence of input vectors, the gather and accumulate logic 602 on the computation unit 601 produces output data representing contributions of the input vectors in the sequence to elements of the output matrix, and combines the output data for the sequence of input vectors to produce elements of the output matrix.

(44) In this example, the in-memory computation unit 601, which comprises an array of memory cells, is manufactured on the same integrated circuit as the logic circuits (the gather and accumulate unit 602 and local memory 603) used to manipulate the outputs. Host 610 and large-scale memory 620 can be implemented off the integrated circuit 600.

(45) In some embodiments, all these components can be implemented on a single integrated circuit, or single multichip package.

(46) FIG. 7 illustrates another embodiment of a system incorporating in-memory computation as described herein, suitable for executing a CNN or other data processing operation that utilizes a convolution. This is representative of a variety of configurations in which the logic supporting the convolution operations is arranged in combinations of software on instruction processors, in special purpose logic, in data flow graphs on programmable gate arrays, and otherwise.

(47) In this embodiment, an in-memory computation unit 701 is implemented on a first integrated circuit 700. The first integrated circuit 700 has digital outputs, from the sense amplifiers for example, applied to a second integrated circuit 710. The second integrated circuit 710 comprises logic circuitry for manipulating the outputs, including the gather and accumulate unit 702 and local memory 703 in this example. Likewise, a host 705 and large-scale memory 706 may be implemented on a third integrated circuit 720. The integrated circuits 710 and 720 can be manufactured using fabrication facilities optimized for implementation of logic circuits. The integrated circuit 700, on the other hand, can be manufactured using fabrication facilities optimized for the type of memory array utilized.

(48) FIG. 8 is a flowchart showing a method for accelerating a convolution of a kernel matrix over an input matrix in which input vectors from the input matrix are combined with various combinations of elements of the kernel matrix for computation of an output matrix. Different sets of cells in the array can be used to implement the various combinations of elements. In preparation for the computation, the method includes storing the vectors of one or more kernels in different sets of cells in the array, where each of the vectors of the kernels comprises a different combination of elements of a kernel matrix (801). To execute the computation, input vectors from an input matrix are read from memory in sequence (802). The sequence of input vectors is provided to the input drivers for the array (803). This results in applying elements of each input vector from the input matrix in parallel to the different sets of cells in the array (804). Next, the outputs from each of the different sets of cells are sensed to produce, for each input vector, a set of data representing contributions to multiple elements of the output matrix (805). Finally, the sets of data for each of the input vectors in the sequence are combined to produce the output matrix (806).
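
The flowchart steps can be sketched end to end as a software model (illustrative only; it assumes stride 1, zero padding at the borders, the cross-correlation convention usual for CNNs, and hypothetical names):

```python
# inp[i][j][c]: input matrix of depth C; kernel[r][s][c]: 3x3xC kernel.
def convolve_in_memory(inp, kernel):
    height, width = len(inp), len(inp[0])
    depth = len(inp[0][0])
    # (801) store one kernel vector per column: weights[c][3*r + s]
    weights = [[kernel[r][s][c] for r in range(3) for s in range(3)]
               for c in range(depth)]
    out = [[0.0] * width for _ in range(height)]
    for i in range(height):          # (802)-(803) sequence of input vectors
        for j in range(width):
            x = inp[i][j]            # (804) one vector to all columns at once
            sums = [sum(x[c] * weights[c][col] for c in range(depth))
                    for col in range(9)]          # (805) sensed partial sums
            for col, partial in enumerate(sums):  # (806) gather and combine
                dr, ds = col // 3 - 1, col % 3 - 1
                e, f = i - dr, j - ds             # target output element
                if 0 <= e < height and 0 <= f < width:
                    out[e][f] += partial
    return out
```

Each input vector is read once (step 802) and its nine partial sums are scattered into the output matrix, matching the accumulation described in paragraphs (24)-(26).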

(49) Using this process, an input matrix can be applied only once to the in-memory computation unit. This can eliminate the requirement to repeatedly read and write vectors of the input matrix for the purposes of computing different strides of the convolution. As a result, the system can operate with lower cost and lower bandwidth data paths for movement of the input matrix and output matrix among the computational resources.

(50) In embodiments in which the convolutions are implemented as layers of a CNN, for example, this cycle can be repeated using a single in-memory computation unit in which multiple kernels or sets of kernels are arranged in a large-scale array of cells for the multiple layers of the CNN. Once an output matrix is computed as a result of a first layer of the CNN, the algorithm can loop back to provide an input matrix to another layer of the CNN. Alternatively, the output matrix produced by the in-memory computation unit can be configured as an input matrix for a next layer of the CNN.

(51) As a result of utilizing the in-memory computation configured as described herein, a system for executing convolutions is provided that can significantly reduce the amount of data movement required. This can increase the speed of operation, reduce the power required to execute the operation, and decrease the bandwidth requirement for movement of data during the convolution.

(52) While the present invention is disclosed by reference to the preferred embodiments and examples detailed above, it is to be understood that these examples are intended in an illustrative rather than in a limiting sense. It is contemplated that modifications and combinations will readily occur to those skilled in the art, which modifications and combinations will be within the spirit of the invention and the scope of the following claims.