METHOD FOR IMPROVING CONVOLUTIONAL NEURAL NETWORK TO PERFORM COMPUTATIONS
20220398429 · 2022-12-15
Abstract
A method for improving a convolutional neural network (CNN) to perform computations is provided. The method includes the following steps: determining a number of a plurality of multipliers to be N and a number of a plurality of adders to be N according to a number of convolution kernels used by a plurality of convolutional layers; and in response to an i-th convolutional layer of the convolutional neural network performing a convolution operation and N convolution kernels of the i-th convolutional layer all having a size of K×1×1, using the N multipliers and the N adders to perform a multiplication operation once and an addition operation once for each of the N convolution kernels of the i-th convolutional layer in one cycle, such that N outputs of the N convolution kernels of the i-th convolutional layer are obtained after K cycles.
Claims
1. A method for improving a convolutional neural network to perform computations, the convolutional neural network including a plurality of convolutional layers, each of the plurality of convolutional layers using N convolution kernels, and the method comprising: determining a number of a plurality of multipliers to be N and a number of a plurality of adders to be N according to the N convolution kernels used by the plurality of convolutional layers; and in response to an i-th convolutional layer of the convolutional neural network performing a convolution operation and the N convolution kernels of the i-th convolutional layer all having a size of K×1×1, using the N multipliers and the N adders to perform a multiplication operation once and an addition operation once for each of the N convolution kernels of the i-th convolutional layer in one cycle, such that N outputs of the N convolution kernels of the i-th convolutional layer are obtained after K cycles, wherein N is an integer greater than 1, i is an integer greater than or equal to 1, and K is an integer greater than 1.
2. The method according to claim 1, further comprising: in response to a j-th convolutional layer of the convolutional neural network performing the convolution operation and the N convolution kernels of the j-th convolutional layer all having a size of P×1×N, using the N multipliers and the N adders to perform N multiplication operations and N addition operations for a target convolution kernel of the N convolution kernels of the j-th convolutional layer in one cycle, such that an output of the target convolution kernel is obtained after P cycles, wherein j is an integer greater than or equal to 1, and P is an integer greater than 1.
3. The method according to claim 2, wherein the convolutional neural network further includes a plurality of fully connected layers, and the method further comprises: in response to a k-th fully connected layer of the convolutional neural network performing an operation and a total number of records of input data of the k-th fully connected layer being M*N, using the N multipliers and the N adders to complete conversion operations of N records of the input data in one cycle, such that an output of the k-th fully connected layer is obtained after M cycles, wherein k and M are integers greater than or equal to 1.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] The described embodiments may be better understood by reference to the following description and the accompanying drawings.
DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS
[0013] The present disclosure is more particularly described in the following examples that are intended as illustrative only since numerous modifications and variations therein will be apparent to those skilled in the art. Like numbers in the drawings indicate like components throughout the views. As used in the description herein and throughout the claims that follow, unless the context clearly dictates otherwise, the meaning of “a”, “an”, and “the” includes plural reference, and the meaning of “in” includes “in” and “on”. Titles or subtitles can be used herein for the convenience of a reader, which shall have no influence on the scope of the present disclosure.
[0014] The terms used herein generally have their ordinary meanings in the art. In the case of conflict, the present document, including any definitions given herein, will prevail. The same thing can be expressed in more than one way. Alternative language and synonyms can be used for any term(s) discussed herein, and no special significance is to be placed upon whether a term is elaborated or discussed herein. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any terms is illustrative only, and in no way limits the scope and meaning of the present disclosure or of any exemplified term. Likewise, the present disclosure is not limited to various embodiments given herein. Numbering terms such as “first”, “second” or “third” can be used to describe various components, signals or the like, which are for distinguishing one component/signal from another one only, and are not intended to, nor should be construed to impose any substantive limitations on the components, signals or the like.
[0015] Referring to the accompanying drawings, a conventional CNN having four convolutional layers is first taken as an example for description.
[0016] Since the convolution kernels of a first convolutional layer all have a size of 10×1×1, the first convolutional layer of the conventional CNN completes one convolution operation by performing 10 multiplication operations and 9 addition operations on 10 elements of the input data and one of the convolution kernels of the first convolutional layer, so as to obtain an output. In addition, since the convolution kernels of a second convolutional layer all have a size of 10×1×16, the second convolutional layer of the conventional CNN completes one convolution operation by performing 160 multiplication operations and 159 addition operations on 10*16 elements of the input data and one of the convolution kernels of the second convolutional layer, so as to obtain an output. Similarly, since the convolution kernels of a third and a fourth convolutional layer all have a size of 6×1×16, each of the third and the fourth convolutional layers of the conventional CNN completes one convolution operation by performing 96 multiplication operations and 95 addition operations on 6*16 elements of the input data and one of its convolution kernels, so as to obtain an output.
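For illustration only, the following minimal Python sketch reproduces the operation counts above from the kernel sizes given in the text; the layer names and the script itself are illustrative assumptions rather than part of the disclosure:

```python
# Illustrative sketch of the per-layer operation counts of the conventional CNN,
# assuming the four kernel sizes described in the text (layer names are assumed).
kernel_sizes = {
    "conv1": (10, 1, 1),   # 10x1x1
    "conv2": (10, 1, 16),  # 10x1x16
    "conv3": (6, 1, 16),   # 6x1x16
    "conv4": (6, 1, 16),   # 6x1x16
}

for layer, (k, h, c) in kernel_sizes.items():
    elements = k * h * c
    # One convolution output needs one multiplication per kernel element,
    # and one fewer addition to accumulate the products.
    print(f"{layer}: {elements} multiplications, {elements - 1} additions")
# conv1: 10 multiplications, 9 additions
# conv2: 160 multiplications, 159 additions
# conv3: 96 multiplications, 95 additions
# conv4: 96 multiplications, 95 additions
```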
[0017] It can be observed that the conventional CNN requires 10 multipliers for the first convolutional layer, 160 multipliers for the second convolutional layer, and 96 multipliers for each of the third and the fourth convolutional layers. Therefore, integration of the circuits is difficult to achieve. In addition, each convolutional layer needs independent control and access circuits. Especially for data storage, the number of elements that needs to be read for each operation differs between layers, such that the storage control and intermediate buffering mechanisms are complicated. In response to the above-referenced technical inadequacies, in step S110 of the method provided by the present disclosure, a number of a plurality of multipliers and a number of a plurality of adders are both determined to be N according to the number of convolution kernels used by the plurality of convolutional layers.
[0018] In other words, i, N, and K can respectively be 1, 16 and 10 in this embodiment, but the present disclosure is not limited thereto. Therefore, in step S120, the first convolutional layer of the improved CNN uses the 16 multipliers and the 16 adders to perform a multiplication operation once and an addition operation once for each of its 16 convolution kernels in one cycle, such that the 16 outputs of the 16 convolution kernels are obtained after 10 cycles.
[0019] Similarly, in terms of the elements A of the input data and the elements B of the convolution kernels, the output of the r-th convolution kernel of the first convolutional layer can be expressed as:
$$A_{1,1}B_{r,1}+A_{1,2}B_{r,2}+A_{1,3}B_{r,3}+A_{1,4}B_{r,4}+A_{1,5}B_{r,5}+A_{1,6}B_{r,6}+A_{1,7}B_{r,7}+A_{1,8}B_{r,8}+A_{1,9}B_{r,9}+A_{1,10}B_{r,10}=\sum_{k=1}^{10}A_{1,k}B_{r,k},$$
where r is an integer from 1 to 16. That is, outputs of 16 convolution kernels can be obtained at the same time.
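For illustration only, the following minimal Python sketch simulates this cycle scheme under the assumption that A holds the 10 input elements and B holds the 16 convolution kernels of size 10×1×1; the array names follow the formula above, while the code itself is an illustrative assumption rather than the disclosed circuit:

```python
import numpy as np

# Illustrative cycle-level sketch of the first convolutional layer (K = 10, N = 16).
K, N = 10, 16
rng = np.random.default_rng(0)
A = rng.standard_normal(K)        # input elements A[k], i.e., A_{1,k}
B = rng.standard_normal((N, K))   # B[r, k] is element k of kernel r, i.e., B_{r,k}

acc = np.zeros(N)                 # one accumulator per adder
for k in range(K):                # one cycle per input element
    # In each cycle, the N multipliers each perform one multiplication and
    # the N adders each perform one accumulation (one per convolution kernel).
    acc += A[k] * B[:, k]

# After K cycles, all N outputs are available at the same time.
assert np.allclose(acc, B @ A)
print(acc)
```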
[0020] Taking the second convolutional layer as an example, in step S130 of the method, in response to a j-th convolutional layer of the convolutional neural network performing the convolution operation and the N convolution kernels of the j-th convolutional layer all having a size of P×1×N, the N multipliers and the N adders are used to perform N multiplication operations and N addition operations for a target convolution kernel of the N convolution kernels in one cycle, such that an output of the target convolution kernel is obtained after P cycles.
[0021] In other words, j and P can respectively be 2 and 10 in this embodiment, but the present disclosure is not limited thereto. Therefore, the second convolutional layer of the improved CNN uses the 16 multipliers and the 16 adders to perform 16 multiplication operations and 16 addition operations for one target convolution kernel in each cycle, such that the output of the target convolution kernel is obtained after 10 cycles.
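For illustration only, the following minimal Python sketch simulates the P×1×N case under the assumption that X is a 10×16 block of input data and W is a single target convolution kernel; only P = 10 and N = 16 come from the text, and the code is an illustrative assumption rather than the disclosed circuit:

```python
import numpy as np

# Illustrative cycle-level sketch of the second convolutional layer (P = 10, N = 16).
P, N = 10, 16
rng = np.random.default_rng(1)
X = rng.standard_normal((P, N))   # P positions x N channels of input data
W = rng.standard_normal((P, N))   # one target convolution kernel of size Px1xN

out = 0.0
for p in range(P):                # one cycle per position
    # In each cycle, the N multipliers compute the N channel products and the
    # N adders accumulate them: N multiplications and N additions per cycle.
    out += np.dot(X[p], W[p])

# After P cycles, the output of the single target kernel is complete.
assert np.isclose(out, np.sum(X * W))
print(out)
```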
[0022] It should be understood that the present disclosure does not limit the execution order or the number of executions of step S120 and step S130. In addition, the CNN can also include a plurality of fully connected layers for classification. However, since the operating principle of a fully connected layer is already known to those skilled in the art, the details thereof are omitted herein. In short, in step S140 of the method, in response to a k-th fully connected layer of the convolutional neural network performing an operation and a total number of records of input data of the k-th fully connected layer being M*N, the N multipliers and the N adders are used to complete conversion operations of N records of the input data in one cycle, such that an output of the k-th fully connected layer is obtained after M cycles.
[0023] As shown in the foregoing embodiments, the same 16 multipliers and 16 adders are thus shared by each of the convolutional layers and the fully connected layers, instead of each layer requiring its own multipliers, adders, and independent control and access circuits.
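For illustration only, the following minimal Python sketch simulates the fully connected step for a single output under the assumptions that M = 4 and that x and w hold the M*N input records and their weights; the N-records-per-cycle scheduling and the M-cycle latency follow the text, while everything else is an illustrative assumption:

```python
import numpy as np

# Illustrative cycle-level sketch of the k-th fully connected layer
# (M is assumed to be 4; N = 16 follows the embodiment above).
M, N = 4, 16
rng = np.random.default_rng(2)
x = rng.standard_normal(M * N)    # M*N records of input data
w = rng.standard_normal(M * N)    # corresponding weights for one output

out = 0.0
for m in range(M):                # one cycle per group of N records
    group = slice(m * N, (m + 1) * N)
    # The N multipliers and N adders complete the conversion operations
    # of N records of the input data in each cycle.
    out += np.dot(x[group], w[group])

# After M cycles, the output of the fully connected layer is obtained.
assert np.isclose(out, np.dot(x, w))
print(out)
```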
[0024] In conclusion, compared with the conventional CNN, the present disclosure provides a method for improving a CNN to perform computations, such that complicated storage control and intermediate buffering mechanisms are not required, and the circuit area and power consumption required for implementation are relatively small.
[0025] The foregoing description of the exemplary embodiments of the disclosure has been presented only for the purposes of illustration and description and is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching.
[0026] The embodiments were chosen and described in order to explain the principles of the disclosure and their practical application so as to enable others skilled in the art to utilize the disclosure and various embodiments and with various modifications as are suited to the particular use contemplated. Alternative embodiments will become apparent to those skilled in the art to which the present disclosure pertains without departing from its spirit and scope.