MESSAGE BASED MULTI-PROCESSOR SYSTEM AND METHOD OF OPERATING THE SAME
20230266974 · 2023-08-24
Abstract
The present application discloses a message based multi-processor system (1) comprising a message exchange network (R,L) and a plurality of processor clusters (Ci,j) capable of mutually exchanging messages via the message exchange network. A processor cluster (Ci,j) comprises one or more processor cluster elements (PCE), and a message generator (MG). The message based multiprocessor system (1) is configured as a neural network processor system having a plurality of neural network processing layers (e.g. NL1, . . . ,NL5), each being assigned one or more of the processor clusters with their associated processor cluster elements being neural network processing elements therein. The message generator (MG) of a processor cluster (Ci,j) (associated with a neural network processing layer) comprises a logic module (MGL) and an associated message generator control storage space (MGM), wherein the logic module of a message generator in response to an activation signal (Sact([X,Y])) of a processor cluster element is configured to selectively generate and transmit a message for each of a set of destination processor clusters in accordance with respective message generation control data (CD1, CD2, CD3) for said destination processor clusters stored in the message generator control storage space (MGM).
Claims
1. A message based multi-processor system comprising: a message exchange network; and a plurality of processor clusters capable of mutually exchanging messages via the message exchange network, wherein each processor cluster comprises: one or more processor cluster elements, and a message generator; wherein the message based multiprocessor system is configurable as a neural network processor system having a plurality of neural network processing layers, where each neural network processing layer is assigned one or more processor clusters of the plurality of processor clusters with associated processor cluster elements of the processor clusters being neural network processing elements; wherein the message generator of a processor cluster associated with a neural network processing layer comprises: a logic module, and an associated message generator control storage space comprising respective message generation control data for respective destination processor clusters in a set of destination processor clusters, wherein the logic module of the message generator is configured to perform, in response to an activation signal of a processor cluster element, a respective computation using the message generation control data for each destination processor cluster in the set of destination processor clusters to: determine whether the respective destination processor cluster is a target of the processor cluster element, and selectively generate and transmit a message to each destination processor cluster that was determined as a target by the respective computation.
2. The message based multi-processor system according to claim 1, wherein the logic module comprises a respective logic module section to compute, for the coordinate values of the processor cluster element associated with the activation signal, a potential destination range having minimum and maximum coordinate values for respective coordinates in a coordinate system of the destination processor cluster, wherein the logic module comprises a further logic module section configured to: determine whether a condition is complied with that, for each of the coordinates, at least one of the computed minimum value and the computed maximum value is within the corresponding range for that coordinate, and enable a message transmission if the condition is complied with and disable a message transmission if for any of the coordinates neither the computed minimum value nor the computed maximum value are within the corresponding range.
3. The message based multi-processor system according to claim 2, wherein the further logic module section comprises, for each coordinate: a respective first comparator module configured to provide a first match signal indicative that the computed minimum value for that coordinate is in the corresponding range; a respective second comparator module configured to provide a second match signal indicative that the computed maximum value for that coordinate is in the corresponding range; and a logic OR gate to provide a further match signal indicative that at least one of the first and second match signals is valid, the further logic module section further comprising a logic AND gate to provide a message transmission enable signal if the further match signal for each coordinate is valid.
4. The message based multi-processor system according to claim 3, wherein a comparator module comprises: at least one mask register having respective mask bits, each mask bit being representative of a respective power of 2, and respective logic gates for bitwise comparison with a corresponding bit of a computed minimum value or maximum value, and a combination module configured to issue an invalid match signal indicating that at least one of the logic gates indicates a bit of a computed minimum/maximum value is set while a corresponding mask bit is not set.
5. The message based multi-processor system according to claim 1, wherein the control data furthermore comprises predetermined data indicative of an offset (Xoffs, Yoffs) that is computed in a preparatory step as follows:
Xoffs=Xsrc0−Xdst0−ΔXmin,
Yoffs=Ysrc0−Ydst0−ΔYmin, wherein (Xsrc0,Ysrc0) is a pair of coordinates representative of a first position of the processor cluster that is the source of the message in its associated neural network processing layer, wherein (Xdst0,Ydst0) is a pair of coordinates representative of a first position of the processor cluster that is specified by the control data as the destination of the message in its associated neural network processing layer, and wherein the values ΔXmin, ΔYmin are related to a convolution kernel size Wx, Wy.
6. The message based multi-processor system according to claim 1, wherein the control data furthermore comprises an indicator that the processor cluster elements of the destination processor cluster are arranged in one dimension and have a coordinate value for the one dimension that is proportional to an index of the processor cluster elements in the destination processor cluster.
7. The message based multi-processor system according to claim 1, wherein the control data furthermore comprises an indicator specifying stride changes.
8. The message based multi-processor system according to claim 1, wherein the control data furthermore comprises an indicator specifying a scale factor.
9. The message based multi-processor system according to claim 1, wherein a destination processor cluster further comprises a pattern storage facility, wherein respective entries of the pattern storage facility specify a spatial pattern of processor cluster elements in a space of the neural network processing layer associated with the destination processor cluster, and wherein the control data furthermore comprises a reference to an entry in the pattern storage facility.
10. A method of operating a message based multi-processor system, wherein the system comprises: a message exchange network; and a plurality of processor clusters capable of mutually exchanging messages via the message exchange network, wherein each processor cluster comprises one or more processor cluster elements, and a message generator including a logic module and a message generator control storage space; and wherein the method comprises: configuring, in a preparatory phase, the message based multiprocessor system as a neural network processor having a plurality of neural network processing layers, by assigning to each neural network processing layer a respective subset of one or more of the processor clusters including associated processor cluster elements, wherein the associated processor cluster elements form neural network processing elements therein; writing, in the preparatory phase, in respective storage entries of the message generator control storage space of a source processor cluster, respective sets of control data for respective destination processor clusters in a subsequent neural network processing layer; activating, during an operational phase, in a source processor cluster element of the source processor cluster, the message generator; performing, by the message generator in response to the activating, the following for each set of control data of the source processor cluster: a) retrieving each set of control data from the respective storage entry; b) performing a respective computation using the message generation control data for each destination processor cluster in the set of destination processor clusters to determine whether or not the respective destination processor cluster is a target of the processor cluster element; and c) transmitting, in accordance with the result of the determination being affirmative, an output message to the respective destination processor cluster.
11. The method of operating a message based multi-processor system according to claim 10, wherein the control data further comprises an indication of a message distribution pattern, wherein the transmitted output message conveys the indication, and wherein the destination processor cluster receiving the message applies the message to a set of core elements of the destination processor cluster in accordance with a pattern specified by the indication.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0033] These and other aspects of the present disclosure are shown in more detail in the attached drawings.
DETAILED DESCRIPTION OF EMBODIMENTS
[0054] The upper part of
[0055] The message based multiprocessor system 1 is configurable as a neural network processor system having a plurality of neural network processing layers each being assigned one or more of the processor clusters with their associated processor cluster elements being neural network processing elements therein. By way of illustrative example, such a configuration with five neural network processing layers NL1, . . . , NL5 is shown in the lower part of the figure.
[0056] The first convolutional neural network processing layer NL1 is a convolutional layer with 10 feature maps with a resolution of 80×80 pixels and is assigned to the processor clusters C1,0 and C0,1. The second convolutional neural network processing layer NL2 has 20 feature maps with a resolution of 40×40 pixels and is assigned to the processor clusters C2,0, C1,1 and C0,2. The third convolutional neural network processing layer NL3 has 42 feature maps with a resolution of 38×38 pixels and is assigned to the processor clusters C2,1 and C1,2. The fourth convolutional neural network processing layer NL4 has 50 feature maps with a resolution of 19×19 pixels and the fifth neural network processing layer NL5 is a fully connected layer. The neural network processing layers NL4, NL5 are both assigned to processor cluster C2,2. It is noted that the processor cluster C0,0 is not used to configure the neural network processor system and may be used for other purposes.
[0057] It will be appreciated that this is merely a simplified example. In practice a core may include thousands of processor cluster elements and the message based multiprocessor system may include hundreds or more of such cores arranged in the message exchange network. Also it is not necessary that there is such a clear geometrical relationship between the position of the cores in the message based processor system and their assignment to neural network processing layers.
[0058] In
[0060] The processor cluster PCS1 assigned to this range has destinations in the destination feature map assigned to:
[0061] a first destination processor cluster PCD1 for the range (Xdst0, Ydst0, Zdst0)=(0, 0, 0) to (3, 7, 15)
[0062] a second destination processor cluster PCD2 for the range (Xdst0, Ydst0, Zdst0)=(4, 0, 0) to (7, 7, 7)
[0063] a third destination processor cluster PCD3 for the range (Xdst0, Ydst0, Zdst0)=(4, 0, 8) to (7, 7, 15)
[0064] In the following it is described in more detail, how the source processor cluster PCS selectively generates and transmits a message to each of these destination processor clusters PCD1, PCD2, PCD3, based on an evaluation using the respective message generation control data (CD1, CD2, CD3) stored in its message generator control storage space MGM.
[0065] Starting with the first of the destination processor clusters PCD1, the evaluation is as follows.
[0066] From the ID of the processor cluster element responsible for the activation signal Sact[X,Y], further denoted herein as the "firing processor cluster element" or simply "firing core element", the (X, Y) location relative to the origin of the feature map is calculated. In this example, the activation signal Sact[X,Y] specifies that the local coordinates of the firing core element within the cluster PCS are X=0, Y=1.
[0067] Adding to these coordinates the coordinates of the origin (Xsrc0=4, Ysrc0=4) of the source processor cluster PCS in the global coordinate system provides the global coordinates (Xsrc, Ysrc) of the firing core element in the complete logical source feature map.
Xsrc=X+Xsrc0
Ysrc=Y+Ysrc0
[0068] The coordinates of the origin of the destination core PCD1 in the global coordinate system are subtracted from this intermediary result. For destination core PCD1 the global coordinates of its origin are (Xdst0=0, Ydst0=0).
[0069] Then an offset (ΔXmin, ΔYmin) is subtracted to get the first destination (X,Y) affected by the firing core element (in the example of a zero-padded 3×3 convolution the kernel size (KernelSize) is 3 and the value to be subtracted is (KernelSize−1)/2=1), presuming that the kernel is square. Alternatively the kernel shape is rectangular and can be specified with a pair of kernel sizes.
[0070] Instead of repeating all computations each time an activation signal is received, the computation is simplified by precomputing an offset pair:
Xoffs=Xsrc0−Xdst0−ΔXmin
Yoffs=Ysrc0−Ydst0−ΔYmin
[0071] The values of the pre-computed offset pair are stored as part of the message generation control data (CD1, CD2, CD3).
[0072] Upon receipt of an activation signal Sact[X,Y], a minimum coordinate pair is computed by adding the offset value pair (Xoffs, Yoffs) to the coordinate pair (X,Y) provided by the activation signal:
(X min,Y min)=(X,Y)+(Xoffs,Yoffs),
Therein Xmin, Ymin are the minimum X-value and the minimum Y-value respectively.
[0073] Also a maximum coordinate pair is computed by adding the kernel size indicator to the minimum coordinate pair. I.e.
(Xmax,Ymax)=(Xmin,Ymin)+(Kx,Ky) (=(Xmin,Ymin)+(K,K) for a square kernel).
Therein Xmax, Ymax are the maximum X-value and the maximum Y-value respectively. Furthermore:
[0074] K=KernelSize−1, or in the more general case:
[0075] Kx=KernelSizeX−1
[0076] Ky=KernelSizeY−1
[0077] Subsequently, it is determined by the logic module MGL of the processor whether or not an output message is sent to the destination processor cluster PCD1 in response to the activation signal Sact[X,Y].
[0078] To that end the values of the minimum coordinate pair (Xmin, Ymin) and the values of the maximum coordinate pair are compared with the dimensions of the range spanned by the destination processor cluster PCD1 in the coordinate space. These are denoted as W in the X-direction and H in the Y-direction. Transmission of an output message is enabled if at least one of the values Xmin, Xmax is within the range [0,W) and at least one of the values Ymin, Ymax is within the range [0,H).
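The offset pre-computation and the per-activation hit test described above can be sketched as follows. This is an illustrative model, not the literal hardware logic; the function names `precompute_offsets` and `hit_detect` and the tuple layout of the parameters are assumptions.

```python
def precompute_offsets(src0, dst0, d_min):
    """Offset pair stored in the control data: offs = src0 - dst0 - dmin."""
    (xs, ys), (xd, yd), (dxm, dym) = src0, dst0, d_min
    return (xs - xd - dxm, ys - yd - dym)

def hit_detect(x, y, offs, kernel, w, h):
    """Hit test for a firing element at local (x, y) against a W x H destination."""
    xmin, ymin = x + offs[0], y + offs[1]            # minimum coordinate pair
    xmax, ymax = xmin + kernel[0], ymin + kernel[1]  # Kx = KernelSizeX - 1, etc.
    hit = ((0 <= xmin < w or 0 <= xmax < w) and
           (0 <= ymin < h or 0 <= ymax < h))
    return hit, (xmin, ymin), (xmax, ymax)

# Worked example from the text: source origin (4,4), destination origin (0,0),
# zero-padded 3x3 convolution, so (dXmin, dYmin) = (1,1) and the offset is (3,3).
offs = precompute_offsets((4, 4), (0, 0), (1, 1))       # -> (3, 3)
hit, mn, mx = hit_detect(0, 1, offs, (2, 2), w=4, h=8)  # -> True, (3, 4), (5, 6)
```

The same `hit_detect` call with the offset (−1, 3) reproduces the evaluation for the second destination cluster further below.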
[0079] With these steps, at most one message needs to be sent to each destination cluster or feature map in a destination cluster. The message only needs to specify a single destination coordinate, and the destination cluster applies the message to a set of destination processor cluster elements specified by a pattern. An identification of the pattern to be applied is typically included in the message. For the destination processor cluster PCD1 the offset value pair is (Xoffs=3, Yoffs=3). In this example, wherein the activation signal Sact[X,Y] originates from the firing core element with local coordinates X=0, Y=1, the minimum coordinate pair and the maximum value pair are computed as:
(Xmin,Ymin)=(0,1)+(3,3)=(3,4)
(Xmax,Ymax)=(3,4)+(2,2)=(5,6)
[0080] The message generation control data (CD1) of the processor cluster (PCS1) specifies that the destination processor cluster (PCD1) has a width W=4 and a height H=8. Accordingly at least one of the values Xmin, Xmax, here the value Xmin=3, is within the range [0,W). Also at least one of the values Ymin, Ymax, in this case both values Ymin=4, Ymax=6, is within the range [0,H). Therewith the logic module (MGL) determines that a message is to be generated and transmitted to destination processor cluster PCD1.
[0081] The message to be generated and transmitted may comprise the following data:
[0082] a) The cluster address specified in the message generation control data (CD1) and the value pair (Xmin, Ymin) computed with the logic module (MGL).
[0083] b) In exemplary embodiments the message includes a value. This is not essential. In some cases the presence or absence of a message may be considered as indicating a Boolean value. Also the distance in time between subsequent messages may indicate a value to the recipient processor cluster (PCD1).
[0084] c) In exemplary embodiments, the message alternatively or additionally comprises a pattern identification. The pattern identification enables the destination processor cluster (PCD1) to select one of a plurality of patterns of weight values to be applied to its processor cluster elements to which the message is to be applied. Alternatively, the destination processor cluster (PCD1) may apply a standard pattern. A pattern identification may be determined as the sum of a base ID (PatternID0) and a coordinate value Z indicating a feature map index associated with the processor cluster element (PCE) in the source processor cluster (PCS1) that is responsible for the activation signal. The receiving destination processor cluster (PCD1) applies the message value in accordance with the weights of the pattern to its processor cluster elements (PCE). Therein the message value is either a value explicitly specified in the message or a value implicit from the presence or absence of the message or the amount of time passed since a previous message, and the pattern is either a default pattern or a pattern specified by a pattern identification in the message.
[0085] For destination core PCD2 the global coordinates of its origin are (Xdst0=4, Ydst0=0). Accordingly, the offset value pair for this destination core (PCD2) is pre-computed as
Xoffs=Xsrc0−Xdst0−ΔXmin=4−4−1=−1
Yoffs=Ysrc0−Ydst0−ΔYmin=4−0−1=3
[0086] In this example, wherein the activation signal Sact[X,Y] originates from the firing core element with local coordinates X=0, Y=1, the minimum coordinate pair and the maximum value pair are computed as:
(Xmin,Ymin)=(0,1)+(−1,3)=(−1,4)
(Xmax,Ymax)=(−1,4)+(2,2)=(1,6)
[0087] The message generation control data (CD2) of the processor cluster (PCS1) further specifies that the destination processor cluster (PCD2) has a width W=4 and a height H=8. Accordingly at least one of the values Xmin, Xmax, here the value Xmax=1, is within the range [0,W). Also at least one of the values Ymin, Ymax, in this case both values Ymin=4, Ymax=6, is within the range [0,H). Therewith the logic module (MGL) determines that a message is to be generated and transmitted to destination processor cluster PCD2.
[0088] The message generation control data (CD3) pertains to a third processor cluster (PCD3) that has the same global coordinates of its origin (Xdst0=4, Ydst0=0) in the XY-plane as the processor cluster PCD2, but has a different Z-value, i.e. Zdst=8. This implies that a message, if transmitted, to the third destination processor cluster (PCD3) will have a pattern ID different from that of the second destination processor cluster (PCD2).
[0090] The fully connected layer FC2 provides for a mapping 1×1×Z1 → 1×1×Z2 (where Z1 and Z2 are the layer depths of the source layer NL1 and the destination layer FC2 respectively). This mapping is logically equivalent to a 1×1 convolution on a feature map with X and Y equal to 1.
The source feature map is of size (1, 1, 1024). In the example shown, the coordinate range of the neural network processing layer NL1 is (Xsrc0, Ysrc0, Zsrc0)=(0, 0, 0) to (0, 0, 1023). In this example the processor cluster PCS1 is assigned to the partition (Xsrc0, Ysrc0, Zsrc0)=(0, 0, 512) to (0, 0, 1023) of this coordinate range. The destination processor cluster PCD1 and the destination processor cluster PCD2 are respectively assigned to the partitions:
[0091] (Xdst0, Ydst0, Zdst0)=(0, 0, 0) to (0, 0, 1023) and
[0092] (Xdst0, Ydst0, Zdst0)=(0, 0, 1024) to (0, 0, 1067)
in the coordinate range of the fully connected neural network processing layer FC2.
[0093] By way of example, it is presumed that a neural network processor element identified by coordinates (X,Y,Z)=(0,0,N), (i.e., NeuronID=FMstart+N) gives rise to an activation signal. It is further presumed in this example that a range of PatternIDs in destination processor cluster PCD1 extends from 0 (source Neuron 0) to 1067 (source Neuron 1067) and that a range of PatternIDs in cluster PCD2 extends from 99 (source Neuron 0) to 1166 (source Neuron 1067).
TABLE-US-00001
message generation control data    CD1                CD2
PatternID0                         512                611
Xoffs                              0                  0
Yoffs                              0                  0
KC                                 0                  0
address                            Address of PCD1    Address of PCD2
W                                  1                  1
H                                  1                  1
[0094] Upon detection of the activation signal Sact(0,0,N), the logic module MGL of the processor cluster PCS1 computes the minimum and the maximum value pair [Xmin, Ymin] and [Xmax, Ymax] using the coordinates of the processor cluster element PCE in the XY plane. As the XY coordinates are (0,0) and the offset values as well as the KC-value are 0, the computed values of the minimum pair and the maximum pair are
[Xmin,Ymin]=[0,0]
[Xmax,Ymax]=[0,0]
[0095] Accordingly at least one of the values Xmin, Xmax, in this case both, is within the range [0,W). Also at least one of the values Ymin, Ymax, in this case both, is within the range [0,H). For fully connected layers Xmin, Xmax, Ymin and Ymax are in fact always zero, as the X and Y sizes of the source and destination feature maps are always 1 and the KernelSize is always 1; this means that the (x,y) position of a firing neuron is always (0,0) while Xoffset and Yoffset are always 0. The message transmitted to the destination processor cluster PCD1 comprises the following data (Address of PCD1, Xmin=0, Ymin=0, PatternID=PatternID0+N=512+N, Value).
[0096] The above applies equivalently to the enablement of a message to the destination processor cluster PCD2 albeit that the destination address and the selected pattern are different. I.e. the message transmitted for destination processor cluster PCD2 comprises the following data (Address of PCD2, Xmin=0, Ymin=0, PatternID=PatternID0+N=611+N, Value).
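The fully connected case above can be sketched in Python using the CD1/CD2 values from the table. The dict layout of the control data and the name `fc_message` are illustrative assumptions.

```python
def fc_message(control, n):
    """Build the message for a firing neuron with relative index N."""
    # For a fully connected layer X = Y = 0 and all offsets and KC are 0,
    # so Xmin = Ymin = Xmax = Ymax = 0, always inside [0,W) x [0,H).
    xmin = ymin = 0
    assert 0 <= xmin < control["W"] and 0 <= ymin < control["H"]
    return (control["address"], xmin, ymin, control["PatternID0"] + n)

cd1 = {"PatternID0": 512, "Xoffs": 0, "Yoffs": 0, "KC": 0,
       "address": "PCD1", "W": 1, "H": 1}
cd2 = {"PatternID0": 611, "Xoffs": 0, "Yoffs": 0, "KC": 0,
       "address": "PCD2", "W": 1, "H": 1}

# Neuron N = 7 fires: exactly one message per destination cluster,
# differing only in destination address and selected pattern.
msg1 = fc_message(cd1, 7)   # ('PCD1', 0, 0, 519)
msg2 = fc_message(cd2, 7)   # ('PCD2', 0, 0, 618)
```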
[0098] As illustrated in
[0103] In the first case, shown in
KH=(KernelSize−1)>>1
This is for example a 1×1 convolution with w(0,0), wherein w(.,.) indicates the weights of the convolution kernel C1, i.e. w(0,0) is the weight of the convolution kernel C1 for the coordinates (0,0).
[0104] In the second case, as shown in
[0105] In the third case, shown in
[0106] In the fourth case, shown in
[0107] This example considers a typical odd kernel width/height, but the same applies for an even kernel shape. In that case, only the shapes of the four sub-convolutions change.
[0109] Effectively, the neuron at (X,Y) will be shifted left (i.e., multiplied by two) before applying the inverse convolution.
[0115] As becomes apparent from these figures, the logic module MGL determines whether or not a message is to be sent to a particular destination processor cluster in a computationally efficient manner.
[0116] For illustration purposes,
[0117] More particularly, the first module shown in
[0118] In
[0119] In operation, the logic module performs the following steps. Initial lower boundary values [Ymin, Xmin] for the mapping window are computed as:
Ymin=Y<<CDi.UpSamp+CDi.Yoffset (See FIG. 7A, elements GLY1 and GLY2 respectively)
Xmin=X<<CDi.UpSamp+CDi.Xoffset (See FIG. 7B, elements GLX1 and GLX2 respectively)
[0120] Therein CDi.x denotes the parameter x in the message generation control data CDi stored in the message generator control storage space (MGM), wherein “i” is the index associated with the current destination processor cluster. For example CD1.UpSamp is the upsampling factor for the destination processor cluster with index 1.
[0121] In elements GLY1, GLX1 the Y-value and X-value indicated in the activation signal are optionally left-shifted by a number of bits indicated by the parameter CDi.UpSamp contained in the message generation control data CDi. Therewith the number of bits with which the left-shift operation is applied is equal to log2(UpsamplingFactor). It is presumed that only upsampling by a power of 2 is required. If no upsampling is required the input values of Y and X are passed to the outputs of elements GLY1, GLX1. It may be contemplated to provide for upsampling factors other than powers of 2. In that case the shift-left operation should be replaced by a multiplier, which is computationally more expensive.
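The equivalence of the left-shift and power-of-2 upsampling can be illustrated with a minimal sketch; the function name `upsample` is an assumption.

```python
# The left-shift in GLY1/GLX1 implements upsampling by a power of two:
# shifting by log2(factor) bits equals multiplying by the factor.
def upsample(coord, upsamp_shift):
    # upsamp_shift = log2(upsampling factor); 0 means no upsampling
    # (the coordinate is passed through unchanged).
    return coord << upsamp_shift

assert upsample(5, 0) == 5    # no upsampling: value passed through
assert upsample(5, 1) == 10   # factor 2
assert upsample(5, 2) == 20   # factor 4, i.e. 5 * 2**2
```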
[0122] Initial upper boundary values are computed as
Ymax=Ymin+CDi.KC (See FIG. 7A: adder GLY5)
Xmax=Xmin+CDi.KC (See FIG. 7B: adder GLX5)
Final lower boundary values [Ymin, Xmin] for the mapping window are computed as:
Ymin=CDi.S2*Yodd+Ymin>>CDi.S2 (See FIG. 7A: multiplexer GLY3, shift-right element GLY4 and adder GLY6 respectively)
Xmin=CDi.S2*Xodd+Xmin>>CDi.S2 (See FIG. 7B: multiplexer GLX3, shift-right element GLX4 and adder GLX6 respectively)
[0123] Therein the value pair [Xodd, Yodd] is assigned as [Xmin[0], Ymin[0]], i.e. the least significant bits of Xmin and Ymin at the output of GLX2, GLY2.
[0124] It becomes apparent from
[0125] Final upper boundary values [Ymax, Xmax] for the mapping window are computed as:
Ymax=Ymax>>CDi.S2 (See FIG. 7A: shift-right element GLY7)
Xmax=Xmax>>CDi.S2 (See FIG. 7B: shift-right element GLX7)
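The boundary computation of the elements GLY1..GLY7 and GLX1..GLX7 above can be condensed into one per-axis function. This is an illustrative software model under the stated assumption that the control-data fields are UpSamp, the offset, KC and S2; it is not the hardware itself.

```python
def mapping_window(coord, offset, upsamp, kc, s2):
    """Return (final_min, final_max, odd) for one coordinate axis."""
    cmin = (coord << upsamp) + offset   # GL*1, GL*2: initial lower bound
    cmax = cmin + kc                    # GL*5: initial upper bound
    odd = cmin & 1                      # least significant bit of the bound
    cmin = s2 * odd + (cmin >> s2)      # GL*3, GL*4, GL*6: final lower bound
    cmax = cmax >> s2                   # GL*7: final upper bound
    return cmin, cmax, odd

# With no upsampling and no stride change (s2 = 0) this reduces to
# min = coord + offset, max = min + KC, matching the earlier 3x3 example:
ymin, ymax, yodd = mapping_window(1, 3, 0, 2, 0)   # -> (4, 6, 0)
```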
[0126] With these values [Xmin, Ymin], [Xmax, Ymax] it is determined in the section of the logic module MGL shown in
[0127] In the pseudo hardware implementation shown in
[0128] In some embodiments the comparator modules XMN, XMX, YMN, YMX may be provided as full-fledged comparator modules that perform the comparison for arbitrary values. Alternatively the comparator modules may be provided as bitwise comparators. Therewith the allowable target coordinate range can be selected from powers of 2.
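A bitwise comparator of the kind described can be modeled as a mask check: the match is invalid if any bit of the computed value is set where the corresponding mask bit is not. The sketch below is an assumption about one plausible realization, with an illustrative function name.

```python
def in_masked_range(value, mask):
    """Range check [0, mask+1) for a mask of contiguous low bits."""
    if value < 0:            # negative coordinates are always out of range
        return False
    # Invalid if any bit outside the mask is set; valid otherwise.
    return (value & ~mask) == 0

# A cluster width of 8 uses mask 0b111: valid coordinate values are 0..7.
assert in_masked_range(5, 0b111) is True
assert in_masked_range(8, 0b111) is False
assert in_masked_range(-1, 0b111) is False
```

This is why a bitwise comparator restricts the allowable target coordinate range to powers of 2: the mask can only describe ranges of the form [0, 2^k).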
[0129] By way of example such an embodiment of the comparator module XMN is shown in
[0130] If the logic module section of
PatternIDoffset=(Z<<(2*S2))+S2*(2*Yodd+Xodd)
[0131] As shown in
[0133] The exemplary embodiment of the method shown in
[0134] In an initialization step S1, the following input parameters are obtained: ControlDataStart; ControlDataPreStop; N; Value; FMstart; FMsizeZv; FMsizeY; FMsizeX. It is noted that specifying the value FMstart enables the option to map multiple feature maps to a common layer. It is further noted that it is alternatively possible to provide X, Y, Z and Nrel directly as an input and make this independent of the message generator. This simplifies subsequent computations.
[0135] The input parameters ControlDataStart; ControlDataPreStop are obtained from a pattern-memory. The input parameter ControlDataStart indicates the location of the first set of control data therein. The input parameter ControlDataPreStop indicates the end of the last set of control data.
[0136] The input parameters N and Value specify the ID (e.g. coordinate values of the processor cluster element) in the processor cluster and the value of the firing neural processor layer element. A frame is specified by FMstart; FMsizeZv; FMsizeY; FMsizeX.
Therein FMstart indicates the index of the first processor cluster element of the current processor cluster. The parameters FMsizeZv, FMsizeY, FMsizeX indicate the size of a feature map or portion thereof represented by the processor cluster. I.e. the parameters FMsizeX and FMsizeY indicate the size of the feature map in the spatial directions X, Y and FMsizeZv indicates the number of feature maps represented by the processor cluster.
[0137] With these input parameters, the following initial steps are performed: The neuron-id Nrel relative to the start position is computed as
Nrel=N−FMstart
[0138] The position [X,Y,Z] of the processor cluster element that issues an activation signal Sact(X,Y,Z) in the source feature map is determined from its relative address. This operation can be symbolically expressed as:
[X,Y,Z]=getXYZ(Nrel,FMsizeZv;FMsizeY,FMsizeX)
[0139] The relative processor cluster element index Nrel may be related to the position [X,Y,Z] as Nrel=X+FMsizeX*Y+FMsizeX*FMsizeY*Z. In this example it is presumed that the neuron-IDs are assigned in an X-first, Y-second, Z-last fashion. It is noted that any other way of assigning neuron IDs is possible as long as it is sufficiently well-defined to enable a reconstruction of the coordinates X,Y,Z from the NeuronID.
In an embodiment the values for FMsizeX and FMsizeY are a power of 2, so that the value of Nrel can be efficiently calculated with
Nrel=X+(Y<<log2(FMsizeX))+(Z<<(log2(FMsizeX)+log2(FMsizeY)))
[0140] Accordingly the coordinates X,Y,Z can be derived from Nrel as:
X=Nrel[0:log2(FMsizeX)−1]
Y=Nrel[log2(FMsizeX):log2(FMsizeX)+log2(FMsizeY)−1]
Z=Nrel[log2(FMsizeX)+log2(FMsizeY):]
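The bit-field decomposition above can be sketched as follows; `get_xyz` models the getXYZ operation for power-of-2 feature-map sizes, and `get_nrel` is its inverse (both names are illustrative).

```python
# The relative neuron id is a bit field [ Z | Y | X ] when FMsizeX and
# FMsizeY are powers of 2, so coordinates are recovered by shifts and
# masks instead of divisions.

def get_nrel(x, y, z, log2_x, log2_y):
    return x + (y << log2_x) + (z << (log2_x + log2_y))

def get_xyz(nrel, log2_x, log2_y):
    x = nrel & ((1 << log2_x) - 1)            # low bits: X coordinate
    y = (nrel >> log2_x) & ((1 << log2_y) - 1)  # middle bits: Y coordinate
    z = nrel >> (log2_x + log2_y)             # remaining high bits: Z
    return x, y, z

# FMsizeX = FMsizeY = 4 (log2 = 2): round trip for a sample coordinate.
assert get_nrel(3, 1, 2, 2, 2) == 3 + 4 + 32    # == 39
assert get_xyz(39, 2, 2) == (3, 1, 2)
```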
[0141] Instead of computing the coordinate values for each instance, the coordinate values may alternatively be computed incrementally. For example the processor cluster element states may be updated on a cyclic basis, starting from the first processor cluster element in the cluster having coordinates (0,0,0) to the last one, while incrementally updating the coordinate values.
[0142] A value of a control parameter DestNum, indicating a number of destination processor clusters, is initialized. A value of a further control parameter DestInd is initialized at 0. This further control parameter is an index specifying a respective set of message control parameters for a respective destination processor cluster.
[0143] In step S2 the value of the control parameter DestNum is verified. If the value of DestNum is 0, the procedure ends. If the value of DestNum differs from zero, one or more of the procedural steps S3-S9 are performed as specified below.
[0144] In step S3 the message generation control data CDi for the destination processor cluster referred to by the destination index DestInd are read from the message generator control storage space MGM.
[0145] In step S4 a Boolean value of a further message type indication "Flatten" is determined. If the Boolean value is True, a step S5 is performed, which is succeeded by a step S6. If the Boolean value is False, the procedure continues directly with step S6.
[0146] In step S5 the coordinates [X,Y,Z] are assigned as follows.
[X,Y,Z]=[0,0,Nrel]
[0147] In step S6 the following computations are performed.
Initial lower boundary values [Ymin, Xmin] for the mapping window are computed as:
Ymin=Yoffset+Y<<UpSamp
Xmin=Xoffset+X<<UpSamp
Initial upper boundary values are computed as
Ymax=Ymin+KC
Xmax=Xmin+KC
[0148] The value pair [Xodd, Yodd] is assigned as [Ymin[0], Xmin[0]]
[0149] Final lower boundary values [Ymin, Xmin] for the mapping window are computed as:
Ymin=S2*Yodd+Ymin>>S2
Xmin=S2*Xodd+Xmin>>S2
[0150] Final upper boundary values [Ymax, Xmax] for the mapping window are computed as:
Ymax=Ymax>>S2
Xmax=Xmax>>S2
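The boundary computations of step S6 and the subsequent odd/even correction can be transcribed as a sketch. Two readings are assumed here and are not stated explicitly in the text: S2 is taken to be a 0/1 flag (1 for a stride-2 layer, so that >>S2 halves the coordinate), and the [0] selector is taken to denote the least significant bit:

```python
def mapping_window(x, y, x_offset, y_offset, up_samp, kc, s2):
    """Sketch of step S6: compute the mapping window for a source
    coordinate (x, y). up_samp is an up-sampling shift, kc the
    kernel-dependent window extent, s2 an assumed 0/1 stride-2 flag."""
    # initial lower and upper boundary values
    y_min = y_offset + (y << up_samp)
    x_min = x_offset + (x << up_samp)
    y_max = y_min + kc
    x_max = x_min + kc
    # odd/even correction bits: least significant bits of the minima
    y_odd = y_min & 1
    x_odd = x_min & 1
    # final boundary values
    y_min = s2 * y_odd + (y_min >> s2)
    x_min = s2 * x_odd + (x_min >> s2)
    y_max >>= s2
    x_max >>= s2
    return x_min, x_max, y_min, y_max
```

With s2 = 0 the function degenerates to the initial window, as expected for a stride-1 layer.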
[0151] With these values it is determined whether or not the message has a destination within the destination processor cluster corresponding to the message generation control data, using the following function:
Hit=hitDetect(Ymin,Ymax,Xmin,Xmax,CutHeight,CutWidth);
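The body of hitDetect is not spelled out in the text. A plausible sketch, under the assumption that the window coordinates are expressed relative to the destination cluster's cut of CutWidth × CutHeight elements and that a hit means a non-empty overlap with that cut, is:

```python
def hit_detect(y_min, y_max, x_min, x_max, cut_height, cut_width):
    """Return True if the window [x_min, x_max] x [y_min, y_max]
    overlaps the destination cluster's cut of cut_width x cut_height
    elements (valid coordinates 0 .. cut-1). The overlap test is an
    assumption; the patent text does not give the function body."""
    return (y_max >= 0 and y_min < cut_height
            and x_max >= 0 and x_min < cut_width)
```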
[0152] If it is determined in step S7 that the boolean Hit was set to True in a preceding step, then in step S8 a message is prepared to be sent to the destination address. The message comprises the following information:
[0153] a) DstClsIDY, DstClsIDX, together specifying the network address of the destination processor cluster,
[0154] b) DestN, the address of the destination processor cluster element in that destination processor cluster,
[0155] c) SynType, indicating a type of operation to be performed by the destination processor cluster,
[0156] d) PatternID0+PatternID, an indication of a spatial pattern, e.g. a convolution pattern,
[0157] e) Value, a value to be used in an operation to be performed by the destination processor cluster, either for the single processor cluster element specifically referred to in the case of a non-packed message, or for a set of processor cluster elements within a range defined by the spatial pattern, typically centered around and including the destination processor cluster element designated in the message.
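The message fields a) to e) can be gathered into a simple record for illustration; the field names follow the text, while the grouping into a record and the concrete types are assumptions of this sketch:

```python
from dataclasses import dataclass

@dataclass
class Message:
    """Fields a)-e) of the message prepared in step S8."""
    dst_cls_id_y: int   # a) network address of the destination cluster (Y)
    dst_cls_id_x: int   # a) network address of the destination cluster (X)
    dest_n: int         # b) destination processor cluster element address
    syn_type: int       # c) type of operation to be performed
    pattern_id: int     # d) PatternID0 + PatternID, spatial pattern index
    value: float        # e) value used in the operation at the destination
```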
[0158] Regardless of the value of the boolean Hit, in step S9 the destination index DestInd that indicates the control word to be used is updated, and if it is determined that the processor cluster has a further destination processor cluster, the same procedure is applied from step S3 onwards for this further destination processor cluster.
EXAMPLES
[0159] Exemplary configurations of embodiments of the improved message based multi-processor system are discussed below.
Convolution with Padding.
[0160] In one example shown in
[0161] In this configuration the XY-size (W,H) of the destination feature map is the same as the XY size of source feature map. Furthermore, the values for Ymin and Xmin in the destination feature map are computed as follows:
Xmin=X−(kernelSizeX−1)/2
Ymin=Y−(kernelSizeY−1)/2
Therein kernelSizeX, kernelSizeY are the dimensions of the convolution kernel. These may be equal in value, i.e. kernelSizeX=kernelSizeY=KernelSize.
[0162]
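For illustration, the window of destination elements reached by a source element (X, Y) in a padded (same-size) convolution follows directly from the formulas above; the clipping of the window to the W × H map borders is an assumption added for the sketch (the padding region carries no processor cluster elements):

```python
def padded_window(x, y, kernel_size, w, h):
    """Destination window for a 'same' convolution: a window of size
    kernel_size is centred on (x, y) and clipped to the W x H map."""
    x_min = x - (kernel_size - 1) // 2
    y_min = y - (kernel_size - 1) // 2
    x_max = x_min + kernel_size - 1
    y_max = y_min + kernel_size - 1
    # clip to the destination feature map
    return (max(x_min, 0), min(x_max, w - 1),
            max(y_min, 0), min(y_max, h - 1))
```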
Convolution without Padding
[0163]
[0164] The XY size of the destination feature map is equal to the XY size of the source feature map minus (KernelSize−1). Furthermore:
Ymin and Xmin in the destination feature map respectively are equal to Y and X in the source minus (KernelSize−1), and
Ymax and Xmax in the destination feature map are equal to Y and X in the source.
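The unpadded case can likewise be sketched; the clipping to the reduced destination map is an assumption added for the sketch:

```python
def valid_window(x, y, kernel_size, w, h):
    """Destination window for a convolution without padding: the
    destination map is (W-K+1) x (H-K+1), and a source element (x, y)
    reaches destinations from (x-K+1, y-K+1) up to (x, y), clipped."""
    k = kernel_size - 1
    dw, dh = w - k, h - k       # destination feature-map size
    return (max(x - k, 0), min(x, dw - 1),
            max(y - k, 0), min(y, dh - 1))
```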
Transpose Convolution
[0165] Also in case of a transpose convolution, see
The XY size of the destination FM is equal to XY size of source FM plus (KernelSize−1)
The values for Ymin and Xmin in the destination feature map are equal to Y and X in the source FM.
The values for Ymax and Xmax in the destination FM are equal to Y and X in the source FM plus (KernelSize−1).
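The transpose-convolution window is the simplest of the three cases, since the destination map is enlarged and no clipping is needed; a direct transcription for illustration:

```python
def transpose_window(x, y, kernel_size, w, h):
    """Destination window for a transpose convolution: the destination
    map is (W+K-1) x (H+K-1), and a source element (x, y) reaches
    destinations from (x, y) up to (x+K-1, y+K-1), always in range."""
    k = kernel_size - 1
    return (x, x + k, y, y + k)
```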
Concatenating Feature Maps
[0166] As shown in
[0167] In one example, see
[0168] As shown further in
Splitting Feature Maps
[0169] As shown in
[0170] In the example shown in
[0171] If a processor cluster element PCE gives rise to an activation signal (representing a neuron of source FM layer 0 or 1 that fires), then PatternID0 or PatternID0+1 respectively is included in the outgoing event. This implies that in the destination cluster, PatternID0 and PatternID0+1 will direct, via a population memory, to FM-3D.0 (the green part), so that the convolution is applied only to that part.
[0172] If a neuron of source FM layer 2 or 3 fires, then PatternID0+2 or PatternID0+3 respectively is included in the outgoing event. Consequently, in the destination cluster, PatternID0+2 and PatternID0+3 will direct, via the NPM, to FM-3D.1 (the yellow part).
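The routing of the four source layers over the two destination sub-maps described in paragraphs [0171] and [0172] can be sketched as a lookup; the table contents mirror the example (layers 0 and 1 to FM-3D.0, layers 2 and 3 to FM-3D.1), while the function name, table name, and base value PATTERN_ID0 are illustrative assumptions:

```python
PATTERN_ID0 = 0  # illustrative base pattern index

# population-memory style lookup:
# pattern ID -> (destination sub-map, local layer within that sub-map)
ROUTE = {
    PATTERN_ID0 + 0: ("FM-3D.0", 0),
    PATTERN_ID0 + 1: ("FM-3D.0", 1),
    PATTERN_ID0 + 2: ("FM-3D.1", 0),
    PATTERN_ID0 + 3: ("FM-3D.1", 1),
}

def route_event(source_layer):
    """A firing neuron in source layer z puts PATTERN_ID0 + z on the
    outgoing event; the destination cluster resolves it via ROUTE."""
    return ROUTE[PATTERN_ID0 + source_layer]
```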
[0173] In some embodiments the logic module MGL of the message generator may be extended with an additional detection section that detects whether, also for the Z-coordinate, at least one of a computed minimum value Zmin and a computed maximum value Zmax is within a corresponding range SizeZ for that coordinate. Alternatively, if such a further detection section is absent and it cannot be avoided that a message is also transmitted to destination processor clusters that would otherwise be excluded, these otherwise excluded destination processor clusters may apply a zero-operation pattern, i.e. a pattern having a single zero-value weight, so that effectively the processor cluster elements in that pattern are not affected by the message, as if the message had not been directed to the otherwise excluded destination processor cluster at all.
[0174] In the example shown in
Depth-Wise Convolution
[0175]
[0176] Hence, each destination simply is one 2D FM and the Pattern-IDs for the 2D FM are sorted. This implies that the 3D FM is simply split into parallel 2D FMs merged with a convolution (see preceding examples on splitting FMs). In order to reduce memory and processing capacity requirements, the 2D FMs may be merged to one 3D FM (see the examples of
Adding Feature Maps
[0177] ResNet requires a 1×1 convolution depth-first for each feature map with a kernel of [w(0,0)=1], pointing to the same destination FM (See
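The residual addition can be expressed in event terms: each source element of both feature maps sends its value through a 1×1 kernel with weight w(0,0)=1 to the same destination element, so the destination accumulates the elementwise sum. A minimal sketch for illustration (the dense accumulation loop stands in for the event-driven delivery):

```python
def add_feature_maps(fm_a, fm_b):
    """ResNet-style skip addition as two 1x1 convolutions with kernel
    [w(0,0)=1] targeting the same destination FM: every event deposits
    value * 1.0 at the identical (x, y), so the destination holds a+b."""
    h, w = len(fm_a), len(fm_a[0])
    dest = [[0.0] * w for _ in range(h)]
    for src in (fm_a, fm_b):            # one 1x1 "convolution" per source FM
        for y in range(h):
            for x in range(w):
                dest[y][x] += src[y][x] * 1.0   # kernel weight w(0,0) = 1
    return dest
```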
Combining Flattening with Preceding Convolution
[0178]
N_ID − FMStart = FMsizeY·FMsizeZ·X + FMsizeZ·Y + Z
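The flattening formula maps a 3D feature-map coordinate to a linear neuron index (X-major, then Y, then Z). A direct transcription, with the fm_start default an illustrative assumption:

```python
def flatten_index(x, y, z, fm_size_y, fm_size_z, fm_start=0):
    """Linear neuron index of element (x, y, z) in a flattened FM:
    N_ID - FMStart = FMsizeY*FMsizeZ*X + FMsizeZ*Y + Z."""
    return fm_start + fm_size_y * fm_size_z * x + fm_size_z * y + z
```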
Average Pooling
[0179] As shown in
Example 3×3 Convolution after Average Pooling
[0180] By way of example,
[0181] Hence, as shown in
ΔXmin = KernelSize−1 = 2
ΔYmin = KernelSize−1 = 2
KC = 2·KernelSize−1 = 5
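The merging claim can be checked numerically: 2×2 average pooling followed by a 3×3 convolution equals a single stride-2 pass in which every input pixel (X, Y) scatters w(i, j)/4 to destinations (⌊X/2⌋−i, ⌊Y/2⌋−j), so the destination window of one input pixel spans KernelSize positions per axis, consistent with ΔXmin = ΔYmin = 2 above. A small self-check for illustration (plain Python; a valid, unpadded convolution is assumed for the sketch):

```python
def pool_then_conv(img, w):
    """Reference: 2x2 average pooling followed by a 3x3 valid convolution."""
    m = len(img) // 2
    pooled = [[(img[2*u][2*v] + img[2*u][2*v+1]
                + img[2*u+1][2*v] + img[2*u+1][2*v+1]) / 4.0
               for v in range(m)] for u in range(m)]
    n = m - 2
    return [[sum(w[i][j] * pooled[x+i][y+j]
                 for i in range(3) for j in range(3))
             for y in range(n)] for x in range(n)]

def merged_scatter(img, w):
    """Merged form: each input pixel scatters w[i][j]/4 to the output at
    (X//2 - i, Y//2 - j), i.e. one event-driven stride-2 pass without an
    intermediate averaged layer."""
    n = len(img) // 2 - 2
    out = [[0.0] * n for _ in range(n)]
    for X in range(len(img)):
        for Y in range(len(img)):
            u, v = X // 2, Y // 2
            for i in range(3):
                for j in range(3):
                    x, y = u - i, v - j
                    if 0 <= x < n and 0 <= y < n:
                        out[x][y] += w[i][j] * img[X][Y] / 4.0
    return out
```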
A more formal proof is provided in Annex 1:
TABLE-US-00002 ANNEX 1: Proof that average pooling can be merged with a subsequent convolution using a stride. The initial/input layer is symbolized as I ∈ BF^(2m×2m), where BF indicates a 16-bit Brain Floating Point number. Moreover, the intermediate averaged layer and the output layer (after the subsequent convolution) are symbolized as A ∈ BF^(m×n) and O ∈ BF^(m×n) (without loss of generality, a zero-padded convolution is assumed here). Since the output O is generated out of A by applying a convolution with a KS × KS kernel (weights: w_(i,j)), the following formula expresses the output layer. [The remainder of the derivation, including Eq. (6) with floor indices ⌊i/2⌋, ⌊j/2⌋ and a factor 1/4, is marked as missing or illegible in the filing.]