NEURAL AMPLIFIER, NEURAL NETWORK AND SENSOR DEVICE
20230013459 · 2023-01-19
Abstract
A differential switched capacitor neural amplifier comprises a sampling stage (SMP) with a plurality of differential inputs for receiving a plurality of input voltages and with at least one pair of digitally adjustable charge stores for sampling the plurality of input voltages, a summation stage (SM) for summing up charges resulting from the sampled plurality of input voltages in order to generate a summation signal, the summation stage (SM) being connected downstream to the sampling stage (SMP), and a buffer and activation stage (ACB) that is configured to apply an activation function and to generate a buffered output voltage at a differential output, based on the summation signal.
Claims
1. A differential switched capacitor neural amplifier, in particular for usage in an analog artificial neural network, the neural amplifier comprising a sampling stage with a plurality of differential inputs for receiving a plurality of input voltages and with at least one pair of digitally adjustable charge stores for sampling the plurality of input voltages; a summation stage for summing up charges resulting from the sampled plurality of input voltages in order to generate a summation signal, the summation stage being connected downstream to the sampling stage; and a buffer and activation stage that is configured to apply an activation function and to generate a buffered output voltage at a differential output, based on the summation signal.
2. The neural amplifier according to claim 1, wherein a number of the differential inputs corresponds to a number of pairs of the digitally adjustable charge stores.
3. The neural amplifier according to claim 1, wherein the sampling stage comprises at least one multiplexer for selectively connecting the plurality of differential inputs to the at least one pair of digitally adjustable charge stores.
4. The neural amplifier according to claim 3, wherein a number of the multiplexers corresponds to a number of pairs of the digitally adjustable charge stores.
5. The neural amplifier according to claim 3, wherein the summation stage comprises a differential integrating amplifier with a pair of integrating charge stores in a differential feedback path of the integrating amplifier, the neural amplifier further comprising for each of the at least one multiplexers a first differential chopping block coupled between an output of the respective multiplexer and the connected pair of charge stores; a second differential chopping block coupling a first end of the feedback path to an input side of the integrating amplifier; and a third differential chopping block coupling a second end of the feedback path to an output side of the integrating amplifier.
6. The neural amplifier according to claim 5, wherein the differential integrating amplifier of the summation stage comprises switching circuitry for selectively charging the pair of integrating charge stores with a first offset voltage at the input side of the integrating amplifier and a second offset voltage at an input side of the buffer and activation stage, in particular such that during a summation an offset of the integrating amplifier at the output side of the integrating amplifier is removed and an offset of the buffer and activation stage is applied to compensate the offset of the buffer and activation stage.
7. The neural amplifier according to claim 5, wherein the buffer and activation stage comprises a buffer stage with a differential capacitive amplifier with a further pair of charge stores in a further differential feedback path of the capacitive amplifier.
8. The neural amplifier according to claim 7, wherein the activation function is implemented by limiting a supply voltage of the capacitive amplifier and/or the buffer stage.
9. The neural amplifier according to claim 7, wherein the buffer and activation stage further comprises a clipping stage connected upstream or downstream the buffer stage, and wherein the activation function is implemented by the clipping stage.
10. The neural amplifier according to claim 9, wherein the clipping stage is connected downstream the buffer stage; and is configured to compare a differential voltage at an output of the buffer stage to a differential reference voltage; to output the differential reference voltage at the differential output if the differential voltage at the output of the buffer stage exceeds the differential reference voltage either in a positive or a negative direction; and to output, at the differential output, the differential voltage at the output of the buffer stage otherwise.
11. The neural amplifier according to claim 2, wherein the summation stage comprises a differential integrating amplifier with a pair of integrating charge stores in a differential feedback path of the integrating amplifier and with a pair of double sampling charge stores switchably connected downstream the integrating amplifier, wherein the neural amplifier is configured to sample a zero input signal on the pair of double sampling charge stores during a first double sampling phase, in particular by setting the at least one pair of digitally adjustable charge stores to a zero value; and to provide the charges resulting from the sampled zero input signal to the buffer and activation stage together with charges stored on the pair of integrating charge stores.
12. The neural amplifier according to claim 1, wherein each digitally adjustable charge store of the at least one pair of digitally adjustable charge stores comprises a first and a second charging terminal and a plurality of weighted charge stores, each having a first end connected to the first charging terminal and a second end selectively connected to the second charging terminal or to a common mode terminal depending on a digital adjustment word.
13. The neural amplifier according to claim 1, further comprising a control circuit for controlling a switched capacitor function of the neural amplifier and/or for adjusting the at least one pair of digitally adjustable charge stores.
14. The neural amplifier according to claim 1, wherein the summation stage generates the summation signal in the analog domain as an analog summation signal.
15. An analog artificial neural network, in particular recurrent neural network, comprising a plurality of neural amplifiers according to claim 1, wherein the differential output of at least one of the neural amplifiers is connected to one of the differential inputs of the same or another one of the neural amplifiers.
16. A sensor device comprising one or more sensors and an analog artificial neural network according to claim 15, wherein output signals of the one or more sensors are provided to at least one of the neural amplifiers.
17. A differential switched capacitor neural amplifier for usage in an analog artificial neural network, the neural amplifier comprising: a sampling stage with a plurality of differential inputs for receiving a plurality of differential input voltages and with at least one pair of digitally adjustable charge stores for sampling the plurality of differential input voltages; a summation stage for summing up charges resulting from the sampled plurality of input voltages in order to generate a summation signal, the summation stage being connected downstream to the sampling stage and comprising a differential integrating amplifier with a pair of integrating charge stores in a differential feedback path of the integrating amplifier; and a buffer and activation stage that is configured to apply an activation function and to generate a buffered output voltage at a differential output, based on the summation signal.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0037] The improved concept will be described in more detail below for several embodiments with reference to the drawings. Identical reference numerals designate signals, elements or components with identical functions. If signals, elements or components correspond to one another in function, a description of them will not necessarily be repeated in each of the following figures.
DETAILED DESCRIPTION
[0058] For example, a neural network is a cascade of neuron layers that are interconnected.
[0059] As mentioned before, neural networks with a large number of neurons and high interconnectivity need to perform a vast number of MAC operations. Today, neural networks are mostly implemented digitally, thus requiring a considerable amount of computing power. In contrast, an analog MAC operation is in principle a one-shot operation. Whereas values in the digital domain are represented by a number of bits, in the analog domain only a single storage unit is required to hold a value, independent of the resolution. Hence, there is increasing effort to shift MAC operations into the analog domain, opening the field of analog neural networks. Analog neural networks do not rely on sub-nanometer technology nodes to achieve competitive performance. Speed is achieved by leveraging analog properties, which do not scale well with technology. This supports implementation in older, low-cost and analog-optimized technologies. Analog neural networks are therefore an attractive option for co-integration with, for example, analog sensor readout circuits.
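To make the MAC terminology concrete, the following is a minimal behavioral sketch (not part of the patent) of the multiply-accumulate operation a single neuron performs. In the analog amplifier this weighted sum is formed in one shot as charge on capacitors; a digital implementation would iterate multiplies and adds:

```python
# Behavioral sketch of a neuron's multiply-accumulate (MAC) operation.
# The analog amplifier realizes this sum in a single charge-transfer step;
# function and variable names here are illustrative only.
def neuron_mac(inputs, weights):
    """Weighted sum of inputs: the core operation of one neuron."""
    assert len(inputs) == len(weights)
    return sum(v * w for v, w in zip(inputs, weights))

acc = neuron_mac([0.5, -0.2, 0.1], [1.0, 0.5, -2.0])  # 0.5 - 0.1 - 0.2
```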
[0060] Implementing an analog neuron for a recurrent neural network requires an amplifier that can sum its inputs while holding the previous value and driving other neuron inputs at the same time. Performance can be further increased by achieving a low offset and gain error, which prevents accumulation of errors over successive cycles. For example, in recurrent neural networks, results are fed back to prior neurons by respective recurrent paths, as indicated in
[0061] In the following, several example implementations of an analog neural amplifier according to the improved concept will be described that are suitable for an efficient implementation of an analog neural network with or without recurrent paths. The improved concept enables an analog neuron implementation with differential signal processing and a switched capacitor approach, which reduces effects of charge injection, thus improving the precision of an analog neuron and consequently of an analog neural network implemented with such neurons. Performance may be further improved by including a switch charge injection and/or amplifier offset cancellation scheme. In summary, a high number of neurons can be connected to a single summing node even in a recurrent operation without significant offset accumulation. Furthermore, by making offset errors and gain errors negligible, corresponding drifts over PVT are not a concern. Consequently, periodic retraining or calibration is not necessary.
[0062]
[0063] Second terminals of the charge stores C.sub.sia, C.sub.sib are coupled to the common mode terminal V.sub.CM via further respective switches S.sub.1a, S.sub.1b, and further to the summation stage SM via respective switches S.sub.4a, S.sub.4b. While the pair of charge stores C.sub.sia, C.sub.sib and the corresponding switches S.sub.2a, S.sub.2b, S.sub.3a, S.sub.3b are present multiple times in the sampling stage SMP, i.e. n times, switches S.sub.1a, S.sub.1b, S.sub.4a, S.sub.4b may be common to all such sampling structures and provided only once, however, without excluding the possibility of a multiple presence.
[0064] The charge stores C.sub.sia, C.sub.sib are digitally adjustable, in particular for setting a respective weight for the associated input V.sub.ini.sup.+, V.sub.ini.sup.−, at which a differential input voltage can be received.
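A simple behavioral model may help here. Assuming (as claim 12 and paragraph [0094] suggest) that each adjustable charge store is a capacitor DAC of unit capacitors, where selected units sample the input and deselected units are tied to the common-mode terminal, the sampled signal charge scales with the adjustment word. All names and values below are illustrative:

```python
# Hypothetical behavioral model of a digitally adjustable charge store
# (capacitor DAC): unit capacitors selected by the adjustment word sample
# the input; deselected units are tied to V_CM and sample zero signal charge.
def sampled_charge(v_in, word, n_adj=4, c_unit=1e-15):
    """Signal charge sampled for a given digital adjustment word (weight)."""
    assert 0 <= word < 2 ** n_adj
    return word * c_unit * v_in  # selected units contribute Q = word*C_unit*V_in

q = sampled_charge(0.3, word=5)  # weight 5 out of 15 possible steps
```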
[0065] The summation stage SM for example comprises an amplifier, for example an operational transconductance amplifier, OTA, with a pair of integrating charge stores C.sub.fb1a, C.sub.fb1b in a feedback path of the integrating amplifier. Respective switches are connected in parallel to the integrating charge stores C.sub.fb1a, C.sub.fb1b for resetting them. The summation stage operates in the analog domain, such that particularly no conversion or operation in the digital domain is required and an analog summation signal is output.
[0066] Downstream to the summation stage SM the buffer and activation stage ACB is connected that is configured to apply an activation function and to generate a buffered output voltage V.sub.out.sup.+, V.sub.out.sup.− at the differential output, based on a summation signal generated in the summation stage SM.
[0067]
[0068] During the high times of switching signals φ.sub.2 and slightly delayed φ.sub.2D the respective first terminals of the adjustable charge stores are connected to the common mode terminal V.sub.CM while the second terminals are connected to the summation stage via switches S.sub.4a, S.sub.4b. This results in the summation stage summing up the charges resulting from the sampled plurality of input voltages on the respective pairs of adjustable charge stores in order to generate the summation signal. The differential approach reduces the effects of charge injection resulting from the different switches.
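The charge summation at the integrator's virtual ground can be sketched behaviorally as follows. This is an assumed, idealized transfer function (sampled charges pushed onto a feedback capacitor), not the patent's exact circuit equations:

```python
# Idealized sketch of the summing phase: each sampling capacitor C_i transfers
# its charge C_i * V_i onto the feedback capacitor C_fb of the integrating
# amplifier, so the output is the weighted sum scaled by 1/C_fb.
def summation_output(v_inputs, c_samples, c_fb):
    q_total = sum(c * v for c, v in zip(c_samples, v_inputs))
    return q_total / c_fb  # charge conservation at the virtual ground

v_sum = summation_output([0.1, -0.2], [2e-15, 4e-15], c_fb=8e-15)
```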
[0069]
[0070] The implementation of
[0071] Referring back to
[0072] In practice, routing complexity increases with the number of differential inputs and with the weight resolution n.sub.adj. In order to obtain a routing complexity of O(n), multiplexing of the differential neural inputs may be performed, such that, for example, different differential input voltages are sampled and summed in subsequent phases. This also means that the pairs of digitally adjustable charge stores, or capacitor DACs, are reused for several differential inputs.
[0073] Referring now to
[0074] In this example implementation, n.sub.x inputs are multiplexed to one pair of adjustable charge stores C.sub.sia, C.sub.sib, thereby reducing the routing complexity. It should be noted that the number of parallel sampling structures is therefore reduced to n/n.sub.x compared to n sampling structures in
[0075] Referring now to
[0076] Consequently, routing complexity is traded against conversion time. Due to the multiphase conversion, the summation signal provided by the summation stage, and therefore also the buffered output voltage, is not available for driving the differential inputs of other neuron amplifiers during consecutive cycles. Therefore, the summation signal of the summation stage is sampled by the buffer and activation stage ACB after the last summing phase. The buffered output voltage can then drive the differential inputs of other neural amplifiers or one of its own differential inputs during the next recurrent cycle.
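The multiphase trade-off can be illustrated with a short sketch. The structure is assumed from the description above: n inputs are served by n/n.sub.x parallel sampling structures, each multiplexing n.sub.x inputs over n.sub.x consecutive summing phases, while the integrator accumulates across phases:

```python
# Illustrative sketch of multiplexed conversion (structure assumed): the
# integrator accumulates one input per sampling structure per phase; the
# buffer would sample the total only after the final phase.
def multiplexed_sum(v_inputs, weights, n_x):
    assert len(v_inputs) == len(weights) and len(v_inputs) % n_x == 0
    acc = 0.0
    for phase in range(n_x):                     # n_x consecutive phases
        for i in range(phase, len(v_inputs), n_x):
            acc += weights[i] * v_inputs[i]      # one input per structure
    return acc  # conversion time grows with n_x, routing complexity shrinks

total = multiplexed_sum([0.1, 0.2, -0.1, 0.4], [1.0, 1.0, 2.0, 1.0], n_x=2)
```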
[0077] The differential structure significantly reduces charge injection errors even for a high number of input connections to the neural amplifier. However, residual charge injection errors may remain, e.g. originating from offset errors that may sum up to a non-negligible amount, which may be further accumulated in a recurrent operation mode, depending on the number of differential inputs of a single neural amplifier and the number of neurons employed in the neural network.
[0078] Referring now to
[0079] During operation of the neural amplifier, this can be implemented by deselecting all units of the capacitor DACs, e.g. by connecting them to the common mode terminal V.sub.CM, thus effectively sampling a zero signal. In other words, a zero weight may be selected for the adjustable charge stores during this phase. The corresponding neural amplifier output is thus equivalent to its output offset and can be subtracted from the actual neural amplifier output obtained with the neural input signals. However, because the neural amplifier output is analog, this subtraction cannot be performed in the digital domain and is instead performed during the charge transfer to the buffer. This requires the additional double sampling charge stores C.sub.CDSa, C.sub.CDSb at the summation amplifier output to hold the zero input signal summation outputs during the consecutive neural input conversion.
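The correlated double sampling idea reduces to a simple subtraction, sketched below behaviorally (values are illustrative, not from the patent):

```python
# Behavioral sketch of correlated double sampling (CDS): first convert with
# all weights set to zero, which yields only the amplifier offset; then
# subtract that held value from the conversion with the real inputs.
def cds_output(mac_with_offset, offset_only):
    return mac_with_offset - offset_only

v_off = 0.013                 # output with zero weights: pure offset
v_raw = 0.2 + v_off           # real conversion, offset included
v_corrected = cds_output(v_raw, v_off)  # offset cancels ideally
```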
[0080] However, one issue with correlated double sampling is the reduction in conversion rate by a factor of 2. Moreover, subtraction of the offset in the analog domain may introduce additional error sources. Referring now to
[0081] Referring now to
[0082] For example, the first chopping block ch1 is provided in each parallel sampling structure between the multiplexer MUX and the connected pair of adjustable charge stores C.sub.sia, C.sub.sib. Furthermore, a second differential chopping block ch2 is implemented in the summation stage SM and couples the first end of the differential feedback path including integrating charge stores C.sub.fb1a, C.sub.fb1b to an input side of the integrating amplifier. Similarly, a third differential chopping block ch3 couples the second end of the differential feedback path to an output side of the integrating amplifier.
[0083] The chopping blocks ch1, ch2, ch3 are controlled by a chopping control signal φ.sub.chop and either connect the differential paths directly between their input and output sides or cross-connect the differential paths, which corresponds to an inversion of the differential signal. If the chopping phases are distributed equally over the various switching phases, chopping can cancel out any residual offsets from all input sampling switches, allowing for a nearly arbitrary number of differential inputs.
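The offset cancellation by chopping can be shown in a few lines. This is a generic behavioral model of chopping (not the patent's switch-level circuit): the signal path is inverted in half of the phases while a constant offset is not, so averaging over equally distributed chopping phases removes the offset:

```python
# Behavioral sketch of chopping: the differential signal is inverted
# (cross-connected) in every other phase, de-chopped at the output, and the
# constant offset averages to zero over an even number of phases.
def chopped_average(signal, offset, n_phases=4):
    total = 0.0
    for k in range(n_phases):
        chop = -1.0 if k % 2 else 1.0             # cross-connect alternately
        total += chop * (chop * signal + offset)  # offset enters un-chopped
    return total / n_phases

out = chopped_average(0.25, offset=0.05)  # offset cancels, signal survives
```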
[0084] Referring now to
[0085] The effectiveness of the chopping scheme is further supported in the context of the neural amplifier if the total equivalent offset, which is the sum of the individual neuron input offsets and the offset of the integrating amplifier, is constant and thus independent of the individual neuron input weights controlling the digitally adjustable charge stores in all phases. For example, referring back to
[0086] Despite chopping, accuracy of the neural amplifier may be further increased, if made necessary by the respective application, for example by the complexity of the neural network. For example, there may be an output offset at an output of the summation stage SM after the last summation phase φ.sub.2, i.e. the last input voltage has been weighted and summed up, unless the summation stage SM itself is offset compensated.
[0087] Referring now to
[0088] In the summation stage SM, a switching pair of switches S.sub.5a, S.sub.5b is introduced which are controlled by switching signal φ.sub.4xn and connect the differential input of the integrating amplifier OTA1 via the second chopping block ch2 to a first end of integrating charge stores C.sub.fb1a, C.sub.fb1b. Switches S.sub.6a, S.sub.6b, being controlled by switching signals φ.sub.4DD, correspond to the reset switch of
[0089] The buffer stage BUF comprises a further pair of charge stores C.sub.fb2a, C.sub.fb2b having a first end connected to the differential input of the capacitive amplifier OTA2. A second end of the charge stores C.sub.fb2a, C.sub.fb2b is connected to the common mode terminal V.sub.CM via switches S.sub.8a, S.sub.8b controlled by switching signal φ.sub.3 and to the differential output terminals of the buffer stage BUF via switches S.sub.9a, S.sub.9b controlled by switching signals φ.sub.3DDn. Input and output of the amplifier OTA2 are connected by respective switches S.sub.10a, S.sub.10b being controlled by switching signals φ.sub.3D. A differential buffered output voltage V.sub.out_buf+, V.sub.out_buf− is provided at the differential output of the amplifier OTA2.
[0090]
[0091] Similarly, switching signals φ.sub.4xn, φ.sub.4D and φ.sub.4DD correspond to a phase for charge transfer to buffer and offset sampling, which will also be explained in more detail below.
[0092] Hence, as can be seen from
[0093] Referring now to
[0094] As mentioned before, unselected unit capacitors may be connected to the common mode terminal V.sub.CM, thus sampling zero signal charge but still introducing charge injection and offset charge of the first integrating amplifier OTA1. This can make the total input offset independent of any weights, respectively adjustment words. Thus, it is cancelled by chopping. As the switching pair S.sub.2a, S.sub.2b is driven by the delayed clock φ.sub.1D, it does not contribute to charge injection offset. Moreover, the first chopping block ch1 does not contribute since it is switched during the non-overlap time of φ.sub.1 and φ.sub.2, such that no charges can be transferred from the switching process in the chopping block ch1. With respect to the second chopping block ch2, there may be a charge injection contribution, as charge remains trapped on the internal nodes n1a, n1b, to which the second chopping block ch2 is connected. However, this chopping block ch2 only toggles once during all summation phases, making its contribution small and negligible.
[0095] Referring now to
Q.sub.off = C.sub.s_total · V.sub.off1
[0096] As unselected unit sample capacitors of the adjustable charge store are not kept floating but connected to the common mode terminal V.sub.CM, a total sample capacitance seen during the charge transfer phase φ.sub.2 is constant and thus Q.sub.off is effectively cancelled by chopping. Furthermore, switches S.sub.4a, S.sub.4b add charge injection which is cancelled by chopping too. Switches S.sub.3a, S.sub.3b do not contribute charge injection due to the delayed switching signal φ.sub.2D.
[0097] Referring now to
[0098] Referring now to
[0099] However, there may be some charge injection from switches S.sub.5a, S.sub.5b. As these switches S.sub.5a, S.sub.5b always remain at a virtual ground potential, this charge is not signal-dependent and only results in some residual offset, if any. Furthermore, as this charge is only added once per conversion, its impact would still be small. The implementation of the neural amplifier according to
[0100] Moreover, there is no signal-dependent charge injection leaking to the output, making the gain error solely dependent on an open loop gain of the amplifiers and on the capacitor-matching of C.sub.fb1a, C.sub.fb1b, C.sub.fb2a, C.sub.fb2b and C.sub.sia, C.sub.sib.
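To give a feel for the residual gain error, the following is an illustrative sketch of the standard closed-loop gain expression for a capacitive amplifier with finite open-loop gain A; the formula and numbers are generic textbook values, not taken from the patent:

```python
# Illustrative sketch: closed-loop gain of a capacitive amplifier with ideal
# gain C_s/C_fb and finite open-loop gain A. The relative gain error is
# approximately 1/(A * beta), where beta is the feedback factor.
def closed_loop_gain(c_s, c_fb, a_ol):
    ideal = c_s / c_fb
    beta = c_fb / (c_s + c_fb)              # capacitive feedback factor
    return ideal / (1.0 + 1.0 / (a_ol * beta))

g = closed_loop_gain(1e-12, 1e-12, a_ol=1e5)  # error on the order of 2e-5
```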
[0101] In various implementations, a contribution of the amplifiers, in particular if implemented as OTAs, can be made small by using a high gain topology, as shown for example in
[0102]
[0103] As mentioned before, the buffer and activation stage ACB further implements an activation function, which can be a clipping function. Clipping may be accomplished by limiting a supply voltage of the capacitive amplifier OTA2 and/or the buffer stage BUF itself. However, clipping can also be implemented by a dedicated clipping stage.
[0104] Referring now to
[0105] Otherwise, the reference voltages V.sub.ref.sup.+, V.sub.ref.sup.− will be used as the output voltages V.sub.out.sup.+, V.sub.out.sup.−.
[0106] As the clipping function must be applied in both the positive and the negative direction, clipping is performed in two steps, reusing the same comparator and employing a chopping block controlled by a control signal φ.sub.chop_clip. In particular, clipping is first checked in the positive range by comparing to the positive reference V.sub.ref.sup.+ while, with reference to the example diagram of
[0107] In the case of no positive clipping, the reference is flipped by setting the control signal φ.sub.chop_clip to 1 for a comparison against the negative reference using the same comparator. If negative clipping is detected, the negative reference is directed to the output, otherwise the buffer output V.sub.out_buf.sup.+, V.sub.out_buf.sup.− is used.
[0108] The actual comparison is performed by precharging the capacitances in front of the comparator with the reference voltages and subsequently applying the buffered output voltages V.sub.out_buf.sup.+, V.sub.out_buf.sup.− to the sampled voltage in order to detect whether these are higher or lower than the precharged voltages.
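The two-step clipping decision described above can be summarized behaviorally. This sketch only models the decision logic (positive check first, then the flipped reference for the negative check); the comparator precharge mechanics and all names are assumptions:

```python
# Behavioral sketch of the two-step clipping activation: the same comparator
# is reused, first against the positive reference, then with the reference
# flipped for the negative check; otherwise the buffered voltage passes.
def clip_two_step(v_buf, v_ref):
    if v_buf > v_ref:        # step 1: positive clipping detected
        return v_ref
    if v_buf < -v_ref:       # step 2: reference flipped, negative clipping
        return -v_ref
    return v_buf             # no clipping: pass the buffer output

outs = [clip_two_step(v, 0.5) for v in (-0.9, 0.2, 0.7)]
```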
[0109] As mentioned before, an alternative implementation of clipping is to supply the buffer output stage by the reference, so that the buffer inherently clips the output to the desired levels. This may have the effect that the same clipping levels apply to all neural amplifiers, if the references of all neural amplifiers are supplied by a common voltage regulator, for example. This eliminates clipping threshold shift due to comparator offset. However, supply-based clipping cannot achieve hard clipping; instead it is soft and resembles a logistic activation function.
[0110] With respect to the various implementations of the neural amplifier described above, a low offset and gain error can be achieved compared to conventional neural amplifier approaches, in particular for a high number of neuron inputs, by applying, for example, the described circuit techniques in a fully differential neural amplifier. The reduction in circuit errors results in fewer concerns with respect to drift. Furthermore, periodic recalibration is not required. Specific implementations with the offset-compensated buffer stage, for example described in conjunction with
[0111] Multiple instances of a neural amplifier as described above can be used to form a neural network, as for example described in conjunction with
[0112] Referring now to
[0113] Training of the neural network can be performed online, i.e. during operation of the network, offline, e.g. by simulating the neural network in order to determine the respective weight factors, or even a combination of an offline training with a subsequent online calibration, for example. Other implementations are not excluded by these examples.