NEURAL AMPLIFIER, NEURAL NETWORK AND SENSOR DEVICE

20230013459 · 2023-01-19

Abstract

A differential switched capacitor neural amplifier comprises a sampling stage (SMP) with a plurality of differential inputs for receiving a plurality of input voltages and with at least one pair of digitally adjustable charge stores for sampling the plurality of input voltages, a summation stage (SM) for summing up charges resulting from the sampled plurality of input voltages in order to generate a summation signal, the summation stage (SM) being connected downstream to the sampling stage (SMP), and a buffer and activation stage (ACB) that is configured to apply an activation function and to generate a buffered output voltage at a differential output, based on the summation signal.

Claims

1. A differential switched capacitor neural amplifier, in particular for usage in an analog artificial neural network, the neural amplifier comprising a sampling stage with a plurality of differential inputs for receiving a plurality of input voltages and with at least one pair of digitally adjustable charge stores for sampling the plurality of input voltages; a summation stage for summing up charges resulting from the sampled plurality of input voltages in order to generate a summation signal, the summation stage being connected downstream to the sampling stage; and a buffer and activation stage that is configured to apply an activation function and to generate a buffered output voltage at a differential output, based on the summation signal.

2. The neural amplifier according to claim 1, wherein a number of the differential inputs corresponds to a number of pairs of the digitally adjustable charge stores.

3. The neural amplifier according to claim 1, wherein the sampling stage comprises at least one multiplexer for selectively connecting the plurality of differential inputs to the at least one pair of digitally adjustable charge stores.

4. The neural amplifier according to claim 3, wherein a number of the multiplexers corresponds to a number of pairs of the digitally adjustable charge stores.

5. The neural amplifier according to claim 3, wherein the summation stage comprises a differential integrating amplifier with a pair of integrating charge stores in a differential feedback path of the integrating amplifier, the neural amplifier further comprising for each of the at least one multiplexers a first differential chopping block coupled between an output of the respective multiplexer and the connected pair of charge stores; a second differential chopping block coupling a first end of the feedback path to an input side of the integrating amplifier; and a third differential chopping block coupling a second end of the feedback path to an output side of the integrating amplifier.

6. The neural amplifier according to claim 5, wherein the differential integrating amplifier of the summation stage comprises switching circuitry for selectively charging the pair of integrating charge stores with a first offset voltage at the input side of the integrating amplifier and a second offset voltage at an input side of the buffer and activation stage, in particular such that during a summation an offset of the integrating amplifier at the output side of the integrating amplifier is removed and an offset of the buffer and activation stage is applied to compensate the offset of the buffer and activation stage.

7. The neural amplifier according to claim 5, wherein the buffer and activation stage comprises a buffer stage with a differential capacitive amplifier with a further pair of charge stores in a further differential feedback path of the capacitive amplifier.

8. The neural amplifier according to claim 7, wherein the activation function is implemented by limiting a supply voltage of the capacitive amplifier and/or the buffer stage.

9. The neural amplifier according to claim 7, wherein the buffer and activation stage further comprises a clipping stage connected upstream or downstream the buffer stage, and wherein the activation function is implemented by the clipping stage.

10. The neural amplifier according to claim 9, wherein the clipping stage is connected downstream the buffer stage; and is configured to compare a differential voltage at an output of the buffer stage to a differential reference voltage; to output the differential reference voltage at the differential output if the differential voltage at the output of the buffer stage exceeds the differential reference voltage either in a positive or a negative direction; and to output, at the differential output, the differential voltage at the output of the buffer stage otherwise.

11. The neural amplifier according to claim 2, wherein the summation stage comprises a differential integrating amplifier with a pair of integrating charge stores in a differential feedback path of the integrating amplifier and with a pair of double sampling charge stores switchably connected downstream the integrating amplifier, wherein the neural amplifier is configured to sample a zero input signal on the pair of double sampling charge stores during a first double sampling phase, in particular by setting the at least one pair of digitally adjustable charge stores to a zero value; and to provide the charges resulting from the sampled zero input signal to the buffer and activation stage together with charges stored on the pair of integrating charge stores.

12. The neural amplifier according to claim 1, wherein each digitally adjustable charge store of the at least one pair of digitally adjustable charge stores comprises a first and a second charging terminal and a plurality of weighted charge stores, each having a first end connected to the first charging terminal and a second end selectively connected to the second charging terminal or to a common mode terminal depending on a digital adjustment word.

13. The neural amplifier according to claim 1, further comprising a control circuit for controlling a switched capacitor function of the neural amplifier and/or for adjusting the at least one pair of digitally adjustable charge stores.

14. The neural amplifier according to claim 1, wherein the summation stage generates the summation signal in the analog domain as an analog summation signal.

15. An analog artificial neural network, in particular recurrent neural network, comprising a plurality of neural amplifiers according to claim 1, wherein the differential output of at least one of the neural amplifiers is connected to one of the differential inputs of the same or another one of the neural amplifiers.

16. A sensor device comprising one or more sensors and an analog artificial neural network according to claim 15, wherein output signals of the one or more sensors are provided to at least one of the neural amplifiers.

17. A differential switched capacitor neural amplifier for usage in an analog artificial neural network, the neural amplifier comprising: a sampling stage with a plurality of differential inputs for receiving a plurality of differential input voltages and with at least one pair of digitally adjustable charge stores for sampling the plurality of differential input voltages; a summation stage for summing up charges resulting from the sampled plurality of input voltages in order to generate a summation signal, the summation stage being connected downstream to the sampling stage and comprising a differential integrating amplifier with a pair of integrating charge stores in a differential feedback path of the integrating amplifier; and a buffer and activation stage that is configured to apply an activation function and to generate a buffered output voltage at a differential output, based on the summation signal.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0037] The improved concept will be described in more detail below for several embodiments with reference to the drawings. Identical reference numerals designate signals, elements or components with identical functions. If signals, elements or components correspond to one another in function, a description of them will not necessarily be repeated in each of the following figures.

[0038] In the drawings:

[0039] FIG. 1 shows an example implementation of an analog neural amplifier;

[0040] FIG. 2 shows an example implementation of a neural network;

[0041] FIG. 3 shows an example implementation of a neural amplifier according to the improved concept;

[0042] FIG. 4 shows an example diagram of control signals that can be applied to the neural amplifier according to FIG. 3;

[0043] FIG. 5 shows an example implementation of a digitally adjustable charge store;

[0044] FIG. 6 shows an example implementation of a sampling stage of a neural amplifier;

[0045] FIG. 7 shows an example diagram of control signals that can be applied to a neural amplifier implemented according to FIG. 6;

[0046] FIG. 8 shows a further example implementation of a neural amplifier according to the improved concept;

[0047] FIG. 9 shows an example diagram of control signals that can be applied to the neural amplifier according to FIG. 8;

[0048] FIG. 10 shows a further example implementation of a neural amplifier according to the improved concept;

[0049] FIG. 11 shows an example diagram of control signals that can be applied to the neural amplifier according to FIG. 10;

[0050] FIG. 12 shows a further example implementation of a neural amplifier according to the improved concept;

[0051] FIG. 13 shows an example diagram of control signals that can be applied to the neural amplifier according to FIG. 12;

[0052] FIG. 14A to 14D show several example phases to be applied to a neural amplifier according to the improved concept;

[0053] FIG. 15 shows an example implementation of an operational transconductance amplifier usable in a neural amplifier;

[0054] FIG. 16 shows an example implementation of a clipping stage usable in a neural amplifier;

[0055] FIG. 17 shows an example diagram of control signals that can be applied to the clipping stage according to FIG. 16; and

[0056] FIG. 18 shows an example implementation of a sensor device with an analog artificial neural network.

DETAILED DESCRIPTION

[0057] FIG. 1 shows an example implementation of an analog neural amplifier with a plurality of inputs in.sub.1, in.sub.2, in.sub.3, . . . , in.sub.n being connected to a corresponding number of weighting elements w.sub.1, w.sub.2, w.sub.3, . . . , w.sub.n. The outputs of the weighting elements are connected to inputs of a summation stage for providing a summation signal. Basically, the summation stage together with the weighting elements performs a number of multiply accumulate, MAC, operations on the plurality of inputs in.sub.1, in.sub.2, in.sub.3, . . . , in.sub.n. It should be apparent that the summation stage performs the summation operation in the analog domain, such that particularly no conversion or operation in the digital domain is required. The analog summation signal at the output of the summation stage SM is provided to an activation stage ACT for applying an activation function, e.g. a clipping function or the like, to the summation signal. An output of the activation stage ACT is provided to a buffer stage BUF for providing a buffered output signal, e.g. an output voltage, at an output OUT of the neural amplifier. FIG. 1 describes the basic function of a neural amplifier that can be used, for example, in an analog neural network.
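The signal chain of FIG. 1 can be summarized as a weighted sum followed by an activation. The following minimal behavioral sketch in Python, with illustrative names not taken from the patent, models this chain assuming ideal weighting elements and a clipping activation:

```python
def neural_amplifier(inputs, weights, v_clip=1.0):
    """Behavioral model of FIG. 1: MAC operation followed by a clipping
    activation; v_clip is an assumed symmetric clipping level."""
    s = sum(w * x for w, x in zip(weights, inputs))  # summation stage SM
    return max(-v_clip, min(v_clip, s))              # activation stage ACT
```

In the actual circuit the sum is formed in one shot in the analog domain; the sketch only mirrors the transfer function, not the charge-domain implementation.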

[0058] For example, a neural network is a cascade of neuron layers that are interconnected. FIG. 2 shows an example implementation of such a neural network with a plurality of neurons distributed over several layers, and represented by circles in FIG. 2. For example, the neural network comprises an input layer, an output layer and several hidden layers. An output of each neuron may be connected to one or more other neurons of the neural network, indicated by arrows originating from the respective neurons. Consequently, each neuron may be connected to the output of one or more other neurons or even its own output, thereby establishing a recurrent path.

[0059] As mentioned before, neural networks with a large number of neurons and high interconnectivity need to perform a vast number of MAC operations. Today neural networks are mostly implemented digitally, thus requiring a considerable amount of computing power. In contrast, an analog MAC operation is in principle a one-shot operation. Whereas values in the digital domain are represented by a number of bits, in the analog domain only a single storage unit is required to hold the value, independent of the resolution. Hence, there is increasing effort to shift MAC operations into the analog domain, opening the field of analog neural networks. Analog neural networks do not rely on sub-nanometer technology nodes to achieve competitive performance. Speed is achieved by leveraging analog properties which do not scale well with technology. This supports implementation in older low cost and analog optimized technologies. Analog neural networks are therefore an attractive option for co-integration with, for example, analog sensor readout circuits.

[0060] Implementing an analog neuron for a recurrent neural network requires an amplifier that can sum its inputs while holding the previous value and driving other neuron inputs at the same time. Performance can be increased further by achieving a low offset and gain error, which prevents accumulation of errors over different cycles. For example in recurrent neural networks, results are fed back to prior neurons by respective recurrent paths, as indicated in FIG. 2.

[0061] In the following, several example implementations of an analog neural amplifier according to the improved concept will be described that are suitable for an efficient implementation of an analog neural network with or without recurrent paths. The improved concept enables an analog neuron implementation with differential signal processing and a switched capacitor approach, which reduces effects of charge injection, thus improving the precision of an analog neuron and consequently of an analog neural network implemented with such neurons. Performance may be further improved by including a switch charge injection and/or amplifier offset cancellation scheme. In summary, a high number of neurons can be connected to a single summing node even in a recurrent operation without significant offset accumulation. Furthermore, by making offset errors and gain errors negligible, corresponding drifts over PVT are not a concern. Consequently, periodic retraining or calibration is not necessary.

[0062] FIG. 3 shows an example implementation of an analog neural amplifier with a sampling stage SMP, a summation stage SM and a buffer and activation stage ACB. As indicated in conjunction with FIG. 1, FIG. 3 implements a sampling stage with n inputs with n parallel sampling structures, of which only one example structure is shown for reasons of a better overview. The sampling structure has a differential input pair V.sub.ini.sup.+, V.sub.ini.sup.−, representing the input i of n possible inputs. Each structure further comprises a pair of digitally adjustable charge stores C.sub.sia, C.sub.sib that have their first terminal connected to the differential signal input V.sub.ini.sup.+, V.sub.ini.sup.− via respective switches S.sub.2a, S.sub.2b. The first terminal of the charge stores C.sub.sia, C.sub.sib is also coupled to a common mode terminal V.sub.CM via respective switches S.sub.3a, S.sub.3b.

[0063] Second terminals of the charge stores C.sub.sia, C.sub.sib are coupled to the common mode terminal V.sub.CM via further respective switches S.sub.1a, S.sub.1b, and further to the summation stage SM via respective switches S.sub.4a, S.sub.4b. While the pair of charge stores C.sub.sia, C.sub.sib and the corresponding switches S.sub.2a, S.sub.2b, S.sub.3a, S.sub.3b are present multiple times in the sampling stage SMP, i.e. n times, switches S.sub.1a, S.sub.1b, S.sub.4a, S.sub.4b may be common to all such sampling structures and provided only once, however, without excluding the possibility of a multiple presence.

[0064] The charge stores C.sub.sia, C.sub.sib are digitally adjustable, in particular for setting a respective weight for the associated input V.sub.ini.sup.+, V.sub.ini.sup.−, at which a differential input voltage can be received.

[0065] The summation stage SM for example comprises an amplifier, for example an operational transconductance amplifier, OTA, with a pair of integrating charge stores C.sub.fb1a, C.sub.fb1b in a feedback path of the integrating amplifier. Respective switches are connected in parallel to the integrating charge stores C.sub.fb1a, C.sub.fb1b for resetting them. The summation stage operates in the analog domain, such that particularly no conversion or operation in the digital domain is required and an analog summation signal is output.

[0066] Downstream to the summation stage SM the buffer and activation stage ACB is connected that is configured to apply an activation function and to generate a buffered output voltage V.sub.out.sup.+, V.sub.out.sup.− at the differential output, based on a summation signal generated in the summation stage SM.

[0067] FIG. 4 shows an example diagram of control signals that can be applied to the neural amplifier according to FIG. 3. In particular, FIG. 4 shows switch control signals φ.sub.1, φ.sub.1D, φ.sub.2 and φ.sub.2D. For example, during the times where both φ.sub.1 and φ.sub.1D, which is a slightly delayed version of φ.sub.1, are high, the respective switches controlled by these signals are closed such that the adjustable charge stores are each connected between the respective input terminal V.sub.ini.sup.+, respectively V.sub.ini.sup.−, and the common mode terminal V.sub.CM via switches S.sub.1a, S.sub.1b. Furthermore, the integrating charge stores C.sub.fb1a and C.sub.fb1b are reset.

[0068] During the high times of switching signals φ.sub.2 and slightly delayed φ.sub.2D the respective first terminals of the adjustable charge stores are connected to the common mode terminal V.sub.CM while the second terminals are connected to the summation stage via switches S.sub.4a, S.sub.4b. This results in the summation stage summing up the charges resulting from the sampled plurality of input voltages on the respective pairs of adjustable charge stores in order to generate the summation signal. The differential approach reduces the effects of charge injection resulting from the different switches.
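The two-phase operation can be sketched behaviorally: during φ.sub.1 each adjustable charge store samples a charge proportional to its input voltage, and during φ.sub.2 those charges are pushed onto the integrating charge store. The following Python sketch assumes an ideal OTA and single-ended equivalents; the variable names are illustrative, not from the patent:

```python
def sc_summation(v_in, c_samp, c_fb):
    """phi_1: each capacitor c_samp[i] samples charge q_i = c_samp[i] * v_in[i].
    phi_2: all sampled charges are transferred onto the feedback capacitor c_fb
    of the integrating amplifier, giving an output of sum(q_i) / c_fb."""
    q_total = sum(c * v for c, v in zip(c_samp, v_in))  # summed sampled charge
    return q_total / c_fb                               # ideal integrator output
```

The weight of each input is thus set by the capacitance ratio c_samp[i]/c_fb, which is what the digital adjustment of the charge stores controls.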

[0069] FIG. 5 shows an example implementation of the digitally adjustable charge store that, for example, can be used in the various sampling structures of the sampling stage SMP. For example, the charge store comprises a first charging terminal V.sub.1 and a second charging terminal V.sub.2 and a plurality of weighted charge stores, each having a first end connected to the first charging terminal V.sub.1 and a second end selectively connected to the second charging terminal V.sub.2 or to the common mode terminal V.sub.CM, depending on a digital adjustment word. In the example of FIG. 5, the charge stores are binary weighted starting with a first charge store having a capacitance value Cu and an n.sup.th charge store having a capacitance value 2.sup.n-1Cu. Respective switches are controlled by the digital adjustment word comprising the single bits weight<0>, weight<n-2>, weight<n-1>. Other weighting schemes instead of a binary weighting scheme can be used as well.

[0070] The implementation of FIG. 5 may be called a sample capacitor digital-to-analog converter, DAC, as the digital adjustment word is converted to an analog capacitance value, in particular with the binary weighting scheme.
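The selection of unit capacitors by the digital adjustment word can be modeled as follows. This is a minimal sketch of the binary weighting scheme of FIG. 5, assuming an arbitrary unit capacitance; the function name is illustrative:

```python
def cap_dac(weight_bits, c_unit=1.0):
    """Effective sampling capacitance of the capacitor DAC of FIG. 5.
    weight_bits[k] is bit weight<k>; a set bit connects a capacitor of
    value 2**k * c_unit to the second charging terminal V_2, while
    deselected units remain tied to the common mode terminal V_CM."""
    return sum((2 ** k) * c_unit for k, b in enumerate(weight_bits) if b)
```

With all bits cleared the effective capacitance is zero, which corresponds to the zero-weight setting used later for offset sampling.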

[0071] Referring back to FIG. 3, if all n neuron inputs are to be sampled and summed in one shot, the total number of individual routing lines would be n*n.sub.adj with n.sub.adj denoting the number of bits of the adjustment word of the adjustable charge store.

[0072] In practice, routing complexity increases with the number of differential inputs and with the weight resolution n.sub.adj. In order to obtain a routing complexity of O(n), multiplexing of the differential neural inputs may be performed, such that for example different differential input voltages are sampled and summed in subsequent phases. This also means that the pairs of digitally adjustable charge stores or capacitor DACs are reused for several differential inputs.
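The routing trade-off can be illustrated with a small counting sketch, assuming each capacitor DAC needs n.sub.adj individual weight lines; the names and the example numbers are illustrative only:

```python
def routing_lines(n_inputs, n_adj, n_mux=1):
    """Number of adjustment-word routing lines. Without multiplexing
    (n_mux = 1) every input has its own capacitor DAC, giving
    n_inputs * n_adj lines; with n_mux inputs sharing one DAC, only
    n_inputs / n_mux DACs need individual weight lines."""
    n_dacs = n_inputs // n_mux
    return n_dacs * n_adj
```

For example, 64 inputs with 8-bit weights need 512 lines without multiplexing, but only 64 lines, i.e. O(n), when 8 inputs share each DAC.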

[0073] Referring now to FIG. 6, an example implementation of a part of the sampling stage SMP is shown, in particular a different implementation of the parallel sampling structures at the input side of the sampling stage SMP. Generally, this example implementation is based on the implementation of FIG. 3, but at least one multiplexer MUX is introduced between several of the differential inputs and an associated charge store pair C.sub.sia, C.sub.sib.

[0074] In this example implementation, n.sub.x inputs are multiplexed to one pair of adjustable charge stores C.sub.sia, C.sub.sib, thereby reducing the routing complexity. It should be noted that the number of parallel sampling structures is therefore reduced to n/n.sub.x compared to n sampling structures in FIG. 3. For instance, if the multiplex factor n.sub.x is set equal to the weight resolution n.sub.adj, i.e. n.sub.x=n.sub.adj, the routing overhead increases only linearly with n.sub.adj.

[0075] Referring now to FIG. 7, an example diagram of control signals that can be applied to a neural amplifier in accordance with FIG. 6 is shown. For the general explanation, reference is made to FIG. 4, which expresses the basic scheme of the various switch settings controlled by signals φ.sub.1, φ.sub.1D, φ.sub.2 and φ.sub.2D. In FIG. 7, the selection signal SEL additionally controls the multiplexer MUX to subsequently connect several inputs to the adjustable charge stores. For example, n.sub.x is chosen to be 4 in this example, without loss of generality.

[0076] Consequently, routing complexity is traded against conversion time. Due to the multiphase conversion the summation signal provided by the summation stage, and therefore also the buffered output voltage, is not available for driving the output, respectively the differential inputs, of other neural amplifiers during consecutive cycles. Therefore, the summation signal of the summation stage is sampled by the buffer and activation stage ACB after the last summing phase. The buffered output voltage can then drive the differential inputs of other neural amplifiers or one of its own differential inputs during a next recurrent cycle.

[0077] The differential structure significantly reduces charge injection errors even for a high number of input connections to the neural amplifier. However, residual charge injection errors may remain, e.g. originating from offset errors that may sum up to a non-negligible amount, which may be further accumulated in a recurrent operation mode, depending on the number of differential inputs of a single neural amplifier and the number of neurons employed in the neural network.

[0078] Referring now to FIG. 8, a further development of the improved concept for a neural amplifier is shown that is based on the implementations of FIG. 3 and FIG. 6. In this example implementation, in order to eliminate offset from the input sampler and the summation amplifier, correlated double sampling, CDS, is employed. To this end, the summation stage SM further comprises a pair of double sampling charge stores C.sub.CDSa, C.sub.CDSb that is connected to the output side of the integrating amplifier via a pair of respective switches controlled by a double sampling control signal φ.sub.CDS. Furthermore, the pair of double sampling charge stores C.sub.CDSa, C.sub.CDSb is connected to an input side of the buffer and activation stage via respective difference elements in order to subtract the charges stored on the double sampling charge stores C.sub.CDSa, C.sub.CDSb from the charges stored on the integrating charge stores C.sub.fb1a and C.sub.fb1b.

[0079] During operation, in the neural amplifier this can be implemented by deselecting all units of the capacitor DACs, e.g. by connecting them to the common mode terminal V.sub.CM, thus effectively sampling a zero signal. In other words, a zero weight may be selected for the adjustable charge stores during this phase. The corresponding neural amplifier output is thus equivalent to its output offset and can be subtracted from the actual neural amplifier output with neural input signals. However, because the neural amplifier output is analog, this operation cannot be realized in the digital domain and will be performed during the charge transfer to the buffer. This requires the additional double sampling charge stores C.sub.CDSa, C.sub.CDSb at the summation amplifier output to hold the zero input signal summation outputs during the consecutive neural input conversion.
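The correlated double sampling principle described above can be sketched as two consecutive conversions, assuming the offset is constant between them; the names are illustrative, not from the patent:

```python
def cds(mac_value, offset):
    """Model of the CDS scheme of FIG. 8: a zero-weight conversion first
    captures only the amplifier offset on C_CDSa/b; the subsequent real
    conversion carries the same offset, which is subtracted during the
    charge transfer to the buffer."""
    zero_phase = 0.0 + offset        # all capacitor DAC units deselected
    input_phase = mac_value + offset  # actual weighted summation
    return input_phase - zero_phase   # offset-free result
```

The subtraction removes any offset component common to both conversions, at the cost of halving the conversion rate, as noted in the next paragraph.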

[0080] However, one issue with correlated double sampling is the reduction in conversion rate by a factor of 2. Moreover, subtraction of the offset in the analog domain may introduce additional error sources. Referring now to FIG. 9, an example diagram of control signals that can be applied to the neural amplifier according to FIG. 8 is shown.

[0081] Referring now to FIG. 10, a further development of the improved concept for a neural amplifier is shown that is based on the implementations of FIG. 3 and FIG. 6. In addition to the previous implementations, a chopping scheme is added, in particular by including several chopping blocks ch1, ch2 and ch3 in the neural amplifier. Introduction of a chopping (sometimes also referred to as swapping) is possible due to the multi-phase sampling scheme related to the multiplexer.

[0082] For example, the first chopping block ch1 is provided in each parallel sampling structure between the multiplexer MUX and the connected pair of adjustable charge stores C.sub.sia, C.sub.sib. Furthermore, a second differential chopping block ch2 is implemented in the summation stage SM and couples the first end of the differential feedback path including integrating charge stores C.sub.fb1a, C.sub.fb1b to an input side of the integrating amplifier. Similarly, a third differential chopping block ch3 couples the second end of the differential feedback path to an output side of the integrating amplifier.

[0083] The chopping blocks ch1, ch2, ch3 are controlled by a chopping control signal φ.sub.chop and have the function of either directly connecting the differential paths between their input and output sides or cross connecting the differential paths, which basically corresponds to an inversion of the differential signal. If the chopping phases are distributed equally over the various switching phases, chopping can cancel out any residual offsets from all input sampling switches, allowing for a nearly arbitrary number of differential inputs.

[0084] Referring now to FIG. 11, an example diagram of control signals that can be applied to the neural amplifier according to FIG. 10 is shown. Reference is again made to the previous explanations of the example diagrams in FIG. 4 and FIG. 7. For example, in addition to the switching scheme in FIG. 7, the chopping signal φ.sub.chop is zero during the first half of the summation phases such that the offset of the integrating amplifier is accumulated negatively, while during the second half, where φ.sub.chop is high, the offset of the integrating amplifier is accumulated positively. Hence, the total transferred offset charge on C.sub.fb1a, C.sub.fb1b cancels out, resulting, at least theoretically, in zero charge. Chopping only once during the summation phases reduces any residual offset introduced by the chopper switches themselves, because their contribution is only added once.
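The offset cancellation by chopping can be illustrated numerically. This minimal sketch, with illustrative names, accumulates the amplifier offset with negative polarity during the first half of the summation phases and positive polarity during the second half, assuming the offset is constant over all phases:

```python
def chopped_sum(contributions, offset):
    """Accumulate per-phase charge contributions; the offset polarity
    follows phi_chop: -1 for the first half of the phases (phi_chop low),
    +1 for the second half (phi_chop high), so the offset terms cancel."""
    n = len(contributions)
    polarity = [-1] * (n // 2) + [+1] * (n - n // 2)
    return sum(c + p * offset for c, p in zip(contributions, polarity))
```

With an even number of summation phases the accumulated offset charge is, at least ideally, zero, leaving only the wanted weighted sum.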

[0085] The effectiveness of the chopping scheme in the context of the neural amplifier is further supported if the total equivalent offset, which is the sum of the individual neuron input offsets and the offset of the integrating amplifier, is constant and thus independent of the individual neuron input weights controlling the digitally adjustable charge stores in all phases. For example, referring back to FIG. 5, this is achieved if all unit capacitors of the digitally adjustable charge store that are not selected for input sampling, i.e. not being connected to the second terminal V.sub.2, are connected to the common mode terminal V.sub.CM. This keeps the respective unit capacitors active but effectively with zero input. Furthermore, charge injection from all sampling switches S.sub.1a, S.sub.1b is added independently of the different weights, respectively capacitor settings, in the different phases.

[0086] Despite chopping, accuracy of the neural amplifier may be further increased, if made necessary by the respective application, for example by the complexity of the neural network. For example, there may be an output offset at an output of the summation stage SM after the last summation phase φ.sub.2, i.e. after the last input voltage has been weighted and summed up, unless the summation stage SM itself is offset compensated.

[0087] Referring now to FIG. 12, this may be accomplished by a further development of the neural amplifier according to the improved concept, which is based on the implementation shown in FIG. 10. In particular, the sampling stage SMP of FIG. 12 fully corresponds to the sampling stage of FIG. 10.

[0088] In the summation stage SM, a pair of switches S.sub.5a, S.sub.5b is introduced, which are controlled by switching signal φ.sub.4xn and connect the differential input of the integrating amplifier OTA1 via the second chopping block ch2 to a first end of the integrating charge stores C.sub.fb1a, C.sub.fb1b. Switches S.sub.6a, S.sub.6b, being controlled by switching signals φ.sub.4DD, correspond to the reset switch of FIG. 10. Switches S.sub.7a, S.sub.7b, controlled by switching signals φ.sub.4D, couple the first terminal of the integrating charge stores C.sub.fb1a, C.sub.fb1b to a differential input of a capacitive amplifier OTA2 of the buffer stage BUF. An activation stage being part of the buffer and activation stage ACB is not shown here for reasons of a better overview.

[0089] The buffer stage BUF comprises a further pair of charge stores C.sub.fb2a, C.sub.fb2b having a first end connected to the differential input of the capacitive amplifier OTA2. A second end of the charge stores C.sub.fb2a, C.sub.fb2b is connected to the common mode terminal V.sub.CM via switches S.sub.8a, S.sub.8b controlled by switching signal φ.sub.3 and to the differential output terminals of the buffer stage BUF via switches S.sub.9a, S.sub.9b controlled by switching signals φ.sub.3DDn. Input and output of the amplifier OTA2 are connected by respective switches S.sub.10a, S.sub.10b being controlled by switching signals φ.sub.3D. A differential buffered output voltage V.sub.out_buf+, V.sub.out_buf− is provided at the differential output of the amplifier OTA2.

[0090] FIG. 13 shows an example diagram of control signals that can be applied to the neural amplifier according to FIG. 12. For the function of switching signals φ.sub.chop, φ.sub.1, φ.sub.1D, φ.sub.2, φ.sub.2D and sel, reference is made to the respective explanations in conjunction with FIG. 7 and FIG. 11. With respect to the switching signals φ.sub.3, φ.sub.3D and φ.sub.3DDn it should be noted that φ.sub.3D is a slightly delayed version of φ.sub.3, and φ.sub.3DDn is a further delayed version of φ.sub.3 that is also negated. Altogether they belong to a buffer offset compensation phase that will be explained in more detail below in conjunction with FIGS. 14A to 14D.

[0091] Similarly, switching signals φ.sub.4xn, φ.sub.4D and φ.sub.4DD correspond to a phase for charge transfer to buffer and offset sampling, which will also be explained in more detail below.

[0092] Hence, as can be seen from FIG. 13, the phases φ.sub.1 and φ.sub.2 generally correspond to a sampling and summation phase, while the switching signals with indices 3 and 4 correspond to the charge transfer to the buffer. It should further be noted that also in the example diagram of FIG. 13, n.sub.x has been chosen as 4 for ease of explanation, without loss of generality with respect to other values of n.sub.x.

[0093] Referring now to FIGS. 14A to 14D, the individual phases mentioned before are depicted. Each summation phase is split into a sampling phase φ.sub.1 and a charge transfer phase φ.sub.2. For example, FIG. 14A shows the actual electrical configuration of the neural amplifier according to FIG. 12 with the respective switch settings of φ.sub.1. Hence, during φ.sub.1 the input voltages at the differential inputs, e.g. the neuron inputs, are sampled onto the selected unit capacitors of the adjustable charge stores, respectively the capacitor DAC, depending on the corresponding digital adjustment word.
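The weighting performed by the adjustable charge store during the sampling phase can be modeled as follows. This is a behavioral sketch only, not the patented circuit; the unit capacitance, the number of unit capacitors and all names are illustrative assumptions.

```python
# Behavioral sketch: sampling a differential input onto a digitally
# adjustable charge store (capacitor DAC). The adjustment word selects
# how many unit capacitors sample the input; unselected units are tied
# to the common mode terminal V_CM and thus sample zero signal charge.
# C_UNIT and N_UNITS are illustrative values, not taken from the patent.

C_UNIT = 1e-13  # unit capacitance in farads (assumed)
N_UNITS = 16    # unit capacitors per charge store (assumed)

def sampled_charge(v_in_diff, adjustment_word):
    """Differential signal charge sampled during phase phi_1."""
    if not 0 <= adjustment_word <= N_UNITS:
        raise ValueError("adjustment word out of range")
    # Only the selected units see the input; the rest see V_CM.
    return adjustment_word * C_UNIT * v_in_diff

# Doubling the adjustment word doubles the sampled charge, i.e. the
# digital word acts as a multiplicative weight on the input voltage.
q1 = sampled_charge(0.5, 4)
q2 = sampled_charge(0.5, 8)
```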

[0094] As mentioned before, unselected unit capacitors may be connected to the common mode terminal V.sub.CM, thus sampling zero signal charge but still introducing charge injection and offset charge of the first integrating amplifier OTA1. This makes the total input offset independent of any weights, respectively adjustment words, so that it is cancelled by chopping. As the switching pair S.sub.2a, S.sub.2b is driven by a delayed clock φ.sub.1D, it does not contribute to charge injection offset. Moreover, the first chopping block ch1 does not contribute since it is switched during the non-overlap time of φ.sub.1 and φ.sub.2, such that no charges can be transferred from the switching process in the chopping block ch1. With respect to the second chopping block ch2, there may be a charge injection contribution, as charge remains trapped on the internal nodes n1a, n1b, to which the second chopping block ch2 is connected. However, this chopping block ch2 only toggles once during all summation phases, making its contribution small and negligible.

[0095] Referring now to FIG. 14B, the switching configuration during the charge transfer phase φ.sub.2 of the neural amplifier of FIG. 12 is shown. Accordingly, during φ.sub.2 the sampling capacitors C.sub.sia, C.sub.sib are discharged and their charge is transferred onto the integrating charge stores C.sub.fb1a, C.sub.fb1b. Furthermore, a charge Q.sub.off related to the input offset of the integrating amplifier OTA1 is transferred with


Q.sub.off = C.sub.s_total·V.sub.off1.

[0096] As unselected unit sample capacitors of the adjustable charge store are not kept floating but connected to the common mode terminal V.sub.CM, a total sample capacitance seen during the charge transfer phase φ.sub.2 is constant and thus Q.sub.off is effectively cancelled by chopping. Furthermore, switches S.sub.4a, S.sub.4b add charge injection which is cancelled by chopping too. Switches S.sub.3a, S.sub.3b do not contribute charge injection due to the delayed switching signal φ.sub.2D.
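The cancellation of the offset charge by chopping can be illustrated with a small numerical model. This is an illustrative sketch with assumed values, not the patented implementation: because the total sampling capacitance seen during φ.sub.2 is constant, the offset charge is identical in every conversion and drops out when two conversions with opposite chopper polarity are combined.

```python
# Illustrative model of offset cancellation by chopping. Since the
# unselected unit capacitors are tied to V_CM instead of floating, the
# total sampling capacitance C_s_total is constant, and the offset
# charge Q_off = C_s_total * V_off1 is the same in every conversion.
# All numeric values are assumptions for illustration.

C_S_TOTAL = 1.6e-12   # total sampling capacitance (assumed)
V_OFF1 = 2e-3         # input-referred offset of OTA1 (assumed)

def transferred_charge(q_signal, chop):
    """Charge transferred during phi_2 for one chopper polarity."""
    # The chopper flips the signal path, so the signal charge changes
    # sign with chop while the offset contribution does not.
    return chop * q_signal + C_S_TOTAL * V_OFF1

q_sig = 3e-13
# Demodulation: combine the chop = +1 and chop = -1 results; the
# constant offset charge cancels and the signal charge remains.
demodulated = (transferred_charge(q_sig, +1) - transferred_charge(q_sig, -1)) / 2
```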

[0097] Referring now to FIG. 14C, the electrical configuration of the neural amplifier of FIG. 12 is shown during the buffer offset compensation phase φ.sub.3. In particular, during this phase φ.sub.3, the differential capacitive amplifier OTA2 with the charge stores C.sub.fb2a, C.sub.fb2b is reset. By further configuring the integrating amplifier OTA1 in unity feedback, it is precharged to the offset voltage of the input side of the buffer stage BUF in order to cancel it at the capacitive amplifier OTA2, respectively its output, after the phase φ.sub.3. As the switches S.sub.7a, S.sub.7b are open during this phase φ.sub.3, charge injection from the switches S.sub.8a, S.sub.8b, S.sub.9a, S.sub.9b, S.sub.10a, S.sub.10b is mainly attracted to the lower impedance output of the amplifier OTA2, making the residual charge injection small. Moreover, such charge injection is only added once per conversion, further reducing its contribution.

[0098] Referring now to FIG. 14D, the electrical configuration of the neural amplifier according to FIG. 12 during the charge transfer to buffer and offset sampling phase φ.sub.4 is shown. In particular, during this phase φ.sub.4, the integrating charge stores C.sub.fb1a, C.sub.fb1b are connected to the input side of the buffer stage, respectively the amplifier OTA2, while the integrating amplifier OTA1 is configured in unity feedback, thus forcing the charge on the integrating charge stores C.sub.fb1a, C.sub.fb1b to be transferred to the charge stores C.sub.fb2a, C.sub.fb2b. As the integrating charge stores C.sub.fb1a, C.sub.fb1b have been precharged to the difference between the first offset voltage at the amplifier OTA1 and the second offset voltage of the amplifier OTA2 during the previous phase φ.sub.3, there is no offset charge transferred onto the charge stores C.sub.fb2a, C.sub.fb2b.
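The effect of this precharging can again be sketched numerically. The following is a simplified behavioral model with assumed offset and capacitance values, not a circuit-level description from the patent: without the precharge, the offset difference of the two amplifiers would add an error charge onto C.sub.fb2a, C.sub.fb2b; with it, only the signal charge is transferred.

```python
# Sketch of the offset-compensated charge transfer to the buffer.
# During the compensation phase the integrating charge stores are
# precharged to the difference of the two amplifier offsets, so that
# during phi_4 only signal charge reaches C_fb2. Values are assumed.

V_OFF1 = 2e-3    # offset of the integrating amplifier OTA1 (assumed)
V_OFF2 = -1e-3   # offset of the buffer amplifier OTA2 (assumed)
C_FB1 = 1e-12    # integrating charge store (assumed)

def charge_onto_cfb2(q_signal, precharged):
    """Charge arriving at C_fb2 during phase phi_4."""
    # Without the precharge, the offset difference adds an error charge.
    q_offset_error = 0.0 if precharged else C_FB1 * (V_OFF1 - V_OFF2)
    return q_signal + q_offset_error

q_sig = 5e-13
```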

[0099] However, there may be some charge injection from switches S.sub.5a, S.sub.5b. As these switches S.sub.5a, S.sub.5b always remain at a virtual ground potential, this charge is not signal-dependent and only results in some residual offset, if any. Furthermore, as this charge is only added once per conversion, its impact would still be small. The implementation of the neural amplifier according to FIG. 12 avoids any signal swings at the input of both amplifiers OTA1, OTA2, such that there are no signal-dependent charge effects depending on the respective input capacitances of the amplifiers OTA1, OTA2 that would result in any gain error.

[0100] Moreover, there is no signal-dependent charge injection leaking to the output, making the gain error solely dependent on an open loop gain of the amplifiers and on the capacitor-matching of C.sub.fb1a, C.sub.fb1b, C.sub.fb2a, C.sub.fb2b and C.sub.sia, C.sub.sib.

[0101] In various implementations, a contribution of the amplifiers, in particular if implemented as OTAs, can be made small by using a high gain topology, as shown for example in FIG. 15, making the gain error insensitive to PVT variations.

[0102] FIG. 15 shows an example implementation of an operational transconductance amplifier with a differential input stage and a differential output stage whose signal outputs are connected between a pair of PMOS and NMOS cascode transistors that are driven by respective cascode bias voltages Vcasp, Vcasn, which may be generated by an appropriate biasing circuit. The differential output voltage is also used for a common mode feedback circuit CM controlling the current in the output current paths.

[0103] As mentioned before, the buffer and activation stage ACB further implements an activation function, which can be a clipping function. Clipping may be accomplished by limiting a supply voltage of the capacitive amplifier OTA2 and/or of the buffer stage BUF itself. However, clipping can also be implemented by a dedicated clipping stage.

[0104] Referring now to FIG. 16, an example implementation of such a clipping stage ACT is shown that may be connected to the buffer stage BUF. In FIG. 16, clipping is performed by comparing the buffer output voltages V.sub.out_buf.sup.+, V.sub.out_buf.sup.− to a predefined reference voltage, in particular a differential voltage, and multiplexing between the buffer voltages V.sub.out_buf.sup.+, V.sub.out_buf.sup.− and a reference voltage defining the clipping level. If the buffer output is below the reference, the buffer output is used to drive the output of the neural amplifier, i.e. to provide the buffered output voltage. This voltage can be used to drive other neural amplifiers or, if applicable, an input pair of the same neural amplifier, if a recurrent neural network is implemented.

[0105] Otherwise, the reference voltages V.sub.ref.sup.+, V.sub.ref.sup.− will be used as the output voltages V.sub.out.sup.+, V.sub.out.sup.−.

[0106] As the clipping function must be applied both in the positive and the negative direction, clipping is performed in two steps, reusing the same comparator and employing a chopping block controlled by a control signal φ.sub.chop_clip. In particular, clipping is first checked in the positive range by comparing to the positive reference V.sub.ref.sup.+ while, with reference to the example diagram of FIG. 17, φ.sub.chop_clip is zero. If clipping is detected, the positive reference is switched to the output V.sub.out.sup.+, V.sub.out.sup.− and the clipping operation is finished. The comparison is performed by the comparator and a subsequently placed flip-flop, which allows a clocked operation on the basis of the clock signal clk.

[0107] In the case of no positive clipping, the reference is flipped by setting the control signal φ.sub.chop_clip to 1 for a comparison against the negative reference using the same comparator. If negative clipping is detected, the negative reference is directed to the output, otherwise the buffer output V.sub.out_buf.sup.+, V.sub.out_buf.sup.− is used.

[0108] The actual comparison is performed by precharging the capacitances in front of the comparator with the reference voltages and subsequently applying the buffered output voltages V.sub.out_buf.sup.+, V.sub.out_buf.sup.− to the sampled voltage in order to detect whether these are higher or lower than the precharged voltages.
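The two-step clipping decision described above can be summarized in a short behavioral sketch. This models only the decision logic, not the switched-capacitor comparison; the function name and signature are illustrative.

```python
# Behavioral sketch of the two-step clipping: first compare against
# the positive reference (phi_chop_clip = 0); if no positive clipping
# is detected, flip the reference (phi_chop_clip = 1) and compare
# against the negative reference. Names are illustrative.

def clip_output(v_out_buf, v_ref_pos, v_ref_neg):
    """Return the neural amplifier output after the clipping stage."""
    # Step 1: positive range, phi_chop_clip = 0.
    if v_out_buf > v_ref_pos:
        return v_ref_pos      # positive reference drives the output
    # Step 2: reference flipped, phi_chop_clip = 1.
    if v_out_buf < v_ref_neg:
        return v_ref_neg      # negative reference drives the output
    return v_out_buf          # buffer output passes through unchanged
```

A buffer voltage inside the reference window is passed through, while voltages outside the window are replaced by the corresponding reference.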

[0109] As mentioned before, an alternative implementation of clipping is to supply the buffer output stage by the reference, such that the buffer inherently clips the output to the desired levels. This may have the effect that the same clipping levels apply to all neural amplifiers, if the references of all neural amplifiers are supplied by a common voltage regulator, for example. This eliminates clipping threshold shifts due to comparator offset. However, supply-based clipping cannot achieve hard clipping; instead, the clipping is soft and resembles a logistic activation function.
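The difference between the two clipping behaviors can be sketched mathematically. The logistic-like soft limit below is an illustrative model of supply-based clipping; the steepness parameter k is an assumption, as the patent does not specify a particular transfer curve.

```python
import math

# Sketch contrasting hard clipping (dedicated clipping stage) with the
# soft, logistic-like limiting that results from supply-based clipping.
# The steepness parameter k is an illustrative assumption.

def hard_clip(v, v_ref):
    """Hard clipping to the window [-v_ref, +v_ref]."""
    return max(-v_ref, min(v_ref, v))

def soft_clip(v, v_ref, k=8.0):
    """Scaled logistic: smooth saturation towards +/- v_ref."""
    return v_ref * (2.0 / (1.0 + math.exp(-k * v / v_ref)) - 1.0)
```

For inputs well inside the window both functions are nearly linear; for large inputs both saturate at ±v_ref, but the soft variant approaches the limit gradually, resembling a logistic activation function.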

[0110] With respect to the various implementations of the neural amplifier described above, a low offset and gain error can be achieved compared to conventional approaches for neural amplifiers, in particular for a high number of neuron inputs, by applying, for example, the described circuit techniques in a fully differential neural amplifier. The reduction in circuit errors results in fewer concerns with respect to drift. Furthermore, periodic recalibration is not required. Specific implementations with the offset-compensated buffer stage, for example as described in conjunction with FIGS. 12 to 14, improve the applicability of the neural amplifier for neural networks in a recurrent operating mode, where output voltages are fed back to inputs of the same or other neural amplifiers.

[0111] Multiple instances of a neural amplifier as described above can be used to form a neural network, as for example described in conjunction with FIG. 2. Such neural amplifiers may be used in any circuit requiring weighted or unweighted analog summation of input voltages with high precision while providing parallel driving capability, for example in the mentioned analog neural networks. Analog neural networks are, for example, an interesting option for classifying sensor data with hidden or hardly visible patterns.
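At the network level, each neural amplifier computes a weighted sum of its inputs followed by the clipping activation. The following is a purely mathematical sketch of such a layer, not a circuit simulation; all values and names are illustrative.

```python
# Sketch of one layer of an analog neural network built from multiple
# neural amplifier instances: each amplifier forms a weighted sum of
# its inputs (weights set by the digital adjustment words) followed by
# a clipping activation. Purely mathematical model; values illustrative.

def neural_amplifier(inputs, weights, v_ref=1.0):
    """One neuron: weighted summation followed by clipping."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return max(-v_ref, min(v_ref, s))   # clipping activation

def layer(inputs, weight_matrix):
    """One layer: one neural amplifier per row of the weight matrix."""
    return [neural_amplifier(inputs, row) for row in weight_matrix]

# Two neurons sharing the same three inputs; the second saturates.
outputs = layer([0.2, -0.1, 0.4], [[1.0, 0.5, 0.25], [2.0, 2.0, 2.0]])
```

In a recurrent operating mode, the buffered outputs of such a layer could be fed back as inputs to the same amplifiers, which is exactly the case for which the offset-compensated buffer stage is beneficial.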

[0112] Referring now to FIG. 18, an example of a sensor device is shown comprising one or more sensors AS1, AS2 and an analog artificial neural network NN with one or more neural amplifiers as described above. For example, output signals of the one or more sensors AS1, AS2 are provided to differential inputs of the neural amplifiers, indicated as circles as in FIG. 2.

[0113] Training of the neural network can be performed online, i.e. during operation of the network, offline, e.g. by simulating the neural network in order to determine the respective weight factors, or even a combination of an offline training with a subsequent online calibration, for example. Other implementations are not excluded by these examples.