Analog neural memory array in artificial neural network with substantially constant array source impedance with adaptive weight mapping and distributed power
11355184 · 2022-06-07
Assignee
Inventors
- Hieu Van Tran (San Jose, CA)
- Thuan Vu (San Jose, CA)
- Stephen Trinh (San Jose, CA)
- Stanley Hong (San Jose, CA)
- Anh Ly (San Jose, CA)
- Vipin Tiwari (Dublin, CA)
CPC classification
G11C16/0425
PHYSICS
G11C16/3418
PHYSICS
G11C16/0416
PHYSICS
H03M1/468
ELECTRICITY
H03F3/005
ELECTRICITY
H03F2203/45526
ELECTRICITY
G11C16/28
PHYSICS
G11C7/12
PHYSICS
H03F2203/45536
ELECTRICITY
H03M1/145
ELECTRICITY
H03M1/164
ELECTRICITY
G11C16/3427
PHYSICS
International classification
G11C16/28
PHYSICS
H03F3/00
ELECTRICITY
Abstract
Numerous embodiments of analog neural memory arrays are disclosed. In certain embodiments, each memory cell in the array has an approximately constant source impedance when that cell is being operated. In certain embodiments, power consumption is substantially constant from bit line to bit line within the array when cells are being read. In certain embodiments, weight mapping is performed adaptively for optimal performance in power and noise.
Claims
1. An analog neural memory system comprising: an array of non-volatile memory cells, wherein the cells are arranged in rows and columns, wherein each column of cells in a first plurality of columns of cells is connected to a different bit line in a plurality of bit lines, and each column of cells in a second plurality of columns of cells is connected to a different dummy bit line in a plurality of dummy bit lines; a set of dummy bit line switches arranged on a first end of the array, each of the set of dummy bit line switches coupled to one of the dummy bit lines in the plurality of dummy bit lines; a set of bit line switches arranged on a second end of the array opposite the first end of the array, each of the set of bit line switches coupled to one of the bit lines in the plurality of bit lines; and a summer for summing outputs from one or more bit lines, wherein the summer is adjustable based on a variable resistor or a variable capacitor; wherein during a read operation of a selected cell in the first plurality of columns of cells, current passes through a path formed by a bit line switch in the set of bit line switches coupled to the bit line connected to the selected cell in the first plurality of columns of cells, the selected cell, a dummy bit line, and a dummy bit line switch in the set of dummy bit line switches into the summer.
2. The system of claim 1, wherein each dummy bit line switch in the set of dummy bit line switches is configured to pull the coupled dummy bit line to ground.
3. The system of claim 2, wherein array bit line interconnect impedance remains substantially constant when cells attached to the plurality of bit lines are selected for a read operation.
4. The system of claim 1, wherein two or more of the dummy bit line switches in the set of dummy bit line switches are connected to a common ground.
5. The system of claim 1, wherein two or more of the bit lines are coupled to one another.
6. The system of claim 1, wherein array bit line interconnect impedance remains substantially constant when cells attached to the plurality of bit lines are selected for a read operation.
7. The system of claim 1, wherein the set of bit line switches couples to one or more of a sensing circuit, a summer, or an analog-to-digital converter circuit.
8. The system of claim 1, wherein the non-volatile memory cells in the array are split gate flash memory cells.
9. The system of claim 1, wherein the non-volatile memory cells in the array are stacked gate flash memory cells.
10. The system of claim 1, wherein the summer is adjustable based on a variable resistor.
11. The system of claim 1, wherein the summer is adjustable based on a variable capacitor.
12. The system of claim 1, further comprising an analog-to-digital converter for converting an output of the summer into a digital signal.
13. The system of claim 12, wherein the analog-to-digital converter comprises a successive approximation register.
14. The system of claim 13, wherein the analog-to-digital converter is a pipelined analog-to-digital converter.
15. The system of claim 1, further comprising: a source line coupled to a source line terminal of a row of non-volatile memory cells; and a source line transistor configured to pull the source line to ground.
16. The system of claim 15, wherein array interconnect impedance remains substantially constant when cells attached to the bit lines are selected for operation.
17. The system of claim 15, wherein the set of bit line switches couples to one or more of a sensing circuit, a summer, or an analog-to-digital converter circuit.
18. The system of claim 15, wherein two or more of the dummy bit line switches in the set of dummy bit line switches are connected to a common ground.
19. An analog neural memory system comprising: a first array of non-volatile memory cells, wherein the cells are arranged in rows and columns, wherein each column of cells in a first plurality of columns of cells is connected to a different bit line in a plurality of bit lines, and each column of cells in a second plurality of columns of cells is connected to a different dummy bit line in a plurality of dummy bit lines; a second array of non-volatile memory cells, the cells in the second array arranged in rows and columns, wherein each column of cells in a first plurality of columns of cells in the second array is connected to a different bit line in the plurality of bit lines and each column of cells in a second plurality of columns of cells in the second array is connected to a different dummy bit line in a plurality of dummy bit lines; and a set of multiplexors coupled to the first array and the second array and coupled to ground; wherein array interconnect impedance of the first array and the second array remains substantially constant when cells attached to the bit lines are selected for operation.
20. The system of claim 19, wherein the non-volatile memory cells in the first array and the second array are split gate flash memory cells.
21. The system of claim 19, wherein the non-volatile memory cells in the first array and the second array are stacked gate flash memory cells.
22. The system of claim 19, further comprising a summer for summing outputs from one or more bit lines.
23. The system of claim 22, wherein the summer is adjustable based on a variable resistor.
24. The system of claim 22, wherein the summer is adjustable based on a variable capacitor.
25. The system of claim 22, further comprising an analog-to-digital converter for converting an output of the summer into a digital signal.
26. The system of claim 25, wherein the analog-to-digital converter comprises a successive approximation register.
27. The system of claim 26, wherein the analog-to-digital converter is a pipelined analog-to-digital converter.
28. A method of operating an analog neural memory system comprising a first array comprising non-volatile memory cells arranged in rows and columns, a second array comprising non-volatile memory cells arranged in rows and columns, column multiplexors, local bit lines, and global bit lines, the method comprising: selecting, by the column multiplexors, a local bit line of the first array or a local bit line of the second array, and coupling the selected bit line to a global bit line; and multiplexing the global bit line with a second global bit line adjacent to the global bit line.
29. The method of claim 28, further comprising: summing outputs from one or more global bit lines.
30. A method of operating an analog neural memory system comprising an array of non-volatile memory cells, the method comprising: receiving, by a first n-bit analog-to-digital converter, a first output from the array; receiving, by a second n-bit analog-to-digital converter, a second output from the array; and combining the first n-bit analog-to-digital converter with the second n-bit analog-to-digital converter to form a third analog-to-digital converter that generates a third output based on the first output and the second output, wherein the third output has a precision greater than n bits.
31. The method of claim 30, wherein the first analog-to-digital converter is a serial analog-to-digital converter and the second analog-to-digital converter is a serial analog-to-digital converter.
32. The method of claim 30, wherein the first analog-to-digital converter is a successive approximation converter and the second analog-to-digital converter is a successive approximation converter.
33. An analog neural memory system comprising: an array of non-volatile memory cells, wherein the cells are arranged in rows and columns, wherein each column of cells in a first plurality of columns of cells is connected to a bit line in a plurality of bitlines, and weights of a neural network are distributed among the bitlines to maintain voltage drop or power consumption at a substantially equal level among the bitlines.
34. The system of claim 33, wherein the non-volatile memory cells in the array are split gate flash memory cells.
35. The system of claim 33, wherein the non-volatile memory cells in the array are stacked gate flash memory cells.
36. The system of claim 33, further comprising a summer for summing outputs from one or more bit lines.
37. The system of claim 36, wherein the summer is adjustable based on a variable resistor.
38. The system of claim 36, wherein the summer is adjustable based on a variable capacitor.
39. The system of claim 36, further comprising an analog-to-digital converter for converting an output of the summer into a digital signal.
40. The system of claim 39, wherein the analog-to-digital converter comprises a successive approximation register.
41. The system of claim 40, wherein the analog-to-digital converter is a pipelined analog-to-digital converter.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
(37) The artificial neural networks of the present invention utilize a combination of CMOS technology and non-volatile memory arrays.
(38) Embodiments of Improved VMM Systems
(40) Input circuit 1706 may include circuits such as a DAC (digital-to-analog converter), DPC (digital-to-pulses converter), DTC (digital-to-time converter), AAC (analog-to-analog converter, such as a current-to-voltage converter), PAC (pulse-to-analog level converter), or any other type of converter. Input circuit 1706 may implement normalization, scaling functions, or arithmetic functions. Input circuit 1706 may implement a temperature compensation function on the input, such as modulating the output voltage/current/time/pulse(s) as a function of temperature. Input circuit 1706 may implement an activation function such as ReLU or sigmoid.
(41) Output circuit 1707 may include circuits such as an ADC (analog-to-digital converter, to convert a neuron analog output to digital bits), AAC (analog-to-analog converter, such as a current-to-voltage converter), ATC (analog-to-time converter), APC (analog-to-pulse(s) converter), or any other type of converter. Output circuit 1707 may implement an activation function such as ReLU or sigmoid. Output circuit 1707 may implement statistical normalization, regularization, up/down scaling functions, statistical rounding, or arithmetic functions (e.g., add, subtract, divide, multiply, shift, log) on the neuron outputs, which are the outputs of VMM array 1701. Output circuit 1707 may implement a temperature compensation function on the neuron outputs (such as voltage/current/time/pulse(s)) or array outputs (such as bit line outputs), such as to keep power consumption of VMM array 1701 approximately constant or to improve the precision of the VMM array 1701 (neuron) outputs, such as by keeping the slope approximately the same.
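As an illustration of the activation and normalization functions named above, the following sketch models them in Python (the function names and the peak-normalization policy are editorial assumptions; the patent does not specify a particular normalization):

```python
import math

def relu(x):
    """Rectified linear unit: clamp negative neuron outputs to zero."""
    return max(0.0, x)

def sigmoid(x):
    """Logistic sigmoid: squash a neuron output into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def normalize(outputs):
    """Scale a list of neuron outputs so the largest magnitude is 1."""
    peak = max(abs(v) for v in outputs) or 1.0
    return [v / peak for v in outputs]

# Example: post-process raw VMM bit-line outputs.
raw = [-0.5, 0.25, 1.0, 2.0]
activated = [relu(v) for v in raw]   # [0.0, 0.25, 1.0, 2.0]
scaled = normalize(activated)        # [0.0, 0.125, 0.5, 1.0]
```

In hardware these operations are performed by analog or mixed-signal circuits in output circuit 1707; the code only mirrors their transfer functions.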
(43) One drawback of VMM system 1800 is that the input impedance for each cell varies greatly due to the length of the electrical path through the relevant bit line switch, the cell itself, and the relevant dummy bit line switch. For example,
(45) The benefit of this design can be seen in
(48) In an alternative embodiment, one or more dummy bit lines and one or more dummy bit line switches can be used instead of source line switch 2004 to pull the source lines to ground.
(49) In another embodiment, dummy rows can be utilized between rows as physical barriers to avoid FG-FG coupling (of two adjacent cells) between rows.
(52) VMM systems can be designed such that W+ and W− pairs are placed within the array in a manner that reduces FG to FG coupling or distributes power consumption more evenly across the array and the output circuits. This is described below with reference to Tables 10 and 11. Additional details regarding the FG to FG coupling phenomenon are found in U.S. Provisional Patent Application No. 62/981,757, filed on Feb. 26, 2020 by the same assignee, and titled “Ultra-Precise Tuning of Analog Neural Memory Cells in a Deep Learning Artificial Neural Network,” which is incorporated by reference herein.
(53) Table 10A shows an exemplary physical layout of an arrangement of two pairs of (W+, W−) bit lines. One pair is BL0 and BL1, and a second pair is BL2 and BL3. In this example, 4 rows are coupled to source line pulldown bit line BLPWDN. BLPWDN is placed between each pair of (W+, W−) bit lines to prevent coupling (e.g., FG to FG coupling) between one pair of (W+, W−) bit lines with another pair of (W+, W−) bit lines. BLPWDN therefore serves as a physical barrier between pairs of (W+, W−) bit lines.
(54) TABLE 10A. Exemplary Layout for W+, W− Pairs

        BLPWDN   BL0    BL1    BLPWDN   BL2    BL3    BLPWDN
row0             W01+   W01−            W02+   W02−
row1             W11+   W11−            W12+   W12−
row2             W21+   W21−            W22+   W22−
row3             W31+   W31−            W32+   W32−
(55) Table 10B shows different exemplary weight combinations. A ‘1’ means that the cell is used and has a real output value, and a ‘0’ means the cell is not used and has no value or no significant output value.
(56) TABLE 10B. Exemplary Weight Combinations for W+, W− Pairs

        BLPWDN   BL0   BL1   BLPWDN   BL2   BL3   BLPWDN
row0             1     0              1     0
row1             0     1              0     1
row2             0     1              1     0
row3             1     1              1     1
(57) Table 11A shows another array embodiment with a physical arrangement of (W+, W−) pair bit lines BL0/1 and BL2/3. The array includes redundant lines BL01 and BL23 and source line pulldown bit lines BLPWDN. Redundant bit line BL01 is used to re-map values from the pair BL0/1, and redundant bit line BL23 is used to re-map values from the pair BL2/3, as shown in later tables.
(58) TABLE 11A. Exemplary Layout for W+, W− Pairs

        BLPWDN   BL01   BL0    BL1    BL2    BL3    BL23   BLPWDN
row0                    W01+   W01−   W02+   W02−
row1                    W11+   W11−   W12+   W12−
row2                    W21+   W21−   W22+   W22−
row3                    W31+   W31−   W32+   W32−
(59) Table 11B shows an example where the distributed weight values do not need re-mapping, as there are no adjacent ‘1’ values between adjacent bit lines.
(60) TABLE 11B. Exemplary Weight Combinations for W+, W− Pairs

        BLPWDN   BL01   BL0   BL1   BL2   BL3   BL23   BLPWDN
row0                    1     0     1     0
row1                    0     1     0     1
row2                    1     0     1     0
row3                    0     1     0     1
(61) Table 11C shows an example where the distributed weights need to be re-mapped. Here, there are adjacent ‘1’s in BL1 and BL2, which causes adjacent bit line coupling. The values are therefore re-mapped as shown in Table 11D, resulting in no adjacent ‘1’ values between any adjacent bit lines. In addition, re-mapping reduces the total current along each bit line, which yields a more precise value on that bit line and more evenly distributed power consumption across the bit lines. Optionally, the additional bit lines (BL01, BL23) can be used as redundant columns.
(62) TABLE 11C. Exemplary Weight Combinations for W+, W− Pairs

        BLPWDN   BL01   BL0   BL1   BL2   BL3   BL23   BLPWDN
row0                    0     1     1     0
row1                    0     1     1     0
row2                    0     1     1     0
row3                    0     1     1     0
(63) TABLE 11D. Remapped Weight Combinations for W+, W− Pairs

        BLPWDN   BL01   BL0   BL1   BL2   BL3   BL23   BLPWDN
row0             0      0     1     0     0     1
row1             1      0     0     1     0     0
row2             0      0     1     0     0     1
row3             1      0     0     1     0     0
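The adjacency check and re-mapping illustrated by Tables 11C and 11D can be sketched as follows. This is a simplified editorial model: the column order and redundant-column pairing follow Table 11A, but the move-one-cell policy is an assumption; the patent's Table 11D additionally alternates rows to balance bit-line current.

```python
# Physical column order from Table 11A, redundant lines at the edges.
ORDER = ["BL01", "BL0", "BL1", "BL2", "BL3", "BL23"]
# Assumed pairing: redundant BL01 absorbs values moved off BL0/BL1,
# and BL23 absorbs values moved off BL2/BL3.
REDUNDANT = {"BL0": "BL01", "BL1": "BL01", "BL2": "BL23", "BL3": "BL23"}

def needs_remap(row):
    """True if any two physically adjacent bit lines are both used ('1')."""
    vals = [row[bl] for bl in ORDER]
    return any(a and b for a, b in zip(vals, vals[1:]))

def remap(row):
    """Resolve each adjacent '1' pair by moving one cell into the redundant
    column for its pair (a simplified policy that reproduces the
    Table 11C -> 11D example; a full mapper would also alternate rows to
    balance current)."""
    out = dict(row)
    if out["BL0"] and out["BL1"]:          # move the left cell outward
        out["BL0"] = 0
        out["BL01"] = 1
    for left, right in [("BL1", "BL2"), ("BL2", "BL3")]:
        if out[left] and out[right]:       # move the right cell outward
            out[right] = 0
            out[REDUNDANT[right]] = 1
    return out
```

Applying `remap` to a Table 11C row (BL1 = BL2 = 1) moves the BL2 value into BL23, matching the corresponding row of Table 11D.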
(64) Tables 11E and 11F depict other embodiments that remap noisy or defective cells into redundant columns, such as BL01 and BL23 in Table 11E or BL0B and BL1B in Table 11F.
(65) TABLE 11E. Remapped Weight Combinations for W+, W− Pairs
(“X” denotes a noisy or defective cell, not used)

        BLPWDN   BL01   BL0   BL1   BL2   BL3   BL23   BLPWDN
row0             0      0     1     0     0     1
row1             1      0     X     1     0     0
row2             0      0     1     X     0     1
row3             1      0     0     1     0     0
(66) TABLE 11F. Remapped Weight Combinations for W+, W− Pairs
(“X” denotes a noisy or defective cell, not used)

        BLPWDN   BL0A   BL0B   BL1A   BL1B   BLPWDN
row0             1      0      0      0
row1             X      1      1      0
row2             1      0      X      1
row3             0      0      1      0
(67) Table 11G shows an embodiment of a physical arrangement of an array that is suitable for
(68) TABLE 11G. Exemplary Layout for W+, W− Pairs

        BLPWDN   BL0      BLPWDN   BL1      BLPWDN
row0             W01+/−            W02+/−
row1             W11+/−            W12+/−
row2             W21+/−            W22+/−
row3             W31+/−            W32+/−
(69) In Tables 10A-10B and 11A-11G, source line pulldown bit line BLPWDN can be implemented as a true dummy bit line BLDUM or as an isolation bit line BLISO, meaning these bit lines serve to isolate the data bit lines from each other so that FG-FG coupling of adjacent cells is avoided. Because these bit lines carry no data, their cells are tuned (programmed or erased) to a state that does not cause FG-FG coupling and does not leave them vulnerable to disturbance when other cells in the same row or sector are tuned (programmed or erased); for example, the cells are deeply or partially programmed, or partially erased, so that the FG voltage is at a low-level state.
(70) In another embodiment, a tuning bit line coupled to a column of cells is adjacent to a target bitline coupled to a column of cells, and the tuning bit line cells are used to tune the target bitline cells to desired target values during a programming operation using the FG-FG coupling between adjacent cells. Optionally, a source line pull down bitline can be used on the side of the target bit line opposite the side adjacent to the tuning bitline.
(71) Alternative embodiments for mapping noisy or defective cells can be implemented where such cells are designated as non-used cells, meaning they are (deeply) programmed so as not to contribute any value to the neuron output.
(72) Alternative embodiments for identifying fast cells (cells that can be programmed to reach a certain value faster than a typical cell) can be implemented, where fast cells are identified and undergo a more precise tuning algorithm so as not to overshoot the target during a programming operation.
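A minimal sketch of such a tuning loop follows, using a toy cell model. The cell behavior, step policy, and helper names are illustrative assumptions, not the patent's algorithm; the point is only that scaling each pulse by the measured programming speed keeps a fast cell from overshooting.

```python
class SimCell:
    """Toy model: each programming pulse moves the cell's stored value by
    step * speed; a 'fast' cell has speed > 1 and risks overshooting."""
    def __init__(self, speed=1.0):
        self.value = 0.0
        self.speed = speed

    def pulse(self, step):
        self.value += step * self.speed

    def read(self):
        return self.value

def calibrate_speed(cell, probe_step=0.01):
    """Identify fast cells by measuring movement from one small probe pulse."""
    before = cell.read()
    cell.pulse(probe_step)
    return (cell.read() - before) / probe_step

def tune(cell, target, tol=0.02, max_pulses=500):
    """Program-verify loop: each pulse requests only half the remaining
    error, scaled down by the measured speed, so even a fast cell
    approaches the target from below."""
    speed = calibrate_speed(cell)
    for _ in range(max_pulses):
        err = target - cell.read()
        if err <= tol:
            break
        cell.pulse(0.5 * err / speed)
    return cell.read()
```

With this policy a cell three times faster than nominal still converges monotonically, because the requested step shrinks in proportion to the measured speed.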
(75) Various output circuits will now be described that can be used with any of the VMM systems described herein.
(77) Summer circuits 2403 can include the circuits that are shown in
(82) For input Vin0: when switches 2754 and 2751 are closed and switches 2753, 2752, and 2757 are open, input Vin0 is applied to the top terminal of capacitor 2758, whose bottom terminal is connected to VREF. Then switch 2751 is opened and switch 2753 is closed to transfer the charge from capacitor 2758 into feedback capacitor 2756. The output is then VOUT = (C2758/C2756) * Vin0 (for the case VREF = 0).
(83) For input Vin1: when switches 2753, 2754, and 2757 are closed and switches 2751 and 2752 are open, both terminals of capacitor 2758 are discharged to VREF. Then switch 2754 is opened and switch 2752 is closed, charging the bottom terminal of capacitor 2758 to Vin1, which in turn charges feedback capacitor 2756 to VOUT = −(C2758/C2756) * Vin1 (for the case VREF = 0).
(84) Hence, if the Vin1 sequence described above is performed after the Vin0 sequence, VOUT = (C2758/C2756) * (Vin0 − Vin1) for the case VREF = 0. This can be used, for example, to realize W = W+ − W−.
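The net two-phase transfer can be checked numerically. The function below is an idealized model of paragraphs (82)-(84): the capacitor values are placeholders, and parasitics and switch charge injection are ignored.

```python
def sc_difference(vin0, vin1, c_sample=1e-12, c_feedback=1e-12, vref=0.0):
    """Idealized two-phase switched-capacitor output:
    phase 1 transfers +(Cs/Cf) * (Vin0 - VREF),
    phase 2 transfers -(Cs/Cf) * (Vin1 - VREF),
    so the net output realizes a differential weight W = W+ - W-."""
    gain = c_sample / c_feedback
    return gain * (vin0 - vref) - gain * (vin1 - vref)
```

For unit gain (equal capacitors), `sc_difference(0.8, 0.3)` returns approximately 0.5, the difference of the two sampled inputs.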
(85) Each ADC as shown in
(86) With reference again to
(87) In the embodiments that involve sequential operation of the arrays, power is more evenly distributed.
(88) In the embodiments that utilize the neuron (bit line) binary index method, power consumption in the array is reduced, since each cell coupled to a bit line stores only binary levels; the 2^n level weighting is accomplished by the summer circuit.
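The binary-index summation amounts to a power-of-two weighted sum over the bit-line outputs, as in the following sketch (the function name and LSB-first ordering are editorial choices):

```python
def binary_weighted_sum(bitline_outputs):
    """Summer model for the binary-index method: bit line n carries a
    binary (0/1) output and the summer scales it by 2**n, so k bit lines
    together resolve 2**k levels while each cell stores only one bit."""
    return sum(out * (2 ** n) for n, out in enumerate(bitline_outputs))

# Four bit lines holding binary outputs, least significant first:
level = binary_weighted_sum([1, 0, 1, 1])   # 1 + 4 + 8 = 13
```

Because each cell holds only a binary level, cell current (and hence array power) stays small and uniform; the multi-level resolution is recovered in the summer.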
(93) Neuron output circuit 2811 or 2821 can, for example, perform summing, scaling, normalization, or arithmetic operations, without limitation. Converter 2822, for example, can perform ADC, PDC, AAC, or APC operation, without limitation.
(96) In one embodiment, VRAMP 3050 is provided to the inverting input of comparator 3004. The digital output (count value) 3021 is produced by ramping VRAMP 3050 until the comparator 3004 switches polarity, with counter 3020 counting clock pulses from the beginning of the ramp.
(97) In another embodiment, VREF 3055 is provided to the inverting input of comparator 3004. VC 3010 is ramped down by ramp current 3051 (IREF) until VOUT 3003 reaches VREF 3055, at which point the EC 3005 signal disables the count of counter 3020. The (n-bit) ADC 3000 is configurable to have a lower precision (fewer than n bits) or a higher precision (more than n bits), depending on the target application. Precision is configured by adjusting the capacitance of capacitor 3002, the current 3051 (IREF), the ramping rate of VRAMP 3050, or the clocking frequency of clock 3041, without limitation.
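The counting scheme can be modeled with the ideal ramp equation; the component values below are placeholders for illustration, not values from the patent.

```python
import math

def ramp_adc_count(v_out, v_ref, c_int, i_ref, f_clk):
    """Idealized model of the scheme in paragraph (97): the integrating
    capacitor, holding voltage v_out, is ramped down by current i_ref
    while a counter accumulates clock pulses; counting stops when the
    voltage reaches v_ref, so the count is proportional to v_out - v_ref."""
    t_cross = c_int * (v_out - v_ref) / i_ref   # time for the ramp to hit v_ref
    return math.floor(t_cross * f_clk)          # clock pulses accumulated
```

Each knob named in the text changes the resolution in this model: a larger capacitor, a smaller ramp current, or a faster clock all increase the count resolved over the same input range.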
(98) In another embodiment, the ADC circuit of one VMM array is configured to have a precision lower than n bits and the ADC circuit of another VMM array is configured to have a precision greater than n bits.
(99) In another embodiment, one instance of serial ADC circuit 3000 of one neuron circuit is configured to combine with another instance of serial ADC circuit 3000 of the next neuron circuit to produce an ADC circuit with higher than n-bit precision, such as by combining the integrating capacitors 3002 of the two instances of serial ADC circuit 3000.
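Under the ideal ramp model, tying the two integrating capacitors together doubles the ramp time, so the same clock resolves twice as many codes over the same input range, i.e. roughly one extra bit. A numeric sketch (illustrative component values, not from the patent):

```python
import math

def serial_adc_codes(full_scale, c_int, i_ref, f_clk):
    """Distinct counter codes a serial (ramp) ADC resolves over
    `full_scale` volts: ramp time C*V/I multiplied by the clock rate."""
    return int(round(full_scale * c_int * f_clk / i_ref))

# One instance:
single = serial_adc_codes(1.0, 1e-12, 1e-6, 1e8)
# Two instances with their integrating capacitors combined (2C): the ramp
# is twice as slow, so the counter resolves twice as many codes.
combined = serial_adc_codes(1.0, 2e-12, 1e-6, 1e8)
extra_bits = math.log2(combined) - math.log2(single)   # about one extra bit
```

This mirrors claim 30, where two n-bit converters are combined into a converter whose output has a precision greater than n bits.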
(103) Additional implementation details regarding configurable output neurons (such as configurable neuron ADC) circuits can be found in U.S. patent application Ser. No. 16/449,201, filed on Jun. 21, 2019 by the same assignee, and titled “Configurable Input Blocks and Output Blocks and Physical Layout for Analog Neural Memory in a Deep Learning Artificial Neural Network,” which is incorporated by reference herein.
(104) It should be noted that, as used herein, the terms “over” and “on” both inclusively include “directly on” (no intermediate materials, elements or space disposed therebetween) and “indirectly on” (intermediate materials, elements or space disposed therebetween). Likewise, the term “adjacent” includes “directly adjacent” (no intermediate materials, elements or space disposed therebetween) and “indirectly adjacent” (intermediate materials, elements or space disposed therebetween), “mounted to” includes “directly mounted to” (no intermediate materials, elements or space disposed therebetween) and “indirectly mounted to” (intermediate materials, elements or space disposed therebetween), and “electrically coupled” includes “directly electrically coupled to” (no intermediate materials or elements therebetween that electrically connect the elements together) and “indirectly electrically coupled to” (intermediate materials or elements therebetween that electrically connect the elements together). For example, forming an element “over a substrate” can include forming the element directly on the substrate with no intermediate materials/elements therebetween, as well as forming the element indirectly on the substrate with one or more intermediate materials/elements therebetween.