Serialized electro-optic neural network using optical weights encoding
11373089 · 2022-06-28
CPC classification: G06N3/0675 (PHYSICS)
Abstract
Most artificial neural networks are implemented electronically using graphical processing units to compute products of input signals and predetermined weights. The number of weights scales as the square of the number of neurons in the neural network, causing the power and bandwidth associated with retrieving and distributing the weights in an electronic architecture to scale poorly. Switching from an electronic architecture to an optical architecture for storing and distributing weights alleviates the communications bottleneck and reduces the power per transaction for much better scaling. The weights can be distributed at terabits per second at a power cost of femtojoules per bit (versus gigabits per second and picojoules per bit for electronic architectures). The bandwidth and power advantages are even better when distributing the same weights to many optical neural networks running simultaneously.
Claims
1. An apparatus for implementing an optical neural network, the apparatus comprising: a first layer of neurons to emit a first array of data signals x.sub.j at a first optical frequency, where j=1, 2, . . . N, and N is a positive integer; a second layer of neurons, each neuron in the second layer of neurons configured to calculate a weighted average y.sub.i of the first array of data signals, where y.sub.i=sum (w.sub.ijx.sub.j), w.sub.ij is a weight, i=1, 2 . . . M, and M is a positive integer; and an optical weight transmitter, in optical communication with the second layer of neurons, to transmit an array of weight signals at a second optical frequency towards the second layer for calculating the weighted average y.sub.i, where the weight signals represent the weight w.sub.ij, wherein each neuron in the second layer comprises: a homodyne receiver, operably coupled to the first layer and the optical weight transmitter, to generate a weighted product between each data signal in the first array of data signals and a corresponding weight signal in the array of weight signals; and an integrator, operably coupled to the homodyne receiver, to calculate the weighted average y.sub.i from the weighted products.
2. The apparatus of claim 1, wherein the integrator comprises at least one of an RC filter or an integrating operational amplifier.
3. The apparatus of claim 1, wherein each neuron further comprises a nonlinear function unit to generate an output z.sub.i from the weighted average y.sub.i.
4. The apparatus of claim 3, wherein each neuron further comprises a neuron transmitter to transmit the output z.sub.i to a third layer in the optical neural network.
5. An apparatus for implementing an optical neural network, the apparatus comprising: a first layer of neurons to emit a first array of data signals x.sub.j at a first optical frequency, where j=1, 2, . . . N, and N is a positive integer; a second layer of neurons, each neuron in the second layer of neurons configured to calculate a weighted average y.sub.i of the first array of data signals, where y.sub.i=sum (w.sub.ijx.sub.j), w.sub.ij is a weight, i=1, 2 . . . M, and M is a positive integer; and an optical weight transmitter, in optical communication with the second layer of neurons, to transmit an array of weight signals at a second optical frequency towards the second layer for calculating the weighted average y.sub.i, where the weight signals represent the weight w.sub.ij, wherein the second optical frequency is greater than the first optical frequency by at least about 2π×20 GHz.
6. The apparatus of claim 1, wherein the optical weight transmitter comprises an array of grating couplers.
7. The apparatus of claim 1, wherein each neuron in the first layer is configured to transmit a copy of the first array of data signals x.sub.j in a serial manner toward every neuron in the second layer.
8. The apparatus of claim 1, wherein the first layer of neurons and the second layer of neurons are in a first optical neural network and further comprising: a second optical neural network, in optical communication with the optical weight transmitter, to compute a product of another array of data signals with the array of weight signals.
9. A method for implementing an optical neural network, the method comprising: transmitting a first array of data signals x.sub.j at a first optical frequency from a first layer of neurons in the optical neural network to a second layer of neurons in the optical neural network, where j=1, 2, . . . N, and N is a positive integer; transmitting an array of weight signals at a second optical frequency to the second layer for calculating a weighted average y.sub.i, where the weight signals represent the weight w.sub.ij; and calculating the weighted average y.sub.i, of the first array of data signals at the second layer of neurons, where y.sub.i=sum (w.sub.ijx.sub.j), w.sub.ij is a weight, i=1, 2 . . . M, and M is a positive integer, wherein calculating the weighted average comprises: generating, at a homodyne receiver, a weighted product between each data signal in the first array of data signals and a corresponding weight signal in the array of weight signals; and integrating the weighted products to form the weighted average y.sub.i.
10. The method of claim 9, further comprising transmitting an output z.sub.i, from the second layer in the optical neural network to a third layer in the optical neural network.
11. The method of claim 9, further comprising: transmitting a copy of the first array of data signals x.sub.j in a serial manner toward every neuron in the second layer.
12. The method of claim 9, wherein the first layer of neurons and the second layer of neurons are in a first optical neural network and further comprising: transmitting the array of weight signals to a layer in a second optical neural network.
13. An optical processing system comprising: a plurality of optical neural networks; and an optical weight transmitter, in optical communication with the plurality of optical neural networks, to transmit an array of weight signals to each optical neural network in the plurality of optical neural networks, wherein each optical neural network in the plurality of optical neural networks comprises: a homodyne receiver, operably coupled to the optical weight transmitter, to generate a weighted product between a corresponding weight signal in the array of weight signals and a corresponding data signal in an array of data signals; and an integrator, operably coupled to the homodyne receiver, to calculate a weighted average from the weighted product.
14. The optical processing system of claim 13, wherein the optical weight transmitter is integrated in a photonic integrated circuit and comprises a grating coupler to couple the array of weight signals to the plurality of optical neural networks.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The skilled artisan will understand that the drawings primarily are for illustrative purposes and are not intended to limit the scope of the inventive subject matter described herein. The drawings are not necessarily to scale; in some instances, various aspects of the inventive subject matter disclosed herein may be shown exaggerated or enlarged in the drawings to facilitate an understanding of different features. In the drawings, like reference characters generally refer to like features (e.g., functionally similar and/or structurally similar elements).
DETAILED DESCRIPTION
(13) Apparatus, systems, and methods described herein employ optical weight encoding for a neural network to perform learning and inference at energy consumptions that are orders of magnitude lower than those of conventional electronic neural networks, such as those using graphical processing units (GPUs). The weights of the neural network that determine the matrix transformations of input signals between neuron layers are encoded optically, in contrast to hardware-encoded implementations in an electronic neural network. The optically encoded weight transformation can be easily reprogrammed and can be particularly advantageous in data centers. In these data centers, a central transmitter can encode or modulate the weights into optical beams, which are subsequently divided and distributed (e.g., using beam splitters) across a large number of neural network inference processors, thereby reproducing the weights without retrieving the weights for each calculation.
(14) Optical Encoding for Neural Networks
(15) In this optical encoding, X.sub.i is the ith data vector entering the neural network. It can be a 1×N row matrix including N amplitudes E.sub.ij, j=1, 2 . . . N, where each amplitude is encoded in 2.sup.d levels, i.e., each amplitude stores d bits of information. Y.sub.i is the ith data output vector from the neural network, i.e., the neural network performs the function Y.sub.i=NN(X.sub.i). Δt is the duration of each time bin in which the input vector X.sub.i is encoded. M is the number of layers in the neural network. In practice, the number of layers M can be 2 or greater (e.g., 2 layers, 3 layers, 5 layers, 10 layers, 15 layers, 20 layers, 25 layers, 30 layers, 40 layers, 50 layers, or more, including any values and sub-ranges in between).
(16) In this neural network, W.sub.k is the kth weight matrix, including elements w.sub.klm, where l, m=1, 2 . . . N and k=1, 2 . . . M. At each layer (e.g., kth layer), W.sub.k is a two-dimensional (2D) matrix. Data representing these weights are encoded into the amplitudes Ew.sub.klm of optical signals (also referred to as weight signals). In other words, by detecting the amplitudes of the weight signals, one can estimate the corresponding weight value Ew.sub.klm.
(17) In addition, ω.sub.s is the carrier frequency of the input data (x); ω.sub.w is the carrier frequency of the weight signals. For optical fields in the telecom band, ω.sub.s can be, for example, about 2π×200 THz. In some cases, ω.sub.w can be greater than ω.sub.s. For example, the difference between these frequencies can be Δω˜2π×20 GHz. In this case, the detectors in the neurons detect a beat note at the difference frequency whose envelope is modulated with the product of the weight and the data signal.
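The beat-note detection described above can be illustrated with a short numerical sketch. The amplitudes, time window, and variable names below are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

# Interference of a data field and a frequency-shifted weight field:
# the detected intensity carries a beat note at the difference
# frequency whose amplitude is proportional to the product of the
# two field amplitudes. Values below are illustrative assumptions.
t = np.linspace(0, 1e-9, 2000)        # 1 ns observation window
d_omega = 2 * np.pi * 20e9            # difference frequency, rad/s
E_s, E_w = 0.8, 0.5                   # data and weight amplitudes
intensity = np.abs(E_s + E_w * np.exp(1j * d_omega * t)) ** 2
# intensity = E_s^2 + E_w^2 + 2*E_s*E_w*cos(d_omega*t), so the
# peak-to-peak beat swing divided by 4 recovers the product E_s*E_w
product_estimate = (intensity.max() - intensity.min()) / 4
print(product_estimate)  # ≈ 0.4 = E_s * E_w
```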
(18) Calculating Products
(19) For each transformation performed by the neural network, the data entry x.sub.ij is first multiplied by the weight w.sub.klm to produce a product. This product can be obtained by colliding the corresponding data field amplitude E.sub.ij with the weight field amplitude Ew.sub.klm on a homodyne receiver. The homodyne receiver includes a beam splitter followed by two photodetectors. Subtracting the intensities measured by the two detectors recovers |E.sub.ij∥Ew.sub.klm|, which is the product used for the transformation.
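A minimal numerical sketch of the balanced homodyne product follows. The function name and amplitudes are illustrative; note that the difference signal carries a conventional factor of 2 and, for real in-phase fields, preserves the sign of the product:

```python
import numpy as np

def homodyne_product(e_data, e_weight):
    """Balanced homodyne receiver: a 50/50 beam splitter combines the
    two fields, and the difference of the two photodetector currents
    isolates the interference (product) term."""
    out_plus = (e_data + e_weight) / np.sqrt(2)   # beam splitter port 1
    out_minus = (e_data - e_weight) / np.sqrt(2)  # beam splitter port 2
    i_plus = np.abs(out_plus) ** 2                # photodetector intensities
    i_minus = np.abs(out_minus) ** 2
    # The |E_data|^2 and |E_weight|^2 terms cancel in the subtraction,
    # leaving 2 * Re(E_data * conj(E_weight))
    return i_plus - i_minus

print(homodyne_product(3.0, 2.0))  # 12.0, i.e. 2 * |3| * |2|
```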
(20) Calculating Weighted Averages at Each Neuron
(21) After calculating each product between a data entry and the corresponding weight, the neural network adds these products together to compute a weighted average at each neuron. An electronic integrator following each homodyne detector sums the products of the data entries and the weights. For example, the electronic integrator can include an RC filter. In another example, the integrator can include an integrating operational amplifier (op-amp). This produces the weighted average Sum(|E.sub.il∥Ew.sub.klm|), l=1, 2 . . . N, on neuron m. The process of computing the matrix-vector product can be ultrafast, since it relies on the near-instantaneous processes of photocarrier generation and accumulation.
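The integration step can be sketched as a serial accumulation of homodyne products. The values are illustrative, and the integrator is modeled as an ideal accumulator rather than a physical RC filter:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
x = rng.uniform(0, 1, N)   # serialized data amplitudes |E_il|
w = rng.uniform(0, 1, N)   # weight amplitudes |Ew_klm| for one neuron

# The homodyne receiver emits one product per time bin; the integrator
# (an RC filter or integrating op-amp) accumulates them over N bins.
acc = 0.0
for l in range(N):
    acc += w[l] * x[l]

# The accumulated charge represents the weighted sum for this neuron
assert np.isclose(acc, np.dot(w, x))
```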
(22) Nonlinear Function
(23) The weighted average in the neural network is then passed through a nonlinear electronic function ƒ. For example, the function can be a threshold function. In another example, the nonlinear function can be a sigmoidal function. The nonlinear function produces the output ƒ(Sum[|E.sub.il∥Ew.sub.klm|, l=1, 2 . . . N]), which is stored until translated into an optical signal, as detailed below.
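The nonlinear function ƒ can be sketched with the two examples named above, a threshold and a sigmoid. The decision level and the input value are illustrative assumptions:

```python
import math

def sigmoid(y):
    """Sigmoidal nonlinearity mapping a weighted average to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-y))

def threshold(y, theta=0.0):
    """Hard threshold nonlinearity with decision level theta."""
    return 1.0 if y >= theta else 0.0

# The neuron output z = f(y) is stored until re-encoded optically
y = 0.0  # example weighted average
print(sigmoid(y), threshold(y))  # 0.5 1.0
```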
(24) A Neural Network Using Optical Weight Encoding
(26) Each layer 110 includes a corresponding beam splitter 140a-140k with two inputs and at least one output, which is coupled to a corresponding neuron array 130. Each neuron array 130 includes neurons 131.
(27) The beam splitter's first input is optically coupled to a corresponding array of optical weight transmitters 120a-120k. If the neural network 100 is one of several identical neural networks running in parallel on the same chip or on coupled chips, it may receive some or all of the weights from a set of weight transmitters shared by all of the neural networks. This reduces power consumption as explained in greater detail below.
(28) The beam splitter's second input is connected to the neuron array 130 in the preceding layer or laser transmitters 150 in the case of the input layer 110a. The optical weight transmitters 120 emit optical weights 121a-121k, which are encoded serially on optical carriers and combined with optical input signals 111a-111k entering the beam splitter's second input. The resulting signals are detected by the neurons 131 in the neuron array 130 at the beam splitter's output.
(32) These optical amplitudes E.sub.ij are sent to N neurons in step 192. At the same time, the optical weight transmitters 120 transmit weight signals to the N neurons, as illustrated in the figures.
(33) The transmitters in layer k=2 send their signals towards the next layer (here, layer 3) in step 196. The transmitter in each neuron transmits equally to the receivers in the subsequent (k=3) layer. The transmissions can occur in sequence: on the first clock cycle, neuron 1 of layer 2 emits to all neurons of layer 3. On the next clock cycle, neuron 2 emits to all neurons in layer 3. This process continues until all neurons in layer 2 have transmitted to layer 3. Each of these transmissions from layer 2 are again weighted by optical signals Ew.sub.lkm in the next layer of neurons (k=3).
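The clock-cycle-by-clock-cycle broadcast described above amounts to a serialized matrix-vector product, as the following sketch (with an illustrative layer size and random amplitudes) shows:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4                              # neurons per layer (illustrative)
z_prev = rng.uniform(0, 1, N)      # outputs of the transmitting layer
W = rng.uniform(0, 1, (N, N))      # optical weights for the next layer

# Serial broadcast: on clock cycle l, neuron l emits its amplitude to
# all N receivers; each receiver weights it (homodyne) and integrates.
y = np.zeros(N)
for l in range(N):                 # one clock cycle per transmitter
    for m in range(N):             # all receivers act in parallel
        y[m] += W[m, l] * z_prev[l]

# After N clock cycles the layer has computed the matrix-vector product
assert np.allclose(y, W @ z_prev)
```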
(34) The process repeats up to the last layer 110k (k=M in subscript notation) (step 198). This last layer derives the final output vector y.sub.ij, with j=1, 2 . . . N, as shown in the figures.
(35) Analog-to-digital (A/D) conversion can be a useful component for a neural network where the input and output are digital, but which internally works on analog encoding. There are several ways to simplify A/D conversion in this optical implementation. Suppose that the serialized input includes 4-bit numbers (a relatively low-bit encoding for illustration) x.sub.4, x.sub.3, x.sub.2, x.sub.1, where x.sub.k=0, 1. This can be encoded into an analog signal on the first encoding step (e.g., step 190 above).
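A minimal sketch of this first encoding step for a 4-bit word follows. The helper name and the normalization to [0, 1] are assumptions for illustration:

```python
def bits_to_amplitude(bits):
    """Map a d-bit word (MSB first) onto one of 2**d analog levels,
    normalized to the range [0, 1]."""
    value = 0
    for b in bits:
        value = (value << 1) | b   # shift in each bit, MSB first
    return value / (2 ** len(bits) - 1)

# 4-bit word x4 x3 x2 x1 = 1011 -> level 11 of 15
print(bits_to_amplitude([1, 0, 1, 1]))  # 11/15 ≈ 0.733
```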
(36) Chip-Based Implementation
(38) Each of these neural networks can be implemented on the same PIC or on a different PIC or combination of PICs with vertical grating couplers to distribute the optical weights and other optical signals among PICs. In these implementations, the neuron arrays are implemented in the PIC(s), with grating couplers on the receiving side to couple incoming light from the optical weight transmitter array(s) 120 into on-chip waveguides, which guide the light to an on-chip homodyne detector as described above.
(39) The grating couplers used to couple light into and out of the PICs can be directional. For example, one type of grating coupler couples light through the top of the PIC, and another type couples light through the bottom of the PIC. These two types of vertical grating couplers can be employed to receive and transmit signals on each of the neuron layers, e.g., in stack of PICs used to implement different layers of the neural network.
(40) In the neural network described herein, it can be helpful for each layer to uniformly distribute the data signals in a free-space implementation, e.g., using a series of beam displacers. More information about beam displacers can be found in P. Xue et al., “Observation of quasiperiodic dynamics in a one-dimensional quantum walk of single photons in space,” New J. Phys. 16, 053009 (6 May 2014), which is hereby incorporated by reference in its entirety. In another example, the entire circuit can be implemented on one or a series of PICs. The PICs in this case can include cascaded and offset Mach-Zehnder interferometers to spread light from one layer uniformly into all other layers.
(41) Power Consumption in Neural Network Using Optical Weight Encoding
(42) The entire set of weights in the neural network can be continuously streamed to the neurons. Effectively, the neural network “program/connectivity” is encoded into a large number of optical data streams. This can result in significant power savings, especially when the same data streams are applied across many copies of neural networks. It is common to run many instances of a neural network in parallel. For example, data centers by Google, Apple, Microsoft, and Amazon run very large numbers of instances of speech recognition simultaneously. Each of these instances uses the same “program” or weights W.sub.j. In an electronic neural network, each of the weights is recalled from memory for each running of the neural network on each instance. In contrast, an optical neural network allows a data center to encode the weights into optical intensities just once, and then to copy these weights using beam splitters and route them to many optical neural network processors. This amortizes the energy cost of encoding the weights as the number of neural network instances grows larger. Other energy costs are those related to the optical energy in each time step of the neural network processor, plus those in the detectors, integrators, and modulators.
(43) Energy Consumption in Neural Network Using Optical Weight Encoding
(44) Suppose that the optical neural network operates at a 10 GHz modulation rate (a reasonable assumption for high-speed modulators and photodetectors like those in the neural network 100 described above). In the estimate below, U.sub.mod is the energy per modulator operation, U.sub.opt is the optical energy per detected symbol, U.sub.pd is the energy per photodetection, N is the number of neurons per layer, and N.sub.p is the number of processors sharing the same weight transmitters.
(45) Summing these four types of contributions, the energy per time step P is about M×(U.sub.mod×(1+N/N.sub.p)+N×U.sub.opt+N×U.sub.pd). This yields a FLOPs (floating-point operations) per second-to-power ratio of R/P=N/(U.sub.mod×(1+N/N.sub.p)+N×U.sub.opt+N×U.sub.pd). For N.sub.p at about 1000 and N at about 1000, the resulting R/P tends to about 1/(U.sub.opt+U.sub.pd), which is about 5×10.sup.13 FLOPs/Joule, or P/R at about 20 fJ/FLOP. By comparison, modern digital electronic architectures (e.g., graphical processing units/GPUs) can only achieve P/R at about 100 pJ per FLOP. Therefore, the optical neural network described herein can be nearly four orders of magnitude more energy efficient than digital electronic implementations.
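The estimate above can be checked numerically. The sketch below assumes illustrative device energies (the exact values of U.sub.mod, U.sub.opt, and U.sub.pd are not specified in this excerpt; they are chosen so that U.sub.opt+U.sub.pd is about 20 fJ, consistent with the text):

```python
# Energy-per-FLOP estimate using the expressions above. The device
# energies below are assumed, illustrative values.
U_mod = 1e-15    # modulator energy per symbol, J (assumed)
U_opt = 10e-15   # optical energy per symbol, J (assumed)
U_pd = 10e-15    # photodetector energy per symbol, J (assumed)
N = 1000         # neurons per layer
N_p = 1000       # processors sharing the weight transmitters

# R/P = N / (U_mod*(1 + N/N_p) + N*U_opt + N*U_pd) from the text
R_over_P = N / (U_mod * (1 + N / N_p) + N * U_opt + N * U_pd)
print(f"{R_over_P:.2e} FLOPs/J")  # ≈ 5e13 FLOPs/J, i.e. ≈ 20 fJ/FLOP
```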
(46) Optical Neural Networks Using Photonic Integrated Circuit
(47) As mentioned above, the entire circuit (e.g., similar to the one shown in the figures) can be implemented on one or more photonic integrated circuits (PICs).
(48) The PICs allow on-chip implementations of neural networks. For instance, small processing units are distributed across an integrated circuit. These processing units can use digital or analog encoding, and each processing unit represents a neuron. Each neuron can communicate to all other neurons in the neural network by fan-out via an optical 2D waveguide that distributes light across the PIC or electrically through a mesh of wires that connects all neurons to all other neurons. The weights can be encoded in memory that is placed directly above the neurons and sent into the neurons using electrical or optical connections. In this manner, the entire neural network system is integrated into one chip.
(49) In the on-chip implementation, each neuron can broadcast an optical signal to all of the other neurons or to at least a subset of neurons depending on the neural network process.
(50) In some examples, the neurons can compute the products by means other than the interference used in the neuron described above.
(51) Removing Weight Bottlenecks in Digital Neural Networks
(52) In some examples, all signals remain digital in the neural network. Optical signals are used for all of the communication, as illustrated in the figures.
(55) These processors 530 may be optical neural networks like the one described above.
(56) In this system 500, the same set of weights is distributed optically across a large number of processors, so that the energy per memory weight retrieval is significantly reduced. In some examples, the energy per weight retrieval can be about 1 fJ/bit, which is only 0.1% of the energy for weight retrieval in existing systems (i.e., a 1000-fold savings). In addition, the system 500 can operate without complex and expensive memory next to processors. The immediate and low-power broadcast of weights to possibly thousands of processors can also greatly accelerate learning during operation.
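The amortization argument can be made concrete with a short sketch. The 1 pJ/bit encoding energy is an assumed figure, chosen to be consistent with the 1000-fold savings stated above:

```python
# Amortizing the one-time optical weight-encoding energy over N_p
# processors that receive beam-split copies of the same weights.
E_encode = 1e-12    # energy to encode one weight bit, J (assumed ~1 pJ)
for n_p in (1, 10, 1000):
    per_processor = E_encode / n_p
    print(f"{n_p:5d} processors -> {per_processor:.0e} J per bit each")
```

With 1000 processors sharing the weight stream, the per-copy cost falls to about 1 fJ/bit, matching the savings stated in paragraph (56).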
CONCLUSION
(60) While various inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize or be able to ascertain, using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure.
(61) Also, various inventive concepts may be embodied as one or more methods, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
(62) All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
(63) The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.”
(64) The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
(65) As used herein in the specification and in the claims, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e., “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in the claims, shall have its ordinary meaning as used in the field of patent law.
(66) As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
(67) In the claims, as well as in the specification above, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures, Section 2111.03.