OPTICAL DIFFRACTIVE PROCESSING UNIT
20220164634 · 2022-05-26
Inventors
CPC classification
G06N3/0675
PHYSICS
International classification
Abstract
An optical diffractive processing unit includes input nodes, output nodes, and neurons. The neurons are connected to the input nodes through optical diffractions. Weights of connection strength of the neurons are determined based on diffractive modulation. Each optoelectronic neuron is configured to perform an optical field summation of weighted inputs and generate a unit output by applying a complex activation, which occurs naturally in a photoelectronic conversion, to the optical field. Each neuron is a programmable device.
Claims
1. An optical diffractive processing unit, comprising: input nodes; output nodes; neurons, connected to the input nodes through optical diffractions, wherein weights of connection strength of the neurons are determined based on diffractive modulation, each optoelectronic neuron is configured to perform an optical field summation of weighted inputs and generate a unit output by applying a complex activation to an optical field occurring naturally in a photoelectronic conversion, and the neurons are formed by a programmable device; the programmable device comprises an optical neural network containing a digital micromirror device, a spatial light modulator, and a photodetector; the digital micromirror device is configured to provide a high optical contrast for information coding; the spatial light modulator is configured to perform the diffractive modulation, wherein weighted connections between the input nodes and the neurons are implemented by free-space optical diffraction, and a receiving field of each neuron is determined by an amount of diffraction from a plane of the spatial light modulator to a plane of the photodetector; and the photodetector is configured to implement the optical field summation and the complex activation.
2. The optical diffractive processing unit of claim 1, wherein the optical neural network comprises multiple single-layer diffractive layers, and each single-layer diffractive layer comprises an input coding layer, a diffractive connection layer, an optical summation layer, and an optical non-linearity layer.
3. The optical diffractive processing unit of claim 2, wherein the input coding layer is implemented by a programmable input module configured to encode input data into incident light, a physical dimension of encoding comprises amplitude encoding, phase encoding or both the amplitude encoding and the phase encoding, and an encoding type comprises discrete encoding, continuous encoding, or both the discrete encoding and the continuous encoding.
4. The optical diffractive processing unit of claim 2, wherein the diffractive connection layer is implemented by optical diffraction.
5. The optical diffractive processing unit of claim 2, wherein the optical summation layer is implemented by optical coherence.
6. The optical diffractive processing unit of claim 2, wherein the optical non-linearity layer is implemented by a programmable detection module.
7. The optical diffractive processing unit of claim 2, wherein the single-layer diffractive layers are connected sequentially.
8. The optical diffractive processing unit of claim 1, wherein the optical neural network comprises a three-layer optoelectronic diffractive deep neural network (D.sup.2NN).
9. The optical diffractive processing unit of claim 1, wherein the optical neural network comprises recurrent modules; an output of each recurrent module is a recurrent state; an input of each recurrent module comprises an output state of a previous recurrent module and sequential input data; each recurrent module comprises multiple states inside; and another diffractive neural network architecture is connectable to a recurrent neural network during an inference process.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] The above and/or additional aspects and advantages of the present invention will become obvious and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
DETAILED DESCRIPTION
[0027] Embodiments of the disclosure will be described in detail below. Examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals indicate the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary, and are intended to explain the disclosure, but should not be construed as limiting the disclosure.
[0028] In order to make the content of the disclosure clear and easy to understand, the content of the disclosure will be described in detail below with reference to the following embodiments, drawings, and tables.
[0029] In recent years, a variety of optical computing architectures have been proposed, including optical computing based on optical coherence and optical diffraction. However, existing optical computing faces a contradiction between programmability and large-scale computational operations.
[0030] Embodiments of the disclosure provide an optical diffractive processing unit, which offers both programmability and large-scale computing advantages.
[0031] The optical diffractive processing unit includes input nodes, output nodes, and neurons.
[0032] The neurons are connected to the input nodes through optical diffractions. Weights of connection strength of the neurons are determined based on diffractive modulation.
[0033] Each optoelectronic neuron is configured to perform an optical field summation of weighted inputs and generate a unit output by applying a complex activation, which occurs naturally in a photoelectronic conversion, to the optical field.
[0034] The neurons are formed by a programmable device. The programmable device includes an optical neural network containing a digital micromirror device, a spatial light modulator and a photodetector.
[0035] The digital micromirror device is configured to provide a high optical contrast for information coding.
[0036] The spatial light modulator is configured to perform the diffractive modulation. Weighted connections between the input nodes and the neurons are implemented by free-space optical diffraction, and a receiving field of each neuron is determined by an amount of diffraction from a plane of the spatial light modulator to a plane of the photodetector.
[0037] The photodetector is configured to implement the optical field summation and the complex activation.
[0038] In some examples, the optical neural network further includes multiple single-layer diffractive layers. Each single-layer diffractive layer includes an input coding layer, a diffractive connection layer, an optical summation layer, and an optical non-linearity layer.
[0039] In some examples, the input coding layer is implemented by a programmable input circuit configured to encode input data into incident light. Physical dimensions of encoding include one or more of amplitude encoding and phase encoding, and an encoding type includes one or more of discrete encoding and continuous encoding.
[0040] In some examples, the diffractive connection layer is implemented by optical diffraction.
[0041] In some examples, the optical summation layer is implemented by optical coherence.
[0042] In some examples, the optical non-linearity layer is implemented by a programmable detection circuit.
[0043] In some examples, the single-layer diffractive layers are connected sequentially.
[0044] In some examples, the optical neural network further includes a three-layer optoelectronic diffractive deep neural network (D.sup.2NN).
[0045] In some examples, the optical neural network further includes recurrent modules. An output of each recurrent module is a recurrent state. An input of each recurrent module includes an output state of a previous recurrent module and sequential input data. Each recurrent module includes multiple states. Another diffractive neural network architecture can be connected to a recurrent neural network during an inference process.
[0046] The optical diffractive processing unit will be described in detail below.
[0047] <Optical Diffractive Processing Unit, DPU>
[0049] In the disclosure, to process optical information via the diffractive neurons, unit input data can be quantized and electro-optically converted to a complex-valued optical field through an information-coding (IC) module. Different input nodes are physically connected to individual neurons through light diffractive connections (DCs), where the synaptic weights that control the strength of the connections are determined by the diffractive modulation (DM) of the wavefront. Each diffractive optoelectronic neuron performs the optical field summation (OS) of its weighted inputs, and generates a unit output by applying a complex activation (CA) function, which occurs naturally in the photoelectric conversion, to the calculated optical field. The unit output is transmitted to multiple output nodes.
[0051] The DMD can provide a high optical contrast for information coding (IC), which helps the system calibration and optical signal processing. The DMD encodes the binary unit input into the amplitude of a coherent optical field. In the disclosure, the phase distribution is modulated by the phase SLM to achieve diffractive modulation (DM). The diffractive connections (DCs) between the input nodes and the artificial neurons are implemented by free-space optical diffraction, where the receiving field of each neuron is determined by the amount of diffraction from the SLM plane to the sensor plane. In some embodiments, the electro-optical conversion characteristics of the CMOS pixels are used to realize the functions of the artificial neurons (i.e., the optical field summation (OS) and complex activation (CA)) and efficiently generate the unit output. Because the photoelectric effect, which measures the intensity of the incident optical field, itself provides the CA function, preparation of non-linear optical materials is avoided and the complexity of the system is reduced. By controlling and buffering the massively parallel optoelectronic dataflow through electronic signals, the DPU can be temporally multiplexed and programmed to customize different types of optical neural network (ONN) architectures.
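The optoelectronic dataflow described above (IC, DC, DM, DC, OS/CA) can be sketched numerically with scalar angular-spectrum propagation. This is an illustrative simulation only, not the disclosed hardware: the wavelength, pixel pitch, and propagation distances are placeholder values, and `angular_spectrum` and `dpu_forward` are hypothetical helper names.

```python
import numpy as np

def angular_spectrum(field, wavelength, pixel_pitch, distance):
    """Free-space propagation (diffractive connections) via the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent components dropped
    H = np.exp(1j * kz * distance)                   # transfer function of free space
    return np.fft.ifft2(np.fft.fft2(field) * H)

def dpu_forward(unit_input, phase_mask, wavelength=700e-9, pitch=8e-6, d1=0.1, d2=0.1):
    """One DPU cycle: IC -> DC -> DM -> DC -> OS/CA (all parameter values are placeholders)."""
    field = unit_input.astype(complex)                        # IC: amplitude encoding of the binary input
    field = angular_spectrum(field, wavelength, pitch, d1)    # DC: input plane -> SLM plane
    field = field * np.exp(1j * phase_mask)                   # DM: phase-only diffractive modulation
    field = angular_spectrum(field, wavelength, pitch, d2)    # DC: SLM plane -> sensor plane
    return np.abs(field) ** 2                                 # OS + CA: photodetection of |field|^2
```

A phase mask trained by backpropagation through such a differentiable model would play the role of the SLM pattern uploaded to the physical system.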
[0052] <Adaptive Training>
[0053] The effectiveness of an adaptive training approach and the functionality of the DPU are validated by constructing an optoelectronic diffractive deep neural network (D.sup.2NN) for classifying the MNIST (Modified National Institute of Standards and Technology) handwritten digits. The structure of the D.sup.2NN is illustrated in
[0054] Based on numerical simulation, compared with electronic computing, the all-optical D.sup.2NN model can classify the MNIST database with higher model accuracy. However, the difficulty of the experiments comes from defects of the physical system, such as layer misalignment, optical aberrations, and manufacturing errors. These defects inevitably degrade the performance of the computer-designed network model, and lead to differences between the numerical simulation and the actual experiments. In the disclosure, by applying the measured intermediate light field output by the DPU to adaptively adjust the network parameters, the system error-induced degradation of the model can be effectively compensated in the optoelectronic D.sup.2NN. Compared with an in-situ training method that attempts to directly update the gradients on the system, the adaptive training method of the disclosure corrects the computer-trained model layer by layer, and has high robustness and high efficiency.
[0056] As described above, the MNIST classification is performed by a three-layer optoelectronic D.sup.2NN (as illustrated in
[0057] To circumvent system errors and improve recognition performance, adaptive training of the constructed three-layer optoelectronic D.sup.2NN is implemented with two-stage fine tuning of the pre-trained model. In detail, a trade-off can be made between experimental accuracy and training efficiency by using a full training set or a mini-training set (for example, 2% of the full training set). The DPU outputs of the first layer and the second layer are recorded for the first stage and the second stage of adaptive training, respectively. The first stage of adaptive training uses the experimentally measured output of the first layer as the input of the second layer, and the parameters of the second and third diffraction layers are retrained on the computer. In the same way, the experimentally measured output of the second layer is used for retraining the parameters of the third diffraction layer in the second stage of adaptive training. Each adaptive training stage is initialized with the pre-trained model to fine tune the network parameters under the same training settings. After each stage, the phase pattern of the SLM is updated with the refined parameters to adapt to system defects and reduce the accumulation of system errors. Through the adaptive training, the intensity distributions of the DPU outputs in simulations and experiments are well matched, especially in the last layer. During the experiments, the example test digits “2” and “0” are correctly classified (
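The two-stage procedure above can be summarized in pseudocode-style Python. Here `measure_layer_output` and `retrain` are hypothetical stand-ins for the physical DPU measurement and the in-computer fine-tuning, respectively; only the control flow is taken from the disclosure.

```python
def adaptive_training(pretrained, train_data, measure_layer_output, retrain):
    """Two-stage adaptive training of a three-layer optoelectronic D2NN (sketch).

    measure_layer_output(model, layer_idx, data) would return experimentally
    measured DPU outputs for the given layer; retrain(model, layers, inputs)
    would fine-tune the listed layers on the computer, initialized from the
    pre-trained parameters. Both helpers are hypothetical.
    """
    model = pretrained
    # Stage 1: the measured layer-1 output becomes the input for retraining layers 2-3.
    measured_1 = measure_layer_output(model, layer_idx=1, data=train_data)
    model = retrain(model, layers=[2, 3], inputs=measured_1)
    # Stage 2: the measured layer-2 output retrains layer 3 only.
    measured_2 = measure_layer_output(model, layer_idx=2, data=measured_1)
    model = retrain(model, layers=[3], inputs=measured_2)
    return model  # refined phase patterns are then re-uploaded to the SLM
```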
[0058] It is to be noted that the adaptive training is a training algorithm that matches the DPU, and its principle is not only applicable to the above-mentioned embodiments. In the following embodiments, the adaptive training is also adopted.
Third Embodiment
[0059] In a convolutional neural network (CNN) architecture, segmenting the hidden layer into a set of feature maps with weight sharing is a key mechanism that leads to high model performance. Therefore, the inference capability and model robustness of the optoelectronic D.sup.2NN can be further enhanced by designing a multi-channel diffractive hidden layer as well as an external and internal interconnectivity structure (
[0060] The performance of the D-NIN-1 is evaluated by constructing a three-layer architecture as illustrated in
[0061] Compared with the optoelectronic D.sup.2NN, stacking multiple DPUs on each hidden layer can provide a higher degree of freedom and robustness to fine tune the pre-trained model of the D-NIN-1 to adapt to system defects. With the programming of the optoelectronic DPU system to deploy the D-NIN-1 model, the experimental classification accuracy over the whole test dataset reaches the blind-testing accuracy of 96.8% after the adaptive training.
[0062] In the disclosure, the energy distribution of ten predefined detection regions on the output layer is analyzed based on the inference result (
Fourth Embodiment
[0063] In addition to a single image, the reconfigurability of the DPU allows the construction of a large-scale diffractive recurrent neural network (D-RNN) to perform high-accuracy recognition tasks on video sequences. To demonstrate this functionality, a standard RNN architecture is configured based on the recurrent connections of the DPU layers and is applied to the task of video-based human action recognition. The folded and unfolded representations of the proposed D-RNN are shown in
[0064] The constructed D-RNN for the task of human action recognition is evaluated on two benchmark databases (i.e., the Weizmann database and the KTH database) with preprocessing to adapt to the network input. The Weizmann database includes ten types of natural actions, i.e., bend, jumping-jack (jack), jump-forward-on-two-legs (jump), jump-in-place-on-two-legs (pjump), run, gallop-sideways (side), skip, walk, wave-two-hands (wave 2), and wave-one-hand (wave 1). Sixty video sequences (actions) are used as a training set, and thirty video sequences (actions) are used as a test set. Each video sequence has about 30 to 100 frames. The KTH database includes six types of natural actions, i.e., boxing, handclapping, handwaving, jogging, running, and walking. Each of the video sequences includes about 350 to 600 frames. The system is trained and tested by using the first scene (150 video sequences) and 16:9 data splitting. The recurrent connection of the hidden layer at different time steps allows the D-RNN to process inputs of variable sequence length. Although a longer network sequence length (larger N) can incorporate more frames into the recognition decisions, it causes difficulties in training the network as well as the forgetting of long-term memory, that is, the vanishing of frame information at time steps far from the current time step. Therefore, for each video sequence of length M in the database, N<<M is set and the video sequence is divided into a number of sub-sequences of length N, with which the D-RNN is trained and tested. In the disclosure, the model accuracy is quantitatively evaluated with two metrics, i.e., frame accuracy and video accuracy. The frame accuracy is obtained by statistically summarizing the inference results of all sub-sequences in the test set.
The video accuracy is calculated based on the predicted category of each video sequence in the test set and is derived by applying the winner-takes-all strategy (the action category with the most votes) on the testing results of all sub-sequences in the video sequence.
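The two metrics can be made concrete with a short sketch. The winner-takes-all vote simply picks the action category receiving the most sub-sequence votes within a video; the data layout chosen here (a mapping from video to its true label and sub-sequence predictions) is only one possible representation.

```python
from collections import Counter

def video_prediction(subseq_predictions):
    """Winner-takes-all: the action category with the most sub-sequence votes."""
    return Counter(subseq_predictions).most_common(1)[0][0]

def frame_and_video_accuracy(videos):
    """videos maps each test video to (true_label, [sub-sequence predictions]).

    Frame accuracy aggregates over all sub-sequence predictions; video accuracy
    compares the winner-takes-all prediction of each video to its true label."""
    total = correct = vid_correct = 0
    for true_label, preds in videos.values():
        total += len(preds)
        correct += sum(p == true_label for p in preds)
        vid_correct += video_prediction(preds) == true_label
    return correct / total, vid_correct / len(videos)
```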
[0065] Through controlled variable experiments and performance analysis, the network sequence lengths are set to 3 and 5 for the Weizmann and KTH databases respectively, and the numbers of sub-sequences in each video sequence are 10 to 33 and 70 to 120, respectively. The D-RNN architecture is evaluated by configuring the DPU read-out layer and is pre-trained with the optimal fusing coefficient of 0.2 for both the Weizmann and KTH databases, achieving a blind-testing frame accuracy of 88.9%, corresponding to video accuracies of 100% and 94.4% respectively for the two databases. To implement the model experimentally, the adaptive training is performed by fine tuning the modulation coefficients of only the read-out layer, due to the inherent recurrent connections of the D-RNN. The designed modulation coefficients of the memory, read-in and read-out DPU layers after the adaptive training are illustrated in
[0066] The experimental testing results of all sub-sequences are visualized with the categorical voting matrix in
[0067] The recognition accuracy and robustness of the D-RNN can be further enhanced, forming the D-RNN++ architecture, by transferring the trained D-RNN hidden layer and using the electronic read-out layer to replace the DPU read-out layer (
Fifth Embodiment
[0068] The computing speed and energy efficiency of the DNN architectures constructed by the DPUs according to the disclosure are determined from the total number of computing operations (including optical and electronic computing operations), the computing time, and the (system) energy consumption.
[0069] The number of optical operations: the total number of optical computing operations (OP) in the DPU includes the following three parts, i.e., light field modulation, diffractive weighted connections, and complex nonlinear activation. In the computing process, the number of complex-number operations of the DPU is converted into the number of real-number operations. Each complex-number multiplication includes 4 real-number multiplications and 2 real-number summations, and each complex-number summation includes 2 real-number summations. Given that the number of input nodes and the number of output neurons are both set to K, the optical field modulation and the complex nonlinear activation each involve K complex-number multiplications, corresponding to 6K actual operations each. The physical diffractive weighted connections between the input nodes and the output neurons involve K.sup.2 complex-number multiplications and (K−1)K complex-number summations, which correspond to 2K(4K−1) actual operations. Therefore, the total number of actual operations for the optical computing in the DPU is: R.sub.d=2K(4K−1)+6K+6K=2K(4K+5) OP.
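The operation count above reduces to a one-line function; the unsimplified sum is kept in the code so the comment can mirror the algebra 2K(4K−1)+6K+6K=2K(4K+5).

```python
def dpu_optical_ops(K):
    """Total optical operations R_d per DPU cycle, following the accounting in
    the disclosure: diffractive weighted connections contribute 2K(4K-1) real
    operations, and field modulation and complex activation contribute 6K each."""
    return 2 * K * (4 * K - 1) + 6 * K + 6 * K   # simplifies to 2K(4K + 5)
```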
[0070] The number of electronic operations: the DPU electronically allocates some computing operations in the following three aspects, to flexibly control the dataflow during network configuration. (1) Through the optoelectronic implementation, the basic binary quantization of the unit input requires 2K actual operations, including threshold computing and numerical quantification. (2) For configurations of the D-NIN-1 and the D-RNN architecture, each external connection to the DPU needs K real-number multiplications, and the connections need K real-number summations therebetween. (3) The electronic fully connected read-out layer has K.sub.r=(2K.sub.1−1)K.sub.2 real-number operations, where K.sub.1 denotes the number of read-out nodes and K.sub.2 denotes the number of category nodes.
[0071] The total number of operations: R.sub.t=R.sub.o+R.sub.e. Under the network settings described above, the optical operations and the electronic operations are denoted as R.sub.o and R.sub.e, respectively. R.sub.o=Q·R.sub.d, where Q is the number of temporal multiplexings of the DPU layer, and R.sub.d is the number of optical operations of each DPU layer. For the D.sup.2NN and D-NIN-1(++), K=560×560. For the D-RNN(++), K=700×700. For the D-NIN-1, K.sub.1=196 and K.sub.2=10. For the D-RNN with the Weizmann database, K.sub.1=2500 and K.sub.2=10. For the KTH database, K.sub.1=2500 and K.sub.2=6.
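The combined accounting can be sketched as follows. Treating the binary input quantization as 2K electronic operations per multiplexed cycle is a simplifying assumption here, and the architecture-specific external-connection terms of the D-NIN-1 and D-RNN are omitted; the read-out term (2K.sub.1−1)K.sub.2 follows the disclosure.

```python
def network_operation_counts(K, Q, K1=None, K2=None):
    """Return (R_o, R_e, R_t) for a DPU-built network (simplified sketch).

    R_o = Q * R_d with R_d = 2K(4K+5) optical operations per DPU cycle.
    R_e here counts only input quantization (assumed 2K per cycle) plus the
    optional electronic read-out layer with (2*K1 - 1) * K2 real operations.
    """
    r_d = 2 * K * (4 * K + 5)            # optical operations per DPU cycle
    r_o = Q * r_d                        # Q temporally multiplexed cycles
    r_e = Q * 2 * K                      # binary input quantization (assumption)
    if K1 is not None and K2 is not None:
        r_e += (2 * K1 - 1) * K2         # fully connected read-out layer
    return r_o, r_e, r_o + r_e
```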
[0072] The computing speed of the DNN architecture constructed by the DPU can be expressed as: v=R.sub.t/τ.sub.t,
where τ.sub.t=Qτ.sub.c represents the total duration of the computing, τ.sub.c is the duration of each cycle of the DPU workflow, and R.sub.e<<R.sub.o. τ.sub.o=Q·τ.sub.d denotes the duration for different network architectures to process the total optical operations R.sub.o, where τ.sub.d represents the duration of processing the optical operations R.sub.d by the DPU layer. Then, the energy efficiency can be expressed as: η=R.sub.t/E.sub.t,
where E.sub.t=E.sub.o+E.sub.e represents the energy consumption of the computing operations, E.sub.o=P.sub.o·τ.sub.o represents the energy consumption of the optical operations, P.sub.o is the power of the light source, E.sub.e=R.sub.e/η.sub.e represents the energy consumption of the electronic operations, and η.sub.e is the energy efficiency of the electronic computing processor. Through the optoelectronic implementation of the DPU system, the power of the incident light measured by a silicon optoelectronic sensor is 1.4 μW, i.e., P.sub.o=1.4 μW. For the D.sup.2NN and the D-NIN-1, τ.sub.c=5.9 ms and τ.sub.d=4.4 ms. For the D-RNN, τ.sub.c=7.1 ms and τ.sub.d=5.5 ms. Since the electronic operations of the DPU system are all performed by an Intel® Core™ i7-5820K CPU @ 3.30 GHz, the computing speed v.sub.e=188.5 GOPs/s, the power consumption P.sub.e=140.0 W, and the energy efficiency η.sub.e=v.sub.e/P.sub.e=1.3 GOPs/J. Therefore, the computing energy efficiency of the DPU system is approximately R.sub.o/R.sub.e times higher than that of the electronic computing processor. The energy efficiency of the system can be determined by further considering the energy consumption of the programmable optoelectronic device:
R.sub.t/E.sub.t, where E.sub.t=E.sub.o+E.sub.e+E.sub.dev represents the total energy consumption of the DPU system with different architectures, and E.sub.dev and P.sub.dev are the total energy consumption and power consumption of the optoelectronic devices, respectively. The power consumption of the sCMOS sensor is about 30.0 W, that of the SLM is 10.0 W, and that of the DMD is 6.1 W.
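The CPU baseline figures quoted above can be checked with simple arithmetic, η.sub.e=v.sub.e/P.sub.e:

```python
def energy_efficiency(speed_gops_per_s, power_watts):
    """Energy efficiency in GOPs/J, i.e., eta_e = v_e / P_e."""
    return speed_gops_per_s / power_watts

# CPU baseline quoted in the disclosure: 188.5 GOPs/s at 140.0 W, about 1.3 GOPs/J.
cpu_eta = energy_efficiency(188.5, 140.0)
```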
[0073] Based on the above method, the table in
[0074] In the description of this specification, descriptions with reference to the terms “one embodiment”, “some embodiments”, “examples”, “specific examples”, or “some examples”, etc., mean that specific features, structures, materials, or characteristics described in conjunction with the embodiments or examples are included in at least one embodiment or example of the disclosure. In this specification, the schematic representations of the above terms do not necessarily refer to the same embodiment or example. In addition, the described specific features, structures, materials, or characteristics can be combined in any one or more embodiments or examples in a suitable manner. Furthermore, those skilled in the art can combine different embodiments or examples and features of different embodiments or examples described in this specification without any contradiction.
[0075] In addition, the terms “first” and “second” are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Therefore, the features defined with “first” and “second” may explicitly or implicitly include at least one feature. In the specification of the disclosure, “plurality” means at least two, such as two, three or more, unless otherwise specifically defined.
[0076] Any process or method description in the flowchart or described in other ways herein can be understood as a module, segment or part of codes that includes one or more executable instructions for implementing logic functions or steps of the process. The scope of the preferred embodiment of the disclosure includes additional implementations, which may not be in the order shown or discussed, including performing involved functions in a substantially simultaneous manner or in the reverse order, which should be understood by those skilled in the art.
[0077] Those skilled in the art can understand that all or part of the steps of the methods of the foregoing embodiments can be implemented by a program instructing relevant hardware. The program can be stored in a computer-readable storage medium. When the program is executed, one or a combination of the steps of the method embodiments can be executed.
[0078] Further, the functional units in various embodiments of the disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The above-mentioned integrated modules can be implemented in the form of hardware or software functional modules. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer readable storage medium.
[0079] The aforementioned storage medium may be a read-only memory, a magnetic disk, or an optical disk. Although embodiments of the disclosure are shown and described above, it can be understood that the above-mentioned embodiments are exemplary and should not be construed as limiting the disclosure. Those skilled in the art can make changes, modifications, substitutions, and variations to the embodiments within the scope of the disclosure.