Generative Model for Inverse Design of Materials, Devices, and Structures
20210281349 · 2021-09-09
Assignee
Inventors
- Keisuke Kojima (Weston, MA)
- Yingheng Tang (West Lafayette, IN, US)
- Toshiaki Koike-Akino (Belmont, MA, US)
- Ye Wang (Andover, MA, US)
CPC classification
G02B6/2813
PHYSICS
International classification
Abstract
A photonic device for splitting optical beams includes an input port configured to receive an input beam having an input power; a power splitter including perturbation segments arranged in a first region and a second region of a guide material having a first refractive index, each segment having a second refractive index, wherein the first region is configured to split the input beam into a first beam and a second beam, wherein the second region is configured to separately guide the first and second beams, and wherein the first refractive index is greater than the second refractive index; and output ports including first and second output ports connected to the power splitter to respectively receive and transmit the first and second beams.
Claims
1. A system for training a device design network, comprising: an interface configured to input data of a device; a memory to store the device design network including an encoder, a decoder, and an adversarial block; and a processor, in connection with the memory, configured to: update the encoder and the decoder based on a first loss function to reduce the difference between the input data and output data of the decoder, wherein the encoder is constructed by at least one convolutional layer followed by at least one parallel fully connected layer to extract features of a layout of the device; and update the adversarial block by maximizing a second loss function.
2. (canceled)
3. The system of claim 1, wherein each of the two convolutional layers includes more than two channels.
4. The system of claim 1, wherein each of the two parallel fully connected layers includes two input/output dimensions.
5. The system of claim 1, wherein the device is an optical power splitter, wherein the extracted features of the layout are mean (μ) and covariance (σ) for the Gaussian distribution.
6. The system of claim 1, wherein the device is a power splitter.
7. The system of claim 1, wherein the device is a WDM device.
8. The system of claim 1, wherein the device is a mode convertor.
9. (canceled)
10. The system of claim 3, wherein the two convolutional layers include 8 channels and 16 channels, respectively.
11. The system of claim 4, wherein the two parallel fully connected layers include 800 input/output dimensions and 60 input/output dimensions, respectively.
12. A computer-implemented method for training a device design network including an encoder, a decoder, and an adversarial block stored in a memory in connection with a processor that is configured to perform steps of the method, the steps comprising: acquiring input data of a device via an interface; updating the encoder and the decoder based on a first loss function to reduce the difference between the input data and output data of the decoder, wherein the encoder is constructed by at least one convolutional layer followed by at least one parallel fully connected layer to extract features of a layout of the device; and updating the adversarial block by maximizing a second loss function.
13. The computer-implemented method of claim 12, wherein each of the two convolutional layers includes more than two channels.
14. The computer-implemented method of claim 12, wherein each of the two parallel fully connected layers includes two input/output dimensions.
15. The computer-implemented method of claim 12, wherein the device is an optical power splitter, wherein the extracted features of the layout are mean (μ) and covariance (σ) for the Gaussian distribution.
16. The computer-implemented method of claim 12, wherein the device is a power splitter.
17. The computer-implemented method of claim 12, wherein the device is a WDM device.
18. The computer-implemented method of claim 12, wherein the device is a mode convertor.
19. The computer-implemented method of claim 13, wherein the two convolutional layers include 8 channels and 16 channels, respectively.
20. The computer-implemented method of claim 14, wherein the two parallel fully connected layers include 800 input/output dimensions and 60 input/output dimensions, respectively.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The presently disclosed embodiments will be further explained with reference to the attached drawings. The drawings shown are not necessarily to scale, with emphasis instead generally being placed upon illustrating the principles of the presently disclosed embodiments.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0037] The following description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the following description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing one or more exemplary embodiments. Contemplated are various changes that may be made in the function and arrangement of elements without departing from the spirit and scope of the subject matter disclosed as set forth in the appended claims.
[0038] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it is understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, systems, processes, and other elements in the subject matter disclosed may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known processes, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments. Further, like reference numbers and designations in the various drawings indicate like elements.
[0039] Also, individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may be terminated when its operations are completed, but may have additional steps not discussed or included in a figure. Furthermore, not all operations in any particularly described process may occur in all embodiments. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, the function's termination can correspond to a return of the function to the calling function or the main function.
[0040] Furthermore, embodiments of the subject matter disclosed may be implemented, at least in part, either manually or automatically. Manual or automatic implementations may be executed, or at least assisted, through the use of machines, hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine readable medium. A processor(s) may perform the necessary tasks.
where N is a random number that obeys the standard Gaussian distribution (with a mean of 0 and a variance of 1).
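The sampling step implied here can be sketched with the reparameterization trick. Assuming, based on claim 5 and the definition of N above, that the latent variable is formed as z = μ + σ·N (this exact expression is an assumption; the original equation is not reproduced in this text), a minimal NumPy version:

```python
import numpy as np

def reparameterize(mu, sigma, rng=np.random.default_rng(0)):
    """Sample z = mu + sigma * N, where N obeys the standard Gaussian.

    mu and sigma are the mean and deviation features produced by the
    encoder's parallel fully connected layers; drawing N separately keeps
    the sampling step differentiable with respect to mu and sigma.
    """
    n = rng.standard_normal(np.shape(mu))  # N ~ N(0, 1), elementwise
    return mu + sigma * n

# With sigma = 0 the sample collapses to the mean:
z = reparameterize(np.array([1.0, -2.0]), np.array([0.0, 0.0]))
```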
where
[0047]
MSE_LOSS(x, y) = (x − y)²
[0048] For Loss1, the first term is the binary cross-entropy loss, the second term is the KL divergence loss, and the third term is the mean square error multiplied by a constant β to reach a training balance between the adversarial block and the main model (encoder and decoder). For Loss2, the loss function is simply the MSE loss. The network update in Phase 1 is based on Loss1. By updating the weights in the encoder and the decoder, we minimize the binary cross-entropy loss between the input (702) and the output pattern (107). At the same time, the difference between the condition (701) and the adversarial condition (505) needs to be maximized so that the encoder extracts only the pattern features of the input. The network update in Phase 2 is based on Loss2. In this phase, only the encoder block and the adversarial block are used, and only the weight parameters in the adversarial block are updated (608). Here the loss is the MSE loss between the condition (701) and the adversarial condition (505). By updating the adversarial block, we minimize the MSE loss to form an adversarial relation between the two blocks. To achieve a balance between the two phases, Phase 1 updates three times for each single update of Phase 2. To implement this, we introduce a variable n with an initial value of 0. Every time the CVAE block finishes updating (604), we check the value of n (605). If n is smaller than 3, we add 1 to n and go back to the previous step. If n is 3, we feed data to update the adversarial block (606) and reset n to 0 (through 609).
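The two losses and the 3:1 update schedule just described can be sketched numerically as follows. The sign on the β-weighted MSE term (subtracted so that minimizing Loss1 maximizes the condition/adversarial-condition difference), the default β value, and all function names are assumptions for illustration; the original equations for Loss1 and Loss2 are not reproduced in this text.

```python
import numpy as np

def bce(x, y, eps=1e-7):
    """Binary cross-entropy between input pattern x and decoder output y."""
    y = np.clip(y, eps, 1 - eps)
    return float(-np.mean(x * np.log(y) + (1 - x) * np.log(1 - y)))

def kl(mu, sigma, eps=1e-7):
    """KL divergence of N(mu, sigma^2) from the standard Gaussian prior."""
    return float(-0.5 * np.mean(1 + np.log(sigma**2 + eps) - mu**2 - sigma**2))

def mse(a, b):
    return float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def loss1(x, y, mu, sigma, cond, adv_cond, beta=0.1):
    # BCE + KL, minus beta * MSE so that minimizing Loss1 maximizes the
    # difference between condition (701) and adversarial condition (505).
    # The minus sign is an assumption; the text fixes only the three terms.
    return bce(x, y) + kl(mu, sigma) - beta * mse(cond, adv_cond)

def loss2(cond, adv_cond):
    # Phase-2 loss: plain MSE, minimized when updating the adversarial block.
    return mse(cond, adv_cond)

def update_schedule(num_steps):
    """Phase sequence produced by the counter n: three Phase-1 (CVAE)
    updates (604/605) for every one Phase-2 (adversarial) update (606)."""
    phases, n = [], 0
    for _ in range(num_steps):
        if n < 3:
            phases.append(1)  # update encoder + decoder on Loss1
            n += 1
        else:
            phases.append(2)  # update adversarial block on Loss2
            n = 0             # reset n to 0 (609)
    return phases
```

For example, eight consecutive steps yield the phase pattern 1, 1, 1, 2, 1, 1, 1, 2.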
[0051] In order to fully train the neural network model, we used the concept of active learning.
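Active learning is described here only at the level of a concept; one generic way to realize it is an iterative loop that retrains the model on a growing dataset, generates candidate layouts, labels them by simulation, and folds the new examples back into the training set. The `train`, `generate_layouts`, and `simulate` helpers below are hypothetical placeholders (an electromagnetic solver such as FDTD being one plausible `simulate`):

```python
def active_learning_loop(dataset, rounds, train, generate_layouts, simulate):
    """Generic active-learning loop: each round, retrain on the growing
    dataset, generate candidate device layouts, label each candidate by
    simulation, and append the new (layout, response) pairs to the dataset.
    `train`, `generate_layouts`, and `simulate` are caller-supplied
    stand-ins for the CVAE training step, the decoder, and a solver.
    """
    model = None
    for _ in range(rounds):
        model = train(dataset)                        # retrain on all data
        candidates = generate_layouts(model)          # propose new layouts
        dataset = dataset + [(c, simulate(c)) for c in candidates]
    return model, dataset
```

With dummy stand-ins, the dataset grows by one labeled candidate per round.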
[0054] The power splitter is formed of nanostructured segments that are arranged in the guide material to effectively guide the input optical beam along predesigned beam paths toward the output ports. In this case, the nanostructured segments are nanostructured holes that have a refractive index less than that of the guide material of the power splitter. The waveguide of the power splitter is silicon, and the material of the nanostructured holes is silicon dioxide (SiO2).
[0058] According to some embodiments of the present invention, the devices generated from the model have the following advantages. The devices can be manufactured in very compact sizes: for instance, the footprint can be only 2.25 um×2.25 um or less, which is, to our knowledge, the smallest such splitter. With such a compact size, the devices have the potential to be massively integrated into optical communication chips with a relatively low area budget.
[0059] The devices designed according to embodiments of the present invention can operate over an ultra-wide bandwidth (from 1250 nm to 1800 nm) while maintaining excellent performance (overall 90% transmission), a bandwidth five times larger than that of similar devices reported previously. Accordingly, the devices can cover all the optical communication bands (from the O band to the L band, corresponding to wavelengths ranging from 1260 nm to 1625 nm).
[0060] The model has been shown to instantly generate any device the user wants without further optimization, which significantly reduces design time.