SYSTEM AND METHOD FOR SELECTIVE MICROCAPSULE EXTRACTION
20220371009 · 2022-11-24
Inventors
CPC classification
B01L2200/0652
PERFORMING OPERATIONS; TRANSPORTING
G01N15/1427
PHYSICS
B01L3/50273
PERFORMING OPERATIONS; TRANSPORTING
B01L2300/0864
PERFORMING OPERATIONS; TRANSPORTING
G06V20/69
PHYSICS
B01L3/502715
PERFORMING OPERATIONS; TRANSPORTING
B01J13/046
PERFORMING OPERATIONS; TRANSPORTING
B01L2300/0816
PERFORMING OPERATIONS; TRANSPORTING
B01L2400/0436
PERFORMING OPERATIONS; TRANSPORTING
B01L3/502761
PERFORMING OPERATIONS; TRANSPORTING
International classification
Abstract
A system for selective microcapsule extraction includes a non-planar core-shell microfluidic device. The non-planar core-shell microfluidic device generates microcapsules defining a core-shell configuration. A subset of the microcapsules contain aggregates, tissues, or at least one cell. A camera captures images of the microcapsules. A detection module includes a processor and a memory. The memory includes instructions that when executed by the processor cause the detection module to provide the images of the microcapsules as an input to a machine learning model. The machine learning model identifies microcapsules containing aggregates, tissues, or at least one cell. A force generator generates a force to extract the microcapsules. A microcontroller selectively activates the force generator to generate the force to extract the microcapsule when the detection module identifies a microcapsule containing aggregates, tissues, or at least one cell.
Claims
1. A system for selective microcapsule extraction, comprising: a non-planar core-shell microfluidic device configured to generate a plurality of microcapsules defining a core-shell configuration, wherein a subset of the microcapsules of the plurality of microcapsules contain aggregates, tissues, or at least one cell; a camera configured to capture a plurality of images of the generated plurality of microcapsules; a detection module including a processor and a memory, the memory including instructions stored thereon which when executed by the processor cause the detection module to: provide the plurality of images of the generated plurality of microcapsules as an input to a machine learning model; and identify, by the machine learning model, a microcapsule of the subset of microcapsules containing aggregates, tissues, or at least one cell; a force generator configured to generate a non-invasive force to extract the microcapsule of the subset of microcapsules; and a microcontroller configured to selectively activate the force generator to generate the non-invasive force when the detection module identifies the microcapsule of the subset of microcapsules to extract the microcapsule of the subset of microcapsules.
2. The system of claim 1, wherein at least one microcapsule of the subset of microcapsules generated by the non-planar core-shell microfluidic device defines a biomimetic environment containing a plurality of single cells, a plurality of cell tissues, and a plurality of cell aggregates.
3. The system of claim 2, wherein the biomimetic environment includes at least one hydrogel surrounding the plurality of single cells, the plurality of cell tissues, and the plurality of cell aggregates.
4. The system of claim 1, wherein the microfluidic device includes at least two immiscible phases, and wherein the force generator is selectively activated by the microcontroller to extract the microcapsule of the subset of microcapsules from a first phase of the at least two immiscible phases to a second phase of the at least two immiscible phases.
5. The system of claim 1, wherein a core of at least one microcapsule of the subset of microcapsules generated by the non-planar core-shell microfluidic device is surrounded by an inner wall of the at least one microcapsule of the subset of microcapsules.
6. The system of claim 5, wherein the core of the at least one microcapsule of the subset of microcapsules is substantially centered within an inner space of the corresponding microcapsule of the plurality of microcapsules.
7. The system of claim 1, wherein the machine learning model includes a machine learning classifier or a convolutional neural network.
8. The system of claim 1, wherein the extracted microcapsule of the subset of microcapsules includes ovarian follicle cells, pancreatic islet cells, stem cells, or somatic cells, and wherein the ovarian follicle cells, pancreatic islet cells, stem cells, or somatic cells are arranged as a single cell, multiple cells, or cell aggregates.
9. The system of claim 1, wherein the non-invasive force generated by the force generator includes an electrical force, an acoustic force, or a mechanical force.
10. The system of claim 1, wherein the camera is a digital camera, a high speed digital camera, a camera embodied in a smartphone, or a camera embodied in a tablet computer.
11. A computer-implemented method for selective microcapsule extraction, comprising: generating a plurality of microcapsules, wherein a subset of the microcapsules of the plurality of microcapsules contain aggregates, tissues, or at least one cell; capturing a plurality of images of the generated plurality of microcapsules; providing the plurality of images of the generated plurality of microcapsules as an input to a machine learning model; identifying, by the machine learning model, a microcapsule of the subset of microcapsules containing aggregates, tissues, or at least one cell based on the provided input; and selectively applying a non-invasive force to extract the microcapsule of the subset of microcapsules containing aggregates, tissues, or at least one cell.
12. The computer-implemented method of claim 11, wherein generating the plurality of microcapsules includes generating at least one microcapsule defining a biomimetic environment containing a plurality of cells, a plurality of cell tissues, and a plurality of cell aggregates.
13. The computer-implemented method of claim 12, further including generating the biomimetic environment to include at least one hydrogel surrounding the plurality of cells, the plurality of cell tissues, and the plurality of cell aggregates.
14. The computer-implemented method of claim 11, wherein extracting the microcapsule of the subset of microcapsules includes extracting the microcapsule of the subset of microcapsules from a first phase to a second phase of at least two immiscible phases.
15. The computer-implemented method of claim 11, wherein generating the plurality of microcapsules includes generating at least one core surrounded by an inner wall of a corresponding microcapsule of the plurality of microcapsules.
16. The computer-implemented method of claim 11, wherein generating the plurality of microcapsules includes generating at least one core substantially centered within an inner space of a corresponding microcapsule of the plurality of microcapsules.
17. The computer-implemented method of claim 11, wherein the plurality of images of the generated plurality of microcapsules is provided as an input to a machine learning classifier or a convolutional neural network.
18. The computer-implemented method of claim 11, wherein extracting the microcapsule of the subset of microcapsules includes extracting ovarian follicle cells, pancreatic islet cells, stem cells, or somatic cells, and wherein the ovarian follicle cells, pancreatic islet cells, stem cells, or somatic cells are arranged as a single cell, multiple cells, or cell aggregates.
19. The computer-implemented method of claim 11, further including delivering the extracted microcapsule of the subset of microcapsules as an in-vivo treatment, providing the extracted microcapsule of the subset of microcapsules for an in-vitro study, or providing the extracted microcapsule of the subset of microcapsules for cell analysis.
20. A system for selective microcapsule or droplet extraction, comprising: a non-planar core-shell microfluidic device configured to generate a plurality of microcapsules or a plurality of droplets defining a core-shell configuration, wherein a subset of the microcapsules of the plurality of microcapsules or a subset of the droplets of the plurality of droplets contain aggregates, tissues, or at least one cell; a camera configured to capture a plurality of images of the generated plurality of microcapsules or the generated plurality of droplets; a detection module including a processor and a memory, the memory including instructions stored thereon which when executed by the processor cause the detection module to: provide the plurality of images of the generated plurality of microcapsules or the generated plurality of droplets as an input to a machine learning model; and identify, by the machine learning model, a microcapsule of the subset of microcapsules or a droplet of the subset of droplets containing aggregates, tissues, or at least one cell; a force generator configured to generate a force to extract the microcapsule of the subset of microcapsules or the droplet of the subset of droplets; and a microcontroller configured to selectively activate the force generator to generate the force when the detection module identifies the microcapsule of the subset of microcapsules or the droplet of the subset of droplets to extract the microcapsule of the subset of microcapsules or the droplet of the subset of droplets.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] Various aspects and features of the present disclosure are described hereinbelow with reference to the drawings.
DETAILED DESCRIPTION
[0043] Descriptions of technical features or aspects of an exemplary configuration of the disclosure should typically be considered as available and applicable to other similar features or aspects in another exemplary configuration of the disclosure. Accordingly, technical features described herein according to one exemplary configuration of the disclosure may be applicable to other exemplary configurations of the disclosure, and thus duplicative descriptions may be omitted herein.
[0044] Exemplary configurations of the disclosure will be described more fully below (e.g., with reference to the accompanying drawings). Like reference numerals may refer to like elements throughout the specification and drawings.
[0045] The presently disclosed subject matter relates generally to a system and method for detection and selective extraction of microcapsules. In certain embodiments, the system and method include machine learning-based detection. In certain embodiments, the system operates on a microfluidic chip. In an example embodiment, the method provides highly efficient, machine learning-based, label-free on-chip detection and selective extraction for obtaining highly pure samples of cell/tissue-laden hydrogel microcapsules.
[0046] In the present disclosure, devices, systems, and methods for highly efficient deep learning-enabled label-free on-chip detection and selective extraction of cell aggregate-laden hydrogel microcapsules are described. This is achieved by using categorically labeled images to train a deep learning-based detection model, which then dynamically analyzes real-time images for label-free detection of the cell aggregate-laden microcapsules with ~100% efficiency. Once a microcapsule is detected, a dielectrophoretic (DEP) force is activated to extract the cell-laden microcapsule from oil into an aqueous phase with high efficiency (~97%), high purity (~90%), and high cell viability (>95%). DEP is a simple and fast method of moving particles that allows for a quick transfer of microcapsules from oil into an aqueous phase. An exemplary system includes a microfluidic device for microcapsule generation, a cell phone camera for imaging the on-chip detection area, a deep learning model for detection via analyzing the video frames from the camera in real-time, and a microcontroller that receives the output from the deep learning model and controls the switch to activate/deactivate the DEP-based extraction. In the microfluidic device (see, e.g., microfluidic device 300 in
[0047] The microcapsules further flow into the detection region (ii), where images are taken by a cell phone camera via the objective of a low-cost Zeiss (Oberkochen, Germany) Primovert brightfield light microscope. This can be done by attaching an iPhone® cell phone (Apple Inc., CA, USA) to the microscope with the phone camera overlaid on the microscope eyepiece. The phone also relays the images to a computer. The deep learning model on the computer analyzes the input images in real-time to determine whether the microcapsule currently in the detection region contains a cell aggregate or is empty. This information is then sent to a microcontroller, which controls a switch that turns on when the model determines there is a microcapsule containing a cell aggregate in the detection region. Based on the flow speed of the oil phase, the distance between two adjacent microcapsules, the inference time of the detection system, and the time needed for DEP activation, a distance of 10 μm can be employed between the detection region and the electrode location to ensure timely extraction of a detected cell aggregate-laden microcapsule with minimal interference from neighboring microcapsules. When the switch is turned on, an electric field is applied across the microchannel via the two electrodes (E1 and E2, located 10 μm downstream of the detection region) to generate a DEP force for selectively extracting the cell aggregate-laden microcapsule from the oil phase into the isotonic aqueous extraction solution (iii). The extracted microcapsules then flow to outlet O1 (iv), while non-extracted microcapsules stay in the oil phase and flow to outlet O2. The microcapsules have a diameter of 219.4±8.2 μm.
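The 10 μm spacing works because a capsule's transit time from the detection region to the electrodes exceeds the detection-to-activation latency. A minimal sketch of that timing budget follows; the ~15 ms latency figure appears elsewhere in this disclosure, while the oil-phase flow speed used here is an illustrative assumption only:

```python
def travel_time_us(distance_um, flow_speed_um_per_s):
    """Time for a microcapsule to drift a given distance at the oil-phase flow speed."""
    return distance_um / flow_speed_um_per_s * 1e6

# Assumed flow speed (NOT given in the text); the 10 um gap and ~15 ms
# detection-to-extraction latency are from the disclosure.
flow_speed_um_per_s = 330.0
gap_um = 10.0
latency_ms = 15.0

travel_ms = travel_time_us(gap_um, flow_speed_um_per_s) / 1000.0
# Under these assumptions the capsule needs ~30 ms to reach the electrodes,
# leaving margin over the ~15 ms detection latency.
print(f"travel {travel_ms:.1f} ms vs latency {latency_ms:.1f} ms")
```

The same arithmetic explains why the gap cannot grow arbitrarily: a longer gap admits a neighboring capsule into the field before the DEP pulse ends.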
[0048] A deep learning model can be utilized to enable label-free detection of cell aggregate-laden microcapsules in real-time. This is achieved through training the deep learning neural network model using pre-labeled (i.e., with or without a cell-laden microcapsule) images of the detection region. Once the model is trained, images from the cell phone camera showing the detection region of the microfluidic chip are read into the detection program and the model determines whether or not there is an aggregate-laden microcapsule in the detection region in real-time.
[0049] An exemplary detection model is based on the single shot multibox detector (SSD), a current state-of-the-art model for object detection. The detection model includes two components: a backend feature extractor, followed by several convolutional layers for bounding box predictions. The predicted bounding boxes are refined through non-maximum suppression. A comparison between three different backend structures is described (see, e.g.,
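Non-maximum suppression itself is a small, standard algorithm: keep the highest-scoring box, discard any remaining box whose overlap with a kept box exceeds an IoU threshold, and repeat. A minimal pure-Python sketch of that refinement step (not the patent's implementation; SSD frameworks ship their own optimized version):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Return indices of boxes kept after greedy score-ordered suppression."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep
```

For example, two heavily overlapping predictions of the same microcapsule collapse to the single higher-scoring box, while a distant box survives.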
[0050] Real-time extraction of cell aggregate-laden microcapsules is enabled by the deep learning-based label-free on-chip detection using the detection model trained with the MobileNet backend structure. The efficiency and purity of the approach described herein, for both detection and selective extraction, are assessed through microcapsule collection and counting, as well as quantification of cell viability. A video frame breakdown of the deep learning-based detection and selective extraction is shown in
[0051] The deep learning-based label-free methods described herein can detect cell aggregates (50-250 μm in diameter) with ~100% detection efficiency. To determine the selective extraction efficiency and purity as well as cell viability, microcapsules are collected from both the aqueous outlet O1 and the oil outlet O2. The extraction efficiency is defined as the percentage of extracted cell aggregate-laden microcapsules out of all cell aggregate-laden microcapsules, while the extraction purity is defined as the percentage of extracted cell aggregate-laden microcapsules out of the total extracted microcapsules. The purity of the selectively extracted microcapsules is significantly higher than that (~2%) without selective extraction.
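The two definitions above can be written directly as ratios. The sketch below uses hypothetical counts chosen only to land near the ~97% efficiency and ~90% purity figures reported in this disclosure:

```python
def extraction_metrics(extracted_laden, total_laden, total_extracted):
    """Efficiency: laden capsules recovered out of all laden capsules generated.
    Purity: laden capsules among everything that was extracted."""
    efficiency = extracted_laden / total_laden
    purity = extracted_laden / total_extracted
    return efficiency, purity

# Hypothetical counts for illustration: 100 laden capsules generated,
# 97 recovered, 11 empty capsules accidentally co-extracted.
eff, pur = extraction_metrics(extracted_laden=97, total_laden=100, total_extracted=108)
```

Note the two metrics pull in opposite directions: a longer DEP pulse raises efficiency but risks dragging in empty neighbors, lowering purity.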
[0052] The deep learning-based label-free method can detect cell aggregates of 50-250 μm in diameter with ~100% detection efficiency, which enables selective extraction of the cell aggregate-laden microcapsules via DEP force with ~97% extraction efficiency. This high detection efficiency can also be attributed to the size of the microcapsules and the design of the devices and systems described herein, which ensure the cell aggregates are not far from the plane of focus (i.e., within the depth of focus) so that they show up in the images used by the deep learning algorithm for detection. This detection method is much better than a previously reported optical sensor-based approach that is unable to detect or extract any cell aggregates smaller than 82 μm. This is important for biomedical applications, for instance islet microencapsulation, because islets can be as small as 50 μm. The purity (~90%) of the deep learning-based extraction is also much higher than that (~30%) achieved with an optical sensor-based detection method for DEP-based extraction.
[0053] The throughput of the devices, systems, and methods described herein is ~1.5 microcapsules per second. This is due to the flow rates of the aqueous and oil phases used, which are optimized based on the time needed for detection and extraction, as well as the time needed for the oil/aqueous interface to become stable after extracting a cell-laden microcapsule. If the rate of microcapsule generation is too high and the microcapsules are too close to each other, the purity of the extracted cell-laden microcapsules may decrease. This is because neighboring microcapsules (cell-laden or not) may be extracted either before the DEP is de-activated, or before the interface becomes stable after extracting a target cell-laden microcapsule. This may partially explain the low purity (~30%) of conventional optical sensor-based systems with a throughput of ~3.75 microcapsules per second. Nonetheless, for the applications of microencapsulating islets and follicles, usually ~100 follicles or ~1000 islets are needed at a time, making this throughput of ~1.5 microcapsules per second sufficient. For applications that require higher throughput, smaller microcapsules that cause less interface destabilization could be used, along with a more viscous oil phase and aqueous extraction solution to further stabilize the interface, increasing throughput while keeping high purity. The electrical conductivity of the oil phase can also be adjusted for faster extraction. Advances in deep learning and improved backend structures, along with a high-speed camera for imaging and a faster computer processor and microcontroller, can also increase throughput. Throughput may also be increased by running multiple microfluidic devices in parallel.
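As a back-of-the-envelope check on sufficiency, the ~1.5 capsules/s throughput can be converted into laden capsules collected per hour. The laden fraction below is an assumption (taken from the ~2% purity reported above for operation without selective extraction); only the throughput and efficiency figures come from the text:

```python
def laden_capsules_per_hour(throughput_caps_per_s, laden_fraction, extraction_efficiency):
    """Laden capsules recovered per hour of continuous operation."""
    return throughput_caps_per_s * 3600 * laden_fraction * extraction_efficiency

# ~1.5 capsules/s and ~97% efficiency are from the text; the ~2% laden
# fraction is an assumption inferred from the no-selection purity figure.
rate = laden_capsules_per_hour(1.5, 0.02, 0.97)
# With these assumptions, on the order of 100 laden capsules are
# collected per hour, consistent with the ~100-follicle use case.
```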
[0054] The deep learning system can also be trained with images of smaller aggregates (e.g., smaller than 50-250 μm for the application of microencapsulating single pancreatic islets and ovarian follicles that have a diameter over the same range) or single cells. The microfluidic device and detection system can also be adjusted to ensure that smaller aggregates and single cells can be identified and are within the depth of focus for imaging so that a high detection and extraction efficiency could be achieved. Reducing the microcapsule size may also help to increase detection efficiency, especially for detecting smaller aggregates or one single cell in the microcapsules. The position of cells, the plane of focus, and microcapsule edge opacity may vary with microcapsule size and can affect the detection efficiency. This type of model can also be applied to sort cell aggregates based on their size by training the model with images of aggregates of varying sizes in the microcapsules.
[0055] The elimination of cell labeling with the label-free devices, systems, and methods described herein is of great significance for downstream biomedical applications where labeled cells cannot be used, including treating type 1 diabetes with microencapsulated islets, as well as microencapsulation of an ovarian follicle for biomimetic 3D culture to treat infertility. The devices, systems, and methods described herein allow for a quick transfer of microcapsules from the time of detection to extraction (~15 ms), and microcapsules are moved from the oil phase, which is not favorable for the survival of living cells, into an aqueous solution in less than 10 seconds from the time the microcapsules are generated to their extraction (determined by dividing the gelling microchannel length by the flow speed in the channel). This contributes to the high cell viability of the extracted sample. This approach also eliminates the need for tedious manual sorting of non-labeled aggregates and the associated possibility of sample contamination.
[0056] Referring generally to
[0057] Referring particularly to
[0058] Referring particularly to
[0059] Referring particularly to
[0060] The microfluidic device (see, e.g., microfluidic device 300 in
[0061] A deep learning-based detection program processes images of the microcapsules received from a camera (e.g., camera 102) to determine if the microcapsules contain a cell aggregate or are empty. Once a cell aggregate-laden microcapsule is detected, the microcontroller (e.g., microcontroller 105) is informed to turn the switch (e.g., switch 209) on, activating a force (e.g., a DEP force) to extract the cell aggregate-laden microcapsule into the aqueous extraction channel (iii in
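The detection-to-switch handoff described above can be sketched as a small state machine: assert the switch on a positive detection, hold it for a pulse long enough to pull the capsule across the interface, then release. The pulse length and the polling-style interface here are assumptions for illustration, not details from the disclosure:

```python
import time

class ExtractionController:
    """Sketch of the microcontroller logic: turn the DEP switch on when a
    laden capsule is detected and off again after a fixed pulse duration.
    The 50 ms pulse is an assumed value."""

    def __init__(self, pulse_s=0.05):
        self.pulse_s = pulse_s
        self.switch_on = False
        self._off_at = 0.0

    def update(self, laden_detected, now=None):
        """Feed one detection result; returns the current switch state."""
        now = time.monotonic() if now is None else now
        if laden_detected:
            self.switch_on = True          # activate the DEP electrodes
            self._off_at = now + self.pulse_s
        elif self.switch_on and now >= self._off_at:
            self.switch_on = False         # de-activate after the pulse
        return self.switch_on
```

In a real deployment the `update` call would be driven by each analyzed video frame, with the switch state written out to the hardware switch.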
[0062] Referring particularly to
[0063] A camera 102 captures images of the microcapsules. The images may be digital images and may include video images. The camera 102 may be a digital camera, a high speed digital camera, a camera embodied in a smartphone, or a camera embodied in a tablet computer.
[0064] A detection module 103 includes a processor and a memory. The memory includes instructions that when executed by the processor cause the detection module 103 to provide the images of the microcapsules as an input to a machine learning model. The detection module may run on any computer, as described herein (see, e.g., the computer 1400 described in more detail below with reference to
[0065] The machine learning model identifies microcapsules containing aggregates, tissues, or at least one cell. The machine learning model may include a machine learning classifier or a convolutional neural network (see, e.g.,
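The core operation of such a convolutional neural network is a 2D convolution that slides a small kernel over the image to produce feature maps. A self-contained, framework-free sketch of that single building block (real models stack many such layers with learned kernels):

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the basic CNN layer operation.
    `image` and `kernel` are lists of lists (rows of pixel values)."""
    kh, kw = len(kernel), len(kernel[0])
    h = len(image) - kh + 1
    w = len(image[0]) - kw + 1
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out
```

For instance, a horizontal-difference kernel `[[-1, 1]]` responds strongly at the vertical edges of a microcapsule while staying silent over uniform background.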
[0066] A force generator 104 generates a force to extract the microcapsules. A microcontroller 105 selectively activates the force generator 104 to generate the force when the detection module 103 identifies a microcapsule containing aggregates, tissues, or at least one cell to extract the microcapsule. The force may be a non-invasive force.
[0067] In an aspect of the present disclosure, the force generated by the force generator 104 includes an electrical force, an acoustic force, or a mechanical force.
[0068] Referring particularly to
[0069] Referring particularly to
[0070] In an aspect of the present disclosure, the microcapsule 510 generated by the non-planar core-shell microfluidic device 500 defines a biomimetic environment containing a plurality of single cells, a plurality of cell tissues, and a plurality of cell aggregates. The biomimetic environment may also contain a single cell. The biomimetic environment may include at least one hydrogel surrounding the aggregates, tissues, or cell(s).
[0071] The microfluidic device 500 includes at least two immiscible phases (e.g., an oil phase 501 and an aqueous phase 502). The force generator is selectively activated by the microcontroller to extract microcapsules from a first phase to a second phase (e.g., by activating electrodes 503, 504).
[0072] Referring particularly to
[0073] In an aspect of the present disclosure, the extracted microcapsule 510 includes ovarian follicle cells, pancreatic islet cells, stem cells, or somatic cells. The ovarian follicle cells, pancreatic islet cells, stem cells, or somatic cells are arranged as a single cell, multiple cells, or cell aggregates.
[0074] Referring to
[0075] Referring to
[0076] Referring to
[0077] The system 1000 described below with reference to
[0078] Referring to
[0079] Referring particularly to
[0080] Referring to
[0081] With ongoing reference to
[0082] Referring to
[0083] In an aspect of the present disclosure, the extracted microcapsule is provided as an in-vivo treatment, for an in-vitro study, or for cell analysis.
[0084] Referring to
[0085] In some aspects of the disclosure, the memory 1402 can be random access memory, read-only memory, magnetic disk memory, solid-state memory, optical disc memory, and/or another type of memory. The memory 1402 can communicate with the processor 1401 through communication buses 1403 of a circuit board and/or through communication cables such as serial ATA cables or other types of cables. The memory 1402 includes computer-readable instructions that are executable by the processor 1401 to operate the computer 1400 to execute the algorithms described herein. The computer 1400 may include a network interface 1404 to communicate (e.g., through a wired or wireless connection) with other computers or a server. A storage device 1405 may be used for storing data. The computer 1400 may include one or more FPGAs 1406. The FPGA 1406 may be used for executing various machine learning algorithms. A display 1407 may be employed to display data processed by the computer 1400.
[0086] The machine learning models described herein (e.g., including a neural network) can be trained using a dataset including known matching and non-matching entries (e.g., data of previously verified microcapsules including aggregates, tissues, or at least one cell and verified microcapsules not including any cells, aggregates or tissues). The machine learning models may be trained on datasets for multiple cells or aggregates of cells contained in a core-shell microcapsule, as described herein.
[0087] An exemplary training process for the deep learning model is described in more detail below. The deep learning model can be trained using labeled images of empty and cell aggregate-laden microcapsules. Images of air bubbles and noise in the oil phase are included to help the model distinguish between noise, microcapsules, and cell aggregates. First, iVCam is used to record videos of microcapsules in the detection region using an iPhone® attached to the eyepiece of a Zeiss Primovert microscope. The videos are then split into frames, and frames with empty microcapsules and cell aggregate-laden microcapsules are collected (400 empty and 400 aggregate-laden). Images are cropped to include only one microcapsule and labeled as "empty" or "cell aggregate-laden" using the program LabelImg from the Python® Package Index (PyPI). The labeled image data is then divided randomly into training and testing data (80% training, 20% testing). The images are used to train the deep learning models using TensorFlow® (Google) for 100,000 steps. The testing data is then used to confirm the model's detection precision.
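The random 80/20 split step can be sketched in a few lines; the filenames below are placeholders standing in for the 400 empty and 400 aggregate-laden frames described above:

```python
import random

def split_dataset(samples, train_frac=0.8, seed=42):
    """Randomly divide labeled samples into training and testing sets."""
    items = list(samples)
    random.Random(seed).shuffle(items)   # seeded for reproducibility
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

# Placeholder (image, label) pairs mirroring the described 400 + 400 dataset.
labeled = [(f"empty_{i}.png", "empty") for i in range(400)] + \
          [(f"laden_{i}.png", "cell aggregate-laden") for i in range(400)]
train, test = split_dataset(labeled)     # 640 training, 160 testing samples
```

Shuffling before the cut ensures both classes appear in both partitions, which a simple positional slice of the concatenated list would not guarantee.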
[0088] The documents listed below and referenced herein are incorporated herein by reference in their entireties, except for any statements contradictory to the express disclosure herein, subject matter disclaimers or disavowals, and except to the extent that the incorporated material is inconsistent with the express disclosure herein, in which case the language in this disclosure controls. Incorporation by reference of the following shall not be considered an admission by the applicant that the incorporated materials are prior art to the present disclosure, nor shall any document be considered material to patentability of the present disclosure.
[0089] J. Zhang, R. J. Coulston, S. T. Jones, J. Geng, O. A. Scherman, C. Abell, Science 2012, 335, 690.
[0090] A. S. Mao, B. Ozkale, N. J. Shah, K. H. Vining, T. Descombes, L. Zhang, C. M. Tringides, S. W. Wong, J. W. Shin, D. T. Scadden, D. A. Weitz, D. J. Mooney, Proc. Natl. Acad. Sci. U.S.A. 2019, 116, 15392.
[0091] A. J. Vegas, O. Veiseh, M. Gürtler, J. R. Millman, F. W. Pagliuca, A. R. Bader, J. C. Doloff, J. Li, M. Chen, K. Olejnik, H. H. Tam, S. Jhunjhunwala, E. Langan, S. Aresta- Dasilva, S. Gandham, J. J. McGarrigle, M. A. Bochenek, J. Hollister-Lock, J. Oberholzer, D. L. Greiner, G. C. Weir, D. A. Melton, R. Langer, D. G. Anderson, Nat. Med. 2016, 22, 306.
[0092] X. He, ACS Biomater. Sci. Eng. 2017, 3, 2692.
[0093] W. Zhang, X. He, J. Healthc. Eng. 2011, 2, 427.
[0094] P. Agarwal, J. K. Choi, H. Huang, S. Zhao, J. Dumbleton, J. Li, X. He, Part. Part. Syst. Charact. 2015, 32, 809.
[0095] J. K. Choi, P. Agarwal, H. Huang, S. Zhao, X. He, Biomaterials 2014, 35, 5122.
[0096] M. Ma, A. Chiu, G. Sahay, J. C. Doloff, N. Dholakia, R. Thakrar, J. Cohen, A. Vegas, D. Chen, K. M. Bratlie, T. Dang, R. L. York, J. Hollister-Lock, G. C. Weir, D. G. Anderson, Adv. Healthc. Mater. 2013, 2, 667.
[0097] A. M. White, J. G. Shamul, J. Xu, S. Stewart, J. S. Bromberg, X. He, ACS Biomater. Sci. Eng. 2020, 6, 2543.
[0098] S. Zhao, Z. Xu, H. Wang, B. E. Reese, L. V Gushchina, M. Jiang, P. Agarwal, J. Xu, M. Zhang, R. Shen, Z. Liu, N. Weisleder, X. He, Nat. Commun. 2016, 7, 1.
[0099] R. Seemann, M. Brinkmann, T. Pfohl, S. Herminghaus, Reports Prog. Phys. 2012, 75, 016601.
[0100] D. M. Headen, G. Aubry, H. Lu, A. J. Garcia, Adv. Mater. 2014, 26, 3003.
[0101] L. Shang, Y. Cheng, Y. Zhao, Chem. Rev. 2017, 117, 7964.
[0102] H. Huang, X. He, Lab Chip 2015, 15, 4197.
[0103] K. Y. Lee, D. J. Mooney, Prog. Polym. Sci. 2012, 37, 106.
[0104] D. J. Collins, A. Neild, A. DeMello, A. Q. Liu, Y. Ai, Lab Chip 2015, 15, 3439.
[0105] M. De Groot, B. J. De Haan, P. P. M. Keizer, T. A. Schuurs, R. Van Schilfgaarde, H. G. D. Leuvenink, Lab. Anim. 2004, 38, 200.
[0106] D. Dufrane, W. D'hoore, R. M. Goebbels, A. Saliez, Y. Guiot, P. Gianello, Xenotransplantation 2006, 13, 204.
[0108] X. He, T. L. Toth, Semin. Cell Dev. Biol. 2017, 61, 140.
[0109] M. Sun, P. Durkin, J. Li, T. L. Toth, X. He, ACS Sensors 2018, 3, 410.
[0110] H. Wang, P. Agarwal, B. Jiang, S. Stewart, X. Liu, Y. Liang, B. Hancioglu, A. Webb, J. P. Fisher, Z. Liu, X. Lu, K. H. R. Tkaczuk, X. He, Adv. Sci. 2020, 7, 2000259.
[0111] H. Huang, M. Sun, T. Heisler-Taylor, A. Kiourti, J. Volakis, G. Lafyatis, X. He, Small 2015, 11, 5369.
[0112] J. Nam, H. Lim, C. Kim, J. Yoon Kang, S. Shin, Biomicrofluidics 2012, 6, 024120.
[0113] A. Sciambi, A. R. Abate, Lab Chip 2015, 15, 47.
[0114] X. He, Ann. Biomed. Eng. 2017, 45, 1676.
[0115] P. R. O'Neill, W. K. A. Karunarathne, V. Kalyanaraman, J. R. Silvius, N. Gautama, Proc. Natl. Acad. Sci. U.S.A. 2012, 109, 20784.
[0116] D. R. Gossett, H. T. K. Tse, J. S. Dudani, K. Goda, T. A. Woods, S. W. Graves, D. Di Carlo, Small 2012, 8, 2757.
[0117] E. Pariset, C. Pudda, F. Boizot, N. Verplanck, J. Berthier, A. Thuaire, V. Agache, Small 2017, 13, DOI 10.1002/smll.201770201.
[0118] T. S. H. Tran, B. D. Ho, J. P. Beech, J. O. Tegenfeldt, Lab Chip 2017, 17, 3592.
[0119] E. H. M. Wong, E. Rondeau, P. Schuetz, J. Cooper-White, Lab Chip 2009, 9, 2582.
[0120] J. J. Agresti, E. Antipov, A. R. Abate, K. Ahn, A. C. Rowat, J. C. Baret, M. Marquez, A. M. Klibanov, A. D. Griffiths, D. A. Weitz, Proc. Natl. Acad. Sci. U.S.A. 2010, 107, 4004.
[0121] Z. Cao, F. Chen, N. Bao, H. He, P. Xu, S. Jana, S. Jung, H. Lian, C. Lu, Lab Chip 2013, 13, 171.
[0122] S. Webb, Nature 2018, 554, 555.
[0123] X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, A. Ozcan, Science 2018, 361, 1004.
[0124] E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O'Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, S. Finkbeiner, Cell 2018, 173, 792.
[0125] Y. Wu, H. Shroff, Nat. Methods 2018, 15, 1011.
[0126] P. Zhang, S. Liu, A. Chaurasia, D. Ma, M. J. Mlodzianoski, E. Culurciello, F. Huang, Nat. Methods 2018, 15, 913.
[0127] A. S. Adamson, A. Smith, JAMA Dermatology 2018, 154, 1247.
[0128] T. Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, C. L. Zitnick, in Lect. Notes Comput. Sci. (Including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), 2014, pp. 740-755.
[0129] J. Schmidhuber, Neural Networks 2015, 61, 85.
[0130] Y. LeCun, Y. Bengio, G. Hinton, Nature 2015, 521, 436.
[0131] Y. J. Heo, D. Lee, J. Kang, K. Lee, W. K. Chung, Sci. Rep. 2017, 7, 1.
[0132] Z. Zhang, J. Ge, Z. Gong, J. Chen, C. Wang, Y. Sun, Int. J. Lab. Hematol. 2020, DOI 10.1111/ijlh.13380.
[0133] V. Anagnostidis, B. Sherlock, J. Metz, P. Mair, F. Hollfelder, F. Gielen, Lab Chip 2020, 20, 889.
[0134] M. Girault, H. Kim, H. Arakawa, K. Matsuura, M. Odaka, A. Hattori, H. Terazono, K. Yasuda, Sci. Rep. 2017, 7, DOI 10.1038/srep40072.
[0135] A. Chu, D. Nguyen, S. S. Talathi, A. C. Wilson, C. Ye, W. L. Smith, A. D. Kaplan, E. B. Duoss, J. K. Stolaroff, B. Giera, Lab Chip 2019, 19, 1808.
[0136] J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, K. Murphy, in Proc.—30th IEEE Conf. Comput. Vis. Pattern Recognition, CVPR 2017, 2017, pp. 3296-3305.
[0137] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Y. Fu, A. C. Berg, in Lect. Notes Comput. Sci. (Including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), 2016, pp. 21-37.
[0138] A. Mousavian, D. Anguelov, J. Košecká, J. Flynn, in Proc.—30th IEEE Conf. Comput. Vis. Pattern Recognition, CVPR 2017, 2017, pp. 5632-5640.
[0140] S. Ren, K. He, R. Girshick, J. Sun, in Adv. Neural Inf. Process. Syst., 2015, pp. 91-99.
[0141] A. Neubeck, L. Van Gool, in Proc. - Int. Conf. Pattern Recognit., 2006, pp. 850-855.
[0142] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, H. Adam, arXiv 2017, 1704.04861.
[0143] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 2015, pp. 1-9.
[0144] K. He, X. Zhang, S. Ren, J. Sun, in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 2016, pp. 770-778.
[0145] A. C. Schapiro, T. T. Rogers, N. I. Cordova, N. B. Turk-Browne, M. M. Botvinick, in ArXiv Prepr., 2016.
[0146] J. Jo, Y. C. Moo, D. S. Koh, Biophys. J. 2007, 93, 2655.
[0147] H. Huang, X. He, Appl. Phys. Lett. 2014, 105, 143704.
[0148] MicroChem, “SU-8 2000 (2000.5-2015) Permanent Epoxy Negative Photoresist PROCESSING GUIDELINES FOR: SU-8 2100 and SU-8 2150,” can be found under www.atgc.cajp, 2010.
[0149] It will be understood that various modifications may be made to the aspects and features disclosed herein. Therefore, the above description should not be construed as limiting, but merely as exemplifications of various aspects and features. Those skilled in the art will envision other modifications within the scope and spirit of the claims appended thereto.