Magnetically modulated computational cytometer and methods of use
12038370 · 2024-07-16
Assignee
Inventors
- Aydogan Ozcan (Los Angeles, CA)
- Aniruddha Ray (Los Angeles, CA, US)
- Yibo Zhang (Los Angeles, CA, US)
- Dino Di Carlo (Los Angeles, CA)
Cpc classification
B03C1/01
PERFORMING OPERATIONS; TRANSPORTING
F28F21/08
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
G03H2001/005
PHYSICS
B01D35/06
PERFORMING OPERATIONS; TRANSPORTING
B03C1/288
PERFORMING OPERATIONS; TRANSPORTING
G01N2015/1454
PHYSICS
F28F21/06
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
G01N2015/1445
PHYSICS
B03C2201/26
PERFORMING OPERATIONS; TRANSPORTING
B03C1/02
PERFORMING OPERATIONS; TRANSPORTING
F28F25/082
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
B03C1/30
PERFORMING OPERATIONS; TRANSPORTING
G03H2222/12
PHYSICS
B03C2201/18
PERFORMING OPERATIONS; TRANSPORTING
F28F2275/06
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
International classification
B03C1/01
PERFORMING OPERATIONS; TRANSPORTING
B03C1/02
PERFORMING OPERATIONS; TRANSPORTING
G03H1/00
PHYSICS
Abstract
A computational cytometer operates using magnetically modulated lensless speckle imaging, which introduces oscillatory motion to magnetic bead-conjugated rare cells of interest through a periodic magnetic force and uses lensless time-resolved holographic speckle imaging to rapidly detect the target cells in three-dimensions (3D). Detection specificity is further enhanced through a deep learning-based classifier that is based on a densely connected pseudo-3D convolutional neural network (P3D CNN), which automatically detects rare cells of interest based on their spatio-temporal features under a controlled magnetic force. This compact, cost-effective and high-throughput computational cytometer can be used for rare cell detection and quantification in bodily fluids for a variety of biomedical applications.
Claims
1. A cytometer device comprising: an optically transparent sample holder configured to hold a volume of sample therein, the volume of sample containing one or more objects therein with at least some of the one or more objects containing magnetic particles bound or conjugated thereto; a moveable scanning head disposed adjacent to the optically transparent sample holder, the moveable scanning head comprising a lensless imaging module comprising one or more illumination sources configured to illuminate the sample holder from a first side and an image sensor disposed on a second side of the sample holder, the image sensor configured to capture a plurality of diffraction patterns created by one or more objects within the volume of sample, the moveable scanning head further comprising a plurality of electromagnets located laterally adjacent to the lensless imaging module; and a translation stage coupled to the moveable scanning head and configured to move the moveable scanning head along different regions of the optically transparent sample holder.
2. The cytometer device of claim 1, wherein the plurality of electromagnets comprise first and second electromagnets that are alternatingly driven 180° out of phase with respect to each other.
3. The cytometer device of claim 1, wherein the plurality of electromagnets are driven at a frequency between about 0.01 Hz and about 100 kHz.
4. The cytometer device of claim 1, wherein the optically transparent sample holder comprises one of a capillary, tube, flow cell, or microfluidic channel.
5. The cytometer device of claim 1, wherein the plurality of electromagnets each comprise respective permalloy rods associated therewith.
6. The cytometer device of claim 1, further comprising a computing device operatively connected to the cytometer device and configured to receive a plurality of images or video obtained by the image sensor.
7. The cytometer device of claim 6, the computing device further comprising image processing software configured to identify candidate objects of interest in the images or video and classify the objects of interest as a target object of interest or not a target object of interest.
8. The cytometer device of claim 7, wherein the image processing software performs drift correction prior to identifying candidate objects of interest.
9. The cytometer device of claim 7, wherein the image processing software inputs a plurality of images or video to a trained neural network to classify the objects of interest.
10. The cytometer device of claim 9, wherein the trained neural network comprises a fully connected trained neural network.
11. The cytometer device of claim 1, wherein the one or more objects comprise cells conjugated with magnetic particles.
12. The cytometer device of claim 1, further comprising one or more additional optically transparent sample holders, wherein each additional optically transparent sample holder is associated with a moveable scanning head disposed adjacent to the optically transparent sample holder, the moveable scanning head comprising a lensless imaging module comprising one or more illumination sources configured to illuminate the sample holder from a first side and an image sensor disposed on a second side of the sample holder, the image sensor configured to capture a plurality of diffraction patterns created by one or more objects within the volume of sample, the moveable scanning head further comprising a plurality of electromagnets located laterally adjacent to the respective lensless imaging module.
13. A method of identifying one or more target objects among non-target objects within a sample comprising: conjugating the one or more target objects with one or more magnetic particles; loading an optically transparent sample holder with a sample containing the conjugated target object(s) and non-target objects; applying an alternating magnetic field to the sample holder containing the sample; illuminating the optically transparent sample holder with illumination from one or more light sources and capturing a plurality of images or video of diffraction patterns generated by the target object(s) and non-target objects within the sample while the alternating magnetic field is applied; and subjecting the plurality of images or video to image processing to identify candidate target object(s).
14. The method of claim 13, further comprising inputting images or video of the candidate target object(s) to a trained neural network that outputs a classification of the candidate target object(s) as a target object or non-target object.
15. The method of claim 13, further comprising inputting images or video of the candidate target object(s) to a machine learning software algorithm that outputs a classification of the candidate target object(s) as a target object or non-target object.
16. The method of claim 13, wherein the image processing software performs drift correction prior to identifying candidate target object(s) of interest.
17. The method of claim 14, wherein the trained neural network comprises a fully connected trained neural network.
18. The method of claim 13, wherein the target object comprises a cell or cluster of cells.
19. The method of claim 18, wherein the target object comprises a cancer cell or cluster of cancer cells.
20. The method of claim 13, wherein the target object comprises a cell or cluster of cells of a particular phenotype, morphology, shape, size, or genotype.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS
(13) As noted above, in some embodiments, there are multiple optically transparent sample holders 14 contained in the cytometer device 10, which permits parallel processing of multiple samples (or larger volumes of the same sample). The cytometer device 10 includes at least one moveable scanning head 16 that is disposed adjacent to the optically transparent sample holder 14.
(14) In some cases, the optically transparent sample holder 14 may pass through all or a portion of the scanning head(s) 16. The moveable scanning head(s) 16 includes several components. First, the scanning head 16 includes a lensless imaging module 18 that is used to illuminate the sample within the sample holder 14 with light and obtain a plurality of diffraction pattern images or a movie 100 over a period of time of objects 90 (
(15) Certain objects 90 are target objects that are desired to be identified or classified. These target objects 90 are to be distinguished from non-target objects 90. In one example, the target objects 90 comprise cells of a particular type or phenotype, morphology, shape, size, or genotype. For example, the target objects 90 may comprise cancer cells or a certain type of cancer cells such as circulating tumor cells (CTCs). The target objects 90 are conjugated to one or more magnetic particles 92. Particles 92 may include magnetic beads or the like. The one or more magnetic particles 92 may be superparamagnetic. An example of such particles 92 includes Dynabeads® (Invitrogen, Carlsbad, California, USA). The presence of the conjugated or bound magnetic particle(s) 92, and their response to the externally applied magnetic field, is what allows for the identification/classification of the target objects 90 as explained herein.
(16) The lensless imaging module 18 includes one or more illumination sources 20 configured to illuminate the sample from a first side of the sample holder 14 and an image sensor 22 (e.g., CMOS sensor) disposed on a second side of the sample holder 14. In one configuration, the one or more illumination sources 20 include a laser diode although light emitting diodes (LEDs) may also be used. The one or more illumination sources 20 may be driven by on-board driver circuitry (not shown) located in the scanning head 16, for example. As described herein, the one or more illumination sources 20 include a laser diode (650-nm wavelength, AML-N056-650001-01, Arima Lasers Corp., Taoyuan, Taiwan) for illumination, which has an output power of ~1 mW. In one configuration, the one or more illumination sources 20 emit light onto the top of the sample holder 14 while the image sensor 22 captures time series of speckle pattern images 100 from the bottom of the sample holder 14.
(17) The image sensor 22 is configured to capture a time series of speckle pattern images 100 created by the one or more objects 90 within the volume of sample. The moveable scanning head(s) 16 further include, in one embodiment, first and second electromagnets 24, 26 located laterally adjacent to the lensless imaging module 18. That is to say, first and second electromagnets 24, 26 are located on either side of the lensless imaging module 18. In some embodiments, optional permalloy (nickel-iron magnetic alloy) rods 28 are used with each of the electromagnets 24, 26 to enhance or relay the magnetic force on the objects 90. In other embodiments, more than two electromagnets 24, 26 may be used with the moveable scanning head(s) 16. The plurality of electromagnets 24, 26 are driven using dedicated circuitry and/or a function generator 30 (
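The alternating actuation described above (and claimed as two electromagnets driven 180° out of phase) can be sketched in software. The snippet below is a minimal illustration, not the patent's drive circuitry; the function name and sampling parameters are assumptions. It synthesizes two complementary square-wave control signals so that exactly one electromagnet is energized at any instant:

```python
import numpy as np

def antiphase_square_waves(freq_hz, duration_s, sample_rate_hz):
    """Two square-wave drive signals, 180 degrees out of phase, so that
    exactly one electromagnet is energized (value 1.0) at any instant."""
    t = np.arange(0.0, duration_s, 1.0 / sample_rate_hz)
    phase = (t * freq_hz) % 1.0            # fractional position in each cycle
    drive_a = (phase < 0.5).astype(float)  # first half-cycle: magnet A on
    drive_b = 1.0 - drive_a                # second half-cycle: magnet B on
    return t, drive_a, drive_b
```

Complementary waveforms like these pull the bead-conjugated cells alternately toward each magnet, producing the periodic oscillatory motion that the imaging exploits.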
(18) The cytometer device 10 further includes a translation stage 32 mechanically coupled to the moveable scanning head(s) 16 and configured to move the moveable scanning head(s) 16 along different regions of the optically transparent sample holder 14. For example, the translation stage 32 may be a linear translation stage 32 that moves the scanning head(s) 16 to different regions on the sample holder 14. In the embodiment illustrated in
(19) The stepper motor 38 is operated by a driver/drive circuitry (not shown) that is controlled via a microcontroller 44. The microcontroller 44 interfaces with a computing device 46 that is used to control the operation of the cytometer device 10 as well as process the images/videos 100 that are acquired by the image sensor 22. The computing device 46 may include a laptop as illustrated but it may also include, for example, a personal computer, tablet PC, mobile phone, or remote computer such as a server or the like. In some embodiments, various tasks or operations may be divided between multiple computing devices 46. For example, one computing device 46 may be used to control the cytometer device 10 and acquire the images/videos 100. Another computing device 46 may run the trained neural network 112 and results may be returned to the controlling computing device 46 (or another computing device 46 entirely). Of course, these tasks may be consolidated into a single computing device 46. The computing device 46 includes image processing software 110 that is executed thereon or thereby and that is used to process the images/videos obtained from the image sensor 22. The computing device 46 may also be integrated into the cytometer device 10 in some embodiments. Some computations or image processing may also take place within the microcontroller 44.
(20) With reference to
(21) The image processing software 110 then uses a high-pass filtered back-propagation step that calculates holographic images at different axial distances within the three-dimensional sample (see
Experimental
(22) Characterization of the Oscillation of Bead-Cell Conjugates Under Alternating Magnetic Force
(23) The detection technique capitalizes on the periodic oscillatory motion of the target objects 90 of interest (i.e., cells labeled with a large number of magnetic particles 92) to specifically detect them with high throughput. A pair of electromagnets 24, 26 were used to exert a periodic, alternating magnetic force on the magnetic particles 92 bound to these cells of interest 90 (
(24) The movement of MCF7 cells 90 conjugated with Dynabeads® 92 was recorded by mounting the magnetic actuator and the labeled cells onto a 40×/0.6NA benchtop microscope (see
(25) Various unbound magnetic beads 92 and clusters of beads 92 are also observed within the sample (
(26) Cell Detection and Classification Using CMA and Deep Learning
(27) The sample, which contains the periodically oscillating target cells 90 and other types of unwanted background particles or objects 90, is illuminated with coherent light. The interference pattern recorded by the CMOS image sensor 22 represents an in-line hologram of the target cells 90, which is partially obscured by the random speckle noise resulting from the background particles, including other unlabelled cells, cell debris and unbound magnetic particles 92. Recorded at 26.7 frames per second using the CMOS image sensor 22, these patterns exhibit spatio-temporal variations that are partially due to the controlled cell motion. This phenomenon is exploited for the rapid detection of magnetic-bead-conjugated rare cells 90 from a highly complex and noisy background.
(28) The cell candidates 90 that are detected in this preliminary screening step contain a large number of false positives, which mainly result from unbound magnetic beads 92 that form clusters under the external magnetic field. Therefore, another classification step was used (
(29) An autofocusing step is applied to each candidate object 90 to create an in-focus amplitude and phase video, which is then classified (as positive/negative) by a densely connected P3D CNN 112. These classification results are used to generate the final rare cell detection decisions and cell concentration measurements. The CNN was trained and validated with manually labelled video clips generated from ten samples that were used solely for creating the training/validation datasets. This training needs to be performed only once for a given type of cell-bead conjugate.
(30) Evaluation of System Performance
(31) To quantify the LoD of the platform 10 for detecting MCF7 cells 90 in human blood, cultured MCF7 cells 90 were spiked into whole blood at various concentrations, and the technique was used to detect the spiked MCF7 cells. Using spiked samples instead of clinical samples provides a well-defined system to characterize and quantify the capabilities of the platform, which is an important step before moving to clinical samples in the future. In each experiment, 4 mL of MCF7-spiked whole human blood at the desired concentration was prepared. Then, the procedure in
(32) MCF7 concentrations of 0 mL.sup.-1 (negative control), 10 mL.sup.-1, 100 mL.sup.-1 and 1000 mL.sup.-1 were tested, where three samples for each concentration were prepared and independently measured.
(33) Because the training of the deep neural network 112 inherently includes randomness, the repeatability of the network training process was further evaluated. For this, the training data was randomly and equally divided into five subsets, and five individual networks 112 were trained by assigning one different subset as the validation dataset and the combination of the remaining four subsets as the training dataset. Each of the five networks was blindly tested to generate the serial dilution results. The mean and standard deviation of the detected concentrations resulting from the five networks are shown in
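The five-network repeatability test above amounts to a standard five-fold cross-validation split. A minimal sketch is given below (the function name is illustrative and only the partitioning logic is shown, not the network training itself):

```python
import random

def five_fold_partitions(items, seed=0):
    """Randomly split `items` into 5 equal subsets and yield
    (train, validation) pairs, where each subset serves once as the
    validation set and the remaining four subsets form the training set."""
    shuffled = items[:]
    random.Random(seed).shuffle(shuffled)
    folds = [shuffled[i::5] for i in range(5)]  # 5 disjoint subsets
    for i in range(5):
        val = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, val
```

Training one network per (train, validation) pair and comparing the five resulting concentration estimates quantifies the variability introduced by the randomness of training.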
(34) The under-detection of the system is due to a combination of systematic errors and random factors. A major cause of under-detection is the tuning of the classification network 112. In the preliminary screening step, because there are typically a large number of false positive detections and a low number of true positive detections (since the target cells are quite rare), the classifier must be tuned to have an extremely low false positive rate (FPR) in order to achieve a low LoD. To satisfy this, a widely used tuning approach was adopted, in which a decision threshold that yields a zero FPR is selected based on the training/validation dataset. However, an inevitable side effect of reducing the FPR is a reduction in the true positive rate (TPR). Based on the validation results, when a decision threshold of 0.999999 was used, the TPR dropped to 10.5%. This explains a major portion of the reduced detection rate that was observed in the serial dilution tests (
(35) TABLE 1
Concentration (mL.sup.-1)            Before enrichment   After enrichment
Total labeled MCF7 cells             1.1×10.sup.5        9.4×10.sup.4
Non-clustering labeled MCF7 cells                        4.7×10.sup.4
Labeled MCF7 cell clusters                               1.7×10.sup.4
Blood cells                          ~5×10.sup.9*        1.6×10.sup.6
Magnetic beads                                           1.3×10.sup.6
Bead clusters                                            1.1×10.sup.5
*Estimated based on the average healthy human blood cell concentration.
(36) Table 1 shows the concentrations of different types of cells and particles in the sample before and after the magnetic enrichment. MCF7 cells were spiked into a whole blood sample at a concentration of 1.1×10.sup.5 mL.sup.-1, and enrichment was performed following the procedure reported in
(37) The remainder of the under-detection and the fluctuations in the detection rate at different concentrations may be associated with various other factors, e.g., sample handling errors (especially at low cell concentrations), clustering of the target cells, and non-uniform labelling of cells 90 with magnetic beads 92. In fact, MCF7 cells are known to form clusters and have thus been extensively used for preparing in vitro tumour models. In an experiment where MCF7 cells were spiked at a concentration of 1.1×10.sup.5/mL (Table 1), it was observed that ~50% of the MCF7 cells formed clusters after enrichment. However, the amount of clustering is expected to be lower at decreased MCF7 concentrations, which partially explains the reduced detection efficiency at higher cell concentrations. This clustering of cells not only reduces the overall number of target entities but may also alter their oscillation patterns, causing them to be misclassified by the classifier.
(38) Discussion
(39) The computational cytometry technique may be applied for the detection of various types of rare cells 90 in blood or other bodily fluids using appropriately selected ligand-coated magnetic beads 92. There are several advantages of the magnetically modulated speckle imaging technique. The first important advantage is its ability to detect target rare cells 90 without any additional modification such as labeling with fluorescent or radioactive compounds. The same magnetic beads 92 that are used for capturing and isolation of target cells 90 from whole blood are also used for the purpose of periodic cell modulation and specific detection within a dense background. False positives are mitigated by identifying the controlled spatio-temporal patterns associated with the labeled target cells 90 through a trained deep neural network 112.
(40) Compared to existing approaches, the technique also has the advantages of a relatively low LoD, rapid detection and low cost, which makes it suitable for sensitive detection of rare cells 90 in resource-limited settings. For example, fluorescence imaging and Raman microscopy have been widely used to detect rare cells and have been shown to have very low LoDs (e.g., ~1 cell/mL), but they are typically limited by a high system cost and complexity.
(41) The entire prototype of the computational cytometer 10 shown in
(42) Methods
(43) Cell Preparation
(44) MCF7 cell lines were purchased from ATCC (Manassas, Virginia, USA). Cells were plated with 10 mL of growth media in a T75 flask (Corning Inc., New York, USA) at a concentration of 1×10.sup.5 cells/mL. The growth media was composed of Dulbecco's Modified Eagle Medium (DMEM, Gibco®, Life Technologies, Carlsbad, California, USA) supplemented with 10% (v/v) fetal bovine serum (FBS, Gibco®, Life Technologies, Carlsbad, California, USA) and 1% penicillin-streptomycin (Sigma-Aldrich Co., St. Louis, Missouri, USA). Cells 90 were grown in a humidified incubator at 37° C. in a 5% CO.sub.2 environment. Cells were harvested 2-3 days after seeding, depending on confluency, by treating them with 0.25% trypsin-EDTA (Gibco®, Life Technologies, Carlsbad, California, USA) for 3 min. Then, cells 90 were pelleted by centrifuging for 3 min at 1200 RPM and resuspended in the growth media to a final concentration of 1×10.sup.6 cells/mL.
(45) Sample Preparation
(46) Rare cell dilution: The MCF7 cells 90 were serially diluted in Dulbecco's phosphate-buffered saline (DPBS, Sigma-Aldrich Co., St. Louis, Missouri, USA) at different concentrations (2×10.sup.4 cells/mL, 2×10.sup.3 cells/mL, and 2×10.sup.2 cells/mL). The dilution of MCF7 cells 90 in whole blood was prepared by mixing the cell solution with whole blood at a ratio of 1:19 (v/v). Most of the experiments were performed by mixing 200 µL of cell solution with 3.8 mL of whole blood. Healthy human whole blood (from anonymous and existing samples) was obtained from the UCLA Blood and Platelet Center.
(47) Bead washing: CELLection Epithelial Enrich Dynabeads® 92 (Invitrogen, Carlsbad, California, USA) were first resuspended in DPBS and vortexed for 30 sec. A magnet (DX08B-N52, K&J Magnetics, Inc., Pipersville, Pennsylvania, USA) was then used to separate the Dynabeads® 92 and the supernatant was discarded. This process was repeated three times, and the Dynabeads® 92 were resuspended in DPBS at the initial volume.
(48) Rare cell separation: The washed Dynabeads® 92 were then added to the MCF7-spiked whole blood sample at a concentration of 2.5 µL beads per 1.0 mL of blood sample. The mixture was incubated for 30 min with gentle tilting and rotation. A magnet was placed under the vial for 5 min and the supernatant was discarded after that. To this solution, 1 mL of cold DPBS buffer was added and mixed gently by tilting from side to side. This magnetic separation procedure was repeated five times. After the final step, the sample was resuspended in 0.7 mL of DPBS and gently mixed with 2.5 mL of 400 cP methyl cellulose solution (Sigma-Aldrich Co., St. Louis, Missouri, USA) using a pipette. The sample was incubated for 5 min to reduce the number of bubbles before it was loaded into a glass capillary tube 14 (Part #BRT 2-4-50; cross-section inner dimension of 2 mm×4 mm; Friedrich & Dimmock, Inc., Millville, New Jersey, USA). The ends of the capillary tube 14 were sealed with parafilm before the tube 14 was mounted onto the computational cytometer 10 for imaging and cell screening.
(49) Design of the Computational Cytometer Based on Magnetically Modulated Lensless Speckle Imaging
(50) As shown in
(51) The translation stage 32 (i.e., linear translation stage) is custom-built using off-the-shelf components. A bipolar stepper motor 38 (No. 324, Adafruit Industries LLC., New York, USA) with two timing pulleys 40 and a timing belt 36 is used to provide mechanical actuation, and the lensless imaging module 18 is guided by a pair of linear motion sliders and linear motion shafts 34 on either side of the scanning head. 3D-printed plastic is used to construct the housing for the scanning head 16, and laser-cut acrylic is used to create the outer shell or enclosure 12 of the device 10.
(52) Image Acquisition
(53) After the sample is loaded into the capillary tube 14 and placed onto the computational cytometer 10, the image acquisition procedure begins. The translation stage 32 moves the scanning head 16 to a series of discrete positions along the glass tube 14. At each position, the stage 32 stops, allowing the CMOS image sensor 22 to capture a sequence of 120 holograms at a frame rate of 26.7 fps before moving on to the next position. The image data are saved to a solid-state drive (SSD) (which may be disposed in the computing device 46) for storage and further processing.
(54) Because the FOV corresponding to the edges (i.e., top and bottom rows) of the image sensor 22 is subject to a highly unbalanced magnetic force field due to its proximity to one of the electromagnets 24, 26, only the central 1374 rows of the image sensor's 22 pixels are used to capture the image sequence, where the magnetic forces from the two electromagnets 24, 26 are relatively balanced.
(55) Because the temperature of the CMOS image sensor 22 quickly rises when it is turned on, it tends to cause undesired flow inside the glass tube 14 due to convection. Therefore, a scanning pattern is engineered to reduce the local heating of the sample: if one denotes 1, 2, . . . , 32 as the indices of the spatially adjacent scanning positions, the scanning pattern follows 1, 9, 17, 25, 2, 10, 18, 26, . . . . This scanning pattern ensures that a given part of the sample cools down before the scanning head 16 moves back to its neighborhood. The power to the image sensor 22 was also cut off during the transition between the two successive scanning positions, which was implemented by inserting a MOSFET-based switch into the power line of the USB cable.
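The interleaved scanning order described above (1, 9, 17, 25, 2, 10, 18, 26, …) can be generated programmatically. A small sketch, assuming 32 positions visited in four interleaved passes (the function name is illustrative):

```python
def interleaved_scan_order(n_positions=32, n_groups=4):
    """Visit order that maximizes the spacing between consecutively
    scanned positions, so each region of the sample cools down before
    the scanning head returns to its neighborhood.
    E.g., 32 positions in 4 passes -> 1, 9, 17, 25, 2, 10, 18, 26, ..."""
    stride = n_positions // n_groups  # gap between consecutive visits
    return [start + k * stride
            for start in range(1, stride + 1)
            for k in range(n_groups)]
```

Each consecutive pair of visited positions is separated by a quarter of the tube length, which gives the previously heated region time to re-equilibrate before its neighbors are imaged.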
(56) Computational Detection and Localization of Cell Candidates and Deep Learning-Based Classification
(57) The image processing procedure (
(58) 1. Preliminary Screening
(59) Computational Drift Correction
(60) The sample fluid in the glass capillary tube 14 often drifts slowly throughout the duration of the image acquisition, due to, e.g., imperfect sealing at the ends of the tube and convection driven by the heat from the image sensor 22. Because the detection and classification of the target cells 90 are largely based on their periodic motion, this drift must be corrected. Since the sample is embedded within a viscous methyl cellulose solution, minimal turbulent flow is observed, and the drifting motion within the imaged FOV is almost purely translational. A phase correlation method was used to estimate the relative translation of each frame in the sequence with respect to a reference frame (chosen to be the middle frame in the holographic image sequence), and 2D bilinear interpolation was used to remove the drift between frames (
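A minimal sketch of phase-correlation drift estimation is given below. It assumes integer-pixel shifts and periodic boundaries, whereas the actual implementation uses 2D bilinear interpolation for sub-pixel correction; the function names are illustrative:

```python
import numpy as np

def estimate_shift(reference, frame):
    """Integer-pixel drift estimate via phase correlation: the
    normalized cross-power spectrum of the two frames has a sharp
    peak at their relative translation."""
    R = np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))
    R /= np.abs(R) + 1e-12                    # keep phase information only
    corr = np.abs(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the wrapped half of the spectrum to negative shifts
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return dy, dx

def correct_drift(frames, ref_index=None):
    """Align every frame to the middle (reference) frame of the sequence."""
    ref = frames[len(frames) // 2 if ref_index is None else ref_index]
    return [np.roll(f, estimate_shift(ref, f), axis=(0, 1)) for f in frames]
```

Using the middle frame as the reference, as in the text, halves the maximum drift that any single frame must be corrected by.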
(61) Detection of Target Cell Candidates
(62) The detection of the target cell candidates 90 plays a key role in automatically analyzing the sample, because it greatly narrows down the search space for the rare cells of interest and allows the subsequent deep learning-based classification to be applied to a limited number of holographic videos. In the preliminary screening stage, the lateral locations of the MCF7 candidate cells 90 are detected (
B.sub.i(z.sub.j)=HP[P(A.sub.i,z.sub.j)]  (1)
(63) where HP(·) denotes the high-pass filter, P(·) denotes the angular spectrum propagation operator, A.sub.i denotes the i-th frame of the raw hologram sequence after the drift correction, and z.sub.j denotes the j-th propagation (axial) distance. The selected propagation distances ranged from 800 µm to 5000 µm with a step size of 100 µm to ensure coverage of all possible MCF7 candidates 90 within the sample tube. A zoomed-in image of B.sub.i(z.sub.j) corresponding to an example region is shown in
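The back-propagation step can be sketched as a generic angular spectrum propagator followed by a simple Fourier high-pass. This is a sketch under stated assumptions, not the authors' exact filter: the high-pass cutoff is an arbitrary illustrative value, while the 650-nm wavelength and 1.67-µm pixel size come from the text:

```python
import numpy as np

def angular_spectrum_propagate(field, z_m, wavelength_m, pixel_m):
    """Propagate a complex optical field by distance z using the
    angular spectrum method (free-space transfer function)."""
    ny, nx = field.shape
    fy = np.fft.fftfreq(ny, d=pixel_m)
    fx = np.fft.fftfreq(nx, d=pixel_m)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength_m**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z_m) * (arg > 0)     # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def highpass(image, cutoff_px=4):
    """Simple Fourier high-pass: zero out the lowest spatial frequencies
    (cutoff_px is an illustrative parameter, not from the patent)."""
    F = np.fft.fftshift(np.fft.fft2(image))
    cy, cx = np.array(F.shape) // 2
    F[cy - cutoff_px:cy + cutoff_px + 1, cx - cutoff_px:cx + cutoff_px + 1] = 0
    return np.fft.ifft2(np.fft.ifftshift(F))
```

Sweeping `z_m` from 800 µm to 5000 µm in 100-µm steps and high-pass filtering each result reproduces the structure of equation (1).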
(64) Next, for every given propagation distance, a CMA algorithm, which focuses on periodic changes in the recorded frames, was applied to reveal the oscillatory motion of the target cells 90 within the sample:
(65)
(66) To simplify segmentation, a maximum intensity projection along the axial direction (i.e., z) was applied to flatten the 3D image stack into a 2D image, which can be written as:
(67)
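The axial flattening step can be sketched directly. The maximum intensity projection follows the text; the threshold-based candidate pick-off below is a hypothetical stand-in for the segmentation actually used:

```python
import numpy as np

def max_intensity_projection(stack):
    """Flatten a 3D motion-strength stack C(x, y; z_j), with axis 0 = z,
    into a 2D map by keeping the maximum over all depths at each (x, y)."""
    return stack.max(axis=0)

def detect_candidates(mip, threshold):
    """Hypothetical segmentation: return the (row, col) positions whose
    projected motion value exceeds `threshold`."""
    ys, xs = np.nonzero(mip > threshold)
    return list(zip(ys.tolist(), xs.tolist()))
```

Because a candidate cell produces a strong periodic signal at its in-focus depth, the projection preserves it regardless of where it sits in the 2-mm-deep volume, so the 2D segmentation need only be run once per FOV.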
2. Classification
Autofocusing and Video Generation
(68) After the preliminary screening, which identifies the lateral centroids of potential target cell candidates 90, the subsequent processing is applied to each MCF7 candidate 90 only within its local area. Autofocusing.sup.62,63 was first performed to locate the MCF7 candidate in the axial direction. Because C(x, y; z.sub.j) should have a higher value when approaching the in-focus position of each MCF7 candidate 90, the approximate axial position was obtained by maximizing (as a function of z.sub.j) the sum of the pixel values of C(x,y;z.sub.j) (j=1, 2, . . . , N.sub.H) in a local neighborhood around each individual MCF7 candidate 90. A local neighborhood size of 40×40 pixels (i.e., 66.8 µm×66.8 µm) was used.
(69) This process can be written as follows:
(70)
(71) The same criterion to find the focus plane can be applied again with finer axial resolution to obtain a more accurate estimation of the axial distance for each MCF7 candidate 90. A step size of 10 ?m was used in this refined autofocusing step. Two examples of this process are shown in
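The coarse-to-fine autofocusing described above can be sketched generically. Here `score_fn` stands in for the local sum of C(x, y; z.sub.j) around one candidate, and the step sizes mirror the 100-µm coarse and 10-µm fine search; the function name and signature are illustrative:

```python
import numpy as np

def autofocus(score_fn, z_coarse, fine_step, fine_halfspan):
    """Two-pass autofocusing: pick the coarse z that maximizes the focus
    criterion, then refine around it with a smaller step size.
    score_fn(z) returns the focus criterion (e.g., the local sum of the
    motion-strength map) evaluated at axial distance z."""
    z0 = max(z_coarse, key=score_fn)              # coarse pass (100 um grid)
    fine = np.arange(z0 - fine_halfspan,
                     z0 + fine_halfspan + fine_step / 2, fine_step)
    return max(fine, key=score_fn)                # refined pass (10 um grid)
```

Refining only around the coarse maximum avoids re-propagating the full hologram to every one of the 10-µm planes across the entire 800-5000 µm range.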
(72) Finally, the in-focus amplitude and phase video corresponding to each MCF7 candidate 90 was generated by digitally propagating every frame of the drift-corrected hologram sequence to the candidate's in-focus plane. The final video has 120 frames at 26.67 fps with both the amplitude and phase channels, and each frame has a size of 64?64 pixels (pixel size=1.67 ?m). Two examples corresponding to two cell candidates 90 are shown in
(73) Target Cell Detection Using Densely Connected P3D CNN
(74) Each video of the MCF7 candidate 90 was fed into a classification neural network 112 (
(75) The detailed structure of the densely connected P3D CNN 112 is shown in
m.sub.p+1=Max[Conv.sub.t(Conv.sub.s(m.sub.p)⊕m.sub.p)⊕(Conv.sub.s(m.sub.p)⊕m.sub.p)]  (5)
(76) For example, consider an input video with a size of c×t×h×w where c, t, h and w denote the number of channels, number of frames (time), height and width of each frame (space), respectively. Here, c=2, t=120, and h=w=64. First, the video passes through a 1×7×7 spatial convolutional layer 50 (stride=2) and a 9×1×1 temporal convolution layer 52 (stride=3) sequentially. The output channel numbers of the layers are included in
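The pseudo-3D factorization, a spatial 1×k×k convolution followed by a temporal k×1×1 convolution, can be illustrated for a single channel and a single filter. This is only a sketch: it uses 'valid' padding and omits the multi-channel bookkeeping, dense connections and nonlinearities of the actual network 112:

```python
import numpy as np

def conv_spatial(video, kernel, stride=2):
    """One k-by-k spatial filter applied to every frame (a 1 x k x k conv).
    video: (t, h, w); kernel: (k, k); 'valid' padding."""
    t, h, w = video.shape
    k = kernel.shape[0]
    oh = (h - k) // stride + 1
    ow = (w - k) // stride + 1
    out = np.empty((t, oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = video[:, i*stride:i*stride+k, j*stride:j*stride+k]
            out[:, i, j] = np.tensordot(patch, kernel, axes=([1, 2], [0, 1]))
    return out

def conv_temporal(video, kernel, stride=3):
    """One length-k temporal filter applied at every pixel (a k x 1 x 1 conv)."""
    t, h, w = video.shape
    k = kernel.shape[0]
    ot = (t - k) // stride + 1
    out = np.empty((ot, h, w))
    for n in range(ot):
        out[n] = np.tensordot(kernel, video[n*stride:n*stride+k], axes=(0, 0))
    return out
```

Factorizing a full t×h×w convolution into these two passes keeps the parameter count low while still letting the network learn both the spatial appearance and the temporal oscillation signature of a candidate.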
(77) Network Training and Validation
(78) Ten experiments (i.e., ten samples) were performed to create the training/validation datasets for the classifier 112, and the trained classifier was then used to perform blind testing on additional serial dilution experimental data (
(79) Next, the training/validation datasets were randomly partitioned into a training set and a validation set with no overlap between the two. The training set contained 1713 positive videos and 11324 negative videos. The validation set contained 788 positive videos and 3622 negative videos. The training dataset was further augmented by randomly mirroring and rotating the frames by 90°, 180° and 270°. The convolutional layer weights were initialized using a truncated normal distribution, while the weights for the FC layer were initialized to zero. Trainable parameters were optimized using an adaptive moment estimation (Adam) optimizer with a learning rate of 10.sup.-4 and a batch size of 240. The network converged after ~800-1000 epochs. The network structure and hyperparameters were first optimized to achieve high sensitivity and specificity for the validation set. At a default decision threshold of 0.5, a sensitivity and specificity of 78.4% and 99.4%, respectively, were achieved for the validation set; a sensitivity and specificity of 77.3% and 99.5%, respectively, were achieved for the training set. After this initial step, because the rare cell detection application requires the classifier to have a very low FPR, the decision threshold of the classifier was further tuned to avoid false positives. For this, the training and validation datasets were combined to increase the total number of examples, and the decision threshold (for positive classification) was gradually increased from 0.5 while monitoring the FPR for the combined training/validation dataset. It was found that a decision threshold of 0.99999 was able to eliminate all false positive detections in the combined training/validation dataset. The decision threshold was further raised to 0.999999 to account for potential overfitting of the network to the training/validation data and to further reduce the risk of false positive detections.
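The threshold-tuning procedure, raising the decision threshold until the FPR on the combined training/validation set reaches zero, can be sketched as follows; the function names and the `margin` parameter (which mirrors the manual raise from 0.99999 to 0.999999 as overfitting headroom) are illustrative:

```python
def zero_fpr_threshold(scores, labels, margin=0.0):
    """Smallest decision threshold yielding zero false positives on the
    given set: just above the highest classifier score assigned to any
    negative example. `margin` adds optional headroom against overfitting."""
    top_negative = max(s for s, y in zip(scores, labels) if y == 0)
    return min(1.0, top_negative + 1e-9 + margin)

def false_positive_rate(scores, labels, threshold):
    """Fraction of negative examples scored at or above the threshold."""
    negatives = [s for s, y in zip(scores, labels) if y == 0]
    return sum(s >= threshold for s in negatives) / len(negatives)
```

As the text notes, the side effect of pushing the FPR to zero is a sharp drop in TPR, since any true positives scored below the new threshold are also rejected.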
(80) At a decision threshold of 0.999999, as expected, the TPR dropped to 10.5%.
(81) Computation Time
(82) Using the current computer code, which is not optimized, it takes ~80 s to pre-process the data within one FOV (corresponding to a volume of 14.7 mm.sup.2×2 mm) for extracting the MCF7 cell candidates (the preliminary screening step).
(83) COMSOL Simulation of the Magnetic Force Field Generated by the Electromagnet and the Permalloy Rods
(84) Because of space constraints, the electromagnet could not be placed sufficiently close to the imaging area, which caused the magnetic force to be low. A custom-machined rod 42 made of permalloy (relative permeability μ.sub.r≈100,000) was used to relay the force field and enhance the relative magnetic force on target cells by ~40 times. A rod 42 was used for each electromagnet 24, 26. To simulate the magnetic force field distribution near an electromagnet with and without the permalloy rod, a finite element method (FEM) simulation was conducted using COMSOL Multiphysics (version 5.3, COMSOL AB, Stockholm, Sweden). A 3D model was developed using the magnetic field interface provided in the COMSOL AC/DC physics package. A stationary study was constructed based on the geometry of a commercially available electromagnet, where the core was modeled with a silicon steel cylinder (radius=3 mm, height=10 mm), and the coil was modeled with a surface current of 10 A/m on the side of the core running in the azimuthal direction. The permalloy rod was modeled using Permendur. A thick layer of air was added as a coaxial cylinder with a radius of 10 mm and a height of 30 mm. The magnetic flux density inside the simulation space was simulated using the magnetic field module. Then, a coefficient form PDE module in the mathematics library was used to derive the relative magnetic force field. The magnetic force that is received by superparamagnetic beads is given by:
(85) F=(V·Δχ/(2μ.sub.0))∇|B|.sup.2 where V is the bead volume, Δχ is the difference in magnetic susceptibility between the bead and the surrounding medium, μ.sub.0 is the vacuum permeability, and B is the magnetic flux density; the relative magnetic force field is therefore proportional to ∇|B|.sup.2.
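Since the relative force field on superparamagnetic beads scales with ∇|B|.sup.2, it can be evaluated numerically from any sampled flux-density field (e.g., exported from an FEM solver). The sketch below is illustrative; the (3, nz, ny, nx) array layout and uniform grid spacing are assumptions, not part of the disclosure:

```python
import numpy as np

def relative_magnetic_force(B, spacing):
    """Relative magnetic force field, proportional to grad(|B|^2).

    B: sampled flux density, shape (3, nz, ny, nx) -- one leading axis
       for the three vector components (hypothetical layout).
    spacing: uniform grid spacing along all three axes.
    Returns an array of shape (3, nz, ny, nx): the gradient of |B|^2.
    """
    B2 = np.sum(B**2, axis=0)                    # |B|^2 at each grid point
    return np.stack(np.gradient(B2, spacing))    # (d/dz, d/dy, d/dx) of |B|^2
```

For a field whose magnitude grows linearly along one axis, |B|.sup.2 grows quadratically and the computed force is linear along that axis, which is a convenient sanity check.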
(86) The simulation results compare the magnetic force field near the electromagnet with and without the permalloy rod.
(87) High-Pass Filtered Back-Propagation Using the Angular Spectrum Method
(88) The recorded holographic speckle images were back-propagated to different axial distances (i.e., z-distances) using the angular spectrum method with a high-pass filtered transfer function. Because the approximate size of the target cells of interest is known a priori, a high-pass filter was factored into the propagation transfer function in the spatial frequency domain, which was useful for suppressing noise and unwanted artifacts (i.e., larger objects outside the expected cell size range can be filtered out).
(89) The free-space propagation transfer function is given by:
(90) H(f.sub.x,f.sub.y;z)=exp[j2πz√(1/λ.sup.2−f.sub.x.sup.2−f.sub.y.sup.2)] (7) where λ is the illumination wavelength and z is the propagation distance; spatial frequencies with f.sub.x.sup.2+f.sub.y.sup.2>1/λ.sup.2 (evanescent waves) are set to zero. The high-pass filtered transfer function is then:
{tilde over (H)}(f.sub.x,f.sub.y;z)=H(f.sub.x,f.sub.y;z)·min{G.sub.1(f.sub.x,f.sub.y),G.sub.2(f.sub.x,f.sub.y)} (8) where G.sub.1 and G.sub.2 are the high-pass filters in the spatial frequency domain, given by
G.sub.1(f.sub.x,f.sub.y)=1−exp[−πσ.sub.1.sup.2(f.sub.x.sup.2+f.sub.y.sup.2)] (9)
and
G.sub.2(f.sub.x,f.sub.y)=1−exp[−πσ.sub.2.sup.2f.sub.y.sup.2] (10) where σ.sub.1=50 μm and σ.sub.2=117 μm. G.sub.1 was used mainly to suppress the low-frequency interference patterns caused by the various interfaces in the light path, and G.sub.2 was used mainly to suppress the unwanted diffraction patterns due to the grooves in the capillary tubes, which are a manufacturing artifact.
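The high-pass filtered back-propagation of Eqs. (8)-(10) can be sketched in Python/NumPy as follows. This is an illustrative sketch, not the disclosed implementation; the sensor pixel pitch dx and propagation distance are example assumptions, while σ.sub.1=50 μm and σ.sub.2=117 μm follow the values above:

```python
import numpy as np

def backpropagate(hologram, z, wavelength, dx, sigma1=50e-6, sigma2=117e-6):
    """Angular-spectrum propagation with the high-pass filters G1, G2.

    hologram:   2D field sampled on the sensor (pixel pitch dx, meters)
    z:          propagation distance in meters (negative z back-propagates)
    wavelength: illumination wavelength in meters
    """
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=dx)[None, :]       # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=dx)[:, None]
    # Free-space transfer function; evanescent components are zeroed
    arg = 1.0 / wavelength**2 - fx**2 - fy**2
    H = np.where(arg > 0,
                 np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    # Gaussian high-pass filters, Eqs. (9) and (10)
    G1 = 1 - np.exp(-np.pi * sigma1**2 * (fx**2 + fy**2))
    G2 = 1 - np.exp(-np.pi * sigma2**2 * fy**2)
    H_tilde = H * np.minimum(G1, G2)             # Eq. (8)
    return np.fft.ifft2(np.fft.fft2(hologram) * H_tilde)
```

Because G.sub.1 and G.sub.2 both vanish at zero spatial frequency, a uniform (DC-only) input maps to zero, which reflects the suppression of the low-frequency interference patterns described above.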
(91) While embodiments of the present invention have been shown and described, various modifications may be made without departing from the scope of the present invention. For example, while one specific trained neural network has been used to classify objects, other machine learning algorithms implemented using image processing software 110 may be used to classify the candidate target object(s) as a target object or non-target object. In addition, the target object 90 may include single cells or clusters of cells. The invention, therefore, should not be limited, except as by the following claims and their equivalents.