DEVICE AND METHOD FOR MOTILITY-BASED LABEL-FREE DETECTION OF MOTILE OBJECTS IN A FLUID SAMPLE
20210374381 · 2021-12-02
Assignee
Inventors
- Aydogan Ozcan (Los Angeles, CA)
- Yibo Zhang (Los Angeles, CA, US)
- Hatice Ceylan Koydemir (Los Angeles, CA, US)
CPC classification
G06V20/69
PHYSICS
G03H1/08
PHYSICS
International classification
G03H1/00
PHYSICS
G03H1/08
PHYSICS
Abstract
Systems and methods for detecting motile objects (e.g., parasites) in a fluid sample by utilizing the locomotion of the parasites as a specific biomarker and endogenous contrast mechanism. The imaging platform includes one or more substantially optically transparent sample holders. The imaging platform has a moveable scanning head containing light sources and corresponding image sensor(s) associated with the light source(s). The light source(s) are directed at a respective sample holder containing a sample, and the respective image sensor(s) are positioned below the sample holder to capture time-varying holographic speckle patterns of the sample contained in the sample holder. A computing device is configured to receive the time-varying holographic speckle pattern image sequences obtained by the image sensor(s). The computing device generates a 3D contrast map of motile objects within the sample and uses deep learning-based classifier software to identify the motile objects.
Claims
1. An imaging platform for the label-free detection of motile objects in a sample comprising: one or more substantially optically transparent sample holders; a moveable scanning head containing one or more coherent light sources and corresponding image sensor(s) associated with the one or more coherent light sources; a translation stage configured to translate the moveable scanning head along the one or more optically transparent sample holders; a computing device configured to receive time-varying holographic speckle pattern image sequences obtained by the image sensor(s), the computing device comprising computational motion analysis software configured to generate a three-dimensional (3D) contrast map of motile objects within the one or more optically transparent sample holders, the computing device further comprising a deep learning-based classifier software to identify motile objects in the three-dimensional (3D) contrast map.
2. The imaging platform of claim 1, wherein the sample comprises a biological fluid.
3. The imaging platform of claim 2, wherein the biological fluid comprises blood.
4. The imaging platform of claim 2, wherein the biological fluid comprises cerebrospinal fluid.
5. The imaging platform of claim 1, wherein the one or more optically transparent sample holders comprise one or more capillary tubes.
6. The imaging platform of claim 1, wherein the scanning head contains one or more light sources selected from the group consisting of laser diodes, light-emitting diodes, and lasers, projecting light onto the one or more optically transparent sample holders.
7. The imaging platform of claim 1, wherein the translation stage further comprises one or more linear motion shafts holding the moveable scanning head and a stepper motor coupled to the moveable scanning head via a belt.
8. The imaging platform of claim 1, wherein the moveable scanning head further comprises one or more heat sinks for the image sensor(s).
9. The imaging platform of claim 1, wherein the computational motion analysis software performs object function normalization (OFN) to suppress strongly scattering objects within the sample.
10. A method of using the imaging platform of claim 1, comprising: loading the sample into the one or more optically transparent sample holders; translating the moveable scanning head to different regions of the one or more optically transparent sample holders; obtaining time-varying holographic speckle pattern image sequences using the image sensor(s); and identifying motile objects in the sample using the deep learning-based classifier software.
11. The method of claim 10, wherein the fluid sample is first exposed to a lysis buffer prior to loading.
12. The method of claim 10, wherein the fluid sample is allowed to settle prior to translating the moveable scanning head.
13. The method of claim 10, wherein the deep learning-based classifier software outputs a count of motile objects in the sample.
14. The method of claim 10, wherein the deep learning-based classifier software outputs a concentration of motile objects in the sample.
15. The method of claim 10, wherein the deep learning-based classifier software outputs a positive or negative classification for the sample.
16. The method of claim 10, wherein the sample is a biological sample.
17. The method of claim 10, wherein the sample is an environmental sample.
18. The method of claim 10, wherein the motile objects comprise parasites.
19. A method of detecting motile objects in a sample comprising: obtaining a plurality of time-varying holographic speckle pattern image sequences of the sample using a moveable scanning head containing one or more coherent light sources and corresponding image sensor(s) associated with the one or more coherent light sources; and processing the plurality of time-varying holographic speckle pattern image sequences with a computing device configured to receive the time-varying holographic speckle pattern image sequences obtained by the image sensor(s), the computing device comprising computational motion analysis software configured to generate a three-dimensional (3D) contrast map of motile objects within the one or more optically transparent sample holders, the computing device further comprising a deep learning-based classifier software to identify motile objects in the three-dimensional (3D) contrast map.
20. The method of claim 19, further comprising the computing device outputting a count of the motile objects.
21. The method of claim 19, further comprising the computing device outputting a concentration of the motile objects in the sample.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] The foregoing and other aspects of embodiments are described in further detail with reference to the accompanying drawings, wherein like reference numerals refer to like elements and the description for like elements shall be applicable for all described embodiments wherever relevant. Reference numerals having the same reference number and different letters (e.g., 104a, 104b, 104c) refer to like elements and the use of the number without the letter in the Detailed Description refers to each of the like elements.
DETAILED DESCRIPTION OF ILLUSTRATED EMBODIMENTS
[0049] The present invention is directed to an imaging platform for label-free detection of motile objects 200 in a sample (see
[0050] The scanning head 102 includes one or more lensless imagers 104a, 104b, 104c housed within a scanning head housing 109 (e.g., 3D-printed plastic, molded plastic, formed metal, etc.). Each lensless imager 104a, 104b, 104c includes an illumination source 116. The illumination source 116 may be a laser diode, such as a 650-nm laser diode (product no. AML-N056-650001-01, Arima Lasers Corp., Taoyuan, Taiwan) having an output power of ˜1 mW, or other suitable illumination device. For instance, the illumination source 116 may be a device other than a laser diode, including a light-emitting diode (LED), another laser light source, and the like.
[0051] The emitted light 117 from the illumination source 116 passes through an optional aperture 118. The aperture 118 may be a 3D-printed aperture or other suitably constructed aperture (e.g., molded, machined, etc.). The aperture 118 functions to limit the emission angle of the emitted light 117 and to prevent light leakage into the adjacent imagers 104 and their nearby image sensors 124. The aperture 118 is optional and may not be present in all embodiments; where light leakage is not an issue (e.g., where the spacing or configuration of the lensless imagers 104 does not suffer from light leakage), the aperture 118 may be omitted.
[0052] The sample 101 is loaded into substantially optically transparent fluidic holders 120a, 120b, 120c (also referred to as “sample holders”). The term “substantially optically transparent” means that the element is sufficiently transparent to obtain images 168 of a sample 101 through the element of sufficient quality to identify motile objects 200 in the sample 101. In one embodiment, each fluidic holder 120a, 120b, 120c is a glass capillary tube. The capillary tube may be rectangular in cross-sectional profile, or have another suitable cross-sectional profile, such as circular, oval, etc. The fluidic holder 120 is filled with the sample 101 (e.g., a bodily fluid to be screened) and is positioned a z.sub.1 distance 122 below the illumination source 116. In the illustrated embodiment, the z.sub.1 distance 122 is ˜7 cm.
[0053] Each of the imagers 104a, 104b, 104c has an image sensor 124 positioned on the opposing side of the respective fluidic holder 120 from the respective illumination source 116 such that it can image a diffraction or speckle pattern of the emitted light 117 from the illumination source 116 through the sample 101 at a section of the sample 101 based on the position of the scanning head 102. For example, in the illustrated embodiment, the image sensor 124 is positioned below the fluidic holder 120, with the illumination source 116 above the fluidic holder 120. The image sensor 124 may be any suitable image sensor, such as a 10-megapixel CMOS image sensor (product no. acA3800-14 um, Basler, Ahrensburg, Germany) with a 1.67 μm pixel size and an active area of 6.4 mm×4.6 mm (29.4 mm.sup.2). The image sensor 124 is positioned a z.sub.2 distance 126 below the fluidic holder 120. The z.sub.1 distance 122 is typically much greater than the z.sub.2 distance 126. In the illustrated embodiment, the z.sub.2 distance 126 (i.e., the air gap between the image sensor 124 and the bottom surface of the fluidic holder 120) is about 1-1.5 mm, or 1-3 mm, or 0.5-5 mm, to reduce the heat transfer from the image sensor 124 to the sample 101.
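As a quick arithmetic cross-check (ours, not the patent's), the quoted pixel pitch and active area are consistent with the ~10-megapixel resolution and the 29.4 mm.sup.2 area:

```python
# Cross-check the stated sensor geometry (specs quoted in the text above).
pixel_um = 1.67                 # pixel pitch, micrometers
active_mm = (6.4, 4.6)          # active area, millimeters
px = [round(a * 1000 / pixel_um) for a in active_mm]   # pixels per side
area_mm2 = round(active_mm[0] * active_mm[1], 1)       # active area, mm^2
megapixels = px[0] * px[1] / 1e6
print(px, area_mm2, round(megapixels, 1))              # -> [3832, 2754] 29.4 10.6
```

The implied ~10.5-megapixel count matches the "10-megapixel" description of the sensor.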
[0054] Because each image sensor 124 has one or more circuit boards 125 that generate heat, heat sinks 128 are optionally inserted between the circuit boards 125 and arranged on the sides of the scanning head 102 to dissipate heat and prevent image sensor 124 malfunction and/or damage. The heat sinks 128 may be custom-made aluminum heat sinks, or other suitable heat sinks, including other materials and construction.
[0055] The embodiment used in the Examples described herein uses a scanning head 102 with three identical lensless imagers 104a, 104b, 104c that image three different capillary tubes 120a, 120b, 120c. These tubes 120a, 120b, 120c could be loaded with samples from different patients or the same patient. It should be understood that more (or fewer) lensless imagers 104 may also be used.
[0056] The translation stage 106 is configured to move the scanning head 102 in order to move the imagers 104 relative to the fluidic holders 120 so that the imagers 104 can obtain images 168 of different regions of the sample 101 contained in the respective fluidic holders 120. In the illustrated embodiment, the translation stage 106 moves the scanning head 102 in a linear direction along the length of the fluidic holders 120 and is thus referred to as a linear translation stage 106. In the illustrated embodiment, the linear translation stage 106 includes two linear motion shafts 130a, 130b, which are mounted so as to be aligned parallel to the longitudinal axis of the fluidic holders 120. The motion shafts 130a, 130b may be product no. 85421, Makeblock Co., Ltd., Shenzhen, China, or other suitable motion shafts. The linear translation stage 106 also has two linear motion sliders 132 which are coupled, and controllably moveable relative, to the motion shafts 130a, 130b. The linear motion sliders 132 may be product no. 86050, Makeblock Co., Ltd., Shenzhen, China. The linear translation stage 106 also includes a timing belt 134 (e.g., product no. B375-210XL, ServoCity, Winfield, Kans., or other suitable timing belt) operably coupled to two timing pulleys 136a, 136b (e.g., product no. 615418, ServoCity, Winfield, Kans., or other suitable timing pulley) and a stepper motor 138 (e.g., product no. 324, Adafruit Industries LLC., New York City, N.Y., or other suitable motor) operably coupled to the timing belt 134.
[0057] The scanning head 102 is mounted onto the motion sliders 132 using screws or other suitable fasteners. The scanning head 102 with the attached motion sliders 132 moves along the stationary linear motion shafts 130a, 130b. The stepper motor 138 provides power to drive the coupled timing belt 134 and timing pulleys 136 to move the scanning head 102 back-and-forth along the linear motion shafts 130a, 130b. While the specific linear translation stage 106 utilized and disclosed herein may be used with the imaging platform 100, it should be understood that other translation mechanisms and devices that are configured to move the scanning head 102 in a linear direction relative to the fluidic holders 120 may be used. These may include motor or servo-based devices that are mechanically coupled or linked to the scanning head 102 to impart linear movement. Likewise, the translation stage 106 may translate in different directions depending on the sample volume that is to be scanned. For example, a three-dimensional volume may be scanned in orthogonal (or other directions) to cover the sample volume. Thus, a variety of different translation motions may be used in conjunction with the translation stage 106.
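As an illustrative sketch of how such a belt-driven stage converts a commanded translation into stepper pulses: the belt pitch used below is the standard XL-series value, but the pulley tooth count and steps-per-revolution are assumed values for illustration, not taken from the patent.

```python
# Hypothetical belt-drive kinematics for the scanning head 102.
BELT_PITCH_MM = 5.08     # XL-series timing-belt tooth pitch (1/5 inch)
PULLEY_TEETH = 16        # assumed pulley tooth count
STEPS_PER_REV = 200      # typical 1.8-degree stepper motor

def mm_to_steps(distance_mm: float) -> int:
    """Full stepper steps needed to translate the head by distance_mm."""
    mm_per_rev = BELT_PITCH_MM * PULLEY_TEETH   # belt travel per pulley turn
    return round(distance_mm * STEPS_PER_REV / mm_per_rev)

print(mm_to_steps(10.0))   # steps for a 10 mm move under these assumptions -> 25
```

A real controller would additionally apply microstepping and acceleration ramps; this only shows the distance-to-steps conversion.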
[0058] The computing device 112 is configured to control the operation of the imaging platform 100. In the illustrated embodiment, the computing device 112 is a laptop computer, but the computing device 112 may include other computer-based devices (e.g., a personal computer or in some instances a tablet computer or other portable computing device). The computing device 112 may include one or more microprocessors 111, a storage device 160, a graphics processing unit (GPU) 161, and a display 163.
[0059] Referring to the schematic diagram of
[0060] In the illustrated embodiment, the illumination source 116 (e.g., laser diodes) and the stepper motor 138 are powered using a 12 V power adapter 150. Various digital switches 156a, 156b, 156c built from metal-oxide-semiconductor field-effect transistors (MOSFETs) are controlled by the digital outputs from the microcontroller 144 to cut the power to the laser diodes 116 and the image sensors 124 when they are unused. Specifically, to control the power to the image sensors 124, including cutting the power to the image sensor 124, the power wire of a USB 3.0 cable of the image sensor 124 is cut and a MOSFET-based digital switch 156a is inserted into the power line.
[0061] The computing device 112 contains a control program 114 that is used to control and interact with data obtained from the imaging platform 100. For example, in the specific embodiment disclosed herein, the control program 114 is a Windows®-based application written in C-Sharp programming language (C#). The control program 114 includes a GUI 115 which enables the user to initiate the screening of the current sample 101, in addition to various other functionalities, such as customizing image acquisition parameters, performing a live view of the diffraction patterns, taking a snapshot, and stopping the acquisition. It should be appreciated that other programming languages or scripts may be used as well.
[0062] Accordingly, the control program 114 controls the imaging platform 100 to obtain the time-varying holographic speckle pattern image sequences. After the sample 101 is loaded into the fluidic holders 120a, 120b, 120c on the imaging platform 100, and the sample 101 is allowed to settle for a predetermined waiting time (e.g., a waiting time of 3-4 minutes, for instance, 4 minutes for lysed whole blood and 3 minutes for artificial CSF, see
[0063] The temperature of the image sensor 124 rises when it is powered, leading to temperature gradient-induced convection flow of the liquid sample 101. An example of a temperature gradient-induced convection flow for the exemplary imaging platform 100 is illustrated in
[0064] The acquired sequences of images 168 (e.g., movies or clips) are saved to the storage device 160 (e.g., a hard drive) for processing. All three image sensors 124, capturing uncompressed 8-bit images 168, generate a total data rate of ˜421 MB/s, which slightly exceeds the average write-speed of a typical storage device 160 (see
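The quoted ~421 MB/s can be cross-checked against the half-FOV, ~26.6 fps readout described in the Examples; the exact pixel counts below are assumed from the sensor model rather than stated in the text:

```python
# Sanity-check the ~421 MB/s aggregate data rate for three sensors
# capturing uncompressed 8-bit frames at half FOV and ~26.6 fps.
full_px = 3840 * 2748               # assumed full sensor resolution (pixels)
half_frame_bytes = full_px // 2     # half-FOV readout, 1 byte per 8-bit pixel
rate_bytes_s = 3 * half_frame_bytes * 26.6   # three sensors in parallel
print(round(rate_bytes_s / 1e6))    # -> 421 (MB/s)
```

This is why the text resorts to buffering in RAM: the sustained rate narrowly exceeds a typical hard drive's write speed.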
[0065] A CMA algorithm 162 (e.g., programmed into CMA software 162) is utilized to generate 3D contrast data from particle locomotion in noisy holograms and speckled interference patterns, and deep learning-based classification is then applied to identify the signals corresponding to the parasite of interest. As an example,
Examples
[0066] Detection of Parasite Locomotion in 3D Using Holographic Speckle Analysis
[0067] To sustain a high frame rate (˜26.6 fps), which is essential to the parasite detection technique, the full field of view (FOV) of each of the image sensors 124 was split into two halves, each ˜14.7 mm.sup.2.
[0068] To address this challenge, the spatial-temporal variations in the detected speckle patterns due to the rapid locomotion of motile trypanosomes within blood can be utilized. A CMA algorithm 162 (or CMA software 162) taking advantage of this was developed, which involves holographic back-propagation, differential imaging (with an optimally-adjusted frame interval for trypanosome locomotion), and temporal averaging, conducted at each horizontal cross section within the sample volume. Object function normalization (OFN) was introduced into each differential imaging step to suppress potential false positives due to unwanted, strongly scattering objects within the sample. The algorithm was then followed by post-image processing and deep learning-based classification to identify the signals caused by trypanosomes (see the description below for details).
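A minimal sketch of the processing chain described above, using the standard angular-spectrum method for holographic back-propagation. The frame interval, the exact object function normalization formula, and all parameter values are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def angular_spectrum(field, z, wavelength, dx):
    """Free-space propagate a 2D complex field by distance z (angular spectrum)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2   # squared longitudinal frequency
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0                            # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def cma_contrast(frames, z, wavelength, dx, dt_frames=3, eps=1e-3):
    """Motion contrast at depth z: back-propagate each hologram frame,
    form differential images separated by dt_frames, normalize each
    differential by the local intensity (an OFN-like step that suppresses
    strong static scatterers), then average over time."""
    recons = [np.abs(angular_spectrum(np.sqrt(f.astype(float)), -z, wavelength, dx))
              for f in frames]
    diffs = [np.abs(recons[i + dt_frames] - recons[i])
             / (0.5 * (recons[i + dt_frames] + recons[i]) + eps)
             for i in range(len(recons) - dt_frames)]
    return np.mean(diffs, axis=0)               # temporal averaging
```

For example, `cma_contrast(frames, z=1e-3, wavelength=650e-9, dx=1.67e-6)` would return a per-pixel motion map at a 1 mm depth: static objects cancel in the differentials, while moving parasites leave localized contrast. Repeating this over a stack of z values yields the 3D contrast map.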
[0069] Similarly, the results of imaging trypanosomes within WBC-spiked artificial CSF samples are shown in
[0070] As detailed in Table 1 below, >80% of the total image processing time to image and detect these trypanosomes is spent on the CMA algorithm 162, which involves thousands of fast Fourier transforms of ˜6-megapixel images 168 for each recorded image sequence (see the Methods section below for details). Therefore, graphics processing unit (GPU) 164 based parallel computing is helpful for the speed-up of the CMA algorithm 162. Using a single GPU 164, the entire image processing task for one experiment (216 image sequences in total for the three parallel image sensors 124) takes ˜26 minutes and ˜21 minutes for blood and CSF samples, respectively. When using two GPUs 164, because each GPU 164 is given a separate image sequence to process at a given time, there is minimal interference between the GPUs 164 and maximal parallelism can be achieved. Therefore, ˜2-fold speed-up is observed when using two GPUs 164, resulting in a total image processing time of ˜13 minutes and ˜11 minutes for blood and CSF experiments, respectively. Combined with all the other sample preparation steps, the total detection time per test amounts to ˜20 minutes and ˜17 minutes for blood and CSF samples, respectively (see
TABLE 1

                           Single GPU                       Dual GPUs
                           Time per image   Total time per  Time per image   Total time per
  Processing step          sequence (ms)    test (min)      sequence (ms)    test (min)
                           Blood/CSF        Blood/CSF       Blood/CSF        Blood/CSF
  Copy data from CPU       316.7/212.7      1.14/0.77       346.2/231.7      0.62/0.42
    memory to GPU memory
  Image normalization      44.3/30.0        0.16/0.11       49.0/32.5        0.09/0.06
  Autofocusing             523.0/NA         1.88/NA         680.8/NA         1.23/NA
  Computational motion     6205.0/5515.7    22.34/19.86     6137.8/5561.5    11.05/10.01
    analysis
  Post image filtering     106.7/136.0      0.38/0.49       108.2/138.3      0.19/0.25
  Segmentation             10.1/10.1        0.04/0.04       10.1/10.1        0.02/0.02
  Deep learning-based      9.8/10.3         0.04/0.04       9.8/10.3         0.02/0.02
    classification
  Total                    7215.6/5914.8    25.98/21.29     7341.9/5984.4    13.22/10.77
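The per-sequence and per-test columns of Table 1 are mutually consistent, as a quick check against the 216 image sequences per test shows (single-GPU blood column):

```python
# Cross-check Table 1: summing the per-sequence times (ms) should
# reproduce the quoted total, and 216 sequences the total per test.
steps_ms = {"copy to GPU": 316.7, "normalization": 44.3, "autofocusing": 523.0,
            "motion analysis": 6205.0, "post filtering": 106.7,
            "segmentation": 10.1, "classification": 9.8}
per_seq_ms = sum(steps_ms.values())            # total per image sequence
total_min = per_seq_ms * 216 / 1000 / 60       # 216 sequences per test
print(round(per_seq_ms, 1), round(total_min, 2))   # -> 7215.6 25.98
```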
[0071] Quantification of the LoD for Trypanosomes
[0072] The LoD of the exemplary imaging platform 100 was determined for detecting trypanosomes in lysed whole blood by performing serial dilution experiments, and the results are shown in
[0073] For T. brucei, stage determination is critical for determining the most appropriate treatment regimen. This is currently done by collecting CSF via a lumbar puncture and examining the CSF under a microscope. Patients with <5 μL.sup.−1 WBCs and no trypanosomes in the CSF are classified as stage I; otherwise, if there are >5 μL.sup.−1 WBCs or if trypanosomes are found in the CSF, they are classified as stage II. To address this need for high-throughput CSF screening, the LoD of the exemplary imaging platform 100 to detect trypanosomes in CSF was also quantified. For this purpose, an artificial CSF sample that is spiked with human WBCs was used, where cultured trypanosomes were spiked into the artificial CSF solution at concentrations of 3 mL.sup.−1, 10 mL.sup.−1, 100 mL.sup.−1, and 1000 mL.sup.−1, in addition to a negative control (N=3 for each concentration). The concentration of spiked human WBCs was selected as 20 WBCs/μL to evaluate the performance of the device to detect trypanosomes in a scenario where the WBC concentration was four times higher than the 5 μL.sup.−1 threshold used in stage determination. Unlike the blood sample, the CSF solution is optically clear and lysis was not needed, which helped us further improve the LoD: as shown in
[0074] Detection of T. vaginalis Parasites
[0075] Although the parasite T. brucei was chosen to validate the motility-based detection approach of the imaging platform 100, it is understood that this approach is broadly applicable for the detection of a variety of motile microorganisms. As a preliminary test of the performance of the exemplary imaging platform 100 on a completely different motile parasite, T. vaginalis was selected. T. vaginalis is the protozoan parasite responsible for trichomoniasis, which is the most common non-viral STD in the United States and worldwide. T. vaginalis infects the urogenital tract of both women and men. Although often asymptomatic, T. vaginalis infection has been associated with increased risk related to other health conditions including human immunodeficiency virus (HIV) infection, pre-term labor, pelvic inflammatory disease and prostate cancer. For the diagnosis of trichomoniasis, cell culture followed by microscopy remains the best, most reliable method, as it is highly sensitive and can detect T. vaginalis from an inoculum containing as few as three parasites per mL. However, it is limited by the high cost, inconvenience, a long examination time, as well as susceptibility to sample contamination. The most common diagnostic method, wet-mount microscopy, suffers from poor sensitivity (51%-65%). Thus, the highly sensitive lensless time-resolved holographic speckle imaging method could be of substantial benefit.
[0076] With only minor adjustments to the CMA algorithm 162 (see the discussion below), it was demonstrated that the exemplary imaging platform 100 can detect T. vaginalis in phosphate-buffered saline (PBS) solution and culture medium (see
Discussion of Examples
[0077] A new imaging platform 100 and methods for motility-based parasite detection have been presented, based on lensless time-resolved holographic speckle imaging. The new imaging platform 100 has been demonstrated as being effective for rapid detection of trypanosomes within lysed blood and CSF, achieving an LoD that is better than the current parasitological methods (see
[0078] This diagnostic method could also be beneficial for improving the diagnosis of bloodstream HAT or Chagas infection, or facilitating earlier identification of stage II HAT cases, when the parasitemia in the CSF is under the LoD of traditional methods and when the WBCs in the CSF are still scarce. The imaging platform 100 may also be useful for follow-up after disease treatment in order to screen patients for earlier and more sensitive detection of relapse. These advances could result in improved treatment outcomes for patients and increase the cure rate of disease. In addition to HAT, animal trypanosomiasis severely limits economic development. Therefore, applying motility-based detection to aid screening of infected livestock and development of vector control options could help to raise endemic areas out of poverty. In the case of Chagas disease, this technique could be adapted for screening of blood donors or blood products as well as sugarcane juice and acai juice products to help reduce various routes of transmission. Given the large populations at risk, the ability to rapidly analyze various types of samples/liquids in a simple and automated fashion will be particularly critical for developing a viable strategy to screen samples in regions where disease incidence declines owing to eradication efforts.
[0079] The imaging platform 100 and label-free detection method take advantage of the locomotion patterns of parasites to maximize the detection signal-to-noise ratio (SNR). Trypanosomes are known for their incessant motion, and motility is crucial to their survival as well as their virulence in the host. The swimming behavior of trypanosomes is highly complex. Because the flagellum is laterally attached to the cell body, parasite translocation is accompanied by cell body rotation, resulting in a “corkscrew” swimming pattern. Moreover, in addition to cell translocation, the flagellum generates rapid, three-dimensional beating patterns. The average beating frequency of T. brucei is estimated as 18.3±2.5 Hz in forward moving cells and 13.1±0.8 Hz in backward moving ones, whereas the rotational frequency of forward moving cells is 2.8±0.4 Hz. The frame rate that matches the average beating frequency (forward moving), according to the Nyquist sampling rate, is equal to 36.6 fps. In other words, a frame rate of at least 36.6 fps is able to record the speckle changes corresponding to each flagellar stroke; and even higher frame rates can record the speckle changes with finer time resolution, corresponding to different time points during a flagellar stroke. Assuming optimal subtraction time interval (Δt) and time window (T) are used (see discussion below,
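The frame-rate requirement stated above follows directly from the Nyquist criterion applied to the quoted beating frequency:

```python
# Nyquist sampling of the flagellar beat of forward-moving T. brucei.
beat_hz_forward = 18.3              # average beating frequency (Hz)
nyquist_fps = 2 * beat_hz_forward   # minimum frame rate to resolve each stroke
print(nyquist_fps)                  # -> 36.6
```

Note that the platform's ~26.6 fps readout undersamples the individual flagellar strokes but still registers the resulting speckle decorrelation, which is what the differential analysis exploits.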
[0080] T.b. brucei is widely used as a model microorganism for the study of trypanosomes because it is non-pathogenic to humans and therefore safe to conduct experiments on. It is anticipated that the imaging platform 100 and methods disclosed herein will be readily applicable to T.b. gambiense, T.b. rhodesiense and T. cruzi, since their movements are fundamentally similar. Mouse blood and an artificial CSF solution were used throughout the testing due to safety concerns, but the lysis buffer also works with human blood. Future research may be conducted on testing patient samples from endemic regions to establish the sensitivity and specificity of the presented technique for the diagnosis of various trypanosomiases.
[0081] Numerous motile organisms can cause infections in humans. The imaging platform 100 and disclosed methods may also be configured to automatically differentiate different parasites. For instance, the amplitude and phase movie that is generated for each detected signal (see
[0082] In the Examples, trypanosomes were utilized to demonstrate the feasibility of lensless time-resolved holographic speckle imaging to be employed in detection of parasitic infection. While the approach capitalized on the motility of trypanosomes, this platform is broadly applicable to other motile parasites, including other eukaryotic parasites such as T. vaginalis (see
[0083] Motile bacteria also cause a number of human diseases. Although bacteria are typically much smaller than trypanosomes, the concept of motility-based detection combined with optical magnification may also be utilized for label-free detection of bacterial pathogens. There may be potential uses of motility-based detection for screening of other bodily fluids such as urine or diluted mucosal secretions and stool samples. Therefore, the imaging platform 100 and methods disclosed herein have considerable potential to impact various global health challenges. Lastly, using motility as a biomarker and endogenous contrast can create new possibilities beyond clinical diagnostics. As a label-free 3D imaging modality that is robust to light-scattering and optically dense media, it can also be employed to study motile microorganisms within various fluid environments in a high-throughput manner.
[0084] Materials and Methods of the Examples
[0085] Sample Preparation
[0086] Lysis buffer preparation: 44 mM sodium chloride (product no. 71379, Sigma Aldrich), 57 mM disodium phosphate (product no. 30412, Sigma Aldrich), 3 mM monopotassium phosphate (product no. 60220, Sigma Aldrich), 55 mM glucose (product no. G8270, Sigma Aldrich), and 0.24% (w/v) sodium dodecyl sulfate (product no. L4390, Sigma Aldrich) in reagent grade water (product no. 23-249-581, Fisher Scientific) were mixed for 2 hours using a magnetic stir bar on a magnetic mixer. The solution was then filtered using a disposable filtration unit (product no. 09-740-65B, Fisher Scientific) for sterilization and was stored at room temperature. This buffer solution lyses all the components of whole blood including RBCs and WBCs but does not lyse the trypanosomes.
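As a worked example of preparing this buffer (a sketch; the molar masses are standard values, not stated in the patent), the molar concentrations convert to grams per liter as follows:

```python
# Reagent masses for 1 L of the lysis buffer described above.
recipe_mM = {"NaCl": 44, "Na2HPO4": 57, "KH2PO4": 3, "glucose": 55}
molar_mass = {"NaCl": 58.44, "Na2HPO4": 141.96,      # g/mol, standard values
              "KH2PO4": 136.09, "glucose": 180.16}
grams_per_L = {k: recipe_mM[k] / 1000 * molar_mass[k] for k in recipe_mM}
grams_per_L["SDS"] = 0.24 / 100 * 1000               # 0.24% w/v -> g per liter
print({k: round(v, 2) for k, v in grams_per_L.items()})
```

This works out to roughly 2.57 g NaCl, 8.09 g Na.sub.2HPO.sub.4, 0.41 g KH.sub.2PO.sub.4, 9.91 g glucose, and 2.4 g SDS per liter.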
[0087] Artificial CSF preparation: According to a previous method, 1.25 M sodium chloride, 260 mM sodium bicarbonate (product no. SX0320-1, EMD Millipore), 12.5 mM sodium phosphate monobasic (product no. 56566, Sigma Aldrich), and 25 mM potassium chloride (product no. P5405, Sigma Aldrich) were mixed well, and 10 mM magnesium chloride (product no. 208337, Sigma Aldrich) was added to make 10× artificial CSF. The solution was then filtered using a disposable filtration unit for sterilization. 10× stock solution was diluted ten-fold with reagent grade water to make 1× artificial CSF.
[0088] Culturing trypanosomes: 427-derived bloodstream single marker trypanosomes (T. b. brucei) were cultivated at 37° C. with 5% CO.sub.2 in HMI-9 medium with 10% heat-inactivated fetal bovine serum (product no. 10438026, Gibco) as described in Oberholzer, M., Lopez, M. A., Ralston, K. S. & Hill, K. L. Approaches for Functional Analysis of Flagellar Proteins in African Trypanosomes. in Methods in Cell Biology 93, 21-57 (Elsevier, 2009).
[0089] Collection of trypanosome infected mouse blood: All experiments involving mice were carried out in accordance with the guidelines and regulations of the UCLA Institutional Animal Care and Use Committee (IACUC), NIH Public Health Service Policy on Humane Care and Use of Animals, USDA Animal Welfare regulations, and AAALAC International accreditation standards under IACUC-approved protocol ARC #2001-065. Mouse infections were performed as described in Kisalu, N. K., Langousis, G., Bentolila, L. A., Ralston, K. S. & Hill, K. L. Mouse infection and pathogenesis by Trypanosoma brucei motility mutants. Cell. Microbiol. 16, 912-924 (2014), with the following modifications: Female BALB/cJ mice (product no. 000651, Jackson Laboratory, age 11-24 weeks) were injected intraperitoneally with 5×10.sup.5-1×10.sup.6 parasites in 0.1-0.2 mL ice-cold phosphate buffered saline with 1% glucose (PBS-G). Parasitemia was monitored by counting in a hemacytometer, and infected blood samples were collected when parasitemia reached ˜10.sup.7-10.sup.8 parasites/mL. Infected blood was collected from either the saphenous vein or by cardiac puncture after euthanasia into heparinized capillary tubes (product no. 22-260950, Fisher Scientific) or heparinized collection tubes (product no. 8881320256, Covidien).
[0090] Separation of WBCs from human blood: Ficoll-Paque PREMIUM (product no. 45-001-751, Fisher Scientific) was utilized for in vitro isolation of mononuclear cells from blood using density gradient separation according to the manufacturer's instructions. Human blood samples were acquired from the UCLA Blood and Platelet Center after de-identification of patients and related information and were used for the separation of WBCs from blood. 2 mL of ethylenediaminetetraacetic acid (EDTA)-treated blood was mixed with 2 mL of sterile PBS (product no. 10-010-049, Fisher Scientific) in a 5 mL centrifuge tube (product no. 14-282-300, Fisher Scientific) by drawing the mixture in and out of a pipette. 3 mL of Ficoll-Paque PREMIUM was placed in a 15 mL conical centrifuge tube (product no. 14-959-53A, Fisher Scientific) and the diluted blood sample was carefully layered on the Ficoll-Paque PREMIUM. The suspension was centrifuged at 400×g for 40 minutes at 19° C. using a centrifuge with swing-out rotors (Allegra X-22R, Beckman-Coulter). After centrifugation, the upper layer containing plasma and platelets was removed and the mononuclear cells were transferred to a sterile centrifuge tube. To wash the cell isolate, it was mixed in 6 mL of PBS and centrifuged at 400×g at 19° C. for 13 minutes. The washing step was repeated twice, and the pellet was suspended in 1 mL of PBS. The WBC concentration was determined by counting in a hemacytometer, and the suspension was diluted accordingly to a stock solution of 8×10.sup.5 WBC/mL in PBS.
[0091] Protocol for calibration curve analysis for blood samples: Freshly collected trypanosome-infected mouse blood was diluted in uninfected mouse blood (Balb/C, female, pooled, sodium heparin, Charles River Inc.) to a concentration of approximately 10.sup.6 parasites/mL. A sample of this trypanosome-infected blood was lysed with 3 volumes of lysis buffer and the trypanosome concentration was determined by counting in a hemacytometer. The trypanosome-infected blood was then diluted accordingly with uninfected blood to achieve the desired concentrations for calibration curve analysis.
[0092] Protocol for calibration curve analysis for CSF samples: Cultured trypanosomes were freshly harvested for each measurement to ensure consistent parasite motility. Trypanosomes were grown to a concentration of ˜1×10.sup.6-1.5×10.sup.6 cells/mL and harvested by centrifugation at 1200×g for 5 minutes. The cell pellet was resuspended in 1 mL of PBS-G and diluted approximately 10-fold to 10.sup.5 cells/mL in PBS-G. The trypanosome concentration was determined by counting in a hemacytometer and the sample was then diluted accordingly into 1× artificial CSF to achieve the desired concentrations for calibration curve analysis.
[0093] Sample preparation for imaging: The experiments were conducted using blood and artificial CSF samples. Borosilicate capillary tubes (inner dimensions: 1 mm height × 10 mm width × ˜30 cm length; product no. LRT-1-10-67, Friedrich & Dimmock, Inc.) were prepared by dipping one end of the capillary tube (the fluidic holders 120) into Vaseline jelly to plug the end. Plastic capillaries, e.g., those made of acrylic, can also be used instead of glass. Excess jelly was removed using a Kimwipe (product no. 06-666, Fisher Scientific) and the tube end was sealed with parafilm (product no. 13-374-12, Fisher Scientific). For each tube, 4 mL of sample was prepared. For blood samples, 3 mL of lysis buffer was mixed with 1 mL of uninfected or infected whole blood in a centrifuge tube. For CSF samples, 100 μL of WBC stock solution was added to trypanosome-infected artificial CSF to yield 2×10.sup.4 WBCs/mL (i.e., 20 WBCs/μL) in the final mixture. Each sample was mixed well by drawing the mixture in and out of a pipette before loading into the capillary tube. The open end of the capillary tube was then sealed using the jelly and parafilm. The glass capillary was then cleaned using a Kimwipe moistened with methanol (product no. A452SK-4, Fisher Scientific) and placed on the device.
[0094] Culturing T. vaginalis: T. vaginalis strain G3 (Beckenham, UK 1973, ATCC-PRA-98) was cultured in modified TYM media supplemented with 10% horse serum (Sigma), 10 U/mL penicillin-10 μg/mL streptomycin (Invitrogen), 180 μM ferrous ammonium sulfate, and 28 μM sulfosalicylic acid at 37° C..sup.52 The culture was passaged daily and maintained at an approximate concentration of 1×10.sup.6 cells/mL.
[0095] Design of the High-Throughput Lensless Time-Resolved Speckle Imaging Platform
[0096] As shown in
[0097] (1) Scanning head 102: Three identical lensless imagers 104 are built next to each other, housed by a scanning head housing 109 made of plastic 3D-printed on a 3D printer (Objet30 Pro, Stratasys). As shown in
[0098] (2) Linear translation stage 106: A linear translation stage 106 is built from two linear motion shafts 130a, 130b (product no. 85421, Makeblock Co., Ltd., Shenzhen, China), two linear motion sliders 132a, 132b (product no. 86050, Makeblock Co., Ltd., Shenzhen, China), a timing belt 134 (product no. B375-210XL, ServoCity, Winfield, Kans.), two timing pulleys 136a, 136b (product no. 615418, ServoCity, Winfield, Kans.) and a stepper motor 138 (product no. 324, Adafruit Industries LLC., New York City, N.Y.). The scanning head 102 is mounted onto the motion sliders 132a, 132b using screws.
[0099] (3) Scanning head housing 109: The housing 109 of the scanning head 102 is made from 3D-printed plastic. The outer shell of the imaging platform (the main housing 108 of the imaging platform 100) is made from laser-cut ¼-inch acrylic sheets.
[0100] (4) Electronic circuitry 110: A printed circuit board (PCB) 142 is custom-built to automate the imaging platform 100, and includes a microcontroller 144 (Teensy LC, PJRC) connected to the laptop computer 112 via USB 2.0, laser diode driver circuits 146 built from constant current circuits (product no. LM317DCYR, Texas Instruments), and a stepper motor driver circuit 148 (product no. TB6612, Adafruit). The laser diodes 116 and the stepper motor 138 are powered using a 12 V power adapter 150. Various digital switches 156a, 156b, 156c, built from metal-oxide-semiconductor field-effect transistors (MOSFETs) are controlled by the digital outputs from the microcontroller 144 to cut the power to the laser diodes 116 and the image sensors 124 when they are unused. Specifically, to cut the power to the image sensor 124, the power wire of the USB 3.0 cable of the image sensor is cut and a MOSFET-based digital switch is inserted into the power line.
[0101] (5) Control program 114: A Windows application written in C# with a graphical user interface 115 enables the user to initiate the screening of the current sample in addition to various other functionalities, such as customizing image acquisition parameters, performing a live view of the diffraction patterns, taking a snapshot, and stopping the acquisition.
[0102] Image Acquisition
[0103] After the sample is loaded onto the imaging platform 100 and has settled for a 3-4 minute waiting time (4 minutes for lysed whole blood and 3 minutes for artificial CSF, see
[0104] The temperature of the image sensor 124 rises when powered, leading to temperature gradient-induced convection flow of the liquid sample 101 (see
[0105] During the testing of the Examples, the acquired images 168 are saved to a solid-state drive (SSD) 160 for processing. All three image sensors 124, capturing uncompressed 8-bit images 168, generate a total data rate of ˜421 MB/s, which slightly exceeds the average write speed of the SSD. Therefore, a queue is created in the RAM 158 of the laptop computer 112 for each image sensor 124 to temporarily buffer the incoming image data, and another thread is created to constantly move the image data from the buffer to the SSD. Because all of the remaining image data can be fully saved to the SSD during the aforementioned downtime between positions, the total image acquisition time per test is not increased by the limited write speed. As a more time-efficient alternative, the acquired images 168 can be temporarily stored in the RAM 158 while they are constantly moved to the GPUs 164 for processing in batches corresponding to each image sequence. In this way, the image processing can be performed concurrently with the image acquisition, reducing the total time per test (see Results above,
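The RAM-buffered acquisition described above can be sketched as a simple producer/consumer pair, in which the acquisition loop enqueues frames and a background thread drains them to storage. This is only an illustrative sketch of the buffering scheme; the class and method names (`FrameWriter`, `submit`) are hypothetical and not part of the platform's actual control software.

```python
import queue
import threading

class FrameWriter:
    """Buffers incoming frames in RAM and writes them out on a
    background thread, so the acquisition loop is never stalled by a
    slow storage device (the situation described in the text, where the
    sensor data rate slightly exceeds the SSD write speed)."""

    def __init__(self, sink):
        self.buffer = queue.Queue()   # unbounded RAM buffer (one per sensor)
        self.sink = sink              # callable that persists one frame
        self.thread = threading.Thread(target=self._drain, daemon=True)
        self.thread.start()

    def submit(self, frame):
        # Called from the acquisition loop; returns immediately.
        self.buffer.put(frame)

    def _drain(self):
        while True:
            frame = self.buffer.get()
            if frame is None:         # sentinel: acquisition finished
                break
            self.sink(frame)          # the slow write happens here

    def close(self):
        # Flush any remaining frames (e.g., during the downtime between
        # scanning positions) and stop the writer thread.
        self.buffer.put(None)
        self.thread.join()
```

Because the queue preserves arrival order and the remaining frames are flushed during idle periods, no frames are lost and acquisition time is unaffected, mirroring the behavior described above.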
[0106] Image Processing Using CMA and Deep Learning-Based Identification
[0107] The CMA algorithm 162 generates 3D contrast from particle locomotion in noisy holograms and speckled interference patterns; deep learning-based classification is then applied to identify the signals corresponding to the parasite of interest. As an example,
[0108] 1. Hologram Preprocessing to Mitigate the Variations and Non-Uniformity of the Illumination
[0109] For every 8-bit raw image acquired by each image sensor (see
[0110] 2. Determining the Range of Axial-Distances of the Fluid Sample Under Test
[0111] In the case of lysed blood, because most of the cell debris tends to fully sediment within the 4-minute wait time (see
[0112] In the case of clear media such as CSF where objects/particles are sparse, autofocusing to the bottom of the channel can be challenging. Therefore, the z.sub.b distance of each capillary tube is pre-calibrated (see below) and used throughout the experiments. Because the z.sub.b distance is pre-calibrated, i.e., not adaptively calculated for each sample, we specify a larger range of digital z-scanning, [z.sub.b−500 μm, z.sub.b+1500 μm], also with a 50 μm step size. Note that z.sub.b is slightly different for each of the three channels of the device and is calibrated respectively.
[0113] 3. CMA Algorithm to Generate Contrast from Locomotion
[0114] The z-distances to be scanned are denoted as z.sub.j (j=1, . . . , N.sub.z) as determined by the previous step. Each frame of the illumination-corrected image sequence Ā.sub.i is digitally propagated to each of the z.sub.j distances with a high-pass filtered coherent transfer function (see
B.sub.i,j=HP[S(Ā.sub.i,z.sub.j)] (1)
[0115] where S represents angular spectrum-based back-propagation, HP represents high-pass filtering, and i=1, . . . , N.sub.F, j=1, . . . , N.sub.z.
[0116] Next, time-averaged differential analysis with OFN is applied (see
[0117] where δ.sub.F is the subtraction frame interval, exp[γ·½(|B.sub.i+δF,j|+|B.sub.i,j|)] is the OFN factor, and γ is a parameter related to OFN that is respectively tuned for the lysed blood (γ=2) and CSF (γ=3) experiments. Time-averaging significantly improves the SNR by smoothing out random image noise as well as random motion of unwanted particles/objects while preserving the true signals of motile microorganisms. OFN further suppresses potential false positive signals resulting from, e.g., strongly scattering, unwanted particles/objects such as cell debris (see below and
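For illustration, the time-averaged differential analysis with OFN at a single z-distance can be sketched in NumPy as follows. The exact placement of the OFN factor (here, each frame difference is divided by the exponential of the mean local object amplitude, consistent with "normalization" by object function strength) is a reconstruction from the surrounding description, and `cma_contrast` is an illustrative name, not from the patent.

```python
import numpy as np

def cma_contrast(B, delta_F, gamma):
    """Time-averaged differential analysis with object function
    normalization (OFN) at one z-distance -- a sketch of Eq. 2.

    B       : complex array of shape (N_F, H, W), the back-propagated
              frames B_i at a given z_j.
    delta_F : subtraction frame interval.
    gamma   : OFN parameter (2 for lysed blood, 3 for CSF per the text).
    """
    N_F = B.shape[0]
    acc = np.zeros(B.shape[1:])
    for i in range(N_F - delta_F):
        diff = np.abs(B[i + delta_F] - B[i])           # locomotion signal
        # OFN factor: exponential of the mean object amplitude; dividing
        # by it suppresses strongly scattering (bright) unwanted objects
        ofn = np.exp(gamma * 0.5 * (np.abs(B[i + delta_F]) + np.abs(B[i])))
        acc += diff / ofn
    return acc / (N_F - delta_F)                       # time average
```

A static scene produces zero contrast, while a moving object yields a positive response, which is the behavior the CMA step relies on.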
[0118] 4. Post-Image Processing and Segmentation
[0119] The z-stack C.sub.j (j=1, . . . , N.sub.z) suffers from a low-spatial-frequency background that mainly results from high-frequency noise in the raw images 168, which remains when performing high-pass filtered back-propagation and frame subtraction. Therefore, as shown in
[0120] Segmentation of candidate signal points within F is performed by 2D median filtering (3×3 pixel window, pixel size=1.67 μm), thresholding (threshold=0.01 for detecting trypanosomes in lysed blood and 0.02 for detecting trypanosomes in CSF) followed by dilation (disk-shape structuring element, radius=2 pixels, pixel size=1.67 μm) and searching for connected pixel regions. Connected regions that are smaller than 5 pixels are discarded. 64-by-64 pixel image patches centered around the pixel-value-weighted centroids of these connected regions are cropped from F (without 2D median filtering), and are used for the downstream identification by a deep learning-based classifier.
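The segmentation steps above can be sketched with SciPy as follows. This is an illustrative implementation under stated assumptions: `segment_candidates` is a hypothetical name, and border handling for spots near the image edge is omitted for brevity.

```python
import numpy as np
from scipy import ndimage

def segment_candidates(F, threshold, min_pixels=5, patch=64):
    """Sketch of the segmentation step described above: 3x3 median
    filtering, thresholding, dilation with a radius-2 disk, connected-
    component search, and cropping of patch-sized regions around the
    pixel-value-weighted centroids (computed on the unfiltered map F)."""
    med = ndimage.median_filter(F, size=3)             # 3x3 median filter
    mask = med > threshold
    yy, xx = np.mgrid[-2:3, -2:3]                      # radius-2 disk
    disk = (xx**2 + yy**2) <= 4
    mask = ndimage.binary_dilation(mask, structure=disk)
    labels, n = ndimage.label(mask)                    # connected regions
    patches, centroids = [], []
    half = patch // 2
    for region in range(1, n + 1):
        if (labels == region).sum() < min_pixels:      # discard tiny regions
            continue
        # pixel-value-weighted centroid on the unfiltered map F
        cy, cx = ndimage.center_of_mass(F, labels, region)
        cy, cx = int(round(cy)), int(round(cx))
        patches.append(F[cy - half:cy + half, cx - half:cx + half])
        centroids.append((cy, cx))
    return patches, centroids
```

Each returned 64×64 patch corresponds to one candidate signal spot to be passed to the downstream classifier.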
[0121] 5. Deep Learning-Based Classifier for Detection of Motile Trypanosomes
[0122] A CNN that consists of three convolution blocks followed by two fully-connected layers is built and trained to identify true signal spots created by motile trypanosomes. The detailed network structure is shown in
[0123] 6. Generation of Test Results
[0124] The image processing steps (see
[0125] 7. 3D Localization of Motile Microorganisms and Movie Generation
[0126] The technique also offers the capability to locate the motile microorganisms in 3D and generate in-focus amplitude and phase movies of them for close-up observation, using the following steps. For each signal spot that is classified as positive by the CNN classifier, using the corresponding z-stack D.sub.j (j=1, . . . , N.sub.z), a "column" that is 30×30 pixels in x-y, centered around this spot and spanning the entire z-range (N.sub.z layers), is cropped out. Then, an autofocusing metric is used to evaluate each of the N.sub.z layers, and the layer that corresponds to the maximum value of the autofocusing metric corresponds to the in-focus position. Both ToG and Tamura coefficient-based criteria were tried, and both work very well for this purpose. While the current z-localization accuracy is limited by the chosen z-step size (Δz=50 μm), it can be further improved through finer z-sectioning. Using the currently found z-localization distance as an initial guess, high-pass filtered back-propagation and differential analysis (detailed in Step 3, CMA Algorithm, above) is performed over a z-range of ±100 μm around the initial guess with a finer z-step size of 5 μm. However, OFN is disabled this time; in other words, the exponential normalization factor in Eq. 2 is removed, owing to OFN's side effect of slightly weakening the signal at the optimal focus distance, where the object function of the microorganism is the strongest. Autofocusing is performed again over the same 30×30-pixel region over the different z-layers as before. The previously determined x-y centroid, together with the newly found z-distance, is used as the 3D location of this motile microorganism.
Because the additional high-pass filtered back-propagation and differential analysis may be only performed on a smaller region-of-interest (ROI) around each given spot (e.g., in the experiments described herein, an ROI of 512×512 pixels is used), the 3D localization is computationally efficient. The 3D localization capability can be used to generate movies (detailed below), or to study microorganism behavior in biological or biomedical research settings.
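The layer selection above can be sketched as follows, using the Tamura coefficient (one of the two criteria mentioned in the text, the other being ToG). The common sqrt(std/mean) definition of the Tamura coefficient is assumed here, and the function names are illustrative.

```python
import numpy as np

def tamura_coefficient(img):
    """Tamura coefficient sqrt(std/mean) of an amplitude image; larger
    values indicate sharper, in-focus content."""
    m = img.mean()
    return float(np.sqrt(img.std() / m)) if m > 0 else 0.0

def autofocus(column):
    """Pick the in-focus layer of a cropped (N_z, 30, 30) z-column by
    maximizing the Tamura coefficient over the N_z layers, as in the
    coarse 3D localization step above. Returns the layer index."""
    scores = [tamura_coefficient(np.abs(layer)) for layer in column]
    return int(np.argmax(scores))
```

The returned layer index maps back to a z-distance via the scanned z-grid; the same routine can be reused on the finer 5 μm grid for the refinement pass.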
[0127] Using the obtained 3D position of each motile microorganism, the movie of each detected microorganism can be generated by digitally back-propagating (without high-pass filtering) each frame of the recorded raw image sequence A.sub.i (i=1, . . . , N.sub.F) or the illumination-corrected version Ā.sub.i to the corresponding z-coordinate. The amplitude and phase channels of the back-propagated images 168 are displayed side by side. The generated movies can potentially be used as an additional way to confirm the detection result of this platform when trained medical personnel are available.
[0128] Timing of Image Processing Algorithm
[0129] Here, a laptop 112 equipped with an Intel Core i7-6700K central processing unit (CPU) 111 @ 4.00 GHz, 64 GB of RAM, and two Nvidia GTX 1080 GPUs 164 was used for image processing. Table 1 summarizes the time required for the image processing workflow, using a single GPU 164 or using two GPUs 164 simultaneously. Here, it is assumed that during image acquisition, the images 168 captured by the imaging device 124 are temporarily stored in the CPU RAM 158 and are constantly moved to the GPU memory in batches corresponding to the scanning positions, where they are processed by the GPU 164 (or GPUs). In this way, image processing can be performed concurrently with image acquisition, shortening the time requirement per test. This situation is mimicked by pre-loading existing data from the hard drive 160 into the RAM 158 of the computer 112 before starting the timer, which provides a reasonable estimation of the time cost of the processing. Because the number of acquired images 168 and the image processing workflow for lysed blood and CSF are different (see previous subsections and Methods), their timing results are calculated individually. In Table 1, timing results for lysed blood and CSF are separated by "/".
[0130] Pre-Calibration of the z-Distance Range
[0131] To pre-calibrate the z-distance range for each of the three channels of the imaging platform 100, one capillary tube whose bottom outer surface was purposely made dirty was installed. Then, three holograms were captured with the scanning head at the two ends of its scanning range as well as in the middle, and each hologram was autofocused to the dirty surface.sup.53, 54. The expected z.sub.b in this case was calculated from the averaged autofocusing distance by adding the wall thickness of the glass capillary tube. This calibration step needs to be done only once.
[0132] High-Pass Filtered Computational Back-Propagation
[0133] The diffraction patterns are back-propagated to the given z-distances using the angular spectrum method, which involves a 2D fast Fourier transform (FFT), a matrix multiplication in the spatial frequency domain with the free-space transfer function, and an inverse FFT. Because the approximate size of the trypanosomes is known, a high-pass filter is added to the transfer function in the spatial frequency domain to suppress other noise and artifacts.
[0134] The coherent transfer function of free-space propagation is given by

H(f.sub.x,f.sub.y;z)=exp[j(2πz/λ)√(1−(λf.sub.x).sup.2−(λf.sub.y).sup.2)] for (λf.sub.x).sup.2+(λf.sub.y).sup.2<1, and H=0 otherwise (3)

[0135] where z is the propagation distance, λ is the optical wavelength, and f.sub.x and f.sub.y are the spatial frequencies in x and y, respectively.
[0136] On top of H, two high-pass filters, H.sub.1 and H.sub.2, are added to suppress unwanted interference patterns. H.sub.1 is a 2D Gaussian high-pass filter, which is used to suppress the low-frequency interference patterns owing to the reflection from the various surfaces in the light path, including the protective glass of the image sensor and the various interfaces of the capillary tube loaded with fluids. H.sub.1 is given by
H.sub.1(f.sub.x,f.sub.y)=1−exp[−½σ.sub.1.sup.2(f.sub.x.sup.2+f.sub.y.sup.2)] (4)
[0137] where σ.sub.1=25.05 μm. H.sub.2 is used to suppress the interference patterns caused by the unwanted grooves of the manufactured glass capillary tubes. Because the grooves are oriented along the axial direction of the capillary tubes, corresponding to the y-direction in the captured images, their energy is mostly concentrated close to the f.sub.x axis in the spatial frequency domain. Therefore, H.sub.2 performs high-pass filtering to f.sub.y, which is given by
H.sub.2(f.sub.x,f.sub.y)=1−exp[−½σ.sub.2.sup.2f.sub.y.sup.2] (5)
[0138] where σ.sub.2=116.9 μm.
[0139] The final coherent transfer function, which combines H, H.sub.1 and H.sub.2, is given by
{tilde over (H)}(f.sub.x,f.sub.y;z)=H(f.sub.x,f.sub.y;z)·min{H.sub.1(f.sub.x,f.sub.y),H.sub.2(f.sub.x,f.sub.y)} (6)
[0140] where min{H.sub.1, H.sub.2} chooses the smaller filter value from H.sub.1 or H.sub.2.
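The combined filter of Eqs. 3-6 can be sketched in NumPy as follows. The σ values in μm are taken to multiply spatial frequencies in cycles/μm so the exponents are dimensionless, which is an interpretation of the text; the pixel pitch and wavelength in the usage below are illustrative values, not from the patent.

```python
import numpy as np

def highpass_transfer_function(shape, dx, wavelength, z,
                               sigma1=25.05, sigma2=116.9):
    """Angular-spectrum free-space transfer function H (Eq. 3) combined
    with the isotropic Gaussian high-pass H1 (Eq. 4) and the f_y-only
    high-pass H2 (Eq. 5) via the element-wise minimum (Eq. 6).
    dx, wavelength, z, sigma1, sigma2 share one length unit (e.g., um)."""
    ny, nx = shape
    fx = np.fft.fftfreq(nx, d=dx)[None, :]        # cycles per unit length
    fy = np.fft.fftfreq(ny, d=dx)[:, None]
    f2 = fx**2 + fy**2
    # Eq. 3: propagating waves get a pure phase; evanescent ones are zeroed
    arg = 1.0 - wavelength**2 * f2
    H = np.where(arg > 0,
                 np.exp(1j * 2 * np.pi * z / wavelength
                        * np.sqrt(np.maximum(arg, 0.0))),
                 0)
    H1 = 1 - np.exp(-0.5 * sigma1**2 * f2)        # Eq. 4: isotropic high-pass
    H2 = 1 - np.exp(-0.5 * sigma2**2 * fy**2)     # Eq. 5: high-pass along f_y
    return H * np.minimum(H1, H2)                 # Eq. 6
```

Note that because H2 depends only on f.sub.y, the combined filter also suppresses the entire f.sub.y=0 line, which is what removes the groove-induced interference patterns oriented along the capillary axis.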
[0141] Optimization of the Subtraction Frame Interval δ.sub.F and Total Analyzed Frames N.sub.F in the Computational Motion Analysis (CMA) Algorithm with Object Function Normalization (OFN)
[0142] The subtraction frame interval δ.sub.F and total analyzed frames N.sub.F are parameters that should be optimized for the parasite (or microorganism) to be detected. δ.sub.F and N.sub.F are related to the subtraction time interval Δt and the total analyzed time T through
δ.sub.F=Δt·R (7)

N.sub.F=T·R (8)
[0143] where R is the frame rate of the recorded sequence (i.e., 26.6 fps in the system). By optimally choosing δ.sub.F (or Δt), the signal from the characteristic locomotion of the microorganism of interest can be amplified with respect to the noise, which includes image sensor noise in addition to unwanted random motion of the background objects/particles in the sample. N.sub.F (or T), on the other hand, determines the window of time-averaging. A larger N.sub.F, in general, will result in reduction of the random background noise through averaging; but at the same time, it can potentially also weaken the useful signal if the microorganism swims away from its original location during T due to directional motion.
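Eqs. 7-8 amount to a simple time-to-frame conversion at the stated 26.6 fps; a minimal helper is sketched below. Rounding to whole frames is an added convenience and `frames_for` is an illustrative name, not from the text.

```python
R = 26.6  # frame rate of the recorded sequence (fps), from the text

def frames_for(delta_t, T):
    """Convert a subtraction time interval delta_t and a total analyzed
    time T (both in seconds) into the frame-domain parameters delta_F
    and N_F of Eqs. 7-8, rounded to whole frames."""
    return round(delta_t * R), round(T * R)
```

For example, a 1-second subtraction interval over 10 seconds of analyzed video corresponds to δ.sub.F≈27 frames and N.sub.F≈266 frames.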
[0144] δ.sub.F and N.sub.F are optimized for trypanosome detection by evaluating the average signal-to-noise ratio (SNR) of the processed images by CMA with OFN (corresponding to
[0145] As shown in
[0146] For CSF (see
[0147] Object Function Normalization (OFN) to Remove Potential False Positives
[0148] OFN is essential to reduce potential false positives owing to strongly scattering particles/objects within the sample (see
[0149] Construction and Training of the Convolutional Neural Network (CNN) for the Identification of Trypanosomes
[0150] Generation of Positive Images for Training/Validation
[0151] Positive images are manually identified from experimental data with a relatively high concentration of spiked trypanosomes. For blood, one test (i.e., one scanning experiment with three capillary tubes) with a spiked trypanosome concentration of ˜10.sup.4/mL was used (no overlap with the data reported in
[0152] Generation of Negative Images for Training/Validation
[0153] Negative training/validation images entirely came from negative control experiments (no overlap with the data reported in
[0154] Network Architecture
[0155] A CNN 166 was constructed with three convolutional blocks and two fully connected (FC) layers (see
[0156] Network Training
[0157] The CNN 166 was implemented in TensorFlow (version 1.7.0) and Python (version 3.6.2). The convolutional kernels were initialized using a truncated normal distribution (mean=0, standard deviation=5.5×10.sup.−3). The weights of the FC layers were initialized using the Xavier initializer. All network biases were initialized to zero. The learnable parameters were optimized using the adaptive moment estimation (Adam) optimizer with a learning rate of 10.sup.−3. A batch size of 64 was used, and the network was trained for ten thousand iterations, by which point it had converged.
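An architecture of the described shape (three convolutional blocks followed by two fully connected layers, 64×64 input patches, Adam at 10.sup.−3) can be sketched with the modern Keras API as follows. The filter counts, kernel sizes, and pooling choices below are assumptions; the actual values appear only in the patent figures, and the original was implemented in TensorFlow 1.7 rather than Keras.

```python
import tensorflow as tf

def build_classifier():
    """Sketch of a three-conv-block, two-FC-layer binary classifier for
    64x64 signal patches (positive = motile trypanosome, negative =
    background). Layer widths are illustrative assumptions."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),
        # three convolutional blocks (conv + pooling each)
        tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        # two fully connected layers
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),  # lr from text
                  loss="sparse_categorical_crossentropy")
    return model
```

Training would then use batches of 64 patches, as stated above.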
[0158] CUDA Acceleration of the CMA Algorithm
[0159] The CMA algorithm 162 was accelerated using CUDA C++ and was run on a laptop computer 112 with dual Nvidia GTX 1080 graphics processing units (GPUs) 164. The most computationally intensive mathematical operations in the CMA algorithm 162 were fast Fourier transforms (FFTs) and inverse FFTs (IFFTs). The Nvidia CUDA Fast Fourier Transform (cuFFT) library was used to perform these operations. The Thrust library was used to perform reduction (i.e., summation of all elements) of an image, which was further used to calculate the mean value of the image for normalization. Various other basic mathematical operations on real or complex-valued images were implemented using custom-written CUDA kernel functions. The CUDA code was carefully optimized to parallelize computation, maximize efficiency, and minimize GPU memory usage. For instance, when performing back-propagation of the diffraction patterns to each z-distance, the high-pass filtered coherent transfer function (Equations 3-6) was only calculated once per z-distance and was reused for all the frames in the time sequence. When performing time-averaged differential analysis with OFN (Eq. 2), only (δ.sub.F+1) back-propagated images (i.e., B.sub.i) needed to be stored in the GPU memory at any given time without sacrificing performance, which reduced the GPU memory usage and made it possible to process even larger-scale problems (e.g., image sequences with more frames, or performing CMA at more z-distances) or to use lower-end GPUs with less memory.
[0160] Before performing FFTs, the raw images 168 (vertical: 1374 pixels, horizontal: 3840 pixels) were padded to a size of 1536×4096 pixels. The padded pixels were assigned the mean value of the unpadded image to reduce artifacts from discontinuities. Because the new dimensions factor into small powers of 2 and 3 (1536=2.sup.9×3 and 4096=2.sup.12), the FFT operation was accelerated by a factor of ˜2.4× compared to the unpadded case. After the IFFT was complete, the images 168 were cropped back to the original size for the other image processing steps.
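The mean-value padding step can be sketched in NumPy as follows. The padding layout (bottom/right) is an assumption, since the text does not specify where the padded pixels are placed, and `pad_for_fft` is an illustrative name.

```python
import numpy as np

def pad_for_fft(img, target=(1536, 4096)):
    """Pad a raw frame to FFT-friendly dimensions (1536 = 2^9 * 3 and
    4096 = 2^12), filling the new pixels with the image mean to reduce
    artifacts from the edge discontinuity, as described above. Returns
    the padded image and the original shape so the result can be
    cropped back after the inverse FFT."""
    h, w = img.shape
    out = np.full(target, img.mean(), dtype=np.float32)
    out[:h, :w] = img.astype(np.float32)   # original frame in the corner
    return out, (h, w)
```

After the IFFT, slicing the result back to the stored original shape (`result[:h, :w]`) recovers the sensor-sized image for the subsequent processing steps.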
[0161] COMSOL Simulation of Sample Heating Due to the Image Sensor
[0162] The temperature of the image sensor 124 rises when it is turned on, creating a temperature gradient above it. Therefore, the fluid sample 101 within the glass tube 120 will gradually start to flow, also causing the particles within the glass tube to move directionally. As a result, the signal of motile microorganisms generated by the CMA algorithm 162 will weaken due to a “smearing” effect; and in the meantime, the movement of the other unwanted particles will increase the background noise and false positive detections, which is undesirable. The fluid sample 101 velocity due to convection is related to the height of the fluid channel. Due to the drag force near the channel wall, a thinner channel will lead to a reduced fluid velocity at a given time after the onset of heating. However, as a tradeoff, a thinner channel also results in a reduced screening throughput.
[0163] COMSOL Multiphysics simulation software was used to estimate the flow speed inside the channel. As shown in
[0164]
[0165] While embodiments of the present invention have been shown and described, various modifications may be made without departing from the scope of the present invention. The invention, therefore, should not be limited except by the following claims and their equivalents. Thus, various changes and modifications may be made to the disclosed embodiments without departing from the scope of the following claims. For example, not all of the components described in the embodiments are required; alternative embodiments may include any suitable combination of the described components, and the general shapes and relative sizes of the components may be modified. Likewise, dimensional and other limitations found in the drawings do not limit the scope of the invention.