Mobile microscopy system for air quality monitoring
11054357 · 2021-07-06
Assignee
Inventors
CPC classification
G01N2015/1454
PHYSICS
G01N15/0255
PHYSICS
International classification
G01N15/00
PHYSICS
Abstract
A lens-free microscope for monitoring air quality includes a housing that contains a vacuum pump configured to draw air into an impaction nozzle. The impaction nozzle has an output located adjacent to an optically transparent substrate for collecting particles. One or more illumination sources are disposed in the housing and are configured to illuminate the collected particles on the optically transparent substrate. An image sensor is located adjacent to the optically transparent substrate, wherein the image sensor collects particle diffraction patterns or holographic images cast upon the image sensor. At least one processor is disposed in the housing and controls the vacuum pump and the one or more illumination sources. Image files are transferred to a separate computing device for image processing using machine learning to identify particles and perform data analysis to output particle images, particle size, particle density, and/or particle type data.
Claims
1. A system comprising a portable, lens-free microscope device for monitoring air quality, the system comprising: a housing; a pump configured to draw air into an impaction nozzle disposed in the housing, the impaction nozzle having an output located adjacent to an optically transparent substrate having a sticky or tacky material thereon for collecting particles contained in the air; one or more illumination sources disposed in the housing and configured to illuminate the collected particles on the optically transparent substrate; an image sensor disposed in the housing and located adjacent to the optically transparent substrate at a distance of less than 5 mm, wherein the image sensor collects diffraction patterns or holographic images cast upon the image sensor by the collected particles; at least one processor disposed in the housing, the at least one processor controlling the pump and/or the one or more illumination sources; and a computing device configured to execute software thereon for receiving diffraction patterns or holographic images from the image sensor and reconstructing differential holographic images containing phase and/or amplitude information of the collected particles and outputting particle images and one or more of particle size data, particle density data, or particle type data of the collected particles based on a machine learning algorithm in the software using extracted spectral and/or spatial features comprising one or more of minimum intensity (I_m), average intensity (I_a), maximum intensity, standard deviation of intensity, area (A), maximum phase, minimum phase, average phase, standard deviation of phase, eccentricity of intensity, and eccentricity of phase.
2. The system of claim 1, wherein the computing device comprises one of a local computing device or a remote computing device.
3. The system of claim 2, wherein the remote computing device comprises a server.
4. The system of claim 1, wherein the diffraction patterns or holographic images are labeled with spatial and temporal data related to the sampled air.
5. The system of claim 1, wherein the extracted spectral and spatial features are obtained at different illumination wavelengths.
6. The system of claim 1, wherein the particle type data identifies one or more of bacteria, viruses, pollen, spores, molds, biological particles, soot, inorganic particles, and organic particles.
7. The system of claim 1, wherein the software executed by the computing device is configured to eliminate artifacts using a trained machine learning algorithm based on the extracted spectral and spatial features.
8. The system of claim 1, wherein the software executed by the computing device is configured to execute a digital peeling process to identify and eliminate spatial artifacts or false positives.
9. The system of claim 1, wherein each holographic image is associated with a GPS location and time/date stamp.
10. The system of claim 1, further comprising a portable electronic device containing software and/or an application thereon configured to receive the particle images and particle size data, particle density data, or particle type data of the collected particles and output the same to the user on a user interface.
11. The system of claim 10, wherein the user interface provides search functionality to search air samples based on one or more of particle size, particle density, particle type, sample location, or sample date or time.
12. A method of monitoring air quality using a portable microscope device comprising: activating a pump disposed in the portable microscope device to capture aerosol particles on an optically transparent substrate; illuminating the optically transparent substrate containing the captured aerosol particles with one or more illumination sources contained in the portable microscope device; capturing cast before and after holographic images or diffraction patterns of the captured aerosol particles with an image sensor disposed in the portable microscope device and disposed adjacent to the optically transparent substrate, wherein the before holographic images or diffraction patterns are obtained prior to capture of the aerosol particles and the after holographic images or diffraction patterns are obtained after capture of the aerosol particles; transferring the image files containing the holographic images or diffraction patterns to a computing device; processing the image files containing the before and after holographic images or diffraction patterns with software contained on the computing device to generate a differential hologram or diffraction pattern followed by outputting a holographic image reconstruction and one or more of particle size data, particle density data, or particle type data of the captured aerosol particles.
13. The method of claim 12, wherein the software executed by the computing device outputs one or more of the particle size data, particle density data, or particle type data based on a machine learning algorithm in the software using extracted spectral and spatial features comprising one or more of minimum intensity (I_m), average intensity (I_a), maximum intensity, standard deviation of intensity, area (A), maximum phase, minimum phase, average phase, standard deviation of phase, eccentricity of intensity, and eccentricity of phase.
14. The method of claim 12, further comprising transferring one or more of particle images, particle size data, particle density data, or particle type data to a portable electronic device for display thereon.
15. The method of claim 14, wherein the portable electronic device comprises a smartphone or tablet computer having an application loaded thereon for displaying one or more of the particle size data, particle density data, or particle type data using a graphical user interface.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS
(32) The air sampler assembly 22 contains an image sensor 24 (seen in
(33) The air sampler assembly 22 further includes an impaction nozzle 30 that is used to trap or collect aerosol particles that are sampled from the sample airstream that is pumped through the microscope device 10 using the vacuum pump 14. The impaction nozzle 30 includes a tapered flow path 32 that drives the airstream through the impaction nozzle 30 at high speed. The tapered flow path 32 terminates in a narrow opening (e.g., rectangular shaped) through which the air passes. The impaction nozzle 30 further includes an optically transparent substrate 34 (best seen in
(34) The optically transparent substrate 34 is located immediately adjacent to the image sensor 24. That is to say, the airstream-facing surface of the optically transparent substrate 34 is located less than about 5 mm from the active surface of the image sensor 24 in some embodiments. In other embodiments, the facing surface of the optically transparent substrate 34 is located less than 4 mm, 3 mm, 2 mm, and in a preferred embodiment, less than 1 mm. In one embodiment, the optically transparent substrate 34 is placed directly on the surface of the image sensor 24 to create a distance of around 400 μm between the particle-containing surface of the optically transparent substrate 34 and the active surface of the image sensor 24. The particle-containing surface of the optically transparent substrate 34 is also located close to the impaction nozzle 30, for example, around 800 μm in one embodiment. Of course, other distances could be used provided that holographic images and/or diffraction patterns of captured particles 100 can still be obtained with the sensor.
(35) Referring to
(36) The lens-free microscope device 10 includes one or more processors 50 contained within the housing 12 which are configured to control the vacuum pump 14 and the one or more illumination sources 40 (e.g., LED driver circuitry). In the embodiment illustrated in
(37) The one or more processors 50, the one or more illumination sources 40, and the vacuum pump 14 are powered by an on-board battery 54 as seen in
(39) For example, in one embodiment, as seen in panel image (i) of
(43) Turning now to the particle detection/verification portion of the algorithm, for each patch a digital focusing operation is performed using the Tamura coefficient to estimate the z distance from the particle 100 to the sensor plane of the image sensor 24. A peeling operation is also employed in the holographic reconstruction process to reject spatial artifacts that are created as part of the image reconstruction operation. Operation 370 shows that the digital focusing operation and a morphological reconstruction process are performed and various spectral and spatial features of the particle 100 are extracted. In one embodiment, the extracted features include minimum intensity I_m, average intensity I_a, area A, maximum phase, and R_Tam, which, as explained herein, is the ratio of the Tamura coefficient of a particular focus plane to the largest Tamura coefficient among the different propagated z planes. The extracted spectral and spatial features are fed to an SVM-based machine learning model to digitally separate spatial artifacts from true particles. The extracted spectral and spatial features are also stored (seen in operation 380) and are later used for particle sizing and/or type analysis. Additional spatial parameters include maximum intensity, standard deviation of intensity, maximum phase, minimum phase, average phase, standard deviation of phase, eccentricity of intensity, and eccentricity of phase. These spatial parameters may be used, for example, to identify particle sizes, particle densities, and particle types.
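The artifact-rejection step described above can be sketched as follows; the feature set follows the text, while the RBF kernel, the 0.95 intensity threshold, and the random training data are illustrative assumptions, not the authors' trained model.

```python
# Sketch of the SVM-based artifact rejection: per-patch spectral/spatial
# features (minimum intensity, average intensity, area, maximum phase) are
# fed to a classifier that separates true particles from reconstruction
# artifacts. The kernel choice, 0.95 threshold, and synthetic training
# data below are illustrative assumptions only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(intensity, phase):
    """Per-patch features; background mean is ~1, particles are darker."""
    area = float((intensity < 0.95).sum())   # pixels below an assumed threshold
    return np.array([intensity.min(), intensity.mean(), area, phase.max()])

rng = np.random.default_rng(0)
# Hypothetical labeled patches: 1 = true particle, 0 = spatial artifact
X_train = rng.normal(size=(200, 4))
y_train = (X_train[:, 2] > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

patch_int = 1.0 - 0.5 * rng.random((8, 8))   # synthetic intensity patch
patch_phase = rng.random((8, 8))             # synthetic phase patch
label = clf.predict(extract_features(patch_int, patch_phase).reshape(1, -1))[0]
```

In a real pipeline the classifier would be trained on features of manually verified particles and artifacts rather than random vectors.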
(44) The peeling operation 390 consists of erasing or removing particles at increasing thresholds relative to background. Thus, the easiest-to-find (e.g., brightest) particles are identified and reconstructed first and then digitally erased. A higher threshold is then established as seen in operation 400, and the image is then subjected to the threshold mask (operation 410), whereby the mask is applied and the particles that are not masked out are subjected to auto-focusing and rejection of spatial artifacts by the SVM model as described above. In one embodiment, a plurality of such peeling cycles is performed (e.g., four (4)), with an increasing threshold applied at each cycle.
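The peeling cycles can be sketched as follows, assuming a normalized differential image with a background mean of 1 and particles appearing darker; the specific threshold values are hypothetical.

```python
import numpy as np

def peeling_cycles(diff_img, thresholds):
    """Iteratively detect and digitally erase particles at increasing
    thresholds relative to the background (mean ~1, particles darker).
    Returns one boolean detection mask per peeling cycle."""
    img = diff_img.copy()
    masks = []
    for t in thresholds:        # e.g., four cycles with increasing thresholds
        mask = img < t          # particles still present below this threshold
        masks.append(mask)
        img[mask] = 1.0         # erase detected particles back to background
    return masks

img = np.ones((16, 16))
img[4, 4] = 0.2     # strong-signal particle, found in the first cycle
img[10, 10] = 0.7   # weaker particle, found only at the higher threshold
masks = peeling_cycles(img, [0.5, 0.8])
```

Each mask would then feed the auto-focusing and SVM artifact-rejection steps described above before the next, higher threshold is applied.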
(45) For particle sizing, the extracted spectral and spatial features for each patch are combined as seen in operation 420. A model (f) is then used to map these features to the diameter of the particles 100. While the function may have any number of dimensions, a second-order polynomial model of f was used, and the extracted spectral and spatial features were extended to the second order (see Equation 2 herein) as seen in operation 430. Normalization of the features is then performed in operation 440 (see Equation 3 herein). Next, a machine learning model, trained with ground-truth particles that were manually measured, is used to map the extracted spectral and spatial features to particle diameter as seen in operation 450. Having all the particle diameters, the particle count and particle statistics can be output as illustrated in operation 460. This may include, for example, a histogram such as that illustrated in panel image (v) of
Experimental
(46) Air Platform Spatial Resolution, Detection Limit, and Field of View
(48) In the experiments described herein, the microcontroller is a Raspberry Pi A+. The microcontroller may include wireless transfer functionality such that images and data may be communicated from the portable microscope device to a server via Wi-Fi, Bluetooth, or the like. Alternatively, the microcontroller may communicate with a portable electronic device such as a Smartphone or the like that runs a custom application or app that is used to transfer images to the computing device 52 (e.g., it acts as a hotspot), operate the device, and display results to the user using the display of the portable electronic device. Here, the microcontroller is used to run the vacuum pump, control the light sources, control the image sensor (CMOS sensor), save the images, transfer the images, and coordinate the communication of the device with the computing device 52. Control of the device may be accomplished, in one embodiment, by the portable electronic device (e.g., Smartphone or tablet PC).
(49) The collected aerosol particles 100 are imaged and quantified by the lens-free microscope device 10. In one preferred aspect of the invention, the holographic images obtained with the lens-free microscope device are used in conjunction with a portable electronic device (e.g., Smartphone, tablet PC) that includes a custom software application 64 thereon that is used to control the microscope device as well as transfer images and data to and from a computing device 52 (e.g., server) that is used to process the raw images obtained using the microscope device 10.
(50) Processing of the acquired c-Air images was performed remotely. As shown in
(51) The USAF-1951 resolution test target was used to quantify the spatial resolution of the lens-free microscope device 10. The reconstructed image of this test target is shown in
(52) In the reconstructed lens-free differential images, the detection noise floor was defined as 3σ (σ = 0.01 is the standard deviation of the background) from the background mean, which is always 1 in a given normalized differential image. For a particle to be viewed as detected, its lens-free signal should be above this 3σ noise floor. Particles of 1 μm were clearly detected, which was also cross-validated by a benchtop microscope comparison. It should be noted that, as desired, this detection limit is well below the 50% cut-off sampling diameter of the impactor (d_50 = 1.4 μm).
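As a small numeric illustration of this 3σ criterion (the example signal values are hypothetical):

```python
import numpy as np

# 3-sigma detection criterion: the background mean is 1 and sigma = 0.01,
# so a particle is detected when its differential signal deviates from the
# background by more than 0.03. The signal values below are hypothetical.
sigma = 0.01
noise_floor = 3 * sigma
signal = np.array([1.005, 0.96, 1.04])
detected = np.abs(signal - 1.0) > noise_floor   # -> [False, True, True]
```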
(53) In terms of the imaging field of view, the active area of the CMOS sensor in the c-Air design used for the experiments herein was 3.67 mm × 2.74 mm = 10.06 mm². However, in the impactor air sampler geometry, the particles are deposited immediately below the impaction nozzle. Thus, the active area that will be populated by aerosols and imaged by the lens-free microscope device will be the intersection of the active area of the CMOS sensor and the impaction nozzle opening. Because the slit has a width of only 1.1 mm, the resulting effective imaging field of view of the tested c-Air device was 3.67 mm × 1.1 mm ≈ 4.04 mm². With either the selection of a different CMOS image sensor chip or a custom-developed impaction nozzle, the nozzle slit area and the image sensor area can have larger spatial overlaps to further increase this effective field of view.
(54) Machine Learning Based Particle Detection and Sizing
(55) A custom-designed machine-learning software algorithm was used on the computing device 52 (e.g., server) that was trained on size-calibrated particles to obtain a mapping from the detected spatial characteristics of the particles to their diameter, also helping to avoid false positives and false negatives, as well as over-counting of moved particles in the detection process. For this purpose, spectral and spatial features were extracted from the holographic particle images, including, e.g., minimum intensity I_m, average intensity I_a, and area A, and a second-order regression model was developed that maps these features to the sizes of the detected particles in microns. The model is deterministically learned from size-labeled particle images, which are manually sized using a standard benchtop microscope. Specifically, after extraction of the features I_m, I_a, and A of the masked region in a particle peeling cycle, a model f is developed that maps these features to the particle diameter D in microns, i.e.,
D = f(I_m, I_a, √A)    (1)
(56) where f can potentially have infinite dimensions. However, a simplified second-order polynomial model of f was employed and the features were extended to the second-order by defining:
X = [1, I_m, I_a, √A, I_m², I_a², A, I_m·I_a, I_m·√A, I_a·√A].    (2)
(57) A linear mapping, β, was defined that relates the second-order features to the diameter of the particle:
(58) D = βᵀX̂, where X̂ = (X − μ)/σ    (3)
(59) where T refers to the transpose operation, and μ and σ represent the mean and standard deviation of X, respectively.
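Equations (2) and (3) can be sketched as follows; whether the constant term is excluded from the normalization is an assumption:

```python
import numpy as np

def second_order_features(I_m, I_a, A):
    """Eq. (2): extend (I_m, I_a, sqrt(A)) to second order, leading 1 included."""
    r = np.sqrt(A)
    return np.array([1.0, I_m, I_a, r,
                     I_m**2, I_a**2, A,
                     I_m * I_a, I_m * r, I_a * r])

def normalize(X, mu, sigma):
    """Eq. (3)-style element-wise normalization of the feature vector.
    Leaving the constant term unscaled is an assumption."""
    Xh = X.astype(float).copy()
    Xh[1:] = (X[1:] - mu[1:]) / sigma[1:]
    return Xh

X = second_order_features(0.2, 0.5, 4.0)
```

The diameter estimate is then the inner product of the trained β with the normalized feature vector X̂.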
(60) Based on the above mapping, 395 size-labeled microbeads were used for training and blind testing. These polystyrene microbeads ranged in diameter from 1 μm to 40 μm, as shown in
min_β ‖βᵀX̂_tr − D_tr^mic‖₁    (4)
(61) This minimization was performed by CVX (available at: http://cvxr.com/cvx/), a software package designed for solving convex optimization problems. The same trained parameter β was then applied to the cross-validation set, which comprised another 197 microbeads. Particle sizing training errors (E_tr) and testing errors (E_cv) were validated by evaluating the norm of the difference:
E_tr = ‖βᵀX̂_tr − D_tr^mic‖_p    (5)
E_cv = ‖βᵀX̂_cv − D_cv^mic‖_p    (6)
(62) where βᵀX̂_cv is the testing feature mapping, and D_cv^mic is the calibrated diameter for the testing set. In addition, p = 1 is used for calculating the mean error, and p = ∞ is used for calculating the maximum error. Note that this training process only needs to be done once; the trained parameter vector β and the normalization parameters μ and σ are then saved and subsequently used for blind particle sizing of all the captured aerosol samples using c-Air devices.
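The ℓ1 minimization of Equation (4) was solved with CVX in Matlab; an equivalent sketch in Python, recasting the problem as a linear program via `scipy.optimize.linprog`, is shown below (the synthetic data are illustrative, not the microbead measurements):

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression(X, d):
    """Solve min_b ||X b - d||_1 (cf. Eq. 4) as a linear program:
    minimize sum(t) over [b; t] subject to -t <= X b - d <= t."""
    n, k = X.shape
    c = np.concatenate([np.zeros(k), np.ones(n)])
    A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
    b_ub = np.concatenate([d, -d])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (k + n))
    return res.x[:k]

rng = np.random.default_rng(3)
X_tr = rng.normal(size=(30, 4))          # stand-in for the normalized features
beta_true = np.array([2.0, -1.0, 0.5, 3.0])
D_tr = X_tr @ beta_true                  # noiseless, so an exact fit exists
beta = l1_regression(X_tr, D_tr)
```

With noiseless synthetic data the recovered β matches the generating coefficients; real training data would leave a nonzero ℓ1 residual corresponding to E_tr.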
(64) Particle Size Distribution Measurements and Repeatability of the c-Air Platform
(65) Two c-Air devices, which were designed to be identical, were used to conduct repeated measurements at four locations: (1) the class-100 clean room of the California NanoSystems Institute (CNSI); (2) the class-1000 clean room at CNSI; (3) the indoor environment in the Engineering IV building on the University of California, Los Angeles (UCLA) campus; and (4) the outdoor environment on the second-floor patio of the Engineering IV building. At each location, seven samples were obtained with each c-Air device, with a sampling period of 30 s between successive measurements. These sample c-Air images were processed as described herein, and the particle size distributions for each location were analyzed and compared.
(66) The mean and standard deviation of the seven measurements in each of the four locations are summarized in Table 1 below. It is interesting to note that c-Air measured the TSP density at ~7 counts/L for the class-100 clean room and ~25 counts/L for the class-1000 clean room at CNSI, which are both comparable to the ISO 14644-1 clean room standards, i.e., 3.5 counts/L for the class-100 clean room and 36 counts/L for the class-1000 clean room for particles ≥0.5 μm.
(67) The measurements of TSP, PM10, and PM2.5 densities from the same data set were additionally used to elucidate two aspects of the repeatability of the c-Air platform, i.e., the intra-device and inter-device repeatability. The intra-device repeatability is defined as the extent to which the measurement result varies from sample to sample using the same c-Air device to measure the air quality in the same location (assuming that the air quality does not change from measurement to measurement with a small time lag in between). The strong intra-device repeatability of c-Air is evident in the standard deviation (std, σ) in Table 1 below.
(68) TABLE 1
                            Class-100    Class-1000
                            Clean Room   Clean Room   Indoor    Outdoor
Device A
  Total (count/L)   mean      10.76        27.32      114.14    196.52
                    std        3.34        27.94       17.09     55.25
  PM10 (count/L)    mean      10.76        26.92      111.59    195.84
                    std        3.34        27.41       17.22     55.43
  PM2.5 (count/L)   mean       7.95        14.88       67.99    113.36
                    std        2.87        14.33       12.03     43.45
Device B
  Total (count/L)   mean       6.14        23.57      151.63    190.89
                    std        6.17        22.02       67.8      18
  PM10 (count/L)    mean       6.14        23.57      147.76    190.79
                    std        6.17        22.02       67.01     18.06
  PM2.5 (count/L)   mean       4.4         16.19       89.93    116.59
                    std        3.24        14.67       39.31     13.53
(69) The inter-device repeatability is defined as the extent to which the results vary from each other using two c-Air devices that are designed to be identical to measure the air quality in the same location. To further quantify the inter-device repeatability, a U-test (i.e., the Mann-Whitney U-test, or Wilcoxon rank-sum test) was performed on the 24 sets of measurement data from devices A and B at the four different locations. In the U-test, the goal was to verify the null hypothesis (H = 0) for two sets of samples, X and Y:
(70) H = 0: median(X) = median(Y); H = 1: median(X) ≠ median(Y)
(71) That is, the experiments sought to test whether the medians of the two samples are statistically the same. Compared to other tests for repeatability, e.g., the Student's t-test, the U-test requires fewer assumptions and is more robust. A Matlab built-in function, ranksum, was used to perform the U-test, and the hypothesis results and corresponding p-values are summarized in Table 2 below. As shown in this table, the null hypothesis H = 0 is valid for all 24 sets of measurement data (from devices A and B at four different locations), showing the strong inter-device repeatability of the c-Air platform.
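A Python analogue of the Matlab `ranksum` call, using `scipy.stats.mannwhitneyu` on hypothetical per-device measurement sets, might look like:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical particle-count measurements (counts/L) from the two devices
# at one location; the actual experiment used seven samples per device.
rng = np.random.default_rng(0)
dev_a = rng.normal(100, 10, size=7)
dev_b = rng.normal(100, 10, size=7)

stat, p = mannwhitneyu(dev_a, dev_b, alternative="two-sided")
h = int(p < 0.05)   # h == 0: medians statistically indistinguishable
```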
(72) TABLE 2
                         Class-100    Class-1000
                         Clean Room   Clean Room   Indoor   Outdoor
Total (count/L)   H          0            0          0         0
                  P-value    0.07         0.46       0.13      0.53
PM10 (count/L)    H          0            0          0         0
                  P-value    0.07         0.46       0.21      0.53
PM2.5 (count/L)   H          0            0          0         0
                  P-value    0.09         0.43       0.16      0.25
(73) Influence of the 2016 Sand Fire Incident at a >40-km Distance Measured Using the c-Air Device
(74) On Jul. 22, 2016, the Sand Fire incident struck near the Santa Clarita region in California and remained uncontained for several days. Although the UCLA campus is more than 40 km from the location of the fire, on July 23 around noon, smoke and ashes filled the sky near UCLA. Six air samples were obtained using the c-Air device at an outdoor environment at UCLA, as described in the above section. The results were compared with a previous sample obtained on a typical day, Jul. 7, 2016, using the same device and at the same location. The data of both days contained six 30-s air samples measured with c-Air, with a 2-min interval between the successive samples. For each day, the particle size distributions of the six samples were averaged and the standard deviations were plotted as the histogram in
(75) Comparison of c-Air Device with a Standard BAM PM2.5 Instrument
(76) On Aug. 16, 2016, a c-Air device was tested at the Reseda Air Quality Monitoring Station (18330 Gault St., Reseda, Calif., USA) and a series of measurements were made during a 15-h period starting from 6:00 a.m. The performance of the c-Air device was compared with that of the conventional EPA-approved BAM PM2.5 measurement instrument (BAM-1020, Met One Instruments, Inc.).
(77) The EPA-approved BAM-1020 pumps air at 16.7 L/min and has a rotating filter in the airflow that accumulates PM2.5 to be measured each hour. A beta-particle source and detector pair inside measures the attenuation induced by the accumulated PM2.5 on the filter and converts it to total mass using the Beer-Lambert law. The quantity reported by the BAM-1020 is the hourly averaged PM2.5 mass density in μg/m³. In comparison, the c-Air device is programmed to obtain a 30-s average particle count per 6.5 L of air volume. It performs sizing and concentration measurements using optical microscopic imaging.
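The Beer-Lambert conversion used by the BAM-1020 can be sketched as follows; the attenuation coefficient, beta counts, and filter spot area are hypothetical values chosen only to illustrate the arithmetic, not instrument constants:

```python
import math

# Beer-Lambert sketch of the BAM principle: beta attenuation through the
# filter spot gives the areal mass density x of accumulated PM2.5, which
# is then converted to a volumetric concentration. mu_m, the counts, and
# the spot area are hypothetical illustration values.
mu_m = 0.30                        # mass attenuation coefficient, m^2/g (assumed)
I0, I = 10000.0, 9850.0            # beta counts before/after accumulation (assumed)
x = math.log(I0 / I) / mu_m        # areal mass density on the spot, g/m^2
spot_area = 1.0e-4                 # filter spot area, m^2 (assumed)
volume = 16.7e-3 * 60.0            # air volume per hour at 16.7 L/min, m^3
conc_ugm3 = x * spot_area / volume * 1e6   # hourly PM2.5 mass density, ug/m^3
```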
(78) To enable a fair comparison, four 30-s measurements were made each hour, with 10- to 15-min intervals between consecutive c-Air measurements. The PM2.5 densities corresponding to these samples were obtained and averaged to yield the final measured PM2.5 density for a given hour. This c-Air average was compared to the hourly average PM2.5 mass density measured by the BAM-1020. The measurements of the c-Air device were obtained on the roof of the Reseda Air Sampling Station, close to the inlet nozzle of the BAM-1020; however, the c-Air device was situated 2 m away from it to avoid interference between the two systems.
(80) Spatial-Temporal Mapping of Air-Quality Near LAX
(81) On Sep. 6, 2016, two c-Air devices, device A and device B, were used to measure the spatio-temporal air quality changes around Los Angeles International Airport (LAX). Two 24-h measurements were made spanning two different routes, representing the longitudinal and latitudinal directions, centered at LAX. Each route included six locations, and measurements were made in each route with a period of 2 h over 24 h. These raw c-Air measurements were sent to the remote server for automatic processing to generate the particle image and particle size statistics at each time and location.
(82) Route 1 extended from LAX to the east in a longitudinal direction. Along this route, six sites were located at 3.37 km, 4.34 km, 5.91 km, 7.61 km, 9.95 km, and 13.1 km east of LAX, respectively. LAX shows a pattern of a large number of flights throughout the day (7 a.m. to 11 p.m.); however, it shows a prominent valley late at night (2 a.m.), when the number of flights is minimal, as shown by the flight curves in
(83) Unlike Route 1, Route 2 extended from the south to the north of LAX, spanning a latitudinal direction. The six locations chosen in this route were located 3.58 km, 1.90 km, 0.50 km, 0.01 km, 1.46 km, and 2.19 km north of LAX, respectively. Similar to Route 1,
(84) Methods
(85) Impaction-Based Air-Sampler
(86) To capture aerosols, an impaction-based air sampler was used on account of its high throughput, simple hardware, and compatibility with microscopic inspection. As described herein, the impactor that was used in the lens-free microscope device includes an impaction nozzle and a sticky or tacky sampling coverslip (Air-O-Cell Sampling Cassette, Zefon International, Inc.). The off-the-shelf Air-O-Cell sampling cassette was taken apart, and the upper portion, which contained the impaction nozzle and sticky coverslip, was used. The sticky coverslip was then rested directly on the imaging sensor. A vacuum pump (Micro 13 vacuum pump (Part No. M00198), available from Gtek Automation, Lake Forest, Calif.) drives the laminar airstream through the nozzle at high speed. The sticky or tacky coverslip is placed to directly face the airstream. The airstream is easily redirected around the coverslip, while the aerosol particles inside the stream impact and are collected by the sticky coverslip. This collection is subsequently used for computational imaging.
(87) Aerosol capture by the impactor is a random process. The probability that an individual aerosol particle passing through the impactor will be captured depends on the particle size, laminar airflow rate, and nozzle width. This probability is related to the Stokes number (Stk):
(88) Stk = ρ_p·d_p²·U / (9·μ·D_j)
(89) where ρ_p is the particle mass density, d_p denotes the particle diameter, U represents the flow velocity, μ is the air viscosity, and D_j denotes the nozzle diameter. The impaction efficiency increases as Stk increases. Based on the same terminology, the cut-off size, d_50, is defined as the diameter of the particle at which the impaction efficiency decreases to 50%. In the experimental design, the air sampler (with a nozzle of 1.1 mm by 14.5 mm) was connected to and driven by a miniaturized pump with a throughput of 13 L/min. Based on the above relationship, the 50% cut-off sampling diameter can be estimated as d_50 = 1.4 μm.
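Under stated assumptions (unit particle density, a rectangular-jet Stk_50 of 0.24, and no slip correction), the cut-off diameter can be estimated from the nozzle geometry and flow rate; conventions for Stk_50 vary, so only order-of-magnitude agreement with the reported d_50 = 1.4 μm should be expected:

```python
import math

# Order-of-magnitude estimate of the impactor cut-off diameter d_50 from
# Stk = rho_p * d_p**2 * U / (9 * mu * D_j). Unit particle density, a
# rectangular-jet Stk_50 of 0.24, and no slip correction are assumptions.
rho_p = 1000.0            # particle mass density, kg/m^3 (assumed)
mu_air = 1.8e-5           # air viscosity, Pa*s
W, L = 1.1e-3, 14.5e-3    # nozzle slit width and length, m (from the text)
Q = 13e-3 / 60.0          # 13 L/min throughput in m^3/s

U = Q / (W * L)           # mean jet velocity through the slit, m/s
stk50 = 0.24              # assumed 50% impaction-efficiency Stokes number
d50 = math.sqrt(9 * mu_air * W * stk50 / (rho_p * U))   # meters
```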
(90) c-Air Lens-Free On-Chip Microscope and Air Sampling Design
(91) For rapid imaging and inspection of the collected aerosols, the impactor was combined with a lens-free microscope in a single mobile device as shown
(92) The image capturing and air sampling processes are illustrated in
(93) Next, the pump was powered on to push the air through the sampler for 30 seconds, thereby screening 6.5 L of air. The three LEDs were then sequentially turned on/off, and three sample images were thereby obtained with the newly captured aerosol particles. These background images and sample images were both transferred or synced to the server for further processing. The syncing operation involves transferring the image files as well as updating a log file in the server that is used for debugging and display purposes. In this approach, two sets of images (i.e., before and after sampling) were obtained to employ a differential imaging strategy. Specifically, after subtraction of the sample image from its corresponding background image, a differential hologram was formed, which contained the information of only the newly captured particles. For particle sizing, only the images captured under the green LED illumination were used. By merging all the differential holographic images captured using the three LEDs, red, green, and blue (RGB) color images of the captured particles could also be obtained, revealing the color information of the specimen, if desired. This may be used, for example, to discern particle type or other particle-specific information. However, in some embodiments where only particle counting and sizing is performed, only a single color illumination is needed. To avoid waiting for these steps to complete before a new sample could be obtained, the sampling process was programmed using a parallel approach. Accordingly, when a new sampling request arrived before the previous result was synced, the un-synced sample was cached first. It was later synced when the device became idle. In this embodiment, the entire device sampling process was controlled by a custom-developed program on a microcomputer (Raspberry Pi A+).
Of course, different microcomputers or microprocessors may be used to control operation of the device as well as acquire images using the imaging array.
(94) c-Air Smartphone App
(95) To control the c-Air device, an iOS-based app was developed using Swift (Apple Inc.). Of course, the application or app could work on another operating system such as, for example, Android. The app is installed on an iOS device (e.g., iPhone 6s) and is used together with the c-Air device. The app has two basic functions: (1) controlling the device for sampling air; and (2) displaying the results processed by the server. The app may also facilitate the transfer of images and data to and from the remote server. In one embodiment, the app automatically establishes a personal hotspot upon launching, to which the device connects through Wi-Fi.
(96) The same app is additionally used to view the server-processed results of the air samples (i.e., the results of processed holographic images obtained using the device) captured from different locations. The full history of the samples obtained by this device can be accessed in (iv) map view or (vi) list view. Selecting a sample in list view or a pinpoint in map view creates a summary of the server-processed results. The results can be viewed in two aspects using the app: a reconstructed microscopic image of the captured aerosols on the substrate, with an option to zoom into individual particles, and a histogram of the particle size and density distribution (panel image (v) of
(97) Remote Processing of c-Air Images
(98) Processing of the captured holographic c-Air images was performed on a Linux-based server (Intel Xeon E5-1630 3.70-GHz quad-core central processing unit, CPU) running Matlab. Of course, other computing devices (e.g., servers) containing the software thereon may also be used. As illustrated in
(99) In Step 1, the raw holograms, each approximately 5 MB in size, are transferred from the microscope device to the server at 1 s or less per image using Wi-Fi. In Step 5, the processed information is packaged as a JPEG file of the reconstructed image plus a vector containing the particle density of each size range, which is later rendered and displayed as a histogram on the Smartphone app. The specific algorithms used in this workflow are detailed below.
(100) Pre-Processing and Differential Hologram Formation
(101) The server receives the two sets of holograms (background and sample) in three colors: red (R), green (G), and blue (B). For pre-processing before the hologram reconstruction and particle detection steps, the raw-format images are first extracted and then de-Bayered, i.e., only the information of the corresponding color channel is maintained. Next, to isolate the current aerosol sample collected during the latest sampling period, three differential holograms in R, G, and B are generated by digitally subtracting the corresponding background image from the sample image and normalizing the result to a range of zero to two with the background mean centered at one. Alternatively, de-Bayering may take place after the differential holograms are obtained.
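The differential-hologram step above can be sketched as follows. This is a minimal illustration assuming one plausible reading of the normalization (dividing the subtraction by the background mean, so that background regions center at 1.0 and values span roughly zero to two); the actual server implementation may differ.

```python
import numpy as np

def differential_hologram(sample, background):
    """Form a differential hologram from a sample/background pair.

    Subtracts the background hologram from the sample hologram and
    renormalizes so that unchanged (background-only) regions map to 1.0
    and values are clipped to the range [0, 2].
    """
    diff = sample.astype(np.float64) - background.astype(np.float64)
    # Scale by the background mean so the background centers at 1.0;
    # newly captured (absorbing) particles then fall below 1.0.
    norm = 1.0 + diff / background.mean()
    return np.clip(norm, 0.0, 2.0)
```

With this normalization, a pixel darkened to half the background level maps to 0.5, matching the sub-unity detection thresholds used later in the peeling cycles.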
(102) Holographic Reconstruction
(103) A 2D distribution of captured particles, O(x, y), can be reconstructed through digital propagation of its measured hologram, A(x, y), to the image plane using the angular spectrum method:
$$O(x,y)=\mathcal{F}^{-1}\left\{\mathcal{F}\{A(x,y)\}\cdot H(f_x,f_y)\right\}\tag{9}$$
(104) where $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the spatial Fourier transform and its inverse, respectively, and $H(f_x, f_y)$ is the propagation kernel (i.e., the angular spectrum) in the Fourier domain, which is defined as:
(105)
$$H(f_x,f_y)=\begin{cases}\exp\!\left[j\,\dfrac{2\pi n z}{\lambda}\sqrt{1-\left(\dfrac{\lambda f_x}{n}\right)^{2}-\left(\dfrac{\lambda f_y}{n}\right)^{2}}\,\right], & f_x^{2}+f_y^{2}\le\left(\dfrac{n}{\lambda}\right)^{2}\\[6pt]0, & \text{otherwise}\end{cases}$$
(106)

where $f_x$ and $f_y$ represent the spatial frequencies of the image in the Fourier domain. The propagation kernel $H(f_x, f_y)$ is uniquely defined, given the illumination wavelength $\lambda$, the refractive index of the medium, n, and the propagation distance z. Without further clarification, all the propagation-related terms herein refer to this angular-spectrum-based digital propagation of a complex object. Twin-image-related spatial artifacts due to intensity-only detection are discussed in the following subsections.
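A minimal sketch of the angular-spectrum propagation of Equation (9); the pixel size and default refractive index below are illustrative assumptions, not values taken from the device specification.

```python
import numpy as np

def angular_spectrum_propagate(hologram, z, wavelength, n=1.0, pixel=1.12e-6):
    """Digitally propagate a hologram A(x, y) by a distance z using the
    angular spectrum method: O = IFFT{ FFT{A} * H(fx, fy) }."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel)  # spatial frequencies f_x
    fy = np.fft.fftfreq(ny, d=pixel)  # spatial frequencies f_y
    FX, FY = np.meshgrid(fx, fy)
    # Argument of the square root in the kernel; negative values
    # correspond to evanescent waves and are zeroed out.
    arg = 1.0 - (wavelength * FX / n) ** 2 - (wavelength * FY / n) ** 2
    H = np.where(
        arg > 0,
        np.exp(1j * 2 * np.pi * n * z / wavelength * np.sqrt(np.maximum(arg, 0.0))),
        0.0,
    )
    return np.fft.ifft2(np.fft.fft2(hologram) * H)
```

At z = 0 the kernel reduces to unity over the propagating band, so the routine returns the input hologram unchanged, which is a convenient sanity check.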
(107) Digital Auto-Focusing on Aerosols
(108) In the lens-free on-chip imaging geometry, the specific distance from the sample to the sensor plane is usually unknown and must be digitally estimated for accurate reconstruction and particle analysis. Here, a digital measure was used, termed the Tamura coefficient, for autofocusing and estimation of the vertical distance, z, from the particle to the sensor plane. See Memmolo, P. et al. Automatic focusing in digital holography and its application to stretched holograms. Opt. Lett. 36, 1945-1947 (2011), which is incorporated herein by reference. It is defined as the square root of the standard deviation of an image over its mean:
(109)
$$T(z)=\sqrt{\frac{\sigma(I_z)}{\langle I_z\rangle}}$$
(110) where $I_z$ is the intensity image of a hologram after propagation by distance z, $\sigma(\cdot)$ its standard deviation, and $\langle\cdot\rangle$ its mean.
(111) To speed up this auto-focusing process, a fast searching algorithm based on the Tamura coefficient was used, which is illustrated in
(112) In the second step, after a concave height interval (l, u) is identified, a golden-ratio search is run to find the correct depth focus for each particle. To this end, the golden ratio is defined as $\varphi=(\sqrt{5}-1)/2$, where $p=u-\varphi(u-l)$ and $q=l+\varphi(u-l)$ are the golden-ratio division points on each side of a given height interval. After propagating the hologram to these new heights (p and q), one compares the Tamura coefficients, $T_p$ and $T_q$, at these two heights, p and q, respectively. If $T_p<T_q$, one moves the lower bound, $l=p$, and then lets $p=q$, $T_p=T_q$, and $q=l+\varphi(u-l)$. Otherwise, one moves the upper bound, $u=q$, and then lets $q=p$, $T_q=T_p$, and $p=u-\varphi(u-l)$. This process is repeated until the length of the height interval is smaller than a predefined threshold, $u-l<\varepsilon$, e.g., $\varepsilon=0.1\ \mu m$.
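The Tamura-coefficient autofocus and golden-ratio search described above can be sketched as follows. Here `score` is a stand-in for propagating the hologram to height z and computing its Tamura coefficient; any concave scoring function works.

```python
import numpy as np

GOLDEN = (np.sqrt(5) - 1) / 2  # the golden ratio, ~0.618

def tamura(image):
    """Tamura coefficient: sqrt of the standard deviation over the mean."""
    return np.sqrt(image.std() / image.mean())

def golden_ratio_focus(score, l, u, eps=0.1):
    """Golden-ratio search for the height z maximizing score(z) on a
    concave interval (l, u); stops when the interval is shorter than eps."""
    p = u - GOLDEN * (u - l)
    q = l + GOLDEN * (u - l)
    tp, tq = score(p), score(q)
    while u - l > eps:
        if tp < tq:
            # The maximum lies in (p, u): move the lower bound.
            l, p, tp = p, q, tq
            q = l + GOLDEN * (u - l)
            tq = score(q)
        else:
            # The maximum lies in (l, q): move the upper bound.
            u, q, tq = q, p, tp
            p = u - GOLDEN * (u - l)
            tp = score(p)
    return (l + u) / 2
```

Because $\varphi^2 = 1-\varphi$, one of the two interior points is reused at each step, so each iteration needs only one new hologram propagation.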
(113) Particle Detection Using Digital Peeling
(114) Direct back-propagation of the acquired hologram using Equation (9) to the auto-focused sample plane generates a spatial artifact, called the twin-image noise, on top of the object. This twin-image artifact affects the detection of aerosol particles. If left unprocessed, it can lead to false-positives and false-negatives. To address this problem, an iterative particle peeling algorithm is employed in the holographic reconstruction process. Additional details regarding digital peeling (or count-and-clean) may be found in McLeod, E. et al., High-throughput and label-free single nanoparticle sizing based on time-resolved on-chip microscopy, ACS Nano 9, 3265-3273 (2015), which is incorporated herein by reference. It is additionally combined with a machine learning algorithm or model to further reject these spatial artifacts. The machine learning algorithm may include a support vector machine (SVM)-based learning model, deep learning model, or the like known to those skilled in the art. The algorithm used according to one embodiment is summarized in
(115) This peeling algorithm contains four cycles of detection and erasing (peeling out) of the particles at progressively increasing thresholds, i.e., 0.75, 0.85, 0.92, and 0.97, where the background is centered at 1.0 during the differential imaging process, as described in previous sections. The highest threshold (0.97) is selected as 3σ from the background mean, where σ≈0.01 is the standard deviation of the background. A morphological reconstruction process is used to generate the image mask instead of using a simple threshold. Because most particles have a darker center and a somewhat weaker boundary, using a single threshold may mask only parts of the particle, potentially causing the particle to be missed or re-detected multiple times in subsequent peeling cycles. This is avoided by using a morphological reconstruction process.
(116) In each cycle of this digital particle peeling process, one first adjusts the image focus using the auto-focusing algorithm described herein. Then, a morphological reconstruction is employed to generate a binary mask, where each masked area contains a particle. For each mask, a small image (100×100 pixels) is cropped, and fine auto-focusing is performed on this small image to find the correct focus plane of the corresponding particle. At this focus plane, various spectral and spatial features of the particle are extracted, e.g., minimum intensity I_m, average intensity I_a, area A, and maximum phase φ_M. The image is then propagated to five different planes uniformly spaced between 20 μm above and 20 μm below this focus plane. The Tamura coefficient of this focus plane is calculated and compared to the coefficients of these five other planes. The ratio of the Tamura coefficient at this focus plane to the highest Tamura coefficient of all six planes is defined as another feature, R_Tam. These four features, I_m, φ_M, A, and R_Tam, are then fed into an SVM-based learning model to digitally separate spatial artifacts from true particles and reject such artifacts. This learning algorithm is detailed below. After all the detected particles in this peeling cycle are processed, one digitally peels out these counted particles, i.e., replaces the thresholded area corresponding to each detected particle with the background mean, on both the image and twin-image planes. The algorithm then proceeds to the next peeling cycle with a higher threshold and repeats the same steps.
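As a rough illustration, the detect-and-erase loop above can be sketched as follows. This is a deliberately simplified sketch: a plain threshold stands in for the morphological-reconstruction mask, auto-focusing and SVM rejection are omitted, and `label_regions` is a minimal stand-in for a connected-component labeler.

```python
import numpy as np

THRESHOLDS = [0.75, 0.85, 0.92, 0.97]  # background mean centered at 1.0

def label_regions(mask):
    """Minimal 4-connected component labeling (illustrative stand-in)."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        count += 1
        stack = [start]
        while stack:
            y, x = stack.pop()
            if not mask[y, x] or labels[y, x]:
                continue
            labels[y, x] = count
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]:
                    stack.append((ny, nx))
    return labels, count

def peel_particles(intensity, background_mean=1.0):
    """Four detection/erasing cycles at progressively increasing thresholds."""
    img = intensity.astype(float).copy()
    detections = []
    for thr in THRESHOLDS:
        mask = img < thr                  # particles are darker than the background
        labels, n = label_regions(mask)
        for i in range(1, n + 1):
            region = labels == i
            detections.append({
                "I_m": img[region].min(),    # minimum intensity
                "I_a": img[region].mean(),   # average intensity
                "A": int(region.sum()),      # area in pixels
            })
        img[mask] = background_mean       # peel out the counted particles
    return detections
```

Peeling each detected region back to the background mean is what keeps faint particles from being re-detected in the later, higher-threshold cycles.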
(117) After completing all four peeling cycles, the extracted features, I_m, φ_M, and A, are further utilized for particle sizing using a machine-learning algorithm, as detailed further below. This sizing process is performed only on true particles, and it generates a histogram of particle sizes and density distributions, as well as various other parameters, including, for example, TSP, PM10, and PM2.5, reported as part of the c-Air result summary.
(118) Elimination of False-Positives Using a Trained Support Vector Machine
(119) To avoid false-positives in the detection system, a trained linear SVM was used that is based on the four features, I_m, φ_M, A, and R_Tam, described previously, to distinguish spatial artifacts from true particles and increase c-Air detection accuracy. These spectral and spatial features were selected to provide the best separation between the true and false particles. To train this model, two air sample images were obtained using a c-Air prototype, one indoor and one outdoor. Then, in addition to the c-Air based analysis, the sampling coverslip was removed and inspected for the captured particles under a benchtop bright-field microscope using a 40× objective lens. The thresholded areas in the peeling cycle and lens-free reconstruction process were compared with the images of the benchtop microscope to mark each one of these detected areas as a true particle or a false one. Using this comparison, a total of more than 2,000 thresholded areas were labeled, and half of this training data was fed into the SVM model (implemented in Matlab using the function svmtrain). The other half was used for blind testing of the model, which showed a precision of 0.95 and a recall of 0.98.
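A loose Python analogue of the Matlab training step described above, with scikit-learn's `SVC` in place of `svmtrain`. The feature values below are synthetic placeholders invented for illustration, not data from the study; only the linear kernel and the 50/50 train/test split mirror the text.

```python
import numpy as np
from sklearn.svm import SVC  # rough Python analogue of Matlab's svmtrain

# Synthetic stand-in features (I_m, phi_M, A, R_Tam) for labeled
# thresholded areas; label 1 = true particle, 0 = spatial artifact.
rng = np.random.default_rng(0)
true_particles = rng.normal([0.4, 1.0, 40.0, 1.0], 0.05, size=(100, 4))
artifacts = rng.normal([0.8, 0.2, 5.0, 0.6], 0.05, size=(100, 4))
X = np.vstack([true_particles, artifacts])
y = np.array([1] * 100 + [0] * 100)

# Train a linear SVM on half of the labeled areas and blind-test
# on the other half.
clf = SVC(kernel="linear").fit(X[::2], y[::2])
accuracy = clf.score(X[1::2], y[1::2])
```

On well-separated synthetic clusters like these, the linear decision boundary classifies the held-out half nearly perfectly; real labeled data would of course behave less cleanly.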
(120) Detection and Exclusion of Moved Particles
(121) Multiple differential imaging experiments were performed on the same substrate. It was observed that a small number of aerosol particles (~3%) moved or changed their positions on the substrate in the later runs. In the reconstruction of the differential hologram, a moved particle appears as a pair of white and black points, where the white particle appears because it was present in the previous image but is absent in the current one. To avoid over-counting the aerosols on account of these moved particles, a threshold-based algorithm was used as part of the four peeling cycles to detect these white particles. Then, the nearest black particle that was similar in size and intensity to each detected white particle was marked to define a moved particle. The moved particle was then removed from the total particle density distribution, thereby avoiding double counting of the same particle.
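The white/black pairing step can be sketched as follows. This is a simplified illustration that matches on position and size only (the text also uses intensity); the distance and size-tolerance thresholds are invented for the example.

```python
import numpy as np

def pair_moved_particles(white, black, max_dist=50.0, size_tol=0.3):
    """Pair each detected 'white' particle (present before, absent now)
    with the nearest 'black' particle of similar size; each pair marks
    one moved particle. Entries are (x, y, size) tuples."""
    moved, used = [], set()
    for wx, wy, wsize in white:
        best, best_d = None, max_dist
        for i, (bx, by, bsize) in enumerate(black):
            if i in used:
                continue
            d = np.hypot(bx - wx, by - wy)
            if d < best_d and abs(bsize - wsize) <= size_tol * wsize:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            moved.append(best)
    return moved  # indices of black particles to exclude from the counts
```

The returned indices would then be dropped from the particle density distribution so the same physical particle is not counted twice.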
(122) Converting Particle Count to Particle Density
(123) For each sample (and the corresponding c-Air differential hologram), the particle detection and sizing algorithm, as previously described, provides the particle count, i.e., the number of particles in different size/diameter bins. To express the results in a more universal unit, the sampled particle count, N_i, was converted to a particle density, n_i (in counts/L), using the following equation:
(124)
$$n_i=\frac{N_i}{Q\cdot t}\cdot\frac{L_{total}}{L_{sensor}}$$
(125) where Q=13 L/min is the flow rate of air, and t=0.5 min is the typical sampling duration. In addition, L_total=15.5 mm is the total length of the impactor nozzle slit, and L_sensor=3.67 mm is the part of the slit being imaged, which equals the longer edge of the CMOS sensor active area. The conversion equation here assumes that the particle distribution is uniform along the sampler nozzle length, which is a valid assumption because the nozzle tapering is in the orthogonal direction, while the structure of the sampler along the slit length is spatially invariant.
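A minimal sketch of the count-to-density conversion using the device parameters quoted above; the formula is a direct reading of that description (scale the imaged count up by L_total/L_sensor, then divide by the sampled air volume Q·t).

```python
# Device parameters quoted in the text above.
Q = 13.0         # air flow rate (L/min)
t = 0.5          # typical sampling duration (min)
L_total = 15.5   # total impactor nozzle slit length (mm)
L_sensor = 3.67  # imaged portion of the slit (mm)

def particle_density(N_i):
    """Convert an imaged particle count N_i to a density n_i in counts/L,
    assuming a uniform particle distribution along the slit length."""
    return N_i * (L_total / L_sensor) / (Q * t)
```

For example, 100 particles imaged in one 30-second run correspond to roughly 65 counts/L of sampled air.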
(126) The c-Air system combines a lens-free microscope device with machine learning to provide a portable and cost-effective platform for PM imaging, sizing, and quantification. The platform uses a field-portable device weighing approximately 590 grams, a Smartphone app for device control and display of results, and a remote server (or other computing device 52) for the automated processing of digital holographic microscope images for PM measurements based on a custom-developed machine learning algorithm. The performance of the device was validated by measuring air quality at various indoor and outdoor locations, including an EPA-regulated air sampling station, where a comparison of c-Air results with those of an EPA-approved BAM device showed a close correlation. The c-Air platform was also used for spatio-temporal mapping of air quality near LAX, which showed the PM concentration varying throughout the day in accordance with the total number of flights at LAX. The strength of this correlation, as well as the daily average PM, exponentially decreased as a function of the increasing distance from LAX. The c-Air platform, with its microscopic imaging and machine learning interface, has a wide range of applications in air quality regulation and improvement.
(127) While embodiments of the present invention have been shown and described, various modifications may be made without departing from the scope of the present invention. The invention, therefore, should not be limited, except to the following claims, and their equivalents.