Apparatus, optical system, and method for digital holographic microscopy
10365606 · 2019-07-30
Inventors
- Thanh Nguyen (Silver Spring, MD, US)
- George Nehmetallah (Washington, DC, US)
- Vy Bui (Silver Spring, MD, US)
CPC classification
G02B21/365
PHYSICS
G03H1/0866
PHYSICS
G03H2226/02
PHYSICS
G03H2001/005
PHYSICS
G02B21/18
PHYSICS
G03H2001/0456
PHYSICS
G03H1/0443
PHYSICS
International classification
G03H1/00
PHYSICS
G03H1/08
PHYSICS
G02B21/36
PHYSICS
G02B21/18
PHYSICS
Abstract
A digital holography microscope, a method, and a system are provided. The digital holography microscope comprises two microscope objectives configured in a bi-telecentric configuration; a sample holder configured to receive a sample; a charge-coupled device configured to capture one or more images; a display; and a processor configured to retrieve a Convolutional Neural Network (CNN) model associated with a type of the sample, mitigate aberrations in the one or more images using at least the CNN model having as input an unwrapped phase associated with each of the one or more images, and output the mitigated one or more images via the display.
Claims
1. A digital holography microscope comprising: two microscope objectives configured in a bi-telecentric configuration; a sample holder configured to receive a sample; a charge-coupled device configured to capture one or more images; a display; and a processor configured to retrieve a Convolutional Neural Network (CNN) model associated with a type of the sample, mitigate aberrations in the one or more images using at least the CNN model having as input an unwrapped phase associated with each of the one or more images, and output the mitigated one or more images via the display.
2. The digital holography microscope of claim 1, wherein an output of the CNN model is a background image.
3. The digital holography microscope of claim 1, wherein the mitigating step further includes: obtaining the Fourier transform of each of the one or more images; determining a phase of the one or more images in the Fourier domain; unwrapping the phase of each of the one or more images; inputting the unwrapped phase to the CNN model; combining the unwrapped phase and the output of the CNN model to obtain background phase information, the output of the CNN model being a background image; determining a conjugated phase aberration based on the background phase information; compensating the conjugated phase aberration; and determining an aberration-free image based on at least the compensated conjugated phase aberration.
4. The digital holography microscope of claim 3, wherein compensating the phase includes: multiplying in a spatial domain a first term associated with the conjugated phase aberration with an inverse Fourier transform of a first order spectrum associated with the image.
5. The digital holography microscope of claim 4, wherein the processor is further configured to: determine the conjugated phase aberration using Zernike polynomial fitting.
6. The digital holography microscope of claim 5, wherein the determining the aberration free image further includes applying an angular spectrum reconstruction technique on a compensated image associated with the compensated phase to obtain an aberration-free reconstructed image.
7. The digital holography microscope of claim 3, wherein the processor is further configured to: unwrap the aberration-free reconstructed image.
8. The digital holography microscope of claim 1, wherein the CNN model includes a ground truth and training data associated with the type of the sample.
9. A method for image acquisition, the method comprising: depositing a sample in a sample holder of a digital holography microscope having two microscope objectives in a bi-telecentric configuration; capturing one or more images using a charge-coupled device of the digital holography microscope; retrieving, using processing circuitry, a Convolutional Neural Network (CNN) model associated with a type of the sample; mitigating, using the processing circuitry, aberrations in the one or more images using at least the CNN model having as input an unwrapped phase associated with each of the one or more images; and outputting the mitigated one or more images via a display of the digital holography microscope.
10. The method of claim 9, wherein an output of the CNN model is a background image.
11. The method of claim 9, wherein the mitigating step further includes: obtaining the Fourier transform of each of the one or more images; determining a phase of the one or more images in the Fourier domain; unwrapping the phase of each of the one or more images; inputting the unwrapped phase to the CNN model; combining the unwrapped phase and the output of the CNN model to obtain background phase information, the output of the CNN model being a background image; determining a conjugated phase aberration based on the background phase information; compensating the conjugated phase aberration; and determining an aberration-free image based on at least the compensated conjugated phase aberration.
12. The method of claim 11, wherein compensating the phase includes: multiplying in a spatial domain a first term associated with the conjugated phase aberration with an inverse Fourier transform of a first order spectrum associated with the image.
13. The method of claim 12, further comprising: determining the conjugated phase aberration using Zernike polynomial fitting.
14. The method of claim 11, wherein the determining the aberration free image further includes applying an angular spectrum reconstruction technique on a compensated image associated with the compensated phase to obtain an aberration-free reconstructed image.
15. The method of claim 14, further comprising: unwrapping the aberration-free reconstructed image.
16. The method of claim 9, wherein the CNN model includes a ground truth and training data associated with the type of the sample.
17. A system comprising: a digital holography microscope including: two microscope objectives in a bi-telecentric configuration, a charge-coupled device configured to capture one or more images of a sample, and a display; and a processor configured to retrieve a Convolutional Neural Network (CNN) model associated with a type of the sample, mitigate aberrations in the one or more images using at least the CNN model having as input an unwrapped phase associated with each of the one or more images, and output the mitigated one or more images via the display.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
DETAILED DESCRIPTION
(20) Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts throughout several views, the following description relates to an optical system and associated methodology for digital microscopy.
(22) System 100 has an afocal configuration, where the back focal plane of the MO coincides with the front focal plane of the tube lens (of focal lengths f_o and f_TL, respectively), with the object placed at the front focal plane of the MO, resulting in the cancellation of the bulk of the spherical phase curvature normally present in traditional DHM systems. The optical beam from a laser 102 travels through a neutral density filter 104. In one example, the laser 102 is a HeNe laser. In other implementations, the laser may be a multiwavelength laser. Then, the optical beam travels through a spatial filter 106 (e.g., a microscope objective having a 10× magnification) and a periscope system 108 (e.g., a pinhole). Then, the beam is collimated with a collimating lens 110 to produce a plane wave. In one implementation, the collimated beam may be passed through a polarizer 112. The collimated beam is split into a reference beam and an object beam using a beam splitter 114; the object beam is focused on the biological sample using the afocal configuration. The two beams, which are tilted by a small angle (<1°) from each other, are recombined using a second beam splitter 132 and interfere with each other on a charge-coupled device (CCD) 134 to generate an off-axis hologram. The magnification of the BT-DHM system 100 is M = f_TL/f_o. The direction of the object beam may be altered using a mirror 116. Then, the beam may be passed through a first tube lens 118 and a first microscope objective 120 to focus the beam on a sample held in a sample holder 124. Then, the beam is passed through a second microscope objective 122 and a second tube lens 126. The reference beam may be passed through a neutral density filter 128 and reflected off a mirror 130 to direct it to the second beam splitter 132.
(23) The numerical reconstruction algorithms used in constructing digital holograms are the discrete Fresnel transform, the convolution approach, and the reconstruction by angular spectrum as described in G. Nehmetallah, and P. P. Banerjee, Applications of digital and analog holography in 3D imaging, Adv. Opt. and Photon, 4(4), 472-553 (2012) incorporated herein by reference in its entirety.
$$H(f_x,f_y)=\mathcal{F}[h(x,y)]=\iint_{-\infty}^{\infty} h(x,y)\exp\{-2\pi i(xf_x+yf_y)\}\,dx\,dy\tag{1}$$
$$U(f_x,f_y)=H(f_x,f_y)\exp(2\pi i f_z d)\tag{2}$$
$$u(\xi,\eta)=\mathcal{F}^{-1}[U(f_x,f_y)]=\iint_{-\infty}^{\infty} U(f_x,f_y)\exp\{2\pi i(\xi f_x+\eta f_y)\}\,df_x\,df_y\tag{3}$$
where d is the distance between the image plane and the CCD, h(x,y) is the hologram, u(ξ,η) is the reconstructed image, $\mathcal{F}$ is the Fourier transform operator, λ is the wavelength, and $f_x$, $f_y$, and $f_z=\sqrt{1/\lambda^2-f_x^2-f_y^2}$ are the spatial frequencies. The numerical reconstruction algorithms may be implemented by the processor 136.
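The three-step angular spectrum reconstruction of Eqs. (1)-(3) can be sketched in NumPy. This is an illustrative implementation, not the patent's code; the function name and parameter choices are assumptions made here for clarity:

```python
import numpy as np

def angular_spectrum_reconstruct(hologram, wavelength, dx, d):
    """Reconstruct a hologram at distance d via the angular spectrum method:
    FFT of the hologram (Eq. 1), multiplication by the propagation kernel
    exp(2*pi*i*f_z*d) (Eq. 2), and inverse FFT (Eq. 3)."""
    N, M = hologram.shape
    fx = np.fft.fftfreq(M, d=dx)          # spatial frequencies f_x
    fy = np.fft.fftfreq(N, d=dx)          # spatial frequencies f_y
    FX, FY = np.meshgrid(fx, fy)
    # f_z = sqrt(1/lambda^2 - f_x^2 - f_y^2); evanescent components are masked
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    fz = np.sqrt(np.maximum(arg, 0.0))
    H = np.fft.fft2(hologram)                          # Eq. (1)
    U = H * np.exp(2j * np.pi * fz * d) * (arg > 0)    # Eq. (2)
    return np.fft.ifft2(U)                             # Eq. (3)
```

For a uniform plane wave, propagation only adds a constant phase, so the reconstructed magnitude stays flat; that makes a convenient sanity check of the kernel.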
(24) In DHM, a MO is introduced to increase the spatial resolution, which may be calculated according to Eq. (4). Due to the magnification M introduced by the MO, the pixel sizes in the image plane, $\Delta\xi_{mag}$ and $\Delta\eta_{mag}$, scale according to:
$$\Delta\xi_{mag}=\frac{\lambda d}{N\,\Delta x\,M},\qquad \Delta\eta_{mag}=\frac{\lambda d}{N\,\Delta y\,M},\tag{4}$$
where N is the number of pixels in one dimension, Δx and Δy denote the sampling intervals (pixel size), Δx = Δy = L/N, and L×L are the dimensions of the CCD sensor 134. The dimensions of the CCD 134 may be stored in a memory associated with the processor 136. The sampling intervals may be predefined and stored in the memory of the processor 136. In other implementations, the sampling intervals may be set by a user or determined by the processor 136 based on past results.
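As a quick illustration of Eq. (4), the reconstruction pixel size can be computed for an assumed set of system parameters (the numeric values below are examples chosen here, not values from the patent):

```python
wavelength = 633e-9   # HeNe wavelength (m), assumed
d = 0.05              # image plane to CCD distance (m), assumed
L = 8.9e-3            # CCD sensor side length (m), assumed
N = 1024              # pixels per dimension, assumed
M = 20.0              # system magnification M = f_TL / f_o, assumed

dx = L / N                                  # CCD sampling interval, dx = L/N
d_xi_mag = wavelength * d / (N * dx * M)    # Eq. (4): magnified pixel size
```

Because N·Δx = L, the magnified pixel size reduces to λd/(L·M): for these example values it is on the order of 0.18 µm, illustrating how the MO magnification scales the reconstruction resolution.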
(25) This is intuitively understood by realizing that the holographic recording is a recording of the geometrically magnified virtual image located at distance d. Thus, the pixel resolution is automatically scaled accordingly. For a transmissive phase object on/between transmissive surface/s, the phase change (optical thickness T) due to the change in index n can be calculated as:
(26)
where the phase due to the biological sample is expressed as:
(27)
where R is the radius of curvature of the spherical phase curvature of the MO and φ(ξ,η) is the total phase of the object beam without using the bi-telecentric configuration.
(28) Conventional image reconstruction using Eq. (3) contains phase aberrations which can be mitigated with the image reconstruction method described herein.
(30) Training the CNN model 220 requires a training dataset of sub-sampled phase aberration images and their corresponding ground truth (label) images. Details of the data preparation steps for training the CNN model 220 and the implementation of the CNN model are described further below.
(31) The cancer cells from the highly invasive MDA-MB-231 breast cancer cell line are seeded on type I collagen hydrogels, polymerized at 4 mg/ml and a temperature of 37 °C in 35 mm glass-bottomed petri dishes. The cells on collagen may be incubated for 24 hours in DMEM medium containing 10% fetal bovine serum, under standard tissue culture conditions of 37 °C, 5% CO2, and 100% humidity. Then, cells are taken from the incubator and imaged with the bi-telecentric DHM system 100 described above to produce phase reconstruction maps.
(33) Forty holograms containing cancer cells were also reconstructed using the PCA method. For the training stage of the deep-learning CNN, 306 single cells were manually segmented from those forty reconstructed holograms to obtain real phase distribution images and corresponding ground truth binary images (0 for background, 1 for cells). Then, each of the cells' phase distribution images, binary masks, and subsampled phase aberration images was augmented by flipping (horizontally and vertically) and rotating (90°, 180°, and 270°). Therefore, 1836 single-cell phase distribution images, 1836 corresponding single-cell binary masks, and 1260 sub-sampled background phase aberration images were obtained. In order to create the training dataset, 4-10 real phase maps of cells were randomly added at random positions into each of the 1260 phase aberration images that contain no samples. It should be noted that the total phase is the integral of the optical path length (OPL). These phase maps were preprocessed with a moving average filter [55] to smooth out the edges due to the manual segmentation. Similarly, and corresponding to the same 4-10 random positions of the real phase maps, the ground truth binary masks were added to a zero background phase map to create the labeled dataset. Notice that different types of cells can produce different shapes. In one implementation, a future objective would be to quantitatively assess the growth and migratory behavior of invasive cancer cells, and hence cells from the invasive MDA-MB-231 breast cancer line were used for this purpose.
(34) Note that, for each type of cell, manual segmentation is only performed once; hence, the manual segmentation is only performed in the data preparation stage. Usually, deep learning CNN techniques require a certain amount of training data to produce good results. This additional overhead to collect and prepare the training data can be expensive. However, by augmenting 210 phase images (without a sample present) and 306 cell images through flipping and rotation, a training dataset of 1260 phase aberration images and their corresponding ground truth images is created. Eighty percent of these images were randomly selected for training, and the rest were used for validation.
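The six-fold augmentation described above (horizontal flip, vertical flip, and rotations by 90°, 180°, and 270°, plus the original, which turns 306 cell images into 1836 and 210 background images into 1260) can be sketched as follows. The helper name is illustrative, not from the patent:

```python
import numpy as np

def augment(img):
    """Six views of one image: original, horizontal flip, vertical flip,
    and rotations by 90, 180, and 270 degrees."""
    return [img,
            np.fliplr(img),    # horizontal flip
            np.flipud(img),    # vertical flip
            np.rot90(img, 1),  # 90 degrees
            np.rot90(img, 2),  # 180 degrees
            np.rot90(img, 3)]  # 270 degrees
```

Applying the same six transforms to a phase image and to its binary mask keeps the pixel-wise correspondence between training inputs and labels.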
(37) In one example, process 500 is implemented to obtain random background phase aberrations when no sample is present in the system 100. The first microscope objective 120 and the second microscope objective 122 are both shifted up, shifted down, and rotated to create different phase aberrations. Two hundred and ten holograms without a sample present are captured and reconstructed using the angular spectrum method. The background sub-sampled (256×256) phase aberrations are reconstructed after applying a band-pass filter around the +1 order (the virtual image location), followed by an inverse Fourier transform and phase unwrapping.
(40) The implementation of the deep learning CNN for automatic background detection in digital holographic microscopy images is described next. The deep learning architecture contains multiple convolutional neural network layers, including max pooling layers and unpooling layers, with rectified linear unit (ReLU) activation functions and batch normalization (BN), similar to the architecture described in O. Ronneberger, P. Fischer, and T. Brox, U-net: Convolutional networks for biomedical image segmentation, International Conference on Medical Image Computing and Computer Assisted Intervention, Springer International Publishing, 18 May (2015), incorporated herein by reference in its entirety. Let x^(i), x_l^(i), and y^(i) denote the input data volume (corresponding to the initial group of phase aberration images), the currently observed data volume at a certain stage (layer l) of the CNN model, and the output data volume of the CNN model, respectively. The input and output data volumes, along with the ground truth images, have a size of (batchSize × imageWidth × imageHeight × channel), where batchSize is the number of images in each training session. In the model described herein, the input volume has a size of (8 × 128 × 128 × 1) (1 channel indicates a grayscale image), whereas the output volume has a size of (8 × 128 × 128 × 2) (2 channels for the 2 classes obtained from the one-hot encoding of the ground truth images). An output neuron in the U-net model is calculated through convolution operations (i.e., in a convolution layer) with the preceding neurons connected to it, such that these input neurons are situated in a local spatial region of the input. Specifically, each output neuron in a neuron layer is computed as the dot product between its weights and a connected small region of the input volume, with the addition of the neuron bias:
$$x_{l}^{(i)}=\sum_{j=0}^{M}W_{l}^{(j)}x_{l-1}^{(j)}+B_{l}^{(j)},\quad i=1,2,\ldots,N,\tag{6}$$
where W is the weight, B is the bias, j is the index in the local spatial region M which is the total number of elements in that region, N is the total number of neurons in each layer which can be changed depending on the architecture, and l is the layer number.
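Eq. (6) describes a standard convolution layer: each output neuron is a dot product between the kernel weights and a local region of the input, plus a bias. A minimal single-channel NumPy sketch (illustrative only, with 'valid' boundaries assumed, not the patent's multi-channel implementation) is:

```python
import numpy as np

def conv_layer(x, W, B):
    """Forward pass of one convolution layer in the spirit of Eq. (6):
    slide the kernel W over the 2D input x, take the dot product with each
    local region, and add the bias B ('valid' convolution, one channel)."""
    kh, kw = W.shape
    H, Wd = x.shape
    out = np.empty((H - kh + 1, Wd - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            # dot product of weights with the local spatial region, plus bias
            out[r, c] = np.sum(W * x[r:r + kh, c:c + kw]) + B
    return out
```

In practice a framework such as TensorFlow/Keras performs this operation over many channels and kernels at once; the loop form above only makes the per-neuron arithmetic of Eq. (6) explicit.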
(41) The U-net model contains two parts: Down-sampling (Indicated by 802 in
(42) $$f\left(x^{(i)}\right)=\max\left(0,x^{(i)}\right),\quad i=1,2,\ldots,N,\tag{7}$$
where x^(i) is the i-th pixel in the volume data under training and N is the total number of pixels in the volume data: N = batchSize × layerwidth × layerheight × channel, where layerwidth and layerheight are the width and height of the image at the l-th layer, and channel is the number of weights W in the l-th layer. Other activation functions may be used, as would be understood by one of ordinary skill in the art.
(43) On the other hand, batch normalization allows the system to: (a) have much higher learning rates, (b) be less sensitive to the initialization conditions, and (c) reduce the internal covariate shift. BN can be implemented by normalizing the data volume to make it zero mean and unit variance as defined in Eq. (8):
(44) $$\hat{x}^{(i)}=\gamma\left(\frac{x^{(i)}-\mu}{\sqrt{\sigma^{2}+\epsilon}}\right)+\beta,\tag{8}$$
where
(45) $$\mu=\frac{1}{N}\sum_{i=1}^{N}x^{(i)},\qquad \sigma^{2}=\frac{1}{N}\sum_{i=1}^{N}\left(x^{(i)}-\mu\right)^{2},$$
ε is a regularization parameter (to avoid the case of uniform images), γ is a scaling factor, β is the shifting factor (γ = 1, β = 0), and x̂^(i) is the output of the BN stage.
(46) The down-sampling and up-sampling may be done using max pooling and unpooling, respectively. Max pooling is a form of non-linear down-sampling that eliminates non-maximal values and helps reduce the computational complexity of the upper layers by reducing the dimensionality of the intermediate layers. Max pooling may also be done in part to avoid overfitting. The unpooling operation is a non-linear form of up-sampling a previous layer by nearest-neighbor interpolation of the features obtained by max pooling, gradually restoring the shape of the samples. The deep learning CNN model described herein has a symmetrical architecture, with max pooling and unpooling filters both using a 2×2 kernel size.
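The 2×2 max pooling and nearest-neighbor unpooling operations can be sketched in NumPy as follows. These helpers are illustrative (they assume even image dimensions and a single channel), not the patent's implementation:

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling: keep the maximum of each non-overlapping 2x2 block,
    halving both spatial dimensions."""
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def unpool_2x2(x):
    """2x2 unpooling by nearest-neighbor interpolation: repeat each value
    into a 2x2 block, doubling both spatial dimensions."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
```

Unpooling a pooled map restores the original shape but not the discarded non-maximal values, which is why U-net-style architectures also concatenate features from the down-sampling path.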
(47) In one implementation, the Softmax function, defined in Eq. (9), is used as the classifier in the last layer to calculate the prediction probability of background/cell potential as:
(48) $$S\left(y^{(i)}\right)=\frac{e^{y^{(i)}}}{\sum_{j=1}^{N}e^{y^{(j)}}},\tag{9}$$
where N (8 × 128 × 128 × 2) is the number of pixels (neurons) to be classified in the segmentation process.
(49) An error is a discrepancy measure between the output produced by the system and the correct output for an input pattern. A loss value is the average of errors between the predicted probability S(y.sup.(i)) and the corresponding ground truth pixel L.sup.(i). The loss function is measured by using the cross entropy function which is defined as:
(50) $$E=-\frac{1}{N}\sum_{i=1}^{N}L^{(i)}\log\left(S\left(y^{(i)}\right)\right),\tag{10}$$
(51) The training is performed by the processor 136 by iterating the process of feeding the phase aberration images in batches through the model and calculating the error, using an optimizer to minimize the error. The Stochastic Gradient Descent (SGD) optimizer is employed in the back-propagation algorithm. Instead of evaluating the cost and the gradients over the full training set, the processor 136 evaluates these values using fewer training samples. The learning rate was initially set to 1e-2, the decay to 1e-6, and the momentum to 0.96. Other parameters used in one example are: a batch size of 8; an image size of 128×128 instead of 256×256 to avoid memory overflow (images may be resized at the end of the process); a depth of 32 channels at the first layer, with 512 channels at the deepest layer; and training for 360 epochs. The model described herein was implemented in Python using the TensorFlow/Keras framework, and the implementation was GPU-accelerated with an NVIDIA GeForce 970M.
(53) To evaluate the performance of the deep neural network and ZPF technique described herein, 30 holograms recorded by the system 100 and reconstructed using the process 200 shown in
(56) In order to measure the conjugated background phase aberration, the pixels from the raw phase image are selected by the processor 136 corresponding to the background pixels' locations obtained from the binary image (where BC^(i) = 1) and then converted to a 1D vector to perform the polynomial fitting. The polynomial fitting is implemented using a 5th-order polynomial with 21 coefficients as:
$$S(x,y)=\sum_{i=0}^{5}\sum_{j=0}^{5}p_{ij}\,x^{i}y^{j},\quad i+j\le 5,\tag{12}$$
where p_ij are the coefficients, i and j are the polynomial orders, and x and y represent pixel coordinates. Let the arrays P = [p_00 p_10 . . . p_ij . . . p_05] and A = [a_0 a_1 . . . a_10 . . . a_20] hold the polynomial model's coefficients and the Zernike model's coefficients, respectively.
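The least-squares fit of the 21-coefficient polynomial of Eq. (12) to the background pixels can be sketched as follows. This NumPy version is illustrative; the function name and interface are assumptions made here, not the patent's implementation:

```python
import numpy as np

def fit_background_poly(x, y, phase, order=5):
    """Least-squares fit of Eq. (12): S(x,y) = sum of p_ij * x^i * y^j with
    i + j <= order, over 1D vectors of background pixel coordinates (x, y)
    and their unwrapped phase values. Returns the coefficient vector
    (21 entries for order 5) and the (i, j) order of each term."""
    terms = [(i, j) for i in range(order + 1)
                    for j in range(order + 1) if i + j <= order]
    # Design matrix: one column per monomial x^i * y^j
    A = np.column_stack([x**i * y**j for i, j in terms])
    p, *_ = np.linalg.lstsq(A, phase, rcond=None)
    return p, terms
```

A fit restricted to background pixels keeps the cell phase from biasing the aberration estimate, which is why the CNN's background mask is applied first.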
(57) The 21 coefficients of the P polynomial are used to calculate the coefficients of the Zernike polynomial as shown in the following equation:
$$A=z_{i,j,p}^{-1}\cdot P.\tag{13}$$
(58) The z.sub.i,j,p matrix consists of coefficients corresponding to each order of the Zernike polynomials:
(59)
(60) The Zernike polynomial model is used to construct the conjugated phase, as:
$$P_{\mathrm{conjugated}}=\exp\left(-j\sum_{k=0}^{20}a_{k}Z_{k}\right),\tag{15}$$
where the Z_k are the Zernike polynomials ordered according to the Zemax classification.
(61) After obtaining the background area from the CNN, the conjugated phase aberration may be calculated using ZPF and then multiplied with the initial phase. To obtain the full-size aberration-compensated reconstructed image, zero padding and spectrum centering are performed on the Fourier transform of the aberration-compensated hologram. Then, the angular spectrum reconstruction technique is performed to obtain the phase height distribution of the full-size, aberration-free reconstructed hologram, as shown in
(62) Schematic 1102 and schematic 1104 of
(63) $$DC=\frac{2\left|A\cap\hat{A}\right|}{\left|A\right|+\left|\hat{A}\right|},$$
where |·| denotes the area, and A and Â are the segmented areas of a test image based on the deep learning CNN and on manual segmentation, respectively.
(64) The background's DC (0.9582-0.9898) is much higher than the cells' DC (0.7491-0.8764) because of the larger common area in the background. This lessens the effect of true-negative and false-positive scenarios in ZPF.
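The Dice coefficient used in this comparison can be sketched as a small NumPy helper (illustrative, not the patent's code):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice coefficient between two binary masks:
    DC = 2 * |A intersect B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

Identical masks give DC = 1, and since the background occupies most of each image, even modest segmentation errors on cells barely reduce the background's DC, consistent with the ranges reported above.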
(65) Schematic 1302 of
(67) However, the CNN+ZPF technique takes advantage of the background area; the majority of the background information was fitted with higher orders (up to the 5th order). Hence, the conjugated phase aberration looks more distorted because of those higher orders.
(69) Another example of testing data is shown in
(70) Due to the different temperatures during collagen polymerization (37 °C versus 4 °C), one image in the new dataset has collagen fiber features not apparent in the CNN model's training image set. However, the background region is correctly detected even with the introduction of the new features. Thus, the CNN+ZPF technique has higher accuracy in measuring the phase aberration (1.68 rad of flatness using PCA versus 0.92 rad of flatness using CNN+ZPF), as shown in trace 1510. Schematic 1502 shows the phase aberration. Schematic 1504 shows the CNN's binary mask whose background is fed into ZPF. Schematic 1506 shows the conjugated residual phase using CNN+ZPF. Schematic 1508 shows fibers after aberration compensation (the fibers are indicated by arrows). Schematic 1510 shows the phase profile along the dashed line of schematic 1508. The bars denote the flatness of the region of interest.
(71) To further validate the system and methodologies described herein, a dataset with more cancer cells than the training images in the CNN model was used (i.e., the training dataset contains 4-10 cells in a single-phase image).
(72) The digital holographic microscopy system and associated methodology described herein automatically compensate for the phase aberration using a combination of a deep learning Convolutional Neural Network with the Zernike polynomial fitting technique. The technique benefits from PCA's ability to obtain the training data for the deep learning CNN model. The trained CNN model can be used as an automatic and in situ process for background detection and full phase aberration compensation. The CNN model described herein detects the background with high precision. While many image segmentation techniques are not robust when applied to DHM images due to the overwhelming phase aberration, the CNN segments the background spatially based on features, regardless of the number of cells and their unknown positions. Thus, the trained CNN technique in conjunction with the ZPF technique is a very effective tool that can be employed in real time for autonomous phase aberration compensation in a digital holographic microscopy system.
(73) In one implementation, a fully automatic method to obtain aberration free quantitative phase imaging in Digital Holographic Microscopy (DHM) based on deep learning is provided. The method combines a supervised deep learning technique with Convolutional Neural Network (CNN) and Zernike polynomial fitting (ZPF). The deep learning CNN is implemented to perform automatic background region detection that allows for ZPF to compute the self-conjugated phase to compensate for most aberrations.
(74) In one implementation, the functions and processes of the processor 136 may be implemented by a computer 1726. Next, a hardware description of the computer 1726 according to exemplary embodiments is described with reference to
(75) Further, the claimed advancements may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with CPU 1700 and an operating system such as Microsoft Windows, UNIX, Oracle Solaris, LINUX, Apple macOS and other systems known to those skilled in the art.
(76) In order to achieve the computer 1726, the hardware elements may be realized by various circuitry elements known to those skilled in the art. For example, CPU 1700 may be a Xeon or Core processor from Intel Corporation of America or an Opteron processor from AMD of America, or may be other processor types that would be recognized by one of ordinary skill in the art. Alternatively, the CPU 1700 may be implemented on an FPGA, ASIC, PLD, or using discrete logic circuits, as one of ordinary skill in the art would recognize. Further, CPU 1700 may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above.
(77) The computer 1726 in
(78) The computer 1726 further includes a display controller 1708, such as a NVIDIA GeForce GTX or Quadro graphics adaptor from NVIDIA Corporation of America for interfacing with display 1710, such as a Hewlett Packard HPL2445w LCD monitor. A general purpose I/O interface 1712 interfaces with a keyboard and/or mouse 1714 as well as an optional touch screen panel 1716 on or separate from display 1710. General purpose I/O interface also connects to a variety of peripherals 1718 including printers and scanners, such as an OfficeJet or DeskJet from Hewlett Packard.
(79) The general purpose storage controller 1720 connects the storage medium disk 1704 with communication bus 1722, which may be an ISA, EISA, VESA, PCI, or similar, for interconnecting all of the components of the computer 1726. A description of the general features and functionality of the display 1710, keyboard and/or mouse 1714, as well as the display controller 1708, storage controller 1720, network controller 1706, and general purpose I/O interface 1712 is omitted herein for brevity as these features are known.
(80) The features of the present disclosure provide a multitude of improvements in the technical field of digital microscopy. In particular, the controller may remove aberrations from the collected samples. The methodology described herein could not be implemented by a human due to the sheer complexity of the data gathering and calculation, and includes a variety of novel features and elements that result in significantly more than an abstract idea. The methodologies described herein are more robust to inaccuracies. The method described herein may be used for early cancer detection. Thus, the implementations described herein improve the functionality of a digital microscope by mitigating aberrations in the acquired images. Thus, the system and associated methodology described herein amount to significantly more than an abstract idea based on the improvements and advantages described herein.
(81) Obviously, numerous modifications and variations are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described herein.
(82) Thus, the foregoing discussion discloses and describes merely exemplary embodiments of the present invention. As will be understood by those skilled in the art, the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting of the scope of the invention, as well as other claims. The disclosure, including any readily discernible variants of the teachings herein, defines, in part, the scope of the foregoing claim terminology such that no inventive subject matter is dedicated to the public.
(83) The above disclosure also encompasses the embodiments listed below.
(84) A non-transitory computer readable medium storing computer-readable instructions therein which when executed by a computer cause the computer to perform a method for capturing an image using digital holography, the method comprising:
(85) depositing a sample in a sample holder of a digital holography microscope having two microscope objectives in a bi-telecentric configuration;
(86) capturing one or more images using a charge-coupled device of the digital holography microscope;
(87) retrieving a Convolutional Neural Network (CNN) model associated with a type of the sample;
(88) mitigating aberrations in the one or more images using at least the CNN model having as input an unwrapped phase associated with each of the one or more images; and
(89) outputting the mitigated one or more images via a display of the digital holography microscope.