Time-resolved imaging method with high spatial resolution

Abstract

A method for operating a point laser-scanning microscope includes: scanning a sample with a focused illumination laser beam; recording a plurality of images with detecting elements configurable to an intensity mode, in which the recorded images are intensity images g.sub.i,j(n) related to photons collected during an entire dwell time of the illumination beam on an individual position n, or to a time-resolved mode, in which the recorded images are time-resolved images g.sub.i,j.sup.t(n, t), the collected photons being discriminated based on their arrival times at the individual detecting elements; calculating a fingerprint image a by summing the plurality of intensity images g.sub.i,j(n) over all positions n; estimating shift matrices s.sup.x and s.sup.y from the intensity images g.sub.i,j(n); reconstructing at least one of a time-resolved object function f.sup.t and an intensity object function f; and visualizing at least one of a high-resolution time-resolved image f.sup.t˜ and a high-resolution intensity image f.sup.˜.

Claims

1. A method for operating a point laser-scanning microscope, the method comprising: scanning a sample with a focused illumination laser beam; recording, by an array of detecting elements optically conjugated with a focal point of the focused illumination laser beam, a plurality of images of the sample over a scan by the focused illumination laser beam, wherein each detecting element denoted by indexes (i,j) of said array of detecting elements generates a detection signal for each of different positions n of the focused illumination laser beam on the sample, from which with the scan of the sample a respective image of the sample is produced, and wherein said detecting elements are configurable to an intensity mode, in which the recorded images are a plurality of intensity images g.sub.i,j(n) related to photons collected during an entire dwell time of the focused illumination laser beam on an individual position n, or to a time-resolved mode, in which the recorded images are a plurality of time-resolved images g.sub.i,j.sup.t(n, t), in which collected photons are discriminated based on their times of arrival to individual detecting elements; if the detecting elements are configured to the time-resolved mode, calculating the plurality of intensity images g.sub.i,j(n) by integrating the plurality of time-resolved images g.sub.i,j.sup.t(n, t) over time; independently of whether the detecting elements are configured to the intensity mode or to the time-resolved mode, calculating a fingerprint image a by summing the plurality of intensity images g.sub.i,j(n) over all positions n of the focused illumination laser beam on the sample, said fingerprint image depending simultaneously on an illumination point-spread function, hereinafter illumination PSF, h.sup.exc, and a detection point-spread function, hereinafter detection PSF, h.sup.det; independently of whether the detecting elements are configured to the intensity mode or to the time-resolved mode, estimating shift matrices s.sup.x and s.sup.y from the plurality of intensity images g.sub.i,j(n); reconstructing at least one of: i) a time-resolved object function f.sup.t, based on the plurality of time-resolved images g.sub.i,j.sup.t(n, t), the fingerprint image a and the shift matrices s.sup.x and s.sup.y, and ii) an intensity object function f, based on the plurality of intensity images g.sub.i,j(n), the fingerprint image a and the shift matrices s.sup.x and s.sup.y, or by integrating the time-resolved object function f.sup.t over time; and visualizing at least one of a high-resolution time-resolved image f.sup.t˜ and a high-resolution intensity image f.sup.˜, based on said time-resolved object function and intensity object function.

2. The method of claim 1, wherein reconstructing the time-resolved object function f.sup.t comprises: estimating the illumination PSF h.sup.exc and the detection PSF h.sup.det based on the fingerprint image a, and estimating the time-resolved object function f.sup.t by multi-image deconvolution.

3. The method of claim 1, wherein reconstructing the time-resolved object function f.sup.t comprises: calculating the time-resolved object function f.sup.t by pixel reassignment.

4. The method of claim 1, wherein reconstructing the intensity object function f comprises: estimating the illumination PSF h.sup.exc and the detection PSF h.sup.det based on the fingerprint image a, and estimating the intensity object function f by multi-image deconvolution.

5. The method of claim 1, wherein reconstructing the intensity object function f comprises calculating the intensity object function f by pixel reassignment.

6. The method of claim 1, further comprising: aligning said array of detecting elements with an optical axis of the point laser-scanning microscope based on the calculated fingerprint image a.

7. The method of claim 1, further comprising: calculating microscope magnification based on the estimated shift matrices s.sup.x and s.sup.y.

8. A point laser-scanning microscope comprising: a focused illumination laser beam configured to scan a sample; and an array of detecting elements optically conjugated with the focal point of the focused illumination laser beam, said detecting elements being configured to record a plurality of images of the sample over a scan by the focused illumination laser beam, wherein the point laser-scanning microscope is configured to carry out the method of claim 1.

9. The point laser-scanning microscope of claim 8, wherein each detecting element is a single-point detector and has a time resolution of the order of 100 ps.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) Features and advantages of the proposed method and microscope system will be presented in the following detailed description, which refers to the accompanying drawings, given only as a non-limiting example, wherein:


(3) FIG. 1 shows a functional representation of a microscope according to the invention;

(4) FIG. 2 shows a schematic representation of a detector of the microscope in FIG. 1 (left) and a fingerprint image (right);

(5) FIG. 3 shows the performance of a prototype according to the invention: time resolution (jitter) of a single detector element (left), time resolution as a function of the location where the photon is collected (center) and normalized efficiency of photon detection within the active area of the detector (right); and

(6) FIG. 4 shows a reconstruction example of a high-resolution intensity image. Top, left to right: a series of intensity images g obtained with a 5-by-5 array of SPAD detectors (scale bar: 1 μm), showing the cytoskeleton of a fixed cell; a fingerprint image a calculated from g; drift matrices d.sup.x and d.sup.y estimated from g; PSF h.sub.i,j calculated for each element of the SPAD array (scale bar: 100 nm), obtained from the excitation PSF h.sup.exc, the detection PSF h.sup.det, and the estimated shift matrices s.sup.x and s.sup.y. Bottom, left to right: intensity image recorded from the center pixel g.sub.ic,jc (scale bar: 1 μm); conventional low-resolution CLSM intensity image (scale bar: 1 μm); intensity image reconstructed by multi-image deconvolution (scale bar: 1 μm).

DETAILED DESCRIPTION

(7) With reference to FIG. 1, a laser scanning microscope device configured to obtain the set of data from which the time-resolved image and/or the high-resolution intensity image is extracted is now described. In summary, the apparatus comprises a main unit 10, conventional per se, and a detection module 20. A single or multi-photon excitation beam EB is generated by a laser source (not illustrated) and reflected towards a lens 11 by a dichroic mirror 13. The excitation beam is focused on an object to be analyzed (not shown) by means of the lens 11. A spatial light modulator, made as a micro-mirror device, is indicated at 14, while an acousto-optical modulator (AOM) or electro-optical (EOM) modulator is indicated at 15. A device to scan the object to be analyzed, made, for example, as a galvanometric mirror, is indicated at 16. The fluorescence signal FS emitted by the object is collected by the same lens 11 and transmitted through the dichroic mirror 13 and through a confocal pinhole 17 toward the detection module 20. The pinhole 17 may be fully opened (diameter>>1 Airy unit) when necessary.

(8) The detection module 20 essentially comprises a zoom lens system 21 and an array of detector elements 23, together with the control electronics 25 thereof. Each of the detector elements of the array 23 may be a SPAD detector or a detector with similar temporal performance.

(9) The zoom lens 21 optically conjugates the pinhole plane 17 (which is positioned in an image plane of the microscope system) with the plane where the detector array 23 is positioned. The function of the zoom lens 21 is to control the physical size of the detection PSF projected onto the image plane (or the conjugate image plane) where the detector array 23 is positioned. In other words, the zoom lens 21 controls the physical size of the fluorescence diffraction spot generated by a single point fluorescent source positioned in the object plane, whose image is formed on the detector array 23. Essentially, the zoom lens 21 controls the system's magnification in the image plane where the detector array 23 is positioned. Such magnification is chosen in such a way that (i) the size of each individual detector element, projected onto the detection plane, is much smaller than one Airy unit, and (ii) most of the fluorescent light is collected by the detector array 23. These two requirements must be balanced when using a detector array with a relatively small number of detector elements.

(10) The introduction of the zoom lens 21, instead of a much simpler fixed magnification telescope, is preferable to preserve the generality of the system, e.g. to work with different wavelengths and different lenses.

(11) The detector array 23 is a two-dimensional array of M.sub.x×M.sub.y detector elements, each of which is independent (fully parallel system), has single-photon sensitivity and has a time resolution (low time jitter) sufficient to measure the average fluorescence lifetime of the fluorophores most commonly used in fluorescence microscopy. The operating parameters of the detector array (excess bias voltage, hold-off time and the number of active elements) may be adjusted before each measurement.

(12) Each element of the detector array 23 has a square active area (other shapes may be used), surrounded by an inactive frame (FIG. 2, left). As in other pixel-based devices, the pixel pitch may be defined as the distance between the barycenters of two adjacent pixels (for square pixels, two pixels in the same row or in the same column). An important feature of the detector is the fill factor, which may be calculated as the ratio between the active area and the square of the pixel pitch. The overall photon detection efficiency (PDE) of the detector, i.e. the probability that a photon reaching the detector is recorded, depends directly on the fill factor. To further improve the PDE, an array of microlenses 24 is used to direct the photons towards the center of each detector element. Each element of the detector array signals the arrival of a photon with a TTL (transistor-transistor logic) signal on a dedicated digital channel. Three additional digital lines form a communication bus used for the initial configuration of the entire array at the start of the measurement.
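The fill-factor definition above can be illustrated numerically; the element dimensions below are hypothetical, not those of the described detector.

```python
def fill_factor(active_side_um: float, pixel_pitch_um: float) -> float:
    """Fill factor = active area / (pixel pitch)^2, for a square active area."""
    return (active_side_um ** 2) / (pixel_pitch_um ** 2)

# Hypothetical element geometry: 50 um active side on a 75 um pitch.
ff = fill_factor(50.0, 75.0)   # about 0.44, i.e. 44% of the pitch cell is active
```

A microlens array effectively raises the usable fraction toward 1 by funnelling photons from the inactive frame onto the active area.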

(13) The control electronics and data acquisition of the detection module 20 are implemented on a field-programmable gate array (FPGA) processor. This makes it possible either to integrate all the photons collected during the dwell time on the single point n (intensity mode) or, thanks to the on-board integration of time-to-digital converters (TDCs), to measure the arrival times relative to an external reference signal, e.g. the pulse of the excitation laser (time-resolved mode or TCSPC).

(14) It is important to note that the detector and data acquisition electronics operate in a completely asynchronous manner: when a photon is detected, it is counted or its arrival time is measured immediately, and the elements are independent of each other, without a limited frame rate or the inconveniences of sequential readout.

(15) Communication (synchronization) with the microscope control system (indicated at 18 in FIG. 1) is carried out by means of digital pixel/line/frame clock lines, supplied by the manufacturer.

(16) Before describing the method according to the invention, it is necessary to introduce image formation in the case of time-resolved imaging.

(17) Using a continuous formulation, the direct operator describing the image formation for the time-resolved case is the following
g.sub.i,j.sup.t(x,y,t)=ƒ.sup.t(x,y,t)*.sub.2Dh.sub.i,j(x,y)
where ƒ.sup.t is the object function that also includes the time information, i.e. the distribution/probability of emitting a fluorescent photon after a certain time from the excitation event. The convolution is applied only along the spatial dimensions.

(18) Step 1. Record the Time-Resolved Image Series/Array (or TCSPC) g.sup.t with the Architecture Described Above.

(19) For different positions of the focused excitation laser beam (single-photon or multi-photon) on the sample, the signals generated by the elements of the detector array 23 are read, and the series/array of time-resolved images g.sup.t is obtained
g.sup.t=g.sub.i,j.sup.t(n,t) with i=1, . . . , M.sub.x, j=1, . . . , M.sub.y, ic=┌M.sub.x/2┐, jc=┌M.sub.y/2┐, n=(n.sub.x, n.sub.y), n.sub.x=1, . . . , N.sub.x, n.sub.y=1, . . . , N.sub.y, t=1, . . . , N.sub.t.

(20) Essentially, each image g.sub.i,j.sup.t(n, t) is a three-dimensional array in which the temporal axis t represents the histogram of photon arrival times obtained with the TCSPC measurement, i.e., the number of photons collected in a spatial pixel (n.sub.x, n.sub.y) and in a certain time window (time bin) t after the excitation event.

(21) Step 2. Calculate the Series of Intensity Images g from g.sup.t

(22) Given the TCSPC image array g.sup.t collected with the equipment described above in TCSPC mode, the array is integrated along the time dimension and the intensity image array g is obtained

(23) g.sub.i,j(n)=Σ.sub.t g.sub.i,j.sup.t(n,t)
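Step 2 can be sketched in a few lines of NumPy; the array dimensions and the Poisson-distributed counts below are hypothetical stand-ins for recorded data.

```python
import numpy as np

# Hypothetical dimensions: 5x5 detector array, 32x32 scan positions, 16 time bins.
Mx, My, Nx, Ny, Nt = 5, 5, 32, 32, 16
rng = np.random.default_rng(0)

# g_t[i, j, nx, ny, t]: photon counts per detector element, scan position and time bin.
g_t = rng.poisson(2.0, size=(Mx, My, Nx, Ny, Nt))

# Step 2: integrate along the time axis to obtain the intensity images g_ij(n).
g = g_t.sum(axis=-1)
```

For photon-counting data this integration is exact: the intensity image is simply the total count per pixel, regardless of how the arrival times were binned.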

(24) Step 3. Calculate the “Fingerprint” Image a

(25) Given the intensity image array g, the so-called “fingerprint” image a is calculated, from which the excitation PSF h.sup.exc and the detection PSF h.sup.det are estimated.

(26) The fingerprint image a is defined as follows. All the photons collected by each detector element during an entire measurement are integrated, producing the fingerprint image a. In practice, M.sub.x×M.sub.y images are obtained during a single experiment, and the fingerprint image a is produced by summing, for each image, all the intensity values:

(27) a(i,j)=Σ.sub.n g.sub.i,j(n)
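A minimal sketch of the fingerprint calculation, assuming the intensity images are held in a NumPy array indexed (i, j, n.sub.x, n.sub.y); the counts are synthetic.

```python
import numpy as np

# Hypothetical intensity images g[i, j, nx, ny] for a 5x5 array and a 32x32 scan.
rng = np.random.default_rng(1)
g = rng.poisson(3.0, size=(5, 5, 32, 32))

# Fingerprint image: sum each element's image over all scan positions n.
a = g.sum(axis=(2, 3))   # shape (5, 5), one value per detector element
```

Because the sum runs over the whole scan, a depends only on how the photon flux is distributed across the detector elements, which is exactly why it reflects the PSFs and not the sample.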

(28) To understand the properties of the fingerprint image a and how to obtain the PSF therefrom, it is important to derive a in the continuous domain.

(29) Considering a detector array composed of infinitesimal elements, one observes that the image g.sub.x′,y′ acquired by an element at the position (x′, y′)∈ℝ.sup.2 may be expressed as
g.sub.x′,y′(x,y)=(h.sub.x′,y′*ƒ)(x,y)
where ƒ is the object/sample function, h.sub.x′,y′ denotes the PSF associated with the detector element in the position (x′, y′) and * denotes the convolution operator; the fingerprint image a(x′, y′), defined with respect to the detector coordinates, is
a(x′,y′)=∫∫.sub.x,yg.sub.x′,y′(x,y)dxdy=∫∫.sub.x,y(h.sub.x′,y′*ƒ)(x,y)dxdy.

(30) Applying the convolution integration property, the fingerprint image is

(31) a(x′,y′)=∫∫.sub.x,yh.sub.x′,y′(x,y)dxdy·∫∫.sub.x,yƒ(x,y)dxdy=Φ∫∫.sub.x,yh.sub.x′,y′(x,y)dxdy
where Φ is the total photon flux from the sample. Note that a(x′, y′) is independent of the sample (provided Φ>0) but is closely related to the PSF of the microscope system.

(32) Remembering that the PSF of each infinitesimal element is
h.sub.x′,y′(x,y)=h.sup.exc(x,y)·[h.sup.det(x,y)*δ(x−x′,y−y′)]=h.sup.exc(x,y)·h.sup.det(x−x′,y−y′)
and by substituting into the previous equation, it is possible to obtain
a(x′,y′)=Φ∫∫.sub.x,yh.sup.exc(x,y)·h.sup.det(x−x′,y−y′)dxdy=Φ(h.sup.exc★h.sup.det)(x′,y′)
where ★ denotes the correlation operator. In summary, the fingerprint image depends on the instrument, not on the sample; in addition, it depends on both the excitation PSF and the detection PSF.

(33) Note that the fingerprint image may also be used to align the system, in particular to co-align the excitation PSF and the detection PSF on the central pixel (ic,jc) of the detector array. This procedure is very important for a detector with a small number of elements, since a misalignment produces a loss of fluorescence photons. If the system is correctly aligned, the central pixel is the brightest and the pixel intensity values are distributed symmetrically and isotropically relative to the center. A feedback control system may be implemented that measures the fingerprint image and adjusts the detector's xy position accordingly to maximize the intensity of the central pixel.
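One possible feedback metric is a centroid-based measure of misalignment, sketched below; the function name and the centroid choice are illustrative assumptions, not taken from the source, which only requires that the central pixel be maximized.

```python
import numpy as np

def alignment_error(a: np.ndarray):
    """Centroid offset of the fingerprint image from the array center, in pixels.

    A closed-loop controller could move the detector along x and y until this
    offset is driven toward (0, 0).
    """
    Mx, My = a.shape
    ii, jj = np.meshgrid(np.arange(Mx), np.arange(My), indexing="ij")
    tot = a.sum()
    ci = (ii * a).sum() / tot       # intensity-weighted centroid, row axis
    cj = (jj * a).sum() / tot       # intensity-weighted centroid, column axis
    return ci - (Mx - 1) / 2.0, cj - (My - 1) / 2.0
```

For a well-aligned system the fingerprint is symmetric about the central element and the returned offsets are (0, 0).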

(34) In addition, the fingerprint image may be used as a figure of merit to continuously adapt optical elements (adaptive optics, AO) such as spatial light modulators (SLM) or deformable mirrors (DM) and compensate for optical aberrations introduced by the sample or the microscope system during the focusing of the laser beam or the fluorescence beam.

(35) Step 4. Estimate the Shift Matrices s.sup.x and s.sup.y, the Drift Matrices d.sup.x and d.sup.y, and the Magnification of the Microscope Magn

(36) Given the array of the intensity images g, the shift matrices s.sup.x and s.sup.y are calculated.

(37) As described for the pixel reassignment method, each image g.sub.i,j is translated (in the image plane) relative to g.sub.ic,jc by half the distance between the element (i,j) and the element (ic,jc), i.e. d.sub.i,j.sup.x=s.sub.i,j.sup.x/2 and d.sub.i,j.sup.y=s.sub.i,j.sup.y/2. Thus, the shift matrices s.sup.x and s.sup.y may be estimated by recovering the shift between the different images g.sub.i,j. Unlike in the prior art [9], a phase correlation method is used here, chosen for its noise resilience and its higher speed compared with spatial-domain algorithms. Phase correlation estimates the shift between two similar images from a representation of the data in the frequency domain, which in the present description is obtained by fast Fourier transforms (FFT).

(38) To calculate the phase correlation between two different sample images (g.sub.i,j and g.sub.ic,jc), the so-called correlogram r.sub.i,j is first defined:

(39) r.sub.i,j=FFT.sup.−1(FFT(g.sub.i,j)·FFT(g.sub.ic,jc)*/|FFT(g.sub.i,j)·FFT(g.sub.ic,jc)*|)
subsequently, the maximum of the correlogram is found, the position of which denotes the drift between the two images:
(d.sup.x(i,j),d.sup.y(i,j))=argmax.sub.(n.sub.x.sub.,n.sub.y.sub.)(r.sub.i,j(n))

(40) The maximum position is obtained using a fitting algorithm or a centroid-based algorithm to obtain sub-pixel values, where d.sup.x/y(i,j)∈ℝ.

(41) Given the drift matrices d.sup.x and d.sup.y, the shift matrices s.sup.x and s.sup.y may be calculated as follows
s.sup.x/y=d.sup.x/y×2
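The phase-correlation estimate of steps (38)-(40) can be sketched as follows; this version recovers integer-pixel shifts only, leaving out the sub-pixel fitting or centroid refinement mentioned above.

```python
import numpy as np

def phase_correlation_shift(img: np.ndarray, ref: np.ndarray):
    """Integer-pixel shift of `img` relative to `ref` via phase correlation.

    The correlogram r = IFFT(F1 * conj(F2) / |F1 * conj(F2)|) peaks at the
    displacement between the two images.
    """
    F1, F2 = np.fft.fft2(img), np.fft.fft2(ref)
    cross = F1 * np.conj(F2)
    cross /= np.maximum(np.abs(cross), 1e-12)      # keep the phase only
    r = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(r), r.shape)
    # Map peak indices to signed shifts (FFT wrap-around convention).
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, r.shape))
```

Because the normalization whitens the spectrum, the correlogram of two shifted copies is essentially a delta function, which is what makes the peak search fast and noise-resilient.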

(42) Other approaches that estimate shift matrices use: (i) a theoretical model, based on the physical distance between the detector elements and the system magnification, (ii) a calibration sample, e.g. spheres.

(43) However, such approaches do not allow the particularities of each sample and the conditions of the specific measurement to be taken into account. Moreover, the system magnification is not always easy to estimate. On the other hand, the above-described approach is sensitive to the assumption of a Gaussian form for the excitation PSF and the detection PSF. However, for (i,j) within the first Airy disk centered in (ic,jc), the assumption is solid and the estimate of s.sup.x/y(i,j) is robust. An optimal approach could integrate into the estimation of the maximum of the correlogram constraints based on knowledge of the geometric shape of the detector and the magnification of the system.

(44) The magnification Magn of the system may be determined using the s.sup.x/y(i,j) values estimated for the first-order neighbors 𝒩(ic,jc) of the element (ic,jc), i.e., (ic+1,jc), (ic−1,jc), (ic,jc+1) and (ic,jc−1), together with the pixel pitch of the detector (PP) and the pixel dimension of the image (DP):

(45) Magn=4·PP/(DP·Σ.sub.(i,j)∈𝒩(ic,jc)√(s.sup.x(i,j).sup.2+s.sup.y(i,j).sup.2))
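A sketch of the magnification estimate, assuming the shift matrices are given in image pixels and that PP and DP share the same physical unit; the numeric values in the test are hypothetical.

```python
import math

def magnification(s_x, s_y, ic, jc, pixel_pitch_um, image_pixel_um):
    """Magn = 4*PP / (DP * sum over the four first-order neighbors of |s|)."""
    neighbors = [(ic + 1, jc), (ic - 1, jc), (ic, jc + 1), (ic, jc - 1)]
    total = sum(math.hypot(s_x[i][j], s_y[i][j]) for i, j in neighbors)
    return 4.0 * pixel_pitch_um / (image_pixel_um * total)
```

Intuitively, each neighbor shift measures the detector pitch as projected onto the sample plane, so the ratio of the physical pitch to the averaged projected pitch gives the magnification.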

(46) Step 5. Calculate the Time-Resolved Object Function f.sup.t.

(47) Given the array of time-resolved images g.sup.t, the fingerprint image a and the shift matrices s.sup.x, s.sup.y, an estimate of the object function f.sup.t is calculated as described below (steps 5.1-5.3).

(48) Step 5.1. Estimate the Excitation PSF h.sup.exc and the Detection PSF h.sup.det

(49) Based on the relationship with the fingerprint image a described above, the excitation PSF h.sup.exc and the detection PSF h.sup.det may be estimated according to the minimization problem
({tilde over (h)}.sup.exc,{tilde over (h)}.sup.det)=argmin.sub.h.sub.exc.sub.,h.sub.detJ.sub.MSE(h.sup.exc,h.sup.det|a)
or
({tilde over (φ)}.sup.exc,{tilde over (φ)}.sup.det)=argmin.sub.φ.sub.exc.sub.,φ.sub.det J.sub.MSE(h.sup.exc(φ.sup.exc),h.sup.det(φ.sup.det)|a)
in the case of a parameterization of PSFs. Here the MSE functional is
J.sub.MSE(h.sup.exc,h.sup.det|a)=Σ.sub.(i,j).sup.(M.sup.x.sup.,M.sup.y.sup.)(a(i,j)−(h.sup.exc*h.sup.det)(i,j)).sup.2

(50) The MSE function may be minimized with numerical techniques according to known practices.
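As an illustration of what the fingerprint constrains, assume (hypothetically) that both PSFs are isotropic Gaussians: their correlation, and hence the fingerprint, is then itself Gaussian with variance σ.sub.exc.sup.2+σ.sub.det.sup.2, so a moment-based fit of a synthetic fingerprint recovers the combined width, while separating the two contributions requires additional constraints (e.g. a physical model of the excitation PSF). All numbers below are synthetic.

```python
import numpy as np

# Synthetic fingerprint: Gaussian with variance sig_exc^2 + sig_det^2 = 9.
N = 51
c = N // 2
y, x = np.mgrid[0:N, 0:N]
sig2_true = 9.0
a = np.exp(-((x - c) ** 2 + (y - c) ** 2) / (2.0 * sig2_true))

# Second moment along x estimates the combined variance of the two PSFs.
tot = a.sum()
var_est = (((x - c) ** 2) * a).sum() / tot
```

In practice the minimization of J.sub.MSE plays the role of this moment fit, with whatever PSF parameterization is adopted.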

(51) Step 5.2. Calculate the Time-Resolved Object Function by Means of Multi-Image Deconvolution (MID).

(52) Since all information concerning PSFs (including shift values) has been previously estimated, the original problem may be solved by using a conventional multi-image deconvolution, in particular by minimizing the Kullback-Leibler distance (KL) or the mean square error distance (MSE).
{tilde over (f)}.sub.MID.sup.t=argmin.sub.f.sub.t J.sub.KL/MSE(ƒ.sup.t|{tilde over (h)}.sup.exc,{tilde over (h)}.sup.det,{tilde over (s)}.sup.x,{tilde over (s)}.sup.y,g.sup.t).

(53) In the time-resolved case, the KL distance is

(54) J.sub.KL(h.sup.exc,h.sup.det,s.sup.x,s.sup.y,ƒ.sup.t|g.sup.t)=Σ.sub.i,jΣ.sub.tΣ.sub.n(g.sub.i,j.sup.t ln(g.sub.i,j.sup.t/(h.sub.i,j*.sub.2Dƒ.sup.t))+(h.sub.i,j*.sub.2Dƒ.sup.t)−g.sub.i,j.sup.t)
and the MSE distance is

(55) J.sub.MSE(h.sup.exc,h.sup.det,s.sup.x,s.sup.y,ƒ.sup.t|g.sup.t)=Σ.sub.i,jΣ.sub.tΣ.sub.n((h.sub.i,j*.sub.2Dƒ.sup.t)−g.sub.i,j.sup.t).sup.2

(56) The MSE or KL functional may be minimized with numerical techniques according to known practices.
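One standard numerical technique for the KL functional is a multi-image Richardson-Lucy iteration, sketched below for a single time bin (i.e. the intensity case) for brevity; the shifts are assumed to be already folded into the per-element PSFs h.sub.i,j, and all names and sizes are illustrative.

```python
import numpy as np

def conv2d_fft(x, h):
    """Circular 2-D convolution; `h` is a kernel centered on the image center."""
    return np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(np.fft.ifftshift(h))).real

def corr2d_fft(x, h):
    """Adjoint of conv2d_fft (correlation with the same kernel)."""
    return np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(np.fft.ifftshift(h)))).real

def multi_image_rl(gs, hs, n_iter=25, eps=1e-12):
    """Multi-image Richardson-Lucy iteration, which minimizes the KL distance.

    gs: list of recorded images; hs: list of PSFs (same shape, each summing to 1).
    """
    f = np.full_like(gs[0], gs[0].mean(), dtype=float)
    for _ in range(n_iter):
        update = np.zeros_like(f)
        for g, h in zip(gs, hs):
            est = np.maximum(conv2d_fft(f, h), eps)   # forward model h * f
            update += corr2d_fft(g / est, h)          # back-project the ratio
        f = np.maximum(f * update / len(gs), 0.0)     # multiplicative update
    return f
```

The time-resolved case of step 5.2 applies the same update independently per time bin, since the convolution acts only on the spatial dimensions.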

(57) Step 5.3. Calculate the Time-Resolved Object Function f.sup.t by Means of Pixel Reassignment (PR).

(58) Following the pixel reassignment approach, a high-resolution time-resolved image may be obtained by simply summing all the images after shifting each image g.sub.i,j.sup.t back by the estimated amount d.sup.x/y(i,j):

(59) {tilde over (ƒ)}.sub.PR.sup.t(n,t)=Σ.sub.(i,j).sup.(M.sup.x.sup.,M.sup.y.sup.)FFT.sub.2D.sup.−1(FFT.sub.2D(g.sub.i,j.sup.t(n,t))·exp(−i·d.sup.x(i,j)·n.sub.x)·exp(−i·d.sup.y(i,j)·n.sub.y))

(60) Essentially, each 2D image associated with each time bin and each detector element is shifted independently. For this reason, both the FFT and the inverse FFT are carried out in 2D.
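The shift-and-sum of step 5.3 can be sketched with the Fourier shift theorem, which applies the sub-pixel drifts d directly as phase ramps; the array layout and function names are illustrative assumptions.

```python
import numpy as np

def fourier_shift_2d(img, dy, dx):
    """Shift a 2-D image by (dy, dx) pixels (sub-pixel allowed) via the FFT."""
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return np.fft.ifft2(np.fft.fft2(img) * phase).real

def pixel_reassignment(g_t, d_y, d_x):
    """Shift each element's image back by its drift and sum over the array.

    g_t[i, j, :, :, t] are the time-resolved images; each 2-D frame (one per
    time bin and per detector element) is shifted independently.
    """
    Mx, My, Ny, Nx, Nt = g_t.shape
    out = np.zeros((Ny, Nx, Nt))
    for i in range(Mx):
        for j in range(My):
            for t in range(Nt):
                out[:, :, t] += fourier_shift_2d(g_t[i, j, :, :, t],
                                                 -d_y[i, j], -d_x[i, j])
    return out
```

Shifting back by −d realigns the laterally displaced copies of the object recorded by the off-center elements, so their sum sharpens rather than blurs.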

(61) Step 6. Calculate the Intensity Object Function f.

(62) Given: (i) the intensity image array g, the fingerprint image a, and the shift matrices s.sup.x, s.sup.y or (ii) the previously estimated time-resolved object function f.sup.t, an estimate of the intensity object function f is calculated, as described below (steps 6.1-6.3).

(63) Step 6.1. Calculate the Intensity Object Function f by Time Integration.

(64) Given the high-resolution time-resolved image, a high-resolution intensity image may be obtained by integrating the reconstructed time-resolved object function ƒ.sup.t along its time dimension.

(65) {tilde over (ƒ)}.sub.MID/PR(n)=Σ.sub.t {tilde over (ƒ)}.sub.MID/PR.sup.t(n,t)

(66) Step 6.2. Calculate the Intensity Object Function f by Means of Multi-Image Deconvolution.

(67) Given the excitation PSF h.sup.exc, the detection PSF h.sup.det and the shift matrices s.sup.x, s.sup.y, the intensity object function f may be calculated directly from the series of intensity images g by means of multi-image deconvolution and without estimating the time-resolved object function f.sup.t, with a substantial reduction in computational effort.

(68) In this case, it is necessary to minimize the KL or MSE distance only with respect to f
{tilde over (f)}.sub.MID=argmin.sub.fJ.sub.KL/MSE(ƒ|{tilde over (h)}.sup.exc,{tilde over (h)}.sup.det,{tilde over (s)}.sup.x,{tilde over (s)}.sup.y,g).

(69) The MSE or KL functional may be minimized with numerical techniques according to known practices.

(70) Step 6.3. Calculate the Intensity Object Function f by Means of Pixel Reassignment.

(71) Given the shift matrices s.sup.x, s.sup.y, the intensity object function f may be calculated directly from the series of intensity images g by means of pixel reassignment and without estimating the time-resolved object function f.sup.t, with a substantial reduction in computational effort.

(72) In this case, the pixel reassignment estimate is

(73) {tilde over (ƒ)}.sub.PR(n)=Σ.sub.(i,j).sup.(M.sup.x.sup.,M.sup.y.sup.)FFT.sup.−1(FFT(g.sub.i,j(n))·exp(−i·d.sup.x(i,j)·n.sub.x)·exp(−i·d.sup.y(i,j)·n.sub.y))

(74) If the microscope apparatus operates in the intensity mode (i.e. without taking TCSPC measurements), only the intensity image array g is generated. In this case, only the methods proposed in steps 6.2 and 6.3 may be used. For a laser beam operating in continuous-wave mode, recording the signal in TCSPC mode offers little benefit.

(75) The technical advantages of the methods described above are as follows.

(76) Parameter-free reconstruction of robust intensity (or time-resolved) images. The ability to estimate the PSFs and the shift matrices separately, and only then estimate the sample function, makes the reconstruction method more robust. Estimating the shift matrices using a phase correlation approach allows for sub-pixel results and quick calculations.

(77) Simple (and possibly automatic) alignment of the system, due to the fingerprint image. The correct system setting (in terms of xy alignment) may be achieved by a closed-loop control system (feedback system) that uses the fingerprint image as the metric and mechanical elements to move the detector along x and y.

(78) Compatibility with time-resolved measurements. This allows ISM to be combined with fluorescence-lifetime imaging microscopy (FLIM).

(79) The main technical advantages of the above-described equipment are as follows. SPAD Detector Array

(80) The photons are collected by an array of M.sub.x by M.sub.y single-photon avalanche diode (SPAD) detectors with single-photon sensitivity (in the example shown, the array is composed of 25 elements, i.e. M.sub.x=M.sub.y=5). It may be demonstrated that even with a relatively low number of elements (≥25), the spatial resolution of the reconstructed image (by PR) is close to the theoretical resolution improvement of the closed-pinhole confocal microscope. This consideration is crucial because a large number of detector elements would require (1) tightly integrated electronics, sacrificing the fill factor, and (2) a sequential readout (as in a camera), introducing a frame rate and discarding the time information at the source.

(81) All the detectors are fully independent of each other: (i) the interference (cross-talk) between the elements is negligible (the SPAD array shows cross-talk values <1% for the first horizontal neighbors and <0.2% for the first diagonal neighbors) and (ii) each element signals the arrival of a photon immediately with a digital TTL signal on a dedicated line. Devices characterized by the latter property are sometimes referred to as "event-driven cameras" or "asynchronous cameras", because each photon generates a digital signal and there is no fixed frame rate.

(82) All the detectors have a time jitter (or time resolution) fully compatible with the measurement of the average lifetime of the excited state of the fluorophores most used in fluorescence microscopy, the values of which fall in the range 1-10 ns (the SPAD array has a time resolution between 110 and 160 ps).

(83) All detectors have a hold-off that ensures read bandwidths compatible with fast scanning systems (in the above-described SPAD array, the hold-off time and excess bias voltage may be set via a communication bus in the ranges of 25 ns to 200 ns and 4 V to 6 V, respectively). For example, the ability to set the hold-off to 25 ns allows each individual element to work with a read bandwidth of 40 MHz. Moreover, the independence of the detector elements allows a higher effective read bandwidth of the detector system: since, in the above-described point laser-scanning architecture, the dimensions of the SPAD array projected onto the object plane are smaller than the diffraction-limited dimensions, the photons generated by the observation volume (scanned across the sample) are spread over the entire detector array, so the SPAD array may collect a higher photon flux (generated by the observation volume) than a single element could. This technical feature is particularly important when combining the detector with a fast resonant scanning system, such as resonant mirrors or adjustable-focus optics based on acoustic gradients.
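The 40 MHz figure follows directly from the 25 ns hold-off, since the hold-off sets the minimum interval between counts on one element:

```python
# Per-element maximum count rate implied by the hold-off time.
hold_off_s = 25e-9                           # 25 ns hold-off from the description
per_element_bandwidth_hz = 1.0 / hold_off_s  # 1 / 25 ns = 40 MHz
```

The aggregate photon flux the array can handle is higher still, because the diffraction spot spreads photons over many independently held-off elements.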

(84) An array of M.sub.x by M.sub.y microlenses, in which each lens is focused in the center of an element of the detector array, may be used to increase the fill factor and thus the photon detection efficiency (PDE). A zoom lens system is used to expand the detection beam, so that the size of the entire detector, projected onto the detection plane, is around 1 Airy unit. The zoom lens system guarantees the generality of the architecture described above, in terms of the wavelength used, the magnification and the numerical aperture of the lens. FPGA processor

(85) The system control electronics were developed using reconfigurable hardware. An FPGA processor allows the signal to be acquired from a sufficient number of digital lines. A time-to-digital converter (TDC) implemented directly on the FPGA processor allows a digital signal to be acquired with a time resolution (time jitter) of tens of picoseconds.

(86) A prototype was built by the inventors by modifying the detection part of a CLSM. The instrument is equipped with an excitation laser source with a wavelength of 635 nm (LDH-D-C640, PicoQuant). The laser beam is scanned on the sample by a pair of galvanometric mirrors (6215HM40B, CTI-Cambridge) and a lens (CFI Plan Apo VC60x oil, Nikon). Fluorescent photons are collected by the same lens, de-scanned and filtered by a dichroic mirror (H643LPXR, AHF Analysentechnik). Finally, the beam is expanded and projected onto the SPAD detector array. The detector array has 25 elements arranged in a 5-by-5 array and is mounted on a commercial support with micrometer screws for fine alignment on the three axes. The spatial and temporal performance of the detector was determined, showing a jitter between 110 and 160 ps within the active area and excellent uniformity of detection (FIG. 3). The system may be aligned, and the magnification adjusted, using the alignment method described above, to acquire images of actual samples (FIG. 4).

(87) The detector array is controlled by a dedicated operating card that provides power and electronic signal conditioning. The card provides 25 digital output channels (each linked to the arrival of a photon on a specific element of the detector array), which are fed into the data acquisition system.

(88) The data acquisition system was developed with a commercial FPGA development board (National Instruments USB-7856R), equipped with a Kintex-7 FPGA processor, connected to a personal computer. To synchronize the acquisition system with the microscope control system, standard digital pixel/line/frame clock lines were used.

(89) When the microscope is used for real-time imaging, the photons collected from each pixel by each detector are processed by dedicated algorithms run on graphics processing units (GPUs) to provide real-time high-resolution images of the sample.

(90) Obviously, changes are possible with respect to the system architecture and the use of the methods described above.

(91) For example, even if the number of elements (25, arranged in a 5-by-5 array) was chosen as an optimal compromise between resolution gain and complexity of the acquisition system (each element constitutes a digital channel), the number of elements may be increased to 49 (7-by-7 array) or 81 (9-by-9 array).

(92) In addition, the SPAD-based detection system may be used for other non-fluorescence-based laser point-scanning microscopy techniques, such as second harmonic generation microscopy, Raman microscopy and scattering microscopy.

(93) Moreover, the detection system may be used in stimulated emission depletion (STED) microscopy.

(94) Since the fingerprint image is a direct representation of the illumination and detection PSFs, it is possible to use the fingerprint image to derive a metric to be supplied to an adaptive optics control system (e.g., based on spatial light modulators SLM) to compensate for aberrations induced by the system or sample.

BIBLIOGRAPHICAL REFERENCES

(95) [1] U.S. Pat. No. 4,549,204 A. [2] Sheppard, C. J. R., "Super-resolution in confocal imaging," Optik 80(2) (1988). [3] Muller, C. B., Enderlein, J., "Image scanning microscopy," Phys. Rev. Lett. 104, 198101 (2010). [4] Roth, S., Sheppard, C. J. R., Wicker, K., Heintzmann, R., "Optical photon reassignment microscopy (OPRA)," Optical Nanoscopy 2, 5 (2013). [5] De Luca, G. M. R. et al., "Re-scan confocal microscopy: scanning twice for better resolution," Biomedical Optics Express 4(11) (2014). [6] Gregor, I. et al., "Rapid nonlinear image scanning microscopy," Nature Methods (2017), in press. [7] WO 2015/055534 A1. [8] Li, H. et al., "Method of super-resolution based on array detection and maximum-likelihood estimation," Applied Optics 55(35), 9925-9931 (2016). [9] Yu, Z. et al., "Parallel detecting super-resolution microscopy using correlation-based image restoration," Optics Communications 404(35), 139-146 (2017).