Method for Detecting Emission Light, Detection Device and Laser Scanning Microscope
20230258916 · 2023-08-17
CPC classification: G01J3/027 · G01N21/6408 · G02B21/008 (PHYSICS)
Abstract
The invention relates to a method for detecting emission light, in particular fluorescent light from at least one fluorescent dye, in a laser scanning microscope, wherein the emission light emanating from a sample is guided, by an imaging optical unit, onto a two-dimensional matrix sensor having a plurality of pixels and being located on an image plane, and a detection point distribution function is detected by the matrix sensor in a spatially oversampled manner. The method is characterized in that the emission light emanating from the sample is spectrally separated in a dispersion device, in particular in a dispersion direction; the spectrally separated emission light is detected by the matrix sensor in a spectrally resolved manner; and during the analysis of the intensities measured by the pixels of a pixel region, the spectral separation is cancelled at least for some of said pixels. Additional aspects of the invention relate to a detection device for the spectrally resolved detection of emission light in a laser scanning microscope and to a laser scanning microscope.
Claims
1. A method for detecting emission light, in particular fluorescence light from at least one fluorescent dye, in a laser scanning microscope, in which emission light coming from a sample is guided by way of an imaging optical unit to a two-dimensional matrix sensor that is situated in an image plane and has a multiplicity of pixels, wherein a detection point spread function is detected in spatially oversampled fashion using the matrix sensor, wherein the emission light coming from the sample is spectrally decomposed, in particular in a dispersion direction, using a dispersion device, the spectrally decomposed emission light is detected in spectrally resolved fashion using the matrix sensor, and the evaluation of the intensities measured by the pixels of a pixel region includes a reversal of the spectral separation for at least some of these pixels.
2. The method as claimed in claim 1, wherein at least one pixel region which is assigned to the emission of a dye is identified on the basis of a spectrum measured using the matrix sensor.
3. The method as claimed in claim 1, wherein an intensity value associated with a specific wavelength is determined for the determination of a spectral intensity distribution of the emission light on the matrix sensor by virtue of the measurement data of a plurality of pixels in a column of the matrix sensor perpendicular to a dispersion direction being summed.
4. The method as claimed in claim 2, wherein maxima and minima are automatically searched for in the determined spectral distribution in order to identify the pixel regions and spectral limits for calculating the point spread function of a specific dye are proposed to a user on the basis of maxima and minima that have been found, or spectral limits are automatically defined on the basis of the maxima and minima that have been found.
5. The method as claimed in claim 1, wherein the pixel regions overlap on the matrix detector and a spectral unmixing of the intensities measured by the individual pixels is carried out.
6. The method as claimed in claim 1, wherein a detection point spread function is determined for at least one fluorescent dye.
7. The method as claimed in claim 1, wherein emission light emitted by a plurality of points on a sample that are illuminated by excitation light at the same time is simultaneously guided to the matrix sensor and evaluated.
8. The method as claimed in claim 1, wherein the matrix sensor is operated in a photon counting mode.
9. The method as claimed in claim 1, wherein in order to reverse the spectral separation for the individual pixels of a pixel region, the intensity values measured by these pixels are combined by calculation, taking into account the spectral intensity distribution of the emission light for the dye associated with the pixel region and taking into account a spatial intensity distribution of individual spectral components on the matrix sensor.
10. The method as claimed in claim 9, wherein an intensity distribution measured by pixels of a column perpendicular to a dispersion direction is used as the spatial intensity distribution of the individual spectral components.
11. The method as claimed in claim 1, wherein in order to reverse the spectral separation for the individual pixels of a pixel region, the intensity values measured by the pixels are assigned to a location in the image plane that has been displaced relative to the respective pixel, with a displacement vector depending on the location of the respective pixel and the wavelength associated with that location.
12. The method as claimed in claim 11, wherein a wavelength-independent component of the displacement vector is obtained for a specific pixel by scaling a vector component of the vector from a reference pixel to the relevant pixel by a reassignment factor.
13. The method as claimed in claim 11, wherein a detection point spread function obtained after performing the pixel reassignment has substantially the same shape in a dispersion direction as perpendicular to the dispersion direction.
14. The method as claimed in claim 11, wherein the displacement vectors associated with a wavelength range assigned to a sample structure are determined by evaluating a phase correlation of a plurality of scanned images.
15. The method as claimed in claim 1, wherein time-resolved measurements for determining fluorescence lifetimes of the dyes are carried out using the matrix sensor.
16. A detection apparatus for detecting emission light in a laser scanning microscope, in particular for carrying out the method as claimed in claim 1, the detection apparatus comprising: a two-dimensional matrix sensor in an image plane with a multiplicity of pixels for spatially oversampled detection of a detection point spread function of emission light coming from a sample, an imaging optical unit for guiding the emission light to the two-dimensional matrix sensor, a dispersion device for the spectral separation of the emission light, wherein the matrix sensor is configured and positioned for the spectrally resolved detection of the spectrally separated detection light, and evaluation electronics connected to the matrix sensor and configured, within the scope of evaluating the intensities measured by the pixels of a pixel region, to reverse the spectral separation for these pixels.
17. The detection apparatus as claimed in claim 16, wherein in order to reverse the spectral separation, the evaluation electronics are configured to combine, by calculation, the intensity values measured by pixels of a pixel region, taking into account a spectral intensity distribution of the emission light for the dye associated with the pixel region and taking into account a spatial intensity distribution of individual spectral components on the matrix sensor.
18. The detection apparatus as claimed in claim 16, wherein in order to reverse the spectral separation for the individual pixels of a pixel region, the evaluation electronics are configured to assign the intensity values measured by the pixels to a location in the image plane that has been displaced relative to the respective pixel, with the displacement vector depending on the location of the respective pixel and the wavelength associated with that location.
19. The detection apparatus as claimed in claim 16, wherein the dispersion device comprises a light-diffracting and/or a light-refracting device.
20. The detection apparatus as claimed in claim 16, wherein the matrix sensor comprises an analog integrating detector and/or a photon counting detector.
21. The detection apparatus as claimed in claim 16, wherein a dispersion direction lies in the direction of a coordinate direction of the matrix sensor.
22. The detection apparatus as claimed in claim 16, wherein in order to increase the detection efficiency, a microlens is arranged upstream of the matrix sensor.
23. The detection apparatus as claimed in claim 16, wherein a diameter of an Airy disk of the detection point spread function in the plane of the matrix sensor is less than twenty times a lattice constant of the matrix sensor.
24. The detection apparatus as claimed in claim 16, wherein a spectral bandwidth per pixel of the matrix sensor in a dispersion direction is less than 0.5 nm.
25. The detection apparatus as claimed in claim 16, wherein the imaging optical unit comprises a zoom system.
26. The detection apparatus as claimed in claim 16, wherein in order to determine a fluorescence lifetime of dyes, the matrix sensor and the evaluation electronics are configured to carry out time-resolved measurements.
27. A laser scanning microscope, comprising: a light source for emitting excitation light, an excitation beam path with a microscope objective for guiding the excitation light onto or into a sample to be examined, a scanning device located in the excitation beam path and serving to scan at least one illumination spot over the sample, a detection beam path for guiding emission light emitted by the sample to a detection unit, the detection unit for detecting the emission light, a main color splitter for separating excitation light and emission light, and a control and evaluation unit for controlling the light source and for evaluating measurement data obtained by the detection unit, wherein the detection unit comprises a detection apparatus as claimed in claim 16.
28. The microscope as claimed in claim 27, wherein the control and evaluation unit is configured to search for maxima and minima in a determined spectral distribution and to propose spectral limits for calculating the point spread function of a specific dye on the basis of maxima and minima that have been found or to independently define spectral limits for calculating the point spread function of a specific dye on the basis of maxima and minima that have been found.
29. The microscope as claimed in claim 27, wherein the control and evaluation unit is configured together with the detection apparatus to carry out a method for detecting the emission light, the method including: spectrally decomposing the emission light coming from the sample using the dispersion device, detecting the spectrally decomposed emission light in a spectrally resolved fashion using the matrix sensor, and evaluating intensities measured by pixels of a pixel region, the evaluating including a reversal of the spectral separation for at least some of these pixels.
Description
[0063] Further advantages and features of the method according to the invention, of the detection apparatus according to the invention, and of the laser scanning microscope according to the invention are explained below with reference to the attached figures. In the figures:
[0076] Identical and identically acting components are generally identified by the same reference signs in the figures.
[0078] Excitation light 14 emitted by the light source 12 reaches the main color splitter 18 via a deflection mirror 16 and is deflected there in the direction of the scanning device 22, which can be arranged in a plane optically conjugate to the back pupil of the microscope objective 24. From the scanning device 22, the excitation light 14 reaches the microscope objective 24 via a scanning objective and a tube lens, the microscope objective focusing the excitation light 14 at an illumination spot 27 in a sample plane 26 on or in the sample S. The scanning objective and tube lens are illustrated schematically as one component 23 in
[0079] The region exposed to excitation light 14 on or in the sample S then emits emission radiation 28, this typically being fluorescence light from the dyes with which the sample S was prepared in advance. The emission radiation 28 then travels back to the main color splitter 18 along the same path in the “descanned” detection beam path 30 that the excitation light 14 took previously, but it is transmitted by the said main color splitter and then reaches the detection unit 32, which comprises a detection apparatus 200 according to the invention. The data measured by the detection unit 32 are evaluated by the control and evaluation device 34. The control and evaluation device 34 may also serve to control the light source 12 and the scanning unit 22.
[0080] The microscope 100 according to the invention is, in particular, a confocal laser scanning microscope.
[0082] In the exemplary embodiment illustrated in
[0083] Emission light 28 confocally filtered by a pinhole (not depicted here) is collimated by an optical system (likewise not depicted here) and steered to the dispersion device 40.
[0084] The individual spectral components 42, 44, 46, 47 are then incident on the imaging optical unit 48, which is depicted schematically as a lens element in
[0085] The imaging optical unit 48 focuses the different spectral components 42, 44, 46, 47 on the matrix sensor 50 with a multiplicity of pixels 51. The matrix sensor 50 can, for example, be a SPAD multi-line camera having, for example, 5 lines with 800 pixels each, corresponding to 800 columns.
[0088] The evaluation electronics 60, which may also comprise a data grabber, may be implemented, for example, in an FPGA, in a microcontroller, or in comparable components. It is essential that a reduction in the data volume is achieved as close to the hardware as possible, that is to say as close as possible to the matrix sensor 50, so that the data are able to flow as continuously as possible from the evaluation electronics 60 to a control PC, that is to say to the control and evaluation unit 34 of the microscope 100 according to the invention from
[0089] The situation in
[0090] In order to now achieve this, the imaging optical unit 48 is initially dimensioned relative to the dimensions of the matrix sensor 50, that is to say relative to the dimensions of the SPAD camera in the example shown. By way of example, the matrix sensor 50 can have a pixel pitch, which can also be referred to as the lattice constant of the matrix sensor 50, of a=25 μm.
[0091] The diameter of an Airy disk is known to depend on the wavelength of light and the numerical aperture as follows:

d_Airy = 1.22·λ/NA
[0092] To evaluate the signal for ISM, the point spread function must be oversampled by at least three to four pixels per spatial direction. In order to attain this for the relevant wavelength range from 450 nm to 800 nm, a detection-side numerical aperture of approximately NA=0.01 is required when the sensor is illuminated. This emerges from the diagram in
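This dimensioning can be checked with a minimal numeric sketch (not part of the patent text): using the standard Airy disk diameter d = 1.22·λ/NA together with the detection-side NA of 0.01 and the pixel pitch a = 25 μm mentioned in the description, the Airy diameter spans roughly two to four sensor pixels over the 450 nm to 800 nm range.

```python
# Sketch of the dimensioning argument; NA and pixel pitch are taken from the
# examples in the text, d = 1.22*lambda/NA is the standard Airy disk diameter.
def airy_pixels(wavelength_nm, na=0.01, pixel_pitch_um=25.0):
    d_um = 1.22 * wavelength_nm * 1e-3 / na  # Airy disk diameter in micrometers
    return d_um / pixel_pitch_um             # sensor pixels per Airy diameter

for wl in (450, 600, 800):
    print(wl, "nm ->", round(airy_pixels(wl), 1), "pixels")
```

For 800 nm this gives approximately 3.9 pixels per Airy diameter, consistent with the three-to-four-pixel oversampling requirement stated above.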
[0094] In principle, the various spectral components must be integrated numerically in order to recover the point spread function from the dispersively smeared signal distribution. Once the intensities of the pixels have been obtained in this way, the image scanning microscopy (ISM) method can be applied in the next step.
[0095] The intensity incident on a particular pixel 51 is denoted by I_(i,j), where the index i denotes the number of the column, that is to say runs in the x-direction and in the dispersion direction 41. The index j denotes the number of the row and thus runs in the y-direction.
[0096] The intensity I_(i,j) can be written as:

I_(i,j) = ∫_(Δx_i) ∫_(Δy_j) ∫ Sp(λ)·Airy(x − x_0(λ), y − y_0, λ) dλ dy dx

[0097] Here, Sp(λ) is the emission spectrum of the excited fluorescent dye and Airy(x,λ) is the Airy function corresponding to the spatial intensity distribution of the point spread function, with the wavelength λ in the Airy function representing a parameter which, in particular, determines the width of the Airy function Airy(x,λ). x_0 denotes the center of the point spread function. The center of the point spread function, that is to say the point of maximum intensity, is then defined by the dispersion x_0(λ) and by an adjustment by optical means to a column center y_0.
[0098] Typically, the spectral distributions of common fluorescent dyes (fluorophores) have spectral bandwidths of approximately δλ=50 nm (see
[0099] A one-dimensional deconvolution problem now remains in the calculation of the unperturbed Airy function from the spectral smearing in the x-direction when the spectrum Sp(λ) of the excited fluorescent dye is known. The computing and hardware effort in this context should not be underestimated. After all, the signal from at least 4×730 pixels must be evaluated.
[0100] Further assumptions, as set forth below, can be made in order to further reduce the computational burden on the evaluation electronics 60. The first assumption made is that the unperturbed point spread function is radially symmetric. This means that the spatial expansion of the point spread function along the spectrum in the dispersion direction 41, that is to say in the x-direction, can be determined from the readable intensity distribution I_j^(y) orthogonal thereto, that is to say in the y-direction. Moreover, the spectral bandwidth per pixel at Δλ<0.5 nm is very low, and so the dispersive signal smearing due to the pixel extent can be neglected to a good approximation, with the result that the spectrum per pixel can be assumed to be piecewise constant. Moreover, the width of the integration range, that is to say the number of pixels over which the integral must be formed, can also be determined from the intensity distribution determined in the orthogonal direction, that is to say the y-direction. The diagram in
[0101] Here, sp.sub.n is the discretized intensity of the fluorescent dye at the spectral position of the relevant pixel. The task now is to determine the components of Airy functions centered in neighboring pixels in the intensity I.sub.i,j measured at the pixel (i,j). In this respect, too, an approximation can be obtained from the data measured in the orthogonal direction, that is to say in the y-direction.
[0102] Initially, the sum of the signal over all pixels perpendicular to the dispersion direction, that is to say over all pixels in a column, is advantageously calculated. This yields the spectrum of the emission which can be normalized, with the result that a distribution as shown in
I^cal_i = (I(i−2), I(i−1), I(i), I(i+1), I(i+2)) = (0.008, 0.16, 0.65, 0.16, 0.008)
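The column summation and the subsequent normalization can be sketched as follows; the frame contents and window width in this example are illustrative, not measured data.

```python
import numpy as np

# Sketch: column sums of a sensor frame give the emission spectrum; a
# normalized window of columns around a column i yields a calibration vector
# of the form I_cal = (I(i-2), ..., I(i+2)).
def spectrum(frame):
    return frame.sum(axis=0)            # sum perpendicular to the dispersion direction

def calibration_vector(frame, i, half_width=2):
    s = spectrum(frame).astype(float)
    window = s[i - half_width : i + half_width + 1]
    return window / window.sum()        # normalized neighboring-column weights
```

For the distribution shown above, the five weights sum to one, so the calibration vector can be applied directly as relative PSF contributions.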
[0104] The values determined thus allow the discretization of the last remaining integral ∫_(Δx_i) over the pixel width.
[0105] Here, P_(n,j) is the determined component at the pixel (i,j) of the intensity of the Airy function centered at a neighboring pixel n. The contributions of I_j^(y)·∫_(Δx_i) …

[0106] In the case of a photon-counting matrix sensor in the Geiger mode, the P_(i−n,j) assign the photons detected per matrix sensor pixel, N_(i−n,j), to the subpixels of the PSF to be determined, according to the previously calibrated specific distribution of the PSF at a given sensor position, as follows:

P_(i−n,j) = I^cal_(i−n)·N_(i−n,j)   (6)
[0107] This implements the reversal according to the invention of the spectral splitting for the pixels of the pixel region associated with the dye under consideration.
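Equation (6) amounts to a per-column weighting of the photon counts. A minimal sketch (array shapes and values are illustrative, not from the text):

```python
import numpy as np

# Sketch of equation (6): photons N counted in the columns of a pixel region
# are weighted with the calibrated PSF distribution I_cal to give the subpixel
# contributions P; summing P over the region combines the spectral components.
def reassign_photons(counts, i_cal):
    p = np.asarray(i_cal) * np.asarray(counts, float)  # P = I_cal * N per column
    return p, p.sum()                                  # contributions and combined intensity
```

With counts (10, 20, 30) and weights (0.2, 0.6, 0.2), for example, the contributions are (2, 12, 6) and the combined intensity is 20.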
[0108] Should the error generated by neglecting the dispersive signal smearing appear too large, countermeasures can be taken using a larger sensor array and a longer focal length of the imaging optical unit. Using an array of, for example, 4×1400 pixels and a focal length of f=100 mm, the bandwidth per pixel is already Δλ<0.25 nm. A reasonable operating point can thus be found in any case. The matrix sensor 50 does not have to be restricted to a few rows. Significantly more image lines are allowed if a high frame rate can be achieved, and this then also renders measurements with multipoint excitation possible. The method according to the invention benefits from such a parallelization because the pixel dwell times can then be lengthened by the factor of the parallelization at a given frame rate.
[0109] The limitation of the number of photons in the LSM is problematic for the application of the above-described principle. The matrix sensor 50 supplies digital signals, that is to say photons per sensor pixel and image. A single image of the matrix sensor 50 is subsequently converted into one pixel of an entire LSM image. Since the photon flux incident on the matrix sensor 50 can be of the order of a few megahertz, for example, and the pixel dwell time of the LSM should be of the order of one μs, only a few photons per pixel dwell time are distributed overall over the 4×730 pixels, for example. Accordingly, most of the pixels of the matrix sensor 50 will supply zero as the datum and only a few pixels will supply one. Therefore, a direct single readout of the matrix sensor 50 may not yet deliver a usable result. In addition, the emission and detection of the photons are statistical events, and so the distribution of the photons in the long-term average cannot yet be derived from a single image with such a small number of photons. This is illustrated in
[0110] Accordingly, the system needs to be calibrated with an integrated image. However, in principle, this is a very fast process and generally does not require more than one image scan. This is illustrated in
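The photon statistics can be illustrated with a small simulation (seed, photon rate, and the PSF weights below are arbitrary illustrative choices, not values from the text): a single frame with only a few photons is essentially uninformative, while the sum over many frames approaches the underlying distribution.

```python
import numpy as np

# Illustrative simulation: sparse single frames vs. an integrated image.
rng = np.random.default_rng(0)
true_dist = np.array([0.008, 0.16, 0.65, 0.16, 0.008]) / 0.986  # normalized PSF weights
frames = rng.poisson(3.0 * true_dist, size=(10_000, 5))  # ~3 photons per frame
single = frames[0]                             # a single frame: mostly zeros and ones
estimate = frames.sum(axis=0) / frames.sum()   # accumulated, normalized distribution
```

After accumulation over many frames, the estimated distribution lies close to the underlying weights, which is the basis for calibrating with an integrated image.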
[0111] From the measurements of
[0112] A second option for reversing the spectral separation, according to the invention, for the pixels of a pixel region that is assigned to a dye is based on the method of pixel reassignment. It will be described with reference to
[0113] Initially the known method of pixel reassignment is explained on the basis of
[0114] The situation of image scanning is considered, in which a sample is scanned using a punctiform illumination source and the intensity radiated back by the sample is measured in spatially resolved fashion using a confocal two-dimensional detector array fixed relative to the sample. Let the confocal matrix sensor have n by m pixels (i,j) with indices i, j, which run from 1 to n and m, respectively.
[0115] As the intention is to evaluate the spectral measurements on a rectangular array at a later stage, a rectangular array will be assumed here for the sake of simplicity. However, this is not a restriction in itself and, in principle, other geometries of pixelated sensors can also be considered. In particular, hexagonal arrangements are often used because a good fill factor can be obtained therewith.
[0116] Then, for a clear representation, the assumption is made that the magnification from the object plane to the intermediate image plane is 1. Also, it is advantageous to assume that each pixel of the sensor is smaller than 1 Airy unit (Airy unit=AU). This case corresponds to the standard assumption in image scanning that the PSF is detected in a spatially oversampled manner and that each pixel represents an effective pinhole which is smaller than 1 AU (or smaller than 0.8 AU or, even better, smaller than 0.3 AU); see above. In principle, the arrangement also works with a PSF smaller than the pixel size. However, in that case it is only possible to measure the spectrum and record spectral confocal images. Image scanning then cannot be used in a meaningful way.
[0117] The image scanning microscope can be considered to be a linear and space-invariant system, that is to say the sample plane is imaged linearly into the image plane 56 of the matrix sensor. Let the variable x denote the location in the image plane 56, that is to say the plane of the matrix sensor, projected back into the sample plane. If the scanning position at the point x is in the sample, that is to say if, in other words, the maximum of the illumination intensity is at the location x in the sample, then the intensity g.sub.i,j(x) measured by the pixel (i,j) can be described as a convolution as follows:
g_(i,j)(x) = (h_(i,j)*f)(x) = ∫h_(i,j)(x−x′)f(x′)dx′   (7)
[0118] Here, h_(i,j)(x) = h^exc(x)·h^em(x − d_(i,j)) is the effective PSF for the respectively considered sensor pixel. h_(i,j)(x) is the product of the PSF h^exc(x) of the excitation and the PSF of the emission h^em(x − d_(i,j)). The PSF h^exc(x) of the excitation can in principle be measured and can be assumed to be known. In order to obtain a correct PSF of detection, h^em(x − d_(i,j)) would have to be convolved with the aperture function, which describes the geometric shape of the pixel, if the size of an individual pixel is not negligible. d_(i,j) is an in-plane vector of the detector array corresponding to the offset between the reference element, for example an element at the center of the sensor, and the pixel (i,j). d_(i,j) can thus be written as

d_(i,j) = ((i − i_0)·a, (j − j_0)·a)

where (i_0, j_0) denotes the reference element and a is the pixel pitch.
[0119] In a simplified consideration, the assumption can be made that the maximum of the effective PSF h_(i,j)(x) is located approximately in the middle between the maxima of the functions h^exc(x) and h^em(x − d_(i,j)), that is to say approximately at a position s_(i,j) = d_(i,j)/2. This would be exactly the case for an aberration-free system if the excitation PSF and the emission PSF were identical, which would be the case for fluorescence without a Stokes shift. The basic concept of pixel reassignment assumes that most of the intensity measured by the pixel (i,j) comes from a location in the sample that does not correspond to the location coordinate of the relevant pixel in the image plane 56. In the simplified consideration, where the assumption is made that the effective PSF has its maximum in the middle between the maxima of the functions h^exc(x) and h^em(x − d_(i,j)), the location from which the intensity measured by the pixel (i,j) originates is at the position s_(i,j) = d_(i,j)/2 in the sample plane.
[0120] The basic concept of pixel reassignment and image scanning lies in displacing the signals that have been displaced in relation to the reference pixel back in the direction of the reference pixel and in adding the said signals. In principle, this is intuitively clear, because each pixel (i,j) of the matrix sensor in the confocal system operated in this way supplies a displaced image. In addition to the return displacement, the images can also simply be registered to one another or other forms of calculation, such as multiview deconvolutions, can be used to advantageously combine signals from all pixels with one another by calculation.
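A minimal sketch of this shift-and-add step, under the simplifying assumptions that each sensor pixel yields its own scanned image and that the back-shifts s = d/2 come out as whole scan pixels (the function name and the 5×5 sensor size are illustrative):

```python
import numpy as np

# Shift-and-add pixel reassignment: each per-pixel image is pushed back by
# s = d/2 toward the reference pixel and the shifted images are summed.
def reassign_images(images, ref=(2, 2)):
    # images[u][v]: 2D scanned image recorded by sensor pixel (u, v)
    acc = np.zeros_like(images[0][0], dtype=float)
    for u, row in enumerate(images):
        for v, img in enumerate(row):
            d = (u - ref[0], v - ref[1])            # offset from the reference pixel
            s = (round(d[0] / 2), round(d[1] / 2))  # simplified s = d/2, rounded
            acc += np.roll(img, shift=(-s[0], -s[1]), axis=(0, 1))
    return acc
```

In practice the shifts are fractional and the images may instead be registered to one another or combined by multiview deconvolution, as noted above; the integer-shift version only illustrates the principle.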
[0122] Image scanning is known to achieve better optical resolution and an improved signal-to-noise ratio (SNR). The use of filters to achieve color dependency in image scanning is known. The use of a strip grating, with which a PSF containing two colors is measured and evaluated, is also known.
[0123] But it is obvious that significantly more information is obtained if the spectrum is measured directly at a plurality of points. This may further serve for spectral unmixing of the data. Additionally, it would be advantageous to be able to use the positive properties of image scanning for such measurements as well.
[0124] Using the detection apparatus according to the invention described here, it is possible, for example, to carry out a measurement in such a way that the spectral components of a specific emission band, which correspond to a pixel region on the matrix sensor 50, are spectrally combined so that, for this spectral band, the method of so-called image scanning (also referred to as Airy scanning or optical reassignment) can be carried out. This combining is what is referred to in the terminology of the independent claims as reversal of the spectral splitting.
[0125] In order to bring about the combination of the pixels associated with an emission band, it is necessary for the dispersive influence of the grating or another dispersive device to be reversed for a number of N contiguous pixels, that is to say for the pixels of a pixel region.
[0126] Ultimately, this is comparable to a situation in which the light is allowed to run back through the dispersive element in such a way that the dispersion is reversed. This would then give rise to a PSF which only contains the spatial information and in which the spectral components are again spatially superposed in a point spread function. Something like this is described in US2019258041, for example. However, a reversal of the dispersion with optical means is not always possible. By way of example, gratings are preferably used for spectral splitting because these produce a linear dispersion. On account of the limited efficiency, however, the multiple use of a grating, that is to say both on the outward and return path of the light, is not advantageous. Moreover, purely optical arrangements for the reversal of the dispersion are complex, expensive, and difficult to adjust.
[0127] In the case of a linear dispersion, as produced by a grating, the relationship λ=kx applies to the assignment of the wavelength along the longitudinal direction x of the matrix sensor. Here, k is a constant of proportionality which depends on the strength of the dispersion, that is to say on the line width of the grating. The unit is therefore nm/mm. In principle, however, the considerations do not only apply to a grating but can also be applied to a prism as a dispersive element. However, the relationship between the location on the sensor and the wavelength can then no longer be described using a simple linear relationship in that case. The calibration and the evaluation of the measurements are then somewhat different.
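The linear calibration λ = k·x can be sketched as a column-to-wavelength mapping. The values of k, the pixel pitch, and the start wavelength below are illustrative assumptions, chosen so that 1400 columns at Δλ = 0.25 nm per pixel cover 450 nm to 800 nm, matching the sensor example given earlier; they are not prescribed by the text.

```python
# Linear grating dispersion lambda = k*x, sampled at the pixel centers.
def wavelength_at_column(i, k_nm_per_mm=10.0, pixel_pitch_mm=0.025, lambda_start_nm=450.0):
    return lambda_start_nm + k_nm_per_mm * pixel_pitch_mm * i  # wavelength in nm at column i

bandwidth_per_pixel = wavelength_at_column(1) - wavelength_at_column(0)  # 0.25 nm per pixel
```

For a prism, this mapping would be replaced by a nonlinear calibration curve, as discussed in the text.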
[0128] Moreover, the use of a grating is preferable because the linear dispersion leads to optimal sampling of the spectrum over the wavelengths (pixels per wavelength) and the relationship between location and wavelength remains linear. The use of prisms is optically a little more efficient under certain circumstances, but it leads to denser sampling of the blue part of the spectrum, while the red wavelengths are spectrally “compressed” and are therefore sampled less well. Precisely this is disadvantageous, since longer wavelengths are preferred for gentler excitation and detection, especially for the imaging of living samples. Multiple staining should be easily detectable here, especially with the arrangement according to the invention.
[0129] An exemplary embodiment is to be specified here.
[0131] Thus, the essential change is that the assumption is made for the displacement vector d_(i,j) that the displacement path consists of two parts:

d_(i,j) = 2s_(i,j) + ξ_i, with ξ = ξ(λ)   (9)
[0132] Here, ξ(λ) is a function of the wavelength. For example, a possible choice for this function is

ξ(λ) = k·(λ − λ_0)/λ_0

where λ_0 denotes a reference wavelength.
[0133] The constant of proportionality k has units of length and is determined by the strength of the dispersion of the dispersive element, that is to say in particular by the grating constant of the utilized grating 43.
[0134] Hence, the displacement path is represented as follows:

d_(i,j) = 2s_(i,j) + ξ(λ_i)·ê_x
[0135] In this example, the dispersion therefore only affects the x-direction. In
[0136] Then, the following is obtained for the centroid of the effective PSF:

s_(i,j) = (d_(i,j) − ξ(λ_i)·ê_x)/2
[0137] Thus, there is no change in the direction perpendicular to the dispersion, in a manner comparable to a conventional image scanning (airy scanning) evaluation, while there is stretching/compression in the direction of the dispersion. This is implemented in such a way that the components whose wavelengths are further away from the reference wavelength are corrected more, with the result that the centroid of the effective PSF is then displaced closer in the direction of the reference pixel. The centroids are then no longer at half the displacement length between the pixels and the reference pixel, but are displaced somewhat closer to the reference pixel. This therefore provides a rule as to how the components of the pixels must be displaced in order to ultimately be able to combine the contributions of all pixels by calculation. Optionally, further calibrations may also be used here in order to determine the correct s.sub.i,j. However, what may occur, especially in the case of thick samples which are often examined using multi-photon microscopy, is that the displacement vectors are influenced by sample-induced aberrations (Castello et al. 2019;
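A hedged sketch of this correction: the displacement is split into the geometric part 2s and a chromatic part ξ along the dispersion direction, and the shift actually applied per pixel is s = (d − ξ·ê_x)/2, so that components further from the reference wavelength are corrected more. The functional form of ξ and all numerical values here are assumptions for illustration, not prescribed by the text.

```python
import numpy as np

# Chromatic pixel reassignment: recover the shift s from the measured offset d
# and a wavelength-dependent chromatic displacement xi along x.
def reassignment_shift(d, lam_nm, lam_ref_nm=600.0, k_um=50.0):
    xi = k_um * (lam_nm - lam_ref_nm) / lam_ref_nm     # assumed form; k has units of length
    return (np.asarray(d, float) - np.array([xi, 0.0])) / 2.0  # s = (d - xi*e_x)/2
```

At the reference wavelength the correction vanishes and s = d/2 as in conventional image scanning; away from it, the x-component of s is pulled toward the reference pixel while the y-component is unchanged.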
[0138] Determination of the Displacement Vectors by Way of a Phase Correlation
[0139] Another possibility of data evaluation for the case under consideration is based on the assumption that the wavelengths of a band associated with a dye, which are evaluated here, are generally characterized by a very specific spatial structure of the sample, and that this structure is ultimately identical for all spectral components since, of course, this structure was marked using the corresponding dye. A second color, for example a blue-green part of the spectrum, generally marks a different structure in the sample.
[0140] In this case, the pixels of the scanned image are initially numbered and labeled according to n=(n.sub.x,n.sub.y). In this way, the image that consists of N.sub.x×N.sub.y image points is denoted g.sub.i,j(n), with n.sub.x=1 . . . N.sub.x and n.sub.y=1 . . . N.sub.y. Furthermore, the so-called correlogram (related to a reference pixel (3, 3) in this case) is defined as:
[0141] FFT and FFT.sup.−1 in this case denote, in a manner known per se, the (fast) Fourier transform and its inverse, respectively. The maximum of this correlogram
[0142] then supplies the respective displacement vector, by which the image content must be pushed back.
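As a sketch, the FFT-based evaluation described above might look as follows. The normalisation by the magnitude of the cross-power spectrum is the standard phase-correlation choice and is assumed here, since the exact formula is not reproduced in this excerpt; the function names are illustrative.

```python
import numpy as np

def correlogram(g_ij, g_ref):
    """Phase-correlation correlogram between the image of pixel (i,j)
    and the reference-pixel image, using FFT and its inverse as in the
    text. The unit-magnitude normalisation of the cross-power spectrum
    is the standard phase-correlation choice (an assumption here)."""
    cross = np.fft.fft2(g_ij) * np.conj(np.fft.fft2(g_ref))
    cross /= np.maximum(np.abs(cross), 1e-12)   # guard against division by zero
    return np.real(np.fft.ifft2(cross))

def displacement_from_correlogram(r):
    """The argmax of the correlogram supplies the displacement vector
    (modulo the image size) by which the content must be pushed back."""
    ny, nx = np.unravel_index(np.argmax(r), r.shape)
    return np.array([ny, nx])
```

For an image that is a pure translation of the reference image, the correlogram is a sharp peak at the translation vector, which is why no knowledge of the dispersion curve is needed.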
[0143] The scope of this method is likewise discussed in [Castello et al., 2019]. Ultimately, this procedure is similar to what is known as a registration of the images supplied by the various pixels. An advantage of this evaluation is that the dispersion, in principle, does not even have to be known, and different functional curves of the dispersion can also be treated by the algorithm. Moreover, the method is less sensitive to aberrations in the image on the sensor. However, the computational outlay is higher.
[0144] In principle, further methods known from image scanning can also be used for the present invention. Reference is again made to [Castello et al., 2019], where what is known as multiview deconvolution is also discussed for the data evaluation; this can likewise be used in the present invention. Furthermore, it is possible to resort to the published literature on the Zeiss Airyscan.
[0145] This therefore specifies another way in which the data of the confocal spectral sensor, with spatial oversampling of the PSF, can be evaluated in order to simultaneously obtain better-resolved images with increased SNR and at the same time ascertain the spectrum.
[0146] As described above, the spectrum itself can always be obtained by summing the pixels in a column, that is to say perpendicular to the dispersion direction (direction j).
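A minimal sketch of this column summation, assuming sensor frames indexed as rows j × columns i (the axis convention and the function name are assumptions of this sketch):

```python
import numpy as np

def spectrum_from_sensor(intensities):
    """Discrete spectrum sp_i from one matrix-sensor frame: sum each
    column i over the row index j, i.e. perpendicular to the dispersion
    direction. `intensities` is a 2-D array (rows j, columns i)."""
    return np.asarray(intensities, dtype=float).sum(axis=0)
```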
[0147] Application to Multi-Color Excitations
[0148] The method according to the invention can also be applied very advantageously to the simultaneous detection of a plurality of dyes. To this end, it is advantageous that the integration limits, especially in the dispersion direction 41, can be defined flexibly. Integration limits here mean the limits within which the individual spectral contributions of the point spread function must be summed for a specific dye. This allows, for example, the extent of the point spread function to be calibrated separately for each dye. This is explained in more detail in connection with the figures.
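As an illustration, per-dye integration limits could be applied as follows. The linear wavelength axis and all names are assumptions of this sketch, not the patented implementation:

```python
import numpy as np

def integrate_dye_signal(intensities, wavelengths, limits):
    """Sum the spectral contributions of one dye between flexible
    integration limits (lam_min, lam_max) along the dispersion direction.

    intensities : 2-D sensor frame (rows j, columns i).
    wavelengths : wavelength lambda_i at each sensor column i
                  (a linear wavelength axis is assumed here).
    limits      : (lam_min, lam_max) integration limits for this dye.
    """
    lam_min, lam_max = limits
    cols = (wavelengths >= lam_min) & (wavelengths <= lam_max)
    return np.asarray(intensities, dtype=float)[:, cols].sum()
```

Because the limits are just per-dye parameters, the extent of the summed region can be calibrated separately for each dye, as the text describes.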
[0149]
[0150] Automation Options
[0151] The calibration of the system applies strictly only to a fixed, preset experiment, since it depends in particular on the chosen objective and the examined dyes. Following a modification of the experiment, it is therefore advantageous to let the system relearn the calibration by evaluating the averaged image data.
[0152] A continuous renewal of the calibration data from the last (few) LSM image scans is moreover advantageous. This allows the system to react autonomously to changes in the experimental conditions.
[0153] A further advantageous aspect lies in the option of automatically setting the spectral channels by defining the integration limits. This is made possible by the high-resolution sampling of the spectral space at increments of 1 nm or even finer. By way of example, an algorithm for finding maxima and minima can be applied to the integrated spectral signal in order to place the channel boundaries at the minima between the emission maxima.
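A toy version of such a maxima/minima search on the integrated spectral signal sp.sub.i might look like this. The peak-finding strategy is an illustrative assumption, not the patented algorithm; a real system would typically smooth the data first:

```python
import numpy as np

def channel_boundary(sp, wavelengths):
    """Place a channel boundary at the minimum (mi) of the integrated
    spectral signal between its two strongest emission maxima (mx1, mx2).

    sp          : 1-D integrated spectral signal sp_i per sensor column.
    wavelengths : wavelength lambda_i at each column i.
    """
    sp = np.asarray(sp, dtype=float)
    # strict interior local maxima
    maxima = [i for i in range(1, len(sp) - 1)
              if sp[i] > sp[i - 1] and sp[i] > sp[i + 1]]
    # two strongest peaks, ordered by column index
    i1, i2 = sorted(sorted(maxima, key=lambda i: sp[i])[-2:])
    # minimum between the two peaks -> channel boundary
    i_min = i1 + int(np.argmin(sp[i1:i2 + 1]))
    return wavelengths[i_min]
```

With the 1 nm (or finer) spectral sampling mentioned above, such a boundary can be set automatically between two dye emission bands without manual channel definition.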
LIST OF REFERENCE SIGNS
[0154] 10 Excitation beam path
[0155] 12 Light source
[0156] (12, 3) Reference pixel
[0157] 14 Excitation light
[0158] 16 Deflection mirror
[0159] 18 Main color splitter
[0160] 22 Scanning device
[0161] 23 Tube lens
[0162] 24 Microscope objective
[0163] 26 Sample plane
[0164] 27 Illumination spot
[0165] 28 Emission light
[0166] 29 Zeroth order diffraction
[0167] 30 Detection beam path
[0168] 32 Detection unit
[0169] (3, 3) Reference pixel
[0170] 34 Control and evaluation unit, in particular a PC
[0171] Dispersion device
[0172] 41 Dispersion direction
[0173] 42 Spectral component of the emission light 28
[0174] 43 Grating
[0175] 44 Spectral component of the emission light 28
[0176] 46 Spectral component of the emission light 28
[0177] 47 Spectral component of the emission light 28
[0178] 48 Imaging optical unit
[0179] 50 Two-dimensional matrix sensor
[0180] 51 Pixel of the matrix sensor 50
[0181] 53 Size of a pixel
[0182] 54 Matrix sensor
[0183] 56 Image plane (=plane of the matrix sensor 50)
[0184] 57 Centroid of the function s.sub.5,4
[0185] 60 Evaluation electronics
[0186] 71 Pixel region of a dye
[0187] 72 Pixel region of a dye
[0188] 80 Circularly symmetric point spread function
[0189] 100 Laser scanning microscope
[0190] 200 Detection apparatus
[0191] a Grid constant of the matrix sensor 50
[0192] d.sub.i,j Pixel reassignment displacement vector
[0193] d.sub.i,j.sup.x x-component of the displacement vector d.sub.i,j
[0194] d.sub.i,j.sup.y y-component of the displacement vector d.sub.i,j
[0195] g.sub.i,j(n) Image at the position n
[0196] g.sub.i,j, g.sub.i,j(x) Intensity value measured by the pixel i,j
[0197] g.sub.i,j Position vector to pixel i,j in image plane 56
[0198] h.sup.em(λ) PSF of the emission
[0199] h.sup.exc(x) PSF of the excitation
[0200] h.sub.i,j(x) Effective PSF for pixel i,j
[0201] i Column of the matrix sensor 50
[0202] (i,j) Pixel
[0203] j Row of the matrix sensor 50
[0204] m Number of rows of the matrix sensor 50
[0205] mi Minimum in the spectral distribution sp.sub.i
[0206] mx1 Maximum in the spectral distribution sp.sub.i
[0207] mx2 Maximum in the spectral distribution sp.sub.i
[0208] n Number of columns of the matrix sensor 50
[0209] n Position vector to an image point
[0210] n.sub.x x-component of the position vector n to an image point
[0211] n.sub.y y-component of the position vector n to an image point
[0212] r.sub.i,j Correlogram for pixel i,j
[0213] s1 Emission spectrum of a dye
[0214] s2 Emission spectrum of a dye
[0215] s.sub.i,j Position vector to the maximum of the effective PSF h.sub.i,j(x)
[0216] sp(λ) Spectral distribution (continuous)
[0217] sp.sub.i Spectral distribution (discrete)
[0218] x Location in the image plane 56 projected back into the sample plane
[0219] x Coordinate direction of matrix sensor 50 (=dispersion direction)
[0220] y Coordinate direction of matrix sensor 50 (perpendicular to dispersion direction)
[0221] Airy(x,λ) Airy function
[0222] I.sup.cal.sub.i Spatial intensity distribution
[0223] I.sub.i,j Intensity value measured by the pixel i,j
[0224] N.sub.x Number of image points in x-direction
[0225] N.sub.y Number of image points in y-direction
[0226] FFT Fast Fourier transform
[0227] FFT.sup.−1 Inverse fast Fourier transform
[0228] P.sub.i-n,j Overlap data relating to a spatial overlap on the matrix sensor 50 of spectral components of a point spread function of a dye that are displaced in the dispersion direction 41
[0229] PSF Point spread function
[0230] S Sample
[0231] SNR Signal-to-noise ratio
[0232] ξ(λ), ξ.sub.i Wavelength-dependent component of the displacement vector d.sub.i,j
[0233] δλ Spectral bandwidth of a dye
[0234] λ Wavelength
[0235] λ.sub.i Wavelength at the column i of the matrix sensor 50
[0236] λ.sub.r Wavelength at the location or column of a reference pixel
[0237] k Constant for modeling the dispersion for the displacement vector d.sub.i,j