Microscope and method for SPIM microscopy

09791687 · 2017-10-17

Abstract

A method for SPIM microscopy with a microscope which includes (1) an illumination arrangement for illuminating a sample with a substantially planar light sheet, and (2) a detection arrangement for detecting light emitted by the sample with an objective. The sample is displaced through the light sheet in the direction of the objective's optical axis, and the sample is illuminated under a first illumination angle and a second illumination angle. A plurality of sample planes is then detected at each illumination angle and stored as at least a first image stack and a second image stack. The image stacks are aligned relative to one another and combined into one image stack. The three-dimensional combined image stack is projected into a two-dimensional rendering, sample features are aligned, a coordinate transformation is determined, and the coordinate transformation is applied to the combined image stack for alignment.

Claims

1. A method for selective plane illumination microscopy (“SPIM”) with a microscope; wherein the microscope comprises: an illumination arrangement comprising an illumination light source; and an illumination beam path configured to illuminate a sample with a light sheet; a detection arrangement configured to detect light emitted by the sample with an objective; wherein the light sheet is substantially planar in a focus of the objective or in a defined plane in a vicinity of the focus of the objective, and the objective has an optical axis which intersects the plane of the light sheet at an angle different than zero; wherein the method comprises: displacing the sample through the light sheet in direction of the optical axis of the objective to detect different sample planes; illuminating the sample under at least a first illumination angle and a second illumination angle; detecting a plurality of sample planes at each illumination angle, and storing the sample planes as at least a first image stack and a second image stack; and wherein the method further comprises: a step 1 of aligning the first and second image stacks relative to one another so that coordinate systems of all of the image stacks are aligned in a coordinate system of the first image stack; a step 2 of combining the first and second image stacks into a three-dimensional combined image stack; a step 3 of projecting the three-dimensional combined image stack into a two-dimensional rendering; a step 4 of aligning sample features captured from different illumination directions of the two-dimensional rendering relative to one another with respect to position; a step 5 of determining a coordinate transformation from coordinates of the aligned sample features; and a step 6 of applying the coordinate transformation for alignment to the combined image stack.

2. The method according to claim 1; wherein the orientation of the combined image stack is changed, and steps 2-6 are applied to the newly oriented combined image stack to generate a three-dimensionally oriented image stack.

3. The method according to claim 2; wherein the three-dimensionally oriented image stack is adjusted with respect to its orientation to an original illumination direction.

4. The method according to claim 1; wherein the coordinate transformation is a rigid transformation or an affine transformation or an elastic transformation or a locally elastic transformation.

5. The method according to claim 1; wherein an intensity comparison is carried out in each instance between the images in step 3 within the combined image stack for the individual pixels of the images.

6. The method according to claim 5; wherein a pixel with a maximum value or minimum value or a predefined threshold value or the average or the median is utilized.

7. The method according to claim 1; wherein the alignment according to step 4 is carried out in a computer through image analysis and/or via input means for a user.

8. The method according to claim 7; wherein a rough alignment is provided in the computer and a fine alignment is provided via input means.

9. The method according to claim 1; wherein the at least first image stack and second image stack are detected in at least two spectral regions for acquiring different fluorescence markers.

10. The method according to claim 9; wherein a first image stack and a second image stack with identical or similar spectral region are utilized to carry out steps 1-6.

11. The method according to claim 1; wherein, during time-series captures, steps 1-6 are carried out at a first time point and the determined coordinate transformations are applied to image stacks captured at further time points.

12. The method according to claim 1; wherein steps 1-6 are repeated at a plurality of time points of a time series during time-series captures.

13. A non-transitory computer-readable storage medium comprising a computer program configured to implement the method according to claim 1.

14. A microscope for selective plane illumination microscopy (“SPIM”) comprising: an illumination arrangement comprising an illumination light source; and an illumination beam path configured to illuminate a sample with a light sheet; a detection arrangement configured to detect light emitted by the sample with an objective; wherein the light sheet is substantially planar in a focus of the objective or in a defined plane in a vicinity of the focus of the objective, and the objective has an optical axis which intersects the plane of the light sheet at an angle different than zero; and wherein the microscope is configured to implement the method according to claim 1.

15. The microscope according to claim 14, further comprising: a graphical user interface (GUI) for implementation of at least step 4 by a user.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 shows the basic construction of a SPIM microscope for implementing the method according to the invention;

(2) FIG. 2 lists the basic sequence of the method according to the invention in method steps S1-S11;

(3) FIG. 3 shows the sample chamber PK (see FIG. 1) which is rotatable around a perpendicular axis;

(4) In FIG. 4, a projection from the three-dimensional rendering into a two-dimensional rendering is carried out in step S4; and

(5) In FIG. 5, in step S7, based on this image stack, a reorientation into ST1+ST2 VO is carried out and a top view in the direction of the Y axis is generated.

DETAILED DESCRIPTION OF EMBODIMENTS

(6) It is to be understood that the figures and descriptions of the present invention have been simplified to illustrate elements that are relevant for a clear understanding of the present invention, while eliminating, for purposes of clarity, many other elements which are conventional in this art. Those of ordinary skill in the art will recognize that other elements are desirable for implementing the present invention. However, because such elements are well known in the art, and because they do not facilitate a better understanding of the present invention, a discussion of such elements is not provided herein.

(7) The present invention will now be described in detail on the basis of exemplary embodiments.

(8) FIG. 2 lists the basic sequence of the method according to the invention in method steps S1-S11:

S1. Capture of an image stack from at least two directions.
S2. Alignment of the one image stack with the coordinates of the other image stack.
S3. Superposition of the image stacks.
S4. Projection from the three-dimensional space into a two-dimensional rendering.
S5. The sample features which are contained in the rendering and which were captured from different directions are corrected with respect to their position relative to one another.
S6. Determination of a transformation matrix from the position correction and application to the superposed image stacks from S3.
S7. Change of the view of the superposed image stack which is corrected in X and Y; the top view is selected in FIG. 5.
S8. Projection from the three-dimensional space into a two-dimensional rendering.

(9) S9. New position correction of the sample features contained in the rendering relative to one another.
S10. Determination of a transformation matrix from the position correction and application to the superposed image stacks from S7.
S11. Change of the view of the image stack from S10 into the original view from S3 (front view in the example).

(10) The procedure will be described in more detail in FIGS. 3 to 5 with reference to the above-mentioned method steps.

(11) FIG. 3 shows the sample chamber PK (see FIG. 1) which is rotatable around a perpendicular axis. By rotating the sample chamber and moving the light sheet through the sample by a Z-displacement of the sample chamber and/or of the light sheet, image stacks from different illumination angles of the light sheet are captured via the objective. For example, as was stated above, the capture can take place under illumination axes z, z′ which are perpendicular to one another.

(12) Without limiting generality, other capture angles are also possible, for example, three captures under an angular offset of 30 degrees.

(13) By moving the light sheet in the Z direction, stacks of images are recorded and stored in the image storage (CU in FIG. 1).

(14) Stacks of individual images ST1 and ST2, which were captured at angles of 0 and 90 degrees, are shown schematically. ST1 and ST2 contain different object details of the sample.

(15) Three individual images per stack are shown here merely in the interest of clarity; each stack can include 1000 individual images, for example.

(16) The quantity of individual images per stack need not be identical in order to carry out the method according to the invention.

(17) In step S2 in FIG. 3, the orientation of stack ST2 is adapted to the orientation of stack ST1, i.e., its coordinate system is rotated by 90 degrees along the depicted y axis.
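The reorientation of step S2 can be sketched as follows, assuming the stacks are held as NumPy arrays indexed (Z, Y, X); the function name `reorient_stack` and the toy data are illustrative, not taken from the patent:

```python
import numpy as np

def reorient_stack(stack):
    """Rotate a (Z, Y, X) image stack by 90 degrees about the Y axis,
    mapping the second stack's coordinate system onto the first's."""
    # np.rot90 with axes=(0, 2) rotates in the X/Z plane, i.e., about Y.
    return np.rot90(stack, k=1, axes=(0, 2))

# A toy stack captured at 90 degrees: 4 slices of 4x4 pixels.
st2 = np.arange(4 * 4 * 4).reshape(4, 4, 4)
st2_aligned = reorient_stack(st2)
```

Four successive applications return the original orientation, which is a convenient sanity check for the chosen rotation axis.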

(18) In the next step S3, identically oriented image stacks ST1 and ST2 are superposed in a collective stack ST1+ST2. In doing so, the exact sequence of individual images is not crucial; for example, ST2 can also be arranged behind ST1.
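A minimal sketch of the superposition of step S3, assuming both stacks are already identically oriented and share lateral dimensions (the array shapes are illustrative):

```python
import numpy as np

# Two identically oriented stacks; the slice counts need not match.
st1 = np.zeros((3, 8, 8))   # three individual images
st2 = np.ones((5, 8, 8))    # five individual images

# Superpose into a collective stack ST1+ST2; the exact sequence is
# not crucial (ST2 could equally be arranged in front of ST1).
combined = np.concatenate([st1, st2], axis=0)
```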

(19) Because, as mentioned above, ST1 and ST2 may in principle contain different quantities of individual images, the Z-distances between the individual images can also vary during capture.

(20) The image stacks can also have a lateral offset, or the individual images in ST1 and ST2 can have different dimensions in the lateral direction.

(21) In FIG. 4, a projection from the three-dimensional rendering into a two-dimensional rendering is carried out in step S4.

(22) In this case, for each pixel position, the image pixel with the greatest intensity along the Z direction (axially) is determined from the superposed images, for example, with reference to the image pixels of an image from ST1. Instead of the maximum intensity, a defined intensity threshold can also be selected, or the minimum intensity in the Z direction can be determined.

(23) When this is carried out for all image pixels, a two-dimensional rendering ST1+ST2 2D results which, as shown schematically, contains image data from ST1 and from ST2 (shown in dashes). These data may be displayed differently to the user.
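The projection of steps S4/S8 can be sketched as a pixel-wise intensity comparison along one axis, here with NumPy; the function name `project_stack` is illustrative, and the threshold criterion mentioned in claim 6 is omitted for brevity:

```python
import numpy as np

def project_stack(stack, mode="max"):
    """Collapse a (Z, Y, X) stack into a 2-D rendering by comparing
    intensities pixel-by-pixel along the Z axis."""
    if mode == "max":        # maximum-intensity projection
        return stack.max(axis=0)
    if mode == "min":        # minimum-intensity projection
        return stack.min(axis=0)
    if mode == "mean":       # average intensity
        return stack.mean(axis=0)
    if mode == "median":     # median intensity
        return np.median(stack, axis=0)
    raise ValueError(f"unknown mode: {mode}")

# Toy superposed stack: three uniform 3x3 slices with values 1, 5, 3.
combined = np.stack([np.full((3, 3), v) for v in (1.0, 5.0, 3.0)])
mip = project_stack(combined, "max")
```

Projecting in the Y direction instead (step S8) would simply use `axis=1` on the same array layout.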

(24) In step S5, a displacement and/or rotation of the individual images ST1 and ST2 in the 2D rendering, in this case in the X/Y plane, is carried out by the user via input means (CU in FIG. 1), in an automated manner, or first roughly in an automated manner and then finely by the user; accordingly, the image positions of ST1 and ST2 are corrected relative to one another. For this purpose, the two image datasets ST1 and ST2 are separated from each other in the 2D plane and arranged so as to be displaceable and rotatable, generally transformable.

(25) The displacements and rotations which are carried out are detected in the CU and converted into a mathematical coordinate transformation for the X/Y coordinates in the two-dimensional rendering.
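The conversion of the detected displacement and rotation into a coordinate transformation can be sketched as the construction of a homogeneous matrix; `dx`, `dy` and `theta_deg` stand in for the user's slider values and are illustrative (a rigid transformation is shown, though claim 4 also permits affine and elastic variants):

```python
import numpy as np

def transform_from_user_input(dx, dy, theta_deg):
    """Build a 3x3 homogeneous matrix from an X/Y displacement (dx, dy)
    and a rotation angle in degrees (rigid transformation)."""
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0.0, 0.0, 1.0]])

# Rotate by 90 degrees, then shift by (2, -1).
M = transform_from_user_input(2.0, -1.0, 90.0)
p = M @ np.array([1.0, 0.0, 1.0])   # transform the point (1, 0)
```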

(26) For example and without limitation, this can be an affine transformation. This transformation is applied (step S6) to the superposed image stack ST1+ST2 as it was before step S4, resulting in a three-dimensional image stack that is corrected with respect to X/Y. In FIG. 5, in step S7, based on this image stack, a reorientation into ST1+ST2 VO is carried out and a top view in the direction of the Y axis is generated. This top view still contains structure features of ST1 and ST2, and, in a manner analogous to step S4, a two-dimensional rendering is generated in step S8 from the three-dimensional rendering through intensity analysis, but this time in the Y direction. In this subsequent two-dimensional rendering, the sample features from ST1 and ST2, which are again individually distinguishable and displaceable/rotatable, lie in the X/Z plane. In step S9, in a manner analogous to S5, the automated and/or manual alignment and superposition relative to one another is carried out by determining a coordinate transformation which, in a manner analogous to S6, leads in S10 to a three-dimensional rendering that is now aligned with respect to the sample details in the X/Z direction, as previously in the X/Y direction, and is available in the CU for further rendering or storage and subsequent examination.
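Applying the determined X/Y transformation to the superposed three-dimensional stack (step S6) can be sketched slice-wise, here with a simple nearest-neighbor inverse mapping in NumPy; this is an illustrative sketch, not the patent's implementation, and interpolating variants (e.g., in scipy.ndimage) would typically be preferred in practice:

```python
import numpy as np

def apply_xy_transform(stack, M):
    """Apply a 3x3 homogeneous X/Y transformation M to every slice
    of a (Z, Y, X) stack (nearest-neighbor, inverse mapping)."""
    Minv = np.linalg.inv(M)
    z, h, w = stack.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Map every output pixel center back into the source slice.
    src = Minv @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx = np.round(src[0]).astype(int)
    sy = np.round(src[1]).astype(int)
    inside = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(stack)
    # Fancy indexing copies the valid source pixels into all slices.
    out[:, ys.ravel()[inside], xs.ravel()[inside]] = stack[:, sy[inside], sx[inside]]
    return out

# Shift the whole stack by one pixel in X.
shift_x = np.array([[1.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
stack = np.zeros((2, 4, 4))
stack[:, 1, 1] = 7.0
shifted = apply_xy_transform(stack, shift_x)
```

Because the same 2-D matrix is applied to every slice, the correction determined once in the projection carries over to the entire three-dimensional stack.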

(27) In step S11, restoration to the original orientation as after step S3, with the axial direction in the Z direction, can take place in order to better mirror the original capture conditions.

(28) As was mentioned before, there can be more than two image stacks, for example, three image stacks captured at an offset of 30 degrees. Advantageously, however, this does not increase the number of method steps described above; rather, more than two image stacks are simply overlaid and reduced to two coordinates in the two different orientations, as shown in FIG. 3.

(29) A graphical user interface (GUI) which is conventional in the art can be used on a screen as input means for the user, for example, with a plurality of sliders for X, Y and Z displacement and for rotation.

(30) While this invention has been described in conjunction with the specific embodiments outlined above, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. Accordingly, the preferred embodiments of the invention as set forth above are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the inventions as defined in the following claims.