Angularly-selective illumination
11092794 · 2021-08-17
CPC classification: G02B27/58; G02B21/008; G02B21/16; G02B21/0032; G02B21/367 (all Physics)
International classification: G02B21/36; G02B27/58 (all Physics)
Abstract
An optical apparatus comprises an illumination module (100) comprising a carrier (110), which has at least one light-transmissive region (112), for example. The illumination module (100) comprises a plurality of light sources (111), which are arranged on the carrier (110).
Claims
1. A microscope comprising: an illumination module comprising a carrier and a plurality of light sources, which are arranged on the carrier, wherein the plurality of light sources are configured to be actuated separately for light production purposes, a computing unit, which is configured to actuate the illumination module for the purposes of illuminating a specimen object from a first illumination direction with first light and to capture a first image during the illumination from the first illumination direction by means of at least one detector, wherein the computing unit is furthermore configured to actuate the illumination module for the purposes of illuminating the specimen object from a second illumination direction with second light and to capture a second image during the illumination from the second illumination direction by means of the at least one detector, wherein the computing unit is furthermore configured to determine a distance between imaging locations of the specimen object in the first image and in the second image, wherein the first light and the second light have different wavelengths and/or polarizations, and wherein the illumination of the specimen object from the first illumination direction and the second illumination direction, at least in part, occurs parallel in time.
2. The microscope as claimed in claim 1, wherein the computing unit is furthermore configured to determine a position of the specimen object parallel to the optical axis on the basis of the distance between the imaging locations of the specimen object in the first image and in the second image.
3. The microscope as claimed in claim 1, furthermore comprising: a specimen holder, which is configured to immobilize the specimen object, a motor, which is configured to displace a focal plane of the microscope in relation to the specimen holder, wherein the computing unit is configured to actuate the motor on the basis of the determined distance between the imaging locations of the specimen object in the first image and the second image.
4. The microscope as claimed in claim 1, furthermore comprising: a specimen holder, which is configured to immobilize the specimen object, a manual adjustment unit, which is configured to displace a focal plane of the microscope in relation to the specimen holder, a user interface, which is configured to output an indicator to a user, said indicator proposing an actuation of the adjustment unit on the basis of the distance between the imaging locations of the specimen object in the first image and the second image.
5. The microscope as claimed in claim 1, furthermore comprising: the at least one detector, which comprises a plurality of groups of pixels, which each have different sensitivities in relation to wavelengths and/or polarizations of light.
6. The microscope as claimed in claim 1, furthermore comprising: a plurality of detectors, at least one spectral element, which is configured to produce a plurality of partial beam paths, assigned to the detectors, on the basis of a separation of light in relation to wavelengths and/or polarizations.
7. The microscope as claimed in claim 6, wherein the at least one spectral element is selected from the following group: beam splitter; dichroic element; color filter; polarization filter; grating; filter wheel; and prism.
8. A method, comprising: illuminating a specimen object from a first illumination direction with first light and capturing a first image during the illumination from the first illumination direction, illuminating the specimen object from a second illumination direction with second light and capturing a second image during the illumination from the second illumination direction, and determining a distance between imaging locations of the specimen object in the first image and in the second image, wherein the first light and the second light have different wavelengths and/or polarizations, and wherein the illumination of the specimen object from the first illumination direction and the second illumination direction, at least in part, occurs parallel in time.
Description
BRIEF DESCRIPTION OF THE FIGURES
(1) The properties, features and advantages of this invention described above, and the way in which they are achieved, will become clearer and more readily comprehensible in association with the following description of the exemplary embodiments, which are explained in greater detail in conjunction with the drawings.
DETAILED DESCRIPTION OF EMBODIMENTS
(35) The present invention is explained in greater detail below on the basis of preferred embodiments with reference to the drawings. In the figures, identical reference signs designate identical or similar elements. The figures are schematic representations of different embodiments of the invention. Elements illustrated in the figures are not necessarily depicted as true to scale. Rather, the different elements illustrated in the figures are reproduced in such a way that their function and general purpose become comprehensible to the person skilled in the art.
(36) Below, techniques are described in relation to an illumination module, which can be used for the angle-selective illumination of a specimen object. The illumination module comprises a plurality of light sources, which are arranged at a distance from one another, and thus can implement illumination of a specimen object from a plurality of illumination directions. Different polarizations are assigned to the different illumination directions. Then, measurement images, which correspond to the individual illumination directions, can be combined with one another. As a result, a result image can be obtained by digital post-processing, said result image having a phase contrast.
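The combination of measurement images described above can be illustrated with a short sketch. The normalized-difference rule below is one common choice for combining images from opposite illumination directions into a phase-contrast result image; it is an assumption for illustration, not necessarily the exact post-processing of the invention:

```python
import numpy as np

def phase_contrast_image(img_a, img_b, eps=1e-9):
    """Combine two measurement images captured under opposite
    illumination directions into a result image with phase contrast.

    The normalized difference is one possible digital
    post-processing rule: absorption contributions largely cancel
    in the numerator, while phase-gradient contributions survive.
    """
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)
    return (a - b) / (a + b + eps)

# A pure absorption object yields identical measurement images
# and hence zero phase contrast:
flat = np.full((4, 4), 100.0)
assert np.allclose(phase_contrast_image(flat, flat), 0.0)
```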
(37) Here, various examples relate to the particularly flexible combination of such an illumination module for angle-selective illumination with optical apparatuses of different configuration. Various examples describe how such an illumination module can be combined with an LSM. Further examples describe how such an illumination module can be combined with a reflected-light microscope or a transmitted-light microscope.
(38) By way of example, different examples describe how such an illumination module can be combined with a laser light source of an LSM. Here, the illumination module could be structurally connected or coupled to a PMT that is arranged in transmission geometry and configured to detect a fluorescence signal of a corresponding specimen object. To this end, the illumination module can have a light-transmissive region, for example, through which light can pass to the detector. By way of example, the illumination module could be implemented by a carrier with a centrally arranged recess/aperture as a light-transmissive region; then it can be possible to at least partly arrange the PMT in the aperture. Here it is also possible, for example, for the emission spectrum of the light sources of the illumination module to differ from the detection spectrum of the PMT. Here, the detection spectrum of the PMT can be matched to the wavelength of a laser light source and/or to the wavelength of a fluorescence signal; by way of example, the emission spectrum of the light sources of the illumination module can lie in the infrared spectral range.
(39) In further examples, it would be possible, for example, for the illumination module to have a perforated carrier, wherein no further optical elements are arranged in the aperture or apertures. In other examples, different optical elements can be arranged in the region of the at least one aperture, for example, a lens, a Bertrand lens with angle-selective shadowing, a grating, etc.
(40) In various examples, it may be possible to combine the illumination module with an optical microscope. Here, the illumination module can be arranged in a detection beam path or in a further beam path of the illumination module, for example, wherein the further beam path can at least partly differ from the detection beam path. In various examples, it is possible to combine such an illumination module with an overview camera. By way of example, the overview camera can be configured to capture an overview image of a specimen holder of the microscope. A corresponding beam path that is associated with the camera can therefore have a comparatively low magnification.
(41) By way of example, the microscope can be a conventional reflected-light microscope or transmitted-light microscope. Such techniques can be applied, in particular, in conjunction with wide-field microscopy, in which an overview image of a specimen object is created and fine positioning takes place on the basis of the overview image. The individual configuration and arrangement of the light sources of such an illumination module for angle-selective illumination can vary in different examples. By way of example, an LED array could be used as a corresponding matrix structure. In other examples, use could also be made of light sources with a comparatively large lateral extent, such as halogen light sources or organic light-emitting diodes, for example. By way of example, these could be arranged in different quadrants on the carrier in relation to a centrally arranged aperture, which forms the light-transmissive region. Optionally, it is also possible for organic light-emitting diodes themselves to have a light-transmissive configuration and thus implement the light-transmissive region.
(42) The illumination module having a light-transmissive region can be dispensed with in various examples. By way of example, the illumination module could have a continuous carrier, which is not light-transmissive, in such examples. By way of example, it would be possible here for the at least one detector to be applied, for example adhesively bonded, to the carrier. By way of example, the detector could be applied centrally or in off-centered fashion on the carrier.
(43) Different effects can be obtained by means of such techniques. By way of example, it may be possible to combine different imaging techniques with one another. By way of example, conventional, analog imaging techniques—such as analog, optical reflected-light microscopy or analog, optical transmitted-light microscopy or fluorescence imaging—can be combined with techniques that are based on digital post-processing. Moreover, it is possible to combine fluorescence imaging techniques with techniques of non-fluorescence imaging. By way of example, digital techniques in conjunction with angle-selective illumination—as described in conjunction with DE 10 2014 112 242 A1—may facilitate a fully automatic or at least partly automatic production of a suitable contrast. The corresponding disclosure, in its entirety, is incorporated herein by cross-reference. Therefore, such techniques can also be implemented without in-depth expert knowledge. Particularly in conjunction with fluorescence imaging, using suitable emission spectra for the light sources of the illumination module makes it possible to avoid biological specimen objects being degraded by the angle-selective illumination, which would adversely affect the fluorescence imaging.
(44) Moreover, it is possible to implement optical apparatuses which, as described above, combine different imaging techniques but use a common objective to this end. This saves installation space and reduces costs and complexity. By way of example, by means of the techniques of angle-selective illumination, it may be possible to produce phase-contrast images; the provision of dedicated differential interference contrast (DIC) optics or Zernike phase-contrast optics may then be dispensed with.
(46) In the example of
(47) A light-transmissive region 112 is arranged in the region of the geometric center of the carrier 101. In principle, the light-transmissive region 112 is optional. In one example, the light-transmissive region 112 can be implemented by light-transmissive solid material; examples of light-transmissive material would be, for example: glass; plastic; plastics film; etc. By way of example, the light-transmissive material can be embedded in the surrounding material of the carrier 101 and can be securely connected to the latter. By way of example, the light-transmissive region 112 could be implemented as a glass plate, which is embedded in the metallic carrier. In a further example, the light-transmissive region 112 can be implemented by a cutout or an aperture.
(48) While a single, contiguous light-transmissive region 112 is illustrated in relation to the example of
(49) What can be achieved as a result of the light-transmissive region is that the illumination module 100 can be combined particularly flexibly with an optical apparatus. By way of example, it may be possible for the illumination module 100 to be arranged within the beam path of the optical apparatus; then, light can pass through the light-transmissive region 112 along the beam path through the illumination module 100. In this way, the illumination module 100 can be flexibly integrated into the optical apparatus.
(51) It is evident from
(55) In the example in
(57) The illumination module 100 defines a further beam path 222 (dot-dashed line in
(59) Despite the spatial proximity between the illumination module 100 and the detector 230, an interaction between these elements 100, 230 can be comparatively low. By way of example, this can be achieved by virtue of the detection spectrum of the detector 230 being different from the emission spectrum of the light sources 111.
(60) While the detector 230 is arranged in the aperture of the light-transmissive region 112 in
(61) The examples above illustrated scenarios in which the carrier 110 of the illumination module 100 has a light-transmissive region 112, which is implemented by an aperture, for example. However, in other examples, it is possible for the carrier 110 not to have a corresponding light-transmissive region 112. Such a scenario is illustrated in the example in
(64) By way of example, the detector 241 may be a CCD sensor or a CMOS sensor. By way of example, the detector 241 could be part of a camera. By way of example, the detector 241 could produce an overview image. However, the detector 241 could also produce a greatly magnified image.
(67) In the example in
(70) By way of example, it would be possible for the camera 311 to capture an image of the specimen object under the angle-selective illumination by means of the illumination module 100. It would also be possible to provide a further detector (not illustrated in
(73) The microscope 300 of
(76) An illumination module 100 according to the example in
(77) In the example in
(78) While the example in
(80) By means of such techniques, it is possible to implement different illumination directions with light of different wavelengths and/or polarizations.
(82) In the example in
(83) In the example in
(85) While
(89) From a comparison of
(91) In the examples in
(92) In the examples in
(94) Furthermore, the computing unit 299 is configured to combine the measurement images to obtain a result image. The result image has a phase contrast. Here, the computing unit 299 can be configured to apply techniques of digital post-processing, which are disclosed in relation to DE 10 2014 112 242 A1.
(96) Initially, a current illumination direction 91 is selected in step 2001. Then, one or more light sources 111 are activated in step 2002 such that the illumination of a specimen object is obtained from the selected illumination direction. An associated measurement image is captured in step 2003, for example by means of a suitable detector 241, such as a CCD detector or a CMOS detector, an overview camera and/or a photomultiplier. To this end, the detector 241 can be actuated in a suitable manner. The measurement image is captured while the specimen object is illuminated from the current illumination direction.
(97) Then, a check is carried out in step 2004 as to whether it is necessary to capture a further measurement image from a further illumination direction 91. If this is the case, steps 2001-2003 are carried out again.
(98) Subsequently, a result image is produced in step 2005. The result image has a phase contrast for an imaged object (phase-contrast image). The phase-contrast image is determined by combining the measurement images, which were captured in the iterations of step 2003.
(99) While the measurement images typically have no, or no significant, phase contrast, combining them makes it possible to produce a phase-contrast image having a significant phase-contrast component. Particularly in comparison with other conventional techniques of phase-contrast imaging, the method described here can achieve both a particularly simple implementation of the phase-contrast imaging and a particularly simple, and hence cost-effective and robust, configuration of the optical apparatus.
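The iteration over steps 2001 to 2005 can be sketched as a simple control loop; the callables below are hypothetical placeholders for the illumination module, the detector 241 and the digital post-processing of the computing unit:

```python
def capture_phase_contrast(directions, illuminate, capture, combine):
    """Acquire one measurement image per illumination direction and
    combine the measurement images into a result image.

    `illuminate`, `capture` and `combine` are hypothetical
    interfaces standing in for the illumination module, the
    detector and the post-processing, respectively.
    """
    measurement_images = []
    for direction in directions:              # step 2001: select direction
        illuminate(direction)                 # step 2002: activate light sources
        measurement_images.append(capture())  # step 2003: capture measurement image
    # step 2004 (check for further directions) is the loop condition
    return combine(measurement_images)        # step 2005: produce result image
```

The check of step 2004 is absorbed into the loop condition; each pass through the body corresponds to one illumination direction 91.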
(100) By way of example, it would be possible for the illumination directions to form pairs in each case or to be arranged in pair-wise fashion. Here, it may be possible for an illumination direction always to be assigned to only one pair. However, it would also be possible for at least some of the illumination directions to be assigned to a plurality of pairs. At least the measurement images of the illumination directions belonging to one pair can then be combined to obtain a respective result image.
(101) Different criteria may govern the assignment of two illumination directions 91 to a pair. By way of example, geometric criteria relating to the illumination directions 91 of a pair may apply, for instance in relation to the optical axis; in this way, it may be possible to produce a particularly high phase-contrast component in the phase-contrast image. By way of example, the illumination directions of a pair could include symmetric angles in relation to the axis of the optical apparatus along which an idealized light ray experiences no, or only little, deflection (optical axis), and/or be arranged symmetrically in relation to a plane containing the optical axis. As an alternative or in addition thereto, the time of illumination and capture could be taken into account as a criterion for assigning two illumination directions 91 to a pair; by way of example, those illumination directions 91 for which the respective measurement images are captured in immediate succession, or close together in time, can form a pair; in this way, a certain robustness in relation to movement artifacts can be obtained. In general, a subsequent evaluation for producing the phase-contrast image can also be taken into account as an alternative or additional criterion; by way of example, an individual result image could always be produced for the two measurement images of a pair by combining these measurement images.
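The geometric pairing criterion (symmetry in relation to the optical axis) can be sketched as follows; representing each illumination direction by the lateral (x, y) offset of its light source relative to the optical axis is an assumption for illustration:

```python
def pair_by_symmetry(directions, tol=1e-6):
    """Assign illumination directions to pairs.

    Each direction is a hypothetical 2D offset (x, y) of its light
    source relative to the optical axis; two directions form a pair
    if they are point-symmetric about the axis. Directions without
    a symmetric partner are silently dropped in this sketch.
    """
    remaining = list(directions)
    pairs = []
    while remaining:
        d = remaining.pop(0)
        for i, e in enumerate(remaining):
            if abs(d[0] + e[0]) < tol and abs(d[1] + e[1]) < tol:
                pairs.append((d, remaining.pop(i)))
                break
    return pairs

# Four directions on two axes yield two symmetric pairs:
pairs = pair_by_symmetry([(1, 0), (0, 1), (-1, 0), (0, -1)])
assert ((1, 0), (-1, 0)) in pairs and ((0, 1), (0, -1)) in pairs
```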
(102) It would be possible for the two illumination directions 91 of a pair to include correlating angles with the optical axis. Correlating angles can mean: substantially the same angles, or substantially the same angles in terms of magnitude. Here, "substantially" can be characterized, in particular, in relation to technical limitations in accuracy, such as systematic or statistical errors when capturing the measurement images with the optical apparatus and/or a limitation of an illumination apparatus of the optical apparatus caused by its construction. Angles which differ in absolute terms but are the same within the accuracy of the optical apparatus can thus satisfy the criterion of substantially the same angles. Such criteria apply below to corresponding specifications of angles and/or other properties of the illumination directions 91 or of the optical apparatus.
(103) For the purposes of describing geometric properties of the illumination directions, it may be helpful to describe the illumination directions 91 by way of an illumination vector. The illumination vectors can be defined in relation to an origin of the optical apparatus, for instance in relation to the object and/or an intersection of a focal plane with the optical axis. A length of the illumination vectors can correspond to an amplitude of the illumination from the respective illumination direction, for example; in the subsequent explanation of the orientation of various illumination vectors, it may be possible to dispense with taking account of a length of the illumination vectors. Then, the angle included by an illumination vector with the optical axis may correspond to the angle of the respective illumination direction.
(104) By way of example, it may be desirable for illumination vectors of a pair of illumination directions to include an angle with one another, said angle being greater than 10°, preferably greater than 20°, particularly preferably greater than 40°. As an alternative or in addition thereto, it would also be possible for illumination vectors of a pair of illumination directions to each include an angle with the optical axis, said angle being greater than 5°, preferably greater than 10°, particularly preferably greater than 20°. What this can achieve is that a difference vector between the two illumination vectors of a pair of illumination directions 91 has a significant component perpendicular to the optical axis; this can increase the phase contrast in the phase-contrast image particularly strongly.
(105) In particular, it may be possible for the illumination vectors of two illumination directions of a pair of illumination directions to be transformed into one another by rotation about the optical axis of the optical apparatus through an angle of greater than 25°, preferably greater than 50°, particularly preferably greater than 85°. As a result of this, the difference vector becomes particularly large.
(106) The two illumination directions of a pair of illumination directions can also be arranged in such a way that associated illumination vectors include with one another, by way of rotation about the optical axis, an angle of 160° to 200°, advantageously of 175° to 185°, particularly advantageously of 180°. It would also be possible for the associated illumination vectors to be transformed into one another by way of rotation about the optical axis through an angle of 70° to 110°, advantageously of 85° to 95°, particularly advantageously of 90°. Expressed differently, the two illumination vectors of a pair of illumination directions 91 can lie in a plane and can be arranged symmetrically or substantially symmetrically in relation to the optical axis. The optical axis can lie in this plane (be contained in this plane), for example, if a rotation through 180° transforms the two illumination vectors into one another. In this way, a comparatively large phase-contrast component can be obtained in the phase-contrast image because the two illumination directions of a pair are arranged in complementary fashion to one another in this way.
(107) In general, it may be desirable to use a relatively large number of illumination directions for the purpose of obtaining the phase-contrast image. In particular, the phase-contrast component in the phase-contrast image can increase in the case of an appropriate arrangement of the various illumination directions 91. By way of example, it would be possible to take account of a plurality of pairs of illumination directions. By way of example, it would be possible to illuminate the object sequentially from 2 or 4 or 6 or 8 illumination directions or more illumination directions. By way of example, it would be possible for a first pair of illumination directions to determine a first difference vector of associated illumination vectors. Accordingly, a second pair of illumination directions can determine a second difference vector of associated illumination vectors. The first and second difference vector can include an angle with one another, for example an angle of 70° to 110°, advantageously 85° to 95°, particularly advantageously 90°.
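The angle criteria of the preceding paragraphs can be checked numerically once the illumination directions are written as vectors; the 3D representation with the optical axis as the z-axis is an assumption for illustration:

```python
import math

def angle_between(u, v):
    """Angle in degrees included by two illumination vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    # Clamp to guard against floating-point rounding outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

# Two illumination vectors inclined 20 deg to the optical axis (z),
# arranged symmetrically about it, include 40 deg with one another:
z = (0.0, 0.0, 1.0)
u = (math.sin(math.radians(20)), 0.0, math.cos(math.radians(20)))
v = (-u[0], 0.0, u[2])
assert round(angle_between(u, z)) == 20
assert round(angle_between(u, v)) == 40
```

Criteria such as "greater than 10° with one another" or "greater than 5° with the optical axis" then reduce to threshold comparisons on the returned angle.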
(108) Accordingly, it would also be possible for a first plane to be defined by the illumination vectors of a first pair of illumination directions 91. By way of example, a second plane can be defined by the illumination vectors of a second pair of illumination directions. The first plane and the second plane can include an angle, for example an angle of 70° to 110°, with one another, advantageously 85° to 95°, particularly advantageously 90°. By way of example, the planes can be defined by virtue of the respective illumination vectors lying in the plane. It would also be possible for the planes to be defined by a normal vector that is oriented parallel to a difference vector of the respective illumination vectors; the optical axis can lie in the plane.
(109) Thus, in this way, difference vectors of the illumination vectors of the two pairs of illumination directions 91 can include a comparatively large angle of up to 90° with one another; as a result, the phase-contrast in the phase-contrast image can be increased along various image directions. By way of example, a phase-contrast component in the phase-contrast image can be particularly large along those image directions for which the illumination vectors of a pair of illumination directions have a component perpendicular to the optical axis. In particular, a phase-contrast component in the phase-contrast image can be particularly large along those directions for which the difference vector of the illumination vectors of a pair of illumination directions has a component perpendicular to the optical axis. Therefore, it may be desirable to use complementary and/or symmetrically arranged illumination directions. In order to produce an isotropic phase contrast in the phase-contrast image, it may be desirable for the illumination directions to include uniformly distributed angles with the optical axis.
(110) Such illumination directions 91, or illumination vectors as described above, can be implemented by suitable arrangement and/or extent of the light sources 111 on the carrier 101.
(111) The above-described techniques can also be used to determine a position of a specimen object parallel to the optical axis of an optical apparatus (z-position). To this end, use can be made, in particular, of the different illumination modules described herein, which facilitate an illumination from different illumination directions.
(112) In the three-dimensional space spanned by the x, y and z axes, the z-component of the position may thus be determined; the z-axis is defined by the optical axis, for example runs parallel to it. On the basis of the determined z-position, a focus unit of the optical apparatus, for example a motor coupled to a specimen holder, may be driven and, in this way, the object may be positioned in the focal plane of the optical apparatus (focusing of the object). Autofocus applications can thus be implemented. If no autofocus is present for motor-driven adjustment of the focal plane but a manual adjustment unit is provided instead, an indicator can be output to the user via a user interface. This indicator can instruct the user to undertake a certain actuation of the adjustment unit in order to facilitate focusing of the object. By way of example, the user interface can comprise an optical output and/or an acoustic output; for instance, the direction of rotation of a setting dial of the adjustment unit could be indicated.
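The indicator for a manual adjustment unit can be sketched as follows; the sign convention (positive z-position meaning the object lies beyond the focal plane) and the tolerance are assumptions for illustration:

```python
def focus_indicator(z_position, tolerance=1.0):
    """Translate a determined z-position (distance of the specimen
    object from the focal plane along the optical axis) into an
    instruction for a manual adjustment unit.

    Hypothetical convention: positive z-position means the object
    lies beyond the focal plane, so the dial is turned backwards;
    within the tolerance, no adjustment is proposed.
    """
    if abs(z_position) <= tolerance:
        return "in focus"
    return "turn dial backwards" if z_position > 0 else "turn dial forwards"
```

In an autofocus configuration, the same quantity would instead be passed to the motor that displaces the focal plane relative to the specimen holder.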
(113) Images of the object which image the object particularly sharply may subsequently be captured. Such techniques may be employed in a wide variety of fields, e.g. in microscopy or in fluorescence measurement or in parallel with phase contrast imaging.
(114) By means of the techniques described herein, it may be possible, in particular, to facilitate particularly fast focusing. To this end, it may be possible for the illumination of the specimen object from different illumination directions to occur at least partly parallel in time. To this end, it would be possible, for example, to carry out the illumination from different illumination directions by means of light with different wavelengths or colors or polarizations. In this way, it is possible to separate the signals that correspond to the different illumination directions.
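Separating two temporally parallel illuminations by wavelength can be sketched as follows; assigning the first illumination direction to the red channel and the second to the green channel of a color detector is a hypothetical example setup:

```python
import numpy as np

def split_by_color(rgb_image):
    """Separate two simultaneously captured illuminations by color.

    Hypothetical setup: the first illumination direction uses red
    light and the second uses green light, so the red and green
    channels of a color detector yield the first image and the
    second image, respectively.
    """
    img = np.asarray(rgb_image, dtype=float)
    return img[..., 0], img[..., 1]  # first image, second image
```

An analogous separation could be performed with polarization-sensitive pixel groups or with spectral elements feeding separate detectors, as described for the microscope above.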
(115) For the exemplary application of the fluorescence measurement, it may be possible, for example, to determine the z-position before and/or during the fluorescence measurement by means of the techniques described below. It may thus be ensured that the fluorescing object is situated in the focal plane of the optical apparatus during the measurement; in this way, it is possible to increase an accuracy during the fluorescence measurement. The techniques described in detail below are based on evaluating a first image and a second image with illumination of the object from different first and second illumination directions. In this case, this angle-selective illumination may be carried out e.g. in particular with one or more wavelengths that are outside the fluorescence-active range of the fluorescing specimen. In principle, the z-position may thus be determined at the same time as the fluorescence measurement. This may make it possible, for example, in particular, to position moving specimens as a function of time reliably in the focal plane. Furthermore, the z-position may generally be determined from only two illumination processes; by this means, too, it is possible to reduce a light-toxic effect on the fluorescing object. When measuring dyes, the wavelength of the light for determining the z-position may be chosen e.g. outside the excitation range of the dyes. In this way, bleaching of the dyes may be reduced or avoided. One possible light wavelength which is used for determining the z-position would be e.g. in the infrared range.
(116) Determining the z-position may mean in this case: quantitatively determining the z-position, e.g. in relation to the focal plane or in relation to some other suitable reference system of the optical apparatus; and/or qualitatively determining the z-position, e.g. in relation to the criterion of whether or not a specific predefined position parallel to the optical axis, such as e.g. the focal plane, is attained.
(118) The optical axis 221 and the focal plane 3160 are illustrated in
(119) A first illumination direction 91-1 and a second illumination direction 91-2 are furthermore illustrated in
(120) It is evident from
(121) Instead of the sharply delimited illumination directions 91-1, 91-2, it is also possible to use illumination directions that implement the illumination of the specimen object 3100 over a certain solid angle. To this end, more than two light sources, for example, could be used to implement a single illumination direction or else a light source with a large extent in relation to the specimen object 3100 could be used.
(122) Since the object 3100 is illuminated with finite angles 3251-1, 3251-2 relative to the optical axis 221, a pure phase object which brings about no or only a small attenuation of the amplitude of the light passing through may also be imaged in the first and second images 3230-1, 3230-2. This enables a diverse application of the present techniques to different specimens, in particular, e.g., biological specimens.
(123)
(124) However, it would also be possible for the determination of the z-position 3150 furthermore to be based on the first angle 3251-1 and the second angle 3251-2. The z-position 3150 may then be determined quantitatively. For this purpose, as set out below, trigonometric relationships between the first angle 3251-1, the second angle 3251-2 and the distance 3250 may be taken into account.
(125) The following applies to the scenario of
Δz = a·cos α = b·cos β  (1)
where a denotes a distance between the specimen object 3100 and the imaging location 3220-1 of the object 3100 in the first image 3230-1 along the first illumination direction 91-1 and b denotes a distance between the specimen object 3100 and the imaging location 3220-2 of the object 3100 in the second image 3230-2 along the second illumination direction 91-2 (a and b are not illustrated in
(126) By applying the sine law for general triangles, the following is obtained:
(127) Δx/sin(α+β) = a/cos β = b/cos α  (2), where Δx denotes the distance 3250 between the imaging locations 3220-1, 3220-2.
(128) Combining equations 1 and 2 results in:
(129) Δz = Δx·(cos α·cos β)/sin(α+β) = Δx/(tan α + tan β)  (3)
(130) With the aid of equation 3, it is possible to determine the z-position 3150 on the basis of the first angle 3251-1 and the second angle 3251-2 and furthermore on the basis of the distance 3250 between the imaging locations 3220-1, 3220-2. In particular, the z-position 3150 may be determined solely by double illumination and simultaneous capture of the first and second images 3230-1, 3230-2. A light loading of the object 3100 may be minimized, e.g. in comparison with the abovementioned scenario with iterative positioning of the object 3100 at different reference positions parallel to the optical axis 221.
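The combination of the two angles and the measured distance described above can be sketched as a short calculation; this is a minimal illustration of equation 3, assuming the angles α and β are measured from the optical axis on opposite sides of it and are given in radians (the function name is chosen here for illustration only):

```python
import math

def z_position(delta_x: float, alpha: float, beta: float) -> float:
    """Estimate the axial displacement (z-position relative to the focal
    plane) from the lateral distance delta_x between the two imaging
    locations and the illumination angles alpha, beta in radians,
    per the relation delta_z = delta_x / (tan(alpha) + tan(beta))."""
    return delta_x / (math.tan(alpha) + math.tan(beta))

# Example: imaging locations 12 micrometers apart, both angles 20 degrees
dz = z_position(12e-6, math.radians(20.0), math.radians(20.0))
```

For symmetric illumination (α = β), the relation simplifies to Δz = Δx/(2·tan α), which is why steeper illumination angles yield a larger lateral separation per unit defocus and thus a finer z-resolution.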
(131) It may be desirable to increase the accuracy of determining the z-position 3150. This accuracy is typically associated directly with the first angle 3251-1, the second angle 3251-2 and the distance 3250. Since the distance 3250 is measured in the images, the accuracy when determining the z-position 3150 may be limited at least by the pixel size in the first image 3230-1 and the second image 3230-2.
(132) An error in the distance 3250—designated as Δx′ hereinafter—is transferred as follows to an error of the z-position 3150:
(133) Δz′ = Δx′/(tan α + tan β)  (4)
(134) If the specimen object 3100 has a significant extent in the xy-plane, it may be desirable, e.g., to determine the distance 3250 between specific reference points in the first image 3230-1 and the second image 3230-2. The reference points may mark a specific part of the object 3100, e.g. a particularly prominent part or a part that is particularly important for the imaging. In general, it is also possible to determine the distance 3250 for a plurality of pairs of reference points of the object 3100. In this way, it may be possible, by repeatedly applying equation 3 for different parts of the object 3100, to determine the z-position 3150 in each case. In other words, the z-position 3150 can thus be determined in a spatially resolved manner in the xy-plane.
(135) It may thus be desirable to determine the distance 3250 particularly accurately. In this context it may be possible to apply a wide variety of techniques which enable the distance 3250 to be determined particularly accurately. Such techniques may include e.g.: landmark recognition; determining an optical centroid of the object 3100 in the first image 3230-1 and/or in the second image 3230-2; a user input; an aberration correction. In one simple scenario, e.g., the user could select a specific reference point of the object 3100 in the first image 3230-1 and select the corresponding reference point in the second image 3230-2. By means of landmark recognition, it may be possible, for example, to carry out such a selection of reference points in an at least partly automated manner. It would also be possible to use the optical centroid as a reference point for determining the distance 3250. The aberration correction may be used, e.g., to take account of known imaging errors caused by aberrations in the optical apparatus 200, 300. By way of example, by taking account of previously known aberrations, e.g., in the illumination apparatus of the optical apparatus and/or in the detector optical unit of the optical apparatus, it may be possible to take account of distortions in the first and second images that may lead to a displacement of the imaging locations of the object. Such displacements may then be eliminated or reduced computationally and the actual distance may be determined particularly accurately.
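The optical-centroid variant mentioned above can be sketched as follows; this is a minimal illustration assuming background-free intensity images, with the function and parameter names chosen here for illustration only:

```python
import numpy as np

def optical_centroid(image: np.ndarray) -> np.ndarray:
    """Intensity-weighted centroid (row, column) of an image, in pixels."""
    total = image.sum()
    rows, cols = np.indices(image.shape)
    return np.array([(rows * image).sum() / total,
                     (cols * image).sum() / total])

def centroid_distance(img1: np.ndarray, img2: np.ndarray,
                      pixel_size: float = 1.0) -> float:
    """Distance between the imaging locations of the object in the two
    images, estimated as the distance between their optical centroids,
    optionally scaled to physical units via pixel_size."""
    shift = optical_centroid(img2) - optical_centroid(img1)
    return float(np.hypot(shift[0], shift[1])) * pixel_size
```

In practice, a background subtraction or thresholding step before the centroid calculation would improve robustness, since the intensity weighting is sensitive to a constant offset.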
(136) A further limitation of the accuracy when determining the z-position 3150 may result from the coherent depth of field of the detector 241 of the optical apparatus 200, 300. In particular, it should be ensured that the specimen object 3100—even in the case of a significant displacement relative to the focal plane 3160—is still imaged in the first image 3230-1 and the second image 3230-2. However, it may be unnecessary to achieve a sharp imaging of the object 3100; in particular, techniques described above, such as e.g. the determination of the optical centroid of the object 3100, may also be applied in cases in which the specimen object 3100 is imaged only in a blurred manner in the images 3230-1, 3230-2.
(137) While
(138)
(139) Here it is possible for the first light, used to illuminate the specimen object 3100 from the first illumination direction, and the second light, used to illuminate the specimen object 3100 from the second illumination direction (or, in general, the light of the different illumination directions used), to differ from one another. This means that the first light may have a different wavelength than the second light. As an alternative or in addition thereto, the first light could have a different polarization than the second light. As a result, it is possible for the illumination of the specimen object 3100 from the first illumination direction and the illumination of the specimen object 3100 from the second illumination direction to occur at least partly parallel in time. This may facilitate particularly fast autofocusing of the specimen object 3100. Such techniques may also be applied to more than two illumination directions.
(140) By way of example, two detectors could be provided for separating the first light and the second light, said detectors each being associated with a filter that is configured according to the properties of the respective light. Thus, the filters can implement a spectral element that facilitates filtering or separation of the light in respect of its spectral properties. By way of example, the spectral element may be selected from the following group: beam splitter; dichroic element; color filter; polarization filter; filter wheel; and prism. It would also be possible for a detector to be used, said detector having different groups of pixels, which each have different sensitivities in relation to wavelengths and/or polarizations of light.
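One simple realization of the pixel-group variant above is a color detector whose channels act as the spectral element; the sketch below assumes, purely for illustration, that the first illumination direction uses red light and the second uses green light, so a single simultaneously captured color frame can be split into the two images:

```python
import numpy as np

def split_by_color(rgb_frame: np.ndarray):
    """Separate one simultaneously captured RGB frame (H x W x 3) into
    the two images belonging to the two illumination directions,
    assuming (hypothetically) red light for the first direction and
    green light for the second direction."""
    first_image = rgb_frame[..., 0]   # red channel: first illumination direction
    second_image = rgb_frame[..., 1]  # green channel: second illumination direction
    return first_image, second_image
```

With physical filters in front of two separate detectors, the same separation happens optically instead; the channel split above merely mimics that with a Bayer-type sensor's pixel groups.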
(141)
(142) By way of example, the distance 3250 could be determined with the aid of an image correlation calculation and a search for the maximum within the correlation image: in some examples, the position of this maximum can directly determine the distance Δx; see equation 3. In order to increase the robustness, it is also possible to apply a threshold to the correlation image: in this way, a region is spanned within which the centroid is then sought. The position of this centroid can then likewise directly determine Δx. Such a technique can be advantageous, in particular, if the object comprises a plurality of planes.
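The correlation-plus-threshold approach can be sketched as below; this is an illustrative FFT-based cross-correlation, assuming periodic boundary conditions, with the threshold refinement applied around the peak as described above (names and the default threshold are assumptions):

```python
import numpy as np

def correlation_shift(img1: np.ndarray, img2: np.ndarray,
                      threshold: float = 0.5):
    """Estimate the displacement between two images via FFT
    cross-correlation.  The peak is refined by taking the
    intensity-weighted centroid of the region above `threshold` times
    the correlation maximum.  The sign of the result depends on the
    chosen convention; for the distance between the imaging locations
    only its magnitude matters."""
    f1 = np.fft.fft2(img1 - img1.mean())
    f2 = np.fft.fft2(img2 - img2.mean())
    corr = np.fft.fftshift(np.fft.ifft2(f1 * np.conj(f2)).real)
    # Threshold around the peak, then take the centroid of that region.
    mask = corr >= threshold * corr.max()
    rows, cols = np.nonzero(mask)
    weights = corr[rows, cols]
    cy = (rows * weights).sum() / weights.sum()
    cx = (cols * weights).sum() / weights.sum()
    center = np.array(corr.shape) // 2
    return cy - center[0], cx - center[1]
```

Because the centroid of the thresholded correlation region is used rather than the single brightest pixel, the estimate can reach sub-pixel precision and degrades gracefully when the object spans several planes and the correlation peak broadens.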
(143)
(144) By way of example, for the purposes of illuminating the specimen object, it would be possible to use an illumination module having a plurality of light sources and, for example, a light-transmissive region according to various examples described herein.
(145) By way of example, it would be possible for steps 4001, 4002 to be carried out at least partly parallel in time. By way of example, steps 4001, 4002 could be carried out in parallel in time as a single-shot measurement. To this end, respectively different light can be used in steps 4001 and 4002, i.e., light that differs in respect of at least one spectral property, such as wavelength and/or polarization.
(146) In order to facilitate a separation of the light to capture the first image and the second image, use can then be made of a spectral element with a plurality of detectors or with a detector having a plurality of groups of pixels that are associated with the different spectral properties of the light.
(147) Referring back to
(148) On the basis of the determined distance, it is then possible, for example, to determine the z-position of the specimen object, i.e., its distance from the focal plane. As an alternative or in addition thereto, it is also possible to implement an autofocus application in which, for example, the motor of a specimen holder is actuated for focusing purposes.
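A single iteration of such an autofocus application can be sketched as follows; the `stage` object and its `move_z` method are hypothetical stand-ins for the motorized specimen holder, and the sign convention for the corrective move is an assumption:

```python
import math

def autofocus_step(delta_x: float, alpha: float, beta: float, stage) -> float:
    """One autofocus iteration: estimate the defocus from the measured
    distance delta_x between the imaging locations (angles in radians),
    then drive the specimen holder back toward the focal plane via the
    hypothetical stage.move_z interface.  Returns the estimated defocus."""
    dz = delta_x / (math.tan(alpha) + math.tan(beta))  # equation 3
    stage.move_z(-dz)  # move opposite the estimated defocus
    return dz
```

Since the defocus estimate comes from a single pair of (possibly simultaneous) exposures, the loop can track a moving specimen without the stepwise focus sweep a contrast-based autofocus would need.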
(149) To summarize, a description has been given above of techniques which—e.g., by applying equation 3 or by repositioning the object parallel to the optical axis—enable the z-position 3150 to be determined particularly rapidly and accurately. A rapid focusing of the object 3100 becomes possible as a result.
(150) In conclusion, techniques in relation to illumination modules for angle-selective illumination were described above. Such techniques render it possible to combine the angle-selective illumination flexibly with various optical apparatuses.
(151) It goes without saying that the features of the embodiments and aspects of the invention described above can be combined with one another. In particular, the features can be used not only in the combinations described but also in other combinations or on their own without departing from the scope of the invention.
(152) While various examples were described above in relation to an LSM and in relation to a microscope with an eyepiece, it is also possible to use corresponding techniques for other optical apparatuses in other examples. In particular, the illumination modules with carrier and light-transmissive region, as described herein, can also be used for other optical apparatuses.
(153) While various examples were described above in relation to fluorescence imaging, corresponding techniques can also be used for other types of imaging. This may mean that use can be made of other detectors which, for example, are not suitable for detecting a fluorescence signal.
(154) While various examples were described above in relation to an illumination module with a carrier, which has a light-transmissive region, corresponding techniques can also be applied to a carrier that has no light-transmissive region in some examples.