Method and device for determining a distance between two optical boundary surfaces which are spaced apart from each other along a first direction
09741122 · 2017-08-22
CPC classification
G06V10/44
G02B21/0016
G02B21/367
Abstract
A method is provided for determining the distance between two optical boundary surfaces spaced apart from each other in a first direction. A first image is ascertained wherein the plane into which the pattern acquired with the first image was imaged either coincides with a first of the two optical boundary surfaces or has the smallest distance to the first optical boundary surface in the first direction. The position of the first image in the first direction is determined. A second image is ascertained wherein the plane into which the pattern acquired with the second image was imaged either coincides with a second of the two optical boundary surfaces or has the smallest distance to the second optical boundary surface in the first direction. The position of the second image in the first direction is determined. The distance is calculated by means of the determined positions of the first and second images.
Claims
1. A method for determining the distance between two optical boundary surfaces spaced apart from each other in a first direction, the method comprising: a) imaging a pattern into a plane transverse to the first direction and acquiring an image of the pattern imaged in the plane; b) repeating step a) for a plurality of different positions in the first direction, wherein the different positions cover an area in the first direction in which the two optical boundary surfaces lie; c) ascertaining a first image from the images from step a), wherein the plane into which the pattern acquired with the first image was imaged either coincides with a first of the two optical boundary surfaces or has the smallest distance to the first optical boundary surface in the first direction in comparison with the planes in which the pattern acquired with the other images from step a) was imaged; c1) determining the position of the first image in the first direction; d) ascertaining a second image from the images from step a), wherein the plane into which the pattern acquired with the second image was imaged either coincides with a second of the two optical boundary surfaces or has the smallest distance to the second optical boundary surface in the first direction in comparison with the planes in which the pattern acquired with the other images from step a) was imaged; d1) determining the position of the second image in the first direction; and e) calculating the distance based upon the determined positions of the first and second image.
2. The method according to claim 1, wherein in at least one of steps c) and d), one or both of the first and second images is ascertained on the basis of one or both of contrast and edge sharpness.
3. The method according to claim 1, wherein in at least one of steps c) and d), one or both of the first and second images is ascertained by a convolution of the respective image from step a), or a partial image generated therefrom, with a target pattern.
4. The method according to claim 3, wherein the target pattern used for the convolution is derived based upon the first image ascertained in step c).
5. The method according to claim 1, wherein in step a) a pattern in the form of a strip grating is used.
6. The method according to claim 1, wherein: in one or both of steps c) and d), at least one of the first and second images is ascertained by a convolution of the respective image from step a), or a partial image generated therefrom, with a target pattern; in step a) a pattern in the form of a strip grating is used; and in one or both of steps c) and d), at least one column in the direction of a grating modulation of the strip-grating pattern of the respective image from step a), the average value of several such columns, or the sum of several such columns, are utilized in the convolution.
7. The method according to claim 1, wherein the images from step a) are filtered to ascertain one or both of the first and second images in one or both of steps c) and d).
8. The method according to claim 1, wherein the different positions according to step b) are spaced equidistantly.
9. The method according to claim 1, wherein the different positions according to step b) have smaller distances in a first area around the first optical boundary surface in comparison with a second area adjacent to the first area.
10. The method according to claim 9, wherein the different positions according to step b) have smaller distances in a third area around the second optical boundary surface in comparison with the second area which lies between the first and third area.
11. The method according to claim 1, wherein steps a) and b) are carried out for at least two different wavelengths to image the pattern, wherein in step e) the distance of the boundary surfaces is calculated for each of the at least two different wavelengths and, on the basis of the at least two distances calculated, it is ascertained which of at least two possible materials, which are bordered by the at least two optical boundary surfaces, is present.
12. The method according to claim 11, wherein a dispersion of the material is used in conjunction with the positions of the boundary surfaces at the different wavelengths to determine the material.
13. The method according to claim 12, wherein, in determining which material is present, an amount of a difference of mechanical thicknesses of a first material for each of the at least two different wavelengths is compared with an amount of a difference of mechanical thicknesses of a second material for each of the at least two different wavelengths and a determination is made for which material is present based upon a comparison of the amount of differences of mechanical thicknesses of the first and second materials.
14. The method according to claim 11, wherein, in determining which material is present, an amount of a difference of mechanical thicknesses of a first material for each of the at least two different wavelengths is compared with an amount of a difference of mechanical thicknesses of a second material for each of the at least two different wavelengths and a determination is made for which material is present based upon a comparison of the amount of differences of mechanical thicknesses of the first and second materials.
15. The method according to claim 1, wherein a mechanical distance of the two optical boundary surfaces is calculated by multiplying the distance of the determined positions of the first and second image by the refractive index of the material present between the two optical boundary surfaces.
16. A device for determining the distance between two optical boundary surfaces spaced apart from each other in a first direction, the device comprising: an illumination unit configured to image a pattern into a plane transverse to the first direction; an imaging unit configured to acquire an image of the pattern imaged in the plane; and a controller configured to carry out the steps of claim 1.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(17) The present invention can be explained with reference to the following example embodiments. However, these example embodiments are not intended to limit the present invention to the specific examples, embodiments, environments, applications or implementations described therein. The description of these embodiments therefore serves only to illustrate, not to limit, the present invention.
(18) The structure of an embodiment of the device 1 according to the invention for determining the distance between two optical boundary surfaces spaced apart from each other in a first direction is shown schematically in
(19) The imaging module 3 comprises the objective 9, a beam splitter 15, an imaging lens system 16 and a camera 17. A (preferably magnified) image of the sample 14 can thus be acquired with the camera 17 via the objective 9, the beam splitter 15 and the imaging lens system 16.
(20) Furthermore, the device 1 also has a control module M which controls the device during operation and carries out the evaluation and determination of distance described below. The control module M can be part of the device 1, be formed as a separate module or be a combination of the two.
(21) An enlarged representation of the objective 9 and the sample 14 is shown in
(22) The thickness of the base 20 and thus the distance from the lower boundary surface 21 of the base 20 to the upper boundary surface 22 of the base 20 can be determined as follows.
(23) A strip grating 24 (as represented schematically in
(24) At the lower boundary surface 21, the refractive index discontinuity is present because of the transition between the material of the surroundings (for example, air) and the material of the base 20 of the Petri dish 19 and, at the upper boundary surface 22, the refractive index discontinuity is present because of the transition between the material of the base 20 of the Petri dish 19 and the medium 18.
(25) In the microscope 1 according to the invention, a pre-determined z-area 25 is thus passed through (the focal plane is shifted in the z-direction), wherein there is always a sharp image of the strip grating in the focal plane and thus the desired illumination structured in the form of strips. Passing through the pre-determined z-area can take place with equidistant steps, for example.
(26) The acquisition of the camera 17 of the imaging module 3 when the focal plane coincides with the lower boundary surface 21 is shown in
(27) Moreover, the grating 24 is imaged sharply when, for example, the focal plane coincides with the lower boundary surface 21. As the distance of the focal plane from the lower boundary surface increases, the grating 24 becomes blurred in the acquisition, wherein the grating pattern recurs with the same period and with a contrast that decreases with each repetition. This is illustrated in
(28) The same behavior occurs in the area of the upper boundary surface 22 (with lower intensity and lower contrast).
(29) Only an area around the lower boundary surface is shown in
(30) A corresponding evaluation with a focus function is shown in
F(z) = Σ_x Σ_y |g(x, y+d) − 2g(x, y) + g(x, y−d)|²  (1)
can be used (g(x, y) is the image to be evaluated and d = 5). This variant is based on detecting high edge sharpness and is known as the second order squared gradient. In
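Equation 1 can be sketched in Python. This is an illustrative implementation, not part of the patent; it assumes the image is a 2D array with the grating modulation along the first axis (y) and uses the offset d = 5 named above.

```python
import numpy as np

def focus_measure(image, d=5):
    """Second order squared gradient (equation 1):
    F(z) = sum over x, y of |g(x, y+d) - 2*g(x, y) + g(x, y-d)|^2,
    with the differences taken along the grating modulation (y)."""
    g = np.asarray(image, dtype=float)
    # valid region only: y runs where both y-d and y+d exist
    diff = g[2 * d:, :] - 2.0 * g[d:-d, :] + g[:-2 * d, :]
    return float(np.sum(diff ** 2))
```

A sharply imaged strip grating yields a large F(z); a uniform, defocused image yields a value near zero, which is why F(z) peaks where the focal plane hits a boundary surface.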
(31) It can be seen that the signal level at the upper boundary surface 22 lies only just above the base level and is therefore difficult to differentiate from the background.
(32) In some circumstances, the examination of the images with functions for contrast steepness and edge sharpness may thus not be sensitive enough to detect the grating structure at the upper boundary surface 22.
(33) A better evaluation can be achieved by convolving at least one column in the direction of the grating modulation (in y-direction in
(34) In equation 2, Signal(x, y, z) denotes the at least one column of the acquired image and Convolution(x, y) denotes the resulting convolution signal. Instead of the at least one column, an average or the sum of several columns of the acquired image can be used as Signal(x, y, z) for the convolution according to equation 2. The averaging and the summation are carried out transverse to the direction of the grating modulation and thus in the x-direction in
(35) The expected grating structure (Grating (y)) can be generated e.g. on the basis of the reflex image of the lower boundary surface 21. This reflex image can be determined by the focus function described above. Thereafter a profile line is generated by means of this maximum image and processed such that it is centred on zero (
(36) If the convolution signal is now evaluated on the basis of this target grating according to
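The column-wise convolution with a zero-centred target grating described in paragraphs (33) to (36) can be sketched as follows. The helper names are hypothetical and the grating profile in the usage example is synthetic; only the zero-centring and the convolution itself follow the description.

```python
import numpy as np

def target_grating(profile):
    """Derive a zero-centred comparison grating from a profile line
    taken from the reflex image of the lower boundary surface."""
    p = np.asarray(profile, dtype=float)
    return p - p.mean()          # centre on zero, cf. paragraph (35)

def grating_response(column, grating):
    """Convolve one image column (taken along the grating modulation, y)
    with the target grating and return the peak magnitude."""
    s = np.asarray(column, dtype=float)
    resp = np.convolve(s - s.mean(), grating, mode='same')
    return float(np.max(np.abs(resp)))
```

Evaluating grating_response for every z-position of the stack yields a curve whose dominant peaks mark the reflexes at the boundary surfaces, while structures that do not match the grating period contribute little, as paragraph (37) notes.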
(37) A further advantage of the convolution is that a signal is obtained which is essentially only sensitive to the imaging of the pattern. Other structures with high contrast or quite general luminance fluctuations have little or no influence on the evaluation. This can be seen in
(38) Although the contrast of the grating image is strengthened by the evaluation of the convolution, the contrast has a linear influence on the signal. Correspondingly, if there is almost no signal from the grating any longer, the signal level of the convolution function is correspondingly weak. According to the invention, the wavelength range for the illumination is therefore chosen as far as possible so as to ensure a good contrast of the grating image. For this, a suitable light source can be selected.
(39) In the flow diagram according to
(40) In step S1 an image stack or z-stack is generated.
(41) These image data are imported in step S2, wherein the actual z-position is measured and, e.g., a central column is selected, so that in step S3 a stack of image columns S(z) is available.
(42) In step S4 the focus function which is available in step S5 is calculated from the stack of image columns S(z).
(43) In step S6 the main maximum is ascertained and then in step S7 the target grating or comparison grating G is calculated.
(44) In step S8, the convolution which is available in step S9 is calculated on the basis of the stack of image data S(z) with the comparison grating G. On the basis of the calculated convolution, in step S10 the two reflex positions are then determined.
(45) By means of the reflex positions, the distance of the focus positions Δz can be determined and thus the distance between the lower and the upper boundary surfaces 21 and 22 can be calculated.
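The flow S1 to S10 above can be sketched end to end. This is an illustrative reconstruction under stated assumptions: the central column is used as S(z), the focus function is the second order squared gradient of equation 1, and the two reflexes are taken as the two strongest, well-separated peaks of the convolution signal; the patent does not fix these details.

```python
import numpy as np

def reflex_positions(stack, z_positions, d=5):
    """Sketch of steps S2-S10 (not the patented implementation):
    returns the two z-positions of the boundary-surface reflexes."""
    # S2/S3: central column of each acquired image -> stack of columns S(z)
    cols = [np.asarray(im, dtype=float)[:, im.shape[1] // 2] for im in stack]
    # S4/S5: focus function (second order squared gradient, equation 1)
    focus = [np.sum((c[2*d:] - 2*c[d:-d] + c[:-2*d]) ** 2) for c in cols]
    main = int(np.argmax(focus))                  # S6: main maximum
    G = cols[main] - cols[main].mean()            # S7: comparison grating G
    # S8/S9: convolution of every column with G
    resp = np.array([np.max(np.abs(np.convolve(c - c.mean(), G, mode='same')))
                     for c in cols])
    # S10: strongest reflex, then the strongest outside its neighbourhood
    first = int(np.argmax(resp))
    masked = resp.copy()
    masked[max(0, first - 2):min(len(resp), first + 3)] = 0.0
    second = int(np.argmax(masked))
    return sorted((z_positions[first], z_positions[second]))
```

The difference of the two returned z-positions is the Δz used in the thickness calculation of paragraph (46).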
(46) If the material of the base 20 of the Petri dish 19 is known, the thickness d of the base can be calculated as d = Δz·n, wherein Δz is the difference of the z-positions between the sharp grating images and n is the refractive index of the base as a function of the wavelength of the illumination system used.
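Paragraph (46) amounts to a one-line computation; the numeric values in the comment below are illustrative only and do not come from the patent.

```python
def mechanical_thickness(delta_z, n):
    """d = delta_z * n: optical separation of the two sharp grating
    images, scaled by the refractive index of the base material."""
    return delta_z * n

# e.g. a measured optical separation of 120 um and an assumed n = 1.52
# give a mechanical thickness of about 182.4 um
```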
(47) For applications in the biological field, as a rule only two materials are used: either the glass D263M or the plastic polystyrene (PS). These materials differ in their dispersion, which can be exploited to differentiate them automatically and thus to ascertain the material present.
(48) The following process can be used to ascertain the material. First, Δz is ascertained for a first wavelength and then for a second wavelength, the two wavelengths differing significantly from each other. For example, 420 nm and 625 nm can be used as the first and second wavelengths. For the material actually present, the difference between the thicknesses ascertained in this way must be approximately zero. If the difference of the two thicknesses computed for the material D263M is now compared with the difference of the two thicknesses computed for polystyrene, one difference is smaller in terms of amount than the other, and the smaller one reveals the material actually present. Therefore, if the amount of the difference of the thicknesses for polystyrene at both wavelengths is smaller than the amount of the difference for the glass D263M, the material is polystyrene.
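The dispersion-based discrimination of paragraph (48) can be sketched as follows. The refractive indices in the usage example are illustrative placeholders, not values stated in the patent.

```python
def identify_material(delta_z, candidates):
    """delta_z: {wavelength_nm: measured optical separation Dz};
    candidates: {name: {wavelength_nm: refractive index n}}.
    For the material actually present, d = Dz * n must agree at both
    wavelengths, so pick the candidate with the smallest |d1 - d2|."""
    w1, w2 = sorted(delta_z)
    def thickness_spread(n):
        return abs(delta_z[w1] * n[w1] - delta_z[w2] * n[w2])
    return min(candidates, key=lambda name: thickness_spread(candidates[name]))
```

With synthetic data generated for a polystyrene base, the smaller thickness spread correctly selects polystyrene, mirroring the comparison described above.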
(49) In the case of embedded samples 14, the refractive index of the embedding medium 18 can be so close to that of the cover glass that the reflex becomes too weak for a grating image to be evaluated.
(50) In this case, as is shown in
(51) Alternatively, referring to
(52) A further embodiment of the device 1 according to the invention for determining the distance between two optical boundary surfaces spaced apart from each other in a first direction is shown in
(53) The light sources 4, 5, 4′ can e.g. be LEDs.
(54) The above disclosure relates to the detailed technical contents and inventive features of the invention. Those skilled in the art may make a variety of modifications and replacements based on the disclosures and suggestions of the invention as described without departing from its characteristics. Nevertheless, although such modifications and replacements are not fully disclosed in the above description, they are substantially covered by the appended claims.