Determining a polar error signal of a focus position of an autofocus imaging system
09578227 · 2017-02-21
Assignee
Inventors
CPC classification
G02B21/365
PHYSICS
H04N23/673
ELECTRICITY
G02B21/367
PHYSICS
H04N23/90
ELECTRICITY
International classification
H01L27/00
ELECTRICITY
G02B21/36
PHYSICS
Abstract
An autofocus imaging system for a microscope system includes a tilted autofocus image sensor that images an oblique cross-section of a slide. The autofocus imaging system focuses by comparing multiple sequential overlapping images taken by the tilted autofocus image sensor. An axial position of a tissue layer on the slide can be determined from the polar error signal resulting from this comparison.
Claims
1. An autofocus imaging system for a microscope system, the autofocus imaging system comprising: an image sensor arrangement configured to acquire a primary image data of an object of interest and first and second autofocus image data of an oblique section of the object of interest, wherein the image sensor arrangement comprises a tilted autofocus image sensor configured to acquire the first autofocus image data and, at a later time, the second autofocus image data; and a calculation unit configured to generate a polar error signal of a focus position of the microscope system based on a comparison of a first data relating to the first autofocus image data with a second data relating to the second autofocus image data, wherein the comparison comprises: determining a first contrast function in the first autofocus image data, the first contrast function being a first contrast as a function of a first position on the tilted autofocus image sensor; determining a second contrast function in the second autofocus image data, the second contrast function being a second contrast as a function of a second position on the tilted autofocus image sensor; and comparing the first contrast function with the second contrast function, and wherein, for the comparing the first contrast function with the second contrast function, the calculation unit is further configured to correct the second contrast function for translation of the object of interest and then to subtract the first and second contrast functions, or filtered or transformed versions of the first and second contrast functions, from each other, resulting in an S-curve.
2. The autofocus imaging system of claim 1, wherein the image sensor arrangement further comprises a primary image sensor configured to acquire the primary image data of the object of interest.
3. The autofocus imaging system of claim 2, wherein the primary image sensor is a line sensor; and wherein the tilted autofocus image sensor is a two-dimensional sensor.
4. The autofocus imaging system of claim 1, wherein the second autofocus image data is configured to be acquired after the object of interest has been translated from a first position to a second position, wherein the first autofocus image data has been acquired at the first position.
5. The autofocus imaging system of claim 1, wherein, for the generating of the polar error signal, the calculation unit is further configured to determine a zero crossing of the S-curve.
6. The autofocus imaging system of claim 1, wherein the autofocus imaging system is configured to perform a feed-back or feed-forward control of the focus position after a determination of an optimal focus position.
7. The autofocus imaging system of claim 1, wherein the tilted autofocus image sensor is tilted with respect to an optical axis of a lens focusing radiation being radiated from the object of interest on the tilted autofocus image sensor.
8. The autofocus imaging system of claim 7, wherein a primary image sensor and the tilted autofocus image sensor are provided by a single physical image sensor, the single physical image sensor being tilted with respect to the optical axis.
9. A microscope system comprising the autofocus imaging system of claim 1.
10. A method for autofocus imaging of a microscope system, the method comprising acts of: acquiring a primary image data of an object of interest by an image sensor arrangement; acquiring a first autofocus image data and, at a later time, a second autofocus image data of an oblique section of the object of interest by a tilted autofocus image sensor of the image sensor arrangement; and generating a polar error signal of a focus position of the microscope system based on a comparison of a first data relating to the first autofocus image data with a second data relating to the second autofocus image data, wherein the comparison comprises acts of: determining a first contrast function in the first data, the first contrast function being a first contrast as a function of a first position on the autofocus image sensor; determining a second contrast function in the second data, the second contrast function being a second contrast as a function of a second position on the autofocus image sensor; and comparing the first contrast function with the second contrast function, and wherein the act of comparing the first contrast function with the second contrast function comprises acts of: determining an S-curve, by subtracting the first and second contrast functions, or filtered or transformed versions of the first and second contrast functions, from each other; and generating the polar error signal by determining a zero crossing of the S-curve.
11. A non-transitory computer readable medium comprising computer instructions which, when executed by a processor of a microscope system, configure the processor to perform a method of autofocus imaging of the microscope system, the method comprising acts of: acquiring a primary image data of an object of interest by an image sensor arrangement; acquiring a first autofocus image data and, at a later time, a second autofocus image data of an oblique section of the object of interest by a tilted autofocus image sensor of the image sensor arrangement; and generating a polar error signal of a focus position of the microscope system based on a comparison of a first data relating to the first autofocus image data with a second data relating to the second autofocus image data, wherein the comparison comprises: determining a first contrast function in the first data, the first contrast function being a first contrast as a function of a first position on the autofocus image sensor; determining a second contrast function in the second data, the second contrast function being a second contrast as a function of a second position on the autofocus image sensor; and comparing the first contrast function with the second contrast function, and wherein the comparing the first contrast function with the second contrast function comprises: determining an S-curve, by subtracting the first and second contrast functions, or filtered or transformed versions of the first and second contrast functions, from each other.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF EMBODIMENTS
(11) The illustration in the drawings is schematic. In different drawings, similar or identical elements are provided with the same reference numerals. In the following, the prime character (′) associated with a symbol will mean that the image space is considered (e.g. the sensor reference), while a symbol without the prime character will mean that the object space is considered (typically the sample reference).
(12) For example, when the angle Beta prime (β′) is used in this description, a rotation in image space is indicated and, as will be described more specifically, a rotation of the physical sensor. Likewise, an angle Beta (β, without prime) will indicate a rotation in object space and, as will be described more specifically, a rotation of an oblique cross-section of the sample that is imaged by the autofocus sensor.
(14) For providing an optimum resolution during scanning, the focus may have to be adjusted continuously, since the axial position of the tissue layer varies.
(15) An alternative for the use of the focus-map method is the use of a continuous autofocus system, i.e. an additional system that continuously measures the optimum focus position and adapts the axial position of the objective lens during the actual scan for acquiring the digital image. The autofocus system may be based on optimizing the contrast in the obtained image. A variety of metrics may be used for contrast optimization. However, the sign of the focus error (above or below focus) cannot be determined in this manner, i.e. the focus error signal is not polar. This may be disadvantageous for a continuous autofocus system that needs permanent updates on the optimum focus setting.
(16) The autofocus system may use the light reflected at a reference surface at or near the object plane, such as in optical discs. However, a drawback of this method when applied to tissue slides may be that the relevant interfaces (between the microscope slide and the tissue layer and between the tissue layer and the cover slip) may have a low reflectance and that the reflection signal is distorted by scattering arising from the nearby tissue layer, thus compromising robustness.
(17) A good alternative is the use of an additional sensor that is tilted with respect to the optical axis. This autofocus image sensor makes an image of an oblique section of the object, as depicted in
(18) As can be seen from
(19) For example, the autofocus imaging system operates using wavelengths outside the visible spectrum so as not to spoil the visible light imaging of the tissue layer. For example, the autofocus system operates using wavelengths on the infrared side of the visible spectrum, because ultraviolet radiation may damage the tissue and may require more complicated and/or expensive optical components than infrared radiation.
(20) The additional autofocus image may be provided in a variety of ways. One possibility is to use so-called dark field illumination. Hereby, the sample is illuminated with a beam comprising a set of directions of propagation.
(21) The depth range z.sub.tot of the autofocus system must be sufficiently large for an appropriate setting of other parameters. The autofocus image sensor has N.sub.x pixels in the scan direction, with pixel size b. The sensor is tilted over an angle β′, so that the lateral and axial sampling at the sensor is given by:
Δx′=b cos β′
Δz′=b sin β′
(22) The lateral and axial sampling at the object (the tissue slide) is given by:
Δx=Δx′/M
Δz=nΔz′/M.sup.2
(23) where M is the magnification and n the refractive index of the object. The axial sampling at the object now follows as:
(24) Δz=nb sin β′/M.sup.2
(25) As there are N.sub.x pixels, the total depth range is:
(26) z.sub.tot=N.sub.xΔz=N.sub.xnb sin β′/M.sup.2
(27) The lateral sampling at the object Δx should be sufficiently small in order to enable the generation of defocus-sensitive high-frequency information. The largest defocus sensitivity is found at half the cut-off spatial frequency, so Δx is preferably in the range 0.5 to 1.0 μm, say around Δx=0.75 μm. Taking a VGA-sensor (N.sub.x=640 by N.sub.y=480 pixels) with pixel size b=10 μm, and using a refractive index n=1.5, the relation between depth range and sensor tilt angle is as shown in
(28) In
(29) Roughly, the depth range increases by 1 μm per degree of tilt. The depth range may preferably be around 20 μm, so a moderate tilt angle of around 20 deg would be sufficient. In that case the axial sampling Δz is around 42 nm, which is sufficiently small to enable finding the axial position of the tissue layer with sub-μm accuracy.
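The sampling relations of paragraphs (21)-(26) can be checked numerically. The sketch below uses the VGA-sensor values from the text (b=10 μm, n=1.5, N.sub.x=640, tilt around 20 deg); the magnification M=11 is an assumed example value, not stated in this passage:

```python
import math

def sampling(b, beta_deg, M, n, Nx):
    """Lateral/axial sampling at the object and total depth range
    for a tilted autofocus sensor, per the equations in the text."""
    beta = math.radians(beta_deg)
    dx = b * math.cos(beta) / M           # lateral sampling at the object
    dz = n * b * math.sin(beta) / M ** 2  # axial sampling at the object
    z_tot = Nx * dz                       # total depth range
    return dx, dz, z_tot

# b = 10 um, n = 1.5, Nx = 640 from the text; M = 11 is an assumption
dx, dz, z_tot = sampling(b=10e-6, beta_deg=20.0, M=11, n=1.5, Nx=640)
print(dz)     # axial sampling, on the order of 42 nm
print(z_tot)  # depth range, a few tens of micrometers
```

With these assumed values the axial sampling comes out close to the 42 nm quoted in paragraph (29).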
(30) It appears that not all N.sub.y rows of pixels are needed for a defocus error signal with sufficient signal-to-noise ratio. In principle N.sub.y=1 may be sufficient. However, using multiple rows improves the robustness of the signal. If the sensor is a CMOS sensor, windowing (reading out only a restricted number of rows) may be used, which may have positive effects on the computational burden and/or the frequency of measurements of the autofocus signal. Furthermore, the lateral location of the window may be adapted from scan to scan, so as to adapt to the average lateral position of the tissue. For example, if the scan area encompasses the edge of the tissue area, one side of the line sensor images tissue whereas the other side does not. By selecting the window on the side of the field where the tissue is, a good error signal may still be obtained. Alternatively, one or more line sensors can be combined, thus providing one or more measurements of the autofocus signal.
(31) To detect the amount of defocus, image analysis algorithms may be used that quantify the amount of sharp detail in the image on the autofocus sensor. A two-stage algorithm may be used: a first-stage algorithm detects the amount of contrast at a given time on the image produced by the autofocus sensor. Subsequently, a second-stage algorithm compares the results of the first-stage algorithm from two sequential images (not necessarily adjacent frames; a frame can be compared to a frame several frames back) taken by the autofocus sensor while the object under the microscope is being translated.
(32) The first stage, low level algorithm produces a curve which indicates the amount of detail in the image on the autofocus sensor, as a function of the x position on that sensor. Because the autofocus sensor is tilted with respect to the x direction, i.e. with respect to the front surface of the tissue slide assembly, the depth in the object at which the autofocus sensor image is generated varies with the x position on the sensor as well. Therefore, the contrast curves generated by the first stage algorithm, are a function of the x-position on the sample, as well as the depth (z position) of the sample. This is shown in
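The text does not fix a particular contrast metric for the first stage; a squared-gradient measure summed over the sensor rows is one common choice and is used here purely as an assumed example:

```python
import numpy as np

def contrast_curve(frame):
    """First-stage sketch: amount of sharp detail as a function of
    the x position on the tilted autofocus sensor.
    frame: 2-D array (rows y, columns x). The squared-gradient
    metric is an assumption; the text only requires some measure of
    high-frequency detail per x position."""
    gx = np.diff(frame.astype(float), axis=1)  # horizontal gradient
    return (gx ** 2).sum(axis=0)               # sum over the rows

# synthetic 480 x 640 frame, just to show the output shape
rng = np.random.default_rng(0)
frame = rng.random((480, 640))
c = contrast_curve(frame)
print(c.shape)  # one value per x position (minus one for the gradient)
```

Restricting the sum to a window of rows would correspond to the CMOS windowing discussed in paragraph (30).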
(34) The curves 403, 404 are the outcome of the low level algorithm, one for each image. The image corresponding to curve 404 is translated approximately 60 pixels with respect to the image corresponding to curve 403. The curves 403, 404 clearly show that the outcome of the low level algorithm is dependent both on the x position of the object with respect to the autofocus sensor (the high frequency detail, which is qualitatively the same, but translated, in the left image compared to the right image), as well as the z position of the object with respect to the autofocus sensor (the low frequency envelope, more or less Gaussian, peaking around 450).
(35) The second stage, high level algorithm, subtracts two sequential results from the first stage algorithm (from now on called focus curves) from each other, after correcting for the translation of the sample. This is shown in
(36) Clearly, the high frequency detail that was present on the focus curves, and which was a result of the information content of the image, is no longer present in the S-Curve. The S-Curve is a stable signal which is not dependent on the (local) morphology of the object that is being imaged in the microscope. An S-Curve may be generated for each frame generated by the autofocus image sensor, and the zero crossing of the most recent S-Curve indicates the ideal focus position at a time T exactly in between the acquisition of the two frames that were compared.
(37) In other words, this two stage approach clearly suppresses the high frequency noise present on the focus curves. Thus, a high bandwidth polar focus error detection during the scanning action of the microscope may be provided.
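The second stage described above can be sketched as follows. The pixel shift between frames is assumed to be known from the scan velocity and frame times (the `shift` parameter below is hypothetical), and the zero crossing is refined by linear interpolation:

```python
import numpy as np

def s_curve(curve_a, curve_b, shift):
    """Second-stage sketch: correct curve_b for the known sample
    translation (shift, in pixels) and subtract the overlapping
    parts of the two focus curves, yielding the S-curve."""
    a = curve_a[:len(curve_a) - shift]
    b = curve_b[shift:]  # translation-corrected second curve
    return a - b

def zero_crossing(s):
    """Index of the first sign change of the S-curve, refined by
    linear interpolation; None if there is no sign change."""
    idx = np.nonzero(np.diff(np.sign(s)) != 0)[0]
    if idx.size == 0:
        return None
    i = idx[0]
    return i + s[i] / (s[i] - s[i + 1])

# Synthetic focus curves: Gaussian envelopes whose peak moves by
# 5 pixels between frames, while the sample translates 60 pixels.
x = np.arange(640)
f1 = np.exp(-((x - 300) ** 2) / 5000.0)
f2 = np.exp(-((x - 60 - 305) ** 2) / 5000.0)
zc = zero_crossing(s_curve(f1, f2, shift=60))
print(zc)  # midway between the two envelope peaks, ~302.5
```

As in paragraph (36), the zero crossing of the S-curve indicates the ideal focus position for the instant between the two compared frames.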
(38) The maximum bandwidth of the autofocus signal is determined by the frame rate of the sensor. Off-the-shelf sensors are readily available offering 200 Hz frame rates. For a typical fast scanning microscope, at 10 mm/s, this implies a focus measurement every 50 micrometers.
(39) An embodiment which allows for real-time determination of the z-position at which the tissue is in focus, is the combination of a high-speed image sensor with an FPGA processor, or a dedicated image processing ASIC.
(40) Once the optimum focus position z.sub.opt has been determined, either a feed-back, or a feed-forward focus strategy can be implemented. In the feed-back case, the detection of the focus position z.sub.opt would ideally take place at exactly the same position and time as where the imaging is done. Based on the error signal (the desired focus position z.sub.opt minus the actual focus position of the imaging objective lens z.sub.obj) the focus position z.sub.obj of the imaging optics is adjusted. The sample rate with which the focus position z.sub.opt is obtained will determine the ultimate bandwidth of such a feedback system. Typically a bandwidth that is a factor of 10 lower than the sampling rate may be obtained.
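The feed-back case described above can be sketched as a simple proportional controller; the gain and the number of iterations are assumed example values, not taken from the text:

```python
def feedback_focus(z_opt_samples, z0=0.0, gain=0.5):
    """Feed-back sketch: at each focus measurement, adjust the
    objective focus z_obj by a fraction of the error signal
    (z_opt - z_obj). gain=0.5 is an assumed example value."""
    z_obj = z0
    trace = []
    for z_opt in z_opt_samples:
        error = z_opt - z_obj   # desired minus actual focus position
        z_obj += gain * error   # adjust the imaging optics
        trace.append(z_obj)
    return trace

# step in the optimum focus position: z_obj converges toward 1.0
trace = feedback_focus([1.0] * 10)
print(trace[-1])  # close to 1.0 after ten samples
```

The achievable closed-loop bandwidth is then set by the sampling rate of z.sub.opt, typically a factor of 10 lower, as stated in paragraph (40).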
(41) The tilted cross-section of the slide that is imaged onto the autofocus sensor intersects the focal plane of the objective in a line. Possibly, in all of the above embodiments this line is offset from the line imaged by the (TDI or non-TDI) line sensor in the positive scan direction (upstream) by a distance x, such that x/v, where v is the scan velocity, is larger than the time needed for computing the focus error signal from the autofocus image. This is advantageously used in the feed-forward scenario, where the detection of the focus position z.sub.opt would ideally take place ahead (in time) of the imaging. In addition, the offset x should be small enough such that the autofocus image fits within the field of view of the objective. A feed-forward system can be infinitely fast, however, the maximum scan speed is limited by the fact that the determination of the focus position has to be performed within the amount of time that elapses between the time when the focus position measurement for a certain region of the sample is performed, and the time when that same region is imaged. Another limiting factor is the speed with which the imaging optics' focus position can be adjusted.
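The two feed-forward constraints in the paragraph above (the look-ahead time x/v must exceed the computation time, and the offset must stay within the field of view) can be expressed directly; all numeric values below are assumed examples:

```python
def feed_forward_feasible(offset_x, scan_speed, t_compute, fov):
    """Feed-forward sketch of the constraints in the text: the
    look-ahead time offset_x / v must be larger than the time
    needed to compute the focus error signal, and the offset must
    keep the autofocus image within the objective's field of view."""
    look_ahead = offset_x / scan_speed
    return look_ahead > t_compute and offset_x < fov

# assumed example: 100 um offset, 10 mm/s scan, 5 ms computation,
# 500 um field of view -> 10 ms look-ahead, constraints satisfied
print(feed_forward_feasible(100e-6, 10e-3, 5e-3, 500e-6))
```

Shrinking the offset to 20 μm in this example would reduce the look-ahead to 2 ms, less than the assumed computation time, and the check would fail.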
(42) As a non-limiting example,
(43) The light passing through the slide 1 and the cover slip 2 (and tissue layer 4, not shown) is captured by the objective lens 20 with the back aperture 21, wherein the unscattered beams are blocked. A colour splitter 22 splits off the white light which is imaged by a tube lens 23 onto the image sensor arrangement, which may comprise a first, a second and a third primary image sensor 24, 32, 33, which may be adapted in the form of line sensors, for generating the digital tissue image. The infrared light is imaged by a second tube lens 25 onto the autofocus image sensor 26, which is tilted with respect to the optical axis 31 of radiation from the object of interest towards the autofocus image sensor 26. In the context of this disclosure tilted with respect to the optical axis of the primary image sensor means that the radiation from the object of interest which impinges on the autofocus image sensor does not impinge on the autofocus image sensor perpendicularly. However, the radiation which travels from the object of interest towards the primary image sensor may impinge perpendicularly on the primary image sensor, although this is not required. Rays scattered by the tissue can pass through the aperture 21 and are imaged onto the autofocus image sensor 26.
(45) A polarizing beam splitter 28 is provided to split the beam after it has passed the collimator lens 17. Furthermore, the microscope comprises a quarter-wave plate 29. Both elements 28 and 29 take care of directing the beam originating from the laser towards the objective lens and of directing the scattered light originating from the tissue towards the autofocus image sensor.
(47) In the embodiment of
(50) The described autofocus system finds application in digital pathology and other fields of rapid micro-scanning.
(51) While all embodiments so far reference two separate image sensors, i.e. the primary image sensor and the autofocus image sensor, it should be noted that, as mentioned before, in an additional embodiment these two sensors can be replaced by one image sensor which provides the same functionalities as the two separate sensors. This combined sensor may be adapted as a 2D image sensor and has to be tilted with respect to the optical axis. As may be understood from European patent application No 09306350, all or a subset of the pixels of the combined sensor are used to generate the autofocus image data, while another, potentially overlapping, set is used to generate the primary image data. These two sets of pixels can be different physical pixels captured simultaneously, or partially the same pixels captured sequentially, or a combination of both. As a non-limiting example of an implementation of this embodiment, one may use a 2D image sensor where all pixels are used to generate the autofocus image data, and a single row of pixels, perpendicular to the scan direction, is used to form a virtual line sensor, which is used to generate the primary image data.
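The combined-sensor example above (full frame for autofocus, one row as a virtual line sensor) can be sketched as follows; the row index is an assumed parameter:

```python
import numpy as np

def split_frames(frame, line_row):
    """Combined-sensor sketch: the full 2-D frame serves as the
    autofocus image data, while one row (perpendicular to the scan
    direction; the index is an assumption) acts as a virtual line
    sensor providing the primary image data."""
    autofocus_data = frame              # all pixels
    primary_line = frame[line_row, :]   # single row = virtual line sensor
    return autofocus_data, primary_line

# tiny synthetic frame, just to show the split
frame = np.arange(12).reshape(3, 4)
af, line = split_frames(frame, line_row=1)
print(line.tolist())  # the selected row of the frame
```

In a scanning system, stacking the virtual line-sensor rows from successive frames would build up the primary image while every full frame still feeds the autofocus algorithm.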
(52) While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practising the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.
List of Reference Signs
(53) 1 microscope slide
(54) 2 cover slip
(55) 3 mounting medium
(56) 4 tissue layer
(57) 5 oblique cross-section
(58) 6 scanning direction
(59) 7 axial direction
(60) 14 laser diode
(61) 15 two crossed gratings
(62) 16 field stop
(63) 17 collimator lens
(64) 18 stop for blocking 0th order light rays
(65) 20 objective lens
(66) 21 back aperture
(67) 22 colour splitter
(68) 23 tube lens
(69) 24 first primary image sensor
(70) 25 tube lens
(71) 26 autofocus image sensor
(72) 28 polarizing beam splitter
(73) 29 quarter-wave plate
(74) 31 optical axis
(75) 32 second primary image sensor
(76) 33 third primary image sensor
(77) 301 autofocus sensor tilt angle
(78) 302 z-range
(79) 303 curve
(80) 401 x-position on the image
(81) 402 signal value
(82) 403 outcome of the low level algorithm
(83) 404 outcome of the low level algorithm
(84) 405 S-curve
(85) 406 zero crossing
(86) 500 autofocus imaging system
(87) 800 processor
(88) 801 user interface
(89) 802 microscope system
(90) 901 method step
(91) 902 method step
(92) 903 method step
(93) 904 method step