IMAGE CORRECTION METHOD AND MICROSCOPE

20170102533 · 2017-04-13

Abstract

An image correction method is provided in which, in order to capture images, a scanning beam is guided over an object plane by a beam-directing element, and at detection times a brightness value of a detection signal of an object location scanned at the respective detection time is detected in the object plane, wherein actual positions of the beam-directing element and the positions of the object locations which are associated with the actual positions are known at every detection time. A pixel array with known positions of each pixel of the pixel array is defined in the object plane, a number of pixels adjacent to the object location is acquired, and the brightness value which is detected at an object location is assigned proportionally to the adjacent pixels of the object location as brightness value portions. Also provided is a microscope designed to carry out the image correction method.

Claims

1. An image correction method comprising: an image-capturing process comprising guiding a scanning beam over an object plane by means of a beam-directing element in order to capture images by acquiring image data comprising pixels; and at detection times, detecting in the object plane a brightness value of a detection signal of an object location scanned at the respective detection time; wherein actual positions of the beam-directing element and positions of the object locations that are associated with the actual positions are known at every detection time; wherein a pixel array with known positions of each pixel of the pixel array is defined in the object plane; wherein a number of pixels adjacent to the object location is acquired; and wherein the brightness value that is detected at an object location is assigned proportionally to the adjacent pixels of the object location as brightness value portions.

2. The image correction method according to claim 1; wherein the brightness value portions are acquired from the brightness value as a function of at least one weighting factor.

3. The image correction method according to claim 2; wherein the weighting factor is acquired as a function of spatial differences between the object location and the adjacent pixels.

4. The image correction method according to claim 1; wherein a summary brightness value is acquired for each pixel and stored, the summary brightness value comprising the brightness value portions assigned to the pixel at at least one detection time.

5. The image correction method according to claim 1; wherein the brightness value portions of an object location are acquired and assigned to the adjacent pixels of the object location after the image-capturing process has ended.

6. The image correction method according to claim 1; wherein the brightness value portions of an object location are acquired and assigned to the adjacent pixels of the object location during the image-capturing process.

7. The image correction method according to claim 1; wherein the positions of the object locations that are associated with the actual positions are acquired computationally by a simulation; and wherein, for each scanning path along which the scanning beam is guided, a scanning function is acquired that describes the relationship between the actual positions and the associated positions of the object locations.

8. The image correction method according to claim 7; wherein the simulation comprises the steps of: generating the pixel array; calculating a geometric distortion of the image data; calculating the addresses of the adjacent pixels; determining the weighting factors; and calculating the brightness value portions.

9. The image correction method according to claim 1, further comprising: after the image-capturing process, applying for each pixel a brightness correction value to the brightness value portions of selected pixels.

10. The image correction method according to claim 1; wherein a number of detected object locations within the adjacent pixels is kept constant for an image obtained by the image-capturing process.

11. The image correction method according to claim 1; wherein the object plane is exposed and scanned simultaneously with a plurality of regions illuminated on the object plane.

12. A microscope for acquiring image data and for storing at least one portion of the image data, the microscope comprising: a beam-directing element configured to guide a scanning beam over an object plane in order to capture images; a brightness value detector configured to detect in the object plane, at detection times, a brightness value of an object location scanned at a respective detection time; and a storage and computer unit configured to assign a brightness value, detected at a detection time at an object location located in the object plane, proportionally to adjacent pixels of the object location as brightness value portions, the adjacent pixels belonging to a pixel array that is defined in the object plane and has pixels of known positions.

13. The microscope according to claim 12; wherein the beam-directing element is a mirror that is adjustable in a controlled fashion.

14. The microscope according to claim 12; wherein the beam-directing element is a concave mirror.

15. The microscope according to claim 12, further comprising: a field programmable gate array (FPGA) circuit in the storage and computer unit.

16. The image correction method according to claim 4, further comprising: after the image-capturing process, applying for each pixel a brightness correction value to the summary brightness value of selected pixels.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0053] FIG. 1 shows a schematic illustration of a first exemplary embodiment of a microscope.

[0054] FIG. 2 shows a schematic illustration of a pixel array and a number of scanning curves of a scanning beam.

[0055] FIG. 3 shows a schematic illustration of a second exemplary embodiment of a microscope.

DETAILED DESCRIPTION OF EMBODIMENTS

[0056] It is to be understood that the figures and descriptions of the present invention have been simplified to illustrate elements that are relevant for a clear understanding of the present invention, while eliminating, for purposes of clarity, many other elements which are conventional in this art. Those of ordinary skill in the art will recognize that other elements are desirable for implementing the present invention. However, because such elements are well known in the art, and because they do not facilitate a better understanding of the present invention, a discussion of such elements is not provided herein.

[0057] The present invention will now be described in detail on the basis of exemplary embodiments.

[0058] In FIG. 1 an exemplary embodiment of a microscope 1 is illustrated schematically, which microscope 1 is designed to detect brightness values at object locations 2 in an object plane 3. The microscope 1 has a radiation source 4 for making available electromagnetic radiation 5, for example laser radiation. The electromagnetic radiation 5 is shaped by means of optical elements 6, for example by means of optical lenses, to form a scanning beam 7 which is directed onto a beam-directing element 8 in the form of an MEMS scanner. A microscope optic 11, which comprises a scanning optic, a tube lens and an objective, is arranged between the beam-directing element 8 and the object plane 3. The scanning beam 7 is reflected through the microscope optic 11 onto the object plane 3 by means of the beam-directing element 8. The beam-directing element 8 is connected to at least one actuating device 9, by which the beam-directing element 8 can be moved into actual positions 12 (symbolized by arrows) and can be pivoted in an X direction X and Y direction Y of a Cartesian coordinate system. The actuating device 9 is connected, in a form suitable for the transmission of control commands, to a control and computer unit 10. Actuating movements of the beam-directing element 8 are brought about on the basis of the control commands which are generated by the control and computer unit 10 and transmitted to the actuating device 9, and the scanning beam 7 is guided along at least one scanning path 16.n (n=positive integer) over the object plane 3.

[0059] The control and computer unit 10 is equipped with a computer unit such as an FPGA circuit 17, which makes parallel image processing possible during an image-capturing process. In further embodiments, the control and computer unit 10 is equipped with at least one ASIC (application-specific integrated circuit), DSP (digital signal processor), ARM processor and/or controller.

[0060] A pixel array 13, which has pixels 14 of equal size arranged uniformly in rows 13.1 and columns 13.2, is virtually superimposed on the object plane 3.

[0061] In order to scan the object plane 3, in which a sample (not illustrated in more detail) is present, by means of the scanning beam 7, the beam-directing element 8 can be moved into actual positions 12 in such a way that the scanning beam 7 would be guided along the individual rows 13.1 if no imaging errors occurred. The object location 2 at which the scanning beam 7 impinges on the object plane 3 is known for all the actual positions 12 of the beam-directing element 8, that is, at every combination of its possible orientations in the X direction X and in the Y direction Y.

[0062] At the object location 2, illustrated by a dashed circular ring, the action of the scanning beam 7 excites a compound suitable for the emission of photons, so that photons are emitted. The photons emitted by the compound at the object location 2 are detected as a detection signal by means of a detection unit 15, from which detection signal a brightness value of the object location 2 is acquired and fed to the control and computer unit 10. In the control and computer unit 10, the brightness value is assigned to the object location 2, for example to its X-Y coordinates in the object plane 3, and stored in the pixel array 13 using a memory 18. In the illustrated exemplary embodiment, the detection unit 15 is embodied as a photoelectron multiplier (photomultiplier tube).

[0063] In further possible embodiments of the microscope 1, the detection unit 15 is embodied, for example, as an avalanche photodiode.

[0064] A refinement of the method according to the invention will be explained in more detail with reference to FIG. 2 on the basis of the explanations given with respect to FIG. 1.

[0065] The scanning beam 7 (see FIG. 1) is guided in a controlled fashion along, in each case, one scanning path 16.n (n=1 to 6), running from row to row, over the object plane 3. Despite linear actuation of the beam-directing element 8, non-linear guidance of the scanning beam 7 occurs in the object plane 3, which is referred to as geometric distortion. Owing to the geometric distortion, a local position error occurs between actual positions 12 of the beam-directing element 8 and a theoretical impact point of the scanning beam 7 in the object plane 3.

[0066] In further refinements of the method, the scanning beam 7 is not guided row by row but rather in any other desired pattern over the object plane 3.

[0067] The distortions occur uniformly in the X direction X and Y direction Y if, for example, the mirrored surface of the beam-directing element 8 is inclined at an angle with respect to the object plane 3. The region of the object plane 3 that can be scanned with the scanning beam 7, also referred to as the scanning field, is reduced, for example, by a factor given by the cosine of the inclination angle.
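The cosine reduction of the scanning field described above can be illustrated with a short calculation (an illustrative sketch, not part of the original disclosure; the function name and the numerical values are assumptions):

```python
import math

def scan_field_extent(nominal_extent, inclination_deg):
    """Extent of the scannable region when the mirrored surface of the
    beam-directing element is inclined with respect to the object plane:
    the scanning field shrinks by the cosine of the inclination angle."""
    return nominal_extent * math.cos(math.radians(inclination_deg))

# A nominal field of 100 units scanned via a mirror inclined by 10 degrees:
reduced = scan_field_extent(100.0, 10.0)
```

For small inclination angles the reduction is slight, which is why the effect matters mainly in combination with the other distortion sources discussed below.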

[0068] The impact point, also referred to as a spot, of the scanning beam 7 on the object plane 3 (object location 2) is guided here along a curved scanning path 16.n over the object plane 3. A scanning path 16.n which is curved in this way does not run along the respective rows 13.1 of the pixel array 13 but instead deviates from them at least partially.

[0069] In addition to the geometric distortion, aberration effects of the optical elements 6 and their arrangement along the beam path contribute to the production and characteristics of the respective scanning paths 16.1 to 16.6.

[0070] In order to be able to capture images, a transmission function or scanning function F is acquired at least once, by means of which function F the relationship between the actual position 12 and the associated object location 2 is described with sufficient precision. For this purpose, the associated impact point of the scanning beam 7 is acquired as an object location 2 for a number of actual positions 12 of the beam-directing element 8 by carrying out practical testing and/or simulating the profiles of the scanning paths 16.n.

[0071] In practical tests, a test pattern, for example a Siemens star arranged in the object plane 3, is scanned.

[0072] In a simulation, the expected and/or empirically acquired system-induced imaging errors are taken into account. The simulation is programmed, for example, as a Simulink model and is composed essentially of the following modules: generation of the pixel array 13 and of a test pattern; calculation and/or estimation of a geometric distortion of the image data; calculation of the addresses of the adjacent pixels 14 of a respective object location 2; determination of the weighting factors; calculation of the brightness value portions and, if appropriate, of the summary brightness values, as well as their storage; and the noise correction.
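The distortion-estimation module of such a simulation can be sketched as follows (illustrative only; the patent does not specify a distortion model, so the radial warp and the coefficient k used here are assumptions):

```python
import numpy as np

def simulate_scan_paths(n_paths, n_samples, k=0.05):
    """Warp ideal row-by-row scan positions with a simple radial
    distortion term to produce curved scanning paths, analogous to the
    geometric distortion estimated in the simulation (model assumed).
    Coordinates are normalized to [-1, 1] over the scanning field."""
    ys, xs = np.meshgrid(np.linspace(-1.0, 1.0, n_paths),
                         np.linspace(-1.0, 1.0, n_samples), indexing="ij")
    r2 = xs**2 + ys**2  # squared distance from the scan-field center
    # points far from the center are displaced radially outward
    return xs * (1.0 + k * r2), ys * (1.0 + k * r2)

# Six curved scanning paths, cf. scanning paths 16.1 to 16.6 in FIG. 2:
x_d, y_d = simulate_scan_paths(6, 64)
```

Each returned row is one simulated scanning path 16.n; the deviation between these curves and the ideal rows 13.1 is what the scanning function F has to capture.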

[0073] The scanning function F is stored by way of example in the memory 18 in such a way that it can be retrieved repeatedly.

[0074] The respective actual positions 12 of the beam-directing element 8 are assigned to the associated positions of the object locations 2, and a scanning function F is acquired (only designated at two scanning paths 16.n) by means of which the relationship between the actual positions 12 and the associated positions of the object locations 2 is described for each scanning path 16.n. The specific object location 2 at which the scanning beam 7 will impinge on the object plane 3 and excitation of, for example, the emission of photons occurs or can occur is therefore known at given actual positions 12.

[0075] In FIG. 2, the beam-directing element 8 is actuated by the control and computer unit 10 at an illustrated detection time t1 and aligned with the first scanning path 16.1 with known actual positions 12. The further scanning paths 16.2 to 16.6 are additionally illustrated. The object location 2 associated with the actual positions 12 is known by means of the scanning function F of the first scanning path 16.1, or can be acquired on the basis of the scanning function F. The scanning beam 7 impinges on the object plane 3 at the object location 2 and excites the emission of photons there. The photons are detected by means of the detection unit 15 as a detection signal, which is converted into a brightness value of the object location 2 and stored in the memory 18 by the control and computer unit 10, assigned to the actual positions 12 and the detection time t1.

[0076] The four pixels 14 which are adjacent to the object location 2 are acquired by carrying out, for example, a periphery search around the object location 2, and the coordinates or addresses of the four adjacent pixels 14 are made available for the further execution of the image correction method.

[0077] The respective distances of the object location 2 from each of the adjacent pixels 14 are acquired from the coordinates of the four adjacent pixels 14 and the coordinates of the object location 2 on the basis of their deviations in the X direction X and the Y direction Y.

[0078] A weighting factor is calculated and stored for each of the distances acquired in this way. In a refinement of the image correction method, each weighting factor can be between 0 and 1, wherein the weighting factors of an object location add up to 1.

[0079] The weighting factors are, for example, inversely proportional to the acquired distance between the object location 2 and the respective pixel 14. The brightness value is multiplied by the respective weighting factor and the brightness value portion which is obtained in this way is stored assigned to the respective adjacent pixel 14. The adjacent pixels 14 are illustrated as open circular rings for the sake of better clarity.

[0080] After the brightness value portions have been calculated for all four adjacent pixels 14 and assigned to them, the brightness value is apportioned computationally completely to the four adjacent pixels 14.
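The scatter step described in paragraphs [0076] to [0080] can be sketched minimally as follows (an illustrative sketch, not the patented implementation; bilinear weights are used here as one choice of distance-based weighting that lies between 0 and 1 and sums to 1, and all names are assumptions):

```python
import numpy as np

def scatter_brightness(image, x, y, brightness):
    """Assign a brightness value detected at object location (x, y)
    proportionally to the four adjacent pixels as brightness value
    portions; portions landing on the same pixel accumulate into a
    summary brightness value."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    # bilinear weights: each between 0 and 1, and they add up to 1
    for dx, dy, w in ((0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                      (0, 1, (1 - fx) * fy),       (1, 1, fx * fy)):
        px, py = x0 + dx, y0 + dy
        if 0 <= px < image.shape[1] and 0 <= py < image.shape[0]:
            image[py, px] += w * brightness

img = np.zeros((4, 4))
scatter_brightness(img, 1.25, 2.5, 100.0)  # brightness fully apportioned
```

Note that an object location lying exactly on a pixel receives the weight 1 there and 0 at the other adjacent pixels, which matches the special case described in paragraph [0083].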

[0081] At all times during the image-capturing process of the image correction method, an equalized (component) image of the scanned object plane is advantageously available.

[0082] If a pixel 14 is an adjacent pixel 14 of a plurality of object locations 2, the individual brightness value portions which are assigned to this pixel 14 are added to form a summary brightness value.

[0083] If the object location 2 lies precisely on a pixel 14, the entire brightness value is assigned to this pixel 14. This pixel 14 is assigned, for example, a weighting factor of one, while the other adjacent pixels 14 are given a weighting factor of zero.

[0084] This procedure is repeated at every object location 2 scanned at a detection time, until the scanning beam 7 is guided along all the scanning paths 16.n and the object plane 3 is scanned.

[0085] Owing to the non-homogeneous distances between the scanning paths 16.n, a non-uniform distribution of the brightness value portions to the pixels 14 and therefore lateral brightness differences can occur in the X direction X and Y direction Y.

[0086] In particular in the case of a non-linear movement of the beam-directing element 8, as occurs, for example, with resonant mirrors, fewer object locations 2 are available to the image processing at a high scanning speed and with constant intervals between the detection times, in particular at the edges of the object plane 3, than at relatively low scanning speeds. As a result, the captured image appears darker towards its edges.

[0087] The brightness differences can be corrected for each pixel 14 by means of a noise correction system arranged downstream of the image-capturing process.
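One possible realization of such a downstream correction (an assumption; the patent does not specify the correction rule) is to normalize each pixel's summary brightness value by the total weight it accumulated, so that pixels crossed by densely spaced scanning paths are not rendered disproportionately bright:

```python
import numpy as np

def correct_brightness(summary, weight_sum, eps=1e-12):
    """Divide each pixel's summary brightness value by the sum of the
    weighting factors it received; pixels that received no portions
    remain zero. (Illustrative normalization, not the patented rule.)"""
    out = np.zeros_like(summary)
    hit = weight_sum > eps
    out[hit] = summary[hit] / weight_sum[hit]
    return out

# Two pixels: one hit by four half-weight portions of value 100,
# one by a single half-weight portion of value 100.
corrected = correct_brightness(np.array([[200.0, 50.0]]),
                               np.array([[2.0, 0.5]]))
```

After normalization, both pixels report the same underlying brightness even though they received different numbers of portions, which is the lateral-brightness equalization the paragraph above calls for.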

[0088] As an alternative to the noise correction, in a further refinement of the image correction method the number of object locations 2 within four pixels 14 is kept constant over the entire object plane 3.

[0089] Compared to image correction methods which are based on what are referred to as look-up tables, the proposed image correction method can be carried out more quickly and requires less computing time and computing power.

[0090] The pixels 14 are stored together with their brightness value portions or their summary brightness value portions and their coordinates as image data in the memory 18. In order to generate an image, the image data is retrieved from the memory 18 and combined to form the image by means of an image generator (not illustrated). Since the pixels 14 which can be displayed are located at their correct coordinates of the pixel array 13, the image is not distorted. The image can be displayed with a display means (not illustrated either), for example a screen, a display, a printer and/or by means of a projector.

[0091] A microscope 1 suitable for simultaneous scanning with a plurality of scanning beams 7.n is shown schematically in FIG. 3. A number of radiation sources 4.n are present, designated as radiation sources 4.1 to 4.4 in the illustrated exemplary embodiment and each embodied as a laser light source. Each of the scanning beams 7.1 to 7.4 is or can be directed onto object locations 2 in the object plane 3 by means of the beam-directing element 8, which is formed here by a quasi-statically operated first scanning mirror 8.1 and a resonantly operated second scanning mirror 8.2.

[0092] A specific section B.1 to B.n, where n=number of scanning beams 7.n, of the object plane 3 is scanned by each of the scanning beams 7.1 to 7.4. In the exemplary embodiment, four sections B.1 to B.4 are scanned, which can overlap slightly at their edges in order to achieve complete scanning of the object plane 3. For example, in each section B.1 to B.4 a scanning path (not denoted in more detail; see FIG. 2) is shown; for the sake of better clarity, these paths are shown extending over the edges of the respective sections B.1 to B.4.

[0093] It is possible to acquire current actual positions 12 by means of an actual position detector 12.1, for example in the form of a 4-quadrant photodiode.

[0094] The actual position data is fed to the control and computer unit 10 (symbolized by arrows), which is illustrated four times here for the sake of better clarity. The control and computer unit 10 receives brightness values, detected by the detection unit 15, for each object location 2. In the illustrated exemplary embodiment, each of the sections B.1 to B.4 is assigned to the detection range of a detection unit 15. The brightness values detected in the respective sections B.1 to B.4 are fed to the control and computer unit 10, and an equalized image of the respective section B.1 to B.4 is generated. The partial images thus obtained, shown schematically, can be combined in a further step to form a composite image.

[0095] While this invention has been described in conjunction with the specific embodiments outlined above, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. Accordingly, the preferred embodiments of the invention as set forth above are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the inventions as defined in the following claims.

REFERENCE SYMBOLS

[0096] 1 Microscope
[0097] 2 Object location
[0098] 3 Object plane
[0099] 4 Radiation source
[0100] 4.n n-th radiation source (n=1 to 4)
[0101] 5 Electromagnetic radiation
[0102] 6 Optical element
[0103] 7 Scanning beam
[0104] 7.n n-th scanning beam (n=1 to 4)
[0105] 8 Beam-directing element
[0106] 8.1 First scanning mirror
[0107] 8.2 Second scanning mirror
[0108] 9 Actuating device
[0109] 10 Control and computer unit
[0110] 11 Microscope optic
[0111] 12 Actual position
[0112] 12.1 Actual position detector
[0113] 13 Pixel array
[0114] 13.1 Row
[0115] 13.2 Column
[0116] 14 Pixel
[0117] 15 Detection unit
[0118] 16.n Scanning path (n=1 to 6)
[0119] 17 FPGA circuit
[0120] 18 Memory
[0121] B.n Section (of object plane 3)
[0122] X X direction
[0123] Y Y direction
[0124] Z Z direction
[0125] F Scanning function