Method, computer program and microscope system for processing microscope images

11579429 · 2023-02-14

Abstract

In a method for processing microscope images, at least one microscope image is provided as input image for an image processing algorithm. An output image is created from the input image by means of the image processing algorithm. The creation of the output image comprises adding low-frequency components for representing solidity of image structures of the input image to the input image, wherein the low-frequency components at least depend on high-frequency components of these image structures and wherein high-frequency components are defined by a higher spatial frequency than low-frequency components. A corresponding computer program and microscope system are likewise described.

Claims

1. A method for processing microscope images, comprising: inputting at least one microscope image as input image into an image processing algorithm; creating an output image from the input image by means of the image processing algorithm, wherein creating the output image comprises adding low-frequency components for representing solidity of image structures of the input image to the input image, wherein the low-frequency components at least depend on high-frequency components of these image structures and wherein high-frequency components are defined by a higher spatial frequency than low-frequency components.

2. The method as defined in claim 1, wherein the image processing algorithm comprises a machine learning algorithm which is trained to add the low-frequency components to the input image.

3. The method as defined in claim 2, wherein the machine learning algorithm comprises a neural network with an encoder-decoder structure, the decoder of which ascertains the low-frequency components which are added to the input image.

4. The method as defined in claim 2, wherein the machine learning algorithm is trained with a loss function, which not only considers deviations between output images calculated from training data and associated target images but also penalizes or prevents an addition of spatial frequency components with increasing spatial frequency or above a spatial frequency limit.

5. The method as defined in claim 4, wherein said spatial frequency limit or a spatial frequency dependence of the aforementioned addition of spatial frequency components is defined in the loss function in dependence of an image content of the input image.

6. The method as defined in claim 4, wherein the image processing algorithm produces the output image in a first work step and supplies the output image to a verification algorithm in a second work step, wherein the verification algorithm assesses the output image in respect of image processing artefacts, wherein, depending on an assessment result, the first work step is repeated for producing a new output image, with the stipulation that the spatial frequency limit or the spatial frequency upper limit is reduced, and the second work step is subsequently carried out in relation to the new output image.

7. The method as defined in claim 6, wherein the verification algorithm compares the output image to the input image and assesses whether the added low-frequency components observe a predetermined frequency upper limit.

8. The method as defined in claim 6, wherein the verification algorithm comprises a machine learning verification algorithm which is trained using training data which comprise input images and associated output images created by the image processing algorithm.

9. The method as defined in claim 2, wherein training of the machine learning algorithm is carried out, within the scope of which microscope images are used as input images and differential interference contrast images spatially registered to the microscope images are used as target images.

10. The method as defined in claim 1, wherein the image processing algorithm defines a spatial frequency upper limit on the basis of an image content or a frequency distribution of the input image, and wherein the image processing algorithm only adds low-frequency components that lie below the spatial frequency upper limit to the input image.

11. The method as defined in claim 1, wherein the image processing algorithm adds the low-frequency components to the input image and wherein the low-frequency components do not represent an amplification or multiplication of low-frequency components already present in the input image.

12. The method as defined in claim 1, wherein the low-frequency components to be added are defined on the basis of context information relating to the microscope image.

13. A computer program comprising commands stored on a non-transitory computer-readable medium which, when executed by a computer, cause the method as defined in claim 1 to be carried out.

14. A microscope system, comprising: a microscope for recording a microscope image; and a computing device configured to carry out an image processing algorithm, wherein the image processing algorithm creates an output image from the microscope image as input image, wherein the creation of the output image comprises adding low-frequency components for representing solidity of image structures of the input image to the input image, wherein the low-frequency components at least depend on high-frequency components of these image structures and wherein high-frequency components are defined by a higher spatial frequency than low-frequency components.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) A better understanding of the invention and various other features and advantages of the present invention will become readily apparent from the following description in connection with the schematic drawings, which are shown by way of example only, and not limitation, wherein like reference numerals may refer to like or substantially similar components:

(2) FIG. 1 schematically shows a microscope image and an output image, as can be calculated by the invention;

(3) FIG. 2 illustrates an exemplary embodiment of a method according to the invention;

(4) FIG. 3 illustrates a further exemplary embodiment of a method according to the invention;

(5) FIG. 4 illustrates yet a further exemplary embodiment of a method according to the invention; and

(6) FIG. 5 schematically shows an exemplary embodiment of a microscope system according to the invention.

DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

(7) FIG. 1

(8) FIG. 1 shows a microscope image B, which is a contrast image in this example. Therein, phase changes of the transmitted light, which are attributable to the sample, are represented by brightness differences.

(9) FIG. 1 moreover shows an output image 3, as can be calculated from the microscope image B in exemplary fashion by the present invention. The output image 3 contains the image information of the microscope image B and differs from the latter in conveying a solid impression. In this example, the image content comprises a plurality of biological cells. These image structures appear three-dimensional in the output image 3, which makes the visual evaluation easier for an observer in comparison with the original representation in the microscope image B.

(10) In terms of the solid impression, the output image 3 is similar to differential interference contrast (DIC) images. In the latter, a spatial offset between interfering partial beams, which have experienced different phase changes, leads to brightened and shadowed regions at object edges. This yields a three-dimensional impression, which need not correspond to an actual 3D profile of the sample but serves for a quicker and easier perception and assessment by a microscope user.

(11) Exemplary embodiments according to the invention, which can calculate the output image 3 from the microscope image B, are described with reference to the following figures. An essential aspect of the invention is that a falsification of the image content is excluded in the process. In particular, it is ensured that no new image structures, for instance new cells or cell details, are added as a result of the image processing. As a matter of principle, this problem exists in known machine learning algorithms, as specified in the introductory section relating to the prior art.

(12) The invention exploits the discovery that the spatial impression of the output image 3 can be achieved by complementing the microscope image B with a superposition composed of low spatial frequencies. Higher spatial frequencies, by contrast, are decisively responsible for the relative position and course of the visible edges of the image structures, i.e., the cell edges. Because no, or hardly any, higher-frequency components are added, a falsification of existing image structures or an addition of new image structures can be avoided.

Exemplary Embodiment of FIG. 2

(13) One exemplary embodiment of a method according to the invention will be described with reference to FIG. 2. The example serves for better understanding; in actual implementations, a plurality of steps can be carried out in computationally efficient fashion by way of a single operation, or can be modified, as will also be explained below.

(14) The example of FIG. 2 uses an image processing algorithm 10, which is based on a machine learning algorithm M. The processes of a learning procedure are illustrated. The training data are formed by a plurality of microscope images B, which are supplied as input images 1 to the machine learning algorithm M in step S1. An associated target image 5 is provided for each microscope image B of the training data. A microscope image B and an associated target image 5 could be images of the same sample region recorded using different microscopy techniques. By way of example, the target images 5 could be DIC images while the microscope images B are DPC images.

(15) The machine learning algorithm M comprises a neural network P, which is formed by an encoder-decoder structure in this case. From an input microscope image B, the encoder E produces a feature vector which, in principle, could have any dimensionality. The feature vector is the input for the decoder D, which outputs an image therefrom in step S2. This image should be formed from low spatial frequencies and is therefore referred to as the low-frequency components N. The low-frequency components N are added to the input image 1 in step S3, as a result of which the output image 3 is produced. Once the machine learning algorithm has been trained, the output image 3 corresponds to the output image 3 shown in FIG. 1.
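The forward pass through the encoder-decoder structure can be sketched in a few lines. The following is a minimal illustrative sketch only, not the patented implementation: the image and feature-vector dimensions, the `tanh` activation and the random stand-in weights are assumptions made here, since the patent does not fix the network architecture in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a flattened 8x8 input image 1 and a small feature vector.
IMG_SIZE, LATENT = 64, 8

# Randomly initialised weights stand in for the parameters learnt during training.
W_enc = rng.standard_normal((LATENT, IMG_SIZE)) * 0.1   # encoder E
W_dec = rng.standard_normal((IMG_SIZE, LATENT)) * 0.1   # decoder D

def forward(input_image):
    """Steps S1-S3: encode, decode the low-frequency components N, add to the input."""
    x = input_image.ravel()                    # step S1: input image 1
    feature_vector = np.tanh(W_enc @ x)        # encoder E produces a feature vector
    n = W_dec @ feature_vector                 # step S2: low-frequency components N
    return (x + n).reshape(input_image.shape)  # step S3: output image 3
```

In a trained network, W_enc and W_dec would be the learnt weights and the decoder output would indeed be band-limited; here only the structure of steps S1 to S3 is illustrated.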

(16) For clarity, it is noted that the processes can be carried out in the spatial domain, in the frequency domain, or partly in the spatial and partly in the frequency domain. In the spatial domain, the images (i.e., the input image 1, the output image 3 and the low-frequency components N) are each representable as a 2D matrix of brightness values. By way of a frequency transformation, for example a Fourier transform, the representation in the spatial domain can be converted into a representation in the frequency domain. In modifications of the illustrated embodiment, the frequency transforms of the microscope images B can also be supplied to the machine learning algorithm as input images (which are then representations in the frequency domain). Likewise, or alternatively, the low-frequency components N calculated for an input image 1 can be output in the frequency domain and only added to the associated input image 1 following a transformation into the spatial domain.
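The conversion between the two domains can be sketched with a discrete Fourier transform. This is a minimal sketch, assuming the variant in which the components N are produced in the frequency domain and added in the spatial domain; the function name is a label chosen here for illustration, not taken from the patent.

```python
import numpy as np

def add_in_frequency_domain(input_image, n_freq):
    """Add components N given in the frequency domain to a spatial-domain image.

    n_freq is the 2D Fourier representation of the low-frequency components N;
    it is transformed back into the spatial domain before the addition (step S3).
    """
    n_spatial = np.fft.ifft2(n_freq).real
    return input_image + n_spatial

# Round trip: spatial domain -> frequency domain -> spatial domain is lossless,
# so both representations carry the same information.
img = np.arange(16.0).reshape(4, 4)
assert np.allclose(np.fft.ifft2(np.fft.fft2(img)).real, img)
```

Adding a zero spectrum leaves the image unchanged, which corresponds to adding no components N at all.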

(17) During the learning procedure, the calculated output images 3 are supplied to a loss function L. The loss function L calculates a measure of the correspondence between an output image 3 and an associated target image 5, both of which belong to the same input image 1. This measure can also be regarded as a penalty value: it is larger, the smaller the correspondence between the output image 3 and the target image 5. Conventional loss functions L calculate a deviation or distance R between the pixels, for example by means of a sum of squared deviations between locally corresponding pixels in the output image 3 and the target image 5. However, the loss function L of the exemplary embodiment according to the invention is not only a function of such a distance R. Rather, the loss function L is also dependent on the frequency values f of the added low-frequency components N, indicated schematically in FIG. 2 as f.sub.N. The higher f.sub.N, the higher the value (the penalty value) of the loss function L. By way of example, the low-frequency components N can comprise different frequencies, each with a different amplitude. These various frequencies, weighted by their respective amplitudes, can be incorporated in summed fashion into the loss function L. The loss function L is therefore not only a measure of how well a calculated output image 3 corresponds to an associated target image 5, but also a measure of whether low or high spatial frequencies were added in the calculation of the output image 3. The frequency dependence in the loss function can be expressed by a parameter f.sub.0, which can represent a spatial frequency limit. More penalty points are awarded to added frequency components f.sub.N above the spatial frequency limit f.sub.0 than to added frequency components f.sub.N below the spatial frequency limit f.sub.0. Optionally, the penalty can increase, the further the added frequencies f.sub.N exceed the spatial frequency limit f.sub.0.
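The frequency-dependent loss described above can be sketched as follows. This is a sketch under stated assumptions: the mean squared error as the distance R, a radial spatial frequency obtained with `numpy.fft.fftfreq`, and a penalty growing linearly above f.sub.0 are choices made here for illustration; the patent leaves the exact functional form open.

```python
import numpy as np

def loss_L(output, target, n, f0, penalty_weight=1.0):
    """Loss L = distance R plus an amplitude-weighted penalty for added
    frequency components f_N above the spatial frequency limit f0."""
    r = np.mean((output - target) ** 2)       # deviation term R (squared deviations)

    spectrum = np.abs(np.fft.fft2(n))         # amplitudes of the added components N
    fy = np.fft.fftfreq(n.shape[0])[:, None]
    fx = np.fft.fftfreq(n.shape[1])[None, :]
    f = np.hypot(fy, fx)                      # radial spatial frequency f_N

    # Frequencies below f0 incur no penalty; above f0 the penalty grows linearly,
    # weighted by the amplitude of the respective frequency component.
    penalty = np.sum(spectrum * np.clip(f - f0, 0.0, None)) / (spectrum.sum() + 1e-12)
    return r + penalty_weight * penalty
```

A constant (purely low-frequency) component N is thereby penalised less than a rapidly oscillating component of the same amplitude, which is exactly the behaviour the loss function L is meant to enforce.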

(18) The spatial frequency limit f.sub.0 can be a specified constant or a variable in the loss function. By way of example, the variable can depend on context information, for example an illumination aperture for an input image.

(19) From the result of the loss function L, an optimization function O calculates how the learnable parameters (weights) of the neural network P should be altered. Updated low-frequency components N are calculated using the updated parameter values, and the described steps are repeated until parameters that minimize the loss function L have been ascertained.

(20) As inputs, the loss function L need not necessarily receive all of the output images 3, the low-frequency components N and the input images 1; rather, two of these three suffice, as indicated by the dashed arrows.

(21) A summation in step S3 is shown as a simple example of a superimposition of the low-frequency components N on the input images 1. However, other calculation operations can also be used to combine the low-frequency components N and the input image 1.

(22) In all present descriptions, the loss function L can also be replaced by a reward function which, in contrast to the loss function L, should be maximized. In the reward function, the dependencies in relation to frequency f.sub.N, the spatial frequency limit f.sub.0 and the deviations R are reversed, i.e., the reward function increases with smaller deviations R, lower added frequencies f.sub.N and when f.sub.0 is undershot.

(23) The neural network P can also be formed differently than by an encoder-decoder model.

(24) A further modification is described below with reference to FIG. 3.

Exemplary Embodiment of FIG. 3

(25) FIG. 3 illustrates an exemplary embodiment of a method according to the invention, which differs from FIG. 2 in that the neural network P calculates an output image 3 directly from an input image 1, labelled as step S3. Thus, the neural network P need not necessarily calculate or output the low-frequency components N explicitly in this case.

(26) An exemplary design of the neural network P comprises a residual skip structure: here, the low-frequency components N are initially calculated in a manner analogous to FIG. 2, and both the low-frequency components N and the input image 1 are input into a subsequent layer of the neural network. The input image 1 consequently skips layers of the neural network P.

(27) In the loss function L, the implicitly added low-frequency components N can be reconstructed by a comparison of the input image 1 with the associated output image 3.
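The residual skip structure and the loss-side reconstruction of the implicitly added components can be sketched as follows; `inner_layers` is a hypothetical placeholder chosen here for the trainable part of the network.

```python
import numpy as np

def residual_forward(input_image, inner_layers):
    """Residual skip: the input image 1 bypasses the inner layers and is re-added,
    so the inner layers only have to produce the low-frequency components N."""
    n = inner_layers(input_image)     # implicitly calculated components N
    return input_image + n            # skip connection yields the output image 3

def reconstruct_n(input_image, output_image):
    """Loss-side reconstruction of the implicitly added components N."""
    return output_image - input_image
```

With an additive combination in step S3, subtracting the input image from the output image recovers N exactly; for other combination operations, as mentioned in paragraph (21), the reconstruction would have to be adapted accordingly.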

Exemplary Embodiment of FIG. 4

(28) FIG. 4 illustrates the progress of an exemplary method of the invention for processing a microscope image B. While the preceding figures represented the training procedure, FIG. 4 shows the use of the already trained image processing algorithm 10.

(29) The already trained machine learning algorithm M can be restricted to the neural network P as described in the previous figures, wherein the functionalities for training the neural network P, i.e., for defining the weights thereof, are not required here.

(30) In step S1, a microscope image B is supplied as input image 1 to the image processing algorithm 10, which calculates an output image 3 therefrom in step S3, as also shown in FIG. 1 and described above in respect of the training procedure.

(31) FIG. 4 additionally includes an optional verification procedure, which is intended to ensure that no image processing artefacts were added to the input image 1. Such a safety step is particularly expedient if the image processing algorithm 10 is based on a machine learning algorithm.

(32) In step S4, the output image 3 is supplied to a verification algorithm V. The latter compares the output image 3 to the input image 1 and assesses a frequency distribution of the differences between these images. These differences, i.e., the frequencies f.sub.N of the added low-frequency components, can be compared, for example, to a specified value of a frequency upper limit f.sub.G. For this comparison, use can be made of a variable derived or accumulated from the frequencies f.sub.N, for example their mean.

(33) If f.sub.N is less than f.sub.G, it is ensured that no high-frequency components were added which could falsify or remove the image structures of the input image 1 or could lead to a “hallucination” of newly added structures. The image processing is therefore assessed as correct and the output image 3 is output in step S5.

(34) By contrast, if f.sub.N is greater than f.sub.G, the verification algorithm V assesses the output image 3 as potentially falsified and prompts renewed image processing by the image processing algorithm 10. In the process, an image processing parameter is altered in order to suppress an addition of higher-frequency image components. By way of example, the machine learning algorithm M shown in FIG. 2 or 3 could be trained in advance with different values of the parameter f.sub.0 in a plurality of training iterations. As a result, different loss functions L are used, which differ in the frequencies above which penalty points are awarded, or in how strongly the award of penalty points increases with increasing frequency of the added components N. A plurality of neural networks P are thereby ascertained, which differ in terms of the underlying parameter f.sub.0. If the verification algorithm V in FIG. 4 now prompts renewed image processing, the parameter value f.sub.0 is altered to a smaller value f.sub.0′ and the correspondingly associated neural network P is selected. This reduces the probability of image processing artefacts appearing in the new output image 3.
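The verification loop of steps S4 and S5 with fallback to a smaller f.sub.0 can be sketched as follows. This is a sketch under assumptions made here for illustration: the amplitude-weighted mean as the accumulated variable compared against f.sub.G, and a mapping from f.sub.0 values to separately trained networks as the selection interface.

```python
import numpy as np

def mean_added_frequency(input_image, output_image):
    """Amplitude-weighted mean spatial frequency f_N of the added components."""
    diff = output_image - input_image            # the added components N
    spectrum = np.abs(np.fft.fft2(diff))
    fy = np.fft.fftfreq(diff.shape[0])[:, None]
    fx = np.fft.fftfreq(diff.shape[1])[None, :]
    f = np.hypot(fy, fx)
    return np.sum(spectrum * f) / (spectrum.sum() + 1e-12)

def process_with_fallback(input_image, networks, f_g):
    """Try networks trained with decreasing f0 until the verification
    algorithm V accepts the output image (f_N below the upper limit f_G)."""
    for f0 in sorted(networks, reverse=True):    # start with the largest f0
        output = networks[f0](input_image)       # renewed image processing
        if mean_added_frequency(input_image, output) < f_g:   # step S4
            return output                        # step S5: output image accepted
    raise RuntimeError("no network produced a verifiable output image")
```

A network that adds only slowly varying brightness gradients passes the check, whereas a network that adds rapidly oscillating structures is rejected and replaced by one trained with a smaller f.sub.0.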

(35) The verification algorithm V can optionally likewise be formed using a machine learning algorithm.

Exemplary Embodiment of FIG. 5

(36) FIG. 5 schematically shows an exemplary embodiment of a microscope system 40 according to the invention. The microscope system 40 comprises a (light) microscope 20, by means of which at least one microscope image B is recorded. The latter is supplied to the image processing algorithm 10 and, optionally, to the verification algorithm V.

(37) The image processing algorithm 10 and the optional verification algorithm V are formed as a computer program. Exemplary embodiments of the computer program according to the invention are given by the above-described exemplary designs of the image processing algorithm 10 and of the verification algorithm V.

(38) The microscope system 40 of FIG. 5 comprises a computing device 30 which is configured to carry out the computer program, i.e., the image processing algorithm 10 and the verification algorithm V. By way of example, the computing device 30 can be a server-based computer system or a (personal) computer. The machine learning algorithm can, in particular, be trained using a graphics processing unit (GPU) of the computing device 30.

(39) By way of the various exemplary embodiments explained, an output image that a user finds visually easier to comprehend can be calculated from an input image, by virtue of a solid impression being generated without there being a risk of falsifying relevant sample structures. The exemplary embodiments described are purely illustrative, and modifications thereof are possible within the scope of the attached claims.

LIST OF REFERENCE SIGNS

(40)
B Microscope image
D Decoder
E Encoder
f Spatial frequency of the image components to be added
f.sub.0 Spatial frequency limit
f.sub.G Frequency upper limit
f.sub.N Frequency values
L Loss function
M Machine learning algorithm
N Low-frequency components which should be added to an input image
O Optimization function
P Neural network
R Term in the loss function specifying the deviation between output image and target image
S1 Inputting a microscope image as input image into an image processing algorithm
S2 Calculating and outputting a low-frequency component which should be added to the input image
S3 Creating an output image by adding a low-frequency component to the input image
S4 Supplying the output image to a verification algorithm
S5 Outputting the output image by the verification algorithm
V Verification algorithm
1 Input image
3 Output image
5 Target image
10 Image processing algorithm
20 Microscope
30 Computing device
40 Microscope system