Method, computer program and microscope system for processing microscope images
11579429 · 2023-02-14
CPC classification: G06F18/217 (PHYSICS); G02B21/0056 (PHYSICS); G02B21/367 (PHYSICS)
International classification: G02B21/36 (PHYSICS)
Abstract
In a method for processing microscope images, at least one microscope image is provided as input image for an image processing algorithm. An output image is created from the input image by means of the image processing algorithm. The creation of the output image comprises adding low-frequency components for representing solidity of image structures of the input image to the input image, wherein the low-frequency components at least depend on high-frequency components of these image structures and wherein high-frequency components are defined by a higher spatial frequency than low-frequency components. A corresponding computer program and microscope system are likewise described.
Claims
1. A method for processing microscope images, comprising: inputting at least one microscope image as input image into an image processing algorithm; creating an output image from the input image by means of the image processing algorithm, wherein creating the output image comprises adding low-frequency components for representing solidity of image structures of the input image to the input image, wherein the low-frequency components at least depend on high-frequency components of these image structures and wherein high-frequency components are defined by a higher spatial frequency than low-frequency components.
2. The method as defined in claim 1, wherein the image processing algorithm comprises a machine learning algorithm which is trained to add the low-frequency components to the input image.
3. The method as defined in claim 2, wherein the machine learning algorithm comprises a neural network with an encoder-decoder structure, the decoder of which ascertains the low-frequency components which are added to the input image.
4. The method as defined in claim 2, wherein the machine learning algorithm is trained with a loss function, which not only considers deviations between output images calculated from training data and associated target images but also penalizes or prevents an addition of spatial frequency components with increasing spatial frequency or above a spatial frequency limit.
5. The method as defined in claim 4, wherein said spatial frequency limit or a spatial frequency dependence of the aforementioned addition of spatial frequency components is defined in the loss function in dependence of an image content of the input image.
6. The method as defined in claim 4, wherein the image processing algorithm produces the output image in a first work step and supplies the output image to a verification algorithm in a second work step, wherein the verification algorithm assesses the output image in respect of image processing artefacts, wherein, depending on an assessment result, the first work step is repeated for producing a new output image, with the stipulation that the spatial frequency limit or the spatial frequency upper limit is reduced, and the second work step is subsequently carried out in relation to the new output image.
7. The method as defined in claim 6, wherein the verification algorithm compares the output image to the input image and assesses whether the added low-frequency components observe a predetermined frequency upper limit.
8. The method as defined in claim 6, wherein the verification algorithm comprises a machine learning verification algorithm which is trained using training data which comprise input images and associated output images created by the image processing algorithm.
9. The method as defined in claim 2, wherein training of the machine learning algorithm is carried out, within the scope of which microscope images are used as input images and differential interference contrast images spatially registered to the microscope images are used as target images.
10. The method as defined in claim 1, wherein the image processing algorithm defines a spatial frequency upper limit on the basis of an image content or a frequency distribution of the input image, and wherein the image processing algorithm only adds low-frequency components that lie below the spatial frequency upper limit to the input image.
11. The method as defined in claim 1, wherein the image processing algorithm adds the low-frequency components to the input image and wherein the low-frequency components do not represent an amplification or multiplication of low-frequency components already present in the input image.
12. The method as defined in claim 1, wherein the low-frequency components to be added are defined on the basis of context information relating to the microscope image.
13. Computer program comprising commands stored on a non-transitory computer-readable medium which, upon execution by a computer, prompt the method as defined in claim 1 to be carried out.
14. A microscope system, comprising: a microscope for recording a microscope image; and a computing device configured to carry out an image processing algorithm, wherein the image processing algorithm creates an output image from the microscope image as input image, wherein the creation of the output image comprises adding low-frequency components for representing solidity of image structures of the input image to the input image, wherein the low-frequency components at least depend on high-frequency components of these image structures and wherein high-frequency components are defined by a higher spatial frequency than low-frequency components.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) A better understanding of the invention and various other features and advantages of the present invention will become readily apparent from the following description in connection with the schematic drawings, which are shown by way of example only, and not limitation, wherein like reference numerals may refer to like or substantially similar components:
(2)
(3)
(4)
(5)
(6)
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
(7)
(8)
(9)
(10) In terms of its solid, three-dimensional impression, the output image 3 is similar to differential interference contrast (DIC) images. In DIC images, a spatial offset between interfering partial beams, which have experienced different phase changes, leads to brightened and shadowed regions at object edges. This yields a three-dimensional impression, which need not correspond to an actual 3D profile of the sample but allows a quicker and easier perception and assessment by a microscope user.
(11) Exemplary embodiments according to the invention, which can calculate the output image 3 from the microscope image B, are described with reference to the following figures. An essential aspect of the invention is that a falsification of the image content is excluded in the process. In particular, it is ensured that no new image structures, for instance new cells or cell details, are added as a result of the image processing. As a matter of principle, this problem exists in known machine learning algorithms, as specified in the introductory section relating to the prior art.
(12) The invention exploits the discovery that the spatial impression of the output image 3 can be achieved by virtue of complementing the microscope image B with a superposition determined by low spatial frequencies. Higher spatial frequencies are, by contrast, decisively responsible for the relative position and course of the visible edges of the image structures, i.e., the cell edges. By virtue of adding no, or hardly any, higher frequency components, it is possible to avoid a falsification or an addition of image structures.
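The frequency split underlying this discovery can be illustrated with a short sketch (Python with NumPy is assumed here; the patent does not prescribe any implementation, and the `split_frequencies` helper and its parameterization are hypothetical). An image is separated into components below and above a normalized spatial frequency limit by means of a Fourier transform:

```python
import numpy as np

def split_frequencies(img, f0):
    """Split an image into low- and high-frequency parts.

    f0 is a spatial frequency limit given as a fraction of the Nyquist
    frequency (a hypothetical parameterization, not from the patent).
    """
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    # Normalized radial spatial frequency of each FFT bin.
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    low_mask = radius <= f0
    low = np.fft.ifft2(np.fft.ifftshift(F * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(F * ~low_mask)).real
    return low, high

# A step edge: its sharp transition is carried by the high frequencies.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
low, high = split_frequencies(img, f0=0.2)
# The two parts sum back exactly to the original image.
assert np.allclose(low + high, img)
```

The sharp edge position is carried entirely by the high-frequency part; an algorithm that adds only low-frequency components therefore cannot displace existing edges or invent new ones.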
Exemplary Embodiment of FIG. 2
(13) One exemplary embodiment of a method according to the invention will be described with reference to FIG. 2.
(14) The example of FIG. 2 relates to the training of the machine learning algorithm M.
(15) The machine learning algorithm M comprises a neural network P, which in this case is formed by an encoder-decoder structure. From an input microscope image B, the encoder E produces a feature vector which, in principle, could have any dimensions. The feature vector is an input for the decoder D, which outputs an image therefrom in step S2. This image should be formed from low spatial frequencies and is therefore referred to as low-frequency components N. The low-frequency components N are added to the input image 1 in step S3, as a result of which the output image 3 is produced. Once the machine learning algorithm has been taught, the output image 3 corresponds to the output image 3 shown in FIG. 1.
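Purely as an illustration of this additive structure, a toy version of the network P can be sketched as follows (Python/NumPy assumed; the class, the layer sizes and the random, untrained weights are hypothetical stand-ins, not the patent's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyEncoderDecoder:
    """Minimal stand-in for the network P: the encoder E maps the input
    image to a feature vector, the decoder D maps the feature vector to
    an image of low-frequency components N, which is added to the input.
    Weights are random here; in the patent they would be learned."""

    def __init__(self, h, w, feat=16):
        self.We = rng.normal(0, 0.01, (feat, h * w))   # encoder E
        self.Wd = rng.normal(0, 0.01, (h * w, feat))   # decoder D
        self.h, self.w = h, w

    def forward(self, img):
        z = np.tanh(self.We @ img.ravel())         # feature vector
        n = (self.Wd @ z).reshape(self.h, self.w)  # step S2: components N
        return img + n, n                          # step S3: output image

model = ToyEncoderDecoder(32, 32)
b = rng.random((32, 32))   # microscope image B as input image 1
out, n = model.forward(b)
assert out.shape == b.shape and np.allclose(out, b + n)
```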
(16) To make things clear, it is noted that the processes can be carried out in the spatial domain, in the frequency domain or partly in the spatial and partly in the frequency domain. In the spatial domain, the images (i.e., the input image 1, the output image 3 and the low-frequency components N) are each representable as a 2D matrix of brightness values. By way of a frequency transformation, for example a Fourier transform, the representation in the spatial domain is able to be converted into a representation in the frequency domain. In modifications of the illustrated embodiment, the frequency transforms of the microscope images B can also be supplied to the machine learning algorithm as input images (which are now representations in the frequency domain). Likewise, or alternatively, the low-frequency components N, which are calculated for an input image 1, can be output in the frequency domain and can only be added to the associated input image 1 following a transformation into the spatial domain.
(17) During the learning procedure, the calculated output images 3 are supplied to a loss function L. The loss function L calculates a measure of the correspondence between an output image 3 and an associated target image 5, both of which belong to the same input image 1. This measure can also be regarded as a penalty value, which becomes larger the smaller the correspondence between the output image 3 and the target image 5. Conventional loss functions L calculate a deviation or distance R between the pixels, for example by means of a sum of the squared deviations between locally corresponding pixels in the output image 3 and the target image 5. However, the loss function L of the exemplary embodiment according to the invention is not only a function of such a distance R. Rather, the loss function L is also dependent on the frequency values f of the added low-frequency components N: added frequency components are penalized increasingly with higher spatial frequency, or when they exceed a spatial frequency limit f.sub.0, as specified schematically in FIG. 2.
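A minimal sketch of such a loss function, assuming Python/NumPy, a radial frequency normalized to the Nyquist frequency, and a simple weighting factor (all hypothetical choices not fixed by the patent), could look as follows:

```python
import numpy as np

def frequency_penalized_loss(output, target, n, f0, weight=1.0):
    """Loss L = pixelwise deviation R plus a penalty on the spectral
    energy of the added components N above the spatial frequency limit
    f0 (f0 as a fraction of Nyquist; a hypothetical parameterization)."""
    r = np.mean((output - target) ** 2)          # deviation term R
    F = np.fft.fftshift(np.fft.fft2(n))
    h, w = n.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    # Only energy of added components above f0 is penalized.
    penalty = np.sum(np.abs(F[radius > f0]) ** 2) / n.size
    return r + weight * penalty

# A constant (zero-frequency) N incurs no frequency penalty ...
n_low = np.ones((32, 32))
# ... while a pixel-level checkerboard is pure high frequency.
n_high = np.indices((32, 32)).sum(axis=0) % 2 * 1.0
t = np.zeros((32, 32))
assert frequency_penalized_loss(n_low, t, n_low, f0=0.5) < \
       frequency_penalized_loss(n_high, t, n_high, f0=0.5)
```

The deviation term R pulls the output toward the target image 5, while the penalty term discourages adding components above f.sub.0, matching the two dependencies described for the loss function L.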
(18) The spatial frequency limit f.sub.0 can be a specified constant or a variable in the loss function. By way of example, the variable can depend on context information, for example an illumination aperture for an input image.
(19) From the result of the loss function L, an optimization function O calculates how parameters/weights to be learnt of the neural network P should be altered. Updated low-frequency components N are calculated using the updated parameter values and the described steps are repeated until parameters that minimize the loss function L have been ascertained.
(20) As inputs, the loss function L need not necessarily obtain the output images 3, the low-frequency components N and the input images 1; rather, two of these three are enough, as indicated by the dashed arrows.
(21) A summation in step S3 is shown as a simple example of a superimposition of the low-frequency components N on the input images 1. However, other calculation operations can also be used to combine the low-frequency components N and the input image 1.
(22) In all present descriptions, the loss function L can also be replaced by a reward function which, in contrast to the loss function L, should be maximized. In the reward function, the dependencies in relation to frequency f.sub.N, the spatial frequency limit f.sub.0 and the deviations R are reversed, i.e., the reward function increases with smaller deviations R, lower added frequencies f.sub.N and when f.sub.0 is undershot.
(23) The neural network P can also be formed differently than by an encoder-decoder model.
(24) A further modification is described below with reference to FIG. 3.
Exemplary Embodiment of FIG. 3
(25)
(26) An exemplary design of the neural network P comprises a residual skip structure: here, the low-frequency components N are initially calculated in a manner analogous to FIG. 2, but are then added to the input image 1 within the network itself by way of the skip connection, so that the network directly outputs the output image 3.
(27) In the loss function L, the implicitly added low-frequency components N can be reconstructed by a comparison of the input image 1 with the associated output image 3.
Exemplary Embodiment of FIG. 4
(28)
(29) The already trained machine learning algorithm M can be restricted to the neural network P as described in relation to the previous figures; the functionalities for training the neural network P, i.e., for defining the weights thereof, are not required here.
(30) In step S1, a microscope image B is supplied as input image 1 to the image processing algorithm 10, which calculates an output image 3 therefrom in step S3, as also shown in FIG. 2.
(31) In
(32) In step S4, the output image 3 is supplied to a verification algorithm V. The latter compares the output image 3 to the input image 1 and assesses a frequency distribution of the differences between these images. These differences, i.e., the frequencies f.sub.N of the added low-frequency components, can be compared, for example, to a specified value of a frequency upper limit f.sub.G. For this comparison, use can be made of a variable derived or accumulated from the frequencies f.sub.N, for example the mean thereof.
(33) If f.sub.N is less than f.sub.G, it is ensured that no high-frequency components were added which could falsify or remove the image structures of the input image 1 or could lead to a “hallucination” of newly added structures. The image processing is therefore assessed as correct and the output image 3 is output in step S5.
(34) By contrast, if f.sub.N is greater than f.sub.G, the verification algorithm V assesses the output image 3 as potentially falsified and prompts renewed image processing by the image processing algorithm 10. In the process, an image processing parameter is altered in order to suppress an addition of higher-frequency image components. By way of example, the spatial frequency limit f.sub.0 used by the machine learning algorithm M shown in FIG. 2 can be reduced before the image processing is repeated.
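The verification steps S4/S5 can be sketched as follows (Python/NumPy assumed; the energy-weighted mean radial frequency used here is one hypothetical realization of the accumulated variable, e.g. the mean, mentioned above):

```python
import numpy as np

def verify(input_img, output_img, f_g):
    """Verification algorithm V (step S4): reconstruct the added
    components N = output - input and check whether their mean
    spatial frequency observes the frequency upper limit f_G."""
    n = output_img - input_img
    F = np.abs(np.fft.fftshift(np.fft.fft2(n)))
    h, w = n.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    # Energy-weighted mean radial frequency of the added components.
    f_n = np.sum(radius * F) / max(np.sum(F), 1e-12)
    return f_n <= f_g  # True: output image 3 is output (step S5)

rng = np.random.default_rng(1)
img = rng.random((64, 64))
# A smooth gradient overlay is low-frequency and passes verification,
grad = np.linspace(0.0, 0.5, 64)[None, :] * np.ones((64, 1))
assert verify(img, img + grad, f_g=0.5)
# while a checkerboard overlay is high-frequency and is rejected.
check = np.indices((64, 64)).sum(axis=0) % 2 * 0.5
assert not verify(img, img + check, f_g=0.5)
```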
(35) The verification algorithm V can optionally likewise be formed using a machine learning algorithm.
Exemplary Embodiment of FIG. 5
(36)
(37) The image processing algorithm 10 and the optional verification algorithm V are formed as a computer program. Exemplary embodiments of the computer program according to the invention are given by the above-described exemplary designs of the image processing algorithm 10 and of the verification algorithm V.
(38) The microscope system 40 of FIG. 5 comprises a microscope 20 for recording a microscope image and a computing device 30 which is configured to carry out the image processing algorithm 10.
(39) By way of the various exemplary embodiments explained, an output image that a user finds visually easier to comprehend can be calculated from an input image, by virtue of a solid impression being generated without a risk of falsifying relevant sample structures. The exemplary embodiments described are purely illustrative and modifications thereof are possible within the scope of the attached claims.
LIST OF REFERENCE SIGNS
(40)
B Microscope image
D Decoder
E Encoder
f Spatial frequency of the image components to be added
f.sub.0 Spatial frequency limit
f.sub.G Frequency upper limit
f.sub.N Frequency values
L Loss function
M Machine learning algorithm
N Low-frequency components which should be added to an input image
O Optimization function
P Neural network
R Term in the loss function specifying the deviation between output image and target image
S1 Inputting a microscope image as input image into an image processing algorithm
S2 Calculating and outputting a low-frequency component which should be added to the input image
S3 Creating an output image by adding a low-frequency component to the input image
S4 Supplying the output image to a verification algorithm
S5 Outputting the output image by the verification algorithm
V Verification algorithm
1 Input image
3 Output image
5 Target image
10 Image processing algorithm
20 Microscope
30 Computing device
40 Microscope system