METHOD AND DEVICE FOR NON-CONVOLUTIONAL IMAGE PROCESSING

20220398703 · 2022-12-15

Assignee

Inventors

CPC classification

International classification

Abstract

A method, device, and computer program product are provided for non-convolutional image processing in microscopy of an input image into an output image using an artificial neural network with at least one contracting path including layers, at least one expanding path including layers, and at least one filter kernel. The method includes determining, in one or multiple artificial neural network layers, a similarity metric between at least one filter kernel and one output of the previous layer. Additionally, in at least one layer of the contracting path, the resolution of the output of the previous layer is reduced, and, in at least one layer of the expanding path, the resolution of the output of the previous layer is increased. The first artificial neural network layer treats the input image as the output of the previous layer, and the output of the last artificial neural network layer is the output image.

Claims

1. A computer-implemented method for non-convolutional image processing in microscopy of an input image (210) into an output image (280) by means of an artificial neural network with at least one contracting path (220, 230) comprising layers, at least one expanding path (240, 250, 260) comprising layers, and at least one filter kernel, wherein the method comprises: determining, in one or multiple layers of the artificial neural network, a similarity metric (220, 225, 250) between at least one filter kernel and one output of the previous layer; reducing, in one or multiple layers of the contracting path (220, 230), the resolution of the output of the previous layer, and increasing, in one or multiple layers of the expanding path (240, 250), the resolution of the output of the previous layer, wherein the first layer of the artificial neural network treats the input image (210) as the output of the previous layer, and the output of the last layer of the artificial neural network is the output image (280).

2. The method according to claim 1, wherein the similarity metric uses an element-wise sum function.

3. The method according to claim 1, wherein the similarity metric uses a sum of a distance of image elements for each image channel between image elements of the output of the previous layer and the filter kernel.

4. The method according to claim 1, wherein the output of a layer is determined by the function A(x.sub.A, y.sub.A, c.sub.A) = Σ_{i=1}^{d} Σ_{j=1}^{e} Σ_{k=1}^{c.sub.E} D(E(x.sub.A+i, y.sub.A+j, k), F(i, j, k, c.sub.A)), A being the output of the layer with c.sub.A channels, E being the output of the previous layer with c.sub.E channels, F being the filter kernel with the dimensions d and e, and D being the similarity metric, and wherein the above formula determines the respective value of the output of the layer for each point x.sub.A, y.sub.A in the respective channel c.sub.A.

5. The method according to claim 1, wherein the similarity metric is formed as a distance function according to a p-norm, preferably according to the L1-norm, according to a radial basis function, or according to a polynomial function.

6. The method according to claim 1, wherein the filter kernel has an equal size in both dimensions, i.e. d=e, wherein reducing is performed in at least one contracting path by striding or pooling, and/or wherein increasing is performed in at least one expanding path by bilinear interpolation, a different type of interpolation, or by a transposed element-wise sum function.

7. The method according to claim 1, wherein the method further comprises, in one or multiple layers, an application of skip connections (260, 270), which supplement (240) the increasing of the resolution in the expanding path in order to improve the output of the respective layer.

8. The method according to claim 1, wherein a regression (251) is performed in the last layer, and/or wherein the last layer is a completely connected layer, or wherein the regression (251) uses a further element-wise sum function.

9. The method according to claim 1, wherein the method, after one or multiple layers, further comprises a normalization, preferably by means of a group normalization, an instance normalization, or a batch normalization, wherein the normalization may comprise a scale and shift operation.

10. The method according to claim 1, wherein the image processing is a virtual staining, a denoising, a super resolution, a deconvolution, a compressed capturing, or a different type of image enhancement.

11. The method according to claim 1 for training a neural network.

12. The method according to claim 11, wherein training is performed with a further neural network in the context of a generative adversarial network training.

13. A device for non-convolutional image processing in microscopy of an input image (210) into an output image (280), preferably a computer, with an artificial neural network with at least one contracting path (220, 230) comprising layers, at least one expanding path (240, 250, 260) comprising layers, and at least one filter kernel, wherein the device is configured to perform the method according to claim 1, and wherein the device comprises a calculation unit configured to determine, in one or multiple layers of the artificial neural network, a similarity metric (220, 225, 250) between at least one filter kernel and one output of the previous layer; reduce, in one or multiple layers of the contracting path (220, 230), the resolution of the output of the previous layer; and increase, in one or multiple layers of the expanding path (240, 250), the resolution of the output of the previous layer, wherein the first layer of the artificial neural network treats the input image (210) as the output of the previous layer, and the output of the last layer of the artificial neural network is the output image (280).

14. A computer program product with a program for a data processing device, comprising software code sections for performing the steps according to claim 1 when the program is run on the data processing device.

15. The computer program product according to claim 14, wherein the computer program product comprises a computer-readable medium, on which the software code sections are stored, wherein the program can be loaded directly into an internal storage of the data processing device.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0016] Other objects and features of the invention will become apparent from the following detailed description considered in connection with the accompanying drawings which respectively show a very simplified representation. It is to be understood, however, that the drawings are designed as an illustration only and not as a definition of the limits of the invention.

[0017] In the drawings,

[0018] FIG. 1A shows a cell sample recorded in a phase contrast image;

[0019] FIG. 1B shows the cell sample recorded in a fluorescence contrast;

[0020] FIG. 1C shows the result of image processing for virtual staining of image 1A by a CNN according to the prior art;

[0021] FIG. 1D shows the result of an image processing, according to the invention, for virtual staining of image 1A by a non-convolutional neural network;

[0022] FIG. 2 shows an exemplary structure of a non-convolutional neural network according to the invention for image processing; and

[0023] FIG. 3 shows an exemplary course according to the invention of an image processing in a non-convolutional neural network.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

[0024] First of all, it is to be noted that in the different embodiments described, equal elements are provided with equal reference numbers and/or equal element designations, where the disclosures contained in the entire description may be analogously transferred to equal elements with equal reference numbers and/or equal element designations. Moreover, the specifications of location, such as at the top, at the bottom, at the side, chosen in the description refer to the directly described and depicted figure and in case of a change of position, these specifications of location are to be analogously transferred to the new position.

[0025] In this invention, a non-convolutional (i.e. not based on CNNs) solution for image-to-image depictions in general and virtual staining as an exemplary application are described.

[0026] The following describes a method according to the invention for non-convolutional image processing in microscopy of an input image into an output image by means of a neural network with at least one contracting path consisting of layers, at least one expanding path consisting of layers, and at least one filter kernel.

[0027] The image processing is preferably an image regression or an image-to-image depiction.

[0028] The method comprises determining, in one or multiple layers of the ANN, a similarity metric between at least one filter kernel and one output of the previous layer. Moreover, the method comprises reducing, in one or multiple of the layers of the contracting path, the resolution of the output of the previous layer, as well as increasing, in one or multiple of the layers of the expanding path, the resolution of the output of the previous layer, wherein the first layer of the ANN treats the input image as the output of the previous layer, and the output of the last layer of the ANN is the output image.

[0029] It is known that multiplication is slower than addition; nevertheless, the known solutions for virtual staining always use CNNs.

[0030] FIG. 2 shows an exemplary structure of a non-convolutional neural network according to the invention for image processing. FIG. 3 shows an exemplary course according to the invention of image processing in a non-convolutional neural network.

[0031] Therein, an input image 210 is input. This is then processed in the first layers 220 and 230.

[0032] The course shown in FIG. 3 consists of a contracting path 220 and 230 as well as an expanding path 240, 250 and 260. In their course, the paths are similar to those of a conventional convolutional network. However, in this case, the layers use no convolution operation but rather a more efficient element-wise sum operation.

[0033] In this process, the repeated determination of the similarity metric in the elements 220 of the contracting path and the subsequent reduction of the resolution 230 are used to increase the number of channels, for example to double them, whereby the dimensions of the image decrease accordingly.

[0034] The contracting path may also be referred to as an encoder path, and its function may be implemented, for example, by striding or pooling.
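The striding and pooling mentioned above can be sketched as follows. This is a minimal NumPy sketch; the function names are illustrative, not taken from the source.

```python
import numpy as np

def max_pool_2x2(x):
    """Pooling: 2x2 max pooling halves the spatial resolution of an (H, W, C) map."""
    h, w, c = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def stride_2(x):
    """Striding: keep only every second pixel in both spatial dimensions."""
    return x[::2, ::2]
```

Both operations reduce an (H, W, C) feature map to roughly (H/2, W/2, C); in the contracting path this reduction is typically paired with a doubling of the channel count.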

[0035] At the end of the contracting path, the processing transitions into the so-called bottleneck, in whose layers non-linear transformations are performed by element-wise sum functions. In this process, the resolution and the number of channels remain the same.

[0036] In the expanding path, the number of channels is reduced again, for example halved, in the elements 240, by means of increasing the resolution as well as further applications of determining the similarity metric 250. Increasing the resolution is preferably not effected by a transposed convolution but rather by a bilinear interpolation, for example, in order to re-gain the original resolution.

[0037] The expanding path may also be referred to as decoder path, and its function may be implemented, for example, by a different type of interpolation in addition to the bilinear interpolation or by a transposed element-wise sum function.
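The bilinear interpolation used for increasing the resolution can be sketched as follows. This is a minimal NumPy sketch for a single-channel plane; the coordinate convention (mapping output pixel centres back into the input) is one common choice and an assumption here, not mandated by the source.

```python
import numpy as np

def upsample_bilinear(x, factor=2):
    """Bilinear interpolation: upsample an (H, W) plane by an integer factor."""
    h, w = x.shape
    # Map each output pixel centre back into input coordinates.
    ys = np.clip((np.arange(h * factor) + 0.5) / factor - 0.5, 0, h - 1)
    xs = np.clip((np.arange(w * factor) + 0.5) / factor - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    # Interpolate horizontally on the two neighbouring rows, then vertically.
    top = x[y0][:, x0] * (1 - wx) + x[y0][:, x1] * wx
    bottom = x[y1][:, x0] * (1 - wx) + x[y1][:, x1] * wx
    return top * (1 - wy) + bottom * wy
```

Unlike a transposed convolution, this upsampling has no learned parameters and introduces no multiplicative filter weights.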

[0038] Optionally, skip connections 270 may be used, meaning that a part of the image is copied in the contracting path and carried over into the expanding path, where the separated parts 260 are attached again in order to improve the output image (of the respective layer).

[0039] In this regard, skip connections may be helpful for restoring fine structures in the output image. Thus, a possibly occurring blurring, which develops due to increasing the resolution, may be compensated by means of interpolation. Possibly, a cropping due to the loss of edge pixels is necessary.

[0040] Alternatively or additionally, residual connections may be used. In this regard, an identity function is realized either from the start of a layer to the end of the same layer or from the first to the last layer. In this regard, a residual training is realized in the context of residual learning, i.e. instead of the result image, only the residual between the input image and the output image (a layer, and/or input of the first and output of the last layer) is learned. This is generally easier for enhancing the image and details, such as sharp edges, are preserved.
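The skip connections and residual learning described above can be sketched as follows. This is a minimal NumPy sketch under simplifying assumptions; the function names are illustrative, and the residual predictor would in practice be the trained network.

```python
import numpy as np

def skip_concat(decoder_feat, encoder_feat):
    """Skip connection: concatenate decoder features with the centre-cropped
    encoder features of matching resolution, channel-wise."""
    h, w, _ = decoder_feat.shape
    eh, ew, _ = encoder_feat.shape
    # Centre-cropping compensates for edge pixels lost along the paths.
    dy, dx = (eh - h) // 2, (ew - w) // 2
    cropped = encoder_feat[dy:dy + h, dx:dx + w]
    return np.concatenate([decoder_feat, cropped], axis=-1)

def residual_output(input_image, predicted_residual):
    """Residual learning: only the residual is learned, so the output image
    is the input image plus the predicted residual."""
    return input_image + predicted_residual
```

Because the identity part of the mapping is carried over unchanged, details such as sharp edges survive the processing, and the network only has to learn the (usually smaller) correction.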

[0041] In this regard, FIG. 3 shows a network with 23 layers, although a different number of layers is possible, as well. Furthermore, it is possible that in some layers, no reducing, increasing, or determining is performed.

[0042] In each of the layers 220 and 250, in which the similarity metrics are determined, a filter kernel is applied to the image, whereby the number of channels increases whereas the dimensions are reduced.

[0043] In this regard, the similarity metrics may be performed as sum operations and are a measure for a similarity between a kernel and the input image.

[0044] The sum functions are described as element-wise because the sums are formed in an element-wise manner. This means that, upon moving the kernel over the image to be processed, the values of the pixels of the filter kernel are in each case combined with the pixels of the image below by means of a sum operation (addition or subtraction). The results are then added up.

[0045] A possible variation consists in using blueprint separable convolutions (BSConv) (DE 10 2019 130 930). The 3D filters described here can be approximated by means of the filter separation described therein. For this process, the filters are represented as a sequence of a pointwise and a depthwise and/or layer-wise operation. In the mentioned document, these operations are convolutions. When applying the present invention, at least the depthwise convolution could be replaced by the application of a non-convolutional metric. In this regard, the pointwise operation could also be considered as a weighting of the individual 2D filters and thus as a pure multiplication and is therefore also non-convolutional.

[0046] Varying the sequence of the two operations then results in either an equivalent to the “blueprint separable convolution” (if the pointwise operation is performed first, before the depthwise operation) or, in the case of a reverse order, an equivalent to the “depthwise separable convolution” (if the depthwise operation is performed first, before the pointwise operation). Further details can be gathered from the following publications: “Rethinking Depthwise Separable Convolutions: How Intra-Kernel Correlations Lead to Improved MobileNets”, by Daniel Haase and Manuel Amthor, as well as “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications” by Andrew G. Howard et al.

[0047] Furthermore, filters can also be separated spatially.

[0048] Such a sum function may, for example, be a sum of a distance of image elements for each image channel between image elements of the input image and the filter kernel. This means that for each pixel, the distance between each of the pixels of the kernel and the pixel underneath of the input image is calculated when the kernel is moving over the input image.

[0049] The output of a layer can be determined by means of a sum function using the following formula, for example:

[00001] A(x.sub.A, y.sub.A, c.sub.A) = Σ_{i=1}^{d} Σ_{j=1}^{e} Σ_{k=1}^{c.sub.E} D(E(x.sub.A+i, y.sub.A+j, k), F(i, j, k, c.sub.A))

[0050] A being the output image with c.sub.A channels, E being the input image with c.sub.E channels, F being the filter kernel with the dimensions d and e, and D being a distance function.

[0051] With the aid of the above formula, the respective value for the output image is determined for each location x.sub.A, y.sub.A in the respective channel c.sub.A. In this regard, the indexes i, j, and k move over the respective regions: i and j over the dimensions d and e of the kernel, and k over the c.sub.E channels of the output of the previous layer. The dimensions d and e may also be identical, i.e. d=e. The kernel can thus have a size of, for example, 3×3 pixels, or also 2×3, 3×4, etc.
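The formula above can be sketched in NumPy as follows, assuming the L1 distance (absolute difference) for D. The loop-based `adder_layer` below is an illustrative, unoptimized rendering, not a reference implementation.

```python
import numpy as np

def adder_layer(E, F):
    """Non-convolutional layer output:
    A(x_A, y_A, c_A) = sum_{i,j,k} D(E(x_A + i, y_A + j, k), F(i, j, k, c_A)),
    with D chosen here as the L1 distance (absolute difference).
    E: (H, W, c_E) output of the previous layer; F: (d, e, c_E, c_A) filter kernel."""
    h, w, c_e = E.shape
    d, e, _, c_a = F.shape
    A = np.zeros((h - d + 1, w - e + 1, c_a))
    for x_a in range(A.shape[0]):
        for y_a in range(A.shape[1]):
            patch = E[x_a:x_a + d, y_a:y_a + e, :]            # region under the kernel
            for c in range(c_a):
                A[x_a, y_a, c] = np.abs(patch - F[..., c]).sum()  # sum over i, j, k
    return A
```

Note that the only arithmetic applied between image and kernel values is subtraction, absolute value, and summation; no multiplications of the kind a convolution would require occur.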

[0052] The distance function in the above formula may be formed according to a p-norm. Thus, the single distance can be calculated with the L1-norm, i.e. as a sum of the absolute values of the differences of the pixel values.

[0053] The distance may also be calculated with the L2-norm, i.e. as a sum of the squares of the absolute values of the differences of the pixel values.

[0054] All further p-norms can accordingly also be used as a distance calculation. Other distance functions can also be applied accordingly, here.

[0055] Alternatively, the similarity metric may be formed according to a radial basis function or a polynomial function. In this case, so-called RBF kernels or polynomial kernels are used, wherein these kernels are not to be confused with the filter kernels of a neural network. Therefore, these designations are not used for the similarity metric hereinafter; rather, kernel or filter kernel always refers to the kernel(s) or filter kernel(s) of neural networks.

[0056] Optionally, a regression 251 may be performed in the last layer. This may be effected, for example, in a common fully connected layer.

[0057] Preferably, however, the regression 251 is also performed by means of an element-wise sum function, which may correspond to those in the layers before but may also be designed differently, for example by using a different distance. A further option is that determining the similarity metric is normalized, for example by a batch normalization. This normalization may also optionally comprise a scale and shift operation. Nevertheless, the regression 251 may also be effected by a convolution, more specifically by a 1×1 convolution.

[0058] However, the normalization may be performed optionally after each layer, independently of the regression 251. For the normalization, it is also possible to use a group normalization or an instance normalization instead of the batch normalization. It is also possible to use any other known normalization; batch, group, and instance normalizations are listed merely as examples. The scale and shift operation may be applied optionally in the case of any normalization.
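As one example of the normalizations mentioned above, instance normalization with an optional scale and shift can be sketched as follows; this is a minimal NumPy sketch, and the parameter names `gamma` and `beta` are conventional, not taken from the source.

```python
import numpy as np

def instance_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Instance normalization: normalize each channel of an (H, W, C) feature map
    over its spatial dimensions, then apply scale (gamma) and shift (beta)."""
    mean = x.mean(axis=(0, 1), keepdims=True)
    var = x.var(axis=(0, 1), keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```

Batch normalization would instead compute the statistics over a batch of images, and group normalization over groups of channels; the scale-and-shift step is identical in all three cases.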

[0059] As described initially, however, the image processing may be any other type of image processing besides virtual staining, in particular any type of image enhancement, such as denoising, super resolution, a deconvolution, or compressed capturing.

[0060] At the end of processing is the output image 280.

[0061] With the aid of the method described here, it is also possible to train a neural network. Generating training data of a source contrast (e.g. wide field) and a target contrast (e.g. fluorescence) is possible. This may take place with different samples/devices, also independently of one another. For this, either a registration of the source and target data would be necessary (so that they can be assigned to one another), or a model is used which does not require the registration, e.g. CycleGANs. CycleGANs are a technique in which image-to-image translation models are trained automatically without paired examples. The models are trained in an unsupervised manner, using a collection of images from the source and target domains, which do not have to be linked in any way.

[0062] Using conventional GAN technologies is equally possible.

[0063] For using a neural network as described here, it is possible to generate a source contrast from which it is possible to project onto the virtual fluorescence.

[0064] FIG. 1D shows the result 280 of an image processing, according to the invention, for virtual staining of the image 1A by means of a non-convolutional neural network, meaning virtual staining by neural networks without convolution operations. The result shows that it is possible to perform virtual staining even without convolutional networks (CNNs).

[0065] A further exemplary embodiment is a device for non-convolutional image processing in microscopy of an input image into an output image, preferably a computer, by means of an artificial neural network with at least one contracting path consisting of layers, at least one expanding path consisting of layers, and at least one filter kernel. The device comprises a calculation unit configured to determine, in one or multiple layers of the artificial neural network, a similarity metric between at least one filter kernel and one output of the previous layer. The calculation unit is further configured to reduce, in one or multiple layers of the contracting path, the resolution of the output of the previous layer, and to increase, in one or multiple layers of the expanding path, the resolution of the output of the previous layer. In this process, the first layer of the artificial neural network treats the input image as the output of the previous layer, and the output of the last layer of the artificial neural network is the output image.

[0066] The modifications for the method mentioned above apply equally for the device.

[0067] The further exemplary embodiments show possible embodiment variants, while it should be noted at this point that the invention is not limited to these particular illustrated embodiment variants. A further embodiment is a computer program product with a program for a data processing device, comprising software code sections for performing the steps of the method described above when the program is run on the data processing device.

[0068] This computer program product may comprise a computer-readable medium, on which the software code sections are stored, wherein the program can be loaded directly into an internal storage of the data processing device.

[0069] Combinations of the individual embodiment variants are possible and this possibility of variation owing to the teaching for technical action provided by the present invention lies within the ability of the person skilled in the art in this technical field.

[0070] The scope of protection is determined by the claims. Nevertheless, the description and drawings are to be used for construing the claims. Individual features or feature combinations from the different exemplary embodiments shown and described may represent independent inventive solutions. The object underlying the independent inventive solutions may be gathered from the description.

[0071] All indications regarding ranges of values in the present description are to be understood such that these also comprise random and all partial ranges from it, for example, the indication 1 to 10 is to be understood such that it comprises all partial ranges based on the lower limit 1 and the upper limit 10, i.e. all partial ranges start with a lower limit of 1 or larger and end with an upper limit of 10 or less, for example 1 through 1.7, or 3.2 through 8.1, or 5.5 through 10.

[0072] Finally, as a matter of form, it should be noted that for ease of understanding of the structure, elements are partially not depicted to scale and/or are enlarged and/or are reduced in size.

[0073] Although only a few embodiments of the present invention have been shown and described, it is to be understood that many changes and modifications may be made thereunto without departing from the spirit and scope of the invention.

LIST OF REFERENCE NUMBERS

[0074] 210 Input image
[0075] 220 Determining the similarity metric in the contracting path
[0076] 225 Determining the similarity metric in the bottleneck
[0077] 230 Reducing the resolution
[0078] 240 Increasing the resolution
[0079] 250 Determining the similarity metric in the expanding path
[0080] 251 Regression
[0081] 260 Attachment blocks of the skip connections
[0082] 270 Skip connections
[0083] 280 Output image