ANONYMOUS FINGERPRINTING OF MEDICAL IMAGES

20230368386 · 2023-11-16


    Abstract

    Disclosed herein is a medical system comprising a memory storing machine executable instructions and at least one trained neural network. Each of the at least one neural network is configured for receiving a medical image as input. Each of the at least one trained neural network has been modified to provide hidden layer output. Execution of the machine executable instructions causes the computational system to: receive the medical image; receive the hidden layer output in response to inputting the medical image into each of the at least one trained neural network; provide an anonymized image fingerprint comprising the hidden layer output from each of the at least one trained neural network; and receive an image assessment of the medical image in response to querying a historical image database using the anonymized image fingerprint.

    Claims

    1. A medical system comprising: a memory configured to store machine executable instructions and at least one trained neural network, wherein each of the at least one neural network is configured for receiving a medical image as input, wherein each of the at least one trained neural network comprises multiple hidden layers, wherein each of the at least one trained neural network has been modified to provide hidden layer output in response to receiving the medical image, wherein the hidden layer output is outputted directly from one or more of the multiple hidden layers; a computational system, wherein execution of the machine executable instructions causes the computational system to: receive the medical image; receive the hidden layer output in response to inputting the medical image into each of the at least one trained neural network; provide an anonymized image fingerprint comprising the hidden layer output from each of the at least one trained neural network; and receive an image assessment of the medical image in response to querying a historical image database using the anonymized image fingerprint.

    2. The medical system of claim 1, wherein the historical image database is queried via a network connection.

    3. The medical system of claim 1, wherein the image assessment comprises at least one of the following: an identification of one or more image artifacts; an assignment of an image quality value; a retrieved diagnostic guideline; instructions to repeat the measurement of the medical image; suggestion of follow up acquisition of additional medical images; an identification of image acquisition problems; an identification of an incorrect field of view; an identification of an improper subject positioning; an identification of irregular subject inspiration; an identification of metal artifacts; an identification of motion artifacts; an identification of foreign objects in the image; medical image scan planning instructions.

    4. The medical system of claim 1, wherein the medical system comprises the historical image database, wherein the historical image database is configured to provide the image assessment by: identifying a set of similar images by comparing the anonymized image fingerprint to image fingerprints in the historical image database, wherein the set of similar images each comprises historical data; providing at least a portion of the historical data as the image assessment.

    5. The medical system of claim 4, wherein the comparison between the anonymized image fingerprint to image fingerprints in the historical image database is performed using at least one of the following: applying a similarity measure to the anonymized image fingerprint and each of the image fingerprints; applying a learned similarity measure to the anonymized image fingerprint and each of the image fingerprints; applying a metric to the anonymized image fingerprint and each of the image fingerprints; calculating a Minkowski distance between the anonymized image fingerprint and each of the image fingerprints; calculating a Mahalanobis distance between the anonymized image fingerprint and each of the image fingerprints; applying a cosine similarity measure to a difference between the anonymized image fingerprint and each of the image fingerprints; or using a trained vector comparison neural network.

    6. The medical system of claim 1, wherein the neural network is at least one of the following: a pretrained image classification neural network; a pretrained image segmentation neural network; a U-Net neural network; a ResNet neural network; a DenseNet neural network; an EfficientNet neural network; an Xception neural network; an Inception neural network; a VGG neural network; an auto-encoder neural network; a recurrent neural network; a LSTM neural network; a feedforward neural network; a multi-layer perceptron; or a network resulting from a neural network architecture search.

    7. The medical system of claim 1, wherein the provided hidden layer output is provided from at least one of the following: a convolutional layer; a dense layer; an activation layer; a pooling layer; an unpooling layer; a normalization layer; a padding layer; a dropout layer; a recurrent layer; a transformer layer; a linear layer; a resampling layer; or an embedded representation from an autoencoder.

    8. The medical system of claim 1, wherein the memory further stores a bag-of-words model configured to output a set of image descriptors in response to receiving the medical image, wherein execution of the machine executable instructions further comprises receiving the set of image descriptors in response to inputting the medical image into the bag-of-words model, wherein the anonymized image fingerprint further comprises the set of image descriptors.

    9. The medical system of claim 1, wherein the medical system further comprises a medical imaging system, wherein execution of the machine executable instructions further causes the computational system to: control the medical imaging system to acquire medical image data; and reconstruct the medical image from the medical image data.

    10. The medical system of claim 9, wherein the medical imaging system is at least one of the following: a magnetic resonance imaging system, a computed tomography system, an ultrasonic imaging system, an X-ray system, a fluoroscope, a positron emission tomography system, and a single photon emission computed tomography system.

    11. The medical system of claim 10, wherein the anonymized image fingerprint further comprises metadata descriptive of a configuration of the medical imaging system during acquisition of the medical image data.

    12. The medical system of claim 1, wherein the image assessment comprises scan planning instructions.

    13. The medical system of claim 12, wherein the medical system further comprises a display, wherein execution of the machine executable instructions further causes the computational system to render at least the scan planning instructions on the display.

    14. A method of medical imaging, wherein the method comprises: receiving a medical image; receiving hidden layer output in response to inputting the medical image into each of at least one trained neural network, wherein each of the at least one trained neural network comprises multiple hidden layers, wherein each of the at least one trained neural network has been modified to provide hidden layer output in response to receiving the medical image, wherein the hidden layer output is outputted directly from one or more of the multiple hidden layers; providing an anonymized image fingerprint comprising the hidden layer output from each of the at least one trained neural network; and receiving an image assessment of the medical image in response to querying a historical image database using the anonymized image fingerprint.

    15. A computer program comprising machine executable instructions and at least one trained neural network stored on a non-transitory computer readable medium for execution by a computational system controlling a medical imaging system, wherein each of the at least one neural network is configured for receiving a medical image as input, wherein each of the at least one trained neural network comprises multiple hidden layers, wherein each of the at least one trained neural network has been modified to provide hidden layer output in response to receiving the medical image, wherein the hidden layer output is outputted directly from one or more of the multiple hidden layers, wherein execution of the machine executable instructions causes the computational system to: receive the medical image; receive the hidden layer output in response to inputting the medical image into each of the at least one trained neural network; provide an anonymized image fingerprint comprising the hidden layer output from each of the at least one trained neural network; and receive an image assessment of the medical image in response to querying a historical image database using the anonymized image fingerprint.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0095] In the following, preferred embodiments of the invention will be described, by way of example only, and with reference to the drawings, in which:

    [0096] FIG. 1 illustrates an example of a medical system;

    [0097] FIG. 2 shows a flow chart which illustrates a method of using the medical system of FIG. 1;

    [0098] FIG. 3 illustrates a further example of a medical system;

    [0099] FIG. 4 illustrates the function of a medical system.

    DETAILED DESCRIPTION OF EMBODIMENTS

    [0100] Like numbered elements in these figures are either equivalent elements or perform the same function. Elements which have been discussed previously will not necessarily be discussed in later figures if the function is equivalent.

    [0101] FIG. 1 illustrates an example of a medical system 100. The medical system is shown as comprising a computer 102 with a computational system 104. The computational system 104 could for example represent one or more processing cores at one or more locations. The computational system 104 is shown as being connected to an optional hardware interface 106. The hardware interface 106 may for example be an interface which enables the computational system 104 to communicate with other components of the medical system 100 as well as other computers 102′. For example, the hardware interface 106 could be a network interface. The computational system 104 is further shown as being connected to an optional user interface 108.

    [0102] The computational system 104 is further shown as being connected to a memory 110. The memory 110 is intended to represent any type of memory which may be accessible to the computational system 104. In this example, the computer 102 is shown as being optionally connected to an additional computer 102′. The additional computer 102′ is likewise shown as comprising a computational system 104′ that is connected to an optional user interface 108′, a hardware interface 106′ and a memory 110′. In some examples the medical system 100 only comprises those portions which are part of the computer 102. In other examples, the medical system 100 comprises the components of the computer 102, the optional computer 102′ and a network connection 112.

    [0103] The network connection 112 is shown as connecting the hardware interfaces 106, 106′ and enables the two computational systems 104 and 104′ to communicate. In yet other examples, the components of the computer 102 and the optional computer 102′ are combined with each other. For example, the contents of the memory 110 and the memory 110′ may be combined together to form a single computer 102. The computer 102′ may for example represent a remote server or cloud-based computer.

    [0104] The memory 110 is shown as containing machine-executable instructions 120. The machine-executable instructions 120 may for example enable the computational system 104 to provide basic image and data processing tasks as well as control the medical system 100. Likewise, the memory 110′ is shown as containing machine-executable instructions 120′ that enable the computational system 104′ to perform equivalent tasks.

    [0105] The memory 110 is further shown as containing at least one neural network 122. The at least one neural network 122 has been modified so that it provides hidden layer output 124. The hidden layer output 124 may be considered to be feature vectors which are provided from a hidden layer of the at least one neural network 122. If there is more than one neural network 122, then, for example, the hidden layer output 124 from each of the neural networks may be concatenated together. The memory 110 is further shown as containing an anonymized image fingerprint 126. In the simplest case, the anonymized image fingerprint 126 is simply the hidden layer output 124 from the at least one neural network 122. The memory 110 is further shown as containing a medical image 130 that may be input into the at least one neural network 122 to generate the hidden layer output 124.
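    By way of a non-limiting illustration, the construction of the anonymized image fingerprint 126 from concatenated hidden layer output 124 may be sketched as follows. The toy fully connected networks, their dimensions, and the random values are illustrative assumptions only, not part of the disclosed system:

```python
import numpy as np

def hidden_layer_output(weights, biases, x):
    """Forward pass through a toy fully connected network, returning the
    activations of the last hidden layer instead of a classification output."""
    h = x
    for W, b in zip(weights, biases):
        h = np.maximum(W @ h + b, 0.0)  # ReLU hidden activations
    return h

def anonymized_fingerprint(networks, image):
    """Concatenate the hidden layer output of each network (element 124)
    into a single fingerprint vector (element 126)."""
    return np.concatenate([hidden_layer_output(ws, bs, image) for ws, bs in networks])

# Two toy "trained" networks with different hidden widths (illustrative only).
rng = np.random.default_rng(0)
net_a = ([rng.standard_normal((16, 64)), rng.standard_normal((8, 16))],
         [np.zeros(16), np.zeros(8)])
net_b = ([rng.standard_normal((12, 64))], [np.zeros(12)])

image = rng.standard_normal(64)          # flattened stand-in for a medical image
fp = anonymized_fingerprint([net_a, net_b], image)
print(fp.shape)                          # (20,) = 8 + 12 concatenated features
```

    Only the fingerprint vector, not the image, would leave the site; additional meta data could be appended to the same vector.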

    [0106] The anonymized image fingerprint 126 may also contain information that is additional to the hidden layer output 124. For example, there may be meta data associated with the medical image 130 such as the type of scan or other data which may be useful to narrow the search within the historical image database. The anonymized image fingerprint 126 may also for example comprise image descriptors or other data which is descriptive of the medical image 130. The memory 110 is further shown as containing an image assessment 128 of the medical image 130. This is received in response to querying a historical image database 140.

    [0107] The memory 110 is also shown as containing the historical image database 140. The historical image database 140 comprises historical images that are associated with image fingerprints. It should be noted that the historical image database 140 could in some examples contain the original images, or it may contain just the image fingerprints of these historical images. Each historical image in the database, or its image fingerprint alone, is associated with historical data. The historical image database 140 may be queried with the anonymized image fingerprint 126. The computer 102 transfers the anonymized image fingerprint 126 to the computer 102′. The computer 102′ then queries the historical image database 140 with the anonymized image fingerprint 126. This results in the return of a set of similar images 142. The set of similar images 142 comprises historical data 144. The historical data 144 can be considered to be a container for all sorts of information about the images in the historical image database 140. For example, if an image had an artifact, or a failure of a hardware component, or there was a particular diagnosis or tumor in one of the historical images, this could be labeled in the historical data 144.

    [0108] In some cases, the historical data 144 is simply returned as the image assessment 128. In other examples there may also be a filter module 146 which selects the type of data from the historical data 144, which is used to compile or provide the image assessment 128. For example, the filter module 146 may remove certain types of data. For example, if the operator of the medical system 100 is only interested in how to configure the medical system then the filter module 146 may remove other types of data. In other instances, the operator of the medical system 100 may be looking for certain types of tumors or for certain types of image artifacts. This could be selected using the filter module 146 also. The medical system 100 may therefore provide a system which can provide a variety of types of information without prior training. The anonymized image fingerprint 126 may be the individual numerical values from the neurons which were output into the hidden layer output 124. Various grouping and nearest neighbor algorithms may be used to determine the set of similar images 142 without prior training.
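    The untrained nearest-neighbor grouping mentioned above may, purely as an illustration, be realized as a brute-force search over stored fingerprints. The synthetic database entries and the choice of cosine similarity are assumptions for this sketch:

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def query_database(fingerprint, database, k=3):
    """Return the k entries whose stored fingerprints are most similar to the
    query fingerprint (element 126), together with their historical data."""
    scored = sorted(database,
                    key=lambda entry: cosine_similarity(fingerprint, entry["fingerprint"]),
                    reverse=True)
    return scored[:k]

# Synthetic stand-in for the historical image database 140.
rng = np.random.default_rng(1)
database = [{"fingerprint": rng.standard_normal(8),
             "historical_data": f"case-{i}"} for i in range(100)]

# Query with a near-duplicate of one stored fingerprint.
query = database[42]["fingerprint"] + 0.01 * rng.standard_normal(8)
matches = query_database(query, database, k=3)
print(matches[0]["historical_data"])  # the near-duplicate case ranks first
```

    No training is required for this retrieval step; only the similarity measure and the stored fingerprints are needed.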

    [0109] FIG. 2 shows a flowchart which illustrates a method of operating the medical system 100 illustrated in FIG. 1. First, in step 200, the medical image 130 is received. In some examples the medical image 130 may be received by transferring the medical image 130 to the memory 110. In other examples, the medical system 100 may further comprise a medical imaging system and the medical image 130 may be received by acquiring it. Next, in step 202, the hidden layer output 124 is received by inputting the medical image 130 into the at least one neural network 122. Next, in step 204, the computational system 104 constructs an anonymized image fingerprint 126 using the hidden layer output 124. If there is additional meta data or descriptors available, this may in some instances be appended to the hidden layer output 124 to construct the anonymized image fingerprint 126. Next, in step 206, the set of similar images 142 is provided by querying the historical image database 140 with the anonymized image fingerprint 126.

    [0110] In the simplest case where the anonymized image fingerprint 126 only comprises the hidden layer output 124 there may be some sort of nearest neighbor or metric which is used to compute which historical images belong to the set of similar images 142. In case the anonymized image fingerprint 126 comprises additional descriptors or meta data, these descriptors or meta data may be used to first query the historical image database 140 and reduce the number of images before a metric or nearest neighbor algorithm is used. This for example, may make the system more efficient. Next, in step 208, at least a portion of historical data is provided as the image assessment 128. For example, there may be a filter module 146 which reduces the amount of historical data 144 that is used to construct the image assessment 128. Finally, in step 210, the computer 102 receives the image assessment 128 from the other computer system 102′. As was noted previously, in some examples the computers 102 and 102′ may be combined.
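    The two-stage query described above (meta data or descriptors first, then a metric) might be sketched as follows; the field name `scan_type` and the use of an L2 metric are illustrative assumptions:

```python
import numpy as np

def two_stage_query(fingerprint, meta, database, k=3):
    """First narrow the candidates using meta data, then rank the remaining
    entries with a distance metric (Euclidean distance here)."""
    candidates = [e for e in database if e["meta"]["scan_type"] == meta["scan_type"]]
    candidates.sort(key=lambda e: np.linalg.norm(fingerprint - e["fingerprint"]))
    return candidates[:k]

# Synthetic database mixing two scan types (illustrative only).
rng = np.random.default_rng(2)
database = [{"fingerprint": rng.standard_normal(8),
             "meta": {"scan_type": "T1w" if i % 2 == 0 else "T2w"},
             "historical_data": f"case-{i}"} for i in range(50)]

matches = two_stage_query(rng.standard_normal(8), {"scan_type": "T1w"}, database)
print(all(m["meta"]["scan_type"] == "T1w" for m in matches))  # True
```

    The meta data filter halves the number of vector comparisons in this toy case, which is the efficiency gain the paragraph above refers to.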

    [0111] FIG. 3 illustrates a further example of a medical system 300. The medical system 300 is similar to the one illustrated in FIG. 1, except that it is shown as additionally comprising a magnetic resonance imaging system 302. The magnetic resonance imaging system 302 is intended to depict a medical imaging system in general. For example, the magnetic resonance imaging system 302 may be replaced with a diagnostic ultrasound system, a computed tomography system, a positron emission tomography system, or a single photon emission computed tomography system.

    [0112] The magnetic resonance imaging system 302 comprises a magnet 304. The magnet 304 is a superconducting cylindrical type magnet with a bore 306 through it. The use of different types of magnets is also possible; for instance, it is also possible to use a split cylindrical magnet or a so-called open magnet. A split cylindrical magnet is similar to a standard cylindrical magnet, except that the cryostat has been split into two sections to allow access to the iso-plane of the magnet; such magnets may, for instance, be used in conjunction with charged particle beam therapy. An open magnet has two magnet sections, one above the other, with a space in between that is large enough to receive a subject; the arrangement of the two sections is similar to that of a Helmholtz coil. Open magnets are popular because the subject is less confined. Inside the cryostat of the cylindrical magnet there is a collection of superconducting coils.

    [0113] Within the bore 306 of the cylindrical magnet 304 there is an imaging zone 308 where the magnetic field is strong and uniform enough to perform magnetic resonance imaging. A region of interest 309 is shown within the imaging zone 308. The magnetic resonance data is typically acquired for the region of interest. A subject 318 is shown as being supported by a subject support 320 such that at least a portion of the subject 318 is within the imaging zone 308 and the region of interest 309.

    [0114] Within the bore 306 of the magnet there is also a set of magnetic field gradient coils 310 which is used for the acquisition of preliminary magnetic resonance data to spatially encode magnetic spins within the imaging zone 308 of the magnet 304. The magnetic field gradient coils 310 are connected to a magnetic field gradient coil power supply 312. The magnetic field gradient coils 310 are intended to be representative. Typically, magnetic field gradient coils 310 contain three separate sets of coils for spatially encoding in three orthogonal spatial directions. The magnetic field gradient coil power supply 312 supplies current to the magnetic field gradient coils 310. The current supplied to the magnetic field gradient coils 310 is controlled as a function of time and may be ramped or pulsed.

    [0115] Adjacent to the imaging zone 308 is a radio-frequency coil 314 for manipulating the orientations of magnetic spins within the imaging zone 308 and for receiving radio transmissions from spins also within the imaging zone 308. The radio-frequency coil 314 may contain multiple coil elements and may also be referred to as a channel or an antenna. The radio-frequency coil 314 is connected to a radio frequency transceiver 316. The radio-frequency coil 314 and radio frequency transceiver 316 may be replaced by separate transmit and receive coils and a separate transmitter and receiver. It is understood that the radio-frequency coil 314 and the radio frequency transceiver 316 are representative. The radio-frequency coil 314 is intended to also represent a dedicated transmit antenna and a dedicated receive antenna. Likewise, the transceiver 316 may also represent a separate transmitter and receiver. The radio-frequency coil 314 may also have multiple receive/transmit elements and the radio frequency transceiver 316 may have multiple receive/transmit channels. For example, if a parallel imaging technique such as SENSE is performed, the radio-frequency coil 314 will have multiple coil elements.

    [0116] The transceiver 316 and the magnetic field gradient coil power supply 312 are shown as being connected to the hardware interface 106 of the computer 102.

    [0117] The memory 110 is shown as containing pulse sequence commands 330. The pulse sequence commands are commands or data which can be converted into commands which can be used to control the magnetic resonance imaging system 302 to acquire k-space data 332. The memory 110 is further shown as comprising k-space data 332 that has been acquired by controlling the magnetic resonance imaging system 302 with the pulse sequence commands 330. The computational system 104 may also reconstruct a magnetic resonance image (the medical image 130) from the k-space data 332. The k-space data 332 is intended to represent general medical imaging data.

    [0118] The memory 110 is further shown as containing meta data 334 which may for example be descriptive of the acquisition of the k-space data 332. For example, it may indicate the type of scan or even the region 309 that was scanned of the subject 318. The meta data 334 may for example be appended to the anonymized image fingerprint 126.

    [0119] The memory 110 is further shown as containing a bag-of-words model 336. The medical image 130 may for example be input into the bag-of-words model 336 to optionally provide a set of image descriptors 338. Like the meta data 334, if the set of image descriptors 338 is present, it may be optionally appended to the anonymized image fingerprint 126. The meta data 334 and/or the set of image descriptors 338 may then be used to optionally query the historical image database 140 to narrow the search for the set of similar images 142.

    [0120] The medical system 300 is shown as further comprising a display 340. The display 340 is shown as depicting scan planning instructions 342 which were generated using the image assessment 128. The scan planning instructions 342 could, for example, be a set of detailed instructions on how to configure the medical system 300 for further medical image acquisition.

    [0121] As was described above, technologists operating medical imaging systems such as MRI or CT often face unforeseen situations such as image quality problems or unexpected pathologies. Currently, the operator usually has to rely on his/her experience to select the most suitable action, such as modifying the acquisition parameters. This procedure is error-prone, in particular for inexperienced staff members and/or rare events like hardware defects.

    [0122] Examples may provide for a multi-site workflow assistance tool (medical system) for medical imaging systems. It relies on a comparison of the acquired images (medical image 130) with similar cases from multiple other sites (from the historical image database 140), thereby accessing a large body of knowledge. Similarity assessment is based on feature vectors (hidden layer output 124) calculated using a dedicated convolutional neural network (using the at least one neural network 122). Since only these feature vectors (and possibly metadata) are shared between sites, the system does not violate privacy regulations. Alternatively, the system can also be used within a single site, e.g. to ensure that internal guidelines for incidental findings are followed.

    [0123] Advanced imaging systems such as MRI or CT scanners require highly skilled operators. In the clinical routine, technicians must be able to deal with a large variety of unforeseen situations, including image quality problems that require suitable corrective action as well as pathologies that may require insertion of additional dedicated scans. Examples of image quality problems include inappropriate selection of the FOV, incorrect patient positioning, metal artifacts, motion artifacts, or problems due to technical limitations such as unsuccessful preparation phases. In most cases, adequate corrective action will resolve these problems. An example of a pathology that requires additional scans is a vascular stenosis, which requires angiography and/or perfusion sequences to enable reliable diagnosis.

    [0124] In the current clinical routine, a technician usually relies on his/her experience to analyze images and remember suitable actions from previous cases. This situation is problematic: a long training time is required to enable new technicians to reliably analyze images and build up enough experience. In addition, many problematic situations, such as incidental findings or hardware defects, occur very infrequently, so that even experienced technicians may have difficulty interpreting the images correctly. This can be particularly problematic for clinical sites with low case numbers.

    [0125] In principle, the large number of medical images constantly produced world-wide by medical scanners, such as magnetic resonance imaging systems, could be used to alleviate this problem: comparison with similar images from other sites could often reveal the most suitable action. In particular, smaller sites with a less-specialized healthcare profile may strongly profit from accessing the knowledge databases of large sites with specialized departments and staff. Sharing of medical images between clinics is, however, challenging due to strict legal and privacy regulations in many countries.

    [0126] Examples may provide for a system that enables a fast comparison of medical images with images from multiple other clinical sites (via the historical image database), thereby enabling workflow assistance that relies on a large body of knowledge. The comparison is based on feature vectors (the anonymized image fingerprint 126) that are calculated using a dedicated pre-trained convolutional neural network (CNN), such that only these anonymized vectors need to be shared between sites. Therefore, the system does not violate privacy regulations.

    [0127] FIG. 4 illustrates a further example of a medical system that may be implemented. FIG. 4 provides a schematic overview of an example, where an unexpected intracranial hemorrhage is depicted. FIG. 4 illustrates this as a flowchart. The steps of the operations are divided into those which would be performed by a computer 102 such as is depicted in FIGS. 1 and 3, and a second computer 102′ such as is depicted in FIGS. 1 and 3 also. Computer 102′ is implemented in this example as a cloud-based node.

    [0128] The method starts in step 400, where an image is acquired. This is the medical image 130. Next, the feature vector is generated 402 using a convolutional neural network. This is equivalent to generating the anonymized image fingerprint 126 using the at least one neural network 122. In this step, the pre-trained CNN is used to calculate a feature vector of the acquired image. In the simplest case, this network is a standard CNN that is trained for image classification, ideally on medical images. In a proof-of-concept study, however, a CNN trained for classification on the ImageNet dataset (i.e. natural images) yielded satisfactory results as well. The feature vector is extracted using the output of a hidden layer located deep within the network (e.g. the last convolutional layer before the dense layer in ResNet). The extracted feature vector corresponds to a high-level abstraction of the input image data. Image similarity is then assessed by comparing feature vectors of different images with standard metrics such as mean-squared-error (MSE), L1 norm, cosine similarity, etc., or machine-learning based similarity metrics (see Example 3 below).
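    The standard metrics named above admit direct implementations. The following sketch compares small example feature vectors and is independent of any particular CNN; the vectors themselves are arbitrary illustrations:

```python
import numpy as np

def mse(a, b):
    """Mean-squared-error between two feature vectors."""
    return float(np.mean((a - b) ** 2))

def l1(a, b):
    """L1 norm of the difference between two feature vectors."""
    return float(np.sum(np.abs(a - b)))

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def minkowski(a, b, p):
    """Minkowski distance of order p (p=1 is L1, p=2 is Euclidean)."""
    return float(np.sum(np.abs(a - b) ** p) ** (1.0 / p))

f1 = np.array([1.0, 0.0, 2.0, 0.0])
f2 = np.array([1.0, 0.0, 2.0, 0.0])
f3 = np.array([0.0, 3.0, 0.0, 1.0])

print(mse(f1, f2), cosine(f1, f2))         # 0.0 1.0 (identical feature vectors)
print(cosine(f1, f3))                      # 0.0 (orthogonal feature vectors)
print(minkowski(f1, f3, 1) == l1(f1, f3))  # True: the p=1 Minkowski distance is the L1 norm
```

    A machine-learning based similarity measure, as in Example 3, would replace these closed-form functions with a trained comparison network.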

    [0129] Importantly, access to the parameters of the CNN should be restricted to ensure that image content cannot be estimated by inverse methods.

    [0130] Next, in step 404, the feature vector is sent to the cloud-based computer 102′. When the feature vector reaches the cloud-based computer, the cloud nodes query their local databases and select cases or historical images whose associated feature vectors yield a high similarity score with the feature vector f of image 130. In this example, the database is depicted as storing these images. However, the images do not need to be stored in the cloud-based computer 102′ itself; the data could instead be linked to an image fingerprint. This would provide, for example, a means of anonymizing the historical data of the database contained in the computer 102′. The calculated feature vector is thus sent to a central node, where it is compared to other vectors from different sites. Each of these vectors is stored with a corresponding workflow label that describes the most suitable action, such as adding a specific sequence for further image acquisition. The best matching results are then returned to the querying site (Site 1 in FIG. 4), and the workflow recommendations associated with the best matches (highest similarity score) are displayed to the operator.

    [0131] Next, in step 406, the workflow labels and feature vectors of the best matching cases are received. In some cases, the feature vectors are not received, only the workflow labels or, equivalently, the image assessment 128. Next, in step 408, the results from the cloud query with a high feature similarity are selected. This may be performed by the computer 102 or by the computer 102′, as was illustrated in FIGS. 1 and 3. Finally, in step 410, the workflow or image assessment 128 is displayed to the operator.

    [0132] For many applications, the workflow labels (used to later provide image assessments 128) can be automatically extracted without requiring manual annotation by the operator:

    [0133] Insertion of new scans into an imaging protocol can be detected by a simple logfile analysis.

    [0134] Modification of scan parameters to alleviate artifacts can be extracted by detecting scans that are repeated without modified geometry parameters.

    [0135] Repositioning of the patient can be detected by analysis of table movement (also usually contained in logfiles).
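    The logfile heuristics above can, for illustration, be sketched as simple post-processing. The log record format used here is an assumption, not a real scanner logfile format:

```python
# Toy log records: (scan_name, geometry_parameters). A scan that is repeated
# with unchanged geometry suggests that non-geometry parameters were modified
# to alleviate an artifact, which can serve as an automatic workflow label.
log = [
    ("survey",  {"fov": 250, "angulation": 0}),
    ("t1w_tra", {"fov": 230, "angulation": 5}),
    ("t1w_tra", {"fov": 230, "angulation": 5}),  # repeated, same geometry
    ("t2w_tra", {"fov": 230, "angulation": 5}),
]

def repeated_scans(log):
    """Return scan names seen more than once with identical geometry."""
    seen, repeats = set(), []
    for name, geometry in log:
        key = (name, tuple(sorted(geometry.items())))
        if key in seen:
            repeats.append(name)
        seen.add(key)
    return repeats

print(repeated_scans(log))  # ['t1w_tra']
```

    Analogous checks over table-movement entries could flag patient repositioning without any manual annotation.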

    [0136] More detailed annotations can also be obtained by a simple user interaction module that collects feedback for certain situations (“Why was the scan repeated in this way? Was this procedure successful?”).

    Examples

    [0137] 1. In one example, the system is used as an in-house tool within a single clinical site, where the local image archive is used to find the most similar images. This procedure can be desirable if internal guidelines are to be followed, such as executing a defined list of additional scans if an incidental finding occurs. In addition, this application scenario has the advantage that details of the previous exams, including images, may be shown to the user with limited privacy concerns.

    [0138] 2. In another example, the medical system is also used for automatic scan planning: after acquisition of a survey/scout scan, the corresponding feature vector is calculated and compared to the vectors from other sites. In this case, the workflow label that is returned to the querying site comprises the scan planning parameters, i.e. angulation/offset parameters.

    [0139] 3. In another embodiment, the training of the trained neural network is tailored to the targeted image similarity task, e.g. by training the network to produce feature vectors that yield better matches (more similar images). Moreover, instead of comparing the different feature vectors with standard metrics such as MSE, a dedicated network can be trained to optimize this comparison (i.e. a machine learning based similarity measure).
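    The baseline comparison mentioned in paragraph [0139] can be sketched as follows. This is only an illustration of the standard-metric case (MSE); the trained, machine-learning-based similarity measure the paragraph contemplates would simply replace the `metric` argument, and all names here are illustrative.

```python
import numpy as np

def mse_distance(f1, f2):
    """Standard metric from paragraph [0139]: mean squared error between
    two feature vectors (lower value means more similar)."""
    return float(np.mean((f1 - f2) ** 2))

def best_match(f, db_vectors, metric=mse_distance):
    """Return the index of the closest historical case. The metric is
    pluggable so that a learned similarity measure can be swapped in."""
    dists = [metric(f, v) for v in db_vectors]
    return int(np.argmin(dists))
```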

    [0140] A more comprehensive comparison can also be realized by including system information (e.g., scanner type), protocol information (T1w, T2w, . . . ) or even patient information (age, gender, clinical indication . . . ) in the similarity metric.

    [0141] 4. Besides CNN features, techniques from the computer vision domain could also be employed in order to search for similar images. A common approach is the use of a bag-of-visual-words model. With such an approach, in a first step, relevant regions in an image are identified using feature detectors, and these regions are characterized in terms of feature descriptors. In a second step, the features are compared to a finite set of codebook entries; the corresponding frequency histogram provides a representation which can be used in a similar fashion as the CNN features for a search for related images.

    [0142] 5. Instead of using a central node that controls the communication with the other sites, a decentralized (peer-to-peer) communication between sites can also be implemented.
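    The second step of the bag-of-visual-words model in paragraph [0141] can be sketched as follows. The sketch assumes the local feature descriptors and the codebook are already available as arrays (in practice the descriptors would come from a feature detector such as SIFT or ORB); only the codebook-assignment and histogram step is shown.

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Bag-of-visual-words representation (paragraph [0141]): assign each
    local feature descriptor to its nearest codebook entry and return the
    normalized frequency histogram over codebook entries.
    descriptors: (n, d) array of local feature descriptors.
    codebook:    (k, d) array of codebook entries (visual words)."""
    # Squared distance between every descriptor and every codebook entry
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    assignments = d2.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

    The resulting histogram can then be used in the same fashion as the CNN feature vectors for the similarity search.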

    [0143] While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments.

    [0144] Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.

    REFERENCE SIGNS LIST

    [0145] 100 medical system
    [0146] 102 computer
    [0147] 102′ optional computer
    [0148] 104 computational system
    [0149] 104′ computational system
    [0150] 106 hardware interface
    [0151] 106′ hardware interface
    [0152] 108 optional user interface
    [0153] 108′ optional user interface
    [0154] 110 memory
    [0155] 110′ memory
    [0156] 112 network connection
    [0157] 120 machine executable instructions
    [0158] 120′ machine executable instructions
    [0159] 122 at least one neural network
    [0160] 124 hidden layer output
    [0161] 126 anonymized image fingerprint
    [0162] 128 image assessment
    [0163] 130 medical image
    [0164] 140 historical image database
    [0165] 142 set of similar images
    [0166] 144 historical data
    [0167] 146 filter module
    [0168] 200 receive the medical image
    [0169] 202 receive the hidden layer output in response to inputting the medical image into each of the at least one trained neural network
    [0170] 204 provide an anonymized image fingerprint comprising the hidden layer output from each of the at least one trained neural network
    [0171] 206 identifying a set of similar images by comparing the anonymized image fingerprint to image fingerprints in the image database
    [0172] 208 providing at least a portion of the historical data as the image assessment
    [0173] 210 receive an image assessment of the medical image in response to querying a historical image database using the anonymized image fingerprint
    [0174] 300 medical system
    [0175] 302 magnetic resonance imaging system
    [0176] 304 magnet
    [0177] 306 bore of magnet
    [0178] 308 imaging zone
    [0179] 309 region of interest
    [0180] 310 magnetic field gradient coils
    [0181] 312 magnetic field gradient coil power supply
    [0182] 314 radio-frequency coil
    [0183] 316 transceiver
    [0184] 318 subject
    [0185] 320 subject support
    [0186] 330 pulse sequence commands
    [0187] 332 k-space data
    [0188] 334 metadata
    [0189] 336 bag-of-words model
    [0190] 338 set of image descriptors
    [0191] 340 display
    [0192] 342 scan planning instructions
    [0193] 400 image acquisition
    [0194] 402 generation of feature vector F using CNN
    [0195] 404 send feature vector F to cloud
    [0196] 406 receive workflow labels and feature vectors F(C) of best matching cases
    [0197] 408 select results from cloud query with high feature similarity
    [0198] 410 display workflow propositions to operator