SALIENCY MAPS FOR MEDICAL IMAGING
20240355094 · 2024-10-24
Inventors
CPC classification: G06V10/774, G06V10/454, G06T11/005, G06V10/7715, G06V10/464 (PHYSICS)
International classification: G06V10/77, G06V10/46, G06V10/774, G06V10/22 (PHYSICS)
Abstract
Disclosed herein is a medical system (100) comprising a memory (110) storing machine executable instructions (120). The memory (110) further stores a trained first machine learning module (122) trained to output in response to receiving a medical image (124) as input a saliency map (126) as output. The saliency map (126) is predictive of a distribution of user attention over the medical image (124). The medical system (100) further comprises a computational system (104). Execution of the machine executable instructions (120) causes the computational system (104) to receive a medical image (124). The medical image (124) is provided as input to the trained first machine learning module (122). In response to the providing of the medical image (124), a saliency map (126) of the medical image (124) is received as output from the trained first machine learning module (122). The saliency map (126) predicts a distribution of user attention over the medical image (124). The saliency map (126) of the medical image (124) is provided.
Claims
1. A medical system comprising: a memory configured to store machine executable instructions, wherein the memory further stores a trained first machine learning module trained to output in response to receiving a medical image as input a saliency map as output, the saliency map being predictive of a distribution of user attention over the medical image; a computational system, wherein execution of the machine executable instructions causes the computational system to: receive a medical image; provide the medical image as input to the trained first machine learning module; in response to the providing of the medical image, receive a saliency map of the medical image as output from the trained first machine learning module, wherein the saliency map predicts a distribution of user attention over the medical image; provide the saliency map of the medical image.
2. The medical system of claim 1, wherein execution of the machine executable instructions further causes the computational system to provide the trained first machine learning module, wherein the providing of the trained first machine learning module comprises: providing the first machine learning module; providing first training data comprising first pairs of training medical images and training saliency maps, wherein the training saliency maps are descriptive of distributions of user attention over the training medical images; training the first machine learning module using the first training data, wherein the resulting trained first machine learning module is trained to output the training saliency maps of the first pairs in response to receiving the training medical images of the first pairs.
3. The medical system of claim 2, wherein the medical system further comprises a display device, wherein the providing of the first training data comprises for each of the training medical images of the first training data: displaying the respective training medical image using the display device; measuring a distribution of user attention over the displayed training medical image; generating the training saliency map of the first pair of training data comprising the displayed training medical image using the measured distribution of user attention over the training medical image.
4. The medical system of claim 3, wherein the medical system further comprises an eye tracking device configured for measuring positions and movements of eyes of a user of the medical system, wherein the memory further stores an attention determining module configured for determining the distribution of user attention over the displayed training medical image using the eye tracking device to determine for the user of the medical system looking at the displayed training medical image points of attention within the displayed training medical image.
5. The medical system of claim 1, wherein the trained first machine learning module is trained to output in response to receiving a medical image as input a user individual saliency map predicting a user individual distribution of user attention over the input medical image.
6. The medical system of claim 1, wherein the medical system is further configured to select a reconstruction method for reconstructing medical images from a plurality of pre-defined reconstruction methods using the saliency map, wherein the medical image is a test medical image of a pre-defined type of anatomical structure for which a medical image is to be reconstructed, wherein a plurality of test maps is provided, each of the test maps being assigned to a different one of the reconstruction methods, each of the test maps identifying sections of the test image comprising anatomical sub-structures of the pre-defined type of anatomical structure for which a quality of image reconstruction is the highest compared to other anatomical sub-structures of the pre-defined type of anatomical structure, when using the assigned reconstruction method, wherein execution of the machine executable instructions further causes the computational system to: provide the test maps; compare the test maps with the saliency map; determine one of the test maps having a highest level of structural similarity with the saliency map; select the reconstruction method assigned to the determined test map; reconstruct the medical image to be reconstructed using the selected reconstruction method.
7. The medical system of claim 1, wherein the memory further stores an out-of-distribution estimation module configured for outputting an out-of-distribution map in response to receiving a medical image as input, wherein the out-of-distribution map represents levels of compliance of the input medical image with a reference distribution defined by a set of reference medical images, wherein execution of the machine executable instructions further causes the computational system to: provide the medical image as input to the out-of-distribution estimation module; in response to the providing of the medical image, receive an out-of-distribution map of the medical image as output from the out-of-distribution estimation module, wherein the out-of-distribution map represents levels of compliance of the medical image with the reference distribution; provide a weighted out-of-distribution map, wherein the providing of the weighted out-of-distribution map comprises weighting the levels of compliance represented by the out-of-distribution map using the distribution of user attention over the medical image predicted by the saliency map.
8. The medical system of claim 7, wherein the providing of the weighted out-of-distribution map further comprises calculating an out-of-distribution score using the weighted levels of compliance provided by the weighted out-of-distribution map, wherein the out-of-distribution score is descriptive of a probability that the medical image as a whole is within the reference distribution.
9. The medical system of claim 1, wherein the memory further stores an image quality assessment module configured for outputting an image quality map in response to receiving a medical image and a saliency map as input, wherein the image quality map represents a distribution of levels of image quality over the input medical image weighted using the distribution of user attention over the input medical image predicted by the input saliency map, wherein execution of the machine executable instructions further causes the computational system to: provide the medical image and the saliency map as input to the image quality assessment module; in response to the providing of the medical image and the saliency map, receive an image quality map as output from the image quality assessment module, wherein the image quality map represents a distribution of levels of image quality over the medical image weighted using the distribution of user attention over the medical image predicted by the saliency map; provide the received image quality map.
10. The medical system of claim 9, wherein the image quality assessment module (136) is used for training a second machine learning module to output in response to receiving medical imaging data as input a medical image as output, wherein the image quality estimated by the image quality assessment module is descriptive of losses of the output medical image of the second machine learning module relative to one or more reference medical images, wherein execution of the machine executable instructions further causes the computational system to: provide the second machine learning module; provide second training data for training the second machine learning module, the second training data comprising second pairs of training medical imaging data and training medical images reconstructed using the training medical imaging data; train the second machine learning module, wherein the second machine learning module is trained to output the training medical images of the second pairs in response to receiving the training medical imaging data of the second pairs, wherein the training comprises for each of the second pairs providing the respective training medical imaging data as input to the second machine learning module and receiving a preliminary medical image as output, wherein the received preliminary medical image is the medical image, wherein the distribution of image quality represented by the image quality map received for the medical image from the image quality assessment module is used as a distribution of losses over the medical image relative to the training medical image of the respective second pair provided as a reference medical image to the image quality assessment module for determining the received image quality map, wherein parameters of the second machine learning module are adjusted during the training until the losses over the medical image satisfy a predefined criterion.
11. The medical system of claim 9, wherein the providing of the received image quality map comprises calculating an image quality score using the received image quality map, wherein the image quality score is descriptive of an averaged image quality of the medical image.
12. The medical system of claim 1, wherein the medical system is configured to acquire medical imaging data for reconstructing the medical image, wherein the medical imaging data is acquired using any one of the following data acquisition methods: magnetic resonance imaging, computed-tomography imaging, positron emission tomography imaging, single photon emission computed tomography imaging.
13. A medical system comprising: a memory storing machine executable instructions; a computational system, wherein execution of the machine executable instructions causes the computational system to provide a trained machine learning module trained to output in response to receiving a medical image as input a saliency map as output, the saliency map being predictive of a distribution of user attention over the medical image, wherein the providing of the trained machine learning module comprises: providing the machine learning module; providing training data comprising pairs of training medical images and training saliency maps, wherein the training saliency maps are descriptive of distributions of user attention over the training medical images; training the machine learning module using the training data, wherein the resulting trained machine learning module is trained to output the training saliency maps of the pairs in response to receiving the training medical images of the pairs.
14. A computer program comprising machine executable instructions stored on a non-transitory computer readable memory for execution by a computational system controlling a medical system, wherein the computer program further comprises a trained machine learning module trained to output in response to receiving a medical image as input a saliency map as output, the saliency map being predictive of a distribution of user attention over the medical image, wherein execution of the machine executable instructions causes the computational system to: receive a medical image; provide the medical image as input to the trained machine learning module; in response to the providing of the medical image, receive a saliency map of the medical image as output from the trained machine learning module, wherein the saliency map predicts a distribution of user attention over the medical image; provide the saliency map of the medical image.
15. A method of medical imaging using a trained machine learning module trained to output in response to receiving a medical image as input a saliency map as output, the saliency map being predictive of a distribution of user attention over the medical image, wherein the method comprises: receiving a medical image; providing the medical image as input to the trained machine learning module; in response to the providing of the medical image, receiving a saliency map of the medical image as output from the trained machine learning module, wherein the saliency map predicts a distribution of user attention over the medical image; providing the saliency map of the medical image.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0103] In the following, preferred embodiments of the invention will be described, by way of example only, and with reference to the drawings.
DESCRIPTION OF EMBODIMENTS
[0125] Like numbered elements in these figures are either equivalent elements or perform the same function. Elements which have been discussed previously will not necessarily be discussed in later figures if the function is equivalent.
[0127] The computer 102 is further shown as comprising a computational system 104. The computational system 104 is intended to represent one or more processors or processing cores or other computational systems that are located at one or more locations. The computational system 104 is shown as being connected to an optional hardware interface 106. The optional hardware interface 106 may for example enable the computational system 104 to control other components such as a magnetic resonance imaging system, a computed-tomography imaging system, a positron emission tomography imaging system, or a single photon emission computed tomography imaging system.
[0128] The computational system 104 is further shown as being connected to an optional user interface 108 which may for example enable an operator to control and operate the medical system 100. The optional user interface 108 may, e.g., comprise an output and/or input device enabling a user to interact with the medical system. The output device may, e.g., comprise a display device configured for displaying medical images. The input device may, e.g., comprise a keyboard and/or a mouse enabling the user to insert control commands for controlling the medical system 100. The optional user interface 108 may, e.g., comprise an eye tracking device, like a camera, configured for tracking positions and/or movements of eyes of a user using the medical system 100. The computational system 104 is further shown as being connected to a memory 110. The memory 110 is intended to represent different types of memory which could be connected to the computational system 104.
[0129] The memory is shown as containing machine-executable instructions 120. The machine-executable instructions 120 enable the computational system 104 to perform tasks such as controlling other components as well as performing various data and image processing tasks. The machine-executable instructions 120 may, e.g., enable the computational system 104 to control other components such as a magnetic resonance imaging system, a computed-tomography imaging system, a positron emission tomography imaging system, or a single photon emission computed tomography imaging system.
[0130] The memory 110 is further shown as containing a trained machine learning module 122 configured for receiving a medical image 124 and, in response, providing a saliency map 126. The saliency map 126 predicts a distribution of user attention over the medical image 124. The medical image 124 may, e.g., be an MRI image, a CT image or an AMI image, like a PET image or a SPECT image.
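By way of a non-limiting illustration, the saliency map 126 may be represented as a per-pixel probability distribution of user attention. The following sketch assumes a softmax normalization of raw model outputs; the function name and the choice of normalization are assumptions of this illustration and are not mandated by the text:

```python
import numpy as np

def saliency_from_logits(logits):
    """Convert raw per-pixel outputs of a saliency model into a map that
    sums to 1, i.e. a probability distribution of user attention over
    the input medical image (illustrative normalization)."""
    z = logits - logits.max()      # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()
```

Normalizing the map in this way makes attention values comparable between images and lets the map be used directly as a weighting in the downstream modules described below.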
[0131] The memory 110 is further shown as containing medical imaging data 123. The medical system 100 may, e.g., be configured to reconstruct the medical image 124 using the medical imaging data 123. The medical imaging data 123 may, e.g., be MRI data, CT imaging data or AMI data, like PET imaging data or SPECT imaging data. For example, the memory 110 may contain a further machine learning module 121 configured for receiving the medical imaging data 123 and, in response, providing the reconstructed medical image 124.
[0132] The memory 110 is further shown as containing a set of test maps 128. The medical system 100 may be configured to execute different reconstruction methods for reconstructing medical images, like the medical image 124, e.g., using the medical imaging data 123. Each of the test maps 128 may be assigned to a different one of the reconstruction methods and identify sections of a test image for which a quality of image reconstruction is the highest using the assigned reconstruction method. The test image, e.g., the medical image 124, may display a pre-defined type of anatomical structure for which a medical image is to be reconstructed using one of the reconstruction methods. In case a medical image of a brain is to be reconstructed, the test image may show a brain structure. The sections identified by the test maps may comprise anatomical sub-structures of the pre-defined type of anatomical structure for which a quality of image reconstruction is the highest compared to other anatomical sub-structures of the pre-defined type of anatomical structure, when using the assigned reconstruction method. For example, in case one of the reconstruction methods provides high image quality for white matter in the brain, a test map assigned to this reconstruction method may indicate, e.g., highlight, white matter comprised by the test image. For example, in case one of the reconstruction methods provides high image quality for grey matter in the brain, a test map assigned to this reconstruction method may indicate, e.g., highlight, grey matter comprised by the test image. For example, in case one of the reconstruction methods provides high image quality for cerebrospinal fluid in the brain, a test map assigned to this reconstruction method may indicate, e.g., highlight, cerebrospinal fluid comprised by the test image.
For example, medical image 124 may be the test image and saliency map 126 may be used to select one of the reconstruction methods for reconstructing one or more medical images. The saliency map 126 may be used to determine one of the test maps 128 having a highest level of structural similarity with the saliency map 126. The reconstruction method assigned to the determined test map may be selected and one or more medical images may be reconstructed using the selected reconstruction method. For example, the medical imaging data 123 may be used for reconstructing the medical images.
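The selection step above can be sketched as follows. The text leaves the structural similarity measure open; this illustration uses normalized cross-correlation as a simple stand-in for a measure such as SSIM, and the function and method names are assumptions:

```python
import numpy as np

def select_reconstruction_method(saliency_map, test_maps):
    """Return the name of the reconstruction method whose test map has the
    highest structural similarity to the predicted saliency map.
    `test_maps` maps a method name to its test map (same shape as the
    saliency map).  Normalized cross-correlation stands in for a
    structural similarity measure such as SSIM."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float((a * b).mean())
    return max(test_maps, key=lambda name: ncc(saliency_map, test_maps[name]))
```

If the predicted attention concentrates on white matter, the method whose test map highlights white matter wins the comparison, so the reconstruction is tailored to the regions the user is predicted to inspect.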
[0133] The memory 110 is further shown as containing an OOD estimation module 130 configured for receiving a medical image 124 and, in response, providing an OOD map 132 representing levels of compliance of the input medical image 124 with a reference distribution defined by a set of reference medical images. The memory 110 may further contain a weighted OOD map 134 generated by weighting the levels of compliance represented by the OOD map 132 using the distribution of user attention over the medical image 124 predicted by the saliency map 126. The weighted levels of compliance provided by the weighted OOD map 134 may, e.g., be used for calculating an OOD score describing a probability that the medical image 124 as a whole is within the reference distribution.
[0134] The memory 110 is further shown as containing an image quality assessment module 136 configured for receiving a medical image 124 and, in response, providing an image quality map 138 representing a distribution of levels of image quality over the input medical image 124. The levels of image quality may be weighted using the distribution of user attention over the input medical image 124 predicted by the input saliency map 126, when receiving the saliency map 126 together with the medical image 124 as input. The weighted levels of image quality provided by the image quality map 138 may, e.g., be used for calculating an image quality score representing an averaged image quality of the medical image 124.
[0135] The image quality assessment module 136 may, e.g., be used by the medical system 100 for training the machine learning module 121. The image quality estimated by the image quality assessment module 136 may, e.g., be descriptive of losses of the medical image 124 output by the machine learning module 121 relative to one or more reference medical images. For example, training data for training the machine learning module 121 may be provided. The respective training data may, e.g., be contained by the memory 110. The training data for training the machine learning module 121 may comprise pairs of training medical imaging data and training medical images reconstructed using the training medical imaging data. The machine learning module 121 may be trained to output the training medical images in response to receiving the training medical imaging data. The training may comprise for each of the pairs providing the respective training medical imaging data as input to the machine learning module 121 and receiving a preliminary medical image as output. The received preliminary medical image may, e.g., be the medical image 124. The distribution of image quality represented by the image quality map 138 received for the medical image 124 from the image quality assessment module 136 may be used as a distribution of losses over the medical image 124 relative to the training medical image of the respective pair. The training medical image may be provided as a reference medical image to the image quality assessment module 136 for determining the received image quality map 138. Training of the machine learning module 121 may comprise adjusting parameters of the machine learning module until the losses over the medical image 124 satisfy a predefined criterion. The criterion may, e.g., require that the losses are smaller than a predefined threshold.
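The saliency-weighted training loss described above can be sketched as a per-pixel squared error weighted by predicted attention. The squared-error term and the function name are assumptions of this illustration; the text only requires that the loss distribution over the image is weighted by user attention:

```python
import numpy as np

def saliency_weighted_loss(reconstruction, reference, saliency_map):
    """Attention-weighted squared error between a preliminary
    reconstruction and its reference training image: reconstruction
    errors in regions a user is predicted to inspect count more
    heavily when training the reconstruction module (121)."""
    w = saliency_map / saliency_map.sum()
    return float((w * (reconstruction - reference) ** 2).sum())
```

With this loss, the parameter adjustment during training preferentially removes artifacts from the clinically relevant regions rather than spending capacity on image background.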
[0139] The magnetic resonance imaging system 302 comprises a magnet 304. The magnet 304 is a superconducting cylindrical type magnet with a bore 306 through it. The use of different types of magnets is also possible; for instance, it is also possible to use both a split cylindrical magnet and a so-called open magnet. A split cylindrical magnet is similar to a standard cylindrical magnet, except that the cryostat has been split into two sections to allow access to the iso-plane of the magnet. Such magnets may, for instance, be used in conjunction with charged particle beam therapy. An open magnet has two magnet sections, one above the other with a space in-between that is large enough to receive a subject; the arrangement of the two sections is similar to that of a Helmholtz coil. Open magnets are popular because the subject is less confined. Inside the cryostat of the cylindrical magnet there is a collection of superconducting coils.
[0140] Within the bore 306 of the cylindrical magnet 304 there is an imaging zone 308 where the magnetic field is strong and uniform enough to perform magnetic resonance imaging. A region of interest 309 is shown within the imaging zone 308. The magnetic resonance data is typically acquired for the region of interest. A subject 318 is shown as being supported by a subject support 320 such that at least a portion of the subject 318 is within the imaging zone 308 and the region of interest 309.
[0141] Within the bore 306 of the magnet there is also a set of magnetic field gradient coils 310 which is used for acquisition of preliminary magnetic resonance data to spatially encode magnetic spins within the imaging zone 308 of the magnet 304. The magnetic field gradient coils 310 are connected to a magnetic field gradient coil power supply 312. The magnetic field gradient coils 310 are intended to be representative. Typically, magnetic field gradient coils 310 contain three separate sets of coils for spatially encoding in three orthogonal spatial directions. A magnetic field gradient power supply supplies current to the magnetic field gradient coils. The current supplied to the magnetic field gradient coils 310 is controlled as a function of time and may be ramped or pulsed.
[0142] Adjacent to the imaging zone 308 is a radio-frequency coil 314 for manipulating the orientations of magnetic spins within the imaging zone 308 and for receiving radio transmissions from spins also within the imaging zone 308. The radio frequency antenna may contain multiple coil elements. The radio frequency antenna may also be referred to as a channel or antenna. The radio-frequency coil 314 is connected to a radio frequency transceiver 316. The radio-frequency coil 314 and radio frequency transceiver 316 may be replaced by separate transmit and receive coils and a separate transmitter and receiver. It is understood that the radio-frequency coil 314 and the radio frequency transceiver 316 are representative. The radio-frequency coil 314 is intended to also represent a dedicated transmit antenna and a dedicated receive antenna. Likewise, the transceiver 316 may also represent a separate transmitter and receiver. The radio-frequency coil 314 may also have multiple receive/transmit elements and the radio frequency transceiver 316 may have multiple receive/transmit channels. For example, if a parallel imaging technique such as SENSE is performed, the radio-frequency coil 314 will have multiple coil elements.
[0143] The transceiver 316 and the gradient controller 312 are shown as being connected to the hardware interface 106 of the computer system 102.
[0144] The memory 110 is further shown as containing pulse sequence commands 330. The pulse sequence commands 330 are commands or data which may be converted into such commands that are configured for controlling the magnetic resonance imaging system 302 to acquire the medical image data 123 from the region of interest 309. In case of the medical system 100 according to
[0146] The CT system 332 may comprise a rotating gantry 336. The gantry 336 may rotate about an axis of rotation 340. There is a subject 318 shown on a subject support 320. Within the gantry 336 is an X-ray tube 342, e.g., within an X-ray tube high voltage isolation tank. In addition, a voltage stabilizer circuit 338 may be provided with the X-ray tube 342, within an X-ray power supply 334 or external to both. The X-ray power supply 334 supplies the X-ray tube 342 with power.
[0147] The X-ray tube 342 produces X-rays 346 that pass through the subject 318 and are received by a detector 344. Within the area of the box 309 is a region of interest within an imaging zone 308, where CT or computed tomography images 124 of the subject 318 can be made.
[0148] For positron emission tomography imaging (PET) or single photon emission computed tomography imaging (SPECT), a similar system may be used with a detector 344 comprising a gamma camera. In case of positron emission tomography or single photon emission computed tomography, no external radiation sources are required. For example, detector 344 of the CT system 332 may comprise a gamma camera and be used for PET-CT imaging or SPECT-CT imaging, i.e., a combination of PET and CT or a combination of SPECT and CT, respectively.
[0152] The computer 102 is further shown as comprising a computational system 104. The computational system 104 is intended to represent one or more processors or processing cores or other computational systems that are located at one or more locations. The computational system 104 is shown as being connected to an optional hardware interface 106. The optional hardware interface 106 may for example enable the computational system 104 to control other components.
[0153] The computational system 104 is further shown as being connected to an optional user interface 108 which may for example enable an operator to control and operate the medical system 101. The optional user interface 108 may, e.g., comprise an output and/or input device enabling a user to interact with the medical system. The output device may, e.g., comprise a display device configured for displaying medical images and saliency maps. The input device may, e.g., comprise a keyboard and/or a mouse enabling the user to insert control commands for controlling the medical system 101. The optional user interface 108 may, e.g., comprise an eye tracking device, like a camera, configured for tracking positions and/or movements of eyes of a user using the medical system 101. The computational system 104 is further shown as being connected to a memory 110. The memory 110 is intended to represent different types of memory which could be connected to the computational system 104.
[0154] The memory is shown as containing machine-executable instructions 120. The machine-executable instructions 120 enable the computational system 104 to perform tasks such as controlling other components as well as performing various data and image processing tasks. The machine-executable instructions 120 may, e.g., enable the computational system 104 to train the machine learning module 160 and provide the trained machine learning module 122 as a result of the training.
[0155] For the training of the machine learning module 160, training data 162 may be provided. The training data may comprise pairs of training medical images and training saliency maps. The training saliency maps are descriptive of distributions of user attention over the training medical images. The training saliency maps may, e.g., be generated using an eye tracking device, like a camera, configured for tracking positions and/or movements of eyes of a user using the medical system 101. Based on the tracking data provided by the eye tracking device, distributions of user attention over the training medical images may be determined, resulting in the training saliency maps. The machine learning module 160 is trained using the provided training data 162. The machine learning module 160 is trained to output the training saliency maps of the pairs in response to receiving the training medical images of the pairs. Thus, the trained machine learning module 122 may be generated.
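The generation of a training saliency map from eye-tracker data can be sketched as follows. Placing a Gaussian blob at each fixation point and normalizing the result is a common convention in saliency modelling, not a construction mandated by the text; the function name and the blob width `sigma` are assumptions:

```python
import numpy as np

def saliency_from_fixations(shape, fixations, sigma=5.0):
    """Turn eye-tracker fixation points (row, col) into a training
    saliency map: each fixation contributes a Gaussian blob, and the
    sum is normalized into a distribution of user attention over an
    image of the given shape."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    sal = np.zeros(shape, dtype=float)
    for r, c in fixations:
        sal += np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2.0 * sigma ** 2))
    return sal / sal.sum()
```

Pairing each displayed training medical image with the map built from the fixations recorded while it was displayed yields the (image, saliency map) pairs of the training data 162.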
[0157] The medical system 100 shown in
[0158] For the training of the machine learning module 160, training data 162 may be provided. The training data may comprise pairs of training medical images and training saliency maps. The training saliency maps are descriptive of distributions of user attention over the training medical images. The training saliency maps may, e.g., be generated using an eye tracking device, like a camera, configured for tracking positions and/or movements of eyes of a user using the medical system 100. Based on the tracking data provided by the eye tracking device, distributions of user attention over the training medical images may be determined, resulting in the training saliency maps. The machine learning module 160 is trained using the provided training data 162. The machine learning module 160 is trained to output the training saliency maps of the pairs in response to receiving the training medical images of the pairs. Thus, the trained machine learning module 122 may be generated.
[0160] Such a machine learning module 122, e.g., with a U-Net architecture, may be trained to mimic a user's, e.g., a radiologist's, behavior related to a displayed medical image. For example, user attention as it is detected in
[0161] For example, the machine learning module 122 for predicting saliency maps may be trained on the training pairs of medical images 406 and their corresponding saliency maps 408, acquired as shown in
[0166] Thus, the trained machine learning module may be used to select a reconstruction method that better serves the interests of a user. By providing a saliency map that predicts a distribution of user attention, the trained machine learning module provides an approach that takes into account a user-related measure of relevance. In this setting, multiple reconstruction methods with comparable characteristics may be deployed. For example, a specific anatomical structure should be depicted. Comparable characteristics may refer to the fact that all the reconstruction methods are able to reconstruct medical images depicting the respective anatomical structure. The reconstruction method most suitable for the user may be chosen based on the prediction of user attention provided by the saliency map. The saliency map may, e.g., predict a distribution of user attention for an individual user. Thus, for different users different saliency maps may be predicted, depending on the user's experience, preferences and/or way of working. Different radiologists may examine images in different ways. Hence, each of them may benefit from reconstructions which better meet his or her individual needs. Each of the reconstruction methods may be designed to deal with particular characteristics of the input data, i.e., of the acquired medical data used for reconstructing the medical image. For example, in MRI, some reconstruction methods may produce medical images having a high contrast between white and gray matter in the brain, while others may provide better signal-to-noise ratio in regions near the skull. For each reconstruction method a test map may be provided identifying sections of a test image for which a quality of image reconstruction of the respective reconstruction method is the highest. For example, in case of a reconstruction method providing high image quality for white matter in the brain, a test map may be provided highlighting the white matter comprised by the test image. For example, in case of a reconstruction method providing high image quality for grey matter in the brain, a test map may be provided highlighting the grey matter comprised by the test image. For example, in case of a reconstruction method providing high image quality for cerebrospinal fluid in the brain, a test map may be provided highlighting the cerebrospinal fluid comprised by the test image. The test maps may have a saliency map-like appearance. The test maps may be compared with the saliency map provided by the machine learning module, which, e.g., is configured to predict a particular radiologist's distribution of attention over the test image. The most suitable reconstruction method, i.e., the reconstruction method for which the test map displays the highest level of similarity with the saliency map, is chosen. The level of similarity may be determined by estimating a distance between the saliency map obtained for the radiologist and the test maps provided for the different reconstruction methods.
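The selection step described above can be sketched as follows. This is a hypothetical illustration: the disclosure leaves the exact distance measure open, so the pixel-wise L1 distance and the map/method names used here are assumptions made for the example.

```python
# Hypothetical sketch: choose the reconstruction method whose test map is
# closest to the predicted saliency map. The L1 distance is one possible
# similarity measure; the disclosure does not prescribe a specific one.

def l1_distance(map_a, map_b):
    """Sum of absolute pixel-wise differences between two equally sized maps."""
    return sum(
        abs(a - b)
        for row_a, row_b in zip(map_a, map_b)
        for a, b in zip(row_a, row_b)
    )

def select_reconstruction_method(saliency_map, test_maps):
    """Return the method whose test map best matches the saliency map."""
    return min(test_maps, key=lambda name: l1_distance(saliency_map, test_maps[name]))
```

For instance, if the predicted saliency map concentrates attention on white matter, the method whose test map highlights white matter would yield the smallest distance and be selected.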
[0175] While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments.
[0176] Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word comprising does not exclude other elements or steps, and the indefinite article a or an does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.
Reference Signs List
[0177] 100 medical system
[0178] 101 medical system
[0179] 102 computer
[0180] 104 computational system
[0181] 106 optional hardware interface
[0182] 108 optional user interface
[0183] 110 memory
[0184] 120 machine executable instructions
[0185] 121 machine learning module
[0186] 122 trained machine learning module
[0187] 123 medical imaging data
[0188] 124 medical image
[0189] 126 saliency map
[0190] 128 set of test maps
[0191] 130 OOD estimation module
[0192] 132 OOD map
[0193] 134 weighted OOD map
[0194] 136 image quality assessment module
[0195] 138 image quality map
[0196] 140 display device
[0197] 144 eye tracking device
[0198] 150 attention determining module
[0199] 160 machine learning module
[0200] 162 training data
[0201] 302 magnetic resonance imaging system
[0202] 304 magnet
[0203] 306 bore of magnet
[0204] 308 imaging zone
[0205] 309 region of interest
[0206] 310 magnetic field gradient coils
[0207] 312 magnetic field gradient coil power supply
[0208] 314 radio-frequency coil
[0209] 318 transceiver
[0210] 318 subject
[0211] 320 subject support
[0212] 330 pulse sequence commands
[0213] 332 CT system
[0214] 334 X-ray power supply
[0215] 336 gantry
[0216] 338 voltage stabilizer circuit
[0217] 340 axis of rotation
[0218] 342 X-ray tube
[0219] 344 detector
[0220] 346 X-rays
[0221] 350 CT control commands
[0222] 406 training medical image
[0223] 407 training data
[0224] 408 training saliency map
[0225] 422 test map
[0226] 424 test map
[0227] 426 test map
[0228] 500 user
[0229] 502 eyes