ENHANCEMENT OF MEDICAL IMAGES
20220343475 · 2022-10-27
Inventors
- Qiang ZHANG (Oxford, GB)
- Stefan PIECHNIK (Oxford, GB)
- Vanessa FERREIRA (Oxford, GB)
- Evan HANN (Oxford, GB)
- Iulia Andreia POPESCU (Oxford, GB)
CPC classification
A61B5/055
HUMAN NECESSITIES
International classification
A61B5/055
HUMAN NECESSITIES
Abstract
A method and apparatus for enhancing magnetic resonance images to produce contrast-enhanced images without the need to administer a contrast agent to a patient. The image processing apparatus utilises a trained machine learning algorithm, preferably a generative adversarial network, as an image processor to produce images from contrast agent-free magnetic resonance images, the produced images having a similar appearance to, and better image quality and pathological sensitivity than, actually acquired contrast-enhanced images, and being able to differentiate more pathological conditions.
Claims
1. A method of producing a computed contrast-enhanced medical image, the method comprising: receiving an image dataset of a subject comprising a native quantitative mapping image obtained by performing a contrast agent-free magnetic resonance imaging procedure; and inputting the image dataset to an image processor and processing the image dataset with the image processor to produce a computed contrast-enhanced medical image, wherein the image processor comprises a machine learning processor trained on a training dataset comprising sets of images, each set of images comprising a contrast agent-free quantitative mapping image having the same quantitative mapping as the quantitative mapping image of the image dataset and a corresponding acquired contrast-enhanced medical image.
2. A method according to claim 1, wherein the native quantitative mapping image is a T1 mapping image.
3. A method according to claim 1, wherein the native quantitative mapping image is a T2 mapping image or a T2* mapping image.
4. A method according to claim 1, wherein the corresponding acquired contrast-enhanced medical image is a contrast-enhanced magnetic resonance image.
5. A method according to claim 4, wherein the corresponding acquired contrast-enhanced medical image is a contrast-enhanced quantitative mapping image having the same quantitative mapping as the quantitative mapping image of the image dataset.
6. A method according to claim 1, wherein the corresponding acquired contrast-enhanced medical image is a contrast-enhanced image of a non-magnetic resonance modality.
7. A method according to claim 6, wherein the contrast-enhanced image of a non-magnetic resonance modality is one of: a contrast-enhanced computed tomography image, a contrast-enhanced PET image, a contrast-enhanced SPECT image, or an ultrasound image.
8. A method according to claim 1, wherein the image dataset of the subject comprises the native quantitative mapping image only.
9. A method according to claim 1, wherein the image dataset of the subject further comprises at least one further magnetic resonance image obtained by a contrast agent-free magnetic resonance modality other than the quantitative mapping of the quantitative mapping image of the image dataset, and the sets of images of the training dataset further comprise at least one further magnetic resonance image obtained by the other contrast agent-free magnetic resonance modality.
10. A method according to claim 9, wherein the at least one further magnetic resonance image comprises at least one of: a raw magnetic resonance image, an image that is a fusion of raw magnetic resonance images, or an image that is a derivation of raw magnetic resonance images.
11. A method according to claim 9, wherein the at least one further magnetic resonance image comprises at least one of: a T1 mapping image, a T1-weighted image, a T2-weighted image, a T2*-weighted image, a T2 mapping image, a T2* mapping image, or a cine CMR image.
12. A method according to claim 9, wherein the at least one further magnetic resonance image comprises at least one of: a STIR image, a tagged-CMR image, a strain-encoded image, a diffusion-weighted image, a diffusion tensor image, an arterial spin labelling image, a PD weighted image, or a fat-water separated image.
13. A method according to claim 1, wherein the image dataset of the subject further comprises at least one non-magnetic resonance image, and the sets of images of the training dataset further comprise at least one non-magnetic resonance image of the same type as the at least one non-magnetic resonance image of the image dataset.
14. A method according to claim 13, wherein the at least one non-magnetic resonance image comprises at least one of: an echocardiogram, a nuclear perfusion image, a CT image, an electrophysiological cardiac map image, or a chest X-ray.
15. A method according to claim 1, wherein the image dataset comprises further data that is not image data, and the training dataset comprises further training data associated with each set of images of the same type as the further data that is not image data.
16. A method according to claim 15, wherein the further data comprises at least one of: imaging metadata, image acquisition parameters, or a non-imaging diagnostic test result.
17. A method according to claim 16, wherein the non-imaging diagnostic test result is at least one of: a MR spectroscopy result, a blood test result, an electrocardiogram, the subject's clinical characteristics, or the subject's reason for referral.
18. A method according to claim 1, further comprising the step of inputting to the image processor at least one CE image already available from a previous visit or study of the same subject.
19. A method according to claim 1, wherein the images are cardiac images.
20. A method according to claim 1, wherein the image processor is one of: a trained variational autoencoder, a trained Fully Convolutional Neural Network, a trained U-Net, a trained V-Net, or a trained Generative Adversarial Network that is optionally a trained conditional Generative Adversarial Network.
21. A method according to claim 1, wherein the image processor comprises one or multiple convolutional streams to take as input one or more image modalities, and/or streams to take as input related imaging metadata or image acquisition parameters or non-image diagnostic information.
22. A method according to claim 1, wherein the image dataset is processed with the image processor to produce plural computed contrast-enhanced medical images having different disease sensitivity, and the method further comprises combining the plural computed contrast-enhanced medical images to produce a combined, computed contrast-enhanced medical image.
23. A method according to claim 1, wherein the machine learning processor has been trained by processing the contrast agent-free quantitative mapping images of the training dataset to produce computed contrast-enhanced medical images.
24. A method according to claim 1, wherein the trained machine learning algorithm has been trained to minimize the differences between the computed contrast-enhanced medical images produced by the image processor and the corresponding acquired contrast-enhanced images.
25. A method according to claim 1, further comprising a step of training the machine learning processor.
26. A method according to claim 1, further comprising a step of performing a contrast agent-free magnetic resonance imaging procedure on a subject to obtain the image dataset comprising the native quantitative mapping image.
27. (canceled)
28. A computer-readable storage medium storing a computer program capable of execution by a computer apparatus and configured, on execution, to cause the computer apparatus to perform a method according to claim 1.
29. An image processor adapted to produce a computed contrast-enhanced medical image, the image processor comprising: an input for receiving an image dataset of a subject comprising a quantitative mapping image obtained by performing a contrast agent-free magnetic resonance imaging procedure; and a data processor for processing the image dataset to produce a computed contrast-enhanced medical image, wherein the data processor comprises a machine learning processor trained on a training dataset comprising sets of images, each set of images comprising a contrast agent-free native quantitative mapping image having the same quantitative mapping as the quantitative mapping image of the image dataset and a corresponding acquired contrast-enhanced medical image.
30. An image processor according to claim 29, wherein the native quantitative mapping image is a T1 mapping image.
31. An image processor according to claim 29, wherein the native quantitative mapping image is a T2 mapping image or a T2* mapping image.
32. An image processor according to claim 29, wherein the trained machine learning algorithm is one of: a trained Generative Adversarial Network that is optionally a trained conditional Generative Adversarial Network, a trained variational autoencoder, a trained Fully Convolutional Neural Network, a trained U-Net, a trained V-Net.
33. An image processor according to claim 29, wherein the acquired contrast-enhanced medical image is a contrast-enhanced magnetic resonance image.
34. An image processor according to claim 29, wherein the acquired contrast-enhanced medical image is a quantitative mapping image having the same quantitative mapping as the quantitative mapping image of the image dataset.
35. An image processor according to claim 29, wherein the acquired contrast-enhanced medical image is a contrast-enhanced image of a non-magnetic resonance modality.
36. An image processor according to claim 29, wherein the images are cardiac images.
37. An image processor according to claim 29, wherein the image dataset of the subject comprises the native quantitative mapping image only.
38. An image processor according to claim 29, wherein the image dataset of the subject further comprises at least one further magnetic resonance image obtained by a contrast agent-free magnetic resonance modality other than the quantitative mapping of the quantitative mapping image of the image dataset, and the sets of images of the training dataset further comprise at least one further magnetic resonance image obtained by the other contrast agent-free magnetic resonance modality.
39. An image processor according to claim 38, wherein the at least one further magnetic resonance image comprises at least one of: a T1 mapping image, a T1-weighted image, a T2-weighted image, a T2*-weighted image, a T2 mapping image, a T2* mapping image, or a cine CMR image.
40. An image processor according to claim 38, wherein the at least one further magnetic resonance image comprises at least one of: a STIR image, a tagged-CMR image, a strain-encoded image, a diffusion-weighted image, a diffusion tensor image, an arterial spin labelling image, a PD weighted image, or a fat-water separated image.
41. An image processor according to claim 29, wherein the image dataset of the subject further comprises at least one non-magnetic resonance image, and the sets of images of the training dataset further comprise at least one non-magnetic resonance image of the same type as the at least one non-magnetic resonance image of the image dataset.
42. An image processor according to claim 41, wherein the at least one non-magnetic resonance image comprises at least one of: an echocardiogram, a nuclear perfusion image, a CT image, an electrophysiological cardiac map, or a chest X-ray.
43. An image processor according to claim 29, wherein the image dataset comprises further data that is not image data, and the training dataset comprises further training data associated with each set of images of the same type as the further data that is not image data.
44. An image processor according to claim 43, wherein the further data comprises at least one of: imaging metadata, image acquisition parameters, or a non-imaging diagnostic test result.
45. An image processor according to claim 44, wherein the non-imaging diagnostic test result is at least one of: a MR spectroscopy result, a blood test result, an electrocardiogram, the subject's clinical characteristics, or the subject's reason for referral.
46. A method of training the data processor of claim 29, the method comprising the steps of: a) receiving a training dataset comprising corresponding sets of images comprising a contrast agent-free native quantitative mapping image of a subject, a corresponding acquired contrast-enhanced image of the subject, and optionally at least one further image of the subject and/or further data that is not an image; b) inputting to the data processor the contrast agent-free native quantitative mapping image, of each set, and, if present, the at least one further image of the subject and/or the further data of each set, processing it with the data processor using a generative image processing function to produce a computed contrast-enhanced medical image, and comparing the computed contrast-enhanced medical image to the corresponding acquired contrast-enhanced image from the set; c) altering the processing performed by the data processor to reduce the differences between the computed contrast-enhanced medical image and the corresponding acquired contrast-enhanced image from the set; and d) repeating steps b) and c) until the differences between the computed contrast-enhanced medical images and the corresponding acquired contrast-enhanced images are below a predetermined threshold.
47. A method according to claim 46, further comprising operating a discriminator to distinguish between computed contrast-enhanced medical images produced by the generative image processing function and the corresponding acquired contrast-enhanced images by classifying each image as either a computed contrast-enhanced medical image produced by the generative image processing function or an acquired contrast-enhanced image, and to output a classification confidence value, wherein the processing performed by the generative image processor is altered in the step c) to train the generative image processor to reduce the differences between the computed contrast-enhanced medical images and the corresponding acquired contrast-enhanced images, and reduce classification confidence of the discriminator, the discriminator being simultaneously trained to increase its own classification confidence.
48. A method according to claim 46, wherein the native quantitative mapping image is a T1 mapping image.
49. A method according to claim 46, wherein the native quantitative mapping image is a T2 mapping image or a T2* mapping image.
Description
[0050] The invention will be further described by way of examples with reference to the accompanying drawings.
[0076] In this text, reference is made to magnetic resonance imaging or scan “procedures”. A procedure is conducted according to a protocol and may include one or several scan sequences, each of which produces differently weighted raw image(s) and their fusions and derivations such as quantitative maps, and may also include steps other than MR imaging, such as blood tests and contrast agent administration. The protocol may also set out aspects relevant to the subject, such as rest times and breath-hold requirements.
[0078] The apparatus 1 comprises an input 3 which receives an image dataset 100, which in this example comprises only a T1 mapping image 101 (also referred to as a “T1 map image” or “T1 map”) that has been previously obtained by performing a contrast agent-free magnetic resonance imaging procedure, for example a MOLLI or ShMOLLI T1 mapping image in gray scale or colour map. The T1 mapping image may be supplied as any number of raw input T1-weighted images and phase maps, with any associated metadata (e.g. inversion times), obtained during the T1 mapping experiment. T1 data may be replaced by T2 data or any other MR image dataset adequate to predict the target type of CE image.
[0079] In these examples, the T1 mapping image 101 (and also the T1 mapping image 21 used in training and described below) is an example of an image having a quantitative mapping. Any T1 mapping may be used, including a T1-rho mapping or a stress T1 mapping. More generally, the T1 mapping image 101 (and also the T1 mapping image 21 used in training and described below) may be replaced by an image having any other quantitative mapping, for example a T2 mapping image or a T2* mapping image. Such quantitative mapping is provided by acquisition and analysis of variably contrasted MR datasets. The resultant image provides quantitative mapping of the specific underlying magnetic properties of the substances in the image, such as T1, T1-rho, T2, T2*, etc.
[0080] The MR image dataset 100 is passed to an image processor 5, which is a data processor that uses a trained machine-learning (or artificial intelligence) process. The image processor 5 processes the input image dataset 100 to produce a computed contrast-enhanced image 6 which has a similar appearance and pathological sensitivity to an acquired contrast-enhanced image that would have been obtained had the subject been scanned after administration of a contrast agent. Herein, this is termed a computed CE medical image (c-CE image) 6. The c-CE image may be displayed on a display 7 or otherwise output.
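Purely as an illustrative sketch, and not part of the original disclosure, the inference step just described can be expressed along the following lines in Python/PyTorch; the stand-in generator, the image size and the simple normalisation are hypothetical assumptions rather than the trained image processor itself.

```python
import torch
import torch.nn as nn

# Stand-in for the trained image processor 5; in practice this would be the trained
# network (e.g. a U-Net or conditional GAN generator). Identity is used only so the
# sketch runs end to end.
generator = nn.Identity()
generator.eval()

# Hypothetical native T1 map 101 (values in ms), with batch and channel dimensions added.
t1_map = torch.rand(1, 1, 192, 192) * 2000.0

x = (t1_map - t1_map.mean()) / (t1_map.std() + 1e-6)  # simple intensity normalisation (assumed)
with torch.no_grad():
    c_ce_image = generator(x)                          # computed CE medical image (c-CE image 6)
print(c_ce_image.shape)                                # torch.Size([1, 1, 192, 192])
```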
[0081] In the embodiment of
[0082] The contrast agent-free T1 mapping image of the training dataset is obtained by performing the same contrast agent-free magnetic resonance imaging procedure as the T1 mapping image 101 of the image dataset 100, for example a MOLLI or ShMOLLI acquisition.
[0083] The corresponding acquired contrast-enhanced medical image of the training dataset is the target or “ground truth” image for the machine learning. Thus, the computed c-CE image 6 is effectively of the same type as the acquired CE medical image.
[0085] The further data of the image dataset 100 may include at least one further magnetic resonance image 102. The further magnetic resonance images 102 are obtained by a contrast agent-free magnetic resonance modality other than the quantitative mapping acquisition of the T1 mapping image 101. Some examples are T2*-weighted images, cine MR images, STIR, tagged-CMR, strain images, diffusion images, PD-weighted images, DTI, ASL, etc. Other examples are given above. The further magnetic resonance images 102 may be obtained by a single contrast agent-free magnetic resonance modality or by plural different contrast agent-free magnetic resonance modalities.
[0086] The image dataset 100 is received at the input 3 and passed to an image processor 5 which operates as described above to produce a c-CE medical image 6 that is displayed on display 7 or otherwise output. In this case, the image processor 5 has been trained on a training dataset as described above, except that the sets of images of the training dataset further comprise at least one further magnetic resonance image obtained by the other contrast agent-free magnetic resonance modality, corresponding to that of the further magnetic resonance images 102 of the image dataset 100.
[0087] The further data of the image dataset 100 may include at least one non-magnetic resonance image 103. The non-magnetic resonance images 103 are obtained by a contrast agent-free non-magnetic resonance modality. Some examples are given above. The non-magnetic resonance images 103 may be obtained by a single modality or by plural different modalities.
[0088] The image dataset 100 is received at the input 3 and passed to an image processor 5 which operates as described above to produce a c-CE medical image that is displayed on display 7 or otherwise output. In this case, the image processor 5 has been trained on a training dataset as described above, except that the sets of images of the training dataset further comprise at least one non-magnetic resonance image obtained by the contrast agent-free non-magnetic resonance modality, corresponding to that of the non-magnetic resonance images 103 of the image dataset 100. The further data of the image dataset 100 may include further data 104 that is not image data. The further data 104 may be data related to the subject and/or data relating to the input images. The further data 104 may be imaging metadata, such as image acquisition parameters, or at least one non-imaging diagnostic test result for the subject such as blood test results, electrocardiograms, clinical characteristics (e.g. medical conditions, medications, symptoms, risk factors, history, physical examination findings), reasons for referral of the subject for imaging, and other imaging diagnostic tests (echocardiogram, nuclear perfusion imaging, CT scans, electrophysiology cardiac mapping, chest x-rays, etc.). Other examples are given above.
[0089] Three different types of further data are illustrated, namely further magnetic resonance images 102, non-magnetic resonance images 103 and further data 104. However, it is not essential to use all these types of further data, and more generally any one or any combination of the different types of further data may be used. In that case, the training dataset includes a corresponding combination of data.
[0090] The image dataset 100 is received at the input 3 and passed to an image processor 5 which operates as described above to produce a c-CE medical image that is displayed on display 7 or otherwise output. In this case, the image processor 5 has been trained on a training dataset as described above, except that the training dataset further comprises further data, corresponding to that of the further data 104 of the image dataset 100.
[0091] The alternative types of further data described with reference to
[0092] Thus, the image processor 5 is a trained machine learning process which is trained on a training dataset whose content corresponds to the streams of data of the image dataset 100 input to the apparatus 1, along with the corresponding acquired contrast-enhanced medical images obtained by use of contrast agent.
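As a hedged illustration of how such multi-stream input handling might look (compare claim 21), the following Python/PyTorch sketch fuses two hypothetical image streams with a non-image data stream; the stream shapes, channel counts and fusion by 1x1 convolution are assumptions made for the example, not the disclosed architecture.

```python
import torch
import torch.nn as nn

class MultiStreamEncoder(nn.Module):
    """Sketch of separate input streams: one convolutional stream per image modality plus
    a fully connected stream for non-image data, fused into a shared feature map."""
    def __init__(self, n_image_streams: int = 2, n_scalar_features: int = 8, ch: int = 16):
        super().__init__()
        self.image_streams = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True))
            for _ in range(n_image_streams)
        )
        self.scalar_stream = nn.Sequential(nn.Linear(n_scalar_features, ch), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(ch * n_image_streams + ch, ch, 1)   # 1x1 fusion of all streams

    def forward(self, images, scalars):
        feats = [stream(img) for stream, img in zip(self.image_streams, images)]
        s = self.scalar_stream(scalars)                           # (B, ch)
        s = s[:, :, None, None].expand(-1, -1, *feats[0].shape[-2:])  # broadcast over space
        return self.fuse(torch.cat(feats + [s], dim=1))

enc = MultiStreamEncoder()
t1_map = torch.rand(1, 1, 96, 96)         # native T1 map stream
further_mr = torch.rand(1, 1, 96, 96)     # hypothetical further MR image stream (e.g. cine frame)
meta = torch.rand(1, 8)                   # hypothetical non-image data (e.g. acquisition parameters)
print(enc([t1_map, further_mr], meta).shape)  # torch.Size([1, 16, 96, 96])
```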
[0093] The acquired CE medical images may be MR images. In this case, the acquired CE medical images may be MR images that are not quantitative mapping images. Alternatively, the acquired CE medical images may be MR images that are quantitative mapping images, optionally having the same quantitative mapping as the T1 map images 101 of the image dataset 100, or having a different quantitative mapping. In this case, they are obtained by performing a contrast-enhanced magnetic resonance imaging procedure.
[0094] Alternatively, the acquired contrast-enhanced medical images of the training dataset may be contrast-enhanced medical images of a non-magnetic resonance modality, for example contrast-enhanced CT images, PET images, SPECT images, or ultrasound images based on administration of contrast agents (injection, ingestion, inhalation, etc.), or obtained by introducing physiological stress to produce additional enhancement dependent on pathophysiological tissue properties. In this case, they are obtained by performing a contrast-enhanced procedure of the relevant non-magnetic resonance modality.
[0095] Examples of suitable machine learning algorithms that may be employed in the image processor 5 are: fully convolutional neural networks, variational autoencoders, U-nets, dense U-nets, V-nets, and generative adversarial networks (GANs), including variants such as conditional GANs, CycleGANs, and cascaded refinement networks.
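For concreteness only, the following is a minimal U-Net-style generator sketch in Python/PyTorch of the kind that could play the role of the image processor 5; the layer sizes and the single skip connection are illustrative assumptions, not the disclosed network.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal U-Net-style generator mapping a 1-channel native map to a 1-channel c-CE image."""
    def __init__(self, ch: int = 16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1)        # downsample by 2
        self.enc2 = nn.Sequential(nn.Conv2d(ch * 2, ch * 2, 3, padding=1), nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose2d(ch * 2, ch, 2, stride=2)            # upsample by 2
        self.dec = nn.Sequential(nn.Conv2d(ch * 2, ch, 3, padding=1), nn.ReLU(inplace=True),
                                 nn.Conv2d(ch, 1, 1))

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        u = self.up(e2)
        return self.dec(torch.cat([u, e1], dim=1))                       # skip connection

net = TinyUNet()
out = net(torch.rand(1, 1, 192, 192))
print(out.shape)   # torch.Size([1, 1, 192, 192])
```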
[0096] The image processor 5 may be trained prior to the performance of the methods performed by the apparatus 1 shown in
[0099] As mentioned above, each set of images of the training dataset 20 comprises a contrast agent-free T1 mapping image 21 having the same quantitative mapping as the T1 mapping image 101 of the image dataset 100 and an acquired contrast-enhanced medical image 26. The contrast agent-free mapping image 21 and contrast-enhanced medical image 26 in each set correspond to each other in that they are obtained from the same subject or patient, for example being images obtained before contrast agent is administered to the patient and after administration of the contrast agent to the patient.
[0100] The acquired CE medical image and the contrast agent-free T1 mapping image of the respective sets of images 25 of the training dataset 20 may be obtained in the same MR scan procedure with the images acquired in the same position relative to the subject so that the images contain matching pathology configurations. However, they may alternatively be obtained from different procedures at different times. They may, for example, be obtained using different scanners (where one procedure requires a different protocol or scanner configuration from another for example). As mentioned above, although the acquired CE medical image may be a T1 mapping image, it may alternatively be an MR image that is of a different modality from a T1 mapping (including but not restricted to a quantitative mapping image), or may be a contrast-enhanced medical image produced by a non-magnetic resonance modality. They can be from different procedures of: the same or different imaging techniques; the same or different patient position, e.g., combining short-axis and long-axis CMR images to predict short-axis LGE, with the image position and orientation information provided; and/or the same or different times, e.g., procedures from previous visits of the same patient.
[0101] In general, the sets of images 25 of the training dataset 20 (and in the circumstance of using the trained processor, the input data) are data registered with the same patient.
[0102] Plural sets of images 25 are used. In principle, any number of sets of images 25 may be used, although the training is improved by increasing the number of sets of images 25 and the variation in the sets of images 25. Preferably the training dataset comprises pairs of images 25 from many different patients with different clinical conditions, and a suitable training set may be obtained from, for example, the Hypertrophic Cardiomyopathy Registry, which has over 4000 suitable pairs of quality-controlled pre-contrast CMR and corresponding LGE (Late Gadolinium Enhancement) images.
[0103] In the case of using the apparatus of
[0104] In the case of using the apparatus of
[0105] Where the image dataset 100 includes at least one further MR image 102, the sets of images 25 of the training dataset 20 additionally comprise at least one further magnetic resonance image 22 obtained by the other contrast agent-free magnetic resonance modality, corresponding to that of the further magnetic resonance images 102 of the image dataset 100.
[0106] Where the image dataset 100 includes at least one non-MR image 103, the sets of images 25 of the training dataset 20 additionally comprise at least one non-magnetic resonance image 23 obtained by the contrast agent-free non-magnetic resonance modality, corresponding to that of the non-magnetic resonance images 103 of the image dataset 100.
[0107] Where the image dataset 100 includes further data 104, the sets of images 25 of the training dataset 20 additionally comprise further data 24, corresponding to that of the further data 104 of the image dataset 100.
[0108] The training is performed using techniques which are known in the art of machine learning, as follows. The training dataset 20 is received at input 30 into the training processor 35, which includes a machine learning algorithm 32 that processes the contrast agent-free T1 mapping image 21 of each set of images 25 (and, if used, also the at least one further magnetic resonance image 22, the at least one non-magnetic resonance image 23 and/or the further data 24) to produce a computed CE image 31 of the same patient.
[0109] The computed CE images 31 are compared to the corresponding actual CE images 26 in the training dataset 20 and a cost function calculator 33 calculates a measure of their difference. The machine learning algorithm 32 is then repeatedly modified to reduce the difference between the computed CE images 31 and the actual CE images 26 through, for example, backpropagation. The data of the training dataset 20 may be augmented by rotation, translation, reflections, scaling, distortions, adding noise, etc., to improve the robustness of the learning process.
[0110] Once the computed CE images 31 are judged as sufficiently close to the actual CE images 26, the machine learning algorithm 32 is regarded as trained and it can then be used to process new contrast agent-free MR images in an apparatus 1 to produce computed CE medical images 6.
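A minimal sketch of this supervised training loop, assuming an L1 image difference as the cost function and simple flip-and-noise augmentation (both assumptions, not prescribed by the text), might read:

```python
import torch
import torch.nn as nn

# Illustrative training loop; the model, data and hyperparameters are hypothetical stand-ins.
model = nn.Conv2d(1, 1, 3, padding=1)              # stand-in for the generator being trained
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()                              # one possible measure of image difference

def augment(native_map, acquired_ce):
    """Simple augmentation: a random flip applied to both images, plus noise on the input."""
    if torch.rand(()) < 0.5:
        native_map = torch.flip(native_map, dims=[-1])
        acquired_ce = torch.flip(acquired_ce, dims=[-1])
    return native_map + 0.01 * torch.randn_like(native_map), acquired_ce

# Toy stand-in for the training dataset 20: pairs of (native T1 map 21, acquired CE image 26).
pairs = [(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)) for _ in range(4)]

for epoch in range(5):
    for native_map, acquired_ce in pairs:
        native_map, acquired_ce = augment(native_map, acquired_ce)
        computed_ce = model(native_map)            # computed CE image 31
        loss = loss_fn(computed_ce, acquired_ce)   # cost function calculator 33
        optimiser.zero_grad()
        loss.backward()                            # repeatedly modify the algorithm via backpropagation
        optimiser.step()
```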
[0112] In this case, the machine learning algorithm 32 comprises the following functional blocks that perform the training method. Specifically, the machine learning algorithm 32 comprises a generator 51 and a discriminator 52.
[0114] A difference block 53 derives the difference between the computed CE images 31 and the acquired CE images 26 which is fed to a generator loss function block 54, together with a classification loss value 55 of the discriminator 52. The generator loss function block 54 trains the generator 51 by using back-propagation of the generation loss to repeatedly modify its processing to minimise the difference between the computed CE medical images 31 and the corresponding acquired CE medical images 26, and to increase the classification loss value 55 of the discriminator 52.
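The adversarial training described above might be sketched as follows, assuming an L1 image-difference term, a binary cross-entropy classification loss and an arbitrary weighting between them; a full conditional GAN would additionally condition the discriminator on the input map, which is omitted here for brevity, and all names are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

generator = nn.Conv2d(1, 1, 3, padding=1)          # stand-in for generator 51
discriminator = nn.Sequential(                     # stand-in for discriminator 52
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()                       # classification loss
l1 = nn.L1Loss()                                   # image difference (difference block 53)

native_map = torch.rand(1, 1, 64, 64)              # contrast agent-free T1 map 21
acquired_ce = torch.rand(1, 1, 64, 64)             # acquired CE image 26

# Discriminator step: increase its own classification confidence.
computed_ce = generator(native_map).detach()
d_loss = (bce(discriminator(acquired_ce), torch.ones(1, 1)) +
          bce(discriminator(computed_ce), torch.zeros(1, 1)))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: minimise the image difference while trying to fool the discriminator.
computed_ce = generator(native_map)
g_loss = l1(computed_ce, acquired_ce) + 0.01 * bce(discriminator(computed_ce), torch.ones(1, 1))
g_opt.zero_grad()
g_loss.backward()                                  # back-propagation of the generator loss
g_opt.step()
```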
[0115] Once the computed CE medical images 31 are close enough to the actual CE images in the training dataset, the generator 51 may be used to process new contrast agent-free MR images in an embodiment of the apparatus as illustrated in
[0117] As explained above with reference to
[0123] Such c-CE medical images 6-1, 6-2 etc. may be combined to produce a combined contrast-enhanced medical image for further enhancement.
[0124] The c-CE images 6-1, 6-2 are supplied to an ROI generator 61 which generates the oedema region of interest (ROI) 62 as the myocardial region that has higher signal intensity in the first c-CE image 6-1 (sensitive to oedema) than in the second c-CE image 6-2 (not sensitive to oedema). The oedema ROI 62 is colour-encoded. The oedema ROI 62 and one or both of the c-CE images 6-1, 6-2 (the second c-CE image 6-2 being illustrated as an example in
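A minimal sketch of the comparison performed by the ROI generator 61, assuming a pre-computed myocardial segmentation mask (a hypothetical input not described in detail here), might be:

```python
import numpy as np

def oedema_roi(c_ce_1: np.ndarray, c_ce_2: np.ndarray, myocardium_mask: np.ndarray) -> np.ndarray:
    """ROI 62: myocardial pixels brighter in the oedema-sensitive image than in the
    oedema-insensitive image."""
    return myocardium_mask & (c_ce_1 > c_ce_2)

c_ce_1 = np.random.rand(192, 192)                  # c-CE image 6-1 (sensitive to oedema)
c_ce_2 = np.random.rand(192, 192)                  # c-CE image 6-2 (not sensitive to oedema)
myocardium_mask = np.zeros((192, 192), dtype=bool)
myocardium_mask[60:130, 60:130] = True             # hypothetical myocardial segmentation

roi = oedema_roi(c_ce_1, c_ce_2, myocardium_mask)  # boolean ROI, ready for colour-encoding
```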
[0125] Thus, transforming quantitative T1 mapping into an LGE-like image provides a standardised presentation and allows direct combination of multiple modalities and trained c-LGE images 6-1, 6-2 etc. to further derive a comprehensive combined c-LGE image 6-C. The combined c-LGE image 6-C, requiring no contrast agent administration, can differentiate more disease conditions than conventional CE images requiring contrast agent administration. By way of example,
[0127] As illustrated in
[0129] An embodiment of the invention has been tested by applying it to the CE-free MR T1 mapping images from steps 4 of procedures such as those shown in
[0135] In contrast-enhanced imaging, LGE images are acquired with inversion recovery (IR) or phase-sensitive IR (PSIR) techniques to reveal abnormal myocardial enhancement. A proper inversion time (TI) must be selected to ‘null’ normal myocardial tissue, rendering it dark in LGE images. If the scan operators make a mistake by selecting an incorrect TI, the LGE images will have poor quality or impaired diagnostic value. The optimal TI changes as GBCA washes out of the tissue, which also means poorer reproducibility of LGE images.
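As a brief illustrative aside (standard inversion-recovery physics rather than material from this text): for an idealised inversion recovery with full longitudinal recovery between inversions, the tissue signal is nulled at TI = T1 ln 2, so the nulling TI tracks the tissue T1 and therefore drifts as the contrast agent washes out; the example T1 values below are purely illustrative.

```python
import math

# Idealised inversion-recovery nulling point: Mz(TI) = M0 * (1 - 2*exp(-TI/T1)) = 0
# gives TI_null = T1 * ln(2). Example T1 values are illustrative only.
def null_ti(t1_ms: float) -> float:
    return t1_ms * math.log(2)

print(round(null_ti(400.0)))    # ~277 ms for a hypothetical post-contrast myocardial T1
print(round(null_ti(1200.0)))   # ~832 ms for a hypothetical native myocardial T1
```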
[0136] Because the computed CE procedure is much shorter than an actual CE scanning procedure, the technique is potentially more robust to artefacts caused by, for example, patient fatigue or intolerance of staying inside the scanner.
[0138] Other examples of computed CE images demonstrating better image quality and less noise than an actual CE image are illustrated in
[0140] Contrast-agent-free MRI modalities such as T1 mapping carry rich information and have better sensitivity to certain pathologies than LGE images, for example diffuse changes and oedema.
[0145] The examples above demonstrate the efficacy of the methods when applied to the example of T1 mapping applied to the native T1 mapping image 101. The benefits are understood to arise from the use of the quantitative mapping, so similar efficacy is anticipated for other variants of T1 mapping, including for example a T1-rho mapping or a stress T1 mapping, and indeed for contrast-agent-free quantitative mappings other than T1 mapping, for example T2 mapping or T2* mapping.
[0146] The apparatus 1 shown in the drawings may be implemented by a computer apparatus executing a computer program.
[0147] The computer apparatus, where used, may be any type of computer system but is typically of conventional construction. The computer program may be written in any suitable programming language. The computer program may be stored on a computer-readable storage medium, which may be of any type, for example: a recording medium which is insertable into a drive of the computing system and which may store information magnetically, optically or opto-magnetically; a fixed recording medium of the computer system such as a hard drive; or a computer memory.
[0148] The MR image dataset 100 may be obtained as part of the method. The MR image dataset 100 is obtained by performing a contrast agent-free magnetic resonance imaging procedure to provide the T1 mapping image and, where other images are used, by performing suitable contrast agent-free imaging procedures to provide the other images.