DETECTION OF ARTIFACTS IN SYNTHETIC IMAGES
20250045926 · 2025-02-06
CPC classification
A61B5/055
HUMAN NECESSITIES
G06V10/774
PHYSICS
International classification
G06V10/774
PHYSICS
A61B5/055
HUMAN NECESSITIES
Abstract
The present disclosure relates to the technical field of generation of synthetic images, in particular synthetic medical images. The subjects of the present disclosure are a method, a computer system and a computer-readable storage medium comprising a computer program for detecting artifacts in synthetic images, in particular synthetic medical images.
Claims
1. A computer-implemented method comprising: providing a trained machine-learning model (MLM.sup.t); wherein the trained machine-learning model (MLM.sup.t) has been trained on the basis of training data (TD), wherein the training data (TD) comprise for each reference object of a plurality of reference objects (i) at least one input reference image (RI.sub.1(x.sub.i)) of a reference region of the reference object in a first state and (ii) a target reference image (RI.sub.2(y.sub.i)) of the reference region of the reference object in a second state, wherein the at least one input reference image (RI.sub.1(x.sub.i)) and the target reference image (RI.sub.2(y.sub.i)) each comprise a plurality of image elements, wherein the at least one input reference image (RI.sub.1(x.sub.i)) comprises at least one computed tomography or magnetic resonance image of the reference region of the reference object, wherein the target reference image (RI.sub.2(y.sub.i)) is a computed tomography or magnetic resonance image of the reference region of the reference object, wherein the machine-learning model (MLM.sup.t) is configured and has been trained to generate for each reference object on the basis of the at least one input reference image (RI.sub.1(x.sub.i)) a synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)), wherein the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) comprises a plurality of image elements, wherein each image element of the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) respectively corresponds to an image element of the target reference image (RI.sub.2(y.sub.i)), wherein the machine-learning model (MLM.sup.t) has been trained to predict for each image element of the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) a color value ({circumflex over (y)}.sub.i) and an uncertainty value ({circumflex over (σ)}(x.sub.i)) for the predicted color value ({circumflex over (y)}.sub.i), and wherein the training comprises minimization of a loss function (L), wherein the loss function (L) comprises (i) the predicted color value ({circumflex over (y)}.sub.i) or a deviation of the predicted color value ({circumflex over (y)}.sub.i) from a color value (y.sub.i) of the corresponding image element of the target reference image (RI.sub.2(y.sub.i)) and (ii) the predicted uncertainty value ({circumflex over (σ)}(x.sub.i)) as parameters; receiving at least one input image (I.sub.1(x.sub.i)) of an examination region of an examination object, wherein the at least one input image (I.sub.1(x.sub.i)) represents the examination region of the examination object in the first state, wherein the at least one input image (I.sub.1(x.sub.i)) comprises at least one computed tomography or magnetic resonance image of the examination region of the examination object; feeding the at least one input image (I.sub.1(x.sub.i)) to the trained machine-learning model (MLM.sup.t); receiving a synthetic image (I.sub.2*({circumflex over (y)}.sub.i)) from the trained machine-learning model, wherein the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)) represents the examination region of the examination object in the second state; receiving an uncertainty value ({circumflex over (σ)}(x.sub.i)) for each image element of the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)); determining at least one confidence value on the basis of the received uncertainty values; and outputting the at least one confidence value.
2. The method according to claim 1, wherein the first state is a state before or after administration of a contrast agent and the second state is a state after administration of the contrast agent.
3. The method according to claim 1, wherein the first state and the second state each indicate an amount of a contrast agent which has been administered to the reference object and the examination object.
4. The method according to claim 1, wherein the at least one input image (I.sub.1(x.sub.i)) comprises a first computed tomography or magnetic resonance image and a second computed tomography or magnetic resonance image, wherein the first computed tomography or magnetic resonance image represents the examination region of the examination object without a contrast agent or after administration of a first amount of the contrast agent and the second computed tomography or magnetic resonance image represents the examination region of the examination object after administration of a second amount of the contrast agent, and wherein the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)) is a synthetic computed tomography or magnetic resonance image, wherein the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)) represents the examination region of the examination object after administration of a third amount of the contrast agent, wherein the second amount is different from the first amount and the third amount is different from the first amount and the second amount.
5. The method according to claim 1, wherein the at least one input image (I.sub.1(x.sub.i)) comprises a first computed tomography or magnetic resonance image and a second computed tomography or magnetic resonance image, wherein the first computed tomography or magnetic resonance image represents the examination region of the examination object in a first period of time before or after administration of a contrast agent and the second computed tomography or magnetic resonance image represents the examination region of the examination object in a second period of time after administration of the contrast agent, and wherein the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)) is a synthetic computed tomography or magnetic resonance image, wherein the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)) represents the examination region of the examination object in a third period of time after administration of the contrast agent, wherein the second period of time follows the first period of time, and the third period of time follows the second period of time.
6. The method according to claim 1, wherein the uncertainty value ({circumflex over (σ)}(x.sub.i)) of the predicted color value ({circumflex over (y)}.sub.i) of each image element of the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)) or a value derived therefrom is set as the confidence value of the image element.
7. The method according to claim 1, further comprising: generating a confidence representation, wherein the confidence representation comprises a plurality of image elements, wherein each image element of the plurality of image elements represents a sub-region of the examination region, wherein each image element of the plurality of image elements respectively corresponds to an image element of the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)), wherein each image element of the plurality of image elements has a color value, wherein the color value correlates with the respective uncertainty value ({circumflex over (σ)}(x.sub.i)) of the predicted color value ({circumflex over (y)}.sub.i) of the corresponding image element of the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)); and outputting the confidence representation superimposed on the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)).
8. The method according to claim 1, wherein the at least one confidence value is a confidence value for the entire synthetic image (I.sub.2*({circumflex over (y)}.sub.i)), wherein the confidence value is a mean or a maximum value or a minimum value that is formed on the basis of all uncertainty values ({circumflex over (σ)}(x.sub.i)) of all image elements of the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)).
9. The method according to claim 1, further comprising: determining a confidence value for one or more sub-regions of the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)); and outputting the confidence value for the one or more sub-regions of the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)).
10. The method according to claim 9, wherein different methods for calculating the confidence value are used for different sub-regions of the examination region of the examination object.
11. The method according to claim 1, wherein the trained machine-learning model (MLM.sup.t) is configured and has been trained to increase, in the event of an increase in the deviation of the predicted color value ({circumflex over (y)}.sub.i) of the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) from the color value (y.sub.i) of the corresponding image element of the target reference image (RI.sub.2(y.sub.i)), the uncertainty value ({circumflex over (σ)}(x.sub.i)) of the predicted color value ({circumflex over (y)}.sub.i) in order to minimize the loss function (L).
12. The method according to claim 1, wherein an increase in the deviation of the predicted color value ({circumflex over (y)}.sub.i) of the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) from the color value (y.sub.i) of the corresponding image element of the target reference image (RI.sub.2(y.sub.i)) leads to an increase in a loss calculated by means of the loss function (L), wherein an increase in the uncertainty value ({circumflex over (σ)}(x.sub.i)) leads to a decrease in the loss calculated by means of the loss function (L).
13. The method according to claim 1, wherein the loss function (L) comprises the following equation (1): ##EQU00001##
14. The method according to claim 1, wherein the training of a machine-learning model (MLM) comprises: receiving the training data (TD); wherein the training data (TD) comprise for each reference object of the plurality of reference objects (i) the at least one input reference image (RI.sub.1(x.sub.i)) of the reference region of the reference object in the first state and (ii) the target reference image (RI.sub.2(y.sub.i)) of the reference region of the reference object in the second state, wherein the second state is different from the first state, wherein the at least one input reference image (RI.sub.1(x.sub.i)) comprises a plurality of image elements, wherein each image element of the at least one input reference image (RI.sub.1(x.sub.i)) represents a sub-region of the reference region, wherein each image element of the at least one input reference image (RI.sub.1(x.sub.i)) is characterized by a color value (x.sub.i), and wherein the target reference image (RI.sub.2(y.sub.i)) comprises a plurality of image elements, wherein each image element of the target reference image (RI.sub.2(y.sub.i)) represents a sub-region of the reference region, wherein each image element of the target reference image (RI.sub.2(y.sub.i)) is characterized by a color value (y.sub.i); providing the machine-learning model (MLM); wherein the machine-learning model (MLM) is configured to generate, on the basis of the at least one input reference image (RI.sub.1(x.sub.i)) of the reference region of a reference object and model parameters (MP), a synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) of the reference region of the reference object, wherein the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) comprises a plurality of image elements, wherein each image element of the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) corresponds to an image element of the target reference image (RI.sub.2(y.sub.i)), wherein each image element of the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) is assigned a predicted color value ({circumflex over (y)}.sub.i), wherein the machine-learning model (MLM) is configured to predict for each predicted color value ({circumflex over (y)}.sub.i) an uncertainty value ({circumflex over (σ)}(x.sub.i)); training the machine-learning model (MLM), wherein the training for each reference object of the plurality of reference objects comprises: inputting the at least one input reference image (RI.sub.1(x.sub.i)) into the machine-learning model (MLM); receiving the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) from the machine-learning model (MLM); receiving an uncertainty value ({circumflex over (σ)}(x.sub.i)) for each predicted color value ({circumflex over (y)}.sub.i) of the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)); calculating a loss by means of a loss function (L), wherein the loss function (L) comprises (i) the predicted color value ({circumflex over (y)}.sub.i) or a deviation between the predicted color value ({circumflex over (y)}.sub.i) and a color value (y.sub.i) of the corresponding image element of the target reference image (RI.sub.2(y.sub.i)) and (ii) the predicted uncertainty value ({circumflex over (σ)}(x.sub.i)) as parameters; and reducing the loss by modification of model parameters (MP); and outputting and storing the trained machine-learning model (MLM.sup.t) or transmitting the trained machine-learning model (MLM.sup.t) to a separate computer system; and using the trained machine-learning model (MLM.sup.t) to predict a synthetic image and to generate at least one confidence value for a synthetic image.
15. A computer system comprising: a receiving unit; a control and calculation unit; and an output unit; wherein the control and calculation unit is configured to: provide a trained machine-learning model (MLM.sup.t); wherein the trained machine-learning model (MLM.sup.t) has been trained on the basis of training data (TD), wherein the training data (TD) comprise for each reference object of a plurality of reference objects (i) at least one input reference image (RI.sub.1(x.sub.i)) of a reference region of the reference object in a first state and (ii) a target reference image (RI.sub.2(y.sub.i)) of the reference region of the reference object in a second state, wherein the at least one input reference image (RI.sub.1(x.sub.i)) and the target reference image (RI.sub.2(y.sub.i)) each comprise a plurality of image elements, wherein the at least one input reference image (RI.sub.1(x.sub.i)) comprises at least one computed tomography or magnetic resonance image of the reference region of the reference object, wherein the target reference image (RI.sub.2(y.sub.i)) is a computed tomography or magnetic resonance image of the reference region of the reference object, wherein the machine-learning model (MLM.sup.t) is configured and has been trained to generate for each reference object on the basis of the at least one input reference image (RI.sub.1(x.sub.i)) a synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)), wherein the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) comprises a plurality of image elements, wherein each image element of the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) respectively corresponds to an image element of the target reference image (RI.sub.2(y.sub.i)), wherein the machine-learning model (MLM.sup.t) has been trained to predict for each image element of the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) a color value ({circumflex over (y)}.sub.i) and an uncertainty value ({circumflex over (σ)}(x.sub.i)) for the predicted color value ({circumflex over (y)}.sub.i), and wherein the training comprises minimization of a loss function (L), wherein the loss function (L) comprises (i) the predicted color value ({circumflex over (y)}.sub.i) or a deviation of the predicted color value ({circumflex over (y)}.sub.i) from a color value (y.sub.i) of the corresponding image element of the target reference image (RI.sub.2(y.sub.i)) and (ii) the predicted uncertainty value ({circumflex over (σ)}(x.sub.i)) as parameters; cause the receiving unit to receive at least one input image (I.sub.1(x.sub.i)) of an examination region of an examination object, wherein the at least one input image (I.sub.1(x.sub.i)) represents the examination region of the examination object in the first state, wherein the at least one input image (I.sub.1(x.sub.i)) comprises at least one computed tomography or magnetic resonance image of the examination region of the examination object; feed the at least one input image (I.sub.1(x.sub.i)) to the trained machine-learning model (MLM.sup.t); receive from the trained machine-learning model (MLM.sup.t) a synthetic image (I.sub.2*({circumflex over (y)}.sub.i)), wherein the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)) represents the examination region of the examination object in the second state; receive from the trained machine-learning model (MLM.sup.t) an uncertainty value ({circumflex over (σ)}(x.sub.i)) for each image element of the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)); determine at least one confidence value on the basis of the received uncertainty values; and cause the output unit to output the at least one confidence value, store it in a data memory and transmit it to a separate computer system.
16. A computer-readable storage medium comprising a computer program which, when loaded into a working memory of a computer system, causes the computer system to execute: providing a trained machine-learning model (MLM.sup.t); wherein the trained machine-learning model (MLM.sup.t) has been trained on the basis of training data (TD), wherein the training data (TD) comprise for each reference object of a plurality of reference objects (i) at least one input reference image (RI.sub.1(x.sub.i)) of a reference region of the reference object in a first state and (ii) a target reference image (RI.sub.2(y.sub.i)) of the reference region of the reference object in a second state, wherein the at least one input reference image (RI.sub.1(x.sub.i)) and the target reference image (RI.sub.2(y.sub.i)) each comprise a plurality of image elements, wherein the at least one input reference image (RI.sub.1(x.sub.i)) comprises at least one computed tomography or magnetic resonance image of the reference region of the reference object, wherein the target reference image (RI.sub.2(y.sub.i)) is a computed tomography or magnetic resonance image of the reference region of the reference object, wherein the machine-learning model (MLM.sup.t) is configured and has been trained to generate for each reference object on the basis of the at least one input reference image (RI.sub.1(x.sub.i)) a synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)), wherein the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) comprises a plurality of image elements, wherein each image element of the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) respectively corresponds to an image element of the target reference image (RI.sub.2(y.sub.i)), wherein the machine-learning model (MLM.sup.t) has been trained to predict for each image element of the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) a color value ({circumflex over (y)}.sub.i) and an uncertainty value ({circumflex over (σ)}(x.sub.i)) for the predicted color value ({circumflex over (y)}.sub.i), and wherein the training comprises minimization of a loss function (L), wherein the loss function (L) comprises (i) the predicted color value ({circumflex over (y)}.sub.i) or a deviation of the predicted color value ({circumflex over (y)}.sub.i) from a color value (y.sub.i) of the corresponding image element of the target reference image (RI.sub.2(y.sub.i)) and (ii) the predicted uncertainty value ({circumflex over (σ)}(x.sub.i)) as parameters; receiving at least one input image (I.sub.1(x.sub.i)) of an examination region of an examination object, wherein the at least one input image (I.sub.1(x.sub.i)) represents the examination region of the examination object in the first state, wherein the at least one input image (I.sub.1(x.sub.i)) comprises at least one computed tomography or magnetic resonance image of the examination region of the examination object; feeding the at least one input image (I.sub.1(x.sub.i)) to the trained machine-learning model (MLM.sup.t); receiving a synthetic image (I.sub.2*({circumflex over (y)}.sub.i)) from the trained machine-learning model, wherein the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)) represents the examination region of the examination object in the second state; receiving an uncertainty value ({circumflex over (σ)}(x.sub.i)) for each image element of the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)); determining at least one confidence value on the basis of the received uncertainty values; and outputting the at least one confidence value.
17. A contrast agent for use in a radiological examination method, the method comprising: providing a trained machine-learning model (MLM.sup.t); wherein the trained machine-learning model (MLM.sup.t) has been trained on the basis of training data (TD), wherein the training data (TD) comprise for each reference object of a plurality of reference objects (i) at least one input reference image (RI.sub.1(x.sub.i)) of a reference region of the reference object in a first state and (ii) a target reference image (RI.sub.2(y.sub.i)) of the reference region of the reference object in a second state, wherein the at least one input reference image (RI.sub.1(x.sub.i)) and the target reference image (RI.sub.2(y.sub.i)) each comprise a plurality of image elements, wherein the at least one input reference image (RI.sub.1(x.sub.i)) comprises at least one computed tomography or magnetic resonance image of the reference region of the reference object in the first state, wherein the target reference image (RI.sub.2(y.sub.i)) is a computed tomography or magnetic resonance image of the reference region of the reference object in the second state, and wherein the first state represents the reference region of the reference object in a first period of time before or after the administration of the contrast agent and the second state represents the reference region of the reference object in a second period of time after the administration of the contrast agent, and/or the first state represents the reference region of the reference object before or after the administration of a first amount of the contrast agent and the second state represents the reference region of the reference object after the administration of a second amount of the contrast agent, wherein the machine-learning model (MLM.sup.t) is configured and has been trained to generate for each reference object on the basis of the at least one input reference image (RI.sub.1(x.sub.i)) a synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)), wherein the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) comprises a plurality of image elements, wherein each image element of the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) respectively corresponds to an image element of the target reference image (RI.sub.2(y.sub.i)), wherein the machine-learning model (MLM.sup.t) has been trained to predict for each image element of the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) a color value ({circumflex over (y)}.sub.i) and an uncertainty value ({circumflex over (σ)}(x.sub.i)) for the predicted color value ({circumflex over (y)}.sub.i), and wherein the training comprises minimization of a loss function (L), wherein the loss function (L) comprises (i) the predicted color value ({circumflex over (y)}.sub.i) or a deviation of the predicted color value ({circumflex over (y)}.sub.i) from a color value (y.sub.i) of the corresponding image element of the target reference image (RI.sub.2(y.sub.i)) and (ii) the predicted uncertainty value ({circumflex over (σ)}(x.sub.i)) as parameters; receiving at least one input image (I.sub.1(x.sub.i)) of an examination region of an examination object, wherein the at least one input image (I.sub.1(x.sub.i)) comprises at least one computed tomography or magnetic resonance image of the examination region of the examination object in the first state; feeding the at least one input image (I.sub.1(x.sub.i)) to the trained machine-learning model (MLM.sup.t); receiving a synthetic image (I.sub.2*({circumflex over (y)}.sub.i)) from the trained machine-learning model, wherein the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)) comprises a synthetic radiological image representing the examination region of the examination object in the second state; receiving an uncertainty value ({circumflex over (σ)}(x.sub.i)) for each image element of the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)); determining at least one confidence value on the basis of the received uncertainty values; and outputting the at least one confidence value.
18. The method according to claim 2, wherein the contrast agent comprises one or more of the following compounds: gadolinium(III) 2-[4,7,10-tris(carboxymethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetic acid; gadolinium(III) ethoxybenzyldiethylenetriaminepentaacetic acid; gadolinium(III) 2-[3,9-bis[1-carboxylato-4-(2,3-dihydroxypropylamino)-4-oxobutyl]-3,6,9,15-tetrazabicyclo[9.3.1]pentadeca-1(15),11,13-trien-6-yl]-5-(2,3-dihydroxypropylamino)-5-oxopentanoate; dihydrogen [()-4-carboxy-5,8,11-tris(carboxymethyl)-1-phenyl-2-oxa-5,8,11-triazatridecan-13-oato(5-)]gadolinate(2-); tetragadolinium [4,10-bis(carboxylatomethyl)-7-{3,6,12,15-tetraoxo-16-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]-9,9-bis({[({2-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]propanoyl}amino)acetyl]amino}methyl)-4,7,11,14-tetraazaheptadecan-2-yl}-1,4,7,10-tetraazacyclododecan-1-yl]acetate; gadolinium 2,2,2-(10-{1-carboxy-2-[2-(4-ethoxyphenyl)ethoxy]ethyl}-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate; gadolinium 2,2,2-{10-[1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate; gadolinium 2,2,2-{10-[(1R)-1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate; gadolinium (2S,2S,2S)-2,2,2-{10-[(1S)-1-carboxy-4-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}butyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}tris(3-hydroxypropanoate); gadolinium 2,2,2-{10-[(1S)-4-(4-butoxyphenyl)-1-carboxybutyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate; gadolinium 2,2,2-{(2S)-10-(carboxymethyl)-2-[4-(2-ethoxyethoxy)benzyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate; gadolinium 2,2,2-[10-(carboxymethyl)-2-(4-ethoxybenzyl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl]triacetate; gadolinium(III) 5,8-bis(carboxylatomethyl)-2-[2-(methylamino)-2-oxoethyl]-10-oxo-2,5,8,11-tetraazadodecane-1-carboxylate hydrate; gadolinium(III) 2-[4-(2-hydroxypropyl)-7,10-bis(2-oxido-2-oxoethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetate; gadolinium(III) 2,2,2-(10-((2R,3S)-1,3,4-trihydroxybutan-2-yl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate; a Gd.sup.3+ complex of a compound of formula (I) ##STR00005## wherein: Ar is a group selected from: ##STR00006## wherein .sub.# is a linkage to X; X is a group selected from: CH.sub.2, (CH.sub.2).sub.2, (CH.sub.2).sub.3, (CH.sub.2).sub.4 and *(CH.sub.2).sub.2OCH.sub.2.sup.#; wherein * is a linkage to Ar and .sup.# is a linkage to an acetic acid residue; R.sup.1, R.sup.2 and R.sup.3 are each independently a hydrogen atom or a group selected from C.sub.1-C.sub.3 alkyl, CH.sub.2OH, (CH.sub.2).sub.2OH and CH.sub.2OCH.sub.3; R.sup.4 is a group selected from C.sub.2-C.sub.4 alkoxy, (H.sub.3CCH.sub.2)O(CH.sub.2).sub.2O, (H.sub.3CCH.sub.2)O(CH.sub.2).sub.2O(CH.sub.2).sub.2O and (H.sub.3CCH.sub.2)O(CH.sub.2).sub.2O(CH.sub.2).sub.2O(CH.sub.2).sub.2O; R.sup.5 is a hydrogen atom; and R.sup.6 is a hydrogen atom; or a stereoisomer, a tautomer, a hydrate, a solvate or a salt thereof, or a mixture thereof, a Gd.sup.3+ complex of a compound of formula (II) ##STR00007## wherein: Ar is a group selected from: ##STR00008## wherein .sub.# is a linkage to X; X is a group selected from: CH.sub.2, (CH.sub.2).sub.2, (CH.sub.2).sub.3, (CH.sub.2).sub.4 and *(CH.sub.2).sub.2OCH.sub.2.sup.#; wherein * is a linkage to Ar and .sup.# is a linkage to an acetic acid residue; R.sup.7 is a hydrogen atom or a group selected from C.sub.1-C.sub.3 alkyl, CH.sub.2OH, (CH.sub.2).sub.2OH and 
CH.sub.2OCH.sub.3; R.sup.8 is a group selected from: C.sub.2-C.sub.4 alkoxy, (H.sub.3CCH.sub.2O)(CH.sub.2).sub.2O, (H.sub.3CCH.sub.2O)(CH.sub.2).sub.2O(CH.sub.2).sub.2O and (H.sub.3CCH.sub.2O)(CH.sub.2).sub.2O(CH.sub.2).sub.2O(CH.sub.2).sub.2O; R.sup.9 and R.sup.10 are independently a hydrogen atom; or a stereoisomer, a tautomer, a hydrate, a solvate or a salt thereof, or a mixture thereof.
19. The method according to claim 17, wherein the contrast agent comprises one or more of the following compounds: gadolinium(III) 2-[4,7,10-tris(carboxymethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetic acid; gadolinium(III) ethoxybenzyldiethylenetriaminepentaacetic acid; gadolinium(III) 2-[3,9-bis[1-carboxylato-4-(2,3-dihydroxypropylamino)-4-oxobutyl]-3,6,9,15-tetrazabicyclo[9.3.1]pentadeca-1(15),11,13-trien-6-yl]-5-(2,3-dihydroxypropylamino)-5-oxopentanoate; dihydrogen [()-4-carboxy-5,8,11-tris(carboxymethyl)-1-phenyl-2-oxa-5,8,11-triazatridecan-13-oato(5-)]gadolinate(2-); tetragadolinium [4,10-bis(carboxylatomethyl)-7-{3,6,12,15-tetraoxo-16-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]-9,9-bis({[({2-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]propanoyl}amino)acetyl]amino}methyl)-4,7,11,14-tetraazaheptadecan-2-yl}-1,4,7,10-tetraazacyclododecan-1-yl]acetate; gadolinium 2,2,2-(10-{1-carboxy-2-[2-(4-ethoxyphenyl)ethoxy]ethyl}-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate; gadolinium 2,2,2-{10-[1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate; gadolinium 2,2,2-{10-[(1R)-1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate; gadolinium (2S,2S,2S)-2,2,2-{10-[(1S)-1-carboxy-4-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}butyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}tris(3-hydroxypropanoate); gadolinium 2,2,2-{10-[(1S)-4-(4-butoxyphenyl)-1-carboxybutyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate; gadolinium 2,2,2-{(2S)-10-(carboxymethyl)-2-[4-(2-ethoxyethoxy)benzyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate; gadolinium 2,2,2-[10-(carboxymethyl)-2-(4-ethoxybenzyl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl]triacetate; gadolinium(III) 5,8-bis(carboxylatomethyl)-2-[2-(methylamino)-2-oxoethyl]-10-oxo-2,5,8,11-tetraazadodecane-1-carboxylate hydrate; gadolinium(III) 2-[4-(2-hydroxypropyl)-7,10-bis(2-oxido-2-oxoethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetate; gadolinium(III) 2,2,2-(10-((2R,3S)-1,3,4-trihydroxybutan-2-yl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate; a Gd.sup.3+ complex of a compound of formula (I) ##STR00009## wherein: Ar is a group selected from: ##STR00010## wherein .sub.# is a linkage to X; X is a group selected from: CH.sub.2, (CH.sub.2).sub.2, (CH.sub.2).sub.3, (CH.sub.2).sub.4 and *(CH.sub.2).sub.2OCH.sub.2.sup.#; wherein * is a linkage to Ar and .sup.# is a linkage to an acetic acid residue; R.sup.1, R.sup.2 and R.sup.3 are each independently a hydrogen atom or a group selected from C.sub.1-C.sub.3 alkyl, CH.sub.2OH, (CH.sub.2).sub.2OH and CH.sub.2OCH.sub.3; R.sup.4 is a group selected from C.sub.2-C.sub.4 alkoxy, (H.sub.3CCH.sub.2)O(CH.sub.2).sub.2O, (H.sub.3CCH.sub.2)O(CH.sub.2).sub.2O(CH.sub.2).sub.2O and (H.sub.3CCH.sub.2)O(CH.sub.2).sub.2O(CH.sub.2).sub.2O(CH.sub.2).sub.2O; R.sup.5 is a hydrogen atom; and R.sup.6 is a hydrogen atom; or a stereoisomer, a tautomer, a hydrate, a solvate or a salt thereof, or a mixture thereof, a Gd.sup.3+ complex of a compound of formula (II) ##STR00011## wherein: Ar is a group selected from: ##STR00012## wherein .sub.# is a linkage to X; X is a group selected from: CH.sub.2, (CH.sub.2).sub.2, (CH.sub.2).sub.3, (CH.sub.2).sub.4 and *(CH.sub.2).sub.2OCH.sub.2.sup.#; wherein * is a linkage to Ar and .sup.# is a linkage to an acetic acid residue; R.sup.7 is a hydrogen atom or a group selected from C.sub.1-C.sub.3 alkyl, CH.sub.2OH, (CH.sub.2).sub.2OH and 
CH.sub.2OCH.sub.3; R.sup.8 is a group selected from: C.sub.2-C.sub.4 alkoxy, (H.sub.3CCH.sub.2O)(CH.sub.2).sub.2O, (H.sub.3CCH.sub.2O)(CH.sub.2).sub.2O(CH.sub.2).sub.2O and (H.sub.3CCH.sub.2O)(CH.sub.2).sub.2O(CH.sub.2).sub.2O(CH.sub.2).sub.2O; R.sup.9 and R.sup.10 are independently a hydrogen atom; or a stereoisomer, a tautomer, a hydrate, a solvate or a salt thereof, or a mixture thereof.
20. A kit comprising a contrast agent and a computer program which can be loaded into a working memory of a computer system, wherein the computer program causes the computer system to: provide a trained machine-learning model (MLM.sup.t), wherein the trained machine-learning model (MLM.sup.t) has been trained on the basis of training data (TD), wherein the training data (TD) comprise for each reference object of a plurality of reference objects (i) at least one input reference image (RI.sub.1(x.sub.i)) of a reference region of the reference object in a first state and (ii) a target reference image (RI.sub.2(y.sub.i)) of the reference region of the reference object in a second state, wherein the at least one input reference image (RI.sub.1(x.sub.i)) and the target reference image (RI.sub.2(y.sub.i)) each comprise a plurality of image elements, wherein the at least one input reference image (RI.sub.1(x.sub.i)) comprises at least one computed tomography or magnetic resonance image of the reference region of the reference object in the first state, wherein the target reference image (RI.sub.2(y.sub.i)) is a computed tomography or magnetic resonance image of the reference region of the reference object in the second state, and wherein the first state represents the reference region of the reference object in a first period of time before or after the administration of the contrast agent and the second state represents the reference region of the reference object in a second period of time after the administration of the contrast agent, and/or the first state represents the reference region of the reference object before or after the administration of a first amount of the contrast agent and the second state represents the reference region of the reference object after the administration of a second amount of the contrast agent, wherein the machine-learning model (MLM.sup.t) is configured and has been trained to generate for each reference object on the basis of the at least one input reference image (RI.sub.1(x.sub.i)) a synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)), wherein the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) comprises a plurality of image elements, wherein each image element of the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) respectively corresponds to an image element of the target reference image (RI.sub.2(y.sub.i)), wherein the machine-learning model (MLM.sup.t) has been trained to predict for each image element of the synthetic reference image (RI.sub.2*({circumflex over (y)}.sub.i)) a color value ({circumflex over (y)}.sub.i) and an uncertainty value ({circumflex over (σ)}(x.sub.i)) for the predicted color value ({circumflex over (y)}.sub.i), and wherein the training comprises minimization of a loss function (L), wherein the loss function (L) comprises (i) the predicted color value ({circumflex over (y)}.sub.i) or a deviation of the predicted color value ({circumflex over (y)}.sub.i) from a color value (y.sub.i) of the corresponding image element of the target reference image (RI.sub.2(y.sub.i)) and (ii) the predicted uncertainty value ({circumflex over (σ)}(x.sub.i)) as parameters; receive at least one input image (I.sub.1(x.sub.i)) of an examination region of an examination object, wherein the at least one input image (I.sub.1(x.sub.i)) comprises at least one computed tomography or magnetic resonance image of the examination region of the examination object in the first state; feed the at least one input image (I.sub.1(x.sub.i)) to the trained machine-learning model (MLM.sup.t); receive a synthetic image (I.sub.2*({circumflex over (y)}.sub.i)) from the trained machine-learning model, wherein the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)) comprises a synthetic radiological image representing the examination region of the examination object in the second state; receive an uncertainty value ({circumflex over (σ)}(x.sub.i)) for each image element of the synthetic image (I.sub.2*({circumflex over (y)}.sub.i)); determine at least one confidence value on the basis of the received uncertainty values; and output the at least one confidence value.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0094] The disclosure will be more particularly elucidated below without distinguishing between the subjects of the present disclosure (method, computer system, computer-readable storage medium, use, contrast agent for use, kit). Rather, the following statements shall apply mutatis mutandis to all the subjects of the disclosure, irrespective of the context (method, computer system, computer-readable storage medium, use, contrast agent for use, kit) in which they are described.
[0095] Where steps are stated in an order in the present description or in the claims, this does not necessarily mean that the disclosure is limited to the order stated. Rather, it is conceivable that the steps are also executed in a different order or in parallel with one another, unless one step builds on another step, in which case the dependent step must necessarily be executed after the step on which it builds (this will, however, be clear in the individual case).
[0096] In certain places the disclosure will be more particularly elucidated with reference to drawings. The drawings show specific embodiments having specific features and combinations of features, which are intended primarily for illustrative purposes; the disclosure is not to be understood as being limited to the features and combinations of features shown in the drawings. Furthermore, statements made in the description of the drawings in relation to features and combinations of features are intended to be generally applicable, that is to say transferable to other embodiments too and not limited to the embodiments shown.
[0097] The present disclosure describes means for judging the trustworthiness of a synthetic image of an examination region of an examination object.
[0098] The term trustworthiness is understood to mean that a person reviewing the synthetic image is able to trust that structures and/or morphologies and/or textures depicted in the synthetic image are attributable to real structures and/or real morphologies and/or real textures of the examination region of the examination object and are not artifacts.
[0099] The term synthetic as used herein may refer to an image that is not the direct result of a measurement on a real examination object, but has been artificially generated (calculated). However, a synthetic image can be based on images taken of a real examination object, i.e. one or more images taken of a real examination object can be used to generate the synthetic image. Examples of synthetic images are described in the introduction and in the further description of the present disclosure. According to the present disclosure, a synthetic image is generated by a machine-learning model. The generation of a synthetic image with the aid of a machine-learning model is also referred to as prediction in this description. The terms synthetic and predicted are used synonymously in this disclosure. In other words, a synthetic image is an image generated (predicted) by a (trained) machine-learning model on the basis of input data, which can comprise one or more images generated by measurement.
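By way of illustration only, the following minimal Python sketch (using NumPy) shows how a trained machine-learning model of the kind described here could be applied: it maps an input image to a synthetic image together with an element-wise uncertainty map, from which a single confidence value is derived, here by way of example as the mean uncertainty. All names (predict_with_uncertainty, dummy_model) and the model interface are illustrative assumptions, not part of this disclosure.

import numpy as np

def predict_with_uncertainty(model, input_image):
    # Assumed interface: the trained model returns, for each image element,
    # a predicted color value y_hat and an uncertainty value sigma_hat.
    y_hat, sigma_hat = model(input_image)
    return y_hat, sigma_hat

def dummy_model(x):
    # Stand-in for a trained machine-learning model (illustration only):
    # it returns the input unchanged as the "prediction" and a constant
    # uncertainty value for every image element.
    return x.astype(np.float32), np.full(x.shape, 0.05, dtype=np.float32)

input_image = np.random.rand(256, 256).astype(np.float32)  # one 2D input image
synthetic_image, uncertainty_map = predict_with_uncertainty(dummy_model, input_image)

# One possible confidence value for the entire synthetic image: the mean of
# all element-wise uncertainty values (a maximum or minimum value could be
# used instead, as described in the claims).
confidence_value = float(np.mean(uncertainty_map))
print(confidence_value)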
[0100] The examination object is preferably a human or an animal, preferably a mammal, most preferably a human.
[0101] The examination region is part of the examination object, for example an organ of a human or animal such as the liver, brain, heart, kidney, lung, stomach, intestines, pancreas, thyroid gland, prostate, breast, or part of the aforementioned organs, or multiple organs, or another part of the examination object.
[0102] The examination region may also include multiple organs and/or parts of multiple organs.
[0103] In one embodiment, the examination region includes a liver or part of a liver or the examination region is a liver or part of a liver of a mammal, preferably a human.
[0104] In a further embodiment, the examination region includes a brain or part of a brain or the examination region is a brain or part of a brain of a mammal, preferably a human.
[0105] In a further embodiment, the examination region includes a heart or part of a heart or the examination region is a heart or part of a heart of a mammal, preferably a human.
[0106] In a further embodiment, the examination region includes a thorax or part of a thorax or the examination region is a thorax or part of a thorax of a mammal, preferably a human.
[0107] In a further embodiment, the examination region includes a stomach or part of a stomach or the examination region is a stomach or part of a stomach of a mammal, preferably a human.
[0108] In a further embodiment, the examination region includes a pancreas or part of a pancreas or the examination region is a pancreas or part of a pancreas of a mammal, preferably a human.
[0109] In a further embodiment, the examination region includes a kidney or part of a kidney or the examination region is a kidney or part of a kidney of a mammal, preferably a human.
[0110] In a further embodiment, the examination region includes one or both lungs or part of a lung of a mammal, preferably a human.
[0111] In a further embodiment, the examination region includes a breast or part of a breast or the examination region is a breast or part of a breast of a female mammal, preferably a female human.
[0112] In a further embodiment, the examination region includes a prostate or part of a prostate or the examination region is a prostate or part of a prostate of a male mammal, preferably a male human.
[0113] The examination region, also referred to as the field of view (FOV), is in particular a volume that is imaged in radiological images. The examination region is typically defined by a radiologist, for example on a localizer image. It is also possible for the examination region to be alternatively or additionally defined in an automated manner, for example on the basis of a selected protocol.
[0114] The term image refers to a data structure constituting a spatial distribution of a physical signal. The spatial distribution can have any dimension, for example 2D, 3D, 4D or a higher dimension. The spatial distribution can have any form, for example it can form a grid, which can be irregular or regular, and thereby define pixels or voxels. The physical signal can be any signal, for example proton density, echogenicity, permeability, absorption capacity, relaxivity, information about rotating hydrogen nuclei in a magnetic field, color, grey level, depth, surface or volume occupancy.
[0115] The term image is preferably understood to mean a two-, three- or higher-dimensional visually capturable representation of the examination region of the examination object. Such an image is usually a digital image. The term digital as used herein may mean that the image can be processed by a machine, generally a computer system. Processing is understood to mean the known methods for electronic data processing (EDP).
[0116] A digital image can be processed, edited and reproduced and also converted into standardized data formats, for example JPEG (graphics format of the Joint Photographic Experts Group), PNG (Portable Network Graphics) or SVG (Scalable Vector Graphics), by means of computer systems and software.
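By way of illustration only, the following minimal Python sketch shows such a conversion into standardized data formats using the Pillow library (one common choice; the array contents and file names are illustrative assumptions):

import numpy as np
from PIL import Image

array = (np.random.rand(64, 64) * 255).astype(np.uint8)   # an example grey scale image
Image.fromarray(array).save("image.png")                  # store in PNG format
Image.open("image.png").convert("RGB").save("image.jpg")  # re-encode in JPEG format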
[0117] Digital images can be visualized by means of suitable display devices, for example computer monitors, projectors and/or printers.
[0118] In a digital image, image contents are usually represented and stored as whole numbers. In most cases, the images are two- or three-dimensional images, which can be binary coded and optionally compressed. The digital images are usually raster graphics, in which the image information is stored in a uniform raster grid. Raster graphics consist of a raster arrangement of so-called picture elements (pixels) in the case of two-dimensional representations or volume elements (voxels) in the case of three-dimensional representations. In the case of four-dimensional representations, the term doxel (dynamic voxel) is commonly used for the image elements. In the case of higher-dimensional representations or in general, the term n-xel is sometimes also used, where n indicates the particular dimension. This disclosure generally uses the term image element. An image element can therefore be a picture element (pixel) in the case of a two-dimensional representation, a volume element (voxel) in the case of a three-dimensional representation, a dynamic voxel (doxel) in the case of a four-dimensional representation or a higher-dimensional image element in the case of a higher-dimensional representation.
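By way of illustration only, such raster representations can be modelled as arrays, for example with NumPy in Python (the array shapes are illustrative assumptions):

import numpy as np

image_2d = np.zeros((512, 512), dtype=np.uint8)          # picture elements (pixels): rows x columns
image_3d = np.zeros((64, 512, 512), dtype=np.uint8)      # volume elements (voxels): slices x rows x columns
image_4d = np.zeros((10, 64, 512, 512), dtype=np.uint8)  # dynamic voxels (doxels): time x slices x rows x columns
# Each array entry is one image element holding a color value.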
[0119] Each image element in an image is assigned a color value. The color value indicates how (e.g. in what color) the image element is to be visually displayed (e.g. on a monitor).
[0120] The simplest case is a binary image, in which an image element is displayed either white or black. The color value 0 is usually black and the color value 1 white.
[0121] In the case of a grey scale image, each image element is assigned a grey level, which ranges from black to white over a defined number of shades of grey. Grey levels are also referred to as grey values. The number of shades can range, for example, from 0 to 255 (i.e. 256 grey levels/grey values), and here too, the value 0 is usually black and the highest grey value (value of 255 in this example) white.
[0122] In the case of a color image, the color coding used for an image element is defined, inter alia, in terms of the color space and the color depth. In the case of an image, the color of which is defined in terms of the so-called RGB color space (RGB stands for the primary colors red, green and blue), each picture element is assigned three color values, one color value for the color red, one color value for the color green and one color value for the color blue. The color of an image element arises through the superimposition (additive blending) of the three color values. The individual color value can be discretized, for example, into 256 distinguishable levels, which are called tonal values and usually range from 0 to 255. The tonal value 0 of each color channel is usually the darkest color nuance. If all three color channels have the tonal value 0, the corresponding image element appears black; if all three color channels have the tonal value 255, the corresponding image element appears white.
[0123] Irrespective of whether the image is a binary image, a grey scale image or a color image, the term color value is used in this disclosure to indicate the color (including the colors black and white and all shades of grey) in which an image element is to be displayed. A color value can thus be, for example, a tonal value of a color channel, a shade of grey, or black or white.
[0124] A color value in an image (especially a medical image) usually represents a strength of a physical signal (see above). It should be noted that the color value can also be a value for the physical signal itself.
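By way of a small worked example (illustrative values only): in an 8-bit grey scale image the color value of an image element is a single integer from 0 (black) to 255 (white), whereas in an 8-bit RGB image it is a triple of tonal values whose additive blending yields the displayed color:

import numpy as np

binary_black, binary_white = 0, 1                      # binary image: two color values
grey_values = np.array([0, 128, 255], dtype=np.uint8)  # black, mid-grey, white
rgb_red = np.array([255, 0, 0], dtype=np.uint8)        # pure red: red channel at tonal value 255
rgb_black = np.array([0, 0, 0], dtype=np.uint8)        # all channels at 0 appear black
rgb_white = np.array([255, 255, 255], dtype=np.uint8)  # all channels at 255 appear white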
[0125] There are a multiplicity of possible digital image formats and color codings. For simplification, it is assumed in this description that the present images are raster graphics having a specific number of image elements. However, this assumption ought not in any way be understood as limiting. It is clear to a person skilled in the art of image processing how the teaching of this description can be applied to image files which are present in other image formats and/or in which the color values are coded differently.
[0126] An image in the context of the present disclosure can also be one or more excerpts from a video sequence.
[0127] In a first step, at least one input image of an examination region of an examination object is received.
[0128] The term receiving encompasses both the retrieving of images and the accepting of images transmitted, for example, to the computer system of the present disclosure. The at least one input image can be received from a computed tomography scanner, from a magnetic resonance imaging scanner, from an ultrasound scanner, from a camera and/or from some other device for generating images. The at least one input image can be read from a data memory and/or transmitted from a separate computer system.
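By way of illustration only, the following minimal Python sketch shows the receiving of an input image read from a data memory, assuming the image is stored as a DICOM file (pydicom is one common library for this purpose; the file name is a hypothetical example):

import pydicom

dataset = pydicom.dcmread("input_image.dcm")  # read the radiological image from a data memory
input_image = dataset.pixel_array             # image elements as an array of color values
print(input_image.shape)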
[0129] The expression receiving at least one input image as used herein may mean that one or more input images are received. It is thus possible to receive one input image; however, it is also possible to receive two or three or four or more than four input images.
[0130] The additional word input in the term input image indicates that the input image is intended to be fed to a (trained) machine-learning model as input data or at least as part of the input data.
[0131] Preferably, the at least one input image is a two-dimensional or three-dimensional representation of an examination region of an examination object.
[0132] In one embodiment of the present disclosure, the at least one input image is a medical image.
[0133] A medical image is a visual representation of an examination region of a human or animal that can be used for diagnostic and/or therapeutic purposes.
[0134] There is a multitude of techniques that can be used to generate medical images; examples of such techniques include radiography, computed tomography (CT), fluoroscopy, magnetic resonance imaging (MRI), ultrasound (sonography), endoscopy, elastography, tactile imaging, thermography, microscopy, positron emission tomography, optical coherence tomography (OCT), fundus photography and others.
[0135] Examples of medical images include CT images, X-ray images, MRI images, fluorescence angiography images, OCT images, histological images, ultrasound images, fundus images and/or others.
[0136] The at least one input image can be a CT image, MRI image, ultrasound image, OCT image and/or some other representation of an examination region of an examination object.
[0137] The at least one input image can also include representations of different modalities, for example one or more CT images and one or more MRI images.
[0138] Preferably, the at least one input image is the result of a radiological examination. In other words, the at least one input image is preferably a radiological image.
[0139] Radiology is the branch of medicine that is concerned with the use of electromagnetic rays and mechanical waves (including for instance ultrasound diagnostics) for diagnostic, therapeutic and/or scientific purposes. Besides X-rays, other ionizing radiation such as gamma radiation or electrons is also used. Since imaging is a key application, other imaging methods such as sonography and magnetic resonance imaging (nuclear magnetic resonance imaging) are also counted as radiology, even though no ionizing radiation is used in these methods. The term radiology in the context of the present disclosure thus encompasses in particular the following examination methods: computed tomography, magnetic resonance imaging, sonography.
[0140] In one embodiment of the present disclosure, the radiological examination is a magnetic resonance imaging examination, i.e. the at least one input image comprises at least one MRI image.
[0141] In a further embodiment, the radiological examination is a computed tomography examination, i.e. the at least one input image comprises at least one CT image.
[0142] In a further embodiment, the radiological examination is an ultrasound examination, i.e. the at least one input image comprises at least one ultrasound image.
[0143] In radiological examinations, contrast agents are commonly used for contrast enhancement.
[0144] Contrast agents are substances or mixtures of substances that improve the depiction of structures and functions of the body in radiological examinations.
[0145] In computed tomography, iodine-containing solutions are normally used as contrast agents. In magnetic resonance imaging (MRI), superparamagnetic substances (for example iron oxide nanoparticles, superparamagnetic iron-platinum particles (SIPPs)) or paramagnetic substances (for example gadolinium chelates, manganese chelates, hafnium chelates) are normally used as contrast agents. In the case of sonography, liquids containing gas-filled microbubbles are normally administered intravenously. Examples of contrast agents can be found in the literature (see, for example, A. S. L. Jascinth et al.: Contrast Agents in computed tomography: A Review, Journal of Applied Dental and Medical Sciences, 2016, Vol. 2, Issue 2, 143-149; H. Lusic et al.: X-ray-Computed Tomography Contrast Agents, Chem. Rev. 2013, 113, 3, 1641-1666; radiology.wisc.edu/wp-content/uploads/2017/10/contrast-agents-tutorial.pdf; M. R. Nough et al.: Radiographic and magnetic resonances contrast agents: Essentials and tips for safe practices, World J Radiol. 2017 Sep. 28; 9(9): 339-349; L. C. Abonyi et al.: Intravascular Contrast Media in Radiography: Historical Development & Review of Risk Factors for Adverse Reactions, South American Journal of Clinical Research, 2016, Vol. 3, Issue 1, 1-10; ACR Manual on Contrast Media, 2020, ISBN: 978-1-55903-012-0; A. Ignee et al.: Ultrasound contrast agents, Endosc Ultrasound. 2016 November-December; 5(6): 355-362).
[0146] MRI contrast agents exert their effect in an MRI examination by altering the relaxation times of structures that take up contrast agents. A distinction can be made between two groups of substances: paramagnetic and superparamagnetic substances. Both groups of substances have unpaired electrons that induce a magnetic field around the individual atoms or molecules. Superparamagnetic contrast agents result in a predominant shortening of T2, whereas paramagnetic contrast agents mainly result in a shortening of T1. The effect of said contrast agents is indirect, since the contrast agent does not itself emit a signal, but instead merely influences the intensity of signals in its vicinity. An example of a superparamagnetic contrast agent is iron oxide nanoparticles (SPIO, superparamagnetic iron oxide). Examples of paramagnetic contrast agents are gadolinium chelates such as gadopentetate dimeglumine (trade name: Magnevist and others), gadoteric acid (Dotarem, Dotagita, Cyclolux), gadodiamide (Omniscan), gadoteridol (ProHance), gadobutrol (Gadovist), gadopiclenol (Elucirem, Vueway) and gadoxetic acid (Primovist/Eovist).
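The T1 shortening caused by paramagnetic contrast agents can be quantified by the standard relaxivity relation, a textbook formula that is not stated explicitly in this disclosure but underlies the contrast enhancement described here:

\[ \frac{1}{T_{1,\mathrm{obs}}} = \frac{1}{T_{1,0}} + r_1 \cdot c \]

Here T.sub.1,0 is the longitudinal relaxation time in the absence of contrast agent, c is the local contrast agent concentration and r.sub.1 is the longitudinal relaxivity of the agent; an analogous relation with the transverse relaxivity r.sub.2 applies to the T2 shortening by superparamagnetic agents.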
[0147] In one embodiment, the radiological examination is an MRI examination in which an MRI contrast agent is used.
[0148] In a further embodiment, the radiological examination is a CT examination in which a CT contrast agent is used.
[0149] In a further embodiment, the radiological examination is a CT examination in which an MRI contrast agent is used.
[0150] In one embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium(III) 2-[4,7,10-tris(carboxymethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetic acid (also referred to as gadolinium-DOTA or gadoteric acid).
[0151] In a further embodiment, the contrast agent is an agent that includes gadolinium(III) ethoxybenzyldiethylenetriaminepentaacetic acid (Gd-EOB-DTPA); preferably, the contrast agent includes the disodium salt of gadolinium(III) ethoxybenzyldiethylenetriaminepentaacetic acid (also referred to as gadoxetic acid).
[0152] In one embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium(III) 2-[3,9-bis[1-carboxylato-4-(2,3-dihydroxypropylamino)-4-oxobutyl]-3,6,9,15-tetrazabicyclo[9.3.1]pentadeca-1(15),11,13-trien-6-yl]-5-(2,3-dihydroxypropylamino)-5-oxopentanoate (also referred to as gadopiclenol) (see for example WO2007/042504 and WO2020/030618 and/or WO2022/013454).
[0153] In one embodiment of the present disclosure, the contrast agent is an agent that includes dihydrogen [()-4-carboxy-5,8,11-tris(carboxymethyl)-1-phenyl-2-oxa-5,8,11-triazatridecan-13-oato(5-)]gadolinate(2-) (also referred to as gadobenic acid).
[0154] In one embodiment of the present disclosure, the contrast agent is an agent that includes tetragadolinium [4,10-bis(carboxylatomethyl)-7-{3,6,12,15-tetraoxo-16-[4,7,10-tris-(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]-9,9-bis({[({2-[4,7,10-tris-(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]propanoyl}amino)acetyl]amino}methyl)-4,7,11,14-tetraazaheptadecan-2-yl}-1,4,7,10-tetraazacyclododecan-1-yl]acetate (also referred to as gadoquatrane) (see for example J. Lohrke et al.: Preclinical Profile of Gadoquatrane: A Novel Tetrameric, Macrocyclic High Relaxivity Gadolinium-Based Contrast Agent. Invest Radiol., 2022, 1, 57(10): 629-638; WO2016193190).
[0155] In one embodiment of the present disclosure, the contrast agent is an agent that includes a Gd.sup.3+ complex of a compound of the formula (I)
##STR00001## [0156] wherein: [0157] Ar is a group selected from:
##STR00002## [0158] wherein .sub.# is a linkage to X, [0159] X is a group selected from: [0160] CH.sub.2, (CH.sub.2).sub.2, (CH.sub.2).sub.3, (CH.sub.2).sub.4 and *(CH.sub.2).sub.2OCH.sub.2.sup.#, [0161] wherein * is a linkage to Ar and .sup.# is a linkage to an acetic acid residue, [0162] R.sup.1, R.sup.2 and R.sup.3 are each independently a hydrogen atom or a group selected from C.sub.1-C.sub.3 alkyl, CH.sub.2OH, (CH.sub.2).sub.2OH and CH.sub.2OCH.sub.3, [0163] R.sup.4 is a group selected from C.sub.2-C.sub.4 alkoxy, (H.sub.3CCH.sub.2)O(CH.sub.2).sub.2O, (H.sub.3CCH.sub.2)O(CH.sub.2).sub.2O(CH.sub.2).sub.2O and (H.sub.3CCH.sub.2)O(CH.sub.2).sub.2O(CH.sub.2).sub.2O(CH.sub.2).sub.2O, [0164] R.sup.5 is a hydrogen atom, [0165] and [0166] R.sup.6 is a hydrogen atom, [0167] or a stereoisomer, a tautomer, a hydrate, a solvate or a salt thereof, or a mixture thereof.
[0168] In one embodiment of the present disclosure, the contrast agent is an agent that includes a Gd.sup.3+ complex of a compound of the formula (II)
##STR00003## [0169] wherein: [0170] Ar is a group selected from:
##STR00004## [0171] wherein .sub.# is a linkage to X, [0172] X is a group selected from: [0173] CH.sub.2, (CH.sub.2).sub.2, (CH.sub.2).sub.3, (CH.sub.2).sub.4 and *(CH.sub.2).sub.2OCH.sub.2.sup.#, [0174] wherein * is a linkage to Ar and .sup.# is a linkage to an acetic acid residue, [0175] R.sup.7 is a hydrogen atom or a group selected from C.sub.1-C.sub.3 alkyl, CH.sub.2OH, (CH.sub.2).sub.2OH and CH.sub.2OCH.sub.3; [0176] R.sup.8 is a group selected from: [0177] C.sub.2-C.sub.4 alkoxy, (H.sub.3CCH.sub.2O)(CH.sub.2).sub.2O, (H.sub.3CCH.sub.2O)(CH.sub.2).sub.2O(CH.sub.2).sub.2O and (H.sub.3CCH.sub.2O)(CH.sub.2).sub.2O(CH.sub.2).sub.2O(CH.sub.2).sub.2O; [0178] R.sup.9 and R.sup.10 are independently a hydrogen atom; [0179] or a stereoisomer, a tautomer, a hydrate, a solvate or a salt thereof, or a mixture thereof.
[0180] The term C.sub.1-C.sub.3 alkyl denotes a linear or branched, saturated monovalent hydrocarbon group having 1, 2 or 3 carbon atoms, for example methyl, ethyl, n-propyl or isopropyl. The term C.sub.2-C.sub.4 alkyl denotes a linear or branched, saturated monovalent hydrocarbon group having 2, 3 or 4 carbon atoms.
[0181] The term C.sub.2-C.sub.4 alkoxy denotes a linear or branched, saturated monovalent group of the formula (C.sub.2-C.sub.4 alkyl)-O, in which the term C.sub.2-C.sub.4 alkyl is as defined above, for example an ethoxy, n-propoxy, isopropoxy or n-butoxy group.
[0182] In one embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium 2,2,2-(10-{1-carboxy-2-[2-(4-ethoxyphenyl)ethoxy]ethyl}-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate (see for example WO2022/194777, example 1).
[0183] In one embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium 2,2,2-{10-[1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate (see for example WO2022/194777, example 2).
[0184] In one embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium 2,2,2-{10-[(1R)-1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate (see for example WO2022/194777, example 4).
[0185] In one embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium (2S,2S,2S)-2,2,2-{10-[(1S)-1-carboxy-4-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}butyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}tris(3-hydroxypropanoate) (see for example WO2022/194777, example 15).
[0186] In one embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium 2,2,2-{10-[(1S)-4-(4-butoxyphenyl)-1-carboxybutyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate (see for example WO2022/194777, example 31).
[0187] In one embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium-2,2,2-{(2S)-10-(carboxymethyl)-2-[4-(2-ethoxyethoxy)benzyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate.
[0188] In one embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium 2,2,2-[10-(carboxymethyl)-2-(4-ethoxybenzyl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl]triacetate.
[0189] In one embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium(III) 5,8-bis(carboxylatomethyl)-2-[2-(methylamino)-2-oxoethyl]-10-oxo-2,5,8,11-tetraazadodecane-1-carboxylate hydrate (also referred to as gadodiamide).
[0190] In one embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium(III) 2-[4-(2-hydroxypropyl)-7,10-bis(2-oxido-2-oxoethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetate (also referred to as gadoteridol).
[0191] In one embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium(III) 2,2,2-(10-((2R,3S)-1,3,4-trihydroxybutan-2-yl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate (also referred to as gadobutrol or Gd-DO3A-butrol).
[0192] The at least one input image can also include representations of the examination region that were generated under different measurement conditions, for example a T1-weighted MRI image and/or a T2-weighted MRI image and/or a diffusion-weighted MRI image and/or some other MRI image and/or one or more dual-energy CT images and/or one or more spectral CT images.
[0193] The at least one input image can also include multiple radiological images that were generated after administration of different amounts of a contrast agent and/or after administration of different contrast agents, for example a native radiological image and/or a radiological image after administration of a first amount of a contrast agent and/or one or more radiological images after administration of a second contrast agent and/or a virtual non-contrast representation (VNC representation).
[0194] The at least one input image can also include multiple radiological images that were generated at different times before and/or after the administration of one or more contrast agents and/or that represent the examination region in different phases and/or states.
[0195] The at least one input image comprises a multiplicity of image elements. Each image element of the multiplicity of image elements represents a sub-region of the examination region of the examination object. The terms multiplicity of image elements and plurality of image elements as used herein may mean at least 1,000, or at least 10,000, or at least 100,000, or more than 100,000. It is conceivable that the at least one input image comprises one or more image elements that do not represent the examination region of the examination object, but some other region such as an adjoining and/or surrounding region.
[0196] The at least one input image represents the examination region of the examination object in a first state.
[0197] If more than one input image is received (e.g. two, three, four or more), the further input images received can represent the examination region of the examination object in the same state as the first input image or in one or more further states. A first input image can represent the examination region of the examination object in, for example, a first state and a second input image can represent the examination region of the examination object in, for example, a second state, the second state differing from the first state.
[0198] On the basis of at least one input image, the (trained) machine-learning model generates a synthetic image. The synthetic image represents the examination region of the examination object in a second state. The first state and the second state are different states.
[0199] If more than one input image is received and a synthetic image is generated on the basis of the multiple input images received, then the synthetic image represents the examination region of the examination object in a different state than any of the input images received that are used as the basis for generation of the synthetic image.
[0200] It is thus possible to receive a first input image and a second input image. The first input image can represent the examination region of the examination object in a first state and the second input image can represent the examination region of the examination object in a second state. The first state and the second state can be the same or different; preferably, they are different. A synthetic image can be created on the basis of the first input image and the second input image. The synthetic image represents the examination region of the examination object in a third state; the third state differs from the first state and the second state.
[0201] It is also possible to receive a first input image, a second input image and a third input image. The first input image can represent the examination region of the examination object in a first state; the second input image can represent the examination region of the examination object in a second state; the third input image can represent the examination region of the examination object in a third state. The first state, the second state and the third state can be the same or different; preferably, they are different. A synthetic image can be generated on the basis of the first input image, the second input image and the third input image. The synthetic image represents the examination region of the examination object in a fourth state; the fourth state differs from the first state, the second state and the third state.
[0202] Generally, a number m of input images can be received (where m is an integer greater than zero); the input images received represent the examination region of the examination object in a number p of states, where p is an integer from 1 to m. The m input images received can be used as the basis to generate a synthetic image representing the examination region of the examination object in a state differing from the p states.
[0203] The state can be an amount of contrast agent that had been administered to the examination object. The state can be a period of time that has passed since the administration of a contrast agent. The state can be a specific contrast agent that had been administered to the examination object. The state can also be a state without contrast agent. The state can be a modality (e.g. MRI image, CT image, ultrasound image) or a measurement protocol leading to a specific appearance of the examination region in a medical image generated by measurement.
[0204] The at least one input image can comprise, for example, a radiological image of the examination region before and/or after administration of a first amount of a contrast agent and the synthetic image can be a synthetic radiological image of the examination region after the administration of a second amount of a contrast agent, the second amount being preferably greater than the first amount.
[0205] The at least one input image can comprise, for example, a radiological image of the examination region without contrast agent and/or with a lower amount of contrast agent than the standard amount of the contrast agent and the synthetic image can be a synthetic radiological image of the examination region after the administration of the standard amount of contrast agent (as described, for example, in WO 2019/074938 A1 or WO 2022/184297 A1).
[0206] The standard amount is normally the amount recommended by the manufacturer and/or distributor of the contrast agent and/or the amount approved by a regulatory authority and/or the amount specified in a package leaflet for the contrast agent. For example, the standard amount of Primovist is 0.025 mmol Gd-EOB-DTPA disodium/kg body weight; for a patient weighing 80 kg, this corresponds to 80 kg × 0.025 mmol/kg = 2 mmol Gd-EOB-DTPA disodium.
[0207] The at least one input image can comprise, for example, one or more MRI images representing the examination region before and/or after administration of the first amount of an MRI contrast agent and the synthetic image can be a synthetic MRI image after the administration of the second amount of the MRI contrast agent.
[0208] The at least one input image can also comprise a CT image before and/or after the administration of a first amount of an MRI contrast agent and the synthetic image can be a synthetic CT image after the administration of a second amount of an MRI contrast agent, the second amount being preferably greater than the first amount and preferably greater than the standard amount of the MRI contrast agent for MRI examinations (as described, for example, in WO 2023/161041 A1).
[0209] The at least one input image can comprise, for example, one or more radiological images of the examination region in a first period of time before and/or after the administration of a contrast agent and the synthetic image can be a synthetic radiological image of the examination region in a second period of time after the administration of the contrast agent, the second period of time preferably chronologically following the first period of time (as described, for example, in WO 2021/052896 A1).
[0210] The at least one input image can comprise, for example, one or more radiological images of the liver or part of the liver of an examination object that represent the liver or the part of the liver in the native phase (i.e. without contrast agent) and/or in the arterial phase and/or in the portal venous phase and/or in the transitional phase after the administration of a hepatobiliary contrast agent and the synthetic image can be a synthetic radiological image representing the liver or the part of the liver in the hepatobiliary phase after the administration of the hepatobiliary contrast agent. The one or more radiological images can be one or more MRI images and/or one or more CT images. The hepatobiliary contrast agent can be a hepatobiliary MRI contrast agent and/or hepatobiliary CT contrast agent.
[0211] The stated phases are described in more detail, for example, in the following publications: J. Magn. Reson. Imaging, 2012, 35(3): 492-511, DOI: 10.1002/jmri.22833; Clujul Medical, 2015, Vol. 88 no. 4: 438-448, DOI: 10.15386/cjmed-414; Journal of Hepatology, 2019, Vol. 71: 534-542, DOI: 10.1016/j.jhep.2019.05.005.
[0212] A hepatobiliary contrast agent is understood to mean a contrast agent which is specifically taken up by healthy liver cells, the hepatocytes. Examples of hepatobiliary contrast agents are contrast agents based on gadoxetic acid. They are, for example, described in U.S. Pat. No. 6,039,931A. They are commercially available under the trade names Primovist or Eovist for example. A further contrast agent having a lower uptake into the hepatocytes is gadobenate dimeglumine (Multihance). Further hepatobiliary contrast agents are described inter alia in WO 2022/194777.
[0213] The at least one input image can comprise, for example, one or more MRI images of an examination region of an examination object and the synthetic image can be a synthetic CT image of the examination region of the examination object.
[0214] The at least one input image can comprise, for example, one or more CT images of an examination region of an examination object and the synthetic image can be a synthetic MRI image of the examination region of the examination object.
[0215] The at least one input image can comprise, for example, one or more radiological images (e.g. MRI and/or CT images) of an examination region of an examination object that have been generated using a first measurement protocol and the synthetic image can be a synthetic radiological image of the examination region of the examination object using a second measurement protocol, i.e. a synthetic image showing the examination region of the examination object as it would look if the second measurement protocol had been applied instead of the first measurement protocol.
[0216] The synthetic image is generated (predicted) by means of a trained machine-learning model.
[0217] A machine learning model can be understood as meaning a computer-implemented data processing architecture. Such a model is able to receive input data and to supply output data on the basis of said input data and model parameters. Such a model is able to learn a relationship between the input data and the output data through training. During training, the model parameters can be adjusted so as to supply a desired output for a particular input.
[0218] During the training of such a model, the model is presented with training data from which it can learn. The trained machine-learning model is the result of the training process. Besides input data, the training data include the correct output data (target data) that are to be generated by the model on the basis of the input data. During training, patterns that map the input data onto the target data are identified.
[0219] In the training process, the input data of the training data are input into the model, and the model generates output data. The output data are compared with the target data. Model parameters can be altered so as to reduce the deviations between the output data and the target data to a (defined) minimum.
[0220] The training data comprise for each reference object of a multiplicity of reference objects (i) at least one input reference image of a reference region of the reference object in the first state and (ii) a target reference image of the reference region of the reference object in the second state as target data.
[0221] The term reference is used in this description to distinguish the phase of training the machine-learning model from the phase of using the trained model for prediction (i.e. for generation of synthetic images). The term reference otherwise has no limitation on meaning. Statements made in this description concerning at least one input image apply analogously to each input reference image; statements made in this description concerning the examination object apply analogously to each reference object; statements made in this description concerning the examination region apply analogously to the reference region.
[0222] Each reference object is, like the examination object, normally a living being, preferably a mammal, most preferably a human. The reference region is a part of the reference object. The reference region normally (but not necessarily) corresponds to the examination region of the examination object. In other words, when the examination region is an organ or part of an organ (for example the liver or part of the liver) of the examination object, the reference region of each reference object is preferably the corresponding organ or corresponding part of the organ of the respective reference object. The at least one input reference image of the reference region of the reference object in the first state thus usually corresponds to the at least one input image of the examination region of the examination object in the first state, with the difference that the at least one input reference image comes from the reference object and the at least one input image comes from the examination object.
[0223] The additional word input in the term input reference image indicates that the input reference image is intended to be fed to a machine-learning model as input data or at least as part of the input data in the context of training the machine-learning model.
[0224] The additional word target in the term target reference image indicates that the target reference image is used as target data (ground truth) or at least as part of the target data in the context of training the machine-learning model.
[0225] The target reference image is normally not a synthetic image, but the result of a measurement, for example a radiological examination.
[0226] The at least one input reference image comprises a multiplicity of image elements; each image element represents a sub-region of the reference region; each image element is characterized by a color value. Furthermore, the target reference image also comprises a multiplicity of image elements; each image element represents a sub-region of the reference region; each image element is characterized by a color value.
[0227] For each reference object, the at least one input reference image is fed to the machine-learning model. The machine-learning model is configured to generate on the basis of the at least one input reference image (and any further input data) a synthetic reference image. The synthetic reference image should come as close as possible to the (normally measured) target reference image.
[0228] The synthetic reference image comprises a multiplicity of image elements; each image element represents a sub-region of the reference region; each image element is characterized by a color value.
[0229] Each image element of the synthetic reference image respectively corresponds to an image element of the target reference image. Corresponding image elements are those image elements representing the same reference region (or examination region); corresponding image elements normally have the same coordinates.
[0230] The machine-learning model is configured and is trained to predict for each image element of the synthetic reference image a color value. The predicted color value of each image element should come as close as possible to the color value of the corresponding image element of the (normally measured) target reference image.
[0231] Furthermore, the machine-learning model is configured and is trained to predict for each predicted color value an uncertainty value. The uncertainty value reflects the uncertainty of the predicted color value.
[0232] Training of the machine-learning model comprises minimization of a loss function.
[0233] The loss function comprises the color value predicted by the machine-learning model and/or a deviation of the predicted color value from the color value of the corresponding image element of the target reference image as parameters. The loss function further comprises the uncertainty value predicted by the machine-learning model as a further parameter.
[0234] The loss function can have, for example, the following equation (1):

$$L = \frac{1}{N}\sum_{i=1}^{N}\left[\frac{\lVert y_i-\hat{y}_i\rVert^2}{2\,\hat{\sigma}(x_i)^2}+\frac{1}{2}\log\hat{\sigma}(x_i)^2\right]\qquad(1)$$

[0235] Here, {circumflex over (y)}.sub.i is the predicted color value of the image element i of the synthetic reference image and y.sub.i is the color value of the corresponding image element i of the (normally measured) target reference image. N indicates the number of image elements of the synthetic reference image. The term {circumflex over (σ)}(x.sub.i) is a predicted uncertainty value; it depends on the color values x.sub.i of the corresponding image elements of the at least one input reference image.
[0236] The loss function stated here by way of example is based on the loss function proposed by A. Kendall and Y. Gal (see: A. Kendall, Y. Gal: What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, Computer Vision and Pattern Recognition, 2017, arxiv.org/abs/1703.04977).
[0237] The loss function comprises a term corresponding to the L2 loss function (squared Euclidean distance): $\lVert y_i-\hat{y}_i\rVert^2$.
[0238] If during training the machine-learning model predicts color values that come close to (ideally match) the color values of the target data, then the Euclidean distance is small (if the color values match, it is equal to zero) and the loss calculated using the loss function is likewise small.
[0239] If the predicted color values deviate from the target data, then the Euclidean distance is large.
[0240] The Euclidean distance is divided in the loss function by the square of the uncertainty value {circumflex over (σ)}(x.sub.i). As a result, the first term of the loss function, $\lVert y_i-\hat{y}_i\rVert^2/(2\,\hat{\sigma}(x_i)^2)$, remains small even in the event of a large Euclidean distance of the predicted color values from the color values of the target data. In other words, if the predicted color values deviate from the color values of the target data, the loss function L can still be minimized if the uncertainty values are large; a high deviation in the prediction of color values thus leads to a high uncertainty value.
[0241] In order that the model parameters are not altered during training in such a way that the uncertainty values become larger and larger (merely in order to minimize the loss function), the loss function contains a second term for regularization: $\tfrac{1}{2}\log\hat{\sigma}(x_i)^2$.
[0242] The machine-learning model thus implicitly learns the uncertainty values from the loss function; the uncertainty values are not part of the target data; the uncertainty values reflect the uncertainty of the input data, also referred to as heteroscedastic aleatoric uncertainty.
[0243] The above-indicated equation (1) is only one example of a loss function; other equations are conceivable.
[0244] An equation for a suitable loss function thus comprises as parameters not only the predicted color values and/or the deviations of the predicted color values from the color values of the target data, but also one or more uncertainty values. Such an uncertainty value complements the deviation term: it allows an increase in the deviations of the predicted color values from the color values of the target data to be partly compensated, so that the loss function can be minimized even when the deviations are different from zero.
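By way of illustration only, a loss of the type in equation (1) could be implemented as follows in Python (PyTorch). The function name and tensor layout are assumptions, as is the choice of predicting the log-variance log σ̂(x.sub.i).sup.2 instead of σ̂(x.sub.i) itself, a common numerical-stability device that is mathematically equivalent:

```python
import torch

def heteroscedastic_l2_loss(y_pred: torch.Tensor,
                            y_true: torch.Tensor,
                            log_var: torch.Tensor) -> torch.Tensor:
    """Loss of the type in equation (1) (cf. Kendall & Gal, 2017).

    y_pred  : predicted color values (one per image element)
    y_true  : color values of the target reference image
    log_var : predicted logarithm of the variance, log sigma(x_i)^2
    """
    precision = torch.exp(-log_var)                        # 1 / sigma(x_i)^2
    data_term = 0.5 * precision * (y_true - y_pred) ** 2   # first term
    reg_term = 0.5 * log_var                               # regularization term
    return (data_term + reg_term).mean()                   # mean over all N elements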
[0245] Training of the machine-learning model is shown by way of example and in schematic form in the figure described below.
[0246] The figure shows, in schematic form, the training of a machine-learning model MLM on the basis of training data.
[0247] The machine-learning model MLM is trained using training data. The training data comprise for each reference object of a multiplicity of reference objects (i) at least one input reference image RI.sub.1(x.sub.i) of a reference region of the reference object in a first state as input data and (ii) a target reference image RI.sub.2(y.sub.i) of the reference region of the reference object in a second state as target data.
[0248] In the example shown, the training data comprise for each reference object an input reference image RI.sub.1(x.sub.i) as input data and a target reference image RI.sub.2(y.sub.i) as target data.
[0249] The at least one input reference image RI.sub.1(x.sub.i) represents the reference region of the reference object in a first state; the target reference image RI.sub.2(y.sub.i) represents the reference region of the reference object in a second state. The first state and the second state are different states. For example, the state can represent an amount of contrast agent that is or has been introduced into the reference region. For example, the state can represent a specific contrast agent. For example, the state can represent a time before and/or after administration of a contrast agent. For example, the state can represent a modality and/or a measurement protocol. Further examples are mentioned in this disclosure.
[0250] The at least one input reference image RI.sub.1(x.sub.i) is used in the example shown as input data for the machine-learning model MLM.
[0251] The at least one input reference image RI.sub.1(x.sub.i) comprises a multiplicity of image elements (not explicitly shown in the figure); each image element i is characterized by a color value x.sub.i.
[0252] The machine-learning model MLM is trained to predict for each image element i of the synthetic reference image RI.sub.2*({circumflex over (y)}.sub.i) a color value {circumflex over (y)}.sub.i. The prediction is made on the basis of the color values of the at least one input reference image RI.sub.1(x.sub.i).
[0253] Each predicted color value {circumflex over (y)}.sub.i of each image element of the synthetic reference image RI.sub.2*({circumflex over (y)}.sub.i) should come as close as possible (ideally correspond) to the respective color value y.sub.i of the corresponding image element of the target reference image RI.sub.2(y.sub.i). In other words, deviations between the predicted color value {circumflex over (y)}.sub.i of an image element i of the synthetic reference image RI.sub.2*({circumflex over (y)}.sub.i) and the respective color value y.sub.i of the corresponding image element i of the target reference image RI.sub.2(y.sub.i) should be as small as possible (ideally zero).
[0254] The machine-learning model MLM is furthermore trained to predict for each predicted color value {circumflex over (y)}.sub.i an uncertainty value {circumflex over (σ)}(x.sub.i). The uncertainty value {circumflex over (σ)}(x.sub.i) depends on the color values x.sub.i of the image elements i of the at least one input reference image RI.sub.1(x.sub.i).
[0255] Training of the machine-learning model MLM comprises minimization of a loss function L. In other words, for each triple composed of a predicted color value {circumflex over (y)}.sub.i, the color value y.sub.i and the predicted uncertainty value {circumflex over (σ)}(x.sub.i), the loss function L can be used to calculate a loss; by modifying model parameters MP, the loss determined can be reduced; the goal can be that of modifying the model parameters in such a way that the loss determined by means of the loss function L assumes a minimum value for all triples and/or the loss determined cannot be reduced further by modifying the model parameters. The model parameters MP can be modified and the loss function L minimized in an optimization method. The loss function L can have the above-mentioned equation (1) or a different equation.
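A minimal sketch of one such optimization step in Python (PyTorch) is shown below; the single convolutional layer stands in for a real machine-learning model and, like the log-variance output and the Adam optimizer, is purely an illustrative assumption:

```python
import torch

# Stand-in for the machine-learning model MLM: one convolutional layer with
# two output channels, interpreted as predicted color value and log-variance.
model = torch.nn.Conv2d(1, 2, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(input_ref: torch.Tensor, target_ref: torch.Tensor) -> float:
    """One reduction of the loss by modification of the model parameters MP."""
    out = model(input_ref)                      # shape: (batch, 2, H, W)
    y_pred, log_var = out[:, :1], out[:, 1:]    # color value and log-variance
    precision = torch.exp(-log_var)
    loss = (0.5 * precision * (target_ref - y_pred) ** 2
            + 0.5 * log_var).mean()             # loss function of equation (1)
    optimizer.zero_grad()
    loss.backward()                             # backpropagation
    optimizer.step()                            # modify model parameters MP
    return loss.item()

# Usage with random stand-in data (one 64x64 single-channel image pair):
x = torch.randn(1, 1, 64, 64)   # input reference image RI1(x_i)
y = torch.randn(1, 1, 64, 64)   # target reference image RI2(y_i)
print(training_step(x, y))
```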
[0256] After training, the trained machine-learning model can be stored in a data memory and/or output and/or transmitted to another computer system. After training, the trained machine-learning model can be used to generate synthetic images.
[0257] The generation of a synthetic image by means of a trained machine-learning model is shown by way of example and in schematic form in a further figure, which is described below.
[0258] The synthetic image is generated using a trained machine-learning model MLM.sup.t. The trained machine-learning model MLM.sup.t can have been trained as described above.
[0259] The trained machine-learning model MLM.sup.t is fed at least one input image I.sub.1(x.sub.i) of an examination region of an examination object. The at least one input image I.sub.1(x.sub.i) represents the examination region of the examination object in a first state. On the basis of the at least one input image I.sub.1(x.sub.i), the trained machine-learning model MLM.sup.t generates a synthetic image I.sub.2*({circumflex over (y)}.sub.i). The synthetic image I.sub.2*({circumflex over (y)}.sub.i) represents the examination region of the examination object in a second state. The second state is different from the first state.
[0260] For example, the state can represent an amount of contrast agent that is or has been introduced into the examination region. For example, the state can represent a specific contrast agent. For example, the state can represent a time before and/or after administration of a contrast agent. For example, the state can represent a modality and/or a measurement protocol. Further examples are mentioned in this disclosure. The states during prediction normally correspond to the states during training of the machine-learning model.
[0261] The at least one input image I.sub.1(x.sub.i) comprises a multiplicity of image elements (not explicitly shown in the figure); each image element i is characterized by a color value x.sub.i.
[0262] The synthetic image I.sub.2*({circumflex over (y)}.sub.i) likewise comprises a multiplicity of image elements (not explicitly shown in the figure).
[0263] The trained machine-learning model MLM.sup.t is configured and has been trained to predict on the basis of the at least one input image I.sub.1(x.sub.i) a color value {circumflex over (y)}.sub.i for each image element i of the synthetic image I.sub.2*({circumflex over (y)}.sub.i). In other words, each image element i of the synthetic image I.sub.2*({circumflex over (y)}.sub.i) is assigned a predicted color value {circumflex over (y)}.sub.i.
[0264] The synthetic image I.sub.2*({circumflex over (y)}.sub.i) can be output (i.e. for example displayed on a monitor or printed by a printer) and/or stored in a data memory and/or transmitted to a separate computer system.
[0265] The trained machine-learning model MLM.sup.t is furthermore configured and has furthermore been trained to predict for each predicted color value {circumflex over (y)}.sub.i an uncertainty value {circumflex over (σ)}(x.sub.i).
[0266] The uncertainty values predicted for the predicted color values of the image elements of a synthetic image can be used as the basis to determine at least one confidence value.
[0267] The at least one confidence value can be a value indicating the extent to which a synthetic image (e.g. the synthetic image I.sub.2*({circumflex over (y)}.sub.i) in the example described above) can be trusted, i.e. its trustworthiness.
[0268] If trustworthiness is low, then the synthetic image gives rise to a high degree of uncertainty; it is possible that the synthetic image has one or more artifacts; it is possible that structures and/or morphologies and/or textures in the synthetic image have no correspondence in reality, i.e. that structures and/or morphologies and/or textures in the synthetic image cannot be attributed to real structures and/or real morphologies and/or real textures in the examination region.
[0269] By contrast, a high confidence value indicates that the synthetic image has a low degree of uncertainty; features in the synthetic image have a correspondence in reality; the synthetic image can be trusted; a medical diagnosis can be made on the basis of the synthetic image and/or a medical therapy can be initiated on the basis of the synthetic image.
[0270] A confidence value positively correlating with trustworthiness can in principle be converted into a confidence value negatively correlating with trustworthiness, for example by forming the reciprocal (multiplicative inverse). Conversely, a confidence value negatively correlating with trustworthiness can correspondingly also be converted into a confidence value positively correlating with trustworthiness.
[0271] The uncertainty values predicted by the trained machine-learning model indicate for each individual image element the uncertainty of the respective predicted color value. Accordingly, these uncertainty values can be used directly as confidence values. They negatively correlate with trustworthiness; they positively correlate with uncertainty.
[0272] The at least one confidence value can also be a value derived from the uncertainty values.
[0273] Assuming that the predicted color values obey a Gaussian probability distribution, the predicted color value {circumflex over (y)}.sub.i in equation (1) corresponds to the mean, the uncertainty value {circumflex over (σ)}(x.sub.i) to the standard deviation, and the square {circumflex over (σ)}(x.sub.i).sup.2 of the uncertainty value to the variance of the Gaussian distribution function. Instead of the predicted uncertainty value {circumflex over (σ)}(x.sub.i), the square {circumflex over (σ)}(x.sub.i).sup.2 or some other derived variable can thus also be used as the at least one confidence value.
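As an illustrative sketch in Python (NumPy), the variance and one possible reciprocal-based confidence value correlating positively with trustworthiness could be derived as follows; the array shape and the concrete mapping are assumptions:

```python
import numpy as np

# Hypothetical per-image-element uncertainty map sigma_hat(x_i)
# as received from the trained machine-learning model.
sigma_hat = np.abs(np.random.randn(64, 64))

variance = sigma_hat ** 2              # sigma_hat(x_i)^2: the variance
# The uncertainty correlates negatively with trustworthiness; one possible
# conversion into a positively correlating confidence value is a reciprocal:
confidence = 1.0 / (1.0 + variance)    # values in (0, 1]; high = trustworthy
```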
[0274] It is also conceivable that the machine-learning model is configured and is trained to predict a confidence value (instead of the uncertainty value or in addition to the uncertainty value). In such a case, the confidence value (or a variable derived therefrom) is a (further) parameter of the loss function.
[0275] It is thus possible to determine for each individual image element of the synthetic image a confidence value indicating the extent to which the color value of the image element can be trusted.
[0276] If there is more than one color value (e.g. three color values, as in the case of images, the color values of which are specified according to the RGB color model), an uncertainty value can be predicted for each color channel and a confidence value can be determined for each color channel.
[0277] However, it is also possible to combine the uncertainty values of the color channels into a single value and to determine a confidence value on the basis of the combined value; it is possible to determine a confidence value on the basis of the maximum uncertainty value of the color channels; it is possible to determine a confidence value on the basis of a mean (e.g. arithmetic mean, geometric mean, root mean square or some other mean) of the uncertainty values of the color channels; it is possible to determine a confidence value on the basis of the length of the vector specified by the uncertainty values of the color channels in a three-dimensional space (or a higher-dimensional space when using more than three color channels); other possibilities are conceivable.
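For illustration, the combination methods mentioned above could look as follows for a three-channel image in Python (NumPy); the array shapes and contents are assumptions:

```python
import numpy as np

# Hypothetical per-channel uncertainty maps for an RGB synthetic image.
sigma = np.abs(np.random.randn(3, 64, 64))          # (channels, H, W)

combined_max  = sigma.max(axis=0)                   # maximum over the channels
combined_mean = sigma.mean(axis=0)                  # arithmetic mean
combined_rms  = np.sqrt((sigma ** 2).mean(axis=0))  # root mean square
combined_len  = np.linalg.norm(sigma, axis=0)       # length of the 3D vector
```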
[0278] It is possible to use different methods for calculating a confidence value for different sub-regions of the examination object. For example, it is possible that image elements representing a specific tissue and/or organ and/or lesion use a different method for calculating a confidence value than image elements representing another tissue and/or another organ and/or another sub-region. Sub-regions for which there are different calculation rules for confidence values can be identified, for example, by means of segmentation.
[0279] The term segmentation refers to the process of dividing an image into multiple segments, which are also referred to as image segments, image regions or image objects. Segmentation is generally used to locate objects and boundaries (lines, curves, etc.) in images. In a segmented image, the objects located can be separated from the background, visually highlighted (e.g. in color), measured, counted and/or quantified in some other way. In segmentation, each image element of an image is assigned a label (e.g. a number), and so image elements having the same label have certain features in common, for example represent the same tissue (e.g. bone tissue or adipose tissue or healthy tissue or diseased tissue (e.g. tumour tissue) or muscle tissue and/or the like) and/or the same organ. For image elements having a specific label, a specific calculation rule can then be used to calculate a confidence value; for image elements having a different (specific) label, a different (specific) calculation rule can be used to calculate a confidence value.
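A sketch of such label-specific calculation rules in Python (NumPy) is given below; the segmentation labels and the choice of rules are purely illustrative assumptions:

```python
import numpy as np

sigma = np.abs(np.random.randn(64, 64))       # per-image-element uncertainty
labels = np.random.randint(0, 3, (64, 64))    # hypothetical segmentation map

# One calculation rule per label; e.g. a stricter, worst-case rule for
# image elements labelled as lesion:
rules = {0: np.mean,   # label 0: background, mean uncertainty
         1: np.mean,   # label 1: healthy tissue, mean uncertainty
         2: np.max}    # label 2: lesion, maximum (worst-case) uncertainty

confidence_per_label = {label: rule(sigma[labels == label])
                        for label, rule in rules.items()}
```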
[0280] The confidence values determined for image elements can be output (e.g. displayed on a monitor or printed on a printer), stored in a data memory and/or transmitted to a separate computer system, for example via a network.
[0281] The confidence values determined for image elements can also be displayed pictorially.
[0282] In addition to the synthetic image generated on the basis of the at least one input image, a further representation of the examination region can thus be output (e.g. displayed on a monitor), indicating the trustworthiness for each image element. Such a representation is also referred to as confidence representation in this description. The confidence representation preferably has the same dimensions and size as the synthetic image; each image element of the synthetic image is preferably assigned an image element in the confidence representation.
[0283] Such a confidence representation can be used by a user (e.g. a doctor) to identify for each individual image element the extent to which the predicted color value of the image element can be trusted. It is possible to completely or partly superimpose the confidence representation on the synthetic image. It is possible to configure the superimposition in such a way that it can be faded in and out by the user. The user can display the synthetic image layer by layer, for example, as is customary for computed tomography representations, magnetic resonance imaging representations, and other three- or higher-dimensional representations. For each layer, the user can fade in the corresponding layer of the confidence representation in order to check whether the predicted color values of the image elements in the layer that show structures, morphologies and/or textures are trustworthy or uncertain. This allows the user to assess the risk that structures, morphologies and/or textures visible in the synthetic image are artifacts rather than real properties of the examination region.
[0284] For example, image elements having low trustworthiness (having a high degree of uncertainty) can be displayed brightly and/or with a signal color (e.g. red or orange or yellow), whereas image elements having high trustworthiness (having a low degree of uncertainty) can be displayed darkly or with a more inconspicuous or calming color (e.g. green or blue). It is also possible, in the case of a superimposition, to display those image elements for which the confidence value exceeds or falls short of a predefined limit value. If the confidence value correlates positively with trustworthiness, what can be displayed for example are those image elements of the confidence representation, the confidence value of which is below a predefined limit value; in such a case, a user (e.g. a doctor) is explicitly informed of those image elements that they should better not trust.
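A possible visualization sketch in Python (NumPy and Matplotlib) is shown below: image elements whose uncertainty exceeds a predefined limit value are superimposed on the synthetic image in a signal color map. The limit value, color maps and array contents are illustrative assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

synthetic = np.random.rand(64, 64)         # stand-in synthetic image
sigma = np.abs(np.random.randn(64, 64))    # per-image-element uncertainty
limit = 1.0                                # predefined limit value

plt.imshow(synthetic, cmap="gray")
# Superimpose only the image elements whose uncertainty exceeds the limit,
# in a signal color map (high uncertainty shown in red/orange/yellow).
overlay = np.ma.masked_where(sigma <= limit, sigma)
plt.imshow(overlay, cmap="autumn", alpha=0.6)
plt.axis("off")
plt.show()
```

Fading the overlay in and out then amounts to redrawing the image with and without the second imshow call.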
[0285] It is also possible to determine confidence values for sub-regions of the synthetic image (e.g. for layers within a three-dimensional or higher-dimensional synthetic image) and/or for the entire synthetic image. Determination of such confidence values for sub-regions or entire images can be done on the basis of the confidence values of those image elements of which they are composed. For example, a confidence value of a layer can be determined by taking into account all the confidence values of those image elements that lie in said layer. However, it is also possible to also take into account adjacent image elements (e.g. image elements of the layer above and/or below the layer under consideration). A confidence value for a sub-region or the entire region can be determined, for example, by forming a mean (e.g. arithmetic mean, geometric mean, root mean square or some other mean). It is also possible to determine the maximum value (e.g. in the case of a confidence value correlating negatively with trustworthiness) or the minimum value (e.g. in the case of a confidence value correlating positively with trustworthiness) of the confidence values of the image elements of a sub-region or the entire region and to use it as the confidence value of the sub-region or the entire region. Further ways of determining a confidence value for a sub-region or the entire region on the basis of the confidence values of individual image elements are conceivable.
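By way of illustration, per-layer and whole-image values could be aggregated as follows in Python (NumPy); the volume shape is an assumption:

```python
import numpy as np

# Hypothetical uncertainty volume of a three-dimensional synthetic image.
sigma = np.abs(np.random.randn(32, 64, 64))    # (layers, H, W)

layer_mean = sigma.mean(axis=(1, 2))       # one mean value per layer
layer_max  = sigma.max(axis=(1, 2))        # worst case per layer
whole_rms  = np.sqrt((sigma ** 2).mean())  # one value for the entire image
```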
[0286] Such a confidence value for a sub-region or the entire region can be likewise output (e.g. displayed on a monitor or output on a printer), stored in a data memory and/or transmitted to a separate computer system. It can also be displayed pictorially (e.g. in color), as described for the individual confidence values.
[0287] If a confidence value for a sub-region or the entire region that correlates positively with trustworthiness is lower than a predefined limit value, then it is possible that the corresponding sub-region or entire region should not be trusted. It is possible that such a sub-region or the corresponding entire region is not output at all (e.g. not displayed at all), or that it is displayed with a warning indicating that a user should be careful when interpreting the displayed data owing to the uncertainty of the displayed data.
[0288] It is also possible that the user of the computer system/computer program of the present disclosure is given the option, via a user interface, of navigating in the synthetic image to sub-regions having low trustworthiness. For example, the user can be shown the sub-regions having the lowest trustworthiness in a list (e.g. in the form of a list having a number q of sub-regions having the lowest confidence value correlating positively with trustworthiness, where q is a positive integer). By clicking on a list entry, the user can be shown the corresponding sub-region in the form of a synthetic image, a confidence representation and/or an input image and/or a detail thereof.
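An illustrative sketch of such a list in Python (NumPy) is given below; the layer-wise aggregation and the value of q are assumptions:

```python
import numpy as np

sigma = np.abs(np.random.randn(32, 64, 64))    # hypothetical (layers, H, W)
layer_uncertainty = sigma.mean(axis=(1, 2))    # per-layer uncertainty

q = 5  # number of list entries to show the user
# Indices of the q layers with the lowest trustworthiness (highest uncertainty):
worst = np.argsort(layer_uncertainty)[::-1][:q]
for rank, idx in enumerate(worst, start=1):
    print(f"{rank}. layer {idx}: mean uncertainty {layer_uncertainty[idx]:.3f}")
```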
[0289] The machine-learning model can for example be an artificial neural network or comprise such a network.
[0290] An artificial neural network comprises at least three layers of processing elements: a first layer with input neurons (nodes), an Nth layer with at least one output neuron (node) and N−2 inner layers, where N is a natural number greater than 2.
[0291] The input neurons serve to receive the input data, in particular to receive the at least one input image. For example, there can be at least one input neuron for each image element of the at least one input image. Additional input neurons for additional input data can be present (e.g. information about the examination region, information about the examination object, information about the conditions when the at least one input image was generated, information about the first state and/or second state, and/or information about the time at which or period of time in which the at least one input image was generated).
[0292] The output neurons can serve to output a synthetic image.
[0293] The processing elements of the layers between the input neurons and the output neurons are connected to one another in a predetermined pattern with predetermined connection weights.
[0294] Preferably, the artificial neural network is a so-called convolutional neural network (CNN) or comprises such a network.
[0295] A CNN normally consists essentially of an alternating sequence of filter layers (convolutional layers) and aggregation layers (pooling layers), terminating in one or more layers of fully connected neurons (dense/fully connected layers).
[0296] For example, the machine-learning model can have an architecture such as U-Net (see for example O. Ronneberger et al.: U-Net: Convolutional Networks for Biomedical Image Segmentation, arXiv:1505.04597v1). The machine-learning model can have an architecture as described in: V. P. Sudarshan et al.: Towards lower-dose PET using physics-based uncertainty-aware multimodal learning with robustness to out-of-distribution data, Medical Image Analysis 73 (2021) 102187.
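A very small two-headed network sketch in this spirit is shown below in Python (PyTorch); the depth, channel counts and layer choices are illustrative assumptions and far smaller than a real U-Net:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal U-Net-style sketch with two output heads: a predicted
    color value and a log-variance per image element."""

    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)                       # aggregation layer
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec = nn.Sequential(nn.Conv2d(48, 16, 3, padding=1), nn.ReLU())
        self.head_color = nn.Conv2d(16, 1, 1)     # predicted color values
        self.head_logvar = nn.Conv2d(16, 1, 1)    # log-variance per element

    def forward(self, x):
        e = self.enc(x)
        m = self.up(self.mid(self.down(e)))
        d = self.dec(torch.cat([e, m], dim=1))    # U-Net skip connection
        return self.head_color(d), self.head_logvar(d)

color, log_var = TinyUNet()(torch.randn(1, 1, 64, 64))
```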
[0297] The training of the neural network can, for example, be carried out by means of a backpropagation method. The aim is for the network to map the at least one input reference image onto the target reference image as reliably as possible. The quality of the prediction is described by a loss function. The goal is to minimize the loss function. In the backpropagation method, an artificial neural network is taught by the alteration of the connection weights.
[0298] In the trained state, the connection weights between the processing elements contain information regarding the relationships between a multiplicity of input reference images and the corresponding target reference images, which can be used for predictive purposes.
[0299] A cross-validation method can be used in order to divide the data into training and validation data sets. The training data set is used in the backpropagation training of network weights. The validation data set is used in order to check the accuracy of prediction with which the trained network can be applied to unknown data.
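A sketch of such a split in Python (NumPy) is given below; the number of reference image pairs and the number of folds are assumptions:

```python
import numpy as np

n_pairs = 200                           # reference image pairs (assumed)
indices = np.random.permutation(n_pairs)

k = 5                                   # five-fold cross-validation (assumed)
folds = np.array_split(indices, k)
for i in range(k):
    val_idx = folds[i]                                     # validation fold
    train_idx = np.concatenate(folds[:i] + folds[i + 1:])  # training folds
    # train the network on train_idx, then check the accuracy of
    # prediction on the unseen pairs in val_idx ...
```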
[0300] A further figure shows an exemplary embodiment of the computer-implemented training method in the form of a flow chart.
[0301] The computer-implemented training method (100) comprises the steps of:
[0302] (110) receiving training data,
[0303] where the training data comprise for each reference object of a multiplicity of reference objects (i) at least one input reference image of a reference region of the reference object in a first state and (ii) a target reference image of the reference region of the reference object in a second state,
[0304] where the second state is different from the first state,
[0305] where the at least one input reference image comprises a multiplicity of image elements, where each image element of the at least one input reference image represents a sub-region of the reference region, where each image element of the at least one input reference image is characterized by a color value,
[0306] where the target reference image comprises a multiplicity of image elements, where each image element of the target reference image represents a sub-region of the reference region, where each image element of the target reference image is characterized by a color value,
[0307] (120) providing a machine-learning model,
[0308] where the machine-learning model is configured to generate on the basis of the at least one input reference image of the reference region of a reference object and model parameters a synthetic reference image of the reference region of the reference object, where the synthetic reference image comprises a multiplicity of image elements, where each image element of the synthetic reference image corresponds to an image element of the target reference image, where each image element of the synthetic reference image is assigned a predicted color value, where the machine-learning model is configured to predict for each predicted color value an uncertainty value,
[0309] (130) training the machine-learning model, where the training for each reference object of the multiplicity of reference objects comprises:
[0310] (131) inputting the at least one input reference image into the machine-learning model,
[0311] (132) receiving a synthetic reference image from the machine-learning model,
[0312] (133) receiving an uncertainty value for each predicted color value of the synthetic reference image,
[0313] (134) calculating a loss by means of a loss function, where the loss function comprises (i) the predicted color value and/or a deviation between the predicted color value and a color value of the corresponding image element of the target reference image and (ii) the predicted uncertainty value as parameters,
[0314] (135) reducing the loss by modification of model parameters,
[0315] (140) outputting and/or storing the trained machine-learning model and/or transmitting the trained machine-learning model to a separate computer system and/or using the trained machine-learning model to generate a synthetic image and/or to generate at least one confidence value for a synthetic image.
[0316] A further figure shows, by way of example and in schematic form, the training of a machine-learning model on the basis of two input reference images.
[0317] The machine-learning model MLM is trained using training data. The training data comprise for each reference object of a multiplicity of reference objects (i) at least one input reference image of a reference region of the reference object in a first state and (ii) a target reference image of the reference region of the reference object in a second state. The terms multiplicity of reference objects and plurality of reference objects as used herein may mean more than 10 and even more than 100 reference objects.
[0318] In the example shown, the training data comprise for each reference object a first input reference image RI.sub.1(x.sub.1i) and a second input reference image RI.sub.2(x.sub.2i) as input data and a target reference image RI.sub.3(y.sub.i) as target data.
[0319] The first input reference image RI.sub.1(x.sub.1i) represents the reference region of the reference object in a first state; the second input reference image RI.sub.2(x.sub.2i) represents the reference region of the reference object in a second state; the target reference image RI.sub.3(y.sub.i) represents the reference region of the reference object in a third state. The first state and the second state can be the same or different; preferably, the states are different. The third state is different from the first state and the second state. For example, the state can represent an amount of contrast agent that is or has been introduced into the reference region. For example, the state can represent a time before and/or after administration of a contrast agent.
[0320] For example, the first input reference image RI.sub.1(x.sub.1i) can represent the reference region without administration or after administration of a first amount of a contrast agent, the second input reference image RI.sub.2(x.sub.2i) can represent the reference region after administration of a second amount of the contrast agent, and the target reference image RI.sub.3(y.sub.i) can represent the reference region after administration of a third amount of the contrast agent. The first amount can be less than the second amount and the second amount can be less than the third amount (see for example WO 2019/074938 A1, WO 2022/184297 A1).
[0321] For example, the first input reference image RI.sub.1(x.sub.1i) can represent the reference region before administration or in a first period of time after administration of a contrast agent, the second input reference image RI.sub.2(x.sub.2i) can represent the reference region in a second period of time after administration of the contrast agent, and the target reference image RI.sub.3(y.sub.i) can represent the reference region in a third period of time after administration of the contrast agent (see for example WO 2021/052896 A1, WO 2021/069338 A1). The second period of time can follow the first period of time and the third period of time can follow the second period of time.
[0322] Further examples are mentioned in this disclosure.
[0323] The first input reference image RI.sub.1(x.sub.1i) and the second input reference image RI.sub.2(x.sub.2i) serve as input data in the example shown; they are fed to the machine-learning model MLM.
[0324] The first input reference image RI.sub.1(x.sub.1i) comprises a multiplicity of image elements (not explicitly shown in the figure); each image element i is characterized by a color value x.sub.1i. The second input reference image RI.sub.2(x.sub.2i) likewise comprises a multiplicity of image elements; each image element i is characterized by a color value x.sub.2i.
[0325] The machine-learning model MLM is trained to predict for each image element i of the synthetic reference image RI.sub.3*({circumflex over (y)}.sub.i) a color value {circumflex over (y)}.sub.i. The prediction is made on the basis of the color values x.sub.1i of the first input reference image RI.sub.1(x.sub.1i) and the color values x.sub.2i of the second input reference image RI.sub.2(x.sub.2i).
[0326] Each predicted color value {circumflex over (y)}.sub.i of each image element i of the synthetic reference image RI.sub.3*({circumflex over (y)}.sub.i) should come as close as possible (ideally correspond) to the respective color value y.sub.i of the corresponding image element i of the target reference image RI.sub.3(y.sub.i). In other words, deviations between the predicted color value {circumflex over (y)}.sub.i and the respective color value y.sub.i of the corresponding image element of the target reference image RI.sub.3(y.sub.i) should be as small as possible (ideally zero).
[0327] The machine-learning model MLM is furthermore trained to predict for each predicted color value {circumflex over (y)}.sub.i an uncertainty value {circumflex over (σ)}(x.sub.1i, x.sub.2i). The uncertainty value {circumflex over (σ)}(x.sub.1i, x.sub.2i) depends on the color values x.sub.1i and x.sub.2i of the image elements of the first input reference image RI.sub.1(x.sub.1i) and the second input reference image RI.sub.2(x.sub.2i).
[0328] Training of the machine-learning model MLM comprises minimization of a loss function L. In other words, for each triple composed of a predicted color value {circumflex over (y)}.sub.i, the color value y.sub.i and the predicted uncertainty value {circumflex over (σ)}(x.sub.1i, x.sub.2i), the loss function L can be used to calculate a loss; by modifying model parameters MP, the loss determined can be reduced; the goal can be that of modifying the model parameters in such a way that the loss determined by means of the loss function L assumes a minimum value for all triples and/or the loss determined cannot be reduced further by modifying the model parameters. The model parameters MP can be modified and the loss function L minimized in an optimization method. The loss function L can have a form analogous to the above-mentioned equation (1) or be a different equation.
[0329] After training, the trained machine-learning model can be stored in a data memory and/or output and/or transmitted to another computer system. After training, the trained machine-learning model can be used to generate synthetic images.
[0330] The computer-implemented training method shown in the preceding example can be carried out analogously to the training method (100) described above, with a first input reference image and a second input reference image being received and fed to the machine-learning model for each reference object.
[0347] A further figure shows an exemplary embodiment of the computer-implemented method for generating at least one confidence value in the form of a flow chart.
[0348] The computer-implemented method (200) comprises the steps of:
[0349] (210) providing a trained machine-learning model,
[0350] where the trained machine-learning model has been trained on the basis of training data,
[0351] where the training data comprise for each reference object of a multiplicity of reference objects (i) at least one input reference image of a reference region of the reference object in a first state and (ii) a target reference image of the reference region of the reference object in a second state, where the at least one input reference image and the target reference image each comprise a multiplicity of image elements,
[0352] where the machine-learning model is configured and has been trained to generate for each reference object on the basis of the at least one input reference image a synthetic reference image,
[0353] where the synthetic reference image comprises a multiplicity of image elements, where each image element of the synthetic reference image respectively corresponds to an image element of the target reference image,
[0354] where the machine-learning model has been trained to predict for each image element of the synthetic reference image a color value and an uncertainty value for the predicted color value,
[0355] where the training comprises minimization of a loss function, where the loss function comprises (i) the predicted color value and/or a deviation of the predicted color value from a color value of the corresponding image element of the target reference image and (ii) the predicted uncertainty value as parameters,
[0356] (220) receiving at least one input image of an examination region of an examination object, where the at least one input image represents the examination region of the examination object in the first state,
[0357] (230) feeding the at least one input image to the trained machine-learning model,
[0358] (240) receiving a synthetic image from the trained machine-learning model, where the synthetic image represents the examination region of the examination object in the second state,
[0359] (250) receiving an uncertainty value for each image element of the synthetic image,
[0360] (260) determining at least one confidence value on the basis of the received uncertainty values,
[0361] (270) outputting the at least one confidence value.
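By way of illustration, steps (220) to (270) could look as follows in Python (PyTorch); the stand-in model (a single convolutional layer with two output channels) and the aggregation of the uncertainty values into one confidence value are assumptions:

```python
import torch
import torch.nn as nn

# Stand-in for the trained machine-learning model MLM^t: two output
# channels, interpreted as predicted color value and log-variance.
model = nn.Conv2d(1, 2, kernel_size=3, padding=1)
model.eval()

input_image = torch.randn(1, 1, 64, 64)        # step (220): input image
with torch.no_grad():                          # steps (230) and (240)
    out = model(input_image)
color, log_var = out[:, :1], out[:, 1:]        # synthetic image, log-variance
sigma_hat = torch.exp(0.5 * log_var)           # step (250): uncertainty values
confidence = sigma_hat.mean().item()           # step (260): one derived value
print(f"mean uncertainty of the synthetic image: {confidence:.4f}")  # (270)
```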
[0362] A further figure shows, by way of example and in schematic form, the generation of a synthetic image on the basis of two input images.
[0363] The synthetic image is generated using a trained machine-learning model MLM.sup.t. The machine-learning model MLM.sup.t can have been trained as described above.
[0364] The trained machine-learning model MLM.sup.t is fed at least one input image of an examination region of an examination object. The at least one input image represents the examination region of the examination object in a first state.
[0365] In the example shown, the trained machine-learning model MLM.sup.t is fed a first input image I.sub.1(x.sub.1i) and a second input image I.sub.2(x.sub.2i) of the examination region.
[0366] For example, the first input image I.sub.1(x.sub.1i) can represent the examination region without administration or after administration of a first amount of a contrast agent and the second input image I.sub.2(x.sub.2i) can represent the examination region after administration of a second amount of the contrast agent. The first amount can be less than the second amount (see for example WO 2019/074938 A1, WO 2022/184297 A1).
[0367] For example, the first input image I.sub.1(x.sub.1i) can represent the examination region before administration or in a first period of time after administration of a contrast agent and the second input image I.sub.2(x.sub.2i) can represent the examination region in a second period of time after administration of the contrast agent. The second period of time can follow the first period of time (see for example WO 2021/052896 A1, WO 2021/069338 A1).
[0368] Further examples are mentioned in this disclosure.
[0369] The first input image I.sub.1(x.sub.1i) comprises a multiplicity of image elements (not explicitly shown in the figure).
[0370] The second input image I.sub.2(x.sub.2i) likewise comprises a multiplicity of image elements (not explicitly shown in the figure).
[0371] The trained machine-learning model MLM.sup.t is configured and has been trained to generate on the basis of the first input image I.sub.1(x.sub.1i) and the second input image I.sub.2(x.sub.2i) a synthetic image I.sub.3*(.sub.i).
[0372] The synthetic image I.sub.3*(.sub.i) represents the examination region of the examination object in a third state. For example, the synthetic image I.sub.3*(.sub.i) can represent the examination region after administration of a third amount of the contrast agent. The third amount can be greater than the second amount (see for example WO 2019/074938 A1, WO 2022/184297 A1). For example, the synthetic image I.sub.3*(.sub.i) can represent the examination region in a third period of time after administration of the contrast agent. The third period of time can follow the second period of time (see for example WO 2021/052896 A1, WO 2021/069338 A1). Further examples are mentioned in this disclosure.
[0373] The synthetic image I.sub.3*(.sub.i) can be output (i.e. for example displayed on a monitor or printed by a printer) and/or stored in a data memory and/or transmitted to a separate computer system.
[0374] The synthetic image I.sub.3*(.sub.i) comprises a multiplicity of image elements (not explicitly shown in the figure).
[0375] The trained machine-learning model MLM.sup.t is configured and has been trained to predict on the basis of the first input image I.sub.1(x.sub.1i) and the second input image I.sub.2(x.sub.2i) a color value .sub.i for each image element i of the synthetic image I.sub.3*(.sub.i). In other words, each image element i of the synthetic image I.sub.3*(.sub.i) is assigned a predicted color value .sub.i.
[0376] The trained machine-learning model MLM.sup.t is furthermore configured and has furthermore been trained to predict for each predicted color value .sub.i an uncertainty value {circumflex over ()}(x.sub.1i, x.sub.2i).
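A minimal sketch of a model of this kind is given below, assuming a small convolutional network with a shared backbone and two output heads, one for the predicted color values and one for the predicted uncertainty values; the concrete architecture is an assumption for illustration, not the architecture of this disclosure.

```python
import torch
import torch.nn as nn

class TwoHeadedGenerator(nn.Module):
    """Illustrative sketch: maps two input images (stacked as
    channels) to (i) a predicted color value and (ii) a predicted
    uncertainty value for each image element of the synthetic
    image."""

    def __init__(self, in_channels: int = 2, features: int = 32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
        )
        self.color_head = nn.Conv2d(features, 1, 1)    # predicted color values
        self.logvar_head = nn.Conv2d(features, 1, 1)   # predicted log-variance

    def forward(self, x1: torch.Tensor, x2: torch.Tensor):
        h = self.backbone(torch.cat([x1, x2], dim=1))
        y_hat = self.color_head(h)
        # Exponentiating half the log-variance yields a standard
        # deviation, used here as the uncertainty value.
        uncertainty = torch.exp(0.5 * self.logvar_head(h))
        return y_hat, uncertainty
```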
[0377] The uncertainty values predicted for the color values of the image elements of a synthetic image can be used as the basis for determining at least one confidence value.
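The specific aggregation is not fixed in this passage; as a hedged example, the per-image-element uncertainty values could be reduced to confidence values as follows (the chosen statistics and the threshold are assumptions):

```python
import torch

def confidence_values(uncertainty: torch.Tensor,
                      threshold: float = 0.1) -> dict:
    """Illustrative aggregations of the per-image-element
    uncertainty values of a synthetic image into confidence
    values: a global mean, the worst case, and the fraction of
    image elements with low predicted uncertainty."""
    return {
        "mean_uncertainty": float(uncertainty.mean()),
        "max_uncertainty": float(uncertainty.max()),
        "fraction_confident": float((uncertainty < threshold).float().mean()),
    }
```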
[0378] The method for generating at least one confidence value for a synthetic image is shown in the figure by way of example.
[0393] A computer system is an electronic data processing system that processes data by means of programmable calculation rules. Such a system typically comprises a computer, which is the unit that includes a processor for carrying out logic operations, and peripherals.
[0394] In computer technology, the term peripherals denotes all devices that are connected to the computer and serve to control the computer and/or act as input and output devices. Examples thereof are a monitor (screen), a printer, a scanner, a mouse, a keyboard, drives, a camera, a microphone, speakers, etc. Internal ports and expansion cards are also regarded as peripherals in computer technology.
[0395] The computer system (10) shown in the figure comprises a receiving unit (11), a control and calculation unit (12) and an output unit (13).
[0396] The control and calculation unit (12) serves for control of the computer system (10), for coordination of the data flows between the units of the computer system (10), and for the performance of calculations.
[0397] The control and calculation unit (12) is configured: [0398] to provide a trained machine-learning model, [0399] where the trained machine-learning model has been trained on the basis of training data, [0400] where the training data comprise for each reference object of a multiplicity of reference objects (i) at least one input reference image of a reference region of the reference object in a first state and (ii) a target reference image of the reference region of the reference object in a second state, where the at least one input reference image and the target reference image each comprise a multiplicity of image elements, [0401] where the machine-learning model is configured and has been trained to generate for each reference object on the basis of the at least one input reference image a synthetic reference image, [0402] where the synthetic reference image comprises a multiplicity of image elements, where each image element of the synthetic reference image respectively corresponds to an image element of the target reference image, [0403] where the machine-learning model has been trained to predict for each image element of the synthetic reference image a color value and an uncertainty value for the predicted color value, [0404] where the training comprises minimization of a loss function, where the loss function comprises (i) the predicted color value and/or a deviation of the predicted color value from a color value of the corresponding image element of the target reference image and (ii) the predicted uncertainty value as parameters, [0405] to cause the receiving unit (11) to receive at least one input image of an examination region of an examination object, where the at least one input image represents the examination region of the examination object in the first state, [0406] to feed the at least one input image to the trained machine-learning model, [0407] to receive from the trained machine-learning model a synthetic image, where the synthetic image represents the examination region of the examination object in the second state, [0408] to receive from the trained machine-learning model an uncertainty value for each image element of the synthetic image, [0409] to determine at least one confidence value on the basis of the received uncertainty values, [0410] to cause the output unit (13) to output the at least one confidence value, to store it in a data memory and/or to transmit it to a separate computer system.
[0412] The processing unit (21) may comprise one or more processors alone or in combination with one or more memories. The processing unit (21) may be customary computer hardware that is able to process information such as digital images, computer programs and/or other digital information. The processing unit (21) normally consists of an arrangement of electronic circuits, some of which can be designed as an integrated circuit or as a plurality of integrated circuits connected to one another (an integrated circuit is sometimes also referred to as a chip). The processing unit (21) may be configured to execute computer programs that can be stored in a working memory of the processing unit (21) or in the memory (22) of the same or of a different computer system.
[0413] The memory (22) may be customary computer hardware that is able to store information such as digital images (for example representations of the examination region), data, computer programs and/or other digital information temporarily and/or permanently. The memory (22) may comprise a volatile and/or non-volatile memory and may be fixed in place or removable. Examples of suitable memories are RAM (random access memory), ROM (read-only memory), a hard disk, a flash memory, a removable computer diskette, an optical disc, a magnetic tape or a combination of the aforementioned. Optical discs can include compact discs with read-only memory (CD-ROM), compact discs with read/write function (CD-R/W), DVDs, Blu-ray discs and the like.
[0414] The processing unit (21) may be connected not just to the memory (22), but also to one or more interfaces (11, 12, 31, 32, 33) in order to display, transmit and/or receive information. The interfaces may comprise one or more communication interfaces (11, 32, 33) and/or one or more user interfaces (12, 31). The one or more communication interfaces may be configured to send and/or receive information, for example to and/or from an MRI scanner, a CT scanner, an ultrasound camera, other computer systems, networks, data memories or the like. The one or more communication interfaces may be configured to transmit and/or receive information via physical (wired) and/or wireless communication connections. The one or more communication interfaces may comprise one or more interfaces for connection to a network, for example using technologies such as cellular telephony, Wi-Fi, satellite, cable, DSL, optical fiber and/or the like. In some examples, the one or more communication interfaces may comprise one or more short-range communication interfaces configured to connect devices by short-range communication technologies such as NFC, RFID, Bluetooth, Bluetooth LE, ZigBee, infrared (e.g. IrDA) or the like.
[0415] The user interfaces may include a display (31). A display (31) may be configured to display information to a user. Suitable examples thereof are a liquid crystal display (LCD), a light-emitting diode display (LED), a plasma display panel (PDP) or the like. The user input interface(s) (11, 12) may be wired or wireless and may be configured to receive information from a user into the computer system (1), for example for processing, storage and/or display. Suitable examples of user input interfaces are a microphone, an image- or video-recording device (for example a camera), a keyboard or a keypad, a joystick, a touch-sensitive surface (separate from a touchscreen or integrated therein) or the like. In some examples, the user interfaces may include automatic identification and data capture (AIDC) technology for machine-readable information. This can include barcodes, radio-frequency identification (RFID), magnetic strips, optical character recognition (OCR), integrated circuit cards (ICC) and the like. The user interfaces may in addition comprise one or more interfaces for communication with peripherals such as printers and the like.
[0416] One or more computer programs (40) may be stored in the memory (22) and executed by the processing unit (21), which is thereby programmed to fulfill the functions described in this description. The retrieving, loading and execution of instructions of the computer program (40) may take place sequentially, such that in each case one instruction is retrieved, loaded and executed at a time. However, the retrieving, loading and/or execution may also take place in parallel.
[0417] The computer system of the present disclosure may be designed as a laptop, notebook, netbook and/or tablet PC; it may also be a component of an MRI scanner, a CT scanner or an ultrasound diagnostic device.
[0418] The present disclosure also provides a computer program product. Such a computer program product includes a non-volatile data carrier, for example a CD, a DVD, a USB stick or another data storage medium. Stored on the data carrier is a computer program. The computer program can be loaded into a working memory of a computer system (more particularly into a working memory of a computer system of the present disclosure), where the computer program causes the computer system to: [0419] provide a trained machine-learning model, [0420] where the trained machine-learning model has been trained on the basis of training data, [0421] where the training data comprise for each reference object of a multiplicity of reference objects (i) at least one input reference image of a reference region of the reference object in a first state and (ii) a target reference image of the reference region of the reference object in a second state, where the at least one input reference image and the target reference image each comprise a multiplicity of image elements, [0422] where the machine-learning model is configured and has been trained to generate for each reference object on the basis of the at least one input reference image a synthetic reference image, [0423] where the synthetic reference image comprises a multiplicity of image elements, where each image element of the synthetic reference image respectively corresponds to an image element of the target reference image, [0424] where the machine-learning model has been trained to predict for each image element of the synthetic reference image a color value and an uncertainty value for the predicted color value, [0425] where the training comprises minimization of a loss function, where the loss function comprises (i) the predicted color value and/or a deviation of the predicted color value from a color value of the corresponding image element of the target reference image and (ii) the predicted uncertainty value as parameters, [0426] receive at least one input image of an examination region of an examination object, where the at least one input image represents the examination region of the examination object in the first state, [0427] feed the at least one input image to the trained machine-learning model, [0428] receive a synthetic image from the trained machine-learning model, where the synthetic image represents the examination region of the examination object in the second state, [0429] receive an uncertainty value for each image element of the synthetic image, [0430] determine at least one confidence value on the basis of the received uncertainty values, [0431] output the at least one confidence value.
[0432] The computer program product can also be marketed in combination (in a set) with the contrast agent. Such a set is also referred to as a kit. Such a kit comprises the contrast agent and the computer program product. It is also possible for such a kit to comprise the contrast agent and means that allow the purchaser to obtain the computer program, for example by downloading it from a webpage. These means may include a link, i.e. an address of the webpage on which the computer program can be obtained, for example from which the computer program can be downloaded to a computer system connected to the internet. These means may include a code (for example an alphanumeric string or a QR code, or a DataMatrix code or a barcode or another optically and/or electronically readable code) that gives the purchaser access to the computer program. Such a link and/or code may for example be printed on a packaging of the contrast agent and/or on a package leaflet of the contrast agent. A kit is thus a combination product comprising a contrast agent and a computer program (for example in the form of access to the computer program or in the form of executable program code on a data carrier) that are offered for sale together.