PROVIDING A SCENE WITH SYNTHETIC CONTRAST
20220051401 · 2022-02-17
CPC classification
G16H50/20
PHYSICS
G16H20/40
PHYSICS
A61B6/504
HUMAN NECESSITIES
A61B6/5217
HUMAN NECESSITIES
A61B6/5211
HUMAN NECESSITIES
International classification
A61B6/00
HUMAN NECESSITIES
Abstract
A computer-implemented method for providing a scene with synthetic contrast includes receiving preoperative image data of an examination region of an examination subject, the examination region containing a hollow organ, wherein the preoperative image data images a contrast agent flow in the hollow organ; receiving intraoperative image data of the examination region of the examination subject, wherein the intraoperative image data images a medical object at least partially disposed in the hollow organ; generating the scene with synthetic contrast by applying a trained function to input data, wherein the input data is based on the preoperative image data and the intraoperative image data, wherein the scene with synthetic contrast images a virtual contrast agent flow in the hollow organ taking into account the medical object disposed therein, and wherein at least one parameter of the trained function is based on a comparison between a training scene and a comparison scene; and providing the scene with synthetic contrast.
Claims
1. A computer-implemented method for providing a scene with synthetic contrast, the method comprising: receiving preoperative image data of an examination region of an examination subject, wherein the examination region comprises a hollow organ, and wherein the preoperative image data images a contrast agent flow in the hollow organ; receiving intraoperative image data of the examination region of the examination subject, wherein the intraoperative image data images a medical object at least partially disposed in the hollow organ; generating the scene with the synthetic contrast by applying a trained function to input data, wherein the input data is based on the preoperative image data and the intraoperative image data, wherein the scene with the synthetic contrast images a virtual contrast agent flow in the hollow organ taking into account the medical object at least partially disposed therein, and wherein at least one parameter of the trained function is based on a comparison between a training scene and a comparison scene; and providing the scene with the synthetic contrast.
2. The method of claim 1, wherein the scene with the synthetic contrast comprises a time-resolved three-dimensional (3D) image of the virtual contrast agent flow taking into account the medical object at least partially disposed in the hollow organ.
3. The method of claim 2, wherein the scene with the synthetic contrast comprises at least one synthetic time-resolved two-dimensional (2D) image of the virtual contrast agent flow taking into account the medical object at least partially disposed in the hollow organ.
4. The method of claim 3, wherein the at least one synthetic time-resolved 2D image is generated by a virtual projection of the time-resolved 3D image.
5. The method of claim 1, wherein the scene with the synthetic contrast comprises at least one synthetic time-resolved two-dimensional (2D) image of the virtual contrast agent flow taking into account the medical object at least partially disposed in the hollow organ.
6. The method of claim 1, wherein the input data is additionally based on a material parameter relating to the medical object, an operating parameter relating to the medical object, shape information relating to the medical object, a physiological parameter of the examination subject, or a combination thereof.
7. The method of claim 1, wherein the input data is additionally based on a parameter for the virtual contrast agent flow, and wherein the parameter specifies one or more of a dose, a motion speed, or a motion direction of the virtual contrast agent flow.
8. A computer-implemented method for providing a trained function, the method comprising: receiving preoperative training image data of a training examination region of a training examination subject, wherein the training examination region comprises a hollow organ, and wherein the preoperative training image data images a contrast agent flow in the hollow organ; receiving intraoperative training image data of the training examination region of the training examination subject, wherein the intraoperative training image data images a medical object at least partially disposed in the hollow organ; receiving a contrast-weighted comparison scene of a medical imaging device, wherein the contrast-weighted comparison scene images a further contrast agent flow in the hollow organ, wherein the medical object is at least partially disposed in the hollow organ, and/or generating a comparison scene with synthetic contrast by applying a deformation correction to the preoperative training image data, wherein the deformation correction is based on the intraoperative training image data, wherein the comparison scene with synthetic contrast images a virtual comparison contrast agent flow in the hollow organ taking into account the medical object at least partially disposed therein, and wherein the virtual comparison contrast agent flow is simulated; generating a training scene with synthetic contrast by applying the trained function to input data, wherein the input data is based on the preoperative training image data and the intraoperative training image data; adjusting at least one parameter of the trained function based on a comparison between the training scene and the comparison scene; and providing the trained function.
9. The method of claim 8, wherein the comparison scene comprises a time-resolved three-dimensional (3D) comparison image of the further contrast agent flow and/or of the virtual comparison contrast agent flow taking into account the medical object at least partially disposed in the hollow organ.
10. The method of claim 9, wherein the comparison scene comprises at least one time-resolved two-dimensional (2D) comparison image of the further contrast agent flow and/or of the virtual comparison contrast agent flow taking into account the medical object at least partially disposed in the hollow organ.
11. The method of claim 10, wherein the at least one time-resolved 2D comparison image is generated by a virtual projection of the time-resolved 3D comparison image.
12. The method of claim 8, wherein the comparison scene comprises at least one time-resolved two-dimensional (2D) comparison image of the further contrast agent flow and/or of the virtual comparison contrast agent flow taking into account the medical object at least partially disposed in the hollow organ.
13. The method of claim 8, wherein the virtual comparison contrast agent flow is simulated taking into account a deformation and/or constriction of a cross-section of the hollow organ due to the medical object.
14. The method of claim 8, wherein the input data is additionally based on a training material parameter relating to the medical object, a training operating parameter relating to the medical object, training shape information relating to the medical object, a physiological training parameter of the training examination subject, or a combination thereof.
15. The method of claim 8, wherein the input data is additionally based on a training parameter for the further contrast agent flow and/or for the virtual comparison contrast agent flow, and wherein the training parameter specifies one or more of a dose, a motion speed, or a motion direction of the further contrast agent flow and/or of the virtual comparison contrast agent flow.
16. A provisioning unit comprising: an interface; a computing unit; and a memory, wherein the interface, the memory, and the computing unit are configured to: receive preoperative image data of an examination region of an examination subject, wherein the examination region comprises a hollow organ, and wherein the preoperative image data images a contrast agent flow in the hollow organ; receive intraoperative image data of the examination region of the examination subject, wherein the intraoperative image data images a medical object at least partially disposed in the hollow organ; generate a scene with a synthetic contrast by applying a trained function to input data, wherein the input data is based on the preoperative image data and the intraoperative image data, wherein the scene with the synthetic contrast images a virtual contrast agent flow in the hollow organ taking into account the medical object at least partially disposed therein, and wherein at least one parameter of the trained function is based on a comparison between a training scene and a comparison scene; and provide the scene with the synthetic contrast.
17. The provisioning unit of claim 16, wherein the provisioning unit is a component of a medical imaging device.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0083] Exemplary embodiments of the disclosure are illustrated in the drawings and are described in more detail hereinbelow. The same reference characters are used for like features in different figures.
DETAILED DESCRIPTION
[0093] An advantageous embodiment of the proposed computer-implemented method for providing a scene with synthetic contrast PROV-S is represented schematically in the figures.
[0094] Furthermore, the proposed method for providing a scene with synthetic contrast PROV-S may include a receiving REC-PARAM of a parameter PARAM, which parameter PARAM may include a material parameter and/or an operating parameter and/or shape information relating to the medical object MO and/or a physiological parameter of the examination subject. Advantageously, the input data of the trained function TF may additionally be based on the parameter PARAM. In this case, the parameter PARAM may be provided by the medical object MO, in particular by a processing unit of the medical object MO. Furthermore, the parameter PARAM, in particular the physiological parameter of the examination subject, may be provided by a sensor for detecting the physiological parameter, for example, a breath sensor and/or a motion sensor and/or a pulse sensor and/or a blood pressure sensor.
[0095] Furthermore, the parameter PARAM may include a parameter for the contrast agent flow pFI and/or a parameter for the virtual contrast agent flow vFI, which may specify a dose and/or a motion speed and/or a motion direction of the respective contrast agent flow. The parameter for the virtual contrast agent flow vFI may be specified and/or adjusted by a user via an input unit. Furthermore, the parameter for the contrast agent flow pFI may be provided by a device for monitoring a contrast agent injection at the time of the acquisition of the preoperative image data pID.
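By way of illustration only, the assembly of the input data from the preoperative image data pID, the intraoperative image data iID, and the parameter PARAM, followed by the application of the trained function TF, might be sketched as follows in Python; every function and variable name in this sketch is an assumption for illustration and not part of the disclosure:

    # Illustrative sketch only: all names are assumptions, not the disclosed embodiment.
    import numpy as np

    def generate_synthetic_scene(preop_id, intraop_id, param, trained_function):
        """Apply the trained function TF to input data based on pID, iID, and PARAM."""
        # Encode the optional parameter PARAM (e.g., dose, motion speed, and
        # motion direction of the virtual contrast agent flow) as a numeric vector.
        param_vec = np.array([param.get("dose", 1.0),
                              param.get("motion_speed", 0.0),
                              param.get("motion_direction", 0.0)],
                             dtype=np.float32)
        input_data = {"preoperative": preop_id,
                      "intraoperative": intraop_id,
                      "parameters": param_vec}
        # The trained function maps the input data to the scene with synthetic
        # contrast S, e.g., a time-resolved 3D image of the virtual flow vFI.
        return trained_function(input_data)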
[0096] Preoperative image data pID and intraoperative image data iID are represented schematically in the figures.
[0097] The intraoperative image data iID illustrated schematically in the figures images the medical object MO at least partially disposed in the hollow organ iHO.
[0098] A scene with synthetic contrast S, in particular an individual image of the scene with synthetic contrast S, is represented schematically in the figures.
[0099] Furthermore, the scene with synthetic contrast S, (e.g., the graphical representation of the virtual contrast agent flow vFI), may include an image, (e.g., a color-coded image), of a deviation between the virtual contrast agent flow vFI and the contrast agent flow pFI. In particular, a deformation correction may be applied to the preoperative image data pID for this purpose, where the graphical representation may include a difference and/or a quotient between the deformation-corrected preoperative image data pID and the scene with synthetic contrast S. By this means, a change in the flow dynamics due to the medical object MO intraoperatively disposed at least partially in the hollow organ may be perceptible in a particularly intuitive manner.
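A minimal sketch of such a deviation image, assuming the deformation correction is available as a callable that registers pID onto the intraoperative geometry and using the difference or quotient described above (all names are illustrative):

    # Illustrative sketch only; the deformation correction is an assumed callable.
    import numpy as np

    def deviation_image(preop_id, scene_s, deformation_correction, mode="difference"):
        """Deviation between the contrast agent flow pFI and the virtual flow vFI."""
        corrected = deformation_correction(preop_id)  # deformation-corrected pID
        if mode == "difference":
            return scene_s - corrected
        # Quotient variant; a small epsilon avoids division by zero in
        # regions without contrast.
        return scene_s / (corrected + 1e-6)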
[0100] The scene with synthetic contrast S may include a time-resolved three-dimensional (3D) image 3D-SD of the virtual contrast agent flow vFI taking into account the medical object MO at least partially disposed in the hollow organ iHO.
[0101] Alternatively, or in addition, the scene with synthetic contrast S may include at least one synthetic time-resolved 2D image 2D-SD of the virtual contrast agent flow vFI taking into account the medical object MO at least partially disposed in the hollow organ iHO. In this case, the at least one synthetic time-resolved 2D image 2D-SD may be generated by a virtual projection PROJ, (e.g., an intensity projection), of the time-resolved 3D image 3D-SD.
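The virtual projection PROJ might, for example, be realized as a simple per-frame intensity projection of the time-resolved 3D image 3D-SD. The following sketch assumes a maximum intensity projection along one spatial axis, which is only one possible choice of projection geometry:

    # Illustrative sketch only: a maximum intensity projection is assumed.
    import numpy as np

    def virtual_projection(scene_3d, spatial_axis=0):
        """Project a time-resolved 3D image of shape (t, z, y, x) to a
        time-resolved 2D image, one synthetic 2D frame per time step."""
        # spatial_axis 0/1/2 selects z/y/x; +1 skips the leading time axis.
        return scene_3d.max(axis=spatial_axis + 1)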
[0102] An advantageous embodiment of a computer-implemented method for providing a trained function PROV-TF is represented schematically in the figures.
[0103] Furthermore, the proposed method for providing a trained function PROV-TF may include a receiving REC-TPARAM of a training parameter TPARAM, which training parameter TPARAM may include a training material parameter and/or a training operating parameter and/or training shape information relating to the medical object MO and/or a physiological training parameter of the training examination subject. Advantageously, the input data of the trained function TF may additionally be based on the training parameter TPARAM. In this case, the training parameter TPARAM may be provided by the medical object MO, in particular by a processing unit of the medical object MO. Furthermore, the training parameter TPARAM, (e.g., the physiological training parameter of the training examination subject), may be provided by a sensor for detecting the physiological training parameter, (e.g., a breath sensor and/or a motion sensor and/or a pulse sensor and/or a blood pressure sensor).
[0104] Furthermore, the training parameter TPARAM may include a training parameter for the contrast agent flow pFI and/or for the virtual contrast agent flow vFI, which training parameter may specify a dose and/or a motion speed and/or a motion direction of the respective contrast agent flow.
[0105] Furthermore, a comparison scene VS with synthetic contrast may be generated by applying a deformation correction to the preoperative training image data, wherein the deformation correction is based on the intraoperative training image data. The comparison scene VS images a virtual comparison contrast agent flow in the hollow organ iHO taking into account the medical object MO at least partially disposed therein, the virtual comparison contrast agent flow being simulated.
[0106] Advantageously, the virtual comparison contrast agent flow may be simulated in this case taking into account a deformation and/or constriction of the cross-section of the hollow organ iHO due to the medical object MO.
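Purely as an illustration of how the constriction of the cross-section by the medical object MO could enter such a simulation, the following sketch estimates the volumetric flow through the annular residual lumen using the Hagen-Poiseuille relation; this particular flow model is an assumption for illustration and is not prescribed by the disclosure:

    # Illustrative sketch only; the annular Hagen-Poiseuille model is an assumption.
    import math

    def annular_flow_rate(vessel_radius, object_radius, pressure_gradient,
                          viscosity=3.5e-3):
        """Volumetric flow through the residual lumen between the hollow-organ
        wall (radius R) and a coaxial medical object MO (radius a)."""
        R, a = vessel_radius, object_radius
        if a <= 0.0:
            # No object in the lumen: plain Poiseuille flow.
            return math.pi * pressure_gradient * R**4 / (8.0 * viscosity)
        ring = R**2 - a**2
        # Annular Poiseuille solution accounting for the constricted cross-section.
        return (math.pi * pressure_gradient / (8.0 * viscosity)
                * (R**4 - a**4 - ring**2 / math.log(R / a)))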
[0107] The comparison scene VS may include a time-resolved 3D comparison image 3D-VSD of the further contrast agent flow and/or of the virtual comparison contrast agent flow taking into account the medical object MO at least partially disposed in the hollow organ iHO. Alternatively or in addition, the comparison scene VS may include at least one time-resolved 2D comparison image 2D-VSD thereof.
[0108] Analogously thereto, the training scene TS may include a time-resolved 3D training image 3D-TSD of the virtual contrast agent flow taking into account the medical object MO at least partially disposed in the hollow organ iHO. Alternatively or in addition, the training scene TS may include at least one time-resolved 2D training image 2D-TSD of the virtual contrast agent flow taking into account the medical object MO at least partially disposed in the hollow organ iHO. The comparison between the training scene TS and the comparison scene VS may advantageously include a comparison between the time-resolved 3D training image 3D-TSD and the time-resolved 3D comparison image 3D-VSD. Alternatively, or in addition, the comparison between the training scene TS and the comparison scene VS may include a comparison between the at least one time-resolved 2D training image 2D-TSD and the at least one time-resolved 2D comparison image 2D-VSD. Advantageously, the at least one parameter of the trained function TF may be adjusted ADJ-TF in such a way that the respective deviation may be minimized.
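Assuming the trained function TF is a differentiable model, the adjustment ADJ-TF of its at least one parameter might be sketched as a conventional gradient-based training step. PyTorch is used here purely as an illustrative framework, and the mean squared error is only one possible choice for the comparison between TS and VS:

    # Illustrative sketch only; PyTorch and the MSE comparison are assumptions.
    import torch

    def adjust_tf(model, optimizer, input_data, comparison_scene, project=None):
        """One training step: minimize the deviation between the training scene TS
        and the comparison scene VS."""
        training_scene = model(input_data)  # training scene TS
        if project is not None:
            # If VS only contains 2D comparison images 2D-VSD, project the
            # 3D training image 3D-TSD before the comparison (see the
            # projection sketch above).
            training_scene = project(training_scene)
        loss = torch.nn.functional.mse_loss(training_scene, comparison_scene)
        optimizer.zero_grad()
        loss.backward()   # gradients of the deviation w.r.t. the TF parameters
        optimizer.step()  # adjust at least one parameter of the trained function
        return loss.item()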
[0109] In particular, the at least one time-resolved 2D training image 2D-TSD may be generated by a virtual projection of the time-resolved 3D training image 3D-TSD. This may be advantageous for the comparison between the training scene and the comparison scene in particular when the comparison scene only contains at least one 2D comparison image 2D-VSD.
[0110] A proposed provisioning unit PRVS is schematically represented in the figures. The provisioning unit PRVS may include an interface IF, a computing unit CU, and a memory unit MU.
[0111] A proposed training unit TRS is schematically represented in the figures. The training unit TRS may include a training interface TIF, a training computing unit TCU, and a training memory unit TMU.
[0112] The provisioning unit PRVS and/or the training unit TRS may be a computer, a microcontroller, or an integrated circuit. Alternatively, the provisioning unit PRVS and/or the training unit TRS may be a real or virtual network of interconnected computers (a technical term for a real network is “cluster”; a technical term for a virtual network is “cloud”). The provisioning unit PRVS and/or the training unit TRS may also be embodied as a virtual system that is implemented on a real computer or a real or virtual network of interconnected computers (virtualization).
[0113] An interface IF and/or a training interface TIF may be a hardware or software interface (for example, PCI bus, USB, or FireWire). A computing unit CU and/or a training computing unit TCU may include hardware elements or software elements, (e.g., a microprocessor or a Field Programmable Gate Array (FPGA)). A memory unit MU and/or a training memory unit TMU may be realized as a volatile working memory (Random Access Memory, RAM) or as a nonvolatile mass storage device (e.g., a hard disk, USB stick, SD card, or Solid State Disk (SSD)).
[0114] The interface IF and/or the training interface TIF may include a number of subsidiary interfaces that perform different acts of the respective methods. In other words, the interface IF and/or the training interface TIF may also be understood as a plurality of interfaces IF or a plurality of training interfaces TIF. The computing unit CU and/or the training computing unit TCU may include a number of subsidiary computing units that perform different acts of the respective methods. In other words, the computing unit CU and/or the training computing unit TCU may also be understood as a plurality of computing units CU or a plurality of training computing units TCU.
[0115] A medical C-arm x-ray device 37 with a provisioning unit PRVS is represented schematically in the figures by way of example.
[0116] In this case, the medical C-arm x-ray device 37 advantageously includes a detector 34, (e.g., an x-ray detector), and an x-ray source 33. In order to acquire the preoperative image data pID and the intraoperative image data iID, an arm 38 of the C-arm x-ray device 37 may be mounted so as to be movable about one or more axes. The medical C-arm x-ray device 37 may further include a motion device 39 which enables the C-arm x-ray device 37 to execute a movement in space.
[0117] For the purpose of acquiring the preoperative image data pID and the intraoperative image data iID of an examination region UB of an examination subject 31 disposed on a patient support and positioning device 32, the provisioning unit PRVS may send a signal 24 to the x-ray source 33. The x-ray source 33 may thereupon emit a beam of x-rays. When the beam of x-rays, following an interaction with the examination region UB, is incident on a surface of the detector 34, the detector 34 may send a signal 21 to the provisioning unit PRVS. The provisioning unit PRVS may receive the preoperative image data pID and/or the intraoperative image data iID, for example, on the basis of the signal 21.
[0118] Furthermore, the medical C-arm x-ray device 37 may include an input unit 42, (e.g., a keyboard), and/or a visualization unit 41, (e.g., a monitor and/or display). The input unit 42 may be integrated into the visualization unit 41, (e.g., in the case of a capacitive input display). In this case, the medical C-arm x-ray device 37, (e.g., the proposed method for providing a scene with synthetic contrast PROV-S), may be controlled by an input by a user at the input unit 42. The input unit 42 may also allow an input by the user in order to specify a value for the parameter PARAM. For this purpose, the input unit 42 may send a signal 26 to the provisioning unit PRVS.
[0119] Furthermore, the visualization unit 41 may be embodied to display information and/or graphical representations of information of the medical C-arm x-ray device 37 and/or of the provisioning unit PRVS and/or of further components. For this purpose, the provisioning unit PRVS may send a signal 25 to the visualization unit 41. In particular, the visualization unit 41 may be embodied to display a graphical representation of the preoperative image data pID and/or the intraoperative image data iID and/or the scene with synthetic contrast S.
[0120] The schematic representations contained in the described figures do not reflect a scale or proportions of any kind.
[0121] In conclusion, it is pointed out once again that the methods described in detail in the foregoing, as well as the illustrated devices, are simply exemplary embodiments which may be modified in the most diverse ways by the person skilled in the art without leaving the scope of the disclosure. Furthermore, the use of the indefinite articles "a" or "an" does not exclude the possibility that the features in question may also be present more than once. Similarly, the terms "unit" and "element" do not rule out the possibility that the components in question include a plurality of cooperating subcomponents, which, if necessary, may also be distributed in space.
[0122] It is to be understood that the elements and features recited in the appended claims may be combined in different ways to produce new claims that likewise fall within the scope of the present disclosure. Thus, whereas the dependent claims appended below depend from only a single independent or dependent claim, it is to be understood that these dependent claims may, alternatively, be made to depend in the alternative from any preceding or following claim, whether independent or dependent, and that such new combinations are to be understood as forming a part of the present specification.
[0123] While the present disclosure has been described above by reference to various embodiments, it should be understood that many changes and modifications may be made to the described embodiments. It is therefore intended that the foregoing description be regarded as illustrative rather than limiting, and that it be understood that all equivalents and/or combinations of embodiments are intended to be included in this description.