Method and system for assistance in guiding an endovascular instrument
11564750 · 2023-01-31
Assignee
Inventors
- Florent Lalys (Rennes, FR)
- Mathieu Colleaux (Rennes, FR)
- Vincent Durrmann (Rennes, FR)
- Antoine Lucas (Rennes, FR)
- Cemil Goksu (Rennes, FR)
Cpc classification
A61B6/5241
HUMAN NECESSITIES
A61B34/20
HUMAN NECESSITIES
A61B6/5235
HUMAN NECESSITIES
A61B6/504
HUMAN NECESSITIES
G06T7/30
PHYSICS
A61B2090/3764
HUMAN NECESSITIES
A61B2034/105
HUMAN NECESSITIES
A61B34/10
HUMAN NECESSITIES
A61B6/463
HUMAN NECESSITIES
International classification
A61B34/20
HUMAN NECESSITIES
A61B6/00
HUMAN NECESSITIES
G06T7/30
PHYSICS
Abstract
A system for assisting in guiding an endovascular instrument in vascular structures of an anatomical region of interest of a patient. This system includes an imaging device for capturing three-dimensional images of parts of the body of a patient, a programmable device and a viewing unit. The imaging device captures partially superposed fluoroscopic images of the region, and the programmable device forms a first augmented image, representative of a complete panorama of bones of the region, and cooperates with the imaging device to obtain a second augmented image including a representation of the vascular structures of the region. The imaging device captures a current fluoroscopic image of a part of the region, and the programmable device registers the current fluoroscopic image with respect to the first augmented image and locates and displays, on the viewing unit, an image region corresponding to the current fluoroscopic image in the second augmented image.
Claims
1. An apparatus comprising: a programmable device for assistance in guiding an endovascular instrument in vascular structures of an anatomical region of interest of a patient, the programmable device comprising: a processor; and a non-transitory computer-readable medium comprising instructions stored thereon, which when executed by the processor configure the programmable device to perform acts comprising: obtaining a plurality of partially superposed fluoroscopic images of the anatomical region of interest from an imaging device that is configured to capture two-dimensional images of parts of the body of the patient and is positioned at different successive positions, and forming a first augmented image representative of a complete panorama of bones of said anatomical region of interest, capturing a plurality of angiographic images corresponding to the plurality of fluoroscopic images, and forming a second augmented image, the second augmented image being a two-dimensional image representative of an arterial panorama of the anatomical region of interest, obtaining a new fluoroscopic image, called a current fluoroscopic image, of a part of the anatomical region of interest, and performing a template matching type registering of said current fluoroscopic image with respect to the first augmented image, so as to locate, based on a spatial correspondence of bone anatomical structures, a portion of the first augmented image corresponding to the current fluoroscopic image, and locating and displaying on a viewing unit an image region corresponding to said portion of the first augmented image in the second augmented image.
2. The apparatus according to claim 1, wherein the apparatus further comprises: the viewing unit; and the imaging device.
3. The apparatus according to claim 1, wherein the programmable device is configured to display markers that delimit arterial lesions observed on the second augmented image.
4. The apparatus according to claim 1, wherein the programmable device is configured to control the imaging device in order to capture the plurality of angiographic images substantially in parallel with the capturing of the plurality of partially superposed fluoroscopic images, the imaging device being at a same spatial position for the capturing of a fluoroscopic image and of a corresponding angiographic image.
5. The apparatus according to claim 1, wherein the programmable device is configured to carry out a merger of the first augmented image and of the second augmented image to create a merged image and display on the viewing unit the merged image.
6. The apparatus according to claim 5, wherein the programmable device is configured to display on the viewing unit an image region corresponding to the current fluoroscopic image in the merged image.
7. The apparatus according to claim 6, wherein the programmable device is configured to, in a pre-operative phase, cooperate with the imaging device in order to obtain a third augmented image representative of the bone and vascular structures of the anatomical region of interest, and, after locating an image region corresponding to the current fluoroscopic image in the second augmented image, locate and display an image region corresponding to the current fluoroscopic image in the third augmented image.
8. A non-transitory computer-readable medium comprising a computer program stored thereon comprising instructions for implementing a method for assistance in guiding an endovascular instrument in vascular structures during execution of the program by a processor of a programmable device, wherein the instructions configure the programmable device to: capture with an imaging device positioned at different successive positions a plurality of partially superposed two-dimensional fluoroscopic images of an anatomical region of interest of a patient, and form a first augmented image representative of a complete panorama of bones of said anatomical region of interest, capture a plurality of angiographic images corresponding to the plurality of fluoroscopic images, and form a second augmented image, the second augmented image being a two-dimensional image representative of an arterial panorama of the anatomical region of interest, capture a new two-dimensional fluoroscopic image, called a current fluoroscopic image, of a portion of said anatomical region of interest, perform a template matching type registering of the current fluoroscopic image with respect to the first augmented image, so as to locate, based on a spatial correspondence of bone anatomical structures, a portion of the first augmented image corresponding to the current fluoroscopic image, and locate and display on a viewing unit an image region corresponding to said portion of the first augmented image in the second augmented image.
9. The non-transitory computer-readable medium according to claim 8, wherein the instructions further configure the programmable device to display markers that delimit arterial lesions observed on the second augmented image.
10. A method for assistance in guiding an endovascular instrument in vascular structures of an anatomical region of interest of the patient, the method comprising the following acts: capturing with an imaging device positioned at different successive positions a plurality of partially superposed two-dimensional fluoroscopic images of the anatomical region of interest of the patient, and forming a first augmented image representative of a complete panorama of bones of said anatomical region of interest, capturing a plurality of angiographic images corresponding to the plurality of fluoroscopic images, and forming a second augmented image, the second augmented image being a two-dimensional image representative of an arterial panorama of the anatomical region of interest, capturing with the imaging device a new two-dimensional fluoroscopic image, called a current fluoroscopic image, of a portion of said anatomical region of interest, performing a template matching type registering of the current fluoroscopic image with respect to the first augmented image, so as to locate, based on a spatial correspondence of bone anatomical structures, a portion of the first augmented image corresponding to the current fluoroscopic image, and locating and displaying on a viewing unit an image region corresponding to said portion of the first augmented image in the second augmented image.
11. The method according to claim 10, further comprising displaying markers that delimit arterial lesions observed on the second augmented image.
12. The method according to claim 10, wherein the capturing of a plurality of angiographic images is carried out substantially in parallel with the capturing of a plurality of partially superposed fluoroscopic images, at a same spatial position for the capturing of a fluoroscopic image and of a corresponding angiographic image.
13. The method according to claim 10, comprising merging the first augmented image and the second augmented image to create a merged image, displaying the merged image, and displaying an image region corresponding to the current fluoroscopic image in the merged image.
14. The method according to claim 13, further comprising, in a pre-operative phase, obtaining a third augmented image representative of the bone and vascular structures of the anatomical region of interest, and, after the locating of an image region corresponding to the current fluoroscopic image in the second augmented image, locating and displaying another image region corresponding to the current fluoroscopic image in the third augmented image.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Other features and advantages will become apparent from the description given below, for information purposes and in a non-limiting manner, with reference to the accompanying figures.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
(10) The invention is described hereinafter more particularly for assistance in guiding an endovascular instrument in the lower limbs, but the invention more generally applies to other anatomical regions of interest, or to a wider region that also includes all or a portion of the trunk of the patient.
(12) The operating room 1 is provided with an operating table 12, whereon is represented a patient 14 to be treated via an endovascular intervention.
(13) The intervention system 10 comprises an X-ray imaging device 21, itself comprising a support device 16 in the shape of a hoop, a source 18 of X-rays and a unit 20 for receiving and detecting X-rays, positioned facing the source 18. This imaging device is suitable for capturing images of elements positioned between the source 18 of X-rays and the unit 20 for receiving and detecting, and is also suitable for rotating about two axes according to the needs of the operator.
(14) The two-dimensional images captured by an X-ray imaging system are generally called fluoroscopic images.
(15) Thus, the imaging device 21 is suitable for capturing two-dimensional fluoroscopic images of various anatomical regions of interest of the body of the patient, comprising targeted vascular structures, in particular the arteries.
(16) In particular, the operator can displace the imaging device 21 substantially in translation along the legs of the patient 14, in order to capture a plurality of N fluoroscopic images F.sub.i.
(17) More generally, the operator can displace the imaging device 21 substantially in translation in order to obtain a plurality of fluoroscopic images of a chosen anatomical region of interest, or of a region extending from the supra-aortic trunk to the feet of the patient.
(18) Alternatively, the displacement in translation of the imaging device 21 is controlled, for example by the programmable device 22, described hereinafter.
(19) The number N of images to be captured is chosen according to the field of vision of the imaging device and the size of the patient 14. For example, 3 to 4 images of each leg of the patient 14 are captured along the axis of the femur.
(20) The imaging device is positioned at successive positions such that two images captured successively have a partial superposition on a rectangular superposition region, of a predetermined or variable size.
(21) Preferably, the superposition region has a surface greater than or equal to 20% of the surface of the two-dimensional images captured.
(22) The imaging device 21 is also suitable for capturing angiographic images A.sub.i of the leg or of the legs of the patient, following an injection of a radio-opaque contrast agent by an injection unit 19.
(23) As shall be explained in more detail in what follows, in an embodiment of the invention, during a first preparatory operative phase, at each successive position of the imaging device, a corresponding fluoroscopic image and angiographic image are captured and memorised in the form of two-dimensional digital images.
(24) The intervention system 10 also comprises a programmable device 22, comprising one or several processors, associated with a viewing unit 24 comprised of one or several screens and of a man-machine interface 26.
(25) The man-machine interface 26 comprises means for pointing and selecting elements, for example a keyboard-mouse unit, a touchpad, a contactless 3D gesture interface or a combination of these devices.
(26) In an embodiment, the man-machine interface 26 is integrated with the viewing unit 24 in the form of a touch screen.
(27) The programmable device 22 is suitable for receiving the two-dimensional images (fluoroscopic and/or angiographic) captured by the X-ray imaging device and for processing them according to a method for assistance in guiding an endovascular instrument in vascular structures according to an exemplary embodiment of the invention.
(28) In an embodiment, the programmable device 22 is suitable for controlling the capturing of two-dimensional images (fluoroscopic and/or angiographic).
(29) The two-dimensional images captured during the intervention phase are displayed on the viewing unit 24 in order to assist in the precise guiding of the endovascular instruments inside the vascular structures, in particular the arteries, of the patient.
(30) The endovascular instruments are selected from among a catheter, an endovascular device of the stent type, a flexible or rigid guide, an endoprosthesis or a balloon.
(32) A programmable device 30 able to implement the invention comprises a screen 32, similar to the viewing unit 24; a unit 34 for entering operator commands, for example a keyboard, a mouse, a touchpad or a contactless interface, similar to the unit 26; and a central processing unit 36, or CPU, able to execute computer program instructions when the device 30 is turned on. The device 30 optionally comprises a controller 40 that makes it possible to send commands and to select elements remotely.
(33) The device 30 also comprises a data storage unit 38, for example registers, suitable for storing executable code instructions allowing for the implementation of the method according to an exemplary embodiment of the invention. The various functional blocks of the device 30 described hereinabove are connected via a communication bus 42.
(34) The device 30 is able to receive image data from a source 44.
(35) In an embodiment, the device 30 is suitable for cooperating with an imaging device in order to capture fluoroscopic and/or angiographic images. In particular, the device 30 is suitable for controlling the imaging device in order to capture a plurality of angiographic images corresponding to the plurality of fluoroscopic images.
(36) The method of an exemplary embodiment of the invention is suitable for being implemented by a programmable device 30 such as a computer integrated into a standard operating room, which makes it possible to limit equipment costs.
(37) In an embodiment, the method of the invention is implemented by software code modules. The executable code instructions are recorded on a computer-readable medium, for example an optical disc, a magneto-optical disc, a ROM memory, a RAM memory, a non-volatile memory (EPROM, EEPROM, FLASH, NVRAM), or a magnetic or optical card. Alternatively, the software code modules are implemented in the form of a programmable logic component such as an FPGA (Field Programmable Gate Array), or in the form of a dedicated integrated circuit such as an ASIC (Application Specific Integrated Circuit).
(38) The programmable device 30 is suitable for: obtaining a plurality of partially superposed fluoroscopic images of the anatomical region of interest, and forming a first augmented image representative of a complete panorama of the bones of said anatomical region of interest; obtaining a second augmented image including a representation of the vascular structures of said anatomical region of interest; obtaining a new fluoroscopic image, called the current fluoroscopic image, of a part of the anatomical region of interest; and registering said current fluoroscopic image with respect to the first augmented image, to locate and display, on the viewing unit, an image region corresponding to the current fluoroscopic image in the second augmented image.
(40) The method comprises, in a first operative phase, a first step 50 of capturing N two-dimensional fluoroscopic images of a lower limb of the patient, for example 3 or 4 successive images captured by translation of the imaging device along the axis of the femur.
(41) The two-dimensional fluoroscopic images are noted as {F.sub.1, . . . F.sub.i, . . . F.sub.N} and are memorised in the form of digital images. Two successive images are partially superposed, and therefore have a rectangular superposition region of variable size when the device is displaced by the operator, preferably greater than or equal to a predetermined size.
(42) For example, when each digital image is represented in the form of a two-dimensional matrix of pixels, with each pixel having an associated numerical value called intensity, the size of the image is defined by a number of lines and a number of columns. A superposition region then comprises a number of lines and of columns forming a region of a given surface, preferably greater than or equal to 20% of the entire surface of the image.
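For illustration, the 20% criterion above can be checked directly from the image size and the displacement between two successive capture positions. The following sketch is not part of the described system; the function name and the pixel values are hypothetical.

```python
def overlap_fraction(shape, dy, dx):
    """Fraction of the image surface covered by the rectangular
    superposition region between two images of identical shape
    (lines, columns), displaced by (dy, dx) pixels."""
    lines, columns = shape
    overlap_lines = max(0, lines - abs(dy))
    overlap_columns = max(0, columns - abs(dx))
    return (overlap_lines * overlap_columns) / (lines * columns)

# Example: 1024x1024 detector, 700-pixel step along the femur axis
frac = overlap_fraction((1024, 1024), dy=700, dx=0)
assert frac >= 0.20  # the preferred 20% superposition criterion holds
```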
(43) The step 50 is followed by a step 52 of two-by-two registering of the two-dimensional fluoroscopic images for forming a first augmented image, noted as P.sub.1, representative of a complete panorama of the bones of the lower limb represented.
(44) The registering of images, or setting in spatial correspondence, consists in placing in spatial correspondence anatomical or functional structures present in each one of the images. Via the successive merging of registered images along the lower limb, an augmented image is obtained that represents the bone structure of the entire lower limb.
(45) In the present case of application, the purpose of the registering is to carry out a precise superposition of the bone structures present in the superposition region that is common to the successive images.
(46) Two successive images can have a slight rotation between them, in particular when the operator displaces the imaging device. The 2D/2D registering thus consists in optimising two translations and one rotation.
(47) The two-dimensional fluoroscopic images captured have a relatively high level of contrast thanks to the bones. Several methods of rigid registering between two-dimensional images (or 2D/2D registering) can be applied to images that include objects with a relatively high level of contrast.
(48) For example, an automatic registering of the iconic type can be implemented, using a measurement of similarity based on the difference of gradients between the two images, coupled with an optimisation strategy of the gradient descent type or a Powell optimiser.
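As a minimal sketch of such an iconic registering, the code below measures similarity by the sum of squared gradient differences and, for brevity, searches only over integer translations with an exhaustive scan standing in for the gradient-descent or Powell optimisation named above (the described method also optimises a rotation). All names are illustrative, not the patent's implementation.

```python
import numpy as np

def gradient_diff(a, b):
    """Similarity measure: sum of squared differences between the
    image gradients of a and b (lower is better)."""
    gy_a, gx_a = np.gradient(a.astype(float))
    gy_b, gx_b = np.gradient(b.astype(float))
    return float(np.sum((gy_a - gy_b) ** 2 + (gx_a - gx_b) ** 2))

def register_translation(fixed, moving, max_shift=5):
    """Return the integer (dy, dx) that best aligns `moving` onto
    `fixed` under the gradient-difference measure. An exhaustive scan
    stands in for the gradient descent / Powell optimisation."""
    best_cost, best_shift = float("inf"), (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            candidate = np.roll(moving, (dy, dx), axis=(0, 1))
            cost = gradient_diff(fixed, candidate)
            if cost < best_cost:
                best_cost, best_shift = cost, (dy, dx)
    return best_shift
```

In practice a continuous optimiser over two translations and one rotation would replace the integer scan, but the cost function plays the same role.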
(49) Following the step 52 of 2D-2D registering, a first augmented image P.sub.1 is obtained that represents a panorama of the bones of the lower limb of the patient. As explained hereinabove, the image P.sub.1 is obtained from a plurality of captured two-dimensional fluoroscopic images.
(50) During a step 54, at least one second augmented image of the lower limb of the patient is obtained. This second augmented image comprises a representation of the vascular structures of the lower limb of the patient.
(51) Two different embodiments of the step 54 are considered and shall be described in more detail hereinafter.
(52) In a first embodiment, the second augmented image is a two-dimensional image representative of an arterial panorama of the lower limb considered, located in the same spatial reference system as the panorama of the bones P.sub.1.
(53) The arterial panorama is obtained from two-dimensional angiographic images representing the vascular structures of the lower limb considered. As the angiographic images obtained are in the same spatial reference system as the fluoroscopic images, the registering parameters obtained in the step 50 are directly applied to the angiographic images in order to obtain the arterial panorama.
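The reuse of the registering parameters can be sketched as follows: the translations estimated on the fluoroscopic images are applied unchanged to the angiographic tiles, which are pasted into a common canvas. This is a minimal illustration assuming pure vertical translations and naive averaging in the superposition regions; the function name and blending rule are hypothetical.

```python
import numpy as np

def build_panorama(tiles, offsets):
    """Paste equally sized image tiles into one canvas at the given
    cumulative row offsets -- the translations estimated on the
    fluoroscopic images, reapplied here to the angiographic images."""
    h, w = tiles[0].shape
    height = max(offsets) + h
    canvas = np.zeros((height, w), dtype=float)
    for tile, off in zip(tiles, offsets):
        region = canvas[off:off + h]
        # average where tiles overlap, copy elsewhere (simple blending;
        # zero is treated as "empty", a simplification)
        mask = region > 0
        region[mask] = (region[mask] + tile[mask]) / 2
        region[~mask] = tile[~mask]
    return canvas
```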
(54) In a second embodiment, the second augmented image is a three-dimensional image of the bone and vascular structures of the lower limb considered. In this second embodiment, the second augmented image is obtained and memorised in a prior, pre-operative phase. It is for example calculated from images obtained by capturing techniques such as computed tomography (CT) or magnetic resonance imaging (MRI), enhanced by the injection of a contrast agent in order to better reveal the vascular structures.
(55) In a third embodiment, a second augmented image representative of an arterial panorama and a third three-dimensional image of the bone and vascular structures of the lower limb considered are both used.
(56) In the case where the second augmented image is the pre-operative three-dimensional image, the step 54 is followed by an optional step 56 of registering between the first and second augmented images.
(57) Following the step 56, it is useful to represent a merged image P.sub.3 of the panorama of the bones P.sub.1 and of the augmented image P.sub.2, which will be used for the simultaneous viewing of bone and arterial structures.
(58) The step 56 or the step 54 is optionally followed by a step 58 of marking, by the operator of the system, for example a doctor, the arterial lesions observed on the second augmented image P.sub.2.
(59) In a second operative phase, a current fluoroscopic image I.sub.c is captured in the step 60. The second operative phase is an effective intervention phase, during which an endovascular instrument, selected from among a catheter, an endovascular device of the stent type, a flexible or rigid guide, an endoprosthesis or a balloon, is inserted in order to treat the patient.
(60) The current fluoroscopic image I.sub.c is then registered with respect to the first augmented image P.sub.1 representative of a panorama of the bones of the lower limb in the step 62.
(61) Registering consists here in placing the bone anatomical structures present in the current fluoroscopic image I.sub.c in spatial correspondence with structures of the first augmented image P.sub.1. The registering used here is thus of the “template matching” type, since the current image I.sub.c corresponds to a very precise portion of the first augmented image P.sub.1.
(62) Preferably, the registering carried out is of the iconic type, using a measurement of similarity based on the difference in gradients between the two images, a measurement of similarity based on mutual information, or a combination of the two.
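A minimal template-matching sketch is given below. It uses normalised cross-correlation for brevity, whereas the text prefers gradient-difference or mutual-information similarities; the exhaustive scan is only workable for small images, and all names are illustrative.

```python
import numpy as np

def match_template(panorama, current):
    """Locate `current` inside `panorama` by normalised
    cross-correlation, returning the (row, col) of the best match."""
    ph, pw = panorama.shape
    th, tw = current.shape
    t = current - current.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for y in range(ph - th + 1):
        for x in range(pw - tw + 1):
            win = panorama[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat window, no correlation defined
            score = (w * t).sum() / denom
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```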
(63) Finally, in the step 64, an image region corresponding to the current fluoroscopic image I.sub.c is determined in the second augmented image P.sub.2 and/or in the merged image P.sub.3, and is displayed, thanks to the calculated and memorised registering information between the first augmented image P.sub.1 and the second augmented image P.sub.2.
(64) Advantageously, the local vascular structure is displayed in conjunction with the current fluoroscopic image without requiring a new capture of an angiographic image, and consequently without requiring an additional injection of contrast agent.
(65) Advantageously, in the case where the marking of the lesions was carried out during the step 58, the lesions are located precisely in relation to the current fluoroscopic image thanks to the registering carried out between the current fluoroscopic image and the first augmented image P.sub.1 and thanks to the registering between the first augmented image P.sub.1 and the second augmented image P.sub.2.
(66) The steps 60 to 64 are reiterated substantially in real time as many times as needed during the intervention, in particular for each lesion to be treated on the lower limb.
(68) In this first embodiment, in the step of capturing images 70, at each image capturing position of the imaging device, a fluoroscopic image F.sub.i and an angiographic image A.sub.i are captured. Thus, there is an initial spatial correspondence between these images. In other terms, the images are captured in the same spatial reference system.
(69) The step 72 of registering is similar to the step 52 described hereinabove. The rigid 2D-2D registering parameters between successive fluoroscopic images are memorised, and then applied in the step 74 to the angiographic images A.sub.i in order to obtain the second augmented image P.sub.2.
(70) Thus, advantageously, the registering parameters are calculated on the fluoroscopic images that have a good contrast thanks to the presence of the bone structures. Thanks to the capturing of fluoroscopic and angiographic images in spatial correspondence, a panorama of vascular structures that corresponds to the panorama of bones is easily obtained.
(71) The first and second augmented images are merged in the fusion step 76 into a merged image P.sub.3 that shows both the bone and vascular structures of the entire lower limb studied. The merger comprises, for example, a pixel-by-pixel weighted sum of the augmented images P.sub.1 and P.sub.2, with the weighting coefficients associated with the respective pixels being chosen according to the initial intensity of the images.
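One plausible reading of this pixel-by-pixel weighted sum, with weights derived from the initial intensities, is sketched below; the exact weighting used by the system is not specified, so this particular rule is an assumption.

```python
import numpy as np

def fuse(p1, p2):
    """Pixel-by-pixel weighted sum of the bone panorama P1 and the
    arterial panorama P2. The weights favour the brighter pixel, one
    possible reading of weights 'chosen according to the initial
    intensity of the images' (an assumption, not the patent's formula)."""
    total = p1 + p2
    # weight of P1 at each pixel; 0.5 where both images are empty
    w1 = np.divide(p1, total, out=np.full_like(p1, 0.5, dtype=float),
                   where=total > 0)
    return w1 * p1 + (1.0 - w1) * p2
```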
(72) The image P.sub.3 is displayed, for example next to the second augmented image P.sub.2. Of course, alternative displays can be considered.
(73) In the optional step of marking 78, the operator adds markers, whose positions are recorded, in order to mark the arterial lesions to be treated.
(74) The markers are preferably added on the second augmented image P.sub.2, representative of the panorama of the vascular structures, and reproduced precisely on the merged image P.sub.3. The markers are for example represented by coloured lines superposed on the displayed image or images.
(75) Alternatively, the markers are added on the second augmented image P.sub.2 before the step 76 of calculating the merged image P.sub.3, and are added on the merged image displayed.
(76) According to another alternative, the markers are added directly on the merged image P.sub.3.
(77) The step 80 of capturing a current fluoroscopic image I.sub.c is similar to the step 60 described hereinabove.
(78) The current fluoroscopic image is then registered with respect to the first augmented image P.sub.1 in the step 82, similar to the step 62 described hereinabove.
(79) Then, in the step 84, the vascular structures corresponding to the current fluoroscopic image are relocated in the second augmented image P.sub.2 and/or in the merged image P.sub.3, and are displayed.
(80) For example, in an embodiment, a frame indicating the region that corresponds to the current fluoroscopic image is displayed both on the augmented image P.sub.2 and on the merged image P.sub.3, which are displayed in parallel, as well as the previously positioned markers.
(82) In addition, according to an alternative, an image representative of the current fluoroscopic image, with the corresponding angiographic structure superposed and, where applicable, the previously recorded markers, is displayed with a better display resolution than that of the region displayed on the augmented image P.sub.2 or on the image P.sub.3.
(84) Thus, advantageously, the arterial lesions to be treated are relocated, in conjunction with the current fluoroscopic image and with the augmented P.sub.2 and merged P.sub.3 images of the lower limb, which makes it possible to assist in guiding the endovascular instrument during the intervention without a new capture of an angiographic image.
(86) In this embodiment, the second augmented image P.sub.2 is obtained and memorised in a pre-operative phase. This three-dimensional image is for example calculated from images obtained in the pre-operative step 90 by capturing techniques such as computed tomography (CT) or magnetic resonance imaging (MRI), enhanced by the injection of a contrast agent in order to better reveal the vascular structures.
(87) From the images obtained, a 3D image is calculated and memorised in the pre-operative step 92. Various methods are known for calculating such an image. For example, by working on the CT image obtained with the injection of a contrast agent, an automatic segmentation of the arterial structures can be used to create a three-dimensional anatomical model of the patient. A semi-automatic segmentation algorithm of the graph cut type can be used.
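As a crude stand-in for the semi-automatic graph cut segmentation mentioned above, the sketch below grows a region from a user-provided seed point, accepting 4-connected pixels whose intensity is close to the seed intensity. It illustrates only the semi-automatic principle (manual seed, automatic growth); the function name and tolerance are hypothetical.

```python
from collections import deque

import numpy as np

def region_grow(image, seed, tol):
    """Semi-automatic segmentation: grow a binary region from a
    user-given seed, accepting 4-connected pixels whose intensity is
    within `tol` of the seed intensity."""
    h, w = image.shape
    ref = image[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(image[ny, nx] - ref) <= tol:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```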
(88) For example, such a virtual three-dimensional anatomical model can include the segmented volumes of the femoral artery, internal iliac arteries, hips and the femur.
(89) In operative phase, the method comprises a step 94 of capturing fluoroscopic two-dimensional images, similar to the step 50 described hereinabove, followed by a step of registering 96 in order to form the first augmented image P.sub.1, similar to the step 52 described hereinabove.
(90) A rigid 2D-3D registering is then applied in the step 98, and parameters that allow for the putting into correspondence of the first augmented image P.sub.1 and the second augmented three-dimensional image are memorised.
(91) Any known method of rigid 2D-3D registering can be applied. For example, a semi-automatic registering of the iconic type is implemented, wherein the initialisation of a portion of the points used for the registering is carried out manually, which makes it possible to roughly align the two images to be registered; an automatic registering is then launched in order to refine the result.
(92) The step 98 is followed by steps of capturing 100 a current fluoroscopic two-dimensional image, representative of a portion of the lower limb considered, similar to the step 60 described hereinabove, then a step 102 of registering with respect to the first augmented image P.sub.1, similar to the step 62 already described.
(93) Finally, during the step 104, a region corresponding to the current image is determined in the three-dimensional image P.sub.2, by applying the parameters for putting into 2D-3D correspondence between the first augmented image P.sub.1 and the three-dimensional image P.sub.2 calculated and memorised in the step 98.
(94) Alternatively, a 2D-3D registering is carried out in the step 104.
(95) Finally, a region corresponding to the current fluoroscopic image is displayed on the three-dimensional image P.sub.2.
(97) Of course, alternatives in the details of the implementation of the steps hereinabove, within the reach of those skilled in the art, can entirely be considered.
(98) In a third embodiment, the first two embodiments are combined. The second augmented image P.sub.2 representative of an arterial panorama is calculated and displayed, as in the first embodiment. In addition, another augmented image P′.sub.2, which is the three-dimensional image representative of the bone and vascular structures of the lower limb, is also calculated and displayed as in the second embodiment.
(99) The region corresponding to the current fluoroscopic image is determined and displayed on each one of the augmented images.
(100) Thus, the method allows for the displaying, in the second operative phase, of a region corresponding to the current fluoroscopic image both in the augmented image corresponding to the arterial panorama, and in the three-dimensional augmented image.
(101) Advantageously, the operator has information on the arterial lesions, as well as additional information on the patient, for example the location of the calcifications that are visible only in the 3D image.
(102) In all of the embodiments, it is not necessary to inject the contrast agent and to capture new angiographic images corresponding to the current fluoroscopic images; the quantity of contrast agent to be injected into the patient is therefore decreased relative to conventional interventions.
(103) In addition, advantageously, more information is displayed, which makes it possible to improve the assistance provided in the framework of endovascular interventions.
(104) An exemplary embodiment of the present invention therefore has as an object to overcome the disadvantages of the known methods, in order to improve image-guided endovascular interventions while limiting the number of angiographic image captures.
(105) Although the present disclosure has been described with reference to one or more examples, workers skilled in the art will recognize that changes may be made in form and detail without departing from the scope of the disclosure and/or the appended claims.