Method and apparatus for ultrasound image acquisition

10130328 · 2018-11-20

    Abstract

    Method for ultrasound image acquisition includes defining a fixed frame of reference, with an origin of coordinates within a transmitter of a tracking system; detecting position and orientation of a probe relative to the frame of reference with a probe sensor of the tracking system coupled to the probe; detecting position and orientation of a body relative to the frame of reference with a reference sensor of the tracking system coupled to the body; calculating position and orientation of the probe with respect to the body; acquiring a set of 2D images constituting a 3D image by transmitting an ultrasonic beam into the body and receiving echographic signals with the probe; iterating for a predetermined number of 3D image acquisitions; and generating a panoramic 3D image by combining the 2D images based on information of position and orientation of the probe relative to the body for each 2D image acquisition.

    Claims

    1. A method for ultrasound image acquisition, comprising: a) defining a fixed frame of reference, an origin of coordinates being set within a transmitter of a tracking system; b) detecting position and orientation of a probe with respect to said fixed frame of reference with a probe sensor of said tracking system coupled to said probe; c) detecting position and orientation of a body to be imaged with respect to said frame of reference with a reference sensor of said tracking system coupled to said body; d) calculating the position and the orientation of said probe with respect to said body; e) acquiring a set of 2D images constituting a 3D image in Doppler mode by transmitting an ultrasonic beam into said body and receiving echographic signals from said body by said probe; f) iterating steps b) to e) for a predetermined number of 3D image acquisitions; and g) generating a panoramic 3D image combining the 2D images of the acquired 3D images based on information of the position and the orientation of said probe with respect to said body calculated for each 2D image acquisition, wherein step d) comprises the following steps: h) generating from the detected position and orientation of the probe a rotation matrix RP indicating rotation and translation transforming the fixed frame of reference into a frame of reference of the probe; i) generating from the detected position and orientation of the body a rotation matrix RR indicating rotation and translation transforming the fixed frame of reference into a frame of reference of the body; and j) multiplying the rotation matrix RP with the inverse of the rotation matrix RR in order to obtain a calculated rotation matrix RP′ indicating rotation and translation transforming the frame of reference of the body into the frame of reference of the probe.

    2. The method as claimed in claim 1, wherein the body is a head of a patient, said acquisitions being transcranial acquisitions.

    3. The method as claimed in claim 1, wherein the step of acquiring a set of 2D images is performed with a 2D probe.

    4. The method as claimed in claim 1, wherein said panoramic 3D image is automatically fused with a corresponding volumetric image of a same patient acquired in a different imaging modality.

    5. The method as claimed in claim 4, wherein the images are fused using an automatic registration algorithm performing a matching of vessels comprised in the panoramic 3D image with vessels identified by segmentation in the volumetric image acquired in said different imaging modality.

    Description

    (1) These and other features and advantages of the present invention will appear more clearly from the following description of some embodiments, illustrated in the annexed drawings, wherein:

    (2) FIG. 1 shows an embodiment of the apparatus according to the present invention;

    (3) FIGS. 2 to 4 explain the coordinates transformations provided in the method according to the present invention.

    (4) In FIG. 1 an apparatus for ultrasound image acquisition is shown, which apparatus comprises an ultrasound probe 1 having at least one piezoelectric transducer, a stage for transmitting an ultrasonic beam by said at least one transducer into a body to be imaged, and a stage for receiving and processing echographic signals returned to the at least one transducer.

    (5) The probe 1 is of 2D type and acquires subsequent 2D images in order to generate 3D images from a scan volume 10, said stage for receiving and processing echographic signals being comprised in a unit for generation of 3D ultrasound images 3.

    (6) The tracking system 2 comprises a transmitter 20 defining a fixed frame of reference 23 and a probe sensor 21 coupled to the probe 1 in a fixed way so that a rotation and/or translation of the probe 1 corresponds to a rotation and/or translation of the probe sensor 21.

    (7) The probe sensor 21 detects its position and orientation with respect to said fixed frame of reference 23 and consequently detects the position of the probe 1 with respect to said fixed frame of reference 23.

    (8) The probe sensor 21 is preferably mounted to the probe shaft.

    (9) A reference sensor 22 is coupled to the body to be imaged in a fixed way so that a rotation and/or translation of the body corresponds to a rotation and/or translation of the reference sensor 22.

    (10) The reference sensor 22 detects its position and orientation with respect to said fixed frame of reference 23 and consequently detects the position of the body with respect to said fixed frame of reference 23.

    (11) The reference sensor 22 is preferably mounted to the head of the patient, in particular to the forehead, for example by means of an elastic strap or of a strap provided with closure means such as Velcro fastenings.

    (12) Preferably the transmitter 20, the probe sensor 21 and the reference sensor 22 are of electromagnetic type, i.e. they function by measuring the strength of the magnetic fields generated by an electric current flowing through three small wire coils oriented perpendicularly to one another.

    (13) The current causes each coil to act as an electromagnet while current is flowing through it.

    (14) By sequentially activating each of the wire coils of the transmitter 20, and measuring the magnetic fields generated on each of the three perpendicular wire coils of the probe sensor 21 and of the reference sensor 22, it is possible to determine the position and orientation of the probe sensor 21 and of the reference sensor 22 with respect to the transmitter 20.
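    The measurement principle of paragraphs (12) to (14) can be sketched in a toy model: driving the transmitter coils one at a time and reading the three sensor coils yields a 3×3 signal matrix from which the sensor orientation follows by linear algebra. A real tracker uses a calibrated field model and also solves for position; the field matrix F and the helper names below are illustrative assumptions, not part of the patent.

```python
import numpy as np

# Toy "field matrix": column j is the magnetic field vector at the
# sensor position while transmitter coil j is driven. In a real
# tracker this comes from a calibrated field model; here it is just
# a fixed invertible matrix.
F = np.array([[1.0, 0.2, 0.1],
              [0.0, 0.9, 0.3],
              [0.1, 0.0, 1.1]])

def sensed_signals(R_sensor):
    # Row i = projection of each driven field onto sensor coil axis i
    # (the rows of R_sensor are the three perpendicular coil axes).
    return R_sensor @ F

def recover_orientation(S):
    # Undo the known field matrix to get the sensor orientation back.
    return S @ np.linalg.inv(F)

# Ground truth: sensor rotated 30 degrees about z.
a = np.deg2rad(30.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
S = sensed_signals(R_true)
assert np.allclose(recover_orientation(S), R_true)
```

In this simplified model each sequential activation fills one column of the signal matrix S, mirroring the sequential excitation described above.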

    (15) The tracking system 2 comprises means for calculating the position and orientation of the probe sensor 21 with respect to the reference sensor 22, hence the position and orientation of the probe 1 with respect to the body.

    (16) Multiple acquisitions of 3D images are then performed, each 3D image being composed of a plurality of 2D images, and for each 2D image the position and orientation of the probe 1 with respect to the body is calculated.

    (17) The acquired images are combined in a panoramic image by a panoramic image combinator 4, using said information of position and orientation of the probe for each 2D image acquisition to correct the position of each 2D image.
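    The compounding performed by the panoramic image combinator 4 can be sketched as follows; this is a minimal illustration under assumed conventions (each pose is a 4×4 homogeneous matrix mapping a pixel's homogeneous (row, col, 0, 1) coordinates into body-frame voxel units), not the patented implementation.

```python
import numpy as np

def compound_panoramic(frames, poses, vol_shape=(64, 64, 64)):
    """Scatter tracked 2D frames into one voxel volume, averaging
    where frames overlap.

    frames: 2D arrays of echo intensities
    poses:  4x4 homogeneous matrices mapping a pixel's homogeneous
            (row, col, 0, 1) coordinates into body-frame voxel units
    """
    vol = np.zeros(vol_shape)
    hits = np.zeros(vol_shape)
    for img, pose in zip(frames, poses):
        r, c = np.indices(img.shape)
        # homogeneous pixel coordinates in the image plane (z = 0)
        pix = np.stack([r.ravel(), c.ravel(),
                        np.zeros(img.size), np.ones(img.size)])
        body = (pose @ pix)[:3]            # positions in the body frame
        idx = np.round(body).astype(int)   # nearest-voxel placement
        ok = np.all((idx.T >= 0) & (idx.T < vol_shape), axis=1)
        np.add.at(vol, tuple(idx[:, ok]), img.ravel()[ok])
        np.add.at(hits, tuple(idx[:, ok]), 1)
    return np.where(hits > 0, vol / np.maximum(hits, 1), 0.0)
```

Overlapping frames are averaged per voxel; np.add.at is used so that repeated voxel indices accumulate correctly, which a plain fancy-indexed += would not.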

    (18) The panoramic image obtained is visualized onto a display 6 by means of a display driver 5.

    (19) The method steps carried out by the tracking system 2 are shown in FIGS. 2 to 4.

    (20) The coordinates of the probe sensor 21 have to be converted from being based on the position of the transmitter 20 to being based on the position of the reference sensor 22 placed on the patient's head.

    (21) The reference sensor 22 also has coordinates in the transmitter coordinate system, and these coordinates change continuously with any patient movement because the sensor is fixed to the head.

    (22) As shown in FIG. 2, the coordinates of the probe sensor 21 with respect to the transmitter 20 are read and a rotation matrix RP is generated.

    (23) The rotation matrix RP contains both the rotation and translation of the probe sensor 21 in the frame of reference 23 with origin in the transmitter 20.

    (24) Also the coordinates of the reference sensor 22 with respect to the transmitter 20 are read and a rotation matrix RR is generated.

    (25) The rotation matrix RR contains both the rotation and translation of the reference sensor 22 in the frame of reference 23 with origin in the transmitter 20.

    (26) As shown in FIG. 3, the coordinates of the reference sensor 22 represented by matrix RR are inverted and saved in matrix RR⁻¹.

    (27) Multiplying the coordinates of the reference sensor 22 by its inverse puts the reference sensor back to the origin of the frame of reference 23 where the transmitter 20 is, i.e. RR·RR⁻¹ = I, the identity matrix.

    (28) As shown in FIG. 4, the coordinates of the probe sensor 21 are multiplied with the inverted coordinates of the reference sensor 22.

    (29) This puts the probe sensor 21 at a position relative to the reference sensor 22, creating a rotation matrix RP′ of the probe sensor 21 with respect to the reference sensor 22, i.e. RP·RR⁻¹ = RP′, which thus identifies the position and orientation of the probe 1 with respect to the body.
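    The products of FIGS. 3 and 4 can be written compactly with 4×4 homogeneous matrices that hold both rotation and translation; the helper make_pose below is a hypothetical convenience, and the convention that each matrix maps fixed-frame coordinates into the sensor's own frame follows the claim wording.

```python
import numpy as np

def make_pose(rotation, translation):
    """4x4 homogeneous matrix holding a 3x3 rotation and a translation;
    here it maps fixed-frame coordinates into the sensor's own frame."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def probe_relative_to_body(RP, RR):
    """FIGS. 3 and 4: invert RR, then multiply, giving the pose of the
    probe sensor with respect to the reference sensor."""
    return RP @ np.linalg.inv(RR)

# Example: probe rotated 90 degrees about z and shifted along x;
# reference sensor shifted along y.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
RP = make_pose(Rz, [1.0, 0.0, 0.0])
RR = make_pose(np.eye(3), [0.0, 2.0, 0.0])
RP_rel = probe_relative_to_body(RP, RR)
```

Multiplying RR by its own inverse returns the 4×4 identity, matching paragraph (27).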

    (30) For example, if both sensors experience the same translation and rotation, as if they were fixed to each other, the calculation means will yield the same coordinates for the probe sensor 21 throughout the motion, no matter where it is positioned with respect to the transmitter 20.

    (31) This happens because the reference sensor 22 coordinates are translated to the origin and oriented in such a way that their axes match the transmitter 20 coordinate system axes.
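    The invariance described in paragraphs (30) and (31) can be checked numerically: under the convention that each matrix maps fixed-frame coordinates into the sensor's frame, a common rigid motion M of probe and patient multiplies each sensor pose on the right by M⁻¹, which cancels in the product RP·RR⁻¹. The random_pose helper is a hypothetical test utility; this is a sketch, not the patented implementation.

```python
import numpy as np

def make_pose(rotation, translation):
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def relative(RP, RR):
    # probe pose with respect to the body, as in FIG. 4
    return RP @ np.linalg.inv(RR)

def random_pose(rng):
    # random rotation from a QR decomposition, random translation
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return make_pose(Q, rng.normal(size=3))

rng = np.random.default_rng(42)
RP, RR, M = (random_pose(rng) for _ in range(3))

# Move patient and probe rigidly together: each pose picks up the
# inverse motion on the right, and the relative pose is unchanged.
RP_moved = RP @ np.linalg.inv(M)
RR_moved = RR @ np.linalg.inv(M)
assert np.allclose(relative(RP_moved, RR_moved), relative(RP, RR))
```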