3D ULTRASOUND IMAGING SYSTEM
20230068399 · 2023-03-02
Inventors
CPC classification
A61B8/12
HUMAN NECESSITIES
A61B8/463
HUMAN NECESSITIES
G06V10/42
PHYSICS
A61B8/523
HUMAN NECESSITIES
A61B8/085
HUMAN NECESSITIES
A61B8/483
HUMAN NECESSITIES
International classification
G06V10/42
PHYSICS
A61B8/00
HUMAN NECESSITIES
Abstract
The present invention relates to an ultrasound imaging system comprising: an image processor configured to receive at least one set of volume data resulting from a three-dimensional ultrasound scan of a body and to provide corresponding display data, an anatomy detector configured to detect a position and orientation of an anatomical object of interest within the at least one set of volume data, a slice generator for generating a plurality of two-dimensional slices from the at least one set of volume data, wherein said slice generator is configured to define respective slice locations based on the results of the anatomy detector for the anatomical object of interest so as to obtain a set of two-dimensional standard views of the anatomical object of interest, wherein the slice generator is further configured to define for each two-dimensional standard view which anatomical features of the anatomical object of interest are expected to be contained, and an evaluation unit for evaluating a quality factor for each of the generated plurality of two-dimensional slices by comparing each of the slices with the anatomical features expected for the respective two-dimensional standard view.
Claims
1. An ultrasound imaging system comprising: one or more processors operable to: receive, from a trans-esophageal echocardiography (TEE) probe positioned within an esophagus of a body, first volume ultrasound data of a first trans-esophageal scan of an anatomical object within the body and second volume ultrasound data of a different second trans-esophageal scan of the anatomical object; generate, based on the first volume ultrasound data, a first plurality of two-dimensional (2D) ultrasound images corresponding to a set of standard views of the anatomical object; generate, based on the second volume ultrasound data, a second plurality of 2D ultrasound images corresponding to the set of standard views; select, for a first standard view of the set of standard views, a first ultrasound image from the first plurality; select, for a second standard view of the set of standard views, a second ultrasound image from the second plurality; and output, to a display in communication with the one or more processors, a third plurality of 2D ultrasound images corresponding to the set of standard views, wherein the third plurality comprises the first ultrasound image and the second ultrasound image.
2. The ultrasound imaging system of claim 1, wherein, to select the first ultrasound image, the one or more processors are operable to: determine a first quality factor of the first ultrasound image; determine a second quality factor of a third ultrasound image from the second plurality, wherein the first ultrasound image and the third ultrasound image correspond to the first standard view; and select, for the first standard view, the first ultrasound image based on a comparison of the first quality factor and the second quality factor.
3. The ultrasound imaging system of claim 2, wherein the one or more processors are operable to: select the first ultrasound image responsive to determining, based on the comparison of the first quality factor and the second quality factor, that the first quality factor is greater than the second quality factor.
4. The ultrasound imaging system of claim 2, wherein, to determine the first quality factor, the one or more processors are operable to compare the first ultrasound image and the first standard view.
5. The ultrasound imaging system of claim 4, wherein the one or more processors are operable to compare a field of view of the first ultrasound image and a feature of the anatomical object associated with the first standard view.
6. The ultrasound imaging system of claim 5, wherein the one or more processors are operable to determine an amount of overlap between the field of view and the feature.
7. The ultrasound imaging system of claim 4, wherein the one or more processors are operable to compare the first ultrasound image and a geometrical model comprising the first standard view.
8. The ultrasound imaging system of claim 2, wherein the one or more processors are operable to output a graphical representation of the first quality factor to the display.
9. The ultrasound imaging system of claim 8, wherein the graphical representation of the first quality factor comprises a percentage or an icon.
10. The ultrasound imaging system of claim 8, wherein the graphical representation of the first quality factor comprises: a first icon responsive to the first quality factor being within a first range of values; a second icon responsive to the first quality factor being within a second range of values; and a third icon responsive to the first quality factor being within a third range of values.
11. The ultrasound imaging system of claim 10, wherein the first icon is configured to indicate that a user should obtain additional volume ultrasound data of the anatomical object.
12. The ultrasound imaging system of claim 1, wherein the one or more processors are operable to output the third plurality such that each 2D ultrasound image of the third plurality is displayed simultaneously.
13. The ultrasound imaging system of claim 1, further comprising the display.
14. The ultrasound imaging system of claim 1, further comprising the TEE probe.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0055] These and other aspects of the invention will be apparent from and elucidated with reference to the embodiment(s) described hereinafter. In the following drawings
DETAILED DESCRIPTION OF THE INVENTION
[0064] A particular example of a three-dimensional ultrasound system which may be applied for the current invention is the CX40 Compact Xtreme ultrasound system sold by the applicant, in particular together with an X6-1 or X7-2t TEE transducer of the applicant or another transducer using the applicant's xMatrix technology. In general, matrix transducer systems as found on Philips iE33 systems or mechanical 3D/4D transducer technology as found, for example, on the Philips iU22 and HD15 systems may be applied for the current invention.
[0065] A 3D ultrasound scan typically involves emitting ultrasound waves that illuminate a particular volume within a body, which may be designated as target volume. This can be achieved by emitting ultrasound waves at multiple different angles. A set of volume data is then obtained by receiving and processing reflected waves. The set of volume data is a representation of the target volume within the body.
[0066] It shall be understood that the ultrasound probe 14 may either be used in a non-invasive manner (as shown in
[0067] Further, the ultrasound system 10 may comprise a controlling unit 16 that controls the provision of a 3D image via the ultrasound system 10. As will be explained in further detail below, the controlling unit 16 controls not only the acquisition of data via the transducer array of the ultrasound probe 14, but also signal and image processing that form the 3D images out of the echoes of the ultrasound beams received by the transducer array of the ultrasound probe 14.
[0068] The ultrasound system 10 may further comprise a display 18 for displaying the 3D images to the user. Still further, an input device 20 may be provided that may comprise keys or a keyboard 22 and further inputting devices, for example a trackball 24. The input device 20 might be connected to the display 18 or directly to the controlling unit 16.
[0070] Further, the ultrasound system 10 comprises a signal processor (SP) 32 that receives the image signals. The signal processor (SP) 32 is generally provided for analog-to-digital conversion and digital filtering (for example, bandpass filtering), as well as for detection and compression (for example, dynamic range reduction) of the received ultrasound echoes or image signals. The signal processor 32 then forwards the image data.
[0071] Further, the ultrasound system 10 comprises an image processor (IP) 34 that converts image data received from the signal processor 32 into display data. In particular, the image processor 34 receives the image data, preprocesses the image data and may store it in a memory (MEM) 36. This image data is then further post-processed to provide images to the user via the display 18. In the current case, in particular, the image processor 34 may form the three-dimensional images out of a multitude of two-dimensional images.
[0072] The ultrasound system 10 may in the current case further comprise an anatomy detector (AD) 38, a slice generator (SLG) 40 and an evaluation unit (EU) 42. It shall be noted that these components may either be realized as separate entities or be included in the image processor 34. All these components may be hardware- and/or software-implemented.
[0073] The anatomy detector (AD) 38 identifies the orientation and position of the anatomical object of interest within the acquired 3D volume data. To this end, the anatomy detector (AD) 38 may be configured to conduct a model-based segmentation of the acquired 3D volume data. This may be done by finding a best match between the at least one set of volume data and a geometrical mesh model of the anatomical object of interest. The model-based segmentation may, for example, be conducted in a similar manner as described for a model-based segmentation of CT images in Ecabert, O. et al.: “Automatic Model-based Segmentation of the Heart in CT Images”, IEEE Transactions on Medical Imaging, Vol. 27(9), pp. 1189-1201, 2008. The geometrical mesh model of the anatomical object of interest may comprise respective segments representing respective anatomic features. Accordingly, the anatomy detector 38 may provide an anatomy-related description of the volume data, which identifies the respective geometrical locations of respective anatomic features in the volume data.
[0074] Such a model-based segmentation usually starts with the identification of the orientation of the anatomical object of interest (e.g. the heart) within the 3D ultrasonic volume data. This may, for example, be done using a three-dimensional implementation of the Generalized Hough Transform. Pose misalignment may be corrected by matching the geometrical model to the image making use of a global similarity transformation. The segmentation comprises an initial model that roughly represents the shape of the anatomical object of interest. Said model may be a multi-compartment mesh model. This initial model will be deformed by a transformation. This transformation is decomposed into two transformations of different kinds: A global transformation that can translate, rotate or rescale the initial shape of the geometrical model, if needed, and a local deformation that will actually deform the geometrical model so that it matches more precisely to the anatomical object of interest. This is usually done by defining the normal vectors of the surface of the geometrical model to match the image gradient; that is to say, the segmentation will look in the received ultrasonic image for bright-to-dark edges (or dark-to-bright), which usually represent the tissue borders in ultrasound images, i.e. the boundaries of the anatomical object of interest.
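The two-stage deformation described above can be sketched as follows. This is a minimal illustrative example, not the patented implementation: the three-vertex "mesh", the sigmoid stand-in for the image, and all parameter values are invented for demonstration, and the edge search simply picks the offset with the strongest intensity gradient along each vertex normal.

```python
import numpy as np

def global_similarity(vertices, scale, R, t):
    """Global transformation: rescale, rotate and translate the whole mesh."""
    return scale * vertices @ R.T + t

def local_deformation(vertices, normals, intensity, search_range=2.0, steps=21):
    """Local deformation: move each vertex along its normal to the position
    with the strongest intensity edge (largest gradient magnitude) on a
    short search profile -- a stand-in for the bright-to-dark tissue
    borders mentioned in the text."""
    offsets = np.linspace(-search_range, search_range, steps)
    deformed = vertices.copy()
    for i, (v, n) in enumerate(zip(vertices, normals)):
        samples = np.array([intensity(v + o * n) for o in offsets])
        grad = np.gradient(samples, offsets)
        deformed[i] = v + offsets[np.argmax(np.abs(grad))] * n
    return deformed

# Toy "image": a smooth bright-to-dark boundary at radius 5 around the origin.
intensity = lambda p: 1.0 / (1.0 + np.exp(4.0 * (np.linalg.norm(p) - 5.0)))

# Initial rough mesh: three vertices on a circle of radius 2, outward normals.
angles = np.array([0.0, 2.1, 4.2])
verts = 2.0 * np.stack([np.cos(angles), np.sin(angles), np.zeros(3)], axis=1)
normals = verts / np.linalg.norm(verts, axis=1, keepdims=True)

# Global step: scale the rough model up towards the object (no rotation).
verts = global_similarity(verts, scale=2.0, R=np.eye(3), t=np.zeros(3))
# Local step: snap each vertex to the detected boundary at radius 5.
fitted = local_deformation(verts, normals, intensity)
```

In a real segmentation the local step would additionally be regularized so that the mesh keeps a plausible shape; that constraint is omitted here for brevity.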
[0075] The segmented 3D volume data may then be further post-processed. The slice generator (SLG) 40 generates a plurality of two-dimensional slices from the 3D volume data. To this end, landmarks that define the planes of said 2D slices are encoded within the geometrical model. A set of three or more landmarks can represent a plane. These encoded landmarks may be mapped onto the segmented 3D volume data so as to obtain a set of 2D standard views of the anatomical object of interest generated from the 3D volume data. The slice generator 40 may furthermore be configured to define for each 2D standard view which anatomical features of the anatomical object of interest are expected to be contained within said view. This may be done using the geometrical model that is encoded with the anatomic features of the anatomical object of interest. It is thus known which anatomical features should occur in which 2D standard view.
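The step from three encoded landmarks to a slice plane, and the resampling of the 3D volume on that plane, can be sketched as follows. This is one possible implementation, assumed for illustration; it uses SciPy's `map_coordinates` for the interpolation, which the text does not prescribe.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def plane_from_landmarks(p0, p1, p2):
    """Three landmark points encoded in the mesh model span a slice plane,
    returned as an origin and two orthonormal in-plane axes."""
    u = p1 - p0
    u = u / np.linalg.norm(u)
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    v = np.cross(n, u)  # second in-plane axis, orthogonal to u and n
    return p0, u, v

def extract_slice(volume, origin, u, v, size=64, spacing=1.0):
    """Resample the 3D volume on a regular grid of the slice plane
    (order-1 spline, i.e. trilinear-style interpolation)."""
    ii, jj = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    pts = (origin[:, None, None]
           + spacing * ii * u[:, None, None]
           + spacing * jj * v[:, None, None])
    return map_coordinates(volume, pts.reshape(3, -1), order=1).reshape(size, size)

# Toy volume whose voxel value equals its z index; a plane at z = 10
# must therefore yield a slice filled with the value 10.
vol = np.indices((32, 32, 32))[2].astype(float)
p0 = np.array([0.0, 0.0, 10.0])
p1 = np.array([1.0, 0.0, 10.0])
p2 = np.array([0.0, 1.0, 10.0])
origin, u, v = plane_from_landmarks(p0, p1, p2)
slice_2d = extract_slice(vol, origin, u, v, size=16)
```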
[0076] The evaluation unit (EU) 42 may then evaluate a quality factor for each of the generated plurality of 2D slices by comparing each of said generated slices with the anatomical features expected for the respective 2D standard view. In other words, the evaluation unit 42 computes the coverage of each of the 2D standard views by the 3D volume data. This may be done by computing the overlap of the structure that should be covered and the field of view of the 3D ultrasound scan. The quality factor that is evaluated within the evaluation unit 42 for each of the generated plurality of 2D slices may thus be a quantitative factor that includes a ratio indicating to which extent the expected anatomical features are included in the respective 2D slice. This may be done by comparing the field of view of each of the 2D slices to the geometrical model of the anatomical object.
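One simple way of turning this overlap into a number is sketched below, under the assumption that the expected anatomical features are represented by model points and the field of view by an axis-aligned box; both are simplifications chosen for illustration, not details given in the text.

```python
import numpy as np

def quality_factor(feature_points, fov_min, fov_max):
    """Quantitative quality factor: the fraction of expected anatomical
    feature points (taken from the encoded geometrical model) that fall
    inside the field of view of the generated 2D slice."""
    inside = np.all((feature_points >= fov_min) & (feature_points <= fov_max), axis=1)
    return float(inside.mean())

# Hypothetical example: four expected feature points, three inside the FOV.
points = np.array([[1.0, 1.0], [2.0, 3.0], [5.0, 5.0], [9.0, 9.0]])
qf = quality_factor(points,
                    fov_min=np.array([0.0, 0.0]),
                    fov_max=np.array([6.0, 6.0]))
```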
[0077] In still other words, this means that for each 2D slice that is generated from the received 3D ultrasound volume data, it is determined how well the 2D standard view that corresponds to the generated 2D slice is covered. This information can be presented as a graphical icon, e.g. as a traffic light, and/or as a percentage on the display 18.
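The traffic-light and percentage presentation might be realized along the following lines; the thresholds 0.9 and 0.6 are invented for illustration and are not taken from the text.

```python
def coverage_icon(quality_factor):
    """Map the evaluated coverage ratio to a traffic-light style icon.
    The thresholds used here are illustrative assumptions."""
    if quality_factor >= 0.9:
        return "green"   # standard view well covered
    if quality_factor >= 0.6:
        return "yellow"  # main features covered, still acceptable
    return "red"         # poor coverage, re-scan advisable

def coverage_label(quality_factor):
    """Percentage representation of the same quality factor."""
    return f"{quality_factor:.0%}"
```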
[0078] As it will be explained further below in detail with reference to
[0079] In practice, more than one 3D ultrasound scan of the anatomical object of interest is usually performed. Preferably, a plurality of 3D ultrasound scans of the anatomical object of interest is performed. This results in a plurality of sets of volume data, which result from the plurality of different 3D scans of the body. For each of these sets of 3D volume data, the above-mentioned processing (segmentation, slice generation and evaluation) is performed by the ultrasound system 10. The plurality of sets of volume data resulting from the different 3D scans and the 2D slices that are generated in the above-mentioned way from said sets of volume data may be stored within the memory (MEM) 36 together with the evaluated quality factors of each of the 2D slices.
[0080] In this case a selector (SEL) 44 is configured to select for each 2D standard view a 2D slice that has the highest quality factor. This may be done by comparing the evaluated quality factors of corresponding 2D slices that are generated from each of the plurality of 3D sets of volume data which are stored in the memory 36. In other words, the selector 44 selects for each standard view the best 2D slice out of all 2D slices that have been generated from the different sets of 3D volume data (different ultrasound scans). This means that the different 2D standard views that are simultaneously illustrated on the display 18 may result from different 3D ultrasound scans, wherein the selector 44 automatically determines from which 3D volume data set a specific 2D standard view may be generated best.
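The behaviour of the selector 44 can be sketched as follows; the data layout (a list of per-scan dictionaries mapping view names to slice/quality pairs) is an assumption made purely for illustration.

```python
def select_best_slices(scans):
    """For each 2D standard view, keep the slice with the highest quality
    factor across all acquired 3D volume data sets.

    `scans` is a list of dicts mapping view name -> (slice, quality_factor)."""
    best = {}
    for scan in scans:
        for view, (slc, qf) in scan.items():
            if view not in best or qf > best[view][1]:
                best[view] = (slc, qf)
    return best

# Two hypothetical scans covering the same two standard views.
scans = [
    {"ME four chamber": ("slice_1a", 0.95), "ME two chamber": ("slice_1b", 0.60)},
    {"ME four chamber": ("slice_2a", 0.80), "ME two chamber": ("slice_2b", 0.90)},
]
best = select_best_slices(scans)
```

As in the text, the displayed set of standard views may thus mix slices originating from different 3D scans.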
[0081] This will be explained in the following in further detail by example of a transesophageal echocardiography (TEE).
[0082] For a full TEE examination, 20 different 2D TEE standard views have to be acquired. An overview of the different standard views is given in a schematic manner in
[0084] Then, said model is used to compute the planes of all 20 TEE standard views. This is done in step S14 by generating a plurality of 2D slices from the at least one set of 3D volume data. To this end, the respective slice locations are defined based on the geometrical model of the heart. Due to the segmentation performed in advance (in step S12), these slice locations may be mapped onto the 3D volume data, such that the 2D slices can be computed from the 3D volume data set by interpolating the 3D image. For example, to compute the ME four chamber view (see
[0085] Then, in step S16 it is defined for each 2D standard view which anatomical features of the heart are expected to be contained in said standard view. This may be done by encoding the geometrical model with an anatomy-related description that identifies segments of the heart within each 2D standard view that correspond to respective anatomic features, for example, the heart chambers, the main vessels, the septa, the heart valves, etc. If the geometrical model is encoded with this anatomy-related information, it is easier in the further procedure to evaluate whether the generated 2D slices cover all information that should be included within the respective 2D standard view.
[0086] In step S18 it is then evaluated for each of the generated 2D slices how well the 2D standard view is covered. To this end, a quality factor is computed for each of the generated 2D slices, wherein said quality factor may be a quantitative factor that includes a ratio indicating to which extent the expected anatomical features are included in the respective 2D slice. This may be done by comparing the field of view of each of the generated 2D slices to the geometrical model of the heart.
[0088] As it may be seen, most of the anatomical features of interest are within the slices 5a, b and d, i.e. most of the border lines 46 are within the field of view. The quality factors that have been evaluated for these slices are therefore comparatively high, which is indicated in
[0089] It may be furthermore seen that the generated slices 5h and i are still acceptable, because the main anatomical features of interest, e.g. the aortic valve within
[0090] Returning to
[0092] It may therefore be seen that the 2D standard views a, b and d are best covered within the 2D slices that were generated from the first 3D volume data set (illustrated in FIG. 5a, b and d), while the 2D standard views h, i and m are best covered within the 2D slices that were generated from the second 3D volume data set (illustrated in
[0093] In step S20 (see
[0094] The result is shown in
[0095] A still further improvement of the method schematically illustrated in
[0096] It shall be noted that step S22 is not a mandatory method step, but an optional one. Of course, a combination of both approaches, i.e. generating the 2D slices from the 3D volume data set and generating the 2D slices directly by performing an additional 2D scan, is possible as well.
[0097] While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.
[0098] In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single element or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
[0099] A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
[0100] Any reference signs in the claims should not be construed as limiting the scope.