Image segmentation and prediction of segmentation
11393099 · 2022-07-19
CPC classification: G06F17/16 (PHYSICS)
Abstract
Systems and methods are provided for generating and using statistical data which is indicative of a difference in shape of a type of anatomical structure between images acquired by a first imaging modality and images acquired by a second imaging modality. This statistical data may then be used to modify a first segmentation of the anatomical structure which is obtained from an image acquired by the first imaging modality so as to predict the shape of the anatomical structure in the second imaging modality, or in general, to generate a second segmentation of the anatomical structure as it may appear in the second imaging modality based on the statistical data and the first segmentation.
Claims
1. A system configured for image segmentation, comprising: an image data interface configured to access an image of an anatomical structure of a patient, wherein the image is acquired by a first imaging modality; a memory comprising instruction data representing a set of instructions; a processor configured to communicate with the image data interface and the memory and to execute the set of instructions, wherein the set of instructions, when executed by the processor, cause the processor to: segment the image to obtain a first segmentation of the anatomical structure of the patient; access statistical data indicative of a structural difference in shape of the type of anatomical structure between a) images acquired by the first imaging modality and b) images acquired by a second imaging modality, wherein the statistical data comprises a weighted sum applied to a mean mesh generated from paired images respectively generated by the first imaging modality and the second imaging modality; and based on the first segmentation and the statistical data, generate a second segmentation of the anatomical structure which represents an estimate of the structural shape of the anatomical structure of the patient in an image acquired by the second imaging modality.
2. The system according to claim 1, wherein the set of instructions, when executed by the processor, cause the processor to compute a measurement from the second segmentation of the anatomical structure.
3. The system according to claim 2, wherein the measurement comprises at least one of: a volume measurement, a distance measurement, an area measurement, a curvature measurement, a measurement of a circumference, and a measurement of a diameter.
4. The system according to claim 1, wherein the image is a pre-interventional image, and wherein the set of instructions, when executed by the processor, cause the processor to overlay the second segmentation of the anatomical structure over an interventional image which is acquired by the second imaging modality.
5. A method for image segmentation, comprising: accessing an image of an anatomical structure of a patient, wherein the image is acquired by a first imaging modality; segmenting the image to obtain a first segmentation of the anatomical structure of the patient; accessing statistical data indicative of a structural difference in shape of the type of anatomical structure between a) images acquired by the first imaging modality and b) images acquired by a second imaging modality, wherein the statistical data comprises a weighted sum applied to a mean mesh generated from paired images respectively generated by the first imaging modality and the second imaging modality; and based on the first segmentation and the statistical data, generating a second segmentation of the anatomical structure which represents an estimate of the structural shape of the anatomical structure of the patient in an image acquired by the second imaging modality.
6. A non-transitory computer readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to: access an image of an anatomical structure of a patient, wherein the image is acquired by a first imaging modality; segment the image to obtain a first segmentation of the anatomical structure of the patient; access statistical data indicative of a structural difference in shape of the type of anatomical structure between a) images acquired by the first imaging modality and b) images acquired by a second imaging modality, wherein the statistical data comprises a weighted sum applied to a mean mesh generated from paired images respectively generated by the first imaging modality and the second imaging modality; and generate, based on the first segmentation and the statistical data, a second segmentation of the anatomical structure which represents an estimate of the structural shape of the anatomical structure of the patient in an image acquired by the second imaging modality.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) These and other aspects of the invention will be apparent from and elucidated further with reference to the embodiments described by way of example in the following description and with reference to the accompanying drawings, in which
(11) It should be noted that the figures are purely diagrammatic and not drawn to scale. In the figures, elements which correspond to elements already described may have the same reference numerals.
LIST OF REFERENCE NUMBERS
(12) The following list of reference numbers is provided for facilitating the interpretation of the drawings and shall not be construed as limiting the claims.
010 image repository
012 data communication
020 first set of images
025 second set of images
030 statistical data
040 image repository
042 data communication
050 image
060 display
062 display data
080 user input device
082 user input data
100 system for generating statistical data
120 image data interface
122 internal data communication
140 processor
142 internal data communication
160 memory
200 system for segmentation
220 image data interface
222 internal data communication
240 processor
242, 244 internal data communication
260 memory
280 user interface subsystem
282 display processor
284 user input interface
300, 302 histogram of scale factors
310, 312 bins representing scale factor
320, 322 occurrence
350, 352 point-to-surface distance between US and MR meshes
400 method for generating statistical data
410 accessing first set and second set of images
420 segmenting first set of images
430 segmenting second set of images
440 generating statistical data
500 method for image segmentation
510 accessing image of patient
520 accessing statistical data
530 segmenting the image
540 generating second segmentation
600 computer readable medium
610 non-transitory data
DETAILED DESCRIPTION OF EMBODIMENTS
(13) Systems and methods are described for generating and using statistical data which is indicative of a difference in shape of a type of anatomical structure between images acquired by a first imaging modality and images acquired by a second imaging modality. This statistical data may then be used to modify a segmentation of the anatomical structure which is obtained from an image acquired by the first imaging modality so as to predict the shape of the anatomical structure in the second imaging modality. Accordingly, the statistical data may also be termed ‘shape difference data’, but is in this description referred to as statistical data since it is generated based on a statistical analysis of a set of segmentations.
(15) The system 100 is further shown to comprise a processor 140 configured to internally communicate with the image data interface 120 via data communication 122, and a memory 160 accessible by the processor 140 via data communication 142.
(16) The processor 140 may be configured to, during operation of the system 100, segment individual images of the first set of images 020 to obtain a first set of segmentations of the type of anatomical structure, segment individual images of the second set of images 025 to obtain a second set of segmentations of the type of anatomical structure, and based on the first set of segmentations and the second set of segmentations, generate statistical data 030 which is indicative of a difference in shape of the type of anatomical structure between a) the images acquired by the first imaging modality and b) the images acquired by the second imaging modality.
(18) The system 200 is further shown to comprise a processor 240 configured to internally communicate with the image data interface 220 via data communication 222, a memory 260 accessible by the processor 240 via data communication 242, and a user interface subsystem 280 with a display processor 282 and a user input interface 284 which is configured to internally communicate with the processor 240 via data communication 244.
(19) The processor 240 may be configured to, during operation of the system 200, segment the image 050 to obtain a first segmentation of the anatomical structure of the patient, access statistical data indicative of a difference in shape of the type of anatomical structure between a) images acquired by the first imaging modality and b) images acquired by a second imaging modality, and based on the first segmentation and the statistical data, generate a second segmentation of the anatomical structure which represents an estimate of the shape of the anatomical structure of the patient in an image acquired by the second imaging modality. Although the statistical data itself is not shown in
(20) The user interface subsystem 280 may be configured to, during operation of the system 200, enable a user to interact with the system 200 via a graphical user interface. For that purpose, the display processor 282 may be configured to generate display data 062 for a display 060 so as to display the graphical user interface to a user. The graphical user interface may be represented by a set of interface instructions stored as data in a memory accessible to the display processor 282, being for example the memory 260 or another memory of the system 200. The user input interface 284 may be configured to receive user input data 082 from a user input device 080 operable by the user. The user input device 080 may take various forms, including but not limited to a computer mouse, touch screen, keyboard, microphone, etc.
(21) In general, each of the systems of
(23) However, it has been found that there are systematic differences in the results depending on the imaging modality in which the measurements are performed. This followed from a comparison of a set of Magnetic Resonance (MR) segmentation results and corresponding Ultrasound (US) segmentation results, which involved comparing a US mesh to a corresponding MR mesh for a set of patients. Before an MR mesh and a US mesh were pairwise compared, the US mesh was registered to the MR mesh by applying a point-based transformation comprising rotation, translation, and, depending on the type of comparison, also a scaling. All registrations were performed with respect to the left ventricle, since the right ventricle and the atria were not fully covered by either of the two imaging modalities.
(24) The scaling factors of a rigid point-based transformation including scaling were compared and are shown in
(25) A second registration was performed to investigate the Euclidean distance between the MR mesh and the US mesh. For the second registration, a rigid point-based registration without scaling was applied (rotation and translation). For each triangle in the MR mesh, the closest triangle in the US mesh was determined by calculating the Euclidean distance between the triangles' centers. The Euclidean distance for each triangle was then averaged over all segmentation results and depicted in
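The triangle-center distance computation described in this paragraph can be sketched as follows. This is an illustrative reconstruction, not code from the patent; the array layouts (vertex arrays of shape (V, 3), triangle index arrays of shape (T, 3)) are assumptions:

```python
import numpy as np

def triangle_centers(vertices, triangles):
    """Centroids of mesh triangles; vertices is (V, 3), triangles is (T, 3) int indices."""
    return vertices[triangles].mean(axis=1)

def closest_center_distances(mr_vertices, mr_tris, us_vertices, us_tris):
    """For each MR triangle, the Euclidean distance from its center to the
    center of the closest US triangle (one value per MR triangle)."""
    c_mr = triangle_centers(mr_vertices, mr_tris)
    c_us = triangle_centers(us_vertices, us_tris)
    # Full (T_mr, T_us) distance matrix; adequate for meshes of a few thousand triangles.
    d = np.linalg.norm(c_mr[:, None, :] - c_us[None, :, :], axis=-1)
    return d.min(axis=1)
```

The resulting per-triangle distances can then be averaged over all segmentation results, as described in the text.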
(26) It is possible to compensate for such shape differences, or in particular to predict the shape of an anatomical structure in a second imaging modality from a segmentation of the anatomical structure which is obtained from an image acquired by a first imaging modality, as described in the following. Here, an exemplary embodiment is given which is considered as illustrative and as not limiting the invention, of which modifications may be made without departing from the scope of the invention as set forth in the claims.
(27) It may be assumed that a set of S corresponding segmentation results of two imaging modalities is available that shows the anatomical structure of interest in a corresponding state; in the following, the anatomical structure of interest is assumed to be an organ. The corresponding state may, for example, be a same heart phase in the case of cardiac images, or in general there being no interventions between the acquisitions of the images.
(28) Model-based segmentation may be used to segment the organ and to generate a set of corresponding points on the organ surface. An example of such model-based segmentation is described in “Automatic Model-based Segmentation of the Heart in CT Images” by Ecabert et al., IEEE Transactions on Medical Imaging 2008, 27(9), 1189-1201. An example of a set of M corresponding points on the organ surface is described in “Automated 3-D PDM construction from segmented images using deformable models” by Kaus et al., IEEE Transactions on Medical Imaging 2003, 22(8), 1005-1013. In a specific example, shape-constrained deformable models may be used, e.g., as described in “Shape-constrained deformable models and applications in medical imaging” by Weese et al., Lecture Notes in Computational Vision and Biomechanics, 14:151-184, 2014, which may take advantage of the a-priori knowledge about the shape of the object similar to active shape models, but which may also be flexible similar to active contour models.
(29) In general, the models used in both imaging modalities may be the same, e.g., have a same geometry and level of detail, but may also differ in their geometry and/or level of detail. An example of the latter case is that for MR, a shape-constrained deformable cine model may be used which may have 7284 vertices and 14767 triangles, whereas for Ultrasound, a shape-constrained deformable model may be used having 2411 vertices and 5126 triangles and therefore having a coarser structure compared to the MR model.
(30) In general, the segmentation may result in the shape of the organ of interest being represented as a point distribution model (PDM). The PDM or mesh of the first imaging modality may comprise M vertices while the mesh of the second imaging modality may comprise N vertices. Each vertex may represent a three-dimensional vector describing a position in space and may be referred to as $x_v^i$, with i being an index indicating the patient data set (i ∈ {1, . . . , S}) and v giving the vertex number of the PDM (v ∈ {1, . . . , M} or v ∈ {1, . . . , N}, respectively). The meshes of the first and second imaging modality may then be defined as:

(31) $x^i = (x_1^i\ x_2^i \ldots x_M^i)^T$, i = 1, . . . , S; referring to the first imaging modality

(32) $y^i = (y_1^i\ y_2^i \ldots y_N^i)^T$, i = 1, . . . , S; referring to the second imaging modality
(33) As a first processing step, a patient j ∈ {1, . . . , S} with a typical organ shape may be selected and a rigid point-based registration may be performed:

$x^j = R\,y^j + T$

(34) to register the mesh of the second imaging modality $y^j$ to the first imaging modality mesh $x^j$, resulting in a registered mesh $y_{reg}^j$. R is the rotation matrix and T the translation vector. A transformation involving scaling may not be needed, since the size difference may be modeled by the shape model of differences, as further described in the following.

(35) In the next step, the remaining meshes $x^i$ (i ≠ j) may be aligned to the selected patient mesh $x^j$, and the remaining meshes $y^i$ (i ≠ j) may be aligned to the registered reference mesh $y_{reg}^j$. Such alignment may involve a rigid point-based registration (rotation and translation), resulting in $x_{reg}^i = (x_{1,reg}^i\ x_{2,reg}^i \ldots x_{M,reg}^i)^T$ and $y_{reg}^i = (y_{1,reg}^i\ y_{2,reg}^i \ldots y_{N,reg}^i)^T$.
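One standard way to realize such a rigid point-based registration of corresponding point sets is the Kabsch/Umeyama method. The following sketch is illustrative and not taken from the patent; the function name and the (V, 3) array layout are assumptions:

```python
import numpy as np

def rigid_register(src, dst, with_scaling=False):
    """Rigid point-based registration of src onto dst (both (V, 3) arrays of
    corresponding points) via the Kabsch/Umeyama method.
    Returns (R, t, s) such that s * (R @ p) + t maps each src point p onto dst."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_src, dst - mu_dst
    # Cross-covariance between centered destination and source points.
    U, sigma, Vt = np.linalg.svd(B.T @ A)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = (sigma * np.diag(D)).sum() / (A ** 2).sum() if with_scaling else 1.0
    t = mu_dst - s * R @ mu_src
    return R, t, s
```

With `with_scaling=True` the same routine yields the scale factors compared in the histograms mentioned earlier; without scaling it corresponds to the rotation-and-translation registration used for the alignment step.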
(36) Next, the mean meshes of both modalities may be computed as follows:

(37) $\bar{x} = \frac{1}{S} \sum_{i=1}^{S} x_{reg}^i, \qquad \bar{y} = \frac{1}{S} \sum_{i=1}^{S} y_{reg}^i$
(38) and an eigenvalue analysis of the matrix $AA^T$ may be performed, with the columns of A stacking the deviations of both modalities from the respective mean mesh:

(39) $A = (a^1\ a^2 \ldots a^S), \qquad a^i = \begin{pmatrix} x_{reg}^i - \bar{x} \\ y_{reg}^i - \bar{y} \end{pmatrix}$
(40) The shape of the organ-of-interest may now be approximated according to:

(41) $x \approx \bar{x} + \sum_{k=1}^{p} w_k \mu_k, \qquad y \approx \bar{y} + \sum_{k=1}^{p} w_k v_k$
(42) where k = 1, . . . , p indexes the modes corresponding to the p largest eigenvalues, and $\mu_k$ and $v_k$ refer to the components of the corresponding normalized eigenvectors belonging to the first and second imaging modality, respectively. The number p of eigenvalues was determined by p = S − 1, and the $w_k$ are weights.
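The construction of the joint shape model in paragraphs (36)-(42) can be sketched as follows. This is an illustrative reconstruction: the flattened (S, 3·M) array layout and the SVD route to the eigenvectors of $AA^T$ are implementation assumptions, not details from the patent:

```python
import numpy as np

def joint_shape_model(X_reg, Y_reg):
    """Joint PCA over paired, registered meshes.
    X_reg: (S, 3*M) flattened first-modality meshes, one row per patient;
    Y_reg: (S, 3*N) flattened second-modality meshes.
    Returns the mean meshes and the per-modality eigenvector blocks."""
    S = X_reg.shape[0]
    x_bar, y_bar = X_reg.mean(axis=0), Y_reg.mean(axis=0)
    # Columns of A stack the deviations of both modalities per patient.
    A = np.concatenate([X_reg - x_bar, Y_reg - y_bar], axis=1).T
    # Left singular vectors of A are the eigenvectors of A @ A.T.
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    p = S - 1  # number of modes, as in the description
    mu = U[: X_reg.shape[1], :p]   # first-modality components mu_k
    nu = U[X_reg.shape[1]:, :p]    # second-modality components v_k
    return x_bar, y_bar, mu, nu
```

Computing the SVD of A instead of the eigendecomposition of $AA^T$ is a common numerical shortcut: both yield the same eigenvectors, but the SVD avoids forming the large $AA^T$ matrix explicitly.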
(43) The shape difference between the two imaging modalities may then be calculated as:

(44) $y - x \approx \bar{y} - \bar{x} + \sum_{k=1}^{p} w_k (v_k - \mu_k)$
(45) The above steps may be performed by the system of
(46) The system of
$\tilde{x} = (\tilde{x}_1\ \tilde{x}_2 \ldots \tilde{x}_M)^T$
(47) In a first step, the given mesh $\tilde{x}$ may be registered to the mean mesh of the first imaging modality, leading to a registered mesh $\tilde{x}_{reg}$. The shape of the organ may be approximated by using a weighted sum of the eigenvectors and the mean mesh:

(48) $\tilde{x}_{reg} \approx \bar{x} + \sum_{k=1}^{p} \tilde{w}_k \mu_k$
(49) The weights $\tilde{w}_k$ which may provide the best approximation of the new mesh may be calculated based on:

(50) $\tilde{x}_{reg} - \bar{x} - \sum_{k=1}^{p} \tilde{w}_k \mu_k \overset{!}{=} 0$

(51) which minimizes the difference between the approximated mesh and the original mesh. Here, the "!" over the "=" symbol is used to denote that the left-hand side should become zero. With $X = \tilde{x}_{reg} - \bar{x}$ and $M = (\mu_1\ \mu_2 \ldots \mu_p)$ denoting the matrix of eigenvectors, this may be written as:

$X = M\tilde{w}$
(52) The weighting factors $\tilde{w} = (\tilde{w}_1\ \tilde{w}_2 \ldots \tilde{w}_p)^T$ of this overdetermined system may be determined by applying a QR decomposition, e.g., as described in the handbook "Matrix computations", 3rd ed., The Johns Hopkins University Press, 1983.
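The weight estimation of paragraphs (49)-(52) amounts to solving an overdetermined linear least-squares system. A minimal sketch using NumPy's QR routine, assumed here as a stand-in for the decomposition referenced in the text:

```python
import numpy as np

def fit_weights(x_new_reg, x_bar, mu):
    """Least-squares weights w for the overdetermined system X = M w,
    where X is the deviation of the registered new mesh from the mean
    mesh and M holds the first-modality eigenvectors mu_k as columns.
    Solved via a reduced QR decomposition: R w = Q.T @ X."""
    X = x_new_reg - x_bar
    Q, R = np.linalg.qr(mu)  # mu: (3*M, p) with p much smaller than 3*M
    return np.linalg.solve(R, Q.T @ X)
```

This assumes the eigenvector matrix has full column rank, which holds when the p retained modes are linearly independent.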
(53) By reformulating the earlier described shape difference y − x, the shape of the organ as it would have been observed in the second imaging modality may now be approximated by:

(54) $\tilde{y} \approx \tilde{x}_{reg} + \bar{y} - \bar{x} + \sum_{k=1}^{p} \tilde{w}_k (v_k - \mu_k)$
(55) It is noted that if the numbers of vertices of the mesh of the first imaging modality and the mesh of the second image modality are not the same (M≠N), the numbers may be adapted before this equation can be applied, e.g., by using a mapping that maps the vertices of the left ventricle of the US mesh to their corresponding vertices in the MR mesh.
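The prediction step can be sketched as below, combining the fitted weights with the modeled shape difference. This illustrative sketch assumes equal vertex counts in both models (M = N); otherwise the vertex correspondence mapping mentioned above must be applied first:

```python
import numpy as np

def predict_second_modality(x_new_reg, x_bar, y_bar, mu, nu):
    """Predict the mesh as it would appear in the second imaging modality
    by adding the modeled shape difference to the first-modality mesh.
    Assumes equal vertex counts in both models (M == N)."""
    # Weights that best explain the new mesh in the first-modality model.
    w, *_ = np.linalg.lstsq(mu, x_new_reg - x_bar, rcond=None)
    # Add the mean shape difference and the per-mode differences.
    return x_new_reg + (y_bar - x_bar) + (nu - mu) @ w
```

When the first-modality approximation is exact, this reduces to the second-modality model evaluated at the same weights, i.e. the mean mesh of the second modality plus the weighted second-modality eigenvector components.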
(56) It will be appreciated that various alternative ways of generating and using the statistical data are conceivable and are within reach of the skilled person based on this description. In particular, various other statistical analysis techniques may be used instead of principal component analysis (PCA), including but not limited to PCA with Orthomax, sparse PCA, independent component analysis, maximum autocorrelation factor (MAF) analysis, kernel PCA, etc. Moreover, as an alternative to linear eigenvalue decomposition, non-linear decompositions may also be used. For example, any suitable non-linear eigenvalue decomposition may be used as introduced in section 3.2 of "Statistical shape models for 3D medical image segmentation: A review" by Heimann et al., Medical Image Analysis 13 (2009), pp. 543-563, which is hereby incorporated by reference with respect to the generation of a mean shape.
(57) Exemplary use cases include, but are not limited to, the following.
(58) Volume measurements such as the heart chamber volume or the volume of brain structures may be computed from the volume enclosed by the corresponding mesh structure after adapting the mesh model to an image. Similarly, diameter measurements, etc., may be derived from the mesh structure. The described approach allows the measurement to be approximately computed as it would have been observed in the second imaging modality, or provides information about the variation of the measurement between different imaging modalities. This information may be of help in, e.g., follow-up studies to assess disease progression or treatment outcome when different imaging modalities have been used.
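A volume measurement from a closed segmentation mesh, as mentioned above, can be computed from the signed volumes of the tetrahedra spanned by the coordinate origin and each triangle. A minimal sketch (illustrative; not a specific implementation from the patent):

```python
import numpy as np

def mesh_volume(vertices, triangles):
    """Volume enclosed by a closed, consistently oriented triangle mesh,
    computed as the absolute sum of signed origin-tetrahedron volumes.
    vertices: (V, 3) float array; triangles: (T, 3) integer index array."""
    v0, v1, v2 = (vertices[triangles[:, k]] for k in range(3))
    signed = np.einsum('ij,ij->i', v0, np.cross(v1, v2))
    return abs(signed.sum()) / 6.0
```

The signed contributions of tetrahedra outside the mesh cancel, so the result is the enclosed volume regardless of where the origin lies, as long as the triangle orientation is consistent across the mesh.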
(59) For example, clinical guidelines and recommendations for clinical measurements, such as the threshold on the ejection fraction of the left ventricle, are usually defined for a specific imaging modality, but are used in clinical practice independently of the imaging modality, as it may be laborious and expensive to use several imaging modalities for diagnosis or clinical treatment planning. This may result in inaccurate or erroneous measurements since the measured quantity may vary, e.g., in size or shape, across different imaging modalities. The described approach allows the measurement to be approximately computed as it would have been observed in the second imaging modality.
(60) For interventional guidance, pre-operatively acquired models of one imaging modality are often overlaid onto interventional images of a second imaging modality. The described approach makes it possible to compensate for shape differences between both modalities and to generate an intra-operative overlay over the interventional images with improved accuracy.
(62) The method 400 comprises, in an operation titled “ACCESSING FIRST SET AND SECOND SET OF IMAGES”, accessing 410 a first set and a second set of images of a type of anatomical structure, wherein the first set of images is acquired by a first imaging modality and the second set of images is acquired by a second imaging modality. The method 400 further comprises, in an operation titled “SEGMENTING FIRST SET OF IMAGES”, segmenting 420 individual images of the first set of images to obtain a first set of segmentations of the type of anatomical structure. The method 400 further comprises, in an operation titled “SEGMENTING SECOND SET OF IMAGES”, segmenting 430 individual images of the second set of images to obtain a second set of segmentations of the type of anatomical structure. The method 400 further comprises, in an operation titled “GENERATING STATISTICAL DATA”, based on the first set of segmentations and the second set of segmentations, generating 440 statistical data which is indicative of a difference in shape of the type of anatomical structure between a) the images acquired by the first imaging modality and b) the images acquired by the second imaging modality. For example, generating 440 may comprise generating statistical data which is indicative of a difference in physical shape of the underlying (e.g. actual or real) anatomical structure imaged by the first and second modalities. As described above, the difference in shape represents differences in the shape of the physical anatomy (e.g. the anatomical structure being imaged) which may be due to, for example, the first image modality imaging different parts of the anatomical structure more or less clearly than the second image modality.
(64) The method 500 comprises, in an operation titled “ACCESSING IMAGE OF PATIENT”, accessing 510 an image of an anatomical structure of a patient, wherein the image is acquired by a first imaging modality. The method 500 further comprises, in an operation titled “ACCESSING STATISTICAL DATA”, accessing 520 statistical data indicative of a difference in shape of the type of anatomical structure between a) images acquired by the first imaging modality and b) images acquired by a second imaging modality. Operation 520 may comprise accessing statistical data generated using the method 400 as described above. The method 500 further comprises, in an operation titled “SEGMENTING THE IMAGE”, segmenting 530 the image to obtain a first segmentation of the anatomical structure of the patient. The method 500 further comprises, in an operation titled “GENERATING SECOND SEGMENTATION”, based on the first segmentation and the statistical data, generating 540 a second segmentation of the anatomical structure which represents an estimate of the shape of the anatomical structure of the patient in an image acquired by the second imaging modality.
(65) It will be appreciated that the operations of
(66) Each method may be implemented on a computer as a computer implemented method, as dedicated hardware, or as a combination of both. As also illustrated in
(67) Examples, embodiments or optional features, whether indicated as non-limiting or not, are not to be understood as limiting the invention as claimed.
(68) It will be appreciated that the invention also applies to computer programs, particularly computer programs on or in a carrier, adapted to put the invention into practice. The program may be in the form of a source code, an object code, a code intermediate source and an object code such as in a partially compiled form, or in any other form suitable for use in the implementation of the method according to the invention. It will also be appreciated that such a program may have many different architectural designs. For example, a program code implementing the functionality of the method or system according to the invention may be sub-divided into one or more sub-routines. Many different ways of distributing the functionality among these sub-routines will be apparent to the skilled person. The sub-routines may be stored together in one executable file to form a self-contained program. Such an executable file may comprise computer-executable instructions, for example, processor instructions and/or interpreter instructions (e.g. Java interpreter instructions). Alternatively, one or more or all of the sub-routines may be stored in at least one external library file and linked with a main program either statically or dynamically, e.g. at run-time. The main program contains at least one call to at least one of the sub-routines. The sub-routines may also comprise function calls to each other. An embodiment relating to a computer program product comprises computer-executable instructions corresponding to each processing stage of at least one of the methods set forth herein. These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically. Another embodiment relating to a computer program product comprises computer-executable instructions corresponding to each means of at least one of the systems and/or products set forth herein. 
These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically.
(69) The carrier of a computer program may be any entity or device capable of carrying the program. For example, the carrier may include a data storage, such as a ROM, for example, a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example, a hard disk. Furthermore, the carrier may be a transmissible carrier such as an electric or optical signal, which may be conveyed via electric or optical cable or by radio or other means. When the program is embodied in such a signal, the carrier may be constituted by such a cable or other device or means. Alternatively, the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted to perform, or used in the performance of, the relevant method.
(70) It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb “comprise” and its conjugations does not exclude the presence of elements or stages other than those stated in a claim. The article “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.