Method and apparatus for segmentation and registration of longitudinal images
09721338 · 2017-08-01
Assignee
Inventors
CPC classification
G06T2207/10096
PHYSICS
International classification
Abstract
The described invention provides systems and methods for detecting and segmenting a lesion from longitudinal, time series, or multi-parametric imaging by utilizing spectral embedding-based active contour (SEAC). In addition, the described invention further provides systems and methods for registering time series data by utilizing reduced-dimension eigenvectors derived from spectral embedding (SE) of feature scenes (SERg).
Claims
1. A method for detecting and segmenting a lesion from longitudinal, time series, or multiparametric imaging by utilizing spectral embedding-based active contour (SEAC), the method comprising: (a) obtaining a plurality of images from a longitudinal, time series, or multi-parametric imaging; (b) applying spectral embedding (SE) to the longitudinal, time series, or multi-parametric imaging in a pixel-wise fashion to reduce a plurality of time point images to a single parametric embedding image (PrEIm), wherein the spectral embedding (SE) produces three smallest eigenvectors at each pixel location in the single parametric image by presenting color values in the parametric embedding image (PrEIm); (c) calculating spatial tensor-based gradients on the parametric embedding image (PrEIm) derived from spectral embedding (SE) eigenvectors; (d) selecting a point within an object of interest in the parametric embedding image (PrEIm) to serve as an initialization for an active contour (AC) model describing boundaries of shapes in the parametric embedding image (PrEIm); (e) evolving the active contour (AC) model on the parametric embedding image (PrEIm), wherein the active contour model is simultaneously driven by both boundary and region information obtained from the parametric embedding image (PrEIm), wherein the evolving of the active contour (AC) model at a border between a foreground and background area of the parametric embedding image (PrEIm) stops when a gradient magnitude between the foreground and background areas is maximized and a difference between intensity distributions and region statistics for the foreground and background is maximized; and (f) detecting and segmenting the lesion based on morphological features derived from (e).
2. The method according to claim 1, wherein the longitudinal, time series, or multiparametric imaging comprises one or more of the following: dynamic contrast enhanced magnetic resonance imaging (DCE-MRI); T1 imaging; T2 imaging; and diffusion-weighted imaging.
3. The method according to claim 1, wherein at least one of steps (a), (b), (c), (d), (e) and (f) is performed using a computer.
4. The method according to claim 1, wherein at least one of steps (a), (b), (c), (d), (e) and (f) is performed automatically.
5. The method according to claim 1, wherein the lesion is a cancer.
6. The method according to claim 5, wherein the cancer is one or more of the following: a breast cancer; a prostate cancer; a brain cancer; a liver cancer; a kidney cancer.
7. The method according to claim 1, wherein the lesion is a plaque in a blood vessel.
8. The method according to claim 7, wherein the blood vessel is a carotid artery.
9. A system comprising: a computer comprising a computer processor; and a computer-readable storage medium tangibly storing thereon computer program instructions capable of being executed by the computer processor of the computing device, the computer program instructions defining the steps of: (a) obtaining a plurality of images from a longitudinal, time series, or multiparametric imaging; (b) applying spectral embedding (SE) to longitudinal, time series, or multi-parametric imaging in a pixel-wise fashion to reduce a plurality of time point images to a single parametric embedding image (PrEIm), wherein the spectral embedding produces three smallest eigenvectors at each pixel location in the single parametric image by presenting color values in the parametric embedding image (PrEIm); (c) calculating spatial tensor-based gradients on the parametric embedding image (PrEIm) derived from spectral embedding (SE) eigenvectors; (d) selecting a point within an object of interest in the parametric embedding image (PrEIm) to serve as an initialization for an active contour (AC) model describing boundaries of shapes in the parametric embedding image (PrEIm); (e) evolving the active contour (AC) model on the parametric embedding image (PrEIm), wherein the active contour model is simultaneously driven by both boundary and region information obtained from the parametric embedding image (PrEIm), wherein the evolving of the active contour (AC) model at a border between a foreground and background area of the parametric embedding image (PrEIm) stops when a gradient magnitude between the foreground and background areas is maximized and a difference between intensity distributions and region statistics for the foreground and background is maximized; and (f) detecting and segmenting a lesion based on morphological features derived from (e).
10. The system according to claim 9, wherein the longitudinal, time series, or multi-parametric imaging comprises one or more of the following: dynamic contrast enhanced magnetic resonance imaging (DCE-MRI); T1 imaging; T2 imaging; and diffusion-weighted imaging.
11. The system according to claim 9, wherein the lesion is a cancer.
12. The system according to claim 11, wherein the cancer is one or more of the following: a breast cancer; a prostate cancer; a brain cancer; a liver cancer; a kidney cancer.
13. The system according to claim 9, wherein the lesion is a plaque in a blood vessel.
14. The system according to claim 13, wherein the blood vessel is a carotid artery.
15. A method for registering time series data by utilizing reduced-dimension eigenvectors derived from spectral embedding (SE) of a feature scene, the method comprising: (a) obtaining a plurality of images from a longitudinal, time series, or multiparametric imaging; (b) applying spectral embedding (SE) to time series data to transform a plurality of time series images into an alternative data presentation, wherein the spectral embedding (SE) places a pre-contrast image and a post-contrast image in a same embedding space and embeds eigenvectors that separate salient regions from non-salient regions in the image and identify corresponding regions in both the pre-contrast and post-contrast images; (c) driving registration based on areas of the salient regions and the corresponding regions identified in (b); and (d) capturing statistics of multi-dimensional data found in multiple spectral embedding (SE) eigenvectors.
16. The method according to claim 15, wherein capturing in (d) is performed using alpha-mutual information (α-MI).
17. A system comprising: a computer comprising a computer processor; and a computer-readable storage medium tangibly storing thereon computer program instructions capable of being executed by the computer processor of the computing device, the computer program instructions defining the steps of: (a) obtaining a plurality of images from a longitudinal, time series, or multiparametric imaging, (b) applying spectral embedding (SE) to time series data to transform time series images into an alternative data presentation, wherein the spectral embedding (SE) places a pre-contrast image and a post-contrast image in a same embedding space and embeds eigenvectors that separate salient regions from non-salient regions in the image and identify corresponding regions in both the pre-contrast and post-contrast images, (c) driving registration based on areas of the salient regions and the corresponding regions identified in (b), and (d) capturing statistics of multi-dimensional data found in multiple spectral embedding eigenvectors.
18. The system according to claim 17, wherein capturing in (d) is performed using alpha-mutual information (α-MI).
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
Glossary
(14) The term “active contour” or “AC” as used herein refers to a model for evolving contours by iteratively minimizing an energy term associated with a contour. In many cases, the energy is calculated as a sum of internal and external energies.
(15) The term “eigenvector” as used herein refers to a special set of vectors associated with a linear system of equations (i.e., a matrix equation) that are sometimes also known as characteristic vectors, proper vectors, or latent vectors. Each eigenvector is paired with a corresponding so-called “eigenvalue.”
(16) The term “evolving active contour (AC)” as used herein refers to applying a “hybrid active contour (AC)”, which deviates from conventional boundary-based or region-based active contour models by combining both boundary and region information from the image, allowing the two types of image information to simultaneously drive the AC model to optimize the curve such that (1) the difference between region statistics inside the AC and outside the AC is maximized; and (2) gradient magnitude is maximized at the border between foreground and background areas of the image scenes.
(17) The term “image” as used herein refers to any time dependent imaging data, including, but not limited to Computed Tomography (CT) with contrast of any anatomy, Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) of any anatomy, time-dependent ultrasound, cellular imaging during exposure to a reagent or culture medium, cancer imaging in a radiologic domain, including, but not limited to, breast, prostate, brain, liver, kidney, and plaque visualization in cardiac imaging.
(18) The term “initialization” as used herein refers to manually selecting a point within the object of interest (OOI) for an active contour (AC) model.
(19) The term “longitudinal imaging” as used herein refers to any time-dependent imaging data.
(20) The term “normalized mutual information” or “NMI” as used herein refers to a measure of the dependence between random variables.
(21) The term “parametric embedding” as used herein refers to a method for embedding objects with a class structure into a low-dimensional visualization space.
(22) The term “parametric imaging” as used herein refers to a diagnostic procedure in which an image of a physiologic parameter (e.g., blood flow) or an administered radioactive tracer is derived mathematically, such as by dividing one image by another according to anatomic position.
(23) The term “multiparametric” as used herein refers to images derived through various acquisition techniques such as, but not limited to, MRI, positron emission tomography (PET), single-photon emission computed tomography (SPECT), ultrasound, x-ray, CT or histopathology. Another example would be distinct image acquisitions within MRI such as diffusion-weighted imaging, T1-weighted imaging or T2-weighted imaging.
(24) The term “registering” as used herein refers to spatially aligning two distinct images so that anatomy of interest occupies the same pixels in both images.
(25) The term “spatial tensor-based gradient” as used herein refers to calculating the spatial gradient of a vector-valued image, resulting in a tensor-valued gradient.
(26) The term “spectral embedding” or “SE” as used herein, refers to a graph-based manifold learning method used for high dimensional data classification. SE is used to cluster image pixels based on their similarity to provide an alternative data representation to a multidimensional image.
(27) I. Spectral Embedding-Based Active Contour (SEAC) for Lesion Segmentation
(28) According to one aspect, the described invention provides a method for detecting and segmenting a lesion from longitudinal, time series, or multi-parametric imaging by utilizing spectral embedding-based active contour (SEAC), the method comprising:
(29) (a) obtaining a plurality of images from a longitudinal, time series, or multiparametric imaging;
(30) (b) applying spectral embedding (SE) to the longitudinal, time series, or multi-parametric imaging in a pixel-wise fashion to reduce a plurality of time point images to a single parametric embedding image (PrEIm), wherein the three smallest eigenvectors produced by the spectral embedding at each pixel location are used to represent the color values in the parametric embedding image (PrEIm);
(31) (c) calculating spatial tensor-based gradients on the parametric embedding image (PrEIm) derived from spectral embedding (SE) eigenvectors;
(32) (d) selecting an initialization on the image;
(33) (e) evolving active contour (AC) on the parametric embedding image (PrEIm); and
(34) (f) diagnosing the lesion based on morphological features derived from (e).
(35) According to one embodiment of the method, the longitudinal, time series, or multi-parametric imaging is dynamic contrast enhanced magnetic resonance imaging (DCE-MRI).
(36) According to another embodiment, the longitudinal, time series, or multi-parametric imaging is T1 imaging. According to another embodiment, the longitudinal, time series, or multi-parametric imaging is T2 imaging. According to another embodiment, the longitudinal, time series, or multi-parametric imaging is diffusion-weighted imaging.
(37) According to one embodiment, at least one of steps (a), (b), (c), (d), (e) and (f) is performed using a computer.
(38) According to another embodiment, at least one of steps (a), (b), (c), (d), (e) and (f) is performed automatically.
(39) According to another embodiment, the lesion is a cancer. According to another embodiment, the cancer is a breast cancer. According to another embodiment, the cancer is a prostate cancer. According to another embodiment, the cancer is a brain cancer. According to another embodiment, the cancer is a liver cancer. According to another embodiment, the cancer is a kidney cancer.
(40) According to another embodiment, the lesion is a plaque in a blood vessel. According to another embodiment, the blood vessel is a carotid artery.
(41) According to another embodiment, the method can be applied to any time-dependent imaging data, including, but not limited to, computed tomography (CT) with contrast of any anatomy, dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of any anatomy, time-dependent ultrasound, and cellular imaging during exposure to a reagent or culture medium.
(42) According to one embodiment, spectral embedding (SE) can be applied to any multi-dimensional, longitudinal or time-series data whereby the multi-attribute data is reduced to a single parametric image representation. Each voxel (volumetric pixel) in this reduced dimensional, parametric image representation is characterized by the set of orthonormal eigenvectors that aim to preserve both local and global similarities (Shi, J. and Malik, J., IEEE PAMI, 22(8):888-905, 2000). According to some such embodiments, an SE approach yields improved region-based statistics, which, in conjunction with stronger boundary gradients, result in an improved hybrid active contour (AC) scheme.
(43) According to another embodiment, spectral embedding-based active contour (SEAC) provides strong gradients at the margin of an object of interest (OOI).
(44) According to another embodiment, the use of spectral embedding-based active contour (SEAC) of the described invention is not limited to dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) data alone and can be used for lesion detection and segmentation on other types of longitudinal or time series data as well as other types of multi-parametric imaging (e.g., T1-, T2-, and diffusion-weighted imaging).
(45) According to another aspect, the described invention provides a computing device comprising:
(46) a processor;
(47) a storage medium for tangibly storing thereon program logic for execution by the processor, the program logic comprising:
(48) (a) logic executed by the processor for obtaining a plurality of images from a longitudinal, time series, or multiparametric imaging;
(49) (b) logic executed by the processor for applying spectral embedding (SE) to longitudinal, time series, or multi-parametric imaging in a pixel-wise fashion to reduce a plurality of time point images to a single parametric embedding image (PrEIm), wherein the three smallest eigenvectors produced by the spectral embedding at each pixel location are used to represent the color values in the parametric embedding image (PrEIm);
(50) (c) logic executed by the processor for calculating spatial tensor-based gradients on the parametric embedding image (PrEIm) derived from spectral embedding (SE) eigenvectors;
(51) (d) logic executed by the processor for selecting an initialization on the image;
(52) (e) logic executed by the processor for evolving active contour (AC) on the parametric embedding image (PrEIm); and
(53) (f) logic executed by the processor for diagnosing the lesion based on morphological features derived from (e).
(54) According to one embodiment, the longitudinal, time series, or multi-parametric imaging is dynamic contrast enhanced magnetic resonance imaging (DCE-MRI).
(55) According to another embodiment, the longitudinal, time series, or multi-parametric imaging is T1 imaging. According to another embodiment, the longitudinal, time series, or multi-parametric imaging is T2 imaging. According to another embodiment, the longitudinal, time series, or multi-parametric imaging is diffusion-weighted imaging.
(56) According to one embodiment, the lesion is a cancer. According to another embodiment, the cancer is a breast cancer. According to another embodiment, the cancer is a prostate cancer. According to another embodiment, the cancer is a brain cancer. According to another embodiment, the cancer is a liver cancer. According to another embodiment, the cancer is a kidney cancer.
(57) According to another embodiment, the lesion is a plaque in a blood vessel. According to another embodiment, the blood vessel is a carotid artery.
(58) According to another embodiment, the method can be applied to any time-dependent imaging data, including, but not limited to, computed tomography (CT) with contrast of any anatomy, dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of any anatomy, time-dependent ultrasound, and cellular imaging during exposure to a reagent or culture medium.
(59) According to another embodiment, spectral embedding (SE) can be applied to any multi-dimensional, longitudinal or time-series data, whereby the multi-attribute data is reduced to a single parametric image representation. Each voxel (volumetric pixel) in this reduced dimensional, parametric image representation is characterized by the set of orthonormal eigenvectors that aim to preserve both local and global similarities (Shi, J. and Malik, J., IEEE PAMI, 22(8):888-905, 2000). According to some such embodiments, an SE approach yields improved region-based statistics, which, in conjunction with stronger boundary gradients, result in an improved hybrid active contour (AC) scheme.
(60) According to another embodiment, spectral embedding-based active contour (SEAC) provides strong gradients at the margin of an object of interest (OOI).
(61) According to another aspect, the described invention provides a computer-readable storage medium tangibly storing thereon computer program instructions capable of being executed by a computer processor of a computing device, the computer program instructions defining steps of:
(62) (a) obtaining a plurality of images from a longitudinal, time series, or multiparametric imaging;
(63) (b) applying spectral embedding (SE) to longitudinal, time series, or multi-parametric imaging in a pixel-wise fashion to reduce a plurality of time point images to a single parametric embedding image (PrEIm), wherein the three smallest eigenvectors produced by the spectral embedding at each pixel location are used to represent the color values in the parametric embedding image (PrEIm);
(64) (c) calculating spatial tensor-based gradients on the parametric embedding image (PrEIm) derived from spectral embedding (SE) eigenvectors;
(65) (d) selecting an initialization on the image;
(66) (e) evolving active contour (AC) on the parametric embedding image (PrEIm); and
(67) (f) diagnosing the lesion based on morphological features derived from (e).
(68) According to one embodiment, the longitudinal, time series, or multi-parametric imaging is dynamic contrast enhanced magnetic resonance imaging (DCE-MRI).
(69) According to another embodiment, the longitudinal, time series, or multi-parametric imaging is T1 imaging. According to another embodiment, the longitudinal, time series, or multi-parametric imaging is T2 imaging. According to another embodiment, the longitudinal, time series, or multi-parametric imaging is diffusion-weighted imaging.
(70) According to one embodiment, the lesion is a cancer. According to another embodiment, the cancer is a breast cancer. According to another embodiment, the cancer is a prostate cancer. According to another embodiment, the cancer is a brain cancer. According to another embodiment, the cancer is a liver cancer. According to another embodiment, the cancer is a kidney cancer.
(71) According to another embodiment, the lesion is a plaque in a blood vessel. According to another embodiment, the blood vessel is a carotid artery.
(72) According to another embodiment, spectral embedding (SE) can be applied to any multi-dimensional, longitudinal or time-series data, whereby the multi-attribute data is reduced to a single parametric image representation.
(73) According to another embodiment, spectral embedding-based active contour (SEAC) provides strong gradients at the margin of an object of interest (OOI).
(74) 1. Spectral Embedding-Based Active Contour (SEAC)
(75) Step 1. Apply SE to the Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI)
(76) Let F=[F(c)].sup.TεR.sup.N×T, ∀cεC, be the data matrix of N=|C| (where |•| is the cardinality of a set) feature vectors with dimensionality T. F(c) represents the features assigned to a given pixel, c. In this case, F(c) contains the assigned signal intensity values at every pixel c at each time point tε{0, 1, 2, . . . , T−1}, where T is the number of time points in the DCE-MRI time series. t=0 refers to the time at which the pre-contrast image is acquired and tε{1, . . . , T−1} refers to the times at which the subsequent post-contrast images are acquired. The aim of spectral embedding (SE) is to reduce FεR.sup.N×T to a low d-dimensional space where d<<T. The three smallest eigenvectors at each pixel location in the image, which result from the spectral embedding, are used to represent the color values in the parametric embedding image (PrEIm) (see the accompanying figures).
(77) TABLE I. Description of Notation
Symbol | Description
C | 2D Cartesian grid of pixels c = (x, y)
F(c) | Signal intensity vector of c
v(c) | Eigenvectors associated with pixel c
f | Single feature vector
C | The zero level set C = {c ε Ω : φ(c) = 0}
H(φ) | Heaviside function of the level set φ
(78) Step 2. Calculate Spatial Gradients on the Parametric Embedding Image (PrEIm):
(79) The spatial (X- and Y-directional) tensor-based gradients are derived from the SE eigenvectors and are fed into the energy functional of a hybrid active contour (AC) model. The described invention leverages the findings of Xu et al. (Medical Image Analysis, 1 Dec. 2011, volume 15, issue 6, pages 851-862), which show that the tensor gradient derived from the vectorial image provides stronger gradients for driving the active contour (AC) model than the corresponding gradient derived from a scalar image.
(80) Step 3. Select an Initialization on the Image:
(81) A point within the object of interest (OOI) is selected manually, which serves as the initialization for the AC model.
(82) Step 4. Evolve Active Contour (AC) on Parametric Embedding Image (PrEIm):
(83) The hybrid AC deviates from traditional boundary- or region-based AC models by combining both boundary and region information from the image, allowing the two types of image information to simultaneously drive the AC model to optimize the curve such that: (1) the difference between region statistics inside the active contour (AC) and outside the active contour (AC) is maximized; and (2) gradient magnitude is maximized at the border between foreground and background areas of the image scene. Moreover, the described invention employs a selective, intelligent weighting of the region and boundary terms of the hybrid active contour (AC). This is done by selecting a range of values (detailed in Table II) for both the region and boundary terms and selecting the combination that provides the best approximation to the ground truth segmentation based on both boundary and area based metrics.
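The weight selection described above scores each candidate segmentation against ground truth with both an area-based and a boundary-based agreement metric. The following is a minimal sketch of two such metrics, assuming Dice overlap for area agreement and a mean boundary distance for boundary agreement; the mask values and function names are illustrative, not the patented implementation.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Area-based agreement: Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def mean_boundary_distance(a, b):
    """Boundary-based agreement: mean distance (in pixels) from each
    boundary pixel of mask a to the nearest boundary pixel of mask b."""
    edge_a = a & ~binary_erosion(a)
    edge_b = b & ~binary_erosion(b)
    d_to_b = distance_transform_edt(~edge_b)  # distance to nearest edge_b pixel
    return float(d_to_b[edge_a].mean())

# Toy ground truth and one candidate segmentation. In the scheme above,
# one candidate would be produced per (region weight, boundary weight)
# pair, and the pair scoring best on both metrics would be retained.
gt = np.zeros((32, 32), bool); gt[8:24, 8:24] = True
seg = np.zeros((32, 32), bool); seg[9:24, 8:23] = True
scores = (dice(seg, gt), mean_boundary_distance(seg, gt))
```

A perfect segmentation yields a Dice of 1 and a boundary distance of 0, so the grid search keeps the weight pair that jointly drives the first metric up and the second down.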
(84) 2. Gradient Tensors in a Spectrally Embedded Space
(85) 2.1. Theory of Spectral Embedding
(86) Let v(c) be the function that defines the eigenvectors associated with c, ∀cεC, and let ν be the eigenspace defined by v(c). For simplicity of notation in the spectral embedding formulation, f is defined as a single feature vector in F, dissociated from its spatial pixel location in the image, and {circumflex over (V)}=[v(c)∀cεC].sup.TεR.sup.N×d as the vectorized form of v(c). The optimal {circumflex over (V)} is obtained by solving the generalized eigenvalue decomposition problem,
(D−W){circumflex over (V)}=D{circumflex over (V)}Λ.sub.d, Equation [1]
(87) where Λ.sub.d is the diagonal matrix of the eigenvalues associated with the d smallest eigenvectors. W is the weighted adjacency matrix that characterizes the similarity between pairwise observations, i and j. The graph edge weight of two nodes, i and j, can be formulated by the Gaussian similarity function
(88) w(i,j)=exp(−∥f.sub.i−f.sub.j∥.sup.2/σ.sub.c), Equation [2]
where σ.sub.c is a scaling parameter. D is the degree matrix such that the degree of a vertex is defined as d.sub.i=Σ.sub.jw(i,j), i,jε{1, . . . , N}. The graph theoretic derivation of the eigenvalue problem found in Equation [1] can be found in (Shi, J. and Malik, J., IEEE PAMI, 22(8):888-905, 2000; von Luxburg, U., A tutorial on spectral clustering. Technical report, Max Planck Institute for Biological Cybernetics, 2006; the entire content of each reference is incorporated herein by reference).
(89) The resulting d eigenvectors corresponding to the d smallest eigenvalues can be used to construct parametric embedding image (PrEIm) representations of the DCE-MRI data such as those shown in the accompanying figures.
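The pixel-wise embedding of Equations [1] and [2] can be sketched as follows on synthetic data. This is a didactic sketch, not the patented implementation: it uses a dense all-pairs affinity and the symmetric normalized Laplacian, and the toy "DCE" series and all names are illustrative assumptions; a real DCE-MRI volume would need a sparse neighborhood graph for tractability.

```python
import numpy as np

def spectral_embed(F, d=3, sigma=1.0):
    """Pixel-wise spectral embedding sketch: map an (N, T) matrix of
    T-point signal-intensity curves to the d smallest nontrivial
    eigenvectors of the normalized graph Laplacian."""
    # Gaussian similarity between every pair of time-intensity curves
    sq = np.sum(F ** 2, axis=1)
    dist2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * F @ F.T, 0.0)
    W = np.exp(-dist2 / sigma ** 2)
    D = W.sum(axis=1)
    # symmetric normalized Laplacian L = I - D^(-1/2) W D^(-1/2)
    Dinv = 1.0 / np.sqrt(D)
    L = np.eye(F.shape[0]) - (Dinv[:, None] * W) * Dinv[None, :]
    vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    V = vecs[:, 1:d + 1]                # drop the trivial smallest eigenvector
    # rescale each eigenvector to [0, 1] so it can serve as a color channel
    V = (V - V.min(axis=0)) / (V.max(axis=0) - V.min(axis=0) + 1e-12)
    return V

# toy "DCE" series: 16x16 pixels, 5 time points, one enhancing square
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.05, size=(16, 16, 5))
img[4:10, 4:10, 2:] += 1.0              # "lesion" enhances after contrast
prem = spectral_embed(img.reshape(-1, 5)).reshape(16, 16, 3)  # the PrEIm
```

Pixels with similar enhancement curves receive similar embedding coordinates, so the enhancing square separates cleanly from the background in at least one of the three PrEIm channels.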
(90) 2.2. Computing Spatial Gradients in Spectrally Embedded Space
(91) Following the calculation of the eigenvectors by solving the minimization of Equation [1], the gradients of the embedding vectors can subsequently be calculated along the spatial coordinate axes, which results in a tensor gradient function, ∇V.
(92) ∇V is inspired by the Cumani operator (Cumani, A., Edge detection in multispectral images, pages 40-51, Academic Press, Inc., 1991), a second-order differential operator for vectorial images based on the Di Zenzo multi-valued geometry (Di Zenzo, S., A note on the gradient of a multi-image, volume 33, pages 116-125, Academic Press Professional, Inc., San Diego, Calif., USA, 1986). Thus, ∇V defines a tensor gradient over the eigenvector space and the gradient is calculated via the local structure tensor. For an eigen image V=ν(C), where v(c) at each cεC can be written in matrix form as
(93) V=[ν.sub.1(C), ν.sub.2(C), ν.sub.3(C)], with ρ.sub.11=Σ.sub.j(∂ν.sub.j/∂x).sup.2 and ρ.sub.12=Σ.sub.j(∂ν.sub.j/∂x)(∂ν.sub.j/∂y), Equation [3]
(94) ρ.sub.22 is defined similarly to ρ.sub.11 along the y-axis. It is important to note that the gradients, ρ.sub.11, ρ.sub.12, ρ.sub.22 are computed on the eigenvectors in the embedding space. The matrix
(95) [ρ(v(c))]=[ρ.sub.11 ρ.sub.12; ρ.sub.12 ρ.sub.22]
is the first fundamental form in the vector eigenspace and is also referred to as the local structure tensor. For the matrix [ρ(v(c))], the maximum and minimum eigenvalues ({tilde over (λ)}.sub.+ and {tilde over (λ)}.sub.−) represent the extreme rates of change in the directions of their corresponding eigenvectors. ({tilde over (λ)}.sub.+ and {tilde over (λ)}.sub.−) may be formally expressed by {tilde over (λ)}.sub.±=(ρ.sub.11+ρ.sub.22±√{square root over (Δ)})/2, where Δ=(ρ.sub.11−ρ.sub.22).sup.2+4ρ.sub.12.sup.2. The tensor gradient is defined as,
γ(ν(c))=√{square root over ({tilde over (λ)}.sub.+−{tilde over (λ)}.sub.−)}, Equation [4]
(96) ∀cεC. Thus, ∇V≈γ(ν(C)) when the tensor gradient is calculated over the entire image scene, C. From Equations (2)-(4), it is also easy to show that for pixel c the gray scale gradient
(97) ∥∇ν.sub.j(c)∥=√{square root over ((∂ν.sub.j/∂x).sup.2+(∂ν.sub.j/∂y).sup.2)},
wherein j=1 (widely employed for edge detection), is a special case of the tensor gradient γ(•). In contrast, the tensor gradient of the embedding vectors in the xy plane (ν.sub.1, ν.sub.2, ν.sub.3) is computed as
(98) γ(ν(c))=√{square root over ({tilde over (λ)}.sub.+−{tilde over (λ)}.sub.−)}, with ρ.sub.11=Σ.sub.j(∂ν.sub.j/∂x).sup.2, ρ.sub.12=Σ.sub.j(∂ν.sub.j/∂x)(∂ν.sub.j/∂y), ρ.sub.22=Σ.sub.j(∂ν.sub.j/∂y).sup.2,
where jε{1, 2, 3}. An example of the improved gradient information found in the tensor gradient derived from SE, compared to the grayscale intensity image, can be found in the accompanying figures.
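The structure-tensor computation of Section 2.2 can be sketched directly: per-channel spatial derivatives fill the entries ρ.sub.11, ρ.sub.12, ρ.sub.22, and the tensor gradient is the square root of the eigenvalue spread of that 2×2 matrix (Equation [4]). The function name and the toy step-edge image below are illustrative assumptions.

```python
import numpy as np

def tensor_gradient(V):
    """Di Zenzo-style tensor gradient of a vector-valued image V of
    shape (H, W, K): gamma = sqrt(lambda_plus - lambda_minus) of the
    local structure tensor at every pixel, as in Equation [4]."""
    # per-channel spatial derivatives (central differences)
    gy, gx = np.gradient(V, axis=(0, 1))
    # entries of the first fundamental form (local structure tensor)
    r11 = np.sum(gx * gx, axis=-1)   # rho_11: squared rate of change along x
    r12 = np.sum(gx * gy, axis=-1)   # rho_12: cross term
    r22 = np.sum(gy * gy, axis=-1)   # rho_22: squared rate of change along y
    delta = (r11 - r22) ** 2 + 4.0 * r12 ** 2
    lam_plus = (r11 + r22 + np.sqrt(delta)) / 2.0
    lam_minus = (r11 + r22 - np.sqrt(delta)) / 2.0
    return np.sqrt(np.maximum(lam_plus - lam_minus, 0.0))

# a 3-channel vertical step edge: the response peaks at the boundary columns
V = np.zeros((8, 8, 3))
V[:, 4:, :] = 1.0
g = tensor_gradient(V)
```

Because the channel derivatives are summed inside the tensor before the eigen-analysis, coherent edges across the three eigenvector channels reinforce one another, which is the source of the stronger gradients noted above.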
3. Spectral Embedding-Based Active Contour
(99) 3.1. Active Contour Model and its Energy Functional
(100) It is assumed that the image plane Ω⊂R.sup.2 is partitioned into two regions by a curve C. The foreground region, or region of interest, is defined as Ω.sub.1, and the background region (i.e., the remainder of the image) is defined as Ω.sub.2 (see Table I). The relationship among Ω and its constituents is as follows:
Ω=Ω.sub.1∪Ω.sub.2∪C, Equation [5]
(101) where Ω.sub.1 and Ω.sub.2 are non-overlapping. In other words,
Ω.sub.1∩Ω.sub.2=∅. Equation [6]
(102) 3.2. Edge Based Active Contour
(103) Previous active contour methods have proposed various approaches to formulate the optimal partition of the image plane Ω, which can be obtained by the minimization of an energy functional. An active contour deforms in order to approximate the border between the region of interest and the non-region of interest (Chan, T. et al., Image Processing, IEEE Transactions on, 10(2):266-277, 2001; Chan, T. F. et al., Journal of Visual Communication and Image Representation, 11(2):130-141, 2000; Cohen, L., CVGIP: Image Understanding, 53(2):211-218, 1991; Rousson, M. and Deriche, R., In Motion and Video Computing, 2002. Proceedings. Workshop on, pages 56-61, 2002; Xu, J. et al., Medical Image Analysis, 1 Dec. 2011, volume 15, issue 6, pages 851-862). In the simplified case, the energy functional is formulated as the integral of an edge detector function:
E.sub.1=α.sub.1∫.sub.Cg(v(c))dc Equation [7]
(104) which will converge to the contour that represents the regions of steepest gradient in the image. Modifying the simple model, Cohen added a second term to the energy functional, which represents the balloon force, or the force that pushes the contour outward in search of the regions of steepest gradient, much like the contour of a balloon increases as it is inflated. The energy functional containing both edge-detector and balloon force terms appears as follows:
E.sub.1+E.sub.2=α.sub.1∫.sub.Cg(v(c))dc+α.sub.2∫.sub.Ω.sub.1g(v(c))dc Equation [8]
(105) where
(106) g(v(c))=1/(1+γ(ν(c)).sup.2). Equation [9]
(107) Note that in this case, the gradient function, g(v(c)), is often calculated from the gray level gradient (Cohen, L., CVGIP: Image Understanding, 53(2):211-218, 1991). However, in the present invention, the gradient function was chosen to be a color gradient function (Xu, J. et al., Medical Image Analysis, 1 Dec. 2011, volume 15, issue 6, pages 851-862).
(108) In order to maintain stable curve evolution and eliminate the need for re-initialization of the curve at each iteration of the evolution of the curve (Li, C. et al., In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), Vol. 1-Vol. 01, CVPR '05, pages 430-436, Washington, D.C., USA, 2005. IEEE Computer Society), a third term
E.sub.3=ζ∫.sub.Ω½(∥∇φ∥−1).sup.2dc Equation [10]
(109) has been implemented.
(110) The combined energy functional is defined as
(111) E=E.sub.1+E.sub.2+E.sub.3. Equation [11]
(112) 3.3. Region-Based Active Contour
(113) Region-based AC (RAC) models were proposed to address some of the limitations of edge based active contour models. The region-based model employs statistical information derived from different regions (foreground and background) to drive the AC model, which is independent of the edge detector function and does not require precise initialization (Fatakdawala, H. et al., Biomedical Engineering, IEEE Transactions on, 57(7):1676-1689, 2010).
(114) One important RAC model is the Rousson-Deriche model (Rousson, M. and Deriche, R., In Motion and Video Computing, 2002. Proceedings. Workshop on, pages 56-61, December 2002), which assumes that the image plane comprises two regions and that the intensities of pixels within each region satisfy a Gaussian distribution. The contour evolves as a result of competition between the log probabilities of current pixels c belonging to the foreground and background regions. The optimal image partition is generated by maximizing the a posteriori partition probability, P(Ω|v(c)). The assumptions are: (1) all partitions are equally possible; (2) homogeneity of a region exists within a given boundary; and (3) pixels within a given region are independent. The generalized energy functional for the region-based term can then be described as follows (Rousson, M. and Deriche, R., In Motion and Video Computing, 2002. Proceedings. Workshop on, pages 56-61, December 2002):
E(φ)=−α′∫.sub.Ω[H(φ)log p(v(c)|θ.sub.1)+(1−H(φ))log p(v(c)|θ.sub.2)]dc+β′∫.sub.Ω|∇H(φ)|dc Equation [12]
(115) where H(φ) is the Heaviside function and p(v(c)|θ.sub.h) (hε{1, 2}) is the multivariate Gaussian distribution function with parameter θ.sub.h={μ.sub.h, Σ.sub.h}, where μ.sub.h and Σ.sub.h are the mean and covariance of the intensity in region h (hε{1, 2}) and are estimated by
(116)
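The region competition above can be sketched numerically by fitting a Gaussian to each region and taking the per-pixel log-likelihood difference. This is a simplified scalar-intensity sketch under our own naming (the model as described uses multivariate statistics on the embedding channels):

```python
import numpy as np

def region_force(img, mask):
    """Rousson-Deriche style region competition for a scalar image.

    img: H x W intensities; mask: boolean foreground estimate.
    Fits a Gaussian to each region and returns the per-pixel difference
    log p(v(c)|theta1) - log p(v(c)|theta2); positive values indicate a
    pixel better explained by the foreground model.
    """
    def log_gauss(v, samples):
        mu, var = samples.mean(), samples.var() + 1e-12  # avoid var = 0
        return -0.5 * np.log(2.0 * np.pi * var) - (v - mu) ** 2 / (2.0 * var)

    return log_gauss(img, img[mask]) - log_gauss(img, img[~mask])
```

At each iteration the current contour defines the mask, the statistics are re-estimated, and the sign of this force tells the contour which way to move at each pixel.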
(117) 3.4. Hybrid Active Contour Energy Functional
(118) Since RAC models do not typically include boundary information, a hybrid AC model can be employed that combines the strengths of boundary-based (Cortes, C. and Vapnik, V., Machine Learning, 20(3):273-297, 1995) and region-based (Di Zenzo, S., A note on the gradient of a multi-image, 33, 116-125, Academic Press Professional, Inc., San Diego, Calif., USA, 1986) models by incorporating both gradient and region information into the AC model. The corresponding energy functional can be shown as
(119)
(120) Using calculus of variations, the curve evolution function can be derived by minimizing the energy function (14):
Equation [15]
(121)
(122) where H(φ) is the Heaviside function, Ω.sub.1 and Ω.sub.2 are the image foreground and background, respectively, φ(t;c) is the level set function, α and β are positive constant parameters that can be used to variably weight the boundary- and region-based terms, ζ is the weight of the contour stabilization term (see Equation 10), and δ(φ) is the Delta function. From an initial contour φ.sub.0, the curve evolution function in Equation (15) is evolved until model convergence. The iterative implementation of the curve evolution can be found in Algorithm V E.
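One iteration of the curve evolution just described can be sketched as an explicit level-set update. The discretization below is our own simplified sketch (a smoothed delta function and a roll-based Laplacian stand in for the exact numerics), combining the boundary, region, and stabilization terms with the weights α, β, ζ:

```python
import numpy as np

def evolve_step(phi, g, region, alpha=1.0, beta=1.0, zeta=1.0, dt=0.1):
    """One explicit update of a hybrid level-set active contour (sketch).

    phi: level set function; g: edge-stopping map (boundary term);
    region: per-pixel region competition (e.g. foreground-vs-background
    log-likelihood difference); alpha, beta, zeta weight the boundary,
    region, and stabilization terms of the combined functional.
    """
    gy, gx = np.gradient(phi)
    mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    # Curvature: divergence of the unit normal grad(phi)/|grad(phi)|.
    curvature = np.gradient(gx / mag, axis=1) + np.gradient(gy / mag, axis=0)
    # Smoothed delta function concentrates updates near the zero level set.
    delta = 1.0 / (np.pi * (1.0 + phi ** 2))
    boundary = alpha * g * curvature * mag
    regional = beta * delta * region
    # Li et al. distance regularization: Laplacian(phi) minus curvature.
    laplacian = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                 + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
    stabilize = zeta * (laplacian - curvature)
    return phi + dt * (boundary + regional + stabilize)
```

Iterating this step from an initial contour φ.sub.0 until the energy change falls below a tolerance corresponds to the convergence criterion of Algorithm V E.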
(123) 4. Spectral Embedding-Based Active Contour (SEAC) Algorithm
(124) TABLE-US-00002
Input: Ĉ = (C, γ)
Output: Final AC: φ.sub.T
Begin
1: {Begin by computing tensor gradient of parametric embedding image (PrEIm) as described in Equations (2)-(4).}
2:
II. Spectral Embedding (SE)-Based Registration Method
(125) According to another aspect, the described invention provides a method for registering time series data by utilizing reduced-dimension eigenvectors derived from spectral embedding (SE) of a feature scene, the method comprising:
(126) (a) obtaining a plurality of images from a longitudinal, time series, or multiparametric imaging;
(127) (b) applying spectral embedding (SE) to time series data to transform time series images into an alternative data presentation, wherein the spectral embedding (SE) places a pre-contrast image and a post-contrast image in a same embedding space and allows embedding eigenvectors to separate salient regions from non-salient regions in the image and to identify corresponding regions in both the pre-contrast and post-contrast images;
(128) (c) driving registration based on areas of the salient regions and the corresponding regions; and
(129) (d) capturing statistics of multi-dimensional data found in multiple spectral embedding eigenvectors.
(130) According to one embodiment, capturing in (d) is performed using alpha-mutual information (α-MI).
(131) According to another aspect, the described invention provides a computing device comprising:
(132) a processor;
(133) a storage medium for tangibly storing thereon program logic for execution by the processor, the program logic comprising:
(134) (a) logic executed by the processor for obtaining a plurality of images from a longitudinal, time series, or multiparametric imaging;
(135) (b) logic executed by the processor for applying spectral embedding (SE) to time series data to transform time series images into an alternative data presentation, wherein the spectral embedding (SE) places a pre-contrast image and a post-contrast image in a same embedding space and allows embedding eigenvectors to separate salient regions from non-salient regions in the image and to identify corresponding regions in the both pre-contrast and post-contrast images;
(136) (c) logic executed by the processor for driving registration based on areas of the salient regions and the corresponding regions; and
(137) (d) logic executed by the processor for capturing statistics of multi-dimensional data found in multiple spectral embedding eigenvectors.
(138) According to one embodiment, capturing in (d) is performed using alpha-mutual information (α-MI).
(139) According to another aspect, the described invention provides a computer-readable storage medium tangibly storing thereon computer program instructions capable of being executed by a computer processor of a computing device, the computer program instructions defining steps of:
(140) (a) obtaining a plurality of images from a longitudinal, time series, or multiparametric imaging;
(141) (b) applying spectral embedding (SE) to time series data to transform time series images into an alternative data presentation, wherein the spectral embedding (SE) places a pre-contrast image and a post-contrast image in a same embedding space and allows embedding eigenvectors to separate salient regions from non-salient regions in the image and to identify corresponding regions in both the pre-contrast and post-contrast images;
(142) (c) driving registration based on areas of the salient regions and the corresponding regions; and
(143) (d) capturing statistics of multi-dimensional data found in multiple spectral embedding eigenvectors.
(144) According to another embodiment, capturing in (d) is performed using alpha-mutual information (α-MI).
(145) Spectral embedding (SE), the nonlinear dimensionality reduction (DR) method described herein, has an advantage over linear DR methods, such as principal component analysis (PCA), which was previously used by Staring et al. (IEEE Trans. Med. Imag. 28(9), 1412-1421, 2009) for time dependent data: it is able to capture the nonlinearities inherent in biological data, especially functional data such as DCE-MRI, where noise in the form of contrast agent is added in a nonlinear fashion to the image.
(146) A linear DR method such as principal component analysis (PCA) would not be able to take such nonlinearities into account when embedding a pre-contrast image and a post-contrast image in the same space. In contrast, by embedding C.sup.pre and C.sup.post in the same embedding space, the SE approach allows the embedding eigenvectors not only to separate salient from non-salient regions in the image, but also to identify corresponding regions in both images.
(147) The registration is then driven by these areas of correspondence and salience. In addition, instead of the more commonly used histogram-based mutual information (MI) measure as the optimization function, α-MI (Neemuchwala, H. and Carson, P., Image matching using alpha-entropy measures and entropic graphs. Volume 85. 277-296, 2005, incorporated herein by reference in its entirety) is employed, which is a higher order entropy measure that can more accurately capture the statistics of multidimensional data such as that found in the multiple SE eigenvectors.
(148) 1. Theory and Motivation Behind SERg
(149) Spectral Embedding
(150) Let F=[F(x.sub.1), F(x.sub.2), . . . , F(x.sub.N)]εℝ.sup.N×D be the data matrix of N feature vectors with dimensionality D. The aim of SE is to reduce Fεℝ.sup.N×D to a low d-dimensional space where d<<D. Let V=[ν.sub.1, ν.sub.2, . . . , ν.sub.N]εℝ.sup.N×d be the optimal low dimensional projections (Bradley, W. et al., American Journal of Neuroradiology, 8(6):1057-62, 1987), where the associated eigenvector of a given object x.sub.i, iε{1, . . . , N}, is ν.sub.iεℝ.sup.1×d with ν.sub.i=[ν.sub.i1, ν.sub.i2, . . . , ν.sub.id] and ν.sub.ijεℝ.sup.1×1, jε{1, . . . , d}, an individual eigenvector component for a given ν.sub.i. The optimal V can be obtained by solving,
(151)
(152) where w.sub.rs is the (r, s) element of the weight matrix W=[w.sub.rs]εℝ.sup.N×N, r, sε{1, . . . , N}, which assigns edge weights to characterize similarities between the pairwise observations x.sub.r and x.sub.s, and d(r)=Σ.sub.sw.sub.rs, r, sε{1, . . . , N}. The graph edge weight of two nodes, r and s, can be formulated as
(153)
where σ.sub.1.sup.2 is a scaling parameter. The minimization of Equation (16) reduces to a minimum eigenvalue decomposition problem,
(D−W)v=λDv Equation [17]
(154) where D is a diagonal matrix, D.sub.rr=Σ.sub.sw.sub.rs. The three eigenvector components ν.sub.i=[ν.sub.i1, ν.sub.i2, ν.sub.i3], where iε{1, . . . , N}, associated with the smallest eigenvalues, λ.sub.1, λ.sub.2, λ.sub.3, are used in conjunction with SERg.
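A minimal version of this eigenproblem can be sketched with NumPy/SciPy. This is our own small-N sketch (for full images, sparse neighborhood graphs would be needed; skipping the trivial constant eigenvector is an implementation choice):

```python
import numpy as np
from scipy.linalg import eigh

def spectral_embed(F, sigma=1.0, d=3):
    """Spectral embedding of N feature vectors F (N x D) into d dims.

    Builds the Gaussian-weighted graph W, forms the degree matrix D,
    and solves the generalized eigenproblem (D - W) v = lambda D v,
    keeping the eigenvectors with the smallest nonzero eigenvalues.
    With d = 3 these supply the three channels of the PrEIm.
    """
    F = np.asarray(F, dtype=float)
    sq = ((F[:, None, :] - F[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-sq / sigma ** 2)
    D = np.diag(W.sum(axis=1))
    vals, vecs = eigh(D - W, D)      # generalized symmetric eigensolver
    return vecs[:, 1:d + 1]          # smallest nontrivial eigenvectors
```

For a DCE-MRI study, each row of F would be one pixel's time-intensity vector, and the first embedding coordinates then separate salient from non-salient regions as described above.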
(155) 2. Algorithm for SERg and its Use with α-Mutual Information
(156) In the conventional manner, the registration method is formulated as an optimization problem such that a function (18),
(157)
(158) describes the optimization, where φ is the objective function measuring similarity between C.sup.A and C.sup.B, which is maximized when C.sup.A and C.sup.B are aligned optimally, and {circumflex over (μ)} is the vector of coordinates that transforms the target image, C.sup.B, to be in alignment with the template image, C.sup.A. In the case of 1-dimensional images such as intensity images, MI, φ.sub.MI, is used to optimize image alignment. MI is typically defined as
φ.sub.MI(C.sup.pre,C.sup.post)=H(C.sup.pre)+H(C.sup.post)−H(C.sup.pre,C.sup.post) Equation [19]
(159) where H(C)=−Σp.sub.f log p.sub.f is the Shannon entropy of the intensities of image C, H(C.sup.pre,C.sup.post) is the joint entropy, and p.sub.f is the intensity probability at intensity f. However, in the case of multidimensional images, a multivariate approach to the objective function must be taken. This can be done through two methods: (1) histogram-based mutual information (MI), or (2) α-MI via entropic graphs (Neemuchwala, H. et al., Image matching using alpha-entropy measures and entropic graphs. Volume 85. 277-296, 2005). The joint entropy is typically calculated using a joint histogram; however, issues of sparseness in the joint histogram can lead to inaccuracies in the computation of joint entropy. The use of entropic graphs was introduced by Neemuchwala et al. (Image matching using alpha-entropy measures and entropic graphs. Volume 85. 277-296, 2005, incorporated herein by reference in its entirety) to overcome issues with using a joint histogram in the calculation of joint entropy, and in the described invention α-MI is used to estimate image similarity in a multivariate fashion. α-MI is calculated as,
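Equation [19] can be sketched directly from a joint histogram. This illustrative sketch makes the sparseness problem concrete (the bin count is an arbitrary choice, and with few samples many joint bins stay empty):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based MI per Equation [19]: H(A) + H(B) - H(A, B)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)

    def entropy(q):
        q = q[q > 0]                      # 0 log 0 taken as 0
        return -(q * np.log(q)).sum()

    return entropy(px) + entropy(py) - entropy(p.ravel())
```

For identical inputs MI reduces to the marginal entropy H(A); for independent inputs it approaches zero, which is why MI increases as the two images come into alignment.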
(160)
(161) where Γ.sup.pre,post is the neighborhood graph for all points in both the pre- and post-contrast images, Γ.sup.pre is the neighborhood graph for the pre-contrast, template image, Γ.sup.post is the neighborhood graph for the post-contrast, target image, K=|C| where |•| is the cardinality of any set, γ=D(1−α) where D is the length of the feature vector describing c, and 0<α<1 (α=0.99 for all experiments herein, as was previously done by Staring et al. (IEEE Trans. Med. Imag. 28(9), 1412-1421, 2009)). Registration is performed using a free-form deformation previously described (Rueckert, D. et al., IEEE TMI, 18(8), 712-721, 1999).
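The entropic-graph idea can be sketched with a nearest-neighbor graph. This is our own simplified estimator: normalizing constants are dropped, so the values are only meaningful for comparing configurations, not as calibrated α-entropies:

```python
import numpy as np
from scipy.spatial import cKDTree

def renyi_entropy_knn(X, alpha=0.99):
    """Graph-length estimate of the Renyi alpha-entropy of points X (K x D).

    Sums edge lengths |e|^gamma over the 1-nearest-neighbor graph,
    with gamma = D * (1 - alpha), following the entropic-graph idea;
    constants are omitted (a sketch, not a calibrated estimator).
    """
    X = np.asarray(X, dtype=float)
    K, D = X.shape
    gamma = D * (1.0 - alpha)
    dist, _ = cKDTree(X).query(X, k=2)      # column 0 is the point itself
    length = (dist[:, 1] ** gamma).sum()
    return np.log(length / K ** alpha) / (1.0 - alpha)

def alpha_mi(A, B, alpha=0.99):
    """alpha-MI sketch: H_a(A) + H_a(B) - H_a([A, B]) on feature stacks."""
    return (renyi_entropy_knn(A, alpha) + renyi_entropy_knn(B, alpha)
            - renyi_entropy_knn(np.hstack([A, B]), alpha))
```

Because the graph is built directly on the multidimensional feature points (e.g. stacked SE eigenvector values), no joint histogram is needed, which is the advantage over histogram-based MI discussed above.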
(162) 3. Algorithm for SERg
(163) TABLE-US-00003
Algorithm SERg
Input: Image scenes C.sup.pre, C.sup.post.
Output: Image transformation, C.sup.post(T.sub.μ(C)).
Begin
0. Initialize C.sup.pre, C.sup.post;
1. Apply spectral embedding (Shi, J. and Malik, J., IEEE PAMI 22(8), 888-905, 2000, incorporated by reference herein in its entirety) to F=[F(x.sub.1), F(x.sub.2), . . . , F(x.sub.N)]εℝ.sup.N×D to obtain V=[ν.sub.1, ν.sub.2, . . . , ν.sub.N]εℝ.sup.N×d;
2. Initialize ψ(t)=ψ.sub.0 where t=0;
3. while |ψ(t)−ψ(t−1)|>ε.sub.0
4. Maximize ψ(T.sub.μ(c));
(164) The present invention is described below with reference to block diagrams and operational illustrations of methods and devices to select and present media related to a specific topic. It is understood that each block of the block diagrams or operational illustrations, and combinations of blocks in the block diagrams or operational illustrations, can be implemented by means of analog or digital hardware and computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer, special purpose computer, ASIC, or other programmable data processing apparatus, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the functions/acts specified in the block diagrams or operational block or blocks.
(165) For the purposes of this disclosure a computer readable medium stores computer data, which data can include computer program code that is executable by a computer, in machine readable form. By way of example, and not limitation, a computer readable medium may comprise computer readable storage media, for tangible or fixed storage of data, or communication media for transient interpretation of code-containing signals. Computer readable storage media, as used herein, refers to physical or tangible storage (as opposed to signals) and includes without limitation volatile and non-volatile, removable and non-removable media implemented in any method or technology for the tangible storage of information such as computer-readable instructions, data structures, program modules or other data. Computer readable storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other physical or material medium which can be used to tangibly store the desired information or data or instructions and which can be accessed by a computer or processor.
(166) For the purposes of this disclosure a module is a software, hardware, or firmware (or combinations thereof) system, process or functionality, or component thereof, that performs or facilitates the processes, features, and/or functions described herein (with or without human interaction or augmentation). A module can include sub-modules. Software components of a module may be stored on a computer readable medium. Modules may be integral to one or more servers, or be loaded and executed by one or more servers. One or more modules may be grouped into an engine or an application.
(167) Those skilled in the art will recognize that the methods and systems of the present disclosure may be implemented in many manners and as such are not to be limited by the foregoing exemplary embodiments and examples. In other words, functional elements being performed by single or multiple components, in various combinations of hardware and software or firmware, and individual functions, may be distributed among software applications at either the client or server or both. In this regard, any number of the features of the different embodiments described herein may be combined into single or multiple embodiments, and alternate embodiments having fewer than, or more than, all of the features described herein are possible. Functionality may also be, in whole or in part, distributed among multiple components, in manners now known or to become known. Thus, myriad software/hardware/firmware combinations are possible in achieving the functions, features, interfaces and preferences described herein. Moreover, the scope of the present disclosure covers conventionally known manners for carrying out the described features and functions and interfaces, as well as those variations and modifications that may be made to the hardware or software or firmware components described herein as would be understood by those skilled in the art now and hereafter.
(168) Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Although any methods and materials similar or equivalent to those described herein also can be used in the practice or testing of the described invention, the preferred methods and materials are now described. All publications mentioned herein are incorporated herein by reference to disclose and describe the methods and/or materials in connection with which the publications are cited.
(169) Where a range of values is provided, it is understood that each intervening value, to the tenth of the unit of the lower limit unless the context clearly dictates otherwise, between the upper and lower limit of that range, and any other stated or intervening value in that stated range, is encompassed within the invention. The upper and lower limits of these smaller ranges may independently be included in the smaller ranges, and are also encompassed within the invention, subject to any specifically excluded limit in the stated range. Where the stated range includes one or both of the limits, ranges excluding either or both of those included limits are also included in the invention.
(170) It must be noted that as used herein and in the appended claims, the singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art.
(171) The publications discussed herein are provided solely for their disclosure prior to the filing date of the present application. Nothing herein is to be construed as an admission that the described invention is not entitled to antedate such publication by virtue of prior invention. Further, the dates of publication provided may be different from the actual publication dates which may need to be independently confirmed.
(172) The described invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof and, accordingly, reference should be made to the appended claims, rather than to the foregoing specification, as indicating the scope of the invention.
Examples
(173) The following examples are put forth so as to provide those of ordinary skill in the art with a complete disclosure and description of how to make and use the present invention, and are not intended to limit the scope of what the inventors regard as their invention nor are they intended to represent that the experiments below are all or the only experiments performed. Efforts have been made to ensure accuracy with respect to numbers used (e.g. amounts, temperature, etc.) but some experimental errors and deviations should be accounted for. Unless indicated otherwise, parts are parts by weight, molecular weight is weight average molecular weight, temperature is in degrees Centigrade, and pressure is at or near atmospheric.
(174) Experimental Design
(175) A. Data Description
(176) A total of 50 (30 malignant, 20 benign) breast DCE-MRI studies were obtained from the Hospital of the University of Pennsylvania. All of these were clinical cases in which a screening mammogram revealed a lesion suspicious for malignancy. All studies were collected under Institutional Review Board approval, and lesion diagnosis was confirmed by biopsy and histologic examination. Sagittal T1-weighted, spoiled gradient echo sequences with fat suppression, consisting of one series before contrast injection of Gd-DTPA (pre-contrast) and 3 to 8 series after contrast injection (post-contrast), were acquired at 1.5 Tesla (Siemens Magnetom). Single slice dimensions were 384×384 or 512×512 with a slice thickness of 3 cm. Temporal resolution between post-contrast acquisitions was a minimum of 90 seconds.
(177) B. Application of Spectral Embedding-Based Active Contour (SEAC) to Breast DCE-MRI
(178) For each pixel, c, in each image, a dynamic signal intensity vector was created consisting of the signal intensity values of the pixel at each time point in the time series. F(c) is the function that assigns a signal intensity value at every pixel c at each time point tε{0, 1, 2, . . . , T−1}, where T is the number of time points in the DCE-MRI time series. t=0 refers to the time at which the pre-contrast image is acquired, and tε{1, 2, . . . , T−1} refer to the times at which the subsequent post-contrast images are acquired.
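The construction of these per-pixel vectors can be sketched as a single reshape (an illustrative sketch; the function name is ours):

```python
import numpy as np

def time_intensity_matrix(series):
    """Stack a DCE-MRI time series into per-pixel feature vectors.

    series: T x H x W array with t = 0 the pre-contrast frame and
    t = 1, ..., T-1 the post-contrast frames. Returns an (H*W) x T
    matrix whose row for pixel c is F(c), the dynamic signal intensity
    vector that is fed to SE (or to FCM/PCA for comparison).
    """
    T = series.shape[0]
    return series.reshape(T, -1).T
```

Each row of the result is one pixel's enhancement curve, which is exactly the feature vector that the pixel-wise dimensionality reduction methods operate on.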
(179) C. Implementation of Spectral Embedding-Based Active Contour (SEAC)
(180) For each dataset, a seed point is selected in the region of interest based on visual inspection of the MR image. The same seed point is used across all alternative image representations.
(181) Since the hybrid active contour incorporates region and boundary information, the individual constituent terms need to be differentially weighted based on the specific image domain to optimize the active contour. Thus, a brute force examination of these weights was carried out by first trying all combinations of weights on a subset of 10 of the datasets. The accuracy of the segmentation as described in the next section was used to determine the optimal parameters for a given image initialization. The range of parameters used with each AC model is described in Table II, and these ranges were based on the empirically derived optimal weights used previously (2). The stopping criterion for the AC is that which minimizes the energy function such that the difference in energy between a given iteration of the AC and the iteration preceding it is less than epsilon (see Algorithm V E for further details).
(182) TABLE-US-00004
TABLE II Summary of Experiments and Corresponding Parameters
Comparison Strategies | Method (n = 50) | Parameters Used
Boundary | Peak CE, PCA, FCM, PrEIm | α = 1, β = 0, ζ = 1
Region | Peak CE, PCA, FCM, PrEIm | α = 0, β = 1, ζ = 1
Hybrid | FCM, PrEIm | α, β ∈ {0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8}, ζ = 1
SVM Classification | FCM, PrEIm | Radial basis function
(183) D. Comparative Image Representation Strategies
(184) Like SE, fuzzy c-means (FCM) and principal component analysis (PCA) are applied to the images on a per-pixel basis, which allows for a direct comparison of the methods as initializations for the active contour segmentation.
(185) Fuzzy C-Means (FCM)
(186) Fuzzy c-means (FCM) clustering as described in Chen et al. (9) is widely used for automated segmentation of breast lesions on DCE-MRI (4,28). FCM is a data clustering scheme similar to k-means in that data are clustered around a prescribed number of centroids. However, unlike k-means, the resulting class membership is a fuzzy membership to each cluster. In the described invention, the method in Shi et al. (Medical Physics, 36(11):5052-5063, 2009) (referred to as FCM+AC) was implemented to compare the active contour (AC) driven by FCM to that driven by SE in spectral embedding-based active contour (SEAC), where FCM is applied to the time-intensity vectors, F.
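A minimal FCM loop can be sketched as follows. This is our own toy implementation of the standard algorithm (not the Chen et al. or Shi et al. code); initialization and iteration count are arbitrary choices:

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: like k-means but with soft memberships.

    X: N x D feature matrix (e.g. the time-intensity vectors F);
    m: fuzzifier (> 1). Returns (centroids, memberships), where
    memberships is N x n_clusters and every row sums to 1.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]       # weighted centroids
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships
    return C, U
```

The soft membership map for the lesion cluster is what plays the role of a parametric image when FCM, rather than SE, initializes the active contour.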
(187) Principal Component Analysis (PCA)
(188) Principal component analysis (PCA) is a linear dimensionality reduction method which attempts to reduce the dimensionality of the data while retaining the maximum variance of the dataset (Jolliffe, I., Principal Component Analysis. Springer Series in Statistics. Springer-Verlag, New York, 2nd edition, 2002). PCA is most commonly implemented by performing an eigenvalue decomposition of a covariance matrix generated from the original data. The resulting eigenvectors are the principal components, and the first few retain the maximum variance in the original dataset. In addition, if the eigenvectors are chosen to be orthonormal, then the variance captured by a given eigenvector is reflected by the corresponding eigenvalue. In the described invention, the input matrix to PCA is also F.
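The eigendecomposition route just described can be sketched as follows (an illustrative sketch; the function name and return convention are ours):

```python
import numpy as np

def pca(F, n_components=3):
    """PCA via eigendecomposition of the covariance matrix of F (N x D).

    Returns (projections, components, eigenvalues): the data projected
    onto the top principal components, the orthonormal components
    themselves, and the variance each one captures.
    """
    Fc = F - F.mean(axis=0)
    cov = np.cov(Fc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)                 # ascending order
    order = np.argsort(vals)[::-1][:n_components]    # largest variance first
    return Fc @ vecs[:, order], vecs[:, order], vals[order]
```

Applied to the time-intensity matrix F, the first three projections give the 3-channel PCA parametric image that the comparative PCA+AC strategy feeds to the active contour.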
(189) Applying the Hybrid Active Contour to the Comparative Strategies
(190) After FCM and PCA are performed on all images, the hybrid active contour model is applied to both the 3-dimensional FCM parametric image result (referred to as FCM+AC) and the 3-dimensional PCA parametric image result (referred to as PCA+AC). In order to demonstrate the limitations of using the grayscale intensity images, spectral embedding-based active contour (SEAC) is compared to an AC driven by the peak contrast enhancement image automatically selected for each dataset.
(191) Segmentation Performance Evaluation Measures
(192) A. Ground Truth Generation
(193) The region of interest (ROI) associated with the lesion was manually segmented via MRIcro imaging software (Rorden, C. and Brett, M., Behav Neurol, 12(4):191-200, 2000) by an attending radiologist (M.A.R.) with expertise in MR mammography who was naive to the lesion diagnosis. The radiologist selected the 2D slice of the MRI volume that was most representative of each lesion, and the analyses were performed only for that 2D slice. The ground truth segmentation is defined as the manual segmentation performed by the radiologist, and ground truth diagnosis was confirmed on histopathologic examination of lesion biopsy by a pathologist.
(194) B. Boundary-Based Metrics
(195) Mean absolute difference (MAD) is calculated by evaluating the mean distance between each point, c.sub.z, on the automated segmentation boundary G.sub.1.sup.a(Ĉ)={c.sub.z|zε{1, . . . , Z}} (spectral embedding-based active contour (SEAC), PCA+AC or FCM+AC), where Z=|G.sub.1.sup.a(Ĉ)| and |•| is the cardinality of any set, and the corresponding closest point, c.sub.ψ, on the ground truth (GT) manual segmentation G.sub.1.sup.b(C)={c.sub.ψ|ψε{1, . . . , |G.sub.1.sup.b(C)|}}, such that,
(196)
(197) Lower values of MAD reflect a more similar segmentation to the GT manual segmentation. Maximum Hausdorff distance (HD.sub.max) is calculated as
(198)
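These two boundary measures can be sketched directly from point sets. This is our own brute-force sketch: MAD is taken in the automated-to-GT direction given above, and HD.sub.max as the symmetric worst case:

```python
import numpy as np

def boundary_metrics(auto_pts, gt_pts):
    """MAD and maximum Hausdorff distance between two contours.

    auto_pts: Z x 2 points on the automated boundary; gt_pts: points on
    the ground-truth boundary. MAD averages, over automated points, the
    distance to the closest GT point; HD_max is the worst closest-point
    distance taken over both directions.
    """
    d = np.linalg.norm(auto_pts[:, None, :] - gt_pts[None, :, :], axis=2)
    mad = d.min(axis=1).mean()
    hd_max = max(d.min(axis=1).max(), d.min(axis=0).max())
    return mad, hd_max
```

MAD reports a typical boundary error while HD.sub.max flags the single worst deviation, which is why both are reported in Tables IV through VI.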
(199) C. Area-Based Metrics
(200) Dice similarity coefficient (DSC) is calculated as follows:
(201)
(202) where G.sub.2.sup.a(Ĉ) is the area enclosed by the automated segmentation and G.sub.2.sup.b(C) is the area enclosed by the manual GT segmentation. The closer the DSC value is to 1, the more similar the automated lesion segmentation is to the GT segmentation.
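On binary masks the DSC reduces to a few lines (an illustrative sketch):

```python
import numpy as np

def dice(auto_mask, gt_mask):
    """Dice similarity coefficient 2|A ∩ B| / (|A| + |B|) on binary masks."""
    a = np.asarray(auto_mask, dtype=bool)
    b = np.asarray(gt_mask, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```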
(203) D. Classifier-Based Metrics
(204) Because an accurate lesion segmentation is necessary for accurate morphological feature extraction, improved classification accuracy would be the ultimate test to quantitatively demonstrate the superiority of SEAC over other state of the art breast lesion segmentation methods. Morphological features based on the lesion contour are extracted and used in conjunction with a support vector machine (SVM) classifier (a) to determine if morphological features based on SEAC contours can yield similar classifier accuracy to morphological features extracted from manually segmented boundaries; and (b) to determine if morphological features based on SEAC segmentations will result in higher classifier accuracy compared to morphological features based on FCM+AC segmentations of breast lesions. In the present invention, 6 morphological features (Agner, S. et al., J. Digital Imaging, 24(3):446-463, 2011, incorporated by reference herein in its entirety) were calculated on 50 datasets (20 benign; 30 malignant). The features extracted are listed in Table III.
(205) TABLE-US-00005
TABLE III Morphological Features Extracted from SEAC, FCM + AC and Manual Segmentation
Area Overlap Ratio
Normalized Average Radial Distance Ratio
Standard Deviation of Normalized Distance Ratio
Variance of Distance Ratio
Compactness
Smoothness
(206) The morphological feature calculation requires a pre-defined lesion boundary, so the boundaries resulting from SEAC, FCM+AC, PCA+AC, and the manual segmentation were used for morphological feature extraction. The features for each lesion were then used in conjunction with an SVM classifier with leave-one-out cross validation to determine the lesion diagnosis. The method described in Agner et al. (J. Digital Imaging, 24(3):446-463, 2011) was used to convert the distance of an object from the hyperplane into a pseudo-likelihood, which was used to generate a ROC curve for evaluating the different SVM classifiers.
(207) E. Paired t-Test
(208) Since FCM+AC is currently the most widely used method, statistical analysis is performed by calculating a paired t-test (Chen et al., Medical Physics, 33(8):2878-2887, 2006) between the metrics obtained using FCM+AC and those obtained using both PCA+AC and SEAC. For the comparison of the AC models, the actual population means are used. For the classification experiment, a population mean of 0.5 is used, since the AUC of each method is being compared to an AUC of 0.5 (Bhooshan, M. et al., Radiology, 254(3):680-690, 2010).
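The two tests can be sketched with SciPy on illustrative numbers (the values below are made up for demonstration and are not the study's measurements):

```python
import numpy as np
from scipy import stats

# Paired t-test between per-lesion DSC scores of two segmentation
# methods (hypothetical values, for illustration only).
dsc_fcm = np.array([0.50, 0.42, 0.61, 0.55, 0.48, 0.58])
dsc_seac = np.array([0.66, 0.60, 0.70, 0.68, 0.59, 0.72])
t_paired, p_paired = stats.ttest_rel(dsc_seac, dsc_fcm)

# One-sample t-test of classifier AUC estimates against the chance
# level of 0.5, as in the classification experiment.
auc_runs = np.array([0.71, 0.69, 0.74, 0.70, 0.73])
t_auc, p_auc = stats.ttest_1samp(auc_runs, 0.5)
```

The paired form is appropriate because the two methods are evaluated on the same lesions; the one-sample form tests whether a classifier's AUC differs from chance.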
(209) In the described invention, two different aspects of spectral embedding-based active contour (SEAC) were evaluated by comparing the alternative data representations obtained via peak CE, FCM, PCA and SE in terms of their ability to drive region-based and boundary-based AC segmentation schema. It was hypothesized that SE would provide better boundary and region statistics for driving an active contour segmentation scheme than peak CE, FCM or PCA (Agner, S. et al., Spectral embedding-based active contour (SEAC): application to breast lesion segmentation on DCE-MRI., 7963, page 796305, SPIE, 2011). The FCM method, a commonly used lesion segmentation method for breast DCE-MRI, was compared with SE in the ability of each to drive a hybrid AC model (Bhooshan, M. et al., Radiology, 254(3):680-690, 2010; Shi, J. et al., Medical Physics, 36(11):5052-5063, 2009).
Example 1. Comparison of Different Image Representation Schemes in Capturing Gradients
(210)
(211) TABLE-US-00006
TABLE IV Evaluations of Image Representations in Conjunction with a Boundary Based AC (n = 50)
AC Method | MAD (μ ± σ) | DSC (μ ± σ) | HD.sub.max (μ ± σ)
Peak CE + AC | 7.26 ± 12.02 | 0.56 ± 0.31 | 13.11 ± 14.11
FCM + AC | 6.64 ± 6.37 | 0.50 ± 0.32 | 11.86 ± 10.14
PCA + AC | 5.50 ± 4.23 | 0.52 ± 0.25 | 11.2 ± 7.50
SEAC | 5.87 ± 5.94 | 0.57 ± 0.30 | 11.06 ± 9.90
Example 2. Comparison of Different Image Representation Schemes in Capturing Region Statistics
(212)
(213) Table V shows the performance of region-based ACs. Again, spectral embedding-based active contour (SEAC) provided the segmentations that were most similar to ground truth in terms of MAD, DSC, and HD.sub.max. Considering the fact that dimensionality reduction methods are meant to preserve global patterns in the data, it is not surprising that the region term is able to capitalize on this strength of both the linear and nonlinear dimensionality reduction methods used in the described invention.
(214) TABLE-US-00007
TABLE V Evaluations of Image Representations in Conjunction with a Region Based AC (n = 50)
AC Method | MAD (μ ± σ) | DSC (μ ± σ) | HD.sub.max (μ ± σ)
PCA + AC | 4.87 ± 4.35 | 0.48 ± 0.24 | 10.9 ± 7.01
SEAC | 4.24 ± 3.80 | 0.53 ± 0.26 | 9.76 ± 7.35
Example 3. Comparing Image Segmentation Strategies
(215) A. Boundary-Based Measures
(216) Since spectral embedding-based active contour (SEAC) performed best with both the boundary-based and region-based AC models, spectral embedding-based active contour (SEAC) was compared to the previously published FCM+AC model using the hybrid active contour model (Shi et al., Medical Physics, 36(11):5052-5063, 2009). Table VI shows the results for the optimized hybrid models. Spectral embedding-based active contour (SEAC) resulted in a statistically significant improvement over FCM+AC in terms of MAD, DSC and HD.sub.max (indicated by an asterisk (*) in Table VI). The best results for the hybrid model weighted the region term at 0.4 and the boundary term at 6.4. Since the FCM segmentations degraded when the region-based term was added to the model, the optimized spectral embedding-based active contour (SEAC) is compared to the boundary-based FCM+AC.
(217) In addition, a t-test was performed to test the hypotheses (1) that spectral embedding-based active contour (SEAC) more accurately approximates the ground truth segmentation in terms of MAD, HD, and DSC compared to FCM; and (2) that spectral embedding-based active contour (SEAC) demonstrates a statistically significant improvement in segmentation over the popular FCM segmentation method.
(218) TABLE VI. Evaluations of Image Representations in Conjunction with a Hybrid AC (n = 50, *p < 0.05)

AC Method  MAD (μ ± σ)    DSC (μ ± σ)    HD_max (μ ± σ)
FCM + AC   6.64 ± 6.37    0.50 ± 0.32    11.86 ± 10.14
SEAC*      3.37 ± 2.70    0.66 ± 0.22    8.79 ± 6.87
(219) B. Classifier Accuracy
(220) Since FCM is one of the most widely used breast lesion segmentation methods, FCM and spectral embedding-based active contour (SEAC) were compared as lesion segmentation methods from which to derive quantitative morphological features for classifying lesions as benign or malignant. Once the active contour was optimized for FCM+AC and spectral embedding-based active contour (SEAC), the best case active contour for each respective method was used to generate the lesion contours from which morphological features were extracted. Lesion classification then was performed using a support vector machine (SVM) classifier (Cortes, C. and Vapnik, V. Machine Learning, 20(3):273-297, 1995, incorporated by reference herein in its entirety) based on these automatically derived contours. The classification results using the automatically derived contours then were compared to classification driven by contours from manual segmentations. A Student's t-test was performed to compare the AUC for each classifier to the null hypothesis, which is taken to be an AUC = 0.5, i.e., no difference in classification from that expected by chance (Bhooshan, M. et al., Radiology, 254(3):680-690, 2010). The null hypothesis tested was that there is no statistical difference between the AUC derived from breast lesion classification using the spectral embedding-based active contour (SEAC) or FCM segmentation and an AUC of 0.5. Table VII shows the SVM classification AUC for 10 trials of 10-fold cross validation based on morphological features derived from FCM+AC and spectral embedding-based active contour (SEAC). Spectral embedding-based active contour (SEAC) performed better than FCM+AC, and the AUC for the ROC curve for spectral embedding-based active contour (SEAC) is statistically significantly better than an AUC of 0.5 (Bhooshan, M. et al., Radiology, 254(3):680-690, 2010), whereas the AUC for FCM+AC was not.
The AUC of 0.67 for spectral embedding-based active contour (SEAC) is also comparable to the AUC for lesion diagnosis based on morphological features found elsewhere in the literature (Gilhuijs, K. et al., Medical Physics, 25(9):1647-1654, 1998).
(221) TABLE VII. Classification Accuracy Based on Automated Segmentation (n = 50, *p < 0.05)

Segmentation Method  AUC (μ ± σ)
FCM + AC             0.50 ± 0.07
SEAC*                0.67 ± 0.05
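AUC figures like those above can be computed nonparametrically from classifier scores via the Mann-Whitney U statistic, without explicitly tracing the ROC curve. The sketch below is a stand-alone illustration, not the classifier pipeline of the invention; the label encoding (1 = malignant, 0 = benign) is an assumption for the example.

```python
import numpy as np

def roc_auc(scores, labels):
    """Nonparametric AUC: the probability that a randomly chosen positive
    case receives a higher score than a randomly chosen negative case,
    with ties counted as one half (Mann-Whitney U / n_pos*n_neg)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]  # e.g. malignant (assumed encoding)
    neg = scores[labels == 0]  # e.g. benign
    gt = (pos[:, None] > neg[None, :]).sum()    # positive-ranked-higher pairs
    ties = (pos[:, None] == neg[None, :]).sum() # tied pairs count 0.5
    return (gt + 0.5 * ties) / (len(pos) * len(neg))
```

A perfectly separating classifier yields 1.0, a chance-level one about 0.5, matching the AUC = 0.5 null hypothesis used in the t-test above.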
(222) When using a hybrid AC model, determining the weights for each term in the energy functional is a difficult task. In the present invention, the optimal weights for the hybrid AC model were obtained by first performing the segmentations using various combinations of a predetermined range of weights. The weights that provided the best segmentations for the subset of data were then used for the entire dataset.
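The weight-tuning procedure described above amounts to an exhaustive grid search on a tuning subset. A minimal sketch follows, in which `segment` and `dice` are hypothetical hooks standing in for the hybrid AC segmentation and the DSC computation, and the candidate weight grids are illustrative (the patent reports 0.4/6.4 as the best region/boundary pair, so those values are included):

```python
import itertools
import numpy as np

def tune_hybrid_weights(cases, segment, dice,
                        region_ws=(0.2, 0.4, 0.8),
                        boundary_ws=(1.6, 3.2, 6.4)):
    """Exhaustive search over a predetermined weight grid.

    cases   : tuning subset; each case carries its ground truth under "gt"
    segment : segment(case, w_region, w_boundary) -> segmentation (hook)
    dice    : dice(segmentation, ground_truth) -> DSC in [0, 1] (hook)
    Returns the (w_region, w_boundary) pair maximizing mean DSC.
    """
    best, best_dsc = None, -1.0
    for wr, wb in itertools.product(region_ws, boundary_ws):
        dsc = np.mean([dice(segment(c, wr, wb), c["gt"]) for c in cases])
        if dsc > best_dsc:
            best, best_dsc = (wr, wb), dsc
    return best, best_dsc
```

The selected pair is then reused unchanged on the full dataset, as the paragraph above describes.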
(223) It was found that when the spectral embedding-based active contour (SEAC) automated segmentation results were used for morphological feature extraction, they resulted in better lesion classification than when using FCM+AC. The fact that spectral embedding-based active contour (SEAC) outperformed FCM+AC in the lesion classification task suggests that spectral embedding-based active contour (SEAC) was capable of capturing subtleties of lesion morphology that are critical to accurate lesion classification.
(224) The classification results of the present invention also suggest that endpoint classification may be an additionally important metric for the evaluation of automated segmentation methods designed for use in conjunction with computer-aided diagnosis systems. It was also found that, for the datasets that ranged in image grid size from 384×384 to 512×512 pixels, total run-time ranged from approximately 3-12 minutes, depending on the size of the image.
Example 4. Evaluation of SERg
(225) Table VIII shows a summary of the 4 datasets used, which sum to 38 image pairs, for evaluation of SERg. Synthetic brain data was generated from the MNI BrainWeb database (Kwan, R. et al., IEEE Transactions on Med. Imaging 18(11), 1085-97, 1999). Contrast dye introduced to the subject can be interpreted as noise in the images, and as such, the image without noise is considered to be the pre-contrast image. The 1st post-contrast time point is simulated by the image with 1% noise, the 2nd post-contrast time point is simulated by the image with 5% noise, and the 3rd post-contrast time point is simulated by the image with 9% noise. Image deformations were imposed using thin-plate splines (Bookstein, F., IEEE PAMI 11(6) 567-585, 1989).
(226) TABLE VIII. Datasets Used to Evaluate SERg

Data type                      Image pairs  Data description
Synthetic brain                12           4 different image deformations, 3 different noise levels
Fat-suppressed breast DCE-MRI  10           Peak-post- and pre-contrast enhancement; fat-suppression in acquisition
Fat-saturated breast DCE-MRI   6            Peak-post- and pre-contrast enhancement; no fat-suppression in acquisition
(227) Eight 1st order statistical features along with 7 edge detection filters at a window size of 5×5 voxels are calculated for each pre- and post-contrast image. This feature set plus the original raw intensity image results in a feature vector of length 16.
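A per-pixel first-order statistical feature scene of the kind described above can be computed with a sliding window. The sketch below is illustrative only: it computes an assumed subset of window statistics (the patent's exact eight features and seven edge filters are not reproduced here) over a 5×5 neighborhood.

```python
import numpy as np

def firstorder_features(img, w=5):
    """Per-pixel 1st-order statistics in a w x w window.

    Returns a dict of feature scenes, each the same shape as `img`.
    The statistics chosen (mean, std, median, min, max, range) are an
    illustrative subset, not the patent's exact feature set.
    """
    pad = w // 2
    p = np.pad(img.astype(float), pad, mode="reflect")
    H, W = img.shape
    # Gather all w*w shifted copies of the image: shape (w*w, H, W),
    # so axis 0 enumerates the window contents at every pixel.
    stack = np.stack([p[i:i + H, j:j + W]
                      for i in range(w) for j in range(w)])
    feats = {
        "mean": stack.mean(axis=0),
        "std": stack.std(axis=0),
        "median": np.median(stack, axis=0),
        "min": stack.min(axis=0),
        "max": stack.max(axis=0),
    }
    feats["range"] = feats["max"] - feats["min"]
    return feats
```

Stacking these scenes with the raw intensity image yields a per-pixel feature vector, analogous to the length-16 vector described above.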
(228) A. Measures of Registration Accuracy
(229) The registration accuracy obtained by SERg was compared to an intensity based registration, which was first proposed by Rueckert et al. (IEEE TMI 18(8) (1999) 712-721). Data was evaluated qualitatively by visual inspection of the difference image resulting from C.sup.post−C.sup.pre. Quantitative measures of registration accuracy were correlation ratio (CR) (Roche, A. et al., The correlation ratio as a new similarity measure for multimodal image registration. In: MICCAI'98, LNCS1496, Springer Verlag, 1115-1124, 1998), and NMI.
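The correlation ratio used as a quantitative measure can be estimated by binning one image's intensities and measuring how much of the other image's variance is explained within bins (Roche et al., 1998). The following is a minimal numpy sketch; the bin count is an assumed parameter, not a value from the source.

```python
import numpy as np

def correlation_ratio(x, y, bins=32):
    """Estimate CR(Y|X) = 1 - E[Var(Y|X)] / Var(Y) by binning X.

    Returns a value in [0, 1]: 1 indicates Y is a deterministic function
    of X (perfect functional dependence), 0 indicates no dependence.
    """
    x, y = np.ravel(x).astype(float), np.ravel(y).astype(float)
    edges = np.histogram_bin_edges(x, bins)
    idx = np.digitize(x, edges[1:-1])  # bin index per voxel
    total_var = y.var()
    if total_var == 0:
        return 1.0
    # Accumulate within-bin variance, weighted by bin occupancy.
    cond = 0.0
    for b in np.unique(idx):
        yb = y[idx == b]
        cond += yb.size * yb.var()
    return 1.0 - cond / (y.size * total_var)
```

Unlike NMI, the correlation ratio is asymmetric, which suits the pre-/post-contrast setting where one image is treated as a (noisy) function of the other.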
(230) B. Quantitative Results
(231)
(232) C. Qualitative Results
(233)
(234) While the described invention has been described with reference to the specific embodiments thereof, it should be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the true spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation, material, composition of matter, process, process step or steps, to the objective, spirit and scope of the described invention. All such modifications are intended to be within the scope of the claims appended hereto.