Method of computing a boundary
11610316 · 2023-03-21
Assignee
Inventors
- Noha El-Zehiry (Plainsboro, NJ, US)
- Karim Amer (Cairo, EG)
- Mickael Sonni Albert Ibrahim Ide (Lawrence Township, NJ, US)
- Athira Jacob (Plainsboro, NJ, US)
- Gareth Funka-Lea (Princeton, NJ, US)
CPC classification
G06V10/44
PHYSICS
A61B8/085
HUMAN NECESSITIES
G06V10/25
PHYSICS
International classification
G06V30/262
PHYSICS
G06V10/25
PHYSICS
Abstract
The disclosure relates to a method for determining a boundary about an area of interest in an image set. The method includes obtaining the image set from an imaging modality and processing the image set in a convolutional neural network. The convolutional neural network is trained to perform the acts of predicting an inverse distance map for the actual boundary in the image set; and deriving the boundary from the inverse distance map. The disclosure also relates to a method of training a convolutional neural network for use in such a method, and a medical imaging arrangement.
Claims
1. A method of determining a boundary about an area of interest in an image set, the method comprising: obtaining the image set from an imaging modality; and processing the image set in a convolutional neural network, wherein the convolutional neural network has been trained through: predicting an inverse distance map for an actual boundary in the image set by blurring the actual boundary by a Gaussian function; and deriving the boundary from the inverse distance map.
2. The method of claim 1, wherein the deriving of the boundary from the predicted inverse distance map comprises thresholding the inverse distance map to obtain a binary boundary and subsequently extracting a medial axis of the binary boundary.
3. The method of claim 1, wherein the boundary computed by the convolutional neural network is an open boundary or a closed surface boundary.
4. The method of claim 1, wherein the convolutional neural network is configured to perform semantic image segmentation.
5. The method of claim 1, wherein the convolutional neural network implements a U-Net architecture comprising four layers comprising encoder blocks and decoder blocks, wherein each encoder block comprises a plurality of convolution filters, and wherein each decoder block comprises a plurality of deconvolution filters.
6. The method of claim 5, wherein the image set comprises a plurality of two-dimensional (2D) images, wherein the inverse distance map is predicted for a two-dimensional surface boundary about an area of interest in a 2D image of the plurality of 2D images, and wherein a two-dimensional boundary is inferred from the inverse distance map.
7. The method of claim 5, wherein the image set comprises a three-dimensional (3D) image volume, wherein the inverse distance map is predicted for a three-dimensional surface boundary about an area of interest in the 3D image volume, and wherein the three-dimensional surface boundary is inferred from the inverse distance map.
8. The method of claim 7, further comprising, prior to predicting the inverse distance map: generating the 3D image volume from a plurality of image slices obtained from a 2D imaging modality.
9. The method of claim 1, wherein the image set comprises a plurality of two-dimensional (2D) images, wherein the inverse distance map is predicted for a two-dimensional surface boundary about an area of interest in a 2D image of the plurality of 2D images, and wherein a two-dimensional boundary is inferred from the inverse distance map.
10. The method of claim 1, wherein the image set comprises a three-dimensional (3D) image volume, wherein the inverse distance map is predicted for a three-dimensional surface boundary about an area of interest in the 3D image volume, and wherein the three-dimensional surface boundary is inferred from the inverse distance map.
11. The method of claim 10, further comprising, prior to predicting the inverse distance map: generating the 3D image volume from a plurality of image slices obtained from a 2D imaging modality.
12. The method of claim 1, wherein the imaging modality is intracardiac echocardiography.
13. A method of training a convolutional neural network for use in determining a boundary about an area of interest in an image set, the method comprising: annotating the image set to identify the boundary about the area of interest in the image set; replicating an inverse distance map of the boundary for use as a ground truth by the convolutional neural network, wherein the replicating of the inverse distance map of the boundary is performed by blurring the boundary by a Gaussian function; applying the convolutional neural network to the image set and the ground truth to predict the inverse distance map approximating the ground truth; and repeating the annotating, the replicating, and the applying until a desired level of accuracy has been achieved.
14. The method of claim 13, wherein the convolutional neural network is configured to minimize a mean squared error between the ground truth and the predicted inverse distance map.
15. A medical imaging arrangement comprising: a processor comprising a convolutional neural network loaded into a memory of the processor, wherein the processor is configured to: obtain an image set and process the image set in the convolutional neural network, and wherein the convolutional neural network has been trained by predicting an inverse distance map for an actual boundary in the image set by blurring the actual boundary by a Gaussian function and deriving a computed boundary from the inverse distance map; and a user interface configured to display the computed boundary in a context of the image set.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Other objects and features of the present disclosure will become apparent from the following detailed descriptions considered in conjunction with the accompanying drawings. It is to be understood, however, that the drawings are designed solely for the purposes of illustration and not as a definition of the limits of the disclosure.
(10) In the diagrams, like numbers refer to like objects throughout. Objects in the diagrams are not necessarily drawn to scale.
DETAILED DESCRIPTION
(11) The diagrams illustrate exemplary embodiments using images of a 3D Intracardiac Echocardiography (ICE) image volume, but it shall be understood that the method may be deployed to infer a boundary about a region of interest in a 2D image. Furthermore, it shall be understood that the method may be deployed for other imaging modalities. For example, a set of images from a 2D imaging modality may be assembled into a sparse 3D volume for use as an input to the training network. A 3D volume is considered sparse in the case of incomplete sampling of all voxel locations. While the following diagrams may only show images in the plane of the page, it shall be understood that the method is performed on a 3D volume and that any image shown in a diagram is only one “slice” of a 3D volume.
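The assembly of 2D slices into a sparse 3D volume described above can be sketched as follows; this is a minimal illustration, and the function name, slice positions, and volume shape are assumptions, not part of the disclosure. Voxel locations at unsampled slice positions simply remain empty, which is what makes the volume sparse.

```python
# Hypothetical sketch: assembling 2D image slices into a sparse 3D volume.
import numpy as np

def assemble_sparse_volume(slices, slice_indices, volume_shape):
    """Place each 2D slice at its known index along the z axis.

    Voxels at unsampled z positions remain zero, so the resulting
    volume is an incomplete ("sparse") sampling of voxel locations.
    """
    volume = np.zeros(volume_shape, dtype=np.float32)
    for img, z in zip(slices, slice_indices):
        volume[z] = img
    return volume

# Example: three 4x4 slices sampled at z = 0, 3, and 7 of an 8x4x4 volume.
slices = [np.ones((4, 4)) * k for k in (1, 2, 3)]
vol = assemble_sparse_volume(slices, [0, 3, 7], (8, 4, 4))
```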
(14) As explained above, this convolutional neural network 1 is trained on a non-binary ground truth representation (e.g., non-binary representations of boundaries are used to train the convolutional neural network).
(15) In an exemplary embodiment, the ground truth is an inverse distance map. This stage of the training is illustrated with
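The inverse distance map ground truth described in the claims could be generated along the following lines; this is a minimal sketch, assuming the annotated boundary is available as a binary mask, and the blur width `sigma` is an illustrative choice. Blurring the boundary with a Gaussian yields a map whose values peak on the boundary and decay smoothly with distance from it, which is what "inverse distance map" refers to here.

```python
# Sketch: replicate an inverse distance map by Gaussian-blurring a
# binary boundary mask (values peak at the boundary, decay away from it).
import numpy as np
from scipy.ndimage import gaussian_filter

def inverse_distance_map(boundary_mask, sigma=2.0):
    blurred = gaussian_filter(boundary_mask.astype(np.float32), sigma=sigma)
    return blurred / blurred.max()  # normalize so the boundary peaks at 1

# Example: a one-pixel-wide vertical boundary in a 2D slice.
mask = np.zeros((32, 32))
mask[:, 16] = 1
m = inverse_distance_map(mask)
```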
(16) The drawing indicates an image 30 shown in the display 21, and any such image may be assumed to be similar to the 3D ICE images shown in
(17) The task of the convolutional neural network 1 of
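The training objective referred to above (and in claim 14) can be sketched as regressing the predicted map onto the ground-truth inverse distance map under a mean squared error loss. The tiny stand-in model below is purely illustrative and is not the U-Net of claim 5; the data, learning rate, and step count are likewise assumptions.

```python
# Hedged sketch of the training step: minimize MSE between the predicted
# map and the ground-truth inverse distance map.
import torch
import torch.nn as nn

model = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # stand-in for the U-Net
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

image = torch.rand(1, 1, 32, 32)         # input slice
ground_truth = torch.rand(1, 1, 32, 32)  # Gaussian-blurred boundary map

for _ in range(10):                      # a few illustrative steps
    optimizer.zero_grad()
    prediction = model(image)
    loss = loss_fn(prediction, ground_truth)
    loss.backward()
    optimizer.step()
```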
(18) After training is complete, the model 1 may be applied to detect or infer boundary surfaces in a 3D image volume. In inference mode, the model 1 predicts a distance map M.sub.infer from the surface boundary about an area of interest, after which thresholding and skeletonization acts are performed to arrive at the predicted surface boundary.
(21) The trained convolutional neural network 1 described in
(22) The inverse distance map M.sub.infer is then thresholded to obtain a binary boundary B.sub.binary, (e.g., a set of values that either belong to the boundary or do not belong to the boundary). An exemplary result of thresholding the distance map M.sub.infer is superimposed on the image 30 and shows a band B.sub.binary representing the values that are deemed to belong to the boundary. The “thickness” of the binary boundary B.sub.binary depends on the choice of threshold, and a threshold value may be chosen that will provide a non-fragmented boundary.
(23) In a final stage, the surface boundary B.sub.infer is refined by performing a skeletonization act (or “thinning”) on the binary boundary B.sub.binary to extract the medial axis. The 3D surface boundary B.sub.infer may then be presented to the user by a suitable graphics program. Alternatively, as indicated here, a slice through the surface boundary B.sub.infer may be superimposed on the corresponding slice 30 of the 3D volume and shown to the user. Once the 3D surface boundary B.sub.infer is established for the area of interest, the user may interact with the imaging modality to alter the viewing angle, and the image presented on the monitor is continually updated to show the correct slice through the surface boundary B.sub.infer.
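The two post-processing acts above (thresholding, then thinning to the medial axis) could be sketched as follows for a single 2D slice; the simulated predicted map and the threshold value of 0.5 are assumptions for illustration only.

```python
# Sketch: threshold the predicted inverse distance map to a binary band,
# then skeletonize the band to recover its medial axis as the boundary.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.morphology import skeletonize

# Simulate a predicted inverse distance map around a vertical boundary.
m_infer = np.zeros((32, 32))
m_infer[:, 16] = 1
m_infer = gaussian_filter(m_infer, sigma=2.0)
m_infer /= m_infer.max()

b_binary = m_infer > 0.5          # thick binary band around the boundary
b_infer = skeletonize(b_binary)   # thin the band to its medial axis
```

A lower threshold widens `b_binary` and so reduces the risk of a fragmented boundary, at the cost of more thinning work in the skeletonization step.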
(27) Although the present disclosure has been discussed in the form of certain embodiments and variations thereon, it will be understood that numerous additional modifications and variations may be made thereto without departing from the scope of the disclosure. For example, when inferring a boundary from a 2D image, the architecture of the CNN is based on two-dimensional filters, and the ground truths used to train the CNN may be simple contours (e.g., open or closed) obtained by manual annotation of 2D images.
(28) For the sake of clarity, it is to be understood that the use of “a” or “an” throughout this application does not exclude a plurality, and “comprising” does not exclude other acts or elements. The mention of a “unit” or a “module” does not preclude the use of more than one unit or module.
(29) It is to be understood that the elements and features recited in the appended claims may be combined in different ways to produce new claims that likewise fall within the scope of the present disclosure. Thus, whereas the dependent claims appended below depend from only a single independent or dependent claim, it is to be understood that these dependent claims may, alternatively, be made to depend in the alternative from any preceding or following claim, whether independent or dependent, and that such new combinations are to be understood as forming a part of the present specification.
(30) While the disclosure has been illustrated and described in detail with the help of the disclosed embodiments, the disclosure is not limited to the disclosed examples. Other variations may be deduced by those skilled in the art without departing from the scope of protection of the claimed disclosure.