SYSTEM AND METHOD FOR INTERACTIVE CONTOURING OF MEDICAL IMAGES

20230100255 · 2023-03-30

    Inventors

    Cpc classification

    International classification

    Abstract

    A method and imaging system for contouring medical images is described. The method comprises: receiving at least one input 2D image slice, from a set of two-dimensional (2D) image slices constituting the 3D image, and at least one set of data representing an input contour identifying one or more structures of interest in the 3D image within the at least one input 2D image slice; receiving at least one selected target image slice, from the set of the 2D image slices; and predicting target contour data for the selected target image slice that identifies at least one of the same one or more structures of interest within the target image slice, based on one or more of the received input 2D image slices and the data representing the input contour.

    Claims

    1-35. (canceled)

    36. A method of contouring a three-dimensional (3D) image, comprising: receiving at least one input 2D image slice, from a set of two-dimensional (2D) image slices constituting the 3D image, and at least one set of data representing an input contour identifying one or more structures of interest in the 3D image within the at least one input 2D image slice; receiving at least one selected target image slice, from the set of the 2D image slices; and predicting target contour data for the selected target image slice that identifies at least one of the same one or more structures of interest within the target image slice, based on one or more of the received input 2D image slices and the data representing the input contour.

    37. A method according to claim 36, wherein the target contour prediction is done using a machine learning model.

    38. A method according to claim 37, wherein the machine learning model is one or more of a neural network or a random forest.

    39. A method according to claim 36, wherein the input contour identifying one or more structures of interest identifies a previously unidentified structure of interest.

    40. A method according to claim 36, wherein at least one of: the target image slice, the input image slice, and the input contour, provides contextual information to identify a relevant location for a contour on the target image slice.

    41. A method according to claim 40, wherein the contextual information is provided from a plurality of sources comprising at least one input image slice, at least one input contour, and a target image.

    42. A method according to claim 40, wherein the contextual information comprises one or more of information on image features and/or contour features, or spatial relations between image data and/or contour data.

    43. A method according to claim 42, wherein the contextual information on spatial relations between image data and/or contour data is learnt from a training data set.

    44. A method according to claim 40, wherein the contextual information is information relating to one or more features shared between image slices in the set of 2D image slices.

    45. A method according to claim 44, wherein the image slices in the set of 2D image slices are consecutive image slices.

    46. A method according to claim 36, wherein the image is a medical image and the modality of the 3D image is one of: CT, MRI, Ultrasound, CBCT.

    47. A method according to claim 37, wherein the machine learning model for predicting target contour data has been trained using an image dataset that includes a plurality of images, each with one or more structures of interest shown on the images in the image dataset.

    48. A method according to claim 37, wherein the training of the machine learning model is performed on a plurality of different imaging modalities.

    49. A method according to claim 37, further comprising the step of updating the machine learning model based on user edits to the structures on one or more target image slices.

    50. A method according to claim 36, wherein contours for adjacent slices from the set of two-dimensional (2D) image slices are sequentially predicted.

    51. A method according to claim 36, wherein a first structure of interest is selected for a first 2D image slice and contours for the first structure are predicted for a first 2D image slice, and the predicted contours for the first 2D image slice are used for contouring the same structure of interest for one or more subsequent 2D image slices from the set of 2D image slices.

    52. A method according to claim 51, wherein the predicted contours are propagated through sequential image slices using direct propagation of the predicted contours.

    53. A method according to claim 51, wherein the predicted contours are propagated through sequential image slices by iterative propagation, with predicted contours for each subsequent image propagated based on iteration of the contours for the immediately preceding image slice.

    54. A method according to claim 36, wherein the data representing an input contour is either a user-generated contour, or is obtained by one or more of manual contouring, auto-contouring, or user-interactive contouring.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0064] Further details, aspects and embodiments of the invention will be described, by way of example only, with reference to the drawings. In the drawings, like reference numbers are used to identify like or functionally similar elements. Elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale.

    [0065] FIG. 1a: Performance of a single-structure model trained on the heart, during iterations of training;

    [0066] FIG. 1b: Performance of a single-structure model trained on the heart at iteration 40;

    [0067] FIG. 2a: Performance of a multi-structure model, during iterations of training;

    [0068] FIG. 2b: Performance of a multi-structure model at iteration 40;

    [0069] FIG. 3: Simplified block diagram of an example of a system for the segmentation of a 3D image according to the invention;

    [0070] FIG. 4: Different variants of data inputs for the contour prediction engine;

    [0071] FIG. 5: Block diagram of an interactive contouring workflow of a 3D medical image according to the invention;

    [0072] FIG. 6a: Illustrative example of target contour prediction for a 3D medical image with iterative propagation, showing the initial state of the 3D image;

    [0073] FIG. 6b: Illustrative example of target contour prediction for a 3D medical image with iterative propagation, showing contour prediction with iterative propagation;

    [0074] FIG. 6c: Illustrative example of target contour prediction for a 3D medical image with iterative propagation, showing the final state of the 3D image with contours;

    [0075] FIG. 7a: Illustrative example of target contour prediction for a 3D medical image with propagation using the user's initial image slice only, showing the initial state of the 3D image;

    [0076] FIG. 7b: Illustrative example of target contour prediction for a 3D medical image with propagation using the user's initial image slice only, showing contour prediction with the user's initial image slice;

    [0077] FIG. 7c: Illustrative example of target contour prediction for a 3D medical image with propagation using the user's initial image slice only, showing the final state of the 3D image with contours;

    [0078] FIG. 8: Block diagram for training a Machine Learning model for a contouring application;

    [0079] FIG. 9: Generation of training sets for multi-structure training of a Machine Learning model;

    [0080] FIG. 10a: Schematic example of an image with the associated contours identifying the different structures used for training the ML model;

    [0081] FIG. 10b: Schematic examples of images with structures/organs to be contoured, showing an example of a structure unseen by the ML model in training;

    [0082] FIG. 11: Illustrative examples of image planes as used in 3D medical images;

    [0083] FIG. 12: Comparison of ground truth (GT) and predictions of a single-organ heart model;

    [0084] FIG. 13: Comparison of ground truth (GT) and predictions of a multi-organ model trained on oesophagus, heart, left and right lung, and spinal cord.

    DETAILED DESCRIPTION

    [0085] While deep learning contouring has been shown to be beneficial in automatically contouring OARs on image slices, such approaches require a large volume of training data. In the absence of such data, for example when contouring previously unconsidered organs, manual contouring of the image slices is required.

    [0086] This invention uses an interactive contouring approach, preferably within a deep-learning framework, and investigates how this contouring approach behaves when provided with contextual information. It was found that, despite using an architecture that has contextual information available to it, the model only learns to segment a known organ if training is performed on single-organ data only. Using training data from multiple organs enforces that the context provided by the user is learnt.

    [0087] In contrast to the prior art methods discussed above, the disclosed method for this invention requires a model that can contour various structures of interest (which may be an organ in the human body, or another structure in the human body such as a tumour) on an image slice from the contextual information provided as described above. The contouring of various different structures of interest on an image slice can be achieved, for example, by training the model on multiple different structures (such as heart, lung, spinal cord, etc.) simultaneously. Such models should be able to provide outputs that better reflect the user guidance when segmenting and contouring previously unseen structures in a medical image, as a variety of structures were used in the training of the model. Preferably the model is a machine learning model.

    [0088] The disclosed invention addresses the problem of segmenting a previously unseen object or organ in a medical image using machine learning methods that take account of user-provided contextual information, such as a user-generated contour. Using input image slices and input contours of a structure of interest, the contour of the structure of interest on a target image slice is predicted using a contour prediction engine. In this invention, the input contour identifies the structure of interest on the input image slice.

    [0089] This approach enables contouring of a structure of interest on a medical image, even if the structure of interest has not been represented in the training set of the model. So, for example, if the training set consisted entirely of images on which only the organs were contoured, the model can still be applied to contour other structures, such as tumours, in a new image. This greatly increases the efficiency of the model.

    [0090] The medical image contouring system described herein provides the methods and tools for contouring a 3D medical image composed of a stack of 2D image slices. In an example of the invention the stack may include all sequential images, or a range of one or more images selected from a sequence, such as every alternate medical image in the sequence, for example. An example image contouring system in an embodiment of the invention may include a medical image database and a contour prediction engine. In an embodiment of the invention, the image database may be used to store a set of medical images that are 2D and/or 3D medical images. In an embodiment of the invention the contour prediction engine may be configured to receive at least one input image slice from a set of two-dimensional image slices, with associated contour data representing an input contour identifying one or more structures of interest to the user, and to also receive a target image slice from a 3D medical image in the image database. In an embodiment of the invention the structures of interest may have been previously unidentified. The contour prediction engine may further be configured to use a model, for example a machine learning model, to predict the target contour data for the previously unidentified structure of interest on the target image slice, based on one or more of the received input 2D image slices and the associated data representing an input contour.

    [0091] An example for contouring a 3D medical image may include the following steps: receiving an input 2D image slice from the 3D medical image and input contour data associated with the input 2D image slice, the input contour data identifying the one or more structures within the input 2D image slice; identifying a target image in the 3D medical image to contour; using a machine learning model to generate the target contour, the target contour data identifying the same one or more structures within the target image.

    [0092] A model, such as a machine learning model, that is used within the contour prediction engine is required to be able to identify structures (such as the heart, lungs, oesophagus, or spinal cord, for example) that have been used within the training set of the machine learning model. The model should also be able to handle unseen structures, i.e. structures that have not been previously included in the training set of the machine learning model. In an embodiment of this invention this is achieved by simultaneously training the model using various different anatomical structures within the training set.

    [0093] In an embodiment of the invention a deep learning segmentation model, using a convolutional neural network such as a U-Net, was trained using two alternative approaches. The models were trained either including contours from a single organ (FIG. 1 as discussed above) or from a variety of different organs (FIG. 2 discussed below). Contextual information was provided to the model, using the prior contoured image slice as an input, in addition to the image slice to be contoured. The dataset contains five OARs: heart, left and right lung, oesophagus and spinal cord. 12082 contoured organ image slices were used for slice-by-slice training. Results were evaluated on 4647 image slices using the Dice similarity coefficient. Both models were evaluated on all OARs, regardless of the training set used.
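    The Dice similarity coefficient used for this evaluation measures the overlap between a predicted contour mask and the ground-truth mask. A minimal sketch of how it may be computed (the function name and toy masks are illustrative, not from the source):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    Dice = 2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap,
    0.0 means no overlap at all.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 4x4 masks: each has 3 foreground pixels, 2 of which overlap,
# giving Dice = 2*2 / (3+3) = 0.667
a = np.zeros((4, 4)); a[1, 1] = a[1, 2] = a[2, 1] = 1
b = np.zeros((4, 4)); b[1, 1] = b[1, 2] = b[3, 3] = 1
```

    A score of 1.0 indicates perfect overlap, so the multi-organ model's heart score of 0.92 reported below corresponds to near-complete agreement with the ground truth.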

    [0094] An example of results demonstrating this using the subject invention is shown in FIG. 2. It shows how the contouring performance (as measured by the Dice score) generally improves for all structures tested during training (FIG. 2a) if trained on a training set that includes various anatomical structures simultaneously. In this embodiment of the invention, FIG. 2a shows results for testing the model using the oesophagus, heart, left and right lungs and the spinal cord. For a multi-organ model, the Dice score for heart segmentation was 0.92, with a mean Dice score of 0.76 for the other OARs.

    [0095] This is in contrast to the alternative single-structure model shown in FIG. 1, where training was done using the heart, and the results for all the other organs (that were not used during the training) had very low Dice scores, with a mean Dice score of 0.025 for the other 4 OARs. As is clear from this figure, the single-organ model can only contour the organ it has been trained on, but fails to contour organs outside the training set despite being provided context information.

    [0096] In contrast, a model trained on various different organs learns to predict different organs based on the context between the image and the corresponding image contour. This invention has demonstrated that user-provided context (such as a user-generated contour) can be incorporated into deep learning contouring to facilitate semi-automatic segmentation of medical imaging, for a variety of different imaging modalities. An appropriate training set is selected to ensure that the approach generalises to use prior context rather than learning organ-specific segmentation. Such an approach may enable faster de-novo contouring in clinical practice. In this invention, the segmentation performance of a single-organ model trained on a heart training set (FIG. 1) is compared to a multi-organ model that has been trained on a set of organs: heart, left/right lung, oesophagus, spinal cord (FIG. 2).

    [0097] A single-organ model that has been trained only on hearts learns to contour exclusively that structure, despite the provided context (FIG. 1b), while a multi-structure model can contour various structures successfully (FIG. 2b). It should be noted that in the training process the single-organ model performs reasonably for the first iteration of training, but then gets worse for all organs but the heart (FIG. 1a). In contrast, the multi-organ model shows improvements in all organs tested throughout training (FIG. 2a). In addition, the heart segmentation performance of the multi-organ model remains similar to the performance of the single-organ model.

    [0098] Therefore, an appropriate training set must be selected to ensure that the approach generalises to use prior context rather than learning organ-specific segmentation. Such an approach may enable faster de-novo contouring in clinical practice.

    [0099] The models to be discussed below will be referred to as either single-structure or multi-structure models, based on the training set provided. Preferably they will be machine learning models. A single-structure model is a model trained using a single structure (for example hearts or right lungs), while in multi-structure models, a diverse set of different structures is used during training (e.g. hearts and lungs and spinal cord and tumours). In a preferred embodiment of the invention, the machine learning system distinguishes which structure to segment by the contextual information of the user-generated contour data associated with the previous image slice. Labelling refers to the labels assigned to different structures in the data set. Generally, different structures can be distinguished by their associated label. Multi-structure training is based on a training set composed of different structures, but all having the same label. This allows for generalized learning of contextual information rather than structure-specific features, because a diverse set of structures is available for training, and these structures are indistinguishable to the model unless the provided context information is learnt.
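    The multi-structure training set described above, in which different structures all share the same label, might be assembled as in the following hypothetical sketch (the function name and data layout are assumptions for illustration):

```python
import numpy as np

def build_multi_structure_set(per_organ_pairs):
    """Pool per-organ (image, mask) pairs under one shared label.

    per_organ_pairs: {organ_name: [(image_slice, labelled_mask), ...]}.
    Every output mask uses label 1 for the structure, so the organs are
    indistinguishable to the model except through the provided context.
    """
    training_set = []
    for organ, pairs in per_organ_pairs.items():
        for image, mask in pairs:
            training_set.append((image, (mask > 0).astype(np.uint8)))
    return training_set

# Example: heart masks labelled 2 and lung masks labelled 3 both
# become label 1 in the pooled training set.
data = {
    "heart": [(np.zeros((4, 4)), np.full((4, 4), 2))],
    "lung": [(np.zeros((4, 4)), np.full((4, 4), 3))],
}
pooled = build_multi_structure_set(data)
```

    Because the pooled masks carry no organ-specific label, a model trained on this set can only learn to segment whatever structure the context pair indicates.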

    [0100] A single-structure model that has been trained on only one structure (FIG. 1) will only learn how to contour the structure on which it has been trained, while a multi-structure model that has been trained using a plurality of different structures will be able to contour various different structures (FIG. 2) in a medical image. Because the multi-structure model learns from context, the multi-structure model will have the ability to segment previously unseen structures that have not been included in the training set. This method could provide an invaluable tool to assist clinicians in segmenting any structures they need to contour without requiring individual models for each structure or organ of interest. This includes structures for which it is challenging to find a suitable training set (for example tumours).

    [0101] We demonstrate this by providing input contours of diverse structures (for example, contours corresponding to the heart, left lung, right lung, spinal cord, or oesophagus) as input to the machine learning model and preferably evaluating the segmentation of the images using the Dice score as a performance metric. If the input contour is the structure that the model has been initially trained on, then the single-structure model segmentation is successful. However, if a structure other than the one used for training the model is provided as input, the single-structure model image segmentation fails, despite the provided context (FIG. 1b). Therefore, this model is unable to contour previously unconsidered structures.

    [0102] For multi-structure training (FIG. 2b), the ML model learns to segment different structures in the image based on the contextual information between the input image slice and the associated input contour, without any prior labelling of the structure. In such multi-structure training the machine learning model generalizes and is able to segment diverse structures based on the input contour provided by the user as a user-generated contour.

    [0103] FIG. 3 shows a flow diagram of an example system for contouring a 3D medical image to delineate structures of interest in the image, comprising a set of 2D medical image slices, as used in an embodiment of this invention. As shown, the system includes contour prediction engine 301. This is a system component that predicts the target contour using the target image slice, input image slice(s) and input contour(s). A manual contouring and editing tool 302 and an image rendering engine 303 are also provided. The image rendering engine is a system component that enables display of the image data and contour data, and the manual contouring and editing tool is a tool to create contours or edit existing contours within the contour set relating to the set of 2D image slices from a 3D medical image. These system components enable the user to create and edit a set of contours 304 from a set of 2D medical image slices 305. In an embodiment of the invention, the prediction engine 301, contouring and editing tool 302 and image rendering engine 303 are provided as part of a computer (not shown) or as part of a computer program package. The computer may also include a storage system for storing a database of 3D and 2D medical images, a display for displaying the images, and external hardware components such as a mouse or other device for manual contouring of the images if needed.

    [0104] As used throughout this description, the structure of interest in an image refers to an anatomical structure the user wants to contour within the 3D medical image. The structure of interest may include organs (such as the heart or lungs, for example), which generally have a known, well-defined structure or anatomy, or tumours, which have a highly variable appearance and size/shape. Unlike most machine learning approaches, the method of this invention includes structures that have not previously been used in the machine learning training of the contour prediction engine. For example, a model that has included the lung, heart, bladder, and lung tumours in its training set needs to generalize and be able to also predict the contours of the spinal cord, given that the provided input contour data 311 represents the spinal cord as the structure of interest.

    [0105] The set of 2D image slices 305 and the set of contours corresponding to those image slices 304, can be displayed to the user via an Image Rendering Engine 303 and any appropriate displaying device 315.

    [0106] A 3D medical image 306 is used to create the set of 2D image slices 305. Typically, in an embodiment of the invention, these will be image slices according to one of the three planes (axial, sagittal, coronal) or according to the image acquisition plane. FIG. 11 (as discussed later) shows these various different image acquisition planes.

    [0107] When a 3D medical image 306 is initially selected/loaded, an initial image contour set 307 is either loaded from an existing set of contours associated with the 3D medical image 306, or an initial image contour set is created by the user for one or more image slices using the manual contouring tools 302. This initial contour set is the set of contours prior to any processing using the contour prediction engine.

    [0108] The initial contour set 307 is then added to the contour set 304.

    [0109] An input target image slice 308 to be contoured is selected from the set of 2D image slices 305. The required contextual information input 309 is selected from the contour set 304 and the set of 2D image slices 305.

    [0110] The contour prediction engine 301 is provided with different inputs which allow the prediction of the contours. As shown in FIG. 3, in an embodiment of the invention there are two inputs to the contour prediction engine 301.

    [0111] The first input is a target image slice 308, which is the image slice to be contoured and/or the image slice for which a contour is predicted, provided from the set of 2D image slices 305, of the 3D medical image 306.

    [0112] The second input to the contour prediction engine 301 is a contextual information input 309. In an embodiment of the invention the contextual information input preferably consists of a set of at least one input image slices 310 and a set of at least one associated input contour 311.

    [0113] The second (contextual information) input 309 could be, for example, one of the possible combinations illustrated in FIG. 4, which shows three examples of the different input variants 401, 402, 403 to the contour prediction engine 301. Other examples may also be possible in alternative embodiments of the invention.

    [0114] The first example input has one single contour of the structure of interest+its associated 2D image slice as illustrated at 401;

    [0115] The second example input has multiple contours of the structure of interest+their associated 2D image slices. In this example input, one of the contours is an empty mask 412, indicating an image slice that does not contain the structure of interest as illustrated at 402;

    [0116] The third example input is 403. This shows multiple contours of the structure of interest+their associated 2D image slices.

    [0117] A target image slice 404 is shown for each of the three different inputs 401, 402, 403. The target image slice 404 is the image slice to be contoured and/or the image slice for which a contour is predicted.

    [0118] All three of the different input variants 401, 402, 403 include the target image slice 404, which contains the structure of interest 405 and can contain one or more other structures 406. These structures may be the heart, lungs, other organs, or other structures such as a tumour, for example. If the target image slice does not contain the structure of interest 405, then no contour is returned by the contour prediction engine 301.

    [0119] The first input variant 401 shows the minimal input required for the contour prediction engine 301. As a minimum, the input to the contour prediction engine 301 requires the target image slice 404 and, as contextual information input, one input image slice 407 and one input contour 408 associated with the image slice 407. The structure of interest 409 on input contour 408 identifies the structure of interest 405 to be segmented on the target slice 404.

    [0120] In an embodiment of the invention, the combination of input image slice and associated input contour referring to the structure of interest is required to provide the necessary contextual information. This combination of input image slice and associated input contour relates the structure described by the input contour to the input image slice and enables the machine learning model to identify what structure to contour on the target image slice. The ML model of the contour prediction engine is thus sensitive to what the user intends to contour rather than having learnt to contour a set of specific organs. The contextual information is important in allowing the contour prediction engine to determine what is to be contoured rather than identifying structures based on image features alone.

    [0121] Input variant 402 requires an input image slice 410 and associated input contour data 411 in addition to the minimum input requirements described above for the first input 401. In this example input, the input contour 411 is empty and does not include any reference 412 relating the structure of interest 405 to the input image slice 410. This can be useful contextual information, as it indicates to the contour prediction engine what the structure of interest does not look like.

    [0122] The third input variant 403, requires an input image slice 413 and associated input contour data 414 in addition to the minimum input requirements of input 401. The input contour 414 identifies the structure of interest 415 that relates to the structure of interest 405 to be segmented on the target image slice 404.

    [0123] Using either of the different input variants 401, 402, 403 or any extension of them, the target contour 416 associated to the target image slice 404 can be predicted by the contour prediction engine 301. The structure of interest 417 in the target contour 416 relates to the structure of interest 405.

    [0124] An extension of the different input variants 401, 402, 403 is possible in some embodiments of the invention, for example, by adding additional input image slice and input contour pairs as inputs to the contour prediction engine 301.
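    One plausible way to realise these input variants is to stack the target slice and each (image slice, contour) context pair as channels of a single model input, substituting an all-zero mask for the empty contour of variant 402. This is an illustrative sketch only; the patent does not specify the tensor layout:

```python
import numpy as np

def assemble_input(target_slice, context_pairs):
    """Stack model inputs as channels: [target, img1, mask1, img2, mask2, ...].

    context_pairs: list of (image_slice, contour_mask) tuples. A contour_mask
    of None is replaced by an all-zero (empty) mask, as in input variant 402,
    indicating a slice that does not contain the structure of interest.
    """
    channels = [target_slice]
    for image, mask in context_pairs:
        if mask is None:
            mask = np.zeros_like(image)
        channels.extend([image, mask])
    return np.stack(channels, axis=0)

# Variant-402-style input: one context pair whose contour is empty.
x = assemble_input(np.ones((8, 8)), [(np.zeros((8, 8)), None)])
```

    Adding further input image slice and contour pairs, as described above, simply appends two more channels per pair.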

    [0125] The at least one input image slice 310 and the at least one associated input contour data 311 provided to the machine learning model can be adjacent image slices to the target image slice, but they may also be separated by any number of slices from the target image slice. Thus, this invention does not have any limitation on the image slices that can be used as input and target image slices. In a preferred embodiment of the invention, contours for sequential image slices will be predicted sequentially, and the predicted contour for the nth image slice will be the input for the (n+1)th target image slice, to predict the contour for that subsequent target image slice.
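    The sequential scheme described above, where the predicted contour for the nth slice seeds the prediction for the (n+1)th, can be sketched as follows (`predict_contour` stands in for the contour prediction engine and is hypothetical):

```python
def propagate_contours(slices, start_idx, stop_idx, initial_contour, predict_contour):
    """Iteratively predict contours from start_idx towards stop_idx.

    The contour predicted for the nth slice is the context input when
    predicting slice n+1. predict_contour(target_slice, context_slice,
    context_contour) stands in for the contour prediction engine.
    """
    contours = {start_idx: initial_contour}
    step = 1 if stop_idx >= start_idx else -1
    prev = start_idx
    for n in range(start_idx + step, stop_idx + step, step):
        contours[n] = predict_contour(slices[n], slices[prev], contours[prev])
        prev = n
    return contours

# Toy engine: the "prediction" just increments the previous contour value,
# which makes the slice-to-slice chaining visible.
toy = lambda target, ctx_img, ctx_mask: ctx_mask + 1
result = propagate_contours([0] * 4, 0, 3, 0, toy)
```

    The same loop works in either direction through the stack, since the step is chosen from the relative position of the stopping slice.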

    [0126] Reverting back now to the flow diagram of FIG. 3, contour prediction engine 301 uses the different inputs 401, 402, 403 as described with reference to FIG. 4 in order to predict the target contour 312 for the target image slice 308 using a machine learning model. The predicted target contour 312 can then be added to the contour set 304, after it has been predicted.

    [0127] A manual contouring or editing tool 302 allows a user to interact with the contour set 304. This may be provided within the overall system, or as a separate add-on program for performing contouring or editing. The user can either manually edit existing contours within a particular contour set 304, or the user may manually create new contours of one or more structures of interest associated to the set of 2D image slices 305. The one or more new contours created or edited with the manual contouring tool are added to the contour set 304.

    [0128] Thus, as described above, and illustrated in FIG. 3, contour prediction engine 301 requires the following:

    1) a target image slice 308, which is the image slice to be contoured/the image slice for which a contour is predicted; and
    2) contextual information input 309 that consists of a set of at least one input image slices 310 and a set of at least one associated input contour 311.

    [0129] Using these two separate inputs to the contour prediction engine 301, the machine learning model of the contour prediction engine 301 estimates the target contour 312, on the target image slice 308, of the structure that has been contoured on the input slice 310.

    [0130] Different sets of inputs can be at different slice distances. For example, image slice 606 and input contour 605 can be 3 slices from the target image slice 602, while the input image slice 608 can be 5 slices from the target image slice, in either direction within the 3D image stack.

    [0131] FIG. 5 is a flow diagram showing the interactive contouring application workflow for this invention. As shown, the figure details different elements of the contour propagation.

    [0132] Step 501 shows the start of the workflow for the interactive contouring. This is followed by the user loading a patient 3D image at 502. After this, the user selects an initial 2D image slice that they want to contour 503. In some alternative embodiments of the invention, the user may also select multiple image slices that they want to predict contours for.

    [0133] The user either manually draws a contour on the selected 2D image slice identifying the structure of interest 504, or the user can load a contour of the structure of interest. The user can accept or edit the loaded contour.

    [0134] The user chooses a target image slice (to serve as a stopping image slice) 505. The target image slice will be an image slice in the image set 305 that is n slices away from the initial image slice. All image slices between the initial image slice and the target image slice will be contoured, and the target image slice itself will be contoured; the contouring is then stopped, and no further image slices of the set of 2D image slices 305 are contoured.
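    The set of slices to be contoured between the initial slice and the stopping slice can be enumerated as in the following sketch (a hypothetical helper, not the disclosed implementation), which works in either direction through the stack:

```python
# Hypothetical helper: indices of every slice from just after the initial
# slice up to and including the target (stopping) slice, in propagation
# order; handles targets above or below the initial slice.
def slices_to_contour(initial_idx, target_idx):
    step = 1 if target_idx >= initial_idx else -1
    return list(range(initial_idx + step, target_idx + step, step))

forward = slices_to_contour(2, 5)    # propagate upwards through the stack
backward = slices_to_contour(5, 2)   # propagate downwards
```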

    [0135] In an embodiment of the invention, to predict the target contour with the contour prediction engine 301, two different contour propagation methods are possible (the method that will be used is typically selected by the user or pre-set in the system configuration). These two alternative methods are direct contour propagation 506 and iterative contour propagation 512.

    [0136] The steps for direct contour propagation 506 in an embodiment of the invention are as follows. The user or system (according to system configuration) selects the inputs 507 to contour prediction engine 301, given any of the input variants 401, 402, 403 as previously described in FIG. 4. The input to the contour prediction engine 301 includes the user contour(s) (manually defined, or loaded with or without manual edits) as already defined above in step 504, and also the target image slice previously identified in step 505.

    [0137] The contour prediction engine 301 then predicts the target contour associated with the target image slice at step 508. Following on from this, the user accepts or edits the predicted contour at step 509. Following this the newly generated target contour is added to the contour set at step 510. The process then ends at step 511.
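    Steps 507-511 of the direct propagation pathway can be summarised in code. This sketch assumes a generic `predict` callable standing in for the contour prediction engine 301 and a `review` callback standing in for the user's accept/edit step; both names are illustrative, not the disclosed implementation:

```python
# Hypothetical sketch of direct contour propagation (steps 506-511):
# one prediction call from the user's contoured slice straight to the
# chosen target slice, followed by user review.
def direct_propagation(predict, user_slice, user_contour, target_slice,
                       review=lambda contour: contour):
    # Step 507: inputs are the user contour and the target image slice.
    # Step 508: the engine predicts the target contour in a single step.
    predicted = predict(target_slice, [(user_slice, user_contour)])
    # Step 509: the user accepts or edits the predicted contour.
    return review(predicted)

# Toy stand-in engine that just names the contour after the target slice.
toy_engine = lambda target, context: f"contour_for_{target}"
result = direct_propagation(toy_engine, "slice0", "contour0", "slice5")
```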

    [0138] The alternative contour propagation pathway is iterative contour propagation at step 512. In this alternative pathway, the system identifies an intermediate target image slice adjacent to the initial slice at step 513. The intermediate target image slices are all slices that are spatially between the initial user-identified image slice and the target image slice. Following the identification of the intermediate image slice, the system (according to system configuration) selects inputs to the contour prediction engine at step 514.

    [0139] If this step is the first iteration of the contour propagation, then the user input(s) is(are) used for the contextual information input, and the intermediate target image slice is used as “the target image slice”. For every following iteration after the first, the prior intermediate target image slice(s) and the associated predicted contour(s) for the image slice(s) are used as the contextual information that is input to the contour prediction engine 301.

    [0140] At step 515 the contour prediction engine generates the target contour for the current adjacent intermediate target image slice. At step 516 the user may or may not edit the current target contour. If the intermediate target image slice is not the target image slice identified by the user at step 517, then steps 513 to 516 are repeated for all necessary iterations until arriving at the target image slice that was previously identified by the user.

    [0141] If the intermediate target image slice is the target image slice identified by the user 518, then the iterative contour propagation steps are completed and the user can accept or edit the predicted contour(s) at step 509. The target contour is then added to contour set 510, and the iterative contour propagation ends at step 511.
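    The iterative pathway of steps 512-518 can be sketched as a loop in which each predicted contour becomes the contextual input for the next slice. The function below is an illustrative assumption that uses a single (slice, contour) context pair per iteration:

```python
# Hypothetical sketch of iterative contour propagation (steps 512-518):
# walk slice by slice from the initial slice to the target slice, feeding
# each newly predicted contour back in as context for the next iteration.
def iterative_propagation(predict, slices, start_idx, start_contour, target_idx):
    step = 1 if target_idx > start_idx else -1
    ctx_slice, ctx_contour = slices[start_idx], start_contour
    contours = {}
    for idx in range(start_idx + step, target_idx + step, step):
        # Steps 513-515: predict the contour of the next intermediate slice.
        contour = predict(slices[idx], [(ctx_slice, ctx_contour)])
        contours[idx] = contour
        # Later iterations: the prior prediction becomes the new context.
        ctx_slice, ctx_contour = slices[idx], contour
    return contours

# Toy stand-in engine that records the chain of context it was given.
toy_engine = lambda target, context: f"{context[0][1]}->{target}"
out = iterative_propagation(toy_engine, ["s0", "s1", "s2", "s3"], 0, "c0", 2)
```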

    [0142] When the end is reached at step 511, the user can select a new target image slice and repeat the workflow if required to generate contours for the new target image slice, using either of the two propagation methods described above. In an embodiment of the invention, all the contours are generated using the same contour propagation method. However, in an alternative embodiment of the invention, subsequent contours may be generated using either of the contour propagation methods.

    [0143] The system may have a configuration setting such that the workflow is executed until all 2D image slices have a corresponding contour in the 3D image. The workflow can be managed automatically or manually according to user preference.

    [0144] FIG. 6 illustrates the iterative propagation approach for prediction of target contours as described with reference to FIG. 5 above. The figure shows the initial state (FIG. 6(a)), the contour prediction with iterative propagation (FIG. 6(b)), and the final 3D image with contours (FIG. 6(c)).

    [0145] An exemplary starting state in the contouring application is shown in FIG. 6a. The set of 2D image slices from a 3D medical image consists of 3 image slices 601, 602, 603. The user selected initial image slice 601 has an associated contour 604 describing a structure of interest. The user target image slice 603 is the image slice the user wants to contour and has no associated contour (illustrated by 606). Spatially in-between the user initial image slice 601 and the user target image slice 603 is one intermediate target image slice 602. The intermediate target image slice 602 has no associated contour (illustrated by 605).

    [0146] The iterative contour propagation based on the starting state is illustrated in FIG. 6b.

    [0147] First, the contour 605 for the intermediate target image slice 602 needs to be predicted. Therefore, the intermediate target image slice 602, together with contextual information input consisting of the user initial image slice 601 and associated contour data 604, is processed by the contour prediction engine 607 to predict the intermediate target contour 608.

    [0151] Second, the contour 606 for the user target slice 603 also has to be predicted. Therefore, the user target image slice 603, together with contextual information input consisting of the intermediate target image slice 602 and associated intermediate target contour data 608, is processed by the contour prediction engine 607 to predict the user target contour 609.

    [0155] This iterative contour prediction process may be applied for multiple intermediate target image slices.

    [0156] The final state after iterative contour propagation is shown in FIG. 6c. The image slices 601, 602, 603 and corresponding contours 604, 608, 609, respectively, can be displayed. As shown, the images are CT images, but the methodology of this method, as shown in this figure, can be applied to a range of different imaging modalities.

    [0157] The intermediate target contours 608, as shown in FIG. 6(b), may be discarded or kept depending on system configuration, as they may be useful for subsequent processing or applications.

    [0158] FIG. 7 illustrates the alternative direct propagation approach for the prediction of target contours corresponding to steps 506-509 of the workflow of FIG. 5. The figure shows the initial state (FIG. 7(a)), the contour prediction with the initial user image slice (FIG. 7(b)), and the final 3D image with contours (FIG. 7(c)).

    [0159] An exemplary starting state in the contouring application is shown in FIG. 7a. The set of 2D image slices from a 3D medical image consists of 3 image slices 701, 702, 703. The user selected initial image slice 701 has associated contour data 704 describing a structure of interest. The user intends to contour either the image slices 702 and 703, or only one of the two. As shown, image slice 702 and image slice 703 have no associated contours (illustrated by 705 and 706, respectively).

    [0160] The direct contour propagation based on the starting state is illustrated in FIG. 7b.

    [0161] If the user selects image slice 702 as the target image slice of the contour prediction engine, the contour 705 on the target image slice 702 is the contour that has to be predicted. Therefore, the target image slice 702, together with contextual information input consisting of the user initial image slice 701 and associated contour data 704, is processed by the contour prediction engine 707 to predict the target contour 708.

    [0166] If the user selects image slice 703 as the target image slice of the contour prediction engine, the contour 706 on the target image slice 703 is to be predicted. Therefore, the target image slice 703, together with contextual information input consisting of the user initial image slice 701 and associated contour data 704, is processed by the contour prediction engine 707 to predict the target contour 709.
    The final state after propagation is shown in FIG. 7c. The image slices 701, 702, 703 and corresponding contours 704, 708, 709, respectively, can be displayed.

    [0170] FIG. 8 illustrates the process of training a machine learning model for contouring a medical image. The training process starts at step 801 and progresses to step 802. At step 802, one or more example images are loaded from a database 803 of images with previously contoured structures. In the preferred embodiment of the invention, the images with the associated contours from the database are then processed at step 804 with the chosen architecture of the machine learning model, with its current parameter settings (these may have been initialized randomly, or from a machine learning model previously trained on other data). The output of the current ML model is then compared at step 805 to the known “correct” contours associated with the input images, loaded from database 803. An update to the ML model parameters is calculated at step 806 so as to reduce the error between the contours predicted by the current ML model and the known correct contours. The process can be iterated over the whole database of images 803 in batches loaded at 802, progressing from 807 back to 802 until all training images have been considered. Once all images, or a chosen size subset of images, have been considered, the ML model parameter updates calculated at 806 at each iteration are combined and the ML model is updated at 808. The process from step 802 to step 808 is repeated for a set number of iterations, or until it is determined at 809 that the ML model parameters have converged. Once this iteration process is complete, the training terminates at step 810 with a tuned set of ML model parameters.
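    The loop of FIG. 8 can be illustrated with a deliberately simplified stand-in model (a single scalar parameter fitted by gradient descent). The loss, learning rate, and convergence threshold here are assumptions for demonstration only, not the disclosed training procedure:

```python
# Simplified stand-in for the FIG. 8 training loop: the "model" is one
# scalar weight, the "contour error" is a squared-error gradient, and the
# step numbers in the comments map to the workflow described above.
def train(examples, param=0.0, lr=0.1, epochs=40, tol=1e-6):
    for _ in range(epochs):
        total_update = 0.0
        for x, y in examples:                  # steps 802/807: batch loop
            pred = param * x                   # step 804: run current model
            error = pred - y                   # step 805: compare to ground truth
            total_update += -lr * error * x    # step 806: parameter update
        param += total_update / len(examples)  # step 808: apply combined update
        if abs(total_update) < tol:            # step 809: convergence check
            break
    return param                               # step 810: tuned parameters

# Fit y = 2x from two examples; the learned parameter approaches 2.
w = train([(1.0, 2.0), (2.0, 4.0)])
```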

    [0171] The disclosed method integrates a machine learning model to predict the target contours from given input image slices and associated input contours. Convolutional neural networks have shown great success in image segmentation tasks, but different machine learning models, or a combination of these, such as for example random forest models or decision trees, can also be adapted for the interactive contouring method of this invention. Recurrent neural networks can be particularly useful to further take advantage of contextual features when propagating the contour in the entire 3D medical image. The contour prediction engine of the disclosed invention preferably uses machine learning, and in preferred embodiments of the invention may be any combination of the below machine learning techniques:

    [0172] Deep neural network including Convolutional Neural network

    [0173] Random forest

    [0174] Recurrent neural networks

    [0175] For the disclosed method of the subject invention, it is important to construct a model that can contour a user-defined structure in a 3D medical image based on contextual information, including information from those unidentified structures of interest in the medical image that have not been represented within the training set. In a preferred embodiment of the invention, one or more of the target image slice, the input of one or more image slices and the input contours can provide contextual information to identify a relevant location for a contour on the target slice. In some embodiments of the invention, the contextual information may comprise one or more items of information on image features, or spatial relations between image data and/or contour data, that may be learnt from the training data.

    [0176] In some embodiments of the invention, the contextual information is information relating to one or more features shared between image slices in the set of 2D image slices. In some cases, the image slices are consecutive image slices in the set of 2D images.

    [0177] Generalization of the model can be achieved, for example, by an adequate choice of the training set and using the training of the model as disclosed for this invention. Instead of training on a single anatomical structure (e.g. only heart), defined by a single contour label, as is widely done for optimization in segmentation tasks in medical images, training should be performed simultaneously on multiple structures (e.g. heart and lung and spinal cord), but with a shared single contour label.

    [0178] The generation of training data in the multi-structure approach is shown in FIG. 9.

    [0179] The training set typically consists of multiple previously contoured images, each of which can consist of a different number of structures. An image that is associated with n different structures may be copied and used n times. However, each time that the image is used, it will only be used with one structure at a time, with an identical label.

    [0180] Previous approaches have associated a different label with each structure in the image, thus training multi-label segmentation approaches (i.e. the model predicts more than one foreground label at once, for example a heart label and a lung label). However, when training with multiple structures to allow contextual information to be learnt, it is important that all structures are labelled the same, and as such are indistinguishably labelled, such that a generalizable model can be learnt. The spatial contextual information provided with the image slice to be segmented identifies what structure to contour in the image.

    [0181] For all contoured images in the training set, each input image slice 901 is paired with its associated contour data 902. The contour data 902 may contain contours of multiple different structures 903. For each of the multiple different structures shown on contour data 902, a separate image-contour pair (904, 905, 906) is added to the training set. Each structure of the multiple different structures 903 will be labelled with the same label 907, in FIG. 9, these are shown with the label “1” on the pictures on the righthand side of the figure. The initial 3D image 901 remains unchanged in the process.
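    The pairing step of FIG. 9 can be sketched as follows. The data layout (contour data as a mapping from structure name to label) and the helper name are illustrative assumptions, not the disclosed format:

```python
# Hypothetical sketch of the FIG. 9 training-data generation: an image
# with n contoured structures yields n image/contour pairs, each keeping
# a single structure relabelled with the same shared label "1".
SHARED_LABEL = 1

def expand_training_pairs(image, contour_data):
    pairs = []
    for structure in contour_data:
        # One pair per structure; every structure gets the identical label,
        # so structures are indistinguishable to the model (as in FIG. 9).
        pairs.append((image, {structure: SHARED_LABEL}))
    return pairs

# An image contoured with three structures yields three training pairs.
pairs = expand_training_pairs("image901", {"heart": 2, "lung": 3, "cord": 4})
```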

    [0182] In the various embodiments of the invention described above, the method has been applied to CT images. However, to further increase generalizability of the model, the training set can be extended to include different medical imaging modalities, including but not limited to CT, MRI, ultrasound, CBCT. In one embodiment of the invention the training of the machine learning is done based on a single imaging modality. In other embodiments of the invention, the training of the machine learning model is performed on multiple imaging modalities.

    [0183] We note that machine learning methods that employ multiple structures are broadly applied. These refer to simultaneously learning to classify different structures, or predicting segmentations of different structures simultaneously, where a different label is applied to each structure to identify that structure. In contrast to the disclosed invention, these methods rely on a training set with distinctly labelled structures, while in the disclosed method the label information is deliberately neglected.

    [0184] FIG. 10 shows schematic examples of structures in the training set and examples of structures to be contoured with the disclosed invention. The figure illustrates the contouring of structures unseen by the ML model during training.

    [0185] FIG. 10a shows a schematic example of the data set used for training the ML model. The training set consists of 3D medical images 1001 and the associated contour data 1002. All the 3D medical images in the training set show three distinct structures of interest:

    [0186] a triangle 1013, representing for example a lung tumour

    [0187] a circle 1014, representing for example a heart structure.

    [0188] a rectangle 1015, representing for example the liver.

    [0189] The associated contours in the training set include contours for all three distinct structures of interest:

    [0190] a triangle 1003, labelled 1, representing the lung tumour contour.

    [0191] a circle 1004, labelled 2, representing the heart contour.

    a rectangle 1005, labelled 3, representing the liver contour.

    [0192] This is merely an example of an embodiment of the invention, and in other cases, other structures may be contoured on the images, and the image may include more than three contoured structures of interest.

    [0193] All the structures will be labelled with label “1” as described by FIG. 9 in the training process.

    [0194] FIG. 10b shows three schematic examples of medical images with structures/organs to be contoured, including an example of a structure unseen by the ML model during training.

    [0195] The first example shows a medical image, 1010, having all the 3 organs, lung tumour 1013, heart 1014, and liver 1015 (as described above) as the typical image data used in training the ML model (as shown in FIG. 10a). The contour prediction engine 301 previously described can be used to segment any of the three organs (1013, 1014, 1015) for example 1014, predicting the contour 1007 for the particular selected organ or structure of interest.

    [0196] The second example shows a medical image, 1011, having only one of the organs used in the training of the ML model (1014). In this case the image shows the heart, but may show another single organ instead. The contour prediction engine 301 can be used to segment the structure of interest 1014, predicting the contour 1008 for the structure of interest.

    [0197] The third example shows a medical image, 1012, having one of the organs used in the training of the ML model (1013), in this case representing the lung tumour, and a new organ, shown by star 1016, that was not previously indicated on the images of the training set and that also represents a structure or organ of interest to the user. The new organ 1016, representing for example a kidney (or any other new structure), is unseen by the ML model because it was not present in the training set, as detailed in FIG. 10a. The contour prediction engine trained as detailed in the disclosed invention can be used to segment the unseen structure of interest 1016, predicting the contour 1009 for this new, previously unseen organ. This contour prediction works even though the structure 1016 was not in the training set.

    [0198] Cross-sectional 2D images of a 3D medical image are typically displayed in the axial, sagittal and coronal planes. FIG. 11 illustrates these three orthogonal planes (coronal plane 1102, axial plane 1103, and sagittal plane 1104) with exemplary CT cross-section images 1105, 1106, 1107. The orientations of the planes relative to the human body are shown in reference to a humanoid 3D-icon 1101.

    [0199] The coronal plane 1102 divides the body into front and back and corresponds to the CT cross-section 1105. The axial plane 1103 is parallel to the ground and divides the body into top and bottom parts. It corresponds to the CT cross-section 1106. The sagittal plane 1104 divides the body into left and right and corresponds to the CT cross-section 1107.
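    The three cross-sections can be illustrated with a small sketch that slices a volume stored as a nested list indexed (axial, coronal, sagittal). The axis ordering is an assumption for illustration and varies between image formats:

```python
# Hypothetical sketch: extract the axial, coronal and sagittal 2D
# cross-sections through voxel (i, j, k) of a volume stored as a nested
# list indexed volume[axial][coronal][sagittal].
def orthogonal_slices(volume, i, j, k):
    axial = volume[i]                                            # top/bottom split
    coronal = [plane[j] for plane in volume]                     # front/back split
    sagittal = [[row[k] for row in plane] for plane in volume]   # left/right split
    return axial, coronal, sagittal

# A 4 x 5 x 6 volume of zeros yields 5x6, 4x6 and 4x5 cross-sections.
vol = [[[0] * 6 for _ in range(5)] for _ in range(4)]
ax, co, sa = orthogonal_slices(vol, 1, 2, 3)
```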

    [0200] The performance of the model at different stages of training is illustrated in FIG. 12 (single-structure training, done using the heart as the single structure) and FIG. 13 (multi-structure). These figures show the original scan and the contours that result after training of the model for 1 or 40 iterations. The figures show examples of the axial and sagittal planes of a 3D medical image with or without contours. The skilled person in the art would understand that FIGS. 12 and 13 are the visual outputs (illustrations) that correspond to the models as shown in FIGS. 1 and 2 respectively.

    [0201] FIGS. 12(a) and 12(d) show the original CT scan images; FIGS. 12(b), (c) further show the GT contour for the heart and the predicted contours for a model trained for 1 and 40 iterations respectively. FIGS. 12(e) and 12(f) also show the GT contour for the spinal cord and the predicted contours after 1 and 40 iterations respectively. As shown in FIG. 12(b), the GT contour for the heart has been correctly located. That is, the heart has been correctly contoured on the image. However, the predicted contour for the heart, after one iteration of training, is not quite in alignment with the GT contour, and covers a small area of the heart relative to the GT contour as shown in the axial image, and also appears to be displaced upwards in the sagittal plane, relative to the GT contour. In FIG. 12(c), it can be seen that the predicted contour obtained with a model that was trained for 40 iterations is a good match for the GT contour in both the axial and the sagittal plane. It appears that the single-organ model, trained on the heart, is successful in predicting contours of the heart.

    [0202] FIGS. 12(e) and (f) show the result for predicting the contour of the spinal cord, compared to the GT contour of the spinal cord, with a model trained with hearts for 1 and 40 iterations respectively. This prediction is much less successful, and by 40 iterations of the model training, the prediction is contouring the heart (FIG. 12(f)), which is the organ on which the model was originally trained, and not the spinal cord, which is the organ that needed to be contoured and for which user input was provided as such. This clearly shows the failure of the single-organ model.

    [0203] FIG. 13 shows corresponding images to FIG. 12, but which have been obtained using a model trained with multiple structures, including heart, left lung, right lung, spinal cord and oesophagus. As for FIG. 12, images 13(a) and 13(d) show the original CT scan images. FIGS. 13(b), (c) show further the GT contour for the heart and the predicted contours for a model trained for 1 and 40 iterations respectively. FIGS. 13(e) and 13(f) show also the GT contour for the spinal cord and the predicted contours after 1 and 40 iterations respectively. As shown, in FIG. 13(b), the GT contour for the heart has been correctly located. That is, the heart has been correctly contoured on the image. However, the predicted contour for the heart, after one iteration is not quite in alignment with the GT contour, and covers a small area of the heart relative to the GT contour as shown in the axial image, and also appears to be displaced upwards in the sagittal plane, relative to the GT contour (as in FIG. 12 (b)). In FIG. 13(c), obtained with a model trained for 40 iterations, it can be seen that the predicted contour is a good match for the GT contour in both the axial and the sagittal plane. It appears that the multi-structure model (like the single-structure model) is successful in predicting contours of the heart.

    [0204] FIGS. 13(e) and (f) show the result of the model for predicting the contour of the spinal cord, compared to the GT contour of the spinal cord, with a multi-structure model trained for 1 and 40 iterations respectively. When compared to FIGS. 12(e) and (f), these predictions have been much more successful. Whilst the results after 1 iteration look very similar to the results after 1 iteration of training of the single-structure model, after 40 iterations of the multi-structure model it is clear that the predicted contour aligns closely with the GT contour in both the axial and the sagittal planes.

    [0205] These images clearly show that the single-structure model trained on hearts does not learn to contour based on the provided context and actually learns to predict heart contours exclusively (FIG. 12(f)). In contrast, the multi-structure model accurately predicts the spinal cord at iteration 40 (FIG. 13f) and thus makes good use of the provided contextual information.

    [0206] Data augmentation approaches, as already known to those skilled in the art, can also be applied during training to assist in generalization of the machine learning model.

    [0207] A system and a method have been described that enable contextual information that is provided through input contours on medical images to be used for improving contouring performance and interactivity. The contour prediction engine allows efficient segmentation of structures of interest within a 3D medical image, regardless of whether examples of the structure of interest were provided in the training set, by use of a generalized machine learning model, that accounts for contextual information provided by the user.

    [0208] Examples of this invention may be applied to any or all of the following: Picture archiving and communication systems (PACS); Advanced visualisation workstations; Imaging Acquisition Workstations; Web-based or cloud-based medical information and image systems; Radiotherapy Treatment planning system (TPS); Radiotherapy linear accelerator consoles; Radiotherapy proton beam console.

    [0209] The present invention has been described with reference to the accompanying drawings. However, it will be appreciated that the present invention is not limited to the specific examples herein described and as illustrated in the accompanying drawings. Furthermore, because the illustrated embodiments of the present invention may for the most part, be implemented using electronic components and circuits known to those skilled in the art, details will not be explained in any greater extent than that considered necessary as illustrated above, for the understanding and appreciation of the underlying concepts of the present invention and in order not to obfuscate or distract from the teachings of the present invention.

    [0210] The invention may be implemented in a computer program for running on a computer system, at least including code portions for performing steps of a method according to the invention when run on a programmable apparatus, such as a computer system, or enabling a programmable apparatus to perform functions of a device or system according to the invention.

    [0211] A computer program is a list of instructions such as a particular application program and/or an operating system. The computer program may for instance include one or more of: a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system. Therefore, some examples describe a non-transitory computer program product having executable program code stored therein for automated contouring of cone-beam CT images.

    [0212] The computer program may be stored internally on a tangible and non-transitory computer readable storage medium or transmitted to the computer system via a computer readable transmission medium. All or some of the computer program may be provided on computer readable media permanently, removably or remotely coupled to an information processing system. The tangible and non-transitory computer readable media may include, for example and without limitation, any number of the following: magnetic storage media including disk and tape storage media; optical storage media such as compact disk media (e.g., CD-ROM, CD-R, etc.) and digital video disk storage media; non-volatile memory storage media including semiconductor-based memory units such as FLASH memory, EEPROM, EPROM, ROM; ferromagnetic digital memories; MRAM; volatile storage media including registers, buffers or caches, main memory, RAM, etc.

    [0213] A computer process typically includes an executing (running) program or portion of a program, current program values and state information, and the resources used by the operating system to manage the execution of the process. An operating system (OS) is the software that manages the sharing of the resources of a computer and provides programmers with an interface used to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system.

    [0214] The computer system may for instance include at least one processing unit, associated memory and a number of input/output (I/O) devices. When executing the computer program, the computer system processes information according to the computer program and produces resultant output information via I/O devices.

    [0215] In the foregoing specification, the invention has been described with reference to specific examples of embodiments of the invention. It will, however, be evident that various modifications and changes may be made therein without departing from the scope of the invention as set forth in the appended claims and that the claims are not limited to the specific examples described above.

    [0216] Those skilled in the art will recognize that the boundaries between logic blocks are merely illustrative and that alternative embodiments may merge logic blocks or circuit elements or impose an alternate decomposition of functionality upon various logic blocks or circuit elements. Thus, it is to be understood that the architectures depicted herein are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality.

    [0217] Any arrangement of components to achieve the same functionality is effectively ‘associated’ such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as ‘associated with’ each other such that the desired functionality is achieved, irrespective of architectures or intermediary components. Likewise, any two components so associated can also be viewed as being ‘operably connected,’ or ‘operably coupled,’ to each other to achieve the desired functionality.

    [0218] Furthermore, those skilled in the art will recognize that the boundaries between the above-described operations are merely illustrative. The multiple operations may be combined into a single operation, a single operation may be distributed in additional operations and operations may be executed at least partially overlapping in time. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments.

    [0219] However, other modifications, variations and alternatives are also possible. The specifications and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.

    [0220] In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word ‘comprising’ does not exclude the presence of other elements or steps than those listed in a claim. Furthermore, the terms ‘a’ or ‘an,’ as used herein, are defined as one or more than one. Also, the use of introductory phrases such as ‘at least one’ and ‘one or more’ in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles ‘a’ or ‘an’ limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases ‘one or more’ or ‘at least one’ and indefinite articles such as ‘a’ or ‘an.’ The same holds true for the use of definite articles. Unless stated otherwise, terms such as ‘first’ and ‘second’ are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage.

    REFERENCES

    [0221] [R1] Ardon, R.; Cohen, L. D. “Fast constrained surface extraction by minimal paths”, International Journal of Computer Vision, Springer, vol. 69, pp. 127-136, 2006.

    [0222] [R2] Bai, X.; Sapiro, G. “Geodesic Matting: A Framework for Fast Interactive Image and Video Segmentation and Matting”, International Journal of Computer Vision, vol. 82, no. 2, p. 113, 2009.

    [0223] [R3] Boykov, Y.; Jolly, M. P. “Interactive Organ Segmentation Using Graph Cuts”, MICCAI 2000. Lecture Notes in Computer Science, vol. 1935. Springer, Berlin, Heidelberg, 2000.

    [0224] [R4] China, D. et al. “Anatomical Structure Segmentation in Ultrasound Volumes Using Cross Frame Belief Propagating Iterative Random Walks”. IEEE J Biomed Health Inform, 2019.

    [0225] [R5] Criminisi, A.; Sharp, T.; Blake, A. “GeoS: Geodesic Image Segmentation.” In Computer Vision, ECCV 2008. Springer, 2008.

    [0226] [R6] Gooding, M. et al. “PV-0531: Multi-centre evaluation of atlas-based and deep learning contouring using a modified Turing Test.” Radiotherapy and Oncology, 127:S282-3, 2018.

    [0227] [R7] Léger, J.; Brion, E. et al. “Contour Propagation in CT Scans with Convolutional Neural Networks”, ACIVS 2018. Lecture Notes in Computer Science, vol. 11182. Springer, Cham, 2018.

    [0228] [R8] Lin, D. et al. “ScribbleSup: Scribble-Supervised Convolutional Networks for Semantic Segmentation.” arXiv:1604.05144 [cs.CV], 2016.

    [0229] [R9] Lustberg, T. et al. “Clinical evaluation of atlas and deep learning based automatic contouring for lung cancer”, Radiotherapy and Oncology, 126(2):312-7, 2018.

    [0230] [R10] Novikov, A. A.; Major, D.; Wimmer, M. et al. “Deep sequential segmentation of organs in volumetric medical scans”. IEEE Trans Med Imaging, 2018. doi:10.1109/TMI.2018.2881678, pmid:30452352.

    [0231] [R11] Oh, S. W. et al. “Fast User-Guided Video Object Segmentation by Interaction-and-Propagation Networks”. The 2018 DAVIS Challenge on Video Object Segmentation. arXiv:1904.09791 [cs.CV], 2019a.

    [0232] [R12] Oh, S. W. et al. “A Unified Model for Semi-supervised and Interactive Video Object Segmentation using Space-time Memory Networks.” The 2019 DAVIS Challenge on Video Object Segmentation, CVPR Workshops, 2019b.

    [0233] [R13] Olabarriaga, S. D. et al. “Interaction in the segmentation of medical images: A survey,” Medical Image Analysis, vol. 5, pp. 127-142, 2001.

    [0234] [R14] Ramkumar, A. et al. “User Interaction in Semi-Automatic Segmentation of Organs at Risk: a Case Study in Radiotherapy.” Journal of Digital Imaging, vol. 29, pp. 264-277, 2016.

    [0235] [R15] Ronneberger, O. et al. “U-Net: Convolutional Networks for Biomedical Image Segmentation”. MICCAI 2015.

    [0236] [R16] Sakinis, T. et al. “Interactive segmentation of medical images through fully convolutional neural networks”. arXiv:1903.08205 [cs.CV], 2019.

    [0237] [R17] Schipaanboord, B. et al. “Can atlas-based auto-segmentation ever be perfect? Insights from Extreme Value Theory.” IEEE Transactions on Medical Imaging, 2018.

    [0238] [R18] Sharp, G. et al. “Vision 20/20: Perspectives on automated image segmentation for radiotherapy.” Medical Physics, 41, 5, 2014.

    [0239] [R19] Wang, G. et al. “DeepIGeoS: A Deep Interactive Geodesic Framework for Medical Image Segmentation”. arXiv:1707.00652 [cs.CV], 2017.

    [0240] [R20] Wang, G. et al. “Interactive Medical Image Segmentation Using Deep Learning With Image-Specific Fine Tuning,” IEEE Transactions on Medical Imaging, vol. 37, no. 7, pp. 1562-1573, July 2018. doi:10.1109/TMI.2018.2791721.

    [0241] [R21] Zheng, Q. et al. “3D Consistent & Robust Segmentation of Cardiac Images by Deep Learning with Spatial Propagation”. IEEE Transactions on Medical Imaging, Institute of Electrical and Electronics Engineers, 2018.