IMAGE ANALYSIS METHOD SUPPORTING ILLNESS DEVELOPMENT PREDICTION FOR A NEOPLASM IN A HUMAN OR ANIMAL BODY

20170236283 · 2017-08-17


    Abstract

    The present invention relates to an image analysis method for providing information for supporting illness development prediction regarding a neoplasm in a human or animal body. The method includes receiving for the neoplasm first and second image data at a first and a second moment in time, and deriving for a plurality of image features a first and a second image feature parameter value from the first and second image data. These feature parameter values are a quantitative representation of a respective image feature. The method further includes calculating an image feature difference value by calculating a difference between the first and second image feature parameter value, and, based on a prediction model, deriving a predictive value associated with the neoplasm for supporting treatment thereof. The prediction model includes a plurality of multiplier values associated with image features. For calculating the predictive value, the method includes multiplying each image feature difference value with its associated multiplier value and combining the multiplied image feature difference values.

    Claims

    1. An image analysis method for providing information for supporting illness development prediction regarding a neoplasm in a human or animal body, the method comprising: receiving, by a processing unit, image data of the neoplasm, wherein said receiving comprises: receiving first image data of the neoplasm at a first moment in time, and receiving second image data of the neoplasm at a second moment in time; deriving, by the processing unit, for each of a plurality of image features associated with the neoplasm: a first image feature parameter value from the first image data, and a second image feature parameter value from the second image data, said first image feature parameter value and second image feature parameter value being a quantitative representation of said respective image feature at said first and second moment in time respectively; and calculating, by the processing unit, for each one of the plurality of image features, an image feature difference value by calculating a difference between the first image feature parameter value and the second image feature parameter value for said one of the plurality of image features; wherein the method further includes: deriving, by said processing unit using a prediction model, a predictive value associated with the neoplasm for supporting treatment thereof, wherein said prediction model includes a plurality of multiplier values, each one of the plurality of multiplier values being associated with an image feature for multiplying an associated image feature difference value therewith, and wherein calculating, during the deriving, the predictive value associated with the neoplasm includes: multiplying each image feature difference value with its associated multiplier value, and combining the multiplied image feature difference values to obtain the predictive value associated with the neoplasm.

    2. The image analysis method according to claim 1, wherein the first and second image feature parameter values are quantitative values obtained from said first and second image data and associated with image feature parameters, the image feature parameters being one or more elements of the group consisting of: first-order gray level statistics obtained from image pixels or areas of the image from the image data; second-order gray level statistics obtained from co-occurrence matrices of the image data; shape and size based features; and intensity volume histogram based features of positron emission tomography images.

    3. The image analysis method according to claim 1, wherein calculating an image feature difference value comprises calculating an image feature difference percentage.

    4. The image analysis method according to claim 1, wherein the multiplier values include one or more weighting factors associated with the image features, wherein the weighting factors indicate an importance of the respective image features for obtaining the predictive value.

    5. The image analysis method according to claim 1, wherein the multiplier values include one or more selector values associated with the image features, and wherein the selector values indicate whether the associated image features are to be included for obtaining the predictive value.

    6. The image analysis method according to claim 1, wherein the image data is obtained using at least one of the group consisting of: a computed tomography imaging method, a positron emission tomography method, a magnetic resonance imaging method, a single photon emission computed tomography imaging method, a planar gamma imaging method, an ultrasonography imaging method, a thermography imaging method, and a photo-acoustic imaging method.

    7. The image analysis method according to claim 1, wherein the image data is obtained using a computed tomography imaging method, and wherein the first and second image feature parameter values are quantitative values obtained from said first and second image data and associated with image feature parameters, the image feature parameters including at least: a high-high-low filtered wavelet transform of the parameter kurtosis from the class first order statistics; and a high-low-low filtered wavelet transform of the parameter homogeneity second type from the class second order gray-level statistics.

    8. The image analysis method according to claim 7, wherein the multiplier values are represented by hazard ratios associated with the image feature parameters, the prediction model including: for the high-high-low filtered wavelet transform of the parameter kurtosis a hazard ratio—HR—between 0.980 and 0.995; and for the high-low-low filtered wavelet transform of the parameter homogeneity second type a hazard ratio—HR—between 1.005 and 1.020.

    9. The image analysis method according to claim 1, wherein the image data is obtained using a computed tomography imaging method, and wherein the first and second image feature parameter values are quantitative values obtained from said first and second image data and associated with image feature parameters, the image feature parameters including at least: the parameter run-length non-uniformity from the class run-length gray-level statistics; the parameter uniformity from the class first order statistics; and a high-low-high filtered wavelet transform of the parameter normalized inverse difference moment from the class second order gray-level statistics.

    10. The image analysis method according to claim 9, wherein the multiplier values are represented by hazard ratios associated with the image feature parameters, the prediction model including: for the parameter run-length non-uniformity a hazard ratio—HR—between 1.01 and 1.10; for the parameter uniformity a hazard ratio—HR—between 1.005 and 1.020; and for the high-low-high filtered wavelet transform of the parameter normalized inverse difference moment a hazard ratio—HR—between 1.09 and 4.10.

    11. The image analysis method according to claim 1, wherein the image data is obtained using a positron emission tomography imaging method, and wherein the first and second image feature parameter values are quantitative values obtained from said first and second image data and associated with image feature parameters, the image feature parameters including at least: the parameter entropy from the class first order statistics; and the parameter root mean square from the class first order statistics.

    12. The image analysis method according to claim 11, wherein the multiplier values are represented by hazard ratios associated with the image feature parameters, the prediction model including: for the parameter entropy a hazard ratio—HR—between 0.9 and 1.0, preferably HR=0.95; for the parameter root mean square a hazard ratio—HR—between 1.01 and 1.05, preferably HR=1.03.

    13. The image analysis method according to claim 1, wherein the image data is obtained using a positron emission tomography imaging method, wherein the first and second image feature parameter values are quantitative values obtained from said first and second image data and associated with image feature parameters, and wherein the image feature parameters include at least: the parameter absolute intensity of a part of a volume of the neoplasm determined using an intensity volume histogram of said image data; the parameter total lesion glycolysis for a volume of the neoplasm having an absolute intensity above a threshold determined using an intensity volume histogram of said image data; and the parameter uniformity from the class first order statistics.

    14. The image analysis method according to claim 13, wherein the multiplier values are represented by hazard ratios associated with the image feature parameters, the prediction model including: for the parameter absolute intensity of a part of a volume of the neoplasm, a hazard ratio—HR—between 1.01 and 1.05; for the parameter total lesion glycolysis for a volume of the neoplasm having an absolute intensity above a threshold, a hazard ratio—HR—between 0.97 and 1.01; and for the parameter uniformity a hazard ratio—HR—between 0.99 and 1.03.

    15. A non-transitory computer-readable medium comprising computer-executable instructions that, when executed by a computer, perform an image analysis method comprising: receiving, by a processing unit, image data of a neoplasm, wherein said receiving comprises: receiving first image data of the neoplasm at a first moment in time, and receiving second image data of the neoplasm at a second moment in time; deriving, by the processing unit, for each of a plurality of image features associated with the neoplasm: a first image feature parameter value from the first image data, and a second image feature parameter value from the second image data, said first image feature parameter value and second image feature parameter value being a quantitative representation of said respective image feature at said first and second moment in time respectively; and calculating, by the processing unit, for each one of the plurality of image features, an image feature difference value by calculating a difference between the first image feature parameter value and the second image feature parameter value for said one of the plurality of image features; wherein the method further includes: deriving, by said processing unit using a prediction model, a predictive value associated with the neoplasm for supporting treatment thereof, wherein said prediction model includes a plurality of multiplier values, each one of the plurality of multiplier values being associated with an image feature for multiplying an associated image feature difference value therewith, and wherein calculating, during the deriving, the predictive value associated with the neoplasm includes: multiplying each image feature difference value with its associated multiplier value, and combining the multiplied image feature difference values to obtain the predictive value associated with the neoplasm.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0031] The present invention will further be elucidated by means of some specific embodiments thereof, with reference to the enclosed drawings, wherein:

    [0032] FIG. 1A-1 is a gray level image describing tumor intensity of a first tumor;

    [0033] FIG. 1A-2 is a histogram of the image of FIG. 1A-1;

    [0034] FIG. 1A-3 provides an overview of image feature parameter values and corresponding image feature parameters derived from first order gray level statistics of the gray level image of FIG. 1A-1;

    [0035] FIG. 1B-1 provides a gray level image describing tumor intensity of a second tumor;

    [0036] FIG. 1B-2 is a histogram of the gray level image of FIG. 1B-1;

    [0037] FIG. 1B-3 provides an overview of image feature parameter values and associated image feature parameters derived from first order gray level statistics obtained by analyzing the image of FIG. 1B-1;

    [0038] FIG. 2A-1 illustrates a three dimensional representation of a third tumor;

    [0039] FIG. 2A-2 provides an overview of image feature parameter values of image feature parameters obtained from shape and/or size analysis of the tumor based on FIG. 2A-1;

    [0040] FIG. 2B-1 provides a three dimensional representation of a fourth tumor;

    [0041] FIG. 2B-2 provides an overview of image feature parameter values and associated image feature parameters obtained by shape and/or size analysis based on the image illustrated in FIG. 2B-1;

    [0042] FIG. 3 is an illustration of a surface contour analysis for obtaining the maximum diameter of a tumor;

    [0043] FIG. 4A provides an image of a fifth tumor;

    [0044] FIG. 4B provides an image of a sixth tumor;

    [0045] FIG. 5 is a schematic illustration of a decision support system in accordance with an embodiment of the present invention;

    [0046] FIG. 6 is a schematic illustration of an embodiment of an image analysis method of the present invention;

    [0047] FIG. 7 is a further schematic illustration of a method of the present invention;

    [0048] FIG. 8 is a gray scale ROI image of a tumor from which a gray-level co-occurrence matrix may be determined in accordance with an embodiment of the invention;

    [0049] FIG. 9 schematically illustrates decomposition using a wavelet transform.

    DETAILED DESCRIPTION

    [0050] Before providing a more detailed description of the various image feature parameters which may be derived from image features obtained from imaging data of neoplasms such as tumors, a description will be given herein below with reference to FIGS. 5 and 6 of a decision support system and an image analysis method in accordance with the present invention.

    [0051] FIG. 5 schematically illustrates a decision support system in accordance with an embodiment of the present invention. In FIG. 5, the decision support system 1 comprises at least an analysis unit 3 which is connected to an imaging system 8. The imaging system 8 may be any suitable imaging system used in medical environments for diagnostic purposes, in particular for visualizing tumors. The imaging system 8 may for example be a magnetic resonance imaging system (MRI), a computed tomography system (CT), a positron emission tomography system (PET), a single photon emission computed tomography system (SPECT), an ultrasonography system, a thermography system, or a photo-acoustic imaging system. The imaging system 8 may provide image data directly to the analysis system 3, or may alternatively store the image data in a data repository system 10 from which it may be obtained by the analysis system 3 at any time required. As will be appreciated, the analysis system 3, the imaging system 8, the data repository system 10, and any output terminal or system 12, may be connected with each other via a data network, or via direct data connections.

    [0052] As mentioned hereinabove, the analysis system 3 receives imaging data either directly from the imaging system 8 or retrieves it from a data repository system 10 where the image data may be stored. Another possibility is that part of the image data is received directly from the imaging system 8 by the analysis unit 3, and another part of the imaging data, e.g. imaging data taken from the same tumor at an earlier stage during a treatment of a patient, may be obtained from the data repository system 10. As will be appreciated, imaging data may alternatively be obtained from another source or via other means. For example, such data may be obtained from a remote network, from an e-mail server, or from a data storage entity such as a memory stick or an SD card. Performing an analysis in accordance with the present invention on imaging data taken at various stages throughout a treatment process provides information to a medical practitioner that may be used for evaluating the treatment process and for taking any necessary action.

    [0053] The analysis unit 3 comprises a processing unit 4 which receives the image data via input/output ports 6 and 7. The processing unit is arranged for deriving a plurality of image feature parameter values of image feature parameters associated with image features from the image data received. An image feature parameter is a quantitative representation of the associated image feature, and the image feature parameter value is the value of that respective parameter for the respective image data considered. For said deriving, the processing unit 4 applies various analysis algorithms, such as statistical analysis algorithms, graphic analysis algorithms and the like. Such algorithms may for example be stored in memory unit 5 within the analysis unit 3. The processing unit may further be arranged to obtain one or more prediction models from memory unit 5. Each of the obtained prediction models comprises selector values which determine whether or not, and to what extent or with what weight, specific image feature parameters are to be included in the respective prediction model. The prediction models may comprise either one or both of said selector values and weighting factors, separately or integrated in a single multiplier value, as stored in memory unit 5. Such weighting factors not only determine that a certain image feature parameter is included in the prediction model, but also make it possible to prescribe the importance of a certain image feature parameter in the prediction model, e.g. in terms of its predictive value in relation to or in combination with other parameters.

    [0054] The processing unit 4 is arranged for receiving image data of the neoplasm via input 6 or 7. This receiving step comprises receiving first image data of the neoplasm taken at a first moment in time, and receiving second image data of the neoplasm taken at a second moment in time. From this first and second image data, the processing unit derives for each of a plurality of image features associated with the neoplasm, image feature parameter values. Thus a first image feature parameter value from the first image data is derived, and a second image feature parameter value from the second image data is derived by the processing unit. These first and second image feature parameter values are quantitative representations of the respective image feature considered, at said first and second moment in time respectively.

    [0055] From the first and second image feature parameter values now available, the processing unit calculates for each of the plurality of image features, an image feature difference value. This may be performed by calculating a difference between the first and second image feature parameter value for said respective image feature. Alternatively, in accordance with an embodiment, the processing unit calculates a relative image feature difference value or percentage value, instead of or in addition to the absolute difference value. For example, the following formula may be used:

    [00001] \Delta IFP\% = \frac{IFP_2 - IFP_1}{IFP_1} \times 100\%

    wherein IFP_1 and IFP_2 are respectively the first and second image feature parameter values, and ΔIFP% is the relative image feature difference value or percentage value.
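
    By way of a non-limiting illustration, this relative difference computation may be sketched in Python as follows; the function and variable names are illustrative only:

    def relative_difference_percent(ifp1: float, ifp2: float) -> float:
        """Relative image feature difference (delta IFP %) between two scans."""
        return (ifp2 - ifp1) / ifp1 * 100.0

    # Example: a feature value rising from 4.0 (before treatment) to 5.0
    # (after n days of treatment) yields a +25% relative difference.
    delta = relative_difference_percent(4.0, 5.0)  # 25.0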

    [0056] Using a prediction model obtained from memory 5 (although it could equally be obtained from an external repository such as data repository system 10), the processing unit derives a predictive value associated with the neoplasm for supporting treatment thereof. The predictive value provides an indication of whether or not, and to what extent, treatment of the tumor or neoplasm is effective. If it is not effective, there may be no or insufficient difference between the first and second image feature parameter values considered; or alternatively such image feature parameter values could indicate a progression of the illness in the undesired direction (e.g. growth of the tumor). If, however, the treatment is effective, the combined image feature difference values will indicate a change in a desired direction. The image features considered, as indicated by the prediction model obtained from memory 5, are suitably selected and multiplied with their weighting factors so as to be early indicators of such successful or unsuccessful treatment.

    [0057] To this end, the prediction model includes a plurality of multiplier values, each multiplier value being associated with an image feature for multiplying an associated image feature difference value therewith. For calculating the predictive value, the processing unit 4 multiplies each image feature difference value with its associated multiplier value and combines the multiplied image feature difference values so as to obtain the predictive value for the neoplasm. Such combination may be performed via any suitable algorithm, e.g. a mere summing, or a polynomial, logarithmic or exponential algorithm.
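
    A minimal sketch of this multiply-and-combine step, using a mere summing as the combining algorithm, could read as follows; the difference and multiplier values are hypothetical:

    import numpy as np

    # Image feature difference values (e.g. percentage changes between scans).
    differences = np.array([12.5, -3.0, 7.2, 0.8])

    # Multiplier values from the prediction model: weighting factors, where a
    # zero would act as a selector value excluding a feature entirely.
    multipliers = np.array([0.4, 0.0, 1.2, -0.7])

    # Multiply each difference with its associated multiplier and combine
    # (here by mere summing) to obtain the predictive value.
    predictive_value = float(np.sum(differences * multipliers))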

    [0058] In FIGS. 6 and 7, embodiments of analysis methods in accordance with the present invention are schematically illustrated. FIG. 6 schematically illustrates the data processing performed in a method of the present invention. FIG. 7 illustrates an image analysis method of the present invention as it may be implemented in terms of steps performed by hardware or software elements of a decision support system in accordance with the invention. To explain the method in relation to a decision support system of the invention, reference is also made to the reference numerals and features of FIGS. 5, 6 and 7. As will be appreciated, the method and the system are only provided as an example and should not be interpreted as limiting. The order of the steps illustrated in FIG. 7 may be different, so long as it enables performing the data processing in accordance with FIG. 6. Also, additional or alternative steps may be implemented without departing from the teachings of the present invention. Some steps may be dispensed with in case they are implemented in an alternative manner, without departing from the present invention.

    [0059] In FIG. 7, in step 70 a prediction model 60 (as illustrated in FIG. 6) may be retrieved from a memory 5 of the analysis unit 3 (or from an external memory 10, as illustrated in FIG. 5). Then, in step 72, first image data of a tumor (or other neoplasm) is received by the processing unit 4. For example, this first image data may be image data 15 illustrated in FIG. 6. Likewise, in step 74, second image data of the same tumor is received by the processing unit 4. This second image data may be image data 25 illustrated in FIG. 6, and comprises image data of the same tumor as in image data 15 but taken at a later moment in time. For example, the first image data 15 may have been obtained prior to treatment, e.g. on the day that the treatment started, while the second image data 25 may have been obtained after n days of treatment (wherein n indicates the number of days), e.g. after 15 days. The first and second image data may be received in steps 72 and 74 from a memory (e.g. memory 5 or data repository 10) or may be received directly from an imaging system (e.g. imaging system 8). Moreover, step 70, which is illustrated in FIG. 7 prior to steps 72 and 74, may be performed after receiving the image data in steps 72 and 74, and even after the subsequent steps 75 and 76.

    [0060] In step 75, a number of first image feature parameter values, such as first image feature parameter values 16, 17, 18 and 19 in FIG. 6, are derived from the first image data 15. The decision on which first image feature parameter values 16-19 are to be derived may be based on information obtained from the prediction model 60 received in step 70; alternatively, it is also possible that the processing unit 4 calculates all possible image feature parameters of the first image data 15 and stores them for later selection. Likewise, in step 76, the second image feature parameter values 26, 27, 28 and 29 are derived from the second image data 25. As will be appreciated, the second image feature parameter values 26-29 relate to the same image feature parameters α, β, γ, and δ as the first image feature parameter values 16-19 that were taken from the first image data 15.

    [0061] In step 80 of FIG. 7, the image feature difference values 33, 35, 37, and 39 as illustrated in FIG. 6 are calculated by the processing unit 4. These difference values 33, 35, 37, and 39 may be real or absolute difference values (e.g. for image feature parameter α, this could be: Δα = α_2 − α_1; thus based on parameter values 16 and 26). Alternatively, a percentage difference may be used (e.g. for parameter α, this could be: Δα = 100% × (α_2 − α_1)/α_1; thus based on parameter values 16 and 26). Although in some embodiments, the decision to use percentage differences or real or absolute differences may be predetermined (e.g. processing all parameters in the same manner in this respect), in other embodiments the prediction model 60 may indicate whether for a certain image feature parameter the real, absolute or percentage difference is to be used. Whichever implementation is used, step 80 of FIG. 7 provides the image feature difference values, e.g. values 33, 35, 37, and 39 indicated in FIG. 6.

    [0062] In step 83 of FIG. 7, the processing unit 4 retrieves the relevant multiplier values from the prediction model; e.g. these may be multiplier values m_1-m_4 (elements 48, 49, 50, and 51) from prediction model 60 in memory 5 as illustrated in FIG. 6. Also in step 83, these multiplier values 48-51 are multiplied with the image feature difference values 33, 35, 37, and 39 of their associated image features. This yields the weighted image feature difference values 40, 41, 42, and 43 illustrated in FIG. 6.

    [0063] In the optional implementation suggested above, wherein in steps 75 and 76 the processing unit has calculated all image feature parameter values without first consulting the prediction model 60 on which image features to include, the associated difference values of all these parameters would have been calculated and stored in step 80, and step 83 would multiply these difference values with their multiplier values. In that case, the multiplier values m_1-m_4 (e.g. 48-51) could act both as weighting factors and as selector values, by being equal to ‘0’ in case the image feature difference value is not to be included, while being equal to the real weighting factor in case the image feature difference value is to be included.

    [0064] In step 85 of FIG. 7, the weighted image feature difference values are combined so as to yield the predictive value (e.g. predictive value Q (element 55) in FIG. 6) that is the outcome of the method. For example, as illustrated in FIG. 6, the combination (element 53) could include the calculation of the predictive value using a function f_Δ; the function f_Δ could for example be a summing of the values 40-43 in FIG. 6. After step 85, the method may end.
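
    The overall flow of steps 70 through 85 may be summarized by the following Python sketch; the helper function derive_features (standing in for steps 75 and 76) and the example model are hypothetical, and a percentage difference with a summing combination is assumed:

    import numpy as np

    def derive_features(image, feature_names):
        """Hypothetical stand-in for steps 75/76: derives image feature
        parameter values (here only two simple first order statistics)."""
        available = {"mean": float(np.mean(image)), "std": float(np.std(image))}
        return {name: available[name] for name in feature_names}

    def predictive_value(model, image_t1, image_t2):
        """Steps 70-85: derive features at both time points, take percentage
        differences, weight them with the model multipliers and sum."""
        names = list(model)                        # step 70: model obtained
        v1 = derive_features(image_t1, names)      # step 75
        v2 = derive_features(image_t2, names)      # step 76
        q = 0.0
        for name, multiplier in model.items():
            diff = 100.0 * (v2[name] - v1[name]) / v1[name]   # step 80
            q += multiplier * diff                 # steps 83 and 85
        return q

    # Hypothetical prediction model with two features and multiplier values.
    model = {"mean": 0.8, "std": -0.2}
    rng = np.random.default_rng(0)
    q = predictive_value(model, rng.random((64, 64)), rng.random((64, 64)))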

    [0065] As will be appreciated, the decision support system of FIG. 5 and the image analysis method of FIGS. 6 and 7 are embodiments of the present invention; however, the invention may be practiced otherwise than specifically described with reference to FIGS. 5, 6 and 7. For example, additional image feature parameter values from either one or both of the first image data 15 and second image data 25, or even from third image data of an image taken at another point in time, may be included for calculating the predictive value. Such additional image feature parameter values thus do not necessarily have to be included via their difference values with respect to a second image, although this is of course not prohibited.

    [0066] The present invention uses image feature parameter values obtained from image features derived from image data of a tumor. FIGS. 1A-1 through 1B-3 provide as a first example a number of image feature parameters and their values that may be obtained from first order gray level statistical analysis of an image. In FIG. 1A-1, a gray level image of a tumor is illustrated. The gray level scale is indicated with reference numeral 103 to the right of FIG. 1A-1. Also visible in FIG. 1A-1 is the contour 101 of the tumor to be analyzed. It is to be noted that the contour defining the tumor will usually be determined by a medical practitioner, or by another analysis method or system. The present description assumes this information to be available to the method.

    [0067] In FIG. 1A-2 a histogram 105 is illustrated which is based on the image data illustrated in FIG. 1A-1. The histogram 105 represents the tumor image only, i.e. the histogram is based on the pixels of the gray level image of FIG. 1A-1 inside the contour 101. All parts of the image outside contour 101 are disregarded in the analysis and are considered to be healthy tissue. The histogram 105 is plotted onto a first axis 107 indicating the gray level considered, and a second axis 108 representing the number of pixels occurring at each gray level.

    [0068] FIG. 1B-1 illustrates a second tumor within contour 121, and FIG. 1B-2 illustrates a corresponding histogram 123 associated with this second tumor illustrated in FIG. 1B-1. From a qualitative comparison of the images of FIG. 1A-1 and FIG. 1B-1, one can see a number of characteristic differences between the two tumors. For example, the first tumor within contour 101 appears to be inhomogeneous, while the gray level of the second tumor within contour 121 is more uniform. This difference is for example directly visible in the histograms 105 and 123. Histogram 123 is clearly concentrated around a uniform gray level as a small but sharp peak. Histogram 105 illustrates a broad distribution having a peak at approximately gray level 1050 and a more distributed trail across almost all gray levels below this value. From the histogram of the image of the tumor, relevant information can be quantitatively derived that may also be derived from qualitative examination of the images.
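
    As an illustrative sketch, such a first order histogram may be computed as follows, assuming the delineated contour is available as a boolean mask; the image and mask below are hypothetical:

    import numpy as np

    # Hypothetical 2D gray level image and boolean tumor mask derived from
    # the delineated contour (pixels outside the contour are disregarded).
    image = np.random.default_rng(0).integers(0, 2048, size=(128, 128))
    mask = np.zeros(image.shape, dtype=bool)
    mask[32:96, 32:96] = True

    roi = image[mask]                               # tumor pixels only
    counts, bin_edges = np.histogram(roi, bins=64)  # first order histogram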

    [0069] In FIGS. 1A-3 and FIG. 1B-3, an overview is provided from a number of image feature parameter values and associated image feature parameters that may be derived from first order gray level statistical analysis of the images of FIGS. 1A-1 and 1B-1 respectively. These image feature parameters, which will be described with more detail later on in this document, may be used in the various prediction models to obtain a predictive value, which may be supportive to the medical practitioner in early assessment of treatment effectiveness.

    [0070] FIGS. 2A-1 through 2B-2 provide an example of image feature parameter and image feature parameter values that may be obtained from analysis of shape and size related features, derivable for example from three dimensional (3D) representations of tumors based on imaging data obtained. In FIG. 2A-1 a three dimensional (3D) representation of a third tumor 130 is illustrated. In FIG. 2B-1 a three dimensional (3D) representation of a fourth tumor 135 is illustrated. From qualitative comparison of the two tumors in FIGS. 2A-1 and FIGS. 2B-1, a number of differences may be derived such as a difference in size of the tumor. The fourth tumor 135 is much larger than the third tumor 130, although the third tumor 130 appears to have a much larger surface.

    [0071] An overview of the image feature parameter values that may be derived from the imaging data in FIGS. 2A-1 and FIG. 2B-1 is provided in FIGS. 2A-2 and 2B-2 respectively. These image feature parameter values for example include the volumes of the tumors, their total surface and their maximum diameter. Besides this, more quantitative information on image feature parameters which may be characteristic for a specific type of tumor growth (phenotype) is derivable from the images. For example, the sphericity provides information on how spherical (i.e. regular) the tumor is. The surface to volume ratio (SVR) expresses how spiky or sharp the tumor is. A maximum diameter represents the maximum distance between the most remote points on the surface of the tumor in the three dimensional representation.

    [0072] FIG. 3 provides an illustration of a contour analysis from which the maximum diameter of a tumor may be derived. The most remote points in FIG. 3 are at the ultimate ends of the tumor 140, to the left and right side of the plot in FIG. 3. In respect of FIG. 3 it is noted that the points depicted in the plot are voxels lying on the surface of the tumor.

    [0073] As a further example in FIGS. 4a and 4b, a fifth tumor 143 and a sixth tumor 146 are respectively illustrated. From qualitative observation of the images in FIG. 4a and FIG. 4b, a striking difference is visible in terms of the texture of the tumors illustrated. For example, the sixth tumor 146 in FIG. 4b illustrates a strong variation in color inside the tumor and across its surface. The tumor 143 in FIG. 4a is more homogeneous, being more or less of one color. These differences in texture can be derived from co-occurrence matrices obtained from pixel color analysis of the images of these figures. The concept of co-occurrence matrices will be explained later.

    Image Feature Parameter Descriptions (Part 1)

    [0074] In this section ‘image feature parameter descriptions (part 1)’, a large number of image feature parameters are described which can be obtained from the image data received. The image feature parameters relate to different classes of parameters, wherein each class of parameters is somewhat similar in terms of the manner in which the parameters are obtained. For example, for some classes, a certain preprocessing step is to be performed by the processing unit 4 of the analysis unit 3 in order to calculate each parameter. This preprocessing is similar for all parameters in that class, as is for example the case for the second order gray-level statistics. Other classes may relate to a certain type of parameters; e.g. the first order gray-level statistics include statistical parameters obtainable directly from the image data. Further image feature parameter descriptions for these and other classes may be found in the section ‘image feature parameter descriptions (part 2)’ later on.

    First-Order Gray Level Statistics

    [0075] In this section various image feature parameters are described that can be used to extract and summarize meaningful and reliable information from CT images. We will describe the extraction of image traits that may be used to derive prognostic metrics, and that may be incorporated into prediction models of a decision support system, to beneficially support the clinical review process of a treatment, and to modify a patient's treatment in case of predicted ineffectiveness of a present treatment. As appreciated, the objective of the invention is to support (not take over) the decision making process of the medical practitioner with advanced information taken from the images; i.e. image feature data that cannot be objectively assessed by means of qualitative interpretation.

    [0076] We explore first-order statistics of the image histogram through the commonly used metrics. We denote by I(x,y) the intensity or gray-level values of the two-dimensional pixel matrix. Note that this notation I(x,y) assumes a two dimensional image. However, the invention may well apply corresponding image feature parameters to three-dimensional images, in which case the intensity matrix may be denoted as I(x,y,z) or I(X) where X=X(x,y,z) (or with different coordinates if not Cartesian). Summing may in that case be correspondingly performed over all values, hence in three coordinates. The formulas used for the first order statistics are as follows:

    [0077] 1. Minimum


    I_{\min} = \min\{I(x,y)\}  (B.1)

    [0078] 2. Maximum


    I_{\max} = \max\{I(x,y)\}  (B.2)

    [0079] 3. Range


    R = \max\{I(x,y)\} - \min\{I(x,y)\}  (B.3)

    [0080] 4. Mean

    [00002] \mu = \frac{1}{XY} \sum_{x=1}^{X} \sum_{y=1}^{Y} I(x,y)  (B.4)

    [0081] 5. Variance

    [00003] \sigma^2 = \frac{1}{XY - 1} \sum_{x=1}^{X} \sum_{y=1}^{Y} \left[ I(x,y) - \mu \right]^2  (B.5)

    [0082] 6. Standard Deviation

    [00004] s = \left( \frac{1}{XY - 1} \sum_{i=1}^{XY} (x_i - \mu)^2 \right)^{1/2}  (B.6)

    [0083] 7. Skewness

    [00005] \text{skewness} = \frac{1}{XY} \sum_{x=1}^{X} \sum_{y=1}^{Y} \left[ \frac{I(x,y) - \mu}{\sigma} \right]^3  (B.7)

    [0084] 8. Kurtosis

    [00006] \text{kurtosis} = \frac{1}{XY} \sum_{x=1}^{X} \sum_{y=1}^{Y} \left[ \frac{I(x,y) - \mu}{\sigma} \right]^4 - 3  (B.8)

    [0085] 9. Entropy


    H = -\sum_{i=1}^{XY} P(i) \log_2 P(i)  (B.9)

    [0086] In B.9, P(i) is the first order histogram, that is, P(i) is the fraction of pixels with gray level i. The variance (μ_2), skewness (μ_3) and kurtosis (μ_4) are the most frequently used central moments. The variance is a measure of the histogram width, that is, a measure of how much the gray levels differ from the mean. The skewness measures the degree of histogram asymmetry around the mean, and kurtosis is a measure of the histogram sharpness. As a measure of histogram uniformity or randomness we computed the entropy of the image histogram. The closer the histogram is to a uniform distribution, the higher the entropy; seen in a different way, H would take low values in smooth images where the pixels have the same intensity level.
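
    A minimal NumPy sketch of the formulas B.1 through B.9 could read as follows; it assumes non-negative integer gray levels and uses the sample standard deviation throughout:

    import numpy as np

    def first_order_statistics(I):
        """First order gray level statistics (B.1-B.9) of an intensity array."""
        I = np.asarray(I, dtype=float).ravel()
        mu = I.mean()                                  # mean (B.4)
        var = I.var(ddof=1)                            # variance (B.5)
        s = np.sqrt(var)                               # standard deviation (B.6)
        p = np.bincount(I.astype(int)) / I.size        # first order histogram P(i)
        p = p[p > 0]                                   # drop empty gray levels
        return {
            "min": I.min(),                            # (B.1)
            "max": I.max(),                            # (B.2)
            "range": I.max() - I.min(),                # (B.3)
            "mean": mu,
            "variance": var,
            "std": s,
            "skewness": np.mean(((I - mu) / s) ** 3),  # (B.7)
            "kurtosis": np.mean(((I - mu) / s) ** 4) - 3,  # (B.8)
            "entropy": -np.sum(p * np.log2(p)),        # (B.9)
        }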

    Second-Order Gray Level Statistics

    [0087] The features shown above that resulted from the first-order statistics provide information related to the gray-level distribution of the image; however, they do not provide any information regarding the relative position of the various gray levels over the image. This information can be extracted from the so-called co-occurrence matrices, where pixels are considered in pairs and which provide a spatial distribution of the gray level values. The co-occurrence features are based on the second-order joint conditional probability function P(i,j;α,d) of a given image. The (i,j)-th element of the co-occurrence matrix for a given tumor image represents the number of times that the intensity levels i and j occur in two pixels separated by a distance (d) in the direction (α). The co-occurrence matrix for a pair (d,α) is defined as the N_g × N_g matrix, where N_g is the number of intensity levels. The N_g levels were obtained by scaling the gray-level image to a discrete N_g number of gray-level values. The N_g values are normally selected in powers of 2; here we have selected 32 discrete gray-level values, which in practice is a sufficient choice for representing the image. Here d was set to a single pixel size and α covered the four available angular directions (horizontal, vertical, diagonal and anti-diagonal). Let for example an image array I(x,y) be:

    [00007] I = \begin{bmatrix} 3 & 5 & 8 & 10 & 8 \\ 7 & 10 & 3 & 5 & 3 \\ 7 & 3 & 5 & 1 & 8 \\ 2 & 6 & 7 & 1 & 2 \\ 1 & 2 & 9 & 3 & 9 \end{bmatrix}  (B.11)

    which corresponds to a 5×5 image. We can assume the number of discrete gray levels is equal to 10. Thus for the image (B.11) and a relative pixel position (1,0°) we obtain:

    [00008] GLCM_{0°}(d=1) = \begin{bmatrix}
    0 & 2 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
    0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 0 & 3 & 0 & 0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
    1 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
    0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
    1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
    0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
    0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
    0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0
    \end{bmatrix}  (B.12)

    [0088] In other words, for each of the intensity pairs, such as (1, 2), we count the number of pixel pairs at relative distance (d=1) and orientation α=0° (horizontal) that take these values. In our case this is 2. There are two instances in the image (B.11) where two, horizontally adjacent pixels have the values 1 and 2. The element (3, 5) in the GLCM is 3 because in the example image there are 3 instances in which two, horizontally adjacent pixels have the values 3 and 5. From the same image (B.11) and (d=1, α=45°) we obtain:

    [00009] GLCM_{45°}(d=1) = \begin{bmatrix}
    0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\
    0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
    0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
    0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\
    0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
    1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\
    0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
    1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0
    \end{bmatrix}  (B.13)

    [0089] As an illustrative example we will obtain the gray-level co-occurrence matrix from a given tumor image. FIG. 8 shows, on the left, a given gray scale ROI image (with the color map changed to enhance the differences for visual inspection) and, on the right, the same ROI scaled to 32 discrete gray values. The co-occurrence matrices are obtained from the scaled image.
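
    A sketch of this directed co-occurrence counting, which reproduces the matrix in B.12 for the example image in B.11, might look as follows in Python:

    import numpy as np

    def glcm(image, dx, dy, levels):
        """Directed gray level co-occurrence matrix for offset (dx, dy);
        gray levels are assumed to be integers in 1..levels."""
        image = np.asarray(image)
        m = np.zeros((levels, levels), dtype=int)
        rows, cols = image.shape
        for y in range(rows):
            for x in range(cols):
                y2, x2 = y + dy, x + dx
                if 0 <= y2 < rows and 0 <= x2 < cols:
                    m[image[y, x] - 1, image[y2, x2] - 1] += 1
        return m

    I = np.array([[3, 5, 8, 10, 8],
                  [7, 10, 3, 5, 3],
                  [7, 3, 5, 1, 8],
                  [2, 6, 7, 1, 2],
                  [1, 2, 9, 3, 9]])

    glcm0 = glcm(I, dx=1, dy=0, levels=10)         # horizontal, d=1 (B.12)
    assert glcm0[0, 1] == 2 and glcm0[2, 4] == 3   # pairs (1,2) and (3,5)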

    [0090] Having defined the probabilities of occurrence of gray levels with respect to relative spatial position, we can define the relevant co-occurrence features that have been extracted; in some cases they have a direct physical interpretation with respect to the texture of an image, for example, they quantify coarseness, smoothness, randomness, etc. Others do not have such a property, but they still encode highly discriminative texture-related information. Denoting by P(i,j) the normalized co-occurrence matrix and by N_g the number of discrete gray levels of the image, the co-occurrence features relevant for our application are defined as follows:

    [0091] 10. Contrast

    [00010] \mathrm{Con} = \sum_{n=1}^{N_g} n^2 \sum_{\substack{i,j = 1 \\ |i-j| = n}}^{N_g} P(i,j)  (B.14)

    [0092] This is a measure of the intensity contrast between a pixel and its neighbor over the entire image, that is, a measure of the local gray level variations. For a constant image this metric is zero. The n^2 dependence weights the big differences more.

    [0093] 11. Correlation

    [00011] \mathrm{Correlation} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} \frac{(i - \mu_i)(j - \mu_j)\, P(i,j)}{\sigma_i \sigma_j}  (B.15)

    [0094] This metric measures how correlated a pixel is to its neighbor over the entire image. Correlation takes the value 1 or −1 for a perfectly positively or negatively correlated image.

    [0095] 12. Energy


    \mathrm{Energy} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} \left( P(i,j) \right)^2  (B.16)

    [0096] Energy is the sum of the squared elements of the normalized co-occurrence matrix and a measure of smoothness. If all pixels are of the same gray level then energy is equal to 1; at the other extreme, if we have all possible pairs of gray levels with equal probability, the region is less smooth, with a more uniformly distributed P(i,j) and a lower energy.

    [0097] 13. Homogeneity

    [00012] \mathrm{Homogeneity} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} \frac{P(i,j)}{1 + |i - j|}  (B.17)

    [0098] This feature measures how close the distribution of elements in the co-occurrence matrix is to the diagonal of the co-occurrence matrix. Homogeneity is 1 for a constant image.

    [0099] 14. Inverse Difference Moment

    [00013] \mathrm{IDM} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} \frac{P(i,j)}{1 + |i - j|^2}  (B.18)

    [0100] This feature takes high values for images with low contrast due to the (i − j)^2 dependence.

    [0101] 15. Sum Average


    \mathrm{SA} = \sum_{i=2}^{2N_g} \left[ i\, P_{x+y}(i) \right]  (B.19)

    [0102] In B.19, P_x(i) and P_y(i) are the row and column marginal probabilities, obtained by summing the rows or columns of P(i,j), and P_{x+y}(k) is the sum of all P(i,j) with i + j = k.

    [0103] 16. Sum Variance


    \mathrm{SV} = \sum_{i=2}^{2N_g} \left[ (i - \mathrm{SA})^2\, P_{x+y}(i) \right]  (B.20)

    [0104] 17. Sum Entropy


    \mathrm{SE} = -\sum_{i=2}^{2N_g} \left[ P_{x+y}(i) \log\left[ P_{x+y}(i) \right] \right]  (B.21)

    [0105] All the second-order statistics based features are functions of the distance d and the orientation α. Here, for the distance d=1, the resulting values for the four directions are averaged. These metrics take into account the local intensity and spatial relationship of pixels over the region and are independent of tumor position, size, orientation and brightness.
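
    Given a co-occurrence count matrix accumulated as above, a few of these features (contrast B.14, energy B.16 and homogeneity B.17) may be sketched as follows; the normalization step is an assumption of this sketch:

    import numpy as np

    def glcm_features(counts):
        """Contrast (B.14), energy (B.16) and homogeneity (B.17) from a
        co-occurrence count matrix, normalized to probabilities first."""
        P = counts / counts.sum()
        ng = P.shape[0]
        i, j = np.indices((ng, ng))           # gray level index grids
        contrast = np.sum((i - j) ** 2 * P)   # equals sum over n of n^2 P(|i-j|=n)
        energy = np.sum(P ** 2)
        homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
        return contrast, energy, homogeneity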

    Run-Length Gray-Level Statistics

    [0106] Additionally, we examined gray-level runs derived from run-length matrices (RLM) using run-length metrics. A gray level run is a set of consecutive pixels having the same gray level value. The length of the run is the number of pixels in the run. Run length features describe textural information related to the number of times each gray level appears by itself, in pairs, and so on, at a certain distance and orientation. Take for example the image

    [00014] I = \begin{bmatrix} 5 & 2 & 5 & 4 & 4 \\ 3 & 3 & 3 & 1 & 3 \\ 2 & 1 & 1 & 1 & 3 \\ 4 & 2 & 2 & 2 & 3 \\ 3 & 5 & 3 & 3 & 2 \end{bmatrix}  (B.22)

    with five possible gray levels. For each of the previously defined angular directions (0°, 45°, 90° and 135°) the corresponding run length matrices are defined. The run length matrix is an N_g × N_r array, where N_r is the largest possible run length in the image. For distance (d=1) and orientation (α=0°) we obtain:

    [00015] Q_{RL}(0°) = \begin{bmatrix} 1 & 0 & 1 & 0 & 0 \\ 3 & 0 & 1 & 0 & 0 \\ 4 & 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 & 0 \end{bmatrix}  (B.23)

    [0107] The element (1,1) of the run length matrix is the number of times that the gray level 1 appears by itself, the second element is the number of times it appears in pairs (zero in the example), and so on. The element (3,3) is the number of times the gray level 3 appears in the image with run length 3. For the diagonal direction we obtain:

    [00016] Q_{RL}(45°) = \begin{bmatrix} 2 & 1 & 0 & 0 & 0 \\ 6 & 0 & 0 & 0 & 0 \\ 7 & 1 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 & 0 \end{bmatrix}  (B.24)
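
    A Python sketch of this run counting for the horizontal direction, which reproduces B.23 for the example image in B.22, could read:

    import numpy as np

    def run_length_matrix(image, levels, max_run):
        """Horizontal (0 degree) gray level run length matrix Q_RL;
        gray levels are assumed to be integers in 1..levels."""
        q = np.zeros((levels, max_run), dtype=int)
        for row in np.asarray(image):
            run = 1
            for a, b in zip(row, row[1:]):
                if a == b:
                    run += 1
                else:
                    q[a - 1, run - 1] += 1
                    run = 1
            q[row[-1] - 1, run - 1] += 1    # close the run at the row end
        return q

    I = np.array([[5, 2, 5, 4, 4],
                  [3, 3, 3, 1, 3],
                  [2, 1, 1, 1, 3],
                  [4, 2, 2, 2, 3],
                  [3, 5, 3, 3, 2]])

    q0 = run_length_matrix(I, levels=5, max_run=5)  # reproduces B.23
    assert q0.sum() == 17                           # 17 runs in total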

    [0108] Denoting by P the total number of pixels of an image, by Q_RL(i,j) the (i,j)-th element of the run length matrix for a specific distance d and a specific angle α, and by N_r the number of different runs that occur, based on the definition of the run length matrices, the following run length features are defined:

    [0109] 18. Short Run Emphasis

    [00017] \mathrm{SRE} = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} \left( Q_{RL}(i,j) / j^2 \right)}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} Q_{RL}(i,j)}  (B.25)

    [0110] This feature emphasizes small run lengths. The denominator is the total number of runs in the matrix, for example, 17 in B.23 and 23 in B.24.

    [0111] 19. Long Run Emphasis

    [00018] \mathrm{LRE} = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} \left( Q_{RL}(i,j) \cdot j^2 \right)}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} Q_{RL}(i,j)}  (B.26)

    [0112] In this case long run lengths are emphasized. For smoother images LRE should take larger values, while SRE takes larger values for coarser images.

    [0113] 20. Gray Level Non-Uniformity

    [00019] \mathrm{GLN} = \frac{\sum_{i=1}^{N_g} \left[ \sum_{j=1}^{N_r} Q_{RL}(i,j) \right]^2}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} Q_{RL}(i,j)}  (B.27)

    [0114] This feature takes small values when the runs are uniformly distributed among the gray levels.

    [0115] 21. Run Percentage

    [00020] \mathrm{RP} = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} Q_{RL}(i,j)}{P}  (B.28)

    [0116] Run percentage takes high values for coarse images. For each angular direction, the complete set of second-order statistics and run-length features was computed but only the average value was used as feature.

    Shape and Size Based Features

    [0117] We extended the number of extracted image traits by adding measurements of the size and shape of the tumor region. For every two-dimensional image of the tumor in a given CT stack three features are obtained: maximum cross-sectional area, perimeter and major axis length, as follows:

    [0118] 22. Area

    [0119] We count the number of pixels in the ROIs, and the maximum count is denoted as the maximum cross-sectional area.

    [0120] 23. Perimeter

    [0121] The perimeter is the distance between each adjoining pair of pixels around the border of the region; the total sum of the perimeters of all ROI images is taken as the feature.

    [0122] 24. Major Axis Length

    [0123] This feature specifies the maximum length in pixels of the major axis of a two-dimensional ROI image.

    [0124] 25. Volume

    [0125] The total volume of the tumor is determined by counting the number of pixels in the tumor region and multiplying this value by the voxel size. The voxel size is obtained from the PixelSpacing section of the CT DICOM header, which specifies the size of a voxel in the x, y, and z directions. The result is a value in mm^3. Based on the CT-GTV volume that was described above, 3D representations of the tumor volume have been rendered.

    [0126] 26. Maximum Diameter

    [0127] In contrast with the major axis length, which was determined in two-dimensional ROI images, this feature examines the maximum diameter of the tumor region in three-dimensional space. Firstly, we obtain the coordinates of all the points located at the surface of the tumor region; secondly, the distance between each pair of points in the tumor contour is determined using the following metric, called the “city block distance”:


    D = |x_1 - x_2| + |y_1 - y_2| + |z_1 - z_2|  (B.29)

    [0128] The points in the tumor contour whose edges touch are 1 unit apart; points diagonally touching are separated by two units. The two points with the maximum distance are the points at the edges of the maximum diameter. In FIG. 3, as referred to above, a plot of the points in the surface of a given tumor volume is shown; the maximum diameter is calculated among the points in this image.
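
    A brute force sketch of this maximum diameter search over the surface voxels, using the city block metric of B.29, could read as follows; the surface coordinates are hypothetical:

    import numpy as np

    def max_diameter_cityblock(surface_points):
        """Maximum pairwise city block distance (B.29) between surface voxels;
        surface_points is an (N, 3) array of voxel coordinates."""
        pts = np.asarray(surface_points)
        best = 0
        for p in pts:                            # brute force O(N^2) search
            d = np.abs(pts - p).sum(axis=1)      # |x1-x2| + |y1-y2| + |z1-z2|
            best = max(best, int(d.max()))
        return best

    # Hypothetical surface voxel coordinates of a small tumor region.
    surface = np.array([[0, 0, 0], [3, 1, 0], [1, 4, 2], [2, 2, 5]])
    diameter = max_diameter_cityblock(surface)   # 9 for these points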

    [0129] So far we have described the extraction of image traits regarding the gray level and spatial relationship between pixels in a region, as well as size measurements of the tumor region in two and three dimensions. Another important issue in the task of pattern recognition is the analysis of shape; in this regard the extracted image traits are completed by adding the following three shape-based features:

    [0130] 27. Surface to Volume Ratio.

    [0131] This feature is intended to express how spiky or sharp the tumor volume is. A more lobulated tumor volume would result in a higher surface to volume ratio. To calculate this feature, we first determine and count the pixels located at the surface of the tumor (e.g. as shown in FIGS. 2A-1 and 2B-1); the resulting number is divided by the sum of all the pixels in the tumor volume.

    [0132] 28. Sphericity

    [0133] This is a measure of how spherical or rounded the shape of the tumor volume is. As defined in [16], the sphericity of an object is the ratio of the surface area of a sphere (with the same volume as the given object) to the surface area of the object:

    [00021] \Psi = \frac{\pi^{1/3} (6V)^{2/3}}{A}  (B.30)

    Where A and V are the surface area and volume of the tumor respectively as determined for the surface to volume ratio.

    [0134] 29. Compactness

    [0135] This is an intrinsic characteristic of the shape of objects that has been widely used in pattern recognition tasks and represents the degree to which a shape is compact. The compactness of a three-dimensional tumor volume is obtained as follows:

    [00022] \mathrm{Comp} = \frac{V}{\sqrt{\pi}\, A^{3/2}}  (B.31)

    [0136] The sphericity and compactness features are dimensionless numbers and are independent of scaling and orientation. The feature generation phase of this methodology can be performed in a semi-automated fashion, since the tumor delineations carried out by the physician are needed by the algorithm. The features listed in this section will be fed to a classifier as inputs in the learning and recognition phase of the classification task.
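
    Assuming the surface area A and volume V of the tumor have already been measured, the sphericity (B.30) and compactness (B.31, read here as V divided by the square root of π times A to the power 3/2, which keeps the feature dimensionless) may be sketched as:

    import numpy as np

    def sphericity_and_compactness(volume, area):
        """Sphericity (B.30) and compactness (B.31) from volume and area."""
        sphericity = np.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / area
        compactness = volume / (np.sqrt(np.pi) * area ** 1.5)
        return sphericity, compactness

    # Sanity check: for a sphere of radius r the sphericity is 1 by construction.
    r = 2.0
    v, a = 4 / 3 * np.pi * r ** 3, 4 * np.pi * r ** 2
    psi, comp = sphericity_and_compactness(v, a)   # psi == 1.0 up to rounding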

    Image Feature Parameter Descriptions (Part 2)

    [0137] Additional First Order Gray-Level Statistics

    [0138] In this section various further image feature parameters in the class first order gray-level statistics are described, which are additional to the parameters described in section ‘Image feature parameter description (part 1)’, sub-section ‘First order gray-level statistics’. Herein, I(x,y,z) represents a standard uptake value (SUV) or intensity of a voxel, i.e. a voxel value taken from the image data directly.

    [0139] 30. Energy

    [0140] This feature is described by the following equation:

    [00023] E_{tot} = V_{voxel} \sum_{x=1}^{X} \sum_{y=1}^{Y} \sum_{z=1}^{Z} I(x,y,z)^2  (B.32)

    [0141] Where V_{voxel} is the voxel volume of the three dimensional image. The voxel volume is the product of the pixel spacing in the x-direction, the pixel spacing in the y-direction and the pixel spacing in the z-direction. Total energy is normalized by the voxel volume.

    [0142] 31. Mean Absolute Deviation

    [0143] This feature equals the mean of the absolute deviations of all voxel intensities around the mean intensity value represented by image feature parameter 4 (equation B.4) above.

    [0144] 32. Median

    [0145] Median is a well known statistical parameter, indicating in this case the median intensity value.

    [0146] 33. Root Mean Square

    [0147] Root mean square is a well known statistical parameter, indicating in this case the quadratic mean, or the square root of the mean of squares of all voxel intensities.

    [0148] 34. SUV Peak

    [0149] This image feature parameter is defined as the mean standard uptake value (SUV) within a 1 cm^3 sphere centered around the maximum SUV voxel.

    [0150] 35. Uniformity

    [0151] Let P define the first order histogram and P(i) the fraction of voxels with intensity level i. N_l is the number of discrete intensity levels. Then:

    [00024] \mathrm{uniformity} = \sum_{i=1}^{N_l} P(i)^2
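
    A sketch of this uniformity computation from a voxel intensity histogram, together with the root mean square of feature 33, might read as follows; the binning into discrete intensity levels is an assumption of this sketch:

    import numpy as np

    def uniformity_and_rms(voxels, levels=32):
        """Uniformity (sum of squared histogram fractions) and root mean
        square of the voxel intensities; levels is the histogram bin count."""
        v = np.asarray(voxels, dtype=float).ravel()
        counts, _ = np.histogram(v, bins=levels)
        p = counts / counts.sum()          # P(i): fraction of voxels per level
        uniformity = np.sum(p ** 2)
        rms = np.sqrt(np.mean(v ** 2))     # quadratic mean (feature 33)
        return uniformity, rms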

    Additional Shape and Size Based Features

    [0152] Below, a number of further shape and size (i.e. geometric) features are described, in addition to those features described as image feature parameter numbers 22 to 29 above, describing the shape and size of the volume of interest. Let V be the volume and A be the surface area of interest.

    [0153] 36. Compactness Second Type (‘Compactness 2’)

    Compactness second type is an alternative image feature parameter that relates to the compactness of the neoplasm considered. This parameter is described by:

    [00025] \mathrm{compactness\ 2} = \frac{36 \pi V^2}{A^3}

    [0154] 37. Spherical Disproportion

    [0155] Where R is the radius of a sphere with the same volume as the tumor, the spherical disproportion is:

    [00026] \mathrm{spherical\ disproportion} = \frac{A}{4 \pi R^2}

    [0156] 38. Maximum 3D Diameter

    This parameter equals the maximum three dimensional tumor diameter measurable from the image.

    Additional Second-Order Gray Level Statistics

    [0157] These additional features are again based on the gray-level co-occurrence matrices. Recalling from the above description of the co-occurrence matrices, let:

    [0158] P(i,j) be the co-occurrence matrix,

    [0159] N_g be the number of discrete intensity levels in the image,

    [0160] \mu be the mean of P(i,j),

    [0161] \mu_x(i) be the mean of row i,

    [0162] \mu_y(j) be the mean of column j,

    [0163] \sigma_x(i) be the standard deviation of row i,

    [0164] \sigma_y(j) be the standard deviation of column j,

    [0165] p_x(i) = \sum_{j=1}^{N_g} P(i,j),

    [0166] p_y(j) = \sum_{i=1}^{N_g} P(i,j),

    [0167] p_{x+y}(k) = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j), \; i + j = k, \; k = 2, 3, \ldots, 2N_g,

    [0168] p_{x-y}(k) = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j), \; |i - j| = k, \; k = 0, 1, \ldots, N_g - 1,

    [0169] HXY1 = -\sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j) \log\left( p_x(i)\, p_y(j) \right),

    [0170] HXY2 = -\sum_{i=1}^{N_g} \sum_{j=1}^{N_g} p_x(i)\, p_y(j) \log\left( p_x(i)\, p_y(j) \right),

    and HX and HY be the entropies of p_x and p_y respectively, as needed for the informational measures of correlation below.

    Then the following further image feature parameters are included in this class; a computational sketch of these auxiliary quantities is given directly below.
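
    A minimal sketch of these auxiliary quantities, assuming a normalized co-occurrence matrix P and 1-based gray level indices, could read:

    import numpy as np

    def glcm_marginals(P):
        """Marginals p_x, p_y, the sum/difference distributions p_{x+y} and
        p_{x-y}, and the entropy terms HXY1 and HXY2 for a normalized GLCM."""
        ng = P.shape[0]
        px = P.sum(axis=1)                   # p_x(i): row sums
        py = P.sum(axis=0)                   # p_y(j): column sums
        i, j = np.indices((ng, ng)) + 1      # 1-based gray level index grids
        pxy_sum = np.array([P[i + j == k].sum() for k in range(2, 2 * ng + 1)])
        pxy_diff = np.array([P[np.abs(i - j) == k].sum() for k in range(ng)])
        eps = np.finfo(float).eps            # guard against log(0)
        outer = np.outer(px, py)
        hxy1 = -np.sum(P * np.log(outer + eps))
        hxy2 = -np.sum(outer * np.log(outer + eps))
        return px, py, pxy_sum, pxy_diff, hxy1, hxy2

    # Example with a uniform normalized co-occurrence matrix.
    P = np.full((4, 4), 1 / 16)
    px, py, ps, pd, h1, h2 = glcm_marginals(P)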

    [0171] 39. Autocorrelation

    [00027] \mathrm{autocorrelation} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} i\, j\, P(i,j)

    [0172] 40. Cluster Prominence:

    [00028] \mathrm{cluster\ prominence} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} \left[ i + j - \mu_x(i) - \mu_y(j) \right]^4 P(i,j)

    [0173] 41. Cluster Shade:

    [00029] \mathrm{cluster\ shade} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} \left[ i + j - \mu_x(i) - \mu_y(j) \right]^3 P(i,j)

    [0174] 42. Cluster Tendency:

    [00030] \mathrm{cluster\ tendency} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} \left[ i + j - \mu_x(i) - \mu_y(j) \right]^2 P(i,j)

    [0175] 43. Difference Entropy

    [00031] difference entropy = -\sum_{i=0}^{N_g - 1} p_{x-y}(i) \log_2[p_{x-y}(i)]

    [0176] 44. Dissimilarity

    [00032] dissimilarity = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} |i - j| \, P(i,j)

    [0177] 45. Homogeneity Second Type

    [0178] This parameter relates to an alternative manner of describing the homogeneity.

    [00033] homogeneity 2 = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} \frac{P(i,j)}{1 + |i - j|^2}

    [0179] 46. Entropy

    [00034] entropy = -\sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j) \log_2[P(i,j)]

    [0180] 47. Informational Measure of Correlation 1 (IMC1):

    [00035] IMC1 = \frac{H - HXY1}{\max\{HX, HY\}}

    [0181] Where H is the entropy of P(i,j), and HX and HY are the entropies of p_x and p_y, respectively.

    [0182] 48. Informational Measure of Correlation 2 (IMC2):


    IMC2 = \sqrt{1 - e^{-2(HXY2 - H)}}

    [0183] Where H is the entropy.

    [0184] 49. Inverse Difference Moment Normalized (IDMN):

    [00036] IDMN = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} \frac{P(i,j)}{1 + \left( \frac{|i - j|^2}{N_g^2} \right)}

    [0185] 50. Inverse Difference Normalized (IDN):

    [00037] IDN = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} \frac{P(i,j)}{1 + \left( \frac{|i - j|}{N_g} \right)}

    [0186] 51. Inverse Variance:

    [00038] inverse variance = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} \frac{P(i,j)}{|i - j|^2}, \quad i \neq j

    [0187] 52. Maximum Probability:


    maximum probability = \max\{P(i,j)\}

    [0188] 53. Variance:

    [00039] variance = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} (i - \mu)^2 \, P(i,j)
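    A compact sketch of a few of these texture measures, reusing the conventions of the previous snippet (illustrative code, not the patent's implementation; scalar marginal means are assumed for cluster tendency):

        import numpy as np

        def glcm_features(P):
            """A few of the second-order features above, for a normalised GLCM P (sketch)."""
            eps = np.finfo(float).eps
            i, j = np.indices(P.shape) + 1            # 1-based gray-level indices
            mu_x = np.sum(i * P)                      # scalar marginal means assumed here
            mu_y = np.sum(j * P)
            return {
                "autocorrelation":     np.sum(i * j * P),
                "cluster_tendency":    np.sum((i + j - mu_x - mu_y) ** 2 * P),
                "dissimilarity":       np.sum(np.abs(i - j) * P),
                "homogeneity2":        np.sum(P / (1.0 + np.abs(i - j) ** 2)),
                "entropy":             -np.sum(P * np.log2(P + eps)),
                "maximum_probability": P.max(),
            }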

    Additional Run-Length Gray-Level Statistics

    [0189] Additional run-length matrix based features are defined below. In the equations below, we denote by p(i,j|\theta) the (i,j)th entry in the given run-length matrix p for a direction \theta. Thus p(i,j|\theta) as denoted in the equations below equals the earlier defined Q_RL(\theta), the run-length matrix in direction \theta.

    [0190] 54. Run Length Non-Uniformity (RLN)

    [00040] RLN = \frac{\sum_{j=1}^{N_r} \left[ \sum_{i=1}^{N_g} p(i,j|\theta) \right]^2}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} p(i,j|\theta)}

    [0191] 55. Low Gray Level Run Emphasis (LGLRE)

    [00041] LGLRE = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} \left[ \frac{p(i,j|\theta)}{i^2} \right]}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} p(i,j|\theta)}

    [0192] 56. High Gray Level Run Emphasis (HGLRE)

    [00042] HGLRE = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} i^2 \, p(i,j|\theta)}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} p(i,j|\theta)}

    [0193] 57. Short Run Low Gray Level Emphasis (SRLGLE)

    [00043] SRLGLE = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} \left[ \frac{p(i,j|\theta)}{i^2 j^2} \right]}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} p(i,j|\theta)}

    [0194] 58. Short Run High Gray Level Emphasis (SRHGLE)

    [00044] SRHGLE = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} \left[ \frac{p(i,j|\theta) \, i^2}{j^2} \right]}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} p(i,j|\theta)}

    [0195] 59. Long Run Low Gray Level Emphasis (LRLGLE)

    [00045] LRLGLE = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} \left[ \frac{p(i,j|\theta) \, j^2}{i^2} \right]}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} p(i,j|\theta)}

    [0196] 60. Long Run High Gray Level Emphasis (LRHGLE)

    [00046] LRHGLE = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} p(i,j|\theta) \, i^2 j^2}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} p(i,j|\theta)}
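    These six emphasis measures follow one template: weight each entry p(i,j|\theta) by powers of the gray level i and the run length j, then normalise by the total number of runs. A sketch, assuming p is an N_g x N_r NumPy array for one direction \theta (the generic helper name is illustrative):

        import numpy as np

        def run_emphasis(p, gray_pow=0, run_pow=0):
            """Generic gray-level / run-length emphasis of a run-length matrix p (sketch).

            gray_pow, run_pow: -2 for 'low/short ... emphasis', +2 for 'high/long ... emphasis'.
            """
            i, j = np.indices(p.shape) + 1.0      # 1-based gray level and run length
            return np.sum(p * i ** gray_pow * j ** run_pow) / p.sum()

        # Examples:
        # lglre  = run_emphasis(p, gray_pow=-2)               # feature 55
        # hglre  = run_emphasis(p, gray_pow=+2)               # feature 56
        # srlgle = run_emphasis(p, gray_pow=-2, run_pow=-2)   # feature 57
        # lrhgle = run_emphasis(p, gray_pow=+2, run_pow=+2)   # feature 60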

    Gray-Level Size-Zone Matrix Based Features

    [0197] A further class of image feature parameters relates to the gray-level size-zone matrix (GLSZM), which is first briefly introduced here. The (i,j)th entry p(i,j) of the GLSZM describes the number of connected areas (zones) of gray level (i.e. intensity value) i and size j. GLSZM features therefore describe homogeneous areas within a tumor volume, characterizing tumor heterogeneity at a regional scale.

    [0198] Take for example the following image with five possible gray levels:

    [00047] I = \begin{bmatrix} 5 & 4 & 4 & 2 & 2 \\ 1 & 4 & 2 & 2 & 5 \\ 1 & 1 & 3 & 3 & 3 \\ 2 & 2 & 3 & 3 & 1 \\ 2 & 2 & 2 & 1 & 1 \end{bmatrix}

    [0199] The resulting GLSZM will then be:

    [00048] p = \begin{bmatrix} 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 \\ 2 & 0 & 0 & 0 & 0 \end{bmatrix}

    [0200] In the above example, rows index the gray level i and columns index the zone size j. Thus p(1,1) equals 0 because there is no zone of gray level 1 with size 1, while p(1,3) equals 2 because there are two connected zones of gray level 1 with size 3. In three dimensions, voxels of the same gray level are considered connected (i.e. belonging to the same zone) if they are part of a 26-connected neighborhood.
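    A sketch of how such a matrix can be built with SciPy's connected-component labelling, using full connectivity (8 neighbours in 2D, 26 in 3D); the example image above can be used to check it. Function and variable names are illustrative:

        import numpy as np
        from scipy import ndimage

        def glszm(image, n_levels):
            """Gray-level size-zone matrix of an integer-valued image (sketch)."""
            p = np.zeros((n_levels, image.size), dtype=int)
            # Full connectivity: 8 neighbours in 2D, 26 in 3D
            structure = np.ones((3,) * image.ndim, dtype=int)
            for level in range(1, n_levels + 1):
                labels, n_zones = ndimage.label(image == level, structure=structure)
                for zone in range(1, n_zones + 1):
                    size = int(np.sum(labels == zone))
                    p[level - 1, size - 1] += 1
            return p

        I = np.array([[5, 4, 4, 2, 2],
                      [1, 4, 2, 2, 5],
                      [1, 1, 3, 3, 3],
                      [2, 2, 3, 3, 1],
                      [2, 2, 2, 1, 1]])
        print(glszm(I, 5)[:, :5])   # reproduces the 5x5 matrix p above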

    [0201] Let: p(i,j) be the (i,j)th entry in the given GLSZM p; N_g the number of discrete intensity values in the image; N_z the size of the largest homogeneous region in the volume of interest; and N_a the number of homogeneous areas in the image.

    [0202] 61. Small Area Emphasis (SAE)

    [00049] SAE = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} \left[ \frac{p(i,j)}{j^2} \right]}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} p(i,j)}

    [0203] 62. Large Area Emphasis (LAE)

    [00050] LAE = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} j^2 \, p(i,j)}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} p(i,j)}

    [0204] 63. Intensity Variability (IV)

    [00051] IV = \frac{\sum_{i=1}^{N_g} \left[ \sum_{j=1}^{N_z} p(i,j) \right]^2}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} p(i,j)}

    [0205] 64. Size-Zone Variability (SZV)

    [00052] SZV = \frac{\sum_{j=1}^{N_z} \left[ \sum_{i=1}^{N_g} p(i,j) \right]^2}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} p(i,j)}

    [0206] 65. Zone Percentage (ZP)

    [00053] ZP = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} p(i,j)}{N_a}

    [0207] 66. Low Intensity Emphasis (LIE)

    [00054] LIE = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} \left[ \frac{p(i,j)}{i^2} \right]}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} p(i,j)}

    [0208] 67. High Intensity Emphasis (HIE)

    [00055] HIE = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} i^2 \, p(i,j)}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} p(i,j)}

    [0209] 68. Low Intensity Small Area Emphasis (LISAE)

    [00056] LISAE = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} \left[ \frac{p(i,j)}{i^2 j^2} \right]}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} p(i,j)}

    [0210] 69. High Intensity Small Area Emphasis (HISAE)

    [00057] HISAE = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} \left[ \frac{p(i,j) \, i^2}{j^2} \right]}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} p(i,j)}

    [0211] 70. Low Intensity Large Area Emphasis (LILAE)

    [00058] LILAE = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} \left[ \frac{p(i,j) \, j^2}{i^2} \right]}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} p(i,j)}

    [0212] 71. High Intensity Large Area Emphasis (HILAE)

    [00059] HILAE = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} p(i,j) \, i^2 j^2}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_z} p(i,j)}
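    Structurally, most of these are the run-length emphasis measures with run length replaced by zone size, so the generic weighting sketch given earlier applies unchanged; the remaining quantities can be sketched as follows (names illustrative, and ZP is written exactly as defined above):

        import numpy as np

        def glszm_features(p, n_areas):
            """A few size-zone features from a GLSZM p; n_areas = N_a (sketch)."""
            i, j = np.indices(p.shape) + 1.0
            total = p.sum()
            return {
                "SAE":   np.sum(p / j ** 2) / total,
                "LAE":   np.sum(p * j ** 2) / total,
                "IV":    np.sum(p.sum(axis=1) ** 2) / total,   # intensity variability
                "SZV":   np.sum(p.sum(axis=0) ** 2) / total,   # size-zone variability
                "ZP":    total / n_areas,                      # zone percentage, as defined above
                "HILAE": np.sum(p * i ** 2 * j ** 2) / total,
            }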

    Intensity Volume Histogram (IVH) Features (PET Only)

    [0213] Where the imaging method used is positron emission tomography (PET), a further class of image features of interest is based on the intensity volume histogram of the PET image. The intensity volume histogram summarizes the complex three-dimensional (3D) data contained in the image into a single curve, allowing for a simplified interpretation.

    [0214] The image feature parameters below relate to relative volumes or intensities (denoted by x), absolute intensities (denoted by y) and/or absolute volumes (denoted by z). Relative steps in volume and intensity (x) were taken in 10% increments; x = {10%, 20%, . . . , 90%}. Absolute steps in intensity (y) were taken in 0.5 SUV increments; y = {0.5, 1, . . . , SUVmax}, where SUVmax is the maximum image intensity value. Absolute steps in volume (z) were taken in 0.5 ml increments; z = {0.5 ml, 1 ml, . . . , V}, where V is the tumor volume.

    [0215] The following image feature parameters may be applied; a computational sketch follows the list.

    [0216] 72. AVAI_y

    [0217] Volume (AV) [ml] above (i.e. with at least) an intensity (AI).

    [0218] 73. RVAI_y

    [0219] Relative volume (RV) [%] above (i.e. with at least) an intensity (AI).

    [0220] 74. AVRI_x

    [0221] Volume (AV) [ml] above (i.e. with at least) a relative intensity (RI).

    [0222] 75. RVRI_x

    [0223] Relative volume (RV) [%] above (i.e. with at least) a relative intensity (RI).

    [0224] 76. AIAV_z

    [0225] Intensity threshold (AI) [SUV] for the z ml highest intensity volume (AV).

    [0226] 77. AIRV_x

    [0227] Intensity threshold (AI) [SUV] for the x% highest intensity volume (RV).

    [0228] 78. MIAV_z

    [0229] Mean intensity (MI) [SUV] in the z ml highest intensity volume (AV).

    [0230] 79. MIRV_x

    [0231] Mean intensity (MI) [SUV] in the x% highest intensity volume (RV).

    [0232] 80. TLGAI_y

    Total lesion glycolysis (TLG) for the volume above (i.e. with at least) an intensity (AI).

    [0233] 81. TLGRI_x

    [0234] Total lesion glycolysis (TLG) for the volume above (i.e. with at least) a relative intensity (RI).
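    As the sketch promised above, here is how a few of these quantities might be computed, assuming a 1-D NumPy array of tumor-voxel SUVs and the voxel volume in ml; array and function names are illustrative assumptions:

        import numpy as np

        def rvai(suv, y):
            """RVAI_y: relative volume [%] with intensity of at least y SUV (sketch)."""
            return 100.0 * np.mean(suv >= y)

        def mirv(suv, x_percent):
            """MIRV_x: mean intensity [SUV] in the x% highest intensity volume (sketch)."""
            n = max(1, int(round(suv.size * x_percent / 100.0)))
            return np.sort(suv)[-n:].mean()

        def tlgai(suv, y, voxel_volume_ml):
            """TLGAI_y: total lesion glycolysis (mean SUV x volume) above y SUV (sketch)."""
            sel = suv[suv >= y]
            return sel.mean() * sel.size * voxel_volume_ml if sel.size else 0.0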

    Wavelet Transforms of Image Feature Parameters

    [0235] The features described in this document may further be utilized by performing a wavelet transform on the images. By taking the wavelet transformed image data as input, in addition to the untransformed image data, a completely new set of image feature parameters is obtained that can be used to derive the predictive value in a method in accordance with the present invention.

    [0236] The wavelet transform effectively decouples textural information by decomposing the original image, in a manner similar to Fourier analysis, into low and high frequencies. For example, a discrete, one-level, undecimated three-dimensional wavelet transform may be applied to a CT image, decomposing the original image X into 8 decompositions. Consider L and H to be a low-pass (i.e. scaling) and a high-pass (i.e. wavelet) function respectively, and label the wavelet decompositions of X as X_LLL, X_LLH, X_LHL, X_LHH, X_HLL, X_HLH, X_HHL and X_HHH. For example, X_LLH is then interpreted as the high-pass sub-band resulting from directional filtering of X with a low-pass filter along the x-direction, a low-pass filter along the y-direction and a high-pass filter along the z-direction, and is constructed as:

    [00060] X_{LLH}(i,j,k) = \sum_{p=1}^{N_L} \sum_{q=1}^{N_L} \sum_{r=1}^{N_H} L(p) \, L(q) \, H(r) \, X(i+p, j+q, k+r)

    [0237] Where N_L is the length of filter L and N_H is the length of filter H. The other decompositions are constructed in a similar manner, applying their respective ordering of low- or high-pass filtering in the x, y and z-directions. Wavelet decomposition of the image X is schematically depicted in FIG. 9. Since the applied wavelet decomposition is undecimated, each decomposition has the same size as the original image and is shift invariant. Because of these properties, the original tumor delineation of the gross tumor volume (GTV) can be applied directly to the decompositions after the wavelet transform.
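    A minimal sketch of one such undecimated, separable decomposition. The Haar filter pair and the circular (wrap-around) boundary handling are illustrative assumptions, not prescribed by the patent:

        import numpy as np

        # Haar low-pass (scaling) and high-pass (wavelet) filters, assumed for illustration
        L_FILT = np.array([1.0, 1.0]) / np.sqrt(2.0)
        H_FILT = np.array([1.0, -1.0]) / np.sqrt(2.0)

        def filter_axis(X, filt, axis):
            """Undecimated circular convolution of X with filt along one axis."""
            out = np.zeros_like(X, dtype=float)
            for p, coeff in enumerate(filt):
                out += coeff * np.roll(X, -p, axis=axis)   # sums coeff * X(... + p ...)
            return out

        def wavelet_decompositions(X):
            """All 8 one-level 3D decompositions X_LLL ... X_HHH of a 3D image (sketch)."""
            bands = {}
            for name in ("LLL", "LLH", "LHL", "LHH", "HLL", "HLH", "HHL", "HHH"):
                out = X.astype(float)
                for axis, ch in enumerate(name):   # filter the x, y and z axes in turn
                    out = filter_axis(out, L_FILT if ch == "L" else H_FILT, axis)
                bands["X_" + name] = out           # same shape as X: shift invariant
            return bands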

    [0238] The wavelet transform is a time-frequency transformation based on a wavelet series. Wavelet series are a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet; this representation is performed on a Hilbert basis defined by orthonormal wavelets. The wavelet transform provides information similar to the short-time Fourier transform, but with the additional property that the time resolution improves at higher analysis frequencies of the basis function. Wavelet transforms thus provide both the frequencies present in a signal and the times at which those frequencies occur. High-low-high filtering refers to data analysis methods relying on wavelet transforms to detect certain activity or variation patterns in the data, the high-low-high sequence being indicative of the wavelet shape. The transform is applied directly to the raw CT image.

    [0239] In the above description, the invention has been described with reference to some specific embodiments thereof. However, it will be appreciated that the present invention may be practiced otherwise than specifically described herein in relation to these embodiments. Variations and modifications of specific features of the invention may be apparent to the skilled reader and are intended to fall within the scope of the invention, which is limited only by the appended claims.